AI Artifacts: An interview with Ruben Fro and Benjamin Bardou

February 12, 2025

In honor of this year’s AI Action Summit, we partnered with the Bibliothèque Nationale de France (BnF), Fisheye Immersive, Convergence, and Institut Montaigne to commission two open source AI-generated art installations by innovative artists Ruben Fro and Benjamin Bardou. Presented during the Cultural Weekend organized by the Ministry of Culture, with the support of the Ministry for Europe and Foreign Affairs in France, these works demonstrate the potential of open source AI as a new medium and creative tool.

Tokyo-based VFX artist Ruben Fro and Mehdi Mejri, immersive art and culture director at Fisheye, co-directed Deep Diving, an audiovisual piece built on Meta’s open source models, including SAM 2. The piece visualizes the inner workings of the TAD, the BnF’s unique book and document delivery system. The film was projected onto one of the library’s towers, displayed inside the BnF, and shown on an LED screen outside the library. Deep Diving will be on display for one week in celebration of the AI Action Summit.

In the second project, Memories of Paintings, French visual artist and filmmaker Benjamin Bardou used Llama to create reinterpretations of notable works by Edgar Degas, reflecting on the past and future of art history. Memories of Paintings premiered at NEO 612, an immersive showroom event—half technology, half art—hosted by Convergence and the Institut Montaigne. It will be presented at the Musée des Arts et Métiers.

We sat down with both Benjamin Bardou and Ruben Fro to learn more.

When did you first begin incorporating AI into your artwork?

Ruben Fro: I’ve been following AI advancements for quite a while, looking at its growth and how it could be integrated into my creative process. More than generative imagery or video, my first real use of AI was in coding assistance, helping me speed up the development of volumetric effects in the graphic shaders used in my artwork. It streamlined experimentation, allowing me to focus more on the creative vision rather than trial and error.

From there, I explored AI in depth map generation, text-to-3D conversion, and other ways to enhance volumetric visuals, always looking at AI as a tool to drive and shape aspects of creation rather than replace the artistic process itself.

Benjamin Bardou: I first used AI as if I were a flâneur in a city. I wandered through latent space as if I were strolling through the streets of a fictional city. I realized that I had the same curiosity in exploring the urban space as I did in navigating the latent space. Thus, wandering was my first experience with AI.

My work revolves around the themes of the city and memory. I attempt to shape memories of cities I have explored in recent years through 3D scanning techniques and point clouds. For this project, I wanted to experiment by replacing the urban motif with latent space, specifically by exploring the works of an artist I deeply admire: Edgar Degas.

What can you tell us about your AI-generated reinterpretations of Degas’s works and the inspiration behind them?

Benjamin Bardou: The idea here is to explore Degas’s pictorial space through artificial imagination. It is not about reproducing the artworks themselves but rather their memories.

When we recall a painting like The Mona Lisa or Luncheon on the Grass, the image that forms in our minds is not quite identical to the painted work. This gap is what interests me. It closely resembles the approximation found in AI-generated images.

Memories of Paintings thus attempts to represent the memory of an aesthetic experience in contact with pictorial material.

That’s fascinating. And what about you, Ruben? What can you tell us about Deep Diving and the inspiration behind it?

Ruben Fro: With Mehdi Mejri, the immersive art and culture director at Fisheye, we aimed to craft a journey into the hidden mechanics of knowledge transmission, as seen through the lens of the BnF’s TAD system. This network of robotic carriers, which transports books across the library’s vast archive, acts as a silent guardian of human knowledge, despite its seemingly lifeless, mechanical nature.

For my visual sequences, I aimed to visualize how a machine might perceive the world around it. Using volumetric captures, I recreated the environment as fragmented, glitching, and ever-evolving, like a raw digital perspective of reality.

The role of AI in the installation is to bridge the gap between machine logic and human emotions, transforming indexed data into an experience that feels alive, poetic, and introspective.

Was this the first time you’ve used SAM 2 in your work?

Ruben Fro: Yes, this was my first time integrating Segment Anything 2, and it’s been fascinating to see how these tools open up new creative possibilities. I see this as just the beginning—there’s still so much potential to explore in how these AI models could be used to interact with volumetric data, enhance real-time rendering, and push storytelling beyond what’s currently possible.

And you, Benjamin? To what extent have you integrated AI into your workflow?

Benjamin Bardou: I use AI at the very beginning of my creative process. The images generated through text-to-image models are part of a research process, allowing me to get closer to what I perceive as Degas’s aesthetic. The goal here is to align my personal memory of an artist’s works with the collective memory that constitutes the latent space.

These images are then incorporated into an advanced particle system. For Memories of Paintings, this system had to be adapted by working on the texture of paint strokes moving within the frame. Once again, the goal was not to exactly replicate Degas’s palette or brushstrokes but rather to convey the aesthetic sensation I feel when I closely examine one of his paintings or oil pastels.

Did either of you encounter any obstacles when integrating AI into your workflow? If so, how did you overcome these challenges?

Benjamin Bardou: The first obstacle one encounters is the gap between the image one has in mind and the one that AI produces. I believe I turned this limitation into an advantage by centering my theme on the reconstruction of memory.

Ruben Fro: As with many AI tools, there’s still a setup process and development work involved, which can be a challenge when trying to integrate them seamlessly into an artistic workflow. AI models are incredibly powerful, but they often require fine-tuning, carefully structured inputs, and outputs adapted to fit a specific creative vision.

That being said, these challenges are temporary. AI tools are evolving rapidly, and soon, integrating them into artistic workflows will be as intuitive as using Photoshop or 3D rendering software. It’s just a matter of time before this technology becomes fully accessible to creators across disciplines.

A question for you, Ruben: How, if at all, does Deep Diving relate to your earlier work, including but not limited to Dissolving Realities?

Ruben Fro: Dissolving Realities was an exploration of how we could perceive and reconstruct reality in a different way through pure data, using photogrammetry and volumetric capture. With these 3D datasets, I could slice fragments of the real world and reconstruct them in a way that blurred the line between the physical and the digital.

The sequences in Deep Diving are an evolution of that idea, but this time, the integration of AI into the creative process allows for a deeper connection between digital, machine-driven logic and fuzzy, imprecise human perception.

The installation explores how structured, mechanical systems (like the TADs) can still carry a poetic sense of motion and emotion, making it a reflection of how AI, automation, and data interact with human culture.

What about you, Benjamin? Is there a connection between Memories of Paintings and your previous works?

Benjamin Bardou: This exploration of painting memories is a direct extension of my work on urban themes. Ultimately, what interests me is capturing the shape and texture of memory.

What role do you think new technologies like AI, as well as augmented and virtual reality, should play in the world of fine arts?

Benjamin Bardou: I approach my artistic practice as research on form. In my case, what matters most is discovering or creating new forms. I explore and use Artificial Imagination today for that purpose.

Ruben Fro: AI is fundamentally changing the way we create, interpret, and interact with art, research, and innovation. It’s allowing us to discover new patterns, accelerate workflows, and push beyond the constraints of traditional processes. This means seeing AI not just as a tool for production, but as a co-creator: something that helps us visualize concepts, find new ideas, manipulate data, and create in ways that weren’t possible before.

AR and VR are the next major frontiers, and while they don’t yet receive the same attention as AI, they are on the verge of a significant breakthrough. We’re still in the early days, and hardware isn’t quite there yet, but rapid advancements (like Meta’s Orion) are showing us that soon, immersive digital experiences will be seamlessly integrated into our daily lives and accessible through something as simple as a pair of glasses.

We’re at a turning point where AI, AR, and VR will begin to merge with everyday experiences. AI already powers the way we search, write, and interact with information, improving research, creativity, and problem-solving. It’s already being used in major fields like healthcare and scientific research, and its importance will only continue to grow.

AR will introduce a spatial way of interacting with these new paradigms in daily life. The physical and augmented worlds will begin to blend seamlessly, opening up new possibilities for creativity, communication, and productivity.

My work often explores the relationship between human perception and digital systems. In Deep Diving, for example, we transformed an automated book transport system into something poetic, an AI-driven experience that reveals the hidden rhythm of knowledge transfer. It’s a small example of how machines and data aren’t just tools, but evolving entities that shape our interactions, emotions, and understanding of the world.

AI isn’t just about replacing tasks—it’s about unlocking new ways of seeing, creating, and connecting with the data that surrounds us.

Anything else you’d like to share with our readers?

Ruben Fro: We’re only scratching the surface of what’s possible. AI, AR, and immersive storytelling are evolving at an insane pace, and as artists, we have a unique opportunity to shape how these technologies are used, not just in creative fields, but in how people experience the world.

