Autolume: Post-Photographic Cybernetic Portraiture

When Analog Film Got Freaky with AI at SFU’s Wosk Centre for Dialogue’s ‘Dialogue on Technology’ Project


Something remarkable is happening at the SFU Wosk Centre for Dialogue. It’s not just another art exhibition or academic symposium—it’s a meeting point where human creativity collides with artificial intelligence to redefine the boundaries of art, identity, and technology. Welcome to Autolume: Post-Photographic Cybernetic Portraiture, an experiment that distills decades of analog photography into the surreal visions of a generative AI.

With my 20-year archive of thousands upon thousands of analog film portraits as the foundation, and the neural network wizardry of SFU’s Metacreation Lab for Creative AI’s incredible Autolume Neural Visual Synth as the engine, we’re not just making art. We’re teaching machines to dream. And in those dreams, we glimpse the future of creativity, identity, and collaboration.


The Squad Behind the Magic

This isn’t a solo act—it’s a symphony of talents where analog soul meets computational brilliance. At the helm is Philippe Pasquier, conducting this cyber-orchestra with his visionary approach to creative AI.

One of my favorite experimental computational artists, Lionel Ringenbach (AKA Ucodia.space), took the lead in crafting real-time interactivity, transforming raw data into fluid, dynamic experiences.

Arshia Sobhan, our machine learning maestro, taught the AI to not just process but truly understand the language of film photography.

When these worlds collide—my archive of thousands of analog portraits blending with their cutting-edge tools—the result isn’t just collaboration; it’s combustion. Together, we’re creating something that doesn’t just bridge disciplines but redefines them, pushing the boundaries of what art, technology, and identity can mean in the digital age.


What’s Autolume Anyway?

Picture this: a neural network on a creative vision quest. We built this thing called Autolume – part machine, part dream engine. At its heart is a GAN (that’s Generative Adversarial Network for the nerds out there), basically two AIs locked in an eternal art battle. One’s creating, one’s critiquing, both pushing each other into weird new territories.
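For the code-curious, here’s that art battle boiled down to a toy sketch. This is not the Metacreation Lab’s actual Autolume training code, and the network shapes and sizes are made up for illustration, but the creator-versus-critic loop is the real heartbeat of a GAN:

```python
# Toy sketch of the adversarial dynamic: G invents images, D critiques them.
# Sizes and architectures are illustrative placeholders, not Autolume's.
import torch
import torch.nn as nn

LATENT, IMG = 64, 32 * 32  # hypothetical latent and image dimensions

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    b = real_batch.size(0)
    # Critic's turn: learn to tell archive photos from fabrications.
    fake = G(torch.randn(b, LATENT)).detach()
    loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Creator's turn: learn to fool the critic.
    fake = G(torch.randn(b, LATENT))
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# Stand-in "film scans": random tensors where the real archive would go.
for step in range(100):
    train_step(torch.rand(16, IMG) * 2 - 1)
```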

The real magic happens in what we call “latent space” – imagine an infinite canvas where every point is a possible face, a possible reality. We’ve got vector databases cataloging all these possibilities, letting us surf through identity space like some kind of cyberpunk portrait artists.
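To make that “surfing” concrete: every face lives at a point (a latent vector), and sliding between two points morphs one identity into another. Here’s a hedged sketch using spherical interpolation; the generator below is a stand-in placeholder, where the installation uses the network trained on my film archive:

```python
# Hedged sketch of "surfing" latent space via spherical interpolation.
import torch
import torch.nn as nn

LATENT = 64  # hypothetical latent dimensionality
G = nn.Sequential(nn.Linear(LATENT, 32 * 32), nn.Tanh())  # stand-in generator

def slerp(a, b, t):
    """Walk the great-circle arc between two identities."""
    a_n, b_n = a / a.norm(), b / b.norm()
    omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))
    return (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)

z_a, z_b = torch.randn(LATENT), torch.randn(LATENT)  # two possible faces
frames = [G(slerp(z_a, z_b, t)) for t in torch.linspace(0, 1, 30)]
```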


The Live Demo: A Step Into AI-Driven Creativity

At the Wosk Centre, our demo isn’t just about viewing art—it’s an immersive, interactive experience that puts visitors at the center of the creative process. Here’s how it unfolds:

Step 1: Capturing Your Image

Visitors start by having their portraits captured using a carefully calibrated setup designed for sharp, well-lit images. A face-tracking system aligns your features, ensuring the input image is framed just right. This step minimizes noise and ensures optimal results when processed through the AI model.
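If you’re curious how face alignment works under the hood, here’s a minimal sketch of the idea using OpenCV’s stock face detector. Our gallery rig is more carefully calibrated than this, so treat the margins and sizes as illustrative assumptions rather than our exact setup:

```python
# Minimal face-alignment sketch: detect the face, centre it, crop square.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def aligned_crop(frame, out_size=512, margin=0.4):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found: skip this frame
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    pad = int(max(w, h) * margin)                       # breathing room around it
    cx, cy, half = x + w // 2, y + h // 2, max(w, h) // 2 + pad
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    crop = frame[y0:cy + half, x0:cx + half]
    return cv2.resize(crop, (out_size, out_size))

cam = cv2.VideoCapture(0)
ok, frame = cam.read()
if ok:
    face = aligned_crop(frame)
```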

Step 2: Projection Into the Neural Network

Your portrait is instantly fed into a neural network trained on a 20-year archive of analog photography. Behind the scenes, the AI begins its work, analyzing and reinterpreting your likeness. Two computational models—one generating, one critiquing—collaborate in real time to produce a reimagined version of your image. This isn’t just a copy—it’s a reinterpretation rooted in patterns and latent connections the AI has discovered across thousands of portraits.
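“Projection” sounds mystical, but the core mechanic is an optimization loop: start from a random latent vector and nudge it until the generator’s output resembles your photo. This hedged sketch shows only the bare idea; production pipelines such as StyleGAN-style projection add perceptual losses and other refinements:

```python
# Bare-bones latent projection: find the z whose generated image
# best matches the target photo, by gradient descent on z itself.
import torch
import torch.nn.functional as F

def project(G, target, latent_dim=64, steps=500, lr=0.05):
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(G(z), target)  # how far is the dream from the photo?
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()  # the point in identity space "closest" to this face
```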

Step 3: Dynamic Transformation

The magic comes to life as your reimagined portrait is projected onto a square display in a looping animation. This animation is more than just an image—it’s a living, evolving creation that melds the tactile qualities of analog film with the boundless possibilities of AI artistry.
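One simple way to make an animation loop seamlessly is to trace a closed circle in latent space around the projected point, so the last frame flows straight back into the first. A sketch under that assumption (the two direction vectors here are arbitrary, illustrative choices):

```python
# Looping latent walk: a closed circle around z in a random 2D plane.
import math
import torch

def loop_frames(G, z, n_frames=120, radius=0.3):
    a, b = torch.randn_like(z), torch.randn_like(z)  # plane spanning the loop
    frames = []
    for i in range(n_frames):
        theta = 2 * math.pi * i / n_frames  # theta wraps to 2*pi: seamless loop
        z_i = z + radius * (math.cos(theta) * a + math.sin(theta) * b)
        frames.append(G(z_i))
    return frames
```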

Step 4: Sharing the Results

The experience doesn’t stop at the display. Visitors can opt to have their portraits shared online, adding them to a growing gallery of cybernetic art. Those interested in taking their portraits home can explore options for downloading animations or receiving stills, making the art both personal and communal.


What Makes This Unique?

The seamless integration of live photography, real-time AI processing, and dynamic output ensures that every visitor not only observes the art but becomes an active participant. The setup also emphasizes consent and personalization, giving attendees control over how their portraits are shared and celebrated.

The demo bridges the gap between analog nostalgia and AI-driven innovation, turning a simple act—having your photo taken—into an exploration of identity and creativity in the digital age.


Teaching Machines to See Like Artists

Our GANs aren’t just mimics; they’re visionaries. Instead of merely replicating faces, they delve into the latent space of my archive, unearthing patterns and possibilities I could never have imagined. The machine approaches my 20-year collection of analog portraits with an alien curiosity, reinterpreting their essence through the lens of neural creativity.

When we project someone’s portrait into this system, it’s as if we’re posing a deeply philosophical question: “What does this human look like in your reality?” The machine responds with outputs that feel simultaneously intimate and otherworldly—a fusion of digital abstraction and analog soul.

The results aren’t static reproductions but dynamic reimaginings: faces rendered as hybrid identities, alive with the imperfections of film grain yet reshaped by the infinite possibilities of computational artistry. It’s a process that transcends art and technology, turning pixels into poetry.


Redefining Creativity for the Digital Age

This is a new frontier for photography, where capturing moments is just the starting point. We’re transforming still images into dynamic identities, shaped by the convergence of human creativity and algorithmic insight. These aren’t just portraits—they’re evolving constructs, fluid and alive, continuously reinterpreted through the lens of AI.

The result is a bold new artistic language that fuses the imperfections of analog photography with the infinite possibilities of machine intelligence. This is neither traditional photography nor purely digital art—it’s something entirely new. It’s hybrid art, a collaboration between human vision and computational imagination that challenges the very definition of what art can be.


Navigating the Challenges

Let’s get real—teaching a machine to appreciate the nuanced beauty of film grain and the organic imperfections of light leaks is no small feat. It took countless hours of tweaking, testing, and reimagining to help the AI grasp what makes analog photography special—the warmth, the texture, the unpolished humanity embedded in each frame.

Making this all work in real time was another monumental challenge. From managing the technical demands of live projection to ensuring seamless interaction, every aspect required precision and persistence. But technical hurdles weren’t the only ones we faced.

Consent and transparency were equally important. Every participant in this experiment understands they’re contributing to a larger artistic and philosophical exploration. Ensuring everyone felt informed and valued wasn’t just a checkbox—it was a cornerstone of the project. After all, this isn’t just about creating art; it’s about fostering trust in the creative process and respecting the role of every individual involved.


Redefining the Meaning of Creativity

This project isn’t just about crafting eye-catching visuals—it’s about reshaping the very fabric of artistic expression. By making machines our collaborators rather than mere tools, we’re cracking open a new realm of possibilities where the boundaries of art expand into uncharted territories.

What does it mean to democratize creativity? It means that anyone, regardless of technical expertise, can tap into the vast potential of AI to create something extraordinary. This isn’t just a technological leap—it’s a cultural shift. When machines contribute as creative partners, we’re no longer confined to traditional notions of authorship or artistry.

But with this evolution comes profound questions: Who owns the result of an AI-assisted creation—the artist, the machine, or the data it was trained on? And as we begin to feed these systems with emotions, memories, and subjective experiences, what new dimensions of creativity—and ethical dilemmas—will emerge?

This is where the real conversation begins: exploring how AI doesn’t just expand what art can be, but how it redefines the role of the artist, the observer, and the medium itself.


The Big Picture

Listen, I’ve been in the tech game since we were building websites with GeoCities. I’ve seen hype cycles come and go. But this feels different. We’re not just making new tools – we’re creating new ways of seeing ourselves and each other.

Autolume isn’t the destination – it’s just the beginning. The real story isn’t what I created – it’s what we might all create next. The canvas is infinite now, the possibilities are endless, and reality just keeps getting weirder.

And yeah, sometimes the machine’s ideas are strange as hell. But isn’t that what art is supposed to be? Let’s get weird together and see what we can dream up next. The future isn’t human OR machine – it’s both. And it’s already here, one cybernetic portrait at a time.


Project Description

Title: Autolume: Post-Photographic Cybernetic Portraiture

Credits:

• Art direction, dataset creation: Kris Krüg
• Art direction: Philippe Pasquier
• Interaction design and development: Ucodia
• Model training and early exploration: Arshia Sobhan

Medium: Neural network-based generative AI synthesizer trained on analog portrait photography.

Description: Autolume bridges analog heritage with computational innovation, creating art that redefines boundaries and delves into the fluidity of identity in the digital age. Using the Autolume Neural Visual Synth, developed by the Metacreation Lab for Creative AI at Simon Fraser University (SIAT), this project trained a generative AI on a 20-year archive of analog film portraits by photographer Kris Krüg. The result is a series of entirely new artistic expressions that blend the tactile essence of the past with the limitless potential of AI.

This collaborative project showcases the beautiful chaos of merging analog nostalgia with AI futurism. The Autolume Neural Visual Synth interprets and recreates human faces, including those of the audience, generating dynamic, hybrid imagery that challenges conventional artistic norms while embracing the fluidity of digital identity.

Interactive Element: Visitors’ pictures are taken and then projected into the latent space of Kris Krüg’s model and added to a video loop.

Bios

Kris Krüg is a Vancouver-based photographer, artist, and AI community leader. As the CEO of Future Proof Creatives and a long-time collaborator with the Metacreation Lab for Creative AI at Simon Fraser University (SIAT), Kris pushes the boundaries of creative expression by blending analog photography with cutting-edge AI tools. His work explores the intersections of technology, art, and identity, leveraging over 20 years of photography to train generative AI models with the Autolume Neural Visual Synth, transforming traditional artistic practices into futuristic, hybrid forms.

Philippe Pasquier is a multidisciplinary artist focused on the theory and practice of Creative AI. Whether as a musician, a composer, or a creator of works for dance and interactive art, Philippe is interested in exploring the partial or complete automation of creative tasks. He is a professor at the School of Interactive Arts + Technology at Simon Fraser University in Vancouver, Canada, and the principal investigator behind Autolume.

