Selection of Extended Report Application, Jan - Feb 2025
I. Conceptual Space
Project outline
Chance.operator is an artistic research project that integrates algorithmic chance into the creation of visual art through computer-controlled machines. Unlike conventional generative AI tools, the project focuses on embodiment and audience interaction, making the creative process an intuitive experience between people and generative AI systems. It embraces unexpected outcomes produced by random and controlled variables in image and text generation, using algorithms as the primary creative and performative force.
The project explores the aesthetic and ethical dimensions of collaborating with AI, addressing questions of agency, authorship, and control. The main research question is:
- Is it possible to use chance-based interventions to expose biases embedded in pretrained models of AI systems, and does this give insight into ethical concerns?
The research delves into a search for intuitive interfaces, extended embodiment and queer expression of the machine. Through reflection and play, it invites participants to engage with new dynamics of creativity, co-creation and a critical hacking mindset.
The key outcomes are sensor-driven instruments that allow the audience to influence randomness intuitively. These instruments guide the audience and AI in real-time creation, blending spontaneity with deliberate interaction to shape the collaborative process.
Within this report you will read reflections in the following order:
I. The Conceptual Space
II. Artistic Research (divided into 3 modes: surrealistic games, embodied agents and AI instruments), including downloads of my developed software and links to AI tools
III. Implementation in Model Collapse
IV. Answers to my Research Questions
V. Reflections and Continuations of the Project
Extended motivation and approach
Writing this research proposal, I kept circling back to the anti-art-art movement of the early 1960s, feeling there was something to be learned from Fluxus to investigate generative AI systems. I couldn't completely grasp the motivation at first – through hands-on experimentation, reading, and making, I refined my approach and laid the foundation for a possible bigger project. This extended introduction outlines how these elements align in my artistic research and why they help in investigating the effects of generative AI.
Fluxus (and anti-art-art) taught us that everyone can be an artist – not just the elite. It embraced everyday objects, randomness and chance to expose invisible structures. The movement rejected the commodification of art and challenged the notion of artistic genius with loud, simple performative actions, emphasizing process over product.
Artists like Yoko Ono and Alison Knowles subverted artistic norms through event scores and recipes. Ono’s Cut Piece (1964) used her body as a site of negotiation, inviting audiences to cut away her clothing, turning both vulnerability and violence into collective artistic practice. Knowles’ Make a Salad (1962) turned an ordinary domestic act into an avant-garde performance, democratizing artistic labor.
Simple instructions that anyone could follow to create art parallel today's prompt engineering for AI systems. Prompts use common language and concepts to guide AI generation, providing accessible instructions that reduce technical barriers.
Both anti-art(-art) and generative AI embrace algorithmic or systematic approaches to creation. But AI art's subversion is more incidental – it's a byproduct of the technology rather than an intentional philosophical stance. This tends to stay invisible in glossy, conforming interfaces and outcomes that mimic stock photos.
"The current development of AI is purely accidental and full of biases." – Liu Ting Chun and Leon-Etienne Kühr
Seeing the current state of AI as purely accidental
At this peak of AI hype, it's tempting to look at the technological development of AI systems as a linear process, but it isn't. Liu Ting Chun and Leon-Etienne Kühr suggest that the current state of AI is purely accidental—we roll with what works. The dominance of diffusion models over GANs wasn't planned; it's just where we landed.
Proof can be found in this infographic about text-to-image models: all the applications we know go back to one model, CLIP, and one dataset, LAION (400M, 2B, 5B). Only one model and one dataset are used for most text-to-image applications.
These are the backbones of all the applications we know, and this too is accidental – it could have been any other model. But what it means is that the bias we see inside them is the bias of most image generation models.
What is in this dataset and what is this model?
Erasure and Politics of Filtering
If the generated image is noise, it has not been prompted, and it will be refined again in a diffusion model that scries for ontological security. You can look at the outcomes as infographics of the dataset: central stereotypes that are the most common, patterns that are engraved, while the edges—deviations, anomalies—get erased. This eradication is political.
Consider the implementation of CLIP's not-safe-for-work filter in Stable Diffusion. If the threshold for any of its seventeen keywords—among them sexual, nude, sex, 18+, naked, nsfw, porn, dick, vagina, explicit content, uncensored, fuck, nipples, naked breast and areola—is reached, the NSFW filter is triggered. But these words mean nothing semantically—they are just numbers in a correlation matrix.
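Mechanically, such a filter can be sketched as nothing more than similarity scores against a list of concept vectors. The minimal illustration below uses toy vectors and thresholds (not Stable Diffusion's actual values) to show how "just numbers in a correlation matrix" end up censoring images:

```python
import math

# Toy sketch of a CLIP-style safety check: each banned "concept" is a
# vector, an image embedding is compared to every concept by cosine
# similarity, and the filter fires when any similarity crosses that
# concept's threshold. Vectors and thresholds are invented, not the
# real Stable Diffusion values.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

CONCEPTS = {                        # concept -> (toy embedding, threshold)
    "nude":     ([0.9, 0.1, 0.0], 0.8),
    "explicit": ([0.0, 1.0, 0.2], 0.8),
}

def nsfw_triggered(image_embedding):
    """Names of concepts whose similarity exceeds their threshold."""
    return [name for name, (vec, thr) in CONCEPTS.items()
            if cosine(image_embedding, vec) > thr]
```

The point of the sketch: the filter never "reads" a word like nude; it only compares positions in an embedding space, so the moral judgement encoded in each threshold is invisible at the moment of generation.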
This echoes historical modes of classification, from Victorian moral taxonomies to the FBI's Lavender Scare, which algorithmically tracked queerness as a security threat. Today, this mode of ghostly filtering extends into AI, where bodies, identities, and intimacies are invisibly censored, reflecting not universal ethics but the biases of those programming and funding these systems. The filter is always skewed towards the company that makes a profit out of your free labour.
Charlotte Moorman, whose Opera Sextronique (1967) performance led to her arrest for lewdness, was subject to similar invisible boundaries—only then it was the law, now it’s an algorithmic moralization masquerading as objectivity.
Scoring systems are embedded into every aspect of life—credit scores, academic rankings, social media engagement metrics—all shaping access, visibility, and control. Like event scores in Fluxus, these numbers dictate action, but instead of open-ended creativity, they serve as rigid gatekeepers. A variety of inputs are boiled down to a single number—a superpattern—which may be a threat score, the not-safe-for-work score or a social sincerity score. It seems like everyone has their own algorithm; it varies where it doesn't matter, but comes in where it does.
Yet the input parameters remain opaque, reinforcing systems of power under the guise of objectivity, and so stepping away from the responsibility of changing the system.
Classification in itself echoes the archivist: classical values are ingrained in what has been archived and how. As Eryk Salvaggio claims, generative AI is digital humanities in reverse. The decisions of archivists are quantified, requalified, and these judgements are reanimated through this technology.
James Bridle, in Ways of Being, critiques the language used to describe the data processes needed to create AI. The language used to separate signal from noise is surprisingly pastoral: data is farmed and harvested, knowledge is mined and extracted. This framing makes it seem like a natural process rather than a deliberate structuring of power. It is important to see that these processes are the result of specific human choices with significant impacts.
The so-called cloud is deeply rooted in the Earth's physical resources. Kate Crawford highlights how AI systems rely on extensive extraction of minerals and energy, leading to significant environmental and ethical implications. But who is going to take accountability for these unfair infrastructures and interconnected domains that echo colonial power structures?
Who decides on signal versus noise?
Chance vs Decision-making
Text-to-image AIs and large language models are not a decision-making technology; they are a decision-removing technology. They create text to fill the space where a decision is needed. They generate text, but most powerfully, they generate pretext and images. There is an embedded lack of novelty, newness and risk in AI because it makes decisions based on recognition. It makes what it can recognise instead of what you make of it. This is the opposite of artistic practice. Making art is about decision-making; AI is about probabilistic approximation.
AI outputs are blends of the most statistically likely components—clusters of the most common, overdetermined elements. This is why AI-generated art so often looks like stock photography, an uncanny arrangement of clichés. Making decisions is what artistry is, and making decisions is only possible if you fully understand the technology. We need new artistic strategies – because AI tools are politicised tools, extensions of broader forces, but that does not mean that's all they can be. By taking them apart, reassembling them, or chaining them together in different ways, deeper truths can be revealed, and other futures envisioned.
Learning from Noise Art
In an era of overwhelming information, AI acts as a filter, regulating signal and noise. But someone is making decisions about what is filtered out and what is not. Generative AI is a noise principle in a system: quite literally, from noise we generate new things, which we add again to the wall of signal.
But what happens when artists work with noise instead? Alvin Lucier’s I Am Sitting in a Room (1969) reveals space by recording and re-recording sound until only the resonances of the room remain. Due to the room's particular size and geometry, certain resonant frequencies are emphasized while others are attenuated. Similarly, artists like Laurie Spiegel and Pauline Oliveros explored the generative potential of sound algorithms, revealing structures by pushing against them. Transposed to AI, this method of iteration—feeding outputs back into the system, distorting and reprocessing— exposes the resonances of generative models, mapping their edges by seeing them fail.
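Transposed into code, this Lucier-style iteration is simply a feedback loop run until it reaches a fixed point. In the sketch below, a coarse quantizer stands in for a generative model re-processing its own output; the transform and values are invented for illustration:

```python
# Feed a signal through the same lossy transform until nothing changes:
# the fixed point is the "resonance" of the system, everything else is
# attenuated away. Quantization stands in for a model's re-processing.

def quantize(signal, step=0.25):
    return [round(x / step) * step for x in signal]

def iterate_until_fixed(signal, transform, max_rounds=50):
    for n in range(max_rounds):
        out = transform(signal)
        if out == signal:          # converged: only the residue remains
            return out, n
        signal = out
    return signal, max_rounds

residue, rounds = iterate_until_fixed([0.13, 0.49, 0.77, 1.02], quantize)
# every value has snapped to the grid the system "prefers"
```

Just as the room in Lucier's piece keeps only its own resonant frequencies, repeated re-processing keeps only what the model's statistics favour: the map of its edges is drawn by watching what survives.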
Legacy Russell’s Glitch Feminism argues that glitches are sites of resistance and transformation—queerness is found in the breaks, the disruptions. If AI operates as a smooth, frictionless machine of recognition, then the glitch is where its edges fray, revealing its seams. This is where the work of AI noise practitioners becomes crucial.
AI Noise Praxis
Generative AI thrives on variability, yet is confined by its statistical core. While it recombines, it rarely invents. The promise that AI can create wholly new visions remains debatable—it generates difference, but not radical otherness. Artistic intervention requires grasping this structure and pushing against it.
This is why we need noise practitioners—art that embraces errors, artifacts, and feedback loops to make AI’s inner workings visible. Inspired by Fluxus playfulness, artists can probe AI’s edges, using failure as a method. Noise, glitch, and chance are critical tools for this practice. The goal is not to make AI-generated art more human-like but to uncover the ghostly traces of its algorithmic logic.
Like Moorman’s radical performances, Ono’s instructions, or Oliveros’ deep listening, an AI noise praxis reorients power, exposing not just how generative AI functions but who it serves and what it silences. It is a methodology of counter-mapping—a way of refusing seamlessness and instead embracing the glitch, the failure, the signal bleeding into noise. The task for artists is clear: not to generate frictionless machine-produced aesthetics but to reveal, distort, and disrupt the structures that make them possible. We don’t need AI to make art for us. We need art that makes AI uncomfortable.
II. Artistic Research and Play
Mode 1 of my research was to develop new methodologies to interact with generative AI (and computer-controlled machines) in meaningful ways.
I have made an (open source) toolkit with digital applications that help myself and others to think, play and reflect on shared authorship with machines in artistic processes. Similar to scores, the games provide instructions or recipes for algorithms and humans, encouraging collaboration between human and computational creativity and processes.
By integrating randomness and participatory elements in the games, there is a place to explore the intentionality and (un)predictable nature of algorithmic creation; they are meant to be played with machines.
image: inspiration from the Fluxus movement's recipes and scores. "Distance" by Ken Friedman, 1971
Playing these games with an audience proved a good warm-up for larger programs and workshops about AI co-creation. They activate the player to think about shared authorship. Playing them with a group of human and non-human actors in a workshop helps loosen up participants' ideas of co-creation, by designing specific moments to hand over control, allow randomness and actively observe. They can form a fun and bonding experience.
Within this mode I tried to embrace chance and translate digital feedback loops and noise methods into analogue games that you can play in groups. They are simplified versions of artistic strategies using AI that mainly help players open up their ideas and preconceptions, and introduce them to egoless art and the practice of coding and decoding.
It helps tap into the unconscious mind and reveal hidden truths by bypassing rational thought and conventional norms.
The noise the AI generates suddenly becomes clear, and it created an environment to reflect upon these go-to aesthetics and responses. Somehow the highly technical games worked as well as the less technological ones; the simpler the game, the better. In my process I also came across a bunch of AI icebreaker games by AI x Design and other makers that I find useful in workshop settings.
To create an environment that gives more context, I developed the workshop Scrying Through AI with Ymer Marinus, which I have given four times since January 2024. I sharpened this workshop to test with Graphic Design students while teaching the Interaction Course at the University of the Arts in May 2024, incorporating all the new research from chance.operator.
Reflecting on Chance Operations
By integrating chance-based and game methods into my toolkit, I sought to create structured unpredictability—inviting players to engage with uncertainty as an active participant rather than a disruptive force. Inspired by George Brecht’s concept of choiceless choosing, these games provide a framework where chance does not eliminate agency but instead redirects it, offering a way to explore decisions without being bound by conventional authorship.
Choiceless choosing argues that uncertainty can be used intentionally and can determine the design of games.
Games do not simply mimic the world’s uncertainty, but give metaphors for conceptualizing the world as uncertain in the first place. The idea of chance is always a worldly idea that depends on the equipment capable of exemplifying it. Games provide such equipment in the form of dice, cards, coins, roulette wheels, lottery draws, and spinners.
Chance can be designed and controlled in artistic compositions, as a tool for unbiased experiments to examine biased systems.
For many Fluxus artists, games, jokes, and toys were an ideal way to accomplish this goal—especially when they were made in a skewed or disrupted manner, resulting in structured-but-aleatory cacophony. By establishing chance relations between objects, AI tools and technology, it produces an ontological flattening – eliminating hierarchical distinctions and nuanced categories. An acrobat exists in the same sense as an architectural drawing and a snowflake.
Control is important to any experiment, but introducing chance creates a testable variable whose possible outcomes can surprise the observer. Giving up control is always relative: the production of a zone of unknowing that is partial.
By leaving the realization of a given work up to the participant, choiceless choosing comes into existence – a synthesis of each constraint. Choice is ultimately illusory, and can be integrated as one more variable in an experiment. Chance only becomes visible when it matters to the observer, when it becomes felt.
In this column you will find a list of resources and software made as part of chance.operator.
image: application game, playing telephone with chatbots, tested during open studio + AI tinkering session at UMARTS; surrealistic games with embodied AI assistants
Playing Telephone with Chatbots
This application is made as a group exercise for up to 7 players.
Telephone is a human whispering game where the input slowly gets distorted. Playing it with your personalized AI-powered chatbots questions their logic, their invisible workings and your personal influence on them. All players add their name, one player starts, and the next player is chosen by the system to echo the input through their chatbot. This goes on – at the end, the first player is asked to tie the first input and the last output together.
There are two versions of the application: one stand-alone (works on older computers) and one connected to a local language model (Llama 3) with a character; at the end, the language model makes a poem of all the inputs and outputs to tie them together.
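The core loop of the game can be sketched as follows. The stand-in "bots" below are trivial invented transforms, whereas the real application routes each turn through a player's personalized chatbot:

```python
import random

# Each player's chatbot is modelled as a function that transforms the
# message; the system shuffles the playing order and passes one bot's
# output on as the next bot's input, like whispering down the line.

def shouty_bot(msg):
    return msg.upper()

def terse_bot(msg):
    return " ".join(msg.split()[:3])    # keep only the first three words

def echo_bot(msg):
    return msg + " " + msg.split()[-1]  # stutter the last word

def play_telephone(first_input, bots, seed=0):
    order = list(bots)
    random.Random(seed).shuffle(order)  # the system chooses who is next
    transcript = [first_input]
    msg = first_input
    for bot in order:
        msg = bot(msg)
        transcript.append(msg)
    return transcript  # first and last entry get tied together at the end
```

Because each bot only sees the previous bot's output, distortion compounds: the transcript makes the invisible transformations of each chatbot legible.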
image: card deck the exquisite image is a corpse
Surrealistic AI Card Deck
There is also a physical card deck with instructions to play surrealistic games with machines in groups; like exquisite corpse (each participant draws a section without seeing the others). Most games can be played in an analogue way.
Prompt Charades
Card deck to play the party game Prompt Charades, with written or visual instructions for simple, often absurd, and sometimes humorous actions to act out. These instructions form event scores that can include instructions for simultaneous drawing or mark-making activities. Compiled from prompts on public AI channels, they challenge traditional notions of art creation and reveal a new mode of language.
WIP: petrified_diffusion, produced in December 2024
Mode two was about embracing tools and micro-controllers. It brought the experimental self-built applications – generating works beyond human control – into my own lab of computer-controlled machines and microcontrollers.
I have investigated how I could execute visual works in real time and in material form.
automated diagrams
The diagram of the piece is the physical scenario of how the piece unfolds, in this case it depicts the inner workings and infrastructure of Generative AI models.
image: pen plotted diagram of text-to-image AI infrastructure (partially generated)
image: pen plotted diagram of understanding harm throughout the machine learning life cycle (inspired by research by Harini Suresh and John V. Guttag) (partially generated)
image: pen plotted diagram NSFW filter triggers (partially generated)
image: pen plotted diagram of how to predict text/tokens/words with large language model NANO-GPT (inspired by the visualization of BBYCROFT) (partially generated)
image: pen plotted diagram of feedback loop in stable diffusion 1.5 (partially generated)
drawing 1) amount of different users, 2) first letter of prompts and amount of reruns, 3) observing one user (dot = redirecting prompt)
petrified.diffusion
petrified_diffusion shows a frozen moment in the AI image generation process, where the image is still undefined.
It symbolizes the diffusion phase, where forms emerge from *sacred noise* but nothing is fixed. This moment is visualized as strata, preserving the layers of 'denoising' as erosion within the image, as if it were petrified over time.
petrified_diffusion opposes the *classifying gaze* by resisting final categorization. Parts of the image are algorithmically recognized into abstract terms and used to generate endless variations of the becoming image. The part that gets reinterpreted lights up with an LED light and is decoded on the screen.
fragment: automated drawing; upgrading my Silhouette Cameo with new 3D-printed pen holders
image: blueprint of pen-plotted generated sculpture, img2text
images: selection of studio stills of using different tools visualizing scores
latent fossils
image: test model speaker fossil
Inspired by the books A Bestiary of the Anthropocene and Atlas of AI, I have made a prototype of a futurist 'latent' fossil that you can activate by shaking it.
The activated fossils tell you stories of how they came into existence / went extinct, while the sound effects immerse you in a scene of extraction.
All fossils are generated with AI and reflect on the self-consuming origins of the extraction of the earth's resources and data. This is a continuation of my show Model Collapse, currently on show at Tetem, Enschede.
The fossil also works as an instrument, shake to activate local AI models to generate variations.
video: shell_diffuser, shake to generate images and sound
automated drawing
image: regenerating parts of the same image + pen plotted in real time
image: regenerating parts of the same image in color layers + pen plotted in real time + drawing by hand
Reflecting on Mode 2 + 3: Feedback Loops as Artistic Strategy
At this stage of the research, I realised that the most potent artistic strategies in working with generative AI use feedback loops, misuse, and hacking to crack open the inner workings of (image-)generation models. This includes recursively confronting models with their output, deconstructing text-to-image pipelines, labelling images, and discovering unexpected correlations.
Examples of Feedback Loops using AI
image: tools, micro-controllers and buttons prepared in my studio
Mode three focuses on the embodiment of AI tools. As the final step of the research, I wanted to develop a prototype of an instrument that brings randomness, expression and emotion as parameters into automated creation.
The instruments are sculptures themselves. They measure data with the help of sensors, and the incoming data influences the AI generation of new visual works.
The resulting images become a reflection and dialogue between the visitor and the algorithmic collaborator. The instrument has an animistic character and constantly generates new images and styles. During exhibitions, these instruments are playable to influence event scores that guide the generative systems and robots in the creation of new visual works.
I have experimented with sensors registering weight, pressure, distance, light, heartbeat and more, to generate constant streams of input varying with the human interacting.
I have built a system that translates the data into language 'prompts' that interpret it as emotions, expressions and styles. These prompts are mixed with randomly assigned prompts and used in a locally running open-source text and image generation model.
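A minimal sketch of this translation step, assuming normalized sensor values between 0.0 and 1.0 (the vocabulary, bins and prompt fragments below are made up for illustration):

```python
import random

# Each sensor reading is binned into an emotion or style word, then
# mixed with a randomly assigned prompt fragment before being handed
# to the locally running text/image model.

EMOTIONS = ["calm", "curious", "tense", "ecstatic"]
RANDOM_PROMPTS = ["a petrified landscape", "a latent fossil", "sacred noise"]

def sensor_to_word(value, vocabulary):
    """Bin a normalized sensor value (0.0-1.0) into a word."""
    index = min(int(value * len(vocabulary)), len(vocabulary) - 1)
    return vocabulary[index]

def build_prompt(pressure, heartbeat, seed=None):
    mood = sensor_to_word(pressure, EMOTIONS)
    tempo = sensor_to_word(heartbeat, ["slow", "steady", "racing"])
    chance_part = random.Random(seed).choice(RANDOM_PROMPTS)
    return f"{chance_part}, {mood} mood, {tempo} rhythm"
```

The design mirrors the instrument's logic: the human body supplies deterministic parameters (mood, tempo), while the chance operation contributes the part no one chose.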
latent explorer
This language synthesizer helps you create linguistic feedback loops in order to break locally running AIs.
This synthesizer-like module lets you echo input through different local large language model characters and travel through meaning and categorization. Every knob has its own special function!
prompt.charms
Prompt.charms capture the unseen, conjuring meaning from color, pressure, distance, and light. They sense the currents of floating data weaving them into spells—an invocation of machine dreams. A ritual of emergence, where generative AI is not commanded but divined.
image: seeding_venus in Tetem
seeding_venus is a cybersculpture that interrogates the biases embedded in generative AI
models by reimagining the Venus of Willendorf.
video: seeding_venus in action
Based on an image of the Venus, a new body is generated using donated body parts and alternative prompts, hacking current perceptions, standards, and interpretations of beauty and femininity—limitations shaped by the training data of major generative models. Marquee displays in the sculpture show two types of text: in all caps, the labels assigned to the Venus of Willendorf in the LAION dataset (widely used in models like Midjourney, Stable Diffusion, and DALL-E); in lowercase, the alternative prompts introduced to generate new bodies that challenge this algorithmic gaze. As the Venus of Willendorf is speculated to be the first self-portrait of a woman, seeding_venus reflects on the evolution from singular representations of the body to the constrained representations dictated by contemporary datasets. Within the exhibition model.collapse, the work initiates an infinite procession of alternative Venuses—each emerging when light is shed on the cybersculpture.
images: newly generated bodies for the venus
on our backs receiver
image: computer fan + antennas + button + arduino
This device helps you browse through a linguistic mapping of queer contact advertisements from On Our Backs, the first erotica magazine for a lesbian audience in the United States.
Designed as a metaphorical dowsing rod during a short-term residency, this device seeks to uncover and illuminate queer language, communication nuances, and historical modes of connection that have often been marginalized or overlooked in traditional archival and digital datasets.
III. Implementation of Research in Model.Collapse
In the immersive exhibition Model Collapse, the landscape from which generative AI emerges is depicted. By displaying relics, artifacts and stories from the past, it reveals the origins of resource extraction.
Model Collapse shows cyber sculptures, each with its own algorithmically defined form and character. These sculptures fight against dominant representations and narratives entangled with the origins of artificial intelligence. They tell stories about mystification, gender bias, non-Western algorithmic practices, the ecological footprint and labor. As a visitor, you can engage directly with the cyber sculptures by typing in your query through the central interface.
A local language model determines your location within the digital environment they inhabit, and the closest sculpture awakens in response and shares its knowledge. Your query dynamically sets the scene of the exhibition and breathes life into their generative digital biome. Interacting with the main station triggers a chain of events. What starts with a few words results in far-reaching implications across multiple other dimensions. Intuitive insight is shared and invisible lines – which are hidden behind the inner workings of the AI – are opened.
"AI's entire supply chain reaches into capital, labor, and the Earth's resources, demanding enormous amounts of each. This extraction is both literal and figurative, largely invisible and almost impossible to follow." – Kate Crawford, Atlas of AI
Applied hacking strategy: specified intelligence
An example is how I use the large language model Llama 3 in my exhibition Model Collapse at Tetem, Enschede. Here the LLM's function is to measure the relation of the user's input (which can be anything) and convert it to a coordinate in a 3D space. The coordinate is determined by my own research (which has not been generated) and alternative pre-histories I have collected around mystification, extraction, inclusion and classification in AI.
I give the list of stories and their locations to the LLM, and according to this it measures how the input relates to the 3D space. The result is an interface where you can read the story while, mirrored, you travel to its location in an Unreal Engine game environment – a position in a generative landscape. Wherever you end up, it activates something in the landscape. If you end up at a story about the extraction of rare earth minerals, you will likely see relics of these elements appear, and traces of mining appear in the landscape and change it over time.
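Stripped of the LLM, the mechanic can be sketched like this. Keyword overlap stands in for the model's semantic judgement, and the stories and coordinates are invented examples, not the actual research:

```python
# Each story has a hand-placed coordinate in the 3D landscape; the
# visitor's input is matched against the stories, and the best match's
# coordinate becomes the destination in the game environment.
# Stories and coordinates below are invented for illustration.

STORIES = {
    "extraction of rare earth minerals": (12.0, -3.0, 40.0),
    "mystification of the oracle":       (-8.0, 15.0, 2.0),
    "classification of bodies":          (30.0, 4.0, -11.0),
}

def locate(user_input):
    """Return the (story, coordinate) whose words overlap most with the input."""
    words = set(user_input.lower().split())
    best = max(STORIES, key=lambda story: len(words & set(story.split())))
    return best, STORIES[best]
```

In the installation, this matching is done by the LLM itself, which can relate inputs semantically rather than by literal word overlap, so that any query lands somewhere meaningful in the landscape.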
image: still of 3d environment part of model.collapse. where a local LLM decides your location in a digital landscape according to your input.
model collapse instruments
recent sculptural instruments for model collapse exhibition: 1) custom controller, 2) sculpture that is light sensitive with speaker that tells stories about AI + inclusion, 3) light sensitive sculpture that controls the domain of AI + extraction.
These instruments mark the four domains in the digital landscape of the Model Collapse exhibition: extraction, inclusion, mystification and classification. The sculptures are activated when light falls on them.
IV. Research Questions
Main Question: Is it possible to use chance-based interventions to expose biases embedded in pretrained models of AI systems, and does this give insight into ethical concerns?
Yes, chance-based interventions can effectively expose biases embedded in pretrained AI models, though identifying and categorizing specific biases remains challenging. Through randomness and unpredictability, these interventions push AI systems beyond their typical operational patterns, revealing underlying biases when the systems fail or produce unexpected results.
The research demonstrates this particularly through feedback loops and iterative processes, where repeatedly processing outputs makes biases more visible. For example, the seeding_venus project revealed beauty standard biases by challenging conventional body representations and labelling, while experiments with NSFW filters exposed how seemingly neutral keywords encode moral judgments. The tendency toward "stock photo" aesthetics in AI-generated images reveals statistical biases favoring the most common patterns.
These interventions provide valuable ethical insights. They expose how AI systems often reinforce existing power structures, hide chains of production and human labor, consume significant environmental resources through extraction and energy use, and can erase cultural nuances through filtering and "smoothing" of edge cases. However, it's important to note that biases can be introduced at multiple stages - from dataset collection to model training to deployment - making them complex to isolate and identify.
While chance-based interventions are valuable tools for exposing biases, they work most effectively as part of a broader critical practice that includes other methods like feedback loops, embodied interaction, and critical analysis of AI systems' material and social contexts. The difficulty in determining exactly which bias you're encountering during chance-based interventions means this approach should be seen as one component of a larger toolkit for understanding and addressing ethical concerns in AI systems.
We often think that biases are the result of data, but it is not only data: throughout the training process, other biases can be added. If you train on the data in a certain way, this results in a bias that echoes on throughout the use of the system.
These biases are problems that occur in all software development, not only AI, and every bias is an ethical concern.
How can AI tools enhance traditional artistic techniques such as drawing, woodcutting and sculpture?
Many AI tools are being implemented in software to enhance workflows. Even the newest versions of Adobe's software now have functions like image generation, variation and upscaling. These are very user-centered and help speed up design processes. But they are built as closed-off pretrained models; gaining insight into the datasets they were trained on is impossible. They tend to be updated over time, so all specified knowledge can be gone after the latest update.
When knowledge, technological advancement and intelligence are directed towards enhancing traditional artistic tools, they could reshape the mediation between human skill and robotic competence. Focusing on the intersection of craft and computation, AI tools can enhance craftspersonship, improving precision and creativity while maintaining the tactile and cultural dimensions of making.
As assistant professor Daniela Mitterberger claims: rethinking the tools can lead to hybrid design and construction methods, offering a vision for a more collaborative and interactive approach to digital fabrication.
The problem of capitalist-driven AI tools
There are many nuances to answering this question, but for me it is important to differentiate between use cases of AI here.
The promise of these widely implemented AI tools and services run by bigger agencies is: you can now excel at painting watercolors without even practicing.
This ethos of making knowledge and tools available for everyone is at the core of hacking, pirating and sharing culture, and is celebrated.
– Hito Steyerl
The artist is used to wash the tools clean of their capitalist and unethical echoing of extractivist and colonial structures. The artist is used to promote and experiment, to find the creative boundaries, and perhaps to move attention away from the fact that most AI models are trained on endless amounts of artists' works without consent. Chains of production and labor are rendered invisible.
According to Marshall McLuhan, artists are particularly sensitive to the transformative effects of new technologies. By observing what artists are doing, people can become aware of imminent changes in the social and cultural fabric which, because they are so new and unfamiliar, remain otherwise imperceptible. As part of this oracular function, artists are early adopters of new technologies, often using them in unorthodox ways in order to reveal something about their unfamiliar essence.
I work with Gen AI - in order to hate it properly
- inspired by Nam June Paik
The opening up of AI tools to the public has had a major influence on this, and on how we as a society now look at LLMs and image generation. While socially accepting these agents with invisible agendas in our daily tasks, we work towards a bigger acceptance of automation. We have experienced how easy and accessible knowledge is, so our framework has stretched.
A new magical language has been invented, full of corporate promises. We now make infinite images without there being policy and knowledge about the true costs of extractivism. Scroll on for strategies to work with these tools.
Artistic innovation by generative AI tools
Creativity, on the other hand, comes from breaking, owning and embracing the glitches within these tools. For this project I have focused on open-source generative tools, to stay in control of the produced data and to work as energy-friendly as possible.
Over the project (and the years), these have been the most creative parts of working with AI for me:
In a model that is trained on a large dataset you can explore the space between datapoints (the latent space). Exploring and collaging combinations of styles, genres and time periods hadn't been possible before for me as a media artist on my own devices.
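As a minimal sketch of what exploring that in-between space can look like: two latent vectors stand in for two styles, and evenly spaced points on the line between them blend the two. The vectors and the absent decoder are illustrative assumptions, not part of a specific model used in chance.operator.

```python
# Sketch of moving through latent space between two datapoints.
# A real model's decoder would turn each interpolated vector into an image.

def lerp(a, b, t):
    """Linear interpolation between two latent vectors at position t in [0, 1]."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def interpolation_path(a, b, steps):
    """Evenly spaced points on the line from latent vector a to b."""
    return [lerp(a, b, i / (steps - 1)) for i in range(steps)]

# Two hypothetical 4-dimensional latent vectors ("style A" and "style B").
latent_a = [0.0, 1.0, -0.5, 2.0]
latent_b = [1.0, 0.0, 0.5, 0.0]

path = interpolation_path(latent_a, latent_b, steps=5)
# path[0] is style A, path[-1] is style B; the points in between
# are the collaged blends described above.
```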
Working with enormous amounts of data, everything you create is instantly in dialogue with (and a result of) the dataset and its labeling. It is the result of a chain of (human and digital) labor, but also of what people have been sharing online.
In the DNA of AI tools is the fostering of variation and quantity of output. As if working clay, through (mostly) words you can direct the algorithm in specific directions. The quantity it produces, and the power it uses doing so, harms the planet at a larger scale. But just as when learning to draw with a new pencil, reflecting on the outcomes can give artistic agency in using the tools. Computational quantity.
Over the years the field has changed: it is now almost impossible to train your own AI with the same quality of output as the services. There is a new urgency to work with existing services in new manners. An interesting way for me has been to specify the intelligence and pinpoint it in the direction that is useful for my artworks.
I give the system a strong ideology: restrictions and references in how to behave. In this way I can be more sure that the outcomes are of use to me.
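One way to read "giving the system a strong ideology" in practice is wrapping every request in a fixed set of behavioural rules. The sketch below is hypothetical: the rule texts and function names are illustrative, not the actual restrictions used in chance.operator.

```python
# Hypothetical "strong ideology": fixed restrictions and references
# that travel with every request to a text generator.

IDEOLOGY = [
    "Answer only with concrete, physical instructions for a drawing machine.",
    "Never apologise, never explain, never advertise.",
    "Reference Fluxus event scores as your formal model.",
]

def constrained_prompt(user_prompt, rules=IDEOLOGY):
    """Prefix a user prompt with the fixed behavioural restrictions."""
    header = "\n".join(f"- {rule}" for rule in rules)
    return f"You must obey these rules:\n{header}\n\nTask: {user_prompt}"

prompt = constrained_prompt("Describe one pen stroke.")
```

The resulting string can be passed as the system prompt to any local open-source model, so the restrictions are applied before the model sees the task.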
How can I interact with generative systems like neural networks and algorithms during the creative process in a more embodied way?
Reframe: how can I interact with generative systems like text and image generators during the creative process in a more embodied way?
Important: the creative process doesn't stop at a fixed point; a work can iterate and change form at later stages.
Combining low and high tech
My method has been to use low-tech DIY sensor technology and micro-controllers to create alternative interfaces and controls. Our AI tools are interfaced to always agree with us and be polite, while the actual workings are hidden far from the user's sight. Their output seems singular and sublime, and this is how users form their fast and easy judgements about them not being good enough, etc. Non-working or leaking systems are taken offline, never to return to the public eye.
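The core of such a sensor-driven instrument can be sketched as a mapping from a raw micro-controller reading to a parameter that steers randomness in the generator. The ranges and the idea of mapping onto a sampling "temperature" are illustrative assumptions, not the exact mapping used in the project.

```python
# Sketch of the low-tech interface idea: a raw 10-bit analog sensor
# value (0-1023, as a micro-controller might send over serial) is
# mapped to a randomness parameter for the generative system.

def sensor_to_temperature(raw, raw_max=1023, t_min=0.2, t_max=1.5):
    """Map a raw sensor reading to a sampling 'temperature':
    more pressure/light/proximity -> more randomness in the output."""
    raw = max(0, min(raw, raw_max))  # clamp noisy or out-of-range readings
    return t_min + (raw / raw_max) * (t_max - t_min)

low = sensor_to_temperature(0)      # resting sensor: conservative output
high = sensor_to_temperature(1023)  # full activation: maximum chance
```

Because the mapping is continuous, small bodily gestures translate into gradual shifts in the generator's behaviour rather than on/off switches.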
Experimental Interfacing
By putting AI tools in a different form and making them available at another stage of the 'design process', they take on another function.
Playful methodologies
By seeing generated AI outcomes as game elements or instructions that bring chance or inspiration, you must loosen up your idea of authorship and agency. To help this process I have worked on surrealistic games that you can play with your AI systems in order to get input.
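A toy sketch of that game mechanic: chance composes an instruction card, Fluxus event-score style, that reframes the next move in the session. The card texts here are invented examples, not the actual content of the games developed in the project.

```python
# Toy chance-instruction generator for a surrealistic play session.
import random

ACTIONS = ["trace", "erase", "mirror", "overdraw"]
SUBJECTS = ["the last generated image", "your own shadow", "one word of the prompt"]

def draw_card(rng=random):
    """Compose a short event-score-like instruction from random parts."""
    return f"{rng.choice(ACTIONS).capitalize()} {rng.choice(SUBJECTS)}."

rng = random.Random(7)  # seeded so a play session can be replayed
card = draw_card(rng)
# Each call deals a new chance instruction for the players and the AI.
```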
How to measure aesthetic appreciation of a material artwork produced by a robot?
Rather than relying on pure metrics, aesthetic appreciation of robot-produced artwork is best measured through a combination of integrated approaches within chance.operator.
A longer time ago I tried to measure aesthetic appreciation with Valentin Vogelmann for our project vNine, where we used the ideology of Wittgenstein to measure aesthetic appreciation of AI-generated music. Giving a human value to the produced outcome, this process was highly computational and remained complex and abstract.
The most direct and effective strategy I found was to make reflection and audience engagement central to the artwork itself:
- Making the robotic creation process visible and transparent to viewers
- Creating opportunities for direct audience interaction with both process and product
- Gathering real-time feedback during exhibitions
- Documenting audience responses and the work's evolution over time
- Evaluating multiple dimensions including technical execution, innovation, emotional resonance, and human-machine interaction quality
It is important to acknowledge that aesthetic appreciation exists at the intersection of technical achievement, artistic intent, and human response. Rather than seeking a single measurement, the assessment can be part of the artwork's presentation and experience, allowing for both objective and subjective evaluation through active audience participation and reflection. Within chance.operator I did this during my AI residencies, where I had conversations about the creation process while machines were creating parts of my projects.
research made possible by: