You steer,
AI motors.
A classroom and a playground for GLSL shaders. Built for artists working with large language models — not as content engines, but as programmers you collaborate with.
Machines and art
at CalArts
Computer-assisted image-making has a long history at the California Institute of the Arts, much of it predating the phrase in its current sense. Figures whose work at or near the school connects to this tradition include:
- Film & Video Ed Emshwiller · Adam Beckett · Pat O'Neill · John Lasseter · Joanna Priestley (Cubicomp)
- Electronic Music Morton Subotnick · James Tenney · Alison Knowles · David Rosenboom · Mark Trayle
- Video & Image Nam June Paik · Stephen Beck (video weavings)
- Visual Music John & James Whitney · Mary Ellen Bute · Oskar Fischinger · Len Lye · Jordan Belson · Norman McLaren · Lillian Schwartz
This project is an attempt to continue a line — not to import a new technology into an art school, but to recognize that the school has been working with machines, procedurally and generatively, for a long time.
From picture machines
to code collaborators
Most AI tools built for artists share one design. You type a prompt; the system hands you a finished thing: an image, a video, a piece of music. This is efficient. It also removes much of the space where artistic judgment usually accumulates.
For many artists, the satisfaction of creative work comes from decisions made over time: this color not that one, this curve adjusted, this element removed. When a tool skips that process and delivers a result, the artist risks becoming a curator of outputs rather than the author of the work.
Writing code with a language model has a different shape. You still see every line. You can change anything. When something breaks, you debug it together; when something works, you can understand why. The model supplies syntax, algorithms, and standard solutions. You supply what you want to make, and why. The conversation stays yours.
The training-data story runs differently here as well. Image generators were trained on artists' work without consent — a claim still being argued. Large language models learned to code from a culture of publicly shared examples, explanations, and problem-solving, much of it posted to help other people. That is a different bargain. It isn't a clean one, but it is different.
"It feels more like artisan work. I used an LLM to help me with a Blender model, but I did all the sculpting myself with my mouse."
"This kind of code is so unintuitive. In my lifetime I would never have figured it out anyway. So for this, it doesn't feel like a moral quandary."
"I definitely think this feels more conscientious than other existing interactions with image generators."
The four-line stack
- LLM = code collaborator
- JS library = orchestration layer
- WebGL = GPU interface
- GLSL = rendering logic
A shader is small — tens to a few hundred lines — and it runs on the GPU. A fragment shader answers one question: what color should this pixel be? It runs that rule, in parallel, once for every pixel on the screen.
Because shaders are small and well-documented, a language model can often reason about the whole program at once. The artist prompts; the LLM writes and edits GLSL; WebGL runs it; the browser paints it. Preview is instant.
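As a sketch of how small that whole program can be: below is a hypothetical fragment shader, written as the GLSL string a page would hand to WebGL, with the same per-pixel rule mirrored in plain JavaScript so you can see exactly what the GPU evaluates at every pixel. The names (`fragmentSource`, `pixelColor`) are illustrative, not from the project.

```javascript
// Hypothetical GLSL: paint a horizontal red ramp.
// WebGL compiles this string; the GPU then runs main() once per pixel.
const fragmentSource = `
  precision mediump float;
  uniform vec2 u_resolution;                    // canvas size, set from JS
  void main() {
    vec2 uv = gl_FragCoord.xy / u_resolution;   // normalize to 0..1
    gl_FragColor = vec4(uv.x, 0.0, 0.0, 1.0);   // red grows left to right
  }
`;

// The same rule on the CPU, evaluated for one pixel at a time:
function pixelColor(x, y, width, height) {
  const u = x / width;        // normalized horizontal coordinate, 0..1
  return [u, 0.0, 0.0, 1.0];  // RGBA
}
```

The GPU's advantage is parallelism: it applies this one rule to every pixel of a 1080p frame, all at once, every frame.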
What moves through
the pipeline
Four common channels move data through the pipeline. Attributes, uniforms, and textures come in from JavaScript; varyings pass data from the vertex shader to the fragment shader. The names appear throughout the classroom and the playground:
- attributes
- Per-vertex arrays — positions, normals, UVs — stored in WebGL buffers.
- uniforms
- Per-draw-call constants — time, matrices, colors, slider values.
- textures
- Images or data samplers passed into the fragment shader.
- varyings
- Written by the vertex shader, interpolated by the rasterizer, read by the fragment shader.
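Of the four, varyings are the only channel the GPU fills in for you: the rasterizer blends the vertex shader's per-vertex values across each triangle before the fragment shader reads them. A CPU sketch of that blend, using barycentric weights (function and variable names are illustrative):

```javascript
// A varying written at three vertices reaches the fragment shader
// as a weighted blend; w0 + w1 + w2 === 1 inside the triangle.
function interpolateVarying(v0, v1, v2, w0, w1, w2) {
  return v0 * w0 + v1 * w1 + v2 * w2;
}

// At a triangle's centroid, each vertex contributes equally:
const blended = interpolateVarying(0.0, 0.3, 0.9, 1 / 3, 1 / 3, 1 / 3);
```

This is why a color set at three corners arrives as a smooth gradient: every pixel in between gets its own blend.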
Three small moves
Three small widgets, each one teaching a single idea. The classroom opens each one into a chapter. Drag the sliders on the first two; the third does its own thing.
u_time drives the rest.
Around the shader
A shader is a small program. On its own it draws an image. The playground wraps each shader with a modular set of tools that turn it into an instrument — something you can feed, play, and record.
- upload
- Drop an image, a video, or a 3D model (OBJ) onto any page and it becomes input to the shader. The kaleidoscope wants a video; the displace page wants a photograph to push pixels against; the Three.js pages want a model to wear a material. You drop the asset; the shader responds.
- record
- Every page has a red record button in the corner (or press R). It captures the shader's output at 1080p through the WebCodecs API — hardware-accelerated H.264, no UI chrome in the frame. You hit record, let the animation run, adjust sliders, stop, and download an MP4. The result goes straight into a video editor.
- react
- The audio pages route microphone or file input into the shader as uniforms — bass, treble, overall level, frequency bins. The same line that drives a pulsing circle from u_time drives it from u_bass. Any shader with a uniform can become audio-reactive in a handful of edits.
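The swap is mechanical. A sketch of one such shared rule, mirrored in JavaScript with illustrative names: the same formula reads u_time on the clock-driven page and u_bass on the audio page, and nothing else changes.

```javascript
// One pulsing-circle rule, two drivers. On the shader side the only
// edit is which uniform feeds `drive`; the formula stays identical.
function pulseRadius(drive) {
  // drive: u_time in seconds, or u_bass normalized to 0..1
  return 0.3 + 0.1 * Math.sin(drive * 2.0 * Math.PI);
}

const clockDriven = pulseRadius(0.0);  // at t = 0, radius rests at 0.3
const bassDriven = pulseRadius(0.25);  // at a quarter swing, radius peaks
</n```

Because the driver is just a number arriving through a uniform, any slider, clock, or analysis value can take its place.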
Advanced students compose these. A student who understands all three — an uploader, the recorder, an audio-reactive hook — can take a piece of footage, drive its distortion with music, and record the result as MP4. The audio-reactive behavior from one shader can be isolated and redeployed to drive another transformation entirely: making a character appear to speak, or a starfield explode on a snare hit. The pieces are separable; what the advanced student contributes is the composition.
For the practical loop — installing, running, editing, committing — see the Workflow reference.
Working with
the model
A few habits make the difference between a productive hour and an exhausting one. They do not come from the model. They come from working with it.
Watch for false summits. The model draws from training data, and where many people have been stuck before, the model gets stuck in the same place. The signs are familiar — the same wrong answer in three different shapes, the same broken code with a different comment. When you recognize it, stop. Reset the conversation. A fresh session unblocks what a long conversation cannot.
Read errors literally. When something breaks, copy the exact error text and paste it into the conversation. The model parses errors far better than it parses descriptions of errors. "It isn't working" gets you guesses; the literal message gets you answers.
Change one thing at a time. When the model proposes five edits at once and the result is worse, you won't know which one caused it. Apply edits in single bites. Test. Then the next one.
Stay in conversation. A language model is not a better search engine. It is a collaborator with a short memory, a tendency to flatter, and a habit of confidently supplying what it thinks you want. The work is not to type better prompts. The work is to stay alert to what it is actually giving you — and to keep redirecting when it drifts.
For a framework for looking at finished vibecoded work together, see the Crit reference.
Two doors
Five chapters of interactive tutorials. Start here if you want to understand what your LLM is writing — and write your own without one.
Enter Classroom →
Two dozen sections of working shaders — raymarched landscapes, kaleidoscopes, fluid sims, ASCII renderers. Read the source, prompt an edit, record the output.
Enter Playground →
Built at the California Institute of the Arts by Douglas Goodwin.