1 Introduction
Version 1.5 of Codea introduced a very powerful new feature: GLSL shaders. As a beta tester, I was able to play with these a little longer than most Codea users. I'll record here what I learnt, and work through two shaders that I wrote. Before beginning, though, I should add that I knew nothing about shaders before Codea gave me access to them, and I still feel that I know very little about them. However, that little might still be of use, so it is hopefully worth recording.
The two shaders are an explosion shader, and a game of life shader. They exhibit different characteristics of shaders and so hopefully provide illustrative examples of their capabilities.
2 Background
As I said above, I'm not an expert on shaders in any shape or form so I intend to explain what I've learnt by going through some examples. Nonetheless it is necessary to understand a little of the mechanism beforehand.
2.1 Meshes
To understand what a shader does it is first necessary to get straight what a mesh is. A mesh is a collection of triangles. Each triangle is specified by its vertices, thus to specify a mesh one gives a table of vertices which one thinks of as being grouped in threes (there is no actual grouping in the table, simply that every three vertices are taken as specifying a triangle in the mesh). It is possible to give more data to the mesh but the vertices of the triangles are the core.
That extra data can be quite general, but the most common extra pieces of information are colours, and a texture together with texture coordinates. Indeed, before the advent of shaders in Codea this was all the information that one could pass. Let's start by assuming that we just pass colours. The way that this works is that we assign a colour to each vertex. What is important here is that this is a colour for each vertex in the list of vertices passed to the mesh, so if the same vertex appears twice then there is no need for it to have the same colour each time.
The mesh is drawn on to the screen, triangle by triangle. Codea takes a triangle from the mesh, figures out where the vertices go to, and then draws the triangle between those vertices. So the vertices get mapped to specific points on the screen and these three points specify a triangle on the screen. Then Codea has to figure out what colour to paint each pixel inside that triangle. It does that by mixing the colours specified at the vertices by a simple linear interpolation.
This mixing works as follows. Suppose that the three vertices are ${\stackrel{\rightharpoonup}{v}}_{1},{\stackrel{\rightharpoonup}{v}}_{2},{\stackrel{\rightharpoonup}{v}}_{3}$. Then for a point inside the triangle, say $\stackrel{\rightharpoonup}{w}$, there are unique real numbers ${\alpha}_{1},{\alpha}_{2},{\alpha}_{3}\in [0,1]$ with ${\alpha}_{1}+{\alpha}_{2}+{\alpha}_{3}=1$ such that:
$$\stackrel{\rightharpoonup}{w}={\alpha}_{1}{\stackrel{\rightharpoonup}{v}}_{1}+{\alpha}_{2}{\stackrel{\rightharpoonup}{v}}_{2}+{\alpha}_{3}{\stackrel{\rightharpoonup}{v}}_{3}$$ 
The colour at the point $\stackrel{\rightharpoonup}{w}$ is obtained by mixing the colours at the vertices according to the same formula:
$$c={\alpha}_{1}{c}_{1}+{\alpha}_{2}{c}_{2}+{\alpha}_{3}{c}_{3}.$$ 
The fact that the ${\alpha}_{i}$ are all in $[0,1]$ and that ${\alpha}_{1}+{\alpha}_{2}+{\alpha}_{3}=1$ ensures that $c$ is a genuine colour.
Note that if we sit at a vertex, say ${\stackrel{\rightharpoonup}{v}}_{1}$, then we have ${\alpha}_{1}=1$ and ${\alpha}_{2}={\alpha}_{3}=0$ whence the colour is precisely ${c}_{1}$. Thus the colouration can be thought of as colouring the vertices according to the rule and then blending the colours to get a smooth transition.
This process is done for every pixel in every triangle of the mesh.
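The mixing described above can be made concrete with a small numerical sketch. This is illustrative Python, not Codea; the triangle, the point, and the corner colours are made-up values.

```python
# Barycentric interpolation of vertex colours, as a plain-Python sketch.

def barycentric(v1, v2, v3, w):
    """Solve w = a1*v1 + a2*v2 + a3*v3 with a1 + a2 + a3 = 1 (2D points)."""
    # Standard closed form via signed areas of sub-triangles.
    det = (v2[1]-v3[1])*(v1[0]-v3[0]) + (v3[0]-v2[0])*(v1[1]-v3[1])
    a1 = ((v2[1]-v3[1])*(w[0]-v3[0]) + (v3[0]-v2[0])*(w[1]-v3[1])) / det
    a2 = ((v3[1]-v1[1])*(w[0]-v3[0]) + (v1[0]-v3[0])*(w[1]-v3[1])) / det
    return a1, a2, 1.0 - a1 - a2

def mix_colours(colours, alphas):
    """c = a1*c1 + a2*c2 + a3*c3, channel by channel."""
    return tuple(sum(a*c[i] for a, c in zip(alphas, colours))
                 for i in range(len(colours[0])))

# A right triangle with pure red, green and blue at its corners.
v1, v2, v3 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
alphas = barycentric(v1, v2, v3, (0.25, 0.25))
colour = mix_colours([(1, 0, 0), (0, 1, 0), (0, 0, 1)], alphas)
```

Sitting the point at a vertex would give that vertex's colour exactly, as the text notes.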
2.2 Shaders
Shaders allow you to mess with the process of getting a mesh from its specifying data on to the screen. There are two crucial steps in that process: figuring out where a vertex ends up on the screen, and colouring the pixels in between. We can extend the first one a little to: figuring out the information associated to a vertex. The point about this extension is that it can include the colour or any other information that we have associated to a vertex.
A shader, therefore, hooks in to these two processes and allows us to insert some code in between. Thus when a vertex is processed to figure out where it is placed on the screen, we can interrupt and say “Rather than doing the usual thing, do something else." We can do the same when a pixel is being coloured. The first bit of code is called the vertex shader, the second the fragment shader.
Here's the first important thing to remember about shaders. Each operates on a single object, and each must set the deciding property of that object (position for vertices, colour for pixels). The vertex shader operates on a single vertex at a time, the fragment shader on a single pixel at a time. Neither can set a property for another vertex or pixel, and neither can know anything about what happens to other vertices or pixels (unless it computes it for itself) because all the calculations happen effectively in parallel.
2.3 Data
It is possible to introduce more information to a shader than just the list of vertices and colours. There are two types of information: uniforms and attributes (on the Codea side these are known as buffers). The first, uniforms, are single pieces of information that are valid for the whole mesh. The second, attributes, are lists of pieces of information, one for each vertex. (In both of these, “piece of information" could be quite complicated.)
The vertex shader works first and can use this information in its processing. When working on a vertex it can use the information from the uniforms and from the associated attribute. That is, when working on a vertex it can use the colour associated to that vertex but not to any other vertex.
As the vertex shader works before the fragment shader, it can pass information to the fragment shader. This is done in the same way as the interpolation for the colour described above: the vertex shader specifies certain values for each vertex which are then blended to provide a value for each pixel in the intervening triangle. These are known as varying attributes. So the standard vertex shader sets the varying colour for a vertex to be its attribute colour and thus the shader knows to interpolate it accordingly.
2.4 Textures
A special type of data is that of a texture. This is an image that can be used as a source of colours. A texture is specified as a uniform; it cannot be passed in a buffer, nor passed from the vertex to the fragment shader as a varying. The vertex program, in fact, cannot use textures at all: they only work in fragment programs. The usual way of working with them is for the fragment program to get a colour from a texture by specifying “texture coordinates". These could be computed in a complicated manner, or could be simple varyings passed on from the vertex program.
The usual use of textures is to “paint" a triangle with a triangle from the image. You specify “texture coordinates" for each vertex, saying where in the image each vertex should correspond to. This specifies a triangle on the image and we can imagine cutting out that triangle and stretching it over the triangle that is rendered to the screen.
However, there are plenty of other uses of textures.
2.5 Summary
In summary, therefore, the process proceeds as follows.

We specify the mesh by giving two types of data: uniforms, and vertex buffers. One of those vertex buffers is the vertices themselves.

The vertex program processes each vertex in turn. It must specify the final position of each vertex. It can pass data to the fragment program by specifying varyings.

The fragment program processes each pixel in turn. It must specify the final colour of each pixel. The varying information from the vertex program is linearly interpolated from the vertices. The fragment program can also use the uniforms.
The input is therefore the vertices, their attributes, and the uniforms. The output is the colours of pixels.
There is no other way to get information in to or out of a shader.
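The flow of data summarised above can be sketched as a toy software pipeline. This is an illustrative Python model, not real GLSL; all the names and values are made up, and the "blending" for the sample pixel is simply an equal three-way mix.

```python
# Toy model of the shader pipeline: uniforms are global to the draw call,
# the vertex stage runs once per vertex and emits varyings, and the
# fragment stage sees only blended varyings and uniforms.

uniforms = {"tint": (1.0, 1.0, 1.0)}

def vertex_stage(position, colour):
    # Must produce a final position; may emit varyings for the fragment stage.
    return {"position": position, "varying_colour": colour}

def fragment_stage(varyings, uniforms):
    # Must produce a final colour, using only varyings and uniforms.
    r, g, b = varyings["varying_colour"]
    tr, tg, tb = uniforms["tint"]
    return (r*tr, g*tg, b*tb)

# Run the vertex stage on each vertex of one triangle.
outputs = [vertex_stage(p, c) for p, c in
           [((0, 0), (1, 0, 0)), ((1, 0), (0, 1, 0)), ((0, 1), (0, 0, 1))]]

# For a pixel at the centroid the varyings blend equally (alpha = 1/3 each).
blended = tuple(sum(o["varying_colour"][i] for o in outputs) / 3
                for i in range(3))
pixel = fragment_stage({"varying_colour": blended}, uniforms)
```

Note that each stage sees one vertex or one pixel at a time, exactly as the text describes.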
3 The Default Shader
Before going through specific shaders, let us look at the default shader. The basic vertex program is the following.
//
// A basic vertex shader
//
precision highp float;
//This is the current model * view * projection matrix
// Codea sets it automatically
uniform mat4 modelViewProjection;
//This is the current mesh vertex position, color and tex coord
// Set automatically
attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;
//This is an output variable that will be passed to the fragment shader
varying lowp vec4 vColor;
varying highp vec2 vTexCoord;
void main()
{
//Pass the mesh color to the fragment shader
vColor = color;
vTexCoord = texCoord;
//Multiply the vertex position by our combined transform
gl_Position = modelViewProjection * position;
}
We have one uniform which is the current matrix. This is set automatically by Codea and is the product of the modelMatrix, the viewMatrix, and the projectionMatrix.
We have three attributes: the position (which is the actual vertices), the color (sic), and the texCoord. The latter refers to points on the texture.
We specify two variables to be passed to the fragment shader, vColor and vTexCoord.
The program executes the main function for each vertex, with the attributes set to the right values for that vertex. This program simply sets vColor and vTexCoord to the values passed in, then multiplies the position by the matrix and sets the position of the vertex using the gl_Position primitive.
The basic fragment program is the following.
//
// A basic fragment shader
//
//This represents the current texture on the mesh
uniform lowp sampler2D texture;
//The interpolated vertex color for this fragment
varying lowp vec4 vColor;
//The interpolated texture coordinate for this fragment
varying highp vec2 vTexCoord;
void main()
{
//Sample the texture at the interpolated coordinate
lowp vec4 col = texture2D( texture, vTexCoord );
col *= vColor;
//Set the output color to the texture color
gl_FragColor = col;
}
Here we have an additional uniform: texture. We also declare the varyings that we'll use from the vertex program.
The main function first samples the texture at the interpolated texture coordinate. Next, it tints the texture colour by the interpolated colour. Finally, it sets the output colour using the gl_FragColor primitive.
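The tinting step is just a component-wise multiplication; a minimal Python sketch with made-up texel and colour values:

```python
# The default fragment shader in miniature: sample a texel, then multiply
# component-wise by the interpolated vertex colour (RGBA throughout).

def tint(texel, v_colour):
    return tuple(t * c for t, c in zip(texel, v_colour))

texel = (0.8, 0.6, 0.4, 1.0)      # colour read from the texture
v_colour = (1.0, 1.0, 1.0, 0.5)   # interpolated mesh colour (half alpha)
out = tint(texel, v_colour)       # what gl_FragColor would receive
```

So a pure white vertex colour leaves the texture unchanged, while other colours darken or fade it.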
4 Explosion
The first shader I want to go through is one I call the explosion shader. Its purpose is to explode an image. It starts by displaying the image and then splits it up into pieces which move over the screen.
To realise this, we need a mesh. The initial configuration of the mesh should display the image so the triangles are set to tile the image and the texture coordinates are set accordingly.
As time progresses the pieces of the mesh should fly off according to some rule. The obvious way to do this is by moving the vertices, thus this is accomplished as a vertex program and the fragment program is the same as the basic one above.
Here's the vertex shader.
//
// The explosion vertex shader
//
precision highp float;
//This is the current model * view * projection matrix
// Codea sets it automatically
uniform mat4 modelViewProjection;
uniform float time;
uniform float friction;
uniform float factor;
uniform float separation;
lowp vec4 gravity = vec4(0.,1.,0.,0.);
mediump float mtime = max(0.,time)*factor;
//This is the current mesh vertex position, color and tex coord
// Set automatically
attribute vec4 position;
attribute vec4 color;
attribute vec2 texCoord;
// These are vertex buffers: initial velocity of the square,
// angular velocity,
// centre of square
attribute vec4 velocity;
attribute float angvel;
attribute vec2 origin;
attribute float level;
//This is an output variable that will be passed to the fragment shader
varying lowp vec4 vColor;
varying highp vec2 vTexCoord;
// ODE: x'' = friction x' + gravity
// Solution: A exp( friction * time) + B + time*gravity/friction
// Initial conditions:
// A = gravity/(friction*friction) - x'(0)/friction
// B = x(0) - A
void main()
{
//Pass the mesh color to the fragment shader
vColor = color;
vTexCoord = texCoord;
lowp vec4 pos;
mediump float t = mtime + level * separation;
lowp float angle = t*angvel;
highp vec4 A = gravity/(friction*friction) - velocity/friction;
highp vec4 B = vec4(origin,0.,0.) - A;
lowp mat2 rot = mat2(cos(angle), sin(angle), -sin(angle), cos(angle));
pos = (position - vec4(origin,0.,0.));
pos.xy = rot * pos.xy;
pos += exp(t*friction)*A + B + t * gravity/friction;
if (level != 0. && t < 1.) pos = vec4(0.,0.,0.,1.);
//Multiply the vertex position by our combined transform
gl_Position = modelViewProjection * pos;
}
Let's start with the attributes. As well as the usual ones, we have:
attribute vec4 velocity;
attribute float angvel;
attribute vec2 origin;
attribute float level;
Each vertex comes with a velocity, and angular velocity, an “origin", and a level. The last of these is not important (it is used to make it easy to add “trails" to the explosion). These pieces of information are actually “per segment" data so that a segment of the image moves coherently, however we cannot pass in information on a per segment basis so we have to repeat it for each vertex.
The velocity is the initial velocity of the segment, the angular velocity adds a spin, and the origin is the point about which the segment should spin around.
Now let's look at the program itself. It starts by passing the colour and texture coordinates to the fragment shader. This is because we want each segment to display the part of the image to which it was originally associated so we don't change that. It then continues as follows.
lowp vec4 pos;
mediump float t = mtime + level * separation;
lowp float angle = t*angvel;
highp vec4 A = gravity/(friction*friction) - velocity/friction;
highp vec4 B = vec4(origin,0.,0.) - A;
lowp mat2 rot = mat2(cos(angle), sin(angle), -sin(angle), cos(angle));
pos = (position - vec4(origin,0.,0.));
pos.xy = rot * pos.xy;
pos += exp(t*friction)*A + B + t * gravity/friction;
if (level != 0. && t < 1.) pos = vec4(0.,0.,0.,1.);
//Multiply the vertex position by our combined transform
gl_Position = modelViewProjection * pos;
The variable pos will be our actual position of the vertex. The variable t is the time since the explosion (the adjustment by level is to enable easy drawing of the pieces staggered back in time to produce trails).
Using the time since the explosion and the angular velocity, we work out the accumulated rotation and store it as angle. Note that we have to compute the accumulated rotation; we cannot work “step by step". This is a rotation about the origin of the segment.
The position of this origin is found using the solution of a particular ordinary differential equation describing motion under gravity with friction. We need to have a closed form for this as, again, we cannot work “step by step" but have to work out the accumulated displacement each time. The differential equation we use is:
$$\stackrel{\rightharpoonup}{x}''=\alpha {\stackrel{\rightharpoonup}{x}}'+\stackrel{\rightharpoonup}{g}$$ 
where $\alpha $ is the coefficient of friction and $\stackrel{\rightharpoonup}{g}$ is the gravity vector. This has the solution:
$$\stackrel{\rightharpoonup}{A}{e}^{\alpha t}+\stackrel{\rightharpoonup}{B}+\frac{t}{\alpha}\stackrel{\rightharpoonup}{g}$$ 
where $\stackrel{\rightharpoonup}{A},\stackrel{\rightharpoonup}{B}$ are constant vectors that are chosen to ensure that the solution starts with the right initial position and velocity. Knowing these initial conditions, we compute $\stackrel{\rightharpoonup}{A}$ and $\stackrel{\rightharpoonup}{B}$ by the formulae:
$$\begin{array}{rl}\stackrel{\rightharpoonup}{A}& =\frac{1}{{\alpha}^{2}}\stackrel{\rightharpoonup}{g}-\frac{1}{\alpha}{\stackrel{\rightharpoonup}{x}}'(0)\\ \stackrel{\rightharpoonup}{B}& =\stackrel{\rightharpoonup}{x}(0)-\stackrel{\rightharpoonup}{A}\end{array}$$ 
In our program we work out $\stackrel{\rightharpoonup}{A}$ and $\stackrel{\rightharpoonup}{B}$ from the initial position and velocity, which are specified using the attributes origin and velocity. We also need to know the gravity vector and the coefficient of friction; these are specified as uniforms (one could make friction an attribute if one wanted to vary it over the segments).
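The closed-form displacement can be evaluated outside the shader as a sanity check. The following Python sketch plugs made-up values for $\alpha$, the gravity vector, and the initial data into the formulas above, and confirms that the motion starts at the initial position.

```python
import math

# Evaluate x(t) = A e^(alpha t) + B + (t/alpha) g componentwise,
# with A and B computed as in the text. All numbers are made up.
alpha = 0.1                      # coefficient of friction
g = (0.0, 1.0)                   # gravity vector (as in the shader uniform)
x0 = (10.0, 20.0)                # initial position (the segment's origin)
v0 = (3.0, 4.0)                  # initial velocity

A = tuple(gi/(alpha*alpha) - vi/alpha for gi, vi in zip(g, v0))
B = tuple(xi - ai for xi, ai in zip(x0, A))

def position(t):
    return tuple(ai*math.exp(alpha*t) + bi + t*gi/alpha
                 for ai, bi, gi in zip(A, B, g))

start = position(0.0)            # should be x0, since A + B = x(0)
```

Because the formula is evaluated directly from t, any frame can be rendered independently of the previous one, which is exactly what the shader needs.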
To compute the final position of the vertex, we first find its relative position to the origin:
pos = (position - vec4(origin,0.,0.));
(Here we see a conversion from vec2 to vec4, because the explosion is, in this example, two dimensional, but internally shaders use four dimensions for specifying coordinates.)
Then we rotate this relative position (or, again, the first two components of it) using the angle we worked out earlier but converted to a $2\times 2$–matrix.
pos.xy = rot * pos.xy;
Lastly we translate it by the displacement that the origin should have undergone.
pos += exp(t * friction) * A + B + t * gravity/friction;
This is the final position of the vertex. (We ignore the level line in this explanation.)
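The shift-rotate-translate step can be sketched in isolation. This Python illustration rotates a point about a segment origin and translates it back; the angle and points are made-up values (the shader translates by the displaced origin instead, but the structure is the same).

```python
import math

def rotate_about(p, origin, angle):
    """Rotate point p about origin by angle (radians), mirroring the
    shader's 2x2 rotation matrix applied to the relative position."""
    x, y = p[0] - origin[0], p[1] - origin[1]   # relative position
    c, s = math.cos(angle), math.sin(angle)
    rx, ry = c*x - s*y, s*x + c*y               # apply the rotation
    return rx + origin[0], ry + origin[1]       # translate back

# A quarter turn about (1, 1) sends (2, 1) to (1, 2).
p = rotate_about((2.0, 1.0), (1.0, 1.0), math.pi/2)
```

Rotating about the segment's own origin is what makes each piece spin in place while its origin flies along the ballistic path.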
In conclusion, we have moved each vertex of our mesh according to some rule. We have had to work out a formula for that rule that can be computed directly from the time rather than using an Euler step-by-step method. This is irritating as it prevents us from using more complicated (and realistic) dynamical systems, but it is simply a limitation we have to accept. (There are sneaky ways of getting round this, but generally the extra computation needed is not worth the hassle.)
To use this shader we need to provide it with all of the required information. It needs the usual information: vertices, texture coordinates, a texture. We need to specify the friction and each time we call it we need to specify the time. For each vertex we need to specify its velocity, origin, and angular velocity. These will be constant over each segment but nonetheless need to be specified for each vertex.
Specifying uniforms is simple enough:
explosion.shader.friction = .1
Specifying attributes is a little more complicated. Here's code for the velocities (the segments are assumed to be squares, which are made up of two triangles and thus six vertices). It sets all the velocities to the same thing; obviously for a more realistic effect one would vary them over the segments.
vels = explosion:buffer("velocity")
vels:resize(numberOfSquares*6)
for i=1,m do -- m is the number of squares across
for j=1,n do -- n is the number of squares up
for k=1,6 do -- each square is six vertices
vels[6*((j-1)*m + i - 1) + k] = vec4(1,1,0,0)
end
end
end
(Note that I don't claim that this shader is perfect. As I've written it I've spotted various ways in which this shader could be improved, from adding a third dimension to cleaning up the coding.)
The shader and code to use it can be found in my Codea library where it forms a class for exploding an image. I use it in a puzzle program for when the player has succeeded with a task: the screen “explodes" before switching to the next puzzle.
5 The Game of Life
Conway's Game of Life is a classic iteration system. In brief, we have a grid of objects each of which can be in one of two states (“alive" or “dead"). On each iteration, each object changes state according to certain rules. The rules depend on the states of its immediate neighbours (the eight surrounding objects) on the previous run. The rules are as follows:

If currently alive, stay alive if we have two or three alive neighbours otherwise die.

If currently dead, become alive if we have exactly three alive neighbours, otherwise stay dead.
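The two rules translate directly into code. Here is a minimal Python sketch of one iteration on a toroidal grid (wrapping at the edges, as the shader will), reading only the previous generation:

```python
def step(grid):
    """One Game of Life iteration on a 2D list of 0/1 cells, wrapping at
    the edges (a torus). Writes a fresh grid, never the one it reads."""
    rows, cols = len(grid), len(grid[0])
    new = [[0]*cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, with modular wrap-around.
            n = sum(grid[(r+dr) % rows][(c+dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            if grid[r][c] == 1:
                new[r][c] = 1 if n in (2, 3) else 0   # survive on 2 or 3
            else:
                new[r][c] = 1 if n == 3 else 0        # birth on exactly 3
    return new

# A "blinker": three cells in a row oscillate with period two.
blinker = [[0]*5 for _ in range(5)]
for c in (1, 2, 3):
    blinker[2][c] = 1
once = step(blinker)
twice = step(once)
```

The separate `new` grid is the double-buffering the text describes: the previous run is kept intact until the whole step is computed.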
In each iteration we only use information from the previous run so this is well suited to a shader. The only catch is that we need to keep all previous information around until we finish processing: we cannot overwrite the last run until we have finished working out the current states. This restriction actually fits in well with shaders as they have the same restriction.
We do, though, need the information from one run to be made available to the next. The output of a shader is the colour information of pixels. The input is the various vertex attributes and uniforms. We can convert the pixel information to vectors to be fed back as vertices, but this is expensive. Since we can pass an image as a uniform to the fragment shader, this seems the best way to pass the previous run to the shader. Thus the Game of Life is implemented as a fragment shader. The vertex shader is simply the basic one.
Here's the fragment shader.
//
// A Game of Life fragment shader, does the hard work.
//
//This represents the current texture on the mesh
uniform lowp sampler2D texture;
//The interpolated vertex color for this fragment
varying lowp vec4 vColor;
//The interpolated texture coordinate for this fragment
varying highp vec2 vTexCoord;
// width and height of the texture for finding neighbours
uniform highp float width;
uniform highp float height;
lowp float colstep = .05;
highp float w = 1./width;
highp float h = 1./height;
// get the neighbours' states
lowp int neighbours (lowp sampler2D s, highp vec2 p)
{
lowp int alive;
alive = 0;
if (texture2D(s,fract(p + vec2(w,0.))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(w,h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(0.,h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(-w,h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(-w,0.))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(-w,-h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(0.,-h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(w,-h))).g > .5)
alive += 1;
return alive;
}
void main()
{
lowp int count;
lowp vec4 col;
count = neighbours(texture,vTexCoord);
col = texture2D(texture,vTexCoord);
if (col.r > colstep) {
col.r = colstep;
} else {
col.r = 0.;
}
if (col.g > .5) {
// alive
if (count < 2 || count > 3) {
// lonely or overcrowded, kill it
col.g = 0.;
col.r = 1.;
}
} else {
// dead
if (count == 3) {
// born
col.g = 1.;
col.r = 1.;
col.b = 1.;
}
}
//Set the output color to the texture color
gl_FragColor = col;
}
The most important thing to realise about this fragment program is the relationship between pixels and texture coordinates. The texture is an image of a certain width and height. The mesh will be drawn on to another image of the same width and height. This ensures that each pixel under consideration by the shader corresponds exactly to a pixel in the source image. However, the texture coordinates are normalised so that they run from $0$ to $1$ both horizontally and vertically. So when we ask “Is the neighbouring pixel alive or dead?" we need to compute the offset in the normalised coordinates.
If our pixel is the $12$th across and there are $1024$ pixels horizontally, our texture coordinate will be $12/1024$. The next pixel across will have texture coordinate $13/1024=12/1024+1/1024$. Thus we need to know the width and height of our image in order to work out the neighbouring pixels. We pass these as uniforms to the shader.
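The arithmetic is simple enough to check directly; a short Python sketch with a made-up 1024-pixel-wide texture:

```python
# Normalised texture coordinates: pixel i maps to i/width, so one pixel
# to the right is an offset of 1/width in texture coordinates.
width = 1024.0
w = 1.0 / width          # what the shader computes from its width uniform

coord = 12.0 / width     # our pixel, 12 across
right = coord + w        # its right-hand neighbour
```

This is exactly the `w` and `h` that the fragment shader precomputes from its `width` and `height` uniforms.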
There is one additional wrinkle here. Later iPads have retina screens where everything is actually twice the size that it appears to be. So on a retina iPad we have to pass twice the width and height that Codea claims the image to have.
We pass the actual width and height, but what we actually want are their reciprocals. As these are simple constants based on the width and height, we can compute them outside the main function. This avoids recomputing them for every pixel, but note that there are limits on what can be done outside main. My experiments show that conditionals and loops don't work there.
Now, we need to have an encoding scheme. All we really need to record in our texture is a $0$ or $1$ for whether that pixel is alive or dead. We record this in the green channel. We clearly have lots of spare capacity for information so we also record some auxiliary data. In the red channel we record the time since the last change of state. In the blue channel we record whether or not the pixel has ever been alive.
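The encoding scheme can be written down as a pair of helper functions. This is an illustrative Python sketch with invented names; the shader itself just reads and writes the channels directly.

```python
# Pack a cell's state into an (r, g, b) colour and unpack it again.
# g: alive (1.) or dead (0.); r: time since the last change of state
# (set to 1. on a change, fading towards 0.); b: ever been alive.

def encode(alive, time_since_change, ever_alive):
    return (time_since_change,
            1.0 if alive else 0.0,
            1.0 if ever_alive else 0.0)

def decode(colour):
    r, g, b = colour
    return (g > 0.5, r, b > 0.5)

cell = encode(True, 1.0, True)   # a freshly-born cell
state = decode(cell)
```

Testing against 0.5 rather than exact equality mirrors the shader's `col.g > .5`, which is robust to the limited precision of image channels.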
In the fragment shader we have an auxiliary function for computing the number of neighbours.
// get the neighbours' states
lowp int neighbours (lowp sampler2D s, highp vec2 p)
{
lowp int alive;
alive = 0;
if (texture2D(s,fract(p + vec2(w,0.))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(w,h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(0.,h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(-w,h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(-w,0.))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(-w,-h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(0.,-h))).g > .5)
alive += 1;
if (texture2D(s,fract(p + vec2(w,-h))).g > .5)
alive += 1;
return alive;
}
This samples the texture at the eight pixels around the given one. The fract function means that it “wraps around": pixels on the left edge are thought of as neighbours of pixels on the right edge. Technically, this means that this is the Game of Life on a torus.
There's an obvious loop here, but I've unrolled it for optimisation. Loops and conditionals are generally to be avoided if possible. I could implement those conditions mathematically as:
alive += floor(texture2D(s,fract(p + vec2(0.,h))).g + .5);
This would probably be better shader style.
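The trick works because the green channel is effectively 0 or 1, so floor(g + .5) rounds to the nearest integer and reproduces the g > .5 test without a branch. A quick Python check over some sample values:

```python
import math

def branchy(g):
    # The conditional form used in the shader above.
    return 1 if g > 0.5 else 0

def branchless(g):
    # The arithmetic form: floor(g + .5) rounds to the nearest integer.
    return int(math.floor(g + 0.5))

samples = [0.0, 0.1, 0.4, 0.6, 0.9, 1.0]
agree = all(branchy(g) == branchless(g) for g in samples)
```

(The two differ only at exactly g = 0.5, which the 0/1 encoding never produces.)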
In the main program we first count how many neighbours are alive and read our own state. Then we set our output information accordingly. If we're alive (col.g > .5), we need to know if we should die; if so, we do (col.g = 0.) and record the change of state (col.r = 1.). If we're dead, we see if we should be born and adjust accordingly (including col.b = 1. to record the fact that we have at some point been alive). Finally, we set the output colour.
This shader should be written to an image which is then used as the input for the next run. The image itself can be used as the input for a further shader which displays the information in a “prettier" form.
Our shader needs some initial data, so the initial image should be set to something suitable. I use the noise function to do this.