Strange Attractors
Strange attractors describe states that a dynamical system tends toward over time for many initial conditions. Of all the attractors I’ve seen, the Halvorsen attractor is my favorite.
Halvorsen Attractor
The motion of a particle in the Halvorsen attractor is given by the following equations:
$$ \begin{align*} \dot{x} &= -\alpha x-4y-4z-y^2\\ \dot{y} &= -\alpha y-4z-4x-z^2\\ \dot{z} &= -\alpha z-4x-4y-x^2 \end{align*} $$
Here $\alpha$ is a constant; in the demo above it is set to 1.89.
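These equations can be integrated numerically on the CPU as well. Below is a minimal sketch using forward Euler in JavaScript; the step size, step count, and starting point are arbitrary illustrative choices, not values taken from the demo.

```javascript
// Halvorsen system: velocity at a given point, with alpha = 1.89 as in the demo.
const ALPHA = 1.89

function velocity([x, y, z]) {
  return [
    -ALPHA * x - 4 * y - 4 * z - y * y,
    -ALPHA * y - 4 * z - 4 * x - z * z,
    -ALPHA * z - 4 * x - 4 * y - x * x,
  ]
}

// Forward Euler: p(t + dt) ≈ p(t) + v(p(t)) * dt
function step(p, dt) {
  const v = velocity(p)
  return p.map((c, i) => c + v[i] * dt)
}

// Trace a short trajectory from an arbitrary starting point near the origin.
let p = [0.1, 0, 0]
for (let i = 0; i < 500; i++) p = step(p, 0.002)
```

Each GPU particle in the demo performs essentially this update, one step per frame.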
Other Attractors
- Thomas’ cyclically symmetric attractor
- Hadley attractor
- Dadras’ attractor
- Sprott attractor
- Lorenz attractor
The full source for the above demos is available on GitHub.
Shaders
Using WebGL2, we can take advantage of the GPU to process many points in parallel using shader code (GLSL ES). We can use shaders for both computation (GPGPU) and for rendering objects.
In the Halvorsen attractor demo above, we create two WebGL programs: one to compute the particle positions, and one to render the particles (transforming and coloring them).
Vertex Shader
Generating Model Coordinates
Below is the vertex shader for computing the particle’s position.
```glsl
#version 300 es
precision highp float;

uniform float u_Consts[6];
uniform float u_Speed;
uniform sampler2D u_RgbNoise;

in vec3 i_Position;
out vec3 o_Position;
```
The uniform inputs:
- `u_Consts` is an array containing the $\alpha$ parameter used in the Halvorsen equations.
- `u_Speed` is a scalar used to scale the velocity vector.
- `u_RgbNoise` is an RGB texture with random byte values in each color channel.
The `get_velocity()` function is then a straightforward implementation of the Halvorsen equations:
```glsl
vec3 get_velocity()
{
    // u_Consts[0] holds the alpha parameter.
    float alpha = u_Consts[0];
    vec3 velo = vec3(0.);
    velo.x = -alpha * i_Position.x
        - 4. * i_Position.y
        - 4. * i_Position.z
        - i_Position.y * i_Position.y;
    velo.y = -alpha * i_Position.y
        - 4. * i_Position.z
        - 4. * i_Position.x
        - i_Position.z * i_Position.z;
    velo.z = -alpha * i_Position.z
        - 4. * i_Position.x
        - 4. * i_Position.y
        - i_Position.x * i_Position.x;
    return velo;
}
```
Inside the `main()` function, we compute the particle’s new position using the following steps:

- Compute the `pos` vector by adding the particle’s velocity times `u_Speed` to its current position.
- Sample the `u_RgbNoise` texture to get a random `vec3`, and add it to the `pos` vector. This prevents the attractor from converging into lines too quickly.
- Finally, check whether the particle’s position exceeds a length of 25 units; if it does, teleport it back to a random position sampled from `u_RgbNoise`.
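The steps above can be sketched as a CPU-side analogue in plain JavaScript. The noise source and speed value here are placeholders standing in for `u_RgbNoise` and `u_Speed`:

```javascript
const SPEED = 0.005 // placeholder standing in for u_Speed

// Placeholder for sampling u_RgbNoise: three floats in [0, 1).
function sampleNoise() {
  return [Math.random(), Math.random(), Math.random()]
}

function updatePosition(pos, velocity) {
  // 1. Advance the particle along its velocity.
  let next = pos.map((c, i) => c + velocity[i] * SPEED)
  // 2. Add a little noise so particles do not collapse onto lines.
  const noise = sampleNoise()
  next = next.map((c, i) => c + noise[i])
  // 3. Teleport particles that wander past 25 units back to a random spot.
  const len = Math.hypot(next[0], next[1], next[2])
  return len > 25 ? noise : next
}
```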
We generate the `u_RgbNoise` texture on the CPU to take advantage of JavaScript’s `Math.random()` function instead of implementing a PRNG on the GPU. To sample from `u_RgbNoise`, we use the particle’s x and y coordinates modulo 512 to pick a pixel, and read its RGB values as an `ivec3`. As these are byte values, we can divide them by 255 to get a `vec3` of floats in the range [0, 1].
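Generating the noise buffer on the CPU might look like the following sketch (the demo’s actual buffer layout and upload code may differ):

```javascript
// Fill a 512x512 RGB buffer with random bytes on the CPU.
const SIZE = 512
const noiseBytes = new Uint8Array(SIZE * SIZE * 3)
for (let i = 0; i < noiseBytes.length; i++) {
  noiseBytes[i] = Math.floor(Math.random() * 256)
}

// On the GPU, a sampled byte divided by 255 becomes a float in [0, 1].
const asFloat = noiseBytes[0] / 255
```

The buffer would then be uploaded to the GPU as a texture with `gl.texImage2D`.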
```glsl
void main()
{
    // Advance the particle along its velocity.
    vec3 pos = i_Position + get_velocity() * u_Speed;
    // Pick a pseudo-random pixel from the noise texture.
    ivec2 uv = ivec2(int(i_Position[0]) % 512, int(i_Position[1]) % 512);
    vec3 noise = texelFetch(u_RgbNoise, uv, 0).rgb / 255.;
    // Teleport particles that have drifted past 25 units.
    pos = mix(pos, noise, float(length(pos) > 25.));
    // Jitter the position so particles do not collapse onto lines.
    pos += noise;
    o_Position = pos;
}
```
Because we do not want to draw anything to the screen while computing the particle positions, we can simply `discard` every fragment in the frag shader:
```glsl
#version 300 es
precision highp float;

void main()
{
    discard;
}
```
Transforming into NDC
```glsl
#version 300 es
precision highp float;

uniform vec3 u_Angles;
uniform vec3 u_Camera;
uniform float u_Scale;
uniform float u_Ortho[6];
// Matrices computed on the CPU and uploaded as uniforms (used in main below).
uniform mat3 u_Perspective;
uniform mat4 u_Orthographic;

in vec3 i_Position;
```
The uniform inputs:
- `u_Angles` is a vector of Tait-Bryan angles describing the object’s rotation.
- `u_Camera` is the camera’s position in Cartesian coordinates.
- `u_Scale` is a scalar used as the $w$ component of the particle’s homogeneous coordinates.
- `u_Ortho` is a 6-tuple (array) containing the clipping bounds for orthographic projection.
Back in JavaScript, we compute the perspective and orthographic matrices. We do this on the CPU because it is a relatively simple computation. Also, these matrices are the same for every particle, so it makes no sense to compute them $n$ times.
```javascript
function get_perspective() {
  const [cx, cy, cz] = attributes.angles.map(v => Math.cos(v))
  const [sx, sy, sz] = attributes.angles.map(v => Math.sin(v))
  return [
    cy * cz,                cy * sz,                -sy,
    sx * sy * cz - cx * sz, sx * sy * sz + cx * cz, sx * cy,
    cx * sy * cz + sx * sz, cx * sy * sz - sx * cz, cx * cy
  ]
}
```
The `get_perspective()` function computes the product of the following rotation matrices:
$$ \begin{bmatrix} 1 & 0 & 0\\ 0 & \cos(\theta_x) & \sin(\theta_x)\\ 0 & -\sin(\theta_x) & \cos(\theta_x) \end{bmatrix} \begin{bmatrix} \cos(\theta_y) & 0 & -\sin(\theta_y)\\ 0 & 1 & 0\\ \sin(\theta_y) & 0 & \cos(\theta_y) \end{bmatrix} \begin{bmatrix} \cos(\theta_z) & \sin(\theta_z) & 0\\ -\sin(\theta_z) & \cos(\theta_z) & 0\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x\\y\\z \end{bmatrix} $$
Here $\vec{\theta} = \langle\theta_x, \theta_y, \theta_z\rangle$ are Tait-Bryan angles describing the object’s rotation in space; the combined matrix rotates the 3D coordinates into the orientation described by those angles.
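Because each factor is a pure rotation, the combined matrix is orthonormal: multiplying it by its own transpose must give the identity. The sketch below rebuilds the same matrix for arbitrary angles (assuming the same row-major 3×3 layout as `get_perspective()`) and performs that check:

```javascript
// Build the combined rotation matrix for given Tait-Bryan angles (row-major 3x3).
function rotation([ax, ay, az]) {
  const [cx, cy, cz] = [ax, ay, az].map(Math.cos)
  const [sx, sy, sz] = [ax, ay, az].map(Math.sin)
  return [
    cy * cz,                cy * sz,                -sy,
    sx * sy * cz - cx * sz, sx * sy * sz + cx * cz, sx * cy,
    cx * sy * cz + sx * sz, cx * sy * sz - sx * cz, cx * cy,
  ]
}

// Compute M * M^T for a row-major 3x3 matrix.
function timesTranspose(m) {
  const out = []
  for (let r = 0; r < 3; r++)
    for (let c = 0; c < 3; c++)
      out.push(
        m[3 * r] * m[3 * c] +
        m[3 * r + 1] * m[3 * c + 1] +
        m[3 * r + 2] * m[3 * c + 2]
      )
  return out
}
```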
```javascript
function get_orthographic() {
  // [left, right, bottom, top, near, far]
  const [l, r, b, t, n, f] = attributes.ortho
  return [
    2. / (r - l), 0.          , 0.           , -(r + l) / (r - l),
    0.          , 2. / (t - b), 0.           , -(t + b) / (t - b),
    0.          , 0.          , -2. / (f - n), -(f + n) / (f - n),
    0.          , 0.          , 0.           , 1.
  ]
}
```
The `get_orthographic()` function computes the orthographic projection matrix, given `attributes.ortho`, a 6-tuple $(L, R, B, T, N, F)$:
$$ \begin{bmatrix} \frac{2}{R - L} & 0 & 0 & -\frac{R + L}{R - L}\\ 0 & \frac{2}{T - B} & 0 & -\frac{T + B}{T - B}\\ 0 & 0 & \frac{-2}{F - N} & -\frac{F + N}{F - N}\\ 0 & 0 & 0 & 1 \end{bmatrix} $$
The 6-tuple defines clipping planes, and the matrix allows us to transform the particle’s coordinates into clip space.
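As a quick sanity check, a point on the near, lower-left corner of the clipping box should map to NDC $(-1, -1, -1)$. A sketch, assuming the standard matrix above with the camera looking down $-z$ and $w = 1$ (the demo uses `u_Scale` as $w$ instead):

```javascript
// Standard orthographic projection (row-major 4x4) for bounds [l, r, b, t, n, f].
function orthographic([l, r, b, t, n, f]) {
  return [
    2 / (r - l), 0,           0,            -(r + l) / (r - l),
    0,           2 / (t - b), 0,            -(t + b) / (t - b),
    0,           0,           -2 / (f - n), -(f + n) / (f - n),
    0,           0,           0,            1,
  ]
}

// Multiply a row-major 4x4 matrix with a homogeneous point.
function apply(m, v) {
  return [0, 1, 2, 3].map(r =>
    m[4 * r] * v[0] + m[4 * r + 1] * v[1] + m[4 * r + 2] * v[2] + m[4 * r + 3] * v[3]
  )
}

// Near lower-left corner (z = -near, since the camera looks down -z).
const corner = apply(orthographic([-10, 10, -10, 10, 0.1, 100]), [-10, -10, -0.1, 1])
```

Here `corner` works out to $(-1, -1, -1, 1)$, i.e. the corner of the NDC cube.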
```glsl
void main() {
    vec3 rel_position = i_Position - u_Camera;
    gl_Position = u_Orthographic * vec4(u_Perspective * rel_position, u_Scale);
    gl_PointSize = 1.;
}
```
In `main()`, we first compute `rel_position`, the particle’s position vector minus the camera’s position vector. Then we apply the perspective projection matrix to `rel_position` and form homogeneous coordinates by using `u_Scale` as the $w$ component. Finally, we set `gl_Position` to the product of the orthographic projection matrix with this vector.
Frag Shader
```glsl
#version 300 es
precision highp float;

out vec4 o_FragColor;

void main()
{
    vec3 color1 = vec3(1., 0., 0.);
    vec3 color2 = vec3(0., 0., 1.);
    vec3 color = mix(color1, color2, gl_FragCoord[2]);
    o_FragColor = vec4(color, 1.);
}
```
The frag shader is very simple: it computes a linear gradient between red and blue based on the fragment’s depth (`gl_FragCoord.z`).
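GLSL’s `mix(a, b, t)` is just a per-channel linear interpolation, `a * (1 - t) + b * t`. A tiny JavaScript equivalent (the depth value here is illustrative):

```javascript
// Per-channel linear interpolation, matching GLSL's mix(a, b, t).
function mix(a, b, t) {
  return a.map((c, i) => c * (1 - t) + b[i] * t)
}

const color1 = [1, 0, 0] // red
const color2 = [0, 0, 1] // blue
// A fragment halfway through the depth range blends to purple.
const color = mix(color1, color2, 0.5) // → [0.5, 0, 0.5]
```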