Back in 2011–2012, I played around with generating static images from mathematical functions. Each function effectively took an (x,y) coordinate and returned an RGB tuple. (By mathematical functions, I mean deterministic functions that don’t rely on external input and have no side effects. I.e., given the same input parameters, the function will always return the same value, and do nothing other than return that value.) Some scaffolding code called the function repeatedly and performed anti‐aliasing to produce an image. It was fun to see what I could do just by mapping spatial coordinates to color values.
Then earlier this year, I decided to add a time coordinate and make abstract, ambient videos using the same technique. The single-threaded Python code I'd used for still images was too slow to be practical for long videos, so I started from scratch with new C++ code and an improved interface. The video scaffolding code exposes the interface below for subclasses to define what the video will look like. (The use of ThreadState and TimeState does make the functional abstraction a bit less pure, but it's important for performance not to redo some calculations for every sub-pixel.) Once a subclass provides a function that defines what the video looks like in theory, the scaffolding code handles anti-aliasing, motion blur, and rendering to turn it into a finite video that a computer can display.
// Main class to generate a video (or still frame). Subclass this. Use
// ThreadState for any information local to a thread, and TimeState for
// anything local to a single point in time. (There could be multiple
// TimeStates per frame if temporal oversampling is used.)
template <typename ThreadState = NullState, typename TimeState = NullState>
class VideoGenerator : public VideoGeneratorInterface {
  …
 protected:
  // Get the color value at a single point in space and time. (x,y) are spatial
  // coordinates, with the smaller dimension in the range [-1,1]. For a 2:1
  // aspect ratio, x would be in the range [-2,2] and y in [-1,1]; for 1:2, x
  // would be in [-1,1] and y in [-2,2]. t is in seconds.
  virtual Rgb PointValue(
      const ThreadState* thread_state, const TimeState* time_state,
      float x, float y, double t) = 0;
  // Override this if the per-thread state shouldn't be null.
  virtual ::std::unique_ptr<ThreadState> GetThreadState() {
    return nullptr;
  }
  // Override this if the per-point-in-time state shouldn't be null.
  virtual ::std::unique_ptr<TimeState> GetTimeState(
      const ThreadState* thread_state, double t) {
    return nullptr;
  }
  …
};
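To give a concrete idea of how the interface gets used, here's a minimal sketch of a made-up subclass that ignores the optional per-thread and per-time state and just fades the whole frame over time with a gentle diagonal gradient. (This is illustrative only; it assumes an Rgb that can be constructed from three floats in [0,1], which may not match the real type exactly.)

#include <cmath>

// A made-up example: both state types default to NullState, so there's nothing
// to set up and PointValue is a pure function of (x, y, t).
class PulseGenerator : public VideoGenerator<> {
 protected:
  Rgb PointValue(const NullState* /*thread_state*/,
                 const NullState* /*time_state*/,
                 float x, float y, double t) override {
    // Slow sinusoidal pulse in brightness, with a period of roughly 10 seconds.
    float pulse = 0.5f + 0.5f * static_cast<float>(std::sin(t * 0.63));
    // x and y are at most a few units, so this is a gentle diagonal gradient.
    float shade = 0.5f + 0.15f * (x + y);
    return Rgb(pulse * shade, 0.3f * pulse, (1.0f - pulse) * shade);
  }
};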
The first video I made was directly inspired by the last still image I'd made with the technique, and used a very similar function. For each time value t, three-dimensional Perlin noise maps (x,y,t) to a value that I use as the elevation of the point (x,y) on a topographic map. The elevation values are then used to draw contour lines, with hue denoting the elevation of each line and lightness denoting how close each point is to its nearest contour line. The code is only 43 lines long including boilerplate, and produces this 12-hour-long video of gently moving colorful curves:
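The full 43 lines aren't reproduced here, but the heart of it is a PointValue roughly along these lines. PerlinNoise3 and HslToRgb below are hypothetical stand-ins for whatever noise and HSL-to-RGB helpers the real code uses, and the constants are made up rather than taken from it:

#include <cmath>

float PerlinNoise3(float x, float y, float z);               // assumed: returns roughly [-1,1]
Rgb HslToRgb(float hue, float saturation, float lightness);  // assumed: all in [0,1]

class ContourGenerator : public VideoGenerator<> {
 protected:
  Rgb PointValue(const NullState*, const NullState*,
                 float x, float y, double t) override {
    // Elevation at this point, drifting slowly as t advances.
    float elevation = PerlinNoise3(x, y, static_cast<float>(t) * 0.01f);

    // Contour lines sit at regular elevation intervals.
    const float kBandHeight = 0.1f;
    float level = elevation / kBandHeight;
    float nearest = std::round(level);           // index of the nearest contour line
    float to_line = std::fabs(level - nearest);  // distance to it, in [0, 0.5]

    // Hue tracks the contour's elevation; lightness peaks on the line itself.
    float hue = std::fmod(nearest * 0.07f + 16.0f, 1.0f);
    float lightness = std::fmax(0.0f, 0.6f - 4.0f * to_line);
    return HslToRgb(hue, 1.0f, lightness);
  }
};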
Next, I played around with interference patterns similar to moiré patterns, to try to generate something even more abstract than an abstracted topographic map. This code uses a set of overlapping, moving blinds to generate patterns of light and dark. The blinds use Perlin noise to independently vary their size and rotation, and to move side to side. Separately, the hue for the entire screen varies over time. Warning: the end result might cause motion sickness in some people. I tried my best to avoid it, but I’m not sure how well I succeeded.
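Sketching the idea only (again with hypothetical PerlinNoise2 and HslToRgb helpers and made-up constants, not the actual code): each blind layer gets its own noise-driven rotation, stripe width, and lateral offset, and the layers' shadows multiply together.

#include <cmath>

float PerlinNoise2(float a, float b);                        // assumed: returns roughly [-1,1]
Rgb HslToRgb(float hue, float saturation, float lightness);  // assumed: all in [0,1]

class BlindsGenerator : public VideoGenerator<> {
 protected:
  Rgb PointValue(const NullState*, const NullState*,
                 float x, float y, double t) override {
    constexpr float kPi = 3.14159265f;
    const float time = static_cast<float>(t) * 0.02f;
    float light = 1.0f;
    for (int i = 0; i < 3; ++i) {
      // Each layer's rotation, stripe width, and lateral offset drift
      // independently; the second noise argument just separates the layers.
      float angle  = kPi * PerlinNoise2(time, i * 7.3f);
      float period = 0.25f + 0.15f * PerlinNoise2(time, i * 7.3f + 50.0f);
      float offset = PerlinNoise2(time, i * 7.3f + 100.0f);

      // Project the point onto the blind's axis and shade it periodically.
      float u = x * std::cos(angle) + y * std::sin(angle) + offset;
      float stripe = 0.5f + 0.5f * std::sin(2.0f * kPi * u / period);
      light *= stripe;  // overlapping blinds multiply their shadows
    }
    // One slowly cycling hue for the whole screen.
    float hue = std::fmod(static_cast<float>(t) * 0.002f, 1.0f);
    return HslToRgb(hue, 0.8f, 0.7f * light);
  }
};

Since angle, period, and offset depend only on t, in the real code they'd presumably be computed once per point in time via a TimeState rather than recomputed for every sub-pixel as in this sketch.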
So far, I’m finding the results of functional video generation interesting, though I do think that more traditional computer animation is a lot more versatile.