List of abbreviations
On style and subject
On political art
OpenGL live demos
Why OpenGL in 2016?
Examples / tutorials
Modern vs. legacy OpenGL
Limited canvas size and resolution
Nonlinear functions: fragment shaders and inverses
Camera loop / continuous iteration
Alpha, transparency and blend
Colour manipulation and creation / non-iterated demos
Initial sets / image sources
Hardware and performance notes
Vertex shaders: pointillist graphing with forward functions in OpenGL
Problems and solutions
3D functions and iterations
Open source software
What follows are informal and incomplete notes on how I make my art. This is a work in progress started on 2018-07-18.
IFS — iterated function system
In 2017, I was planning to submit a paper on my OpenGL methods for publication. One of the most memorable reviews I got:
"Artists need not learn how to do GPU programming. Higher level languages and programming environments shield the artist from this tedious lower level."

Another reviewer also mentioned that tools like Processing may use OpenGL behind the scenes. Well, I did have a look at Processing when I was starting my live demos in 2016, and it could not do render-to-texture, i.e. no iterated GPU work.
In general, I absolutely believe that artists should make their own tools whenever feasible. Past masters were known to make their own dyes, for example. To me it is all about making your own thing, vs. turning the knobs on a system someone else designed.
Moreover, I want to understand what is going on in my work. I sometimes get interesting results from coding mistakes, and then I want to know what actually happened so I can make better use of that effect. This really is a more general open-source argument, especially pertinent to scientific code: can you explain how your algorithm works?
Another argument against ready-made software is that every tool implies certain preferred processes and workflows. So you become accustomed to doing things in certain "professional" ways instead of thinking for yourself. It is much more interesting to build your own tools as you work; let the work shape the tools, not the other way around. Your tools become part of your art, and there is an ongoing negotiation between the tech side and the art. You will probably get stuck in some of your own ways, but at least they will be ways you came up with yourself, instead of doing the same thing as all the other "professionals".
Something like Processing might give a good start for algorithmic art, but with such "user-friendly" systems there are always limitations that you'll run into sooner or later. My Python-gnuplot setup was easy enough for 12-16 year olds, and we went straight into real code and real commandline tools.
I don't think programming should be hard for its own sake. After all, I use Python for my demos, which I find much simpler and easier than the default Java used by Processing (though sensibly enough, it can also use Python).
I am interested in mathematics, and programming is my way of doing and understanding math. I try to find new ways of portraying "old" math, and keep the math simple enough to show through the art. While computers are all about computing, not all computer-generated art deserves to be called math art. There is math behind flashy, photorealistic effects, but one can choose to go math-first or flashy-first.
A key element in most math art is iteration: repeating a simple process over and over. The result will not be as simple, but it usually retains some innate harmony of the simple process. In music, harmony stems from simple ratios of frequencies; iterated function systems often exhibit the same ratios repeated on many levels, which I believe contributes to a sense of visual harmony. Conversely, flashy-first approaches often combine multiple different techniques, and the esthetics come more directly from the artist, rather than letting the math speak for itself.
My math-first philosophy is sometimes at odds with the desire to finish well-defined works of art. This is one reason why I prefer live visual demos: they can be used together with music and other forms of performance for a more "artistic" whole, while staying unfinished themselves.
Live programming also facilitates realtime control in response to other performers and audience reactions. But I also have more prosaic reasons to avoid fixed works. Fine detail and pointillist graininess is often ruined by video compression, so viewers get a second-rate version of what I've intended. Finally, there's good old-fashioned programming pride: I can do this in realtime, while limiting myself to last decade's GPU technology :-j
There is a lot I could say about political art in general, little of which would be endearing. But my math-art has a simple and somewhat unpolitical message: math is art, a creative human endeavour like nothing else.
More specifically, I am concerned about the divide between the two cultures, arts vs. sciences. It is hard to make informed decisions on your future when you are told to choose between the soft, cuddly, fuzzy and spiritual arts, or the soulless and tedious but lucrative world of math and science.
For schoolchildren and young students, image is everything. If you're a creative type and also good at math, it can be hard to choose a math-strong career when everyone around you thinks math is just an endless drill of numbers upon numbers. Of course, a lot of the "math" taught in schools is arithmetic number-crunching, so it's easy to get the wrong idea. But in order to do new math or science, one needs to combine a bit of that school-like rigour with a heap of creativity.
In fact, during my days as a math and science teacher, I was quite concerned about the image of scientific careers, and I aimed to keep things exciting. I tried to exemplify someone who can wear flashy clothes and have fun while being a real scientist. I also wondered if I could focus on the flashy demonstrations and experiments as a career, rather than spending my energy on all that basic teaching. In a sense, I'm now doing something like that with my math art.
I began making art with iterated function systems in June 2015. I was taking the course "Johdatus fraktaaligeometriaan" (Introduction to fractal geometry) by Antti Käenmäki at the University of Jyväskylä. One night I decided to write a script to check some of my work visually, and things grew from there. My approach is outlined in my Bridges 2016 paper (Colour version).
I continue to use and develop that method, with the calculations done in Julia and the results presented in gnuplot. These are tied together with Bash scripts, whose importance is not to be underestimated. For example, to render animations, these scripts distribute the work to different CPUs on different machines.
I started hacking together my live demo system in March 2016, as I was preparing my first live show for Yläkaupungin yö. I already had scripts where the Julia-gnuplot approach generated new pictures with random parameters every few seconds for a slide show, and a number of pre-made animations. I had realized there were I/O bottlenecks with my setup, so the next version would have to work within a GPU.
OpenGL was not the obvious starting point, given the more flexible and powerful approaches that had already been around for a few years: OpenCL and CUDA (though these would generally use OpenGL to present the picture). The new unified framework Vulkan was also entering the mainstream.
However, my primary workstation/laptop did not even have OpenCL. While I did have newer/stronger machines around, I was already envisioning exhibitions with multiple small/cheap machines, so I wanted to keep the hardware requirements low. There was also a ton of literature, tutorials and examples on OpenGL, and I knew it could do fairly complex (pun intended) and flexible computations via shaders. (In fact, people were using OpenGL for non-graphics computing in the very early 2000s. These ideas were eventually formalized and developed into CUDA and OpenCL.)
Using Python's PyOpenGL was obvious early on, as I didn't need much power on the CPU side, and I knew Python well. Julia didn't have a complete OpenGL wrapper at the time, while Python has many other useful libraries I would need later, such as Numpy.
The choice of OpenGL had a number of effects on my art, as I moved from the math of abstract points into an image-manipulation approach. There were a lot of new limitations and quirks, which often turned out to make interesting results.
Iterating shader passes on an image requires render-to-texture, and Leovt's example turned out to be a good starting point. Pyglet also turned out better than Pygame for my purposes; both provide windowing and keyboard controls, among other things.
Another important example was glslmandelbrot by Jonas Wagner. Doing ETG was not directly relevant to my art, but this was a good starting point for using and controlling shaders. In fact, I soon extended this into my own general ETG system, and integrated it with my IFS scripts (early example).
A lot of the examples/tutorials found online are still using Legacy OpenGL, i.e. pre-3.0 or pre-2008. This can include fairly recent ones, presumably because the author learned OpenGL in the olden days. Functions such as glColor, glVertex, glMatrixMode, glRotatef are tell-tales of the old fixed-function model. The code will usually run on regular computers due to compatibility layers, but for learning purposes it should be avoided. Of course, the examples may still have useful ideas, but the code itself should be updated.
Modern OpenGL may seem more involved because tools such as projection matrices are not ready-made. One reason for this is to make the GPU a more general processor, and projections of 3D scenes are just one of the ways you can use it. I like putting all the GPU power into my 2D works, without any unused 3D engines hanging about. If you want 3D you need to write some code.
The "canvas" in OpenGL is defined by coordinates of either [0, 1]² (texture lookup) or [-1, 1]² (target position). I usually scale these in my shaders so as to do math in [-2, 2]² or [-10, 10]² or whatever suits the work. In any case, the ranges must be fixed at some point, and there is no simple way to auto-scale them depending on the image. Live demos usually involve randomly varying parameters, so you need extra room for the picture to move around in, but not too much, as that would leave things small with huge margins.
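The scaling between canvas and math coordinates is just a linear change of variables. Here is a minimal sketch in plain Python; in the actual demos this is a one-line scale/offset in a shader, and the function names here are made up for illustration:

```python
# Map OpenGL texture coordinates in [0, 1]^2 to a "math" box such as
# [-2, 2]^2, and back again. Illustrative only; in a shader this is a
# single scale/offset expression.

def tex_to_math(u, v, half=2.0):
    """[0, 1]^2 -> [-half, half]^2"""
    return (2.0 * half * u - half, 2.0 * half * v - half)

def math_to_tex(x, y, half=2.0):
    """[-half, half]^2 -> [0, 1]^2"""
    return ((x + half) / (2.0 * half), (y + half) / (2.0 * half))
```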
Resolution becomes crucial in iterated OpenGL, because fine initial points will generally end up more or less blurred. Everything must be quantized into the M×N grid at every iteration, and this often means interpolation blur. (Though the nearest-point approach can sometimes make neat retro/glitch effects.) For higher quality, I sometimes use framebuffers (FBs) larger than the final screen.
In contrast, the mathematical points in my Julia-gnuplot approach have practically unlimited range and resolution, and gnuplot can auto-scale its output to fit the image nicely. But even then I often use larger-than-needed plotting resolutions to smooth things out, and I sometimes need to set my own ranges. (In any math problem, one should generally keep high precision and only round at the end.)
Linear functions can be realized in basic OpenGL methods without shader math. In fact, this is how I started — by noting that a linear function means taking a photo and moving it around. Then you can take a photo of the result and you have iteration.
Such linear mappings are defined by vertex placement. (Technically, these can be somewhat nonlinear, in case the target shape is not a square/rectangle/parallelogram, depending on your definition of linearity.) For starters, the identity mapping:

(0, 0) -> (-1, -1)
(1, 0) -> (1, -1)
(1, 1) -> (1, 1)
(0, 1) -> (-1, 1)

These are the corresponding texture and target coordinates of the SW, SE, NE and NW corners, in this order (recall the above canvas limits). The following mapping would rotate by -pi/4 and multiply by 1/sqrt(2) while keeping the image centred:

(0, 0) -> (-1, 0)
(1, 0) -> (0, -1)
(1, 1) -> (1, 0)
(0, 1) -> (0, 1)

In other words, I take the corners of the texture and place them somewhere on the target FB (generally within its corners; anything outside will be cut out). This example is illustrated in the picture, fox art courtesy of Aino Martiskainen.
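For illustration, the corner placement can be computed from a 2x2 matrix. The following plain-Python sketch (not my actual demo code; names are made up) reproduces the rotate-by-minus-pi/4, scale-by-1/sqrt(2) example:

```python
import math

# Target-space corners of the identity mapping, in SW, SE, NE, NW order.
CORNERS = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]

def map_corners(a, b, c, d, tx=0.0, ty=0.0):
    """Apply the linear map (x, y) -> (a*x + b*y + tx, c*x + d*y + ty)
    to the four identity corners, giving the vertex positions one would
    feed to OpenGL."""
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in CORNERS]

# Rotation by -pi/4 combined with scaling by 1/sqrt(2):
s = 1.0 / math.sqrt(2.0)
th = -math.pi / 4.0
corners = map_corners(s * math.cos(th), -s * math.sin(th),
                      s * math.sin(th),  s * math.cos(th))
# corners is approximately [(-1, 0), (0, -1), (1, 0), (0, 1)]
```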
The basic photo-collage operations of rotation and translation, etc., cannot capture nonlinear transformations. Likewise, vertex positioning would fail to reproduce such mappings. The solution is found in shader programs, but there is a slight complication: the functions must be defined in the inverse direction.
Modern OpenGL uses shader programs all the time, but for linear mappings they are simple pass-through boilerplate. For each output pixel, the fragment shader decides the colour, often by looking up an input pixel whose position depends on the output position. For graphics processing it is a sensible direction: you only calculate what you'll see.
This stage provides a kind of workaround for the limited canvas size: if your lookup position is beyond 0 or 1, you can wrap it around (mod 1 or a mirroring version). This can be done with OpenGL settings, no need to change your shader code. It can make interestingly repeating, non-periodic patterns. (For wraparound in a linear IFS, write the linear function in a shader.)
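The two wrap modes can be sketched on the CPU side like this; plain Python approximating the behaviour of OpenGL's repeat and mirrored-repeat texture settings (function names are mine):

```python
def wrap_repeat(u):
    """GL_REPEAT-style wrap: take the lookup coordinate mod 1."""
    return u % 1.0

def wrap_mirror(u):
    """GL_MIRRORED_REPEAT-style wrap: reflect every other period,
    so the pattern tiles without visible seams."""
    u = u % 2.0
    return 2.0 - u if u > 1.0 else u
```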
Inverting functions is not always feasible. In some cases the inverse direction actually makes things easier, for instance with Julia sets, which keep the same form as they have in ETG. In other cases you get nice surprises as you try familiar functions in the inverse direction.
At best, you will learn to think about functions in this lookup direction more naturally and take advantage of it. Mathematicians will find themselves right at home; preimages of sets are at the heart of topology and measure theory, for example.
I have now unified the linear and nonlinear approaches, since a function can often be composed with a linear outer function, and handling that part by vertex placement makes things easier on the GPU.
As a simple example, the Julia-set polynomial z² + c has two parameters: Re(c) and Im(c). The parameter space is a 2D plane. In practice, one would use a box just around the Mandelbrot set to get nice Julia sets. (This video shows one such path along with its Julias, albeit not very box-like.)
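For reference, the forward (ETG-style) iteration of z² + c takes only a few lines of Python; this is a generic escape-time sketch, not my demo code:

```python
def escape_time(z, c, max_iter=100, radius=2.0):
    """Number of iterations of z -> z*z + c before |z| exceeds radius,
    or max_iter if it never escapes. Starting from z = 0 this samples
    the Mandelbrot set; with fixed c and varying z it samples the
    Julia set of c."""
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = z * z + c
    return max_iter

# c = 0 never escapes; c = 1 escapes after a few iterations.
```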
The total n parameters of all functions can be regarded as a point in an n-dimensional box. To make my demos alive, I fly this point around the box using a kind of random walk. This post on my Facebook page elaborates a little, with another video.
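A minimal version of such a box-bounded random walk can be sketched in plain Python with reflecting walls; the step size and the reflection rule here are illustrative assumptions, not my exact scheme:

```python
import random

def walk_step(point, lo, hi, step=0.05, rng=random):
    """One step of a reflecting random walk inside the box [lo, hi]^n.
    Each coordinate moves by a uniform perturbation and bounces off
    the box walls, so the parameter point keeps drifting forever
    without leaving the box."""
    new = []
    for x in point:
        x += rng.uniform(-step, step)
        if x < lo:
            x = 2 * lo - x      # reflect off the lower wall
        elif x > hi:
            x = 2 * hi - x      # reflect off the upper wall
        new.append(x)
    return new
```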
One key idea in my Bridges 2016 paper was overlaying the stages of iteration into a single graph, so that was an important aim with OpenGL too. It was relatively straightforward, once every iteration was given its own FB. The pictures are white on transparent during iterations, and they are coloured by multiplication upon the overlay stage.
This method is relatively heavy due to the number of iterations per displayed frame. First, there are the individual iterations, typically 10. In addition, the overlay step involves one rendering per visible stage, so there are often 15...20 renders per frame, or about 1000 renders/sec with typical framerates.
An alternative method draws from real-life video loops, which are nice to play with if you have a video camera connected to a display. In IFS parlance, it would be a single linear function, so not particularly interesting. Adding multiple displays that show the same feed makes a proper IFS. The camera needs to see all of them at least to some extent; the background visible outside/between the displays will provide the light that circulates through the system.
In my IFS setup, this just means rendering the iterated images back onto the original. I use a cyclical buffer of several FBs to simulate/exaggerate the physical delay. As in the real setup, this becomes interesting as the functions themselves change during the iterations. Sometimes I leave out the delay buffer to simulate a continuous flow process (example).
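The delay buffer is essentially a ring buffer: render into the newest slot, feed the oldest back as input. Here is a toy Python sketch of the idea, with plain objects standing in for FBs (the class name and interface are made up for illustration):

```python
from collections import deque

class DelayLoop:
    """Cyclic buffer simulating the physical delay of a camera loop.
    Frames go in at one end; once the buffer is full, each new frame
    pushes out the oldest one, which is fed back into the system."""
    def __init__(self, n_frames):
        self.buf = deque(maxlen=n_frames)

    def feed(self, frame):
        # The oldest frame is about to be dropped by the append below.
        out = self.buf[0] if len(self.buf) == self.buf.maxlen else None
        self.buf.append(frame)
        return out  # the delayed frame, or None while still filling up
```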
Performance-wise, the Camtinuum™ is particularly nice as there is only one iteration per frame. This is easily noticed on slower integrated GPUs, which would choke on the corresponding multi-stage demo.
To simulate proper IFS graphs, blend is essential. The images of each function must be overlaid so that the (transparent) background of one doesn't hide other images. Sometimes the usual "alpha blend" is not the best approach, and others may give interesting results. Blend-less hard overlay naturally simulates multi-display camera feedback (unless you have those transparent displays from SF movies). Max blend is good for simulating CRT afterglow, albeit that would be a separate single-function, single-stage postproc operation.
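The two blend equations mentioned here are easy to state per pixel. A plain-Python sketch, operating on single RGB tuples rather than GPU buffers:

```python
def alpha_blend(src, dst, alpha):
    """Standard 'over' blend per channel: src*alpha + dst*(1 - alpha)."""
    return tuple(s * alpha + d * (1.0 - alpha) for s, d in zip(src, dst))

def max_blend(src, dst):
    """Max blend (per-channel maximum), which can mimic CRT afterglow:
    a bright pixel persists until something brighter replaces it."""
    return tuple(max(s, d) for s, d in zip(src, dst))
```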
The gradient effect from my Bridges 2016 paper can be simulated by varying the alpha levels within functions, depending on their derivatives. It is often hard to get nice results so I use it quite sparingly (example with both Julia-gnuplot and OpenGL methods on similar functions).
In the stages overlay, alpha levels can be set separately for each iterate to get translucent colour mixing. This can be used as a rudimentary simulation of the pointillist effect, where the final iterates have higher point densities due to being smaller.
While we're doing image processing, functions can alter colour as well as position of pixels. This is a huge difference to the basic IFS math of colourless points. Frankly, I have not wandered very far in that direction; my colour processing endeavours have been limited to single-pass demos such as these two. Even this level has a lot to explore.
My Pointillist paper has a section on different initial sets, and that applies to OpenGL too. The basic initial sets are solid shapes instead of sets of separate points. Random pointsets can be simulated but they often end up blurry due to the usual quantization and interpolation. Thus I sometimes use large, blocky "points" for a kind of retro pixel look.
As noted above, pixels in OpenGL are not just points but they also carry colour, so full-colour images can be used as initial sets. These are best used with single or 2-3 iterations. Pyglet has simple ways to load image files into textures.
Live video input is the next level, and it makes a nice proof that these things do run in realtime. It is a little more involved: I use OpenCV with a bit of GStreamer to read cameras and video files, then pygarrayimage for fast texture transfer.
Most of my demos run fairly smoothly on integrated Intel GPUs from early 2010s such as HD 3000, albeit in resolutions closer to VGA than Full HD. For serious work on serious resolutions, I've found that entry-level discrete GPUs such as Nvidia GTX 750 suffice quite well; for the iterations in IFS art, dedicated VRAM makes a big difference. Intel GPUs also have outright bugs in their trigonometric functions, and they lack some nice OpenGL extensions.
So the promises/expectations of low hardware requirements have been met, at least relative to current laptops or desktops. But as for the cheap/small exhibition hardware, machines like the Raspberry Pi are unlikely to have the oomph for iterated demos. This is simply from looking at GPU FLOPS, and they have even slower video memory than integrated Intels.
In many simple cases, visually similar results can be achieved with much lower computational demands. However, this needs more math/code for each function and initial set. My framework is all about flexibility: plug in any function and any initial set/picture/video, and let the computer do the hard work. This, to me, is the essence of algorithms: repeating simple processes for a complex result.
[2018-09-13] My math-art holy grail would be doing everything in my Julia-gnuplot framework in live animation. A lot of it would be possible using OpenCL and related newer technologies, from which I have shied away so far. One reason is my nice gnuplot setup, which I would have to recreate to some extent, another is lower-end hardware compatibility. But last week I started taking a serious look at this.
After finding some promising OpenCL+GL examples, I came across compute shaders, which looked even nicer for my purposes. They live within OpenGL but provide a lot of the general compute capabilities. However, they require OpenGL 4.3, which makes them relatively demanding of hardware and drivers, not unlike OpenCL.
Fortunately, after a few days of reading around, I stumbled upon transform feedback. This tutorial and its Python transliteration turned out particularly helpful. Available since OpenGL 3.0, it turned out to be the key ingredient for my live pointillist IFS art. Until then, I had been aware of vertex shaders for manipulating the placement of points in the forward sense, but they did not seem useful, because the final coordinates would be lost; the result would be an image, which can only be iterated by fragment shaders and linear transforms. But transform feedback can retrieve the point coordinates for further processing.
Strictly speaking, all kinds of iteration can be done without the coordinate feedback. But my overlay method requires the rendering of different iterates as images, and then iterating further from those coordinates. To me, transform feedback is an obvious solution that avoids duplicate work.
After the fragment shader demos, the vertex approach represents yet another exciting paradigm shift for my work. On the other hand, it makes a full circle back into my pointillist origins. In some ways, the earlier OpenGL demos now seem like a detour; why didn't I realize the vertex approach to begin with? But I guess it was necessary for me to learn more OpenGL first, and in the process find a completely new way of doing math art besides pointillism :)
In the fragment shader demos, there's no practical way to autoscale/zoom based on the content. This is one thing gnuplot could do automatically, and I generally need to set the ranges in demos by hand. However, the vertex feedback mechanism provides a way to gather statistics on the coordinates, which can then be used to adjust the ranges. The GPU->CPU transfer does incur a cost on the framerate, but it is no show stopper.
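One simple way to turn the gathered coordinates into display ranges is a clipped min/max with some padding. This plain-Python sketch is illustrative, not my exact statistics:

```python
def auto_range(values, clip=0.01, margin=0.1):
    """Pick a display range from a list of coordinates: drop the extreme
    `clip` fraction at each end, then pad by `margin` of the remaining
    span. Clipping keeps a few runaway points from dwarfing the rest
    of the picture."""
    vs = sorted(values)
    n = len(vs)
    k = int(n * clip)
    lo, hi = vs[k], vs[n - 1 - k]
    pad = (hi - lo) * margin
    return lo - pad, hi + pad
```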
The magic of pointillism, the way I use the word, comes about when individual pixels are too small to resolve. With regular computer monitors, this means rendering to temporary FBs considerably larger than the screen, generally at least 2x linear resolution. Naive downscalers tend to suck for pointillist images, so custom filters such as Gaussian blur can be useful, or at least use mipmaps. On the positive side, there's no need to iterate on the FBs, so relative to the sizes, the vertex approach is often faster.
In the image-manipulation approach, the outputs of multiple functions are readily combined by overlaying them, usually with some blend options. In the vertex approach, this would be less straightforward, and it would mean bigger arrays after each iteration. However, in my Julia code I already make heavy use of the random-game scheme: each iteration of the IFS is an n->n mapping, regardless of the number of functions. In fact, this is necessary for my colour overlay method. So in the end, vertex arrays of constant size work very nicely for me.
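The random-game scheme itself is tiny; here is a plain-Python toy version with a one-dimensional IFS (the real thing runs per vertex on the GPU):

```python
import random

def random_game_step(points, funcs, rng=random):
    """One IFS iteration in the 'random game' style: every point is
    moved by one randomly chosen function, so the array stays the same
    size no matter how many functions the system has."""
    return [rng.choice(funcs)(p) for p in points]

# Toy example: two contractions of the unit interval.
funcs = [lambda x: x / 2.0, lambda x: x / 2.0 + 0.5]
pts = [random.random() for _ in range(1000)]
for _ in range(20):
    pts = random_game_step(pts, funcs)
# pts now approximates the attractor (here the whole interval [0, 1]).
```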
[2018-09-29] The image-manipulation technique is not readily extensible from 2D to 3D. AFAIK, render targets are 2D, so render-to-3D-texture must be done one 2D slice at a time. In addition, such 2D renders cannot access the entirety of a 3D source, only the visible portion.
However, the vertex shader approach lends itself to any dimensionality of space, or at least 3 for regular display pipelines. After reading some pointers on the requisite transform matrices, the upgrade to 3D was pretty simple (it helps if you know matrix math already :). Here's an early example using linear functions.
One reason why I haven't been too keen to adopt 3D is my use of complex numbers, which are fundamentally 2D. But the 3D framework can be used for some nice visualizations of these as well. For instance, the third dimension can be used as a parameter for the 2D function. Closer to my math-art origins is this 3D stacking of consecutive iterations. The stacking idea also works for the image-processing approach; it's just an alternative for the final overlay, the 2D iteration process remains the same. (In fact, such 3D stackings were in my mind when I first started the OpenGL demos, but it didn't seem worth learning when I couldn't do full 3D iterations.)
On a more subjective note, I think 3D is overrated and misunderstood in several ways. For starters, think of the cubists who combined different perspectives into single 2D works. I sometimes get a sense of depth from my 2D iterated scenes, though not necessarily the orthodox Euclidean kind. Fortunately, modern OpenGL doesn't force you to use regular geometries or perspectives, you can always roll your own transformations.
[2018-12-29] Geometry shaders are essentially a turbocharged version of vertex shaders. Instead of the one-to-one mapping of vertex shaders, they can output multiple vertices from a single input. This enables moving more work from CPU to GPU, as well as reducing CPU-GPU traffic. So far I have used them for two main purposes:
Starting with Linux in 1999, I use Free/Open source software almost exclusively (Nvidia Linux drivers being the main exception). I have also contributed to other projects and released my own, with the prominent examples on my Github account. That said, I have no plans to release my math-art code. It is an ongoing personal art project.
In open source, it is best to release projects that are
Still, I have used GPL3 examples, so if I ever have a good reason to release this code, it will be open source under the GPL3. In the meantime, please enjoy this little overview, may you be inspired in your own art :)
In 2017 I taught a workshop on fractal art at a local school, for 12-16 year olds. The course material contains simplified Python scripts for my pointillist method, along with the gnuplot files and some instructions to get you started. Please note the above support considerations — if this works for you, that's great, but don't expect free support.