
Faster Fractal Flame Rendering

My astute readers may note that I have written a more complete description of the fractal flame algorithm itself in another article. This article is about improvements to the general process used by fractal flame rendering software.

Preface

Having been creating fractals for a number of years, I have grown considerably interested in making software that allows me to create these structures. Setting aside the CPU-versus-GPU question, there are a couple of common approaches, and I have had the pleasure of trying them all. After toying with various software and examining their inner mechanisms, I’m now familiar with how each of these programs renders fractals.

Addiction to creating fractals can be a blessing and a curse. On the one hand, you are creating the most liberal art in the universe – pure color and shape – without so much as a single relation to a controversial topic. On the other hand, the artwork never provides complete gratification. It drives you to want to see more and more (and spend hours on end rendering it). The fine details are the main attraction and the source of endless entertainment.

There is enormous potential in fractal design, and I’d like to see it manifested. However, fractal creation can be inherently slow or spotty in appearance depending on the type of fractal being rendered and the software used. Out of disappointment, I’ve pondered the details of the process and how to improve it. What follows in this article is an overview of those thoughts and an analysis of some of the solutions I’ve been dreaming up.

Define: Fractal and Fractal Flame

Fractals are complex shapes. They have an interior and an exterior but no definitive edge or ending point. No matter how closely you zoom in on them, it is impossible to identify the “end”. However, every coordinate in space is either inside or outside of the fractal. Consequently, fractals have a kind of fractional dimension unlike anything else in the world.

Authentic fractals are mimicked by real things in life such as clouds, trees, and bacterial colonies, but all of these have definitive edges at some zoom level.

Certain fractals are visualized by sampling coordinates in space and checking whether each coordinate is inside or outside the fractal. In such cases, the apparent edge is determined by the granularity of the sampling.

Other fractals are created by iterating a simple shape over and over again. It is then possible to create pseudo-fractals by repeating this iterative process only a limited number of times, until the viewer can no longer tell the result apart from the complete fractal.

When such a repetitious process is used, especially in conjunction with non-iterative techniques, the resulting pseudo-fractal is called a “fractal flame”. Fractal flames do not always possess fractal-like qualities; the term has instead come to refer to the visual output of software that implements certain fractal or fractal-flame-related algorithms.

Commonly, fractal flames are created using the algorithm authored by Scott Draves. Apophysis (especially version 2) and its successor Apophysis 7x were among the earliest applications to popularize the algorithm. Later came JWildfire.

Flame Algorithm

The fractal flame algorithm involves taking a sample point from a square in space and tracking it as it is modified by a table of transforms. Each transform has its own set of modification rules that change the spatial coordinates and color of the sample point. The traversal through the table is dictated by relative weights, called “xaos”, which indicate which transforms may come next after the sample point has been modified by one of the transforms in the table. Once a point has been run through a transform, it is “added” to the final image at its new destination point. The process then continues via xaos until the sample point has been run through a certain number of transforms, at which time it has run its course and a new sample point from the starting square is chosen. A number of other sample points may fall on the same pixel. If and when they do, they too are added to the pixel, increasing the pixel’s “intensity”. Once the image is completed, intensity is converted to brightness by means of a gamma function to make the final image more visually appealing.
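
To make the flow concrete, here is a minimal sketch of the forward algorithm in Python. The two affine transforms, the weights, and the color values are hypothetical placeholders; real software (flam3, Apophysis, JWildfire) adds nonlinear “variations”, color blending, filtering, and much more. Treat this as an illustration of the structure, not a faithful implementation.

    import math
    import random

    # Two hypothetical affine transforms; real flames also apply nonlinear "variations".
    def t0(x, y): return (0.5 * x, 0.5 * y)
    def t1(x, y): return (0.5 * x + 0.5, 0.5 * y + 0.5)

    transforms = [t0, t1]
    xaos = [[1.0, 1.0], [1.0, 1.0]]  # xaos[i][j]: relative weight of going from i to j

    WIDTH = HEIGHT = 256
    histogram = [[0.0] * WIDTH for _ in range(HEIGHT)]

    def render(samples=100_000, depth=20):
        for _ in range(samples):
            # Take a sample point from the starting square and a random first transform.
            x, y = random.uniform(-1, 1), random.uniform(-1, 1)
            i = random.randrange(len(transforms))
            for _ in range(depth):
                x, y = transforms[i](x, y)
                # "Add" the point to the image at its new destination.
                px = int((x + 1) / 2 * (WIDTH - 1))
                py = int((y + 1) / 2 * (HEIGHT - 1))
                if 0 <= px < WIDTH and 0 <= py < HEIGHT:
                    histogram[py][px] += 1
                # Continue via xaos: pick the next transform by relative weight.
                i = random.choices(range(len(transforms)), weights=xaos[i])[0]

    def tone_map(intensity, gamma=2.2):
        # Convert accumulated intensity to brightness with a log/gamma curve.
        return math.log1p(intensity) ** (1 / gamma)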

Failures of the Flame

Fractal creation is very much analogous to ray-tracing and path-tracing. Sampling space to identify the inside and outside of fractals – the method used for mandelbrots and mandelbulbs – is like path-tracing: the pixels on the screen are traced back to their intersection with the structure and then (to some extent) to the light. Sampling a point source and sending it through the transforms to a point on screen – the method typically used for fractal flames – is analogous to forward ray-tracing: the image is generated by following light from the light source to the objects and then to the camera.

Forward ray-tracing is very slow compared to path-tracing because it generates an excess of information, much of which isn’t useful. Light doesn’t always make it back to the camera, and even when it does, it may be too dim after its collisions to be relevant. Finally, the best lighting requires light scattering, which is often generated by Monte-Carlo methods and is thus very slow.

Fractal flames have analogous problems. There is no way of knowing in advance where a sample will end up, so it is randomly selected from the starting square and then randomly sent through different transforms (via the rules of xaos). The result is that many samples end up superfluous and wasted, and many areas of the final image remain cloudy or blank because few samples fall in them. Consequently, rendering a solid (or mostly solid-looking) image in this chaotic fashion can require anywhere from a few hours to a few days, depending on the number of transforms, the types of transforms, and the software employed.

A New Approach to Flames

Just as the common flame algorithm is analogous to forward ray-tracing, there may be an inverse method. That method is not necessarily the sampling technique used for the mandelbrot and mandelbulb, even though it, too, would begin at the screen pixel rather than at a source square.

The new method would trace the fractal flame algorithm backwards, without random selection, toward what would be the initial square in the common method. At each stage, a transform would be chosen by selecting from the “from” transforms of the xaos table rather than the “to” transforms. Such selection is trivial.
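
A minimal sketch of that selection, assuming a small hypothetical xaos matrix: the forward table is read “column-wise”, so that, standing at transform j, we can enumerate every transform i that could have led into it and recursively walk every backward path.

    # xaos[i][j] is the forward weight for moving from transform i to transform j.
    xaos = [
        [1.0, 0.0, 2.0],
        [0.5, 1.0, 0.0],
        [0.0, 1.0, 1.0],
    ]

    def predecessors(j, xaos):
        # The "from" transforms of j: every i with a nonzero forward weight into j.
        return [i for i in range(len(xaos)) if xaos[i][j] > 0]

    def reverse_paths(j, depth, xaos):
        # Yield every backward path of the given depth that ends at transform j.
        if depth == 0:
            yield [j]
            return
        for i in predecessors(j, xaos):
            for path in reverse_paths(i, depth - 1, xaos):
                yield path + [j]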

For the sake of simplicity, the common method will be called the “forward approach” or the “from-source approach” whereas this new approach will be called the “backward approach” or the “from-pixel approach”.

Challenges to the New Approach

There are a number of challenges to be met in the from-pixel/backward approach.

The first challenge is deciding how to perform color mixing. In the from-source/forward approach, color shifts are applied to an initial color setting, and the resultant color is the sum of all shifts. In the from-pixel/backward approach, however, there is no initial color to work with – only the shift, which is meaningless on its own. Moreover, if we want control over the final color without needing to repeat the render, then enough color information must be generated during the render to allow color to be applied after the render is complete rather than while it is being performed. In any case, for the color results to be compatible with other software programs, the color cannot be applied until the render is complete and the starting transforms are known.

The second challenge is how to proceed through xaos. In the from-source/forward approach, transforms are randomly selected; the path through xaos is never tracked because there is no need to retrace steps or pursue all possible paths. The from-pixel/backward approach requires tracking the path, both for color information and to verify that the point did originate in the start square.

To get the full intensity of a pixel, we would need to trace every possible path through xaos up to a certain depth (path length, i.e. number of transforms), and this would need to be done starting from each transform. Depending on the settings, the depth is either too shallow to give good results or requires saving an enormous amount of information.

To see why, consider the following example. Suppose we have a fractal with only two transforms and a depth of 20. In common fractal flame generators, a depth of 20 is quite reasonable and gives some very interesting details. A from-pixel/backward flame generator, however, must keep track of the results of every path, and at each step a path can branch into either of the two transforms. Some paths terminate before reaching the full depth of 20 because the sample lands in the “start square” quickly; such points can be considered of higher intensity, and their paths don’t need complete checking. However, since the full path depth might be needed for every branch, we would need to allocate memory for – at the very least – the final result of each path (that is, whether or not the sample ultimately landed inside the “start square”). Now suppose we always start at one of the two transforms, eliminating half of the possible outcomes. At the first step there is 1 path; at the second step, 2 paths; at the third, 4; at the fourth, 8; and so on. This can be summarized as 2 raised to the depth, or 2^depth. At a depth of 20 there are 2^20 possible paths, but since we always start with the same transform, that leaves 2^19 = 524288 outcomes to track. Assuming a simple representation of 24 to 32 bits (a bit is a 1 or a 0) per path to track the color info, this amounts to roughly 2 MB – about the same amount of information as a standard image. That doesn’t seem like much, but remember, this is the information needed for ONE PIXEL. Your image may itself contain 524288 pixels, which would mean 524288 × 524288 × 32 bits… about 2^43 bits, or around one terabyte.
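
The arithmetic is easy to reproduce. A small helper like the following (the branch count, depth, bits per path, and pixel count are simply the assumptions from the example above) makes it easy to test other settings:

    def backward_cost(branches=2, depth=20, bits_per_path=32, pixels=2 ** 19):
        # First transform fixed, so only depth - 1 free choices: 2^19 paths.
        paths = branches ** (depth - 1)
        per_pixel_bits = paths * bits_per_path      # ~2 MB of path data per pixel
        total_bytes = pixels * per_pixel_bits // 8  # for the whole image
        return paths, total_bytes

    paths, total = backward_cost()
    print(paths)             # 524288
    print(total / 2 ** 40)   # 1.0 -- about one terabyte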

Interestingly enough, processing that much information isn’t all that difficult for a computer – it might take an afternoon – but the real limitation is reading and writing that much memory. Furthermore, it would be ideal to limit the rendering time to a few minutes and to have more than just a couple of transforms.

For this, something needs to be sacrificed, and it comes down to the color model. In short, for the approach to be feasible, there can be no backwards compatibility with other software programs. Instead, the color would need to be based on the initial transform and the path length, and perhaps even on some interpolation with successive transforms along a path.

The good news is that there is no replicated data, no overlapping data, and no wasted output data. The bad news is that the usage of the paths themselves is not optimized – many will be repeatedly created over the course of fractal creation, and this can’t be helped. More bad news: if we want anti-aliasing by taking multiple samples per pixel, the cost (both computational and in memory) will quadruple.

Another challenge to the from-pixel/backward approach is that some transforms do not have onward paths; they are endpoints. Consequently, the process cannot start by randomly picking a single transform and using it as the only starting point, since this would neglect the possibility of other transforms being endpoints. The trivial solution is to start from each transform in turn, but recall that this multiplies the number of paths through xaos.

Combining Methods

The from-pixel/backward approach allows for the combining of rendering techniques. Mandelbrots and flames could be generated within the same fractal without the need of Monte-Carlo sampling that would otherwise create hazy or blurry edges.

Furthermore, it would be possible to incorporate 3D fractal rendering into the fractal flame creation process without needing a pre-rendered image of a 3D fractal to save computing time. This again avoids wasted samples (since the entirety of the pre-rendered image may not be used) and strange aliasing effects (from the image being scaled and modified as samples from it travel through many transformations).

Fractal Sampling Speed Comparisons

2D fractals such as the mandelbrot are easily generated in a few seconds or less because they follow a rule or two that quickly reveals whether a pixel is inside or outside the fractal. For 3D fractals such as the mandelbulb, there is a similar rule system, but the process is slower because it must be repeated many times along the depth axis and there are few shortcuts. Current software relies on distance-estimation techniques.
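
The “rule or two” for the mandelbrot is the familiar escape-time test, sketched below: iterate z = z² + c and treat the pixel as inside the set if z never escapes within the iteration budget.

    def escape_time(cx, cy, max_iter=100):
        # Iterate z = z^2 + c; if |z| ever exceeds 2, the point is outside the set.
        zx = zy = 0.0
        for n in range(max_iter):
            zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
            if zx * zx + zy * zy > 4.0:
                return n       # outside: escaped after n iterations
        return None            # never escaped: treated as inside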

Comparatively, individual samples for fractal flames take mere nanoseconds – or microseconds at worst. The flame method is thus essentially a splatter technique, and it works effectively up to a point.

If limited to 2D fractals, a from-pixel/reverse approach to fractal flame creation would ideally be as fast as generating a mandelbrot in common software. However, it could potentially be slower than rendering a fractal flame.

Color Model

Model Variables

The current color models for rendering samples vary depending on the rendering approach and on the variables available for a sample point. Such variables include:

– The initial sample color value.

– The number of transforms applied to a sample.

– The number of iterations of a math rule used for detecting if the sample is inside or outside the fractal.

– A global color or color shift of the transform.

– The z-depth of the sample.

– The final transform applied to the sample.

Some variables work for 3D fractals and others work for 2D. For example, for 3D fractals, z-depth is available. For 2D fractals, iteration count is the substitute for z-depth.

For a from-pixel/backward approach, the primary variable that would be unavailable is the color shift, because its value cannot reasonably be reconstructed (for the reasons given above).

Model Variations

Current fractal flame software uses a combination of intensity and color shift – intensity for brightness, and color shift for moving a color more or less toward a particular color value. This allows for the creation of gorgeous gradients. The model is also very easy to compute because the color shift is a single decimal number used to sample from a gradient.
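
A minimal sketch of that model, with a hypothetical 256-entry gradient: the color shift is a single number in [0, 1], each transform pulls the running value halfway toward its own color, and the final value indexes into the gradient. (The halfway blend matches common flame renderers; the gradient itself is made up here.)

    # A hypothetical 256-entry gradient of (r, g, b) tuples.
    gradient = [(i, 0, 255 - i) for i in range(256)]

    def shift_color(c, transform_color):
        # Pull the running color value halfway toward the transform's color.
        return (c + transform_color) / 2

    def sample_gradient(c):
        # c is a decimal in [0, 1]; scale it into the gradient table.
        return gradient[min(int(c * 255), 255)]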

Without color shift, gradients are trickier to replicate because the transition is no longer from a reliable starting point.

One option is to apply color shifts anyway and hope the results aren’t weird. Even this, however, requires that color samples (one from each path) be averaged for the pixel, which creates a problem when trying to separate the rendering from the color model.

Rendering is very costly, yet fractals often need aesthetic adjustment – brightening darker background areas, among other things, to create an overall pleasing effect. In common software, that means rendering the entire fractal again. There are a number of ways a repeat render could be avoided.

First, the color could be modified in an image-editing program. This is limited because some colors simply cannot be adjusted in image editors without other colors also being adjusted; the colors interfere because they are all represented by the same three channels: red, green, and blue (or cyan, magenta, and yellow).

A second option would be to perform a render that outputs coloring information without the colors themselves. Some interpretive program could then adjust the colors to the user’s liking without a repeat render. The problem here is that such coloring information is ambiguous. For complete color information, every traversed transform would need to be tracked (and we saw how large that becomes). Alternatively, we could account for only one color per starting transform and let the intensity alone track how many times the sample encountered the starting square in its traversal through xaos.

A sub-option would be to keep a list of color-apply counters, one per transform: every time a transform is used, its counter is incremented. The mechanism for tracking these increments may or may not account for paths that fail to reach the “start square”.
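
A sketch of that bookkeeping, under the assumption that each traced path reports the transforms it used and whether it reached the start square (the names here are made up for illustration):

    def accumulate_color_applies(paths, num_transforms, count_failed_paths=False):
        # One counter per transform; bump it each time the transform appears on a path.
        applies = [0] * num_transforms
        for transforms_used, reached_start in paths:
            if reached_start or count_failed_paths:
                for t in transforms_used:
                    applies[t] += 1
        return applies

    # Example: two paths through a two-transform fractal; only the first reached the start square.
    print(accumulate_color_applies([([0, 1, 1], True), ([1, 0], False)], 2))  # [1, 2]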

A third option would be to simply default to a color based on a single variable like those in the Model Variables list above. One sub-option here is to use the color of the last transform in the shortest path. This might preserve (or destroy) the initial transform’s shape, which is important to the visible structure of the fractal.

3D Model Fractals

Meshes and voxels are much faster to render than point clouds or the results of distance-estimation calculations, depending on the methods employed. Voxels are more akin to conventional fractal rendering processes, so they have been employed in making 3D fractal models, and they are sometimes converted to meshes for the sake of speed and portability. Voxel and mesh software exists to simplify these structures so they render in real time, but the tiny details are inevitably lost.

Similar to voxels is the simple method of plotting points in space and representing them with large shapes. (One of my experiments was generating scene files for POV-Ray that represented fractal sample points as large spheres.) Again, this gives the overall shape of the fractal in a smaller form than voxels, but the tiny details are lost.

Meshes, on the other hand, are able to represent the shape and many details better than voxels, but the problem is creating the mesh itself.

I spent a couple of years creating software that would let me build iterated meshes of fractals. My approach was to start with a transform that generated a basic mesh and then chain onto it other transforms that modified the mesh in various ways to create interesting shapes. The technique was successful – it allowed me to create some very complex 3D fractals in only a couple of minutes.

There were, sadly, some notable limitations. First, the more details there were, the slower the render. I had planned for this in advance and created a progressive rendering system that would render only a certain number of parts of the full structure each program cycle. This enabled slower computers to render very complex meshes in a couple of seconds with no strain on the CPU. The second limitation was that there were many wasted vertices and faces, and the meshes needed either manual cleaning (in a mesh editor) or “cleaning transforms”. Third: all of the faces were drawn whether or not they were visible in the final render. Fourth: meshes don’t conform well to certain kinds of transformations, so they would become messy under inversion transformations or crinkled by curve transformations. Finally, the most important problem was the limit on the kinds of shapes that could be created: the size of details was limited by how well vertex positions could be represented by floating-point values. Applying shader materials would have solved that problem, but writing such shaders is tedious. Transparency also turned out to be ugly, and while transparency is already tricky with meshes, fractals with many details compound the problem.

In the end, “mesh fractals” were not the ultimate solution in terms of appearance, but they did partially solve the problem of creating fractal-like structures for 3D video games (which was one of the original intentions).

Alternatives to Better Rendering Approaches

One possible alternative to enhanced rendering is using photo-editing software. Certain products now include content-aware fill tools that can generate content where it was incomplete or blurry.

I speculate these techniques may not capture the essence of the fractal itself and may lose some tiny details that the original fractal software failed to render.

On the positive side, certain smooth areas of fractals may be more readily repaired. It is already possible in some software to use selective-blur tools that perform localized blurring with similar pixels. This provides some smoothing, especially in areas where the pixels were too dim.

I speculate a good blurring tool for fixing low-sample pixel areas might be one that raises background color pixels to the average intensity of the most intense pixels. But again, this might result in accidental loss of details in some areas.

Concluding Remarks

Fractal rendering approaches share overlapping or analogous problems with other kinds of computer rendering while also presenting their own set of challenges that can be solved in unconventional ways. Being a highly abstract, pure form of art, fractal creation has no “right” or “wrong” way to be done; there is only “it looks good” or “it doesn’t look good”. The responsibility of the software, therefore, is to make it possible for good-looking fractals to be made, and made easily.

The mechanisms for speeding up the process that I have considered thus far all require much more memory. That’s typical in software, where the rule is “security, speed, or memory – pick two; the third must be sacrificed”. However, the extra memory limits the size of the final rendered image, which can be disappointing when you’d like to see those teeny tiny exciting details in the midst of a broad and mysterious world.
