
Animation Production in 3D – Part 2

In my previous article, I spoke about the charm of 2D animation and gave an overview of how efforts using 3D technology have struggled to replicate it. In this post, I'm going to talk about current technology, and the direction it's heading, that will allow for the production of better animation, especially anime-style.

Digital Approaches To Animation

Currently, there are roughly three and a half approaches to creating animation digitally (if we don’t count anything unconventional).

The first approach is simply an extension of hand-drawn animation: draw the frames of the animation on the computer. The current industry-standard tool for this is ClipStudio EX, but there's also OpenToonz (the open-sourced version of the software Studio Ghibli used), CACANi (used for the Inori Aizawa (Internet Explorer-tan) video), and TVPaint. The "half" approach is the decision either to complete the animation using those drawings or to convert the drawings to line art for use in the other approaches. The next possible step is rigging 2D drawings. In this direction, there are tools like Toon Boom Harmony, Adobe Character Animator, and the ever-broken Synfig Studio.

The true second approach is creating vector graphics directly on the computer and animating them with key frames. Adobe Animate and Synfig Studio are in this category, but in a few years, an exciting new competitor is poised to arrive: VGC Animation. This software – based on VPaint – uses special technology that enables the easy manipulation and animation of vector graphics even when layers overlap or shapes change in visually-sensible-but-programmatically-difficult ways. More on that later.

The third true approach is using 3D graphics and trying to fake the drawing part. The current industry leaders in animation are Maya (though it isn't used for toon animation), 3ds Max (already used extensively by animation houses like Gonzo, creators of Last Exile), and Smith Micro's Poser, followed by Blender.

There are a number of other random things that can be animated, and for miscellaneous other needs, there are tools like Adobe After Effects.

Some of the best stuff isn’t on the market (at least not for you and me). It mostly consists of plugins or technology yet to be publicly released.

Hurdles

The challenges to creating cartoons (not just animation) digitally are primarily aesthetic, if you ignore the aspect of Flow of Motion (as discussed in my previous article). Of the three approaches I mentioned above, the most beautiful is the first (hand-drawn on the computer), but the fastest is 3D. It would seem that animating vector graphics is an ideal compromise.

Vector Graphics

Vector graphics have the advantage of rendering perfectly smooth lines and filling them in. The challenges here are creating natural-looking line art quickly, and creating and moving shadows. Natural-looking line art is hard to create by pushing around vector path nodes. To get a beautiful look, you have to draw it, which is what ClipStudio lets you do. Sadly, ClipStudio has no tools for animating line art – at least not yet – but CACANi does, I think. The other problem is creating line art quickly. Both of these problems can in part be addressed by scanning in hand-drawn frames, but the current algorithms for vectorizing them give crude results. One algorithm is "skeletonize" – a process by which the lines are thinned out until they are one pixel thick, at which point the nodes for a vector path can be determined to replace the old drawing. The other algorithm is "Optimal Transportation Curve Reconstruction", found in the CGAL library.
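To give you an idea of the vectorization end of that pipeline, here's a toy Python sketch of the very last step: collapsing a chain of 1-pixel skeleton coordinates into a handful of vector nodes using Ramer–Douglas–Peucker simplification. (This is a generic technique I'm using for illustration – I don't know what CACANi or CGAL actually do internally.)

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: simplify a polyline of (x, y) points,
    keeping only nodes that deviate more than epsilon from a chord."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    # Perpendicular distance of each point from the start-to-end chord.
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy) or 1.0
    dists = [abs(dy * (x - x0) - dx * (y - y0)) / norm for x, y in points]
    i = max(range(1, len(points) - 1), key=lambda k: dists[k])
    if dists[i] > epsilon:
        # Recurse on both halves and splice, dropping the shared midpoint.
        return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)
    return [points[0], points[-1]]

# A 1-pixel-thick scanned line: mostly straight, with one corner.
trace = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 2), (5, 3)]
nodes = rdp(trace, epsilon=0.5)  # six pixels collapse to three nodes
```

The crude results I mentioned come largely from this stage: pick epsilon too small and you keep every pixel wobble; too large and the corners of the drawing get eaten.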

The last problem for vector graphics is shadows. While shadows and lighting could be digitally painted by hand, it would be ideal to simply create vector shapes representing shadows and move them around. But that destroys flexibility: in many cases, the desired shadow looks completely different from one frame to the next, based purely on how you expect it to look given the lighting of the scene. On the bright side, VGC Animation might make these shadows possible because of its underlying technology: "vector graphic complexes".

Hand-Drawn

Drawing everything by hand digitally is slower than drawing on paper, but at least the output can be readily colored. In the long run, it's an inefficient solution.

A number of programs, such as Toon Boom Harmony and CACANi, offer auto-tweening – automatically creating the "inbetweens" (frames between the key frames, which define the motion). This is convenient, but to be effective, a large number of key frames still need to be drawn.
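For the curious, the naive core of auto-tweening is just interpolation. Here's a Python sketch that assumes the two key drawings already have matching node counts – the stroke-matching problem that real tools like CACANi actually solve is exactly what this toy version skips:

```python
def tween(key_a, key_b, steps):
    """Generate 'inbetween' frames by linearly interpolating each
    vector node between two key drawings with matching node counts."""
    frames = []
    for s in range(1, steps + 1):
        t = s / (steps + 1)  # 0 < t < 1, excluding the keys themselves
        frames.append([
            (ax + (bx - ax) * t, ay + (by - ay) * t)
            for (ax, ay), (bx, by) in zip(key_a, key_b)
        ])
    return frames

# Two key poses of a three-node stroke, with three inbetweens.
key1 = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
key2 = [(0.0, 8.0), (10.0, 8.0), (20.0, 8.0)]
inbetweens = tween(key1, key2, steps=3)
```

Linear interpolation is also why naive tweens look lifeless: every node moves at constant speed, which is exactly the flat motion a good inbetweener avoids.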

Thus, we are left with 3D.

3D Graphics

The first hurdle of 3D is creating good line art.

I’ve tried using Blender for line art, and I’ve seen work created with it. It’s evident to me that the engine underneath – called Freestyle – doesn’t aim at pretty results. The line art is always choppy, and understandably so: when we draw something by hand, we guess at where the edges of things are and draw outlines. When computers paint something, they have to know everything. There is no guessing at where an edge is, so they end up drawing every true edge of the entity being rendered, even the ones that look ugly, or drawing the right lines in ugly ways. Getting away from that is tricky but possible.

Choppy lines tend to go away when the meshes (representing entities) are smoother. There are a number of ways to get this. The obvious way is using a higher-quality mesh with more vertices and surfaces, but this is memory-intensive. Another technique is subdivision, whereby a basic mesh is smoothed out by replacing surfaces with smaller ones that better conform to the angles between the vertices. Smoothing can be done for the entire model, as in the case of OpenSubdiv by Pixar, or on an as-needed basis. The as-needed case arises in raytracing, in which a mesh is smoothed out on a scale smaller than a single pixel, allowing for other effects like surface displacement – hence the appropriate term: “subpixel displacement”.
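Mesh subdivision is hard to show in a few lines, but its 2D cousin is easy. Here's a Python sketch of Chaikin corner cutting, which rounds off a polyline's sharp corners the same way subdivision rounds off a blocky mesh (an analogy for intuition only – OpenSubdiv implements Catmull–Clark subdivision, which works on surfaces):

```python
def chaikin(points, iterations=1):
    """Chaikin corner cutting: each pass replaces every edge with two
    points at 1/4 and 3/4 along it, rounding off sharp corners --
    a 2D analogue of mesh subdivision smoothing."""
    for _ in range(iterations):
        smoothed = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = smoothed
    return points

# A sharp right-angle corner gets visibly rounded after one pass.
corner = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0)]
rounded = chaikin(corner, iterations=1)
```

Each pass doubles the point count, which is also a fair picture of why full-model subdivision gets memory-hungry fast and why the as-needed, per-pixel variant is attractive.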

The next hurdle is shading.

Basic shadows are as easy as picking threshold values for color. The edges won’t be smooth, so a short range of threshold values can be used to smooth the transitions. But in any case, this is easy.

What isn’t easy is creating smooth shadows that go from being hard shadows in some areas to fading into nothing in other areas – an effect that is absolutely gorgeous and essential for the best art. The best 3D software today has not accomplished this. The issue is that it takes a simple challenge like thresholding and compounds it by requiring that – in some other area – new rules are applied to shading.

There are possible solutions to this, but as of yet, I have not tested any of them nor seen them in practice.

The final hurdle is the flow of motion. Sadly, this author has no suggestions for solving the entirety of that problem yet. Let me remind you, it involves more than simply having characters move faster.

Breaking the Horizon

Several years ago, Gonzo created Last Exile, and the technology at the time was lame. They got away with it. Then years later, Arpeggio of Blue Steel came out, and I about choked. On the bright side, Aldnoah.Zero did a decent job, but it’s evident that the animation tech still wasn’t quite there yet. The production process limited 3D animation strictly to mechanical things. Moreover, backgrounds had to be added, so ultimately, composing a single shot was still a great deal of work. Characters and other organic entities were still hand-drawn.

Then about a year ago, the anime Blame! came out using technology developed by J-Cube called Maneki. While not entirely revolutionary in output, there are a number of subtle enhancements to the process and output that have ramped up the production power of animators.

They were kind enough to post a video about it on Vimeo:

J-Cube’s presentation on PSR link

Can’t watch the video? Let me give you a brief rundown of the technology.

First, the technology is a specific piece of software (a plugin, actually) that uses fast raytracing technology (and that hot graphics card) to render an animation’s entire scene (optionally) in a specific style. It’s not the prettiest style, admittedly, and the output can be disappointing for reasons I mentioned in my first post, but it’s on a roll:

It features the basic package: it renders line art, solid colors, and hard shadows. Line art is created using the z-depth and surface normals. Hard shadows are created by taking the direct lighting and using threshold values to divide the light regions into three separate color regions. For simplicity, let’s just call these regions “light”, “medium”, and “dark”.
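The z-depth half of that line art pass is easy to illustrate. Here's a toy Python version working on a single scanline of depth values: where the depth jumps, there's a silhouette, so mark a line pixel. (Maneki's actual edge detection is its own, and as noted it also consults surface normals to catch creases that don't show up in depth.)

```python
def depth_edges(depth_row, threshold=0.5):
    """Mark line-art pixels along one scanline wherever the z-depth
    jumps by more than the threshold -- a depth discontinuity means
    a silhouette edge between one entity and whatever is behind it."""
    return [
        abs(b - a) > threshold
        for a, b in zip(depth_row, depth_row[1:])
    ]

# Scanline crossing from a near object (depth ~1) to the background (~10).
row = [1.0, 1.1, 1.2, 10.0, 10.0, 10.1]
edges = depth_edges(row)
```

The small depth changes across the object's own surface stay under the threshold, so only the silhouette gets a line – which is exactly the guess-at-the-outline behavior that hand-drawing gets for free.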

On top of that, Maneki allows an amazing amount of control over lighting. It allows direct lighting, gradient lighting, area lighting, rim lighting, and delta lighting. Each of these works a bit differently, so let me explain.

In area lighting, you pick a point in space, and things relatively close to that point get lit whether or not any light truly reaches them or is blocked (in other words, no raytracing, afaik) – it’s an ambient lighting boost.

In gradient lighting, points of an entity are lit based on 2D gradients applied to the entity as a whole (which is fake but nice for simulating side, ambient lighting without a source).

Rim lighting is the light along the edges of an entity. Just imagine that the part of a surface nearly aimed at a light is super bright compared to the same surface at any other angle.

Finally, delta lighting is just direct lighting that doesn’t undergo the hard-shadows creation process. It’s the most true to 3D appearance, so it’s usually toned down significantly.
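To make the distinctions concrete, here's a toy Python shader mixing area, rim, and delta terms for a single surface point. The formulas and parameter names are my own guesses for illustration – Maneki's actual math isn't public:

```python
def dot(a, b):
    """Dot product of two 3D vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def shade_point(normal, view_dir, direct, dist_to_area_light,
                area_radius=3.0, rim_power=4.0, delta_scale=0.2):
    """Illustrative mix of the lighting terms described above:
    - area:  ambient boost that falls off with distance to a chosen point
             (no raytracing, nothing blocks it)
    - rim:   bright fringe where the surface turns away from the viewer
    - delta: direct lighting that skips the hard-shadow thresholding,
             toned down because it's the most obviously '3D' term
    """
    area = max(0.0, 1.0 - dist_to_area_light / area_radius)
    rim = (1.0 - max(0.0, dot(normal, view_dir))) ** rim_power
    delta = delta_scale * direct
    return area + rim + delta

# A point facing the camera, outside the area light's radius:
# no area boost, no rim fringe, only the toned-down delta term remains.
sample = shade_point(normal=(0.0, 0.0, 1.0), view_dir=(0.0, 0.0, 1.0),
                     direct=1.0, dist_to_area_light=3.0)
```

The point of having all these knobs is that each term can be art-directed independently, instead of everything falling out of one physically-correct light transport calculation.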

All this wouldn’t matter if the geometry resulted in blocky shapes and thus choppy line art. So Maneki performs subdivision and subpixel displacement.

Maneki was designed to be fast even on low-end computers, which is necessary for animation, but the sad part is that it uses Monte Carlo raytracing (which, in the case of Maneki, involves tracing rays fired from random positions in the viewport into the scene – a fast but noisy method that makes pixel-perfect frames hard to achieve).
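If you've never seen why Monte Carlo rendering is noisy, this tiny Python experiment shows the core of it: estimate how much of a pixel a shape covers by firing sample rays at random sub-pixel positions. A handful of samples gives a jittery answer, and it takes thousands to converge, which is exactly why flat, clean cel colors are an awkward fit for the technique:

```python
import random

def estimate_coverage(inside, samples, seed=0):
    """Monte Carlo estimate of how much of a pixel a shape covers:
    sample random sub-pixel positions and average the hit count.
    Few samples -> noisy estimate; many samples -> slow convergence."""
    rng = random.Random(seed)
    hits = sum(inside(rng.random(), rng.random()) for _ in range(samples))
    return hits / samples

# The shape covers exactly the left half of the pixel (true coverage 0.5).
left_half = lambda x, y: x < 0.5
coarse = estimate_coverage(left_half, samples=16)      # jittery
fine = estimate_coverage(left_half, samples=100000)    # close to 0.5
```

The error shrinks only with the square root of the sample count, so halving the noise costs four times the rays – a brutal trade-off when the target look is a perfectly flat fill.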

You can preview the results in the Blame! PV:

Competition

A competitor soon to start gaining ground is Poser by Smith Micro. Currently, Smith Micro is building a team of developers to push the project forward, spearheaded by Herb Gilliland and the original developer, whose name I have sadly already forgotten. The product as it stands has a number of features that make it easy to use. While its current rendering may not be top-of-the-line just yet, the output you can obtain from it exceeds the (imo) garbage renders of RWBY by Rooster Teeth, who used Poser to make their cartoon.

Not mentioning MikuMikuDance would be shameful, especially given its widespread usage. As a rendering engine, it primarily depends on the textures applied to meshes, but its rendering is decent for a free tool. The most difficult part is its user interface, which is troublesome to say the least, especially when the only guide available is in Japanese.

MikuMikuDance (discontinued) has been superseded by MikuMikuMoving to some degree, though both are floating around the net.

Good Directing is Irreplaceable

Even well-drawn cartoons that suffer from poor directing lose the charm that 2D animation has to offer. Part of this is the motion of objects in a scene. Entities that obey a strict z-layering (i.e. everything sits on a layer, not at a variable depth), as in Toon Boom Harmony, Adobe Animate, or CACANi, will look like they obey a z-layering, which steals from their surrealism.

Bad directing can also involve too many action shots or too many closeups, preventing the audience from seeing what’s happening and also making things look goofy rather than enjoyable.

For 3D, the opposite can be a problem: Shots that are too “normal” rather than closeup may look staged or make the usage of 3D rendering too obvious.

3D rendering also has the significant problem of raising awareness of the third dimension when the camera moves. In 2D animation, a scene in which the camera moves is still technically flat – what is going on is that the scene itself is drawn with a skewed perspective, and the audience is readily aware of this. In 3D rendering, however, the scene is not stretched, it is moved through, and that draws attention to the third dimension, thereby destroying the imaginative aspect that is so hard to get the viewer to have.

Concluding Thoughts

Thanks for reading. You’ve got enough to digest now. How about another video, this time from CACANi, and a good example of how not to do the layering:
