X-Toon: The Extended Toon Shader

The X-Toon shader is a cartoon shading technique created by two French researchers at Artis in collaboration with Lee Markosian. X-Toon extends traditional cartoon shading's one-dimensional texture mapping by using a two-dimensional texture map instead. This lets the artist create depth, lighting, and viewpoint effects instead of being constrained to lighting effects alone.

Traditional cartoon shading uses a one-dimensional texture map indexed by dot(norm, lightDir), where norm is the surface normal at the point being rendered and lightDir is the direction from that point to the light.
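In a shader this is a single 1D texture fetch. Here is a minimal CPU-side Python sketch of the same lookup, where the hypothetical `ramp` list stands in for the 1D toon texture:

```python
import numpy as np

def toon_shade(norm, light_dir, ramp):
    """Classic 1D toon shading: index a discrete ramp by dot(norm, lightDir).
    `ramp` is a stand-in for the 1D texture, a list of colors ordered
    from dark (N.L = 0) to bright (N.L = 1)."""
    n = np.asarray(norm, dtype=float)
    n /= np.linalg.norm(n)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    t = max(np.dot(n, l), 0.0)                   # clamp back-facing to 0
    i = min(int(t * len(ramp)), len(ramp) - 1)   # nearest-texel lookup
    return ramp[i]

# Three-tone ramp: shadow, midtone, highlight
ramp = [(0.2, 0.1, 0.1), (0.6, 0.3, 0.3), (1.0, 0.8, 0.8)]
print(toon_shade((0, 0, 1), (0, 0.3, 1), ramp))  # falls in the brightest band
```

The hard banding comes entirely from the discrete ramp; the lighting computation itself is the ordinary Lambertian dot product.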

X-Toon extends this one-dimensional texture lookup into two dimensions by using not only the previous dot product as an index but also a second parameter. The second parameter can be anything required to achieve the desired effect. In the paper they describe both an orientation-based lookup (based on the dot product between the normal and the view vectors) and a depth-based lookup. See their paper and website for details and results of the implementation.
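A sketch of the extended lookup, assuming a hypothetical `texture_2d` array in place of the 2D toon texture and a `detail` value in [0, 1] playing the role of the second index (normalized depth, or dot(norm, viewDir) for the orientation-based variant):

```python
import numpy as np

def xtoon_shade(norm, light_dir, detail, texture_2d):
    """X-Toon-style 2D lookup: one axis indexed by N.L as in classic
    toon shading, the other by a second attribute `detail` in [0, 1].
    `texture_2d` is a hypothetical H x W x 3 array standing in for
    the 2D toon texture."""
    n = np.asarray(norm, dtype=float)
    n /= np.linalg.norm(n)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    t = max(np.dot(n, l), 0.0)
    h, w, _ = texture_2d.shape
    row = min(int(detail * h), h - 1)   # second attribute selects the row
    col = min(int(t * w), w - 1)        # N.L selects the column, as before
    return texture_2d[row, col]
```

Each row of the texture is effectively its own 1D toon ramp, so the artist controls how the style changes with depth or orientation simply by painting the rows differently.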

The following is a video of the orientation- and depth-based effects created by the X-Toon shader. I created this application as a mini-project in a course at school using the same textures the original authors use. The first effect simulates varying opacity while the second simulates varying lighting based on orientation and view. The red texture on the car simulates backlighting, an effect normally faked by rendering the scene twice, here achieved in a single pass with the X-Toon texture mapping. The rest of the video shows depth-based effects and focuses on level of abstraction at far distances.


The best thing about this technique is that it's fast, simple, and new effects can easily be created by any artist. It's also an effective way to blend two different styles of rendering, which is an area of NPR I am currently looking into.

Gooch Shading and Silhouettes

Gooch Shading is a lighting model for technical illustration created by Amy Gooch, Bruce Gooch, Peter Shirley, and Elaine Cohen. They observed that in certain technical illustrations objects were shaded with warm and cool colors to indicate the direction of surface normals. As a side effect this also conveys depth, because as an object tapers away from us its surface normals begin to point away from us.

 

The Gooch lighting model modifies the classic Phong shading model to become

I = ((1 + l · n) / 2) * k_warm + (1 − (1 + l · n) / 2) * k_cool

where l is the light direction and n is the normal of the surface at that point. Classically pure blue is chosen as the cool color and pure yellow as the warm color, and these parameters are denoted b and y ( k_yellow = (y, y, 0) and k_blue = (0, 0, b) ). These two parameters control the strength of the temperature shift.

 

If this were the only contribution to the final output color we would see a gradual shift from yellow in areas of high illumination to blue in areas of low or no illumination, with no trace of the object's own color. To fix this we give the object a color, denoted objColor. Two more parameters (alpha and beta) control the prominence of the object color and the strength of the luminance shift: alpha gives the prominence of objColor in the warm temperature areas and beta gives its prominence in the cool temperature areas.

 

Using this information, k_warm = k_yellow + alpha * objColor and k_cool = k_blue + beta * objColor. These are then substituted into the first equation to compute the output color.
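Putting the pieces together, here is a minimal Python sketch of the full Gooch model. The default values for b, y, alpha, and beta are common choices, not part of the original formula:

```python
import numpy as np

def gooch_shade(norm, light_dir, obj_color, b=0.4, y=0.4, alpha=0.2, beta=0.6):
    """Gooch cool-to-warm shading: blend k_cool and k_warm by the
    remapped diffuse term (1 + l.n) / 2.  Parameter defaults are
    illustrative assumptions."""
    n = np.asarray(norm, dtype=float)
    n /= np.linalg.norm(n)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    obj = np.asarray(obj_color, dtype=float)
    k_blue = np.array([0.0, 0.0, b])
    k_yellow = np.array([y, y, 0.0])
    k_cool = k_blue + beta * obj      # object color mixed into the cool tone
    k_warm = k_yellow + alpha * obj   # object color mixed into the warm tone
    t = (1.0 + np.dot(n, l)) / 2.0    # 1 facing the light, 0 facing away
    return t * k_warm + (1.0 - t) * k_cool
```

Note that unlike a clamped Phong diffuse term, the blend factor stays meaningful over the whole range of l · n, so even surfaces facing away from the light receive a distinct (cool) color.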

 

The shading alone gives the viewer a good indication of the overall shape of an object, but the final image lacks definition. The creators fixed this by rendering the silhouettes of the object in addition to applying Gooch shading.

 

There are many methods of extracting silhouettes from an object, but I will describe the one implemented in the demo video. The program in the demo video uses a structure called an 'edge buffer': for each edge in a model it stores whether that edge belongs to a front-facing face, a back-facing face, or both. When rendering silhouettes we iterate through the edge buffer, find the edges shared by both a front-facing and a back-facing face, and draw a silhouette line along each of them.
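The edge-buffer pass can be sketched as follows. This is a simplified Python version under the assumption of a triangle mesh with consistent winding; function and variable names are my own:

```python
import numpy as np

def find_silhouette_edges(vertices, faces, view_dir):
    """Edge-buffer silhouette extraction: mark each edge with the
    facing of the triangles that share it, then keep the edges shared
    by one front-facing and one back-facing triangle.
    `faces` are triples of vertex indices with consistent winding;
    `view_dir` points from the scene toward the viewer."""
    verts = np.asarray(vertices, dtype=float)
    edge_buffer = {}  # edge key -> [seen_on_front_face, seen_on_back_face]
    for a, b, c in faces:
        # Face normal from the winding order
        normal = np.cross(verts[b] - verts[a], verts[c] - verts[a])
        front = np.dot(normal, view_dir) > 0.0
        for e in ((a, b), (b, c), (c, a)):
            key = tuple(sorted(e))  # undirected edge
            flags = edge_buffer.setdefault(key, [False, False])
            flags[0 if front else 1] = True
    return [e for e, (f, bk) in edge_buffer.items() if f and bk]
```

A real renderer would build the edge-to-face adjacency once as a preprocess and only re-test the facing flags each frame, rather than rebuilding the dictionary as this sketch does.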

 

This method requires a little storage and a fair amount of preprocessing to build the edge-to-face adjacency, so it is much more practical in static scenes. In dynamic scenes the buffer must be rebuilt whenever the geometry changes, which makes it equivalent to a brute-force edge test.

 

Enjoy the demo.

