Line Drawings (via Abstracted Shading)

As the final project in one of my undergraduate courses, I implemented the line drawing algorithm described by Lee, Markosian, Lee, and Hughes here. Their algorithm was implemented as a real-time post-processing shader, and they achieved some very nice results; I suggest you check out their images section. For now, here are a couple of pictures of my results (click on them to see the details at full size).

I implemented this in a ray tracer, which was terribly slow for two reasons: the algorithm requires multiple passes (described below), and my ray tracer used no acceleration structures or other optimizations.

Their algorithm first renders a tone image: a gray-scale image of the scene that captures the Phong diffuse lighting of the objects. This tone image is then blurred with a Gaussian kernel whose width matches the desired line width. This is done for the same reason you would blur an image before detecting blobs with a Laplacian filter. Finally, the blurred image is scanned for thin areas of dark shading; these areas are the ridges of the tone image.
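To make the blur step concrete, here is a minimal sketch of blurring a tone image with a Gaussian whose support matches the target line width. This is my own illustration, not code from the paper: the tone image is assumed to be a grayscale NumPy array in [0, 1], and the sigma-from-width heuristic is my choice.

```python
import numpy as np

def gaussian_kernel(width):
    """1D Gaussian whose support roughly matches the target line width (pixels).

    The sigma = width / 3 heuristic (so +/- 3 sigma spans the kernel) is an
    assumption of this sketch, not a value from the paper.
    """
    sigma = width / 3.0
    radius = int(np.ceil(width))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_tone_image(tone, line_width):
    """Separable Gaussian blur of a grayscale tone image (H x W, values in [0, 1])."""
    k = gaussian_kernel(line_width)
    # A Gaussian blur is separable: convolve every row, then every column.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, tone)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred
```

In the paper this happens on the GPU as a post-process, but the math is the same: a wider desired line means a wider kernel, which spreads thin dark features out so the ridge detector can find them.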

The ridge detection routine described in the paper fits a local function to the blurred tone values around each pixel. Because the least-squares system that determines the fit depends only on the fixed sample positions, its matrix can be precomputed. Wherever the maximum principal curvature of the fitted function exceeds a threshold, a 'line' is output. Since this is all implemented in a fragment shader, the output is really a single dark pixel whose opacity is modulated by its distance from the principal axis of the fitted function, which anti-aliases the lines.
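The following is a rough sketch of that idea as I understand it, not the paper's shader: fit a quadratic to a 3x3 neighborhood by least squares (the pseudoinverse depends only on the fixed sample offsets, so it is computed once), then threshold the maximum eigenvalue of the fit's Hessian. The 3x3 window, the hard boolean output, and the omission of the opacity modulation are all simplifications of mine.

```python
import numpy as np

def ridge_mask(blurred, threshold):
    """Flag pixels where the max principal curvature of a local quadratic fit
    exceeds a threshold (a thin dark line is an intensity minimum, so the fit
    curves upward there)."""
    offs = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    # Columns x^2, xy, y^2, x, y, 1 for z = a x^2 + b xy + c y^2 + d x + e y + f.
    A = np.array([[dx*dx, dx*dy, dy*dy, dx, dy, 1.0] for dx, dy in offs])
    pinv = np.linalg.pinv(A)        # precomputed once, shared by every pixel

    h, w = blurred.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            z = np.array([blurred[y + dy, x + dx] for dx, dy in offs])
            a, b, c, d, e, f = pinv @ z
            # Hessian of the fit is [[2a, b], [b, 2c]]; its eigenvalues are
            # the principal curvatures.
            eig = np.linalg.eigvalsh(np.array([[2*a, b], [b, 2*c]]))
            if eig.max() > threshold:
                out[y, x] = True
    return out
```

The paper additionally modulates each output pixel's opacity by its distance from the ridge axis to anti-alias the line, which this boolean sketch leaves out.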

By dynamically adjusting line width based on distance from the viewer, the system provides a level-of-abstraction effect: lines become thinner as they recede from the viewer.

The final image combines the output lines with a toon-shaded (either X-Toon or cel-shaded) rendering of the scene. The algorithm normally extracts dark lines; highlights can be extracted by substituting a specular tone image for the diffuse one and inverting the line color. My ray tracing system took so long to render because it had to trace a diffuse tone image, a specular tone image, and a toon-shaded image of the scene before it could combine everything into the final result. A GPU implementation is much more conducive to multiple passes than a ray tracing approach.
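The compositing step itself is simple. Here is a sketch of one way to layer the two line passes over the toon render, assuming per-pixel opacity maps for the dark (diffuse-pass) and light (specular-pass, inverted) lines; the exact blend the paper uses may differ.

```python
import numpy as np

def composite(toon, dark_alpha, light_alpha):
    """Layer dark lines and white highlight lines over a toon-shaded render.

    toon:        H x W x 3 toon-shaded image in [0, 1]
    dark_alpha:  H x W opacity of dark (black) lines from the diffuse tone pass
    light_alpha: H x W opacity of light (white) lines from the specular tone pass
    """
    out = toon * (1.0 - dark_alpha[..., None])        # pull toward black
    out = out + (1.0 - out) * light_alpha[..., None]  # pull toward white
    return np.clip(out, 0.0, 1.0)
```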

Taking an image processing approach to the line drawing problem solves one major issue in rendering line drawings: it removes the need for curvature information from the models themselves, which by itself is a huge boost in frame rate. However, I believe that until there are efficient object-space algorithms for accurately extracting lines from 3D objects, no line drawing system will be both accurate and efficient enough to run in real-time while mimicking an artist's line drawings. Read More!

NPAR 2009

NPAR 2009 is coming up in a few weeks. This is my favorite time of year because a lot of freely available papers on the latest NPR research are released. An unofficial schedule has been posted on their website. If you are interested in the latest NPR research, check out the schedule for the speakers and the set of papers being presented. Most of these papers are freely available (just google their titles).

The talk I most regret missing this year is the one from Ubisoft about Prince of Persia. For those of you who haven't played the new Prince of Persia, all you need to know is that it makes extensive use of NPR. Jean-François, the lead programmer on the new Prince of Persia, will be giving a talk that focuses on NPR while outlining the game's three-year development cycle.

I hope they release a summary of the talk on ACM SIGGRAPH, because I am tired of photorealism in games and want to see NPR take hold. Besides that, NPR in games is one of my main interests. Read More!

Geometry Shader Silhouettes

There has been a lot of research put into efficiently rendering accurate silhouettes in real-time applications. Silhouettes offer definition and detail that are important, especially in non-photorealistic rendering. Traditional silhouette rendering used the brute-force approach or the edge buffer algorithm I briefly described in a previous post. These approaches carried out their computation on the CPU every frame and were too slow for real-time applications.

There have been many attempts to extract silhouettes on the GPU through the use of vertex shaders, many of which require extra geometry to be needlessly rendered per edge or extra information sent to the GPU per vertex. With the advent of geometry shaders there have been a few attempts at creating new geometry on the fly whenever a silhouette is found to exist.

A recent paper by a group of students at a university in Venezuela describes a single-pass geometry shader that effectively extracts silhouettes in real-time. The silhouettes are created as extrusions of the edges detected as silhouettes, and they are properly textured. Their paper reviews the background I summarize in this post and then describes their solution.

Geometry shaders have access to adjacency information, which makes them well suited for silhouette extraction. At first it seems like a great idea to simply extrude an edge whenever a silhouette edge is detected, but this can produce discontinuities in the silhouette where two edges meet at an angle.
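The detection test itself is the easy part, and adjacency is exactly what it needs: an edge lies on the silhouette when one of its two adjacent faces is front-facing and the other is back-facing. Here is a CPU-side sketch of that test; the function names are mine, and the constant view direction assumes an orthographic view for simplicity.

```python
import numpy as np

def face_normal(a, b, c):
    """Unnormalized normal of triangle (a, b, c) with counter-clockwise winding."""
    return np.cross(b - a, c - a)

def is_silhouette_edge(v0, v1, v_left, v_right, view_dir):
    """True when the two triangles adjacent to edge (v0, v1) face opposite ways.

    v_left / v_right are the vertices opposite the shared edge in each adjacent
    triangle -- exactly the adjacency data a geometry shader receives with
    triangles_adjacency input.
    """
    n1 = face_normal(v0, v1, v_left)
    n2 = face_normal(v1, v0, v_right)   # winding flipped to keep normals consistent
    # A face is front-facing iff its normal points toward the viewer.
    front1 = np.dot(n1, view_dir) > 0
    front2 = np.dot(n2, view_dir) > 0
    return front1 != front2
```

In a real geometry shader the same dot-product test runs per edge of each input primitive; the hard part, as the paper discusses, is what geometry to emit once the test passes.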

Another problem with this approach is generating texture coordinates. Silhouette strokes are often stylized to match the overall style of a non-photorealistic rendering. Without a continuous silhouette, it is not possible to generate texture coordinates that correctly tile and rotate a texture along the silhouette.
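To see why continuity matters: a stroke texture should be parametrized by arc length along the whole silhouette, so the pattern flows across edge boundaries instead of restarting at each extruded edge. A minimal sketch of that parametrization (my own illustration, assuming the silhouette has been chained into a 2D polyline):

```python
import math

def silhouette_tex_coords(points, texture_length):
    """u coordinate at each polyline vertex: cumulative arc length divided by
    the length one repetition of the stroke texture should cover."""
    u = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        u.append(u[-1] + math.hypot(x1 - x0, y1 - y0))
    return [s / texture_length for s in u]
```

With per-edge extrusion and no shared parametrization, every edge would start again at u = 0, which is exactly the tiling artifact the paper's continuous silhouettes avoid.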

The method described in the paper solves both issues, the discontinuities in the silhouette edges and the generation of texture coordinates, with a slight performance hit compared to another recent geometry shader method. However, the accuracy their solution provides outweighs the slight performance hit.

The undergraduates who did the research will be presenting their solution at SIACG 2009. Read More!