
Geometry Shader Silhouettes

There has been a lot of research into efficiently rendering accurate silhouettes in real-time applications. Silhouettes offer definition and detail that are important, especially in non-photorealistic rendering. Traditional silhouette rendering used either the brute-force approach or the edge buffer algorithm I briefly described in a previous post. Both approaches carry out their computation on the CPU every frame and are too slow for real-time applications.

There have been many attempts to extract silhouettes on the GPU through the use of vertex shaders, many of which require extra geometry to be needlessly rendered per edge or extra information sent to the GPU per vertex. With the advent of geometry shaders there have been a few attempts at creating new geometry on the fly whenever a silhouette is found to exist.

A recent paper being published by a group of students at a university in Venezuela describes a single-pass geometry shader that effectively extracts silhouettes in real time. The silhouettes are created as extrusions of edges detected as silhouettes, and are properly textured. Their paper presents their algorithm, covers the background I summarize in this post, and describes their solution.

Geometry shaders have access to adjacency information, which makes them perfect for silhouette extraction. At first it seems like a great idea to simply extrude edges whenever a silhouette edge is detected, but this can result in discontinuities in the silhouette where two edges meet at an angle.
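The basic test behind this idea can be sketched as follows. This is an illustrative Python translation of what a geometry shader does with triangle-adjacency data, not code from the paper: an edge lies on the silhouette when one of the two faces sharing it points toward the viewer and the other points away.

```python
# Hypothetical sketch of the per-edge silhouette test (not the paper's code).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def face_normal(v0, v1, v2):
    # Unnormalized normal is enough: we only care about its sign vs. the view.
    return cross(sub(v1, v0), sub(v2, v0))

def is_silhouette_edge(tri_a, tri_b, view_dir):
    """True when the two triangles sharing an edge face opposite ways."""
    na = face_normal(*tri_a)
    nb = face_normal(*tri_b)
    # Opposite signs -> one face is front facing, the other back facing.
    return dot(na, view_dir) * dot(nb, view_dir) < 0.0
```

Extruding a quad along every edge that passes this test is exactly the naive approach the paper improves on, since neighboring extrusions do not line up where the silhouette turns a corner.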

Another problem with this approach is generating texture coordinates. Silhouettes are often stylized in non-photorealistic renderings to match the style of the rendering. Without a continuous silhouette it is not possible to generate texture coordinates that correctly tile and rotate a texture along the silhouette.

The method described in the paper solves both the discontinuities in the silhouette edges and the generation of texture coordinates, at a slight performance cost compared to another recent geometry shader method. However, the accuracy their solution provides outweighs the slight performance hit.

The undergraduates who did the research will be presenting their solution at SIACG 2009.

Gooch Shading and Sils

Gooch Shading is a lighting model for technical illustration created by Amy Gooch, Bruce Gooch, Peter Shirley, and Elaine Cohen. They observed that in certain technical illustrations objects are shaded with warm colors and cool colors to indicate the direction of surface normals. As a side effect this also indicates depth, because as an object tapers away from us the surface normals begin to point away from us.

 

The Gooch lighting model modifies the classic Phong shading model to become

I = ((1 + l · n) / 2) * k_warm + (1 - (1 + l · n) / 2) * k_cool

where l is the light direction and n is the normal of the surface at that point. Classically pure blue is chosen as the cool color and pure yellow as the warm color, parameterized by b and y ( k_yellow = (y, y, 0) and k_blue = (0, 0, b) ). These two parameters control the strength of the temperature shift.

 

If this were the only contribution to the final output color, we would see a gradual shift from yellow in areas of high illumination to blue in areas of low or no illumination. To incorporate the object's own color we give the object a color denoted objColor. Two more parameters, alpha and beta, control the prominence of the object color and the strength of the luminance shift: alpha gives the prominence of objColor in the warm temperature areas and beta gives its prominence in the cool temperature areas.

 

Using this information k_warm = k_yellow + alpha * objColor and k_cool = k_blue + beta * objColor. These are then substituted into the first equation and the output color is calculated.
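Putting the pieces together, here is a minimal Python sketch of the model as summarized above. The parameter names (b, y, alpha, beta, objColor) follow the post; the default values are illustrative, not prescribed.

```python
# Hedged sketch of Gooch shading, assuming unit vectors l (toward the
# light) and n (surface normal). Defaults are illustrative only.

def gooch_color(l, n, obj_color, b=0.55, y=0.3, alpha=0.25, beta=0.5):
    """Return the Gooch-shaded RGB color for light direction l and normal n."""
    k_blue = (0.0, 0.0, b)
    k_yellow = (y, y, 0.0)
    # Blend the object color into the two temperature extremes.
    k_warm = tuple(ky + alpha * c for ky, c in zip(k_yellow, obj_color))
    k_cool = tuple(kb + beta * c for kb, c in zip(k_blue, obj_color))
    # Remap l . n from [-1, 1] to a blend weight t in [0, 1].
    ln = sum(li * ni for li, ni in zip(l, n))
    t = (1.0 + ln) / 2.0
    # Fully lit -> warm color, fully unlit -> cool color.
    return tuple(t * w + (1.0 - t) * c for w, c in zip(k_warm, k_cool))
```

A surface facing the light comes out as k_warm, a surface facing directly away as k_cool, and everything in between blends smoothly through the object color.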

 

The shading alone gives the viewer a good indication of the overall shape of an object, but the final image lacks definition. The creators fixed this by rendering the silhouettes of the object in addition to using Gooch shading.

 

There are many methods of extracting silhouettes from an object, but I will describe the one implemented in the demo video. The program in the demo video uses a data structure called an 'edge buffer', which stores, for each edge in a model, whether that edge belongs to a front facing face and/or a back facing face. When rendering silhouettes we iterate through the edge buffer, find the edges that lie on both a front facing and a back facing face, and render a silhouette line along each such edge.
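The structure above can be sketched in a few lines of Python. This is an assumed layout for illustration, not the demo's actual code: a dictionary maps each undirected edge to a pair of flags, one per facing direction.

```python
# Illustrative edge buffer sketch (assumed structure, not the demo's code).

def face_normal(v0, v1, v2):
    ax, ay, az = (v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2])
    bx, by, bz = (v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2])
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def silhouette_edges(triangles, view_dir):
    """triangles: list of (v0, v1, v2) vertex tuples.
    Returns the edges shared by a front facing and a back facing triangle."""
    edge_flags = {}  # edge -> [on_front_face, on_back_face]
    for v0, v1, v2 in triangles:
        n = face_normal(v0, v1, v2)
        front = sum(ni * vi for ni, vi in zip(n, view_dir)) > 0.0
        for a, b in ((v0, v1), (v1, v2), (v2, v0)):
            key = tuple(sorted((a, b)))  # undirected edge
            flags = edge_flags.setdefault(key, [False, False])
            flags[0 if front else 1] = True
    return [e for e, (f, bk) in edge_flags.items() if f and bk]
```

Rebuilding the flags is a loop over every face, which is why (as noted below) the approach only pays off when the facing information can be reused across frames.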

 

This method requires a little extra storage and a fair amount of preprocessing, which makes it much more feasible in static scenes. In dynamic scenes it is equivalent to brute force, since the facing information must be recomputed every frame.

 

Enjoy the demo.

