It's funny how things in academia are rarely heard about, even when they are incredibly novel and interesting. I think it's because of their limited application, especially in the artistic rendering world. (Of course, the inability of most academics to commercialize their work also plays a part, so it rarely makes it to press.)
A master's thesis from 2001 describes a wonderful way of animating Chinese landscape paintings and panoramas in order to create a three-dimensional walk-through. The method uses image-based modeling and rendering (IBMR) and improves on a previous technique called Tour Into the Picture (TIP). Their method, multi-perspective TIP, fixes many of the disadvantages of regular TIP, the main one being short animation times. With multi-perspective TIP the animations can be much longer: TIP animations were generally ten seconds long, while the animation on their research page is one minute and thirty seconds. (IBMR is not one of my areas of interest, so I'm unable to summarize their method, but they have very pretty results.)
I highly recommend you check out their video section. They really are able to create pseudo-three-dimensional models of the scene that can be walked into (i.e., they have depth) and through.
NPR Line Drawings Really Work!
A new study by Forrester Cole et al., "How Well Do Line Drawings Depict Shape?", presented at SIGGRAPH 2009, evaluates the line-drawing methods developed over the past few years (which are quite comprehensive, and which Forrester Cole seems to have devoted his entire graduate career to, so let's thank him).
In their study, participants oriented what are called gauges on line drawings of 3D objects. Each gauge is a disc with a line representing its normal, and participants had to orient that normal; a correct orientation would make the disc look like it was actually lying on the surface at its position. The position of each gauge was fixed, and each gauge was initially superimposed over the model so as not to cue the participant. The gauges also did not interact with the models.
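As a concrete illustration (this is not the study's actual code, and the scoring rule is my assumption), a natural way to evaluate such a task is the angle between the participant's gauge normal and the ground-truth surface normal at that pixel:

```python
import numpy as np

def gauge_error_degrees(participant_normal, true_normal):
    """Angle, in degrees, between a participant's gauge setting and the
    ground-truth surface normal (hypothetical scoring helper)."""
    a = participant_normal / np.linalg.norm(participant_normal)
    b = true_normal / np.linalg.norm(true_normal)
    # Clamp to [-1, 1] to avoid NaNs from floating-point drift.
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

# Example: a gauge tilted 30 degrees off the true normal in the x-z plane.
true_n = np.array([0.0, 0.0, 1.0])
set_n = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
print(gauge_error_degrees(set_n, true_n))  # ~30.0
```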
The study compares six different styles of rendering, among them fully shaded images, apparent ridges, suggestive contours, and an artist's line drawing of the same object (the authors note that plain contours were rendered over all models other than the artist's image).
Their results are long and detailed and take up the majority of the paper. They show that shaded images are the best for depicting shape; however, that was an expected result and not the point of the experiment. In summary, their data shows that each line-drawing method has its own strengths and weaknesses: most were comparable to, and at times better than, an artist's drawing, and some methods suit certain types of models while other methods suit others.
This is good news for NPR, but I still want to see these methods developed into something that doesn't bog down processing and destroy interactive frame rates. Although some methods achieve interactive frame rates (30-60 fps, though papers sometimes quote rates as low as 5 fps as "interactive"), these methods are still unsuitable for real-time environments such as games and walk-throughs that are not prerecorded.
Line Drawings (via Abstracted Shading)
As the final project in one of my undergraduate courses, I implemented the line-drawing algorithm described by Lee, Markosian, Lee, and Hughes here. Their algorithm was implemented as a real-time post-processing shader, and they achieved some very nice results; I suggest you check out their images section. For now, here are a couple of pictures of my results (click on them to see them at full size and in detail).
When I implemented this, I did so in a ray tracer. This was terribly slow, both because of the algorithm itself (multiple passes, described below) and because I used no ray-tracing optimizations or optimized data structures.
Their algorithm first renders a tone image: a gray-scale image of the scene that captures the Phong diffuse lighting of the objects. This tone image is then blurred with a Gaussian kernel whose width matches the width we want the lines to be rendered at, for the same reason you would blur an image before detecting blobs with a Laplacian filter. Finally, the blurred image is scanned for thin areas of dark shading; these areas are the ridges of the tone image.
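To make this pass concrete, here is a minimal CPU-side sketch of the blur step in Python (using SciPy). The mapping from line width to Gaussian sigma is my own assumption, not a value from the paper:

```python
from scipy.ndimage import gaussian_filter

def blurred_tone(tone, line_width_px):
    """Blur a gray-scale tone image (values in [0, 1]) with a Gaussian whose
    support roughly matches the desired line width, ahead of ridge detection.
    The width-to-sigma mapping below is an assumed heuristic."""
    sigma = line_width_px / 2.0  # assumption: half the line width
    return gaussian_filter(tone, sigma=sigma)
```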
The ridge-detection routine described in the paper uses a local parametrization of a function at each pixel, which allows the matrix used to find the function to be precomputed. The matrix arises from solving the system by least squares. Wherever the maximum principal curvature of the fitted function exceeds a threshold, a "line" is output. Since this is all implemented in a fragment shader, the output is really a single dark pixel whose opacity is modulated by its distance from the principal axis of the function fit at that pixel, which anti-aliases the lines.
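Below is an unoptimized Python sketch of that idea: at each pixel, fit a quadratic f(x, y) = ax² + bxy + cy² + dx + ey + g to the surrounding tone values by least squares (the pseudo-inverse of the design matrix is computed once, since the neighborhood offsets never change), then threshold the largest eigenvalue of the Hessian, i.e. the maximum principal curvature. The radius and threshold values are illustrative, and the opacity modulation along the principal axis is omitted:

```python
import numpy as np

def ridge_mask(blurred, radius=2, curvature_threshold=0.05):
    """Mark pixels lying on thin dark ridges of a blurred tone image.
    A CPU sketch of the paper's shader; parameter values are illustrative."""
    h, w = blurred.shape
    offs = [(dx, dy) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1)]
    # Design matrix rows: [x^2, xy, y^2, x, y, 1] for each fixed offset.
    A = np.array([[x*x, x*y, y*y, x, y, 1.0] for x, y in offs])
    pinv = np.linalg.pinv(A)  # precomputed once, reused at every pixel

    mask = np.zeros((h, w), dtype=bool)
    for py in range(radius, h - radius):
        for px in range(radius, w - radius):
            samples = np.array([blurred[py + y, px + x] for x, y in offs])
            a, b, c, _, _, _ = pinv @ samples
            # Hessian of the fitted quadratic: [[2a, b], [b, 2c]].
            eigvals = np.linalg.eigvalsh(np.array([[2*a, b], [b, 2*c]]))
            # Dark ridges are tone "valleys": large positive curvature.
            mask[py, px] = eigvals.max() > curvature_threshold
    return mask
```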
By dynamically adjusting line width based on distance from the viewer, the system allows for level of abstraction: as lines get further away, they become thinner.
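A one-line falloff rule captures the idea; the inverse-depth mapping below is a plausible choice of mine, not necessarily the paper's exact formula:

```python
def line_width_at_depth(base_width_px, depth, min_width_px=0.5):
    """Thin lines out with viewer distance for level-of-abstraction control.
    Inverse-depth falloff is an assumed rule, clamped to a minimum width."""
    return max(base_width_px / max(depth, 1.0), min_width_px)
```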
The final image is a combination of the output lines and a toon-shaded (either X-Toon or cel-shaded) rendering of the scene. The algorithm normally extracts dark lines; highlights are obtained by swapping the diffuse tone image for a specular tone image and inverting the line color. My ray-tracing system took so long to render because it had to trace a diffuse tone image, a specular tone image, and a toon-shaded image of the scene before it could combine everything into the final result. A GPU implementation is much more conducive to multiple passes than a ray-tracing approach.
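Here is a rough sketch of how the final combination step could look, assuming the dark-line and highlight-line images are opacity masks in [0, 1]; the exact blend the paper uses may differ:

```python
import numpy as np

def composite(toon_rgb, dark_lines, highlight_lines):
    """Combine a toon-shaded pass (H x W x 3, values in [0, 1]) with the two
    extracted line masks (H x W). Dark lines, from the diffuse tone image,
    pull the toon pass toward black; highlight lines, from the specular tone
    image with inverted line color, push it toward white. Assumed blend."""
    out = toon_rgb * (1.0 - dark_lines[..., None])         # darken along ridges
    out = out + (1.0 - out) * highlight_lines[..., None]   # brighten highlights
    return np.clip(out, 0.0, 1.0)
```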
Taking an image-processing approach to line drawing solves one major issue in rendering line drawings: it removes the need for curvature information from the models themselves, which by itself is a huge boost in frame rate. However, I believe that until there are efficient object-space algorithms for accurately extracting lines from 3D objects, no such line drawings will be accurate and efficient enough to run in real time while mimicking an artist's line drawings.
NPAR 2009
NPAR 2009 is coming up in a few weeks. This is my favorite time of year because a lot of freely available papers on the latest NPR research are released. An unofficial schedule has been posted on their website; if you are interested in the latest NPR research, check it out for the speakers and the set of papers being presented. Most of these papers are freely available (just Google their titles).
What I wish I didn't have to miss this year is the talk from Ubisoft about Prince of Persia. For those of you who haven't played the new Prince of Persia, all you need to know is that it makes extensive use of NPR. Jean-François, the lead programmer on the new Prince of Persia, will be giving a talk that focuses on NPR while outlining the game's three-year development cycle.
I hope they release a summary of the talk on ACM SIGGRAPH, because I am tired of photorealism in games and want to see NPR take hold. Besides that, NPR in games is one of my main interests.