Dragon-003

Andrew Woo, Technical Advisor

Ray tracing is a rendering technique that dates all the way back to 1968. It has been acknowledged since the 1980s to offer many capabilities that other rendering techniques cannot easily achieve, such as:

  • Depth of field
  • Motion blur
  • Mirror reflections
  • Semi-transparency
  • Shadows
  • Full 360-degree camera FOV (field of view)
  • Extreme parallelizability

The main Achilles' heel of ray tracing is its slow performance. In the “old days”, ray tracing was not even considered a production-ready tool, except perhaps for rendering posters. That meant working within the limitations and restrictions of the non-ray-tracing renderers (a topic for another blog). However, this is the 21st century, and a lot has changed.

I visited a good friend (Sanjay) at Pixar a year ago. I already knew that they had fully converted to ray tracing for rendering all finalized shots. As we walked past their render-farm room, Sanjay asked me how many CPU-hours one typical frame would need. I guessed 100 hours; he said closer to 200+ hours. But with ray tracing being so parallelizable and a big render farm behind it, a single frame can come out in seconds. So you can check one finalized frame of rendering in just a matter of seconds: almost real-time, in an offline rendering scenario. But we would not call that real-time ray tracing, because not everyone has a huge, localized render farm at their disposal.
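(Rough arithmetic, with the farm size being my own illustrative guess rather than a Pixar figure: 200 CPU-hours is 720,000 core-seconds, so spread across a farm on the order of 100,000 cores, a frame could indeed come back in roughly 7 seconds.)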

Given just one common desktop or laptop, can real-time ray tracing be achieved today? Keep in mind that real-time rendering needs to lean on maximizing GPUs, because the goal tends to be performance (simple shading) rather than gorgeous shading (as in Pixar films). To answer this question, let's break it into two: can we achieve a real-time ray tracer for (A) polygons, and (B) voxels?

For ray tracing of polygons, the massively parallel character of ray tracing still applies and is an advantage in a highly parallelizable GPU situation. Here is what a ray tracer needs to do for each ray (one ray per thread); a minimal code sketch follows the list:

  1. Traverse a spatial data structure (such as voxels). See Figure 1, where the ray (arrow) traverses the voxels highlighted in darker grey.
  2. For each spatial region that the ray traverses, intersect against ALL polygons that reside in that spatial region. See Figure 1, where the red rectangles need to be tested against if they reside in the darker grey regions.
  3. If the ray does not hit any polygon, continue the traversal, until a polygon is hit or the ray has traversed the entire region. If the ray hits a polygon, then perform shading and you are done for this ray.
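
To make the per-ray loop concrete, here is a minimal C++ sketch of steps 1 to 3. The uniform grid, the cell-walk helper, and all the names here are illustrative assumptions for this post, not NGRAIN code; the triangle test is the standard Möller–Trumbore intersection.

    #include <cmath>
    #include <numeric>
    #include <optional>
    #include <vector>

    struct Vec3 { float x, y, z; };
    static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3  cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Ray      { Vec3 o, d; };
    struct Triangle { Vec3 a, b, c; };
    struct Hit      { float t; int tri; };

    // Standard Moller-Trumbore ray/triangle test; returns distance t on a hit.
    static std::optional<float> intersect(const Ray& r, const Triangle& tri) {
        Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
        Vec3 p = cross(r.d, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < 1e-8f) return std::nullopt;      // ray parallel to triangle
        float inv = 1.0f / det;
        Vec3 s = sub(r.o, tri.a);
        float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return std::nullopt;        // miss: outside edge 1
        Vec3 q = cross(s, e1);
        float v = dot(r.d, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return std::nullopt;    // miss: outside edge 2
        float t = dot(e2, q) * inv;
        return t > 1e-6f ? std::optional<float>(t) : std::nullopt;
    }

    // Illustrative uniform grid: each cell lists the triangles overlapping it.
    struct Grid {
        std::vector<Triangle>         triangles;
        std::vector<std::vector<int>> cells;      // triangle indices per cell
        // Stand-in for a real 3-D grid walk: a real tracer enumerates only the
        // cells the ray pierces, front to back (see the voxel sketch below).
        // The early exit in traceRay() relies on that front-to-back order.
        std::vector<int> cellsAlong(const Ray&) const {
            std::vector<int> order(cells.size());
            std::iota(order.begin(), order.end(), 0);
            return order;
        }
    };

    // One ray per thread: this whole function is what each GPU thread runs.
    std::optional<Hit> traceRay(const Ray& ray, const Grid& grid) {
        for (int c : grid.cellsAlong(ray)) {                  // step 1: traverse
            std::optional<Hit> best;
            for (int i : grid.cells[c])                       // step 2: test ALL polygons here
                if (auto t = intersect(ray, grid.triangles[i]))
                    if (!best || *t < best->t) best = Hit{*t, i};
            if (best) return best;                            // step 3: hit, shade, done
        }
        return std::nullopt;                                  // traversed everything, no hit
    }

Note how the inner loop over grid.cells[c] is exactly the troublesome step 2: its trip count depends on how many polygons landed in each cell, which the next paragraph picks up on.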

Figure 1. Traversing voxels

Step 2, in particular, is the most troublesome part of ray tracing polygons, because the number of ray-polygon tests can be large, and there tend to be many more misses than hits, especially in edge cases. Thus, even in massively parallel situations, the division of labor can at most be done at a per-ray level, so the number of ray tests performed still matters to the eventual performance. This makes the performance of real-time ray tracing of polygons unpredictable.

The good news is that step 2 is not even needed for the ray tracing of voxels, as the computation per ray involves only (see the sketch after this list):

  • Traverse the voxel structure.
  • If the current voxel is occupied, compute the shading and you are done for this ray.
  • If the current voxel is empty, continue traversal.
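
Here is a small C++ sketch of that loop, using the classic 3-D DDA grid walk from Amanatides and Woo's “A Fast Voxel Traversal Algorithm for Ray Tracing” (1987). The dense occupancy array and the coordinate conventions are illustrative assumptions for this post, not the NGRAIN data layout.

    #include <array>
    #include <cmath>
    #include <optional>
    #include <vector>

    // Illustrative voxel set: a dense occupancy grid, one flag per voxel.
    struct VoxelGrid {
        int nx, ny, nz;
        std::vector<bool> occ;
        bool at(int x, int y, int z) const { return occ[(z * ny + y) * nx + x]; }
    };

    // Walk the voxels pierced by a ray that starts inside the grid at
    // (ox, oy, oz), in voxel units, heading along (dx, dy, dz). Returns the
    // first occupied voxel (where shading would happen), or nullopt once the
    // ray leaves the grid. Note the bounded cost: at most nx + ny + nz steps
    // per ray, hit or miss.
    std::optional<std::array<int, 3>>
    traverse(const VoxelGrid& g, float ox, float oy, float oz,
                                 float dx, float dy, float dz) {
        int x = int(ox), y = int(oy), z = int(oz);
        int sx = dx >= 0 ? 1 : -1, sy = dy >= 0 ? 1 : -1, sz = dz >= 0 ? 1 : -1;
        // Ray parameter t at the next voxel boundary on each axis, and the
        // t spacing between successive boundaries (INFINITY if axis unused).
        float tx = dx != 0 ? (x + (sx > 0) - ox) / dx : INFINITY;
        float ty = dy != 0 ? (y + (sy > 0) - oy) / dy : INFINITY;
        float tz = dz != 0 ? (z + (sz > 0) - oz) / dz : INFINITY;
        float ddx = dx != 0 ? sx / dx : INFINITY;
        float ddy = dy != 0 ? sy / dy : INFINITY;
        float ddz = dz != 0 ? sz / dz : INFINITY;

        while (x >= 0 && x < g.nx && y >= 0 && y < g.ny && z >= 0 && z < g.nz) {
            if (g.at(x, y, z))                       // occupied: shade and finish
                return std::array<int, 3>{x, y, z};
            // Empty: step across whichever voxel boundary comes first.
            if (tx <= ty && tx <= tz)      { x += sx; tx += ddx; }
            else if (ty <= tz)             { y += sy; ty += ddy; }
            else                           { z += sz; tz += ddz; }
        }
        return std::nullopt;                         // exited the grid: a miss
    }

There is no per-cell polygon loop anywhere in this sketch; every iteration does a constant amount of work, which is where the predictability comes from.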

So, as you can see, the cost of ray tracing voxels is mostly traversal. Traversal cost is bounded by the resolution of the voxel structure, so the performance is quite predictable. And this is exactly what we are finding with the NGRAIN ray tracer, which is why we are very excited about it.

However, and you know there is always a “however”, there are caveats to consider:

  • Tablets and phones do not have nearly the GPU compute power and massive parallelism that desktops and laptops do, so even a voxel ray tracer is not real-time on them, for now anyway. Thus the NGRAIN splatting rendering technique continues to be active on those smaller platforms.
  • It is too bad that phones are not ready for ray tracing yet; when they are, VR will get very interesting, especially for wide-FOV VR headsets. Currently, wide FOV is handled by doing multiple rendering passes (smaller-FOV render passes that add up to the wide FOV), which produces artifacts at the borders between the separate renderings. Ray tracing needs only one render pass, with no border artifacts to worry about (see the sketch after this list).
  • The above discussion assumes that only one voxelset exists. The discussion becomes more complicated (a topic for another blog) when there is a large number of voxelsets.
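
As an aside on that second caveat, here is a small sketch of why wide FOV comes naturally to a ray tracer: camera rays can be generated directly from angles, so a single pass can cover anything up to a full 360-degree panorama. The equirectangular mapping below is just one illustrative choice, not how any particular headset works; a real VR renderer would use the headset's own lens mapping.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Map pixel (px, py) of a width x height image to a unit ray direction
    // using an equirectangular (longitude/latitude) camera. hfov and vfov
    // are in radians; hfov = 2*pi covers the full 360 degrees in a single
    // pass, with no seams between sub-frustum renders.
    Vec3 panoramicRayDir(int px, int py, int width, int height,
                         float hfov, float vfov) {
        float lon = ((px + 0.5f) / width  - 0.5f) * hfov;   // left..right
        float lat = (0.5f - (py + 0.5f) / height) * vfov;   // bottom..top
        return { std::cos(lat) * std::sin(lon),             // right axis
                 std::sin(lat),                             // up axis
                 std::cos(lat) * std::cos(lon) };           // forward axis
    }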