Simulation / Modeling / Design

Porsches, Storm Troopers, and Ray Tracing: How NVIDIA and Epic are Redefining Graphics

Throughout 2018, NVIDIA and Epic Games have led the charge in showing what development teams can pull off with real-time ray tracing. During GDC, they worked together to create a short demo entitled “Reflections.”
 

Amazing, isn’t it? We’ve reached a stage where real-time ray tracing techniques can be used to construct a sequence that looks so real, it could have been pulled from a live-action movie.
On the first day of SIGGRAPH 2018, NVIDIA and Epic Games showed off another eye-popping clip: “The Speed of Light”.

If you saw this as a Porsche ad on TV, would you even realize the car and environments were digital?
Since the release of “Toy Story” in 1995, gamers have pined for a day when video games would look like CG movies. NVIDIA and Epic Games are proving the time has finally come.
We had the chance to ask Ignacio Llamas, Senior Manager of Real-Time Ray Tracing Software at NVIDIA; Juan Cañada, Ray Tracing Lead at Epic Games; and Francois Antoine, Director of Embedded Systems at Epic Games, some questions about the work they’ve done on ray tracing:
Question 1: If “Speed of Light” had been made through traditional techniques like light baking or lightmaps, how would the end result differ? How would the development time change?
Answer [Ignacio Llamas, NVIDIA]: The entire demo is about fully dynamic studio lighting with multiple area lights. Using lightmaps would not be an option for this kind of setup. When everything is dynamic, traditional rasterization-based techniques are simply insufficient to correctly capture area light shadows, artifact-free reflections, and diffuse light interactions. There was simply no way to do this without real-time ray tracing and the performance of Turing GPUs.
Question 2: “Speed of Light” delivered on the promise of photorealistic results from ray tracing. Can you give us a sense of how long it took to produce that clip? How big was the team that worked on it?
Answer [Ignacio Llamas, NVIDIA]: From a technology standpoint we started from where the “Reflections” demo left off — which means ray traced area lights, reflections and ambient occlusion. We had about three months to push the technology to the next level to meet the higher demand of fully dynamic ray traced lighting in this demo. To accomplish this we had about eight rendering engineers across NVIDIA and Epic involved to various degrees.
Answer [Francois Antoine, Epic Games]: The “Speed of Light” demo is actually made of two components, the cinematic ‘attract mode’ and the interactive lighting studio, and both use the exact same vehicle asset. If I were to break it down by sub-project, we had two people working on the Speedster asset, three people working on the interactive lighting studio, and about five people working on the cinematic. The production of the entire “Speed of Light” project took about eight weeks and happened in parallel with the development of new ray-traced rendering features.
Question 3: Is “Speed of Light” using cinematic-quality assets, or in-game quality assets?
Answer [Ignacio Llamas, NVIDIA]: The original CAD model is as detailed as you can get. The in-engine version, tessellated to either 10 or 40 million polygons, is in a range we can consider cinematic quality. In addition to the polygon count, the other thing that makes the model cinematic quality is the physically-based materials, which have an amazing amount of detail and closely match reference samples and car photography.
Answer [Francois Antoine, Epic Games]: The Porsche Speedster asset used in “Speed of Light” was tessellated directly in Unreal Engine’s Datasmith using Porsche’s actual CATIA CAD manufacturing files. The first iteration of the Speedster was 40 million polygons, which we then selectively re-tessellated down to just shy of 10 million polygons. Ignacio had surprised us by saying that this optimization would not significantly impact performance when rendering with RTX, and that proved to be the case. Project performance was very similar with either version of the car! This is a real plus for the visualization of large enterprise datasets.
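One plausible back-of-envelope reading of why the two versions perform so similarly (our framing, not a claim from the team): the per-ray cost of traversing a bounding volume hierarchy grows roughly logarithmically with triangle count, so quadrupling the polygon count barely deepens the tree:

\[
\log_2(40\times10^6)\approx 25.3 \qquad \text{vs.} \qquad \log_2(10\times10^6)\approx 23.3
\]

That is only about two extra traversal levels per ray, under ten percent more work. Actual performance also depends on BVH quality, shading, and memory traffic, so treat this as intuition rather than a measurement.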

Question 4: The materials in the demo were highly varied, with a strong focus on reflective and translucent surfaces… how did you build those materials for the demo?
Answer [Francois Antoine, Epic Games]: Indeed, when we first got wind of Turing’s ray-tracing feature set, we immediately thought of an automotive-focused project. Cars are all about smooth, curvy reflections — what we call “liquid lines” in industry speak — and we couldn’t think of any other subject that could benefit more from Turing’s ray tracing. In order to get these reflective and translucent materials to look as accurate as possible, we emphasized the use of high-quality real-world reference, in some cases going as far as ordering car parts and disassembling them to better understand their internal structures and how they interact with light. This is exactly what we did with the Speedster’s tail lights — this newfound understanding, coupled with the more physically accurate behavior of ray tracing, allowed us to achieve much more realistic tail lights than we previously could.
Question 5: Is the entire demo ray-traced, or have some rasterization techniques been used?
Answer [Juan Cañada, Epic Games]: The demo uses hybrid techniques where a raster base pass is calculated first. On top of that, ray-traced passes are launched to calculate complex effects that would be very hard to achieve with traditional raster techniques.
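To make that hybrid split concrete, here is a minimal C++ sketch of the frame structure Juan describes. Every function is a hypothetical stand-in with a stub body (this is not Unreal Engine or DXR code); the point is the ordering: rasterize the base pass first, then layer denoised ray-traced passes on top.

```cpp
#include <cstdio>

struct GBuffer { /* depth, normals, albedo, roughness per pixel */ };
struct Image   { /* shaded pixel buffer */ };

// Stage 1: rasterize primary visibility -- the raster base pass.
GBuffer RasterizeGBuffer()                    { return {}; }
// Stage 2: ray-traced passes for effects raster handles poorly.
Image   TraceAreaLightShadows(const GBuffer&) { return {}; }
Image   TraceReflections(const GBuffer&)      { return {}; }
Image   TraceAmbientOcclusion(const GBuffer&) { return {}; }
// Stage 3: denoise the low-sample-count ray-traced results.
Image   Denoise(const Image&)                 { return {}; }
Image   Composite(const GBuffer&, const Image&,
                  const Image&, const Image&) { return {}; }

Image RenderFrame() {
    GBuffer gbuf      = RasterizeGBuffer();           // raster first
    Image shadows     = Denoise(TraceAreaLightShadows(gbuf));
    Image reflections = Denoise(TraceReflections(gbuf));
    Image ao          = Denoise(TraceAmbientOcclusion(gbuf));
    return Composite(gbuf, shadows, reflections, ao); // final frame
}

int main() { RenderFrame(); std::puts("frame rendered (stub)"); }
```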
Question 6: There’s so much to take in watching the “Speed of Light” clip. Are there any little details in the sequence that people might be prone to miss? Which moments show off ray tracing most effectively?
Answer [Francois Antoine, Epic Games]: There is a lot more innovative tech there than you will notice — and that is a good thing! The tech shouldn’t attract your attention; it should just make the image look more plausible. For example, the light streaks reflecting in the car are not coming from a simple texture on a plane (as would traditionally be done in rendering) but are instead animated, textured area lights with ray-traced soft shadows. This mimics how these light streaks would be created in a real photo studio environment, with light affecting both the diffuse and specular components of the car’s materials and creating much more realistic light behavior. Oh, and it’s amazing to finally have proper reflections on translucency, thanks to ray tracing!
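The soft shadows Francois mentions fall out of treating the light as an area rather than a point: visibility is averaged over many sample points on the light’s surface. A hedged sketch, with Occluded() as a stub standing in for a real shadow-ray cast:

```cpp
#include <cstdlib>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Stub: in a real renderer this traces a shadow ray from p to q.
bool Occluded(Vec3 p, Vec3 q) { return false; }

// Fraction of a rectangular light visible from point p: 1 = fully lit,
// 0 = umbra, values in between form the penumbra (the soft shadow edge).
float RectLightVisibility(Vec3 p, Vec3 corner, Vec3 edgeU, Vec3 edgeV,
                          int samples) {
    int visible = 0;
    for (int i = 0; i < samples; ++i) {
        float u = std::rand() / float(RAND_MAX);  // random point on the light
        float v = std::rand() / float(RAND_MAX);
        if (!Occluded(p, corner + edgeU * u + edgeV * v)) ++visible;
    }
    return visible / float(samples);
}

int main() {
    Vec3 p{0, 0, 0}, corner{-1, 2, -1}, edgeU{2, 0, 0}, edgeV{0, 0, 2};
    RectLightVisibility(p, corner, edgeU, edgeV, 64);
}
```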

Question 7: Can you give us some detail about the creative process behind “Speed of Light”? How did the project evolve from initial pitch to final product?
Answer [Francois Antoine, Epic Games]: Sure. The creative idea happened during an early debriefing call with Epic CTO Kim Libreri. We got to talking about a way to visually manifest the essence of the Porsche 911 along with the massive advancement in lighting fidelity that RTX and Turing would bring to the real-time industry, and the theme of “Speed” and “Light” was born! The initial idea was developed into a real-time cinematic and an interactive piece that transition seamlessly into one another, to drive home the point that both use the same high-quality ray tracing, the same area lights, the same car asset, and the same polycount. Both the cinematic and the lighting studio are rendered in real time with a level of fidelity that I think is fabulous, especially for first-generation hardware!

Question 8: How difficult is it to get ray-traced reflections to look photorealistic?
Answer [Ignacio Llamas, NVIDIA]: Rendering ray-traced reflections in real time with a single sample per pixel presents a few challenges. The first is supporting the full range of roughness, from pure mirror to glossy surfaces. This requires a pretty sophisticated denoiser. Edward Liu at NVIDIA has been instrumental in the development of this denoising technology. It has improved significantly since we started work on “Reflections” and is now able to handle the full range of glossiness/roughness, closely matching the reference obtained with hundreds or thousands of samples while using a single reflection sample. The second challenge is computing accurate dynamic lighting in reflections that closely matches the lighting on surfaces seen directly from the camera. This requires tracing additional rays for secondary bounces and area light shadows in reflections, and then applying additional denoising to this lighting.
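As a rough illustration of that single-sample scheme (our sketch only; NVIDIA’s actual sampling and denoiser are far more sophisticated, and SampleMicrofacetNormal, TraceOneRay, and Denoise below are stand-ins):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Stub: perturbs the normal more as roughness rises (e.g. GGX sampling).
Vec3 SampleMicrofacetNormal(Vec3 n, float roughness) { return n; }
Vec3 TraceOneRay(Vec3 origin, Vec3 dir)              { return {}; }  // stub
void Denoise(std::vector<Vec3>& buf) { /* spatiotemporal filter */ }

Vec3 Reflect(Vec3 v, Vec3 m) { return v - m * (2.0f * dot(v, m)); }

// One reflection ray per pixel: the rougher the surface, the noisier the
// raw result, and the denoiser reconstructs the glossy appearance from it.
void ReflectionPass(const std::vector<Vec3>& pos, const std::vector<Vec3>& nrm,
                    const std::vector<Vec3>& viewDir,
                    const std::vector<float>& roughness,
                    std::vector<Vec3>& out) {  // out pre-sized to pixel count
    for (size_t i = 0; i < pos.size(); ++i) {
        Vec3 m = SampleMicrofacetNormal(nrm[i], roughness[i]);
        out[i] = TraceOneRay(pos[i], Reflect(viewDir[i], m));
    }
    Denoise(out);  // 1 spp in, something close to a many-spp reference out
}
```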
Question 9: From a game development perspective, what are the long-term advantages to supporting ray-tracing in your development pipeline today?
Answer [Juan Cañada, Epic Games]: Ray tracing will not only allow us to simulate more sophisticated optical phenomena than have been seen to date in real-time graphics; it also brings remarkable advantages to the workflow. Ray tracing is more predictable and generates fewer visual artifacts than other techniques. Code also tends to be simpler: while there will be a transition period where mixing raster and ray tracing will require advanced machinery to get both worlds working together, in the long term ray tracing will lead to code that is easier to maintain and expand.
Question 10: Do you have suggestions for how developers can use rasterization and ray tracing together to maximize the efficiency of their art pipeline? It seems like we’re experiencing a best-of-both-worlds moment — rasterization is still great for its efficient geometry projection and depth calculation, while ray tracing makes lighting faster.
Answer [Ignacio Llamas, NVIDIA]: For developers, my advice is to use the best tool for each problem. Rasterization is great at handling the view from the camera with efficient texture LOD computation. As long as the geometric complexity is below some point, it is still the right answer. If the amount of geometry goes up significantly, ray tracing can be more efficient. Then use ray tracing to solve all those problems that it is best at, such as dynamic area light shadows, reflections, ambient occlusion, diffuse GI, translucency with physically correct absorption and refraction, or caustics. For artists, the choice regarding using rasterization or ray tracing may already be made for them by the engine developer. I think what’s important for artists and their pipeline is making sure their flows adapt to enable the best possible quality that can be achieved now that ray-tracing is enabling looks that were not possible before. This means learning about the range of options they have, such as correct area lights and reflections, and making informed decisions on material parameters based on this. It may also mean for example ensuring that materials have new physically based parameters, such as correct absorption and index of refraction, which may have been ignored before.
Question 11: Ray traced diffuse global illumination is a solution that should make life easier for artists — can you give us any insights on how to work with it effectively?
Answer [Ignacio Llamas, NVIDIA]: One piece of advice is to experiment with a few lights at a time, to see where the indirect lighting they contribute is coming from. In particular, spot lights, rect lights, and directional lights, all of which are directed, tend to show off indirect diffuse lighting better than point or spherical lights. I have seen lighting setups in some projects where we initially had direct lighting on every surface, which made it difficult to appreciate the indirect lighting. So turn off some lights and see what things look like with fewer of them, now that indirect lighting will be there for you to enjoy. Real-time ray traced diffuse GI means you don’t need to add as many lights, bake lightmaps, or manually tweak ambient lighting terms.
Answer [Francois Antoine, Epic Games]: The ray-traced dynamic diffuse global illumination was a godsend on this project. Dynamic GI is the holy grail of real-time computer graphics, and we got to have it on this project! It allowed us to light the car more efficiently, with fewer lights and more nuanced lighting behavior than we could have achieved previously.
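For readers new to the term, here is roughly what a one-bounce ray-traced diffuse GI pass computes at each shaded point. This is our simplified illustration, not the engine’s implementation; both stub functions stand in for real scene queries:

```cpp
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Stubs: radiance arriving from a direction, and a cosine-weighted
// direction around the normal (which folds the cosine term into the pdf).
Vec3 IncomingRadiance(Vec3 origin, Vec3 dir) { return {}; }
Vec3 CosineSampleHemisphere(Vec3 normal)     { return normal; }

// Indirect diffuse light at point p: average the radiance carried by a few
// hemisphere rays. At 1-2 rays per pixel this is noisy; a denoiser cleans it.
Vec3 DiffuseGI(Vec3 p, Vec3 n, int rays) {
    Vec3 sum{0, 0, 0};
    for (int i = 0; i < rays; ++i)
        sum = sum + IncomingRadiance(p, CosineSampleHemisphere(n));
    return sum * (1.0f / rays);
}
```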
Question 12: The “Speed of Light” demo at SIGGRAPH and the “Battlefield V” demo at Gamescom impressed with convincing translucencies. How does ray-traced translucency work?
Answer [Ignacio Llamas, NVIDIA]: Ray traced translucency works by tracing primary rays from the camera, choosing to intersect only the translucent geometry in the scene, and doing so up to the first opaque hit, which is computed by rasterizing the G-buffer. For every translucent layer found along the ray we then trace reflection and shadow rays; these reflection rays may themselves hit additional translucent layers and will also trace additional shadow rays. It is pretty much what people call ‘Whitted-style ray tracing’. We don’t do any denoising for ray traced translucency at the moment.
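A sketch of that Whitted-style loop in C++ (our paraphrase of the description above, not the engine’s code; every function body is a stub so the snippet compiles):

```cpp
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Ray { Vec3 origin, dir; };
struct Hit { bool found; float dist; /* normal, material, ... */ };

// Stubs: in a real renderer these query the scene and the rasterized G-buffer.
Hit   NearestTranslucentHit(Ray, float maxDist) { return {false, maxDist}; }
float OpaqueDepthFromGBuffer(Ray)               { return 1e9f; }
Vec3  OpaqueShading(Ray)                        { return {}; }
Vec3  DirectLightWithShadowRays(Ray, Hit)       { return {}; }
float FresnelReflectance(Ray, Hit)              { return 0.1f; }
Ray   Reflected(Ray r, Hit)                     { return r; }
Ray   Refracted(Ray r, Hit)                     { return r; }

// Trace translucent layers only, up to the opaque depth the raster pass
// already resolved. Each layer spawns shadow, reflection, and refraction
// rays; reflection rays can themselves hit further translucent layers.
Vec3 TraceTranslucent(Ray ray, float maxDist, int depth) {
    Hit h = NearestTranslucentHit(ray, maxDist);
    if (!h.found || depth == 0)
        return OpaqueShading(ray);  // fall through to the opaque G-buffer hit
    float kr = FresnelReflectance(ray, h);
    Vec3 direct  = DirectLightWithShadowRays(ray, h);
    Vec3 reflect = TraceTranslucent(Reflected(ray, h), maxDist, depth - 1);
    Vec3 refract = TraceTranslucent(Refracted(ray, h), maxDist - h.dist, depth - 1);
    return direct + reflect * kr + refract * (1.0f - kr);
}

int main() {
    Ray primary{{0, 0, 0}, {0, 0, 1}};
    TraceTranslucent(primary, OpaqueDepthFromGBuffer(primary), /*depth=*/4);
}
```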
Question 13: When people first saw the real-time ray tracing technology demo “Reflections” at GDC, they couldn’t believe that level of performance would be possible without a $60,000 DGX Station. That same demo was shown at SIGGRAPH, running on a Quadro RTX 6000! How does performance on the Quadro compare to the performance of the same demo on the DGX Station?
Answer [Ignacio Llamas, NVIDIA]: The framerate of the cinematic sequence on the Quadro RTX 6000 averages around 50 FPS at 1440p with DLSS. The original demo ran at 24 to 30 FPS at 1080p on a DGX Station.
Question 14: Real-time ray tracing on consumer grade hardware was not expected to be possible for at least another five years — what factors sped up the timetable?
Answer [Ignacio Llamas, NVIDIA]: At NVIDIA we have been working on ray tracing technology for GPUs for over ten years, first with foundational research work on algorithms to build BVHs and traverse them efficiently, and second with the creation of easy-to-use ray tracing APIs like OptiX. As GPUs evolved, it became clear that we were reaching a point where they were starting to be useful for some selective real-time ray tracing purposes. But to cross the point where the investment starts to pay off for a developer, we needed the help of fixed-function hardware that made ray tracing an order of magnitude faster (Turing’s RT Cores). We then realized that to really enable ray-traced visuals that are stunning and not possible before, we also needed to develop denoisers that provide another order of magnitude of acceleration. Overall, they combine to provide two orders of magnitude of acceleration compared to what was possible before. Finally, we also recognized the importance of enabling developers with standard ray tracing APIs that extend their current graphics APIs in a simple and elegant way, enabling easy integration. It is the combination of all of this work that has enabled what I would consider the most significant leap in real-time computer graphics since programmable shading.
Answer [Juan Cañada, Epic Games]: Several factors have played a fundamental role here. The first is the outstanding work done on the hardware side by NVIDIA to produce graphics cards capable of tracing enough rays per second to generate high-quality ray-traced images in real time. Second, the release of DXR provides an API that allows developers to build a ray tracer architecture quickly, without being forced to implement low-level DX12 machinery. Third, advances in real-time denoisers have been crucial, making it possible to clean up images generated with low ray budgets. Fourth, the immense amount of knowledge accumulated in offline ray tracing over the last twenty years lets us move faster than if this were an entirely new area.
 
