High Dynamic Range displays allow for more vibrant and realistic graphics in games. We are reaching a tipping point where game developers can really start to take advantage of HDR. NVIDIA has been researching the technology for some time (we showed HDR demos at SIGGRAPH 2014 and 2015) and has many insights on the benefits and challenges. Read this article to find out more.

While HDR has been a big part of gaming for roughly a decade, it has been all about generating good data and then tone mapping it for display on a monitor with a very limited range. Over that period, occasional demos of HDR displays offered a promise of the future but delivered nothing to actual consumers. The good news is that the future is now here, with some TVs reaching maximum luminance levels of 1000 nits (as much as 5-10x the luminance of the monitor you may be viewing this on). Additionally, the UHD Alliance, an industry consortium, has published a standard defining what it means to be an HDR-compliant or HDR-compatible set (minimum contrast and luminance levels; more details on the UHD Alliance specs can be found here).

The old standard in displays that we’re all used to is sRGB (or Rec. 709 if you are referring to HDTVs). The new standard that we are pushing towards is often called HDR10 or UHDA HDR, which builds on the wide gamut of BT.2020 (often called Rec. 2020). While the old standard had a reference level of 80 nits of luminance (sometimes rounded to 100 nits), the new standard is designed to represent up to 10,000 nits of luminance. First-generation HDR-compliant displays will generate 1000 nits of luminance, and there will likely be a spectrum of HDR-compatible displays (HDR OLEDs are dimmer, but make up for it with enhanced contrast). In addition to increasing the supported luminance range, the color gamut is also being widened, allowing for the display of much more saturated colors. While the standard defines a new and extremely large gamut, first-generation devices should be expected to cover the somewhat smaller DCI-P3 gamut, also shown below (still much larger than the sRGB gamut we use today).


Comparison of breadth of colors supported by different gamut standards. Pointer’s Gamut is a proxy for colors frequently seen in real life.
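
To make the gamut change concrete, here is a minimal sketch of widening linear Rec. 709 (sRGB primaries) colors into linear BT.2020, using the standard primary-conversion matrix published in ITU-R BT.2087; the Float3 type and function name are ours for illustration.

```cpp
// Minimal sketch: widen linear Rec. 709 (sRGB primaries) colors into linear
// BT.2020. The Float3 type is illustrative, not from any particular engine.
struct Float3 { float r, g, b; };

// Primary-conversion matrix from ITU-R BT.2087. Both spaces share the D65
// white point, so neutrals map to neutrals; saturated Rec. 709 colors land
// well inside the larger BT.2020 gamut.
static const float kRec709ToRec2020[3][3] = {
    { 0.6274f, 0.3293f, 0.0433f },
    { 0.0691f, 0.9195f, 0.0114f },
    { 0.0164f, 0.0880f, 0.8956f },
};

Float3 Rec709ToRec2020(const Float3& c)
{
    return {
        kRec709ToRec2020[0][0] * c.r + kRec709ToRec2020[0][1] * c.g + kRec709ToRec2020[0][2] * c.b,
        kRec709ToRec2020[1][0] * c.r + kRec709ToRec2020[1][1] * c.g + kRec709ToRec2020[1][2] * c.b,
        kRec709ToRec2020[2][0] * c.r + kRec709ToRec2020[2][1] * c.g + kRec709ToRec2020[2][2] * c.b,
    };
}
```

Note that a fully saturated Rec. 709 red (1, 0, 0) becomes roughly (0.63, 0.07, 0.02) in BT.2020: today’s most saturated content sits well inside the new container, which is exactly the headroom that wider-gamut assets can grow into.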

Now, you’re probably asking yourself, “I already render HDR, so how do I display it?” The answer is that it takes a bit of work to do it right. It isn’t very hard or expensive, but care should be taken to produce a good result. The first thing to understand is the concept of scene-referred versus output-referred. Scene-referred means that the data in the image linearly represents the photons striking the camera. Output-referred means that the image is encoded for display on a device with an expected luminance level. Because even the best displays come nowhere close to the range of luminance in the real world, there still needs to be a tone mapping step to compress and convert from scene-referred to output-referred, even on HDR displays. Further, you need to consider that user interfaces are generally authored in the output-referred sRGB space, which means some care needs to be taken in compositing them into the HDR scene. (You don’t want your user squinting at a 1000 nit white dialog, so white sRGB content clearly doesn’t mean maximum brightness.)
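
Here is a sketch of one way to handle the UI problem, assuming compositing happens in linear light just before the output encode; the 150 nit paper-white value is a hypothetical choice in the comfortable 100-200 nit range, and the function names are ours.

```cpp
#include <cmath>

// Decode one sRGB-encoded channel (0..1) to linear light (0..1).
float SrgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// Composite an output-referred sRGB UI pixel over a tone mapped HDR scene
// pixel, per channel, with both sides expressed in nits.
// uiPaperWhiteNits is a hypothetical choice: where sRGB white should land on
// the HDR display -- far below the 1000 nit peak, so dialogs stay readable.
float CompositeUiChannel(float sceneNits, float uiSrgb, float uiAlpha,
                         float uiPaperWhiteNits = 150.0f)
{
    float uiNits = SrgbToLinear(uiSrgb) * uiPaperWhiteNits;
    // Standard "over" blend, done in linear light.
    return uiNits * uiAlpha + sceneNits * (1.0f - uiAlpha);
}
```

Pinning sRGB white to roughly 100-200 nits keeps UI elements at a familiar brightness while leaving the rest of the display’s range free for the scene’s highlights.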

A good solution that we’ve worked with for getting from scene-referred to output-referred is the Academy Color Encoding System (ACES). It was developed by a technical working group set up by the Academy of Motion Picture Arts and Sciences to deal with the capture, archival, and display of digital images with high dynamic range and wide gamuts. Within it is a family of reference tone mappers designed to cover output to a range of display luminance levels. The process focuses on generating a filmic look that most developers like. A central part of ACES is that it applies a filmic sigmoid curve in an extremely wide color space, so overly bright colors naturally desaturate to white as a simple consequence of how the curve compresses data approaching the maximum. The sigmoidal curve is similar to how our eyes respond, and the desaturation means that luminance is retained better (a fully bright pure blue has only 15% of the luminance of a fully bright pure white). In short, ACES is great because it provides a framework for targeting a wide range of output luminance levels with a consistent look, while still taking advantage of the additional capability.
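
The full ACES RRT/ODT chain is more involved than a blog snippet, but Krzysztof Narkowicz’s widely shared curve fit conveys the shape of the filmic sigmoid in a few lines; treat it as a sketch of the behavior, not the reference transform.

```cpp
#include <algorithm>

// Curve-fit approximation of the ACES filmic tone curve, due to Krzysztof
// Narkowicz. A sketch of the sigmoid's shape for an LDR target, not the
// full ACES RRT/ODT pipeline.
// x: exposure-adjusted, scene-referred linear value.
float AcesFilmApprox(float x)
{
    const float a = 2.51f, b = 0.03f, c = 2.43f, d = 0.59f, e = 0.14f;
    return std::clamp((x * (a * x + b)) / (x * (c * x + d) + e), 0.0f, 1.0f);
}
```

Applied per channel, very bright inputs compress toward 1.0 in all three channels at once, which produces the desaturation-to-white behavior described above; a curve mastered for an HDR target keeps the same character but places its shoulder much higher.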

You may ask why your present tone mapper isn’t enough. The simple answer is that it probably has no notion of how bright the output device is; it simply has a notion of maximum brightness. Taking the Reinhard operator as a simple example, mapping its [0-1] output to the 1000 nit level of a monitor means that middle gray (generally considered the color of asphalt) will end up being displayed as bright as the white on your present monitor. We generally don’t need to make these dim colors brighter; we need to keep them where they are and allow the colors we’ve had to compress in the past to expand into the extended range. In some cases, we might even want these dark colors to get a bit darker.
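
A quick worked example makes the issue concrete; the numbers here are ours, but the arithmetic follows directly from the operator’s definition.

```cpp
#include <cstdio>

// Classic Reinhard operator: compresses [0, inf) scene values into [0, 1).
float Reinhard(float x) { return x / (1.0f + x); }

int main()
{
    const float middleGray = 0.18f;        // typical scene-referred middle gray
    const float displayPeakNits = 1000.0f; // naive [0-1] -> peak mapping

    // Reinhard(0.18) ~= 0.153, so scaling straight to a 1000 nit peak puts
    // middle gray at ~153 nits -- about as bright as full white on many SDR
    // monitors. Prints: "middle gray -> 153 nits"
    std::printf("middle gray -> %.0f nits\n",
                Reinhard(middleGray) * displayPeakNits);
    return 0;
}
```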


Comparing the output luminance levels of a Reinhard tone map, a standard LDR filmic tone map, and a filmic tone map created for 1000 nits (all scaled to a maximum 1000 nit output). Both the standard filmic and the Reinhard curves output 100-200 nits for what should be a dark gray, which is as bright as the white on many monitors. The curve mastered for the HDR range leaves these values similar to what an LDR monitor produces, and maintains better definition in the upper midtones and highlights.

So what is the big payoff here? It is obviously hard to demonstrate what HDR looks like in person on an LDR blog. However, we can compute the color saturation that gets lost due to needing to roll toward white to deal with the limited output luminance, and we can compute the difference in perceived brightness. The images below clearly show how the improved range allows less compromise. Even photos of an HDR display showing LDR and HDR content show a clear difference.


LDR tone mapped scene from Epic’s Kite demo.


Diagram of the chromaticity lost in the LDR tone mapped image when compared to HDR tone mapping to 1000 nits. The LDR image had to compromise on color richness to compress into the display range. In HDR, the sky will be bluer, the skin will be pinker and less washed out, and the logo on the shirt will have richer detail. This is all in a scene that doesn’t really have extensive dynamic range.


Scene from Infiltrator demo by Epic Games, tone mapped to LDR reference levels.


Visualization of the extra brightness represented when the scene is tone mapped to 1000 nits instead of the LDR range. Note that perceived brightness is highly non-linear (roughly the cube root of luminance), so the items that are fully white will appear roughly 3x brighter even though they carry 12x the luminance. While this may seem a smaller gain than the chromaticity improvement, these 1000 nit highlights can make you squint a little when viewed in a dark room.
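
As a rough sanity check on those figures, assuming the cube-root rule of thumb (the core of the CIE lightness formula) and an ~80 nit SDR white, both approximations:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Rule of thumb: perceived brightness scales roughly with the cube root
    // of luminance. Assume an ~80 nit SDR white vs. a 1000 nit HDR highlight.
    float luminanceRatio = 1000.0f / 80.0f;             // ~12.5x the photons
    float brightnessRatio = std::cbrt(luminanceRatio);  // ~2.3x the sensation
    std::printf("%.1fx luminance looks ~%.1fx brighter\n",
                luminanceRatio, brightnessRatio);
    return 0;
}
```

The exact ratio depends on viewing conditions, but the point stands: the perceived boost is a small fraction of the raw luminance ratio.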


Even though it is impossible to truly represent an HDR image for you in this blog post, photos of an HDR TV showing LDR tone mapping and HDR tone mapping of the same content show a real difference. These are photos of a Samsung 1000 nit HDR TV taken with a Nikon DSLR. The TV is in HDR mode in both shots, and the only difference is that the left image is LDR tone mapped. It is worth noting that not every variable can be controlled here, but it is a very good proxy for the difference in experience when viewed directly.

So that’s a really quick overview of some of the main considerations in taking advantage of HDR displays: understanding the difference between scene-referred and output-referred data, being careful with UI rendering, and having a tone mapper that is aware of the display’s luminance. We’ll be sharing more detail on all of these considerations, and a lot more, in the near future; stay tuned.