Continuing our series on HDR color in games, we’ll start this post with a simple truth: there is no one correct way to do things. However, we do have advice on what works well and what doesn’t. The first concern is obviously whether your game renders in HDR internally today. Since most PC games do, we’ll consider this a pretty safe bet. Beyond this are the more subtle challenges we’ll tackle in this post.
What is in the Frame Buffer
The first challenge is whether you really are rendering in HDR. Since all your development and testing have been on standard dynamic range displays, it is entirely possible that the scene you have rendered only has a ‘meh’ level of ‘high’ range. This doesn’t mean that your tech or game is bad; it just means that it was developed within a certain set of parameters. When I talk about having really good dynamic range, I mean having highlights with values approaching or exceeding 184.0 in the frame buffer after adjusting for exposure. This value is 10 stops above the photographic middle gray of 0.18. The good news is that with tech like physically-based rendering, getting data like this isn’t really a problem. The real world produces scenes like this, and rendering algorithms that attempt to mimic the real world do as well.
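The relationship between photographic stops and linear frame-buffer values is just powers of two above middle gray. A minimal sketch (the helper names are ours, not from any engine):

```python
import math

# Convert between photographic stops and linear frame-buffer values,
# relative to a photographic middle gray of 0.18.

MIDDLE_GRAY = 0.18

def stops_to_linear(stops: float) -> float:
    """Linear value that sits `stops` stops above middle gray."""
    return MIDDLE_GRAY * (2.0 ** stops)

def linear_to_stops(value: float) -> float:
    """How many stops above middle gray a linear value sits."""
    return math.log2(value / MIDDLE_GRAY)

# 10 stops above middle gray: 0.18 * 2^10 = 184.32
print(stops_to_linear(10.0))  # 184.32
```

This is where the 184.0 figure above comes from: 0.18 × 2¹⁰ = 184.32.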
What is the Art Doing
Assuming that you have your rendering tech set up to produce great levels of dynamic range, the next challenge is whether the art is holding up its end of the bargain. Just because you have implemented physically-based algorithms doesn’t mean that the artists have reliably set up all the parameters to match. If some materials or lights have parameters that are off, it is possible that the rest of the art was then tweaked to match them; while you still generate enough dynamic range to saturate SDR, in HDR you won’t really be maxing out the experience. Finally, it is pretty easy to have assets that should be coupled but are not. I’m talking about things like lights and the proxy geometry used to represent them, or skyboxes. Since these aren’t always tightly coupled, it is easy for the emissive value on the proxy geometry to fall out of sync with the light value. This can lead to issues like the visualization below, where the sun actually is dimmer than the specular highlight it is creating. The issue doesn’t show up on an SDR screen, because both exceed the saturation level of that display. However, on an HDR display you can see the issue. The problem isn’t a complete show-stopper, but it does make the content less impactful than it otherwise would be.
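A decoupling like this is easy to catch with a validation pass over your assets. A hypothetical sketch (the field names and tolerance are illustrative, not from any particular engine):

```python
# Flag light sources whose proxy geometry's emissive value has drifted
# out of sync with the light's actual intensity. On SDR both values may
# clip to display white, which is exactly why the mismatch goes unnoticed.

def find_decoupled_lights(lights, tolerance=0.25):
    """lights: iterable of dicts with 'name', 'intensity', 'proxy_emissive'."""
    issues = []
    for light in lights:
        intensity = light["intensity"]
        emissive = light["proxy_emissive"]
        # relative mismatch beyond the tolerance gets reported
        if abs(emissive - intensity) > tolerance * max(intensity, 1e-6):
            issues.append(light["name"])
    return issues

lights = [
    {"name": "sun",  "intensity": 1000.0, "proxy_emissive": 400.0},
    {"name": "lamp", "intensity": 8.0,    "proxy_emissive": 8.0},
]
print(find_decoupled_lights(lights))  # ['sun']
```

Running a check like this as part of a content build catches the sun-dimmer-than-its-highlight case before anyone ever looks at the scene on an HDR display.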
Figure 1 - Specular highlight is two or more times more luminous than the sun that is supposed to be the source of the light. (Visualization of scene-referred luminance levels in the Sun Temple sample from Unreal Engine 4)
Tone Map it Right
Tone mapping is the next big concern for getting good content to an HDR display. Tone mappers traditionally used in games all focus on the parametric [0-1] space, where 1.0 is simply taken as maximum brightness. This falls apart with HDR displays. Applying the same tone mapper to screens with maximum luminances of 200 nits and 1000 nits does not result in a pleasing image on both; it is the same as just turning up the brightness. You really want the colors and luminance levels that are represented well today to remain the same. The chart below shows how rescaling a tone mapper built for SDR results in dim regions like middle gray being displayed at well over 100 nits, approaching the brightness of a diffusely lit sheet of paper in your office. These facts mean that you need a tone map operator that is sensitive to display output levels. One we’re particularly happy with is the Output Device Transform (ODT) used in the Academy Color Encoding System (ACES). It is a great filmic operator, and it scales to all the output levels you will care about in the near future. We’ll talk more about it in a future post, but we already have an implementation available in our HDR SDK sample.
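To see the problem concretely, here is a quick sketch using the classic Reinhard curve x / (x + 1) as a stand-in SDR tone mapper (the choice of operator is ours, purely for illustration):

```python
# Why naively rescaling an SDR tone mapper breaks on bright displays.
# The classic Reinhard curve x / (x + 1) stands in for any [0-1] operator.

def reinhard(x: float) -> float:
    return x / (x + 1.0)

middle_gray = 0.18
normalized = reinhard(middle_gray)  # ~0.153 in parametric [0-1] space

for peak_nits in (100.0, 1000.0):
    print(peak_nits, normalized * peak_nits)

# On a ~100-nit SDR display, middle gray lands around 15 nits, which
# looks right. Rescaled to a 1000-nit peak, the same curve puts middle
# gray around 153 nits -- the whole image is just cranked brighter.
```

A display-aware operator like the ACES ODT instead keeps the mid-tones anchored and spends the extra headroom on the highlights.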
To be clear, while the tone mapping operator needs to change, operations like eye adaptation do not. HDR displays still cover nowhere near the breadth of human experience, so you still need to adapt to a proper middle gray to center the exposure. You may find, though, that your content benefits from moving to a higher or lower key, since you no longer have to compromise as much.
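The adaptation step itself is unchanged from SDR. A common formulation, sketched below with illustrative names, scales the scene so its log-average (geometric mean) luminance lands on a chosen key, with 0.18 as the usual middle-gray target:

```python
import math

# Sketch of auto-exposure: scale the scene so its log-average luminance
# lands on the chosen key. Raising or lowering `key` moves the whole
# scene to a higher or lower key, as discussed above.

def log_average_luminance(luminances):
    eps = 1e-6  # avoid log(0) on pure-black pixels
    return math.exp(sum(math.log(eps + l) for l in luminances) / len(luminances))

def exposure_scale(luminances, key=0.18):
    return key / log_average_luminance(luminances)

scene = [0.05, 0.2, 1.5, 40.0]       # linear scene-referred luminances
scale = exposure_scale(scene)
exposed = [l * scale for l in scene]  # log-average now sits at ~0.18
```

After this scaling, the 10-stops-above-middle-gray budget discussed earlier applies to the exposed values.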
Take Care with Post-Processing
Post-processing pipelines may need some reevaluation in the context of HDR. The first concern here is where different post-processing operations occur within the pipeline. Scene-referred data means that the image data has values consistent with the luminance of light as it propagated in the scene. Output-referred data means that the image data has values consistent with the light to be emitted from the display. We’ve had good results with pipelines where all post-processing is done on scene-referred data. The reason we like this methodology is that you get consistent results as the tone mapper behavior is altered for different display targets. It is important to note that this applies to color grading as well. Today’s technique of just having an artist edit a LUT in Photoshop doesn’t map well to HDR. They are directly editing data that is mastered for an SDR display, and they are generating standard dynamic range data. This process is lossy, and applying an SDR LUT like this will kill your HDR data.
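The loss is easy to see with a toy 1D LUT (a real grading LUT is 3D, but the domain problem is identical; this example is ours):

```python
# Why an SDR-mastered LUT destroys HDR data: its domain is [0, 1], so
# every scene value above 1.0 collapses into the same last entry.

def apply_sdr_lut(value: float, lut) -> float:
    clamped = min(max(value, 0.0), 1.0)       # SDR LUTs only cover [0, 1]
    index = round(clamped * (len(lut) - 1))
    return lut[index]

identity_lut = [i / 15.0 for i in range(16)]  # 16-entry identity grade

print(apply_sdr_lut(0.4, identity_lut))    # 0.4  -- graded as expected
print(apply_sdr_lut(4.0, identity_lut))    # 1.0
print(apply_sdr_lut(184.0, identity_lut))  # 1.0  -- the highlight is gone
```

Even with an identity grade, everything above 1.0 is flattened to the same value before the lookup ever happens, which is exactly why grading has to move onto scene-referred data for HDR.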
Validate Your User Interface for HDR
In general, the user interface is going to be a fairly minor challenge. In the post outlining the interface for presenting the HDR frame to the display, I talked about how we use a color system derived from and compatible with sRGB. This means that compositing your sRGB UI on top actually works quite well. There are a couple of things to be on the lookout for. First, human perception is impacted by the surrounding environment. It is easy for someone to complain that white in your UI looks a bit gray when it is composited on a really bright scene. This holds doubly true if you are in a bright room. As you may recall, the sRGB standard for white is 80 nits; because they work in brighter environments, many users have their monitors set to emit 150 or 200 nits as white. Placing a simple scale on the UI to boost the level by something like 2x will let you provide a brighter UI to address this sort of complaint. Second, if you have utilized extensive transparency in your UI, you may wish to consider a more complex compositing pass. Alpha blending an 80% opaque chat window over an SDR signal produces an easily readable image. However, with an HDR image and a 1000 nit highlight, that 20% bleed-through is now 200 nits, making for difficult reading. Compositing the entire UI to an off-screen sRGB buffer, then compositing in a shader pass, gives you some extra tools to resolve situations like this. One particular trick I’ve found to work quite well is to simply apply a Reinhard operator of x / (x + 1) to the luminance of any HDR data where the overlay has a non-zero alpha. It is pretty cheap, it keeps the HDR data safely below the level of the UI, and it preserves the color and details well.
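The trick above can be sketched as follows (in Python for clarity; in practice this runs in the compositing shader, and the function names are ours):

```python
# Where the UI overlay has non-zero alpha, compress the HDR scene
# luminance with the Reinhard operator x / (x + 1) before alpha
# blending, so no highlight can overpower the UI on top of it.

def reinhard_luma(rgb, luminance):
    """Rescale RGB so its luminance becomes L / (L + 1), preserving hue."""
    if luminance <= 0.0:
        return rgb
    scale = (luminance / (luminance + 1.0)) / luminance
    return tuple(c * scale for c in rgb)

def composite_ui(scene_rgb, scene_luma, ui_rgb, ui_alpha):
    if ui_alpha > 0.0:
        # keep the scene safely below the UI level before blending
        scene_rgb = reinhard_luma(scene_rgb, scene_luma)
    return tuple(ui_alpha * u + (1.0 - ui_alpha) * s
                 for u, s in zip(ui_rgb, scene_rgb))

# A bright highlight (linear scene value 12.5) under an 80% opaque
# white panel: the 20% bleed-through contributes ~0.185 instead of 2.5.
out = composite_ui((12.5, 12.5, 12.5), 12.5, (1.0, 1.0, 1.0), 0.8)
```

Because the operator asymptotically approaches 1.0, the scene can never outshine an opaque-white UI element, yet dim scenes pass through nearly unchanged.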
HDR displays are phenomenally exciting, and they have the opportunity to deliver fantastic new gaming experiences to end users. The good news is that the rendering in many gaming applications today holds up quite well on an HDR display. We’ve had a lot of experience enabling it on stock UE4 content like Infiltrator, Sun Temple, and Kite, as well as the work we recently talked about in Tomb Raider. In all these cases, the changes were confined to backend concerns like tone mapping and display output; with the exception of color grading, the content itself didn’t change at all. We hope this advice gives everyone a good jumping-off point for enabling real HDR in their games, and we’ll share more information soon.