As you may have already seen in our previous entries regarding high dynamic range (HDR) on this blog, we’re pretty excited about the promise of HDR. Now this series can get to the business of showing you how to enable it in your code.

Requirements (in case you missed them in the earlier post):

  1. NVIDIA GPU from the Maxwell or Pascal families (GTX 960, GTX 980, GTX 1070, GTX 1080, etc.)
  2. HDR display supporting SMPTE ST 2084, often labeled HDR10 (most likely a new UHD TV, but you can expect future support from monitors as well)
  3. Windows 7 or later

On the software side, your application must be able to run in full-screen exclusive mode and create an fp16 swap chain. Both are necessary to get full-precision data to the display driver, which in turn passes it on to the display. If your application does not run in full-screen exclusive mode, the desktop compositor will strip out the extra range and precision necessary for HDR. It is important to understand that this is a temporary restriction, as Microsoft has announced plans for OS-level HDR support.
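As a concrete starting point, here is a minimal sketch of creating an fp16 swap chain with DXGI and switching it to full-screen exclusive mode. The device, factory, and hwnd variables, as well as the resolution, are placeholders assumed to already exist in your application; adapt them to your own setup.

// Minimal sketch: an fp16 (DXGI_FORMAT_R16G16B16A16_FLOAT) swap chain placed into
// full-screen exclusive mode. 'device', 'factory', and 'hwnd' are assumed to exist.
DXGI_SWAP_CHAIN_DESC1 scDesc = {};
scDesc.Width = 3840;                                  // example resolution
scDesc.Height = 2160;
scDesc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;       // fp16 back buffer for HDR
scDesc.SampleDesc.Count = 1;
scDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
scDesc.BufferCount = 2;
scDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;

IDXGISwapChain1* swapChain = nullptr;
factory->CreateSwapChainForHwnd(device, hwnd, &scDesc, nullptr, nullptr, &swapChain);

// Full-screen exclusive mode keeps the desktop compositor from clamping the output
swapChain->SetFullscreenState(TRUE, nullptr);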

In addition to getting the right data to the driver, your application needs to utilize NVAPI to identify the presence of HDR compatible displays and to enable the HDR mode on those displays. Again, this is an interim requirement until the operating system integrates the support.

Enumerating the HDR displays

Before displaying HDR output, you naturally want to find out whether an HDR display is attached to your system. We’ve added an NVAPI call, NvAPI_Disp_GetHdrCapabilities, to determine the HDR capabilities of all displays attached to NVIDIA GPUs in the system. The function takes a display ID and fills out a structure with the HDR capabilities of that display.

  
NVAPI_INTERFACE NvAPI_Disp_GetHdrCapabilities(__in NvU32 displayId,
                    __inout NV_HDR_CAPABILITIES *pHdrCapabilities);

The display ID used by this function can be obtained by walking the GPU and display hierarchy with NVAPI functions like this (NVAPI initialization and error handling are omitted for brevity):

  
NvU32 gpuCount = 0;
NvPhysicalGpuHandle ahGPU[NVAPI_MAX_PHYSICAL_GPUS] = {};

NvAPI_EnumPhysicalGPUs(ahGPU, &gpuCount);

for (NvU32 i = 0; i < gpuCount; ++i)
{
    NvU32 displayIdCount = 16;
    NvU32 flags = 0;
    NV_GPU_DISPLAYIDS displayIdArray[16] = {};
    displayIdArray[0].version = NV_GPU_DISPLAYIDS_VER;

    // Query list of displays connected to this GPU
    NvAPI_GPU_GetConnectedDisplayIds(ahGPU[i], displayIdArray,
        &displayIdCount, flags);

    // Iterate over displays to test for HDR capabilities
    for (NvU32 dispIndex = 0; dispIndex < displayIdCount; ++dispIndex)
    {
        NV_HDR_CAPABILITIES hdrCapabilities = {};
        hdrCapabilities.version = NV_HDR_CAPABILITIES_VER;

        NvAPI_Disp_GetHdrCapabilities(displayIdArray[dispIndex].displayId,
            &hdrCapabilities);

        if (hdrCapabilities.isST2084EotfSupported)
        {
            // This display supports HDR10 (SMPTE 2084) output
        }
    }
}

Alternatively, you can take the display names obtained from DXGI outputs and query NVAPI for the corresponding display IDs using the function NvAPI_DISP_GetDisplayIdByDisplayName.

NVAPI_INTERFACE NvAPI_DISP_GetDisplayIdByDisplayName(const char *displayName,
      NvU32* displayId);
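
For example, the sketch below (not from the original post) takes the device name reported by an IDXGIOutput and uses it to look up the NVAPI display ID. DXGI reports the name as a wide string while NVAPI expects a narrow one, so a simple conversion is included; the 'output' pointer is assumed to come from your existing adapter enumeration.

// Illustrative only: map a DXGI output's device name to an NVAPI display ID.
// wcstombs requires <cstdlib>.
DXGI_OUTPUT_DESC outputDesc = {};
output->GetDesc(&outputDesc);

char displayName[32] = {};
wcstombs(displayName, outputDesc.DeviceName, sizeof(displayName) - 1);

NvU32 displayId = 0;
NvAPI_DISP_GetDisplayIdByDisplayName(displayName, &displayId);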

Displays supporting HDR will set the isST2084EotfSupported member of the NV_HDR_CAPABILITIES structure to 1. The remainder of the structure contains potentially useful information about the display's capabilities. Because this information is optional, the fields will often just be zeros; if the white point coordinate is 0, you definitely have a display that is not reporting the data. When it is available, however, you can use this information to guide the selection of your tone-map operator.

Enabling HDR output and sending meta-data

To enable HDR output on the display, the application must send HDR meta-data to the display via the NvAPI_Disp_HdrColorControl function. Below is an example of setting up the display to expect input with the default UHDA mastering levels (DCI color primaries, 1000-nit maximum luminance). This is the easiest approach, since zeroed meta-data is interpreted as telling the display to expect the default mastering levels.


NV_HDR_COLOR_DATA hdrColorData = {};

hdrColorData.version = NV_HDR_COLOR_DATA_VER;
hdrColorData.cmd = NV_HDR_CMD_SET;
hdrColorData.static_metadata_descriptor_id = NV_STATIC_METADATA_TYPE_1;
hdrColorData.hdrMode = enableHDR ? NV_HDR_MODE_UHDA : NV_HDR_MODE_OFF;

// Mastering display data left zeroed requests the default UHDA levels
NvAPI_Disp_HdrColorControl(displayId, &hdrColorData);

As you’ll note, the function works both for turning on the HDR output mode and for turning it off.

Supplying HDR data

As mentioned above, you need the swap chain to be in fp16 format, but that is not the only requirement for proper HDR output. The driver interprets the frame buffer data differently than it does for standard dynamic range (SDR) monitors. First, as has been the case since Windows Vista, fp16 swap chains are expected to contain linear color data; that is, there is no gamma encoding. Second, with traditional displays, 1.0 simply meant the brightest intensity. With HDR, output moves to a color space based on scRGB, and each value now represents an absolute output level rather than a relative one.

In this scheme, 1.0 is interpreted as 80-nit white. This is the reference white level defined by the sRGB standard, and it is probably at or slightly dimmer than the white of the monitor you are reading this on. A value of 12.5 represents the 1000-nit white that is the practical limit for most HDR displays you can buy today. This representation is convenient because content expecting SDR levels will look reasonable; only when you provide data outside the standard range do you get the enhanced brightness. This allows your UI to work with little or no modification, instead of leaving your user staring at a blindingly white dialog box. Importantly, this is exactly the approach Chas Boyd of Microsoft describes in the video linked above, so investment in this HDR output pipeline is not throwaway work.
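
To make the mapping concrete, here is a trivial helper (ours, not part of NVAPI) that converts an absolute luminance in nits to the scRGB value you would write into the fp16 swap chain:

// 1.0 in the fp16 swap chain corresponds to 80-nit (sRGB reference) white,
// so an absolute luminance simply scales by 1/80.
float NitsToScRGB(float nits)
{
    return nits / 80.0f;   // e.g. 80 nits -> 1.0, 1000 nits -> 12.5
}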

Those who have been closely watching UHD display improvements will likely be wondering about the expanded color gamuts now supported, such as DCI and BT.2020. The support introduced here handles those as well. As mentioned above, the output color space is derived from scRGB, so it uses the same limited color primaries as sRGB and Rec. 709. However, because the frame buffer is now full floating point, negative values can be encoded, which allows colors outside the sRGB gamut to be represented; in fact, all of BT.2020 and much more can be expressed. If you have rendered colors wider than the sRGB gamut, a simple 3x3 matrix converts them into the space the display driver requires (a sketch of this conversion follows the gamut diagrams below). For those unfamiliar, the diagrams below show the gamuts of sRGB and BT.2020 (also known as Rec. 2020).

[Figure: BT.2020 gamut]

[Figure: sRGB gamut]
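
As a sketch of the 3x3 conversion mentioned above, the helper below maps linear BT.2020 RGB into the Rec. 709/sRGB-primary space used by the scRGB swap chain. The matrix values are the commonly published BT.2020-to-BT.709 primary conversion (rounded), so verify them against your own color pipeline; out-of-gamut colors simply produce negative components, which fp16/scRGB can carry.

struct Color3 { float r, g, b; };

// Linear BT.2020 RGB -> linear Rec.709/sRGB-primary RGB (scRGB).
// Values outside the sRGB gamut come out negative, which is expected.
Color3 BT2020ToRec709(const Color3& c)
{
    Color3 out;
    out.r =  1.6605f * c.r - 0.5876f * c.g - 0.0728f * c.b;
    out.g = -0.1246f * c.r + 1.1329f * c.g - 0.0083f * c.b;
    out.b = -0.0182f * c.r - 0.1006f * c.g + 1.1187f * c.b;
    return out;
}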

Tips and Tricks

Now that you’ve had a chance to whet your appetite for HDR programming, here are a few tips and tricks to keep in mind as you get started:

  • Not all HDMI ports on a display necessarily accept HDMI 2.0 and HDR; consult the manual.
  • Make sure to use a high-speed HDMI cable so that you get HDMI 2.0 transfer rates.
  • The NVIDIA Control Panel has toggles for RGB vs. YUV output and bit depth; if the TV supports it, use RGB at 10 or 12 bits.
  • To determine whether you’ve set up HDR correctly, either:
    • Try Holger’s trick from the Tomb Raider HDR experience: clamp some of the data to 1.0 to get bands of clipped versus full-range output.
    • Add 1.0 to the frame buffer after tone mapping. If you are in HDR mode, you’ll still see the scene, but it will be washed out; if you are in SDR mode, the screen will just look white.
  • Viewing environment matters. HDR will always look best in a dim environment, as human vision adjusts its sensitivity to the surroundings.
    • Displays with lower peak luminance, such as OLED, are more heavily affected.

This is just one of many posts we have coming your way on how to update your game to HDR, so please follow the blog to get more information and sample code.