Multi-Res Shading was introduced with the Maxwell architecture and targets a specific characteristic of VR rendering. Virtual reality displays differ from normal displays: in VR there are two displays, one for the left eye and one for the right, and we view them through optical lenses that let our eyes focus on a surface held very close to them. However, the curvature of those lenses creates distortion.
Hardware: Compatible with Maxwell- and Pascal-based GPUs (GeForce GTX 900 series, Quadro M5000 and higher).
Software: Compatible with the following APIs: DX11, DX12, OpenGL. Integrated with Unreal Engine and Unity.

To counteract this effect, VR rendering adds a final step to the graphics pipeline: a post-process called a ‘warp’ that takes the rendered rectangular image and distorts it before it is output to the headset. This ‘inverse distortion’ offsets the lens distortion so the image appears normal to the user.
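As a rough illustration of what that warp does, the sketch below shows the display-to-source lookup it performs, using a simple polynomial radial-distortion model. The coefficients and the resulting numbers are made up for illustration; real values come from the headset's lens calibration.

```cpp
#include <cstdio>

// Illustrative radial-distortion coefficients; a real headset supplies its
// own lens profile, these are placeholders.
constexpr float k1 = 0.22f;
constexpr float k2 = 0.24f;

// Map a UV coordinate in the warped (display) image back to the UV in the
// rendered (rectangular) image, using a simple polynomial radial model.
void warpUV(float u, float v, float& srcU, float& srcV)
{
    // Work in lens-centered coordinates, [-1, 1] with (0,0) at the lens center.
    float x = 2.0f * u - 1.0f;
    float y = 2.0f * v - 1.0f;
    float r2 = x * x + y * y;

    // The radial scale grows toward the edges, so edge pixels fetch from a
    // much larger region of the rendered image than center pixels do.
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2;

    srcU = 0.5f * (x * scale + 1.0f);
    srcV = 0.5f * (y * scale + 1.0f);
}

int main()
{
    float su, sv;
    // The center of the image is untouched by the warp.
    warpUV(0.5f, 0.5f, su, sv);
    std::printf("center -> (%.3f, %.3f)\n", su, sv);
    // A display corner fetches far outside the center region (off-image in
    // this toy model), which is why the source corners barely survive.
    warpUV(1.0f, 1.0f, su, sv);
    std::printf("corner -> (%.3f, %.3f)\n", su, sv);
    return 0;
}
```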



If you look at what happens during that distortion pass, you find that while the center of the image stays the same (green area in image below), the edges are getting compressed quite a bit (red area) and the corners are almost entirely gone.


This means we’re over-shading the edges of the image: we’re generating lots of pixels that never make it to the display; they are simply thrown out during the distortion pass.
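One way to see the waste is to walk every display pixel, record which source pixel its warped fetch lands on, and count how much of the rendered image is never sampled. The sketch below does that with the same toy distortion model and made-up coefficients as above; the exact percentage depends entirely on the lens profile and resolutions, so treat the printed number as illustrative only.

```cpp
#include <cstdio>
#include <vector>

int main()
{
    const int W = 512, H = 512;             // source and display resolution
    const float k1 = 0.22f, k2 = 0.24f;     // illustrative lens coefficients
    std::vector<char> touched(W * H, 0);

    // Walk every display pixel and mark the source pixel it would fetch
    // through the inverse-distortion warp.
    for (int py = 0; py < H; ++py) {
        for (int px = 0; px < W; ++px) {
            float x = 2.0f * (px + 0.5f) / W - 1.0f;
            float y = 2.0f * (py + 0.5f) / H - 1.0f;
            float r2 = x * x + y * y;
            float s  = 1.0f + k1 * r2 + k2 * r2 * r2;
            int sx = static_cast<int>((x * s * 0.5f + 0.5f) * W);
            int sy = static_cast<int>((y * s * 0.5f + 0.5f) * H);
            if (sx >= 0 && sx < W && sy >= 0 && sy < H)
                touched[sy * W + sx] = 1;
        }
    }

    // Source pixels that were never sampled were shaded for nothing.
    int unused = 0;
    for (char t : touched) unused += (t == 0);
    std::printf("source pixels never sampled: %.1f%%\n",
                100.0f * unused / (W * H));
    return 0;
}
```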



The idea of multi-resolution shading is to split the image up into multiple viewports – as seen above, a 3x3 grid of them. We keep the center viewport the same size, but scale down all the ones around the edges. This better approximates the warped image that we want to eventually generate, but without so many wasted pixels. And because we shade fewer pixels, we can render faster. Depending on the nature of the content and movement, the developer can decide where to divide the image as well as how aggressive the resolution difference should be.
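As a rough sketch of how this splitting reduces shading work, the snippet below lays out a 3x3 grid over one eye's image and totals the pixels shaded after scaling the outer viewports down. The eye-buffer size, split positions, and half-resolution outer scale are illustrative assumptions, not values taken from the SDK.

```cpp
#include <cstdio>

int main()
{
    const float eyeW = 1512.0f, eyeH = 1680.0f;            // example eye buffer
    const float splitX[4] = { 0.0f, 0.25f, 0.75f, 1.0f };  // column edges
    const float splitY[4] = { 0.0f, 0.25f, 0.75f, 1.0f };  // row edges
    const float outerScale = 0.5f;  // shade outer rows/columns at half density

    float shadedPixels = 0.0f;
    for (int row = 0; row < 3; ++row) {
        for (int col = 0; col < 3; ++col) {
            // Size of this cell in the full-resolution image.
            float w = (splitX[col + 1] - splitX[col]) * eyeW;
            float h = (splitY[row + 1] - splitY[row]) * eyeH;
            // The center cell keeps full density; outer cells are scaled down
            // along the axis (or axes) pointing away from the center.
            float sx = (col == 1) ? 1.0f : outerScale;
            float sy = (row == 1) ? 1.0f : outerScale;
            shadedPixels += (w * sx) * (h * sy);
        }
    }

    std::printf("pixels shaded: %.0f%% of the full-resolution image\n",
                100.0f * shadedPixels / (eyeW * eyeH));
    return 0;
}
```

With these placeholder numbers, only about 56% of the original pixels are shaded per eye, which is where the speedup comes from; a more conservative split would shade more, a more aggressive one less.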


Ordinarily, replicating all scene geometry to a number of viewports would be prohibitively expensive. With Maxwell and Pascal, we have the ability to very efficiently broadcast the geometry to many viewports in hardware, while only running the GPU geometry pipeline once per eye.
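To make the cost difference concrete, the sketch below contrasts the two submission patterns with hypothetical stand-in draw calls (drawSceneToViewport and drawSceneBroadcast are placeholders, not real API entry points): without broadcast, the scene's geometry is submitted once per viewport; with the hardware path, it is submitted once per eye.

```cpp
#include <cstdio>

// Hypothetical stand-ins for engine draw submission; here they only count
// passes, but in a real renderer each pass carries full API and driver cost.
static int geometryPasses = 0;
static void drawSceneToViewport(int /*viewportIndex*/) { ++geometryPasses; }
static void drawSceneBroadcast(int /*viewportCount*/)  { ++geometryPasses; }

int main()
{
    const int viewports = 9;

    // Without hardware viewport broadcast: the scene geometry is re-submitted
    // and re-processed once per viewport.
    geometryPasses = 0;
    for (int v = 0; v < viewports; ++v)
        drawSceneToViewport(v);
    std::printf("naive path:     %d geometry passes per eye\n", geometryPasses);

    // With Maxwell/Pascal viewport broadcast: the geometry pipeline runs once
    // and the hardware fans primitives out to all nine viewports.
    geometryPasses = 0;
    drawSceneBroadcast(viewports);
    std::printf("broadcast path: %d geometry pass per eye\n", geometryPasses);
    return 0;
}
```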


Everest VR

We have seen Multi-Res Shading boost the frame rate of Everest VR by 40%.

Performance gains are greatest when the application is pixel-bound, since Multi-Res Shading reduces the number of pixels shaded rather than the geometry work.

