Kevin Bjorke
NVIDIA
Cube maps are typically used to create reflections from an environment that is considered to be infinitely far away. But with a small amount of shader math, we can place objects inside a reflection environment of a specific size and location, providing higher-quality image-based lighting (IBL).
Cube-mapped reflections are now a standard part of real-time graphics, and they are key to the appearance of many models. Yet one aspect of such reflections defies realism: the reflection from a cube map always appears as if it's infinitely far away. This limits the usefulness of cube maps for small, enclosed environments, unless we are willing to accept the expense of regenerating cube maps each time our models move relative to one another. See Figure 19-1.
Figure 19-1 Typical "Infinite" Reflections
When moving models through an interior environment, it would be useful to have a cube map that behaved as if it were only a short distance away—say, as big as the current room. As our model moved within that room, the reflections would grow and shrink according to the model's location in the room. Such an approach could be very powerful, grounding the viewer's sense of the solidity of our simulated set, especially in environments containing windows, video monitors, and other recognizable light sources. See Figure 19-2.
Figure 19-2 Localized Reflections
Fortunately, such a localized reflection can be achieved with only a small amount of additional shader math. In fact, developers of some recent games have managed to replace much of their localized lighting with this approach.
Let's look at Figure 19-3. We see a reflective object (a large gold mask) in a fairly typical reflection-mapped environment.
Figure 19-3 Reflective Object with Localized Reflection
Now let's consider Figure 19-4, a different frame from the same short animation. The maps have not changed, but look at the differences in the reflection! The reflection of the window, which was previously small, is now large—and it lines up with the object. In fact, the mask slightly protrudes through the surface of the window, and the reflections of the texture-mapped window blinds line up precisely. Likewise, look for the reflected picture frame, now strongly evident in the new image.
Figure 19-4 Localized Reflection in a Different Location
At the same time, the green ceiling panels (this photographic cube map shows the lobby of an NVIDIA building), which were evident in the first frame, have now receded in the distance and cover only a small part of the reflection.
This reflection can also be bump mapped, as shown in Figure 19-5 (only bump mapping has been added). See the close-up of this same frame in Figure 19-6.
Figure 19-5 Bump Applied to Localized Reflection
Figure 19-6 Close-Up of Figure 19-5, Showing Reflection Alignment
The unshaded view in Figure 19-7 shows just how minimal the geometry really is.
Figure 19-7 Flat-Shaded Geometry from the Sample Scene
The illustration in Figure 19-8 shows the complete simple scene. The large cube is our model of the room (the shading will be described later). The 3D transform of the room volume is passed to the shader on the reflective object, allowing us to create the correct distortions in the reflection directly in the pixel shader.
Figure 19-8 Top, Side, and Front Views Showing Camera, Reflective Object, and Simple "Room" Object
To create a localized frame of reference for lighting, we need to create a new coordinate system. In addition to the standard coordinate spaces such as eye space and object space, we need to create lighting space—locations relative to the cube map itself. This new coordinate space will allow us to evaluate object locations relative to the finite dimensions of the cube map.
To simplify the math, we'll assume a fixed "radius" of 1.0 for our cube map—that is, a cube ranging from –1.0 to 1.0 in each dimension (the cube shape is really a convenience for the texturing hardware; we will project its angles against the sphere of all 3D direction vectors). This size makes it relatively easy for animators and lighting/level designers to pose the location and size of the cube map using 3ds max nulls, Maya place3DTexture nodes, or similar "dummy" objects.
In our example, we'll pass two float4x4 transforms to the vertex shader: the matrix of the lighting space (relative to world coordinates) and its inverse transpose. Combined with the world and view transforms, we can express the surface coordinates in lighting space.
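As a rough sketch of how these parameters might be declared in a CgFX file (the semantic names follow common CgFX/FX Composer conventions and are assumptions here, not part of the original listings), the lighting-space matrices are plain user-supplied uniforms while the others are tracked automatically:

// Sketch of the transform parameters (hypothetical CgFX declarations).
// "LightingXf" would be filled in by the application from the artist's
// dummy object (a 3ds max null, Maya place3DTexture node, and so on).
float4x4 WorldViewProjXf : WorldViewProjection;
float4x4 WorldITXf       : WorldInverseTranspose;
float4x4 WorldXf         : World;
float4x4 ViewITXf        : ViewInverseTranspose;
float4x4 LightingXf;      // lighting space relative to world coordinates
float4x4 LightingITXf;    // inverse transpose of LightingXf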
We'll pass per-vertex normal, tangent, and binormal data from the CPU application, so that we can also bump map the localized reflection.
// data from application vertex buffer
struct appdata
{
  float3 Position : POSITION;
  float4 UV       : TEXCOORD0;
  float4 Normal   : NORMAL;
  float4 Tangent  : TEXCOORD1;
  float4 Binormal : TEXCOORD2;
};
The data we'll send to the pixel shader will contain values in both world and lighting coordinate systems.
// data passed from vertex shader to pixel shader
struct vertexOutput
{
  float4 HPosition       : POSITION;
  float4 TexCoord        : TEXCOORD0;
  float3 LightingNormal  : TEXCOORD1;
  float3 LightingTangent : TEXCOORD2;
  float3 LightingBinorm  : TEXCOORD3;
  float3 LightingEyeVec  : TEXCOORD4;
  float3 LightingPos     : TEXCOORD5;
};
Listing 19-1 shows the vertex shader.
vertexOutput reflectVS(appdata IN,
                       uniform float4x4 WorldViewProjXf,
                       uniform float4x4 WorldITXf,
                       uniform float4x4 WorldXf,
                       uniform float4x4 ViewITXf,
                       uniform float4x4 LightingXf,
                       uniform float4x4 LightingITXf)
{
  vertexOutput OUT;
  OUT.TexCoord = IN.UV;
  float4 Po = float4(IN.Position.xyz, 1.0);  // pad to "float4"
  OUT.HPosition = mul(WorldViewProjXf, Po);  // clip-space position
  float4 Pw = mul(WorldXf, Po);              // world coordinates
  float3 WorldEyePos = ViewITXf[3].xyz;      // eye location in world space
  float4 LightingEyePos = mul(LightingXf, float4(WorldEyePos, 1.0));
  float4 Pu = mul(LightingXf, Pw);           // lighting-space coordinates
  float4 Nw = mul(WorldITXf, IN.Normal);     // world-space normal
  float4 Tw = mul(WorldITXf, IN.Tangent);    // world-space tangent
  float4 Bw = mul(WorldITXf, IN.Binormal);   // world-space binormal
  OUT.LightingEyeVec = (LightingEyePos - Pu).xyz;
  OUT.LightingNormal = mul(LightingITXf, Nw).xyz;
  OUT.LightingTangent = mul(LightingITXf, Tw).xyz;
  OUT.LightingBinorm = mul(LightingITXf, Bw).xyz;
  OUT.LightingPos = Pu.xyz;                  // same value as mul(LightingXf, Pw).xyz
  return OUT;
}
In this example, the point and vector values are transformed twice: once into world space, and then from world space into lighting space. If your CPU application is willing to do a bit more work, you can also preconcatenate these matrices, and transform the position, normal, tangent, and binormal vectors with only one multiplication operator. The method shown is used in CgFX, where the "World" and "WorldIT" transforms are automatically tracked and supplied by the CgFX parser, while the lighting-space transforms are supplied by user-defined values (say, from a DCC application).
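As a minimal sketch of that preconcatenated alternative (the matrix names LightingWorldXf and LightingWorldITXf are hypothetical, not part of the original listing), the application would compute the combined matrices once per frame on the CPU, and each point or vector then needs only a single multiply in the vertex shader:

// Hypothetical preconcatenated variant of part of Listing 19-1.
// The application computes, once per frame:
//   LightingWorldXf   = mul(LightingXf, WorldXf)
//   LightingWorldITXf = inverse transpose of LightingWorldXf
float4 Pu = mul(LightingWorldXf, Po);   // object space to lighting space in one mul
OUT.LightingPos     = Pu.xyz;
OUT.LightingNormal  = mul(LightingWorldITXf, IN.Normal).xyz;
OUT.LightingTangent = mul(LightingWorldITXf, IN.Tangent).xyz;
OUT.LightingBinorm  = mul(LightingWorldITXf, IN.Binormal).xyz;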
Given the locations of the shaded points and their shading vectors, expressed in lighting space, the pixel portion is relatively straightforward. We take the reflection vector in lighting space and, starting from the surface location in lighting space, intersect it with a sphere of radius 1.0 centered at the origin of lighting space, by solving that sphere's quadratic equation.
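Written out (as a sketch of the algebra, not part of the original listing), for a shaded point \(\mathbf{P}\) and a normalized reflection direction \(\mathbf{R}\), both in lighting space, the intersection with the unit sphere satisfies

\[
\|\mathbf{P} + t\,\mathbf{R}\|^2 = 1
\;\Longrightarrow\;
t^2 + 2(\mathbf{R}\cdot\mathbf{P})\,t + (\mathbf{P}\cdot\mathbf{P} - 1) = 0 .
\]

Because a point inside the sphere has \(\mathbf{P}\cdot\mathbf{P} - 1 < 0\), the two roots have opposite signs, and the forward intersection is the positive root

\[
t = -(\mathbf{R}\cdot\mathbf{P}) + \sqrt{(\mathbf{R}\cdot\mathbf{P})^2 - (\mathbf{P}\cdot\mathbf{P} - 1)} .
\]

The intersection point \(\mathbf{P} + t\,\mathbf{R}\), interpreted as a direction from the center of lighting space, then replaces the raw reflection vector in the cube-map lookup. Listing 19-2 folds the sign conventions of its particular reflVect, and of the cube map itself (compare the -IN.UserPos lookup in Listing 19-4), into its b term and final lookup vector, but it solves this same quadratic.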
As a "safety precaution," we assign a default color of red (float4(1, 0, 0, 0)): if a point is shaded outside the sphere (so there can be no reflection), that point will appear red, making any error obvious during development. The fragment shader is shown in Listing 19-2.
float4 reflectPS(vertexOutput IN,
                 uniform samplerCUBE EnvMap,
                 uniform sampler2D NormalMap,
                 uniform float4 SurfColor,
                 uniform float Kr,        // intensity of reflection
                 uniform float KrMin,     // typical: 0.05 * Kr
                 uniform float FresExp,   // typical: 5.0
                 uniform float Bumpiness  // amount of bump
                 ) : COLOR
{
  float3 Nu = normalize(IN.LightingNormal);
  // for bump mapping, we will alter "Nu" to get "Nb"
  float3 Tu = normalize(IN.LightingTangent);
  float3 Bu = normalize(IN.LightingBinorm);
  float3 bumps = Bumpiness * (tex2D(NormalMap, IN.TexCoord.xy).xyz - (0.5).xxx);
  float3 Nb = Nu + (bumps.x * Tu + bumps.y * Bu);
  Nb = normalize(Nb);  // expressed in user-coord (lighting) space
  float3 Vu = normalize(IN.LightingEyeVec);
  float vdn = dot(Vu, Nb);  // or "Nu" if unbumped - see text
  // "fres" attenuates the strength of the reflection
  //   according to Fresnel's law
  float fres = KrMin + (Kr - KrMin) * pow((1.0 - abs(vdn)), FresExp);
  float3 reflVect = normalize(reflect(Vu, Nb));  // yes, normalize
  // now we intersect "reflVect" with a sphere of radius 1.0
  float b = -2.0 * dot(reflVect, IN.LightingPos);
  float c = dot(IN.LightingPos, IN.LightingPos) - 1.0;
  float discrim = b * b - 4.0 * c;
  bool hasIntersects = false;
  float4 reflColor = float4(1, 0, 0, 0);  // default red flags points outside the sphere
  if (discrim > 0)
  {
    // pick a small error value very close to zero as "epsilon"
    hasIntersects = ((abs(sqrt(discrim) - b) / 2.0) > 0.00001);
  }
  if (hasIntersects)
  {
    // determine where on the unit sphere reflVect intersects
    float nearT = (sqrt(discrim) - b) / 2.0;  // the positive root of the quadratic
    reflVect = nearT * reflVect - IN.LightingPos;
    // reflVect.y = -reflVect.y;  // optional - see text
    // now use the new intersection location as the 3D direction
    reflColor = fres * texCUBE(EnvMap, reflVect);
  }
  float4 result = SurfColor * reflColor;
  return result;
}
We supply a few additional optional terms, to enhance the shader's realism.
The first enhancement is for surface color: this is supplied for metal surfaces, because the reflections from metals will pick up the color of that metal. For dielectric materials such as plastic or water, you can eliminate this term or simply set it to white.
The second set of terms provides Fresnel-style attenuation of the reflection. These terms can be eliminated for purely metallic surfaces, but they are crucial for realism on plastics and other dielectrics. The math here uses a power function: if user control over the Fresnel approximation isn't needed, the falloff can be encoded as a 1D texture and indexed against abs(vdn).
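A minimal sketch of that texture-based alternative is shown below. The sampler name FresnelRamp and the baking step are assumptions, not part of the original shader; the ramp would be filled once on the CPU with the same falloff the pow() version computes.

// Hypothetical replacement for the "fres" computation in Listing 19-2,
// with "uniform sampler1D FresnelRamp" added to the parameter list.
// The ramp is baked once as:
//   ramp[i] = KrMin + (Kr - KrMin) * pow(1.0 - i / (width - 1), FresExp)
float fres = tex1D(FresnelRamp, abs(vdn)).x;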
For some models, you may find it looks better to attenuate the Fresnel against the unbumped normal: this can help suppress high-frequency "sparklies" along object edges. In that case, use Nu instead of Nb when calculating vdn.
For pure, smooth metals, the Fresnel attenuation is zero: just drop the calculation of fres and use Kr instead. But in the real world, few materials are truly pure; a slight drop in reflectivity is usually seen even on fairly clean metal surfaces, and the drop is pronounced on dirty surfaces. Likewise, dirty metal reflections will often tend toward less-saturated color than the "pure" metal. Use your best judgment, balancing your performance and complexity needs.
Try experimenting with the value of the FresExp exponent. See Figure 19-9. While Christophe Schlick (1994), the originator of this approximation, specified an exponent of 5.0, using lower values can create a more layered, or lacquered, appearance. An exponent of 4.0 can also be quickly calculated by two multiplies, rather than the potentially expensive pow() function.
Figure 19-9 Effects of the Fresnel-Attenuation Terms
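For example, with FresExp fixed at 4.0, the pow() call in Listing 19-2 could be replaced by something like the following sketch (not part of the original listing):

float f  = 1.0 - abs(vdn);
float f2 = f * f;                                 // (1 - |V.N|)^2
float fres = KrMin + (Kr - KrMin) * (f2 * f2);    // (1 - |V.N|)^4: two multiplies, no pow()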
The shader in Listing 19-2 can optionally flip the y portion of the reflection vector. This optional step was added to accommodate heterogeneous development environments where cube maps created for DirectX and OpenGL may be intermixed (the cube map specifications for these APIs differ in their handling of "up"). For example, a scene may be developed in Maya (OpenGL) for a game engine developed in DirectX.
Cube maps can also be used to determine diffuse lighting. Programs such as Debevec's HDRShop can integrate the full Lambertian contributions from a cube-mapped lighting environment, so that the diffuse contribution can be looked up simply by passing the surface normal to this preconvolved cube map (as opposed to reflective lighting, where we would pass a reflection vector based on both the surface normal and the eye location).
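A minimal sketch of such a diffuse lookup follows; DiffuseEnvMap is a hypothetical sampler holding the preconvolved (irradiance) version of the environment, and the surface normal is used directly as the lookup direction:

// Hypothetical diffuse term from a preconvolved cube map
float3 Nd = normalize(IN.LightingNormal);
float3 diffuse = SurfColor.rgb * texCUBE(DiffuseEnvMap, Nd).rgb;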
Localizing the diffuse vector, unfortunately, provides a less satisfying result than localizing the reflections, because the diffuse-lighting map has already encoded its notion of each point's "visible hemisphere," and those integrations will be incorrect for points away from the center of the sphere. Depending on your application, these errors may or may not be acceptable. For some cases, linearly interpolating between multiple diffuse maps may also provide a degree of localization. Such maps tend to have very low frequencies, which is a boon for simple lighting, because errors must be large before they are noticeable (if they are noticeable at all). Some applications, therefore, will be able to perform all lighting calculations simply by using diffuse and specular cube maps.
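One possible form of the interpolation mentioned above (all names here are hypothetical) simply lerps between two preconvolved maps baked at different reference points in the room, with the blend weight derived from the object's position between them:

// Hypothetical blend of two preconvolved diffuse cube maps
float3 diffA = texCUBE(DiffuseEnvMapA, Nd).rgb;
float3 diffB = texCUBE(DiffuseEnvMapB, Nd).rgb;
float3 diffuse = lerp(diffA, diffB, blendWeight);  // blendWeight in [0, 1]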
Indeed, by combining diffuse and specular lighting into cube maps, some applications may find they need no additional lighting information at all.
Shadows complicate IBL somewhat, but they are by no means precluded. Stencil shadow volume techniques can be applied here, as can shadow maps. In both cases, it may be wise to provide a small ambient-lighting term (applied in an additional pass when using stencil shadow volumes) to avoid objects disappearing entirely into darkness (unless that's what you want).
With image-based lighting, it's natural to ask: Where does the shadow come from? Shadows can function as powerful visual cues even if they are not perfectly "motivated." That is, the actual source of the shadow may not exactly correspond to the light source. In the case of IBL, this is almost certainly true: shadows from IBL would need to match a large number of potential light directions, often resulting in a very soft shadow. Yet techniques such as shadow mapping and stencil shadowing typically result in shadows with hard edges or only slight softening.
Fortunately, this is often not a problem if the directions of the shadow sources are chosen wisely. Viewers will often accept highly artificial shadows, because the spatial and graphical roles of shadows usually matter more than their ability to "justify" the lighting (in fact, most television shows and movies have thoroughly "unjustified" lighting). The best bet, when adding shadows to an arbitrary IBL scene, is to pick the direction toward the brightest area of your cube map. Barring that, aim the shadow where you think it will provide the most graphic "snap" to the dimensionality of your models.
Shadows in animation are most crucial for connecting characters and models to their surroundings. The shadow of a character on the ground tells you if he is standing, running, or leaping in relationship to the ground surface. If his feet touch their shadow, he's on the ground (we call shadows drawn for this purpose contact shadows). If not, he's in the air.
This characteristic of shadowing, exploited for many years by cel animators, suggests that it may often be advantageous to worry only about the contact shadows in an IBL scene. If all we care about is the character's shadow on the ground, we can make the simplifying assumption that the shadow doesn't need to be evaluated for depth, only for color. This means we can create a projected black-and-white or full-color shadow, potentially with blur, and simply assume it falls on every object that samples that shadow texture. This avoids depth comparisons and gives us a gain in effective texture bandwidth (because simple eight-bit textures can be used).
In such a scenario, characters' surfaces don't access their own shadow maps; that is, they don't self-shadow. Their lighting can instead come entirely from IBL. Game players will still see the characters' shadows on the environment, providing the primary benefit of shadows: a solid connection between the character and the 3D game environment.
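A minimal sketch of such a depth-free contact-shadow lookup, in the shader of a receiving surface such as the ground, might look like the following. ShadowProjXf (the shadow camera's view-projection matrix, mapped into texture coordinates), ContactShadowMap, and finalColor are hypothetical names; note that there is no depth comparison at all.

// Hypothetical contact-shadow term: project the 8-bit shadow texture
// onto the receiving surface and modulate its lit color by it.
float4 shadowCoord = mul(ShadowProjXf, Pw);                // Pw = world-space position
float shade = tex2Dproj(ContactShadowMap, shadowCoord).x;  // 1 = fully lit, 0 = shadowed
finalColor.rgb *= shade;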
In the illustrations in this chapter, we can see the reflective object interacting with the background. Without the presence of the background, the effect might be nearly unnoticeable.
In many cases, we can make cube maps from 3D geometry and just apply the map(s) to the objects within that environment—while rendering the environment normally. Alternatively, as we've done in Figure 19-10, we can use the map as the environment, and project it onto simpler geometry.
Figure 19-10 Lines Showing the Edges of the Room Cube Object
For the background cube, we also pass the same transform for the unit-cube room. In fact, for the demo scene, we simply pass the room shader its own transform. The simple geometry is just that—geometry—and doesn't need to have UV mapping coordinates or even surface normals.
As we can also see from Figure 19-10, using a simple cube in place of full scene geometry has definite limits! Note the "bent" ceiling on the left. Using proxy geometry in this way usually works best when the camera is near the center of the cube. Synthetic environments (as opposed to photographs, such as this one) can also benefit by lining up flat surfaces such as walls and ceilings exactly with the boundaries of the lighting space.
// data from application vertex buffer
struct appdataB
{
  float3 Position : POSITION;
};
The vertex shader will pass along the position in the room's unit-cube space, which the pixel shader uses directly as the cube-map lookup direction, plus the usual required clip-space position.
// data passed from vertex shader to pixel shader
struct vertexOutputB
{
  float4 HPosition : POSITION;
  float3 UserPos   : TEXCOORD0;
};
Listing 19-3 shows the vertex shader itself.
vertexOutputB xfBoxVS(appdataB IN,
                      uniform float4x4 WorldViewProj,
                      uniform float4x4 WorldIT,
                      uniform float4x4 World,
                      uniform float4x4 ViewIT,
                      uniform float4x4 UserXf)
{
  vertexOutputB OUT;
  float4 Po = float4(IN.Position.xyz, 1.0);
  OUT.HPosition = mul(WorldViewProj, Po);  // clip-space position
  float4 Pw = mul(World, Po);              // world coordinates
  OUT.UserPos = mul(UserXf, Pw).xyz;       // position in the room's unit-cube space
  return OUT;
}
The pixel shader simply performs a direct texture lookup into the cube map, modulated by an optional tint color. See Listing 19-4.
float4 xfBoxPS(vertexOutputB IN,
               uniform samplerCUBE EnvMap,
               uniform float4 SurfColor) : COLOR
{
  float4 reflColor = SurfColor * texCUBE(EnvMap, -IN.UserPos);
  return reflColor;
}
This shader is designed specifically to work well when projected onto a (potentially distorted) cube. Using variations with other simple geometries, such as a sphere, a cylinder, or a flat backplane, is also straightforward.
Image-based lighting provides a complex yet inexpensive alternative to numerically intensive lighting calculations. Adding a little math to this texturing method can give us a much wider range of effects than "simple" IBL, providing a stronger sense of place to our 3D images.
Schlick, Christophe. 1994. "An Inexpensive BRDF Model for Physically-Based Rendering." Computer Graphics Forum 13(3), pp. 233–246. This article presents the Fresnel equation approximation widely used throughout the graphics industry—so widely used, in fact, that some programmers mistakenly believe the approximation is the Fresnel equation.
Paul Debevec provides a number of useful IBL tools and papers at http://www.debevec.org
Ramamoorthi, Ravi, and Pat Hanrahan. 2001. "An Efficient Representation for Irradiance Environment Maps." In Proceedings of SIGGRAPH 2001. This article on diffuse convolution using spherical harmonics is available online at http://graphics.stanford.edu/papers/envmap
Apodaca, Anthony A., and Larry Gritz, eds. 1999. Advanced RenderMan: Creating CGI for Motion Pictures. Morgan Kaufmann. In this book, Larry Gritz describes a similar reflection algorithm using the RenderMan shading language.