Which Global Illumination Technique Is Best For Your Game?

The team behind the global illumination technology Enlighten surveys the lighting techniques available today, weighs up the pros and cons of each, and explains when you should use which one.

Ellie Stone

Marketing Manager at Geomerics

Ellie Stone is Marketing Manager for Geomerics, the industry leader in game lighting technology. A Cambridge University graduate, she moved to the games industry after spending the first part of her career in the semiconductor space and is currently focussed on educating the international developer community on lighting best practices.

Lighting is the most fundamental element of any game environment. Done well, it brings three-dimensional life to the geometry, amplifies the emotional feel of a scene and draws the eye to important areas of focus. With global illumination, or indirect lighting, playing a hugely significant role in the delivery of balanced scene lighting, it is vital that studios understand the techniques available and adopt the one that is most suitable for their platform, environment, gameplay and performance budget. In this article we'll explore the main options available, weigh up their advantages and limitations, and conclude with recommendations for different scenarios.

What Is Global Illumination?

Global illumination computes the effect of light being bounced, reflected, absorbed, transmitted and scattered as it encounters various surfaces in the environment. It provides indirect lighting to a scene, rather than limiting it to the direct light hitting a surface immediately from a light source.

It simulates the real-world behaviour of light, where objects and lights are interdependent. As a result, the final composite image shows more lighting variation, with smooth gradients across surfaces and a more convincing rendered scene, as shown in the example (image 1). It also enables effects such as colour bleeding, where light reflected off a coloured surface causes nearby objects to pick up that colour, as well as translucency, where light travelling through an object is filtered by its colour.

Image 1: This example shows how combining direct and indirect lighting leads to more lighting variation, with smooth gradients across surfaces and a more convincing rendered scene.

Baked Lighting

Popularized with »Quake« in 1996, baked lighting is an extremely common technique used for both direct and indirect lighting as well as other effects such as ambient occlusion. It fully pre-calculates lighting information for static scenes and stores the output in static data structures which are then consumed by fragment shaders.

The most commonly used data structures are textures, spherical harmonics probes and cubemaps. Textures are applied to any static meshes in the scene, and their shading cost involves only a simple texture lookup. Spherical harmonics probes enable moving objects such as characters or set dressing objects to update their lighting with respect to their surroundings; their shading cost involves interpolating between probes organized in a regular grid, octree or tetrahedral mesh, and then evaluating the spherical harmonics coefficients based on the fragment normal. Cubemaps are used for view-dependent reflections and are usually placed individually around a scene with an additionally defined projection volume. When evaluating cubemaps in the shader, their content is re-projected onto the projection volume based on the location of the camera. The accuracy of the reflections is limited by how well the projection volume approximates the surrounding geometry and how much effort is spent in the re-projection step. In practice, cubemaps are often only used as a fall-back for areas where screen-space reflections don't provide sufficient results.
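
To make the probe shading cost concrete, here is a minimal C++ sketch, assuming an order-2 (four-coefficient) probe; the SH4 type and EvaluateSH4 function are hypothetical names, and a real engine would first interpolate the coefficients from the surrounding probes.

    #include <array>

    struct Vec3 { float x, y, z; };

    // One colour channel of an order-2 spherical harmonics probe:
    // four coefficients, as commonly stored per light probe.
    using SH4 = std::array<float, 4>;

    // Evaluate the SH basis in the direction of the normalised fragment
    // normal and dot it with the coefficients that were previously
    // interpolated from the surrounding probes.
    float EvaluateSH4(const SH4& c, const Vec3& n)
    {
        const float y00 = 0.282095f; // sqrt(1 / (4 * pi)), L0 basis constant
        const float y1  = 0.488603f; // sqrt(3 / (4 * pi)), L1 basis constant
        return c[0] * y00
             + c[1] * y1 * n.y
             + c[2] * y1 * n.z
             + c[3] * y1 * n.x;
    }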

Baked lighting is widely available in third-party engines and content creation tools. It can produce high quality results and it is flexible, with any kind of lighting-related information able to be stored in one way or another. It is also the most affordable runtime technique, enabling performance-efficient indirect lighting.

However, everything is static at runtime, both in the game and in the editor: light sources, materials and meshes can't be changed. The only option for dynamic content is to swap out entire data sets, which is very costly with respect to memory consumption as well as creating and managing this additional content. This either limits dynamic gameplay options or requires sacrifices in visual quality. In addition, baked lighting has the lengthiest workflow of all the global illumination solutions listed in this article. In order to see the effect of any changes made to a scene, no matter how small, artists need to run a full lighting bake. Given that getting a scene to AAA quality requires an artist to iterate on each individual light many times, baking can have a significant impact on a studio's bottom line as well as on the quality of the final result.

Pre-computed Visibility

The most expensive part of calculating global illumination is determining the visibility between surfaces. By decoupling the surface-to-surface visibility calculation from the actual influence of light sources and materials, pre-computing only the visibility information, and combining it with material and direct lighting information only at runtime, it is possible to update the global illumination in real time, both in the game and in the editor.
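
As a minimal sketch of this idea (not any particular product's data layout; VisibilityLink and GatherBounce are hypothetical names), the pre-computed visibility can be thought of as a set of weighted links between surface patches that are combined with per-frame direct lighting and albedo at runtime:

    #include <cstddef>
    #include <vector>

    struct Rgb { float r, g, b; };

    Rgb  operator*(const Rgb& a, float s)      { return { a.r * s, a.g * s, a.b * s }; }
    Rgb  operator*(const Rgb& a, const Rgb& b) { return { a.r * b.r, a.g * b.g, a.b * b.b }; }
    Rgb& operator+=(Rgb& a, const Rgb& b)      { a.r += b.r; a.g += b.g; a.b += b.b; return a; }

    // One pre-computed entry: "patch 'source' contributes with weight
    // 'weight' to the receiving patch". The weights encode the expensive
    // surface-to-surface visibility and are computed once, offline.
    struct VisibilityLink { std::size_t source; float weight; };

    // One bounce of indirect light: for every patch, gather the direct
    // lighting of all patches visible to it, then modulate by its albedo.
    // Direct lighting and albedo may change every frame; the links don't.
    std::vector<Rgb> GatherBounce(const std::vector<std::vector<VisibilityLink>>& links,
                                  const std::vector<Rgb>& directLight,
                                  const std::vector<Rgb>& albedo)
    {
        std::vector<Rgb> bounce(directLight.size(), Rgb{ 0, 0, 0 });
        for (std::size_t i = 0; i < links.size(); ++i) {
            for (const VisibilityLink& l : links[i])
                bounce[i] += directLight[l.source] * l.weight;
            bounce[i] = bounce[i] * albedo[i];
        }
        return bounce;
    }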

Pre-computed visibility global illumination is similar to baked lighting in that it stores its output in the same three data structures: lightmaps, spherical harmonics probes and cubemaps. However, dynamic effects such as time of day, changing materials and player-controlled light sources now become possible, while the indirect lighting is updated dynamically and the visual end result is kept at a very high quality. What is more, once the initial visibility pre-computation is done, all light sources and materials can be changed in real time in the editor and the resultant effect on the global illumination can be seen instantly. This allows artists to iterate over the lighting of a scene much more efficiently, resulting in huge time savings compared to baked lighting and often better visual quality.

Domain-specific lossy compression and storage of the pre-computed surface-to-surface visibility information enable the technique to scale from high-end PCs and consoles right down to mobile platforms, all while maintaining real-time global illumination. Similar to baked lighting, light leaking is usually not an issue for pre-computed visibility, provided that sufficient care is taken when preparing the underlying assets. And because its outputs are equivalent to those of baked lighting, it is also possible to benefit from the improved artist workflow by using real-time global illumination only in the editor while baking out the final result as static data. Furthermore, it is possible to target different platforms with different outputs, e.g. dynamic indoor and outdoor lighting on PC, dynamic indoor lighting only on consoles, and static lighting on mobile, all using exactly the same underlying assets and lighting setup.

The limitations of pre-computed visibility include the requirement for enough static geometry in a scene to enable the pre-computation step to take place; certain environments, such as procedurally generated, completely dynamic or user-generated content, are not suitable. Also, while its runtime cost is affordable in most cases, pre-computed visibility still has a non-negligible impact on the overall performance and memory budget, and very large scenes in particular will require good streaming and load-balancing mechanisms to mitigate any negative performance impact. Finally, the pre-computation step may well be an adjustment for artists who are used to instant updates from the techniques detailed later; however, it is still much faster than traditional baked lighting.

Light Propagation Volumes

The technique for light propagation volumes was proposed by Kaplanyan and Dachsbacher in 2010 [1] as an alternative to traditional pre-computed indirect lighting and first implemented in CryEngine 3. The core idea behind this technique is to inject virtual point lights into a three-dimensional data structure, aggregate the injected lights for each cell, and finally iteratively propagate them to neighbouring cells.

The virtual point light injection is traditionally done via reflective shadow maps, which render the scene from the location of each light source and store information such as depth, world-space coordinates, normal and flux. This information is then aggregated into two distinct volume data structures, i.e. two three-dimensional regular grids. The first volume stores a set of spherical harmonics coefficients for each cell, representing a virtual point light. The second volume is offset by half a cell in each direction and stores a set of spherical harmonics coefficients approximating the occlusion between the cells in the first grid. The spherical harmonics coefficients in both volumes are then iteratively combined to propagate indirect light within the first volume. Finally, applying the lighting in the fragment shader simply involves looking up the interpolated value within the light propagation volume, which is stored in a volume texture on the GPU. In practice, light propagation volumes also employ cascades of regular grids to scale to large environments; the cascades move with the camera location and snap to a world-space grid to avoid flickering.
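
The following is a heavily simplified C++ sketch of the propagation step, assuming a regular grid of four-coefficient spherical harmonics cells; it omits the per-face re-projection and the offset occlusion volume described above, and the attenuation factor and function names are illustrative assumptions.

    #include <array>
    #include <vector>

    // One colour channel of a grid cell: four spherical harmonics coefficients.
    using SH4 = std::array<float, 4>;

    // One propagation iteration: every cell receives an attenuated copy of
    // its six axial neighbours' coefficients. The real algorithm re-projects
    // each neighbour's SH lobe through the shared cell face and consults a
    // second, offset occlusion volume; both refinements are omitted here.
    void PropagateOnce(std::vector<SH4>& grid, int nx, int ny, int nz)
    {
        const int offs[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
        const float attenuation = 1.0f / 8.0f; // ad-hoc energy damping
        std::vector<SH4> next = grid;          // double-buffer the grid
        auto idx = [&](int x, int y, int z) { return (z * ny + y) * nx + x; };

        for (int z = 0; z < nz; ++z)
          for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x)
              for (const auto& o : offs) {
                  const int sx = x + o[0], sy = y + o[1], sz = z + o[2];
                  if (sx < 0 || sy < 0 || sz < 0 || sx >= nx || sy >= ny || sz >= nz)
                      continue; // neighbour outside the volume
                  const SH4& src = grid[idx(sx, sy, sz)];
                  SH4& dst = next[idx(x, y, z)];
                  for (int k = 0; k < 4; ++k)
                      dst[k] += src[k] * attenuation;
              }
        grid = next;
    }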

Light propagation volumes allow lights, materials and objects to be fully dynamic at runtime. They work well with complex geometry and are relatively stable and flicker-free. At basic quality levels they can even scale down to mobile platforms. In addition, light propagation volumes support multiple bounces, glossy reflections and participating media; however, these extensions are usually considered too expensive in practical applications, and the computational resources are instead used to improve the base lighting quality.

Light propagation volumes have a number of issues that limit their application, the biggest being their low spatial resolution due to memory constraints, even when employing cascades. As a result, light and shadow leaking are common and the lighting result is fairly ambient, with low contrast. Combined with the fact that their runtime performance scales with the number of light sources, this means light propagation volumes are mostly used for outdoor scenes with only a single directional light source.

Voxel Cone Tracing

Voxel cone tracing, proposed by Crassin et al. in 2011 [2], has similarities to the light propagation volumes technique described above but produces higher quality results. While light propagation volumes calculate lighting for each cell and apply the indirect lighting result with a simple lookup into a volume texture, voxel cone tracing only calculates lighting in areas where there is actual geometry, while performing a much more involved aggregation of the indirect lighting in the fragment shader.

To achieve an adaptive resolution, voxel cone tracing uses a sparse voxel octree data structure effectively representing a three-dimensional MIP-map that is empty in areas where there is no geometry. This concept of MIP-maps is important for the later light aggregation in the fragment shader. Due to the adaptive resolution, voxel cone tracing can employ a much higher resolution for the same memory cost. The sparse voxel octree is usually built once based on the static objects in the scene and updated per frame for any moving objects.

The light computation is again split into a light injection and a light propagation step. Similar to light propagation volumes, reflective shadow maps are used to determine the influence of each light source; however, voxel cone tracing encodes its virtual point lights as Gaussian lobes in combination with pre-defined BRDFs instead of using spherical harmonics. The light propagation step aggregates the virtual point light information upwards through the MIP-map chain, going from the smallest octree cells to the largest.

Applying the lighting to an object in the fragment shader is done by ray tracing within the sparse voxel octree. A small number of rays are sent out from the fragment's location in a hemisphere around the fragment normal to determine the indirect lighting, plus a single ray based on the view direction and the fragment normal to determine the specular reflection. When a ray hits a populated cell within the octree, the distance between the ray origin and the hit point determines which MIP-map level is evaluated: if the distance is small, a higher-resolution level is selected, and if the distance is large, a lower-resolution level is selected. The ray thus effectively represents a cone, which gives the technique its characteristic name.
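
A minimal C++ sketch of that cone-marching loop is shown below. SampleVoxels is a hypothetical accessor standing in for a lookup into the pre-filtered sparse voxel octree, and is stubbed out here so the sketch is self-contained; the step sizes and parameters are illustrative assumptions.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec4 { float r, g, b, a; }; // pre-filtered radiance (rgb) + occlusion (a)

    Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 operator*(const Vec3& a, float s)       { return { a.x * s, a.y * s, a.z * s }; }

    // Stub so the sketch compiles; a real implementation samples the sparse
    // voxel octree at the given world position and MIP level (0 = finest).
    Vec4 SampleVoxels(const Vec3& /*p*/, float /*mipLevel*/) { return { 0, 0, 0, 0 }; }

    // March a single cone. The sampled MIP level grows with the cone's
    // diameter at the current distance, so distant samples come from coarser
    // cells: this is what makes a "ray" behave like a cone.
    Vec3 TraceCone(Vec3 origin, Vec3 dir, float coneAngle, float maxDist, float voxelSize)
    {
        Vec3  radiance  = { 0, 0, 0 };
        float occlusion = 0.0f;
        float t = voxelSize; // start one voxel out to avoid self-intersection
        while (t < maxDist && occlusion < 1.0f) {
            const float diameter = 2.0f * t * std::tan(coneAngle * 0.5f);
            const float mip      = std::log2(std::max(diameter / voxelSize, 1.0f));
            const Vec4  s        = SampleVoxels(origin + dir * t, mip);
            // Front-to-back compositing: nearer samples occlude farther ones.
            radiance   = radiance + Vec3{ s.r, s.g, s.b } * ((1.0f - occlusion) * s.a);
            occlusion += (1.0f - occlusion) * s.a;
            t += std::max(diameter * 0.5f, voxelSize * 0.5f); // step with cone width
        }
        return radiance;
    }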

Voxel cone tracing provides an elegant light propagation and aggregation model for both indirect lighting and specular reflections. It is able to handle complex geometry, including transparency and participating media, while delivering smooth lighting results at a higher quality than light propagation volumes. However, it can be very expensive to achieve these high quality levels: the memory consumption is usually measured in gigabytes, and the performance in tens of milliseconds. What is more, the technique still suffers from significant light leaking even at medium quality levels, and the directional information required to significantly improve quality would further increase the memory cost.

As an extension of voxel cone tracing, NVIDIA offers a similar solution for DX11 GPUs called VXGI, which uses MIP-map cascades with efficient hardware addressing. The solution can handle large and dynamic scenes and scales across 3-5 LODs at 16-56 bytes per voxel. However, the light leaking issue of voxel cone tracing remains for thin or distant geometry, and the light quantization is insufficient for HDR lighting. The voxelization also suffers from aliasing, which results in flickering lights and objects when moving through the scene. Finally, it remains very expensive even on high-end devices, and its per-frame voxelization prohibits complex geometry.

Other Techniques

Height field global illumination is a technique for the dynamic direct and indirect lighting of terrain only. It was introduced by Nowrouzezahrai and Snyder in 2009 [3] and used by Epic Games in their recent »Kite« demo. The technique creates MIP-maps on the GPU from the original height field data, first to determine the height variation and compute direct shadows, and then to compute indirect light bounces. It once again uses spherical harmonics to encode visibility as well as irradiance in the MIP-map levels. The major limitation of this technique is that it can only be applied to height-field-based terrain, and it does not interact with any other objects in the scene apart from the terrain. For large terrains it is also necessary to limit the radius of influence of the indirect lighting computation to keep the GPU cost manageable.
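
As an illustration of the MIP-map building block only (the real technique runs on the GPU and additionally encodes visibility and irradiance as spherical harmonics, which is omitted here), the following minimal C++ sketch builds one coarser level of a max-pyramid over a height field, giving conservative height bounds for shadow tests; the max-reduction and the function name are illustrative assumptions.

    #include <algorithm>
    #include <vector>

    // Build one coarser level of a max-MIP pyramid over a square height
    // field (power-of-two resolution assumed). Each coarse texel stores the
    // maximum height of its four children, so a single coarse lookup yields
    // a conservative bound when testing whether terrain blocks a light ray.
    std::vector<float> DownsampleMax(const std::vector<float>& fine, int fineSize)
    {
        const int coarseSize = fineSize / 2;
        std::vector<float> coarse(coarseSize * coarseSize);
        for (int y = 0; y < coarseSize; ++y)
            for (int x = 0; x < coarseSize; ++x) {
                const float a = fine[(2 * y)     * fineSize + (2 * x)];
                const float b = fine[(2 * y)     * fineSize + (2 * x + 1)];
                const float c = fine[(2 * y + 1) * fineSize + (2 * x)];
                const float d = fine[(2 * y + 1) * fineSize + (2 * x + 1)];
                coarse[y * coarseSize + x] = std::max(std::max(a, b), std::max(c, d));
            }
        return coarse;
    }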

Distance field global illumination is a technique under development by Epic Games. The core idea is that each mesh has its own voxel grid, and each voxel stores the signed distance to the closest piece of geometry. This allows for accelerated ray casting or cone tracing, since at each point it is possible to look up how much further a ray or cone can travel without hitting any geometry. Distance field global illumination is still very much a work in progress, with unclear commitment on the side of Epic Games. In addition, memory consumption and runtime performance are likely to be an issue for more complex scenes.
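
That lookup is what makes sphere tracing work: the stored distance tells the ray how far it may safely advance without hitting anything. Below is a minimal C++ sketch of such a loop; SampleDistanceField is a hypothetical accessor, stubbed here with an analytic sphere instead of a real per-mesh distance volume.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 operator*(const Vec3& a, float s)       { return { a.x * s, a.y * s, a.z * s }; }

    // Stand-in so the sketch is self-contained: an analytic unit sphere at
    // the origin. A real implementation would interpolate the signed
    // distance volume of the nearest mesh.
    float SampleDistanceField(const Vec3& p)
    {
        return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
    }

    // Sphere tracing: at each step the stored distance is a safe step size,
    // so empty space is skipped in large strides and the march only slows
    // down near geometry.
    bool TraceRay(Vec3 origin, Vec3 dir, float maxDist, float& hitDist)
    {
        const float surfaceEpsilon = 0.001f; // "close enough" threshold
        float t = 0.0f;
        while (t < maxDist) {
            const float d = SampleDistanceField(origin + dir * t);
            if (d < surfaceEpsilon) { hitDist = t; return true; } // hit
            t += d; // guaranteed not to overshoot any surface
        }
        return false;
    }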

Conclusion

The aforementioned techniques for computing global illumination can broadly be sorted into two categories: surface techniques and volume techniques.

Surface techniques compute indirect lighting based on the transfer of light between two-dimensional surfaces, whereas volume techniques rely on an underlying three-dimensional data structure. Surface techniques require differing degrees of pre-computation and offer limited support for dynamic objects. However, they are affordable at runtime, don't suffer from light or shadow leaking and are the only techniques with proven scalability down to mobile platforms. Volume techniques require little or no pre-computation and offer a degree of dynamic object support, but with these advantages comes a heavy performance and memory cost, and they are generally limited to only the highest end of today's PCs.

So if you are developing a game right now, which global illumination technique is best for you? The answer depends strongly on the game environment and target platforms you are considering. If your game consists largely of outdoor areas and can be lit by a single directional light source, light propagation volumes is a technique you might want to consider.

For those titles intended only for high-end PCs, VXGI is an option. Of the alternatives, voxel cone tracing is too expensive in most cases, especially with respect to memory consumption, while distance field global illumination is still at an early development stage; although it promises visual results similar to voxel-based global illumination, it could equally turn out to be another technique that is too expensive for today's games.

In all other cases, pre-computed visibility is a proven technique. If your game contains enough static meshes to support the necessary surface-to-surface pre-computation, it offers the benefits of dynamism and a rapid artist workflow along with scalability comparable to baked lighting. Enlighten is the industry standard for pre-computed visibility global illumination. It is licensable for Unreal Engine 3 and 4 as well as for internal engines, and it is also easily accessible as the default global illumination solution inside Unity 5.

As a final note, you should not dismiss baked lighting, provided that your scenes are static, of course. One option to remedy the workflow issues connected with baked lighting is to use one of the other real-time techniques as an in-editor tool and bake out the final result as static data.

Ellie Stone

References

The following works are referenced in the text and provide further information on the techniques mentioned:

  1. Anton Kaplanyan, Carsten Dachsbacher: »Cascaded Light Propagation Volumes for Real-Time Indirect Illumination«, Symposium on Interactive 3D Graphics and Games, 2010
  2. Cyril Crassin, Fabrice Neyret, Miguel Sainz, Simon Green, Elmar Eisemann: »Interactive Indirect Illumination Using Voxel Cone Tracing«, Pacific Graphics, 2011
  3. Derek Nowrouzezahrai, John Snyder: »Fast Global Illumination on Dynamic Height Fields«, Eurographics Symposium on Rendering, 2009
