
Unreal Engine 5 Lumen vs Ray Tracing: Which One is Better?

Epic’s Unreal Engine 4 was one of the most popular game engines of the last generation. It was widely used by AAA giants like EA, Ubisoft, and Microsoft, as well as tons of indie studios. With Unreal Engine 5 and its Lumen and Nanite technologies, Epic is looking to extend its dominance in the video game industry. In this post, we take a look at Lumen, the global illumination solution Unreal Engine 5 uses to bring realistic lighting to next-gen gaming on a budget. We’ll compare Lumen to conventional ray-tracing and analyze the differences in quality and performance between the two.

What is Global Illumination?

In simple words, Global Illumination is the process of lighting a scene by accounting for the light emitted by sources both on and off the screen, whether by approximation or by tracing its path. With ray and path tracing, light rays are cast from these sources, reaching various objects in the scene and illuminating them. The rays behave differently depending on the nature of the surfaces they encounter. For example, glossy (shiny) objects reflect the ray in a mirror-like fashion, while matte, opaque ones scatter it in other directions. This redirection of light rays by objects in different directions is known as indirect or diffuse lighting, and the redirected rays are called diffuse rays.
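
To make the distinction concrete, here’s a toy sketch (not from any engine; all names and the simple vector math are made up for illustration) of how a renderer might bounce a ray off a glossy versus a matte surface:

```python
import math
import random

def reflect(direction, normal):
    # Mirror the incoming direction about the surface normal (a specular bounce).
    d = sum(di * ni for di, ni in zip(direction, normal))
    return tuple(di - 2 * d * ni for di, ni in zip(direction, normal))

def random_diffuse(normal):
    # Pick a random direction in the hemisphere around the normal (a diffuse bounce).
    while True:
        v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        if 0 < sum(c * c for c in v) <= 1.0:
            break
    if sum(vi * ni for vi, ni in zip(v, normal)) < 0:
        v = tuple(-c for c in v)  # flip into the hemisphere above the surface
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def bounce(direction, normal, glossy):
    # Glossy surfaces mirror the ray; matte ones scatter it, creating diffuse rays.
    return reflect(direction, normal) if glossy else random_diffuse(normal)

print(bounce((0.0, -1.0, 0.0), (0.0, 1.0, 0.0), glossy=True))   # mirrored: (0, 1, 0)
print(bounce((0.0, -1.0, 0.0), (0.0, 1.0, 0.0), glossy=False))  # random upward direction
```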

Ray Traced Global Illumination (RTGI)

These indirect or diffuse rays then act as newly cast rays, bouncing off further objects and illuminating them in the process, with each redirecting object effectively acting as a light source of its own. When a ray finally reaches the camera (your eyes), the information it has gathered along the way is used to determine the lighting of the scene.

In most cases, the color of a ray is determined by the color of the surface reflecting it. To save performance, the rays hitting the screen (your eyes) directly have their data (color, intensity, etc.) calculated with complex algorithms, while the remaining diffuse rays get by with simpler, cheaper equations.
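
That split between expensive primary shading and cheaper diffuse shading can be sketched like this; everything here (the one-plane “scene”, the shading stand-ins, the 0.5 attenuation) is invented purely for illustration:

```python
import random

# Toy one-plane "scene": a floor at y = 0 with a fixed albedo.
SKY = (0.6, 0.7, 0.9)
ALBEDO = (0.8, 0.4, 0.3)

def intersect_floor(origin, direction):
    # Return the hit point on the y = 0 plane, or None if the ray points away from it.
    if direction[1] >= 0:
        return None
    t = -origin[1] / direction[1]
    return tuple(o + t * d for o, d in zip(origin, direction))

def shade_full(point):
    # Stand-in for the expensive shading applied to primary (camera) rays.
    return ALBEDO

def shade_cheap(point):
    # Stand-in for the simpler math applied to diffuse (indirect) rays.
    return tuple(0.5 * c for c in ALBEDO)

def trace(origin, direction, depth=0, max_depth=2):
    hit = intersect_floor(origin, direction)
    if hit is None:
        return SKY  # the ray escaped the scene
    color = shade_full(hit) if depth == 0 else shade_cheap(hit)
    if depth < max_depth:
        # Spawn one diffuse bounce in a random upward direction and attenuate it.
        d = (random.uniform(-1, 1), random.uniform(0.1, 1.0), random.uniform(-1, 1))
        bounced = trace(hit, d, depth + 1)
        color = tuple(c + 0.5 * b for c, b in zip(color, bounced))
    return color

print(trace((0.0, 1.0, 0.0), (0.0, -1.0, 0.2)))
```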

So, now you may be wondering how the rays are cast across the scene, and how the ray count per scene is determined. That’s handled by probes, which are essentially sources of light placed in the scene by the developers before runtime. Each probe acts as a point light, casting one or more rays radially outward and illuminating the scene.

The rays cast by these probes are traced and shaded, and data such as irradiance and distance to geometry is stored and used to calculate the final lighting of the scene. In early ray-traced titles, most developers used just one or two light probes to calculate the diffuse lighting. In the case of Metro Exodus, these were the sun and the sky textures. With the Enhanced Edition, this was extended to 256 light sources or probes. So, overall, you had 256 light sources, plus the rays from the sun and the sky, all used to calculate the lighting of each pixel. And remember: a 1080p display has roughly 2 million pixels, 1440p nearly 4 million, and 4K over 8 million!
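
To make the probe idea concrete, here’s a hedged sketch (invented names and data layout, not Metro’s or UE5’s actual code) of a single probe casting rays radially and storing irradiance plus distance to geometry per ray:

```python
import math

def probe_directions(n):
    # Spread n ray directions roughly evenly over a sphere (Fibonacci spiral).
    golden = math.pi * (3 - math.sqrt(5))
    dirs = []
    for i in range(n):
        y = 1 - 2 * (i + 0.5) / n
        r = math.sqrt(1 - y * y)
        theta = golden * i
        dirs.append((r * math.cos(theta), y, r * math.sin(theta)))
    return dirs

def capture_probe(position, trace_fn, rays=64):
    # For each direction, record the irradiance and hit distance the tracer reports.
    records = []
    for d in probe_directions(rays):
        irradiance, hit_distance = trace_fn(position, d)
        records.append({"dir": d, "irradiance": irradiance, "dist": hit_distance})
    return records

# Usage with a dummy tracer that "hits" a wall 5 units away in every direction.
probe = capture_probe((0.0, 1.0, 0.0), lambda p, d: (0.25, 5.0))
print(len(probe), probe[0])
```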

To make sure that contemporary hardware was actually able to run the game, the developers partitioned the scene into grid cells or clusters. Then, similar to screen-space effects, only the light probes in range (in the same region of the grid) were used to calculate the lighting. The primary difference is that screen-space effects divide up the screen itself (based on position in the Z-buffer), whereas here the game world is partitioned, which avoids the coverage issues screen-space methods suffer from.
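
A minimal sketch of that world-space partitioning, with made-up names and an arbitrary cell size: probes are bucketed into grid cells so a given point only ever considers the probes nearby:

```python
from collections import defaultdict

CELL = 8.0  # cell size in world units (arbitrary for this sketch)

def cell_of(p):
    # Map a world-space position to an integer grid cell.
    return tuple(int(c // CELL) for c in p)

def build_grid(probes):
    grid = defaultdict(list)
    for pos in probes:
        grid[cell_of(pos)].append(pos)
    return grid

def probes_near(grid, point):
    # Gather probes from the point's cell and its 26 neighbors; distant probes
    # never enter the lighting calculation, which is the whole optimization.
    cx, cy, cz = cell_of(point)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                found.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
    return found

grid = build_grid([(1.0, 2.0, 3.0), (50.0, 0.0, 0.0), (9.0, 2.0, 3.0)])
print(probes_near(grid, (2.0, 2.0, 2.0)))  # returns only the two nearby probes
```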

Another optimization involves accumulating rays from previous frames and reusing them for additional diffuse light bounces, much like temporal upscaling. This is used to generate a lighting grid that can be reused over consecutive frames, allowing nearly infinite bounces of light rays. You’re essentially casting diffuse rays temporally, spread across multiple frames, to reduce the performance impact and make the method feasible for real-time use. Since the contribution of diffuse rays is subtle (though noticeable in poorly lit scenes), this doesn’t suffer from the artifacts usually associated with temporal rendering techniques.
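
At its core, this temporal accumulation is just blending each new frame’s lighting into a running history. A hedged sketch (the blend factor is an arbitrary pick, not a value from any engine):

```python
def accumulate(history, current, alpha=0.1):
    # Exponential moving average: each frame contributes a little, so light
    # effectively keeps bouncing across many frames at negligible extra cost.
    if history is None:
        return list(current)
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

history = None
for frame_lighting in ([0.0, 0.0], [1.0, 0.5], [1.0, 0.5], [1.0, 0.5]):
    history = accumulate(history, frame_lighting)
print(history)  # converges toward the stable lighting over successive frames
```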

Unreal Engine 5: Lumen vs Ray Tracing

Okay, so before we begin, let’s make one thing very clear: Lumen is based on ray-tracing, albeit a more optimized, hybrid form of it that allows more widespread adoption across different graphics architectures without the need to own a $1,000 GPU.

Lumen is Unreal Engine 5’s new fully dynamic global illumination and reflections system that is designed for next-generation consoles. Lumen is the default global illumination and reflections system in Unreal Engine 5. It renders diffuse interreflection with infinite bounces and indirect specular reflections in large, detailed environments at scales ranging from millimeters to kilometers.

From the developers

By default, Lumen uses software ray-tracing (it doesn’t utilize RT cores/accelerators), albeit a highly optimized form of it. It uses multiple forms of tracing in combination, including screen tracing (SSRT) and tracing against Signed Distance Fields (SDFs) and Mesh Distance Fields (MDFs), to calculate the global illumination of the scene depending on the objects, their distance from the camera, and certain other factors.

Signed Distance Fields and Mesh Distance Fields

Before we go any further, you need to know what Signed Distance Fields (and Mesh Distance Fields) are. Although the term sounds complicated, an SDF simply tells you the distance from a given point to the nearest surface, with the sign indicating whether the point is inside or outside that surface. Let’s take the example below to illustrate this:

SDF

A ray is cast from the camera, passes through the screen, and approaches the circular surface. Now, with ray-tracing, the most important part is figuring out which rays hit objects in the scene and which ones miss. The SDF is used for this very purpose: more specifically, to find out (for a ray traveling in a particular direction) the closest point on the object’s surface where an intersection occurs.
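
In code, the signed distance to a circle like the one in the figure is just “distance to the center minus the radius”; a negative result means the point is inside the shape (hence “signed”):

```python
import math

def circle_sdf(point, center=(0.0, 0.0), radius=1.0):
    # Distance from the point to the circle's surface; the sign tells inside/outside.
    dx, dy = point[0] - center[0], point[1] - center[1]
    return math.hypot(dx, dy) - radius

print(circle_sdf((3.0, 0.0)))   # 2.0: two units outside the surface
print(circle_sdf((0.5, 0.0)))   # -0.5: inside the circle
```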

Ray Marching

Here, the green circle represents the SDF. Evaluating the SDF tells us the minimum distance to any surface, i.e., how far we can safely travel (at the very least) along the ray. That much distance is covered, and the SDF is re-evaluated at the new point. This entire process is called ray-marching. Apply the same method to meshes, and you get Mesh Distance Fields (MDFs). Either way, in this case the ray never actually hits the circle, so the process ends after a predetermined number of steps. Had there been a hit, the SDF would have been used for calculating the lighting and other related equations.
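
Putting the two together gives the classic ray-marching (sphere-tracing) loop. This is a generic textbook sketch, not Lumen’s implementation, and the step budget and thresholds are arbitrary:

```python
import math

def circle_sdf(point, center=(0.0, 0.0), radius=1.0):
    dx, dy = point[0] - center[0], point[1] - center[1]
    return math.hypot(dx, dy) - radius

def ray_march(origin, direction, max_steps=64, hit_eps=1e-3, max_dist=100.0):
    # Step along the ray by the SDF value each time: that distance is always
    # safe to travel, because no surface can be closer than the SDF says.
    x, y = origin
    dx, dy = direction
    traveled = 0.0
    for _ in range(max_steps):
        d = circle_sdf((x, y))
        if d < hit_eps:
            return (x, y)               # hit: close enough to the surface
        x, y = x + dx * d, y + dy * d
        traveled += d
        if traveled > max_dist:
            break                       # marched too far; treat as a miss
    return None                         # gave up after the step budget

print(ray_march((-3.0, 0.0), (1.0, 0.0)))   # hits the circle near (-1, 0)
print(ray_march((-3.0, 2.0), (1.0, 0.0)))   # passes above it: miss (None)
```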

Global Distance Fields, Mesh Distance Fields and Screen Tracing

In its software/hybrid tracing pipeline, Lumen uses Global Distance Fields, Mesh Distance Fields, and Screen Traces to calculate the lighting of the scene. Global Distance Fields are the fastest but much less accurate. That works to their advantage, as they’re used for the coarse skeleton of the scene, such as the walls, the floor, and large (but simple) objects like the cushions. Mesh Distance Fields are more detailed, but only hold sparse detail near the surfaces of objects. These distance fields are stored with mipmaps, with the level chosen based on the distance of the objects from the camera.

The software ray-tracing process in Lumen starts with screen-space ray-tracing, or screen tracing, which is performed against the geometry present in the depth buffer (i.e., visible on the screen). As you can see in the diagram at the beginning of the section, this is often leveraged for edges and crevices, essentially the geometry where screen-space ambient occlusion (SSAO) would apply. After screen tracing, the SDFs are used: first Mesh Distance Fields for nearby objects, followed by Global Distance Fields for the rest of the scene.
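
Conceptually, that fallback chain reads like the sketch below. The function names are invented (this is not Lumen’s actual API), and the 2-meter detail range anticipates the threshold discussed in the next section:

```python
DETAIL_RANGE = 2.0  # meters: beyond this, fall back to the global field

def trace_lumen_style(ray, screen_trace, mesh_df_trace, global_df_trace):
    # 1. Try the cheap screen-space trace against what's in the depth buffer.
    hit = screen_trace(ray)
    if hit is not None:
        return hit
    # 2. Near the camera, trace the detailed Mesh Distance Fields.
    hit = mesh_df_trace(ray, max_distance=DETAIL_RANGE)
    if hit is not None:
        return hit
    # 3. Everything farther away is handled by the coarse Global Distance Field.
    return global_df_trace(ray)

# Usage with dummy tracers: the screen trace misses, the MDF trace hits.
result = trace_lumen_style(
    ray=None,
    screen_trace=lambda r: None,
    mesh_df_trace=lambda r, max_distance: "mdf-hit",
    global_df_trace=lambda r: "gdf-hit",
)
print(result)  # "mdf-hit"
```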

GDFs vs MDFs

Mesh Distance Fields (detail tracing) are traced for objects up to 2 meters from the camera, while the rest of the scene is traced using Global Distance Fields (global tracing). Each method has its own voxelized representation of the scene, as you can see above. GDFs are less accurate, as you’re essentially tracing against object silhouettes, but they’re much faster than MDFs; given the distance of the objects they cover, this trade-off works very well. MDFs, on the other hand, trace (relatively) low-poly versions of the various objects in the scene to calculate the lighting for everything within that 2-meter range.

To further speed up the process, Lumen uses a Surface Cache. The surface cache captures the properties of objects from multiple angles and stores them in an atlas (cache) of sorts. Captures happen as the player moves around: at higher resolution as you move closer to an object and at lower resolution as you move farther away. However, this only works well for meshes with simple interiors. The cache covers a limited polygon budget (a few hundred MBs of VRAM) and relies on the LODs of the various objects/sections for effective utilization.
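
A hedged sketch of the surface-cache idea, with a data layout invented purely for illustration: captures are stored in an atlas keyed by mesh, at a resolution that scales with distance from the camera:

```python
def capture_resolution(distance, base=256, min_res=16):
    # Closer objects get captured at higher resolution, farther ones at lower.
    res = base
    d = distance
    while d > 1.0 and res > min_res:
        res //= 2
        d /= 2.0
    return res

class SurfaceCache:
    def __init__(self):
        self.atlas = {}  # mesh_id -> (resolution, capture data placeholder)

    def update(self, mesh_id, distance):
        res = capture_resolution(distance)
        cached = self.atlas.get(mesh_id)
        # Re-capture only when the required resolution changes.
        if cached is None or cached[0] != res:
            self.atlas[mesh_id] = (res, f"capture@{res}px")

cache = SurfaceCache()
cache.update("couch", distance=1.5)   # close: high-res capture
cache.update("tower", distance=40.0)  # far: low-res capture
print(cache.atlas)
```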

One of the primary drawbacks of Lumen’s software RT is that it doesn’t work with skinned meshes (primarily skeletal ones), as they’re dynamic and change their shape every frame (deformations, movement, etc.). As a result, the acceleration structures (the distance fields, in this case) for these objects would need to be rebuilt every frame, which isn’t possible with Lumen’s software ray-tracer. It builds these structures for static meshes just once, which greatly speeds up the process but renders it useless for deforming meshes.

Lumen also comes with hardware ray-tracing, but most developers will likely stick with the software implementation, as the hardware path is around 50% slower even with dedicated hardware such as RT cores. Furthermore, overlapping and masked meshes are a problem for hardware ray-tracing, as they greatly slow down the ray-traversal process. Software ray-tracing simply merges all the overlapping meshes into a single distance field, as explained above.

Overall, Lumen looks phenomenal, but its primary drawback is that it’s limited to Unreal Engine 5. This means that, similar to NVIDIA’s DLSS, it will never see the same level of adoption as openly available techniques (FXAA, SMAA, or even TAA). On the plus side, it should allow many indie studios to take advantage of an advanced GI technique without much effort on their end. Furthermore, it should also nudge other major engines (most notably CryEngine, Frostbite, Dunia, and Snowdrop) to come up with their own optimized, software-based ray-tracers that work across all hardware.

Areej

Computer hardware enthusiast, PC gamer, and almost an engineer. Former co-founder of Techquila (2017-2019), a fairly successful tech outlet. Been working on Hardware Times since 2019, an outlet dedicated to computer hardware and its applications.
