Texture filtering is one of those settings that make your game look sharper. How, then, is it different from a regular sharpening filter?
Texture filtering determines how a texture's texels are sampled and blended when the texture is viewed at a size or angle that doesn't match it exactly. Most textures ship with mipmaps: pre-computed copies of the texture, each smaller than the previous one by a factor of 2, from which the renderer picks the level closest to the on-screen size. At the points where the renderer transitions between mipmap levels, the samples must be filtered to avoid blurring and other artifacts.
Bilinear filtering, the simplest form of texture filtering, uses the following approach to calculate the color of a sample: it takes the four texels nearest to the sampling position (a texel is to a texture what a pixel is to the screen) and computes their weighted average, which is then used as the final value. However, bilinear filtering only samples from a single mipmap level at a time, so with perspective-distorted textures, there is a visible shift in texture clarity wherever the renderer switches from one mipmap to the next.
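The weighted-average step can be sketched in a few lines of Python. This is a grayscale, CPU-side illustration of the math, not how a GPU actually implements it:

```python
import math

def bilinear_sample(texture, u, v):
    """Sample a texture (2D list of brightness values) at continuous
    texel coordinates (u, v) by blending the four nearest texels,
    each weighted by its distance to the sample point."""
    h, w = len(texture), len(texture[0])
    x0, y0 = int(math.floor(u)), int(math.floor(v))
    fx, fy = u - x0, v - y0
    # Clamp neighbour indices so edge samples stay inside the texture
    x0, y0 = max(0, min(x0, w - 1)), max(0, min(y0, h - 1))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling halfway between a black texel and a white one returns mid-gray, which is exactly the smooth gradient you see instead of blocky texels.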
Trilinear filtering improves on bilinear filtering by sampling and interpolating (averaging) between the two closest mipmap levels for the target sample, smoothing out those transitions. But like bilinear filtering, this technique assumes that each pixel's footprint on the texture is roughly square, and suffers a loss in quality when a surface is viewed at an oblique angle, such as a floor or road receding into the distance.
This is because, at such angles, the pixel's footprint on the texture becomes elongated: it covers a stretch along the depth axis longer than, and a width narrower than, the square samples extracted from the mipmaps, resulting in blurring due to under- and over-sampling, respectively.
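The trilinear blend between the two nearest mip levels is a plain linear interpolation on the fractional level of detail (LOD). A minimal sketch, assuming the two bilinear samples (one per mip level) have already been taken:

```python
def trilinear_blend(sample_lo, sample_hi, lod):
    """Blend two bilinear samples -- one from each of the two closest
    mipmap levels -- using the fractional part of the level of detail."""
    f = lod - int(lod)  # e.g. lod 2.25 sits 25% of the way from mip 2 to mip 3
    return sample_lo * (1 - f) + sample_hi * f
```

As the camera moves and the LOD drifts from 2.0 toward 3.0, the result fades smoothly from one mip to the next instead of snapping at a boundary.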
To solve this, anisotropic filtering (AF) is used, which scales either the height or width of the sampled mipmap footprint by a ratio that depends on the angle of the texture relative to the screen, then takes the appropriate number of samples. AF supports anisotropy levels between 1 (no scaling, i.e., standard isotropic filtering) and 16, defining the maximum degree by which the footprint can be stretched, but AF is commonly offered to the user in powers of two: 2x, 4x, 8x, and 16x.
The difference between these settings is the maximum angle at which AF will still fully filter the texture. For example, 4x will filter textures at angles twice as steep as 2x, but will still apply standard 2x filtering to textures within the 2x range to save performance.
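That clamping behavior can be sketched on the CPU. This is a simplification with hypothetical inputs: `footprint_x` and `footprint_y` stand in for the pixel footprint's extents in texture space, which real GPUs derive from screen-space derivatives of the texture coordinates:

```python
import math

def aniso_setup(footprint_x, footprint_y, max_aniso=16):
    """Return (samples, ratio): how many samples to take along the
    footprint's long axis, with the ratio clamped to the user's AF
    setting (2x/4x/8x/16x). Simplified illustrative sketch."""
    longer = max(footprint_x, footprint_y)
    shorter = max(min(footprint_x, footprint_y), 1e-8)
    ratio = min(longer / shorter, max_aniso)  # clamp to the AF setting
    samples = max(1, math.ceil(ratio))        # extra taps along the long axis
    return samples, ratio
```

A footprint eight times longer than it is wide gets eight taps at 16x AF, but only four at 4x AF, which is the quality/performance trade-off the setting exposes.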
Remember NVIDIA’s godrays? Yeah, that’s basically what volumetric lighting is. Team Green uses tessellated godrays, which are more performance-intensive but look better too. Traditional volumetric lighting simulates how sun rays (or any light rays) appear and behave as they pass through the game world. They are usually traced using screen-space ray-marching.
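A common screen-space approximation marches from each pixel toward the light's on-screen position, accumulating decaying samples along the way. A rough sketch, where `light_at` is a hypothetical input returning the (possibly occluded) light intensity at a screen position:

```python
def godray(light_at, px, py, lx, ly, steps=16, decay=0.9):
    """Accumulate light samples along the line from pixel (px, py)
    toward the light's screen position (lx, ly). Occluders along the
    path return low intensity, carving the visible shafts of light."""
    dx, dy = (lx - px) / steps, (ly - py) / steps
    x, y = px, py
    weight, total = 1.0, 0.0
    for _ in range(steps):
        x, y = x + dx, y + dy
        total += light_at(x, y) * weight
        weight *= decay  # samples farther along the ray contribute less
    return total
```

Because the march happens entirely in screen space, a light source that moves off-screen makes its godrays vanish, a well-known artifact of this approach.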
Screen Space Reflections
Screen space reflections (SSR) is a technique for rendering dynamic in-game reflections. It is quite taxing, and for good reason: SSR traces reflections across reflective surfaces using only the information already rendered to the screen. That means it can only reflect objects visible on screen; anything in the scene that is off-screen or occluded simply won't show up in the reflection.
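At its core, SSR marches each reflected ray through the depth buffer; the moment the ray leaves the screen, there is no data left to reflect. A toy sketch of that march, assuming a simple depth buffer stored as a 2D list:

```python
def ssr_march(depth, x, y, dx, dy, dz, start_depth, steps=32):
    """March a reflected ray in screen space against the depth buffer.
    Returns the pixel the ray hits, or None if it exits the screen --
    which is exactly why off-screen objects can't appear in SSR."""
    h, w = len(depth), len(depth[0])
    z = start_depth
    for _ in range(steps):
        x, y, z = x + dx, y + dy, z + dz
        xi, yi = int(x), int(y)
        if not (0 <= xi < w and 0 <= yi < h):
            return None          # ray left the screen: no reflection data
        if depth[yi][xi] <= z:   # ray passed behind visible geometry: hit
            return (xi, yi)
    return None
```

Real implementations refine the hit with binary search and fall back to a cube map on a miss; this sketch only shows why visibility on screen is a hard requirement.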
- What is Ray-Tracing and How is it Different from Rasterization: A Look at the Working of NVIDIA’s RTX GPUs
Another popular reflection technique is cube-mapping. Here, the reflections are pre-baked onto the six sides of a cube and stored as six square textures, or unfolded into six square regions of a single texture. These are far less detailed and less accurate, and at times one whole reflective surface as large as a river or lake may use the same cube map.
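Looking up a cube map at runtime amounts to picking which face the reflection vector points at: the face matching the vector component with the largest magnitude. A minimal sketch:

```python
def cube_face(x, y, z):
    """Pick which of the six cube-map faces a reflection vector hits:
    the face along the axis with the largest absolute component."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"
```

The remaining two components, divided by the largest one, give the 2D coordinates within that face's texture; GPUs do this whole lookup in hardware.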
Tessellation is a DirectX 11-era technique used to increase the level of detail in a scene without increasing texture sizes. It works by subdividing primitives (triangles, lines, points, and so on) into smaller ones, repeatedly refining the current geometry into a finer, more complex mesh.
This allows you to load a relatively coarse mesh, dynamically generate more vertices and triangles, and turn it into a finer one. There are three stages involved in tessellation. The hull shader calculates the tessellation factors; it is similar to a geometry shader in that it operates on patches, and its output is sent to the fixed-function tessellator, which performs the subdivision. The domain shader then uses the new vertex data and control points to finalize the results. The domain shader is similar to the vertex shader: it takes vertices and control points as input and outputs the final vertex positions for the newly produced domain samples.
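The subdivision itself can be illustrated with a single refinement step on one triangle, splitting it into four by connecting the edge midpoints. This is a simplification of what the fixed-function tessellator produces at higher tessellation factors:

```python
def subdivide(tri):
    """Split one triangle into four by connecting its edge midpoints --
    one refinement step; applying it repeatedly yields a finer mesh."""
    def mid(a, b):
        return tuple((p + q) / 2 for p, q in zip(a, b))
    a, b, c = tri
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    # Three corner triangles plus the one in the middle
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```

Each pass quadruples the triangle count, which is why tessellation factors scale geometry cost so quickly; in a real pipeline the domain shader would then displace the new vertices, e.g. by a height map.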
Post-processing generally refers to effects applied in the last phase of rendering, after all the other effects like tessellation, multi-sampling, reflections, and shadows are done. It includes shader-based effects such as depth of field, motion blur, and ambient occlusion, as well as anti-aliasing techniques like FXAA and SMAA.
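Conceptually, a post-process pass is just a function applied over the finished frame. As an illustration, a toy vignette (darkening toward the edges) over a brightness buffer:

```python
def vignette(frame, strength=0.5):
    """A minimal post-process pass: darken pixels toward the frame
    edges. `frame` is a 2D list of brightness values -- the pass runs
    after the scene has already been fully rendered."""
    h, w = len(frame), len(frame[0])
    cx, cy = (w - 1) / 2, (h - 1) / 2
    max_d2 = cx * cx + cy * cy or 1.0  # farthest (corner) distance squared
    out = []
    for y, row in enumerate(frame):
        out.append([v * (1 - strength * ((x - cx) ** 2 + (y - cy) ** 2) / max_d2)
                    for x, v in enumerate(row)])
    return out
```

Because such passes only read the final image (plus auxiliary buffers like depth), their cost scales with resolution rather than scene complexity, which is what makes effects like FXAA so cheap.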
Another widely used technique, V-Sync caps your in-game frame rate at your monitor’s refresh rate to prevent screen tearing. It does so by holding back the GPU pipeline so the frame rate doesn’t exceed your refresh rate. However, this can also hurt performance by introducing stutter or input lag. Read more on it here:
- What is V-Sync: Should it be Turned Off or On?
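Conceptually, V-Sync is a wait at the end of every frame until the next display refresh. A toy CPU-side sketch of that pacing; real V-Sync is handled by the driver and swap chain, not a sleep:

```python
import time

def frame_loop(render, refresh_hz=60, frames=3):
    """Hold each rendered frame until the next refresh interval, so the
    GPU never outpaces the display -- the essence of V-Sync pacing."""
    interval = 1.0 / refresh_hz
    next_flip = time.perf_counter()
    for _ in range(frames):
        render()                 # draw the frame (a no-op stands in here)
        next_flip += interval
        sleep_for = next_flip - time.perf_counter()
        if sleep_for > 0:
            time.sleep(sleep_for)  # this wait is where input lag comes from
```

If a frame takes slightly longer than one interval, it has to wait for the following refresh, which is why V-Sync can make frame rates drop in large steps (e.g. from 60 straight to 30).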
- What is the Difference Between DirectX 11 and DirectX 12
- DirectX 12 Ultimate: Mesh Shaders, Ray-Tracing and Sampler Feedback