NVIDIA HAS BEEN pushing so-called ‘neural rendering’ techniques since the launch of DLSS in 2018. While DLSS was a bit of a slow burn at first, there are now more than 500 games and applications that use Nvidia RTX features.
The core idea of neural rendering is to leverage AI models to improve the quality and performance of games and other graphics applications. As pixels become increasingly complex to render, figuring out ways to reduce the number of fully rendered pixels and then interpolating to fill in the gaps can provide a better overall experience.
However, Nvidia’s solutions are designed to only work on Nvidia GPUs. Enter teams red and blue with alternatives that can work on a wider set of hardware. Upscaling and frame generation are here to stay, but how do the various AMD, Intel, and Nvidia solutions stack up, and what does the future hold for neural rendering techniques?
Join us as we cover the state of the upscaling industry and related technologies.
UPSCALING 101: THE ALGORITHMS
Fundamentally, upscaling isn’t a new idea. From the very first 2D sprites, games have used scaling algorithms. More recently, real-time upscaling of video content became an important feature, and we’ve seen various solutions in DVD players, Blu-ray players, and HDTVs over the past couple of decades. Even before DLSS arrived, upscaling was available in games: run a game at a lower resolution than your display’s native resolution, and some form of upscaling happens, either via the GPU or the monitor.
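To make the idea concrete, here’s a minimal sketch of the oldest and simplest approach mentioned above, nearest-neighbor scaling, where each low-resolution pixel is just copied into a block of output pixels. This is purely illustrative (a toy grid of numbers standing in for pixel colors), not how DLSS, FSR, or XeSS work; those use far more sophisticated temporal and AI-based reconstruction.

```python
def nearest_neighbor_upscale(pixels, scale):
    """Upscale a 2D grid of pixel values by an integer factor using
    nearest-neighbor sampling: every output pixel simply copies the
    closest source pixel, so each source pixel becomes a scale x scale
    block. Fast and blocky -- the classic 2D-sprite look."""
    height, width = len(pixels), len(pixels[0])
    return [
        [pixels[y // scale][x // scale] for x in range(width * scale)]
        for y in range(height * scale)
    ]

# A 2x2 "image"; each number stands in for a pixel color.
sprite = [[1, 2],
          [3, 4]]

for row in nearest_neighbor_upscale(sprite, 2):
    print(row)
# Each source pixel becomes a 2x2 block:
# [1, 1, 2, 2]
# [1, 1, 2, 2]
# [3, 3, 4, 4]
# [3, 3, 4, 4]
```

Smarter spatial filters (bilinear, bicubic, Lanczos) blend neighboring source pixels instead of copying one, which is roughly what a GPU or monitor does when it stretches a lower-resolution frame to fill the screen.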
But we’re more interested in the modern upscaling algorithms in games. At present, the three contenders are Nvidia DLSS, AMD FSR, and Intel XeSS, but there are different versions of each of those, with later iterations generally providing improved quality and additional features like frame generation. Let’s