INTO THE DEEP
As sales pitches go, Nvidia’s deep learning supersampling (DLSS) is as revelatory, and preposterously sci-fi, as it gets. Dedicated machine learning cores, harnessing bleeding-edge algorithms, take resource-friendly 1440p images and blow them up to super-sharp 4K proportions—and all without the performance hit of actually rendering at that higher resolution in the first place.
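To make the idea concrete, here is a toy sketch of what resolution upscaling means in code. This is plain bilinear interpolation, not Nvidia's DLSS: DLSS replaces a simple formula like this with a neural network trained on high-resolution ground-truth frames, which is why it can recover detail that interpolation cannot. The function name and the tiny stand-in frame are illustrative assumptions, not anything from Nvidia's pipeline.

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: float) -> np.ndarray:
    """Upscale a 2-D grayscale image by `scale` using bilinear interpolation.

    Hypothetical helper for illustration only; DLSS would use a trained
    network here instead of a fixed interpolation rule.
    """
    h, w = img.shape
    new_h, new_w = int(h * scale), int(w * scale)
    # Map each output pixel back to fractional source coordinates.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend the four nearest source pixels for every output pixel.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# 1440p-to-4K is a 1.5x upscale per axis (2560x1440 -> 3840x2160);
# a 4x4 array stands in for the low-resolution frame.
frame = np.arange(16, dtype=float).reshape(4, 4)
upscaled = bilinear_upscale(frame, 1.5)
print(upscaled.shape)  # (6, 6)
```

The point of the sketch is the gap it exposes: interpolation can only smear existing pixels around, whereas a learned upscaler can hallucinate plausible fine detail it saw during training.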
Along with that quietly revolutionary real-time ray tracing capability, DLSS was the big selling point of Turing cards. But while the advantages are real, and available in a generous swathe of games, its uneven execution from title to title is confusing. In its worst moments, it has actually made games look worse with DLSS enabled than at native resolution.
But the teething problems might prove trivial when we look at deep learning’s potential to turn graphics rendering on its head. Just as real-time ray tracing has proved to be what some might call a soft launch, offering tantalizing glimpses of its promise amid the noise of patchy game support and performance issues, so it goes with DLSS. For owners, it feels like being in on the ground floor of an important new advancement, while having to weather the adjustment inertia of software developers and driver writers.
Before we can really assess what deep learning might do for our gaming rigs in five or ten years’ time, we need to look at where the technology stands today.