Nvidia just made the Deep Learning Super Sampling (DLSS) software development kit (SDK) freely available to developers. The move is likely a response to AMD's FidelityFX Super Resolution (FSR) technology, and it shows that Nvidia knows DLSS's superior image quality alone isn't enough to compete with FSR.
Although the move certainly makes DLSS a more equal competitor to FSR, it alone isn’t enough to cement DLSS as the go-to upscaling feature. Nvidia has a commanding position with DLSS at the moment, and by borrowing a key feature from FSR, it could make AMD’s upscaling technique obsolete. Here’s how.
In the battle between DLSS and FSR, Nvidia’s feature is already winning. That’s not because it’s inherently better than FSR, but because Nvidia has been working on the technology for nearly three years. In that time, the list of supported DLSS titles has continued to grow, despite Nvidia asking developers to apply to use the technology.
Currently, DLSS is available in 56 titles, while FSR is available in only 13. Nvidia has a clear lead in game support at the moment, but it's important to remember that Nvidia built its lineup of games over a number of years. FSR has been adopted far more rapidly, even if its list of games remains much shorter than what DLSS currently supports.
Within days of FSR arriving on AMD's GPUOpen platform, for example, Marvel's Avengers received the feature, and AMD hadn't even announced the game as an FSR title beforehand. This rapid adoption is likely what prompted Nvidia to make its SDK freely available to developers, especially as early adopters claimed that AMD's technology was easier to work with.
DLSS is already winning in the games race, and the move to make the SDK readily available greases the wheels of adoption. The latest SDK also brings Linux support, which wasn’t previously available with DLSS. FSR has worked on Linux since it launched.
For Nvidia, it’s not about DLSS winning against FSR. It’s about maintaining a position it has already built for itself over a few years, which shouldn’t be hard to do given the quality that DLSS provides over FSR, particularly with more demanding upscaling modes.
We were shocked in our FidelityFX Super Resolution review to find that AMD's upscaling tech worked nearly as well as DLSS at the highest quality setting. A big reason why is the aura Nvidia built around DLSS. Because the feature requires the company's proprietary Tensor cores and was only available to select developers, we expected FSR to fall clearly short of DLSS, even at the highest quality setting.
That's not true of the lower quality settings, however. As the internal render resolution shrinks, FSR's problems start to become clear. DLSS quality certainly drops as the internal render resolution does, too, but Nvidia's upscaling tech holds up much better in the more demanding modes.
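To put "more demanding modes" in perspective, here's a quick sketch of roughly how much resolution each quality mode actually renders internally before upscaling. The scale factors below are the commonly published approximations and can vary per title, so treat the numbers as illustrative:

```python
# Approximate per-axis render scale for DLSS quality modes.
# These are commonly cited values, not guaranteed per-game figures.
DLSS_SCALE = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def internal_resolution(out_w, out_h, mode):
    """Return the approximate internal resolution DLSS upscales from."""
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

# At 4K output, Performance mode renders internally at roughly 1080p.
for mode in DLSS_SCALE:
    print(mode, internal_resolution(3840, 2160, mode))
```

The takeaway: in Ultra Performance mode, the upscaler is reconstructing a 4K image from roughly a ninth of the pixels, which is exactly where DLSS's extra information pays off and FSR's spatial-only approach struggles.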
There are two main reasons for this. The first is the artificial intelligence (A.I.) training that DLSS uses. There's a generalized A.I. model that Nvidia trains offline against extremely high-resolution reference images, which gives the upscaling algorithm more information to work with. Although that extra information isn't critical in the high-quality modes, it becomes essential in the low ones.
In addition, DLSS uses motion vectors while FSR doesn't. This temporal data allows DLSS to use information from previous frames to accurately track moving objects in a scene, reducing visual artifacts. This is especially noticeable for distant detail, like the molten steel pouring in Necromunda: Hired Gun and the flicker of wispy clouds in Death Stranding.
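To illustrate the idea, and only the idea (Nvidia's actual algorithm is a far more sophisticated neural network), here's a toy sketch of temporal accumulation: each output pixel follows its motion vector back into a history buffer of previous frames, then blends that reprojected history with the new frame:

```python
import numpy as np

def temporal_accumulate(current, history, motion, alpha=0.1):
    """Toy temporal accumulation, not Nvidia's implementation.

    current: (H, W) values from the new frame
    history: (H, W) accumulated result of previous frames
    motion:  (H, W, 2) per-pixel motion vectors (dy, dx) in pixels,
             pointing from this frame back to the previous one
    alpha:   weight given to the new frame
    """
    H, W = current.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Follow each pixel's motion vector back into the previous frame.
    py = np.clip(ys + motion[..., 0], 0, H - 1).astype(int)
    px = np.clip(xs + motion[..., 1], 0, W - 1).astype(int)
    reprojected = history[py, px]
    # Exponential blend: most of the weight stays on accumulated history,
    # which is what suppresses flicker on fine, distant detail.
    return alpha * current + (1 - alpha) * reprojected
```

Because most of each output pixel comes from reprojected history rather than the single new frame, sub-pixel detail that flickers frame to frame gets averaged into something stable. FSR, working from one frame at a time, has no equivalent source of information.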
FSR is a very close approximation of DLSS. Once the tech is pushed to the limit, however, it’s clear that FSR is just an approximation. DLSS remains the benchmark due to its ability to take advantage of dedicated hardware, the A.I. model, and temporal information.
Quality alone may not be enough to give DLSS staying power over FSR, though.
The DLSS versus FSR discussion really isn't relevant on high-end hardware. Turning on either feature at one of its higher quality modes will render excellent image quality and a significant boost in performance. Low-end hardware is the linchpin of FSR's rapid adoption, and it remains the most potent threat to DLSS.
You need an RTX graphics card to use DLSS. To use FSR, you just need a supported game. That means you need at least an RTX 2060 for DLSS, which, thanks to the GPU pricing crisis, will cost you around $600. With FSR, you can get by with something like the GTX 1050 Ti, which only costs around $200 at the time of publication.
There are obvious performance differences between the two cards, but the fact remains that budget PC builders who most need access to an upscaling feature have been effectively priced out of DLSS. This is all the more frustrating because DLSS shows its clearest strengths at lower quality settings, which most benefit low-end hardware.
Even with a clear advantage in game support and quality, DLSS won’t be able to maintain its lead over FSR if budget hardware isn’t accounted for. FSR is a generalized solution that works pretty much regardless of the hardware you have, making it an obvious choice for users with inexpensive graphics cards or APUs.
Unfortunately, Nvidia can't simply extend DLSS to other graphics cards. The feature requires Tensor cores, which are only available on the last two generations of Nvidia GPUs. However, Nvidia could use its experience with DLSS, A.I., and temporal upscaling to offer a feature that accounts for players who don't have access to RTX graphics cards.
This is all the more important given the ongoing problems with finding a graphics card. When options are few and far between, builders are going to reach for whatever’s available. And when developers see that shift, they’ll be more likely to adopt a feature like FSR that works for their player base.
With DLSS, Nvidia has already optimized for the future. We’ve seen things like 8K gameplay above 60 frames per second thanks to the feature, accelerating what’s possible with PC gaming. In order to stay competitive with FSR, though, Nvidia also needs to optimize for the past.