Nvidia's next-gen DLSS may leverage AI — tech will be able to generate in-game textures, characters, and objects from scratch (2024)

Jensen Huang of Nvidia gave a sneak peek at what the trillion-dollar GPU company is planning for future iterations of Deep Learning Super Sampling (DLSS). During a Q&A session at Computex 2024 (reported by More Than Moore), Huang fielded a DLSS-related question, saying that in the future we will see textures and objects created purely through AI. Huang added that AI NPCs will likewise be generated through DLSS.

Generating in-game assets with DLSS could help boost gaming performance on RTX GPUs. Work transferred to the Tensor cores reduces demand on the shader (CUDA) cores, freeing up resources and boosting frame rates. Huang explained that he sees DLSS generating textures and objects by itself and improving object quality, similar to how DLSS upscales frames today.

We could be somewhat close to this next iteration of DLSS technology. Nvidia is already working on a new texture compression technology that uses trained neural networks to significantly boost texture quality while keeping video memory (VRAM) demands similar to those of today's games. Traditional texture compression methods are limited to a compression ratio of about 8x, but Nvidia's neural-network-based compression can compress textures at ratios up to 16x.
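The claimed ratios are easy to put in perspective with a little arithmetic. The sketch below is purely illustrative (a single hypothetical 4096 x 4096 RGBA8 texture, ignoring mipmaps and metadata; real ratios vary per texture):

```python
# Back-of-envelope illustration of the compression ratios quoted above.
# Purely hypothetical numbers: one 4096 x 4096 RGBA8 texture, no
# mipmaps or metadata, compared at the traditional 8x ratio and the
# 16x ratio Nvidia cites for its neural texture compression.

def compressed_size_mib(width: int, height: int, bytes_per_pixel: int,
                        ratio: float) -> float:
    """Size in MiB of a texture after compression at `ratio`:1."""
    raw_bytes = width * height * bytes_per_pixel
    return raw_bytes / ratio / (1024 ** 2)

raw = compressed_size_mib(4096, 4096, 4, 1)    # 64.0 MiB uncompressed
bc  = compressed_size_mib(4096, 4096, 4, 8)    # 8.0 MiB at 8x
ntc = compressed_size_mib(4096, 4096, 4, 16)   # 4.0 MiB at 16x

print(f"raw: {raw:.1f} MiB, 8x: {bc:.1f} MiB, 16x: {ntc:.1f} MiB")
```

Put another way: at an unchanged VRAM budget, moving from 8x to 16x leaves room for roughly twice the texel data, which is where the quality headroom comes from.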

This tech should tie into Huang's discussion of enhanced object fidelity through DLSS. In-game objects are essentially textures wrapped around 3D geometry, so better texture compression translates directly into better object quality.

The more intriguing aspect of Huang's vision for DLSS is in-game asset generation. Nvidia's existing DLSS 3 frame generation tech creates frames in between conventionally rendered frames to boost performance. Asset generation goes a step further, with in-game assets created entirely from scratch by DLSS. (DLSS would still need to be told where assets belong in the game world and what assets to render, but the assets themselves would be generated from scratch.)

Huang also discussed the future of DLSS surrounding NPCs. Not only does Huang expect DLSS to generate in-game assets, but he also envisions DLSS generating NPCs. He gave an example of six people existing in a video game; two of the six are real characters, while the other four are generated entirely by AI.

It is a callback to Nvidia ACE, which was demoed in 2023. ACE is an in-game LLM designed to bring NPCs to life, giving them unique dialogue and responses when players interact with them. Nvidia believes ACE (or some future form of it) will play a vital role in PC gaming and become an integral part of DLSS.

This isn't the first time we've heard about DLSS's future capabilities. The tech giant has said it expects future PC games to be rendered entirely through AI, replacing classic 3D graphics rendering. In the near term, generating specific in-game assets is a step toward the AI-generated future Nvidia envisions.

Aaron Klotz

Freelance News Writer

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.

Comments from the forums

  • Metal Messiah.
    Nvidia's next-gen DLSS may leverage AI

    DLSS has always leveraged AI by the way. So word the title accordingly.

    "Nvidia's next-gen DLSS may leverage AI to generate in-game assets, objects and NPCs from scratch".

    But anyway, this is the actual Q&A snippet. Huang still was not very clear on whether this tech will be included in a next-gen version of DLSS, or whether it will be a separate AI tool for gaming.

    If used in DLSS, then we could be looking at a future version 4 or 5 here. *speculation*

    Q: AI has been used in games for a while now, I’m thinking DLSS and now ACE. Do you think it’s possible to apply multimodality AIs to generate frames?
    A: "AI for gaming - we already use it for neural graphics, and we can generate pixels based off of few input pixels. We also generate frames between frames - not interpolation, but generation. In the future we’ll even generate textures and objects, and the objects can be of lower quality and we can make them look better.

    We’ll also generate characters in the games - think of a group of six people, two may be real, and the others may be long-term use AIs. The games will be made with AI, they’ll have AI inside, and you’ll even have the PC become AI using G-Assist. You can use the PC as an AI assistant to help you game. GeForce is the biggest gaming brand in the world, we only see it growing, and a lot of them have AI in some capacity. We can’t wait to let more people have it."

    Though, I'm more inclined towards the Neural Texture Compression (NTC) solution being used here as well.

    https://research.nvidia.com/labs/rtr/neural_texture_compression/assets/ntc_medium_size.pdf

  • Metal Messiah.

    Somewhat related.

    https://nvidianews.nvidia.com/news/new-nvidia-research-creates-interactive-worlds-with-ai
    View: https://www.youtube.com/watch?v=ayPqjPekn7g&t=92s

  • CmdrShepard

    All these decades of steady improvements until we reached almost fully photo-realistic rendering in games, all those gigabytes of textures, highly detailed 3D models, accurate mocap and lipsync... and now we are throwing all that out for some fake AI hallucinated frames?

    Let me be the first to say -- NO THANKS.

    That video above looks horrible to me, and any new games using these new AI gimmicks for "reducing load on CUDA cores" (which I've been dearly paying for, generation after generation, ever since the 8800 GTX) will be on my hard pass list.

    I am not against use of AI for improving NPC personas (would be great for RPGs), but I don't want fake visual crap.

  • ivan_vy

    Looks like a fever dream. Won't it compromise the creators' vision? Like AI photo-colorization, it looks great but sometimes it chooses the wrong color.
    I'm more for it for content creation and asset compression, but for rendering ...mmm... I think it needs a few more generations.

  • bit_user

    Metal Messiah. said:

    DLSS has always leveraged AI by the way. So word the title accordingly.

    ...to the extent that people use AI and Deep Learning interchangeably, yes. I had the same thought.

    Metal Messiah. said:

    But anyway, this is the actual Q&A snippet. Huang still was not very clear on whether this tech will be included in a next-gen version of DLSS, or whether it will be a separate AI tool for gaming.

    It sounds to me like something fundamentally different than DLSS.

    Metal Messiah. said:

    Though, I'm more inclined towards the Neural Texture Compression (NTC) solution being used here as well.

    That paper didn't sound terribly practical, IMO. Texture lookups are higher-frequency than the rate at which DLSS interpolates pixels, so I don't know if it's a big win to put a lot more computation in that phase. You also need to make the model small enough that it's not going to generate more memory traffic than it saves by increasing texture compression ratios.

    That gets at a broader concern I have around this AI-generated content, which is the size of the models needed to generate convincing assets. These seem like they'd chew up a lot of memory and hardware bandwidth, if they're being run mid-gameplay (i.e. as opposed to being limited to level loading).

    Either way, I think it's not right around the corner, but maybe something that starts to happen in 3-4 years.

  • bit_user

    ivan_vy said:

    looks like a fever dream, won't it compromise the creators' vision?

    Yeah, it will need to provide creators with enough control, but I guess big game publishers are known to be cheap. So, even if it doesn't have quite the degree of control they'd like, I'm not sure that'll keep it from being adopted by some.

    In terms of realism, I believe it will need to be competitive with manually crafted assets.

  • Ogotai

    so nvidia wants to create more fake stuff, like the fake frames of DLSS 3 ?

  • valthuer

    Ogotai said:

    so nvidia wants to create more fake stuff, like the fake frames of DLSS 3 ?

    Oh, please. What is real, anyway? After all, we're talking about virtual environments, for God's sake.

    You're living in a world with Anisotropic Filtering reducing texture pixel counts, heterogeneous deferred shading reducing lighting pixel counts, Z-culling reducing rendered pixel counts, MSAA reducing rendered pixel counts (over SSAA), TSAA and other shader-based AA techniques reducing pixel counts (over MSAA), anisotropic pixels reducing pixel counts (e.g. Wipeout using variable pixel widths to raise and lower per-frame render loads to maintain 60FPS in varying environments), Variable Rate Shading reducing pixel counts dependent on screen content, screen-space reflections reducing rendered pixel counts by just duplicating rendered pixels, probe reflections reducing rendered pixels by just copying from a texture, and so on.

    Game engine optimisation is all about finding places where you can outright avoid doing work wherever possible. It's 'faking' all the way down.

    It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

    If you have a good upscaling and sharpening model that looks better than native plus TAA, or at least close enough to be equivalent, then what's the problem? Especially if it boosts performance by 30–50 percent?

  • bit_user

    valthuer said:

    It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

    If you have a good upscaling and sharpening model that looks better than native plus TAA, or at least close enough to be equivalent, then what's the problem? Especially if it boosts performance by 30–50 percent?

    You're singing my tune!

    I maintain that every pixel at 4k is not precious. Most 4k monitors are too small for that resolution to really add much value to the gaming experience, yet a lot of people are moving that way on the resolution scale (often probably for non-gaming reasons). So it makes sense to use more approximations, interpolations, etc. to fill in those extra details.

    More to the point: the proof of the pudding is in the eating. If the end user finds technologies like DLSS 3 yield a better experience than going without, they'll use them. And what's wrong with that? I use motion interpolation on my TV, in spite of the occasional artifact, because the overall image quality is a lot better.

  • thestryker

    valthuer said:

    It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

    DLSS 3 isn't upscaling, it's frame generation, which is where the "fake frames" commentary comes from.

    I do think there's a lot of value to be had with frame generation technologies, but it's being pitched all wrong. For a good implementation it can make games at high detail look really good so long as your minimum frame rate is good enough. It can't make up for poor performance due to input lag, but it can make something that can run 120 FPS natively even better.

FAQs

How does Nvidia DLSS work?

DLSS uses the power of NVIDIA's supercomputers to train and regularly improve its AI model. The latest models are delivered to your GeForce RTX PC through Game Ready Drivers. Tensor Cores then use their teraFLOPS of dedicated AI horsepower to run the DLSS AI network in real time.

How does DLSS upscaling work?

It improves game performance by rendering games at a resolution below a display's native resolution, then using AI to upscale the result. Cyberpunk 2077 is an ideal example. If you select 4K resolution and choose DLSS Quality mode, the game will render at 1440p resolution. DLSS upscales the result to 4K.
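The Cyberpunk 2077 example follows from DLSS's per-axis scale factors. The factors below are the commonly reported values for each preset, not an official Nvidia API, so treat this as an illustrative sketch:

```python
# Illustrative sketch of DLSS render-resolution math. The per-axis
# scale factors below are commonly reported values for each preset,
# not an official Nvidia API.

DLSS_SCALE = {
    "Quality": 2 / 3,          # ~66.7% per axis
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal render resolution before DLSS upscales to (out_w, out_h)."""
    scale = DLSS_SCALE[mode]
    return round(out_w * scale), round(out_h * scale)

# A 4K output in Quality mode renders internally at 1440p:
print(render_resolution(3840, 2160, "Quality"))      # (2560, 1440)
print(render_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```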

Is DLSS actually AI?

DLSS is a form of machine learning that uses an AI model to analyze in-game frames and construct new ones—either at higher resolution or in addition to the existing frames. It uses Supersampling to sample a frame at a lower resolution, then uses that sample to construct a higher resolution frame.

Does DLSS actually increase FPS?

Does NVIDIA DLSS Improve FPS? In a word, absolutely. With NVIDIA DLSS, gamers aren't tethered to native 4K hoping to achieve 50-60 fps. They can render at resolutions like 1080p or 1440p and let DLSS reconstruct the visual data.

Is DLSS ray tracing?

DLSS 3.5 uses a new technology called Ray Reconstruction, which Nvidia describes as an AI-powered technique that enhances the quality of ray tracing. Ray Reconstruction swaps “hand-tuned denoisers” with an AI network trained by a supercomputer that should result in higher-quality pixels between sampled rays.

Does DLSS benefit 1080p?

Keep your resolution in mind

As mentioned, DLSS Super Resolution works best at a high output resolution. It looks and performs best if you're outputting at 1440p or 4K, and the benefits diminish at a lower resolution like 1080p.

Is it good to turn on Nvidia DLSS?

There are definitely cases where DLSS can make a game look better. That's the idea behind DLAA in the first place — the AI-assisted anti-aliasing in DLSS looks fantastic. In games where you're already getting good performance, try turning on DLSS to Quality mode and see how it looks.

Does DLSS work automatically?

How to enable Nvidia DLSS in a video game. Unfortunately, you don't automatically get the benefits of DLSS with most games. Instead, you must enable DLSS in each game through its settings menu. First, launch a game that supports DLSS.

Does DLSS reduce GPU usage?

In some games, enabling DLSS just lowers GPU usage and does not affect the framerate. For example, when a game is paused during a race, GPU usage can jump to 99% and the frame rate can exceed 120 FPS.

Does NVIDIA DLSS work on every game?

Does NVIDIA® DLSS support all games? No, not all games currently support NVIDIA® DLSS. However, the list of supported games is continuously growing as more developers integrate DLSS into their titles.
