NVIDIA’s Morgan McGuire: “First triple-A game to require a ray tracing GPU will be released in 2023”

"every gaming platform" Yea right... Nintendo will never do ray tracing.
Clawedge:

Required? If it's required for the story or serves a purpose for storytelling, then I understand. If it's for nothing more than reflections, then it's not welcome. Just my opinion.
It would be required because it's much easier for devs to set up the lighting for a game with ray tracing. Once the hardware is good enough that they can use it across the board, they will stop using the older methods for lighting effects altogether to make things easier.
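To illustrate that point, here is a minimal sketch with hypothetical names (not code from any real engine): with baked lighting the engine has to precompute and store light data in an offline pass that must be redone whenever the level or the lights change, while a ray-traced path just queries the scene at runtime. The runtime query is the part RT hardware accelerates; the offline bake is the authoring overhead devs would like to drop.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Baked approach: sample a precomputed lightmap texel. The lightmap is built in an
// offline pass and is only as fresh as the last bake.
float bakedLighting(const float* lightmap, int width, float u, float v) {
    int x = static_cast<int>(u * (width - 1));
    int y = static_cast<int>(v * (width - 1));
    return lightmap[y * width + x];
}

// Ray-traced approach: cast a shadow ray toward the light at runtime. A single
// sphere stands in for the scene here; the BVH traversal that RT cores accelerate
// answers exactly this kind of ray/primitive intersection query. Hit distance is
// ignored for brevity.
bool shadowRayHit(Vec3 origin, Vec3 dir, Vec3 sphereCenter, float radius) {
    Vec3  oc = sub(origin, sphereCenter);
    float b  = 2.0f * dot(oc, dir);
    float c  = dot(oc, oc) - radius * radius;
    return b * b - 4.0f * c >= 0.0f;   // discriminant test
}

float rayTracedLighting(Vec3 point, Vec3 lightDir, float intensity,
                        Vec3 blocker, float blockerRadius) {
    return shadowRayHit(point, lightDir, blocker, blockerRadius) ? 0.0f : intensity;
}

int main() {
    float lightmap[4] = {0.2f, 0.8f, 0.5f, 1.0f};          // a 2x2 "baked" result
    std::printf("baked:  %.2f\n", bakedLighting(lightmap, 2, 0.9f, 0.9f));
    std::printf("traced: %.2f\n",
                rayTracedLighting({0, 0, 0}, {0, 1, 0}, 1.0f, {0, 5, 0}, 1.0f));
    return 0;
}
```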
So, just like Halo 2 "required" DX10 + Windows Vista and Quantum Break "required" DX12 + Windows 10. It will be another artificially created problem to push hardware and OS sales.
Watch people Mod it out :P
I remember when I watched the first RTX demo, all I could think was where are all the fingerprints? I can't wait for 2030 when we finally get more realistically dull worlds like reality.
So an NVIDIA worker said that the 20** series is useless and that the 16** series is sufficient 🙂 NVIDIA shot itself in the foot for the mainstream consumer market... Also, since the 16** series doesn't have a high-end GPU (does that mean they will do a 1670 or 1680? lol), the RX 5*00 is the best choice in performance per price; that post is a godsend for AMD.
rl66:

So an NVIDIA worker said that the 20** series is useless and that the 16** series is sufficient 🙂 NVIDIA shot itself in the foot for the mainstream consumer market... Also, since the 16** series doesn't have a high-end GPU (does that mean they will do a 1670 or 1680? lol), the RX 5*00 is the best choice in performance per price; that post is a godsend for AMD.
That's not what happened; you're viewing the post through your own tainted perception.
Clawedge:

Required? If it's required for the story or serves a purpose for storytelling, then I understand. If it's for nothing more than reflections, then it's not welcome.
This is why NVidia (or current games) sort of shot themselves in the foot... The problem is that the hardware is not fast enough to do proper real-time ray-traced lighting + shadows + reflections + refraction + occlusion across ALL pixels in a scene. So the game devs simply cut out most of the best / slowest parts of ray tracing and JUST did the easiest: reflections in small patches of reflective surfaces. Now the uninformed believe that "reflections in puddles" is all ray tracing adds to games. When the hardware can do lighting + shadows + reflections + refraction + occlusion across ALL pixels at >60 FPS, then people will understand. The other problem is that current rasterized rendering solutions are very good at simulating a lot of this (such as pre-baked maps/lighting/shadows), so it can be difficult to tell the faked from the real thing even if you know what you are looking for. And many people can't see the difference in a lot of scenes.
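A rough sketch of the hybrid compromise described above, assuming a made-up per-pixel shading routine (the names here are illustrative, not from any real engine): rays are only spent on the small subset of pixels flagged as reflective, and everything else falls back to the usual rasterized approximations such as cubemaps and baked shadows.

```cpp
#include <cstdio>
#include <vector>

struct Pixel {
    bool  reflective;     // e.g. a puddle or a polished floor
    float rasterColor;    // what the normal raster pipeline already produced
};

float traceReflectionRay(const Pixel&) { return 0.9f; }  // stand-in for the RT pass
float cubemapFallback(const Pixel& p)  { return p.rasterColor; }

// Per-pixel decision: full ray tracing of lighting, shadows, reflection,
// refraction and occlusion everywhere would be the ideal, but the current
// ray budget only covers the reflective subset of the frame.
float shade(const Pixel& p, bool rtEnabled) {
    if (rtEnabled && p.reflective)
        return traceReflectionRay(p);
    return cubemapFallback(p);
}

int main() {
    std::vector<Pixel> frame = {{false, 0.4f}, {true, 0.4f}};
    for (const Pixel& p : frame)
        std::printf("%.2f\n", shade(p, /*rtEnabled=*/true));
    return 0;
}
```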
HWgeek:

So Dr. Lisa Su got it right? There was no reason to make an RTRT GPU in 2018 🙂. Also, if AMD announces a 64 CU Navi in the coming month that challenges the RTX 2080 Ti at ~$800, will AMD force Nvidia to make a big GTX card again, without RTX, so they can compete on pricing?
It depends on the cost of adding RT and how much of an advantage having it in your hardware 4 years in advance gives you. They could make a big "GTX" card with RT but drop the tensor cores (DLSS), since DLSS is currently the only thing the Tensors are used for.
Denial:

It depends on the cost of adding RT and how much of an advantage having it in your hardware 4 years in advance gives you. They could make a big "GTX" card with RT but drop the tensor cores (DLSS), since DLSS is currently the only thing the Tensors are used for.
They really ought to make a big GTX card though - a lot more people would be interested in that than in an overpriced RTX card.
Dragam1337:

They really ought to make a big GTX card though - a lot more people would be interested in that than in an overpriced RTX card.
They aren't going to cut parts out of their top-end products just for dinosaurs who want to stay on the old stuff. They didn't do it with tessellation or SM3.0.
Astyanax:

They aren't going to cut parts out of their top-end products just for dinosaurs who want to stay on the old stuff. They didn't do it with tessellation or SM3.0.
Tessellation didn't require additional hardware. SM3.0 helped improve performance. DXR requires additional hardware and significantly hampers performance. People want better performance, not worse, which is why barely anyone uses DXR even if they have an RTX GPU, and why GTX cards without RT cores etc. would sell better.
Dragam1337:

Tessellation didn't require additional hardware.
? It definitely does. I don't know if it's a good example compared to DXR - I think DXR is a more complicated feature to measure the "impact" of, but both require additional hardware.
GeForce GTX 400 GPUs are built with up to fifteen tessellation units, each with dedicated hardware for vertex fetch, tessellation, and coordinate transformations. They operate with four parallel raster engines which transform newly tessellated triangles into a fine stream of pixels for shading. The result is a breakthrough in tessellation performance—over 1.6 billion triangles per second in sustained performance. Compared to the fastest competing product, the GeForce GTX 480 is up to 7.8x faster as measured by the independent website Bjorn3D.
Denial:

? It definitely does. I don't know if it's a good example compared to DXR - I think DXR is a more complicated feature to measure the "impact" of, but both require additional hardware.
Right you are. Regardless, it didn't significantly add to the chip complexity and cost, unlike DXR.
airbud7:

a 1680ti would be cool though.
I'd buy a GTX 2080 Ti at $800 in a heartbeat. But not at the $1400 prices being charged for the RTX 2080 Ti due to useless features I will never use...
Dragam1337:

Tessellation didn't require additional hardware. SM3.0 helped improve performance. DXR requires additional hardware and significantly hampers performance. People want better performance, not worse, which is why barely anyone uses DXR even if they have an RTX GPU, and why GTX cards without RT cores etc. would sell better.
Then you would have the performance of a 1070 Ti, 1080, and 1080 Ti, which would make no sense and would instead just flood the market more.
Dragam1337:

Right you are. Regardless, it didn't significantly add to the chip complexity and cost, unlike DXR.
I don't think we know how much complexity DXR actually requires. RTX cards have Tensors, which aren't required for RT. Strip the tensors off and how much bigger is an RT-enabled GTX chip? Is it possible to make that core smaller and more efficient, or to implement DXR in a better way? For example, AMD's implementation looks like part of the fetch in its variant of the RT core is going to be done in the texture units, which should make the hardware footprint of its "RT core" smaller. It's too early to tell.
vbetts:

Then you would have the performance of a 1070 Ti, 1080, and 1080 Ti, which would make no sense and would instead just flood the market more.
I mean, you'd get the performance of a 2080 Ti/2080 etc. - I think people falsely assume you'd get more performance - but you won't, because you are TDP-limited. You would theoretically get cheaper prices though, if Nvidia passed the savings on to consumers.
vbetts:

Then you would have the performance of a 1070 Ti, 1080, and 1080 Ti, which would make no sense and would instead just flood the market more.
No? I would prefer 2080 Ti performance (preferably much higher) at more reasonable prices, which would be possible without RT cores etc.