NVIDIA releases some RTX 2080 performance numbers and some info on DLSS

Paulo Narciso:

Quantum Break used some upscaling technique from 720p to whatever resolution the monitor was running, but it looked like crap compared to native res. Also, consoles have been doing it for years now.
That is true... upscaling is cancer, regardless of how it is done. Native res is the bare minimum I will accept, but downsampling is preferable 😉
Fox2232:

And about your setup? Keep it. This is all new, and there will be low-hanging fruit to harvest soon. 7nm will really make a difference. The moment you see it at native resolution, you'll be able to tell. TAA reduces resolution too, and it looks like it's upscaled from a lower resolution, but it completely removes shimmering. That's worth the sacrifice in many games.
Yeah, of course such a big node shrink will make a big difference, but you can always play the waiting game. I've had my GPUs for over 2 years, and I want new ones 😉 Besides, I reckon it will be at least a year before Nvidia comes out with a new series on 7nm.
Nvidia's launch has created so much uncertainty and negative chatter across the web that it would do them good to lift the NDA ASAP. Really, dragging this out till judgment day (20/9?) is ludicrous. What the hell are they trying to keep secret till then? AMD using the numbers to tweak their next release to be slightly ahead? Lifting the NDA now versus the few/several months until AMD's release is not going to make an iota of difference.
If you truly want to know how these cards will stack up against your 1080/1080 Ti, google Titan V reviews (the $3000 monster). At 2560x1440 it's about 22% faster than a 1080 Ti; at 3840x2160 it's about 34% faster than a 1080 Ti. The Titan V uses HBM2, has 653 GB/s of memory bandwidth, 5120 CUDA cores and 640 Tensor cores. It's a good reference as to where these new RTX cards will stand. I'm going out on a limb here, and without mature drivers and such, this is what should be expected: at 2560x1440, the 2080 Ti should be in the ballpark of 16-17% faster on average than a 1080 Ti, and at 3840x2160 in the ballpark of 28-29% faster on average. Some games will perform much better than others, but these are averages. Check around for the games you are most interested in playing and research accordingly.
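To put those ballpark figures in concrete terms, here's a minimal sketch that applies them to a hypothetical 1080 Ti frame rate (the 60 fps baseline is just an invented placeholder; only the uplift ranges come from the estimate above):

```python
# Apply the estimated 2080 Ti uplifts to a hypothetical 1080 Ti frame rate.
# The 60 fps baseline is a placeholder, not a measured number.

baseline_1080ti_fps = 60.0

estimates = {
    "2560x1440 (+16-17%)": (1.16, 1.17),
    "3840x2160 (+28-29%)": (1.28, 1.29),
}

for resolution, (low, high) in estimates.items():
    print(f"{resolution}: roughly {baseline_1080ti_fps * low:.1f}"
          f"-{baseline_1080ti_fps * high:.1f} fps expected on a 2080 Ti")
```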
Noisiv:

Being huge means it will be easier to cool. Heat dissipation ~ total area.
Yeah, but voltage could introduce erratic behaviour in any chip, though I realize the power delivery looks pretty good on the reference cards. Voltage can jump around the chip, which in turn is harder to cool, and with its footprint this die would be a prime candidate for that to happen. So don't ruin my fantasies, lol. Not for nothing, but no one seemed to bring it up: we may have new OC "sliders" for the new on-chip hardware... who knows, right? That said, they'd better get ali his cards, because we aren't doing crap without him.
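On the quoted "heat dissipation ~ total area" point, here's a rough back-of-the-envelope sketch. The die sizes (~754 mm² for the big Turing die, ~471 mm² for GP102) and the ~250 W board power are assumptions based on publicly reported figures, used only to illustrate that the same power spread over a larger die means lower power density:

```python
# Back-of-the-envelope power density comparison illustrating
# "heat dissipation ~ total area". Die sizes and board power are
# assumed (roughly the publicly reported figures), not measurements.

dies_mm2 = {
    "TU102 (2080 Ti)": 754.0,   # assumed die area in mm^2
    "GP102 (1080 Ti)": 471.0,   # assumed die area in mm^2
}
board_power_w = 250.0           # similar TDP assumed for both cards

for name, area in dies_mm2.items():
    print(f"{name}: {board_power_w / area:.2f} W/mm^2 to dissipate")
```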
Well, I will wait until all the reviews are out to make a judgment on this new series of video cards. All I have heard so far is ray tracing this and ray tracing that... 😕 But what about actual game performance in today's games? That's the answer I want to hear and see, with facts from the reviews 😉 I need to see real numbers: 1080 Ti versus 2080 Ti, 1080 versus 2080 and 1070 versus 2070. Awaiting Hilbert's review patiently 😀
Hopefully Hilbert puts a 6th or 7th gen bench rig together; I don't want to see the results of this being held back by that 5960X he's been using.
Battlefieldprin:

Of all the Nvidia series I have seen, this is a strange one, since the 2080 will be equal to or even slower than the 1080 Ti. In the past, each generation used to bring twice the performance of the previous one!
Only someone who doesn't remember the past, or cherry-picks one specific generational boost, will state what you have stated. The 1080 Ti is only 60% faster than the 980 Ti, not twice the performance. The 980 Ti is only 40% faster than the 780 Ti, not twice the performance. The 780 Ti is only 60% faster than the 680, not twice the performance. The 680 is only 50% faster than the 580, not twice the performance. The 580 is only 15% faster than the 480, not twice the performance. Shall the list go on? You can do the different tiers as well and you'll find similar performance differences (obviously these are generalized figures and you will certainly find areas with bigger differences).
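As a quick sanity check on those figures: compound the quoted generation-to-generation gains and you already end up with a huge cumulative uplift without any single generation doubling. A minimal sketch using only the percentages quoted above:

```python
# Compound the quoted generation-to-generation gains (480 through 1080 Ti).
# No single step is a doubling, yet the cumulative uplift is roughly 6x.

gains = [
    ("480 -> 580", 0.15),
    ("580 -> 680", 0.50),
    ("680 -> 780 Ti", 0.60),
    ("780 Ti -> 980 Ti", 0.40),
    ("980 Ti -> 1080 Ti", 0.60),
]

cumulative = 1.0
for step, gain in gains:
    cumulative *= 1.0 + gain
    print(f"{step}: +{gain:.0%}, cumulative {cumulative:.2f}x over the 480")
```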
Solfaur:

While I'm taking that chart with a grain of salt, I'm looking forward to seeing DLSS way more than ray tracing.
I'm interested in its performance and how it looks, i.e. whether it blurs the scene as much as TAA.
It really does not matter, performance-wise, how well these cards do versus the Pascal-based versions. There literally is no competition from AMD for yet another year: https://www.techpowerup.com/247006/amd-7nm-vega-by-december-not-a-die-shrink-of-vega-10. AMD has no plans for this segment until Navi. By the time Navi lands in 2019, they will still only be targeting the mid-range. Navi is just going to be Polaris all over again. I am happy to have finally switched over, graphics-wise. The only hope for competition is going to come from Intel, and they are not planning anything till 2020. So Nvidia can afford to piss us off... there is nowhere else to go. Prices be damned.
Solareus Prime:

Ray Tracing might be obsolete by the time new graphics engines are produced.
I'm thinking you don't know what ray tracing is, with a comment like that... There's nothing beyond ray tracing that we are even close to being graphically capable of yet...
Battlefieldprin:

Of all the Nvidia series I have seen, this is a strange one, since the 2080 will be equal to or even slower than the 1080 Ti. In the past, each generation used to bring twice the performance of the previous one!
Aura89:

Only someone who doesn't remember the past, or cherry-picks one specific generational boost, will state what you have stated. The 1080 Ti is only 60% faster than the 980 Ti, not twice the performance. The 980 Ti is only 40% faster than the 780 Ti, not twice the performance. The 780 Ti is only 60% faster than the 680, not twice the performance. The 680 is only 50% faster than the 580, not twice the performance. The 580 is only 15% faster than the 480, not twice the performance. Shall the list go on? You can do the different tiers as well and you'll find similar performance differences (obviously these are generalized figures and you will certainly find areas with bigger differences).
This series of articles tries to define the differences with more recent cards; the GPUs go as far back as are still usable for today's games, 780 onward.
https://www.hardocp.com/article/2018/07/25/nvidia_gpu_generational_performance_part_1/
https://www.hardocp.com/article/2018/08/07/nvidia_gpu_generational_performance_part_2/
https://www.hardocp.com/article/2018/08/16/nvidia_gpu_generational_performance_part_3/
The 980 Ti is 30-40% faster than the 780 Ti, and the 1080 Ti is 70%+ faster than the 980 Ti. Bear in mind that many analyses of the 980 Ti vs the 1080 Ti showed the 1080 Ti was close to 980 Tis in SLI; that's where the "double" figure arose. I don't recall any mention that the 1080 Ti itself was twice as fast.
alanm:

Game devs at the Unreal Engine forum are discussing RT development in gaming (it seems many are still uncertain how it will be implemented): https://forums.unrealengine.com/development-discussion/rendering/1517518-the-rtx-2080-realtime-ray-tracing-hype And a leaked perf chart showing shading performance, Turing vs Pascal; the shading cores are apparently far more efficient: https://cdn.videocardz.com/1/2018/08/NVIDIA-Turing-vs-Pascal-Shader-Performance-1600x819.jpg P.S. NDA lifts Sept 14
I get a 1011 error on the second link.
Thank you
Why does it say "2080: 2x 1080" at the top of the chart? Are they comparing two 1080s in SLI vs a single 2080, or is that marketing for 2x performance in certain situations/games? Edit: guess I can just assume the latter...
Fox2232:

Spoiler: "I think it is this one:"
alanm:

And a leaked perf chart showing shading performance, Turing vs Pascal; the shading cores are apparently far more efficient: https://cdn.videocardz.com/1/2018/08/NVIDIA-Turing-vs-Pascal-Shader-Performance-1600x819.jpg P.S. NDA lifts Sept 14
(Initially I tried quoting @Fox2232's post with the spoiler pic in it, but it wouldn't show up in my post for some reason; that's the chart I'm referring to, and it is the same one that alanm linked, though the link is broken.) That tallies roughly with the 40-60% increase (like-for-like settings, non-DLSS) that can be inferred from the graph released on the NVidia blog (and shown as the topic of this Guru3D article). I'll just re-link the article for clarity on which one I'm talking about, as there are a lot of Turing threads at the moment: https://www.guru3d.com/news-story/nvidia-releases-performance-metrics-and-some-info-on-dlss.html Given that it tallies roughly with the info NVidia has already released, this could either support the validity of the leak or, on the other hand, imply that it was fabricated using that same NVidia info as a basis. Ha, we might not be any closer to the truth!
Still having to wait for independent reviews means it is still difficult to judge correctly whether this will be worth upgrading to. Even with Nvidia's numbers, we currently have 1080s going for just over £400 compared to over £700 for the 2080, and 1080 Tis for just over £600 compared to £1000 for the 2080 Ti... so a big performance jump, if true, matches a big price jump. Those tensor cores you pay for, which are normally not used for gaming, now seem to have a role to play in assisting performance by taking over AA duties. How many games will use it is of course another unknown. At least things will be interesting for the next few weeks 🙂 For me, with a 970, I'm very interested in how the 2070 does. Nvidia states 499 EUR, which is slightly more than the £400+ that 1080s go for, so for me the 2070's performance against the 1080, and what the AIB card prices will be, is what matters. I guess there are a lot of people who are stringing out their 970 and missed the 10xx series who will be interested too. Finally, we need to see how well they overclock, as most 1080/1080 Tis are overclocked as standard.
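To make the price-versus-performance point concrete, here's a small sketch comparing the price jump implied by those street prices with the 40-60% uplift range inferred elsewhere in the thread from Nvidia's own chart (the performance range is an assumption until independent reviews land; the prices are the ones quoted above):

```python
# Compare the price jump implied by the quoted UK street prices with the
# 40-60% non-DLSS uplift range inferred from NVIDIA's own chart. The
# performance side is an assumption until independent reviews are out.

upgrades = {
    "1080 -> 2080": (400, 700),          # quoted street prices in GBP
    "1080 Ti -> 2080 Ti": (600, 1000),
}
assumed_perf_gain = (0.40, 0.60)         # assumed, from NVIDIA's own chart

for name, (old_price, new_price) in upgrades.items():
    price_jump = new_price / old_price - 1.0
    low, high = assumed_perf_gain
    print(f"{name}: price +{price_jump:.0%}, "
          f"claimed performance +{low:.0%} to +{high:.0%}")
```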
sverek:

Let me know if I am wrong. DLSS = Deep Learning Super Sampling. In the press conference, Jensen Huang mentioned that to enable it, machine learning has to be performed first. I guess the machine-learning results will then be passed to the RTX card via a driver update or game files. So without the machine learning performed by Nvidia's massive GPU servers, DLSS is useless. You can see that even games that partnered with Nvidia don't seem to support DLSS right away: https://www.anandtech.com/show/13266/nvidia-teases-geforce-rtx-2080-performance-numbers-announces-ansel-rt So, from what I understand, it's up to Nvidia to cherry-pick which titles they want to support with DLSS, perform the deep learning for them, and only then will DLSS be available. Of course, if a developer has enough money and a license to perform the deep learning themselves, it could be more viable. However, I don't think non-AAA titles are eager to rush to support DLSS.
This is about what I understood of it too, yes. It's a great idea, but it's never as "good" (meaning, as precisely calculated) as real AA / downsampling (it's an approximative algorithm after all), and it requires (costly) server computation time to do it. I guess it's either something Nvidia offers to devs, or something devs have to pay for to get for their games. I also wondered whether this feature will live in the driver or in GFE, which would mean they're trying to pull people in more again.
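Put in pseudo-workflow terms, the dependency described above looks roughly like the sketch below. This is purely illustrative: there is no public DLSS API, the function names and titles are invented, and the "network" is a stand-in for whatever Nvidia actually trains and ships.

```python
# Purely illustrative sketch of the workflow described above -- not a real
# API. Per-title networks are trained offline on NVIDIA's servers and shipped
# via driver or game updates; the user's RTX card only runs inference.

shipped_networks = {}   # stand-in for per-title weights delivered by NVIDIA

def offline_training(title):
    """Happens on NVIDIA's GPU servers, not on the user's machine."""
    shipped_networks[title] = f"weights-for-{title}"   # placeholder weights

def render_with_dlss(title, frame):
    """Runs locally on the RTX card: inference against the shipped network."""
    weights = shipped_networks.get(title)
    if weights is None:
        return frame                      # no network yet -> DLSS unavailable
    return f"{frame} upscaled using {weights}"

offline_training("Some partnered AAA title")             # NVIDIA's side
print(render_with_dlss("Some partnered AAA title", "low-res frame"))
print(render_with_dlss("Unsupported indie title", "low-res frame"))
```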
Offloading anti-aliasing workloads to the tensor cores via DLSS sounds like something that could be a game changer for those of us who like to have high-quality anti-aliasing in our games. I hope it is comparable to 4x RGSS or 4x SGSSAA in quality, with a minimal performance impact, and easily forceable in any game.
All I wanna see is 1080Ti vs 2080. Then I can decide 😀