PNY GeForce RTX 2080 and 2080 Ti Product Data Sheet Slips Out

https://forums.guru3d.com/data/avatars/m/270/270041.jpg
alanm:

What other rumors? The 2080 will NOT be weaker than a 1080 Ti; there is no historical precedent. AdoredTV (who seems to have a solid inside source), and the first on the internet to let us know of the RTX 2080, mentioned that the 2080 should be around 8% stronger than the 1080 Ti. That in itself is very disappointing. It may not be accurate, who knows, but it will NOT be weaker than last gen's Ti. That would be a monumental failure on Nvidia's part and they would get eaten alive by review sites. Secondly, they would have an uphill battle trying to peddle this 'junk hardware' to gamers at the 'rumored' high prices. Please, people, use some common sense when tossing out these so-called 'rumors'. I don't even think they are rumors, just silly forum chatter here and there.
https://www.guru3d.com/articles-pages/geforce-rtx-2080-and-2080-ti-an-overview-thus-far,1.html I mean, if you actually use this forum and read the pages, scroll down a bit and you'll see the 2080 has a rumoured compute limit of 11 TFLOPS, whilst the 1080 Ti has a maximum compute limit of 11.5 TFLOPS; as also stated, it's a bit of speculation on a few data points. Please actually read articles posted by HH before you run your mouth. Like you, I hope it isn't true, but it is possible. Yes, historically it's never happened... but there is a first time for everything, and if what we know is true, then in theory it might be a tad weaker.
https://forums.guru3d.com/data/avatars/m/271/271560.jpg
Well, it seems like Nvidia decided to play "tick-tock"... architecture now, process shrink later. With the efficiency of their design, that is a viable play, more for shareholders than consumers. This buys them time to make sure all is hunky-dory before a 7nm process. But... this leaves an 18-month window for AMD. All they have to do is make sure at least one new GPU scores as well as a 1080 Ti (for less money)... with FreeSync, that's a slam dunk for sales. We will see in January.
https://forums.guru3d.com/data/avatars/m/196/196426.jpg
Ricepudding:

2080 has a rumoured compute limit of 11 TFLOPS, whilst the 1080 Ti has a maximum compute limit of 11.5 TFLOPS
That doesn't mean much. Vega 64 (Water) has 13.7 TFlops and we already know that it can -barely- compete with the 1080 (non-Ti), that is, the Founders Edition, non-overclocked, which has only 8.9 TFlops. Comparing gaming performance of two different architectures (and Turing is a completely new one, according to Nvidia) based solely on TFlops is just... silly. Also, the 11, where is that coming from? The shader count difference is exactly 15%. Since clocks are similar, if the architecture isn't changed (much), it would result in 10.2 TFlops, not 11! Hey, I'm an Nvidia fan, typing right now on a 1080, looking at a 100Hz G-SYNC 34" monitor, with a GameWorks-enabled game minimized in the taskbar... but that doesn't mean I should blindly believe that the new gen is going to be a miracle, soundly beating everything before it. Actually, the numbers that have already popped up show that it will be just a small incremental upgrade, no more than 10-20%, compared to the chips that are being replaced. I guess we shall find out in 2-3 days what the truth is.
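The peak-FP32 arithmetic behind those 8.9 and 10.2 TFlops figures can be sketched quickly; the shader counts and clocks below are the ones quoted in this thread, so treat them as rumoured inputs, not confirmed specs:

```python
def tflops(shaders: int, boost_ghz: float) -> float:
    """Peak FP32 throughput: each shader retires one FMA (2 FLOPs) per clock."""
    return shaders * boost_ghz * 2 / 1000

# GTX 1080 Founders Edition: 2560 shaders at ~1733 MHz boost
print(round(tflops(2560, 1.733), 1))  # ~8.9
# Rumoured RTX 2080: ~15% more shaders at a similar clock
print(round(tflops(2944, 1.733), 1))  # ~10.2
```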
https://forums.guru3d.com/data/avatars/m/270/270041.jpg
wavetrex:

That doesn't mean much. Vega 64 (Water) has 13.7 TFlops and we already know that it can -barely- compete with the 1080 (non-Ti), that is, the Founders Edition, non-overclocked, which has only 8.9 TFlops. Comparing gaming performance of two different architectures (and Turing is a completely new one, according to Nvidia) based solely on TFlops is just... silly. Also, the 11, where is that coming from? The shader count difference is exactly 15%. Since clocks are similar, if the architecture isn't changed (much), it would result in 10.2 TFlops, not 11! Hey, I'm an Nvidia fan, typing right now on a 1080, looking at a 100Hz G-SYNC 34" monitor, with a GameWorks-enabled game minimized in the taskbar... but that doesn't mean I should blindly believe that the new gen is going to be a miracle, soundly beating everything before it. Actually, the numbers that have already popped up show that it will be just a small incremental upgrade, no more than 10-20%, compared to the chips that are being replaced. I guess we shall find out in 2-3 days what the truth is.
It's HH's estimate from the page I linked in his overview of the information; I'm hoping it's wrong, like I said... Also, don't compare TFlops from AMD vs Nvidia, it doesn't work like that. Comparing just Nvidia makes more sense, especially when the architecture is very similar (I personally don't think it's totally new), although I agree it isn't a totally apples-to-apples comparison. I'm just going off the chart that HH posted on here. Though I agree it shouldn't be some miracle card, it's been 28 months since they released the 1080; we should see more of a difference between the two... a small incremental upgrade would have made more sense 12 months after that release, not 28 and counting. Though personally I don't mind too much, a small increase means my 1080 Ti will last longer, so eh... it's just a shame for the industry not to have its normally big leaps in performance. Though we will hopefully see more soon about what these cards can do and whether tensor cores will play a role or not.
https://forums.guru3d.com/data/avatars/m/79/79740.jpg
Ricepudding:

https://www.guru3d.com/articles-pages/geforce-rtx-2080-and-2080-ti-an-overview-thus-far,1.html I mean, if you actually use this forum and read the pages, scroll down a bit and you'll see the 2080 has a rumoured compute limit of 11 TFLOPS, whilst the 1080 Ti has a maximum compute limit of 11.5 TFLOPS; as also stated, it's a bit of speculation on a few data points. Please actually read articles posted by HH before you run your mouth. Like you, I hope it isn't true, but it is possible. Yes, historically it's never happened... but there is a first time for everything, and if what we know is true, then in theory it might be a tad weaker.
Whoops... missed wavetrex's post... TFLOPs is a rough indicator, but doesn't equate to absolute GPU gaming performance, especially when comparing different arches. I.e., Vega 64 has 12.6 TFLOPs vs the 1080 Ti's 11.5, but we know where things stand. Similarly, the GTX 780 Ti with 5.3 TFLOPs vs the 980 Ti at 5.63 TFLOPs, but the performance spread is much larger. Other examples of TFLOPs-vs-performance disparity on different arches exist as well. Sorry if I had to 'run my mouth', but I cannot believe for a minute that Nvidia would upend its tradition of new card releases that ALWAYS had the no. 2 card beating last gen's flagship. It's like a 'sacrosanct' Nvidia formula. And the kicker is doing it at higher price points? They would not release such a product; in fact, it would make much more sense to have waited for 7nm and continued to milk Pascal.
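The point that performance per TFLOP differs across architectures can be illustrated with a toy calculation; the TFLOPS figures are the ones quoted in the thread, while the relative performance indices are purely illustrative assumptions, not benchmark data:

```python
# TFLOPS as quoted in the thread; performance index (1080 Ti = 100) is an
# illustrative assumption, NOT measured benchmark data.
cards = {
    "GTX 1080 Ti": (11.5, 100),
    "Vega 64":     (12.6, 72),
    "GTX 980 Ti":  (5.63, 60),
    "GTX 780 Ti":  (5.30, 42),
}
for name, (tf, perf) in cards.items():
    print(f"{name}: {perf / tf:.1f} performance per TFLOP")
# The per-TFLOP numbers differ by architecture, which is why raw
# TFLOPS comparisons across arches mislead.
```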
https://forums.guru3d.com/data/avatars/m/270/270041.jpg
alanm:

Whoops... missed wavetrex's post... TFLOPs is a rough indicator, but doesn't equate to absolute GPU gaming performance, especially when comparing different arches. I.e., Vega 64 has 12.6 TFLOPs vs the 1080 Ti's 11.5, but we know where things stand. Similarly, the GTX 780 Ti with 5.3 TFLOPs vs the 980 Ti at 5.63 TFLOPs, but the performance spread is much larger. Other examples of TFLOPs-vs-performance disparity on different arches exist as well. Sorry if I had to 'run my mouth', but I cannot believe for a minute that Nvidia would upend its tradition of new card releases that ALWAYS had the no. 2 card beating last gen's flagship. It's like a 'sacrosanct' Nvidia formula. And the kicker is doing it at higher price points? They would not release such a product; in fact, it would make much more sense to have waited for 7nm and continued to milk Pascal.
Oh no, I totally agree, maybe I came off as a bit too disgruntled. What I meant was that although it's not apples to apples, it can be a good place to try and see performance changes, especially when the architecture seems quite similar to Pascal, though as I mentioned, the tensor cores might play a bigger role in the difference in performance we should see... My big worry, or issue, is that the 2080 should be more like the 2070 (normally the XX70 equals or beats the previous Ti card, give or take), but that doesn't seem to be the case, at least by what we know... And yeah, the prices of GPUs are a big kicker for almost anyone atm; I feel bad for first-time buyers, honestly. And with the 2080 Ti not being a massive jump but still asking, price-wise, for that jump, yeah, that's a kicker. But if the performance jump isn't that great then I will happily skip this generation, though I'm quite shocked that they might bring out the 2080 Ti already; normally that's around a year after the XX80 cards come out.
https://forums.guru3d.com/data/avatars/m/202/202673.jpg
RTX on Pascal vs Turing runs a factor of 6 faster on the latter('s Tensor cores), so expect Nvidia not only to coax game developers into implementing ray tracing to promote Turing but also to push loyal Pascal users to buy new cards they otherwise definitely wouldn't have needed to buy yet. The impact on AMD GPUs remains to be seen; currently they're 'dreadfully' slow in comparison to the 1080 Ti, but Vega has Rapid Packed Math, so who knows what's going to happen. Maybe the number of tensor cores on Turing is overkill for actual RTX/DXR implementation, so a Vega 56 suddenly ends up being twice as fast as a Titan Xp in next year's ray-tracing games as the ray tracing won't hit a bottleneck, and without the Vega 56 necessarily being fast, LOL... interesting times ahead. RTX in Vulkan should apparently be Nvidia/AMD, so who knows... maybe the 7nm Vega really is a threat to the RTX Titan, as AdoredTV's source was insinuating. Edit: was it a factor of 6, or was it worse... now I have to check the news again...
https://forums.guru3d.com/data/avatars/m/56/56686.jpg
NDAs apparently don't mean anything anymore; the leaks are like Swiss cheese.
https://forums.guru3d.com/data/avatars/m/79/79740.jpg
Two more days; hopefully things will be much clearer.
data/avatar/default/avatar18.webp
Nvidia should bring 512-bit and 448-bit buses back in the future. I have an old GTX 560 1GB with a 256-bit bus. Now I have a new GTX 1050 Ti 4GB with 128-bit, lol. An RTX 2080 Ti with a 352-bit bus is a joke.
https://forums.guru3d.com/data/avatars/m/268/268248.jpg
What are you guys smoking? The 670 is faster than the 1050 non-Ti! And the 770 trades blows with the 1050 Ti, winning slightly until you factor in power consumption, where the 1050s destroy them!
data/avatar/default/avatar33.webp
Texter:

RTX on Pascal vs Turing runs a factor of 6 faster on the latter('s Tensor cores), so expect Nvidia not only to coax game developers into implementing ray tracing to promote Turing but also to push loyal Pascal users to buy new cards they otherwise definitely wouldn't have needed to buy yet. The impact on AMD GPUs remains to be seen; currently they're 'dreadfully' slow in comparison to the 1080 Ti, but Vega has Rapid Packed Math, so who knows what's going to happen.
RTX hardware has rapid packed math too. In fact, it goes deeper than Vega, scaling all the way down to INT4. And it can execute INT and FLOAT operations in parallel, independently of each other. There are plenty of new tricks in RTX, and Nvidia is not yet marketing all of them. https://abload.de/img/turingtdfjp.jpg
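The packed-math idea being discussed, executing several narrow operations inside one wide register lane, can be sketched in software as SWAR (SIMD within a register); this toy packs two unsigned 16-bit integer lanes into one 32-bit word, a rough stand-in for the FP16/INT8/INT4 paths the actual hardware provides:

```python
# SWAR sketch: two independent 16-bit unsigned lanes packed into one 32-bit
# word, added in a single pass, with each lane wrapping modulo 2**16.
MASK16 = 0xFFFF

def pack2(hi: int, lo: int) -> int:
    """Pack two 16-bit values into one 32-bit word."""
    return ((hi & MASK16) << 16) | (lo & MASK16)

def add_packed(a: int, b: int) -> int:
    """Add both lanes independently; carries stay confined to their own lane."""
    hi = ((a >> 16) + (b >> 16)) & MASK16
    lo = ((a & MASK16) + (b & MASK16)) & MASK16
    return (hi << 16) | lo

s = add_packed(pack2(3, 40000), pack2(4, 30000))
print(s >> 16, s & MASK16)  # 7 4464  (3+4 = 7; 70000 wraps modulo 65536)
```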
https://forums.guru3d.com/data/avatars/m/79/79740.jpg
Pimpiklem:

This misconception that Vega 64 is slower than the 1080 needs to go away. Clearly the propaganda guys managed to succeed in planting that lie. Go on, play Frostpunk, GTA V, Sniper Elite 4, Project Cars 2, Total War: Warhammer if you wish to cultivate the lie; I'm happy for your ignorance.
I see you conveniently avoided quoting the words in question. Nobody said "slower".
https://forums.guru3d.com/data/avatars/m/249/249528.jpg
Jespi:

The GTX 2080 has fewer cores than the GTX 1080 Ti? So the 1080 Ti will still win in performance? Naaah???
I guess I should sell my GTX 1080 and grab a 780 Ti because it has a few more cores...
https://forums.guru3d.com/data/avatars/m/79/79740.jpg
IceVip:

I guess I should sell my GTX 1080 and grab a 780 Ti because it has a few more cores...
Grab a GTX 280. It's got a massive 512-bit bus. 😀
https://forums.guru3d.com/data/avatars/m/196/196426.jpg
IceVip:

I guess I should sell my GTX 1080 and grab a 780 Ti because it has a few more cores...
That is an incorrect comparison, because between the 1080 and the 780 Ti there's a massive clock difference: 780 Ti Base: 876, Boost: 928, with 2880 "cores"; let's say 2.67 M "performance points" (boost clock × cores). 1080 Base: 1607, Boost: 1733, with 2560 "cores"; that results in 4.44 M of the same points. Plus the 1080 boosts higher than that in many games (to 1900 or more), so 4.86 M points. Assuming those cores are perfectly identical, the 1080 is still 82% faster. Everything that happened in Nvidia GPU performance in the last 3 gens (including the new one, Turing) is much higher clock speeds as a result of the process node, plus stuffing in more of those cores for the same reason. There's very little or no increase in IPC, as these cores are really simple floating-point calculation units, and they are already at their best. It's very unlikely that anything can be done to improve IPC further, so all future GPUs using the same "array of floats multiplication" principle will just have more and more calculation cores and theoretically higher clocks once moving to improved processes (7nm, 5nm...) --- Back to the present day... well, Turing does not seem to have higher clock speeds. Actually, it might even clock lower than Pascal. And here lies the problem: shader (or core) count is only 15% higher, and the clock difference is nil. No amount of architecture tricks will make it faster; all the "low-hanging fruit" has been picked already.
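That "performance points" metric is just boost clock times core count; a quick sketch reproducing the 82% figure, with clocks and core counts as stated in the post:

```python
def perf_points(boost_mhz: int, cores: int) -> float:
    """Rough throughput proxy: clock (MHz) x core count, in millions."""
    return boost_mhz * cores / 1e6

p780ti = perf_points(928, 2880)       # ~2.67 M
p1080 = perf_points(1733, 2560)       # ~4.44 M at rated boost
p1080_game = perf_points(1900, 2560)  # ~4.86 M at typical in-game clocks
print(f"at rated boost: {p1080 / p780ti - 1:.0%} faster")
print(f"at ~1900 MHz: {p1080_game / p780ti - 1:.0%} faster")
```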
https://forums.guru3d.com/data/avatars/m/249/249528.jpg
wavetrex:

That is an incorrect comparison, because between the 1080 and the 780 Ti there's a massive clock difference: 780 Ti Base: 876, Boost: 928, with 2880 "cores"; let's say 2.67 M "performance points" (boost clock × cores). 1080 Base: 1607, Boost: 1733, with 2560 "cores"; that results in 4.44 M of the same points. Plus the 1080 boosts higher than that in many games (to 1900 or more), so 4.86 M points. Assuming those cores are perfectly identical, the 1080 is still 82% faster. Everything that happened in Nvidia GPU performance in the last 3 gens (including the new one, Turing) is much higher clock speeds as a result of the process node, plus stuffing in more of those cores for the same reason. There's very little or no increase in IPC, as these cores are really simple floating-point calculation units, and they are already at their best. It's very unlikely that anything can be done to improve IPC further, so all future GPUs using the same "array of floats multiplication" principle will just have more and more calculation cores and theoretically higher clocks once moving to improved processes (7nm, 5nm...) --- Back to the present day... well, Turing does not seem to have higher clock speeds. Actually, it might even clock lower than Pascal. And here lies the problem: shader (or core) count is only 15% higher, and the clock difference is nil. No amount of architecture tricks will make it faster; all the "low-hanging fruit" has been picked already.
There's always that one guy who doesn't get it; thanks for the info anyway.
https://forums.guru3d.com/data/avatars/m/196/196426.jpg
airbud7:

I'm sure he was joking, wavetrex
Obviously. @Texter, thanks for that link. Good info... and yeah, the 1080 Ti is an absolute beast. It remains to be seen whether the 2080 manages to beat it, and if yes, by how much (but I doubt it; the numbers don't support that assumption).
https://forums.guru3d.com/data/avatars/m/273/273754.jpg
alanm:

I don't recall the 8800 GTX ever getting that high. Maybe the 8800 GTX Ultra, but even that, I recall, was in the $800 region (possibly peaking higher).
I think you missed the "inflation adjusted" part of the title, along with the 2017 date on the side. On that note, the 2080 seems to be positioned between the 1080 and the 8800 GTX on that graph, if rumors are correct. But, guys, please don't forget that while GDDR6 is cheaper and easier to produce than HBM2, it is still more expensive to produce than GDDR5, hence the cost increase. A lot of people seem to forget about this. I am personally excited to see the benchmarks of these new cards.
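For reference, the inflation adjustment works like this; the CPI values below are approximate US CPI-U annual averages used purely for illustration (not the exact figures behind the graph being discussed), and $599 is the commonly cited 8800 GTX launch MSRP:

```python
# CPI values are approximate US CPI-U annual averages (illustrative only).
CPI = {2006: 201.6, 2017: 245.1}

def adjust(price: float, from_year: int, to_year: int) -> float:
    """Convert a price from one year's dollars to another's via the CPI ratio."""
    return price * CPI[to_year] / CPI[from_year]

# 8800 GTX launched at $599 in late 2006; in 2017 dollars that is roughly:
print(round(adjust(599, 2006, 2017)))  # ~728
```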