PNY GeForce RTX 2080 and 2080 Ti Product Data Sheet Slips Out

Lebon30:

A lot of people seem to forget about this.
People using those graphs also try really hard to ignore that most of the x80 cards since Kepler weren't exactly prime silicon or actual top-end chips, and that nVidia has been inventing its own insane little price points across all of its markets ever since. Let's just leave it at that.
Pimpiklem:

If this were an AMD product, everyone would be laughing and ridiculing the die size. But it's Nvidia, so you all turn a blind eye. Why is that?
Because the die size isn't comparable to anything that has come before: there are now RT and Tensor cores as well as the CUDA cores on this new chip. People took issue with AMD because of the performance vs. heat issues; I wasn't aware that "die size" was ever an issue...
@airbud7: yeah, but even nVidia were surprised by Titan's success, often selling them in pairs at $1k apiece... The 2080 Ti should be interesting to behold, though. 1080 Tis are often hitting CPU limitations already, so Turing desperately needs next-gen features that kill Pascal's performance.
@Pimpiklem: complaining about a product that hasn't been released, that you haven't seen perform, with games not yet written. lol, ok.
Reading you guys' posts cracks me up. Hell, I'm still on a 900 series, a 980 to be exact, and I see no need to upgrade unless you're doing 4K max settings. But also I'm 41 and my gaming days are now like once a week (down from 24/7); my daughter took over my position lol. As far as performance goes, from what I saw when you guys compared them, a GTX 1080 is 2x the performance of a GTX 980, so I might just get a used 1080 when my Ryzen build is finished. A 1080 Ti would be super-sweet for me! The 2080 needs to be at least 25%-35% faster than the 1080, and the Ti version 40%-50% faster, in my humble opinion. Later gurus. Nz.
Oh, I almost forgot... Nvidia = fps! These guys are a monster of a giant company; they don't even need to try to sell products. If they fart, people will flock to come and get a smell. Yes, when you corner the market like that and have no competition, that's what happens! I wish I was the CEO, sigh! Lol.
The RTX 2070 will be faster than or similar to the 1080 Ti. Just look at the history. It doesn't matter what "leaks" show, or don't. My claim has a ~90% chance of being true, based on the past 10 years of products.
gx-x:

The RTX 2070 will be faster than or similar to the 1080 Ti. Just look at the history. It doesn't matter what "leaks" show, or don't. My claim has a ~90% chance of being true, based on the past 10 years of products.
The xx70 "should be" faster than last gen's Ti, but I have a feeling this time it won't be. This release is a whole new approach on Nvidia's part, where I think they may be re-drawing the tiers. The xx80 could be the only card that beats the last Ti from now on, in which case I may have to bump up a tier or skip 12nm Turing and wait for the next big 70% performance jump. I don't really have much faith in 12nm Turing's longevity and wouldn't be surprised if a 7nm refresh comes out within the next 9-12 months (or whenever AMD's 7nm is out). I have a feeling Nvidia's main intent with 12nm Turing is to showcase RT tech and to nudge game studios into adopting it as quickly as they can, before the next 7nm RT release when things will be primed and ready for it.
What will be even more interesting is the second-hand market for 1070/1080 cards. What will the going price be for cards that might have had the guts run out of them mining 24/7? Should be very interesting to watch.
I am not going to write down something you can see for yourself by looking through the GPU reviews on Guru3D. PS: I was talking about performance versus performance, not performance per dollar, watt, duck, dog, AMD, sick days, etc. It's not an assumption, it's a fact. Start with the 4xx series and work your way up.
Fox2232:

Neither was I. I just wrote about the cause of the improved performance in each generation. If you think the 2070 will match/outperform the 1080 Ti, you have to have a reason for it. Performance has to come from something. And with all respect, I did ask you to write it down for yourself. I know perfectly well what the reasons for the performance jumps were. It just looked like you were going by "statistics" and presuming that nVidia will deliver the same improvement again, ignoring the technological background.
What nVidia does is make the xx70 faster than the previous gen's xx80 (Ti, most often) so that you sell the old and buy the new. Tech has nothing to do with their money-making policy at the final step, when it comes to the naming scheme. Yeah, sure, you are correct in your assumptions, which only leaves nVidia with more headroom for price gouging. Like I said, there is a 90% chance that history will repeat itself. It's all just about naming products so they fit the agenda. @ivan: there very well might be a $1000 xxxxx Titan product. Again, history tells us: xx60, xx70, xx80, then wait some months and the xx80 Ti comes; it is never the first card to arrive. It would make sense for nVidia to release the slower (or slowest, think GTX 960) cards first, until they get rid of as many 1070s/1080s as they can at discounted prices.
Honestly, we don't know that yet. This article, and what is going around now, is speculation. Might be true, might not be. It's a "wait and see" game at this point. On the other hand, far, far more 1060 cards were sold than 1070s, let alone 1080s. They made much more money on the low end.
wavetrex:

Now the x80 is $800 and the Ti is $1000. Really?? In just 3 generations we moved from $500 for high-end mainstream and $700 for the elite "Ti" to these new amazing prices. What's next? $1000 for the 3080 and $1500 for the Ti version?
I guess you missed the part where he said those prices look to be placeholders. And even if you don't factor in inflation, the top cards today are still cheaper than they were in 2007. https://hexus.net/media/uploaded/2017/3/1c9a8251-8039-4dc6-9e84-40f92178c220.png https://hexus.net/media/uploaded/2017/3/8a5fb095-af03-4a73-8016-a95830dedfda.png
Andrew LB:

I guess you missed the part where he said those prices look to be placeholders. And even if you don't factor in inflation, the top cards today are still cheaper than they were in 2007.
Noticed that later, after my post. As for the second part of your comment... well, not exactly. The 8800 Ultra was a sort of Titan of its time, a halo product with an outrageous price and nothing more. "Originally retailing from $800 to $1000, most users thought the card to be a poor value, offering only 10% more performance than the GTX but costing hundreds of dollars more." The actual high-end card that people bought was the 8800 GTX, which had an MSRP of $600, in line with the rest of them... 400-500-650. The 8800 Ultra with its $850 price should not even be on the chart... just as none of the Titans are. I just hope they keep it at "500" for the high-end x80 and "700" for the extreme Ti version in the future as well, and not create more anomalies like that 8800 Ultra. We have Titans for that purpose.
I'm interested in the reviews of the RTX 2080 Ti, but I'm not convinced I need one, not when the GTX 1080 Ti is still a beast and offering high framerates at 2560x1440 (and often at 4K too using DSR) with 60+ fps in most games (well, those that are well optimised anyway... We Happy Few, yes, I'm looking at you!!!). There are no new consoles due this year, so all games will still be aimed at the base PS4 and Xbox One specs (with "up to" 4K enhancements for the Pro and X variants). That means I would be better off waiting until next year for the second generation of Turing cards, which will likely have a die shrink, meaning better power efficiency and higher clocks/performance. By then the PS5 and the Xbox One's successor will be on the horizon, if not released, which means the extra power may be needed for multiplatform games... or maybe not, since I don't intend to upgrade to a 4K display for a few years yet. As I stated previously, maxed-out settings and high framerates are far more important to me than compromised settings at 4K, and 2560x1440 offers the perfect "sweet spot" IMO on a 27" 165 Hz display.
Fox2232:

So, only advancements in architecture remain. And there are plenty, right? But that means the 2070 will pull ahead of the 1080 Ti only through massive use of effects/features which will cripple that 1080 Ti. I am sure everyone wants new HW to perform better, to deliver higher fps in already existing games, not to show off by being drastically better in some tailored situation while showing almost no improvement in existing games.
I mean, at some point that was bound to happen... it's not like they can keep shrinking dies indefinitely. I'd rather have a revolutionary leap in visuals than Nvidia selling +10% performance each generation like Intel. So yeah, new hardware is needed - progress kind of "resets" here - but what exactly is the alternative? Plateaued visuals/performance as artists ram their heads into the proverbial wall of rasterization?
Paulo Narciso:

Unless this kind of tech (ray tracing) is in the consoles, devs won't bother.
The raytracing tech is powered by Microsoft DX12's DXR - which will almost definitely hit consoles at some point (Khronos is building their own variant as well). It's up to AMD to support an acceleration method for it - they already partially can via RPM in Vega but it's significantly slower than what Nvidia is doing.
Denial:

The raytracing tech is powered by Microsoft DX12's DXR - which will almost definitely hit consoles at some point (Khronos is building their own variant as well). It's up to AMD to support an acceleration method for it - they already partially can via RPM in Vega but it's significantly slower than what Nvidia is doing.
But on top of the current hardware limitations, and 30 fps more often than not, adding RT?! Just because it can, doesn't mean it should. Maybe in the next generation of consoles... I mean, even on PC they are adding it into the mix with rasterization, as a very small part of it, and it can be toggled on/off. Actually, the new Metro isn't even out yet; all we saw was a demo video. It's very new tech (for games), and adoption takes time. Just like with SSAO and tessellation (most cards still struggle with it)... Hell, even DX12 sees most of its usage in the Windows 10 UI...
gx-x:

But on top of the current hardware limitations, and 30 fps more often than not, adding RT?! Just because it can, doesn't mean it should. Maybe in the next generation of consoles... I mean, even on PC they are adding it into the mix with rasterization, as a very small part of it, and it can be toggled on/off. Actually, the new Metro isn't even out yet; all we saw was a demo video. It's very new tech (for games), and adoption takes time. Just like with SSAO and tessellation (most cards still struggle with it)... Hell, even DX12 sees most of its usage in the Windows 10 UI...
Well yeah, obviously if you're expecting 100% adoption in the first few years you're going to be disappointed, but I think it's promising enough that devs will adopt it at a higher rate than previous technologies. In the professional rendering industry nearly every single first- and third-party renderer has pledged support for it (mostly through OptiX). It will eventually save artists a ton of time, and thus money, for games, as well as bringing overall visual improvements. I don't think it matters that it's mixed with rasterization, since it's alleviating the most difficult lighting tasks - ones that are often hard to replicate and performance-intensive anyway.
Paulo Narciso:

Microsoft was selling DX12 as the fix for all problems, yet the tech is used in only a couple of games and it doesn't bring anything new. Vulkan is even worse; the only games worth mentioning are Doom and Wolfenstein.
No they weren't, lol. Show me where Microsoft was selling DX12 as the fix for everything? All the articles that came out with DX12 said the exact opposite: that DX12 would be hard to adopt and would take a ton of time to come to fruition, and that only the larger, more experienced developers would even get anything out of it. That's why they continue to develop/support DX11 alongside DX12. Aside from the CPU overhead reduction, all DX12 does is give you deeper access to the hardware. The developer has to be the one who uses that level of access to improve performance and essentially out-optimize the driver developers at AMD/Nvidia. Thinking that was going to happen in any reasonable timeframe, or to any real extent, was only pushed by delusional forum-going gamers with no understanding of how difficult that level of software development is - it certainly wasn't said by Microsoft.
Paulo Narciso:

So, they won't use DX12 because it's hard, but they will use DX12 ray tracing?
Some companies will use preconfigured libraries built on DX12, like Nvidia's RTX GameWorks systems and the AMD equivalents. The bigger engines like UE/Unity/CryEngine/Frostbite/etc. will do all the hard work, and the developers building games in those engines (which is the majority of developers now) will just call the DXR functions via a checkbox/switch/light-type/however-the-fuck-they-decide-to-implement-it in the engine. The hard part of DX12 is the optimization: you can easily wrap DX11 games in 12 and do no work, but then you're not getting any of the benefit of low-level optimization (which is the hard part). Read the article I linked in the previous post. I should also point out that you're conflating DXR adoption with DX12; if anything, DXR is just another selling point for devs to take the step to DX12, especially because of the artist time saved. There is an upfront cost in the switch to DX12, but the development time saved afterwards is worth it - especially as raytracing acceleration performance improves and comes to other platforms.
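For anyone wondering what "just call the DXR functions" boils down to underneath that engine checkbox, here is a rough sketch of the capability check an engine runs before it even exposes a ray tracing toggle. This is my own illustration, not something from the article: it assumes plain C++ against D3D12 with a DXR-capable Windows 10 SDK, linked against d3d12.lib.

// Minimal sketch: ask the D3D12 runtime whether DXR is available on this machine.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter. ID3D12Device5 is the interface
    // that carries the DXR entry points (e.g. CreateStateObject); the
    // acceleration-structure builds live on ID3D12GraphicsCommandList4.
    ComPtr<ID3D12Device5> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device))))
    {
        std::printf("No D3D12 device at feature level 12_0 (or runtime too old for DXR)\n");
        return 1;
    }

    // OPTIONS5 is the feature block that reports the raytracing tier.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    HRESULT hr = device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));

    if (SUCCEEDED(hr) && opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::printf("DXR tier %d supported - the engine can light up its RT code path\n",
                    static_cast<int>(opts5.RaytracingTier));
    else
        std::printf("No hardware DXR - stick to plain rasterization (or a fallback)\n");

    return 0;
}

Everything above that check - building acceleration structures, writing ray-generation and hit shaders - is exactly the hard work the engines are expected to hide behind that switch.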