GeForce RTX 3090 gets 24GB GDDR6X and GeForce RTX 3080 gets 10GB GDDR6X

https://forums.guru3d.com/data/avatars/m/282/282392.jpg
$800 for an x80-series card is pricey; still, with ~4,300 cores and all the RTX features, I wouldn't be mad at myself for going for one.
https://forums.guru3d.com/data/avatars/m/165/165326.jpg
I agree, this will indeed be a halo product like the Titan, and more likely they will release a cut-down version as a 3080 Ti to compete with Big Navi. It will be a monster of an fps machine for sure, and I think it only makes sense if you play games at 4K+ resolution with all the bells and whistles turned on. I'm in for a 3090, as I only play at 4K and my 2080 Ti has started to show its age and can no longer cope with new games at that resolution. I see it as an investment in the next two years of my gaming needs, and compared to my old hobby of racing cars at the drag strip, this is much more affordable.
https://forums.guru3d.com/data/avatars/m/115/115462.jpg
Although I skipped 2080Ti, I'm not sure I saved enough to afford a 3090. Even if there is no official price yet, you can bet all that horsepower will demand the lion's share. 😛
https://forums.guru3d.com/data/avatars/m/256/256969.jpg
They intentionally left the whole $800-1400 range open for a 3080 Ti to position itself in within six months. 2080 Ti owners will now mostly have to go for the 3090 instead of a potential sidegrade to the 3080, which at retail might be within reach of an overclocked 2080 Ti, or close to it. "Put a card unreachable by AMD on top, then expand into the more affordable range, which is where the money is."
data/avatar/default/avatar06.webp
profion85:

The Nvidia 3090 has a lot of VRAM, 24GB, if the rumours are true... Could it be a dual-GPU card, like the GeForce GTX 690 back in 2012?
24GB is the same as the Titan RTX, which this will probably replace. About time, too; the Titan naming scheme has been confusing as hell for years now.

FWIW, 24GB is barely enough for high-res 3D renders. It's depressingly easy to fill 24GB when using uncompressed 8K or 16K textures; they sell the Quadro RTX 8000 with 48GB for a reason. Even as a hobbyist I run into situations where the 11GB on my 1080 Ti isn't enough: with 4K and 8K textures I often have to budget the background objects I include and use tricks like DOF to hide it.

The real suck is that if you want to render across multiple cards, they all need the same amount of VRAM, or you're stuck budgeting for whichever has the least. As soon as you run out, speeds drop through the floor across the entire project.
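To make the texture math concrete, here's a rough sketch; the RGBA8-with-full-mip-chain assumption is mine, and real renderers and formats vary:

```python
# Back-of-the-envelope VRAM cost of uncompressed textures.
# Assumption (mine, not the poster's): RGBA, 8 bits per channel, full mips.
def texture_mb(side, channels=4, bytes_per_channel=1, mips=True):
    base = side * side * channels * bytes_per_channel
    # A full mip chain adds roughly one third on top of the base level.
    return base * (4 / 3 if mips else 1) / 2**20

for side in (4096, 8192, 16384):
    print(f"{side // 1024}k texture: ~{texture_mb(side):,.0f} MB")
# 4k ~85 MB, 8k ~341 MB, 16k ~1,365 MB: fewer than twenty 8k maps
# already eat ~6 GB before geometry, framebuffers, and the scene itself.
```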
data/avatar/default/avatar22.webp
XenthorX:

They intentionally left the whole $800-1400 range open for a 3080 Ti to position itself in within six months. 2080 Ti owners will now mostly have to go for the 3090 instead of a potential sidegrade to the 3080, which at retail might be within reach of an overclocked 2080 Ti, or close to it. "Put a card unreachable by AMD on top, then expand into the more affordable range, which is where the money is."
Only if your 2080Ti can get 35% more speed from its OC, which is what the 3080 is supposedly doing.
https://forums.guru3d.com/data/avatars/m/280/280231.jpg
XenthorX:

They intentionally left the whole $800-1400 range open for a 3080 Ti to position itself in within six months. 2080 Ti owners will now mostly have to go for the 3090 instead of a potential sidegrade to the 3080, which at retail might be within reach of an overclocked 2080 Ti, or close to it. "Put a card unreachable by AMD on top, then expand into the more affordable range, which is where the money is."
The 3080 is not a sidegrade to the 2080 Ti. It's an upgrade as long as the 3080 Ti is not available, just as the 2080 was a mild upgrade over the 1080 Ti if you weren't going to pay for a 2080 Ti. The definition of an upgrade is that you never lose anything: the 2080 beats the 1080 Ti in all aspects, and you gain new features unavailable to 1080 Ti owners. The 3080 will have much better ray tracing, which will make the difference huge compared to the 2080 Ti. I suppose everyone will get an RTX 3000 card for a viable and sustainable ray-tracing experience. Even an RTX 3060 will beat the 2080 Ti with ray tracing enabled, and I won't even mention the newer version of DLSS.
https://forums.guru3d.com/data/avatars/m/256/256969.jpg
You have a bunch of AIB partner boards already overclocked 10-15% out of the box (MSI GAMING X TRIO, EVGA FTW3 Ultra/XC Ultra OC, ...). Add 100MHz on the core (a fairly conservative OC a lot of cards can pull off) and you're already in the 20-25% area. Now, the 2080 Ti overclocks extremely well (it beats the Titan RTX after an overclock) and has 11GB of VRAM, which I happen to fill regularly (Unreal Engine/Chrome/Netflix/YouTube...). I don't see a 10% performance uplift (closer to 5% or less in what was my case) with 1GB less video RAM as a valuable upgrade option. Again, of course, it's all speculation; we have no reviews, no benchmarks, nothing 😱
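The compounding behind that 20-25% figure, and the ~10% gap it implies against the rumoured 3080 uplift, sketched out (every input is the thread's speculation, not a measurement):

```python
# Overclock-headroom arithmetic from this thread, spelled out.
factory_oc = 0.12   # AIB boards ship ~10-15% above reference
manual_oc  = 0.10   # ~+100MHz on the core, a conservative manual bump

total_oc = (1 + factory_oc) * (1 + manual_oc) - 1
print(f"Overclocked 2080 Ti vs stock: ~{total_oc:.0%}")   # ~23%

rumoured_3080 = 0.35  # speculated 3080 lead over a stock 2080 Ti
gap = (1 + rumoured_3080) / (1 + total_oc) - 1
print(f"3080 vs overclocked 2080 Ti: ~{gap:.0%}")         # ~10%
```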
https://forums.guru3d.com/data/avatars/m/191/191875.jpg
XP-200:

Well, if Nvidia pricing keeps to form, it might be cheaper to just buy a small single-seater plane; that way you can set everything to super ultra, plus free VR.....or just R.
Yeah, but when that crashes it takes a little more than a double-click of the icon to get it working again 😉

With the reality that folk may also have to fork out for a new PSU to power these things, the total price of the next generation of cards is looking to move well north of silly. If AMD can nail the price/performance ratio with their next line-up of cards, they could steal a good number of customers. Maybe not something with the out-and-out power of the 3080/3090, but something that comes close at a cheaper price and doesn't require the end user to swap out their existing PSU / PSU connectors could see a small victory for the red team.
https://forums.guru3d.com/data/avatars/m/236/236506.jpg
Undying:

The gap between Radeon and GeForce cards has never been so big. I can already see the massacre in the Guru3D review. AMD needs a card 100% faster than the 5700 XT to actually compete. The GPU war is a farce at this point: Nvidia has won, and we all lost.
It's very disappointing that AMD are dragging their feet with the release of RDNA2, but they have Zen 3 to sort out, plus the new console APUs; those will surely take up a lot of their 7nm allocation. I feel like I read they were hoping for a Q3 release for RDNA2, but I can't remember exactly where I read that now.
https://forums.guru3d.com/data/avatars/m/280/280231.jpg
Under heavy ray-tracing scenarios the RTX 2080 Ti easily loses from 25% up to 50% of its performance. If you factor in both the RTX 30 series' superior ray-tracing hardware and the speculated performance uplift, it's easy to say already that the RTX 3080 will have double or 2.5x the fps of the 2080 Ti with RTX on. People continue to ignore that and stick to a mere ~10-25% difference for the 3080 vs the 2080 Ti.
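For what it's worth, the 2x-2.5x figure only pencils out if Ampere's RT penalty really is far smaller; a sketch with made-up inputs (the baseline fps and both penalty figures are pure illustration):

```python
# The 2x-2.5x claim as arithmetic, using this thread's speculation.
baseline = 60.0            # hypothetical 2080 Ti fps at 4K, RT off
turing_rt_hit = 0.50       # top of the quoted 25-50% RT penalty
ampere_raster = 1.35       # rumoured raster uplift of the 3080
ampere_rt_hit = 0.15       # assumed much smaller RT penalty on Ampere

fps_2080ti = baseline * (1 - turing_rt_hit)                 # 30 fps
fps_3080 = baseline * ampere_raster * (1 - ampere_rt_hit)   # ~69 fps
print(f"RT on: 2080 Ti ~{fps_2080ti:.0f} fps, "
      f"3080 ~{fps_3080:.0f} fps ({fps_3080 / fps_2080ti:.1f}x)")  # ~2.3x
```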
https://forums.guru3d.com/data/avatars/m/256/256969.jpg
I wouldn't write AMD off at all, but rather expect them to compete with the 3080 and the upcoming 3080 Ti, which would be excellent.
itpro:

Under heavy ray-tracing scenarios the RTX 2080 Ti easily loses from 25% up to 50% of its performance. If you factor in both the RTX 30 series' superior ray-tracing hardware and the speculated performance uplift, it's easy to say already that the RTX 3080 will have double or 2.5x the fps of the 2080 Ti with RTX on. People continue to ignore that and stick to a mere ~10-25% difference for the 3080 vs the 2080 Ti.
You do have some nerve calling yourself "IT Pro" and drawing conclusions a week before the product line-up is unveiled and any reviews or specs are out. Pragmatism, dear friend; rational thinking, facts. We're not drawing any conclusions on sand.
https://forums.guru3d.com/data/avatars/m/198/198862.jpg
itpro:

Under heavy ray-tracing scenarios the RTX 2080 Ti easily loses from 25% up to 50% of its performance. If you factor in both the RTX 30 series' superior ray-tracing hardware and the speculated performance uplift, it's easy to say already that the RTX 3080 will have double or 2.5x the fps of the 2080 Ti with RTX on. People continue to ignore that and stick to a mere ~10-25% difference for the 3080 vs the 2080 Ti.
Until many more games support ray tracing and it becomes the norm, it will stay that way. I am very much interested in how RDNA2 will perform under the same conditions. I'm sure AMD will make their own version of DLSS.
https://forums.guru3d.com/data/avatars/m/270/270008.jpg
Luc:

Two times bigger can be two times more expensive if it's monolithic, and if it performs close to a 3080, expect it to be priced close to one. Everything could be really expensive until they launch an MCM card. At least this time there aren't many leaks about Radeon; that could be a good sign...
No, not really, if we are comparing to Nvidia. We are talking about 505 mm² for Big Navi; Nvidia's 2080 has a die size of 545 mm², and the 2080 Ti has 754 mm². Even if this 2x Navi card is the halo product, it's still pretty small in the grand scheme of GPUs.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
JamesSneed:

No, not really. We are talking about 505 mm² for Big Navi; Nvidia's 2080 has a die size of 545 mm², and the 2080 Ti has 754 mm². Even if this 2x Navi card is the halo product, it's still pretty small in the grand scheme of GPUs.
Yeah but what's the die size of a 2080Ti on 7nm? And how much more does it cost to manufacture it on 7nm?
https://forums.guru3d.com/data/avatars/m/270/270008.jpg
Denial:

Yeah but what's the die size of a 2080Ti on 7nm? And how much more does it cost to manufacture it on 7nm?
Really, it's how much the 3080 costs vs Big Navi, but I digress. Nobody knows the cost per mm² except a couple of people on the planet, as this is the most closely guarded information. I have no hard answers, but we certainly aren't looking at high costs for a 505 mm² chip versus what high-end GPUs sell for today. There's tons of room for margins, so I wouldn't even worry about that side of it, because 505 mm² is still not a large chip approaching the max reticle size. All that is to say, I would only worry about Navi performance, as chip cost should be comfortably within margins. We should expect more than 2x performance, since it's double the CUs plus whatever tweaks made it into RDNA2. How much more than 2x, I am not going to speculate.
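To ground the "still not a large chip" point, the usual dies-per-wafer back-of-the-envelope; the defect density here is a pure guess, since real yield figures are foundry secrets:

```python
import math

# Standard dies-per-wafer approximation plus a simple Poisson yield model.
def dies_per_wafer(die_mm2, wafer_d_mm=300):
    r = wafer_d_mm / 2
    return math.pi * r ** 2 / die_mm2 - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2)

def poisson_yield(die_mm2, d0_per_cm2=0.1):  # d0 is a guessed defect density
    return math.exp(-(die_mm2 / 100) * d0_per_cm2)

for name, area in [("Big Navi (rumoured)", 505),
                   ("TU104 / 2080", 545),
                   ("TU102 / 2080 Ti", 754)]:
    n = dies_per_wafer(area)
    good = n * poisson_yield(area)
    print(f"{name:20} {area} mm^2: ~{n:.0f} candidates/wafer, ~{good:.0f} good")
```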
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
JamesSneed:

Really, it's how much the 3080 costs vs Big Navi, but I digress. Nobody knows the cost per mm² except a couple of people on the planet, as this is the most closely guarded information. I have no hard answers, but we certainly aren't looking at high costs for a 505 mm² chip versus what high-end GPUs sell for today. There's tons of room for margins, so I wouldn't even worry about that side of it, because 505 mm² is still not a large chip approaching the max reticle size.
We do have some public numbers on gate costs, and the price has basically stalled as of 16nm. So essentially a chip with x transistors on 7nm costs the same (probably slightly more) as the same chip on 16nm. The node lets you double said chip (without increasing reticle size), but the manufacturing cost also essentially doubles, because the cost per transistor didn't move and you're doubling the number of transistors. Now, I agree that there are large margins to play with, so in reality the MSRP of the card won't double, but it's still going to increase rather significantly.

From the leaks, it sounds like the reason Nvidia possibly went with Samsung is that they were able to negotiate a really good deal on 10nm, as opposed to TSMC's 7nm. That might have been done in an effort to maintain margins; I imagine Nvidia knows the market will only bear so much of a price increase. For Nvidia to improve performance over the 2080 Ti, it needs a fuckton more transistors - there is no way around this - but it no longer has price scaling from new nodes. So the best they can do is shop for a really cheap node to keep manufacturing costs down; but the new node isn't as good as 7nm, so they design a new cooler, raise the TDP by 50-100W, release a YouTube video about how revolutionary their new design is, and kind of make it all work through marketing.

I think AMD is probably in a better position, but I doubt Nvidia cares much - its clout/marketing/mindshare etc. essentially buys them time to maneuver. They are almost certainly going MCM with Hopper.

As for the end result, I couldn't care less. I'm buying a GPU at around $1000; if the closest comes in at $800 then I'm saving $200, but I want something that can do 4K in Cyberpunk at close to max settings @ 60fps. I'll probably end up with a 3080. If AMD launches relatively soon I might consider them, but idk; the black screen issues plagued a couple of people I convinced to buy AMD, and that's kind of turned me off AMD's GPUs again.
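The core of that argument in one normalized calculation (numbers are illustrative; actual wafer pricing is under NDA):

```python
# The stalled cost-per-transistor argument, normalized.
transistors = 18.6               # billions, a TU102 / 2080 Ti-class chip
doubled = 2 * transistors        # what the new node lets you fit in the same area

# Old regime: each shrink roughly halved $/transistor, so a doubled chip
# cost about the same to make as its predecessor.
old_scaling = doubled * 0.5 / transistors
# Post-16nm regime per the post above: $/transistor stays flat.
flat_scaling = doubled * 1.0 / transistors

print(f"2x transistors, old scaling: ~{old_scaling:.1f}x silicon cost")  # ~1.0x
print(f"2x transistors, flat $/xtor: ~{flat_scaling:.1f}x silicon cost") # ~2.0x
```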
https://forums.guru3d.com/data/avatars/m/270/270008.jpg
Denial:

We do have some public numbers on gate costs, and the price has basically stalled as of 16nm. So essentially a chip with x transistors on 7nm costs the same (probably slightly more) as the same chip on 16nm. The node lets you double said chip (without increasing reticle size), but the manufacturing cost also essentially doubles, because the cost per transistor didn't move and you're doubling the number of transistors. Now, I agree that there are large margins to play with, so in reality the MSRP of the card won't double, but it's still going to increase rather significantly. From the leaks, it sounds like the reason Nvidia possibly went with Samsung is that they were able to negotiate a really good deal on 10nm, as opposed to TSMC's 7nm. That might have been done in an effort to maintain margins; I imagine Nvidia knows the market will only bear so much of a price increase. For Nvidia to improve performance over the 2080 Ti, it needs a fuckton more transistors - there is no way around this - but it no longer has price scaling from new nodes. So the best they can do is shop for a really cheap node to keep manufacturing costs down; but the new node isn't as good as 7nm, so they design a new cooler, raise the TDP by 50-100W, release a YouTube video about how revolutionary their new design is, and kind of make it all work through marketing. I think AMD is probably in a better position, but I doubt Nvidia cares much - its clout/marketing/mindshare etc. essentially buys them time to maneuver. They are almost certainly going MCM with Hopper
No argument; I just don't think cost on the higher end is even a consideration, not since last year's Turing price hikes. There's so much margin built in these days. I have a feeling AMD negotiated a really good deal across Xbox, PlayStation, CPUs, GPUs, and other APUs, because they had some high-volume buying power. Anyhow, I'm only worried about how Navi performs; that will be 99% of the determining factor in AMD's success, as there is tons of wiggle room to adjust pricing across the different high-end tiers.
https://forums.guru3d.com/data/avatars/m/280/280231.jpg
XenthorX:

I wouldn't write AMD off at all, but rather expect them to compete with the 3080 and the upcoming 3080 Ti, which would be excellent. You do have some nerve calling yourself "IT Pro" and drawing conclusions a week before the product line-up is unveiled and any reviews or specs are out. Pragmatism, dear friend; rational thinking, facts. We're not drawing any conclusions on sand.
Undying:

Until many more games support ray tracing and it becomes the norm, it will stay that way. I am very much interested in how RDNA2 will perform under the same conditions. I'm sure AMD will make their own version of DLSS.
So we expect RDNA2 and Ampere to take the same performance hit from switching RT on as Nvidia's previous generation did, because developers know those games are a minority? OK then, why jump onto this generation's boat at all if we cannot play ray-traced games above 60fps at 4K? You are all very pessimistic about the RT performance of the new GPUs. I would understand that ten years ago, but now RT is becoming a real thing.
https://forums.guru3d.com/data/avatars/m/150/150085.jpg
Denial:

I mean, if it's getting 24GB of RAM and all that jazz, it's clearly some kind of "halo" product like a Titan, even if it isn't labelled as such. Personally, I'm just going to buy whatever is in my price range of ~$1000.
Unfortunately, with developers tailoring their engines to take full advantage of next-gen console bandwidth, the 3090 won't be a halo product with its 24GB of VRAM. It will be a SKU needed on PC to combat the ever-growing number of games that are bandwidth-sensitive: MS FS2020, Horizon Zero Dawn, etc. Newly released sandbox games are showing this trend, and I don't believe it's an outlier. Any, and dare I say all, future console-ported open-world sandbox games are going to be extremely bandwidth-sensitive. The only way PCs can combat this is through increased VRAM, for one to at least match the RAM on the consoles.