AMD Radeon RX 6900XT to feature Navi 21 XTX GPU with 80 CUs

Shader ≠ shader. The architectures are very different; you can't simply compare one number with another (unless it's the end result, as in FPS... that's the only number that matters).
kapu:

6GB can be used fully at 1080p. I know that for sure: 5.5GB in Horizon Zero Dawn (full settings, 1080p).
Horizon can cache more into VRAM. It's constantly over 7GB using an RX 580. It's one of the reasons it runs better on this card.
Reddoguk:

How can 5120 shaders using GDDR6 beat 8704 shaders using GDDR6X? I highly doubt it'll beat the 3080, but it might get close.
It very much depends on the architectures. I'm not going to claim I know one way or the other, but I also wouldn't count this out until we hear more next week. If AMD really is pulling 25-30% more clock speed, then who knows.
Fox2232:

I'll repeat myself for you. 16GB on a 256-bit bus at 16/18 Gbps is too slow to move more than 8/9GB of data per frame into and out of VRAM without fps falling under 60. Such a card can't use more than 4/4.5GB per frame if you want 120fps. All that extra VRAM will be used only as cache. Therefore it may prevent the occasional hitch from bad preloading of resources, but it will not have a positive effect on fps. (And that hitching does not happen on cards with 8GB VRAM anyway.) I would prefer 10GB on a 320-bit bus @ 18Gbps over 16GB on a 256-bit bus @ 18Gbps. In competitive games which do not use extra-high-resolution textures, AMD's solution may pretty much take the crown, partly thanks to IC. But in single-player titles using detailed textures, nVidia with its considerably higher memory bandwidth is likely to dominate unless the player reduces texture detail a bit. I tend to believe that AMD made a bet on being able to do a lot of processing on the GPU with minimal access to VRAM. But the question is how smart the GPU is in deciding what sits in IC and what goes back to VRAM.
While I don't disagree, I think there is a huge unknown in what AMD is calling Infinity Cache and how that works. If AMD really is going to be competitive with Nvidia at the highest levels and they went with a 256-bit bus, then something has to give. I have a feeling we are about to see something a bit revolutionary, or the rumors are all complete rubbish.
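To sanity-check the per-frame figures in the quote above, here is a minimal Python sketch, assuming the rumored 256-bit bus at 16 or 18 Gbps and counting only theoretical peak bandwidth (no Infinity Cache, no compression, no real-world efficiency losses):

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Theoretical peak VRAM bandwidth in GB/s: (pins / 8 bits per byte) * data rate per pin.
    return bus_width_bits / 8 * data_rate_gbps

def per_frame_budget_gb(bandwidth_gbs: float, fps: int) -> float:
    # The most data that could cross the bus during a single frame at the target frame rate.
    return bandwidth_gbs / fps

for rate in (16, 18):
    bw = peak_bandwidth_gbs(256, rate)
    print(f"256-bit @ {rate} Gbps: {bw:.0f} GB/s -> "
          f"{per_frame_budget_gb(bw, 60):.1f} GB/frame @ 60 fps, "
          f"{per_frame_budget_gb(bw, 120):.1f} GB/frame @ 120 fps")

That works out to roughly 8.5-9.6 GB per frame at 60 fps and 4.3-4.8 GB at 120 fps, which appears to be where the quoted 8/9GB and 4/4.5GB figures come from.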
Fox2232:

I'll repeat myself for you. 16GB on a 256-bit bus at 16/18 Gbps is too slow to move more than 8/9GB of data per frame into and out of VRAM without fps falling under 60. Such a card can't use more than 4/4.5GB per frame if you want 120fps. All that extra VRAM will be used only as cache. Therefore it may prevent the occasional hitch from bad preloading of resources, but it will not have a positive effect on fps. (And that hitching does not happen on cards with 8GB VRAM anyway.)
From what I've been told, the Infinity Cache system effectively increases RAM bandwidth; it quite literally moves information faster, unless I've misinterpreted what I was told. That's why they lowered it to this supposed 256-bit bus: with the cache system it has enough bandwidth. I'll verify and get back about it. So... yes, it should matter at higher resolutions, especially in the future... a lot.
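Nobody outside AMD has published the cache's hit rates or bandwidth yet, but the "cache effectively increases bandwidth" idea can be illustrated with a generic average-access-cost model. In this toy Python sketch both the 1 TB/s cache figure and the hit rates are placeholder assumptions, not AMD numbers:

def effective_bandwidth_gbs(vram_gbs: float, cache_gbs: float, hit_rate: float) -> float:
    # Weighted harmonic mean: each byte costs hit_rate/cache_gbs + (1 - hit_rate)/vram_gbs
    # seconds per GB on average, so effective bandwidth is the reciprocal of that cost.
    return 1.0 / (hit_rate / cache_gbs + (1.0 - hit_rate) / vram_gbs)

# 512 GB/s of GDDR6 paired with a hypothetical 1 TB/s on-die cache:
for hit in (0.0, 0.3, 0.5, 0.7):
    print(f"hit rate {hit:.0%}: ~{effective_bandwidth_gbs(512, 1000, hit):.0f} GB/s effective")

Even modest hit rates push the effective figure well above the raw 512 GB/s in this model; whether the real hit rate stays high enough at 4K is exactly the open question.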
DeskStar:

8K textures would say otherwise about how much RAM is needed and at what resolution. I honestly think 16GB of VRAM is the sweet spot for most if not all games at or below 4K. I personally game at 5120x1440 @ 120Hz, which lies just below 4K in terms of pixel count, and I have used up all of my 2080 Ti's memory, that is for certain. I do not understand why Nvidia shat out cards with only 10GB of VRAM... but wait, I do. Early adopters scooped them up and got the ability to play first. Nvidia knew what they were doing, so when AMD drops their cards all they have to do is drop their 12-20GB variants to the masses. Early adopters this go-round might be a little pissed here soon. I just feel bad for the ones who fell to the scalpers, though. Then again, life is full of choices, to say the least. Oh, and my other system has that 8GB card you were speaking of, and yeah, my 5700 XT uses up all of its RAM and then some, especially at anything over 1440p resolution.
The fact that a game or an API will fill up all the memory it can get does not mean it will get more performance from it. 8GB can be enough not to starve a GPU at 4K with great textures, provided you empty the buffer of textures you are not using; and if there is an aggressive approach to loading everything into RAM (textures, geometry and compiled shader programs), you will fill up as many gigs as you have, without necessarily gaining fps or frametimes.
6900XT - $749 - $50 more than the 3080 for around the same performance (trading blows), but has 6GB more VRAM and will be available
6800XT - $649 - $50 less than the 3080, 2-5% slower, but again with more VRAM, and again available
6800 - $599 - 10% slower than the 3080, but more VRAM and available
6700XT - $529 - slightly above the 3070 in price, but with 4GB more VRAM and 7-12% faster than the 3070
6700 - $499 - trading blows with the 3070, but with more VRAM
NGGJimmy:

6800XT - $649 - $50 less than the 3080, 2-5% slower, but again with more VRAM, and again available
6800 - $599 - 10% slower than the 3080, but more VRAM and available
6700XT - $529 - slightly above the 3070 in price, but with 4GB more VRAM and 7-12% faster than the 3070
6700 - $499 - trading blows with the 3070, but with more VRAM
4 cards with $150 between the fastest and the slowest. You'd run Zen 3 aground 🙂
The limiting factor (especially for high-res gaming) will be the 256-bit bus width... it's going to have a lot less bandwidth than the 3080.
Dragam1337:

The limiting factor (especially for high-res gaming) will be the 256-bit bus width... it's going to have a lot less bandwidth than the 3080.
Well, some have been talking up the merits of all the extra cache it has, and that could make up for it. I guess we'll see soon enough.
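For scale, the raw (pre-cache) gap is easy to put a number on. This quick Python sketch uses the 3080's published 320-bit GDDR6X at 19 Gbps against the rumored 256-bit GDDR6 at 16 Gbps for Navi 21:

def peak_bw_gbs(bus_bits: int, gbps: float) -> float:
    # Theoretical peak bandwidth in GB/s.
    return bus_bits / 8 * gbps

rtx_3080 = peak_bw_gbs(320, 19)  # published spec: 760 GB/s
navi_21 = peak_bw_gbs(256, 16)   # rumored configuration: 512 GB/s
print(f"RTX 3080: {rtx_3080:.0f} GB/s, Navi 21 (rumored): {navi_21:.0f} GB/s "
      f"({1 - navi_21 / rtx_3080:.0%} less raw bandwidth)")

So the cache would need to make up roughly a third of the 3080's raw bandwidth just to pull even on paper.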
Dragam1337:

The limiting factor (especially for high-res gaming) will be the 256-bit bus width... it's going to have a lot less bandwidth than the 3080.
I'd bet a few dollars it won't be a limiting factor. AMD is far from stupid, and 4K is still not the most popular resolution, far from it. 1080p/1440p is king and will be for a few years at least, and I can bet $1000 that AMD will eat the 3080 at lower resolutions, where the Ampere architecture struggles (it can even lose to the 2080 Ti).
I would love it if the card results AMD showed before were for the standard 6800. That would certainly turn some heads. I also wonder if AMD is going hard on pricing this round. My gut tells me the 6700 could be 350-400 USD, and the 6800 maybe in the 500 USD ballpark. I feel they want another 580-tier GPU: an excellent mid-range performer that will sell loads. I hope this pushes decent-to-high-tier PC gaming back into a more accessible price category.
Fox2232:

Very few of us will buy a card with the performance level of a 3080 for 1080p. A few more for 1440p. I have been aiming from the start at a 60-64 CU RDNA2 part. That should be about the optimal 1080p card for high-fps gaming. Such a card should end up somewhere between the 2080 Ti and the 3080, depending on the game and the ability of IC to do its thing. And it would have a much better ability to OC within a reasonable TDP than a card with the full 80 CUs. But it all depends on the way AMD is cutting down RDNA2. Will part of the IC be disabled along with CUs? Or is it bound to the IMC configuration? Or is it actually free from constraints? In an optimal situation AMD would make GPUs with 80 CUs, 60 CUs, 40 CUs. Or any other multiples, depending on what RDNA2 is designed for. Having an 80 CU GPU cut down to 64 makes it more expensive than directly making a 64 CU GPU. (And potentially more power hungry, as some unused parts of that 80 CU die may still be powered to some degree.) I do expect CU counts to come in direct multiples of 8. So I would guess 80 CUs with a 72 CU cut, then 64 CUs with a 56 CU cut, and finally 40 CUs with a 32 CU cut, which can come later as the RX 5700 XT still works as a placeholder. (The RX 5700 is already gone from most shops, and there are like 5 models of RX 5700 XT which have larger supply.) Considering the expected clock and functionality uplift, even 32 CUs should be better than the current RDNA1 40 CUs. AMD definitely prepared for RDNA1 not being very popular upon release of RDNA2, and its production capacity has been free for RDNA2/consoles for some time now.
Right now the 3080 is too powerful for 1080p/1440p, that is correct, but will you say the same thing in 3 years? We can surely expect game requirements to go up drastically in the years following the release of the new consoles, same as it was with the PS3 launch.
DeskStar:

8K textures would say otherwise about how much RAM is needed and at what resolution.
8K textures are pointless unless you're playing at 8K or above, and no one with one of these cards is going to be playing games at 8K, unless the game is old or has very low GPU requirements, a 2D game perhaps, in which case it won't use anywhere near even 8GB. Sure, more is nice, but the extra cost is not worth it for most people, who won't be gaming past 4K. It is certainly not worth having more VRAM at the expense of bandwidth; less but faster is preferable. Looking at Guru3D game reviews and VRAM usage, the only one that uses more than 8GB is Flight Simulator, but then it does have the entire world as a texture cache.
The constant issue is that a 3080 or similar performer is just barely enough for 1080p ray-traced games, but grossly overkill for standard raster-only games at that resolution. We are at a bizarre point in time for GPUs.
kapu:

I'd bet a few dollars it won't be a limiting factor. AMD is far from stupid, and 4K is still not the most popular resolution, far from it. 1080p/1440p is king and will be for a few years at least, and I can bet $1000 that AMD will eat the 3080 at lower resolutions, where the Ampere architecture struggles (it can even lose to the 2080 Ti).
4K is not the most used resolution among the plebs, but quite a lot of users on Guru3D use it, myself included. It's a fact that bandwidth matters more the higher the resolution you use, and it was also one of the primary reasons AMD used to scale better at higher resolutions than Nvidia, as they used to use 512-bit buses on their big GPUs.
Yeah, I'm only interested in how the 6900XT performs at 4K, and I'm not bothered at all with ray tracing. The 3080 doesn't have enough VRAM for my liking; not enough to be comfortable a few years down the road from now, anyway. So, step up, AMD: 16GB is enough to be very comfortable with, but the memory bandwidth appears to be on the low side, at least without factoring in the additional cache. I don't believe AMD would put out a card that would be limited at 4K, though. I'm more concerned about the pricing AMD decides on, and I expect a bit of shock/disappointment on the 28th in that regard.
I think an 80 CU card with 256-bit GDDR6 must be bandwidth starved, unless AMD has worked out some freakishly good bandwidth-saving trick. I am guessing the extra CUs will be used for ray tracing, and that is where their new cache system comes into effect: normal raster operations are memory-streaming intensive, so the cache may not help much there, but ray tracing calculations are not as streaming-heavy and will benefit hugely from a local cache, and it may even help some other post-processing like AA. An 80 CU part with HBM would be something to behold, though I am assuming those will be Pro products.
Even if the flagship is slightly slower than a 3080, it still has more VRAM, and presumably it will actually be purchasable...
kapu:

I'd bet a few dollars it won't be a limiting factor. AMD is far from stupid, and 4K is still not the most popular resolution, far from it. 1080p/1440p is king and will be for a few years at least, and I can bet $1000 that AMD will eat the 3080 at lower resolutions, where the Ampere architecture struggles (it can even lose to the 2080 Ti).
There is no way anyone is buying a $600+ GPU in 2021 to play at 1080p, or even 1440p. If they do, they need to upgrade their peripherals. Whatever is out in this range needs to do well at 4K.