AMD Radeon RX 6900XT to feature Navi 21 XTX GPU with 80 CUs

V3RT3X79:

80 CUs and the performance of an RTX 2080 😀
When the price and performance are revealed, it will make your 3090 purchase look like the stupidest decision you ever made.
People see one card beating another. I see a performance line running from the 3060 to the 3090, with many points in between, from AMD and NVIDIA at different price points.
If the 6800 comes in at around $500, it should be a pretty good-selling card.
wavetrex:

It all depends on how well that rumored L3 cache does its job. Either it works great, in which case Navi 2X can be at least 10% faster than the 3080, if not more ... or it doesn't, in which case it will be much slower due to memory bandwidth constraints, and all those CUs go to waste.
That's still assuming the L3 is for dGPUs and not just iGPUs. But let's say it is for dGPUs - traditionally, AMD's GPUs have seemed ridiculously starved for memory bandwidth, to the point that Vega actually warranted HBM2. I get the impression RDNA2 is a little less bandwidth-hungry, but AMD probably realized that the only affordable way to address this problem is to add a cache: widening the memory bus means more memory chips, and there just isn't room for that.
fantaskarsef:

Give us benchmarks, AMD. Please, k thx. Without benchmarks, all there is to do is guess, like we did last week and the week before...
Even if they provided benchmarks, I'd still take them with a grain of salt. AMD has been better about sneak-preview benchmarks but I still don't trust any manufacturer's cherry-picked results.
schmidtbag:

Even if they provided benchmarks, I'd still take them with a grain of salt. AMD has been better about sneak-preview benchmarks but I still don't trust any manufacturer's cherry-picked results.
Absolutely true! But just as with Nvidia's "conditional truths" (when the benchmarks are cherry-picked), I'd still prefer those over the wild speculation that often builds up a hype not even the cherry-picked benches can fully satisfy 😀
mitzi76:

What we need, unless you have already done it Hilbert, is some info re VRAM and the potential pitfalls, especially at higher resolutions. The 16GB vs 10GB thing could be a clever move by AMD; I think it's potentially a key selling point. Personally, if I'm gonna spend loads on a 4K screen etc., I'd want a bit more VRAM, based off some benchmarks I have seen (some were at 9GB, but I didn't see any slowdown).
Higher resolution really doesn't make that much difference to the amount of VRAM used. At 1080p, dual 32-bit frame buffers plus a 32-bit depth buffer use about 24MB of VRAM; the same buffers at 4K use around 100MB. Some games might have more buffers for various things, but the vast majority of VRAM is used for holding textures, which don't necessarily need to be bigger at higher resolutions. There's no reason you can't play 4K on 8GB just fine. Of course it depends on whether the game wants to use way more VRAM, but then the issue is the same at lower resolutions too. 8GB was the sweet spot; I think 10 or 12 will become the next sweet spot. 16 or 20 is just overkill for games, even at 4K or even 8K!
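For what it's worth, a minimal sketch of the buffer math behind those 24MB/100MB figures, assuming two 32-bit colour buffers plus one 32-bit depth buffer and decimal megabytes (the helper function is only an illustration, not anything from the article):
```python
# Rough framebuffer footprint: width * height * bytes_per_pixel * buffer_count
def framebuffer_mb(width, height, buffers=3, bytes_per_pixel=4):
    return width * height * buffers * bytes_per_pixel / 1e6

print(framebuffer_mb(1920, 1080))  # ~24.9 MB at 1080p
print(framebuffer_mb(3840, 2160))  # ~99.5 MB at 4K
```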
lukas_1987_dion:

I think it might be something like:
6900XT beats RTX 3080/20GB
6800XT beats RTX 3070Ti/16GB
6800 beats RTX 3070/8GB
6700XT beats RTX 3060Ti
6700 beats RTX 3060
We'll see xD
I think AMD's cards are going to be a little slower than Nvidia's counterparts, around 10%, but also cheaper, providing a better performance/price ratio. If AMD had cards better than Nvidia's, they would say it as loudly as possible so every possible customer could hear them. My guess of course, I need benchies!!!
kapu:

I'm at 1080p and not planning to change anytime soon, with a 1060 6GB right now. I'm thinking if I double the performance of the 1060, I will be good at 1080p 😀
It's gonna be faster than a 1060 for sure. I went from a 1060 to an RX 5700 and it's like 2 to 3 times faster.
schmidtbag:

AMD's GPUs have seemed ridiculously starved for memory bandwidth, to the point that Vega actually warranted HBM2
Actually, no. HBM (1, 2) was put on to save power, as both the memory itself and the memory controller inside the chip use less energy for a given bandwidth. It's also the reason why professional chips use HBM2 now, as those may run 24/7 and power becomes quite relevant in the long run! The unfortunate side effect is that it made the BOM too expensive and the gaming cards unprofitable. HBM is still the future, once it drops in price enough to make GDDR irrelevant.
wavetrex:

Actually, no. HBM (1, 2) was put on to save power, as both the memory itself and the memory controller inside the chip use less energy for a given bandwidth. It's also the reason why professional chips use HBM2 now, as those may run 24/7 and power becomes quite relevant in the long run! The unfortunate side effect is that it made the BOM too expensive and the gaming cards unprofitable. HBM is still the future, once it drops in price enough to make GDDR irrelevant.
I don't think you understood me: I didn't say HBM was used because of the bandwidth. I'm saying that despite its tremendous bandwidth, the GPUs were able to take advantage of it anyway.
This looks nice from AMD's side, but IMO I have doubts that any of these cards can beat the 3080 in 4K gaming. But I truly hope they can beat the 3000 series, even though I was lucky enough to get a 3080 this year!
EngEd:

This looks nice from AMD's side, but IMO I have doubts that any of these cards can beat the 3080 in 4K gaming. But I truly hope they can beat the 3000 series, even though I was lucky enough to get a 3080 this year!
They don't need to beat the 3080 in 4K gaming. Even if they're 20% slower, they'll still be capable of delivering a good 4K experience.
Theoretically, Navi 21 XTX should be 2x 5700 XT + 30% (15% × 2). That's already in the 3090's ballpark, maybe even faster. Get a life, haters. The only problem will be the low-bandwidth bottleneck for an enthusiast-grade product; I am curious whether AMD can alleviate those performance losses in the end product.
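For reference, a back-of-the-envelope sketch of that scaling claim (the baseline index and the ~30% uplift are the poster's assumptions, not measured figures):
```python
# "2x 5700 XT + 30%" scaling estimate
rx_5700xt = 1.0             # baseline performance index (40 CUs), assumed
navi21_xtx = rx_5700xt * 2  # doubled CUs, assuming near-perfect scaling
navi21_xtx *= 1.30          # plus the rumored ~30% from clocks/architecture
print(navi21_xtx)           # 2.6 -> roughly 2.6x a 5700 XT, if it scaled ideally
```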
yeaaah more vram
Fox2232:

I'll repeat myself for you. 16GB on a 256-bit bus at 16/18Gbps is too slow to move more than 8/9GB of data per frame into and out of VRAM without fps falling under 60. Such a card can't use more than 4/4.5GB per frame if you want 120fps. All that extra VRAM will only be used as a cache, so it may prevent the occasional hitch from bad preloading of resources, but it will not have a positive effect on fps. (And that hitching does not happen on cards with 8GB of VRAM anyway.) I would prefer 10GB on a 320-bit bus @18Gbps over 16GB on a 256-bit bus @18Gbps.

In competitive games which are not using extra-high-resolution textures, AMD's solution may pretty much take the crown, partly thanks to the IC. But in single-player titles using detailed textures, nVidia, with its considerably higher memory bandwidth, is likely to dominate unless the player reduces texture detail a bit. I tend to believe that AMD made a bet on being able to do a lot of processing in the GPU with minimal access to VRAM. But the question is how smart the GPU is in deciding what sits in the IC and what goes back to VRAM.
Ok got it. Ta.
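A quick sketch of the per-frame figures quoted above: raw bandwidth is just bus width (bits) divided by 8 times the transfer rate (Gbps), and dividing by frame rate gives the most data that can cross the bus per frame (the helper function is only an illustration of that arithmetic):
```python
# Per-frame VRAM traffic implied by a given memory bus and frame rate
def gb_per_frame(bus_width_bits, transfer_rate_gbps, fps):
    bandwidth_gb_s = bus_width_bits / 8 * transfer_rate_gbps  # GB/s
    return bandwidth_gb_s / fps

print(gb_per_frame(256, 16, 60))   # ~8.5 GB per frame at 60 fps
print(gb_per_frame(256, 18, 120))  # ~4.8 GB per frame at 120 fps
print(gb_per_frame(320, 18, 60))   # ~12.0 GB per frame for the 320-bit option
```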
Going to be mighty interesting alongside the RTX 3070... I was planning a gaming rig, mainly for gaming only, and I'm really interested in the RTX 3070 or the new RX series. Lol, my four 970s are only for project/tuning work, nothing more. Really looking forward to both the RTX 3070 and the RX series. Better off holding off until we see how the RX performs in general, then making a decision. More interested in both of those atm.
I hope their drivers are as stable and fast as Nvidia's. A fast card is only nice as long as you have a stable experience while gaming. AMD had a lot of compatibility and software issues, and that's why most people were switching from AMD to Nvidia GPUs, purely for stability. I really hope AMD will deliver that in their next-gen GPUs and still be fast as well.
DeskStar:

8K textures would say otherwise about how much VRAM is needed and at what resolution. I honestly think 16GB of VRAM is the sweet spot for most if not all games at or below 4K. I personally game at 5120x1440 120Hz, which lies just below 4K in terms of pixel count, and I have used up all of my 2080 Ti's memory, that is for certain. I do not understand why Nvidia shat out cards with only 10GB of VRAM... but wait, I do. Early adopters who scooped them up got the ability to play first. Nvidia knew what they were doing, so when AMD drops their cards, all Nvidia has to do is drop their 12-20GB variants on the masses. Early adopters this go-round might be a little pissed here soon. I just feel bad for the ones who fell to the scalpers, though. Then again, life is full of choices, to say the least. Oh, and my other system has that 8GB card you were speaking of, and yeah, my 5700XT uses up all of its RAM and then some, especially at anything over 1440p resolution.
6GB can be used fully at 1080p, I know that for sure: 5.5GB in Horizon Zero Dawn (full settings, 1080p).
How can 5120 shaders using GDDR6 beat 8704 shaders using GDDR6X? I highly doubt it'll beat the 3080, but it might get close.