Review average of 17 websites shows 6800 XT to be 7.4% slower than GeForce RTX 3080

These tests also show 2080 Ti > 3070
I'm missing information on what CPU they tested with: is it Intel or AMD (with SAM enabled)?
SpajdrEX:

I'm missing information on what CPU they tested with: is it Intel or AMD (with SAM enabled)?
You're missing the point. Its purpose is to give the average difference between the GPUs using results from various different configurations. However, I'd hazard a guess that the majority of sites would've used Intel in their test setups. IMHO, SAM is only relevant to new system builders/buyers. For those of us without SAM capability, the article is much more representative of the performance we can expect. Make no mistake though, SAM makes a difference and 2% is 2% in my book.
How do the two compare in price, power consumption, and availability, though?
You're missing the point. Its purpose is to give the average difference between the GPUs using results from various different configurations. However, I'd hazard a guess that the majority of sites would've used Intel in their test setups. IMHO, SAM is only relevant to new system builders/buyers. For those of us without SAM capability, the article is much more representative of the performance we can expect. Make no mistake though, SAM makes a difference and 2% is 2% in my book.
For now though, if NVIDIA adds this support as well and it works the same way (which it should), it's going to be interesting to see how AMD responds. 🙂 ~7% now, then possibly 4-5% with SAM edging the 6800 XT a little closer. Then back to ~7% once NVIDIA has it too, but since NVIDIA would also support Intel and PCIe 3.0, that turns into a 9-10% lead for the 3080 on those systems instead. That might give AMD a reason to just unlock this fully, if it's really a standard feature they're keeping a bit exclusive to help sell hardware, at least the way I understood how this works.
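A minimal back-of-the-envelope sketch of those scenarios in Python; the ~7.4% baseline gap comes from the article, while the ~2% resizable-BAR uplift is just the rough figure floated in this thread, not a measured value:

```python
# Back-of-the-envelope sketch of the scenarios described above.
# The ~7.4% baseline gap is the review average; the ~2% resizable-BAR
# uplift is the rough figure quoted in this thread, not a measurement.

BASELINE_GAP = 0.074   # review average: 6800 XT trails the 3080 by ~7.4%
BAR_UPLIFT   = 0.02    # rough gain attributed to SAM / resizable BAR here

def gap(amd: float, nvidia: float) -> float:
    """How far the AMD card trails the NVIDIA card, as a fraction."""
    return 1.0 - amd / nvidia

rtx_3080  = 1.0
rx_6800xt = rtx_3080 * (1.0 - BASELINE_GAP)

scenarios = {
    "stock (review average)":
        gap(rx_6800xt, rtx_3080),
    "6800 XT with SAM, 3080 without":
        gap(rx_6800xt * (1 + BAR_UPLIFT), rtx_3080),
    "both cards with resizable BAR":
        gap(rx_6800xt * (1 + BAR_UPLIFT), rtx_3080 * (1 + BAR_UPLIFT)),
    "Intel / PCIe 3.0 box: only the 3080 gets resizable BAR":
        gap(rx_6800xt, rtx_3080 * (1 + BAR_UPLIFT)),
}

for name, g in scenarios.items():
    print(f"{name}: 6800 XT trails by {g:.1%}")
```

With those assumed numbers the printout lands at roughly 7.4%, 5.5%, 7.4%, and 9.2%, which is the same shape of argument the post above is making.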
Let's just hope that we get some "fine wine" treatment with the 6000 series GPUs, like we did with the HD 7000 series of cards, which aged so damn well. AMD do seem a lot more focused these days, and with my 3900X CPU I have already seen this "fine wine" improvement over time with new BIOS/AGESA updates. The latest AGESA V2 PI 1.1.0.0 Patch C has brought some really, really good improvements to the way the chip boosts. Before, I would see 4.65GHz for a nanosecond and the chip would stay around 4.2GHz at stock with PBO enabled. Now, with AGESA V2 PI 1.1.0.0 Patch C and the same settings, I am seeing 4.35GHz across all cores and it boosts a LOT to 4.65GHz during bursty loads. We need to be seeing the same kind of improvements on the GPU side as well. I have my links bookmarked and shall be checking them every day to see if I can snag a 6800 XT, but I would love an Asus TUF 6800 XT; the 3080 TUF looks so good and I love the blackout theme on it. It would suit my build very well. If anyone is interested, OCUK has 3090s in stock (Zotac and another brand), over 20 in stock, if that's the card for you.
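If you want to check that kind of sustained-versus-burst boost behaviour yourself, here is a minimal sketch, assuming a Linux box with the cpufreq sysfs interface exposed (on Windows you'd read the same numbers from HWiNFO or Ryzen Master instead): it samples per-core clocks for a few seconds and reports the typical all-core figure against the highest burst seen.

```python
import glob
import time

# Sample per-core clocks from the cpufreq sysfs interface (values are in kHz).
FREQ_PATHS = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))

def read_clocks_mhz() -> list[float]:
    clocks = []
    for path in FREQ_PATHS:
        with open(path) as f:
            clocks.append(int(f.read()) / 1000.0)  # kHz -> MHz
    return clocks

peak = 0.0
all_core_samples = []
for _ in range(50):                       # ~5 seconds at 100 ms per sample
    clocks = read_clocks_mhz()
    peak = max(peak, max(clocks))         # best single-core burst seen
    all_core_samples.append(min(clocks))  # slowest core ~= sustained all-core clock
    time.sleep(0.1)

print(f"highest burst clock seen : {peak:.0f} MHz")
print(f"typical all-core clock   : {sum(all_core_samples) / len(all_core_samples):.0f} MHz")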
The 5700 XT was slower than the 2070 at first; a few months later it was competing with the 2070 Super. It will be the same this time: AMD needs some nice driver improvements and optimizations and it will come out on top. 16GB of VRAM also comes in handy in the long run. The most interesting of all will be the super resolution feature, so we can compare the quality and performance vs NVIDIA's DLSS.
So it's a tiny bit slower for a theoretically ~8% lower price, with 60% more memory and significantly lower power consumption. FineWine will most likely put it ahead a year into the future. This sounds like a win to me... assuming there will be any stock. But I think the real winners will be the partner cards. Already noticing ridiculous clocks out there, 2500+ MHz.
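For reference, a quick sanity check of those ratios, using the US launch MSRPs ($649 for the 6800 XT, $699 for the 3080) and the reference memory configurations; street prices obviously differ:

```python
# Quick sanity check of the price and memory deltas quoted above,
# using US launch MSRPs and the reference memory configurations.
RX_6800XT = {"price_usd": 649, "vram_gb": 16}
RTX_3080  = {"price_usd": 699, "vram_gb": 10}

price_delta = 1 - RX_6800XT["price_usd"] / RTX_3080["price_usd"]
vram_delta  = RX_6800XT["vram_gb"] / RTX_3080["vram_gb"] - 1

print(f"6800 XT MSRP is {price_delta:.1%} lower than the 3080's")  # ~7.2%
print(f"6800 XT carries {vram_delta:.0%} more VRAM")               # 60%
```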
Nvidia needs some nice driver improvements and optimizations and it will stay on top. SAM support also comes in handy in the long run. The most interesting of all will be the DLSS 3.0 feature, so we can compare the quality and performance vs the older Nvidia DLSS.
CPC_RedDawn:

If anyone is interested OCUK has 3090's in stock (zotac and another brand) over 20+ in stock if that's the card for you.
Funny that, I noticed it myself on some sites. Maybe people are finally realizing what a rip-off it is and not bothering with it. Or waiting, as I am, to see the price of the 3080 Ti (which will miss Christmas though 🙁)...
Stormyandcold:

Nvidia needs some nice driver improvements and optimizations and it will stay on top. SAM support also comes in handy in the long run. The most interesting of all will be the DLSS 3.0 feature, so we can compare the quality and performance vs the older Nvidia DLSS.
From what I've seen, Nvidia usually has pretty good and optimised drivers right from the beginning, apart from the occasional initial bug-like problem, which caused the whole capacitor spectacle with the 3000 series. AMD, however, seems to require months to figure out how its own hardware works and to optimise the drivers. I wonder what would have happened if AMD had invested in a wider memory bus. The 128MB miracle cache was supposed to help, but it's precisely at 4K where the 6800 (XT) seems to be lagging. Does the cache fail to deliver at the higher res? I'd like to imagine AMD tested the whole thing somehow during development. Or maybe the reason is elsewhere.
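One plausible reading, as a rough sketch rather than anything from AMD's actual hit-rate data: the working set a frame touches grows roughly with pixel count, so a fixed 128MB cache covers a smaller share of it at 4K than at 1440p. A minimal Python illustration of just the render-target part of that footprint (the buffer count and format here are assumptions for illustration only):

```python
# Rough render-target footprint per frame at common resolutions.
# This ignores textures, geometry and compression, so it only illustrates
# how the working set scales with pixel count, not actual cache hit rates.

INFINITY_CACHE_MB = 128

def render_target_mb(width: int, height: int, bytes_per_pixel: int = 4,
                     buffers: int = 6) -> float:
    """Approximate size of a handful of full-screen buffers (G-buffer, depth, etc.)."""
    return width * height * bytes_per_pixel * buffers / 1024**2

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    mb = render_target_mb(w, h)
    print(f"{name:>5}: ~{mb:5.0f} MB of render targets "
          f"({mb / INFINITY_CACHE_MB:.1f}x the 128 MB cache)")
```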
So AMD even optimizes after "the beginning"? In other words: you are a hardware and driver developer for both Nvidia and AMD, so you can surely present your statements in more detail. Thanks in advance.
Yeah, but that's at 4K. Everyone I know, literally all of them, falls into 1 of 3 categories (in order of how common): 1 - The Plebeian: 1080p/60; 2 - The Ascended: 1440p/144 or 165; 3 - The Madman: 3440x1440/144. I don't know a single person that uses 4K, and if the Steam charts are any indicator, that's how it is worldwide. It seems to me the audience that's likely to buy a 6800 XT or 3080 is going to be using 1440p or ultrawide 1440p, and at those resolutions the 6800 XT is closer to par with or slightly faster than a 3080... in pure rasterization performance. Too bad its lack of DLSS and abysmal RT performance makes that moot.
Fox2232:

Then someone can come and say: the 6800 XT is 7.2% cheaper and draws 12% less power on the reference design. One had better not look at AIB cards, though. Other differences don't even need to be mentioned, like the performance balance at 1080p vs 1440p vs 4K. The most relevant are the 1440p results. 4K is almost irrelevant, and 1080p too, as very few people will pair a 6800 (XT) with a 1080p screen.
Yeah, RTX cards do 4K faster, but how's half the VRAM going to work out in a year or two? I'm almost certain even the 6800 will outpace them when newer games show up asking for more VRAM; Doom Eternal is already doing it. Were it not for the huge VRAM asterisk, the 3070/3080 would be a simple buy. NVIDIA loves doing this.
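If you want to see how close a 10GB card actually gets in today's games, here's a minimal monitoring sketch (NVIDIA cards only, since it shells out to nvidia-smi; keep in mind that the VRAM a game allocates is an upper bound on what it strictly needs):

```python
import subprocess
import time

# Poll VRAM usage via nvidia-smi while a game is running (NVIDIA cards only).
# Allocated VRAM is an upper bound, not a strict requirement.
QUERY = ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"]

peak_used = 0
for _ in range(60):                    # sample once a second for a minute
    out = subprocess.check_output(QUERY, text=True).strip()
    used_mb, total_mb = (int(x) for x in out.split(","))
    peak_used = max(peak_used, used_mb)
    time.sleep(1)

print(f"peak VRAM allocated: {peak_used} MiB of {total_mb} MiB")
```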
Neo Cyrus:

Yeah, but that's at 4K. Everyone I know, literally all of them, falls into 1 of 3 categories (in order of how common): 1 - The Plebeian: 1080p/60; 2 - The Ascended: 1440p/144 or 165; 3 - The Madman: 3440x1440/144. I don't know a single person that uses 4K, and if the Steam charts are any indicator, that's how it is worldwide. It seems to me the audience that's likely to buy a 6800 XT or 3080 is going to be using 1440p or ultrawide 1440p, and at those resolutions the 6800 XT is closer to par with or slightly faster than a 3080... in pure rasterization performance. Too bad its lack of DLSS and abysmal RT performance makes that moot.
True, 4K users are more likely to buy a 3090.
I look forward to buying either an NV or AMD card when they go on sale to the general public in 7/2021. Can't wait to buy one then.
And most of those games are prolly Nvidia titles too. I'm not talking about GameWorks and that fuzz, just that a lot of games are not optimized for AMD GPUs as much as for Nvidia, due to market share, fanboyism, and moronic developer issues. As someone who is a 1440p UW user, 4K is useless and trash anyways. 3440x1440 > 3840x2160 any day.
Agonist:

And most of those games are prolly Nvidia titles too. I'm not talking about GameWorks and that fuzz, just that a lot of games are not optimized for AMD GPUs as much as for Nvidia, due to market share, fanboyism, and moronic developer issues. As someone who is a 1440p UW user, 4K is useless and trash anyways. 3440x1440 > 3840x2160 any day.
So the lower resolution is superior... great logic...
Yeah, and having just one boob to deal with is better as well, you only need to use one hand lmao. Yeah right.
I feel like people are jumping the gun on RT perf; why not wait for more games? Sure, NVIDIA had a head start, but look at Dirt 5; something tells me most games coming off the Xbox/Microsoft side will perform closer to that. Sure, Codemasters had AMD watching over their shoulder, but it's not like they completely disregarded NVIDIA's RT; no dev in their right mind would do that (discard 90% of users). If anything, this is a huge opportunity for AMD: NVIDIA has the vast majority of the GPU market, but the RT-capable GPU market is just getting started. If AMD manages to grab a nice chunk of it, devs will have to optimize for it just as much, unlike what happens with their current ~15% market share in normal GPUs that hardly any dev cares about.