Is this the AMD Radeon 7900?

https://forums.guru3d.com/data/avatars/m/294/294076.jpg
k3vst3r:

Well, if it's 2x raster over the previous gen and a 2.4x increase in shaders over the 6950 XT, it could well equal or beat the 4090 at fewer watts.
Double in Raster, equal in RT! 😀
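As a quick sanity check on that shader claim, taking the RX 6950 XT's 5120 stream processors as the baseline (my assumption; the post doesn't give a figure):

\[
5120 \times 2.4 = 12288
\]

which lines up with the 12288 stream-processor figure circulating in the Navi 31 rumors (counting the dual-issue ALUs).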
https://forums.guru3d.com/data/avatars/m/132/132158.jpg
Based on some Nvidia papers I've read about new ray tracing techniques, I don't think AMD is gonna be on par in this scenario.
https://forums.guru3d.com/data/avatars/m/181/181063.jpg
It looks... old and sad... but "there is no accounting for taste", so others may like it. I hope that it will not be a disappointment like their other GPUs (Vega, Polaris, etc.).
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
These power connectors tell me something interesting about AMD's strategy: First, I think AMD knows that Intel will inevitably be undercutting their low-end and mid-range products. For now, AMD has the advantage of offering better stability and performance-per-watt than Intel in those segments, but Intel's value will only improve as drivers stabilize. Second, assuming this is a 7900 (or any variant of it), the fact there are only 2x 8-pins suggests AMD has no intention of competing with the 4090, let alone a 4090 Ti. I could see how the 6900 XT maybe didn't sell that well since a 3090 made more sense for the price. Add to that the fact that the 4090 is a better value than the 3090, and AMD probably realizes they cannot compete in the high-end enthusiast market, especially if RDNA3 didn't do enough to improve raytracing performance. AMD is also probably well aware that people are not willing to shell out four figures for a very power-hungry GPU. All of this puts AMD in a pretty narrow market, but an untapped one: upper-mainstream GPUs with high efficiency for a reasonable price. For me personally, this is great because that's what I'm looking for.
barbacot:

I hope that it will not be a disappointment like their other GPUs (Vega, Polaris, etc.).
I doubt it will be. The only problem with RDNA2 was sub-par raytracing performance. Some would argue not having DLSS is a demerit too; I personally wouldn't, but I get it. So, as long as RDNA 3 has better DXR performance and FSR gets another update with better image quality, this could be a real winner.
https://forums.guru3d.com/data/avatars/m/181/181063.jpg
schmidtbag:

I doubt it will be. The only problem with RDNA2 was sub-par raytracing performance. Some would argue not having DLSS is a demerit too; I personally wouldn't, but I get it. So, as long as RDNA 3 has better DXR performance and FSR gets another update with better image quality, this could be a real winner.
If the price and availability are right, then yes. I really hope that Nvidia will feel some real competition, because they need it. It takes something really special to beat the absurd frame rates of the 4090, but you are right: if the raytracing is at least at RTX 30-series level, with a major increase in rasterization performance versus the old generation, at the right price and with good availability, Nvidia will feel the heat and we will win.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
barbacot:

If the price and availability are right, then yes. I really hope that Nvidia will feel some real competition, because they need it. It takes something really special to beat the absurd frame rates of the 4090, but you are right: if the raytracing is at least at RTX 30-series level, with a major increase in rasterization performance versus the old generation, at the right price and with good availability, Nvidia will feel the heat and we will win.
Yes, price is a big takeaway here. I think AMD got a little bit of a blow to their ego when they found out gamers would rather spend a little extra with Nvidia and get better DXR performance, and like I said, Intel will be slowly nibbling away at the market AMD tends to do better in, so they're going to have to lower prices. The unsettling part is, trends suggest they won't.
https://forums.guru3d.com/data/avatars/m/246/246564.jpg
I have zero faith in AMD competing on price. I'll gladly be proven wrong, though.
https://forums.guru3d.com/data/avatars/m/186/186805.jpg
Nekrosleezer:

Based on some Nvidia papers I've read about new ray tracing techniques, I don't think AMD is gonna be on par in this scenario.
You don't think AMD also has tricks up their sleeve with RT? Haven't you read the Sony white paper on using FP16 to accelerate RT calculations? In theory this would lower RT quality a little, but the result would then be passed through a filter to smooth it out. Sort of like how FSR and DLSS render at a lower res and then upscale with AI/filters/temporal data. Sony is pushing for something similar with RT. I wouldn't be surprised if AMD showcases a new software feature very similar to this. Sort of like how they introduced a tessellation setting that could limit or completely turn off the feature at the driver level. Could this be easily implemented, with the driver detecting BVH instructions, recognising them as RT calculations and applying the tweaks automatically, where you have the choice of different quality settings, each with their corresponding performance improvements?
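To make the FP16 idea a bit more concrete, here is a minimal sketch of my own (illustrative only; the numbers are made up and this is not taken from the Sony paper, AMD's hardware, or any actual driver): a ray-vs-AABB slab test, the basic operation of BVH traversal, run once in FP32 and once in FP16 with NumPy. The coarser half-precision t-values can flip a borderline miss into a hit, which is exactly the kind of error a follow-up filtering/denoising pass would have to hide.

import numpy as np

def slab_test(ray_o, ray_inv_d, box_min, box_max, dtype):
    # Ray-vs-AABB "slab" test at a chosen float precision.
    # ray_o: ray origin, ray_inv_d: 1/direction, box_min/box_max: AABB corners.
    o    = ray_o.astype(dtype)
    inv  = ray_inv_d.astype(dtype)
    t0   = (box_min.astype(dtype) - o) * inv   # distances to the "near" planes
    t1   = (box_max.astype(dtype) - o) * inv   # distances to the "far" planes
    tmin = np.minimum(t0, t1).max()            # latest entry across the three slabs
    tmax = np.maximum(t0, t1).min()            # earliest exit across the three slabs
    return bool(tmax >= max(tmin, 0))          # hit if the interval is non-empty

# A deliberately borderline case: a shallow ray grazing a distant, thin box.
ray_o     = np.array([0.0, 0.0, 0.0])
ray_d     = np.array([1.0, 0.001, 0.0001])
ray_inv_d = 1.0 / ray_d
box_min   = np.array([2000.0, 2.1005, -1.0])
box_max   = np.array([2100.0, 2.2,     1.0])

# FP32 resolves the small gap and reports a miss; FP16's ~0.05% precision rounds
# the entry/exit distances onto each other and reports a (false) hit.
print("FP32 hit:", slab_test(ray_o, ray_inv_d, box_min, box_max, np.float32))
print("FP16 hit:", slab_test(ray_o, ray_inv_d, box_min, box_max, np.float16))

Whether any hardware actually does traversal in half precision this way is my assumption here; the point is just the trade the post describes: cheaper math, slightly wrong visibility, cleaned up afterwards by a filter.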
https://forums.guru3d.com/data/avatars/m/216/216349.jpg
schmidtbag:

These power connectors tell me something interesting about AMD's strategy: First, I think AMD knows that Intel will inevitably be undercutting their low-end and mid-range products. For now, AMD has the advantage of offering better stability and performance-per-watt than Intel in those segments, but Intel's value will only improve as drivers stabilize. Second, assuming this is a 7900 (or any variant of it), the fact there are only 2x 8-pins suggests AMD has no intention of competing with the 4090, let alone a 4090 Ti. I could see how the 6900 XT maybe didn't sell that well since a 3090 made more sense for the price. Add to that the fact that the 4090 is a better value than the 3090, and AMD probably realizes they cannot compete in the high-end enthusiast market, especially if RDNA3 didn't do enough to improve raytracing performance. AMD is also probably well aware that people are not willing to shell out four figures for a very power-hungry GPU. All of this puts AMD in a pretty narrow market, but an untapped one: upper-mainstream GPUs with high efficiency for a reasonable price. For me personally, this is great because that's what I'm looking for.
Intel still has a very long way to go before it can take the low-end market. For me the question is how serious AMD is about the lower end of the spectrum, and the same goes for Nvidia. As for the 2x 8-pins, I think that's just a signal that AMD didn't want to go overboard like Nvidia regarding the power profile. This could also be a way to keep costs down on a reference card, and also a way of giving their partners more room to differentiate their custom cards against the reference design. As for competing directly against Nvidia, I think they can do it, even if they perform a little worse. The problem is that Nvidia invested heavily in RT and DLSS, and those seem to be important for many buyers even if, in the end, they almost never need or use those features... As for me, I just want AMD to sell their cards at normal prices, but I'm not holding my breath...
https://forums.guru3d.com/data/avatars/m/260/260103.jpg
mackintosh:

I have zero faith in AMD competing on price. I'll gladly be proven wrong, though.
I think the Ryzen 7000 series proved that.
https://forums.guru3d.com/data/avatars/m/278/278874.jpg
wavetrex:

It would be funny if it beats the 4090 (in raster) with just 320W ;-) Even if it doesn't beat it, being close enough (5%) would still make it a massive efficiency win! Thursday can't come fast enough...
It can go up to 375 W (2x 150 W from the 8-pins + 75 W from the PCIe slot).
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
CPC_RedDawn:

You don't think AMD also has tricks up their sleeve with RT? Haven't you read the Sony white paper on using FP16 to accelerate RT calculations? In theory this would lower RT quality a little, but the result would then be passed through a filter to smooth it out. Sort of like how FSR and DLSS render at a lower res and then upscale with AI/filters/temporal data. Sony is pushing for something similar with RT. I wouldn't be surprised if AMD showcases a new software feature very similar to this. Sort of like how they introduced a tessellation setting that could limit or completely turn off the feature at the driver level. Could this be easily implemented, with the driver detecting BVH instructions, recognising them as RT calculations and applying the tweaks automatically, where you have the choice of different quality settings, each with their corresponding performance improvements?
I'll believe it when I see it. I could see that working well for secondary lighting effects (which IMO are some of the most important), since those never really need high precision. For shadows and reflections, I would hope they have an option for FP32 calculations, or else there might be too much quality loss. So far, AMD's approach seems to be just throwing more cores at the problem, which I'm not entirely against since I'd rather have many general-purpose cores than a variety of specialized cores, but it also isn't working very well for them. Right now, AMD is basically just ticking a box to say they have DXR support, but it's still effectively useless/undesirable. Nvidia is at a point where it's only worthwhile on high-end GPUs. I believe AMD's/Sony's FP16 method will still be insufficient for mid-range, but perhaps it'll be okay for mainstream if all you're expecting is the "cinematic" experience. In any case, though I do feel real-time raytracing is a big deal and a welcome change to the gaming industry, it's not a feature I'm going to be pursuing for several years. Like tessellation, it will take many years for it to be refined.
https://forums.guru3d.com/data/avatars/m/186/186805.jpg
schmidtbag:

I'll believe it when I see it. I could see that working well for secondary lighting effects (which IMO are some of the most important), since those never really need high precision. For shadows and reflections, I would hope they have an option for FP32 calculations, or else there might be too much quality loss. So far, AMD's approach seems to be just throwing more cores at the problem, which I'm not entirely against since I'd rather have many general-purpose cores than a variety of specialized cores, but it also isn't working very well for them. Right now, AMD is basically just ticking a box to say they have DXR support, but it's still effectively useless/undesirable. Nvidia is at a point where it's only worthwhile on high-end GPUs. I believe AMD's/Sony's FP16 method will still be insufficient for mid-range, but perhaps it'll be okay for mainstream if all you're expecting is the "cinematic" experience. In any case, though I do feel real-time raytracing is a big deal and a welcome change to the gaming industry, it's not a feature I'm going to be pursuing for several years. Like tessellation, it will take many years for it to be refined.
I agree, I too will believe it when I see it. It was just me reading the Sony thing and thinking "well, they are using AMD hardware, so maybe there could be some collab there", and including a quality option at the driver level would help, like it did with tessellation. As for the "throwing more cores at the problem", I don't think that is really accurate of what AMD is doing. For all intents and purposes, RT cores were basically tacked onto RDNA2. The cores didn't even handle most of the BVH work, and a lot of the lifting fell onto the CUs, hence the bad performance with certain effects enabled. It was, like you said, "ticking the DXR box". Now with RDNA3 I think RT will be more of a focus point for them. Easily doubling the RT cores (2 per CU instead of 1) and moving the entire BVH workload over to them would help free up shader workload in the CUs. Then factor in the node shrink, clock speed increases, architecture improvements, and MAYBE this new driver-level RT optimisation (like tessellation) and FSR 2.1 (or above) helping out, and it isn't really outside the realm of possibility that AMD could 3x-4x their RT performance this gen whilst more than doubling their shader performance. Will this be enough to take on RTX 40 in RT? More than likely not. But it will surely close the gap a lot. As for shader performance, I expect AMD to at the very least match the 4090, with both trading blows here and there. This could be wishful thinking, and that may well turn out to be the case. But nobody should wish for a monopoly in any market... Now if only Intel could get their act together 🙂
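For what it's worth, a rough back-of-envelope on that 3x-4x guess, using factors I'm plugging in myself (the CU counts are the 6950 XT's 80 and the 96 rumored for Navi 31; the ~30% per-core gain from clocks and architecture is purely my assumption):

\[
2 \times \frac{96}{80} \times 1.3 \approx 3.1\times
\]

which would land at the lower end of that range, before counting whatever FSR or a driver-level quality knob adds on top.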
https://forums.guru3d.com/data/avatars/m/181/181063.jpg
Well, AMD may not match Nvidia's "beast" 4090, but it doesn't need to. In this economy people care less about bragging rights and more about their wallets. The 4090, for me, is way ahead of its time, and no game or application on the market today can make this monster sweat - maybe gaming at max details with RTX on and no DLSS in 8K, but good luck finding a decent 8K monitor. There are signs of a more budget-friendly card from AMD: first, from the pictures, the cooling solution looks rather dated and not something high-quality and over the top - so less money there - and also the (rumored) use of slower and cheaper GDDR6 memory. On the other hand, those research costs for the MCM design must be recovered from somebody - ah, it's us, the consumers...
https://forums.guru3d.com/data/avatars/m/72/72830.jpg
Predictions: prices will be almost as horrible as Nvidia's, and they haven't caught up with RT performance.
https://forums.guru3d.com/data/avatars/m/243/243133.jpg
AuerX:

Looks generic and dated. Hopefully performance is top notch.
I'm sure it will be. Drivers, however, might be the deal-breaker for me. I love my AMD cards, but lately they have been diabolical on the driver front.
data/avatar/default/avatar03.webp
Hypernaut:

I'm sure it will be. Drivers, however, might be the deal-breaker for me. I love my AMD cards, but lately they have been diabolical on the driver front.
Well, it's an MCM card, so I expect stuff to be broken driver-wise on day 1 until the software catches up with the hardware changes.
https://forums.guru3d.com/data/avatars/m/260/260103.jpg
k3vst3r:

Well, it's an MCM card, so I expect stuff to be broken driver-wise on day 1 until the software catches up with the hardware changes.
I don't think that it will be that bad. Even though it's MCM, it is still only a single Graphics Processing Die.