Tensor Core Equivalent Likely to Get Embedded in AMD RDNA3

Right, I've read through that and it's all Greek to me. In a nutshell: will games look better but with the same or improved framerates?
pegasus1:

Right, I've read through that and it's all Greek to me. In a nutshell: will games look better but with the same or improved framerates?
Games will look the same (or slightly worse) but at higher frame rates. At least that's what I understood from what Nvidia is doing.
ROCm isn't used in games at all. It is used mainly with Instinct cards on Linux.
I guess they could offload operations like FSR calculations to them, or they could be doing ray tracing with them.
A glitch in my brain made me read the title as "Tenor Core equivalent ..."
I guess these are AMD's AI cores. Pretty useful stuff, if you ask me.
Shoulda been here 2 gens ago. Zero excuses.
TimmyP:

Shoulda been here 2 gens ago. Zero excuses.
You say that as though it only takes a few months to develop such a thing... Yes, it's disappointing that AMD is a couple of years late, but unless they had spies at Nvidia and knew what their tensor cores were going to do and how they would function before they were announced, AMD was always going to be late. Notice how whenever Nvidia releases any new technology, AMD's response is almost always a couple of years late, and often not fully polished. That's how long it takes for them to scramble and put together something they realized they're going to need. Of course, it's made more complicated by Nvidia keeping everything on their end proprietary, so AMD has to reinvent the wheel (often as open source, or at least platform-agnostic).
schmidtbag:

You say that as though it only takes a few months to develop such a thing... Yes, it's disappointing that AMD is a couple of years late, but unless they had spies at Nvidia and knew what their tensor cores were going to do and how they would function before they were announced, AMD was always going to be late. Notice how whenever Nvidia releases any new technology, AMD's response is almost always a couple of years late, and often not fully polished. That's how long it takes for them to scramble and put together something they realized they're going to need. Of course, it's made more complicated by Nvidia keeping everything on their end proprietary, so AMD has to reinvent the wheel (often as open source, or at least platform-agnostic).
It's not only that. Simply because Nvidia introduces something costly (and it is costly, needing a good chunk of silicon and making the chip larger), that doesn't mean it will be practical, developer/gamer approved, and a commercial success. Of course Nvidia has lots of muscle to advertise their new tech and to sponsor/support studios in using it, but it's still a risk nonetheless. So not only does AMD need to spend time and effort to "copy" it, they also need to ponder whether it's something that's going to fly, that is, become necessary in the future. Sometimes stuff works, sometimes it doesn't, and sometimes it only works partially or in other forms.

For example, G-Sync never became an astronomical success because people weren't happy about paying a high premium for the screens. In the end Nvidia had to backpedal and basically adopt the open adaptive sync format AMD was concentrating on. If AMD had tried to copy Nvidia with a proprietary system, I don't think it would have ended well for AMD, but it wouldn't have been much more of a success for Nvidia either. As it was, AMD had a 20% discrete graphics card market share, yet FreeSync screens were probably five times more common than G-Sync ones. Quite twisted. AMD made a similar miscalculation with HBM, which wasn't a great success in gaming, but Nvidia immediately adopted it in server cards, where it's been a grand success.
TimmyP:

Shoulda been here 2 gens ago. Zero excuses.
How about one or two?
Kaarme:

It's not only that. Simply because Nvidia introduces something costly (and it is costly, needing a good chunk of silicon and making the chip larger), that doesn't mean it will be practical, developer/gamer approved, and a commercial success. Of course Nvidia has lots of muscle to advertise their new tech and to sponsor/support studios in using it, but it's still a risk nonetheless. So not only does AMD need to spend time and effort to "copy" it, they also need to ponder whether it's something that's going to fly, that is, become necessary in the future. Sometimes stuff works, sometimes it doesn't, and sometimes it only works partially or in other forms.
It's definitely this. Nvidia has the capital to invest in projects that may not have an immediate return, and the mindshare/market share to push them to customers and developers. The only time AMD does it is with consoles, where they can leverage their position with Microsoft/Sony - which is actually a really smart move by AMD.

Also keep in mind that Google's TPU came out in 2015, so it's not as if dedicated hardware acceleration for these kinds of workloads is something AMD only learned about once Volta came out. Multiple companies were using tensor hardware before Nvidia.
TimmyP:

Shoulda been here 2 gens ago. Zero excuses.
Money is the excuse, and it's a good excuse. AMD didn't have the R&D budget they have now. CPUs are the core of their business; graphics is a subsection. They focused all their efforts on their CPU architecture first, and now their graphics division is getting some much-needed attention. Nvidia has the pleasure of only being a graphics company, where all their efforts and money get poured into (ultimately) the same thing, whether that is gaming, server, cloud, mobile, etc. It's all ultimately based on the same architectures and then trickles down.
CPC_RedDawn:

Nvidia has the pleasure of only being a graphics company, where all their efforts and money get poured into (ultimately) the same thing, whether that is gaming, server, cloud, mobile, etc. It's all ultimately based on the same architectures and then trickles down.
Nvidia is first and foremost a graphics company, but they're not limited to graphics. Nvidia makes a killing on parallel compute and AI (GPGPU via CUDA, in addition to the tensor cores), where those "GPUs" don't even have display connectors. They also seem to be doing well in the automotive industry. Don't forget their acquisition of Mellanox, through which Nvidia is heavily invested in high-speed networking; it shows when you consider those beefy interconnects on their high-end server GPUs. For a little while they were doing OK with their Tegra series in tablets and some gaming platforms (including the Switch), but Tegra mostly uses the ARM cores as a user-friendly medium to access the GPU.
AMD R&D knew. They've known since Volta; that's why CDNA2/Instinct has matrix cores. If you want to claim financials, go ahead, you're not wrong, but they didn't pass any savings along to the consumer. Neither did Nvidia, who simply had the additional hardware at the "same cost."
It's not only about the hardware; you have to have the resources (money and people) to use it in the first place. AMD's software team was the problem, IMO: they don't have enough qualified people to do drivers and develop AI-based techniques at the same time.
schmidtbag:

Nvidia is first and foremost a graphics company, but they're not limited to graphics. Nvidia makes a killing on parallel compute and AI (GPGPU via CUDA, in addition to the tensor cores), where those "GPUs" don't even have display connectors. They also seem to be doing well in the automotive industry. Don't forget their acquisition of Mellanox, through which Nvidia is heavily invested in high-speed networking; it shows when you consider those beefy interconnects on their high-end server GPUs. For a little while they were doing OK with their Tegra series in tablets and some gaming platforms (including the Switch), but Tegra mostly uses the ARM cores as a user-friendly medium to access the GPU.
All their products still ultimately use the same architectures from top to bottom, regardless of connectors and the physical appearance of the chips, cards, or racks. Ampere is used in everything from the A100 down to the RTX 3050, with them dipping into older architectures for other products (Tegra, etc.). They even use Epyc processors in their A100 racks. I'm not saying this is a bad thing; it gives them a more focused outlook on their business and allows them to go outside the box and invest R&D in things beyond standard raster compute, such as RT cores, Tensor cores, AI, and so on. But everything always comes back to graphics and compute. AMD and Intel don't have this "benefit", depending on how you look at it.
The way I understand it, tensors are mathematical objects used widely in general mathematics, not something nVidia invented. nVidia's "tensor core" is merely hardware built to perform certain tensor (matrix) operations--transistor arrays that do those calculations, which nVidia has termed "tensor cores." Some think it's a custom kind of GPU core that only nVidia can make, and that's exactly what nVidia marketing wants you to think. It's like AMD's "ray trace" cores--a general name for a particular math function the hardware performs. https://en.wikipedia.org/wiki/Tensor

nVidia's particular implementation is unique to nVidia, but anyone can design and manufacture a circuit array that does tensor math. nVidia has no trademark on "tensor" or on tensor math.
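For what it's worth, the core operation being described is a small fused matrix multiply-accumulate, D = A x B + C, on tiny tiles (Volta-era tensor cores, for example, work on 4x4 FP16 tiles with FP32 accumulation). A minimal NumPy sketch of that arithmetic, purely as an illustration of the math rather than of any vendor's actual hardware:

import numpy as np

# The fused multiply-accumulate a tensor-style unit performs in one step:
# D = A x B + C on a small tile. The tile size and dtypes mirror the
# Volta-era case (FP16 inputs, FP32 accumulator); this is just NumPy doing
# the same math on the CPU.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)

A full GEMM or convolution is just this tile operation repeated over and over across the whole matrix, which is also why such units end up being so dense in multiply-adders.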
Into the GPU Chiplet Era: An Interview With AMD's Sam Naffziger (June 24, 2022):
We asked whether AMD would include some form of tensor core or matrix core in the architecture, similar to what both Nvidia and Intel are doing with their GPUs. He responded that the split between RDNA and CDNA means stuffing a bunch of specialized matrix cores into consumer graphics products really isn't necessary for the target market, plus the FP16 support that already exists in previous RDNA architectures should prove sufficient for inference-type workloads. We'll see if that proves correct going forward, but AMD seems content to leave the machine learning to its CDNA chips.
So it looks like DP4a might be enough for AMD's consumer graphics, albeit slower than using tensor or matrix cores.
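For reference, DP4a isn't a matrix unit at all but a packed-math instruction: each 32-bit operand is treated as four int8 values, the pairs are multiplied, and the four products are added into a 32-bit accumulator. A rough NumPy sketch of what one such instruction computes (plain CPU math standing in for the GPU instruction):

import numpy as np

# What a single DP4a-style instruction computes: a 4-element int8 dot
# product accumulated into int32. An int8 inference layer is built from
# enormous numbers of these, which is why it can stand in (more slowly)
# for dedicated matrix cores.
a = np.array([12, -7, 33, 5], dtype=np.int8)
b = np.array([-4, 19, 2, 8], dtype=np.int8)
acc = np.int32(1000)  # running 32-bit accumulator

result = acc + np.dot(a.astype(np.int32), b.astype(np.int32))
print(result)  # 1000 + (12*-4 + -7*19 + 33*2 + 5*8) = 925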
waltc3:

The way I understand it, tensors are mathematical objects used widely in general mathematics, not something nVidia invented. nVidia's "tensor core" is merely hardware built to perform certain tensor (matrix) operations--transistor arrays that do those calculations, which nVidia has termed "tensor cores." Some think it's a custom kind of GPU core that only nVidia can make, and that's exactly what nVidia marketing wants you to think. It's like AMD's "ray trace" cores--a general name for a particular math function the hardware performs. https://en.wikipedia.org/wiki/Tensor

nVidia's particular implementation is unique to nVidia, but anyone can design and manufacture a circuit array that does tensor math. nVidia has no trademark on "tensor" or on tensor math.
There are certainly patents that have been filed for tensor core operations. Both Nvidia and Google (and others) hold patents covering the specific tensor functionality they've researched.
A tensor core is just a unit that does matrix-to-matrix multiplication and addition. The issue is doing it so many times, for every element of every matrix; that is very computationally and memory intensive. Some time ago I asked a dev on another forum to use Nsight to measure tensor utilization while using DLSS 2.x. He booted Cyberpunk and reported that utilization was around 50% on a 3090 at 4K. So for the frame rate and resolution he was getting, it was using around 140 TOPS. I expect that for lower resolutions and lower frame rates, the number of TOPS needed for DLSS would be lower.
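Back-of-envelope, those numbers are at least self-consistent: 140 TOPS at roughly 50% utilization implies a peak of about 280 TOPS for the card's tensor units (derived from the quoted figures, not taken from a spec sheet). And if the cost scales roughly with output pixels per second - an assumption, since DLSS cost also depends on the internal render resolution - lower resolutions would need proportionally fewer TOPS. A quick sketch of that arithmetic:

# Back-of-envelope check using only the figures quoted above; the peak is
# derived from them, not read off a spec sheet.
utilization = 0.50        # reported tensor utilization (3090, 4K, DLSS 2.x)
tops_used = 140           # TOPS figure quoted for that scenario

implied_peak = tops_used / utilization
print(f"implied peak: ~{implied_peak:.0f} TOPS")   # ~280 TOPS

# Assuming cost scales with output pixel count (a rough assumption):
pixels_4k = 3840 * 2160
for name, w, h in [("4K", 3840, 2160), ("1440p", 2560, 1440), ("1080p", 1920, 1080)]:
    print(f"{name}: ~{tops_used * (w * h) / pixels_4k:.0f} TOPS")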