GPU Compute render perf review with 20 GPUs

Wow, you actually managed to find an OpenCL application where NVIDIA is competitive, that's surprising.
Kaarme:

Interesting that AMD cards are really good for mining crypto, but not for boosting rendering speed. Not all GPU compute is the same, clearly.
Of course not all GPU Compute is the same, but not all GPU Compute accelerated rendering is the same either. Check, for example, LuxMark (based on LuxRender), where AMD does just fine.
#RTXOn 😀
Gomez Addams:

As you wrote, Nvidia owns it and no one else is allowed to adopt
This is incorrect. Nvidia has on multiple occasions offered to work with AMD to run CUDA on AMD hardware, and AMD has an in-house tool for converting CUDA applications.
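For context, here is roughly what such a conversion produces - a minimal sketch, assuming the in-house tool referred to is AMD's HIP/hipify toolchain (the post doesn't name it), and using a throwaway SAXPY kernel rather than anything from the review. The point is that the device code survives untouched and only the runtime API prefixes change:

```cpp
// Minimal sketch: a CUDA-style SAXPY program after conversion to HIP.
// The __global__ kernel is unchanged; only the runtime calls are renamed
// (cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy, and so on).
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // same index math as CUDA
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    hipMalloc((void**)&dx, n * sizeof(float));       // was cudaMalloc
    hipMalloc((void**)&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Triple-chevron launches are accepted by hipcc as well.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    hipDeviceSynchronize();

    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);                    // expect 4.0

    hipFree(dx);
    hipFree(dy);
    return 0;
}
```

Built with hipcc this should run on AMD hardware, and the same source can be compiled back against CUDA on Nvidia GPUs, which is the portability argument in a nutshell.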
There is no link to this forum thread from the article, fyi.
Hilbert Hagedoorn:

Unless I am completely overlooking it (and please do correct me if I am wrong), that setting is no longer present in the 2020 drivers.
Hilbert, if you click the cog at the top right and choose Graphics under the first list, you'll see an Advanced drop-down. Click that and it's the second choice from the bottom.
Administrator
Athlonite:

Hilbert, if you click the cog at the top right and choose Graphics under the first list, you'll see an Advanced drop-down. Click that and it's the second choice from the bottom.
No sir, it isn't... it might have become an architecture-dependent setting, though, so I'll look some more with another architecture. This is Navi.
[screenshot attachment: 7685.png]
Hilbert Hagedoorn:

Unless I am completely overlooking it (and please do correct me if I am wrong), that setting is no longer present in the 2020 drivers.
The setting is probably specific to Vega.
Would've been nice to see some CPUs thrown in so we could compare how much faster GPUs are vs CPUs.
I know it's different, but I once transcoded a 3-hour 4K HDR H.264 movie into H.265 with only the CPU and then with NVENC on a 1080 Ti, and the fps pretty much doubled. It would be great to have one "gaming" and one "HEDT" CPU in the charts just to see the giant gulf between GPU and CPU processing.
From those results it appears to me the 2070 Super is the clear winner if you want to render using Blender. It is only 20% slower than the faster 2080 Ti using the OptiX API - and you can get it for 45% of the price! The cheapest 2080 Ti is about £950, the cheapest 2070 Super is £450... the most expensive, fastest 2070 Super is still only £582 FFS! Also, an even closer 2080 Super or 2080 is still only in the £650 area. The 2080 Ti should definitely be faster using OptiX - it has 140% of the 2080 Super's cores but only 112% of its performance. Where did the other 28% go? Against the 2070 Super it has 170% of the cores but is only 120% as fast.
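To put rough numbers on that argument, a back-of-the-envelope sketch. The prices and relative speeds are the figures quoted in the post above (2080 Ti normalised to 1.00, said to be ~112% of the 2080 Super and ~120% of the 2070 Super); the CUDA core counts are the published specs; none of it is measured data from the review:

```cpp
// Back-of-the-envelope price/performance from the figures quoted above.
// rel_speed: relative OptiX render speed with the 2080 Ti = 1.00
//   (1/1.12 ~= 0.89 for the 2080 Super, 1/1.20 ~= 0.83 for the 2070 Super).
// price_gbp: the cheapest UK prices quoted in the post. cuda_cores: published specs.
#include <cstdio>

struct Card {
    const char* name;
    int cuda_cores;
    double rel_speed;
    double price_gbp;
};

int main() {
    const Card cards[] = {
        {"RTX 2080 Ti",    4352, 1.00, 950.0},
        {"RTX 2080 Super", 3072, 0.89, 650.0},
        {"RTX 2070 Super", 2560, 0.83, 450.0},
    };

    printf("%-16s %6s %9s %10s %15s\n",
           "card", "cores", "rel perf", "GBP/perf", "perf per kcore");
    for (const Card& c : cards) {
        printf("%-16s %6d %9.2f %10.0f %15.3f\n",
               c.name, c.cuda_cores, c.rel_speed,
               c.price_gbp / c.rel_speed,            // pounds per unit of render speed
               1000.0 * c.rel_speed / c.cuda_cores); // how well the extra cores scale
    }
    return 0;
}
```

On those assumptions the 2070 Super is comfortably the best value per pound, and the last column is the "missing 28%": the 2080 Ti's extra shaders don't scale linearly here, most likely because of its lower boost clocks and other non-shader bottlenecks.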
geogan:

From those results it appears to me the 2070 Super is the clear winner if you want to render using Blender. It is only 120% slower than the faster 2080 Ti using the OptiX API - and you can get it for 45% of the price!
100% slower would be zero performance. Not sure what to make of 120% less performance, does it undo work?
Mufflore:

100% slower would be zero performance. Not sure what to make of 120% less performance, does it undo work?
The 2070 Super is at 120% while the 2080 Ti is at 100% relative render time. It's obvious what I was talking about. So maybe it's 20% slower, then.
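For anyone else tripped up by the percentages, a tiny sketch of the two readings being argued about, using purely made-up numbers with the 2080 Ti normalised to a 100-second render:

```cpp
// The two readings of "X% slower" being argued about, with made-up numbers
// and the 2080 Ti normalised to a 100-second render.
#include <cstdio>

int main() {
    const double base_time = 100.0;  // hypothetical 2080 Ti render time, seconds

    // Reading 1 (what the post above means): the render takes 1.2x as long.
    double time_a = base_time * 1.20;    // 120 s
    double perf_a = base_time / time_a;  // ~0.83 of the 2080 Ti's throughput

    // Reading 2 (the one that makes "120% slower" sound like negative work):
    // the card delivers 20% less throughput.
    double perf_b = 1.0 - 0.20;          // 0.80
    double time_b = base_time / perf_b;  // 125 s

    printf("reading 1: %.0f s, %.0f%% of the throughput\n", time_a, 100.0 * perf_a);
    printf("reading 2: %.0f s, %.0f%% of the throughput\n", time_b, 100.0 * perf_b);
    return 0;
}
```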
Kaarme:

Strangely enough Nvidia has no problems using HBM in those expensive V100 cards, despite HBM being a project AMD launched years ago. Now Nvidia even allows GeForce gamers to tap into the vast pool of Freesync screens, which only used to exist because of AMD, though now there would be new generic adaptive sync screens as well. Conversely it would make sense Nvidia would allow AMD to put the Cuda API to use. It's like Jensen can find one sleeve of his Leather Jacket® but not the other.
HBM isn't owned by AMD. It was developed with SK Hynix. AMD has no control over who uses it. FreeSync is just the name AMD gave to their implementation of Variable Refresh Rate. VESA calls it Adaptive-Sync, and Adaptive-Sync was added to the DP 1.2a standard. If you want to get technical, Nvidia doesn't support AMD's FreeSync, nor any FreeSync monitor; Nvidia is supporting VESA's DP 1.2a interface standard. AMD doesn't benefit directly from Adaptive-Sync. Conversely, Nvidia does benefit directly from CUDA.
sykozis:

HBM isn't owned by AMD. It was developed with SK Hynix. AMD has no control over who uses it.
AMD isn't a memory manufacturer. They needed one to actually get things done outside of a laboratory. I never said they have control. If they did, would Nvidia be using it? However, Nvidia seems to have no compunction about using something AMD developed.
sykozis:

FreeSync is just the name AMD gave to their implementation of Variable Refresh Rate. VESA calls it Adaptive-Sync, and Adaptive-Sync was added to the DP 1.2a standard. If you want to get technical, Nvidia doesn't support AMD's FreeSync, nor any FreeSync monitor; Nvidia is supporting VESA's DP 1.2a interface standard. AMD doesn't benefit directly from Adaptive-Sync. Conversely, Nvidia does benefit directly from CUDA.
Reread what I wrote. And of course they both benefit from adaptive sync. It's a recognised technology valued by lots of gamers, and gamers need good GPUs. The huge pool of adaptive sync screens would be a small pool without AMD, because Nvidia wants you to pay 100 bucks extra for the small module inside, on top of the other technology the screen must contain (like a sufficient panel), while AMD doesn't. Consequently, if a gamer wanted adaptive sync, they needed to either pay significantly more for an Nvidia video card plus an expensive G-Sync screen, or less for an AMD GPU plus any random adaptive sync screen. Since AMD GPUs have been less desirable for a while now, the price difference of the adaptive sync package would have worked to compensate. Now Nvidia finally allows people to use a non-G-Sync screen for adaptive sync as well, so people can go the Nvidia way without paying as much.
Hi again. I'm using an RX 470 and the setting is there (GPU Workload, with a Compute option). Not sure about Vega though. If you go to the cog on the right side, click it, then click Graphics, scroll down to Advanced and click that, it should be there under GPU Workload. Worth a look.
xg-ei8ht:

Hi again. I'm using an RX 470 and the setting is there (GPU Workload, with a Compute option). Not sure about Vega though. If you go to the cog on the right side, click it, then click Graphics, scroll down to Advanced and click that, it should be there under GPU Workload. Worth a look.
There is less chance he will miss your post if you reply to him.
xg-ei8ht:

Hi again. I'm using an RX 470 and the setting is there (GPU Workload, with a Compute option). Not sure about Vega though. If you go to the cog on the right side, click it, then click Graphics, scroll down to Advanced and click that, it should be there under GPU Workload. Worth a look.
That setting isn't available to everyone. I have an RX 5700 and the setting doesn't exist for me. It's quite possible that the setting only exists for certain GPUs...
Kaarme:

AMD isn't a memory manufacturer. They needed one to actually get things done outside of a laboratory. I never said they have control. If they did, would Nvidia be using it? However, Nvidia seems to have no compunctions using something AMD developed.
There are dozens of technologies Nvidia shared and contributed that AMD utilizes. There are also examples of technologies that AMD/ATi developed that Nvidia couldn't use. I don't think pointing the finger at one and saying it's worse really means much when you start stacking everything up.
Kaarme:

Strangely enough Nvidia has no problems using HBM in those expensive V100 cards, despite HBM being a project AMD launched years ago. Now Nvidia even allows GeForce gamers to tap into the vast pool of Freesync screens, which only used to exist because of AMD, though now there would be new generic adaptive sync screens as well. Conversely it would make sense Nvidia would allow AMD to put the Cuda API to use. It's like Jensen can find one sleeve of his Leather Jacket® but not the other.
I am not so sure it makes sense for Nvidia to allow anyone else to use the CUDA API because I do not know where they would see a benefit from doing so. The only thing that would happen is they could potentially lose market share. As I wrote, if Google wins their case against Oracle that becomes a moot point. Then the question becomes does AMD (or a third party) want to support CUDA on their GPUs? Conversion tools are not the same as supporting it because the conversion is not 100%. It is a one-time thing and then you have to tweak things here and there. This means, in the end, a developer has to choose a direction. There is a LOT of infrastructure involved for full support and it will be very interesting to see what AMD does.
Gomez Addams:

I am not so sure it makes sense for Nvidia to allow anyone else to use the CUDA API because I do not know where they would see a benefit from doing so. The only thing that would happen is they could potentially lose market share. As I wrote, if Google wins their case against Oracle that becomes a moot point. Then the question becomes does AMD (or a third party) want to support CUDA on their GPUs? Conversion tools are not the same as supporting it because the conversion is not 100%. It is a one-time thing and then you have to tweak things here and there. This means, in the end, a developer has to choose a direction. There is a LOT of infrastructure involved for full support and it will be very interesting to see what AMD does.
Sure. I wasn't really writing the comment from such a rational point of view. AMD could have kept HBM under a heavy protection and licensing scheme, but it's possible it would never have progressed anywhere like that, becoming another RDRAM. Hynix and others probably wouldn't have started to produce it if there weren't enough customer potential (which AMD alone isn't). Nvidia could only go for their closed technologies because of their highly dominant position. But even so, there seemed to be a limit with adaptive sync.