Red Dead Redemption 2: PC graphics benchmark review (revisited)
The_Amazing_X
Nice work, thank you.
Seems like previous gen is a surprise: the Vega 64 is just a few fps behind the 1080 Ti, and the GTX 1080 is a bit lower than expected.
DannyD
Looking forward to this, thanks!
Hilbert Hagedoorn
Administrator
The reason for max-quality tests being 'standard' can be discussed, sure. Yet, as I explained in the article, we'll be using this benchmark with future generations of cards in mind, ergo we opted for the best quality settings as opposed to the mixed settings of the previous revision of the article. It's a terrific Vulkan benchmark for future products. Whether or not you choose max quality is up to you. But that's why I also included some tests with different settings, as the game can certainly be tweaked in many, many ways.
And it's just that: the popularity of this game is huge, as it's just such a terrific game, and I wanted it included again, adding a Vulkan-compatible title back into the test suite for graphics cards. Hopefully, future drivers and patches will no longer affect performance.
anxious_f0x
Thanks for the update, Hilbert.
Vulkan multi-GPU also works nicely in this game 🙂
HybOj
I really miss the comparison with old results, so I can see how the game has developed. Without it, the article somehow misses its point.
Still, I'm very glad to see a test like this one, comparing the game over time. Thank you!
vbetts
Moderator
Man, Pascal is really showing its limitations in both DX12 and Vulkan in modern titles. Wild to see a card like the 1080, once able to keep up with the RTX series, now fall so short, and then turn around and see cards like the Vega 56 and 64 both hold their own in these APIs.
Fender178
Yeah, seems like the 1000 series cards are starting to show their age. Even the 1080 Ti, which was a great card at the beginning of the RTX life cycle, has started to struggle. Man, I need that upgrade; a 3060/3070 should do nicely. It's understandable that the 1000 series cards would struggle, considering DX12 and Vulkan are not their strong suits.
dampflokfreund
How about benchmarking the settings from Digital Foundry instead of Ultra? That would give a much better idea of how the game performs.
The DF settings have the best quality-to-performance ratio and look very close to Ultra, with much better performance.
JonasBeckman
Ultra probably works well for later GPUs too, even if some settings show the usual minor difference in visual quality but a not-so-minor difference in performance. A lot depends on how things scale and what a given GPU handles well or struggles with, especially when some things like shaders scale with resolution and show a notably higher performance cost at 2560x1440 compared to 1920x1080 and such.
It would be neat if NVIDIA could take their DLSS solution one step further for the next version, since it already uses minimal data and no per-game training model. It's their tech, of course, but it would get a lot of use in upcoming games, where scaling the resolution can make for massive performance changes. Maybe less so for benchmark purposes, but for actually playing the game it has some really interesting uses and potential gains, with the upscaling minimizing the image quality loss.
Comparably though, some shaders and things like volumetric effects, and the upcoming dabbling in ray tracing, will be costly. Assassin's Creed Odyssey and its cloud quality setting hitting a near 60% framerate decrease at Ultra might just be a bit of a starter, though new GPU hardware might help, or just do the usual and brute-force the performance, really. 🙂
(Think it's still something like 40% on Very High, and then it gets a bit more reasonable, although still costly, from High or Medium. Plus it has some pretty extreme image quality / performance ratios where it seems to barely change a thing above Medium or High quality, ha ha.)
Red Dead here, I suppose, is down to the massive view distance and additional effects, even if Vulkan and D3D12 prevent another GTA IV bottleneck situation.
The console max view distance was something like 10-20% of that slider, showing again that the PC version sometimes gets a hefty increase in settings and scalability, although the newer console generation and the upcoming one might change that around a bit again.
(Well, sort of; it's been some years since games like Crysis or Half-Life 2, where the low settings actually look a generation back and the higher settings are very future-proofed, much as that gets called out as unoptimized whenever it's attempted.)
EDIT: But yeah, this game might work pretty well in a benchmark suite until, I don't know, Horizon Zero Dawn; that has a benchmark mode coming, but it might not be using anything too fancy.
GTA VI after this, I suppose, as far as what's next from Rockstar, and then I suppose they're still doing that whole "a year on console first" thing, heh.
Character models and texture detailing aside (a bit of a lower point, though still good-looking), it'll be interesting to see what the next-gen version of the RAGE engine can pull off when no longer limited by the PS4 or Xbox One hardware base.
Astyanax
This is incorrect, and appears to be based on a misinterpretation of the following:
"One Network For All Games - The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalized network that works across games. This means faster game integrations, and ultimately more DLSS games."
But further on it states:
"Using our Neural Graphics Framework, NGX, the DLSS deep neural network is trained on a NVIDIA DGX-powered supercomputer.
DLSS 2.0 has two primary inputs into the AI network:
Low resolution, aliased images rendered by the game engine
Low resolution, motion vectors from the same images -- also generated by the game engine
Motion vectors tell us which direction objects in the scene are moving from frame to frame. We can apply these vectors to the previous high resolution output to estimate what the next frame will look like. We refer to this process as 'temporal feedback,' as it uses history to inform the future."
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/
I realise the snippet from the DLSS page also says "While the original DLSS required per-game training, DLSS 2.0 offers a generalized AI network that removes the need to train for each specific game."
But this isn't the case if you want the content to look correct.
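To make that "temporal feedback" description a bit more concrete, here's a rough NumPy sketch of the general idea: reproject the previous high-resolution output with per-pixel motion vectors, then blend it with a naively upscaled version of the current low-res frame. This is purely my own illustration of the concept, not NVIDIA's actual network or API.

```python
# Minimal sketch of temporal-feedback upscaling (illustration only).
import numpy as np

def upscale_nearest(lo, scale):
    """Nearest-neighbour upscale of a (H, W) frame by an integer factor."""
    return np.repeat(np.repeat(lo, scale, axis=0), scale, axis=1)

def reproject(prev_hi, motion):
    """Fetch each output pixel from where it was in the previous frame.

    motion[..., 0] / motion[..., 1] are per-pixel x/y offsets (in pixels)
    describing how far content moved since the last frame.
    """
    h, w = prev_hi.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - motion[..., 1].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - motion[..., 0].round().astype(int), 0, w - 1)
    return prev_hi[src_y, src_x]

def temporal_upscale(lo_frame, prev_hi, motion, scale=2, history_weight=0.8):
    """Blend reprojected history with the upscaled current low-res frame."""
    current = upscale_nearest(lo_frame, scale)
    history = reproject(prev_hi, motion)
    return history_weight * history + (1.0 - history_weight) * current

# Toy usage: 2x upscale of a random frame with a static scene (zero motion).
lo = np.random.rand(270, 480)
prev = upscale_nearest(lo, 2)          # pretend this was last frame's output
motion = np.zeros((540, 960, 2))       # no movement between frames
hi = temporal_upscale(lo, prev, motion, scale=2)
print(hi.shape)  # (540, 960)
```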
"Using our Neural Graphics Framework, JonasBeckman
Ah, that clears it up then. If NVIDIA had cleared the hurdle of per-game training down to requiring only a small amount of data, they would have been close to an almost generic implementation, or a control panel setting, that maybe wouldn't be as detailed but could still leverage the DLSS 2.0 and newer improvements without game-specific implementations being needed, so it could just work on pretty much everything.
Upscaling from a resolution that could be just a quarter of the final output, with results good enough to be well worth the performance gains. Had that been the case, it would be a bit of an immediate win against the competition: outside of direct comparisons and screenshots, users would just toggle it and that'd be it, big performance gains with few drawbacks, and it would probably improve further over time.
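As a rough back-of-envelope illustration of why quarter-resolution rendering is such a big deal (my own example resolutions, not measured data), here's how small a fraction of the output pixels actually gets shaded per frame:

```python
# Back-of-envelope sketch: fraction of output pixels rendered when upscaling
# from a lower internal resolution (illustrative numbers only).
def rendered_fraction(internal, output):
    """Fraction of output pixels the GPU actually shades per frame."""
    iw, ih = internal
    ow, oh = output
    return (iw * ih) / (ow * oh)

for name, internal, output in [
    ("1080p -> 4K", (1920, 1080), (3840, 2160)),
    ("1440p -> 4K", (2560, 1440), (3840, 2160)),
    ("720p -> 1440p", (1280, 720), (2560, 1440)),
]:
    print(f"{name}: renders {rendered_fraction(internal, output):.0%} of the output pixels")
```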
Scaling of geometry data and of course the pixels for the target resolution and whatever it's scaling up from, plus shader performance scaling at different resolutions, along with finer detail preservation, TAA, and maybe some sharpening, and you'd have a pretty strong advantage there; not one I think AMD could easily match without building their own solution from scratch, which would take time, resources, and manpower.
Game-wise, assuming it would have worked, that's overall a pretty hefty boost to performance, and from what's been seen of it so far it would even be usable for downsampled resolutions at a lower performance cost, or just staying at, say, 1920x1080 or 2560x1440 but with much less demand on the GPU.
Well, good to hear that cleared up; the wording there is a bit confusing, but it makes sense after reading your explanation. I don't think it'd be possible to get away with even a game-engine-generic version of this implementation (yet?), as not everything on Unity or Unreal, to use these popular ones as examples, is going to be similar. Though if that could be done, it'd be a pretty huge thing, at least from my view of how this works and how performance would change, to where there would be no real competition, as one GPU vendor would simply have a way to make things significantly faster, with continual improvements over time as well.
The upcoming generation of games and their demands on shader and geometry performance will be a big thing, although eventually newer hardware capable of D3D12 Ultimate, and the Vulkan equivalents of those features, will be required for anyone not on a Turing-type card.
Or already, for that matter, what with games pushing shader effects that can halve the framerate or more; plus it scales with the user's preferred resolution, and even if it might not be 1:1, it will still incur a noticeably higher performance hit above 1920x1080. 🙂
EDIT: Optimistically, that's 30-50% extra performance just like that; kinda hard to match.
Realistically, it can't be quite that easy, though it sounded like NVIDIA had cleared one of the major obstacles towards this.
Well, that's probably enough on that; back to the actual benchmark.
Who knows, if NVIDIA has Ampere launching early, maybe it won't be too long until the results are updated with what those cards can do. 🙂