Review: Total War WARHAMMER DirectX 12 PC GFX & CPU scaling performance

No, he isn't. He just got confused. There are two Vega GPUs, four SKUs: "big Vega" will go against the Titan/Ti, and "little Vega" will go against the 1080/1070. Fury is faster by more than 20%! Interesting number, that, 20%... 20% is also an average OC on a 980. I wonder how a 980 @ 1500 MHz would do in this test.
Like everything else they can be overclocked too, but I think having a set baseline of default clocks makes sense; if you overclock, you simply get more performance on top. The biggest performance killer is depth of field. Not sure what the unlimited video memory option is used for, but I have an R9 290 so I'm limited to 4 GB of VRAM. I wonder if unlimited video memory is for caching or something; maybe the 8 GB cards will pull away with that enabled?
No, he isn't. He just got confused. Interesting number, that, 20%... 20% is also an average OC on a 980. I wonder how a 980 @ 1500 MHz would do in this test.
I didn't get confused. I know there will be two Vega GPUs, but when you use "Vega" like that and don't specify, it comes across as if you're referring to the larger of the two, or as if you don't know two exist in the first place.
Are you saying that AMD's high end should not be compared with Nvidia's? In any case, the regular Fury smashes the 980 in this benchmark as well.
I'm not saying AMD's high end should not be compared to Nvidia's, but Nvidia's high end isn't GP104, it's GP102, which will probably be out around the same time Vega is. Just because AMD isn't currently competing against GP104 doesn't magically make GP104 the best Nvidia has to offer with Pascal.
"Hitman is a cache what you can continuously type of title" Not sure what happened there but I think it might need to be fixed? Great article otherwise, very interesting AMD CPU results.
A stock Fury runs at 1.05 GHz, and you're lucky if you're stable at 1150 MHz. That's roughly 10% vs 25%.
What I mean is that HH was probably a little stretched for time to play about with tweaking every single graphics card, and although yours may hit 1500 MHz, that doesn't mean they all will, so leaving them at a baseline is the best thing to do.
What happens when your baseline is more expensive and performs worse? lol. I feel like the review process needs to change for the 1080, or we just punish Nvidia for coming up with it in the first place.
Well, in fairness, they want to make their new product look good; they don't want a 980 Ti @ 1.4 GHz catching up to their shiny new 1080, because that makes people think, well, why the hell would I buy that?
I never suggested HH does that. I don't have a 980. Most REFERENCE 980s in reviews hit approximately 1500 MHz, so it seems like a good OC baseline. The fact is, when you see reviews citing 980 performance they're talking reference, and you can squeeze another 20-25% out of them. I'm not saying Hilbert should test every card OC'd, but readers should be aware of this.
But again, not everyone overclocks; that's why I said if you know what you can get at stock, and you overclock and get more, then fair enough. A lot of people check benchmarks for games, and I bet a good number of them have never even overclocked a video card. If you overclock and they don't, they'll wonder why their card is so slow; no one complains if theirs turns out faster. See my point? Anyway, assuming an ideal world where performance increases by exactly the percentage of the overclock, the 980 would go from 51 to roughly 63 fps while the Fury would go from 63 to around 71 fps at 1440p.
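As a rough sanity check on that "ideal world" scaling, here is a back-of-the-envelope sketch (not from the review) that assumes fps scales linearly with core clock, which real games rarely manage, using the baseline fps and overclock percentages quoted in this thread; the exact output shifts a little depending on the headroom you assume:

```python
# Back-of-the-envelope estimate: ideal-case fps if performance scaled 1:1 with
# the core-clock increase. Real games rarely scale this cleanly, so treat the
# results as optimistic upper bounds rather than predictions.

def scaled_fps(baseline_fps: float, oc_factor: float) -> float:
    """Return the fps you'd get if performance tracked the clock increase exactly."""
    return baseline_fps * oc_factor

# Baseline 1440p numbers quoted in the thread: GTX 980 ~51 fps, R9 Fury ~63 fps.
# Overclock headroom quoted in the thread: ~25% for a 980, ~10% for a Fury.
for card, fps, oc in [("GTX 980", 51.0, 1.25), ("R9 Fury", 63.0, 1.10)]:
    print(f"{card}: {fps:.0f} fps stock -> ~{scaled_fps(fps, oc):.0f} fps at +{(oc - 1) * 100:.0f}%")
```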
Hmmmm..... AMD Fury and DX12 seem to be "glamorous". Forthcoming AMD GPUs appear to be cost-effectively elegant... Waiting... for...
I don't even know what to think about these DX12 benchmark tests. For a start, they all seem to favor AMD graphics cards, which means god only knows what Nvidia cards are really capable of. We have to remember that HH only uses default clocks on reference cards, and I doubt anyone here even runs a reference card. What I do know is that my 980 G1 will outscore a reference card by as much as 20% in some tests/benchmark results. When HH does a test like this I usually get about an extra 10 fps on top of his reference results. He's getting 72 fps on a reference 980 @ 1080p, which for all we know is crippled by the AMD game engine. I'd get 80+ fps on my G1 980, which for me is more than enough to play an AMD DX12 game from 2016, so I'm actually very happy with that and hope to see some actual Nvidia-favored DX12 games in the near future.
OMG! Just look at my R9 390 beating the GTX 980... Finally we see DX12 bring out the full power of GCN, and it's amazing.
See, I'm not so sure about that, as they only partnered up with AMD less than two months before release, which to me is far too late to matter. The main reason AMD is performing so well in DX12 titles is the removal of the massive driver overhead that has held AMD back in DX11 on an otherwise sound architecture. The fact is, Nvidia has effectively been competing against crippled products, because they had the advantage of well-optimised drivers that get the most out of their own architecture.
Almost all the titles have been AMD-sponsored, but I'm not even sure how much that matters any more. The two titles in which NVIDIA is doing better are DX11 engines with DX12 patched on.
I hope you don't mean Ashes, because NVIDIA hardware is outperforming AMD in it.
No, it's not. Unless you compare 16nm NVIDIA to 28nm AMD, it does not. The 390X is practically as fast as the 980 Ti, and the 380X as the 970. You say that AMD does not have an inherent DX12 advantage, which I agree with. The fact is, though, that NVIDIA is pricing their hardware according to what is basically perceived DX11 performance, not what the cards can do at their best (as happens in most DX12 titles). So AMD prices accordingly; the 380X reaching 970 performance is one example of pricing like that. If you take that into account, then yes, AMD does have an advantage in DX12 in terms of performance per dollar. http://www.guru3d.com/index.php?ct=articles&action=file&id=22301
Almost all the titles have been AMD-sponsored, but I'm not even sure how much that matters any more. The two titles in which NVIDIA is doing better are DX11 engines with DX12 patched on.
Can we get a source for that? I mean, every time AMD does better in something, someone claims that AMD paid for it. At least with Nvidia's GameWorks it is pretty clear when they had a hand in it.
I feel like a broken record. AMD does not have an inherent DX12 advantage. AMD cards tend to perform far better in DX12 compared to DX11 because of reduced CPU overhead. In Ashes of the Singularity (and very probably this game as well), raw compute throughput is the major determining factor in game performance.
You and me both. Although disabling async compute reduces performance by about 10%, most of the increase in performance comes from the removal of the driver overhead, as both of us have already said. It doesn't matter which game it is, we always seem to come across the claim that it's been sabotaged against nVidia, but I think people are over-hyping async compute way too much. The reality is more that nVidia cards have been running at something like 99% efficiency while AMD cards have been running at around 60%, and DX12 has allowed them to reach 99% as well, since the driver is no longer fighting for resources on the one CPU core alongside the application code running on that very core. I think it's also the reason why frame pacing is typically worse on AMD cards compared to nVidia.
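To put rough numbers on that argument, here is a quick sketch using the illustrative efficiency figures from this post (they are the poster's ballpark numbers, not measured data, and the multiplicative stacking is an assumption):

```python
# Quick arithmetic on the illustrative figures above (the post's rough numbers,
# not measurements). Going from ~60% to ~99% hardware utilisation is by itself
# a much bigger uplift than the ~10% attributed to async compute.

dx11_utilisation = 0.60   # rough AMD utilisation under DX11, as quoted above
dx12_utilisation = 0.99   # rough AMD utilisation under DX12, as quoted above
async_gain = 0.10         # ~10% loss observed when async compute is disabled

overhead_uplift = dx12_utilisation / dx11_utilisation   # ~1.65x from overhead removal alone
combined_uplift = overhead_uplift * (1 + async_gain)    # ~1.82x if the effects stack multiplicatively

print(f"Driver-overhead removal alone: ~{overhead_uplift:.2f}x")
print(f"With ~10% from async compute on top: ~{combined_uplift:.2f}x")
```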
Can we get a source for that? I mean, every time AMD does better in something, someone claims that AMD paid for it. At least with Nvidia's GameWorks it is pretty clear when they had a hand in it.
I won't even get into that; there's no point. The truth of the matter, though, is that in all engines that were developed with DX12 in mind, AMD has been getting much higher than its usually expected performance. Yes, they were games sponsored by AMD, but on the other hand there haven't been any closed libraries like GameWorks involved, which means that NVIDIA has had access. They didn't even complain about lack of access either. The two games they "win" are Rise of the Tomb Raider (a port rushed out three months early with DX12 patched on), and Gears of War Ultimate Edition (an atrocious original release with DX12/UWP bolted on top of Unreal Engine 3).
Total War has always been iffy performance-wise and needed raw clock speed to compensate. I bet AMD offered a DX12 implementation to bring performance gains across the board; I doubt they just dropped a suitcase of money and told them to optimize for their cards.
Well, since AMD's finances are in the red, it would have been a pretty light suitcase 😀 lol j/k
Monopoly money, or Fury dies as souvenirs. :3eyes:
Warhammer is supposed to include async compute (the AMD way), yet I see no indication of it.
You guys getting hyped up about a particular GPU's performance really shouldn't... I haven't read the article, but y'all need to know that these Total War games have always been almost 100% CPU dependent; GPUs never really mattered... The "Warscape" engine they use has been a piece of **** for a long time. All it's ever done in the past is stress the first core to 100% and not give a flying fck about what GPUs you have (I got zero performance increase going from a 660 Ti to a 290X, and just a few extra frames going from a single card to CrossFire). Still, given the terrible history the series has with performance problems, I don't care that they've upgraded to 64-bit, I don't trust them... Also couldn't give a **** about Warhammer; I'm a history nerd, not a fantasy one.