Futuremark Updates 3DMark with API Overhead Feature Test

Ahh OK, thanks, I'll hold off on the Win 10 install then.
Also, I would be cautious of factory-installed keyloggers...
NEED TO SEE RESULTS NOW!!!!!!! lol, I'm really looking forward to this; if it's good I might even install Win10 to try it out.
Win10x64 is really coming along nicely as of this latest build. It's already a big, giant improvement over stock 8/8.1. Still, I'd strongly advise anyone planning on running it to dual-boot with Win8/7 (or whatever you are running). Doing a clean install of each build clocks in at less than 30 minutes, complete. An upgrade install of 10041--with merely a handful of programs installed--took me *two hours* on the same system, and that's upgrading one Win10 build over another Win10 build. I can only imagine that if I were foolish enough to be using Win10 as a primary OS right now, with a lot of programs installed, and upgrading from Win7, it would take me closer to *three hours*...;) I'm not going to upgrade my Win8.1 installation until I can do it with the final version of Win10, when Microsoft drops the "Preview" label.
All of those scores are relative. System configuration is going to have an effect on the scores, as usual. The faster the CPU and GPU, the more draw calls. To an extent, we'll still be able to use these scores for comparison to other systems. You just won't be able to properly compare AMD vs NVidia.
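For context on where those draw-call numbers come from: as far as I understand the feature test, it keeps raising the number of draw calls issued per frame until the frame rate falls below 30 fps, and the score is the draw calls per second sustained at that point. Below is a rough C++ sketch of that kind of loop; submit_draw_call(), the starting load, the ramp, and the exact cut-off are my own stand-ins for illustration, not Futuremark's actual code.

```cpp
// Rough sketch of how an API overhead test can escalate draw calls per frame.
// submit_draw_call() is a stand-in for a real D3D11/D3D12/Mantle submission;
// here it just burns a little CPU so the loop has something to measure.
#include <chrono>
#include <cstdio>

static void submit_draw_call()
{
    volatile int sink = 0;              // placeholder work, not a real API call
    for (int i = 0; i < 100; ++i)
        sink += i;
}

int main()
{
    using Clock = std::chrono::steady_clock;
    const double kMinFps = 30.0;        // assumed cut-off for the test
    long long drawsPerFrame = 1000;     // arbitrary starting load

    for (;;)
    {
        const auto start = Clock::now();
        for (long long i = 0; i < drawsPerFrame; ++i)
            submit_draw_call();
        const std::chrono::duration<double> frameTime = Clock::now() - start;

        const double fps = 1.0 / frameTime.count();
        if (fps < kMinFps)
        {
            // The reported score would be the sustained draw calls per second.
            std::printf("~%.0f draw calls/s before dropping below %.0f fps\n",
                        drawsPerFrame * fps, kMinFps);
            return 0;
        }
        drawsPerFrame += drawsPerFrame / 5;   // ramp the load ~20% per frame
    }
}
```

Because the stub only burns CPU time, the number this prints tracks how quickly one thread can push "calls", which is roughly why the real score says more about the CPU and driver than about the GPU.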
That's true, but I really think that performance in general--with D3D12 games, of course--is going to skyrocket, with gains of ~50% in some rare cases, maybe... Even an average 20%-30% improvement on the same hardware is nothing to sneeze at, right? Pretty amazing, I think. What I anticipate on the CPU side is that single-core performance will suddenly take a back seat to multicore performance in the latest games, and AMD CPUs look *much stronger* in proper multicore applications than they do in single-core apps. So I think the upshot is that Intel won't see much of a CPU-performance gain--maybe even a slight drop in D3D12 games that hit multiple cores hard--while AMD is suddenly going to look much better than it has... Interesting times... But more than that, I think D3D12 games will really start to shift the performance burden to GPUs, making both AMD and Intel CPUs seem much less important in the scheme of things than they do at present.
DX12 actually shows negative scaling once the core count goes above 4, although that may change by the time DX12 properly ships. http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/4
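The usual explanation for why core count should matter at all here is that D3D12 lets every thread record its own command list, with only the final submission going through a single queue. Here's a plain C++ sketch of that structure; Command, record(), and the submission step are conceptual stand-ins, not real D3D12 interfaces.

```cpp
// Conceptual sketch of D3D12-style multithreaded command recording:
// each worker thread fills its own command list, then everything is
// handed in at one submission point. The types and functions here are
// stand-ins, not real D3D12 interfaces.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Command { int drawId; };
using CommandList = std::vector<Command>;

// Each thread records its share of the frame's draws into a private list,
// which is why recording cost can scale with the number of cores.
static void record(CommandList& list, int first, int count)
{
    list.reserve(count);
    for (int i = 0; i < count; ++i)
        list.push_back({first + i});
}

int main()
{
    const int totalDraws = 100000;
    const unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    const int perThread = totalDraws / static_cast<int>(threads);

    std::vector<CommandList> lists(threads);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < threads; ++t)
        workers.emplace_back(record, std::ref(lists[t]),
                             static_cast<int>(t) * perThread, perThread);
    for (auto& w : workers)
        w.join();

    // Single submission point, analogous to one ExecuteCommandLists call.
    std::size_t submitted = 0;
    for (const auto& list : lists)
        submitted += list.size();
    std::printf("recorded %zu draws across %u threads\n", submitted, threads);
}
```

In the real API the analogue of that final loop would be a single ExecuteCommandLists call on a command queue, so the part that parallelizes across cores is the recording, not the submission.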
Well, the number of draw calls is somewhat limited by the command processor's ability to accept them on the GPU side. No game issues 10 million draw calls, so chances are the command processor was never designed to handle that many, and I don't realistically see games ever issuing that many calls. Honestly, this test is more interesting from a CPU perspective than a GPU one.
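That's also part of why tens of millions of draw calls is a synthetic number: real engines batch and instance instead of issuing one call per object. A toy C++ illustration of the difference, with draw() and draw_instanced() as stubs rather than real API functions:

```cpp
// Toy illustration of why games don't need millions of draw calls:
// instancing submits one call for many copies of the same mesh.
// draw() and draw_instanced() are stubs, not real API functions.
#include <cstdio>

static long long g_apiCalls = 0;

static void draw(int /*meshId*/)                        { ++g_apiCalls; }
static void draw_instanced(int /*meshId*/, int /*num*/) { ++g_apiCalls; }

int main()
{
    const int rocks = 10000;

    // Naive: one API call per object.
    g_apiCalls = 0;
    for (int i = 0; i < rocks; ++i)
        draw(/*meshId=*/1);
    std::printf("naive:     %lld draw calls\n", g_apiCalls);

    // Instanced: one API call covers all of them.
    g_apiCalls = 0;
    draw_instanced(/*meshId=*/1, rocks);
    std::printf("instanced: %lld draw calls\n", g_apiCalls);
}
```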
Honestly, this test is more interesting from a CPU perspective than a GPU one.
Agreed. I'd still like to see a comparison between an FX-8320E and an i7-4790K, both with an R9 290X... Do three runs with each configuration and see how the numbers look.