AMD X390 and X399 chipset diagrams reveal HEDT information

https://forums.guru3d.com/data/avatars/m/165/165326.jpg
😱
https://forums.guru3d.com/data/avatars/m/149/149159.jpg
Quad channel only appears on the RZ4700, not the RZ2700; the latter is dual channel. Read the labeling closely.
https://forums.guru3d.com/data/avatars/m/220/220188.jpg
I suspect some Intel guys are running in circles by now.
https://forums.guru3d.com/data/avatars/m/262/262613.jpg
With this, and hopefully some BIOS update/Windows patch that could help match Intel in games, it will be checkmate until Intel either reduces their prices or brings out new CPUs that are dramatically faster.
https://forums.guru3d.com/data/avatars/m/54/54823.jpg
If you can't beat them, copy them. Boring, though AMD fanboys will now have something to upgrade their Ryzen to.
Yeah.... no.
https://forums.guru3d.com/data/avatars/m/197/197287.jpg
If you can't beat them, copy them. Boring, though AMD fanboys will now have something to upgrade their Ryzen to.
Gotta love the trolls that come to this forum.
https://forums.guru3d.com/data/avatars/m/16/16662.jpg
Administrator
I've cleaned up this thread a bit by removing over-the-top comments. Just a quick warning: if I have to do that again, I'll delete the user accounts that go along with them as well. If you do not have anything useful to add to the topic, don't bother to reply.
https://forums.guru3d.com/data/avatars/m/267/267641.jpg
I'm afraid of a 200W+ TDP; I would prefer some dual-CPU-socket board.
https://forums.guru3d.com/data/avatars/m/54/54823.jpg
The target market would be those that require heavy processing. It would be great for video editing and for special effects in the entertainment industry (along with GPU power). Basically any content creation; it's not targeting home gaming!
The lanes, man. Volta will definitely saturate PCIe 3.0; a TXP/1080 Ti can already do so.
data/avatar/default/avatar33.webp
YES PLEASE, I'll take one. Time to upgrade my X99 Sabertooth and 5820K @ 4.5 GHz to a 10-16 core machine. BUT I won't touch anything with less than quad-channel memory and 10x SATA3 ports; I got spoiled by Intel, and yes, I use all of my SATA ports.
https://forums.guru3d.com/data/avatars/m/266/266726.jpg
Quad channel only appears on the RZ4700, not the RZ2700; the latter is dual channel. Read the labeling closely.
Not so sure about that, since there are duplicates of each; it would make more sense that A1/A1 is a different channel than A2/A2. It may have something to do with it being an MCM (i.e. channel A1 is die A and A2 is die B). I think the single-socket platform is quad-channel and the dual-socket board is 8-channel like the Naples platform, which would make more sense since it is most likely a server-derived chipset.
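For a rough sense of what the channel counts being debated here mean in practice, here is a back-of-the-envelope sketch. DDR4-2666 and a 64-bit (8-byte) channel are assumptions for illustration only; they are not figures taken from the leaked diagrams.

# Theoretical peak memory bandwidth for the channel configurations
# discussed above. DDR4-2666 and a 64-bit channel width are assumptions
# for illustration, not values from the leaked X390/X399 diagrams.

MT_PER_S = 2666           # DDR4-2666: million transfers per second
BYTES_PER_TRANSFER = 8    # 64-bit channel width

def peak_bandwidth_gbs(channels: int) -> float:
    """Theoretical peak bandwidth in GB/s for a given channel count."""
    return channels * MT_PER_S * BYTES_PER_TRANSFER / 1000

for channels in (2, 4, 8):
    print(f"{channels}-channel: ~{peak_bandwidth_gbs(channels):.1f} GB/s")
# Prints roughly 42.7, 85.3 and 170.6 GB/s for dual, quad and 8-channel.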
https://forums.guru3d.com/data/avatars/m/54/54823.jpg
....no? They barely saturate PCI-Express 2.0
PCIe 2.0 was saturated by quad-SLI Titans (the original) on X79, and it was shown to have improvements going to a board with an extra PLX chip (ASRock Extreme11).
https://forums.guru3d.com/data/avatars/m/197/197287.jpg
PCIe 2.0 was saturated by quad-SLI Titans (the original) on X79, and it was shown to have improvements going to a board with an extra PLX chip (ASRock Extreme11).
Again, barely. So how can you justify stating that they are saturating PCI-Express 3.0?
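To put numbers behind the saturation argument, here is a quick sketch of the theoretical link bandwidths involved. The per-lane rates follow the published PCIe 2.0 and 3.0 spec figures; everything else is plain arithmetic, and it also shows why x8 vs. x16 comparisons on a 3.0 platform get used as a stand-in for the 2.0 vs. 3.0 question raised later in the thread.

# Theoretical one-directional PCIe link bandwidth. PCIe 2.0 runs at
# 5 GT/s with 8b/10b encoding (~0.5 GB/s per lane); PCIe 3.0 runs at
# 8 GT/s with 128b/130b encoding (~0.985 GB/s per lane).

PER_LANE_GBS = {
    "2.0": 5.0 * (8 / 10) / 8,      # ~0.5 GB/s per lane
    "3.0": 8.0 * (128 / 130) / 8,   # ~0.985 GB/s per lane
}

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Theoretical peak bandwidth of a PCIe link, one direction, in GB/s."""
    return PER_LANE_GBS[gen] * lanes

for gen, lanes in (("2.0", 16), ("3.0", 8), ("3.0", 16)):
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbs(gen, lanes):.1f} GB/s")
# PCIe 2.0 x16 (~8.0 GB/s) and PCIe 3.0 x8 (~7.9 GB/s) are nearly identical;
# PCIe 3.0 x16 roughly doubles that at ~15.8 GB/s.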
https://forums.guru3d.com/data/avatars/m/197/197287.jpg
https://www.youtube.com/watch?v=8KOTiee5RqA https://www.youtube.com/watch?v=HchSu5peIoc lol ok
Love it when people post YouTube reviews. Why do people throw them around like they are actually good reviews? Anyway, I watched both of your videos, and again: barely. So how can you justify stating that they are saturating PCI-Express 3.0? There's a very small FPS difference in those videos, so again, in case the question is not clear: how can you justify stating that it's saturating 3.0 x16? Even if it were, which it isn't, how would you prove it? What would you have to prove it against? And since you're fond of YouTube reviews, I'll just link you to a nice, reputable and accurate review, which in reality shows what those YouTube videos also show: that it's barely, very barely saturated. http://www.guru3d.com/articles-pages/pci-express-scaling-game-performance-analysis-review,1.html
https://forums.guru3d.com/data/avatars/m/54/54823.jpg
Love it when people post YouTube reviews. Why do people throw them around like they are actually good reviews? Anyway, I watched both of your videos, and again: barely. So how can you justify stating that they are saturating PCI-Express 3.0? There's a very small FPS difference in those videos, so again, in case the question is not clear: how can you justify stating that it's saturating 3.0 x16? Even if it were, which it isn't, how would you prove it? What would you have to prove it against? And since you're fond of YouTube reviews, I'll just link you to a nice, reputable and accurate review, which in reality shows what those YouTube videos also show: that it's barely, very barely saturated. http://www.guru3d.com/articles-pages/pci-express-scaling-game-performance-analysis-review,1.html
Actually, I meant x8 vs. x16, as it's impossible to tell if 3.0 is being saturated without overclocking the BCLK/PCIe, which no one has done tests for. And the author of the video literally comes to a different conclusion: 170 fps vs. 210 fps in one game, 60+ fps vs. under 60 in another. The 1080 video isn't as impressive, since a 1080 is too weak for 4K.
Anyway: Overall, the results of our testing is pretty mixed. With a single Titan X, we saw a wide range of results between using a PCI-E 3.0 slot at x8 and x16. Some applications (Unigine Heaven Pro and Octane Render) showed no difference, while others (Ashes of the Singularity, GRID Autosport, and Davinci Resolve) showed up to ~5% difference in performance. With dual GPUs, the results actually got a bit more confusing. Although Unigine Heaven Pro didn't see much of a difference with a single card, with two cards in SLI driving three 4K displays in surround we saw roughly a 15% drop in performance running at x16/x8 and a massive 30% drop in performance running at x8/x8. On the other hand, Ashes of the Singularity only showed minimal differences, and GRID Autosport was actually faster at 1080p when running in x8/x8 - although it was about 8% slower at 4K and 4K surround. On the professional side, Octane Render still didn't show a difference when using two cards but Davinci Resolve did see up to a ~10% drop in performance with both x16/8 and x8/x8.
Same conclusion: it depends on the app, which is what the author of the videos said.
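For what it's worth, turning the fps figures cited above into percentages is simple arithmetic; the numbers below are only the ones already quoted from the videos and the review, nothing newly measured.

# Convert the fps figures quoted above into a percentage difference.
# 170 fps vs. 210 fps (the first video's example) works out to roughly
# a 24% gain for the wider link in that title.

def pct_gain(slower_fps: float, faster_fps: float) -> float:
    """Percentage advantage of the faster result over the slower one."""
    return (faster_fps - slower_fps) / slower_fps * 100

print(f"~{pct_gain(170, 210):.0f}% faster")  # ~24%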