Radeon RX Vega Confirmed launching at SIGGRAPH

Well, let's hope they polish the drivers before release.
I hope this Vega is a success (I doubt it). Right now I'm on a 1440p monitor with a 970 ITX, so I need to upgrade ASAP. The reason I haven't upgraded is that I don't really have much time to play (I play/finish one game every 2-3 months). I'm also waiting for 4K 120Hz monitors that can handle variable refresh rate (like VESA Adaptive-Sync) with good 1080p scaling for the heaviest games. That's why I'm not grabbing a 1080 Ti: I'm not going G-Sync in the future, and I don't need Ti performance for 1440p today.
HBM was the worst thing AMD thought to bring to consumers. I hope the drivers can pull a 1080 Ti miracle or they're ****ed.
Thanks for the informative input, technical expert of GPUs... LOL
The tech is certainly good and holds a number of advantages, although my own knowledge of it is admittedly not great either. Availability and price are a concern, though, and that could hamper things even now that the GPU is finally being released. It will be interesting to see how this product matures after launch. Going by the "work" Vega released earlier as the Frontier Edition, the drivers at least for that could have done with some additional polish. But with these models being aimed more at gaming, and probably cutting those mode switches and whatnot, they might end up in better shape. Overall performance in games might not differ much, but having the GPU driver not crash is a win too, ha ha. 😀 (Granted, while there were several such listed known issues, they were also mostly situational or specific to certain conditions.)
LOL, a year too late and they still try to spin it as a superior gaming product. Sorry, but they lost this round a year ago. HBM2 was the wrong bet.
Agreed, this should have been out before Christmas. I'm not sure who is willing to buy a product that sits between the year-old 1070 and 1080, if we go by the FE results. The price difference between the two isn't even that high anymore, unless they go and undercut the GTX 1070's price... HBM2 sounded lovely back when we still had plain GDDR5; now we've got GDDR5X hitting 12 Gbps on some models, so the bandwidth gap is tiny.
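For anyone curious, that bandwidth claim roughly checks out. Here is a quick C++ sanity check, assuming a hypothetical 256-bit GDDR5X card at 12 Gbps per pin versus a 2048-bit HBM2 card at about 1.9 Gbps per pin (illustrative configurations, not confirmed specs):

    #include <cstdio>

    // Peak DRAM bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8.
    static double peak_gb_per_s(int bus_bits, double gbps_per_pin) {
        return bus_bits * gbps_per_pin / 8.0;
    }

    int main() {
        // Assumed, illustrative memory configurations, not confirmed specs.
        printf("GDDR5X, 256-bit @ 12.0 Gbps: %.0f GB/s\n", peak_gb_per_s(256, 12.0));
        printf("HBM2,  2048-bit @  1.9 Gbps: %.0f GB/s\n", peak_gb_per_s(2048, 1.9));
        return 0;
    }

That works out to roughly 384 GB/s versus 486 GB/s under those assumptions, so HBM2 still leads, but by nowhere near the margin it once had over plain GDDR5.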
Well, it would have been too much if AMD also hit the jackpot in the GPU department; they certainly did with their CPUs. As said, HBM2 was the wrong bet at this point in time. It's the future, sure, but not yet.
AMD Plays Catch-Up in Deep Learning with New GPUs and Open Source Strategy (June 28, 2017)
While all of these GPUs are focused on the same application set, they cut across multiple architectures. The MI25 is built on the new "Vega" architecture, while the MI8 and MI6 are based on the older "Fiji" and "Polaris" platforms, respectively. The top-of-the-line MI25 is built for large-scale training and inferencing applications, while the MI8 and MI6 devices are geared mostly toward inferencing. AMD says they are also suitable for HPC workloads, but the lower precision limits the application set principally to some seismic and genomics codes. According to an unnamed source manning the AMD booth at ISC, they are planning to deliver 64-bit-capable Radeon GPUs in the next go-around, presumably to serve a broader array of HPC applications.

For comparison's sake, NVIDIA's P100 delivers 21.2 teraflops of FP16 and 10.6 teraflops of FP32. So from a raw flops perspective, the new MI25 compares rather favorably. However, once NVIDIA starts shipping the Volta-class V100 GPU later this year, the 120 teraflops delivered by its new Tensor Cores will blow that comparison out of the water.

A major difference is that AMD is apparently building specialized accelerators for deep learning inference and training, as well as HPC applications, while NVIDIA has abandoned this approach with the Volta generation. The V100 is an all-in-one device that can be used across these three application buckets. It remains to be seen which approach users will prefer.

The bigger difference is on the software side for GPU computing. AMD says it plans to keep everything in its deep learning/HPC stack open source. That starts with the Radeon Open Compute platform, aka ROCm. It includes things such as GPU drivers, a C/C++ compiler for heterogeneous computing, and the HIP CUDA conversion tool. OpenCL and Python are also supported. New to ROCm is MIOpen, a GPU-accelerated library that encompasses a broad array of deep learning functions. AMD plans to add support for Caffe, TensorFlow, and Torch in the near future. Although everything here is open source, the breadth of support and functionality is a fraction of what is currently available to CUDA users. As a consequence, the chipmaker has its work cut out for it in capturing deep learning customers.
https://www.top500.org/news/amd-plays-catch-up-in-deep-learning-with-open-source-strategy/
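To make the HIP CUDA conversion tool mentioned above a bit more concrete, here is a minimal sketch of what a hipified CUDA program looks like. The device code carries over unchanged; only the host-side cuda* calls are renamed to hip* calls. This is an illustrative example, not taken from AMD's documentation:

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Device code is unchanged from CUDA: __global__, blockIdx, blockDim
    // and threadIdx all mean the same thing under HIP.
    __global__ void vec_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

        // Host-side API is a one-for-one rename: cudaMalloc -> hipMalloc, etc.
        float *da, *db, *dc;
        hipMalloc((void**)&da, bytes);
        hipMalloc((void**)&db, bytes);
        hipMalloc((void**)&dc, bytes);
        hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

        // hipLaunchKernelGGL replaces CUDA's <<<grid, block>>> launch syntax.
        hipLaunchKernelGGL(vec_add, dim3((n + 255) / 256), dim3(256), 0, 0,
                           da, db, dc, n);

        hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
        printf("hc[0] = %f\n", hc[0]);  // expect 3.0

        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }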
I understand that Vega supports AMD's Infinity Fabric. What does that mean, exactly? I'm hearing it would allow multiple GPUs to act as one card, with scalability like their CPUs. Could somebody clear this up? It seems like something nobody is talking about much.
Agreed, this should have been out before Christmas. I'm not sure who is willing to buy a product that sits between the year-old 1070 and 1080, if we go by the FE results. The price difference between the two isn't even that high anymore, unless they go and undercut the GTX 1070's price... HBM2 sounded lovely back when we still had plain GDDR5; now we've got GDDR5X hitting 12 Gbps on some models, so the bandwidth gap is tiny.
Dunno, someone who's not willing to pay the price of a 1070, which atm is priced way too high for its performance target?
HBM was the worst thing AMD thought to bring to consumers.
It's cool that you think better technology is a bad thing.
Agreed, this should have been out before Christmas. I'm not sure who is willing to buy a product that sits between the year-old 1070 and 1080, if we go by the FE results. The price difference between the two isn't even that high anymore, unless they go and undercut the GTX 1070's price... HBM2 sounded lovely back when we still had plain GDDR5; now we've got GDDR5X hitting 12 Gbps on some models, so the bandwidth gap is tiny.
I've been waiting quite some time for an AMD GPU that's quite a bit faster than my 290. The 580 is about on par, or a little better, so I wasn't willing to make the jump. I'll be quite happy with a reference RX Vega (either the top one or the second one) and will slap a waterblock on it.
Out of all of this, we don't actually have any indication that HBM is the issue here. Everybody who preordered a Vega FE got one, and Koduri himself said in the RTG AMA that, unlike with HBM1, they have two suppliers for HBM2. It's also easier to manufacture than HBM1. HBM is a bit of a meme at this point, I think.
The only current failure we can see is 100% in the software department. Until that clears up, a lot of things remain open for the hardware.
Nvidia has its own kind of Infinity Fabric coming soon, and Intel will surely follow suit if they want to stay relevant. http://research.nvidia.com/publication/2017-06_MCM-GPU%3A-Multi-Chip-Module-GPUs The PDF is an interesting read for anyone who wants it. Vega will need lots of optimization and will not be 100% ready at launch. It will likely take six months for the drivers to mature to the point where it can get close to the 1080 Ti in a few cases, and by then Volta will have arrived.
Wow, yeah that paper is awesome for anyone interested in GPU tech.
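On the Infinity Fabric / multi-GPU question above: the closest thing software can touch today is explicit peer-to-peer access between discrete GPUs, where one device maps another's memory over the interconnect. Here is a minimal HIP sketch, assuming a multi-GPU ROCm system; the transparent "several chips presented as one logical GPU" model described in the MCM-GPU paper would hide even this step from applications:

    #include <hip/hip_runtime.h>
    #include <cstdio>

    // Enumerate GPUs and enable peer-to-peer access for every pair that
    // supports it, so one device can read the other's memory directly
    // over the interconnect instead of staging through host RAM.
    int main() {
        int count = 0;
        hipGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            for (int j = 0; j < count; ++j) {
                if (i == j) continue;
                int can = 0;
                hipDeviceCanAccessPeer(&can, i, j);
                if (can) {
                    hipSetDevice(i);              // subsequent calls target GPU i
                    hipDeviceEnablePeerAccess(j, 0);
                    printf("GPU %d can now access GPU %d's memory\n", i, j);
                }
            }
        }
        return 0;
    }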