AMD Shares Instinct MI300X GPU Performance in MLPerf V4.1 AI Benchmark
tunejunky
*this* is why there is no high-end RDNA 4.
and for a very good reason. i also find it amusing that even in the hpc/ai market AMD is giving more vram than Nvidia... and at this point one has to wonder why (especially at these margins and these elevated prices).
all in all extremely competitive and, as has been proven, more effective in supercomputing.
as long as AMD is producing this level of product i can deal with skipping a halo gpu every now and then.
Gomez Addams
This looks good and appears to be a competitive alternative to Nvidia. I suspect the benchmark is written using OpenCL, since it's one of the few multi-platform APIs for this kind of work. The big question for those who use these kinds of devices is the API - if they use CUDA, then AMD's chip is not an alternative for them. If there were a viable way to use CUDA with AMD devices, that would really, really help them. They would likely come in at a lower cost than Nvidia, so they could potentially take a fair amount of business away from Nvidia if that were to happen.
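Worth noting: AMD's ROCm stack already ships HIP, a C++ runtime API that deliberately mirrors CUDA, along with hipify tools that translate existing CUDA source. A minimal sketch of what that portability looks like in practice - the saxpy kernel and variable names here are illustrative, not taken from any benchmark:

```cpp
// Minimal HIP example: the same source builds with hipcc for AMD GPUs
// (ROCm) or, via the CUDA backend, for Nvidia GPUs. Error checking is
// omitted to keep the sketch short.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    hipMalloc(reinterpret_cast<void**>(&dx), n * sizeof(float));  // mirrors cudaMalloc
    hipMalloc(reinterpret_cast<void**>(&dy), n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    const int block = 256;
    const int grid = (n + block - 1) / block;
    saxpy<<<grid, block>>>(n, 3.0f, dx, dy);  // CUDA-style launch syntax works under hipcc
    hipDeviceSynchronize();

    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f (expected 5.0)\n", hy[0]);  // 3*1 + 2 = 5

    hipFree(dx);
    hipFree(dy);
    return 0;
}
```

The same file compiles for either vendor's hardware, which is the closest thing available today to "running CUDA on AMD".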
schmidtbag
AMD has usually offered pretty good server and workstation hardware for reasonable prices. The problem always comes down to drivers.
pharma
Wonder why AMD used the H100 for comparison instead of the H200 in their official marketing? Looking at independent MLPerf reporting of those results, you can see the comparisons against the H100, H200, and B200. In this case, the extra memory advantage against the H100 did not result in a large difference in performance.
barbacot
I feel vibes of "AMD marketing" again, making their competitors look worse by putting them in unfavorable scenarios:
1. Why not test against the B200? - maybe because of the power draw of the B200? - it could be the only valid reason.
2. Why use a different CPU for Nvidia?
No CUDA support - then who cares? Almost all AI work today revolves around CUDA, and that is AMD's fault - they did not foresee this rapid AI development and did not care enough to have a competitive, comparable solution. Now they are too little, too late - a year late, to be precise, since the H100 was launched in March last year.
Undying
Maybe because the B200 price is $30,000-$40,000 and the MI300X is $15,000.
pharma
I really don't know where you get your info, but select clients already have the B200 and will start ramping up in Q4 this year. Volume production will be during Q1 2025. You might want to read the link below for more info on AMD's AI offering.
AMD's AI Plan: The Nvidia Killer or a Wasted Effort? (hpcwire.com)
Neo Cyrus
Fun fact: AMD/Radeon give zero shits about the plebeian "gamer" GPU market. All their focus is on feeding these overpriced MI300X chips to the AI boomers who think they're cashing in; they only want to keep the Radeon brand functional enough to not lose the console contracts and to keep a presence in the home market.
Not that nGreedia aren't the same shit.
Krizby
LOL, I still remember all the hype about the MI250X beating Nvidia's H100, but now the MI300X that came out a year later just barely ties with the H100 in the MLPerf benchmark?
Nvidia's H200 is around 40% faster than the H100 in MLPerf.
The AMD hype train sure gets derailed every time.
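Taking the thread's own numbers at face value (MI300X roughly tied with the H100 per the MLPerf discussion above, ~$15,000 for the MI300X, $30,000-$40,000 for the B200 per Undying's post), here is the break-even arithmetic as a short sketch; everything in it is a forum claim or a ratio derived from one, not verified data:

```cpp
// Break-even perf-per-dollar arithmetic from the prices quoted in this
// thread: if the MI300X roughly ties the H100 and costs ~$15,000, how much
// faster than an H100 would a B200 at $30,000-$40,000 need to be just to
// match the MI300X on performance per dollar?
#include <cstdio>

int main() {
    const double mi300xPrice = 15000.0;  // price claimed in the thread
    const double mi300xPerf  = 1.0;      // ~H100-class, per the thread

    for (double b200Price : {30000.0, 40000.0}) {
        // Equal perf/$ requires: b200Perf / b200Price == mi300xPerf / mi300xPrice
        double breakEvenPerf = mi300xPerf * (b200Price / mi300xPrice);
        printf("B200 at $%.0f must be %.2fx the H100 to match MI300X perf/$\n",
               b200Price, breakEvenPerf);
    }
    // For reference, the thread puts the H200 at ~1.4x the H100.
    return 0;
}
```

At those prices, a B200 would need roughly 2x to 2.7x the H100's throughput just to break even with the MI300X on performance per dollar.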
Neo Cyrus
@user1 The key phrase in what I said was "while within their optimum formula".