Lisa Su confirms Q3 launch for Ryzen, Epyc and NAVI

Some people hear "new" and use it as a placeholder for their wishes. Navi is, was, and will be a mid-range card with mainstream cut-down variants. Don't get it twisted, I'm a big fan of Ms. Su. This is not a slam at AMD or Nvidia; it is what it is. I bought a Radeon VII (which I love) to replace my 1080 Ti for a period of 18 months to two years, until the full iteration of the "chiplet" design, "Arcturus", is on the market. That said, when they have a full Navi/Ryzen laptop, I'll buy it.
I currently have no reason to upgrade my 1500X, and that will probably remain the case until the next-gen consoles are released. But I am still very interested in Zen2's performance. The die shrink, increased clock speeds, higher core/thread count, better RAM support, and Intel's security mitigations lead me to believe Zen2 will be the first truly all-around better CPU in over a decade. All that being said, I'm mostly interested in Navi. All I want is something that can play 4K@60FPS with AA off for a reasonable price.
ttnuagmada:

Hmm, I wonder what they mean by "all new GPU architecture". Everything they've shown until now has indicated Navi was just going to be another GCN iteration.
It can easily be an all-new architecture and be GCN at the same time... just like the fact that a Ryzen 2700 and an Intel 8088 are both considered "x86" because of backwards compatibility, even though the designs are totally unlike each other. We'll see... but you have to realize this is an "investor" press release, not a consumer spec sheet. OTOH, it was always obvious to me that the RVII was a prosumer product (16 GB of RAM) and that another line of gaming GPUs was slated for release in 2H. Lisa Su practically said exactly that to the "pundits" in the RVII release interviews she did, but as usual many of them didn't catch it. I'll be surprised if AMD doesn't skewer nVidia again in the volume discrete GPU markets, as for some reason nVidia simply isn't competitive in the < ~$300 discrete GPU volume markets. Navi will probably be 8 GB for the gaming market, but I really couldn't say.
nevcairiel:

It may be groundbreaking, but possibly not quite as useful. There is only so much hardware in every core, and while a second thread can exploit scheduling gaps to use idle hardware quite well, there are extremely strong diminishing returns from adding more threads to individual cores, never mind software limitations in even keeping more threads busy. In a consumer line like Ryzen, I really don't see the added value of more SMT threads.
Yes, if one SMT thread isn't enough to keep the core busy at 100%, then more than one might be needed. But since AMD has little trouble implementing lots of cores, there doesn't seem to be any need for a second SMT thread per core.
EL1TE:

Been waiting for something to replace my GTX 1080, but NVIDIA is too expensive, and from what I've been reading even top Navi isn't that good either, sigh. Guess I'll have to wait for Intel GPUs so that NVIDIA starts putting out fair prices and better hardware (7nm RTX, hopefully).
"Top Navi" hasn't been shipped, so you might want to avoid reading 100% speculation on unreleased products...;) Such "information" isn't accurate or reliable, as a rule.
BReal85:

Have you seen this? Yes, they are rumours, but the top 2 models will have Radeon VII and Radeon VII +10% performance. The latter means it will be on par with, or maybe a tiny bit faster than, the RTX 2080 and cost $300 less. We will see what we get, but if it's significantly cheaper than its NV counterpart (in any performance tier), it's a win-win situation. At the moment, we don't need faster GPUs, we need CHEAPER GPUs. $330 for a card with RTX 2070 performance? Bixxx please, give me that.
You completely missed my point; sorry that I couldn't make myself clear. When I was talking about Navi, I was referring to the Q3 2019 launch of GPUs from AMD. Talking about 2020 GPUs, even as a rumour, seems too far-fetched to me. Yeah, regarding price/performance, that's what I am saying: Navi doesn't need RTX 2080 levels of performance to be a success. Even the recent rumours about power consumption don't worry me much. If the die is small enough compared to the Nvidia counterparts and the price is adjusted reasonably, it will be a win-win for both us and AMD.
Don't forget the VII is a recycled server card; there may still be something that can perform just as well for a lower price without 16 GB of HBM2.
Interesting, are they saying that there will be a new R7 based on Navi?
sverek:

My internal vodka doesn't get it.
My internal gin is confused ...
cryohellinc:

If, in the end, Navi lands somewhere around RTX 2080 performance for a lower price, that will be a successful product.
No doubt. If it can achieve that, I'll hop on the train next fall.
kings:

Exactly. AMD has already said that Radeon VII would be the top of the line for 2019, so I don't know why some people continue to delude themselves. Feeding these illusions only serves to disappoint in the future!
Why do we absolutely need to be disappointed beyond belief if it doesn't perform better than Vega 56 and 64? Why would expecting it to be a rebrand of the Vega 56 and Vega 64 make it any less of a disappointment? Can't we expect AMD to deliver, and then, if they don't, just shrug it off as more of the same? I dunno, I just don't see the point of not expecting a company to improve upon its past products just to avoid being disappointed. I've had enough disappointment for 2018-2019 with RTX and the Radeon VII; more won't change my overall level of disappointment. So yeah, I'll wait, see, and hope AMD delivers. I don't expect them to release a 2080 Ti equivalent, but if they release a 2070 Ti for a fair price, it will make my day.
Mundosold:

I thought Ryzen 3 would be end of May/first week of June. Sad.
That's when it'll probably be announced, at Computex. Actual release a few weeks later, hopefully in July.
Fox2232:

That's all you've got there? AMD always puts "Next Gen"/"Next-Gen" on products for which they don't have a fitting code name. Please understand that NextGen != next uArch; "generation" is not specific to anything, it's very generic. And those 4 mentioned things are not a time schedule but an approach, and it applies globally, not to a certain time frame or GPU generation. As for the Linux thing, it confirms nothing beyond Navi being mostly driver-compatible with GCN, which was stated in one of the videos long before someone discovered America again on Phoronix. And to fill you in: 7nm is far from early, since Radeon VII has been made for quite a while and has 13B transistors. Phone chips available now have more transistors than an RX 580; IIRC Apple has had 10B+ chips made. 7nm is doing well in yields, as mentioned a few times before. So, do you actually have something that's not a misinterpretation of the source material?
Sorry mate, guess I don't have Lisa Su on stage using the exact words "Navi is still GCN". If you want to ignore all of the evidence out there, then be my guest (two 8-pin connectors on a mid-range part, etc.). When it does launch, and it's literally Polaris with GDDR6 and higher clocks, I'm sure you'll tell us how great it still actually is because you can undervolt it to Nvidia 12nm perf/watt.
And to fill you in: 7nm is far from early, since Radeon VII has been made for quite a while and has 13B transistors. Phone chips available now have more transistors than an RX 580; IIRC Apple has had 10B+ chips made. 7nm is doing well in yields, as mentioned a few times before.
Now let me fill you in. Most chips being made on it are sub-100mm², low-power parts. It's further along than it was a whopping 4 months ago when the Radeon VII launched, but it's definitely light-years from being cheap enough to throw a 10+ billion transistor GPU into a $300 product. All of those phone parts are going into $1000+ phones. I understand that you Fine Wine™ enthusiasts will deny it till the very end, but you're setting yourselves up for disappointment.
ttnuagmada:

Sorry mate, guess I don't have Lisa Su on stage using the exact words "Navi is still GCN". If you want to ignore all of the evidence out there, then be my guest (two 8-pin connectors on a mid-range part, etc.). When it does launch, and it's literally Polaris with GDDR6 and higher clocks, I'm sure you'll tell us how great it still actually is because you can undervolt it to Nvidia 12nm perf/watt.
GCN itself is not a uArch. Polaris is a uArch. Navi is a uArch. Navi can be GCN and still be a completely different uArch from Polaris. It helps to understand what GCN actually is, before making comments about it.
Graphics Core Next (GCN) is the codename for both a series of microarchitectures and an instruction set. GCN was developed by AMD for their GPUs as the successor to the TeraScale microarchitecture/instruction set.
https://en.wikipedia.org/wiki/Graphics_Core_Next
sykozis:

GCN itself is not a uArch. Polaris is a uArch. Navi is a uArch. Navi can be GCN and still be a completely different uArch from Polaris. It helps to understand what GCN actually is, before making comments about it. https://en.wikipedia.org/wiki/Graphics_Core_Next
Right, let's go ahead and ignore, based on semantics, what the last 7 years of AMD's GPUs have been. There's still hope! If you twist enough words around and stick your head deep enough in the sand, maybe you can will Navi into something other than a power-hungry underperformer! After a dozen iterations, they're finally going to deliver all of these massive efficiency and performance boosts, right before moving on to their next-gen architecture!
ttnuagmada:

Right, let's go ahead and ignore, based on semantics, what the last 7 years of AMD's GPUs have been. There's still hope! If you twist enough words around and stick your head deep enough in the sand, maybe you can will Navi into something other than a power-hungry underperformer! After a dozen iterations, they're finally going to deliver all of these massive efficiency and performance boosts, right before moving on to their next-gen architecture!
Again, understanding WHAT you're talking about is rather important, and it's quite clear that you have no idea what you're talking about. An instruction set and a "uArch family" name are both completely different from a uArch itself. Let's see if I can explain this in a way you will actually understand.

"GCN" (Graphics Core Next) is as much a uArch as a submarine is a spaceship. AMD uses GCN as a "uArch family" name, the same as Intel does with "Core" and AMD does with "Zen". Under the "Core family" from Intel, you have more than a dozen different uArchs. Just offhand: Conroe, Allendale, Wolfdale, Kentsfield, Yorkfield, Clarkdale, Lynnfield, Arrandale, Bloomfield, Gulftown, Sandy Bridge, Ivy Bridge, Haswell, Broadwell, Skylake, Kaby Lake, Coffee Lake... All of them fall under the "Core family" of Intel processors.

AMD also uses GCN as an instruction set name, just like "x86", "x87", "3DNow!", "MMX", "IA-64", "x86-64", "SSE", "AVX"... AMD chose to name their uArch family after the instruction set that it uses. The AMD/ATI graphics processors prior to the HD 7000 series were all part of the "TeraScale" family of GPUs. Conveniently, "TeraScale" was also the name of the instruction set that those cards all used, and "TeraScale" did not refer to a specific uArch either.

Yes, Navi will be "GCN": it will be part of the GCN uArch family and use the GCN instruction set. The "Navi" uArch itself will be quite different from Polaris, though.
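The family/uArch/ISA distinction being argued over here can be sketched as a simple data model. This is an illustrative sketch of the naming taxonomy only (the `Microarchitecture` type is invented for the example, not real AMD tooling):

```python
# Illustrative sketch: the same "family" and ISA label can cover
# very different microarchitecture designs.
from dataclasses import dataclass

@dataclass
class Microarchitecture:
    name: str    # specific design, e.g. "Polaris" or "Navi"
    family: str  # marketing/family name, e.g. "GCN"
    isa: str     # instruction set the design implements

polaris = Microarchitecture("Polaris", family="GCN", isa="GCN")
navi    = Microarchitecture("Navi", family="GCN", isa="GCN")

# Sharing a family and an ISA does not imply sharing a design:
assert polaris.family == navi.family and polaris.isa == navi.isa
assert polaris.name != navi.name
```

The same shape fits the Intel example in the post: Sandy Bridge and Skylake would both carry `family="Core"` and `isa="x86-64"` while being different designs.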
sykozis:

Again, understanding WHAT you're talking about is rather important. [...] Yes, Navi will be "GCN": it will be part of the GCN uArch family and use the GCN instruction set. The "Navi" uArch itself will be quite different from Polaris, though.
Like I said, if semantics makes you feel better about the disappointment you're about to experience, then by all means go ahead and pretend that the specific definition of a micro-architecture (I literally never even used the term) will change the fact that the base design is the same as it was in 2012 (wow, it's almost like a complete change of instruction set coincides with a change in hardware), and coincidentally happens to line up with what everyone everywhere has referred to as "GCN" for the last 7 years. Maybe if you nitpick enough people about how they've been using terminology wrong this entire time, it will give Navi the power to overcome being a power-hungry underperformer that just happens to look an aaaawful lot like all of the previous GPUs that have used the "GCN instruction set". Thanks for clearing all of that up! I'm sure all of those Wikipedia cut-and-pastes make you feel much more intelligent now that you've won an argument no one else was having. From now on, I'll specifically refer to all of the AMD GPUs made since 2012 as "that unchanging GPU design that also happens to use the GCN instruction set", so as not to cause any confusion.
AMD uses GCN as a "uArch family" name.
Oh wow! It's almost like that's how I and everyone else have been using the term in this thread the entire time, and your post was completely pointless and tryhard!
Fox2232:

And funnily enough, I could see you and others claiming that it is not the GPU itself, but the 4x 4GB of HBM2, that makes up a large portion of the Radeon VII price tag 😀 How those things twist like a snake when needed...
I'm glad you so freely admit that you literally need to put words in my mouth to make an argument.
And your statement is again false; next time make bad claims with a plausible-deniability qualifier. Sorry, not all of those parts go into $1000 phones. I have a lovely new phone with a Snapdragon 855 for $440 (no discount from the operator or anything like that; it's from a regular shop). So tell me how much of that price is the chip made on 7nm. How much is the 6GB of RAM? The three cameras (48Mpix, another with optical zoom, and a last one with a wide-angle lens)? How much is the display with the built-in fingerprint scanner? And so on.
fair enough
Then tell me how much money the manufacturer, supply chain, and shops made on that phone. Once you work the end cost back down to that 73mm² chip made on 7nm, you'll realize that it is not very expensive to make, since everyone in the process of making such a phone does it to make money.
If Navi is literally nothing but a 7nm Polaris, then it will still be twice that size. You've mostly been insisting that we should be expecting efficiency improvements, the whole purpose of which would be to allow them to make a bigger chip. So which is it? Is AMD making a shrunken Polaris-sized chip that they're going to blast with voltage and crank as high as it will go, allowing them to avoid the expense of a big 7nm chip? Or did they improve the efficiency of the design, allowing them to make something bigger, which would be cost-prohibitive?
nVidia's top-tier GPUs are on that 12nm you mentioned. They made the biggest GPU they could on a given manufacturing process, the same way they did 20 years ago when they almost went out of business.
This comment shows a complete and fundamental lack of understanding of why Nvidia is so far ahead of AMD. Nvidia's design is so efficient that they were able to scale it up to the physical limits of the 12nm fab itself without it being too power-hungry. They didn't have to do it at all; they did it because they could. They certainly weren't feeling any pressure from AMD to do so, that's for sure. AMD can't even match that performance on a smaller node, because they couldn't have made Vega 20 bigger even if they wanted to.
And then you want to throw ad hominems at people, like "you Fine Wine™ enthusiasts". I actually spent a lot of time explaining, here and elsewhere, that nVidia is not overpricing those new GPUs. And the same can be said about my view of the new things built into Turing.
fair enough
I look at things for what they are and judge that. I am not saying that Navi can't be another GCN; it is possible, but there is no clear evidence. And the known things hint that the amount of work put into Navi is worth around 4 generations of GCN, plus details from AMD's GPU-related patents... Maybe it is still GCN, but then it went through a redesign so big you might as well stop calling it that. Or AMD sat on their hands all those years (but that would mean their statement that even CPU-division people went to help with Navi was false).

So we are at square one again. Your statements were mostly false, and a few misdirected at best. Do a better job at whatever your intent is.
My statements are based on history and the large amount of information that's out there. We saw this exact denial with both Polaris and Vega prior to their release. We have a Navi PCB with two 8-pin connectors. We have AMD admitting that the Radeon VII will still be their top-tier product. We have 7 years of history of small iterations. We have slides from AMD that don't even attempt to make Navi look like it's going to be anything special. The writing is on the wall. Do these things individually prove anything? Of course not, but the bulk of them together says a whole heck of a lot.
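The die-cost argument both posters are circling (a 73mm² phone SoC vs a 300mm²-class GPU on the same node) can be sketched with the textbook dies-per-wafer and Poisson yield approximations. All numbers below are illustrative assumptions, not actual foundry pricing or defect data:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Standard geometric estimate: usable wafer area divided by die area,
    # minus an edge-loss correction term.
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2,
                      defect_density_per_cm2):
    # Poisson yield model: fraction of dies expected to have zero defects.
    yield_frac = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100)
    return wafer_cost / (dies_per_wafer(wafer_diameter_mm, die_area_mm2)
                         * yield_frac)

# Hypothetical numbers: a 300mm wafer at $10,000 with 0.2 defects/cm²,
# comparing a ~73mm² phone SoC against a ~331mm² GPU die.
small = cost_per_good_die(10_000, 300, 73, 0.2)
big = cost_per_good_die(10_000, 300, 331, 0.2)
print(f"~${small:.0f} per small good die vs ~${big:.0f} per large good die")
```

The point the model illustrates is that per-die cost grows faster than die area, because a bigger die both fits fewer times on the wafer and is more likely to catch a defect; a cheap small phone chip on 7nm therefore says little about the economics of a large GPU on the same node.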
Fox2232:

You'll be as surprised as with the cellphone example.
Oh really? You extrapolate this from a single phone with a tiny die?
Please do not go into that land. It is pointless to even fantasize that AMD really sat on their hands doing nothing since they finalized the Polaris design.
They've done very little since they finalized Hawaii. I'm not sure where your confidence comes from.
And once normalized, nVidia's power-efficiency advantage is exactly the same as AMD's compute advantage.
Only if you ignore RT/Tensor cores to make this claim
Think about it for a moment. (Yes it is 7nm vs 12nm, but let's not go back to 7nm Polaris=Navi ideas.)
Bold is what you need to think about.
Yes, it has been known for years that Navi will first come as mainstream and may not come as high-end. But remind yourself of what's on that slide: "Q3". It means that Navi is not replacing Radeon VII in Q3, and why would it? Radeon VII is a 16GB-equipped card meant for people who need it.
Radeon VII had its compute performance gimped; it's a gaming card. It being their top gaming card means that Navi sure as crap ain't gonna be matching a 2080. Mark my words: we'll get Vega 64 performance at Nvidia 12nm perf/watt.
Out of the pure kindness of my heart, I'll remind you and some others by showing a few images. It may light some bulbs:
fair enough
Fox2232:

Why not? That tiny die has about the same transistor count as the RX 580.
Because Vega 20, with not even twice as many transistors, is 4x larger.
Really, it is 7nm Vega, a design released over 2 years ago. Enough time for AMD to ship another 2 GCN generations, as is visible from the list. That's why I mentioned that expecting Navi to be some kind of no-work-done shrink is not going to get you anywhere in this discussion. (Or you can just claim such a thing directly, since you point your finger in that direction as often as you can.)
I obviously know it's not going to be exactly the same, but it's just going to be another 5-10% architectural efficiency gain, a drop in the bucket towards catching up with Nvidia.
Radeon VII is still 30% faster in compute than the equally priced RTX 2080 in FP16/32 and 10 times as fast in FP64. Radeon VII's FP16/32 compute matches the RTX 2080 Ti, and it is still 8 times as fast in FP64, while the RTX 2080 Ti has 41% more transistors. By all means, AMD invested more into compute capability and nVidia into gaming. But since nVidia is finally delivering reasonable compute performance, gaming is going to benefit from compute power one way or another.
You need look no further than V100 to see that Nvidia could easily pour on the compute performance in the gaming sector the second it thought it needed to without losing a step in gaming performance efficiency.
I'll ask you this: why do you think Navi was delayed by 6 quarters? Is it because AMD did nothing? Is it because AMD did something, but it never worked? Is it because they waited for 7nm to become economical? (Try to step away from the routine about AMD being incompetent; Zen settled that a long time ago.)
7nm becoming economically viable is why it was delayed. They clearly couldn't do much more with 14nm, and since they make such tiny architectural efficiency gains with every release, they have to rely on a node shrink to release something that makes any sense. It's that simple.
Moderator
I've got no issues if you want to argue here, but keep it civil please, guys. There have been a couple of posts here that are borderline rude.