Intel Might Drop 10nm node for Desktop processors

It's a shame Ice Lake does not clock very high, as the increase in IPC is very good. You get good IPC but not much clock speed though; one hand giveth and the other taketh away, amen. I assume 7nm has both IPC and clocks. AMD needs to make hay while the sun shines.
Intel's 14nm is still a competitive process performance-wise, and if they use their $3B for competitive pricing, then the next year is still interesting, and even more so once Intel's 7nm arrives (which should be competitive with TSMC's 5nm at that point).
LOL. Epic, who could have thought. πŸ™„
Andy Watson:

It's a shame Ice Lake does not clock very high, as the increase in IPC is very good. You get good IPC but not much clock speed though; one hand giveth and the other taketh away, amen. I assume 7nm has both IPC and clocks. AMD needs to make hay while the sun shines.
By 2022 AMD is going to be on Zen 5 at 5nm, and Ice Lake's IPC is a bit worse than what's expected from Zen 3, given the IPC gains of Zen 2 over CFL. In the meantime we're going to have Zen 3 on 7nm+ and Zen 4 at 6/5nm EUV while Intel still has 14nm Skylake.
nevcairiel:

Intel's 14nm is still a competitive process performance-wise, and if they use their $3B for competitive pricing, then the next year is still interesting, and even more so once Intel's 7nm arrives (which should be competitive with TSMC's 5nm at that point).
No, Intel's 14nm isn't competitive any more. You cannot compare an 8-core CPU (9900K) with a 12-core (3900X) or a 16-core (3950X). And that includes the next round in 2020 (Zen 3) and the CPUs after that in 2021 (Zen 4) with DDR5 & PCIe 5.0.
Fediuld:

No, Intel's 14nm isn't competitive any more. You cannot compare an 8-core CPU (9900K) with a 12-core (3900X) or a 16-core (3950X). And that includes the next round in 2020 (Zen 3) and the CPUs after that in 2021 (Zen 4) with DDR5 & PCIe 5.0.
I have both a 3900X and a 9900K, and yes, the 9900K is still very competitive in performance. Thanks to its very low latency it beats the 3900X in many scenarios, and overclocked even more so. Owners of Ryzen like to think performance is all about running Cinebench 24/7. It's not 😉 A 3900X with lower latency than the 9900K would be an epic deal. PS: I like both the 3900X and the 9900K. I use them for different things 🙂
Fediuld:

No, Intel's 14nm isn't competitive any more. You cannot compare an 8-core CPU (9900K) with a 12-core (3900X) or a 16-core (3950X).
That's why you compare an 8-core to an 8-core. And Intel will also sell you a 10/12/14/18-core at greatly reduced prices soon (and don't start with the cost of the X299 platform; X570 is also rather expensive). All I care about is performance and price, and with the much cheaper HEDT refresh coming up, that is very much competitive. AMD may have caught up with Intel, but if Intel reduces their prices, like they are doing with the HEDT refresh, they are very much still competitive.
Fediuld:

And that includes the next round in 2020 (Zen 3) and the CPUs after that in 2021 (Zen 4) with DDR5 & PCIe 5.0.
And you don't know what happens next year or even in two years.
nizzen:

Owners of Ryzen like to think performance is all about running Cinebench 24/7. It's not 😉
Ryzen owner here; who gives a f*ck about benchmarks? I bought Ryzen because the price was half of the Intel equivalent, and Intel was making big news with security vulnerabilities at the time. I don't care which is faster; I care about price/performance ratio and security. I'm on a renewal cycle of 5 to 8 years. Beat that. 😀
7nm doesn't work properly. Intel is learning from Ryzen's troubles, and is smart to make something that works rather than beta test their technology on consumers. Now if only they started sacrificing wafer space and stopped compacting their cores onto such a tiny die area (it lowers their manufacturing cost, more dies per wafer), their chips would be easier to cool and work better. In case you didn't know, the 9900K and those other chips are super small, dwarves compared to Ryzen. Intel should understand that you can't properly cool a pinhead.
I upgraded my 7700K at 5.05 GHz to a Ryzen 3900X. All I can say is that even though I lost a bit of single-core performance (not even that much: 218 single-core in Cinebench R15 vs 210 with the 3900X), my games don't stutter anymore, and I can stream with a one-PC setup without performance loss. My system is running great with a 1080 Ti; there is no bottleneck, or almost none. And with the new consoles coming with Ryzen and Navi, I'm looking forward to the high-end GPUs; it'll probably be the way to go for gaming for the next 5 years.
kakiharaFRS:

7nm doesn't work properly; cheating and lying on clocks, and using advanced tricks to downgrade some cores and favor others, is not working properly. Intel is learning from Ryzen's troubles, and is smart to make something that works rather than beta test their technology on consumers. Now if only they started sacrificing wafer space and stopped compacting their cores onto such a tiny die area (it lowers their manufacturing cost, more dies per wafer), their chips would be easier to cool and work better. In case you didn't know, the 9900K and those other chips are super small, dwarves compared to Ryzen. Intel should understand that you can't properly cool a pinhead.
This post is nonsense. The manufacturing process has nothing to do with the techniques that AMD is employing to maximize performance on its processors. That's not to mention that Intel also has favored cores, Intel is using its 10nm process on consumer chips (the one that is basically broken on desktop and seemingly being abandoned), and my 7820X almost never ran at Turbo Boost 3.0 clocks even when I spent 3 hours trying to use Intel's shitty boost application.
Denial:

This post is nonsense. The manufacturing process has nothing to do with the techniques that AMD is employing to maximize performance on its processors. That's not to mention that Intel also has favored cores, Intel is using its 10nm process on consumer chips (the one that is basically broken on desktop and seemingly being abandoned), and my 7820X almost never ran at Turbo Boost 3.0 clocks even when I spent 3 hours trying to use Intel's shitty boost application.
1) Yes it does, as the trickery (a very brief spike of MHz at idle, advertised as a "fix", which is a cheat) is directly linked to the fact that AMD advertises "clocks" the same way Intel does, except on Intel the cores run at max speed all the time, while AMD's 7nm cannot maintain those high clocks. 2) The 7820X seems to have no problem overclocking to around 4.3 GHz; either you got a bad chip or it's coming from your setup (last time I checked, using an app to OC is garbage; you do it from the BIOS, as each motherboard/memory/CPU is different and Intel's app doesn't account for that). 3) Those forums are filled with people overjoyed that AMD has "fixed" their clock speeds, when it hasn't; it created a fake spike so that people who only check the "maximum value" on their statistics page are happy. Sorry, that's not how I work; I read graphs of all the cores, and they'd better do what they should.
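For what it's worth, you don't need a monitoring app's "maximum value" readout at all: on Linux the current per-core clock can be read straight out of /proc/cpuinfo. A minimal sketch (the function name is mine, and it assumes the standard `processor` / `cpu MHz` field layout of that file):

```python
# Minimal sketch: read each logical core's current clock from /proc/cpuinfo
# (Linux), instead of trusting a monitor's single peak-value readout.

def parse_core_mhz(cpuinfo_text):
    """Map each logical core ('processor' id) to its reported 'cpu MHz'."""
    clocks = {}
    core = None
    for line in cpuinfo_text.splitlines():
        if line.startswith("processor"):
            core = int(line.split(":")[1])
        elif line.startswith("cpu MHz") and core is not None:
            clocks[core] = float(line.split(":")[1])
    return clocks

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            for core, mhz in sorted(parse_core_mhz(f.read()).items()):
                print(f"core {core}: {mhz:.0f} MHz")
    except FileNotFoundError:
        print("no /proc/cpuinfo here (not Linux)")
```

Sampling this in a loop gives you the per-core graph over time the post is talking about, rather than a single momentary spike.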
kakiharaFRS:

7nm doesn't work properly. Intel is learning from Ryzen's troubles, and is smart to make something that works rather than beta test their technology on consumers
7 nm works fine, don't you worry too much about it. Intel should learn from their own mistakes, like selling the same CPUs to consumers year after year for the past decade, just because competition is lacking. My guess is that Intel became so lazy and money was coming so big and so fast anyway, that they haven't actually invested properly in 10 nm node to start with. And when Ryzen struck, it was already too late.
D3M1G0D:

This makes sense. Releasing 10nm chips on desktop now would mean a significant step back in clocks, which would drive away consumers. Their 14nm process is still competitive and they can coast on it for a while longer (perhaps they'll release some low-power or specialty 10nm chips on desktop in the future, but not gaming chips). And owners of Intel CPUs like to think performance is all about emulators and DOSBox. It's not 😉 All kidding aside, I bought my Ryzen CPUs for their great value and computing performance. I don't even bother to run benchmark apps at all since theoretical numbers mean nothing; what matters is real-world performance, and Ryzen certainly delivers (I can show you my computing numbers if you want 😉).
Hey, my 3700X runs emulators great! So even that isn't a real advantage for Intel anymore; I wonder what's left for the Kool-Aid kids to brag about 😛
Can't be true; if it takes them 2-3 more years to bring the heat, Ryzen would have 70%+ of the gaming-desktop market by then.
Moderator
kakiharaFRS:

1) Yes it does, as the trickery (a very brief spike of MHz at idle, advertised as a "fix", which is a cheat) is directly linked to the fact that AMD advertises "clocks" the same way Intel does, except on Intel the cores run at max speed all the time, while AMD's 7nm cannot maintain those high clocks. 2) The 7820X seems to have no problem overclocking to around 4.3 GHz; either you got a bad chip or it's coming from your setup (last time I checked, using an app to OC is garbage; you do it from the BIOS, as each motherboard/memory/CPU is different and Intel's app doesn't account for that). 3) Those forums are filled with people overjoyed that AMD has "fixed" their clock speeds, when it hasn't; it created a fake spike so that people who only check the "maximum value" on their statistics page are happy. Sorry, that's not how I work; I read graphs of all the cores, and they'd better do what they should.
D...Do you not know how PBO2 and XFR2 work?
Fediuld:

No, Intel's 14nm isn't competitive any more. You cannot compare an 8-core CPU (9900K) with a 12-core (3900X) or a 16-core (3950X). And that includes the next round in 2020 (Zen 3) and the CPUs after that in 2021 (Zen 4) with DDR5 & PCIe 5.0.
It is competitive, just not very. The only reason it doesn't seem that way is because Intel's pricing is ridiculous. TSMC's 7nm is, in a practical sense, superior, due to how crazy efficient it is. But Intel's 14nm allows for reliably high clock speeds. Depending on what your needs are, either choice is good.
D3M1G0D:

This makes sense. Releasing 10nm chips on desktop now would mean a significant step back in clocks, which will drive away consumers. Their 14nm process is still competitive and they can coast on it for a while longer (perhaps they'll release some low power or specialty 10nm chips on desktop in the future, but not gaming chips).
I said a while ago how Intel was digging themselves deeper into a hole: for every generation they couldn't get 10nm out the door, they had to keep increasing clock speeds to make it seem like they were making progress. But every time they cranked up the clock speeds, they made it that much more difficult for 10nm to match the performance. That being said, Kaby Lake is where they screwed up the most. That whole product lineup was basically just a Skylake refresh with higher clocks (and higher prices, IIRC). If they had just gone straight to what Coffee Lake became, they could've actually lowered clock speeds, since the additional cores were a real upgrade. But... we don't know what their 10nm node is capable of. Perhaps even Coffee Lake lowering boost clocks by a few hundred MHz wouldn't have been enough.
It's just a rumor, denied by Intel... It's open season on rumors... the newest:
Inno3D Mentions a GeForce RTX 2080 Ti Super On Its Website
This was simply wccftech making stuff up to get clicks. They didn't even pull the article. They state stuff as rumor, when what they really mean is that they are starting a new rumor with nothing to back it up.
JamesSneed:

This was simply wccftech making stuff up to get clicks. They didn't even pull the article. They state stuff as rumor what they really mean is they are starting a new rumor with nothing to back it up.
Yeah, and people start going crazy again about nodes, nanometers, codenames of architectures, Ryzen, AMD, Intel, xxxLake, 7, 7++, 9++, 10++, 14++, and so on... Meanwhile, in the Linux world, not even Linux is what it was before ("Joe Vennix of Apple Information Security found a significant security vulnerability (CVE-2019-14287) in the Linux sudo utility that could have allowed other users to gain unauthorized administrative ("root") privileges on a Linux machine.") and it's starting to pick up the bad behavior of MS Windows, so clearly the end of the world is approaching...
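Side note on that CVE: the bug (the `sudo -u#-1` Runas user-ID bypass) was fixed in sudo 1.8.28, and it only bit configurations whose sudoers rules allowed running as any user via the ALL keyword. A minimal sketch of a version check, not any real scanner's logic; the function names are mine:

```python
# Hypothetical helper: flag sudo versions affected by CVE-2019-14287
# (the "sudo -u#-1" Runas bypass), which upstream fixed in sudo 1.8.28.
# Note: exploitability also required a sudoers rule with the ALL Runas keyword.

def version_tuple(v):
    """'1.8.27p1' -> (1, 8, 27); patch-level suffixes are ignored here."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if not ch.isdigit():
                break
            digits += ch
        parts.append(int(digits or 0))
    return tuple(parts)

def vulnerable_to_cve_2019_14287(sudo_version):
    # Anything older than the 1.8.28 fix release is potentially affected.
    return version_tuple(sudo_version) < (1, 8, 28)

print(vulnerable_to_cve_2019_14287("1.8.27"))  # True: needs the upgrade
print(vulnerable_to_cve_2019_14287("1.8.31"))  # False: already patched
```

In practice you'd feed this the output of `sudo --version` from the machine you're auditing.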
D3M1G0D:

Intel was betting on their process tech, which isn't a bad bet to be honest - I don't think anybody could have imagined the troubles they would have with it. Intel had always been able to rely on their manufacturing advantage to keep them ahead of the pack but those days are likely over; TSMC has grown rich from Apple and Samsung money and I think they will be the leader in process tech going forward (as long as we keep buying new smartphones, they'll have the money to keep extending their lead). I've said this before but AMD couldn't have chosen a better time for a comeback. Intel still seems to believe in 10nm and wants to release desktop products, but I don't think anybody wants it. Even if Ice Lake at 4.5 GHz beats out a 9900K at 5 GHz, I think most Intel fans would prefer the latter for the boasting rights of higher clocks. The average consumer would also be put off by the lower clocks as they know nothing about IPC. Like I said before, if they release 10nm for desktop it will likely be low power chips, similar to what they're doing with mobile 10th gen.
Intel was way too aggressive with the density they tried to achieve using quad patterning for their 10nm process. TSMC's first go at 7nm also used quad patterning, but it was not as dense as Intel's 10nm, and on top of that only some of the up-front processes were at the 7nm feature size. People bag on Intel for their failures at 10nm, but frankly I think most miss that they were trying to push the envelope too far without moving to EUV. I think you will see all the fabs learn from this and take a TSMC-like approach where they iterate the process in small steps year over year, because these nodes are extremely complex. As an example of the complexity: just to create the EUV light source, it takes a room-sized laser firing at tin droplets that are being shot at 100 mph.