Samsung Talks About Chip Fab Production Roadmap up to 3 nanometers

Sheesh... Many of us are still running 32nm Sandy Bridge, 22nm Ivy/Haswell, and the occasional 14nm part, and these guys are already discussing 3nm. I wonder what kind of processing power will be possible with mature "3nm" tech. Trillions of transistors?
wavetrex:

Sheesh... Many of us are still running 32nm Sandy Bridge, 22nm Ivy/Haswell, and the occasional 14nm part, and these guys are already discussing 3nm. I wonder what kind of processing power will be possible with mature "3nm" tech. Trillions of transistors?
Well, Intel has been dropping nanometers almost every year, yet the performance increase has been almost unnoticeable, so lithography isn't what matters most; architecture is.
FrostNixon:

Well, Intel has been dropping nanometers almost every year, yet the performance increase has been almost unnoticeable, so lithography isn't what matters most; architecture is.
What matters most to Intel is profit (as it is for any company that plans to stay alive). The smaller the process technology, the more dies they get out of a single wafer. Of equal importance is improved energy efficiency, which is a decisive selling point in many market sectors. So, yeah, while Intel had no interest in developing their architecture, they did try to develop the process technology.
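To put rough numbers on that dies-per-wafer point, here's a quick sketch using the common first-order approximation. The dies_per_wafer helper, the 150/100 mm² die sizes, and the 300 mm wafer are made up for illustration, and yield is ignored:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: usable wafer area divided by die area,
    minus a correction for partial dies lost along the wafer edge."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Hypothetical shrink of the same design from 150 mm^2 to 100 mm^2 on a 300 mm wafer.
for area in (150, 100):
    print(f"{area} mm^2 die -> ~{dies_per_wafer(300, area)} dies per wafer")
```

Even before yield enters the picture, the smaller die gives roughly 50% more candidate dies from the same wafer, which is where the per-unit cost advantage comes from.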
FrostNixon:

Well, Intel has been dropping nanometers almost every year, yet the performance increase has been almost unnoticeable, so lithography isn't what matters most; architecture is.
Don't discount the importance of lithography; Nvidia and AMD gained a ton of performance by switching to a smaller node just because it made higher frequencies possible, while the architectural differences were minimal. It's worth mentioning that the node jump was pretty substantial, however. On the other hand, Intel has been the leader in lithography, yet the performance wasn't there because they were sitting on their laurels instead of actually improving their CPUs. Ryzen was a big comeback for AMD, that is absolutely true. But if Intel had properly worked on their CPUs during these past years, Ryzen would've simply been a competitor. Instead, Ryzen is stepping on Intel's face over and over again, and I think it's going to get even more brutal next generation.
I'm more interested in what happens past 1nm. Will there be a nano-centimetre? Quantum computing? Or something radically different?
To me this is Samsung responding to TSMC, essentially saying "we're bigger, badder, and all-around better", even if it's not true. And folks... you are ignoring the elephant in the room: Apple. Apple has been paying incentives for process shrinkage ever since they went A8, over $2 billion to date, and they might ditch Intel for regular computing sooner than thought. Both their own (future) designs and AMD's are testing faster at lower power as SoCs. And if you haven't noticed, microprocessors are becoming more and more SoC-like (esp. Ryzen-based ones).
cryohellinc:

I'm more interested in what happens past 1nm. Will there be a nano-centimetre? Quantum computing? Or something radically different?
Nano-centimeter doesn't make sense; the picometer is the next step down. Quantum computers are a very different "species" of computer. They don't use traditional transistors or binary calculations, so their development doesn't really have much in common with silicon scaling. I personally don't see quantum computers being available for home use in the foreseeable future, at least not as they're used now. They're ideal for science-based calculations on massive and complex numbers, but not a whole lot else. Much like a CPU vs a GPU, quantum computers are good at some things and worse at others.
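On the unit nitpick, a quick sanity check; this is plain SI prefix arithmetic, nothing process-specific. SI doesn't allow compound prefixes, which is why "nano-centimetre" isn't a real unit and the picometer is the next named step down:

```python
# SI length prefixes, as fractions of a metre (subset relevant here)
prefixes = {"milli": 1e-3, "micro": 1e-6, "nano": 1e-9, "pico": 1e-12}

nm, pm = prefixes["nano"], prefixes["pico"]

print(f"3 nm = {3 * nm / pm:.0f} pm")  # 3 nm = 3000 pm

# A hypothetical "nano-centimetre" would be 1e-9 * 1e-2 m = 1e-11 m = 10 pm,
# but SI forbids compound prefixes, so no such unit exists.
print(f'"nano-centimetre" = {1e-9 * 1e-2 / pm:.0f} pm')
```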
tunejunky:

and if you haven't noticed, microprocessors are becoming more and more SoC-like (esp. Ryzen-based ones).
I would actually argue Ryzen is the least SoC-ish of mainstream processors, whereas ARM is the most. Almost all of your phone's capabilities are packed into one chip; everything else is just power regulation, connectors, and sensors that need to be positioned elsewhere. Intel also has some SoCs that don't have an external chipset.
Well, 3nm talk... not sure how much harder they can push it after that... they'll have to change the material to something more efficient than sand. I guess the ultimate shrink is down to 1-atom thickness? Good luck going that thin though 😛
Venix:

Well, 3nm talk... not sure how much harder they can push it after that... they'll have to change the material to something more efficient than sand. I guess the ultimate shrink is down to 1-atom thickness? Good luck going that thin though 😛
Single-atom transistors or switches are a real thing. The tricky part is figuring out how to make use of them, let alone on a mass-produced scale. Just because you can go smaller, that doesn't mean you'll benefit from it. This is why I think Intel has been taking so long with 10nm - I'm sure they had it working well over a year ago, but it resulted in lower clocks. The advantages of such a die shrink are outweighed by the cons of slowing down the CPU.
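To make that clock-vs-density trade-off concrete: treating single-thread performance as roughly IPC × clock, here's a toy break-even calculation. The percentages are made up for illustration, not actual Intel 10nm figures:

```python
def breakeven_ipc_gain(clock_loss: float) -> float:
    """IPC improvement needed to offset a relative clock-speed loss,
    assuming performance scales as IPC * clock."""
    return 1.0 / (1.0 - clock_loss) - 1.0

for loss in (0.05, 0.10, 0.20):
    print(f"{loss:.0%} lower clocks -> need {breakeven_ipc_gain(loss):+.1%} IPC to break even")
```

If the new node also improves power and area, a shrink can still be worth it even at break-even performance; the complaint above is about the cases where it isn't.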
Fox and shim, fair points. What I wanted to say is that we really need to find a new material, and then again, how long would it take to reach the nm limits? I believe we are approaching an era where fabs won't be able to shrink any more... except... if Ant-Man helps us? 😛
Venix:

Fox and shim, fair points. What I wanted to say is that we really need to find a new material, and then again, how long would it take to reach the nm limits? I believe we are approaching an era where fabs won't be able to shrink any more... except... if Ant-Man helps us? 😛
Finding a new material is easier said than done. Keep in mind that it is no coincidence that silicon is used for transistors. It has all of the right properties to make for a good one: it's abundant and cheap, it's a semiconductor, it has relatively small atoms, it's tetravalent (this is important), and the other elements it can be doped with have been well researched at this point. So, take a look at all of the other potential candidates:
* Tin - too expensive, too low a melting point, and too conductive.
* Lead - a biohazard, large atoms, and too conductive.
* Flerovium - synthetic, and therefore utterly useless.
* Germanium - can be and actually has been used in transistors (in fact, it was used for the first ever transistor), but it's expensive and pickier about the manufacturing process (and in case you're not aware, silicon is pretty damn picky). It might be good for special-use cases, but not for mass production.
* Looking beyond "group 14", there are potential candidates like gallium arsenide, but I get the impression those are only suitable for proofs of concept rather than practical approaches. They definitely wouldn't help in terms of reducing transistor size (besides, gallium is relatively expensive and arsenic is a biohazard, so that doesn't help).

So, that just leaves us with carbon. Carbon is being investigated for use in transistors, and it may well be the successor to silicon. The problem with carbon is figuring out a cost-effective way to manufacture transistors from it, because otherwise the element itself is very cheap and abundant.

Anyway, I don't think we really need to ditch silicon any time soon. I think one of the reasons so many companies are investing in AI lately is that they're trying to use AI to create new processor architectures. An AI could notice something humans may have never thought of before and get us a lot more efficiency and speed in our designs. Besides, look at a CPU architecture vs a GPU architecture with the same number of transistors - depending on the task, one will decimate the other. But who says it has to be that way? It may be possible to create a design that obsoletes both CPUs and GPUs (as we know them). Such a design could have a lot of potential benefits: with everything in shared memory (integrated GPUs still work relatively independently of the CPU), there would be a lot of time saved not needing to communicate over PCIe. Maybe such a CPU could be modular, where you could basically add more cores over PCIe if you really needed to.
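For a feel of how much time a discrete CPU/GPU setup spends just moving data around, here's a back-of-the-envelope copy-time estimate over a few links. The bandwidth figures are nominal peaks for illustration; real throughput is lower, and latency and protocol overhead are ignored:

```python
def copy_time_ms(size_gb: float, bandwidth_gb_per_s: float) -> float:
    """Time in milliseconds to move `size_gb` GB over a link of the given bandwidth."""
    return size_gb / bandwidth_gb_per_s * 1000

# Nominal peak bandwidths in GB/s (approximate, for illustration only).
links = {
    "PCIe 3.0 x16": 16.0,
    "PCIe 4.0 x16": 32.0,
    "dual-channel DDR4-3200": 51.2,
}

for name, bw in links.items():
    print(f"1 GB over {name}: ~{copy_time_ms(1, bw):.0f} ms")
```

With a genuinely shared memory space the copy can often be skipped entirely, which is a bigger win than the raw bandwidth difference.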
...actually, Ryzen is the fruition of decades of SoC design, as is "Infinity Fabric". The entire scalable design was done simply because SoCs were unwieldy and less efficient than thought, and were (previously) the largest single market for AMD. Ryzen facilitated much more advanced SoCs (see Xbox/PlayStation) and industry-specific SoCs, but the design of the CPU itself was driven by SoC designers.
tunejunky:

...actually, Ryzen is the fruition of decades of SoC design, as is "Infinity Fabric". The entire scalable design was done simply because SoCs were unwieldy and less efficient than thought, and were (previously) the largest single market for AMD. Ryzen facilitated much more advanced SoCs (see Xbox/PlayStation) and industry-specific SoCs, but the design of the CPU itself was driven by SoC designers.
Infinity Fabric doesn't make a product an SoC; IF is just a feature that makes SoCs much easier to build. So AMD's design has the potential to be the "most SoC-ish product ever made", but currently AMD does not hold that title. What makes an SoC is how many components you integrate into a single chip, hence the name. A Threadripper CPU with a discrete chipset, a discrete GPU, discrete RAM, etc. is barely an SoC, because so many core components live in separate chips. Compare that to some ARM processors, where the CPU, GPU, USB controller, sensor controllers, storage controller, PCIe lanes, and even the RAM are all integrated into one unified package. That is what makes an SoC; it's literally the entire system on a chip.
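One crude way to picture the "how much is on one chip" distinction is to simply list which blocks sit on the die/package versus on the board. The lists below are simplified and purely illustrative, not tied to any specific product:

```python
# Simplified, illustrative component lists; real products vary.
arm_phone_soc = {
    "on_chip": ["CPU cores", "GPU", "memory controller", "USB controller",
                "storage controller", "image signal processor", "modem (often)",
                "RAM (stacked on the package)"],
    "off_chip": ["power management IC", "antennas", "sensors"],
}

desktop_cpu_platform = {
    "on_chip": ["CPU cores", "memory controller", "some PCIe lanes",
                "integrated GPU (sometimes)"],
    "off_chip": ["chipset", "discrete GPU", "RAM DIMMs", "most USB/SATA ports",
                 "audio codec", "network controller"],
}

for name, parts in (("ARM phone SoC", arm_phone_soc),
                    ("desktop CPU platform", desktop_cpu_platform)):
    print(f"{name}: {len(parts['on_chip'])} major blocks on-chip, "
          f"{len(parts['off_chip'])} on the board")
```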
Even if carbon is viable, there is still the matter of production. To my understanding, silicon fabs cost billions to build; building a new fab to make carbon transistors is not something Samsung, GloFo, TSMC, or Intel will invest in if the result is not vastly superior, and they have to be sure it will work!
Venix:

Even if carbon is viable, there is still the matter of production. To my understanding, silicon fabs cost billions to build; building a new fab to make carbon transistors is not something Samsung, GloFo, TSMC, or Intel will invest in if the result is not vastly superior, and they have to be sure it will work!
I figure carbon must be vastly superior, but there are a lot of complications involved beyond cost (I hear carbon transistors degrade much faster), and right now everything is still largely theoretical. Theory isn't good enough for companies to invest in when reliability is so uncertain. Carbon-based research has been going on for roughly two decades; it wouldn't go on this long if the potential payoff weren't promising. There must really be something special about carbon if people are dumping this much time and money into making it a reality. After all, people have largely given up on researching germanium transistors, despite the potential advantages. Meanwhile, silicon transistors seem to be pretty much as good as they're going to get, where all we can do is continue to shrink them. We're reaching a point where these die shrinks don't seem to be paying off, but manufacturers aren't really left with another option. I'm sure a brand new architecture could be designed that would carry silicon transistors very far (after all, x86 is, I think, about 40 years old, and software has changed a lot in that amount of time), but the problem becomes software compatibility. There's so much potential out there, but the obstacles are too great to overcome.
schmidtbag:

So, that just leaves us with carbon. Carbon is being investigated for use in transistors, and it may well be the successor to silicon.
One only needs to look in the mirror, and that will tell you that carbon really is the future of computing. Why? We're made of carbon. So is every other living thing on the planet, down to the simplest microbe or virus. Nature had billions of years of evolution to design carbon-based computers (neurons); we're just catching up. It is also probable that A.I. will not really take off until we reach full-scale carbon-based computer production with three-dimensionally interconnected layers (just like the neurons inside brains). But we're close... it's just a matter of time until smart people figure out how.
wavetrex:

Why? We're made of carbon. So is every other living thing on the planet, down to the simplest microbe or virus. Nature had billions of years of evolution to design carbon-based computers (neurons); we're just catching up.
That's sort of comparing apples to oranges, though. I don't think carbon really has much to do with what makes neurons so powerful; they're just a totally different machine. Keep in mind, the carbon compounds in neurons are utterly useless to transistors. Carbon is a very diverse element. Neurons are also significantly larger than modern-day transistors. What makes neurons so efficient is their ability to "re-purpose" themselves (known as neural plasticity) to optimize tasks that the brain deems important. That, and it doesn't seem neurons are constrained to any specific rules, which is why they're so error-prone. FPGAs are probably the closest resemblance to brains that humans have created, since they're basically logic chips that can be re-programmed to fulfill a different purpose.

Due to the functional and structural differences between brains and CPUs, each is good at doing something the other is bad at. Brains are excellent at interpretation, approximation, noise filtering, and adaptability. However, they're terrible at precision, calculation, impartiality, recollection/memorization, and prediction. CPUs are the exact opposite. This is why even a brain as small as a raven's can identify a specific person by sight in an instant, whereas it takes an incredibly power-hungry computer to accomplish the same task relatively slowly. Meanwhile, a computer can solve complex algebra equations in a matter of milliseconds, whereas it could take a human several minutes.

Brains are a product of trial and error, where the successes were either coincidence or contributed toward survival (remember, evolution doesn't decide anything). Computers are a deliberate product of logic and reason. So, I don't think they can be compared. What I think would be an interesting idea is using selective breeding to create a processor (not necessarily from humans...). That way, you get the best of both worlds: a highly efficient processor created by nature, but deliberately evolved via logic and reason.
It is also probable that A.I. will not really take off until we reach full-scale carbon-based computer production with three-dimensionally interconnected layers (just like the neurons inside brains). But we're close... it's just a matter of time until smart people figure out how.
Interesting thought, and I'm sure you're right about that.
It would be even more interesting to merge live neurons (those we already have, from evolution) with carbon-based digital structures that can be powered by the energy transmitted by the living neurons (i.e., not requiring a special electrical power source). Basically, a hybrid brain that can do both: filter noise, adapt, and interpret, but also compute numbers very fast and have huge, near-instantly accessible, precise memory. Instead of AI, we could just enhance ourselves to be much, much smarter than with our limited 100% natural brains. Imagine if you could simply "know" anything at any time, without having to google for it... or dig for years through books. That is very likely a future evolution of intelligence... becoming cyborgs. (And soon...!)
wavetrex:

It would be even more interesting to merge live neurons (those we already have, from evolution) with carbon-based digital structures that can be powered by the energy transmitted by the living neurons (i.e., not requiring a special electrical power source). Basically, a hybrid brain that can do both: filter noise, adapt, and interpret, but also compute numbers very fast and have huge, near-instantly accessible, precise memory. Instead of AI, we could just enhance ourselves to be much, much smarter than with our limited 100% natural brains. Imagine if you could simply "know" anything at any time, without having to google for it... or dig for years through books. That is very likely a future evolution of intelligence... becoming cyborgs. (And soon...!)
That is something that had occurred to me. I personally don't really understand the point of humans trying to create a machine to compete with the human brain - why not complement it? Like I said, CPUs can do what we can't and vice versa; why not use this to our advantage? Why not give humans the ability to solve math equations in an instant, or precisely measure something just by looking at it, or have perfectly accurate, clear (or even shareable) memories? And then there are your ideas, like having the world's information just implanted into your mind. Such bionics could allow humans to skip many years of schooling. People would be far more proficient at their jobs (and keep in mind, it's people doing the jobs, not machines taking their place). We would have more freedom to do the things we care about, and we'd be better at doing them - whether because more brain power is freed up for our hobbies or because of the bionic assistance itself. To me, that sounds like real progress, all while retaining what makes us human. As a side note, it could also put an end to political debates: with everyone having full and immediate knowledge of politics, we would all learn to agree on a single system that works for everyone.