Intel LGA 7529 Processors are Nearly 10cm in Length
schmidtbag
I feel like there must be an upper limit to how many pins are actually needed. You need a finite number to handle the PCIe lanes and memory channels; there's not really a point in having more than 128 PCIe lanes (unless you're doing something like AMD, where the CPUs are linked via PCIe), and there comes a point where you simply can't fit all the traces on a motherboard for more memory channels. With how large these packages are, it makes sense to integrate more onto the SoC, thereby reducing the need for more pins leading elsewhere on the motherboard.
So unless I'm missing something here, that just leaves pins for power delivery. If 7529 has become that huge just so all the cores can be fed more power, that's a rather bleak future. Perhaps Intel is intending to compete with AMD by having one gargantuan socket rather than multi-socket designs, which overall makes sense.
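To put rough numbers on that (every figure below is my own guess for illustration, not anything Intel has published):
[CODE]
# Back-of-the-envelope pin budget for a hypothetical huge server socket.
# All counts are assumptions for illustration only.
PINS_PER_PCIE_LANE = 4        # one TX + one RX differential pair
PINS_PER_DDR5_CHANNEL = 130   # data, strobes, command/address, clocks (rough)

pcie_lanes = 128
ddr5_channels = 12

signal_pins = (pcie_lanes * PINS_PER_PCIE_LANE
               + ddr5_channels * PINS_PER_DDR5_CHANNEL
               + 400)          # UPI links, clocks, straps, debug... (guess)

# Assume ~60% of a big server socket is power/ground (a common ballpark):
total_pins = signal_pins / (1 - 0.60)
print(f"signal: {signal_pins}, implied total: {total_pins:.0f}")
# -> signal: 2472, implied total: 6180 -- most of the way to 7529
#    before adding any extra power delivery for more cores.
[/CODE]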
Kaarme
here is the chip, schmidtbag:
https://www.cerebras.net/blog/wafer-scale-processors-the-time-has-come/ (wafer-scale)
user1
There is always room for more I/O, and more cores = more I/O. If AMD could double the core count per package, they could double the I/O easily. You don't see much more than 16-layer PCBs on consumer hardware, but you can go beyond 24 layers, so there is a long way to go before such things become impractical. If you're building a supremely large cluster or run a data center, more per rack means more throughput, and at the current time there is practically infinite demand for computing resources. $30K racks aren't rare when a single Xeon can set you back $10-20K, so even something like a 24+ layer PCB is a minor cost compared to the rest of the components. If you think about a single compute node, which may comprise 2 Genoa CPUs and 12 GPUs, the total silicon on that is huge; if you can wrap as much as you can into fewer, larger packages, that saves you space, circuitry, and cooling complexity. It's the obvious solution.
And you should see some of the truly enormous mainframes from the past.
check this package out: [youtube=xQ3oJlt4GrI]
Computers were really big before stuff got small, and now, due to the technical limitations of shrinks, we're going big again. We've been kind of spoiled by node shrinks delivering performance uplift.
Industrial/commercial/government applications have very different requirements compared to consumer, and those giant Xeons aren't for consumers. Market conditions can support (and have supported) much larger packages.
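To put the cost argument in perspective (the Xeon range is from above; the board price is my own guess):
[CODE]
# Rough cost share of an exotic PCB in a 2-socket compute node.
# Xeon range is from the post above; the board price is a guess.
xeon_price = 15_000    # midpoint of the $10-20K range per CPU
pcb_24_layer = 1_000   # assumed cost of a 24+ layer server board

node_cost = 2 * xeon_price + pcb_24_layer   # CPUs + board only
print(f"PCB share of node cost: {pcb_24_layer / node_cost:.1%}")
# -> ~3.2%, and that's before counting the 12 GPUs, memory, NICs, etc.
[/CODE]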
schmidtbag
I think it's been a long while since a node shrink by itself had any noteworthy impact on per-transistor performance. It seems to me that since around 22nm, the primary benefits of smaller nodes have been fitting more transistors per wafer and better efficiency. That means you can either cram more transistors into the same area, or keep the transistor count the same and get more usable product per wafer. Even though we're only seeing maybe 2nm of shrinkage per generation (which is nothing compared to 10 years ago), the transistors are already so small that 2nm is a relatively huge difference in size.
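That relative difference is easy to quantify. Treating node names as if they were literal dimensions (they haven't been for years, but the ratios still illustrate the point):
[CODE]
# Density scales roughly with 1/(feature size)^2, so the *ratio* of the
# step matters, not the absolute nanometers shaved off.
def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

print(f"{density_gain(32, 22):.1f}x")  # ~2.1x from a 10 nm step, years ago
print(f"{density_gain(5, 3):.1f}x")    # ~2.8x from a mere 2 nm step today
[/CODE]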
The chiplet design is somewhat analogous to a die shrink, because it also lets you fit more usable product per wafer. If a chiplet die has a defect, it's no big deal compared to a defect in a giant monolithic die. And since wafers are circular, you can fit more small square dies than large ones. Since chiplets are meant to be paired up with others, they let engineers scale up.
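That defect argument is the textbook yield model; here's a minimal sketch using the standard Poisson approximation (the defect density and die sizes are made-up but plausible):
[CODE]
import math

D0 = 0.001  # assumed defect density: 0.001 defects/mm^2 (0.1 per cm^2)

def die_yield(area_mm2, d0=D0):
    # Poisson approximation: fraction of defect-free dies
    return math.exp(-area_mm2 * d0)

# Same 600 mm^2 of silicon, monolithic vs. eight 75 mm^2 chiplets:
print(f"monolithic 600 mm^2: {die_yield(600):.0%}")  # ~55% good
print(f"chiplet     75 mm^2: {die_yield(75):.0%}")   # ~93% good
# A defect scraps 75 mm^2 instead of 600, so far more usable product
# comes out of every wafer -- before even counting the edge-packing win.
[/CODE]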
So I guess what I'm getting at here: we're getting bigger not because of node limitations, but because it's now affordable to do so.
Understood, but like I said: those giant Xeons are still too small for what many potential customers need. I'm feeling pretty confident they'll be more expensive than a multi-socket motherboard. So what I don't get is how 4x 128-core CPUs make less sense than a more expensive single 512-core CPU. The number of cores per rack would be the same, but one system is cheaper to manufacture than the other.
Cool - pretty interesting.
Yes, I get that there is almost always a way to just keep scaling up, but the underlying point is that if this were a viable strategy, we'd have seen it done much sooner. As complexity goes up, so do the chances of failure. Reliability is critical in such systems, so sometimes ensuring reliability multiplies the price. Companies care about maximizing profits, so you have to find the right balance between processing density, reliability, and price.
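The complexity-vs-failure point is just compounding probabilities; for instance (the component count and per-part reliability are arbitrary here):
[CODE]
# If each of n parts must work, and each survives a year with
# probability p, the whole system survives with p**n.
p = 0.999
for n in (10, 100, 1000):
    print(f"{n:5d} parts -> {p**n:.1%} system reliability")
# 10 -> 99.0%, 100 -> 90.5%, 1000 -> 36.8%
[/CODE]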
It seems to me Intel hasn't really found an effective way to do multi-socket configurations; this article on Phoronix does a good job of showing how adding another socket does a lot of damage to total performance:
https://www.phoronix.com/review/intel-scalability-optimizations
So perhaps their line of thinking is to just make one gargantuan socket, since they're under pressure to compete with AMD's core count and don't have the time to figure out how to make inter-socket communication more efficient.
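A toy model of why one big socket can beat four smaller ones (the 0.85 scaling factor is purely an assumption for illustration, not a number from the Phoronix article):
[CODE]
# Toy NUMA model: each extra socket adds cores but taxes throughput
# via cross-socket traffic. The 0.85 factor is assumed, not measured.
def effective_cores(cores_per_socket, sockets, scaling=0.85):
    return cores_per_socket * sockets * scaling ** (sockets - 1)

print(effective_cores(128, 4))  # ~314 -- 512 cores acting like ~314
print(effective_cores(512, 1))  # 512  -- the same cores in one package
# Identical core counts on paper, very different delivered throughput.
[/CODE]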
user1
The package might gain some complexity; however, you must keep in mind the actual size of the package vs the board: the traces/connections are a lot shorter (better), and mechanically there is less going on. The motherboards themselves are less complicated too, though PCB cost isn't much of a concern for a $10K+ server anyway; it's mainly space efficiency that matters here.
If Intel can replace quad-socket boards, or the mythical octo-socket configs that Intel supposedly supports, it's probably worth it.
[SPOILER="bonus"]
Bonus graphic: 8S 3rd Gen Scalable processor (on LGA 4189) block diagram
https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fwww.cnews.cz%2Fwp-content%2Fuploads%2F2020%2F06%2FIntel-Xeony-Scalable-t%25C5%2599et%25C3%25AD-generace-Cooper-Lake-sch%25C3%25A9ma-8S-serveru.png&f=1&nofb=1&ipt=bc8fdeefd764ba71ca86ab5b943b26dd4b5ffe804ab260b0d0a9029b49ac9c16&ipo=images
[/SPOILER]