Chiplet Design Hopes to Rise Up as a Contender for Sidestepping Moore’s Law

March 28, 2022 by Jake Hertz

To sidestep the slowdown of Moore's Law and tackle die-to-die interconnection issues, the world’s top companies are turning to chiplet design for their next-generation computing hardware.

Driven historically by Moore's Law, the field of computing has improved year over year. Now, even as Moore's Law slows down, the demand for computing is greater than ever before.

With this in mind, the industry is seeing a new shift: instead of relying on Moore's Law alone, engineers are considering new architectures, transistor types, and technologies to continually push for improvement.



The slowing of Moore’s Law coupled with increasing die sizes is spelling trouble in the semiconductor industry. Image used courtesy of AMD


One new concept in computing is the chiplet, an idea that has taken center stage so far in 2022.

In this article, we'll talk about chiplet design, why it's so promising, and how the biggest companies in the world are embracing it.


Why Chiplets?

One of the biggest challenges slowing Moore's Law right now is the manufacturability of next-generation chips.

In general, scaling technology nodes aims to allow for higher-density chips: more transistors mean more functionality in the same area.

With Moore's Law slowing, each smaller node becomes harder to achieve, harder to manufacture, and more expensive overall. At the same time, systems-on-chip (SoCs) are growing in physical size to try and fit more and more hardware blocks onto a single piece of silicon.

One of the main challenges here is the die reticle limit, the maximum chip area that can be patterned in a single photomask exposure during manufacturing.

With node scaling no longer shrinking circuits at the old pace, traditional SoCs are now running up against the die reticle limit, which puts an upper bound on how much can be integrated onto a single die.
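As a rough illustration of what the reticle limit means in practice, the single-exposure field on modern lithography scanners is commonly cited as about 26 mm × 33 mm (roughly 858 mm²); that figure is an industry rule of thumb, not from this article. A quick sketch of the constraint:

```python
# Illustrative check of whether a die's dimensions fit in one exposure field.
# The 26 mm x 33 mm field size is a commonly cited figure for modern scanners.
FIELD_W_MM, FIELD_H_MM = 26, 33
RETICLE_LIMIT_MM2 = FIELD_W_MM * FIELD_H_MM  # ~858 mm^2

def fits_reticle(width_mm: float, height_mm: float) -> bool:
    """Return True if a die of these dimensions fits the exposure field
    in either orientation."""
    return ((width_mm <= FIELD_W_MM and height_mm <= FIELD_H_MM) or
            (width_mm <= FIELD_H_MM and height_mm <= FIELD_W_MM))

print(fits_reticle(25, 32))  # a very large monolithic SoC that still fits
print(fits_reticle(30, 30))  # exceeds the field: must be stitched or split up
```

Any die larger than this field cannot be printed in a single exposure, which is exactly the wall that ever-growing monolithic SoCs are hitting.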

To many, the solution is chiplets: multiple individual dies connected to one another within the same package. This approach promises extremely high levels of integration within a single package without having to push any individual die to the reticle limit.
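There is also a manufacturing-yield argument for splitting a large die into chiplets. A common first-order Poisson defect model (an illustrative textbook model, not from the article, with an assumed defect density) shows why: a single defect scraps an entire monolithic die, but only one small chiplet out of several.

```python
import math

D0 = 0.001  # assumed defect density in defects per mm^2 (illustrative only)

def poisson_yield(area_mm2: float, d0: float = D0) -> float:
    """First-order Poisson model: probability that a die of the given
    area contains no killer defect, Y = exp(-A * D0)."""
    return math.exp(-area_mm2 * d0)

# One monolithic 800 mm^2 die: a single defect scraps all 800 mm^2.
monolithic_good_area = poisson_yield(800) * 800

# Four 200 mm^2 chiplets covering the same total area: a defect scraps
# only the one affected chiplet, and the other three remain usable.
chiplet_good_area = 4 * poisson_yield(200) * 200

print(f"monolithic usable silicon: {monolithic_good_area:.0f} of 800 mm^2")
print(f"chiplet    usable silicon: {chiplet_good_area:.0f} of 800 mm^2")
```

Under this model the expected usable silicon per 800 mm² is noticeably higher for the chiplet split, which is one reason large chips built from smaller dies can be cheaper to manufacture.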


Traction in 2022

So far, 2022 has seen the industry embrace the concept of chiplets with open arms, as many of the industry's major players are now involved in the technology. 

The first big news on chiplets this year was Intel's announcement of joining the Universal Chiplet Interconnect Express (UCIe) consortium.



UCIe's SoC architecture. Image used courtesy of UCIe Consortium


Believing in the role of the chiplet in the future of computing, Intel joined the consortium to help develop the universal interconnect standards that will drive chiplet design. Yet, beyond R&D efforts, we then saw chiplets applied to some of the biggest and most impressive hardware releases of the year.



Apple’s M1 Ultra fuses two M1 Max dies together through a silicon interposer. Screenshot used courtesy of Apple


First, Apple made waves by announcing its new M1 Ultra, an SoC that Apple claims dominates the competition in performance per watt. The key to the new device is chiplet design. 

The M1 Ultra fuses two M1 Max dies through a silicon interposer and Apple's new UltraFusion interconnect. The result is a single package with 114 billion transistors and a significant leap in performance.

Following this trend, NVIDIA recently announced its new Grace CPU, another high-performance device leveraging chiplet technology. 

Similar to the M1 Ultra, the Grace CPU connects two 72-core CPU dies together through NVIDIA's new NVLink-C2C technology. The result is a single package with 144 Arm cores and a claimed 2x performance per watt improvement over today's leading CPUs.


Shifting Towards a Chiplet Future?

While the benefits of chiplets have been discussed for some years, at this point, the promise of chiplet design has become hard to ignore. 

Today, Intel, one of the world's largest semiconductor companies, has shown it is fully invested in the future of chiplets by joining the UCIe consortium. 

Beyond that, NVIDIA's Grace CPU and Apple's M1 Ultra, two of the most performant and impressive pieces of hardware on the market today, fully embrace the chiplet philosophy.

It seems clear that the industry will continue to trend in this direction, but only time will tell just how far chiplet design can take us.