Intel Moving to Chiplets: ‘Client 2.0’ for 7nm


One of the more esoteric elements of Intel’s Architecture Day 2020 came very close to the end, where Intel spent a few minutes discussing what it believes is the future of some of its products. Brijesh Tripathi, VP and CTO of Intel’s Client Computing Group, laid out a vision for the future of the company’s client products in the 2024+ timeframe. Centered on Intel’s 7nm manufacturing process, the goal is to enable ‘Client 2.0’ – a new way to deliver immersive experiences through a more optimized silicon development strategy.

Chiplets are not new, especially with recent launches from Intel’s competitors, and as process node development becomes more complex, the era of chiplets increasingly comes down to time-to-market, yield, and revenue for a given product. The key is enabling how those chiplets fit together, and at what point it makes sense to mix and match the relevant ones. Intel has previously spoken about this in a more generalized context at its Technology and Manufacturing Day in 2017, as shown in the carousel image at the top.

The goal here is to mix and match which process nodes work best for different parts of the chip. Intel seems set to realize this vision starting with its 7nm platform. At Architecture Day 2020, Brijesh Tripathi showed this slide:

To the left is a typical chip design – monolithic, with everything it requires on a single die. For Intel’s leading products this takes 3-4 years to develop, and bugs found in the silicon, first by Intel and later by Intel’s partners, can set that timeline back considerably.

In the middle is a basic chiplet layout, similar to that of the 2017 slide, where the various functions of the die are separated into their own modules. Assuming a consistent interconnect, this allows some silicon elements to be reused, such as AMD using the same core compute dies in both client and server. For some semiconductor companies (though not yet Intel), this is where we are today.

To the right is where Intel sees its future. Instead of a product being made of a small number of large chiplets, this represents a world where each IP can be split into multiple chiplets, allowing products to be built with whatever configuration works best for that market. In this instance, a chiplet might provide a PCIe 4.0 x16 link – if the product requires more lanes, it simply adds more of these chiplets. The same goes for memory channels, cores, media accelerators, AI accelerators, ray-tracing engines, crypto accelerators, graphics, or even units as small as SRAM and cache blocks. The idea is that any IP can be split and then scaled. This means the chiplets are small, can be built relatively quickly, and bugs can be fixed very quickly.

In this diagram, we are treated to Intel’s long-term vision for the client – a base interposer with in-package memory (something like an L3 or L4 cache) that can act as the main SRAM cache for the whole design, and on top of this sit 24 different chiplets. Chiplets can be graphics, cores, AI, media, IO, or anything else, and they can be mixed and matched based on what is needed. A content creator might want a balance between good graphics acceleration and compute, while a gamer might want to concentrate on the graphics alone. A business client such as a workstation may need less graphics and more compute and AI, while a mobile version of the chip might instead invest heavily in IO.
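To make the mix-and-match idea more concrete, here is a minimal sketch of how such a product catalogue could be described, assuming hypothetical chiplet types and counts invented purely for illustration – Intel has not disclosed any actual configurations:

```cpp
#include <iostream>
#include <map>

// Hypothetical chiplet types - purely illustrative, not Intel's actual IP blocks.
enum class Chiplet { Cores, Graphics, AI, Media, IO, PCIe, MemoryChannel, Cache };

// A product SKU is simply a count of each chiplet type placed on the base die.
using Config = std::map<Chiplet, int>;

// Need more of a given resource? Add more copies of that chiplet.
const Config gamer       = {{Chiplet::Cores, 4},  {Chiplet::Graphics, 10}, {Chiplet::Media, 2},
                            {Chiplet::IO, 2},     {Chiplet::PCIe, 1},      {Chiplet::MemoryChannel, 2}};
const Config workstation = {{Chiplet::Cores, 12}, {Chiplet::Graphics, 2},  {Chiplet::AI, 4},
                            {Chiplet::IO, 2},     {Chiplet::PCIe, 4},      {Chiplet::MemoryChannel, 4}};
const Config mobile      = {{Chiplet::Cores, 4},  {Chiplet::Graphics, 4},  {Chiplet::Media, 2},
                            {Chiplet::IO, 6},     {Chiplet::PCIe, 1},      {Chiplet::MemoryChannel, 2}};

// Total number of chiplets that would sit on the base die for a given SKU.
int totalChiplets(const Config& c) {
    int n = 0;
    for (const auto& [type, count] : c) n += count;
    return n;
}

int main() {
    std::cout << "Gamer SKU chiplets:       " << totalChiplets(gamer)       << "\n";
    std::cout << "Workstation SKU chiplets: " << totalChiplets(workstation) << "\n";
    std::cout << "Mobile SKU chiplets:      " << totalChiplets(mobile)      << "\n";
}
```

The point of the model is that a new SKU becomes a different set of counts on the same base die, rather than a new monolithic floorplan.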

As always, there is a trade-off between chiplet size and the complexity of actually putting the chiplets together in a multi-die arrangement. Any communication between chiplets costs more power than in a monolithic design, and usually comes with higher latency. Thermals must also be managed, and so these chiplets can be limited by what thermal headroom is available. Multi-die arrangements also cause headaches for mobile devices, where z-height is critical. However, the benefits of using the right process at the right time for the right product are substantial, as it helps deliver both performance and power at the best possible cost. It also provides the chance to bring in 3rd party IP when something great comes on the scene.
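As a rough back-of-envelope sketch of why the power point matters: the energy-per-bit figures below are placeholder assumptions chosen only to illustrate the arithmetic, not measured values for any Intel packaging technology, but the shape of the calculation is the same whatever the real numbers are.

```cpp
#include <cstdio>

int main() {
    // Placeholder energy-per-bit assumptions, purely illustrative:
    // crossing a package-level link typically costs more per bit than staying on-die.
    const double onDie_pJ_per_bit    = 0.1; // assumed on-die interconnect cost
    const double dieToDie_pJ_per_bit = 1.0; // assumed die-to-die link cost

    // Assume a chiplet-to-chiplet link carrying 64 GB/s of traffic.
    const double bits_per_second = 64e9 * 8;

    const double onDie_watts    = bits_per_second * onDie_pJ_per_bit    * 1e-12;
    const double dieToDie_watts = bits_per_second * dieToDie_pJ_per_bit * 1e-12;

    std::printf("On-die:     %.2f W\n", onDie_watts);    // ~0.05 W
    std::printf("Die-to-die: %.2f W\n", dieToDie_watts); // ~0.5 W
}
```

Even with these made-up numbers, the same traffic costs an order of magnitude more energy once it has to leave the die, which is power that a monolithic design simply does not spend.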

The only downside here is that Intel has not talked much about the glue that binds it all together. Chiplet strategies rely on complex protocols over high-speed interconnects, customized or otherwise. Intel’s current uses of die-to-die connectivity are mostly memory protocols or FPGA fabric extensions – the ones for server CPUs, such as UPI, are not necessarily up to the task. CXL could be the future here; however, current CXL is built on top of PCIe, which means a complex CXL/PCIe controller for any chiplet, and that is likely to get power hungry very quickly.

Intel has stated that it will develop new packaging technologies and new levels of connectivity to move data between the silicon – there is currently no disclosure about the protocols, however Intel acknowledges that to get to this level of scale it needs to go beyond what the company has today, and that will require creating standards and innovating in this area. The goal is to create and support standards, and the first incarnation will have some standardization built in. Intel describes this as a method of extreme disaggregation, noting that not everything needs a high-bandwidth coherent interconnect (USB, for example, does not) – Intel sees the goal as a handful of protocols covering the entire spectrum.

There is also the developer market to consider, which is used to a more homogeneous allocation of resources in any given product. Without careful planning and the relevant coding, there is the potential for certain chiplet configurations to fall flat if, for example, a developer expected a certain ratio of compute to graphics. This is not something oneAPI could easily fix.
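One mitigation on the software side – sketched here under the assumption that the developer is targeting SYCL 2020, the model on which oneAPI’s DPC++ compiler is based – is to query the hardware at runtime rather than hard-coding a compute-to-graphics ratio, though that only helps if the code was written that way in the first place:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // Enumerate every device the runtime exposes and report its compute width,
    // instead of assuming a fixed ratio of CPU cores to GPU execution resources.
    for (const auto& dev : sycl::device::get_devices()) {
        std::cout << dev.get_info<sycl::info::device::name>()
                  << ": " << dev.get_info<sycl::info::device::max_compute_units>()
                  << " compute units\n";
    }
}
```

A workload that sizes itself from these queries will at least scale with whatever mix of chiplets a given SKU happens to contain, but it cannot rescue code that was tuned around one fixed configuration.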

These are all issues that Intel will need to address, even if it takes a few years for this to come to fruition. We were told that Client 2.0 is the internal name, although it will likely be given more of a marketing dressing as Intel begins to talk about this in more detail.
