Authors: Yasumitsu Orii and Atsushi Takahashi
The amount of compute used in the largest AI training runs has been increasing exponentially, with a 3.4-month doubling time. This growth would be hard to sustain if AI chip development depended solely on semiconductor process advancement, since Moore's Law has a 2-year doubling period and there is concern that it is reaching its physical limits. The question, then, is how to improve computer hardware performance. The von Neumann bottleneck remains a central issue, and Heterogeneous Integration offers promising solution paths. In recent years, chiplets, which enable more efficient high-performance computing than ever before, have become an industry focus in response to the concern that Moore's Law scaling is nearing its end. Another approach to resolving the von Neumann bottleneck is to implement neuromorphic devices. We discuss the key interconnection technologies, such as high-density substrates, wafer-level fan-out, and bridge interconnects, that support chiplets and neuromorphic devices.
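To make the gap between the two doubling times concrete, the following sketch annualizes them. The 3.4-month and 2-year figures are taken from the abstract; the function name and the comparison itself are illustrative arithmetic, not part of the paper.

```python
# Illustrative arithmetic only: annualized growth implied by the two
# doubling periods cited in the abstract (3.4 months for AI training
# compute, 24 months for Moore's Law).

def annual_growth(doubling_months: float) -> float:
    """Factor by which a quantity grows in 12 months, given its doubling time."""
    return 2.0 ** (12.0 / doubling_months)

ai_compute = annual_growth(3.4)    # roughly 11.5x per year
moores_law = annual_growth(24.0)   # roughly 1.41x per year

print(f"AI training compute: ~{ai_compute:.1f}x per year")
print(f"Moore's Law scaling: ~{moores_law:.2f}x per year")
print(f"Ratio between the two: ~{ai_compute / moores_law:.1f}x")
```

The order-of-magnitude mismatch between these annualized rates is what motivates looking beyond process scaling, toward packaging-level approaches such as chiplets and Heterogeneous Integration.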