Tech giants are assembling AI supercomputers of unprecedented scale. Microsoft, Meta, Amazon, and Elon Musk’s xAI are building clusters of 100,000 or more Nvidia H100 GPUs, and xAI plans to double its Colossus system to 200,000 GPUs, an expansion that will include the newer H200. To put this in perspective, OpenAI reportedly trained GPT-4 using roughly 25,000 A100…
From chiplets to graphene and a second exascale computer, the first half of 2024 continued to push semiconductor limits
Moore’s Law, which predicts that transistor density doubles roughly every two years, is approaching fundamental physical limits. Yet GPU technology continues to evolve rapidly, with architectural innovations and specialized processing units driving steady performance gains. Multi-chip modules, 3D chip stacking, and advanced cache hierarchies are pushing past the limits of monolithic dies. The first half…