To run an AI model, computers must constantly shuttle vast amounts of data between separate memory and logic chips, a process that chokes performance. To solve this, Cerebras Systems in 2019 engineered a dinner plate–sized chip—the largest ever—that embeds both memory and logic. “People thought we were mad hatters,” says Andrew Feldman, Cerebras’s CEO and co-founder, given the huge technical hurdles. In March, the company released a third generation of the chip, the record-fast Wafer-Scale Engine 3 (WSE-3), which can train models 10 times bigger than OpenAI’s GPT-4 and will power the Condor Galaxy 3, a supercomputer under construction in Texas.