Cerebras Systems Announces Launch of WSE-3 AI Chip
Highlights
- WSE-3 chip doubles AI model training power with 125 petaFLOPs.
- Cerebras surpasses Nvidia with 4 trillion transistors on WSE-3.
- WSE-3 and Qualcomm partnership slashes AI inference costs.
Cerebras Systems has introduced the Wafer Scale Engine 3 (WSE-3), marking a significant milestone in developing chips designed for generative artificial intelligence (AI).
The announcement, made on March 13, 2024, positions the WSE-3 as the world’s largest semiconductor, aimed at advancing the capabilities of large language models with tens of trillions of parameters. This development comes on the heels of the intensifying race in the tech industry to create more powerful and efficient AI models.
Doubling Down on Performance
The WSE-3 chip delivers twice the performance of its predecessor, the WSE-2, with no increase in power consumption or cost. Cerebras frames this as keeping pace with Moore's Law, the observation that the number of transistors on a chip doubles approximately every 18 months to two years.
The WSE-3, manufactured by TSMC, moves from a 7-nanometer to a 5-nanometer process, raising the transistor count to 4 trillion on a chip that spans nearly an entire 12-inch semiconductor wafer. The shrink doubles peak compute from 62.5 petaFLOPs to 125 petaFLOPs, improving the chip's efficiency in training AI models.
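The doubling roughly follows from the node shrink. A back-of-envelope check, under the idealized assumption that transistor density scales with the inverse square of the process node (real-world scaling is less clean):

```python
# Idealized density scaling from a 7 nm to a 5 nm process node.
# Assumption: density ~ 1 / node^2; actual foundry scaling differs.
old_node, new_node = 7.0, 5.0                 # nanometers (TSMC N7 -> N5)
density_gain = (old_node / new_node) ** 2
print(f"ideal density gain: {density_gain:.2f}x")  # ~1.96x, roughly 2x

# Compute gain quoted in the announcement.
old_flops, new_flops = 62.5, 125.0            # petaFLOPs
compute_gain = new_flops / old_flops
print(f"compute gain: {compute_gain:.1f}x")        # 2.0x
```

The ~2x ideal density gain lines up with the quoted doubling of compute at the same power and cost.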
Advantages Over Competitors
Cerebras’ WSE-3 dwarfs the industry-standard Nvidia H100 GPU in size, memory, and compute. It packs 52 times more cores, 800 times more on-chip memory, and substantially higher memory and fabric bandwidth, making it one of the largest performance leaps yet aimed at AI computation.
These capabilities allow a single CS-3 computer system to train very large neural networks, up to a hypothetical 24-trillion-parameter model, demonstrating the WSE-3's potential to accelerate AI model development.
Innovations in AI Training and Inference
The release of the WSE-3 brings improvements to both the training and inference phases of AI model development. Cerebras emphasizes that the chip simplifies programming: implementing a GPT-3-class model requires far fewer lines of code than on GPUs. Up to 2,048 CS-3 systems can be clustered with little added complexity, a configuration Cerebras claims can train large language models 30 times faster than today's leading machines.
Cerebras has also announced a partnership with Qualcomm to improve inference, the phase in which a trained model generates predictions. Through techniques such as sparsity and speculative decoding, the partnership aims to cut the computational cost and energy use of generative AI models.
This collaboration signifies a strategic move toward optimizing the efficiency of AI applications end to end, from training to real-world deployment.
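To give a sense of how speculative decoding saves compute, here is a toy sketch (not the Cerebras/Qualcomm implementation): a cheap "draft" model proposes several tokens per round, and the expensive "target" model verifies them, accepting the longest matching prefix. The `draft_model` and `target_model` functions below are stand-ins for real neural networks.

```python
def draft_model(prefix):
    # Toy stand-in for a small, cheap model: next token = last token + 1.
    return prefix[-1] + 1

def target_model(prefix):
    # Toy stand-in for the expensive model: agrees with the draft
    # except at multiples of 5, where it "disagrees" and skips ahead.
    nxt = prefix[-1] + 1
    return nxt + 1 if nxt % 5 == 0 else nxt

def speculative_decode(prompt, n_tokens, k=4):
    """Generate n_tokens tokens, proposing k draft tokens per round."""
    out = list(prompt)
    target_calls = 0
    while len(out) - len(prompt) < n_tokens:
        # The draft model cheaply proposes k tokens in sequence.
        proposal, ctx = [], list(out)
        for _ in range(k):
            t = draft_model(ctx)
            proposal.append(t)
            ctx.append(t)
        # In a real system the target model verifies all k proposals in
        # one forward pass; we count that as a single call per round.
        target_calls += 1
        ctx = list(out)
        for t in proposal:
            expected = target_model(ctx)
            if t == expected:
                out.append(t)
                ctx.append(t)
            else:
                out.append(expected)  # take the target's token and stop
                break
    # Slice to exactly n_tokens in case the last round overshot.
    return out[len(prompt):len(prompt) + n_tokens], target_calls

tokens, calls = speculative_decode([0], 8, k=4)
print(tokens, calls)  # 8 tokens generated with only 3 target-model calls
```

Because most draft tokens are accepted, the expensive model runs far fewer times than once per token, which is where the cost and energy savings come from.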