Cerebras Systems Announces Launch of WSE-3 AI Chip
Highlights
- WSE-3 chip doubles AI model training power with 125 petaFLOPs.
- Cerebras surpasses Nvidia with 4 trillion transistors on WSE-3.
- WSE-3 and Qualcomm partnership slashes AI inference costs.
Cerebras Systems has introduced the Wafer Scale Engine 3 (WSE-3), marking a significant milestone in developing chips designed for generative artificial intelligence (AI).
The announcement, made on March 13, 2024, positions the WSE-3 as the world’s largest semiconductor, aimed at advancing the capabilities of large language models with tens of trillions of parameters. It comes amid an intensifying industry race to build more powerful and efficient AI models.
Doubling Down on Performance
The WSE-3 doubles the performance of its predecessor, the WSE-2, with no increase in power consumption or cost. Cerebras frames this as keeping pace with Moore’s Law, the observation that chip circuitry roughly doubles in complexity about every 18 months.
The WSE-3, manufactured by TSMC, moves from a 7-nanometer to a 5-nanometer process, raising the transistor count to 4 trillion on a chip that spans nearly an entire 12-inch semiconductor wafer. The shrink doubles peak compute from 62.5 petaFLOPs to 125 petaFLOPs, improving the chip’s efficiency in training AI models.
Advantages Over Competitors
Cerebras’ WSE-3 substantially exceeds the industry-standard Nvidia H100 GPU in size, memory, and compute. The company says the chip offers 52 times more cores, 800 times more on-chip memory, and significant gains in memory bandwidth and fabric bandwidth, which it bills as one of the largest single-chip performance jumps ever aimed at AI workloads.
These improvements allow the training of very large neural networks, including, Cerebras claims, a hypothetical 24-trillion-parameter model on a single CS-3 computer system, demonstrating the WSE-3’s potential to accelerate AI model development.
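For a sense of scale, merely holding the weights of a 24-trillion-parameter model is a sizeable memory problem on its own. A back-of-the-envelope sketch (figures illustrative only, and excluding activations, gradients, and optimizer state, which multiply training footprints several-fold):

```python
def weight_footprint_tb(num_params, bytes_per_param=2):
    """Terabytes needed to store the weights alone (fp16 = 2 bytes/param)."""
    return num_params * bytes_per_param / 1e12

# A hypothetical 24-trillion-parameter model with fp16 weights:
print(weight_footprint_tb(24e12))  # 48.0 TB of weights, before any training state
```

This is why single-system capacity claims matter: on GPU clusters, a model of this size must be partitioned across thousands of devices before training can even begin.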
Innovations in AI Training and Inference
The WSE-3 launch brings improvements to both the training and inference phases of AI model development. Cerebras emphasizes the chip’s simpler programming model, saying a GPT-3-scale model requires far fewer lines of code than on GPUs. The company also claims that up to 2,048 of its machines can be clustered with ease, enabling large language models to be trained up to 30 times faster than on today’s leading machines.
Cerebras has additionally announced a partnership with Qualcomm to improve inference, the phase in which a trained model generates predictions. Through techniques such as sparsity and speculative decoding, the partnership aims to cut the computational cost and energy use of generative AI models.
The collaboration signals a strategic push to optimize the efficiency of AI applications from training through real-world deployment.
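To illustrate one of the techniques named above: in speculative decoding, a small, cheap "draft" model proposes several tokens at once, and the large "target" model then verifies them, accepting as many as match its own choices, so the expensive model runs fewer times while producing the same output. A minimal sketch with toy stand-in functions in place of real models (all names and token rules here are hypothetical, and real systems verify proposals in one batched forward pass rather than a Python loop):

```python
def draft_model(prefix):
    # Hypothetical cheap model: deterministic stand-in for a small LLM.
    return (len(prefix) * 7) % 10

def target_model(prefix):
    # Hypothetical expensive model: mostly agrees with the draft,
    # but occasionally picks a different token.
    return (len(prefix) * 7) % 10 if len(prefix) % 3 else len(prefix) % 10

def speculative_decode(prefix, k=4, steps=12):
    out = list(prefix)
    target_len = len(prefix) + steps
    while len(out) < target_len:
        # 1) Draft model cheaply proposes k tokens.
        ctx = list(out)
        proposed = []
        for _ in range(k):
            t = draft_model(ctx)
            proposed.append(t)
            ctx.append(t)
        # 2) Target model verifies the proposals: accept the longest
        #    matching prefix, then append the target's own next token.
        ctx = list(out)
        for t in proposed:
            if target_model(ctx) == t:
                out.append(t)
                ctx.append(t)
            else:
                break
        out.append(target_model(ctx))
    return out[:target_len]
```

The key property is that the output is identical to decoding with the target model alone; the draft model only changes how much work the target model performs per emitted token.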