Google's AI supercomputer outperforms Nvidia's A100 chip.
TL;DR Summary
Google has revealed that its custom-designed Tensor Processing Unit (TPU) supercomputer is faster and more power-efficient than comparable systems from Nvidia. The TPU, now in its fourth generation, is used for more than 90% of Google's artificial-intelligence training work. Google has strung more than 4,000 of the chips together into a supercomputer, using its own custom-developed optical switches to connect the individual machines. The company said its chips are up to 1.7 times faster and 1.9 times more power-efficient than a comparable system built on Nvidia's A100 chip, which was on the market at the same time as the fourth-generation TPU.
- Google says its AI supercomputer is faster, greener than Nvidia A100 chip (Yahoo Finance)
- Alphabet Says Its AI Chips Can Beat Nvidia's. It's Not Just Taking on Microsoft. (Barron's)
- Google's Bard Vs. Microsoft Backed ChatGPT - What It Takes To Train AI (Benzinga)
- Google Claims its 4000-Chip Supercomputer is Faster than NVIDIA's (gizmochina)
- Nvidia slips as Google touts prowess of its own custom chips (Seeking Alpha)
Read the original article on Yahoo Finance.