NVIDIA Unveils Latest Innovations in AI and Chip Design at GTC 2023

TL;DR Summary
NVIDIA has announced the H100 NVL, a new H100 accelerator variant aimed at large language model (LLM) deployment. The dual-GPU, dual-card H100 NVL offers 188GB of HBM3 memory in total (94GB per card), more memory per GPU than any other NVIDIA part to date. The H100 NVL is essentially a special bin of the GH100 GPU placed on a PCIe card. It will serve a specific niche as the fastest PCIe H100 option and the one with the largest GPU memory pool. H100 NVL cards will begin shipping in the second half of this year.
- NVIDIA Announces H100 NVL - Max Memory Server Card for Large Language Models (AnandTech)
- GTC 2023 Keynote with NVIDIA CEO Jensen Huang (NVIDIA)
- NVIDIA's big AI moment is here (Engadget)
- Nvidia announces tech for speeding up chip design at AI conference (Yahoo Finance)
- Dear NVDA Stock Fans, Mark Your Calendars for March 21 (InvestorPlace)