OpenAI Bypasses Nvidia With Ultra-Fast Codex on Cerebras Wafer-Scale Chips

1 min read
Source: Ars Technica
TL;DR Summary

OpenAI released GPT-5.3-Codex-Spark, a fast, text-only coding model that runs on the Cerebras Wafer-Scale Engine 3. It generates about 1,000 tokens per second, roughly 15x faster than its predecessor and faster than Nvidia-based alternatives. Available as a research preview for ChatGPT Pro users and select partners, the model has a 128k-token context window and is built for coding tasks that favor speed over depth, signaling OpenAI's push to diversify its hardware away from Nvidia.


Want the full story? Read the original article on Ars Technica.