Tag

AI Supercomputer

All articles tagged with #ai supercomputer

Nvidia Unveils DGX Spark: A Compact AI Supercomputer for Personal and Enterprise Use
technology · 4 months ago

Nvidia's DGX Spark is marketed as the world's smallest AI supercomputer, offering a cost-effective package with 128 GB of memory and the GB10 SoC, capable of running models of up to 200 billion parameters. While not the fastest GPU system, it excels at running models that consumer GPUs can't fit, making it suitable for AI development, fine-tuning, and inference workloads. Its compact size, software ecosystem, and ability to run models beyond typical consumer hardware make it a notable option for AI practitioners, though it faces competition from other small-form-factor systems and from Nvidia's own higher-end offerings.

Nvidia Launches the Compact DGX Spark AI Supercomputer
technology · 4 months ago

NVIDIA has announced that DGX Spark, the world's smallest AI supercomputer, is now shipping, offering petaflop performance and 128 GB of memory in a compact form factor, aimed at empowering individual developers and research organizations to run advanced AI models locally. The system integrates NVIDIA's full AI platform and is available through various partners worldwide.

Huawei Launches AI Supercomputer to Challenge Nvidia and Boost China's Tech Independence
technology · 7 months ago

Huawei unveiled the CloudMatrix 384, a powerful AI supercomputer built with 384 Ascend 910C processors, claiming it doubles Nvidia's performance and includes significant memory and bandwidth advantages. This development is part of China's push to expand its domestic AI chip market amid US export restrictions that limit Nvidia's sales in China, potentially shifting the global AI supply chain. However, Huawei's products are unlikely to compete internationally due to export laws, leaving Nvidia dominant outside China.

Nvidia and Georgia Tech Launch AI Supercomputer for Student Learning
technology · 1 year ago

Nvidia and Georgia Tech have unveiled the first AI supercomputer designed for student use, aiming to democratize access to supercomputing resources typically reserved for tech giants and startups. Fueled by Nvidia's enterprise AI software and Penguin Solutions' "virtual gateway," the supercomputer will initially be available to Georgia Tech's undergraduate students, with plans to expand access to all undergraduate and graduate students by spring 2025. The supercomputer, powered by 20 Nvidia HGX H100 systems, will be used for various projects related to AI, robotics, engineering, and entrepreneurial ventures, providing students with valuable hands-on experience in AI technology.

Elon Musk's $1 Billion 'Dojo' A.I. Supercomputer: A Solution to Nvidia's Chip Shortage?
technology · 2 years ago

Elon Musk plans to invest over $1 billion to build an A.I. supercomputer called Dojo because Tesla cannot procure enough of Nvidia's A100 Tensor Core GPU clusters. Musk stated that if Nvidia could deliver enough GPUs, Dojo might not be necessary. Tesla has high demand for A.I. chips and aims to reach an in-house compute capacity of 100 exaFLOPS by the end of next year. Musk believes that achieving full autonomy would greatly increase Tesla's car sales and is willing to sacrifice profitability for volume. Dojo will use custom silicon developed in-house at Tesla, optimized for processing video data.

Meta's Latest AI Advancements: Data Centers, Chips, and Open-Source Tech
technology · 2 years ago

Meta has announced a new data center design optimized for AI training and inference, built around its own silicon, the Meta Training and Inference Accelerator (MTIA), a chip intended to speed up AI workloads across a range of domains. The company has also built an AI supercomputer, the Research SuperCluster (RSC), which integrates 16,000 GPUs to train large language models. The new data center designs deliver liquid cooling directly to the chips, providing the power and cooling efficiency that AI hardware requires.