Tag: Custom Silicon

All articles tagged with #custom silicon

technology · 1 year ago

Xiaomi to Launch 3nm Chipset in 2024, Aiming to Rival Qualcomm and MediaTek

Xiaomi plans to officially unveil its custom 3nm chipset next year, marking a significant shift from its reliance on Qualcomm and MediaTek. The company is expected to partner with TSMC for mass production, despite potential trade sanctions from the U.S. due to geopolitical tensions. This move could increase competition in the smartphone industry, as Xiaomi aims to enhance profitability and reduce dependency on external chipset suppliers.

technology · 1 year ago

Dell and NVIDIA Unveil AI-Powered PCs and Servers for 2024

NVIDIA and Dell are planning to enter the "AI PC" market next year, with NVIDIA's CEO Jensen Huang hinting at advanced AI-focused features and custom silicon solutions. This move comes as competition in the AI PC segment intensifies, with companies like Qualcomm making significant strides. NVIDIA aims to leverage its Tensor Core architecture and cutting-edge processes to offer high-performance AI computing solutions, potentially scaling from consumer PCs to high-end workstations.

technology · 1 year ago

Meta Unveils Next-Gen AI Chip for Faster Performance

Meta has announced a more powerful next generation of its custom AI chip, the Meta Training and Inference Accelerator (MTIA), designed to train ranking and recommendation models faster and more efficiently. The company plans to integrate the chips with its current technology infrastructure and future GPU advancements, with the goal of eventually expanding their capabilities to train generative AI. Early test results show the new chip performs three times better than the first-generation version. Meta also plans to develop other AI chips, such as Artemis for inference, in response to the increasing demand for compute power in AI workloads.

technology · 1 year ago

Google's Axion: The Latest in Arm-based Data Center Processors

Google has developed its own in-house datacenter CPU called Axion, based on the Arm Neoverse V2 platform, offering up to 50% higher performance and 60% better energy efficiency compared to current x86-based processors. The Axion CPUs incorporate Google's Titanium microcontrollers for networking, security, and storage I/O processing, freeing up CPU core resources for workloads. Google is ready to offer instances based on its Armv9-based Axion CPUs to customers and has previously deployed Arm-based processors for its own services.

technology · 1 year ago

Microsoft's In-House AI Server Gear Reduces Reliance on Nvidia: Report

Microsoft is reportedly developing its own networking card for AI datacenters to reduce reliance on Nvidia's hardware and optimize its Azure infrastructure, following the development of its own 128-core datacenter CPU and Maia 100 GPU. The company's acquisition of Fungible has provided the necessary networking technologies and IP for this endeavor, with Pradeep Sindhu leading the project. The new networking card aims to improve the performance and efficiency of Azure servers, potentially impacting Nvidia's server networking gear sales. This move aligns with the industry trend of cloud providers developing their own custom silicon, but the initial results may still be years away.

technology · 2 years ago

Meta Unveils Next-Gen AI Chip and Datacenter Technologies

Meta Platforms, formerly known as Facebook, has unveiled its homegrown AI inference and video encoding chips at its AI Infra @ Scale event. Because the company designs its own hardware to drive its software stack from top to bottom, it is free to tailor that hardware precisely to its workloads. The Meta Training and Inference Accelerator (MTIA) AI inference engine is based on a dual-core RISC-V processing element, surrounded by enough supporting logic to deliver useful throughput while still fitting within a 25 watt chip and a 35 watt dual M.2 peripheral card.

technology · 2 years ago

Microsoft Develops Custom AI Chip Amid Growing Demand for Machine Learning

Microsoft is developing a new AI chip, Athena, designed to handle large language model training, which could be available for use within the company and OpenAI as early as next year. Experts say Nvidia won't be threatened by this move, but it does signal the need for hyperscalers to develop their own custom silicon. The need for acceleration also applies to AI chips that support machine learning inference, and the hyperscalers see customized silicon as a way to address the inference needs of their customers.

artificial-intelligence · 2 years ago

Navigating the Impact of Generative AI on Business and Society

Swami Sivasubramanian, who leads database, analytics, and machine learning services at AWS, discusses the broad landscape of generative AI, large language and foundation models, and how custom silicon can help bring down costs, speed up training, and increase energy efficiency. He explains that large language and foundation models are going to become a core part of every application in the coming years, and that while the hype cycle will subside, their adoption will proceed in a grounded and responsible fashion.