Tag

On-device inference

All articles tagged with #ondeviceinference

AMD bets on local AI with OpenClaw running on Ryzen and Radeon hardware
Technology · 2 days ago

AMD has unveiled OpenClaw, a local-AI framework that runs large language models on consumer hardware in two configurations, RyzenClaw and RadeonClaw, using Windows WSL2 and LM Studio (built on llama.cpp), with a Memory.md file providing persistent local context. RyzenClaw targets CPU-based inference at roughly 45 tokens/sec with a 260k-token context window and up to six concurrent agents; RadeonClaw uses the Radeon AI PRO R9700 GPU to reach around 120 tokens/sec with a 190k-token context and two agents. At about $2,700 for a Ryzen-based desktop and $1,299 for the GPU, OpenClaw is aimed at developers and enthusiasts who value autonomy, privacy, and on-device AI over cloud-scale services.
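The article does not document OpenClaw's own API, but since the stack described runs on LM Studio (llama.cpp), a local model can be reached the usual way: both LM Studio's local server and llama.cpp's `llama-server` expose an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal sketch, assuming LM Studio's default port 1234 (the helper names `build_chat_request` and `ask_local_llm` are illustrative, not part of OpenClaw):

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model", max_tokens=256):
    """Build an OpenAI-compatible chat-completion payload.

    LM Studio largely ignores the model name and serves whichever
    model is currently loaded, so a placeholder string is fine.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": False,
    }

def ask_local_llm(prompt, base_url="http://localhost:1234"):
    """POST a prompt to a local LM Studio / llama.cpp server and
    return the assistant's reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, any existing OpenAI-style client can be pointed at the local server instead; no cloud key is needed, which is the privacy argument the article makes.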