Tag

LLM

All articles tagged with #llm

Adaptive drafting speeds up reasoning LLM training using idle compute
technology · 10 hours ago

MIT researchers introduce Taming the Long Tail (TLT), an adaptive speculative-decoding framework that trains a lightweight "drafter" model on otherwise-idle processors to predict the outputs of a large reasoning LLM, with an adaptive rollout engine selecting the best strategy for each batch. The approach speeds up reinforcement-learning-based training by 70–210% while preserving accuracy, and the trained drafter can be reused for efficient deployment. TLT aims to reduce training cost and energy for complex AI models and has been tested across multiple models and datasets.
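
The draft-then-verify idea behind speculative decoding can be sketched in a few lines. This is a minimal greedy toy, not TLT itself: the "models" are stand-in functions over integer tokens, and all names are illustrative. The small drafter proposes several tokens cheaply; the large target model accepts the longest agreeing prefix and supplies one correction.

```python
def speculative_step(target, drafter, seq, k=4):
    """One round of toy greedy speculative decoding.

    The drafter proposes k tokens autoregressively; the target accepts
    the longest prefix where its own greedy choice agrees, plus one
    "free" token of its own at the first disagreement. (In practice the
    target verifies all k positions in a single batched forward pass;
    this toy checks them serially for clarity.)
    """
    # Drafter proposes k tokens autoregressively.
    draft, s = [], list(seq)
    for _ in range(k):
        t = drafter(s)
        draft.append(t)
        s.append(t)
    # Target verifies the draft and accepts the agreeing prefix.
    accepted, s = [], list(seq)
    for t in draft:
        if target(s) != t:
            break
        accepted.append(t)
        s.append(t)
    # One correction token from the target at the first disagreement.
    accepted.append(target(s))
    return seq + accepted

# Toy models over integer tokens: the target continues with last+1 mod 10;
# the drafter agrees except it guesses wrong after a 7.
target = lambda s: (s[-1] + 1) % 10
drafter = lambda s: 0 if s[-1] == 7 else (s[-1] + 1) % 10

print(speculative_step(target, drafter, [1, 2, 3]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
print(speculative_step(target, drafter, [7]))        # → [7, 8]
```

When the drafter agrees with the target (the first call), five tokens are produced per round; when it disagrees immediately (the second call), the round still yields one correct token, so accuracy is never sacrificed, only speed.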

Boeing Unveils Space-Grade AI, Pushing BA Higher on Edge-Computing Breakthrough
business · 3 days ago

Boeing engineers demonstrated space-qualified edge AI by running a compact large language model on standard hardware to autonomously analyze satellite telemetry, a development that helped BA stock rise about 2%. The story also notes the Supreme Court's refusal to hear a Southwest pilots' union case; analysts still rate BA a Strong Buy with roughly 18.8% upside to a $278 price target after a year of gains.

AI-assisted Arkanix Stealer: a fleeting dark-web info-stealer experiment
technology · 4 days ago

Kaspersky researchers say Arkanix Stealer, promoted on dark-web forums in October 2025, was likely an AI-assisted, short-lived information-stealer project with Python and native C++ versions, a Discord community, and a referral scheme. It could harvest browser data (including OAuth2 tokens), cryptocurrency wallet data, and credentials from Telegram and Discord, plus local-file exfiltration and modular plugins. The premium variant added anti-sandbox/debugging features, RDP credential theft, and advanced post-exploitation tools like ChromElevator to bypass protections. The operation's unclear purpose points to rapid, low-cost AI-driven malware development rather than a sustained campaign, with IoCs published by Kaspersky.

Training AI on Low-Quality Data Causes Cognitive Decline
technology · 4 months ago

Researchers from Texas A&M, the University of Texas, and Purdue University have proposed the 'LLM brain rot hypothesis': training large language models on low-quality 'junk' data, such as trivial or sensationalist tweets, can cause lasting cognitive decline in the models, analogous to the attention and memory problems linked to internet overuse in humans.

Apple research reveals LLMs gain from classic productivity techniques
technology · 6 months ago

A study by Apple researchers demonstrates that large language models (LLMs) can significantly improve their performance and alignment through reinforcement learning from checklist feedback (RLCF), a simple method that scores responses against per-instruction checklist items. This approach enhances complex instruction following and could be crucial for future AI-powered assistants, although it has limitations in safety alignment and applicability to other use cases.
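
The checklist-scoring idea can be sketched concretely. This is a hedged toy in the spirit of RLCF, not Apple's implementation: each checklist item is a yes/no judgment on the response (in the paper, an LLM judge produces these scores), and the items are averaged into one scalar reward. All names and the example checklist are illustrative.

```python
def checklist_reward(response: str, checklist) -> float:
    """Average per-item scores (each in [0, 1]) into one scalar reward
    usable by a reinforcement-learning loop."""
    scores = [item(response) for item in checklist]
    return sum(scores) / len(scores)

# Toy checklist for an instruction like:
# "Reply in one sentence and mention the deadline (March 3)."
checklist = [
    lambda r: 1.0 if r.count(".") <= 1 else 0.0,  # at most one sentence
    lambda r: 1.0 if "March 3" in r else 0.0,     # mentions the deadline
]

good = "The report is due March 3."
bad = "Here is an update. The report is due soon."
print(checklist_reward(good, checklist))  # → 1.0
print(checklist_reward(bad, checklist))   # → 0.0
```

Because each item targets one concrete requirement of the instruction, the averaged score gives denser, more interpretable feedback than a single holistic reward-model score.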

Anthropic revokes OpenAI's access to Claude over unauthorized tool usage
technology · 6 months ago

Anthropic revoked OpenAI's access to its Claude large language models after discovering that OpenAI was using the models to benchmark and develop its own competing AI, violating the terms of service. While OpenAI can still perform safety evaluations, its ability to use Anthropic's tools for development has been cut off, highlighting tensions in AI model sharing and competition.

Reducing reliance on large language models
technology · 8 months ago

Many developers are experiencing mixed feelings about using large language models (LLMs) for coding, recognizing their potential to accelerate tasks and generate boilerplate, but also noting issues like messy code, lack of ownership, and the need for disciplined management. While some see LLMs as invaluable assistants for small tasks and prototyping, others caution against over-reliance due to challenges in understanding and maintaining AI-generated code, emphasizing the importance of human oversight and skill.

AI's Path to Trust and Breakthroughs Amidst Potential Dangers
technology · 1 year ago

AI is transforming scientific research from a passive tool to an active collaborator, as seen in Stanford's 'Virtual Lab' framework, which uses AI agents to assist in interdisciplinary research, such as designing nanobodies for SARS-CoV-2. These AI agents engage in discussions, propose solutions, and critically evaluate outcomes, though human oversight remains crucial to verify their accuracy. The framework is adaptable to various scientific fields, highlighting AI's growing role in accelerating scientific discovery.