Large Language Models

All articles tagged with #large language models

Tech Legends Clash Over AI-Generated Email Controversies

Originally Published 13 days ago — by Gizmodo

Legendary software engineer Rob Pike received an unsolicited AI-generated email from AI Village, a project that aimed to raise charity funds through AI agents but instead sent him a message he found offensive. The incident highlights the unpredictable and sometimes problematic behavior of large language models and raises questions about their development and ethical use.

GPT-5 Out, Qwen Takes the Spotlight

Originally Published 15 days ago — by WIRED

Qwen, an open-weight large language model developed by Alibaba, is gaining popularity worldwide thanks to its accessibility and versatility. It has surpassed some US models in usage and adoption and is being integrated into applications ranging from smart glasses to automotive dashboards, signaling a shift toward more open, widely used AI models.

Navigating the AI Bubble: Risks and Stable Investment Strategies

Originally Published 16 days ago — by Marcus on AI

The article argues that the AI bubble, particularly around large language models, is likely to burst around 2026. Fundamental technical limitations, such as the lack of world models, undermine reliability and profitability and could trigger an unwinding of the industry.

AI-Driven News Changing Public Perspectives

Originally Published 23 days ago — by The Conversation

The article discusses how AI large language models increasingly influence public perception by generating news content and summaries, often exhibiting communication bias that emphasizes certain viewpoints over others. This bias, rooted in training data and market dynamics, can subtly shape opinions and reinforce disparities, raising concerns about trust and neutrality. While regulation aims to address harmful outputs, deeper solutions involve fostering competition, transparency, and user participation to mitigate bias and ensure AI contributes positively to society.

Exploring AI Personalities: From Psychometrics to Human Mimicry

Originally Published 24 days ago — by Nature

This article presents a psychometric framework for reliably measuring and shaping personality traits in large language models (LLMs). It demonstrates that larger, instruction-tuned models exhibit more human-like, valid, and reliable personality profiles, and that these profiles can be systematically manipulated to influence model behavior, with significant implications for AI safety, responsibility, and personalization.

AI's Growing Role in Peer Review and Scientific Publishing

Originally Published 25 days ago — by Nature

Researchers are increasingly using AI as 'co-scientists' in various research stages, including hypothesis generation and paper writing, but current publication policies often restrict acknowledging AI contributions, raising questions about AI's creativity and review capabilities in science.

AI Adoption in Peer Review: A Growing Trend and Policy Challenge

Originally Published 27 days ago — by Nature

A survey of 1,600 researchers across 111 countries reveals that over half now use AI for peer review, often against guidelines, with many employing it to assist in writing reports, summarizing manuscripts, and detecting misconduct. Despite its growing use, concerns about confidentiality, accuracy, and the need for responsible implementation persist, prompting publishers like Frontiers to develop policies and in-house AI tools. Experiments show AI can mimic review structure but lacks the ability to provide constructive feedback or detailed critique, highlighting both the potential and limitations of AI in peer review.

Advocating for AI that Understands the World, Not Just Predicts

Originally Published 27 days ago — by marketplace.org

The article discusses the limitations of current large language models in understanding the world and explores the 'world models' being developed by AI researchers such as Fei-Fei Li and Yann LeCun. These systems aim to build human-like understanding into AI by partially programming rules into them, a potential leap forward in AI capabilities.

Brain and AI: Unveiling the Neural Foundations of Language Processing

Originally Published 28 days ago — by WIRED

A recent study challenges the notion that AI language models lack sophisticated reasoning abilities, demonstrating that at least one model can analyze language with human-like proficiency, including diagramming sentences and handling recursion. The finding has significant implications for understanding AI's capacity for linguistic reasoning.