A prominent Chinese AI researcher, Yao Shunyu, left Anthropic over what he described as its anti-China stance and joined Google DeepMind as a senior research scientist, citing disagreements with the company's characterization of China as an adversarial nation and highlighting geopolitical tensions within the global AI research community.
Google DeepMind's new Gemini 2.5 Flash Image model enables quick, realistic photo edits through simple prompts, allowing users to manipulate images with high fidelity, but it raises concerns about potential misuse and the difficulty of detecting AI-generated manipulations.
Google has announced Gemini 2.5 Flash Image, an advanced AI image editing model that rivals and may surpass existing tools like Photoshop, posing a significant threat to traditional software companies such as Adobe. The model is praised for maintaining likeness consistency and making precise edits, and it is available to both consumers and professionals, with Adobe integrating Google's model into its own suite to stay competitive.
Google DeepMind and OpenAI achieved a milestone by developing AI models that each solved five of six problems at the International Mathematical Olympiad, the first time AI systems have crossed the gold-medal threshold in the prestigious high-school competition, signaling advances in reasoning capabilities that could soon aid in solving complex research problems.
Vinod Khosla criticized Windsurf's founders for abandoning their team to join Google DeepMind after a failed deal with OpenAI, expressing disapproval of founders leaving their teams behind without sharing proceeds, and emphasizing the importance of trust and loyalty in startup leadership.
Google DeepMind has launched Gemini 2.0, an advanced AI model designed for the "agentic era," featuring enhanced multimodal capabilities, including native image and audio output and tool use. The Gemini 2.0 Flash model is currently available to developers and trusted testers, with broader access planned for early next year. Google is exploring new agentic experiences with projects like Astra, Mariner, and Jules, while emphasizing responsible AI development with a focus on safety and security.
Google DeepMind has introduced GenCast, an AI weather prediction model that surpasses traditional methods in accuracy for forecasts up to 15 days ahead and excels at predicting extreme weather events. GenCast takes a probabilistic approach based on ensemble predictions, trained on four decades of data from the European Centre for Medium-Range Weather Forecasts (ECMWF). It outperformed ECMWF's 15-day forecast on 97.2% of variables and generates predictions in just eight minutes. The model represents a significant advance in AI-driven weather forecasting, though further testing on extreme events is needed.
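Probabilistic ensemble forecasts like GenCast's are typically scored not on a single prediction but on the whole ensemble, commonly with the Continuous Ranked Probability Score (CRPS), which rewards ensembles that are both accurate and well-calibrated. As a minimal illustrative sketch (not GenCast's actual evaluation code), the standard ensemble estimator of CRPS can be written as:

```python
import numpy as np

def ensemble_crps(members, observation):
    """Continuous Ranked Probability Score for an ensemble forecast.

    Lower is better; for a single-member ensemble this reduces to
    the absolute error between forecast and observation.
    """
    members = np.asarray(members, dtype=float)
    # Mean distance from each ensemble member to the observed value.
    skill = np.mean(np.abs(members - observation))
    # Mean pairwise distance among members (rewards honest spread).
    spread = np.mean(np.abs(members[:, None] - members[None, :]))
    return skill - 0.5 * spread

# Toy example: two 5-member temperature forecasts vs. an observed 14.6 °C.
tight_ensemble = [14.2, 15.1, 13.8, 14.9, 15.5]
wide_ensemble = [10.0, 19.0, 8.5, 20.2, 16.0]
print(ensemble_crps(tight_ensemble, 14.6))  # lower (better) score
print(ensemble_crps(wide_ensemble, 14.6))   # higher (worse) score
```

A sharp, accurate ensemble scores low on both terms, while an overconfident or overly diffuse one is penalized, which is why CRPS is a standard yardstick when comparing probabilistic models against ensembles such as ECMWF's.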
Google DeepMind has open-sourced AlphaFold 3, a significant advancement in protein structure prediction that can model complex interactions between proteins, DNA, RNA, and small molecules. This release, following the creators' Nobel Prize win, aims to accelerate drug discovery and molecular biology research. While the code is freely available, access to model weights requires permission, balancing open science with commercial interests. AlphaFold 3's diffusion-based approach enhances accuracy in predicting molecular interactions, promising substantial impacts on scientific discovery and medicine, despite some limitations.
Former employees of OpenAI and Google DeepMind have published an open letter urging AI companies to allow employees to raise concerns about AI risks without fear of retaliation. The letter, signed by 13 individuals, highlights risks such as the entrenchment of inequalities, misinformation, and loss of control over AI systems. The group asks companies to commit to four key principles: not enforcing agreements that prohibit criticism, facilitating anonymous reporting, supporting a culture of open criticism, and not retaliating against employees who publicly share risk-related confidential information after other avenues have failed.
Former employees of OpenAI and Google DeepMind have published an open letter urging AI companies to adopt principles that protect employees who raise concerns about AI risks from retaliation. The letter highlights the potential dangers of AI, such as entrenching inequalities and loss of control over autonomous systems, and calls for greater transparency and oversight. The move follows criticism of OpenAI's restrictive non-disclosure agreements and has garnered support from notable AI experts.
Current and former employees of AI companies, including OpenAI and Google DeepMind, have raised concerns about the risks posed by unregulated AI technology. They argue that financial motives hinder effective oversight and warn of potential dangers such as misinformation, loss of control over autonomous AI systems, and deepening inequalities. The group calls for better processes for raising risk-related concerns and criticizes AI firms' weak obligations to share information with governments.
Current and former employees of OpenAI and Google DeepMind have issued an open letter warning about the dangers of advanced AI and the lack of oversight in the industry. They highlight risks such as inequality, misinformation, and loss of control over autonomous systems, which they say could potentially lead to human extinction. The letter calls for stronger whistleblower protections and a culture of open criticism to address these concerns.
Current and former employees from OpenAI and Google DeepMind have issued an open letter warning about the lack of safety oversight in the AI industry and calling for increased protections for whistleblowers. The letter highlights the need for transparency and accountability, criticizing companies for weak obligations to share critical information with governments and civil society. OpenAI defended its practices, while Google did not respond. The letter follows recent resignations and criticisms from key figures within OpenAI, emphasizing the urgency for effective government oversight and the protection of employees who raise safety concerns.
Sundar Pichai announced a reorganization merging Android, Chrome, and Pixel teams into a new Platforms & Devices organization to drive computing at the intersection of hardware, software, and AI, aiming to deliver higher quality products and experiences. Google DeepMind will now handle all compute-intensive model building, while Responsible AI teams move to DeepMind for closer collaboration. Additionally, Google Research is focusing on computing systems, foundational ML and algorithms, and applied science and society. Pichai emphasized the company's duty to be an objective and trusted provider of information, prioritizing the organization of the world's information for universal accessibility and usefulness.