Google has significantly upgraded NotebookLM's chat feature with its latest Gemini models. The update brings an eight-times-larger context window of 1 million tokens, improved multi-turn conversation capacity, automatic chat-history saving, and the ability to assign the chat a specific goal or role, improving performance, response quality, and user satisfaction.
Anthropic researchers have discovered a new "many-shot jailbreaking" technique that exploits the enlarged context windows of large language models (LLMs): by priming a model with a long sequence of fabricated question-and-answer pairs, an attacker can coax it into producing answers it would normally refuse. The vulnerability stems from in-context learning — the same ability to hold extensive data in short-term memory that lets models improve at a task when shown repeated examples. The researchers have informed the AI community about the exploit and are working on mitigations, such as classifying and contextualizing queries before they reach the model.
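The mechanics of many-shot priming can be sketched roughly as follows: a long context is filled with fabricated dialogue turns before the final target question. This is a minimal illustration only — the helper name, turn format, and placeholder shots are assumptions, not Anthropic's actual test material.

```python
# Minimal sketch of how a many-shot prompt is assembled: many fabricated
# Q/A "shots" fill the context window ahead of the final question.
# The shots, roles, and helper name here are hypothetical placeholders.

def build_many_shot_prompt(shots, final_question):
    """Concatenate fabricated dialogue turns into one long prompt string."""
    lines = []
    for question, answer in shots:
        lines.append(f"Human: {question}")
        lines.append(f"Assistant: {answer}")
    # The target question comes last, after the model has been "primed".
    lines.append(f"Human: {final_question}")
    lines.append("Assistant:")
    return "\n".join(lines)

# With a large context window, hundreds of shots fit in a single prompt.
shots = [(f"Question {i}?", f"Answer {i}.") for i in range(256)]
prompt = build_many_shot_prompt(shots, "Final target question?")
print(prompt.count("Human:"))  # 257: 256 shots plus the target question
```

The point of the attack is scale: with a 4K-token window only a handful of shots fit, but a 100K- or 1M-token window admits hundreds, which is where the researchers observed the jailbreak effect emerging.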
Google is set to launch Gemini 1.5, the successor to its original Gemini AI model, with significant improvements including a larger context window of 1 million tokens, allowing the model to handle larger queries and process more information at once. Gemini 1.5 will initially be available to developers and enterprise users, with a consumer rollout planned for the future. The model is designed to be faster and more efficient, and Google is also testing its safety and ethical boundaries. The company is in a competitive race with OpenAI to build the best AI tool; CEO Sundar Pichai believes that despite the technical advancements, users will eventually just consume the experiences without paying attention to the underlying technology.
Google unveils Gemini 1.5, a new AI model with significant upgrades over its predecessor, including a longer context window, improved understanding, and enhanced performance. Built on a new version of the Mixture-of-Experts (MoE) architecture, the model can process up to one million tokens, enabling it to handle vast amounts of information, including video, audio, and entire codebases. Gemini 1.5 Pro outperforms its predecessor on benchmarks and is being released in a limited preview to developers and enterprise customers at no cost, with a wider release and pricing tiers planned for the future.
Anthropic has released Claude 2.1, a large language model (LLM) with a 200,000-token context window, surpassing the 128,000-token window of OpenAI's GPT-4 Turbo. The upgrade offers improved accuracy, reduced hallucination rates, system prompts for personalized responses, and tool integration. Claude 2.1 lets users process and analyze long-form documents with precision, making it suitable for a wide range of applications. The release highlights the growing demand for AI models that can handle extensive context and raises new considerations for businesses and users in the AI industry.
Anthropic's Claude AI language model can now analyze an entire book's worth of material in under a minute, thanks to an expanded context window of 100,000 tokens. This is a big upgrade compared to OpenAI's GPT-4, whose standard context window is 8,192 tokens (32,768 in an extended variant). The enlarged context window could help businesses extract important information from multiple documents through a conversational interaction. Anthropic received a $300 million investment from Google in late 2022.
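To make "a book's worth of material" concrete, one can estimate whether a document fits in a given window using the common rule of thumb of roughly four characters per English token. The ratio and helper below are illustrative assumptions, not Anthropic's actual tokenizer:

```python
# Rough token-budget check using the common ~4 characters/token heuristic
# for English prose. This is an approximation, not a real tokenizer.

CHARS_PER_TOKEN = 4  # assumed average for English text

def estimated_tokens(text: str) -> int:
    """Approximate the token count of a text from its character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, context_window: int) -> bool:
    """Check whether the estimated token count fits in the given window."""
    return estimated_tokens(text) <= context_window

# A ~300-page novel is on the order of 500,000 characters,
# i.e. roughly 125,000 tokens under this heuristic.
book = "x" * 500_000
print(estimated_tokens(book))          # 125000
print(fits_in_context(book, 4_096))    # False: far beyond a 4K window
print(fits_in_context(book, 100_000))  # False: slightly over 100K
print(fits_in_context(book, 200_000))  # True: fits in Claude 2.1's window
```

By this estimate, a typical novel overflows early GPT-scale windows by orders of magnitude but sits comfortably inside the 100K-200K windows discussed above, which is why whole-document analysis becomes practical at these sizes.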