Google unveils Gemini 1.5, the latest iteration of its AI model, built on a Transformer and Mixture-of-Experts architecture. Gemini 1.5 Pro, designed for a wide range of tasks and devices, can handle much larger prompts and outperforms its predecessor on benchmark tests. It features strong in-context learning and is currently available for trial through AI Studio and Vertex AI, with a waitlist for interested developers.
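In-context learning means the model picks up a task from examples placed directly in the prompt, with no retraining. Below is a minimal sketch of the idea using few-shot prompting; the classification task and example reviews are invented for illustration and are not from Google's announcement.

```python
# Minimal illustration of in-context learning: worked examples go
# straight into the prompt, and the model infers the pattern at
# inference time. The task and examples here are hypothetical.

examples = [
    ("The movie was breathtaking.", "positive"),
    ("I want my money back.", "negative"),
    ("An instant classic.", "positive"),
]

query = "The plot dragged on forever."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

print(prompt)  # send this string to the model; it should answer "negative"
```

The larger the context window, the more (and richer) examples can be packed in this way, which is one reason long context and in-context learning are announced together.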
Google is set to launch Gemini 1.5, the successor to its original Gemini model, with significant improvements, most notably a context window of 1 million tokens that lets the model handle larger queries and process far more information at once. Gemini 1.5 will initially be available to developers and enterprise users, with a consumer rollout planned for later. The model is designed to be faster and more efficient, and Google is still testing its safety and ethical boundaries. The company is in a competitive race with OpenAI to build the best AI tools; CEO Sundar Pichai believes that, despite the technical advances, users will eventually just consume the experiences without paying attention to the underlying technology.
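To give the 1-million-token figure some scale, here is a rough back-of-the-envelope check of what fits in such a window. The ~4 characters per token figure is a common heuristic for English prose, not Gemini's actual tokenizer, so treat the numbers as estimates only.

```python
# Rough estimate of whether a text fits a 1M-token context window.
# Exact counts depend on the model's tokenizer; 4 chars/token is a
# heuristic for English text, used here purely for illustration.

CONTEXT_WINDOW = 1_000_000   # tokens, per Google's Gemini 1.5 announcement
CHARS_PER_TOKEN = 4          # rough heuristic, not Gemini's tokenizer

def fits_in_context(text: str) -> bool:
    est_tokens = len(text) / CHARS_PER_TOKEN
    print(f"~{est_tokens:,.0f} of {CONTEXT_WINDOW:,} tokens")
    return est_tokens <= CONTEXT_WINDOW

# A 300-page book is very roughly 600,000 characters (~150,000 tokens),
# so several such books could fit in a single prompt.
fits_in_context("x" * 600_000)   # -> True
```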
French AI company Mistral AI has announced Mixtral 8x7B, a language model that reportedly matches the performance of OpenAI's GPT-3.5. Mixtral 8x7B is a "mixture of experts" (MoE) model with open weights, allowing it to run locally on devices with fewer restrictions. It processes a 32K-token context window and works across multiple languages. Mistral claims that Mixtral 8x7B outperforms much larger models and matches or exceeds GPT-3.5 on certain benchmarks. The speed at which open-weights models are catching up with closed ones has surprised many, and the release of Mixtral 8x7B is seen as a significant development in the AI space.
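The "mixture of experts" idea is what lets a model like Mixtral carry many parameters while keeping per-token compute low: a router scores the experts for each token and only the top-scoring few actually run. The toy sketch below shows that routing pattern; it is a hypothetical illustration under simplified assumptions (random single-layer "experts"), not Mistral's actual implementation.

```python
import numpy as np

# Toy sketch of top-2 MoE routing, the technique Mixtral 8x7B reportedly
# uses. Hypothetical and simplified: each "expert" is one random linear
# layer, and only the 2 best-scoring experts run per token, so most of
# the parameters sit idle on any given forward pass.

rng = np.random.default_rng(0)

N_EXPERTS = 8      # Mixtral has 8 experts per MoE layer
TOP_K = 2          # and routes each token to 2 of them
D_MODEL = 16       # toy hidden size for this sketch

experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D_MODEL, N_EXPERTS))

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-2 experts."""
    scores = token @ router                # one routing score per expert
    top = np.argsort(scores)[-TOP_K:]      # indices of the 2 best experts
    weights = softmax(scores[top])         # renormalize over the winners
    # Only the selected experts do any work; the other 6 are skipped.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_layer(rng.standard_normal(D_MODEL))
print(out.shape)   # (16,)
```

This sparsity is why the model's name separates "8x" from "7B": all eight experts must fit in memory, but each token only pays the compute cost of two of them.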