Windows 11's Snipping Tool now includes a Bing-powered Visual Search feature that offers a Google Lens-like experience for screen captures, though Google Lens on Windows still outperforms it in accuracy and convenience. The update also adds live annotation, color picking, and improved sharing options, making the tool a more capable utility despite its limitations relative to Google Lens.
Google is enhancing its AI Mode to provide a more visual, interactive search experience: users see images alongside text for inspiration and shopping, can refine results conversationally, and can upload images to improve relevance. The update leverages advanced visual understanding and multimodal capabilities, and it is rolling out to US users before expanding globally.
Amazon has introduced Lens Live, a new iOS feature that uses the phone's camera to scan and identify products in real time, displaying matching items from Amazon's marketplace with options to purchase them or add them to a wishlist, extending the company's AI-driven visual search capabilities.
Amazon has launched Lens Live, an AI-powered real-time visual search tool that enhances its existing Amazon Lens feature, allowing users to discover products by pointing their phone at objects in the real world. The tool integrates with Amazon's AI shopping assistant Rufus for product insights and is initially available on iOS for U.S. shoppers, aiming to improve in-store comparison shopping and product discovery.
Google is enhancing its AI Mode in Search with new features like Canvas for organizing study plans, real-time help via Search Live with Google Lens, and the ability to ask questions about on-screen content and PDFs, aiming to make search more interactive and context-aware.
YouTube is integrating Google Lens into Shorts, letting viewers pause a video and select the Lens option to search for and learn more about objects or locations shown on screen, enhancing visual search and content discovery; the rollout begins this week.
Google's Chrome browser for iOS is receiving updates that include combined Google Lens image and text search, letting users refine searches by pairing a picture with a text query. The update also enables saving images and files directly to Google Photos and Drive, working around the iPhone's limited local storage. New features such as Shopping Insights and a mini-map viewer round out the release, offering deal notifications and quick map previews within the browser.
At Google's Marketing Live event, a new AI-powered visual search feature was introduced, blending search results with advertisements in a way that makes it difficult to distinguish between the two. This new approach, demonstrated through Google Lens, aims to enhance user engagement by integrating shopping options directly into search experiences, effectively blurring the lines between AI-assisted searches and advertising.
Ray-Ban's Meta sunglasses have introduced a new beta feature that uses AI-powered visual search to identify and describe landmarks, acting as a tour guide for travelers. The feature provides text and audio descriptions of landmarks, and is part of Meta's effort to enable users to ask AI questions about their environment through the glasses. The feature is currently available to a limited number of users in Meta's early access program, with plans to expand availability in the future.
Circle to Search, the latest feature for the Pixel 8 and 8 Pro, is rolling out, allowing users to perform visual searches by circling, tapping, or squiggling over anything on their screens. The feature, reminiscent of "Now on Tap," displays a search bar with a voice mic and a Lens shortcut at the bottom of the screen, along with updated multisearch results. Users with gesture navigation can long-press the nav handle to invoke Circle to Search, while those with 3-button navigation can use the home button, albeit losing the ability to launch Google Assistant that way.
Google has updated its Lens feature with new AI tools, allowing users to take a photo and ask questions about it, such as identifying a board game or a dish at a restaurant. The update enables users to ask specific questions about the photo without needing to describe its contents. The feature is available on the Google app for Android and iOS, and users can upload images from their phones to use the tool.
Google has introduced Circle to Search, a feature that lets users of high-end Android phones like the Samsung Galaxy S24 pull up Google from within any app, circle or highlight any text or object, and get instant AI-powered results. The enhanced visual search will also be available on the Pixel 8 and Pixel 8 Pro, and it lets users search for anything on their screen without switching between apps. Additionally, Google is integrating AI into more products, including an AI-powered upgrade to multisearch in Lens that supports more nuanced visual queries.
Google has introduced Circle to Search, a feature that allows users to circle something on their phone screen and instantly retrieve Google search results about the circled item. Initially available on select premium Android phones, the feature eliminates the need to switch between apps and simplifies the search process. Additionally, Google has updated its multisearch feature, enabling users to ask complex questions to refine visual searches, and this upgrade is available in the Google app for Android devices in the US.
This article provides four easy upgrades to improve your digital life, including changing passwords for important accounts, updating WiFi router software for security, choosing slower shipping for online orders to reduce environmental impact, and using visual search features on Amazon and Google for convenient online shopping and translation.
Apple's visionOS operating system for the Vision Pro headset includes a feature called "Visual Search" that allows users to identify items, interact with text in the real world, copy and paste printed text into apps, translate text across 17 languages, and more. The headset can detect text and documents, and its real-time text translation should prove useful for travel. The feature was discovered by Steve Moser in the latest Xcode beta.