In addition to a new gesture-driven search feature for Android devices, Google has unveiled an AI-enhanced capability within Google Lens's visual search. Starting today, users can point their camera at something, or upload a photo or screenshot to Lens, and then ask questions about what they see, receiving answers powered by generative AI.
This enhancement builds on Lens's multisearch capabilities, which let users search by combining text and images. Previously, such searches simply surfaced other, visually similar images. With this launch, users will instead receive AI-generated insights alongside their image results.
For instance, users can photograph a plant and ask, "When do I water this?" Instead of merely presenting similar images, the feature identifies the plant and provides care instructions, such as advising users to water it "every two weeks." This functionality leverages information gathered from various online sources, including websites, product pages, and videos.
Moreover, this feature works with Google's new gesture-based search method, Circle to Search. Users can initiate these generative AI queries with a gesture, then ask a question about whatever they've circled or otherwise marked to learn more about it.
Note that while Lens multisearch offers generative AI answers, it is distinct from Google's experimental Search Generative Experience (SGE), which remains an opt-in service.
The AI-enhanced multisearch feature in Lens is rolling out starting today to all English-language users in the U.S. Unlike some of Google's other AI projects, it isn't confined to Google Labs. To access it, tap the Lens camera icon in the Google search app on iOS or Android, or in the search box on your Android smartphone.
This addition, like Circle to Search, is meant to keep Google Search relevant in the age of AI. While the web is often cluttered with SEO-optimized content, these features aim to improve search results by drawing on the vast store of knowledge in Google's index and presenting it in a fresh format.
However, relying on AI carries risks: answers may not always be accurate or relevant. Since web pages are not encyclopedias, the quality of the responses depends on the accuracy of the underlying sources and the AI's ability to answer questions without fabricating information.
Google emphasizes that its generative AI offerings, including SGE, will cite their sources so users can verify the information provided. While SGE will remain in the Labs phase for now, Google plans to roll out further generative AI advancements where appropriate, with this multisearch launch as a first step.
The AI-enhanced multisearch feature in Lens is available today, with the gesture-based Circle to Search set to debut on January 31.