Google is enhancing its experimental AI Mode by combining the visual power of Google Lens with the conversational intelligence of Gemini, offering users a more dynamic way to search.
Instead of relying on typed queries alone, users can now upload photos or take snapshots with their smartphone to receive more insightful answers.
The new feature moves beyond traditional reverse image search. For instance, you could snap a photo of a mystery kitchen tool and ask, 'What is this, and how do I use it?', receiving not only a helpful explanation but also links to buy one and even video demonstrations.
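For developers who want a feel for this kind of multimodal query, the public Gemini API offers a rough analogue. The sketch below, using the google-generativeai Python SDK, sends a photo and a question in a single request; the file name, API key and model choice are placeholders, and AI Mode itself runs on Google's own search infrastructure rather than this public API.

```python
import google.generativeai as genai
from PIL import Image

# Placeholder key: substitute your own Gemini API key.
genai.configure(api_key="YOUR_API_KEY")

# Load a photo of the unknown kitchen tool (hypothetical file name).
photo = Image.open("mystery_tool.jpg")

# Send the image and the question together as one multimodal request.
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    [photo, "What is this kitchen tool, and how do I use it?"]
)
print(response.text)
```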
Rather than focusing on a single object, AI Mode can interpret entire scenes, offering context-aware suggestions.
Take a photo of a bookshelf, a meal, or even a cluttered drawer, and AI Mode will identify items and describe how they relate to each other. It might suggest recipes using the ingredients shown, help identify a misplaced phone charger, or recommend an order in which to read your books.
Behind the scenes, the system runs multiple AI agents to analyse each element, providing layered, tailored responses.
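How Google orchestrates those agents is not public, but conceptually it resembles fanning out several focused sub-questions about the same image and merging the answers. A minimal sketch of that idea, again using the public Gemini SDK with hypothetical sub-questions and file names:

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

# Fan out several independent, focused sub-questions about one scene.
# The sub-questions are illustrative; Google's internal agents are not public.
scene = Image.open("kitchen_counter.jpg")
sub_questions = [
    "List every ingredient visible in this photo.",
    "Suggest one simple recipe that could be made from what is shown.",
    "Is anything in this photo out of place? Answer briefly.",
]

# Each call analyses the same image from a different angle; the results
# can then be combined into one layered response.
answers = [model.generate_content([scene, q]).text for q in sub_questions]
for question, answer in zip(sub_questions, answers):
    print(f"Q: {question}\nA: {answer}\n")
```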
Although other platforms like ChatGPT also support image recognition, Google's strength lies in its decades of search data and visual indexing. Currently, the feature is available to Google One AI Premium subscribers and to users enrolled in Search Labs, via the Google mobile app.