Google Unveils New AI Voice Model As ‘Search Live’ Goes Global

Google has expanded its Search Live feature globally and introduced a new artificial intelligence voice and audio model, Gemini 3.1 Flash Live, deepening its efforts to make Search more interactive and intuitive.
The company said the feature is now available in all regions where its AI Mode operates, enabling users in more than 200 countries and territories to interact with Search through voice and camera in real time.
With the rollout, users can speak their queries aloud and receive instant audio responses while continuing the conversation with follow-up questions. The feature is designed to provide a more natural, conversational experience, particularly when typing may be inconvenient.
Search Live can be accessed directly within the Google app on Android and iOS devices by selecting the Live option. In addition to voice interaction, the tool integrates camera functionality, allowing users to point their devices at objects and receive contextual, real-time assistance.
Google said the feature can support practical, everyday use cases, including step-by-step guidance on tasks and instant information about items within a user’s environment.
The rollout is powered by Gemini 3.1 Flash Live, the company's latest voice and audio model, which improves responsiveness and supports multilingual interactions, allowing users to converse in their preferred language.
Beyond consumer applications, the model is also being made available to developers through the Gemini Live API in AI Studio, as well as to enterprises via Gemini Enterprise for Customer Experience.
Google said the expansion reflects its broader push to make Search more accessible and responsive by combining voice, visual context, and conversational AI. Integration with tools such as Google Lens further enhances the experience, enabling real-time interactions based on what users see in their physical surroundings.
