AI & ML

Google expands Search Live globally with voice and camera AI


Google is taking another big step toward turning Search into a full-blown AI assistant. The company has officially expanded Search Live globally, making the feature available in over 200 countries and territories, along with support for dozens of languages.

Search Live is now global 🌍

Interactive, multimodal conversations in AI Mode are now available in over 200 countries & territories.

This update is powered by Gemini 3.1 Flash Live, our highest quality audio and voice model yet. This model is also inherently multilingual, so… pic.twitter.com/zRVV69hGUE

— Google (@Google) March 26, 2026

Originally launched in the US, Search Live is part of Google’s broader push to make search more conversational, interactive and, most importantly, hands-free.

What exactly is Google Search Live?

Think of it as Google Search… but you talk to it. Search Live lets users ask questions by voice, or even through their phone’s camera, in the Google app on Android and iOS, and get spoken responses along with relevant web links.

🎙️ How to go Live with Search:

Open the Google app on Android or iOS and tap the Live icon under the Search bar. From there, you can ask your question out loud to get a helpful audio response, then continue the conversation with follow-up questions or dive deeper with helpful… pic.twitter.com/iZSR6YYVpq

— Google (@Google) March 26, 2026

For example, you could point your phone at something, say, a broken shelf, and ask how to fix it. The AI will analyze what it sees and respond in real time, making it feel more like a conversation than a query. The feature is powered by Google’s Gemini 3.1 Flash model, which is designed for faster, more natural, and multilingual interactions.

So… is the search bar officially on notice?

This is a pretty big shift. Google isn’t just improving search; it’s slowly replacing the whole “type and scroll” experience. With Search Live, users can talk, ask follow-up questions, and interact naturally. It’s basically ChatGPT-style interaction, baked right into Google Search.

It also pushes things into multimodal territory, where voice, visuals, and context all work together. You can jump in via the Google app or trigger it through Lens, making it feel seamless. Looking ahead, this changes what “search” even means. It’s becoming an assistant that understands and responds in real time. And now that it’s rolling out globally, this isn’t a test anymore… It’s the new normal.