Google is widening access to Gemini 3, rolling out its newest flagship generative AI model to far more users through AI Mode in Search. When the model launched earlier this year, Gemini 3 Pro was initially limited to Google AI Pro and Ultra subscribers in the United States. Now, Google is expanding that access to 120 countries and territories across the Americas, Asia-Pacific, and EMEA, marking one of the company's broadest global deployments for an advanced model this early in its lifecycle.
Users in these regions who subscribe to Google's higher-tier plans can choose the Thinking with 3 Pro option within AI Mode to tap into the model's expanded reasoning and multimodal capabilities. For now, usage is limited to English prompts, though Google typically adds languages gradually once a system stabilizes across markets.
The company says Gemini 3 brings stronger comprehension of intent and context, along with upgrades to AI Mode’s query fan-out method — the internal process that allows Search to broaden its reach when finding relevant material. Google claims the model also outperforms leading alternatives, noting that Gemini 3 recently surpassed competitors such as OpenAI’s GPT-5.1 on the LMArena Leaderboard. Alongside the rollout, Search has also received an updated model-routing feature that automatically sends more complex queries to Gemini 3 while relying on faster, lighter models for straightforward tasks.
Google is also extending access to Nano Banana Pro — the image generation model housed within the Gemini 3 family — to more countries via AI Mode for Pro and Ultra subscribers. That expansion aligns with other recent Gemini-related updates, including interactive science diagrams and tools for identifying AI-generated images within the Gemini app.
The full regional rollout underscores Google's push to accelerate availability compared with previous model launches, when many users waited months for access to the most capable variants. Whether Gemini 3's broader footprint meaningfully boosts user adoption remains an open question, but providing faster access to higher-end models is clearly a priority as generative AI competition intensifies across both consumer tools and search experiences.

