Google I/O 2025 was, unsurprisingly, a densely packed showcase, but even so, the event's heavy emphasis on AI development stood out. Across nearly two hours of updates, the majority of announcements centered on the company's Gemini platform and its growing integration across Android, Chrome, and Workspace. While some features are aimed at developers and enterprise users, many are headed directly to everyday devices, some starting today.
One of the most immediate changes is the rollout of Gemini Live on iPhones, expanding the feature beyond Android. Gemini Live lets users share their phone's screen or camera feed with Google's AI assistant for contextual help, making it easier to get AI guidance on whatever is in front of you, physical or digital. The feature works on any platform where the Gemini app is installed.
Google Search is also evolving with a full rollout of “AI Mode,” a reimagined search experience that acts more like a conversational assistant than a query box. You can now layer multiple questions into a single prompt, with Gemini parsing the request into detailed responses supported by citations and images. A new “Deep Search” feature promises even more comprehensive results, using multiple layered queries to build a fuller, more research-like answer. Google emphasized that screen sharing with Gemini will also become part of this AI-first search mode.
Perhaps more significantly, Google previewed a real-world assistant-style upgrade to Search through what it’s calling “Agent Mode.” This mode, built on Project Mariner, allows Gemini to complete multi-step tasks on your behalf—like locating and booking an apartment tour based on specific criteria or finding event tickets and filling out forms. How well it performs in the real world remains to be seen, but the demo highlighted Google’s ambition to automate tedious online workflows.
In Gmail and Google Meet, Workspace users—particularly those on paid plans—will soon see deeper Gemini integrations. Gmail will introduce personalized smart replies that reflect a user’s communication style and history across Google services. Meanwhile, Meet will offer live voice translations, with real-time AI dubbing between languages during video calls, starting today for eligible users.
For online shopping, Google is launching a “try it on” feature that uses AI to simulate how clothing would look on the user. This feature, aimed at reducing returns, goes live today for Search Labs users.
Google also confirmed it’s working on Android XR, its extended reality platform for AR glasses and headsets. While much of the news was expected, we saw a demo of a heads-up display capable of showing maps, messages, and live translation in real time. The company is partnering with Warby Parker and Gentle Monster to develop wearable hardware, though timelines remain vague.
On the creative AI front, Google introduced Imagen 4, its latest image generation model, with improved text rendering and overall visual fidelity. It also launched Veo 3, its newest video generation model, which powers "Flow," a new AI-driven video editing tool. Flow enables creators to generate, edit, and animate video sequences using natural language and visual prompts. While visually impressive, the practical use cases for consumer video remain unclear beyond rapid prototyping or concept boards.
For Chrome, two AI-focused features are on the way: Gemini integration built directly into the browser and an automatic password updater, though the latter depends on participating websites.
Finally, Google announced a new tiered AI subscription model. The existing $20/month AI Pro plan is being rebranded but remains largely unchanged. A new "AI Ultra" plan, however, will cost $250/month and bundle access to all of Google's advanced AI tools, including Gemini 2.5 Pro Deep Think, Veo 3, Flow, Project Mariner, and NotebookLM, along with YouTube Premium and 30TB of cloud storage.
While much of what Google showcased remains in rollout phases or behind subscription tiers, the message is clear: AI will be deeply embedded in Google’s services going forward—across search, email, productivity tools, and even how users interact with websites and digital content.
