Google is preparing to make its Gemini assistant even more accessible by adding a new Tools button directly to the Gemini overlay, allowing users to access advanced AI features without leaving the app they’re currently using. The update, spotted in version 16.40.18 of the Google app, suggests a tighter integration of Gemini’s creative and research tools into Android’s multitasking experience.
Right now, users need to open the standalone Gemini app to use features like AI image generation, Veo video creation, Deep Research, or Canvas for visual brainstorming. The upcoming update would bring all of these directly into the overlay — the pop-up Gemini interface that can be triggered via a button or hotword on Android devices. This change effectively turns Gemini into a floating control center for on-demand AI tasks, mirroring how users might already summon it for quick prompts or contextual queries.
Once the new Tools icon appears in the overlay, tapping it will reveal the available options, and selecting one will show the corresponding tool’s icon inside the input box. From there, users can continue interacting with Gemini using either text or voice commands, without ever switching screens. It’s a small interface tweak, but one that could make Gemini’s broader AI ecosystem much more accessible in daily use — particularly for multitaskers who rely on it while navigating other apps.
In addition to the Tools button, code found in the same app version hints at further improvements. Google appears to be testing an integrated Circle to Select feature within the Gemini overlay, which would let users circle items or text on their screen to give the AI contextual information — similar to the Circle to Search gesture on Pixel and Samsung phones. This would allow Gemini to “see” what’s on screen and respond with relevant actions, such as identifying products, summarizing text, or generating visual content based on what the user highlights.
Other small refinements seem to be in development as well, including redesigned options for sharing and downloading generated images, which could make the process of exporting AI creations smoother and more intuitive.
Taken together, these updates point toward Google’s ongoing effort to weave Gemini deeper into the Android experience, blurring the line between an app and a system-level AI layer. If the Tools integration rolls out as expected, it could make Gemini feel more like a built-in assistant that supports creativity, productivity, and research in real time — all without requiring users to break focus or switch contexts.