In a tantalizing preview ahead of its I/O developer conference, Google has piqued curiosity with a teaser video showcasing a prototype AI feature that uses the camera to identify objects and scenes in real time.
The video shows a Pixel device seemingly recognizing the Google I/O keynote stage and responding to voice queries about the event, offering details such as what the event is and how it relates to AI advancements. The feature resembles Google Lens, albeit with real-time interaction and voice commands reminiscent of Meta's multimodal AI in smart glasses.
The choice to showcase the demo on a Pixel device suggests that this feature could debut on Google’s flagship smartphone lineup first.
One more day until #GoogleIO! We’re feeling. See you tomorrow for the latest news about AI, Search and more. pic.twitter.com/QiS1G8GBf9
— Google (@Google) May 13, 2024
While the exact nature of the feature remains a mystery, Google’s timing in releasing this teaser is noteworthy. It coincides with OpenAI’s recent unveiling of similar capabilities in its GPT-4o model, potentially signaling a competitive response.
Full details of Google's latest AI innovation should arrive soon, with Google I/O set to kick off tomorrow. Stay tuned for live coverage and in-depth analysis from Engadget as the event unfolds.
