Google appears to be expanding its ambitions in generative AI content creation with a new feature called “Sparks,” currently being tested within its experimental Illuminate platform. While Illuminate initially launched as a tool for converting dense research papers into AI-generated audio discussions, it has quietly evolved into a broader media generator — and Sparks represents its most visually ambitious feature yet.
Sparks generates short, vertical, TikTok-style explainer videos entirely through AI. These videos are typically one to three minutes long and are created from a single prompt. The AI not only produces the voiceover and script but also generates synchronized visuals — suggesting a multimodal model working behind the scenes to unify text, audio, and video into a single polished output.
Although the feature is not yet publicly available, sample Sparks videos have surfaced via TestingCatalog, offering a glimpse into how Google may be thinking about short-form content generation. These clips indicate that Google is experimenting with how to make knowledge more accessible — and more engaging — for users accustomed to scrolling through video-first platforms like TikTok, Instagram Reels, and YouTube Shorts.

TestingCatalog also discovered additional experimental features within Illuminate’s recent updates, including editable AI summaries of classic literature and image generation tools for visual assets like cover art. The platform, which currently allows users to create up to 20 audio summaries per day, appears to be shifting toward a more multimedia experience that integrates video alongside audio and text.
Sparks may also be tied to broader developments across Google’s AI ecosystem. The quality and fluidity of the videos have led to speculation that the Sparks feature could be leveraging either Google’s Gemini multimodal models or its Veo 3 video generation system, both of which have been in active development. Moreover, Sparks shares functional overlap with another Google initiative, NotebookLM, which is set to include AI-generated video explanations hosted by virtual presenters. Given the structural similarities between the two tools, it’s possible that both projects are drawing from a shared backend or converging roadmap.
The rapid rollout of experimental features like Sparks raises a broader question: how many generative AI products is Google quietly building behind the scenes? While Illuminate began as a niche tool for academic audio, it is quickly morphing into a creative platform capable of generating media that could rival other short-form content ecosystems. It also hints at Google's wider interest in transforming traditional information retrieval — long dominated by text-heavy search — into something more dynamic, visual, and conversational.
There's no confirmed public release date for Sparks, and the tool appears to remain internal for now. But if the direction of testing is any indication, Google is exploring ways to meet younger, mobile-native users where they already are: in vertical video feeds, scrolling through bite-sized knowledge packaged with the polish of professional content creators.
As AI-generated media becomes more capable and more widely distributed, Sparks could become a foundational part of how Google redefines not just content consumption, but content creation at scale. Whether it’s simplifying complex texts, illustrating science concepts, or summarizing documents in video form, Google seems to be betting that the future of AI isn’t just intelligent — it’s visual.
