Hackers are using Gemini to accelerate parts of their operations, according to a new report from Google’s Threat Intelligence Group. The company says threat actors, including state-linked clusters associated with China, Iran, North Korea, and Russia, have incorporated Gemini into multiple stages of cyber activity, from reconnaissance and phishing to coding support and post-breach troubleshooting.
The findings point to a shift in workflow rather than a breakthrough in capability. According to Google, attackers are using Gemini to speed up routine but time-consuming tasks: profiling targets, drafting social engineering messages, translating content into other languages, testing vulnerabilities, and debugging malware when tools fail mid-intrusion. None of these tactics are new, but faster iteration can compress the timeline between initial probing and operational impact.
The report describes examples of China-linked actors prompting Gemini to assist with vulnerability analysis and structured test plans in simulated scenarios. In other cases, operators reportedly used the model for debugging code and refining technical approaches tied to ongoing intrusions. Google characterizes this as acceleration rather than a fundamental change in attacker methodology. The core playbook remains the same; the tools simply reduce friction.
For defenders, the concern is tempo. When reconnaissance, lure drafting, and tool refinement can be handled more quickly, security teams may have less time to detect early warning signs. Faster adjustments also mean fewer repeated errors that might otherwise surface in logs or anomaly detection systems. AI does not eliminate operational mistakes, but it can shorten the feedback loop.
Google’s report also highlights a separate issue: model extraction and knowledge distillation. In these cases, actors with legitimate API access attempt to replicate aspects of Gemini’s behavior by submitting large volumes of prompts and analyzing the outputs. The goal is to train competing systems or reproduce the model’s reasoning patterns. Google frames this as primarily a commercial and intellectual property risk, though the scale of such activity could present broader security concerns. One cited example involved approximately 100,000 prompts aimed at replicating performance in non-English tasks.
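To make the mechanism concrete, the sketch below shows what a distillation-style collection loop generically looks like: prompts go to a hosted "teacher" model, and the prompt/response pairs are saved in a fine-tuning format for a smaller "student" model. This is a minimal illustration, not code from Google's report; `query_teacher_model` is a hypothetical placeholder rather than any real provider's API, and the same pattern underlies legitimate fine-tuning workflows, which is why scale and intent, not technique, are what distinguish abuse.

```python
import json


def query_teacher_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to a hosted model's API.
    In a real pipeline this would be an HTTP request to the provider
    that returns the model's text response."""
    raise NotImplementedError


def collect_distillation_pairs(prompts, out_path="teacher_outputs.jsonl"):
    """Query the teacher model for each prompt and write prompt/response
    pairs as JSONL, a format commonly used to fine-tune a student model.
    Run at the scale described in the report (on the order of 100,000
    prompts), this kind of loop leaves a visible footprint in API logs."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_teacher_model(prompt)
            f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")
```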
In response, Google says it has disabled accounts and infrastructure associated with documented abuse and introduced targeted defenses within Gemini’s classifiers. The company also states that it continues to refine safety guardrails and stress-test the system against misuse scenarios.
For enterprise security teams, the practical takeaway is not that AI-powered attacks are inherently more sophisticated, but that they may move faster. Monitoring for sudden improvements in phishing quality, rapid malware iteration, or unusual API consumption patterns may help identify AI-assisted workflows. As generative models become more accessible, the challenge for defenders will likely center on response speed and resilience rather than on any single new technique.
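As one concrete illustration of that last point, the sketch below shows a naive way to surface unusual API consumption: comparing each key's daily request volume against its historical baseline. The log format, function name, and threshold are assumptions made for the example, not details from Google's report or any specific product API; real monitoring would layer in rate limiting, content classifiers, and behavioral signals.

```python
from collections import Counter


def flag_unusual_api_consumption(request_log, baseline_daily_requests, threshold=10.0):
    """Flag API keys whose daily request volume far exceeds their historical
    baseline -- one simple signal of extraction-style or heavily automated use.

    request_log: iterable of (api_key, ...) records for one day (assumed format).
    baseline_daily_requests: dict mapping api_key -> typical daily request count.
    threshold: multiple of the baseline above which a key is flagged.
    """
    counts = Counter(key for key, *_ in request_log)
    flagged = {}
    for key, count in counts.items():
        baseline = baseline_daily_requests.get(key, 1)
        if count > threshold * baseline:
            flagged[key] = count
    return flagged


# Example: a key that normally issues ~50 requests a day suddenly issues 8,000.
log = [("key-abc",)] * 8000 + [("key-xyz",)] * 40
print(flag_unusual_api_consumption(log, {"key-abc": 50, "key-xyz": 60}))
```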
