For the first time, hackers used AI to build a working zero-day exploit -- and Google stopped it just before it could hit the internet at scale.
Here is the kind of story that makes you glad someone is watching the watchers. On May 11, 2026, Google's Threat Intelligence Group dropped a report that reads like the opening scene of a cyber-thriller: criminals had developed a real, working exploit using AI assistance. It was designed to bypass two-factor authentication on a widely used open-source web admin tool. And it was nearly ready for a mass exploitation event.
Google caught it in time. But the implications? Those are going to stick around.
What exactly happened?
According to GTIG, "prominent cyber crime threat actors" had built a zero-day -- a vulnerability no one else knew about yet -- for a web-based system administration tool used by countless organizations. The exploit was crafted to slip past 2FA, one of the most common security measures people rely on. In plain terms: even if you had two-factor authentication turned on, this attack could have walked right through it.
The really wild part? Google says the exploit itself carries fingerprints of AI generation.
How Google knew it was AI
Researchers spotted telltale signs baked into the Python script. There was a "hallucinated CVSS score" -- a severity rating that followed the patterns of an LLM's training data rather than any real-world scoring of the flaw. The code formatting was described as "structured, textbook" -- clean, almost suspiciously clean, the way a language model writes when it is trying to be helpful.
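GTIG did not publish its detection logic, but one of the signals it describes -- a CVSS score that does not hold up to scrutiny -- is mechanically checkable. Here is a toy sketch (my own illustration, not Google's method) of how a scanner might flag a "hallucinated" CVSS claim in a script's comments: a vector string that does not match the CVSS v3.1 grammar, or a numeric score outside the defined 0.0-10.0 range.

```python
import re

# Grammar of a well-formed CVSS v3.1 base vector (metrics in canonical order).
CVSS31_VECTOR = re.compile(
    r"CVSS:3\.1/AV:[NALP]/AC:[LH]/PR:[NLH]/UI:[NR]/S:[UC]/C:[NLH]/I:[NLH]/A:[NLH]"
)
SCORE = re.compile(r"\b(\d{1,2}\.\d)\b")

def suspicious_cvss(comment: str) -> bool:
    """Return True when a comment claims a CVSS rating that does not hold up:
    a malformed vector string, or a numeric score outside 0.0-10.0."""
    if "CVSS" not in comment.upper():
        return False                      # nothing claimed, nothing to check
    if "CVSS:3.1/" in comment and not CVSS31_VECTOR.search(comment):
        return True                       # vector present but malformed
    for s in SCORE.findall(comment):
        if not 0.0 <= float(s) <= 10.0:
            return True                   # score outside the defined range
    return False
```

A real fingerprinting pipeline would go much further (cross-checking the score against the vector's computed value, for instance), but even this crude consistency check catches the kind of confident-but-wrong metadata LLMs tend to emit.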
The vulnerability itself was a "high-level semantic logic flaw": the developer had hardcoded a trust assumption into the 2FA system, and the AI-assisted exploit found it. Google did not name the tool or the criminals, but they were clear on one point -- they "do not believe Gemini was used." Whatever model the attackers tapped, it was not Google's own.
The bigger picture: AI vs. AI
This is not just another hacking story. It is the moment AI stopped being only a defender and started showing up on both sides of the battlefield.
For weeks, cybersecurity circles have been buzzing about models like Anthropic's Mythos and AI-assisted vulnerability discovery. Just recently, researchers found a Linux kernel vulnerability with AI help. Now we have the flip side: criminals using AI to weaponize unknown flaws.
Google's report also notes attackers are getting creative with "persona-driven jailbreaking" -- tricking AI models into security research mode by asking them to roleplay as experts. They are feeding entire vulnerability databases into models and refining payloads in controlled settings before unleashing them. It is methodical. It is industrial. And it is only going to get more common.
What this means for everyone else
If you run a website, a server, or anything with a login screen, here is the uncomfortable truth: the gap between "vulnerability discovered" and "vulnerability exploited" is shrinking. AI can find bugs faster. AI can write exploits faster. And now we know AI is doing both in the wild, not just in research labs.
Google says it was able to "disrupt" this particular attack. But the report is explicit: hackers are "increasingly using AI to find and take advantage of security vulnerabilities." GTIG is also seeing adversaries target the components that make AI systems useful -- third-party data connectors, autonomous skills, the plumbing behind the magic.
The AI arms race is real
There is a strange symmetry here. Security researchers use AI to find vulnerabilities faster. Criminals use AI to exploit them faster. Both sides are accelerating. The only question is who gets there first.
Google's catch is a win for the good guys, but it is also a warning shot. The first AI-built zero-day is a milestone. It will not be the last. And next time, the defenders might not be watching closely enough.
So maybe take a minute to update that server software. Turn on auto-updates. Double-check your 2FA setup. The robots are coming for your login credentials -- and some of them are not on your side.