Hackers Are Now Using AI to Build Zero-Day Exploits — What That Means for You
Something significant happened this week in the world of cybersecurity, and it deserves your full attention — even if you're not a tech person. For the first time ever, Google Threat Intelligence Group (GTIG) confirmed that cybercriminals used an artificial intelligence system to discover and build a working zero-day exploit. They were planning to use it in a mass attack that would have let them bypass two-factor authentication on a widely used online tool.
Google caught it before the attack could be launched. But the fact that it happened at all marks a serious turning point — and it changes the conversation around how we all need to think about online security.
What Exactly Happened?
According to GTIG's report, a group of cybercrime threat actors was planning a "mass vulnerability exploitation operation" targeting a popular open-source, web-based system administration tool. The exploit they built — written as a Python script — would have allowed an attacker with valid login credentials to completely bypass two-factor authentication (2FA), the extra security step most of us rely on when logging into important accounts.
What made this discovery especially alarming was how the exploit was built. Google's researchers found telltale signs that an AI — specifically a large language model (LLM) — wrote significant portions of the code. How could they tell? The script contained what researchers described as "educational docstrings," a "hallucinated CVSS score" (essentially a fake security rating the AI invented), and the kind of clean, structured, textbook-style Python formatting that is highly characteristic of LLM-generated code. GTIG noted that it does not believe Google's own Gemini AI was involved.
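To make those telltale signs a little more concrete, here's what that style tends to look like. This is an invented snippet, not the actual exploit, and the CVE and CVSS values in it are deliberately fake:

```python
# Hypothetical illustration only -- not the actual exploit. The CVE and
# CVSS values below are invented to show what a "hallucinated" rating
# looks like when a model makes one up.
def validate_session_token(token: str) -> bool:
    """
    Validate a session token before granting access.

    Educational note: tokens should always be checked server-side,
    because client-side checks can be bypassed.

    Severity: CVE-2026-00000, CVSS 9.8 (Critical)  <- a plausible-looking
    score the model simply invented is one of the telltale signs.
    """
    # Textbook-clean structure and tutorial-style comments like these
    # are another clue researchers look for.
    if not token:
        return False
    return len(token) == 64 and token.isalnum()
```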
The underlying flaw itself was a "semantic logic error" — a developer had hardcoded a trust assumption that contradicted how the app was supposed to enforce authentication. It's exactly the kind of subtle, high-level logic bug that LLMs excel at spotting, because they can analyze large volumes of code and documentation far faster than a human researcher can.
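Here's a rough sketch of what a hardcoded trust assumption can look like in practice. Everything in it is invented for illustration; the real flaw and the affected tool's code are different:

```python
# Invented example of a "semantic logic error" -- not the real flaw.
# All names and values here are hypothetical stand-ins.

TRUSTED_SOURCES = {"internal"}  # the hardcoded trust assumption

def verify_password(user: str, password: str) -> bool:
    return password == "example-password"   # stand-in first factor

def verify_otp(user: str, otp_code) -> bool:
    return otp_code == "123456"             # stand-in second factor

def login(user: str, password: str, otp_code, source: str) -> bool:
    if not verify_password(user, password):
        return False
    # BUG: requests labeled "internal" skip the 2FA check entirely,
    # contradicting the app's policy that 2FA is always enforced.
    # Anyone with valid credentials who can reach (or spoof) the
    # trusted path never has to present a second factor.
    if source in TRUSTED_SOURCES:
        return True
    return verify_otp(user, otp_code)

# Valid password, no OTP, "internal" source -- and the login succeeds:
print(login("alice", "example-password", None, "internal"))  # True
```

The code looks reasonable line by line; the bug only appears when you compare what it does against what the application promises. That's why this class of flaw is hard for humans to catch and well suited to a model that can hold the whole codebase and its documentation in view at once.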
Google worked with the affected vendor to patch the vulnerability before it could be exploited.
Why This Is a Big Deal
To understand why this matters, it helps to know what a "zero-day" is. A zero-day vulnerability is a security flaw that's unknown to the software maker — meaning there's no patch, no fix, and no warning. Attackers who find one have a wide-open door.
Historically, discovering zero-days required serious technical expertise. It was time-consuming, expensive work largely associated with nation-state hacking groups and well-funded criminal organizations. AI is changing that equation.
Ryan Dewhurst, watchTowr's Head of Threat Intelligence, put it plainly: "AI is already accelerating vulnerability discovery, reducing the effort needed to identify, validate, and weaponize flaws. Discovery, weaponization, and exploitation are faster. We're not heading toward compressed timelines; we've been watching the timelines compress for years."
Security Affairs notes that attackers now start scanning the internet for vulnerable systems within hours or days of a security flaw becoming public, leaving defenders very little time to patch before exploitation begins.
AI Is Showing Up Across the Attack Lifecycle
The zero-day discovery isn't an isolated case. Google's broader report paints a picture of AI being woven throughout the entire attack process.
Malware that thinks for itself. An Android backdoor called PROMPTSPY integrates directly with Google's Gemini API to autonomously navigate a victim's phone — simulating taps, swipes, and gestures without any human attacker at the keyboard. It can capture your PIN or lock screen pattern and replay it to regain access to a locked device. It even renders an invisible overlay over the "Uninstall" button to silently block you from removing it.
AI-generated decoy code. Russia-nexus actors deployed two malware families, CANFAIL and LONGSTREAM, which use LLM-generated filler code to disguise their malicious functions. One contained 32 separate instances of irrelevant code querying daylight saving time — purely to make the script look harmless to security analysts.
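To picture what that filler looks like, here's an invented fragment in the same spirit (the actual CANFAIL and LONGSTREAM source isn't public; this is a stand-in):

```python
# Hypothetical filler in the spirit GTIG describes: harmless utility
# code that exists only to make a malicious script look routine.
import time

def is_dst_active() -> bool:
    """Return True if local daylight saving time is currently in effect."""
    return time.localtime().tm_isdst > 0

# Picture dozens of near-identical, do-nothing calls like this
# sprinkled between the script's real (malicious) functions:
for _ in range(3):
    _ = is_dst_active()
```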
Nation-state actors going all-in. GTIG's report details how North Korea's APT45 sent thousands of repetitive prompts to analyze CVEs and validate exploits, how China-linked APT27 used Gemini to build network management tools for routing malicious traffic, and how a China-nexus group used "persona-driven jailbreaking" — essentially telling an AI to pretend it's a security expert — to extract vulnerability research.
Hackers targeting AI itself. Perhaps most unsettling: Security Affairs reports that attackers are now targeting the AI supply chain — exposed API keys, insecure integrations, and third-party AI tools — as an entry point into company networks. A cybercrime group called TeamPCP compromised GitHub repositories tied to the LiteLLM AI gateway library, embedding a credential stealer that harvested AWS keys and GitHub tokens later used in ransomware attacks.
What This Means for Everyday Users and Small Businesses
Here's the honest reality: most successful attacks still come from the same old basics — unpatched software, weak passwords, misconfigured settings, and people clicking the wrong link. AI doesn't change that. What it does is make attacks faster, more scalable, and accessible to less-skilled criminals.
Google's own report is clear that "many successful breaches still originate from common security failures such as misconfigurations, exposed services, weak access controls, and poor patch management." That's actually good news — because those are problems you can address today.
Here are the practical steps worth taking right now:
1. Keep everything updated. With attackers now scanning for vulnerable systems within hours of a flaw going public, the window between "patch released" and "mass exploitation" is shrinking fast. Enable automatic updates on your operating system, browser, and apps. Don't let prompts pile up.
2. Don't rely on 2FA alone. This particular exploit was designed to bypass 2FA. That doesn't mean 2FA is useless — it's still important — but it means you shouldn't treat it as an impenetrable shield. Strong, unique passwords combined with 2FA and staying alert to phishing are all part of the picture.
3. Audit what's connected to your network. For Yuba City small businesses especially, third-party tools, integrations, and cloud services all represent potential entry points. Review what software has access to your accounts and remove anything you don't actively use.
4. Be suspicious of unsolicited contact. AI is making phishing emails and social engineering more convincing. If something feels off — even if it looks professional — verify through a separate channel before clicking or providing any information.
5. Watch your AI tools. If your business uses AI services or API integrations, treat your API keys like passwords. Rotate them regularly and never leave them exposed in public code repositories; one simple way to keep keys out of your code is sketched below.
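If you or your developer write scripts that call AI services, one habit closes the most common leak: load keys from the environment instead of pasting them into code. A minimal Python sketch, with a placeholder variable name:

```python
# A minimal sketch of keeping an API key out of your code. The variable
# name MY_SERVICE_API_KEY is a placeholder -- use whatever your tool expects.
import os

def get_api_key() -> str:
    key = os.environ.get("MY_SERVICE_API_KEY")
    if not key:
        raise RuntimeError(
            "MY_SERVICE_API_KEY is not set. Configure it in your deployment "
            "environment or a secrets manager, never in the repository."
        )
    return key
```

GitHub also offers secret scanning that can flag keys that slip into public repositories, which is a useful backstop on top of this habit.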
If you're unsure whether your systems are properly patched or have vulnerabilities worth worrying about, we're happy to take a look. Our diagnostic services exist exactly for situations like this, and our membership plan includes ongoing vulnerability monitoring so you're not relying on luck.
The Bottom Line
What Google found this week is a milestone, not a surprise. Security researchers have been warning for years that AI would eventually lower the barrier to sophisticated cyberattacks. Now that moment has arrived. The good news is that the fundamentals of staying protected haven't changed — update your software, use strong authentication, and don't leave doors open. The bad news is that the cost of ignoring those fundamentals just went up.
AI in the hands of defenders is also improving — but as GTIG's John Hultquist noted, attackers are already looking for approaches that work at scale. When they find one, they'll lean into it hard. The best time to shore up your defenses is before that happens.