What changed when criminals got access to generative AI?

Generative AI lowered the technical bar for fraud. Criminals who once struggled with language can now produce polished, targeted attacks at industrial scale.

When OpenAI released ChatGPT in late 2022, most people thought about what it meant for writing essays or answering questions. Criminals saw something else. According to Rhiannon Williams, writing for MIT Technology Review on April 21, 2026, cybercriminals quickly began using large language models to produce malicious emails — both the untargeted spam kind and more sophisticated, targeted attacks designed to steal money and sensitive information.

Adoption was faster than most security researchers expected. Research from Columbia University, cited in the MIT Technology Review piece, documents AI use in malicious email campaigns as early as late 2023. By 2025, the toolkit had expanded well beyond email. Criminals were using AI to compose phishing messages, generate hyperrealistic deepfake video and audio clips, tweak malware so it evades detection, automate vulnerability scans across networks, quickly produce ransom notes, and analyze stolen data to identify what is most valuable and sellable.

What AI didn't do — at least not clearly — is make hacking itself fundamentally easier. Breaking into hardened systems still requires real expertise. But that misses the point. Most fraud doesn't require breaking into hardened systems. It requires convincing a person to hand over access, credentials or money. That's where AI has done the most damage.

Why don't bad grammar and weird formatting protect you anymore?

The "bad English" filter no longer works. AI produces grammatically fluent, contextually appropriate messages in any language at essentially zero marginal cost per message.

For years, the practical advice for spotting phishing was simple: look for bad grammar, unusual formatting, odd sender addresses and generic greetings. These were reliable signals because most phishing came from non-native English speakers working quickly and cheaply. The cognitive load of writing convincing, personalized English text was a natural bottleneck — it limited how many tailored attacks a single operator could produce.

Generative AI eliminates that bottleneck entirely. An AI model can produce a grammatically perfect, context-appropriate email in any language, at any formality level, in a fraction of a second. The marginal cost of personalizing an attack — using your name, referencing a recent interaction, matching the tone of your company's internal communications — approaches zero when AI is doing the writing.

This matters because personalization is what makes phishing work. Research consistently shows that people are more likely to click on emails that seem to know them — that use their real name, reference their actual employer or describe a situation that seems plausible. AI makes high-quality personalization cheap enough to use on millions of targets simultaneously.

The European law enforcement agency Europol has documented in its 2025 SOCTA report that AI tools are enabling criminals to automate the search for vulnerabilities in networks and computer systems. The same models that help developers find bugs in code can help attackers find bugs in systems they want to breach.

What do AI-powered scams actually look like in practice?

The most dangerous current forms include AI-voiced phone scams, deepfake video calls impersonating executives and spear-phishing campaigns customized from scraped social media data.

Consider what a modern AI-assisted fraud operation looks like. A criminal scrapes your LinkedIn profile, recent tweets or posts and any public information about your employer. An AI model builds a profile of your role, your reporting relationships and the kind of requests you'd plausibly receive. It then generates a phishing email that appears to come from your CEO — using language consistent with how your executive team communicates publicly — asking you to process an urgent wire transfer. The email lands at 5 PM on a Friday, when approval chains are harder to reach.

Voice cloning attacks follow a similar pattern. MIT Technology Review has documented cases where AI-generated audio mimicking a family member's voice has been used in "grandparent scams" — calling elderly victims and claiming to be a grandchild in urgent trouble who needs money immediately. The voice quality is now good enough that recipients who have heard their family member's voice thousands of times cannot reliably detect that it's synthetic.

At scale, the Southeast Asian scam center model documented by Interpol (reported by Bloomberg, February 2026) shows how AI is being used organizationally, not just technically. These centers use AI tools to quickly target greater numbers of potential victims and to switch to new locations and tactics faster when law enforcement applies pressure. AI becomes an operational agility tool, not just a writing assistant.

Scam attacks don't need to be particularly sophisticated to succeed. They need to be lucky — to reach an undefended machine or an unsuspecting victim at the right moment. AI allows operators to send far more attacks, which increases the number of lucky hits without requiring any individual attack to be especially clever.

AI-Powered Fraud Types and Key Characteristics — 2026

| Attack Type | AI's Role | Primary Target | Detection Difficulty |
| --- | --- | --- | --- |
| Spear-phishing email | Personalized text generation from scraped data | Employees, consumers | High: grammatically correct, contextually appropriate |
| Voice cloning call | Real-time voice synthesis mimicking known individuals | Elderly individuals, finance staff | Very high: indistinguishable from a real voice in short clips |
| Deepfake video | Face and voice synthesis for impersonating executives | Corporate wire transfers, verification systems | High: improving rapidly; currently best spotted at high resolution |
| Malware obfuscation | Rewriting malicious code to evade antivirus signatures | Enterprise networks, government systems | Medium-high: requires behavior-based detection |
| Automated vulnerability scanning | AI-driven network reconnaissance and exploit identification | Any internet-facing system | Medium: detectable via anomaly monitoring |

What actually stops AI-powered fraud in 2026?

Channel verification is the most reliable human defense. If a message asks you to act, confirm through a separate, known-good channel before doing anything.

The security community has a term for the shift AI has caused: the trust model has flipped. Previously, the default was to trust a communication unless it showed obvious signs of fraud. Now, any unexpected request to transfer money, provide credentials or take urgent action should be treated as suspect until verified — regardless of how legitimate it looks.

Channel verification is the practical implementation of that mindset change. If you receive an email from your bank, call the number on the back of your card, not a number in the email. If your CEO sends an urgent Slack message asking for a wire transfer, call their phone directly using the number in your company directory. If a family member calls saying they're in trouble and need money, call another family member to verify. The verify-through-a-separate-channel rule is simple, low-friction enough to use routinely, and it neutralizes most AI-generated fraud, because the fraud exists only within the compromised channel.
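The rule above can be stated as a simple policy, which is how an organization might encode it in a payments or helpdesk workflow. This is an illustrative sketch, not a real API: the action names, the `KNOWN_GOOD_CHANNELS` directory and the contact details are all hypothetical.

```python
# Sketch of the channel-verification rule: any request for money, credentials
# or urgent action must be confirmed on a separate, pre-registered channel
# before anyone acts on it. All names and contacts here are illustrative.

SENSITIVE_ACTIONS = {"wire_transfer", "credential_change", "gift_card_purchase"}

# Pre-registered, independently verified contact points: a company directory
# entry or the number printed on a bank card, never one taken from the message.
KNOWN_GOOD_CHANNELS = {
    "ceo@example.com": ("phone", "+1-555-0100"),   # from the company directory
    "bank": ("phone", "number printed on the card"),
}

def requires_out_of_band_check(action: str) -> bool:
    """Sensitive requests are never verified on the channel they arrived on."""
    return action in SENSITIVE_ACTIONS

def verification_channel(sender: str):
    """Return the pre-registered channel for a sender, or None if unknown."""
    return KNOWN_GOOD_CHANNELS.get(sender)

# An "urgent wire transfer" email from the CEO's address: verify by phone,
# using the directory number, before acting.
assert requires_out_of_band_check("wire_transfer")
assert verification_channel("ceo@example.com") == ("phone", "+1-555-0100")
assert verification_channel("unknown@example.com") is None
```

The key design point is that the lookup table is populated in advance from trusted sources; a scammer who controls the message cannot inject a callback number into it.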

For organizations, cybersecurity researchers cited in the MIT Technology Review article by Rhiannon Williams suggest that basic defenses — keeping software updated, following network security protocols — remain effective against the majority of AI-assisted attacks that are currently most common. The attacks that are cheapest to generate are also the sloppiest, and standard endpoint protection catches most of them. The sophisticated, targeted attacks are harder to stop, and the defenses against them are less clear.

AI is also being deployed for defense, which is the most encouraging development in the threat landscape. Microsoft processes more than 100 trillion signals per day flagged by its AI systems as potentially malicious or suspicious. Between April 2024 and April 2025, Microsoft says it blocked $4 billion worth of scams and fraudulent transactions — many of which may have been AI-assisted on the attacker side. The technology enabling the attacks is also the technology best positioned to detect them at scale.

Anthropic's Glasswing project, reported earlier this month, illustrates how advanced AI is being applied to the defensive side of cybersecurity. Anthropic's Mythos model found thousands of critical vulnerabilities across every major operating system and web browser. All of them have been patched. Anthropic is delaying the model's public release because of these capabilities and has set up a consortium of tech companies to apply them defensively first. For more context on that development, see our earlier article on Anthropic Mythos and the Firefox zero-day disclosures.

What should you do differently starting today?

Update your fraud threat model: treat any unexpected urgent request as suspect, verify through a separate channel, and keep software current. Urgency itself is a red flag.

The practical checklist for individuals is short:

Treat urgency as a red flag, not a reason to act. Most AI-powered fraud is designed around urgency, because urgency short-circuits your verification instinct. If a message says "act now," "this expires in 10 minutes," or "I need this immediately" — that's the moment to slow down, not speed up.

Never use a phone number or link provided in a suspicious message to verify the message. A scammer controls both the message and the number they include in it. Use a number from a card, a company directory or a source you've independently verified.

Update software regularly. The Europol SOCTA 2025 report confirms that AI-automated vulnerability scanning is real and increasing. Unpatched software is the entry point most of these automated attacks exploit. Keeping systems current removes the majority of the attack surface they target.

Be skeptical of deepfakes in high-stakes situations. Current deepfake video can be detected at high resolution — look for unnatural blinking, hair edges or audio sync issues. But the quality is improving fast. For any situation involving a significant financial transaction or credential handover, require a live in-person interaction or a pre-arranged verification code that the real person would know.

For IT and security teams, the MIT Technology Review article points to software updates and network security protocols as the baseline that stops the most common current attacks. AI-powered security monitoring — the kind Microsoft, Google and others operate at scale — is increasingly what catches the sophisticated attacks that slip through. If your organization doesn't have behavior-based endpoint detection active, that's the highest-priority gap to close. For a look at how AI is being used in broader defensive security contexts, see our piece on AI safety filters and the risks that survive model distillation.
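The difference between signature matching and the behavior-based detection mentioned above can be shown in a few lines. Real endpoint products model many signals at once; this sketch, with hypothetical numbers, flags a single activity rate (daily outbound emails per account) that sits far outside its historical baseline, which is how a compromised account sending AI-generated spear-phishing would surface even though every individual message looks clean.

```python
# Minimal behavior-based (anomaly) detection: build a per-account baseline of
# an activity rate, then flag observations far outside it via a z-score.
# The history values below are hypothetical.
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float, threshold: float = 3.0) -> bool:
    """Flag `observed` if it sits more than `threshold` std devs above baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return (observed - mu) / sigma > threshold

# An account that normally sends ~20 emails/day suddenly sends 400, e.g. after
# being taken over for an AI-generated spear-phishing run:
history = [18, 22, 19, 21, 20, 23, 17]
assert is_anomalous(history, 400)        # flagged
assert not is_anomalous(history, 24)     # normal day-to-day variation
```

The point of the sketch is the shift in question: not "does this content match a known-bad pattern?" but "is this behavior normal for this account?", which still works when AI makes the content itself indistinguishable from legitimate traffic.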

Analysis: the threat model has flipped — and most people haven't updated theirs

The core problem is that fraud detection is a pattern-matching exercise and AI has invalidated the patterns most people use. "Does this look legitimate?" used to be a reasonable first-pass filter. It no longer is. AI can make anything look legitimate — the email, the voice, the video, the interface.

The mental model that works now is closer to: "Has this communication arrived through a channel I trust AND can I verify the request through a second independent channel?" Both conditions need to be true before you take action. That's more friction than most people are used to building into routine decisions, which is why adoption of this mindset is slow even among people who understand the threat.

The more troubling dynamic is what happens as AI capabilities improve further. Current voice clones are very good but not perfect. Current deepfakes are convincing but detectable at high resolution. Current phishing is fluent but still pattern-identifiable with behavioral context. The question is whether human detection ability can keep pace with AI synthesis quality or whether we reach a point where the only reliable defense is institutional — meaning AI-powered detection systems on the receiving end that flag anomalies before they reach a human decision point. That transition, if it comes, will represent a fundamental change in how trust is established in digital communication. This is our forward-looking analysis based on the current threat trajectory; the actual pace of capability development is uncertain.
