For Households & Businesses

Neo-Phishing: The AI-Powered Scams Now Targeting Households and Businesses

Published May 6, 2026 · 10 min read · By the ScamDrill Team
Editorial illustration of a familiar contact on a phone screen whose face fragments into AI-generated patterns and circuit lines, representing a deepfake or AI-cloned impostor

The phishing email you ignored ten years ago was easy to spot. Misspelled words. A Nigerian prince. A logo lifted from a Google image search. The worst attempts barely cleared a spam filter. The version landing in inboxes (and group chats, and voicemails, and Zoom calls) in 2026 doesn’t look anything like that. It uses your boss’s voice. It cites a meeting you actually had on Tuesday. It comes from a domain that looks identical to your bank’s. And in many cases, it was generated by an AI in five minutes for less than a dollar.

That’s the working definition of neo-phishing: phishing that combines AI, social engineering at scale, and live data scraping to impersonate institutions, colleagues, and family members so convincingly that traditional advice — look for typos, hover over the link — no longer reliably catches it. The umbrella covers AI-written email phishing, voice clones (vishing), deepfake video calls, and the hybrid attacks that stitch all three together.

There’s a slangier handle for it picking up online: phishmaxxing. It’s a riff on the Gen Z “looksmaxxing” trend — the maximalist self-optimization aesthetic that mainstreamed off TikTok in 2023, where every variable on your appearance gets pushed to its current ceiling. Same logic, just applied to a phishing attack instead of a jawline. Personalization: maxxed. Voice fidelity: maxxed. Sender-domain spoof: maxxed. Whatever you call it, the upshot is identical — an attack so personalized, fluent, and audiovisually convincing that the old rules stop catching it.

86%: the share of phishing campaigns that involved AI tooling by April 2026, up from roughly 30% just two years earlier. Phishing has also retaken the No. 1 spot among initial-access methods in corporate breaches.
Source: KnowBe4 phishing benchmark, April 2026; Hoxhunt 2026 Phishing Trends Report

The numbers we’re working with

Americans lost a record $20.9 billion to cyber-enabled crime in 2025, according to the FBI’s 2025 Internet Crime Report. That’s a 26% jump in one year. For the first time, the bureau broke out an “artificial intelligence” category: 22,364 complaints with $893 million in losses. Phishing-specific complaints actually dropped slightly, but reported phishing losses tripled, climbing from roughly $70 million in 2024 to about $216 million. Fewer hits, more money per hit. That’s the signature of a more targeted, more believable attack.

The corporate side of the ledger is uglier. According to Group-IB’s 2026 Phishing Email Security Playbook, AI-enabled phishing campaigns now reach a 54% click-through rate — versus the 3–5% you’d see from a generic spam blast. IBM X-Force research summarized in Hoxhunt’s 2026 trends report found that an AI can produce a high-quality spear-phishing email in five minutes; the same email took an experienced human red teamer about sixteen hours. By March 2025, AI-generated phishing was already 24% more effective at fooling employees than the best human-written campaigns.

That’s what we mean by neo-phishing. Not a single new technique — a whole tier of new ones, all of them cheaper, faster, and more personal than what came before.

What makes it “neo”

Three changes separate today’s attacks from the phishing of even three years ago.

1. Hyper-personalization at scale. Open-source intelligence (OSINT) used to be the slow part of a targeted attack. Now an LLM can scrape LinkedIn, press releases, and recent X or Instagram posts and assemble a personalized email — referencing your actual job title, recent travel, last week’s company announcement — in seconds. The cost of running a personalized campaign has fallen by roughly 95% since 2023, per Brightside AI’s 2025 spear-phishing analysis.

2. Synthetic voice and video. The current generation of voice-cloning tools needs about three seconds of audio to produce a usable replica, as the American Bar Association noted in its September 2025 brief. Three seconds is shorter than your outgoing voicemail greeting, shorter than a TikTok comment, shorter than the “hello” on a podcast appearance. Video deepfakes have followed the same curve: more convincing, faster to produce, harder to spot in a webcam-quality call.

3. Polymorphic delivery. A SecurityWeek analysis describes how AI-generated campaigns now send slightly different versions of the same lure to every recipient. Subject line, sender domain, link redirect, attachment hash — each one varies just enough to slip past static signatures and secure email gateways. KnowBe4 measured a 47% rise in phishing emails evading Microsoft’s native filters and traditional SEGs in 2024 alone, and that figure has not improved since.
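Why does a one-character change defeat a static signature? A hash-based blocklist only matches exact copies, and any tiny mutation produces a completely different fingerprint. A toy sketch (the lure strings and the blocklist are illustrative, not from any real gateway):

```python
import hashlib

def sig(message: bytes) -> str:
    """Static 'signature' of the kind a naive blocklist might store."""
    return hashlib.sha256(message).hexdigest()

# Two near-identical lures: a single character differs in the link.
lure_a = b"Your invoice is overdue: https://pay-portal.example/inv?id=4821"
lure_b = b"Your invoice is overdue: https://pay-portal.example/inv?id=4822"

known_bad = {sig(lure_a)}          # the gateway has seen and blocked variant A

print(sig(lure_a) in known_bad)    # True: the exact copy is caught
print(sig(lure_b) in known_bad)    # False: the one-byte variant sails through
```

When every recipient gets a unique variant, every message is "variant B" from the filter's point of view, which is why detection has had to shift toward behavioral and content-level analysis.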

The Arup case: $25.6 million, one video call

The starkest documented example came out of Hong Kong. In early 2024 a finance employee at British engineering firm Arup attended what looked like a routine video call with the company’s UK-based CFO and several colleagues. The CFO walked through an urgent, confidential transaction. Everyone else on the call agreed. The employee — who had initially flagged the original email as a possible phish — felt his concerns evaporate when he saw and heard people he recognized.

Over the next few hours, he wired about $25.6 million across 15 transactions to bank accounts in Hong Kong. As CNN later confirmed, every face on that call, including the CFO, was a deepfake. Voices, gestures, the casual back-and-forth among “colleagues” — all of it generated by software. None of the money has been recovered.

The Arup attack is the exhibit-A version of neo-phishing because it stitches everything together: an initial email lure, a deepfake video conference for credibility, and a single human in the loop pressured to act fast. As of mid-2026, corporate fraud attempts using voice or video deepfakes have grown roughly 300% since 2024, and Deloitte projects total US AI-fraud losses could reach $40 billion annually by 2027.

Who’s actually being targeted

The honest answer is: most people. But the data points to two profiles taking the heaviest losses.

Households — especially older relatives

The “grandparent scam” used to be a kid in college playing the part of a panicked grandchild. It doesn’t need a kid anymore. The FBI’s May 2025 PSA warned of an active campaign in which attackers used AI-generated voice messages and texts to impersonate senior US officials. That template has now percolated down into ordinary families. As of early 2026, one in ten Americans reports having been hit by an AI voice-clone scam directly or in their household. Posted voicemail greetings, TikTok clips, even a 30-second podcast appearance are enough source material for the attacker.

If you have a parent or grandparent who would believe a panicked late-night call from a relative, you have someone in the bullseye.

Businesses — especially finance, IT, and HR

On the corporate side, neo-phishing’s favorite targets are people with the authority to move money or grant access. CFOs and finance staff. IT admins who can reset passwords or grant SSO access. HR teams handling payroll and direct-deposit changes. According to Hoxhunt’s 2026 phishing trends report, spear phishing now represents less than 0.1% of overall email volume but accounts for roughly two-thirds of breaches. The entire category has shifted toward small numbers of devastating, well-researched attempts.

Microsoft’s threat intelligence team also published an April 2026 case study on an AI-enabled “device code” phishing campaign in which attackers used an AI-driven assistant to hold a real-time chat with the victim while walking them through OAuth device-code authorization flows. The “support agent” was an LLM. The pretext, tone, and answers all adapted as the victim hesitated. No static checklist catches that.

What to look out for

If you’re a household

  - A panicked, late-night call or voicemail in a relative’s voice asking for money or gift cards; three seconds of posted audio is enough to clone a voice.
  - Messages that cite real details (a trip, a job, last week’s post) scraped from social media.
  - Any pressure to act, pay, or keep quiet before you can hang up and call back on a number you already have.

If you’re a business

  - Wire, payroll, or vendor bank-detail changes that arrive with unusual urgency or a demand for confidentiality.
  - Video or voice calls in which a familiar executive pushes a transaction outside normal approval channels.
  - Lookalike sender domains and lures that vary slightly from recipient to recipient to slip past email filters.

“The single most useful change a household or a small team can make in 2026 is moving the verification step from inside the message to outside it. Don’t reply to the email; call the person on a number you already have. Don’t trust the urgent voice; hang up and call back. The whole attack model collapses the moment you switch channels.”

If you’ve already been hit

For a household

The first hour matters more than anything else.

Move in this order

  1. Stop the bleeding. If money was wired or sent through a payment app, call your bank immediately and ask for the transaction to be recalled or frozen. Zelle and ACH transfers can sometimes be reversed inside a narrow window. Card payments can be disputed.
  2. Lock down email next. Change your email password; it’s the master key to every account that uses email-based recovery. Then turn on two-factor authentication, and move on to the bank, brokerage, and any account where the password was reused.
  3. Freeze your credit at all three bureaus — Equifax, Experian, TransUnion. It’s free, takes about ten minutes per bureau, and stops anyone from opening new credit lines in your name.
  4. File the formal reports. Submit at reportfraud.ftc.gov, ic3.gov, and — if your identity was used — identitytheft.gov, which generates a personalized recovery plan. File a local police report too; you’ll need that report number for some bank and insurance claims.
  5. Tell the family. Scammers reuse contact lists. If they cloned your daughter’s voice to call you, your name and number may be the next one they use.

For a business

  1. Convene incident response immediately. Don’t wait to confirm whether the attack is real. Treat any suspected BEC or deepfake event as an active intrusion until proven otherwise.
  2. Notify your bank and request a Financial Fraud Kill Chain (FFKC) recall if money has moved internationally. IC3 coordinates with foreign banks, but the process generally applies to international wires of $50,000 or more and only works inside the first 72 hours.
  3. Preserve evidence. Keep email headers, recordings, call logs, screenshots of any video calls, and chat transcripts. Don’t delete anything — even legitimate-looking calendar entries.
  4. Audit affected accounts. If a finance employee was deceived, their credentials should be assumed compromised: rotate passwords, revoke active sessions, check OAuth grants and forwarding rules.
  5. Disclose to insurers, regulators, and (where required) customers. Most cyber-insurance policies require prompt notice. Some state breach laws give you only days to notify affected individuals.

How to drill against neo-phishing

The defense isn’t a tool. It’s a habit: practice the moment of doubt before you need it.

For households, that means agreeing on a family safe word, talking to elderly relatives about voice clones in concrete terms (“if I sound panicked and ask for money, hang up and call me back”), and running short, low-stakes “what would you do?” conversations at the kitchen table. For businesses, it’s making out-of-band verification a fixed, dull policy rather than a moment of judgment under pressure: any change to a bank account, payroll detail, or wire request gets a callback on a known number. No exceptions, no executive overrides.
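The business version of that policy is mechanical enough to express as code. A minimal sketch of the callback rule, assuming a pre-built trusted directory; the directory contents, `ChangeRequest` fields, and addresses are all hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Numbers collected and verified BEFORE any request arrives.
TRUSTED_DIRECTORY = {
    "ap@vendor.example": "+1-555-0100",
}

@dataclass
class ChangeRequest:
    requester: str                    # address the message claims to come from
    callback_number_in_message: str   # never trusted, by policy

def verification_number(req: ChangeRequest) -> Optional[str]:
    """Return the number to call back, or None if the request must be refused.

    The rule: contact details supplied inside the message itself never
    count. Only a number already on file is out-of-band.
    """
    return TRUSTED_DIRECTORY.get(req.requester)

req = ChangeRequest("ap@vendor.example", callback_number_in_message="+1-555-0199")
print(verification_number(req))   # +1-555-0100, not the number in the message
```

The point of writing it down this flatly is that there is no branch for urgency, seniority, or a convincing voice: an unknown requester gets `None`, and a known one gets the number on file, full stop.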

The neo-phishing rule

If a message, call, or video creates urgency around money, access, or confidentiality — change the channel. Hang up and call back. Reply through the company directory, not the email thread. Walk to the person’s desk. The attack model assumes you’ll stay inside the channel they chose. The defense is to step out of it.

For the broader pattern this article fits inside, see our deep-dive on how AI is reshaping cybersecurity for businesses of every size, our AI voice cloning protection guide for households, and our 30-day phishing simulation plan for small businesses. If you’re also worried about a parent who would absolutely follow three numbered steps without questioning them, the 2026 elder-protection playbook is the place to start.

The internet hasn’t gotten more dangerous in any new technical sense. The scams have just gotten dramatically better at sounding like the people you trust. The fix isn’t to trust less — it’s to verify differently.

Build the “change the channel” reflex before it counts.

ScamDrill sends safe, realistic mock scams — including AI-written email lures, deepfake voicemail clips, and BEC vendor-impersonation tests — to your family or your team on a rotating schedule. When someone clicks where they shouldn’t, they get a friendly teachable moment instead of a real loss.

Start free →

Frequently asked questions

What is neo-phishing?

Neo-phishing is the working term for a new wave of phishing attacks that use AI, large-scale social engineering, and live data scraping to impersonate trusted institutions and people. It covers AI-written email phishing, voice clones (vishing), deepfake video calls, and hybrid attacks that stitch all three together. By April 2026, KnowBe4 reported that around 86% of phishing campaigns it tracked involved some form of AI tooling, up from roughly 30% two years earlier.

What is phishmaxxing?

Phishmaxxing is the slangier nickname for neo-phishing that has started circulating in Gen Z corners of the internet. It’s a play on “looksmaxxing” — the maximalist self-optimization trend that mainstreamed on TikTok in 2023, in which every variable on a person’s appearance gets pushed to its current ceiling. Phishmaxxing applies the same logic to a phishing attack: personalization, voice fidelity, sender-domain spoofing, and urgency are all dialed to the max with AI tools. The two terms describe the same phenomenon.

Who is being targeted by neo-phishing?

Two profiles are taking the heaviest losses. On the household side, older adults and their adult children are hit hardest, with one in ten Americans now reporting an AI voice-clone scam in their household. On the corporate side, finance staff, CFOs, IT admins and HR teams — anyone who can move money, reset passwords, or change payroll — are the favored targets. Spear phishing is now under 0.1% of email volume but accounts for roughly two-thirds of breaches, per Hoxhunt's 2026 trends report.

What does a neo-phishing attack actually look like?

It looks like a normal message or call, just better. An email referencing a real meeting you had on Tuesday, sent from a domain that looks identical to your bank's. A late-night call from your daughter's voice asking for bail money. A video meeting in which your CFO and three colleagues approve a wire transfer — except every face on the call is a deepfake, as in the 2024 Arup case that cost the firm $25.6 million. The defining trait is that traditional advice ("look for typos," "hover over the link") no longer reliably catches them.

What should I do immediately if I’ve been scammed?

Move fast in the first hour. Call your bank and ask for the transaction to be recalled or frozen. Change the password on your email account first (it's the master key to everything else) and turn on two-factor authentication. Freeze your credit at Equifax, Experian, and TransUnion. File reports at reportfraud.ftc.gov, ic3.gov, and identitytheft.gov, and a local police report. For businesses, treat it as an active intrusion: notify the bank and request a Financial Fraud Kill Chain recall through IC3, preserve all evidence, rotate credentials for affected accounts, and notify your cyber insurer and any required regulators.

Join our free newsletter to stay ahead of the scammers

Receive updates on monthly scam trends, along with best practices to protect yourself and those you care about.