BLUESTONE CYBER

Phishing Has Evolved - Your Awareness Training Probably Hasn't

6 min read · 20 April 2026

Most businesses think they've got phishing covered. There's an annual training session. A slide deck about suspicious links. Maybe a simulated phishing email once a quarter that half the office recognises because it uses the same template as last time.

A few years ago, that was probably enough to tick the box. It isn't any more.

Phishing in 2026 doesn't look like it did in 2020. It doesn't sound like it did either. The attacks your staff were trained to spot no longer exist. The ones hitting their inboxes right now are something else entirely.

The typo is dead

For years, the number one piece of phishing advice was: look for bad grammar. Misspellings. Odd formatting. If it reads badly, it's probably fake.

That rule made sense when phishing emails were written by people working in a second or third language, churning out bulk messages with obvious mistakes. It doesn't make sense now.

Large language models write perfect English. Perfect German. Perfect whatever the attacker needs. A phishing email that took a human sixteen hours to research and write, studying the target's company, mimicking their boss's tone, referencing a real project, takes an AI about five minutes. And the output is clean. No typos, no weird formatting, nothing that would trigger a second look.

AI-generated government-styled phishing emails have hit 67% click rates in testing. Two out of three people clicking through, on messages that would sail past every “spot the mistake” exercise your staff have ever done.

Teaching employees to look for typos isn't just outdated. It's actively dangerous, because it builds confidence in a skill that no longer works.

Your boss just called. Except they didn't.

Voice cloning used to be a novelty. Something you'd see in a demo and think: that's interesting, but who's actually going to do that?

Attackers. That's who.

Cloning someone's voice now takes as little as three seconds of sample audio. A YouTube clip, a conference recording, a voicemail greeting. That's enough. And these aren't static recordings played back down a phone line. They're live, interactive conversations where an AI responds in the cloned voice, adjusting to what the other person says.

In one documented case, an employee was convinced by a cloned voice that their boss was instructing them to wire $35 million. The voice sounded right. The request sounded urgent but plausible. The money moved.

Then there's video. A multinational firm in Hong Kong lost $25.5 million after an employee joined a video call where every other participant, including the CFO, was an AI-generated deepfake. The employee thought they were on a legitimate call with real colleagues. Every face on that screen was synthetic.

By the end of this year, experts expect it to be impossible for the human eye or ear to tell real from fake without technical tools. Seeing and hearing are no longer proof of anything.

The attack isn't just in your inbox any more

When businesses think about phishing, they think about email. That's where the training happens, where the simulations land, where the security tools focus.

Attackers have noticed.

45% of modern phishing campaigns are now multi-channel. An email followed by a Teams message. A text followed by a phone call. A LinkedIn connection request followed by a “video meeting” where the person on camera is a deepfake. If one channel doesn't convince the target, the next one reinforces it.

91% of businesses don't simulate attacks on platforms like Slack, Teams, or Zoom. But 64% of organisations report being targeted through those exact channels. The training covers one attack surface. The attackers are working across four or five.

Consent phishing is gaining ground too. Instead of stealing your password, the attacker sends a legitimate-looking permission request for a third-party app in Microsoft 365 or Google Workspace. Approve it, and they get persistent access to your email and files. No password needed, no MFA prompt. One click on “Allow access” and they're in.

MFA interception kits, tools that steal authentication tokens in real time, have been used in roughly a million attacks since mid-2025. Multi-factor authentication is still worth having. It just isn't the wall it used to be.

Why your training programme isn't keeping up

Most security awareness programmes follow the same pattern: an annual session, a generic phishing simulation, a compliance checkbox. Staff sit through it, pass the quiz, and go back to their jobs until next year.

The threat doesn't operate on an annual cycle. It moves in weeks. An approach that worked twelve months ago may be obsolete by the time the next training session comes around. Here's what we see going wrong.

Generic simulations teach the wrong lesson.

Many simulated phishing emails use themes like “Review your benefits” or “Your package is delayed.” These get caught by email security gateways before they reach a real inbox. Employees practise spotting emails they'd never actually receive. That training gives no return.

Nobody trains for Teams, Slack, or voice.

Almost all phishing simulations target email. Almost none cover the collaboration platforms and phone-based attacks that are now central to how campaigns work.

Annual sessions build nothing lasting.

People forget training that happens once a year. It doesn't build instinct or change behaviour. It lets the business tick a compliance box and move on.

Click rate is the wrong metric.

Most programmes measure success by how few people clicked the link. That tells you who didn't fall for the test. It tells you nothing about who would report a real threat. The number that actually matters is the report rate, and few businesses measure it.
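The gap between the two metrics is easy to make concrete. A minimal sketch (the record fields are hypothetical, not any particular platform's export format) showing how the same simulation data yields both numbers:

```python
# Sketch: click rate vs report rate from phishing-simulation records.
# The "clicked"/"reported" fields are hypothetical, not from any
# specific awareness platform's export.

def campaign_metrics(records):
    """Return (click_rate, report_rate) as percentages."""
    total = len(records)
    if total == 0:
        return 0.0, 0.0
    clicks = sum(1 for r in records if r["clicked"])
    reports = sum(1 for r in records if r["reported"])
    return 100 * clicks / total, 100 * reports / total

results = [
    {"user": "a", "clicked": False, "reported": True},
    {"user": "b", "clicked": True,  "reported": False},
    {"user": "c", "clicked": False, "reported": False},
    {"user": "d", "clicked": False, "reported": True},
]

click_rate, report_rate = campaign_metrics(results)
print(f"Click rate: {click_rate:.0f}%")    # who fell for the test
print(f"Report rate: {report_rate:.0f}%")  # who actively flagged it
```

A campaign can show a flattering click rate while the report rate sits near zero, which is exactly the failure mode described above.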

85% of businesses that experienced a breach in the past year faced a phishing attack. 60% of all data breaches involve a human element. Those numbers haven't improved despite decades of awareness training. The training isn't fixing the problem because it's aimed at a version of the problem that no longer exists.

What actually works now

The security industry has started using a different term for this: Human Risk Management. The goal isn't making people “aware” of threats. It's changing how they respond to them.

Short-cycle training beats annual sessions.

Two-minute weekly nudges that build scepticism are more effective than an hour-long session people tune out of once a year. Frequency builds habit. Duration doesn't.

Training at the moment of risk.

Modern tools can detect when someone is about to do something risky, like entering credentials on an unfamiliar site or pasting sensitive data into an unsanctioned tool, and deliver a prompt right then. Not six months later in a classroom.

Simulations that reflect real attacks.

If your phishing simulations look nothing like what actually lands in people's inboxes, they're a waste of everyone's time. Effective simulations use the same personalisation and multi-channel techniques real attackers use. Finance teams get deepfake wire transfer scenarios. IT staff get credential harvesting attempts.

Scepticism over pattern matching.

When AI produces flawless text, you can't train people to spot the fake by how it looks. You train them to question the intent. Why is this request urgent? Why is this person asking me to do this through this channel? Does the process feel right, regardless of how polished the message is?

Build a culture where people report, not hide

Most businesses get this part wrong, and it matters more than the training content.

When someone clicks a phishing link and the immediate response is a remedial training module and an email from IT, that person learns one thing: don't admit it next time. The whole organisation learns it by osmosis. People stop reporting suspicious messages because they don't want the hassle or the implication that they're the weak link.

That's backwards.

When one employee reports a suspicious email, the security team can pull that same message from every other inbox in the organisation. One report stops the attack before it spreads. That's worth something. It should be treated like it's worth something.

Measure and reward reporting. An employee who flags something suspicious, even if it turns out to be legitimate, has done the security team a favour. That behaviour needs to be visible and valued.

There's an authority problem too. A lot of phishing attacks, especially deepfake ones, work because they impersonate someone senior, and junior staff don't feel they can push back. A message from the “CEO” asking for an urgent wire transfer gets actioned because people want to be helpful and responsive. Attackers know this. They design their attacks around it.

If your culture doesn't give people permission to pause and verify when a request comes from someone senior, no training programme will fix that.

What you can do this month

You don't need to overhaul everything at once. Some of this you can do this week.

Set up out-of-band verification for anything high-stakes.

Wire transfers, credential resets, sensitive data requests: verify through a second channel. If the request came by email, call the person on a number you already have. Not the number in the email.

Deploy a password manager.

Password managers only autofill credentials on the legitimate domain. A phishing site that looks identical but sits at a slightly different URL gets nothing. That layer of protection works even when human judgement doesn't.
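The protection comes from exact hostname matching. A minimal sketch of the idea (not how any particular password manager is implemented, and the vault entry is made up):

```python
# Sketch of the domain check a password manager relies on: credentials
# are keyed to a hostname, so a lookalike phishing host gets nothing.
from urllib.parse import urlparse

VAULT = {"login.example.com": ("alice", "s3cret")}  # hypothetical saved entry

def autofill(url):
    host = urlparse(url).hostname
    return VAULT.get(host)  # None unless the hostname matches exactly

print(autofill("https://login.example.com/signin"))  # saved credentials
print(autofill("https://login.examp1e.com/signin"))  # lookalike: None
```

Real managers typically match on the registrable domain rather than the full hostname, but the principle is the same: the match is mechanical, so a convincing-looking fake gets no credentials.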

Move away from SMS-based MFA.

Hardware security keys and passkeys bind authentication to a specific domain. A phishing site can't intercept what it can't trigger.
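The binding works because during a WebAuthn ceremony the browser, not the user, records the page origin inside the signed client data, and the server rejects any mismatch. A heavily simplified sketch of that server-side check (real verification also covers the challenge, the RP ID hash, and the signature; the origin value here is hypothetical):

```python
# Simplified sketch of WebAuthn's origin check. The browser embeds the
# page origin in the signed client data, so a phishing site cannot
# produce a response the legitimate server will accept.

EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical relying party

def verify_origin(client_data: dict) -> bool:
    """Accept the assertion only if it was made on the expected origin."""
    return client_data.get("origin") == EXPECTED_ORIGIN

print(verify_origin({"type": "webauthn.get",
                     "origin": "https://login.example.com"}))   # genuine site
print(verify_origin({"type": "webauthn.get",
                     "origin": "https://login.examp1e.com"}))   # phishing site
```

This is why a phishing proxy that happily relays SMS codes fails against passkeys: it can't forge the origin the browser reports.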

Audit your third-party app permissions.

Go into your Microsoft 365 or Google Workspace admin panel and review what apps have been granted access. Revoke anything unused or unfamiliar. Consent phishing relies on people approving apps they shouldn't.
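Once you have the list of granted apps exported, a quick triage script can surface the broad grants worth reviewing first. A sketch, assuming a made-up export format (the scope names are standard Microsoft Graph delegated permissions commonly abused in consent phishing):

```python
# Triage sketch for a consent-phishing audit. The app list below is a
# hypothetical export; the scope names are real Microsoft Graph
# delegated permissions often requested by malicious apps.

HIGH_RISK_SCOPES = {"Mail.Read", "Mail.Send",
                    "Files.ReadWrite.All", "offline_access"}

apps = [
    {"name": "Expense Helper",    "scopes": {"User.Read"}},
    {"name": "PDF Convertor Pro", "scopes": {"Mail.Read", "offline_access"}},
]

def flag_risky(app_list):
    """Return names of apps holding any high-risk permission."""
    return [a["name"] for a in app_list
            if a["scopes"] & HIGH_RISK_SCOPES]

print(flag_risky(apps))  # grants to review (and likely revoke) first
```

Note `offline_access` in particular: it's the scope that turns a one-off approval into the persistent access described above.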

Tell your staff that perfect English is the new red flag.

Un-train the old advice. A well-written, perfectly formatted, urgent request is now more likely to be an attack than a sloppy one.

Agree on a codeword.

For high-value transactions, have an internal safe phrase that anyone can use to verify identity. Low-tech, but it works against deepfakes that cost thousands to produce.

And start measuring your report rate. If you don't know how many suspicious messages your staff report each month, you don't know how your security culture is actually performing. That number matters more than your click rate.

The old playbook doesn't work any more

Phishing used to be a numbers game played with blunt instruments. Spray a million poorly written emails, hope someone clicks. The training designed to counter that worked well enough for a while.

That era is over. Phishing in 2026 is precise, personalised, and increasingly multi-media. It uses your boss's face, your colleague's voice, and your company's internal language. It doesn't need you to miss a typo. It needs you to trust something that feels completely normal.

The businesses that will handle this are the ones that stop treating security awareness as a compliance exercise and start treating it as a behaviour change problem. Frequent training in short doses. A culture that rewards reporting. Verification habits that don't depend on recognising a fake.

Your staff can't spot the difference any more. Nobody can. That's not a failure of your people. It's a failure of the approach. And it's fixable, if you're willing to change how you think about the problem.

BlueStone Cyber helps UK businesses move from tick-box awareness training to human risk management that changes how people actually behave. If your programme hasn't been reviewed in the past twelve months, get in touch. The threat moved on. Your training should too.