IT and Cyber Security Awareness: AI phishing that sounds like your boss
You know the situation.
Your inbox is busy, Teams is pinging, and a message from "your boss" lands. The tone is just right. It's short. It seems relevant. And there's an element of time pressure: "I'm heading into a meeting – please send me…"
This is exactly why AI phishing works now.
Written by Eya Beldi
Microsoft's Digital Defense Report 2025 describes AI-automated phishing achieving significantly higher click-through rates than classic phishing attempts (54% vs. 12% in their example) – not because people have become dumber, but because the attacks have become better at sounding "internal" and credible.
“This happened to me” (and why it matters)
I myself received an email pretending to be our CEO at Grape, Peter Hindkjær. It didn’t have the old “scam signals” (bad language, strange formatting, etc.). It just sounded… plausible.

The point is not to expose anyone. The point is to show employees that it's no longer safe to trust the "vibe" of an email.
Why the old advice no longer protects you
The classic advice “find spelling mistakes and bad grammar” falls apart when generative AI can:
- write flawless Danish
- mimic an internal tone-of-voice
- make the message short and "busy"
- hit the right moment (e.g. just before meetings, deadlines, payroll runs)
And as phishing becomes harder to spot visually, the countermeasure necessarily becomes more process and behavior-based: pause → verify → report.
Compromising emails are expensive
(and they don’t require advanced hacking skills)
Business Email Compromise (BEC) is, at its core, social engineering: someone posing as a trusted person (boss, supplier, finance) to get someone to transfer money or share sensitive information. The FBI describes BEC as one of the most financially damaging forms of online crime.
At the same time, the FBI's IC3 report shows that total reported losses to cybercrime reached a record level ($16.6 billion in the report), and the FBI's own press release highlights the same figure.
The new baseline rule:
Anything with money, access or data must be verified out-of-band
If a message is about:
- money / payment changes / account numbers
- access / MFA codes / credentials
- sensitive files / personal data / contracts
- "quick exceptions" from normal process
…the employee must have one fixed reflex:
Pause → Verify → Report
Verify out-of-band means: verify via another channel you already trust (known phone number from the directory, existing internal thread, approved workflow), not by replying to the email or calling a number in the signature.
The NCSC recommends making processes more resilient by ensuring that important email requests are verified via a second form of communication.
CISA recommends similar out-of-band verification for wire/payment requests that appear to come from management.
Three AI scenarios every employee should train
(again and again)
1) “Can you update the payment details?”
The classic scam, but now written in a perfect tone, often with better timing and more context.
Trained reflex: No payment changes without verified process + approvals (and always out-of-band).
2) Email bombing → Teams message → “IT support” impersonation
Microsoft describes attack chains where email bombing is used to create stress and chaos, after which the attacker switches channels (e.g. Teams) and poses as IT support to gain remote access.
Trained reflex: Sudden flood of emails + “support” making contact = stop, verify via official IT channel and report.
3) Leadership impersonation with video/voice (deepfakes)
The Arup case from Hong Kong is a brutal example: deepfake video/voice in a meeting led to transfers of approximately US$25 million.
Trained reflex: Even a familiar face or voice never overrides process for high-risk requests.
Why awareness should be a practice, not a policy
Most organizations already have an IT security policy.
What is often missing is a shared muscle memory under pressure.
That's why scenario-based training works better than "read and confirm". And if you want to measure smarter than raw click-through rate, NIST's Phish Scale can help you assess how difficult a phish is to detect and design fairer phishing exercises.
What you should do now (IT + HR/L&D)
If AI phishing has become harder to spot, your defenses need to be easier to execute.
Recommendation: Roll out a short, scenario-based IT Security Awareness e-learning course and make it mandatory for the entire organization.
At Grape, our IT security course is built as 9 modules (about 30 minutes total) and is designed to provide a solid foundation in good IT security, so employees can build the same "pause, verify, report" reflex.