5 March 2026

Social Engineering 2.0: how to outsmart scams when AI gets involved


Imagine this: your manager calls you, the familiar voice urgently requesting a wire transfer. Hard to refuse… Yet that voice may have been synthesised by artificial intelligence. Welcome to the era of Social Engineering 2.0, where voice deepfakes, manipulated videos and hyper-targeted phishing campaigns exploit human vulnerabilities. Experts are sounding the alarm: according to IBM’s Cost of a Data Breach Report, human factors remain involved in the vast majority of breaches in 2024. Meanwhile, Europol has warned that deepfakes are becoming a major vector for financial crime. Vigilance is no longer optional.
 

Deepfakes and Voice Cloning

 
Audio deepfakes (voice cloning) represent one of the most striking threats. In 2019, a UK-based energy firm was reportedly defrauded of £200,000 after scammers used AI-generated voice technology to impersonate a senior executive. Financial institutions are also reporting a surge in such attacks: call centres are increasingly targeted by fraudsters using synthetic voices to bypass verification systems, and voice-recognition safeguards can be deceived by convincingly generated recordings, as highlighted by the Financial Times and IBM Security.

More recently, engineering consultancy Arup confirmed it had been targeted in a sophisticated deepfake video conferencing scam involving millions of dollars in fraudulent transfers, according to coverage by The Guardian. Fake colleagues – complete with realistic faces and voices – persuaded an employee to authorise multiple payments.

These cases illustrate how AI is democratising fraud. Deepfake tools are now widely accessible and inexpensive, enabling even non-technical criminals to launch highly sophisticated scams.
 

Hyper-Targeted Phishing Powered by AI

 
Artificial intelligence is also making phishing dramatically more effective. Tools built on large language models, such as ChatGPT, can draft highly convincing, personalised emails in minutes. According to Proofpoint research, threat actors increasingly use AI to refine phishing campaigns, eliminating the spelling errors and awkward phrasing that once exposed malicious messages.

Security analysts have demonstrated that AI-generated phishing emails can be produced in a fraction of the time previously required, while maintaining a high level of credibility. These systems can scrape publicly available data – LinkedIn profiles, company websites, social media posts – to tailor attacks with alarming precision.

Traditional attack vectors (malicious emails, fraudulent links, impersonation attempts) are now amplified by AI: more targeted, multilingual and more believable than ever. The UK National Cyber Security Centre (NCSC) has warned that AI will increasingly lower the barrier to entry for cybercriminal activity, particularly in social engineering campaigns.
 

Practical Steps to Protect Your Organisation

  • Always verify the identity of the person contacting you (cross-check sources). If you receive an unusual request (fund transfers, sensitive data sharing, urgent payment instructions), do not rely solely on the incoming call or message. Hang up and call back directly using an official, independently verified number – for example via your organisation’s website or trusted directory. The NCSC’s phishing guidance strongly recommends independent verification to prevent impersonation fraud.
  • Implement dual authorisation for critical transactions. Require a second approval for exceptional payments or sensitive operations, ideally via a separate communication channel. Most “CEO fraud” or business email compromise schemes succeed only when a single individual validates the transaction without cross-checking.
  • Continuously train and raise awareness among your teams. Conduct phishing simulations and educate employees about deepfake risks. Explain that AI can generate convincing audio and video manipulations, and train staff to spot anomalies (lip-sync inconsistencies, unusual phrasing, unexpected urgency). Maintain strong cyber hygiene: keep software updated, enforce robust password policies and multi-factor authentication, and avoid oversharing personal or organisational information online.
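As a rough illustration, the dual-authorisation rule described above can be expressed in code. This is a minimal sketch, not a production control: the threshold, names and channel labels are hypothetical, and a real system would log approvals and verify identities out of band.

```python
from dataclasses import dataclass, field

# Hypothetical limit above which dual authorisation applies
APPROVAL_THRESHOLD = 10_000

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    # (approver, channel) pairs, e.g. ("alice", "phone"), ("bob", "email")
    approvals: list = field(default_factory=list)

def is_authorised(req: PaymentRequest) -> bool:
    """Allow an exceptional payment only if two *distinct* people
    approved it over two *distinct* communication channels."""
    if req.amount < APPROVAL_THRESHOLD:
        return len(req.approvals) >= 1
    people = {person for person, _ in req.approvals}
    channels = {channel for _, channel in req.approvals}
    return len(people) >= 2 and len(channels) >= 2

# One person approving twice, even via two channels, is not enough --
# which is exactly the weakness CEO-fraud schemes exploit:
solo = PaymentRequest(50_000, "ACME Ltd",
                      [("alice", "phone"), ("alice", "email")])
duo = PaymentRequest(50_000, "ACME Ltd",
                     [("alice", "phone"), ("bob", "email")])
```

The key design point is requiring distinct approvers *and* distinct channels: a deepfaked voice call can compromise one channel, but is far less likely to compromise two independent ones at once.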

By combining heightened vigilance, mutual control mechanisms and ongoing training, organisations can significantly reduce the risk of falling victim to AI-driven deception.

Stay alert and inquisitive: Social Engineering 2.0 is already here – and it continues to evolve.

📅 Sign up for free and discover Whaller | 👉 Request a demonstration | 📩 Need advice? Contact us!
 

