AI-powered phishing attacks are skyrocketing in volume and sophistication. The total volume of phishing attacks has exploded by 4,151% since ChatGPT's debut in late 2022. This surge is no coincidence – generative AI enables cybercriminals to craft highly convincing emails, deepfake audio, and even video impersonations at scale.
Traditional security filters and cursory user scrutiny often fail to catch these AI-enhanced lures. Email security remains critical because threats like AI-generated phishing can evade legacy defenses by mimicking legitimate communication with remarkable accuracy.

How Attackers Leverage AI-Powered Phishing
Threat actors are now using AI tools to vastly improve the effectiveness of phishing and Business Email Compromise (BEC) schemes. Machine learning models can generate fluent, personalized emails that lack the tell-tale errors of old-school scams. Attackers feed these models stolen data (corporate emails or social media profiles) to craft messages that read as if a colleague or vendor wrote them.
They’re also deploying deepfake technology: AI-generated voice and video that can impersonate executives in real time. Criminals have used deepfake audio on phone calls to convincingly pose as a CEO or CFO, tricking employees into executing large wire transfers. This blend of AI and social engineering creates a new level of deception. A phishing email might be followed by a “verification” call in a familiar voice, such as the CEO’s, generated by AI. For busy employees, these attacks appear frighteningly legitimate.

The Escalating Scale and Impact
The convergence of AI and phishing has made attacks not only more convincing but also more prevalent and damaging. Phishing remains one of the top causes of security breaches, with the “human element” involved in 68% of breaches. Now armed with AI, phishers are successfully bypassing technical defenses and fooling people on a massive scale. Business Email Compromise – a form of highly targeted phishing – is causing billions in losses annually.
The FBI tallied nearly $2.8 billion in reported BEC losses in 2024 alone. Between 2022 and 2024, total BEC losses reached nearly $8.5 billion. These scams are widespread across various industries and geographies. The FBI’s IC3 reports Business Email Compromise (BEC) incidents in all 50 U.S. states and 186 countries. And the impact isn’t only financial. Attackers often steal sensitive data via phishing (e.g., by harvesting login credentials to email and cloud systems).
A successful AI-powered phishing-based breach can expose confidential communications or personal data, leading to regulatory penalties and reputational damage. The average financial hit per BEC incident is substantial as well. The IC3 report found that the typical loss per incident in 2024 was around $150,000. For a mid-market company, a single attack of that magnitude is harrowing. For a larger enterprise, a well-coordinated BEC scam might strike multiple times or aim for an even bigger payout.

Why Executives Should Care Now
AI-powered phishing isn’t a far-off threat – it’s here today and growing. Malicious actors leverage generative AI to create more convincing phishing emails and exploit new attack vectors like deepfake audio and even QR code phishing (“quishing”). These AI-crafted attacks often evade traditional defenses and exploit human trust. Even organizations with mature cybersecurity programs find themselves at risk. This is no longer just an IT issue. It’s a business risk that demands the attention of the C-suite. CFOs and CEOs have been impersonated in scams, and finance teams have been tricked into transferring millions of dollars.
Surveys show 63–64% of organizations experienced BEC attempts in the past year, underscoring that no company is immune. Beyond the immediate financial loss, consider the secondary effects: lost funds may never be fully recovered (especially if they’ve been laundered through crypto or overseas accounts), incidents may need to be disclosed to investors or regulators, and client trust can be shaken.
We’re also seeing stricter regulatory scrutiny – for instance, payment networks are updating rules to curb BEC fraud by 2026, meaning boards and audit committees are starting to ask management, “What are we doing about this?”

AI Is Only Becoming More Sophisticated
Security leaders warn that attackers’ use of AI in phishing is only beginning. “In the near future, AI will power significantly more phishing attacks — everything from text-based impersonations to deepfake communications will become cheaper, more convincing, and more popular with threat actors,” predicts Mika Aalto, CEO of Hoxhunt. In other words, the cost and barrier to entry for executing advanced phishing schemes are dropping, which likely means more frequent and cunning attacks ahead. Forward-looking organizations are already upgrading their defenses (and training) in anticipation of this shift. Gartner analysts also flag AI-driven phishing as a rapidly evolving menace and advise security leaders to implement advanced BEC protections and user education now to stay ahead of the curve.
AI has fundamentally altered the phishing threat landscape in 2025. Attacks are more convincing, more numerous, and capable of fooling even savvy employees. This calls for a heightened state of awareness at the executive level. Leaders must recognize that a single clever email or deepfake call can bypass layers of technology by exploiting human trust.