The most pressing AI threat isn’t killer robots or mass surveillance. It’s the surge in sophisticated cybercrime enabled by generative AI, which cost Americans $16.6 billion in 2024, a 33% year-over-year jump and double the losses of three years earlier. The real figure is likely higher, since fewer than 20% of victims report scams.
The New Face of Cybercrime
Recent discussions at the Aspen Institute’s Crosscurrent summit revealed a chilling trend: North Korean operatives are using AI-generated face overlays to pass remote job interviews at Western tech companies. These operatives then work multiple positions simultaneously, funneling salaries and intelligence back to Pyongyang. They leverage AI to fabricate resumes, prep for interviews, and convincingly impersonate legitimate candidates.
This isn’t just about financial theft; it’s a new form of state-sponsored espionage. The ability to infiltrate companies undetected is a significant security risk, but it’s not being treated with the same urgency as more sensational AI threats.
Generative AI Supercharges Fraud
The problem is accelerating. Generative AI makes cybercrime faster, cheaper, and more convincing than ever before. Phishing emails written with AI language models are now indistinguishable from legitimate communications. Synthetic identities, complete with fabricated photos and backstories, can be produced at scale, letting fraudsters slip past identity verification systems.
Voice cloning has already resulted in multi-million dollar heists. In one case, a finance worker at Arup in Hong Kong transferred $25 million after a video call with deepfakes of the company’s CFO and colleagues; every other participant on the call was AI-generated. CrowdStrike’s 2026 Global Threat Report found that AI-enabled attacks surged 89% year-over-year, with the average breach spreading throughout a network in under 30 seconds.
Why This Matters
Cybercrime isn’t new, but the scale and sophistication are unprecedented. The industrialization of scam operations in Southeast Asia, coupled with the rise of cryptocurrency and remote work, has created a perfect storm. Deloitte projects that generative AI-enabled fraud losses in the US could reach $40 billion by 2027.
The emotional manipulation is particularly dangerous. Romance scams, which often target vulnerable and isolated individuals, are becoming more persuasive with AI assistance. Victims frequently refuse to believe they’ve been scammed even when shown clear evidence.
Defense vs. Offense
While financial institutions and law enforcement are deploying AI to combat fraud (the FBI froze hundreds of millions in stolen funds last year), the consensus among experts is grim. Rob Joyce, former director of cybersecurity at the NSA, warns that we’re entering a period where offense far outpaces defense. Alice Marwick, director of research at Data & Society, echoes this pessimism.
The problem is compounded by underreporting and normalization: each year’s record losses are absorbed as the cost of doing business online, obscuring how steeply the curve is climbing.
The Bottom Line
AI-powered scams are a clear and present danger. Unlike speculative AI risks, this threat is already here, costing billions and exploiting human vulnerabilities. The race between offense and defense is heavily skewed in favor of attackers, and the scale of the problem suggests it will only worsen unless significant countermeasures are deployed.
