Cybercrime powered by artificial intelligence has become a major global security concern, with losses exceeding 893 million dollars in 2025, according to the latest Internet Crime Report compiled by the FBI’s Internet Crime Complaint Center.
The report highlights a sharp rise in sophisticated digital fraud schemes that increasingly rely on artificial intelligence tools to deceive victims, automate scams, and bypass traditional security systems. While cybercrime has been rising steadily for years, the integration of AI has significantly increased both the scale and speed of attacks.
Officials say the losses reflect only reported cases, meaning the actual global cost could be much higher when unreported incidents are considered. In total, cyber-enabled crimes across categories have now reached an estimated 17.6 billion dollars in global losses, showing how rapidly digital financial crime is expanding.
The report shows that AI is being used in several ways by cybercriminals. These include generating highly convincing phishing emails, creating deepfake audio and video content to impersonate individuals, automating large-scale scam operations, and improving targeting precision for financial fraud.

One of the most concerning developments is the use of generative AI to mimic trusted voices and identities, making it increasingly difficult for individuals and institutions to distinguish between real and fake communications. This has led to a rise in business email compromise scams and impersonation fraud targeting both individuals and corporations.
Security experts warn that the accessibility of AI tools has lowered the barrier to entry for cybercrime. What once required technical expertise can now be done with minimal skill, allowing more actors to engage in sophisticated fraud campaigns.
The FBI report also notes that victims span individuals, businesses, and government institutions, with financial fraud remaining the most common outcome. However, there is growing concern about the use of AI in more targeted attacks against critical infrastructure and sensitive data systems.
Cybersecurity firms have repeatedly warned that the next phase of digital crime will be defined by automation. AI systems can now scan large datasets, identify vulnerabilities, and execute attacks at a scale that human operators alone cannot match.
At the same time, defenders are also beginning to deploy AI to counter these threats. Financial institutions, tech companies, and government agencies are increasingly using machine learning systems to detect unusual transaction patterns, flag suspicious behaviour, and respond to breaches in real time.
Despite these efforts, experts say the gap between attackers and defenders is still widening, largely because cybercriminals can adopt new tools faster than regulatory and security systems can respond.
The growing financial impact is also raising policy concerns. Governments are under pressure to strengthen digital security frameworks, improve cross-border cooperation on cybercrime, and regulate the misuse of AI technologies without stifling innovation.

The 2025 figures underline a broader trend: as digital systems become more integrated into everyday life, the cost of cyber insecurity is rising sharply. From banking and commerce to communication and identity verification, nearly every sector is now exposed to AI-enabled threats.
Analysts say the challenge ahead will not only be technical but also regulatory and educational, as individuals and organisations must become more aware of increasingly convincing digital scams.
While AI continues to drive innovation across industries, the same technology is also reshaping the nature of cybercrime, turning it into a faster, more scalable, and more dangerous global threat.