
AI-driven cybercrime: The next global security challenge


BY AMINA ALLISON

Imagine receiving a video call. It’s your company’s chief financial officer, along with several other familiar senior colleagues. The connection is clear, their voices and mannerisms instantly recognisable. They instruct you, with convincing urgency, to process a series of confidential payments totalling $25 million. You
comply.

Only later do you discover the chilling truth: the faces and voices on the call were entirely fabricated by artificial intelligence. This isn’t a far-fetched movie
plot; it was the reality for a finance worker at the multinational firm Arup in early 2024, a stark illustration of a new, rapidly escalating global threat: AI-driven cybercrime.

Artificial intelligence is no longer just enhancing legitimate industries; it is becoming a formidable weapon in the hands of cybercriminals. These actors are
leveraging AI to launch attacks that are more sophisticated, personalised, scalable, and dangerously effective than ever before. From crafting flawless phishing emails en masse to deploying malware that learns and adapts, AI is enhancing the criminal toolkit. Security experts warn that these AI-augmented attacks are growing in frequency and impact, transforming cybercrime from a persistent nuisance into an urgent global security challenge that demands immediate and coordinated action.


Understanding this evolving threat landscape is the first step towards defending against it.

How AI is Enhancing Cyber Threats

Cybercriminals are rapidly integrating AI into their operations, creating threats that challenge traditional defences. Here’s a look at the key ways AI is being
weaponised:


Hyper-Personalised Phishing at Scale

AI elevates phishing from generic scams to surgically precise operations. Malicious generative AI tools, like the notorious WormGPT discovered in 2023, are purpose-built without ethical constraints to automate the creation of highly convincing fake emails. These systems scrape data to personalise messages, mimicking corporate jargon or referencing specific details about a target’s life or work. The result is phishing lures that are grammatically perfect, contextually
relevant, and adept at bypassing spam filters.

Some reports, like one from SlashNext in 2023, indicated significant increases in Business Email Compromise (BEC) and advanced phishing, partly attributed to these AI tools. Furthermore, attackers are deploying AI chatbots; in one recent scheme highlighted by ThaiCERT and The Hacker News, fake Facebook warning emails
directed users to an “appeal” button, launching a chatbot that interactively coaxed victims into revealing their login credentials.

Adaptive Malware and Ransomware


AI introduces a new level of stealth and resilience to malicious software. AI-powered malware can learn from its environment, modify its own code to avoid signature detection (polymorphic malware), delay activation, or disguise its activity within legitimate system processes. Europol’s 2023 threat assessments
have highlighted growing concern over AI-generated malware precisely because its adaptive nature complicates detection and forensic analysis. Ransomware
gangs are also benefiting. AI can help identify high-value targets and data within a network, automate the crafting of personalised, psychologically manipulative
ransom notes, and even adapt encryption tactics to evade security software. The emergence of groups like “FunkSec” in late 2024, reportedly using AI-assisted tools for their ransomware campaigns despite limited apparent experience, shows how AI is dangerously democratising sophisticated cybercrime capabilities.

Deepfakes and Voice Cloning

AI’s ability to create hyper-realistic audio and video forgeries is perhaps its most alarming application in cybercrime. The technology allows near-perfect mimicry
of faces, voices, and mannerisms, making it increasingly difficult to trust digital interactions. This is actively exploited for fraud:

Video Impersonation: The $25 million Arup heist remains a landmark case, demonstrating how AI can convincingly replicate multiple executives simultaneously in a live video call scenario.


Voice Cloning: As early as 2019, fraudsters used AI-cloned voice technology to impersonate a parent company CEO over the phone, successfully directing a UK
subsidiary director to transfer €220,000 ($243,000), according to reports analysed by firms like Trend Micro. More recently, criminals have used AI voice cloning in harrowing ransom scams, such as a 2023 US case reported by The Guardian where a mother received a call featuring a perfect AI replica of her daughter’s voice claiming she’d been kidnapped.

With deepfake tools becoming more accessible and sophisticated, and predictions like Europol’s suggesting 90% of online content could be synthetically generated by 2026, the implications for trust in digital communications are profound.


Automated Disinformation Campaigns

AI acts as a powerful engine for spreading false information. Generative models can churn out realistic but fabricated news articles, social media posts, images,
and videos at an unprecedented scale. These can be used to manipulate public opinion, interfere in elections, or even destabilise markets, as seen in May 2023,
when a fake AI-generated image depicting an explosion near the Pentagon briefly caused market fluctuations. The automation extends to creating fake profiles and websites to amplify these narratives, overwhelming platforms’ ability to moderate content effectively.


AI-Powered Vulnerability Discovery

While less publicised, the potential for AI to automate the discovery of exploitable flaws in software (zero-day vulnerabilities) is a significant concern. AI systems can be trained to analyse vast codebases and identify subtle weaknesses far faster than human researchers. Although concrete examples of purely AI-discovered zero-days being exploited are scarce (often such discoveries are kept secret or used by nation-states), security experts agree that AI drastically accelerates the process, potentially leading to a surge in novel attacks exploiting previously unknown vulnerabilities.


Advanced Social Engineering with AI Chatbots

Beyond simple phishing emails, AI chatbots enable more complex, interactive social engineering attacks. Criminals can deploy chatbots that engage victims in seemingly legitimate conversations over extended periods, perhaps via corporate messaging systems, perfectly mimicking the writing style of a colleague or
manager to build trust before extracting sensitive information or directing malicious actions. The ThaiCERT/Hacker News example of the Facebook credential-stealing chatbot illustrates this interactive, trust-eroding tactic.

Adversarial AI

As organisations increasingly deploy AI for defence in spam filters, fraud detection, facial recognition, etc., attackers are developing counter-techniques
known as adversarial AI. These attacks involve crafting specific inputs (e.g., subtly modified images, text variations) designed to deceive defensive AI models.
Real-world research, highlighted by sources such as MIT, has demonstrated techniques like using small stickers to fool Tesla Autopilot systems, employing adversarial makeup patterns to bypass iPhone Face ID, and crafting text modifications that successfully evade Gmail’s AI-powered spam filters. This represents an “arms race” in which the AI security tools themselves become targets.
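
To make the idea concrete, the short Python sketch below shows the kind of robustness test defenders can run against their own image classifiers, using the widely published fast gradient sign method (FGSM). The tiny model, input, and epsilon value are illustrative placeholders, not drawn from the research cited above.

    import torch
    import torch.nn as nn

    # Tiny stand-in classifier; in practice this would be the deployed model
    # whose robustness is being evaluated.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()
    loss_fn = nn.CrossEntropyLoss()

    def fgsm_example(image, label, epsilon=0.1):
        # Nudge every pixel in the direction that increases the loss,
        # producing an input that looks unchanged to a human but can
        # flip the model's prediction.
        image = image.clone().detach().requires_grad_(True)
        loss_fn(model(image), label).backward()
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    x = torch.rand(1, 1, 28, 28)   # placeholder 28x28 "image"
    y = torch.tensor([3])          # arbitrary true label
    x_adv = fgsm_example(x, y)
    print("before:", model(x).argmax(dim=1).item(),
          "after:", model(x_adv).argmax(dim=1).item())

Routinely probing defensive models with inputs like these is one way organisations can gauge how exposed they are to the adversarial techniques described above.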

The cumulative effect is alarming. A 2025 report from SoSafe found that a staggering 87% of global organisations experienced an AI-powered cyberattack in the past year. Coupled with projections estimating global cybercrime costs rising towards $10.5 trillion annually by 2025 and generative AI-enabled fraud losses soaring, it’s clear that AI is not just another tool, but a catalyst, dramatically escalating the scale and severity of the cyber threat landscape.

Implications of AI-Driven Cybercrime

The rise of AI-powered cyber threats creates consequences that extend far beyond the digital realm, casting long shadows over economies, societal trust, and individual security. As cybercriminals weaponise artificial intelligence, their attacks gain unprecedented sophistication and automation, causing a ripple
effect with profound implications.

1. Escalating Economic Damage: The economic toll of cybercrime, already staggering, is set to escalate dramatically with AI acting as an accelerant. Global cybercrime costs, estimated at $8 trillion in 2023, are projected to climb towards $10.5 trillion by 2025, partly fueled by AI’s ability to amplify the frequency and success rate of attacks. We see this starkly in the soaring losses from generative AI-enabled fraud, predicted to jump from $12.3 billion in 2023 to $40 billion by 2027 in the US alone, and in the astronomical projections for ransomware damages, potentially reaching $275 billion annually by 2031.

Furthermore, the potential for AI-generated disinformation or deepfakes to trigger market panic, as demonstrated by the fake Pentagon explosion image incident, highlights a disturbing new vector for economic destabilisation.

2. Erosion of Trust and Social Cohesion: Beyond the financial costs, AI-driven cybercrime strikes at the heart of social cohesion by eroding the foundations of trust. The proliferation of convincing deepfakes fundamentally challenges our ability to verify identity online. When the voice on the phone or the face on a video call could be an AI fabrication, trust in everyday communications, financial transactions, and even personal relationships becomes perilously fragile.

This is compounded by the threat of AI-powered disinformation campaigns, which can flood online spaces with tailored, believable falsehoods at scale, potentially
manipulating public opinion, interfering in democratic processes, and weakening trust in institutions like government and the media.

3. Intensified Security Challenges: For security professionals and organisations, AI introduces a new layer of complexity and intensifies existing challenges. The sheer speed and volume of AI-automated attacks threaten to overwhelm traditional defences and fatigue security teams. Moreover, AI tools democratise sophisticated attack techniques, lowering the barrier for less-skilled actors to launch potent operations, significantly broadening the threat landscape. This dynamic forces defenders into a costly and continuous arms race, necessitating the deployment of defensive AI simply to keep pace. The World Economic Forum aptly notes that cybercriminals leverage AI both as an attack vector and an attack surface, highlighting the dual nature of this challenge.

4. Heightened Individual Vulnerability: Ultimately, the burden of these amplified threats falls heavily on individuals. Hyper-personalised phishing scams crafted by AI and convincing deepfake impersonations dramatically increase the risk of personal financial loss and identity theft. AI can analyse breached data or social media profiles to tailor attacks with unnerving accuracy. The emotional and psychological toll of falling victim to such sophisticated deception, whether through a fake ransom demand featuring a loved one’s cloned voice or a meticulously crafted impersonation, can be devastating.

Strategies and Reforms to Counter AI Cybercrime

Facing a threat landscape supercharged by artificial intelligence requires a multi-faceted response. While AI empowers criminals, it also offers powerful tools
for defenders. Combating this next generation of cybercrime effectively demands a combination of cutting-edge technological defences, robust organisational
practices, proactive governance, and international cooperation. Waiting for attacks to happen is no longer viable; resilience requires proactive and adaptive
strategies at every level.

Defence Strategies for Organisations

Organisations are on the front lines and must adopt layered, intelligent security postures. Key strategies include:

Leveraging AI for Defence: The principle of “using AI to fight AI” is paramount. Modern security tools increasingly employ machine learning for enhanced threat detection, analysing user behaviour (UEBA), spotting network anomalies, and automating threat hunting. AI can process vast security data streams, correlating events and identifying subtle indicators of compromise far faster than human analysts; for instance, Microsoft’s Security Copilot assists analysts using trillions of daily security signals. AI-powered email security filters learn to detect sophisticated, AI-crafted phishing attempts, while AI-enhanced endpoint protection focuses on behavioural analysis to stop adaptive malware before it executes.
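
As a simplified illustration of the anomaly-detection approach described above, the Python sketch below trains an unsupervised model on historical login events and flags an outlier. The features and figures are invented for the example and do not come from any product named in this article.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical features per login event: hour of day, MB transferred,
    # and number of failed attempts beforehand.
    rng = np.random.default_rng(0)
    normal_logins = rng.normal(loc=[10, 50, 0], scale=[2, 10, 0.5], size=(500, 3))
    suspicious_login = np.array([[3, 900, 6]])  # 3 a.m., huge transfer, many failures

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_logins)

    # predict() returns -1 for outliers; score_samples() gives a raw
    # anomaly score (lower means more anomalous).
    print(detector.predict(suspicious_login))
    print(detector.score_samples(suspicious_login))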

Adopting Zero Trust Architecture (ZTA): The old model of trusting everything inside the network perimeter is obsolete. A Zero Trust approach, based on the principle of “never trust, always verify,” is crucial. This involves continuously authenticating users and devices, implementing network segmentation to limit lateral movement if a breach occurs, and enforcing least privilege access so compromised accounts have minimal reach. This framework helps contain breaches quickly, even against fast-moving AI-driven attacks.
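
A minimal sketch of the “never trust, always verify” idea appears below. The segment names and resource labels are hypothetical, and a real deployment would rely on an identity provider and a policy engine rather than an in-memory table.

    from dataclasses import dataclass

    @dataclass
    class RequestContext:
        mfa_session_valid: bool   # recent, MFA-backed authentication
        device_compliant: bool    # managed, patched, disk-encrypted
        network_segment: str      # where the request originates
        resource: str             # what it is trying to reach

    # Hypothetical least-privilege policy: which segments may reach which resources.
    ALLOWED = {
        ("finance-segment", "payments-api"),
        ("corp-segment", "intranet"),
    }

    def authorise(ctx: RequestContext) -> bool:
        # Every request is re-verified: identity, device posture, then
        # segment-level least privilege. Nothing is trusted by default.
        if not (ctx.mfa_session_valid and ctx.device_compliant):
            return False
        return (ctx.network_segment, ctx.resource) in ALLOWED

    print(authorise(RequestContext(True, True, "finance-segment", "payments-api")))   # True
    print(authorise(RequestContext(True, False, "finance-segment", "payments-api")))  # False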

Enhancing Authentication and Verification: Given AI’s prowess at impersonation, strengthening identity checks is vital. Multi-factor authentication (MFA) should be standard practice wherever possible. Crucially, for high-risk actions like financial transfers, organisations must implement strict out-of-band verification protocols requiring confirmation via a separate, trusted channel and training staff to rigorously apply these checks, even when requests appear urgent or originate from senior executives.
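
For illustration only, the sketch below uses the pyotp library to show how a second, out-of-band factor might gate a high-risk payment instruction; in practice the secret would be provisioned to the approver’s authenticator app during enrolment rather than generated inline, and the amount and function names are hypothetical.

    import pyotp

    # Enrolment step (normally done once, via a QR code on a trusted device).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    def approve_payment(amount: float, code_from_separate_device: str) -> bool:
        # The transfer proceeds only if the approver supplies a valid one-time
        # code from a channel other than the one carrying the request.
        if not totp.verify(code_from_separate_device):
            print(f"Blocked: could not verify out-of-band code for ${amount:,.2f}")
            return False
        print(f"Approved: ${amount:,.2f} released after out-of-band verification")
        return True

    approve_payment(25_000.00, totp.now())   # valid code -> approved
    approve_payment(25_000.00, "000000")     # invalid code -> blocked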

Strengthening the Human Firewall: Technology alone is insufficient; human vigilance remains critical. Comprehensive security awareness training must educate employees about AI-specific threats like deepfakes and hyper-personalised phishing. Fostering a culture where employees feel empowered to question suspicious requests and verify instructions through secondary channels is essential. Regular, realistic phishing simulations can reinforce this learning.

Planning for Rapid Incident Response: Speed is critical when dealing with AI-accelerated attacks, making tools for Security Orchestration, Automation, and Response (SOAR) valuable for streamlining workflows. Developing and routinely testing incident response plans that specifically incorporate AI threat scenarios, alongside robust backup and recovery strategies (including offline backups immune to ransomware), is fundamental for resilience.

Bridging the Gaps in Legal Frameworks and Policies

While organisations strengthen their defences, systemic change requires addressing significant gaps in law and policy that currently hinder efforts to deter
and prosecute AI-driven cybercrime.

Addressing Outdated Legislation: A primary challenge lies in outdated legislation. Many existing cybercrime laws were written before AI-enabled attacks like deepfake impersonation were feasible, creating ambiguity in how such crimes are prosecuted. This necessitates urgent updates to national and international legal frameworks to explicitly criminalise the malicious use of specific AI technologies. Encouragingly, some jurisdictions are beginning to propose laws targeting illicit deepfakes, an effort that needs acceleration and global harmonisation.

Regulating AI Weaponisation: There’s a concerning lack of regulation surrounding the weaponisation of AI and the distribution of potentially malicious AI tools. Open-source models can be easily repurposed for crime with little accountability for creators or distributors. Addressing this requires developing clear legal standards and potentially implementing controls for high-risk AI systems, alongside promoting responsible AI development practices within the industry.

Implementing Oversight on Synthetic Media: The proliferation of difficult-to-detect synthetic media also demands attention. The anonymity afforded by unlabeled AI-generated content enables widespread deception. Implementing policies that encourage or mandate detectable watermarks or metadata tags for synthetic content, particularly in sensitive contexts like news or finance, could significantly curb misuse. Initiatives like the EU’s AI Act, which includes disclosure requirements for deepfakes, point towards potential global standards.

Overcoming Jurisdictional Hurdles: AI-driven cybercrime frequently transcends borders, creating significant jurisdictional and enforcement hurdles. Attack attribution and international prosecution remain difficult. Therefore, strengthening international cooperation and updating agreements like the Budapest Convention on Cybercrime to specifically cover AI modus operandi is critical. Establishing global governance mechanisms and norms for AI use, particularly by state actors, and facilitating rapid information sharing between national law enforcement agencies are essential steps.

Closing Capability Gaps: Finally, there is a pressing need to close capability gaps within law enforcement and critical infrastructure sectors. Many agencies lack the specific training and forensic tools to effectively investigate AI-driven threats. Significant investment is required in training personnel and funding research and development for AI forensics, including better deepfake detection tools. Concurrently, developing and enforcing AI-specific cybersecurity guidelines for critical sectors like finance, healthcare, and energy, potentially aligned with broader regulations like the EU’s NIS2 directive, is vital for national security.

Conclusion

AI-driven cybercrime represents far more than an incremental increase in digital threats; it marks the arrival of a new era in global security challenges. As
we’ve seen, artificial intelligence is rapidly being weaponised, enabling criminals to launch attacks of unprecedented sophistication, scale, and adaptability. The implications are profound, threatening not only economic stability but also social cohesion and individual safety.

Yet, this daunting challenge is not insurmountable. The same AI technologies fueling these advanced threats also offer powerful tools for cybersecurity professionals. The path forward lies in a concerted, multi-layered response: harnessing AI for defence and developing agile legal and policy frameworks that can keep pace with technological change.

The cat-and-mouse game of cybersecurity has reached a new tempo with the infusion of AI. Complacency is the greatest enemy. Maintaining trust and security in our increasingly digital world demands continuous adaptation, proactive defence, and a shared resolve to ensure that artificial intelligence serves progress and protection, not predation.

Amina Allison is a cybersecurity specialist, legal strategist, and Program Manager and Data Protection Officer at the Cybersecurity Education Initiative (CYSED). She can be contacted via [email protected] and on LinkedIn at https://www.linkedin.com/in/amina-allison-shallangwa



Views expressed by contributors are strictly personal and not of TheCable.
