
AI fuels cybercrime: why security gaps are growing


Today, when anyone with a single text prompt can generate a ransomware toolkit, impersonate a Fortune 500 employee, or run a fully AI-driven extortion campaign, malicious actors are no longer limited by their skill level or operational scale. Anthropic’s latest threat intelligence report shows how artificial intelligence has become a major force multiplier for cybercrime, cutting the time attackers need to execute an attack from weeks to hours and helping even low-skilled actors bypass traditional defenses.

As a result, the security gap keeps widening. Most organizations still rely on prevention tools designed to stop yesterday’s attacks, while adversaries use AI to slip past these security layers entirely.

Register for our webinar “Next-Generation Cybersecurity: Utilizing AI in Vectra AI, CrowdStrike, and Cloudflare Platforms”, where our experts will explore how AI is applied in these platforms and review current trends in AI adoption for organizational cybersecurity.

KEY TAKEAWAYS FROM THE ANTHROPIC REPORT

Agentic AI systems are becoming weaponized

One of the most striking conclusions of the Anthropic report is that AI models are no longer passive assistants: they are becoming active operators in the attack chain. In a campaign researchers dubbed ‘vibe hacking’, a single cybercriminal used AI to launch and scale an extortion operation. The AI did not just suggest commands; it executed them directly:

  • Automated reconnaissance: scanning thousands of VPN endpoints to identify vulnerable systems at scale.
  • Credential theft and lateral movement: extracting logins, enumerating Active Directory environments, and escalating privileges to penetrate deeper into victims’ networks.
  • Defense evasion: when detection occurred, generating obfuscated malware variants that masqueraded as legitimate Microsoft files.
  • Data exfiltration and extortion: extracting confidential records, analyzing them to set ransom amounts, and writing psychologically targeted notes that threatened to publicize the data and notify regulators.

In just one month, seventeen organizations were attacked, including government agencies, medical facilities, and emergency services. Ransom demands ranged from $75,000 to over $500,000, with each note customized to maximize pressure on the victim.
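Defenders can turn that same reconnaissance step into a detection signal. The sketch below is purely illustrative (the log format, ten-minute window, and 50-endpoint threshold are assumptions, not taken from the report or any product): it flags source IPs that contact an unusually large number of distinct VPN endpoints in a short window, the kind of mass scanning described above.

```python
from collections import defaultdict

def scan_alerts(conn_logs, window_s=600, endpoint_threshold=50):
    """Flag sources that contact many distinct VPN endpoints quickly.

    conn_logs: iterable of (timestamp, src_ip, dst_endpoint) tuples,
    e.g. parsed from VPN gateway or firewall logs.
    Window and threshold are illustrative; tune against your baseline.
    """
    events_by_src = defaultdict(list)
    for ts, src, dst in sorted(conn_logs):       # order by timestamp
        events_by_src[src].append((ts, dst))

    alerts = []
    for src, events in events_by_src.items():
        start = 0
        counts = defaultdict(int)                # endpoint -> hits in window
        for ts, dst in events:
            counts[dst] += 1
            # Slide the window start past events older than window_s.
            while events[start][0] < ts - window_s:
                old = events[start][1]
                counts[old] -= 1
                if counts[old] == 0:
                    del counts[old]
                start += 1
            if len(counts) >= endpoint_threshold:
                alerts.append((src, ts, len(counts)))
                break                            # one alert per source
    return alerts
```

A rate-based check like this would not have stopped the campaign on its own, but it illustrates the point the report keeps returning to: the attack steps themselves leave observable traces.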

This fundamentally changes the economics of attacks: a single attacker armed with agentic AI can now deliver the same impact as a coordinated group of cybercriminals. Operations that previously required weeks of planning and deep technical expertise can be compressed into hours. The assumption that ‘complex attacks require sophisticated adversaries’ no longer holds.


AI lowers the barrier to sophisticated cybercrime

Another key theme from Anthropic’s report is how AI makes advanced attack methods accessible to people with minimal technical skills or none at all.

Traditionally, creating ransomware required deep knowledge of cryptography, Windows internals, and obfuscation techniques. In this case, however, a malicious actor from the UK with limited technical capabilities created and sold fully functional ‘Ransomware as a Service’ (RaaS) packages with the help of AI.

The actor relied on AI to implement cryptography (ChaCha20 encryption, RSA key management) and Windows API calls, capabilities far beyond his personal skill level. AI also helped integrate EDR-evasion techniques such as syscall manipulation (FreshyCalls, RecycledGate) and string obfuscation. Packaged as a polished service offering, the malware was then distributed on criminal forums and sold to even less skilled cybercriminals.

This example illustrates how AI democratizes access to sophisticated cybercrime: developing complex malware is no longer the preserve of highly skilled developers, because the barriers of time, training, and expertise have been removed. By lowering the entry threshold, AI sharply increases the volume and variety of ransomware attacks that organizations may face.


Cybercriminals integrate AI into their operations

The Anthropic report shows that cybercriminals are using AI not just for individual tasks—they are integrating it into the very structure of their daily operations.

A vivid example is the North Korean operatives who posed as software developers at Western tech companies. In this scheme, AI played an important role at every stage of the operation:

  • Identity creation: The criminals used AI to generate professional resumes and technical portfolios that withstood rigorous scrutiny.
  • Application and interview: AI tailored cover letters to job openings and provided real-time assistance during coding skill assessments.
  • Employment support: After hiring, the criminals relied on AI to supply code, respond to code merge requests, and communicate in English, masking technical and cultural gaps.
  • Revenue generation: With AI supplying the knowledge they lacked, these operatives could hold multiple positions simultaneously, funding North Korea’s weapons program.

This case is about more than fraud; it shows AI becoming an operational foundation. Technical competence is no longer a prerequisite for landing high-paying jobs in sensitive industries, and AI enables malicious insiders to hold their positions for extended periods. State-sponsored groups can scale this activity because AI replaces years of technical education with skills available on demand.


AI is used at all stages of fraudulent operations

Fraud has always depended on scale and speed, and AI amplifies both. The Anthropic report reveals how cybercriminals apply AI at every stage of the fraud supply chain, transforming isolated scams into sustainable, industrialized ecosystems.

Attackers have applied AI across the fraud supply chain:

  • Data analysis and victim profiling: parsing massive log dumps and creating detailed profiles for precise targeting.
  • Infrastructure development: building AI-based platforms with automated API switching and enterprise-level failover mechanisms.
  • Credential fabrication: generating synthetic credentials with plausible personal data, allowing fraudsters to bypass banking and credit checks.
  • Emotional manipulation: powering romance-scam chatbots with advanced, emotionally intelligent responses.

AI no longer just aids fraud; it orchestrates it. Attackers can automate every stage of an operation, and fraudulent campaigns become more scalable, more adaptive, and harder to detect. Even low-skilled attackers can stand up fraudulent services that look like professional platforms.

THE WIDENING SECURITY GAP

The Anthropic report clearly shows that AI not only makes adversaries smarter – it makes them faster and more elusive. And this speed reveals gaps in current security stacks.

  • Traditional tools can’t keep up. Endpoint protection, multi-factor authentication, and email security are bypassed, not broken: adversaries use AI to sidestep these controls by generating new malware variants or impersonating employees.
  • Attack timelines are collapsing. One AI operator can do the work of an entire team; reconnaissance, lateral movement, and ransomware deployment now happen in hours, not weeks.
  • Complexity no longer equals expertise. AI enables even unskilled actors to create advanced ransomware.

This is the critical security gap: prevention tools defend against known techniques, while AI-powered adversaries exploit the blind spots between them. And if SOC teams cannot see the behaviors AI cannot hide (privilege escalation, anomalous access, lateral movement, data staging), the compromise goes unnoticed.
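As a concrete illustration of catching one of those behaviors, here is a toy baseline check a SOC might run over access logs. The data model, field names, and threshold are assumptions invented for this example and do not describe any vendor’s detection logic: it learns which hosts each account normally touches, then flags access to hosts outside that baseline.

```python
from collections import defaultdict

def build_baseline(history):
    """history: iterable of (account, host) pairs from a training period."""
    baseline = defaultdict(set)
    for account, host in history:
        baseline[account].add(host)
    return baseline

def anomalous_access(baseline, new_events, min_known_hosts=3):
    """Flag events where an account touches a host absent from its baseline.

    Accounts with very small baselines are skipped to reduce noisy alerts;
    the cutoff of 3 is an illustrative assumption.
    """
    alerts = []
    for account, host in new_events:
        known = baseline.get(account, set())
        if len(known) >= min_known_hosts and host not in known:
            alerts.append((account, host))
    return alerts

if __name__ == "__main__":
    history = [("svc-backup", h) for h in ("db1", "db2", "nas1")]
    baseline = build_baseline(history)
    # A backup service account suddenly touching a domain controller is
    # exactly the kind of behavioral signal the text describes.
    print(anomalous_access(baseline, [("svc-backup", "dc1")]))
```

The point of the sketch is the shift in mindset: instead of asking “is this file known malware?”, it asks “is this behavior normal for this identity?”, a question AI-generated tooling cannot easily evade.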

iIT Distribution is the exclusive distributor of advanced Vectra AI solutions in Ukraine, as well as in Kazakhstan, Uzbekistan, Georgia, Kyrgyzstan, Moldova, and Tajikistan.

BRIDGING THE GAP WITH VECTRA AI SOLUTIONS

For business leaders, the conclusion is simple: if attackers use AI to scale up compromise, your SOC team needs AI to scale up detection and response. This is where Vectra AI provides significant value.

Vectra AI detects what is moving inside the network. Whether it is AI-generated ransomware, a compromised employee account, or covert data theft, Vectra AI focuses on the attacker behaviors that cannot be hidden: privilege escalation, lateral movement, and data exfiltration. The platform also provides hybrid coverage, simultaneously monitoring identity systems, cloud environments, SaaS applications, and network traffic to eliminate blind spots. And through AI-driven prioritization, it correlates detections and surfaces the most pressing threats first.
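To make the prioritization idea tangible, here is a deliberately simplified sketch; the category weights, multiplier, and scoring formula are invented for illustration and are not Vectra AI’s actual model. It combines per-entity detections into one urgency score, boosting entities that exhibit several distinct attack behaviors, since correlated behaviors suggest an active intrusion rather than isolated noise.

```python
from collections import defaultdict

# Hypothetical severity weights per detection category (illustrative only).
WEIGHTS = {
    "privilege_escalation": 9,
    "lateral_movement": 8,
    "data_staging": 7,
    "anomalous_access": 5,
}

def prioritize(detections):
    """detections: iterable of (entity, category) pairs.

    Returns entities sorted by a combined urgency score; several distinct
    attack behaviors on one entity raise its score sharply, because they
    correlate into a likely active intrusion.
    """
    score = defaultdict(float)
    seen = defaultdict(set)
    for entity, category in detections:
        score[entity] += WEIGHTS.get(category, 3)
        seen[entity].add(category)
    for entity, cats in seen.items():
        if len(cats) > 1:                       # correlated behaviors
            score[entity] *= 1 + 0.5 * (len(cats) - 1)
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    events = [
        ("host-12", "anomalous_access"),
        ("host-12", "lateral_movement"),
        ("host-12", "privilege_escalation"),
        ("host-40", "anomalous_access"),
    ]
    for entity, s in prioritize(events):
        print(f"{entity}: urgency {s:.1f}")
```

Running the example puts host-12 far above host-40, mirroring how correlated behaviors on a single entity should jump the analyst’s triage queue.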

The business value lies in reducing the risk of financial loss and reputational damage while giving security teams confidence that they can detect a compromise even when attackers use AI to bypass preventive defenses.
