AI vs AI: CrowdStrike’s Response- image 1

AI vs AI: CrowdStrike’s Response

The article is also available at:
Ukrainian, Russian

Recently, the Anthropic Threat Intelligence team detected and successfully thwarted a sophisticated attack by a state-funded malicious organization, which utilized the agent capabilities of the Claude model and the Model Context Protocol to automate cyberattacks on numerous targets worldwide. This AI-based attack automated intelligence gathering, exploitation of vulnerabilities, lateral movement, and much more across multiple victim environments – all on an unprecedented scale and with great speed.

“The cybersecurity community must operate under the premise that a fundamental shift has occurred: security teams should experiment with applying AI for defense in areas such as SOC automation, threat detection, vulnerability assessment, and incident response. In addition, they should develop experience in what works in their specific environments.”

The message is clear: to defeat adversaries using AI, companies must leverage the same mechanisms against the attackers themselves.

AI vs AI: CrowdStrike’s Response - image 1
THE AI ERA FOR MALICIOUS ACTORS

We are now witnessing a watershed moment in cybersecurity. Historically, attackers had to manually initiate and confirm each phase of an attack, including conducting reconnaissance scans, discovering vulnerabilities, crafting exploits, executing intrusions, and launching data exfiltration. This limitation restrained the speed and scale of attacks. But with agent-based AI now employed for hostile actions, these restrictions are shattered:

“The criminal, the operator, tasked Claude Code to work in groups as autonomous agent-conductors for penetration testing. The attacker was able to use AI to perform 80-90% of tactical operations independently – physically, such a speed of requests would be impossible.”

This impressive figure should be a warning to the entire industry. It is not a gradual improvement – it is an exponential transformation. Defenders who rely on human-speed responses in this new AI era will find themselves outpaced and outsmarted.

Register for our webinar “Next-Generation Cybersecurity: Leveraging AI in Vectra AI, CrowdStrike, and Cloudflare Platforms“, where our experts will discuss the approach to using AI and identify current trends in the implementation of AI in organizational cyber protection.

NEW PACE OF ATTACK WITH FAMILIAR TECHNIQUES

Despite the revolutionary agent-driven mechanism that allowed for unprecedented speed and scale in this attack, the main methods and tools of cybercrime proved to be surprisingly traditional. The attacker did not develop exotic zero-day exploits, but rather the opposite:

“The operational infrastructure primarily relied on open source penetration testing tools rather than the development of specialized malware. The core technical toolkit consisted of standard security utilities, including network scanners, database exploitation frameworks, password cracking programs, and binary analysis suites.”

This is both reassuring and alarming. Reassuring because existing detection and response strategies and tools retain their relevance and value. For example, network scanners will still generate recognizable traffic patterns, even if they are run by an agent. TTP methods (tactics, techniques, and procedures) that SOC teams have learned to detect over the years have not become obsolete overnight. However, traditional defense technologies and strategies will only be relevant in this battle if they are bolstered by machine automation capable of keeping pace with the attacker and countering it through alert verification, sorting, orchestration, and AI-based response.

WORDS ARE WEAPONS: THE THREAT OF INTRODUCING PROMPTS

One of the most important aspects of this attack is how the attackers initially began collaborating with Claude. They did not exploit a software vulnerability or bypass authentication controls, but instead used prompt injection to circumvent the model’s defences:

“The key was role-playing: the attackers claimed to be employees of legitimate cybersecurity firms and convinced Claude that they were using it for cybersecurity penetration testing.”

Prompt injection, whereby attackers use instructions in queries to elicit malicious or undesirable behaviour from a model, is the number one risk in the OWASP Top 10 Risks for LLM applications, as it represents a potential ‘front door’ into corporate AI systems that requires robust protection.

CrowdStrike maintains the industry’s most comprehensive prompting taxonomy through its acquisition of Pangea, tracking over 150 different techniques. Enterprises that build and deploy their own AI systems must also recognise that these systems can be manipulated and weaponised through prompting.

Traditional security controls such as firewalls, antivirus software, and access controls do not protect against attackers who can successfully persuade an AI system to return information or perform an action contrary to its original purpose and limitations.

This requires a new set of security controls specifically designed for AI systems: to detect prompt injection, verify context, filter output, and monitor AI interaction behaviour. Enterprises must implement protective barriers that verify the legitimacy of requests, confirm that AI actions are consistent with authorised use cases, and detect manipulation of the AI system to perform unauthorised actions.

The attack surface has expanded to the semantic level. We have spent decades protecting endpoints, applications, networks, credentials, and cloud environments – now it’s time to protect this.

iIT Distribution is a distributor of CrowdStrike solutions in Ukraine, Eastern Europe, Central Asia, and the Baltics. CrowdStrike is an industry leader in cybersecurity, actively implementing advanced artificial intelligence technologies in its solutions.

THREAT DETECTION WITH AGENT SOC

The concept of an agent-based SOC does not entail replacing security analysts but aims to enhance their capabilities with automation that matches those used by adversaries. CrowdStrike customers already leverage the AI-powered capabilities of the CrowdStrike Falcon platform to be more effective in protection. The CrowdStrike security analyst Charlotte AI acts as a force multiplier for protection for SOC teams and provides autonomous threat detection, investigation, and response capabilities. These operate at the speed and scale necessary to counter adversaries using AI.

For example, the CrowdStrike Charlotte Agentic SOAR is enhanced with AI-based decision-making capabilities and can automatically execute response playbooks immediately after threats are detected. When the AI agent attempts lateral movement, CrowdStrike can automatically isolate the affected endpoint, stop malicious processes, and contain the threat before the adversary’s automated actions switch to the next target.

Clients using Charlotte AI and the AI-powered Falcon platform capabilities report significant improvements in the average time to detection and response, often reducing investigative time from hours to minutes. This, in turn, enables automatic responses to threats that would previously have required manual analyst intervention.

CORPORATE AI SYSTEMS: NEW CRITICAL INFRASTRUCTURE

While AI-based defences are essential to counter attackers using AI, defenders must also recognise that corporate AI systems are themselves vulnerable targets. As employees use AI tools for productivity and organisations rapidly deploy their own AI software and infrastructure, they create new attack surfaces that require specialised security controls, including:

  • Protection against prompting:
  1. Implement input validation and sanitisation for all queries sent to AI systems.
  2. Deploy prompt detection that identifies attempts to manipulate AI behaviour.
  3. Establish clear boundaries for permissible actions for AI systems and implement technical controls that prevent unauthorised actions, regardless of how compellingly they are requested.
  • Context verification and authorisation:
  1. Ensure that interactions with AI occur within legitimate business contexts.
  2. Implement authentication and authorisation controls that verify the identity of users and ensure that their requests are consistent with their authorised roles and responsibilities.
  • Filter and monitor input and output data:
  1. Monitor the input and output data of the AI system for leaks of confidential information, malicious or other harmful content;
  2. Implement filtering mechanisms that prevent the AI system from generating dangerous results, even if required;
  3. Log all interactions with the AI for security monitoring and forensic analysis.
  • Secure AI development lifecycle:
  1. Integrate security into AI system development from the outset;
  2. Conduct threat modelling for AI applications, perform security testing, including adversarial testing, implement real-time controls, and keep track of all AI systems deployed in your environment.

The Falcon platform provides comprehensive protection for the secure use and deployment of enterprise AI, based on AI detection and response. This is a turning point in cybersecurity, and the motto is now clear: to defeat attackers who use AI, companies must use the same mechanisms against the attackers themselves. Security teams that use AI-based protection and work hard to ensure the secure use and deployment of AI in their organisations will be well equipped to face this new cyber world.

News

Current news on your topic

All news
AI vs AI: CrowdStrike’s Response
CrowdStrike News
AI vs AI: CrowdStrike’s Response
All news