Veeam Completes Acquisition of Securiti AI to Create the Industry’s First Trusted Data Platform for Accelerating Safe AI at Scale

View

The Anthropic Exploit: Welcome to the Era of AI Agent Attacks

Author

Chris Joynt

Director Product Marketing at Securiti

This post is also available in: Arabic

Executive Summary

The successful exploitation of a frontier model heralds the arrival of the Era of AI Agent Attacks, proving that threat actors can now bypass model guardrails and orchestrate highly automated complex cyber-espionage at machine speed.

The speed and scale of AI Agent attacks make your current perimeter defenses obsolete.

Organizations must develop new strategies to both protect data from AI Agent attacks and prevent their own AI agents from being exploited. To do so, we argue that organizations must immediately abandon the obsolete "outside-in" security model and pivot to scalable, integrated Data and AISecurity—building defense from the inside out.

This requires three strategic pillars:

  • Sensitive Data Intelligence - Comprehensive intelligence about your most treasured asset.
  • Rock solid Data Security Posture Management (DSPM) across your entire enterprise data estate
  • Fine-grained AI Runtime Controls that live outside the AI model and provide system-level protection

Securiti is the partner providing the foundational architecture—via Sensitive Data Intelligence, DataCommandGraph, and Gencore Firewalls—to deliver the necessary granularity of intelligence and control at enterprise scale, ensuring that as your AI adoption grows, your defenses are already ahead of the threat.

The Theoretical Just Became Real

For years, the cybersecurity community has imagined the potential of AI attacks. We hypothesized about agent-based threats and scaled-up attack operations in the abstract.  Security researchers probed AI models and systems to identify vulnerabilities that were primarily academic exercises. But now, with the recent disclosure of the successful exploitation of Claude Code, an AI agent popular among developers to build software, we can confirm that the Era of AI Agent Attacks is upon us. The academic exercises are now real. The hypothetical scenarios are now a critical case study to learn from.

The exploit of Claude Code will no doubt go down as a watershed moment in technology history—the first major, documented case of a threat actor successfully weaponizing a frontier model to conduct a large-scale attack requiring only nominal human guidance to execute. Someone, possibly state actors, was able to use the Claude Code as an orchestration system to automate nearly all aspects of attack operations. Operations were orchestrated over multiple stages, and the attackers were able to profile, probe, infiltrate, move laterally, escalate privileges, and exfiltrate data from an undisclosed number of high-value targets in infrastructure and government that could lead to future vulnerabilities.

Once Anthropic identified the suspicious activity and investigated further, they immediately banned the accounts responsible and notified the affected organizations while conducting a thorough analysis. Kudos to Anthropic for handling the matter professionally and sharing their findings with the public.

But there’s no mistaking it - this attack changes the game.  What’s notable about the attack is that while agents were utilized to orchestrate and execute attack operations, the tools the agents used were simple. This proves that AI-based attacks need not rely on novel exploits when overwhelming speed and scale can be so effective. Your defense in depth layers are no longer good enough - Cyber defense strategies must evolve. A piecemeal approach to securing data and AI system components will no longer suffice. AI is too fast and too relentless, working 24/7 with unwavering commitment to its goals.

In the era of AI agent attacks, any viable approach to protecting an organization’s most valuable asset (data) must begin with fine-grained intelligence about the assets you’re protecting. Other approaches will leave you with blind spots that can be exploited.  Viable approaches must be holistic and enterprise in scope as well - those that rely on siloed data security and AI security tools will create gaps.

 

What Happened?

A Brief Anatomy of the First AI Agent Attack Framework

The attack itself has been detailed extensively elsewhere. For clarity's sake, we will briefly summarize it here.

The attackers built their framework, which required them to build automation around the MCP servers selected for their integration. Then came the task of “jailbreaking” Claude Code. Jailbreaking is the art of getting a model to ignore the safeguards that would normally block “bad” outputs. In this case, the attackers posed as a legitimate cybersecurity firm conducting penetration testing. They also broke down tasks into seemingly innocuous subtasks that, when viewed individually, successfully masked their malicious intent.  This initial phase represented the bulk of manual effort.

With an attack framework built and Claude Code successfully jailbroken, the attackers then used Claude Code as an orchestration engine to execute sub-tasks across reconnaissance, vulnerability testing, credential harvesting, lateral movement, data extraction, and documentation. They even had Attack Agent automatically categorize extracted data and privileged account credentials by sensitivity and utility for a smooth handoff to other attack teams for sustained operations.

Credit: Anthropic Blog

 

This is a radically different model than a human using code generation capabilities to “vibe hack” or consulting an AI for advice during an otherwise human-driven attack campaign. The framework monitored the state of multiple parallel attack sequences, transitioned phases with minimal human intervention, and aggregated results across multiple sessions. According to Anthropic, “Peak activity included thousands of requests, representing sustained request rates of multiple operations per second.” It is worth noting that the framework leveraged standard open source tools for network scanning, penetration testing, code analysis, etc. This demonstrates that the speed and scale of AI can make attack frameworks very effective without reliance on novel exploits. It also suggests that similar attacks may proliferate.

 

Why This Attack Changes Everything

This attack makes real what security professionals have been warning about since the advent of enterprise agents. Technical innovation in the past few years has only improved AI Agents' ability to conduct various tasks, follow complex instructions, maintain state over multi-step processes, and make decisions to achieve an end goal. Meanwhile, standards like MCP have arisen to standardize how models use tools. Now, MCP servers exist to help agents interact with the outside world to gather information and execute tasks. Agents can automate browser activity, retrieve data, execute remote commands, and manipulate various systems. The same technical innovations that have given Agents greater utility have made them effective for cyber attacks.

Simple techniques jailbroke Claude Code. Simple tools used at machine speed and scale made it an effective weapon. Threat actors with limited resources will now be able to conduct attack campaigns that once required nation-state-level coordination.

This attack represents an inflection point.

Cyber leaders must now devise strategies to protect their valuable assets from AI Agent attacks and also take measures to secure their own Agents from being weaponized. 3 things that will no longer work in the era of Agent Attacks:

  • Reliance on internal model safeguards as a primary means of securing in-house AI systems themselves without controls external to the model.
  • Reliance on layers of defense that stop at creating a perimeter around data systems without file-level intelligence of the data inside the system.
  • Reliance on reactive security models that don’t proactively reduce attack surface and bolster security posture.

 

What Could Securiti Have Done to Reduce Risk in a Similar Scenario?

Our DataAI Security capabilities could mitigate or outright prevent many aspects of an AI Agent attack like this.

In a scenario where an attacker is using AI Agents to compromise your data, our platform would help to:

  • Gain visibility into vulnerable shadow assets.
  • Identify overpermissioned human and machine identities that multiply the reach of agentic attacks.
  • Identify toxic combinations of factors that cause risk when found together, like sensitive data being accessible by unintended identities or system misconfigurations allowing unauthenticated access to data.
  • Reduce potential attack surface through proactive least privilege access enforcement and removal of unnecessary, obsolete, or redundant data.
  • Integrate with various cybersecurity tools to enrich security operations with granular data intelligence.

If an AI Agent were to get in through outer layers of defense, it would find itself in a Least Privilege environment with far fewer opportunities to move laterally, escalate privileges, or access sensitive data.

Our AI runtime guardrails can help protect AI systems themselves from exploitation. There is no reason to believe that Claude is any less safe than any other frontier model. That said, there are additional controls that are necessary to protect your AI systems at runtime. In a scenario where an attacker attempted to jailbreak a model you were using and use the MCP tools it had access to in a malicious manner, our platform could help to:

  • Monitor and enforce policy against all relevant AI events, including prompts, outputs, and retrieval
  • Scan for and block jailbreak attempts
  • Quickly alert security analysts in the event of a jailbreak attempt
  • Prevent misuse of AI by restricting usage to approved topics aligned with the intended usage of the system

Securiti deploys AI runtime guardrails to protect the use of AI models by scanning all interactions between users, models, and tools. An attacker would find it much more difficult to jailbreak a model protected by Securiti, where prompts are scanned specifically for jailbreak attempts and any activity deemed off-topic or not in policy for that particular AI system before that prompt ever reaches the model. Even if an attacker was able to somehow still jailbreak the model itself, it would run into another hurdle of trying to use MCP tools in a malicious way.

 

The Challenge of Protecting AI Systems

 

Securing AI Systems Requires Much More Than Model Security

 

The old paradigm of focusing on the security of the AI model itself will not work as organizations experiment and productionize agentic frameworks. Traditional model-focused security comes from a bygone era when models were narrow and addressing vulnerabilities of and access to the model itself would suffice. Modern Agents, due to their broad general capabilities, are incredibly flexible and introduce new vulnerabilities, and can be manipulated in ways that can be quite subtle.

Security leaders must expand the scope from model-focused security to the entire AI system. This is not only a major expansion in scope for security, but it also entails shifting to security and governance of unstructured data (up to 90% of organizational data), expanding their user base, and monitoring and enforcing policy, all while new AI vulnerabilities like prompt injection and new regulations create additional requirements.

Hardly a week goes by without a new report demonstrating the ease with which frontier models can be broken. Recently, researchers discovered that disguising their attacks as "adversarial poetry” was good enough to bypass model safeguards 62% of the time, and even tested AI-generated adversarial poetry with great success.

The challenge is evolving quickly enough to keep pace with AI adoption. AI Security quickly becomes an “everything everywhere all at once” problem for which there is no feasible way to solve without automation.

 

The Challenge of Protecting Sensitive Data from AI-Powered Attacks

The key challenge of protecting sensitive data in the enterprise is the sheer scale and complexity of the modern data estate. The granularity of intelligence needed must be at the atomic level. It is not sufficient to read some headers or metadata and classify a text file. Sensitive data can be buried deep in unstructured content. The contents of every file must be scanned, classified, labeled, contextualized, and mapped. Doing this accurately across clouds and data centers, various file types, and sensitive data types is a daunting task.

Automation is needed to provide the baseline of intelligence upon which all DataAI security will rely.

3 Key Pillars of DataAI Security for the Era of AI Agent Attacks

Securiti enables safe use of data and AI by operationalizing DataAI security. Organizations must first have a deep and comprehensive understanding of the sensitive data that they need to protect.  Next, they must understand the complex relationships between elements, including access control and policy.

 

Why Sensitive Data Intelligence?

“You can’t protect what you can’t see” rings true in light of this attack, as the speed and scale of Agentic attacks can exploit any blind spot. Comprehensive discovery and visibility of data assets is, of course, a requirement to maintain a security posture.

However, a common approach to cybersecurity is to build defense-in-depth “layers” that stop at securing the data system itself, with no intelligence about what that data system might contain. Such outside-in approaches are insufficient in this era as they are essentially roadblocks that “buy time” to respond to a human-driven attack. Due to the lack of intelligence, this approach also treats all data equally.  AI Agent Attacks don’t work that way - they prioritize high-value sensitive data and operate at machine speed.

Lastly, Data Loss Prevention (DLP) tools, which traditionally have been a key building block of these layers, fail in the AI era. They offer no protection against prompt injection or jailbreak attempts and offer only generic definitions of what constitutes sensitive data, such as PII, with little flexibility to adapt to an organization’s unique data. AI Agent attacks can easily find sensitive data not covered under DLP policy, obfuscate data, or use non-traditional channels for exfiltration that DLP does not cover.

Modern data security starts with knowing exactly where sensitive data lives and whether it’s safe for AI use. Securiti’s sensitive data intelligence delivers this visibility from day one by applying hundreds of pre-trained, AI-powered classifiers that accurately detect regulated and personal data, as well as custom classifiers for detecting proprietary and confidential documents at scale.

With AI-driven tuning, teams can easily improve classification efficacy without re-scanning data, eliminating months of iterative exercises and manual tuning effort, dramatically reducing false positives and false negatives. This accelerates time-to-insight and gives security teams a reliable, context-rich foundation to determine which datasets are safe for AI workloads and which require additional controls.

The result: fast, trustworthy, and scalable sensitive data context that empowers organizations to securely unlock AI innovation.

 

Example: Securiti's classification of sensitive data is AI-powered and gives users the ability to tune classifiers to their data for maximum accuracy in classifying sensitive data of various types.

 

Why DSPM with DataCommandGraph?

Our holistic approach to DSPM emphasizes deep and comprehensive sensitive data intelligence across the enterprise, powered by our DataCommandGraph. Comprehensive discovery, classification, labeling, and mapping of all DataAI assets is a prerequisite to safe use of data and AI. Securiti’s graph-based approach automatically discovers how elements of your data estate (users, policies, regulations, systems, etc) are related and what risk those relationships create for your data estate.

By visualizing these relationships, our platform helps quickly identify data assets that combine elements of data risk, such as sensitivity, overpermissioning, and/or misconfigured security settings for prioritized remediation. Redundant and obsolete data that has exceeded retention requirements and can be targeted for minimization. DataCommandGraph will provide rich data access intelligence and remediation, to enforce least privilege access and sanitize data before it’s ingested to provide safe data on an ongoing basis. Closing these gaps in Data security lays the foundation for Safe AI usage and protects data in the event of an AI Agent Attack.    

Example: DataCommandGraph analyzes the access control configuration and shows relationships between human and machine accounts and their access to data within your environment. This intelligence helps proactively mitigate over-permission accounts, which could be exploited by an AI Agent attacker or an innocent AI query.

 

Why Gencore Firewalls for AI Runtime Guardrails?

Additional AI runtime guardrails are necessary to prevent jailbreaks and other attacks on AI systems. Reliance on the frontier model safeguards is insufficient for enterprise AI security.  Models are, by nature, opaque and require a great deal of resources to train or fine-tune. Today’s frontier models are trained on huge swaths of data from the public internet before being further trained to recognize “bad” outputs. These safeguards do not perform well when they come across real-world examples that they have not seen in training, creating the cat-and-mouse game of jailbreaking. These safeguards are also generic, not specific to an organization’s needs.

Safeguards that are internal to the model do not offer enterprises the control and flexibility they need. Gencore firewalls are deployed outside the AI model, giving organizations the ability to independently configure and enforce policies specific to their AI and security needs across models and systems. Gartner’s TRiSM AI security framework explicitly recommends "Enterprises must retain independence from any single AI model or hosting provider to ensure scalability, flexibility, cost control, and trust, as AI markets rapidly mature and change."

By intercepting input before it reaches a model, it gives organizations the ability to do things like redact sensitive data before exposure to a 3rd party model (ChatGPT famously keeps all data input as prompts). Alternatively, in the event malicious instructions are detected, an alert can be generated for security teams to. One of the most powerful functions of Gencore Firewalls is the ability to restrict AI activity to a set of given topics, preventing misuse. For example, if a user were to ask an AI system restricted to providing customer support, questions about authentication methods for internal systems, that prompt could be blocked.

The breadth of policy, ease of configuration, and deployment flexibility of Gencore Firewall make it scalable to the enterprise, where the proliferation of users, models, tools, and data sources creates complexity at an exponential rate as AI scales.

Lastly, by placing a firewall at the output stage, an extra layer of protection can be configured to do things like redact or mask sensitive data outputs. That way, in the event of a successful jailbreak, attackers still would fail in retrieving sensitive data. An insider who knew the system well and navigated well would still log a policy violation that would draw scrutiny. 

 

Example: Securiti’s Gencore Firewalls allow an organization to configure policy and deploy outside the model, exerting control on the flow of data into their own AI systems. Prompt injection attacks or jailbreak attempts are flagged before they ever reach their model.

 

Protecting Your Critical Assets in the Era of AI Agent Attacks

What’s needed in this new era is an integrated security paradigm that prioritizes comprehensive sensitive data intelligence as a pre-requisite, maps relationships between all objects across the data estate, and puts a layer of control on the AI system outside the AI model, and scales to meet the needs of the enterprise across complex data and technology landscapes.

Securiti has built exactly that with our integrated DataCommandGraph and Gencore Firewalls solutions. Securiti is built for accuracy, scale, and the ability to adapt to your enterprise data. Our AI-powered system uses unique classifiers for many types of sensitive data and a 5-step pipeline to extract signals from raw data for accuracy. One graph will support all DataAI sec use cases at a massive scale, and our AI-powered system adapts to your data- learning relationships, tuning classification, and generating new policies without any code being written or data being transferred.

Long before the current wave of AI, Securiti began preparing for this moment, architecting our system to meet the challenges of Agentic AI.

 

The Anthropic attack wasn't a warning shot; it was the opening salvo. We have definitely moved from a theoretical threat landscape to a practical one. The "era of the AI agent attack" is here.

Success in this new era will not be defined by building taller walls, but by building smarter, more integrated defenses. It requires a fundamental shift to a unified DataAI security posture that protects the entire lifecycle of data and its interaction with AI. Securiti is uniquely built to deliver the intelligence, flexibility, and scalability your organization needs.

Analyze this article with AI

Prompts open in third-party AI tools.
Join Our Newsletter

Get all the latest information, law updates and more delivered to your inbox


Share

More Stories that May Interest You
Videos
View More
Mitigating OWASP Top 10 for LLM Applications 2025
Generative AI (GenAI) has transformed how enterprises operate, scale, and grow. There’s an AI application for every purpose, from increasing employee productivity to streamlining...
View More
Top 6 DSPM Use Cases
With the advent of Generative AI (GenAI), data has become more dynamic. New data is generated faster than ever, transmitted to various systems, applications,...
View More
Colorado Privacy Act (CPA)
What is the Colorado Privacy Act? The CPA is a comprehensive privacy law signed on July 7, 2021. It established new standards for personal...
View More
Securiti for Copilot in SaaS
Accelerate Copilot Adoption Securely & Confidently Organizations are eager to adopt Microsoft 365 Copilot for increased productivity and efficiency. However, security concerns like data...
View More
Top 10 Considerations for Safely Using Unstructured Data with GenAI
A staggering 90% of an organization's data is unstructured. This data is rapidly being used to fuel GenAI applications like chatbots and AI search....
View More
Gencore AI: Building Safe, Enterprise-grade AI Systems in Minutes
As enterprises adopt generative AI, data and AI teams face numerous hurdles: securely connecting unstructured and structured data sources, maintaining proper controls and governance,...
View More
Navigating CPRA: Key Insights for Businesses
What is CPRA? The California Privacy Rights Act (CPRA) is California's state legislation aimed at protecting residents' digital privacy. It became effective on January...
View More
Navigating the Shift: Transitioning to PCI DSS v4.0
What is PCI DSS? PCI DSS (Payment Card Industry Data Security Standard) is a set of security standards to ensure safe processing, storage, and...
View More
Securing Data+AI : Playbook for Trust, Risk, and Security Management (TRiSM)
AI's growing security risks have 48% of global CISOs alarmed. Join this keynote to learn about a practical playbook for enabling AI Trust, Risk,...
AWS Startup Showcase Cybersecurity Governance With Generative AI View More
AWS Startup Showcase Cybersecurity Governance With Generative AI
Balancing Innovation and Governance with Generative AI Generative AI has the potential to disrupt all aspects of business, with powerful new capabilities. However, with...

Spotlight Talks

Spotlight 50:52
From Data to Deployment: Safeguarding Enterprise AI with Security and Governance
Watch Now View
Spotlight 11:29
Not Hype — Dye & Durham’s Analytics Head Shows What AI at Work Really Looks Like
Not Hype — Dye & Durham’s Analytics Head Shows What AI at Work Really Looks Like
Watch Now View
Spotlight 11:18
Rewiring Real Estate Finance — How Walker & Dunlop Is Giving Its $135B Portfolio a Data-First Refresh
Watch Now View
Spotlight 13:38
Accelerating Miracles — How Sanofi is Embedding AI to Significantly Reduce Drug Development Timelines
Sanofi Thumbnail
Watch Now View
Spotlight 10:35
There’s Been a Material Shift in the Data Center of Gravity
Watch Now View
Spotlight 14:21
AI Governance Is Much More than Technology Risk Mitigation
AI Governance Is Much More than Technology Risk Mitigation
Watch Now View
Spotlight 12:!3
You Can’t Build Pipelines, Warehouses, or AI Platforms Without Business Knowledge
Watch Now View
Spotlight 47:42
Cybersecurity – Where Leaders are Buying, Building, and Partnering
Rehan Jalil
Watch Now View
Spotlight 27:29
Building Safe AI with Databricks and Gencore
Rehan Jalil
Watch Now View
Spotlight 46:02
Building Safe Enterprise AI: A Practical Roadmap
Watch Now View
Latest
View More
DataAI Security: Why Healthcare Organizations Choose Securiti
Discover why healthcare organizations trust Securiti for Data & AI Security. Learn key blockers, five proven advantages, and what safe data innovation makes possible.
View More
The Anthropic Exploit: Welcome to the Era of AI Agent Attacks
Explore the first AI agent attack, why it changes everything, and how DataAI Security pillars like Intelligence, CommandGraph, and Firewalls protect sensitive data.
View More
Aligning Your AI Systems With GDPR: What You Need to Know
Securiti’s latest blog walks you through all the important information and guidance you need to ensure your AI systems are compliant with GDPR requirements.
Network Security: Definition, Challenges, & Best Practices View More
Network Security: Definition, Challenges, & Best Practices
Discover what network security is, how it works, types, benefits, and best practices. Learn why network security is core to having a strong data...
View More
Data & AI Security Challenges in the Credit Reporting Industry
Explore key data and AI security challenges facing credit bureaus—PII exposure, model risk, data accuracy, access governance, AI bias, and compliance with FCRA, GDPR,...
EU AI Act: What Changes Now vs What Starts in 2026 View More
EU AI Act: What Changes Now vs What Starts in 2026
Understand the EU AI Act rollout—what obligations apply now, what phases in by 2026, and how providers and deployers should prepare for risk tiers,...
View More
Solution Brief: Microsoft Purview + Securiti
Extend Microsoft Purview with Securiti to discover, classify, and reduce data & AI risk across hybrid environments with continuous monitoring and automated remediation. Learn...
Top 7 Data & AI Security Trends 2026 View More
Top 7 Data & AI Security Trends 2026
Discover the top 7 Data & AI security trends for 2026. Learn how to secure AI agents, govern data, manage risk, and scale AI...
View More
Navigating HITRUST: A Guide to Certification
Securiti's eBook is a practical guide to HITRUST certification, covering everything from choosing i1 vs r2 and scope systems to managing CAPs & planning...
The DSPM Architect’s Handbook View More
The DSPM Architect’s Handbook: Building an Enterprise-Ready Data+AI Security Program
Get certified in DSPM. Learn to architect a DSPM solution, operationalize data and AI security, apply enterprise best practices, and enable secure AI adoption...
What's
New