In the AI ecosystem, a clearer division of responsibility is emerging between model creators and the organizations that deploy their models. Technology providers such as Anthropic focus on the security of the models themselves – designing them according to explicit principles, testing them for undesirable behaviors (red-teaming), and constraining their capabilities before releasing them to users.
Equally important, however, and in practice far more complex, is how AI is used in the enterprise environment. Responsibility for how models are used in practice, what permissions they hold, what data they access, and what actions they perform lies with the organization itself and with security solution providers. This covers controlling access to information, monitoring the activity of AI agents, and enforcing security policies across the IT environment.
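To make this concrete, below is a minimal sketch of how an organization might place a policy gate between an AI agent and its tools, combining an access check with audit logging. The policy rules, roles, the `read_document` tool, and the audit sink are illustrative assumptions, not part of any specific product or vendor API.

```python
# Minimal sketch: a policy gate between an AI agent and its tools.
# Roles, policy rules, and the audit sink are illustrative assumptions.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent_audit")

@dataclass
class AgentContext:
    agent_id: str
    role: str  # e.g. "support_agent", "finance_agent"

# Hypothetical policy: which roles may call which tools on which data classes.
POLICY = {
    "support_agent": {"read_document": {"public", "internal"}},
    "finance_agent": {"read_document": {"public", "internal", "confidential"}},
}

def enforce_and_call(ctx: AgentContext, tool_name: str, data_class: str, tool_fn, *args):
    """Check the agent's permissions, log the attempt, and only then run the tool."""
    allowed = data_class in POLICY.get(ctx.role, {}).get(tool_name, set())
    audit_log.info("agent=%s role=%s tool=%s data_class=%s allowed=%s",
                   ctx.agent_id, ctx.role, tool_name, data_class, allowed)
    if not allowed:
        raise PermissionError(f"{ctx.role} may not call {tool_name} on {data_class} data")
    return tool_fn(*args)

# Stand-in tool for demonstration.
def read_document(doc_id: str) -> str:
    return f"contents of {doc_id}"

ctx = AgentContext(agent_id="agent-42", role="support_agent")
print(enforce_and_call(ctx, "read_document", "internal", read_document, "doc-7"))
```

The point of the sketch is the placement of the control: the check and the audit record live in the organization's infrastructure, outside the model, which is exactly the layer of responsibility discussed here.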
It is this second area that poses the greatest challenge today. Even the most advanced and secure model can become a risk if it operates in a business environment without appropriate oversight while having access to sensitive data and critical business systems.