The Cribl report places particular emphasis on security: telemetry data pipelines that feed AI systems are becoming a new and attractive attack surface for adversaries.
- One of the report's most important conclusions is the forecast of a sharp increase in observability tooling costs. According to Cribl, by 2027 over one-third of enterprises will spend more than 15% of their IT operations budget on monitoring and observability, compared with a typical 3–7% today.
The drivers of this increase include:
- cloud-native environments and microservices architectures,
- a growing number of distributed systems,
- AI deployments generating huge volumes of logs, metrics, and traces.
The report makes clear that the earlier approach of “collect everything and store it without limits” is unsustainable. The proposed solution is data tiering: differentiating data intelligently according to its business and operational value.
Critical data used in real time should go to high-performance observability platforms. Historical and less operationally important data can be stored safely in tiers such as object storage or security data lakes, while remaining available for audits, incident analysis, and regulatory compliance.
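The tiering idea above can be sketched as a small routing function. This is a minimal illustration only: the event fields, severity values, and tier names (`hot`, `warm`, `cold`) are assumptions, not anything the Cribl report prescribes.

```python
from dataclasses import dataclass

@dataclass
class TelemetryEvent:
    source: str
    severity: str   # e.g. "critical", "error", "info", "debug"
    age_days: int   # how old the event is

def route_event(event: TelemetryEvent) -> str:
    """Route an event to a storage tier by operational value (illustrative rules)."""
    if event.severity in ("critical", "error") and event.age_days == 0:
        return "hot"    # real-time observability platform
    if event.age_days <= 30:
        return "warm"   # searchable object storage
    return "cold"       # security data lake for audits/compliance

print(route_event(TelemetryEvent("api-gw", "critical", 0)))  # hot
print(route_event(TelemetryEvent("api-gw", "info", 7)))      # warm
print(route_event(TelemetryEvent("api-gw", "debug", 120)))   # cold
```

In practice the routing rules would be far richer (source, compliance tags, cost budgets), but the principle is the same: value decides the destination, not a one-size-fits-all store.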
- Predictions indicate that in 2026, 20% of Fortune 2000 companies will experience significant security incidents originating from manipulated or compromised telemetry data used by AI systems.
Particularly high risk comes from closed, opaque data pipelines that remove human oversight of what information reaches AI models. The report unequivocally recommends keeping a human in the loop, supported by full visibility, metrics, and monitoring of data flows.
- One of the report’s forecasts suggests that by 2027, as much as 90% of AI production deployments will fail to meet business expectations mainly due to data architecture constraints, not the AI models themselves.
Agentic AI environments mean thousands of autonomous agents generating and consuming data continuously and in parallel. Without an intermediary layer that can scale with demand, normalize data, and control its flow, organizations risk bottlenecks, instability, and system failures.
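A minimal sketch of such an intermediary layer, assuming a simple in-process design: heterogeneous agent records are normalized into one schema, and a bounded queue applies backpressure instead of letting producers overwhelm consumers. All field names and the queue size are illustrative.

```python
import queue

# Bounded buffer: when full, ingest fails fast instead of growing without limit.
BUFFER = queue.Queue(maxsize=1000)

def normalize(raw: dict) -> dict:
    """Map agent-specific field names onto a common schema."""
    return {
        "agent": raw.get("agent") or raw.get("agent_id", "unknown"),
        "kind": raw.get("kind") or raw.get("type", "event"),
        "payload": raw.get("payload", {}),
    }

def ingest(raw: dict, timeout: float = 0.1) -> bool:
    """Enqueue a normalized record; False signals backpressure to the producer."""
    try:
        BUFFER.put(normalize(raw), timeout=timeout)
        return True
    except queue.Full:
        return False

# Two agents emitting different shapes land in one consistent form.
ingest({"agent_id": "planner-7", "type": "observation", "payload": {"step": 3}})
ingest({"agent": "executor-2", "kind": "action"})
print(BUFFER.get()["agent"])  # planner-7
```

A production system would use a distributed broker rather than an in-process queue, but the two responsibilities shown here, normalization and flow control, are exactly what the report argues agentic workloads cannot do without.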
- The report also points to a significant market change: by 2027, 15% of organizations will switch their main security or observability tool provider not because of functionality, but due to AI ecosystem requirements.
More vendors are building closed AI platforms based on:
- limited API interfaces,
- strong ties of AI agents to a single ecosystem.
In the short term this may simplify deployments, but in the long term it leads to vendor lock-in and a loss of technological flexibility. The alternative is data independence: separating the data layer from analytics and AI tools.
Open telemetry pipelines and neutral data repositories allow organizations to maintain control, experiment with new AI models, and change tools without the need to rebuild the architecture’s foundations.
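One way to express that separation in code is to make producers depend only on a neutral sink interface, so the backing analytics or AI tool can be swapped without rebuilding the pipeline. A sketch, with all class and method names as illustrative assumptions:

```python
from typing import Protocol

class TelemetrySink(Protocol):
    """Neutral interface: the pipeline knows nothing about concrete backends."""
    def write(self, record: dict) -> None: ...

class ObjectStoreSink:
    """Stand-in for a vendor-neutral repository (e.g. object storage)."""
    def __init__(self) -> None:
        self.records: list[dict] = []

    def write(self, record: dict) -> None:
        self.records.append(record)

def pipeline(records: list[dict], sink: TelemetrySink) -> None:
    """Ship records to whatever sink is plugged in."""
    for r in records:
        sink.write(r)

sink = ObjectStoreSink()
pipeline([{"msg": "login"}, {"msg": "logout"}], sink)
print(len(sink.records))  # 2
```

Replacing `ObjectStoreSink` with a sink for a different vendor, or a new AI model's ingestion endpoint, changes one class, not the architecture, which is the flexibility the report is arguing for.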
- The Cribl report also highlights a crucial yet often overlooked macroeconomic factor affecting AI development. Forecasts suggest that in the coming years, issues in the private credit market could significantly slow the pace of data center expansion used for training and running AI systems.
Limited availability of funding for large infrastructure projects could slow the growth of computational power and storage capacity. As a result, organizations will not be able to base AI development solely on continual infrastructure expansion and will need to use their existing resources more efficiently.
The report indicates that a data-focused approach will become crucial: reducing redundant telemetry, processing information selectively, and managing data flows intelligently. In practice, it will be data architecture, not the scale of infrastructure, that becomes the main factor enabling AI-based solutions to develop and scale further.