According to a new press release, Dynatrace has introduced AI-powered data observability capabilities for its analytics and automation platform. Named Dynatrace Data Observability, the feature aims to improve the reliability and accuracy of the data the Dynatrace platform uses for business analytics, cloud orchestration, and automation. It gives teams confidence that the platform's Davis AI engine is working from high-quality, trustworthy data, and it complements the platform's existing data cleansing and enrichment capabilities by monitoring data from external sources such as OpenTelemetry and custom instrumentation. By tracking freshness, volume, distribution, schema, lineage, and availability, data observability reduces the need for additional data cleansing tools.
Organizations increasingly rely on high-quality data for strategic decision-making, process optimization, and automation, but the scale and complexity of data from modern cloud ecosystems and open-source tooling make data quality hard to maintain. Data observability techniques, such as those offered by Dynatrace, aim to improve data availability and reliability throughout the data lifecycle. Gartner predicts that by 2026, 30% of enterprises with distributed data architectures will have adopted data observability techniques, up from less than 5% in 2023. Working in conjunction with the platform's AI capabilities, Dynatrace Data Observability monitors data freshness, volume, distribution, schema changes, lineage, and availability.
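To make the monitored dimensions concrete, here is a minimal, hypothetical sketch of what freshness, volume, and schema checks on an incoming record batch might look like. This is not Dynatrace's API; all names and thresholds below are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

# Illustrative record batch; in a real pipeline these might arrive from an
# external source such as an OpenTelemetry exporter (field names are hypothetical).
batch = [
    {"ts": datetime.now(timezone.utc) - timedelta(minutes=2), "value": 10.5},
    {"ts": datetime.now(timezone.utc) - timedelta(minutes=1), "value": 11.0},
]

EXPECTED_SCHEMA = {"ts", "value"}       # schema check: required fields
MAX_STALENESS = timedelta(minutes=15)   # freshness check: max age of newest record
MIN_BATCH_SIZE = 1                      # volume check: minimum expected record count

def check_batch(records):
    """Return simple data-observability signals for one batch of records."""
    issues = []
    # Volume: did we receive roughly as much data as expected?
    if len(records) < MIN_BATCH_SIZE:
        issues.append("volume: batch smaller than expected")
    # Schema: do records carry exactly the expected fields?
    for r in records:
        if set(r) != EXPECTED_SCHEMA:
            issues.append(f"schema: unexpected fields {set(r) ^ EXPECTED_SCHEMA}")
            break
    # Freshness: is the newest record recent enough?
    newest = max((r["ts"] for r in records), default=None)
    if newest is None or datetime.now(timezone.utc) - newest > MAX_STALENESS:
        issues.append("freshness: data older than staleness threshold")
    return {"healthy": not issues, "issues": issues}

print(check_batch(batch)["healthy"])  # True for the fresh, well-formed batch above
```

A production system would add distribution checks (e.g. drift in value statistics), lineage tracking, and availability monitoring, which is precisely the tooling a platform-level feature like this is meant to replace.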
Bernd Greifeneder, CTO at Dynatrace, emphasized the importance of data quality and reliability for organizations to innovate and comply with industry regulations. The addition of data observability to the platform aims to empower customers to harness data from various sources for analytics and automation, ensuring data health without the need for extra tools. The feature is expected to be generally available to all Dynatrace SaaS customers within 90 days of the announcement.