In today’s software development landscape, effective collaboration among teams and seamless service orchestration are essential, and achieving them requires adherence to organizational standards for quality, security, and compliance. Software delivery metrics and engineering KPIs are key tools for tracking and improving delivery processes: they help organizations measure performance, identify bottlenecks, and drive continuous improvement. Without diligent monitoring, organizations risk losing sight of their delivery workflows, making it harder to assess impacts on release velocity, stability, developer experience, and overall application performance.
To address these challenges, many organizations have begun tracking DevOps Research and Assessment (DORA) metrics. These metrics provide crucial insights for any team involved in software development, offering a comprehensive view of the Software Development Life Cycle (SDLC) and helping engineering managers and teams align software delivery with business goals and drive value for the entire business. DORA metrics are particularly useful for teams practicing DevOps methodologies, including Continuous Integration/Continuous Deployment (CI/CD) and Site Reliability Engineering (SRE), which focus on enhancing system reliability.
However, collecting and analyzing these metrics can be complex. Decisions about which data points to track and how to gather them often fall to individual team leaders, and turning the resulting data into actionable insights for engineering teams and leadership can be challenging. Aligning software delivery metrics with business goals ensures technology initiatives deliver real value; metrics should focus on outcomes rather than individual outputs.
Choosing the right software delivery metrics is crucial for making informed decisions that align with business objectives and improve customer experience.
The DORA research team at Google conducts annual surveys of IT professionals to gather insights into industry-wide software delivery practices. From these surveys, four key metrics have emerged as indicators of software teams’ performance, particularly regarding the speed and reliability of software deployment. Alongside cycle time, they help teams understand how efficiently they deliver value. The four key DORA metrics are:

1. Deployment frequency: how often code is deployed to production, which correlates with the speed of the engineering team.
2. Lead time for changes: how long a change takes to move from commit to running in production. Closely related to cycle time, it is a critical KPI because it reflects delivery speed and highlights bottlenecks.
3. Change failure rate: the percentage of deployments that cause a failure in production requiring remediation.
4. Time to restore services: how quickly teams recover from production failures.

DORA metrics connect production-based metrics with development-based metrics, providing quantitative measures that complement qualitative insights into engineering performance and helping teams understand their progress over time.
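To make these definitions concrete, here is a minimal Python sketch that computes all four metrics from two event streams, deployments and incidents. The field names (commit_at, deployed_at, caused_failure, detected_at, resolved_at) are illustrative assumptions rather than a prescribed schema:

```python
from datetime import datetime, timedelta

# Illustrative event data; field names are assumptions, not a prescribed schema.
deployments = [
    {"commit_at": datetime(2024, 5, 1, 9), "deployed_at": datetime(2024, 5, 1, 15), "caused_failure": False},
    {"commit_at": datetime(2024, 5, 2, 10), "deployed_at": datetime(2024, 5, 3, 11), "caused_failure": True},
    {"commit_at": datetime(2024, 5, 6, 8), "deployed_at": datetime(2024, 5, 6, 17), "caused_failure": False},
]
incidents = [
    {"detected_at": datetime(2024, 5, 3, 12), "resolved_at": datetime(2024, 5, 3, 14)},
]

period_days = 7

# Deployment frequency: deployments per day over the analysis window.
deployment_frequency = len(deployments) / period_days

# Lead time for changes: median time from commit to production.
lead_times = sorted(d["deployed_at"] - d["commit_at"] for d in deployments)
median_lead_time = lead_times[len(lead_times) // 2]

# Change failure rate: share of deployments that caused a production failure.
change_failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

# Time to restore services: mean time from detection to resolution.
restore_times = [i["resolved_at"] - i["detected_at"] for i in incidents]
mean_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

print(deployment_frequency, median_lead_time, change_failure_rate, mean_time_to_restore)
```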
These metrics cover two primary aspects: speed and stability. Deployment frequency and lead time for changes relate to throughput, while time to restore services and change failure rate address stability. A complementary measure, flow efficiency, tracks the percentage of total work time your team spends actively progressing tasks rather than waiting on dependencies, helping to identify bottlenecks and improve process speed.
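Flow efficiency itself reduces to a simple ratio of active time to total elapsed time. A minimal sketch, assuming hypothetical per-task time tracking:

```python
from datetime import timedelta

# Hypothetical per-task time tracking: active work vs. waiting on dependencies.
tasks = [
    {"active": timedelta(hours=6), "waiting": timedelta(hours=18)},
    {"active": timedelta(hours=10), "waiting": timedelta(hours=14)},
]

total_active = sum((t["active"] for t in tasks), timedelta())
total_elapsed = sum((t["active"] + t["waiting"] for t in tasks), timedelta())

# Flow efficiency: share of elapsed time spent actively progressing work.
flow_efficiency = total_active / total_elapsed  # 0.33 here; low values signal wait-state bottlenecks
print(f"Flow efficiency: {flow_efficiency:.0%}")
```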
Contrary to the historical view that speed and stability are opposing forces, DORA’s research indicates a strong correlation between these metrics in terms of overall performance. They also tend to correlate with key indicators of system success, such as availability, offering insights that benefit application performance, reliability, delivery workflows, and developer experience. In short, software delivery metrics help teams understand their overall engineering performance, track their progress, and identify areas for improvement.
While DORA DevOps metrics may seem straightforward, measuring them can involve ambiguity, forcing teams to make difficult decisions about which data points to use. Tracked well, delivery excellence metrics provide a holistic view of how the software delivery process is managed and executed, surface bottlenecks and areas for improvement, and supply concrete data to compare over time, encouraging continuous improvement and clarifying how efficiently teams deliver quality products and services.
Improving visibility into software delivery metrics can lead to better team productivity and development speed, especially when coordinating multiple teams. A culture of psychological safety and transparency enhances the effectiveness of software delivery metrics, ensuring that teams feel safe to share data and insights openly.
Below are guidelines and best practices to ensure accurate and actionable DORA metrics. Applied consistently, they help organizations collect reliable data, enabling engineering managers and teams to make informed decisions that drive continuous improvement and enhance software delivery performance.
Establishing a standardized process for monitoring DORA metrics can be complicated due to differing internal procedures and tools across teams, as well as the challenges of coordinating and managing multiple teams working on related projects. Effective team collaboration is crucial to ensure transparency, alignment, and consistent delivery across these teams. Clearly defining the scope of your analysis—whether for a specific department or a particular aspect of the delivery process—can simplify this effort. It’s essential to consider the type and amount of work involved in different analyses and standardize data points to align with team, departmental, or organizational goals.
Strong development processes also play a key role in enabling teams to recover efficiently from failures and ensure rapid incident response, minimizing downtime and maintaining robust software delivery.
For example, platform engineering teams focused on improving delivery workflows may prioritize metrics like deployment frequency and lead time for changes. In contrast, SRE teams focused on application stability might prioritize change failure rate and time to restore services. By scoping metrics to specific repositories, services, and teams, organizations can gain detailed insights that help prioritize impactful changes.
Best Practices for Defining Scope:
To maintain consistency in collecting DORA metrics, address the following questions:
1. What constitutes a successful deployment?
Establish clear criteria for what defines a successful deployment within your organization. Consider the different standards various teams might have regarding deployment stages. For instance, at what point do you consider a progressive release to be “executed”? Additionally, monitor for problematic code during deployments to prevent defective or unstable code from reaching production and to enhance software reliability.
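One way to remove ambiguity is to encode your deployment success criteria as an explicit predicate over deployment events. The fields and thresholds below are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    # Illustrative fields; adapt to your own pipeline events.
    pipeline_passed: bool
    rollout_percent: int      # progressive-release traffic share reached
    health_checks_passed: bool
    rolled_back: bool

def is_successful(d: Deployment, rollout_threshold: int = 100) -> bool:
    """One team's explicit definition of an 'executed' deployment:
    the pipeline succeeded, the progressive release reached the agreed
    traffic threshold, health checks passed, and no rollback occurred."""
    return (
        d.pipeline_passed
        and d.rollout_percent >= rollout_threshold
        and d.health_checks_passed
        and not d.rolled_back
    )

print(is_successful(Deployment(True, 100, True, False)))  # True
```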
2. What defines a failure or response?
Clarify definitions for system failures and incidents to ensure consistency in measuring change failure rates. Differentiate between incidents and failures based on factors such as application performance and service level objectives (SLOs). For example, consider whether to exclude infrastructure-related issues from DORA metrics.
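Such definitions can likewise be encoded as an explicit classification policy. The sketch below assumes hypothetical incident fields (category, slo_breached, linked_deploy) and shows one possible policy that excludes infrastructure issues:

```python
# Hypothetical incident records; the field names are assumptions.
incidents = [
    {"id": 1, "category": "application", "slo_breached": True, "linked_deploy": "d42"},
    {"id": 2, "category": "infrastructure", "slo_breached": True, "linked_deploy": None},
    {"id": 3, "category": "application", "slo_breached": False, "linked_deploy": "d43"},
]

def counts_as_change_failure(incident) -> bool:
    """One possible policy: only SLO-breaching application incidents that are
    traceable to a specific deployment count toward change failure rate;
    infrastructure-related issues are excluded."""
    return (
        incident["category"] == "application"
        and incident["slo_breached"]
        and incident["linked_deploy"] is not None
    )

failures = [i for i in incidents if counts_as_change_failure(i)]
print([i["id"] for i in failures])  # [1]
```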
3. When does an incident begin and end?
Determine relevant data points for measuring the start and resolution of incidents, which are critical for calculating time to restore services. Decide whether to measure from when an issue is detected, when an incident is created, or when a fix is deployed.
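The choice of start point materially changes the resulting metric. A small illustration with hypothetical timestamps:

```python
from datetime import datetime

# One incident with three candidate "start" timestamps (all illustrative).
incident = {
    "detected_at": datetime(2024, 5, 3, 12, 0),      # monitoring alert fired
    "created_at": datetime(2024, 5, 3, 12, 20),      # incident ticket opened
    "fix_deployed_at": datetime(2024, 5, 3, 13, 10), # remediation shipped
    "resolved_at": datetime(2024, 5, 3, 14, 0),      # service verified healthy
}

# The same incident yields different restore times depending on the
# agreed start point, so the definition must be standardized up front.
for start_field in ("detected_at", "created_at", "fix_deployed_at"):
    duration = incident["resolved_at"] - incident[start_field]
    print(f"time to restore from {start_field}: {duration}")
```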
4. What time spans should be used for analysis?
Select appropriate time frames for analyzing data, taking into account factors like organization size, the age of the technology stack, delivery methodology, and key performance indicators (KPIs). Adjust time spans to align with the frequency of deployments to ensure realistic and comprehensive metrics.
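For instance, a team deploying only a few times a month will see erratic weekly numbers that smooth out in monthly buckets. A quick sketch with illustrative dates:

```python
from collections import Counter
from datetime import date

# Illustrative deployment dates for a low-frequency team.
deploy_dates = [date(2024, 5, 2), date(2024, 5, 16), date(2024, 6, 4), date(2024, 6, 25)]

# Weekly buckets (ISO year + week) can look erratic for teams deploying a
# few times a month; monthly buckets smooth this into a comparable trend.
weekly = Counter(d.isocalendar()[:2] for d in deploy_dates)
monthly = Counter((d.year, d.month) for d in deploy_dates)

print("per ISO week:", dict(weekly))
print("per month:", dict(monthly))
```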
Best Practices for Standardizing Data Collection:
Effective software delivery is a cornerstone of success for modern organizations navigating the fast-paced world of software development. When engineering teams focus on optimizing their software development life cycle using key metrics such as deployment frequency, lead time, and cycle time, they unlock a range of benefits that extend beyond the development pipeline.
One of the most significant advantages is improved customer satisfaction. By delivering new features and updates more frequently and reliably, teams can respond quickly to user feedback and evolving market demands, resulting in higher user satisfaction and loyalty. Enhanced delivery speed and quality also mean that software development projects are more closely aligned with business objectives, ensuring that technology investments directly support strategic goals.
Increased team productivity is another major benefit. When teams leverage valuable insights from software development metrics, they can identify and eliminate bottlenecks, streamline workflows, and foster a culture of continuous improvement. This not only accelerates the development cycle but also boosts team morale and motivation, as engineers see the direct impact of their work on business outcomes.
Furthermore, effective software delivery leads to higher overall software quality. By continuously monitoring and refining processes, teams can reduce defects, improve code stability, and ensure that each release meets high standards. This focus on quality, combined with the ability to measure performance through key metrics, empowers organizations to deliver software that delights users and drives business growth.
Ultimately, effective software delivery is about more than just speed—it’s about delivering value. By embracing continuous improvement and aligning software development efforts with business goals, organizations can achieve greater efficiency, higher quality, and lasting customer satisfaction.
Before diving into improvements, it’s crucial to establish a baseline for your current CI/CD performance using DORA metrics. This involves gathering historical data to understand where your organization stands on deployment frequency, lead time, change failure rate, and mean time to restore (MTTR). Engineering managers rely on this data to track key software delivery metrics and to produce delivery forecast dates, which combine historical delivery patterns with current progress to provide realistic estimates of when a feature will land. This baseline serves as a reference point for measuring the impact of any changes you implement.
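A simple way to establish such a baseline is to take the median of each metric over a trailing window of historical periods. The sketch below uses hypothetical weekly figures:

```python
from statistics import median

# Hypothetical trailing weeks of history for one service:
# (deploys_per_week, median_lead_time_hours, failure_rate, mttr_hours)
weekly_history = [
    (3, 30.0, 0.20, 5.0), (4, 28.0, 0.25, 6.0), (2, 35.0, 0.00, 0.0),
    (5, 26.0, 0.20, 4.0), (3, 31.0, 0.33, 7.0), (4, 27.0, 0.00, 0.0),
]

# Baseline = median of each metric over the trailing window; later
# changes are judged against these reference values, not single weeks.
baseline = {
    "deployment_frequency": median(w[0] for w in weekly_history),
    "lead_time_hours": median(w[1] for w in weekly_history),
    "change_failure_rate": median(w[2] for w in weekly_history),
    "mttr_hours": median(w[3] for w in weekly_history),
}
print(baseline)
```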
Actionable Insights: Deployment frequency tells you how often your team ships code to production, serving as a key indicator of team performance and productivity. High deployment frequency isn’t just a sign of developer productivity; it means tighter feedback loops, lower risk per release, and a direct positive impact on customer experience by enabling faster response to user needs. If your deployment frequency is low, it may indicate issues with your CI/CD pipeline or development process. Investigate potential causes, such as manual steps in deployment, inefficient testing procedures, or coordination issues among team members.
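One lightweight way to surface such issues is to compare recent deployment counts against a prior window; the figures and the 25% drop threshold below are illustrative assumptions:

```python
# Deployments per week, oldest to newest (illustrative).
weekly_deploys = [6, 5, 6, 4, 3, 2, 2, 1]

# Compare the recent four weeks with the prior four: a sustained drop is
# a prompt to investigate the pipeline, not a verdict on any individual.
prior, recent = weekly_deploys[:4], weekly_deploys[4:]
prior_avg = sum(prior) / len(prior)
recent_avg = sum(recent) / len(recent)

if recent_avg < 0.75 * prior_avg:
    print(f"Deployment frequency fell from {prior_avg:.1f} to {recent_avg:.1f}/week; "
          "check for manual deploy steps, slow tests, or coordination overhead.")
```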
Strategies for Improvement:
Actionable Insights: Long lead time for changes often points to inefficiencies in the development process. Cycle time and lead time are critical for understanding how quickly work moves from idea to completion: lead time for changes in the DORA sense tracks a change from commit to successful deployment, while the broader lead time covers an increment of software from initial idea through to live deployment. By analyzing your CI/CD pipeline, you can identify delays caused by manual approval processes, inadequate testing, or other obstacles. Reducing lead time enhances value delivery by enabling faster, more reliable releases, which directly improves customer satisfaction and team motivation.
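Breaking lead time into pipeline stages makes the delays visible. The sketch below walks hypothetical timestamps for a single change and prints the duration of each stage; the largest stage is the bottleneck:

```python
from datetime import datetime

# Timestamps for one change moving through the pipeline (all illustrative).
change = {
    "first_commit": datetime(2024, 5, 1, 9, 0),
    "pr_opened": datetime(2024, 5, 1, 11, 0),
    "pr_approved": datetime(2024, 5, 3, 16, 0),  # long review wait
    "ci_passed": datetime(2024, 5, 3, 17, 0),
    "deployed": datetime(2024, 5, 6, 10, 0),     # waited for release window
}

# Break total lead time into stages; the largest stage is the bottleneck.
stages = list(change.items())
for (name_a, t_a), (name_b, t_b) in zip(stages, stages[1:]):
    print(f"{name_a} -> {name_b}: {t_b - t_a}")
```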
Strategies for Improvement:
Actionable Insights: A high change failure rate is a clear sign that the quality of code changes needs improvement, often due to inadequate testing or rushed deployments. Change failure rate measures the percentage of deployments that cause a failure in production requiring remediation, making it a key indicator of deployment quality and of the effectiveness of development and testing practices. Tracking quality metrics such as code coverage, code complexity, maintainability, and defect density, and identifying problematic code early, helps prevent failures, improve software reliability, and avoid significant financial losses from software failures.
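Computing change failure rate requires a policy for attributing incidents to deployments. The sketch below assumes a simple time-window attribution, which is a policy choice rather than a fixed rule:

```python
from datetime import datetime, timedelta

# Illustrative data: a deploy "causes" a failure if an incident starts
# within an agreed attribution window after it.
deploys = [datetime(2024, 5, 1, 15), datetime(2024, 5, 3, 11), datetime(2024, 5, 6, 17)]
incident_starts = [datetime(2024, 5, 3, 12)]
window = timedelta(hours=6)

def caused_failure(deploy_time: datetime) -> bool:
    return any(deploy_time <= s <= deploy_time + window for s in incident_starts)

failures = sum(caused_failure(d) for d in deploys)
print(f"Change failure rate: {failures / len(deploys):.0%}")  # 33%
```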
Strategies for Improvement:
Actionable Insights: If your MTTR is high, it suggests challenges in incident management and response capabilities. Strong development processes and the team’s ability to respond quickly are essential for minimizing MTTR and ensuring rapid recovery. Mean time to recovery measures how long it takes to recover from failure and reflects the quality of the engineering team’s processes; elite performers in DORA’s research typically restore service in under an hour. High MTTR leads to longer downtimes and reduced user trust.
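When computing MTTR, note that a single long outage can dominate the mean; reporting the median alongside it gives a more robust picture. A short sketch with illustrative restore times:

```python
from datetime import timedelta

# Illustrative restore times; one long outage dominates the mean.
restore_times = [timedelta(minutes=m) for m in (20, 25, 30, 35, 480)]

mean_mttr = sum(restore_times, timedelta()) / len(restore_times)
# Median is more robust to rare long outages than the mean.
median_mttr = sorted(restore_times)[len(restore_times) // 2]

print(f"mean: {mean_mttr}, median: {median_mttr}")
```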
Strategies for Improvement:
Despite the clear benefits, many teams encounter persistent challenges that can hinder their ability to deliver software efficiently and effectively. One of the most common obstacles is a lack of visibility into the software delivery process. Without clear insights into each stage of the development cycle, it becomes difficult for teams to pinpoint bottlenecks, measure performance, or make informed decisions that drive improvement.
Technical debt is another significant challenge. As software projects evolve, shortcuts and legacy code can accumulate, slowing down the development cycle and negatively impacting code quality. This can lead to increased maintenance costs and make it harder for teams to deliver new features or respond to issues quickly.
Test management also presents hurdles for many teams. Flaky tests and inadequate automated testing can undermine confidence in the deployment process, resulting in longer cycle times and increased risk of failures in production environments. Inefficient test management not only affects software quality but can also erode team morale, as developers spend more time troubleshooting test issues instead of building new functionality.
The complexity of modern software delivery, with its emphasis on continuous delivery and rapid deployment, can be overwhelming—especially for teams managing multiple projects or working across different tools and environments. Without the right software development metrics in place, it’s challenging to maintain delivery efficiency and ensure that the team’s progress aligns with business goals.
To overcome these challenges, it’s essential for development teams to focus on collecting and analyzing the right software development metrics, such as DORA metrics. By doing so, teams can gain valuable insights into their delivery process, identify areas for improvement, and implement changes that reduce failure rates, improve code quality, and enhance customer satisfaction. Addressing these common challenges head-on is key to building motivated teams, delivering high-quality software, and achieving continuous improvement in the software delivery process.
Utilizing DORA metrics is not a one-time activity but part of an ongoing process of continuous improvement, which is essential for maximizing customer satisfaction and loyalty. Establish a regular review cycle where teams assess their DORA metrics and adjust practices accordingly. This creates a culture of accountability and encourages teams to seek out ways to improve their CI/CD workflows continually.
Continuous improvement in software delivery requires a culture of learning, experimentation, psychological safety, and transparency. Engineering metrics, including DORA and other excellence metrics, are most effective when combined with this supportive culture, improving team productivity and providing concrete data to compare over time as teams identify areas for growth.
Etsy, an online marketplace, adopted DORA metrics to assess and enhance its CI/CD workflows. By focusing on deployment frequency and lead time for changes, Etsy increased deployment frequency from once a week to multiple times a day, significantly improving responsiveness to customer needs. Elite teams are characterized by high deployment frequency and a strong focus on delivering a superior customer experience, setting a benchmark for operational excellence in software delivery.
Flickr used DORA metrics to track its change failure rate. By implementing rigorous automated testing and post-mortem analysis, Flickr reduced its change failure rate significantly, leading to a more stable production environment. Closely monitoring change failure rate alongside quality metrics such as code coverage and maintainability allowed Flickr to further enhance deployment quality and sustain production stability.
Google’s Site Reliability Engineering (SRE) teams utilize DORA metrics to inform their practices. By focusing on MTTR, Google has established an industry-leading incident response culture, resulting in rapid recovery from outages and high service reliability. Engineering managers play a key role in this process by leveraging strong development processes and fostering effective team collaboration, which together enable quick incident response and minimize downtime.
Typo is a powerful tool designed specifically for tracking and analyzing DORA metrics. It provides an efficient solution for development teams seeking precision in their DevOps performance measurement.
