"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." – Stephen Hawking
Thus, rather than forming opinions based solely on what we at Hivel think, let’s also turn to the co-creator of DORA metrics, the 2024 DORA report, and the GitHub survey.
This combined perspective will give the discussion a strong foundation and help eliminate any bias.
Perspective #1:
What does the co-creator of DORA think of the relevance of DORA metrics in an era of AI coding?
Host: “These days, many Software Engineers are using coding assistants. How has it changed your thinking about developer productivity and experience?”
Nicole Forsgren (Co-creator of DORA and SPACE): “With AI, I am getting so many questions around how we can improve the way developers work, how we make sure they don’t distract from their work, [and] does it impact SPACE and DORA metrics? [big pause] So, quickly say, DORA metrics in general won’t change. [small pause] And in terms of SPACE, I think it is still incredibly applicable because we wanna know how satisfied developers are, their performance outcomes like quality and security, and how many things they are able to get done. I think efficiency and flow are super important.”
Perspective #2:
What does the 2024 DORA Report reveal about DORA metrics and AI coding?
The 2024 DORA report comprehensively studies the role of AI in software engineering, mainly its impact on DORA metrics and software delivery performance.
As expected, the findings reveal that AI is significantly enhancing developer productivity and code quality. But at the same time, the report also highlights challenges such as trust in AI-generated code and negative impacts on delivery performance.
The following are the key findings with regard to DORA and AI coding.
1. Measuring AI adoption in software development
The DORA team found that 75.9% of developers rely on AI in one or more of their daily professional responsibilities.
Further findings reveal that the two most adopted use cases around AI in software development are code writing and summarizing information.
Though 87.9% of respondents reported at least some trust in the quality of AI-generated code, their degree of trust was often low: 39.2% reported little (27.3%) or no (11.9%) trust at all.
2. AI’s positive impacts on developer productivity
The report reveals that if developers increase AI adoption by 25%, it will lead to a 2.1% increase in productivity, a 2.2% increase in job satisfaction, and a 2.6% increase in flow.

75% of developers report an increase in their productivity from using AI in their coding tasks, and 67% say AI has improved their ability to write code.
Additionally, the report further outlines that if developers increase AI adoption by 25%, code review speed, code quality, and documentation quality are likely to increase by 3.1%, 3.4%, and 7.5%, respectively.

This affects DORA’s Lead Time for Changes (LTC) metric, as AI speeds up coding and review cycles.
3. AI’s negative impacts on software delivery performance
Both throughput and stability suffer when AI is heavily used.
The report finds that a 25% increase in AI adoption is likely to result in a 1.5% drop in delivery throughput and a 7.2% drop in delivery stability.
This impacts key DORA metrics directly.
• Change Lead Time: AI-generated code may require additional review.
• Deployment Frequency: AI-assisted development does not necessarily translate into more frequent deployments.
• Failed Deployment Recovery Time: heavier AI usage is associated with longer incident resolution times.
• Change Failure Rate: AI-generated code may introduce more failures.
• Rework Rate: post-deployment debugging and modification become more likely.
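To make these metrics concrete, here is a minimal sketch of how the four core DORA metrics can be computed from deployment records. The data model and field names are hypothetical, not a real tool's schema:

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records: commit time, deploy time,
# whether the deploy failed, and when service was restored if it did.
deployments = [
    {"committed": datetime(2024, 5, 1, 9), "deployed": datetime(2024, 5, 1, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 3, 11),
     "failed": True, "restored": datetime(2024, 5, 3, 13)},
    {"committed": datetime(2024, 5, 4, 8), "deployed": datetime(2024, 5, 4, 20),
     "failed": False, "restored": None},
]

def dora_metrics(deploys, window_days=7):
    # Deployment Frequency: deploys per day over the observation window
    frequency = len(deploys) / window_days
    # Lead Time for Changes: median commit-to-deploy time, in hours
    lead_time_h = median(
        (d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deploys
    )
    failures = [d for d in deploys if d["failed"]]
    # Change Failure Rate: share of deploys that caused a failure
    cfr = len(failures) / len(deploys)
    # Failed Deployment Recovery Time: median deploy-to-restore time, in hours
    recovery_h = median(
        (d["restored"] - d["deployed"]).total_seconds() / 3600 for d in failures
    ) if failures else 0.0
    return frequency, lead_time_h, cfr, recovery_h

freq, lead, cfr, mttr = dora_metrics(deployments)
print(f"deploys/day={freq:.2f}, lead={lead:.1f}h, CFR={cfr:.0%}, recovery={mttr:.1f}h")
```

Any metric in this sketch can be recomputed over only AI-heavy periods or AI-assisted changes, which is exactly where the report's negative throughput and stability effects would show up.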
4. Best practices for using AI in alignment with DORA metrics
The report also outlines the following best practices for preserving the fundamental principles of DORA and software engineering in an era of AI coding.
• Avoid using AI for large-scale code generation. Instead, use it for code quality improvements, documentation, and reviews.
• Avoid deploying bulk AI-generated changes.
• Maintain small batch sizes.
• With a focus on maintaining stability, balance automation with thorough validation.
• Recognize that the learning curve for effective AI utilization might initially impact delivery performance.
• Measure how AI adoption impacts developer experience.
• Measure AI’s impact on specific DORA metrics.
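The last practice, measuring AI's impact on a specific DORA metric, can be sketched as a simple comparison between AI-assisted and other changes. This assumes (hypothetically) that each change is tagged with whether AI assisted it; the records and the `failure_rate` helper below are illustrative, not any tool's real API:

```python
# Hypothetical change records, each tagged with whether AI assisted
# the change and whether its deployment failed.
changes = [
    {"ai_assisted": True,  "failed": False},
    {"ai_assisted": True,  "failed": True},
    {"ai_assisted": True,  "failed": False},
    {"ai_assisted": False, "failed": False},
    {"ai_assisted": False, "failed": True},
    {"ai_assisted": False, "failed": False},
    {"ai_assisted": False, "failed": False},
]

def failure_rate(records):
    """Change Failure Rate for a group of changes (0.0 if the group is empty)."""
    return sum(r["failed"] for r in records) / len(records) if records else 0.0

ai = [c for c in changes if c["ai_assisted"]]
non_ai = [c for c in changes if not c["ai_assisted"]]

# Comparing CFR between the two groups isolates AI's effect on this
# one DORA metric rather than on an aggregate score.
print(f"AI-assisted CFR: {failure_rate(ai):.0%}")
print(f"Other CFR:       {failure_rate(non_ai):.0%}")
```

The same split-and-compare pattern works for lead time or recovery time; the point is to attribute movement in one metric to AI usage rather than to overall team trends.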
Perspective #3:
How does GitHub see DORA metrics evolving with AI coding?
The GitHub survey tells the same story as the 2024 DORA Report.
Upskilling is one of the major use cases of AI for developers. Almost 57% of developers believe AI can help them improve their coding language skills.
As developers improve their coding skills, DORA metrics are likely to benefit, and over time organizations can see an upward trajectory in DORA scores even as developers rely more on AI.
Over 80% of developers also believe AI could help them improve teamwork during tasks like security reviews, planning, solution design, and pair programming. (Collaboration is not a metric DORA focuses on; it comes from the SPACE framework.)
Perspective #4:
How do we at Hivel see the relationship between DORA metrics and AI coding?
Well, I have always believed that DORA metrics alone aren’t enough to capture the full picture. They must be complemented by SPACE metrics to provide a more holistic view of both developer performance and developer well-being.
And now, as AI sweeps through software engineering with faster adoption than ever, my belief in this balanced approach has only strengthened.
What supports this is my past experience as CTO and my current experience of building Hivel - an AI-powered Software Engineering Intelligence platform.
I have seen many companies (many of them now our clients) struggle to balance delivery speed, code quality, and developer well-being even after investing heavily in AI adoption programs.
In my previous role as CTO, I often found myself in paradoxical situations where I could not figure out why a developer had suddenly started writing higher-quality code, bringing their change failure rate to zero. Was it AI adoption, async collaboration, less burnout, or something else entirely?
The challenge was calculating the ROI of AI adoption (which is directly linked to developers’ performance and well-being gains) in a scenario where performance and well-being are measured by two different frameworks: DORA and SPACE.
So, to answer the question: DORA isn’t obsolete, because the fundamental nature of software engineering is still the same. We’re still pushing code, rolling back bad releases, and aiming for rapid recovery. What has changed is how developers execute, and that is something SPACE metrics capture better than DORA.
Thus, DORA + SPACE is a new reality. Happy Engineering!
FAQs
1. How does Hivel help you measure the ROI of AI adoption with DORA metrics?
Hivel seamlessly integrates with development tools, CI/CD pipelines, and collaboration platforms to track AI’s impact on DORA by analyzing…
• AI-driven improvements in deployment frequency and lead time
• Reduction in change failure rate and MTTR
• AI’s role in optimizing developer workflows
• Overall improvement in engineering velocity
With this data available in a single dashboard, you can see whether AI is improving or degrading your DORA metrics.
2. Can Hivel help prevent burnout while improving DORA metrics?
Yes. Hivel identifies burnout rate, unaccounted work, and cognitive load using SPACE metrics at both individual and team levels. With these insights, managers can make strategic resource-allocation decisions so that burnout doesn’t drive down deployment frequency or push up lead time for changes, change failure rate, and mean time to recovery. This eventually leads to improved DORA scores.
3. How does Hivel help engineering teams avoid AI hype traps?
Many AI tools promise big productivity gains but fail to fit into your work dynamics. Hivel prevents wasted investment by providing…
• Data-driven validation of AI’s impact on software delivery (DORA Metrics)
• Clear insights into developer adoption and experience (SPACE Metrics)