Webinar: ‘The Hows & Whats of DORA’ with Nathen Harvey and Ido Shveki

Typo recently hosted an engaging live webinar titled “The Hows and Whats of DORA,” featuring DORA expert Nathen Harvey and special guest Ido Shveki. With over 170 attendees, we explored DORA and other crucial engineering metrics in depth.

Nathen, the DORA Lead & Developer Advocate at Google Cloud, and Ido, the VP of R&D at BeeHero, one of our valued customers, brought their unique insights to the discussion.

The session explored why only 5-10% of engineering teams actively use DORA metrics and examined the current state of data-driven metrics like DORA and SPACE. It also highlighted the organizational and cultural elements essential for successfully implementing these metrics.

Nathen also explained how team size and DevOps maturity determine when advanced frameworks such as DORA become critical, and offered practical guidance on choosing the most relevant metrics and benchmarks for an organization.

The event concluded with an engaging Q&A session, allowing attendees to ask questions and gain valuable insights.

P.S.: Our next live webinar is on August 28, featuring DORA expert Bryan Finster. We hope to see you there!


Timestamps

  • 00:00 - Introduction
  • 02:11 - Understanding the Low Uptake of Metrics
  • 08:11 - Mindset Shifts Essential for Metrics Implementation
  • 10:11 - Ideal Team Size for Metrics Implementation
  • 15:36 - How to Identify Benchmarks?
  • 22:06 - Aligning Business with Engineering Metrics
  • 25:04 - Choosing the Right Metrics
  • 30:49 - Q&A Session
  • 45:43 - Conclusion


Webinar Transcript

Kovid Batra: All right. Hi, everyone. Thanks for joining in for our DORA Exclusive webinar- The Hows & Whats of DORA, powered by Typo. This is Kovid, founding member at Typo and your host for today's webinar. And with me, I have two special co-hosts. Please welcome the DORA expert tonight, Nathen Harvey. He's the Lead and Dev Advocate at Google Cloud. And we have with us one of our product mentors, Typo Advocates, Ido Shveki, who is VP of R&D at BeeHero. Thanks, Nathen. Thanks, Ido, for joining in. 

Nathen Harvey: Oh, thanks for having us. I'm really excited to be here today. Thanks, Kovid. 

Ido Shveki: Me too. Thanks, Kovid. 

Kovid Batra: Guys, um, honestly, like before we get started, uh, I have to share this with the, with our audience today. Uh, both of you have been really nice. It was just one message and you were so positive in the first response itself to join this event. And honestly, uh, I feel that this, these kinds of events are really helpful for the engineering community because we are picking up a topic which is growing, people want to learn more, and, uh, Nathen, Ido, once again, thanks a lot for, for joining in on this event. 

Nathen Harvey: Oh yeah, it really is my pleasure and I totally agree that these events are so important. Um, I often say that, you know, you can't improve alone. Uh, and that's true that each individual, we can't improve our entire organization or even our entire team on our own. It requires the entire team, but even an entire team within one organization, there's so much that we can learn from each other when we look into other organizations around the world and other challenges that people are running into, how they've overcome them. And I truly believe that each and every one of us has something to share with the community, uh, even if you were just getting started, uh, maybe you found a new pitfall that others should avoid. Uh, so you can bring along those cautionary tales and share those with, with the global community. I think it's so important that we continue to learn from and, and be inspired by one another. 

Kovid Batra: Totally. I totally agree with that. All right. So I think, we'll just get started with you, Nathen. Uh, so I think the first thing that I want to talk about is very fundamental to the implementation of DORA, right? We know lately we had a Gartner report saying there were only 5 to 10 percent of teams who actually implement such frameworks through tools or through processes in their, in their organizations. Whereas, I mean, I have grown up in my professional career hearing that if we are measuring something, only then can we improve it. So if you go to any department or any, uh, business unit for that matter, everyone follows some sophisticated processes or tooling to measure those KPIs, right? Uh, why is it, why is this number so low in our engineering teams? What's the current landscape according to you? I mean, you have been such a great believer of all this data-driven DORA metrics, engineering metrics, SPACE. So what's, what's your thought around it? 

Nathen Harvey: Yeah, it's a, it's a good question. And I think it's really interesting to think about. I think when you look at the practice of software engineering or development, or even operations like reliability engineering and things along those lines, these all tend to be, um, one creative work, right? When you're writing software, you're probably writing things that have never been written before. You're trying to solve a new problem that's very specific to your context. Um, it can be very difficult to measure, what does that look like? I mean, we've, we've used hundreds of different measures over the years. Some are terrible. You know, I think back to a while ago, and hopefully no one watching is under this measurement today. But how many lines of code did you commit to the repository? That's, that's a measure that has certainly been used in the past to figure out, is this a develop, is this developer being productive or not? Uh, we all know, hopefully by now that that's a, it's a terrible way to measure whether or not you're delivering value, whether or not you're actually being productive. So, I think that that's, that's part of it. 

I also think, frankly, that, uh, until a few years ago, the world was working in a, in a way in which finances were easy to get. We were kind of living in this zero interest rate, uh, world. Um, and engineers, you know, we're, we're special. We do work that is, that can't be understood by anyone else because we have this depth of knowledge in exactly what we're doing. That's kind of a lie. Uh, those salespeople, those marketing people, they have a depth of knowledge that we don't understand, that we couldn't do their job in the same way that they couldn't do our job. And that's, that's not to say that one is better than the other, or one is more special than the other, but we absolutely need different ways to measure. And even ways that we have to measure other sort of disciplines, uh, don't actually give us the whole picture. Take sales, for example, right? You might look at well, uh, how much, uh, how much revenue is this particular salesperson bringing in to the organization? That is certainly one measure of the productivity of that salesperson, but it doesn't really give you the whole picture, right? How is that salesperson's experience? How are the people that are interacting with that salesperson? How is their experience? So I think that it is really difficult to agree on a good set of measures to understand what those measures are. And frankly, and this, this might be a little bit shocking, Kovid, but look, I, I, I am a big proponent of DORA and the research and everything that we've done here. But between you and me, I don't want you to do DORA metrics. I don't want you to. I don't care about the DORA metrics. What I care about is that you and your team are improving, improving the practices and the processes that you have to deliver and operate software, improving the well-being of the members of your team, improving the value that you're creating for your business, and improving the experience that you're creating for your customers.

Now, none of those are the DORA metrics. Of course, measuring the DORA metrics helps us assess some of those things, and what we've been able to show through the research is that improving things like software delivery performance has positive outcomes, or is positively predictive of better organizational success, better customer satisfaction, better well-being for your teams. And so, I think there's, there's this point where, you know, there's, uh, maybe this challenge, right, do you want, do you want me to spend as an engineer? Do you want me to spend time measuring the work that I'm doing, measuring how much value am I delivering, or do you want me delivering more value? Right? And it's not really an either-or trade-off, but this is kind of some of the mindsets I have. And I think that this is some of the, the blockers that come in place when people want to try to bring in a measurement framework or a metrics framework. And then finally, uh, you know, between you and me, nobody really likes their work to be measured. I want to feel like I'm providing valuable work and, and know that that's the case, but if you ask me to measure it, I start to get really worried about why are you asking that question. Are you asking that question because you want to give me a raise and a promotion and more money? Great. I'm gonna make sure that these numbers look really good. If you're asking that question to figure out if you need to keep me on board, or maybe you can let me go, now I'm getting really nervous about the questions that you're asking.

And so I think there's a lot of like human nature in the prevention of adopting these sorts of frameworks. And it really gets back to, like, who are these frameworks for? And again, I'll just go back to what I said sort of towards the beginning. I don't want you to do DORA metrics. I want you to improve. I want you to get better. And so, if we think about it in that perspective, really the DORA metrics are for me and my teammates. They aren't necessarily for my leaders. Because it's me and my teammates that are going to make those improvement efforts. 

Kovid Batra: Totally. I think, um, very wise words there. One thing that I just picked up from what you just said, uh, from the narrative, like there is a huge organizational cultural play in this, right? People are at the center of how things get implemented. So, you have been experiencing this with a lot of teams. You have implemented this. What's the difference that you have seen? What are those mindsets which actually make these things get implemented? What are those organizational factors that make these things get implemented? 

Nathen Harvey: Yeah, that's a, that's a good question. I would say, first it starts with, uh, the team that you're going to start measuring, or the application, the group of people and the technology that you want to start measuring. First, these people have to want to change, because if we're, if we're going to make a measure on something, presumably we're making that measure so that we understand how we are, so that we can improve. And to improve, we have to change something. So it starts with the people wanting to change. Oh, except I have to be honest, that's not enough. Wanting to change actually isn't enough. We all want to change. We all want to get better. Actually, maybe we all just want to get better, but we don't want to have to change anything. Like I'm very comfortable in the way that I work. So can it, can it just produce better results? The truth is, I think we have to find teams that need to change. There has to be some motivating factor that's really pushing them to change, because after we look at the dashboard, after we see some numbers, if we're not truly motivated, if there isn't a need for us to change, we're probably not going to change our behavior. So I think that's the first critical component, this need to improve, this fundamental desire that goes beyond just the desire. It's, it's a motivating factor. You have to do this. You have to get better because the competition is coming after you, because you're feeling burnt out, for a myriad of reasons. So I think that that's a big first step in it. 

Kovid Batra: A lot of times, what I have seen while talking to a lot of my Typo clients also, uh, is, uh, they feel that there is a stage when this needs to be implemented, right? So people use Git metrics, Jira metrics to make sure things are running fine. And I kind of agree with them, like very small teams can, can rely on that. Like maybe teams under size 10 are good. But, what do you think, uh, what, what's the DevOps maturity? What's the team size that impacts this, where you need to get into a sophisticated framework or a process like DORA to make sure things are, uh, in, in the right visibility? 

Nathen Harvey: Yeah, that's, that's, that's a really good question. And I think unfortunately, of course, the answer is it, it depends, right? It is pretty context-specific. I do think it matters that, uh, it matters the level at which you're measuring these things. You know, the DORA metrics have always been meant, and if you look at our survey, we always sort of prepend our questions with, for the primary application or service that you're working on. So when we think about those DORA metrics, those software delivery metrics in particular, we aren't talking about an organization. What is the, you know, we don't ask, for example, what is the deployment frequency at Typo? But instead, we ask about specific applications within Typo, and we expect that you're going to have variation across the applications within your team. And so, when you have to get into this sort of more formal measurement program, I think that really is context-specific. It really depends on the business and even on what you are measuring. In fact, if your team has, uh, more of a challenge with developing code than they do with shipping code, then maybe the DORA metrics aren't the right metrics to start with. You want to sort of find your constraint within your organization, and DORA is very much focused on software delivery and operational performance. So on the software delivery piece, it's really about are we able to take this code that was written and get it out the door, put it in front of customers. Of course, there's a lot of things on the development side that enable that. There's a lot of things on the operational side that benefit from that. It all kind of comes together, but it is really looking at finding that particular pain point or friction point within your organization. 

And then, I think one other thing that I'll just comment on really quickly here is that as teams start to adopt these frameworks, there's often an overfitting for precision. We need precise data when it comes to this. And honestly, again, if you go back to the methods that DORA uses, each year we run an annual survey. We ask people, what is your average time or your typical time from code committed to code in production? We're not hooking into your Git systems or your software delivery pipelines or your, uh, task backlog management systems. We're not hooking into any of those things. We're asking about your experience. Now, we have to do that given that we're asking the entire world. We can't simply integrate with all of those systems. But this level of precision is very helpful at some point. But it doesn't necessarily need to be where you start. Right? Um, I always find it's best to start with a conversation. Kind of like what we're having today. 

Kovid Batra: But yeah, I think, uh, the tooling that is coming into the, into the picture now is solving that piece also. So I think both the things are getting, uh, balanced there, because I feel the survey part is also very critical to really understand what's going on. And on top of that, you have some data coming from the systems without any effort, which reduces your pain and builds trust in what you are looking at. So yeah, that makes sense. 

Nathen Harvey: Yeah, absolutely. And, and, and there is a cautionary tale built in there. I've seen, I've seen too many teams go off and try to integrate all of these systems together to get all of the precise data and beautiful dashboards. Sometimes that effort ends up taking months. Sometimes that effort ends up taking years. But what those teams fail to do over those months or years is actually try to improve anything. All they're trying to improve is the precision of the data that they have. And so, at the end of that process, they have more precise, a more precise understanding of what they knew at the beginning of that process.

And they haven't made any improvements. So that's where a tool like Typo, uh, or others of this nature like really come in because now I don't have to think about as much, all of that integration, I can, I can take something off the shelf, uh, and run it in my systems and immediately start to get value from that. 

Kovid Batra: Totally. I think, uh, when it comes to using the product, uh, Ido has been, uh, one of the people who has connected with me almost thrice in the last few days, giving me some feedback around how to do things. And I would let Ido have some of his, uh, questions here. And, uh, I have, uh, my demo dashboard also ready. So if there is anything that you want to refer back to Ido or Nathen, to like highlight some metrics that they can look at, I, I'll be happy to share my screen also. Uh, over to you, Ido. I'll, I'll put you on the main screen so that the audience sees you well. 

Ido Shveki: Oh, thanks, Kovid. And hi again, Nathen. Uh, first of all, very interesting, uh, hearing you speaking about it. I also find this topic, uh, close to my heart. So I, uh, I, it's fascinating to hear you talk about it. I wanted to know, uh, you mentioned before that, like you said, even inside Typo as a company, there are like different benchmarks, different, uh, so how can you identify this, uh, benchmark? Maybe my questions are a bit practical, but let me know if that's the case. But yeah, I just want to know how to identify this benchmark, because as you mentioned, also at BeeHero, we have like, uh, uh, different teams, different sizes, different maturity, uh, different, uh, I mean, uh, seniority levels. So how can I start with these benchmarks?

Nathen Harvey: Yeah, yeah. That's a, that's a really great question. So, um, one of the things that I like to do when I get together with a new team, or a new organization, is we first kind of, let's, let's pick an application or two. So at BeeHero, uh, I, I, I know very little about what BeeHero does, you know, tell us a little bit about BeeHero. Give us, give us like a 30-second pitch on BeeHero. What do you do there? 

Ido Shveki: Cool. So we are an Israeli startup and we deal with agriculture. What we do is we place, uh, sensors inside beehives, as the, as the name might, uh, you know, give you a hint. Uh, we put sensors inside beehives and this way we can give a lot of, uh, we, we collect metrics and we give great, uh, like, uh, good insights, interesting insights to beekeepers, uh, so that they can know what to do with their bee colony, how to treat it, and how to maintain the bee colony. So, this is, you know, basically it. And if, if I come, uh, to your question, so we have, yeah, uh, different platforms. We have the infra platforms, we have the firmware guys, we have the mobile app, et cetera. But I assume that, like, every company has these different angles of a product. 

Nathen Harvey: Yeah. Yeah. Yeah. Of course. Every company has hundreds, maybe thousands of different products that they're maintaining. Yeah, for sure. Um, well, first, that's super cool. Um, keeping the farmers and the bees happy. Now, so what I like to do with, with a new team or organization that I'm working with is we start with an application or a service. So maybe, maybe we take the mobile application that BeeHero has and what we want to do is bring together, in the perfect world, we bring together into a physical room, everyone that's responsible for prioritizing work for that application, designing that work, writing the software, shipping the software, running the service, answering customer requests, all of that stuff. Uh, perhaps we'd let the bees stay in the hives. We don't bring them into the room with us. Um, software engineers aren't, aren't known for being good with bees, I guess. So, but... 

Ido Shveki: They do affect the metrics though. Yeah, I don't want, I don't want that. 

Nathen Harvey: Absolutely. Absolutely. So, so we'll bring these people together. And I like to just start with a conversation, uh, at dora.dev, we have a quick check that allows you to quickly answer those four software delivery performance metrics. You know, the deployment frequency, change lead time, your change failure rate and your failed deployment recovery time. But even before we get to those metrics, I like to start with a simpler question. Okay, so together as a team, a developer has just committed a change to the version control system. As a team, let's go to the board and let's map out every step in the process, every handoff that has to happen between that code commit and that code landing in production, right, so that the users can use it. And the reason we bring together a cross-functional team is because in many organizations, I don't know how big BeeHero is, but in many organizations, there are handoffs that happen from one team to the next, sort of that chain of custody, if you will, to get to production. Unfortunately, every single one of those handoffs is an opportunity for introducing friction, for hiding information, you know. I've, I've worked with teams as an example where the development team is responsible for building a package, testing that package and then they hand it off to the test team. Well, the test team takes that package and they discard it. They go back to the Git repo. They actually clone the Git repo and then they build another package and then start testing that. So now, the developers have built a package that gets discarded. Now the testers build another package that they test against that probably gets discarded and then someone else builds a third package for production. So there's, as you can imagine, there's lots of ways for that handoff and those three different packages to be different from one another. This is, it's, it's mind-boggling. But until we put all those people in the room together, you might not even see that friction and that waste in the process. So I start there to really identify where are those friction points? Where are those pain points? And oftentimes you have immediate sort of low hanging fruit, if you will, immediate improvement opportunities.

And the most exhilarating part of that process as a facilitator is to see those aha moments. "Oh my gosh! I didn't realize that you did that." "Oh, I thought I packaged it this way so that you could do this thing that you're not even doing. You're just rubber stamping and passing it on." Or whatever it is. Right? So you find those things, but once you've done that map, then you go back to those four questions. How's my, what are my, you know, we used a quick check in that process. What does my software delivery performance look like? This gives us a baseline. This is how we're doing today. But in this process, we've already started to identify some of those areas for improvement that we want to set next. Now I do this from one team to the next or encourage the teams to do this on their own. And this way we aren't really comparing, you know, what does your mobile app look like versus the front-end website, right? Should they have the same deployment frequency? I don't know. They have different customers. They have different needs. They have different teams that are working on them. So you expect them to be different. And the thing that I don't really care about over time is that everyone gets to the top level or a consistent performance across all of the teams. What I'd much rather see is that everyone is improving over time, right? So in other words, I'd rather reward the most improved team than the team that has the highest performance. Does that make sense? 

Ido Shveki: Yeah, a lot actually. 

Nathen Harvey: All right. 

Ido Shveki: Thanks. 

Nathen Harvey: Awesome. Yeah. 

Ido Shveki: Kovid, do we have another, time for another question? 

Kovid Batra: Yeah, I do. I mean, uh, you can go ahead, please. Uh, we have another three minutes. Yeah. 

Ido Shveki: Oh, cool. I'll make it quick. I'm actually interested in how do you align the business to DORA metrics? Because usually I find myself talking to the management, CEO, CTO, trying to explain to them what's, what's happening under the hood in the developer team, and it's not always that easy. Do you have some tips there?

Nathen Harvey: Yeah, you know, has your CEO come to you and said, you know, you know, last year you did 250 deploys, if you do 500 this year, I'm going to double your salary? They probably never said that to you. Did they? 

Ido Shveki: No, no. 

Nathen Harvey: No, no. Primarily because your CEO probably doesn't care how many deploys you delivered. Your CEO. 

Ido Shveki: And I think that's, I mean, I wouldn't want them to. 

Nathen Harvey: You don't want them to. You're, you're exactly right. But they do care about other things, right? They care about, I don't, I don't know, I'm going to make up some metrics. They care about how many, uh, like the health of the hives that each farmer has, right? Like, that's what they care about. They care about how many new farmers have signed up or how many new beekeepers have signed up, what is their experience like with BeeHero. And, and so really, as you go to get your executives and your management and, and the business tied into these metrics, it's probably best not to talk about these metrics, but better to talk in terms of the value that they care about, the measures that they care about. So, you know, our onboarding experience has left some room for improvement. If we ship software faster, we can improve that onboarding experience. And really it's a hypothesis. We believe that by improving our software delivery performance, we'll be able to respond faster to the market needs, and we'll be able to therefore improve our onboarding process as an example, right? And so now you can talk to your CEO or other business counterparts about look, as we've improved these engineering capacities and capabilities, we've seen this direct impact on our customers, on the business value that we care about. DORA shows, through our data collection, that software delivery performance is predictive of better organizational performance. 

But it's up to you to prove that, right? It's up to you, essentially, we encourage you to replicate our study. We see this when we look across teams. Do you see this on your team? Do you see that improving? And that's really, I think, how you should talk about it with your business counterparts. And frankly, um, you, you are the business as well. So it also encourages you and the rest of the engineers on your team to remember, we aren't creating this application because we want to use the new, uh, serverless technology, or we want to play with the latest, greatest AI. We're building this application to help with the health of bees, right? And so, keeping that connection back to the business, I think is really important. 

Kovid Batra: Okay. On your behalf, can I ask one question? 

Yeah. So I think, uh, there are certain things that we also struggle with, not just Ido, but, uh, various other clients also, that is, which metrics to pick up. So can we just run through a quick example from your, uh, history of clients, where you have, uh, probably highlighted for, uh, let's say, a 100-member dev team, what metrics make sense in what scenario? I, I'll quickly share my screen. Uh, I have some metrics highlighted for, for DORA and more than that on Typo. 

Nathen Harvey: Oh, great! 

Kovid Batra: You can tell me which metrics one should look at and how one should navigate through it. 

Nathen Harvey: Yeah, for sure. That, that'd be awesome. That'd be awesome. So I think as you're pulling up your screen, I'll just start with, you know, the, the reason that the DORA software delivery metrics are nice is kind of multifold, right? First, there's only four of them. So you can, you can count them on one hand. That's, that's a good thing. Uh, you aren't, like, overwhelmed with too many metrics where you just don't know which lever should we pull? There's too many in front of me, right? Second, um, they, they represent both lagging and leading indicators. In other words, they're lagging indicators for what does your engineering process look like? What does engineering excellence or delivery excellence look like within your organization? These DORA metrics can tell you. Those are the lagging indicators. You have to change things over here to make them improve. But they're leading indicators for those business KPIs, right? Organizational performance, well-being for the people on your team. So as we improve these, we expect those things to improve as well. And so, the nice thing about starting with those four metrics is that it gives you a good sense of where you are. Gives you a nice baseline. 

And so, I'm just going to make my screen a little bit bigger so I can see your, uh, yeah, that's much better. I can see your dashboard now. All right. So you've got, uh, you've got those, uh, looks like those four, uh, a couple of those delivery metrics you got, uh, oh, actually tell me what, what do you have here, Kovid? 

Kovid Batra: Yeah. So we have these four DORA metrics for us, the cycle time, deployment frequency, change failure rate, and mean time to restore. So we also believe in the same thing where we start off with these fundamental metrics. And then, um, we, we have more to deep dive into, like, uh, you can see things at team level, so there are different teams in one single view where you can see each team on high level, how their velocity, quality, and throughput looks like. And when you deep dive, you find out those specific metrics that basically contribute to velocity, quality, and throughput of the teams. And these are driven from DORA and various other metrics that we realized were important and critical for people to actually measure what's going on.

Nathen Harvey: Yeah. Yep, that's great. And so I really like that you can see the trend over time because honestly, the, the single number doesn't really mean anything to you. It's like getting on the scale in the morning. There's a number on the scale. I don't know if that's good or bad. It depends on what it was yesterday and what it will be tomorrow. So seeing that trend is the really important thing here because then you can start to make decisions and commitments as a team on experiments that you want to run, right? And so in this particular case, you see your cycle time going up. So now what I want to do is kind of dig in. Well, what's, what's behind the cycle time, what's causing this? And that's where the things like the, that map and, and you see here, we've got a little map that shows you exactly sort of what happens along that flow. So let's take a look at those. We have coding, pick up, review and merge, right? Okay, yup. And so the, nice thing there is that the pickup seems like it's going pretty well, right? One of the things that we found last year in our survey was that teams with faster code reviews have 50 percent better software delivery performance. And so it looks like this team is doing pretty good job. I imagine that pickup is you're reviewing that code, right? 

Kovid Batra: Yeah. Yeah. Yeah. 

Nathen Harvey: Mm hmm. Yeah. So, so that's good. It's good to see that. But what's the review? Oh, I see. So pickup must be when you first grab the PR and then review maybe incorporates all the sort of back and forth feedback time. 

Kovid Batra: Yes, yes. And finally, when you're merging it to your main branch, so the time frame between that is your review time. 

Nathen Harvey: Ah, gotcha, gotcha, gotcha. Okay, so for me, this would be a good place to dig in. What's, what's happening there? Because if you look between that pickup and review, that's about 8 hours of your 5, 10, 15, uh, 18 hours. So it's a significant portion there is, sort of in that code review cycle. This is something I'd want to look at. 
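For readers who want to make the stage breakdown discussed above concrete, here is a minimal sketch of how such a split is typically computed. The timestamps are made up, and the stage boundaries (first commit to PR opened, PR opened to first review, first review to merge) are assumptions for illustration, not Typo's exact definitions.

```python
from datetime import datetime, timedelta

# Hypothetical timestamps for a single pull request.
first_commit   = datetime(2024, 8, 5, 9, 0)
pr_opened      = datetime(2024, 8, 5, 15, 0)
first_review   = datetime(2024, 8, 5, 17, 0)
merged_to_main = datetime(2024, 8, 6, 11, 0)

# Assumed stage boundaries for the cycle time breakdown.
stages = {
    "coding": pr_opened - first_commit,       # writing the change, up to opening the PR
    "pickup": first_review - pr_opened,       # waiting for a reviewer to pick the PR up
    "review": merged_to_main - first_review,  # review back-and-forth until merge to main
}

total = sum(stages.values(), timedelta())
for name, duration in stages.items():
    print(f"{name:>6}: {duration}  ({duration / total:.0%} of cycle time)")
```

Looking at the shares this prints is exactly the kind of check described above: if review dominates the total, that is the stage to dig into.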

Kovid Batra: Perfect. 

Nathen Harvey: Yeah. Yeah. And we see this, we see this a lot. Um, one, one organization I worked with, um, the, the challenge that they had was not necessarily in code review, but in approvals, they were in a regulated industry and they sent all changes off to a change approval board that had to approve them, that change approval board only met so frequently, as you can imagine, that really slowed down their cycle time. Uh, it also did not help with their stability, right? Um, changes were just as likely to fail when they went to production as not, uh, regardless of whether or not they went through that change approval board. So we really looked at that change approval process and worked to help them automate that. The net result is I think they're deploying about 600 times more frequently today than they were before we started the process, which is pretty incredible. 

Kovid Batra: Cool. That's really helpful, Nathen. And thanks for those examples that fit into the context of a lot of our audience here. In fact, this question, I just realized was asked by Benny Doan also. So I think he would be happy to hear you. And, uh, I think now it's time. I feel the audience can ask their questions. So, um, we'll start with a 15 minute Q&A round where all the audience, you are free to comment in the comment sections with all the questions that you have. And, uh, Nathen, Ido, uh, would be happy to listen out to you on those particular questions. 

Ido Shveki: Kovid, should we just start answering these questions? 

Kovid Batra: Yeah. 

Nathen Harvey: Yeah, I'm having trouble switching to the comments tab. So maybe you could read some of the questions. I can't see them. 

Ido Shveki: Um, I can see a question that was also asked by Benny, who I worked with in the past. Oh, hi, Benny. Nice that you're here. Um, it was by Nitish and Benny as well, asking about how the, the dev people, the developers, won't feel micromanaged when we are using, um, uh, the DORA metrics with them. Um, I can begin, and I'll let you, Nathen, uh, uh, elaborate on it in a second. I can begin with my experience. First of all, it is a slippery slope. I mean, I do find it not trivial, because if you would just show them that I'm looking at the times from this PR, to the improvements, and lines of code, et cetera, like Nathen said in the beginning, yeah, I mean, they would feel micromanaged. Um, I, uh, first of all, I, I usually talk about it on a, on a team level or an organization level. And when I do want to raise these, uh, questions, or maybe, like, address them as growth opportunities for a certain developer, uh, personally, I don't look at it as, like, criticism. It's like a, it's the beginning of a conversation. It's not like, I don't know, I've already made up my mind, that because this metric looks like this, then I'm not pleased with how you perform. It's just like, all right, I've seen that there is a decrease here. Uh, is there a reason? Let's talk about it, let's discuss it. I'm easily convinced if there are, like, uh, ways to be convinced. But, but yeah, I do look at it as a growth. Um, I try to, to convince, and I do look at it as a, like, a growth opportunity for the developer to, to look at. Uh, yeah, that's, that's at least my take on this. 

Nathen Harvey: Yeah, I definitely agree with that, you know, because I think that this question really gets to a question of trust. Um, and how do you build trust with your teammates? And I think the way that you build trust is through your actions. Right? And so if you start measuring and then start like taking punitive action against individual developers or even teams, that's going to, your actions are going to tell people, you should be afraid of these metrics. You should do whatever you can to not be measured by these metrics, right? But instead, if and DORA talks a lot about culture, if you lean in and use this as an opportunity to improve, an opportunity to learn more about how the team is going. And, and I like your approach there where you're taking sort of an inquisitive approach. Hey, as an example, you know, Hey, I see that the PRs, uh, that you started to submit fewer PRs than you have in the past, what's going on? It may be that that person has, for the time being, prioritized code reviews. So they're doing less PRs. It may be that they're working on some new architectural thing. They're doing less PRs. It may be that, uh, they've had a family emergency and they've been out of the office more. That's going to lower their PRs. That's the, the, the fact that they have fewer PRs is not enough information for you to go on. It is a good place to start a conversation. 

And then, I think the other thing that really helps is that you use these metrics at the team level. So if you as a team start reviewing them, maybe during your regular retrospectives or planning sessions, and then, importantly, it comes back to what are you going to change? Is the team going to try something different based on what they've learned from these metrics? Oh, we see that our lead time is going up, maybe our continuous integration practices, we need to put some more effort into those or some more automated testing. So over the next sprint or time block, we're going to add, you know, 20 percent more capacity for automated testing. And let's see how that impacts things. So seeing that these metrics are being used to inform improvements, that's how you prevent that slippery slope, I think. 

Kovid Batra: Totally. Okay. I think we can move on to this next question. Uh, this is Nitish. Uh, how can DORA and data-driven approach be implemented in a way that devs don't feel micromanaged? Yeah, I think. 

Nathen Harvey: Yeah, I think, I think we've covered a little bit of this in the previous question here, Nitish. And I think that it really comes back to remembering that these are not measures that should be at the individual level. We're not asking, Kovid, what's your deployment frequency? You know, what's yours? Oh, one of you is better than the other. Something's going to change. No, no, no. That's not how we, that's not how we use these measures. They're really meant for that application or service level. When it comes to developing, delivering, operating software or any technology, that's a team sport. It's not an individual sport. 

Kovid Batra: All right. Then we have from Abderrahmane, uh, how are the market segment details used for benchmarks collected? 

Nathen Harvey: Yeah, this is a really good question. Thanks for that. So, uh, as you know, we run a survey each year and we ask about what industry are you in? Um, and what we found, surprisingly, or maybe, maybe not surprisingly, is that over the years, industry is not really a determinant of how your software delivery performance is going to be. In other words, what we see across every industry, whether it's technology or retail or government or finance, we see teams that have really good software delivery performance. We also see, in all of those industries, teams that have rather poor software delivery performance, or rather, lots of opportunities to improve their software delivery performance, I should say. Yeah. 

So we see that, uh, the market segments are there and, and honestly, we, we publish that data so that people can see that, look, this can happen in our industry too. Um, I always worry that, you know, someone might use their industry as a reason not to question the status quo. Oh, we're in a regulated industry, so we can't do any better. It doesn't matter what industry you're in. You can always do better. You can always do worse as well. So just be careful, like focus on that improvement. 

Kovid Batra: Cool. Uh, next question is from Thomas. Uh, how do you plan the ritual with engineers and stakeholders when you're looking at this metric? Yeah, this is a very, uh, important question. I think Nathen, would you like to take this up? 

Nathen Harvey: Yeah, I'll take this. I'd love to hear how Ido is doing this as well, sort of incorporating the metrics into their daily work. But I think it's, it's, it's just that as you go into your planning or retrospective cycle, maybe as a team, you think about the last period and you pull up maybe the DORA quick check, or if you're using Typo or something like it, you pull up the dashboard and say, "Look, over the last two weeks over the last month, here's where we're trending. What are we going to do about that? Is there something that we'd like to change about that? What can we learn about that?" Start just with those questions. Start thinking about that. So I think really just using it as a, as a discussion point in those retrospectives, maybe an agenda item in those retrospectives is a really powerful thing that you can do.

Ido, what's your experience? 

Ido Shveki: Yeah. So, um, I totally agree. And I think for the most part, this is what I, what we're also doing at BeeHero, in the retrospectives, maybe not on a bi-weekly basis, like every two weeks, because sometimes they find it too often and, you know, uh, I want it to, uh, tell them something new, let's say. Um, but I also find it in, when we are doing some rituals for some incident that happened and we're discussing this issue, I really put emphasis, and I think this is the cultural part that you mentioned before, uh, in these, uh, incident rituals. I really try to point out and look at, uh, uh, how long did it take us to mitigate it? Um, like how, how long, uh, until the, the customer didn't see the issue anymore. And from these, uh, points, I mean, I hope the team understands the culture that I'm pushing towards. And from that point, they will also want to implement DORA metrics without even knowing the name DORA. We don't really care about the name. I mean, it doesn't really matter if they know what to call it. Just, as you mentioned before, I don't want you to know about DORA, just get better, or just be better at this. So yeah, that's basically it. 

Nathen Harvey: Thanks. Awesome. 

Kovid Batra: All right. I think there is one thing that I wanted to ask from this. It's good with the engineers, probably, and you can just pull in every time. But when it comes to other stakeholders in the business, what I have seen and experienced with my clients is they find it hard to explain these DORA metrics in terms of the business language. I think Nathen, you touched upon this in the beginning. I would like to just highlight this again for the audience's sake. 

Nathen Harvey: Yeah, I think that, I think that's really important. And I think that when it comes to dashboards, uh, it, it would be really good to put your delivery performance metrics right next to your organizational performance metrics, right? Are we seeing better customer, like, are we seeing the same trend? As software delivery improves, so do customer signups, so do, uh, revenue that we get per customer or something along those lines. That's, you know, if you think about it, we're really just trying to validate an experiment. We think that by shipping this feature, we're going to improve revenue. Let's test that. Let's look at that side-by-side. 

Kovid Batra: Totally. All right. Uh, we have a lot of questions coming in. Uh, so sorry, audience, I'm not able to pick all of those because we are running short on time. We'll pick one last question. Uh, okay. That's from Julia. Uh, are there any variations of DORA metrics you have found in customer deployed or installed software? Example, deployment frequency may not be directly relevant. A very relevant question. So yeah. 

Nathen Harvey: Yeah, absolutely. Uh, I think, I think the beauty of the four key metrics is that they are very simple, except they're not, they are very simple on the surface, right? If, and if you take just, let's just take one of them, change lead time. In DORA's language, that starts when, like, a change is committed, and it ends when that change is in production. Okay. What does committed mean? Is it committed to a branch? Is it committed to the main line? Is that branch, has that branch been merged into the main line? Who knows? Um, I have a perspective, but it doesn't really matter what my perspective is. When it comes to production, what does it mean to be in production? If we're doing, um, progressive deploys, does it mean the first user in production has it or only when 100 percent of users have it? Is that when it's in production? Or somewhere in between? Or we're running mobile applications where we ship it off to the app store and we have to wait for it to get approved, or installed software where we package it up and we shrink wrap it into a box and we ship out a CD. Is that deployed? I mean, I don't, I don't know that anyone does that any, well, I'm sure it happens. I know in the Navy they put software on helicopters and they fly it out to ships. So that's, you know, all of these things happen. Here's the thing. For your application, what you need to do is think about those four metrics and write down for, for this application, commit, change, change lead time starts here, at this event, and ends here, at that event. We're going to write that down probably in maybe something like an architectural decision record, an ADR, put it into the code base. And as you write it down, make sure that it's clear, make sure that everyone agrees to it, and probably just as importantly, make sure that when you write it down, you also write down the date at which we will revisit this decision, right? Because it doesn't have to be set in stone. Maybe this is how we're going to measure things starting today, and we'll come back to this in six months. And some of the things that drive that might be the mechanics of how you deliver software. Some of the things that drive that might be the data that you have access to, right? And over time, you may have access to more precise data, additional data that you can then start to use in that. So the important thing is that you take these metrics and you contextualize them for your team. You write down what those metrics are, what their definitions are for your team and you revisit those decisions over time. 
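To make that concrete, here is a minimal sketch of what writing down and then applying such a team-specific definition could look like in code. The event names, the boundary choices (merge to main as the start, full rollout as the end), and the revisit date are hypothetical assumptions for illustration; they are not definitions from DORA or from the webinar.

```python
from datetime import datetime, timedelta

# Hypothetical, team-specific definition of "change lead time", written down
# next to the code base (the same content could live in an ADR).
LEAD_TIME_DEFINITION = {
    "starts_at": "merge_to_main",      # assumption: lead time starts when the change lands on main
    "ends_at": "rollout_100_percent",  # assumption: it ends when 100% of users have the change
    "revisit_by": "2025-02-01",        # date on which the team agreed to revisit this decision
}

def change_lead_time(events: dict[str, datetime]) -> timedelta:
    """Compute lead time for one change, using the agreed start and end events."""
    start = events[LEAD_TIME_DEFINITION["starts_at"]]
    end = events[LEAD_TIME_DEFINITION["ends_at"]]
    return end - start

# Made-up timestamps for a single change.
events = {
    "merge_to_main": datetime(2024, 8, 1, 10, 0),
    "rollout_100_percent": datetime(2024, 8, 2, 16, 30),
}
print(change_lead_time(events))  # 1 day, 6:30:00
```

The point is not the code itself, but that the start event, the end event, and the revisit date are explicit, agreed, and versioned alongside the application.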

Kovid Batra: Perfect. Perfect. All right. I think, uh, it's already time. Nathen, would you like to take one more question? 

Nathen Harvey: Uh, I'm happy to take one more question. Yes. 

Kovid Batra: All right. All right. So this is going to be the last one. Sorry if someone's question is not being asked here. But let's, let's take this up. Uh, this is Jimmy. Uh, do you ever try to map a change in behavior/automation/process to a change in, in the macro-DORA performance? Or should we have faith that our good practices is what is driving positive DORA trends? 

Nathen Harvey: Um, I think that, uh, having faith is a good thing to do. Uh, but validating your experiments is an even better thing to do. So, uh, as an example, uh, let's see, trying to map a change in behavior, automation, or process to a change in the macro performance. Okay. Uh, I'll, I'll, I'll pick a change that you might have or an automation that you might have. Let's say that, uh, today, your deployment process is a manual process, uh, and you're doing, and there's lots of steps that are manual, uh, and you want to automate that process. Uh, so, we can figure out what are our, what does our software delivery performance look like today, you can use a Typo dashboard, you could use the DORA quick check. Write that number down. Now make some investments in automation, deployment automation; instead of having 50 manual steps, you now have 10 manual steps that you take and 40 that have been automated. Now let's go back and remeasure those DORA performance metrics. Did they improve? One would think and one would have faith that they will have improved. You may find for some reason that they didn't. But, validating an experiment and invalidating an experiment are kind of the same thing. In either case, it's really about the approach that you take next. Are you using this as an opportunity to learn and decide how are we going to respond to the, the new information that we have? It really is about a process of continuous learning, and hopefully continuous improvement, but with every improvement, there may be setbacks along the way. 
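As a rough illustration of the before-and-after validation described here, the sketch below splits a list of hypothetical deploy dates at the day the deployment automation is assumed to have gone live and compares deployment frequency across the two periods. All dates, counts, and names are made up.

```python
from datetime import date

# Hypothetical deploy dates, e.g. pulled from a delivery pipeline or a dashboard.
deploys = [
    date(2024, 5, 3), date(2024, 5, 17), date(2024, 6, 4),   # before the automation
    date(2024, 7, 2), date(2024, 7, 5), date(2024, 7, 9),    # after the automation
    date(2024, 7, 12), date(2024, 7, 16), date(2024, 7, 23),
]
automation_live = date(2024, 7, 1)  # assumed date the deployment automation shipped

def deploys_per_week(dates: list[date]) -> float:
    """Deployment frequency over the span covered by the given deploy dates."""
    if len(dates) < 2:
        return float(len(dates))
    weeks = max((max(dates) - min(dates)).days / 7, 1)
    return len(dates) / weeks

before = [d for d in deploys if d < automation_live]
after = [d for d in deploys if d >= automation_live]

# Validate the experiment instead of assuming it worked.
print(f"before automation: {deploys_per_week(before):.1f} deploys/week")
print(f"after automation:  {deploys_per_week(after):.1f} deploys/week")
```

Whether the second number turns out higher or not, the team has learned something and can decide what to try next.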

Kovid Batra: Great. All right. On that note, I think that's our time. We tried to answer all the questions, but of course we couldn't. So we'll have more sessions like this, uh, to help all the audience over here. So thanks a lot. Uh, thank you for being such a great audience. Uh, we hope this session helped you build some great confidence around how to implement DORA metrics in your teams.

And in the end, a heartfelt thanks to my cohosts, Nathen and Ido, and to my Typo team who made this event possible. Thanks a lot, guys. Thank you. 

Nathen Harvey: Thank you so much. Bye bye. 

Ido Shveki: Thanks for having us. Bye.