Webinar: ‘The Hows and Whats of DORA’ with Dave Farley and Denis Čahuk

In this DORA exclusive webinar, hosted by Kovid from Typo, notable software engineers Dave Farley and Denis Čahuk discuss the profound impact of DORA metrics on engineering productivity.

Dave, co-author of 'Continuous Delivery,' emphasized the transition to continuous delivery (CD) and its significant benefits, including systematic quality improvements and more efficient software release cycles. Denis, a technical coach and TDD/DDD expert, shared insights into overcoming resistance to CD adoption. The discussion covered the challenges of measuring productivity, the difference between continuous delivery and continuous deployment, and the essential role of team dynamics in successful implementation. The session also addressed audience questions about balancing speed and quality, using DORA metrics effectively, and handling burnout and engineering well-being.

Timestamps

  • 00:00 - Introduction
  • 00:14 - Meet the Experts: Dave Farley and Denis Čahuk
  • 01:01 - Dave Farley's Journey and Hobbies
  • 02:38 - Denis Čahuk's Passion for Problem Solving
  • 06:37 - Challenges in Adopting Continuous Delivery
  • 11:34 - Engineering Mindset and Continuous Improvement
  • 14:54 - Measuring Success with DORA Metrics
  • 25:38 - Addressing Burnout and Team Performance
  • 32:33 - The Benefits of Ensemble Programming
  • 33:34 - ThoughtWorks and Lean Development
  • 34:45 - Social Variants in Agile Practices
  • 36:52 - Continuous Delivery and Team Well-being
  • 42:59 - The Importance of TDD and Pairing
  • 46:45 - Q&A Session
  • 01:00:09 - Conclusion and Final Thoughts

Transcript

Kovid Batra: All right, so time to get started. Thanks for joining in for this DORA exclusive webinar, The Hows and Whats of DORA, session three, powered by Typo. I am Kovid, founding member at Typo and your host for today's webinar. With me today, I have two extremely passionate software engineers. Please welcome the DORA expert tonight, Dave Farley. Dave is the co-author of the award-winning book Continuous Delivery, author of Modern Software Engineering, and a pioneer in DevOps. Along with him, we have technical coach Denis Čahuk, a TDD and DDD expert who helps tech teams build stress-free, high-performance development cultures. Welcome to the show, both of you. Thank you so much for joining in.

Dave Farley: Pleasure. Thank you for having me.

Denis Čahuk: Thank you for having me.

Kovid Batra: Great, guys. So I think we will take it one by one. Let's start with Dave first. Dave, this is a ritual that we follow on this webinar: you have to tell us something about yourself that your LinkedIn profile doesn't. So give us a quick, sweet intro about yourself.

Dave Farley: Okay. I'm a long-time software developer who really enjoys problem-solving. I really enjoy that aspect of the job. If you want to get me to come and work at your place, you tell me that the problem's hard to solve. That's the kind of stuff that I like, and I've spent much of my career working on some of those hard-to-solve problems and figuring out ways in which to make that easier.

Kovid Batra: Great. All right. So, Dave, apart from that, is there anything you love beyond software engineering, that you enjoy doing?

Dave Farley: Yeah, my wife says that my hobby is collecting hobbies. So I'm a guitarist; I used to play in rock bands years ago. And until fairly recently, I was a member of the British aerobatics team, flying competition aerobatics in a 300-horsepower, plus-10/minus-10 G aerobatic airplane, which was awesome, but I don't do that anymore. I stopped very recently.

Kovid Batra: That's amazing, man. That's really amazing. Great. Thank you so much for that intro about yourself. And Denis, over to you, man.

Denis Čahuk: Like Dave, I really like problem-solving. I spent the beginning of my career focusing too much on the compiler, and I like focusing on the human problems as well: what makes a team tick. In particular, TDD really scratched an itch about what makes teams resistant and what makes teams a little bit more open to change and improvement and dialogue, especially dialogue. That has become my specialty since. So yes, I brand myself as a TDD and DDD coach, but that's primarily there to drive engagement. I'm super interested in engineering leadership, and specifically in what drives trends and what helps engineers and engineering teams overcome their own resistance. If they're in their own way, why is that, and how do you resolve any kind of human blockers, not the compiler kind, in engineering things? I don't fly any planes, but I do share Dave's passion for music. I have a guitar and the drum there behind me, so whenever I'm not streaming or coding, I am jamming out as much as I can.

Kovid Batra: Perfect. Perfect, man. All right, so I think it's time we get started and move to the main section. The first thing that I'd love to talk to you about, Dave: you have this YouTube channel, and it's not in your name, right? It's Continuous Delivery. What makes continuous delivery so important to you?

Dave Farley: Somebody else said this to me very recently, and I agree with it: I think that continuous delivery, without seeming too immodest, because my name's associated with it, represents a step change in what we can do as software developers. I think it's a significant step forward in our ability to create better software faster. If you embrace the ideas of continuous delivery, which include things like test-driven development and DDD, as Denis was describing, and which are very team-centered as well, which Denis was also talking about; if you embrace those ideas and adopt the disciplines of continuous delivery, which fundamentally all devolve into one idea, which is that working software is always in a releasable state, then you get quite dramatically better outcomes. And I think, without too much fear of contradiction, continuous delivery represents the state of the art in software development. It's what the organizations that are best at software development do. So I think it's an important idea, and although I sound rather immodest because I'm one of the people that helped at least put the language to it, people were already doing these things, but Jez's and my book defined the language around which talk about continuous delivery is usually structured these days. I think software engineering is one of the most important things that we do in our society. It matters a lot, and we ought to be better at it as an industry, and I think that this is how we get better at it. So I get an awful lot of job satisfaction and personal pleasure from trying to help people on their journey towards achieving continuous delivery.

Kovid Batra: And I think you're just being modest here. Your book didn't just define or give a language there; it did way, way more than that. And kudos to you for that. My next question would be: what's the main ingredient that separates a team following CD from a team not following CD? What do you think makes the big difference there?

Dave Farley: There are some easy answers. Let me just tackle the difficult answer first, because I think the difficulty with continuous delivery is that the idea is simple, but it's so challenging to most people that it's very difficult to adopt. It challenges the way in which we think about software. I'm a bit of a pop psychologist, and I think in many ways it challenges our very understanding of what software is to some extent, and certainly what software development is. And that's difficult. It means that it changes every person's role in undertaking this. As I said already, it's a much more team-centered approach, I think, to be able to achieve this permanent releasability of our software. But fundamentally, if you want to boil it down to more straightforward concepts, what we're talking about here is applying what I think of as a kind of scientific rationalism to solving problems in software. The two biggest ideas there, from my point of view, are working in small steps and, essentially, treating each of those steps as a little experiment and assuming that we're going to be wrong. One of the big ideas in science is that you start off assuming that your ideas are wrong, and then you try and figure out how and why they're wrong. I think we do the same thing in continuous delivery and in modern software engineering. We try to figure out how we can detect where our ideas are wrong, then we try to detect whether they're wrong in those places, and then we correct them. That's how we build better software. I think that goes quite deep, and it affects quite a lot about how we undertake our work. But one of the step changes in capability is this: previous thinking about software development kind of started off from the assumption that our job is to get everything perfectly right from the start. And that's simply irrational and impossible. Instead, taking a more scientific mindset and starting off assuming that we will be wrong, so that we give ourselves the freedom to be wrong and the ability to recover from it easily, is almost the whole game.

Kovid Batra: Got it. I think Denis has a question he wants to ask. Yeah, please go ahead.

Denis Čahuk: Sure. I'm going to go off script. I like that distinction of psychologist; sometimes I find myself in a similar role. I think the core disagreement comes from this idea that a lot of engineers, organizational owners, CTOs don't like: the idea that their code is an experiment. They want certain assurances that it has been inspected and that it's not something we expect to fail. So from their perspective, non-CD adopters think that the scientific rationale is hard inspection against requirements rather than conducting an experiment. And I see that providing a lot of resistance to CD adoption, because it is very hard to come from that rationale and say, okay, we're now doing CD, but we're not doing CD right now, we're adopting CD right now. So we're kind of doing it but not doing it, and it just creates a lot of tension and resistance in companies. Did you find similar situations? How do you massage this sort of identity shift, this identity crisis?

Dave Farley: Yeah, I think absolutely that's a thing, and that is the challenge: to try and find ways to help those people to see the light. I know I sound like an evangelist, but I guess I see that as part of my role. But..

Denis Čahuk: You did write the book, so..

Dave Farley: Yeah, so I think this is in everybody's interest. I mean, the data backs me up. The DORA data says that if you adopt the practices of continuous delivery, you spend 44 percent more time as an organization on building new features than if you don't. That's a pretty slam-dunk case in terms of value as far as I'm concerned, and there's lots more to it than that. So why wouldn't anybody want to be able to build better software faster? And this is the best way that we know of so far to do that. That seems like a reasonably rational way of deciding that this is a good idea, but it's not enough to change people's minds. And you've got to change people's minds in all sorts of different ways. Going back to those people that you mentioned: engineers who think it's their job to get it right first time don't understand what engineering is. Managers who want to build the software more quickly, get more features out, don't understand what building software more quickly really means. Because if either of those groups knew those things, they'd be shouting out and demanding continuous delivery, because it's the thing that you need. We don't know the right answers first time. Look at any technology, let alone any product, and its history. Look at the aeroplane. The first aeroplane that could carry a person under power in a controllable way was the Wright Flyer in 1903. And for the first 20 or 30 years, all aeroplanes were death traps; they were such dangerous devices. But engineering as a discipline adopted an incremental approach to learning and discovery to improve the aeroplane. And by 2017, the equivalent of two thirds of the population of the planet flew in commercial airliners and nobody was killed. That's what engineering does. It's an incremental process. We never, ever get it right first time. The first iPhone didn't have an app store, didn't have a camera, didn't have Siri, had none of these things, didn't..

Denis Čahuk: Multitasking.

Dave Farley: Didn't have multitasking, all of these things. And now we have these amazing devices in our pockets that can do all sorts of amazing things that the original designers of the iPhone didn't actually predict. I'm sure that they had vague wishes in their minds, but they didn't predict them ahead of time. That's not how engineering works. The way that engineering works is by exploration and discovery, and to be good at it, we need to organize ourselves to be good at exploration and discovery. If we want to build things more efficiently, then we need to adopt the disciplines that allow us to make these mistakes, accept that we will, detect them as quickly as we can, and learn from them as quickly as we can. That's why, to my mind, in the results of the DORA work, there's no trade-off between speed and quality: because you work in these small steps, you get faster feedback on whether your ideas are good or bad. So those small steps are important. And then when you find out that something is a bad idea, you correct it. And that's how you get to good.

Kovid Batra: Totally. I think that's one very good point here. We're sure now that CD and other practices like TDD impact engineering in a very positive way, improving overall productivity and actually delivering value, like the slam-dunk 44 percent more time for value delivery, right? But when it really comes to proving that number to these teams, do you use any framework? Do you use something like DORA or SPACE to tell whether implementing CD was effective? How do you measure that impact?

Dave Farley: Mostly I recommend that people use the DORA metrics. Let me just talk momentarily about that, because I think it's important. I think the work of Nicole and the rest of the team in starting off DORA was close to genius in identifying, as far as I can think of, the only generic measures in software. If you think about what the DORA metrics of stability and throughput measure, it's the quality of the software that we produce and the rate at which we can produce software of that quality. Stability is the quality; throughput is the efficiency with which we can produce software of that quality. Those are fundamental. They say nothing at all about the nature of the problem we're solving, the technology we're using, or anything else. If you're configuring SAP to do a better job of whatever it is that you're trying to do, stability and throughput are still a good measure of success. If I'm writing some low-level code for an operating system, they're still a good measure of success. It doesn't matter. So we have these generic measures. Now, they aren't enough to measure everything that's important in software. What they do is tell us whether we're building software right. They don't tell us whether we're building the right software, for example. So we need different kinds of experiments to understand other aspects of software. But I don't think there's much else; there's nothing else that I can think of that's in the same category as stability and throughput in terms of the generality of those measurements. So if you want a place to start on what to measure, start with stability and throughput, and then figure out how to measure the other things, because they're going to be dependent on your context.
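To make stability and throughput concrete: the four measures underneath them reduce to a small calculation over deployment records. Here is a minimal sketch in Python; the record fields and names are assumptions for illustration, not any particular tool's schema.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Optional

# Hypothetical deployment record; field names are invented for this sketch.
@dataclass
class Deployment:
    commit_at: datetime                     # when the change was committed
    deployed_at: datetime                   # when it reached production
    caused_failure: bool = False            # did it degrade service in production?
    restored_at: Optional[datetime] = None  # when service was restored, if it failed

def hours(delta) -> float:
    return delta.total_seconds() / 3600.0

def dora_metrics(deploys: list, period_days: int) -> dict:
    """Throughput: deployment frequency, lead time for changes.
    Stability: change failure rate, time to restore service."""
    failures = [d for d in deploys if d.caused_failure and d.restored_at]
    return {
        "deploys_per_day": len(deploys) / period_days,
        "median_lead_time_hours": median(
            hours(d.deployed_at - d.commit_at) for d in deploys),
        "change_failure_rate": sum(d.caused_failure for d in deploys) / len(deploys),
        "median_restore_hours": median(
            hours(d.restored_at - d.deployed_at) for d in failures) if failures else 0.0,
    }
```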

I'm a big fan of Site Reliability Engineering as a model for this. It talks in terms of SLIs and SLOs, Service Level Indicators and Service Level Objectives. The Service Level Indicator is the measure that will determine the success of this service. So for every single feature, you identify what you should measure to know whether it's good or not, and then you set an objective of what score on that scale you want to achieve for this thing. That's a good way of measuring things, but it's kind of difficult. The huge difference is that it's completely contextual, not even application by application, but feature by feature. So one feature might improve the latency; another feature might improve the rate at which we recruit new customers. That's how we get experimental with those kinds of things, by being more specific and targeted with what we measure. I am skeptical of most of the generic measures. Not because I don't want them; it's just that I don't think that most of the others are generic and do what we want them to. I'm not quite sure what I make of the SPACE framework, which is Nicole's new work on measuring developer productivity. She's very smart and very good at the research-driven stuff. I spoke to her about some of this on my podcast, and she had interesting things to say about it. I am still nervous of measuring individual developer productivity, because as Denis said in his introduction, one of the really important things is how well a team works. Modern software development, unless it's building something trivial, is usually a team game. It's a matter of people coming together and organizing themselves in a way to be able to achieve some goal. And that takes an awful lot, and you can have people working with different levels of skill, experience, and diligence who may still be contributing strongly to the team, even if they're not pulling their weight in other respects. So I think it's a complicated thing to measure, a very human thing to measure. I'm a bit suspicious of that, but I'm fairly confident that Nicole will have some data that proves me wrong. That's my position so far.
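To illustrate the SLI/SLO idea, a minimal sketch follows. The indicators, targets, and names are hypothetical; the point is that every feature gets its own indicator and its own objective.

```python
from dataclasses import dataclass

@dataclass
class SLO:
    indicator: str             # the Service Level Indicator being judged
    target: float              # the Service Level Objective on that scale
    higher_is_better: bool = True

def meets_objective(slo: SLO, measured: float) -> bool:
    """Compare a measured indicator value against its objective."""
    return measured >= slo.target if slo.higher_is_better else measured <= slo.target

# Contextual, feature-by-feature objectives, in the spirit of Dave's examples:
checkout_latency = SLO("p99 checkout latency (ms)", target=300.0, higher_is_better=False)
signup_rate = SLO("new-customer signups per week", target=120.0)

print(meets_objective(checkout_latency, measured=275.0))  # True: within objective
print(meets_objective(signup_rate, measured=95.0))        # False: below objective
```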

Kovid Batra: Totally makes sense. With almost all the frameworks there have been some challenges, and so it is with DORA and SPACE. But in your experience, when you have seen and helped teams implement such practices, what do you think are the major reasons they get stuck, not implementing these frameworks, not implementing proper engineering metrics? What stops them from adopting it?

Dave Farley: I think specifically with using DORA, there are some complexities. If you are in a regular kind of organization that hasn't been working in the ways we've been talking about so far, then measuring stuff, just measuring stuff, is hard. You're not used to doing it. The number of organizations that I talk to that couldn't even tell you how much time was spent on a feature; they don't measure it, they don't know. And so just getting the basics in, the thinking in, to be able to start to be a little bit more quantitative on these things, is hard. And that's hard for people like us to get our heads around, because when you've got a working deployment pipeline, this stuff is actually pretty easy: you just instrument your deployment pipeline and it gives you all the answers, pretty much. So I think there's that kind of practical difficulty, but I don't think that's the big-ticket problem. The big-ticket problem is just the mindset. I am old enough and comfortable enough in my shoes to recognize that I'm a grumpy old man, and part of my grumpy-old-manness is to look at our industry and think that it is largely a fashion industry, not a technical industry. There's an awful lot of mythology that goes on in the software industry that has simply nothing to do with doing a good job; it's just what everybody thinks everybody else is doing. And I think that's incredibly common, and you've got to overcome it. I'm going to trample on some people's sacred cow right now: if you're talking to a team that works with feature branching, the evidence is in. Feature branching doesn't work as well as trunk-based development. That's more learning that we got from the DORA metrics. Teams that work with feature branches build slightly lower-quality code, and they do it slightly more slowly than teams working on trunk. Now the problem is that, to people who buy into what I would think of as the mythology of feature branching, it's almost inconceivable how you can do trunk-based development safely. The fact that you can do it safely, and at scale with complicated software, they start to deny, because they can't think of how you would do it. And that's the kind of difficulty that you face. It's not a rational way of thinking about it, because I think it's very easy to defend why trunk-based development and continuous integration are more accurate. You organize things so that there's one point of truth. In feature branching, you don't have one point of truth; you have multiple points of truth. And it's clearly easier to determine whether one point of truth is correct than to decide that multiple points of truth, which you don't yet know how you're going to integrate together, are correct. You can't tell.
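Dave's point that a working deployment pipeline "gives you all the answers" can be pictured like this: each stage appends a timestamped event to a log, and a metrics job later derives lead time, deployment frequency, and failure rate from that log. A sketch under assumed names; the event fields and file are invented for illustration.

```python
import json
import time
from pathlib import Path

EVENTS = Path("pipeline_events.jsonl")  # hypothetical event log

def emit(stage: str, commit_sha: str, ok: bool) -> None:
    """Append one pipeline event; downstream jobs can compute the DORA
    measures from this log without any extra bookkeeping."""
    record = {"ts": time.time(), "stage": stage, "commit": commit_sha, "ok": ok}
    with EVENTS.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Called from the pipeline itself, for example:
emit("commit", "4f2a9c1", ok=True)
emit("acceptance_tests", "4f2a9c1", ok=True)
emit("deploy_production", "4f2a9c1", ok=True)
```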

So it's tricky. I think that there are rational ways of thinking that help us to do this, which is why I've started to think about and talk about what we do as engineering, more than as craft or just software development. If we do it well, it's engineering, and if we use engineering, we get a better result, which is kind of the definition of what engineering is in any other discipline. If we work in certain ways, we do get better results. I think that's important stuff. So it's very, very hard to convince people and to get them away from what I would think of as their mythologies sometimes. And it's also difficult to have these kinds of conversations and not seem very dogmatic. I get accused of being dogmatic about this stuff all of the time. Being arrogant for a moment: I think there's a big difference between being dogmatic and being right. There's a difference between having evidence like the DORA metrics and a model, like the way that I describe how these things stitch together and the reasons why they work, and just having a favorite way of doing things. I don't like continuous integration because it's my favorite. I like continuous integration because it works better than anything else. I like TDD not because I think it's my ideal for designing software; it's just that it's a better way of designing software than anything else. That's my belief. And so it's difficult to have these kinds of conversations, because inevitably my viewpoints are going to be colored by my experiences and what I've seen. But I try hard to be honest with myself, as an aspiring engineer and scientific rationalist. I try to be true to myself and to critique my own ideas to find the holes in them. And I think that's the best that we can do in terms of staying sane on these things.

Kovid Batra: Sure. On that note, I think Denis would resonate with that, because last time Denis and I talked, he mentioned how he's helping teams implement TDD and clearing those roadblocks from time to time. So I'm sure Denis has questions around that and would like to jump in. Denis, do you have any questions?

Denis Čahuk: I have a few, actually. I need your help a little bit to stay on topic. Dave mentioned something really important that touched me more than the rest, which is this concern about measuring individual performance. I've been following Nicole's work as well, especially the SPACE metrics and what the Team Topologies community is doing now with flow engineering. There is, let's say, a strong interest in the community, in the engineering intelligence community, to measure burnout, to measure...

Dave Farley: Mm-Hmm.

Denis Čahuk: So, to clarify: do we have a high-performing team that's burnt out, or do we have a healthy team that's low-performing? To really start to course-correct in the right areas, it is very difficult to measure burnout without measuring the individual, because it needs to be a subjective experience. And I share Dave's concern about the productivity metrics being put in the same bucket as the psychological safety and burnout research. So I'm wondering, when you're dealing with teams, because I see this with product engineering, I see this with TDD, I see this with engineering leaders who are just resistant to the idea: are we burned out? Are we just tired but following the right process? Or is the process correct but being implemented incorrectly? How do you navigate this rift? Specifically, do you find any quick lagging indicators from the DORA metrics that help you cajole the conversation a little bit more? Or do you go to other metrics, like the SPACE metrics, or to surveying, to help you start some kind of continuous delivery initiative? A lot of teams who are not doing CD do complain about burnout when they're asked to start just measuring everything, out of, I would say, fatigue.

Dave Farley: Yeah, and it gets to Matthew and Manuel's thing, the Team Topologies guys' description of cognitive load. I know it's not their idea originally, but they applied it to software teams. I think burnout is primarily a mix of excessive cognitive load and the freedom, or lack of it, to direct your own destiny within a team, you know? You need the Daniel Pink thing: autonomy, mastery and purpose. You need freedom to do a good job, you need enough scope, and those are the kinds of things that I think are important in terms of measuring high-performance teams. I think that it's a false correlation. I know that recent versions of the DORA reports have thrown up some findings that seem to me counterintuitive, people saying things like working with continuous integration is correlated with increased levels of burnout. That makes no sense to me. I put this to Nicole when I spoke to her as well, and she was a little skeptical of that too, in terms of the methodology for collecting the data. That's no aspersion on the people; we all get these things wrong from time to time, but I'm distrustful of that result. But if that is the result, you know, I've got to change my views on things. My experience, and that's in the absence of hard data, except that previous versions of DORA gave us hard data and now the finding seems to have changed, my experience has been that teams that are good at continuous delivery don't burn out, because it's long-term sustainable. The LMAX team that I led at the beginning has been going, how long is it now? About 15 years. Those people weren't burning out, you know, and they're producing high-quality software still, and their process is working still. So I think that mostly burnout is a symptom of something being wrong, in terms of too much cognitive load and not enough control of your own destiny within the team. Now, that's complicated stuff to do well, and it gets into some of the, for want of a better term, softer things, the less technical aspects of organizing teams and leading teams and so on. We need leaders that are inspirational, that can set a vision and a direction, and that also demonstrate the right behavior: going home on time, not working all hours, not telling people off when things go wrong if it's not their fault, all these kinds of things. The best teams in my experience take a lot of personal responsibility for their work, but they do that themselves; it's not externally forced on them. And that's a good thing, because it makes you both prouder of the things that you do and more committed to doing a good job, which is in everybody's interest.

So I think there's quite a lot to this. And again, none of it's easy, but I think that shaping our work to keep our software in a releasable state, working in small steps, gathering feedback, focusing on learning, all of those techniques, the kinds of things that I talk about all the time, are some of the tools that help us to at least have a better chance of reducing burnout. Now, there are always going to be some individuals in any system that get burnt out for other reasons. You get burnt out because of pressures from home, or because your dog died, or whatever it might be. But we need to treat this stuff seriously, because we need to take care of people, even if that's only for pragmatic commercial reasons: we don't want to burn people out, because that's not going to be good for us long-term as an industry. That's not the primary reason why I would do it, but if I'm talking to a hard-nosed commercial person, I still think it's in their interest to treat people well. And so we need to be more careful with people and more caring of people in the workplace. It's one of the things that I think ensemble programming, whether it's pairing or mobbing, is significantly better for, and that's probably counterintuitive to many people, because there's a degree to which pair programming in particular applies a bit of extra pressure. You're a bit more on your game; you get a bit more tired at the end of each day's work. But you also build better friendships amongst your teammates, you learn from one another more effectively, and you can depend on one another. If you're having a bad day, your pair might pick up the pace and sustain productivity, or whatever else. There are all these kinds of subtle, complex interactions that go into producing a healthy workspace where people can keep at it for a long time, years at a time. And I think that's really important.

I worked at a company called ThoughtWorks in the early 2000s, and during that period ThoughtWorks, and ThoughtWorks in London in particular, where I worked, were, I think, some of the thought leaders in agile thinking. We were pushing the boundaries of agile projects at that time and doing all sorts of interesting things, so we experimented a lot. We tried out lots of different leading-edge, often bleeding-edge, ideas in development. I worked on one of the early teams in London that was doing full-blown lean and applying it to software development. And one of the things that we found was that it tended to burn us out a little bit over months, because it just started to feel a bit like a treadmill. There was no cadence to it: you'd just pick up a feature off the Kanban board, work on that feature, deliver the feature, showcase the feature, pick the next feature, and so on and so on. And you did that for months on end. We were building good software, and we were mostly having a good time, but over time it made us tired. So we started to think about how to build more social variation into the way we did things. And we ended up doing the same thing, but also having iterations, or 'sprints' as most people would call them these days, of two weeks, so that we could have a party at the end and celebrate the things that we did release, even though we weren't committing to what we'd release in the next two weeks. We'd have some cake and stuff like that at the end, all of those human things that just made it feel a little bit different. We could celebrate our successes and forget about our losses. Software development is a human endeavor. Let's not forget that and not try to turn us into cogs in a machine. Let's treat us like human beings. Sorry, I've gone off-road. I'm not sure if I answered your question.

Denis Čahuk: This is great. This is great, Dave. No need to apologize. We're enjoying this, and I think our audience is as well.

Kovid Batra: I'm sure. All right. So, Denis, uh, do you have any other question?

Denis Čahuk: Well, I'd like to follow up on the ThoughtWorks story that Dave just mentioned. You know, you mentioned you had evidence of high performance in that team. We tend to forget that lean is primarily a product concern, not an engineering concern, so it has to go through the wringer to make sure it applies to software engineering as well. And I have similar findings with things like lean and Kanban, and particularly Scrum, or the bad ways of doing Scrum: they can show evidence of high performance, but not sustainably, due to the lack of a social component. And retrospectives are a lame excuse for a social component; they just force people to judge each other, and usually produce negative results rather than positive ones. So I'm wondering: you just mentioned this two-week iteration cycle for increments, but you're also leaning towards small batches. Are you still adamant on this two-week cadence for social engagement? There does seem to be a difference.

Dave Farley: Yeah, so what we did is that we retained the lean, Kanban-style planning. We just kept that as it was, but we overlaid a two-week schedule where we would have a kickoff meeting at the start of an iteration and a little retrospective at the end of an iteration, and we would talk about the work that we did over that period. So we had this kind of different cycle, and that was purely human stuff. It wasn't even really visible outside of the team. It was just the way that we organized our work, so that we could look ahead to what was coming downstream as far as our Kanban board said today, and look back at what we'd delivered over the previous iteration. And that was enough to give us this more human cycle, you know, because we could be looking forward: so, I'm releasing this feature, we're nearly at the end, we'll talk about that tomorrow, or whatever else it is. It was just nice to reconnect with the rest of the team in that way. I suppose you could pragmatically look at it as just a meeting schedule for the team-level stuff, but it felt like a bit more than that to us. And by default, if I'm in a position to control these things, that's how I've organized teams ever since. That's how we worked at LMAX, where we built our financial exchange. That's the organization that's been going for 15-odd years, doing this real high-performance version of continuous delivery.

Denis Čahuk: But to pick your brain, Dave, sorry to interject: when you say you separated the work cycles from the social cycles, that does involve daily deployments, right? Daily pairing, daily deployments. So the releases were separate from the meeting routine.

Dave Farley: Yes. Yeah, we were doing the Kanban, continuous delivery kind of thing: when a feature was finished, it was ready to go. So we were working that way. There were some limitations on that sometimes, but pretty much, that's a very close approximation to an accurate statement, at least. So we'd essentially release on demand. We'd release at every point when we were ready, and that was usually more often than once every two weeks. So the releases weren't forced to align with those two-week schedules. It wasn't a technical thing at all; it was primarily a team social thing. But it worked. It worked very well.

Denis Čahuk: I really liked the brief mention of SPACE and Nicole's other work. Kovid and I are very active in the Google community, sort of organizing DORA-related events. And Google does have a very heavy interest in measuring well-being, measuring burnout, or just, you know, trying to figure out whether engineers and managers are actually really contributing or whether they're just slowing things down. And it's very hard to judge from the DORA metrics alone, or at least to get a clear picture. Is there anything else you use for situational awareness? What would you recommend for finding evidence of, say, micromanagement, or a team that wants to do TDD but has a sort of anti-pairing stigma? How would you approach the more survey-oriented, SPACE-oriented side?

Dave Farley: From my experience, and I say that with reservations, not boasting that I've got great experience, I'm a little bit wary of trying to find quantitative ways of evaluating those things. These are very human things. For some of the examples that you mentioned: I've spent a significant proportion of my career as a leader of technical teams, and I've always thought that it was a failure on my part as a leader if I didn't notice that somebody was struggling or that somebody wasn't pulling their weight, or if I hadn't built the kind of relationship where, if I don't know something, the team comes and tells me and then I can help. I'm in a slightly weird position, for example, in terms of career reviews. I think that as a manager or a leader, if you don't know the stuff that you find out in a review, you're not doing your job. You should be knowing that stuff all of the time. It's kind of the Gemba thing. It's walking around and being with the team. It's spending time with and understanding the team as a member of the team, because that's what you are. You're not outside it. You're not different. You're a member of the team, so you should feel part of that, and you should be there to help people guide their careers and steer them in the right direction of doing better and doing good things, from their point of view and from the organization's point of view. But to do that, you've got to understand a little bit about what's going on. And that feels like one of those very, very human things. It's about empathy, and it's about understanding. It's about communication, and it's about trust between the people. And I'm not quite sure how well you can quantify that stuff.

Denis Čahuk: I coach teams primarily through this kind of engagement, to rebuild trust.

Dave Farley: Yes.

Denis Čahuk: So I have found I have a zero success rate in adopting TDD if the team isn't prepared to pair on it.

Dave Farley: Yeah.

Denis Čahuk: Once the team is pairing, once the team is ensembling: TDD, continuous delivery, trunk-based development, no problem. Once they're prepared to invest time in each other, to form friendships or, if nothing else, cordial acquaintances, we can bridge that gap: well, I want you to write a test so that he can go home and spend time with his kids without worrying about the deployment. That is the ulterior motive, not that there's some fairy-tale fashion metric to tick a box on.

Dave Farley: Yeah.

Denis Čahuk: Since you mentioned quantitative metrics, to backtrack a little and tie it together with TDD: did you find any lagging indicators in teams that did adopt TDD after you came in? What are the key metrics that get better or different after TDD adoption? Or perhaps leading indicators that say, hey, this more than anything else needs attention?

Dave Farley: I think mostly stability. It's a lagging indicator, but I think that's the measure that tells us whether you're doing a good enough job on quality. And if you're not doing TDD, mostly the data says you're not doing a good enough job on quality. There are a lot of other measures that reinforce that picture, but fundamentally, in terms of monitoring our performance day-to-day, I think stability is the best tool for that. I'm also interested, as a technologist, in some of the work that Adam Tornhill and CodeScene are doing in terms of red code and things like that: patterns of use in code, monitoring the stuff that changes a lot versus where defects happen, the crossover between cyclomatic complexity and other measures of complexity in code and the need to change it a lot, all that kind of stuff. I think that's all interesting, but I see it as reinforcing this view of how important quality is. Fundamentally, we need to find ways of doing less work and managing our cognitive load to achieve higher quality, and that's what TDD does. TDD isn't the end in itself. It's a tool that pushes us in the direction of the end that matters, which is building high-quality software and maintaining our ability to change it. So I think that TDD influences software in some deep ways that people who don't practice TDD miss all of the time.

And it's linked to lots of other practices. Like you said, pairing is a great way of helping to introduce TDD, particularly when there are people in the team that already know how to do it. That's certainly the way that you spread it. But as I say, I'm wary of measures. I tend to either use tactical measures that just seem right in the context of what we're doing now, treating each thing as an experiment and figuring out how to experiment on this thing and what I need to measure to do that, or I use stability and throughput primarily.
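For readers who haven't seen the discipline in action, here is the smallest possible TDD loop, a hypothetical example rather than anything from the webinar: write a failing test first, write just enough code to make it pass, then refactor with the test as a safety net.

```python
import unittest

# Step 1 (red): the test is written before the code exists, and fails first.
class TestPriceWithTax(unittest.TestCase):
    def test_adds_20_percent_tax(self):
        self.assertEqual(price_with_tax(100.0), 120.0)

# Step 2 (green): the simplest code that makes the test pass.
def price_with_tax(net: float) -> float:
    return net * 1.20

# Step 3 (refactor): reshape freely; the test keeps the behavior pinned down.
if __name__ == "__main__":
    unittest.main()
```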

Kovid Batra: I'll just take a pause here for all of us, because we have a Q&A lined up for the audience. We'll take a 30-second break here, and audience, you can get started posting your questions. We are ready to take them.

Denis Čahuk: We already have a few comments, and we had...

Kovid Batra: Okay. I think, uh, we can start with the questions.

Denis Čahuk: Before we go into Paul's question: Paul has a great question, I just want to preface that. Not this one; the DORA-related one.

Kovid Batra: But I like this one more.

Denis Čahuk: Yes.

Kovid Batra: Dave, I think you have to answer this one. Uh, where do you get your array of t-shirts?

Dave Farley: So mostly I buy my t-shirts from a company based in Ireland called QWERTEE. And if you go to any of my videos, there's usually a link underneath where you can get a discount off the t-shirts, because we did a deal with QWERTEE after so many people commented on my t-shirts.

Denis Čahuk: Great t-shirts. Well done.

Kovid Batra: Yeah. Denis.

Denis Čahuk: I just wanted to preface Paul's other question regarding how to measure. You know, Kovid and I are very active in the DORA communities in the Google group, and by far the most asked questions are: how do I precisely measure X? How do I correctly measure this? My team does not follow continuous delivery, we have feature branches; how do I correctly measure this metric, that metric? Before we go into too much detail, I just want to emphasize that if you're not doing continuous delivery, then the metrics will tell you that you should probably be doing continuous delivery. And..

Dave Farley: Yeah.

Denis Čahuk: The ulterior motive is: how can we get to continuous delivery sooner? Not: how can we correctly measure DORA metrics and continue doing feature branching? That is generally the most trending conversation topic in these groups, and I just want to nail this down: it's about the business. It's about continuous delivery, running experiments quickly, smoothly, safely, sustainably, rather than directly measuring any kind of dysfunctional workflow. Even if all you can judge is that your workflow is bad because the metrics don't track properly, which is usually where people turn towards DORA metrics.

Dave Farley: Yeah. I would add that even if you get the measures and you use the measures, you're still not going to convince people; the measures alone aren't enough. You need to approach this from a variety of different directions to start convincing people to change their minds, and that's without being disrespectful of the people whose viewpoints differ, because it's hard to change your mind about something if you've made a career working in a certain way. It's hard to change away from the things that you've learned. So this is challenging, and that's the downside of continuous delivery. It works better than anything else. It's the most fun way of organizing our work. It does tend to eliminate, in my experience, burnout in teams, all of these good things. You build better software more quickly working this way. But it's hard to adopt when you've not done it before. Everybody that I know that's tried it likes it better, but it's hard to make the change.

Denis Čahuk: It's a worthwhile change that manages a lot of stress and burnout, but that doesn't mean there aren't difficult conversations along the way.

Dave Farley: Sure.

Kovid Batra: All right, moving on to the next one: how do you find the right balance between speed and quality while delivering software?

Dave Farley: The DORA metrics answer this question. There is no trade-off, so there is no need to balance. If you want more speed, you need to build with higher quality. If you want more quality, you need to build faster. Let me just explain that a little, because I think it's useful to have this idea in mind; we have to defend ourselves, because it seems like a reasonable idea that there's a trade-off between speed and quality. It's just not true, but it seems like a reasonable idea. So: if I build bad software this week, then next week I've got a load more pressure on me to build next week's work, and I'm going to have all of that pressure plus all of the cost of the bad software that I wrote this week. It's obviously more efficient if I build good software this week, so that I don't have that extra work next week, and then I can build good software next week as well. And how that plays out is where the 44 percent comes from. That's where the increase in productivity comes from. If we concentrate and organize our work to build higher-quality software, we save time. It doesn't cost time.

Now, there's a transition period. If you're busy working in a poor software development environment that's building crap software, then it's going to take you a while to learn some of these things. So there's an activation energy to get better at building software. But once you do, you will be going faster and building higher-quality software at the same time, because they come together. So what do we mean by fast, when we talk about going fast if you want high-quality software? Fundamentally, it's about working in smaller steps. We want to organize our work into much smaller steps, so that after each small step we can evaluate where we are and whether the step we took was a good one. And that's in all kinds of ways. Does my software do what I think it does? Does it do what the customer wants it to do? Is it making money in production, or whatever else it is? All of these things are learning points, and we need to build that more experimental mindset deep into the way that we work.

And the smart thing to do, to optimize all of this, is to make it easy to do the right things: to make it easy for us to carry out these small steps and these experiments. That's what continuous delivery does. That's what the deployment pipeline is fundamentally for. It's an experimental platform that will give us a definitive statement on the releasability of our software multiple times per day. And that makes it easier to work in these small steps, to do that quickly, and to get high-quality results back.
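One way to picture the deployment pipeline Dave describes: a machine for answering "is this change releasable?" many times a day. A minimal sketch; the stage names and stand-in check are invented for illustration.

```python
# Each stage is an automated experiment on the change; real stages would
# shell out to test suites, static analysis, performance rigs, and so on.
def run_stage(stage: str, change: str) -> bool:
    print(f"running {stage} for {change}")
    return True  # stand-in for the real verdict

STAGES = [
    "commit: unit tests + static analysis",
    "acceptance: whole-system functional tests",
    "non-functional: performance, security, resilience",
]

def releasable(change: str) -> bool:
    """The pipeline's job: a definitive yes/no on releasability, quickly.
    all() stops at the first failure, keeping feedback fast."""
    return all(run_stage(stage, change) for stage in STAGES)

if releasable("commit 4f2a9c1"):
    print("proven releasable; releasing is now a business decision")
```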

Kovid Batra: Totally makes sense. Moving on to Agustin's question: why is it so important, in your opinion, to differentiate between continuous delivery and continuous deployment, and how does that affect delivery process performance, also known as the DORA metrics?

Dave Farley: Let me first differentiate between them and then explain why I think it matters. Continuous delivery is working so that our software is always in a releasable state. Continuous deployment is built on top of continuous delivery: if all of your tests pass, you just push the change out automatically into production. And that's a really, really good thing. If you can get to that point where you can release small changes all of the time, that's probably the best way of optimizing to get this really fast feedback, all the way out to your end users. Now, the problem is that there are some kinds of software where that doesn't make any sense. There are some kinds of software where, for a variety of different reasons, depending on the technology, the regulation, real practical limitations, we can't do that. Tesla are a continuous delivery company, but part of what they are continuously delivering is software embodied as silicon, burnt into devices in the car. There's physics involved in burning the silicon, so you can't always release every change immediately the software is done. That's not practical, so you have to manage it slightly differently. One of my clients, Siemens, builds medical devices, and within the regulatory framework for medical devices that can kill people, you're not allowed to release them into production all of the time. So continuous delivery is the foundational idea, but continuous deployment is kind of the limit, I suppose, of where you can get to. If you're Amazon, continuous deployment makes a huge amount of sense. Amazon are pushing out changes; I think it's currently 1.5 changes per second. It might be more than that; it might be five changes per second. Something ridiculous like that. But that's what they're doing, and so they're able to move ridiculously fast and learn ridiculously quickly, and so build better software. From a more internally focused viewpoint, you can think of it as each optimizing for slightly different things.

Continuous delivery gives us feedback on whether we are building things right, and continuous deployment gives us feedback on whether we're building the right things. We learn more about our product from continuous deployment by getting it into the hands of real users, monitoring that, and understanding the impact. We can't really get that kind of feedback any other way than getting out to real users; we don't learn those lessons until real users are really using it. Continuous delivery, though, gives us feedback on: does this do what we think it's doing? Is it good quality? Is it fast enough? Is it resilient enough? All of those kinds of things. We can measure those things, and we can know them before we release. So they are slightly different things, and they balance off in different ways; they give us different kinds of value. There's an excellent book on continuous deployment that's recently been released. I've forgotten the name of the author; Valentina somebody, I think. I wrote the foreword, so I should remember the name, and I'm very embarrassed, but it's a really good book, and it goes into lots of detail about continuous deployment as distinct from continuous delivery. I think, but I suppose I would say this, wouldn't I, that continuous delivery is the more foundational practice here. And I think this is one of the very, very few ideas where Jez Humble and I would come at it from slightly different perspectives. I've tended to spend the latter part of my career working in environments where continuous deployment wasn't practical. I was never going to get my clients to do it in the environments in which they were building things, and sometimes they couldn't even if they wanted to. I think Jez has worked in environments where continuous deployment was a little easier, and so that seems more natural. And I think that's kind of why some of the DORA metrics, for example, measure efficiency based on assumptions, really, of continuous deployment.

So I think continuous deployment is the right target to aim for. You want to be able to release as frequently as is practicable, given the constraints on you, and you want to push at the boundaries of those constraints where you can. For example, working with Siemens, we weren't allowed to release software into production on medical systems in clinical settings, but we could release much more frequently to non-clinical settings. So we identified some non-clinical settings, and we released frequently to those places, in university hospitals, for example, and so on.
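The distinction Dave draws can be reduced to one conditional at the end of such a pipeline: continuous delivery proves every change releasable; continuous deployment additionally ships every green change automatically. A hypothetical sketch; both helper functions are stand-ins, not any real tool's API.

```python
def publish_releasable_artifact(change: str) -> None:
    print(f"{change}: artifact published, proven releasable")  # stand-in

def deploy_to_production(change: str) -> None:
    print(f"{change}: pushed to production")  # stand-in

# Continuous delivery: every green change is proven releasable.
# Continuous deployment: flip the flag and every green change also ships.
AUTO_DEPLOY = False  # constrained domains (regulated devices, burnt silicon)
                     # keep this False; an Amazon-style web estate sets it True

def on_green_build(change: str) -> None:
    publish_releasable_artifact(change)
    if AUTO_DEPLOY:
        deploy_to_production(change)
```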

Kovid Batra: So I think it's almost time, and we do have more questions, but because the stream is for an hour, it's going to end. We'll take those questions offline; I'll email the answers to you. Audience, please don't be disappointed; it's just in the interest of time that we have to stop here. Thank you so much, Dave, Denis, for this amazing, amazing session. It was nice talking to you and learning so much about CD, TDD, and engineering metrics from you. Thank you so much once again.

Dave Farley: It's a pleasure. Thank you. Bye-bye. Thanks everyone.

Denis Čahuk: Thanks!