The DORA Lab – #02 Marian Kamenistak | VP Engineering Coach, ex-VPE at Mews

In the second episode of ‘The DORA Lab’ – an exclusive podcast by Beyond the Code, host Kovid Batra engages in a thought-provoking discussion with Marian Kamenistak, VP Engineering Coach and former VP of Product Engineering at Mews.

The discussion begins with Marian elaborating on two key terms – DevOps and DORA metrics. He then explores the application of DORA metrics within teams and their significance. He provides examples of how DORA metrics can pinpoint team issues, such as utilizing change failure rate to gauge team satisfaction.

Lastly, Marian offers valuable insights into strategies for engineering managers to tackle inefficiencies beyond DORA metrics and navigate execution steps when tackling challenges like high cycle time or low roadmap contribution.

Time stamps

  • (0:06): Marian’s background
  • (0:58): Diving deep into DevOps and DORA metrics
  • (2:31): How do DORA metrics fit specific teams and what value do they bring?
  • (6:46): Examples of how DORA metrics pinpoint team issues 
  • (12:51): Are engineering teams facing challenges implementing these metrics?
  • (21:29): What is the typical adoption time for teams to implement these metrics effectively?
  • (26:32): How can Engineering Managers pinpoint and improve areas of inefficiency beyond DORA metrics?
  • (35:05): How can metrics guide execution steps when addressing issues 

Episode transcript

Kovid Batra: Hi everyone! This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have an interesting guest whom we loved so much that we had to call him back: our engineering metrics expert from Prague. He has 15+ years of engineering leadership experience. He is a former Vice President of Product Engineering at Mews and currently a VP Engineering Coach.

Welcome back to the show, Marian. Great to have you here. 

Marian Kamenistak: Greetings all! And thank you Kovid for having me once again, looking forward to it. It will be a ride today. 

Kovid Batra: Absolutely. So Marian, today the discussion topic is all around engineering metrics, like last time, and our audience loved it so much that we wanted to deep dive into certain topics, touch the fundamentals of DevOps and DORA, and discuss them with certain examples that you have implemented or seen working well for different teams.

So, before we jump into the metrics and how to implement them, the first fundamental questions that I would like to get clarity on are: what is DevOps? And what is DORA?

Marian Kamenistak: Okay. So, let’s start with DevOps first, I guess. The way I perceive DevOps is as some sort of a system for building software faster, the operational side of things, really. To be very specific, it encompasses, for example, the configuration cycle, the release cycle, monitoring, and the tooling that we can combine all together. In other words, it saves time and makes our developers much more efficient, as opposed to them taking care of the, you know, low-level stuff, let’s put it this way. So, DevOps usually saves engineers time and dramatically increases their efficiency and, basically, their time to value.

On the other hand, when it comes to the DORA metrics themselves, they are basically some sort of representation in numbers of how we can measure a team’s efficiency, making sure that we unlock the potential of our teams and keep them at, let’s say, a high-performing level, right? And there is a bunch of indicators. You might be talking about the top 5 or top 10; to be honest, it might be up to a hundred different indicators. Nevertheless, in DORA’s case, it’s just four basic indicators that we talk about. And we could cover that in much more depth in our session.

Kovid Batra: Perfect. Perfect. So, to begin with, when we are talking about DORA metrics, can you give some example of where exactly you feel DORA metrics really fit for a particular team? What is the importance that these DORA metrics bring to the table? And what results can we get if we work on them?

Marian Kamenistak: Okay. So, I would say these days, DORA metrics are some sort of established standard, due to the fact that there has been huge research coming from Google into what are basically the most impactful metrics that we might want to follow, right? From my own perspective, out of the four DORA metrics themselves, the most important indicator of efficiency to me is deployment frequency, where basically we are saying how much time it takes us to turn our efforts into value; to be very practical, to move our new feature set into production, for example, right? And the reason why I think this metric is, I would say, the most influential: let’s put it in a very simple example. If our team goes dark for three months, and all of a sudden they release something, you know how it goes. We’ve all heard the story, right? They want to shake hands because it’s a huge success. But on the other hand, the Product Manager is saying, “That’s not really what I wanted.” The stakeholders are sort of, you know, reluctant. The clients might be saying that’s not what we meant to receive along the way. And there’s a bunch of bugs for another two sprints out of it, right? I think we’ve all experienced this story.

So, at the end of the day, what we want to see is that our deployment frequency is frequent in a way that we release our increments in an established cadence, I would say, right? Here, I want to pay attention to one thing, which is that we usually have to take into consideration the difference between, let’s say, a large corporate company and a small startup. Speaking of the threshold, I can imagine that in a startup or scale-up environment, we release twice a day, right? That’s totally fine. While in, for example, a regulated banking business, if we release something on a monthly basis, that’s a hell of a good achievement, right? Including the whole testing, the regressions, the regulatory constraints, and so on and so forth. So, we always have to take the context into consideration. Don’t mix the startup metrics with the corporate ones, right? So, that’s so much about deployment frequency, in my opinion.
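The context-dependent threshold idea can be sketched in a few lines of Python. This is a minimal illustration; the release dates and threshold values are hypothetical, and in practice you would pull release timestamps from your CI/CD system.

```python
from datetime import date

# Hypothetical log of production release dates over a two-week window.
deploys = [date(2024, 3, d) for d in (1, 1, 4, 5, 7, 8, 11, 12)]

def deployments_per_day(deploy_dates, period_days):
    """Average number of deployments per day over the observed period."""
    return len(deploy_dates) / period_days

freq = deployments_per_day(deploys, period_days=14)

# Thresholds depend on context: roughly 2/day for a startup or scale-up,
# while about 1/month (0.03/day) is already good in regulated banking.
meets_startup_bar = freq >= 2.0
meets_banking_bar = freq >= 1 / 30
```

The same raw number passes one bar and fails the other, which is exactly the point: the benchmark, not the metric, carries the context.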

Kovid Batra: Right. Actually, it’s not just the team size; it’s about how you function, what domain you are in, and what exactly you’re working on. These metrics could vary for you and the benchmarks could vary for you. So, I think that’s a very good point that you touched on. And we also generally look at this through a very deep lens: okay, what exactly is needed for this team, how are they functioning right now, what should be the benchmark for this particular team if they’re working on the front end or back end. So, of course, that matters a lot. Deciding on a particular metric and then setting up benchmarks is not just straightforward. Like, you just go and say, okay, we want five deployments a week and then we are sorted. That’s not something we..

Marian Kamenistak: Yeah. And if I may add another two cents to the story, Kovid, what I really found out is that what worked the best is to set specific thresholds for specific teams, because you might have different teams that work on the platform, or enabling teams, or new vertical teams, and so on and so forth. Some of the teams might have a high amount of dependencies, or there is a high amount of, for example, unplanned work due to maintenance and other things.

So, it’s really great to set up different expectations or thresholds for different teams. And the way I do it, I ask the teams to come up with their own proposals.

Kovid Batra: Oh, great! 

Marian Kamenistak: And it works because you don’t break the principle of ownership. So just take it aside as a small tip and we’ll come back to our story for sure.

Kovid Batra: Sure. Sure. Absolutely. Marian, can you give me some other examples of how we can use these metrics to understand the problems of a team? Just to give a head start there: maybe I’m looking for something like whether we can understand how the code review process is going in the team, what the velocity of a team looks like, or what collaboration looks like for a team. Can we identify such kinds of inefficiencies and problems in the team using these metrics? And if so, can you give an example?

Marian Kamenistak: Yeah, totally. Let’s be honest, we can measure dozens to hundreds of different things, but that’s not very wise, right? We have to start from somewhere. So, usually the way I approach it, especially when a company invites me to do some sort of internal efficiency audit, there are two types of inputs. The first input is really talking with the right people, having them open their hearts, and getting as many insights as possible. And of course, the second element is the data itself: looking into the data, seeing the improvement opportunities there, and digesting the data in a way that lets you identify pretty well what might be the top root causes of why the machinery is slower than expected, right?

And here, to be very specific, the one metric that I love looking at is the change failure rate, which is still part of the DORA metrics. The change failure rate can be translated, in my opinion, as team satisfaction. If we don’t have a team that is satisfied, then we can hardly achieve some sort of highly performing team or environment. And the reason why I think there is a correlation between the two is that if I, as the boss, come to my team every second time after they release something and tell them that they didn’t do the best job, that production is in full chaos, there was a failure, there was an outage, then of course it doesn’t contribute to their satisfaction, when they expect some kind words from my side after releasing the expected functionality, right? So, having the change failure rate pretty low, let’s say 5 to a maximum of 15 percent, says that there is a certain, yet minimal, probability that things go wrong after a release. That’s quite important to see.
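The change failure rate itself is a simple ratio, and the 5 to 15 percent band above can be checked mechanically. A rough Python sketch; the list-of-dicts shape for deployment records is an assumption, so adapt it to your own deployment log.

```python
def change_failure_rate(deployments):
    """Fraction of deployments that led to a failure or outage in production.

    `deployments` is a list of dicts with a boolean 'failed' flag
    (a hypothetical shape, not any particular tool's API)."""
    if not deployments:
        return 0.0
    failed = sum(1 for d in deployments if d["failed"])
    return failed / len(deployments)

# 2 failed releases out of 20 in the period.
history = [{"failed": False}] * 18 + [{"failed": True}] * 2
cfr = change_failure_rate(history)
within_healthy_band = 0.05 <= cfr <= 0.15  # the 5-15% range discussed above
```

Here 2 of 20 releases failed, a rate of 10 percent, which sits inside the band Marian treats as healthy.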

There is a story I usually tell: my developers live on motivation and satisfaction, right? The motivation is basically why we are here, what mission we contribute to; and the satisfaction is what we get back after we accomplish our work. So again, the change failure rate is something that is, in my opinion, highly underrated. Some people don’t see the correlation between satisfaction and the change failure rate itself, right? So, I think this might be yet another practical example of how to think of the metrics and translate them to the real world, because without the right culture and, you know, the satisfaction, you cannot achieve high efficiency levels in your teams.

Kovid Batra: Yeah, absolutely. This is a very good example, actually. When you look at change failure rate, the first thing that comes to mind is that this metric can tell us how satisfied our customers would be. But looking at it from the other perspective, where you’re looking at team satisfaction, it makes a lot of sense, actually.

Marian Kamenistak: It’s bidirectional.

Kovid Batra: One of our podcast guests, with whom I was discussing these metrics, mentioned using two metrics: comments per PR, and comments after PRs are raised for review, to understand whether the teams are collaborating well and whether the initial code quality is good or not. And it was amazing. When I looked at the thought process, how he approached that, I was pretty amazed. It opened a few more doors to understanding how these engineering metrics work.

Marian Kamenistak: Right. And thank you for introducing this example, because usually, you know, people get crazy about the DORA metrics. In my opinion, they bring certain value, but we need to read from the context. There are some, I would say, preconditions and better opportunities to check before we start moving to DORA.

What I mean to say, and again, another practical example, is that I might have all the DORA metrics in a positive threshold, and that might signal that the team is, hopefully, highly performing. Nevertheless, if my teams don’t work on the things that matter the most, meaning the roadmap, then we go belly up, right? So, what I rather prefer looking at is, let’s say, the portfolio investment: how much of our talent’s effort goes into the roadmap, whether that means the product roadmap, the technical roadmap, or, I’d say, improvement initiatives, new features, and so on and so forth.

And usually, my advice is that if we talk about, let’s say, product engineering teams, the teams that are implementing the new functionality, there I want to see that these teams contribute to the roadmap at a minimum of 60 percent of their time, right? That makes me sure that I’m investing their talent wisely.

Kovid Batra: Right. 

Marian Kamenistak: If, on the other hand, I have all the metrics, the velocity and, as you’ve been saying, all the pull requests and comments, looking great, but the teams don’t work on the things that matter the most, again, we are not going to shake hands together. That’s a waste of time. And the funny thing is, it’s not the fault of the team. It’s the fault of me as their manager, the manager of the team, not of the first-line manager, that I haven’t taken care of that. So let’s not use excuses like, you know, “It’s my team, it’s not that highly performing,” and so on and so forth. It’s bullshit. Sorry to put it this way. We are responsible for making sure that our teams work on the right things, so that we are able to accomplish our roadmaps and our strategy. Period. Yeah.

Kovid Batra: No, absolutely. I think this makes a lot of sense. What I have felt so far, talking to a lot of people about metrics, is that people do know about DORA metrics. They understand what engineering metrics are, and measuring seems obvious to everyone. It’s just the engineering department where we are having these kinds of debates on social media about how to measure developer productivity or how to look at these engineering metrics.

Marian Kamenistak: Yeah. 

Kovid Batra: If we talk about other departments or business units of a business, there are strong measurement tools in place that are used to measure efficiency. For a salesperson, maybe we have tools like Salesforce, right? It is one of the renowned examples. You can understand a lot about an individual person, what he or she is doing, how they’re performing.

The same goes for us, but here’s the challenge. The main point that I’m trying to bring up is that the engineering community is probably finding it difficult to make sense of these metrics to solve their problems. This could be just my perception after talking to a lot of people, but this is what I have felt, and I’m not sure what your opinion is; I would love to know it. The biggest challenge is, of course, how to use them, and if you don’t understand how to use them, automatically there is an inhibition against implementing such things. And then the bias goes towards: why have these metrics at all? We should just look at the happiness of the team, and that’s all; if they’re motivated, we are good. So, this is my observation. What’s your take on that?

Marian Kamenistak: Yeah. Yeah. That’s a very good topic. Thank you, Kovid, for opening that. Sometimes I also see a huge impact from this: we implement certain engineering efficiency metrics in the company, and then we turn these metrics into OKRs or indicators that are tied to the bonus, for example, which is crazy, because people start to gamify it, if you know what I mean, and it all goes belly up. So, that’s really, I would say, the most severe anti-pattern I’ve experienced, right?

And, to comment on your situation, actually, or your scenario, actually you are disclosing a very good point that I see happening. 

Kovid Batra: Yeah. Basically, it’s an implementation challenge. Like, there is a challenge with the implementation. You must have faced different kinds of situations where there was an implementation challenge. So, I just wanted to have your opinion on that. Yeah.

Marian Kamenistak: So, right. The most challenging situation while implementing these metrics is that companies usually start from the middle. The way I see it, and that’s one of the most frequent anti-patterns, is: let’s implement a certain out-of-the-box solution that implements DORA, SPACE, or whatever that is. Of course, we have to adjust our existing Jira and do some cleanup to comply with the standards; that’s the dirty secret, and it takes some time, right? Then, all the numbers come up, right? And you have a great dashboard with great colors and everything.

But the challenge is exactly as you described, Kovid: to make a decision. Out of these 20 different indicators, what are the top three or top four that we really want to focus on? And how do we set the thresholds properly, so we know what it means, what the severity of a metric is, if it turns from green to orange, or from orange to red, basically, right? Without this exercise and the decision-making about what the main indicators are that we really want to follow, you just have the full dashboard: “It’s your responsibility now, as a new team lead, to have all these numbers green,” right? And these guys are basically lost in that, right? So, that’s one of the things.
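The green/orange/red exercise described above can be made concrete with a small lookup table. This is only a sketch: the metric names and threshold values are illustrative, and, as Marian suggests later, each team would propose its own numbers.

```python
# Per-team thresholds for "lower is better" metrics:
# (green_max, orange_max); anything above orange_max is red.
# Values are illustrative, not recommendations.
THRESHOLDS = {
    "cycle_time_days":     (3.0, 7.0),
    "change_failure_rate": (0.15, 0.30),
}

def severity(metric, value, thresholds=THRESHOLDS):
    """Translate a raw metric value into its agreed dashboard colour."""
    green_max, orange_max = thresholds[metric]
    if value <= green_max:
        return "green"
    if value <= orange_max:
        return "orange"
    return "red"
```

The point of writing the thresholds down explicitly is that a team lead seeing a metric turn orange knows the agreed severity, instead of staring at twenty undifferentiated numbers.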

I want to add another example, which is, basically, what I’m saying to the companies and the clients: implementing metrics or certain efficiency indicators is one third of the story. The other two thirds is how to adopt these metrics in the company, right? It’s not like we just set up Jira, we signed a contract, here’s the dashboard, and from now on you are good to go and blessed to be a high-performing team because of the numbers, right? It doesn’t work that way.

So, it’s about making sure that people understand why we want to implement these metrics, what the motivation is, and explaining and massaging them all the time about the purpose of it. Because, let’s be honest, what’s the most frequent phrase we hear from, for example, the developers or team leads? Usually they are telling us: get screwed with your metrics, you just want to micromanage us, right?

Kovid Batra: Exactly. Yeah. 

Marian Kamenistak: And turning that change around, because we are talking about change management at the end of the day. It’s not only about changing the process, but also about changing people’s mindset and their perception of this subject.

So usually, what I advise the teams and the companies is to change the narrative from “we want to micromanage you and make sure that you are highly performing” to sticking to basically two principles. The first principle is transparency; it’s even better if it’s part of our values, openness or something similar. Because at the end of the day, I want to have somewhere to look to see, in real-time data, which team is highly performing and which is not, right? And it’s fully transparent. This is what it is, guys; let’s digest it and let’s improve, right? The other principle that I want to follow is, basically, prevention.

And here, the metrics themselves, I’m gonna use some curse words, sorry for that, but they saved my ass quite a lot of times. That’s a fact, right? When I saw that, for example, some indicator goes totally down or belly up, or something is happening with the culture or the relationship with the manager, when it comes to some sort of happiness indicators, or the deployment frequency is getting worse, or the change failure rate has moved up to, let’s say, 40 percent in the last two months, then that’s an indicator that something is fishy there. If I didn’t have these numbers, I wouldn’t be aware of the situation. I wouldn’t be able to react. I wouldn’t be able to ask my team lead, “Hey, Mike, please, in the next 1-on-1 with all the guys, ask them: what’s going on? How can we improve it? What’s happening there?” If I don’t have the numbers, I cannot react. And eventually I might end up with a totally deteriorated team that I will have to rebuild, and that will take me another half a year. So, what I’m trying to say is that having well-established metrics really pays off at the end of the day, when you compare it with the waste of time that inefficiency can create, and with the situation where the team is sort of rotting and underperforming. So, that’s usually what I see.

And the third thing that I want to open here. Kovid, sorry for speaking too much.

Kovid Batra: No, no, please go ahead. I think it’s very interesting. 

Marian Kamenistak: Sometimes, while implementing the indicators, I see that people look at them as KPIs, as opposed to indicators only. What I mean to say is that you really need to be careful about how we translate the numbers. To be very specific: say I see that one single individual contributor has a low number of pull requests, right? And we are two weeks before, let’s say, the performance review process. So, what do I do? Do I come up with a number and say, “Hey, Patrick, you are screwed.”? Or do I take the context into consideration, knowing that Patrick is a senior developer whose strength is to enable other people? What he’s doing for me is pairing with the other guys, right? So he’s sort of invisible in the process, but his value is amazing; it’s huge, right? So, always take the context into consideration; these are indicators. Be careful about how you treat these numbers and how you communicate them.

So, my message would be make sure that you stay on the positive line all the time. Everybody is able to shout. I would be really careful about it. Yeah. 

Kovid Batra: Totally. And one more important thing: you mentioned what kind of cultural prerequisites you should have, and how you should then go about looking at these metrics, where you tell everyone what the motivation behind it is. You answer the ‘whys’ for the people and bring in that innate motivation for everyone to look at those metrics as indicators of how work should proceed or what efficiency should look like. And I totally agree with that. From your experience, can you tell us, is there an average time for a team to adopt these metrics? How long should people allow for these phases of implementation? Because I have personally seen, with the teams we are implementing Typo with, that a few teams do it within one to two months of implementation: they bring in the dashboard, they have certain goals set up for the team, they identify the issues, and they start with it. And sometimes it takes more than three months for teams to get gelled up with it. What’s your take on that part? Like, how much time does it usually take for the teams to?

Marian Kamenistak: That’s a great topic, and thank you for opening that. My experience tells me that it usually takes roughly three months to onboard all the teams, let’s say 6 to 10+ teams, onto the metrics: to explain the reason behind them, how to use them, how to translate them, and so on and so forth.

Plus, let’s be honest, implementing these indicators in your company’s efficiency process, as we were saying, is not just a change of process; it’s also a mindset change, right? The best thing is really to involve, for example, the team leads in this transformation early on, so they are part of these conversations: explaining the motivation to them once again, making sure that they are at the center of our decision-making process. For example, I may come out of my internal audit with, let’s say, the seven or eight indicators out of 15 that I think are the most influential ones, right? But it’s them who say, in the end, how they see it from their own perspective, and what the final top four are that we will start with, right? When it comes to the implementation, of course, that’s the hard work. And as I was saying, the dirty secret is that you have to do a cleanup of your ticketing system, because if your data is screwed, your numbers will be screwed as well. No surprise here. And in the end, of course, it’s about doing some sort of town halls and workshops with all the teams, including the product managers, the developers, and others, so they understand the numbers and everybody’s on the same page, right?

And again, the trick that I use quite a lot is that I’m not the one saying what the thresholds are. Of course, I’m sort of lobbying for what the thresholds might be, but I’m asking the teams to come up with their own thresholds, right? As a proposal. Of course, we challenge each other. And this way I make sure that they own it from day zero, right? Or day one.

Kovid Batra: Yeah, absolutely. 

Marian Kamenistak: And that’s a trick that I use quite often. That being said, to be very practical, if I see one of the teams not picking it up fast enough, the agreement is: okay, take it easy, no worries here. Usually there is another situation that is not technical; maybe it’s cultural, or maybe it’s a personal situation. That’s my experience. That’s what makes numbers go down, right? And we invest a certain time to work on the root cause. And after the numbers start to return to a reasonable level, then we say, “Okay, this team from now on is enabled and is using the metrics in full functionality, basically.”

So, what I mean to say is, one anti-pattern is to say: okay, these are the generic metrics, let’s do it, without having the context about what the team is about. And the other anti-pattern is to say: this is the start, and from now on, everybody has to comply with the thresholds. Sometimes there are, let’s say, interpersonal reasons or cultural reasons why things might not be working as expected.

Kovid Batra: Yeah. 

Marian Kamenistak: And here I want to double down on this message, because usually, you know, the CTO is saying, “Okay, let’s implement the metrics, and from now on I will have high-performing teams,” right? But if you have a rotting situation, or the Product Manager is not in synergy with the Engineering Manager or whoever, and there’s some toxicity, let’s be honest, in the team, no metrics will help you. So, using the excuse of “Here are the numbers, work your ass off” never helps.

Kovid Batra: Yeah, absolutely. I couldn’t agree more. So, once you have this implementation in place, I’m just moving on to the next piece, where I see teams implementing it, spending those one, two, or three months to get it to the point where everyone is aligned.

Now, how does this process of identifying different areas of inefficiency start? Just for example: if I have a problem with the initial code quality in the team, or let’s say a problem with deliverability, where maybe we are taking too long to deliver epics; or if there are too many bugs coming in and there is a high resolution time for the bugs, right? Which directly impacts the delivery of the product to the customer. So, there could be a lot of areas where we could start. Today, as an engineering manager, I might get overwhelmed by the areas I could work on, and I won’t be sure which metrics I should choose.

So, can you give some examples which not only include DORA? Maybe we can look at things beyond DORA and find areas where an Engineering Manager or an Engineering Leader can get help in understanding where things are going wrong and how exactly one could improve on those.

Marian Kamenistak: Okay, perfect. Yeah, that’s a great topic, Kovid.

So, surprise, surprise! In my opinion, the most crucial indicators are not part of DORA. First of all, I want to make sure that the teams know what they are supposed to work on and that they are able to stay focused on it, right? DORA says nothing about that. And usually, that’s one of the most frequent root causes that I see in companies. To be very specific, the situation I see most often is when we start to measure what I call roadmap contribution, meaning how much time our talent spends on the roadmap, on the things that matter the most. You can measure it very simply: you just mark the tasks or stories that are part of the roadmap. Usually they belong to certain increments, as epics, right?

Kovid Batra: Yeah. 

Marian Kamenistak: You can label these epics by, let’s say, the quarter of the year: 2024 Q1 or whatever, right? This way you can distinguish whether an increment belongs to a roadmap epic or not. If I have a task whose parent epic carries that label, that clearly signals that it contributes there, right? And then just measure, for example, the cycle time of these tasks; that’s the basic unit, meaning how much time the ticket is in progress. Comparing the total cycle time over the last three months against the cycle time of only the tasks that contribute to the roadmap, as a ratio, is already a hell of a good indicator. And, as I was saying, if we have a high-maintenance team, of course, it might be just 30 to 40%. I understand that, because there’s quite a lot of bugs and support tasks getting in, right?
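The ratio described here, roadmap cycle time over total cycle time, can be sketched in a few lines. This is illustrative only; the ticket shape and the "2024-Q1" label are hypothetical stand-ins for whatever your ticketing system exports.

```python
from datetime import timedelta

# Hypothetical tickets: in-progress time plus the label of the parent epic
# (None for unplanned or business-as-usual work with no roadmap epic).
tickets = [
    {"cycle": timedelta(days=4), "epic_label": "2024-Q1"},
    {"cycle": timedelta(days=2), "epic_label": "2024-Q1"},
    {"cycle": timedelta(days=3), "epic_label": None},
    {"cycle": timedelta(days=1), "epic_label": None},
]

def roadmap_contribution(tickets, roadmap_labels):
    """Share of total cycle time spent on tickets under a roadmap epic."""
    total = sum((t["cycle"] for t in tickets), timedelta())
    roadmap = sum((t["cycle"] for t in tickets
                   if t["epic_label"] in roadmap_labels), timedelta())
    return roadmap / total if total else 0.0

ratio = roadmap_contribution(tickets, {"2024-Q1"})
```

In this toy data, 6 of 10 in-progress days sit under roadmap epics, a 60% contribution, which is right at the healthy bar Marian describes for product engineering teams.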

On the other hand, if we have, let's say, a small startup team, there I want to see that the roadmap contribution is close to 90%, because they have no production bugs, no production support issues, and so on and so forth, right? And let's be real: if we are somewhere around 55 to 60%, and we spend most of the time on the things that matter the most with our teams, then that's a good achievement. What I need to say is that usually, after we start measuring these things, we find out that our teams have a scattered focus, there's quite a lot of unplanned work coming in, and the roadmap contribution is actually only, let's say, 35 percent, and everybody gets surprised about it, right? Nobody has measured that. So the top managers think that our teams work on the roadmap. The Product Manager is still complaining that he doesn't get enough attention. Support is complaining that, for example, the amount of bugs is still increasing, and so on and so forth.

So if you at least create some sort of expectation and balance between those, that's already a hell of a good regime. Say clearly, for example: for this quarter, I want to see that we spend 55 percent on, let's say, the product roadmap, 25 percent on the technical roadmap, and the rest, which is 20 percent, on off-roadmap work, right? Again, if you start measuring it, you might get surprised that the off-roadmap piece is 60 or 65%. Tell me how the DORA metrics can help you if you have this issue. So that's one of the things that I want to highlight.
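As a rough illustration of the roadmap-contribution calculation Marian describes (the ratio of cycle time spent on roadmap-labeled work to total cycle time), here is a minimal Python sketch. The `Task` structure and the label scheme are hypothetical stand-ins for real ticket data, not anything from the episode:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    key: str
    cycle_time_days: float          # time the ticket spent "in progress"
    parent_epic_label: Optional[str]  # e.g. "2024-Q1" if the parent epic is on the roadmap

def roadmap_contribution(tasks: list[Task], roadmap_labels: set[str]) -> float:
    """Share of total cycle time spent on roadmap tasks, as a percentage."""
    total = sum(t.cycle_time_days for t in tasks)
    on_roadmap = sum(
        t.cycle_time_days
        for t in tasks
        if t.parent_epic_label in roadmap_labels
    )
    return 100.0 * on_roadmap / total if total else 0.0

tasks = [
    Task("A-1", 3.0, "2024-Q1"),   # roadmap work, labeled via its parent epic
    Task("A-2", 2.0, "2024-Q1"),
    Task("B-7", 4.0, None),        # unplanned bug/support work
    Task("B-9", 1.0, None),
]
print(roadmap_contribution(tasks, {"2024-Q1"}))  # 50.0
```

In practice the `cycle_time_days` values would come from a tracker's status-change history; the point of the sketch is only that the metric is a simple ratio once tasks are labeled.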

The second indicator is focus. No surprise. How able my teams are to keep focused, right? For example, what I love observing is how many tickets in progress per person we have in a team. You would be surprised how high the correlation is between the focus factor and the efficiency of the team. In terms of focus, I'll tell it as a story. I want to see that each single person has a maximum of two tickets in progress in parallel. I want to see that the whole team has a maximum of two increments or epics open in parallel. You might be asking, why two, why not three? Because there's a third hidden epic, which is business as usual, the off-roadmap work. Take that into consideration. On the department level, I want to see that we work on a maximum of two large initiatives, right? On the company level, I want to see that we work on a maximum of two OKRs in parallel, not five, not ten, right? That creates focus. If you don't have these sorts of work-in-progress limits, you cannot make sure that the focus is there, right? So, just by making sure this one works, all of a sudden you will see that the teams can start to breathe, right?
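The per-person WIP limit Marian suggests (a maximum of two tickets in progress each) can be checked with a short script. This is an illustrative sketch against made-up board data, not a real tracker integration:

```python
from collections import Counter

WIP_LIMIT_PER_PERSON = 2  # suggested cap: max two tickets in progress per person

def wip_violations(in_progress: list[tuple[str, str]]) -> dict[str, int]:
    """Given (assignee, ticket) pairs currently in progress,
    return the assignees exceeding the WIP limit with their counts."""
    counts = Counter(assignee for assignee, _ in in_progress)
    return {who: n for who, n in counts.items() if n > WIP_LIMIT_PER_PERSON}

board = [
    ("alice", "T-1"), ("alice", "T-2"),
    ("bob", "T-3"), ("bob", "T-4"), ("bob", "T-5"),  # bob is over the limit
]
print(wip_violations(board))  # {'bob': 3}
```

The same counting idea extends to epics per team or initiatives per department; only the grouping key changes.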

So, because the anti-pattern usually is that we have a team of five people and each and every person is working on a different increment, a different epic. Tell me, is this a team or is this a bunch of individuals, right? There is no teamwork in my opinion, there is no knowledge sharing. You can't help each other. You cannot start finishing. The system doesn't work. It's broken.

Kovid Batra: Yeah. Yeah, absolutely. 

Marian Kamenistak: Again, DORA metrics won't help in this case. Nor will pull request metrics; whatever metrics based on pull requests will not help us here, right? So, that's one of the things. The third thing, and I'm sorry, Kovid, once again for being too talkative, is the one that I love talking about. I will describe it as a story, right? Sometimes I get invited to companies and I hear, "My teams haven't delivered on the roadmap for the third quarter. I don't know what to do. Please have a look." Usually, it's not about the delivery. Usually the teams are performing pretty well. Of course, you can improve certain things, ceremonies, and the flow, and improve the overall efficiency of the teams by, let's say, 10 to 30%, which is pretty nice. Okay. But sometimes what gets broken is the discovery, meaning the specification, the purpose: what are the things that we have to work on, what do we want to work on, and what's the most valuable thing we have to work on? There is some sort of disconnect between product and the engineering team. So there, you need to work on the continuous discovery rather than the delivery. That's what helps the most, because, you know the story, "trash in, trash out", right? So again, I'm telling the story: why the heck do you pay for high-performing teams? And these teams are usually very expensive. If you throw trash at them, it's not worth it. It's not worth the investment, right?

And the one single thing that I wanted to highlight is synergies. To be very practical, I can increase the efficiency of a team by, I'd say, dozens of percent, right, by measuring the things that matter the most. Nevertheless, if we create a synergy between the Product Manager, the Engineering Manager, and the whole team, then all of a sudden we are getting a 300 percent boost. So that's what I mean to say: there is something more than numbers. Surprise!

Kovid Batra: Yeah, yeah, absolutely. There is.

Marian Kamenistak: So, these are the things that I usually observe. It's a mix of the numbers, the data, talking with people, the culture, and the synergies that tells us the most, right? And only after that do we start to pick the most useful indicators, such as roadmap contribution, focus time, epic cycle time, team satisfaction, and so on and so forth. So that's my advice. Yeah.

Kovid Batra: Totally makes sense. And I think looking at epic cycle time or roadmap contribution gives a lot of clarity on what we are exactly doing and whether we are headed in the right direction or not. These are two very good indicators, and I think very good for anyone starting to figure out which metrics to focus on at this point in time. As a startup, or even as a midsize team, this makes a lot of sense, actually.

Marian Kamenistak: Right.

Kovid Batra: One more thing I just want to discuss on this part: when we are looking at these metrics, there is of course more to it than just deliverability. And once you understand that part, what are the next steps that you have to take? So, say I get to know that my team has a lower contribution towards the roadmap, or that our epic cycle time is high. What kind of execution steps should we take there? And are there any metrics involved that could help us understand whether the execution on those is going right or not?

So just to give you an example: say epic cycle time is too high for my team, and I start to look at where the problem is. I find out that one of the biggest bottlenecks is my deployment cycle. Every time a PR was ready to be merged to production, everything was there, but the builds took a lot of time. Almost 15 deployments were being done every month, whereas the PRs themselves were already pushed within, let's say, six to eight or ten hours. So basically, every month we were wasting 50 percent of the time just getting those deployments done. And ultimately, this became the reason the epic cycle time was too high for the team, right? If you look at it on a daily basis, you might feel, as an Engineering Manager, that there is a bottleneck. But when you look at it from a quarter's perspective, where you see that your epic cycle time for a lot of epics was almost two to three times what you were expecting, you would be amazed, and you would want to take certain steps. So, I just wanted to understand: today, if I see that roadmap contribution is low, what steps should I take, and how can the metrics help in that situation?

Marian Kamenistak: Okay. So, we might picture a couple of scenarios here, right? You described the epic cycle time pretty well and what might be the root cause. And usually, no surprise, here we talk about our ability to do a drill-down, to analyze where this number is coming from, and so on and so forth. Exactly as you described: if the epic cycle time is too high, I want to see the specific sub-stages in the cycle time of implementing a certain increment, an epic. It might be the deployment time, or the testing, or the adoption, and so on and so forth. And if we identify the root cause, the largest chunk that consumes most of our effort, then we can narrow down on that root cause and apply some sort of adjustments, basically healing scenarios, just to make sure that things work. It might be some sort of automation steps, or making sure that we improve the whole process. So overall: either manually, or by the process, or, as I was saying, by automation. Or we can, let's say, accept softer constraints in certain scenarios, when we, for example, release only some sort of betas or MVPs, and so on and so forth, right? So we don't have to have a complete definition-of-ready-for-production checklist in this case. So, there are certain scenarios for how to treat these things.
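One way to picture the drill-down Marian describes is to break an epic's cycle time into sub-stages and find the largest chunk. A minimal sketch, with made-up stage names and durations (real numbers would come from ticket status history):

```python
def largest_phase(phase_durations: dict[str, float]) -> tuple[str, float]:
    """Return the sub-stage consuming the biggest share of the epic's
    cycle time, together with its percentage of the total."""
    phase = max(phase_durations, key=phase_durations.get)
    total = sum(phase_durations.values())
    return phase, 100.0 * phase_durations[phase] / total

# Hypothetical breakdown of one epic's cycle time, in days
epic = {"implementation": 6.0, "review": 2.0, "testing": 3.0, "deployment": 9.0}
phase, share = largest_phase(epic)
print(phase, round(share))  # deployment 45
```

In Kovid's example, such a breakdown would surface the deployment stage as the dominant chunk, pointing to where automation or process changes pay off.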

Marian Kamenistak: The other example that I find quite useful closes the loop, because we started with the satisfaction of people. Here, of course, there are a couple of tools and ways to measure team satisfaction. And again, this is one of the surprises that I've discovered, and I was really astonished to see how much such a metric can help me make sure that my teams stay at a high-performing level and the culture is there. To be very practical: if I have a tool that tells me that the score, on a scale from, let's say, 0 to 10, for the relationship with the manager, or the relationship with the peers, or the satisfaction, or the wellness, is roughly about 8 to 9, that's totally fine. But if I see a drop from 8 to 2 in the relationship with the manager, and it's me being the manager, then I know that there is something I should be working on, right? So again, it's a great act of prevention. Without this data, sometimes we don't get the wake-up call, right? So, that's yet another example of how we can help each other.
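The kind of drop detection Marian describes (a score falling from 8 to 2) can be sketched as a simple check over survey history. The dimension names, the 0-10 scale, and the threshold here are illustrative assumptions, not tied to any specific tool:

```python
def satisfaction_alerts(history: dict[str, list[int]], drop_threshold: int = 3) -> list[str]:
    """Flag survey dimensions whose latest score dropped sharply versus
    the previous one. Scores are on a 0-10 scale, newest last."""
    alerts = []
    for dimension, scores in history.items():
        if len(scores) >= 2 and scores[-2] - scores[-1] >= drop_threshold:
            alerts.append(dimension)
    return alerts

history = {
    "relationship_with_manager": [8, 9, 2],  # sharp drop: the wake-up call
    "relationship_with_peers": [8, 8, 9],
    "wellness": [7, 8, 7],
}
print(satisfaction_alerts(history))  # ['relationship_with_manager']
```

The absolute scores matter less than the deltas; it is the sudden change that signals something to investigate before it destabilizes the team.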

And again, if I see this number, then of course what we can do is narrow down, do the drill-down, ask the people for feedback, ask for the data, and so on and so forth, because at the end of the day, the data is what matters the most, right? If we can, let's say, back our hypothesis with data, then we're probably strong in our assumption, and we already know what the root cause might be and how to prevent the situation from rotting and destabilizing the whole team or the whole department. So these are the scenarios that I see repeated quite often.

So, the message here, once again, as we said from the very beginning: don't treat these indicators as KPIs or hard targets. Make sure that you understand the context first, you do the drill-down, you do your homework, and only then start talking about it, because if you use it in an incorrect manner, it will bite you back.

Kovid Batra: Yeah, of course, it will. Cool, Marian. I think this was a great conversation. We could have many more such discussions and keep diving into different use cases, but in the interest of time, let's close this particular episode here and look forward to another one where we talk more in depth about the problems teams face with these metrics.

Great, Marian. Once again, thanks a lot for bringing in such practical advice around metrics. I'm sure the audience is going to love it. And I love it too.

Marian Kamenistak: Thank you, Kovid, for having me here. Looking forward to our next session, and let's make the world better. See you soon!

Kovid Batra: Absolutely. See you!