'Data-Driven Engineering: Building a Culture of Metrics' with Mario Viktorov Mechoulam, Sr. Engineering Manager, Contentsquare
March 14, 2025 • 27 min read
How do you build a culture of engineering metrics that drives real impact? Engineering teams often struggle with inefficiencies — high work-in-progress, unpredictable cycle times, and slow shipping. But what if the right metrics could change that?
In this episode of the groCTO by Typo Podcast, host Kovid Batra speaks with Mario Viktorov Mechoulam, Senior Engineering Manager at Contentsquare, about how to establish a data-driven engineering culture using effective metrics. From overcoming cultural resistance to getting executive buy-in, Mario shares his insights on making metrics work for your team.
What You’ll Learn in This Episode:
✅ Why Metrics Matter: How the lack of metrics creates inefficiencies & frustrations in tech teams.
✅ Building a Metrics-Driven Culture: The five key steps — observability, accountability, understanding, discussions, and agreements.
✅ Overcoming Resistance: How to tackle biases, cultural pushback, and skepticism around metrics.
✅ Practical Tips for Engineering Managers: Early success indicators like reduced work-in-progress & improved predictability.
✅ Getting Executive Buy-In: How to align leadership on the value of engineering metrics.
✅ A Musician’s Path to Engineering Metrics: Mario’s unique journey from music to Lean & Toyota Production System-inspired engineering.
Timestamps
00:00 — Let’s begin!
00:47 — Meet the Guest: Mario
01:48 — Mario’s Journey into Engineering Metrics
03:22 — Building a Metrics-Driven Engineering Culture
06:49 — Challenges & Solutions in Metrics Adoption
07:37 — Why Observability & Accountability Matter
11:12 — Driving Cultural Change for Long-Term Success
Kovid Batra: Hi, everyone. Welcome to an all-new episode of groCTO by Typo. This is Kovid, your host. Today with us, we have a very special guest whom I found after stalking a lot of people on LinkedIn, only to find him in my nearest circle. Welcome, welcome to the show, Mario. Mario is a Senior Engineering Manager at Contentsquare and he is an engineering metrics enthusiast, and that’s where we connected. We talked a lot about it and I was sure that he’s the guy we should have on the podcast to talk about it. And that’s why we thought today’s topic should be something that is very close to Mario, which is setting a metrics culture in engineering teams. So once again, welcome to the show, Mario. It’s great to have you here.
Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. I’m very happy to join this series.
Kovid Batra: Great. So Mario, I think before we get started, one quick question, so that we know you a little bit more. Uh, this is kind of a ritual we always have, so don’t get surprised by it. Uh, tell us something about yourself from your childhood or from your teenage that defines who you are today.
Mario Viktorov Mechoulam: Right. I think my, my, both of my parents are musicians and I played violin for a few years, um, also in the junior orchestra. I think this contact with music and with the orchestra in particular, uh, was very important to define who I am today because of teamwork and synchronicity. So, orchestras need to work together and need to have very, very good collaboration. So, this part stuck somewhere on the back of my brain. And teamwork and collaboration is something that defines me today and I value a lot in others as well.
Kovid Batra: That’s really interesting. That is one unique thing that I got to learn today. And I’m sure orchestra must have been fun.
Mario Viktorov Mechoulam: Yes.
Kovid Batra: Do you do that, uh, even today?
Mario Viktorov Mechoulam: No, no, unfortunately I’m like the black sheep of my family, because once I discovered computers and switched to that, I have not looked back. Some days I regret it a bit, but this new adventure, this journey that I’m going through, I think it’s irreplaceable. So I’m happy with what I’m doing.
Kovid Batra: Great! Thank you for sharing this. Moving on to our main section, which is setting a culture of metrics in engineering teams. It’s a well-known topic and a very difficult thing to do, but I think we’ll address the elephant in the room today because we have an expert here with us. So Mario, I’ll start with this. Sorry to say it, but this looks like a boring topic to a lot of engineering teams, right? People are not immediately aligned towards having metrics and measurement and people looking at what they’re doing. And of course, there are biases around it. It’s a good practice, an ideal practice to have in high-performing engineering teams. But what made you go after this? What excited you about it?
Mario Viktorov Mechoulam: A very good question. And I agree that it’s not an easy topic. I think that what’s behind the metrics is around us, whether we like it or not. Efficiency, effectiveness, optimization, productivity: it’s in everything we do in the world. For example, if you go to the airport and you stand in a queue for your baggage check-in, I’m sure there are some metrics there; whether they track them or not, I don’t know. In my university years, I had my first contact with the Toyota Production System, with Lean, as we call it in the West, and I discovered there were things that looked like magic, that simply by observing and applying you could use to transform the landscape of organizations and systems. I was very lucky to be in touch with one professor who is the Director of the Lean Institute in Spain. And I was surprised to see how, no matter how big the corporation, how powerful the people, how much money they have, there were inefficiencies everywhere. In my eyes, it looked like a magic wand: you just wave it around and you magically solve stuff that could not be solved, no matter how much money you put into it. This stuck with me for quite some time, but it wasn’t until a few years into the industry that I realized this was not just for manufacturing. Lean and metrics are around us, and it’s our responsibility to seize them and put them to good use.
Kovid Batra: Interesting. Interesting. So I think from here, I would love to know some of the things that you have encountered in your journey as an engineering leader. When you first start implementing or bringing this idea to teams, what’s their reaction? How do you deal with it? I know it’s an obvious question to ask, because I have been dealing with a lot of teams while working at Typo, but I want to hear it from you firsthand. What’s the experience like? How do you bring it in? How do you motivate people to actually come on board? If you have an example, a story to tell us from there, please go ahead.
Mario Viktorov Mechoulam: Of course, of course. It’s not easy, and I’ve made a lot of mistakes, and one thing that I learned is that there is no fast track. It doesn’t matter if you know how to do it, if you’ve done it a hundred times; there’s no fast track. Most of the time it’s a slow grind and requires walking the path with people. I like to follow these steps: we start with observability, then accountability, then understanding, then discussions, and finally agreements. But of course, we cannot drop everything on the team at once, because as you said, there are people who are generally wary of this, because of bad practices, unmet expectations, and frustrations in the past. So indeed, I have had to be very careful about it. To me, the first thing is starting with observability: you need to be transparent with your intentions. One key question that has helped me there is trying to understand what the things are that people care about. Do you care about your customers? Do you care about how much quality focus time you have? Do you care about the quality of what you ship? Do you care about the impact of what you ship? If the answer to these questions is yes, and for the majority of engineers, and not only engineers, it is yes, then if you care about something, it might be smart to measure it. That’s a good start. Then, by asking questions about what the pains are, or by generating curiosity — for example, where do you think we spend the most time when we are working to ship something? — you can get to a point where the team agrees to have some observability, some metrics, in place. So that’s the first step.
The second step is to generate accountability. And that is arguably harder. Why so? Because in my career, I’ve seen people who think that these are management metrics. And they are, don’t get me wrong; I think management can put these metrics to good use. But this sends the message that nobody else is responsible for them, and I disagree with this. I think that everybody is responsible; of course, I’m ultimately responsible. So what I do here is try to help teams understand how they are accountable for this. If it were me who decided how the work is done, what tools they use, what process they use, that would be boring. It’s boring for me, but it’s also boring and frustrating for the people; they might see it as micromanagement. I think it’s much more intellectually interesting if you get to decide how you do the work. And this is how I connect the accountability, so that we can get teams to accept that, okay, these metrics that we see are a result of how we have decided to work together, of the practices and habits we have. And we can influence them.
Kovid Batra: Totally. But when you say that everyone should be onboarded with this thought, that it is not just for the management, what exactly are the action items you plan that get this into the team as a culture? Because — and I’ll touch this topic again as we move ahead — when we talk about culture, it comes with a lot of aspects that you cannot just define in two or three or five days. There is a mindset that already exists, and everything you add on top of it fits only if it aligns with that, because changing culture is a hard thing, right? So when you say that people usually feel that these are management metrics, somehow I feel that this is part of the culture. But when you bring it in a way that everyone is accountable, bringing that change into the mindset is a little hard, I feel. So what exactly do you do there is what I want to understand from you.
Mario Viktorov Mechoulam: Sure. Just to be clear, at the point where you introduce this observability and accountability, it’s not part of the culture yet. I think this is like getting a foot in the door, to get people to start looking at these and using these, and eventually they become a culture, but way, way later down the line.
Kovid Batra: Got it, got it. Yeah.
Mario Viktorov Mechoulam: Another thing is that culture takes a lot of time. How can we say it? Organic adoption is very slow, and after organic adoption, you eventually get a shift in culture. I was talking to somebody a few weeks back, a senior leader at another company, and they told me that it took a good 3–4 years to roll out metrics in the company. And even then, they did not have all the levels of adoption, all the cultural changes everywhere in all the layers that they wanted. So there’s no fast track; this takes time. And when you say that people are wary about metrics, or people think these are management metrics, when you say this is part of culture, it’s true. And it comes maybe from a place where people have been kept out of it, or where they have seen metrics misused to do precisely micromanagement, right?
Kovid Batra: Right.
Mario Viktorov Mechoulam: So, yeah, people feel like, oh, with this, my work is going to be scrutinized. Perhaps I’m going to have to cut corners; I’m going to be forced to cut corners. I will have less satisfaction in the work we do. So we need to break that to change the culture. We need to break the existing culture, and that takes time. For me, this is just the first step to make people feel responsible, because at the end of the day, every team costs the company some budget, right? For an average-sized team, we might be talking $1 million per year, depending on where you’re located, of course. So each of these teams needs to make $1 million in impact to at least break even, but we need more. How do we do that? Two things. First, you need to track the impact of the work you do. That already tells you that if we care about this, there is a metric we have to incorporate: we have to track the impact, the effect that the work we ship has on the product. The second thing is to be able to correlate what we ship with the impact that we see. And there is a very, very narrow window to do that. You cannot start working on something, ship it three years later, and say, oh, I had this impact. No, in three years the landscape changed a lot, right? So we need to be quicker in shipping and we need to be tracking what we ship. Therefore, measuring lead time or cycle time, for example, becomes one of the highest expressions of being agile.
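To make that concrete, here is a minimal sketch of how lead time and cycle time might be computed from ticket timestamps. The field names and sample data are illustrative, not taken from the episode or any particular tracker: lead time runs from request to delivery, cycle time from start of work to delivery.

```python
from datetime import date
from statistics import median

# Illustrative ticket records: created -> work started -> shipped.
# Field names are hypothetical; map them to your tracker's export.
tickets = [
    {"created": date(2025, 1, 2), "started": date(2025, 1, 3), "shipped": date(2025, 1, 8)},
    {"created": date(2025, 1, 5), "started": date(2025, 1, 10), "shipped": date(2025, 1, 14)},
    {"created": date(2025, 1, 7), "started": date(2025, 1, 8), "shipped": date(2025, 1, 21)},
]

# Lead time: request to delivery. Cycle time: start of work to delivery.
lead_times = [(t["shipped"] - t["created"]).days for t in tickets]
cycle_times = [(t["shipped"] - t["started"]).days for t in tickets]

print(f"median lead time:  {median(lead_times)} days")
print(f"median cycle time: {median(cycle_times)} days")
```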
Kovid Batra: Got it.
Mario Viktorov Mechoulam: So it’s, it’s through these, uh, constant repetition and helping people see how the way they do work, how, whether they track or not, and can improve or not, um, has repercussions in the customer. Um, it’s, it’s the way to start, uh, introducing this, this, uh, this metric concept and eventually helping shift the culture.
Kovid Batra: So, let’s say cycle time, for that matter: is it a metric that is generally applicable in every situation, one we can introduce as a first step and then maybe explore more and go for some specifics, or is cycle time specific to a situation in itself?
Mario Viktorov Mechoulam: I think cycle time is one of these beautiful metrics that you can apply everywhere. Normally you see it applied on team boards: to do, doing, done. But what I like is that you can apply it everywhere. You can apply it across teams, you can apply it at line level, you can even apply it at company level, which is not done often, and I think this is a problem. Applying it outside of teams is definitely part of the cultural change. I’ve seen that the focus is often on teams. There’s a lot of focus on optimizing teams, but when you look at the whole picture, there are many other places that present opportunities for optimization, and one way to find them is to start measuring.
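One way to read “apply it everywhere” in code: compute the same cycle-time summary per team and then for the organization as a whole. A minimal sketch with invented team names and numbers:

```python
from collections import defaultdict
from statistics import median

# (team, cycle_time_in_days) pairs for shipped items; invented data.
shipped = [
    ("payments", 4), ("payments", 9), ("payments", 6),
    ("search", 12), ("search", 3), ("search", 20),
]

by_team = defaultdict(list)
for team, days in shipped:
    by_team[team].append(days)

# Same metric, two scopes: per team and across the whole organization.
for team, xs in sorted(by_team.items()):
    print(f"{team:8s} median cycle time: {median(xs)} days")
print(f"{'overall':8s} median cycle time: {median(d for _, d in shipped)} days")
```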
Kovid Batra: Mario, did you ever get a chance to see, or compare, teams or organizations where people are using engineering metrics with, let’s say, a team which doesn’t use them? How does the value delivery in these systems vary, and to what extent?
Mario Viktorov Mechoulam: Let me preface this: metrics are just a cornerstone; they don’t guarantee that you’ll do better or worse than the teams that don’t apply them. However, it’s sometimes very hard to know whether you’re doing well or badly if you don’t have something measurable. What I’ve seen is much more frustration, generally, in teams that do not have metrics, because not having them forces them into some bad habits. One of the typical things that I see when I join a team, or do a Gemba Walk on some of the teams that are not using engineering metrics, is high work in progress. We’re talking 30+ things ongoing for a team of five engineers. This means that on average, everybody’s doing 5–6 things at the same time: a lot of context switching, a lot of multitasking, a lot of frustration, leading to things taking months to ship instead of days. Of course, as I said, we can have teams that are doing great without this, but if you’re already doing well, I think just adding the metric to validate it is a very small price to pay. And even if you’re doing great, this can start to change at any moment because of changes in team composition, changes in the domain, changes in the company, or top-down changes in the process. So it’s normally very safe to have the metrics, to be able to identify this type of drift, this type of degradation, as soon as it happens. What I’ve seen with teams that do have metric adoption is first this eventual cultural change, but then in general they keep the pieces of work small, they limit the work in progress, and they are very much on top of the results on a regular basis, discussing them. So this is where we can continue with the cultural change.
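A back-of-the-envelope check of Mario’s work-in-progress example; the counts here are hypothetical, and the keep-WIP-at-or-below-team-size heuristic is a common rule of thumb rather than something prescribed in the episode.

```python
# WIP sanity check in the spirit of Mario's example: 30+ ongoing items
# for five engineers means roughly six concurrent tasks per person.
in_progress_items = 32   # items in "doing" states on the board (hypothetical)
team_size = 5

wip_per_engineer = in_progress_items / team_size
print(f"WIP per engineer: {wip_per_engineer:.1f}")

# A common heuristic is to keep total WIP at or below team size,
# so each person has at most one item in flight.
if in_progress_items > team_size:
    print("WIP exceeds team size: expect context switching and slower shipping.")
```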
So after we have accountability, the next step is understanding: helping people, through documentation but also through coaching, understand how the choices that we make, the decisions, the events, produce the results that we see and for which we’re responsible. After that comes fostering discussion, for which you need trust, because here we don’t want blaming, we don’t want comparing teams; we want to understand what happened, what led to this. And then, with these discussions, see what we can do to prevent these things, which leads to agreements. Closing this circle and doing it constantly creates habits. Habits create continuous improvement, continuous learning. At a certain point, you have the feeling that the team already understands the concepts and is able to work autonomously on this. And this is the moment where you delegate responsibility for this, and for the execution as well. And then you have changed the culture a bit in one team.
Kovid Batra: Makes sense. What else does it take to actually bring in this culture? What else do you think is still missing in this recipe?
Mario Viktorov Mechoulam: Yes. Um, I think working with teams is one thing. It’s a small and controlled environment. But the next thing is that you need executive sponsorship. You need to work at the organization level. And that is, that is a bit harder. Let’s say just a bit harder. Um, why is it hard?
Kovid Batra: I see some personal pain coming in there, right?
Mario Viktorov Mechoulam: Um, well, no, it depends. I think it can be harder or it can be easier. For example, my experience with startups is that in general, getting executive sponsorship there, the buy-in, is way easier, because it’s flatter, so you’re in day-to-day contact with the people who need to give you this buy-in. At the same time, very interestingly, engineers in these organizations often need these metrics much less at that point. Why? Because when we talk about startups, we’re talking about far fewer meetings and much less process. People usually wear multiple hats, boundaries between roles are not clear, so there’s a lot of collaboration, and people usually sit in the very same room. So these are engineers that don’t need it, but it’s also a good moment to plant the seed, because when these companies grow, you’ll be thankful for that later. Where it’s harder to get is in bigger corporations. But it’s in these places that I think it’s most needed, because the amount of process, bureaucracy, and meetings is very draining for the teams there. And usually you see all of this just pile up; it seldom gets removed. That is maybe a topic for a different discussion, but I think people are very afraid of removing something and then being responsible for the result that removal brings. But yeah, I have had, we can say, fair success in getting executive sponsorship in organizations to support this, and I have learned a few things along the way.
Kovid Batra: Would you like to share some examples around getting sponsorship from the executives? I would be interested, because you say it’s a little hard in places. What things do you think can work out when you are in that room where you need to get buy-in on this? What exactly drives that?
Mario Viktorov Mechoulam: Yes. The first point is the same, both for grassroots movements with teams and for executive sponsorship, and that is to be transparent. Transparent with what you want to do, what your intent is, and why you think this is important. Now here, and I’m embarrassed to say this: we want to change the culture, right? So we should focus on talking about habits, about culture, about people, et cetera, not so much about magic results. But I’m guilty of pitching it that way, because people like how it sounds; people like to hear, oh, we’ll introduce metrics and we will be faster and more efficient. It’s not a direct relationship. As I said, it’s a stepping stone that can help you get there, and it’s not a one-month journey or a one-year journey; it can take slightly longer. But sometimes, to get the attention, you have to have a pitch which focuses more on efficiency, on predictability, and these types of things. So that’s definitely one learning. The second learning is that it’s very important, no matter who you are, but even more important when you are not at the top of the management pyramid, to get buy-in and coaching from your direct manager. If you have somebody that makes your goals, your objectives, their own, it’s great, because they have more experience and they can help you navigate this and present the case in a much better and more structured way for the intent that you have. And I was very lucky there as well to count on people that were supportive, that were coaching me along the way.
So, the first step is the same: be transparent with your intent, and share something that you have done already. Here we are often in a situation where you have to put your money where your mouth is, and sometimes you have to invest from your own pocket if you want, for example, to use a specific tool. To me, tools don’t really matter; what’s important is to start with something and build on top of it, change the culture, and then you’ll find the perfect tool that serves your purpose. So sometimes you have to initiate this yourself if you want to have some metrics. Of course, you can always do this manually. I’ve done it in the past, but I definitely don’t recommend it, because it’s a lot of work in an era where most of these tools are commodities and we’re lucky enough to be able to gather this information automatically. Usually, after this PoC, this experiment of three to six months with the team, you should have some results that you can present to get executive sponsorship. Something important here that I learned is that you need to present the results very precisely: what was the problem, what actions did we take, what’s the result? And that’s not always easy, because when you work with metrics for a while, you quickly start to see that there are a lot of synergies; there’s overlap, there are things that impact other things, right? So sometimes you see a change in the trend, you see an improvement somewhere, you see the cultural impact also happening, but you’re not able to pinpoint exactly the one or two things that caused it. So that part, I think, is very important, but it’s not always easy, so it has to be prepared carefully. The second part is that, unfortunately, I discovered that not many people are familiar with these topics. So when introducing them to get the executive sponsorship, you need to be able to explain them in a very simple and easy way, and also be mindful of the time, because most of these people are very busy. You don’t want to go into a full-blown explanation of several hours.
Kovid Batra: I think those people should watch these kinds of podcasts.
Mario Viktorov Mechoulam: Yeah. But, yeah, so it’s the experiment, it’s the results, it’s the actions, but also a bit of background on why this is important and how it influenced what we did.
Kovid Batra: Yeah, I mean, there are always different levels where people are in this journey. Let’s call it a journey where at one end you are super aware and know what needs to be done, and at the other you’re not aware of the problem itself. So when you go through this funnel, there are people whom you need to onboard in your team, who first need to understand what we are talking about, what it means, how it’s going to have an impact, and what exactly it is, in very simple layman’s language. So I totally understand that point, and I realize how easy, as well as how difficult, it is to get these things in place and bring that culture of metrics, engineering metrics, into the engineering teams.
Well, I think this was something really, really interesting. One last piece that I want to touch upon: when you put in all these efforts, onboarding the teams, fostering that culture, getting buy-in from the executives, doing your PoCs and then presenting them, getting in sync with the team, there must be some specific indicators, right, that you start seeing in the teams? I know you have just covered it, but I want to highlight that point again: what exactly should someone who is, let’s say, an engineering manager trying to implement this look for early on, or maybe one or two months down the line after starting that PoC in their teams?
Mario Viktorov Mechoulam: I think how comfortable the people in the team get in discussing and explaining the concepts during analysis of the metrics, this qualitative analysis, is key. And this is probably where most of the effort goes in the first months. We need to make sure that people do understand the metrics, what they represent, and how the work we do has an impact on them. When we reached that point, one cue for me was the people in my teams telling me, “I want to run this.” This meant to me that we had closed the circle, that we were close to having a habit, and that people were ready to have this responsibility delegated to them. It put people in a place where they had to drive a conversation and think: what am I seeing? What happened? What could it mean? What actions do we want to take? Is this something that we saw in the past already and tried to address, and then maybe made worse? Then you should also see a change in the trend of the metrics. For example, work in progress going from 30+ down to something close to the team size; it could go even lower, because WIP equal to team size still means people are working independently, and maybe you want them to collaborate. Some of the metrics change drastically. We can talk about it another time, but you can see how the standard deviation of the cycle time squeezes, which means shipping no longer feels random: now we can make a very accurate guess of when something is going to happen. These types of things, to me, mark good changes and that you’re on the right path.
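That “squeeze” in the cycle-time distribution can be watched with a couple of summary statistics. Below is a minimal sketch with invented sample data; the 85th-percentile forecast style is a common Kanban practice, not something specific to Mario’s teams.

```python
from statistics import mean, stdev, quantiles

# Cycle times in days before and after limiting WIP; invented sample data.
before = [2, 5, 21, 3, 34, 8, 13, 55, 4, 9]
after = [3, 4, 6, 5, 7, 4, 8, 5, 6, 7]

for label, xs in (("before", before), ("after", after)):
    p85 = quantiles(xs, n=100)[84]  # 85th-percentile cut point
    print(f"{label}: mean={mean(xs):.1f}d  stdev={stdev(xs):.1f}d  "
          f"~85% of items ship within {p85:.0f} days")
```

A shrinking standard deviation is what turns “when will it ship?” from a feeling into a defensible percentile-based answer.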
Kovid Batra: Honestly, Mario, these are very insightful, very practical tips that I have heard today about the implementation piece, and I’m sure this doesn’t end here. We are going to have more such discussions on this topic, and I want to deep dive into which exact metrics to use, how to use them, and what suits which situation; talking about things like how the standard deviation of your cycle time starts changing is in itself an interesting topic. So probably we’ll cover that in the next podcast that we have with you. For today, this is our time. Any parting advice that you would like to share with the audience? Let’s say there is an Engineering Manager, let’s say Mario five years back, who is thinking of going in this direction. What piece of advice would you give that person to get on this journey, and what’s the incentive for that person?
Mario Viktorov Mechoulam: Yes. Okay. Clear. In general, you’ll hear that people and teams are too busy to improve. We all know that. So as a manager who wants to start introducing these concepts and these metrics, one of your responsibilities is to make room, to make space for the team, so that they can sit down and have quality time for this type of conversation. Without it, it’s not going to happen.
Kovid Batra: Okay, perfect. Great, Mario. It was great having you here. And I’m sure, uh, we are recording a few more sessions on this topic because this is close to us as well. But for today, this is our time. Thank you so much. See you once again.
Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. Bye-bye!
How do you build a culture of engineering metrics that drives real impact? Engineering teams often struggle with inefficiencies — high work-in-progress, unpredictable cycle times, and slow shipping. But what if the right metrics could change that?
In this episode of the groCTO by Typo Podcast, host Kovid Batra speaks with Mario Viktorov Mechoulam, Senior Engineering Manager at Contentsquare, about how to establish a data-driven engineering culture using effective metrics. From overcoming cultural resistance to getting executive buy-in, Mario shares his insights on making metrics work for your team.
What You’ll Learn in This Episode:
✅ Why Metrics Matter:How the lack of metrics creates inefficiencies & frustrations in tech teams.
✅ Building a Metrics-Driven Culture:The five key steps — observability, accountability, understanding, discussions, and agreements.
✅ Overcoming Resistance: How to tackle biases, cultural pushback, and skepticism around metrics.
✅ Practical Tips for Engineering Managers: Early success indicators like reduced work-in-progress & improved predictability.
✅ Getting Executive Buy-In:How to align leadership on the value of engineering metrics.
✅ A Musician’s Path to Engineering Metrics:Mario’s unique journey from music to Lean & Toyota Production System-inspired engineering.
Timestamps
00:00 — Let’s begin!
00:47 — Meet the Guest: Mario
01:48 — Mario’s Journey into Engineering Metrics
03:22 — Building a Metrics-Driven Engineering Culture
06:49 — Challenges & Solutions in Metrics Adoption
07:37 — Why Observability & Accountability Matter
11:12 — Driving Cultural Change for Long-Term Success
Kovid Batra: Hi, everyone. Welcome to the all new episode of groCTO by Typo. This is Kovid, your host. Today with us, we have a very special guest whom I found after stalking a lot of people on LinkedIn, but found him in my nearest circle. Uh, welcome, welcome to the show, Mario. Uh, Mario is a Senior Engineering Manager at Contentsquare and, uh, he is an engineering metrics enthusiast, and that’s where we connected. We talked a lot about it and I was sure that he’s the guy we should have on the podcast to talk about it. And that’s why we thought today’s topic should be something that is very close to Mario, which is setting metrics culture in the engineering teams. So once again, welcome, welcome to the show, Mario. It’s great to have you here.
Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. I’m very happy to join this series.
Kovid Batra: Great. So Mario, I think before we get started, one quick question, so that we know you a little bit more. Uh, this is kind of a ritual we always have, so don’t get surprised by it. Uh, tell us something about yourself from your childhood or from your teenage that defines who you are today.
Mario Viktorov Mechoulam: Right. I think my, my, both of my parents are musicians and I played violin for a few years, um, also in the junior orchestra. I think this contact with music and with the orchestra in particular, uh, was very important to define who I am today because of teamwork and synchronicity. So, orchestras need to work together and need to have very, very good collaboration. So, this part stuck somewhere on the back of my brain. And teamwork and collaboration is something that defines me today and I value a lot in others as well.
Kovid Batra: That’s really interesting. That is one unique thing that I got to learn today. And I’m sure orchestra must have been fun.
Mario Viktorov Mechoulam: Yes.
Kovid Batra: Do you do that, uh, even today?
Mario Viktorov Mechoulam: Uh, no, no, unfortunately I’m, I’m like the black sheep of my family because I, once I discovered computers and switched to that, um, I have not looked back. Uh, some days I regret it a bit, uh, but this new adventure, this journey that I’m going through, um, I don’t think it’s, it’s irreplaceable. So I’m, I’m happy with what I’m doing.
Kovid Batra: Great! Thank you for sharing this. Uh, moving on, uh, to our main section, which is setting a culture of metrics in engineering teams. I think a very known topic, a very difficult to do thing, but I think we’ll address the elephant in the room today because we have an expert here with us today. So Mario, I think I’ll, I’ll start with this. Uh, sorry to say this, but, uh, this looks like a boring topic to a lot of engineering teams, right? People are not immediately aligned towards having metrics and measurement and people looking at what they’re doing. And of course, there are biases around it. It’s a good practice. It’s an ideal practice to have in high performing engineering teams. But what made you, uh, go behind this, uh, what excited you to go behind this?
Mario Viktorov Mechoulam: A very good question. And I agree that, uh, it’s not an easy topic. I think that, uh, what’s behind the metrics is around us, whether we like it or not. Efficiency, effectiveness, optimization, productivity. It’s, it’s in everything we do in the world. So, for example, even if you, if you go to the airport and you stay in a queue for your baggage check in, um, I’m sure there’s some metrics there, whether they track it or not, I don’t know. And, um, and I discovered in my, my university years, I had, uh, first contact with, uh, Toyota production system with Lean, how we call it in the West, and I discovered how there were, there were things that looked like, like magic that you could simply by observing and applying use to transform the landscape of organizations and the landscape systems. And I was very lucky to be in touch with this, uh, with this one professor who is, uh, uh, the Director of the Lean Institute in Spain. Um, and I was surprised to see how no matter how big the corporation, how powerful the people, how much money they have, there were inefficiencies everywhere. And in my eyes, it looks like a magic wand. Uh, you just, uh, weave it around and then you magically solve stuff that could not be solved, uh, no matter how much money you put on them. And this was, yeah, this stuck with me for quite some time, but I never realized until a few years into the industry that, that was not just for manufacturing, but, uh, lean and metrics, they’re around us and it’s our responsibility to seize it and to make them, to put them to good use.
Kovid Batra: Interesting. Interesting. So I think from here, I would love to know some of the things that you have encountered in your journey, um, as an engineering leader. Uh, when you start implementing or bringing this thought at first point in the teams, what’s their reaction? How do you deal with it? I know it’s an obvious question to ask because I have been dealing with a lot of teams, uh, while working at Typo, but I want to hear it from you firsthand. What’s the experience like? How do you bring it in? How do you motivate those people to actually come on board? So maybe if you have an example, if you have a story to tell us from there, please go ahead.
Mario Viktorov Mechoulam: Of course, of course. It’s not easy and I’ve made a lot of mistakes and one thing that I learned is that there is no fast track. It doesn’t matter if you know, if you know how to do it. If you’ve done it a hundred times, there’s no fast track. Most of the times it’s a slow grind and requires walking the path with people. I like to follow the, these steps. We start with observability, then accountability, then understanding, then discussions and finally agreements. Um, but of course, we cannot, we cannot, uh, uh, drop everything at, at, at, at once at the team because as you said, there are people who are generally wary of, of this, uh, because of, um, bad, bad practices, because of, um, unmet expectations, frustrations in the past. So indeed, um, I have, I have had to be very, very careful about it. So to me, the first thing is starting with observability, you need to be transparent with your intentions. And I think one, one key sentence that has helped me there is that trying to understand what are the things that people care about. Do you care about your customers? Do you care about how much focus time, how much quality focus time do you have? Do you care about the quality of what you ship? Do you care about the impact of what you ship? So if the answer to these questions is yes, and for the majority of engineers, and not only engineers, it’s, it’s yes, uh, then if you care about something, it might be smart to measure it. So that’s a, that’s a good first start. Um, then by asking questions about what are the pains or generating curiosity, like for example, where do you think we spend the most time when we are working to ship something? You can, uh, you can get to a point where the team agrees to have some observability, some metrics in place. So that’s the first step.
Uh, the second step is to generate accountability. And that is arguably harder. Why so? Because in my career, I’ve seen sometimes people, um, who think that these are management metrics. Um, and they are, so don’t get me wrong. I think management can put these metrics to good use, um, but this sends a message in that nobody else is responsible for them, and I disagree with this. I think that everybody is responsible. Of course, I’m ultimately responsible. So, what I do here is I try to help teams understand how they are accountable of this. So if it was me, then I get to decide how it really works, how they do the work, what tools they use, what process they use. This is boring. It’s boring for me, but it’s also boring and frustrating for the people. People might see this as micromanagement. I think it’s, uh, it’s much more intellectually interesting if you get to decide how you do the work. And this is how I connect the accountability so that we can get teams to accept that okay, these metrics that we see, they are a result of how we have decided to work together. The things, the practices, the habits that we do. And we can, we can influence them.
Kovid Batra: Totally. But the thing is, uh, when you say that everyone should be onboarded with this thought that it is not just for the management, for the engineering, what exactly, uh, are those action items that you plan that get this into the team as a culture? Because I, I feel, uh, I’ll touch this topic again when we move ahead, but when we talk about culture, it comes with a lot of aspects that you can, you can not just define, uh, in two days or three days or five days of time. There is a mindset that already exists and everything that you add on top of it comes only or fits only if it aligns with that because changing culture is a hard thing, right? So when you say that people usually feel that these are management metrics, somehow I feel that this is part of the culture. But when you bring it, when you bring it in a way that everyone is accountable, bringing that change into the mindset is, is, is a little hard, I feel. So what exactly do you do there is what I want to understand from you.
Mario Viktorov Mechoulam: Sure. Um, so just, just to be, to be clear, at the point where you introduce this observability and accountability, it’s not, it’s not part of the culture yet. I think this is the, like, putting the foot on the door, uh, to get people to start, um, to start looking at these, using these and eventually they become a culture, but way, way later down the line.
Kovid Batra: Got it, got it. Yeah.
Mario Viktorov Mechoulam: Another thing is that culture takes, takes a lot of time. It’s, uh, um, how can we say? Um, organic adoption is very slow. And after organic adoption, you eventually get a shifting culture. Um, so I was talking to somebody a few weeks back, and they were telling me a senior leader for one of another company, and they were telling me that it took a good 3–4 years to roll out metrics in a company. And even then, they did not have all the levels of adoption, all the cultural changes everywhere in all the layers that they wanted to. Um, so, so this, there’s no fast track. This, this takes time. And when you say that, uh, people are wary about metrics or people think that manage, this is management metrics when they, when, when you say this is part of culture, it’s true. And it comes maybe from a place where people have been kept out of it, or where they have seen that metrics have been misused to do precisely micromanagement, right?
Kovid Batra: Right.
Mario Viktorov Mechoulam: So, yeah, people feel like, oh, with this, my work is going to be scrutinized. Perhaps I’m going to have to cut corners. I’m going to be forced to cut corners. I will have less satisfaction in the work we do. So, so we need to break that, um, to change the culture. We need to break the existing culture and that, that takes time. Um, so for me, this is just the first step. Uh, just the first step to, um, to make people feel responsible, because at the end of the day, um, every, every team costs some, some, some budget, right, to the company. So for an average sized team, we might be talking $1 million, depending on where you’re located, of course. But $1 million per year. So, of course, this, each of these teams, they need to make $1 million in, uh, in impact to at least break even, but we need more. Um, how do we do that? So two things. First, you need, you need to track the impact of the work you do. So that already tells you that if we care about this, there is a metric that we have to incorporate. We have to track the impact, the effect that the work we ship has in the product. But then the second, second thing is to be able to correlate this, um, to correlate what we ship with the impact that we see. And, and there is a very, very, uh, narrow window to do that. You cannot start working on something and then ship it three years later and say, Oh, I had this impact. No, in three years, landscape changed a lot, right? So we need to be quicker in shipping and we need to be tracking what we ship. Therefore, um, measuring lead time, for example, or cycle time becomes one of the highest expressions of being agile, for example.
Kovid Batra: Got it.
Mario Viktorov Mechoulam: So it’s, it’s through these, uh, constant repetition and helping people see how the way they do work, how, whether they track or not, and can improve or not, um, has repercussions in the customer. Um, it’s, it’s the way to start, uh, introducing this, this, uh, this metric concept and eventually helping shift the culture.
Kovid Batra: So is, let’s say cycle time for, for that matter, uh, is, is a metric that is generally applicable in every situation and we can start introducing it at, at the first step and then maybe explore more and, uh, go for some specifics or cycle time is specific to a situation in itself?
Mario Viktorov Mechoulam: I think cycle time is one of these beautiful metrics that you can apply everywhere. Uh, normally you see it applied on the teams. To do, doing, done. But, uh, what I like is that you can apply it, um, everywhere. So you can apply it, um, across teams, you can apply, apply it at line level, you can even apply it at company level. Um, which is not done often. And I think this is, this is a problem. But applying it outside of teams, it’s definitely part of the cultural change. Um, I’ve seen that the focus is often on teams. There’s a lot of focus in optimizing teams, but when you look at the whole picture, um, there are many other places that present opportunities for optimization, and one way to do that is to start, to start measuring.
Kovid Batra: Mario, did you get a chance where you could see, uh, or compare basically, uh, teams or organizations where people are using engineering metrics, and let’s say, a team which doesn’t use engineering metrics? How does the value delivery in these systems, uh, vary, and to what extent, basically?
Mario Viktorov Mechoulam: Let me preface that. Um, metrics are just a cornerstone, but they don’t guarantee that you’d do better or worse than the teams that don’t apply them. However, it’s, it’s very hard, uh, sometimes to know whether you’re doing good or bad if you don’t have something measurable, um, to, to do that. What I’ve seen is much more frustration generally in teams that do not have metrics. But because not having them, uh, forces them into some bad habits. One of the typical things that I, that I see when I join a team or do a Gemba Walk, uh, on some of the teams that are not using engineering metrics, is high work in progress. We’re talking 30+ things are ongoing for a team of five engineers. This means that on average, everybody’s doing 5–6 things at the same time. A lot of context switching, a lot of multitasking, a lot of frustration and leading to things taking months to ship instead of days. Of course, as I said, we can have teams that are doing great without this, but, um, if you’re already doing this, I think just adding the metric to validate it is a very small price to pay. And even if you’re doing great, this can start to change in any moment because of changes in the team composition, changes in the domain, changes in the company, changes in the process that is top-down. So it’s, uh, normally it’s, it’s, it’s very safe to have the metrics to be able to identify this type of drift, this type of degradation as soon as they happen. What I’ve seen also with teams that do have metric adoption is first this eventual cultural change, but then in general, uh, one thing that they do is that they keep, um, they keep the pieces of work small, they limit the work in progress and they are very, very much on top of the results on a regular basis and discussing these results. Um, so this is where we can continue with the, uh, cultural change.
Uh, so after we have, uh, accountability, uh, the next thing, step is understanding. So helping people through documentation, but also through coaching, understand how the choices that we make, the decisions, the events, produce the results that we see for which we’re responsible. And after that, fostering discussion for which you need to have trust, because here we don’t want blaming. We don’t want comparing teams. We want to understand what happened, what led to this. And then, with these discussions, see what can we do to prevent these things. Um, which leads to agreement. So doing this circle, closing the circle, doing it constantly, creates habits. Habits create continuous improvement, continuous learning. And at a certain point, you have the feeling that the team already understands the concepts and is able to work autonomously on this. And this is the moment where you delegate responsibility, um, of this and of the execution as well. And you have created, you have changed a bit the culture in one team.
Kovid Batra: Makes sense. What else does it take, uh, to actually bring in this culture? What else do you think is, uh, missing in this recipe yet?
Mario Viktorov Mechoulam: Yes. Um, I think working with teams is one thing. It’s a small and controlled environment. But the next thing is that you need executive sponsorship. You need to work at the organization level. And that is, that is a bit harder. Let’s say just a bit harder. Um, why is it hard?
Kovid Batra: I see some personal pain coming in there, right?
Mario Viktorov Mechoulam: Um, well, no, it depends. I think it can be harder or it can be easier. So, for example, uh, my experience with startups is that in general, getting executive sponsorship there, the buy-in, is way easier. Um, at the same time, the, because it’s flatter, so you’re in contact day to day with the people who, who need to give you this buy-in. At the same time, very interestingly, engineers in these organizations often are, often need these metrics much less at that point. Why? Because when we talk about startups, we’re talking about much less meetings, much less process. A lot of times, a lot of, um, people usually wear multiple hats, boundaries between roles are not clear. So there’s a lot of collaboration. People usually sit in the very same room. Um, so, so these are engineers that don’t need it, but it’s also a good moment to plant the seed because when these companies grow, uh, you’ll be thankful for that later. Uh, where it’s harder to get it, it’s in bigger corporations. But it’s in these places where I think that it’s most needed because the amount of process, the amount of bureaucracy, the amount of meetings, is very, very draining to the teams in those places. And usually you see all these just piles up. It seldom gets removed. Um, that, maybe it’s a topic for a different discussion. But I think people are very afraid of removing something and then be responsible of the result that removal brings. But yeah, I have, I have had, um, we can say fairly, a fair success of also getting the executive sponsorship, uh, in, in organizations to, to support this and I have learned a few things also along the way.
Kovid Batra: Would you like to share some of the examples? Not specifically from, let’s say, uh, getting sponsorship from the executives, I would be interested because you say it’s a little hard in places. So what things do you think, uh, can work out when you are in that room where you need to get a buy-in on this? What exactly drives that?
Mario Viktorov Mechoulam: Yes. The first point is the same, both for grassroots movements with teams and executive sponsorship, and that is to be transparent. Transparent with what, what do you want to do? What’s your intent and why do you think this is important? Uh, now here, and I’m embarrassed to say this, um, we, we want to change the culture, right? So we should focus on talking about habits, um, right? About culture, about people, et cetera. Not that much about, um, magic to say that, but I, but I’m guilty of using that because, um, people, people like how this sounds, people like to see, to, to, to hear, oh, we’ll introduce metrics and they will be faster and we’ll be more efficient. Um, so it’s not a direct relationship. As I said, it’s a stepping stone that can help you get there. Um, but, but it’s not, it’s not a one month journey or a one year journey. It can take slightly longer, but sometimes to get, to get the attention, you have to have a pitch which focuses more on efficiency, which focuses more on predictability and these type of things. So that’s definitely one, one learning. Um, second learning is that it’s very important, no matter who you are, but it’s even more important when you are, uh, not at the top of the, uh, of the management, uh, uh, pyramid to get, um, by, uh, so to get coaching from your, your direct manager. So if you have somebody that, uh, makes your goals, your objectives, their own, uh, it’s great because they have more experience, uh, they can help you navigate these and present the cases, uh, in a much better and structured way for the, for the intent that you have. And I was very lucky there as well to count on people that were supportive, uh, that were coaching me along the way. Um, yes.
So, first step is the same. First step is to be transparent and, uh, with your intent and share something that you have done already. Uh, here we are often in a situation where you have to put your money where your mouth is, and sometimes you have to invest from your own pocket if you want, for example, um, to use a specific tool. So to me, tools don’t really matter. So what’s important is start with some, something and then build up on top of it, change the culture, and then you’ll find the perfect tool that serves your purpose. Um, exactly. So sometimes you have to, you have to initiate this if you want to have some, some, some metrics. Of course, you can always do this manually. I’ve done it in the past, but I definitely don’t recommend it because it’s a lot of work. In an era where most of these tools are commodities, so we’re lucky enough to be able to gather this metric, this information. Yeah, so usually after this PoC, this experiment for three to six months with the team, you should have some results that you can present, um, to, um, to get executive sponsorship. Something that’s important here that I learned is that you need to present the results very, very precisely. Uh, so what was the problem? What are the actions we did? What’s the result? And that’s not always easy because when you, when you work with metrics for a while, you quickly start to see that there are a lot of synergies. There’s overlapping. There are things that impact other things, right? So sometimes you see a change in the trend, you see an improvement somewhere, uh, you see the cultural impact also happening, but you’re not able to define exactly what’s one thing that we need or two things that we, that we need to change that. Um, so, so that part, I think is very important, but it’s not always easy. So it has to be prepared clearly. Um, the second part is that unfortunately, I discovered that not many people are familiar with the topics. So when introducing it to get the exact sponsorship, you need to, you need to be able to explain them in a very simple, uh, and an easy way and also be mindful of the time because most of the people are very busy. Um, so you don’t want to go in a full, uh, full blown explanation of several hours.
Kovid Batra: I think those people should watch these kinds of podcasts.
Mario Viktorov Mechoulam: Yeah. Um, but, but, yeah, so it’s, it’s, it’s the experiment, it’s the results, it’s the actions, but also it’s a bit of background of why is this important and, um, yeah, and, and how did it influence what we did.
Kovid Batra: Yeah, I mean, there’s always, uh, different, uh, levels where people are in this journey. Let’s, let’s call this a journey where you are super aware, you know what needs to be done. And then there is a place where you’re not aware of the problem itself. So when you go through this funnel, there are people whom you need to onboard in your team, who need to first understand what we are talking about what does it mean, how it’s going to impact, and what exactly it is, in very simple layman language. So I totally understand that point and realize that how easy as well as difficult it is to get these things in place, bring that culture of metrics, engineering metrics in the engineering teams.
Well, I think this was something really interesting. One last piece that I want to touch upon: when you put in all this effort into onboarding the teams, fostering that culture, getting buy-in from the executives, doing your PoCs and presenting them, staying in sync with the team, there must be some specific indicators that you start seeing in the teams. I know you have just covered it, but I want to highlight that point again: what exactly should someone, let’s say an engineering manager trying to implement this in their team, be looking for early on, say one or two months after starting that PoC?
Mario Viktorov Mechoulam: I think the key is how comfortable the people in the team get in discussing and explaining the concepts during analysis of the metrics; this qualitative analysis is key, and it is probably where most of the effort goes in the first months. We need to make sure that people understand the metrics, what they represent, and how the work we do has an impact on them. One cue for me was people in my teams telling me, “I want to run this.” That meant we had closed the circle: we were close to having a habit, and people were ready to have this responsibility delegated to them. It put people in a place where they had to drive the conversation and think: what am I seeing? What happened? What could it mean? What actions do we want to take? Is this something we saw in the past already, tried to address, and maybe made worse? And then you should also see a change in the trend of the metrics. For example, work in progress coming down from 30+ to something close to team size, or even lower, because at team size it still means people are working independently when maybe you want them to collaborate. Some of the metrics change drastically. We can talk about it another time, but you can see how the standard deviation of cycle time squeezes, which means shipping no longer feels random; now we can make a very accurate guess of when something is going to ship. These types of things, to me, mark good changes and show that you’re on the right path.
Kovid Batra: Honestly, Mario, these are very insightful, very practical tips on the implementation piece, and I’m sure it doesn’t end here. We are going to have more such discussions on this topic; I want to deep dive into which exact metrics to use, what suits which situation, and topics like how the standard deviation of your cycle time starts changing, which is in itself an interesting thing to talk about. We’ll probably cover that in the next podcast with you. For today, this is our time. Any parting advice you would like to share with the audience? Let’s say there is an engineering manager, say Mario five years back, who is thinking of going in this direction. What piece of advice would you give that person to get on this journey, and what’s the incentive for them?
Mario Viktorov Mechoulam: Yes. Okay. Clear. In general, you’ll hear that people and teams are too busy to improve. We all know that. So as a manager who wants to start introducing these concepts and these metrics, one of your responsibilities is to make room, to make space for the team, so that they can sit down and have quality time for this type of conversation. Without that, it’s not going to happen.
Kovid Batra: Okay, perfect. Great, Mario. It was great having you here. And I’m sure, uh, we are recording a few more sessions on this topic because this is close to us as well. But for today, this is our time. Thank you so much. See you once again.
Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. Bye-bye!
In the ever-changing world of software development, tracking progress and gaining insights into your projects is crucial. While GitHub Analytics provides developers and teams with valuable data-driven intelligence, relying solely on GitHub data may not provide the full picture needed for making informed decisions. By integrating GitHub Analytics with JIRA, engineering teams can gain a more comprehensive view of their development workflows, enabling them to take more meaningful actions.
Why GitHub Analytics Alone is Insufficient
GitHub Analytics offers valuable insights into:
Repository Activity: Tracking commits, pull requests and contributor activity within repositories.
Collaboration Effectiveness: Evaluating how effectively teams collaborate on code reviews and issue resolution.
Workflow Identification: Identifying potential bottlenecks and inefficiencies within the development process.
However, GitHub Analytics primarily focuses on repository activity and code contributions. It lacks visibility into broader project management aspects such as sprint progress, backlog prioritization, and cross-team dependencies. This limited perspective can hinder a team's ability to understand the complete picture of their development workflow and make informed decisions.
The Power of GitHub & JIRA Integration
JIRA is a widely used platform for issue tracking, sprint planning, and agile project management. When combined with GitHub Analytics, it creates a powerful ecosystem that:
Connects Code Changes with Project Tasks and Business Objectives: By linking GitHub commits and pull requests to specific JIRA issues (like user stories, bugs, and epics), teams can understand how their code changes contribute to overall project goals.
Real-World Example: A developer fixes a bug in a specific feature. By linking the GitHub pull request to the corresponding JIRA bug ticket, the team can track the resolution of the issue and its impact on the overall product.
Provides Deeper Insights into Development Velocity, Bottlenecks, and Blockers: Analyzing data from both GitHub and JIRA allows teams to identify bottlenecks in the development process that might not be apparent when looking at GitHub data alone.
Real-World Example: If a team observes a sudden drop in commit frequency, they can investigate JIRA issues to determine if it's caused by unresolved dependencies, unclear requirements, or other blockers.
Enhances Collaboration Between Engineering and Product Management Teams: By providing a shared view of project progress, GitHub and JIRA integration fosters better communication and collaboration between engineering and product management teams.
Real-World Example: Product managers can gain insights into the engineering team's progress on specific features by tracking the progress of related JIRA issues and linked GitHub pull requests.
Ensures Traceability from Feature Requests to Code Deployments: By linking JIRA issues to GitHub pull requests and ultimately to production deployments, teams can establish clear traceability from initial feature requests to their implementation and release.
Real-World Example: A team can track the journey of a feature from its initial conception in JIRA to its final deployment to production by analyzing the linked GitHub commits, pull requests, and deployment information.
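In practice, this linking usually rests on a simple naming convention: include the JIRA issue key in commit messages and pull request titles, and any analytics tool can join the two datasets. As a rough illustration, here is a minimal Python sketch that scans recent commits via the GitHub REST API and extracts JIRA keys from commit messages. The repository name, token variable, and key pattern are placeholder assumptions, not any specific team's setup.

```python
import os
import re
import requests

REPO = "your-org/your-repo"          # hypothetical repository
TOKEN = os.environ["GITHUB_TOKEN"]   # assumes a personal access token is set
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g., "PROJ-123"

# Fetch the most recent commits for the repository.
resp = requests.get(
    f"https://api.github.com/repos/{REPO}/commits",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"per_page": 100},
    timeout=30,
)
resp.raise_for_status()

# Map each JIRA issue key to the commits that reference it.
links = {}
for commit in resp.json():
    message = commit["commit"]["message"]
    for key in JIRA_KEY.findall(message):
        links.setdefault(key, []).append(commit["sha"][:7])

for key, shas in sorted(links.items()):
    print(f"{key}: {len(shas)} commit(s) -> {', '.join(shas)}")
```

A join like this is the raw material behind the traceability described above: once every commit carries an issue key, cycle time, review time, and deployment data can all be grouped by business-level work items.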
More Examples of How JIRA + GitHub Analytics Brings Deeper Insights
Tracking Work from Planning to Deployment:
Without JIRA: GitHub Analytics shows PR activity and commit frequency but doesn't provide context on whether work is aligned with business goals.
With JIRA: Teams can link commits and PRs to specific JIRA tickets, tracking the progress of user stories and epics from the backlog to release, ensuring that development efforts are aligned with business priorities.
Identifying Bottlenecks in the Development Process:
Without JIRA: GitHub Analytics highlights cycle time, but it doesn't explain why a delay is happening.
With JIRA: Teams can analyze blockers within JIRA issues—whether due to unresolved dependencies, pending stakeholder approvals, unclear requirements, or other factors—to pinpoint the root cause of delays and address them effectively.
Enhanced Sprint Planning & Resource Allocation:
Without JIRA: Engineering teams rely on GitHub metrics to gauge performance but may struggle to connect them with workload distribution.
With JIRA: Managers can assess how many tasks remain open versus completed, analyze team workloads, and adjust priorities in real-time to ensure efficient resource allocation and maximize team productivity.
Connecting Engineering Efforts to Business Goals:
Without JIRA: GitHub Analytics tracks technical contributions but doesn't show their impact on business priorities.
With JIRA: Product owners can track how engineering efforts align with strategic objectives by analyzing the progress of JIRA issues linked to key business goals, ensuring that the team is working on the most impactful tasks.
Getting Started with GitHub & JIRA Analytics Integration
Start leveraging the power of integrated analytics with tools like Typo, a dynamic platform designed to optimize your GitHub and JIRA experience. Whether you're working on a startup project or managing an enterprise-scale development team, such tools offer powerful analytics tailored to your specific needs.
How to Integrate GitHub & JIRA with Typo:
Connect Your GitHub and JIRA Accounts: Visit Typo's platform and seamlessly link both tools to establish a unified view of your development data.
Configure Dashboards: Build custom analytics dashboards that track both code contributions (from GitHub) and issue progress (from JIRA) in a single, integrated view.
Analyze Insights Together: Gain deeper insights by analyzing GitHub commit trends alongside JIRA sprint performance, identifying correlations and uncovering hidden patterns within your development workflow.
Conclusion
While GitHub Analytics is a valuable tool for tracking repository activity, integrating it with JIRA unlocks deeper engineering insights, allowing teams to make smarter, data-driven decisions. By bridging the gap between code contributions and project management, teams can improve efficiency, enhance collaboration, and ensure that engineering efforts align with business goals.
Sign Up for Typo’s GitHub & JIRA Analytics Today!
Whether you aim to enhance software delivery, improve team collaboration, or refine project workflows, Typo provides a flexible, data-driven platform to meet your needs.
In today's fast-paced software development landscape, optimizing engineering performance is crucial for staying competitive. Engineering leaders need a deep understanding of workflows, team velocity, and potential bottlenecks. Engineering intelligence platforms provide valuable insights into software development dynamics, helping leaders make data-driven decisions. While Swarmia is a well-known player, it might not be the perfect fit for every team. This article explores the top Swarmia alternatives, giving you the knowledge to choose the best platform for your organization's needs. We'll delve into features, benefits, and potential drawbacks to help you make an informed decision.
Understanding Swarmia's Strengths
Swarmia is an engineering intelligence platform designed to improve operational efficiency, developer productivity, and software delivery. It integrates with popular development tools and uses data analytics to provide actionable insights.
Key Functionalities:
Data Aggregation: Connects to repositories like GitHub, GitLab, and Bitbucket, along with issue trackers like Jira and Azure DevOps, to create a comprehensive view of engineering activities.
Workflow Optimization: Identifies inefficiencies in development cycles by analyzing task dependencies, code review bottlenecks, and other delays.
Performance Metrics & Visualization: Presents data through dashboards, offering insights into deployment frequency, cycle time, resource allocation, and other KPIs.
Actionable Insights: Helps engineering leaders make data-driven decisions to improve workflows and team collaboration.
Why Consider a Swarmia Alternative?
Despite its strengths, Swarmia might not be ideal for everyone. Here's why you might want to explore alternatives:
Limited Customization: May not adapt well to highly specialized or unique workflows.
Complex Onboarding: Can have a steep learning curve, hindering quick adoption.
Pricing: Can be expensive for smaller teams or organizations with budget constraints.
User Interface: Some users find the UI challenging to navigate.
Top 6 Swarmia Competitors: Features, Pros & Cons
Here are six leading alternatives to Swarmia, each with its own unique strengths:
1. Typo
Typo is a comprehensive engineering intelligence platform providing end-to-end visibility into the entire SDLC. It focuses on actionable insights through integration with CI/CD pipelines and issue tracking tools.
Key Features:
Unified DORA and engineering metrics dashboard.
AI-driven analytics for sprint reviews, pull requests, and development insights.
Industry benchmarks for engineering performance evaluation.
Automated sprint analytics for workflow optimization.
Pros:
Strong tracking of key engineering metrics.
AI-powered insights for data-driven decision-making.
Responsive user interface and good customer support.
Cons:
Limited customization options in existing workflows.
Potential for further feature expansion.
G2 Reviews Summary:
G2 reviews indicate decent user engagement with a strong emphasis on positive feedback, particularly regarding customer support.
2. Jellyfish
Jellyfish is an advanced analytics platform that aligns engineering efforts with broader business goals. It gives real-time visibility into development workflows and team productivity, focusing on connecting engineering work to business outcomes.
Key Features:
Resource allocation analytics for optimizing engineering investments.
Real-time tracking of team performance.
DevOps performance metrics for continuous delivery optimization.
Pros:
Granular data tracking capabilities.
Intuitive user interface.
Facilitates cross-team collaboration.
Cons:
Can be complex to implement and configure.
Limited customization options for tailored insights.
G2 Reviews Summary:
G2 reviews highlight strong core features but also point to potential implementation challenges, particularly around configuration and customization.
3. LinearB
LinearB is a data-driven DevOps solution designed to improve software delivery efficiency and engineering team coordination. It focuses on data-driven insights, identifying bottlenecks, and optimizing workflows.
Key Features:
Workflow visualization for process optimization.
Risk assessment and early warning indicators.
Customizable dashboards for performance monitoring.
Pros:
Extensive data aggregation capabilities.
Enhanced collaboration tools.
Comprehensive engineering metrics and insights.
Cons:
Can have a complex setup and learning curve.
High data volume may require careful filtering.
G2 Reviews Summary:
G2 reviews generally praise LinearB's core features, such as flow management and insightful analytics. However, some users have reported challenges with complexity and the learning curve.
4. Waydev
Waydev is an engineering analytics solution with a focus on Agile methodologies. It provides in-depth visibility into development velocity, resource allocation, and delivery efficiency.
Key Features:
Automated engineering performance insights.
Agile-based tracking of development velocity and bug resolution.
Budgeting reports for engineering investment analysis.
Pros:
Highly detailed metrics analysis.
Streamlined dashboard interface.
Effective tracking of Agile engineering practices.
Cons:
Steep learning curve for new users.
G2 Reviews Summary:
G2 reviews for Waydev are limited, making it difficult to draw definitive conclusions about user satisfaction.
5. Sleuth
Sleuth is a deployment intelligence platform specializing in tracking and improving DORA metrics. It provides detailed insights into deployment frequency and engineering efficiency.
Key Features:
Automated deployment tracking and performance benchmarking.
Real-time performance evaluation against efficiency targets.
Lightweight and adaptable architecture.
Pros:
Intuitive data visualization.
Seamless integration with existing toolchains.
Cons:
Pricing may be restrictive for some organizations.
G2 Reviews Summary:
G2 reviews for Sleuth are also limited, making it difficult to draw definitive conclusions about user satisfaction.
6. Pluralsight Flow (formerly Git Prime)
Pluralsight Flow provides a detailed overview of the development process, helping identify friction and bottlenecks. It aligns engineering efforts with strategic objectives by tracking DORA metrics, software development KPIs, and investment insights. It integrates with various development platforms such as Azure DevOps and GitLab.
Key Features:
Offers insights into why trends occur and potential related issues.
Predicts value impact for project and process proposals.
Features DORA analytics and investment insights.
Provides centralized insights and data visualization.
Pros:
Strong core metrics tracking capabilities.
Process improvement features.
Data-driven insights generation.
Detailed metrics analysis tools.
Efficient work tracking system.
Cons:
Complex and challenging user interface.
Issues with metrics accuracy/reliability.
Steep learning curve for users.
Inefficiencies in tracking certain metrics.
Problems with tool integrations.
G2 Reviews Summary:
The review numbers show moderate engagement (6-12 mentions for pros, 3-4 for cons), placing it between Waydev's limited feedback and Jellyfish's extensive reviews. The feedback suggests strong core functionality but notable usability challenges.
The Power of Integration
Engineering management platforms become even more powerful when they integrate with your existing tools. Seamless integration with platforms like Jira, GitHub, CI/CD systems, and Slack offers several benefits:
Automation: Automates tasks like status updates and alerts.
Customization: Adapts to specific team needs and workflows.
Centralized Data: Enhances collaboration and reduces context switching.
By leveraging these integrations, software teams can significantly boost productivity and focus on building high-quality products.
Key Considerations for Choosing an Alternative
When selecting a Swarmia alternative, keep these factors in mind:
Team Size and Budget: Look for solutions that fit your budget, considering freemium plans or tiered pricing.
Specific Needs: Identify your key requirements. Do you need advanced customization, DORA metrics tracking, or a focus on developer experience?
Ease of Use: Choose a platform with an intuitive interface to ensure smooth adoption.
Integrations: Ensure seamless integration with your current tool stack.
Customer Support: Evaluate the level of support offered by each vendor.
Conclusion
Choosing the right engineering analytics platform is a strategic decision. The alternatives discussed offer a range of capabilities, from workflow optimization and performance tracking to AI-powered insights. By carefully evaluating these solutions, engineering leaders can improve team efficiency, reduce bottlenecks, and drive better software development outcomes.
Software teams relentlessly pursue rapid, consistent value delivery. Yet, without proper metrics, this pursuit becomes directionless.
While engineering productivity is a combination of multiple dimensions, issue cycle time acts as a critical indicator of team efficiency.
Simply put, this metric reveals how quickly engineering teams convert requirements into deployable solutions.
By understanding and optimizing issue cycle time, teams can accelerate delivery and enhance the predictability of their development practices.
In this guide, we discuss cycle time's significance and provide actionable frameworks for measurement and improvement.
What is Issue Cycle Time?
Issue cycle time measures the duration between when work actively begins on a task and its completion.
This metric specifically tracks the time developers spend actively working on an issue, excluding external delays or waiting periods.
Unlike lead time, which includes all elapsed time from issue creation, cycle time focuses purely on active development effort.
Core Components of Issue Cycle Time
Work Start Time: When a developer transitions the issue to "in progress" and begins active development
Development Duration: Time spent writing, testing, and refining code
Review Period: Time in code review and iteration based on feedback
Testing Phase: Duration of QA verification and bug fixes
Work Completion: Final approval and merge of changes into the main codebase
Understanding these components allows teams to identify bottlenecks and optimize their development workflow effectively.
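To make the definition concrete, here is a minimal Python sketch that computes cycle time from an issue's timestamped status transitions. The status names and record shape are illustrative, not any particular tracker's schema.

```python
from datetime import datetime

def cycle_time_days(transitions):
    """Cycle time: first move to 'in progress' until the final 'done'.

    `transitions` is a list of (timestamp, status) pairs, oldest first.
    Returns None if the issue never started or never finished.
    """
    started = next((ts for ts, s in transitions if s == "in progress"), None)
    finished = next((ts for ts, s in reversed(transitions) if s == "done"), None)
    if started is None or finished is None:
        return None
    return (finished - started).total_seconds() / 86400

# Illustrative issue history: started on the 3rd, done on the 7th.
issue = [
    (datetime(2025, 3, 3, 9, 0), "in progress"),
    (datetime(2025, 3, 5, 14, 0), "in review"),
    (datetime(2025, 3, 7, 11, 0), "done"),
]
print(f"{cycle_time_days(issue):.1f} days")  # ~4.1 days
```

Note that the clock starts at the transition to "in progress", not at ticket creation; starting it at creation would give you lead time instead.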
Why Does Issue Cycle Time Matter?
Here’s why you must track issue cycle time:
Impact on Productivity
Issue cycle time directly correlates with team output capacity. Shorter cycle times allow teams to complete more work within fixed timeframes, keeping resource utilization high. This accelerated delivery cadence compounds over time, allowing teams to tackle more strategic initiatives rather than getting bogged down in prolonged development cycles.
Identifying Bottlenecks
By tracking cycle time metrics, teams can pinpoint specific stages where work stalls. This reveals process inefficiencies, resource constraints, or communication gaps that break flow. Data-driven bottleneck identification allows targeted process improvements rather than speculative changes.
Enhanced Collaboration
Rapid cycle times help build tighter feedback loops between developers, reviewers, and stakeholders. When issues move quickly through development stages, teams maintain context and momentum. When collaboration is streamlined, handoff friction is reduced. And there’s no knowledge loss between stages, either.
Better Predictability
Consistent cycle times help in reliable sprint planning and release forecasting. Teams can confidently estimate delivery dates based on historical completion patterns. This predictability helps align engineering efforts with business goals and improves cross-functional planning.
Customer Satisfaction
Quick issue resolution directly impacts user experience. When teams maintain efficient cycle times, they can respond quickly to customer feedback and deliver improvements more frequently. This responsiveness builds trust and strengthens customer relationships.
3 Phases of Issue Cycle Time
The development process is a journey that can be summed up in three phases. Let’s break these phases down:
Phase 1: Ticket Creation to Work Start
The initial phase includes critical pre-development activities that significantly impact overall cycle time. This period begins when a ticket enters the backlog and ends when active development starts.
Teams often face delays in ticket assignment due to unclear prioritization frameworks or manual routing processes. Misallocated resources are a frequent result when assignment procedures lack automation.
Implementing automated ticket routing and standardized prioritization matrices can substantially reduce initial delays.
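A standardized prioritization matrix can be as simple as a lookup from impact and urgency to a priority level that drives automatic routing. Here is a minimal sketch of the idea; the labels, matrix, and queue names are entirely illustrative.

```python
# Illustrative impact/urgency matrix: the mapping is a team agreement,
# not a universal standard.
PRIORITY = {
    ("high", "high"): "P1",
    ("high", "low"):  "P2",
    ("low", "high"):  "P2",
    ("low", "low"):   "P3",
}

def route_ticket(ticket):
    """Assign a priority and a queue from the matrix instead of by hand."""
    priority = PRIORITY[(ticket["impact"], ticket["urgency"])]
    queue = "expedite" if priority == "P1" else "standard"
    return {**ticket, "priority": priority, "queue": queue}

print(route_ticket({"id": "T-42", "impact": "high", "urgency": "low"}))
# {'id': 'T-42', 'impact': 'high', 'urgency': 'low', 'priority': 'P2', 'queue': 'standard'}
```

Even a rule this simple removes the per-ticket debate about where work should go, which is where much of the Phase 1 delay tends to accumulate.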
Phase 2: Active Work Period
The core development phase represents the most resource-intensive segment of the cycle. Development time varies based on complexity, dependencies, and developer expertise.
Success in this phase demands precise requirement documentation and proactive dependency management. One should also establish escalation paths. Teams should maintain living documentation and implement pair programming for complex tasks.
Phase 3: Resolution to Closure
The final phase covers all post-development activities required for production deployment.
This stage often becomes a significant bottleneck due to:
Sequential review processes
Manual quality assurance procedures
Multiple approval requirements
Environment-specific deployment constraints
How can this be optimized? By:
Implementing parallel review tracks
Automating test execution
Establishing service-level agreements for reviews
Creating self-service deployment capabilities
Each phase comes with many optimization opportunities. Teams should measure phase-specific metrics to identify the highest-impact improvement areas. Regular analysis of phase durations allows targeted process refinement, which is critical to maintaining software engineering efficiency.
How to Measure and Analyze Issue Cycle Time
Effective cycle time measurement requires the right tools and systematic analysis approaches. Businesses must establish clear frameworks for data collection, benchmarking, and continuous monitoring to derive actionable insights.
Here’s how you can measure issue cycle time:
Metrics and Tools
Modern development platforms offer integrated cycle time tracking capabilities. Tools like Typo automatically capture timing data across workflow states.
These platforms provide comprehensive dashboards displaying velocity trends, bottleneck indicators, and predictability metrics.
Integration with version control systems enables correlation between code changes and cycle time patterns. Advanced analytics features support custom reporting and team-specific performance views.
Establishing Benchmarks
Benchmark definition requires contextual analysis of team composition, project complexity, and delivery requirements.
Start by calculating your team's current average cycle time across different issue types. Factor in:
Team size and experience levels
Technical complexity categories
Historical performance patterns
Industry standards for similar work
The right approach is to define acceptable ranges rather than fixed targets. Consider setting graduated improvement goals: 10% reduction in the first quarter, 25% by year-end.
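One way to turn historical data into ranges rather than fixed targets is to look at percentiles per issue type. A minimal sketch, assuming you have already collected cycle times in days grouped by type (the numbers below are illustrative):

```python
from statistics import median, quantiles

cycle_times = {  # illustrative historical data, in days
    "bug":     [0.5, 1, 1, 2, 3, 3, 4, 6],
    "feature": [2, 3, 4, 5, 5, 7, 9, 14],
}

for issue_type, times in cycle_times.items():
    p50 = median(times)
    # quantiles(..., n=10) returns the 9 deciles; index 8 is the 90th percentile.
    p90 = quantiles(times, n=10)[8]
    print(f"{issue_type:8} median={p50:.1f}d  p90={p90:.1f}d")
```

Reporting a median and a high percentile per type gives the team an acceptable band ("bugs usually close in 1-2 days, almost always within a week") instead of a single number that every issue is expected to hit.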
Using Visualizations
Data visualization converts raw metrics into actionable insights. Cycle time scatter plots show completion patterns and outliers. Cumulative flow diagrams can also be used to show work in progress limitations and flow efficiency. Control charts track stability and process improvements over time.
Ideally, businesses should implement:
Weekly trend analysis
Percentile distribution charts
Work-type segmentation views
Team comparison dashboards
By implementing these visualizations, businesses can identify bottlenecks and optimize workflows for greater engineering productivity.
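As a rough illustration of the scatter-plot view, the sketch below plots completion dates against cycle times and draws the 85th percentile line, a common reference in this kind of chart. It assumes matplotlib and NumPy are available and uses randomly generated data in place of a real export.

```python
import matplotlib.pyplot as plt
import numpy as np

# Illustrative data: completion day (x) and cycle time in days (y).
completed_day = np.arange(1, 41)
rng = np.random.default_rng(7)
cycle_days = rng.gamma(shape=2.0, scale=2.0, size=40)

p85 = np.percentile(cycle_days, 85)

plt.scatter(completed_day, cycle_days, alpha=0.7)
plt.axhline(p85, linestyle="--", label=f"85th percentile = {p85:.1f}d")
plt.xlabel("Completion day")
plt.ylabel("Cycle time (days)")
plt.title("Cycle time scatter plot")
plt.legend()
plt.show()
```

Points far above the percentile line are the outliers worth discussing in a retrospective; a rising percentile line over successive windows signals a process that is slowing down.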
Regular Reviews
Establish structured review cycles at multiple organizational levels. These could be:
Weekly team retrospectives should examine cycle time trends and identify immediate optimization opportunities.
Monthly department reviews analyze cross-team patterns and resource allocation impacts.
Quarterly organizational assessments evaluate systemic issues and strategic improvements.
These reviews should be templatized and consistent. The idea is to focus on:
Trend analysis
Bottleneck identification
Process modification results
Team feedback integration
Best Practices to Optimize Issue Cycle Time
Focus on the following proven strategies to enhance workflow efficiency while maintaining output quality:
Automate Repetitive Tasks: Use automation for code testing, deployment, and issue tracking. Implement CI/CD pipelines and automated code review tools to eliminate manual handoffs.
Adopt Agile Methodologies: Implement Scrum or Kanban frameworks with clear sprint cycles or workflow stages. Maintain structured ceremonies and consistent delivery cadences.
Limit Work-in-Progress (WIP): Set strict WIP limits per development stage to reduce context switching and prevent resource overallocation. Monitor queue lengths to maintain steady progress (a minimal WIP-check sketch follows this list).
Conduct Daily Standups: Hold focused standup meetings to identify blockers early, track issue age, and enable immediate escalation for unresolved tasks.
Ensure Comprehensive Documentation: Maintain up-to-date technical specifications and acceptance criteria to reduce miscommunication and streamline issue resolution.
Cross-Train Team Members: Build versatile skill sets within the team to minimize dependencies on single individuals and allow flexible resource allocation.
Streamline Review Processes: Implement parallel review tracks, set clear review time SLAs, and automate style and quality checks to accelerate approvals.
Leverage Collaboration Tools: Use integrated development platforms and real-time communication channels to ensure seamless coordination and centralized knowledge sharing.
Track and Analyze Key Metrics: Monitor performance indicators daily with automated reports to identify trends, spot inefficiencies, and take corrective action.
Host Regular Retrospectives: Conduct structured reviews to analyze cycle time patterns, gather feedback, and implement continuous process improvements.
By consistently applying these best practices, engineering teams can reduce delays and optimize issue cycle time for sustained success.
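For the WIP limits mentioned above, even a tiny script can flag overloaded stages before they become bottlenecks. A minimal sketch, assuming you can export a count of open issues per stage; the stage names and limits are illustrative team agreements.

```python
WIP_LIMITS = {"in progress": 6, "in review": 4, "testing": 3}  # illustrative

def check_wip(board_counts):
    """Return the stages whose current WIP exceeds the agreed limit."""
    return {
        stage: (count, WIP_LIMITS[stage])
        for stage, count in board_counts.items()
        if stage in WIP_LIMITS and count > WIP_LIMITS[stage]
    }

violations = check_wip({"in progress": 9, "in review": 3, "testing": 5})
for stage, (count, limit) in violations.items():
    print(f"WIP breach in '{stage}': {count} items (limit {limit})")
```

Wired into a daily report or chat alert, a check like this makes the limit a shared, visible constraint rather than something each developer has to remember.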
Real-Life Example of Optimizing Issue Cycle Time
A mid-sized fintech company with 40 engineers faced persistent delivery delays despite having talented developers. Their average issue cycle time had grown to 14 days, creating mounting pressure from stakeholders and frustration within the team.
After analyzing their workflow data, they identified three critical bottlenecks:
Code Review Congestion: Senior developers were becoming bottlenecks with 20+ reviews in their queue, causing delays of 3-4 days for each ticket.
Environment Stability Issues: Inconsistent test environments led to frequent deployment failures, adding an average of 2 days to cycle time.
Unclear Requirements: Developers spent approximately 30% of their time seeking clarification on ambiguous tickets.
The team implemented a structured optimization approach:
Phase 1: Baseline Establishment (2 weeks)
Documented current workflow states and transition times
Calculated baseline metrics for each cycle time component
Surveyed team members to identify perceived pain points
Phase 2: Targeted Interventions (8 weeks)
Implemented a "review buddy" system that paired developers and established a maximum 24-hour review SLA
Standardized development environments using containerization
Created a requirement template with mandatory fields for acceptance criteria
Set WIP limits of 3 items per developer to reduce context switching
Phase 3: Measurement and Refinement (Ongoing)
Established weekly cycle time reviews in team meetings
Created dashboards showing real-time metrics for each workflow stage
Implemented a continuous improvement process where any team member could propose optimization experiments
Results After 90 Days:
Overall cycle time reduced from 14 days to 5.5 days (60% improvement)
Code review turnaround decreased from 72 hours to 16 hours
Deployment success rate improved from 65% to 94%
Developer satisfaction scores increased by 40%
On-time delivery rate rose from 60% to 87%
The most significant insight came from breaking down the cycle time improvements by phase: while the initial automation efforts produced quick wins, the team culture changes around WIP limits and requirement clarity delivered the most substantial long-term benefits.
This example demonstrates that effective cycle time optimization requires both technical solutions and process refinements. The fintech company continues to monitor its metrics, making incremental improvements that maintain their enhanced velocity without sacrificing quality or team wellbeing.
Conclusion
Issue cycle time directly impacts development velocity and team productivity. By tracking and optimizing this metric, teams can deliver value faster.
Typo's real-time issue tracking combined with AI-powered insights automates improvement detection and suggests targeted optimizations. Our platform allows teams to maintain optimal cycle times while reducing manual overhead.
In today's fast-paced software development world, tracking progress and understanding project dynamics is crucial. GitHub Analytics transforms raw data from repositories into actionable intelligence, offering insights that enable teams to optimize workflows, enhance collaboration, and improve software delivery. This guide explores the core aspects of GitHub Analytics, from key metrics to best practices, helping you leverage data to drive informed decision-making.
Why GitHub Analytics Matters
GitHub Analytics provides invaluable insights into project activity, empowering developers and project managers to track performance, identify bottlenecks, and enhance productivity. Unlike generic analytics tools, GitHub Analytics focuses on software development-specific metrics such as commits, pull requests, issue tracking, and cycle time analysis. This targeted approach allows for a deeper understanding of development workflows and enables teams to make data-driven decisions that directly impact project success.
Understanding GitHub Analytics
GitHub Analytics encompasses a suite of metrics and tools that help developers assess repository activity and project health.
Key Components of GitHub Analytics:
Data and Process Hygiene: Establishing standardized workflows through consistent labeling, commit keywords, and issue tracking is paramount. This ensures data accuracy and facilitates meaningful analysis.
Real-World Example: A team standardizes issue labels (e.g., "bug," "feature," "enhancement," "documentation") to categorize issues effectively and track trends in different issue types.
Pulse and Contribution Tracking: Monitoring repository activity, including commit frequency, work distribution among team members, and overall activity trends.
Real-World Example: A team uses GitHub Analytics to identify periods of low activity, which might indicate potential roadblocks or demotivation, allowing them to proactively address the issue.
Team Performance Metrics: Analyzing key metrics like cycle time (the time taken to complete a piece of work), lead time for changes, and DORA metrics (Deployment Frequency, Change Failure Rate, Mean Time to Recovery, Lead Time for Changes) to identify inefficiencies and improve productivity.
Real-World Example: A team uses DORA metrics to track deployment frequency and identify areas for improvement in their continuous delivery pipeline, leading to faster releases and reduced time to market.
GitHub Analytics vs. Other Analytics Tools
While other analytics platforms focus on user behavior or application performance, GitHub Analytics specifically tracks code contributions, repository health, and team collaboration, making it an indispensable tool for software development teams. This focus on development-specific data provides unique insights that are not readily available from generic analytics platforms.
Role of GitHub Analytics in Project Management
Performance Monitoring: Analytics provide real-time visibility into how and when contributions are made, enabling project managers to track progress against milestones and identify potential delays.
Real-World Example: A project manager uses GitHub Analytics to track the progress of critical features and identify any potential bottlenecks that might impact the project timeline.
Resource Allocation: Data-driven insights from GitHub Analytics help optimize resource allocation, ensuring that team members are working on the most impactful tasks and that their skills are effectively utilized.
Real-World Example: A project manager analyzes team member contributions and identifies areas where specific skillsets are lacking, informing decisions on hiring or training.
Quality Assurance: Identifying recurring issues, analyzing code review comments, and tracking bug trends helps teams proactively refine processes, improve code quality, and reduce the number of defects.
Real-World Example: A team analyzes code review comments to identify common code quality issues and implement best practices to prevent them in the future.
Strategic Planning: Historical project data, including past performance metrics, successful strategies, and areas for improvement, informs future roadmaps, enabling teams to predict and mitigate potential risks.
Real-World Example: A team analyzes past project data to identify trends in development velocity and predict future project timelines more accurately.
Getting Started with GitHub Analytics
Accessing GitHub Analytics:
Connect Your GitHub Account: Integrate analytics tools via GitHub settings or utilize GitHub's built-in insights.
Use GitHub's Built-in Insights: Access repository insights to track contributions, trends, and identify areas for improvement.
Customize Your Dashboard: Set up personalized views with relevant KPIs (Key Performance Indicators) that are most important to your team and project goals.
Navigating GitHub Analytics:
Real-Time Dashboards: Monitor KPIs such as deployment frequency and failure rates in real-time to gain immediate insights into project health.
Filtering Data: Focus on relevant insights using custom filters based on time frames, contributors, issue labels, and other criteria.
Multi-Repository Monitoring: Track multiple projects from a single dashboard to gain a comprehensive overview of team performance across different initiatives.
Configuring GitHub Analytics for Efficiency:
Customize Dashboard Templates: Create and save custom dashboard templates for different projects or teams to streamline analysis and reporting.
Optimize Data Insights: Aggregate pull requests, issues, and commits to generate meaningful reports and identify trends.
Foster Collaboration: Share dashboards with the entire team to promote transparency, foster a data-driven culture, and encourage open discussion around project performance.
Key GitHub Analytics Metrics
Software Development Cycle Time Metrics:
Coding Time: Duration from the start of development to when the code is ready for review.
Review Time: Measures the efficiency of collaboration in code reviews, indicating potential bottlenecks or areas for improvement in the review process.
Merge Time: Time taken from the completion of the code review to the integration of the code into the main branch.
Software Delivery Speed Metrics:
Average Pull Request Size: Tracks the scope of merged pull requests, providing insights into the team's approach to code changes and identifying potential areas for improvement in code modularity.
Deployment Frequency: How often changes are deployed to production.
Change Failure Rate: Percentage of deployments that result in failures.
Lead Time for Changes: The time it takes to go from code commit to code in production.
Mean Time to Recovery: The average time it takes to restore service after a deployment failure.
Issue Queue Time: Measures how long issues remain unaddressed, highlighting potential delays in issue resolution and potential impacts on project progress.
Overdue Items: Tracks tasks that exceed their expected completion times, identifying potential bottlenecks and areas for improvement in project planning and execution.
Process Quality and Compliance Metrics:
Bug Lead Time for Changes (BLTC): Tracks the speed of bug resolution, providing insights into the team's responsiveness to and efficiency in addressing defects.
Raised Bugs Tracker (RBT): Monitors the frequency of bug identification, highlighting areas where improvements in code quality and testing can be made.
Pull Request Review Ratio (PRRR): Ensures adequate peer review coverage for all code changes, promoting code quality and knowledge sharing within the team.
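Most of these metrics reduce to simple counting once the underlying events are exported. As a minimal sketch, the snippet below computes deployment frequency, change failure rate, and a pull request review ratio from plain event records; the record shapes and numbers are illustrative, not any particular tool's export format.

```python
deployments = [  # illustrative export: one record per production deploy
    {"day": "2025-03-01", "failed": False},
    {"day": "2025-03-02", "failed": True},
    {"day": "2025-03-04", "failed": False},
    {"day": "2025-03-05", "failed": False},
]
pull_requests = [  # illustrative: merged PRs and whether they were reviewed
    {"id": 101, "reviewed": True},
    {"id": 102, "reviewed": True},
    {"id": 103, "reviewed": False},
]

days_in_window = 7  # length of the reporting window
deploy_frequency = len(deployments) / days_in_window
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
review_ratio = sum(p["reviewed"] for p in pull_requests) / len(pull_requests)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Change failure rate:  {change_failure_rate:.0%}")
print(f"PR review ratio:      {review_ratio:.0%}")
```

The hard part in practice is not the arithmetic but the data hygiene discussed earlier: consistent labeling and linking are what make these counts trustworthy.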
Best Practices for Monitoring and Improving Performance
Regular Analytics Reviews:
Scheduled Checks: Conduct weekly or bi-weekly reviews of key metrics to track progress toward project goals and identify any emerging issues.
Sprint Planning Integration: Incorporate GitHub Analytics data into sprint planning meetings to refine sprint objectives, allocate resources effectively, and make data-driven decisions about scope and priorities.
CI/CD Monitoring: Track deployment success rates and identify areas for improvement in the continuous integration and continuous delivery pipeline.
Encouraging Team Engagement:
Open Data Access: Promote transparency by sharing analytics dashboards and reports with the entire team, fostering a shared understanding of project performance.
Training on Analytics: Provide training to team members on how to effectively interpret and utilize GitHub Analytics data to make informed decisions.
Recognition Based on Metrics: Acknowledge and reward team members and teams for achieving positive performance outcomes as measured by key metrics.
Unlocking the Potential of GitHub Analytics
GitHub Analytics tools like Typo are powerful tools for software teams, providing critical insights into development performance, collaboration, and project health. By embracing these analytics, teams can streamline workflows, enhance software quality, improve team communication, and make informed, data-driven decisions that ultimately lead to greater project success.
What is GitHub Analytics?
A toolset that provides insights into repository activity, collaboration, and project performance.
How does GitHub Analytics support project management?
It helps monitor team performance, allocate resources effectively, identify inefficiencies, and make data-driven decisions to improve project outcomes.
Can GitHub Analytics be customized?
Yes, users can tailor dashboards, select specific metrics, and configure reports to meet their unique needs and project requirements.
What key metrics are available?
Key metrics include development cycle time metrics, software delivery speed metrics (including DORA metrics), and process quality and compliance metrics.
Can analytics improve code quality?
Yes, by tracking bug reports, analyzing code review trends, and identifying recurring issues, teams can proactively address code quality concerns and implement strategies for improvement.
Can GitHub Analytics help manage technical debt?
Absolutely. By monitoring changes, identifying areas needing improvement, and tracking the impact of technical debt on development velocity, teams can strategically address technical debt and maintain a healthy codebase.
Achieving engineering excellence isn’t just about clean code or high velocity. It’s about how engineering drives business outcomes.
Every CTO and engineering manager knows the importance of metrics like cycle time, deployment frequency, and mean time to recovery. These numbers are crucial for gauging team performance and delivery efficiency.
But here’s the challenge: converting these metrics into language that resonates in the boardroom.
In this blog, we share how to make these numbers understandable and compelling in the boardroom.
What are Engineering Metrics?
Engineering metrics are quantifiable measures that assess various aspects of software development processes. They provide insights into team efficiency, software quality, and delivery speed.
Some believe that engineering productivity can be effectively measured through data. Others argue that metrics oversimplify the complexity of high-performing teams.
While the topic is controversial, the focus of metrics in the boardroom is different.
In the board meeting, these metrics are a means to show that the team is delivering value, that engineering operations are efficient, and that the company's investments are justified.
Challenges in Communicating Engineering Metrics to the Board
Communicating engineering metrics to the board isn’t always easy. Here are some common hurdles you might face:
1. The Language Barrier
Engineering metrics often rely on technical terms like “cycle time” or “MTTR” (mean time to recovery). To someone outside the tech domain, these might mean little.
For example, discussing “code coverage” without tying it to reduced defect rates and faster releases can leave board members disengaged.
The challenge is conveying these technical terms into business language—terms that resonate with growth, revenue, and strategic impact.
2. Data Overload
Engineering teams track countless metrics, from pull request volumes to production incidents. While this is valuable internally, presenting too much data in board meetings can overwhelm your board members.
A cluttered slide deck filled with metrics risks diluting your message. These granular-level operational details are for managers to take care of the team. The board members, however, care about the bigger picture.
3. Misalignment with Business Goals
Metrics without context can feel irrelevant. For example, sharing deployment frequency might seem insignificant unless you explain how it accelerates time-to-market.
Aligning metrics with business priorities, like reducing churn or scaling efficiently, ensures the board sees their true value.
Key Metrics CTOs Should Highlight in the Boardroom
Before we go on to solve the above-mentioned challenges, let’s talk about the five key categories of metrics one should be mapping:
1. Investments
These metrics show how engineering spend is allocated and justified.
R&D Spend as a Percentage of Revenue: Tracks how much is invested in engineering relative to the company's revenue. Demonstrates commitment to innovation.
CapEx vs. OpEx Ratio: This shows the balance between long-term investments (e.g., infrastructure) and ongoing operational costs.
Allocation by Initiative: Shows how engineering time and money are split between new product development, maintenance, and technical debt.
2. Deliverables
These metrics focus on the team’s output and alignment with business goals.
Feature Throughput: Tracks the number of features delivered within a timeframe. The higher it is, the happier the board.
Roadmap Completion Rate: Measures how much of the planned roadmap was delivered on time. Gives predictability to your fellow board members.
Time-to-Market: Tracks the duration from idea inception to product delivery. It has a huge impact on competitive advantage.
3. Quality
Metrics in this category emphasize the reliability and performance of engineering outputs.
Defect Density: Measures the number of defects per unit of code. Indicates code quality.
Customer-Reported Incidents: Tracks issues reported by customers. Board members use it to get an idea of the end-user experience.
Uptime/Availability: Monitors system reliability. Tied directly to customer satisfaction and trust.
4. Delivery & Operations
These metrics focus on engineering efficiency and operational stability.
Deployment Frequency: Tracks how often code is deployed. Reflects agility and responsiveness.
Mean Time to Recovery (MTTR): Measures how quickly issues are resolved. Impacts customer trust and operational stability.
5. People & Recruiting
These metrics highlight team growth, engagement, and retention.
Offer Acceptance Rate: Tracks how many job offers are accepted. Reflects employer appeal.
Attrition Rate: Measures employee turnover. High attrition signals team instability.
Employee Satisfaction (e.g., via surveys): Gauges team morale and engagement. Impacts productivity and retention.
By focusing on these categories, you can show the board how engineering contributes to your company's growth.
Tools for Tracking and Presenting Engineering Metrics
Here are three tools that can help CTOs streamline the process and ensure their message resonates in the boardroom:
1. Typo
Typo is an AI-powered platform designed to amplify engineering productivity. It unifies data from your software development lifecycle (SDLC) into a single platform, offering deep visibility and actionable insights.
Key Features:
Real-time SDLC visibility to identify blockers and predict sprint delays.
Automated code reviews to analyze pull requests, identify issues, and suggest fixes.
DORA and SDLC metrics dashboards for tracking deployment frequency, cycle time, and other critical metrics.
Developer experience insights to benchmark productivity and improve team morale.
2. Data Visualization Tools
For customizable data visualization, tools like Tableau or Looker are invaluable. They allow you to create dashboards that present engineering metrics in an easy-to-digest format. With these, you can highlight trends, focus on key metrics, and connect them to business outcomes effectively.
3. Slide Decks
Slide decks remain a classic tool for boardroom presentations. Summarize key takeaways, use simple visuals, and focus on the business impact of metrics. A clear, concise deck ensures your message stays sharp and engaging.
Best Practices for CTOs Presenting Engineering Metrics to the Board
Presenting engineering metrics to the board is less about the data itself and more about delivering a narrative that connects engineering performance to business goals.
Here are some best practices to follow:
1. Educate the Board About Metrics
Start by offering a brief overview of key metrics like DORA metrics. Explain how these metrics—deployment frequency, MTTR, etc.—drive business outcomes such as faster product delivery or increased customer satisfaction. Always include trends and real-world examples. For example, show how improving cycle time has accelerated a recent product launch.
2. Align Metrics with Investment Decisions
Tie metrics directly to budgetary impact. For example, show how allocating additional funds for DevOps could reduce MTTR by 20%, which could lead to faster recoveries and an estimated Y% revenue boost. You must include context and recommendations so the board understands both the problem and the solution.
3. Highlight Actionable Insights
Data alone isn’t enough. Share actionable takeaways. For example: “To reduce MTTR by 20%, we recommend investing in observability tools and expanding on-call rotations.” Use concise slides with 5-7 metrics max, supported by simple and consistent visualizations.
4. Emphasize Strategic Value
Position engineering as a business enabler. You should show its role in driving innovation, increasing market share, and maintaining competitive advantage. For example, connect your team’s efforts in improving system uptime to better customer retention.
5. Tailor Your Communication Style
Understand your board member’s technical understanding and priorities. Begin with business impact, then dive into the technical details. Use clear charts (e.g., trend lines, bar graphs) and executive summaries to convey your message. Tell stories behind the numbers to make them relatable.
Conclusion
Engineering metrics are more than numbers—they’re a bridge between technical performance and business outcomes. Focus on metrics that resonate with the board and align them with strategic goals.
When done right, your metrics can show how engineering is at the core of value and growth.
Webinar: Unlocking Engineering Productivity with Ariel Pérez & Cesar Rodriguez
January 29, 2025
•
59 min read
In the second session of the 'Unlocking Engineering Productivity' webinar by Typo, host Kovid Batra engages engineering leaders Cesar Rodriguez and Ariel Pérez in a conversation about building high-performing development teams.
Cesar, VP of Engineering at StackGen, shares insights on ingraining curiosity and the significance of documentation and testing. Ariel, Head of Product and Technology at Tinybird, emphasizes the importance of clear communication, collaboration, and the role of AI in enhancing productivity. The panel discusses overcoming common productivity misconceptions, addressing burnout, and implementing effective metrics to drive team performance. Through practical examples and personal anecdotes, the session offers valuable strategies for fostering a productive engineering culture.
Timestamps
00:00 — Introduction
01:14 — Childhood Stories and Personal Insights
04:22 — Defining Engineering Productivity
10:27 — High-Performing Teams and Data-Driven Decisions
16:03 — Counterintuitive Lessons in Leadership
22:36 — Navigating New Leadership Roles
31:47 — Measuring Impact and Outcomes in Engineering
32:13 — North Star Metrics and Customer Value
32:53 — DORA Metrics and Engineering Efficiency
33:30 — Learning from Customer Behavior and Feedback
35:19 — Scaling Engineering Teams and Productivity
39:34 — Implementing Metrics and Tools for Team Performance
41:01 — Qualitative Feedback and Customer-Centric Metrics
46:37 — Q&A Session: Addressing Audience Questions
58:47 — Concluding Thoughts on Engineering Leadership
Kovid Batra: Hi everyone, welcome to the second webinar session of Unlocking Engineering Productivity by Typo. I’m your host, Kovid, excited to bring you this all-new webinar series, where we invite passionate engineering leaders to talk about building impactful dev teams and unlocking success. For today’s panel, we have two special guests. One of them is our Typo champion customer; he’s VP of Engineering at StackGen. Welcome to the show, Cesar.
Cesar Rodriguez: Hey, Kovid. Thanks for having me.
Kovid Batra: And then we have Ariel, who is a longtime friend and the Head of Product and Technology at Tinybird. Welcome. Welcome to the show, Ariel.
Ariel Pérez: Hey, Kovid. Thank you for having me again. It’s great chatting with you one more time.
Kovid Batra: Same here. Pleasure. Alright, so, Cesar has been with us for more than a year now. He’s a guy who’s passionate about spending quality time with his kids, and he’s into cooking, barbecue, all that we know about him. But, Cesar, is there anything else that you would like to tell us about yourself so that the audience knows you a little more? Something from your childhood, something from your teenage years? This is kind of a ritual of our show.
Cesar Rodriguez: Yeah. So, let me think about this. Something from my childhood: I had the blessing of having my great-grandmother alive when I was a kid. And she always gave me all sorts of food to try. Something she always said to me is, “Hey, don’t say no to me when I’m offering you food.” And that stayed in my brain till now. Now that I’m a grown-up, I’m always trying new things. If there’s an opportunity to try something new, I always want to try it out and see how it is.
Kovid Batra: That’s really, really interesting. Ariel, I’m sure you also have something similar from your childhood or teenage years that you would like to share, something that defines who you are today.
Ariel Pérez: Yeah, definitely. Cesar’s story reminded me that I was also very lucky to have a great-grandmother and a great-grandfather alive, and I got to interact with them quite a bit, so I have amazing memories of speaking to them. It was great that you mentioned that. But in terms of what really impacted me from my childhood and helped shape the person I am today: it was very important for my father, who owned a small business in Washington Heights in New York City, to instill in us very early on the idea that you’ve got to work, you’ve got to earn things. You’ve got to work for things; money doesn’t just suddenly appear. From the time I was 10 years old, I was working with my father on weekends. At first it was a few hours here and there, but as I got older, through my teenage years, I spent a lot more time working there and eventually running my father’s business, which was great as a teenager. When you think about what that taught me for life, obviously there’s the lesson that you’ve got to work for things; nothing’s given to you. But I also learned very early on about entrepreneurship: how hard it is, and why people go into it. It taught me skills around management, managing people, accounting, bookkeeping. But the most important thing it taught me is dealing with people and working with people. It was a retail business, so I had to deal with customers day in and day out. It was a very important piece of understanding customers’ needs, wants, and problems, and how, from my position in the business, I could serve them and help them achieve their goals. A very important skill to learn, all before I even went to college.
Kovid Batra: That’s really interesting. Cesar has that ingrained curiosity to try new things, and from your childhood, Ariel, you got that feeling for building a business and serving customers; that is ingrained in you both. Really, really interesting traits that you have carried from your childhoods. Great, guys. Thank you so much for this quick, sweet intro. So, coming to today’s main section, which is about unlocking engineering productivity, and specifically today’s theme: building a data-driven mindset around it. Before we deep dive into the experiences you have had in your leadership journeys, I would first like to ask: when we talk about engineering productivity or developer productivity, what exactly comes to your mind? Let’s start with the very basic, fundamental thing. Ariel, would you like to take it first?
Ariel Pérez: Absolutely. Um, the first thing that comes to mind is unfortunate: it’s the negative connotation around developer productivity. And that’s primarily because for so long, organizations have been trying to figure out, “How do I measure the productivity of these software developers, software engineers, who are one of my most expensive resources (and I hate the word ‘resource’; we’re talking about people), because I need to justify my spend on them. And I don’t know what they do. I don’t understand what they do. And I’ve got to figure out a way to measure them, cause I measure everyone else.” If you think about the history of doing this: for a while, we were trying to measure lines of code, right? We know we don’t do that. Then we tried to measure commits. No, we know we don’t do that either. So I think, unfortunately, the term ‘developer productivity’ brings so many negative associations because of how wrong we’ve gotten it for so long. However, I am always the eternal optimist. And I also understand why businesses have been trying to measure this, right? All these things are inputs into the business, and you build a business to deliver value, and you want to understand how to optimize those inputs. With people, and a particular skill set of people, you want to figure out how to best understand, retain, and manage the best people and get the most value out of them. The thing is, we’ve gotten it wrong so many times trying to figure it out. And some of my peers who discuss this with me regularly might bash me for this, but I think DORA was one good step in that direction, even though there are many things it’s missing. I think it leans very heavily on efficiency, but I’ll leave that as is. I believe in the people behind it, the research, and how they backed it. I think the next iteration, SPACE, moved this closer and recognized that there are a lot of qualitative aspects we need to care about and think about. Um, then McKinsey came and destroyed everything, uh, unfortunately, with their one metric to rule them all, and all hell broke loose. But there’s a realization that look, we, as an industry, as a role, as a type of work that we do, need to figure out how we define this so that we can, not necessarily justify our existence, but think about: how do we add value to each business? How do we define and figure out a better way to continually measure how we add value to a business? So we can optimize for that and continually show that, hey, you actually can’t live without us; we’re actually the most important part of your business. Not to demean any other roles, right? But as software engineers, in a world where software is eating the world and has eaten the world, we’re going to figure out how we actually define the value that we deliver. It’s a problem that we have to tackle. I don’t think we’re there yet. At some point in this conversation, I think we’ll talk about the latest iteration of this, which is the Core 4, which is being talked about now. I think there are many positive aspects. I still think it’s missing pieces. I think we’re getting closer.
But it’s a problem we need to solve, just not with a hammer or a cudgel to push and drive individual developers to do more and more activity. That’s the key piece that I will never accept as a leader thinking about developer productivity.
Kovid Batra: Great, I think that’s really a good overview of how things are when we talk about productivity. Cesar, do you have a take on that? What comes to your mind when we talk about engineering and developer productivity?
Cesar Rodriguez: I think what Ariel mentioned resonates a lot with me because, um, I remember when we were first starting in the industry, everything was seen narrowly as how many lines of code a developer can write, how many tickets they can close. But true productivity is about enabling engineers to solve meaningful problems efficiently and ensuring that those problems have business impact. So from my perspective, and I like the way you wrote the title for this talk, developer (slash) engineering: when I think about developer productivity, that brings to my mind the individual view. What do your individual metrics look like? How efficiently can you write code? How quickly can you resolve issues? How do you contribute to the product lifecycle? And when you think about engineering metrics, that’s a broader view. It’s about how your team is collaborating, what your processes for delivering are, how resilient your system is, and how you deliver outcomes that are impactful to the business itself. So I agree with Ariel. Everything has to be measured in terms of the impact you’re going to have on the business, because if you can’t tie that together, then what you’re measuring is completely wrong.
Kovid Batra: Yeah, totally. I agree with that too. When we talk about engineering and developer productivity, engineering productivity encompasses everything. We never say it’s bad to look at individual or developer productivity, but we need to look at it as a whole and tie it to impact, not just measure lines of code or metrics like that. Done that way, it definitely makes sense: it helps measure the real impact and surface real improvement areas from the KPIs and metrics we’re looking at. So, very well said, both of you. Before I jump to the next piece: I’m sure you guys have worked with high-performing engineering teams, and Ariel, you shared a view of what people generally think about productivity. I really want to understand the best teams you have worked with. What’s their perception of productivity? How do they take a data-driven approach when making decisions, prioritizing whatever comes their way, or finding improvement areas? How exactly do these high-performing teams operate? Any experiences you would like to share?
Ariel Pérez: Uh, Cesar, do you want to start?
Cesar Rodriguez: Sure. Um, from my perspective, the first thing I’ve observed on high-performing teams is that there is great alignment between individual goals and what the business is trying to achieve. The interests align very well, so people are highly motivated. They’re having fun when they’re working, and even outside work hours, they’re thinking about how to solve the problem they’re working on, and having fun while doing it. That’s one of the first things I’ve observed. The other thing is how they use data to inform decisions: high-performing teams consistently use data to refine processes. They identify blockers early and then use that to prioritize effectively. So I think it all ties back to the culture of the team itself. With high-performing teams, you have an open culture where people are able to speak about issues; from the most junior engineer to the most senior, everyone is treated equally. And when people have that environment, where they can share their struggles and issues and quickly collaborate to solve them, that for me is the biggest thing about being high-performing as a team.
Kovid Batra: Makes sense.
Ariel Pérez: Awesome. Um, to add to that, I 1000% agree with the things you just mentioned, and a few words come to mind to describe some of them. What is special about a high-performing team? One key piece is a massive amount of intrinsic motivation, going back to Daniel Pink, right? Those teams feel autonomy. They get to drive decisions. They get to make decisions. They get to, in many ways, own their destiny. Mastery is a critical thing: these folks are given the opportunity to improve their craft and become better and better engineers while they’re doing it. It’s not a fight between ‘should I fix this thing’ versus ‘should I build this feature’, since they have autonomy. They guide and drive their own agenda and move themselves forward. They also know when to decide, ‘I need to spend more time building this skill together as a team’ versus ‘we’re going to build this feature’; they know how to find that balance between the two. They’re constantly becoming better craftsmen, better engineers, better developers across every dimension, and better people who understand customer problems. That’s a critical piece we often miss in an engineering team: becoming better at how they do what they do. And purpose. They’re aligned with the mission of the company. They understand why we do what we do. They understand what problem we’re solving, what we sell, how we sell it, whose problems we solve, how we deliver value, and they’re bought in. All those key things you see in high-performing teams are the major things that make them high-performing.
The other thing, sticking closer to hard data and numbers: these are folks who are continually improving. They think about what’s not working, what’s working, what we should do more of, what we should do less of. I forget who said this, but they know how to turn up the good. Whether you run retros, or just have a conversation every day about what was good today and what sucked, they have continuous conversations about what’s working and what’s not, and they continually refine and adjust. That’s a critical thing I see in high-performing teams. And to button it up: high-performing teams collaborate. They don’t cooperate, they collaborate. That’s a distinction we often miss. They work together on their problems, which is one of those key things that allows them to like each other, work well with each other, want to go hang out and play games after work together, because they depend on each other. These people are shoulder to shoulder every day, and they work on problems together. That helps them not only know that they can trust each other and depend on each other, but they learn from each other day in and day out. And that’s part of what makes it a fun team to work on, because they’re constantly challenging each other and pushing each other because of that collaboration. To me, collaboration means two or three people working on the same problem at the same time, synchronously. It’s not three people splitting a problem, going off on their own, and then coming back together. Basically, team-based collaboration, working together in real time, versus individual work pulled together at the end. That’s another key aspect I’ve often seen in high-performing teams. Not saying the other ways can’t exist in a high-performing team, but more often than not, this is what I see.
Kovid Batra: Perfect. Perfect. Great, guys. And in your journeys, there must have been a lot of experiences, but are there any counterintuitive things that you realized later on, maybe after making some mistakes or watching other people do something else? Anything counterintuitive you learned over time about improving your team’s productivity?
Ariel Pérez: Um, I’ll take this one first. I don’t know if this is counterintuitive, but it’s something you learn as you become a leader: you can’t tell people what to do, especially if they’re high-performing. Even if you know better, you can’t tell them what to do. So unfortunately, you cannot lead by edict. You can do that and get away with it for a short period of time; there’s wartime versus peacetime, people talk about that. But in reality, it needs to come from them. It needs to be intrinsic. They’re going to have to be the ones who want to improve. So in that world, what do you do as a leader? Every time I’ve told a team, ‘do this, go do this,’ they hated me for it. Even if I turned out to be right in the end, and they eventually saw it, there was a lot of turmoil, a lot of fights, a lot of issues, and some attrition because of it. And even though it came from a good purpose and desire, ‘let me go faster, we’ve got to get this done,’ it needs to come from the team. So I definitely learned that. It might seem counterintuitive: you’re the boss, you get to tell people what to do. No, actually, that’s not how it works, right? You have to inspire them, guide them, drive them, give them the tools, the training, the education, the desire and want for how to get there, and have them very involved in ‘what should we do, how do we improve.’ You can throw in ideas, but it needs to come from them. If there’s anything else counterintuitive I’d throw in about improving engineering productivity, it’s this: from an accounting perspective, there’s just no way in hell that two engineers working on one problem is better than one. There’s no way that’s more productive; they’re going to get half the work done. That’s the intuition if you think of engineers as mere inputs and resources. But in reality, they’re people, and software development is a team sport. As a matter of fact, if two engineers work together in real time, or god forbid three, four, or five if you’re ensemble programming, you actually find that you get more done. You get more done because things need to get reworked less. Things are of higher quality. The team learns more, learns faster. So at the end of the day, while it might feel slow, slow is smooth and smooth is fast. They get more throughput and more quality and deliver more things, because they’re spending less time going back and fixing and reworking what they were doing. And the work always continues, because no one person slows it down. So that’s the other counterintuitive thing I learned about improving and increasing productivity: you cannot look at just productivity; you need to look at productivity, efficiency, and effectiveness if you really want to move forward.
Kovid Batra: Makes sense. I think in the last few years in this industry, I have also developed a liking for pair programming, and that aligns with what you have just said. So I’m in for that. Great. Cesar, do you have any learnings which were counterintuitive or interesting that you would like to share?
Cesar Rodriguez: Oh, and this goes back to the developer versus engineering conversation and question. Something counterintuitive about productivity is that it doesn’t mean you’re going to be busy. It doesn’t mean you’re just going to write your code and finish tickets. It means, and if there are any developers listening to this, they’re probably going to hate me: you’re going to take your time to plan. You’re going to take your time to reflect and document and test. We’ve seen this even at StackGen. In the beginning, we were just trying to meet customer demands and, unfortunately, didn’t spend much time testing. But last quarter we made a concerted effort to improve our automated tests: hey, let’s test all of our happy paths, let’s have automated tests for all of that, let’s make sure we can build everything in our pipelines as best as possible. And our deployment frequency metrics skyrocketed. So that’s one of the counterintuitive things: doing the boring stuff. It’s going to be boring, but it’s going to speed you up.
Ariel Pérez: Yeah, and if I can add one more thing on that, it’s something critical that many people forget, not only engineers and engineering leadership, but also your business peers: we forget that the initial piece of building software is just a tiny fraction of the cost. It’s the lifetime of iterating, maintaining, managing, and building upon it; that’s where all the cost is. So unfortunately, when we’re trying to cut corners, we often cut the very things that make that ongoing cost cheaper. And you’re right, investing in that testing upfront might seem painful, but it keeps the burn reasonable, so every new feature costs a reasonable amount. If you don’t invest in that, every new feature gets more expensive, and you’re actually a whole lot less productive over time.
Cesar Rodriguez: And it affects everything else. If you’re trying to onboard somebody new, it’ll take more time because you didn’t document, you didn’t test. So your cost of onboarding new people is going to be higher. Your cost of adding new features is going to be higher. So yeah, a hundred percent.
Kovid Batra: Totally. I think, Cesar, documentation and testing, people hate them, but that’s the truth for sure. Great, guys. There is more to learn on this journey and I have a lot more questions, and I’m sure the audience does too. So I would request the audience to put their questions in the comment section right now, so that when we get to the Q&A at the end, we’ll have all the questions sorted and can take them one by one. Okay. As I said, a lot of learning and unlearning is going to happen, but let’s talk about some of your specific experiences and pick up some practical tips. Coming to you, Ariel: you have recently moved into this leadership role at Tinybird. Congratulations, first of all.
Ariel Pérez: Thank you.
Kovid Batra: And I’m sure this comes with a lot of responsibility when you enter a new environment. It’s not just a new thing you’re going to work on; it’s a whole new set of people. I’m sure you have seen that in your career multiple times, but every time you step in, you’re a new person there, and of course, you’re coming in as a leader; it can be overwhelming, right? How do you manage that situation? How do you start off? How do you pull it off so that you’re actually able to lead and drive the impact you really want?
Ariel Pérez: Got it. Um, the first part may sound like fluff, but it really helps: when you have a really big challenge ahead, you have to figure out how to avoid letting imposter syndrome freeze you. Even if you’ve had a career of success, imposter syndrome still creeps up, right? So how do I fight that? It’s one of those things: stand in front of the mirror, take really deep breaths, and remind yourself, “I got this job for a reason. They’re trusting me for a reason. I got here. I earned this. Here’s my track record. I worked for this. I deserve to be here. I’m supposed to be here.” I think that’s a very critical piece for any new leader, especially a new leader in a new place, because you have so much novelty left and right. You have to prove yourself, and that’s very daunting. So the first piece is figuring out how to get out of your own head, push yourself along, and coach yourself: I’m supposed to be here, right? Once you have that down pat, it really helps change your own mindset and framing. When you’re walking into conversations, walking into rooms, there’s a big piece of how that confidence shines through. That confidence helps you speak and get your ideas and thoughts out without tripping all over yourself. That confidence helps you not worry about potentially ruffling some feathers and having hard conversations. When you’re in leadership, you have to have hard conversations, and it’s really important to have that confidence, obviously without overdoing it and running over everybody, cause that’s not what it means. It just means you’ve got to get over the piece that freezes you and stops you. That’s the first piece. The second piece, especially when moving higher and higher into positions of leadership, is listening. Listening is the biggest thing you do. You might have a million ideas; hold them back, please hold them back. And that’s really hard for me. It’s so hard, cause I’m like, “I see that, I can fix that. I can fix that too. I’ve seen that before, I can fix it too.” But you earn more respect by listening and observing. And you might actually learn a thing or two: “Oh, that thing I wanted to fix, there’s a reason why it’s the way it is.” Because every place is different. Every place has a different history, a different context, a different culture, and all those things come into play as to why certain decisions were made that might seem contrary to what you would have done. And it helps you understand that context. That context is critical, not only to figure out the appropriate solution to the problem, but also because during that time, while you’re learning and listening and talking to people, you’re building relationships, you’re connecting with people, you’re understanding the players, who does well, who doesn’t do well, where all the bodies are buried, the strategy, the big picture of everything. So then, when it comes time to implement change, you have a really good sense of who are the people that are going to help me make the change, who are the people that are going to be challenging, and how do I draw up a plan to do change management, which is a big, important thing. Change management is huge.
It’s 90% people. So you need to understand the people, and that time also lets you understand the business strategy, the context, the big problem, where you’re going to be most effective, why you got hired. Then you implement the things that help you execute on what you believe is the right strategy, based on learning and listening and keeping your mouth shut for a time, right? Now, traditionally, you’ll hear this thing about 90 days. I think 90 days is overly generous; it leans and skews toward bigger, slower-moving places. When you join a startup environment, a smaller company, you need to be able to move faster. You don’t have 90 days to make decisions. You might have 30 days, right? You want to push that back as far as you can to get appropriate context, but there’s a bias for action, reasonably so, because you’re not guaranteed that the startup is going to be there tomorrow. So you don’t have 90 days, but you definitely don’t want to do it in two weeks, and you probably shouldn’t start changing things within a month.
Kovid Batra: Makes sense. Makes sense. So, a follow-up question on that. When you get into this position at a startup, let’s say you get 30 to 45 days, and then, because of that bias towards action, you pick up initiatives that you want to lead to create that impact. In your journey at Tinybird, have you picked up anything interesting, maybe related to AI, or maybe revamping the existing code base with different teams? What have you picked up, and why?
Ariel Pérez: Yeah, a bunch of stuff. Um, when I first joined Tinybird, my first role was Field CTO, which is a role that takes on the external-facing responsibilities of the CTO. So I was focused primarily on the market, on customers, on prospects. One of the first initiatives I had there was: how do we operate within the sales engineering team, which was also reporting to me, and make it much more effective and efficient? A few of the things we were considering were AI-based and GenAI-based solutions to help us find the information we need earlier, sooner, faster. That was more of an optimization and efficiency play: helping us gather and clarify requirements from customers and very quickly figure out, this is the right demo for you, these are the right features and capabilities for you, here’s what we can do, here’s what we can’t do; getting really effective and efficient at that. Moving into a product and engineering role, though, in terms of the latest initiatives I’ve picked up, there are two big themes. One of them is that Tinybird must always work. Which sounds like, yeah, well, duh, obviously it must always work, but there’s a key piece underpinning that. Number one, stability and reliability are huge and required for trust from customers wanting to use you as a dev tool; you need to be able to depend on it. But there’s another piece: anything I try to do on the platform must fail in a way that I understand and expect, so that I can self-serve and fix it. So that idea of ‘Tinybird always works’ that I’ve been picking up projects around is about transparency, observability, and the ability for customers to self-serve and resolve issues simply by saying, “I need more resources.” And that’s a very challenging thing, because we’ve got to remove all the errors that have nothing to do with that, all the instability and reliability problems, so those are a given. What remains should only be issues like: hey, customer, you can solve this by managing your limits; you can solve this by increasing the cores you’re using; you can solve this by adding more memory. That should be the only thing that remains. So we’re working on a bunch of stuff there: predicting whether something will fail, predicting whether something is going to run out of resources, very quickly identifying when you’re running out of resources. There’s almost an SRE monitoring and observability aspect to this, but turned back into a product solution. That’s one side of it. The other big piece would be called the developer’s experience. That’s something my peer is leading internally, and it’s a lot more about how developers develop today. Developers today develop locally; they prefer not depending on IO on a network. And every developer, whether they tell you yes or no, is using an AI assistant; every developer, right? Or 99% of developers. So the idea is, how do we weave that into the experience without making it a gimmick?
How do we weave an AI copilot into your development experience, your local development experience, your remote development experience, your UI development experience, so that you have this expert at your disposal to help you accelerate your development and your ability to find problems before you ship? And even when you ship, help you find the problems there, so you can accelerate those cycles, shorten those lead times, and get to a productive solution faster with fewer errors and fewer issues. So that’s one major piece we’re working on: embedding AI, and not just LLMs and GenAI, but also traditional, I say traditional, ML models for understanding and predicting whether something’s going to go wrong. We’re working on a lot of that kind of stuff to really accelerate developer productivity and engineering team productivity, and get you to ship value faster.
Kovid Batra: Makes sense. And when you’re doing this, are there any frameworks, tooling, or processes you’re using to measure this impact over the journey?
Ariel Pérez: Yeah, um, for this kind of stuff, I lean a lot more toward the outcomes side of the equation, in the whole question of outputs versus outcomes. But I do agree with John Cutler; I love listening to him, and he very recently published something saying, look, we can’t just look at outcomes, because unfortunately, outcomes are lagging. We need some leading indicators, so we need to look at not only outcomes but also outputs, at what goes in, at activity, though it can’t be the only thing we look at. So, number one: very recently, I started working with my team to create our North Star metric. How do we know that what we’re doing and solving for is delivering value for our customers? Is it linked to our strategy and our vision? Do we see a link to eventual revenue? Working with my teams, looking at our customers, understanding our data, we’ve come up with a North Star metric. We said, great, everything we do should move that number. If that number is moving up and to the right, we’re doing the right things. Now, looking at that alone is not enough, because especially as engineering teams, I have to work back and ask: how efficient are we at getting there? So I obviously look at the DORA metrics, because they help us find sources of issues, right? What’s our lead time? What’s our cycle time? What’s our deployment frequency? What’s our change failure rate? What’s our mean time to recover? Those are very critical to understanding: are we running a tip-top shop in terms of engineering? How good are we at shipping the next thing? Because it’s not just shipping things faster; if there’s a problem, I need to fix it really fast. And if I want to deliver value and learn, and this is the second piece, the one many companies fail at, I need to put it in the hands of customers sooner. That’s the efficiency piece, the outputs: are we getting really good at putting it in front of customers? But the second piece we need, independent of the North Star metric, is ‘and what happened’, right? Did it actually improve things? Did it make things worse? It’s optimizing for that learning loop on what our customers are doing. We’re tracking behavioral analytics: friction points, funnels. Where are they dropping off? Where are they spinning their wheels? We’re looking at heat maps. We’re looking at videos and screen shares of what the customer did, and why they aren’t doing what we thought they were going to do. Then, when we learn this, we go back to those really awesome DORA numbers, ship again, and iterate. So, to me, it’s a comprehensive view: are we getting really good at shipping, and are we getting really good at shipping the right thing? Mixing both of those, driven by the North Star metric: with all the stuff we’re doing, is the North Star moving up and to the right?
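To make the measurement side concrete, here is a minimal sketch of how the four DORA numbers Ariel rattles off could be computed from raw delivery data. The deployment and incident records below are invented for illustration; a real pipeline would pull these events from your CI/CD system or an engineering analytics tool rather than hardcoding them.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (deployed_at, first_commit_at, caused_failure)
deployments = [
    (datetime(2025, 3, 3, 10, 0), datetime(2025, 3, 2, 16, 0), False),
    (datetime(2025, 3, 4, 15, 0), datetime(2025, 3, 4, 9, 0), True),
    (datetime(2025, 3, 6, 11, 0), datetime(2025, 3, 5, 14, 0), False),
]
# Hypothetical incident records: (started_at, resolved_at)
incidents = [
    (datetime(2025, 3, 4, 15, 30), datetime(2025, 3, 4, 17, 0)),
]

WINDOW_DAYS = 7  # observation window for the records above

# Deployment frequency: deployments per day over the window.
deployment_frequency = len(deployments) / WINDOW_DAYS

# Lead time for changes: average commit-to-deploy time.
lead_time = sum((d - c for d, c, _ in deployments), timedelta()) / len(deployments)

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(failed for _, _, failed in deployments) / len(deployments)

# Mean time to recover: average incident duration.
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Average lead time:    {lead_time}")
print(f"Change failure rate:  {change_failure_rate:.0%}")
print(f"MTTR:                 {mttr}")
```

The point is less the arithmetic than the framing Ariel describes: these four numbers tell you how good you are at shipping, and the North Star metric tells you whether you shipped the right thing.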
Kovid Batra: Makes sense. Great. Thanks, Ariel. This was really, really insightful: from the point you enter as a leader, building that listening capability, having that confidence, driving the initiatives that are right and impactful, and then looking at metrics to ensure you’re moving in the right direction towards that North Star. To sum up, it was really nice and interesting. Cesar, coming to your experience: you have also had a good stint at StackGen, and you were mentioning a transition you took up successfully, expanding to multi-cloud infrastructure while growing your engineering team, right? I would like to deep dive into that. You specifically mentioned that the transition was really successful and that, through it, you were able to keep the focus and the productivity in place. How did things go for you? Let’s deep dive into that experience of yours.
Cesar Rodriguez: Yeah. So from my perspective, the goals you’re going to have for your team are going to be specific to where the business is at that point in time. For example, we started StackGen in 2023. Initially, we were a very small number of engineers trying to solve the initial problem we were tackling with StackGen, which is infrastructure from code: easily deploying cloud architecture into the cloud environment. So we focused on one cloud provider, one specific problem, with a handful of engineers. Once we started learning from customers what was working and what was not, and we started being pulled in different directions, we quickly learned that we needed to increase engineering capacity to support additional clouds and deliver additional features faster. Our clients were pulling us in different directions. So that required two things. One is hiring and scaling the team quickly; at the moment, we’re 22 engineers. And the second is enabling new team members to be as productive as possible from day zero. And this is where the boring actions come into play. First of all, making sure you have enough documentation so somebody can get up and running on day one and start doing pull requests on day one. Second, making sure you have clear expectations in terms of quality, what your happy path is, and how you can achieve it. And third, making sure everyone knows what is expected from them in terms of the metrics and the quality we’re looking for in their outcomes. This is something we use Typo for. For example, we have an international team, with people in India, Portugal, the US East Coast, and the US West Coast. One of the things we were getting stuck on early was that pull requests were getting opened, but then it took a really long time for people to review them, merge them, and get them deployed. So we established a metric, and we did this using Typo: if a pull request has been open more than 12 hours, let’s create an alert so that somebody can get on top of it. We don’t want somebody stuck for more than a working day waiting for a pull request review. And the other metric we look at is deployment frequency, and we’ve seen an uptick in that. Now that people are not getting stuck, we have a more frictionless SDLC, and we’re seeing collaboration between team members improve regardless of their time zone. So that’s something actionable that we’ve implemented.
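Cesar’s 12-hour rule comes built into tools like Typo, but the underlying check is simple to picture. Below is a rough sketch of what such an alert could look like if wired by hand against the GitHub REST API; the repository name, token, and Slack webhook are placeholders, and a production version would paginate, skip drafts, and run on a schedule.

```python
import os
from datetime import datetime, timezone

import requests

REPO = "your-org/your-repo"                       # placeholder repository
TOKEN = os.environ["GITHUB_TOKEN"]                # placeholder token
SLACK_WEBHOOK = os.environ["SLACK_WEBHOOK_URL"]   # placeholder webhook
MAX_AGE_HOURS = 12  # "no one waits more than a working day for review"

# List open pull requests via the GitHub REST API.
resp = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "open"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for pr in resp.json():
    opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    age_hours = (now - opened).total_seconds() / 3600
    if age_hours > MAX_AGE_HOURS:
        # Nudge the team so the author isn't stuck waiting for review.
        requests.post(SLACK_WEBHOOK, json={
            "text": f"PR #{pr['number']} '{pr['title']}' has been open "
                    f"for {age_hours:.0f}h: {pr['html_url']}"
        })
```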
Kovid Batra: So doing the boring things well, and keeping good visibility on how things are proceeding, really helped you drive this transition smoothly and maintain the team’s productivity. That’s really interesting. But touching on the metrics part again: you mentioned that you were using Typo, and there are various tools out there to help you plan, execute, automate, and reflect when, as a leader, you have multiple stakeholders to manage. So my question to both of you: how do tools like Typo help you in each of these phases? And if you’re not using such tools, you must be using some level of metrics; say you’re planning an initiative, or automating and executing something, how do you look at the numbers, and how does the whole tooling piece help you with that?
Cesar Rodriguez: I think, for me, the biggest thing before using a tool like Typo was that it was very hard to have a meaningful conversation about how the engineering team was performing without hard, raw data to back it up. If you’re not measuring things, the conversation is more about feelings and anecdotal evidence. But when you have actual data you can observe, then you can make improvements, measure whether things are going well or badly, and take action on it. For me, that’s the biggest benefit: you can have conversations within your team and with the rest of the organization, and present things in a way that makes sense for everyone.
Kovid Batra: Makes sense. I think that’s the execution part where you really take advantage of the tool. You mentioned one example where you set a goal for your team: if the review time is more than 12 hours, you raise an alert. That totally makes sense; it helps with execution, making it smoother and giving you more actionable insights so that teams can move faster. Ariel, any experiences around that for you? How do you use metrics for planning, executing, and reflecting?
Ariel Pérez: So one of the things I like doing is working from the outside in. By that I mean: first, let me look at the things that directly impact customers, that are visible. There’s so much there in terms of customer trust, and also in terms of actual, eventual impact. So I look, for example, and it may sound negative, but it’s one of those things you want to track very closely, manage, and learn from: what’s our incident number? How many incidents do we have? How many P0s? How many P1s? That is a very important metric to track, because I guarantee you this: if you don’t have that number as an engineering leader, your CEO is going to ask, hey, why are we having so many problems? Why are so many angry customers calling me? So that’s a number you want to have a very strong pulse on: understand incidents. And then, obviously, take that number and figure out what’s going on behind it. But the first part is understanding the number, and you want it to go down over time. Then, as I said, there’s a North Star metric; you’re tracking that. I also look at, and I don’t lean heavily on these, but they’re still used a lot and still valuable, things like NPS and CSAT, to understand how customers are feeling and thinking. They give me even more when paired with qualitative feedback, because I want to understand the ‘why’. I’ll dive more into the qualitative piece: it’s critical, and we often forget it when we’re chasing metrics and looking for numbers; especially as engineers, we want numbers. But we need a story, and you can’t get the story just from the numbers. So I love the qualitative aspect. And the third thing I look at is FCIs, or failed customer interactions: finding friction in the journeys. What are all the times a customer tries to do something and fails? You can define that in many ways, but you capture it, find where customers are hitting friction points, and figure out which of those are most important to attack. These things help guide, at a minimum, what we need to work on as a team, what we need to start focusing on to deliver and build. Obviously, that alone doesn’t turn into initiatives. The next thing I do, to figure out what we work on, is work with all my leaders. In our organization, we don’t have separate product managers; engineering leaders are product managers. They have to build those product skills, because we have such a technical product that we made that decision, not only for efficiency’s sake, to stop having two people in every conversation, but also to build up that skill set of ‘I’m building for engineers, and I need to know my engineering product very well.’ So we enable these folks with the frameworks, methodologies, and ideas that help them make product decisions. When we look at these numbers, we look at frameworks and ways to think about: what am I going to build? Which of these is going to have impact? How much do we think it’s going to move the number? What level of confidence do I have in that?
Does it come from the gut? Does it come from opinions several customers have shared? Is the data telling us that? Are competitors doing it? Have we run an experiment? Did we do some UX research? There are different levels of confidence in ‘I want to do this thing, because it’s going to move that number, and we believe that number is important. The FCIs are through the roof; I want to attack them. This is going to move it.’ Okay, how sure are you it’s going to move it? And how are we going to measure that it indeed moved it? So that’s the outside of the onion. Then I work inward and ask: great, how good are we at getting at those things? There are two sets of measures. I pull measures and data from GitLab, from GitHub; I look at the deployments that we have. Thankfully, we run a database, an OLAP database, so I can run a bunch of metrics off all this stuff. We collect all this telemetry from our services, our deployments, our providers, and all the systems we use, and we have dashboards we built internally to track aggregates and metrics in real time, because that’s what Tinybird does. So we use Tinybird to Tinybird while we Tinybird, which is awesome. We’ve built our own dashboards and mechanisms to track a lot of these metrics and understand a lot of these things. However, there’s a key piece I haven’t introduced yet, though I have a lot of conversations with people about it: hey, why did this number move? What’s going on? I want to get to the place where we actually introduce surveys. Funny enough, when you go back to the beginning of DORA, even today, DORA says surveys are the best way to do this. We try to get hard data, but surveys are the best way to get it. For me, surveys really help with: forget for a second what the numbers are telling me, how do the engineers feel? Because then I get to figure out why they feel that way; it allows me to dive in. That’s why I believe the qualitative, subjective piece is so important to bolster the numbers I’m seeing: either it explains the numbers, or, the other way around, when I hear a story, do the numbers back it up? The reality is somewhere in the middle, but I use both of those to really help me.
Kovid Batra: Makes sense. Makes sense. Great, guys. Thank you so much for sharing such good insights. I’m sure our audience has some questions for us, so we can break for a minute and then start the Q&A.
Kovid Batra: All right. I think we have a lot of questions here, but we are going to pick a few of them. Let’s start with the first one, from Vishal: “Hi Ariel, how do I decide which metrics to focus on while measuring team productivity and individual metrics?” I think the question is simple, but please go ahead.
Ariel Pérez: Um, I would measure the Core 4 of DORA at the minimum across the team, to help me pinpoint where I need to go. In terms of team versus individual productivity metrics, I’d be very wary of trying to measure individual productivity metrics. Not because we shouldn’t hold individuals accountable for what they do, and not because individuals don’t also need to understand how we think about and manage performance, but with individuals we have to be very careful, especially in software teams. Since it’s a team sport, there’s no individual that succeeds on their own, and there’s no individual that fails on their own. So if I were to measure how an individual is doing, I would look for at least two things. Number one, actual peer feedback. How do their peers think about this person? Can they depend on this person? Is this person there when they need them? Is this person causing a lot of problems? Is this person fixing a lot of problems? But I’d also look at, for the culture I want to build: how often is this person reviewing other people’s PRs? How often is this person sitting with other people, helping unblock them? How often is this person not coding because they’re working with someone else to unblock them? I actually see that as a positive. Most frameworks will ding that person for inactivity. So I try to find the things that don’t measure activity, but measure that they’re doing the right things, which is teamwork: that they’re actually effective at working in a team.
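Signals like the review activity Ariel describes can be pulled straight from the code host. Here is a hedged sketch that tallies who reviews whose pull requests across recent closed PRs; the repository and token are placeholders, and a real version would handle pagination and rate limits.

```python
from collections import Counter

import requests

REPO = "your-org/your-repo"  # placeholder repository
HEADERS = {"Authorization": "Bearer <GITHUB_TOKEN>"}  # placeholder token

reviews_by_person = Counter()

# Pull the most recent 50 closed PRs and tally who reviewed whose work.
prs = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "closed", "per_page": 50},
    headers=HEADERS,
).json()

for pr in prs:
    reviews = requests.get(
        f"https://api.github.com/repos/{REPO}/pulls/{pr['number']}/reviews",
        headers=HEADERS,
    ).json()
    for review in reviews:
        reviewer = review["user"]["login"]
        if reviewer != pr["user"]["login"]:  # self-reviews don't count
            reviews_by_person[reviewer] += 1

# High counts often mark the people quietly unblocking everyone else.
for person, count in reviews_by_person.most_common():
    print(f"{person}: {count} reviews")
```

As Ariel notes, a number like this only works as a positive signal of teamwork; used as an activity quota, it would recreate exactly the problem he warns against.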
Kovid Batra: Great. Thanks, Ariel. Uh, next question. That’s for you, Cesar. How easy or hard is the adoption and implementation of SEI tools like Typo? Okay, so you can share your experience, how it worked out for you.
Cesar Rodriguez: Two things. When I was evaluating tools, I preferred to work with startups like Typo because they’re extremely responsive. If you go to a big company, they’re not going to be as responsive and helpful as a startup is. They change the product to meet your expectations, and they work extremely fast. That’s the first thing. The hard part is not the technology itself; the technology is easy. The hard part is the people aspect. If you can implement it early, while your company is growing, that’s better, because then when new team members come in, they already know what the expectations are. The other thing is that you need to communicate effectively to your team members why you are using the tool, and get their buy-in for measuring. Some people may not like that you’re going to be measuring their commits, their pull requests, their quality, their activity. But if you have a conversation with those people to help them understand the ‘why’, and how their productivity connects to business outcomes, that goes a long way. And once it’s in place, keep listening to your engineers’ feedback about the tool and work with the vendor to modify anything to fit your company’s needs. A lot of these tools are cookie-cutter in their approach and have a fixed set of capabilities, but teams are made of people, and people have different needs. So make sure you capture that feedback, give it to your vendor, and work with them to make the tool work for your specific teams.
Kovid Batra: Makes sense. Next question, from Mohd Helmy Ibrahim: “Hi Ariel, how do I get my senior management and juniors to adopt project management software in their work, with live task tracking and status updates?”
Ariel Pérez: Um, on that one, I’m of two minds, only because I see a lot of organizations get really far without sophisticated project management tooling; they just use, you know, Linear, and that’s enough. Other places can’t live without a super massive, complex Jira solution with all kinds of bells and whistles and reports. I think the key piece here, and funny enough, I was literally just having this conversation with my engineering leadership team, is this: when it comes to the folks involved, do you want to spend all day answering questions about where is this thing, how is this thing doing, is this thing going to finish, when is it going to finish; or do you want to just get on with your work? If you want to actually do the work, rather than talk about the work to people who don’t understand it, you need some level of information radiator. Information radiators are critical, at the minimum so that other folks can get on the same page, but also so that if someone comes to you and says, “Hey, where is this thing?”, the answer is: look at the information radiator, it’s right there. Where’s the status? It’s on the information radiator. When is this going to be done? Look at the information radiator, right? That’s the key piece for me: if you don’t invest in one, you will constantly answer those questions, because people care about the things you’re working on. They want to know when they can sell this thing, or they want to manage their dependencies. So you need some minimum level of investment in marking status, marking when you think it’s going to be done, and marking how it’s going, as a regular practice. Write it down. It’s so much easier to write it down than to answer that question over and over again. And if you write it down in a place where other people can see it and visualize it, even better.
Kovid Batra: Totally makes sense. All right, moving on. Uh, the next question is for Cesar from Saloni. Uh, good to see you here. I have a question around burnout. How do you address burnout or disengagement while pushing for high productivity? Oh, very relevant question, actually.
Cesar Rodriguez: Yeah, so for this one, I actually use Typo as well. Typo has this gauge that tells you, based on the data it’s collecting, whether somebody is working more than expected or less than expected, and it gives you an alert saying, hey, this person may be prone to burnout, or this person is burning out. So I use that gauge to see how the team is doing, and then it’s always about having a conversation with the individual and seeing what’s going on in their lives. There may be work things impacting their productivity; there may be things outside of work impacting them. So you have to work around that. It’s all about people in the end: working with them, setting the right expectations, and at the same time being accommodating if they’re experiencing burnout.
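Typo’s burnout gauge is its own proprietary model, but the basic idea, comparing someone’s current activity against their own historical baseline, can be sketched in a few lines. The weekly counts below are invented, and a real signal would blend far more inputs; as Cesar says, the output should start a conversation, never deliver a verdict.

```python
import statistics

# Invented weekly activity counts (commits + reviews + comments) per person;
# the last entry in each list is the current week.
weekly_activity = {
    "dev_a": [22, 25, 24, 23, 41],
    "dev_b": [18, 17, 20, 19, 18],
}

for person, counts in weekly_activity.items():
    baseline, current = counts[:-1], counts[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    # Flag anyone running well above their own historical range.
    if stdev and (current - mean) / stdev > 2:
        print(f"{person}: {current} this week vs. ~{mean:.0f} baseline; check in")
```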
Kovid Batra: Cool. I think, uh, more than myself, uh, you have promoted Typo a lot today. Great, but glad to know that the tool is really helping you and your team. Yeah. Next question. Uh, this one is again for you, Cesar from Nisha. Uh, how do you encourage accountability without micromanaging your team?
Cesar Rodriguez: I think Ariel answered this question, and I take this approach even with my kids. It’s not about telling them what to do. It’s about listening and helping them learn and come to the same conclusion you’re coming to, without forcing your way into it. So yeah, you have to listen to everybody, listen to your stakeholders, listen to your team, and then drive a conversation that points them in the right direction without forcing them or giving them the answer, which requires a lot of tact.
Ariel Pérez: One more thing I’ll add to that, so folks don’t think we’re copping out: hold on, what’s your job as a leader? What are you accountable for? Part of our job is to let them know what’s important. It’s our job to tell them what is the most important thing, what is the most important thing now, what is the most important thing long term, and repeat that ad nauseam until they make fun of you for it. They need to understand what’s most important, what the strategy is, so you need to provide context. It’s unfair, and actually a very negative thing, to say ‘go figure it out’ without telling them: figure what out? That’s a key piece there as well. You’re accountable as the leader for telling them what’s important, letting them understand why it’s important, and providing context.
Kovid Batra: Totally. All right. Next one. This one’s for you, Cesar. According to you, what are the most common misconceptions about engineering productivity? How do you address them?
Cesar Rodriguez: So for me, the biggest thing is that people come with all these new words: DORA, SPACE, whatever the latest and greatest thing is. But there’s not going to be a cookie-cutter approach. You have to take what works from those frameworks for your specific team, in the specific situation of your business right now. And from there, you have to look at the data and adapt as your team and your business evolve. So that’s the biggest misconception for me. You can learn a lot from the things that are out there, but always keep in mind that you have to put them into the context of your current situation.
Kovid Batra: I think, uh, Ariel, I would like to hear you on this one too.
Ariel Pérez: Yeah. Uh, definitely. Um, I think for me, one of the most common misconceptions about engineering productivity as a whole is this idea that engineering is like manufacturing. For so long, we’ve applied ideas like: engineering is all about shipping more code; just like in a factory, let’s get really good at shipping code and we’re going to be great. That’s how you measure productivity: ship more code, just like shipping more widgets. How many widgets can I ship per hour? That’s a great measure of productivity in a factory. It’s a horrible measure of productivity in engineering. And that’s because many people don’t realize that engineering, and development in particular, is more R&D than it is actually shipping things. Software development is 99% research and development, 1% actually coding the thing. If you want proof of that: if you have an engineer, or a team, working on something for three weeks, and somehow it all disappears and they lose all of it, how long will it take them to recode the same thing? They’ll probably recode it in about a day. That tells you that most of those three weeks was figuring out the right thing, the right solution, the right piece, and the last bit was just coding it. So for me, that’s the big misconception about engineering productivity: that it has anything to do with manufacturing. No, it has everything to do with R&D. If we want to understand how to better measure engineering productivity, look at industries where R&D is a very heavy piece of what they do. How do they measure productivity? How do they think about the productivity of their R&D efforts?
Kovid Batra: Cool. Interesting. All right. I think with that, uh, we come to the end of this session. Before we part, uh, I would like to thank both of you for making this session so interesting, so insightful for all of us. And thanks to the audience for bringing up such nice questions. Uh, so finally, before we part, uh, Ariel, Cesar, anything you would say as parting thoughts?
Ariel Pérez: Cesar, you wanna go first?
Cesar Rodriguez: No, no, um, no, no parting thoughts here. Feel free to, anyone that wants to chat more, feel free to hit me up on LinkedIn. Check out stackgen.com if you want to learn about what we do there.
Ariel Pérez: Awesome. Um, for me, in terms of parting thoughts, and this is just because of how I’ve personally thought about this: I think if you lean on figuring out what makes people tick, and you’re trying to take your job from the perspective of how do I improve people, how do I enrich people’s lives, how do I make them better at what they do every day? If you take it from that perspective, I don’t think you can ever go wrong. If you make your people super happy and engaged and they want to be here and you’re constantly motivating them, building them and growing them, then as a consequence, the productivity, the outputs, the outcomes, all that stuff will come. I firmly believe that. I’ve seen it. It might be really hard to argue that with some folks, but I firmly believe it. So that’s my parting thought: focus on the people and what makes them tick and what makes them work, and everything else will fall into place. And, you know, just like Cesar, I can’t walk away without plugging Tinybird. Tinybird is data infrastructure for software teams. If you want to go faster, you want to be more productive, you want to ship solutions faster for your customers, Tinybird is built for that. It helps engineering teams build solutions over analytical data faster than anyone else without adding more people. You can keep your team smaller for longer because Tinybird helps you get that efficiency, that productivity out there.
Kovid Batra: Great. Thank you so much guys and all the best for your ventures and for the efforts that you’re doing. Uh, we’ll see you soon again. Thank you.
Cesar Rodriguez: Thanks, Kovid.
Ariel Pérez: Thank you very much. Bye bye.
Cesar Rodriguez: Thank you. Bye!
Best Practices of CI/CD Optimization Using DORA Metrics
Every delay in your deployment could mean losing a customer. Speed and reliability are crucial, yet many teams struggle with slow deployment cycles, frustrating rollbacks, and poor visibility into performance metrics.
When you’ve worked hard on a feature, it is frustrating when a last-minute bug derails the deployment. Or you face a rollback that disrupts workflows and undermines team confidence. These familiar scenarios breed anxiety and inefficiency, impacting team dynamics and business outcomes.
Fortunately, DORA metrics offer a practical framework to address these challenges. By leveraging these metrics, organizations can gain insights into their CI/CD practices, pinpoint areas for improvement, and cultivate a culture of accountability. This blog will explore how to optimize CI/CD processes using DORA metrics, providing best practices and actionable strategies to help teams deliver quality software faster and more reliably.
Understanding the challenges in CI/CD optimization
Before we dive into solutions, it’s important to recognize the common challenges teams face in CI/CD optimization. By understanding these issues, we can better appreciate the strategies needed to overcome them.
Slow deployment cycles
Development teams frequently experience slow deployment cycles due to a variety of factors, including complex code bases, inadequate testing, and manual processes. Each of these elements can create significant bottlenecks. A sluggish cycle not only hampers agility but also reduces responsiveness to customer needs and market changes. To address this, teams can adopt practices like:
Streamlining the pipeline: Evaluate each step in your deployment pipeline to identify redundancies or unnecessary manual interventions. Aim to automate where possible.
Using feature flags: Implement feature toggles to enable or disable features without deploying new code. This allows you to deploy more frequently while managing risk effectively.
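As a rough illustration, here is a minimal feature-flag gate in TypeScript. The flag names and hard-coded values are placeholders; in practice the flags would come from a flag service or configuration store.

```typescript
// A minimal feature-flag gate. Flag values are hard-coded here; in
// practice they would come from a flag service or config store.
type FlagName = "newCheckoutFlow" | "betaSearch";

const flags: Record<FlagName, boolean> = {
  newCheckoutFlow: false, // code is deployed, but the feature stays dark
  betaSearch: true,
};

function isEnabled(flag: FlagName): boolean {
  return flags[flag];
}

function checkout(cartId: string): void {
  if (isEnabled("newCheckoutFlow")) {
    console.log(`cart ${cartId}: new checkout flow`);
  } else {
    console.log(`cart ${cartId}: legacy checkout flow`);
  }
}

checkout("cart-42"); // ships safely even though the new flow is unfinished
```

Because the unfinished code path stays dark in production, the team can keep deploying on every merge instead of batching risky changes.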
Frequent rollbacks
Frequent rollbacks can significantly disrupt workflows and erode team confidence. They typically indicate issues such as inadequate testing, lack of integration processes, or insufficient quality assurance. To mitigate this:
Enhance testing practices: Invest in automated testing at all levels—unit, integration, and end-to-end testing. This ensures that issues are caught early in the development process.
Implement a staging environment: To conduct final tests before deployment, use a staging environment that mirrors production. This practice helps catch integration issues that might not appear in earlier testing phases.
Visibility gaps
A lack of visibility into your CI/CD pipeline can make it challenging to track performance and pinpoint areas for improvement. This opacity can lead to delays and hinder your ability to make data-driven decisions. To improve visibility:
Adopt dashboard tools: Use dashboards that visualize key metrics in real time, allowing teams to monitor the health of the CI/CD pipeline effectively.
Regularly review performance: Schedule consistent review meetings to discuss metrics, successes, and areas for improvement. This fosters a culture of transparency and accountability.
Cultural barriers
Cultural barriers between development and operations teams can lead to misunderstandings and inefficiencies. To foster a more collaborative environment:
Encourage cross-team collaboration: Hold regular meetings that bring developers and operations staff together to discuss challenges and share knowledge.
Cultivate a DevOps mindset: Promote the principles of DevOps across your organization to break down silos and encourage shared responsibility for software delivery.
We understand how these challenges can create stress and hinder your team’s well-being. Addressing them is crucial not just for project success but also for maintaining a positive and productive work environment.
Introduction to DORA metrics
DORA (DevOps Research and Assessment) metrics are key performance indicators that provide valuable insights into your software delivery performance. They help measure and improve the effectiveness of your CI/CD practices, making them crucial for software teams aiming for excellence.
Overview of the four key metrics
Deployment frequency: This metric indicates how often code is successfully deployed to production. High deployment frequency shows a responsive and agile team.
Lead time for changes: This measures the time it takes for code to go from committed to deployed in production. Short lead times indicate efficient processes and quick feedback loops.
Change failure rate: This tracks the percentage of deployments that lead to failures in production. A lower change failure rate reflects higher code quality and effective testing practices.
Mean time to recovery (MTTR): This metric assesses how quickly the team can restore service after a failure. A shorter MTTR indicates a resilient system and effective incident management practices.
By understanding and utilizing these metrics, software teams gain actionable insights that foster continuous improvement and a culture of accountability.
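To make the four definitions concrete, here is a toy TypeScript sketch that computes them from a list of deployment records. The record shape (committedAt, deployedAt, failed, restoredAt) is an illustrative assumption, not any particular tool's schema.

```typescript
// Toy computation of the four DORA metrics from deployment records.
interface Deployment {
  committedAt: Date;    // first commit in the change set
  deployedAt: Date;
  failed: boolean;      // true if the deployment caused a production failure
  restoredAt?: Date;    // when service was restored, if it failed
}

const HOUR = 3_600_000; // milliseconds per hour

function doraMetrics(deploys: Deployment[], periodDays: number) {
  const leadTimes = deploys.map(
    d => (d.deployedAt.getTime() - d.committedAt.getTime()) / HOUR,
  );
  const restored = deploys.filter(d => d.failed && d.restoredAt);
  const recoveryTimes = restored.map(
    d => (d.restoredAt!.getTime() - d.deployedAt.getTime()) / HOUR,
  );
  const avg = (xs: number[]) =>
    xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;

  return {
    deploymentFrequencyPerDay: deploys.length / periodDays,
    avgLeadTimeHours: avg(leadTimes),
    changeFailureRate: deploys.filter(d => d.failed).length / deploys.length,
    mttrHours: avg(recoveryTimes),
  };
}

// Two deployments in a 7-day window; the second failed and was restored.
console.log(doraMetrics([
  { committedAt: new Date("2024-05-01T09:00Z"), deployedAt: new Date("2024-05-01T15:00Z"), failed: false },
  { committedAt: new Date("2024-05-03T10:00Z"), deployedAt: new Date("2024-05-04T10:00Z"),
    failed: true, restoredAt: new Date("2024-05-04T11:30Z") },
], 7));
```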
Best practices for CI/CD optimization using DORA metrics
Implementing best practices is crucial for optimizing your CI/CD processes. Each practice provides actionable insights that can lead to substantial improvements.
Measure and analyze current performance
To effectively measure and analyze your current performance, start by utilizing the right tools to gather valuable data. This foundational step is essential for identifying areas that need improvement.
Utilize tools: Use tools like GitLab, Jenkins, and Typo to collect and visualize data on your DORA metrics. This data forms a solid foundation for identifying performance gaps.
Conduct regular performance reviews: Regularly review performance to pinpoint bottlenecks and areas needing improvement. A data-driven approach can reveal insights that may not be immediately obvious.
Establish baseline metrics: Set baseline metrics to understand your current performance, allowing you to set realistic improvement targets.
How Typo helps: Typo seamlessly integrates with your CI/CD tools, offering real-time insights into DORA metrics. This integration simplifies assessment and helps identify specific areas for enhancement.
Set specific, measurable goals
Clearly defined goals are crucial for driving performance. Establishing specific, measurable goals aligns your team's efforts with broader organizational objectives.
Define SMART goals: Establish goals that are Specific, Measurable, Achievable, Relevant, and Time-bound (SMART), and align them with your DORA metrics to ensure clarity in your objectives.
Communicate goals clearly: Ensure that these goals are communicated effectively to all team members. Utilize project management tools like ClickUp to track progress and maintain accountability.
Align with business goals: Align your objectives with broader business goals to support overall company strategy, reinforcing the importance of each team member's contribution.
How Typo helps: Typo's goal-setting and tracking capabilities promote accountability within your team, helping monitor progress toward targets and keeping everyone aligned and focused.
Implement incremental changes
Implementing gradual changes based on data insights can lead to more sustainable improvements. Focusing on small, manageable changes can often yield better results than sweeping overhauls.
Introduce gradual improvements: Focus on small, achievable changes based on insights from your DORA metrics. This approach is often more effective than trying to overhaul the entire system at once.
Enhance automation and testing: Work on enhancing automation and testing processes to reduce lead times and failure rates. Continuous integration practices should include automated unit and integration tests.
Incorporate continuous testing: Implement a CI/CD pipeline that includes continuous testing. By catching issues early, teams can significantly reduce lead times and minimize the impact of failures.
How Typo helps: Typo provides actionable recommendations based on performance data, guiding teams through effective process changes that can be implemented incrementally.
Foster a culture of collaboration
A collaborative environment fosters innovation and efficiency. Encouraging open communication and shared responsibility can significantly enhance team dynamics.
Encourage open communication: Promote transparent communication among team members using tools like Slack or Microsoft Teams.
Utilize retrospectives: Regularly hold retrospectives to celebrate successes and learn collectively from setbacks. This practice can improve team dynamics and help identify areas for improvement.
Promote cross-functional collaboration: Foster collaboration between development and operations teams. Conduct joint planning sessions to ensure alignment on objectives and priorities.
How Typo helps: With features like shared dashboards and performance reports, Typo facilitates transparency and alignment, breaking down silos and ensuring everyone is on the same page.
Review and adapt regularly
Regular reviews are essential for maintaining momentum and ensuring alignment with goals. Establishing a routine for evaluation can help your team adapt to changes effectively.
Establish a routine: Create a routine for evaluating your DORA metrics and adjusting strategies accordingly. Regular check-ins help ensure that your team remains aligned with its goals.
Conduct retrospectives: Use retrospectives to gather insights and continuously improve processes. Cultivate a safe environment where team members can express concerns and suggest improvements.
Consider A/B testing: Implement A/B testing in your CI/CD process to measure effectiveness. Testing different approaches can help identify the most effective practices.
How Typo helps: Typo’s advanced analytics capabilities support in-depth reviews, making it easier to identify trends and adapt your strategies effectively. This ongoing evaluation is key to maintaining momentum and achieving long-term success.
Additional strategies for faster deployments
To enhance your CI/CD process and achieve faster deployments, consider implementing the following strategies:
Automation
Automate various aspects of the development lifecycle to improve efficiency. For build automation, utilize tools like Jenkins, GitLab CI/CD, or CircleCI to streamline the process of building applications from source code. This reduces errors and increases speed. Implementing automated unit, integration, and regression tests allows teams to catch defects early in the development process, significantly reducing the time spent on manual testing and enhancing code quality.
Additionally, automate the deployment of applications to different environments (development, staging, production) using tools like Ansible, Puppet, or Chef to ensure consistency and minimize the risk of human error during deployments.
Version Control
Employ a version control system like Git to effectively track changes to your codebase and facilitate collaboration among developers. Implementing effective branching strategies such as Gitflow or GitHub Flow helps manage different versions of your code and isolate development work, allowing multiple team members to work on features simultaneously without conflicts.
Continuous Integration
Encourage developers to commit their code changes frequently to the main branch. This practice helps reduce integration issues and allows conflicts to be identified early. Set up automated builds and tests that run whenever new code is committed to the main branch.
This ensures that issues are caught immediately, allowing for quicker resolutions. Providing developers with immediate feedback on the success or failure of their builds and tests fosters a culture of accountability and promotes continuous improvement.
Continuous Delivery
Automate the deployment of applications to various environments, which reduces manual effort and minimizes the potential for errors. Ensure consistency between different environments to minimize deployment risks; utilizing containers or virtualization can help achieve this.
Additionally, consider implementing canary releases, where new features are gradually rolled out to a small subset of users before a full deployment. This allows teams to monitor performance and address any issues before they impact the entire user base.
Infrastructure as Code (IaC)
Use tools like Terraform or CloudFormation to manage infrastructure resources (e.g., servers, networks, storage) as code. This approach simplifies infrastructure management and enhances consistency across environments. Store infrastructure code in a version control system to track changes and facilitate collaboration.
This practice enables teams to maintain a history of infrastructure changes and revert if necessary. Ensuring consistent infrastructure across different environments through IaC reduces discrepancies that can lead to deployment failures.
Monitoring and Feedback
Implement monitoring tools to track the performance and health of your applications in production. Continuous monitoring allows teams to proactively identify and resolve issues before they escalate. Set up automated alerts to notify teams of critical issues or performance degradation.
Quick alerts enable faster responses to potential problems. Use feedback from monitoring and alerting systems to identify and address problems proactively, helping teams learn from past deployments and improve future processes.
Final thoughts
By implementing these best practices, you will improve your deployment speed and reliability while also boosting team satisfaction and delivering better experiences to your customers. Remember, you’re not alone on this journey—resources and communities are available to support you every step of the way.
Mobile development comes with a unique set of challenges: rapid release cycles, stringent user expectations, and the complexities of maintaining quality across diverse devices and operating systems. Engineering teams need robust frameworks to measure their performance and optimize their development processes effectively.
DORA metrics—Deployment Frequency, Lead Time for Changes, Mean Time to Recovery (MTTR), and Change Failure Rate—are key indicators that provide valuable insights into a team’s DevOps performance. Leveraging these metrics can empower mobile development teams to make data-driven improvements that boost efficiency and enhance user satisfaction.
Importance of DORA Metrics in Mobile Development
DORA metrics, rooted in research from the DevOps Research and Assessment (DORA) group, help teams measure key aspects of software delivery performance.
Here's why they matter for mobile development:
Deployment Frequency: Mobile teams need to keep up with the fast pace of updates required to satisfy user demand. Frequent, smooth deployments signal a team’s ability to deliver features, fixes, and updates consistently.
Lead Time for Changes: This metric tracks the time between code commit and deployment. For mobile teams, shorter lead times mean a streamlined process, allowing quicker responses to user feedback and faster feature rollouts.
MTTR: Downtime in mobile apps can result in frustrated users and poor reviews. By tracking MTTR, teams can assess and improve their incident response processes, minimizing the time an app remains in a broken state.
Change Failure Rate: A high change failure rate can indicate inadequate testing or rushed releases. Monitoring this helps mobile teams enhance their quality assurance practices and prevent issues from reaching production.
Deep Dive into Practical Solutions for Tracking DORA Metrics
Tracking DORA metrics in mobile app development involves a range of technical strategies. Here, we explore practical approaches to implement effective measurement and visualization of these metrics.
Implementing a Measurement Framework
Integrating DORA metrics into existing workflows requires more than a simple add-on; it demands technical adjustments and robust toolchains that support continuous data collection and analysis.
Automated Data Collection
Automating the collection of DORA metrics starts with choosing the right CI/CD platforms and tools that align with mobile development. Popular options include:
Jenkins Pipelines: Set up custom pipeline scripts that log deployment events and timestamps, capturing deployment frequency and lead times. Use plugins like the Pipeline Stage View for visual insights.
GitLab CI/CD: With GitLab's built-in analytics, teams can monitor deployment frequency and lead time for changes directly within their CI/CD pipeline.
GitHub Actions: Utilize workflows that trigger on commits and deployments. Custom actions can be developed to log data and push it to external observability platforms for visualization.
Technical setup: For accurate deployment tracking, implement triggers in your CI/CD pipelines that capture key timestamps at each stage (e.g., start and end of builds, start of deployment). This can be done using shell scripts that append timestamps to a database or monitoring tool.
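For example, a pipeline step could invoke a small script to record stage timestamps. The sketch below is a hypothetical TypeScript equivalent of the shell-script approach described above; the METRICS_URL endpoint and the event schema are assumptions, CI_PIPELINE_ID mirrors a GitLab-style CI variable, and the built-in fetch requires Node 18+.

```typescript
// Hypothetical stage-timestamp recorder a pipeline step could invoke,
// e.g. `ts-node record-stage.ts build start`.
const METRICS_URL = process.env.METRICS_URL ?? "http://localhost:8080/events";

async function recordStage(stage: string, event: "start" | "end"): Promise<void> {
  await fetch(METRICS_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      pipeline: process.env.CI_PIPELINE_ID ?? "local",
      stage,                                // e.g. "build", "test", "deploy"
      event,                                // "start" or "end"
      timestamp: new Date().toISOString(),  // the data point lead-time math needs
    }),
  });
}

const [stage, event] = process.argv.slice(2);
recordStage(stage, event as "start" | "end").catch(console.error);
```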
Real-Time Monitoring and Visualization
To make sense of the collected data, teams need a robust visualization strategy. Here’s a deeper look at setting up effective dashboards:
Prometheus with Grafana: Integrate Prometheus to scrape data from CI/CD pipelines, and use Grafana to create dashboards with deployment trends and lead time breakdowns.
Elastic Stack (ELK): Ship logs from your CI/CD process to Elasticsearch and build visualizations in Kibana. This setup provides detailed logs alongside high-level metrics.
Technical Implementation Tips:
Use Prometheus exporters or custom scripts that expose metric data as HTTP endpoints (a minimal sketch follows this list).
Design Grafana dashboards to show current and historical trends for DORA metrics, using panels that highlight anomalies or spikes in lead time or failure rates.
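As a minimal illustration of the first tip, the following TypeScript sketch serves counters in the Prometheus text exposition format over HTTP. The metric names and port are made up; a real exporter would increment the counters from pipeline events rather than manual calls.

```typescript
// Minimal exporter: counters served in the Prometheus text format so
// Prometheus can scrape them and Grafana can chart deployment trends.
import { createServer } from "node:http";

let deploymentsTotal = 0;
let failedDeploymentsTotal = 0;

// A real exporter would call this from pipeline webhook handlers.
export function onDeployment(failed: boolean): void {
  deploymentsTotal += 1;
  if (failed) failedDeploymentsTotal += 1;
}

createServer((req, res) => {
  if (req.url === "/metrics") {
    res.writeHead(200, { "Content-Type": "text/plain; version=0.0.4" });
    res.end(
      "# TYPE deployments_total counter\n" +
      `deployments_total ${deploymentsTotal}\n` +
      "# TYPE failed_deployments_total counter\n" +
      `failed_deployments_total ${failedDeploymentsTotal}\n`,
    );
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(9102, () => console.log("exporter listening on :9102/metrics"));
```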
Comprehensive Testing Pipelines
Testing is integral to maintaining a low change failure rate. To align with this, engineering teams should develop thorough, automated testing strategies:
Unit Testing: Implement unit tests with frameworks like JUnit for Android or XCTest for iOS. Ensure these are part of every build to catch low-level issues early.
Integration Testing: Use tools such as Espresso and UIAutomator for Android and XCUITest for iOS to validate complex user interactions and integrations.
End-to-End Testing: Integrate Appium or Selenium to automate tests across different devices and OS versions. End-to-end testing helps simulate real-world usage and ensures new deployments don't break critical app flows.
Pipeline Integration:
Set up your CI/CD pipeline to trigger these tests automatically post-build. Configure your pipeline to fail early if a test doesn’t pass, preventing faulty code from being deployed.
Incident Response and MTTR Management
Reducing MTTR requires visibility into incidents and the ability to act swiftly. Engineering teams should:
Implement Monitoring Tools: Use tools like Firebase Crashlytics for crash reporting and monitoring. Integrate with third-party tools like Sentry for comprehensive error tracking.
Set Up Automated Alerts: Configure alerts for critical failures using observability tools like Grafana Loki, Prometheus Alertmanager, or PagerDuty. This ensures that the team is notified as soon as an issue arises.
Strategies for Quick Recovery:
Implement automatic rollback procedures using feature flags and deployment strategies such as blue-green deployments or canary releases.
Use scripts or custom CI/CD logic to switch between versions if a critical incident is detected.
Weaving Typo into Your Workflow
After implementing these technical solutions, teams can leverage Typo for seamless DORA metrics integration. Typo can help consolidate data and make metric tracking more efficient and less time-consuming.
For teams looking to streamline the integration of DORA metrics tracking, Typo offers a solution that is both powerful and easy to adopt. Typo provides:
Automated Deployment Tracking: By integrating with existing CI/CD tools, Typo collects deployment data and visualizes trends, simplifying the tracking of deployment frequency.
Detailed Lead Time Analysis: Typo’s analytics engine breaks down lead times by stages in your pipeline, helping teams pinpoint delays in specific steps, such as code review or testing.
Real-Time Incident Response Support: Typo includes incident monitoring capabilities that assist in tracking MTTR and offering insights into incident trends, facilitating better response strategies.
Seamless Integration: Typo connects effortlessly with platforms like Jenkins, GitLab, GitHub, and Jira, centralizing DORA metrics in one place without disrupting existing workflows.
Typo’s integration capabilities mean engineering teams don’t need to build custom scripts or additional data pipelines. With Typo, developers can focus on analyzing data rather than collecting it, ultimately accelerating their journey toward continuous improvement.
Establishing a Continuous Improvement Cycle
To fully leverage DORA metrics, teams must establish a feedback loop that drives continuous improvement. This section outlines how to create a process that ensures long-term optimization and alignment with development goals.
Regular Data Reviews: Conduct data-driven retrospectives to analyze trends and set goals for improvements.
Iterative Process Enhancements: Use findings to adjust coding practices, enhance automated testing coverage, or refine build processes.
Team Collaboration and Learning: Share knowledge across teams to spread best practices and avoid repeating mistakes.
Empowering Your Mobile Development Process
DORA metrics provide mobile engineering teams with the tools needed to measure and optimize their development processes, enhancing their ability to release high-quality apps efficiently. By integrating DORA metrics tracking through automated data collection, real-time monitoring, comprehensive testing pipelines, and advanced incident response practices, teams can achieve continuous improvement.
Tools like Typo make these practices even more effective by offering seamless integration and real-time insights, allowing developers to focus on innovation and delivering exceptional user experiences.
For agile teams, tracking productivity can quickly become overwhelming, especially when too many metrics clutter the process. Many teams feel they’re working hard without seeing the progress they expect. By focusing on a handful of high-impact JIRA metrics, teams can gain clear, actionable insights that streamline decision-making and help them stay on course.
These five essential metrics highlight what truly drives productivity, enabling teams to make informed adjustments that propel their work forward.
Why JIRA Metrics Matter for Agile Teams
Agile teams often face missed deadlines, unclear priorities, and resource management issues. Without effective metrics, these issues remain hidden, leading to frustration. JIRA metrics provide clarity on team performance, enabling early identification of bottlenecks and allowing teams to stay agile and efficient. By tracking just a few high-impact metrics, teams can make informed, data-driven decisions that improve workflows and outcomes.
Top 5 JIRA Metrics to Improve Your Team’s Productivity
1. Work In Progress (WIP)
Work In Progress (WIP) measures the number of tasks actively being worked on. Setting WIP limits encourages teams to complete existing tasks before starting new ones, which reduces task-switching, increases focus, and improves overall workflow efficiency.
Technical applications:
Setting WIP limits: On JIRA Kanban boards, teams can set WIP limits for each stage, like “In Progress” or “Review.” This prevents overloading and helps teams maintain steady productivity without overwhelming team members (see the sketch after this list).
Identifying bottlenecks: WIP metrics highlight bottlenecks in real time. If tasks accumulate in a specific stage (e.g., “In Review”), it signals a need to address delays, such as availability of reviewers or unclear review standards.
Using cumulative flow diagrams: JIRA’s cumulative flow diagrams visualize WIP across stages, showing where tasks are getting stuck and helping teams keep workflows balanced.
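To illustrate the idea behind WIP limits, here is a small TypeScript sketch that flags stages over their limit in an exported issue list. JIRA boards enforce limits natively; the stage names, limits, and issue shape here are assumptions for illustration, useful mainly in custom reports.

```typescript
// Rough WIP-limit check over an exported issue list.
interface Issue { key: string; stage: string; }

const wipLimits: Record<string, number> = { "In Progress": 4, "In Review": 3 };

function checkWip(issues: Issue[]): void {
  const counts = new Map<string, number>();
  for (const issue of issues) {
    counts.set(issue.stage, (counts.get(issue.stage) ?? 0) + 1);
  }
  for (const [stage, limit] of Object.entries(wipLimits)) {
    const count = counts.get(stage) ?? 0;
    if (count > limit) {
      console.warn(`${stage}: ${count} issues exceeds WIP limit of ${limit}`);
    }
  }
}

checkWip([
  { key: "APP-1", stage: "In Review" },
  { key: "APP-2", stage: "In Review" },
  { key: "APP-3", stage: "In Review" },
  { key: "APP-4", stage: "In Review" }, // pushes "In Review" over its limit
]);
```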
2. Work Breakdown
Work Breakdown details how tasks are distributed across project components, priorities, and team members. Breaking down tasks into manageable parts (Epics, Stories, Subtasks) provides clarity on resource allocation and ensures each project aspect receives adequate attention.
Technical applications:
Epics and stories in JIRA: JIRA enables teams to organize large projects by breaking them into Epics, Stories, and Subtasks, making complex tasks more manageable and easier to track.
Advanced roadmaps: JIRA’s Advanced Roadmaps allow visualization of task breakdown in a timeline, displaying dependencies and resource allocations. This overview helps maintain balanced workloads across project components.
Tracking priority and status: Custom filters in JIRA allow teams to view high-priority tasks across Epics and Stories, ensuring critical items are progressing as expected.
3. Developer Workload
Developer Workload monitors the task volume and complexity assigned to each developer. This metric ensures balanced workload distribution, preventing burnout and optimizing each developer’s capacity.
Technical applications:
JIRA workload reports: Workload reports aggregate task counts, hours estimated, and priority levels for each developer. This helps project managers reallocate tasks if certain team members are overloaded.
Time tracking and estimation: JIRA allows developers to log actual time spent on tasks, making it possible to compare against estimates for improved workload planning.
Capacity-based assignment: Project managers can analyze workload data to assign tasks based on each developer’s availability and capacity, ensuring sustainable productivity.
4. Team Velocity
Team Velocity measures the amount of work completed in each sprint, establishing a baseline for sprint planning and setting realistic goals.
Technical applications:
Velocity chart: JIRA’s Velocity Chart displays work completed versus planned work, helping teams gauge their performance trends and establish realistic goals for future sprints.
Estimating story points: Story points assigned to tasks allow teams to calculate velocity and capacity more accurately, improving sprint planning and goal setting.
Historical analysis for planning: Historical velocity data enables teams to look back at performance trends, helping identify factors that impacted past sprints and optimizing future planning.
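The arithmetic behind velocity-based planning is simple enough to sketch in TypeScript (all numbers invented):

```typescript
// Velocity arithmetic: completed story points per sprint, averaged to
// anchor the next sprint's commitment.
const completedPoints = [21, 34, 27, 30]; // last four sprints

const lastVelocity = completedPoints[completedPoints.length - 1];
const avgVelocity =
  completedPoints.reduce((a, b) => a + b, 0) / completedPoints.length;

console.log(`last sprint: ${lastVelocity} points`);
console.log(`average velocity: ${avgVelocity} points per sprint`);
// Planning rule of thumb: commit close to the average, not the best sprint.
```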
5. Cycle Time
Cycle Time tracks how long tasks take from start to completion, highlighting process inefficiencies. Shorter cycle times generally mean faster delivery.
Technical applications:
Control chart: The Control Chart in JIRA visualizes Cycle Time, displaying how long tasks spend in each stage, helping to identify where delays occur.
Custom workflows and time tracking: Customizable workflows allow teams to assign specific time limits to each stage, identifying areas for improvement and reducing Cycle Time.
SLAs for timely completion: For teams with service-level agreements, setting cycle-time goals can help track SLA adherence, providing benchmarks for performance.
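For teams computing cycle time outside JIRA's Control Chart, the underlying calculation is straightforward. Here is a small TypeScript sketch with illustrative timestamps, using the median to dampen outliers:

```typescript
// Cycle time per task from "work started" to "done" timestamps.
interface Task { key: string; startedAt: Date; doneAt: Date; }

const DAY = 86_400_000; // milliseconds per day

function cycleTimesDays(tasks: Task[]): number[] {
  return tasks
    .map(t => (t.doneAt.getTime() - t.startedAt.getTime()) / DAY)
    .sort((a, b) => a - b);
}

function median(sorted: number[]): number {
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const times = cycleTimesDays([
  { key: "APP-10", startedAt: new Date("2024-05-01"), doneAt: new Date("2024-05-03") },
  { key: "APP-11", startedAt: new Date("2024-05-02"), doneAt: new Date("2024-05-09") },
  { key: "APP-12", startedAt: new Date("2024-05-04"), doneAt: new Date("2024-05-06") },
]);
console.log(`median cycle time: ${median(times).toFixed(1)} days`);
```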
How to Set Up JIRA Metrics for Success: Practical Tips for Maximizing the Benefits of JIRA Metrics with Typo
Effectively setting up and using JIRA metrics requires strategic configuration and the right tools to turn raw data into actionable insights. Here’s a practical, step-by-step guide to configuring these metrics in JIRA for optimal tracking and collaboration. With Typo’s integration, teams gain additional capabilities for managing, analyzing, and discussing metrics collaboratively.
Step 1: Configure Key Dashboards for Visibility
Setting up dashboards in JIRA for metrics like Cycle Time, Developer Workload, and Team Velocity allows for quick access to critical data.
How to set up:
Go to the Dashboards section in JIRA, select Create Dashboard, and add specific gadgets such as Cumulative Flow Diagram for WIP and Velocity Chart for Team Velocity.
Position each gadget for easy reference, giving your team a visual summary of project progress at a glance.
Step 2: Use Typo’s Sprint Analysis for Enhanced Sprint Visibility
Typo’s sprint analysis offers an in-depth view of your team’s progress throughout a sprint, enabling engineering managers and developers to better understand performance trends, spot blockers, and refine future planning. Typo integrates seamlessly with JIRA to provide real-time sprint insights, including data on team velocity, task distribution, and completion rates.
Key features of Typo’s sprint analysis:
Detailed sprint performance summaries: Typo automatically generates sprint performance summaries, giving teams a clear view of completed tasks, WIP, and uncompleted items.
Sprint progress tracking: Typo visualizes your team’s progress across each sprint phase, enabling managers to identify trends and respond to bottlenecks faster.
Velocity trend analysis: Track velocity over multiple sprints to understand performance patterns. Typo’s charts display average, maximum, and minimum velocities, helping teams make data-backed decisions for future sprint planning.
Step 3: Leverage Typo’s Customizable Reports for Deeper Analysis
Typo enables engineering teams to go beyond JIRA’s native reporting by offering customizable reports. These reports allow teams to focus on specific metrics that matter most to them, creating targeted views that support sprint retrospectives and help track ongoing improvements.
Key benefits of Typo reports:
Customized metrics views: Typo’s reporting feature allows you to tailor reports by sprint, team member, or task type, enabling you to create a focused analysis that meets team objectives.
Sprint performance comparison: Easily compare current sprint performance with past sprints to understand progress trends and potential areas for optimization.
Collaborative insights: Typo’s centralized platform allows team members to add comments and insights directly into reports, facilitating discussion and shared understanding of sprint outcomes.
Step 4: Track Team Velocity with Typo’s Velocity Trend Analysis
Typo’s Velocity Trend Analysis provides a comprehensive view of team capacity and productivity over multiple sprints, allowing managers to set realistic goals and adjust plans according to past performance data.
How to use:
Access Typo’s Velocity Trend Analysis to view velocity averages and deviations over time, helping your team anticipate work capacity more accurately.
Use Typo’s charts to visualize and discuss the effects of any changes made to workflows or team processes, allowing for data-backed sprint planning.
Incorporate these insights into future sprint planning meetings to establish achievable targets and manage team workload effectively.
Step 5: Automate Alerts and Notifications for Key Metrics
Setting up automated alerts in JIRA and Typo helps teams stay on top of metrics without manual checking, ensuring that critical changes are visible in real-time.
How to set up:
Use JIRA’s automation rules to create alerts for specific metrics. For example, set a notification if a task’s Cycle Time exceeds a predefined threshold, signaling potential delays.
Enable notifications in Typo for sprint analysis updates, such as velocity changes or WIP limits being exceeded, to keep team members informed throughout the sprint.
Automate report generation in Typo, allowing your team to receive regular updates on sprint performance without needing to pull data manually.
Step 6: Host Collaborative Retrospectives with Typo
Typo’s integration makes retrospectives more effective by offering a shared space for reviewing metrics and discussing improvement opportunities as a team.
How to use:
Use Typo’s reports and sprint analysis as discussion points in retrospective meetings, focusing on completed vs. planned work, Cycle Time efficiency, and WIP trends.
Encourage team members to add insights or suggestions directly into Typo, fostering collaborative improvement and shared accountability.
Document key takeaways and actionable steps in Typo, ensuring continuous tracking and follow-through on improvement efforts in future sprints.
Scope creep—when a project’s scope expands beyond its original objectives—can disrupt timelines, strain resources, and lead to project overruns. Monitoring scope creep is essential for agile teams that need to stay on track without sacrificing quality.
In JIRA, tracking scope creep involves setting clear boundaries for task assignments, monitoring changes, and evaluating their impact on team workload and sprint goals.
How to Monitor Scope Creep in JIRA
Define scope boundaries: Start by clearly defining the scope of each project, sprint, or epic in JIRA, detailing the specific tasks and goals that align with project objectives. Make sure these definitions are accessible to all team members.
Use the issue history and custom fields: Track changes in task descriptions, deadlines, and priorities by utilizing JIRA’s issue history and custom fields. By setting up custom fields for scope-related tags or labels, teams can flag tasks or sub-tasks that deviate from the original project scope, making scope creep more visible.
Monitor workload adjustments with Typo: When scope changes are approved, Typo’s integration with JIRA can help assess their impact on the team’s workload. Use Typo’s reporting to analyze new tasks added mid-sprint or shifts in priorities, ensuring the team remains balanced and prepared for adjusted goals.
Sprint retrospectives for reflection: During sprint retrospectives, review any instances of scope creep and assess the reasons behind the adjustments. This allows the team to identify recurring patterns, evaluate the necessity of certain changes, and refine future project scoping processes.
By closely monitoring and managing scope creep, agile teams can keep their projects within boundaries, maintain productivity, and make adjustments only when they align with strategic objectives.
Building a Data-Driven Engineering Culture
Building a data-driven culture goes beyond tracking metrics; it’s about engaging the entire team in understanding and applying these insights to support shared goals. By fostering collaboration and using metrics as a foundation for continuous improvement, teams can align more effectively and adapt to challenges with agility.
Regularly revisiting and refining metrics ensures they stay relevant and actionable as team priorities evolve. To see how Typo can help you create a streamlined, data-driven approach, schedule a personalized demo today and unlock your team’s full potential.
Think of reading a book with multiple plot twists and branching storylines. While engaging, it can also be confusing and overwhelming when there are too many paths to follow. Just as a complex storyline can confuse readers, high cyclomatic complexity can make code hard to understand, maintain, and test, leading to bugs and errors.
In this blog, we will discuss why high cyclomatic complexity can be problematic and ways to reduce it.
What is Cyclomatic Complexity?
Cyclomatic complexity is a software metric developed by Thomas J. McCabe in 1976. It indicates the complexity of a program by counting its decision points.
A higher cyclomatic complexity score reflects more execution paths, and therefore more complexity. On the other hand, a low score signifies fewer paths and, hence, less complexity.
Cyclomatic Complexity is calculated using a control flow graph:
M = E - N + 2P
where:
M = cyclomatic complexity
E = edges (flow of control)
N = nodes (blocks of code)
P = number of connected components
Understanding Cyclomatic Complexity Through a Simple Example
Let's delve into the concept of cyclomatic complexity with an easy-to-grasp illustration.
Imagine a function structured as follows:
```javascript
function greetUser(name) {
  console.log(`Hello, ${name}!`);
}
```
In this case, the function is straightforward, containing a single line of code. Since there are no conditional paths, the cyclomatic complexity is 1—indicating a single, linear path of execution.
Now, let's add a twist:
```javascript
function greetUser(name, offerFarewell = false) {
  console.log(`Hello, ${name}!`);
  if (offerFarewell) {
    console.log(`Goodbye, ${name}!`);
  }
}
```
In this modified version, we've introduced a conditional statement. It presents us with two potential paths:
Path One: Greet the user without a farewell.
Path Two: Greet the user followed by a farewell if offerFarewell is true.
By adding this decision point, the cyclomatic complexity increases to 2. This means there are two unique ways the function might execute, depending on the value of the offerFarewell parameter.
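To connect this with the formula above: one way to draw the control flow graph for this version has three nodes (the greeting plus the check, the farewell block, and the exit) and three edges (check to farewell, farewell to exit, and check straight to exit), all in a single connected component. That gives M = E - N + 2P = 3 - 3 + 2(1) = 2, matching the two paths counted above. The exact node count depends on how you draw the graph, but the result is the same.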
Key Takeaway: Cyclomatic complexity helps in understanding how many independent paths there are through a function, aiding in assessing the possible scenarios a program can take during its execution. This is crucial for debugging and testing, ensuring each path is covered.
Why is High Cyclomatic Complexity Problematic?
Makes Code More Error-Prone
The more complex the code, the higher the chances of bugs. When there are many possible paths and conditions, developers may overlook certain conditions or edge cases during testing, and it becomes challenging to cover all of them, which leads to defects in the software.
Impact of Cyclomatic Complexity on Testing
Cyclomatic complexity plays a crucial role in determining how we approach testing. By calculating the cyclomatic complexity of a function, developers can ascertain the minimum number of test cases required to achieve full branch coverage. This metric is invaluable, as it predicts the difficulty of testing a particular piece of code.
Higher values of cyclomatic complexity necessitate a greater number of test cases to comprehensively cover a block of code, such as a function. This means that as complexity increases, so does the effort needed to ensure the code is thoroughly tested. For developers looking to streamline their testing process, reducing cyclomatic complexity can greatly ease this burden, making the code not only less error-prone but also more efficient to work with.
Leads to Cognitive Complexity
Cognitive complexity refers to the level of difficulty in understanding a piece of code.
Cyclomatic complexity is one of the factors that increases cognitive complexity. As complexity grows, it becomes overwhelming for developers to process the information effectively, which makes the overall logic of the code harder to understand.
Difficulty in Onboarding
Codebases with high cyclomatic complexity make onboarding difficult for new developers and team members. The learning curve becomes steeper, and they need more time and effort to understand the code and become productive. High complexity also invites misunderstandings: newcomers may misinterpret the logic or overlook critical paths.
Higher Risks of Defects
More complex code leads to more misunderstandings, which in turn results in more defects in the codebase. Complex code is also more prone to errors because it hinders adherence to coding standards and best practices.
Rise in Maintenance Efforts
In a complex codebase, the software development team may struggle to grasp the full impact of their changes, which introduces new errors and slows the process down. It also produces ripple effects: changes become difficult to isolate because one modification can affect multiple areas of the application.
To truly understand the health of a codebase, relying solely on cyclomatic complexity is insufficient. While cyclomatic complexity provides valuable insights into the intricacy and potential risk areas of your code, it's just one piece of a much larger puzzle.
Here's why multiple metrics matter:
Comprehensive Insight: Cyclomatic complexity measures code complexity but overlooks other aspects like code quality, readability, or test coverage. Incorporating metrics like code churn, test coverage, and technical debt can reveal hidden challenges and opportunities for improvement.
Balanced Perspective: Different metrics highlight different issues. For example, maintainability index offers a perspective on code readability and structure, whereas defect density focuses on the frequency of coding errors. By using a variety of metrics, teams can balance complexity with quality and performance considerations.
Improved Decision Making: When decisions hinge on a single metric, they may lead to misguided strategies. For instance, reducing cyclomatic complexity might inadvertently lower functionality or increase lines of code. A balanced suite of metrics ensures decisions support overall codebase health and project goals.
Holistic Evaluation: A codebase is impacted by numerous factors including performance, security, and maintainability. By assessing diverse metrics, teams gain a holistic view that can better guide optimization and resource allocation efforts.
In short, utilizing a diverse range of metrics provides a more accurate and actionable picture of codebase health, supporting sustainable development and more effective project management.
How to Reduce Cyclomatic Complexity?
Function Decomposition
Single Responsibility Principle (SRP): This principle states that each module or function should have a defined responsibility and one reason to change. If a function is responsible for multiple tasks, it can result in bloated and hard-to-maintain code.
Modularity: This means dividing large, complex functions into smaller, modular units so that each piece serves a focused purpose. It makes individual functions easier to understand, test, and modify without affecting other parts of the code.
Cohesion: Cohesion means keeping related code together within functions and modules. When related functions are grouped, the result is high cohesion, which helps readability and maintainability.
Coupling: Avoid excessive dependencies between modules. Low coupling reduces complexity and makes each module more self-contained, enabling changes without affecting other parts of the system.
Conditional Logic Simplification
Guard Clauses: Use guard clauses to exit a function as soon as a disqualifying condition is met. This avoids deep nesting and keeps the main logic of the function readable (see the sketch after this list).
Boolean Expressions: Apply De Morgan's laws to simplify Boolean expressions and reduce the complexity of conditions. For example, rewriting !(A && B) as !A || !B can sometimes make the code easier to understand.
Conditional Expressions: Consider using ternary operators or switch statements where appropriate. This condenses complex conditional branches into more concise expressions, which enhances readability and reduces code size.
Flag Variables: Avoid unnecessary flag variables that track control flow. Restructuring the logic to eliminate these flags leads to simpler, cleaner code.
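Here is the guard-clause sketch referenced above, in TypeScript with a made-up order-shipping example. The refactor does not remove decision points, but it flattens the nesting so each edge case exits on its own line:

```typescript
// Guard-clause refactor: the nested version buries the main logic;
// the flat version handles edge cases first and exits early.
interface Order { id: string; paid: boolean; items: string[]; }

// Before: nested conditionals obscure the core behavior.
function shipOrderNested(order: Order | null): string {
  if (order !== null) {
    if (order.paid) {
      if (order.items.length > 0) {
        return `shipping ${order.id}`;
      } else {
        return "nothing to ship";
      }
    } else {
      return "unpaid order";
    }
  } else {
    return "no order";
  }
}

// After: guard clauses keep one concern per line, main logic at the end.
function shipOrder(order: Order | null): string {
  if (order === null) return "no order";
  if (!order.paid) return "unpaid order";
  if (order.items.length === 0) return "nothing to ship";
  return `shipping ${order.id}`;
}

console.log(shipOrder({ id: "ord-7", paid: true, items: ["book"] }));
```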
Loop Optimization
Loop Unrolling: Expand the loop body to perform multiple operations in each iteration. This is useful for loops with a small number of iterations as it reduces loop overhead and improves performance.
Loop Fusion: When two loops iterate over the same data, you may be able to combine them into a single loop. This enhances performance by reducing the number of loop iterations and boosting data locality.
Loop Strength Reduction: Consider replacing costly operations in loops with less expensive ones, such as using addition instead of multiplication where possible. This will reduce the computational cost within the loop.
Loop Invariant Code Motion: Prevent redundant computation by moving calculations that do not change with each loop iteration outside of the loop.
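As a small TypeScript illustration of loop-invariant code motion (the discount lookup stands in for a costlier computation):

```typescript
// Loop-invariant code motion: the discount rate does not change per
// item, so the lookup can move outside the loop.
function discountRate(tier: string): number {
  return tier === "gold" ? 0.1 : 0; // stands in for an expensive lookup
}

function totalNaive(prices: number[], tier: string): number {
  let total = 0;
  for (const price of prices) {
    total += price * (1 - discountRate(tier)); // recomputed every iteration
  }
  return total;
}

function totalHoisted(prices: number[], tier: string): number {
  const rate = discountRate(tier); // computed once, outside the loop
  let total = 0;
  for (const price of prices) {
    total += price * (1 - rate);
  }
  return total;
}

console.log(totalNaive([10, 20], "gold"), totalHoisted([10, 20], "gold"));
```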
Code Refactoring
Extract Method: Move repetitive or complex code segments into separate functions. This simplifies the original function, reduces complexity, and makes code easier to reuse (a short sketch follows this list).
Introduce Explanatory Variables: Use intermediate variables to hold the results of complex expressions. This can make code more readable and allow others to understand its purpose without deciphering complex operations.
Replace Magic Numbers with Named Constants: Magic numbers are hard-coded numbers in code. Instead of directly using them, create symbolic constants for hard-coded values. It makes it easy to change the value at a later stage and improves the readability and maintainability of the code.
Simplify Complex Expressions: Break down long, complex expressions into smaller, more digestible parts to improve readability and reduce cognitive load on the reader.
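A brief TypeScript sketch of the extract-method and explanatory-variable ideas from this list, using a made-up eligibility rule:

```typescript
// Extract method plus explanatory variables: the rule gets a name of
// its own, and intermediate variables spell out each sub-condition.
interface User { age: number; verified: boolean; strikes: number; }

// Before (one dense inline conditional):
// if (user.age >= 18 && user.verified && user.strikes < 3) { ... }

function isEligible(user: User): boolean {
  const isAdult = user.age >= 18;
  const inGoodStanding = user.strikes < 3;
  return isAdult && user.verified && inGoodStanding;
}

const user: User = { age: 22, verified: true, strikes: 0 };
if (isEligible(user)) console.log("eligible");
```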
Design Patterns
Strategy Pattern: This pattern allows developers to encapsulate algorithms within separate classes. By delegating responsibilities to these classes, you can avoid complex conditional statements and reduce overall code complexity (see the sketch after this list).
State Pattern: When an object has multiple states, the State Pattern can represent each state as a separate class. This simplifies conditional code related to state transitions.
Observer Pattern: The Observer Pattern helps decouple components by allowing objects to communicate without direct dependencies. This reduces complexity by minimizing the interconnectedness of code components.
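As a minimal TypeScript sketch of the Strategy pattern (the pricing rules are invented for illustration), each strategy becomes an entry in a lookup, so adding a rule means adding an entry rather than another branch in a conditional:

```typescript
// Strategy pattern via a function lookup: no if/else chain over kinds.
type PricingStrategy = (base: number) => number;

const strategies: Record<string, PricingStrategy> = {
  regular: base => base,
  sale: base => base * 0.8,
  clearance: base => base * 0.5,
};

function price(base: number, kind: string): number {
  const strategy = strategies[kind] ?? strategies.regular;
  return strategy(base);
}

console.log(price(100, "sale")); // 80
```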
Code Analysis Tools
Static Code Analyzers: Static code analysis tools like Typo or SonarQube can automatically highlight areas of high complexity, unused code, or potential errors. This allows developers to identify and address complex code areas proactively.
Code Coverage Tools: Code coverage is a measure that indicates the percentage of a codebase exercised by automated tests. Tools like Typo measure code coverage and highlight untested areas, helping ensure that tests cover a significant portion of the code and surfacing untested parts where bugs may hide.
Other Ways to Reduce Cyclomatic Complexity
Identify and remove dead code to simplify the codebase and reduce maintenance efforts. This keeps the code clean, improves performance, and reduces potential confusion.
Consolidate duplicate code into reusable functions to reduce redundancy and improve consistency. This makes it easier to update logic in one place and avoid potential bugs from inconsistent changes.
Continuously improve code structure by refactoring regularly to enhance readability and maintainability and reduce technical debt. This ensures that the codebase evolves to stay efficient and adaptable to future needs.
Perform peer reviews to catch issues early, promote coding best practices, and maintain high code quality. Code reviews encourage knowledge sharing and help align the team on coding standards.
Write Comprehensive Unit Tests to ensure code functions correctly and supports easier refactoring in the future. They provide a safety net which makes it easier to identify issues when changes are made.
To further limit duplicated code and reduce cyclomatic complexity, consider these additional strategies:
Extract Common Code: Identify and extract common bits of code into their own dedicated methods or functions. This step streamlines your codebase and enhances maintainability.
Leverage Design Patterns: Utilize design patterns—such as the template pattern—that encourage code reuse and provide a structured approach to solving recurring design problems. This not only reduces duplication but also improves code readability.
Create Utility Packages: Extract generic utility functions into reusable packages, such as npm modules or NuGet packages. This practice allows code to be reused across the entire organization, promoting a consistent development standard and simplifying updates across multiple projects.
By implementing these strategies, you can effectively manage code complexity and maintain a cleaner, more efficient codebase.
Typo - An Automated Code Review Tool
Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.
Key Features:
Supports top 8 languages including C++ and C#.
Understands the context of the code and fixes issues accurately.
Optimizes code efficiently.
Provides automated debugging with detailed explanations.
Standardizes code and reduces the risk of security breaches.
The cyclomatic complexity metric is critical in software engineering. Reducing cyclomatic complexity improves code maintainability, readability, and simplicity. By implementing the strategies above, software engineering teams can reduce complexity and create a more streamlined codebase. Tools like Typo's automated code review also help by identifying complexity issues early and providing quick fixes, enhancing overall code quality.
Burndown charts are essential instruments for tracking the progress of agile teams. They are simple and effective ways to determine whether the team is on track or falling behind. However, there may be times when a burndown chart is not ideal for teams, as it may not capture a holistic view of the agile team’s progress.
In this blog, we have discussed the latter part in greater detail.
What is a Burndown Chart?
A burndown chart is a visual representation of a team's progress, used in agile project management. It helps scrum teams and agile project managers assess whether a project is on track.
Its primary objective is to accurately depict time allocations and support planning for future resources.
In agile and scrum environments, burndown charts are essential tools that offer more than just a snapshot of progress. Here’s how they are effectively used:
Create a Work Management Baseline: By establishing a baseline, teams can easily compare planned work versus actual work, allowing for a clear visual of progress.
Conduct Gap Analysis: Identify discrepancies between the planned timeline and current progress to adjust strategies promptly.
Inform Future Sprint Planning: Use information from the burndown chart to enhance the accuracy of future sprint planning meetings, ensuring better time and resource allocation.
Reallocate Resources: With real-time insights, teams can manage tasks more effectively and reallocate resources as needed to ensure sprints are completed on time.
Burndown charts not only provide transparency in tracking work but also empower agile teams to make informed decisions swiftly, ensuring project goals are met efficiently.
Understanding How a Burndown Chart Benefits Agile Teams
A burndown chart is an invaluable resource for agile project management teams, offering a clear snapshot of project progress and aiding in efficient workflow management. Here’s how it facilitates team success:
Progress Tracking: It visually showcases the amount of work completed versus what remains, allowing teams to quickly gauge their current status in the project lifecycle.
Time Management: By highlighting the time remaining, teams can better allocate resources and adjust priorities to meet deadlines, ensuring timely project delivery.
Task Overview: In addition to being a visual aid, it can function as a comprehensive list detailing tasks and their respective completion percentages, providing a clear outline of what still needs attention.
Transparency and Communication: Promoting open communication, the chart offers a shared view for all team members and stakeholders, leading to improved collaboration and more informed decision-making.
Overall, a burndown chart simplifies the complexities of agile project management, enhancing both team efficiency and project outcomes.
Components of Burndown Chart
Axes
There are two axes: x and y. The horizontal axis represents the time or iteration and the vertical axis displays user story points.
Ideal Work Remaining
It represents the work an agile team should have remaining at a specific point of the project or sprint under ideal conditions.
Actual Work Remaining
It is a realistic indication of a team's progress that is updated in real time. When this line is consistently below the ideal line, it indicates the team is ahead of schedule. When the line is above, it means they are falling behind.
Project/Sprint End
It indicates whether the team completed the project or sprint on time, behind schedule, or ahead of schedule.
Data Points
The data points on the actual work remaining line represent the amount of work left at specific intervals, typically daily updates.
Understanding a Burndown Chart
A burndown chart is a visual tool used to track the progress of work in a project or sprint. Here's how you can read it effectively:
Core Components
Axes Details:
X-Axis: Represents the timeline of the project or sprint, usually marked in days.
Y-Axis: Indicates the amount of work remaining, often measured in story points or task hours.
Key Features
Starting Point: Located at the far left, indicating day zero of the project or sprint.
Endpoint: Located at the far right, marking the final day of the project or sprint.
Lines to Note
Ideal Work Remaining Line:
A straight line connecting the start and end points.
Illustrates the planned project scope, estimating how work should progress smoothly.
At the end, it meets the x-axis, implying no pending work. Remember, this line is a projection and may not always match reality.
Actual Work Remaining Line:
This line tracks the real progress of work completed.
Starts aligned with the ideal line but deviates as actual progress is tracked daily.
Each daily update adds a new data point, creating a fluctuating line.
Interpreting the Chart
Behind Schedule: When the actual line stays above the ideal line, there's more work remaining than expected, indicating delays.
Ahead of Schedule: Conversely, if the actual line dips below the ideal line, it shows tasks are being completed faster than anticipated.
In summary, by regularly comparing the actual and ideal lines, you can assess whether your project is on track, falling behind, or advancing quicker than planned. This helps teams make informed decisions and adjustments to meet deadlines efficiently.
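The arithmetic behind this comparison is simple enough to sketch. In the TypeScript snippet below (all numbers invented), the ideal line falls linearly from the total scope to zero, and each day's actual remaining work is classified against it:

```typescript
// Burndown arithmetic: ideal line falls linearly; actual remaining
// work is compared against it to judge sprint health.
function idealRemaining(totalPoints: number, sprintDays: number, day: number): number {
  return totalPoints * (1 - day / sprintDays);
}

const totalPoints = 40;
const sprintDays = 10;
const actualRemaining = [40, 38, 36, 35, 30, 24]; // days 0..5

actualRemaining.forEach((actual, day) => {
  const ideal = idealRemaining(totalPoints, sprintDays, day);
  const status = actual > ideal ? "behind" : actual < ideal ? "ahead" : "on track";
  console.log(`day ${day}: ideal ${ideal.toFixed(1)}, actual ${actual} -> ${status}`);
});
```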
Types of Burndown Chart
There are two types of Burndown Chart:
Product Burndown Chart
This type of burndown chart focuses on the big picture and visualizes the entire project. It helps project managers and teams monitor the completion of work across multiple sprints and iterations.
Sprint Burndown Chart
A Sprint Burndown Chart tracks the remaining work within a single sprint, indicating progress towards completing the sprint backlog.
Advantages of Burndown Chart
Visualises Progress
The Burndown Chart captures how much work is completed and how much is left. It allows the agile team to compare actual progress with the ideal progress line to track whether they are ahead of or behind schedule.
Encourages Teams
The Burndown Chart motivates teams to align their progress with the ideal line. Hitting small daily milestones boosts morale and keeps motivation high throughout the sprint, reinforcing the sense of achievement when tasks are completed on time.
Informs Retrospectives
It helps in analyzing performance over a sprint during retrospectives. Agile teams can review past Burndown Charts to identify patterns, adjust future estimates, and refine processes for improved efficiency. They can also pinpoint periods where progress stalled, helping to uncover blockers that need to be addressed.
Shows a Direct Comparison
The Burndown Chart visualizes a direct comparison of planned work and actual progress. Teams can quickly assess whether they are on track to meet their goals and monitor recurring issues such as over-committing or underestimating tasks.
The Burndown Chart Can Be Misleading Too. Here’s Why
While the Burndown Chart comes with lots of pros, it can be misleading as well. It focuses solely on task completion without accounting for individual developer productivity, and it ignores aspects of agile software development such as code quality, team collaboration, and problem-solving.
The Burndown Chart also cannot explain why progress fluctuates due to factors such as team morale, external dependencies, or unexpected challenges. And because it says nothing about work quality, underlying issues can go unaddressed.
How Does the Accuracy of Time Estimates Affect a Burndown Chart?
The effectiveness of a burndown chart largely hinges on the precision of initial time estimates for tasks. These estimates shape the 'ideal work line,' a crucial component of the chart. When these estimates are accurate, they set a reliable benchmark against which actual progress is measured.
Impacts of Overestimation and Underestimation
Overestimating Time: If a team overestimates the duration required for tasks, the actual work line on the chart may show progress as being on track or even ahead of schedule. This can give a false sense of comfort and potentially lead to complacency.
Underestimating Time: Conversely, underestimating time can make it seem like the team is lagging, as the actual work line falls behind the ideal. This situation can create unnecessary stress and urgency.
Mitigating Estimation Challenges
To address these issues, teams can introduce an efficiency factor into their calculations. After completing an initial project cycle, recalibrating this factor helps refine future estimates for more accurate tracking. This adjustment can lead to more realistic expectations and better project management.
By continually adjusting and learning from previous estimates, teams can improve their forecasting accuracy, resulting in more reliable burndown charts.
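As an illustration, here is a small Python sketch of that recalibration. The efficiency factor is derived from one completed cycle, and the task estimates are made up; this is one simple way to apply the adjustment, not a prescribed formula:

```python
# Sketch of the efficiency-factor adjustment described above.

# After one project cycle, compare estimated effort with actual effort.
estimated_hours = 120
actual_hours = 150
efficiency = estimated_hours / actual_hours  # here: 0.8

# Recalibrate future estimates by dividing raw estimates by the factor.
raw_estimates = {"login page": 8, "payment API": 20, "reporting": 12}
adjusted = {task: hours / efficiency for task, hours in raw_estimates.items()}

for task, hours in adjusted.items():
    print(f"{task}: {hours:.1f}h (raw estimate: {raw_estimates[task]}h)")
```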
Other Limitations of Burndown Chart
Oversimplification of Complex Projects
While the Burndown Chart is a visual representation of Agile teams’ progress, it fails to capture the intricate layers and interdependencies within a project. It overlooks critical factors that influence project outcomes, which may lead to misinformed decisions and unrealistic expectations.
Ignores Scope Changes
Scope creep refers to modifications in project requirements, such as adding new features or altering existing tasks. The Burndown Chart doesn’t account for these changes; instead, it shows a flat line or even an apparent decline in progress, which can suggest the team is underperforming when that isn’t actually the case. This leads to misinterpretation of the team’s progress and overall project health.
Gives Equal Weight to all the Tasks
The Burndown Chart doesn’t differentiate between easy and difficult tasks. It treats every task the same, regardless of size, complexity, effort required, or priority, obscuring insight into what truly matters for the project’s success.
Neglects Team Dynamics
The Burndown Chart treats all team members as interchangeable. It doesn’t take individual contributions or personal circumstances into consideration, and it neglects how well members work with each other, share knowledge, or support each other in completing tasks.
How Can Project Managers Deliver Projects On Time and Within Budget?
To ensure projects are delivered on time and within budget, project managers need to leverage a combination of effective planning, monitoring, and communication tools. Here’s how:
1. Utilize Advanced Project Management Tools
Integrating digital tools can significantly enhance project monitoring. For example, platforms like Microsoft Project or Trello offer real-time dashboards that enable managers to track progress and allocate resources efficiently. These tools often feature interactive Gantt charts, which streamline scheduling and enhance team collaboration.
2. Implement Burndown Charts
Burndown charts are invaluable for visualizing work remaining versus time. By regularly updating these charts, managers can quickly spot potential delays and bottlenecks, allowing them to adjust plans proactively.
3. Conduct Regular Meetings and Updates
Scheduled meetings provide consistent check-in times to address issues, realign goals, and ensure everyone is on the same page. This fosters transparency and keeps the team aligned with project objectives, minimizing miscommunications and errors.
4. Foster Effective Communication Channels
Utilizing platforms like Slack or Microsoft Teams ensures quick and efficient communication among team members. A clear communication strategy minimizes misunderstandings and accelerates decision-making, keeping projects on track.
5. Prioritize Risk Management
Anticipating potential risks and having contingency plans in place is crucial. Regular risk assessments can identify potential obstacles early, offering time to devise strategies to mitigate them.
By combining these approaches, project managers can increase the likelihood of delivering projects on time and within budget, ensuring project success and stakeholder satisfaction.
What are the Alternatives to Burndown Chart?
To enhance sprint management, it's crucial to utilize a variety of tools and reports. While burndown charts are fundamental, other tools can offer complementary insights and improve project efficiency.
Gantt Charts
Gantt Charts are ideal for complex projects. They are a visual representation of a project schedule using horizontal bars along a timeline. They provide a clear timeline for each task, indicating when it starts and ends, as well as showing overlapping tasks and dependencies between them. This comprehensive view helps teams manage long-term projects alongside sprint-focused tools like burndown charts.
Cumulative Flow Diagram
CFD visualizes how work moves through different stages. It offers insight into workflow status and identifies trends and bottlenecks. It also helps in measuring key metrics such as cycle time and throughput. By providing a broader perspective of workflow efficiency, CFDs complement burndown charts by pinpointing areas for process improvement.
Kanban Boards
Kanban Boards are an agile management tool best suited for ongoing work. They help teams visualize work, limit work in progress, and manage workflows, and they can easily accommodate changes in project scope without adjusting timelines. By visualizing workflows and prioritizing tasks, Kanban boards ensure teams know what to work on and when, complementing the detailed task tracking that burndown charts provide.
Burnup Chart
A Burnup Chart plots two lines against a vertical axis: the work completed so far and the total scope of the project, providing a clearer picture of project completion.
While both burnup and burndown charts serve the purpose of tracking progress in agile project management, they do so in distinct ways.
Similar Components, Different Actions:
Both charts utilize a vertical axis to represent user stories or work units.
The burndown chart measures the remaining work by removing items as tasks are completed.
In contrast, the burnup chart reflects progress by adding completed work to the vertical axis.
This duality in approach allows teams to choose the chart that best suits their need for visualizing project trajectory. The burnup chart, by displaying both completed work and total project scope, provides a comprehensive view of how close a team is to reaching project goals.
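A short Python sketch, using invented daily records, shows how both views derive from the same data, and why the burnup view also surfaces mid-sprint scope growth that a burndown line hides:

```python
# Sketch: deriving burndown and burnup series from the same daily records.
# Scope grows on day 2, which only the burnup view makes visible.

completed = [0, 4, 9, 13, 18, 24, 30]    # cumulative points completed per day
scope     = [30, 30, 34, 34, 34, 34, 34] # total scope per day (grew on day 2)

burndown = [s - c for s, c in zip(scope, completed)]  # remaining work
burnup = completed                                    # completed work

for day, (down, up, total) in enumerate(zip(burndown, burnup, scope)):
    print(f"Day {day}: remaining={down:2d}, completed={up:2d}, scope={total}")
```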
Developer Intelligence Platforms
DI platforms like Typo focus on how smooth and satisfying a developer experience is. They streamline the development process and offer a holistic view of team productivity, code quality, and developer satisfaction. These platforms provide real-time insights into various metrics that reflect the team’s overall health and efficiency beyond task completion alone. By capturing a wide array of performance indicators, they supplement burndown charts with deeper insights into team dynamics and project health.
Incorporating these tools alongside burndown charts can provide a more rounded picture of project progress, enhancing both day-to-day management and long-term strategic planning.
What Role do Real-Time Dashboards & Kanban Boards Play in Project Management?
In the dynamic world of project management, real-time dashboards and Kanban boards play crucial roles in ensuring that teams remain efficient and informed.
Real-Time Dashboards: The Pulse of Your Project
Real-time dashboards act as the heartbeat of project management. They provide a comprehensive, up-to-the-minute overview of ongoing tasks and milestones. This feature allows project teams to:
View updates instantaneously, thus enabling swift decision-making based on the most current data.
Track metrics such as task completion rates, resource allocation, and deadline adherence effortlessly.
Eliminate the delays associated with outdated information, ensuring that every team action is grounded in the present context.
Essentially, real-time dashboards empower teams with the data they need right when they need it, facilitating proactive management and quick responses to any project deviations.
Kanban Boards: Visualization and Prioritization
Kanban boards are pivotal for visualizing workflows and managing tasks efficiently. They:
Offer a clear visual representation of project stages, providing transparency across all levels of a team.
Help in organizing product backlogs and streamlining sprints by categorizing tasks into columns like "To Do," "In Progress," and "Done."
Enable scrum teams to prioritize tasks systematically, ensuring everyone knows what to focus on next.
By making workflows visible and manageable, Kanban boards foster better collaboration and continuous process improvement. They become a valuable archive for reviewing past sprints, helping teams identify successes and areas for enhancement.
In conclusion, both real-time dashboards and Kanban boards are integral to effective project management. They ensure that teams are always aligned with objectives, enhancing transparency and facilitating a smooth, agile workflow.
Typo - An Effective Sprint Analysis Tool
One such platform is Typo, which goes beyond the traditional metrics. Its sprint analysis is an essential tool for any team using an agile development methodology. It allows agile teams to monitor and assess progress across the sprint timeline, providing visual insights into completed work, ongoing tasks, and remaining time. This visual representation allows teams to spot potential issues early and make timely adjustments.
Our sprint analysis feature leverages data from Git and issue management tools to focus on team workflows. Teams can track task durations, identify frequent blockers, and pinpoint bottlenecks.
With easy integration into existing Git and Jira/Linear/Clickup workflows, Typo offers:
A Velocity Chart that shows completed work in past sprints
A Sprint Backlog that displays all tasks slated for completion within the sprint
Status tracking for each sprint issue
Task duration measurement
Delay highlighting that identifies task blockers and their causes
Historical Data Analysis that compares sprint performance over time
Together, these help agile teams stay on track, optimize processes, and deliver quality results efficiently.
While the burndown chart is a valuable tool for visualizing task completion and tracking progress, it often overlooks critical aspects like team morale, collaboration, code quality, and factors impacting developer productivity. There are several alternatives to the burndown chart, with Typo’s sprint analysis tool standing out as a powerful option. Through this, agile teams gain a more comprehensive view of progress, fostering resilience, motivation, and peak performance.
Understanding the Human Side of DevOps: Aligning Goals Across Teams
One of the biggest hurdles in a DevOps transformation is not the technical implementation of tools but aligning the human side—culture, collaboration, and incentives. As a leader, it’s essential to recognize that different, sometimes conflicting, objectives drive both Software Engineering and Operations teams.
Engineering often views success as delivering features quickly, whereas Operations focuses on minimizing downtime and maintaining stability. These differing incentives naturally create friction, resulting in delayed deployment cycles, subpar product quality, and even a toxic work environment.
The key to solving this? Cross-functional team alignment.
Before implementing DORA metrics, you need to ensure both teams share a unified vision: delivering high-quality software at speed, with a shared understanding of responsibility. This requires fostering an environment of continuous communication and trust, where both teams collaborate to achieve overarching business goals, not just individual metrics.
Why DORA Metrics Outshine Traditional Metrics
Traditional performance metrics, often focused on specific teams (like uptime for Operations or feature count for Engineering), incentivize siloed thinking and can lead to metric manipulation. Operations might delay deployments to maintain uptime, while Engineering rushes features without considering quality.
DORA metrics, however, provide a balanced framework that encourages cooperative success. For example, by focusing on Change Failure Rate and Deployment Frequency, you create a feedback loop where neither team can game the system. High deployment frequency is only valuable if it’s accompanied by low failure rates, ensuring that the product's quality improves alongside speed.
In contrast to traditional metrics, DORA's approach emphasizes continuous improvement across the entire delivery pipeline, leading to better collaboration between teams and improved outcomes for the business. The holistic nature of these metrics also forces leaders to look at the entire value stream, making it easier to identify bottlenecks or systemic issues early on.
Leveraging DORA Metrics for Long-Term Innovation
While the initial focus during your DevOps transformation should be on Deployment Frequency and Change Failure Rate, it’s important to recognize the long-term benefits of adding Lead Time for Changes and Time to Restore Service to your evaluation. Once your teams have achieved a healthy rhythm of frequent, reliable deployments, you can start optimizing for faster recovery and shorter change times.
A mature DevOps organization that excels in these areas positions itself to innovate rapidly. By decreasing lead times and recovery times, your team can respond faster to market changes, giving you a competitive edge in industries that demand agility. Over time, these metrics will also reduce technical debt, enabling faster, more reliable development cycles and an enhanced customer experience.
Building a Culture of Accountability with Metrics Pairing
One overlooked aspect of DORA metrics is their ability to promote accountability across teams. By pairing Deployment Frequency with Change Failure Rate, for example, you prevent one team from achieving its goals at the expense of the other. Similarly, pairing Lead Time for Changes with Time to Restore Service encourages teams to both move quickly and fix issues effectively when things go wrong.
This pairing strategy fosters a culture of accountability, where each team is responsible not just for hitting its own goals but also for contributing to the success of the entire delivery pipeline. This mindset shift is crucial for the success of any DevOps transformation. It encourages teams to think beyond their silos and work together toward shared outcomes, resulting in better software and a more collaborative work environment.
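As a rough illustration of this pairing logic, the Python sketch below evaluates speed and stability together rather than in isolation. The thresholds are illustrative examples, not prescribed values:

```python
# Sketch: judging Deployment Frequency and Change Failure Rate as a pair.

def paired_health(deploys_per_week: float, change_failure_rate: float) -> str:
    """Evaluate speed and stability together; neither alone tells the story."""
    fast = deploys_per_week >= 3          # example speed threshold
    stable = change_failure_rate <= 0.15  # within the commonly cited 0-15% band

    if fast and stable:
        return "healthy: shipping quickly without sacrificing quality"
    if fast:
        return "at risk: speed is being bought with failures"
    if stable:
        return "stable but slow: room to increase release cadence"
    return "needs attention on both speed and stability"

print(paired_health(deploys_per_week=5, change_failure_rate=0.08))
print(paired_health(deploys_per_week=6, change_failure_rate=0.25))
```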
Early Wins and Psychological Momentum: The Power of Small Gains
DevOps transformations can be daunting, especially for teams that are already overwhelmed by high workloads and a fast-paced development environment. One strategic benefit of starting with just two metrics—Deployment Frequency and Change Failure Rate—is the opportunity to achieve quick wins.
Quick wins, such as reducing deployment time or lowering failure rates, have a significant psychological impact on teams. By showing progress early in the transformation, you can generate excitement and buy-in across the organization. These wins build momentum, making teams more eager to tackle the larger, more complex challenges that lie ahead in the DevOps journey.
As these small victories accumulate, the organizational culture shifts toward one of continuous improvement, where teams feel empowered to take ownership of their roles in the transformation. This incremental approach reduces resistance to change and ensures that even larger-scale initiatives, such as optimizing Lead Time for Changes and Time to Restore Service, feel achievable and less stressful for teams.
The Role of Leadership in DevOps Success
Leadership plays a critical role in ensuring that DORA metrics are not just implemented but fully integrated into the company’s DevOps practices. To achieve true transformation, leaders must:
Set the right expectations: Make it clear that the goal of using DORA metrics is not just to “move the needle” but to deliver better software faster. Explain how the metrics contribute to business outcomes.
Foster a culture of psychological safety: Encourage teams to see failures as learning opportunities. This cultural shift helps improve the Change Failure Rate without resorting to blame or fear.
Lead by example: Show that leadership is equally committed to the DevOps transformation by adopting new tools, improving communication, and advocating for cross-functional collaboration.
Provide the right tools and resources: For DORA metrics to be effective, teams need the right tools to measure and act on them. Leaders must ensure their teams have access to automated pipelines, robust monitoring tools, and the support needed to interpret and respond to the data.
Typo: Accelerating Your DevOps Transformation with Streamlined Documentation
In your DevOps journey, the right tools can make all the difference. One often overlooked aspect of DevOps success is the need for effective, transparent documentation that evolves as your systems change. Typo, a dynamic documentation tool, plays a critical role in supporting your transformation by ensuring that everyone—from engineers to operations teams—can easily access, update, and collaborate on essential documents.
Typo helps you:
Maintain up-to-date documentation that adapts with every deployment, ensuring that your team never has to work with outdated information.
Reduce confusion during deployments by providing clear, accessible, and centralized documentation for processes and changes.
Improve collaboration between teams, as Typo makes it easy to contribute and maintain critical project information, supporting transparency and alignment across your DevOps efforts.
With Typo, you streamline not only the technical but also the operational aspects of your DevOps transformation, making it easier to implement and act on DORA metrics while fostering a culture of shared responsibility.
Starting a DevOps transformation can feel overwhelming, but with the focus on DORA metrics—especially Deployment Frequency and Change Failure Rate—you can begin making meaningful improvements right away. Your organization can smoothly transition into a high-performing, innovative powerhouse by fostering a collaborative culture, aligning team goals, and leveraging tools like Typo for documentation.
The key is starting with what matters most: getting your teams aligned on quality and speed, measuring the right things, and celebrating the small wins along the way. From there, your DevOps transformation will gain the momentum needed to drive long-term success.
Measuring Project Success with DevOps Metrics
October 4, 2024
•
11 min read
Are you feeling unsure if your team is making real progress, even though you’re following DevOps practices? Maybe you’ve implemented tools and automation but still struggle to identify what’s working and what’s holding your projects back. You’re not alone. Many teams face similar frustrations when they can’t measure their success effectively.
But here’s the truth: without clear metrics, it’s nearly impossible to know if your DevOps processes are driving the results you need. Tracking the right DevOps metrics can make all the difference, offering insights that help you streamline workflows, fix bottlenecks, and make data-driven decisions.
In this blog, we’ll dive into the essential DevOps metrics that empower teams to confidently measure success. Whether you’re just getting started or looking to refine your approach, these metrics will give you the clarity you need to drive continuous improvement. Ready to take control of your project’s success? Let’s get started.
What Are DevOps Metrics?
DevOps metrics are statistics and data points that reflect the performance of a team's DevOps processes. They measure process efficiency and reveal areas of friction between the phases of the software delivery pipeline.
These metrics are essential for tracking progress toward achieving overarching goals set by the team. The primary purpose of DevOps metrics is to provide insight into technical capabilities, team processes, and overall organizational culture.
By quantifying performance, teams can identify bottlenecks, assess quality improvements, and measure application performance gains. Ultimately, if you don’t measure it, you can’t improve it.
Key Categories of DevOps Metrics
DevOps metrics fall into these primary categories:
Software Delivery Metrics: Measure the speed and efficiency of software delivery.
Stability Metrics: Assess the reliability and quality of software in production.
Operational Performance Metrics: Evaluate system performance under load.
Security Metrics: Monitor vulnerabilities and compliance within the software development lifecycle.
Cost Efficiency Metrics: Analyze resource utilization and cost-effectiveness in DevOps practices.
Understanding these categories helps organizations select relevant metrics tailored to their specific challenges.
Why Metrics Matter: Driving Measurable Success with DevOps
DevOps is often associated with automation and speed, but at its core, it is about achieving measurable success. Many teams struggle with measuring their success due to inconsistent performance or unclear goals. It's understandable to feel lost when confronted with vast amounts of data and competing priorities.
However, the right metrics can simplify this process.
They help clarify what success looks like for your team and provide a framework for continuous improvement. Remember, you don't have to tackle everything at once; focusing on a few key metrics can lead to significant progress.
Key DevOps Metrics to Track for Success
To effectively measure your project's success, consider tracking the following essential DevOps metrics:
Deployment Frequency
This metric tracks how often your team releases new code. A higher frequency indicates a more agile development process. Deployment frequency is measured by dividing the number of deployments made during a given period by the total number of weeks/days. One deployment per week is standard, but it also depends on the type of product.
For example, a team working on a mission-critical financial application may aim for daily deployments to fix bugs and ensure system stability quickly. In contrast, a team developing a mobile game might release updates weekly to coincide with the app store's review process.
Lead Time for Changes
Measure how quickly changes move from development to production. Shorter lead times suggest a more efficient workflow. Lead time for changes is the length of time between when a code change is committed to the trunk branch and when it is in a deployable state, such as when code passes all necessary pre-release tests.
Consider a scenario where a developer submits a bug fix to the main codebase. The change is automatically tested, approved, and deployed to production within an hour. This rapid turnaround allows the team to quickly address customer issues and maintain a high level of service.
Change Failure Rate
This assesses the percentage of changes that cause issues requiring a rollback. Lower rates indicate better quality control. The change failure rate is the percentage of code changes that require hot fixes or other remediation after production, excluding failures caught by testing and fixed before deployment.
Imagine a team that deploys 100 changes per month, with 10 of those changes requiring a rollback due to production issues. Their change failure rate would be 10%. By tracking this metric over time and implementing practices like thorough testing and canary deployments, they can work to reduce the failure rate and improve overall stability.
Mean Time to Recovery (MTTR)
Evaluate how quickly your team can recover from failures. A shorter recovery time reflects resilience and effective incident management. MTTR measures how long it takes to recover from a partial service interruption or total failure, regardless of whether the interruption is the result of a recent deployment or an isolated system failure.
In a scenario where a production server crashes due to a hardware failure, the team's MTTR is the time it takes to restore service. If they can bring the server back online and restore functionality within 30 minutes, that's a strong MTTR. Tracking this metric helps teams identify areas for improvement in their incident response processes and infrastructure resilience.
These metrics are not about achieving perfection; they are tools designed to help you focus on continuous improvement. High-performing teams typically measure lead times in hours, have change failure rates in the 0-15 percent range, can deploy changes on demand, and often do so many times a day.
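For illustration, here is a minimal Python sketch, with fabricated timestamps, that computes lead time for changes and MTTR directly from commit/deploy and incident events:

```python
from datetime import datetime, timedelta

# Sketch: computing Lead Time for Changes and MTTR from event timestamps.
# The records below are fabricated examples.

changes = [
    # (commit time, deployed-to-production time)
    (datetime(2024, 9, 2, 10, 0), datetime(2024, 9, 2, 14, 30)),
    (datetime(2024, 9, 3, 9, 15), datetime(2024, 9, 4, 11, 0)),
]
incidents = [
    # (interruption start, service restored)
    (datetime(2024, 9, 5, 8, 0), datetime(2024, 9, 5, 8, 30)),
]

lead_times = [deployed - committed for committed, deployed in changes]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

recoveries = [restored - start for start, restored in incidents]
mttr = sum(recoveries, timedelta()) / len(recoveries)

print(f"Average lead time for changes: {avg_lead_time}")
print(f"Mean time to recovery: {mttr}")
```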
Common Challenges When Measuring DevOps Success
While measuring success is essential, it's important to acknowledge the emotional and practical hurdles that come with it:
Resistance to change
People often resist change, especially when it disrupts established routines or processes. Overcoming this resistance is crucial for fostering a culture of improvement.
For example, a team that has been manually deploying code for years may be hesitant to adopt an automated deployment pipeline. Addressing their concerns, providing training, and demonstrating the benefits can help ease the transition.
Lack of time
Teams frequently find themselves caught up in day-to-day demands, leaving little time for proactive improvement efforts. This can create a cycle where urgent tasks overshadow long-term goals.
A development team working on a tight deadline may struggle to find time to optimize their deployment process or write automated tests. Prioritizing these activities as part of the sprint planning process can help ensure they are not overlooked.
Complacency
Organizations may become complacent when things seem to be functioning adequately, preventing them from seeking further improvements. The danger lies in assuming that "good enough" will suffice without striving for excellence.
A team that has achieved a 95% test coverage rate may be tempted to focus on other priorities, even though further improvements could catch additional bugs and reduce technical debt. Regularly reviewing metrics and setting stretch goals can help avoid complacency.
Data overload
With numerous metrics available, teams might struggle to determine which ones are most relevant to their goals. This can lead to confusion and frustration rather than clarity.
A large organization with dozens of teams and applications may find itself drowning in DevOps metrics data. Focusing on a core set of key metrics that align with overall business objectives and tailoring dashboards for each team's specific needs can help manage this challenge.
Measuring success
Determining what success looks like and how to measure it in a continuous improvement culture can be challenging. Setting clear goals and KPIs is essential but often overlooked.
A team may struggle to define what "success" means for their project. Collaborating with stakeholders to establish measurable goals, such as reducing customer support tickets by 20% or increasing revenue by 5%, can provide a clear target to work towards.
If you're facing these challenges, remember that you are not alone. Start by identifying the most actionable metrics that resonate with your current goals. Focusing on a few key areas can make the process feel more manageable and less daunting.
How to Use DevOps Metrics for Continuous Improvement
Once you've identified the key metrics to track, it's time to leverage them for continuous improvement:
Establish baselines: Begin by establishing baseline measurements for each metric you plan to track. This will give you a reference point against which you can measure progress over time.
For example, if your current deployment frequency is once every two weeks, establish that as your baseline before setting a goal to deploy weekly within three months.
Set clear objectives: Building on those baselines, define specific, time-bound objectives for each metric, such as moving from biweekly deployments to weekly deployments within three months.
Implement feedback loops: Create mechanisms for gathering feedback from team members about processes and tools regularly used in development cycles. This could be through retrospectives or dedicated feedback sessions focusing on specific metrics.
After each deployment, hold a brief retrospective to discuss what went well, what could be improved, and any insights gained from the deployment metrics. Use this feedback to refine processes and inform future improvements.
Analyze trends: Regularly analyze trends in your metrics data rather than just looking at snapshots in time. For example, if you notice an increase in change failure rate over several weeks, investigate potential causes such as code complexity or inadequate testing practices.
Use tools like Typo to visualize trends in your DevOps metrics over time. Look for patterns and correlations that can help identify areas for improvement. For instance, if you notice that deployments with more than 50 commits tend to have higher failure rates, consider breaking changes into smaller batches (a small sketch of this analysis appears at the end of this section).
Encourage experimentation: Foster an environment where team members feel comfortable experimenting with new processes or tools based on insights gained from metrics analysis. Encourage them to share their findings with others in the organization.
If a developer discovers a new testing framework that significantly reduces the time required to validate changes, support them in implementing it and sharing their experience with the broader team. Celebrating successful experiments helps reinforce a culture of continuous improvement.
Celebrate improvements: Recognize and celebrate improvements achieved through data-driven decision-making efforts—whether it's reducing MTTR or increasing deployment frequency—this reinforces positive behavior within teams.
When a team hits a key milestone, such as deploying 100 changes without a single failure, take time to acknowledge their achievement. Sharing success stories helps motivate teams and demonstrates the value of DevOps metrics.
Iterate regularly: Continuous improvement is not a one-time effort; it requires ongoing iteration based on what works best for your team's unique context and challenges encountered along the way.
As your team matures in its DevOps practices, regularly review and adjust your metrics strategy. What worked well in the early stages may need to evolve as your organization scales or faces new challenges. Remain flexible and open to experimenting with different approaches.
By following these steps consistently over time, you'll create an environment where continuous improvement becomes ingrained within your team's culture—ultimately leading toward greater efficiency and higher-quality outputs across all projects.
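As a concrete example of the batch-size analysis mentioned under "Analyze trends" above, here is a small Python sketch over invented deployment records:

```python
# Sketch: grouping deployments by commit count and comparing failure rates.
# Deployment records are invented; the 50-commit cutoff is the example above.

deployments = [
    {"commits": 12, "failed": False},
    {"commits": 8,  "failed": False},
    {"commits": 65, "failed": True},
    {"commits": 70, "failed": True},
    {"commits": 20, "failed": False},
    {"commits": 55, "failed": False},
]

def failure_rate(batch):
    return sum(d["failed"] for d in batch) / len(batch)

small = [d for d in deployments if d["commits"] <= 50]
large = [d for d in deployments if d["commits"] > 50]

print(f"Failure rate, <=50 commits: {failure_rate(small):.0%}")
print(f"Failure rate, >50 commits:  {failure_rate(large):.0%}")
```

If the large batches fail noticeably more often, that is a signal to break changes into smaller deployments.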
Overcoming Obstacles with Typo: A Powerful DevOps Metrics Tracking Solution
One tool that can significantly ease the process of tracking DevOps metrics is Typo—a user-friendly platform designed specifically for streamlining metric collection while integrating seamlessly into existing workflows:
Key Features of Typo
Intuitive interface: Typo's user-friendly interface allows teams to easily monitor critical metrics such as deployment frequency and lead time for changes without extensive training or onboarding.
For example, the Typo dashboard provides a clear view of key metrics like deployment frequency over time so teams can quickly see if they are meeting their goals or if adjustments are needed.
Automated data collection
By integrating with popular CI/CD tools like Jenkins or GitLab CI/CD pipelines, Typo automates data collection and eliminates the manual reporting burden placed on developers, freeing them to focus on delivering value rather than managing spreadsheets.
Typo automatically gathers deployment data from your CI/CD tools, saving developers time and reducing the risk of human error associated with manual data entry, so they can concentrate on improving results through informed decision-making.
Real-time performance dashboards
Typo provides real-time performance dashboards that visualize key metrics at a glance, enabling quick decision-making based on current performance trends rather than relying solely on historical data points.
The Typo dashboard updates in real time as new deployments occur, giving teams an immediate view of their current performance against goals. This allows them to quickly identify and address any issues arising.
Customizable alerts & notifications
With customizable alerts set up around specific thresholds (e.g., if the change failure rate exceeds 10%), teams receive timely notifications that prompt them to take action before issues escalate in production.
Typo allows teams to set custom alerts based on specific goals and thresholds—for example, receiving notification if the change failure rate rises above 5% over three consecutive deployments, helping catch potential issues early before they cause major problems.
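The underlying alert logic is simple to express. The sketch below illustrates it in plain Python; it is not Typo's actual alerting API, and the window and threshold are the example values from above:

```python
# Sketch: flag when the failure rate over the last N deployments exceeds
# a threshold (e.g., above 5% across three consecutive deployments).

def should_alert(outcomes, window=3, threshold=0.05):
    """outcomes: list of booleans, True if that deployment failed."""
    if len(outcomes) < window:
        return False  # not enough history yet
    recent = outcomes[-window:]
    return sum(recent) / window > threshold

history = [False, False, True, False, True]  # last three: failed, ok, failed
print(should_alert(history))  # True: 2 of the last 3 deployments failed
```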
Integration capabilities
Typo effortlessly integrates with various project management tools (like Jira) alongside monitoring solutions (such as Datadog), providing comprehensive insights into both development processes and operational performance simultaneously.
Typo empowers organizations by simplifying metric tracking without overwhelming users, letting teams concentrate on improving results through informed decisions based on actionable insights derived from their own data.
Embracing the DevOps Metrics Journey
As we conclude this discussion of measuring project success, effective DevOps metrics serve as invaluable tools for driving continuous improvement and enhancing collaboration among stakeholders at every stage, from development through deployment to final delivery. By focusing on key indicators like deployment frequency, lead time for changes, change failure rate, and mean time to recovery, you'll gain deeper insight into bottlenecks and optimize workflows accordingly.
Challenges may arise along the journey towards excellence in software delivery, but tools like Typo, combined with a supportive organizational culture, will help you navigate these obstacles and unlock the full potential of every team member.
So take those first steps today!
Start tracking relevant metrics now and watch improvements unfold, transforming not only how projects are executed but also the overall quality of every product you release.
In the ever-changing world of software development, tracking progress and gaining insights into your projects is crucial. While GitHub Analytics provides developers and teams with valuable data-driven intelligence, relying solely on GitHub data may not provide the full picture needed for making informed decisions. By integrating GitHub Analytics with JIRA, engineering teams can gain a more comprehensive view of their development workflows, enabling them to take more meaningful actions.
Why GitHub Analytics Alone is Insufficient
GitHub Analytics offers valuable insights into:
Repository Activity: Tracking commits, pull requests and contributor activity within repositories.
Collaboration Effectiveness: Evaluating how effectively teams collaborate on code reviews and issue resolution.
Workflow Identification: Identifying potential bottlenecks and inefficiencies within the development process.
However, GitHub Analytics primarily focuses on repository activity and code contributions. It lacks visibility into broader project management aspects such as sprint progress, backlog prioritization, and cross-team dependencies. This limited perspective can hinder a team's ability to understand the complete picture of their development workflow and make informed decisions.
The Power of GitHub & JIRA Integration
JIRA is a widely used platform for issue tracking, sprint planning, and agile project management. When combined with GitHub Analytics, it creates a powerful ecosystem that:
Connects Code Changes with Project Tasks and Business Objectives: By linking GitHub commits and pull requests to specific JIRA issues (like user stories, bugs, and epics), teams can understand how their code changes contribute to overall project goals.
Real-World Example: A developer fixes a bug in a specific feature. By linking the GitHub pull request to the corresponding JIRA bug ticket, the team can track the resolution of the issue and its impact on the overall product.
Provides Deeper Insights into Development Velocity, Bottlenecks, and Blockers: Analyzing data from both GitHub and JIRA allows teams to identify bottlenecks in the development process that might not be apparent when looking at GitHub data alone.
Real-World Example: If a team observes a sudden drop in commit frequency, they can investigate JIRA issues to determine if it's caused by unresolved dependencies, unclear requirements, or other blockers.
Enhances Collaboration Between Engineering and Product Management Teams: By providing a shared view of project progress, GitHub and JIRA integration fosters better communication and collaboration between engineering and product management teams.
Real-World Example: Product managers can gain insights into the engineering team's progress on specific features by tracking the progress of related JIRA issues and linked GitHub pull requests.
Ensures Traceability from Feature Requests to Code Deployments: By linking JIRA issues to GitHub pull requests and ultimately to production deployments, teams can establish clear traceability from initial feature requests to their implementation and release.
Real-World Example: A team can track the journey of a feature from its initial conception in JIRA to its final deployment to production by analyzing the linked GitHub commits, pull requests, and deployment information.
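Mechanically, much of this linking rests on the JIRA issue key convention (e.g., PROJ-123) embedded in commit messages and pull request titles. The Python sketch below, using made-up commit messages, extracts those keys:

```python
import re

# Sketch: extracting JIRA issue keys from commit messages so GitHub
# activity can be matched to JIRA tickets. Messages are invented.

JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

commits = [
    "PROJ-101: fix checkout rounding bug",
    "Refactor session cache (PROJ-87)",
    "Bump dependencies",  # no linked issue
]

for message in commits:
    keys = JIRA_KEY.findall(message)
    print(f"{message!r} -> linked issues: {keys or 'none'}")
```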
More Examples of How JIRA + GitHub Analytics Brings More Insights
Tracking Work from Planning to Deployment:
Without JIRA: GitHub Analytics shows PR activity and commit frequency but doesn't provide context on whether work is aligned with business goals.
With JIRA: Teams can link commits and PRs to specific JIRA tickets, tracking the progress of user stories and epics from the backlog to release, ensuring that development efforts are aligned with business priorities.
Identifying Bottlenecks in the Development Process:
Without JIRA: GitHub Analytics highlights cycle time, but it doesn't explain why a delay is happening.
With JIRA: Teams can analyze blockers within JIRA issues—whether due to unresolved dependencies, pending stakeholder approvals, unclear requirements, or other factors—to pinpoint the root cause of delays and address them effectively.
Enhanced Sprint Planning & Resource Allocation:
Without JIRA: Engineering teams rely on GitHub metrics to gauge performance but may struggle to connect them with workload distribution.
With JIRA: Managers can assess how many tasks remain open versus completed, analyze team workloads, and adjust priorities in real-time to ensure efficient resource allocation and maximize team productivity.
Connecting Engineering Efforts to Business Goals:
Without JIRA: GitHub Analytics tracks technical contributions but doesn't show their impact on business priorities.
With JIRA: Product owners can track how engineering efforts align with strategic objectives by analyzing the progress of JIRA issues linked to key business goals, ensuring that the team is working on the most impactful tasks.
Getting Started with GitHub & JIRA Analytics Integration
Start leveraging the power of integrated analytics with tools like Typo, a dynamic platform designed to optimize your GitHub and JIRA experience. Whether you're working on a startup project or managing an enterprise-scale development team, such tools offer powerful analytics tailored to your specific needs.
How to Integrate GitHub & JIRA with Typo:
Connect Your GitHub and JIRA Accounts: Visit Typo's platform and seamlessly link both tools to establish a unified view of your development data.
Configure Dashboards: Build custom analytics dashboards that track both code contributions (from GitHub) and issue progress (from JIRA) in a single, integrated view.
Analyze Insights Together: Gain deeper insights by analyzing GitHub commit trends alongside JIRA sprint performance, identifying correlations and uncovering hidden patterns within your development workflow.
Conclusion
While GitHub Analytics is a valuable tool for tracking repository activity, integrating it with JIRA unlocks deeper engineering insights, allowing teams to make smarter, data-driven decisions. By bridging the gap between code contributions and project management, teams can improve efficiency, enhance collaboration, and ensure that engineering efforts align with business goals.
Sign Up for Typo’s GitHub & JIRA Analytics Today!
Whether you aim to enhance software delivery, improve team collaboration, or refine project workflows, Typo provides a flexible, data-driven platform to meet your needs.
In today's fast-paced software development landscape, optimizing engineering performance is crucial for staying competitive. Engineering leaders need a deep understanding of workflows, team velocity, and potential bottlenecks. Engineering intelligence platforms provide valuable insights into software development dynamics, helping to make data-driven decisions. While Swarmia is a well-known player, it might not be the perfect fit for every team. This article explores the top Swarmia alternatives, giving you the knowledge to choose the best platform for your organization's needs. We'll delve into features, benefits, and potential drawbacks to help you make an informed decision.
Understanding Swarmia's Strengths
Swarmia is an engineering intelligence platform designed to improve operational efficiency, developer productivity, and software delivery. It integrates with popular development tools and uses data analytics to provide actionable insights.
Key Functionalities:
Data Aggregation: Connects to repositories like GitHub, GitLab, and Bitbucket, along with issue trackers like Jira and Azure DevOps, to create a comprehensive view of engineering activities.
Workflow Optimization: Identifies inefficiencies in development cycles by analyzing task dependencies, code review bottlenecks, and other delays.
Performance Metrics & Visualization: Presents data through dashboards, offering insights into deployment frequency, cycle time, resource allocation, and other KPIs.
Actionable Insights: Helps engineering leaders make data-driven decisions to improve workflows and team collaboration.
Why Consider a Swarmia Alternative?
Despite its strengths, Swarmia might not be ideal for everyone. Here's why you might want to explore alternatives:
Limited Customization: May not adapt well to highly specialized or unique workflows.
Complex Onboarding: Can have a steep learning curve, hindering quick adoption.
Pricing: Can be expensive for smaller teams or organizations with budget constraints.
User Interface: Some users find the UI challenging to navigate.
Top 6 Swarmia Competitors: Features, Pros & Cons
Here are six leading alternatives to Swarmia, each with its own unique strengths:
1. Typo
Typo is a comprehensive engineering intelligence platform providing end-to-end visibility into the entire SDLC. It focuses on actionable insights through integration with CI/CD pipelines and issue tracking tools.
Key Features:
Unified DORA and engineering metrics dashboard.
AI-driven analytics for sprint reviews, pull requests, and development insights.
Industry benchmarks for engineering performance evaluation.
Automated sprint analytics for workflow optimization.
Pros:
Strong tracking of key engineering metrics.
AI-powered insights for data-driven decision-making.
Responsive user interface and good customer support.
Cons:
Limited customization options in existing workflows.
Potential for further feature expansion.
G2 Reviews Summary:
G2 reviews indicate decent user engagement with a strong emphasis on positive feedback, particularly regarding customer support.
2. Jellyfish
Jellyfish is an advanced analytics platform that aligns engineering efforts with broader business goals. It gives real-time visibility into development workflows and team productivity, focusing on connecting engineering work to business outcomes.
Key Features:
Resource allocation analytics for optimizing engineering investments.
Real-time tracking of team performance.
DevOps performance metrics for continuous delivery optimization.
Pros:
Granular data tracking capabilities.
Intuitive user interface.
Facilitates cross-team collaboration.
Cons:
Can be complex to implement and configure.
Limited customization options for tailored insights.
G2 Reviews Summary:
G2 reviews highlight strong core features but also point to potential implementation challenges, particularly around configuration and customization.
3. LinearB
LinearB is a data-driven DevOps solution designed to improve software delivery efficiency and engineering team coordination. It focuses on data-driven insights, identifying bottlenecks, and optimizing workflows.
Key Features:
Workflow visualization for process optimization.
Risk assessment and early warning indicators.
Customizable dashboards for performance monitoring.
Pros:
Extensive data aggregation capabilities.
Enhanced collaboration tools.
Comprehensive engineering metrics and insights.
Cons:
Can have a complex setup and learning curve.
High data volume may require careful filtering.
G2 Reviews Summary:
G2 reviews generally praise LinearB's core features, such as flow management and insightful analytics. However, some users have reported challenges with complexity and the learning curve.
4. Waydev
Waydev is an engineering analytics solution with a focus on Agile methodologies. It provides in-depth visibility into development velocity, resource allocation, and delivery efficiency.
Key Features:
Automated engineering performance insights.
Agile-based tracking of development velocity and bug resolution.
Budgeting reports for engineering investment analysis.
Pros:
Highly detailed metrics analysis.
Streamlined dashboard interface.
Effective tracking of Agile engineering practices.
Cons:
Steep learning curve for new users.
G2 Reviews Summary:
G2 reviews for Waydev are limited, making it difficult to draw definitive conclusions about user satisfaction.
5. Sleuth
Sleuth is a deployment intelligence platform specializing in tracking and improving DORA metrics. It provides detailed insights into deployment frequency and engineering efficiency.
Key Features:
Automated deployment tracking and performance benchmarking.
Real-time performance evaluation against efficiency targets.
Lightweight and adaptable architecture.
Pros:
Intuitive data visualization.
Seamless integration with existing toolchains.
Cons:
Pricing may be restrictive for some organizations.
G2 Reviews Summary:
G2 reviews for Sleuth are also limited, making it difficult to draw definitive conclusions about user satisfaction.
6. Pluralsight Flow (formerly Git Prime)
Pluralsight Flow provides a detailed overview of the development process, helping identify friction and bottlenecks. It aligns engineering efforts with strategic objectives by tracking DORA metrics, software development KPIs, and investment insights. It integrates with various manual and automated testing tools such as Azure DevOps and GitLab.
Key Features:
Offers insights into why trends occur and potential related issues.
Predicts value impact for project and process proposals.
Features DORA analytics and investment insights.
Provides centralized insights and data visualization.
Pros:
Strong core metrics tracking capabilities.
Process improvement features.
Data-driven insights generation.
Detailed metrics analysis tools.
Efficient work tracking system.
Cons:
Complex and challenging user interface.
Issues with metrics accuracy/reliability.
Steep learning curve for users.
Inefficiencies in tracking certain metrics.
Problems with tool integrations.
G2 Reviews Summary:
The review numbers show moderate engagement (6-12 mentions for pros, 3-4 for cons), placing it between Waydev's limited feedback and Jellyfish's extensive reviews. The feedback suggests strong core functionality but notable usability challenges.
The Power of Integration
Engineering management platforms become even more powerful when they integrate with your existing tools. Seamless integration with platforms like Jira, GitHub, CI/CD systems, and Slack offers several benefits:
Automation: Automates tasks like status updates and alerts.
Customization: Adapts to specific team needs and workflows.
Centralized Data: Enhances collaboration and reduces context switching.
By leveraging these integrations, software teams can significantly boost productivity and focus on building high-quality products.
Key Considerations for Choosing an Alternative
When selecting a Swarmia alternative, keep these factors in mind:
Team Size and Budget: Look for solutions that fit your budget, considering freemium plans or tiered pricing.
Specific Needs: Identify your key requirements. Do you need advanced customization, DORA metrics tracking, or a focus on developer experience?
Ease of Use: Choose a platform with an intuitive interface to ensure smooth adoption.
Integrations: Ensure seamless integration with your current tool stack.
Customer Support: Evaluate the level of support offered by each vendor.
Conclusion
Choosing the right engineering analytics platform is a strategic decision. The alternatives discussed offer a range of capabilities, from workflow optimization and performance tracking to AI-powered insights. By carefully evaluating these solutions, engineering leaders can improve team efficiency, reduce bottlenecks, and drive better software development outcomes.
Issue Cycle Time: The Key to Engineering Operations
Software teams relentlessly pursue rapid, consistent value delivery. Yet, without proper metrics, this pursuit becomes directionless.
While engineering productivity is a combination of multiple dimensions, issue cycle time acts as a critical indicator of team efficiency.
Simply put, this metric reveals how quickly engineering teams convert requirements into deployable solutions.
By understanding and optimizing issue cycle time, teams can accelerate delivery and enhance the predictability of their development practices.
In this guide, we discuss cycle time's significance and provide actionable frameworks for measurement and improvement.
What is the Issue Cycle Time?
Issue cycle time measures the duration between when work actively begins on a task and its completion.
This metric specifically tracks the time developers spend actively working on an issue, excluding external delays or waiting periods.
Unlike lead time, which includes all elapsed time from issue creation, cycle time focuses purely on active development effort.
Core Components of Issue Cycle Time
Work Start Time: When a developer transitions the issue to "in progress" and begins active development
Development Duration: Time spent writing, testing, and refining code
Review Period: Time in code review and iteration based on feedback
Testing Phase: Duration of QA verification and bug fixes
Work Completion: Final approval and merge of changes into the main codebase
Understanding these components allows teams to identify bottlenecks and optimize their development workflow effectively.
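Putting these components together, here is a minimal Python sketch with invented status transitions that sums only the time an issue spends in active states, excluding waiting periods such as "Blocked":

```python
from datetime import datetime

# Sketch: issue cycle time from status-transition timestamps,
# counting only time spent in active states.

transitions = [
    ("In Progress", datetime(2024, 9, 2, 9, 0)),
    ("Blocked",     datetime(2024, 9, 2, 15, 0)),  # waiting: excluded
    ("In Progress", datetime(2024, 9, 3, 10, 0)),
    ("In Review",   datetime(2024, 9, 3, 16, 0)),
    ("Done",        datetime(2024, 9, 4, 11, 0)),
]

ACTIVE = {"In Progress", "In Review"}

active_seconds = sum(
    ((transitions[i + 1][1] - transitions[i][1]).total_seconds()
     for i in range(len(transitions) - 1)
     if transitions[i][0] in ACTIVE),
    0.0,
)
print(f"Cycle time (active work only): {active_seconds / 3600:.1f} hours")
```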
Why Does Issue Cycle Time Matter?
Here’s why you must track issue cycle time:
Impact on Productivity
Issue cycle time directly correlates with team output capacity. Shorter cycle times allow teams to complete more work within fixed timeframes, keeping resource utilization high. This accelerated delivery cadence compounds over time, allowing teams to tackle more strategic initiatives rather than getting bogged down in prolonged development cycles.
Identifying Bottlenecks
By tracking cycle time metrics, teams can pinpoint specific stages where work stalls. This reveals process inefficiencies, resource constraints, or communication gaps that break flow. Data-driven bottleneck identification allows targeted process improvements rather than speculative changes.
Enhanced Collaboration
Rapid cycle times help build tighter feedback loops between developers, reviewers, and stakeholders. When issues move quickly through development stages, teams maintain context and momentum. Streamlined collaboration reduces handoff friction and prevents knowledge loss between stages.
Better Predictability
Consistent cycle times help in reliable sprint planning and release forecasting. Teams can confidently estimate delivery dates based on historical completion patterns. This predictability helps align engineering efforts with business goals and improves cross-functional planning.
Customer Satisfaction
Quick issue resolution directly impacts user experience. When teams maintain efficient cycle times, they can respond quickly to customer feedback and deliver improvements more frequently. This responsiveness builds trust and strengthens customer relationships.
3 Phases of Issue Cycle Time
The development process is a journey that can be summed up in three phases. Let’s break these phases down:
Phase 1: Ticket Creation to Work Start
The initial phase includes critical pre-development activities that significantly impact overall cycle time. This period begins when a ticket enters the backlog and ends when active development starts.
Teams often face delays in ticket assignment due to unclear prioritization frameworks or manual routing processes; slow resource allocation is especially common when assignment procedures lack automation.
Implementing automated ticket routing and standardized prioritization matrices can substantially reduce initial delays.
Phase 2: Active Work Period
The core development phase represents the most resource-intensive segment of the cycle. Development time varies based on complexity, dependencies, and developer expertise.
Success in this phase demands precise requirement documentation, proactive dependency management, and clear escalation paths. Teams should maintain living documentation and implement pair programming for complex tasks.
Phase 3: Resolution to Closure
The final phase covers all post-development activities required for production deployment.
This stage often becomes a significant bottleneck due to:
Sequential review processes
Manual quality assurance procedures
Multiple approval requirements
Environment-specific deployment constraints
How can this be optimized? By:
Implementing parallel review tracks
Automating test execution
Establishing service-level agreements for reviews
Creating self-service deployment capabilities
Each phase comes with many optimization opportunities. Teams should measure phase-specific metrics to identify the highest-impact improvement areas. Regular analysis of phase durations allows targeted process refinement, which is critical to maintaining software engineering efficiency.
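As an illustration of phase-specific measurement, the sketch below aggregates per-phase durations across a set of issues (with made-up numbers) and flags the slowest phase as the first optimization target:

```python
from statistics import median

# Hypothetical per-issue phase durations in hours, e.g. exported from an
# issue tracker. Phase names mirror the three phases described above.
issues = [
    {"pre_work": 40, "active_work": 40, "resolution": 18},
    {"pre_work": 70, "active_work": 30, "resolution": 60},
    {"pre_work": 55, "active_work": 45, "resolution": 52},
]

medians = {phase: median(issue[phase] for issue in issues)
           for phase in issues[0]}

# The phase with the largest median duration is the first optimization target.
bottleneck = max(medians, key=medians.get)
for phase, hours in medians.items():
    print(f"{phase:<12} median {hours:5.1f} h")
print(f"bottleneck phase: {bottleneck}")
```

Medians are used rather than means so a single outlier ticket does not distort the picture.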
How to Measure and Analyze Issue Cycle Time
Effective cycle time measurement requires the right tools and systematic analysis approaches. Businesses must establish clear frameworks for data collection, benchmarking, and continuous monitoring to derive actionable insights.
Here’s how you can measure issue cycle time:
Metrics and Tools
Modern development platforms offer integrated cycle time tracking capabilities. Tools like Typo automatically capture timing data across workflow states.
These platforms provide comprehensive dashboards displaying velocity trends, bottleneck indicators, and predictability metrics.
Integration with version control systems enables correlation between code changes and cycle time patterns. Advanced analytics features support custom reporting and team-specific performance views.
Establishing Benchmarks
Benchmark definition requires contextual analysis of team composition, project complexity, and delivery requirements.
Start by calculating your team's current average cycle time across different issue types. Factor in:
Team size and experience levels
Technical complexity categories
Historical performance patterns
Industry standards for similar work
The right approach is to define acceptable ranges rather than fixed targets. Consider setting graduated improvement goals: 10% reduction in the first quarter, 25% by year-end.
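One way to express such a range, sketched here with invented data, is to compute per-type percentiles from historical cycle times and treat the 85th percentile as the upper bound of acceptable:

```python
import statistics

# Hypothetical cycle times (days) for completed issues, grouped by type.
cycle_times = {
    "bug":     [1.5, 2.0, 3.5, 1.0, 8.0, 2.5],
    "feature": [4.0, 6.5, 5.0, 12.0, 7.5, 6.0],
}

for issue_type, samples in cycle_times.items():
    pct = statistics.quantiles(samples, n=100)  # 99 percentile cut points
    p50, p85 = pct[49], pct[84]
    print(f"{issue_type:<8} mean={statistics.mean(samples):4.1f}d "
          f"p50={p50:4.1f}d p85={p85:4.1f}d")
    # An "acceptable range" benchmark: flag anything beyond the 85th
    # percentile rather than enforcing a single fixed target.
    print(f"  benchmark: <= {p85:.1f} days for 85% of {issue_type}s")
```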
Using Visualizations
Data visualization converts raw metrics into actionable insights. Cycle time scatter plots show completion patterns and outliers. Cumulative flow diagrams show work-in-progress limits and flow efficiency. Control charts track stability and process improvements over time.
Ideally, businesses should implement:
Weekly trend analysis
Percentile distribution charts
Work-type segmentation views
Team comparison dashboards
By implementing these visualizations, businesses can identify bottlenecks and optimize workflows for greater engineering productivity.
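For teams assembling these views by hand rather than in a platform, a cycle time scatter plot takes only a few lines of matplotlib. A sketch with simulated data (the distribution and percentile guide line are illustrative):

```python
import random
from datetime import date, timedelta

import matplotlib.pyplot as plt

# Simulated completions: (completion date, cycle time in days).
random.seed(7)
start = date(2024, 1, 1)
completions = [(start + timedelta(days=random.randrange(90)),
                random.lognormvariate(1.2, 0.6)) for _ in range(120)]

dates, times = zip(*completions)
p85 = sorted(times)[int(len(times) * 0.85)]  # 85th percentile guide line

plt.scatter(dates, times, alpha=0.5)
plt.axhline(p85, linestyle="--", label=f"85th percentile ~ {p85:.1f} d")
plt.xlabel("Completion date")
plt.ylabel("Cycle time (days)")
plt.title("Cycle time scatter plot")
plt.legend()
plt.show()
```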
Regular Reviews
Establish structured review cycles at multiple organizational levels. These could be:
Weekly team retrospectives that examine cycle time trends and identify immediate optimization opportunities.
Monthly department reviews that analyze cross-team patterns and resource allocation impacts.
Quarterly organizational assessments that evaluate systemic issues and strategic improvements.
These reviews should be templatized and consistent, with a focus on:
Trend analysis
Bottleneck identification
Process modification results
Team feedback integration
Best Practices to Optimize Issue Cycle Time
Focus on the following proven strategies to enhance workflow efficiency while maintaining output quality:
Automate Repetitive Tasks: Use automation for code testing, deployment, and issue tracking. Implement CI/CD pipelines and automated code review tools to eliminate manual handoffs.
Adopt Agile Methodologies: Implement Scrum or Kanban frameworks with clear sprint cycles or workflow stages. Maintain structured ceremonies and consistent delivery cadences.
Limit Work-in-Progress (WIP): Set strict WIP limits per development stage to reduce context switching and prevent resource overallocation. Monitor queue lengths to maintain steady progress; a minimal automated check is sketched after this list.
Conduct Daily Standups: Hold focused standup meetings to identify blockers early, track issue age, and enable immediate escalation for unresolved tasks.
Ensure Comprehensive Documentation: Maintain up-to-date technical specifications and acceptance criteria to reduce miscommunication and streamline issue resolution.
Cross-Train Team Members: Build versatile skill sets within the team to minimize dependencies on single individuals and allow flexible resource allocation.
Streamline Review Processes: Implement parallel review tracks, set clear review time SLAs, and automate style and quality checks to accelerate approvals.
Leverage Collaboration Tools: Use integrated development platforms and real-time communication channels to ensure seamless coordination and centralized knowledge sharing.
Track and Analyze Key Metrics: Monitor performance indicators daily with automated reports to identify trends, spot inefficiencies, and take corrective action.
Host Regular Retrospectives: Conduct structured reviews to analyze cycle time patterns, gather feedback, and implement continuous process improvements.
By consistently applying these best practices, engineering teams can reduce delays and optimize issue cycle time for sustained success.
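Referring back to the WIP-limit practice above, here is a minimal sketch of an automated check. The issue shape, names, and the limit of 3 are illustrative:

```python
from collections import Counter

WIP_LIMIT = 3  # illustrative per-developer limit

# Hypothetical issues exported from a tracker with assignee and status.
issues = [
    {"key": "ENG-101", "assignee": "aisha", "status": "in_progress"},
    {"key": "ENG-102", "assignee": "aisha", "status": "in_progress"},
    {"key": "ENG-103", "assignee": "aisha", "status": "in_progress"},
    {"key": "ENG-104", "assignee": "aisha", "status": "in_progress"},
    {"key": "ENG-105", "assignee": "ben",   "status": "in_progress"},
]

wip = Counter(i["assignee"] for i in issues if i["status"] == "in_progress")
for assignee, count in wip.items():
    if count > WIP_LIMIT:
        print(f"{assignee} exceeds the WIP limit: {count} > {WIP_LIMIT}")
```

A check like this can run in CI or a scheduled job and post alerts to a team channel.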
Real-life Example of Optimizing Issue Cycle Time
A mid-sized fintech company with 40 engineers faced persistent delivery delays despite having talented developers. Their average issue cycle time had grown to 14 days, creating mounting pressure from stakeholders and frustration within the team.
After analyzing their workflow data, they identified three critical bottlenecks:
Code Review Congestion: Senior developers were becoming bottlenecks with 20+ reviews in their queue, causing delays of 3-4 days for each ticket.
Environment Stability Issues: Inconsistent test environments led to frequent deployment failures, adding an average of 2 days to cycle time.
Unclear Requirements: Developers spent approximately 30% of their time seeking clarification on ambiguous tickets.
The team implemented a structured optimization approach:
Phase 1: Baseline Establishment (2 weeks)
Documented current workflow states and transition times
Calculated baseline metrics for each cycle time component
Surveyed team members to identify perceived pain points
Phase 2: Targeted Interventions (8 weeks)
Implemented a "review buddy" system that paired developers and established a maximum 24-hour review SLA
Standardized development environments using containerization
Created a requirement template with mandatory fields for acceptance criteria
Set WIP limits of 3 items per developer to reduce context switching
Phase 3: Measurement and Refinement (Ongoing)
Established weekly cycle time reviews in team meetings
Created dashboards showing real-time metrics for each workflow stage
Implemented a continuous improvement process where any team member could propose optimization experiments
Results After 90 Days:
Overall cycle time reduced from 14 days to 5.5 days (60% improvement)
Code review turnaround decreased from 72 hours to 16 hours
Deployment success rate improved from 65% to 94%
Developer satisfaction scores increased by 40%
On-time delivery rate rose from 60% to 87%
The most significant insight came from breaking down the cycle time improvements by phase: while the initial automation efforts produced quick wins, the team culture changes around WIP limits and requirement clarity delivered the most substantial long-term benefits.
This example demonstrates that effective cycle time optimization requires both technical solutions and process refinements. The fintech company continues to monitor its metrics, making incremental improvements that maintain its enhanced velocity without sacrificing quality or team wellbeing.
Conclusion
Issue cycle time directly impacts development velocity and team productivity. By tracking and optimizing this metric, teams can deliver value faster.
Typo's real-time issue tracking combined with AI-powered insights automates improvement detection and suggests targeted optimizations. Our platform allows teams to maintain optimal cycle times while reducing manual overhead.
The Software Development Life Cycle (SDLC) methodologies provide a structured framework for guiding software development and maintenance.
Development teams need to select the right approach for their project based on its needs and requirements. We have curated the top 8 SDLC methodologies that you can consider. Choose the one that best aligns with your project. Let’s get started:
8 Software Development Life Cycle Methodologies
Waterfall Model
The waterfall model is the oldest surviving SDLC methodology that follows a linear, sequential approach. In this approach, the development team completes each phase before moving on to the next. The five phases include Requirements, Design, Implementation, Verification, and Maintenance.
However, in today’s world, this model is not ideal for large and complex projects, as it does not allow teams to revisit previous phases. That said, the Waterfall Model serves as the foundation for all subsequent SDLC models, which were designed to address its limitations.
Iterative Model
This software development approach embraces repetition. In other words, the Iterative model builds a system incrementally through repeated cycles. The development team revisits previous phases, allowing for modifications based on feedback and changing requirements. This approach builds software piece by piece while identifying additional needs as they go along. Each new phase produces a more refined version of the software.
In this model, only the major requirements are defined from the beginning. One well-known iterative model is the Rational Unified Process (RUP), developed by IBM, which aims to enhance team productivity across various project types.
Incremental Model
This methodology is similar to the iterative model but differs in its focus. In the incremental model, the product is developed and delivered in small, functional increments through multiple cycles. It prioritizes critical features first and then adds functionality as requirements evolve throughout the project.
Simply put, the product is not held back until it is fully completed. Instead, it is released in stages, with each increment providing a usable version. This allows for easy incorporation of changes in later increments. However, this approach requires thorough planning and design and may require more resources and effort.
Agile Model
The Agile model is a flexible and iterative approach to software development. Developed in 2001, it combines iterative and incremental models, aiming to increase collaboration, gather continuous feedback, and deliver products rapidly. It is based on the “Fail Fast and Early” principle, which emphasizes quick testing and learning from failures early to minimize risks, save resources, and drive rapid improvement.
The software product is divided into small incremental parts that pass through some or all of the SDLC phases. Each new version is tested, and feedback is gathered from stakeholders throughout the process. This allows teams to catch issues early before they grow into major ones. A few of its sub-models include Extreme Programming (XP), Rapid Application Development (RAD), Scrum, and Kanban.
Spiral Model
A flexible SDLC approach in which the project cycles through four phases: Planning, Risk Analysis, Engineering, and Evaluation, repeatedly in a figurative spiral until completion. This methodology is widely used by leading software companies, as it emphasizes risk analysis, ensuring that each iteration focuses on identifying and mitigating potential risks.
This model also prioritizes customer feedback and incorporates prototypes throughout the development process. It is particularly suitable for large and complex projects with high-risk factors and a need for early user input. However, for smaller projects with minimal risks, this model may not be ideal due to its high cost.
Lean Model
Derived from Lean Manufacturing principles, the Lean Model focuses on maximizing user value by minimizing waste and optimizing processes. It aligns well with the Agile methodology by eliminating multitasking and encouraging teams to prioritize essential tasks in the present moment.
The Lean Model is often associated with the concept of a Minimum Viable Product (MVP), a basic version of the product launched to gather user feedback, understand preferences, and iterate for improvements. Key tools and techniques supporting the Lean model include value stream mapping, Kanban boards, the 5S method, and Kaizen events.
V-Model
An extension of the waterfall model, the V-model is also known as the verification and validation model. It is characterized by its V-shaped structure, which emphasizes a systematic and disciplined approach to software development. The verification phase ensures the product is being built correctly, while the validation phase ensures the correct product is being built. The two phases are linked by the implementation (coding) phase.
This model is best suited for projects with clear and stable requirements and is particularly useful in industries where quality and reliability are critical. However, its inflexibility makes it less suitable for projects with evolving or uncertain requirements.
DevOps Model
The DevOps model is a hybrid of Agile and Lean methodologies. It brings Dev and Ops teams together to improve collaboration, automate processes, integrate CI/CD, and accelerate the delivery of high-quality software. It focuses on small but frequent updates, allowing continuous feedback and process improvements. This enables teams to learn from failures, iterate on processes, and encourage experimentation and innovation to enhance efficiency and quality.
DevOps is widely adopted in modern software development to support rapid innovation and scalability. However, it may introduce more security risks as it prioritizes speed over security.
How Does Typo Help in Improving SDLC Visibility?
Typo is an intelligent engineering management platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. Through SDLC metrics, you can ensure alignment with business goals and prevent developer burnout. The tool integrates with your tech stack (Git, Slack, calendars, and CI/CD, to name a few) to deliver real-time insights.
Apart from the Software Development Life Cycle (SDLC) methodologies mentioned above, there are others you can take note of. Each methodology follows a different approach to creating high-quality software, depending on factors such as project goals, complexity, team dynamics, and flexibility.
Be sure to conduct your own research to determine the optimal approach for producing high-quality software that efficiently meets user needs.
FAQs
What is the Software Development Life Cycle (SDLC)?
The SDLC is a structured process for planning, building, testing, and maintaining software. It typically includes the following phases:
Planning: Identifying project scope, objectives, and feasibility.
Requirement Analysis: Gathering and documenting user and business requirements.
Design: Creating system architecture, database structure, and UI/UX design.
Implementation (Coding): Writing and developing the actual software.
Testing: Identifying and fixing bugs to ensure software quality.
Deployment: Releasing the software for users.
Maintenance: Providing updates, fixing issues, and improving the system over time.
What is the purpose of SDLC?
The purpose of SDLC is to provide a systematic approach to software development. This ensures that the final product meets user requirements, stays within budget, and is delivered on time. It helps teams manage risks, improve collaboration, and maintain software quality throughout its lifecycle.
Can SDLC be applied to all types of software projects?
Yes, SDLC can be applied to various software projects, including web applications, mobile apps, enterprise software, and embedded systems. However, the choice of SDLC methodology depends on factors like project complexity, team size, budget, and flexibility needs.
Comprehensive Guide to Best Practice KPI Setting for Software Development
Nowadays, software development teams face immense pressure to deliver high-quality products rapidly. To navigate this complexity, organizations must embrace data-driven decision-making. This is where software development metrics become crucial. By carefully selecting and tracking the right software KPIs, teams can gain valuable insights into their performance, identify areas for improvement, and ultimately achieve their goals.
Why are Software Development Metrics Important?
Software metrics provide a wealth of information that can be used to:
Improve Decision-Making: For example, by tracking deployment frequency, a team can identify bottlenecks in their release pipeline and make informed decisions about investing in automation tools like Jenkins or CircleCI to accelerate deployments.
Enhance Visibility: Software metrics such as lead time for changes provide real-time visibility into the development process, allowing teams to identify delays and proactively address issues. For instance, if the team observes an increase in lead time, they can investigate potential root causes, such as complex code reviews or insufficient testing resources.
Increase Accountability: Tracking developer KPI metrics such as individual contribution to code commits and code reviews can help foster a culture of accountability and encourage continuous improvement. This can also help identify areas where individual team members may need additional support or training.
Improve Communication: By sharing data on software development KPIs such as cycle time with stakeholders, development teams can improve communication and build trust with other departments. For example, by demonstrating a consistent reduction in cycle time, teams can effectively communicate their progress and build confidence among stakeholders.
Enhance Customer Satisfaction: By focusing on software development metrics that directly impact customer experience, such as mean time to restore service and change failure rate, teams can improve product reliability and enhance customer satisfaction. This directly translates to increased customer retention and positive brand perception.
Which Software Development KPIs are Critical?
Several software development metrics are considered critical for measuring team performance and driving success. These include:
DORA Metrics:
Deployment Frequency: How often code is released to production (e.g., daily, weekly, monthly).
Example: A team might aim to increase deployment frequency from weekly to daily releases to improve responsiveness to customer needs and accelerate time-to-market.
Lead Time for Changes: The time it takes to go from code commit to production release (e.g., hours, days).
Example: A team can set a target of reducing lead time for changes by 20% within a quarter by streamlining the review process and automating deployments.
Mean Time to Restore Service: How quickly service is restored after an outage (e.g., minutes, hours).
Example: A team might set a target of restoring service within 15 minutes of an outage to minimize customer impact and maintain service availability.
Change Failure Rate: The percentage of deployments that result in service degradation or outages (e.g., 5%, 1%).
Example: By implementing robust testing procedures, including unit tests, integration tests, and TDD (Test-Driven Development) practices, teams can strive to reduce the change failure rate and improve the overall stability of their software.
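As a hedged illustration, three of the DORA metrics above can be derived from a simple deployment log. The records and field names here are invented:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: (deployed_at, committed_at, failed).
deployments = [
    (datetime(2024, 3, 1, 14), datetime(2024, 2, 29, 9), False),
    (datetime(2024, 3, 2, 11), datetime(2024, 3, 1, 16), False),
    (datetime(2024, 3, 4, 10), datetime(2024, 3, 2, 12), True),
    (datetime(2024, 3, 5, 15), datetime(2024, 3, 4, 17), False),
]

window_days = 7  # measurement window for this sample

deploy_frequency = len(deployments) / window_days
lead_times = [deployed - committed for deployed, committed, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
change_failure_rate = sum(1 for *_, failed in deployments if failed) / len(deployments)

print(f"deployment frequency: {deploy_frequency:.2f}/day")
print(f"avg lead time for changes: {avg_lead_time}")
print(f"change failure rate: {change_failure_rate:.0%}")
```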
Code Quality Metrics:
Code Coverage: The percentage of code covered by automated tests (e.g., 80%, 90%).
Example: By setting a target code coverage goal and regularly monitoring test results, teams can identify areas with low coverage and prioritize writing additional tests to improve code quality and reduce the risk of bugs.
Static Code Analysis Findings: The number and severity of code quality issues detected by static analysis tools.
Example: Utilizing tools like SonarQube or Checkmarx to identify and address code smells, security vulnerabilities, and other potential issues early in the development cycle.
Code Churn: The frequency of code changes.
Example: High code churn can indicate potential instability and increased technical debt. By analyzing code churn patterns, teams can identify areas of the codebase that require refactoring or redesign to improve maintainability.
Team-Specific Metrics:
Cycle Time: The time it takes to complete a single piece of work.
Example: Tracking cycle time for different types of tasks (e.g., bug fixes, feature development) can help identify bottlenecks and areas for process improvement within the SDLC (Software Development Lifecycle).
Work in Progress (WIP) Limits: The number of tasks a team can work on concurrently.
Example: Implementing WIP limits can prevent task overload, improve focus, and reduce the risk of context switching.
Burn Rate: The speed at which the team is completing work.
Example: Tracking burn rate can help teams accurately estimate the time required to complete projects and make adjustments to their workload as needed.
Best Practice KPI Setting for Software Development
To effectively leverage software development metrics, teams should:
Establish Clear Goals: Define specific, measurable, achievable, relevant, and time-bound (SMART) goals aligned with the chosen software engineering KPIs. For example, a team might set a goal to increase deployment frequency by 50% within the next quarter.
Collect and Analyze Data: Utilize tools such as project management software (e.g., Jira, Asana), version control systems (like Git), and monitoring dashboards to collect data on key metrics. Analyze this data to identify trends and identify areas for improvement.
Visualize Data: Create dashboards and reports to visualize key metrics and trends over time. This could include burndown charts and graphs that show progress towards goals.
Regularly Review and Adjust: Regularly review and analyze the collected data to identify areas for improvement and adjust strategies as needed. For example, if the team is struggling to meet a specific goal, they can investigate the root cause and implement corrective actions.
Involve the Team: Encourage team members to understand and contribute to the data collection and analysis process. This can foster a sense of ownership and encourage a data-driven culture within the team.
Software Metrics and Measures in Software Architecture
Software metrics and measures in software architecture play a crucial role in evaluating the quality and maintainability of software systems. Key metrics include:
Coupling: A measure of how interdependent different modules within a system are.
Example: High coupling occurs when changes in one module significantly impact other modules. This can be measured by analyzing dependencies between modules using tools like static code analyzers. To reduce coupling, consider using design principles like the Interface Segregation Principle and Dependency Inversion Principle.
Cohesion: A measure of how closely related the elements within a module are.
Example: High cohesion means that a module focuses on a single, well-defined responsibility. To improve cohesion, refactor code to group related functionalities together and avoid creating "god objects" with multiple unrelated responsibilities.
Complexity: A measure of the difficulty of understanding, modifying, and testing the software.
Example: Cyclomatic complexity is a common metric for measuring code complexity. Tools can analyze code and calculate cyclomatic complexity scores, highlighting areas with high complexity that may require refactoring.
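For the complexity example above, here is a minimal sketch using the third-party radon package (pip install radon); the source snippet and the threshold of 10 are illustrative:

```python
from radon.complexity import cc_visit

source = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    elif n % 2 == 0:
        return "even"
    else:
        return "odd"
"""

THRESHOLD = 10  # a common refactoring trigger, not a universal rule

# cc_visit parses the source and returns one block per function/class,
# each with a computed cyclomatic complexity score.
for block in cc_visit(source):
    flag = "refactor?" if block.complexity > THRESHOLD else "ok"
    print(f"{block.name}: complexity {block.complexity} ({flag})")
```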
Quality Metrics in Software Engineering Template
A comprehensive quality metrics in software engineering template should include:
Functional Metrics: Metrics related to the functionality of the software, such as defect density (number of defects per lines of code), user satisfaction, and customer churn rate.
Performance Metrics: Metrics related to the performance of the software, such as response time, throughput, and resource utilization.
Usability Metrics: Metrics related to the ease of use of the software, such as user satisfaction, task completion time, and error rates.
Reliability Metrics: Metrics related to the reliability of the software, such as mean time to failure (MTTF) and mean time to repair (MTTR).
Maintainability Metrics: Metrics related to the ease of maintaining and modifying the software, such as code complexity, coupling, and cohesion.
Software Development Metrics Examples
Software development metrics examples can include:
Deployment Frequency: How often code is released to production (e.g., daily, weekly, monthly).
Lead Time for Changes: The time it takes to go from a code commit to a production release (e.g., hours, days).
Mean Time to Restore Service: How quickly service is restored after an outage (e.g., minutes, hours).
Change Failure Rate: The percentage of deployments that result in service degradation or outages (e.g., 5%, 1%).
Code Coverage: The percentage of code covered by automated tests (e.g., 80%, 90%).
Static Code Analysis Findings: The number of critical, major, and minor code quality issues identified by static analysis tools.
By carefully selecting and tracking the right software engineering KPIs, teams can gain valuable insights into their performance, identify areas for improvement, and ultimately deliver higher-quality software more efficiently.
How Platform Engineering Teams Leverage Software Development KPIs & SDLC Insights
Platform engineering teams play a crucial role in enabling software development teams to deliver high-quality products faster. By providing self-service infrastructure, automating processes, and streamlining workflows, platform engineering teams empower developers to focus on building innovative solutions.
To effectively fulfill this mission, platform engineering teams must also leverage software development KPIs and software development lifecycle insights. Here are some key ways they do it:
Measuring the Impact of Platform Services:
KPI: Time to Provision Infrastructure.
Real-world Example: A platform team might track the time it takes for developers to provision new environments (e.g., development, testing, production) using self-service tools like Terraform or Pulumi. By monitoring such KPIs, the team can identify bottlenecks in the provisioning process and optimize its infrastructure-as-code templates to accelerate provisioning times.
KPI: Developer Satisfaction with Platform Services.
Real-world Example: Conducting regular surveys among developers to gather feedback on the usability, reliability, and performance of platform services. This feedback can be used to prioritize improvements and ensure that platform services meet the evolving needs of the development teams.
Optimizing Development Workflows:
KPI: Lead Time for Changes (for platform services).
Real-world Example: Tracking the time it takes to deploy changes to platform services (e.g., updates to CI/CD pipelines, new infrastructure components). By minimizing lead time for changes, platform teams can ensure that developers have access to the latest and greatest tools and services.
KPI: Change Failure Rate (for platform services).
Real-world Example: Monitoring the frequency of incidents or outages caused by changes to platform services. By analyzing these incidents, platform teams can identify root causes, implement preventative measures, and improve the overall reliability of their services.
Improving Developer Productivity:
KPI: Time Spent on Repetitive Tasks.
Real-world Example: Analyzing developer activity logs to identify repetitive tasks that can be automated by platform services. For example, automating the process of setting up new developer environments or deploying applications to different environments.
KPI: Developer Self-Sufficiency.
Real-world Example: Tracking the number of support tickets raised by developers related to platform services. By reducing the number of support tickets, platform teams can demonstrate their effectiveness in empowering developers and minimizing disruptions to their work.
By carefully analyzing different KPIs and SDLC insights, platform engineering teams can continuously improve their services, enhance developer productivity, and ultimately contribute to the overall success of the organization.
What are Software Engineering KPIs Specifically Used For Within Companies Like Uber, Netflix, and Facebook?
These tech giants heavily rely on tracking software development KPIs to drive continuous improvement and maintain their competitive edge. Here are some real-world examples:
Uber:
Deployment Frequency: Uber aims for very high deployment frequencies to quickly adapt to changing market demands, introduce new features, and fix bugs. They leverage automation and continuous integration/continuous delivery (CI/CD) pipelines to achieve this.
Lead Time for Changes: Minimizing lead time is crucial for Uber to quickly respond to user feedback and introduce new features like surge pricing adjustments or safety initiatives.
Mean Time to Restore Service: Given the critical nature of their ride-hailing service, Uber focuses heavily on minimizing downtime. KPIs related to service restoration time help them identify and address potential issues proactively.
Netflix:
Change Failure Rate: Netflix strives for a very low change failure rate to maintain high service availability for its millions of subscribers. This is critical for preventing disruptions to streaming services.
Code Coverage: With a complex streaming infrastructure, Netflix prioritizes high code coverage to ensure the reliability and stability of their platform.
Customer Satisfaction: Netflix closely monitors customer satisfaction metrics, which are directly influenced by the quality and performance of their software.
Facebook:
Deployment Frequency: Facebook's rapid pace of innovation necessitates frequent deployments to introduce new features, improve user experience, and address security threats.
Code Quality: Given the massive scale of Facebook's user base, maintaining high code quality is paramount to prevent major outages and ensure data security. They utilize static analysis tools and rigorous code review processes to achieve this.
Usability Metrics: Facebook heavily relies on user engagement and retention metrics. These KPIs guide product development decisions and help identify areas for improvement in the user interface and user experience.
By leveraging data-driven insights from these KPIs, these companies can continuously optimize their development processes, boost team productivity, improve product quality, and deliver exceptional user experiences.
Key Takeaways:
Software development metrics are essential for driving continuous improvement in software development processes.
DORA metrics, code quality metrics, and team-specific metrics are critical for measuring the efficiency of software development projects and teams.
By effectively tracking quantitative metrics and software development KPIs, engineering leaders can make data-driven decisions, enhance the visibility of software development initiatives, boost development velocity, allocate resources more effectively, and meet specific business objectives.
Software metrics and measures in software architecture play a crucial role in evaluating the quality and maintainability of software systems.
By embracing best-practice KPI setting for software development and leveraging software engineering intelligence (SEI) tools, you can unlock the full potential of software engineering metrics for business success.
Thinking about what your engineering health metrics look like?
An engineering team at a tech company was asked to speed up feature releases. They optimized for deployment velocity. Pushed more weekly updates. But soon, bugs increased and stability suffered. The company started getting more complaints.
The team had hit the target but missed the point—quality had taken a backseat to speed.
In engineering teams, metrics guide performance. But if not chosen carefully, they can create inefficiencies.
Goodhart’s Law reminds us that engineering metrics should inform decisions, not dictate them.
And leaders must balance measurement with context to drive meaningful progress.
In this post, we’ll explore Goodhart’s Law, its impact on engineering teams, and how to use metrics effectively without falling into the trap of metric manipulation.
Let’s dive right in!
What is Goodhart’s Law?
Goodhart’s Law states: “When a metric becomes a target, it ceases to be a good metric.” It highlights how excessive focus on a single metric can lead to unintended consequences.
In engineering, prioritizing numbers over impact can cause issues like:
Speed over quality: Rushing deployments to meet velocity goals, leading to unstable code.
Bug report manipulation: Closing easy or duplicate tickets to inflate resolution rates.
Feature count obsession: Shipping unnecessary features just to hit software delivery targets.
Code quantity over quality: Measuring productivity by lines of code written, encouraging bloated code.
Artificial efficiency boosts: Engineers breaking tasks into smaller pieces to game completion metrics.
Test coverage inflation: Writing low-value tests to meet percentage requirements rather than ensuring real coverage.
Customer support workarounds: Delaying bug reports or reclassifying issues to reduce visible defects.
Understanding this law helps teams set better engineering metrics that drive real improvements.
Why Setting Engineering Metrics Can Be Risky
Metrics help track progress, identify bottlenecks, and improve engineering efficiency.
But poorly defined KPIs can lead to unintended consequences:
Focus shifts to gaming the system rather than achieving meaningful outcomes.
A culture of stress and fear takes hold among team members.
Collaboration erodes as individuals prioritize personal performance over team success.
When teams chase numbers, they optimize for the metric, not the goal.
Engineers might cut corners to meet deadlines, inflate ticket closures, or ship unnecessary features just to hit targets. Over time, this leads to burnout and declining quality.
Strict metric-driven cultures also stifle innovation. Developers focus on short-term wins instead of solving real problems.
Teams avoid risky but impactful projects because they don’t align with predefined KPIs.
Leaders must recognize that engineering metrics are tools, not objectives. Used wisely, they guide teams toward improvement. Misused, they create a toxic environment where numbers matter more than real progress.
Psychological Pitfalls of Metric Manipulation
Metrics don’t just influence performance—they shape behavior and mindset. When poorly designed, they produce the opposite of the outcomes they were introduced to drive. Here are some pitfalls of metric manipulation in software engineering:
1. Pressure and Burnout
When engineers are judged solely by metrics, the pressure to perform increases. If a team is expected to resolve a certain number of tickets per week, developers may prioritize speed over thoughtful problem-solving.
They take on easier, low-impact tasks just to keep numbers high. Over time, this leads to burnout, disengagement, and declining morale. Instead of fostering creativity, rigid KPIs create a high-stress work environment.
2. Cognitive Biases
Metrics distort decision-making. Availability bias makes teams focus on what’s easiest to measure rather than what truly matters.
If deployment frequency is tracked but long-term stability isn’t, engineers overemphasize shipping quickly while ignoring maintenance.
Similarly, the anchoring effect traps teams into chasing arbitrary targets. If management sets an unrealistic uptime goal, engineers may hide system failures or delay reporting issues to meet it.
3. Loss of Autonomy
Metrics can take decision-making power away from engineers. When success is defined by rigid KPIs, developers lose the freedom to explore better solutions.
A team judged on code commit frequency may feel pressured to push unnecessary updates instead of focusing on impactful changes. This stifles innovation and job satisfaction.
How to Avoid Metric Manipulation
Avoiding metric manipulation starts with thoughtful leadership. Organizations need a balanced approach to measurement and a culture of transparency.
Here’s how teams can set up a system that drives real progress without encouraging gaming:
1. Set the Right Metrics and Convey the ‘Why’
Leaders play a crucial role in defining metrics that align with business goals. Instead of just assigning numbers, they must communicate the purpose behind them.
For example, if an engineering team is measured on uptime, they should understand it’s not just about hitting a number—it’s about ensuring a seamless user experience.
When teams understand why a metric matters, they focus on improving outcomes rather than just meeting a target.
2. Balance Quantitative and Qualitative Metrics
Numbers alone don’t tell the full story. Blending quantitative and qualitative metrics ensures a more holistic approach.
Instead of only tracking deployment speed, consider code quality, customer feedback, and post-release stability.
For example, a team measured only on monthly issue cycle time may rush to close smaller tickets faster, creating an illusion of efficiency.
But comparing quarterly performance trends instead of month-to-month fluctuations provides a more realistic picture.
If issue resolution speed drops one month but leads to fewer reopened tickets in the following quarter, it’s a sign that higher-quality fixes are being implemented.
This approach prevents engineers from cutting corners to meet short-term targets.
3. Encourage Transparency and Collaboration
Silos breed metric manipulation. Cross-functional collaboration helps teams stay focused on impact rather than isolated KPIs.
There are project management tools available that can facilitate transparency by ensuring progress is measured holistically across teams.
Encouraging team-based goals instead of individual metrics also prevents engineers from prioritizing personal numbers over collective success.
When teams work together toward meaningful objectives, there’s less temptation to game the system.
4. Rotate Metrics Periodically
Static metrics become stale over time. Teams either get too comfortable optimizing for them or find ways to manipulate them.
Rotating key performance indicators every few months keeps teams engaged and discourages short-term gaming.
For example, a team initially measured on deployment speed might later be evaluated on post-release defect rates. This shifts focus to sustainable quality rather than just frequency.
5. Focus on Trends, Not Snapshots
Leaders should evaluate long-term trends rather than short-term fluctuations. If error rates spike briefly after a new rollout, that doesn’t mean the team is failing—it might indicate growing pains from scaling.
Looking at patterns over time provides a more accurate picture of progress and reduces the pressure to manipulate short-term results.
By designing a thoughtful metric system, building transparency, and emphasizing long-term improvement, teams can use metrics as a tool for growth rather than a rigid scoreboard.
Real-Life Example of Metric Manipulation and How it Was Solved
A leading SaaS company wanted to improve incident response efficiency, so they set a key metric: Mean Time to Resolution (MTTR). The goal was to drive faster fixes and reduce downtime. However, this well-intentioned target led to unintended behavior.
To keep MTTR low, engineers started prioritizing quick fixes over thorough solutions. Instead of addressing the root causes of outages, they applied temporary patches that resolved incidents on paper but led to recurring failures. Additionally, some incidents were reclassified or delayed in reporting to avoid negatively impacting the metric.
Recognizing the issue, leadership revised their approach. They introduced a composite measurement that combined MTTR with recurrence rates and post-mortem depth—incentivizing sustainable fixes instead of quick, superficial resolutions. They also encouraged engineers to document long-term improvements rather than just resolving incidents reactively.
This shift led to fewer repeat incidents, a stronger culture of learning from failures, and ultimately, a more reliable system rather than just an artificially improved MTTR.
How Software Engineering Intelligence Tools like Typo Can Help
To prevent MTTR from being gamed, the company deployed a software intelligence platform that provided deeper insights beyond just resolution speed. It introduced a set of complementary metrics to ensure long-term reliability rather than just fast fixes.
Key metrics that helped balance MTTR:
Incident Recurrence Rate – Measured how often the same issue reappeared after being "resolved." If the recurrence rate was high, it indicated superficial fixes rather than true resolution.
Time to Detect (TTD) – Ensured that issues were reported promptly instead of being delayed to manipulate MTTR data.
Code Churn in Incident Fixes – Tracked how frequently the same code area was modified post-incident, signaling whether fixes were rushed and required frequent corrections.
Post-Mortem Depth Score – Analyzed how thorough incident reviews were, ensuring teams focused on root cause analysis rather than just closing incidents quickly.
Customer Impact Score – Quantified how incidents affected end-users, discouraging teams from resolving issues in ways that degraded performance or introduced hidden risks.
Hotspot Analysis of Affected Services – Highlighted components with frequent issues, allowing leaders to proactively invest in stability improvements rather than just reactive fixes.
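As a sketch of how one of these signals might be computed, here is a minimal recurrence-rate check, assuming resolved incidents carry a root-cause tag (all names and data hypothetical):

```python
from collections import Counter

# Hypothetical incident records with root-cause tags assigned at resolution.
incidents = [
    {"id": "INC-1", "root_cause": "db-connection-pool"},
    {"id": "INC-2", "root_cause": "cache-stampede"},
    {"id": "INC-3", "root_cause": "db-connection-pool"},
    {"id": "INC-4", "root_cause": "db-connection-pool"},
    {"id": "INC-5", "root_cause": "bad-config-push"},
]

by_cause = Counter(i["root_cause"] for i in incidents)
# Every incident beyond the first for a given cause counts as a repeat.
repeats = sum(count - 1 for count in by_cause.values() if count > 1)
recurrence_rate = repeats / len(incidents)

print(f"recurrence rate: {recurrence_rate:.0%}")
for cause, count in by_cause.most_common():
    if count > 1:
        print(f"  recurring cause: {cause} ({count} incidents)")
```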
By monitoring these additional metrics, leadership ensured that engineering teams prioritized quality and stability alongside speed. The software intelligence tool provided real-time insights, automated anomaly detection, and historical trend analysis, helping the company move from a reactive to a proactive incident management strategy.
As a result, they saw:
✅ 50% reduction in repeat incidents within six months.
✅ Improved root cause resolution, leading to fewer emergency fixes.
✅ Healthier team workflows, reducing stress from unrealistic MTTR targets.
No single metric should dictate engineering success. Software intelligence tools provide a holistic view of system health, helping teams focus on real improvements instead of gaming the numbers. By leveraging multi-metric insights, engineering leaders can build resilient, high-performing teams that balance speed with reliability.
Conclusion
Engineering metrics should guide teams, not control them. When used correctly, they help track progress and improve efficiency. But when misused, they encourage manipulation, stress, and short-term thinking.
Striking the right balance between numbers and why these numbers are being monitored ensures teams focus on real impact. Otherwise, employees are bound to find ways to game the system.
For tech managers and CTOs, the key lies in finding hidden insights beyond surface-level numbers. This is where Typo comes in. With AI-powered SDLC insights, Typo helps you monitor efficiency, detect bottlenecks, and optimize development workflows—all while ensuring you ship faster without compromising quality.
86% of software engineering projects face challenges—delays, budget overruns, or failure.
31.1% of software projects are cancelled before completion due to poor planning and unaddressed delivery risks.
Missed deadlines lead to cost escalations. Misaligned goals create wasted effort. And a lack of risk mitigation results in technical debt and unstable software.
But it doesn’t have to be this way. By identifying risks early and taking proactive steps, you can keep your projects on track.
How to Mitigate Delivery Risks in Software Engineering
Here are some simple (and not so simple) steps we follow:
1. Identify Potential Risks During Project Planning
The earlier you identify potential challenges, the fewer issues you'll face later. Software engineering projects often derail because risks are not anticipated at the start.
By proactively assessing risks, you can make better trade-off decisions and avoid costly setbacks.
Start by conducting cross-functional brainstorming sessions with engineers, product managers, and stakeholders. Different perspectives help identify risks related to architecture, scalability, dependencies, and team constraints.
You can also use risk categorization to classify potential threats—technical risks, resource constraints, timeline uncertainties, or external dependencies. Reviewing historical data from past projects can also show patterns of common failures and help in better planning.
Tools like Typo help track potential risks throughout development to ensure continuous risk assessment. Mind mapping tools can help visualize dependencies and create a structured product roadmap, while SWOT analysis can help evaluate strengths, weaknesses, opportunities, and threats before execution.
2. Prioritize Risks Based on Likelihood and Impact
Not all risks carry the same weight. Some could completely derail your project, while others might cause minor delays. Prioritizing risks based on likelihood and impact ensures that engineering teams focus on what matters.
You can use a risk matrix to plot potential risks—assessing their probability against their business impact.
Applying the Pareto Principle (80/20 Rule) can further optimize software engineering risk management. Focus on the 20% of risks that could cause 80% of the problems.
Consider this breakdown of the top five engineering efficiency challenges:
The top 2 risks (Technical Debt and Security Vulnerabilities) account for 60% of total impact
The top 3 risks represent 75% of all potential issues
Following the Pareto Principle, focusing on these critical risks would address the majority of potential problems.
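A risk matrix reduces to a probability-times-impact score that can be ranked and cut off at the Pareto threshold. A minimal sketch with illustrative risks and values:

```python
# Each risk: (name, probability 0-1, impact 1-10). All values invented.
risks = [
    ("Technical debt",           0.8, 9),
    ("Security vulnerabilities", 0.5, 10),
    ("Third-party API changes",  0.4, 6),
    ("Key-person dependency",    0.3, 7),
    ("Scope creep",              0.6, 5),
]

scored = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
total = sum(p * impact for _, p, impact in scored)

running = 0.0
for name, p, impact in scored:
    running += p * impact
    print(f"{name:<26} score={p * impact:5.1f} cumulative={running / total:4.0%}")
# Per the 80/20 rule, concentrate mitigation effort on the risks that
# account for the first ~80% of the cumulative score.
```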
For engineering teams, tools like Typo’s code review platform can help analyze codebase & pull requests to find risks. It auto-generates fixes before you merge to master, helping you push the priority deliverables on time. This reduces long-term technical debt and improves project stability.
3. Implement Robust Development Practices
Ensuring software quality while maintaining delivery speed is a challenge. Test-Driven Development (TDD) is a widely adopted practice that improves software reliability, but testing alone can consume up to 25% of overall project time.
If testing delays occur frequently, it may indicate inefficiencies in the development process. Failure patterns across test types point to specific gaps:
High E2E test failures (45%) suggest environment inconsistencies between development and testing
Integration test failures (35%) indicate potential communication gaps between teams
Performance test issues (30%) point to insufficient resource planning
Security test failures (25%) highlight the need for security consideration in the planning phase
Lower unit test failures (15%) suggest good code-level quality but system-level integration challenges
Testing is essential to ensure the final product meets expectations.
To prevent testing from becoming a bottleneck, teams should automate workflows and leverage AI-driven tools. Platforms like Typo’s code review tool streamline testing by detecting issues early in development, reducing rework.
Beyond automation, code reviews play a crucial role in risk mitigation. Establishing peer-review processes helps catch defects, enforce coding standards, and improve code maintainability.
Similarly, using version control effectively—through branching strategies like Git Flow ensures that changes are managed systematically.
4. Monitor Progress Against Milestones
Tracking project progress against defined milestones is essential for mitigating delivery risks. Measurable engineering metrics help teams stay on track and proactively address delays before they become major setbacks.
Note that sometimes numbers without context can lead to metric manipulation, which must be avoided.
Break down development into achievable goals and track progress using monitoring tools. Platforms like Smartsheet help manage milestone tracking and reporting, ensuring that deadlines and dependencies are visible to all stakeholders.
For deeper insights, engineering teams can use advanced software development analytics. Typo, a software development analytics platform, allows teams to track DORA metrics, sprint analysis, team performance insights, incidents, goals, and investment allocation. These insights help identify inefficiencies, improve velocity, and ensure that resources align with business objectives.
By continuously monitoring progress and making data-driven adjustments, engineering teams can maintain predictable software delivery.
5. Communicating Effectively with Stakeholders
Misalignment between engineering teams and stakeholders can lead to unrealistic expectations and missed deadlines.
Start by tailoring communication to your audience. Technical teams need detailed sprint updates, while engineering board meetings require high-level summaries. Use weekly reports and sprint reviews to keep everyone informed without overwhelming them with unnecessary details.
You should also use collaborative tools to streamline discussions and documentation. Platforms like Slack enable real-time messaging, while Notion helps organize documentation and meeting notes.
Ensure transparency, alignment, and quick resolution of blockers.
6. Adapting to Changing Circumstances with Agile Methodologies
Agile methodologies help teams stay flexible and respond effectively to changing priorities.
The idea is to deliver work in small, manageable increments instead of large, rigid releases. This approach allows teams to incorporate feedback early and pivot when needed, reducing the risk of costly rework.
You should also build a feedback-driven culture by:
Encouraging open discussions about project challenges
Collecting feedback from users, developers, and stakeholders regularly
Holding retrospectives to analyze what’s working and what needs improvement
Making data-driven decisions based on sprint outcomes
Using the right tools enhances Agile project management. Platforms like Jira and ClickUp help teams manage sprints, track progress, and adjust priorities based on real-time insights.
7. Continuous Improvement and Learning
The best engineering teams continuously learn and refine their processes to prevent recurring issues and enhance efficiency.
Post-Mortem Analysis
After every major release, conduct post-mortems to evaluate what worked, what failed, and what can be improved. These discussions should be blame-free and focused on systemic improvements.
Categorize insights into themes such as process inefficiencies (e.g., bottlenecks in code review).
Document Knowledge
Retaining knowledge prevents teams from repeating mistakes. Use platforms like Notion or Confluence to document:
Best practices for coding, deployment, and debugging
Common failure points and their resolutions
Lessons learned from previous projects
Upskill and Reskill the Team
Software development evolves rapidly, and teams must stay updated. Encourage your engineers to:
Take part in workshops, hackathons, and coding challenges
Earn certifications in cloud computing, automation, and security
Use peer learning programs like mentorship and internal tech talks
Providing dedicated learning time and access to resources ensures that engineers stay ahead of technological and process-related risks.
By embedding learning into everyday workflows, teams build resilience and improve engineering efficiency.
Conclusion
Mitigating delivery risk in software engineering is crucial to prevent project delays and budget overruns.
Identifying risks early, implementing robust development practices, and maintaining clear communication can significantly improve project outcomes. Agile methodologies and continuous learning further enhance adaptability and efficiency.
With AI-powered tools like Typo that offer Software Development Analytics and Code Reviews, your teams can automate risk detection, improve code quality, and track key engineering metrics.
Professional service organizations within software companies maintain a delivery success rate hovering in the 70% range.
This percentage looks good. However, it hides significant inefficiencies given the substantial resources invested in modern software delivery lifecycles.
Even after investing extensive capital, talent, and time into development cycles, missing targets on every third project should not be acceptable.
After all, there’s a direct correlation between delivery effectiveness and organizational profitability.
However, the complexity of modern software development, with its intricate dependencies and quality demands, makes consistent on-time, on-budget delivery persistently challenging.
This reality makes it critical to master effective software delivery.
What is the Software Delivery Lifecycle?
The Software Delivery Lifecycle (SDLC) is a structured sequence of stages that guides software from initial concept to deployment and maintenance.
Consider Netflix's continuous evolution: when transitioning from DVD rentals to streaming, they iteratively developed, tested, and refined their platform. All this while maintaining uninterrupted service to millions of users.
A typical SDLC has six phases:
Planning: Requirements gathering and resource allocation
Design: System architecture and technical specifications
Development: Code writing and unit testing
Testing: Quality assurance and bug fixing
Deployment: Release to production environment
Maintenance: Ongoing updates and performance monitoring
Each phase builds upon the previous, creating a continuous loop of improvement.
Modern approaches often adopt Agile methodologies, which enable rapid iterations and frequent releases. This also allows organizations to respond quickly to market demands while maintaining high-quality standards.
7 Best Practices to Achieve Effective Software Delivery
Even the best software delivery processes can leak efficiency through poor engineering resource allocation and technical management. Applying the following software delivery best practices helps close those gaps:
1. Streamline Project Management
Effective project management requires systematic control over development workflows while maintaining strategic alignment with business objectives.
Modern software delivery requires precise distribution of resources, timelines, and deliverables.
Here’s what you should implement:
Set Clear Objectives and Scope: Implement SMART criteria for project definition. Document detailed deliverables with explicit acceptance criteria. Establish timeline dependencies using critical path analysis.
Effective Resource Allocation: Deploy project management tools for agile workflow tracking. Implement capacity planning using story point estimation. Utilize resource calendars for optimal task distribution. Configure automated notifications for blocking issues and dependencies.
Prioritize Tasks: Apply the MoSCoW method (Must-have, Should-have, Could-have, Won't-have) for feature prioritization. Implement RICE scoring (Reach, Impact, Confidence, Effort) for backlog management; a scoring sketch follows this list. Monitor feature value delivery through business impact analysis.
Continuous Monitoring: Track velocity trends across sprints using burndown charts. Monitor issue cycle time variations through Typo dashboards. Implement automated reporting for sprint retrospectives. Maintain real-time visibility through team performance metrics.
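As referenced in the prioritization bullet above, here is a minimal RICE scoring sketch. Feature names and inputs are invented:

```python
# RICE prioritization: score = (reach * impact * confidence) / effort.
features = [
    # (name, reach/quarter, impact 0.25-3, confidence 0-1, effort in person-months)
    ("SSO login",         4000, 2.0, 0.8, 3),
    ("Dark mode",         9000, 0.5, 0.9, 1),
    ("Audit log export",   600, 3.0, 0.7, 2),
]

def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Rank the backlog by RICE score, highest first.
for name, *inputs in sorted(features, key=lambda f: rice(*f[1:]), reverse=True):
    print(f"{name:<18} RICE = {rice(*inputs):8.1f}")
```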
2. Build Quality Assurance into Each Stage
Quality assurance integration throughout the SDLC significantly reduces defect discovery costs.
Early detection and prevention strategies prove more effective than late-stage fixes, ensuring your time is used to maximum effect and helping you achieve engineering efficiency.
Some ways to set up a robust QA process:
Shift-Left Testing: Implement behavior-driven development (BDD) using Cucumber or SpecFlow. Integrate unit testing within CI pipelines. Conduct code reviews with automated quality gates. Perform static code analysis during development.
Automated Testing: Deploy Selenium WebDriver for cross-browser testing. Implement Cypress for modern web application testing. Utilize JMeter for performance testing automation. Configure API testing using Postman/Newman in CI pipelines.
QA as Collaborative Effort: Establish three-amigo sessions (Developer, QA, Product Owner). Implement pair testing practices. Conduct regular bug bashes. Share testing responsibilities across team roles.
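As a minimal illustration of shift-left unit testing, here is a sketch runnable with pytest on every commit in a CI pipeline; the calculate_discount function is a hypothetical example, not taken from any particular codebase.

```python
# test_pricing.py -- a minimal shift-left unit test, runnable with `pytest`.
# The function under test is hypothetical; in practice it would live in
# your application code and be imported here.

def calculate_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamped to the 0-100% range."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

def test_standard_discount():
    assert calculate_discount(200.0, 10) == 180.0

def test_discount_is_clamped():
    # Out-of-range inputs must not produce negative or inflated prices.
    assert calculate_discount(50.0, 150) == 0.0
    assert calculate_discount(50.0, -20) == 50.0
```

Running such tests automatically on every push is what catches the defect minutes after it is written rather than weeks later in staging.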
3. Enable Team Collaboration
Efficient collaboration accelerates software delivery cycles while reducing communication overhead.
There are tools and practices available that facilitate seamless information flow across teams.
Here’s how you can ensure the collaboration is effective in your engineering team:
Foster open communication with dedicated Slack channels, Notion workspaces, daily standups, and video conferencing.
Encourage cross-functional teams with skill-balanced pods, shared responsibility matrices, cross-training, and role rotations.
Streamline version control and documentation with Git branching strategies, pull request templates, automated pipelines, and wiki systems.
4. Implement Strong Security Measures
Security integration throughout development prevents vulnerabilities and ensures compliance. Rather than fixing breaches after the fact, it's more effective to take preventive measures.
To implement strong security measures:
Implement SAST tools like SonarQube in CI pipelines.
Deploy DAST tools for runtime analysis.
Conduct regular security reviews using OWASP guidelines.
Implement automated vulnerability scanning.
Apply role-based access control (RBAC) principles (see the sketch after this list).
Implement multi-factor authentication (MFA).
Use secrets management systems.
Monitor access patterns for anomalies.
Maintain GDPR compliance documentation and ISO 27001 controls.
Conduct regular SOC 2 audits and automate compliance reporting.
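Here is a minimal sketch of the RBAC principle from the list above, assuming a simple role-to-permission mapping rather than any particular framework; the roles, permissions, and users are hypothetical.

```python
# Minimal role-based access control (RBAC) sketch.
# Roles, permissions, and users below are hypothetical.
from functools import wraps

ROLE_PERMISSIONS = {
    "admin":     {"read", "write", "delete"},
    "developer": {"read", "write"},
    "viewer":    {"read"},
}

def requires_permission(permission):
    """Reject calls from users whose role lacks the given permission."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(user["role"], set()):
                raise PermissionError(f"{user['name']} may not {permission}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_permission("delete")
def delete_deployment(user, deployment_id):
    print(f"{user['name']} deleted deployment {deployment_id}")

delete_deployment({"name": "alice", "role": "admin"}, 42)   # allowed
# delete_deployment({"name": "bob", "role": "viewer"}, 42)  # raises PermissionError
```

Centralizing the role-to-permission map like this makes access rules auditable in one place instead of scattered across the codebase.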
5. Build Scalability into Process
Scalable architectures directly impact software delivery effectiveness by enabling seamless growth and consistent performance even when the load increases.
Strategic implementation of scalable processes removes bottlenecks and supports rapid deployment cycles.
Here’s how you can build scalability into your processes:
Scalable Architecture: Implement microservices architecture patterns. Deploy container orchestration using Kubernetes. Utilize message queues for asynchronous processing. Implement caching strategies (see the sketch after this list).
Cloud Infrastructure: Configure auto-scaling groups in AWS/Azure. Implement infrastructure as code using Terraform. Deploy multi-region architectures. Utilize content delivery networks (CDNs).
Monitoring and Performance: Deploy Typo for system health monitoring. Implement distributed tracing using Jaeger. Configure alerting based on SLOs. Maintain performance dashboards.
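To illustrate one of the caching strategies listed above, here is a minimal in-process sketch using Python's standard-library lru_cache; a production system would more likely use a shared cache such as Redis, but the principle is the same.

```python
# Minimal caching sketch using the standard library.
# An in-process LRU cache absorbs repeated reads so the expensive
# backend call only runs on cache misses.
from functools import lru_cache
import time

@lru_cache(maxsize=1024)
def load_user_profile(user_id: int) -> dict:
    """Pretend this is a slow database or service call."""
    time.sleep(0.2)  # simulated backend latency
    return {"id": user_id, "plan": "pro"}

start = time.perf_counter()
load_user_profile(7)                   # miss: hits the "backend"
load_user_profile(7)                   # hit: served from cache
print(f"elapsed: {time.perf_counter() - start:.2f}s")
print(load_user_profile.cache_info())  # hits=1, misses=1
```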
6. Leverage CI/CD
CI/CD automation streamlines deployment processes and reduces manual errors. Modern pipelines enable rapid, reliable software delivery through automated testing and deployment sequences. Integration with version control systems ensures consistent code quality and deployment readiness. The result is fewer delays and more effective software delivery.
7. Measure Success Metrics
Effective software delivery requires precise measurement through carefully selected metrics. These metrics provide actionable insights for process optimization and delivery enhancement.
Here are some metrics to keep an eye on:
Deployment Frequency measures release cadence to production environments.
Change Lead Time spans from code commit to successful production deployment.
Mean Time to Recovery quantifies service restoration speed after production incidents.
Code Coverage reveals test automation effectiveness across the codebase.
Technical Debt Ratio compares remediation effort against total development cost.
These metrics provide quantitative insights into delivery pipeline efficiency and help identify areas for continuous improvement.
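As a rough sketch of how two of these metrics can be computed from raw data, assuming hypothetical deployment dates and cost figures:

```python
# Sketch: computing deployment frequency and technical debt ratio.
# Input data below is hypothetical, for illustration only.
from datetime import date

deployments = [date(2025, 3, d) for d in (3, 5, 10, 12, 19, 24, 28)]

def deployment_frequency(deploys, period_days):
    """Average deployments per week over the observation period."""
    return len(deploys) / (period_days / 7)

def technical_debt_ratio(remediation_cost, development_cost):
    """Remediation effort as a percentage of total development cost."""
    return 100 * remediation_cost / development_cost

print(f"Deploys/week: {deployment_frequency(deployments, 28):.1f}")
print(f"Tech debt ratio: {technical_debt_ratio(120, 1500):.1f}%")
```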
Challenges in the Software Delivery Lifecycle
The SDLC has multiple technical challenges at each phase. Some of them include:
1. Planning Phase Challenges
Teams grapple with requirement volatility leading to scope creep. API dependencies introduce integration uncertainties, while microservices architecture decisions significantly impact system complexity. Resource estimation becomes particularly challenging when accounting for potential technical debt.
2. Design Phase Challenges
Design phase complications center on system scalability requirements conflicting with performance constraints. Teams must carefully balance cloud infrastructure selections against cost-performance ratios. Database sharding strategies introduce data consistency challenges, while service mesh implementations add layers of operational complexity.
3. Development Phase Challenges
Development phase issues include code versioning conflicts across distributed teams. Software engineers frequently face memory leaks in complex object lifecycles and race conditions in concurrent operations. Rapid sprint cycles often result in technical debt accumulation, while build pipeline failures stem from dependency conflicts.
4. Testing Phase Challenges
Testing becomes increasingly complex as teams deal with coverage gaps in async operations and integration failures across microservices. Performance bottlenecks emerge during load testing, while environmental inconsistencies lead to flaky tests. API versioning introduces additional regression testing complications.
5. Deployment Phase Challenges
Deployment challenges revolve around container orchestration failures and blue-green deployment synchronization. Teams must manage database migration errors, SSL certificate expirations, and zero-downtime deployment complexities.
6. Maintenance Phase Challenges
In the maintenance phase, teams face log aggregation challenges across distributed systems, along with memory utilization spikes during peak loads. Cache invalidation issues and service discovery failures in containerized environments require constant attention, while patch management across multiple environments demands careful orchestration.
These challenges compound through modern CI/CD pipelines, with Infrastructure as Code introducing additional failure points.
Effective monitoring and observability become crucial success factors in managing them.
Software engineering intelligence tools like Typo provide precise visibility into team performance and sprint delivery, helping you optimize resource allocation and reduce tech debt.
Conclusion
Effective software delivery depends on precise performance measurement. Without visibility into resource allocation and workflow efficiency, optimization remains impossible.
Typo addresses this fundamental need. The platform delivers insights across development lifecycles - from code commit patterns to deployment metrics. AI-powered code analysis automates optimization, reducing technical debt while accelerating delivery. Real-time dashboards expose productivity trends, helping you with proactive resource allocation.
Transform your software delivery pipeline with Typo's advanced analytics and AI capabilities.
In theory, everyone knows that resource allocation acts as the anchor for project success — be it engineering or any business function.
But still, engineering teams are often misconstrued as cost centres, for many reasons:
Difficulty quantifying engineering's direct financial contribution
Performance is often measured by cost reduction rather than value creation
Direct revenue generation is not immediately visible
Complex to directly link engineering work to revenue
Expenses like salaries, equipment, and R&D are seen as pure expenditures
And these are only the tip of the iceberg.
But how do we transform these cost centres into revenue-generating powerhouses? The answer lies in strategic resource allocation frameworks.
In this blog, we look into the complexity of resource allocation for engineering leaders—covering visibility into team capacity, cost structures, and optimisation strategies.
Let’s dive right in!
What is Resource Allocation in Project Management?
Resource allocation in project management refers to the strategic assignment of available resources—such as time, budget, tools, and personnel—to tasks and objectives to ensure efficient project execution.
With tight timelines and complex deliverables, resource allocation becomes critical to meeting engineering project goals without compromising quality.
However, engineering teams often face challenges like resource overallocation, which leads to burnout and underutilisation, resulting in inefficiency. A lack of necessary skills within teams can further stall progress, while insufficient resource forecasting hampers the ability to adapt to changing project demands.
Project managers and engineering leaders play a crucial role in dealing with these challenges. By analysing workloads, ensuring team members have the right skill sets, and using tools for forecasting, they create an optimised allocation framework.
This helps improve project outcomes and aligns engineering functions with overarching business goals, ensuring sustained value delivery.
Why Resource Allocation Matters for Engineering Teams
Resource allocation is more than just an operational necessity—it’s a critical factor in maximizing value delivery.
In software engineering, where success is measured by metrics like throughput, cycle time, and defect density, allocating resources effectively can dramatically influence these key performance indicators (KPIs).
Misaligned resources increase variance in these metrics, leading to unpredictable outcomes and lower ROI.
Let’s see how precise resource allocation shapes engineering success:
1. Alignment with Project Goals and Deliverables
Effective resource allocation ensures that engineering efforts directly align with project objectives, which helps reduce misdirection and, in turn, increases output. By mapping resources to deliverables, teams can focus on priorities that drive value, meeting business and customer expectations.
2. Prevention of Bottlenecks and Over-allocations
Time and again, we have seen poor resource planning lead to bottlenecks that disrupt well-established workflows and delay progress. Over-allocated resources, on the other hand, lead to employee burnout and diminished efficiency. Strategic allocation eliminates these pitfalls by balancing workloads and maintaining operational flow.
3. Ensuring Optimal Productivity and Quality
With a well-structured resource allocation framework, engineering teams can maintain a high level of productivity without compromising on quality. It enables leaders to identify skill gaps and equip teams with the right resources, fostering consistent output.
4. Creating Visibility and Transparency for Engineering Leaders
Resource allocation provides engineering leaders with a clear overview of team capacities, progress, and costs. This transparency enables data-driven decisions, proactive adjustments, and alignment with the company’s strategic vision.
5. The Risks of Poor Allocation
Improper resource allocation can lead to cascading issues, such as missed deadlines, inflated budgets, and fragmented coordination across teams. These challenges not only hinder project success but also erode stakeholder trust. This makes resource allocation a non-negotiable pillar of effective engineering project management.
Key Elements of Resource Allocation for Engineering Leaders
Resource allocation typically revolves around five primary types of resources. Irrespective of the industry you cater to and the scope of your engineering projects, you must allocate these effectively.
1. Personnel
Assigning tasks to team members with the appropriate skill sets is fundamental. For example, a senior developer with expertise in microservices architecture should lead API design, while junior engineers can handle less critical feature development under supervision. Balanced workloads prevent burnout and ensure consistent output, measured through velocity metrics in tools like Typo.
2. Time
Deadlines should align with task complexity and team capacity. For example, completing a feature that involves integrating a third-party payment gateway might require two sprints, accounting for development, testing, and debugging. Agile sprint planning, paired with tools like Typo that analyze sprints and bring predictability to delivery, can help maintain project momentum.
3. Cost
Cost allocation requires understanding resource rates and expected utilization. For example, deploying a cloud-based CI/CD pipeline incurs ongoing costs that should be evaluated against in-house alternatives. Tracking project burn rates with cost management tools helps avoid budget overruns.
4. Infrastructure
Teams must have access to essential tools, software, and infrastructure, such as cloud environments, development frameworks, and collaboration platforms like GitHub or Slack. For example, setting up Kubernetes clusters early ensures scalable deployments, avoiding bottlenecks during production scaling.
5. Visibility
Real-time dashboards in tools like Typo offer insights into resource utilization, team capacity, and progress. These systems allow leaders to identify bottlenecks, reallocate resources dynamically, and ensure alignment with overall project goals, enabling proactive decision-making.
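Putting the personnel, time, and visibility pieces together, here is a minimal sketch of the kind of utilization check a dashboard might run over sprint data; the names, hours, and capacity figure are hypothetical.

```python
# Sketch: flag over- and under-allocated team members for a sprint.
# All names, hours, and the capacity figure below are hypothetical.
CAPACITY_HOURS = 70  # assumed per-person capacity per sprint

allocations = {"Ana": 82, "Raj": 66, "Mei": 41}

for person, hours in allocations.items():
    utilization = hours / CAPACITY_HOURS
    if utilization > 1.0:
        status = "over-allocated (burnout risk)"
    elif utilization < 0.7:
        status = "under-utilized"
    else:
        status = "balanced"
    print(f"{person}: {utilization:.0%} -> {status}")
```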
When you have a bird’s eye view of your team's activities, you can generate insights about the blockers that your team consistently faces and the patterns in delays and burnouts. That said, let’s look at some strategies to optimize the cost of your software engineering projects.
5 Cost Optimization Strategies in Software Engineering Projects
Engineering project management comes with a diverse set of resource allocation requirements, and the combination of resources needed to achieve engineering efficiency can drive costs up. Here are some strategies to keep them in check:
1. Resource Leveling
Resource leveling focuses on distributing workloads evenly across the project timeline to prevent overallocation and downtime.
If a database engineer is required for two overlapping tasks, adjusting timelines to sequentially allocate their time ensures sustained productivity without overburdening them.
This approach avoids the costs of hiring temporary resources or the delays caused by burnout.
Techniques like critical path analysis and capacity planning tools can help achieve this balance, ensuring that resources are neither underutilized nor overextended.
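Here is a toy sketch of the leveling idea from the database-engineer example above: tasks that need the same person are pushed later until that person is free. All task data is hypothetical.

```python
# Sketch: naive resource leveling -- sequence tasks that need the same
# person so no one works two tasks at once. Task data is hypothetical.
tasks = [
    # (task, required engineer, duration in days, earliest start day)
    ("schema migration", "db_engineer", 3, 0),
    ("query tuning",     "db_engineer", 2, 0),  # overlaps -> must shift
    ("api endpoints",    "backend_dev", 4, 0),
]

next_free = {}  # engineer -> first day they are available
schedule = []
for name, engineer, duration, earliest in tasks:
    start = max(earliest, next_free.get(engineer, 0))
    next_free[engineer] = start + duration
    schedule.append((name, engineer, start, start + duration))

for name, engineer, start, end in schedule:
    print(f"{name:16} {engineer:12} days {start}-{end}")
```

The "query tuning" task slides to days 3-5 instead of requiring a second database engineer, which is exactly the trade-off leveling makes: a slightly longer timeline in exchange for sustainable workloads.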
2. Automation and Tools
Automating routine tasks and using project management tools are key strategies for cost optimization.
Tools like Jira and Typo streamline task assignment, track progress, and provide visibility into resource utilization.
Automation in areas like testing (e.g., Selenium for automated UI tests) or deployment (e.g., Jenkins for CI/CD pipelines) reduces manual intervention and accelerates delivery timelines.
These tools enhance productivity and also provide detailed cost tracking, enabling data-driven decisions to cut unnecessary expenditures.
3. Continuous Review
Cost optimization requires continuous evaluation of resource allocation. Weekly or bi-weekly reviews using metrics like sprint velocity, resource utilization rates, and progress against deliverables can reveal inefficiencies.
For example, if a developer consistently completes tasks ahead of schedule, their capacity can be reallocated to critical-path activities. This iterative process ensures that resources are used optimally throughout the project lifecycle.
4. Cross-Functional Collaboration
Collaboration across teams and departments fosters alignment and identifies cost-saving opportunities. For example, early input from DevOps, QA, and product management can ensure that resource estimates are realistic and reflect the project's actual needs. Using collaborative tools helps surface hidden dependencies or redundant tasks, reducing waste and improving resource efficiency.
5. Avoiding Scope Creep
Scope creep is a common culprit in cost overruns. CTOs and engineering managers must establish clear boundaries and a robust change management process to handle new requests.
For example, additional features can be assessed for their impact on timelines and budgets using a prioritization matrix.
Conclusion
Efficient resource allocation is the backbone of successful software engineering projects. It drives productivity, optimises cost, and aligns the project with business goals.
With strategic planning, automation, and collaboration, engineering leaders can increase value delivery.
Take the next step in optimizing your software engineering projects—explore advanced engineering productivity features of Typoapp.io.
Imagine you are on a solo road trip with a set destination. You constantly check your map and fuel gauge to see whether you are on track. Now, replace the road trip with an agile project and the map with a burndown chart.
Just like a map guides your journey, a burndown chart provides a clear picture of how much work has been completed and what remains.
What is a Burndown Chart?
Burndown charts are visual representations of the team’s progress used for agile project management. They are useful for scrum teams and agile project managers to assess whether the project is on track.
Burndown charts are generally of three types:
Product Burndown Chart
The product burndown chart focuses on the big picture and visualizes the entire project. It determines how many product goals the development team has achieved so far and the remaining work.
Sprint Burndown Chart
The sprint burndown chart focuses on the ongoing sprint, indicating progress towards completing the sprint backlog.
Epic Burndown Chart
This chart focuses on how your team performs against the work in the epic over time. It helps to track the advancement of major deliverables within a project.
When it comes to agile project management, a burndown chart is a fundamental tool, and understanding its key components is crucial. Let's break down what makes up a burndown chart and why each part is essential.
Core Elements of a Burndown Chart
Time Representation: The X-Axis
The horizontal axis, or X-axis, signifies the timeline for project completion. For projects following the scrum methodology, this axis often shows the series of sprints. Alternatively, it might detail the remaining days, allowing teams to track timelines against project milestones.
Effort Representation: The Y-Axis
The vertical axis, known as the Y-axis, measures the effort still needed to reach project completion. This is often quantified using story points, a method that helps estimate the work complexity and the labor involved in finishing user stories or tasks.
Real Progress Line
This line on the chart shows how much work remains after each sprint or day. It gives a tangible picture of team progress. Since every project encounters unexpected obstacles or shifts in scope, this line is usually irregular, contrasting with the straight trajectory of planned efforts.
Benchmark Progress Line
Also known as the ideal effort line, this is the hypothetical path of perfectly steady progress without setbacks. It generally runs in a straight line, descending from total projected work to zero. This line serves as a standard, assisting teams in assessing how their actual efforts measure up against expected outcomes.
Quantifying Effort: Story Points
Story points are a tool often used to put numbers to the effort needed for completing tasks or larger work units like epics. They are plotted on the Y-axis of the burndown chart, while the X-axis aligns with time, such as the number of ongoing sprints.
Sprint Objectives
A clear goal helps maintain focus during each sprint. On the burndown chart, this is represented by a specific target line. Even though actual progress might not always align with this objective, having it illustrated on the chart aids in driving the team towards its goals.
Incorporating these components into your burndown chart not only provides a visual representation of project progress but also serves as a guide for continual team alignment and focus.
How Does a Burndown Chart Work?
A burndown chart shows the amount of work remaining (on the vertical axis) against time (on the horizontal axis). It includes an ideal work completion line and the actual work progress line. As tasks are completed, the actual line "burns down" toward zero. This allows teams to identify if they are on track to complete their goals within the set timeline and spot deviations early.
Understanding the Ideal Effort Line
The ideal effort line is your project's roadmap, beginning with the total estimated work at the start of a sprint and sloping downward to zero by the end. It acts as a benchmark to gauge your team's progress and ensure your plan stays on course.
Tracking the Actual Effort Line
This line reflects your team's real-world progress by showing the remaining effort for tasks at the end of each day. Comparing it to the ideal line helps determine if you are ahead, on track, or falling behind, which is crucial for timely adjustments.
Spotting Deviations
Significant deviations between the actual and ideal lines can signal issues. If the actual line is above the ideal, delays are occurring. Conversely, if below, tasks are being completed ahead of schedule. Early detection of these deviations allows for prompt problem-solving and maintaining project momentum.
Recognizing Patterns and Trends
Look for trends in the actual effort line. A flat or slow decline might indicate bottlenecks or underestimated tasks, while a steep drop suggests increased productivity. Identifying these patterns can help refine your workflows and enhance team performance.
Evaluating the Projection Cone
Some burndown charts include a projection cone, predicting potential completion dates based on current performance. This cone, ranging from best-case to worst-case scenarios, helps assess project uncertainty and informs decisions on resource allocation and risk management.
By mastering these elements, you can effectively interpret burndown charts, ensuring your project management efforts lead to successful outcomes.
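To make the ideal-versus-actual comparison concrete, here is a small sketch that computes the ideal line and flags deviation day by day; the sprint numbers are hypothetical.

```python
# Sketch: compute the ideal burndown line and flag deviation from it.
# Sprint numbers below are hypothetical.
TOTAL_POINTS = 40
SPRINT_DAYS = 10

# Remaining story points observed at the end of each day so far.
actual_remaining = [40, 38, 36, 35, 33, 28]

for day, actual in enumerate(actual_remaining):
    # Ideal line: a straight descent from TOTAL_POINTS to zero.
    ideal = TOTAL_POINTS * (1 - day / SPRINT_DAYS)
    delta = actual - ideal
    status = "behind" if delta > 0.5 else "ahead" if delta < -0.5 else "on track"
    print(f"day {day}: ideal {ideal:5.1f}, actual {actual:2d} ({status})")
```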
How to Track Daily Progress and Remaining Work in a Burndown Chart?
Burndown charts are invaluable tools for monitoring progress in project management. They provide a clear visualization of work completed versus the work remaining.
Steps to Effectively Track Progress:
Set Initial Estimates: Begin by estimating the total effort required for your project. This lays the groundwork for tracking actual progress.
Daily Updates: Use your burndown chart to record the time spent on tasks each day. This will help to visualize how work is being completed over time.
Pacing Toward Goals:
Monitor Completed Tasks: Each task should be logged with the time taken to complete it. This gives insight into your efficiency and assists in forecasting future task completion times.
Evaluate Daily Against Estimates: Compare your daily progress to your initial estimates. By the conclusion of a specified period, such as five days, you should check if your completed hours align with your predicted timeline (e.g., 80 hours).
Visual Tools:
Use a Chart or Timeline Tool: A burndown chart could be created using spreadsheet software like Excel or Google Sheets, or specialized tools such as Trello or Jira, which offer built-in features for this purpose.
Track Remaining Work: Your chart should show a descending line representing the decrease in work as tasks are completed. Ideally, it should slope downwards steadily towards zero, indicating that you're on track.
By adopting these methods, teams can efficiently track their progress, ensuring that they meet their objectives within the desired timeframe. Analyzing the slope of the burndown chart regularly helps in making proactive adjustments as needed.
Purpose of the Burndown Chart
A burndown chart is a visual tool used by agile teams to track progress. Here is a breakdown of its key functions:
Identify Issues Early
Burndown charts allow agile teams to visualize the remaining work against time, which helps spot deviations from the expected progress early. Teams can identify bottlenecks or obstacles in time to solve problems proactively before they escalate.
Visualize Sprint Progress
The clear graphical representation of work completed versus work remaining makes it easy for teams to see how much they have accomplished and how much is left to do within a sprint. This visualization helps maintain focus and alignment among team members.
Boost Team Morale
The chart enables the team to see their tangible progress which significantly boosts their morale. As they observe the line trending downward, indicating completed tasks, it fosters a sense of achievement and motivates them to continue performing well.
Improve Estimation
After each sprint, teams can analyze the burndown chart to evaluate their estimation accuracy regarding task completion times. This retrospective analysis helps refine future estimates and improves planning for upcoming sprints.
How to Estimate Effort for a Burndown Chart
Estimating effort for a burndown chart involves determining the amount of work needed to complete a sprint within a specific timeframe. Here's a step-by-step approach to getting this estimation right:
Define Your Ideal Baseline
Start by identifying the total amount of work you expect to accomplish in the sprint. This requires knowing your team's productivity levels and the sprint duration. For instance, if your sprint lasts 5 days and your team can handle 80 hours in total, your baseline is 16 hours per day.
Break Down the Work
Next, divide the work into manageable chunks. List tasks or activities with their respective estimated hours. This helps in visualizing the workload and setting realistic daily goals.
Example Breakdown:
Task A: 20 hours
Task B: 30 hours
Task C: 30 hours
Determine Daily Workload
With your total hours known, distribute these hours across the sprint days. Begin by plotting your starting effort on a graph, like 80 hours on the first day, and then reduce it daily as work progresses.
Daily Tracking For a 5-Day Sprint:
Day 1: Start with 80 hours
Day 2: Reduce to 64 hours
Day 3: Decrease further to 48 hours
Day 4: Lower to 32 hours
Day 5: Finish with 16 hours
Monitor Your Progress
As the sprint moves forward, track the actual hours spent versus the estimated ones. This allows you to adjust and manage any deviations promptly.
By following these steps, you ensure that your burndown chart accurately reflects your team's workflow and helps in making informed decisions throughout the sprint.
How Does a Burndown Chart Help Prevent Scope Creep in Projects?
A burndown chart is a vital tool in project management, serving as a visual representation of work remaining versus time. Although it might not capture every aspect of a project’s trajectory, it plays a key role in preventing scope creep.
Firstly, a burndown chart provides a clear overview of how much work has been completed and what remains, ensuring that project teams stay focused on the goal. By continuously tracking progress, teams can quickly identify any deviation from the planned trajectory, which is often an early signal of scope creep.
However, a burndown chart doesn’t operate in isolation. It is most effective when used alongside other project management tools:
Backlog Management: A well-maintained product backlog is essential. It allows the team to prioritize tasks and ensures that only the most important items get addressed within the project's timeframe.
Change Control Processes: Even though a burndown chart might not show changes directly, integrating it with a robust change control process helps in capturing and managing these alterations systematically. This prevents unauthorized changes from bloating the project scope.
By consistently monitoring the relationship between time and completed work, project managers can maintain control and make informed decisions quickly. This proactive approach helps teams stay aligned with the project's original vision, thus minimizing the risk of scope creep.
Burndown Chart vs. Burnup Chart
Understanding the Difference Between Burndown and Burnup Charts
Both burndown and burnup charts are essential tools for managing projects, especially in agile environments. They provide visual insights into project progress, but they do so in different ways, each offering unique advantages.
Burndown Chart: Tracking Work Decrease
A burndown chart focuses on recording how much work remains over time. It's a straightforward way to monitor project progress by showing the decline of remaining tasks. The chart typically features:
X-Axis: Represents time over the life cycle of a project.
Y-Axis: Displays the amount of work left to complete, often measured in hours or story points.
This type of chart is particularly useful for spotting bottlenecks, as any deviation from the ideal line can indicate a pace that’s too slow to meet the deadline.
Burnup Chart: Visualizing Work Completion
In contrast, a burnup chart highlights the work that has been completed, alongside the total work scope. Its approach includes:
X-Axis: Also represents time.
Y-Axis: Shows cumulative work completed alongside total project scope.
The key advantage of a burnup chart is its ability to display scope changes clearly. This is ideal when accommodating new requirements or adjusting deliverables, as it shows both progress and scope alterations without losing clarity.
Summary
While both charts are vital for tracking project dynamics, their perspectives differ. Burndown charts excel at displaying how rapidly teams are clearing tasks, while burnup charts provide a broader view by also accounting for changes in project scope. Using them together offers a comprehensive picture of both time management and scope management within a project.
How to Create a Burndown Chart in Excel?
Step 1: Create Your Table
Open a new sheet in Excel and create a new table that includes 3 columns.
The first column should list the dates of the sprint, the second should contain the ideal burndown (the ideal rate at which work will be completed), and the last should contain the actual burndown, updated as story points get completed.
Step 2: Add Data in these Columns
Now, fill in the data accordingly. This includes the dates of your sprints and numbers in the Ideal Burndown column indicating the desired number of tasks remaining after each day of a, say, 10-day sprint.
As you complete tasks each day, update the spreadsheet to document the number of tasks you can finish under the ‘Actual Burndown’ column.
Step 3: Create a Burndown Chart
Now, it’s time to convert the data into a graph. To create a chart, follow these steps: Select the three columns > Click ‘Insert’ on the menu bar > Select the ‘Line chart’ icon, and generate a line graph to visualize the different data points you have in your chart.
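If you prefer scripting the chart over building it in Excel, a minimal matplotlib sketch with hypothetical sprint data would look like this:

```python
# Sketch: the same burndown chart built with matplotlib instead of Excel.
# Sprint data below is hypothetical.
import matplotlib.pyplot as plt

days = list(range(11))                 # a 10-day sprint (day 0 through 10)
ideal = [100 - 10 * d for d in days]   # straight descent to zero
actual = [100, 96, 90, 88, 80, 70, 65, 52, 40, 22, 5]

plt.plot(days, ideal, "--", label="Ideal burndown")
plt.plot(days, actual, marker="o", label="Actual burndown")
plt.xlabel("Sprint day")
plt.ylabel("Story points remaining")
plt.title("Sprint burndown")
plt.legend()
plt.show()
```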
How to Compile the Final Dataset for a Burndown Chart?
Compiling the final dataset for a burndown chart is an essential step in monitoring project progress. This process involves a few key actions that help translate raw data into a clear visual representation of your work schedule.
Step 1: Compare Initial Estimates with Actual Work Time
Start by gathering your initial effort estimates. These estimates outline the anticipated time or resources required for each task. Then, access your actual work logs, which you should have been maintaining consistently. By comparing these figures, you’ll be able to assess where your project stands in relation to your original forecasts.
Step 2: Keep Logs Accessible
Ensure that your logged work data is kept in a centralized and accessible location. This strategy fosters team collaboration and transparency, allowing team members to view and update logs as necessary. It also makes it easier to pull together data when you’re ready to update your burndown chart.
Step 3: Visualize with a Burndown Chart
Once your data is compiled, the next step is to plot it on your burndown chart. This graph will visually represent your team's progress, comparing estimated efforts against actual performance over time. Using project management software can simplify this step significantly, as many tools offer features to automate chart updates, streamlining both creation and maintenance efforts.
By following these steps, you’ll be equipped to create an accurate and insightful burndown chart, providing a clear snapshot of project progress and helping to ensure timelines are met efficiently.
Limitations of Burndown Chart
One-Dimensional View
A burndown chart mainly tracks the amount of work remaining, measured in story points or hours. This one-dimensional view offers no insight into the complexity or nature of the tasks, oversimplifying project progress.
Unable to Detect Quality Issues or Technical Debt
Burndown charts fail to account for quality issues or the accumulation of technical debt. Agile teams might complete tasks on time but compromise on quality, creating long-term challenges that remain invisible in the chart.
Lack of Visibility into Team Dynamics
The burndown chart does not capture team dynamics or collaboration patterns. It fails to show how team members are working together, which is vital for understanding productivity and identifying areas for improvement.
Mask Underlying Problems
Problems related to story estimation and sprint planning might go unnoticed. When a team consistently underestimates tasks, the chart may still show a downward trend, masking deeper issues that need to be addressed.
Changes in Work Scope
Another disadvantage of burndown charts is that they do not reflect changes in scope or interruptions that occur during a sprint. If new tasks are added or priorities shift, the chart may give a misleading impression of progress.
Unable to Show Work Distribution and Bottlenecks
The chart does not provide insights into how work is distributed among team members or highlight bottlenecks in the workflow. This lack of detail can hinder efforts to optimize team performance and resource allocation.
What Key Components Are Missing in Burndown Charts for a Complete View of Sprints?
Burndown charts are great tools for tracking progress in a sprint. However, they don’t provide a full picture of sprint performance as they lack the following dimensions:
Real-time Sprint Monitoring Metrics
Velocity Stability Indicators
Sprint velocity variance: It tracks the difference between planned and actual sprint velocities to assess predictability.
Story completion rate by size category: It evaluates the team's ability to complete stories of varying complexities.
Average time in each status: It highlights bottlenecks by analyzing how long stories stay in each stage (To Do, In Progress, etc.).
Number of stories carried over: It measures unfinished work moved to the next sprint, which impacts planning accuracy.
Scope change percentage: It reflects how much the sprint backlog changes during execution.
Quality Metrics
Code review coverage and throughput: It highlights the extent and speed of code reviews to ensure quality.
Unit test coverage trends: It measures improvements or regressions in unit test coverage over time.
Number of bugs found: It monitors the quality of sprint deliverables.
Technical debt items identified: It evaluates areas where shortcuts may have introduced long-term risks.
Build and deployment success rate: It highlights stability in CI/CD processes.
Production incidents related to sprint work: It connects sprint output to real-world impact.
Team Collaboration Indicators
Code review response time: It measures how quickly team members review code, impacting workflow speed.
Pair programming hours: It reflects collaborative coding time, boosting knowledge transfer and quality.
Knowledge-sharing sessions: This indicates team growth through discussions or sessions.
Cross-functional collaboration: It highlights collaboration across different roles, like devs and designers.
Blockers resolution time: It monitors how quickly obstacles are removed.
Team capacity utilization: It analyzes whether team capacity is effectively utilized.
Work Distribution Analysis
Task distribution across team members: It checks for workload balance.
Skill coverage matrix: It monitors whether all necessary skills are represented in the sprint.
Dependencies resolved: It highlights dependency identification and resolution.
Context switching frequency: It analyzes task switching, which can impact productivity.
Planned vs unplanned work ratio: It evaluates how much work was planned versus ad-hoc tasks.
Sprint Retrospective Analysis
Quantitative Measures
Sprint Goals Achievement
Completed story points vs committed: It evaluates sprint completion success.
Critical features delivered: It monitors feature delivery against sprint goals.
Technical debt addressed: It tracks progress on resolving legacy issues.
Quality metrics achieved: It ensures deliverables meet quality standards.
Process Efficiency
Lead time for user stories: Time taken from story creation to completion.
Cycle time analysis: It tracks how long it takes to move work items through the sprint.
Sprint predictability index: It compares planned vs actual progress consistency.
Planning accuracy percentage: It monitors how well the team plans tasks.
Team Performance
Team happiness index: It gauges morale.
Innovation time percentage: It monitors time spent on creative or experimental work.
Learning goals achieved: It tracks growth opportunities taken.
Cross-skilling progress: It measures skill development.
Qualitative Measures
Sprint Planning Effectiveness
Story refinement quality: It assesses the readiness and clarity of backlog items.
Estimation accuracy: It monitors the accuracy of time/effort estimates.
Dependencies identification: It indicates how well dependencies were spotted.
Risk assessment adequacy: It ensures risks are anticipated and managed.
Team Dynamics
Communication effectiveness: It ensures clarity and quality of team communication.
Collaboration patterns: It highlights team interactions.
Knowledge sharing: It checks for the effective transfer of knowledge.
Decision-making efficiency: It gauges the timeliness and effectiveness of team decisions.
Continuous Improvement
Action items completion rate: It measures follow-through on retrospective action items.
Process improvement initiatives: It tracks changes implemented for efficiency.
Tools and automation adoption: It monitors how well the team leverages technology.
Team capability enhancement: It highlights skill and process improvements.
Typo - An Effective Sprint Analysis Tool
Typo’s sprint analysis feature allows engineering leaders to track and analyze their team’s progress throughout a sprint. It uses data from Git and the issue management tool to provide insights into how much work has been completed, how much is still in progress, and how much time is left in the sprint, helping teams identify potential problems early and take corrective action.
Sprint analysis in Typo with burndown chart
Key Features:
A velocity chart that shows how much work has been completed in previous sprints.
A burndown chart to measure progress.
A sprint backlog that shows all of the work that needs to be completed in the sprint.
A list of sprint issues that shows the status of each issue.
Time tracking to see how long tasks are taking.
Blockage tracking to check how often tasks are being blocked and what the causes of those blocks are.
Bottleneck identification to identify areas where work is slowing down.
Historical data analysis to compare sprint data over time.
Burndown charts offer a clear and concise visualization of progress over time. While they excel at tracking remaining work, they are not without limitations, especially when it comes to addressing quality, team dynamics, or changes in scope.
By integrating advanced metrics and tools like Typo, teams can achieve a more holistic view of their sprint performance and ensure continuous improvement.
Developer Experience (DevEx) is essential for boosting productivity, collaboration, and overall efficiency in software development. The right DevEx tools streamline workflows, provide actionable insights, and enhance code quality.
We’ve explored the 10 best Developer Experience tools in 2025, highlighting their key features and limitations to help you choose the best fit for your team.
Key Features to Look For in DevEx Tools
Integrated Development Environment (IDE) Plugins
A DevEx tool should include IDE plugins that enhance the coding environment with syntax highlighting, code completion, and error detection. It should also allow integration with external tools directly from the IDE and support multiple programming languages for versatility.
Collaboration Features
The tools must promote teamwork through seamless collaboration, such as shared workspaces, real-time editing capabilities, and in-context discussions. These features facilitate better communication among teams and improve project outcomes.
Developer Insights and Analytics
The Developer Experience tool could also offer insights into developer performance through quantitative metrics such as deployment frequency and planning accuracy. This helps engineering leaders understand the developer experience holistically.
Feedback Loops
Developers need timely feedback to keep the software process efficient. Ensure that the tool empowers teams to exchange feedback through real-time feedback mechanisms, code quality analysis, or live updates that show the effect of changes immediately.
Impact on Productivity
Evaluate how the tool affects workflow efficiency and developers’ productivity. Assess it based on whether it reduces time spent on repetitive tasks or facilitates easier collaboration. Analyzing these factors can help gauge the tool's potential impact on productivity.
Typo is an intelligent engineering management platform to gain visibility, remove blockers, and maximize developer effectiveness. It captures a 360-degree view of the developer experience and uncovers real issues. Through signals from work patterns and continuous AI-driven pulse check-ins, it surfaces early indicators of developer well-being and actionable insights on the areas that need attention. Typo also sends automated alerts to identify burnout signs in developers at an early stage. It seamlessly integrates with third-party applications such as Git, Slack, calendars, and CI/CD tools.
GetDX is a comprehensive insights platform founded by researchers behind the DORA and SPACE frameworks. It offers both qualitative and quantitative measures to give a holistic view of the organization. GetDX breaks down results based on personas and streamlines developer onboarding with real-time insights.
Key Features
Provides a suite of tools that capture data from surveys and systems in real time.
Contextualizes performance with 180,000+ industry benchmark samples.
Uses advanced statistical analysis to identify the top opportunities.
Limitations
GetDX’s frequent updates and features can disrupt user experience and confuse teams.
New managers often face a steep learning curve.
Users managing multiple teams face difficulties with configuration and team data management.
Jellyfish is a developer experience platform that combines developer-reported insights with system metrics. It captures qualitative and quantitative data to provide a complete picture of the development ecosystem and identify bottlenecks. Jellyfish can be seamlessly integrated with survey tools or use sentiment analysis to gather direct feedback from developers.
Key Features
Enables continuous feedback loops and rapid response to developer needs.
Allows teams to track effort without time tracking.
Tracks team health metrics such as code churn and pull request review times.
Limitations
Problems integrating with popular tools like Jira and Okta complicate the initial setup process and affect the overall user experience.
Absence of an API restricts users from exporting metrics for further analysis in other systems.
Overlooks important aspects of developer productivity by emphasizing throughput over qualitative metrics.
LinearB provides engineering teams with data-driven insights and automation capabilities. This software delivery intelligence platform provides teams with full visibility and control over developer experience and productivity. LinearB also helps them focus on the most important aspects of coding to speed up project delivery.
Key Features
Automates routine tasks and processes to reduce manual effort and cognitive load.
Offers visibility into team workload and capacity.
Helps maximize DevOps groups’ efficiency with various metrics.
Limitations
Teams that do not use a Git-based workflow may find that many of the features are not applicable to their processes.
Lacks comprehensive historical data or external benchmarks.
Needs to rely on separate tools for comprehensive project tracking and management.
GitHub Copilot was developed by GitHub in collaboration with OpenAI. It uses the OpenAI Codex model to write code, test cases, and code comments quickly. It draws context from the code and suggests whole lines or complete functions that developers can accept, modify, or reject. GitHub Copilot can generate code in multiple languages, including TypeScript, JavaScript, and C++.
Key Features
Creates predictive lines of code from comments and existing patterns in the code.
Seamlessly integrates with popular editors such as Neovim, JetBrains IDEs, and Visual Studio.
Creates dictionaries of lookup data.
Limitations
Struggles to fully grasp the context of complex coding tasks or specific project requirements.
Less experienced developers may become overly reliant on Copilot for coding tasks.
Postman is a widely used automation testing tool for APIs. It provides a streamlined process for standardizing API testing and monitoring APIs for usage and trend insights. This tool provides a collaborative environment for designing APIs using specifications like OpenAPI and a robust testing framework for ensuring API functionality and reliability.
Key Features
Enables users to mimic real-world scenarios and assess API behavior under various conditions.
Creates mock servers, and facilitates realistic simulations and comprehensive testing.
Auto-generates documentation to make APIs easily understandable and accessible.
Limitations
The user interface is not beginner-friendly.
Heavy reliance on Postman may create challenges when migrating workflows to other tools or platforms.
More suitable for manual testing than automated testing.
An AI-powered code assistant that provides code-specific information and helps locate precise code based on natural-language descriptions, file names, or function names.
It improves the developer experience by simplifying the development process in intricate enterprise environments.
Key Features
Explains complex lines of code in simple language.
Identifies bugs and errors in a codebase and provides suggestions.
Offers documentation generation.
Limitations
Doesn’t support creating insights over specific branches or revisions.
Codebase size and project complexity may impact performance.
Certain features are only available when running insights over all repositories.
Code Climate Velocity is an engineering intelligence platform that provides leaders with customized solutions based on data-driven insights. Teams using Code Climate Velocity follow a three-step approach: a diagnostic workshop with Code Climate experts, a personalized dashboard with insight reports, and a customized action plan tailored to their business.
Key Features
Seamlessly integrates with developer tools such as Jira, GitLab, and Bitbucket.
Supports long-term strategic planning and process improvement efforts.
Offers insights tailored for managers to help them understand team dynamics and individual contributions.
Limitations
Relies heavily on the quality and comprehensiveness of the data it analyzes.
Overlooks qualitative aspects of software development, such as team collaboration, creativity, and problem-solving skills.
Vercel is a cloud platform that gives frontend developers space to focus on coding and innovation. It simplifies the entire lifecycle of web applications by automating the entire deployment pipeline. Vercel has collaborative features such as preview environments to help iterate quickly while maintaining high code quality.
Key Features
Applications can be deployed directly from their Git repositories.
Includes pre-built templates to jumpstart the app development process.
Allows developers to create APIs without managing traditional backend infrastructure.
Limitations
Projects hosted on Vercel may rely on various third-party services for functionality which can impact the performance and reliability of applications.
A cloud deployment platform that simplifies the deployment and management of applications.
It automates essential tasks such as server setup, scaling, and configuration management, allowing developers to prioritize faster time to market instead of handling infrastructure.
Key Features
Supports the creation of ephemeral environments for testing and development.
Scales applications automatically on demand.
Includes built-in security measures such as multi-factor authentication and fine-grained access controls.
Limitations
Occasionally experiences minor bugs.
Can be overwhelming for those new to cloud and DevOps.
Deployment times may be slow.
Conclusion
We’ve curated the best Developer Experience tools for you in 2025. Feel free to explore other options as well. Make sure to do your own research and choose what fits best for you.
As a CTO, you often face a dilemma: should you prioritize efficiency or effectiveness? It’s a tough call.
Engineering efficiency ensures your team delivers quickly and with fewer resources. On the other hand, effectiveness ensures those efforts create real business impact.
So choosing one over the other is definitely not the solution.
That’s why we came up with this guide to software engineering efficiency.
Defining Software Engineering Efficiency
Software engineering efficiency is the intersection of speed, quality, and cost. It’s not just about how quickly code ships or how flawless it is; it’s about delivering value to the business while optimizing resources.
True efficiency is when engineering outputs directly contribute to achieving strategic business goals—without overextending timelines, compromising quality, or overspending.
A holistic approach to efficiency means addressing every layer of the engineering process. It starts with streamlining workflows to minimize bottlenecks, adopting tools that enhance productivity, and setting clear KPIs for code quality and delivery timelines.
As a CTO, to architect this balance, you need to foster collaboration between cross-functional teams, defining clear metrics for efficiency and ensuring that resource allocation prioritizes high-impact initiatives.
Establishing Tech Governance
Tech governance refers to the framework of policies, processes, and standards that guide how technology is used, managed, and maintained within an organization.
For CTOs, it’s the backbone of engineering efficiency, ensuring consistency, security, and scalability across teams and projects.
Here’s why tech governance is so important:
Standardization: Promotes uniformity in tools, processes, and coding practices.
Risk Mitigation: Reduces vulnerabilities by enforcing compliance with security protocols.
Operational Efficiency: Streamlines workflows by minimizing ad-hoc decisions and redundant efforts.
Scalability: Prepares systems and teams to handle growth without compromising performance.
Transparency: Provides clarity into processes, enabling better decision-making and accountability.
For engineering efficiency, tech governance should focus on three core categories:
1. Configuration Management
Configuration management is foundational to maintaining consistency across systems and software, ensuring predictable performance and behavior.
It involves rigorously tracking changes to code, dependencies, and environments to eliminate discrepancies that often cause deployment failures or bugs.
Using tools like Git for version control, Terraform for infrastructure configurations, or Ansible for automation ensures that configurations are standardized and baselines are consistently enforced.
This approach not only minimizes errors during rollouts but also reduces the time required to identify and resolve issues, thereby enhancing overall system reliability and deployment efficiency.
2. Infrastructure Management
Infrastructure management focuses on effectively provisioning and maintaining the physical and cloud-based resources that support software engineering operations.
The adoption of Infrastructure as Code (IaC) practices allows teams to automate resource provisioning, scaling, and configuration updates, ensuring infrastructure remains agile and cost-effective.
Advanced monitoring tools like Typo provide real-time SDLC insights, enabling proactive issue resolution and resource optimization.
By automating repetitive tasks, infrastructure management frees engineering teams to concentrate on innovation rather than maintenance, driving operational efficiency at scale.
3. Frameworks for Deployment
Frameworks for deployment establish the structured processes and tools required to release code into production environments seamlessly.
A well-designed CI/CD pipeline automates the stages of building, testing, and deploying code, ensuring that releases are both fast and reliable.
Additionally, rollback mechanisms safeguard against potential issues during deployment, allowing for quick restoration of stable environments. This streamlined approach reduces downtime, accelerates time-to-market, and fosters a collaborative engineering culture.
Together, these deployment frameworks enhance software delivery and also ensure that the systems remain resilient under changing business demands.
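As a sketch of the rollback safeguard described above: the deploy, health-check, and rollback functions below are hypothetical stand-ins for calls into real CD tooling, shown only to illustrate the control flow.

```python
# Sketch: a deploy step that rolls back automatically on failure.
# deploy(), health_check(), and rollback() stand in for real tooling
# (e.g., calls into your CD system); they are hypothetical here.

def deploy(version: str) -> None:
    print(f"deploying {version} ...")

def health_check() -> bool:
    print("running smoke tests ...")
    return True  # imagine real probes against the new release here

def rollback(version: str) -> None:
    print(f"rolling back to {version}")

def release(new_version: str, last_good: str) -> bool:
    deploy(new_version)
    if health_check():
        print(f"{new_version} is live")
        return True
    rollback(last_good)  # quick restoration of a stable environment
    return False

release("v2.4.0", last_good="v2.3.9")
```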
By focusing on these tech governance categories, CTOs can build a governance model that maximizes efficiency while aligning engineering operations with strategic objectives.
Balancing Business Impact and Engineering Productivity
If your engineering team’s efforts don’t align with key objectives like revenue growth, customer satisfaction, or market positioning, you’re not doing justice to your organization.
To ensure alignment, focus on building features that solve real problems, not just “cool” additions.
1. Chase value addition, not cool features
Rather than developing flashy tools that don’t address user needs, prioritize features that improve user experience or address pain points. This prevents your engineering team from being consumed by tasks that don’t add value and keeps their efforts laser-focused on meeting demand.
2. Decision-making is a crucial factor
You need to know when to prioritize speed over quality or vice versa. For example, during a high-stakes product launch, speed might be crucial to seize market opportunities. However, if a feature underpins critical infrastructure, you’d prioritize quality and scalability to avoid long-term failures. Balancing these decisions requires clear communication and understanding of business priorities.
3. Balance innovation and engineering efficiency
Encourage your team to explore new ideas, but within a framework that ensures tangible outcomes. Innovation should drive value, not just technical novelty. This approach ensures every project contributes meaningfully to the organization’s success.
Communicating Efficiency to the CEO and Board
If you’re at a company where the CEO doesn’t come from a technical background, you will face communication challenges. There will always be questions about why new features are not being shipped despite having a good number of software engineers.
What you should focus on is giving the stakeholders insights into how the engineering headcount is being utilized.
1. Reporting Software Engineering Efficiency
Instead of presenting granular task lists, focus on providing a high-level summary of accomplishments tied to business objectives. For example, show the percentage of technical debt reduced, the cycle time improvements, or the new features delivered and their impact on customer satisfaction or revenue.
Include visualizations like charts or dashboards to offer a clear, data-driven view of progress. Highlight key milestones, ongoing priorities, and how resources are being allocated to align with organizational goals.
2. Translating Technical Metrics into Business Language
Board members and CEOs may not resonate with terms like “code churn” or “defect density,” but they understand business KPIs like revenue growth, customer retention, and market expansion.
For instance, instead of saying, “We reduced bug rate by 15%,” explain, “Our improvements in code quality have resulted in a 10% reduction in downtime, enhancing user experience and supporting retention.”
3. Building Trust Through Transparency
Trust is built when you are upfront about trade-offs, challenges, and achievements.
For example, if you chose to delay a feature release to improve scalability, explain the rationale: “While this slowed our time-to-market, it prevents future bottlenecks, ensuring long-term reliability.”
4. Framing Discussions Around ROI and Risk Management
Frame engineering decisions in terms of ROI, risk mitigation, and long-term impact. For example, explain how automating infrastructure saves costs in the long run or how adopting robust CI/CD practices reduces deployment risks. Linking these outcomes to strategic goals ensures the board sees technology investments as valuable, forward-thinking decisions that drive sustained business growth.
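For example, the back-of-the-envelope arithmetic for an infrastructure automation investment might look like this (all figures are hypothetical placeholders):

```python
# Hypothetical figures for framing an automation investment in ROI terms.
build_cost = 120_000               # one-time engineering cost to automate
annual_ops_savings = 80_000        # ops hours saved per year, in salary terms
annual_incidents_avoided = 25_000  # cost of deployment failures avoided per year

years = 3
total_return = years * (annual_ops_savings + annual_incidents_avoided)
roi = (total_return - build_cost) / build_cost

print(f"{years}-year ROI: {roi:.0%}")  # about 162%
```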
Build vs. Buy Decisions
Deciding whether to build a solution in-house or purchase off-the-shelf technology is crucial for maintaining software engineering efficiency. Here’s what to take into account:
1. Cost Considerations
From an engineering efficiency standpoint, building in-house often requires significant engineering hours that could be spent on higher-value projects. The direct costs include developer time, testing, and ongoing maintenance. Hidden costs like delays or knowledge silos can also reduce operational efficiency.
Conversely, buying off-the-shelf technology allows immediate deployment and support, freeing the engineering team to focus on core business challenges.
However, it’s crucial to evaluate licensing and customization costs to ensure they don’t create inefficiencies later.
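A simple way to ground this comparison is a multi-year total-cost sketch like the one below; every number is a hypothetical placeholder for your own estimates.

```python
# Hypothetical cost model over a 3-year horizon.
ENGINEER_MONTH = 12_000  # fully loaded cost of one engineer-month

def build_cost(dev_months: int, upkeep_months_per_year: float, years: int) -> float:
    """Initial development plus ongoing maintenance effort."""
    return (dev_months + upkeep_months_per_year * years) * ENGINEER_MONTH

def buy_cost(annual_license: float, integration_months: float, years: int) -> float:
    """License fees plus one-time integration effort."""
    return annual_license * years + integration_months * ENGINEER_MONTH

years = 3
print(f"Build: ${build_cost(6, 2, years):,.0f}")     # $144,000
print(f"Buy:   ${buy_cost(30_000, 1, years):,.0f}")  # $102,000
```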
2. Strategic Alignment
For software engineering efficiency, the choice must align with broader business goals. Building in-house may be more efficient if it allows your team to streamline unique workflows or gain a competitive edge.
However, if the solution is not central to your business’s differentiation, buying ensures the engineering team isn’t bogged down by unnecessary development tasks, maintaining their focus on high-impact initiatives.
3. Scalability, Flexibility, and Integration
An efficient engineering process requires solutions that scale with the business, integrate seamlessly into existing systems, and adapt to future needs.
While in-house builds offer customization, they can overburden teams if integration or scaling challenges arise.
Off-the-shelf solutions, though less flexible, often come with pre-tested scalability and integrations, reducing friction and enabling smoother operations.
Key Metrics CTOs Should Measure for Software Engineering Efficiency
While the CTO’s role is rooted in shaping the company’s vision and direction, it also requires ensuring that software engineering teams maintain high productivity.
Here are some of the metrics you should keep an eye on:
1. Cycle Time
Cycle time measures how long it takes to move a feature or task from development to deployment. A shorter cycle time means faster iterations, enabling quicker feedback loops and faster value delivery. Monitoring this helps identify bottlenecks and improve development workflows.
2. Lead Time
Lead time tracks the duration from ideation to delivery. It encompasses planning, design, development, and deployment phases. A long lead time might indicate inefficiencies in prioritization or resource allocation. By optimizing this, CTOs ensure that the team delivers what matters most to the business in a timely manner.
3. Velocity
Velocity measures how much work a team completes in a sprint or milestone. This metric reflects team productivity and helps forecast delivery timelines. Consistent or improving velocity is a strong indicator of operational efficiency and team stability.
4. Bug Rate and Defect Density
Bug rate and defect density assess the quality and reliability of the codebase. High values indicate a need for better testing or development practices. Tracking these ensures that speed doesn’t come at the expense of quality, which can lead to technical debt.
5. Code Churn
Code churn tracks how often code changes after the initial commit. Excessive churn may signal unclear requirements or poor initial implementation. Keeping this in check ensures efficiency and reduces rework.
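To make two of these concrete, here is a minimal Python sketch of how cycle time and defect density might be computed from exported tracker data; the ticket records and defect counts are hypothetical.

```python
from datetime import datetime
from statistics import mean

# Hypothetical ticket records exported from an issue tracker.
tickets = [
    {"started": "2024-03-01", "deployed": "2024-03-06"},
    {"started": "2024-03-04", "deployed": "2024-03-12"},
]

def days_between(start: str, end: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

cycle_times = [days_between(t["started"], t["deployed"]) for t in tickets]
print(f"Average cycle time: {mean(cycle_times):.1f} days")  # 6.5 days

# Defect density is typically expressed per 1,000 lines of code (KLOC).
defects, kloc = 18, 42.0
print(f"Defect density: {defects / kloc:.2f} defects/KLOC")  # 0.43
```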
By selecting and monitoring these metrics, you can align engineering outcomes with strategic objectives while building a culture of accountability and continuous improvement.
Conclusion
The CTO plays a crucial role in driving software engineering efficiency, balancing technical execution with business goals.
By focusing on key metrics, establishing strong governance, and ensuring that engineering efforts align with broader company objectives, CTOs help maximize productivity while minimizing waste.
A balanced approach to decision-making—whether prioritizing speed or quality—ensures both immediate impact and long-term scalability.
Effective CTOs deliver efficiency through clear communication, data-driven insights, and the ability to guide engineering teams toward solutions that support the company’s strategic vision.
Imagine you are driving a high-performance car, but the controls are clunky, the dashboard is confusing, and the engine constantly overheats.
Frustrating, right?
When developers work in a similar environment, dealing with inefficient tools, unclear processes, and a lack of collaboration, it leads to decreased morale and productivity.
Just as a smooth, responsive driving experience makes all the difference on the road, a seamless Developer Experience (DX) is essential for developer teams.
DX isn't just a buzzword; it's a key factor in how developers interact with their work environments and produce innovative solutions. In this blog, let’s explore what Developer Experience truly means and why it is crucial for developers.
What is Developer Experience?
Developer Experience, commonly known as DX, is the overall quality of developers’ interactions with their work environment. It encompasses tools, processes, and organizational culture, and it aims to create an environment where developers work efficiently, stay focused, and produce high-quality code with minimal friction.
Why Does Developer Experience Matter?
Developer Experience is a critical factor in enhancing organizational performance and innovation. It matters because:
Boosts Developer Productivity
When developers have access to intuitive tools, clear documentation, and streamlined workflows, they can complete tasks faster and focus on core activities. This leads to shorter development cycles and improved efficiency, while giving developers more room to engage deeply with their work.
As per Gartner's report, Developer Experience is a key indicator of developer productivity.
High Product Quality
A positive developer experience leads to improved code quality, which translates into higher-quality products, greater customer satisfaction, and fewer defects. Good DX also fosters effective communication and collaboration, reducing developers’ cognitive load and making it easier for teams to apply best practices consistently.
Talent Attraction and Retention
A positive work environment appeals to skilled developers and retains top talent. When the organization supports developers’ creativity and innovation, it significantly reduces turnover rates. Moreover, when developers feel psychologically safe to express ideas and take risks, they are more likely to stay with the organization for the long run.
Enhances Developer Morale
When developers feel empowered and supported at their workplace, they are more likely to be engaged with their work. This further leads to high morale and job satisfaction. When organizations minimize common pain points, developers encounter fewer obstacles, allowing them to focus more on productive tasks rather than tedious ones.
Competitive Advantage
Organizations with positive developer experiences often gain a competitive edge in the market. Enabling faster development cycles and higher-quality software delivery allows companies to respond more swiftly to market demands and customer needs. This agility improves customer satisfaction and positions the organization favorably against competitors.
What is Flow State and Why Consider it as a Core Goal of a Great DX?
In simple words, flow state means ‘being in the zone’. Also known as deep work, it refers to a mental state characterized by complete immersion and focused engagement in an activity. Achieving flow significantly boosts engagement, enjoyment, and productivity.
Flow state is considered a core goal of great DX because it allows developers to work with remarkable efficiency, completing tasks faster and at higher quality. Deep engagement also enables developers to generate innovative solutions and ideas, leading to better problem-solving outcomes.
Flow isn’t limited to individual work; it can also be experienced collectively within teams. When development teams achieve flow together, they operate with a synchronized efficiency that enhances collaboration and communication.
What Developer Experience is Not
Developer Experience is Not Just Good Tooling
Tools like IDEs, frameworks, and libraries play a vital role in a positive developer experience, but they are not the sole component. Good tooling is merely one part of the overall experience: it helps streamline workflows and reduce friction, but DX encompasses much more, such as documentation, support, learning resources, and community. Tools alone cannot address issues like poor communication, lack of feedback, or insufficient documentation, and without a holistic approach, developer satisfaction and productivity can still suffer.
Developer Experience is Not a Quick Fix
Improving DX isn’t a one-off task that can be patched quickly. It requires a long-term commitment and a deep understanding of developer needs, consistent feedback loops, and iterative improvements. Great developer experience involves ongoing evaluation and adaptation of processes, tools, and team dynamics to create an environment where developers can thrive over time.
Developer Experience isn’t About Pampering Developers or Using AI tools to Cut Costs
One common myth about DX is that it is solely about pampering developers, or about using AI tools as a cost-cutting measure. True DX aims to create an environment where developers can work efficiently and effectively; in other words, it is about empowering developers with the right resources, autonomy, and opportunities for growth. While AI tools can simplify tasks, deploying them without considering the broader context of developer needs may lead to dissatisfaction if those tools do not genuinely enhance the work experience.
Developer Experience is Not User Experience
DX and UX may look alike, but they target different audiences and goals. User Experience is about how end-users interact with a product, while Developer Experience concerns the experience of the developers who build, test, and deploy products. Improving DX involves understanding developers' unique challenges and needs rather than simply applying UX principles meant for end-users.
Developer Experience is Not the Same as Developer Productivity
Developer Experience and Developer Productivity are interrelated yet not identical. While a positive developer experience can lead to increased productivity, productivity metrics alone don’t reflect the quality of the developer experience. These metrics often focus on output (like lines of code or hours worked), which can be misleading. True DX encompasses emotional satisfaction, engagement levels, and the overall environment in which developers work. A positive developer experience creates conditions that naturally lead to higher productivity rather than measuring it directly through traditional metrics.
How does Typo Help to Improve DevEx?
Typo is a valuable tool for software development teams that captures a 360° view of the developer experience. It surfaces early indicators of developer well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins.
Key features
Research-backed framework that captures parameters and uncovers real issues.
In-depth insights are published on the dashboard.
Combines data-driven insights with proactive monitoring and strategic intervention.
Identifies the key priority areas affecting developer productivity and well-being.
Sends automated alerts to identify burnout signs in developers at an early stage.
Developer Experience empowers developers to focus on building exceptional solutions. A great DX fosters innovation, enhances productivity, and creates an environment where developers can thrive individually and collaboratively.
Implementing developer tools empowers organizations to enhance DX, enabling teams to prevent burnout and reach their full potential.
SPACE Framework: Strategies for Maximum Efficiency in Developer Productivity
What if we told you that writing more code could be making you less productive?
While equating productivity with output is tempting, developer efficiency is far more complex. The real challenge often lies in processes, collaboration, and well-being. Without addressing these, inefficiencies and burnout will inevitably follow.
You may spend hours coding, only to feel your work isn’t making an impact—projects get delayed, bug fixes drag on, and constant context switching drains your focus. The key isn’t to work harder but smarter by solving the root causes of these issues.
The SPACE framework addresses this by focusing on five dimensions: Satisfaction, Performance, Activity, Communication, and Efficiency. It helps teams improve how much they do and how effectively they work, reducing workflow friction, improving collaboration, and supporting well-being to boost long-term productivity.
Understanding the SPACE Framework
The SPACE framework addresses five key dimensions of developer productivity: satisfaction and well-being, performance, activity, collaboration and communication, and efficiency and flow. Together, these dimensions provide a comprehensive view of how developers work and where improvements can be made, beyond just measuring output.
By taking these factors into account, teams can better support developers, helping them not only produce better work but also maintain their motivation and well-being. Let’s take a closer look at each part of the framework and how it can help your team achieve a balance between productivity and a healthy work environment.
Common Developer Challenges that SPACE Addresses
In fast-paced, tech-driven environments, developers face several roadblocks to productivity:
Constant interruptions: Developers often deal with frequent context switching, from bug fixes to feature development to emergency support, making it hard to stay focused.
Cross-team collaboration: Working with multiple teams, such as DevOps, QA, and product management, can lead to miscommunication and misaligned priorities.
Lack of real-time feedback: Without timely feedback, developers may unknowingly veer off course or miss performance issues until much later in the development cycle.
Technical debt: Legacy systems and inconsistent coding practices create overhead and slow down development cycles, making it harder to move quickly on new features.
The SPACE framework helps identify and address these challenges by focusing on improving both the technical processes and the developer experience.
How SPACE can help: A Deep Dive into Each Dimension
Let’s explore how each aspect of the SPACE framework can directly impact technical teams:
Satisfaction and well-being
Developers are more productive when they feel engaged and valued. It's important to create an environment where developers are recognized for their contributions and have a healthy work-life balance. This can include feedback mechanisms, peer recognition, or even mental health initiatives. Automated tools that reduce repetitive tasks can also contribute to overall well-being.
Performance
Measuring performance should go beyond tracking the number of commits or pull requests. It’s about understanding the impact of the work being done. High-performing teams focus on delivering high-quality code and minimizing technical debt. Integrating automated testing and static code analysis tools into your CI/CD pipeline ensures code quality is maintained without manual intervention.
Activity
Focusing on meaningful developer activity, such as code reviews, tests written, and pull requests merged, helps align efforts with goals. Tools that track and visualize developer activities provide insight into how time is spent. For example, tracking code review completion times or how often changes are being pushed can reveal bottlenecks or opportunities for improving workflows.
Collaboration and communication
Effective communication across teams reduces friction in the development process. By integrating communication tools directly into the workflow, such as through Git or CI/CD notifications, teams can stay aligned on project goals. Automating feedback loops within the development process, such as notifications when builds succeed or fail, helps teams respond faster to issues.
Efficiency and flow
Developers enter a “flow state” when they can work on a task without distractions. One way to foster this is by reducing manual tasks and interruptions. Implementing CI/CD tools that automate repetitive tasks—like build testing or deployments—frees up developers to focus on writing code. It’s also important to create dedicated time blocks where developers can work without interruptions, helping them enter and maintain that flow.
Practical Strategies for Applying the SPACE Framework
To make the SPACE framework actionable, here are some practical strategies your team can implement:
Automate repetitive tasks to enhance focus
A large portion of developer time is spent on tasks that can easily be automated, such as code formatting, linting, and testing. By introducing tools that handle these tasks automatically, developers can focus on the more meaningful aspects of their work, like writing new features or fixing bugs. This is where tools like Typo can make a difference. Typo integrates seamlessly into your development process, ensuring that code adheres to best practices by automating code quality checks and providing real-time feedback. Automating these reviews reduces the time developers spend on manual reviews and ensures consistency across the codebase.
Track meaningful metrics
Instead of focusing on superficial metrics like lines of code written or hours logged, focus on tracking activities that lead to tangible progress. Typo, for example, helps track key metrics like the number of pull requests merged, the percentage of code coverage, or the speed at which developers address code reviews. These insights give team leads a clearer picture of where bottlenecks are occurring and help teams prioritize tasks that move the project forward.
Improve communication and collaboration through integrated tools
Miscommunication between developers, product managers, and QA teams can cause delays and frustration. Integrating feedback systems that provide automatic notifications when tests fail or builds succeed can significantly improve collaboration. Typo plays a role here by streamlining communication between teams. By automatically reporting code review statuses or deployment readiness, Typo ensures that everyone stays informed without the need for constant manual updates or status meetings.
Protect flow time and eliminate disruptions
Protecting developer flow is essential to maintaining efficiency. Schedule dedicated “flow” periods where meetings are minimized, and developers can focus solely on their tasks. Typo enhances this by minimizing the need for developers to leave their coding environment to check on build statuses or review feedback. With automated reports, developers can stay updated without disrupting their focus. This helps ensure that developers can spend more time in their flow state and less time on administrative tasks.
Identify bottlenecks in your workflow
Using metrics from tools like Typo, you can gain visibility into where delays are happening in your development process—whether it's slow code review cycles, inefficient testing processes, or unclear requirements. With this insight, you can make targeted improvements, such as adjusting team structures, automating manual testing processes, or dedicating more resources to code reviews to ensure smoother project progression.
How Typo supports the SPACE framework
By using Typo as part of your workflow, you can naturally align with many of the principles of the SPACE framework:
Automated code quality: Typo ensures code quality through automated reviews and real-time feedback, reducing the manual effort required during code review processes.
Tracking developer metrics: Typo tracks key activities that are directly related to developer efficiency, helping teams stay on track with performance goals.
Seamless communication: With automatic notifications and updates, Typo ensures that developers and other team members stay in sync without manual reporting, which helps maintain flow and improve collaboration.
Supporting flow: Typo’s integrations provide updates within the development environment, reducing the need for developers to context switch between tasks.
Bringing it all together: Maximizing Developer Productivity with SPACE
The SPACE framework offers a well-rounded approach to improving developer productivity and well-being. By focusing on automating repetitive tasks, improving collaboration, and fostering uninterrupted flow time, your team can achieve more without sacrificing quality or developer satisfaction. Tools like Typo naturally fit into this process, helping teams streamline workflows, enhance communication, and maintain high code quality.
If you’re looking to implement the SPACE framework, start by automating repetitive tasks and protecting your developers' flow time. Gradually introduce improvements in collaboration and tracking meaningful activity. Over time, you’ll notice improvements in both productivity and the overall well-being of your development team.
What challenges are you facing in your development workflow?
Share your experiences and let us know how tools like Typo could help your team implement the SPACE framework to improve productivity and collaboration!
Developer productivity is the new buzzword across the industry. Measuring it has gone mainstream since the shift to remote work, and companies like McKinsey are publishing articles titled ”Yes, you can measure software developer productivity” that have caused a stir in the software development community. So we thought we should share our take on developer productivity.
We will be covering the following whats, whys, and hows about developer productivity in this piece:
What is developer productivity?
Why do we need to measure developer productivity?
How do we measure it at the team and individual levels, and why is it more complicated to measure developer productivity than sales or hiring productivity?
Challenges & Dangers of measuring developer productivity & What not to measure.
What is the impact of measuring developer productivity on engineering culture?
What is Developer Productivity?
Developer productivity refers to the effectiveness and efficiency with which software developers create high-quality software that meets business goals. It encompasses various dimensions, including code quality, development speed, team collaboration, and adherence to best practices. For engineering managers and leaders, understanding developer productivity is essential for driving continuous improvement and achieving successful project outcomes.
Key Aspects of Developer Productivity
Quality of Output: Developer productivity is not just about the quantity of code or code changes produced; it also involves the quality of that code. High-quality code is maintainable, readable, and free of significant bugs, which ultimately contributes to the overall success of a project.
Development Speed: This aspect, often referred to as developer velocity, measures how quickly developers can deliver features, fixes, and updates. While velocity is important, it should not come at the expense of code quality. Effective engineering teams strike a balance between delivering quickly and maintaining high standards.
Collaboration and Team Dynamics: Successful software development relies heavily on effective teamwork. Collaboration tools and practices that foster communication and knowledge sharing can significantly enhance developer productivity. Engineering managers should prioritize creating a collaborative environment that encourages teamwork.
Adherence to Best Practices: Following coding standards, conducting code reviews, and implementing testing protocols are essential for maintaining development productivity. These practices ensure that developers produce high-quality work consistently, which leads to improved project outcomes.
We all know that no one loves to be measured, but CEOs and CFOs have an undying love for measuring the ROI of their teams, and we can't ignore that. The higher the development productivity, the higher the ROI. Measuring developer productivity is also essential for engineering managers and leaders who want to optimize their teams' performance; we can't improve something that we don't measure.
Understanding how effectively developers work can lead to improved project outcomes, better resource allocation, and enhanced team morale. In this section, we will explore the key reasons why measuring developer productivity is crucial for engineering management.
Enhancing Team Performance
Measuring developer productivity allows engineering managers to identify strengths and weaknesses within their teams. By analyzing developer productivity metrics, leaders can pinpoint areas where developers excel and where they may need additional support or resources. This insight enables managers to tailor training programs, allocate tasks more effectively, and foster a culture of continuous improvement.
Team's insights in Typo
Driving Business Outcomes
Developer productivity is directly linked to business success. By measuring development team productivity, managers can assess how effectively their teams deliver features, fix bugs, and contribute to overall project goals. Understanding productivity levels helps align development efforts with business objectives, ensuring that the team is focused on delivering value that meets customer needs.
Improving Resource Allocation
Effective measurement of developer productivity enables better resource allocation. By understanding how much time and effort are required for various tasks, managers can make informed decisions about staffing, project timelines, and budget allocation. This ensures that resources are utilized efficiently, minimizing waste and maximizing output.
Fostering a Positive Work Environment
Measuring developer productivity can also contribute to a positive work environment. By recognizing high-performing teams and individuals, managers can boost morale and motivation. Additionally, understanding productivity trends can help identify burnout or dissatisfaction, allowing leaders to address issues proactively and create a healthier workplace culture.
Developer surveys insights in Typo
Facilitating Data-Driven Decisions
In today’s fast-paced software development landscape, data-driven decision-making is essential. Measuring developer productivity provides concrete data that can inform strategic decisions. Whether it's choosing new tools, adopting agile methodologies, or implementing process changes, having reliable developer productivity metrics allows managers to make informed choices that enhance team performance.
Investment distribution in Typo
Encouraging Collaboration and Communication
Regularly measuring productivity can highlight the importance of collaboration and communication within teams. By assessing metrics related to teamwork, such as code reviews and pair programming sessions, managers can encourage practices that foster collaboration. This improves not only productivity but also the overall developer experience by strengthening team dynamics and knowledge sharing.
Ultimately, understanding developer experience and measuring developer productivity leads to better outcomes for both the team and the organization as a whole.
How do we measure Developer Productivity?
Measuring developer productivity is essential for engineering managers and leaders who want to optimize their teams' performance.
Strategies for Measuring Productivity
Focus on Outcomes, Not Outputs: Shift the emphasis from measuring outputs like lines of code to focusing on outcomes that align with business objectives. This encourages developers to think more strategically about the impact of their work.
Measure at the Team Level: Assess productivity at the team level rather than at the individual level. This fosters team collaboration, knowledge sharing, and a focus on collective goals rather than individual competition.
Incorporate Qualitative Feedback: Balance quantitative metrics with qualitative feedback from developers through surveys, interviews, and regular check-ins. This provides valuable context and helps identify areas for improvement.
Encourage Continuous Improvement: Position productivity measurement as a tool for continuous improvement rather than a means of evaluation. Encourage developers to use metrics to identify areas for growth and work together to optimize workflows and development processes.
Lead by Example: As engineering managers and leaders, model the behavior you want to see in your team & team members. Prioritize work-life balance, encourage risk-taking and innovation, and create an environment where developers feel supported and empowered.
Measuring developer productivity involves assessing both team and individual contributions to understand how effectively developers are delivering value through their development processes. Here’s how to approach measuring productivity at both levels:
Team-Level Developer Productivity
Measuring productivity at the team level provides a more comprehensive view of how collaborative efforts contribute to project success. Here are some effective metrics:
DORA Metrics
The DevOps Research and Assessment (DORA) metrics are widely recognized for evaluating team performance. Key metrics include:
Deployment Frequency: How often the software engineering team releases code to production.
Lead Time for Changes: The time taken for committed code to reach production.
Change Failure Rate: The percentage of deployments that result in failures.
Time to Restore Service: The time taken to recover from a failure.
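As a rough sketch of how these four metrics fall out of a deployment log, consider the Python example below; the log records are hypothetical and would normally come from your CI/CD tooling.

```python
from statistics import mean

# Hypothetical deployment log for one month.
deployments = [
    {"lead_time_hours": 20, "failed": False, "restore_hours": 0},
    {"lead_time_hours": 48, "failed": True,  "restore_hours": 3},
    {"lead_time_hours": 12, "failed": False, "restore_hours": 0},
    {"lead_time_hours": 30, "failed": False, "restore_hours": 0},
]

frequency = len(deployments)  # deployments per month
lead_time = mean(d["lead_time_hours"] for d in deployments)
failures = [d for d in deployments if d["failed"]]
failure_rate = len(failures) / len(deployments)
restore_time = mean(d["restore_hours"] for d in failures) if failures else 0.0

print(f"Deployment frequency:  {frequency}/month")
print(f"Lead time for changes: {lead_time:.1f} h")    # 27.5 h
print(f"Change failure rate:   {failure_rate:.0%}")   # 25%
print(f"Time to restore:       {restore_time:.1f} h") # 3.0 h
```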
Issue Cycle Time
This metric measures the time taken from the start of work on a task to its completion, providing insights into the efficiency of the software development process.
Team Satisfaction and Engagement
Surveys and feedback mechanisms can gauge team morale and satisfaction, which are critical for long-term productivity.
Collaboration Metrics
Assessing the frequency and quality of code reviews, pair programming sessions, and communication can provide insights into how well the software engineering team collaborates.
Individual-Level Developer Productivity
While team-level metrics are crucial, individual developer productivity also matters, particularly for performance evaluations and personal development. Here are some metrics to consider:
Pull Requests and Code Reviews: Tracking the number of pull requests submitted and the quality of code reviews can provide insights into an individual developer's engagement and effectiveness.
Commit Frequency: Measuring how often a developer commits code can indicate their active participation in projects, though it should be interpreted with caution to avoid incentivizing quantity over quality.
Personal Goals and Outcomes: Setting individual objectives related to project deliverables and tracking their completion can help assess individual productivity in a meaningful way.
Skill Development: Encouraging developers to pursue training and certifications can enhance their skills, contributing to overall productivity.
Measuring developer productivity presents unique challenges compared to the more straightforward metrics used in sales or hiring. Here are some reasons why:
Complexity of Work: Software development involves intricate problem-solving, creativity, and collaboration, making it difficult to quantify contributions accurately. Unlike sales, where metrics like revenue generated are clear-cut, developer productivity encompasses various qualitative aspects that are much harder to measure.
Collaborative Nature: Development work is highly collaborative. Individual contributions often intertwine with team efforts, making it challenging to isolate the impact of one developer's work. In sales, individual performance is typically more straightforward to assess based on personal sales figures.
Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of developer productivity. Unlike revenue figures in sales, these metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
Varied Work Activities: Developers engage in various activities beyond coding, including debugging, code reviews, and meetings. These essential tasks are often overlooked in productivity measurements, whereas sales roles typically have more consistent and quantifiable activities.
Productivity Tools and Software Development Processes: The developer productivity tools and methodologies used in software development are constantly changing, making it difficult to establish consistent metrics. In contrast, sales processes tend to be more stable, allowing for easier benchmarking and comparison.
By employing a balanced approach that considers both quantitative and qualitative factors, with a few developer productivity tools, engineering leaders can gain valuable insights into their teams' productivity and foster an environment of continuous improvement & better developer experience.
Challenges of measuring Developer Productivity - What not to Measure?
Measuring developer productivity is a critical task for engineering managers and leaders, yet it comes with its own set of challenges and potential pitfalls. Understanding these challenges is essential to avoid the dangers of misinterpretation and to ensure that developer productivity metrics genuinely reflect the contributions of developers. In this section, we will explore the challenges of measuring developer productivity and highlight what not to measure.
Challenges of Measuring Developer Productivity
Complexity of Software Development: Software development is inherently complex, involving creativity, problem-solving, and collaboration. Unlike more straightforward fields like sales, where performance can be quantified through clear metrics (e.g., sales volume), developer productivity is multifaceted and includes various non-tangible elements. This complexity makes it difficult to establish a one-size-fits-all metric.
Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of developer productivity. These metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
Team Dynamics and Collaboration: Measuring individual productivity can overlook the collaborative nature of software development. Developers often work in teams where their contributions are interdependent. Focusing solely on individual metrics may ignore the synergistic effects of collaboration, mentorship, and knowledge sharing, which are crucial for a team's overall success.
Context Ignorance: Developer productivity metrics often fail to consider the context in which developers work. Factors such as project complexity, team dynamics, and external dependencies can significantly impact productivity but are often overlooked in traditional assessments. This lack of context can lead to misleading conclusions about a developer's performance.
Potential for Misguided Incentives: Relying heavily on specific metrics can create perverse incentives. For example, if developers are rewarded based on the number of commits, they may prioritize frequent small commits over meaningful contributions. This can lead to a culture of "gaming the system" rather than fostering genuine productivity and innovation.
What Not to Measure
Lines of Code (LOC): While LOC can provide some insight into coding activity, it is not a reliable measure of productivity. More code does not necessarily equate to better software. Instead, focus on the quality and impact of the code produced.
Commit Frequency: Tracking how often developers commit code can give a false sense of productivity. Frequent commits do not always indicate meaningful progress and can encourage developers to break down their work into smaller, less significant pieces.
Bug Counts: Focusing on the number of bugs reported or fixed can create a negative environment where developers feel pressured to avoid complex tasks that may introduce bugs. This can stifle innovation and lead to a culture of risk aversion.
Time Spent on Tasks: Measuring how long developers spend on specific tasks can be misleading. Developers may take longer on complex problems that require deep thinking and creativity, which are essential for high-quality software development.
Measuring developer productivity is fraught with challenges and dangers that engineering managers must navigate carefully. By understanding these complexities and avoiding outdated or superficial metrics, leaders can foster a more accurate and supportive environment for their development teams.
What is the impact of measuring Dev productivity on engineering culture?
Developer productivity improvements are a critical factor in the success of software development projects. For engineering managers and technology leaders, measuring and optimizing developer productivity is essential for delivering successful outcomes. However, measuring productivity can have a significant impact on engineering culture and on retaining software engineering talent, so it must be navigated carefully. Let's talk about measuring developer productivity while maintaining a healthy and productive engineering culture.
Measuring developer productivity presents unique challenges compared to other fields. The complexity of software development, inadequate traditional metrics, team dynamics, and lack of context can all lead to misguided incentives and decreased morale. It's crucial for engineering managers to understand these challenges to avoid the pitfalls of misinterpretation and ensure that developer productivity metrics genuinely reflect the contributions of developers.
Remember, the goal is not to maximize metrics but to create a development environment where software engineers can thrive and deliver maximum value to the organization.
Development teams using Typo experience a 30% improvement in Developer Productivity. Want to Try Typo?
Code review is all about improving code quality. However, it can be a nightmare for developers when not done correctly: they run into avoidable code review challenges that slow down the entire development process, reduce morale and efficiency, and ultimately contribute to developer burnout.
Hence, optimizing the code review process is crucial for both code reviewers and developers. In this blog post, we have shared a few tips on optimizing code reviews to boost developer productivity.
Importance of Code Reviews
The code review process is an essential stage in the software development life cycle and a defining principle of agile methodologies. It ensures high-quality code and identifies potential issues or bugs before they are deployed into production.
Another notable benefit of code reviews is that they help maintain a continuous integration and delivery pipeline, ensuring code changes are aligned with project requirements. They also ensure that the product meets quality standards, contributing to the overall success of the sprint or iteration.
With a consistent code review process, the development team can limit the risks of unnoticed mistakes and prevent a significant amount of tech debt.
Reviews also ensure that the code meets the set acceptance criteria and functional specifications, and that consistent coding styles are followed across the codebase.
Lastly, code reviews give developers an opportunity to learn from each other and improve their coding skills, fostering continuous growth and raising the overall quality of the code.
How do Ineffective Code Reviews Decrease Developer Productivity?
Unclear Standards and Inconsistencies
When code reviews lack clear guidelines or consistent evaluation criteria, developers may feel uncertain about what is expected of them. Varied interpretations of code quality and style create ambiguity, and fixing issues based on different reviewers’ subjective opinions consumes a lot of time. The result is frustration and decreased morale among developers.
Increase in Bottlenecks and Delays
When developers wait for feedback for extended periods, they are blocked from progressing. This slows down the entire software development lifecycle, resulting in missed deadlines and decreased morale, and negatively affects the deployment timeline, customer satisfaction, and overall business outcomes.
Low Quality and Delayed Feedback
When reviewers communicate vague, unclear, or delayed feedback, critical information usually gets missed. Developers are forced into context switching, losing focus on their current tasks, and must refamiliarize themselves with the code once the review is finally completed. The result is lost productivity.
Increased Cognitive Load
Frequent switching between writing and reviewing code requires significant mental effort, making it harder for developers to stay focused and productive. Poorly structured, conflicting, or unclear feedback also leaves developers unsure which changes to prioritize and why they were suggested. This slows progress, leads to decision fatigue, and reduces the quality of work.
Knowledge Gaps and Lack of Context
Knowledge gaps usually arise when reviewers lack the necessary domain knowledge or context about specific parts of the codebase. This results in a lack of context which further misguides developers who may overlook important issues. They may also need extra time to justify their decision and educate reviewers.
How to Optimize Code Review Process to Improve Developer Productivity?
Set Clear Goals and Standards
Establish clear objectives, coding standards, and expectations for code reviews. Communicate in advance with developers such as how long reviews should take and who will review the code. This allows both reviewers and developers to focus their efforts on relevant issues and prevent their time being wasted on insignificant matters.
Use a Code Review Checklist
Code review checklists include a predetermined set of questions and rules that the team will follow during the code review process. A few of the necessary quality checks include:
Readability and maintainability: Is the code easy to read and maintain? The importance of this first criterion cannot be overstated.
Uniform formatting: Is the code easy to understand, with consistent indentation, spacing, and naming conventions?
Testing and quality assurance: Has the code been through meticulous testing and quality assurance processes?
Boundary testing: Are extreme scenarios and boundary conditions explored to identify hidden problems?
Security and performance: Are security and performance ensured in the source code?
Architectural integrity: Is the code scalable and sustainable, and does it rest on a solid architectural design?
Prioritize High-Impact Issues
Not every issue in the code review process is equally important, so prioritize issues based on their severity and impact. Take up issues that affect system performance, security, or major features first, and review them more thoroughly than smaller, less impactful changes. This helps allocate time and resources effectively.
Encourage Constructive Feedback
Always share specific, honest, and actionable feedback with the developers. The feedback must point in the right direction and must explain the ‘why’ behind it. It will reduce follow-ups and give necessary context to the developers. This also helps the engineering team to improve their skills and produce better code which further results in a high-quality codebase.
Automate Wherever Possible
Use automation such as style checkers, syntax checkers, and static code analysis tools to speed up the review process. These tools handle routine checks for style, syntax errors, potential bugs, and performance issues, reducing the manual effort needed for such tasks. Automation allows developers to focus on more complex issues and allocate their time more effectively.
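A minimal sketch of such a gate, assuming a Python codebase with black, flake8, and mypy installed (the paths are hypothetical), might run every automated check before a human ever looks at the diff:

```python
import subprocess
import sys

# Checks to run before human review; the tools and paths are assumptions.
CHECKS = [
    ("format", ["black", "--check", "."]),
    ("lint", ["flake8", "."]),
    ("types", ["mypy", "src/"]),
]

def run_checks() -> int:
    """Run each check; return the number of failures."""
    failed = 0
    for name, cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {name}")
            failed += 1
    return failed

if __name__ == "__main__":
    # A non-zero exit blocks the merge, so reviewers only see code
    # that already passes the routine checks.
    sys.exit(1 if run_checks() else 0)
```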
Keep Reviews Small and Focused
Break code down into smaller, manageable chunks, which are less overwhelming and time-consuming to review. Reviewers can then concentrate on details, check adherence to the style guide and coding standards, and identify potential bugs, allowing them to provide meaningful feedback more effectively and to understand the code’s impact on the overall project more deeply.
Recognize and Reward Good Work
Acknowledge and celebrate developers who consistently produce high-quality code. This enables developers to feel valued for their contributions, leading to increased engagement, job satisfaction, and a sense of ownership in the project’s success. They are also more likely to continue producing high-quality code and actively participate in the review process.
Encourage Pair Programming or Pre-Review
Encourage pair programming or pre-review sessions to enable real-time feedback, reduce review time, and improve code quality. This fosters collaboration, enhances knowledge sharing, and helps catch issues early, leading to smoother and more effective reviews. It also promotes team bonding, streamlines communication, and cultivates a culture of continuous learning and improvement.
Use a Software Engineering Analytics Platform
Using an engineering analytics platform is a powerful way to optimize the code review process and improve developer productivity. Such a platform provides comprehensive insights into code quality, technical debt, and bug frequency, allowing teams to proactively identify bottlenecks and address issues in real time before they escalate. It also lets teams monitor their practices continuously and make adjustments as needed.
Typo — Automated Code Review Tool
Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.
Key Features
Supports top 8 languages including C++ and C#.
Understands the context of the code and fixes issues accurately.
Optimizes code efficiently.
Provides automated debugging with detailed explanations.
Standardizes code and reduces the risk of a security breach.
If you prioritize the code review process, follow the tips above. They will help maximize code quality, improve developer productivity, and streamline the development process.
Happy reviewing!
Mastering Developer Productivity with the SPACE Framework
In the crazy world of software development, getting developers to be productive is like finding the Holy Grail for tech companies. When developers hit their stride, turning out valuable work at breakneck speed, it’s a win for everyone. But let’s be honest—traditional productivity metrics, like counting lines of code or tracking hours spent fixing bugs, are about as helpful as a screen door on a submarine.
Say hello to the SPACE framework: your new go-to for cracking the code on developer productivity. This approach doesn’t just dip a toe in the water—it dives in headfirst to give you a clear, comprehensive view of how your team is doing. With the SPACE framework, you’ll ensure your developers aren’t just busy—they’re busy being awesome and delivering top-quality work on the dot. So buckle up, because we’re about to take your team’s productivity to the next level!
Introduction to the SPACE Framework
The SPACE framework is a modern approach to measuring developer productivity, introduced in a 2021 paper by experts from GitHub and Microsoft Research. This framework goes beyond traditional metrics to provide a more accurate and holistic view of productivity.
Nicole Forsgren, the lead author, emphasizes that measuring productivity by lines of code or speed can be misleading. The SPACE framework integrates several key metrics to give a complete picture of developer productivity.
Detailed Breakdown of SPACE Metrics
The five SPACE framework dimensions are:
Satisfaction and Well-being
When developers are happy and healthy, they tend to be more productive. If they enjoy their work and maintain a good work-life balance, they're more likely to produce high-quality results. On the other hand, dissatisfaction and burnout can severely hinder productivity. For example, a study by Haystack Analytics found that during the COVID-19 pandemic, 81% of software developers experienced burnout, which significantly impacted their productivity. The SPACE framework encourages regular surveys to gauge developer satisfaction and well-being, helping you address any issues promptly.
Performance
Traditional metrics often measure performance by the number of features added or bugs fixed. However, this approach can be problematic. According to the SPACE framework, performance should be evaluated based on outcomes rather than output. This means assessing whether the code reliably meets its intended purpose, the time taken to complete tasks, customer satisfaction, and code reliability.
Activity
Activity metrics are commonly used to gauge developer productivity because they are easy to quantify. However, they only provide a limited view. Developer Activity is the count of actions or outputs completed over time, such as coding new features or conducting code reviews. While useful, activity metrics alone cannot capture the full scope of productivity.
Nicole Forsgren points out that factors like overtime, inconsistent hours, and support systems also affect activity metrics. Therefore, it's essential to consider routine tasks like meetings, issue resolution, and brainstorming sessions when measuring activity.
Collaboration and Communication
Effective communication and collaboration are crucial for any development team's success. Poor communication can lead to project failures, as highlighted by 86% of employees in a study who cited ineffective communication as a major reason for business failures. The SPACE framework suggests measuring collaboration through metrics like the discoverability of documentation, integration speed, quality of work reviews, and network connections within the team.
Efficiency and Flow
Flow is a state of deep focus where developers can achieve high levels of productivity. Interruptions and distractions can break this flow, making it challenging to return to the task at hand. The SPACE framework recommends tracking metrics such as the frequency and timing of interruptions, the time spent in various workflow stages, and the ease with which developers maintain their flow.
Benefits of the SPACE Framework
The SPACE framework offers several advantages over traditional productivity metrics. By considering multiple dimensions, it provides a more nuanced view of developer productivity. This comprehensive approach helps avoid the pitfalls of single metrics, such as focusing solely on lines of code or closed tickets, which can lead to gaming the system.
Moreover, the SPACE framework allows you to measure both the quantity and quality of work, ensuring that developers deliver high-quality software efficiently. This integrated view helps organizations make informed decisions about team productivity and optimize their workflows for better outcomes.
Implementing the SPACE Framework in Your Organization
Implementing the SPACE productivity framework effectively requires careful planning and execution. Below is a comprehensive plan and roadmap to guide you through the process. This detailed guide will help you tailor the SPACE framework to your organization's unique needs and ensure a smooth transition to this advanced productivity measurement approach.
Step 1: Understanding Your Current State
Objective: Establish a baseline by understanding your current productivity measurement practices and developer workflow.
Conduct a Productivity Audit
Review the existing metrics and tools (like Typo) used for tracking productivity.
Identify gaps and limitations in current measurement methods.
Gather feedback from developers and managers on existing practices.
Analyze Team Dynamics and Workflow
Map out your development process, identifying key stages and tasks.
Observe how teams collaborate, communicate, and handle interruptions.
Assess the overall satisfaction and well-being of your developers.
Outcome: A comprehensive report detailing your current productivity measurement practices, team dynamics, and workflow processes.
Step 2: Setting Goals and Objectives
Objective: Define clear goals and objectives for implementing the SPACE framework.
Identify Key Business Objectives
Align the goals of the SPACE framework with your company's strategic objectives.
Focus on improving areas such as time-to-market, code quality, customer satisfaction, and developer well-being.
Set Specific, Measurable, Achievable, Relevant, and Time-bound (SMART) Goals
Example Goals
Increase developer satisfaction by 20% within six months.
Reduce average bug resolution time by 30% over the next quarter.
Improve code review quality scores by 15% within the next year.
Outcome: A set of SMART goals that will guide the implementation of the SPACE framework.
Step 3: Selecting and Customizing SPACE Metrics
Objective: Choose the most relevant SPACE metrics and customize them to fit your organization's needs.
Review SPACE Metrics
Satisfaction and Well-being
Performance
Activity
Collaboration and Communication
Efficiency and Flow
Customize Metrics
Tailor each metric to align with your organization's specific context and objectives.
Example Customizations
Satisfaction and Well-being: Conduct quarterly surveys to measure job satisfaction and work-life balance.
Performance: Track the reliability of code and customer feedback on delivered features.
Activity: Measure the number of completed tasks, code commits, and other relevant activities.
Collaboration and Communication: Monitor the quality of code reviews and the speed of integrating work.
Efficiency and Flow: Track the frequency and duration of interruptions and the time spent in flow states.
Outcome: A customized set of SPACE metrics tailored to your organization's needs.
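As an illustration of how two of these customized metrics might be computed, the sketch below aggregates a hypothetical quarterly survey and some hypothetical calendar data:

```python
from statistics import mean

# Hypothetical quarterly survey responses (1-5 Likert scale).
satisfaction_responses = [4, 5, 3, 4, 4, 2, 5, 4]

# Hypothetical calendar data: focused vs. interrupted hours per week.
focus_hours, interrupted_hours = 22, 8

satisfaction_score = mean(satisfaction_responses)
flow_share = focus_hours / (focus_hours + interrupted_hours)

print(f"Satisfaction (avg, 1-5): {satisfaction_score:.2f}")  # 3.88
print(f"Time in flow: {flow_share:.0%}")                     # 73%
```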
Step 4: Implementing Measurement Tools and Processes
Objective: Implement tools and processes to measure and track the selected SPACE metrics.
Choose Appropriate Tools
Use project management tools like Jira or Trello to track activity and performance metrics.
Implement collaboration tools such as Slack, Microsoft Teams, or Confluence to facilitate communication and knowledge sharing.
Utilize code review tools like CodeIQ by Typo to monitor the quality of code and collaboration.
Set Up Data Collection Processes
Establish processes for collecting and analyzing data for each metric.
Ensure that data collection is automated wherever possible to reduce manual effort and improve accuracy (see the sketch after this step).
Train Your Team
Provide training sessions for developers and managers on using the new tools and understanding the SPACE metrics.
Encourage open communication and address any concerns or questions from the team.
Outcome: A fully implemented set of tools and processes for measuring and tracking SPACE metrics.
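As one example of the automated collection suggested in this step, the following sketch counts commits per author straight from git history. It assumes Python and a local git checkout; adapt the idea to whichever tools you selected.

```python
"""Count commits per author over a recent window using git.

A minimal sketch of automated activity-metric collection; it assumes
the script runs inside a local git repository.
"""
import subprocess
from collections import Counter

def commits_per_author(since: str = "30 days ago") -> Counter:
    # `--pretty=format:%an` prints one author name per commit.
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

if __name__ == "__main__":
    for author, count in commits_per_author().most_common():
        print(f"{author}: {count} commits")
```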
Step 5: Regular Monitoring and Review
Objective: Continuously monitor and review the metrics to ensure ongoing improvement.
Establish Regular Review Cycles
Conduct monthly or quarterly reviews of the SPACE metrics to track progress towards goals.
Hold team meetings to discuss the results, identify areas for improvement, and celebrate successes.
Analyze Trends and Patterns
Look for trends and patterns in the data to gain insights into team performance and productivity (illustrated in the sketch after this step).
Use these insights to make informed decisions and adjustments to workflows and processes.
Solicit Feedback
Regularly gather feedback from developers and managers on the effectiveness of the SPACE framework.
Use this feedback to make continuous improvements to the framework and its implementation.
Outcome: A robust monitoring and review process that ensures the ongoing effectiveness of the SPACE framework.
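To illustrate the trend analysis described in this step, the sketch below smooths weekly cycle-time figures with a simple moving average so a drift stands out from week-to-week noise. The sample numbers are invented for demonstration.

```python
# A 4-week moving average over weekly cycle times; sample data is illustrative.
def moving_average(values: list[float], window: int = 4) -> list[float]:
    averages = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        averages.append(sum(chunk) / len(chunk))
    return averages

weekly_cycle_time_days = [6.1, 5.8, 7.2, 6.5, 5.9, 5.2, 4.8, 5.0]
for week, avg in enumerate(moving_average(weekly_cycle_time_days), start=1):
    print(f"week {week}: trend {avg:.1f} days")
```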
Step 6: Continuous Improvement and Adaptation
Objective: Adapt and improve the SPACE framework based on feedback and evolving needs.
Iterate and Improve
Continuously refine and improve the SPACE metrics based on feedback and observed results.
Adapt the framework to address new challenges and opportunities as they arise.
Foster a Culture of Continuous Improvement
Encourage a culture of continuous improvement within your development teams.
Promote openness to change and a willingness to experiment with new ideas and approaches.
Share Success Stories
Share success stories and best practices with the broader organization to demonstrate the value of the SPACE framework.
Use these stories to inspire other teams and encourage the adoption of the framework across the organization.
Outcome: A dynamic and adaptable SPACE framework that evolves with your organization's needs.
Conclusion
Implementing the SPACE framework is a strategic investment in your organization's productivity and success. By following this comprehensive plan and roadmap, you can effectively integrate the SPACE metrics into your development process, leading to improved performance, satisfaction, and overall productivity. Embrace the journey of continuous improvement and leverage the insights gained from the SPACE framework to unlock the full potential of your development teams.
SPACE Framework: How to Measure Developer Productivity
In today’s fast-paced software development world, understanding and improving developer productivity is more crucial than ever. One framework that has gained prominence for its comprehensive approach to measuring and enhancing productivity is the SPACE Framework. This framework, developed by industry experts and backed by extensive research, offers a multi-dimensional perspective on productivity that transcends traditional metrics.
This blog delves deep into the genesis of the SPACE Framework, its components, and how it can be effectively implemented to boost developer productivity. We’ll also explore real-world success stories of companies that have benefited from adopting this framework.
The genesis of the SPACE Framework
The SPACE Framework was introduced by researchers Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck, and Jenna Butler. Their work, published in a paper titled “The SPACE of Developer Productivity: There’s More to It than You Think!”, emphasizes that no single metric can measure developer productivity. Instead, productivity should be viewed through multiple lenses to capture a holistic picture.
Components of the SPACE Framework
The SPACE Framework is an acronym that stands for:
Satisfaction and Well-being
Performance
Activity
Communication and Collaboration
Efficiency and Flow
Each component represents a critical aspect of developer productivity, ensuring a balanced approach to measurement and improvement.
Detailed breakdown of the SPACE Framework
1. Satisfaction and Well-being
Definition: This dimension focuses on how satisfied and happy developers are with their work and environment. It also considers their overall well-being, which includes factors like work-life balance, stress levels, and job fulfillment.
Why It Matters: Happy developers are more engaged, creative, and productive. Ensuring high satisfaction and well-being can reduce burnout and turnover, leading to a more stable and effective team.
Metrics to Consider:
Employee satisfaction surveys
Work-life balance scores
Burnout indices
Turnover rates
2. Performance
Definition: Performance measures the outcomes of developers’ work, including the quality and impact of the software they produce. This includes assessing code quality, deployment frequency, and the ability to meet user needs.
Why It Matters: High performance indicates that the team is delivering valuable software efficiently. It helps in maintaining a competitive edge and ensuring customer satisfaction.
Metrics to Consider:
Code quality metrics (e.g., number of bugs, code review scores)
Deployment frequency
Customer satisfaction ratings
Feature adoption rates
3. Activity
Definition: Activity tracks the actions developers take, such as the number of commits, code reviews, and feature development. This component focuses on the volume and types of activities rather than their outcomes.
Why It Matters: Monitoring activity helps understand workload distribution and identify potential bottlenecks or inefficiencies in the development process.
Metrics to Consider:
Number of commits per developer
Code review participation
Task completion rates
Meeting attendance
4. Communication and Collaboration
Definition: This dimension assesses how effectively developers interact with each other and with other stakeholders. It includes evaluating the quality of communication channels and collaboration tools used.
Why It Matters: Effective communication and collaboration are crucial for resolving issues quickly, sharing knowledge, and fostering a cohesive team environment. Poor communication can lead to misunderstandings and project delays.
Metrics to Consider:
Frequency and quality of team meetings
Use of collaboration tools (e.g., Slack, Jira)
Cross-functional team interactions
Feedback loops
5. Efficiency and Flow
Definition: Efficiency and flow measure how smoothly the development process operates, including how well developers can focus on their tasks without interruptions. It also looks at the efficiency of the processes and tools in place.
Why It Matters: High efficiency and flow indicate that developers can work without unnecessary disruptions, leading to higher productivity and job satisfaction. It also helps in identifying and eliminating waste in the process.
Metrics to Consider:
Cycle time (time from task start to completion; see the sketch after this list)
Time spent in meetings vs. coding
Context switching frequency
Tool and process efficiency
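Cycle time, the first metric in the list above, is straightforward to compute once you export task timestamps. Here is a minimal sketch assuming ISO-formatted start and completion times; the sample data is illustrative, and in practice these values would come from a tracker such as Jira.

```python
# Compute cycle time (task start to completion) from timestamp pairs.
from datetime import datetime
from statistics import mean, median

tasks = [  # (started_at, completed_at) - illustrative sample data
    ("2024-05-01T09:00", "2024-05-03T17:00"),
    ("2024-05-02T10:00", "2024-05-09T12:00"),
    ("2024-05-06T09:30", "2024-05-07T15:00"),
]

cycle_times_days = [
    (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 86400
    for start, end in tasks
]
print(f"mean cycle time:   {mean(cycle_times_days):.1f} days")
print(f"median cycle time: {median(cycle_times_days):.1f} days")
```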
Implementing the SPACE Framework in real life
Implementing the SPACE Framework requires a strategic approach, involving the following steps:
Establish baseline metrics
Before making any changes, establish baseline metrics for each SPACE component. Use existing tools and methods to gather initial data.
Actionable Steps:
Conduct surveys to measure satisfaction and well-being (see the sketch after this list).
Use code quality tools to assess performance.
Track activity through version control systems.
Analyze communication patterns via collaboration tools.
Measure efficiency and flow using project management software.
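For the survey item in the list above, a baseline can be as simple as averaging Likert-scale responses and reporting the share of favorable answers. A small sketch follows, assuming a 1-5 scale; the response data is invented for demonstration.

```python
# Turn raw survey responses into a satisfaction baseline (1-5 Likert scale).
from statistics import mean

responses = {  # illustrative sample responses per question
    "job_satisfaction": [4, 5, 3, 4, 4, 2, 5],
    "work_life_balance": [3, 4, 3, 5, 2, 3, 4],
}

for question, scores in responses.items():
    favorable = sum(s >= 4 for s in scores) / len(scores)  # share scoring 4 or 5
    print(f"{question}: avg {mean(scores):.2f}, {favorable:.0%} favorable")
```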
Set clear goals
Define what success looks like for each component of the SPACE Framework. Set achievable and measurable goals.
Actionable Steps:
Increase employee satisfaction scores by 10% within six months.
Reduce bug rates by 20% over the next quarter.
Improve code review participation by 15%.
Enhance cross-team communication frequency.
Shorten cycle time by 25%.
Implement changes
Based on the goals set, implement changes to processes, tools, and practices. This may involve adopting new tools, changing workflows, or providing additional training.
Actionable Steps:
Introduce well-being programs to improve satisfaction.
Adopt automated testing tools to enhance performance.
Encourage regular code reviews to boost activity.
Use collaboration tools like Slack or Microsoft Teams to improve communication.
Streamline processes to reduce context switching and improve flow.
Monitor and adjust
Regularly monitor the metrics to evaluate the impact of the changes. Be prepared to make adjustments as necessary to stay on track with your goals.
Actionable Steps:
Use dashboards to track key metrics in real time.
Hold regular review meetings to discuss progress.
Gather feedback from developers to identify areas for improvement.
Make iterative changes based on data and feedback.
Integrating the SPACE Framework with DORA Metrics
SPACE Dimension
Definition
DORA Metric Integration
Actionable Steps
Satisfaction and Well-being
Measures happiness, job fulfillment, and work-life balance
High deployment frequency and low lead time improve satisfaction; high failure rates increase stress
– Conduct satisfaction surveys
– Correlate with DORA metrics
– Implement well-being programs
Performance
Assesses the outcomes of developers’ work
Direct overlap with DORA metrics like deployment frequency and lead time
– Use DORA metrics for benchmark
– Track and improve key metrics
– Address failure causes
Activity
Tracks volume and types of work (e.g., commits, reviews)
Frequent, high-quality activities improve deployment frequency and lead time
– Track activities and DORA metrics
– Promote high-quality work practices
– Balance workloads
Communication and Collaboration
Evaluates effectiveness of interactions and tools
Effective communication and collaboration reduce failure rates and restoration times
– Use communication tools (e.g., Slack)
– Conduct retrospectives
– Encourage cross-functional teams
Efficiency and Flow
Measures smoothness and efficiency of processes
Efficient workflows lead to higher deployment frequencies and shorter lead times
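Two of the DORA metrics referenced throughout this table, deployment frequency and lead time for changes, are easy to compute once you have deployment records. A minimal sketch follows; the record format and sample values are assumptions.

```python
# Compute deployment frequency and lead time for changes from deploy records.
from datetime import datetime

deployments = [  # (commit_time, deploy_time) - illustrative sample data
    ("2024-05-01T10:00", "2024-05-02T09:00"),
    ("2024-05-03T11:00", "2024-05-03T18:00"),
    ("2024-05-07T09:00", "2024-05-08T10:00"),
]

lead_times_hours = [
    (datetime.fromisoformat(deploy) - datetime.fromisoformat(commit)).total_seconds() / 3600
    for commit, deploy in deployments
]
days_covered = 7  # length of the observation window
print(f"deployment frequency: {len(deployments) / days_covered:.2f} per day")
print(f"avg lead time for changes: {sum(lead_times_hours) / len(lead_times_hours):.1f} h")
```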
Real-world success stories
GitHub
GitHub implemented the SPACE Framework to enhance its developer productivity. By focusing on communication and collaboration, they improved their internal processes and tools, leading to a more cohesive and efficient development team. They introduced regular team-building activities and enhanced their internal communication tools, resulting in a 15% increase in developer satisfaction and a 20% reduction in project completion time.
Microsoft
Microsoft adopted the SPACE Framework across several development teams. They focused on improving efficiency and flow by reducing context switching and streamlining their development processes. This involved adopting continuous integration and continuous deployment (CI/CD) practices, which reduced cycle time by 30% and increased deployment frequency by 25%.
Key software engineering metrics mapped to the SPACE Framework
This table outlines key software engineering metrics mapped to the SPACE Framework, along with how they can be measured and implemented to improve developer productivity and overall team effectiveness.
Communication and Collaboration
Tool Activity
Key metrics: Activity in tools (e.g., Slack messages, Jira comments)
Measurement tools/methods: Collaboration tools (e.g., Slack, Jira)
Implementation steps:
– Promote use of collaboration tools
– Provide training on tool usage
Cross-functional Interactions
Key metrics: Number of interactions with other teams
Measurement tools/methods: Project management tools, communication tools
Implementation steps:
– Encourage cross-functional projects
– Facilitate regular cross-team meetings
Feedback Loops
Key metrics: Number and quality of feedback instances
Measurement tools/methods: Feedback tools, retrospectives
Implementation steps:
– Implement regular feedback sessions
– Act on feedback to improve processes
Efficiency and Flow
Cycle Time
Key metrics: Time from task start to completion
Measurement tools/methods: Project management tools (e.g., Jira)
Implementation steps:
– Monitor cycle times
– Identify and remove bottlenecks
Time Spent in Meetings vs. Coding
Key metrics: Hours logged in meetings vs. coding
Measurement tools/methods: Time tracking tools, calendar tools
Implementation steps:
– Optimize meeting schedules
– Minimize unnecessary meetings
Context Switching Frequency
Key metrics: Number of task switches per day
Measurement tools/methods: Time tracking tools, self-reporting
Implementation steps:
– Reduce unnecessary interruptions
– Promote focused work periods
Tool and Process Efficiency
Key metrics: Time saved using tools/processes
Measurement tools/methods: Productivity tools, surveys
Implementation steps:
– Regularly review tool/process efficiency
– Implement improvements based on feedback
What engineering leaders can do
Engineering leaders play a crucial role in the successful implementation of the SPACE Framework. Here are some actionable steps they can take:
Promote a culture of continuous improvement
Encourage a mindset of continuous improvement among the team. This involves being open to feedback and constantly seeking ways to enhance productivity and well-being.
Actionable Steps:
Regularly solicit feedback from team members.
Celebrate small wins and improvements.
Provide opportunities for professional development and growth.
Invest in the right tools and processes
Ensure that developers have access to the tools and processes that enable them to work efficiently and effectively.
Actionable Steps:
Conduct regular tool audits to ensure they meet current needs.
Invest in training programs for new tools and technologies.
Streamline processes to eliminate unnecessary steps and reduce bottlenecks.
Foster collaboration and communication
Create an environment where communication and collaboration are prioritized. This can lead to better problem-solving and more innovative solutions.
Actionable Steps:
Organize regular team-building activities.
Use collaboration tools to facilitate better communication.
Encourage cross-functional projects to enhance team interaction.
Prioritize well-being and satisfaction
Recognize the importance of developer well-being and satisfaction. Implement programs and policies that support a healthy work-life balance.
Actionable Steps:
Offer flexible working hours and remote work options.
Provide access to mental health resources and support.
Recognize and reward achievements and contributions.
Conclusion
The SPACE Framework offers a holistic and actionable approach to understanding and improving developer productivity. By focusing on satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow, organizations can create a more productive and fulfilling work environment for their developers.
Implementing this framework requires a strategic approach, clear goal setting, and ongoing monitoring and adjustment. Real-world success stories from companies like GitHub and Microsoft demonstrate the potential benefits of adopting the SPACE Framework.
Engineering leaders have a pivotal role in driving this change. By promoting a culture of continuous improvement, investing in the right tools and processes, fostering collaboration and communication, and prioritizing well-being and satisfaction, they can significantly enhance developer productivity and overall team success.
In the software development industry, while user experience is an important aspect of the product life cycle, organizations are also considering Developer Experience.
A positive Developer Experience helps in delivering quality products and allows developers to be happy and healthy in the long run.
However, it is not always possible for organizations to measure and improve developer experience without good tools and platforms.
What is Developer Experience?
Developer Experience is about the experience software developers have while working in an organization. It is the developers’ journey while working with specific frameworks, programming languages, platforms, documentation, general tools, and open-source solutions.
Positive Developer Experience = Happier teams
Developer Experience has a direct relationship with developer productivity. A positive experience results in high dev productivity, leading to high job satisfaction, performance, and morale. Hence, happier developer teams.
This starts with understanding the unique needs of developers and fostering a positive work culture for them.
Why is Developer Experience important?
Smooth onboarding process
Good DX ensures the onboarding process is as simple and smooth as possible. It includes making new developers familiar with the tools and culture and giving them the support they need to progress in their careers. It also lets them get to know other developers, which helps with collaboration, open communication, and seeking help whenever required.
Improves product quality
A positive Developer Experience leads to 3 effective C’s – Collaboration, communication, and coordination. Besides this, adhering to coding standards, best practices, and automated testing helps promote code quality and consistency and fix issues early. As a result, development teams can easily create products that meet customer needs and are free from errors and glitches.
Increases development speed
When Developer Experience is handled with care, software developers can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documentation, streamlined workflows, and a well-configured development environment are a few ways to boost development speed. It also minimizes the need to switch between different tools and platforms, which increases focus and team productivity.
Attracts and retains top talents
Developers usually look for a strong tech culture where they can focus on their core skills and get acknowledged for their contributions. Great DX increases job satisfaction and aligns their values and goals with the organization. In return, developers bring their best to the table and want to stay with the organization for the long run.
Enhances collaboration
The right kind of Developer Experience encourages collaboration and effective communication tools. This fosters teamwork and reduces misunderstandings. Developers can easily discuss issues, share feedback, and work together on tasks. It helps streamline the development process and results in high-quality work.
Top Developer Experience tools
A powerful time management tool that streamlines and automates the calendar and protects developers’ flow time. It helps strike a balance between meetings and coding time with a focus time feature.
Key features
Seamlessly integrates with third-party applications such as Slack, Google Calendar, and Asana.
Determines the most suitable meeting times for both developers and engineering leaders.
Creates custom smart holds, i.e., protected time throughout the hold.
Reschedules the meetings that are marked as ‘Flexible’.
Provides a quick summary of how much meeting and focus time was spent last week.
A straightforward time-tracking, reporting, and billing tool for software developers. It lets development teams view tracked team entries in a grid or calendar format.
Key features
‘Dashboard and Reporting’ feature offers in-depth analysis and lets engineering leaders create customized dashboards.
Simple and easy-to-use interface.
Preferable for those who would rather log their time manually than track it in real time.
Offers a PDF invoice template that can be downloaded easily.
Includes optional Pomodoro setting that allows developers to take regular quick breaks.
Typo
Typo is an intelligent engineering management platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. It gives a comparative view of each team’s performance across velocity, quality, and throughput. The tool integrates with the tech stack (Git, Slack, Calendars, and CI/CD, to name a few) to deliver real-time insights.
Key features
Seamlessly integrates with third-party applications such as Git, Slack, Calendars, and CI/CD tools.
‘Sprint analysis’ feature allows for tracking and analyzing the team’s progress throughout a sprint.
Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
Offers engineering benchmark to compare the team’s results across industries.
An AI code assistant tool that provides code-specific information and helps locate precise code based on natural language descriptions, file names, or function names.
Key features
Explains complex lines of code in simple language.
Identifies bugs and errors in a codebase and provides suggestions.
Offers documentation generation.
Answers questions about existing code.
Generates code snippets, fixes, and improves code.
GitHub Copilot
Developed by GitHub in collaboration with OpenAI, GitHub Copilot uses the OpenAI Codex to help developers write code quickly. It draws context from the code and suggests whole lines or complete functions that developers can accept, modify, or reject.
Key features
Creates predictive lines of code from comments and existing patterns in the code.
Generates code in multiple languages including TypeScript, JavaScript, Ruby, C++, and Python.
Seamlessly integrates with popular editors such as Neovim, JetBrains IDEs, and Visual Studio.
Slack
A widely used communication platform that enables developers to communicate in real time and share files. It also allows team members to download shared files and create external links for people outside of the team.
Key features
Seamlessly integrates with third-party applications such as Google Calendar, Hubspot, Clickup, and Salesforce.
Offers a ‘Huddle’ feature that includes phone and video conferencing options.
Accessible on both mobile and desktop (Application and browser).
Offers a ‘Channels’ feature: similar to groups, team members can create channels for projects, teams, and topics.
Perfect for asynchronous communication and collaboration.
JIRA
Part of the Atlassian group, JIRA is an umbrella platform that includes JIRA Software, JIRA Core, and JIRA Work Management. It relies on the agile way of working and is purpose-built for developers and engineers.
Key features
Built for agile and scrum workflows.
Offers Kanban view.
JIRA dashboard helps users to plan projects, measure progress, and track due dates.
Offers third-party integrations with other parts of Atlassian groups and third-party apps like Github, Gitlab, and Jenkins.
Offers customizable workflow states and transitions for every issue type.
A project management and issue-tracking tool that is tailored for software development teams. It helps the team plan their projects and auto-close and auto-archive issues.
Key features
Simple and straightforward UI.
Easy to set up.
Breaks larger tasks into smaller issues.
Switches between list and board layout to view work from any angle.
Quickly apply filters and operators to refine issue lists and create custom views.
A cloud-based cross-browser testing platform that provides real-time testing on multiple devices and simulators. It is used to create and run both manual and automatic tests and functions via the Selenium Automation Grid.
Key features
Seamlessly integrates with other testing frameworks and CI/CD tools.
Offers detailed automated logs such as exception logs, command logs, and metadata.
Runs parallel tests in multiple browsers and environments.
Offers command screenshots and video recordings of the script execution.
Facilitates responsive testing to ensure the application works well on various devices and screen sizes.
Postman
A widely used automation testing tool for APIs. It provides a streamlined process for standardizing API testing and monitoring APIs for usage and trend insights.
Key features
Seamlessly integrates with CI/CD pipelines.
Enables users to mimic real-world scenarios and assess API behavior under various conditions.
Creates mock servers, and facilitates realistic simulations and comprehensive testing.
Provides monitoring features to gain insights into API performance and usage trends.
Friendly and easy-to-use interface equipped with code snippets.
CircleCI
FedRAMP-certified and SOC 2 Type II compliant, CircleCI helps achieve CI/CD in open-source and large-scale projects. It streamlines the DevOps process and automates builds across multiple environments.
Key features
Seamlessly integrates with third-party applications with Bitbucket, GitHub, and GitHub Enterprise.
Tracks the status of projects and keeps tabs on build processes.
‘Parallel testing’ feature helps in running tests in parallel across different executors.
Allows a single process per project.
Provides ways to troubleshoot problems and inspect things such as directory paths, log files, and running processes.
Swimm
Designed specifically for software development teams, Swimm is an innovative cloud-based documentation tool that integrates continuous documentation into the development workflow.
Key features
Seamlessly integrates with development tools such as GitHub, VSC, and JetBrains IDEs.
‘Auto-sync’ feature ensures the document stays up to date with changes in the codebase.
Creates new documents, rewrites existing ones, or summarizes information.
Creates tutorials and visualizations within the codebase for better understanding and onboarding new members.
Analyzes the entire codebase, documentation sources, and data from enterprise tools.
A valuable tool for development teams that captures a 360° view of the developer experience, offering early indicators of developer well-being and actionable insights on the areas that need attention through signals from work patterns and continuous AI-driven pulse check-ins.
Key features
Research-backed framework that captures parameters and uncovers real issues.
In-depth insights are published on the dashboard.
Combines data-driven insights with proactive monitoring and strategic intervention.
Identifies the key priority areas affecting developer productivity and well-being.
Sends automated alerts to identify burnout signs in developers at an early stage.
A comprehensive insights platform founded by the researchers behind the DORA and SPACE frameworks. It offers both qualitative and quantitative measures to give a holistic view of the organization.
Key features
Provides a suite of tools that capture data from surveys and systems in real-time.
Breaks down results based on personas.
Streamlines developer onboarding with real-time insights.
Contextualizes performance with 180,000+ industry benchmark samples.
Uses advanced statistical analysis to identify the top opportunities.
Conclusion
Overall Developer Experience is crucial in today’s times. It facilitates effective collaboration within engineering teams, offers real-time feedback on workflow efficiency and early signs of burnout, and enables informed decision-making. By pinpointing areas for improvement, it cultivates a more productive and enjoyable work environment for developers.
There are various tools available in the market. We’ve curated the best Developer Experience tools for you. You can check other tools as well. Do your own research and see what fits right for you.
All the best!
Measuring Developer Productivity: A Comprehensive Guide
The software development industry constantly evolves, and measuring developer productivity has become crucial to success. It is the key to achieving efficiency, quality, and innovation. However, measuring productivity is not a one-size-fits-all process. It requires a deep understanding of productivity in a development context and selecting the right metrics to reflect it accurately.
This guide will help you and your teams navigate the complexities of measuring dev productivity. It offers insights into the process’s nuances and equips teams with the knowledge and tools to optimize performance. By following the tips and best practices outlined in this guide, teams can improve their productivity and deliver better software.
What is Developer Productivity?
Development productivity extends far beyond the mere output of code. It encompasses a multifaceted spectrum of skills, behaviors, and conditions that contribute to the successful creation of software solutions. Technical proficiency, effective collaboration, clear communication, suitable tools, and a conducive work environment are all integral components of developer productivity. Recognizing and understanding these factors is fundamental to devising meaningful metrics and fostering a culture of continuous improvement.
Benefits of developer productivity
Increased productivity allows developers to complete tasks more efficiently. It leads to shorter development cycles and quicker delivery of products or features to the market.
Productive developers can focus more on code quality, testing, and optimization, resulting in higher-quality software with fewer bugs and issues.
Developers can accomplish more in less time, reducing development costs and improving the organization’s overall return on investment.
Productive developers often experience less stress and frustration due to reduced workloads and smoother development processes that lead to higher job satisfaction and retention rates.
With more time and energy available, developers can dedicate resources to innovation, continuous learning, experimenting with new technologies, and implementing creative solutions to complex problems.
Metrics for Measuring Developer Productivity
Measuring software developers’ productivity cannot rely on arbitrary criteria. This is why several metrics are in place that can be considered while measuring it. They can be divided into quantitative and qualitative metrics:
Quantitative Metrics
Lines of Code (LOC) Written
While counting lines of code isn’t a perfect measure of productivity, it can provide valuable insights into coding activity. A higher number of lines might suggest more work done, but it doesn’t necessarily equate to higher quality or efficiency. However, tracking LOC changes over time can help identify trends and patterns in development velocity. For instance, a sudden spike in LOC might indicate a burst of productivity or potentially code bloat, while a decline could signal optimization efforts or refactoring.
Time to Resolve Issues/Bugs
The swift resolution of issues and bugs is indicative of a team’s efficiency in problem-solving and code maintenance. Monitoring the time it takes to identify, address, and resolve issues provides valuable feedback on the team’s responsiveness and effectiveness. A shorter time to resolution suggests agility and proactive debugging practices, while prolonged resolution times may highlight bottlenecks in the development process or technical debt that needs addressing.
Number of Commits or Pull Requests
Active participation in version control systems, as evidenced by the number of commits or pull requests, reflects the level of engagement and contribution to the codebase. A higher number of commits or pull requests may signify active development and collaboration within the team. However, it’s essential to consider the quality, not just quantity, of commits and pull requests. A high volume of low-quality changes may indicate inefficiency or a lack of focus.
Code Churn
Code churn refers to the rate of change in a codebase over time. Monitoring code churn helps identify areas of instability or frequent modification, which may require closer attention or refactoring. High code churn could indicate areas of the code that are particularly complex or prone to bugs, while low churn might suggest stability but could also indicate stagnation if accompanied by a lack of feature development or innovation. Furthermore, focusing on code changes allows teams to track progress and ensure that updates align with project goals, while emphasizing quality ensures that those changes maintain or improve overall codebase integrity and performance.
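Both LOC trends and code churn can be derived from version control history. The following sketch, which assumes Python and a local git repository, tallies lines added and deleted per week from git log --numstat:

```python
"""Summarize lines added/deleted per week from `git log --numstat`.

A sketch of tracking LOC trends and code churn; it assumes the script
runs inside a local git repository.
"""
import subprocess
from collections import defaultdict

def weekly_churn(since: str = "8 weeks ago") -> dict[str, list[int]]:
    # Each commit prints a "WEEK YYYY-WW" line followed by numstat rows
    # of the form "added<TAB>deleted<TAB>path".
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=format:WEEK %cd",
         "--date=format:%Y-%W", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    week = "?"
    for line in out.splitlines():
        if line.startswith("WEEK "):
            week = line.split()[1]
        elif line and line[0].isdigit():  # skips blanks and binary files ("-")
            added, deleted, _path = line.split("\t")
            totals[week][0] += int(added)
            totals[week][1] += int(deleted)
    return totals

if __name__ == "__main__":
    for week, (added, deleted) in sorted(weekly_churn().items()):
        print(f"{week}: +{added} / -{deleted}  (churn {added + deleted})")
```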
Qualitative Metrics
Code Review Feedback
Effective code reviews are crucial for maintaining code quality and fostering a collaborative development environment in an engineering organization. Monitoring code review feedback, such as the frequency of comments, the depth of review, and the incorporation of feedback into subsequent iterations, provides insights into the team’s commitment to quality and continuous improvement. A culture of constructive feedback and iteration during code reviews indicates a quality-driven approach to development.
Team Satisfaction and Morale
High morale and job satisfaction among engineering teams are key indicators of a healthy and productive work environment. Happy and engaged teams tend to be more motivated, creative, and productive. Regularly measuring team satisfaction through surveys, feedback sessions, or one-on-one discussions helps identify areas for improvement and reinforces a positive culture that fosters teamwork, productivity, and collaboration.
Rate of Feature Delivery
Timely delivery of features is essential for meeting project deadlines and delivering value to stakeholders. Monitoring the rate of feature delivery, including the speed and predictability of feature releases, provides insights into the team’s ability to execute and deliver results efficiently. Consistently meeting or exceeding feature delivery targets indicates a well-functioning development process and effective project management practices.
Customer Satisfaction and Feedback
Ultimately, the success of development efforts is measured by the satisfaction of end-users. Monitoring customer satisfaction through feedback channels, such as surveys, reviews, and support tickets, provides valuable insights into the effectiveness of the software in delivering meaningful solutions. Positive feedback and high satisfaction scores indicate that the development team has successfully met user needs and delivered a product that adds value. Conversely, negative feedback or low satisfaction scores highlight areas for improvement and inform future development priorities.
Best Practices for Measuring Developer Productivity
While analyzing the metrics and measuring software developer productivity, here are some things you need to remember:
Balance Quantitative and Qualitative Metrics: Combining both types of metrics provides a holistic view of productivity.
Customize Metrics to Fit Team Dynamics: Tailor metrics to align with the development team’s unique objectives and working styles.
Ensure Transparency and Clarity: Communicate clearly about the purpose and interpretation of metrics to foster trust and accountability.
Iterate and Adapt Measurement Strategies: Continuously evaluate and refine measurement approaches based on feedback and evolving project requirements.
How does Generative AI Improve Developer Productivity?
Below are a few ways in which Generative AI can have a positive impact on developer productivity:
Focus on meaningful tasks: Generative AI tools take up tedious and repetitive tasks, allowing developers to give their time and energy to meaningful activities, resulting in productivity gains within the team members’ workflow.
Assist in their learning: Generative AI lets software engineers gain practical insights and examples from these tools, enhancing their skills and team performance.
Assist in pair programming: Through Generative AI, developers can collaborate with other developers easily.
Increase the pace of software development: Generative AI helps in the continuous delivery of products and services and drives business strategy.
How does Typo Measure Developer Productivity?
There are many developer productivity tools available in the market for tech companies. One of the tools is Typo – the most comprehensive solution on the market.
Typo provides early indicators of developer well-being and actionable insights into the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers innovative features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams. It measures the overall team’s productivity while keeping individuals’ strengths and weaknesses in mind.
Here are three ways in which Typo measures the team productivity:
Software Development Visibility
Typo provides complete visibility into software delivery. It helps development teams and engineering leaders to identify blockers in real time, predict delays, and maximize business impact. Moreover, it lets the team dive deep into key DORA metrics and understand how well they are performing across industry-wide benchmarks. Typo also enables them to get real-time predictive analysis of how the team is performing, identify the best dev practices, and gain a comprehensive view across velocity, quality, and throughput.
This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks. The approach ensures that resources are utilized efficiently, resulting in enhanced productivity and better business outcomes.
Code Quality Automation
Typo helps developers streamline the development process and enhance their productivity by identifying issues in your code and auto-fixing them before merging to master. This means less time reviewing and more time for important tasks hence, keeping code error-free, making the whole process faster and smoother. The platform also uses optimized practices and built-in methods spanning multiple languages. Besides this, it standardizes the code and enforces coding standards which reduces the risk of a security breach and boosts maintainability.
Since the platform automates repetitive tasks, it allows development teams to focus on high-quality work. Moreover, it accelerates the review process and facilitates faster iterations by providing timely feedback. This offers insights into code quality trends and areas for improvement, fostering an engineering culture that supports learning and development.
Developer Experience
Typo also surfaces early indicators of developers’ well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins. These check-ins are built on a developer experience framework that triggers targeted pulse surveys.
Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.
Hence, by addressing these aspects, Typo’s holistic approach combines data-driven insights with proactive monitoring and strategic intervention to create a supportive and high-performing work environment. This leads to increased developer productivity and satisfaction.
Track Developer Productivity Effectively
Measuring developers’ productivity is not straightforward, as it varies from person to person. It is a dynamic process that requires careful consideration and adaptability.
To achieve greater success in software development, the development teams must embrace the complexity of productivity, select appropriate metrics, use relevant tools, and develop a supportive work culture.
There are many developer productivity tools available in the market. Typo stands out to be the prevalent one. It’s important to remember that the journey toward productivity is an ongoing process, and each iteration presents new opportunities for growth and innovation.
Kovid Batra: That’s really interesting. That is one unique thing that I got to learn today. And I’m sure orchestra must have been fun.
Mario Viktorov Mechoulam: Yes.
Kovid Batra: Do you do that, uh, even today?
Mario Viktorov Mechoulam: Uh, no, no, unfortunately I’m, I’m like the black sheep of my family because I, once I discovered computers and switched to that, um, I have not looked back. Uh, some days I regret it a bit, uh, but this new adventure, this journey that I’m going through, um, I don’t think it’s, it’s irreplaceable. So I’m, I’m happy with what I’m doing.
Kovid Batra: Great! Thank you for sharing this. Uh, moving on, uh, to our main section, which is setting a culture of metrics in engineering teams. I think a very known topic, a very difficult to do thing, but I think we’ll address the elephant in the room today because we have an expert here with us today. So Mario, I think I’ll, I’ll start with this. Uh, sorry to say this, but, uh, this looks like a boring topic to a lot of engineering teams, right? People are not immediately aligned towards having metrics and measurement and people looking at what they’re doing. And of course, there are biases around it. It’s a good practice. It’s an ideal practice to have in high performing engineering teams. But what made you, uh, go behind this, uh, what excited you to go behind this?
Mario Viktorov Mechoulam: A very good question. And I agree that, uh, it’s not an easy topic. I think that, uh, what’s behind the metrics is around us, whether we like it or not. Efficiency, effectiveness, optimization, productivity. It’s, it’s in everything we do in the world. So, for example, even if you, if you go to the airport and you stay in a queue for your baggage check in, um, I’m sure there’s some metrics there, whether they track it or not, I don’t know. And, um, and I discovered in my, my university years, I had, uh, first contact with, uh, Toyota production system with Lean, how we call it in the West, and I discovered how there were, there were things that looked like, like magic that you could simply by observing and applying use to transform the landscape of organizations and the landscape systems. And I was very lucky to be in touch with this, uh, with this one professor who is, uh, uh, the Director of the Lean Institute in Spain. Um, and I was surprised to see how no matter how big the corporation, how powerful the people, how much money they have, there were inefficiencies everywhere. And in my eyes, it looks like a magic wand. Uh, you just, uh, weave it around and then you magically solve stuff that could not be solved, uh, no matter how much money you put on them. And this was, yeah, this stuck with me for quite some time, but I never realized until a few years into the industry that, that was not just for manufacturing, but, uh, lean and metrics, they’re around us and it’s our responsibility to seize it and to make them, to put them to good use.
Kovid Batra: Interesting. Interesting. So I think from here, I would love to know some of the things that you have encountered in your journey, um, as an engineering leader. Uh, when you start implementing or bringing this thought at first point in the teams, what’s their reaction? How do you deal with it? I know it’s an obvious question to ask because I have been dealing with a lot of teams, uh, while working at Typo, but I want to hear it from you firsthand. What’s the experience like? How do you bring it in? How do you motivate those people to actually come on board? So maybe if you have an example, if you have a story to tell us from there, please go ahead.
Mario Viktorov Mechoulam: Of course, of course. It’s not easy and I’ve made a lot of mistakes and one thing that I learned is that there is no fast track. It doesn’t matter if you know, if you know how to do it. If you’ve done it a hundred times, there’s no fast track. Most of the times it’s a slow grind and requires walking the path with people. I like to follow the, these steps. We start with observability, then accountability, then understanding, then discussions and finally agreements. Um, but of course, we cannot, we cannot, uh, uh, drop everything at, at, at, at once at the team because as you said, there are people who are generally wary of, of this, uh, because of, um, bad, bad practices, because of, um, unmet expectations, frustrations in the past. So indeed, um, I have, I have had to be very, very careful about it. So to me, the first thing is starting with observability, you need to be transparent with your intentions. And I think one, one key sentence that has helped me there is that trying to understand what are the things that people care about. Do you care about your customers? Do you care about how much focus time, how much quality focus time do you have? Do you care about the quality of what you ship? Do you care about the impact of what you ship? So if the answer to these questions is yes, and for the majority of engineers, and not only engineers, it’s, it’s yes, uh, then if you care about something, it might be smart to measure it. So that’s a, that’s a good first start. Um, then by asking questions about what are the pains or generating curiosity, like for example, where do you think we spend the most time when we are working to ship something? You can, uh, you can get to a point where the team agrees to have some observability, some metrics in place. So that’s the first step.
Uh, the second step is to generate accountability. And that is arguably harder. Why so? Because in my career, I’ve seen sometimes people, um, who think that these are management metrics. Um, and they are, so don’t get me wrong. I think management can put these metrics to good use, um, but this sends a message in that nobody else is responsible for them, and I disagree with this. I think that everybody is responsible. Of course, I’m ultimately responsible. So, what I do here is I try to help teams understand how they are accountable of this. So if it was me, then I get to decide how it really works, how they do the work, what tools they use, what process they use. This is boring. It’s boring for me, but it’s also boring and frustrating for the people. People might see this as micromanagement. I think it’s, uh, it’s much more intellectually interesting if you get to decide how you do the work. And this is how I connect the accountability so that we can get teams to accept that okay, these metrics that we see, they are a result of how we have decided to work together. The things, the practices, the habits that we do. And we can, we can influence them.
Kovid Batra: Totally. But the thing is, uh, when you say that everyone should be onboarded with this thought that it is not just for the management, for the engineering, what exactly, uh, are those action items that you plan that get this into the team as a culture? Because I, I feel, uh, I’ll touch this topic again when we move ahead, but when we talk about culture, it comes with a lot of aspects that you can, you can not just define, uh, in two days or three days or five days of time. There is a mindset that already exists and everything that you add on top of it comes only or fits only if it aligns with that because changing culture is a hard thing, right? So when you say that people usually feel that these are management metrics, somehow I feel that this is part of the culture. But when you bring it, when you bring it in a way that everyone is accountable, bringing that change into the mindset is, is, is a little hard, I feel. So what exactly do you do there is what I want to understand from you.
Mario Viktorov Mechoulam: Sure. Um, so just, just to be, to be clear, at the point where you introduce this observability and accountability, it’s not, it’s not part of the culture yet. I think this is the, like, putting the foot on the door, uh, to get people to start, um, to start looking at these, using these and eventually they become a culture, but way, way later down the line.
Kovid Batra: Got it, got it. Yeah.
Mario Viktorov Mechoulam: Another thing is that culture takes, takes a lot of time. It’s, uh, um, how can we say? Um, organic adoption is very slow. And after organic adoption, you eventually get a shifting culture. Um, so I was talking to somebody a few weeks back, and they were telling me a senior leader for one of another company, and they were telling me that it took a good 3–4 years to roll out metrics in a company. And even then, they did not have all the levels of adoption, all the cultural changes everywhere in all the layers that they wanted to. Um, so, so this, there’s no fast track. This, this takes time. And when you say that, uh, people are wary about metrics or people think that manage, this is management metrics when they, when, when you say this is part of culture, it’s true. And it comes maybe from a place where people have been kept out of it, or where they have seen that metrics have been misused to do precisely micromanagement, right?
Kovid Batra: Right.
Mario Viktorov Mechoulam: So, yeah, people feel like, oh, with this, my work is going to be scrutinized. Perhaps I’m going to have to cut corners. I’m going to be forced to cut corners. I will have less satisfaction in the work we do. So, so we need to break that, um, to change the culture. We need to break the existing culture and that, that takes time. Um, so for me, this is just the first step. Uh, just the first step to, um, to make people feel responsible, because at the end of the day, um, every, every team costs some, some, some budget, right, to the company. So for an average sized team, we might be talking $1 million, depending on where you’re located, of course. But $1 million per year. So, of course, this, each of these teams, they need to make $1 million in, uh, in impact to at least break even, but we need more. Um, how do we do that? So two things. First, you need, you need to track the impact of the work you do. So that already tells you that if we care about this, there is a metric that we have to incorporate. We have to track the impact, the effect that the work we ship has in the product. But then the second, second thing is to be able to correlate this, um, to correlate what we ship with the impact that we see. And, and there is a very, very, uh, narrow window to do that. You cannot start working on something and then ship it three years later and say, Oh, I had this impact. No, in three years, landscape changed a lot, right? So we need to be quicker in shipping and we need to be tracking what we ship. Therefore, um, measuring lead time, for example, or cycle time becomes one of the highest expressions of being agile, for example.
Kovid Batra: Got it.
Mario Viktorov Mechoulam: So it’s, it’s through these, uh, constant repetition and helping people see how the way they do work, how, whether they track or not, and can improve or not, um, has repercussions in the customer. Um, it’s, it’s the way to start, uh, introducing this, this, uh, this metric concept and eventually helping shift the culture.
Kovid Batra: So is, let’s say cycle time for, for that matter, uh, is, is a metric that is generally applicable in every situation and we can start introducing it at, at the first step and then maybe explore more and, uh, go for some specifics or cycle time is specific to a situation in itself?
Mario Viktorov Mechoulam: I think cycle time is one of these beautiful metrics that you can apply everywhere. Uh, normally you see it applied on the teams. To do, doing, done. But, uh, what I like is that you can apply it, um, everywhere. So you can apply it, um, across teams, you can apply, apply it at line level, you can even apply it at company level. Um, which is not done often. And I think this is, this is a problem. But applying it outside of teams, it’s definitely part of the cultural change. Um, I’ve seen that the focus is often on teams. There’s a lot of focus in optimizing teams, but when you look at the whole picture, um, there are many other places that present opportunities for optimization, and one way to do that is to start, to start measuring.
Kovid Batra: Mario, did you get a chance where you could see, uh, or compare basically, uh, teams or organizations where people are using engineering metrics, and let’s say, a team which doesn’t use engineering metrics? How does the value delivery in these systems, uh, vary, and to what extent, basically?
Mario Viktorov Mechoulam: Let me preface that. Um, metrics are just a cornerstone, but they don’t guarantee that you’d do better or worse than the teams that don’t apply them. However, it’s, it’s very hard, uh, sometimes to know whether you’re doing good or bad if you don’t have something measurable, um, to, to do that. What I’ve seen is much more frustration generally in teams that do not have metrics. But because not having them, uh, forces them into some bad habits. One of the typical things that I, that I see when I join a team or do a Gemba Walk, uh, on some of the teams that are not using engineering metrics, is high work in progress. We’re talking 30+ things are ongoing for a team of five engineers. This means that on average, everybody’s doing 5–6 things at the same time. A lot of context switching, a lot of multitasking, a lot of frustration and leading to things taking months to ship instead of days. Of course, as I said, we can have teams that are doing great without this, but, um, if you’re already doing this, I think just adding the metric to validate it is a very small price to pay. And even if you’re doing great, this can start to change in any moment because of changes in the team composition, changes in the domain, changes in the company, changes in the process that is top-down. So it’s, uh, normally it’s, it’s, it’s very safe to have the metrics to be able to identify this type of drift, this type of degradation as soon as they happen. What I’ve seen also with teams that do have metric adoption is first this eventual cultural change, but then in general, uh, one thing that they do is that they keep, um, they keep the pieces of work small, they limit the work in progress and they are very, very much on top of the results on a regular basis and discussing these results. Um, so this is where we can continue with the, uh, cultural change.
Uh, so after we have accountability, the next step is understanding: helping people, through documentation but also through coaching, understand how the choices that we make, the decisions, the events, produce the results that we see and for which we’re responsible. After that comes fostering discussion, for which you need to have trust, because here we don’t want blaming, we don’t want comparing teams. We want to understand what happened, what led to this. And then, with these discussions, see what we can do to prevent these things, which leads to agreements. Closing this circle and doing it constantly creates habits. Habits create continuous improvement, continuous learning. And at a certain point, you have the feeling that the team already understands the concepts and is able to work autonomously on this. That is the moment where you delegate responsibility for this, and for the execution as well. And then you have changed a bit the culture in one team.
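For readers who want to try Mario’s arithmetic on their own data, here is a minimal sketch, not from the conversation itself: it assumes you can export work items with the timestamps at which they entered “doing” and “done” (the `tickets` structure and its field names are hypothetical, not tied to any specific tool).

```python
from datetime import datetime

# Hypothetical export: one record per work item, with the timestamps
# at which it entered "doing" and "done" (None if still in progress).
tickets = [
    {"id": "ENG-101", "started": datetime(2025, 3, 1), "done": datetime(2025, 3, 8)},
    {"id": "ENG-102", "started": datetime(2025, 3, 2), "done": None},
    {"id": "ENG-103", "started": datetime(2025, 3, 3), "done": datetime(2025, 3, 6)},
]
team_size = 5

# Work in progress: items that have started but not finished.
wip = sum(1 for t in tickets if t["started"] and not t["done"])
print(f"WIP: {wip} items, {wip / team_size:.1f} per engineer")

# Cycle time in days for finished items.
cycle_times = [(t["done"] - t["started"]).days for t in tickets if t["done"]]
if cycle_times:
    print(f"Average cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")
```

With the numbers Mario quotes, 30+ items in progress across five engineers, the same computation gives about six concurrent items per person, which is the context-switching load he describes.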
Kovid Batra: Makes sense. What else does it take, uh, to actually bring in this culture? What else do you think is still missing in this recipe?
Mario Viktorov Mechoulam: Yes. Um, I think working with teams is one thing. It’s a small and controlled environment. But the next thing is that you need executive sponsorship. You need to work at the organization level. And that is, that is a bit harder. Let’s say just a bit harder. Um, why is it hard?
Kovid Batra: I see some personal pain coming in there, right?
Mario Viktorov Mechoulam: Um, well, no, it depends. I think it can be harder or it can be easier. For example, my experience with startups is that getting executive sponsorship there, the buy-in, is way easier, because the organization is flatter, so you’re in day-to-day contact with the people who need to give you this buy-in. At the same time, very interestingly, engineers in these organizations often need these metrics much less at that point. Why? Because when we talk about startups, we’re talking about far fewer meetings, far less process. People usually wear multiple hats, boundaries between roles are not clear, so there’s a lot of collaboration, and people usually sit in the very same room. So these are engineers that don’t need it, but it’s also a good moment to plant the seed, because when these companies grow, uh, you’ll be thankful for it later. Where it’s harder to get is in bigger corporations. But it’s in these places where I think it’s most needed, because the amount of process, bureaucracy, and meetings is very draining for the teams there. And usually you see all of this just pile up; it seldom gets removed. Um, that’s maybe a topic for a different discussion, but I think people are very afraid of removing something and then being responsible for the result that removal brings. But yeah, I have had, we can say, fair success in getting executive sponsorship in organizations to support this, and I have learned a few things along the way.
Kovid Batra: Would you like to share some examples? Not the specifics of getting sponsorship from the executives, necessarily, but I would be interested because you say it’s a little hard in places. So what things do you think can work when you are in that room where you need to get buy-in on this? What exactly drives that?
Mario Viktorov Mechoulam: Yes. The first point is the same both for grassroots movements with teams and for executive sponsorship, and that is to be transparent. Transparent with what you want to do, what your intent is, and why you think this is important. Now here, and I’m embarrassed to say this: we want to change the culture, right? So we should focus on talking about habits, about culture, about people, et cetera, and not so much promise magic. But I’m guilty of doing that, because people like how it sounds; people like to hear, “oh, we’ll introduce metrics and we’ll be faster and more efficient.” It’s not a direct relationship. As I said, it’s a stepping stone that can help you get there, and it’s not a one-month journey or a one-year journey; it can take slightly longer. But sometimes, to get attention, you have to have a pitch which focuses more on efficiency, on predictability, and these types of things. So that’s definitely one learning. Um, the second learning is that it’s very important, no matter who you are, but even more so when you are not at the top of the management pyramid, to get coaching from your direct manager. If you have somebody who makes your goals, your objectives, their own, it’s great, because they have more experience and they can help you navigate this and present the case in a much better and more structured way for the intent that you have. And I was very lucky there as well to count on people who were supportive, who were coaching me along the way.
So, the first step is the same: be transparent with your intent, and share something that you have done already. Here we are often in a situation where you have to put your money where your mouth is, and sometimes you have to invest from your own pocket if you want, for example, to use a specific tool. To me, tools don’t really matter. What’s important is to start with something and build on top of it, change the culture, and then you’ll find the perfect tool that serves your purpose. So sometimes you have to initiate this yourself if you want to have some metrics. Of course, you can always do it manually. I’ve done that in the past, but I definitely don’t recommend it because it’s a lot of work, and we’re in an era where most of these tools are commodities, so we’re lucky enough to be able to gather this information easily. Yeah, so usually after this PoC, this experiment for three to six months with a team, you should have some results that you can present to get executive sponsorship. Something important here that I learned is that you need to present the results very precisely: what was the problem, what actions did we take, what’s the result? And that’s not always easy, because when you work with metrics for a while, you quickly start to see that there are a lot of synergies. There’s overlap; there are things that impact other things. So sometimes you see a change in the trend, you see an improvement somewhere, you see the cultural impact also happening, but you’re not able to pin down exactly the one or two things that caused it. So that part, I think, is very important, but it’s not always easy, and it has to be prepared carefully. The second part is that, unfortunately, I discovered that not many people are familiar with these topics. So when introducing them to get the executive sponsorship, you need to be able to explain them in a very simple and easy way, and also be mindful of the time, because most of these people are very busy. You don’t want to go into a full-blown explanation of several hours.
Kovid Batra: I think those people should watch these kinds of podcasts.
Mario Viktorov Mechoulam: Yeah. Um, but yeah, so it’s the experiment, it’s the results, it’s the actions, but also a bit of background on why this is important and, um, yeah, how it influenced what we did.
Kovid Batra: Yeah, I mean, there are always different levels where people are in this journey. Let’s call it a journey where, at one end, you are super aware and know what needs to be done, and at the other end you’re not even aware of the problem itself. So when you go through this funnel, there are people whom you need to onboard in your team, who first need to understand what we are talking about, what it means, how it’s going to create impact, and what exactly it is, in very simple layman language. So I totally understand that point, and I realize how easy, as well as how difficult, it is to get these things in place and bring that culture of engineering metrics into the engineering teams.
Well, I think this was something really, really interesting. Uh, one last piece that I want to touch upon: when you put in all these efforts into onboarding the teams, fostering that culture, getting buy-in from the executives, doing your PoCs and then presenting them, getting in sync with the team, there must be some specific indicators, right, that you start seeing in the teams. I know you have just covered it, but I want to highlight that point again: what exactly should someone, let’s say an engineering manager trying to implement this, be looking for early on, or maybe one month, two months down the line after starting that PoC with their team?
Mario Viktorov Mechoulam: I think, um, how comfortable the people in the team get in discussing and explaining the concepts during analysis of the metrics, this qualitative analysis, is key. And this is probably where most of the effort goes in the first months. We need to make sure that people do understand the metrics, what they represent, and how the work we do has an impact on them. When we reached that point, one cue for me was people in my teams telling me, “I want to run this.” That meant to me that we had closed the circle, we were close to having a habit, and people were ready to have this responsibility delegated to them. It put people in a place where they had to drive a conversation and think: what am I seeing? What happened? What could it mean? What actions do we want to take? Is this something we saw in the past already and tried to address, and maybe made worse? And then you should also see a change in the trend of the metrics. For example, work in progress going from 30+ down to something close to the team size. It could go even lower, because a WIP equal to team size still means people are working independently, and maybe you want them to collaborate. Some of the metrics change drastically. We can talk about it another time, but with the standard deviation of the cycle time, you can see how it squeezes, which means shipping no longer feels random: now we can make a very accurate guess of when something is going to land. These types of things, to me, mark good changes and show that you’re on the right path.
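Mario’s point about the standard deviation “squeezing” can be illustrated in a few lines of Python. The two samples below are invented for illustration, but they show how a narrower spread turns “when will it ship?” from a guess into a usable forecast:

```python
import statistics

# Hypothetical cycle times in days, before and after limiting WIP.
before = [2, 5, 19, 3, 41, 8, 27, 4, 16, 33]
after = [3, 4, 6, 5, 4, 7, 5, 6, 4, 5]

for label, sample in (("before", before), ("after", after)):
    mean = statistics.mean(sample)
    stdev = statistics.stdev(sample)
    # A rough "most items land within" band; a real forecast would use
    # percentiles or Monte Carlo simulation rather than mean +/- stdev.
    print(f"{label}: mean {mean:.1f}d, stdev {stdev:.1f}d, "
          f"typical range {max(mean - stdev, 0):.0f}-{mean + stdev:.0f}d")
```

The “before” sample averages around 16 days with a standard deviation of the same order, so a promise of “about two weeks” is nearly meaningless; the “after” sample has a standard deviation near one day, which is what makes the accurate guesses Mario mentions possible.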
Kovid Batra: Uh, honestly, Mario, very insightful, very practical tips that I have heard today about the implementation piece, and I’m sure this doesn’t end here. We are going to have more such discussions on this topic, and I want to deep dive into the exact metrics, how to use them, and what suits which situation. Talking about how things like the standard deviation of your cycle time start changing is in itself an interesting thing to cover, so probably we’ll take that up in the next podcast we have with you. For today, this is our time. Any parting advice that you would like to share with the audience? Let’s say there is an engineering manager, say, Mario five years back, who is thinking of going in this direction. What piece of advice would you give that person to get on this journey, and what’s the incentive for that person?
Mario Viktorov Mechoulam: Yes. Okay. Clear. In general, you’ll hear that people and teams are too busy to improve. We all know that. So I think as a manager who wants to start introducing these concepts and these metrics, one of your responsibilities is to make room, to make space for the team, so that they can sit down and have quality time for this type of conversation. Without it, it’s not going to happen.
Kovid Batra: Okay, perfect. Great, Mario. It was great having you here. And I’m sure, uh, we are recording a few more sessions on this topic because this is close to us as well. But for today, this is our time. Thank you so much. See you once again.
Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. Bye-bye!
Kovid Batra: Bye.
Webinar: Unlocking Engineering Productivity with Clint Calleja & Rens Methratta
February 28, 2025
•
59 min read
In the third session of 'Unlocking Engineering Productivity' webinar by Typo, host Kovid Batra converses with engineering leaders Clint Calleja and Rens Methratta about strategies for enhancing team productivity.
Clint, Senior Director of Engineering at Contentsquare, and Rens, CTO at Prendio, share their perspectives on the importance of psychological safety, clear communication, and the integration of AI tools to boost productivity. The panel emphasizes balancing short-term deliverables with long-term technical debt, and the vital role of culture and clear goals in aligning teams. Through discussions on personal experiences, challenges, and learnings, the session provides actionable insights for engineering managers to improve developer experience and foster a collaborative working environment.
Kovid Batra: All right. Welcome to the third session of Unlocking Engineering Productivity. This is Kovid, your host, and with me today we have two amazing, passionate engineering leaders, Clint and Rens. I’ll introduce them one by one. Let’s go ahead. Uh, Clint, he’s the Senior Director of Engineering at Contentsquare, ex-Hotjar, a long-time friend and a mentor. Welcome, welcome to the show, Clint. It is great to have you here.
Clint Calleja: Thank you. Thank you, Kovid. It’s, uh, it’s uh, it’s very exciting to be here. Thank you for the invite.
Kovid Batra: Alright. Uh, so Clint, I think we were talking about your hobbies last time and I was really fascinated by one fact. So guys, Clint is actually training in martial arts. He’s a very well-trained, professional martial arts practitioner, and he’s particularly interested in karate. Is that right, Clint?
Clint Calleja: Yes. Yes indeed. It’s, uh, I wouldn’t say professionally, you know, we’ve been at it for two years, me and the kids. But yes, it’s, uh, it’s grown on me. I enjoy it.
Kovid Batra: Perfect. What else do you like? Uh, would you like to share something about your hobbies and passions?
Clint Calleja: Yeah, on the, you know, movement side, I’m into sports in general, like fit training, and I enjoy a game of squash here and there. And then on the calmer side, I need my, you know, daily dose of reading. It varies: sometimes it’s around leadership, sometimes psychology. Lately it’s a bit more into stoicism and, you know, the art of thinking about what we can control. Uh, yeah, that’s me, basically.
Kovid Batra: That’s great. Really interesting. Um, the other guest that we have today: we have Rens with us. Rens is the CTO of Prendio. He is also a Typo product user and a product champion; he has been guiding us throughout, uh, on building the product so far. Welcome to the show, Rens.
Rens Methratta: Hi, Kovid. Uh, you know, it’s good to be here. Uh, Clint, it’s really good to meet you. Uh, very excited to participate and, uh, uh, it’s always really good to, uh, talk shop. Uh, enjoy it.
Kovid Batra: Thank you so much. Thank you so much. Uh, all right, uh, Rens, would you like to tell us something about your hobbies? How do you unwind your day? What do you do outside of work?
Rens Methratta: Yeah, no, um, it’s funny, I don’t think I have many hobbies anymore. I mean, I have two young kids now, um, and their hobbies are my hobbies. So, uh, gymnastics, soccer, a lot of different sports things, and piano. I’m learning piano with my daughter; I guess that’s a hobby. Um, I’m not very far along, but I’m, I’m enjoying it. But a lot of the things that they do become stuff that I get involved in, and I really try to enjoy it with them as well. It makes it more fun.
Kovid Batra: No, I can totally understand that, because having two kids and, uh, being in a CTO position, uh, I think all your time would be consumed outside of work by the kids. Uh, that’s, that’s totally fine. And if your hobbies are aligned with what your kids are doing, that’s, that’s good for them and good for you.
Rens Methratta: Yeah, no, I love it. I enjoy it. It keeps me, you know, I remember someone saying a long time ago that when you get older, life goes by faster, ’cause you keep doing the same stuff every day and your mind just samples less, right? So they kind of keep me young. I get to do new stuff through them. So it’s been good.
Kovid Batra: Perfect. Perfect. Um, thanks for the, for the introduction. Uh, we got to know you a little more, but that doesn’t stop here. Uh, Clint, you were talking about psychology reading those books. Uh, there is one small ritual, uh, on, on this show, uh, that is again, driven from my, uh, love for psychology, understanding human behavior and everything. So, uh, the ritual is basically that you have to tell something about yourself, uh, from your childhood, from your teenage, uh, that defines you who you are today.
Clint Calleja: Very interesting question. It reminds me of a previous manager I used to have who liked asking this question as well. I think there was a recent one, which we just mentioned: mentioning kids, Rens, you got me to it. The fact that I actually started training martial arts because of the kids; I took them and I ended up doing it myself. But I think the biggest one that comes to mind was in 2005, at the age of 22. Um, in Malta, you know, we’re a very tightly-knit culture. People stay living with their parents long, we’re a small island, everyone is close by. So I wanted to see what’s out there, and I went to live for a year in Germany. I think this was the most defining moment, on two fronts. On one side there was the career opportunity: whilst I was still studying software engineering part-time, there was this company that offered to take me on as an intern and trained me for a whole year in their offices in Germany. So that was a good step in the right direction career-wise. But outside of the profession, on a personal level, it was such an eye-opener. This was the year where I realized how many things I was taking for granted, you know, like family and friends close by when you need them, even simple things like the sunny weather in Malta, the sea close by. I think this was the year where I became much more aware of all of this and could reflect a bit deeper.
Kovid Batra: I totally relate to this, actually. For you it happened, I would say, a little late, because you moved out during your job, or later, in college. For me, it happened in my early teens: I moved out for schooling, to a hostel, and I had the same realizations. It got me thinking a lot about what I was taking for granted. So I totally relate, and that actually defined who I am today. I’m much more grateful towards my parents and the family that I have with me. Yeah.
Rens Methratta: Yeah, I’m glad. Um, thinking through this, it was an interesting question. I’d say, growing up, I grew up with my grandparents, right, and we had a farm. And I think growing up with them, them being a bit older, you get a little bit more sense of maturity, a way of thinking about the world, and seeing that at a young age was really good for me. Because, you know, in farming there are lots of things that sometimes go wrong. There are floods, there’s disease, there’s lots of stuff. But how they approached things: they were never about, you know, let’s blame anyone. It was really, hey, let’s stay calm, let’s focus on solving the problem, let’s figure it out, kind of staying positive. And I think that was really helpful for me, them setting an example. Really the biggest thing they taught me was: there are certain things you just can’t control, so focus on the things you can control and worry about those, and that’s it. Really be positive, in a lot of ways. And I carry that with me a lot. There’s a lot of stuff you can stress out about, but there are only so many things you can control, and you kind of let go of everything else. So, totally, I keep that with me.
Kovid Batra: Totally makes sense. I mean, people like you coming from humble backgrounds are more fighters by nature. They’re more positive in life, more optimistic in such situations. And I’ve seen that in a lot of folks around me. People who are privileged do not really get to be that anti-fragile; when situations come, they break down easily. So I think that defines who you are. I totally relate to that. Perfect. Great. Thank you. Thank you for sharing this. Alright guys, I think now we will move on to the main section, which is what this particular Unlocking Engineering Productivity session is about. Today’s theme is around developer experience and, of course, the experience that you both have gathered over your engineering leadership journeys. So I’ll start with a very fundamental thing, and I think we’ll go with Rens first. So let’s say, Rens: what, according to you, is engineering productivity? I mean, that’s the very fundamental question that I ask on this show, but I want to hear it out; the audience would want to understand the perspective of engineering leaders of such high-performing teams when they talk about productivity.
Rens Methratta: Yeah, I think, you know, there are the obviously simple things, metrics like, um, velocity, things like that. Those are always good to have. But from my perspective, the way that really good teams function is by making sure the teams are aligned with business objectives, right? What we’re trying to accomplish, common goals, regardless of how big an organization is. And it gets harder when you get bigger, obviously: identifying the layers between your impact and the business. Maybe it’s easier for smaller teams. But regardless, what I’ve seen work is linking to the outcomes that make the most sense and understanding productivity through that. So, hey, this is what our goals are. I think OKRs work really well in terms of structuring that as a framework. But realistically it’s saying: here’s what we as a team are trying to accomplish, here’s how we’re going to measure it, based on whatever the business metric or the key outcome is, and then let’s work on figuring it out. And then how we do that is: this is what we want to do, this is what we think we need to do to get there, and then what are we going to commit to? When do we think we’ll get it done? And how well did we do against that? So I think that’s how we tie it all together: getting us all aligned on objectives and making sure the objectives have meaning to the team. It’s always hard when people feel like, why am I doing this, right? That’s the worst. But if it’s clear that, hey, we know how this is going to make an impact on our customers or the business, and they can see it, then it becomes: we see the problem, here’s a solution we think is going to work, here’s what we’re committing to in order to fix it. And then it’s really measuring how well we met what we committed to. Did we deliver what we said we were going to deliver? Did we deliver it on time? Those are the things that we look at.
Kovid Batra: Got it. Got it. What, what do you have to say, Clint? Uh.
Clint Calleja: My definition is very much aligned. From a higher perspective, to me it all boils down to how well and how quickly we are delivering the right value to the user, right? And if we drill down into this, it means: how quickly are we able to experiment and learn from that? Is our architecture allowing us to do that as quickly as we want? How well are we planning and executing on those plans, with the least disruption possible, being proactive rather than reactive to disruption? So there’s a whole sphere of accountability that we need to take into consideration.
Kovid Batra: Makes sense. I think both of you are pointing towards the same thing. Fundamentally, if I look at it, it’s more about delivering value to the customer, delivering more value that aligns with the business goals, and that totally makes sense. But what do you guys think when it comes to other peers, other engineering leaders: do you see the same perspective about engineering productivity? Uh, Rens?
Rens Methratta: Um, I think in general, yes. But I think sometimes you end up getting caught up in just trying to hit a metric, right? And then losing track of: are we working on the right things? Is this worthwhile? I think that’s when it can be problematic. And even early in my career I’ve done that: hey, let’s be as efficient as possible in terms of building a metrics-driven organization, right? We’ll make everything small projects and get these things in really quickly. And what I learned is that in that situation, yeah, we’re doing stuff, but the team’s not as motivated, it’s not as collaborative, and the outcome isn’t going to be as good. I think the really key thing, from my perspective, is having a team that’s engaged, right? Being part of the process and proactive, and obviously measuring against what the outcomes are. That’s the side where I feel it’s great when we go to a retrospective or a sprint planning and the team says, “I don’t think this works.” The worst part is when you get crickets from people: “okay, this is what we wanna do,” and no real feedback, right? So I really look for how engaged teams are in terms of solving the problem, and for that cross-collaboration: building an environment where people feel empowered to ask those questions, be collaborative, ask tough questions too. I love it when an engineer says, “this is not gonna work.” It’s great. I’m like, yeah, tell me why. So I think if we can build cultures that way, that’s ideal.
Kovid Batra: Makes sense. Perfect. Uh, Clint, for you, uh, do you see the same perspective and how, how things get prioritized in that way?
Clint Calleja: I particularly love the focus and the attention on the culture, the cultural aspect. I think there’s more to it that we can unpack there, but yes, in general. Actually, when I heard this question, it reminded me of when I realized the need for data points, not just for the sake of data points or KPIs. What I started to see as the company grew is that without sitting down and agreeing on what good looks like, on what data points we’re looking at, you get a close definition but not an exact definition, which still leaves openness to interpretation. And there were cases as we grew bigger and bigger where, for example, I felt we were performing well, whereas someone else felt differently. And then you sit down and you realize, okay, this is the crux of the problem. That was the eureka moment: this is where we need data points on team performance, on system performance, on product performance in general.
Kovid Batra: Yeah. Makes sense. I think both of you have brought in some really good points, and I would like to deep dive into those as we move forward. But before we move on to some specific experiences around those pointers: Clint, in your journey over the last so many years, there must have been some counterintuitive things that you have realized by now, things that are not aligned with what you used to think earlier about how productivity is driven in teams. Is anything ringing a bell?
Clint Calleja: Uh, well, if you ask me about learnings, you know, things that I used to think were good for productivity and now I think are not, I keep having these, one in and one out, right? But the alignment on a key set of indicators, rather than just a few, was one of the big changes in my leadership career, because I went from using sprint data as the only data points to then extending to understanding the DORA metrics better, and why SPACE actually matters even more, because of the wellbeing factor and how engaged people are. So I realized that I needed to form a bigger set of data points in order to be able to understand the whole picture. And not just the quantitative data points; I also needed to fill in the gaps with the qualitative parts.
Kovid Batra: Sure. I think, yeah, those go hand in hand; alone, either of them is not sufficient to actually look at what’s going on and how to improve on those factors. Perfect. Makes sense. Rens, for you, there must have been some learnings over the years. Anything that you found intuitive earlier but now find counterintuitive? Yeah.
Rens Methratta: Yeah, no, there are learnings every day. But in general, maybe echoing what Clint said: I did end up at some point overindexing on some of the quantitative stuff, right? And then you lose track of what you’re trying to do. Hey, we did really well, we got through our sprints, our meeting times are low; there are so many things you can look at, and then you lose track of the greater picture. So I do think of identifying those north stars, as Clint was referencing, the things we think are important, and saying, hey, how are we measuring against those? That also helps you make sure you’re looking at the right metrics, potentially, and putting them in the right context. You know, it doesn’t matter if your velocity’s great if you’re not building the right things. So those are the things you learn: sometimes, just simplify what you look at, and think through how we as a team are meeting that, primarily from a team perspective, getting alignment on it. Like, hey, this is the goal we’re trying to get to. I feel like that’s when you get the most commitment from the team too: I know what we’re trying to do, it motivates people, and it’s something to celebrate when we get to it. Otherwise, it’s hard to celebrate. Like, “oh, we got X velocity.” That’s not it. So yeah, I think that’s the bigger learning: simplifying, and then defining the metrics based on those core goals so they all flow down and align, and you can point at something easily and say, this is why it’s important. I think that’s really important when you communicate to people: hey, look, this is problematic, we might need to take a look at this, and being able to say very simply, this is why it’s important, this is why it’s problematic, this is why it’s going to impact our North Star. I think that makes conversations a lot easier.
Kovid Batra: Totally makes sense, guys. I think having the right direction along with whatever you are doing on a day-to-day basis as KPIs is very important. And of course, to understand the holistic picture, to understand the developer’s experience, a team’s experience, and to improve overall productivity, not just quantitative but qualitative data is equally important. So to sum up both of your learnings, I think this is a good piece of insight. Now we will jump on to the next section, but before that, I would like to tell our audience that if they have any questions, we are going to have a QnA round at the end of the session. So it’s better you guys put all your questions in the comments right now, so that we have filtered them out by the end of the episode. And moving on now. So guys, the next section is about specific experiences that we are going to deep dive into from Rens’ and Clint’s journeys. We’ll start with you, Clint. I think the best part about your experience, which I felt after stalking you on LinkedIn, is that you have been through multiple acquisitions, and you have worked with smaller and larger organizations and high-performing teams. This kind of experience brings a lot of exposure, and we would want to learn something from you. How does this transition happen? And in those transitions, what should an engineering leader be doing to not make it overwhelming, to stay productive, and to do the right things and create impact even during those transitions?
Clint Calleja: Uh, yes. I’ve been through a couple of interesting experiences, and I dare say, for me, especially the acquisition where Hotjar was acquired, it was a very unique experience of two big companies merging together. It’s very easy for such a transition to be overwhelming. I mean, there are a lot of things to do. So I think the first key takeaway for me is clear, intentional, regular communication, and making sure that you as a leader make yourself available to support everyone and to help guide others along this journey. Then there’s the other side of it: such an experience does not come without its own challenges, right? The outcomes are big. And in engineering leadership specifically, the primary area that you start to think about is, okay, the systems: what does it mean when we talk about the technology stacks, the platforms? But something not to underestimate is also the ways of working and the culture when merging companies, because I started coming to the realization that there’s more effort required in planning there than on the technology side of the story. So, a very interesting experience. Then, how to get the teams up and running: my experience last year was, again, very challenging, in a good way. You know, I started in a completely new department with about 55 people, 70% of them new to me, coming from the parent company. And we already had goals to deliver by June and by September. So yes, talk about overwhelm. I think one of the key exercises that really helped us start to carve out some breathing space was these iterations of higher-level estimations of the things we needed to implement. They immediately enabled us to understand whether we needed to descope, to have discussions to delay, or to bring more people into the mix. And following that, kickstarting: we needed to give the teams some space to come together and start forming their ways of working, while at the same time getting a high-level understanding of what we could commit to. From there, it’s all, again, about regular communication and reflections. Okay, biweekly, let’s have a quick update, and let’s call a spade a spade: if we’re in the red, let’s call it out. We’d rather know early, so that we can do something about it while there’s time. I’m not sure if you’ve ever seen the situation where a status is green for almost a whole quarter, then all of a sudden you get to the last two weeks and it’s red. So.
Kovid Batra: Makes sense. Um, while we were initially talking, you said there is a lot to unpack on the developer experience part. I’m sure that’s something very core to you and your leadership style, where you ensure a good developer experience across your team. Now you have shifted to a new team, and in general, wherever you have been leading teams, are there any specific practices around building a good developer experience that you have been following and that have been working for you? If there are, can you share something?
Clint Calleja: That’s a very good question, because I see different teams, right? So I’ve done different exercises with different teams. But in this particular case, where I started from was: I realized that when you have a new, uh, line being formed, mixed cultures coming from different companies, the one thing I can start with is at least providing a community culture where people feel safe enough to speak up. Why? Because we have challenging goals, we have a lot of questions, there are areas that are unknown. If people aren’t able to speak up, you know, the probability of surprises is going to be much, much higher.
Kovid Batra: Right.
Clint Calleja: Um, so what are some elements, some actions, that I’ve taken to try and improve here? I think when it comes to leading teams directly, in general we find much more support, because even if you look at the Agile manifesto, it talks about a default team where you have a number of engineers and, ideally, a trio enabled to do decision-making. There’s a pattern of reflections that happen, as Rens said, in the retrospectives; ideally actions get taken; there’s a continuous cycle of improvement. What I found interesting was that beyond one team, when I started to lead other leaders or managers, I could see a much bigger opportunity for this team of leaders or managers to actually work together as a team. By default, we’re all more focused on our own scope, making sure that our people are well supported and heard and that our team is delivering. But if we’re calling it developer experience, let’s also call this the manager experience: how much can we help each other out? How much can we support each other to remember that we’re people before managers? You know, it’s not the first time I went to work not feeling so great, so I needed to fine-tune my expectations of what I could produce. If this is not shared with my lead, my manager, or my peers, their expectations cannot adjust accordingly. So there’s a lot of this that I try to prioritize through simple gestures, like sharing my weekly goals and encouraging my managers to do the same.
Kovid Batra: Yeah.
Clint Calleja: So we can understand each other. We also try to do an end-of-week reflection; think of it like a retrospective, but between managers, to say, okay, hey, there was much more disruption than I anticipated this week, and it’s okay. Part of it is actually the psychological safety of being able to say, “I shot for 100% and I only achieved 50.” It’s okay, and I learned, right? And in terms of metrics, another exercise that I immediately tried to restart in my new line is one that I call the high-altitude flight. This is an exercise where, as leaders, we connect those KPIs with qualitative data, like the weekly pulse and feedback from 15Five, for example, and we talk about it on a monthly basis. We bring those data points onto a board, start asynchronously, raise the right questions, challenge each other, and this way we regularly bring those data points into the discussion and make sure we’re prioritizing some actions towards them.
Kovid Batra: Totally. I think, after talking to so many engineering leaders, one common pattern that I’ve seen in some of the best leaders is that they over-communicate, and I say that in a very positive sense. A lot of times you’re in a hybrid culture, in a remote culture, where however much you communicate, it is still less than enough. So having those discussions, giving that psychological safety, has always worked out for the teams, and I’m sure your team is very happy with the way you have been driving things. But thanks, thanks for sharing this experience. I’ll get back to you; I have a few questions for Rens also on this note. Uh, so Rens’ journey has also been very interesting. He has been the CTO at Prendio, and recently I was talking to him about some of the recent initiatives he was working on with the team. He talked about Copilot and a few other automated code analysis tools that he has been integrating into the team. So Rens, could you share some experience from there, and how that has impacted the developer experience and productivity in your teams?
Rens Methratta: Um, yeah, I’d be happy to. I think there’s a lot of change happening in terms of capabilities with AI, right, and in how we best utilize it. We’ve definitely seen it as models have gotten better. I think the biggest thing is we have a relatively large code base, and a newer code base for some things. Maybe even six months ago we would say, hey, AI can look at some new code, improve it, write some unit tests, things like that. But having an AI that has a really cohesive understanding of our code base and is able to suggest or build code that works well, that wouldn’t happen, right? But now it does. So that’s probably the biggest thing we’ve seen in the last couple of months, and it’s really changing how we think about development a bit. A lot of this is AI-first development; it changed the mindset for us as a team, right? How do we build it? Um, lots of new tools. I think, Kovid, you mentioned there are tons of new tools available, and it’s changing constantly. So we’ve spent some time looking at some of the newer tools, and as of now we’ve actually agreed on one: we’re moving everyone over to Cursor, just in terms of the capabilities it provides. And then similarly outside of code: there are tools, you know, Typo has the pull request summary, things like that, which are really helpful. And then automated testing; there’s a bunch of things that I think are really changing how we work and making us more productive. And it’s challenging, because it’s a lot of new stuff, right? It’s really making us rethink how we do development. So we’ve built some things now from an AI-first approach, and we have to kind of relearn how we do things. We’re thinking things out a bit more, defining things from a prompt-first approach: what are our prompts, what are our templates for prompts? It’s been really interesting and good to see. And I think, yeah, it has definitely made us more productive, and I think we’ll get more productivity as we embrace the tools, but also embrace the mindset. For the folks who’ve actually used it the most, you can see where they were when they first started utilizing it versus where they are now, and the productivity increase has been tremendous. So that’s probably the biggest change we’ve seen recently. It’s an exciting time. We’re looking forward to learning more, and it’s something we really have to get a better understanding of. But it comes with challenges too, I would say. Previously we had a good understanding of what our velocity would be. Right now, maybe in a good way, our velocity is better, right? It’s higher.
So, you know, even gauging effort, things like that: there are a lot of new things that we’re going to have to learn, figure out, and reassess. But yeah, if I look at anything that’s been different recently, that’s probably been the biggest thing and the biggest change for us in terms of how we work. And then also making sure we incorporate that into our existing workflows and existing development structure. It’s a lot of new changes for our team, and trying to help us adopt it effectively and making sure we’re thinking it through, but also giving our team the power to try new stuff, has been really cool too.
Kovid Batra: Perfect. Perfect. And, uh, my next question is actually for both of you. You both are trying to implement things: let’s say AI tooling, or better communication, better one-on-ones, bringing in that psychological safety. For everything that you guys do, I’m sure you both have some way to measure the impact of that particular initiative. So how do you actually measure the impact of such initiatives, or of such AI tooling that you bring into the picture? Uh, maybe Clint, you can go ahead.
Clint Calleja: I don’t have examples around AI tooling; in general, it’s more about deciding which of those KPIs we are actually optimizing for this quarter. So, for example, in Rens’ case, we were talking about how much AI is already influencing productivity; I would expect a pattern of decreased cycle time because of the quicker time to implement certain code. Um, I think the key part is something Rens said a while ago: not focusing on the KPI per se, just for the sake of that KPI, but connecting it, even in the narrative, in the communication when we set the goal with the teams, to the user value. For example, I had an interesting experience where I did exactly the opposite: I focused on the pickup time without a user connection. And this is where I got the learning: I was optimizing too much around the data point itself. Whereas eventually we started shifting towards utilizing MTTD, for example, to reduce the impact of service disruptions on our customers by detecting disruptions internally, and using SLOs to understand proactively if we’re eating too much into the error budget, so we actually act before an incident happens, right?
Kovid Batra: Um, right.
Clint Calleja: So, different data points. And going back to wellbeing, what I found very interesting: I know there are the engagement surveys that happen every six months to a year, usually. But because of that frequency, wellbeing becomes a lagging indicator. When we started utilizing 15Five, for example (there are other tools like it), the intention is that for every one-on-one, weekly or biweekly, you fill in a form starting with “how well did you feel, from 1 to 5?”. Because we were collecting that data weekly, all of a sudden the wellbeing pulse became a leading indicator, something that I could attribute to an intentional change that we decided to make in leadership.
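As a side note on the SLO and error-budget mechanism Clint alludes to: the arithmetic is simple enough to sketch in a few lines (the numbers below are illustrative, not from the conversation).

```python
# A 99.9% availability SLO over a 30-day window.
slo_target = 0.999
window_minutes = 30 * 24 * 60          # 43,200 minutes in the window

error_budget = (1 - slo_target) * window_minutes   # about 43.2 minutes
downtime_so_far = 25                   # minutes of disruption measured so far

consumed = downtime_so_far / error_budget
print(f"Error budget: {error_budget:.1f} min, consumed: {consumed:.0%}")

# Alerting when consumption crosses a threshold (say 75%) is what lets a
# team act before the SLO is actually breached, as Clint describes.
```

At 25 minutes of downtime, roughly 58% of the budget is gone, which is exactly the kind of early signal that supports acting before a disruption becomes an SLO breach.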
Kovid Batra: Makes sense. Rens, for you, I think the question is pretty simple again. Uh, you are already using Typo, right?
Rens Methratta: We are, yeah.
Kovid Batra: But I would just rephrase my question to ask: how do you use such tools to make sure your planning, your execution, your automation, or your reflection is in place? How do you leverage that?
Rens Methratta: Yeah, and I think it’s, uh, maybe the same thing: aligning those two, you know, with what the objectives are. So I love, primarily, the sprint retrospective view of it, not the detail, but more on: as a collective team, we said, hey, this is what we are trying to accomplish, we have a plan to do this, and we’ve agreed that this is what we have to get done for these next couple of weeks to make it, right? And then it’s really having all of that in one place, to see: we said we’re going to get all this stuff done, and this is how we did. For us there are multiple tools to put together: we have ticketing with Jira, we obviously have Git for version control, but having all that merged into one place where we can easily see, okay, this is what we committed to, this is what we did. And then being able to say, okay, here’s where we are; what do we need to do to problem-solve? Are we behind? What should we do? Having those discussions is great. And then also asking: can we still meet the goals we want from an objective perspective? What’s holding us back? Getting to the point where we can have those conversations easily, that’s what the tools are for; well, for Typo, that’s what we really use it for. Because it’s the context that all those individual stats provide that’s more important, and how that aligns, at the end of the day, to what our overall goal is. We have this goal, we’re trying to build this or change this for our customers, for a reason, and being able to see how we’re doing against that, in a good summary, is what we find the most useful, so we can take action on it. Otherwise, sometimes you look at all these individual stats and you lose track if you just look at them individually. But if you have a holistic view of how we are doing, which is what we use it for, that really helps.
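The merge of ticketing and Git data that Rens describes can be approximated even without a dedicated tool. Here is a minimal sketch, with hypothetical data shapes, that links commits to committed tickets by the ticket key in the commit message:

```python
import re

# Hypothetical inputs: the sprint commitment from the ticketing system,
# and commit subjects from `git log --format=%s`.
committed = {"ENG-101", "ENG-102", "ENG-103"}
commit_subjects = [
    "ENG-101 fix checkout race condition",
    "ENG-103 add retry to webhook sender",
    "chore: bump dependencies",
]

ticket_key = re.compile(r"\b[A-Z]+-\d+\b")
touched = {key for s in commit_subjects for key in ticket_key.findall(s)}

print(f"Committed: {len(committed)}, with commits landed: {len(committed & touched)}")
print("No commits yet:", sorted(committed - touched))
```

This only shows which committed items have code landing against them, not whether they shipped; the value of a tool like the one Rens mentions is doing this join continuously and putting it next to the goals, rather than the join itself being hard.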
Kovid Batra: Perfect. Perfect. Clint, do you have anything to add on to that?
Clint Calleja: Not specifically, not at this stage.
Kovid Batra: Alright, perfect. Uh, I think thanks to both of you for sharing your experiences. Now it’s time for us to move on to the QnA section, and I can already see we are flooded with a few questions here. We’ll take a minute’s break right now, and in the meantime I will pick out the questions that we need to prioritize. Alright.
All right. So, there are some interesting people here who have come up. Uh, Clint, I’m sure you have already seen the name: Paulo, the guest from the last session and one of our common friends. I’ll pull up his question first. I think this question, Clint, you can take up: engineering productivity is a lot about the relationship with product. As senior engineering leaders, what does a great product lead look like?
Clint Calleja: Very good question. Uh, hi Paulo. Well, I’ve seen a fair share of good traits in product leads. That’s not me, right? No, that’s not you. Um, I think what I can speak to is what I tend to look for. First and foremost, I tend to look for a partner, so ideally no division, because that division easily flows downstream, you know; you start to see it happening in the teams as well. Secondly, there’s the alignment of objectives. I always tend to lean on my product counterpart to understand more about the top priorities of our product goals, and I bring into the picture, which would answer some of the questions here, the top technical solvency challenges we need to address in order to sustain those product goals. This way, we find a balance in how to set up the next set of goals for a quarter or half a year, and we build together a narrative that we can share both upwards and with the rest of our teams. And another characteristic, yes, is the teamwork element. A while ago I explained the opportunity I’ve seen amongst team leads or managers to work together as a team as well. The way I like to see it is that as a leader, you have at least three teams: the people who report to you, your trio as another team, and then the other leaders in the department, which is yet another team. So I do lean on the product lead to be one of my team peers.
Kovid Batra: Makes sense. Perfect. Alright, moving on to the next question. Uh, that’s from Vishal: what are some of the best practices or tools you have found to improve your own productivity? Rens, would you like to take that?
Rens Methratta: Uh, sure. There are a lot of tools, obviously, but I think at the end of the day, more than anything else, communication is the biggest thing for productivity from a team perspective. I’ve worked in a lot of different types of places, from really large enterprise companies to really small startups, and the common thing, regardless of the tools, is: how well are we connected to what we’re building? How well do we as a team understand what we’re trying to build and the overall objectives? That in itself, more than anything, is what drives productivity. I think the most productive I’ve ever been was at a startup. We had this one small attic space in Cambridge, five of us in one room, constantly together, communicating, and we had a shared vision, so we were able to do a lot of stuff very quickly. So what I look for is the tools that help us get to that now. It is challenging, I would say, with everyone being remote, distributed; that is probably one of the biggest challenges I have for productivity. So we try to get everyone together. Video calls are great; we try to make sure everyone goes on video, and we try to recreate as much of that workflow of thinking together as possible, even though we’re not physically together. I think that helps a lot. Um, and tools that..
Kovid Batra: Have you, uh, have you tried those digital office tools, where you’re virtually in an office?
Rens Methratta: Yeah, we tried that. I think it’s okay. We tried some of the whiteboarding tools as well. It’s okay; honestly, it’s good. But the interesting thing I’ve found is that if we’ve met someone in person, even once, the relationships within the team are so much different. So no matter what, we try to get everyone together at least once, everyone who wants to meet, because things like people’s expressions, how they are in real life, are so hard to replicate. Right?
Kovid Batra: Totally. Totally. Yeah.
Rens Methratta: And those nuances are important in terms of communication. But outside of that, I would say it’s the things that can simplify objectives: make sure they’re clear, and anything that makes that easy and straightforward. I think that’s the best. And then making sure you have easy ways to talk to each other and communicate, to keep track of what we’re doing.
Kovid Batra: Uh, I could see a big smile on Clint’s face when I talked about this virtual office tool. Is there an experience that you would like to share, Clint?
Clint Calleja: Uh, not, not really. Like it was, it was fun to hear the question because I’ve been wondering about it as well, but I have to agree with Rens. I think nothing beats, you know, the change that happens after an in-person meetup.
Kovid Batra: Sure.
Clint Calleja: The relationships that get built from there, they go to, you know, a different level..
Rens Methratta: It is, it is different. Yeah. I don’t know why, but if I’ve met someone in person, I feel like I know them at a much deeper level, even compared to someone I’ve only been on video with for a long time. It’s just a different experience.
Kovid Batra: Totally. I think there is another good question; I think you both would relate to it. Have you guys had a chance to work with Gen Z developers recently, or in the last few years?
Rens Methratta: I, I mean, I probably, I’m trying to think through like what Gen Z would be. Yeah.
Kovid Batra: I get that in my circle a lot, that dealing with Gen Z developers is getting a little hard for us. There’s almost a 10 to 12 year age gap there, maybe more, and things have changed drastically. So people find it a little hard to understand and empathize on that front. Do you have anything to share? By the way, this is a question from Madhurima.
Rens Methratta: I think in general, maybe not Gen Z specifically, but for the more junior developers we bring on board, the younger developers, it has been challenging for them too, because a lot of their experience has been remote. I think it is harder to acclimate. A lot of the stuff I learned coming up as a software engineer came from getting in a room, meeting with people, whiteboarding, working through things, and having those relationships, which was really beneficial. So I definitely think it’s harder in that sense. What we’ve personally tried to do is give the more junior developers more opportunities: more coaching, and more one-on-one time, just to help them acclimate, because we’ve identified that it is harder, especially if we’re remote-first. I know the memes about Gen Z developers, but I haven’t had any meme-worthy experiences with a Gen Z developer; maybe I’ve been lucky. But I do empathize with the point. It is harder for junior devs, because we’re in a much more remote world and it’s harder to make those connections.
Kovid Batra: Totally. All right.
Clint Calleja: I think, if I may add something to this: I don’t have a specific way to deal with Gen Z developers, because what I try to do is optimize for inclusivity. Okay, there’s Gen Z, but there are many other cultures and subcultures that require specific attention. So at the end of the day, what I found to be at least the best way forward is a good, strong set of values that are exemplified and come from the company; a consistent way of sharing feedback, with guidelines for how feedback is shared; and, of course, making space for anyone to be heard in the way they prefer to communicate. You can easily get to this if, as part of your onboarding, you ask people to provide a user manual for themselves, so you understand the best way for them to communicate and receive feedback. So think of it this way: you provide interfaces which are consistent for everyone, but then you’re available for everyone to communicate and get support the way they prefer it, if that makes sense.
Kovid Batra: Okay. Totally. Alright, uh, thanks guys. Moving on to the next question. Uh, this is from Gaurav. Uh, how do you balance short-term deliverables with long-term technical debt management? Also, how to plan them out effectively while giving some freedom to the engineering teams, some bandwidth to explore and innovate and delve into the unknowns. Uh, Clint, would you like to go first?
Clint Calleja: Sure. When going through this question, the first thing that came to mind, something I want to be clear I’m not an expert on, but that I’ve started trying and iterating upon, is the definition of an engineering strategy, because this is exactly what I use to try to understand and get a diagnosis. There’s the book ‘Good Strategy Bad Strategy’, and I try to apply the tips from there. It’s basically getting a diagnosis: okay, where’s the money coming from? What are our product goals? And there are other areas to cover. Then coming up with guiding policies, so the team knows the direction we want to go, and some high-level actions that could really and truly become projects, or goals to be set as OKRs, for example. Say we realize from the diagnosis that we need to simplify our architecture. Then I connect that engineering strategy and those actions to goals, so that the teams have enough freedom to choose what to tackle first, whilst having enough direction from my end.
Kovid Batra: Makes sense.
Clint Calleja: So I’m still fine-tuning how good that strategy is, right? But it really helps me there.
Kovid Batra: Perfect. The other part of the question also mentions giving engineering teams the bandwidth, the freedom, to innovate and delve into the unknown. Of course, one part of the question does get answered by your strategy framework, but within that, how much do you account for the bandwidth that teams would need to innovate and delve into the unknown?
Rens Methratta: I can take that, or Clint, either way, I think..
Clint Calleja: Go, go, go, go.
Rens Methratta: No, uh, it’s an interesting point. The way we look at it, in general, is that we define an overall architecture. For everything we do, here’s where we want to be at a high level from a technical perspective, and whatever solutions we build, we always want to move toward that. But there’s always the short term and the long term, and the question of how much room we give engineers to innovate. We really look at it this way: if someone has a really great idea, our overall question is, okay, worst-case scenario, how long would this take to completely redo to get back to our architecture? If it’s not going to increase in complexity to redo this a year from now, if it turns out to be the wrong call, then we’re much more lenient: let’s try something, let’s do it. If we think the worst-case scenario isn’t going to be exponentially worse to roll back once it’s in production, we go for it. But if it’s something that’s going to lead us down a path where we’re never going to be able to fix it, or it’s going to take so much effort to fix, then we’re much more careful, and we might not give as much leeway there. So that’s how we typically balance it out.
Kovid Batra: Makes sense. Makes sense. Perfect. Uh, moving on, probably the last question. This is from Moshiour: what’s your approach to balancing new feature development with improving systems? I think this is what we have already taken up. Do you have practical guidelines for deciding when to prioritize innovation versus strengthening your foundations? Uh, Moshiour, I think we just answered this in the previous question, so we’ll give this one a pass for now and move on. Okay, there is another one from Paulo: how much of engineering productivity do you attribute to great engineers versus how work and information flow among individuals? Rens, would you like to take that?
Rens Methratta: Um, this is a yes-and-yes. Really great engineers have really great productivity, right? It’s both. We’ve seen it from experience, even with the recent stuff on the AI side. Folks who have a really solid understanding of our technical infrastructure and our existing code base, and who learn to use those tools effectively, their output is maybe 10x. Someone who’s not as solid on the existing code base and the technical understanding is still improving, but it’s maybe 2x, 3x. So you definitely see that difference, and I think that’s important. But the other part of it is communication between the teams and how you do it, making sure, going back to productivity, that we’re building the right things. We can build a lot of stuff very quickly, but it might not be worth it; if we don’t communicate well, we’re probably building completely different things. So I think it goes hand in hand. It’s not an ‘or’, it’s really an ‘and’.
Kovid Batra: Perfect. No, I think it’s, it’s well answered. Clint, do you have anything to add here?
Clint Calleja: It’s very much in line with Rens, I think. Even the KPIs suggest looking at the team holistically. So while I do believe that great engineers, the experience an engineer brings, will make a difference, it’s not the first time I’ve also seen great engineers who are not compatible with a team, and it doesn’t work out; you start to see that the productivity is not really improving. So yes, you need great engineers, but there’s a very big emphasis, I think it goes beyond 50/50, there’s a bigger emphasis in my opinion, on the ways of working, the respectful ways of working, the small details. When should I expect my teammate to pick up a pull request during the sprint? How do I make it easier for them? Does opening a pull request with 50 changed files, embedding refactoring with a bug fix, make it easier? Small things, but I think this is where you can reduce a lot of friction and bring more harmony.
Kovid Batra: Okay. Makes sense. Um, you guys, I think we are already, uh, done with our time today, but, uh, I feel bad for other people who have put in questions, so I just wanna take one more, uh, this sounds interesting. Uh, are you guys okay to extend it for like 2–3 more minutes?
Rens Methratta: Sure.
Kovid Batra: Perfect. Uh, this question comes from Nisha: how to align teams to respond to developer surveys, and use engineering metrics to improve overall experience and performance. So I think both of you have some experience here. Clint is already a promoter of having communication, having those one-on-ones with teams. And Rens, I know, is using Typo, so he’s already in that setup where he is using engineering metrics and developer surveys with the developers. So both of your opinions would be great here. Uh, Rens, would you like to go first?
Rens Methratta: Um, yeah. To Nisha’s question: I’ve never had good luck with surveys with developers, quite honestly. I think a lot of it is the time spent, so I try to do one-on-ones with people and just get an understanding of how people are doing. We’ve tried to do surveys, and the responses get less and less valid in some ways if it becomes robotic. So I really think getting aligned on how people are doing is, from my perspective, more hands-on: more one-on-one discussions and conversations.
Kovid Batra: Makes sense. How did that work for you, Clint?
Clint Calleja: What Rens just explained resonates with a lot of my experiences in the past. It was a different and eye-opening experience at Hotjar, where I saw the weekly use of such a survey being well adopted. When I joined Hotjar, I joined as an individual contributor, a front-end engineer. So the first time I had to fill in one of these, I thought, okay, I have to do this every week? But the thing that made me change my mind was the actions I was seeing coming out of it, the benefits for me, from my lead. This wasn’t just a form; it was becoming the talking points of the one-hour session I had with him every week. Actions were taken out of it, actions dedicated to me. Fun fact: this was my first remote experience, but the one-on-ones felt like the most tailored I’ve ever had.
Kovid Batra: That’s interesting. Yeah.
Clint Calleja: If I can sum up on developer surveys: I understand that the less people can attribute their input to actual outcomes, to actual change, the more they ask, why spend the effort? So on my end, what I try to do as much as possible is not just collect the data, but close the loop: here’s a summary of the points, here are some actions which are now part of the strategy (remember the connection to the strategy), and here’s why, and what we are trying to attack when. So again, not a silver bullet.
Kovid Batra: Yeah. Yeah.
Clint Calleja: And then the second part, on engineering metrics. Here I really rely on engineering leaders to be the glue that brings those data points into the retrospectives. The engineering managers are in the best position to connect those data points with the ways of working and the patterns seen throughout the sprints, and, in an end-of-sprint review, to say: here are the patterns that I see, let’s talk about this; or let’s celebrate this, because it’s a huge milestone.
Kovid Batra: Makes sense. Great. Uh, Rens, you wanna add something?
Rens Methratta: No, I would agree. I think that’s a good callout. Making the surveys more action-oriented would probably produce different results. We tried something where we ran our one-on-ones as a daily survey, and I didn’t think it was successful, because people weren’t seeing an individual response back. It was just data collection for aggregation purposes, which people didn’t seem to value.
Kovid Batra: Perfect. Perfect. Thank you so much, guys. This was an amazing session. Thank you for your time, and thank you for sharing all your thoughts. It’s always a pleasure to talk to folks like you, who are open, take time out from their busy schedules, and give it to the community. Thanks once again.
Clint Calleja: Thanks for the invite. Yeah. And nice to meet you guys.
Rens Methratta: Same here, Clint.
Kovid Batra: All right, guys. That’s our time. Signing off for today. Bye-bye. Okay.
'How EMs Break into Leadership—Road to Success' with C S Sriram, VP of Engineering, Betterworks
February 21, 2025
•
41 min read
How do you transition from being a strong Engineering Manager to an effective VP of Engineering? What challenges do leaders face as they scale their impact from team execution to organizational strategy?
In this episode of the groCTO Podcast, host Kovid Batra speaks with C S Sriram, VP of Engineering at Betterworks, about his career journey from an engineering manager to a VP role. He shares the hard-earned lessons, leadership principles, and mindset shifts that helped him navigate this transition.
What You’ll Learn in This Episode:
✅ From IC to Leadership: How Sriram overcame early challenges as a new engineering manager and grew into an executive role.
✅ Building a High-Performing Engineering Culture: The principles and frameworks he uses to drive accountability, innovation, and efficiency.
✅ Balancing Business Goals & Technical Excellence: Strategies to prioritize impact, make trade-offs, and maintain quality at scale.
✅ The Role of Mentorship & Coaching: How investing in people accelerates engineering success.
✅ Scaling Leadership with Dashboards & Skip-Level 1:1s: How structured communication helps VPs and Directors manage growing teams effectively.
✅ Closing with Inspiration: Sriram shares a poem he wrote, reflecting on the inner strength and vision required to succeed in leadership.
Kovid Batra: Hi everyone, this is Kovid, back with another episode of groCTO by Typo. Today with us, we have a very special guest. He's VP of Engineering at Betterworks, comes with 20+ years of engineering and leadership experience. Welcome to the show, Sriram.
C S Sriram: Thanks. Thanks so much for having me over, Kovid, and thanks for the opportunity. I really appreciate it.
Kovid Batra: No, it's our pleasure. So, Sriram, uh, today, I think we have a lot to talk about, about your engineering and leadership experience, your journey from an engineering manager to engineering leader. But before we get started on that, there is a small ritual that we follow on this podcast. To know you a little more, we would like to ask you one question. Tell us something about yourself from your childhood, from your teenage that defines you, who you are today. So you have to share something from the past, so that we get to know the real Sriram.
C S Sriram: Sure. Yes. Uh, I think the one thing that I can recall is something that happened when I was in my seventh standard. My then school principal, her name is Mrs. Anjana Rajsekar, I'm still in touch with her, and she's a big inspiration for me. She founded and was running the school that I was studying in. She nudged me towards two things which I think have defined my life. The first thing that she nudged me towards was computers. Until then I hadn't really touched a real computer. That school was the first place where I wrote my very first Logo and BASIC programs. So that was the first thing. And the second thing that she nudged me towards was just writing in general. And that gave me an interest in languages, in writing and reading: poetry, short stories, novels, all of that. I think she created those two very crucial parts of my identity, and that's what I would like to share.
Kovid Batra: That's really inspiring actually. Teachers are always great in that sense, and I think you had one, so I really appreciate that. Thanks for sharing. And, Sriram, is there anything from your writing that you would like to share with us? Anything that you find really interesting, or that you wrote sometime in the past, which you think would be good to share here?
C S Sriram: Oh, I wasn't prepared for that. Uh..
Kovid Batra: No, that's fine.
C S Sriram: Maybe, maybe towards the end. I'll try and see if I can find something towards the end.
Kovid Batra: Sure, no problem. All right. So getting started with the main section, just to iterate this again: we are going to talk about your engineering leadership journey, specifically from an Engineering Manager to a VP of Engineering at Betterworks. The landscape changes, the perspective changes, and there are a lot of aspiring engineering managers and engineering leaders who are looking toward that career path. So I think this podcast would be really helpful for them to learn and understand what exactly needs to be there in a person to go through that journey, what challenges and opportunities come along the way, and how to tackle them. So to start with, tell us about your first engineering management experience, when you moved from, let's say, a tech lead or an individual contributor role to an EM role, and how things changed at that point. How was that experience for you? Was it overwhelming, or did it come easily to you, and you were ready when you actually arrived in that particular role or responsibility?
C S Sriram: I was a programmer once, so I'll start from index 0 instead of index 1. I had an index 0 engineering management experience where I was given the designation of Engineering Manager for about a month. And I ran back to my CEO and said that I'm not doing management. Take the designation away from me, take the people away from me, I'm not doing it anymore. That was index 0. And index 1 was when I started my own software consultancy, roughly about 10 years ago.
Kovid Batra: Okay.
C S Sriram: And then, I didn't realize I would have to do management. I just wanted the thrill of running my own business. I guess, to paraphrase Shakespeare, "Some people are born managers. Some people are made managers. Some people have management thrust upon them." So it was thrust on me. It was out of necessity that I got into management, and for the first five years, I really messed it up. Because I was running a business, I was also trying to get some coding done for the business, trying to win sales, trying to manage people, recruit them, all of it. I didn't do a great job of it at all. It was when I joined Betterworks that I think I really did something meaningful with engineering management. I took the time to study some first principles, understood where I went wrong, and corrected. So yeah, that's how I got into management. And it wasn't scary the first time, because I didn't know I was doing it, so I didn't know I was doing a lot of things wrong; there was no fear there. But the second time around, when I started at Betterworks, I was very scared of a lot of things. There were a lot of insecurities. The fact that I was letting go of control, most of the time intentionally, was a very scary thing. But yeah, it's comfortable at the moment.
Kovid Batra: Perfect. Perfect. But I'm sure that experience of running a business brought a lot of perspective you could not have learned on a typical journey, where you were a software engineer and then moved into, let's say, a tech lead or a management role. I'm sure that entrepreneurship stint taught you a lot more about bringing more value, bringing more of the business aspect, to engineering. Was it so?
C S Sriram: A 100% yes. I think the main thing that I learned through that was that software doesn't exist in isolation. A team doesn't exist in isolation. Building the most beautiful user experience or design, the most beautiful software, the most beautiful piece of code that you've ever written, means nothing if it doesn't generate some sort of business value. I think that was the biggest lesson that I took away, because we did a lot of work that I would call very good engineering work, but extremely poor from the business side. I understood that it always has to be connected to some business outcome.
Kovid Batra: Great. I think there must be some good examples, some real life examples that you would like to share from your engineering management stint that might revolve around having a good culture, that might revolve around building more or better processes in the team. So could you share something from your start of the journey or maybe something that you're doing today?
C S Sriram: Definitely. Yes, I can. I think I'll start with the Betterworks/Hyphen journey. So when I joined, it was called Hyphen. We were an employee engagement SaaS platform. We had a team of really talented engineers, a very capable Director of Product, and an inspirational CEO. All the ingredients were there to deliver success. But when I joined the team, they hadn't completed even a single story. Forget about a feature or a complete product; they hadn't completed a single story in over two quarters. What I had to do in that case was just prioritize shipping over everything else. There were a lot of distractions, right? The team was talking about a lot of things: recruitment, team culture, process, et cetera. The first thing that I did, after a month of observation, was decide that, okay, sprint one, somebody has to ship something. Just setting that one finish line that people had to cross built up the momentum that was required, and it kept pushing things forward. And I got hands-on in a way that I hadn't before. Usually I would have jumped into the code and started writing code myself; that was my usual approach until then. This time I got hands-on on the product side. I worked with the Director of Product to prioritize the stories, to refine acceptance criteria, to give a sprint goal, and then to tell everybody: okay, this is the goal. This is what is included, this is what is not included. Get it done. And it happened. So that's how that got started.
Kovid Batra: Perfect. So when you're sharing this, this is from your initial phase, when you actually started working as an Engineering Manager: working directly with the product, managing the team, getting into that real engineering management role, bridging that gap. What exactly led you to understand that priority? You went in and saw a lot of things distracting you, people and culture changes. When you move into a space that is completely new, what made you realize the priority? One thing, of course, is that they hadn't shipped anything for a good amount of time, so you had to prioritize that and you went in with that goal. But if you just focus on one thing and do not take people along, you get a lot of resistance. You cannot be ruthless when you are joining new. So was there any friction? How did you deal with it? How did you bring everyone onto the same page? Is there anything specific you would like to share from that part?
C S Sriram: Yeah, yeah. See, the diagnosis was actually pretty straightforward, because I had a very supportive CEO at that time. Orno, that was his name. When I told him, okay, I'm going to take a month to just observe, don't expect any changes from me in the first month, I don't want to just start applying changes, he was very supportive of that, and I was given a month to just observe and make my own notes. Once I diagnosed the problem, the application of the solution took a bit of time. The first thing was to build culture. Now, a lot of people say a lot of things about culture. To me, culture means: what are the negotiable and non-negotiable policies within your team? What is acceptable, what is not acceptable, and even within acceptable, what are the gray areas where a bit of negotiation is allowed? That was the first thing that I wanted to sort out. Like I said, I spent a month studying the team and then I proposed a set of working rules. I talked about working hours. That was the time when we were all in office, so presence in office, how we do work handoffs, how we make decisions, all of those things. I presented some of them saying, see, I am tasked with getting some things done, so these are non-negotiable for me. You don't have the space to negotiate and say that you are not going to be in office for two weeks, for example, or to say that you won't write automated tests. Those are my non-negotiable areas; I'm owning them. But you can say that you will be 10 to 15 minutes late because of Bangalore traffic. So we had that kind of agreement, and we had an open discussion about it. That was the first presentation that I made to the team: these are our working rules and this is how we'll proceed, and I need explicit agreement from all of you. If anybody is not going to agree, let me know, we'll negotiate and we'll see where we can get to. Now, once that happened, there was the question of enforcing the policy. And I think this is where I failed in my previous attempt at management: I had a set of policies, but I wasn't very consistent in enforcing them. This time I had a system. If someone strayed from a policy, if someone said they would do something but hadn't done it, my usual reaction would have been either to ignore it, if I thought it wasn't so important, or, if it was important, to go ballistic, lose my temper, ask questions, do that boss kind of stuff. This time I took a different approach, which was curiosity over trying to be right. I spent a bit of time to understand: why did this miss happen? Why did this person stray from the agreed policy? Was it because the policy itself wasn't well defined? Did they agree to the policy without fully understanding it? Was it just a human error that can be corrected? Or is it an attitude issue that I can't tolerate? Now, in most cases, what happened is that once I started asking these curious questions and sharing them, people started aligning themselves automatically, because nobody wants to be in that uncomfortable position of having to explain themselves.
It's just human nature to, you know, avoid that and correct themselves. So that itself gave me the results most of the time. In a few cases, the policy wasn't well defined or it wasn't well understood, in which case I had to refine it and make sure it was explained very clearly. And the last thing was, in a few cases where, despite repeated feedback, they couldn't really correct themselves, I had to make the decision that, okay, this person is not suited for what I want and I'll have to let them go. And we've made some decisions like that also.
Kovid Batra: I think setting those ground rules becomes very important, because when you go out and just explicitly do something, assuming that it should be followed, and people are not aligned on it, that creates more friction than if they're aware beforehand of what needs to be done and how it needs to be done. So stepping into that role and taking up that responsibility, it's a good start to first diagnose, understand what's there, and then set some ground rules with negotiables and non-negotiables. I think it makes a lot of sense, and when you share those specific details, it aligns all the more with my thought of how one should go about taking up this responsibility. But Sriram, when you jump into that role, there are a lot of things that come to mind that you need to do as an Engineering Manager. What are the top 3-4 things that you think you need to consistently deliver on? This could be something very simple, related to how fast your teams are shipping, or something related to the quality of the work that is coming out. Anything. In your scenario, what were your business priorities, and accordingly, as an engineering manager, what were your KPIs, the things you mostly aligned with and tried to deliver consistently?
C S Sriram: Yeah, so two things mattered most, and I think they still matter even today for me. The first is: what business value is the team delivering? A lot of people get confused and say they have high-performing teams when actually the teams are just shipping features very regularly, instead of creating business value. That's something that I ask my managers a lot as well: what is the business problem that your team is solving, not just what is the feature that they are shipping next? So that is the first thing. Having a very clear sprint goal, if you're doing sprint goals, or a quarterly goal that says this is the business outcome that we are achieving. Maybe you're trying to increase signups. Maybe you're trying to increase revenue, or retention. Maybe you're trying to solve a specific problem for a customer: a customer is struggling with a particular business outcome at their end, and that is what your software is solving. Once you set that as the priority, then adjusting your scope, adjusting what you want to deliver to meet that outcome, becomes very easy. I've seen cases where we thought we would have to deliver 10 or 15 use cases for a feature, but narrowing it down to five gave us more results, because we were solving what was most valuable for the customer rather than shipping everything that we thought we had to ship. So that is one of the biggest metrics that I try to use: what final business outcome can I connect this team's output to?
Kovid Batra: Makes sense. Almost every day we deal with this situation. When I say 'we', I mean people who are in positions where they have to take decisions that impact the business directly. Of course, a developer also writes code and it impacts the business, but I hope you understand where I'm coming from: you are in a position where you're taking decisions and managing the team as well, so there is a lot of hustle and bustle going on on a day-to-day basis. How did you make space for doing this, for prioritizing even more, highlighting the five things out of those 15 that need to be done? What kind of drive, what kind of process setting, do you need for yourself to come to that point? I strongly believe, having talked to so many engineering leaders and engineering managers, that this one quality has always stood out in all the high-performing ones: they value value delivery. Anything else comes second. They are so focused on delivering value. Which makes sense, but how do you make that space? How do you stay focused on that part?
C S Sriram: See, I think anybody who makes the transition to management from engineering has a big advantage there. If you are a good engineer, you would have learned to define the problem well before you solve it. You would have learned to design systems, to visualize the problem and the solution before you even implement them. A good engineer is going to draw a high-level and a low-level system diagram before they write the first line of code. They will write tests before they write the first line of code. It is just about transposing that into management, which means that before your team starts working on anything crucial, you spend that focused time. And that's where I think a lot of engineering managers get confused as well. I see a lot of engineering managers saying, oh, I'm always in meetings, I don't know what to do, I'm always running around. Having that focus time for yourself, where you are in deep work, trying to define a problem and its solution, makes a huge difference. And when people try to define a problem, I think it always helps to use some sort of standard framework. Right now, as an engineering leader, most of my problem definitions are strategy definitions: what policies should the team pursue for the next one to two quarters? What policies drive things like recruitment, promotion, compensation, management, et cetera? I try to follow a policy, diagnosis, risks, and actions framework; that is how I define my policies. And for each of the problems that you're trying to define, there are usually standard frameworks available, so that you don't have to break your head trying to come up with some way of defining them. I think leaning on that sort of structure helps as well.
Kovid Batra: Got it.
C S Sriram: And over time, that structure itself becomes reusable. You will tweak it. You will see that some parts of the structure are useful, some parts are not, and it gets better over time.
Kovid Batra: Makes sense. For an engineering manager, I think these are some really good lessons, and coming with the specific examples that you have shared, they become even more valuable. One thing that I always want to understand: how much do you prioritize quality over fast shipping, or fast shipping over quality?
C S Sriram: Yeah. Uh, okay. So I had an ex-manager, who is my current mentor as well, and he keeps saying that 'slow is smooth and smooth is fast.'
Kovid Batra: Yeah, yeah.
C S Sriram: Okay, so I don't aim for just shipping things fast; I aim to create systems that enable both speed and quality. A lot of engineering managers always try to improve immediate speed, and that's almost an impossibility. You can't fix a pipeline while things are running through it; you need to step away from the pipeline, and you're going to get speed outcomes over time, quality outcomes over time. I think that is the first step towards speed and quality: you need to accept that any improvement will take a little bit of time. Now, once you accept that, then defining these things, again, makes a huge difference. If it's speed, what is speed for you? Is it just shipping features out, or is it creating value faster? The best way of increasing speed I've seen is just measuring team cycle time. You don't even have to put any solutions in place; just measuring and reporting the cycle time to the team automatically starts moving things forward, because nobody likes to see that it takes two weeks to move a ticket to 'done'. People start getting curious and they start finding out: okay, I'm not moving that fast. I'm actually working a lot, but I've moved only one ticket in two weeks. That's not acceptable. Then you see things changing. Same thing with quality: I like to define what quality clearly means. What is a P0, P1 test case that you cannot afford to miss? What are acceptable non-functional requirements? Not every team has to build the most performant solution. There may be a team that says a one-second latency is acceptable for us, a hundred requests per second throughput is more than sufficient for us. Building with that in mind also makes a huge difference. And once you do that, for quality, I would always say the best thing to do is to shift quality left. The earlier you enforce quality in your process, the better it is. And there are standard techniques to do that: you can use mind maps, Three Amigos calls, automated tests, et cetera. One example I can think of: when I was working with Hyphen, there was a set of data reporting screens, a set of reports which all had very similar kinds of charts, grouping, and filters. So I spent time with QA to develop mind maps where we listed all the use cases that were common to all the reports, and we had these mind maps up during the sprint review calls, during the QA review calls, all of it. If a developer is going to start development, they have it on their screen before they start. The developer develops to match those quality requirements rather than trying to catch up with quality later on. And this is an analogy that I like using as well: developers, when they write code, should write as if they are taking an exam where the answers are already available to them, and they should try to score the highest marks possible. No need to keep anything secret. I think that's an approach that testers should also adopt: you take the exam with every answer available and you score the maximum marks.
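Sriram's "just measure and report cycle time" advice needs almost no tooling to try. Below is a minimal sketch of the computation in Python; the ticket records, field names, and dates are hypothetical stand-ins for whatever your ticketing system exports, not the API of any specific tool.

```python
from datetime import datetime
from statistics import median

# Hypothetical export from a ticketing system: when work started on each
# ticket ("in_progress_at") and when it reached 'done' ("done_at").
tickets = [
    {"key": "PROJ-101", "in_progress_at": "2025-01-06", "done_at": "2025-01-20"},
    {"key": "PROJ-102", "in_progress_at": "2025-01-08", "done_at": "2025-01-10"},
    {"key": "PROJ-103", "in_progress_at": "2025-01-09", "done_at": "2025-01-23"},
]

def cycle_time_days(ticket: dict) -> int:
    """Days elapsed between work starting and the ticket reaching 'done'."""
    started = datetime.fromisoformat(ticket["in_progress_at"])
    finished = datetime.fromisoformat(ticket["done_at"])
    return (finished - started).days

times = [cycle_time_days(t) for t in tickets]

# Report the median plus the slowest tickets: the outliers are usually
# what sparks the team conversation Sriram describes.
print(f"Median cycle time: {median(times):.1f} days")
for t in sorted(tickets, key=cycle_time_days, reverse=True):
    print(f"  {t['key']}: {cycle_time_days(t)} days")
```

Posting a summary like this where the team sees it every sprint is the whole intervention: no process change is prescribed, and the curiosity it triggers does the rest.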
Kovid Batra: Makes sense. So, in your EM journey, if you have to sum it up for us: when was the point when you felt that you were doing well, and what were the top 2-3 things you did as an EM that made you that visible, that accomplished in the team, and ready for the next role?
C S Sriram: Got it. I think it took me about a year at Hyphen. So that would be about six years after I started engineering management: five years running my own consultancy and then one year at Hyphen. The outcome that made me feel that, okay, I've done something with engineering management, was that we shipped the entire product. It was a migration from JavaScript to TypeScript, from an old UI to a new UI, a complete migration of a product that was already in use. We hit $2 million ARR and we got acquired by Betterworks. Those were good outcomes that I could actually claim as victories for myself and for the team, and that was what I thought was success at that time. But what really feels like success right now is that engineers from that time call me and tell me that working with me during that period was really good, and that they are yet to find that kind of culture, that kind of clarity. So that turned out to be a good success.
Kovid Batra: Makes sense. Okay, so now, moving from that point of engineering management to being a leader: how has your perspective changed? The company changed altogether, because now you are part of Betterworks, a bigger organization, and you're working with global teams situated across different countries. I would like to hear how your perspective, your approach to overall value delivery, has changed.
C S Sriram: Yeah. So Betterworks I would split into two halves of two and a half years each, leaving aside that first year at Hyphen. The first two and a half years, I was working towards more of a directorship kind of role, where I wanted to own complete execution. That was when I learned how to manage managers, and a few other things as well, like tying the engineering team's output to business outcomes. The second two and a half years were really about strategy, about executive management. Now, the first principle that I learned was that your first team changes once you start on this journey. Until you're an engineering manager, the team that you manage is your team. You belong to that team; that's the outcome you always look at. Once you start the journey towards engineering leader, that is not your first team anymore. Your first team is the team that you work with: your Co-Directors, Co-VPs, your immediate boss. That leadership team is the core team. You're creating value for that team, and the team that you manage is a "tool" that you use to get those results. And I would put quotation marks around "tool", because you still need to be respectful and empathetic towards people; it's not just using them. But that's the mindset that you need to adopt. The side effect of this mindset is that you have to learn to be alone. When I was an Engineering Manager, there were moments when you could gossip and complain about what's happening. The higher up you go, the less space you have for that. Who can you go and complain to when you have the power to do everything that you want? So you have to learn to be alone and to operate by yourself. The next principle that I learned was to give up what you take or build. Luckily, that came easily to me at that point, and I'm really thankful for it. I had built this whole product, we completed the migration, we got acquired by Betterworks, and all of it was something that I was really proud of. But the moment the first opportunity came, I delegated it to someone else. Now, if I had held on to that product because it was my baby, I wouldn't have had the opportunity to scale Betterworks India. We went from around five or six engineers to almost 45+ engineers in India today. That sort of 5x, 7x scale would have been very difficult to achieve if I had held on to any of the babies I was building at that time. So giving things up is very important. The next thing that I learned was to coach engineering managers. You basically have to repeat what you did with your developers. Once you manage developers, you don't develop; you delegate, you ask them questions, you nudge them and you guide them. You need to repeat the same process with managers as well. And the last thing that I had to learn was setting up teams for success. This was a big challenge, because most of my managers were first-time managers at that time, so the potential for failure was huge.
So I had to take my time to make sure I set boundaries within which they can make mistakes, fail, and learn. And that was a balance because I couldn't set boundaries that were so safe that they'll never make a mistake.
Kovid Batra: Yeah, that makes sense.
C S Sriram: Yeah, exactly, because there has to be that space. And at the same time, the boundaries can't be so open that they make mistakes that can turn into disasters. Luckily, I had good leaders at Betterworks who guided me through that, so that worked very well. And I also had to spend a lot of time sharing these success stories and learnings with peers and with leadership. That was something that I didn't invest a lot of time in as a manager. That sort of story building, narrative building, both within the team and outside the team, was another skill that I had to learn.
Kovid Batra: Perfect. So when you talk about story building and bringing those stories to your team, which is the leadership: what exactly would you tell them? Can you give an example? For someone who's listening to you right now, what kind of situations, and how should those situations be portrayed to the leadership team, to bring better visibility of your work as an engineering director to the overall leadership?
C S Sriram: Sure. Yes. I think a classic example would be compensation. I can go back to around the COVID time, when suddenly investment was booming. The job market was booming. Every candidate we were trying to hire had three or four offers. We were not assured of a candidate joining us even after they accepted, and people were poaching our engineers left, right, and center as well. It was a crazy time. Betterworks is a very prudent business, and that's something I'm always thankful for: we don't go and spend money like water just because we've got investment. Now, as an Engineering Manager, if I'm going to go and talk about compensation, about business planning, with my leadership team, most of the time I'm just going to say, hey, this person is demanding so much, that person is demanding so much, I don't know what to do. That is an Engineering Manager approach, and it is justified, because an Engineering Manager, depending on what sort of company and scale you are in, has limited scope in what they can actually do in these cases. But the story that you take in as an engineering director is different. You spend time collecting data from the market to see what the market compensation rate is. You see how many exits have happened in your team, how many of those exits were because of compensation, and what percentages those people were offered outside in the market. You collect all that data. And you can't even stop at putting all this data in front of management and telling them, see, we are losing people because we are not able to match the market, we need to change our numbers. Even that is not sufficient, because that is still a director-level solution. If you want to offer a truly executive-level solution, you are going to look at costs in the business, at optimizations that you can do, and you're going to come up with a system for how compensation can be managed. Most of the stories that I tell my executive team come to the point where there is a problem, there are potential solutions, and usually I even recommend one solution out of the ones I'm suggesting. And this really helps the leadership team, because when I think of my boss or my CEO, they are possibly dealing with 20 things that are more complex than anything I've ever seen in my life.
Kovid Batra: Right.
C S Sriram: So how can I ensure that A, I get the decision that I think is right. And at the same time, I give them enough information so that they can correct me if my decision is wrong. Uh, both are crucial. You know, one of the scariest things that can happen to me is that I get a decision that I want and the decision turns out to be wrong. So giving myself..
Kovid Batra: That's a balanced approach, where you are giving the person an option, an opportunity, to make your decision even better, if that's possible and you're missing out on something. So that totally makes sense, and putting things out to the leadership that way, showing how you're solving them, would be really good. But one thing I gathered from your EM-to-EL transition: cost and budget considerations start coming in a lot more than they do in an EM position. Is that right?
C S Sriram: 100% yes. That's what I've seen with all the great engineering leaders that I've worked with as well. Yes, they love engineering. They get into engineering, architecture, and development with whatever level of interest and time they have. But there is always the question of how much value am I getting for the money that I'm spending? And I think that is a question that any manager who wants to become a leader should learn to ask. About two and a half years ago, when I asked my then manager how to get into leadership, that was the first thing he said: "Follow the money. Try to understand how the business works. Try to understand where sales come from. Try to understand where the outflow goes." That made a huge difference.
Kovid Batra: Totally. Makes sense. I think this is something you realize more when you get into this position. But going back to an EM role: if you start seeing that picture and emphasizing that part early, automatically your visibility and the kind of work you're doing become even better, because you're able to deliver what the business is asking for. So, totally agree. But one thing always surprises me, and I ask this multiple times because everyone has a different approach to this problem. Now you have a layer of managers who are actually dealing with developers, and there are situations where you would want to really understand what's going on, how things are quality-wise and speed-wise, and you really don't have the time to go out and talk to 45 engineers, engineering managers, and engineering leaders to understand what's going on with them. There must be some approach you follow to have that visibility, because you can't just go blind and say, "Whatever the engineering managers are doing, however I'm coaching them, it will work wonders." You have to trust them, but you also have to have a check; you have to understand what exactly is going on. So how do you manage that piece as a director here at Betterworks?
C S Sriram: Yeah, no, that was a very interesting coaching experience for me, where I worked with each of my managers for almost over six months to help them build that discipline. Like any good software engineer will tell you, polling is never a good idea. If you think of your manager as a software service, you don't want to ask them every half an hour or one hour, 'what's the update?' Uh, I like push-based updates. So I helped them set up dashboards, you know, dashboards that talk to them about their team's delivery, their team's quality, uh, their team's motivation and general status and all of it. Uh, and I worked with them to design it for their purpose. Uh, I think that was the first thing that I was very clear about. This is not a dashboard that I'm designing so that they can present a story to me, but it's a dashboard that they are using to solve their problems, and I'm just peeking in to see what's happening. So that made it very usable. I use those dashboards to inform myself. I ask the questions that I would expect a manager to ask from them. And over time, you know, they got into the habit of asking it themselves, because in every 1-on-1 we'd spend 10-15 minutes discussing those numbers. By the time we did it for three to six months, it had become internalized. They knew to look for, you know, signs; they knew to look for challenges. So that became quite natural from there on. And I again want to emphasize that one part: these were dashboards that were designed to solve their problems. If there was a dashboard or information that I had to design to relay some information or story to a leadership team or to some other team or something like that, that would be something very different. But this is primarily a dashboard that a team uses to run itself. And I was just peeking into that. I was just looking at it to gather some information for myself. So that made a big difference. The second thing that I also did was skip-level 1-on-1s. It took me, I think, almost six months to learn how to do skip-level 1-on-1s, uh, because the two challenges that I faced with skip-level 1-on-1s were: first, it turned out to be another project status update session initially. I was getting the same information from 2-3 places, which was inefficient. It was also a waste of time for the engineers to come and report what they've already done. And the second thing was, there were a lot of complaints coming in my skip-level 1-on-1s initially as well. And especially more so because many of the engineers that I was doing skip-level 1-on-1s with were engineers who I managed earlier. So I had to slowly cut that relationship and just connect them to their new managers. And I started turning the skip-level 1-on-1s into sessions where I can coach and I can give people some guidance. And I can also use it to get the pulse of the team. Like, is the team generally positive or is the team generally frustrated? And who are the second-level leaders that I need to be aware of? Whose stories do I have to carry forward? Who do I think can become the core of the business after my first-level leaders? So I changed the purpose of the skip-level 1-on-1s, and over time that also developed into a good thing.
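[Editor's note: Sriram's "polling is never a good idea" line maps neatly onto code. Below is a minimal Python sketch of the contrast he draws, with hypothetical class and method names; it is purely illustrative, not a tool anyone on the show built.]

```python
# A toy contrast between polling a team for status and subscribing to
# push-based updates from a team-owned dashboard. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Team:
    status: str = "all green"

    def ask_for_status(self) -> str:
        # Every call is an interruption: someone stops work to answer.
        return self.status


@dataclass
class TeamDashboard:
    """A dashboard the team maintains for itself; others subscribe to it."""
    subscribers: List[Callable[[str], None]] = field(default_factory=list)

    def subscribe(self, callback: Callable[[str], None]) -> None:
        self.subscribers.append(callback)

    def publish(self, update: str) -> None:
        # The team records delivery, quality, and morale signals for its own
        # use; subscribers (say, a director) see them as a free side effect.
        for notify in self.subscribers:
            notify(update)


# Polling: the director interrupts the team on a timer.
team = Team()
hourly_update = team.ask_for_status()  # "what's the update?" costs a context switch

# Push: the director subscribes once and reads signals as they arrive.
dashboard = TeamDashboard()
dashboard.subscribe(lambda update: print(f"[director] saw: {update}"))
dashboard.publish("cycle time up 20% this sprint")
```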
Kovid Batra: Great. Great. There is a lot that we can go in and talk about here, but we are running out of time. So I will put a pause to this session here. But before we end, I would love for you to share one of the best learnings that you think made you an accomplished engineering leader, and that you think can drive real growth for someone who is in that position and looking for the next job.
C S Sriram: Got it. Yeah. The one thing that, uh, was a breakthrough learning for me was mentorship and coaching. My then boss, uh, moved on to another company; I spoke with him and I turned him into a mentor. His name is Chris Lanier. Uh, he's an exceptional executive. I connect with him very regularly to discuss a lot of challenges that I face. It helps me in two ways. The first way it helps me is I get an outsider's perspective to solve certain problems that, uh, I can't even take to my leaders, because those are problems that I am expecting no answers for. So that is the first thing that I get. And the second thing is, the more you grow in this career, the bigger the imposter syndrome gets. So that reassurance that someone with the kind of experience and the success that he has still goes through all of those things; that's quite reassuring. You know, you steady yourself and then you move forward. The next thing that I would also recommend for anybody who is looking at going into this role is to get a coach. A coach is different from a mentor. A coach is going to diagnose challenges that you have and work on specific areas. Like, I had two specific challenges, uh, about two years ago. Betterworks was really generous enough to give me a coach at that time. Challenge number one was that my peer-to-peer relationships were terrible. Like, I didn't have a relationship at all. It's not even that, you know, they were poor relationships. There were no relationships at all. Uh, an introvert like me, I didn't see the value of doing it as well. The second thing was public speaking skills. Almost 40% of my speaking was filler words. So I worked on both of those with the help of a coach and got those two addressed, and they made a huge difference. So I would highly recommend it. And at this level, you can't afford unknown unknowns, like you can afford at an engineer level or at a manager level. If you don't know what you're missing, that can turn into a disaster for both the business and for you at the executive level. So a mentor and a coach are two things that I would highly recommend.
Kovid Batra: Makes sense. And I think I can't agree more on that front, because we as humans have this tendency to be in our zones and think that, okay, whatever we are doing is fine and we are doing the right things. But when a third person's perspective comes in, it humbles you, gives you more perspective to look at things and improve way faster than you could have from your own journey or your own mistakes. So I totally agree on that. And with that, I think, thanks a lot, Sriram. This was a really good experience.
C S Sriram: Yeah, sorry to, sorry to interrupt you. If you've got a minute, I did pick something to read. You asked at the beginning, something from my writing, do we have a minute for that?
Kovid Batra: Yes, for sure. Please go ahead.
C S Sriram: Cool. Perfect. Okay. This is something that I wrote in 2020. Uh, it's a poem called "No Magic". This is how it goes:
There is no magic in this world. No magical letter shall arrive to grant us freedom from the cupboard under the stairs, and the tyrants who put us there.
No wizard shall scratch our door with his mischievous staff and pack us off unwilling on an adventure that will draw forth our hidden courage.
No peddler shall sell us a flying horse made of the darkest ebony to exile us away to mystic lands and there to find love and friendship.
No letters, no wizards, no winged horses. In our lives of facts, laws, and immovable rules, where trees don’t walk, beasts don’t talk, and we don’t fly.
Except… when we close our eyes and dream some dreams, of magic missiles that bring us freedom, of wily wizards that thrust us into danger, of soaring speeds that lead us to destiny.
And thence we fly from life to hope and back again. Birds that fly from the nest to sky and back again.
There is no magic in the world but in the void of the nests of our mind. The bird with its hollow bones, where will it fly, if not in the unreachable sky?
Kovid Batra: Amazing! I mean, I could get like 60% of it, but I could feel what you are trying to say here. And I think it's within us that makes us go far, makes us go everywhere. It's not the magic, but we need to believe the magic that we have in us. So I think, a really inspiring one.
C S Sriram: Thanks. Thank you so much.
Kovid Batra: Great, Sriram, this session was really amazing. We would love to connect with you once again. Talk more about your current role, more into leadership. But for today, I think this is our time. Thank you so much. Thank you for joining.
C S Sriram: Absolutely. Thanks for having me, Kovid. I really enjoyed it.
'Guiding Dev Teams Through an Acquisition' with Sheeya Gem, Director of Engineering, ShareFile
February 7, 2025
•
28 min read
In this episode of the groCTO by Typo Podcast, host Kovid Batra speaks with Sheeya Gem, Director of Engineering and Product Strategy at ShareFile, about her experiences leading dev teams through mergers and acquisitions.
Sheeya discusses the importance of building collaborative relationships with stakeholders, maintaining effective communication, and fostering a shared purpose among teams. She emphasizes the significance of continuous learning, adaptability, and leveraging tools and processes to keep projects on track. The conversation also touches on managing cultural transitions, supporting teams through change, and ensuring successful integration post-acquisition. Finally, Sheeya shares valuable parting advice for engineering leaders, promoting trust, shared purpose & continuous learning.
Kovid Batra: Hi everyone. This is Kovid, back with another episode of the groCTO by Typo podcast. Today with us, we have a special guest who has 20+ years of engineering and leadership experience. She’s not just a tech leader, but also an innovator, a business-minded person, which is a rare combination to find. Welcome to the show, Sheeya.
Sheeya Gem: Hi, Kovid. Thank you for inviting me. It’s a pleasure to join you today.
Kovid Batra: The pleasure is all ours. So Sheeya, guys, uh, let me introduce her a little bit more. Uh, she’s the Director of Engineering and Product Strategy at ShareFile. So ShareFile is a startup that was acquired by Progress from Citrix and, uh, the journey, uh, I was talking to Sheeya, was really interesting and that’s when we thought that we should conduct a podcast and talk about this, uh, merger and acquisition journey that she has gone through and talking about her leadership experiences. So today, uh, the, the main section would be around leading dev teams through mergers and acquisitions, and, uh, Sheeya would be taking us through that. But before we jump onto that section, uh, Sheeya, I think it’s a ritual. This is a surprise for you. Uh, so we get to know our guests a little more, uh, by knowing something which is deep down in their memory lane from their childhood or from their teenage, uh, that defines them today. So give us an introduction of yourself and some of those experiences from your childhood or teenage that define who you are today.
Sheeya Gem: Oh, you got me here. Uh, um, so my name is Sheeya Gem and, um, I’m from Bangalore and, uh, grew up in Bangalore. This was when Bangalore was, was much smaller. Um, it was, uh, it was considered a retirees’ paradise back then. And, uh, growing up, my mom was a very strong, um, mentor and, and a figure in my life. She’d read to me when I was very young. Um, lots of stories, lots of novels, lots of books. She was an English Lit major, so she’d have all these plays. So I grew up listening to Shakespearean plays. Um, and, uh, one of the things that she’d read still sticks with me, and, and actually, I have a little frame of it to this day. And it says, “She believed she could, so she did.” And it’s powerful. It’s powerful. Um, I’m sorry. I lost her a few years ago. And, uh, it’s, it’s defined me. It’s a big part of who I am, um, because at every stage in your life, and this has been true for me, um, at every stage I have challenged myself, and it’s, it’s my mom. It’s that voice. It says, “You can do what you need to do because you believe in it and you know it’s going to be true.”
Kovid Batra: I’m sorry for your loss, but I think she would be resting in peace and would be happy to see you where you are today and how she has inspired you to be who you are today. Uh.
Sheeya Gem: Thank you. Thank you.
Kovid Batra: All right, Sheeya. Thank you so much for sharing that and it means a lot. Uh, on that note, I think we can move on to the main section. Uh, yeah. Uh, so I think, uh, your journey at, at Progress ShareFile, uh, starts from the acquisition part, right? Uh, so tell us about how, how this acquisition happened and, uh, how things went at that time, some stories that would be, uh, lessons for the engineering leaders and engineering managers sitting out there listening to this.
Sheeya Gem: Yeah. Yeah. Um, so for most leaders who are part of an acquisition, you kind of are part of the conversations as you lead up to the, to the acquisition. And for ShareFile, this journey really started a few years ago. I’m just going to really quickly go through ShareFile’s story. ShareFile is a startup from Raleigh, North Carolina. Um, it started up in the early 2000s, was bought by Citrix in 2012, and was part of the Citrix suite of products for, uh, for about 10 years, 10–12 years. And at that time, um, uh, a private equity group called Cloud Software Group acquired Citrix, and as part of their portfolio, they have several other products as well. And that’s when ShareFile’s acquisition journey really started, and as part of our strategy, ShareFile decided to go back to its roots, and the roots of ShareFile were in a vertical market strategy. And so, for the past 2–3 years (and this was a fantastic ride because we got to innovate at a scale that we never could), CSG gave us the backing and the financing, the funding and the support, and ShareFile had the right amount of talent to make things happen. As leadership, we knew that an acquisition was going to be our, our exit. So we were aware of that and we were very transparent with our, with our entire teams; everybody knew that an acquisition was on the radar. And as such, when Progress started talking to us, um, and ShareFile started sharing our financials, you know, how we do our business and all of those things, we, we knew it was, it was coming. So, as such, as leaders, you’re part of the journey that makes a successful exit. So the acquisition was a successful exit for us. And then it also starts the next part of your journey, where you’re now with a company that has acquired you because they believe in your fundamentals, they believe in your team; and as leadership, it becomes important for us to make sure that that transition is successful and that merger goes as it needs to go.
Kovid Batra: So when you joined, uh, Progress, this was basically a new team coming into an existing company, and that experience itself could be a little overwhelming. I haven’t gone through any such, uh, experience in my life, but I can just imagine and am trying to relate here. That can be a little overwhelming because the culture completely changes. Um, you are in a setup where people know you, there is defined leadership which you are a part of, you’re part of the overall strategy, defining and giving direction. But suddenly when you move, things can change a lot culturally, as well as in terms of the goals and, uh, how things operate. So when this happened with you, was this an overwhelming experience or did it come easily? And in either case, how did you handle it?
Sheeya Gem: Uh, was it an overwhelming experience? Um, not necessarily. It is an experience. It is different. And, and for most humans, coping with change and dealing with change is, is hard. And, um, and I think it’s important to recognize that different people are going to handle that change differently. And in many ways, it actually is almost the grieving of the loss of one thing before moving to the next thing, and as leaders, it’s important to make room for that, to give people a chance to, to absorb the change that’s happening, but to continue to be there to support, to provide that clarity, be transparent in what’s happening, where we’re going, and, and just knowing that, you know, some people are probably going to bounce right back. In two days they’re back, they’re okay. And for some people, it’s going to take longer. It’s, it’s almost like those seven stages of grieving, uh, you know, and to make room for that and to know that that kind of change, from what was, what people were comfortable with, what people probably excelled in, to the uncertainty of what is to come, is a normal human reaction. And I think that’s where leaders shine, to know that this is a normal human reaction. I recognize it. I respect it. And I’m here for you when, when you’re ready to move to the next step.
Kovid Batra: Makes sense. So when you moved here, what exactly was your first initiative, or what was that major initiative that you took up after moving in that made you, uh, put your feet down, get back to work, and shine through that particular initiative?
Sheeya Gem: Um, are you talking about post-acquisition, the steps that we took? Is that what you’re thinking about? Okay. So, all right. So maybe I could frame it this way. A company exists pre-acquisition. It has a set of goals. There’s a vision. There’s a strategy, right? Everybody is comfortable with it. You’re probably talking about it in your all-hands, in your small group meetings and every leadership meeting that you have, in any kind of ‘ask me anything’. The leadership team is talking about it, saying: this is our vision. This is our goal. This is the strategy. Once the acquisition happens, you’re now looking at the goal, strategy, and vision of the new company. Now, likely they’re related, because there was a reason that the acquiring company went ahead and bought this company. There’s a relationship there, but there are also likely things that are going to be different. As an example, and in our case this is the situation, Progress has a heavy enterprise footprint. And so some of the strategy and goals are going to be a little different compared to, um, the SMB market where ShareFile continued to, uh, to excel. But are there commonalities? Yes. And, and I think this is where, again, leadership comes in, where we say, “Hey, this is what we were pursuing. This continues to be our plan and our strategy. This is where Progress’ strategy comes in, and in order to manage the transition and have success on both sides, let’s talk about what needs to happen next.” And often what happens in a mature acquisition, and this is often the case, is there’s plenty of time for companies to say, “Okay, I’m slowly going to bring in the new set of goals that we need to work towards.” Some companies don’t change at all. As an example, when IBM acquired Red Hat, for five years Red Hat did what they always did. There was no change. Eventually, right, the goals started shifting and changing to align more with IBM’s. So different companies have different trajectories. However, what’s common, what needs to happen, is communication. Leaders need to be talking to their teams all the time, because without the communication, this is where that uncertainty creeps in. People don’t have the answers, so they start looking for answers, and those answers may not be right. So at this time it’s important for leadership to double down and say, “This is our strategy. This is the strategy for Progress. This is a transition plan to move towards a new strategy. Or it could be that for the next six months, guys, it is business as usual. We’re going to continue with our existing strategy, and over time, we’ll start bringing in aspects of the, of the acquiring company’s strategy.” So key thing here: support your teams, keep communicating.
Kovid Batra: So during that phase, uh, what was your routine like? After every board meeting or every leadership meeting you had, did you gather your team and communicate the things to them, or did you wait for a little while, uh, think things through, and decide how they should be put to your team? Because it’s, it’s a question of, uh, how you communicate it to your teams, because you understand them better, in what state they are, how they’re going to perceive it. So I’m just looking for some actionables here.
Sheeya Gem: Yeah.
Kovid Batra: Like, how exactly did you do that communication? Because having that communication definitely seems to be the key here. But how exactly does it need to be done? I want to know that.
Sheeya Gem: Yeah, yeah, you actually almost answered the question here. Uh, so you’re 100% right, right? You don’t necessarily come out and throw little bits of information here and there, because that’s not a coherent strategy. Yes, the leadership is continuing to meet, and it’s okay to tell your teams that the leadership teams are continuing to meet and are working through this. But yes, eventually, when we are in a place where we have a handle on how we’re going to do things, that’s when the communication comes out. Like I said, it’s important for teams to know: yes, we’re working with you, we’re thinking through things. And then set a clear date, call the meetings (it’s usually like an all-hands kind of situation with plenty of time for Q&A), gather your teams, and present in a format that’s, that’s most comfortable for that culture. And, and sometimes it’s, it’s an ‘ask me anything’ kind of format. Sometimes it’s a chat by the fire, a kind of informal thing. And sometimes, and we actually did this this year, we did an all-hands, had plenty of time for Q&A, and that evening we took our teams to the closest hangout place that we have. We usually gather there Thursday evenings for beer, and leadership was there and we answered questions. It was an informal setting, and sometimes it’s important to, to, you know, go to a location that’s not your usual place of work. So a good restaurant, um, a place where you can maybe just, just chill a little bit, right? And, and, and have those conversations, and there you’re able to meet people where they are and then connect with them on that 1-on-1 level and, and maybe answer questions a little bit more deeply.
Kovid Batra: One thing if I have to ask you, which you think you could have done better during that phase, uh, would be?
Sheeya Gem: What could I have done better? Um, it’d be terrible to say we got everything right. Uh, so here’s the thing. No matter how well you manage this (because remember, I said that everybody’s going to go through those different stages of change), you will always see people where somebody is, is more agitated, feeling a little bit more anxious than others, right? And, and just by the reality of communications, where we say, “Okay, a month from now, we’re going to address this,” there are some people who are going to hit that stage of ‘I need to know now’ two weeks before that. And in that situation, it’s hard, but maybe what you can do, if you’re close enough to that, is to just reassure people a little bit more. Um, I think that’s something that, that I certainly could have done a little bit more of, but it’s also one of those situations where you’re kind of weighing it: how much should I be talking about this when not everything is clear, and how much should I just hold? Um, so, so there is that balanced conversation that happens.
Kovid Batra: And in that situation, do you think it is okay to come out and say that I am in a phase where even I am processing the information? More like being vulnerable, I would say. Not exactly vulnerable, but saying that we are in a phase where we are processing things. I don’t want to say anything which, uh, maybe makes you more anxious instead of giving you more certainty at this phase. So making statements like this as a leader, is it okay?
Sheeya Gem: I think it is. I think it’s important, to your point. Vulnerability is key, where you trust your teams and you’re expecting them to trust you. So showing that vulnerable side, uh, builds empathy and helps people, uh, relate to you more. Um, what I would be careful about, though, is some people could perceive that differently: oh, leadership doesn’t have all the answers. So yeah, know your audience, know your audience.
Kovid Batra: Makes sense. Yeah, all right. I think, uh, this was really interesting. Anything, uh, Sheeya, uh, that you think had really driven you and made you who you are as an engineering leader in your whole career, not just at ShareFile, but in general I’m asking, what are those few principles or frameworks that have really worked out for you as a good leader?
Sheeya Gem: Yeah, um, I think it’s learning. For me, I, I have this desire to learn, and, um, and I believe that no matter the situation, right, you can have a good situation or you could have a bad situation. No matter the situation, though, where you win is learning: learning from the situation, no matter what that situation is. So when you exit that situation, you have learned; you are a better person because you have learned from that situation. So, so that’s, that’s a big takeaway for me, and, and something that maybe your audience will enjoy. And that is, for humans, you know, there are some things that are going to go really, really well and some things that are going to be downright awful, and I think that’s life. But in each of these situations, if the mindset is, “Hey, I’m put in a situation that I haven’t dealt with before. What can I take away from this?”, you exit that situation as a winner, no matter what the situation was. And I’ve applied that through my life, where, um, I’ve, I’ve had the, uh, the good luck to work at some fantastic companies and, and be mentored by, by amazing people, um, from Etrade to eBay, uh, Citrix, several companies along the way. And at each of them, uh, when I changed jobs, I went into a job that was just a little different from what I did, and it kind of opened up things for me. Um, and it helps you learn. So that would be a good takeaway: every time you go into something, try something just a little different. Uh, it changes your perspective. It, it builds empathy. When you do a little bit of marketing, you now have empathy for your marketing department a little bit more. When you do a little bit of work that, that’s not just pure engineering, it helps you see things in a different light and gives you a different perspective.
Kovid Batra: Touching on the marketing bit, I think, uh, the last time when we were talking, you mentioned that you have this urge, you have this curiosity all the time, and I think it’s driven from the same fact, learning, that you work with different teams to understand them more. So do you have any experience, like a very specific experience, where you had a chat with a sales guy or a marketing team person that led you to build something, like engineer something and build something for the customers?
Sheeya Gem: Yeah, yeah. Uh, that’s a good topic. Um, a part of leadership, besides guiding your teams, is about the collaborative relationships you build with other stakeholders. And a lot of people, when we hear the word ‘stakeholder’, we kind of mentally take a step back. But what if we consider all of those stakeholders people who are in that journey together with us? Because ultimately, that’s why they’re here. Um, it’s to be successful. And to define success in a way that resonates with each person is the concept of building collaborative relationships. It goes to the heart of shared purpose. Um, so as we were building some new innovative products (and ShareFile is a tech company, which means the product is tech): who knows more about the product and the tech than the engineers who are building it, right? They are the builders. However, all of the other stakeholders that we’re talking about are instrumental to making the product successful. That’s why they’re all here. So for me, it started becoming a case of saying, “Hey, we have uncovered this new way to do something and we believe there is an audience for this. There is a market for this.” Then the first set of people that we start talking to is product management, to say, “What do you see? What have you seen in the field? You’re talking to customers all the time.” And it starts becoming this little bit of a cycle where they feed information to you and you’re feeding information back, and it’s a loop. It’s, it’s becoming this loop that’s continuing to build and continuing to grow. Um, so there is a, there’s a fantastic book. Um, I think it’s called ‘Good to Great’. Um, and in that, the author talks about the flywheel effect, and that’s exactly what this is. So as you’re talking to product and you’re building that, that coherent thought of, “Okay, I have something here. I may have something really, really big,” the next step is talking to sales, because sales tends to be the biggest cheerleader of the product in the market. They’re selling. This is their whole goal. They are your cheerleaders. And so then the next step is building that relationship with sales and saying, “Hey guys, what are you seeing? If I were to build something like this, what do you see, um, in the way it plays out in the market?” And you put that early version of the product in front of sales. Give them a prototype. Ask them to play with it. And most companies don’t tend to do this, because sometimes there are walls, sometimes there’s a little bit of a ‘does sales really want to look at my prototype?’ They do, because that’s how they know what’s coming next. You’re opening that channel up, right? Similarly with marketing, to be able to say, I have something here. Do you think we could do some marketing spend to move this forward? And just like that, you’ve built shared purpose, because you’ve defined what success looks like for each group.
Kovid Batra: Right. That’s really interesting. And the, the last phrase, ‘shared purpose’, I think that brings in more, uh, enthusiasm and excitement in individuals to actually contribute towards the same thing as you’re doing. And on that note, I, I think, uh, I would love to know something from you about how you have been bringing this shared purpose, particularly to the engineering team. So just now you mentioned that there could be, uh, walls which would prevent you from bringing out that prototype to the sales team, right? So in that exact situation, uh, what, what way do you think would work for teams, uh, and the leaders who are leading, let’s say, a group of 20 folks? I’m sure you’re leading a bigger team, but I’m just taking an example here. How do you take out that time, take out that bandwidth, uh, with the engineering team to work on the prototype? Because I’m sure the teams are always overloaded, right? They would always have the next feature to roll out. They would always have the next tech debt to solve, right? So how do you make sure that this feeling of shared purpose comes in and then people execute regardless of those barriers, or how to overcome those barriers?
Sheeya Gem: Yes. Um, to have something like shared purpose work, you absolutely need the backing of your entire leadership org. And I’ve been very, very lucky to have that. Uh, from the Chief Product Officer to the CEO, to the Chief Technology Officer, we were aligned on this, completely and totally aligned on this. And so what this translates to then is investments, right? You talked about tech debt and how teams are always loaded, but if your entire leadership team is bought into that vision, then the way you set the investment profile itself is different, where you might say that, you know, half of the org is going to totally and completely focus on innovation. We are going to build this. Right. Then you have that, that organizational support. Now as leadership, as we are building that, you start talking to your teams about the level of organizational support that you have. And remember, engineers want to build things that are successful with customers. Nobody wants to build something and put it on a shelf in their house. They want it on the market. That is the excitement of engineering. So to then be able to say, “Hey! We believe in this. Our leadership believes in this. Our stakeholders are excited about this.” It’s the kind of excitement and adrenaline pump that nothing else gives. And that’s what we saw happen with our teams: that getting behind a vision, making that strategy your own, knowing that you are a key contributor to the success of the product and hence the success of the org, that is a vision that sustains and feeds itself. And, and that’s what we were able to build. Um, that’s something that I made the time for every day. You talk to your teams, you connect with your teams, you’re talking to your engineering managers, you’re talking to the principal engineers, and every time there is, there is concern (and there will be many, many concerns along the way), I’m not going to have all the answers. That’s normal. I should not have all the answers, because if I have all the answers, then the thinking is limited to the max of my thinking, and a group’s thinking is always greater, right? The sum of a group’s thinking is always greater than any one individual’s thinking. So then it starts becoming a case of: this is the problem that we’re trying to solve. How best would we solve it? And when you put it in front of the brightest people in the room, the answers that you get to that problem, the solutions that you get, break through every bound that you can see.
Kovid Batra: So do you usually practice this? Like, uh, every week you have a meeting with your team and there are some folks who are actually working on the innovation piece, or maybe not every week, maybe once a month? I, I am not sure about the cadence, but in general, what’s the practice like? How, how do you exactly make sure that that is happening and people are on track?
Sheeya Gem: Yeah, we actually meet every week, and then any number of informal conversations happen throughout the day, right? You run into someone in the elevator, you have a two-minute conversation. You run into someone in the hallway, you have a two-minute conversation. But yes, as leadership, we meet, uh, every week. And when I say leadership (and this is where my definition of leadership may be different from some others’), to me, leadership is not just a title that’s given to someone. A lot of people think that once you’re a manager, you’re a leader. The truth of it is, you’re going to see leaders in engineers: people who think differently, people who, um, who can drive something to success, people who can stand behind something because they know that area and know what to do next. They’re all leaders. So in my leadership meeting, I actually have a mix of engineering managers. I have principal engineers. I even have a couple of junior team leads, because they are that good. And that group meets every week. And we talk about the biggest problems that we have, and it becomes a group problem-solving effort. We draw action items from that, and then smaller groups form from there, solve, come back to the meeting next week, and talk about how they are going about it. So it is very much a team environment and a team success metric, the way we go about things.
Kovid Batra: Makes sense. Um, one last thing that I would want to touch upon is that when you are doing all this communication, when you are making sure you’re learning, your team has a shared purpose, everyone is driven towards the same goal, one thing that I feel is important is to see how teams are moving, how teams are doing on different parameters, like how fast they’re moving, how good the quality of code being produced is. And you mentioned you lead a team of almost a hundred people, where there are a few engineering managers and then engineers out there. As a Director of Engineering, there is no direct visibility into what exactly is happening on the ground. How do you ensure, uh, in your position right now, that everything which you think is important and critical is on track?
Sheeya Gem: Yeah, yeah, this is where tools come in. Also, very clear processes. Um, my recommendation is to keep the processes very lightweight, because you don’t want people to be caught up in the administration of that process. But things like hygiene are important. You finished a story? Close the story, right? Or let us know if you need help. Uh, so that becomes important. Um, there are lots of project management tools available on the market. Um, and again, like I said: lightweight, clear process. Uh, the ability to be able to, um, demonstrate work in progress, things like that. And that’s something else that we have. Um, we have this practice called ‘show, tell, and align’, and, um, we meet every week, and this is all of engineering, and just like the title says, you show whatever you’ve got. And if you’re not in a position to show, you can talk about what you’ve got. And the purpose of it is to drive alignment, and it’s, it’s an amazing meeting, and we have a fantastic manager who runs that meeting. There’s a lot of energy there, and we have no rules about what you can show or where you can show it from. You know, some, some companies have rules like, oh, it needs to be in production for you to show it. No, no, no; I want to see it if it’s on your dev laptop. I want to see it. Your team leads want to see it. Uh, so we keep it very, very easy. And in that meeting, every senior leader who attends is encouraged to come in as an engineer and as an engineer only. Uh, they’re supposed to leave their titles at the door. It’s, it’s a challenge. It’s a challenge, but no one can come in and say, “Hey, I didn’t approve that!”, because you’re coming to this meeting as an engineer. And sometimes we’ve had, you know, directors and VPs who have something to share, because they’re able to leave the title at the door. Uh, so it’s, it’s been a great practice for us, this ability to, to show our work in progress. Um, “Oh, look, I got this done.” Uh, “Here’s a little notification tab that I was able to build in three days. I’m going to show this to the team.” Or, or “Here’s a new framework that I’m thinking about and I found this. I’m going to show this to the team.” Uh, so this is a regular practice, um, at ShareFile and now at Progress.
Kovid Batra: Perfect. Perfect. Great, Sheeya. I think, uh, this was a really, really interesting talk, uh, learning about communication, learning about learning all the time, having a shared purpose. Show, tell, and align, that was interesting on the last piece. So I think with this, uh, we, we come to the end of this episode. It was really, really nice to have you here and we would love to have you again. Is there any parting advice for our audience that you would like to share? Uh, most of us are like engineering managers, aspiring engineering leaders or engineering leaders. If you would like to share, please go ahead.
Sheeya Gem: Um, we covered a lot of topics today, didn’t we? Um..
Kovid Batra: Yeah.
Sheeya Gem: Uh, what do I have for our, um, for our engineering managers? Trust your teams, but trust and verify. Um, and this is where, you know, some of the things we talked about, things like OKRs, things like lightweight processes, come in. Trust, but verify. That’s important. Uh, the second part of it is shared purpose. You want to build that across not just your teams, but all of the stakeholders that you’re interacting with, so people are driving in the same direction, uh, and we’re all moving towards the same success and the same set of goals. And every opportunity is a learning opportunity.
Kovid Batra: Great! Thank you, Sheeya. Thank you so much once again. Great to have you today.
Sheeya Gem: It was a pleasure. Thank you for inviting me on your show.
'Leading Dev Teams vs Platform Teams' with Anton Zaides, Director of Engineering, Taranis
January 24, 2025
•
28 min read
In this episode of the groCTO Podcast, host Kovid Batra interviews Anton Zaides, the Director of Engineering at Taranis and author of the Leading Developers newsletter. Their discussion focuses on the challenges and strategies involved in leading development teams versus platform teams.
Anton draws on his experience leading both classic dev teams and platform teams to explain why internal, technical clients make platform work uniquely demanding, from unsolicited technology opinions to constant firefighting and interruptions. He shares strategies for better collaboration, such as having product and platform engineers switch places for a while, appointing ‘DevOps Champions’ inside product teams, and connecting platform work directly to business outcomes. The episode also touches on giving platform engineers the visibility and recognition they rarely get, and on building lightweight self-service tooling that makes developers’ lives easier.
Timestamps
00:00 — Introduction
01:15 — Meet Anton
01:35 — Anton's Journey and Achievements
02:04 — Dev vs Platform Teams: What's the difference?
04:21 — Challenges in Platform Teams
12:24 — Strategies for Better Collaboration
25:12 — The Role of Product Managers in Platform Teams
Kovid Batra: Hi everyone. This is Kovid, back with another episode of groCTO by Typo. And today with us, we have a very special guest who is coming to the show for the second time, but first time for this year. That’s Anton. Welcome to the show, Anton.
Anton Zaides: Thank you, Kovid. Great to be back.
Kovid Batra: So let me introduce Anton. Uh, so Anton, guys, is Director of Engineering at Taranis, a company from Tel Aviv. And, uh, he is also the author of Leading Developers, which is a trending newsletter, at least on my list. And he has almost 18,000 subscribers there, writing some great articles we are really fond of at groCTO. So congratulations on that, Anton, and welcome to the show again.
Anton Zaides: Thank you so much.
Kovid Batra: All right. Uh, so today’s topic of discussion is one of the topics from Anton’s newsletter, which is ‘Leading Dev Teams Vs Platform Teams’. This was a very interesting topic. Uh, I read the whole newsletter, Anton, and I really found it very interesting and that’s the reason I pulled you off here. And, uh, before we like jump into this, I’m really curious to ask you a few questions about it. But before that, I just want to know, uh, how was your last year? How did 2024 go? What are your plans for 2025? So that we get to know a little more about you.
Anton Zaides: So '24 was very busy. I had my, uh, I had my first kid at the beginning of the year, so a year ago, and got promoted a month after that. So it was a year full of..
Kovid Batra: Super hectic.
Anton Zaides: Yeah! Hectic. Career, family, and I think a small one would be, uh, my first international conference, uh, back in September, which was a great experience for me, you know, like talking in English with an audience. So I would say a lot of family, a lot of career. And the next year is more about family. I’m right now taking a 7–8 month break, and I’m planning to work on my own thing. Early childhood education, mainly helping parents and children, like my own kid’s age. Just a bit of technology, and also learning about it. You know, I feel parents don’t really know what they’re doing. So that’s my goal for next year: to be a better father and use technology for that.
Kovid Batra: No, that’s really amazing. I think there are a few experiences in a human’s life, and this is one of those which changes you completely. And, and in a, in a very good way, actually. Uh, when you’re young, you usually do not love to take responsibilities. Nobody loves to do that. But when such kind of responsibilities come in, uh, I think you, you grow as a person; there is something else that you explore in your life. At least I’ve seen that in my friend circle, and of course, I can relate to what you’re saying also. So, congratulations and all the best. Uh, we really feel that you would do great here as well.
Anton Zaides: Thank you. Thank you. Definitely. We’ll try.
Kovid Batra: Yeah. All right, Anton, uh, coming to the main section, uh, talking about platform teams and dev teams, uh, this topic is unique in, uh, in the sense that nobody usually talks about it in the detail and depth the way you have done. Of course, a lot of generic articles are out there. I’ve read a lot. This session could be a really good guide for someone who is, uh, in a position where they are moving into these roles, from, uh, leading dev teams to platform teams. They could really have some learnings from what you have experienced in the past. So, first question to you, actually: why did this topic come to you? What happened in your personal experience that made you realize that, okay, this could be something that an engineering manager or a tech lead who is switching between these kinds of responsibilities would be interested in knowing?
Anton Zaides: Going back, I first started in a classic dev team, right? I wrote code like everyone else for a few years, and then I switched to the platform side, the DevOps side, more infrastructure, and led the team there for a couple of years. And then I decided to switch back. So there were two switches I did. And in my last role as an engineering manager of a classic product-facing, you know, user-facing team, I felt that most of the other engineering managers in the organization didn’t really know how to work with the platform team. We have a DevOps platform team that provides us, you know, all the tools; they help us. And I felt they don’t really understand, uh, how to approach them, how to help them, how to connect them to the business. So they just really liked working with my team, and I always got what I wanted, and I pushed the agenda for that. And it really, really helped my developers too, right? Because they got close to the platform developers, and understanding that side better made them better developers. And I felt like this connection can help other engineering managers who never experienced how difficult it is to be in a platform or DevOps team. I’m using the terms, uh, interchangeably, but, uh, let’s call them platform for now. So I felt that, you know, I can show the other side, and I hope it will help other engineering managers to see the difficulties and stop being annoying, because, you know, we are the, we are the clients. It’s very, very hard to satisfy developers as a platform team. It’s almost impossible. You’re always too slow. You always have, like, too many bugs. You’re always not prioritizing me enough. So I wanted to show a bit of the other side. So that was the focus of the article: showing the inside of a DevOps team, with some tips for product teams on how to help those DevOps teams. That was the idea.
Kovid Batra: Hmm. Interesting. Interesting. So this was some real pain coming out there, and you’re telling people, okay, this is what the picture is, so that they know what’s going on. Right. I think that makes a lot of sense. And I think a lot of people connected to that. And even I liked the article a lot. Um, I was reading one section, uh, from the article, which mentions that this is something which is really, really hard to manage, right? Uh, because the, the expectations are very high, and you just now mentioned, uh, it’s, it’s very hard to satisfy the developers, and then the requirements are changing too fast. So these were the first two things I remember from your article, which you, you touched upon. So can you just give me and the audience some examples of how you see things changing really fast, or how it is becoming very difficult for you to manage these demanding clients, actually?
Anton Zaides: First of all, I think when your clients are technical and they are inside the company, they feel the privilege to tell you how to do things and prioritize your work, right? Because they say, oh, why does it take you a month? I know I can do it in a week, right? They feel they can do the platform work, and they kind of push the platform teams. Um, I had an example where, when I was on the platform team, we were responsible for, I don’t want to get too technical, but we had, uh, you know, database services like Postgres, MongoDB, Redis, right? Storage databases. So we were in a private cloud, and we were responsible for, uh, providing those databases as a service, like what you have in AWS and GCP, where you can just request one. So we needed to do the same in our own private cloud, which is quite complex. And we provided PostgreSQL and MongoDB and Redis. And every day another developer says, like, why don’t you do Cassandra? Or why don’t you do CouchDB? Like, they felt like they knew what needed to be done, and they didn’t. They never thought about it; you know, in my opinion, Postgres is perfect for 99.9% of the startups and their products, but the developers felt like they needed to push me to provide them a new database just because they wanted to use new technologies, right? And now, for example, we have Jenkins, right? So in my company, I heard developers complain: why Jenkins? It’s so slow. We need to replace it with something faster. Right. And this is something, as a product team, you’ll never hear your client tell you: why do you use React? You need to use Vue, right? It’s faster. They don’t care, right? They care about the end result. And here you get comments like this. Does somebody really know how hard it is to replace Jenkins with another tool? What are the costs? What are the benefits? Why do it? So they feel very comfortable, like, suggesting and giving their opinion, even if nobody really asks them, I would say. That’s one thing.
And the other one, about the priorities, is actually, I would say, a sense of urgency; there are a lot more fires in the platform teams. For example, we had the case of a GPU problem, right? You know, the world does not have enough GPUs. So we used, uh, the cheaper tier of GPUs, where they don’t promise you enough capacity. And then we had a bottleneck, and we needed the GPUs, but we couldn’t get them. And now we needed to change all the infrastructure to request the higher-tier GPUs and kind of balance them to save on price. And this is a project that took one month, and it completely stopped what the team was working on, which was also important. And you have so many incoming things like that. You know, you have an alert somewhere, right? Something is crashing. Very often it’s the developers’ problem. But if you see, uh, prod crashing, you say, okay, it’s, it’s the DevOps. They don’t have enough memory, or they don’t have enough nodes, or something like that. And then you kind of need to debug, and then you understand it’s the developers’ problem. You tell them, and then they debug and come back to you, because they don’t do their job well. So all this back and forth makes it very, very, very hard to concentrate. I remember sitting there; you know, you have this tap on the shoulder: “Please help me a bit. Uh, please explain to me why this is not working.” Uh, in a product team, you have customer support, you have customer success. You have so many layers that isolate the developers from distractions, right? And you can see it straight here: your clients are sitting by your side, and they just go over and sit by you, expecting you to help them. I think product developers would have gone crazy if their client could come up to them and say, “Oh, I see an error, help fix it now.” So, yeah, I agree. Those are the two things that, that make it, uh, very hard: clients being opinionated, and the constant distraction.
Kovid Batra: Right. I think from the two points that you mentioned: uh, there are always unwanted suggestions and recommendations, and then there is this expectation that, when you do not want to be directly interrupted, there should be a first level of curation on whether the problem belongs to the platform team or to the developer; there should be some level of clarity there, and then probably there should be deep diving into what’s going on, who’s responsible. So what I felt is, let’s say, just hypothetically, uh, five years down the line, you are an engineering leader who is managing the complete tech for, for an org. Uh, you have a platform team, you have your development team, right? What advice or what kind of culture would you like to set in? Because it seems like a problem of culture or perception, where people, like, blame the platform teams or do not empathize with the platform teams that much. So, as an engineering leader down the line who is leading two different teams, what kind of culture or what kind of practices would you want to set in, so that platform teams, who are equally critical and responsible and accountable for things as development teams, are operating neck and neck? I’m short of words here, but I hope you get the sense of what I’m trying to say.
Anton Zaides: Yeah, I think I got it. And it’s, it’s a small thing that we’ve actually tried, but if I were the decision maker, I would do it on a bigger scale: actually switch places for at least a while. So I believe that platform and DevOps knowledge is super useful for every engineer, right? Not always the other way around. So I truly believe that every product engineer should know about platform, at least the basics; not every platform engineer should know React, right? It depends on what they work on. But I would take the product engineers and put them, for a month, uh, on a project helping the platform teams. Like, everyone should do a bit of platform work to understand, to see how they work, right? They can work in Kanban and not your usual Scrum, to see their day to day. If you see it from the other side, like if you need to provide support, if you are the pipeline, you will see how many requests are coming through, and the other way around. For two sprints, like for a month, we had one of the platform developers in our team, because he wanted to experience the life of a developer, to understand the problems better and the usage of his own systems. And it was really, really mind-opening for him too, to understand why we complain about what he thought was so easy. Once he sat with us and developed and, uh, released some backend code to production, he understood it’s not that easy. So this switching of places has some cost, but I feel it’s worth it.
And the second one, I would say, is to connect; like, the roadmaps shouldn’t be separate, right? They should be much more connected. So when you’re building the platform roadmap, you should have, of course, the engineering managers, but not only when you build it. Like, they should be there at every release kickoff; every, every time, they should be part of the platform roadmap. This is the easy part. The harder part is to explain your product to the platform people, right: how are your next 3–4 months going to look? What are you working on? What do you expect? And not just the managers, which is what usually happens, right? You have a manager sitting with a manager, discussing and stuff like that. The people underneath need to understand that too, uh, sit there. For example, a platform engineer should hear customer success stories that he indirectly helped with, because a big part of the problem is that when you work in the platform team, you don’t really affect the business bottom line, right? You help developers create solutions. But if you can have those stories of how you helped someone deliver something faster and what the impact on the company was, it creates, like, a shared responsibility, because next time you will want to help them faster. You will want to understand the problem better, because you feel the impact. Saying, “I released the service to production in five minutes instead of three hours”, that’s nice. But saying, “I released the feature a week earlier, and a bigger deal was agreed to by the customer because of the DevOps team”, right? Making this connection. It’s not always easy, but in a couple of cases, we were able to make that connection: platform work tied directly to business outcomes. I feel that’s something we should try, uh, much more. Um, so yeah, if I had to choose one, it’s just, uh, switching the places a bit. We had a concept called ‘DevOps Champions’, but it can be ‘Platform Champions’, uh, where you pick one developer from each product team and they have a weekly meeting with the platform team and, like, hear about the latest news, ask questions. And for example, they are the point of contact before you can contact the platform team. You have someone in your team who is interested in platform, and he gets, I would say, direct Slack access to the DevOps team. They know this person; if they ask, we will drop everything and help them. And they, they do build trust. And then the whole team talks to one person instead of to the DevOps team. And, and this helps a bit. So I hope it was not too confusing. If I sum it up, I’d say: switch places, have a dedicated platform, uh, representative inside the product teams, and also connect the platform team to the business side. Yeah.
Kovid Batra: That really makes sense. Uh, this point which you mentioned about bringing in DevOps Champions, right? Like, who are going to be the point of contact for the product teams to share knowledge, understand things. Going back to your newsletter, uh, you mentioned bringing more visibility and recognition also. So are these DevOps Champions some way of bringing recognition also into the teams, to have a better culture there? I mean, basically these teams lack that level of recognition just because they're not, again, directly impacting the business, so they don't really get to see or feel the impact of what exactly they have done. Is this an outcome or consequence of that?
Anton Zaides: No, I think it’s a bit different because the champions are product engineers, like who are originally from inside the team. So if I have five developers, one of them will be like, uh, will wear the platform hat, but he will be a product engineer and he will get to, to, uh, learn from them and work with them, the ones who are interested. For the recognition, I’m talking about recognition of the pure platform engineers, which are usually in the dark and separate there. And there it’s about what we, we discussed a bit earlier, also sharing their stories, but also public acknowledgement. That’s something that I really, I have the privilege of having a LinkedIn, you know, and I constantly write there. So I, I did a couple of shoutouts for our platform engineers after nice projects, and they really, really appreciated it because, you know, people usually, you know how it is. If it works, they don’t hear about platform, only when it breaks. So they don’t get like kudos for nice projects and stuff like that. So I really try both on LinkedIn, but also in internal companies like channels, you know, saying nice words, uh, appreciating the work, stuff like that.
Kovid Batra: Makes sense. Makes sense. Totally. I think, uh, one thing I would be interested in knowing: any project that you took up and completed as a platform team lead. What was the mindset, what was the need, uh, and then how did you accomplish it? Just deep diving into that process of being a platform team lead, uh, leading a project to make the lives of your developers, uh, better and maybe making them more productive, maybe delivering faster.
Anton Zaides: So let me think; it's been a while, right? It's been four or five years since I was there. But if I go back, right, my team's role was to deliver database as a service for our customers, right? Customers and developers, they want, uh, whatever: PostgreSQL, uh, MongoDB. And it's hard for me to explain to people how it is without a public cloud. I was in a government agency, so there was no GCP, AWS, Azure. You needed to create everything yourself. It was an air-gapped environment. Because of, you know..
Kovid Batra: Uh, information, regulation.
Anton Zaides: Regulation, information; you couldn't use stuff like that. So we needed to do everything from scratch. And we were a small team, so all the communication was, uh, we didn't have like a portal, right? I know it's very hard to imagine a world without the public cloud, but it was emails and messages: please create me a database, and stuff like that. And one very small, annoying thing was extensions in Postgres. You have many default extensions: you have PostGIS for geographic data, you have one for using it as a vector database, you have many extensions, and we wanted to help them use those extensions, right? Because every time they needed a new extension, they needed to send us an email. We needed to check it. We needed to roll it out, and stuff like that. So I know it's, I think it's not ideally what you, uh, meant, because it was quite a small project, but I saw that pain, and we went and figured out the top 20–30 extensions, did some templates, and did some UI work, which is quite rare for platform teams, right? Because you hate UI, usually, if you're in platform. At most you can do some backend, but you prefer to do, like, you know, bash scripts and stuff. So we did a basic, uh, interface with React, HTML, CSS, to create this very ugly portal, which I think people appreciated. It makes the work easier. And I think the good platform teams are not afraid of writing a bit of code and building a graphical interface, a small portal or, uh, a request screen, stuff like that, instead of waiting for product teams to help them create a nice screen. Now with Cursor and, you know, all the LLMs, it can take you 30 minutes to do everything you need. Like, you have APIs; you can put buttons on top of them so people can request what they need. So if I go back to the story, the point is to break that barrier and not say, okay, I can only do backend stuff, that's how it works. Just think about the next step and go where it's uncomfortable. I was lucky because I had the background as a product developer, so it was easy for me. But for all of my team members, it was like, no, no way we're going to write React. No, it's not our job, and stuff like that. So I had to, to force them a bit, and I actually enjoyed it, because, you know, it's rare in platform work that you can actually see something immediately
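To make the idea concrete, here is a minimal sketch of what a self-service extension endpoint like the one Anton describes might look like. The Flask route, the allow-list contents, and the connection handling below are illustrative assumptions, not the actual portal his team built:

```python
# Sketch of a self-service Postgres extension endpoint (hypothetical:
# the route, allow-list, and connection settings are illustrative).
from flask import Flask, jsonify, request
import psycopg2
from psycopg2 import sql

app = Flask(__name__)

# Curated allow-list: the "top 20-30 extensions" idea from the episode.
ALLOWED_EXTENSIONS = {"postgis", "pg_trgm", "hstore", "uuid-ossp", "vector"}

@app.post("/databases/<db_name>/extensions")
def enable_extension(db_name):
    ext = request.json.get("extension", "")
    if ext not in ALLOWED_EXTENSIONS:
        return jsonify(error=f"'{ext}' is not in the approved list"), 400
    conn = psycopg2.connect(dbname=db_name)  # assumes local auth is configured
    try:
        with conn, conn.cursor() as cur:
            # Identifiers can't be bound as query parameters,
            # so quote the extension name safely instead.
            cur.execute(sql.SQL("CREATE EXTENSION IF NOT EXISTS {}")
                        .format(sql.Identifier(ext)))
    finally:
        conn.close()
    return jsonify(status="enabled", extension=ext), 201
```

The allow-list mirrors the vetting step: the portal only automates what the platform team has already approved, and everything else still goes through a manual request.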
Kovid Batra: This was an interesting experience. And how would this experience have changed with a similar kind of requirement for dev teams? Because we are comparing how leading dev teams is different from leading platform teams. In this situation, of course, there was a barrier, uh, there was a problem which the platform team had to solve, but it came with a solution that platform teams are usually not inclined towards, like building the UI, right? If a similar kind of situation had come up for the dev teams, how do you think it would have been easier or more difficult for you to manage as a manager?
Anton Zaides: I would say as a dev team, you have a product manager, you have UX designers, and you get a ready Figma of how it should look, and you just implement it in a couple of days, right? It's so much easier because someone is doing the research, talking to the customers. Some platform teams have a product manager, right, but they for sure don't have a UX designer working with them, because the system is internal and everybody says, “Oh, just make it good enough. Uh, these are our people anyway. You don't need to make it beautiful.” So this is usually how it works. And in the product team, for me as a manager, it's much less work. The product manager does most of the work, and I would just, you know, manage the people a bit, coach them. But as a platform team lead, like 50% of my job was product management. For some of the time I did have a dedicated product manager, but some of the time I didn't, and I needed to fill the hole myself. Yeah, because the platform team is the first team where you cut the product manager. You say, “Oh, it's internal. No need. Uh, the engineering manager can manage.”
Kovid Batra: That's even my point, yeah. So even I felt so: for platform teams, do you think it is even important to have a product manager? Because the tech lead or probably the engineering manager who's involved with the team would be a good start to make sure things are falling into the right places and to understand the problem. See, ultimately, for a product manager it is very important to be empathetic towards the client's problems and be able to relate to them. The more they relate, the better the fit, and the better the solutioning they can design, right? Similarly, for an engineering manager who is leading the platform team, it would be more of a product role, and it makes more sense also, as per my understanding. What do you have to say about that?
Anton Zaides: I have had experience with product managers on platform teams who didn't come from an engineering background, and it was always a failure in my experience. Uh, I would say it's better to have no product manager and let the engineering manager do the job. And ideally, in that team, after, I think it was a year and a half, one of the engineers mentioned she wanted to become a product manager. This was her career path, and then it's a perfect fit, right? If you have an engineer from inside the company who wants to become a product manager, then it can work great. But I feel that in the platform case, the product manager must have an engineering background. Otherwise, you can try to learn to be technical, but it would just be a different language. It's not like product teams. Yeah, I feel it just doesn't work, in my experience.
Kovid Batra: Makes sense. Leading a platform team, you may not always find this kind of fit where some engineer who is interested in becoming a product manager comes in and plays that role, but I sense that there is definitely a need for a person who understands the pain. Whether that person is an engineer or the engineering manager working as a product manager, you definitely need that kind of support in the system to make sure that requirements are flowing in correctly, right?
Anton Zaides: Yeah, I agree.
Kovid Batra: And most of the time, what I have seen or felt is that engineers usually shy away, or the engineering team shies away, from being involved that aggressively with client requirements. So when it comes to platform teams, how do you bring that extra level of empathy towards customer problems? Of course, they are developers, they relate to the problem. In product teams, dealing with real-world problems, being a developer you still get to see some side of it, because you're a human living in that world. But when it comes to platform teams, it's all technical. You have seen things, but still, it's more like you are just solving a technical problem. So the empathy of deep diving into the problem and bringing up a solution: does it become harder or easier when you are raising a product manager from within an engineering team, for platform teams?
Anton Zaides: I think it's quite hard, and I think this is the role of the engineering manager, of the platform engineering manager. Like, I feel the product managers still have difficulty bridging that gap. I would say that platform engineers, either by experience or by character, care more about the technical side. You know this term ‘product engineer’, which is like a pure product engineer, not just a software engineer: the people who decide what to build. Platform engineers, from my experience, care about the technical side much, much more, right? They want to build excellent solutions, they are excited by crazy bugs, and they are excited by saving costs, stuff most people are less excited about. And yeah, it's purely the job of the engineering manager. As a platform manager, you need to show them the pains of the developers too. That's much more than in a product team, where the PM fills that gap. I feel that even if a PM is an ex-engineer, in my experience, if the engineering manager won't do it, the developers will resist the PM much more, right? I think that's what comes to mind. You have much more resistance in the platform team because they want to stay in the code. They don't want to join customer meetings. They don't want those things. They just want to code. So you need to, you know, peel the shell and try to bring developers to share their stories, send them for a month to a development team, as we discussed, which they will probably hate. So you need to push a bit. And the PM, they are not his or her direct reports, so they have limited power, and you can, I would not say force, but firmly help them along that path, uh, of understanding the users' pains. Yeah.
Kovid Batra: Great, Anton. I think, um, thanks. Thanks for this interesting talk and for helping us deep dive into platform teams and dev teams and how they differ in their core DNA. Uh, I think there were some great insights about how things change when you are leading a platform team, from the expectations, to the kind of mindset that the developers come with, the unwanted suggestions, and how you bring more connectedness to the business and recognize teams. So I think this was a very interesting talk. Before we wrap up the session, uh, is there any advice, uh, parting advice that you would like to give to the audience?
Anton Zaides: My main advice would be to the product leaders, the product engineering managers: try much harder to understand the pain of the platform teams in your organization and how you can help them. Schedule 1-on-1s with the platform engineering manager, be more involved, because they will appreciate that help and they might not even know they need it. And in my experience, you will benefit for sure.
Kovid Batra: Makes sense. Makes sense. I think this would not only help reduce the friction, but would also help, uh, in bringing a better, more collaborative effort to build better products and better platforms.
Anton Zaides: For sure.
Kovid Batra: Great, Anton. Thank you. Thank you so much once again, uh, it was great having you on the show. Thank you.
Anton Zaides: Thank you, Kovid. It was great being here.
'Driving Engineering Productivity as a VPE' with Maher Hanafi, VP of Engineering, Betterworks
January 10, 2025
•
46 min read
In this episode of the groCTO Podcast, host Kovid Batra welcomes Maher Hanafi, VP of Engineering at Betterworks, to discuss engineering productivity hacks. Maher shares insights from his 16+ years of engineering and leadership experience, emphasizing the importance of passion and individualized growth paths for team members.
He recounts how his early interest in gaming and experiences as a guild master in World of Warcraft shaped his leadership style, teaching him valuable lessons in social intelligence and teamwork. Maher outlines the framework he uses for peak performance, focusing on shared understanding, trust, and competence, and highlights the significant benefits of leveraging generative AI tools like GitHub Copilot for improving productivity. The episode also delves into the complexities of implementing new technologies and managing distributed teams, underscoring Maher's strategies for overcoming these challenges through continuous learning and fostering a collaborative culture.
Timestamps
00:00 — Introduction
00:54 — Welcome to the Podcast
01:16 — Meet Maher Hanafi
02:12 — Maher’s Journey into Gaming and Leadership
04:21 — Role and Responsibilities at Betterworks
06:20 — Transition from Manager to VP of Engineering
13:59 — Frameworks for Engineering Productivity
22:40 — Challenges and Initiatives in Engineering Leadership
Kovid Batra: Hi, everyone. Welcome back to groCTO by Typo. Uh, this is Kovid, your host, wishing you all a very, very happy new year. Today, we are kicking off this year’s groCTO Podcast journey with the first episode of 2025, hoping to make it even better, even more insightful for all the listeners out there. And today, for the first episode, uh, we have our special guest, Maher Hanafi. He’s VP of Engineering at Betterworks, comes with 16 plus years of engineering and leadership experience. Welcome to the show, Maher.
Maher Hanafi: Thank you, Kovid. Thank you for having me and happy new year.
Kovid Batra: Same to you, man. All right. Uh, so, Maher, uh, today we are going to talk about some engineering productivity hacks from a VP’s perspective. But before we jump onto our main discussion, uh, I think there is a lot to know about you. And to start off, uh, we would like to know something about you that your resume or your LinkedIn profile doesn’t tell. Something from your childhood, which was very eventful and then defines you today. So would you, would you like to take the stage and tell us about yourself?
Maher Hanafi: Well, that's a great way to start the conversation. Thank you for asking this. Um, yeah, it's not something that is on my resume or in my bio, but people who know me know this. I'm into gaming and I used to play video games a lot when I was a kid, to the point that I wanted my career to be in gaming. So I have a telecommunications engineering background. And as soon as I finished that, and I was ready to go to the market and start working, I decided to completely pursue a career in gaming. So what I did is, um, I looked into game developer jobs, and I figured out everything I'd need to have as a game developer. And I learned that. I taught myself these things, and two years later I was working for Electronic Arts. So a great story there is how this passion I had as a kid for many years led me to, um, go and pursue that career. Another part of that same story: as a gamer, I used to play a lot of massive multiplayer online video games, MMOs. Uh, one of the biggest ones is World of Warcraft, and at that time, I played the game a lot, to the point that I was a guild master, meaning I was leading a big team, uh, hundreds of people, um, kind of a leadership position. So in other words, I was a manager, uh, before I even started my career as an engineer, or, uh, before I became an Engineering Manager later. That taught me a lot of things, from, you know, social intelligence to how you manage people and how you hire and fire and manage productivity and performance, which will be the topic of today. So happy to get into that in a moment.
Kovid Batra: Oh, that's very, very interesting. So I think, uh, before you even started off your leadership journey, you were actually leading a team. Though it was just gamers, it still must have taught you a lot.
Maher Hanafi: Absolutely. Yeah, I learned a lot and I'm so grateful for that experience. A lot of what I did there are things that I brought to my career and used as a, as a manager, um, to get to the engineering leadership level.
Kovid Batra: Perfect. Perfect. I think it's time. Let's move on to something, uh, around the topic. And before, again, we jump onto that, uh, tell us something about Betterworks, your role and responsibilities as a VP of Engineering over there. What is it like at Betterworks?
Maher Hanafi: Yeah. So, Betterworks, we are an enterprise, uh, SaaS company. We develop enterprise performance management software for big, global companies: all the tools and suites of tools they need to manage performance internally. Again, this is more challenging when you have, you know, departments and teams and business units, and you're globally distributed. Managing performance in general is very challenging. So we build and provide all these tools for our big customers. I'm currently the VP of Engineering. I lead all our engineering teams. Uh, we're split between India and the US, and yeah, uh, I do different things. I, obviously, lead the technical perspective from a vision, strategy, and architecture standpoint, help the team make the right decisions, build the right software, and I also contribute a lot to our strategy and vision over time, including AI. This was one of the most recent, you know, areas of focus of mine: to help the team and the company deliver generative AI integrations and enhanced features on top of what we offer, which is obviously very, very important these days, to be on top of that and deliver. So that's what I do. And again, as a VP of Engineering, there are a lot of things that get into that, including, you know, managing the team, managing productivity, ensuring that everything is efficient and effective in having an impact.
Kovid Batra: Talking about productivity and efficiency, I think, um, I was stalking your profile on LinkedIn and I realized you have had this good journey from being a developer, then a manager, and then a leader, right? I would want to understand how your perspective towards improving team efficiency and team productivity has changed from when you were working as a manager to now, working as a VP.
Maher Hanafi: Yeah. I mean, going from an IC to a manager is one thing; it's like, you hear this a lot, going from being a player to being a coach, maybe a captain/coach. Your scope is small. Usually your team is also small. The area of expertise in terms of stack and technology is also small most of the time. So when I started my journey as a manager, I was managing mobile development teams. That was my area of expertise when I turned to management. But then when you get into more senior management, Director of Engineering and VP of Engineering, your scope grows and you turn more horizontal than vertical, right? Your depth of expertise gets to a certain level where you cannot go any deeper if you want to manage bigger teams. And add to that, you get involved in managing managers and you become like a coach of coaches. So the whole dynamic changes over time, your areas of focus change, and you become less hands-on, less technical, but still you need to keep up with things that are happening. If you go online and search for VP of Engineering, you'll find a lot of people saying that VP of Engineering is like the hardest of all the roles in engineering, because it has this challenge of going horizontal while trying to be as vertical as possible, managing managers and managing performance, and again, focusing on impact. So the way my mindset changed over time is I needed to let go of some of my biggest passions, you know, when I used to code and go deeper into little details and very specific stacks, and go more horizontal, but keep myself really up to date with things, so I can go and speak to my teams in their language and help them move the needle with what they do, and still be someone who can bring a vision that everyone can stand behind. So it's a completely different game over time, but it's organic; you know, you cannot just hop overnight into a new role like this and expect yourself to be successful. There's a lot of learning, a lot of education. You need to keep up with everything that is happening as much as you can, obviously, and then help your team execute, and find the gaps in your own set of skills, technical and non-technical, to be the best VP of Engineering you can be, to help your team proceed.
Kovid Batra: So if I had to ask about one of the hardest things for you, when you had to change yourself as you moved into this role, what was it?
Maher Hanafi: I think, definitely, going very horizontal, because when I moved into more senior leadership positions in engineering management, I found myself very quickly completely outside of my comfort zone, right? I started with gaming, obviously; that was my area of expertise. And then I learned mobile, which was a passion of mine, and that was my space. I was very comfortable there. I could do anything. I could be very efficient and I could lead a team to deliver in these areas. But then, overnight, you take over, you know, web development and backend technologies and then cloud-native, you know, distributed systems. So overnight you find yourself completely outside of the zone where you're very comfortable, and your team is looking up to you for guidance sometimes, right? And it's very hard for you to do any of that if you are not able to speak the language, to catch up with these technologies, to be someone people can stand behind in terms of, uh, trust and guidance. So that's the moment where I felt like, “Oh, this is not a thing I can keep doing the same way I used to do other things before. Now I need to get myself into continuous learning more proactively, you know, going a little bit ahead of my initial plans for managing teams.” So very quickly I turned to, “Okay, what is web development? What are the key areas and components and technology stacks? How can I manage a team that does that? How can I learn backend very quickly? How can I learn infrastructure and data, and then QA and security and all of that?” As you go into these roles, again, your scope is going to grow significantly, and you need to catch up with these technologies to a certain level of depth. I cannot go as deep as I went into mobile and the other technologies I was very hands-on in, but you need that level of depth that is good enough to drive these teams, to really be a source of trust and confidence so people can stand with you as a leader, and again, be productive and perform.
Kovid Batra: Right. I think that makes a lot of sense, actually. But when you are in that dilemma of whether you should go vertically deep into a topic, while you also have a responsibility to go horizontal, how do you take that call: “Okay, this is where I have to stop, and this is how I would be guiding my team.”? Because you're talking to technologists, and specifically in your case, you were coming from a gaming and then mobile background, and then you took up other technologies. Anyone who is expecting guidance there would be much deeper into that technology than you. So what would that situation look like? Let's say I am that person who has probably spent three, four years hands-on in web development, and you have come in as a VP and you're trying to have a conversation with me, telling me how I should be taking up things. Don't you think that I would be the person who already knows more hands-on than you? In that situation, how could you guide me better?
Maher Hanafi: Well, that's where a mix of soft skills and hard skills gets into the game. And the way you earn the VP of Engineering role is to be smart and socially capable of navigating these situations, right? So first of all, on the hard skills, as I said, you need to go and learn the minimum to be able to speak the language. You cannot go to backend engineers and start telling them stories about your frontend engineering background. It doesn't work. So you need to get to a certain level of learning and fluency in the stack and the technology to be able to at least speak at a high level. And then, the other part is where the soft skills get into the game. You need to be vulnerable. You need to be very clear about your level of expertise. You need to highlight your team members as the experts and create this environment of collaboration where you come in as a leader, but they are the experts in the field, and together you can move the needle, together you can make things happen. So build that kind of trust relationship that is based on their competence and your leadership, and together you can really get things in motion. It's very hard for someone who doesn't have a strong IC, technical, hands-on background in a specific stack to come and lead purely from a technical perspective. And in other words, it's not a good leadership framework or management style if you just come in and direct the whole team to do what you want them to do. So that's where, again, your soft skills come into play, where you come in and ask: what's the vision here? What's the plan? What have you been going through? What are the challenges? And then, over time, as you get more mature and more experienced as a leader, you'll find a way to make it work. But again, I think you need to really leave your ego outside of the room. Go and talk to these individuals. Make sure they understand you are here to support them and guide them from a leadership perspective, but they are still the experts in the field and you count on them; give them space to experiment, give them space to own and lead and drive things. And that's what leads to good collaboration between the leaders and the team behind them.
Kovid Batra: Totally makes sense. Totally makes sense. So, um, moving on to the part where we talk about managing teams, making them more efficient, making them more productive: what do you think, is there a framework that fits everyone? Do you follow a framework to improve overall engineering productivity, developer productivity, in your teams?
Maher Hanafi: Honestly, this is a very hard question, right? There is no pattern. There is no formula, no one-size-fits-all here for performance and productivity. As a leader, you need to get into learning what your team is about, what challenges they are facing, what combination of skills, again, hard and soft skills, you have in the team, to figure out what is missing and how you can address it. But still, even if there is no universal framework, I personally have been following one that helped me a lot in my journey. It's a twist on Daniel H. Pink's ideas about autonomy and mastery, based on his book Drive. It's by someone called, I think, John Ferguson Smart, and it's a combination of three things. Shared understanding, which is mainly making sure that everyone in your team has the same understanding of what you are trying to do, what the vision is, and getting that level of alignment, because sometimes teams cannot perform if they don't have the same definition of something. If you want to build a feature and two parts of your team have different understandings of that feature, that's not going to lead to a highly performant outcome. So shared understanding is key, and sometimes we miss this as leaders. We kind of delegate it to other people or other departments like product and project management and say, “Okay, well, you define the statement and let the team work on it.” But as an engineering leader, you need to make sure your team has that same alignment.
The second thing, which I actually talked about earlier, is trust. I think trust is really underrated when it comes to engineering leadership; we focus on the technical, this and that, but you need to build the value of trust in your team. Again, as I said earlier, talk to your team and tell them: you are the experts, I'm here to help you get the best out of your expertise. And they should also trust you as a leader, as someone who can really help them navigate things, so they don't worry about the external noise and can focus on what they need to deliver. And this leads to peak performance, which hopefully we're going to get to at some point. The third part of this is competence, and this is mainly about hard skills, which are, you know, very much about how efficient they can be in the stack and the technology they're working on. So it's more about deep knowledge. Now, having defined shared understanding, trust, and competence, you have overlaps between these things. Shared understanding and trust give flexibility. If you and your team members have the exact same understanding and you trust them, you can give your team the flexibility to work in their own way, the best way that works for them, drive a higher level of ownership, and use their own best judgement to get to delivery. And flexibility does a lot to improve performance; if you give people the flexibility they need, they can be very successful. The overlap between trust and competence provides excellence, meaning that if you trust them and they have the right skills, they will deliver the best outcome from a technology perspective. They will write the best code they can, because they trust their own frameworks and practices. Obviously, as a leader, you need to make sure it's all aligned across the teams and not based on individuals. And the last overlap, between shared understanding and competence, gives you focus. If they have the skills and a clear understanding, they can be very focused on delivering exactly the desired outcome you have for the team.
So this is the framework I use. It looks very vague from a distance, but when you start using it and really try to put together specific goals and expectations to get higher on all of these, you get to the center of all these overlaps, which is a highly autonomous team that masters its technology and the work it does. And again, they can deliver the highest impact possible. So that's one of the frameworks; obviously there are more, but that's the one that really resonated with me. Uh, I have the book, and I watched the TED talk from Daniel H. Pink, which is really great. I recommend it to everyone.
Kovid Batra: Perfect. I think shared understanding, competence, trust, flexibility: when you put it out there as a framework, I'm sure there are some specific processes, some specific things that you are doing to ensure everything falls into place. So can you give one example of what is most impactful in implementing each of these pieces? Like one thing that has a big impact that you are practicing.
Maher Hanafi: Yeah. Yeah, that's a good point. And again, that was one framework, but there is a very popular framework, PPT, right? People, process, and technology. These are key factors influencing engineering productivity, and you need to work on them. The one focused on people has two sub-parts: the individual side, and then the team. For the individual factors, you need to make sure you work on skills, experience, and growth and development. You need to make sure people have motivation, engagement, work-life balance, and all of that. And for the team, you need to focus on communication, collaboration, team dynamics. One good example: I worked at a company with very distributed teams, including contractors. There was in-house engineering and contractor engineering; the in-house engineers were distributed, the contractors were distributed. When I joined this company, people were naming the other parties by the name of the contractor, like the company: “Oh, this part of the software is owned by this vendor, and that part is owned by us, the in-house engineers,” based in the West, as an example. And I was so confused, because for me, an engineering team is one engineering team, even if it's distributed; these boundaries are just geo-based boundaries. They should not run deep into the engineering process and the work. So what I did is I made sure all these kinds of virtual boundaries were removed. The engineering team is aligned. They use the same framework. They use the same language. They even use, as much as possible, the same technology stacks, by aligning on design patterns, uh, building SDKs, building shared components. And that created better dynamics between these teams, which got them to deliver higher productivity and more impactful software. Because at the beginning, every team was delivering their own standards, their own patterns, even their own stacks. Some part was written in Python, another part in Go. They were just serving each other in a handoff process: “Oh, you want this? Here you go. You have this service built, it does this, and you have an API.” But as soon as I, as a manager, needed to put resources in different teams and manage that mobility of engineers, they were going into a new piece of software saying, “I'm not familiar with the stack. Or even if I'm familiar with the stack, I'm not familiar with the design patterns in this piece of software.” And for me, that was a challenge. So one big part we forget about in improving productivity is making sure, from a technology perspective, that the tools, the stack, and the design patterns are aligned as much as possible. You introduce new systems like CI/CD and observability to make sure things are moving along really quickly.
And the second part of this, as you said earlier, is the process: what methodology you have, what channels you use to communicate and work, you know, how efficient your workflow is as a team, and what practices you have introduced to your teams. And these practices should be aligned as much as possible across everyone, including distributed teams, to achieve higher performance and higher productivity in general. That was one of the biggest learnings I had when my teams started scaling up and also becoming more geographically distributed: ensuring that it's not just a handoff process between software engineers. It was more about alignment. And I think that solution can scale with the scale of the problem as well.
Kovid Batra: Makes sense. Perfect. Perfect. With that, I would like to know about some of the initiatives that you worked on in the last year, or are planning this year, to actually impact your engineering productivity. Is there something that was challenging last year for you, something you accomplished or are still working on?
Maher Hanafi: Yeah. So, one of the biggest areas I focus on is, again, the individual and team factors, the people side of things, right? Technology, we've talked about enough, in my opinion; process as well. But the people side of things can be tricky, and it takes a lot of time and experience to get to a place where, as an engineering leader, you can have an impact on the people. Some of the biggest initiatives I work on are about ensuring, on the individual side, that we have continuous learning and development of skills for everyone on the team, no matter what level they're at. Even if you are the most senior engineer, at the principal or architect level, there's still something for you to learn. There is a new area to discover in engineering and software and hands-on work, but also maybe in some other soft skills. So: providing resources, time, and, you know, availability to go and explore different areas, which can definitely be driven by their own passion. And that's another framework I want to bring up. Going back to the first question, you know, the story of my childhood, I was passionate about video games and I wanted to work in that space, because I think when people work on their passion, they can really break the limits of what's possible. So that's something I always bring to my work. I go to my team and I say, let's work together on aligning on where you want to be next and how we can achieve that. And I never bring my own pattern of growth and success; I don't go to a Director of Engineering and say, “If you want to be a VP of Engineering, this is what you need to do, based on what I did.” No, everyone is different. Every path and journey is different. What I do is work with them to get to their own definition of success. I ask, “What makes you successful? What makes you happy working on things that you're very excited about? What makes you more motivated and engaged?” So the other tool or framework I use is to really collaborate with individuals and teams to identify their own definition of success. And then I add to it some spices, I would say, from my own recipe and my own experience as a leader, to tweak it a little bit. But most of the time, what I focus on is, “Tell me exactly where you see yourself. What are you passionate about?” And this could be a complete 180; it could be someone doing software engineering on the backend who wants to go into AI. And I help them transition there, again, over time. And I think that's the key. I hope I was able to turn a lot of people around into higher productivity and performance because of this, because I never go to someone and say, “To be successful, you need to follow this path.” I always try to listen and get their own definition of success, work with them through it, and then say, “Okay, based on everything you said, based on your passion, your motivation, and where you want to be, and with my own tweaks, this is what we need to do. And I will do follow-ups with you and we'll work together to achieve that.” If you talk to anyone I worked with at previous companies or at Betterworks today, this is something that resonates really well with people. They recognize it as an efficient way to get better over time.
And when you achieve this on the individual level, obviously your teams in general will be impacted, and you'll create a sort of leadership and ownership, with people driving things. Everyone is pushing the boundaries of what you can do as an engineering team in general. It has been very efficient. And for me as an engineering leader, that's where I get my rewarding experience. This is where I feel I've had an impact. And this is where, sometimes, I was able to turn around completely low performance into high performance.
Kovid Batra: But I think in this case, as much as what you're saying resonates with me, and in fact it could be true for any department, any leader enabling team members in the direction they are passionate about would be something that energizes the whole team, I still feel that a lot of complication gets added, because at the end of the day, we are humans. We have changing desires, changing passions, and then a lot of things get complex. So while you implement this framework in an engineering team, what kind of challenges have you seen? Is there sometimes a shortage of a particular skill set in the team, because a lot of people are more passionate about doing the backend and you have fewer frontend engineers, or maybe vice versa? There could be a lot of such complications. So, any challenges that you've seen while implementing these things?
Maher Hanafi: Absolutely. I mean, you said there are some complications and challenges, but there are a lot. There are a lot of complications and challenges when you work as an engineering leader. This is, again, as I said earlier, what some people call the most difficult position to be in, because you're managing different things. We talked about people, process, and technology. We talked about hard and soft skills. But on this side, when you're trying to implement something like this, some of the examples I can bring up are around the initiatives you have running, maybe some of the greatest initiatives happening in the engineering team. At Betterworks, as an example, we have been building generative AI enhanced features and bringing in these great technologies; we have been refactoring and revamping some of our technologies to build newer, better systems. But you still have the old legacy systems. You have things running in production that you need to maintain. You have incidents to manage and stuff like that. And sometimes you have people and teams watching other teams doing exciting new stuff while they are still doing the old stuff. As an engineering leader, your job is to make sure there's a good dynamic, a good culture of, again, trust and shared understanding that these things will happen for everyone; it just takes a little more time, process, and prioritization to get there. So it's part of what I talked about earlier with the own definition of success: to really know what everyone is eager to be doing as an individual. And when you talk to the team in general, you need to listen to their feedback and understand their point of view. Sometimes a team will say, “Okay, well, we have been coding in this part of the software for three or four years now, and nothing is moving much,” versus other teams where every quarter they have a new feature, they have great stuff, it's being communicated and published, and it gets a lot of credit. So you need to make sure you have the right process in your team to rotate the projects, to rotate the excitement, to get people to own, lead, and experiment. Some of the initiatives we run are, you know, hackathons: give people time to do something completely different from what they do on a daily basis. That triggers the creativity of everyone, the passion again, and you can see where everyone's mind is at and what they want to do. So again, it's a little bit tricky. It's not like, oh, everyone will be doing this, and then six months later you'll be doing something more fun. But that's where your presence as an engineering leader is so important. Your vision is so important. You need to have your teams behind you in terms of vision, and they need to trust that it's going to happen through that kind of rotation and mobility, and everyone will be impacted.
So, absolutely, it's one of these challenges you see: people trying to get onto more exciting projects while you still have support work. One other thing you need to do as a leader is avoid single points of failure; you cannot afford to have one person or one team be the only deep expert in one area. That creates an environment where you are afraid of two things: that team or individual leaving and creating a gap in knowledge, or those people being stuck with that knowledge and unable to do anything else. Even if they are passionate about it, or they are bored of it, you know, they have been building this service for too long and want to try something else, but you cannot let them go because they're the only experts. So my job is to ensure that knowledge transfer is happening, that people are getting into new systems, to delegate a little bit, and to offer everyone the option to get out and do something else they're excited about. It's a dance, right? It's a push and pull. You need to understand how things work and be involved a little deeper to be more effective as an engineering leader.
Kovid Batra: I think the core of it lies in that you have to be a good listener, not like exactly ‘listening’ listening, but being more empathetic and understanding of what everyone needs and the situation needs and try to accommodate every time because it’s going to be dynamic. It’s going to change. You just have to keep adjusting, keep tweaking, calibrating according to that. So it totally makes sense.
Maher Hanafi: And the funny part is, uh, the funny part is a lot of this I learned while playing video games. That’s gonna connect to the first question you asked. You know, when you play a video game, you’re a guild master of like 200–300 people. And you know, you go and do these raids and experiences and then you have loot to share. And you need to make decisions and everyone wants something. Yeah, you kind of build up some experience early on about people dynamics, about making sure how you make people happy and how you navigate conflicts in opinions. And sometimes when you have very senior people also, you have a clash of opinions. So how would you navigate that? How would you make sure they can work in an environment where everyone has a strong opinion about things? So yeah, a lot of this I learned early on in my journey before even I got into engineering, while playing video games and dealing with people, which is really great.
Kovid Batra: Cool. I think that's it on the people part, and that was really, really insightful. I think instead of books, we should have a list of games that one should play early on in life to become a manager.
Maher Hanafi: Yeah.
Kovid Batra: So moving on from people to technology, which you mentioned, right? What happened in 2024, or what are you planning for 2025, in technology to make your teams even more efficient?
Maher Hanafi: Yeah, focusing on technology, I would say there are three big pillars. One of them is really addressing poor designs, poor patterns in your software. We underestimate this, we underthink it, as a problem impacting productivity and performance. When engineers are dealing with older legacy software that has poor designs, it takes time. It introduces more bugs. No matter how skilled they are, it's challenging. So as an engineering leader, you need to always make sure there's time to recover, time to pay back technical debt, time to go back and redesign, refactor, and reinvent your software stack a little bit, to get people to enjoy newer, more modern architecture that will lead to high performance and productivity. Things can happen fast when you have the right patterns, ones that are more accurate and more modern today. This is something I do on a frequent basis at Betterworks, and before; one of my key areas of focus as an engineering leader is to help teams pay back technical debt and build better software so they can be more productive. The second thing, I would say, is investing in tooling and platforms. We often forget about platform engineering as a pillar of software engineering in general, but being able to build the right continuous integration and continuous delivery system, CI/CD, and to have proper observability in place, all the logging, monitoring, and alerts you need to quickly debug and figure things out, helps a lot, and it creates a good level of confidence in the team about the quality of the code. And a lot has been happening most recently, which brings me to the third component that is impacting performance and productivity from a technical perspective: generative AI. We have seen, over the last two years now, the development of these copilots, the coding assistants. And it's true, it's not fully there. It's not fully efficient so far, but it's very effective for delegating a certain amount to AI: as an example, writing tests for functions you have, helping you optimize some of the codebase, even migrating from one stack to another. It's becoming a powerful tool, capable of learning from your stack and your software over time, adapting, and even solving some real problems at some point. As a very good example, at Betterworks today we have a top-down approach to adopting generative AI. Everyone at the company is encouraged and asked to leverage AI in their own areas of expertise, and for engineering in particular, we ask everyone to use these copilots and coding assistants, to leverage the new ideas coming out there, to experiment, and to bring use cases and say, “Okay, I have been using this to achieve this thing.” I think there are very key areas, again: PR, pull request work and improvement, writing tests, and even infrastructure; in the future, it seems like infrastructure could be a big area of impact, where AI helps optimize infrastructure rather than building everything from scratch on behalf of people. I don't think AI will replace software engineers, honestly, but it will make them better software engineers, capable of achieving way more, being more productive and more performant. And I think that's the goal.
Kovid Batra: Makes sense. I think whether it's redesigning and taking up new patterns, getting rid of the old ones, or, let's say, rewriting code pieces, generative AI is actually fitting in as a fundamental piece everywhere, right? And there could be a lot of use cases. There are a lot of startups, a lot of tools out there. But from your perspective, while you were researching which areas should now be higher priority from an engineering standpoint and where AI could really be leveraged, I think you would have first checked: this tool has evolved in this area, and this could be the right fit to use right now. Like you mentioned copilots, right? They can write a better level of code and can actually be integrated; we can try new IDEs to ensure that we have better code, faster code in place. Are there any specific tools, I mean, if you're comfortable sharing names, that could work better for other teams as well, other engineering leaders, other engineering teams out there? Uh, any examples or anything that you found very interesting?
Maher Hanafi: I mean, the number one tool is obviously GitHub Copilot. A lot of teams today are on GitHub anyway, so it's very well embedded into the system, and there are, you know, plugins for all the IDEs out there. So I think it's the first one that comes to mind. Also, they have now released a free tier that will help a lot of people get into it. So I think that's the no-brainer. But, uh, for me, I will go a little off on a tangent here and say that one of the best ways to experiment with gen AI as a software engineer could be to run it locally on your machine, which is something we can do today. Personally, even as an engineering leader who is not very hands-on today, you know, I found that something like a combination of Ollama, which helps you run LLMs locally, and the open source models out there, like the Llama 3 models or the Mistral models, can give you a local assistant that does a lot of things, including code assistance, writing code, refactoring, and all of that. And if you add to that an IDE like Cursor, now you can use your IDE connected to your own LLM. If you have the experience to maybe fine-tune it over time, and leverage Ollama to also do some RAG and bring in more of your code and documentation as good examples, for instance of how you write tests, it can be a very strong tool for more experienced engineers. And I think one of the biggest areas where gen AI will have an impact is testing. With the testing pyramid, the ambition has always been to automate as much as possible, and I think with gen AI there will be more use cases to do just that. If you leverage generative AI to write tests, you will have a bigger, better test suite to ensure that your code quality meets a certain level, and to test for edge cases you didn't think about when you were writing the code. So I think testing is one area. The other area would be research in general, honestly, and learning as a software engineer. If you have a copilot or just any chat-based LLM, like ChatGPT or Gemini or Claude, you can go and really, you know, learn about things faster. Yes, it does a lot of things for you; as an example, you can copy-paste a function and say, “Hey, can you optimize this?” But the key, if you're leveraging generative AI, is learning. It's not to delegate. Some people might think, “Oh, I don't have to worry about this. I'm going to write random code, and then the gen AI will optimize it for me.” The key is for you to learn from the optimization that was offered to you. And we should not forget, you know, LLMs are not perfect. You can think of one as another software engineer, maybe more experienced, for sure, but an engineer who can make mistakes. So it's your part to be really curious and critical about the output you get from gen AI, to make sure you're leveraging the tool to learn, to grow, and to have a bigger impact and be more productive.
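To make the local setup Maher describes concrete, here is a minimal sketch that asks an Ollama-served Llama 3 model to draft unit tests for a function. It assumes Ollama is installed and running on its default port with the model already pulled (`ollama pull llama3`); the function under test and the prompt are made-up examples, not anything from Betterworks:

```python
# Sketch: ask a locally running Ollama model to draft unit tests.
# Assumes Ollama is serving on its default port (11434) and that
# a Llama 3 model has been pulled; the function below is an example.
import requests

FUNCTION_UNDER_TEST = '''
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)
'''

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Write pytest unit tests, including edge cases, "
                  "for this function:\n" + FUNCTION_UNDER_TEST,
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
# The generated text is in the "response" field; review before committing.
print(response.json()["response"])
```

As Maher stresses, whatever comes back is a draft to review and learn from, not something to commit blindly.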
Kovid Batra: Yeah, I think these are some of the hard truths about AI code assistants. But lately I've been following a few people on LinkedIn, and I've seen different opinions on how much Copilot has actually helped improve code-writing speed or, in general, quality. There is a mixed opinion. And in such situations, I think any engineering org implementing such technology would want clarity on whether it is working out for them or not, and it's completely possible that it works out for some companies and doesn't for others. In your case, do you measure specific things when you, let's say, implement a technology or a new process to improve productivity? Is there something you specifically look at at the beginning and the end to know whether it is working out or not?
Maher Hanafi: Yeah, I mean, some things are measurable and some things are not, honestly, and this is known. The challenge is to measure the immeasurable: to find out where this technology is having an impact without tangible metrics to measure it. You need to use proxies for that. You need to collect feedback. You need to get some sort of assessment of how you feel about your own productivity as an engineer using these tools. So we do that every once in a while. Again, we have a very specific internal strategy and vision that is focused on leveraging generative AI in every area of the business, and one of them is software engineering. And when we started, one of the very good use cases, again, was QA and writing tests. We have been measuring how much time it takes, I would say, a software development engineer in test to write the suite of tests for a new piece of code. We tried to compare both ways: the old way, which is mostly manual, where you look at the code and write all the tests that are needed or define the test suite for it, and the other way, where you share the concept, the requirements, and the acceptance criteria, and then you expect the tool to generate the tests for you. And we have noticed that the time it takes an engineer to get to the desired outcome drops significantly. I don't have exact percentages or numbers, but it's like it takes 20 percent of the time versus, you know, a hundred percent to achieve the whole test suite. So for this area of bringing in generative AI, it's good. But again, we should not forget that these tests have to be reviewed. The human should be in the loop. I don't believe in a lot of things being fully automated, where you don't have to worry and you don't have to look back. But on the other end, I really believe that gen AI will become table stakes in software engineering. The same way we had these great IDEs developing over time, the same way we had autocomplete for code, the same way we had processes and tools to improve our code quality, the same way we had patterns, I think gen AI will become that thing that we all use and we all have; it's common knowledge, and it's going to be a shift in the way we work as software engineers. You know, we used to use a lot of Stack Overflow and go and search and do this and do that. All of that will now be replaced in your own environment, in the flow of work, and you will have all the answers you need. I don't think it will take over software engineering 100 percent, so that you don't have to write anything. You hear this on LinkedIn, as you said: "Oh, this whole thing was developed by AI." As of today, that is naive thinking about software engineering. You can build a proof of concept, you can build some basic, single-feature aspects, but when you get to building enterprise distributed systems, this doesn't scale to that level. But the technology is evolving, and gen AI is doing its best to get there, and we're here for it. We're here to support it, to learn it, and to use it. But again, we all come back to the same saying: a software engineer who leverages generative AI will be more productive and efficient than one who doesn't.
Kovid Batra: Makes sense. All right. I think with that, we come to the end of this episode. I could continue talking to you; it's super exciting and insightful to hear all the things that you have been doing. I think you are a really accomplished engineering leader. It is very evident from what you're saying and what you're doing at your organization. It looks like a very overwhelming position to be in. So, any piece of advice for all the other engineering leaders who are listening to you? How do you keep your sanity in place while managing this whole chaos?
Maher Hanafi: I think it's a matter of, again, going in circles here, but it's passion, right? I think you need to have the level of passion to be able to navigate this role. The passion is what keeps you pushing the boundaries, making things that are complex and hard and challenging look easy and fun and enjoyable, right? Some parts of my work are hard and tough, but I honestly enjoy them and I go through them with a positive attitude. It's like, "This is a tough conversation I need to have. This is it. I'm going to bring my principal engineers. We're going to talk about something, and I know everyone will have an opinion, but you know what? We need to leave this meeting with a decision." You need to have the passion to be able to navigate these complexities: being someone who is very driven about solving problems, navigating people dynamics, passionate about technology, obviously, and having the mindset of getting to the finish line. You have been asking about a lot of frameworks, and another very popular one is Getting Things Done, GTD. As an engineering leader, a VP of Engineering, you need to get things done. That's your job. So you need to be passionate about that. Get to the finish line. So it's a lot of things here and there. I don't recommend engineering leadership, in general, for people who are passionate about purely technical things. For people who are very passionate about coding, it's going to be very hard to detach from the coding and technology aspects and get into navigating these things. When you get to this level, you focus on different things than the perfect code you could ever write; it's more about the perfect outcome you can get from the resources you have, and having an impact. I use this word a lot. I think engineering leaders are all about impact, and all about getting the best outcomes from the resources they have while minimizing those resources, obviously time and money in this case. So it's not easy. But if you have the passion, you can make things happen, and you can turn these complex things into fun challenges to solve, and really get that rewarding experience at the end where you go, "You know what? I came here, there was a big challenge, there was a big problem, I helped the team solve it, let's move on to the next big thing." And I think that's my advice to people who are looking to become engineering leaders.
Kovid Batra: Perfect. On point. All right, Maher. Thank you. Thank you so much for your time. And we would love to have you again on an episode for sure, sometime, and talk more in depth about what you're doing and how you're leading your teams.
Maher Hanafi: Thank you again. Thank you so much. I really appreciate it. Thank you for having me on your podcast.
‘Integrating Acquired Tech Teams’ with David Archer, Director of Software Engineering, Imagine Learning
December 13, 2024
•
24 min read
In this episode of the groCTO Podcast, host Kovid Batra interviews David Archer, the Director of Software Engineering at Imagine Learning, with over 12 years of experience in engineering and leadership, including a tenure at Amazon.
The discussion centers on successfully integrating acquired teams, a critical issue following company mergers and acquisitions. David shares his approach to onboarding new team members, implementing a buddy system, and fostering a growth mindset and no-blame culture to mitigate high attrition rates. He further discusses the importance of having clear documentation, pairing sessions, and promoting collaboration across international teams. Additionally, David touches on his personal interests, emphasizing the impact of his time in Japan and his love for Formula 1 and rugby. The episode provides insights into the challenges and strategies for creating stable and cohesive engineering teams in a dynamic corporate landscape.
Timestamps
00:00 - Introduction
00:57 - Welcome to the Podcast
01:06 - Guest Introduction: David's Background
03:25 - Transitioning from Amazon to Imagine Learning
10:49 - Integrating Acquired Teams: Challenges and Strategies
Kovid Batra: Hi, everyone. This is Kovid, back with another episode of groCTO podcast. And today with us, we have a very special guest. He has 12 plus years of engineering and leadership experience. He has been an ex-Software Development Manager for Amazon and currently working as Director of Engineering for Imagine Learning. Welcome to the show, David. Great to have you here.
David Archer: Thanks very much. Thanks for the introduction.
Kovid Batra: All right. Um, so there is a ritual for whoever comes to our podcast, before we get down to the main section. So for the audience, the main section, uh, today's topic of discussion, is how to integrate acquired teams successfully, uh, which has been a burning topic in the last four years because there have been a lot of acquisitions and a lot of mergers. But before we move there, uh, David, we would love to know something about you, uh, your hobbies, something from your childhood, your teenage years, or your personal life which LinkedIn doesn't tell and you would like to share with us.
David Archer: Sure. Um, so in terms of my personal life, the things that I've enjoyed the most, um, I always used to love video games as a child. And one of the things that I am very proud of is that I went to live in Japan for university, and that was, um, a genuinely life-changing experience. Um, and I absolutely loved my time there. I think it's had a bit of an effect on my life, uh, since then. Beyond that, um, I'm very much a fan of Formula 1 and rugby. And so, I've been very happy in the post-COVID-19 years, um, spending a lot of time over at Silverstone and Murrayfield to go and see some of those things. So, um, that's something that most people don't know about me, but I actually quite like my sports of all things. So, yeah.
Kovid Batra: Great. Thanks for that little, uh, cute intro. And with that, I think, uh, let's get going with the main section. Uh, so integrating an acquired team successfully has been a challenge for a lot of, uh, engineering leaders and engineering managers I have talked to. And, uh, you come with immense experience: you have been an engineering manager at OVO and then at, uh, Amazon. I mean, you have been leading teams at large organizations and then moving into Imagine Learning. So before we touch on the topic of how you absorbed such teams successfully, I would love to know, what does this transition look like? Amazon is a giant, right? And then you're moving to Imagine Learning. Of course, that is also a very big company, but there is definitely a shift there. So what made you move? How was the transition? Maybe some goods or bads, if you can share without getting your job impacted.
David Archer: Yeah, no problem. Um, so you're correct that I've got, you know, over 12 years of experience in the industry. Um, but before that, I was a teacher. So for me, education is extremely important, and I still think it's one of the most rewarding things that you can be a part of as a human: helping to bring along the next generation in terms of their education, and giving them better, uh, capabilities and potential for the future. Um, and so when somebody approached me with the position here at Imagine Learning, um, I had to jump at the chance. It sounded extremely exciting and, um, I was correct. It was extremely exciting. There's definitely been a lot of movement, and I'm sure we'll touch on that in a little while, but there is definitely quite a major cultural shift. Um, and then obviously there is the fact that Amazon is a US-centric company with a UK arm, which I was a part of. Um, Imagine Learning is very similar: it's a US-centric company with a US-centric educational stance. Um, and me being part of the UK arm of the company means that there are some cultural challenges that Amazon had already worked through that Imagine Learning still needed to work through. Um, and so part of that challenge is, you know, sort of educating up the chain, if you like, um, on the cultural differences between the two. So, um, definitely some big changes. It's less easy to move sideways than in companies like Amazon, um, where you can transition from one team to another. Um, here, it's a little more compact; there are only one or two teams that you could potentially work for. Um, but that's not to say that the opportunities aren't there. And again, we'll touch on that in a little bit, I'm sure.
Kovid Batra: Perfect. Perfect. All right. So one question I think, uh, all the audience would love to know: in a company like Amazon, what is it like to get there? Because it takes almost eight to ten years, even if you're really good at something, to spend that time at Amazon and then move into the profile of a Software Development Manager, right? So how was that experience for you? And what do you think it requires, uh, of an Engineering Manager at Amazon to be there?
David Archer: That's a difficult question to answer because it changes with the person. Um, I jumped straight in as a Software Development Manager. In terms of what they're looking for, anybody who has looked into the company will be aware of their leadership principles, and being able to display those leadership principles through previous experiences is the thing that will get you in. So if you naturally have that capability to always put the customer first, to ensure that you are data-driven, to ensure that you have what they call a bias for action, which comes down to moving quickly, um, and that you earn trust in a meaningful way: those are some of the things that I think most managers would be looking for. And when interviewing, of course, there is a technical aspect to this. You need to be able to talk the talk, and, um, if you are not able to reel off the information in an intrinsic manner, as in you've internalized how the technology works, that will get picked up. Of course it will. You can't prepare for it like you can an exam. There is an element of this that requires experience. That being said, there are definitely some areas that people can prepare for, um, primarily in ensuring that you get the experiences that meet the leadership principles that will push you into that position. In order to succeed, it requires a lot of real work. Um, I'm not going to pretend that it's easy to work at a company like Amazon. They are well known for, um, ensuring that the staff they have are the best and that they're working with the best. And as a manager, you have to ensure that the team you're building up can fulfill what you require of them. If you're not able to do that, if you're taking people on because they seem like they might be a good fit for now, you will in the medium to long term find that that is detrimental to you as a manager, as well as to your team and its capabilities. And you then need to resolve that problem by making some difficult decisions and having some difficult conversations with individuals, because at the end of the day, you as a manager are measured on what your team outputs, not what you as an individual output. And that's a real shift in thinking from being even a Technical Lead to being an Engineering Manager.
Kovid Batra: That's for sure. One thing, uh, that you feel, uh, stands out in you, that has put you in this position where you were an SDM at Amazon and have now transitioned to a leadership position, Director of Engineering at Imagine Learning: what are the one or two traits of yourself that you have reflected on that have made you move here and grow in your career?
David Archer: I think you have to be very flexible in your thinking. You have to have a manner of thinking that allows for a much wider scope, and you have to be able to let go of an individual product. If your thinking is really focused on one team and one product, and it stays within that single purview of what you're concentrating on at that moment in time, then it really limits your ability to look a little bit further beyond that scope and start to move into strategic thinking. Where you start moving from a Software Development Manager into a more senior position is with that strategic thinking mindset, where you're thinking beyond the three months and beyond the single product, and you're starting to move into half-yearly, full-yearly thinking at a minimum. And you start thinking about how you can bring your team along for a strategic vision as opposed to a tactical goal.
Kovid Batra: Got it. Perfect. All right. So with that, moving to Imagine Learning, uh, and your experience here in the last one and a half years, a little more than that, actually: uh, you have gone through a phase of self-learning and then getting teams onboarded that came from the acquired product companies. When you started sharing that experience with me on our last call, I found it very interesting. So I think we can start off from that point here. Uh, how did this journey of rearranging teams and bringing different teams together start happening for you? What were the challenges? What was the roadmap in your head and for your team? How would you align them? How would you make the right impact in the fastest timeframe possible? So, how did things shape up around that?
David Archer: Sure. Initially, um, the biggest challenge I had was that there was a very significant knowledge drain before I had started. Um, so in the year before I came on board, which was the first year post-acquisition, the attrition rate for the digital part of the company was somewhere in the region of 50%. Um, so people were leaving at a very fast pace. Um, I had to find a way to stem that quickly because we couldn't continue to have such a large knowledge drain. Now, the way that I did that was, I believe in the engineers that I have in front of me. They wouldn't be in the position that they're in if they didn't have a significant amount of capability. But I also wanted to ensure that they acquired a growth mindset. Um, and I think up until that point, they were more interested in just getting work done as opposed to wanting to grow into a more senior position, or a position with more responsibility and a bigger challenge. And so I made sure that I mixed the teams together. We had, you know, front-enders and back-enders in separate teams initially. And so I joined them together to make sure that they held responsibility for a piece of work from beginning to end, um, which gave them autonomy over the work they were doing. I ensured that I earned trust with that team as well. And most importantly, I put in a 'no-blame culture', um, because my expectation is that everybody is always acting with the best of intentions, and that usually, when something is going wrong, there is a mechanism missing that would have resolved the issue.
Kovid Batra: But, uh, sorry to interrupt you here. Um, do you think, uh, the reasons for attrition were aligned with these factors in the team, where people didn't have autonomy, uh, and there was a blame game happening? Were these the reasons, or were the reasons different? I mean, if you're comfortable sharing, cool, but otherwise we can just move on.
David Archer: No, yeah, I think that in reality there was an element of that. There was, um, a somewhat, not necessarily toxic culture, but definitely a culture of, um, moving fast just to get things done as opposed to trying to work in the correct manner. And that meant that people then did feel blamed. They felt pressured. They felt that they had no autonomy; every decision was made for them. And so, uh, with more senior staff especially, you know, looking at an M&A situation where that didn't change, they didn't see a future in their career there, because they didn't know where they could possibly move forward to, as they had no decision-making power or autonomy themselves.
Kovid Batra: Makes sense. Got it. Yeah, please go on. Yeah.
David Archer: Sorry, yes. So, um, we put these things in place, giving everybody a growth mindset mentality and ensuring that, um, you know, there was a no-blame culture. There were some changes in personnel as well. Um, I identified a couple of individuals who were detrimental to the team, and those sorts of things are quite difficult, you know, moving people on who, um, are trying their best, and I don't deny that they were, but whose way of working is detrimental to a team. But with those changes, um, we then moved from 50% regretted attrition to 5% regretted attrition over the course of '23 and '24, which is a very, very significant change in, um, attrition. And, uh, at that point in time, we were also able to start implementing new methods of bringing in talent from below. So we started partnering with Glasgow University to bring in an internship program. We also took on some of their graduates to ensure that we had, um, for want of a better phrase, new blood in the team bringing in new ideas. Um, and then we prepared people through the training programs that they would need.
Kovid Batra: I'm curious about one thing. Uh, saying that you stopped this culture of blame games is, uh, definitely good to hear, but what exactly did you do in practice, on a daily, weekly, or per-sprint level, that impacted and changed this mindset? What were the things that you inculcated in the culture?
David Archer: So initially, um, and some people think that this might be a trite point, but, um, I actually put the policy out in front of people. I wrote it down, put it in front of people, and gave them a document review session to say, "This is a no-blame culture, and this is what I mean by that.", so that people understood what my meaning was. Following that, um, I then had a conversation with some people in other parts of the company to say, "Please, reroute your conversations through me. Don't go directly to engineers. I want to be that point of contact going forward, so that I can ensure that communication is felt in the right manner and the right capacity." And then, um, the other thing is that we started bringing in things like, um, postmortems or incident response management sessions, where I was very forceful in ensuring that no names were put into these documents, because until that point, people did put other people's names in, um, and wanted to make sure it was noted that it was so-and-so's fault. Um, and I had to step on that very, very strongly. I was like, this could have been anyone's fault. It's just that they happened to be at that line of code at that point in time, um, and made that decision, which they did with good intentions. Um, so I had to really step in with the team in every single postmortem, every major decision in that area, and every sprint review where we went through what the team had completed. We made sure we did pick out individuals for particularly good work that they did, but then stepped very strongly on any hint of trying to blame someone for a problem that had happened, and made it very clear to them again that this could have happened to anyone, and we need to work together to ensure it can't happen to anyone ever again.
Kovid Batra: Makes sense. So when this impact started happening, uh, did you see, uh, that the developers who were already part of Imagine Learning were getting retained, or that the developers who joined after the acquisition from the other company were also getting retained? How did it impact the two groups, and how did they gel later on?
David Archer: Both, actually. Yeah. So for the staff who were already here, um, effectively the drain stopped, and from that point forward there weren't people leaving anymore who had, you know, some level of tenure longer than six months. And new staff who were joining were getting integrated with these new teams. I implemented a buddy system so that every new engineer who came in would have somebody they could work alongside for the first six months, so that they had somebody to contact for the whole time they were, um, getting used to the company. And, uh, I frequently say that as you join a company like this, you are drinking from a fire hose for the first couple of months. There's a lot of information that comes your way. Um, and so having a buddy there helped. Um, I added software engineering managers to the team to ensure that there were people who specifically looked after the team, continued to ensure there was a growth mindset, and continued to implement the plans that I had, um, to make these teams more stable. Um, and it took a while to find the right people, I will say that. Um, there was also a challenge with integrating the teams from our vendors in other countries. So we worked with some teams in India and some teams in Ukraine. Um, and with integrating people from those teams, there was some level of separation, and I think one of the major things we started doing then was getting people to meet in a more personal manner, bringing them across to our team to actually meet each other face-to-face, um, and realize that these are very talented individuals, just like we are. They're no different just because they, you know, live in a time zone five and a half hours away; it doesn't mean that they're any less capable. Um, they just have a different way of working, and we can absolutely work with these very talented people. Bringing them into the teams via a buddy, ensuring that they have someone to work with, making sure that the no-blame culture continued even with our contractors; it took a while, don't get me wrong, and there were definitely some missteps, um, but it was vital to ensuring that there was team cohesion all the way across.
Kovid Batra: Definitely. And, uh, I've also heard this, uh, when talking to other engineering leaders: when teams come in, it is usually hard to find space for them to do impactful work, right? So you need to give those people space in the team, which you did. But at the same time, the kind of work they pick up also becomes a challenge sometimes. So was that the case in your scenario as well? And did you find a way out there?
David Archer: It was the case here. Um, there definitely was a case of the work being predefined, if you like, to some extent by the most senior personnel. And so one of the things that we ensured we did, uh, and I worked very closely with our product team to make this happen, is that we brought the engineers in a lot sooner. We ensured that this wasn't just the most senior member of the team, but instead that we worked with different personnel, and de-siloing that information from one person to another was extremely important, because there were silos of information within our teams. And I made it very clear that if there's an incident and somebody needs some help, and there's only one person on the team, um, who is capable of actually working on it, then, um, we're going to find ourselves in a real problem. Um, and I think people understood that intrinsically because of the knowledge loss that had happened before I started, or just as I was coming on board, um, because they knew that there were people who, you know, knew this part of the code base, or this database, or how this part of the infrastructure worked, and suddenly we didn't have anybody with that knowledge. So we now needed to reacquire it. And so, I ensured that, you know, and this comes from an Amazon background, so anybody who has worked at that company will know what I'm talking about here, documentation is key. Ensuring document reviews was extremely important. Um, those are the kinds of things that let us pass on information from one person to another, from one team to another, in the most scalable fashion. It does slow you down in delivery, but it speeds you up in the longer term, because it enables more people to do a wider range of work without needing to rely on that one person who knows everything.
Kovid Batra: Sure, definitely. I think documentation has always been at the top of the priority list for whomsoever I'm talking to now, because once there are downturns and you face such problems, you realize the importance of it. In the early phase, you are just running and building, not focusing on that piece, but later on, it becomes a matter of priority for sure. And I can totally relate to it. Um, so talking about these people who have joined in and whom you're trying to integrate: uh, they definitely need some level of cultural alignment, as they are coming from a different background into a new company. Along with that, there might be requirements, as you mentioned, for skill development, right? So were there any skill development plans that worked out here that you implemented? Anything on that end you want to share?
David Archer: Yeah, absolutely. So joining together our teams of frontend and backend developers, um, is obviously going to cause some issues. Some developers are not going to be quite as excited about working in a different area. Um, but knowing that the siloing of information was there and that we had to resolve it as an issue, and ensuring that people being brought on via, you know, vendors from other countries were included, um, what we started to do was put in, um, pairing sessions with all of our developers. Up until that point, they had mostly worked on their own, and, um, I find that working one-to-one with another individual tends to be the fastest way to learn how things work, in the same way that, um, a child learns their language from their parents far faster than they ever would from watching TV. Um, although sometimes I do wonder about that myself, with my daughter singing Baby Shark to me 16 times, and I don't think I've ever sung that. So let's see where that goes. Um, but having that one-to-one relationship with a person means that we're able to ask questions and gain that knowledge very quickly. Having the documentation backing that up means that you've got a frame of reference to keep going back to as well. And then if you keep doing that quite frequently and add in some of the more abstract knowledge-sharing sessions, I'm thinking of 'lunch and learn' type sessions or lightning talks, as well as having a knowledge base that people can learn from, with obvious examples being things like Pluralsight or O'Reilly's library. Um, but we also have our own internal documentation, where we give people tutorials and walk people through things. We added in a code review session, and we added a 'code of the sprint' session as well to our, um, sprint reviews, which went out to the whole team and to the rest of the company, where we showed that we're optimizing where we can. And all these things didn't just enable the team to become full stack, and I will say all of our developers now are full stack; I'd be very surprised if there are any developers I'm working with who are not able to make the switch. It also built trust with the rest of the company. And that's the thing with being a company that has been acquired: we need to, um, very quickly and very deliberately shout about how well we're doing as a company, so that they can look at what we're doing and use us, as has frequently been the case recently, actually, as a best practice, a company that's doing things well and doing things meaningfully and has that growth mindset. And we then start to have conversations with the wider company, which enables things like tiger-team-type sessions that let us widen our scope and have more influence across the company. It's kind of a spiral at that point, because you start to increase your scope, and with that, your team can grow, because they know that they can trust us to do things effectively. And it also gives, going back to what I said at the beginning, people more autonomy and the decision-making capabilities they need to get further into a company.
Kovid Batra: And in such situations, the opinions they're bringing in are more customer-centric. They have more understanding of the business. All those things ultimately add up to a lot of intrinsic incentivization, I would say: if I'm being heard in the team, being a developer, I feel good about it, right? And all of this is connected there. So it totally makes sense. And I think that's a very good hack for bringing new people and new teams onto the same journey you are already on. So, great. I think, uh, with that, we have come to the end of this discussion, and in the interest of time, we'll have to pause here. Uh, I really loved talking to you and would love to hear more such experiences from you, but that will be, maybe, in the next episodes. So, David, once again, thanks a lot for your time. Thanks for sharing your experiences. It was great to have you here.
David Archer: Thank you so much and I really appreciate, uh, the time that you’ve taken with me. I hope that this proves useful to at least one person and they can gain something from this. So, thank you.
Kovid Batra: I’m sure it will be. Thank you. Thank you so much. Have a great day ahead.
David Archer: Thank you. Cheers now!
Webinar: 'Unlocking Engineering Productivity' with Paulo André & Denis Čahuk
December 6, 2024
•
63 min read
In the first session of the ‘Unlocking Engineering Productivity’ webinar series, host Kovid Batra from Typo welcomes two prominent engineering leaders: Paulo André, CTO of Resquared, and Denis Čahuk, a technical coach and TDD/DDD expert.
They discuss the importance of engineering productivity and share insights about their journeys. Paulo emphasizes the significance of collaboration in software development and the pitfalls of focusing solely on individual productivity metrics. Denis highlights the value of consistent improvement and reliability over individual velocity. Both guests underline the importance of creating clarity and making work visible within teams to enhance productivity. Audience questions address topics such as balancing technical debt with innovation and integrating new tools without disrupting workflows. Overall, the session offers practical strategies for engineering leaders to build effective and cohesive teams.
Timestamps
00:00 — Introduction
00:52 — Meet the Experts: Paulo and Denis
03:13 — Childhood Stories that Shaped Careers
05:37 — Defining Engineering Productivity
11:18 — Why Focus on Engineering Productivity Now?
15:47 — When and How to Measure Productivity
22:00 — Team vs. Individual Productivity
35:35 — Real-World Examples and Insights
37:17 — Addressing Common Engineering Challenges
38:34 — The Importance of Team Reliability
40:32 — Planning and Execution Strategies
45:31 — Creating Clarity and Competence
53:24 — Audience Q&A: Balancing Technical Debt and Innovation
57:02 — Audience Q&A: Overlooked Metrics and Security
01:02:49 — Audience Q&A: Integrating New Tools and Frameworks
Kovid Batra: All right. Time to get started. Uh, welcome everyone. Welcome to the first session of our all-new webinar series, Unlocking Engineering Productivity. After the success of our previous webinar, The Hows and Whats of DORA, we are even more excited to bring you this webinar series, which is designed to help engineering leaders become better, learn more, and build successful, impactful dev teams. And today with us, uh, we have two passionate engineering leaders. Uh, I have known them for a while now. They have been super helpful, all the time up for helping us out. So let me start with the introductions. Uh, Paulo, Paulo André, uh, CTO of Resquared, a YC-backed startup. He is an ex-engineering leadership coach for Hotjar, and he is the author of the Hagakure newsletter. So welcome, welcome to the Unlocking Engineering Productivity webinar, Paulo.
Paulo André: Thanks for having me. It’s a real pleasure to be here.
Kovid Batra: Great. Uh, then we have Denis. Uh, he’s coming to this for the second time. And, uh, Denis is a tech leadership coach, TDD expert, and author of Crafting Tech Teams. And he’s also a guitar player, a professional gamer. Uh, hi, hi, Denis. Welcome, welcome to the episode.
Denis Čahuk: Hi, thanks for inviting me again. Always a pleasure. And Hey, Paulo, it’s our first time meeting on stage.
Paulo André: Good to meet you, Denis.
Kovid Batra: I think I missed mentioning one thing about Paulo. Like, uh, he is like a very, uh, he’s an avid book reader and a coffee lover, just like me. So on that note, Paulo, uh, which book you’re reading these days?
Paulo André: Oh, that’s a good question. Let, let me pull up my, because I’m always reading a bunch of them at the same time, sort of. So right now, I’m very interested, I wonder why in, you know, geopolitical topics. So I’m reading a lot about, you know, superpowers and how this has played out, uh, in history. I’m also reading a fiction book from an author called David Baldacci. It’s this series that I recommend everyone who likes to read thrillers and stuff like that. It’s called the 6:20 Man. So.
Kovid Batra: Great.
Paulo André: That’s what I’m reading right now.
Kovid Batra: So what’s going to be the next superpower then? Is it, is it, is it China, Russia coming in together or it’s the USA?
Paulo André: I’ll tell you offline. I’ll tell you offline.
Kovid Batra: All right. All right. Let’s get started then. Um, I think before actually we move on to the main section, uh, there is one ritual that we have to follow every time so that our audience gets to know you a little more. Uh, this is my favorite question. So I think I’ll, I’ll start with Paulo, you once again. Uh, you have to tell us something from your childhood or from teenage, uh, that defines you, who you are today. So over to you.
Paulo André: I mean, you already talked about the books. I think the reason why I became such a book lover was because there were a ton of books in my house, even though my parents were not readers. So I don't know, it was more decorative. But I think more importantly for this conversation, the one thing about my childhood was when they gifted me a computer when I was six years old. We're talking about '88, '89, the type that you still connected to your big TV in the living room. So that changed my life, because it came with an instruction manual that had code listings. You could type them in and see what happens on the screen, and the rest is history. So I think that was definitely the most consequential thing that happened in my childhood, when you consider how my life and career have played out.
Kovid Batra: Definitely. Cool. Um, Denis, I think the same question to you, man. Uh, what, what has been that childhood teenage memory that has been defining you today?
Denis Čahuk: Oh, you're putting me on the spot here. I'll have to come up with a new story every time I join a new webinar. Uh, no, I had a similar experience to Paulo. Um, I have an older brother, and our household got its first computer when I was five or six years old, a Commodore 64. So I learned how to code before I could read. Uh, I knew what keys to press so I could load Donald Duck onto the TV. Um, yeah, other than that, when I got into the teenage years, um, World of Warcraft and playing games online became my passion project once I got access to the internet. Um, I played World of Warcraft semi-professionally for quite a few years, almost an entire decade, you know, and that was sort of parallel with my tech career, because we were usually doing it in a very large organization, game-wise. Yeah. And that had a huge influence, because it gave me an outlet for my competitiveness.
Kovid Batra: That’s interesting. All right, guys. Thanks. Thanks for sharing this with us. Uh, I think we’ll now move on to the main section and discuss something around which our audience would love to learn from you both. Uh, so let’s, let’s start with the first basic fundamental definition of what productivity, what dev productivity or engineering productivity looks like to you. So Paulo, would you like to take this first? Like, how do you define productivity?
Paulo André: So you start with a very small question, right? Um, you actually start with a million-dollar question. What is productivity? I'm happy to take a stab at it, but I think it's one of those things where everyone has their own definition. For what it's worth, when I think about the productivity of engineering teams, I cannot decouple it from the purpose of an engineering team. Ultimately, the way I see it is that an engineering team serves a business and serves the users of that business, in case it's a product company, obviously, um, but any kind of company has the delivery of value at its core, right? So with that in mind: is this team doing its part in the delivery of value, whatever value is for that business and for those users? And so, having that frame in mind, I also break it down, in my mind at least, in terms of winning right now and increasing our capacity to win in the future. A productive team is not just a team that delivers today, but also a team that is getting better and better at delivering tomorrow, right? And so productivity would be: are we doing what it takes to deliver that value, regardless of the output? Um, it is necessary to have output to have results and outcomes, but at the end of the day, how are we contributing to the outcomes, rather than just purely to the outputs? And the reason why I bring this up has to do, obviously, with the obsession you sometimes see with things like story points and, you know, all of that stuff; ultimately, you can be working a lot but achieving very little or nothing at all. So, yeah, I would never decouple, um, the delivery of value from how well an engineering team is doing.
Kovid Batra: Perfect. I think that's very well framed, and the perspective makes a lot of sense. Um, by the way, uh, audience, uh, while we are discussing this, please feel free to shoot out all the questions that you have in the comments section. We'll definitely be taking them at the end of the session. Uh, but it would be great if you could just throw in questions right now. Well, this was advice from Denis, so I wouldn't want to forget it. Okay. Uh, I think coming back, Denis, what's your take on productivity, engineering productivity, dev productivity?
Denis Čahuk: Well, as Paulo said, that's a million-dollar question. I think, coming from a more analytical, more data-driven perspective, we like to use financial analogies and metaphors a lot, for things like technical debt and, you know, story points. It's all about estimating something: the value of something, the scale of something, the scope of something. I think two metaphors are very useful for productivity. One is, you know, how risky is the team itself? And risk can come from many different places. It can be their methodologies, their personalities, the age of the company, the maturity of the company. The project can be risky. The timing on the market can be risky, right? But there is an inherent risk coming from the team itself; that's what I mean. So, how risky is it to work with this team in particular? Uh, and the other thing is: to what degree does the team reason about "I will produce this output for this outcome." versus "I need to fill my schedule with activity because this input is demanded of me."? Right? So if I use the four pillars that you probably know from business model canvases, activity, input, output, outcome, um, a productive team would not be measuring productivity per se. They would be more aligned with their business, aligned with their product, and focused on which of their outputs can provide what kinds of outcomes for the business, right? So it's not so much about measuring it or discussing it. It's more about, you know, are we shifting our mentality far enough into the things that matter, or are we chasing our own tail, essentially, um, protecting our calendars and making sure we didn't over-promise or under-promise, etc.?
Kovid Batra: Got it. Makes sense.
Paulo André: Can I just add one, one last thing here, because Denis got my, my brain kinda going? Um, just to make the point that I think the industry spends a lot of time thinking about what is productivity and trying to define productivity. I think there is value in really getting clear about what productivity is not. And so I think what both Denis and I are definitely aligned on among other things is that it’s not output. That’s not what productivity is in isolation. So output is necessary, but it is not sufficient. And unfortunately, a lot of these conversations end up being purely about output because it’s easy to measure and because it’s easy to measure, that’s where we stop. And so we need to do the homework and measure what’s hard as well, so we can get to the real insight.
Kovid Batra: No, that totally makes sense. I think I relate to this, because when I talk to so many engineering leaders, this comes into the discussion almost every time: how exactly should they be doing it? But what is becoming more interesting for me is that this million-dollar question has suddenly started raising concerns, right? I mean, almost everywhere in business, uh, people measure productivity in some way or the other, right? But somehow engineering teams have suddenly come into focus. So why do you think this focus has come into the picture now?
Paulo André: Is that for me or Denis? Who should go first?
Kovid Batra: Anyone. Maybe Paulo, you can go ahead. No problem.
Paulo André: Okay. So, look, in my opinion, and I was thinking a little bit about this, I think it's a good question. I think there are at least three main things that are conspiring for this renewed focus, or doubling down, on engineering productivity specifically. On the one hand, it's what I already mentioned, right? It's easier to measure engineering than anything else. Um, at least in the product, design, and engineering world; of course, sales are very easy to measure, did you close or not, and that sort of thing. But when it comes to product, design, and engineering, engineering is so much easier to measure, especially if you focus on outputs. And then someone gets a sense of ROI from that, which may or may not be accurate. But I think that's one of the things. The other thing is that when times get lean and things get more difficult and funding dries up, um, then, of course, you need to tighten the belt, and where are you going to tighten the belt? At the end of the day, I always say this to my teams: engineering is not more special in any way than any other team in a company. That being said, when it comes to a software company, the engineering team is where the rubber meets the road. In other words, you absolutely need some degree of engineering capacity to translate ideas and designs and so on into actual software. So it's very easy to just look at it as, "Oh, engineers are absolutely critical. Everything else, maybe, is nice to have.", or something to that effect, right? And then lastly, I think the so-called Elon Musk effect is definitely a thing. I mean, when someone with that prominence and, you know, the soapbox that he has comes in and says we're going to focus on engineers and it's about builders, and even Marc Andreessen wrote an article three years ago or so saying it's time to build, all of that says engineering, engineering, engineering. Um, and when you put that all together with how influenceable all of us are, and I think founders and CEOs especially are really attuned to their industry and to investors and so on, there's this, um, feedback loop where engineering is where it's at right now, especially in the age of AI and so on. So yeah, I'm not surprised that when you put this all together, in this day and age, we have what we have, in terms of engineering being the holy grail and the focus.
Kovid Batra: Uh, Denis, you, you have something to add on this?
Denis Čahuk: I mean, when it comes to the timing, nothing comes to mind as to why now. What I can definitely say is that, of everything that's going on, engineering is the biggest cost in a large company. I mean, that's not to say that it's all about salaries or operational expenses, but from a business's perspective, if I put a price on the business being wrong about an experiment, the product engineering side of things defines most of that cost, right? So when it comes to experiments, the likelihood of them succeeding or not, or how fast you gain feedback: think of experiment feedback as cash flow. Do you want the big bet that you do once every three months, or do you want to do a bunch of small bets continuously, several times per day? You know, all of that is decided and all of that happens in engineering, and it also happens to be the biggest fiscal cost. So it makes sense: hey, there's a big thing that costs a lot, that is very complex, and it's defining the company. Yeah, of course, business owners would want to measure it. It would be irresponsible not to. It doesn't mean that productivity, from a team's or an individual engineer's perspective, is the most sensible thing to measure. But, you know, I understand the people who would intuitively come to that conclusion.
Kovid Batra: Yeah. I think that makes a lot of sense. And that this should be done is totally understandable, but when is the right time to start doing it, and how should one start? Because whenever an engineering leader is held accountable for a team, whether big or small, there is a point where you have to decide your priorities and think about the things you are going to do, right? So how and when should an engineering leader, or an engineering manager of a team, start taking up this journey?
Paulo André: I think Denis can go first on this one.
Denis Čahuk: Well, I would never, you know, I would never start with measuring. So I coach teams professionally. They reach out to me because something about my communication on LinkedIn or my newsletter resonated with them, regarding, you know, a very no-nonsense way of how to deal with customers, how to communicate, how to plan, how to not plan, how to bring, you know, that excitement into engineering that makes engineering very hyperproductive and fun. And then they come to me and ask, "Well, you know, I want to measure all these things to see what I can do." I think that context is always misleading. You know, we don't just go in; it's not a speedometer. I think that's the very first intuition people still have from the 90s, from the initial Scrum and Kanban, um, modes of thought: "Oh, I can just put a speedometer on the team, and it will have a velocity, and it will just be a number." Um, I think that is naive. That is not what measuring is, and that is never the right time to measure. That, I think, is my say. Um, the right time to measure is when you say, "I am improving A or B. I am consciously, continuously trying to figure out what will make my teams better." So a leader might approach it as, "Okay, if I introduce this initiative, how can I tell if things are better?" And then you can say, "Well, I'll eyeball it, or I'll survey the team." And at a certain point, the eyeballing is too inaccurate, or it requires too many disagreeing eyeballs, or, um, you run the risk of survey-fatiguing the team; there are just way too many surveys asking boring questions, and when you ask engineers to do repetitive, boring things, they will start giving you nonsense answers, right? So that would be the point where I think measuring makes sense, right? Where you take a little bit of the subjective opinion out, with the exception of qualitative surveys, and you introduce a machine that says, "Hey, this is a process.": one computer talking to the other computer, in the case of GitHub and similar, which seems to be the primary vector for measurement. Um, can I just extract some metrics of, you know, what the characteristics of the machine are? It doesn't tell you how fast or how slow it's going, just what the characteristics are. Maybe I can get some insights and decide whether this was a good idea or a bad idea, or whether we're missing something. But the decision to help your teams improve on some initiative, and introducing the initiative, comes first. And then you measure, if you have no other alternative, or if the alternatives are way too fuzzy.
Kovid Batra: Makes sense. Paulo, would you like to add something?
Paulo André: Yeah, I mean, I think my perspective on this is not very different from Denis's; maybe it comes from a slightly different angle, and I'll explain what I mean. So, at the end of the day, if you want to create an outcome, right, and you want to change customer behavior and create results for the business, you're going to have to build something. And where I would not start is with the metrics. So you asked, Kovid, where do we start on this journey? I would say do not start with the metrics, because in my mind, the metrics are a source of insight, or answers to a set of questions. So start with the questions, right? Start with the challenges that you have to overcome to get to where you want to be. And so, coming back to what I was saying: if you want to create value, you're going to have to build something, typically, most of the time; sometimes you create value by removing something, but in general, you are building and iterating on your products. And so, with that in mind, going back to first principles, what is the nature of software development? Well, it's a collaborative effort. Nobody does everything end-to-end by themselves. And so, with that in mind, there are going to be handoffs. There's going to be collaboration. There's going to be all of that sort of flow, right, where the work goes through what you can see as a pipeline. And so, when it comes to productivity, to me, you know, from a lean software development perspective, it's: how do we increase the flow? If you think of a Kanban board, how do you go, as smoothly as possible, from left to right, from something being ready for development to being shipped in production and creating value for the user and for the company? And if you see it that way, with that mental model, then it becomes: where is the constraint? What is the bottleneck? And then, how do we measure that? How do we get the answers? By measuring. And so, when it comes to the DORA metrics, which you guys at Typo obviously provide good insight into, and other such things, generally cycle time and lead time really allow us to start understanding where things are getting stuck. And that leads to conversations around what we can do about it. Ultimately, everybody can rally around the idea of how to increase flow. So that's where I would start: what are we trying to do? What is getting in our way? And then let's look at the data we have available, without going too crazy, and ask what we can learn, where we can improve, and where the biggest leverage is.
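To make the flow framing concrete, here is a minimal sketch of how per-stage cycle time might be computed from timestamped work items. The stage names and data shape are assumptions for illustration, not Typo's actual data model; real inputs would come from your issue tracker or Git history.

```python
# Minimal sketch: locating the flow bottleneck from per-stage timestamps.
# The stages and sample data are hypothetical.
from datetime import datetime
from statistics import median

STAGES = ["ready", "in_progress", "in_review", "deployed"]

items = [
    {"ready": "2024-11-01T09:00", "in_progress": "2024-11-01T10:00",
     "in_review": "2024-11-03T16:00", "deployed": "2024-11-06T11:00"},
    {"ready": "2024-11-02T09:00", "in_progress": "2024-11-02T09:30",
     "in_review": "2024-11-04T12:00", "deployed": "2024-11-08T18:00"},
]

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-like timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Median time spent between consecutive stages; the largest value
# points at the likely constraint in the pipeline.
for src, dst in zip(STAGES, STAGES[1:]):
    durations = [hours_between(item[src], item[dst]) for item in items]
    print(f"{src} -> {dst}: median {median(durations):.1f}h")
```

With numbers like these, the conversation Paulo describes becomes grounded: if the review stage dominates, the team discusses review load and handoffs, not individual velocity.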
Kovid Batra: Makes sense. I think one, one good point that you brought here is that software development is a collaborative effort, right? And every time when we go about doing that, there are people, there are teams, uh, there are processes, right? Uh, how, how would you decide in a situation whether you should go about measuring, uh, individual-level productivity, developer-level productivity, and, uh, then, when we are talking about this collaborative effort, engineering productivity? So how do you differentiate and how do you make sure that you are measuring things right? And sometimes the terminologies also bring in a lot of confusion. Uh, like, I would never perceive developer productivity to be something, uh, specific to developers. It ultimately boils down to the team. So I would want to hear both of you on this point, like how, how do you differentiate, or what’s your perspective on that? When you talk to your team and say, okay, this is what we are going to measure, how do you make sure your teams are not taken aback by that, and there is a smooth transition of thoughts and goals when we are talking about improving productivity? Uh, Paulo, maybe you could answer that.
Paulo André: I was trying to unmute myself. I was actually gonna.. Um, and then, feel free to kind of like interject at any point with your thinking as well. You know, if I follow up on what I was just saying, that this is a team sport, then the unit of value is going to be the team. Are there individual productivity metrics? Yes. Are they insightful? Yes, they can be. But for what end? What can you actually infer from them? What can you learn from them? Personally, as an engineering leader, the way I look at individual productivity metrics is more like a smoke alarm. So, for example, if someone is not pushing code for long periods of time, that’s a question. Like, what’s going on? There might be some very good reasons for that, or maybe this person is struggling, and so I’m glad that I saw that in the, in the metrics, right? And then we can have a conversation around it. Again, the individual is necessary, but it’s not sufficient to deliver value. And so I need to focus on the team-level productivity metrics, right? Um, so that’s, that’s kind of like how I disambiguate, if you will, these two, the individual and the team: the team comes first. I look at the individual to understand to what degree the individual or the individuals are serving the team, because it comes back to also questions, obviously, of performance and, and performance reviews and compensation and promotions, like all of that stuff, right? Um, but do I look at the metrics to decide on that? Personally, I don’t. What I do look at is what I can see in the metrics in terms of what this person’s contribution to the team is, for the team to be able to be successful and productive.
Kovid Batra: Got it. Denis, uh, you have something to add here?
Denis Čahuk: It’s, it’s such an interesting topic that sort of has nuances from many different perspectives that my brain just wants to talk about all three at the same time. So I want to sort of approach every, like, do a quick dip into all three areas. First is the business side, right? So, uh, for example, let’s take a, let’s take the examples of baseball and soccer. Baseball is more of an individual sport than soccer, you know, like the individual performance stands out way more than in soccer, where everything’s moving all the time. Um, it’s, it’s very difficult to individuate performance in soccer, although you still can, and people still do, and it’s still very sexy. Um, when it’s off-season, people want to decide, okay, which players do we keep? Which players do we trade? Which players do we replace? You know, this is completely normal, and you would want to do this, and you would want to have some kind of metrics, ideally merit-based metrics of, yeah, this person performed better; having this person on the team makes the team better. In baseball, this makes perfect sense. In soccer, not so much, but you still have to decide, well, how much do we pay each player? And you can probably tell, if you’re following the scene, that every soccer player’s, you know, salary, their, um, their contract is priced individually based on their value to the brand of the team, all the way to public relations, marketing, and yes, performance on, on the field. Even if they’re on the bench all the time, you know, they might have a positive effect on the team as a coach or as a mentor, as a captain. Um, so now, bringing it back into software teams, that’s the business side of things. Yes, these decisions have to be made.
Then there’s the other side of things, which is how does the team work? You know, from my perspective, if output or outcomes can be traced back to one individual person, I think there’s something wrong. I think there’s a lot of sort of value left on the table if you can say, “Oh, this thing was done by this one person.” Generally, it’s a team effort, and the more complex the problems get, the harder it is. You know, look, for example, at NASA, um, the Apollo missions. Which one engineer, you know, made the rocket fly? You don’t have an answer to that because it was thousands of people collaborating together. You know, which one person made a movie? Yes, the director or the producer or the main actor, like they are, they stand out when it comes to branding. But there were tens of thousands of people involved, right? So, you know, at the end of the day, what matters is the box office. So I think that’s what it really comes down to, uh: yes, generally there will be like a few stars and some smoke alarms, as Paulo mentioned, I really liked that analogy, right? So you’re sort of checking for, hey, is anybody below standard and does anybody sort of stand out? Usually in branding and communication, not in technical skill. Um, and then try to reason about the team as a whole.
And then there’s the third aspect, which is how productive does the individual feel? You know, if somebody says they’re a senior with seven years of experience, how productive do they feel? Do they get to do everything they wanted to in a day? You know, and then keep going up. Does the product owner feel productive or efficient? Or does the leader feel that they’re supporting their teams enough, right? So it also comes down to perception. We saw this recently with various studies and surveys regarding AI usage and coding assistants, where developers say, “Yeah, it makes me feel amazing because I feel more productive.” But in reality, the outcomes didn’t change, or the change was so insignificant that it was very difficult to measure.
So with those three angles to consider, I would say, you know, the way to approach measuring, and particularly this individual versus team performance, is that it’s a moving target. You sort of need to have a plan for why you’re measuring and what you’re measuring, and ideally, once you know that you’re measuring the right things when it comes to the business, it’ll be very difficult, um, to trace it back to an individual. If tracing it back to an individual is very easy, or if that’s an outcome that you’re pursuing, I would say there are other issues or potential improvements afoot. And again, measuring those might show you that measuring them is a bad idea.
Paulo André: Can I just add one, one quick thing again? Like, this is something that took me a little while to understand for myself and to become intuitive, because it is not intuitive at all. Um, but I think it’s an important pitfall to kind of highlight, which is: if we incentivize individual behaviors, individual productivity, that can really backfire on the team. And again, I remind you that the team is the unit of value. And so if we incentivize throughput or output from individual developers, how does that hurt the team? It doesn’t sound very intuitive, but if you think about, for example, a very prolific developer that is constantly just taking on more tickets and creating more pull requests, and those pull requests are just piling up because there’s no capacity in the team to review them, the customer is not getting any value on the other side. That work in progress is, in lean terminology, just waste at that point, right? But that developer can be regarded, depending on how you look at it, as a very productive developer, but is it? Or could it be that that developer could be testing something? Or could it be that that developer is helping doing code reviews and so on and so forth, right? So again, team and individual productivity can lead to wildly different results. And sometimes you have teams that are very unproductive despite having very productive developers in them, because they are looking at, in my opinion, the wrong definition of what productivity is and where it comes from, and what the unit of value is. Like I said, it’s the team.
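A minimal sketch of the “smoke alarm” reading of this: instead of counting how many pull requests each developer opens, flag the ones that sit unreviewed past a threshold. The data shape is hypothetical, not any particular tool’s API:

```python
from datetime import datetime, timedelta

# Hypothetical open pull requests: (author, opened_at, has_approval).
open_prs = [
    ("alice", datetime(2024, 3, 1), False),
    ("alice", datetime(2024, 3, 2), False),
    ("alice", datetime(2024, 3, 4), False),
    ("bob",   datetime(2024, 3, 5), True),
]

now = datetime(2024, 3, 7)
stale_after = timedelta(days=2)

# PRs piling up unreviewed are WIP -- waste in lean terms -- no matter
# how "productive" the author's output count looks.
stuck = [(author, (now - opened).days) for author, opened, approved in open_prs
         if not approved and now - opened > stale_after]
for author, age_days in stuck:
    print(f"PR by {author} has waited {age_days} days for review")
print(f"{len(stuck)} PRs stuck in review")
```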
Kovid Batra: Yeah.
Denis Čahuk: Can I jump in quickly, Kovid?
Kovid Batra: Yeah.
Denis Čahuk: There’s something I’ve always said. Um, it’s very unintuitive, and I can give you a complete example from coaching; it throws leaders off-guard every time I suggest it, and it ends up being a very positive outcome. I always ask them, you know, “What are you using to assign tickets? Are you assigning them?” And they say, “Yes, we use Jira.” Or something equivalent. And I ask them, “Well, have you considered not assigning the tickets?” Right? And, well, who should own it? And I say, “Well, it’s in the team’s backlog. The team owns it. Stop assigning an individual.” Right? And they’re usually taken aback. It’s like, “What do you mean? Like, it won’t get done if I don’t assign it.” No, it’s in the team’s backlog, of course it’ll get done. Right? And if not, if they can’t decide who will do it, then that’s a conversation they should have, and then keep it unassigned. Or, alternatively, use some kind of software that allows multiple people to be assigned. But you don’t need to, because, you know, Jira, for example, has like a full activity log, so I comment on it, you comment on it, you review, I review, we merge, I merge, I ask a question. You have a full paper trail of everybody who was involved. Why would you need an owner, right? So this idea of an owner is, again, going back to lean activities and talking about handoffs, right? So I hand it off to you, you’re now the owner, and you’ll hand it off to somebody else. But having many handoffs is an anti-pattern in itself, usually, in most contexts. Actually, the better idea would be: how can we have fewer handoffs than we have people? If there are seven people in the pipeline, there shouldn’t be seven handoffs. You know, how can we have just one deliverable, just one thing to assign, and seven people working on it? That would be the best sort of positive outcome, because then you don’t cap, you know, how much money you can put around a problem, because that allows you to sort of scale your efforts in intensity, not just in parallelism. Um, and usually that parallelism comes at a very, very steep cost.
Paulo André: Yeah.
Denis Čahuk: Um, so incentivizing methods that make individual work activity untraceable can unintuitively have, and usually does have, drastic and immediate positive benefits for the team. Also, if the team is lacking in psychological safety, this will immediately sort of wash over them, and they’ll have to have some like really rough conversations in the first week, and then things drastically start improving. At least that’s my experience.
Paulo André: Yeah. And the handoff piece is a very interesting one. I’ll be very quick, uh, Kovid. When we think about the perspective of a piece of work, a work package, a ticket or whatever, it’s either being actively worked on or it’s waiting for someone to do something about it, right? And if we measure these things, what we realize, and it’s the same thing if you go to the airport and think about how much time we actually spend on something like checking in or boarding the plane versus waiting at some of the stages, is that the waiting time is typically way more than the active time. And so that waiting time is waste as well. That’s an opportunity. Those delays, we can think about how we can reduce those, and the more handoffs we have in the process, the more opportunity for delay creeps in, right? So it’s, it’s a very different way of looking at things. But sometimes when I say estimates and so on, estimates are all about active time, how long it’s going to take, but we don’t realize that nothing is done individually, and because of the handoffs, you cannot possibly predict the waiting times. So the best that you can do is to reduce the handoffs, so you have less opportunity for those delays to creep in.
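Paulo’s active-versus-waiting observation is often summarized as flow efficiency: the share of total lead time during which someone is actually working the item. A back-of-the-envelope example with made-up numbers shows why handoffs, not typing speed, dominate:

```python
# Flow efficiency = active time / (active time + waiting time).
# Hypothetical breakdown of a ticket with a 10-day lead time:
lead_time_hours = 10 * 24
active_hours = 14                                # someone actually working on it
waiting_hours = lead_time_hours - active_hours   # queues, pending reviews, handoffs

flow_efficiency = active_hours / lead_time_hours
print(f"Flow efficiency: {flow_efficiency:.1%}")  # ~5.8%
# Estimating "active time" better cannot fix a lead time that is ~94%
# waiting; removing a handoff (one queue) can.
```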
Kovid Batra: Totally. I think to summarize both of your points, what I understood is this: keep those smoke alarms ready at the individual level and at the process level, so that you are able to spot those gaps if there is something falling apart. But at the end of the day, if you’re measuring productivity for a team, it has to be a collaborative, team-level thing that you’re looking at, looking at value delivery. So I think it’s a very interesting thing. Uh, I think there’s a lot of learning for us when we are working at Typo that we need to think more on the angle of how we bring in those pointers, those metrics which work as those smoke alarms, rather than just looking at individual efficiency or productivity and defining that for somebody. Uh, I think that, that makes a lot of sense. All right. I think we are into a very interesting conversation, and I would like to ask one of you to tell us something from your experience. So let’s start with you, Denis. Um, like you have been coaching a lot of teams, right? And, uh, there, there are instances where you deal with large-scale teams, small teams, startups, right? There are different combinations. Anything that you feel is an interesting experience to share here about how a team approached solving a particular problem or a bottleneck in their team that was slowing them down, basically not having the right impact that they wanted, and what did they do about it? And then how they arrived at the goal that they were looking at?
Denis Čahuk: Well, I can, I can list many. I’ll, I’ll focus on two. One is, generally the team knows what the problem is. Generally, the team knows already, hey, yeah, we don’t have enough tests, or, ah, yeah, we keep missing deadlines, or our relationship with stakeholders is very bad, and they just communicate with us through, you know, strict roadmaps and strict deadlines and strict expectations. Um, that’s a problem to be solved. That’s not, you know, it doesn’t have to be that way. So if you know what the problem is, there’s no point measuring, because there’s no further insight to be gained; it just confirms, yeah, this is a problem, and, hey, now you’re distracted with this insight. No, like, you know what the problem is, you can just decide what to do, and then if you need help along the way, maybe measurements would help. Or maybe measurements on an organizational level would help, not, not just engineering. Um, or you bring on a coach to sort of help you, you know, gain clarity. That’s one aspect. If you know what the problem is, you don’t need to measure. Usually people ask me, Denis, what should I measure? Should I introduce DORA metrics? And I usually tell them, Oh, what’s the main problem? What’s the problem this week? Oh yeah, a lot of PRs are waiting around and we’re not writing enough tests. Okay, that’s actionable. Like, that’s enough. Like, do you want more? But do you need a bigger problem? Because then you just, you know, spend a lot of time looking for a problem that you wish was bigger than that, so that you wouldn’t have to act, right? Because that’s just resistance: either your ego, or trying to play it safe, or trying to push it into the next quarter when maybe there’s less stress. And, right, there isn’t. That’s one aspect.
The other aspect, you know, this idea of.. how did you phrase it? Approaches that work, when there aren’t generally approaches that work. You know, I always say that everything we do nowadays is basically a proxy to eliminating handoffs, right? Getting the engineers very close to the customer and, um, you know, getting closer to continuous delivery. Continuous integration at the very minimum, but continuous delivery, right? So that when software is ready, it’s releasable on demand, and there isn’t like this long waiting that Paulo mentioned earlier, right? Like this is just a general form of waste. Um, but potentially something that both of these cases handle unintuitively, that I like to bring in as a sort of more qualitative metric, is, um, the reliability of the team. You know, we like to measure the reliability of systems, and the whole Scrum movement introduced this idea of velocity, and I like to bring in this idea of, let’s say you want to be on time as a leader. Um, I’m interested in proving the theory that, hey, if you want to be on time, you probably need to be on time every week, and in order to be on time in the week, you probably need to be on time every day. So if you don’t know what an on-time day looks like, there’s no point planning roadmaps and saying that deadlines are a primary focus. Maybe the team should be planning in smaller batches, not trying to chase higher accuracy in something very large. And what I usually use as a proxy metric is just to say, how risky is your word? Right, so how reliable is your promise? Uh, and we don’t measure how fast the team is moving. What I like to measure with them is to say, okay, when do you think this will be done? They say Friday. Okay. If you’re right, Monday needs to look like this. Tuesday needs to look like this. Let me just try to reverse-engineer it from that. It’s very basic. And then I’m trying to figure out how many days or hours or minutes into a plan they’re off-track. I don’t care about velocity. So no proxy metrics. I’m just interested, if they create like a three-month roadmap, how many hours into the three-month roadmap are they off-course? Because that’s what I’m interested in, because that’s actionable. Okay. You said three months from now, this is done. One month from now, there’ll be a milestone. But yesterday you said that today something would be done. It’s not done. Maybe we should work on that. Maybe we should really get down to a much smaller batch size and just try to make the communication structures around the team building stuff more reliable. That would de-stress a lot of people at the same time and sort of reduce anxiety. And maybe the problem is that you have a building-to-deploying nuance, and maybe that’s also part of the problem. It usually is. And then there might be a planning-to-building nuance that also needs to be addressed. And then we basically come down to this idea of continuous delivery, extreme programming, you know: let’s plan a little bit, let’s build a little bit, let’s test it, let’s test our assumptions. And behind the scenes, once we do that for a few days, once we have evidence that we’re reliable, then let’s plan the next two weeks. Only when we have shown evidence that the team understands what a reliable work week for them looks like. If they’ve never experienced that and they’ve been chasing their own tail, deadline after deadline, um, there’s not much you can do with such a team. And a lot of people just need a wake-up call to see that, “Hey, you know what?
I actually don’t know how to plan. You know, I don’t know how to estimate.” And that’s okay. As long as you have this intention of trying to improve, of trying to look for alternatives to become better.
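One way to operationalize Denis’s “how many hours into the plan are you off-course” proxy: reverse-engineer the promise into daily checkpoints, then measure the drift at each one. A minimal sketch, with hypothetical milestones:

```python
from datetime import datetime

# Reverse-engineered plan: (milestone, promised_by, actually_done).
# None means not done yet. All data here is hypothetical.
plan = [
    ("schema migration", datetime(2024, 6, 3, 17), datetime(2024, 6, 3, 15)),
    ("API endpoint",     datetime(2024, 6, 4, 17), datetime(2024, 6, 5, 11)),
    ("UI wired up",      datetime(2024, 6, 5, 17), None),
]

now = datetime(2024, 6, 6, 9)
for milestone, promised, done in plan:
    actual = done or now  # an open item has been late since its promise
    drift_h = (actual - promised).total_seconds() / 3600
    status = "on time" if drift_h <= 0 else f"{drift_h:.0f}h off-track"
    print(f"{milestone:18s} {status}")
# The interesting number is how early the first drift appears: if
# Tuesday is already late, a three-month roadmap is fiction.
```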
Kovid Batra: I think my next question would be, uh, like when you’re talking about, uh, this aspect in the teams, how do you exactly go about having those conversations or having that visibility on a day-to-day basis? Like most, most of the things that you mentioned were qualitative in nature, right? So how, how do you exactly go about doing that? Like if someone wants to understand and deploy the same thought process in a team, how should they actually do it and measure it?
Denis Čahuk: Well, from a leader’s perspective, it’s very simple, you know, because I can just ask them, “Hey, is it done? Is it on anybody’s mind today?” Um, and they might tell me, “Yeah, it’s done, but not merged.” Or, “It’s waiting for review, but it’s done, but it’s kind of waiting for review.” And then that might be one possible answer. Um, it doesn’t need to be qualitative in the sense that I need a human for that. What, you know, what I’m looking for is precision. Like, is it, is it definitively done? Was there an increment? You know, did we test our assumptions? Is there a releasable artifact? Is it possible to gain feedback on this?
Kovid Batra: Got it.
Denis Čahuk: Did you, did you talk to the team to establish: if we deploy this as soon as possible, what question do we want to answer? Like, what kind of product feedback are we looking for? Or are we just blindly going through a list of features? Like, are we making improvements to our software, or is it somebody else who is not an engineer? Maybe that’s the problem, right? So it’s very difficult to pinpoint like one generic thing. But for a team that I worked with, the best proxy for these kinds of improvements from the leader was how ready they felt to be interrupted and get course correction. Right? Because the main thing with priorities in a team, you know, the main unintuitive thing, is that you need to make bets, and you need to reduce the cost of being wrong, right? So the business is making bets on the market, on the product, and on working with this particular team with these particular individuals. The team is making bets with implementation details: the choice of technology, the ratio between keeping the lights on, technical debt, and new features, support and communication styles, you know, a change of technology maybe. Um, so you need to just make sure that you’re playing with the market. The upside will take care of itself. You just need to make sure that you’re not making stupid mistakes that cost you a lot, either in opportunity or actual fiscal value. Um, but once you’ve got that out of the way, you know, sky’s the limit. A lot of engineers think: we’re expensive, these are large projects, we gotta get it right the first time. So they try to measure how often they got it right the first time, which is silly. And usually that’s where most measurements go: are we getting it right the first time? We need to do this to get it right the first time, right? So failure is not an option. Whereas my mantra would be: no, you are going to fail. Just make sure it happens sooner rather than later, and with as little intensity as possible, so that we can act on it while there’s still time.
Kovid Batra: Got it. Makes sense. Makes sense. All right. Uh, Paulo, I think, uh, we are just running short on time, but I really want to ask this question to you as well. Just like Denis shared something from his experience, and it’s really interesting to know how you can qualitatively measure or see things and solve for those. In your experience, um, you have, uh, recently joined this startup as, as a CTO, right? So how does it feel being a new CTO, and what things come to your mind when you think of improving productivity in your teams and building a team which is impactful?
Paulo André: Yeah, I joined this company as a CTO six months ago. It’s been quite a journey and it’s, so it’s very fresh in my mind. And of course, every team is different and every starting point is different and so on, but ultimately, I think the pattern that I’ve always seen in my career is that some things are just not connected and the work is not visible and there’s lack of clarity about what’s value, uh, about what are the goals, what are the priorities, how do we make decisions, like all of that stuff, right? And so, every hour that I’ve been putting into this role with my team so far in these six months has been really either about creating clarity or about developing competence to the extent that I can. And so the development of competence is, is basically every opportunity is an opportunity to learn, both for myself and for anyone else in the team. And I can try to leverage my coaching skills, um, in making those learning conversations effective. And then the creation of clarity in my role, I happen to lead both product and engineering, so I cannot blame somebody else for lack of clarity on what the product should be or where it should go. It’s, it’s on me. And I’ve been working with some really good people in terms of what is our product strategy? What do we focus on and not focus on? Why this and not that? What are we trying to accomplish? What are those outcomes that we were talking about that we want to drive, right? So all of that is hard to answer. It’s deceptively difficult to answer. But at the end of the day, it’s what’s most important for that engineering productivity piece, because if you have an engineering team that is, you know, doing wasted work left and right, or things are not connected, and they’re just like, not clear about what they should be doing in the first place, that doesn’t sound like the ingredients for a productive team, right? And ultimately, the product side needs to answer to a large extent those, those difficult questions. So obviously, I could go into a lot of specific details about how we’re doing this and that. I don’t think we have at least today the time for that. Maybe we can do a deep dive later. But ultimately, it’s all about how do I create clarity for everyone and for myself in the first place so I can give it, and then also developing the competence of the people that we do have. And that’s the increasing the capacity to win that I was talking about earlier. And if we make good progress on these two things, then we can give a lot of control and autonomy to people because they understand what we’re going for, and they have the skills to actually deliver on that, right? That’s, that’s the holy grail. And that’s motivation, right? That’s happiness. That’s a moment at work that is so elusive. But at the end of the day, I think that’s what we’re, we’re working towards.
Kovid Batra: Totally. I’d still, uh, want to deep dive a little bit into any one of those, uh, instances. Like, if you have something to share from the last six months where you actually prioritized this transparency for the team, uh, how exactly did you execute it? A small instance, or maybe a meeting that you have had and..
Paulo André: Very simple example. Very simple example. Um, one of the things that I immediately noticed in the team is that a lot of the work that was happening was just not visible. It was not on a ticket. It was not in a Notion document. It was nowhere, right? Because knowledge was in people’s minds, and so there were a lot of, like, gaps of understanding, and things that would just take a lot longer than they should. And so I already mentioned my bias towards lean software development. What does that mean? First and foremost, make the work visible, because if you don’t make the work visible, you have no chance of optimizing the process and getting better at what you do. So I’ve been hammering this idea of making the work visible. I think my team is sick of me asking: is there a ticket for it? Did you create a ticket for it? Where is the ticket? And so on. Because the way we work with Jira, that’s, that’s where the work becomes visible. And I think now we got to a point where this just became second nature, uh, for all of us. So that would be one example where it’s like a very basic, fundamental thing. You don’t need to measure anything. You don’t need complicated KPIs and whatnot. What we do need is to make the work visible so we can reason about it together. That’s it.
Kovid Batra: Makes sense. And anything you found very unique about this team, where you took a unique approach to solve it? Any, anything of that sort?
Paulo André: Unique? Oh, that’s a, that’s a really good question. I mean, everyone is different, but at the end of the day, we’re all human beings trying to work together towards something that is somehow meaningful. And so from that perspective, frankly, no real surprises. If anything, I’m really grateful for the team being so driven to do better, even if, you know, we lack the experience in many areas that we need to level up. Um, but as far as something being really unique, I think maybe a tough technical challenge our team really has to deal with is around email deliverability, for example. That’s not necessarily unique. Of course, there are other companies that need to grapple with the exact same problems. But in my career, that’s not a particular topic that I’ve had to deal with a lot. And I’m seeing, like, just how complex and how tricky it is to get right. Um, and it’s an always-evolving sort of landscape, for those that are familiar with that type of stuff. So, yeah, not a good, not a good answer to your question. There’s nothing unique. It’s just that, yeah, what’s unique is the team. The team is unique. There’s no other team like this one, like these individuals doing this thing right here, right now, in this company in 2024.
Kovid Batra: Great, man. I think your team is gonna love you for that. All right. I think there will be a lot more questions from the audience now. We’ll dedicate some time to that. We’ll take a minute’s break here and we’ll just gather all the questions that the audience has put in. Uh, though we are running a little out of time, is it okay for you guys to like extend for 5–10 minutes? Perfect. All right. Uh, so we’ll take a break for a minute and, uh, just gather the questions here.
All right. I think time to get started with the questions. Uh, I see a lot of them. Uh, let’s take them one by one on the screen and start answering those. Okay. So the first one is coming from, uh, Kshitij Mohan. That’s, uh, the CEO of Typo. Hi, Kshitij. Uh, everything is going good here. Uh, so this is for Denis. Uh, as someone working at the intersection of engineering and cloud technologies, how do you prioritize between technical debt and innovation?
Denis Čahuk: It’s a great question. Hey, Kshitij. Well, I think first of all, I need to know whether it’s actual debt or whether it’s just crap code. You know, a crappy implementation is not an excuse to call it debt, right? So for you to have debt, three things need to have happened. At some point in the past, you had two choices, A or B, and you made a choice with insufficient knowledge. And later on, you figured out that either something in the market changed, or timing changed, or we gained more knowledge, and we realized that now the other one is better, for whatever reason. I mean, it’s not necessarily that it was wrong at the time, but we now have more information, and we need to go from A to B. Uh, originally we picked A. Now you also need to know how much it costs to go from A to B, and how much you stand to gain or trade if you decide not to do that, right? So maybe going from A to B now costs you two months and ten thousand euros, and doing it later next year, maybe it’s going to double the cost and add an extra week. That’s technical debt. The nature of that decision, that’s technical debt. If you made the wrong decision in the past, and you know it was the wrong decision, and now you’re trying to explore whether you want to do something about it, that’s not technical debt. That’s just, you know, you seeking excuses to not do a rewrite. So, first of all, you need to identify: is it debt? If it is debt, you know the cost, you know the trade-off, and you can either put it on a timeline or you can measure some kind of business outcome with it. So that’s one side.
On the, on the innovation side, you need to decide: what is innovation, exactly? You know, is it like an investment? Is it a capital expense, where I am building a laboratory and we’re going to innovate with new technologies, and then once we build them, we will find, um, sort of private-market applications for them or B2B applications for them? Like, is it that kind of innovation? Or is innovation an umbrella term for new features, right? Cause that’s operational. That’s much closer to an operational expense, right? It’s just something you do continuously and you deliver continuously, and that continuous feature development will also produce new debt. So once you’ve got these two things, these two sides figured out, then it’s a very simple decision. How much debt can you live with? How fast are you creating new debt compared to how fast you’re paying it off? And what can you do to get rid of all the non-debt, all the crap, essentially? That’s it, you know. Then you just make sure that you balance out those activities and that you consistently do them. It isn’t just, “Oh yeah, we do innovation for nine months and then we pay off debt.” That usually doesn’t go very well.
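Denis’s A-to-B example reduces to simple arithmetic: the “interest” on the debt is the difference between paying now and paying later. A sketch using his illustrative figures (two months and ten thousand euros now; double the cost plus an extra week next year):

```python
# Denis's illustrative figures, as a pay-now versus pay-later comparison.
pay_now = {"weeks": 8, "eur": 10_000}
pay_later = {"weeks": 8 + 1, "eur": 2 * 10_000}  # doubles the cost, adds a week

interest_eur = pay_later["eur"] - pay_now["eur"]
interest_weeks = pay_later["weeks"] - pay_now["weeks"]
print(f"Deferring costs an extra {interest_eur} EUR and {interest_weeks} week(s).")
# If a year of deferral buys something worth more than that delta,
# deferring is a rational trade; if you cannot state the delta at all,
# it is not debt, just a guess.
```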
Kovid Batra: I think this is coming from a very personal pain point, now that we’re really moving towards the AI wave and building things at Typo. That’s where Kshitij is coming from. Uh, totally. Thanks, thanks, Denis. I think we’ll move on to the next question now. Uh, that’s from, uh, Madhurima. Yeah. Hey Paulo, this one’s for you. Uh, which metric do you think is often overlooked in engineering teams but has a significant impact on long-term success?
Paulo André: Yeah, that’s a great question. I’m going to, I’m going to give a bit of a cheeky answer and say, disclaimer, this is not a metric that we track with my team, and it’s also not, I don’t know, a very scientific or concrete way of measuring it. However, to the question of what is overlooked in engineering teams and has a significant impact on long-term success, that’s what I would call ‘mean time to clarity’. How quickly do we get clear on where we need to be and how do we get there? Right? And we don’t have all the answers upfront. We need to, as Denis mentioned earlier, experiment and iterate and learn and we’ll get smarter, hopefully, as we go along, as we learn. But how quickly we get to that clarity, in every which way that we’re working, I think that’s the one that is most important, because it has implications, right? Um, if we don’t look at that and if we don’t care about that, are we doing what it takes to create that clarity in the first place? And if that’s not the case, the waste is going to be abundant, right? So that’s the one I would say as an engineering leader: how do I get for myself all the clarity that I need, to be able to pass it along to others and create that sense that we know where we’re going, and what we don’t know, we have the means to learn and to keep getting smarter.
Kovid Batra: Cool. Great answer there. Uh, let’s move on to the next one. I think this one is again for Paulo. Yeah.
Paulo André: Okay, so you know what? Maybe this is going to be a bit, uh, I don’t know what to call it, but considering that I don’t think the most important things are gonna change in the next five years, um, AI notwithstanding, and what are the most important things? It’s still a bunch of people working together and depending on each other to achieve common goals. We may have less people with more artificial intelligence, but I don’t think we’re anywhere near the point where the artificial intelligence just does everything, including the thinking for itself. And so with that in mind, it’s still back to what I said earlier, um, in the session. It’s really about how is the work flowing from left to right? And I don’t know of a better, um, sort of set of metrics than the DORA metrics for this, particularly cycle time and deployment frequency and that sort of stuff that is more about the actual flow. Um, but like, you know, let’s not get into the DORA metrics. I’m sure the audience here already knows a lot about it, but that’s, that’s, I think, what, what is the most important, um, and will continue to be critical in the next five years, um, that’s, that’s basically it.
Kovid Batra: Cool. Moving on. All right. That’s again for, oh, this one, Denis. How do you ensure cloud solutions remain secure and scalable while addressing ever-changing customer demands?
Denis Čahuk: Well, there’s two parts to that question. You know, one is security; the other one is ever-changing customer demands. I think, you know, security will be a sort of an expression of the standard, or at least of some degree of sensible defaults, within the team. So the better question would be: what do engineers need so that they don’t have to constantly, consciously, and deliberately think about security, right? Are they supported by a security expert? Do they have platform engineering teams that are supporting them with security initiatives, right? So if there’s a product team that’s focusing on product, support them so that they also don’t have to become experts in security, cause that’s where all the problems start, where you basically have a team of five and they need to wear 20 hats, and they start triaging the hats and making trade-offs in security, you know. And usually, large teams that are overwhelmed love making privacy or security trade-offs because they don’t have skin in the game. The business has skin in the game, right? And then when you individuate incentives to such a degree that it becomes dysfunctional, um, security usually doesn’t fare well. Um, at least not till there’s some incident, or maybe some security review or some inspection, et cetera.
So give the teams what they need. If they’re not security experts, provide them support. Um, and the same thing with scalability. Scalability is also something that can benefit from tighter collaboration, even more so than security. Um, so just make sure that the team is able to express itself as a team, through pair programming or having more immediate conversations, rather than just, you know, asynchronous code review conversations or stand-up conversations way at the end of the cycle. At the end of the cycle, when the code is written and it’s going into merging or QA, it’s too late; the code is written, right? So you want to preempt that. The solution is created by the team being able to express itself as a team rather than just a group of individuals pursuing individual goals.
Kovid Batra: Cool. I think, uh, we have a few more questions, but running way out of time now. Uh, maybe we can take one more last, last question and then we can wrap it up.
Paulo André: Sounds good. Okay, so this one is for me, right? How do I approach, uh, integrating new tools and frameworks into engineering workflows without disrupting productivity? That, that final piece is interesting. I think it also starts with how we frame this type of stuff. So there is a cost to making improvements. I don’t think we can have our cake and eat it, too, necessarily. And it’s just part of the job, and it’s part of what we do. And so, um, you know, for example, if you take the time to have a regular retrospective with your team, right, is that going to impact productivity? I mean, you could be coding for an extra hour every two weeks. It’s certainly going to have some impact. But then it also depends on what is the outcome of that retrospective, and how much does it impact the long-term, um, you know, capacity to win of the team. So with that in mind, what I would say is that the most important thing I find is that, again, as an engineering leader, as an engineering manager, you don’t just download certain practices and tools and frameworks on the teams. You always start from: what are we trying to solve here, and why does it matter? And get that shared understanding to the point where we’re all looking at the same problem roughly the same way. We can then disagree on solutions, but we agree that this is a problem worth solving right now, and we’re gonna go and do that. And so the tools and the frameworks are kind of like downstream from that. Okay, now what do we need to gain the insight? Oh, now what do we need to solve the problem? Then we can talk about those things. Okay? So as an example, one thing I’m working on now with my team, I mentioned this earlier, I believe, is like, uh, a bit of a full-on product discovery and delivery, um, process, right? That includes a product strategy, um, that shouldn’t change that much that often. And then there are a lot of tools and frameworks that we can use. Tools: we use three different types of projects in Jira, for example. And when it comes to frameworks, we’re starting to adopt something called opportunity solution trees, which is just a fancy way of saying: what outcomes are we trying to generate, what opportunities do we see to get there, and what are the solutions that can capitalize on these opportunities, right? That sort of thing. But it all starts with: we need to gain clarity about where we’re gonna go as a business and as a product, and everything kind of comes downstream from that, right? So I think, and this is where I’ll leave it, if you take the time, and I think you should, to start there and to do this groundwork and create this shared context and understanding with your teams, everything else downstream becomes so much easier, because you can connect it to the problem that you’re solving. Otherwise, you’re just talking solutions for problems that most people will think are nonexistent, or they just look completely different, right? And this takes work, this takes time, this takes energy, this takes attention, takes all of those things. But frankly, if you ask me, that’s the work of leadership. That’s the work of management.
Kovid Batra: Great. Well said, Paulo. I think Denis has a point to add here.
Denis Čahuk: Yeah, I had a conversation this week with one of the CEOs and founders of one of Ljubljana, Slovenia’s biggest agencies, because we were talking about this. And they asked me this question. They said, “Denis, you don’t have a catalog. Like, what do you do? Like, how does working with you look like? Do we do a workshop or something?” And I asked, “Do you want to do a workshop?” And I saw it on their face. They said, “Well..” I told them, “Yes, exactly, exactly. That’s why I don’t have a catalog. Because the workshops are this: I will show you how a great team works, right? I will give you all of this fancy storytelling about how productive teams work, and then you’re like, ‘Great. Cool. But we’re not that and we can’t have that in our team.’ So great, now I’d go away, and you’d be left feeling demoralized, right? Like, that’s not a good way of approaching working with that team. I, I always tell them, “Look, I don’t know what will help you. You probably also don’t know what will help you. We need to figure it out together. But generally, what’s more important than figuring out how to help you is to figure out how much you are willing to invest consistently in improvement. Because maybe I teach you something and you only have 10 minutes. That’s the wrong way about it, right? I need to ask you: how much time do you have consistently every week? 15 minutes? Okay, then I need to teach you something that you can put into practice in 15 minutes. Otherwise, I’m robbing you of your time. Otherwise, I’m wasting your time. If you have three-hour retrospectives and we’re putting nothing into action, I’m wasting your time, right? So we need to personally figure out: what is consistent for you? What kind of improvement, how intense do you want it? How do you know if you’re making progress?”
Those two are the most important things, because I always come to these kinds of questions about new tools and frameworks because people love asking me, “Hey, Denis. Can you do a TDD workshop?”, “Denis, can you do a domain-driven design workshop?”, “Denis, can you help us do event storming?” And I always say, “If what you need is that one workshop, it’s not going to solve any problems, because I’m all about consistent improvement, about learning, about growing your team, about, you know, investing in the people, not about, you know, changing some label or another.” And I always come back to the mantra of: what can you do consistently, starting this week, so that the product and the team are much better six months from now? That’s the big question. That’s, that should be the focus. Cause if you need to learn something, you know, go do a certification that takes you a year to perform correctly, and then you need to renew it every year. That’s nonsense. This week, what can we do this week? Start this week, apply this week, and then consistently grow and apply every single week for the next six months. That would be huge. Or you can go to a conference and send everybody on vacation and pretend the workshop was very productive. Thank you.
Kovid Batra: Perfect. I think that brings us to the end of this episode. Uh, I think the next episode that we’re going to have would be in the next year, which is not very far. So, before we depart, uh, I would like to wish the audience, uh, a very Happy New Year in advance, a Merry Christmas in advance. And to both of our panelists also, Paulo, Denis, thank you, thank you so much, uh, for taking the time. It was really great talking to you. I would love to have you both here again, talking more in depth about different topics and how to make teams better. But for today, that’s our time. Anything that you guys would want to add, please feel free. All right. Yeah, please go ahead.
Denis Čahuk: Thanks for inviting us.
Paulo André: Yeah, exactly. From my side, I was just going to say that thanks for having us. Thanks also to the audience that has put up with us and also asked very good questions, to be honest. Unfortunately, we couldn’t get to a few more that are still there that I think are very good ones. Um, but yeah, looking forward to coming back and deep diving into, into some of the topics that we talked about here.
Kovid Batra: Great. Definitely.
Denis Čahuk: And thank you to Kovid for inviting us and for introducing us to each other, and to everybody backstage and at Typo, who are probably doing a lot of the annoying groundwork in the background that makes all of this so much more enjoyable. Thank you.
Kovid Batra: All right, guys. Thank you. Thank you so much. Have a great evening ahead. Bye!
'Leading Tech Teams at Stack Overflow' with Ben Matthews, Senior Director of Engineering, Stack Overflow
November 29, 2024
•
29 min read
In this episode of the groCTO Podcast, host Kovid Batra is joined by Ben Matthews, Senior Director of Engineering at Stack Overflow, with over 20 years of experience in engineering and leadership.
Ben shares his career journey from QA to engineering leadership, shedding light on the importance of creating organizations that function collaboratively rather than just executing tasks independently. He underscores the need for cross-functional teamwork and reducing friction points to build cohesive and successful teams. Ben also addresses the challenges and opportunities presented by the AI revolution, emphasizing Stack Overflow’s strategy to embrace and leverage AI innovations. Additionally, he offers valuable advice for onboarding junior developers, such as involving them in code reviews and emphasizing documentation.
Throughout the discussion, Ben highlights essential leadership principles like advocating for oneself and one’s team, managing team dynamics, and setting clear expectations. He provides practical tips for engineering managers on creating value, addressing organizational weaknesses, and fostering a supportive environment for continuous growth and learning. The episode wraps up with Ben sharing his thoughts on maintaining a vision and connecting it with new technological developments.
Timestamps
00:00 - Introduction
01:08 - Meet Ben Matthews
01:22 - Ben's Journey from QA to Engineering Leadership
03:21 - The Importance of Team Collaboration
04:03 - Current Role and Responsibilities at Stack Overflow
09:12 - Advice for Aspiring Technologists
17:41 - Embracing AI at Stack Overflow
23:30 - Onboarding and Nurturing Junior Developers
Kovid Batra: Hi, everyone. This is Kovid, back with another episode of groCTO podcast. And today with us, we have an exciting guest. This is Senior Director from Stack Overflow with 20 plus years of experience in engineering and leadership, Ben Matthews. Hey, Ben.
Ben Matthews: Thanks for having me. I just wanted to cover you there.
Kovid Batra: All right. So I think, uh, today, uh, we’re going to talk about, uh, Ben’s journey and how he moved from a QA to an engineering leadership position at Stack Overflow. And here we are like primarily interested in knowing how they are scaling tech and teams at Stack Overflow. So we are totally excited about this episode, man. But before we jump on to the main section, uh, there is a small ritual that we have. So you have to tell us something about yourself that your LinkedIn profile doesn’t tell us.
Ben Matthews: Okay. Uh, well, that’s not in my LinkedIn profile. Well, um, so I am the Senior Director of Engineering at Stack Overflow for our community products, but something about myself that’s not on there: uh, I, I love to snowboard. I’m a huge fan of calzones and I’m a total movie nerd. Is that what you had in mind?
Kovid Batra: Yeah, of course. I mean, uh, I would love you to talk a little more, if there is something that you want to share that tells us about who you are. Maybe something from your childhood, from your teenage years, anything, anything of that sort that you think defines who you are today.
Ben Matthews: Uh, yeah. Um, yeah, that’s a great question. Of, of really just getting into tech in general, a lot of that did come from some natural inclinations, uh, that have kind of always been there. For the longest time I didn’t think I would really enjoy technology. There was the stereotype of the person who sat in the corner, just coded all day and never talked to people, kind of the Hollywood impression of what a developer was. That didn’t seem very appealing. I like interacting with people. I like actually making some tangible differences. But once I actually dug into it, there was that click that a lot of people have the first time that you compile and run your code and you’re like, wait, I made that happen, I made that change. And that’s where the addiction kind of started. But even after that, I still loved interacting with people. Um, and I think I was very lucky. I came at a time when the industry was starting to change, where it was no longer people working in isolation. This, this is a team sport now, like developers have to work together. You’re working with other departments. And that’s actually kind of what I really enjoy. I love, I love interacting with people and building things that people like to work with. So, um, that’s really kind of what sings to me about tech: it’s a quick way to build things that other people can interact with and bring value to them. And I get to do it together with another team of people who, who enjoy it as well. So I would say that’s kind of what gets me out of bed in the morning: trying to help people do more with their day and build something that helps them.
Kovid Batra: Great, great. Thanks for that intro. Um, I think, uh, I’m really interested to start with the part, uh, with your current role and responsibility at Stack Overflow. Uh, like, uh, like how, uh, you, you started here or in fact, like, we can go a little back also, like from where you actually started. So wherever you are comfortable, like, uh, you can just begin. Yeah.
Ben Matthews: Yeah. Um, so the, the full journey has its interesting and boring parts altogether, but how it really started was out of school, I still had that feeling of I didn’t know if development was for me because of the perception I had. But I actually got my first job as a quality assurance engineer for a small startup. Uh, now the best part about working at a small company is that you’re forced to wear multiple hats. You know, you don’t just have one role. I was also doing tech support. And then I also looked at some of the code. I helped to do some small code reviews. And from there, I thought like, you know, I would love to take a shot at doing this development thing. Maybe, maybe I would like it more. Um, and then I did, I kind of got that high of like, I pushed this live and people are using it and, you know, that’s mine and they’re enjoying it, and that kind of became addictive to me, of where I really liked being a developer. So I really leaned into that. Um, and then enjoying that startup and having a great mentor there, uh, that really set a foundation for how I view how I want to develop and the things I want to build, uh, of really taking the point of view of how I’m creating value for the users. And my, and my next role, I actually worked for a marketing agency doing digital marketing. Um, and that took the number of things I had to interact with and be prepared for up to 11. Like every week or every couple weeks I had a new project, a new customer, a new problem to solve, usually with code, sometimes not with code. We were solving these problems and creating value, and getting that whole high-level view of working on databases, kind of doing QA for other people, doing development front and back, and I got to see what I really like to do. But I also got an insight into how organizations work, how pieces of a company work together, pieces of a development team work together, and how that really creates value for, for users and customers, which, in the end, is what we’re here to do: create value for people.
Um, so my next role after that was my first foray into leadership. I went to another digital agency, leading a small development team. And, um, it had its highs and lows. There was definitely a learning curve there. Um, there was that ache of not being able to develop, of enabling other people to develop instead.
Kovid Batra: Yeah. And this was, and this was a startup, or was this, like, uh, a medium or large-scale organization?
Ben Matthews: This was a medium-sized organization, much more, uh, established. They were trying to start up a new tech department, so I had a little freedom in setting some standards. But it was a mature organization. Um, they kind of knew what they wanted to accomplish. Um, so like then I had a big learning curve, excuse me, of what it’s like to work there, how do I lead people, how do I set expectations for them, um, how do I advocate for myself and others. And, you know, I had plenty of missteps that, looking back now, there’s a bunch of times I wish I could go back and say, “Nope, this is totally the wrong direction. Your instincts are wrong. You need to learn and grow.” Um, and then after that I went to a couple of other organizations, doing leadership there, some very, very large, some smaller, getting that whole view of the ins and outs of what I would like to be. Then I landed here at Stack, which has been a terrific fit for me, getting to work directly with users, and, uh, knowing that the people I’m leading are customers of Stack Overflow just as much as they are employees here, which is very satisfying. We really feel like we’re helping people. I get to have a big impact on a very large application and, um, there’s still a lot of freedom for me to, to execute on the vision. Working with the other leaders here has been a joy as well, since we’re kind of like-minded, which I think is very important for people looking for a place to land. Uh, I know in a lot of interviews, you rarely get to interact with people who will be your peers, but when you do, really see how well you bounce off of each other. Um, are you all alike? Cause that’s not great. Or are you all different? That’s not great either. You want to have like a little bit of friction there so you can create great ideas. And I think that’s what we have at Stack, and it’s been wonderful.
Kovid Batra: No, I think that's great. But, uh, one question here. You were very passionate when you told us how you started your journey with the startup: you got exposure from the business level to product teams to developers, and that really opened your mind. Um, would you recommend this for anyone who is beginning their journey in tech? Would this be a recommended way of setting your foundation?
Ben Matthews: Yeah, that's a great question. I think a lot of people are going to have very different journeys. One thing that really stuck out to me recently: I was at a panel just this past weekend, and the variety of journeys that people took, of where they started, was striking. I think one of the most fascinating ones was someone who was not in tech at all. They had been a teacher for 15 years, teaching parts of computer science and design, never having worked professionally as a developer. And now they're breaking into it and having a lot of success. Um, I mean, I think my advice to people is: your journey is not right or wrong. Whatever you're trying to get to, I think there's plenty of ways to get to it. What I would say you do want to focus on, though, is to keep challenging yourself. What I thought I would be working on now is certainly not what I'm actually working on today, and I think that's true at all levels, from senior and executive down to junior engineer. From year to year, the technology landscape changes. How we organize people and execute on that changes. Um, so whatever that journey is, whatever you think it's going to be, I'm 99 percent sure it's going to be different than what you envisioned, and you have to be prepared to shift that way and keep learning and challenging yourself. It'll be uncomfortable, but that's part of the journey.
Kovid Batra: Yeah, I think that's the way to go, actually. That's where you learn the most, I think. Uh, so yeah, totally agree with that. When you reflect back, when you see your journey from a QA to a Senior Director at Stack Overflow, I'm curious to know: what is that quality in you, uh, that made you stand out and grow to such a profile in a reputed organization?
Ben Matthews: Yeah, I think, um, I had a great mentor that pointed out a lot of things that weren't obvious to me. Um, and I think for us developers, being a people leader sometimes doesn't come as naturally because we tend to think more functionally, which isn't a bad thing. But there are some things that, at least for me, didn't jump out as obvious. I remember one great piece of feedback that took me from just a team manager to a higher level: really advocating for yourself. Uh, that didn't come naturally to me. And I don't think that comes naturally to a lot of people in our industry. Um, some like to just label it as bragging or see it as bragging, but if you're not being proud of your successes, other people won't know they're there. And it's not even just for you: you should be bragging about and communicating the successes of your team, communicating the successes of your organization. That's a big part of letting people know what's worked and what hasn't, so that, one, you can keep doing it, but also other people can emulate it, and other people in your organization can see you. There needs to be a profile there. You need to be visible to be a leader. Uh, and I separate that from manager. Being a manager, you don't necessarily have to be visible. There are very good managers that don't like to be in the limelight. They're still supporting their people and moving things forward. But if you're going to be a leader and set an example and set hard expectations of the vision of where things are going to go, you need to be visible, and part of that is advocating and communicating more broadly.
Kovid Batra: Sure. Makes sense. Okay, coming back to your current roles and responsibilities at Stack Overflow. You're working with developers who know what the product is about and are themselves its users. What is that one thing that you really abide by as a principle for leading your teams? How are you leading differently at Stack Overflow, making things successful, scalable, robust?
Ben Matthews: Yeah, and that's a great question, 'cause every organization is different; I've had to tackle this problem in different ways at different places. At Stack, I've been very fortunate that there's already a very talented group of people here that I've been able to build on and keep growing. Um, people tend to be very passionate about the projects and products that we build. That's a great benefit to have as well. You're not really trying to talk people into the vision of Stack Overflow; they were users before they were customers. So that was great. But, um, with that also comes a different question: how do you get the most out of people given this hand? Um, and I know it's partially a cliché, but with that vision that's already there, with already talented people, the steps are making sure you're setting clear expectations for your folks, setting that vision very loudly, broadly, and clearly to them, um, then making sure they have all the resources they need to do that. Sometimes it's time; sometimes it's money or equipment. And then lastly, getting out of their way and removing all the roadblocks. Those three steps are the big parts that I think are a general rule of thumb, but, um, given that a lot of other friction points were out of the way, I could really lean into that.
A great example: I had a team that was trying to work on a brand new product. It didn't quite work out before, but we were going to give it another try. We were starting over. And looking at some of the things that went well and what didn't, their problem was honestly just a clear lack of vision. They kept changing directions often. And I was talking to product, like, "Hey, what went wrong?" And they had their own internal struggles. We had our struggles. And we aligned on that, saying, "Hey, this is going to be a little bit more broad. We're specifically trying to accomplish this. How do we do it?" And from a bottom-up approach, they set the goals, they set what they thought the milestones should be, and that was so much more successful. Um, that formula doesn't work everywhere, but it really thrives here at Stack: "Hey, what do you think? How is the best way to execute this?" And we tweak it, we manage it, we keep it on the rails. But once they started moving on it, it actually launched and became very successful. So that's another way of reading your team, reading the other stakeholders, and leveraging their strengths.
Kovid Batra: That's great, but what I feel is that this approach works at Stack; usually, what I've felt is that when you go with a bottom-up approach, there is an imbalance. Developers are usually inclined towards taking care of the infra and managing the tech debt, and not really intuitively prioritizing customer needs and requirements, even though they relate to them at times; at least in the case of Stack, I can say that. But still, there is a bias in the developer to make the code better before looking at the customer side of it. So how do you take care of that?
Ben Matthews: That's a great point. Um, and just to be clear to other developers listening, I love that instinct if you have it; it's so valuable that you want to leave code better than you found it. But, to your point, I think that goes back to setting those clear expectations again: "Hey, this is what we're going to accomplish. This is how we need to do it. Um, if we can address tech debt along the way, you need to justify that. I give you the freedom to justify that. But in the end, I'm setting these goals. This is what has to happen by then, and I'm happy to support you in what we need to get there." Um, and then also sharing advice and, you know, learning where the mines are on some of those paths. Uh, some people have experience in making these mistakes, like I have. I've tried to say, "Well, we could also do this and then also do this and then also do our goal," and then we've taken on too much, and we're trying to do too many things at once that we can't execute.
So you're right in that without any clear direction or expectations, things can kind of go off the rails, and what they want to work on isn't always what we need to focus on. I think there's a balance there. But, uh, yeah, setting those expectations is a key part of those three steps, I would say arguably the most important part. If they don't know what they're supposed to be aiming for, they can't execute on it.
Kovid Batra: Makes sense. Okay, um, the next thing that I want to know is, uh, it hasn't actually been a few years, it's just been a year or two since the AI wave took over the industry, right? And everyone's rushing. Um, I'm sure there was a huge impact on the user base of Stack, but maybe I'm wrong, because people go there to look at code and libraries. Now ChatGPT and tools like that are really helping developers generate code automatically. How have you taken that up, and what's your new strategy? I mean, of course you can't say everything here, but I would love to know how it has been absorbed in the team.
Ben Matthews: Um, I think for the most part, we've worn our strategy on our sleeve. Our CEO, our Chief Product Officer, and our CTO have talked about this a bit. I mean, Stack is there to help educate and empower the technologists of the world. This is a new tool that's part of the landscape now, and there are a lot of companies that are concerned about it or feel like it's a doomsday. Um, we're embracing it. It's a new way for information to get in and out of people's hands, and this is something we're going to try to be a part of. I think we've made some great steps in leveraging AI, uh, and we're trying to build some partnerships with people to get a hand on the wheel, to make sure this is going in the right direction. But, um, there are technical revolutions every couple of years, and this is another one. Uh, and how Stack fits into it is that we're still going to provide that value to folks, and AI is a new part of it. Uh, we're building new products that leverage AI. Um, we actually have a couple that are hopefully going to be launching soon that try to improve the experience for users on the site, leveraging AI. We're going to try to find new ways for people to interact with AI and to know that Stack Overflow is a part of that experience, to create a cycle there. Um, it's changed how people work, but I think Stack Overflow is still a big part of that equation. Uh, we are a big knowledge repository, along with Reddit or news articles; all of these things need to be there to even power AI. That's sort of the cycle. Without human beings, without a community generating content, AI is pretty powerless. So there has to be a way for us to keep that feedback loop going. And we're excited about all the opportunities to be a part of that and find new ways to keep educating people.
Kovid Batra: Definitely. I think that's a very good point, actually. Without humans feeding that information, at least right now, AI is not at a stage where it can generate things on its own. It's the community that will always be driving things in the end. So I also believe in that. My follow-up question is: when such big changes happen, how are your teams taking it? At Stack, how are people embracing it, particularly developers? I'm just saying, if there are new products to work on or new tech to build, how are people embracing it? How fast are they adapting to the new requirements and the new thought process the company's adopting?
Ben Matthews: Uh, through the context of AI or just in general?
Kovid Batra: Just, just in the context of AI.
Ben Matthews: Oh yeah. Um, well, in a fun way, there's been a wide range of opinions on how we should embrace or channel the AI capabilities that are now very pervasive in the industry. Um, the first part of it is that we're trying to gather as much data and information as we can. Again, we have a good user base, so we're able to interact with them and ask them questions. We're looking at behavior changes. And from there, we try to bring a data-informed decision to our teams: "Hey, this is what we're seeing, so this is what we're going to try." Um, the beauty of data is there are a bunch of ways to interpret it, and our developers are no different. They have some thoughts on the best ways to go about it. But I think this also goes to a general leadership technique: you're never going to get unanimous consent on an idea. If that's your requirement, you're never going to move forward. What you do have to get is people to at least agree that this is worth trying, or to think, "I might be wrong, and a lot of people feel like this is the best way, so we'll give it a shot." Uh, and that's something I've been proud to be able to achieve at Stack. It's something that is very important for a leader: saying, "Hey, I know you don't agree, but I need you to roll along with me on this. I understand your point. You've been heard, but this is the decision we're making." Um, a lot of people agree with the idea, some don't, but you try to build the enthusiasm, and I think you also connect the dots between those ideas and the larger picture. I think that's something people miss a lot during these revolutions: if you start out with vision A, and then something big happens and now you have vision B, um, you still have to connect the dots. "Hey, we're still trying to provide value the same way. We're still the same company. This new thing that you're doing still connects to what we want to do. There's still a path there. We're not totally pivoting to blockchain or something like that. It's not a huge change for us." So I think that also motivates people: we're still trying to build the same vision, the same power for the company; we're just doing it in a different way, and what you're doing is still really creating value. I think that's a big part for leaders, to keep people motivated.
Kovid Batra: Makes sense. When it comes to bringing developers on board and nurturing them, I think the biggest challenge I have always heard from managers, particularly, is getting these new-age junior developers and the fresh ones coming into the picture up to speed. Um, any thoughts, any techniques that you have used to bring these people on board and nurture them well, so that they can contribute and create that impact?
Ben Matthews: Yeah. Uh, onboarding is a huge thing that I try to impress on the other managers who work for me when they're bringing on new team members. Um, a big part of it goes back to empowerment, but I think a lot of it is also the same challenge we've had for decades. Even with my own Computer Science degree, in my first development job there was a huge gap between what I learned in school and what I was doing day-to-day as an actual developer. Uh, as far as I can tell, that hasn't really changed much, whether people come in from bootcamps or not. Uh, funny enough, we've had a really good experience with people who don't have formal degrees, who have just been coding the whole time. They tend to actually have an easier time working within a team. That's not to disparage a Computer Science degree, it's still very valuable, but it's just to highlight the gap between what you actually do and what they've been trained on. A great example is what we try to get junior engineers to really focus on initially: just doing code reviews. That is a huge part of what we do in modern development. It's a great way for you to understand the code base, understand how your team works, understand the ins and outs and where some of the scary parts of the code are. And even though that can be intimidating, the best thing I think you can do in a code review is just ask questions: "Hey, I see you're doing this. This doesn't make sense to me. Can you explain why?" And over time, even a senior engineer will read those and be like, "You know what? That is kind of confusing. Why did we do it that way? Let me.." And they'll even update their PR. I think that's one of the best tools to get a junior engineer up to speed: just get them in the code and reviewing it.
Um, the other part, kind of the unsung hero of all of software development that never gets enough love, is documentation: having them go through some of the pieces of the product, commenting and documenting how things work. One, it helps onboard other people, but two, it forces them to have an understanding of how parts of the code work. Uh, and then from there, at their own pace.. here at Stack, we try to have people push code to production on day one. Uh, we find something small for them to do and work them through the whole build pipeline process so they can see how it works and get that scary part out of the way. Something you wrote is now in production on Stack Overflow in front of hundreds of millions of people. Congratulations! Let's just get that part out of the way. Um, but then actually understanding the code and continuing to build things, taking on new tickets, working with product, sizing, refinement, all of that, we ease them into at their own pace. Keeping them exposed to the code through documentation and PRs really shortens the learning curve.
Kovid Batra: Cool. Makes sense. I think, uh, in most of the well-functioning teams I have seen, the team managers who are leading them play a really, really important role. So before we end this discussion, I would love for you to give some parting advice to the engineering managers who are leading such teams and looking to grow in their careers. That would be helpful for them. Yeah.
Ben Matthews: Yeah. I, uh, I would say three big points that were big for me, from that mentor. One I've already spoken on: advocating for yourself, your team, and your people. That's a big part of getting visibility, of showing that you're being successful. And honestly, just helping your peers be successful is a great way for people to see that you're good at what you do. Another thing I think people could focus on is building an organization that functions and not just executes. Those are two different things, though they sound similar. I can have a front-end team that is great at pumping out front-end code or building a new front-end framework, and that's valuable. They're executing. But they have to work in concert with our back-end team, our DBA team, and with product to align things. Getting those things to work together, that's an organization that functions. And though it may seem like you might be slowing down one team to get them to work in tandem with another, um, that's actually what's really going to make your organization successful. If you can show that you have teams working together, reducing friction points, and actually building things as one unit, that shows you're being a good leader, you're setting a clear vision, and you're creating the most value you can out of that organization. Um, and last, I would say: really identify friction points or slowdowns in your organization, own them, and set a plan for how to tackle them. I had a natural inclination, as I was moving up, to hide my weaknesses, to hide what was not going well in my organization. Um, and because of that, I wasn't able to get feedback or help from my fellow leaders and my manager. Um, but I would say if you have a problem that you're tackling, own it: "Hey, this is what's going on. This is a problem I'm having here, so I'm going to address it." And welcome any thoughts. That's another success story to share, that you can tackle problems and things that are going wrong, and advocate for those too. Uh, show that you can address problems and keep improving and making things better.
Uh, those three things I think have really helped me move forward in my career. That mindset has made my organizations better, made my people better, and let people know that, um, you know, I'm there to try to create the most value I can for the organization.
Kovid Batra: Makes sense. Thank you, Ben. Thank you so much for such a great session and such great advice. Uh, for today, in the interest of time, we'll have to stop here, but we would love to hear more of your stories and experiences, maybe in another episode. It was great to have you here today.
Ben Matthews: Thank you, Kovid. It was great to be here.
'Product Thinking Secrets for Platform Teams' with Geoffrey Teale, Principal Product Engineer, Upvest
November 15, 2024
•
31 min read
In this episode of the groCTO Podcast, host Kovid Batra engages in a comprehensive discussion with Geoffrey Teale, the Principal Product Engineer at Upvest, who brings over 25 years of engineering and leadership experience.
The episode begins with Geoffrey's role at Upvest, where he has transitioned from Head of Developer Experience to Principal Product Engineer, emphasizing a holistic approach to improving both developer experience and engineering standards across the organization. Upvest's business model as a financial infrastructure company providing investment banking services through APIs is also examined. Geoffrey underscores the multifaceted engineering requirements, including security, performance, and reliability, essential for meeting regulatory standards and customer expectations. The discussion further delves into the significance of product thinking for internal teams, highlighting the challenges and strategies of building platforms that resonate with developers' needs while competing with external solutions.
Throughout the episode, Geoffrey offers valuable insights into the decision-making processes, the importance of simplicity in early-phase startups, and the crucial role of documentation in fostering team cohesion and efficient communication. Geoffrey also shares his personal interests outside work, including his passion for music, open-source projects, and low-carbon footprint computing, providing a holistic view of his professional and personal journey.
Timestamps
00:00 - Introduction
00:49 - Welcome to the groCTO Podcast
01:22 - Meet Geoffrey: Principal Engineer at Upvest
01:54 - Understanding Upvest's Business & Engineering Challenges
03:43 - Geoffrey's Role & Personal Interests
05:48 - Improving Developer Experience at Upvest
08:25 - Challenges in Platform Development and Team Cohesion
13:03 - Product Thinking for Internal Teams
16:48 - Decision-Making in Platform Development
19:26 - Early-Phase Startups: Balancing Resources and Growth
Kovid Batra: Hi, everyone. This is Kovid, back with another episode of the groCTO Podcast. Today with us, we have a very special guest who has great expertise in managing developer experience at small-scale and large-scale organizations. He is currently the Principal Engineer at Upvest and has more than 25 years of experience in engineering and leadership. Welcome to the show, Geoffrey. Great to have you here.
Geoffrey Teale: Great to be here. Thank you.
Kovid Batra: So Geoffrey, I think, uh, today's theme is more around improving the developer experience and bringing product thinking into building platform teams and the platform itself. Uh, and you have been doing all this for quite some time now, at Upvest and the previous organizations that you've worked with. But first of all, we would like to know what kind of business you're in, what does Upvest do? And then let's deep dive into how engineering is getting streamlined there according to the business.
Geoffrey Teale: Yeah. So, um, Upvest is a financial infrastructure company. Um, we provide, uh, essentially investment banking services, a complete, uh, solution for building investment banking experiences, uh, for client organizations. So we're business-to-business-to-customer. We provide our services via an API, and client organizations, uh, names that you'd have heard of, like Revolut and N26, build their client-facing applications using our backend services to provide that complete investment experience, um, currently within the European Union. Um, but, uh, we'll be expanding out from there shortly.
Kovid Batra: Great. Great. So I think, uh, when you talk about investment banking and supporting companies with APIs, what kind of engineering is required here? Is it more security-focused, or is it more about delivering on time? Or is it more about making things very, very robust? How do you see it right now in your organization?
Geoffrey Teale: Well, yeah, I mean, I think in the space that we're in, the answer unfortunately is all of the above, right? So all those things are requirements. It has to be secure. It has to meet the, uh, the regulatory standards that we have in our industry. Um, it has to be performant enough for our customers, who are scaling out to quite large scales, quite large numbers of customers. Um, it has to be reliable. Um, so there's a lot of, uh, how would I say it? Pressure, uh, to perform well and to make sure that things are done to the highest possible standard in order to deliver for our customers. And, uh, if we don't do that, then, well, the customers won't trust us. And if they didn't trust us, we wouldn't be where we are today. So, uh, yeah.
Kovid Batra: No, I totally get that. Uh, so talking more about you now: what's your current role in the organization? And even before that, tell us something about yourself which LinkedIn doesn't know. Uh, I think the audience would love to know you a little bit more, so let's start from there. Uh, maybe things that you do to unwind, or your hobbies, or anything else you're passionate about apart from the job that you're doing?
Geoffrey Teale: Oh, well, um, so, I'm quite old now. I have a family. I have two daughters, a dog, a cat, fish, quail. We keep quail in the garden. Uh, and that occupies most of my time outside of work. Actually, my passions outside of work were always, um, music, so I play guitar, and technology itself. So outside of work, I'm involved and have been involved in open source and free software for longer than I've been working. And, uh, I have a particular interest in low carbon footprint computing that I pursue outside of work.
Kovid Batra: That's really amazing. So, um, when you say low carbon, uh, cloud computing, what exactly are you doing there?
Geoffrey Teale: Oh, not specifically cloud computing, but that would be involved. So yeah, there are multiple streams to this. One is about using, um, low-power platforms, things like RISC-V. Um, the other is about streamlining software to make it more efficient, so we can look into lots of different, uh, topics there: operating systems, tools, programming languages, how they, uh, how they perform. Um, it's sort of reversing a trend, uh, that's been going on for as long as I've been in computing, which is that we use more and more power, both in terms of computing resource but also actual electricity for the network, to deliver more and more functionality. But we're also programming in more and more abstracted ways with more and more layers, which means that we're actually getting less, uh, less bang for buck, if you like, than we used to. So, uh, trying to reverse those trends a little bit.
Kovid Batra: Perfect. Perfect. All right. That's really interesting. Thanks for that quick, cute little intro. Uh, and now moving on to your work: we were talking about your experience and your specialization in DevEx, right, improving the developer experience in teams. So what are the current roles and responsibilities that come with your position at Upvest? Uh, and what are the interesting initiatives that you're working on?
Geoffrey Teale: Yeah. So I've actually just changed roles at Upvest. I've been at Upvest for a little bit over two years now, and the first two years I spent as the Head of Developer Experience, running a tribe with a specific responsibility for client-facing developer experience. Um, now I've switched into a Principal Engineering role, which means that I have, um, a scope now which is across the whole of our engineering department, uh, with a, yeah, a view to improving experience and improving standards and quality of engineering internally as well. So, um, a slight shift in role, but my previous five years before, uh, Upvest were all in, uh, internal developer experience. So I think, um, quite a lot of that skill is coming into play in the new role. Um, in terms of challenges, actually, we're just at the very beginning of what we're doing on that side. So, um, the early challenges are about identifying what problems exist inside the company, where we can improve, and how we can make ourselves ready for the next phase of the company's lifetime. So, um, I think some of those topics would be quite familiar to any company that's relatively modern in terms of its developer practices. If you're using microservices, um, there's this aspect of Conway's law, which is to say that your organizational structure starts to follow the program structure and vice versa. And, um, in that sense, you can easily get into this world where teams have autonomy, which is wonderful, but they can be, um, sort of pushed into working in a siloized fashion, which can be very efficient within the team, but then you have to worry about cohesion within the organization and about making sure that people are doing the right things, uh, to make the services work together, in terms of design, in terms of the technology that we develop. So that bridges a lot into this world of developer experience, into platform drives, as I think you mentioned already, and into the way in which you think about your internal development, uh, as opposed to just what you do for customers.
Kovid Batra: I agree. I mean, as you said, when the teams are siloed, they might be thinking they are efficient within themselves. And that's mostly the case. But when it comes to integrating different pieces together, that cohesion has to fall in place. What is the biggest challenge you have seen, uh, in teams in the last few years of your experience that prevents this cohesion? And what is it that works best to bring this cohesion into the teams?
Geoffrey Teale: Yeah. So I think there are a lot of factors there. The biggest one, I think, is pressure, right? So teams in most companies have customers that they're working for; they have pressure to get things done, and that tends to make you focus on the problem in front of you rather than the bigger picture, right? So, um, dealing with that and reinforcing the message to engineers that it's actually okay to do good engineering and to worry about the other people, um, is a big part of that. I've always said, actually, that in developer experience, the first thing you have to do is teach people why developer experience is important. And, uh, one of those reasons is actually, you know, promoting good behavior within engineering teams themselves and saying: we only succeed together. We only do that when we make the situation for ourselves that allows us to engineer well. And when we step away from good practice and rush, um, that maybe works for a short period of time. But in the long term, it creates a situation where there's a lot of mess, and we talk about factors like technical debt: there's a lot that you have to get past before you can actually get on and do the productive things that you want to do. Um, so teaching organizations and engineers to think that way is, uh, I think, a big part of the work that has to be done. Finding ways to then take that message and put it into a package that is acceptable to people outside of engineering, so that they understand why this is a priority and why it should be worked on, is, I think, probably the second biggest part of it.
Kovid Batra: Makes sense. So is it a behavioral challenge, uh, where developers and team members really don't like the fact that they have to work in cohesion with other teams? Or is it more that the organizational structure puts people into a certain kind of mindset, they start growing with that, and it becomes a problem in a later phase of the organization? What have you seen from your experience?
Geoffrey Teale: Yeah. So I mean, I think growth is a big part of this. So, um, I've worked with a number of startups. I've also worked in much bigger organizations. And what happens in that transition is that you move from a small, tight-knit group of people who sort of inherently have this very good interpersonal communication; they all know what's going on with the company as a whole, and they build trust between them. And that way, this early-stage organization works very well, and even though you might be working on disparate tasks, you always have some kind of cohesion there. You know what to do. And if something comes up that affects all of you, it's very easy to identify the people that you need to talk to and find a solution. Then as you grow, you start to have this situation where you take domains and say, okay, this particular part of what we do now belongs in a team, it has a leader, and this piece over here goes over there. And that still works quite well up to a certain scale, right? But over time in an organization, several things happen. Your priorities drift apart, right? You no longer have such a good understanding of the common goal. You tend to start prioritizing your work within those departments, so you can have some tension between those goals. It's not always clear that Department A should be working together with Department B on the same priority. You also have natural staff turnover. So those people who were there at the beginning start to leave, some of them at least, and these trust relationships break down; the communication channels break down. And the third factor is that new people coming into the organization haven't got these relationships; they haven't got this experience. They usually aren't in, uh, a position to have influence over things on such a large scale. Yet there's an expectation that these people are going to be effective across the organization in the way that people who've been there a long time are, and it tends not to happen. And if you haven't set up for that, if you haven't built the support systems, the internal processes, and the tooling for that, then that communication stops happening in the way that it was happening before.
So all of those things create pressure towards silos; then you add the pressure of growth and customers, and it just, um, ossifies in that state.
Kovid Batra: Totally. Totally. And I think, um, talking about the customers, uh, last time when we were discussing, you very beautifully put across this point of bringing that product thinking not just to the products that you're building for the customer, but to what you're building for the teams. And what I feel is that the people who are working on platform teams have come across this situation more than anyone else, where they have to apply product thinking for the people within the team. So where does this philosophy come from? How have you fitted it into how platform teams should be built? Just tell us something about that.
Geoffrey Teale: Yeah. So this is something I talk about a little bit when I do presentations, uh, about developer experience. And one of the points that I make, particularly for platform teams, but for any kind of internal team that's serving other internal teams, is that you can't think about yourself as a mandatory piece that the company will always support and say, "You must use this, this platform that we have." I have direct experience, not in my current company, but with previous employers, where a lot of investment was made into a platform, but no thought really was given to this kind of developer experience, or even to the idea of selling the platform internally, right? It was just an assumption that people would have to use it, and so they would use it. And that creates a different set of forces than you'll find elsewhere. And people start to ignore the fact that, you know, if you've got a cloud platform in this case, um, there is competition, right? Every day as an engineer, you run into people out there working in the wide world, working for companies, the Amazons of this world, AWS, Azure, Google. They're all producing cloud platform tools. They're all promoting their cloud-native development environments, with their own reasons for doing that. They expend a lot of money developing those things, developing them to a very high standard, and a lot of money promoting and marketing those things. And when we talk, as just now, about trust breaking down, about the cohesion between teams breaking down, it doesn't take very much for a platform to start looking like less of a solution and more of a problem: if it's taking you a long time to get things done, if you can't find out how to do things, if you, um, have bad experiences with deployment. This all turns that product into an internal problem.
Kovid Batra: In the context of an internal problem for the teams.
Geoffrey Teale: Yeah, and in that context, and this is what I've seen, when you then either have someone coming in from outside with experience with another product that you could use, or you get this kind of marketing and sales push from one of these big companies saying, "Hey, look at this platform that we've got that you could just buy into," um, it puts you in direct competition, and you can lose that battle, right? So I have seen whole divisions of a very large company switch away from the internal platform to using cloud-native development, right, on a particular platform. Now there were downsides to that. There were all sorts of things that they didn't realize they would have to do that they ended up having to do. But once they've made the decision, that battle is lost. And I think that's a really key topic to understand: you are in competition. Even though you're an internal team, you are in competition with other people, and you have to do some of the things that they do to convince the people in your organization that what you're doing is beneficial, that it's useful, and that it's better in some very distinct way than what they would get off the shelf from somewhere else.
Kovid Batra: Got it. Got it. So whenever teams are making this decision, let's take building a platform as the example, what are the nitty-gritties that one should take care of? People could also go with off-the-shelf solutions, right, and then start building. What should be the decision-making mindset, I must say, uh, for this kind of process they have to go through?
Geoffrey Teale: So I think, um, we within Upvest follow a very, um, 'prescribed' is not the right word, but we have a process for how we think about things, and I think that's actually a very useful example of how to think about any technical project, right? So we start with this 'why' question, and the 'why' question is really important. We talk about product thinking. Um, this is, you know: who are we doing this for, and what are the business outcomes that we want to achieve? And that's where we have to start from, right? So we define that very, very clearly, because, and this is a really important part, there's no value, uh, in anybody within the organization saying, "Let's go and build a platform," for example, if that doesn't deliver what the company needs. So you have to have clarity about this. What is the best way to build this? I mean, nobody builds a platform, well, not nobody, but very few people build a platform in the cloud starting from scratch. Most people are taking some existing solution, be that a cloud-native solution from a big public cloud, or be that Kubernetes or Cloud Foundry. People take these tools and wrap them up in their own processes, their own software tools, to package them up as a, uh, a nice application platform for development to happen on, right? So why do you do that? What purpose are you serving in doing this? How will this bring your business forward? If you can't answer those questions, then you probably should never even start the project, right? That's my view. And if you can't continuously keep those, um, ideas in mind and repeat them back, right? Repeat them back in terms of: what are we delivering? What do we measure ourselves against for the company? Then again, you're not doing a very good job of communicating why that product exists. If you can't think of a reason why your platform delivers more to your company and the people working in your company than one of the off-the-shelf solutions, then what are you for, right? That's the fundamental question.
So we start there, we think about those things well before we even start talking about solution space and, and, um, you know, what kind of technology we're going to use, how we're going to build that. That's the first lesson.
Kovid Batra: Makes sense. A follow-up question on that. Uh, let's say a team is 20-30 folks right now, okay? I'm talking about an engineering team that is not super-funded right now, or not in a very profit-making business. This comes with a cost, right? You have to deploy resources. You have to invest time and effort, right? So is it a good idea, according to you, to have shared resources for such an initiative? Or does it not work out that way, and you need dedicated resources working on this project separately? How do you think about that?
Geoffrey Teale: My experience of early-phase startups is that people have to be multitaskers, and they have to work on multiple things to make it work, right? It just doesn't make sense in the early phase of a company to invest so heavily in a single solution. Um, and I think one of the mistakes that I see people making now, actually, is that they start off with this predefined idea of where they're going to be in five years. And so they go away and say, "Okay, well, I want my system to run on microservices on Kubernetes." And they invest in setting up Kubernetes, right, which has got a lot easier over the last few years, I have to say. Um, you can, to some degree, go and just pick that stuff off the shelf and pay for it. Um, but it's an example of a technical decision that's putting the cart before the horse, right? So, of course, you want to make architectural decisions. You don't want to make investments in something that isn't going to last, but you also have to remember that you don't know what's going to happen. And actually, getting to a product quickly, uh, is more important than, you know, doing everything perfectly the first time around. So, when I talk about these things, I think, uh, we have to accept that there is a difference between being the scrappy little startup, being in growth phase, and being a mega-corporation. These are different environments with different pressures.
Kovid Batra: Got it. So, when, when teams start, let's say, work on it, working on it and uh, they have started and taken up this project for let's say, next six months to at least go out with the first phase of it. Uh, what are those challenges which, uh, the platform heads or the people who are working, the engineers who are working on it, should be aware of and how to like dodge those? Something from your experience that you can share.
Geoffrey Teale: Yes. So I mean, in the very earliest phase, as I just alluded to, keeping it simple is a big benefit. And actually, keeping it simple sometimes means, uh, spending money upfront. So what I've seen many times, I've worked at companies, at least three times, who've invested in a monitoring platform. They've bought an off-the-shelf software-as-a-service monitoring platform, uh, and used that effectively up until a certain point of growth. Now, the reason they only use it up to a certain point of growth is because these tools are extremely expensive, and those costs tend to scale with your company and your organization. And so there comes a point in the life of that organization where that no longer makes sense financially. And then you withdraw from that and actually invest in specialist resources, either internally or using open source tools or whatever it is. It could just be optimization of the tool that you're using to reduce those costs. But all of those things have a time and financial cost associated with them. Whereas at the beginning, when the costs of these services are quite low, it actually tends to make more sense to just focus on your own project and, you know, pick those things up off the shelf, because that's easier and quicker. And I think, uh, again, I've seen some companies fail because they tried to do everything themselves from scratch, and that doesn't work in the beginning. So yeah, I think that's a big one.
The second one comes actually slightly later, as you start to grow. Getting something up and running at all is a challenge. Um, what tends to happen as you get a little bit bigger is this effect that I was talking about before, where people get siloized, um, the communication starts to break down, and people aren't aware of each other's differing concerns. You start worrying about things that you might not worry about at first, like system recovery, uh, compliance in some cases, like there are laws around what you do in terms of your platform and your recoverability and data protection and all these things. All of these topics tend to take focus away, um, from what the developers are doing. So on the one hand, that tends to slow down delivery of features that the engineers within your company want, in favor of things that they don't really want to know about. Now, all the time you're doing this, you're taking problems away from them and solving those problems for them. But if you don't talk about that, then you may be delivering value, but nobody knows you're delivering value. So that's the first thing.
The other thing is that you then tend to start losing focus on the impact that some of these things have. If you stop thinking about the developers as the primary stakeholders and you get obsessed with these other technical and legal factors, um, then you can start putting barriers into place. You can start, um, making the interfaces to the system, the way in which it's used, more complicated. And if you don't really focus then on the developer experience, right, what it is like to use that platform, then you start to turn into the problem which I mentioned before. Because, um, if you're regularly doing something, if you're deploying or testing on a platform and you have to do that over and over again, and it's slowed down by some bureaucracy or some practice, or just literally running slowly, um, then that starts to be the thing that irritates you. It starts to be the thing that's in your way, stopping you doing what you're doing. And so, I mean, one thing is recognizing when this point happens, when your concerns start to deviate, and actually explicitly saying, "Okay, yes, we're going to focus on all these things we have to focus on technically, but we're going to make sure that we reserve some technical resource for monitoring our performance and the way in which our customers interact with the system, failure cases, complaints that come up often."
Um, so one thing, again, I saw in much bigger companies is that they migrated to the cloud from legacy systems in data centers. And they were used to having turnaround times on procedures for deploying software that took at least weeks, or having month-long projects because they had to wait for specific training or they had to get sign-off. And they thought that by moving to an internal cloud platform, they would solve these things and have this kind of rapid development and deployment cycle. They sort of did, in some ways, but they forgot, right? When they were speccing it out, they forgot to make the developers a stakeholder and ask, "What do you need to achieve that?" And what the developers actually needed to achieve that was a change in the mindset around the bureaucracy that came with it. It's all well and good not having to physically put a machine in a rack and order it from a company. But if you still have these rules that say, okay, you need to go on this training course before you can do anything with this, and there's a six-month waiting list for that training course, or this has to be approved by five managers who can only be contacted by email before you can do it, these processes are slowing things down. So actually, I mentioned that company where we lost a whole department from the platform that we had internally. One of the reasons, actually, was that just getting started with this platform took months. Whereas if you went to a public cloud service, all you needed was a credit card, and you could do it, and you wouldn't be breaking any rules in the company in doing that. As long as you had the right to spend the money on the credit card, it was fine.
So, you know, that difference of experience, that difference of, uh, understanding, is something that starts to show as you grow, right? So I think that's a, uh, a thing to look out for as you move from the situation where you're 10, 20 people in the whole company to when you're about, I would say, 100 to 200 people in the whole company. These forces start to become apparent.
Kovid Batra: Got it. So when you touch that point of 100-200, there is definitely a different journey ahead of you, right? And it comes with its own set of challenges. So from that zero-to-one and then one-to-X journey, what have you experienced? This will be my last question for today, but I'd be really interested, for the people who are listening and heading teams of a hundred and above: what kind of things should they be looking at when they are, let's say, moving from an off-the-shelf to an in-house product and then building these teams together?
Geoffrey Teale: Oh, what should they be looking at? I mean, I think we just covered, uh, one of the big ones. I'd say actually that one of the biggest things for engineers particularly, um, and for managers of engineers, is resistance to documentation and the ideas about documentation that people have. So, um, again, when you're that very small company, it's very easy to just know what's going on. As you grow, what happens is new people come into your team and they have the same questions that have been asked and answered before, or that were just known things. So you get this pattern where you repeatedly get the same information being requested, and it's very nice and normal to have conversations. It builds teams. Um, but there's this key phrase, which is 'documentation is automation', right? Engineers understand automation. They understand why automation is required to scale, but they tend to completely discount that when it comes to documentation. Almost every engineer that I've ever met hates writing documentation. Not everyone, but almost everyone. Uh, but if you go and speak to engineers about what they need to start working with a new product, and again, we think about this as a product, um, they'll say, of course, I need some documentation. Uh, and if you dive into that, they don't really want fancy YouTube videos, although sometimes those help people overcome a resistance to learning. Um, but having anything at all is useful, right? And this is the key learning about documentation: you need to treat it a little bit like you treat code, right? There's a very natural, um, observation from most engineers: well, if I write a document about this, that document is just going to sit there and rot, and then it will be worse than useless because it will say the wrong thing. Which is absolutely true. But the problem there is that someone let it sit there and rot, right? It shouldn't be that way. You need the documentation to scale out; you need these pieces to support new people coming into the company and to actually reduce the overhead of communication, because the more people you have, the more different directions of communication you have, and the more costly it gets for the organization. Documentation is boring. It's old-fashioned, but it is the solution that works for fixing that.
The only other thing I'm going to say about this is mindset: it's really important to teach engineers what to document, right? Get them away from this mindset that documentation means writing massive reams and reams of text explaining things in detail. It's about, you know, documenting the right things in the right place. So at code level, commenting: saying not what the code there does but, more importantly, generally, why it does that. You know, what decision was made that led to that? What customer requirement led to that? What piece of regulation led to that? Linking out to the resources that explain that. And then at slightly higher levels, making things discoverable. So we talk in DevEx about things like, um, service catalogs, so people can find out what services are running, what APIs are available internally. But documentation also has to be structured in a way that meets the use cases. So instead of having individual departments dropping little bits of information all over a wiki with an arcane structure, have a centralized resource. Again, that's one thing that I did in a bigger company. I came into the platform team and said, "Nobody can find any information about your platform. You actually need a central website, and you need to promote that website and tell people, 'Hey, this is here. This is how you get the information that you need to understand this platform.' And actually include, at the very front of that page, why this platform is better than just going out somewhere else," to come back to the same topic.
Documentation isn't a silver bullet, but it's the closest thing I'm aware of in tech organizations, and it's the thing that we routinely get wrong.
Kovid Batra: Great. I think, uh, just in the interest of time, we'll have to stop here. But, uh, Geoffrey, this was something really, really interesting. I also explored a few things, uh, which were very new to me from the platform perspective. Uh, we would love to, uh, have you for another episode discussing and deep diving more into such topics. But for today, I think this is our time. And, uh, thank you once again for joining in, taking out time for this. Appreciate it.
Are you tired of feeling like you’re constantly playing catch-up with the latest AI tools, trying to figure out how they fit into your workflow? Many developers and managers share that sentiment, caught in a whirlwind of new technologies that promise efficiency but often lead to confusion and frustration.
The problem is clear: while AI offers exciting opportunities to streamline development processes, it can also amplify stress and uncertainty. Developers often struggle with feelings of inadequacy, worrying about how to keep up with rapidly changing demands. This pressure can stifle creativity, leading to burnout and a reluctance to embrace the innovations designed to enhance our work.
But there’s good news. By reframing your relationship with AI and implementing practical strategies, you can turn these challenges into opportunities for growth. In this blog, we’ll explore actionable insights and tools that will empower you to harness AI effectively, reclaim your productivity, and transform your software development journey in this new era.
The Current State of Developer Productivity
Recent industry reports reveal a striking gap between the available tools and the productivity levels many teams achieve. For instance, a survey by GitHub showed that 70% of developers believe repetitive tasks hamper their productivity. Moreover, over half of developers express a desire for tools that enhance their workflow without adding unnecessary complexity.
Understanding the Productivity Paradox
Despite investing heavily in AI, many teams find themselves in a productivity paradox. Research indicates that while AI can handle routine tasks, it can also introduce new complexities and pressures. Developers may feel overwhelmed by the sheer volume of tools at their disposal, leading to burnout. A 2023 report from McKinsey highlights that 60% of developers report higher stress levels due to the rapid pace of change.
Common Emotional Challenges
As we adapt to these changes, feelings of inadequacy and fear of obsolescence may surface. It’s normal to question our skills and relevance in a world where AI plays a growing role. Acknowledging these emotions is crucial for moving forward. For instance, it can be helpful to share your experiences with peers, fostering a sense of community and understanding.
Key Challenges Developers Face in the Age of AI
Understanding the key challenges developers face in the age of AI is essential for identifying effective strategies. This section outlines the evolving nature of job roles, the struggle to balance speed and quality, and the resistance to change that often hinders progress.
Evolving Job Roles
AI is redefining the responsibilities of developers. While automation handles repetitive tasks, new skills are required to manage and integrate AI tools effectively. For example, a developer accustomed to manual testing may need to learn how to work with automated testing frameworks like Selenium or Cypress. This shift can create skill gaps and adaptation challenges, particularly for those who have been in the field for several years.
Balancing Speed and Quality
The demand for quick delivery without compromising quality is more pronounced than ever. Developers often feel torn between meeting tight deadlines and ensuring their work meets high standards. For instance, a team working on a critical software release may rush through testing phases, risking quality for speed. This balancing act can lead to technical debt, which compounds over time and creates more significant problems down the line.
Resistance to Change
Many developers hesitate to adopt AI tools, fearing that they may become obsolete. This resistance can hinder progress and prevent teams from fully leveraging the benefits that AI can provide. A common scenario is when a developer resists using an AI-driven code suggestion tool, preferring to rely on their coding instincts instead. Encouraging a mindset shift within teams can help them embrace AI as a supportive partner rather than a threat.
Strategies for Boosting Developer Productivity
To effectively navigate the challenges posed by AI, developers and managers can implement specific strategies that enhance productivity. This section outlines actionable steps and AI applications that can make a significant impact.
Embracing AI as a Collaborator
To enhance productivity, it’s essential to view AI as a collaborator rather than a competitor. Integrating AI tools into your workflow can automate repetitive tasks, freeing up your time for more complex problem-solving. For example, using tools like GitHub Copilot can help developers generate code snippets quickly, allowing them to focus on architecture and logic rather than boilerplate code.
Recommended AI tools: Explore tools that integrate seamlessly with your existing workflow. Platforms like Jira for project management and Test.ai for automated testing can streamline your processes and reduce manual effort.
Actual AI Applications in Developer Productivity
AI offers several applications that can significantly boost developer productivity. Understanding these applications helps teams leverage AI effectively in their daily tasks.
Code generation: AI can automate the creation of boilerplate code. For example, tools like Tabnine can suggest entire lines of code based on your existing codebase, speeding up the initial phases of development and allowing developers to focus on unique functionality.
Code review: AI tools can analyze code for adherence to best practices and identify potential issues before they become problems. Tools like SonarQube provide actionable insights that help maintain code quality and enforce coding standards.
Automated testing: Implementing AI-driven testing frameworks can enhance software reliability. For instance, using platforms like Selenium and integrating them with AI can create smarter testing strategies that adapt to code changes, reducing manual effort and catching bugs early.
Intelligent debugging: AI tools assist in quickly identifying and fixing bugs. For example, Sentry offers real-time error tracking and helps developers trace their sources, allowing teams to resolve issues before they impact users.
Predictive analytics for sprints/project completion: AI can help forecast project timelines and resource needs. Tools like Azure DevOps leverage historical data to predict delivery dates, enabling better sprint planning and management.
Architectural optimization: AI tools suggest improvements to software architecture. For example, the AWS Well-Architected Tool evaluates workloads and recommends changes based on best practices, ensuring optimal performance.
Security assessment: AI-driven tools identify vulnerabilities in code before deployment. Platforms like Snyk scan code for known vulnerabilities and suggest fixes, allowing teams to deliver secure applications.
Continuous Learning and Professional Development
Ongoing education in AI technologies is crucial. Developers should actively seek opportunities to learn about the latest tools and methodologies.
Online resources and communities: Utilize platforms like Coursera, Udemy, and edX for courses on AI and machine learning. Participating in online forums such as Stack Overflow and GitHub discussions can provide insights and foster collaboration among peers.
Cultivating a Supportive Team Environment
Collaboration and open communication are vital in overcoming the challenges posed by AI integration. Building a culture that embraces change can lead to improved team morale and productivity.
Building peer support networks: Establish mentorship programs or regular check-ins to foster support among team members. Encourage knowledge sharing and collaborative problem-solving, creating an environment where everyone feels comfortable discussing their challenges.
Setting Effective Productivity Metrics
Rethink how productivity is measured. Focus on metrics that prioritize code quality and project impact rather than just the quantity of code produced.
Tools for measuring productivity: Use analytics tools like Typo that provide insights into meaningful productivity indicators. These tools help teams understand their performance and identify areas for improvement.
How Typo Enhances Developer Productivity
There are many developer productivity tools on the market for tech companies. One of them is Typo, the most comprehensive solution available.
Typo surfaces early indicators of developer well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams, measuring the team's productivity while keeping individuals' strengths and weaknesses in mind.
Here are three ways in which Typo measures the team productivity:
Software Development Lifecycle (SDLC) Visibility
Typo provides complete visibility into software delivery. It helps development teams and engineering leaders identify blockers in real time, predict delays, and maximize business impact. It lets teams dive deep into key DORA metrics and understand how well they perform against industry-wide benchmarks. Typo also provides real-time predictive analysis of how the team is performing, identifies best dev practices, and gives a comprehensive view across velocity, quality, and throughput.
This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks, ensuring that resources are utilized efficiently for enhanced productivity and better business outcomes.
AI Powered Code Review
Typo helps developers streamline the development process and enhance their productivity by identifying issues in your code and auto-fixing them using AI before merging to master. This means less time reviewing and more time for important tasks, keeping code error-free and making the whole process faster and smoother. The platform also uses optimized practices and built-in methods spanning multiple languages. Besides this, it standardizes code and enforces coding standards, which reduces the risk of a security breach and boosts maintainability.
Since the platform automates repetitive tasks, it allows development teams to focus on high-quality work. Moreover, it accelerates the review process and facilitates faster iterations by providing timely feedback. This offers insights into code quality trends and areas for improvement, fostering an engineering culture that supports learning and development.
Developer Experience
Typo surfaces early indicators of developers' well-being and actionable insights on the areas that need attention through signals from work patterns and continuous AI-driven pulse check-ins on the experience of the developers. These check-ins are built on a developer experience framework that triggers AI-driven pulse surveys.
Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.
Hence, by addressing these aspects, Typo’s holistic approach combines data-driven insights with proactive monitoring and strategic intervention to create a supportive and high-performing work environment. This leads to increased developer productivity and satisfaction.
Continuous Learning: Empowering Developers for Future Success
With its robust features tailored for the modern software development environment, Typo acts as a catalyst for productivity. By streamlining workflows, fostering collaboration, integrating with AI tools, and providing personalized support, Typo empowers developers and their managers to navigate the complexities of development with confidence. Embracing Typo can lead to a more productive, engaged, and satisfied development team, ultimately driving successful project outcomes.
Have you ever felt overwhelmed trying to maintain consistent code quality across a remote team? As more development teams shift to remote work, the challenges of code reviews only grow: slowed communication, lack of real-time feedback, and the creeping possibility of errors slipping through.
Moreover, think about how much time is lost waiting for feedback or having to rework code due to small, overlooked issues. When you're working remotely, these frustrations compound; suddenly, a task that should take hours stretches into days. You might be spending time on repetitive tasks like syntax checking, code formatting, and manually catching errors that could be handled more efficiently. Meanwhile, you're expected to deliver high-quality work without delays.
Fortunately, AI-driven tools offer a solution that can ease this burden. By automating the tedious aspects of code reviews, such as catching syntax errors and formatting inconsistencies, AI can give developers more time to focus on the creative and complex aspects of coding.
In this blog, we'll explore how AI can help remote teams tackle the difficulties of code reviews and how tools like Typo can further improve this process, allowing teams to focus on what truly matters: writing excellent code.
Remote work has introduced a unique set of challenges that impact the code review process. They are:
Communication barriers
When team members are scattered across different time zones, real-time discussions and feedback become more difficult. The lack of face-to-face interactions can hinder effective communication and lead to misunderstandings.
Delays in feedback
Without the immediacy of in-person collaboration, remote teams often experience delays in receiving feedback on their code changes. This can slow down the development cycle and frustrate team members who are eager to iterate and improve their code.
Increased risk of human error
Complex code reviews conducted remotely are more prone to human oversight and errors. When team members are not physically present to catch each other's mistakes, the risk of introducing bugs or quality issues into the codebase increases.
Emotional stress
Remote work can take a toll on team morale, with feelings of isolation and the pressure to maintain productivity weighing heavily on developers. This emotional stress can negatively impact collaboration and code quality if not properly addressed.
Ho͏w AI Ca͏n͏ Enhance ͏Remote Co͏d͏e Reviews
AI-powered tools are transforming code reviews, helping teams automate repetitive tasks, improve accuracy, and ensure code quality. Let’s explore how AI dives deep into the technical aspects of code reviews and helps developers focus on building robust software.
NLP for Code Comments
Natural Language Processing (NLP) is essential for understanding and interpreting code comments, which often provide critical context:
Tokenization and Parsing
NLP breaks code comments into tokens (individual words or symbols) and parses them to understand the grammatical structure. For example, "This method needs refactoring due to poor performance" would be tokenized into words like ["This", "method", "needs", "refactoring"], and parsed to identify the intent behind the comment.
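To make this concrete, here's a minimal, illustrative Python sketch of that tokenization step. Real NLP pipelines use trained tokenizers and parsers; this toy version just splits on word characters and looks for hand-picked action words (the ACTION_WORDS set is an assumption for illustration, not any tool's actual vocabulary):

```python
import re

comment = "This method needs refactoring due to poor performance"

# Naive tokenization: split the comment into word tokens
tokens = re.findall(r"[A-Za-z0-9_]+", comment)
print(tokens)
# ['This', 'method', 'needs', 'refactoring', 'due', 'to', 'poor', 'performance']

# A toy "parse": tag the tokens that signal an action request
# (illustrative word list, not a real model)
ACTION_WORDS = {"needs", "refactoring", "optimize", "fix"}
signals = [t for t in tokens if t.lower() in ACTION_WORDS]
print(signals)  # ['needs', 'refactoring']
```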
Sentiment Analysis
Using algorithms like Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks, AI can analyze the tone of code comments. For example, if a reviewer comments, "Great logic, but performance could be optimized," AI might classify it as having a positive sentiment with a constructive critique. This analysis helps distinguish between positive reinforcement and critical feedback, offering insights into reviewer attitudes.
Intent Classification
AI models can categorize comments based on intent. For example, comments like "Please optimize this function" can be classified as requests for changes, while "What is the time complexity here?" can be identified as questions. This categorization helps prioritize actions for developers, ensuring important feedback is addressed promptly.
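A production system would learn intent from labeled review comments; as a rough sketch of the idea, even a rule-based classifier along these lines (the keyword lists are illustrative assumptions, not any tool's actual rules) can separate the two examples above:

```python
def classify_intent(comment: str) -> str:
    """Toy rule-based intent classifier; real tools learn this from data."""
    text = comment.lower().strip()
    # Questions usually end with '?' or start with an interrogative word
    if text.endswith("?") or text.split()[0] in {"what", "why", "how", "is", "does"}:
        return "question"
    # Imperative keywords suggest a change request
    if any(w in text for w in ("please", "should", "optimize", "fix", "refactor")):
        return "change_request"
    return "remark"

print(classify_intent("Please optimize this function"))      # change_request
print(classify_intent("What is the time complexity here?"))  # question
print(classify_intent("Nice use of dataclasses"))            # remark
```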
Static Code Analysis
Static code analysis goes beyond syntax checking to identify deeper issues in the code:
Syntax and Semantic Analysis
AI-based static analysis tools not only check for syntax errors but also analyze the semantics of the code. For example, if the tool detects a loop that could potentially cause an infinite loop or identifies an undefined variable, it flags these as high-priority errors. AI tools use machine learning to constantly improve their ability to detect errors in Java, Python, and other languages.
Pattern Recognition
AI recognizes coding patterns by learning from vast datasets of codebases. For example, it can detect when developers frequently forget to close file handlers or incorrectly handle exceptions, identifying these as anti-patterns. Over time, AI tools can evolve to suggest better practices and help developers adhere to clean code principles.
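As a rough illustration of this kind of pattern detection, here's a small self-contained Python sketch built on the standard ast module. It flags the two anti-patterns mentioned above, bare except: blocks and open() calls that may leave file handles unclosed. A real tool would be far more sophisticated; this only shows the mechanics:

```python
import ast

SOURCE = '''
def load(path):
    f = open(path)        # file handle never closed
    try:
        return f.read()
    except:               # bare except hides real errors
        return None
'''

class SimpleLinter(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_ExceptHandler(self, node):
        # node.type is None for a bare 'except:'
        if node.type is None:
            self.findings.append((node.lineno, "bare 'except:' swallows all errors"))
        self.generic_visit(node)

    def visit_Call(self, node):
        # Flag open() calls; a fuller tool would check whether the call
        # sits inside a 'with' block before reporting it
        if isinstance(node.func, ast.Name) and node.func.id == "open":
            self.findings.append((node.lineno, "open() call - ensure it is closed or used with 'with'"))
        self.generic_visit(node)

linter = SimpleLinter()
linter.visit(ast.parse(SOURCE))
for lineno, msg in linter.findings:
    print(f"line {lineno}: {msg}")
```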
Vulnerability Detection
AI, trained on datasets of known vulnerabilities, can identify security risks in the code. For example, tools like Typo or Snyk can scan JavaScript or C++ code and flag potential issues like SQL injection, buffer overflows, or improper handling of user input. These tools improve security audits by automating the identification of security loopholes before code goes into production.
Code Similarity Detection
Finding duplicate or redundant code is crucial for maintaining a clean codebase:
Code Embeddings
Neural networks convert code into embeddings (numerical vectors) that represent the code in a high-dimensional space. For example, two pieces of code that perform the same task but use different syntax would be mapped closely in this space. This allows AI tools to recognize similarities in logic, even if the syntax differs.
Similarity Metrics
AI employs metrics like cosine similarity to compare embeddings and detect redundant code. For example, if two functions across different files are 85% similar based on cosine similarity, AI will flag them for review, allowing developers to refactor and eliminate duplication.
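Here's a minimal sketch of that comparison in Python with NumPy. The embedding vectors are made-up toy values standing in for what a trained model would produce, and the 0.85 threshold mirrors the example above:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings for two functions that implement the same logic
emb_func_a = np.array([0.91, 0.02, 0.40, 0.11])
emb_func_b = np.array([0.88, 0.05, 0.38, 0.15])

score = cosine_similarity(emb_func_a, emb_func_b)
print(f"similarity: {score:.2f}")
if score > 0.85:  # illustrative threshold, mirroring the 85% example
    print("flag for review: possible duplicate logic")
```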
Duplicate Code Detection
Tools like Typo use AI to identify duplicate or near-duplicate code blocks across the codebase. For example, if two modules use nearly identical logic for different purposes, AI can suggest merging them into a reusable function, reducing redundancy and improving maintainability.
Automated Code Suggestions
AI doesn’t just point out problems—it actively suggests solutions:
Generative Models
Models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) can create new code snippets. For example, if a developer writes a function that opens a file but forgets to handle exceptions, an AI tool can generate the missing try-catch block to improve error handling.
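To illustrate the kind of patch such a model might propose (this is a hand-written example, not actual tool output), consider a file-reading function before and after the suggested error handling:

```python
# Developer's original function: no error handling
def read_config(path):
    with open(path) as f:
        return f.read()

# The kind of patch an AI assistant might propose (illustrative only)
def read_config_safe(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # Surface a clear, actionable error instead of crashing
        raise RuntimeError(f"config file not found: {path}")
```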
Contextual Understanding
AI analyzes code context and suggests relevant modifications. For example, if a developer changes a variable name in one part of the code, AI might suggest updating the same variable name in other related modules to maintain consistency. Tools like GitHub Copilot use models such as GPT to generate code suggestions in real-time based on context, making development faster and more efficient.
Reinforcement Learning for Code Optimization
Reinforcement learning (RL) helps AI continuously optimize code performance:
Reward Functions
In RL, a reward function is defined to evaluate the quality of the code. For example, AI might reward code that reduces runtime by 20% or improves memory efficiency by 30%. The reward function measures not just performance but also readability and maintainability, ensuring a balanced approach to optimization.
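As a sketch of what such a reward function could look like, here's a toy Python version. The weights and inputs are illustrative assumptions, not values from any real system:

```python
def reward(runtime_gain: float, memory_gain: float, readability_delta: float) -> float:
    """
    Toy reward for an RL code-optimization agent.
    runtime_gain / memory_gain: fractional improvements (0.2 == 20% better)
    readability_delta: change in a readability score, penalized when negative
    Weights are illustrative, not tuned.
    """
    w_runtime, w_memory, w_read = 0.5, 0.3, 0.2
    return w_runtime * runtime_gain + w_memory * memory_gain + w_read * readability_delta

# A refactor that is 20% faster and 30% lighter but slightly less readable
print(reward(0.20, 0.30, -0.05))  # 0.18
```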
Agent Training
Through trial and error, AI agents learn to refactor code to meet specific objectives. For example, an agent might experiment with different ways of parallelizing a loop to improve performance, receiving positive rewards for optimizations and negative rewards for regressions.
Continuous Improvement
The AI's policy, or strategy, is continuously refined based on past experiences. This allows AI to improve its code optimization capabilities over time. For example, DeepMind's AlphaCode uses reinforcement learning to compete in coding competitions, showing that AI can autonomously write and optimize highly efficient algorithms.
AI-Assisted Code Review Tools
Modern AI-assisted code review tools offer both rule-based enforcement and machine learning insights:
Rule-Based Systems
These systems enforce strict coding standards. For example, AI tools like ESLint or Pylint enforce coding style guidelines in JavaScript and Python, ensuring developers follow industry best practices such as proper indentation or consistent use of variable names.
Machine Learning Models
AI models can learn from past code reviews, understanding patterns in common feedback. For instance, if a team frequently comments on inefficient data structures, the AI will begin flagging those cases in future code reviews, reducing the need for human intervention.
Hybrid Approaches
Combining rule-based and ML-powered systems, hybrid tools provide a more comprehensive review experience. For example, DeepCode uses a hybrid approach to enforce coding standards while also learning from developer interactions to suggest improvements in real-time. These tools ensure code is not only compliant but also continuously improved based on team dynamics and historical data.
Incorporating AI into code reviews takes your development process to the next level. By automating error detection, analyzing code sentiment, and suggesting optimizations, AI enables your team to focus on what matters most: building high-quality, secure, and scalable software. As these tools continue to learn and improve, the benefits of AI-assisted code reviews will only grow, making them indispensable in modern development environments.
Practical Steps to Implement AI-Driven Code Reviews
To effectively integrate AI into your remote team's code review process, consider the following steps:
Evaluate and choose AI tools: Research and evaluate AI-powered code review tools that align with your team's needs and development workflow.
Start with a gradual approach: Use AI tools to support human-led code reviews before gradually automating simpler tasks. This will allow your team to become comfortable with the technology and see its benefits firsthand.
Foster a culture of collaboration: Encourage your team to view AI as a collaborative partner rather than a replacement for human expertise. Emphasize the importance of human oversight, especially for complex issues that require nuanced judgment.
Provide training and resources: Equip your team with the necessary training and resources to use AI code review tools effectively. This includes tutorials, documentation, and opportunities for hands-on practice.
Leveraging Typo to Streamline Remote Code Reviews
Typo is an AI-powered tool designed to streamline the code review process for remote teams. By integrating seamlessly with your existing development tools, Typo makes it easier to manage feedback, improve code quality, and collaborate across time zones.
Some key benefits of using Typo include:
AI code analysis
Code context understanding
Auto debugging with detailed explanations
Proprietary models with known frameworks (OWASP)
Auto PR fixes
The Human Element: Combining AI and Human Expertise
While AI can significantly enhance the code review process, it's essential to maintain a balance between AI and human expertise. AI is not a replacement for human intuition, creativity, or judgment, but rather a supportive tool that augments and empowers developers.
By using AI to handle repetitive tasks and provide real-time feedback, developers can focus on higher-level issues that require human problem-solving skills. This division of labor allows teams to work more efficiently and effectively while still maintaining the human touch that is crucial for complex problem-solving and innovation.
Overcoming Emotional Barriers to AI Integration
Introducing new technologies can sometimes be met with resistance or fear. It's important to address these concerns head-on and help your team understand the benefits of AI integration.
Some common fears, such as job replacement or disruption of established workflows, should be directly addressed. Reassure your team that AI is designed to reduce workload and enhance productivity, not replace human expertise. Foster an environment that embraces new technologies while focusing on the long-term benefits of improved efficiency, collaboration, and job satisfaction.
Elevate Your Code Quality: Embrace AI Solutions
AI-driven code reviews offer a promising solution for remote teams looking to maintain code quality, foster collaboration, and enhance productivity. By embracing AI tools like Typo, you can streamline your code review process, reduce delays, and empower your team to focus on writing great code.
Remember that AI supports and empowers your team; it does not replace human expertise. Explore and experiment with AI code review tools in your teams, and watch as your remote collaboration reaches new heights of efficiency and success.
The software development field is constantly evolving. While this helps deliver products and services quickly to end-users, it also means that developers might take shortcuts to deliver on time. This not only reduces the quality of the software but also leads to increased technical debt.
But with new trends and technologies comes generative AI. It is a promising solution for the software development industry, one that can ultimately lead to higher-quality code and decreased technical debt.
Let’s explore more about how generative AI can help manage technical debt!
Technical debt: An overview
Technical debt arises when development teams take shortcuts to develop projects. While this gives them short-term gains, it increases their workload in the long run.
In other words, developers prioritize quick solutions over effective solutions. The four main causes behind technical debt are:
Business causes: Prioritizing business needs and the company’s evolving conditions can put pressure on development teams to cut corners. It can result in preponing deadlines or reducing costs to achieve desired goals.
Development causes: New technologies evolve rapidly, making it difficult for teams to switch or upgrade quickly, especially when they are already dealing with the burden of bad code.
Human resources causes: Unintentional technical debt can occur when development teams lack the necessary skills or knowledge to implement best practices. It can result in more errors and insufficient solutions.
Resources causes: When teams don’t have time or sufficient resources, they take shortcuts by choosing the quickest solution. It can be due to budgetary constraints, insufficient processes and culture, deadlines, and so on.
Why Is Generative AI Important for Code Management?
As per McKinsey’s study,
“… 10 to 20 percent of the technology budget dedicated to new products is diverted to resolving issues related to tech debt. More troubling still, CIOs estimated that tech debt amounts to 20 to 40 percent of the value of their entire technology estate before depreciation.”
But there’s a solution to it. Handling tech debt is possible and can have a significant impact:
“Some companies find that actively managing their tech debt frees up engineers to spend up to 50 percent more of their time on work that supports business goals. The CIO of a leading cloud provider told us, ‘By reinventing our debt management, we went from 75 percent of engineer time paying the [tech debt] ‘tax’ to 25 percent. It allowed us to be who we are today.”
There are many traditional ways to minimize technical debt, including manual testing, refactoring, and code review. However, these manual tasks take a lot of time and effort, and in an ever-evolving industry they are often overlooked or delayed.
Since generative AI tools are on the rise, they are increasingly seen as the right approach to code management, which in turn lowers technical debt. These tools have already started reaching the market: they integrate into software development environments, gather and process data across the organization in real time, and are then leveraged to lower tech debt.
Some of the key benefits of generative AI are:
Identify redundant code: Generative AI tools like Codeclone analyze code and suggest improvements. This further helps in improving code readability and maintainability and subsequently, minimizing technical debt.
Generates high-quality code: Automated code review tools such as Typo help in an efficient and effective code review process. They understand the context of the code and accurately fix issues which leads to high-quality code.
Automate manual tasks: Tools like Github Copilot automate repetitive tasks and let the developers focus on high-quality tasks.
Optimal refactoring strategies: AI tools like Deepcode leverage machine learning models to understand code semantics, break it down into more manageable functions, and improve variable namings.
Case studies and real-life examples
Many industries have started adopting generative AI technologies already for tech debt management. These AI tools assist developers in improving code quality, streamlining SDLC processes, and cost savings.
Below are success stories of a few well-known organizations that have implemented these tools in their organizations:
Microsoft uses Diffblue cover for Automated Testing and Bug Detection
Microsoft is a global technology leader that implemented Diffblue Cover for automated testing. Through this generative AI, Microsoft has seen a considerable reduction in the number of bugs during the development process. It also ensures that new features don't compromise existing functionality, which positively impacts code quality. This helps with faster, more reliable releases and cost savings.
Google implements Codex for code documentation
Google is an internet search and technology giant that implemented OpenAI's Codex to streamline its code documentation processes. Integrating this AI tool reduced the time and effort spent on manual documentation tasks. The resulting consistency across the entire codebase enhances code quality and allows developers to focus more on core tasks.
Facebook adopts CodeClone to identify redundancy
Facebook, a leading social media platform, has adopted a generative AI tool, CodeClone, for identifying and eliminating redundant code across its extensive codebase. This resulted in fewer inconsistencies and a more streamlined, efficient codebase, which in turn led to faster development cycles.
Pioneer Square Labs uses GPT-4 for higher-level planning
Pioneer Square Labs, a studio that launches technology startups, adopted GPT-4 to handle mundane tasks so developers can focus on core work. The tool also helps with high-level planning and assists in writing code, streamlining the development process.
How Does Typo Leverage Generative AI to Reduce Technical Debt?
Typo’s automated code review tool enables developers to merge clean, secure, high-quality code, faster. It lets developers catch issues related to maintainability, readability, and potential bugs and can detect code smells.
Typo also auto-analyzes your codebase and pull requests to find issues and auto-generates fixes before you merge to master. Its Auto-Fix feature leverages GPT 3.5 Pro, trained on millions of open-source records as well as exclusive anonymized private data, to generate line-by-line code snippets wherever an issue is detected in the codebase.
As a result, Typo helps reduce technical debt by detecting and addressing issues early in the development process, preventing the introduction of new debt, and allowing developers to focus on high-quality tasks.
Issue detection by Typo
Autofixing the codebase with an option to directly create a Pull Request
Key features
Supports top 10+ languages
Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.
Fix every code issue
Typo understands the context of your code and quickly finds and fixes any issues accurately. Hence, empowering developers to work on software projects seamlessly and efficiently.
Efficient code optimization
Typo uses optimized practices and built-in methods spanning multiple languages. Hence, reducing code complexity and ensuring thorough quality assurance throughout the development process.
Professional coding standards
Typo standardizes code and reduces the risk of a security breach.
While generative AI can help reduce technical debt by analyzing code quality, removing redundant code, and automating the code review process, many engineering leaders believe technical debt can be increased too.
Bob Quillin, vFunction chief ecosystem officer, stated, “These new applications and capabilities will require many new MLOps processes and tools that could overwhelm any existing, already overloaded DevOps team.”
They aren’t wrong either!
Technical debt can increase when organizations don't properly document processes and train development teams to implement generative AI the right way. When AI tools are adopted hastily, without considering the long-term implications, they can instead increase developers' workload and add to technical debt. To use generative AI responsibly, keep the following practices in mind:
Ethical guidelines
Establish ethical guidelines for the use of generative AI in organizations. Understand the potential ethical implications of using AI to generate code, such as the impact on job displacement, intellectual property rights, and biases in AI-generated output.
Diverse training data quality
Ensure the quality and diversity of training data used to train generative AI models. When training data is biased or incomplete, these AI tools can produce biased or incorrect output. Regularly review and update training data to improve the accuracy and reliability of AI-generated code.
Human oversight
Maintain human oversight throughout the generative AI process. While AI can generate code snippets and provide suggestions, the final decision should rest with developers, who must review and validate the output to ensure correctness, security, and adherence to coding standards.
Most importantly, human intervention is a must when using these tools. After all, it's human judgment, creativity, and domain knowledge that drive the final decision. Generative AI is indeed helpful in reducing developers' manual work; however, it needs to be used properly.
Conclusion
In a nutshell, generative artificial intelligence tools can help manage technical debt when used correctly. These tools help to identify redundancy in code, improve readability and maintainability, and generate high-quality code.
However, it is to be noted that these AI tools shouldn't be used independently. They must work only as the developers' assistants, and teams must use them transparently and fairly.
The code review process is one of the major reasons for developer burnout. It not only hinders developers' productivity but also negatively affects software delivery. Yet it is a crucial aspect of software development that shouldn't be compromised.
So, what is the alternative to manual code review? Let's dive in to find out:
The Current State of Manual Code Review
Manual code reviews are crucial for the software development process. It can help identify bugs, mentor new developers, and promote a collaborative culture among team members. However, it comes with its own set of limitations.
Software development is a demanding job with lots of projects and processes. Code review when done manually, can take a lot of time and effort from developers. Especially, when reviewing an extensive codebase. It not only prevents them from working on other core tasks but also leads to fatigue and burnout, resulting in decreased productivity.
Since the reviewers have to read the source code line by line to identify issues and vulnerabilities, it can overwhelm them and they may miss out on some of the critical paths. This can result in human errors especially when the deadline is approaching. Hence, negatively impacting project efficiency and straining team resources.
In short, manual code review demands significant time, effort, and coordination from the development team.
This is when AI code review comes to the rescue. AI code review tools are becoming increasingly popular in today’s times. Let’s read more about AI code review and why is it important for developers:
What is AI Code Review?
AI code review is an automated process that examines and analyzes the code of software applications. It uses artificial intelligence and machine learning techniques to identify patterns, detect potential problems, common programming mistakes, and potential security vulnerabilities. These AI code review tools are entirely based on data so they aren’t biased and can read vast amounts of code in seconds.
Why Is AI Important in the Code Review Process?
Augmenting human efforts with AI code review has various benefits:
Enhance Overall Quality
Generative AI in code review tools can detect issues like potential bugs, security vulnerabilities, code smells, bottlenecks, and more, issues the human code review process usually overlooks. By identifying patterns and recommending code improvements, it enhances efficiency and maintainability and reduces technical debt. This leads to robust, reliable software that meets the highest quality standards.
Improve Productivity
AI-powered tools can scan and analyze large volumes of code within minutes. They not only detect potential issues but also suggest improvements according to coding standards and practices. Immediate feedback lets the development team catch errors early in the development cycle, saving time otherwise spent on manual inspections so developers can focus on the more intricate and imaginative parts of their work.
Better Compliance with Coding Standards
The automated code review process ensures that code conforms to coding standards and best practices. It allows code to be more readable, understandable, and maintainable. Hence, improving the code quality. Moreover, it enhances teamwork and collaboration among developers as all of them adhere to the same guidelines and consistency in the code review process.
Enhance Accuracy
The major disadvantage of manual code reviews is that they are prone to human error and bias. These mistakes can compound into critical issues related to structural quality and architectural decisions, which negatively impact the software application. Generative AI in code reviews can analyze code much faster and more consistently than humans, maintaining accuracy and reducing bias since it is entirely data-driven.
Increase Scalability
When software projects grow in complexity and size, manual code reviews become increasingly time-consuming. It may also struggle to keep up with the scale of these codebases which further delay the code review process. As mentioned before, AI code review tools can handle large codebases in a fraction of a second and can help development teams maintain high standards of code quality and maintainability.
How Does Typo Leverage Gen AI to Automate Code Reviews?
Typo's automated code review tool not only enables developers to merge clean, secure, high-quality code faster, but also lets them catch issues related to maintainability, readability, and potential bugs, and detects code smells. It auto-analyzes your codebase and pull requests to find issues and auto-generates fixes before you merge to master.
Typo’s Auto-Fix feature leverages GPT 3.5 Pro to generate line-by-line code snippets where the issue is detected in the codebase. This means less time reviewing and more time for important tasks. As a result, making the whole process faster and smoother.
Issue detection by Typo
Auto fixing the codebase with an option to directly create a Pull Request
Key Features
Supports Top 10+ Languages
Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.
Fix Every Code Issue
Typo understands the context of your code and quickly finds and fixes any issues accurately. Hence, empowering developers to work on software projects seamlessly and efficiently.
Efficient Code Optimization
Typo uses optimized practices and built-in methods spanning multiple languages. Hence, reducing code complexity and ensuring thorough quality assurance throughout the development process.
Professional Coding Standards
Typo standardizes code and reduces the risk of a security breach.
Comparing Typo with Other AI Code Review Tools
There are other popular AI code review tools available in the market. Let’s compare how we stack against others:
| | Typo | Sonarcloud | Codacy | Codecov |
| --- | --- | --- | --- | --- |
| Code analysis | AI analysis and static code analysis | No | No | No |
| Code context | Deep understanding | No | No | No |
| Proprietary models | Yes | No | No | No |
| Auto debugging | Automated debugging with detailed explanations | Manual | No | No |
| Auto pull request | Automated pull requests and fixes | No | No | No |
AI vs. Humans: The Future of Code Reviews?
AI code review tools are becoming increasingly popular. One question that has been on everyone’s mind is whether these AI code review tools will take away developers’ jobs.
The answer is NO.
Generative AI in code reviews is designed to enhance and streamline the development process. It lets the developers automate the repetitive and time-consuming tasks and focus on other core aspects of software applications. Moreover, human judgment, creativity, and domain knowledge are crucial for software development that AI cannot fully replicate.
While these tools excel at certain tasks like analyzing codebase, identifying code patterns, and software testing, they still cannot fully understand complex business requirements, and user needs, or make subjective decisions.
As a result, the combination of AI code review tools and developers’ intervention is an effective approach to ensure high-quality code.
Conclusion
The tech industry is demanding. The software engineering team needs to stay ahead of the industry trends. New AI tools and technologies can help them complement their skills and expertise and make their task easier.
AI in the code review process offers remarkable benefits, including reduced human error and consistent accuracy. But remember: these tools are here to assist you with your tasks, not to define your whole strategy or replace you.
How Generative AI Is Revolutionising Developer Productivity
Generative AI has become a transformative force in the tech world. And it isn't going to stop anytime soon! It will continue to have a major impact, especially in the software development industry. Generative AI, when used in the right way, can help developers save time and effort. It allows them to focus on core tasks and upskilling. It further helps streamline various stages of the SDLC and improves developer productivity. In this article, let's dive deeper into how generative AI can positively impact developer productivity.
What is Generative AI?
Generative AI is a category of AI models and tools designed to create new content: images, videos, text, music, or code. It uses various techniques, including neural networks and deep learning algorithms, to generate new content. Generative artificial intelligence holds a great advantage for software developers in improving their productivity. It not only improves code quality and delivers better products and services but also allows them to stay ahead of their competitors. Below are a few benefits of generative AI:
Increases Efficiency
With the help of Generative AI, developers can automate tasks that are either repetitive or don’t require much attention. This saves a lot of time and energy and allows developers to be more productive and efficient in their work. Hence, they can focus on more complex and critical aspects of the software without constantly stressing about other work.
Improves Quality
Generative AI can help in minimizing errors and address potential issues early. When they are set as per the coding standards, it can contribute to more effective coding reviews. This increases the code quality and decreases costly downtime and data loss.
Helps in Learning and Assisting with Work
Generative AI can assist developers by analyzing and generating examples of well-structured code, providing suggestions for refactoring, generating code snippets, and detecting blind spots. This further helps developers in upskilling and gaining knowledge about their tasks.
Cost Savings
Integrating generative AI tools can reduce costs. It enables developers to use existing codebases effectively and complete projects faster, even with smaller teams. Generative AI can streamline the stages of the software development life cycle and get the most out of a limited budget.
Predictive Analytics
Generative AI can help in detecting potential issues in the early stages by analyzing historical data. It can also make predictions about future trends. This allows developers to make informed decisions about their projects, streamline their workflow, and hence, deliver high-quality products and services.
How does Generative AI Help Software Developers?
Below are four key areas in which Generative AI can be a great asset to software developers:
It Eliminates Manual and Repetitive Tasks
Generative AI can take up the manual and routine tasks of software development teams, such as test automation, completing coding statements, and writing documentation. Developers can provide the prompt to generative AI, i.e., information regarding their code and documentation that adheres to best practices, and it can generate the required content accordingly. It minimizes human errors and increases accuracy. This frees up developers' creativity and problem-solving skills, letting them focus more on solving complex business challenges and fast-tracking new software capabilities. Hence, it helps in faster delivery of products and services to end users.
It Helps Developers to Tackle New Challenges
When developers face challenges or obstacles in their projects, they can turn to these AI tools for assistance. These tools can track performance, provide feedback, offer predictions, and find the optimal path to complete tasks. With the right, clear prompts, they can provide problem-specific recommendations and proven solutions. This prevents developers from being stressed out by certain tasks; they can instead use their time and energy for other important tasks or take breaks. It increases their productivity and performance and hence improves the overall developer experience.
It Helps in Creating the First Draft of the Code
With the help of generative artificial intelligence, developers can get helpful code suggestions and generate initial drafts. This can be done by entering a prompt in a separate window or within the IDE used to develop the software. It prevents developers from entering a slump and helps them get into the flow sooner. Besides this, these AI tools can also assist in root cause analysis and generate new system designs. Hence, it allows developers to reflect on code at a higher, more abstract level and focus more on what they want to build.
It Helps in Making Changes to Existing Code Faster
Generative AI can accelerate updates to existing code. Developers simply provide the criteria, and the AI tool proceeds from there. This usually covers tasks that get sidelined due to workload and lack of time, for example, refactoring existing code to make small changes and improve readability and performance. As a result, developers can focus on high-level design and critical decision-making without worrying much about existing tasks.
How does Generative AI Improve Developer Productivity?
Below are a few ways in which Generative AI can have a positive impact on developer productivity:
Focus on Meaningful Tasks
As Generative AI tools take up tedious and repetitive tasks, they allow developers to give their time and energy to meaningful activities. This avoids distractions and prevents them from stress and burnout. Hence, it increases their productivity and positively impacts the overall developer experience.
Assist in their Learning Graph
Generative AI lets developers be less dependent on their seniors and co-workers, since they can gain practical insights and examples from these AI tools. This allows them to enter their flow state faster and reduces their stress level.
Assist in Pair Programming
Through Generative AI, developers can collaborate with other developers easily. These AI tools help in providing intelligent suggestions and feedback during coding sessions. This stimulates discussion between them and leads to better and more creative solutions.
Increase the Pace of Software Development
Generative AI helps in the continuous delivery of products and services and drives business strategy. It addresses potential issues in the early stages and provides suggestions for improvements. Hence, it not only accelerates the phases of SDLC but improves overall quality as well.
Typo auto-analyzes your code and pull requests to find issues and suggests auto-fixes before getting merged.
Use Case
The code review process is time-consuming. Typo enables developers to find issues as soon as PR is raised and shows alerts within the git account. It gives you a detailed summary of security, vulnerability, and performance issues. To streamline the whole process, it suggests auto-fixes and best practices to move things faster and better.
Github Copilot is an AI pair programmer that provides autocomplete style suggestions to your code.
Use Case
Coding is an integral part of your software development project. However, when done manually, it takes a lot of effort. GitHub Copilot picks suggestions from your current or related code files and lets you test and select code to perform different actions. It also ensures that vulnerable coding patterns are filtered out and blocks problematic public code suggestions.
Tabnine is an AI-powered code completion tool that uses deep learning to suggest code as you type.
Use Case
Writing code manually can prevent you from focusing on other core activities. Tabnine provides increasingly accurate suggestions over time based on your coding habits and can personalize code too. It supports programming languages such as JavaScript and Python and integrates with popular IDEs for speedy setup and reduced context switching.
ChatGPT is a language model developed by OpenAI to understand prompts and generate human-like texts.
Use Case
Developers need to brainstorm ideas and get feedback on their projects. This is when ChatGPT comes to their rescue. This AI tool helps them quickly find answers about coding, technical documentation, programming concepts, and much more. It uses natural language to understand questions and provide relevant suggestions.
Mintlify is an AI-powered documentation writer that allows developers to quickly and accurately generate code documentation.
Use Case
Code documentation can be a tedious process. Mintlify can analyze code, quickly understand complicated functions, and include built-in analytics to help developers understand how users engage with the documentation. It also has a Mintlify chat that reads documents and answers user questions instantly.
How to Mitigate Risks Associated with Generative AI?
No matter how effective generative AI is becoming nowadays, it still comes with defects and errors. Its output is not always correct; hence, human review is important after handing certain tasks to AI tools. Below are a few ways you can reduce risks related to generative AI:
Implement Quality Control Practices
Develop guidelines and policies to address ethical challenges such as fairness, privacy, transparency, and accuracy of software development projects. Make sure to monitor a system that tracks model accuracy, performance metrics, and potential biases.
Provide Generative AI Training
Offer mentorship and training regarding Generative AI. This will increase AI literacy across departments and mitigate the risk. Help them know how to effectively utilize these tools and know their capabilities and limitations.
Understand AI is an Assistant, Not a Replacement
Make your developers understand that these generative tools should be viewed as assistants only. Encourage collaboration between these tools and human operators to leverage the strength of AI.
Conclusion
In a nutshell, Generative AI stands as a game-changer in the software development industry. When they are harnessed effectively, they can bring a multitude of benefits to the table. However, ensure that your developers approach the integration of Generative AI with caution.
Speed matters in software development. Top-performing teams ship code in just two days, while many others lag at seven.
Software cycle time directly impacts product delivery and customer satisfaction - and it’s equally essential for your team's confidence.
CTOs and engineering leaders can’t reduce cycle time just by working faster. They must optimize processes, identify and eliminate bottlenecks, and consistently deliver value.
In this post, we’ll break down the key strategies to reduce cycle time.
What is Software Cycle Time?
Software cycle time measures how long it takes for code to go from the first commit to production.
It tracks the time a pull request (PR) spends in various stages of the pipeline, helping teams identify and address workflow inefficiencies.
Cycle time consists of four key components:
Coding Time: The time taken from the first commit to raising a PR for review.
Pickup Time: The delay between the PR being raised and the first review comment.
Review Time: The duration from the first review comment to PR approval.
Merge Time: The time between PR approval and merging into the main branch.
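To make these stages concrete, here's a minimal Python sketch that computes each component from PR event timestamps. The timestamps are made-up sample data standing in for what your git provider would report:

```python
from datetime import datetime

# Illustrative PR timestamps (ISO 8601), assumed available from your git provider
events = {
    "first_commit": "2024-05-06T09:15:00",
    "pr_opened":    "2024-05-07T11:00:00",
    "first_review": "2024-05-08T10:30:00",
    "pr_approved":  "2024-05-08T16:45:00",
    "pr_merged":    "2024-05-08T17:05:00",
}

t = {k: datetime.fromisoformat(v) for k, v in events.items()}

coding_time = t["pr_opened"] - t["first_commit"]   # first commit -> PR raised
pickup_time = t["first_review"] - t["pr_opened"]   # PR raised -> first review
review_time = t["pr_approved"] - t["first_review"] # first review -> approval
merge_time  = t["pr_merged"] - t["pr_approved"]    # approval -> merge
cycle_time  = t["pr_merged"] - t["first_commit"]   # end to end

for name, delta in [("coding", coding_time), ("pickup", pickup_time),
                    ("review", review_time), ("merge", merge_time),
                    ("total cycle", cycle_time)]:
    print(f"{name:12} {delta}")
```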
Software cycle time is a critical part of DORA metrics, complementing others like deployment frequency, lead time for changes, and MTTR.
While deployment frequency indicates how often new code is released, cycle time provides insights into the efficiency of the development process itself.
Why Does Software Cycle Time Matter?
Understanding and optimising software cycle time is crucial for several reasons:
1. Engineering Efficiency
Cycle time reflects how efficiently engineering teams work. For example, teams that reduce their PR cycle time with automated code reviews and parallel test execution let developers focus more on feature development rather than waiting for feedback, resulting in faster, higher-quality code delivery.
2. Time to Market
Reducing cycle time accelerates product delivery, allowing teams to respond faster to market demands and customer feedback. Remember Amazon’s “two-pizza teams” model? It emphasizes small, independent teams with streamlined processes, enabling them to deploy code thousands of times a day. This agility helps Amazon quickly respond to customer needs, implement new features, and outpace competitors.
3. Competitive Advantage
The ability to ship high-quality software quickly can set a company apart from competitors. Faster delivery means quicker innovation and better customer satisfaction. For example, Netflix’s use of chaos engineering and Service-Level Prioritized Load Shedding has allowed it to continuously improve its streaming service, roll out updates seamlessly, and maintain its market leadership in the streaming industry.
Cycle time is one aspect that engineering teams cannot overlook. Beyond the technical reasons, it also has a psychological impact: when cycle time stays high, productivity drops further through demotivation and procrastination.
6 Challenges in Reducing Cycle Time
Reducing cycle time is easier said than done. There are several factors that affect efficiency and workflow.
Inconsistent Workflows: Non-standardized processes create variability in task durations, making it harder to detect and resolve inefficiencies. Establishing uniform workflows ensures predictable and optimized cycle times.
Limited Automation: Manual tasks like testing and deployment slow down development. Implementing CI/CD pipelines, test automation, and infrastructure as code reduces these delays significantly.
Overloaded Teams: Resource constraints and overburdened engineers lead to slower development cycles. Effective workload management and proper resourcing can alleviate this issue.
Waiting on Dependencies: External dependencies, such as third-party services or slow approval chains, cause idle time. Proactive dependency management and clear communication channels reduce these delays.
Resistance to Change: Teams hesitant to adopt new tools or practices miss opportunities for optimization. Promoting a culture of continuous learning and incremental changes can ease transitions.
Unclear Prioritization: When teams lack clarity on task priorities, critical work is delayed. Aligning work with business goals and maintaining a clear backlog ensures efficient resource allocation.
6 Proven Strategies to Reduce Software Cycle Time
Reducing software cycle time requires a combination of technical improvements, process optimizations, and cultural shifts. Here are six actionable strategies to implement today:
1. Optimize Code Reviews and Approvals
Establish clear SLAs for review timelines—e.g., 48 hours for initial feedback. Use tools like GitHub’s code owners to automatically assign reviewers based on file ownership. Implement peer programming for critical features to accelerate feedback loops. Introduce a "reviewer rotation" system to distribute the workload evenly across the team and prevent bottlenecks.
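To make the rotation concrete, here is a minimal Python sketch of a round-robin reviewer picker. The roster and the skip-the-author rule are illustrative assumptions, not a prescribed implementation:

```python
from itertools import cycle

class ReviewerRotation:
    """Round-robin reviewer assignment to spread review load evenly.

    A minimal sketch: real setups would persist rotation state and
    skip reviewers who are out of office or already overloaded.
    """

    def __init__(self, reviewers: list[str]):
        # Assumes at least two reviewers so the author can always be skipped.
        self._rotation = cycle(reviewers)

    def assign(self, pr_author: str) -> str:
        # Walk the rotation, skipping the PR author.
        for reviewer in self._rotation:
            if reviewer != pr_author:
                return reviewer

team = ReviewerRotation(["ana", "bo", "chen", "dev"])
print(team.assign("bo"))  # "ana"
print(team.assign("bo"))  # "chen" (rotation resumes where it left off)
```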
2. Invest in Automation
Identify repetitive tasks such as testing, integration, and deployment. And then implement CI/CD pipelines to automate these processes. You can also use test parallelization to speed up execution and set up automatic triggers for deployments to staging and production environments. Ensure robust rollback mechanisms are in place to reduce the risk of deployment failures.
3. Improve Team Collaboration
Break down silos by encouraging cross-functional collaboration between developers, QA, and operations. Adopt DevOps principles and use tools like Slack for real-time communication and Jira for task tracking. Schedule regular cross-team sync-ups, and document shared knowledge in Confluence to avoid communication gaps. Establish a "Definition of Ready" and "Definition of Done" to align expectations across teams.
4. Address Technical Debt Proactively
Schedule dedicated time each sprint to address technical debt. One effective cycle time reduction strategy is to categorize debt into critical, moderate, and low-priority issues, then focus first on high-impact areas that slow down development. Implement a policy where no new feature work is done without addressing related legacy code issues.
5. Leverage Metrics and Analytics
Track cycle time by analyzing PR stages—coding, pickup, review, and merge. Use tools like Typo to visualize bottlenecks and benchmark team performance. Establish a regular cadence to review these engineering metrics and correlate them with other DORA metrics to understand their impact on overall delivery performance. If review time consistently exceeds targets, consider adding more reviewers or refining the review process.
6. Prioritize Backlog Management
A cluttered backlog leads to confusion and context switching. Use prioritization frameworks like MoSCoW or RICE to focus on high-impact tasks. Ensure stories are clear, with well-defined acceptance criteria. Regularly groom the backlog to remove outdated items and reassess priorities. You can also introduce a “just-in-time” backlog refinement process to prepare stories only when they're close to implementation.
Tools to Support Cycle Time Reduction
Reducing software cycle time requires the right set of tools to streamline development workflows, automate processes, and provide actionable insights.
Here’s how key tools contribute to cycle time optimization:
1. GitHub/GitLab
GitHub and GitLab simplify version control, enabling teams to track code changes, collaborate efficiently, and manage pull requests. Features like branch protection rules, code owners, and merge request automation reduce delays in code reviews. Integrated CI/CD pipelines further streamline code integration and testing.
2. Jenkins, CircleCI, or TravisCI
These CI/CD tools automate build, test, and deployment processes, reducing manual intervention, ensuring faster feedback loops and more effective software delivery. Parallel execution, pipeline caching, and pre-configured environments significantly cut down build times and prevent bottlenecks.
3. Typo
Typo provides in-depth insights into cycle time by analyzing Git data across stages like coding, pickup, review, and merge. It highlights bottlenecks, tracks team performance, and offers actionable recommendations for process improvement. By visualizing trends and measuring PR cycle times, Typo helps engineering leaders make data-driven decisions and continuously optimize development workflows.
Cycle Time as shown in Typo App
Best Practices to Reduce Software Cycle Time
If you don't want your next development project to feel like it's taking forever, follow these best practices:
Break down large changes into smaller, manageable PRs to simplify reviews and reduce review time.
Define expectations for reviewers (e.g., 24-48 hours) to prevent PRs from being stuck in review.
Reduce merge conflicts by encouraging frequent, small merges to the main branch.
Track cycle time metrics via tools like Typo to identify trends and address recurring bottlenecks.
Use feature flags to deploy incomplete code safely, enabling faster releases without waiting for full feature completion.
Allocate dedicated time each sprint to address technical debt and maintain code maintainability.
Conclusion
Reducing software cycle time is critical for both engineering efficiency and business success. It directly impacts product delivery speed, market responsiveness, and overall team performance.
Engineering leaders should continuously evaluate processes, implement automation tools, and track cycle time metrics to streamline workflows and maintain a competitive edge.
And it all starts with accurate measurement of software cycle time.
Professional service organizations within software companies maintain a delivery success rate hovering in the 70% range.
This percentage looks good. However, it hides significant inefficiencies given the substantial resources invested in modern software delivery lifecycles.
Even after investing extensive capital, talent, and time into development cycles, missing targets on nearly one in three projects should not be acceptable.
After all, there’s a direct correlation between delivery effectiveness and organizational profitability.
However, the complexity of modern software development - with its complex dependencies and quality demands - makes consistent on-time, on-budget delivery persistently challenging.
This reality makes it critical to master effective software delivery.
What is the Software Delivery Lifecycle?
The Software Delivery Lifecycle (SDLC) is a structured sequence of stages that guides software from initial concept to deployment and maintenance.
Consider Netflix's continuous evolution: when transitioning from DVD rentals to streaming, they iteratively developed, tested, and refined their platform. All this while maintaining uninterrupted service to millions of users.
A typical SDLC has six phases:
Planning: Requirements gathering and resource allocation
Design: System architecture and technical specifications
Development: Code writing and unit testing
Testing: Quality assurance and bug fixing
Deployment: Release to production environment
Maintenance: Ongoing updates and performance monitoring
Each phase builds upon the previous, creating a continuous loop of improvement.
Modern approaches often adopt Agile methodologies, which enable rapid iterations and frequent releases. This also allows organizations to respond quickly to market demands while maintaining high-quality standards.
7 Best Practices to Achieve Effective Software Delivery
Even the best software delivery processes can leak value through poor engineering resource allocation and weak technical management. By applying these software delivery best practices, you can achieve effectiveness:
1. Streamline Project Management
Effective project management requires systematic control over development workflows while maintaining strategic alignment with business objectives.
Modern software delivery requires precise distribution of resources, timelines, and deliverables.
Here’s what you should implement:
Set Clear Objectives and Scope: Implement SMART criteria for project definition. Document detailed deliverables with explicit acceptance criteria. Establish timeline dependencies using critical path analysis.
Effective Resource Allocation: Deploy project management tools for agile workflow tracking. Implement capacity planning using story point estimation. Utilize resource calendars for optimal task distribution. Configure automated notifications for blocking issues and dependencies.
Prioritize Tasks: Apply the MoSCoW method (Must-have, Should-have, Could-have, Won't-have) for feature prioritization. Implement RICE scoring (Reach, Impact, Confidence, Effort) for backlog management; see the sketch after this list. Monitor feature value delivery through business impact analysis.
Continuous Monitoring: Track velocity trends across sprints using burndown charts. Monitor issue cycle time variations through Typo dashboards. Implement automated reporting for sprint retrospectives. Maintain real-time visibility through team performance metrics.
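For the RICE scoring mentioned above, here is a minimal sketch; the feature names and input values are purely illustrative:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE prioritization: (Reach x Impact x Confidence) / Effort.

    Reach: users or events per quarter; Impact: 0.25-3 scale;
    Confidence: 0-1; Effort: person-months.
    """
    return (reach * impact * confidence) / effort

# Hypothetical backlog items with made-up inputs.
backlog = {
    "sso_login":     rice_score(reach=4000, impact=2.0, confidence=0.8, effort=3),
    "dark_mode":     rice_score(reach=9000, impact=0.5, confidence=1.0, effort=2),
    "export_to_csv": rice_score(reach=1500, impact=1.0, confidence=0.9, effort=1),
}

for feature, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature}: {score:.0f}")
```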
2. Build Quality Assurance into Each Stage
Quality assurance integration throughout the SDLC significantly reduces defect discovery costs.
Early detection and prevention strategies prove more effective than late-stage fixes. This ensures that your time is used for maximum potential helping you achieve engineering efficiency.
Some ways to set up a robust QA process:
Shift-Left Testing: Implement behavior-driven development (BDD) using Cucumber or SpecFlow. Integrate unit testing within CI pipelines. Conduct code reviews with automated quality gates. Perform static code analysis during development.
Automated Testing: Deploy Selenium WebDriver for cross-browser testing. Implement Cypress for modern web application testing. Utilize JMeter for performance testing automation. Configure API testing using Postman/Newman in CI pipelines.
QA as Collaborative Effort: Establish three-amigo sessions (Developer, QA, Product Owner). Implement pair testing practices. Conduct regular bug bashes. Share testing responsibilities across team roles.
3. Enable Team Collaboration
Efficient collaboration accelerates software delivery cycles while reducing communication overhead.
There are tools and practices available that facilitate seamless information flow across teams.
Here’s how you can ensure the collaboration is effective in your engineering team:
Foster open communication with dedicated Slack channels, Notion workspaces, daily standups, and video conferencing.
Encourage cross-functional teams with skill-balanced pods, shared responsibility matrices, cross-training, and role rotations.
Streamline version control and documentation with Git branching strategies, pull request templates, automated pipelines, and wiki systems.
4. Implement Strong Security Measures
Security integration throughout development prevents vulnerabilities and ensures compliance. Rather than remediating breaches after the fact, it's more effective to take preventive measures.
To implement strong security measures:
Implement SAST tools like SonarQube in CI pipelines.
Deploy DAST tools for runtime analysis.
Conduct regular security reviews using OWASP guidelines.
Implement automated vulnerability scanning.
Apply role-based access control (RBAC) principles.
Implement multi-factor authentication (MFA).
Use secrets management systems.
Monitor access patterns for anomalies.
Maintain GDPR compliance documentation and ISO 27001 controls.
Conduct regular SOC 2 audits and automate compliance reporting.
5. Build Scalability into Process
Scalable architectures directly impact software delivery effectiveness by enabling seamless growth and consistent performance even when the load increases.
Strategic implementation of scalable processes removes bottlenecks and supports rapid deployment cycles.
Here’s how you can build scalability into your processes:
Scalable Architecture: Implement microservices architecture patterns. Deploy container orchestration using Kubernetes. Utilize message queues for asynchronous processing. Implement caching strategies.
Cloud Infrastructure: Configure auto-scaling groups in AWS/Azure. Implement infrastructure as code using Terraform. Deploy multi-region architectures. Utilize content delivery networks (CDNs).
Monitoring and Performance: Deploy Typo for system health monitoring. Implement distributed tracing using Jaeger. Configure alerting based on SLOs. Maintain performance dashboards.
6. Leverage CI/CD
CI/CD automation streamlines deployment processes and reduces manual errors. Modern pipelines enable rapid, reliable software delivery through automated testing and deployment sequences. Integration with version control systems ensures consistent code quality and deployment readiness, which means fewer delays and more effective software delivery.
7. Measure Success Metrics
Effective software delivery requires precise measurement through carefully selected metrics. These metrics provide actionable insights for process optimization and delivery enhancement.
Here are some metrics to keep an eye on:
Deployment Frequency measures release cadence to production environments.
Change Lead Time spans from code commit to successful production deployment.
Mean Time to Recovery quantifies service restoration speed after production incidents.
Code Coverage reveals test automation effectiveness across the codebase.
Technical Debt Ratio compares remediation effort against total development cost.
These metrics provide quantitative insights into delivery pipeline efficiency and help identify areas for continuous improvement.
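As a simple illustration of the first two metrics, the sketch below computes deployment frequency and median change lead time from a handful of hypothetical deployment records:

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records: (commit_time, deploy_time) pairs.
deployments = [
    (datetime(2024, 6, 3, 9, 0),  datetime(2024, 6, 3, 15, 0)),
    (datetime(2024, 6, 4, 11, 0), datetime(2024, 6, 5, 10, 0)),
    (datetime(2024, 6, 6, 8, 0),  datetime(2024, 6, 6, 12, 0)),
]

observation_days = 7

# Deployment Frequency: releases per day over the observation window.
deploy_frequency = len(deployments) / observation_days

# Change Lead Time: hours from code commit to production deploy.
lead_times = [(deploy - commit).total_seconds() / 3600 for commit, deploy in deployments]

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Median change lead time: {median(lead_times):.1f}h")
```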
Challenges in the Software Delivery Lifecycle
The SDLC has multiple technical challenges at each phase. Some of them include:
1. Planning Phase Challenges
Teams grapple with requirement volatility leading to scope creep. API dependencies introduce integration uncertainties, while microservices architecture decisions significantly impact system complexity. Resource estimation becomes particularly challenging when accounting for potential technical debt.
2. Design Phase Challenges
Design phase complications center on system scalability requirements conflicting with performance constraints. Teams must carefully balance cloud infrastructure selections against cost-performance ratios. Database sharding strategies introduce data consistency challenges, while service mesh implementations add layers of operational complexity.
3. Development Phase Challenges
Development phase issues include code versioning conflicts across distributed teams. Software engineers frequently face memory leaks in complex object lifecycles and race conditions in concurrent operations. Rapid sprint cycles often result in technical debt accumulation, while build pipeline failures stem from dependency conflicts.
4. Testing Phase Challenges
Testing becomes increasingly complex as teams deal with coverage gaps in async operations and integration failures across microservices. Performance bottlenecks emerge during load testing, while environmental inconsistencies lead to flaky tests. API versioning introduces additional regression testing complications.
5. Deployment Phase Challenges
Deployment challenges revolve around container orchestration failures and blue-green deployment synchronization. Teams must manage database migration errors, SSL certificate expirations, and zero-downtime deployment complexities.
6. Maintenance Phase Challenges
In the maintenance phase, teams face log aggregation challenges across distributed systems, along with memory utilization spikes during peak loads. Cache invalidation issues and service discovery failures in containerized environments require constant attention, while patch management across multiple environments demands careful orchestration.
These challenges compound through modern CI/CD pipelines, with Infrastructure as Code introducing additional failure points.
Effective monitoring and observability become crucial success factors in managing them.
Use a software engineering intelligence tool like Typo to gain precise visibility into team performance and sprint delivery, which helps you optimize resource allocation and reduce tech debt.
Conclusion
Effective software delivery depends on precise performance measurement. Without visibility into resource allocation and workflow efficiency, optimization remains impossible.
Typo addresses this fundamental need. The platform delivers insights across development lifecycles - from code commit patterns to deployment metrics. AI-powered code analysis automates optimization, reducing technical debt while accelerating delivery. Real-time dashboards expose productivity trends, helping you with proactive resource allocation.
Transform your software delivery pipeline with Typo's advanced analytics and AI capabilities.
Smooth and reliable deployments are key to maintaining user satisfaction and business continuity. This is where DORA metrics play a crucial role.
Among these metrics, the Change Failure Rate provides valuable insights into how frequently deployments lead to failures, helping teams minimize disruptions in production environments.
Let’s read about CFR further!
What are DORA Metrics?
In 2015, Gene Kim, Jez Humble, and Nicole Forsgren founded the DORA (DevOps Research and Assessment) team to evaluate and improve software development practices. The aim was to improve the understanding of how organizations can deliver faster, more reliable, and higher-quality software.
DORA metrics help in assessing software delivery performance based on four key (or accelerate) metrics:
Deployment Frequency
Lead Time for Changes
Change Failure Rate
Mean Time to Recover
While these metrics provide valuable insights into a team's performance, understanding CFR is crucial. It measures the effectiveness of software changes and their impact on production environments.
Overview of Change Failure Rate
The Change Failure Rate (CFR) measures how often new deployments cause failures, glitches, or unexpected issues in the IT environment. It reflects the stability and reliability of the entire software development and deployment lifecycle.
It is important to measure the Change Failure Rate for various reasons:
A lower change failure rate enhances user experience and builds trust by reducing failures.
It protects your business from financial risks, revenue loss, customer churn, and brand damage.
Lower change failures help to allocate resources effectively and focus on delivering new features.
How to Calculate Change Failure Rate?
Change Failure Rate calculation is done by following these steps:
Identify Failed Changes: Keep track of the number of changes that resulted in failures during a specific timeframe.
Determine Total Changes Implemented: Count the total changes or deployments made during the same period.
Apply the formula:
CFR = (Number of Failed Changes / Total Number of Changes) * 100 to calculate the Change Failure Rate as a percentage.
For example, suppose during a month:
Failed Changes = 2
Total Changes = 30
Using the formula: (2/30) * 100 ≈ 6.67
Therefore, the Change Failure Rate for that period is 6.67%.
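The same calculation as a small helper, reproducing the formula above:

```python
def change_failure_rate(failed_changes: int, total_changes: int) -> float:
    """CFR = (failed changes / total changes) * 100."""
    if total_changes <= 0:
        raise ValueError("total_changes must be positive")
    return failed_changes / total_changes * 100

print(f"{change_failure_rate(2, 30):.2f}%")  # 6.67%
```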
What is a Good Failure Rate?
An ideal failure rate is between 0% and 15%. This is the benchmark and standard that the engineering teams need to maintain. Low CFR equals stable, reliable, and well-tested software.
When the Change Failure Rate is above 15%, it reflects significant issues with code quality, testing, or deployment processes. This leads to increased system downtime, slower deployment cycles, and a negative impact on user experience.
Hence, it is always advisable to keep CFR as low as possible.
How to Correctly Measure Change Failure Rate?
Follow the right steps to measure the Change Failure Rate effectively. Here’s how you can do it:
Define ‘Failure’ Criteria
Clearly define what constitutes a ‘Change’ and a ‘Failure,’ such as service disruptions, bugs, or system crashes. Having clear metrics ensures the team is aligned and consistently collecting data.
Accurately Capture and Label Your Data
First, define the scope of changes to be included in the CFR calculation, and specify which details must be recorded to decide whether a change succeeded or failed. Use a change management system to track or log changes in a database. You can use tools like Jira, Git, or CI/CD pipelines to automate and review data collection.
Measure Change Failure, Not Deployment Failure
Understand the difference between Change Failure and Deployment Failure.
Deployment Failure: Failures that occur during the process of deploying code or changes to a production environment.
Change Failure: Failures that occur after the deployment when the changes themselves cause issues in the production environment.
This ensures that the team focuses on improving processes rather than troubleshooting unrelated issues.
Analyze Trends Over Time
Don’t analyze failures only once. Analyze trends continuously over different time periods, such as weekly, monthly, and quarterly. The trends and patterns help reveal recurring issues, prioritize areas for improvement, and inform strategic decisions. This allows teams to adapt and improve continuously.
Understand the Limitations of DORA Metrics
DORA Metrics provide valuable insights into software development performance and identify high-level trends. However, they fail to capture the nuances such as the complexity of changes or severity of failures. Use them alongside other metrics for a holistic view. Also, ensure that these metrics are used to drive meaningful improvements rather than just for reporting purposes.
Consider Contextual Factors
Various factors including team experience, project complexity, and organizational culture can influence the Change Failure Rate. These factors can impact both the failure frequency and effect of mitigation strategy. This allows you to judge failure rates in a broader context rather than only based on numbers.
Exclude External Incidents
Filter out the failures caused by external factors such as third-party service outages or hardware failure. This helps accurately measure CFR as external incidents can distort the true failure rate and mislead conclusions about your team’s performance.
How to Reduce Change Failure Rate?
Identify the root causes of failures and implement best practices in testing, deployment, and monitoring. Here are some effective strategies to minimize CFR:
Automate Testing Practices
Implement an automated testing strategy during each phase of the development lifecycle. This repeatable, consistent practice helps catch issues early and often, improving code quality to a great extent. Also ensure that test results are easily accessible so the team can focus on the most critical issues.
Deploy Small Changes Frequently
Small deployments at frequent intervals make testing and detecting bugs easier. They reduce the risk of failures when deploying code to production, as issues are caught early and addressed before they become significant problems. Moreover, frequent deployments provide quicker feedback to team members and engineering leaders.
Adopt CI/CD
Continuous Integration and Continuous Deployment (CI/CD) ensures that code is regularly merged, tested, and deployed automatically. This reduces the deployment complexity and manual errors and allows teams to detect and address issues early in the development process. Hence, ensuring that only high-quality code reaches production.
Prioritize Code Quality
Establishing a culture where quality is prioritized helps teams catch issues before they escalate into production failures. Adhering to best practices such as code reviews, coding standards, and refactoring continuously improves the quality of code. High-quality code is less prone to bugs and vulnerabilities and directly contributes to a lower CFR.
Implement Real-Time Monitoring and Alerting
Real-time monitoring and alerting systems help teams detect issues early and resolve them quickly. This minimizes the impact of failures, improves overall system reliability, and provides immediate feedback on application performance and user experience.
Cultivate a Learning Culture
Creating a learning culture within the development team encourages continuous improvement and knowledge sharing. When teams are encouraged to learn from past mistakes and successes, they are better equipped to avoid repeating errors. This involves conducting post-incident reviews and sharing key insights. This approach also fosters collaboration, accountability, and continuous improvement.
How Does Typo Help in Reducing CFR?
Since the definition of failure is specific to each team, this metric can be configured in multiple ways. Here are some guidelines on what can indicate a failure:
A deployment that needs a rollback or a hotfix
For such cases, any Pull Request having a title/tag/label that represents a rollback/hotfix that is merged to production can be considered a failure.
A high-priority production incident
For such cases, any ticket in your Issue Tracker having a title/tag/label that represents a high-priority production incident can be considered a failure.
A deployment that failed during the production workflow
For such cases, Typo can integrate with your CI/CD tool and consider any failed deployment as a failure.
To calculate the final percentage, the total number of failures is divided by the total number of deployments (this can be picked either from the Deployment PRs or from the CI/CD tool deployments).
Measuring and reducing the Change Failure Rate is a strategic necessity. It enables engineering teams to deliver stable software, leading to happier customers and a stronger competitive advantage. With tools like Typo, organizations can easily track and address failures to ensure successful software deployments.
Most companies treat software development costs as just another expense and are unsure how certain costs can be capitalized.
Recording the actual value of any software development process must involve recognizing the development process as a high-return asset.
That’s what software capitalization is for.
This article will answer all the what’s, why’s, and when’s of software capitalization.
What is Software Capitalization?
Software capitalization is an accounting process that recognizes the incurred software development costs and treats them as long-term assets rather than immediate expenses. Typical costs include employee wages, third-party app expenses, consultation fees, and license purchases. The idea is to amortize these costs over the software’s lifetime, thus aligning expenses with future revenues generated by the software.
This process illustrates how IT development and accounting can seamlessly integrate. As more businesses seek to enhance operational efficiency, automating systems with custom software applications becomes essential. By capitalizing software, companies can select systems that not only meet their operational needs but also align accounting practices with strategic IT development goals.
In this way, software capitalization serves as a bridge between the tech and financial realms, ensuring that both departments work hand in hand to support the organization’s long-term objectives. This synergy reinforces the importance of choosing compatible systems that optimize both technological advancements and financial reporting.
Why is Software Capitalization Important?
Shifting a developed software’s narrative from being an expense to a revenue-generating asset comes with some key advantages:
1. Preserves profitability
Capitalization helps preserve profitability for the longer term by reducing the impact on the company’s expenses. That’s because you amortize intangible and tangible asset expenses, thus minimizing cash flow impact.
2. Reflects asset value
Capitalizing software development costs results in higher reported asset value and reduces short-term expenses, which ultimately improves your profitability metrics like net profit margin, ARR growth, and ROA (return on assets).
3. Complies with accounting standards
Software capitalization complies with the rules set by major accounting standards like ASC 350-40, U.S. GAAP, and IFRS and makes it easier for companies to undergo audits.
When is Software Capitalization Applicable?
Here’s when it’s acceptable to capitalize software costs:
1. Development stage
The development stage begins once funding is secured and active development is underway. Here, you can capitalize any cost directly related to development, provided the software is for internal use.
Example costs include interface designing, coding, configuring, installation, and testing.
For internal-use software like CRM, production automation, and accounting systems, consider the following:
Preliminary Stage: Record expenses as they’re incurred during the initial phase of the project.
Application Development Stage: Capitalize costs related to activities like testing, programming, and installation. Administrative costs, such as user training or overhead, should be expensed.
Implementation Stage: Record any associated costs of the roll-out, like software maintenance and user training, as expenses.
2. Technical feasibility
If the software is intended for external use, then your costs can be capitalized when the software reaches the technical feasibility stage, i.e., when it’s viable. Example costs include coding, testing, and employee wages.
3. Future economic benefits
The software must be likely to generate consistent revenue for your company in the long run so that it can be treated as an asset. For external-use software, this can mean an expectation of selling or leasing it.
4. Measurable costs
The overall software development costs must be accurately measurable. This way, you ensure that the capitalized amount reflects the software’s exact invested amount.
Regulatory Compliance
Ensure that all accounting procedures adhere to GAAP regulations, which provide the framework for accurately reporting and capitalizing software costs. This compliance underscores the financial integrity of your capitalization efforts.
By combining these criteria with a structured approach to expense and capital cost management, companies can effectively navigate the complexities of software capitalization, ensuring both compliance and financial clarity.
Key Costs that can be Capitalized
The five main costs you can capitalize for software are:
1. Direct development costs
Direct costs that go into your active development phase can be capitalized. These include payroll costs of employees who were directly part of the software development, additional software purchase fees, and travel costs.
2. External development costs
These costs include the ones incurred by the developers when working with external service providers. Examples include travel costs, technical support, outsourcing expenses, and more.
3. Software Licensing Fees
License fees can be capitalized instead of being treated as an expense. However, this can depend on the type of accounting standard. For example, GAAP’s terms state capitalization is feasible for one-time software license purchases where it provides long-term benefits.
When deciding whether to capitalize or expense software licenses, timing and the stage of the project play crucial roles. Generally, costs incurred during the preliminary and implementation stages are recorded as expenses. These stages include the initial planning and setup, where the financial outlay does not yet contribute directly to the creation of a tangible asset.
In contrast, during the development stage, many costs can be capitalized. This includes expenditures directly contributing to building and testing the software, as this stage is where the asset truly begins to take shape. Capitalization should continue until the project reaches completion and the software is either used internally or marketed externally.
Understanding these stages and criteria allows businesses to make informed decisions about their software investments, ensuring they align with accounting principles and maximize financial benefits.
4. Acquisition costs
Acquisition costs can be capitalized as assets, provided your software is intended for internal use.
5. Training and documentation costs
Training and documentation costs are considered assets only if you’re investing in them during the development phase. Post-implementation, these costs turn into operating expenses and cannot be amortized.
Costs that should NOT be Capitalized
Here are a few costs that do not qualify for software capitalization and are expensed:
1. Research and planning costs
Research and planning stages are categorized under the preliminary software development stage. These incurred costs are expensed and cannot be capitalized. The GAAP accounting standard, for example, states that an organization can begin capitalizing costs only after completing these stages.
2. Post-implementation costs
Post-implementation or the operational stage is the maintenance period after the software is fully deployed. Any costs, be it training, support, or other operational charges during this time are expensed as incurred.
3. Costs for upgrades and enhancements
Any costs related to software upgrades, modernization, or enhancements cannot be capitalized. For example, money spent on bug fixes, future modifications, and routine maintenance activities.
Accounting Standards you should know for Software Capitalization
Below are the two most common accounting standards that state the eligibility criteria for software capitalization:
1. U.S. GAAP (Generally Accepted Accounting Principles)
GAAP is a set of rules and procedures that organizations must follow while preparing their financial statements. These standards ensure accuracy and transparency in reporting across industries, including software.
Understanding GAAP and key takeaways for software capitalization:
GAAP allows capitalization for internal and external costs directly related to the software development process. Examples of costs include licensing fees, third-party development costs, and wages of employees who are part of the project.
Costs incurred after the software is deemed viable but before it is ready for use can be capitalized. Example costs can be for coding, installation, and testing.
Every post-implementation cost is expensed.
A development project still in the preliminary or planning phase is too early to capitalize.
2. IFRS (International Financial Reporting Standards)
IFRS is an alternative to GAAP and is used worldwide. Compared to GAAP, IFRS allows better capitalization of development costs, considering you meet every criterion, naturally making the standard more complex.
Understanding IFRS and key takeaways for software capitalization:
IFRS treats computer software as an intangible asset. If it’s internally developed software (for internal/external use or sale), it is charged to expense until it reaches technical feasibility.
All research and planning costs are charged as expenses.
Development costs are capitalized only after technical or commercial feasibility for sale if the software’s use has been established.
Financial Implications of Software Capitalization
Software capitalization, from a financial perspective, can have the following aftereffects:
1. Impact on profit and loss statement
A company’s profit and loss (P&L) statement is an income report that shows the company’s overall expenses and revenues. So, if your company capitalizes some of the software’s R&D costs, they are recognized as long-term assets rather than immediate losses, and the development spend can be amortized over a period of time.
2. Balance sheet impact
Software capitalization treats your development-related costs as long-term assets rather than incurred expenses. This means putting these costs on a balance sheet without recognizing the initial costs until you have a viable finished product that generates revenue. As a result, it delays paying taxes on those costs and leads to a bigger net income over that period.
Accounting Procedure: Software capitalization is not just a financial move but an accounting procedure that recognizes development as a fixed asset. This strategic move places your development costs on the balance sheet, transforming them from immediate expenses into long-term investments.
Financial Impact: By delaying the recognition of these costs, businesses can spread expenses over several years, typically between two and five years. This is achieved through depreciation or amortization, often using the straight-line method, which evenly distributes the cost over the software's useful life.
Benefits: The primary advantage here is the ability to report fewer expenses, which results in a higher net income. This not only reduces taxable income but also enhances the company's appeal to potential investors, presenting a more attractive financial position.
This approach allows companies to manage their financial narratives better, demonstrating profitability and stability, which are crucial for growth and investment.
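To make the amortization mechanics concrete, here is a minimal sketch of the straight-line method mentioned above; the figures are illustrative only:

```python
def straight_line_amortization(capitalized_cost: float, useful_life_years: int) -> list[float]:
    """Spread a capitalized software cost evenly over its useful life."""
    annual_charge = capitalized_cost / useful_life_years
    return [round(annual_charge, 2)] * useful_life_years

# Illustrative figures: a $500,000 capitalized build amortized over 5 years.
for year, charge in enumerate(straight_line_amortization(500_000, 5), start=1):
    print(f"Year {year}: ${charge:,.2f}")
```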
3. Tax considerations
Although tax implications can be complex, capitalizing software can often lead to tax deferral. That’s because amortization deductions are spread across multiple periods, reducing your company’s tax burden for the time being.
Consequences of Canceling a Software Project in Terms of Capitalization
When a software project is canceled, one of the key financial implications revolves around capitalization. Here's what you need to know:
Cessation of Capitalization: Once a software project is terminated, the accounting treatment changes. Costs previously capitalized as an asset must stop accumulating. This means that future expenses related to the project can no longer be deferred and must be expensed immediately.
Impact on Financial Statements: Canceling a project leads to a direct impact on the company's financial statements. Previously capitalized costs may need reevaluation for impairment, potentially resulting in a write-off. This can affect both the balance sheet, by reducing assets, and the income statement, through increased expenses.
Tax Implications: Depending on jurisdiction, the tax treatment of capitalized expenses could change. Some regions allow for a deduction of capitalized costs when a project is canceled, impacting the company’s taxable income.
Resource Reallocation: Financial resources that were tied up in the project become available for redeployment. This can offer new opportunities for investment but requires strategic planning to ensure the best use of freed-up funds.
Stakeholder Communication: It's essential to communicate effectively with stakeholders about the financial changes due to the project's cancellation. Clear, transparent explanations help maintain trust and manage expectations around the revised financial outlook.
Understanding these consequences helps businesses make informed decisions about resource allocation and financial management when considering the fate of a software project.
In an illustrative capitalization assessment, the following conditions were met:
Precise tracking of story points allowed granular cost allocation
A multi-tier engineer cost model reflected skill complexity
Comprehensive overhead and infrastructure costs were included
Rigorous capitalization criteria were applied
Recommendation
Capitalize the entire $464,145 as an intangible asset, amortizing over 4 years.
How Typo can help
Tracking R&D investments is a major part of streamlining software capitalization while leaving no room for manual errors. With Typo, you streamline this entire process by automating the reporting and management of R&D costs.
Typo’s best features and benefits for software capitalization include:
Automated Reporting: Generates customizable reports for capitalizable and non-capitalizable work.
Resource Allocation: Provides visibility into team investments, allowing for realignment with business objectives.
Custom Dashboards: Offers real-time tracking of expenditures and resource allocation.
Predictive Insights: Uses KPIs to forecast project timelines and delivery risks.
DORA Metrics: Assesses software delivery performance, enhancing productivity.
Typo transforms R&D from a cost center into a revenue-generating function by optimizing financial workflows and improving engineering efficiency, thus maximizing your returns on software development investments.
Wrapping up
Capitalizing software costs allows tech companies to secure better investment opportunities by increasing profits legitimately.
Although software capitalization can be quite challenging, it presents massive future revenue potential.
With a tool like Typo, you rapidly maximize returns on software development investments with its automated capitalized asset reporting and real-time effort tracking.
Look, let's cut to the chase. As a software developer, you've probably heard about cyclomatic complexity, but maybe you've never really dug deep into what it means or why it matters. This guide is going to change that. We'll break down everything you need to know about cyclomatic complexity - from its fundamental concepts to practical implementation strategies.
What is Cyclomatic Complexity?
Cyclomatic complexity is essentially a software metric that measures the structural complexity of your code. Think of it as a way to quantify how complicated your software's control flow is. The higher the number, the more complex and potentially difficult to understand and maintain your code becomes.
Imagine your code as a roadmap. Cyclomatic complexity tells you how many different paths or "roads" exist through that map. Each decision point, each branch, each conditional statement adds another potential route. More routes mean more complexity, more potential for bugs, and more challenging maintenance.
Why Should You Care?
Code Maintainability: Higher complexity means harder-to-maintain code
Testing Effort: More complex code requires more comprehensive testing
Potential Bug Zones: Increased complexity correlates with higher bug probability
Performance Implications: Complex code can lead to performance bottlenecks
What is the Formula for Cyclomatic Complexity?
The classic formula for cyclomatic complexity is beautifully simple:
V(G) = E - N + 2P
Where:
V(G): Cyclomatic complexity
E: Number of edges in the control flow graph
N: Number of nodes in the control flow graph
P: Number of connected components (typically 1 for a single function/method)
Alternatively, you can calculate it by counting decision points:
V(G) = Number of decision points + 1
Decision points include:
if statements
else clauses
switch cases
for loops
while loops
&& and || operators
catch blocks
Ternary operators
Practical Calculation Example
Let's break down a code snippet:
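Consider, for instance, this illustrative function containing four decision points (the function and its logic are made up for the walkthrough):

```python
RUSH_FEE = 25

def summarize_orders(orders: list) -> float:
    """Four decision points: one loop, two ifs, one `and` operator."""
    total = 0.0
    for order in orders:                                # decision point 1: for loop
        if order["status"] == "paid":                   # decision point 2: if
            total += order["amount"]
        if order["is_rush"] and order["amount"] > 100:  # decision points 3 & 4: if + and
            total += RUSH_FEE
    return total
```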
Calculation:
Decision points: 4
Cyclomatic Complexity: 4 + 1 = 5
Practical Example of Cyclomatic Complexity
Let's walk through a real-world scenario to demonstrate how complexity increases.
Tools to Measure Cyclomatic Complexity
IDE-Based Tools
Visual Studio Code: Extensions like "Code Metrics"
JetBrains IDEs: Built-in code complexity analysis
Eclipse: Various complexity measurement plugins
Cloud-Based Analysis Platforms
GitHub Actions
GitLab CI/CD
Typo AI
SonarCloud
How Does Typo Solve for Cyclomatic Complexity?
Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother by optimizing complex methods, reducing cyclomatic complexity, and standardizing code efficiently.
Cyclomatic complexity isn't just a theoretical concept—it's a practical tool for writing better, more maintainable code. By understanding and managing complexity, you transform yourself from a mere coder to a software craftsman.
Remember: Lower complexity means:
Easier debugging
Simpler testing
More readable code
Fewer potential bugs
Keep your code clean, your complexity low, and your coffee strong! 🚀👩💻👨💻
Pro Tip: Make complexity measurement a regular part of your code review process. Set team standards and continuously refactor to keep your codebase healthy.
Scope creep is one of the most challenging—and often frustrating—issues engineering managers face. As projects progress, new requirements, changing technologies, and evolving stakeholder demands can all lead to incremental additions that push your project beyond its original scope. Left unchecked, scope creep strains resources, raises costs, and jeopardizes deadlines, ultimately threatening project success.
This guide is here to help you take control. We’ll delve into advanced strategies and practical solutions specifically for managers to spot and manage scope creep before it disrupts your project. With detailed steps, technical insights, and tools like Typo, you can set boundaries, keep your team aligned, and drive projects to a successful, timely completion.
Understanding Scope Creep in Sprints
Scope creep can significantly impact projects, affecting resource allocation, team morale, and project outcomes. Understanding what scope creep is and why it frequently occurs provides a solid foundation for developing effective strategies to manage it.
What is Scope Creep?
Scope creep in projects refers to the gradual addition of project requirements beyond what was originally defined. Unlike industries with stable parameters, software projects often encounter rapid changes—emerging features, stakeholder requests, or even unanticipated technical complexities—that challenge the initial project boundaries.
While additional features can improve the end product, they can also risk the project's success if not managed carefully. Common triggers for scope creep include unclear project requirements, mid-project requests from stakeholders, and iterative development cycles, all of which require proactive management to keep projects on track.
Why does Scope Creep Happen?
Scope creep often results from several factors unique to the field. By understanding these drivers, you can develop processes that minimize their impact and keep your project on target:
Unclear requirements: At the start of a project, unclear or vague requirements can lead to an ever-expanding set of deliverables. For engineering managers, ensuring all requirements are well-defined is critical to setting project boundaries.
Shifting technological needs: IT projects must often adapt to new technology or security requirements that weren’t anticipated initially, leading to added complexity and potential delays.
Stakeholder influence and client requests: Frequent client input can introduce scope creep, especially if changes are not formally documented or accounted for in resources and timelines.
Agile development: Agile development allows flexibility and iterative updates, but without careful scope management, it can lead to feature creep.
These challenges make it essential for managers to recognize scope creep indicators early and develop robust systems to manage new requests and technical changes.
Identifying Scope Creep Early in the Sprints
Identifying scope creep early is key to preventing it from derailing your project. By setting clear boundaries and maintaining consistent communication with stakeholders, you can catch scope changes before they become a problem.
Define Clear Project Scope and Objectives
The first step in minimizing scope creep is establishing a well-defined project scope that explicitly outlines deliverables, timelines, and performance metrics. In sprints, this scope must include technical details like software requirements, infrastructure needs, and integration points.
Regular Stakeholder Check-Ins
Frequent communication with stakeholders is crucial to ensure alignment on the project’s progress. Schedule periodic reviews to present progress, confirm objectives, and clarify any evolving requirements.
Routine Project Reviews and Status Updates
Integrate routine reviews into the project workflow to regularly assess the project’s alignment with its scope. Typo enables teams to conduct these reviews seamlessly, providing a comprehensive view of the project’s current state. This structured approach allows managers to address any adjustments or unexpected tasks before they escalate into significant scope creep issues.
Strategies for Managing Scope Creep
Once scope creep has been identified, implementing specific strategies can help prevent it from escalating. With the following approaches, you can address new requests without compromising your project timeline or objectives.
Implement a Change Control Process
One of the most effective ways to manage scope creep is to establish a formal change control process. A structured approach allows managers to evaluate each change request based on its technical impact, resource requirements, and alignment with project goals.
Effective Communication and Real-Time Updates
Communication breakdowns can lead to unnecessary scope expansion, especially in complex team environments. Use Typo’s Sprint Analysis to track project changes and real-time developments. This level of visibility gives stakeholders a clear understanding of trade-offs and allows managers to communicate the impact of requests, whether related to resource allocation, budget implications, or timeline shifts.
Prioritize and Adjust Requirements in Real Time
In software development, feature prioritization can be a strategic way to handle evolving needs without disrupting core project objectives. When a high-priority change arises, use Typo to evaluate resource availability, timelines, and dependencies, making necessary adjustments without jeopardizing essential project elements.
Advanced Tools and Techniques to Prevent Scope Creep
Beyond basic strategies, specific tools and advanced techniques can further safeguard your IT project against scope creep. Leveraging project management solutions and rigorous documentation practices are particularly effective.
Leverage Typo for End-to-End Project Management
For projects, having a comprehensive project management tool can make all the difference. Typo provides robust tracking for timelines, tasks, and resources that align directly with project objectives. Typo also offers visibility into task assignments and dependencies, which helps managers monitor all project facets and mitigate scope risks proactively.
Detailed Change Tracking and Documentation
Documentation is vital in managing scope creep, especially in projects where technical requirements can evolve quickly. By creating a “single source of truth,” Typo enables the team to stay aligned, with full visibility into any shifts in project requirements.
Budget and Timeline Contingencies
Software projects benefit greatly from budget and time contingencies that allow for minor, unexpected adjustments. By pre-allocating resources for possible scope adjustments, managers have the flexibility to accommodate minor changes without impacting the project’s overall trajectory.
Maintaining Team Morale and Focus amid Scope Creep
As scope adjustments occur, it’s important to maintain team morale and motivation. Empowering the team and celebrating their progress can help keep everyone focused and resilient.
Empower the Team to Decline Non-Essential Changes
Encouraging team members to communicate openly about their workload and project demands is crucial for maintaining productivity and morale.
Recognize and Celebrate Milestones
Managing IT projects with scope creep can be challenging, so it’s essential to celebrate milestones and acknowledge team achievements.
Typo - An Effective Sprint Analysis Tool
Typo’s sprint analysis monitors scope creep to quantify its impact on the team’s workload and deliverables. It allows you to track and analyze your team’s progress throughout a sprint and helps you gain visual insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint. This information enables you to identify any potential problems early on and take corrective action.
Our sprint analysis feature uses data from Git and issue management tools to provide insights into how your team is working. You can see how long tasks are taking, how often they’re being blocked, and where bottlenecks are occurring. This information can help you identify areas for improvement and make sure your team is on track to meet their goals.
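One common way to quantify scope creep from that data is the share of the sprint's final scope that was added after kickoff. A minimal sketch, assuming story points as the unit:

```python
def scope_creep_percentage(committed_points: int, added_points: int) -> float:
    """Share of the sprint's final scope that was added after sprint start.

    committed_points: story points planned at sprint kickoff.
    added_points: story points pulled in mid-sprint.
    """
    final_scope = committed_points + added_points
    return added_points / final_scope * 100

# Illustrative sprint: 40 points committed, 12 added mid-sprint.
print(f"Scope creep: {scope_creep_percentage(40, 12):.1f}%")  # 23.1%
```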
Taking Charge of Scope Creep
Effective management of scope creep in IT projects requires a balance of proactive planning, structured communication, and robust change management. With the right strategies and tools like Typo, managers can control project scope while keeping the team focused and aligned with project goals.
If you’re facing scope creep challenges, consider implementing these best practices and exploring Typo’s project management capabilities. By using Typo to centralize communication, track progress, and evaluate change requests, IT managers can prevent scope creep and lead their projects to successful, timely completion.
Are your code reviews fostering constructive discussions or stuck in endless cycles of revisions?
Let’s change that.
In many development teams, code reviews have become a necessary but frustrating part of the workflow. Rather than enhancing collaboration and improvement, they often drag on, leaving developers feeling drained and disengaged.
This inefficiency can lead to rushed releases, increased bugs in production, and a demotivated team. As deadlines approach, the very process meant to elevate code quality can become a barrier to success, creating a culture where developers feel undervalued and hesitant to share their insights.
The good news? You can transform your code review process into a constructive and engaging experience. By implementing strategic changes, you can cultivate a culture of open communication, collaborative learning, and continuous improvement.
This blog aims to provide developers and engineering managers with a comprehensive framework for optimizing the code review process, incorporating insights on leveraging tools like Typo and discussing the technical nuances that underpin effective code reviews.
The Importance of Code Reviews
Code reviews are a critical aspect of the software development lifecycle. They provide an opportunity to scrutinize code, catch errors early, and ensure adherence to coding standards. Here’s why code reviews are indispensable:
Error detection and bug prevention
The primary function of code reviews is to identify issues before they escalate into costly bugs or security vulnerabilities. By implementing rigorous review protocols, teams can detect errors at an early stage, reducing technical debt and enhancing code stability.
Utilizing static code analysis tools like SonarQube and ESLint can automate the detection of common issues, allowing developers to focus on more intricate code quality aspects.
Knowledge sharing
Code reviews foster an environment of shared learning and expertise. When developers engage in peer reviews, they expose themselves to different coding styles, techniques, and frameworks. This collaborative process enhances individual skill sets and strengthens the team’s collective knowledge base.
To facilitate this knowledge transfer, teams should maintain documentation of coding standards and review insights, which can serve as a reference for future projects.
Maintaining code quality
Adherence to coding standards and best practices is crucial for maintaining a high-quality codebase. Effective code reviews enforce guidelines related to design patterns, performance optimization, and security practices.
By prioritizing clean, maintainable code, teams can reduce the likelihood of introducing technical debt. Establishing clear documentation for coding standards and conducting periodic training sessions can reinforce these practices.
Enhanced collaboration
The code review process inherently encourages open dialogue and constructive feedback. It creates a culture where developers feel comfortable discussing their approaches, leading to richer collaboration. Implementing pair programming alongside code reviews can provide real-time feedback and enhance team cohesion.
Accelerated onboarding
For new team members, code reviews are an invaluable resource for understanding the team’s coding conventions and practices. Engaging in the review process allows them to learn from experienced colleagues while providing opportunities for immediate feedback.
Pairing new hires with seasoned developers during the review process accelerates their integration into the team.
Common Challenges in Code Reviews
Despite their advantages, code reviews can present challenges that hinder productivity. It’s crucial to identify and address these issues to optimize the process effectively:
Lengthy review cycles
Extended review cycles can impede development timelines and lead to frustration among developers. This issue often arises from an overload of reviewers or complex pull requests. To combat this, implement guidelines that limit the size of pull requests, making them more manageable and allowing for quicker reviews. Additionally, establishing defined review timelines can help maintain momentum.
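As a concrete illustration, a size limit can be enforced automatically in CI. The sketch below is a minimal Python gate, assuming it runs inside a checked-out git repository; the 400-line budget is an invented example, not a recommended standard:

```python
import subprocess

# Hypothetical CI gate: fail the build when a PR's diff exceeds a size budget.
MAX_CHANGED_LINES = 400  # illustrative budget, tune per team

def changed_lines(base: str = "origin/main") -> int:
    """Sum added + deleted lines vs. the base branch via `git diff --numstat`."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added.isdigit() and deleted.isdigit():  # binary files show "-"
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    size = changed_lines()
    if size > MAX_CHANGED_LINES:
        raise SystemExit(f"PR too large ({size} changed lines); please split it.")
    print(f"PR size OK ({size} changed lines).")
```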
Inconsistent feedback
A lack of standardization in feedback can create confusion and frustration among team members. Inconsistency often stems from varying reviewer expectations. Implementing a standardized checklist or rubric for code reviews can ensure uniformity in feedback and clarify expectations for all team members.
Bottlenecks and lack of accountability
If code reviews are concentrated among a few individuals, it can lead to bottlenecks that slow down the entire process. Distributing review responsibilities evenly among team members is essential to ensure timely feedback. Utilizing tools like GitHub and GitLab can facilitate the assignment of reviewers and track progress in real-time.
Limited collaboration and feedback
Sparse or overly critical feedback can hinder the collaborative nature of code reviews. Encouraging a culture of constructive criticism is vital. Train reviewers to provide specific, actionable feedback that emphasizes improvement rather than criticism.
Regularly scheduled code review sessions can enhance collaboration and ensure engagement from all team members.
How Typo can Streamline your Code Review Process
To optimize your code review process effectively, leveraging the right tools is paramount. Typo offers a suite of features designed to enhance productivity and code quality:
Automated code analysis
Automating code analysis through Typo significantly streamlines the review process. Built-in linting and static analysis tools flag potential issues before the review begins, enabling developers to concentrate on complex aspects of the code. Integrating Typo with CI/CD pipelines ensures that only code that meets quality standards enters the review process.
Feedback and commenting system
Typo features an intuitive commenting system that allows reviewers to leave clear, actionable feedback directly within the code. This approach ensures developers receive specific suggestions, leading to more effective revisions. Implementing a tagging system for comments can categorize feedback and prioritize issues efficiently.
Metrics and insights
Typo provides detailed metrics and insights into code review performance. Engineering managers can analyze trends, such as recurring bottlenecks or areas for improvement, allowing for data-driven decision-making. Tracking metrics like review time, comment density, and acceptance rates can reveal deeper insights into team performance and highlight areas needing further training or resources.
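To make these metrics concrete, here is a minimal Python sketch of how they might be computed from exported PR data. The record fields are assumptions for illustration, not Typo’s actual schema or API:

```python
from datetime import datetime, timedelta
from statistics import mean

# Illustrative PR records, as you might export them from a Git platform's API.
prs = [
    {"opened": datetime(2024, 5, 1, 9), "approved": datetime(2024, 5, 1, 15),
     "review_comments": 6, "changed_lines": 220, "accepted": True},
    {"opened": datetime(2024, 5, 2, 10), "approved": datetime(2024, 5, 4, 11),
     "review_comments": 1, "changed_lines": 40, "accepted": False},
]

# Review time: hours from opening to approval.
review_times = [(p["approved"] - p["opened"]) / timedelta(hours=1) for p in prs]
# Comment density: review comments per 100 changed lines.
comment_density = [p["review_comments"] / p["changed_lines"] * 100 for p in prs]
# Acceptance rate: share of PRs merged as approved.
acceptance_rate = sum(p["accepted"] for p in prs) / len(prs)

print(f"avg review time: {mean(review_times):.1f}h")
print(f"comments per 100 changed lines: {mean(comment_density):.2f}")
print(f"acceptance rate: {acceptance_rate:.0%}")
```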
In addition to leveraging tools like Typo, adopting best practices can further enhance your code review process:
1. Set clear objectives and standards
Define clear objectives for code reviews, detailing what reviewers should focus on during evaluations. Developing a comprehensive checklist that includes adherence to coding conventions, performance considerations, and testing coverage ensures consistency and clarity in expectations.
2. Leverage automation tools
Employ automation tools to reduce manual effort and improve review quality. Automating code analysis helps identify common mistakes early, freeing reviewers to address more complex issues. Integrating automated testing frameworks validates code functionality before reaching the review stage.
3. Encourage constructive feedback
Fostering a culture of constructive feedback is crucial for effective code reviews. Encourage reviewers to provide specific, actionable comments emphasizing improvement. Implementing a “no blame” policy during reviews promotes an environment where developers feel safe to make mistakes and learn from them.
4. Balance thoroughness and speed
Finding the right balance between thorough reviews and maintaining development velocity is essential. Establish reasonable time limits for reviews to prevent bottlenecks while ensuring reviewers dedicate adequate time to assess code quality thoroughly. Timeboxing reviews can help maintain focus and reduce reviewer fatigue.
5. Rotate reviewers and share responsibilities
Regularly rotating reviewers prevents burnout and ensures diverse perspectives in the review process. Sharing responsibilities promotes knowledge transfer across the team and mitigates the risk of bottlenecks. Implementing a rotation schedule that pairs developers with different reviewers fosters collaboration and learning.
While developers execute the code review process, engineering managers have a critical role in optimizing and supporting it. Here’s how they can contribute effectively:
Facilitating communication and support
Engineering managers must actively facilitate communication within the team, ensuring alignment on the goals and expectations of code reviews. Regular check-ins can help identify roadblocks and provide opportunities for team members to express concerns or seek guidance.
Setting expectations and accountability
Establishing a culture of accountability around code reviews is essential. Engineering managers should communicate clear expectations for both developers and reviewers, creating a shared understanding of responsibilities. Providing ongoing training on effective review practices reinforces these expectations.
Monitoring metrics and performance
Utilizing the metrics and insights provided by Typo enables engineering managers to monitor team performance during code reviews. Analyzing this data allows managers to identify trends and make informed decisions about adjustments to the review process, ensuring continuous improvement.
Promoting a growth mindset
Engineering managers should cultivate a growth mindset within the team, encouraging developers to view feedback as an opportunity for learning and improvement. Creating an environment where constructive criticism is welcomed fosters a culture of continuous development and innovation. Encouraging participation in code review workshops or technical training sessions can reinforce this mindset.
Wrapping up: Elevating your code review process
An optimized code review process is not merely a procedural necessity; it is a cornerstone of developer productivity and code quality. By establishing clear guidelines, promoting collaboration, and leveraging tools like Typo, you can streamline the review process and foster a culture of continuous improvement within your team.
Typo serves as a robust platform that enhances the efficiency and effectiveness of code reviews, allowing teams to deliver higher-quality software at an accelerated pace. By embracing best practices and adopting a collaborative mindset, you can transform your code review process into a powerful driver of success.
In an ever-changing tech landscape, organizations need to stay agile and deliver high-quality software rapidly. DevOps plays a crucial role in achieving these goals by bridging the gap between development and operations teams.
In this blog, we will delve into how to build a DevOps culture within your organization and explore the fundamental practices and strategies that can lead to more efficient, reliable, and customer-focused software development.
What is DevOps?
DevOps is a software development methodology that integrates development (Dev) and IT operations (Ops) to enhance software delivery’s speed, efficiency, and quality. The primary goal is to break down traditional silos between development and operations teams and foster a culture of collaboration and communication throughout the software development lifecycle. This creates a more efficient and agile workflow that allows organizations to respond quickly to changes and deliver value to customers faster.
Why DevOps Culture is Beneficial?
DevOps culture refers to a collaborative and integrated approach between development and operations teams. It focuses on breaking down silos, fostering a shared sense of responsibility, and improving processes through automation and continuous feedback.
Fostering collaboration between development and operations allows organizations to innovate more rapidly and respond to market changes and customer needs effectively.
Automation and streamlined processes reduce manual tasks and errors to increase efficiency in software delivery. This efficiency results in faster time-to-market for new features and updates.
Continuous integration and delivery practices improve software quality by early detection of issues. This helps maintain system stability and reliability.
A DevOps culture encourages teamwork and mutual trust to improve collaboration between previously siloed teams. This cohesive environment fosters innovation and collective problem-solving.
A DevOps culture results in faster recovery times because teams can identify and address issues more swiftly, reducing downtime and improving overall service reliability.
Delivering high-quality software quickly and efficiently enhances customer satisfaction and loyalty, which is vital for long-term success.
The CALMS Framework of DevOps
The CALMS framework is used to understand and implement DevOps principles effectively. It breaks down DevOps into five key components:
Culture
The culture pillar focuses on fostering a collaborative environment where shared responsibility and open communication are prioritized. It is crucial to break down silos between development and operations teams and allow them to work together more effectively.
Automation
Automation emphasizes minimizing manual intervention in processes. This includes automating testing, deployment, and infrastructure management to enhance efficiency and reliability.
Lean
The lean aspect aims to optimize workflows, manage work-in-progress (WIP), and eliminate non-value-adding activities, streamlining processes to accelerate software delivery and improve overall quality.
Measurement
Measurement involves collecting data to assess the effectiveness of software delivery processes and practices. It enables teams to make informed, fact-based decisions, identify areas for improvement, and track progress.
Sharing
The sharing component promotes open communication and knowledge transfer among teams. It facilitates cross-team collaboration, fosters a learning environment, and ensures that successful practices and insights are shared and adopted widely.
Tips to Build a DevOps Culture
Start Simple
Don’t overwhelm teams with a complete DevOps overhaul. Begin small and implement DevOps practices gradually. Start with the team that is best aligned with DevOps principles, then extend the rollout to other teams in the organization. Build momentum with early wins and evolve practices as you gain experience.
Foster Communication and Collaborative Environment
Communication is key. When done correctly, it promotes collaboration and a smooth flow of information across the organization. This, in turn, aligns operations and lets engineering leaders make informed decisions.
Moreover, the combined working environment between the development and operations teams promotes a culture of shared responsibility and common objectives. They can openly communicate ideas and challenges, allowing them to have a mutual conversation about resources, schedules, required features, and execution of projects.
Create a Common Goal
Apart from encouraging communication and a collaborative environment, create a clear plan that outlines where you want to go and how you will get there. Ensure that these goals are realistic and achievable. This will allow teams to see the bigger picture and understand the desired outcome, motivating them to move in the right direction.
Focus on Automation
Tools such as Slack, Kubernetes, Docker, and JFrog help build automation capabilities for DevOps teams. They automate repetitive, mundane tasks and let teams focus on value-adding work, enabling them to fail fast, build fast, and deliver quickly, which enhances efficiency and accelerates processes, positively impacting DevOps culture. Rather than assuming, ask your team directly which parts of their work can be automated, and then provide the support to automate them.
Implement CI/CD pipeline
The organization must fully understand and implement CI/CD to establish a DevOps culture and streamline the software delivery process. This allows for automating deployment from development to production and releasing the software more frequently with better quality and reduced risks. The CI/CD tools further allow teams to catch bugs early in the development cycle, reduce manual work, and minimize downtime between releases.
Foster Continuous Learning and Improvement
Continuous improvement is a key principle of DevOps culture. Engineering leaders must look for ways to encourage continuous learning and improvement, such as training and upskilling opportunities. Besides this, give teams the freedom to experiment with new tools and techniques, and create a culture where they feel comfortable making mistakes and learning from them.
Balance Speed and Security
The teams must ensure that delivering products quickly doesn’t mean compromising security. In DevOps culture, the organization must adopt a ‘Security-first approach’ by integrating security practices into the DevOps pipeline. To maintain a strong security posture, regular security audits and compliance checks are essential. Security scans should be conducted at every stage of the development lifecycle to continuously monitor and assess security.
Monitor and Measure
Regularly monitor and track system performance to detect issues early and ensure smooth operation. Use metrics and data to guide decisions, optimize processes, and continuously improve DevOps practices. Implement comprehensive dashboards and alerts to ensure teams can quickly respond to performance issues and maintain optimal health.
Prioritize Customer Needs
In DevOps culture, the organization must emphasize the ever-evolving needs of the customers. Encourage teams to think from the customer’s perspective and keep their needs and satisfaction at the forefront of the software delivery processes. Regularly incorporate customer feedback into the development cycle to ensure the product aligns with user expectations.
Typo - An Effective Platform to Promote DevOps Culture
Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools.
It also offers comprehensive insights into the deployment process through DORA and other key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.
Building a DevOps culture is essential for organizations to improve their software delivery capabilities and maintain a competitive edge. Implementing key practices as mentioned above will pave the way for a successful DevOps transformation.
DORA metrics are a compass for engineering teams striving to optimise their development and operations processes.
Consistently tracking these metrics can lead to significant and lasting improvements in your software delivery processes and overall business performance.
Below is a detailed guide on how Typo uses DORA to improve DevOps performance and boost efficiency:
What are DORA Metrics?
In 2015, the DORA (DevOps Research and Assessment) team was founded by Gene Kim, Jez Humble, and Nicole Forsgren to evaluate and improve software development practices. The aim was to better understand how organisations can deliver software faster, more reliably, and with higher quality.
They developed DORA metrics that provide insights into the performance of DevOps practices and help organisations improve their software development and delivery processes. These metrics help in finding answers to these two questions:
How can organisations identify elite performers?
What should low-performing teams focus on?
The Four DORA Metrics
DORA metrics assess software delivery performance through four key (or ‘Accelerate’) metrics:
Deployment Frequency
Lead Time for Changes
Change Failure Rate
Mean Time to Recover
Deployment Frequency
Deployment Frequency measures the number of times code is deployed into production. It helps in understanding the team’s throughput and quantifying how much value is delivered to customers.
When organizations achieve a high Deployment Frequency, they can enjoy rapid releases without compromising the software’s robustness. This can be a powerful driver of agility and efficiency, making it an essential component for software development teams.
One deployment per week is standard. However, it also depends on the type of product.
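For illustration, here is a minimal Python sketch of the underlying arithmetic, using an invented deployment log; platforms like Typo derive the same number automatically from CI/CD data:

```python
from datetime import date

# Hypothetical deployment log: one entry per successful production deploy.
deploys = [date(2024, 5, 1), date(2024, 5, 3), date(2024, 5, 3), date(2024, 5, 9)]

window_days = 28  # measurement window
per_week = len(deploys) / (window_days / 7)
print(f"Deployment Frequency: {per_week:.1f} deploys/week")
```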
Why is it Important?
It provides insights into the overall efficiency and speed of the DevOps team’s processes.
It helps in identifying pitfalls and areas for improvement in the software development life cycle.
It helps in making data-driven decisions to optimise the process.
It helps in understanding the impact of changes on system performance.
Lead Time for Changes
Lead Time for Changes measures the time it takes for code changes to move from inception to deployment. The measurement of this metric offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.
By analysing the Lead Time for Changes, development teams can identify bottlenecks in the delivery pipeline and streamline their workflows to improve the overall speed and efficiency of software delivery. A shorter lead time indicates that the DevOps team is more efficient in deploying code.
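A minimal sketch of the calculation, assuming hypothetical commit and deploy timestamps:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs for changes shipped this month.
changes = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 2, 17, 0)),
    (datetime(2024, 5, 3, 11, 0), datetime(2024, 5, 3, 16, 30)),
]

lead_times_h = [(deployed - committed) / timedelta(hours=1)
                for committed, deployed in changes]
# Median is less skewed by the occasional long-lived change than the mean.
print(f"Lead Time for Changes (median): {median(lead_times_h):.1f}h")
```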
Why is it Important?
It helps organisations gather feedback and validate assumptions quickly, leading to informed decision-making and aligning software development with customer needs.
It helps organizations gain agility and adaptability, allowing them to swiftly respond to market changes, embrace new technologies, and meet evolving business needs.
It enables experimentation, learning, and continuous improvement, empowering organizations to stay competitive in dynamic environments.
It demands collaborative teamwork, breaking silos, fostering shared ownership, and improving communication, coordination, and efficiency.
Change Failure Rate
Change Failure Rate gauges the percentage of changes that require hotfixes or other remediation after reaching production. It reflects the stability and reliability of the entire software development and deployment lifecycle.
By tracking CFR, teams can identify bottlenecks, flaws, or vulnerabilities in their processes, tools, or infrastructure that can negatively impact the quality, speed, and cost of software delivery.
A CFR between 0% and 15% is considered a good indicator of code quality.
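The arithmetic itself is simple, as this small Python sketch with invented outcomes shows:

```python
# Hypothetical deployment outcomes; True marks a deploy that needed a
# rollback, a hotfix, or raised a high-priority incident.
failed = [False, False, True, False, False, False, True, False, False, False]

cfr = sum(failed) / len(failed) * 100
print(f"Change Failure Rate: {cfr:.0f}%")  # 20% here: above the 0-15% band
```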
Why is it Important?
It enhances user experience and builds trust by reducing failures.
It protects your business from financial risks, helping avoid revenue loss, customer churn, and brand damage.
It helps in allocating resources effectively and focuses on delivering new features.
It ensures changes are implemented smoothly and with minimal disruption.
Mean Time to Recovery
Mean Time to Recovery measures how quickly a team can bounce back from incidents or failures. It concentrates on determining the efficiency and effectiveness of an organisation’s incident response and resolution procedures.
A lower mean time to recovery is synonymous with a resilient system capable of handling challenges effectively.
The recovery time should be as short as possible; under 24 hours is considered a good rule of thumb.
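A minimal Python sketch of the calculation, with invented incident timestamps:

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incidents: (failure detected, service restored).
incidents = [
    (datetime(2024, 5, 2, 14, 0), datetime(2024, 5, 2, 16, 30)),
    (datetime(2024, 5, 9, 9, 15), datetime(2024, 5, 9, 10, 0)),
]

mttr_h = mean((restored - failed) / timedelta(hours=1)
              for failed, restored in incidents)
print(f"Mean Time to Recovery: {mttr_h:.1f}h")
```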
Why is it Important?
It enhances user satisfaction by reducing downtime and resolution times.
It mitigates the negative impacts of downtime on business operations, including financial losses, missed opportunities, and reputational damage.
It helps meet service level agreements (SLAs) that are vital for upholding client trust and fulfilling contractual commitments.
It provides valuable insights into day-to-day practices such as incident management and engineering team performance, and helps elevate customer satisfaction.
The Fifth Metric: Reliability
Reliability is a fifth metric that was added by the DORA team in 2021. It measures modern operational practices and doesn’t have standard quantifiable targets for performance levels.
Reliability comprises several measures of operational performance, including availability, latency, performance, and scalability, assessed against user-facing behaviour, software SLAs, performance targets, and error budgets.
How Typo Uses DORA to Boost Dev Efficiency?
Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It offers comprehensive insights into the deployment process through key DORA metrics such as change failure rate, time to build, and deployment frequency.
Below is a detailed view of how Typo uses DORA to boost dev efficiency and team performance:
DORA Metrics Dashboard
Typo’s DORA metrics dashboard has a user-friendly interface and robust features tailored for DevOps excellence. It helps identify bottlenecks, improve collaboration between teams, optimise delivery speed, and communicate the team’s success effectively.
The DORA metrics dashboard pulls in data from all your sources and presents it in a visual, detailed way to engineering leaders and the development team.
The dashboard helps in many ways:
With pre-built integrations in the dev tool stack, DORA dashboard provides all the relevant data flowing in within minutes.
It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency and much more from a single dashboard.
The dashboard sets custom improvement goals for each team and tracks their success in real-time.
It gives real-time visibility into a team’s KPI and lets them make informed decisions.
Defining clear objectives
Firstly, define clear and measurable objectives. Consider KPIs that align with your organisational goals. Whether it’s improving deployment speed, reducing failure rates, or enhancing overall efficiency, having a well-defined set of objectives will help guide your implementation of the dashboard.
Understanding DORA metrics
Gain a deeper understanding of DORA metrics by exploring the nuances of Deployment Frequency, Lead Time, Change Failure Rate, and MTTR. Then, connect each of these metrics with your organisation’s DevOps goals to have a comprehensive understanding of how they contribute towards improving overall performance and efficiency.
Dashboard configuration
Follow specific guidelines to properly configure your dashboard. Customise the widgets to accurately represent important metrics and personalise the layout to create a clear and intuitive visualisation of your data. This ensures that your team can easily interpret the insights provided by the dashboard and take appropriate actions.
Implementing data collection mechanisms
To ensure the accuracy and reliability of your DORA Metrics, establish strong data collection mechanisms. Configure your dashboard to collect real-time data from relevant sources, so that the metrics reflect the current state of your DevOps processes.
Integrating automation tools
Integrate automation tools to optimise the performance of your DORA Metrics Dashboard.
By utilising automation for data collection, analysis, and reporting processes, you can streamline routine tasks. This will free up your team’s time and allow them to focus on making strategic decisions and improvements.
Utilising the dashboard effectively
To get the most out of your well-configured DORA Metrics Dashboard, use the insights gained to identify bottlenecks, streamline processes, and improve overall DevOps efficiency. Analyse the dashboard data regularly to drive continuous improvement initiatives and make informed decisions that will positively impact your software development lifecycle.
Comprehensive Visualization of Key Metrics
Typo’s dashboard provides clear and intuitive visualisations of the four key DORA metrics:
Deployment Frequency
It tracks how often new code is deployed to production, highlighting the team’s productivity.
By integrating with your CI/CD tool, Typo calculates Deployment Frequency by counting the number of unique production deployments within the selected time range. You can configure which workflows and repositories count as production.
Cycle Time (Lead Time for Changes)
It measures the time it takes from code being committed to it being deployed in production, indicating the efficiency of the development pipeline.
In the context of Typo, it is the average time all pull requests have spent in the “Coding”, “Pickup”, “Review”, and “Merge” stages of the pipeline. Typo considers all merged Pull Requests to the main/master/production branch within the selected time range and calculates the average time each Pull Request spent in every stage of the pipeline. Open or draft Pull Requests are not considered in this calculation.
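To picture the stage-wise averaging described above, here is a small Python sketch with invented per-PR durations; it mirrors the Coding → Pickup → Review → Merge breakdown but is not Typo’s actual implementation:

```python
from statistics import mean

# Hypothetical per-PR stage durations in hours (merged PRs only).
merged_prs = [
    {"coding": 20.0, "pickup": 4.0, "review": 9.0, "merge": 1.0},
    {"coding": 35.0, "pickup": 12.0, "review": 6.0, "merge": 0.5},
]

# Average time spent in each stage, plus the overall cycle time.
for stage in ("coding", "pickup", "review", "merge"):
    print(f"{stage:>6}: {mean(pr[stage] for pr in merged_prs):.1f}h avg")
print(f" cycle: {mean(sum(pr.values()) for pr in merged_prs):.1f}h avg")
```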
Change Failure Rate
It shows the percentage of deployments causing a failure in production, reflecting the quality and stability of releases.
There are multiple ways this metric can be configured:
A deployment that needs a rollback or a hotfix: For such cases, any Pull Request having a title/tag/label that represents a rollback/hotfix that is merged to production can be considered as a failure.
A high-priority production incident: For such cases, any ticket in your Issue Tracker having a title/tag/label that represents a high-priority production incident can be considered as a failure.
A deployment that failed during the production workflow: For such cases, Typo can integrate with your CI/CD tool and consider any failed deployment as a failure.
To calculate the final percentage, the total number of failures is divided by the total number of deployments (which can be taken either from the Deployment PRs or from the CI/CD tool’s deployments).
Mean Time to Restore (MTTR)
It measures the time taken to recover from a failure, showing the team’s ability to respond to and fix issues.
The way a team tracks production failures (CFR) defines how MTTR is calculated for that team. If a team considers a production failure as:
Pull Request tagging to track a deployment that needs a rollback or a hotfix: In such a case, MTTR is calculated as the time between the last deployment till such a Pull Request was merged to main/master/production.
Tickets tagging for high-priority production incidents: In such a case, MTTR is calculated as the average time such a ticket takes from the ‘In Progress’ state to the ‘Done’ state.
CI/CD integration to track deployments that failed during the production workflow: In such a case, MTTR is calculated as the average time between a deployment failure and the next successful deployment.
Benchmarking for Context
Industry Standards: By providing benchmarks, Typo allows teams to compare their performance against industry standards, helping them understand where they stand.
Historical Performance: Teams can also compare their current performance with their historical data to track improvements or identify regressions.
Find out what it takes to build reliable high-velocity dev teams:
Typo provides a clear, data-driven view of software development performance. It offers insights into various aspects of development and operational processes.
It helps in tracking progress over time. Through continuous tracking, it monitors improvements or regressions in a team’s performance.
It supports DevOps practices that focus on both development speed and operational stability.
DORA metrics help in mitigating risk. With the help of CFR and MTTR, engineering leaders can manage and lower risk, ensuring more stability and reliability associated with software changes.
It identifies bottlenecks and inefficiencies and pinpoints where the team is struggling such as longer lead times or high failure rates.
How Does it Help Development Teams?
Typo provides a clear, real-time view of a team’s performance and lets the team make informed decisions based on empirical data rather than guesswork.
It encourages balance between speed and quality by providing metrics that highlight both aspects.
It helps in predicting future performance based on historical data. This helps in better planning and resource allocation.
It helps in identifying potential risks early and taking proactive measures to mitigate them.
Conclusion
DORA metrics deliver crucial insights into team performance. Monitoring Change Failure Rate and Mean Time to Recovery helps leaders ensure their teams are building resilient services with minimal downtime. Similarly, keeping an eye on Deployment Frequency and Lead Time for Changes assures engineering leaders that the team is maintaining a swift pace.
Together, these metrics offer a clear picture of how well the team balances speed and quality in their workflows.
One of the ways organizations are investing in their developers is through a continuous feedback process. While it may seem straightforward, it is not. Every developer takes feedback differently. Hence, it is important to engineer feedback the right way.
Why is the feedback process important?
Below are a few reasons why continuous feedback is beneficial for both developers and engineering leaders:
Keeps everyone on the same page: Feedback puts individuals on the same page, no matter what type of tasks they are working on. It allows them to understand their strengths, improve their blind spots, and hence deliver high-quality work.
Facilitates improvement: Feedback shows developers the areas they need to improve and the opportunities they can seize according to their strengths. With the right context and motivation, it can encourage software developers to work on their personal and professional growth.
Nurtures healthy relationships: Feedback fosters open and honest communication. It lets developers feel comfortable sharing ideas and seeking support without judgement, even when they aren’t performing well.
Enhances user satisfaction: Feedback helps developers enhance their quality of work. This can have a direct impact on user satisfaction, which in turn positively affects the organization.
Strengthens performance management: Feedback enables you to set clear expectations, track progress, and provide ongoing support and guidance to developers. This further strengthens their performance and streamlines their workflow.
How to engineer your feedback?
There are a lot of things to consider when giving effective and honest feedback. We’ve divided the process into three stages; check them out below:
Before the feedback session
Frame the context of the developer feedback
Plan in advance how you will start the conversation, what is worth mentioning, and what is not. For example, if the feedback relates to pull requests, you can start by discussing past performance in that area. You can then talk about how well they are performing, whether they are delivering work on time, how you would rate their performance and action plan, and whether there are any challenges they are facing. Make sure to relate it all to the bigger picture.
When framed appropriately and constructively, it helps in focusing on improvement rather than criticism. It also enables developers to take feedback the right way and help them grow and succeed.
Keep tracking continuously
Observe and note down everything related to the developers, and track their performance continuously. Jot down whatever you notice, even if it doesn’t seem worth mentioning during a feedback session. This allows you to share feedback more accurately and comprehensively. It also helps you identify trends and patterns in developer performance and lets developers know that the feedback isn’t based on isolated incidents but on consistent observation.
For example, XYZ is a software developer at ABC organization. The engineering leader observed XYZ for three months before delivering feedback:
In the 1st month, XYZ struggled with the initial implementation strategy, so she provided him with resources.
In the 2nd month, he showed signs of improvement, yet he hesitated to participate in team meetings.
In the 3rd month, XYZ’s technical skills kept improving, but he still struggled to engage in meetings and share his ideas.
As a result, the engineering leader was able to discuss his strengths and areas of improvement effectively.
Understand the difference between feedback and criticism
Before offering feedback to software development teams, make sure you are well aware of the difference between constructive feedback and criticism. Constructive feedback encourages developers in their personal and professional development. Criticism, on the other hand, makes developers defensive and hinders their progress.
Constructive feedback allows you to focus on the behavior and outcome of the developers and help them by providing actionable insights while criticism focuses on faults and mistakes without providing the right guidance.
For example,
Situation: A developer’s recent code review missed several critical issues.
Feedback: “Your recent code review missed a few critical issues, like the memory leak in the data processing module. Next time, please double-check for potential memory leaks. If you’re unsure how to spot them, let’s review some strategies together.”
Criticism: “Your code reviews are sloppy and miss too many important issues. You need to do a better job.”
Collect all important information
Review previous feedback given to developers before the session. Check what was last discussed and make sure to bring it up again. Also, include the observations you tracked during this period and connect them with the previous feedback. Look at metrics such as pull request activity, work progress, team velocity, work logs, check-ins, and more to get in-depth insights into their work. You can also gather peer reviews to get 360-degree feedback and better understand how well individuals are performing.
This makes your feedback balanced and takes into account all aspects of developers’ contributions and challenges.
During the feedback session
Two-way feedback
The feedback shouldn’t be a top-down approach. It must go both ways. You can start by bringing up the discussion that happened in the previous feedback session. Know their opinion and perspective on certain topics and ideas. Make sure that you ask questions to make them realize that you respect their opinions and want to hear what they want to discuss.
Now, share your feedback based on the last discussion, observations, and performance. You can also modify your feedback based on their perspective and reflections. It allows the feedback to be detailed and comprehensive.
Establish clear steps for improvement
Once you have shared their areas of improvement, make sure you also provide clear, actionable plans. Discuss what needs immediate attention and what steps they can take. Set small goals together, as this makes it easier to stay focused and shows them that their goals matter. Schedule follow-up meetings after each step to understand whether they are facing any challenges, and provide resources and tools that can help them attain their goals.
Apply the SBI framework
Developed by the Center for Creative Leadership, the SBI stands for situation, behavior, and impact framework. It includes:
Situation: First, describe the specific context or scenario in which the observation/behavior took place. Provide factual details and avoid vague descriptions.
Example: Last week’s team collaboration on the new feature development.
Behavior: Now, articulate specific behavior you observed or experienced during that situation. Focus only on tangible actions or words instead of assumptions or generalizations.
Example: “You did not participate actively in the brainstorming sessions and missed a few important meetings.”
Impact: Lastly, explain the impact of behavior on you or others involved. Share the consequences on the team, project, and the organization.
Example: “This led to a lack of input from your side, and we missed out on potentially valuable ideas. It also caused some delays as we had to reschedule discussions.”
Final words could be: “Please ensure to attend all relevant meetings and actively participate in discussions. Your contributions are important to the team.”
This allows for delivering feedback that is clear, actionable, and respectful, and keeps it relevant and directly tied to the situation. Note that this framework works for both positive and negative feedback.
Understand constraints and personal circumstances
It is also important to know if any constraints are negatively impacting their performance. These could include tight deadlines or a heavy workload hampering their productivity, or health issues preventing them from focusing properly. Ask about these while you deliver the feedback, and shape your action plans accordingly. This shows developers that you care about them and makes the feedback more personalized and relevant. Besides this, it also allows you to suggest tangible improvements rather than adding more pressure.
For example: “During the last sprint, there were a few missed deadlines. Is there something outside of work that might be affecting your ability to meet these deadlines? Please let me know if there’s anything we can do to accommodate your situation.”
Ask them if there’s anything else to discuss and summarize the feedback
Before concluding the meeting, ask them if there’s anything they would like to discuss. It could be that they missed something or that it wasn’t brought up during the session.
Afterwards, summarize what has been discussed. Ask the developers what their key takeaways from the session are, and share your perspective as well. You can document the summary to help both you and the developers in future feedback meetings. This builds mutual understanding and ensures that both of you are on the same page.
After the feedback session
Write a summary for yourself
Keep a record of what was discussed during the session and the action plans provided to the developers. You can revisit it in future feedback meetings or performance evaluations. An example structure for the summary:
Date and time
List the main topics and specific behaviors discussed.
Include any constraints, personal circumstances, or insights the developer shared.
Outline the specific actions, along with any support or resources you committed to providing.
Detail the agreed-upon timeline for follow-up meetings or check-ins to monitor progress.
Add any personal observations or reflections that might help in future interactions.
Monitor the progress
Ensure you give them measurable goals and timelines during the feedback session. Monitor their progress through check-ins, provide ongoing support and guidance, and keep discussing the challenges or roadblocks they are facing. It helps the developers stay on track and feel supported throughout their journey.
How Typo can help enhance the feedback process?
Typo is an effective software engineering intelligence platform that can help in improving the feedback process within development teams. Here’s how Typo’s features can be leveraged to enhance feedback sessions:
By providing visibility into key SDLC metrics, engineering managers can give more precise and data-driven feedback.
It also captures qualitative insights and provides a 360-degree view of the developer experience allowing managers to understand the real issues developers face.
Comparing the team’s performance across industry benchmarks can help in understanding where the developers stand.
Customizable dashboards allow teams to focus on the most relevant metrics, ensuring feedback is aligned with the team’s specific goals and challenges.
The sprint analysis feature tracks and analyzes the progress throughout a sprint, making it easier to identify bottlenecks and areas for improvement. This makes the feedback more timely and targeted.
Software developers deserve high-quality feedback. It not only helps them identify their blind spots but also polishes their skills. The feedback loop lets developers know where they stand and the recognition they deserve.
Tired of code reviews disrupting your workflow? As developers know, pull request reviews are crucial for software quality, but they often lead to context switching and time-consuming interruptions. That's why Typo is excited to announce two powerful new features designed to empower reviewers: AI-Generated PR Summaries and Estimated Time to Review Labels. These features are built to minimize interruptions, save time, and ultimately make your life as a reviewer significantly easier.
AI-Powered PR Summary for Efficient Code Reviews
1. Take Control of Your Schedule with Estimated Time to Review Labels
Imagine knowing exactly how much time a pull request (PR) will take to review. No more guessing, no more unexpected time sinks. Typo's Estimated Time to Review Labels provide a clear, data-driven estimate of the review effort required.
How It Works:
Intelligent Analysis: Typo analyzes code changes, file complexity, and the number of lines modified to calculate an estimated review time (a simplified sketch of the idea follows this list).
Clear Labels: The tool automatically assigns labels like "Quick Review (Under 5 minutes)," "Moderate Review (5-15 minutes)," or "In-Depth Review (15+ minutes)."
Strategic Prioritization: Reviewers can use these labels to prioritize PRs based on their available time, ensuring they stay focused on their current tasks.
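To give a feel for the idea, here is a toy Python heuristic; the scoring formula, weights, and thresholds are invented for illustration and are not Typo's actual model:

```python
# Toy heuristic: weight lines changed by a per-file complexity factor and
# bucket the score into the label tiers described above.
def review_label(changed_lines: int, files_touched: int,
                 complexity: float = 1.0) -> str:
    score = changed_lines * complexity + files_touched * 10
    if score < 100:
        return "Quick Review (Under 5 minutes)"
    if score < 400:
        return "Moderate Review (5-15 minutes)"
    return "In-Depth Review (15+ minutes)"

print(review_label(changed_lines=40, files_touched=2))                   # Quick
print(review_label(changed_lines=350, files_touched=8, complexity=1.5))  # In-Depth
```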
Benefits:
Minimize Interruptions: Easily defer in-depth reviews until you have dedicated time, avoiding context switching.
Optimize Workflow: Prioritize quick reviews to clear backlogs and maintain a smooth development pipeline.
Improve Time Management: Gain a clear understanding of the time commitment required for each review.
2. Accelerate Approvals with AI-Generated PR Summaries
Time is a precious commodity for developers. Typo's AI-Generated PR Summaries provide a concise and insightful overview of code changes, allowing reviewers to quickly grasp the key modifications without wading through every line of code.
Concise Summaries: The AI generates a clear summary highlighting the purpose and impact of the changes.
Rapid Understanding: Reviewers can quickly understand the context and make informed decisions.
Benefits:
Faster Review Cycles: Quickly grasp the essence of PRs and accelerate the approval process.
Enhanced Efficiency: Save valuable time by avoiding manual code inspection for every change.
Improved Focus: Quickly understand the changes, and get back to your own work.
Typo: Empowering Reviewers, Boosting Productivity
These two features work together to create a more efficient and less disruptive code review process. By providing time estimates and AI-powered summaries, Typo empowers reviewers to:
Maintain focus on their primary tasks.
Save valuable time and reduce context switching.
Accelerate the code review process.
Increase developer velocity.
Key Takeaways:
Typo helps developers maintain focus and save time, even when faced with incoming PR reviews.
Estimated Time to Review Labels provide valuable insights into review effort, enabling better time management.
AI-Generated PR Summaries accelerate approvals by providing concise overviews of code changes.
Ready to transform your code review workflow?
Try Typo today and experience the benefits of AI-powered time estimates and summaries. Streamline your processes, boost productivity, and empower your development team.
In an ever-evolving tech world, organisations need to innovate quickly while keeping up high standards of quality and performance. The key to achieving these goals is empowering engineering leaders with the right tools and technologies.
About Typo
Typo is a software intelligence platform that optimizes software delivery by identifying real-time bottlenecks in SDLC, automating code reviews, and measuring developer experience. We aim to help organizations ship reliable software faster and build high-performing teams.
However, engineering leaders often struggle to bridge the divide between traditional management practices and modern software development, leading to missed opportunities for growth, ineffective team dynamics, and slower progress toward organizational goals.
To address this gap, we launched groCTO, a community designed specifically for engineering leaders.
What is groCTO Community?
Effective engineering leadership is crucial for building high-performing teams and driving innovation, yet many leaders face significant challenges that hinder their effectiveness. The role is both demanding and essential: from aligning teams with strategic goals to managing complex projects and fostering a positive culture, engineering leaders have a lot on their plates. They need the right direction and support to navigate these challenges and guide their teams efficiently.
Here’s when groCTO comes in!
groCTO is a community designed to empower engineering managers on their leadership journey. The aim is to help engineering leaders evolve, navigate complex technical challenges, and drive innovative solutions to create groundbreaking software. Engineering leaders can connect, learn, and grow to enhance their capabilities and, in turn, the performance of their teams.
Key Components of groCTO
groCTO Connect
Over 73% of successful tech leaders believe having a mentor is key to their success.
At groCTO, we recognize mentorship as a powerful tool for addressing leadership challenges and offering personalised support and fresh perspectives. That’s why we’ve kept Connect a cornerstone of our community - offering 1:1 mentorship sessions with global tech leaders and CTOs. With over 74 mentees and 20 mentors, our Connect program fosters valuable relationships and supports your growth as a tech leader.
Gain personalised advice: Through 1:1 sessions, mentors address individual challenges and tailor guidance to the specific needs and career goals of emerging leaders.
Navigate career growth: These mentors understand the strengths and weaknesses of the individual and help them focus on improving specific leadership skills and competencies and build confidence.
Build valuable professional relationships: Our mentorship sessions expand professional connections and foster collaborations and knowledge sharing that can offer ongoing support and opportunities.
Weekly Tech Insights
To keep our tech community informed and inspired, groCTO brings you a fresh set of learning resources every week:
CTO Diaries: The CTO Diaries provide a unique glimpse into the experiences and lessons learned by seasoned Chief Technology Officers, including personal stories, challenges faced, and successful strategies. They help engineering leaders gain practical insights and real-world examples that can inspire and inform their approach to leadership and team management.
groCTO Originals is a weekly podcast for current and aspiring tech leaders aiming to transform their approach by learning from seasoned industry experts and successful engineering leaders across the globe.
‘The DORA Lab’ by groCTO is an exclusive podcast that’s all about DORA and other engineering metrics. In each episode, expert leaders from the tech world bring their extensive knowledge of the challenges, inspirations, and practical uses of DORA metrics and beyond.
Bytes: groCTO Bytes is a weekly Sunday dose of curated wisdom delivered straight to your inbox as a newsletter. Our goal is to keep tech leaders, CTOs, and VPEs up-to-date on the latest trends and best practices in engineering leadership, tech management, system design, and more.
At groCTO, we are committed to making this community bigger and better. We want current and aspiring engineering leaders to invest in their growth as well as contribute to pushing the boundaries of what engineering teams can achieve.
We’re just getting started. A few of our future plans for groCTO include:
Virtual Events: We plan to conduct interactive webinars and workshops to help engineering leaders and CTOs get deeper dives into specific topics and networking opportunities.
Slack Channels: We plan to create Slack channels to allow emerging tech leaders to engage in vibrant discussions and get real-time support tailored to various aspects of engineering leadership.
We envision a community that thrives on continuous engagement and growth. By scaling our resources and expanding our initiatives, we want to ensure that every member of groCTO finds the support and knowledge they need to excel.
Get in Touch with us!
At Typo, our vision is clear: to ship reliable software faster and build high-performing engineering teams. With groCTO, we are making significant progress toward this goal by empowering engineering leaders with the tools and support they need to excel.
Join us in this exciting new chapter and be a part of a community that empowers tech leaders to excel and innovate.
We’d love to hear from you! For more information about groCTO and how to get involved, write to us at hello@grocto.dev
Dev teams hold great importance in the engineering organization. They are essential for building high-quality software products, fostering innovation, and driving the success of technology companies in today’s competitive market.
However, engineering leaders need to understand the bottlenecks holding their teams back, since these blind spots can directly affect projects. This is where software development analytics tools come to the rescue, and such tools stand out when they offer the features and integrations engineering leaders are usually looking for.
Typo is an intelligent engineering platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. Here’s why engineering leaders choose Typo as an essential tool:
You get Customized DORA and other Engineering Metrics
Engineering metrics are measurements of engineering outputs and processes. However, there is no single pre-defined set of metrics that every software development team should track to ensure success; the right set depends on various factors, including team size, the background of the team members, and so on.
Typo’s customized DORA (Deployment frequency, Change failure rate, Lead time, and Mean Time to Recover) key metrics and other engineering metrics can be configured in a single dashboard based on specific development processes. This helps benchmark the dev team’s performance and identifies real-time bottlenecks, sprint delays, and blocked PRs. With the user-friendly interface and tailored integrations, engineering leaders can get all the relevant data within minutes and drive continuous improvement.
Typo has an In-Built Automated Code Review Feature
Code review is all about improving code quality. It improves software teams’ productivity and streamlines the development process. However, when done manually, code review can be time-consuming and take a lot of effort.
Typo’s automated code review tool auto-analyses codebase and pull requests to find issues and auto-generates fixes before it merges to master. It understands the context of your code and quickly finds and fixes any issues accurately, making pull requests easy and stress-free. It standardizes your code, reducing the risk of a software security breach and boosting maintainability, while also providing insights into code coverage and code complexity for thorough analysis.
You can Track the Team’s Progress with an Advanced Sprint Analysis Tool
While a burndown chart helps visually monitor teams’ work progress, it is time-consuming and doesn’t provide insights about the specific types of issues or tasks. Hence, it is always advisable to complement it with sprint analysis tools to provide additional insights tailored to agile project management.
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. It uses data from Git and the issue management tool to provide insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint. This helps in identifying potential problems early, spotting areas where teams can be more efficient, and meeting deadlines.
The Metrics Dashboard Focuses on Team-Level Improvement, Not on Micromanaging Individual Developers
When engineering metrics focus on individual success rather than team performance, it creates a sense of surveillance rather than support. This leads to decreased motivation, productivity, and trust among development teams. Hence, there are better ways to use the engineering metrics.
Typo has a metrics dashboard that focuses on the team’s health and performance. It lets engineering leaders compare the team’s results against healthy industry benchmarks and drive impactful initiatives for the team. Since it considers only team-level goals, it encourages members to work and solve problems together, fostering a healthier, more productive work environment conducive to innovation and growth.
Typo Takes into Consideration the Human Side of Engineering
Measuring developer experience requires not only quantitative metrics but also qualitative feedback. By prioritizing the human side of team members and developer productivity, engineering managers can create a more inclusive and supportive environment for them.
Typo helps in getting a 360° view of the developer experience: it captures qualitative insights and provides an in-depth view of the real issues that need attention. With signals from work patterns and continuous AI-driven pulse check-ins on developers’ experience, Typo surfaces early indicators of their well-being and actionable insights on the areas that need your attention. It also tracks developers’ work habits across multiple activities, such as Commits, PRs, Reviews, Comments, Tasks, and Merges, over a period of time. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system flags them as being in the Burnout zone or at risk of burnout.
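For a rough picture of that pattern-based flagging, consider this toy Python sketch; the names, numbers, and the 1.5x-of-team-average threshold are invented and are not Typo's actual model:

```python
from statistics import mean

# Hypothetical weekly activity counts per developer (commits + PRs +
# reviews + comments), over four consecutive weeks.
activity = {
    "ana": [48, 50, 46, 52],
    "ben": [20, 22, 19, 23],
    "cho": [25, 24, 26, 22],
}

team_avg = mean(mean(weeks) for weeks in activity.values())
for dev, weeks in activity.items():
    # Flag sustained activity far above the team average as a burnout signal.
    if mean(weeks) > 1.5 * team_avg:
        print(f"{dev}: sustained activity {mean(weeks):.0f}/wk vs team avg "
              f"{team_avg:.0f} -> flag as burnout risk")
```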
You Can Integrate a Wide Range of Tools from Your Dev Stack
The more tools that can be integrated, the better for software developers: integrations streamline the development process, enforce standardization and consistency, and provide access to valuable resources and functionalities.
Typo lets you see the complete picture of your engineering health by seamlessly connecting to your tech tool stack. This includes:
Version control tools built on Git
Issue tracker tools for managing tasks, bugs, and other project-related issues
CI/CD tools to automate and streamline the software development process
Communication tools to facilitate the exchange of ideas and information
Incident management tools to resolve unexpected events or failures
Conclusion
Typo is a software delivery tool that helps you ship reliable software faster. You can find real-time bottlenecks in your SDLC, automate code reviews, and measure developer experience – all in a single platform.
We are delighted to share that Typo ranks as a leader in the Software Development Analytics tool category. A big thank you to all our customers who supported us on this journey and took the time to write reviews about their experience. It truly motivated us to keep moving forward and bring our best to the table in the coming weeks.
Typo Taking the Lead
Typo is placed among the leaders in Software Development Analytics. Besides this, we earned the ‘Users Love Us’ badge as well.
Our wall of fame shines bright with –
Leader in the overall Grid® Report for Software Development Analytics Tools category
Leader in the Mid Market Grid® Report for Software Development Analytics Tools category
Rated #1 for Likelihood to Recommend
Rated #1 for Quality of Support
Rated #1 for Meets Requirements
Rated #1 for Ease of Use
Rated #1 for Analytics and Trends
Typo has been ranked a Leader in the Grid Report for Software Development Analytics Tool | Summer 2023. This is a testament to our continuous efforts toward building a product that engineering teams love to use.
The ratings also include –
97% of reviewers rated Typo highly for analyzing historical data to highlight trends, statistics & KPIs
100% of reviewers rated us highly for Productivity Updates
Here’s What our Customers Say about Typo
Check out what other users have to say about Typo here.
What Makes Typo Different?
Typo is an intelligent AI-driven Engineering Management platform that enables modern software teams with visibility, insights & tools to code better, deploy faster & stay aligned with business goals.
Having launched on Product Hunt, we started with 15 engineers working with sheer hard work and dedication, and have since impacted 5,000+ developers and engineering leaders globally, across 400,000+ PRs & 1.5M+ commits.
We are NOT just another software delivery analytics platform. We go beyond SDLC metrics to build an ecosystem that combines intelligent insights, impactful actions & automated workflows, helping managers lead better & developers perform better.
As a first step, Typo gives core insights into dev velocity, quality & throughput, which have helped engineering leaders reduce their PR cycle time by almost 57% and deliver projects 2X faster.
Continuous Improvement with Typo
Typo empowers continuous improvement for both developers & managers, with goal setting and visibility that is specific to the developers themselves.
Leaders can set goals to enforce best practices, such as keeping PR sizes in check, avoiding merging PRs without review, and identifying high-risk work. Typo nudges the key stakeholders on Slack as soon as a goal is breached, and also automates workflows on Slack to help developers ship PRs and complete code reviews faster.
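As an illustration of how such a nudge could be wired up, here is a minimal sketch that checks a PR-size goal and posts to a Slack incoming webhook when the goal is breached. The webhook URL and threshold are hypothetical, not Typo’s implementation; Slack’s standard incoming-webhook API accepts a JSON payload with a text field.

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical
MAX_PR_SIZE = 400  # illustrative goal: keep PRs under 400 changed lines

def nudge_on_breach(pr_title, lines_changed):
    """Post a Slack message if the PR breaches the size goal."""
    if lines_changed <= MAX_PR_SIZE:
        return
    payload = {
        "text": (
            f":warning: PR '{pr_title}' has {lines_changed} changed lines, "
            f"breaching the {MAX_PR_SIZE}-line goal. Consider splitting it."
        )
    }
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

nudge_on_breach("Add billing service", 1250)
```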
Developer’s View
Typo provides core insights to your developers that are 100% confidential to them. It helps developers identify their strengths and the core areas of improvement that affect software delivery, and lets them gain visibility into, and measure, the impact of their work on team efficiency & goals.
Developer’s Well-Being
We believe that all three aspects – work, collaboration & well-being – need to fall into place for an individual to deliver their best. Inspired by the SPACE framework for developer productivity, we support Pulse Check-Ins, Developer Experience insights, Burnout predictions & Engineering surveys to paint a complete picture.
10X your Dev Teams’ Efficiency with Typo
It’s all of your immense love and support that made us a leader in such a short period. We are grateful to you!
But this is just the beginning. Our aim has always been to level up your dev game, and we will be rolling out exciting new releases in the next few weeks.