Typo's Picks

How do you build a culture of engineering metrics that drives real impact? Engineering teams often struggle with inefficiencies — high work-in-progress, unpredictable cycle times, and slow shipping. But what if the right metrics could change that?

In this episode of the groCTO by Typo Podcast, host Kovid Batra speaks with Mario Viktorov Mechoulam, Senior Engineering Manager at Contentsquare, about how to establish a data-driven engineering culture using effective metrics. From overcoming cultural resistance to getting executive buy-in, Mario shares his insights on making metrics work for your team.

What You’ll Learn in This Episode:

Why Metrics Matter: How the lack of metrics creates inefficiencies & frustrations in tech teams.

Building a Metrics-Driven Culture: The five key steps — observability, accountability, understanding, discussions, and agreements.

Overcoming Resistance: How to tackle biases, cultural pushback, and skepticism around metrics.

Practical Tips for Engineering Managers: Early success indicators like reduced work-in-progress & improved predictability.

Getting Executive Buy-In: How to align leadership on the value of engineering metrics.

A Musician’s Path to Engineering Metrics: Mario’s unique journey from music to Lean & Toyota Production System-inspired engineering.

Timestamps

  • 00:00 — Let’s begin!
  • 00:47 — Meet the Guest: Mario
  • 01:48 — Mario’s Journey into Engineering Metrics
  • 03:22 — Building a Metrics-Driven Engineering Culture
  • 06:49 — Challenges & Solutions in Metrics Adoption
  • 07:37 — Why Observability & Accountability Matter
  • 11:12 — Driving Cultural Change for Long-Term Success
  • 20:05 — Getting Leadership Buy-In for Metrics
  • 28:17 — Key Metrics & Early Success Indicators
  • 30:34 — Final Insights & Takeaways

Episode Transcript

Kovid Batra: Hi, everyone. Welcome to the all new episode of groCTO by Typo. This is Kovid, your host. Today with us, we have a very special guest whom I found after stalking a lot of people on LinkedIn, but found him in my nearest circle. Uh, welcome, welcome to the show, Mario. Uh, Mario is a Senior Engineering Manager at Contentsquare and, uh, he is an engineering metrics enthusiast, and that’s where we connected. We talked a lot about it and I was sure that he’s the guy we should have on the podcast to talk about it. And that’s why we thought today’s topic should be something that is very close to Mario, which is setting metrics culture in the engineering teams. So once again, welcome, welcome to the show, Mario. It’s great to have you here.

Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. I’m very happy to join this series.

Kovid Batra: Great. So Mario, I think before we get started, one quick question, so that we know you a little bit more. Uh, this is kind of a ritual we always have, so don’t get surprised by it. Uh, tell us something about yourself from your childhood or from your teenage that defines who you are today.

Mario Viktorov Mechoulam: Right. I think my, my, both of my parents are musicians and I played violin for a few years, um, also in the junior orchestra. I think this contact with music and with the orchestra in particular, uh, was very important to define who I am today because of teamwork and synchronicity. So, orchestras need to work together and need to have very, very good collaboration. So, this part stuck somewhere on the back of my brain. And teamwork and collaboration is something that defines me today and I value a lot in others as well.

Kovid Batra: That’s really interesting. That is one unique thing that I got to learn today. And I’m sure orchestra must have been fun.

Mario Viktorov Mechoulam: Yes.

Kovid Batra: Do you do that, uh, even today?

Mario Viktorov Mechoulam: Uh, no, no, unfortunately I’m, I’m like the black sheep of my family because I, once I discovered computers and switched to that, um, I have not looked back. Uh, some days I regret it a bit, uh, but this new adventure, this journey that I’m going through, um, I don’t think it’s, it’s irreplaceable. So I’m, I’m happy with what I’m doing.

Kovid Batra: Great! Thank you for sharing this. Uh, moving on, uh, to our main section, which is setting a culture of metrics in engineering teams. I think a very known topic, a very difficult to do thing, but I think we’ll address the elephant in the room today because we have an expert here with us today. So Mario, I think I’ll, I’ll start with this. Uh, sorry to say this, but, uh, this looks like a boring topic to a lot of engineering teams, right? People are not immediately aligned towards having metrics and measurement and people looking at what they’re doing. And of course, there are biases around it. It’s a good practice. It’s an ideal practice to have in high performing engineering teams. But what made you, uh, go behind this, uh, what excited you to go behind this?

Mario Viktorov Mechoulam: A very good question. And I agree that, uh, it’s not an easy topic. I think that, uh, what’s behind the metrics is around us, whether we like it or not. Efficiency, effectiveness, optimization, productivity. It’s, it’s in everything we do in the world. So, for example, even if you, if you go to the airport and you stay in a queue for your baggage check-in, um, I’m sure there’s some metrics there, whether they track it or not, I don’t know. And, um, and I discovered in my, my university years, I had, uh, first contact with, uh, the Toyota Production System, with Lean, as we call it in the West, and I discovered how there were, there were things that looked like, like magic, that you could, simply by observing and applying, use to transform the landscape of organizations and systems. And I was very lucky to be in touch with this, uh, with this one professor who is, uh, uh, the Director of the Lean Institute in Spain. Um, and I was surprised to see how no matter how big the corporation, how powerful the people, how much money they have, there were inefficiencies everywhere. And in my eyes, it looks like a magic wand. Uh, you just, uh, wave it around and then you magically solve stuff that could not be solved, uh, no matter how much money you put on them. And this was, yeah, this stuck with me for quite some time, but I never realized until a few years into the industry that, that was not just for manufacturing, but, uh, Lean and metrics, they’re around us and it’s our responsibility to seize it and to make them, to put them to good use.

Kovid Batra: Interesting. Interesting. So I think from here, I would love to know some of the things that you have encountered in your journey, um, as an engineering leader. Uh, when you start implementing or bringing this thought at first point in the teams, what’s their reaction? How do you deal with it? I know it’s an obvious question to ask because I have been dealing with a lot of teams, uh, while working at Typo, but I want to hear it from you firsthand. What’s the experience like? How do you bring it in? How do you motivate those people to actually come on board? So maybe if you have an example, if you have a story to tell us from there, please go ahead.

Mario Viktorov Mechoulam: Of course, of course. It’s not easy and I’ve made a lot of mistakes and one thing that I learned is that there is no fast track. It doesn’t matter if you know, if you know how to do it. If you’ve done it a hundred times, there’s no fast track. Most of the times it’s a slow grind and requires walking the path with people. I like to follow the, these steps. We start with observability, then accountability, then understanding, then discussions and finally agreements. Um, but of course, we cannot, we cannot, uh, uh, drop everything at, at, at, at once at the team because as you said, there are people who are generally wary of, of this, uh, because of, um, bad, bad practices, because of, um, unmet expectations, frustrations in the past. So indeed, um, I have, I have had to be very, very careful about it. So to me, the first thing is starting with observability, you need to be transparent with your intentions. And I think one, one key sentence that has helped me there is that trying to understand what are the things that people care about. Do you care about your customers? Do you care about how much focus time, how much quality focus time do you have? Do you care about the quality of what you ship? Do you care about the impact of what you ship? So if the answer to these questions is yes, and for the majority of engineers, and not only engineers, it’s, it’s yes, uh, then if you care about something, it might be smart to measure it. So that’s a, that’s a good first start. Um, then by asking questions about what are the pains or generating curiosity, like for example, where do you think we spend the most time when we are working to ship something? You can, uh, you can get to a point where the team agrees to have some observability, some metrics in place. So that’s the first step.

Uh, the second step is to generate accountability. And that is arguably harder. Why so? Because in my career, I’ve seen sometimes people, um, who think that these are management metrics. Um, and they are, so don’t get me wrong. I think management can put these metrics to good use, um, but this sends a message in that nobody else is responsible for them, and I disagree with this. I think that everybody is responsible. Of course, I’m ultimately responsible. So, what I do here is I try to help teams understand how they are accountable of this. So if it was me, then I get to decide how it really works, how they do the work, what tools they use, what process they use. This is boring. It’s boring for me, but it’s also boring and frustrating for the people. People might see this as micromanagement. I think it’s, uh, it’s much more intellectually interesting if you get to decide how you do the work. And this is how I connect the accountability so that we can get teams to accept that okay, these metrics that we see, they are a result of how we have decided to work together. The things, the practices, the habits that we do. And we can, we can influence them.

Kovid Batra: Totally. But the thing is, uh, when you say that everyone should be onboarded with this thought that it is not just for the management, for the engineering, what exactly, uh, are those action items that you plan that get this into the team as a culture? Because I, I feel, uh, I’ll touch this topic again when we move ahead, but when we talk about culture, it comes with a lot of aspects that you can, you can not just define, uh, in two days or three days or five days of time. There is a mindset that already exists and everything that you add on top of it comes only or fits only if it aligns with that because changing culture is a hard thing, right? So when you say that people usually feel that these are management metrics, somehow I feel that this is part of the culture. But when you bring it, when you bring it in a way that everyone is accountable, bringing that change into the mindset is, is, is a little hard, I feel. So what exactly do you do there is what I want to understand from you.

Mario Viktorov Mechoulam: Sure. Um, so just, just to be, to be clear, at the point where you introduce this observability and accountability, it’s not, it’s not part of the culture yet. I think this is the, like, putting the foot on the door, uh, to get people to start, um, to start looking at these, using these and eventually they become a culture, but way, way later down the line.

Kovid Batra: Got it, got it. Yeah.

Mario Viktorov Mechoulam: Another thing is that culture takes, takes a lot of time. It’s, uh, um, how can we say? Um, organic adoption is very slow. And after organic adoption, you eventually get a shifting culture. Um, so I was talking to somebody a few weeks back, and they were telling me a senior leader for one of another company, and they were telling me that it took a good 3–4 years to roll out metrics in a company. And even then, they did not have all the levels of adoption, all the cultural changes everywhere in all the layers that they wanted to. Um, so, so this, there’s no fast track. This, this takes time. And when you say that, uh, people are wary about metrics or people think that manage, this is management metrics when they, when, when you say this is part of culture, it’s true. And it comes maybe from a place where people have been kept out of it, or where they have seen that metrics have been misused to do precisely micromanagement, right?

Kovid Batra: Right.

Mario Viktorov Mechoulam: So, yeah, people feel like, oh, with this, my work is going to be scrutinized. Perhaps I’m going to have to cut corners. I’m going to be forced to cut corners. I will have less satisfaction in the work we do. So, so we need to break that, um, to change the culture. We need to break the existing culture and that, that takes time. Um, so for me, this is just the first step. Uh, just the first step to, um, to make people feel responsible, because at the end of the day, um, every, every team costs some, some, some budget, right, to the company. So for an average sized team, we might be talking $1 million, depending on where you’re located, of course. But $1 million per year. So, of course, this, each of these teams, they need to make $1 million in, uh, in impact to at least break even, but we need more. Um, how do we do that? So two things. First, you need, you need to track the impact of the work you do. So that already tells you that if we care about this, there is a metric that we have to incorporate. We have to track the impact, the effect that the work we ship has in the product. But then the second, second thing is to be able to correlate this, um, to correlate what we ship with the impact that we see. And, and there is a very, very, uh, narrow window to do that. You cannot start working on something and then ship it three years later and say, Oh, I had this impact. No, in three years, landscape changed a lot, right? So we need to be quicker in shipping and we need to be tracking what we ship. Therefore, um, measuring lead time, for example, or cycle time becomes one of the highest expressions of being agile, for example.

Kovid Batra: Got it.

Mario Viktorov Mechoulam: So it’s, it’s through these, uh, constant repetition and helping people see how the way they do work, how, whether they track or not, and can improve or not, um, has repercussions in the customer. Um, it’s, it’s the way to start, uh, introducing this, this, uh, this metric concept and eventually helping shift the culture.

Kovid Batra: So is, let’s say cycle time for, for that matter, uh, is, is a metric that is generally applicable in every situation and we can start introducing it at, at the first step and then maybe explore more and, uh, go for some specifics or cycle time is specific to a situation in itself?

Mario Viktorov Mechoulam: I think cycle time is one of these beautiful metrics that you can apply everywhere. Uh, normally you see it applied on the teams. To do, doing, done. But, uh, what I like is that you can apply it, um, everywhere. So you can apply it, um, across teams, you can apply, apply it at line level, you can even apply it at company level. Um, which is not done often. And I think this is, this is a problem. But applying it outside of teams, it’s definitely part of the cultural change. Um, I’ve seen that the focus is often on teams. There’s a lot of focus in optimizing teams, but when you look at the whole picture, um, there are many other places that present opportunities for optimization, and one way to do that is to start, to start measuring.

Kovid Batra: Mario, did you get a chance where you could see, uh, or compare basically, uh, teams or organizations where people are using engineering metrics, and let’s say, a team which doesn’t use engineering metrics? How does the value delivery in these systems, uh, vary, and to what extent, basically?

Mario Viktorov Mechoulam: Let me preface that. Um, metrics are just a cornerstone, but they don’t guarantee that you’d do better or worse than the teams that don’t apply them. However, it’s, it’s very hard, uh, sometimes to know whether you’re doing good or bad if you don’t have something measurable, um, to, to do that. What I’ve seen is much more frustration generally in teams that do not have metrics. But because not having them, uh, forces them into some bad habits. One of the typical things that I, that I see when I join a team or do a Gemba Walk, uh, on some of the teams that are not using engineering metrics, is high work in progress. We’re talking 30+ things are ongoing for a team of five engineers. This means that on average, everybody’s doing 5–6 things at the same time. A lot of context switching, a lot of multitasking, a lot of frustration and leading to things taking months to ship instead of days. Of course, as I said, we can have teams that are doing great without this, but, um, if you’re already doing this, I think just adding the metric to validate it is a very small price to pay. And even if you’re doing great, this can start to change in any moment because of changes in the team composition, changes in the domain, changes in the company, changes in the process that is top-down. So it’s, uh, normally it’s, it’s, it’s very safe to have the metrics to be able to identify this type of drift, this type of degradation as soon as they happen. What I’ve seen also with teams that do have metric adoption is first this eventual cultural change, but then in general, uh, one thing that they do is that they keep, um, they keep the pieces of work small, they limit the work in progress and they are very, very much on top of the results on a regular basis and discussing these results. Um, so this is where we can continue with the, uh, cultural change.

Uh, so after we have, uh, accountability, uh, the next thing, step is understanding. So helping people through documentation, but also through coaching, understand how the choices that we make, the decisions, the events, produce the results that we see for which we’re responsible. And after that, fostering discussion for which you need to have trust, because here we don’t want blaming. We don’t want comparing teams. We want to understand what happened, what led to this. And then, with these discussions, see what can we do to prevent these things. Um, which leads to agreement. So doing this circle, closing the circle, doing it constantly, creates habits. Habits create continuous improvement, continuous learning. And at a certain point, you have the feeling that the team already understands the concepts and is able to work autonomously on this. And this is the moment where you delegate responsibility, um, of this and of the execution as well. And you have created, you have changed a bit the culture in one team.

Kovid Batra: Makes sense. What else does it take, uh, to actually bring in this culture? What else do you think is, uh, missing in this recipe yet?

Mario Viktorov Mechoulam: Yes. Um, I think working with teams is one thing. It’s a small and controlled environment. But the next thing is that you need executive sponsorship. You need to work at the organization level. And that is, that is a bit harder. Let’s say just a bit harder. Um, why is it hard?

Kovid Batra: I see some personal pain coming in there, right?

Mario Viktorov Mechoulam: Um, well, no, it depends. I think it can be harder or it can be easier. So, for example, uh, my experience with startups is that in general, getting executive sponsorship there, the buy-in, is way easier. Um, at the same time, the, because it’s flatter, so you’re in contact day to day with the people who, who need to give you this buy-in. At the same time, very interestingly, engineers in these organizations often are, often need these metrics much less at that point. Why? Because when we talk about startups, we’re talking about much less meetings, much less process. A lot of times, a lot of, um, people usually wear multiple hats, boundaries between roles are not clear. So there’s a lot of collaboration. People usually sit in the very same room. Um, so, so these are engineers that don’t need it, but it’s also a good moment to plant the seed because when these companies grow, uh, you’ll be thankful for that later. Uh, where it’s harder to get it, it’s in bigger corporations. But it’s in these places where I think that it’s most needed because the amount of process, the amount of bureaucracy, the amount of meetings, is very, very draining to the teams in those places. And usually you see all these just piles up. It seldom gets removed. Um, that, maybe it’s a topic for a different discussion. But I think people are very afraid of removing something and then be responsible of the result that removal brings. But yeah, I have, I have had, um, we can say fairly, a fair success of also getting the executive sponsorship, uh, in, in organizations to, to support this and I have learned a few things also along the way.

Kovid Batra: Would you like to share some of the examples? Not specifically from, let’s say, uh, getting sponsorship from the executives, I would be interested because you say it’s a little hard in places. So what things do you think, uh, can work out when you are in that room where you need to get a buy-in on this? What exactly drives that?

Mario Viktorov Mechoulam: Yes. The first point is the same, both for grassroots movements with teams and executive sponsorship, and that is to be transparent. Transparent with what, what do you want to do? What’s your intent and why do you think this is important? Uh, now here, and I’m embarrassed to say this, um, we, we want to change the culture, right? So we should focus on talking about habits, um, right? About culture, about people, et cetera. Not that much about, um, magic to say that, but I, but I’m guilty of using that because, um, people, people like how this sounds, people like to see, to, to, to hear, oh, we’ll introduce metrics and they will be faster and we’ll be more efficient. Um, so it’s not a direct relationship. As I said, it’s a stepping stone that can help you get there. Um, but, but it’s not, it’s not a one month journey or a one year journey. It can take slightly longer, but sometimes to get, to get the attention, you have to have a pitch which focuses more on efficiency, which focuses more on predictability and these type of things. So that’s definitely one, one learning. Um, second learning is that it’s very important, no matter who you are, but it’s even more important when you are, uh, not at the top of the, uh, of the management, uh, uh, pyramid to get, um, by, uh, so to get coaching from your, your direct manager. So if you have somebody that, uh, makes your goals, your objectives, their own, uh, it’s great because they have more experience, uh, they can help you navigate these and present the cases, uh, in a much better and structured way for the, for the intent that you have. And I was very lucky there as well to count on people that were supportive, uh, that were coaching me along the way. Um, yes.

So, first step is the same. First step is to be transparent and, uh, with your intent and share something that you have done already. Uh, here we are often in a situation where you have to put your money where your mouth is, and sometimes you have to invest from your own pocket if you want, for example, um, to use a specific tool. So to me, tools don’t really matter. So what’s important is start with some, something and then build up on top of it, change the culture, and then you’ll find the perfect tool that serves your purpose. Um, exactly. So sometimes you have to, you have to initiate this if you want to have some, some, some metrics. Of course, you can always do this manually. I’ve done it in the past, but I definitely don’t recommend it because it’s a lot of work. In an era where most of these tools are commodities, so we’re lucky enough to be able to gather this metric, this information. Yeah, so usually after this PoC, this experiment for three to six months with the team, you should have some results that you can present, um, to, um, to get executive sponsorship. Something that’s important here that I learned is that you need to present the results very, very precisely. Uh, so what was the problem? What are the actions we did? What’s the result? And that’s not always easy because when you, when you work with metrics for a while, you quickly start to see that there are a lot of synergies. There’s overlapping. There are things that impact other things, right? So sometimes you see a change in the trend, you see an improvement somewhere, uh, you see the cultural impact also happening, but you’re not able to define exactly what’s one thing that we need or two things that we, that we need to change that. Um, so, so that part, I think is very important, but it’s not always easy. So it has to be prepared clearly. Um, the second part is that unfortunately, I discovered that not many people are familiar with the topics. So when introducing it to get the executive sponsorship, you need to, you need to be able to explain them in a very simple, uh, and an easy way and also be mindful of the time because most of the people are very busy. Um, so you don’t want to go in a full, uh, full-blown explanation of several hours.

Kovid Batra: I think those people should watch these kinds of podcasts.

Mario Viktorov Mechoulam: Yeah. Um, but, but, yeah, so it’s, it’s, it’s the experiment, it’s the results, it’s the actions, but also it’s a bit of background of why is this important and, um, yeah, and, and how did it influence what we did.

Kovid Batra: Yeah, I mean, there’s always, uh, different, uh, levels where people are in this journey. Let’s, let’s call this a journey where you are super aware, you know what needs to be done. And then there is a place where you’re not aware of the problem itself. So when you go through this funnel, there are people whom you need to onboard in your team, who need to first understand what we are talking about, what it means, how it’s going to impact, and what exactly it is, in very simple layman language. So I totally understand that point and realize how easy, as well as how difficult, it is to get these things in place and bring that culture of metrics, engineering metrics, into engineering teams.

Well, I think this was something really, really interesting. Uh, one last piece that I want to touch upon is when you put in all these efforts into onboarding the teams, fostering that culture, getting buy-in from the executives, doing your PoCs and then presenting it, getting in sync with the team, there must be some specific indicators, right, that you start seeing in the teams. I know you have just covered it, but I want to again highlight that point that what exactly someone who is, let’s say an engineering manager and trying to implement it in the team should be looking for early on, or let’s say maybe one month, two months down the line when they started doing that PoC in their teams.

Mario Viktorov Mechoulam: I think, um, how comfortable the people in the team get in discussing and explaining the concepts during analysis of the metrics, this quality analysis is key. Um, and this is probably where most of the effort goes in the first months. We need to make sure that people do understand the metrics, what they represent, how the work we do has an impact on those. And, um, when we reached that point, um, one, one cue for me was the people in my teams, uh, telling me, I want to run this. This meant to me that we had closed the circle and we were close to having a habit and people were, uh, were ready to have this responsibility delegated to them to execute this. So it put people in a place where, um, they had to drive a conversation and they had to think about okay, what am I seeing? What happened? But what could it mean? But then what actions do we want to take? But this is something that we saw in the past already, and we tried to address, and then maybe we made it worse. And then you should also see, um, a change in the trend of metrics. For example, work in progress, getting from 30+ down to something close to the team size. Uh, it could be even better because even then it means that people are working independently and maybe you want them to collaborate. Um, some of the metrics change drastically. Uh, we can, we can talk about it another time, but the standard deviation of the cycle time, you can see how it squeezes, which means that, uh, it, it doesn’t, uh, feel random anymore. When, when I’m going to ship something, but now right now we can make a very, um, a very accurate guess of when, when it’s going to happen. So these types of things to me, mark, uh, good, good changes and that you’re on the right path.

Kovid Batra: Uh, honestly, Mario, very insightful, very practical tips that I have heard today about the implementation piece, and I’m sure this doesn’t end here. Uh, we are going to have more such discussions on this topic, and I want to deep dive into what exact metrics, how to use them, what suits which situation, talking about things like standard deviation from your cycle time would start changing, and that is in itself an interesting thing to talk about. So probably we’ll cover that in the next podcast that we have with you. For today, uh, this is our time. Any parting advice that you would like to share with the audience? Let’s say, there is an Engineering Manager. Let’s say, Mario five years back, who is thinking to go in this direction, what piece of advice would you give that person to get on this journey and what’s the incentive for that person?

Mario Viktorov Mechoulam: Yes. Okay. Clear. In, in general, you, you’ll, you’ll hear that people and teams are too busy to improve. We all know that. So I think as a manager who wants to start introducing these, uh, these concepts and these metrics, your, one of your responsibilities is to make room, to make space for the team, so that they can sit down and have a quality, quality time for this type of conversation. Without it, it’s not, uh, it’s not going to happen.

Kovid Batra: Okay, perfect. Great, Mario. It was great having you here. And I’m sure, uh, we are recording a few more sessions on this topic because this is close to us as well. But for today, this is our time. Thank you so much. See you once again.

Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. Bye-bye!

Kovid Batra: Bye.

In the ever-changing world of software development, tracking progress and gaining insights into your projects is crucial. While GitHub Analytics provides developers and teams with valuable data-driven intelligence, relying solely on GitHub data may not provide the full picture needed for making informed decisions. By integrating GitHub Analytics with JIRA, engineering teams can gain a more comprehensive view of their development workflows, enabling them to take more meaningful actions.

Why GitHub Analytics Alone is Insufficient

GitHub Analytics offers valuable insights into:

  • Repository Activity: Tracking commits, pull requests and contributor activity within repositories.
  • Collaboration Effectiveness: Evaluating how effectively teams collaborate on code reviews and issue resolution.
  • Workflow Identification: Identifying potential bottlenecks and inefficiencies within the development process.
  • Project Management Support: Providing data-backed insights for improving project management decisions.

However, GitHub Analytics primarily focuses on repository activity and code contributions. It lacks visibility into broader project management aspects such as sprint progress, backlog prioritization, and cross-team dependencies. This limited perspective can hinder a team's ability to understand the complete picture of their development workflow and make informed decisions.

The Power of GitHub & JIRA Integration

JIRA is a widely used platform for issue tracking, sprint planning, and agile project management. When combined with GitHub Analytics, it creates a powerful ecosystem that:

  • Connects Code Changes with Project Tasks and Business Objectives: By linking GitHub commits and pull requests to specific JIRA issues (like user stories, bugs, and epics), teams can understand how their code changes contribute to overall project goals (the sketch after this list shows how that linking typically works).
    • Real-World Example: A developer fixes a bug in a specific feature. By linking the GitHub pull request to the corresponding JIRA bug ticket, the team can track the resolution of the issue and its impact on the overall product.
  • Provides Deeper Insights into Development Velocity, Bottlenecks, and Blockers: Analyzing data from both GitHub and JIRA allows teams to identify bottlenecks in the development process that might not be apparent when looking at GitHub data alone.
    • Real-World Example: If a team observes a sudden drop in commit frequency, they can investigate JIRA issues to determine if it's caused by unresolved dependencies, unclear requirements, or other blockers.
  • Enhances Collaboration Between Engineering and Product Management Teams: By providing a shared view of project progress, GitHub and JIRA integration fosters better communication and collaboration between engineering and product management teams.
    • Real-World Example: Product managers can gain insights into the engineering team's progress on specific features by tracking the progress of related JIRA issues and linked GitHub pull requests.
  • Ensures Traceability from Feature Requests to Code Deployments: By linking JIRA issues to GitHub pull requests and ultimately to production deployments, teams can establish clear traceability from initial feature requests to their implementation and release.
    • Real-World Example: A team can track the journey of a feature from its initial conception in JIRA to its final deployment to production by analyzing the linked GitHub commits, pull requests, and deployment information.
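In practice, this linking usually rests on a simple convention: developers put the JIRA issue key (e.g., PROJ-123) in branch names, commit messages, or pull request titles, and tooling parses it out to join the two datasets. Below is a minimal sketch of that parsing step in Python; the commit messages and the PAY project key are invented for illustration.

```python
import re

# JIRA issue keys look like "PROJ-123": an uppercase project key, a hyphen,
# and a number. This pattern follows the conventional key format; adjust it
# if your project keys differ.
JIRA_KEY_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def extract_jira_keys(text: str) -> set[str]:
    """Return all JIRA issue keys mentioned in a commit message or PR title."""
    return set(JIRA_KEY_PATTERN.findall(text))

# Hypothetical commit messages; in practice these would come from your
# repository history or the GitHub API.
commits = [
    "PAY-142: fix rounding error in invoice totals",
    "Refactor session middleware (no ticket)",
    "PAY-142 PAY-187: share validation logic across endpoints",
]

# Build a mapping from JIRA issue -> commits that reference it.
issue_to_commits: dict[str, list[str]] = {}
for message in commits:
    for key in extract_jira_keys(message):
        issue_to_commits.setdefault(key, []).append(message)

print(issue_to_commits)
# {'PAY-142': [<two commits>], 'PAY-187': [<one commit>]}
```

Once this mapping exists, any GitHub-side metric (review time, merge time) can be attributed to the JIRA issue, epic, or business goal it serves.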


More Examples of How JIRA + GitHub Analytics Brings More Insights

  • Tracking Work from Planning to Deployment:
    • Without JIRA: GitHub Analytics shows PR activity and commit frequency but doesn't provide context on whether work is aligned with business goals.
    • With JIRA: Teams can link commits and PRs to specific JIRA tickets, tracking the progress of user stories and epics from the backlog to release, ensuring that development efforts are aligned with business priorities.
  • Identifying Bottlenecks in the Development Process:
    • Without JIRA: GitHub Analytics highlights cycle time, but it doesn't explain why a delay is happening.
    • With JIRA: Teams can analyze blockers within JIRA issues—whether due to unresolved dependencies, pending stakeholder approvals, unclear requirements, or other factors—to pinpoint the root cause of delays and address them effectively.
  • Enhanced Sprint Planning & Resource Allocation:
    • Without JIRA: Engineering teams rely on GitHub metrics to gauge performance but may struggle to connect them with workload distribution.
    • With JIRA: Managers can assess how many tasks remain open versus completed, analyze team workloads, and adjust priorities in real-time to ensure efficient resource allocation and maximize team productivity.
  • Connecting Engineering Efforts to Business Goals:
    • Without JIRA: GitHub Analytics tracks technical contributions but doesn't show their impact on business priorities.
    • With JIRA: Product owners can track how engineering efforts align with strategic objectives by analyzing the progress of JIRA issues linked to key business goals, ensuring that the team is working on the most impactful tasks.

Getting Started with GitHub & JIRA Analytics Integration

Start leveraging the power of integrated analytics with tools like Typo, a dynamic platform designed to optimize your GitHub and JIRA experience. Whether you're working on a startup project or managing an enterprise-scale development team, such tools offer powerful analytics tailored to your specific needs.

How to Integrate GitHub & JIRA with Typo:

  1. Connect Your GitHub and JIRA Accounts: Visit Typo's platform and seamlessly link both tools to establish a unified view of your development data.
  2. Configure Dashboards: Build custom analytics dashboards that track both code contributions (from GitHub) and issue progress (from JIRA) in a single, integrated view.
  3. Analyze Insights Together: Gain deeper insights by analyzing GitHub commit trends alongside JIRA sprint performance, identifying correlations and uncovering hidden patterns within your development workflow.

Conclusion

While GitHub Analytics is a valuable tool for tracking repository activity, integrating it with JIRA unlocks deeper engineering insights, allowing teams to make smarter, data-driven decisions. By bridging the gap between code contributions and project management, teams can improve efficiency, enhance collaboration, and ensure that engineering efforts align with business goals.

Sign Up for Typo’s GitHub & JIRA Analytics Today!

Whether you aim to enhance software delivery, improve team collaboration, or refine project workflows, Typo provides a flexible, data-driven platform to meet your needs.

FAQs

1. How to integrate GitHub with JIRA for better analytics?

  • Utilize native integrations: Some tools offer native integrations between GitHub and JIRA.
  • Leverage third-party apps: Apps like Typo can streamline the integration process and provide advanced analytics capabilities.
  • Utilize APIs: For more advanced integrations, you can utilize the APIs provided by GitHub and JIRA.
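As a rough illustration of the API route, here is a sketch that pulls recently closed pull requests from the GitHub REST API, extracts JIRA issue keys from their titles, and looks each key up in the JIRA Cloud REST API. The org, repo, and domain names are placeholders, and pagination and error handling are omitted for brevity.

```python
import os
import re
import requests

# Placeholders: substitute your own org/repo and JIRA Cloud domain,
# and provide credentials via environment variables.
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]
JIRA_USER = os.environ["JIRA_USER"]            # Atlassian account email
JIRA_API_TOKEN = os.environ["JIRA_API_TOKEN"]  # API token, not a password
OWNER, REPO = "your-org", "your-repo"
JIRA_DOMAIN = "your-domain.atlassian.net"

KEY_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

# Fetch recently closed pull requests from the GitHub REST API.
prs = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
    params={"state": "closed", "per_page": 20},
    headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
    timeout=30,
).json()

for pr in prs:
    # Extract JIRA keys mentioned in the PR title and look each one up.
    for key in KEY_PATTERN.findall(pr["title"] or ""):
        issue = requests.get(
            f"https://{JIRA_DOMAIN}/rest/api/2/issue/{key}",
            auth=(JIRA_USER, JIRA_API_TOKEN),
            timeout=30,
        ).json()
        status = issue["fields"]["status"]["name"]
        print(f'PR #{pr["number"]} "{pr["title"]}" -> {key} ({status})')
```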

2. What are some common challenges in integrating JIRA with GitHub?

  • Data inconsistency: Ensuring data accuracy and consistency between the two platforms can be challenging.
  • Integration complexity: Setting up and maintaining integrations can sometimes be technically complex.
  • Data overload: Integrating data from both platforms can generate a large volume of data, making it difficult to analyze and interpret.

3. How can I ensure the accuracy of data in my integrated GitHub and JIRA analytics?

  • Establish clear data entry guidelines: Ensure that all team members adhere to consistent data entry practices in both GitHub and JIRA.
  • Regularly review and clean data: Conduct regular data audits to identify and correct any inconsistencies or errors.
  • Utilize data validation rules: Implement data validation rules within JIRA to ensure data accuracy and consistency.

In today's fast-paced software development landscape, optimizing engineering performance is crucial for staying competitive. Engineering leaders need a deep understanding of workflows, team velocity, and potential bottlenecks. Engineering intelligence platforms provide valuable insights into software development dynamics, helping to make data-driven decisions. While Swarmia is a well-known player, it might not be the perfect fit for every team. This article explores the top Swarmia alternatives, giving you the knowledge to choose the best platform for your organization's needs. We'll delve into features, benefits, and potential drawbacks to help you make an informed decision.

Understanding Swarmia's Strengths

Swarmia is an engineering intelligence platform designed to improve operational efficiency, developer productivity, and software delivery. It integrates with popular development tools and uses data analytics to provide actionable insights.

Key Functionalities:

  • Data Aggregation: Connects to repositories like GitHub, GitLab, and Bitbucket, along with issue trackers like Jira and Azure DevOps, to create a comprehensive view of engineering activities.
  • Workflow Optimization: Identifies inefficiencies in development cycles by analyzing task dependencies, code review bottlenecks, and other delays.
  • Performance Metrics & Visualization: Presents data through dashboards, offering insights into deployment frequency, cycle time, resource allocation, and other KPIs.
  • Actionable Insights: Helps engineering leaders make data-driven decisions to improve workflows and team collaboration.

Why Consider a Swarmia Alternative?

Despite its strengths, Swarmia might not be ideal for everyone. Here's why you might want to explore alternatives:

  • Limited Customization: May not adapt well to highly specialized or unique workflows.
  • Complex Onboarding: Can have a steep learning curve, hindering quick adoption.
  • Pricing: Can be expensive for smaller teams or organizations with budget constraints.
  • User Interface: Some users find the UI challenging to navigate.

Top 6 Swarmia Competitors: Features, Pros & Cons

Here are six leading alternatives to Swarmia, each with its own unique strengths:

1. Typo

Typo is a comprehensive engineering intelligence platform providing end-to-end visibility into the entire SDLC. It focuses on actionable insights through integration with CI/CD pipelines and issue tracking tools.

Key Features:

  • Unified DORA and engineering metrics dashboard.
  • AI-driven analytics for sprint reviews, pull requests, and development insights.
  • Industry benchmarks for engineering performance evaluation.
  • Automated sprint analytics for workflow optimization.

Pros:

  • Strong tracking of key engineering metrics.
  • AI-powered insights for data-driven decision-making.
  • Responsive user interface and good customer support.

Cons:

  • Limited customization options in existing workflows.
  • Potential for further feature expansion.

G2 Reviews Summary:

G2 reviews indicate decent user engagement with a strong emphasis on positive feedback, particularly regarding customer support.

2. Jellyfish

Jellyfish is an advanced analytics platform that aligns engineering efforts with broader business goals. It gives real-time visibility into development workflows and team productivity, focusing on connecting engineering work to business outcomes.

Key Features:

  • Resource allocation analytics for optimizing engineering investments.
  • Real-time tracking of team performance.
  • DevOps performance metrics for continuous delivery optimization.

Pros:

  • Granular data tracking capabilities.
  • Intuitive user interface.
  • Facilitates cross-team collaboration.

Cons:

  • Can be complex to implement and configure.
  • Limited customization options for tailored insights.

G2 Reviews Summary: 

G2 reviews highlight strong core features but also point to potential implementation challenges, particularly around configuration and customization.


3. LinearB

LinearB is a DevOps solution designed to improve software delivery efficiency and engineering team coordination. It focuses on data-driven insights, identifying bottlenecks, and optimizing workflows.

Key Features:

  • Workflow visualization for process optimization.
  • Risk assessment and early warning indicators.
  • Customizable dashboards for performance monitoring.

Pros:

  • Extensive data aggregation capabilities.
  • Enhanced collaboration tools.
  • Comprehensive engineering metrics and insights.

Cons:

  • Can have a complex setup and learning curve.
  • High data volume may require careful filtering.

G2 Reviews Summary: 

G2 reviews generally praise LinearB's core features, such as flow management and insightful analytics. However, some users have reported challenges with complexity and the learning curve.

4. Waydev

Waydev is an engineering analytics solution with a focus on Agile methodologies. It provides in-depth visibility into development velocity, resource allocation, and delivery efficiency.

Key Features:

  • Automated engineering performance insights.
  • Agile-based tracking of development velocity and bug resolution.
  • Budgeting reports for engineering investment analysis.

Pros:

  • Highly detailed metrics analysis.
  • Streamlined dashboard interface.
  • Effective tracking of Agile engineering practices.

Cons:

  • Steep learning curve for new users.

G2 Reviews Summary: 

G2 reviews for Waydev are limited, making it difficult to draw definitive conclusions about user satisfaction.


5. Sleuth

Sleuth is a deployment intelligence platform specializing in tracking and improving DORA metrics. It provides detailed insights into deployment frequency and engineering efficiency.

Key Features:

  • Automated deployment tracking and performance benchmarking.
  • Real-time performance evaluation against efficiency targets.
  • Lightweight and adaptable architecture.

Pros:

  • Intuitive data visualization.
  • Seamless integration with existing toolchains.

Cons:

  • Pricing may be restrictive for some organizations.

G2 Reviews Summary: 

G2 reviews for Sleuth are also limited, making it difficult to draw definitive conclusions about user satisfaction.

6. Pluralsight Flow (formerly GitPrime)

Pluralsight Flow provides a detailed overview of the development process, helping identify friction and bottlenecks. It aligns engineering efforts with strategic objectives by tracking DORA metrics, software development KPIs, and investment insights. It integrates with various development platforms such as Azure DevOps and GitLab.

Key Features:

  • Offers insights into why trends occur and potential related issues.
  • Predicts value impact for project and process proposals.
  • Features DORA analytics and investment insights.
  • Provides centralized insights and data visualization.

Pros:

  • Strong core metrics tracking capabilities.
  • Process improvement features.
  • Data-driven insights generation.
  • Detailed metrics analysis tools.
  • Efficient work tracking system.

Cons:

  • Complex and challenging user interface.
  • Issues with metrics accuracy/reliability.
  • Steep learning curve for users.
  • Inefficiencies in tracking certain metrics.
  • Problems with tool integrations.


G2 Reviews Summary:

The review numbers show moderate engagement (6-12 mentions for pros, 3-4 for cons), placing it between Waydev's limited feedback and Jellyfish's extensive reviews. The feedback suggests strong core functionality but notable usability challenges.

The Power of Integration

Engineering management platforms become even more powerful when they integrate with your existing tools. Seamless integration with platforms like Jira, GitHub, CI/CD systems, and Slack offers several benefits:

  • Out-of-the-box compatibility: Minimizes setup time.
  • Automation: Automates tasks like status updates and alerts.
  • Customization: Adapts to specific team needs and workflows.
  • Centralized Data: Enhances collaboration and reduces context switching.

By leveraging these integrations, software teams can significantly boost productivity and focus on building high-quality products.

Key Considerations for Choosing an Alternative

When selecting a Swarmia alternative, keep these factors in mind:

  • Team Size and Budget: Look for solutions that fit your budget, considering freemium plans or tiered pricing.
  • Specific Needs: Identify your key requirements. Do you need advanced customization, DORA metrics tracking, or a focus on developer experience?
  • Ease of Use: Choose a platform with an intuitive interface to ensure smooth adoption.
  • Integrations: Ensure seamless integration with your current tool stack.
  • Customer Support: Evaluate the level of support offered by each vendor.

Conclusion

Choosing the right engineering analytics platform is a strategic decision. The alternatives discussed offer a range of capabilities, from workflow optimization and performance tracking to AI-powered insights. By carefully evaluating these solutions, engineering leaders can improve team efficiency, reduce bottlenecks, and drive better software development outcomes.

Software teams relentlessly pursue rapid, consistent value delivery. Yet, without proper metrics, this pursuit becomes directionless. 

While engineering productivity is a combination of multiple dimensions, issue cycle time acts as a critical indicator of team efficiency. 

Simply put, this metric reveals how quickly engineering teams convert requirements into deployable solutions. 

By understanding and optimizing issue cycle time, teams can accelerate delivery and enhance the predictability of their development practices. 

In this guide, we discuss cycle time's significance and provide actionable frameworks for measurement and improvement. 

What is Issue Cycle Time?

Issue cycle time measures the duration between when work actively begins on a task and its completion. 

This metric specifically tracks the time developers spend actively working on an issue, excluding external delays or waiting periods. 

Unlike lead time, which includes all elapsed time from issue creation, cycle time focuses purely on active development effort. 

Core Components of Issue Cycle Time 

  • Work Start Time: When a developer transitions the issue to "in progress" and begins active development 
  • Development Duration: Time spent writing, testing, and refining code 
  • Review Period: Time in code review and iteration based on feedback 
  • Testing Phase: Duration of QA verification and bug fixes 
  • Work Completion: Final approval and merge of changes into the main codebase 

Understanding these components allows teams to identify bottlenecks and optimize their development workflow effectively. 
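To make these components concrete, here is a small Python sketch that computes per-status durations and cycle time from an issue's status-transition log. The status names and timestamps are hypothetical; real data would come from your tracker's changelog export.

```python
from datetime import datetime

# Hypothetical status-transition log for one issue. Each entry is
# (status, timestamp of entering that status).
transitions = [
    ("To Do",       datetime(2024, 3, 1, 9, 0)),
    ("In Progress", datetime(2024, 3, 4, 10, 0)),   # work start
    ("In Review",   datetime(2024, 3, 6, 15, 0)),
    ("In QA",       datetime(2024, 3, 7, 11, 0)),
    ("Done",        datetime(2024, 3, 8, 16, 0)),   # work completion
]

# Time spent in each status = gap until the next transition.
for (status, entered), (_, left) in zip(transitions, transitions[1:]):
    hours = (left - entered).total_seconds() / 3600
    print(f"{status:<12} {hours:6.1f} h")

# Cycle time: active work only, from "In Progress" to "Done".
start = next(t for s, t in transitions if s == "In Progress")
done = next(t for s, t in transitions if s == "Done")
print(f"Cycle time: {(done - start).total_seconds() / 86400:.1f} days")

# Lead time, by contrast, runs from ticket creation ("To Do") to "Done".
print(f"Lead time:  {(done - transitions[0][1]).total_seconds() / 86400:.1f} days")
```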

Why Does Issue Cycle Time Matter? 

Here’s why you must track issue cycle time: 

Impact on Productivity 

Issue cycle time directly correlates with team output capacity. Shorter cycle times allow teams to complete more work within fixed timeframes, keeping resource utilization high. This accelerated delivery cadence compounds over time, allowing teams to tackle more strategic initiatives rather than getting bogged down in prolonged development cycles. 
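One way to see the arithmetic behind this claim is Little's Law, which under steady-state assumptions relates throughput, work in progress, and cycle time: throughput = WIP / cycle time. A quick sketch with made-up numbers:

```python
# Little's Law (steady state): throughput = average WIP / average cycle time.
# Illustrative numbers only: a team holding 6 items in progress on average.
avg_wip = 6
for cycle_days in (12, 6, 3):
    throughput = avg_wip / cycle_days  # items completed per day
    print(f"cycle time {cycle_days:>2} days -> "
          f"{throughput:.2f} items/day (~{throughput * 30:.0f}/month)")
# Halving cycle time at constant WIP doubles throughput.
```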

Identifying Bottlenecks 

By tracking cycle time metrics, teams can pinpoint specific stages where work stalls. This reveals process inefficiencies, resource constraints, or communication gaps that break flow. Data-driven bottleneck identification allows targeted process improvements rather than speculative changes. 

Enhanced Collaboration 

Rapid cycle times help build tighter feedback loops between developers, reviewers, and stakeholders. When issues move quickly through development stages, teams maintain context and momentum. Streamlined collaboration reduces handoff friction and limits knowledge loss between stages. 

Better Predictability 

Consistent cycle times enable reliable sprint planning and release forecasting. Teams can confidently estimate delivery dates based on historical completion patterns. This predictability helps align engineering efforts with business goals and improves cross-functional planning. 

Customer Satisfaction 

Quick issue resolution directly impacts user experience. When teams maintain efficient cycle times, they can respond quickly to customer feedback and deliver improvements more frequently. This responsiveness builds trust and strengthens customer relationships. 

3 Phases of Issue Cycle Time 

The development process is a journey that can be summed up in three phases. Let’s break these phases down: 

Phase 1: Ticket Creation to Work Start

The initial phase includes critical pre-development activities that significantly impact overall cycle time. This period begins when a ticket enters the backlog and ends when active development starts.

Teams often face delays in ticket assignment due to unclear prioritization frameworks or manual routing processes. Resource misallocation is especially common when assignment procedures lack automation. 

Implementing automated ticket routing and standardized prioritization matrices can substantially reduce initial delays. 
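As a sketch of what such automation can look like, the snippet below routes tickets by component and assigns priority from a severity matrix. The team names, components, and matrix values are all invented for illustration, not prescriptive.

```python
# Minimal rule-based ticket routing, assuming each ticket carries a
# component label, a severity, and a customer-facing flag.
ROUTING = {"payments": "team-payments", "auth": "team-identity"}
PRIORITY_MATRIX = {  # (severity, customer_facing) -> priority
    ("high", True): "P0",
    ("high", False): "P1",
    ("low", True): "P2",
    ("low", False): "P3",
}

def route(ticket: dict) -> dict:
    """Assign an owning team and a priority as soon as a ticket is filed."""
    ticket["assignee_team"] = ROUTING.get(ticket["component"], "team-triage")
    ticket["priority"] = PRIORITY_MATRIX[(ticket["severity"], ticket["customer_facing"])]
    return ticket

print(route({"component": "payments", "severity": "high", "customer_facing": True}))
# -> assignee_team: 'team-payments', priority: 'P0', with no human in the loop
```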

Phase 2: Active Work Period

The core development phase represents the most resource-intensive segment of the cycle. Development time varies based on complexity, dependencies, and developer expertise. 

Common delay factors are:

  • External system dependencies blocking progress
  • Knowledge gaps requiring additional research
  • Ambiguous requirements necessitating clarification
  • Technical debt increasing implementation complexity

Success in this phase demands precise requirement documentation, proactive dependency management, and clearly established escalation paths. Teams should maintain living documentation and implement pair programming for complex tasks. 

Phase 3: Resolution to Closure

The final phase covers all post-development activities required for production deployment. 

This stage often becomes a significant bottleneck due to: 

  • Sequential review processes
  • Manual quality assurance procedures
  • Multiple approval requirements
  • Environment-specific deployment constraints 

How can this be optimized? By: 

  • Implementing parallel review tracks
  • Automating test execution
  • Establishing service-level agreements for reviews
  • Creating self-service deployment capabilities

Each phase comes with many optimization opportunities. Teams should measure phase-specific metrics to identify the highest-impact improvement areas. Regular analysis of phase durations allows targeted process refinement, which is critical to maintaining software engineering efficiency. 

How to Measure and Analyze Issue Cycle Time 

Effective cycle time measurement requires the right tools and systematic analysis approaches. Businesses must establish clear frameworks for data collection, benchmarking, and continuous monitoring to derive actionable insights. 

Here’s how you can measure issue cycle time: 

Metrics and Tools 

Modern development platforms offer integrated cycle time tracking capabilities. Tools like Typo automatically capture timing data across workflow states. 

These platforms provide comprehensive dashboards displaying velocity trends, bottleneck indicators, and predictability metrics. 

Integration with version control systems enables correlation between code changes and cycle time patterns. Advanced analytics features support custom reporting and team-specific performance views. 

Establishing Benchmarks 

Benchmark definition requires contextual analysis of team composition, project complexity, and delivery requirements. 

Start by calculating your team's current average cycle time across different issue types. Factor in: 

  • Team size and experience levels 
  • Technical complexity categories 
  • Historical performance patterns 
  • Industry standards for similar work 

The right approach is to define acceptable ranges rather than fixed targets. Consider setting graduated improvement goals: 10% reduction in the first quarter, 25% by year-end. 
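Here is a hedged sketch of that calculation using pandas, with invented sample data: it computes a median and an 85th-percentile range per issue type, then derives a graduated 10%-reduction target from the median.

```python
import pandas as pd

# Hypothetical export of completed issues with their cycle times in days.
df = pd.DataFrame({
    "issue_type": ["bug", "bug", "story", "story", "story", "task"],
    "cycle_days": [1.5, 4.0, 6.5, 3.0, 9.0, 2.0],
})

# Benchmark as a range (median and 85th percentile) per issue type,
# rather than a single fixed target.
benchmarks = df.groupby("issue_type")["cycle_days"].quantile([0.5, 0.85]).unstack()
benchmarks.columns = ["p50_days", "p85_days"]
print(benchmarks)

# Graduated goal: a 10% reduction on the current median per issue type.
print(benchmarks["p50_days"] * 0.9)
```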

Using Visualizations 

Data visualization converts raw metrics into actionable insights. Cycle time scatter plots show completion patterns and outliers. Cumulative flow diagrams reveal work-in-progress limits and flow efficiency. Control charts track stability and process improvements over time. 

Ideally, businesses should implement: 

  • Weekly trend analysis 
  • Percentile distribution charts 
  • Work-type segmentation views 
  • Team comparison dashboards 

By implementing these visualizations, businesses can identify bottlenecks and optimize workflows for greater engineering productivity. 
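For instance, here is a minimal matplotlib sketch of the cycle time scatter plot described above, drawn over synthetic data with an 85th-percentile guide line:

```python
import matplotlib.pyplot as plt
import numpy as np

# Synthetic completion data: day each issue finished and its cycle time.
rng = np.random.default_rng(7)
completed_day = np.sort(rng.integers(0, 90, size=120))
cycle_days = rng.gamma(shape=2.0, scale=2.5, size=120)  # skewed, like real data

p85 = np.percentile(cycle_days, 85)

fig, ax = plt.subplots(figsize=(8, 4))
ax.scatter(completed_day, cycle_days, alpha=0.6, label="completed issue")
ax.axhline(p85, linestyle="--", color="red",
           label=f"85th percentile = {p85:.1f} days")
ax.set_xlabel("Day issue was completed")
ax.set_ylabel("Cycle time (days)")
ax.set_title("Cycle time scatter plot")
ax.legend()
plt.tight_layout()
plt.show()
```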

Regular Reviews 

Establish structured review cycles at multiple organizational levels. These could be: 

  • Weekly team retrospectives that examine cycle time trends and identify immediate optimization opportunities. 
  • Monthly department reviews that analyze cross-team patterns and resource allocation impacts. 
  • Quarterly organizational assessments that evaluate systemic issues and strategic improvements. 

These reviews should be templatized and consistent. The idea is to focus on: 

  • Trend analysis 
  • Bottleneck identification 
  • Process modification results 
  • Team feedback integration 

Best Practices to Optimize Issue Cycle Time 

Focus on the following proven strategies to enhance workflow efficiency while maintaining output quality: 

  1. Automate Repetitive Tasks: Use automation for code testing, deployment, and issue tracking. Implement CI/CD pipelines and automated code review tools to eliminate manual handoffs.
  2. Adopt Agile Methodologies: Implement Scrum or Kanban frameworks with clear sprint cycles or workflow stages. Maintain structured ceremonies and consistent delivery cadences.
  3. Limit Work-in-Progress (WIP): Set strict WIP limits per development stage to reduce context switching and prevent resource overallocation. Monitor queue lengths to maintain steady progress.
  4. Conduct Daily Standups: Hold focused standup meetings to identify blockers early, track issue age, and enable immediate escalation for unresolved tasks.
  5. Ensure Comprehensive Documentation: Maintain up-to-date technical specifications and acceptance criteria to reduce miscommunication and streamline issue resolution.
  6. Cross-Train Team Members: Build versatile skill sets within the team to minimize dependencies on single individuals and allow flexible resource allocation.
  7. Streamline Review Processes: Implement parallel review tracks, set clear review time SLAs, and automate style and quality checks to accelerate approvals.
  8. Leverage Collaboration Tools: Use integrated development platforms and real-time communication channels to ensure seamless coordination and centralized knowledge sharing.
  9. Track and Analyze Key Metrics: Monitor performance indicators daily with automated reports to identify trends, spot inefficiencies, and take corrective action.
  10. Host Regular Retrospectives: Conduct structured reviews to analyze cycle time patterns, gather feedback, and implement continuous process improvements.

By consistently applying these best practices, engineering teams can reduce delays and optimize issue cycle time for sustained success.
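
To make the WIP-limit practice (item 3 above) concrete, here is a minimal sketch that flags anyone over a per-developer limit. The in-progress export, names, issue keys, and the limit of 3 are all hypothetical:

```python
from collections import Counter

# Hypothetical export of in-progress issues: (assignee, issue key)
in_progress = [
    ("alice", "ENG-101"), ("alice", "ENG-104"), ("alice", "ENG-107"),
    ("alice", "ENG-110"), ("bob", "ENG-102"), ("bob", "ENG-105"),
]

WIP_LIMIT = 3  # per-developer limit; tune to your team's agreed policy

# Count items per assignee and flag anyone over the limit
counts = Counter(assignee for assignee, _ in in_progress)
for assignee, count in sorted(counts.items()):
    if count > WIP_LIMIT:
        print(f"WIP limit exceeded: {assignee} has {count} items (limit {WIP_LIMIT})")
```

A check like this can run on a schedule and post to the team channel, turning the WIP agreement into a visible, automatic signal rather than a manual audit.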

Real-Life Example of Optimizing Issue Cycle Time 

A mid-sized fintech company with 40 engineers faced persistent delivery delays despite having talented developers. Their average issue cycle time had grown to 14 days, creating mounting pressure from stakeholders and frustration within the team.

After analyzing their workflow data, they identified three critical bottlenecks:

Code Review Congestion: Senior developers were becoming bottlenecks with 20+ reviews in their queue, causing delays of 3-4 days for each ticket.

Environment Stability Issues: Inconsistent test environments led to frequent deployment failures, adding an average of 2 days to cycle time.

Unclear Requirements: Developers spent approximately 30% of their time seeking clarification on ambiguous tickets.

The team implemented a structured optimization approach:

Phase 1: Baseline Establishment (2 weeks)

  • Documented current workflow states and transition times
  • Calculated baseline metrics for each cycle time component
  • Surveyed team members to identify perceived pain points

Phase 2: Targeted Interventions (8 weeks)

  • Implemented a "review buddy" system that paired developers and established a maximum 24-hour review SLA
  • Standardized development environments using containerization
  • Created a requirement template with mandatory fields for acceptance criteria
  • Set WIP limits of 3 items per developer to reduce context switching

Phase 3: Measurement and Refinement (Ongoing)

  • Established weekly cycle time reviews in team meetings
  • Created dashboards showing real-time metrics for each workflow stage
  • Implemented a continuous improvement process where any team member could propose optimization experiments

Results After 90 Days:

  • Overall cycle time reduced from 14 days to 5.5 days (60% improvement)
  • Code review turnaround decreased from 72 hours to 16 hours
  • Deployment success rate improved from 65% to 94%
  • Developer satisfaction scores increased by 40%
  • On-time delivery rate rose from 60% to 87%

The most significant insight came from breaking down the cycle time improvements by phase: while the initial automation efforts produced quick wins, the team culture changes around WIP limits and requirement clarity delivered the most substantial long-term benefits.

This example demonstrates that effective cycle time optimization requires both technical solutions and process refinements. The fintech company continues to monitor its metrics, making incremental improvements that maintain its enhanced velocity without sacrificing quality or team wellbeing.

Conclusion 

Issue cycle time directly impacts development velocity and team productivity. By tracking and optimizing this metric, teams can deliver value faster. 

Typo's real-time issue tracking combined with AI-powered insights automates improvement detection and suggests targeted optimizations. Our platform allows teams to maintain optimal cycle times while reducing manual overhead. 

Ready to accelerate your development workflow? Book a demo today!

Mastering GitHub Analytics

In today's fast-paced software development world, tracking progress and understanding project dynamics is crucial. GitHub Analytics transforms raw data from repositories into actionable intelligence, offering insights that enable teams to optimize workflows, enhance collaboration, and improve software delivery. This guide explores the core aspects of GitHub Analytics, from key metrics to best practices, helping you leverage data to drive informed decision-making.

Why GitHub Analytics Matters

GitHub Analytics provides invaluable insights into project activity, empowering developers and project managers to track performance, identify bottlenecks, and enhance productivity. Unlike generic analytics tools, GitHub Analytics focuses on software development-specific metrics such as commits, pull requests, issue tracking, and cycle time analysis. This targeted approach allows for a deeper understanding of development workflows and enables teams to make data-driven decisions that directly impact project success.

Understanding GitHub Analytics

GitHub Analytics encompasses a suite of metrics and tools that help developers assess repository activity and project health.

Key Components of GitHub Analytics:

  • Data and Process Hygiene: Establishing standardized workflows through consistent labeling, commit keywords, and issue tracking is paramount. This ensures data accuracy and facilitates meaningful analysis.
    • Real-World Example: A team standardizes issue labels (e.g., "bug," "feature," "enhancement," "documentation") to categorize issues effectively and track trends in different issue types.
  • Pulse and Contribution Tracking: Monitoring repository activity, including commit frequency, work distribution among team members, and overall activity trends.
    • Real-World Example: A team uses GitHub Analytics to identify periods of low activity, which might indicate potential roadblocks or demotivation, allowing them to proactively address the issue.
  • Team Performance Metrics: Analyzing key metrics like cycle time (the time taken to complete a piece of work), lead time for changes, and DORA metrics (Deployment Frequency, Change Failure Rate, Mean Time to Recovery, Lead Time for Changes) to identify inefficiencies and improve productivity.
    • Real-World Example: A team uses DORA metrics to track deployment frequency and identify areas for improvement in their continuous delivery pipeline, leading to faster releases and reduced time to market.

GitHub Analytics vs. Other Analytics Tools

While other analytics platforms focus on user behavior or application performance, GitHub Analytics specifically tracks code contributions, repository health, and team collaboration, making it an indispensable tool for software development teams. This focus on development-specific data provides unique insights that are not readily available from generic analytics platforms.

Role of GitHub Analytics in Project Management

  • Performance Monitoring: Analytics provide real-time visibility into how and when contributions are made, enabling project managers to track progress against milestones and identify potential delays.
    • Real-World Example: A project manager uses GitHub Analytics to track the progress of critical features and identify any potential bottlenecks that might impact the project timeline.
  • Resource Allocation: Data-driven insights from GitHub Analytics help optimize resource allocation, ensuring that team members are working on the most impactful tasks and that their skills are effectively utilized.
    • Real-World Example: A project manager analyzes team member contributions and identifies areas where specific skillsets are lacking, informing decisions on hiring or training.
  • Quality Assurance: Identifying recurring issues, analyzing code review comments, and tracking bug trends helps teams proactively refine processes, improve code quality, and reduce the number of defects.
    • Real-World Example: A team analyzes code review comments to identify common code quality issues and implement best practices to prevent them in the future.
  • Strategic Planning: Historical project data, including past performance metrics, successful strategies, and areas for improvement, informs future roadmaps, enabling teams to predict and mitigate potential risks.
    • Real-World Example: A team analyzes past project data to identify trends in development velocity and predict future project timelines more accurately.

Getting Started with GitHub Analytics

Accessing GitHub Analytics:

  • Connect Your GitHub Account: Integrate analytics tools via GitHub settings or utilize GitHub's built-in insights.
  • Use GitHub's Built-in Insights: Access repository insights to track contributions, trends, and identify areas for improvement.
  • Customize Your Dashboard: Set up personalized views with relevant KPIs (Key Performance Indicators) that are most important to your team and project goals.

Navigating GitHub Analytics:

  • Real-Time Dashboards: Monitor KPIs such as deployment frequency and failure rates in real-time to gain immediate insights into project health.
  • Filtering Data: Focus on relevant insights using custom filters based on time frames, contributors, issue labels, and other criteria.
  • Multi-Repository Monitoring: Track multiple projects from a single dashboard to gain a comprehensive overview of team performance across different initiatives.

Configuring GitHub Analytics for Efficiency:

  • Customize Dashboard Templates: Create and save custom dashboard templates for different projects or teams to streamline analysis and reporting.
  • Optimize Data Insights: Aggregate pull requests, issues, and commits to generate meaningful reports and identify trends.
  • Foster Collaboration: Share dashboards with the entire team to promote transparency, foster a data-driven culture, and encourage open discussion around project performance.

Key GitHub Analytics Metrics

Software Development Cycle Time Metrics:

  • Coding Time: Duration from the start of development to when the code is ready for review.
  • Review Time: Measures the efficiency of collaboration in code reviews, indicating potential bottlenecks or areas for improvement in the review process.
  • Merge Time: Time taken from the completion of the code review to the integration of the code into the main branch.
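
If you want to approximate one of these metrics straight from GitHub's REST API, the sketch below computes the median time from pull request creation to merge, a rough proxy for combined review and merge time. It uses the standard `GET /repos/{owner}/{repo}/pulls` endpoint and the `requests` library; the owner and repository names are placeholders, pagination is ignored for brevity, and private repositories would additionally need an auth token:

```python
import statistics
from datetime import datetime

import requests

OWNER, REPO = "my-org", "my-repo"  # placeholders; substitute your own
url = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls"
resp = requests.get(url, params={"state": "closed", "per_page": 50})
resp.raise_for_status()

def parse(ts: str) -> datetime:
    """Parse GitHub's ISO-8601 timestamps, e.g. 2024-03-01T10:00:00Z."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

# Hours from PR creation to merge, for merged PRs only
merge_hours = [
    (parse(pr["merged_at"]) - parse(pr["created_at"])).total_seconds() / 3600
    for pr in resp.json()
    if pr.get("merged_at")
]

if merge_hours:
    print(f"Merged PRs analyzed: {len(merge_hours)}")
    print(f"Median time to merge: {statistics.median(merge_hours):.1f} hours")
```

Splitting this further into coding, review, and merge time requires per-PR events (first commit, review requested, approval), which dedicated analytics platforms typically correlate automatically.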

Software Delivery Speed Metrics:

  • Average Pull Request Size: Tracks the scope of merged pull requests, providing insights into the team's approach to code changes and identifying potential areas for improvement in code modularity.
  • DORA Metrics:
    • Deployment Frequency: How often changes are deployed to production.
    • Change Failure Rate: Percentage of deployments that result in failures.
    • Lead Time for Changes: The time it takes to go from code commit to code in production.
    • Mean Time to Recovery: The average time it takes to restore service after a deployment failure.
  • Issue Queue Time: Measures how long issues remain unaddressed, highlighting potential delays in issue resolution and potential impacts on project progress.
  • Overdue Items: Tracks tasks that exceed their expected completion times, identifying potential bottlenecks and areas for improvement in project planning and execution.
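
As a rough illustration of how two of the DORA numbers fall out of a deployment log, consider this sketch; the log format and data are hypothetical, and real teams would pull the equivalent records from CI/CD history:

```python
from datetime import date

# Hypothetical deployment log: (deploy date, succeeded?)
deployments = [
    (date(2024, 3, 1), True),  (date(2024, 3, 2), True),
    (date(2024, 3, 4), False), (date(2024, 3, 5), True),
    (date(2024, 3, 8), True),  (date(2024, 3, 11), False),
]

window_days = (deployments[-1][0] - deployments[0][0]).days + 1
failures = sum(1 for _, ok in deployments if not ok)

deploy_frequency = len(deployments) / window_days        # deploys per day
change_failure_rate = failures / len(deployments) * 100  # percent of deploys

print(f"Deployment frequency: {deploy_frequency:.2f} per day")
print(f"Change failure rate: {change_failure_rate:.0f}%")
```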

Process Quality and Compliance Metrics:

  • Bug Lead Time for Changes (BLTC): Tracks the speed of bug resolution, providing insights into the team's responsiveness to and efficiency in addressing defects.
  • Raised Bugs Tracker (RBT): Monitors the frequency of bug identification, highlighting areas where improvements in code quality and testing can be made.
  • Pull Request Review Ratio (PRRR): Ensures adequate peer review coverage for all code changes, promoting code quality and knowledge sharing within the team.

Best Practices for Monitoring and Improving Performance

Regular Analytics Reviews:

  • Scheduled Checks: Conduct weekly or bi-weekly reviews of key metrics to track progress toward project goals and identify any emerging issues.

  • Sprint Planning Integration: Incorporate GitHub Analytics data into sprint planning meetings to refine sprint objectives, allocate resources effectively, and make data-driven decisions about scope and priorities.

  • CI/CD Monitoring: Track deployment success rates and identify areas for improvement in the continuous integration and continuous delivery pipeline.

Encouraging Team Engagement:

  • Open Data Access: Promote transparency by sharing analytics dashboards and reports with the entire team, fostering a shared understanding of project performance.
  • Training on Analytics: Provide training to team members on how to effectively interpret and utilize GitHub Analytics data to make informed decisions.
  • Recognition Based on Metrics: Acknowledge and reward team members and teams for achieving positive performance outcomes as measured by key metrics.

Unlocking the Potential of GitHub Analytics

GitHub Analytics tools like Typo give software teams critical insights into development performance, collaboration, and project health. By embracing these analytics, teams can streamline workflows, enhance software quality, improve team communication, and make informed, data-driven decisions that ultimately lead to greater project success.

GitHub Analytics FAQs

  • What is GitHub Analytics?
    • A toolset that provides insights into repository activity, collaboration, and project performance.
  • How does GitHub Analytics support project management?
    • It helps monitor team performance, allocate resources effectively, identify inefficiencies, and make data-driven decisions to improve project outcomes.
  • Can GitHub Analytics be customized?
    • Yes, users can tailor dashboards, select specific metrics, and configure reports to meet their unique needs and project requirements.
  • What key metrics are available?
    • Key metrics include development cycle time metrics, software delivery speed metrics (including DORA metrics), and process quality and compliance metrics.
  • Can analytics improve code quality?
    • Yes, by tracking bug reports, analyzing code review trends, and identifying recurring issues, teams can proactively address code quality concerns and implement strategies for improvement.
  • Can GitHub Analytics help manage technical debt?
    • Absolutely. By monitoring changes, identifying areas needing improvement, and tracking the impact of technical debt on development velocity, teams can strategically address technical debt and maintain a healthy codebase.

Engineering Metrics: The Boardroom Perspective

Achieving engineering excellence isn’t just about clean code or high velocity. It’s about how engineering drives business outcomes. 

Every CTO and engineering department manager knows the importance of metrics like cycle time, deployment frequency, or mean time to recovery. These numbers are crucial for gauging team performance and delivery efficiency. 

But here’s the challenge: converting these metrics into language that resonates in the boardroom. 

In this blog, we'll share how to make these numbers understandable and compelling in the boardroom. 

What are Engineering Metrics? 

Engineering metrics are quantifiable measures that assess various aspects of software development processes. They provide insights into team efficiency, software quality, and delivery speed. 

Some believe that engineering productivity can be effectively measured through data. Others argue that metrics oversimplify the complexity of high-performing teams. 

While the topic is controversial, the focus of metrics in the boardroom is different. 

In a board meeting, these metrics are a means to show that the team is delivering value, that engineering operations are efficient, and that the company's investments are justified. 

Challenges in Communicating Engineering Metrics to the Board 

Communicating engineering metrics to the board isn’t always easy. Here are some common hurdles you might face: 

1. The Language Barrier 

Engineering metrics often rely on technical terms like “cycle time” or “MTTR” (mean time to recovery). To someone outside the tech domain, these might mean little. 

For example, discussing “code coverage” without tying it to reduced defect rates and faster releases can leave board members disengaged. 

The challenge is translating these technical terms into business language—terms that resonate with growth, revenue, and strategic impact. 

2. Data Overload 

Engineering teams track countless metrics, from pull request volumes to production incidents. While this is valuable internally, presenting too much data in board meetings can overwhelm your board members. 

A cluttered slide deck filled with metrics risks diluting your message. These granular operational details are for managers to handle within the team. Board members, however, care about the bigger picture. 

3. Misalignment with Business Goals 

Metrics without context can feel irrelevant. For example, sharing deployment frequency might seem insignificant unless you explain how it accelerates time-to-market. 

Aligning metrics with business priorities, like reducing churn or scaling efficiently, ensures the board sees their true value. 

Key Metrics CTOs Should Highlight in the Boardroom 

Before we tackle the challenges above, let's talk about the five key categories of metrics every CTO should be mapping: 

1. R&D Investment Distribution 

These metrics show the engineering resource allocation and the return they generate. 

  • R&D Spend as a Percentage of Revenue: Tracks how much is invested in engineering relative to the company's revenue. Demonstrates commitment to innovation.
  • CapEx vs. OpEx Ratio: This shows the balance between long-term investments (e.g., infrastructure) and ongoing operational costs. 
  • Allocation by Initiative: Shows how engineering time and money are split between new product development, maintenance, and technical debt. 

2. Deliverables

These metrics focus on the team’s output and alignment with business goals. 

  • Feature Throughput: Tracks the number of features delivered within a timeframe. The higher it is, the happier the board. 
  • Roadmap Completion Rate: Measures how much of the planned roadmap was delivered on time. Gives predictability to your fellow board members. 
  • Time-to-Market: Tracks the duration from idea inception to product delivery. It has a huge impact on competitive advantage. 

3. Quality

Metrics in this category emphasize the reliability and performance of engineering outputs. 

  • Defect Density: Measures the number of defects per unit of code. Indicates code quality.
  • Customer-Reported Incidents: Tracks issues reported by customers. Board members use it to get an idea of the end-user experience. 
  • Uptime/Availability: Monitors system reliability. Tied directly to customer satisfaction and trust. 

4. Delivery & Operations

These metrics focus on engineering efficiency and operational stability.

  • Cycle Time: Measures the time taken from work start to completion. Indicates engineering workflow efficiency.
  • Deployment Frequency: Tracks how often code is deployed. Reflects agility and responsiveness.
  • Mean Time to Recovery (MTTR): Measures how quickly issues are resolved. Impacts customer trust and operational stability. 
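
For instance, MTTR reduces to the average of resolution durations over a reporting window. A minimal sketch, assuming a hypothetical incident log with detection and resolution timestamps:

```python
from datetime import datetime

# Hypothetical incident log: (detected at, resolved at)
incidents = [
    ("2024-03-01 10:00", "2024-03-01 12:30"),
    ("2024-03-07 09:15", "2024-03-07 10:00"),
    ("2024-03-20 22:40", "2024-03-21 01:10"),
]

FMT = "%Y-%m-%d %H:%M"
durations_h = [
    (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 3600
    for start, end in incidents
]

mttr = sum(durations_h) / len(durations_h)
print(f"MTTR: {mttr:.1f} hours across {len(incidents)} incidents")
```

Framed for the board, a falling MTTR translates directly into shorter customer-facing outages.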

5. People & Recruiting

These metrics highlight team growth, engagement, and retention. 

  • Offer Acceptance Rate: Tracks how many job offers are accepted. Reflects employer appeal. 
  • Attrition Rate: Measures employee turnover. High attrition signals team instability. 
  • Employee Satisfaction (e.g., via surveys): Gauges team morale and engagement. Impacts productivity and retention. 

By focusing on these categories, you can show the board how engineering contributes to your company's growth. 

Tools for Tracking and Presenting Engineering Metrics 

Here are three tools that can help CTOs streamline the process and ensure their message resonates in the boardroom: 

1. Typo

Typo is an AI-powered platform designed to amplify engineering productivity. It unifies data from your software development lifecycle (SDLC) into a single platform, offering deep visibility and actionable insights. 

Key Features:

  • Real-time SDLC visibility to identify blockers and predict sprint delays.
  • Automated code reviews to analyze pull requests, identify issues, and suggest fixes.
  • DORA and SDLC metrics dashboards for tracking deployment frequency, cycle time, and other critical metrics.
  • Developer experience insights to benchmark productivity and improve team morale. 
  • SOC2 Type II compliance. 

2. Dashboards with Tableau or Looker

For customizable data visualization, tools like Tableau or Looker are invaluable. They allow you to create dashboards that present engineering metrics in an easy-to-digest format. With these, you can highlight trends, focus on key metrics, and connect them to business outcomes effectively. 

3. Slide Decks

Slide decks remain a classic tool for boardroom presentations. Summarize key takeaways, use simple visuals, and focus on the business impact of metrics. A clear, concise deck ensures your message stays sharp and engaging. 

Best Practices and Tips for CTOs Presenting Engineering Metrics to the Board 

Presenting engineering metrics to the board is less about raw data and more about delivering a narrative that connects engineering performance to business goals. 

Here are some best practices to follow: 

1. Educate the Board About Metrics 

Start by offering a brief overview of key metrics like DORA metrics. Explain how these metrics—deployment frequency, MTTR, etc.—drive business outcomes such as faster product delivery or increased customer satisfaction. Always include trends and real-world examples. For example, show how improving cycle time has accelerated a recent product launch. 

2. Align Metrics with Investment Decisions

Tie metrics directly to budgetary impact. For example, show how allocating additional funds for DevOps could reduce MTTR by 20%, which could lead to faster recoveries and an estimated Y% revenue boost. You must include context and recommendations so the board understands both the problem and the solution. 

3. Highlight Actionable Insights 

Data alone isn’t enough. Share actionable takeaways. For example: “To reduce MTTR by 20%, we recommend investing in observability tools and expanding on-call rotations.” Use concise slides with 5-7 metrics max, supported by simple and consistent visualizations. 

4. Emphasize Strategic Value

Position engineering as a business enabler. You should show its role in driving innovation, increasing market share, and maintaining competitive advantage. For example, connect your team’s efforts in improving system uptime to better customer retention. 

5. Tailor Your Communication Style

Understand your board members' technical backgrounds and priorities. Begin with business impact, then dive into the technical details. Use clear charts (e.g., trend lines, bar graphs) and executive summaries to convey your message. Tell the stories behind the numbers to make them relatable. 

Conclusion 

Engineering metrics are more than numbers—they’re a bridge between technical performance and business outcomes. Focus on metrics that resonate with the board and align them with strategic goals. 

When done right, your metrics can show how engineering is at the core of value and growth.

Webinar: Unlocking Engineering Productivity with Ariel Pérez & Cesar Rodriguez

In the second session of the 'Unlocking Engineering Productivity' webinar by Typo, host Kovid Batra engages engineering leaders Cesar Rodriguez and Ariel Pérez in a conversation about building high-performing development teams.

Cesar, VP of Engineering at StackGen, shares insights on ingraining curiosity and the significance of documentation and testing. Ariel, Head of Product and Technology at Tinybird, emphasizes the importance of clear communication, collaboration, and the role of AI in enhancing productivity. The panel discusses overcoming common productivity misconceptions, addressing burnout, and implementing effective metrics to drive team performance. Through practical examples and personal anecdotes, the session offers valuable strategies for fostering a productive engineering culture.

Timestamps

  • 00:00 — Introduction
  • 01:14 — Childhood Stories and Personal Insights
  • 04:22 — Defining Engineering Productivity
  • 10:27 — High-Performing Teams and Data-Driven Decisions
  • 16:03 — Counterintuitive Lessons in Leadership
  • 22:36 — Navigating New Leadership Roles
  • 31:47 — Measuring Impact and Outcomes in Engineering
  • 32:13 — North Star Metrics and Customer Value
  • 32:53 — DORA Metrics and Engineering Efficiency
  • 33:30 — Learning from Customer Behavior and Feedback
  • 35:19 — Scaling Engineering Teams and Productivity
  • 39:34 — Implementing Metrics and Tools for Team Performance
  • 41:01 — Qualitative Feedback and Customer-Centric Metrics
  • 46:37 — Q&A Session: Addressing Audience Questions
  • 58:47 — Concluding Thoughts on Engineering Leadership

Links & Mentions

Transcript

Kovid Batra: Hi everyone, welcome to the second webinar session of Unlocking Engineering Productivity by Typo. I’m your host, Kovid, excited to bring you all new webinar series, bringing passionate engineering leaders here to build impactful dev teams and unlocking success. For today’s panel, we have two special guests. Uh, one of them is our Typo champion customer. Uh, he’s VP of Engineering at StackGen. Welcome to the show, Cesar.

Cesar Rodriguez: Hey, Kovid. Thanks for having me.

Kovid Batra: And then we have Ariel, who is a longtime friend and the Head of Product and Technology at Tinybird. Welcome. Welcome to the show, Ariel.

Ariel Pérez: Hey, Kovid. Thank you for having me again. It’s great chatting with you one more time.

Kovid Batra: Same here. Pleasure. Alright, um, so, Cesar has been with us, uh, for almost more than a year now. And he’s a guy who’s passionate about spending quality time with kids, and he’s, uh, into cooking, barbecue, all that we know about him. But, uh, Cesar, is there anything else that you would like to tell us about yourself so that, uh, the audience knows you a little more, something from your childhood, something from your teenage years? This is kind of a ritual of our show.

Cesar Rodriguez: Yeah. So, uh, let me think about this. So one of, one of the things. So something from my childhood. So I had, um, I had the blessing of having my great grandmother alive when I was a kid. And, um, she always gave me all sorts of kinds of food to try. And something she always said to me is, “Hey, don’t say no to me when I’m offering you food.” And that stayed in my brain till.. Now that I’m a grown up, I’m always trying new things. If there’s an opportunity to try something new, I’m always, always want to try it out and see how it, how it is.

Kovid Batra: That’s, that’s really, really interesting. I think, Ariel, uh, I’m sure you also have something similar from your childhood or teenage years which you would like to share that defines who you are today.

Ariel Pérez: Yeah, definitely. Um, you know, thankfully I was, um, I was all, you know, reminded me Cesar. I was also, uh, very lucky to have a great grandmother and a great grandfather, alive, alive and got to interact with them quite a bit. So, you know, I think we know very amazing experiences, remembering, speaking to them. Uh, so anyway, it was great that you mentioned that. Uh, but in terms of what I think about for me, the, the things that from my childhood that I think really, uh, impacted me and helped me think about the person I am today is, um, it was very important for my father who, uh, owned a small business in Washington Heights in New York City, uh, to very early on, um, give us the idea and then I know that in the sense that you’ve got to work, you’ve got to earn things, right? You’ve got to work for things and money just doesn’t suddenly appear. So at least, you know, a key thing there was that, you know, from the time I was 10 years old, I was working with my father on weekends. Um, and you know, obviously, you know, it’s been a few hours working and doing stuff and then like doing other things. But eventually, as I got older and older through my teenage years, I spent a lot more time working there and actually running my father’s business, which is great as a teenager. Um, so when you think about, you know, what that taught me for life. Obviously, there’s the power of like, look, you’ve got to work for things, like nothing’s given to you. But there’s also the value, you know, I learned very early on. Entrepreneurship, you know, how entrepreneurship is hard, why people go follow and go into entrepreneurship. It taught me skills around actual management, managing people, managing accounting, bookkeeping. But the most important thing that it taught me is dealing with people and working with people. It was a retail business, right? So I had to deal with customers day in and day out. So it was a very important piece of understanding customers needs, customers wants, customers problems, and how can I, in my position where I am in my business, serve them and help them and help them achieve their goals. So it was a very key thing, very important skill to learn all before I even went to college.

Kovid Batra: That’s really interesting. I think one, Cesar, uh, has learned some level of curiosity, has ingrained curiosity to try new things. And from your childhood, you got that feeling of building a business, serving customers; that is ingrained in you guys. So I think really, really interesting traits that you have got from your childhood. Uh, great, guys. Thank you so much for this quick sweet intro. Uh, so coming to today’s main section which is about talking, uh, about unlocking engineering productivity. And today’s, uh, specifically today’s theme is around building that data-driven mindset around unlocking this engineering productivity. So before we move on to, uh, and deep dive into experiences that you have had in your leadership journey. First of all, I would like to ask, uh, you guys, when we talk about engineering productivity or developer productivity, what exactly comes to your mind? Like, like, let’s start with a very basic, the fundamental thing. I think Ariel, would you like to take it first?

Ariel Pérez: Absolutely. Um, the first thing that comes to mind is unfortunate. It’s the negative connotation around developer productivity. And that’s primarily because for so long organizations have trying to figure out how do I measure the productivity of these software developers, software engineers, who are one of my most expensive resources, and I hate the word ‘resource’, we’re talking about people, because I need to justify my spend on them. And you know what, they, I don’t know what they do. I don’t understand what they do. And I got to figure out a way to measure them cause I measure everyone else. If you think about the history of doing this, like for a while, we were trying to measure lines of code, right? We know we don’t do that. We’re trying to open, you know, we’re trying to, you know, measure commits. No, we know we don’t do that either. So I think for me, unfortunately, in many ways, the term ‘developer productivity’ brings so many negative associations because of how wrong we’ve gotten it for so long. However, you know, I am not the, I am always the eternal optimist. And I also understand why businesses have been trying to measure this, right? All these things are inputs into the business and you build a business to, you know, deliver value and you want to understand how to optimize those inputs and you know, people and a particular skill set of people, you want to figure out how to best understand, retain the best people, manage the best people and get the most value out of those people. The thing is, we’ve gotten it wrong so many times trying to figure it out, I think, and you know, some of my peers who discuss with me regularly might, you know, bash me for this. I think DORA was one good step in that direction, even though there’s many things that it’s missing. I think it leans very heavily on efficiency, but I’ll stop, you know, I’ll leave that as is. But I believe in the people that are behind it and the people, the research and how they backed it. I think a next iteration SPACE and trying to go to SPACE, moved this closer and tried to figure it out, you know, there’s a lot of qualitative aspects that we need to care about and think about. Um, then McKinsey came and destroyed everything, uh, unfortunately with their one metric to rule it all. And it was, it’s been all hell broke loose. Um, but there’s a realization and a piece that look, we, as, as a, as a, as an industry, as a role, as a type of work that we do, we need to figure out how we define this so that we can, you know, not necessarily justify our existence, but think about, how do we add value to each business? How do we define and figure out a better way to continually measure? How do we add value to a business? So we can optimize for that and continually show that, hey, you actually can’t live without us and we’re actually the most important part of your business. Not to demean any other roles, right? But as software engineers in a world where software is eating the world and it has eaten the world, we are the most important people in the, in there. We’re gonna figure out how do we actually define that value that we deliver. So it’s a problem that we have to tackle. I don’t think we’re there yet. You know, at some point, I think, you know, in this conversation, we’ll talk about the latest, the latest iteration of this, which is the core 4, um, which is, you know, things being talked about now. I think there’s many positive aspects. I still think it’s missing pieces. I think we’re getting closer. 
But, uh, and it’s a problem we need to solve just not as a hammer or as, as a cudgel to push and drive individual developers to do more and, and do more activity. That’s the key piece that I think I will never accept as a, as a leader thinking about developer productivity.

Kovid Batra: Great, I think that that’s really a good overview of how things are when we talk about productivity. Cesar, do you have a take on that? Uh, what comes to your mind when we talk about engineering and developer productivity?

Cesar Rodriguez: I think, I think what Ariel mentioned resonates a lot with me because, um, I remember when we were first starting in the industry, everything was seen narrowly as how many lines of code can a developer write, how many tickets can they close. But true productivity is about enabling engineers to solve meaningful problems efficiently and ensuring that those problems have business impact. So, so from my perspective, and I like the way that you wrote the title for this talk, like developer (slash) engineering. So, so for me, developer, when I think about developer productivity, that that brings to my mind more like, how are your, what do your individual metrics look like? How efficiently can you write code? How can you resolve issues? How can you contribute to the product lifecycle? And then when you think about engineering metrics, that’s more of a broader view. It’s more about how is your team collaborating together? What are your processes for delivering? How is your system being resilient? Um, and how do you deliver, um, outcomes that are impactful to the business itself? So I think, I think I agree with Ariel. Everything has to be measured in what is the impact that you’re going to have for the business because if you can’t tie that together, then, then, well, I think what you’re measuring is, it’s completely wrong.

Kovid Batra: Yeah, totally. I, I, even I agree to that. And in fact, uh, when we, when we talk about engineering and developer productivity, both, I think engineering productivity encompasses everything. We never say it’s bad to look at individual productivity or developer productivity, but the way we need to look at it is as a wholesome thing and tie it with the impact, not just, uh, measuring specific lines of code or maybe metrics like that. Till that time, it definitely makes sense and it definitely helps measure the real impact, uh, real improvement areas, find out real improvement areas from those KPIs and those metrics that we are looking at. So I think, uh, very well said both of you. Uh, before I jump on to the next piece, uh, one thing that, uh, I’m sure about that you guys have worked with high-performing engineering teams, right? And Ariel, you had a view, like what people really think about it. And I really want to understand the best teams that you have worked with. What’s their perception of, uh, productivity and how they look at, uh, this data-driven approach, uh, while making decisions in the team, looking at productivity or prioritizing anything that comes their way, which, which would need improvement or how is it going? How, how exactly these, uh, high-performing teams operate, any, any experiences that you would like to share?

Ariel Pérez: Uh, Cesar, do you want to start?

Cesar Rodriguez: Sure. Um, so from my perspective, the first thing that I’ve observed on high-performing teams is that is there is great alignment with the individual goals to what the business is trying to achieve. Um, the interests align very well. So people are highly motivated. They’re having fun when they’re working and even on their outside hours, they’re just thinking about how are you going to solve the problem that they’re, they’re working on and, and having fun while doing it. So that’s, that’s one of the first things that I observed. The other thing is that, um, in terms of how do we use data to inform the decisions, um, high-performing teams, they always use, consistently use data to refine processes. Um, they identify blockers early and then they use that to prioritize effectively. So, so I think all ties back to the culture of the team itself. Um, so with high-performing teams, you have a culture that is open, that people are able to speak about issues, even from the lowest level engineer to the highest, most junior engineers, the most highest senior engineer, everyone is treated equally. And when people have that environment, still, where they can share their struggles, their issues and quickly collaborate to solve them, that, that for me is the biggest thing to be, to be high-performing as a team.

Kovid Batra: Makes sense.

Ariel Pérez: Awesome. Um, and, you know, to add to that, uh, you know, I 1000% agree with the things you just mentioned that, you know, a few things came to mind of that, like, you know, like the words that come to mind to describe some of the things that you just said. Uh, like one of them, for example, you know, you think about the, you know, what, what is a, what is special or what do you see in a high-performing team? One key piece is there’s a massive amount of intrinsic motivation going back to like Daniel Pink, right? Those teams feel autonomy. They get to drive decisions. They get to make decisions. They get to, in many ways own their destiny. Mastery is a critical thing. These folks are given the opportunity to improve their craft, become better and better engineers while they’re doing it. It’s not a fight between ‘should I fix this thing’ versus ‘should I build this feature’ since they have autonomy. And the, you know, guide their own and drive their own agenda and, and, and move themselves forward. They also know when to decide, I need to spend more time on building this skill together as a team or not, or we’re going to build this feature; they know how to find that balance between the two. They’re constantly becoming better craftsmen, better engineers, better developers across every dimension and better people who understand customer problems. That’s a critical piece. We often miss in an engineering team. So becoming better at how they are doing what they do. And purpose. They’re aligned with the mission of the company. They understand why we do what we do. They understand what problem we’re solving. They, they understand, um, what we sell, how we sell it, whose problems to solve, how we deliver value and they’re bought in. So all those key things you see in high-performing teams are the major things that make them high-performing.

The other thing sticking more to like data and hardcore data numbers. These are folks that generally are continually improving. They think about what’s not working, what’s working, what should we do more of, what should we do less of, you know, when I, I forgot who said this, but they know how to turn up the good. So whether you run retros, whether you just have a conversation every day, or you just chat about, hey, what was good today, what sucked; you know, they have continuous conversations about what’s working, what’s not working, and they continually refine and adjust. So that’s a key critical thing that I see in high-performing teams. And if I want to like, you know, um, uh, button it up and finish it at the end is high-performing teams collaborate. They don’t cooperate, they collaborate. And that’s a key thing we often miss, which is and the distinction between the two. They work together on their problems, which one of those key things that allows them to like each other, work well with each other, want to go and hang out and play games after work together because they depend on each other. These people are shoulder to shoulder every day, and they work on problems together. That helps them not only know that they can trust each other, they can trust each other, they can depend on each other, but they learn from each other day in and day out. And that’s part of what makes it a fun team to work on because they’re constantly challenging each other, pushing each other because of that collaboration. And to me, collaboration means, you know, two people, three people working on the same problem at the same time, synchronously. It’s not three people separating a problem and going off on their own and then coming back together. You know, basically team-based collaboration, working together in real time versus individual work and pulling it together; that’s another key aspect that I’ve often seen in high-performing teams. Not saying that the other ways, I have not seen them and cannot be in a high-performing team, but more likely and more often than not, I see this in high-performing teams.

Kovid Batra: Perfect. Perfect. Great, guys. And in your journeys, um, there have been, there must have been a lot of experiences, but any counterintuitive things that you have realized later on, maybe after making some mistakes or listening to other people doing something else, are there any things which, which are counterintuitive that you learned over the time about, um, improving your team’s productivity?

Ariel Pérez: Um, I’ll take this one first. Uh, I don’t know if this is counterintuitive, but it’s something you learn as you become a leader. You can’t tell people what to do, especially if they’re high-performing, you’re improving them, even if you know better, you can’t tell them what to do. So unfortunately, you cannot lead by edict. You can do that for a short period of time and get away with it for a short period of time. You know, there’s wartime versus peacetime. People talk about that. But in reality, in many ways, it needs to come from them. It needs to be intrinsic. They’re going to have to be the ones that want to improve in that world, you know, what do you do as a leader? And, you know, I’ve had every time I’ve told them, do this, go do this, and they hated me for it. Even if I was right at the end, then even if it took a while and then they eventually saw it, there was a lot of turmoil, a lot of fights, a lot of issues, and some attrition because of it. Um, even though eventually, like, yes, you were right, it was a bit more painful way, and it was, you know, me and the purpose for the desire, you know, let me go faster. We got to get this done. Um, it needs to come from the team. So I think I definitely learned that it might seem counterintuitive. You’re the boss. You get to tell people to do. It’s like, no, actually, no, that’s not how it works, right? You have to inspire them, guide them, drive them, give them the tools, give them the training, give them the education, give them the desire and need and want for how to get there, have them very involved in what should we do, how do we improve, and you can throw in things, but it needs to come from them. If there were anything else I’d throw into that, it was counterintuitive, as I think about improving engineering productivity was, to me, this idea of that off, you know, as we think about from an accounting perspective, there’s just no way in hell that two engineers working on one problem is better than one. There’s no way that’s more productive. You know, they’re going to get half the work done. That’s, that’s a counterintuitive notion. If you think about, if you think about it, engineers as just mere inputs and resources. But in reality, they’re people, and that software development is a team sport. As a matter of fact, if they work together in real time, two engineers at the same time, or god forbid, three, four, and five, if you’re ensemble programming, you actually find that you get more done. You get more done because things, like they need to get reworked less. Things are of higher quality. The team learns more, learns faster. So at the end of the day, while it might feel slow, slow is smooth and smooth is fast. And they get just get more over time. They get more throughput and more quality and get to deliver more things because they’re spending less time going back and fixing and reworking what they were doing. And the work always continues because no one person slows it down. So that’s the other counterintuitive thing I learned in terms of improving and increasing productivity. It’s like, you cannot look at just productivity, you need to look at productivity, efficiency, and effectiveness if you really want to move forward.

Kovid Batra: Makes sense. I think, uh, in the last few years, uh, being in this industry, I have also developed a liking towards pair programming, and that’s one of the things that align with, align with what you have just said. So I, I’m in for that. Yeah. Uh, great. Cesar, do you have, uh, any, any learnings which were counterintuitive or interesting that you would like to share?

Cesar Rodriguez: Oh, and this goes back to the developer versus engineering, uh, conversation, uh, and question. So productivity and then something that’s counterintuitive is that it doesn’t mean that you’re going to be busy. It doesn’t mean that you’re just going to write your code and finish tickets. It means that, and this is, if there are any developers here listening to this, they’re probably going to hate me. Um, you’re going to take your time to plan. You’re going to take your time to reflect and document and test. Um, and we, like, we’ve seen this even at StackGen last quarter, we focused our, our, our efforts on improving our automated tests. Um, in the beginning, we’re just trying to meet customer demands. We, unfortunately, they didn’t spend much time testing, but last quarter we made a concerted effort, hey, let’s test all of our happy paths, let’s have automated tests for all of that. Um, let’s make sure that we can build everything in our pipelines as best as possible. And our, um, deployment frequency metrics skyrocketed. Um, so those are some of the, uh, some of the counterintuitive things, um, maybe doing the boring stuff, it’s gonna be boring, but it’s gonna speed you up.

Ariel Pérez: Yeah, and I think, you know, if I can add one more thing on that, right, that’s critical that many people forget, you know, not only engineers, as we’re working on things and engineering leadership, but also your business peers; we forget that the cost of software, the initial piece of building it is just a tiny fraction of the cost. It’s that lifetime of iterating, maintaining it, managing, building upon it; that’s where all the cost is. So unfortunately, we often cut the things when we’re trying to cut corners that make that ongoing cost cheaper and you’re, you’re right, at, you know, investing in that testing upfront might seem painful, but it helps you maintain that actual, you know, uh, that reasonable burn for every new feature will cost a reasonable amount, cause if you don’t invest in that, every new feature is more expensive. So you’re actually a whole lot less productive over time if you don’t invest on these things at the beginning.

Cesar Rodriguez: And it, and it affects everything else. If you’re trying to onboard somebody new, it’ll take more time because you didn’t document, you didn’t test. Um, so your cost of onboarding new people is going to be more expensive. Your cost of adding new people, uh, new features is going to be more expensive. So yeah, a hundred percent.

Kovid Batra: Totally. I think, Cesar, documentation and testing, uh, people hate it, but that’s the truth for sure. Great, guys. I think, uh, there is more to learn on the journey and there are a lot more questions that I have and I’m sure audience would also have a lot of questions. So I would request the audience to put in their questions in the comment section right now, because at the end when we are having a Q&A, we’ll have all the questions sorted and we can take all of them one by one. Okay. Um, as I said, like a lot of learning and unlearning is going to happen, but let’s talk about some of, uh, your specific experiences, uh, learn some practical tips from there. So coming to you, Ariel. Uh, you have recently moved into this leadership role at Tinybird. Congratulations, first of all.

Ariel Pérez: Thank you.

Kovid Batra: And, uh, I’m sure this comes with a lot of responsibility when you enter into a new environment. It’s not just a new thing that you’re going to work upon, it’s a whole new set of people. I’m sure you have seen that in your career multiple times. But every time you step in and you’re a new person there, and of course, uh, you’re going as a leader, uh, it could be overwhelming, right? Uh, how do you manage that situation? How do you start off? How do you pull off so that you actually are able to lead, uh, and, and drive that impact which you really want?

Ariel Pérez: Got it. Um, so, uh, the first part is one of, this may sound like fluff, but it really helps, um, in many ways when you have a really big challenge ahead, you know, you have to avoid, you have to figure out how to avoid letting imposter syndrome freeze you. And even if you’ve had a career of success, you know, in many ways, imposter syndrome still creeps up, right? So how do I fight, how do I fight that? It’s one of those things like stand in front of the mirror and really deep breaths and talk about I got this job for a reason, right? I, you know, I, I, they, they’re trusting me for a reason. I got here. I earned this. Here’s my track record. I worked this. Like I deserve to be here. I’m supposed to be here. I think that’s a very critical piece for any new leader, especially if you’re a new leader in a new place, because you have so much novelty left and right. You have to prove yourself and that’s very daunting. So the first piece is you need to figure out how to get yourself out of your own head. And push yourself along and coach yourself, like I’m supposed to be here, right? Once you get that piece, you know down pat, it really helps in many ways helps change your own mindset your own framing. When you’re walking into conversations walking into rooms, there’s a big piece of how, how that confidence shines through. That confidence helps you speak and get your ideas and thoughts out without tripping all over yourself. That confidence helps you not worry about potentially ruffling some feathers and having hard conversations. When you’re in leadership, you have to have hard conversations. It’s really important to have that confidence, obviously without forgetting it, without saying, let me run over everybody, cause that’s not what it means, but it just means you got to get over the piece that freezes you and stops you. So that’s the first piece I think. The second piece is, especially when moving higher and higher into positions of leadership; it’s listening. Listening is the biggest thing you do. You might have a million ideas, hold them back, please hold them back. And that’s really hard for me. It’s so hard cause I’m like, “I see that I can fix that. I can fix that too. I’ve seen that before I can fix it too.” But, you know, you earn more respect by listening and observing. And actually you might learn a few things or two. I’m like, “Oh, that thing I wanted to fix, there’s a reason why it’s the way it is.” Because every place is different. Every place has a different history, a different context, a different culture, and all those things come into play as to why certain decisions were made that might seem contrary to what you would have done. And it helps you understand that context. That context is critical, not only to then figure out the appropriate solution to the problem, but also that time while you’re learning and listening and talking to people, you’re building relationships with people, you’re connecting to people, you’re understanding, you’re understanding the players, understanding who does well, who doesn’t do well, you’re understanding where all the bodies are buried, you’re understanding the strategy, you’re getting a big picture of all the things so that then when it comes time to say now time to implement change, you have a really good setup of who are the people that are gonna help me make the change, who are the people that are going to be challenging, how do I draw a plan to do change management, which is a big important thing. Change management is huge. 
It’s 90% people. So you need to understand the people and then understand, it also gives you enough time to understand the business strategy, the context, the big problem where you’re going to kind of be more effective at. Here’s why I got hired. Now I’m going to implement the things to help me execute on what I believe is the right strategy based on learning and listening and keeping my mouth shut for the time, right? Now, traditionally, you’ll hear this thing about 90 days. I think the 90 days is overly generous if you’re in a really big team, I think it leans and skews toward big places, slower moving places, um, and, and places that move. That’s it. Bigger places, slower places. When you join a startup environment, we join a smaller company. You need to be able to move faster. You don’t have 90 days to make decisions. You don’t have 90 days. You might have 30 days, right? You want to push that back as far as you can to get an appropriate context. But there’s a bias for action, reasonably so because you’re not guaranteed that the startup is going to be there tomorrow. So you don’t have 90 days, but you definitely don’t want to do it in two weeks and probably not start doing things in a month.

Kovid Batra: Makes sense. Makes sense. So, uh, a follow-up question on that. Uh, when you get into this position, if you are in a startup, let’s say you get 30 to 45 days, but then because of your bias towards action, you pick up initiatives that you would want to lead and create that impact. In your journey at Tinybird, have you picked up something, anything interesting, maybe related to AI or maybe working with different teams that you think would work on your existing code base to revamp it, anything that you have picked up and why?

Ariel Pérez: Yeah, a bunch of stuff. Um, I think when I first joined Tinybird, my first role was field CTO, which is a role that takes the, the, the responsibilities of the CTO and the external facing aspects of them. So I was focused primarily on the market, on customers, on prospects. And as part of that one, you know, one of the first initiatives I had was how do we, uh, operate within the, you know, sales engineering team, who was also reporting to me, and make that much more effective, much more efficient. So a few of the things that we were thinking of there were, um, AI-based solutions and GenAI-based solutions to help us find the information we need earlier, sooner, faster. So that was more like an optimization and efficiency thing in terms of helping us get the answers and clarify and understand and gather requirements from customers and very quickly figure out this is the right demo for you, these are the right features and capabilities for you, here’s what we can do, here’s what we can’t do, to get really effective and efficient at that. When moving into a product role though, and product and engineering role, in terms of the, the latest initiatives that I’ve picked up, like there, there, there, there are two big things in terms of themes. One of them is that Tinybird must always work, which sounds like, yeah, well, duh, obviously it must always work, but there’s a key piece underpinning that. Number one, obviously the, you know, stability and reliability are huge and required for trust from customers wanting to use you as a dev tool. You need to be able to depend on it, but there’s another piece is anything I must do and try to do on the platform, it must fail in a way that I understand and expect so that then I can self service it and fix it. So that idea of Tinybird always works that I’ve been picking up and working on projects is transparency, observability, and the ability for customers to self-service and resolve issues simply by saying, “I need more resources.” And that’s a, you know, it’s a very challenging thing because we’ve got to remove all the errors that have nothing to do with that, all the instability and all the reliability problems so that those are granted. And then remaining should only be issues that, hey, customer, you can solve this by managing your limits. Hey, customer, you can solve this by increasing the cores you’re using. You can solve this by adding more memory and that should be the only thing that remains. So working on a bunch of stuff there on predicting whether something will fail or not, predicting whether something is going to run out of resources or not, very quickly identifying if you’re running out of resources so there’s almost like an SRE monitoring observability aspect to this, but turning that back into a product solution. That’s one side of it. And then the other big pieces will be called a developer’s experience. And that’s something that my, you know, my, my, my peer is working internally on and leading is a lot more about how developers develop today. Developers develop today, well, they always develop locally. They prefer not depending on IO on a network, but developer, every developer, whether they tell you yes or no, is using an AI assistant; every developer, right? Or 99% of developers. So the idea is, how do we weave that into the experience without making it be, you know, a gimmick? 
How do we weave an AI copilot into your development experience, your local development experience, your remote development experience, your UI development experience, so that you have this expert at your disposal to help you accelerate your development, accelerate your ability to find problems before you ship? And even when you ship, help you find those problems there so you can accelerate those cycles, so you can shorten those lead times, so you can get to productivity and a productive solution faster with fewer errors and fewer issues. So that's one major piece we're working on there, embedding AI; and not just AI and LLMs and GenAI, all you think about, even traditional; I say traditional, like ML models, on understanding and predicting whether something's going to go wrong. So we're working on a lot of that kind of stuff to really accelerate the developer, uh, accelerate developer productivity and engineering team productivity, get you to ship some value faster.

Kovid Batra: Makes sense. And I think, uh, when you're doing this, is there any kind of framework, tooling, or process that you're using to measure this impact, uh, over the journey?

Ariel Pérez: Yeah, um, for this kind of stuff, I lean a lot more toward the outcomes side of the equation, you know, this whole question of outputs versus outcomes. I do agree with John Cutler; I love listening to John Cutler. He very recently published something like, look, we can't just look at outcomes, because unfortunately, outcomes are lagging. We need some leading indicators, and we need to look at not only outcomes, but also outputs. We need to look at what goes into here. We need to look at activity, but it can't be the only thing we look at. So the things I look at: number one, um, very recently I started working with my team to try to create our North Star metric. What is our North Star metric? How do we know that what we're doing and what we're solving for is delivering value for our customers? And is that linked to our strategy and our vision? And do we see a link to eventual revenue, right? So with all those things, trying to figure that out, working with my teams, looking at our customers, understanding our data, we've come up with a North Star metric. We said, great, everything we do should move that number. If that number is moving up and to the right, we're doing the right things. Now, looking at that alone is not enough, because especially as engineering teams, I've got to work back and say, how efficient are we at trying to figure that out? So there are, you know, a few things that I look at. I obviously look at the DORA metrics. I do look at them because they help us try to figure out sources of issues, right? What's our lead time? What's our cycle time? What's our deployment frequency? What's our, you know, change failure rate? What's our mean time to recover? Those are very critical to understand. Are we running a tip-top shop in terms of engineering? How good are we at shipping the next thing? Because it's not just shipping things faster; it's, if there's a problem, I need to fix it really fast. If I want to deliver value and learn, and this is the second piece that's critical, where many companies fail: I need to put it out in the hands of customers sooner. That's the efficiency piece. That's the outputs. That's the, you know, are we getting really good at putting it in front of customers. But the second piece that we need, independent of the North Star metric, is 'and what happened', right? Did it actually improve things? Did it make things worse? So it's optimizing for that learning loop on what our customers are doing. We're tracking behavioral analytics pieces: where the friction points are, funnels. Where are they dropping off? Where are they spinning their wheels, right? We're looking at heat maps. We're looking at videos and screen shares of, like, what did the customer do? Why aren't they doing what we thought they were going to do? So then, when we learn this, we go back to those really awesome DORA numbers, ship again, and let's see, let's iterate on that. So, to me, it's a comprehensive view: are we getting really good at shipping? And are we getting really good at shipping the right thing? Mixing both of those things, driven by the North Star metric. Overall, with all the stuff we're doing, is the North Star moving up and to the right?

Kovid Batra: Makes sense. Great. Thanks, Ariel. Uh, this was really, really insightful. Like, from the point you enter as a leader: build that listening capability, have that confidence, drive the initiatives which are right and impactful, and then look at metrics to ensure that you're moving in the right direction towards that North Star. I think to sum up, it was really nice and interesting. Cesar, coming to your experience, uh, you have also had a good stint at StackGen and, uh, you were mentioning taking up a transition successfully, to multi-cloud infrastructure, which expanded your engineering team. Right? And I would want to deep dive into that experience. Uh, you specifically mentioned that that transition was really successful, and at that time, you were able to keep the focus, keep the productivity in place. How did things go for you? Let's deep dive into that experience of yours.

Cesar Rodriguez: Yeah. So, from my perspective, the goals that you're going to have for your team are going to be specific to where the business is at, at that point in time. So, for example, at StackGen, we started in 2023. Initially, we were a very small number of engineers just trying to solve the initial problem, um, which we're trying to solve with StackGen, which is infrastructure from code and easily deploying cloud architecture into the cloud environment. Um, so we focused on one cloud provider, one specific problem, with a handful of engineers. And once we started learning from customers what was working, what was not working, um, and we started being pulled in different directions, we quickly learned that we needed to increase engineering capacity to support additional clouds, to deliver additional features faster. Um, our clients were trying to pull us in different directions. So that required, uh, two things. One is, um, hiring and scaling the team quickly. So at the moment we're 22 engineers; so hiring and scaling the engineering team quickly, and then enabling new team members to be as productive as possible from day zero. Um, and this is where the boring, the boring actions come into play. Um, uh, so first of all, making sure that you have enough documentation so somebody can get up and running on day one, um, and they can start doing pull requests on day one. Second of all, making sure that you have, um, clear expectations in terms of quality and what is your happy path, and how you can achieve that. And third, um, is making sure everyone knows what is expected from them in terms of the metrics that we're looking for and, uh, the quality that we're looking for in their outcomes. And this is something that we use Typo for. So, for example, we have an international team. We have people in India, Portugal, the US East Coast, the US West Coast. And one of the things that we were getting stuck on early on was our pull requests were getting opened, but then it took a really long time for people to review them, merge them, and take action and get them deployed. So, um, we established a metric, and we did this using Typo, where we were measuring: hey, if you have a pull request open more than 12 hours, let's create an alert, let's alert somebody, so that somebody can be on top of that. We don't want to get somebody stuck for more than a working day, waiting for somebody to review the pull request. And the other metric that we look at, which is deployment frequency: we've seen an uptick in that. Now that people are not getting stuck, we're able to have more frictionless, um, deployments in our SDLC. We're seeing collaboration between team members, regardless of their time zone, improving. So that's something actionable that we've implemented.
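(A quick aside for readers: outside of an SEI platform like Typo, a team could approximate the pull-request-age alert Cesar describes with a small script against the GitHub REST API and a Slack incoming webhook. The repository name, token, and webhook URL below are placeholder assumptions; this is only a sketch of the idea, not how Typo implements it.)

```python
# Minimal sketch: flag pull requests that have been open longer than 12 hours.
# Assumes GITHUB_TOKEN and SLACK_WEBHOOK_URL environment variables; "org/repo"
# is a placeholder repository name.
import os
from datetime import datetime, timezone

import requests

REPO = "org/repo"  # placeholder
MAX_AGE_HOURS = 12

def stale_pull_requests():
    resp = requests.get(
        f"https://api.github.com/repos/{REPO}/pulls",
        params={"state": "open", "per_page": 100},
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    now = datetime.now(timezone.utc)
    for pr in resp.json():
        opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
        age_hours = (now - opened).total_seconds() / 3600
        if age_hours > MAX_AGE_HOURS:
            yield pr["number"], pr["title"], round(age_hours, 1)

for number, title, age in stale_pull_requests():
    requests.post(
        os.environ["SLACK_WEBHOOK_URL"],
        json={"text": f"PR #{number} '{title}' has been open {age}h; please review."},
        timeout=30,
    )
```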

Kovid Batra: So I think you’re doing the boring things well and keeping a good visibility on things, how they’re proceeding, really helped you drive this transition smoothly, and you were able to maintain that productivity in the team. That’s really interesting. But touching on the metrics part again, uh, you mentioned that you were using Typo. Uh, there, there are, uh, various toolings to help you, uh, plan, execute, automate, and reflect things when you are, when you are into a position where as a leader, uh, you have multiple stakeholders to manage. So my question to both of you, actually, uh, when we talk about such toolings, uh, that are there in the market, like Typo, how these tools help you exactly, uh, in each of these phases, or if you’re not using such tools, you must be using some level of metrics, uh, to actually, let’s say you’re planning an initiative, how do you look at numbers? If you’re automating something and executing something, how do you look at numbers and how does this whole tooling piece help you in that? Um, yeah.

Cesar Rodriguez: I think, I think for me, the biggest thing before, uh, using a tool like Typo was it was very hard to have a meaningful conversation on how the engineering team was performing, um, without having hard data and raw data to back it up. So, um, if you're not measuring things, the conversation is more about feelings and more about anecdotal evidence. But when you have actual data that you can observe, then you can make improvements, and you can measure whether things are going well or going badly and take action on it. So, for me, that's the biggest benefit from my perspective. Um, you can have conversations within your team and then with the rest of the organization, um, and present that in a way that makes sense for everyone.

Kovid Batra: Makes sense. I think that’s the execution part where you really take the advantage of the tool. You mentioned with one example that you had set a goal for your team that okay, if the review time is more than 12 hours, you would raise an alert. So, totally makes sense, that helps you in the execution, making it more smooth, giving you more, uh, action-driven, uh, insights so that you can actually make teams move faster. Uh, Ariel, for you, any, any experiences around that? How do you, uh, use metrics for planning, executing, reflecting?

Ariel Pérez: So I think, you know, one of the things I like doing is working from the outside in. By that I mean: first, let me look at the things that directly impact customers, that are visible. There's so much there on, you know, trust with customers. There's also something there on actual, eventual impact. So I look, for example, at something that may sound negative, but it's one of those things you want to track very closely, manage, and then learn from: what's our incident number? Like, how many incidents do we have? You know, how many P0s? How many P1s? That is a very important metric to track, because I will guarantee you this: if you don't have that number as an engineering leader, your CEO is going to try to figure out, hey, why are we having so many problems? Why are so many angry customers calling me? So that's a number you're going to want to have a very strong pulse on: understand incidents. And then obviously, take that number and try to figure out what's going on, right? There's so much behind it. But the first part is understand the number, and you want that number to go down over time. Um, obviously, like I said, there's a North Star metric. You're tracking that. Um, I also look at things like NPS and CSAT (I don't lean heavily on these, but they're still used a lot and they're still valuable) to help understand how customers are feeling, how customers are thinking. And they give me even more when paired with qualitative feedback, because I want to understand the 'why'; and I'll dive more into the qualitative piece, how critical it is and how often we forget that piece when we're chasing metrics and looking for numbers. Especially as engineers, we want numbers. We need a story, and you can't get the story just from the numbers. So I love the qualitative aspect. And then the third thing I look at is, um, FCIs, or failed customer interactions: trying to find friction in the journeys. What are all the times a customer tries to do something and they fail? And, you know, you can define that in so many kinds of ways, but capturing that is one of those things you try to figure out. Find failed customer interactions, find where customers are hitting friction points, and let's figure out which of those are most important to attack. So these things help guide, at the minimum, what we need to work on as a team, right? What are the things we need to start focusing on to deliver and build? Like, how do I get initiatives? Obviously, that stuff alone doesn't turn into initiatives. So the next thing I like ensuring, and I drive to figure out what we work on, is with all my leaders. And in our organization, we don't have separate product managers. You know, engineering leaders are product managers. They have to build those product skills, because we have such a technical product that we decided to make that decision, not only for efficiency's sake, to stop having two people in every conversation, but also to build up that skill set of 'I'm building for engineers, and I need to know my engineering product very well; now let me enable these folks with the frameworks and methodologies, the ideas and the things that help them make product decisions.' So, when we look at these numbers, we try to look at what are some frameworks and ways to think about what I am going to build. Which of these is it going to impact? How much do we think it's going to impact it? What level of confidence do I have in that?
Does that come from the gut? Does it come from several opinions? Are customers telling us that? Is the data telling us that? Are competitors doing it? Have we run an experiment? Did we do some UX research? So there are different levels of, uh, confidence in 'I want to do this thing, because this thing's going to move that number, and we believe that number is important.' The FCIs are through the roof; I want to attack them. This is going to move it. Okay, how sure are you it's going to move it? Now, how are we going to measure that it indeed moved it? So that's the outside of the onion. Then I work inward and say, great, how good are we at getting at those things? So, uh, there are two combinations of measures. I pull measures and data from GitLab, from GitHub; I look at the deployments that we have. Thankfully, we run a database. We have an OLAP database, so I can run a bunch of metrics off of all this stuff. We collect all this data, all this telemetry, from our services, from our deployments, from our providers, from all of the systems we use, and then we have these dashboards we built internally to track aggregates, track metrics, and track them in real time, because that's what Tinybird does. So, we use Tinybird to Tinybird while we Tinybird, which is awesome. So we've built our own dashboards and mechanisms to track a lot of these metrics and understand a lot of these things. However, there's a key piece which I haven't introduced yet, but I have a lot of conversations with a lot of people on: hey, why did this number move? What's going on? I want to get to the place where we actually introduce surveys. Funny enough, when you talk about the beginning of DORA: even today, DORA says surveys are the best way to do this. We try to get hard data, but surveys are the best way to get it. For me, surveys really help with this: forget for a second what the numbers are telling me; how do the engineers feel? Because then I get to figure out, why do you feel that way? It allows me to dive in. So that's why I believe the qualitative, subjective piece is so important to then bolster the numbers I'm seeing: either, A, explain the numbers, or, the other way around, when I see a story, do the numbers back up that story? The reality is somewhere in the middle, but I use both of those to really help me.

Kovid Batra: Makes sense. Makes sense. Great, guys. I think, uh, thank you. Thank you so much for sharing such good insights. I'm sure our audience has some questions for us, uh, so we can break for a minute and, uh, then start the Q&A.

Kovid Batra: All right. I think, uh, we have a lot of questions there, but I'm sure we are going to pick a few of them. Let's start with the first one. That's from Vishal: Hi Ariel, how do I decide which metrics to focus on while measuring team productivity and individual metrics? So I think the question is simple, but please go ahead.

Ariel Pérez: Um, I would start with measuring the core four of DORA at the minimum across the team to help me pinpoint where I need to go. In terms of which team productivity metrics or individual productivity metrics: I'd be very wary of trying to measure individual productivity metrics, not because we shouldn't hold individuals accountable for what they do, not because individuals don't also need to understand, uh, how we think about performance, how we manage that performance, but because for individuals, we have to be very careful, especially in software teams. Since it's a team sport, there's no individual that is successful on their own, and there's no individual that fails on their own. So if I were to measure and try to identify how an individual is doing, I would look for at least two things. Number one, actual peer feedback. How do their peers think about this person? Can they depend on this person? Is this person there when they need them? Is this person causing a lot of problems? Is this person fixing a lot of problems? But I'd also look at, for the culture I want to build: how often is this person reviewing other people's PRs? How often is this person sitting with other people, helping unblock them? How often is this person not coding because they're going and working with someone else to help unblock them? I actually see that as a positive. Most frameworks will ding that person for inactivity. So I try to find the things that don't measure activity, but are measuring that they're doing the right things, which is teamwork. They're actually being effective at working in a team when it comes to individuals.

Kovid Batra: Great. Thanks, Ariel. Uh, next question. That’s for you, Cesar. How easy or hard is the adoption and implementation of SEI tools like Typo? Okay, so you can share your experience, how it worked out for you.

Cesar Rodriguez: So, two things. When I was evaluating tools, um, I preferred to work with startups like Typo because they're extremely responsive. If you go to a big company, they're not going to be as responsive and as helpful as a startup is. They change the product to meet your expectations and they work extremely fast. So that's the first thing. Um, the hard part of it is not the technology itself. The technology is easy. The hard part is the people aspect of it. So, if you can implement it early, uh, when your company is growing, that's better, because when new team members come in, they already know what the expectations are and what to expect. The other thing is, um, you need to communicate effectively to your team members why you are using this tool, and get their buy-in for measuring. Some people may not like that you're going to be measuring their commits, their pull requests, their quality, their activity, but if you have a conversation with those people to make them understand the 'why,' and how you can connect their productivity to the business outcomes, I think that goes a long way. And then once you're in place, just listen to your engineers' feedback about the tool and work with the vendor to modify anything to fit your company's needs. Um, a lot of these tools are very cookie-cutter in their approach, um, and have a set of capabilities, but teams are made of people, and people have different needs. So make sure that you capture that feedback, give it to your vendor, and work with them to make the tool work for your specific individual teams.

Kovid Batra: Makes sense. Next question. That's from, uh, Mohd Helmy Ibrahim: Hi Ariel, how do I get my senior management and juniors to adopt project management software in their work, with task tracking and live status updates?

Ariel Pérez: Um, on that one, I'm of two minds, only because I see a lot of organizations that can get really far without actual sophisticated project management tooling. Like, they just use, you know, Linear, and that's it. That's enough. Other places can't live without, you know, a super massive, complex Jira solution with all kinds of things and all kinds of bells and whistles and reports. Um, I think the key piece here that's important (and funny enough, I was literally just having this conversation with my leadership team, my engineering leadership team) is this: when it comes to the folks involved, do you want to spend all day answering questions about where is this thing, how is this thing doing, is this thing going to finish, when is it going to finish? Or do you want to just get on with your work, right? If you want to just get on with your work and actually do the work, rather than talk about the work to other people who don't understand it, you need some level of information radiator. Information radiators are critical, at the minimum, so that other folks can get on the same page, but also so that if someone comes to you and says, hey, where is this thing? Look at the information radiator. It's right there. Where's the status on this? It's on the information radiator. When's this going to be done? Look at the information radiator, right? That's the key piece for me: if you don't want to constantly answer that question (and you will, because people care about the things you're working on; they want to know when they can sell this thing, or they want to know so they can manage their dependencies), you need to have some minimum level of investment in marking status, marking when you think it's going to be done, and marking how it's going. And that's a regular piece. Write it down. It's so much easier to write it down than to answer that question over and over again. And if you write it down in a place where other people can see it and visualize it, even better.

Kovid Batra: Totally makes sense. All right, moving on. Uh, the next question is for Cesar from Saloni. Uh, good to see you here. I have a question around burnout. How do you address burnout or disengagement while pushing for high productivity? Oh, very relevant question, actually.

Cesar Rodriguez: Yeah, so for this one, I actually use Typo as well. Um, Typo has this gauge, um, that tells you, based on the data that it's collecting, whether somebody is working higher than expected or lower than expected. And it gives you an alert saying, hey, this person may be prone to burnout, or this person is burning out. Um, so I use that gauge to detect how the team is doing, and it's always about having a conversation with the individual and seeing what's going on in their lives. There may be, uh, work things that are impacting their productivity. There may be things outside of work that are impacting that individual's productivity. So you have to work around that. Um, it's all about people in the end, um, and working with them, setting the right expectations, and at the same time being accommodating if they're experiencing burnout.

Kovid Batra: Cool. I think, uh, more than myself, uh, you have promoted Typo a lot today. Great, but glad to know that the tool is really helping you and your team. Yeah. Next question. Uh, this one is again for you, Cesar from Nisha. Uh, how do you encourage accountability without micromanaging your team?

Cesar Rodriguez: I think Ariel answered this question, and I take this approach even with my kids. Um, it's not about telling them what to do. It's about listening and helping them learn and come to the same conclusion you're coming to, without forcing your way into it. So yeah, you have to listen to everybody: listen to your stakeholders, listen to your team, and then help them and drive a conversation that can point them in the right direction without forcing them or giving them the answer, which requires a lot of tact.

Ariel Pérez: One more thing I'll add to that, right, so that folks don't forget and think that, you know, we're copping out: hold on, what's your job as a leader? What are you accountable for? Right? In that part, our job is to let them know what's important. It's our job to tell them what is the most important thing, what is the most important thing now, what is the most important thing long term, and repeat that ad nauseam until they make fun of you for it. But they need to understand what's most important, what the strategy is, so you need to provide context. Because there's a piece of, it's almost unfair, and it's actually, I think, a very negative thing, to say 'go figure it out' without telling them, hold on, figure what out? So that's a key piece there as well, right? You're accountable as the leader for telling them what's important, letting them understand why this is important, providing context.

Kovid Batra: Totally. All right. Next one. This one’s for you, Cesar. According to you, what are the most common misconceptions about engineering productivity? How do you address them?

Cesar Rodriguez: So, I think, for me, the biggest thing is people try to come at this with all these new words: DORA, SPACE, uh, whatever the latest and greatest thing is. Um, the biggest thing is that, uh, there's not going to be a cookie-cutter approach. You have to take what works from those frameworks for your specific team in the specific situation of your business right now. And then from there, you have to look at the data and adapt as your team and your business evolve. So that's the biggest misconception for me. Um, you can learn a lot from the things that are out there, but always keep in mind that, um, you have to put that into the context of your current situation.

Kovid Batra: I think, uh, Ariel, I would like to hear you on this one too.

Ariel Pérez: Yeah. Uh, definitely. Um, I think for me, one of the most common misconceptions about engineering productivity as a whole is this idea that engineering is like manufacturing. And for so long, we've applied so many ideas around: look, engineering is all about shipping more code, because, just like in a factory, let's get really good at shipping code and we're going to be great. That's how you measure productivity. Ship more code, just like ship more widgets. How many widgets can I ship per hour? That's a great measure of productivity in a factory. It's a horrible measure of productivity in engineering. And that's because many people, you know, don't realize that engineering, and development in particular, is more R&D than it is actually shipping things. Software development is 99% research and development, 1% actually coding the thing. And if they want any more proof of that: if you have an engineer or a team working on something for three weeks and somehow it all disappears and they lose all of it, how long will it take them to recode the same thing? They'll probably recode the same thing in about a day. So that tells you that most of those three weeks was figuring out the right thing, the right solution, the right piece, and the last piece was just coding it. So I think for me, that's the big misconception about engineering productivity: that it has anything to do with manufacturing. No, it has everything to do with R&D. So if we want to understand how to better measure engineering productivity, look at industries where R&D is a very, very heavy piece of what they do. How do they measure productivity? How do they think about the productivity of their R&D efforts?

Kovid Batra: Cool. Interesting. All right. I think with that, uh, we come to the end of this session. Before we part, uh, I would like to thank both of you for making this session so interesting, so insightful for all of us. And thanks to the audience for bringing up such nice questions. Uh, so finally, before we part, uh, Ariel, Cesar, anything you would say as parting thoughts?

Ariel Pérez: Cesar, you wanna go first?

Cesar Rodriguez: No, no, um, no, no parting thoughts here. Feel free to, anyone that wants to chat more, feel free to hit me up on LinkedIn. Check out stackgen.com if you want to learn about what we do there.

Ariel Pérez: Awesome. Um, for me, uh, in terms of parting thoughts, and this is just how I've personally thought about this: I think if you lean on figuring out what makes people tick, and you try to take your job from the perspective of how do I improve people, how do I enrich people's lives, how do I make them better at what they do every day? If you take it from that perspective, I don't think you can ever go wrong. If you make your people super happy and engaged, and they want to be here, and you're constantly motivating them, building them, and growing them, then as a consequence, the productivity, the outputs, the outcomes, all that stuff will come. I firmly believe that. I've seen it. It would be really hard to argue that with some folks, but I firmly believe it. So that's my parting thought: focus on the people, and what makes them tick and what makes them work, and everything else will fall into place. And, you know, just like Cesar, I can't walk away without plugging Tinybird. Tinybird is, you know, data infrastructure for software teams. If you want to go faster, you want to be more productive, you want to ship solutions faster for your customers, Tinybird is built for that. It helps engineering teams build solutions over analytical data faster than anyone else without adding more people. You can keep your team smaller for longer, because Tinybird helps you get that efficiency, that productivity, out there.

Kovid Batra: Great. Thank you so much guys and all the best for your ventures and for the efforts that you’re doing. Uh, we’ll see you soon again. Thank you.

Cesar Rodriguez: Thanks, Kovid.

Ariel Pérez: Thank you very much. Bye bye.

Cesar Rodriguez: Thank you. Bye!

The Power of GitHub & JIRA Integration

In the ever-changing world of software development, tracking progress and gaining insights into your projects is crucial. While GitHub Analytics provides developers and teams with valuable data-driven intelligence, relying solely on GitHub data may not provide the full picture needed for making informed decisions. By integrating GitHub Analytics with JIRA, engineering teams can gain a more comprehensive view of their development workflows, enabling them to take more meaningful actions.

Why GitHub Analytics Alone is Insufficient

GitHub Analytics offers valuable insights into:

  • Repository Activity: Tracking commits, pull requests and contributor activity within repositories.
  • Collaboration Effectiveness: Evaluating how effectively teams collaborate on code reviews and issue resolution.
  • Workflow Identification: Identifying potential bottlenecks and inefficiencies within the development process.
  • Project Management Support: Providing data-backed insights for improving project management decisions.

However, GitHub Analytics primarily focuses on repository activity and code contributions. It lacks visibility into broader project management aspects such as sprint progress, backlog prioritization, and cross-team dependencies. This limited perspective can hinder a team's ability to understand the complete picture of their development workflow and make informed decisions.

The Power of GitHub & JIRA Integration

JIRA is a widely used platform for issue tracking, sprint planning, and agile project management. When combined with GitHub Analytics, it creates a powerful ecosystem that:

  • Connects Code Changes with Project Tasks and Business Objectives: By linking GitHub commits and pull requests to specific JIRA issues (like user stories, bugs, and epics), teams can understand how their code changes contribute to overall project goals (see the code sketch after this list).
    • Real-World Example: A developer fixes a bug in a specific feature. By linking the GitHub pull request to the corresponding JIRA bug ticket, the team can track the resolution of the issue and its impact on the overall product.
  • Provides Deeper Insights into Development Velocity, Bottlenecks, and Blockers: Analyzing data from both GitHub and JIRA allows teams to identify bottlenecks in the development process that might not be apparent when looking at GitHub data alone.
    • Real-World Example: If a team observes a sudden drop in commit frequency, they can investigate JIRA issues to determine if it's caused by unresolved dependencies, unclear requirements, or other blockers.
  • Enhances Collaboration Between Engineering and Product Management Teams: By providing a shared view of project progress, GitHub and JIRA integration fosters better communication and collaboration between engineering and product management teams.
    • Real-World Example: Product managers can gain insights into the engineering team's progress on specific features by tracking the progress of related JIRA issues and linked GitHub pull requests.
  • Ensures Traceability from Feature Requests to Code Deployments: By linking JIRA issues to GitHub pull requests and ultimately to production deployments, teams can establish clear traceability from initial feature requests to their implementation and release.
    • Real-World Example: A team can track the journey of a feature from its initial conception in JIRA to its final deployment to production by analyzing the linked GitHub commits, pull requests, and deployment information.
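In practice, this linking usually rides on a simple convention: JIRA issue keys (like PROJ-123) embedded in commit messages and pull request titles. As a rough sketch, assuming a JIRA Cloud site, an API token, and pull request objects shaped like the GitHub REST API's (the base URL, credentials, and sample data below are all placeholders), a script could extract keys from PR titles and look up each issue's status:

```python
# Minimal sketch: correlate GitHub pull requests with JIRA issues via issue
# keys embedded in PR titles (e.g., "PROJ-123: fix login bug").
# JIRA_BASE, credentials, and the sample data are placeholders.
import os
import re

import requests

JIRA_BASE = "https://your-company.atlassian.net"  # placeholder
KEY_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # matches keys like PROJ-123

def jira_status(issue_key: str) -> str:
    # JIRA Cloud REST API uses basic auth with account email + API token.
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/3/issue/{issue_key}",
        auth=(os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"]),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["fields"]["status"]["name"]

def link_prs_to_issues(prs: list[dict]) -> None:
    # prs: pull request objects as returned by the GitHub REST API.
    for pr in prs:
        for key in KEY_PATTERN.findall(pr["title"]):
            print(f"PR #{pr['number']} -> {key} ({jira_status(key)})")

# Example with placeholder data:
link_prs_to_issues([{"number": 42, "title": "PROJ-123: fix login bug"}])
```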


More Examples of How JIRA + GitHub Analytics Brings More Insights

  • Tracking Work from Planning to Deployment:
    • Without JIRA: GitHub Analytics shows PR activity and commit frequency but doesn't provide context on whether work is aligned with business goals.
    • With JIRA: Teams can link commits and PRs to specific JIRA tickets, tracking the progress of user stories and epics from the backlog to release, ensuring that development efforts are aligned with business priorities.
  • Identifying Bottlenecks in the Development Process:
    • Without JIRA: GitHub Analytics highlights cycle time, but it doesn't explain why a delay is happening.
    • With JIRA: Teams can analyze blockers within JIRA issues—whether due to unresolved dependencies, pending stakeholder approvals, unclear requirements, or other factors—to pinpoint the root cause of delays and address them effectively.
  • Enhanced Sprint Planning & Resource Allocation:
    • Without JIRA: Engineering teams rely on GitHub metrics to gauge performance but may struggle to connect them with workload distribution.
    • With JIRA: Managers can assess how many tasks remain open versus completed, analyze team workloads, and adjust priorities in real-time to ensure efficient resource allocation and maximize team productivity.
  • Connecting Engineering Efforts to Business Goals:
    • Without JIRA: GitHub Analytics tracks technical contributions but doesn't show their impact on business priorities.
    • With JIRA: Product owners can track how engineering efforts align with strategic objectives by analyzing the progress of JIRA issues linked to key business goals, ensuring that the team is working on the most impactful tasks.

Getting Started with GitHub & JIRA Analytics Integration

Start leveraging the power of integrated analytics with tools like Typo, a dynamic platform designed to optimize your GitHub and JIRA experience. Whether you're working on a startup project or managing an enterprise-scale development team, such tools offer powerful analytics tailored to your specific needs.

How to Integrate GitHub & JIRA with Typo:

  1. Connect Your GitHub and JIRA Accounts: Visit Typo's platform and seamlessly link both tools to establish a unified view of your development data.
  2. Configure Dashboards: Build custom analytics dashboards that track both code contributions (from GitHub) and issue progress (from JIRA) in a single, integrated view.
  3. Analyze Insights Together: Gain deeper insights by analyzing GitHub commit trends alongside JIRA sprint performance, identifying correlations and uncovering hidden patterns within your development workflow.

Conclusion

While GitHub Analytics is a valuable tool for tracking repository activity, integrating it with JIRA unlocks deeper engineering insights, allowing teams to make smarter, data-driven decisions. By bridging the gap between code contributions and project management, teams can improve efficiency, enhance collaboration, and ensure that engineering efforts align with business goals.

Sign Up for Typo’s GitHub & JIRA Analytics Today!

Whether you aim to enhance software delivery, improve team collaboration, or refine project workflows, Typo provides a flexible, data-driven platform to meet your needs.

FAQs

1. How to integrate GitHub with JIRA for better analytics?

  • Utilize native integrations: Some tools offer native integrations between GitHub and JIRA.
  • Leverage third-party apps: Apps like Typo can streamline the integration process and provide advanced analytics capabilities.
  • Utilize APIs: For more advanced integrations, you can utilize the APIs provided by GitHub and JIRA.

2. What are some common challenges in integrating JIRA with Github?

  • Data inconsistency: Ensuring data accuracy and consistency between the two platforms can be challenging.
  • Integration complexity: Setting up and maintaining integrations can sometimes be technically complex.
  • Data overload: Integrating data from both platforms can generate a large volume of data, making it difficult to analyze and interpret.

3. How can I ensure the accuracy of data in my integrated GitHub and JIRA analytics?

  • Establish clear data entry guidelines: Ensure that all team members adhere to consistent data entry practices in both GitHub and JIRA.
  • Regularly review and clean data: Conduct regular data audits to identify and correct any inconsistencies or errors.
  • Utilize data validation rules: Implement data validation rules within JIRA to ensure data accuracy and consistency.

Top Swarmia Alternatives in 2025

In today's fast-paced software development landscape, optimizing engineering performance is crucial for staying competitive. Engineering leaders need a deep understanding of workflows, team velocity, and potential bottlenecks. Engineering intelligence platforms provide valuable insights into software development dynamics, helping to make data-driven decisions. While Swarmia is a well-known player, it might not be the perfect fit for every team. This article explores the top Swarmia alternatives, giving you the knowledge to choose the best platform for your organization's needs. We'll delve into features, benefits, and potential drawbacks to help you make an informed decision.

Understanding Swarmia's Strengths

Swarmia is an engineering intelligence platform designed to improve operational efficiency, developer productivity, and software delivery. It integrates with popular development tools and uses data analytics to provide actionable insights.

Key Functionalities:

  • Data Aggregation: Connects to repositories like GitHub, GitLab, and Bitbucket, along with issue trackers like Jira and Azure DevOps, to create a comprehensive view of engineering activities.
  • Workflow Optimization: Identifies inefficiencies in development cycles by analyzing task dependencies, code review bottlenecks, and other delays.
  • Performance Metrics & Visualization: Presents data through dashboards, offering insights into deployment frequency, cycle time, resource allocation, and other KPIs.
  • Actionable Insights: Helps engineering leaders make data-driven decisions to improve workflows and team collaboration.

Why Consider a Swarmia Alternative?

Despite its strengths, Swarmia might not be ideal for everyone. Here's why you might want to explore alternatives:

  • Limited Customization: May not adapt well to highly specialized or unique workflows.
  • Complex Onboarding: Can have a steep learning curve, hindering quick adoption.
  • Pricing: Can be expensive for smaller teams or organizations with budget constraints.
  • User Interface: Some users find the UI challenging to navigate.

Top 6 Swarmia Competitors: Features, Pros & Cons

Here are six leading alternatives to Swarmia, each with its own unique strengths:

1. Typo

Typo is a comprehensive engineering intelligence platform providing end-to-end visibility into the entire SDLC. It focuses on actionable insights through integration with CI/CD pipelines and issue tracking tools.

Key Features:

  • Unified DORA and engineering metrics dashboard.
  • AI-driven analytics for sprint reviews, pull requests, and development insights.
  • Industry benchmarks for engineering performance evaluation.
  • Automated sprint analytics for workflow optimization.

Pros:

  • Strong tracking of key engineering metrics.
  • AI-powered insights for data-driven decision-making.
  • Responsive user interface and good customer support.

Cons:

  • Limited customization options in existing workflows.
  • Potential for further feature expansion.

G2 Reviews Summary:

G2 reviews indicate decent user engagement with a strong emphasis on positive feedback, particularly regarding customer support.

2. Jellyfish

Jellyfish is an advanced analytics platform that aligns engineering efforts with broader business goals. It gives real-time visibility into development workflows and team productivity, focusing on connecting engineering work to business outcomes.

Key Features:

  • Resource allocation analytics for optimizing engineering investments.
  • Real-time tracking of team performance.
  • DevOps performance metrics for continuous delivery optimization.

Pros:

  • Granular data tracking capabilities.
  • Intuitive user interface.
  • Facilitates cross-team collaboration.

Cons:

  • Can be complex to implement and configure.
  • Limited customization options for tailored insights.

G2 Reviews Summary: 

G2 reviews highlight strong core features but also point to potential implementation challenges, particularly around configuration and customization.


3. LinearB

LinearB is a data-driven DevOps solution designed to improve software delivery efficiency and engineering team coordination. It focuses on data-driven insights, identifying bottlenecks, and optimizing workflows.

Key Features:

  • Workflow visualization for process optimization.
  • Risk assessment and early warning indicators.
  • Customizable dashboards for performance monitoring.

Pros:

  • Extensive data aggregation capabilities.
  • Enhanced collaboration tools.
  • Comprehensive engineering metrics and insights.

Cons:

  • Can have a complex setup and learning curve.
  • High data volume may require careful filtering.

G2 Reviews Summary: 

G2 reviews generally praise LinearB's core features, such as flow management and insightful analytics. However, some users have reported challenges with complexity and the learning curve.

4. Waydev

Waydev is an engineering analytics solution with a focus on Agile methodologies. It provides in-depth visibility into development velocity, resource allocation, and delivery efficiency.

Key Features:

  • Automated engineering performance insights.
  • Agile-based tracking of development velocity and bug resolution.
  • Budgeting reports for engineering investment analysis.

Pros:

  • Highly detailed metrics analysis.
  • Streamlined dashboard interface.
  • Effective tracking of Agile engineering practices.

Cons:

  • Steep learning curve for new users.

G2 Reviews Summary: 

G2 reviews for Waydev are limited, making it difficult to draw definitive conclusions about user satisfaction.


5. Sleuth

Sleuth is a deployment intelligence platform specializing in tracking and improving DORA metrics. It provides detailed insights into deployment frequency and engineering efficiency.

Key Features:

  • Automated deployment tracking and performance benchmarking.
  • Real-time performance evaluation against efficiency targets.
  • Lightweight and adaptable architecture.

Pros:

  • Intuitive data visualization.
  • Seamless integration with existing toolchains.

Cons:

  • Pricing may be restrictive for some organizations.

G2 Reviews Summary: 

G2 reviews for Sleuth are also limited, making it difficult to draw definitive conclusions about user satisfaction.

6. Pluralsight Flow (formerly Git Prime)

Pluralsight Flow provides a detailed overview of the development process, helping identify friction and bottlenecks. It aligns engineering efforts with strategic objectives by tracking DORA metrics, software development KPIs, and investment insights. It integrates with a variety of development tools, such as Azure DevOps and GitLab.

Key Features:

  • Offers insights into why trends occur and potential related issues.
  • Predicts value impact for project and process proposals.
  • Features DORA analytics and investment insights.
  • Provides centralized insights and data visualization.

Pros:

  • Strong core metrics tracking capabilities.
  • Process improvement features.
  • Data-driven insights generation.
  • Detailed metrics analysis tools.
  • Efficient work tracking system.

Cons:

  • Complex and challenging user interface.
  • Issues with metrics accuracy/reliability.
  • Steep learning curve for users.
  • Inefficiencies in tracking certain metrics.
  • Problems with tool integrations.


G2 Reviews Summary: 

The review numbers show moderate engagement (6-12 mentions for pros, 3-4 for cons), placing it between Waydev's limited feedback and Jellyfish's extensive reviews. The feedback suggests strong core functionality but notable usability challenges.

The Power of Integration

Engineering management platforms become even more powerful when they integrate with your existing tools. Seamless integration with platforms like Jira, GitHub, CI/CD systems, and Slack offers several benefits:

  • Out-of-the-box compatibility: Minimizes setup time.
  • Automation: Automates tasks like status updates and alerts.
  • Customization: Adapts to specific team needs and workflows.
  • Centralized Data: Enhances collaboration and reduces context switching.

By leveraging these integrations, software teams can significantly boost productivity and focus on building high-quality products.

Key Considerations for Choosing an Alternative

When selecting a Swarmia alternative, keep these factors in mind:

  • Team Size and Budget: Look for solutions that fit your budget, considering freemium plans or tiered pricing.
  • Specific Needs: Identify your key requirements. Do you need advanced customization, DORA metrics tracking, or a focus on developer experience?
  • Ease of Use: Choose a platform with an intuitive interface to ensure smooth adoption.
  • Integrations: Ensure seamless integration with your current tool stack.
  • Customer Support: Evaluate the level of support offered by each vendor.

Conclusion

Choosing the right engineering analytics platform is a strategic decision. The alternatives discussed offer a range of capabilities, from workflow optimization and performance tracking to AI-powered insights. By carefully evaluating these solutions, engineering leaders can improve team efficiency, reduce bottlenecks, and drive better software development outcomes.

Issue Cycle Time: The Key to Engineering Operations

Software teams relentlessly pursue rapid, consistent value delivery. Yet, without proper metrics, this pursuit becomes directionless. 

While engineering productivity is a combination of multiple dimensions, issue cycle time acts as a critical indicator of team efficiency. 

Simply put, this metric reveals how quickly engineering teams convert requirements into deployable solutions. 

By understanding and optimizing issue cycle time, teams can accelerate delivery and enhance the predictability of their development practices. 

In this guide, we discuss cycle time's significance and provide actionable frameworks for measurement and improvement. 

What is the Issue Cycle Time? 

Issue cycle time measures the duration between when work actively begins on a task and its completion. 

This metric specifically tracks the time developers spend actively working on an issue, excluding external delays or waiting periods. 

Unlike lead time, which includes all elapsed time from issue creation, cycle time focuses purely on active development effort. 

Core Components of Issue Cycle Time 

  • Work Start Time: When a developer transitions the issue to "in progress" and begins active development 
  • Development Duration: Time spent writing, testing, and refining code 
  • Review Period: Time in code review and iteration based on feedback 
  • Testing Phase: Duration of QA verification and bug fixes 
  • Work Completion: Final approval and merge of changes into the main codebase 

Understanding these components allows teams to identify bottlenecks and optimize their development workflow effectively. 
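To make the definition concrete, here is a minimal sketch of computing cycle time from an issue's state-transition history, counting only time spent in active states. The state names and timestamps are illustrative, not tied to any particular tracker:

```python
# Minimal sketch: compute active cycle time from chronological state transitions.
# ACTIVE_STATES and the sample history are illustrative assumptions.
from datetime import datetime

ACTIVE_STATES = {"In Progress", "In Review", "In QA"}

def cycle_time_hours(transitions: list[tuple[str, str, str]]) -> float:
    """transitions: (ISO timestamp, from_state, to_state), oldest first."""
    total = 0.0
    entered_active = None
    for ts, _from_state, to_state in transitions:
        t = datetime.fromisoformat(ts)
        if to_state in ACTIVE_STATES and entered_active is None:
            entered_active = t          # work (re)starts
        elif to_state not in ACTIVE_STATES and entered_active is not None:
            total += (t - entered_active).total_seconds() / 3600
            entered_active = None       # work pauses or finishes
    return total

# Example: started, blocked for a while, resumed, then done.
print(cycle_time_hours([
    ("2025-03-03T09:00", "To Do", "In Progress"),
    ("2025-03-04T17:00", "In Progress", "Blocked"),
    ("2025-03-05T09:00", "Blocked", "In Progress"),
    ("2025-03-06T15:00", "In Progress", "Done"),
]))  # 62.0 hours of active time
```

Excluding the blocked interval is what distinguishes this from lead time, which would count the full elapsed span.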

Why Does Issue Cycle Time Matter? 

Here’s why you must track issue cycle time: 

Impact on Productivity 

Issue cycle time directly correlates with team output capacity. Shorter cycle times allow teams to complete more work within fixed timeframes, keeping resource utilization at its peak. This accelerated delivery cadence compounds over time, allowing teams to tackle more strategic initiatives rather than getting bogged down in prolonged development cycles. 

Identifying Bottlenecks 

By tracking cycle time metrics, teams can pinpoint specific stages where work stalls. This reveals process inefficiencies, resource constraints, or communication gaps that break flow. Data-driven bottleneck identification allows targeted process improvements rather than speculative changes. 

Enhanced Collaboration 

Rapid cycle times help build tighter feedback loops between developers, reviewers, and stakeholders. When issues move quickly through development stages, teams maintain context and momentum. When collaboration is streamlined, handoff friction is reduced. And there’s no knowledge loss between stages, either. 

Better Predictability 

Consistent cycle times help in reliable sprint planning and release forecasting. Teams can confidently estimate delivery dates based on historical completion patterns. This predictability helps align engineering efforts with business goals and improves cross-functional planning. 

Customer Satisfaction 

Quick issue resolution directly impacts user experience. When teams maintain efficient cycle times, they can respond quickly to customer feedback and deliver improvements more frequently. This responsiveness builds trust and strengthens customer relationships. 

3 Phases of Issue Cycle Time 

The development process is a journey that can be summed up in three phases. Let’s break these phases down: 

Phase 1: Ticket Creation to Work Start

The initial phase includes critical pre-development activities that significantly impact overall cycle time. This period begins when a ticket enters the backlog and ends when active development starts.

Teams often face delays in ticket assignment due to unclear prioritization frameworks or manual routing processes. Resource misallocation is a frequent culprit when assignment procedures lack automation. 

Implementing automated ticket routing and standardized prioritization matrices can substantially reduce initial delays. 
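As a rough illustration of what automated routing can look like, the sketch below maps ticket labels to owning teams and scores priority from a simple matrix; the labels, teams, and weights are invented for the example:

```python
# Minimal sketch: rule-based ticket routing plus a simple prioritization score.
# The routing table, severity weights, and sample ticket are illustrative.
ROUTING = {"payments": "team-payments", "auth": "team-identity"}
SEVERITY_SCORE = {"critical": 3, "major": 2, "minor": 1}

def route(ticket: dict) -> tuple[str, int]:
    # First matching label wins; unmatched tickets fall back to a triage queue.
    team = next((ROUTING[label] for label in ticket["labels"] if label in ROUTING),
                "triage")
    # Score = severity weight, bumped if the issue is customer-facing.
    score = SEVERITY_SCORE.get(ticket["severity"], 1)
    if ticket.get("customer_facing"):
        score += 2
    return team, score

print(route({"labels": ["auth"], "severity": "major", "customer_facing": True}))
# ('team-identity', 4)
```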

Phase 2: Active Work Period

The core development phase represents the most resource-intensive segment of the cycle. Development time varies based on complexity, dependencies, and developer expertise. 

Common delay factors are:

  • External system dependencies blocking progress
  • Knowledge gaps requiring additional research
  • Ambiguous requirements necessitating clarification
  • Technical debt increasing implementation complexity

Success in this phase demands precise requirement documentation, proactive dependency management, and established escalation paths. Teams should maintain living documentation and implement pair programming for complex tasks. 

Phase 3: Resolution to Closure

The final phase covers all post-development activities required for production deployment. 

This stage often becomes a significant bottleneck due to: 

  • Sequential review processes
  • Manual quality assurance procedures
  • Multiple approval requirements
  • Environment-specific deployment constraints 

How can this be optimized? By: 

  • Implementing parallel review tracks
  • Automating test execution
  • Establishing service-level agreements for reviews
  • Creating self-service deployment capabilities

Each phase comes with many optimization opportunities. Teams should measure phase-specific metrics to identify the highest-impact improvement areas. Regular analysis of phase durations allows targeted process refinement, which is critical to maintaining software engineering efficiency. 

How to Measure and Analyze Issue Cycle Time 

Effective cycle time measurement requires the right tools and systematic analysis approaches. Businesses must establish clear frameworks for data collection, benchmarking, and continuous monitoring to derive actionable insights. 

Here’s how you can measure issue cycle time: 

Metrics and Tools 

Modern development platforms offer integrated cycle time tracking capabilities. Tools like Typo automatically capture timing data across workflow states. 

These platforms provide comprehensive dashboards displaying velocity trends, bottleneck indicators, and predictability metrics. 

Integration with version control systems enables correlation between code changes and cycle time patterns. Advanced analytics features support custom reporting and team-specific performance views. 

Establishing Benchmarks 

Benchmark definition requires contextual analysis of team composition, project complexity, and delivery requirements. 

Start by calculating your team's current average cycle time across different issue types. Factor in: 

  • Team size and experience levels 
  • Technical complexity categories 
  • Historical performance patterns 
  • Industry standards for similar work 

The right approach is to define acceptable ranges rather than fixed targets. Consider setting graduated improvement goals: 10% reduction in the first quarter, 25% by year-end. 

Using Visualizations 

Data visualization converts raw metrics into actionable insights. Cycle time scatter plots show completion patterns and outliers. Cumulative flow diagrams can also be used to show work-in-progress limits and flow efficiency. Control charts track stability and process improvements over time. 

Ideally, businesses should implement: 

  • Weekly trend analysis 
  • Percentile distribution charts 
  • Work-type segmentation views 
  • Team comparison dashboards 

By implementing these visualizations, businesses can identify bottlenecks and optimize workflows for greater engineering productivity. 
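As one example, here is a minimal matplotlib sketch of the cycle time scatter plot mentioned above, with an 85th-percentile guide line. The data is randomly generated purely for illustration; real values would come from your issue tracker:

```python
# Minimal sketch: cycle time scatter plot with an 85th-percentile guide line.
# The sample data is synthetic; replace it with per-issue cycle times.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(7)
completed_day = np.arange(1, 31)                           # day each issue was closed
cycle_times = rng.lognormal(mean=1.2, sigma=0.5, size=30)  # days per issue

p85 = np.percentile(cycle_times, 85)

plt.scatter(completed_day, cycle_times, alpha=0.7)
plt.axhline(p85, linestyle="--", label=f"85th percentile: {p85:.1f} days")
plt.xlabel("Completion day")
plt.ylabel("Cycle time (days)")
plt.title("Issue cycle time scatter")
plt.legend()
plt.show()
```

A team reading this chart would treat points far above the percentile line as outliers worth a retrospective conversation.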

Regular Reviews 

Establish structured review cycles at multiple organizational levels. These could be: 

  • Weekly team retrospectives that examine cycle time trends and identify immediate optimization opportunities. 
  • Monthly department reviews that analyze cross-team patterns and resource allocation impacts. 
  • Quarterly organizational assessments that evaluate systemic issues and strategic improvements. 

These reviews should be templatized and consistent. The idea is to focus on: 

  • Trend analysis 
  • Bottleneck identification 
  • Process modification results 
  • Team feedback integration 

Best Practices to Optimize Issue Cycle Time 

Focus on the following proven strategies to enhance workflow efficiency while maintaining output quality: 

  1. Automate Repetitive Tasks: Use automation for code testing, deployment, and issue tracking. Implement CI/CD pipelines and automated code review tools to eliminate manual handoffs. 
  2. Adopt Agile Methodologies: Implement Scrum or Kanban frameworks with clear sprint cycles or workflow stages. Maintain structured ceremonies and consistent delivery cadences. 
  3. Limit Work-in-Progress (WIP): Set strict WIP limits per development stage to reduce context switching and prevent resource overallocation. Monitor queue lengths to maintain steady progress (see the sketch below). 
  4. Conduct Daily Standups: Hold focused standup meetings to identify blockers early, track issue age, and enable immediate escalation for unresolved tasks. 
  5. Ensure Comprehensive Documentation: Maintain up-to-date technical specifications and acceptance criteria to reduce miscommunication and streamline issue resolution. 
  6. Cross-Train Team Members: Build versatile skill sets within the team to minimize dependencies on single individuals and allow flexible resource allocation. 
  7. Streamline Review Processes: Implement parallel review tracks, set clear review time SLAs, and automate style and quality checks to accelerate approvals. 
  8. Leverage Collaboration Tools: Use integrated development platforms and real-time communication channels to ensure seamless coordination and centralized knowledge sharing. 
  9. Track and Analyze Key Metrics: Monitor performance indicators daily with automated reports to identify trends, spot inefficiencies, and take corrective action. 
  10. Host Regular Retrospectives: Conduct structured reviews to analyze cycle time patterns, gather feedback, and implement continuous process improvements. 

By consistently applying these best practices, engineering teams can reduce delays and optimize issue cycle time for sustained success.
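
To ground point 3 above, here is a minimal sketch of a WIP-limit check. The stage names and limits are assumptions; in practice the ticket counts would be pulled from your issue tracker.

```python
# Minimal sketch: compare per-stage ticket counts against agreed WIP limits.
# Stage names, limits, and counts are illustrative.
wip_limits = {"in_progress": 3, "review": 2, "qa": 2}

board = {  # current ticket counts per stage, e.g. pulled from your tracker
    "in_progress": 5,
    "review": 2,
    "qa": 1,
}

for stage, limit in wip_limits.items():
    count = board.get(stage, 0)
    status = "OVER LIMIT" if count > limit else "ok"
    print(f"{stage}: {count}/{limit} {status}")
```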

Real-life Example of Optimizing Issue Cycle Time

A mid-sized fintech company with 40 engineers faced persistent delivery delays despite having talented developers. Their average issue cycle time had grown to 14 days, creating mounting pressure from stakeholders and frustration within the team.

After analyzing their workflow data, they identified three critical bottlenecks:

Code Review Congestion: Senior developers were becoming bottlenecks with 20+ reviews in their queue, causing delays of 3-4 days for each ticket.

Environment Stability Issues: Inconsistent test environments led to frequent deployment failures, adding an average of 2 days to cycle time.

Unclear Requirements: Developers spent approximately 30% of their time seeking clarification on ambiguous tickets.

The team implemented a structured optimization approach:

Phase 1: Baseline Establishment (2 weeks)

  • Documented current workflow states and transition times
  • Calculated baseline metrics for each cycle time component
  • Surveyed team members to identify perceived pain points

Phase 2: Targeted Interventions (8 weeks)

  • Implemented a "review buddy" system that paired developers and established a maximum 24-hour review SLA
  • Standardized development environments using containerization
  • Created a requirement template with mandatory fields for acceptance criteria
  • Set WIP limits of 3 items per developer to reduce context switching

Phase 3: Measurement and Refinement (Ongoing)

  • Established weekly cycle time reviews in team meetings
  • Created dashboards showing real-time metrics for each workflow stage
  • Implemented a continuous improvement process where any team member could propose optimization experiments

Results After 90 Days:

  • Overall cycle time reduced from 14 days to 5.5 days (60% improvement)
  • Code review turnaround decreased from 72 hours to 16 hours
  • Deployment success rate improved from 65% to 94%
  • Developer satisfaction scores increased by 40%
  • On-time delivery rate rose from 60% to 87%

The most significant insight came from breaking down the cycle time improvements by phase: while the initial automation efforts produced quick wins, the team culture changes around WIP limits and requirement clarity delivered the most substantial long-term benefits.

This example demonstrates that effective cycle time optimization requires both technical solutions and process refinements. The fintech company continues to monitor its metrics, making incremental improvements that maintain its enhanced velocity without sacrificing quality or team wellbeing.

Conclusion 

Issue cycle time directly impacts development velocity and team productivity. By tracking and optimizing this metric, teams can deliver value faster. 

Typo's real-time issue tracking combined with AI-powered insights automates improvement detection and suggests targeted optimizations. Our platform allows teams to maintain optimal cycle times while reducing manual overhead. 

Ready to accelerate your development workflow? Book a demo today!

DevEx

10 Best Developer Experience (DevEx) Tools in 2025

Developer Experience (DevEx) is essential for boosting productivity, collaboration, and overall efficiency in software development. The right DevEx tools streamline workflows, provide actionable insights, and enhance code quality.

We’ve explored the 10 best Developer Experience tools in 2025, highlighting their key features and limitations to help you choose the best fit for your team.

Key Features to Look For in DevEx Tools 

Integrated Development Environment (IDE) Plugins

The DevEx tool must offer IDE plugins that enhance the coding environment with syntax highlighting, code completion, and error detection. The plugins must also allow integration with external tools directly from the IDE and support multiple programming languages for versatility. 

Collaboration Features

The tools must promote teamwork through seamless collaboration features such as shared workspaces, real-time editing capabilities, and in-context discussions. These features facilitate better communication among teams and improve project outcomes. 

Developer Insights and Analytics

The Developer Experience tool could also offer insights into developer performance through quantitative metrics such as deployment frequency and planning accuracy. This helps engineering leaders understand the developer experience holistically. 

Feedback Loops 

Developers need timely feedback to keep the software process efficient. Hence, ensure that the tool empowers teams to exchange feedback through mechanisms such as real-time feedback, code quality analysis, or live updates that show the effect of changes immediately. 

Impact on Productivity

Evaluate how the tool affects workflow efficiency and developers’ productivity. Assess it based on whether it reduces time spent on repetitive tasks or facilitates easier collaboration. Analyzing these factors can help gauge the tool's potential impact on productivity. 

Top 10 Developer Experience Tools 

Typo 

Typo is an intelligent engineering management platform for gaining visibility, removing blockers, and maximizing developer effectiveness. It captures a 360-degree view of the developer experience and uncovers real issues. It surfaces early indicators of developer well-being and actionable insights on the areas that need attention through signals from work patterns and continuous AI-driven pulse check-ins. Typo also sends automated alerts to identify burnout signs in developers at an early stage. It integrates seamlessly with third-party applications such as Git, Slack, Calendars, and CI/CD tools.

GetDX

GetDX is a comprehensive insights platform founded by researchers behind the DORA and SPACE frameworks. It offers both qualitative and quantitative measures to give a holistic view of the organization. GetDX breaks down results by persona and streamlines developer onboarding with real-time insights. 

Key Features

  • Provides a suite of tools that capture data from surveys and systems in real time.
  • Contextualizes performance with 180,000+ industry benchmark samples.
  • Uses advanced statistical analysis to identify the top opportunities.

Limitations 

  • GetDX’s frequent updates and new features can disrupt the user experience and confuse teams. 
  • New managers often face a steep learning curve. 
  • Users managing multiple teams face difficulties configuring and managing team data. 

Jellyfish 

Jellyfish is a developer experience platform that combines developer-reported insights with system metrics. It captures qualitative and quantitative data to provide a complete picture of the development ecosystem and identify bottlenecks. Jellyfish can be seamlessly integrated with survey tools or use sentiment analysis to gather direct feedback from developers. 

Key Features

  • Enables continuous feedback loops and rapid response to developer needs.
  • Allows teams to track effort without time tracking. 
  • Tracks team health metrics such as code churn and pull request review times. 

Limitations

  • Problems integrating with popular tools like Jira and Okta complicate the initial setup process and affect the overall user experience.
  • Absence of an API restricts users from exporting metrics for further analysis in other systems. 
  • Overlooks important aspects of developer productivity by emphasizing throughput over qualitative metrics. 

LinearB

LinearB is a software delivery intelligence platform that provides engineering teams with data-driven insights, automation capabilities, and full visibility and control over developer experience and productivity. LinearB also helps teams focus on the most important aspects of coding to speed up project delivery. 

Key Features

  • Automates routine tasks and processes to reduce manual effort and cognitive load. 
  • Offers visibility into team workload and capacity. 
  • Helps maximize DevOps groups’ efficiency with various metrics.

Limitations 

  • Teams that do not use Git-based workflows may find that many of the features are not applicable or useful to their processes.
  • Lacks comprehensive historical data or external benchmarks.
  • Needs to rely on separate tools for comprehensive project tracking and management. 

GitHub Copilot 

GitHub Copilot was developed by GitHub in collaboration with OpenAI. It uses the OpenAI Codex to write code, test cases, and code comments quickly. It draws context from the code and suggests whole lines or complete functions that developers can accept, modify, or reject. GitHub Copilot can generate code in multiple languages, including TypeScript, JavaScript, and C++. 

Key Features

  • Creates predictive lines of code from comments and existing patterns in the code.
  • Seamlessly integrates with popular editors such as Neovim, JetBrains IDEs, and Visual Studio.
  • Creates dictionaries of lookup data. 

Limitations 

  • Struggles to fully grasp the context of complex coding tasks or specific project requirements.
  • Less experienced developers may become overly reliant on Copilot for coding tasks.
  • Can be costly for smaller teams. 

Postman 

Postman is a widely used API test automation tool. It provides a streamlined process for standardizing API testing and monitoring APIs for usage and trend insights. The tool provides a collaborative environment for designing APIs using specifications like OpenAPI and a robust testing framework for ensuring API functionality and reliability. 

Key Features

  • Enables users to mimic real-world scenarios and assess API behavior under various conditions.
  • Creates mock servers, and facilitates realistic simulations and comprehensive testing.
  • Auto-generates documentation to make APIs easily understandable and accessible.

Limitations 

  • The user interface is not beginner-friendly. 
  • Heavy reliance on Postman may create challenges when migrating workflows to other tools or platforms.
  • More suitable for manual testing than for automated testing. 

Sourcegraph 

Sourcegraph is an AI-powered code assistant that provides code-specific information and helps locate precise code based on natural language descriptions, file names, or function names. 

It improves the developer experience by simplifying the development process in intricate enterprise environments. 

Key Features

  • Explains complex lines of code in simple language.
  • Identifies bugs and errors in a codebase and provides suggestions.
  • Offers documentation generation.

Limitations

  • Doesn’t support creating insights over specific branches or revisions.
  • Codebase size and project complexity may impact performance.
  • Certain features are only available when running insights over all repositories. 

Code Climate Velocity 

Code Climate Velocity is an engineering intelligence platform that provides leaders with customized solutions based on data-driven insights. Teams using Code Climate Velocity follow a three-step approach: a diagnostic workshop with Code Climate experts, a personalized dashboard with insight reports, and a customized action plan tailored to their business.

Key Features

  • Seamlessly integrates with developer tools such as Jira, GitLab, and Bitbucket. 
  • Supports long-term strategic planning and process improvement efforts.
  • Offers insights tailored for managers to help them understand team dynamics and individual contributions.

Limitations

  • Relies heavily on the quality and comprehensiveness of the data it analyzes.
  • Overlooks qualitative aspects of software development, such as team collaboration, creativity, and problem-solving skills.
  • Offers limited customization options.

Vercel 

Vercel is a cloud platform that gives frontend developers space to focus on coding and innovation. It simplifies the entire lifecycle of web applications by automating the entire deployment pipeline. Vercel has collaborative features such as preview environments to help iterate quickly while maintaining high code quality. 

Key Features

  • Applications can be deployed directly from their Git repositories. 
  • Includes pre-built templates to jumpstart the app development process.
  • Allows developers to create APIs without managing traditional backend infrastructure.

Limitations

  • Projects hosted on Vercel may rely on various third-party services for functionality, which can impact the performance and reliability of applications. 
  • Limited features available with the free version. 
  • Lacks robust documentation and support resources.

Qovery 

Qovery is a cloud deployment platform that simplifies the deployment and management of applications. 

It automates essential tasks such as server setup, scaling, and configuration management, allowing developers to prioritize faster time to market instead of handling infrastructure.

Key Features

  • Supports the creation of ephemeral environments for testing and development. 
  • Scales applications automatically on demand.
  • Includes built-in security measures such as multi-factor authentication and fine-grained access controls. 

Limitations

  • Occasionally experiences minor bugs.
  • Can be overwhelming for those new to cloud and DevOps.
  • Deployment times may be slow.

Conclusion 

We’ve curated the best Developer Experience tools for you in 2025. Feel free to explore other options as well. Make sure to do your own research and choose what fits best for you.

All the best!

CTO’s Guide to Software Engineering Efficiency

As a CTO, you often face a dilemma: should you prioritize efficiency or effectiveness? It’s a tough call. 

Engineering efficiency ensures your team delivers quickly and with fewer resources. On the other hand, effectiveness ensures those efforts create real business impact. 

So choosing one over the other is definitely not the solution. 

That’s why we came up with this guide to software engineering efficiency. 

Defining Software Engineering Efficiency 

Software engineering efficiency is the intersection of speed, quality, and cost. It’s not just about how quickly code ships or how flawless it is; it’s about delivering value to the business while optimizing resources. 

True efficiency is when engineering outputs directly contribute to achieving strategic business goals—without overextending timelines, compromising quality, or overspending. 

A holistic approach to efficiency means addressing every layer of the engineering process. It starts with streamlining workflows to minimize bottlenecks, adopting tools that enhance productivity, and setting clear KPIs for code quality and delivery timelines. 

As a CTO, to architect this balance, you need to foster collaboration between cross-functional teams, define clear metrics for efficiency, and ensure that resource allocation prioritizes high-impact initiatives. 

Establishing Tech Governance 

Tech governance refers to the framework of policies, processes, and standards that guide how technology is used, managed, and maintained within an organization. 

For CTOs, it’s the backbone of engineering efficiency, ensuring consistency, security, and scalability across teams and projects. 

Here’s why tech governance is so important: 

  • Standardization: Promotes uniformity in tools, processes, and coding practices.
  • Risk Mitigation: Reduces vulnerabilities by enforcing compliance with security protocols.
  • Operational Efficiency: Streamlines workflows by minimizing ad-hoc decisions and redundant efforts.
  • Scalability: Prepares systems and teams to handle growth without compromising performance.
  • Transparency: Provides clarity into processes, enabling better decision-making and accountability.

For engineering efficiency, tech governance should focus on three core categories: 

1. Configuration Management

Configuration management is foundational to maintaining consistency across systems and software, ensuring predictable performance and behavior. 

It involves rigorously tracking changes to code, dependencies, and environments to eliminate discrepancies that often cause deployment failures or bugs. 

Using tools like Git for version control, Terraform for infrastructure configurations, or Ansible for automation ensures that configurations are standardized and baselines are consistently enforced. 

This approach not only minimizes errors during rollouts but also reduces the time required to identify and resolve issues, thereby enhancing overall system reliability and deployment efficiency. 
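
To illustrate the core idea, here is a minimal sketch of drift detection: diffing a desired configuration baseline against an observed environment. Tools like Terraform and Ansible do this at scale; the keys and values here are illustrative.

```python
# Minimal sketch: detect configuration drift between a desired baseline
# and the observed environment. Keys and values are illustrative.
desired = {"node_version": "20.11", "replicas": 3, "log_level": "info"}
observed = {"node_version": "18.19", "replicas": 3, "log_level": "debug"}

drift = {
    key: (desired[key], observed.get(key))
    for key in desired
    if observed.get(key) != desired[key]
}

for key, (want, have) in drift.items():
    print(f"drift in {key}: expected {want!r}, found {have!r}")
```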

2. Infrastructure Management 

Infrastructure management focuses on effectively provisioning and maintaining the physical and cloud-based resources that support software engineering operations. 

The adoption of Infrastructure as Code (IaC) practices allows teams to automate resource provisioning, scaling, and configuration updates, ensuring infrastructure remains agile and cost-effective. 

Advanced monitoring tools like Typo provide real-time SDLC insights, enabling proactive issue resolution and resource optimization. 

By automating repetitive tasks, infrastructure management frees engineering teams to concentrate on innovation rather than maintenance, driving operational efficiency at scale. 

3. Frameworks for Deployment 

Frameworks for deployment establish the structured processes and tools required to release code into production environments seamlessly. 

A well-designed CI/CD pipeline automates the stages of building, testing, and deploying code, ensuring that releases are both fast and reliable. 

Additionally, rollback mechanisms safeguard against potential issues during deployment, allowing for quick restoration of stable environments. This streamlined approach reduces downtime, accelerates time-to-market, and fosters a collaborative engineering culture. 

Together, these deployment frameworks enhance software delivery and ensure that systems remain resilient under changing business demands. 

By focusing on these tech governance categories, CTOs can build a governance model that maximizes efficiency while aligning engineering operations with strategic objectives. 

Balancing Business Impact and Engineering Productivity 

If your engineering team’s efforts don’t align with key objectives like revenue growth, customer satisfaction, or market positioning, you’re not doing justice to your organization. 

To ensure alignment, focus on building features that solve real problems, not just “cool” additions. 

1. Chase value addition, not cool features 

Rather than developing flashy tools that don’t address user needs, prioritize features that improve user experience or address pain points. This prevents your engineering team from being consumed by tasks that don’t add value and keeps their efforts laser-focused on meeting demand. 

2. Decision-making is a crucial factor 

You need to know when to prioritize speed over quality or vice versa. For example, during a high-stakes product launch, speed might be crucial to seize market opportunities. However, if a feature underpins critical infrastructure, you’d prioritize quality and scalability to avoid long-term failures. Balancing these decisions requires clear communication and understanding of business priorities. 

3. Balance innovation and engineering efficiency 

Encourage your team to explore new ideas, but within a framework that ensures tangible outcomes. Innovation should drive value, not just technical novelty. This approach ensures every project contributes meaningfully to the organization’s success. 

Communicating Efficiency to the CEO and Board 

If you’re at a company where the CEO doesn’t come from a technical background, you will face some communication challenges. There will always be questions about why new features are not being shipped despite having a good number of software engineers. 

What you should focus on is giving the stakeholders insights into how the engineering headcount is being utilized. 

1. Reporting Software Engineering Efficiency 

Instead of presenting granular task lists, focus on providing a high-level summary of accomplishments tied to business objectives. For example, show the percentage of technical debt reduced, the cycle time improvements, or the new features delivered and their impact on customer satisfaction or revenue. 

Include visualizations like charts or dashboards to offer a clear, data-driven view of progress. Highlight key milestones, ongoing priorities, and how resources are being allocated to align with organizational goals. 

2. Translating Technical Metrics into Business Language

Board members and CEOs may not resonate with terms like “code churn” or “defect density,” but they understand business KPIs like revenue growth, customer retention, and market expansion. 

For instance, instead of saying, “We reduced bug rate by 15%,” explain, “Our improvements in code quality have resulted in a 10% reduction in downtime, enhancing user experience and supporting retention.” 

3. Building Trust Through Transparency

Trust is built when you are upfront about trade-offs, challenges, and achievements. 

For example, if you chose to delay a feature release to improve scalability, explain the rationale: “While this slowed our time-to-market, it prevents future bottlenecks, ensuring long-term reliability.” 

4. Framing Discussions Around ROI and Risk Management

Frame engineering decisions in terms of ROI, risk mitigation, and long-term impact. For example, explain how automating infrastructure saves costs in the long run or how adopting robust CI/CD practices reduces deployment risks. Linking these outcomes to strategic goals ensures the board sees technology investments as valuable, forward-thinking decisions that drive sustained business growth. 

Build vs. Buy Decisions 

Deciding whether to build a solution in-house or purchase off-the-shelf technology is crucial for maintaining software engineering efficiency. Here’s what to take into account: 

1. Cost Considerations 

From an engineering efficiency standpoint, building in-house often requires significant engineering hours that could be spent on higher-value projects. The direct costs include developer time, testing, and ongoing maintenance. Hidden costs like delays or knowledge silos can also reduce operational efficiency. 

Conversely, buying off-the-shelf technology allows immediate deployment and support, freeing the engineering team to focus on core business challenges. 

However, it’s crucial to evaluate licensing and customization costs to ensure they don’t create inefficiencies later. 

2. Strategic Alignment 

For software engineering efficiency, the choice must align with broader business goals. Building in-house may be more efficient if it allows your team to streamline unique workflows or gain a competitive edge. 

However, if the solution is not central to your business’s differentiation, buying ensures the engineering team isn’t bogged down by unnecessary development tasks, maintaining their focus on high-impact initiatives. 

3. Scalability, Flexibility, and Integration 

An efficient engineering process requires solutions that scale with the business, integrate seamlessly into existing systems, and adapt to future needs. 

While in-house builds offer customization, they can overburden teams if integration or scaling challenges arise. 

Off-the-shelf solutions, though less flexible, often come with pre-tested scalability and integrations, reducing friction and enabling smoother operations. 

Key Metrics CTOs Should Measure for Software Engineering Efficiency 

While the CTO’s role is rooted in shaping the company’s vision and direction, it also requires ensuring that software engineering teams maintain high productivity. 

Here are some of the metrics you should keep an eye on: 

1. Cycle Time 

Cycle time measures how long it takes to move a feature or task from development to deployment. A shorter cycle time means faster iterations, enabling quicker feedback loops and faster value delivery. Monitoring this helps identify bottlenecks and improve development workflows. 

2. Lead Time 

Lead time tracks the duration from ideation to delivery. It encompasses planning, design, development, and deployment phases. A long lead time might indicate inefficiencies in prioritization or resource allocation. By optimizing this, CTOs ensure that the team delivers what matters most to the business in a timely manner.

3. Velocity 

Velocity measures how much work a team completes in a sprint or milestone. This metric reflects team productivity and helps forecast delivery timelines. Consistent or improving velocity is a strong indicator of operational efficiency and team stability.

4. Bug Rate and Defect Density

Bug rate and defect density assess the quality and reliability of the codebase. High values indicate a need for better testing or development practices. Tracking these ensures that speed doesn’t come at the expense of quality, which can lead to technical debt.

5. Code Churn 

Code churn tracks how often code changes after the initial commit. Excessive churn may signal unclear requirements or poor initial implementation. Keeping this in check ensures efficiency and reduces rework. 
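
To make two of these metrics concrete, here is a minimal sketch with illustrative inputs; real pipelines would pull the raw numbers from the issue tracker and version control history.

```python
# Minimal sketch: defect density and code churn from illustrative inputs.
def defect_density(defects: int, kloc: float) -> float:
    """Defects per thousand lines of code."""
    return defects / kloc

def churn_rate(lines_changed_post_commit: int, total_lines_written: int) -> float:
    """Share of written code that was later rewritten or deleted."""
    return lines_changed_post_commit / total_lines_written

print(f"defect density: {defect_density(18, 42.5):.2f} defects/KLOC")
print(f"code churn:     {churn_rate(1_200, 8_000):.0%}")
```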

By selecting and monitoring these metrics, you can align engineering outcomes with strategic objectives while building a culture of accountability and continuous improvement. 

Conclusion 

The CTO plays a crucial role in driving software engineering efficiency, balancing technical execution with business goals. 

By focusing on key metrics, establishing strong governance, and ensuring that engineering efforts align with broader company objectives, CTOs help maximize productivity while minimizing waste. 

A balanced approach to decision-making—whether prioritizing speed or quality—ensures both immediate impact and long-term scalability. 

Effective CTOs deliver efficiency through clear communication, data-driven insights, and the ability to guide engineering teams toward solutions that support the company’s strategic vision. 

What is Developer Experience?

Let’s take a look at the situation below: 

You are driving a high-performance car, but the controls are clunky, the dashboard is confusing, and the engine constantly overheats. 

Frustrating, right? 

When developers work in a similar environment, dealing with inefficient tools, unclear processes, and a lack of collaboration, it leads to decreased morale and productivity. 

Just as a smooth, responsive driving experience makes all the difference on the road, a seamless Developer Experience (DX) is essential for developer teams.

DX isn't just a buzzword; it's a key factor in how developers interact with their work environments and produce innovative solutions. In this blog, let’s explore what Developer Experience truly means and why it is crucial for developers. 

What is Developer Experience? 

Developer Experience, commonly known as DX, is the overall quality of developers’ interactions with their work environment. It encompasses tools, processes, and organizational culture. It aims to create an environment where developers can work efficiently, stay focused, and produce high-quality code with minimal friction. 

Why Does Developer Experience Matter? 

Developer Experience is a critical factor in enhancing organizational performance and innovation. It matters because:

Boosts Developer Productivity 

When developers have access to intuitive tools, clear documentation, and streamlined workflows, they can complete tasks more quickly and focus on core activities. This leads to faster development cycles and improved efficiency, as developers can connect emotionally with their work. 

As per Gartner’s report, Developer Experience is the key indicator of developer productivity.

High Product Quality 

A positive developer experience leads to improved code quality and higher-quality work, which increases customer satisfaction and decreases defects in software products. DX also fosters effective communication and collaboration, which reduces cognitive load among developers and makes it easier to implement best practices thoroughly. 

Talent Attraction and Retention 

A positive work environment appeals to skilled developers and retains top talent. When the organization supports developers’ creativity and innovation, it significantly reduces turnover rates. Moreover, when developers feel psychologically safe to express ideas and take risks, they are more likely to stay with the organization for the long run. 

Enhances Developer Morale 

When developers feel empowered and supported at their workplace, they are more likely to be engaged with their work. This further leads to high morale and job satisfaction. When organizations minimize common pain points, developers encounter fewer obstacles, allowing them to focus more on productive tasks rather than tedious ones.

Competitive Advantage 

Organizations with positive developer experiences often gain a competitive edge in the market. Enabling faster development cycles and higher-quality software delivery allows companies to respond more swiftly to market demands and customer needs. This agility improves customer satisfaction and positions the organization favorably against competitors. 

What is Flow State and Why is it a Core Goal of a Great DX? 

In simple words, flow state means ‘being in the zone’. Also known as deep work, it refers to a mental state characterized by complete immersion and focused engagement in an activity. Achieving flow can significantly increase engagement, enjoyment, and productivity. 

Flow state is considered a core goal of a great DX because it allows developers to work with remarkable efficiency, completing tasks faster and with higher quality. It also enables developers to generate innovative solutions and ideas when they are deeply engaged in their work, leading to better problem-solving outcomes. 

Also, flow isn’t limited to individual work; it can also be experienced collectively within teams. When development teams achieve flow together, they operate with synchronized efficiency, which enhances collaboration and communication. 

What Developer Experience is Not 

Developer Experience is Not Just Good Tooling 

Tools like IDEs, frameworks, and libraries play a vital role in a positive developer experience, but they are not the sole component. Good tooling is merely a part of the overall experience. It helps streamline workflows and reduce friction, but DX encompasses much more, such as documentation, support, learning resources, and community. Tools alone cannot address issues like poor communication, lack of feedback, or insufficient documentation, and without a holistic approach, developer satisfaction and productivity can still suffer.

Developer Experience is Not a Quick Fix 

Improving DX isn’t a one-off task that can be patched quickly. It requires a long-term commitment and a deep understanding of developer needs, consistent feedback loops, and iterative improvements. Great developer experience involves ongoing evaluation and adaptation of processes, tools, and team dynamics to create an environment where developers can thrive over time. 

Developer Experience isn’t About Pampering Developers or Using AI tools to Cut Costs

One common myth about DX is that it focuses solely on pampering developers or on using AI tools as cost-cutting measures. True DX aims to create an environment where developers can work efficiently and effectively. In other words, it is about empowering developers with the right resources, autonomy, and opportunities for growth. While AI tools help simplify tasks, deploying them without considering the broader context of developer needs may lead to dissatisfaction if those tools do not genuinely enhance the work experience. 

Developer Experience is Not User Experience 

DX and UX look alike, but they target different audiences and goals. User Experience is about how end-users interact with a product, while Developer Experience concerns the experience of developers who build, test, and deploy products. Improving DX involves understanding developers’ unique challenges and needs rather than only applying UX principles meant for end-users.

Developer Experience is Not the Same as Developer Productivity 

Developer Experience and Developer Productivity are interrelated yet not identical. While a positive developer experience can lead to increased productivity, productivity metrics alone don’t reflect the quality of the developer experience. These metrics often focus on output (like lines of code or hours worked), which can be misleading. True DX encompasses emotional satisfaction, engagement levels, and the overall environment in which developers work. A positive developer experience creates the conditions that naturally lead to higher productivity, rather than measuring it directly through traditional metrics.

How Does Typo Help Improve DevEx?

Typo is a valuable tool for software development teams that captures a 360-degree view of the developer experience. It surfaces early indicators of developer well-being and actionable insights on the areas that need attention through signals from work patterns and continuous AI-driven pulse check-ins.

Key Features

  • Research-backed framework that captures parameters and uncovers real issues.
  • In-depth insights are published on the dashboard.
  • Combines data-driven insights with proactive monitoring and strategic intervention.
  • Identifies the key priority areas affecting developer productivity and well-being.
  • Sends automated alerts to identify burnout signs in developers at an early stage.

Conclusion 

Developer Experience empowers developers to focus on building exceptional solutions. A great DX fosters innovation, enhances productivity, and creates an environment where developers can thrive individually and collaboratively.

Implementing developer tools empowers organizations to enhance DX and enable teams to prevent burnout and reach their full potential.

Podcasts

'Data-Driven Engineering: Building a Culture of Metrics' with Mario Viktorov Mechoulam, Sr. Engineering Manager, Contentsquare


Episode Transcript


Kovid Batra: That’s really interesting. That is one unique thing that I got to learn today. And I’m sure orchestra must have been fun.

Mario Viktorov Mechoulam: Yes.

Kovid Batra: Do you do that, uh, even today?

Mario Viktorov Mechoulam: Uh, no, no, unfortunately I’m, I’m like the black sheep of my family because I, once I discovered computers and switched to that, um, I have not looked back. Uh, some days I regret it a bit, uh, but this new adventure, this journey that I’m going through, um, I don’t think it’s, it’s irreplaceable. So I’m, I’m happy with what I’m doing.

Kovid Batra: Great! Thank you for sharing this. Uh, moving on, uh, to our main section, which is setting a culture of metrics in engineering teams. I think a very known topic, a very difficult to do thing, but I think we’ll address the elephant in the room today because we have an expert here with us today. So Mario, I think I’ll, I’ll start with this. Uh, sorry to say this, but, uh, this looks like a boring topic to a lot of engineering teams, right? People are not immediately aligned towards having metrics and measurement and people looking at what they’re doing. And of course, there are biases around it. It’s a good practice. It’s an ideal practice to have in high performing engineering teams. But what made you, uh, go behind this, uh, what excited you to go behind this?

Mario Viktorov Mechoulam: A very good question. And I agree that, uh, it’s not an easy topic. I think that, uh, what’s behind the metrics is around us, whether we like it or not. Efficiency, effectiveness, optimization, productivity. It’s, it’s in everything we do in the world. So, for example, even if you, if you go to the airport and you stay in a queue for your baggage check in, um, I’m sure there’s some metrics there, whether they track it or not, I don’t know. And, um, and I discovered in my, my university years, I had, uh, first contact with, uh, Toyota production system with Lean, how we call it in the West, and I discovered how there were, there were things that looked like, like magic that you could simply by observing and applying use to transform the landscape of organizations and the landscape systems. And I was very lucky to be in touch with this, uh, with this one professor who is, uh, uh, the Director of the Lean Institute in Spain. Um, and I was surprised to see how no matter how big the corporation, how powerful the people, how much money they have, there were inefficiencies everywhere. And in my eyes, it looks like a magic wand. Uh, you just, uh, weave it around and then you magically solve stuff that could not be solved, uh, no matter how much money you put on them. And this was, yeah, this stuck with me for quite some time, but I never realized until a few years into the industry that, that was not just for manufacturing, but, uh, lean and metrics, they’re around us and it’s our responsibility to seize it and to make them, to put them to good use.

Kovid Batra: Interesting. Interesting. So I think from here, I would love to know some of the things that you have encountered in your journey, um, as an engineering leader. Uh, when you start implementing or bringing this thought at first point in the teams, what’s their reaction? How do you deal with it? I know it’s an obvious question to ask because I have been dealing with a lot of teams, uh, while working at Typo, but I want to hear it from you firsthand. What’s the experience like? How do you bring it in? How do you motivate those people to actually come on board? So maybe if you have an example, if you have a story to tell us from there, please go ahead.

Mario Viktorov Mechoulam: Of course, of course. It’s not easy and I’ve made a lot of mistakes and one thing that I learned is that there is no fast track. It doesn’t matter if you know, if you know how to do it. If you’ve done it a hundred times, there’s no fast track. Most of the times it’s a slow grind and requires walking the path with people. I like to follow the, these steps. We start with observability, then accountability, then understanding, then discussions and finally agreements. Um, but of course, we cannot, we cannot, uh, uh, drop everything at, at, at, at once at the team because as you said, there are people who are generally wary of, of this, uh, because of, um, bad, bad practices, because of, um, unmet expectations, frustrations in the past. So indeed, um, I have, I have had to be very, very careful about it. So to me, the first thing is starting with observability, you need to be transparent with your intentions. And I think one, one key sentence that has helped me there is that trying to understand what are the things that people care about. Do you care about your customers? Do you care about how much focus time, how much quality focus time do you have? Do you care about the quality of what you ship? Do you care about the impact of what you ship? So if the answer to these questions is yes, and for the majority of engineers, and not only engineers, it’s, it’s yes, uh, then if you care about something, it might be smart to measure it. So that’s a, that’s a good first start. Um, then by asking questions about what are the pains or generating curiosity, like for example, where do you think we spend the most time when we are working to ship something? You can, uh, you can get to a point where the team agrees to have some observability, some metrics in place. So that’s the first step.

Uh, the second step is to generate accountability. And that is arguably harder. Why so? Because in my career, I’ve seen sometimes people, um, who think that these are management metrics. Um, and they are, so don’t get me wrong. I think management can put these metrics to good use, um, but this sends a message in that nobody else is responsible for them, and I disagree with this. I think that everybody is responsible. Of course, I’m ultimately responsible. So, what I do here is I try to help teams understand how they are accountable of this. So if it was me, then I get to decide how it really works, how they do the work, what tools they use, what process they use. This is boring. It’s boring for me, but it’s also boring and frustrating for the people. People might see this as micromanagement. I think it’s, uh, it’s much more intellectually interesting if you get to decide how you do the work. And this is how I connect the accountability so that we can get teams to accept that okay, these metrics that we see, they are a result of how we have decided to work together. The things, the practices, the habits that we do. And we can, we can influence them.

Kovid Batra: Totally. But the thing is, uh, when you say that everyone should be onboarded with this thought that it is not just for the management, for the engineering, what exactly, uh, are those action items that you plan that get this into the team as a culture? Because I, I feel, uh, I’ll touch this topic again when we move ahead, but when we talk about culture, it comes with a lot of aspects that you can, you can not just define, uh, in two days or three days or five days of time. There is a mindset that already exists and everything that you add on top of it comes only or fits only if it aligns with that because changing culture is a hard thing, right? So when you say that people usually feel that these are management metrics, somehow I feel that this is part of the culture. But when you bring it, when you bring it in a way that everyone is accountable, bringing that change into the mindset is, is, is a little hard, I feel. So what exactly do you do there is what I want to understand from you.

Mario Viktorov Mechoulam: Sure. Um, so just, just to be, to be clear, at the point where you introduce this observability and accountability, it’s not, it’s not part of the culture yet. I think this is the, like, putting the foot on the door, uh, to get people to start, um, to start looking at these, using these and eventually they become a culture, but way, way later down the line.

Kovid Batra: Got it, got it. Yeah.

Mario Viktorov Mechoulam: Another thing is that culture takes, takes a lot of time. It’s, uh, um, how can we say? Um, organic adoption is very slow. And after organic adoption, you eventually get a shifting culture. Um, so I was talking to somebody a few weeks back, and they were telling me a senior leader for one of another company, and they were telling me that it took a good 3–4 years to roll out metrics in a company. And even then, they did not have all the levels of adoption, all the cultural changes everywhere in all the layers that they wanted to. Um, so, so this, there’s no fast track. This, this takes time. And when you say that, uh, people are wary about metrics or people think that manage, this is management metrics when they, when, when you say this is part of culture, it’s true. And it comes maybe from a place where people have been kept out of it, or where they have seen that metrics have been misused to do precisely micromanagement, right?

Kovid Batra: Right.

Mario Viktorov Mechoulam: So, yeah, people feel like, oh, with this, my work is going to be scrutinized. Perhaps I’m going to have to cut corners. I’m going to be forced to cut corners. I will have less satisfaction in the work we do. So, so we need to break that, um, to change the culture. We need to break the existing culture and that, that takes time. Um, so for me, this is just the first step. Uh, just the first step to, um, to make people feel responsible, because at the end of the day, um, every, every team costs some, some, some budget, right, to the company. So for an average sized team, we might be talking $1 million, depending on where you’re located, of course. But $1 million per year. So, of course, this, each of these teams, they need to make $1 million in, uh, in impact to at least break even, but we need more. Um, how do we do that? So two things. First, you need, you need to track the impact of the work you do. So that already tells you that if we care about this, there is a metric that we have to incorporate. We have to track the impact, the effect that the work we ship has in the product. But then the second, second thing is to be able to correlate this, um, to correlate what we ship with the impact that we see. And, and there is a very, very, uh, narrow window to do that. You cannot start working on something and then ship it three years later and say, Oh, I had this impact. No, in three years, landscape changed a lot, right? So we need to be quicker in shipping and we need to be tracking what we ship. Therefore, um, measuring lead time, for example, or cycle time becomes one of the highest expressions of being agile, for example.

Kovid Batra: Got it.

Mario Viktorov Mechoulam: So it’s, it’s through these, uh, constant repetition and helping people see how the way they do work, how, whether they track or not, and can improve or not, um, has repercussions in the customer. Um, it’s, it’s the way to start, uh, introducing this, this, uh, this metric concept and eventually helping shift the culture.

Kovid Batra: So is, let’s say cycle time for, for that matter, uh, is, is a metric that is generally applicable in every situation and we can start introducing it at, at the first step and then maybe explore more and, uh, go for some specifics or cycle time is specific to a situation in itself?

Mario Viktorov Mechoulam: I think cycle time is one of these beautiful metrics that you can apply everywhere. Uh, normally you see it applied on the teams. To do, doing, done. But, uh, what I like is that you can apply it, um, everywhere. So you can apply it, um, across teams, you can apply, apply it at line level, you can even apply it at company level. Um, which is not done often. And I think this is, this is a problem. But applying it outside of teams, it’s definitely part of the cultural change. Um, I’ve seen that the focus is often on teams. There’s a lot of focus in optimizing teams, but when you look at the whole picture, um, there are many other places that present opportunities for optimization, and one way to do that is to start, to start measuring.

Kovid Batra: Mario, did you get a chance where you could see, uh, or compare basically, uh, teams or organizations where people are using engineering metrics, and let’s say, a team which doesn’t use engineering metrics? How does the value delivery in these systems, uh, vary, and to what extent, basically?

Mario Viktorov Mechoulam: Let me preface that. Um, metrics are just a cornerstone, but they don’t guarantee that you’d do better or worse than the teams that don’t apply them. However, it’s, it’s very hard, uh, sometimes to know whether you’re doing good or bad if you don’t have something measurable, um, to, to do that. What I’ve seen is much more frustration generally in teams that do not have metrics. But because not having them, uh, forces them into some bad habits. One of the typical things that I, that I see when I join a team or do a Gemba Walk, uh, on some of the teams that are not using engineering metrics, is high work in progress. We’re talking 30+ things are ongoing for a team of five engineers. This means that on average, everybody’s doing 5–6 things at the same time. A lot of context switching, a lot of multitasking, a lot of frustration and leading to things taking months to ship instead of days. Of course, as I said, we can have teams that are doing great without this, but, um, if you’re already doing this, I think just adding the metric to validate it is a very small price to pay. And even if you’re doing great, this can start to change in any moment because of changes in the team composition, changes in the domain, changes in the company, changes in the process that is top-down. So it’s, uh, normally it’s, it’s, it’s very safe to have the metrics to be able to identify this type of drift, this type of degradation as soon as they happen. What I’ve seen also with teams that do have metric adoption is first this eventual cultural change, but then in general, uh, one thing that they do is that they keep, um, they keep the pieces of work small, they limit the work in progress and they are very, very much on top of the results on a regular basis and discussing these results. Um, so this is where we can continue with the, uh, cultural change.

Uh, so after we have, uh, accountability, uh, the next thing, step is understanding. So helping people through documentation, but also through coaching, understand how the choices that we make, the decisions, the events, produce the results that we see for which we’re responsible. And after that, fostering discussion for which you need to have trust, because here we don’t want blaming. We don’t want comparing teams. We want to understand what happened, what led to this. And then, with these discussions, see what can we do to prevent these things. Um, which leads to agreement. So doing this circle, closing the circle, doing it constantly, creates habits. Habits create continuous improvement, continuous learning. And at a certain point, you have the feeling that the team already understands the concepts and is able to work autonomously on this. And this is the moment where you delegate responsibility, um, of this and of the execution as well. And you have created, you have changed a bit the culture in one team.

Kovid Batra: Makes sense. What else does it take, uh, to actually bring in this culture? What else do you think is, uh, missing in this recipe yet?

Mario Viktorov Mechoulam: Yes. Um, I think working with teams is one thing. It’s a small and controlled environment. But the next thing is that you need executive sponsorship. You need to work at the organization level. And that is, that is a bit harder. Let’s say just a bit harder. Um, why is it hard?

Kovid Batra: I see some personal pain coming in there, right?

Mario Viktorov Mechoulam: Um, well, no, it depends. I think it can be harder or it can be easier. So, for example, uh, my experience with startups is that in general, getting executive sponsorship there, the buy-in, is way easier. Um, at the same time, the, because it’s flatter, so you’re in contact day to day with the people who, who need to give you this buy-in. At the same time, very interestingly, engineers in these organizations often are, often need these metrics much less at that point. Why? Because when we talk about startups, we’re talking about much less meetings, much less process. A lot of times, a lot of, um, people usually wear multiple hats, boundaries between roles are not clear. So there’s a lot of collaboration. People usually sit in the very same room. Um, so, so these are engineers that don’t need it, but it’s also a good moment to plant the seed because when these companies grow, uh, you’ll be thankful for that later. Uh, where it’s harder to get it, it’s in bigger corporations. But it’s in these places where I think that it’s most needed because the amount of process, the amount of bureaucracy, the amount of meetings, is very, very draining to the teams in those places. And usually you see all these just piles up. It seldom gets removed. Um, that, maybe it’s a topic for a different discussion. But I think people are very afraid of removing something and then be responsible of the result that removal brings. But yeah, I have, I have had, um, we can say fairly, a fair success of also getting the executive sponsorship, uh, in, in organizations to, to support this and I have learned a few things also along the way.

Kovid Batra: Would you like to share some examples? Not just of getting sponsorship from the executives; I'm interested because you say it's a little hard in places. What can work when you are in that room where you need to get buy-in on this? What exactly drives it?

Mario Viktorov Mechoulam: Yes. The first point is the same both for grassroots movements with teams and for executive sponsorship, and that is to be transparent: transparent about what you want to do, what your intent is, and why you think it's important. Now here, and I'm embarrassed to say this, what we really want to change is the culture, right? So we should focus on talking about habits, about culture, about people, et cetera, not so much about magic results. But I'm guilty of pitching it that way, because people like how it sounds; people like to hear, "we'll introduce metrics and we'll be faster and more efficient." It's not a direct relationship; as I said, it's a stepping stone that can help you get there. And it's not a one-month journey or a one-year journey; it can take slightly longer. But sometimes, to get the attention, you have to have a pitch that focuses more on efficiency, on predictability, and those types of things. So that's definitely one learning. The second learning is that it's very important, no matter who you are, but even more so when you are not at the top of the management pyramid, to get coaching from your direct manager. If you have somebody who makes your goals, your objectives, their own, that's great, because they have more experience and can help you navigate this and present the case in a much better and more structured way for the intent that you have. And I was very lucky there as well, to count on people who were supportive and who coached me along the way.

So, the first step is the same: be transparent with your intent, and share something that you have done already. Here we are often in a situation where you have to put your money where your mouth is, and sometimes you have to invest from your own pocket if you want, for example, to use a specific tool. To me, tools don't really matter; what's important is to start with something and build on top of it, change the culture, and then you'll find the perfect tool that serves your purpose. So sometimes you have to initiate this yourself if you want to have some metrics. Of course, you can always do it manually. I've done that in the past, but I definitely don't recommend it, because it's a lot of work, and we're in an era where most of these tools are commodities, so we're lucky enough to be able to gather this information easily. Usually, after this PoC, this experiment of three to six months with the team, you should have some results that you can present to get executive sponsorship. Something important that I learned here is that you need to present the results very precisely: what was the problem, what actions did we take, what was the result? That's not always easy, because when you work with metrics for a while, you quickly start to see that there are a lot of synergies. There's overlap; there are things that impact other things. So sometimes you see a change in the trend, you see an improvement somewhere, you see the cultural impact happening, but you're not able to pin down exactly the one or two things that caused it. That part, I think, is very important, but it's not always easy, so it has to be prepared carefully. The second part is that, unfortunately, I discovered that not many people are familiar with these topics. So when introducing them to get executive sponsorship, you need to be able to explain them in a very simple and easy way, and also be mindful of the time, because most of these people are very busy. You don't want to go into a full-blown explanation of several hours.

Kovid Batra: I think those people should watch these kinds of podcasts.

Mario Viktorov Mechoulam: Yeah. But yes, it's the experiment, it's the results, it's the actions, but also a bit of background on why this is important and how it influenced what we did.

Kovid Batra: Yeah, I mean, people are always at different levels in this journey. Let's call it a journey where, at one end, you are super aware and know what needs to be done, and at the other, you're not even aware of the problem itself. So as you go through this funnel, there are people you need to onboard in your team who first need to understand what we are talking about: what it means, how it's going to create impact, and what exactly it is, in very simple layman's language. So I totally understand that point, and I realize how easy, as well as how difficult, it is to get these things in place and bring that culture of engineering metrics into engineering teams.

Well, I think this was something really interesting. One last piece that I want to touch upon: when you put in all this effort into onboarding the teams, fostering that culture, getting buy-in from the executives, doing your PoCs and presenting them, staying in sync with the team, there must be some specific indicators that you start seeing in the teams. I know you have just covered it, but I want to highlight the point again: what exactly should someone, let's say an engineering manager trying to implement this, be looking for early on, or maybe one or two months into running that PoC with their team?

Mario Viktorov Mechoulam: I think how comfortable the people in the team get in discussing and explaining the concepts during analysis of the metrics, this qualitative analysis, is key, and that is probably where most of the effort goes in the first months. We need to make sure that people understand the metrics, what they represent, and how the work we do has an impact on them. When we reached that point, one cue for me was people in my teams telling me, "I want to run this." That meant to me that we had closed the circle, that we were close to having a habit, and that people were ready to have this responsibility delegated to them. It put people in a place where they had to drive a conversation and think: what am I seeing? What happened? What could it mean? What actions do we want to take? Is this something we already saw in the past and tried to address, and maybe made worse? And then you should also see a change in the trend of the metrics. For example, work in progress going from 30+ down to something close to the team size, or even lower, because WIP equal to team size means everyone is working independently, and maybe you want them to collaborate. Some of the metrics change drastically. We can talk about it another time, but with the standard deviation of the cycle time, you can see how it squeezes, which means shipping no longer feels random. Now we can make a very accurate guess of when something is going to land. These types of things, to me, mark good changes and show that you're on the right path.
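
Since Mario points to the standard deviation of cycle time as an early success indicator, here is a small illustrative sketch of that calculation. The sample durations are invented; the point is only to show the spread "squeezing" as work items get smaller and WIP drops.

```python
import statistics

def cycle_time_summary(cycle_times_days: list[float]) -> str:
    """Mean and standard deviation of cycle time, in days."""
    mean = statistics.mean(cycle_times_days)
    stdev = statistics.stdev(cycle_times_days)
    return f"mean={mean:.1f}d, stdev={stdev:.1f}d"

# Before: shipping takes anywhere from 2 to 40 days; delivery feels random.
print(cycle_time_summary([2, 5, 12, 30, 40, 8, 25]))
# After limiting WIP and keeping work small: the spread squeezes,
# so delivery dates become a reasonable guess rather than a lottery.
print(cycle_time_summary([3, 4, 5, 6, 4, 5, 3]))
```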

Kovid Batra: Honestly, Mario, these are very insightful, very practical tips about the implementation piece, and I'm sure it doesn't end here. We are going to have more such discussions on this topic; I want to deep dive into exactly which metrics to use and what suits which situation. Things like how the standard deviation of your cycle time starts changing are in themselves interesting to talk about, so we'll probably cover that in the next podcast we record with you. For today, this is our time. Any parting advice you would like to share with the audience? Let's say there is an engineering manager, say Mario five years back, who is thinking of going in this direction. What piece of advice would you give that person to get on this journey, and what's the incentive for them?

Mario Viktorov Mechoulam: Yes. Okay. Clear. In general, you'll hear that people and teams are too busy to improve. We all know that. So as a manager who wants to start introducing these concepts and these metrics, one of your responsibilities is to make room, to make space for the team, so that they can sit down and have quality time for this type of conversation. Without that, it's not going to happen.

Kovid Batra: Okay, perfect. Great, Mario. It was great having you here, and I'm sure we'll be recording a few more sessions on this topic, because it is close to us as well. But for today, this is our time. Thank you so much. See you once again.

Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. Bye-bye!

Kovid Batra: Bye.

Webinar: Unlocking Engineering Productivity with Clint Calleja & Rens Methratta


In the third session of the 'Unlocking Engineering Productivity' webinar by Typo, host Kovid Batra converses with engineering leaders Clint Calleja and Rens Methratta about strategies for enhancing team productivity.

Clint, Senior Director of Engineering at Contentsquare, and Rens, CTO at Prendio, share their perspectives on the importance of psychological safety, clear communication, and the integration of AI tools to boost productivity. The panel emphasizes balancing short-term deliverables with long-term technical debt, and the vital role of culture and clear goals in aligning teams. Through discussions on personal experiences, challenges, and learnings, the session provides actionable insights for engineering managers to improve developer experience and foster a collaborative working environment.

Timestamps

  • 00:00 — Let's begin!
  • 01:10 — Clint's Hobbies and Interests
  • 02:54 — Rens' Hobbies and Family Life
  • 09:14 — Defining Engineering Productivity
  • 16:08 — Counterintuitive Learnings in Engineering
  • 21:09 — Clint's Experience with Acquisitions
  • 25:08 — Enhancing Developer Experience
  • 30:01 — AI Tools and Developer Productivity
  • 32:07 — Rethinking Development with AI
  • 33:57 — Measuring the Impact of AI Initiatives
  • 39:40 — Balancing Short-Term and Long-Term Goals
  • 41:57 — Traits of a Great Product Lead
  • 44:14 — Best Practices for Improving Productivity
  • 48:10 — Challenges with Gen Z Developers
  • 58:38 — Aligning Teams with Surveys and Metrics
  • 01:03:00 — Conclusion and Final Thoughts

Links and Mentions

Transcript

Kovid Batra: All right. Welcome to the third session of Unlocking Engineering Productivity. This is Kovid, your host, and with me today we have two amazing, passionate engineering leaders, Clint and Rens. I'll introduce them one by one. Clint is the Senior Director of Engineering at Contentsquare, ex-Hotjar, a long-time friend and a mentor. Welcome to the show, Clint. It is great to have you here.

Clint Calleja: Thank you. Thank you, Kovid. It’s, uh, it’s uh, it’s very exciting to be here. Thank you for the invite.

Kovid Batra: Alright. So Clint, I think we were talking about your hobbies last time, and I was really fascinated by this fact. So guys, Clint actually trains in martial arts. He's a very well-trained, professional martial artist, and he's particularly interested in karate. Is that right, Clint?

Clint Calleja: Yes, indeed. Though I wouldn't say professionally, you know; we've been at it for two years, me and the kids. But yes, it's grown on me. I enjoy it.

Kovid Batra: Perfect. What else do you like? Uh, would you like to share something about your hobbies and passions?

Clint Calleja: Yeah, I’m, I’m pretty much into, um, on the, you know, more movement side. Uh, I’m, I’m, I’m into sports in general, like fit training, and I enjoy a game of squash here and there. And then on the commerce side, I need my, you know, daily dose of reading. It, it varies. Sometimes it’s, uh, around leadership, sometimes psychology. Uh, lately it’s a bit more also into stoicism and the art of controlling, you know, thinking about what we can control. Uh, yeah, that’s, that’s me basically.

Kovid Batra: That’s great. Really interesting. Um, the, another guest that we have today, uh, we have Rens with us. Uh, Rens is CTO of Prendio. Uh, he is also a typo product user and a product champion. He has been guiding us throughout, uh, on building the product so far. Welcome to the show, Rens.

Rens Methratta: Hi, Kovid. It's good to be here. Clint, it's really good to meet you. Very excited to participate; it's always really good to talk shop. I enjoy it.

Kovid Batra: Thank you so much. All right, Rens, would you like to tell us something about your hobbies? How do you unwind after your day? What do you do outside of work?

Rens Methratta: Yeah, no, um, it’s funny, I don’t have many, I don’t think I have many hobbies anymore. I mean, I have two young kids now, um, and they are, uh, they take up, their hobbies are my hobbies. So, uh, um, so gymnastics, soccer, um, we have, uh, other, you know, a lot of different sports things and piano. So I, I’ve, I’ve learning, I’m learning piano with my daughter. I guess that’s a hobby. Um, that’s, uh, not far asleep, but I’m, I’m enjoying it. But I think a lot of their things that they do become stuff that, um, I get involved in and I really try to, um, I try to enjoy it with them as well. It makes, it makes it more fun.

Kovid Batra: No, I can totally understand that, because with two kids and a CTO position, all your time outside of work would be consumed by the kids. And that's totally fine. If your hobbies are aligned with what your kids are doing, that's good for them and good for you.

Rens Methratta: Yeah, I love it. I enjoy it. It keeps me... you know, I remember someone saying a long time ago that when you get older, life goes by faster, because you keep doing the same stuff every day and your mind just samples less, right? So they kind of keep me young. I get to do new stuff through them. It's been good.

Kovid Batra: Perfect. Perfect. Thanks for the introduction; we got to know you a little more, but it doesn't stop here. Clint, you were talking about psychology and reading those books. There is one small ritual on this show that is, again, driven by my love for psychology and understanding human behavior. The ritual is that you have to tell us something about yourself, from your childhood or your teenage years, that defines who you are today.

Clint Calleja: Very interesting question. It reminds me of a previous manager I had who used to like asking this question as well. There was a recent one, which we just mentioned: you know, mentioning kids, Rens, you got me to it. I actually started training in martial arts because of the kids; I took them, and I ended up doing it myself. But I think the biggest one that comes to mind was in 2005, at the age of 22. In Malta, you know, we're a very tightly-knit culture: people stay living with their parents a long time, we're a small island, everyone is close by. And I wanted to see what's out there, so I went to live for a year in Germany. I think this was the most defining moment, on two fronts. On one side there was a career opportunity: whilst I was still studying software engineering part-time, there was this company that offered to take me on as an intern and trained me for a whole year in their offices in Germany. So that was a good step in the right direction career-wise. But outside of the profession, on a personal level, it was such an eye-opener. This was the year I realized how many things I had been taking for granted, you know, like family and friends close by when you need them, and even simple things like the sunny weather in Malta and the sea close by. I think this was the year I became much more aware of all of this and could reflect a bit more deeply.

Kovid Batra: I totally relate to this, actually. For you it happened, I would say, a little late, because you moved out during your job, or later, in college. For me, it happened in my early teens: I moved out to a hostel for schooling, and I had the same realizations. It got me thinking a lot about what I was taking for granted. So I totally relate, and that actually defined who I am today: I'm much more grateful towards my parents and the family that I have with me. Yeah.

Clint Calleja: Exactly. Exactly.

Kovid Batra: Yeah. Yeah, yeah. Totally. Rens, it’s your turn now.

Rens Methratta: Yeah, I'm glad. Thinking through this, it was an interesting question. I'd say, growing up, I grew up with my grandparents, and we had a farm. I think growing up with them, them being a bit older, you get a little more of that sense of maturity in how they thought about the world, and seeing that at a young age was really good for me. In farming there are lots of things that sometimes go wrong: there are floods, there's disease, lots of stuff. But the way they approached things, they were never about "let's blame anyone"; it was "let's stay calm, let's focus on solving the problem, let's figure it out," staying positive. That was really helpful for me as an example, and really the biggest thing they taught me was: there are certain things you just can't control, so focus on the things you can control and worry about those, and be positive. I carry that with me a lot. There's a lot of stuff you can stress out about, but there are only so many things you can control, and you kind of let go of everything else. So I keep that with me.

Kovid Batra: Totally makes sense. I mean, people like you, coming from humble backgrounds, are more fighters by nature; they're more positive and more optimistic in such situations. I've seen that in a lot of folks around me. People who are privileged don't always get to be that anti-fragile; when situations come, they break down easily. So I think that defines who you are, and I totally relate to that. Perfect. Great. Thank you for sharing this. Alright guys, I think now we'll move on to the main section, which is what this Unlocking Engineering Productivity session is about. Today's theme is developer experience, and of course the experience you both have gathered over your engineering leadership journeys. I'll start with a very fundamental thing, and we'll go with Rens first. Rens, what, according to you, is engineering productivity? It's the very fundamental question I ask on this show, but the audience would want to understand: what is the perspective on productivity of engineering leaders of such high-performing teams?

Rens Methratta: Yeah, there are obviously the simple things, metrics like velocity and so on. Those are good to have. But from my perspective, the way really good teams function is by making sure the teams are aligned with business objectives: what we're trying to accomplish, common goals, regardless of how big the organization is. It gets harder as you get bigger, because there are more layers between your impact and the business outcome; it's easier for smaller teams. But regardless, what I've seen work is linking productivity to outcomes. These are our goals, and I think OKRs work really well as a framework for structuring that. Realistically it's saying: here's what we as a team are trying to accomplish, here's how we're going to measure it, based on whatever the business metric or key outcome is, and then let's work on figuring it out. My approach has always been: this is what we want to do, this is what we think we need to do to get there, and then what are we going to commit to, and when do we think we'll get it done? And then, how well did we do against that? That's how we tie it all together: getting everyone aligned on objectives, and making sure the objectives have meaning to the team. It's always hard when people feel "why am I doing this?"; that's the worst. But if it's clear how this is going to make an impact on our customers or the business, and they can see it, then it becomes: we see the problem, here's a solution we think will work, here's what we're committing to in order to fix it. And then it's really measuring: did we deliver what we said we were going to deliver? Did we deliver it on time? Those are the things we look at.

Kovid Batra: Got it. Got it. What do you have to say, Clint?

Clint Calleja: It’s, uh, it, it’s, uh, my, my definition is very much aligned. Like, uh, from a, a higher perspective. To me, it all boils down. And, um, how well are we, uh, and how well and quickly are we delivering the right value to the user? Right? And, uh, this kind of, uh, if we drill down to this, um, this means like how quickly are we able to experiment and learn from that as our architecture is allowing us to do that as quickly as we want. How well are we planning and executing on those plans, uh, with least disruptions possible, like being proactive rather than be reactive to disruption. So there’s, you know, there’s a whole, uh, sphere of accountability that we, we, we need to take into consideration.

Kovid Batra: Makes sense. I think you're both pointing towards the same thing. Fundamentally, it's about delivering more value to the customer, aligned with the business goals, and that totally makes sense. But what do you think about your peers, other engineering leaders: do you see the same perspective on engineering productivity, Rens?

Rens Methratta: In general, yes. But I think sometimes you end up getting caught up in just trying to hit a metric, and then losing track of: are we working on the right things? Is this worthwhile? That's when it can be problematic. Even early in my career I did that: hey, let's be as efficient as possible in terms of building a metrics-driven organization, everything becomes small projects, and we get these things in really quickly. And what I learned in that situation is: yes, we're doing stuff, but the team's not as motivated, it's not as collaborative, and the outcome isn't as good. The really key thing, from my perspective, is having a team that's engaged, part of the process, and proactive, while obviously still measuring against the outcomes. That's why it's great when we go into a retrospective or sprint planning and the team says, "I don't think this works." The worst part is when you get crickets from people: "okay, this is what we want to do," and no real feedback. So I really look for how engaged teams are in solving the problem, and for that cross-collaboration: building an environment where people feel empowered to ask questions, to be collaborative, to ask tough questions too. I love it when an engineer says, "this is not gonna work." Great, tell me why. If we can build cultures that way, that's ideal.

Kovid Batra: Makes sense. Perfect. Uh, Clint, for you, uh, do you see the same perspective and how, how things get prioritized in that way?

Clint Calleja: I particularly love the focus and the attention on the culture; I think there's more we could unpack there. But yes, in general. Actually, when I heard this question, it reminded me of when I realized the need for data points, not just for the sake of data points or KPIs. What I started to see as the company grew is that without sitting down and agreeing on what good looks like, on which data points we're looking at, you get close definitions but not exact ones, which still leaves openness to interpretation. And there were cases, as we grew bigger and bigger, where, for example, I felt we were performing well whereas someone else felt differently. Then you sit down and realize: okay, this is the crux of the problem. That was the eureka moment: this is where we need data points on team performance, on system performance, on product performance in general.

Kovid Batra: Yeah, makes sense. I think you both have brought in some really good points, and I'd like to deep dive into those as we move forward. But before we move on to specific experiences: Clint, over your journey of so many years, there must be some things you once believed about how productivity is driven in teams that you now see differently. Is anything ringing a bell?

Clint Calleja: Well, if you ask me about things I used to think were good for productivity and have since changed my mind on, I keep having these: one in, one out, right? But the alignment on a key set of indicators, rather than just a few, was one of the big changes in my leadership career. I went from using sprint data as my only data points, to understanding the DORA metrics better, to seeing why SPACE actually matters even more, because it brings in the wellbeing factor and how engaged people are. So I realized that I needed to assemble a bigger set of data points in order to understand the whole picture. And not just the quantitative data points; I also needed to fill in the gaps with the qualitative parts.
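
As a rough illustration of the data points Clint mentions, here is a hypothetical sketch computing two of the DORA metrics, deployment frequency and lead time for changes, from a list of deploy events. The field names are assumptions made for the example, not any particular tool's schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deploy:
    deployed_at: datetime
    first_commit_at: datetime  # earliest commit included in the deploy

def dora_snapshot(deploys: list[Deploy], window_days: int = 30) -> str:
    """Deployment frequency and median lead time over a time window."""
    per_week = len(deploys) / (window_days / 7)
    lead_times = sorted(
        (d.deployed_at - d.first_commit_at).total_seconds() / 86400
        for d in deploys
    )
    median_lead = lead_times[len(lead_times) // 2]
    return f"{per_week:.1f} deploys/week, median lead time {median_lead:.1f}d"

# A hypothetical month of deploys, one every three days.
now = datetime(2024, 6, 30)
deploys = [
    Deploy(now - timedelta(days=i * 3), now - timedelta(days=i * 3 + 2))
    for i in range(10)
]
print(dora_snapshot(deploys))
```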

Kovid Batra: Sure. I think those go hand in hand; either of them alone is not sufficient to see what's going on and how to improve. Perfect. Makes sense. Rens, for you: there must have been some learnings over the years. Anything you found intuitive earlier that now seems counterintuitive?

Rens Methratta: Yeah, there are learnings every day. But in general, maybe echoing what Clint said: at some point I did end up over-indexing on some of the quantitative stuff, and you lose track of what you're trying to do. Hey, we did really well: we got through our sprints, our meeting times are low... there are so many things you can look at, and then you lose track of the bigger picture. So I do think about identifying those north stars, as Clint was referencing: what we think is important, and how we're measuring against that. That also helps you make sure you're looking at the right metrics, and putting them in the right context. It doesn't matter if your velocity is great if you're not building the right things. So the learning is: simplify what you look at, think through how we as a team are meeting it, and get alignment on that, primarily from a team perspective: "this is the goal we're trying to get to." I feel that's when you get the most commitment from the team too: people know what we're trying to do, it motivates them, and it's something to celebrate when we get there. Otherwise it's hard to celebrate; "oh, we hit X velocity" just isn't it. So that's the bigger learning: simplify, and then define the metrics so they flow down from those core goals and align, and you can point at something easily and say why it's important. That makes conversations a lot easier when you tell people: look, this is problematic, we might need to take a look at this; this is why it's important, this is why it's problematic, this is how it's going to impact our north star.

Kovid Batra: Totally makes sense, guys. I think having the right direction, along with whatever you're doing on a day-to-day basis as KPIs, is very important. And of course, to understand the holistic picture, to understand the developer's experience and the team's experience and improve overall productivity, qualitative data is just as important as quantitative. To sum up both of your learnings, I think that's a good piece of insight. Now we'll jump to the next section, but before that, I'd like to tell our audience that if they have any questions, we're going to have a QnA round at the end of the session, so it's best to put your questions in the comments now so we have them filtered by the end of the episode. Moving on. The next section is about specific experiences that we'll deep dive into from Rens's and Clint's journeys. We'll start with you, Clint. The best part about your experience, which I felt after stalking you on LinkedIn, is that you've gone through multiple acquisitions and worked with smaller and larger organizations and high-performing teams. That kind of experience brings a lot of exposure, and we'd like to learn from it. How does this transition happen, and during those transitions, what should an engineering leader be doing to keep it from becoming overwhelming, to stay productive, and to keep doing the right things and creating impact?

Clint Calleja: Uh, yes. Uh, we’ve been through a, I’ve been through a couple of interesting experience, and I think like, uh, I dare to say, for me, they were like, especially the acquisition, uh, where HR was acquired was, um, uh, a unique, a very unique experience to big companies merging together. Um, it’s very easy for such a transition to be overwhelming. I mean, there’s a lot of things to do. Um, so I think the first key takeaway for me is like, clear communication, intentional, um, communication, regular, and, uh, making sure that we as a leader, like you’re making yourself available to support everyone and to help to guide others along this journey. Um, it’s, then there’s the other side of it that, you know, uh, it, it, such an experience is. Um, does not come without its own challenges, right? Uh, the outcomes are big. Um, uh, and in engineering leadership specifically, you know, that there’s a primary, um, area that you start to think about is, okay, the, the systems, what does it mean when talk about technology stacks the platforms? But it’s something not to underestimate, is also the ways of working in the culture when merging with companies because, uh, I, I ca, I ca I started to, uh, coming to the realization that I think there’s more effort required than planning there, than in the technology side, um, of the story. So, very interesting experience. Um, how to get the teams up and running. I mean, my experience last year was very, again, very, very challenging in a good way. You know, I started in a completely new. Department with about 55 people. 70% of them were new to me coming from the parent company. So we needed to, we already had goals to deliver by June and by September. So it, yes. Talk about overwhelm. And I think one of the, one of the key, um, exercises that really helped us, um, start to carve out, um, some breathing space was these iterations of higher level estimations of the things that we need, um, to implement. Uh, they started immediately enabling us to understand if we needed to scope, if we needed to have discussions to either delay or the scope or bring more, uh, people to the mix. So, and following that, you know, kickstarting, we needed to give some space to the teams to come together, start forming their ways of working. The same time getting a high level understanding of what we can commit to. And from there it’s all, again, about regular communication and reflections. It’s like, okay, biweekly, let’s have a quick update, um, and let’s call a spa. The spa. If we’re in the red, let’s call it out. I’d rather, you know, we’d rather know early so that we can do something about it where there’s time rather than, I’m not sure if you’ve ever seen the situation, but you see a status in the green for almost all a quarter. All of a sudden you get to the last two weeks and then the red. So.

Kovid Batra: Makes sense. While we were talking earlier, you said there's a lot to unpack on the developer experience part. I'm sure that's core to your leadership style: ensuring a good developer experience within your team. Now that you've shifted to a new team, and in general, wherever you've been leading teams, are there any specific practices around building a good developer experience that you've been following and that have worked for you? If there are, can you share some?

Clint Calleja: That’s a very good question, because I, I see different teams, right? So I’ve done different exercises with different teams, but in this particular case, where I started from was I, I realized that, okay, when you have a new, uh, line being formed, mixed cultures coming from different companies, I said, okay, the one thing I can start with is at least providing, um, a community culture, um, where people feel safe enough to speak up. Why? Because we have challenging goals. We have a lot of questions. There are areas that are known. If people won’t be able to speak up, will, you know, the probability of surprises is gonna be much, much higher.

Kovid Batra: Right.

Clint Calleja: So what are some actions I've taken to try and improve here? When it comes to leading teams directly, we generally find much more support, because even the Agile manifesto talks about a default team: a number of engineers, ideally with a trio enabled to make decisions, a pattern of reflections, as Rens said, in the retrospectives, actions ideally getting taken, a continuous cycle of improvement. What I found interesting was that beyond one team, when I started to lead other leaders and managers, I could see a much bigger opportunity for this team of leaders or managers to actually work together as a team. By default, we're all more focused on our own scope: making sure our people are well supported and heard, and that our team is delivering. But if we're calling it developer experience, let's also call this the manager experience: how much can we help each other out? How much can we support each other and remember that we're people before managers? It's not the first time I've gone to work not feeling so great and needed to fine-tune my own expectations of what I could produce. If that is not shared with my lead, my manager, or my peers, their expectations cannot adjust accordingly. So there's a lot of this that I try to prioritize through simple gestures, like sharing my weekly goals and encouraging my managers to do the same.

Kovid Batra: Yeah.

Clint Calleja: We tend to take on too much, so we do an end-of-week reflection. Think of it like a retrospective, but between managers, to say: hey, there was much more disruption than I anticipated this week, and it's okay. Part of it is actually the psychological safety of being able to say: I fell short of what I aimed for, I only achieved 50%. It's okay, and I learned, right? And in terms of metrics, another exercise that I immediately tried to restart in my new line is one I call the high-altitude flight. It's an exercise where, as leaders, we connect those KPIs with qualitative data, like the weekly pulse and feedback from 15Five, for example, and we talk about it on a monthly basis. We bring those data points onto a board, start asynchronously, raise the right questions, challenge each other, and this way we regularly bring those data points into the discussion and make sure we're prioritizing actions towards them.

Kovid Batra: Totally. After talking to so many engineering leaders, one common pattern I've seen in some of the best of them is that they over-communicate, and I'm saying this in a very positive sense. A lot of the time you're in a hybrid or remote culture, where however much you communicate is still less than enough. So having those discussions and providing that psychological safety has always worked out for teams, and I'm sure your team is very happy with the way you've been driving things. Thanks for sharing this experience; I'll get back to you. I have a few questions for Rens on this note. Rens's journey has also been very interesting. He's the CTO at Prendio, and recently I was talking to him about some of the initiatives he's been working on with his team. He mentioned Copilot and a few other automated code-analysis tools he's been integrating. So Rens, could you share some experience from there, and how that has impacted developer experience and productivity in your teams?

Rens Methratta: Um, yeah, I’d be happy to. It’s been, I think there’s a lot of, a lot of change, uh, happening in terms of capabilities with, uh, AI, right. And, and how we best utilize it. So like, we’ve definitely seen it, you know, as, as models have gotten better, right, I think the biggest thing is we have a, you know, relatively large code base and um, and newer code base for things. And so it’s, it was always good for like, maybe, maybe even like six months ago we would say, Hey, it’s, we can look at some new code, we can improve it, write some unit tests, things like that. But you know, having an AI that has like a really cohesive understanding of our code base and be able to, um, you know, have it, you know, suggest or build, uh, code that would work well, it wasn’t hap, it wouldn’t happen, right. But now it is, right? So I think that’s, that’s coming, the probably the biggest thing we’ve seen in the last couple months and really changing, um, you know, how we think about development a bit, right? Kind of moving, uh, we’ve done some, you know, a lot of this is AI first development, like it changed mindset for us as a team, right? Like how do we, how do we build it? Um, you know, lots of new tools. I think, Kovid, you mentioned there’s tons of new tools available. Yeah. It’s changing constantly, right? So, um, you know, we’ve spent some time. Looking at, looking at some of the newer tools, um, we’ve actually, uh, agreed to as of now, uh, a tool, we, we actually gonna moved everyone over to Cursor. Right. ’cause I just on terms of, um, like what the capabilities it provided and, and, and that, so, uh, and then similarly outside of code, yeah. It’s like, uh, there are tools that, you know, typo has the, uh, you know, for the pull requests, the, you know, uh, uh, summary, things like that’s really helpful, right, for things of that, even on the, and then automated testing, uh, there’s a bunch of things that I think are really changing how we work and make us more productive. And, and it’s, it’s challenging ’cause it’s, you know, it’s, obviously, and it’s good. It’s, it’s a lot of new stuff, right? And it’s really re-making us rethink how we do it. Like, um, you know, developing. So we built, uh, some things now from an AI-first approach, right? So we have to kind of relearn how we do things, right? We’re thinking things out a bit more, like defining things from a prompt first approach, right? What are our prompts, what are our templates for prompts? Like, it’s, it’s been really interesting to see and good to see. Um, uh, and I think, yeah, it definitely made us, uh, more productive and I think we’ll get more productivity as we kind of embrace the tools, but also kind of embrace the mindset. It’s, um, I think for the folks for who’s actually used it as most, and you can kind of see like when they first start utilizing it to maybe where they’re now, like the productivity increase has been tremendous. So that’s probably the biggest change we’ve seen recently. Um, but it, it’s an exciting time. We’re, we’re looking forward to kinda learning more and, and it’s something that we have to, um, you know, we really have to, um, get a better understanding of, uh. But again, which also like challenges too. I would say that too. Right? So, uh, like previously we had a good understanding of what our velocity would be. I am, I mean, right now it’s a good, maybe a good thing, like our velocity would be better, right? And it’s higher. 
So even gauging effort, things like that: there's a lot of new stuff we're going to have to learn, figure out, and reassess. But yeah, if I look at anything that's been different recently, that's been the biggest change in how we work, along with making sure we incorporate it into our existing workflows and development structure. It's a lot of new change for our team, and we're trying to help them do it effectively and think it through, while also giving the team the freedom to try new stuff, which has been really cool too.
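
Purely as an illustration of the "prompt templates" idea Rens mentions, and not his team's actual format, a team practicing prompt-first development might keep reusable templates along these lines:

```python
# A hypothetical reusable prompt template for AI-first development.
# Every placeholder and value below is an invented example.
FEATURE_PROMPT = """\
Context: {service} ({language}); follow the repo's existing conventions.
Task: {task}
Constraints:
- Include unit tests using {test_framework}.
- Do not change public interfaces outside {module}.
Output: a unified diff only.
"""

print(FEATURE_PROMPT.format(
    service="billing-api",
    language="TypeScript",
    task="add proration when an invoice is upgraded mid-cycle",
    test_framework="jest",
    module="src/invoices",
))
```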

Kovid Batra: Perfect. Perfect. My next question is actually for both of you. You're both running initiatives, whether it's AI, better communication, better one-on-ones, or building psychological safety, and I'm sure you both have some way to measure the impact of each initiative. So how do you actually measure the impact of such initiatives, or of the AI tooling you bring into the picture? Maybe Clint, you can go ahead.

Clint Calleja: I don’t have examples around AI toolings and in general, it’s more, you know, about utilizing the, actually deciding which of those KPIs are we actually optimizing for for this quarter. So I am guessing, for example, in Rens’ case we were talking about how much AI already is influencing, uh, productivity. So, um, so I would, um, assume or expect a pattern of decreased cycle time because of, uh, the quicker time, uh, to, to implement certain code. Um, I think the key part is something that Rens said a while ago is not focusing a lot on the KPI per se, just for the sake of that KP, but connecting it even in the narrative, in the communication, when we set the goal with the teams, connecting it to the user value. So some, for example, some experiences I’ve had. Okay. I had an, an interesting experience where I did exactly that. I just focused on the pickup time without a user connection. And this is where I got the learning. I’m like, okay, maybe I’m optimizing too much about the, the data points. Whereas eventually, we started shifting towards utilizing MTTD, for example, to, uh, reduce the impact of service disruptions on our customers by detecting, um, disruptions internally, uh, using SLOs to understand proactively if we’re eating too much into the other budget. So we actually act before an incident happens, right?

Kovid Batra: Um, right.

Clint Calleja: So, uh, it’s different, uh, different data points. And when going back to the wellbeing, what I found very interesting, I know that there are the engagement surveys that happen every six months to a year usually. Uh, but because of that time frequency, it sets wellbeing as a legging indicator. When we started utilizing, um, 15Five, for example, there are other tools like it, but the intention is, for every one-on-one, weekly or biweekly, you fill a form starting with how well did you feel from 1 to 5. Because we were collecting that data weekly, all of a sudden the wellbeing pulse became a leading indicator, something that I could attribute to a change, an intentional change that we decided to do in leadership.

Kovid Batra: Makes sense. Rens, for you, I think the question is pretty simple again: you're already using Typo, right?

Rens Methratta: We are, yeah.

Kovid Batra: I'll just rephrase my question: how do you use such tools to make sure your planning, your execution, your automation, or your reflection is in place? How do you leverage that?

Rens Methratta: Yeah, and I think it's the same thing: aligning the tools with what the objectives are. I use it primarily around the sprint retrospective, and not for the detail, but for this: as a collective team, we said this is what we're trying to accomplish, we have a plan, and we've agreed on what we have to get done over the next couple of weeks to make it. Then it's about having all of that in one place, to see: we said we were going to get all this done, and this is how we did. For us there are multiple tools to pull together: we have ticketing with Jira, and obviously Git for version control, but having all of that merged into one place lets us easily see what we committed to and what we did. Then it's: okay, here's where we are; what do we need to do to problem-solve? Are we behind? Having those discussions is great. Can we still meet the goals we set, from an objective perspective? What's holding us back? Getting to the point where we can have those conversations easily is what the tools, and for us Typo, are really for. It's the context that all those individual stats provide that's more important, and how that context aligns, at the end of the day, with our overall goal: we're trying to build this, or change this, for our customers, for a reason, and being able to see how we're doing against that, in a good summary, is what we find most useful, so we can take action on it. Otherwise, if you look at all the individual stats on their own, you lose track. A holistic view of how we're doing really helps.
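
A toy sketch of the consolidation Rens describes: joining issue-tracker commitments with merged pull requests into one "said versus done" view for the sprint. The data shapes are hypothetical stand-ins for what Jira and Git would actually return.

```python
def sprint_summary(committed: set[str], done_issues: set[str],
                   merged_prs: dict[str, str]) -> str:
    """One consolidated view: what we committed to vs. what landed.

    merged_prs maps PR ids to the issue each one closes (assumed shape).
    """
    landed = done_issues | set(merged_prs.values())
    delivered = committed & landed
    missed = committed - landed
    ratio = len(delivered) / len(committed) if committed else 1.0
    return (f"delivered {len(delivered)}/{len(committed)} ({ratio:.0%}); "
            f"missed: {sorted(missed) or 'none'}")

# Hypothetical sprint data.
committed = {"PROJ-1", "PROJ-2", "PROJ-3", "PROJ-4"}
done = {"PROJ-1"}
prs = {"pr-101": "PROJ-2", "pr-102": "PROJ-3"}
print(sprint_summary(committed, done, prs))  # delivered 3/4 (75%); missed: ['PROJ-4']
```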

Kovid Batra: Perfect. Perfect. Clint, do you have anything to add on to that?

Clint Calleja: Not specifically, not at this stage.

Kovid Batra: Alright, perfect. Thanks to both of you for sharing your experiences. Now it's time to move on to the QnA section, and I can already see we're flooded with a few questions. We'll take a minute's break now, and in the meantime I'll pick out the questions we should prioritize. Alright.

All right. So, some interesting people have shown up. Clint, I'm sure you've already seen the name: Paulo, the guest from the last session and one of our common friends. I'll pull up his question first, and Clint, you can take it: engineering productivity is a lot about the relationship with product. As senior engineering leaders, what does a great product lead look like?

Clint Calleja: Very good question. Hi, Paulo. Well, I've seen a fair share of good traits in product leads. What I can speak to is what I tend to look for. First and foremost, I look for a partner, ideally with no division, because that division easily trickles downstream; you start to see it happening in the teams as well. Secondly, alignment of objectives. I always lean on my product counterpart to understand the top priorities among our product goals, and I bring into the picture the top technical solvency challenges we need to address in order to sustain those product goals. That way we find a balance in setting the next goals for a quarter or half a year, and we build together a narrative that we can share both upwards and with the rest of our teams. And another characteristic is the teamwork element. A while ago I described the opportunity I've seen for team leads or managers to work together as a team as well. The way I like to see it, as a leader you have at least three teams: the people who report to you, your trio, and the other leaders in the department. And the product lead is someone I lean on to be one of my team peers.

Kovid Batra: Makes sense. Perfect. Alright, moving on to the next question, which is from Vishal: what are some of the best practices or tools you have found to improve your own productivity? Rens, would you like to take that?

Rens Methratta: Uh, sure. Um, I’m trying to think. I, I, there’s a lot of tools obviously. I think, I think at the end of the day, I. Um, more than anything else, I would say communication is the biggest thing, right? I would think for productivity. Yeah. From a team perspective. Like, um, like I’ve, you know, I, I’ve, I I’ve worked in a lot of different, uh, types of places from really large enterprise companies to really small startups. Right? And, and I think the common, the common thing, regardless of, of whatever tools we do is really one, how, how well do we, how well are we connected to what we’re building? How well are we, do we as a team understand what we’re trying to build and the overall objectives, right? And, and, um, I think that just, you know, that itself, uh, more than anything is what drives productivity. So like I, you know, I’ve, uh. I, I think the most productive I’ve ever been is, uh, we, we, I was in a startup. We were, uh, we had this one small attic space in the, in, in this, uh, in our, in our city, in, in Cambridge. There was five of us in one room for like, uh, we were, but we were constantly together communicating. Um, and so, uh, and, and then we had, we had a shared vision, right? So we were able to do a lot of stuff very quickly. Um, I think that, so I think what I look into is some of the tools that maybe help us now. It is challenging, I would say, with everyone being remote, right? Distributed. That is probably one of the biggest challenges I, I have for productivity. Um, so, you know, trying to get everyone together. Video calls are great. We try to make sure everyone goes on video, uh, but like at least, you know, try to get, um, as much of that, um, workflow of thinking through like, uh, being together even though we’re not together as much as possible. I, I think that helps a lot. Um, and tools that..

Kovid Batra: Have you tried those digital office tools, where you're virtually in an office?

Rens Methratta: Yeah, we tried that. Uh, I think it’s okay. Uh, we tried some of the whiteboarding tools as well, right? It’s okay. Yeah. And it’s, it’s honestly good. Um, but, you know, the interesting thing I’ve really found is, if possible, even if we meet one person live, in person, even once, right? Yeah. I feel like the relationship between the teams is so much different. So no matter what, no matter how far away, we try to get everyone together, at least everyone who wants to meet, because, I think, even like, um, how people’s expressions are, how they are in real life, it is so hard to replicate. Right?

Kovid Batra: Totally. Totally. Yeah.

Rens Methratta: And, uh, those nuances are important in terms of communication. So, um, you know, outside of that, I mean, yeah, I think the things that I would call out are the things that can simplify, uh, objectives, right? Make sure it’s clear, uh, anything that would make that easy and straightforward, uh, I think that’s the best. And then it’s making sure you have easy ways to talk to each other and communicate with each other to kind of, uh, yeah, keep track of what we’re doing.

Kovid Batra: Uh, I could see a big smile on Clint’s face when I talked about this virtual office tool. Is, is there an experience that you would like to share, Clint?

Clint Calleja: Uh, not, not really. It was fun to hear the question because I’ve been wondering about it as well, but I have to agree with Rens. I think nothing beats, you know, the change that happens after an in-person meetup.

Kovid Batra: Sure.

Clint Calleja: The relationships that get built from there take on a different, you know, a different..

Rens Methratta: It is, it is different. Yeah. I don’t know why, but if I’ve met someone in person, I feel like I know ’em at a much deeper level, uh, even though we’re, you know, otherwise on video for a long time. It just, it is a different experience.

Kovid Batra: Totally. I think there is another good question. I think you both would relate to it. Have you guys had the experience of, uh, working with Gen Z developers, uh, recently or, or in the last few years?

Rens Methratta: I, I mean, I probably, I’m trying to think through like what Gen Z would be. Yeah.

Kovid Batra: I, I get that in my circle, uh, a lot, that dealing with Gen Z developers is getting a little hard for us. And there’s like almost a 10 to 12 year, and maybe more, age gap there. And things have changed drastically. So, uh, people find it a little hard to understand and empathize, uh, on, on that front. So do you have anything to share? By the way, this is a question from Madhurima. Uh, yeah.

Rens Methratta: I think, well, I think in general, maybe not, maybe Gen Z, but just in general for the more junior developers we bring on board, like younger developers, I think it has been challenging for them too, because a lot of their experience has been remote, right? Um, I think it is harder to acclimate, and a lot of the stuff I learned, I remember, uh, coming up as a software engineer, a lot of that experience was, like, you know, getting in, meeting with people, whiteboarding, getting through that, right? And having those relationships, um, was really beneficial. So I definitely think it’s harder, um, in that sense. Uh, I do think we’ve, uh, personally tried to give, um, you know, more junior developers, you know, more opportunities, um, more coaching, um, and also, like, uh, more one-on-one time, just to try to help them acclimate, because I think we’ve identified that it is harder, especially if we’re being remote first. Um, I know the memes about Gen Z developers, but I haven’t had any meme-worthy stuff or experiences with a Gen Z developer. It hasn’t been that, so maybe I’ve been lucky. But I would empathize with that. It is harder for junior devs because, you know, we are in a much more, you know, uh, remote world, and it’s harder to make those connections.

Kovid Batra: Totally. All right.

Clint Calleja: I think, uh, if I may add something to this, uh, maybe what I’d add is, I don’t have a specific way to deal with Gen Z developers, because what I try to do is optimize for inclusivity. Okay, there’s Gen Z, but there are many other, you know, cultures and subcultures that require specific attention. So at the end of the day, what I found to be at least the best way forward is a good, strong set of values that are exemplified, that come from the company, a consistent way of sharing feedback, uh, and guidelines for how feedback is shared, and of course, making space for anyone to be heard in the way, you know, they’d like to communicate. And you can easily understand this if, you know, as part of your onboarding, you ask people to provide a user manual, for you to understand the best way, you know, for these people to communicate and to get feedback. So think of it this way: it’s like providing support for interfaces which are consistent for everyone, but then being available, uh, for everyone to communicate and get support the way they prefer it, if that makes sense.

Kovid Batra: Okay. Totally. Alright, uh, thanks guys. Moving on to the next question. Uh, this is from Gaurav. Uh, how do you balance short-term deliverables with long-term technical debt management? Also, how to plan them out effectively while giving some freedom to the engineering teams, some bandwidth to explore and innovate and delve into the unknowns. Uh, Clint, would you like to go first?

Clint Calleja: Sure. Uh, when going through this question, the first thing that came to mind, something that, I wanna be clear, I’m not an expert of, but I started, you know, trying and iterating upon, is the definition of an engineering strategy. Uh, because this is exactly what I used to try and understand, to get a diagnosis. So there’s this book, uh, ‘Good Strategy Bad Strategy’, and I try to replicate the tips from there. It’s basically getting a diagnosis of, okay, where’s the money coming from? What are our product goals? And there are other areas to cover. And then coming up with guiding policies, so that, you know, your team knows the direction we want to go, and some high-level actions that could really and truly become projects, or goals to be set as OKRs, for example. Uh, say we realize a need from the diagnosis; we realize we need to simplify our architecture, for example. So then I connect that engineering strategy and those actions to goals, so that the teams have enough freedom to choose what to tackle first, uh, whilst having enough direction on my end.

Kovid Batra: Makes sense.

Clint Calleja: So I’m still fine-tuning how, how good that strategy is, right? But it, it really helps me there.

Kovid Batra: Perfect. Uh, the other part of the question also mentions giving engineering teams that bandwidth, uh, that freedom to innovate and delve into the unknown. So, of course, one part of the question does get answered by your strategy framework, but within that, how much do you account for the bandwidth that teams would need to innovate and delve into the unknown? Uh..

Rens Methratta: I, I can take that, or Clint, either way, I, I think..

Clint Calleja: Go, go, go, go.

Rens Methratta: No, uh, it’s an interesting point. Like, um, we look at it, I think, in general, like, we define an overall architecture. For everything we do, here’s our high-level view of where we want to be from a technical perspective, right? And then whatever solutions we’re building, we always want to try to get to that. But there’s always this short and long term, and how much do we give engineers the ability to innovate? We really look at it this way: if someone has a really great idea, and it’s like, let’s do it, uh, our overall question is, okay, worst case scenario, how long would this take to completely redo to get back to our architecture? Right? Um, and if it’s like, hey, it’s not gonna increase in complexity to redo this a year from now if this turns out to be the wrong call, then we are much more lenient towards ‘let’s try something, let’s do this’, right? If we think, worst case scenario, it’s not gonna be exponentially worse to roll this back after we put it into production. And if it’s something where we’d say, like, oh, this is gonna lead us down a path we’re never gonna be able to fix, right, or it’s gonna take us so much effort to fix, then we’re much more careful, and we’re like, well, let’s see, you know, we might not wanna give as much leeway there. So that’s kinda how we balance it out typically.

Kovid Batra: Makes sense. Makes sense. Perfect. Uh, moving on, uh, probably the last question. Uh, this is from Moshiour. Uh, what’s your approach to balancing new feature development with improving the system? I think this is what we have already taken up. Do you have practical guidelines for deciding when to prioritize innovation versus strengthening your foundations? Uh, Moshiour, I think we just answered this, uh, in the previous question. So we’ll, uh, give this a pass for now, uh, and move on to the next question. Okay, there is another one from Paulo. Uh, how much of engineering productivity do you attribute to great engineers versus how work and information flows among individuals? So, Rens, would you like to take that?

Rens Methratta: Um, this is like a yes and yes. Like, I mean, uh, I think, uh, really great engineers have, like, you know, really great productivity, right? It’s a both thing, right? So, we’ve kind of seen it as we get more experience, like, uh, even on the recent stuff on the AI side. Like, playing around with it, folks who have a really solid understanding of our technical infrastructure, and who, you know, learn to use those tools effectively, their output is, like, maybe 10x. But someone who’s, um, you know, not as solid on our existing code base and technical understanding, and on utilizing it, is still improving; it’s like, you know, maybe 2x, 3x, right? So you definitely see that difference. Um, and I think that’s important. Um, but I think, you know, the other part of that is communication between the teams and how you do it, and making sure that, similarly, going back to productivity, we are building the right things. Right? We can build, yeah, you know, a lot of stuff very quickly, but it might not be worth it; if we don’t communicate well, we’re probably building completely different things. So I think it goes hand in hand. Um, I don’t think there’s really a way to separate them. Uh, it’s not an ‘or’, it’s really an ‘and’.

Kovid Batra: Perfect. No, I think it’s, it’s well answered. Clint, do you have anything to add here?

Clint Calleja: It’s, uh, very much in line with Rens, I think. And even, you know, in fact, the KPIs suggest looking at the team holistically. So while I do believe that, you know, great engineers, the experience an engineer brings, will make a difference, it’s not the first time I’ve also seen great engineers not compatible with a team, and, you know, it doesn’t work out. So you start to see that the productivity is not really, uh, improving. So yes, you need great engineers, but, uh, there’s a very big emphasis, I think it goes beyond 50/50, I think there’s a bigger emphasis, in my opinion, on the ways of working, the respectful ways of working, the small details. I don’t know, like, um, when should I expect my teammate to pick up a pull request during the sprint? Um, how do I make it easier for them? Does opening a pull request with 50 changed files, embedding refactoring with a bug fix, make it easier? You know, small things. But I think this is where, um, you can reduce a lot of friction and, uh, bring more harmony.

Kovid Batra: Okay. Makes sense. Um, you guys, I think we are already, uh, done with our time today, but, uh, I feel bad for other people who have put in questions, so I just wanna take one more, uh, this sounds interesting. Uh, are you guys okay to extend it for like 2–3 more minutes?

Rens Methratta: Sure.

Kovid Batra: Perfect. Uh, this question comes from Nisha. Uh, how to align teams to respond to developer surveys and use engineering metrics to improve overall experience and performance. So I think both of you have some experience here. Uh, Clint is, uh, already, uh, a promoter of having communication, having those one-on-ones with teams. And as for, uh, Rens, I know he’s using Typo, so he’s already into that setup where he is using engineering metrics and developer surveys with the, with the developers. So both of your opinions would be great here. Uh, Rens, would you like to go first?

Rens Methratta: Um, yeah. To Nisha’s question, um, I’ve never had good luck with, like, surveys with developers, quite honestly. Um, you know, I think a lot of it is, uh, time spent, and, you know, I try to do one-on-ones with people, um, and just, you know, get an understanding of how people are doing. Um, we’ve tried to do surveys, and, you know, I think the responses get, um, less and less valid in some ways if it becomes robotic, uh, in a lot of ways. So I really think, uh, aligning with how people are doing is, from my perspective, really more hands-on: more one-on-one discussions and conversations.

Kovid Batra: Makes sense. How, how did that work for you, Clint? Uh..

Clint Calleja: What Rens just, uh, explained resonates with a lot of my experiences in the past. It was, uh, a different and eye-opening experience at Hotjar, where I’ve seen the weekly use of such a survey being, um, well adopted. When I joined Hotjar, I joined as an individual contributor, as a front-end engineer. So the first time I had to fill in one of these, first I was like, okay, I have to do this every week? But the thing that made me change my mind was the actions I was seeing coming out of it, the benefits for me that I was seeing coming from my lead. This wasn’t just a form; this was becoming the talking points of the one-hour session I had with him every week. Actions were taken out of it, which were dedicated to me. And a fun fact: this was the first remote experience for me, but the one-on-ones felt like the most tailored I’ve ever had. So think..

Kovid Batra: That’s interesting. Yeah.

Clint Calleja: If I can sum up on the developer surveys, um, I understand that the less people can attribute their input to actual outcomes, to actual change, then, you know, why spend the effort? So on my end, what I try to do as much as possible is not just collect the data: here’s a summary of the points, here are some actions which are now part of the strategy, remember the connection to the strategy, and here’s why and when we are trying to attack what. So again, not a silver, uh, silver bullet.

Kovid Batra: Yeah. Yeah.

Clint Calleja: And then the second part, on engineering metrics. I think here, uh, I really rely on engineering leaders to be the glue, bringing those data points into the retrospectives. So the engineering managers are in the best position to connect those data points with the ways of working and the patterns seen throughout the sprints, and in an end-of-sprint review, you know, express: okay, here are the patterns that I see. Let’s talk about this. Let’s celebrate this, because it’s a, you know, huge milestone.

Kovid Batra: Makes sense. Great. Uh, Rens, you wanna add something?

Rens Methratta: No, I would agree. I think that’s a good call out. Uh, yeah. Maybe making the surveys more action-oriented would provide different results. Um, and we tried something where we tried to do our one-on-ones as a, as a daily survey. Yeah. I didn’t think it was successful, because I think people weren’t, um, weren’t seeing that individual response back from it. Right. It was just more like data collection for data aggregation purposes, which people didn’t seem to value.

Kovid Batra: Perfect. Perfect. Thank you so much, guys. Uh, this was an amazing session. Uh, thank you for your time. Thank you for sharing all your thoughts. It’s always a pleasure to talk to folks like you, who are open, take time out of their busy schedules, and give it to the community. Thanks once again.

Clint Calleja: Thanks for the invite. Yeah. And nice to meet you guys.

Rens Methratta: Same here, Clint.

Kovid Batra: All right, guys. That’s our time. Signing off for today. Bye-bye. Okay.

'How EMs Break into Leadership—Road to Success' with C S Sriram, VP of Engineering, Betterworks

How do you transition from being a strong Engineering Manager to an effective VP of Engineering? What challenges do leaders face as they scale their impact from team execution to organizational strategy?

In this episode of the groCTO Podcast, host Kovid Batra speaks with C S Sriram, VP of Engineering at Betterworks, about his career journey from an engineering manager to a VP role. He shares the hard-earned lessons, leadership principles, and mindset shifts that helped him navigate this transition.

What You’ll Learn in This Episode:

From IC to Leadership: How Sriram overcame early challenges as a new engineering manager and grew into an executive role.

Building a High-Performing Engineering Culture: The principles and frameworks he uses to drive accountability, innovation, and efficiency.

Balancing Business Goals & Technical Excellence: Strategies to prioritize impact, make trade-offs, and maintain quality at scale.

The Role of Mentorship & Coaching: How investing in people accelerates engineering success.

Scaling Leadership with Dashboards & Skip-Level 1:1s: How structured communication helps VPs and Directors manage growing teams effectively.

Closing with Inspiration: Sriram shares a poem he wrote, reflecting on the inner strength and vision required to succeed in leadership.

Timestamps

  • 00:00 — Let's begin!
  • 00:45 — Meet the Guest: Sriram
  • 03:08 — First Steps in Engineering Management
  • 06:14 — Lessons from Entrepreneurship
  • 07:15 — Building a Productive Team Culture
  • 09:51 — Defining and Enforcing Policies
  • 19:30 — Balancing Speed and Quality
  • 21:14 — Defining Quality Standards
  • 21:42 — Shift Left Approach to Quality
  • 21:58 — Mind Maps and Quality Requirements
  • 23:02 — Engineering Management Success
  • 24:18 — Transition to Leadership
  • 25:20 — Principles of Engineering Leadership
  • 27:31 — Coaching and Mentorship
  • 29:18 — Navigating Compensation Challenges
  • 34:14 — Dashboards and Skip-Level 1-on-1s
  • 37:18 — Final Thoughts and Reflections

Links & Mentions

Episode Transcript

Kovid Batra: Hi everyone, this is Kovid, back with another episode of groCTO by Typo. Today with us, we have a very special guest. He's VP of Engineering at Betterworks and comes with 20+ years of engineering and leadership experience. Welcome to the show, Sriram.

C S Sriram: Thanks. Thanks so much for having me over, Kovid, and thanks for the opportunity. I really appreciate it. 

Kovid Batra: No, it's our pleasure. So, Sriram, uh, today I think we have a lot to talk about, about your engineering and leadership experience, your journey from an engineering manager to an engineering leader. But before we get started on that, there is a small ritual that we follow on this podcast. To know you a little more, we would like to ask you one question. Tell us something about yourself from your childhood, from your teenage years, that defines who you are today. You have to share something from the past, so that we get to know the real Sriram.

C S Sriram: Sure. Yes. Uh, uh, I think the one thing that I can recall is something that happened when I was in my seventh standard. My then school principal, her name is Mrs. Anjana Rajsekar. I'm still in touch with her. She's a big inspiration for me. She founded and was running the school that I was studying in. She nudged me towards two things which I think have defined my life. The first thing that she nudged me towards was computers. Until then, I hadn't really touched a real computer. That school was the first place where I wrote my very first Logo and BASIC programs. Uh, so that was the first thing. And the second thing that she nudged me towards was just writing in general. And that gave me an interest in, uh, languages, in, uh, writing, reading, uh, poetry, short stories, novels, all of that. I think she kind of created those two very crucial parts of my identity, and that's what I would like to share.

Kovid Batra: That's really inspiring, actually. Teachers are always great in that sense. Uh, and I think you had one, so I really appreciate that. Thanks for sharing. And, Sriram, is there anything from your writing that you would like to share with us? Anything that you find really interesting, or that you wrote sometime in the past, which you think would be good to share here?

C S Sriram: Oh, I wasn't prepared for that. Uh.. 

Kovid Batra: No, that's fine. 

C S Sriram: Maybe, maybe towards the end. I'll try and see if I can find something towards the end. 

Kovid Batra: Sure, no problem. All right. So getting started with the main section, just to iterate this again, we are going to talk about your engineering leadership journey, specifically from an Engineering Manager to a VP of Engineering at Betterworks. I think the landscape changes, the perspective changes, and there are a lot of aspiring engineering managers and engineering leaders who are actually looking towards that career path. So I think this podcast would be really helpful for them to learn and to understand what exactly needs to be there in a person to go through that journey, and what challenges, what opportunities come along the way, and how to tackle them. So, to start with, tell us about your first engineering management experience, when you moved in, uh, from, uh, let's say, a tech lead or an individual contributor role to an EM role, and how things changed at that point. How was that experience for you? Was that overwhelming, or did it come easily to you, and were you ready when you actually arrived in that particular role or responsibility?

C S Sriram: I was a programmer once, so I'll start from index 0 instead of index 1. I had an, uh, index 0, uh, engineering management experience where I was given the designation of Engineering Manager for about a month. And I ran back to my CEO and said that I'm not doing management. Uh, take the designation away from me, take the people away from me. I'm not doing it anymore. Uh, that was index 0, and index 1 was when I started my own software consultancy, roughly about 10 years ago.

Kovid Batra: Okay. 

C S Sriram: And then, I didn't realize I would have to do management. I just wanted that thrill of running my own business. I guess to paraphrase Shakespeare, you know, "Some people are born managers. Some people are made managers. Some people have management thrust upon them." So it was thrust on me. It was out of necessity that I got into management, and for the first five years, I really messed it up. Because I was running a business, I was also trying to get some coding done for the business. I was also trying to win sales. I was trying to manage the people, recruit them, and all of it. I didn't do a great job of it at all. And then, when I joined Betterworks was where I think I really did something meaningful with, uh, engineering management. I took the time to study some first principles, understood where I went wrong, and corrected. So yeah, that's how I got into management. And it wasn't scary the first time, because I didn't know I was doing it. Uh, so I didn't know I was doing a lot of things wrong, so there was no fear there. Uh, but the second time around, when I started at Betterworks, I was very scared of a lot of things. There were a lot of insecurities. The fact that I was letting go of control, and most of the time intentionally, that was a very scary thing. But yeah, it's comfortable at the moment.

Kovid Batra: Perfect. Perfect. But I'm sure that experience of running a business would have brought a lot of aspects which you could not have learned if you were on a typical journey of a job, where you were a software engineer and then moved into, let's say, a tech lead or a management role. I'm sure that piece of your entrepreneurship would have taught you a lot more about bringing more value, or bringing more of a business aspect to the engineering. Was it so?

C S Sriram: A 100% yes. I think the main thing that I learned through that was that software doesn't exist in isolation. A team doesn't exist in isolation. You building the most beautiful user experience or design, you building the most beautiful software, the most beautiful piece of code that you've ever written, uh, means nothing if it doesn't generate some sort of business value. I think that was the biggest lesson that I took away from that, because we did a lot of work that I would call very good engineering work, but extremely poor from the business side. I understood that importance, that, you know, it always has to be connected to some business outcome.

Kovid Batra: Great. I think there must be some good examples, some real life examples that you would like to share from your engineering management stint that might revolve around having a good culture, that might revolve around building more or better processes in the team. So could you share something from your start of the journey or maybe something that you're doing today? 

C S Sriram: Definitely. Yes, I can. I think I'll start with, uh, the Betterworks/Hyphen journey. So when I joined, it was called Hyphen. We were an employee engagement, uh, SaaS platform. We had a team of really talented engineers, a very capable, uh, Director of Product, uh, and an inspirational CEO. All the ingredients were there to deliver success. But when I joined the team, they hadn't completed even a single story. Forget about a feature or a complete, uh, you know, product; they hadn't completed, uh, a single story in over two quarters. What I had to do in that case was just prioritize shipping over everything else. Like, there were a lot of distractions, right? The team was talking about a lot of things. There was recruitment. There was the team culture, process, et cetera, et cetera. I think the first thing that I did there was, after a month of observation, I decided that, okay, sprint one, somebody has to ship something. And just setting that one finish line that people have to cross built up the momentum that was required, uh, and it kept pushing things forward. And I got, uh, hands-on in a way that I hadn't gotten hands-on before. Like, usually I would've jumped into the code and started writing code myself. That was my usual approach until then. This time, I got hands-on on the product side. Uh, I worked with the, uh, Director of Product, uh, to prioritize the stories, to refine acceptance criteria, uh, to give a sprint goal, and then tell everybody: okay, this is the goal. This is what is included. This is what is not included. Get it done. And it happened. Uh, so that's how that got started.

Kovid Batra: Perfect. So I think when you're sharing this, this is from your initial phase, when you actually started working as an Engineering Manager, working directly with the product, uh, managing the team, uh, getting into that real engineering management role, bridging that gap. What exactly led you to or made you understand that priority? Like, you went in, saw a lot of things distracting you, people and culture changes. So, initially, when you moved into such a space, which is completely new, right, what exactly made you realize, okay, one thing is, of course, they didn't ship anything for, let's say, a good amount of time, so you had to prioritize that, and you went in with that goal. But if you just focus on one thing and do not take people along, there is a lot of resistance that you get. So when you were deciding to do this, uh, you cannot be ruthless when you are joining in new. So was there any friction? How did you deal with it? How did you bring everyone on the same page? Is there anything specific you would like to share from that part?

C S Sriram: Yeah, yeah. See, the diagnosis was actually pretty straightforward, because I had a very supportive CEO at that time. Orno, that was his name. So he was very supportive. When I told him that, okay, I'm going to take a month to just observe, don't expect any changes from me, uh, in the first month, uh, I don't want to just start applying changes, he was very supportive of that, and I was given a month to just observe and make my own notes. Once I diagnosed the problem, the application of the solution took a bit of time. The first thing was to build culture. Uh, now, a lot of people say a lot of things about, uh, culture. Uh, to me, what culture means is: what are the negotiable and non-negotiable policies within your team? Uh, like, what is acceptable? What is not acceptable? Uh, and even within acceptable, what are the gray areas, where a bit of negotiation is allowed? Uh, so that was the first thing that I wanted to sort out. The way I did that was, like I said, I spent a month studying the team, and then I proposed a set of working rules. I talked about working hours. Uh, that was the time when we were all in office. So presence in, uh, office, the work, how do we do work handoffs? How do we make decisions? All of those things. Uh, and I presented some of them saying that, see, I am tasked with getting some things done, so these are non-negotiable for me. Uh, like, you don't have the space to negotiate and say that you are not going to be in office for two weeks, for example. Or you're not going to say that, uh, I won't write automated tests. Those are my, uh, you know, non-negotiable areas. I'm owning them. But you can say that, uh, I will be 10 to 15 minutes late because of Bangalore traffic. So we had that kind of agreement that was made, and we had an open discussion about it. That was the first presentation that I made to the team, saying that these are our working rules and this is how we'll proceed. And I need explicit agreement from all of you. If anybody is not going to agree, you let me know, we'll negotiate, and we'll see where we can get to. Now, once that happened, uh, there was the question of enforcing the policy. And I think this is where I failed in my previous attempt at management. I had a set of policies, but I wasn't very consistent in enforcing them. This time I had a system, where I said that, okay, if someone strayed from a policy, if someone said that they'll do something but they haven't done it, my usual reaction would have been either, if I thought it wasn't so important, to ignore it, or, if it was important, you know, to go ballistic, lose my temper, ask questions, and, uh, you know, do that boss kind of stuff. This time I took a different approach, which was curiosity over, uh, you know, trying to be right. So I spent a bit of time to understand: why did, you know, this miss happen? Why did this person stray from the agreed policy? Was it because the policy itself wasn't well-defined? Uh, or did they agree to the policy without fully understanding it? Or was it just a, you know, human error that can be corrected? Or is it an attitude issue that I can't tolerate? Now, in most cases, what happened is, once I started putting these curious questions and I started sharing them, people started aligning themselves automatically, because nobody wants to be in that uncomfortable position of having to explain themselves.
It's just human nature to, you know, avoid that and correct themselves. So that itself gave me the results most of the time. In a few cases, uh, the policy wasn't well-defined or it wasn't well-understood, in which case I had to refine it and make sure it was explained very clearly. And the last thing was, uh, in a few cases where, despite repeated feedback, they couldn't really correct themselves, I had to make the decision that, okay, this person is not suited for what I want, and I'll have to let them go. And we've made some decisions like that also.

Kovid Batra: I think setting those ground rules becomes very important, because when you go out and just explicitly do something, assuming that, okay, this is, uh, something that should be followed, and people are not aligned on that, that creates more friction than if they're aware beforehand of what needs to be done and how it needs to be done. So I think, stepping into that role and taking up that responsibility, it's a good start to first diagnose, understand what's there, and then set some ground rules with negotiables and non-negotiables. I think it makes a lot of sense. And when you share those specific details, it aligns all the more with my thought of how one should go out and take up this responsibility. But Sriram, uh, when you jump into that role, there are a lot of things that come into your mind that you need to do as an Engineering Manager. What are those top 3-4 things that you think you need to consistently deliver on? I mean, this could be something very simple, related to how fast your teams are shipping. It could be something related to the quality of the work that is coming out. So, anything. But, in your scenario, what were your business priorities? And according to that, as an engineering manager, what were your KPIs, or what were those things that you mostly aligned with and tried to deliver consistently?

C S Sriram: Yeah, so two things mattered most. And I think they still matter even today for me. The first is: what business value is the team delivering? A lot of people get confused; they say they have high-performing teams when actually the teams are just shipping features very regularly, uh, instead of creating business value. Uh, that's something that I ask my managers a lot as well. Like, what is the business problem that your team is solving? Not just, what is the feature that they are shipping next? So that is the first thing. So, um, having a very clear sprint goal, if you're doing a sprint goal, or a quarterly goal, that says this is the business outcome that we are achieving. Maybe you're trying to increase the signups. Maybe you're trying to increase the revenue. You're trying to increase the retention. You're trying to solve a specific problem for a customer. A customer is struggling with a particular business outcome at their end, and that is what your software is solving. And once you set that as the priority, then adjusting your scope, adjusting what you want to deliver to meet that outcome, becomes very easy. Like, I've seen cases where we thought we would have to deliver like 10 or 15 use cases for a feature, but narrowing it down to five, uh, gave us more results, because we were solving what was most valuable for the customer rather than shipping everything that we thought we had to ship. So that is one of the biggest metrics that I try to use. Like, what final business outcome can I connect this team's output to?

Kovid Batra: Makes sense. Almost every day we deal with this situation, and when I say 'we,' I mean people who are in those positions where they have to take decisions that would impact the business directly. Of course, a developer also writes code and it impacts the business, but I hope you understand where I'm coming from. Like, you are in that position where you're taking decisions and you are managing the team as well. So there is a lot of hustle and bustle going on on a day-to-day basis. How did you make space for doing this? Uh, for prioritizing even more, highlighting those five things out of those 15 that need to be done. What kind of drive do you need, or what kind of process do you need to set for yourself, to come to that point? Because I strongly believe, having talked to so many engineering leaders and engineering managers, this one quality has always stood out in all high-performing, uh, engineering leaders and engineering managers. They value the value delivery. Like, anything else comes second. They are so focused on delivering value, which makes sense, but how do you make that space? How do you stay focused on that part?

C S Sriram: Uh, see, I think anybody who makes the transition to management from engineering has a big advantage there. If you are a good engineer, you would have learned to define the problem well before you solve it. Uh, you would have learned to design systems. You would have learned to visualize, you know, the problem and the solution before you even implement it. Like, a good engineer is going to draw a high-level and a low-level system diagram before they write the first line of code. They will write tests before they write the first line of code. It is just about transposing that into management. This means that before your team starts working on anything crucial, you spend that focus time, and that's where I think a lot of engineering managers get confused as well. I see a lot of engineering managers talking about, oh, I'm always in meetings. Uh, I don't know what to do. I'm always running around. Uh, having that focus time for yourself, where you are in deep work, trying to define a problem and to define its solution, that makes a huge difference. And when people try to define a problem, I think it always helps to use some sort of standard framework. Like right now, uh, as an engineering leader, most of my problem definitions are strategy definitions. Uh, like, what policies, you know, should the team pursue for the next one to two quarters? What policies drive things like recruitment, uh, promotion, compensation, management, et cetera, et cetera? Now, I try to follow some sort of framework. Like, I try to follow a diagnosis, policy, risks, and actions framework. That is how I define my, you know, uh, policies. And for each of those problems that you're trying to define, there are usually standard frameworks available, so that you don't have to break your head trying to come up with some way of defining them. I think leaning on that sort of structure helps as well.

Kovid Batra: Got it. 

C S Sriram: And over time, that structure itself becomes reusable. You will tweak it. You will see that some parts of the structure are useful, some parts are not, and it gets better over time. 

Kovid Batra: Makes sense. For an engineering manager, I think these are some really good lessons, and coming with the specific examples that you have given, I think they become even more valuable. One thing that I always want to understand: how much do you prioritize quality over fast shipping, or fast shipping over quality?

C S Sriram: Yeah. Uh, okay. So I had, uh, an ex-manager who is my current mentor as well, and he keeps saying that 'slow is smooth and smooth is fast.'

Kovid Batra: Yeah, yeah. 

C S Sriram: Okay, so I don't aim for just shipping things fast, but I aim to create systems that enable both speed and quality. I think a lot of engineering managers always try to improve immediate speed, and that's almost an impossibility. Like, you can't fix a pipeline while things are running through it already; uh, you need to step away from the pipeline, and you're going to get speed outcomes over time, quality outcomes over time. I think that is the first step towards speed and quality. You need to accept that any improvement will take a little bit of time. Now, once you accept that, then defining these things, again, makes a huge difference. If it's speed, what is speed for you? Is it just shipping features out, or is it creating value faster? The best way of increasing speed I've seen is just measuring team cycle time. Like, you don't even have to put any solutions in place; just measuring and reporting the cycle time to the team automatically starts moving things forward, because nobody likes to see that it takes two weeks to move a ticket to 'done'. And people start getting curious and they start finding out: okay, I'm not moving that fast. I'm actually working a lot, but I've moved only one ticket in two weeks. That's not acceptable. Then you see things changing there. Same thing with quality. I like to define clearly what quality means. Like, what is a P0, P1 test case that you cannot afford to miss? What are acceptable non-functional requirements? Like, you know, uh, not every team has to build the most performant solution. There may be a team that might say, okay, a one-second latency is acceptable for us; a hundred requests per second throughput is more than sufficient for us. So building with that in mind also makes a huge difference. And once you do that, for quality, I would always say the best thing to do is to shift quality left. The earlier you enforce quality in your process, the better it is. And there are standard techniques to do that. You can use mind maps, you can use the Three Amigos calls, automated tests, et cetera, et cetera. One example that I can think of is that when I was working at Hyphen, uh, there was a set of data reporting screens, a set of reports which all had very similar kinds of charts, groupings, and filters. So I spent time with QA to develop some mind maps where we listed all the use cases that were common to all the reports. And we kind of had these mind maps put up during the sprint review calls, during the QA review calls, and all of it. If a developer is going to start development, they have it on their screen before they start developing. The developer develops to match those quality requirements rather than trying to catch up with the quality later on. Uh, and this is another analogy that I like using as well. Developers, when they write code, should write as if they are writing an exam where the answers are already available to them, and they should really try to score the highest marks possible. Uh, no need to keep anything secret or anything. I think that's an approach that testers should also adopt. You write the exam with every answer available and you score the maximum marks.
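
For anyone who wants to try the "just measure and report it" step Sriram describes, here is a minimal sketch in Python, assuming your issue tracker can export when work started and finished on each ticket. The ticket keys, dates, and field names are hypothetical, for illustration only.

```python
from datetime import datetime
from statistics import median

# Hypothetical ticket export; in practice this would come from your
# issue tracker's API. Keys, dates, and field names are made up.
tickets = [
    {"key": "TEAM-101", "started_at": "2024-03-01", "done_at": "2024-03-04"},
    {"key": "TEAM-102", "started_at": "2024-03-02", "done_at": "2024-03-12"},
    {"key": "TEAM-103", "started_at": "2024-03-05", "done_at": "2024-03-06"},
]

def cycle_time_days(ticket: dict) -> int:
    """Days from work starting on a ticket to it reaching 'done'."""
    start = datetime.fromisoformat(ticket["started_at"])
    done = datetime.fromisoformat(ticket["done_at"])
    return (done - start).days

times = [cycle_time_days(t) for t in tickets]
print(f"Median cycle time: {median(times)} days")

# Surfacing the slowest tickets is what sparks the curiosity he describes.
for t in sorted(tickets, key=cycle_time_days, reverse=True):
    print(f"  {t['key']}: {cycle_time_days(t)} days")
```

Reporting the median rather than the mean keeps one pathological ticket from skewing the picture of the typical experience.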

Kovid Batra: Makes sense. So I think, in your EM journey, if you had to sum it up for us: when was the point when you felt that, okay, you're doing well, and what were the top 2-3 things which you did as an EM that really made you that visible, that accomplished in a team, that you were ready for the next role?

C S Sriram: Got it. I think it took me about a year at Hyphen. So that would be about six years after I started engineering management: 5 years running my own consultancy and then 1 year at Hyphen. The outcome that made me feel that, okay, I've done something with engineering management, was that we shipped the entire product. It was a migration from JavaScript to TypeScript, from an old UI to a new UI, a complete migration of a product that was already in use. We hit $2 million ARR and we got acquired by Betterworks. So those were good, uh, you know, outcomes that I could actually claim as a victory for myself and for the team. And that was, uh, what I thought was success at that time. But what really feels like success right now is that engineers from that time call me and tell me that, you know, working with me during that time was really good, and they are yet to find that kind of culture, that kind of clarity. So that, you know, turned out to be a good success.

Kovid Batra: Makes sense. Okay, so now, moving from that point of engineering management to a leader, how has your perspective changed? I think the company altogether changed, because now you are part of Betterworks, which is a bigger organization. You're working with global teams that are situated across different countries. How your perspective, how your approach to the overall delivery, the value delivery, has changed, I would like to hear that.

C S Sriram: Yeah. So, Betterworks, I would split it into two halves, two and a half years each, uh, you know, leaving aside that first year at Hyphen. The first two and a half years, I was working towards more of a directorship kind of role, where I wanted to own complete execution. That was the time I learned how to manage managers, and how to get a few other things done as well, like, uh, tying the engineering team's output to the business outcome. And the second two and a half years were really about strategy, about executive management. Now, the first principle that I learned was that your first team changes once you start getting on this journey. Until you're an engineering manager, the team that you manage is your team. You belong to that team. That's kind of the outcome that you always look at. Once you start this journey towards engineering leader, that is not your first team anymore. Your first team is the team that you work with, which is your Co-Directors, Co-VPs, your, you know, immediate boss. That leadership team is the core team. You're creating value for that team. And the team that you manage is a "tool" that you use to get those results. Uh, and I would, you know, put quotation marks around "tool", because you still need to be respectful and empathetic towards people. It's not just using them, but that's kind of the mindset that you need to adopt. The side effect of this mindset is that you have to learn to be alone, right? At least when I was an Engineering Manager and all of it, uh, there were these moments when you could gossip and complain about what's happening; the higher up you go, the less, uh, you know, space you have for all of that. Um, like, who can you go and complain to when you have all the power to, you know, do anything? You have the power to do everything that you want. So you have to learn to be alone and to operate by yourself. That is the second side effect of that. The next principle that I learned was to give up what you take or build. Luckily, it came easily to me at that point. I'm really thankful for that. Like, I had built this whole product, and, you know, we completed the migration and we got acquired by Betterworks, and all of it was something that I was really proud of. But the moment the first opportunity came, I delegated it to someone else. Now, if I had held on to that product because it was my baby, I wouldn't have had the opportunity to scale Betterworks India. We went from, I think, around five or six engineers; today we are almost 45+, uh, engineers in India. That sort of 5x, 7x scale would have been very difficult to achieve if I had held on to any of the babies that I was building at that time. So that sort of giving up things, uh, is something that's very important. And the next thing that I learned was to coach engineering managers. You basically have to repeat what you did with your developers. Like, once you manage developers, you don't develop. You delegate. You try to ask them questions. You nudge them and you guide them. You need to repeat the same process with managers as well. That's another thing that I had to learn. And the last thing that I had to learn was setting up teams for success. This was a big challenge, because most of my managers were first-time managers at that time. So the potential for failure was huge.
So I had to take my time to make sure I set boundaries within which they can make mistakes, fail, and learn. And that was a balance because I couldn't set boundaries that were so safe that they'll never make a mistake.

Kovid Batra: Yeah, that makes sense. 

C S Sriram: And at the same time, yeah, yeah, because there has to be that space, I think, you know. And at the same time, the boundaries can't be so open that they make mistakes that can turn into disasters. And luckily, I had good leaders at Betterworks, uh, who guided me through that. So that worked very well. And I also had to spend a lot of time sharing these success stories and learnings with peers and with leadership. Uh, that was something that I didn't invest a lot of time in as a manager. That sort of story building, narrative building, both within the team and outside the team, that was another skill that I had to learn.

Kovid Batra: Perfect. So when you talk about the story building and bringing up those stories to your team, which is the leadership, what exactly would you tell them? Can you give some example? Like, for someone who's listening to you right now, what kind of situations, and how those situations should be portrayed to the leadership team, would bring better visibility of your work as an engineering director to the overall leadership?

C S Sriram: Sure. Yes. I think a classic example would be compensation. So I can go back to just around the COVID time, when suddenly investment was booming. The job market was booming. Every candidate that we were trying to hire had three to four offers. We were not assured of the candidate joining us even after they had accepted, and people were poaching our engineers left, right, and center as well. So that was a crazy time. Betterworks is a very prudent business. That's something that I'm always thankful for. We don't go and spend money like water just because we've got investment. And this means that, as an Engineering Manager, if I'm going to go and talk about compensation, about business planning and all of it with my leadership team, most of the time I'm just going to say that, hey, this person is demanding so much, that person is demanding so much, I don't know what to do. That is an Engineering Manager approach, and it is justified, because an Engineering Manager, depending on what sort of company and what sort of scale you are in, has limited scope in what they can actually do in these cases. But the story that you take as an engineering director is: you spend time collecting data from the market to see what the market compensation rate is. You see how many exits have happened in your team. How many of those exits are because of compensation? What percentages have those people been offered outside in the market? You collect all that data. And you can't even stop at saying, okay, I'll put all this data in front of management and I'll tell them that, see, we are losing people because we are not able to match requirements, we need to change our, uh, you know, numbers. Even that is not sufficient, because that is still a director-level, uh, you know, solution that you can offer. If you want to offer a truly executive-level solution, you are going to look at costs in the business. You're going to look at optimizations that you can do. You're going to come up with a system saying, this is how compensation can be managed. Again, most of the stories that I tell my executive team come to the point where there is a problem, there are potential solutions, and usually I even recommend one solution out of the solutions that I'm suggesting. Uh, and this really helps the leadership team as well, because when I think of my boss or my CEO, they are possibly dealing with 20 things that are more complex than anything I've ever seen in my life.

Kovid Batra: Right. 

C S Sriram: So how can I ensure that, A, I get the decision that I think is right, and at the same time, I give them enough information so that they can correct me if my decision is wrong? Uh, both are crucial. You know, one of the scariest things that can happen to me is that I get the decision that I want and the decision turns out to be wrong. So giving myself..

Kovid Batra: That's a balanced approach, where you are giving the person an option, an opportunity to at least make your decision even better, if that is possible and if you're missing out on something. So that totally makes sense. And putting out things to the leadership in such a way, showing how you're solving them, would be really good. But one thing that I could understand from your EM to EL transition: cost and budget kind of things start coming in more, as compared to an EM position. Is that right?

C S Sriram: 100% yes. That's what I've seen with all the great engineering leaders that I've worked with as well. Yes, they love engineering. They get into, uh, engineering, architecture, and development at whatever levels of interest and time they have. But there is always the question of: how much value am I getting for the money that I'm spending? And I think that is a question that any manager who wants to become a leader should learn to ask. Like, uh, about two and a half years ago, when I was asking my then manager how to get into leadership, that was the first thing that he said: "Follow the money. Try to understand how the business works. Try to understand where sales come from. Try to understand where the outflow goes." That made a huge difference.

Kovid Batra: Totally. Makes sense. I think this is something that you realize more when you get into this position. But going back to an EM role also, if you start seeing that picture and you emphasize that part more, automatically your visibility, the kind of work that you're doing, becomes even better. Like, you're able to deliver what the business is asking. So, totally agree. But one thing always surprises me, and I ask this multiple times because everyone has a different approach to this problem: now you have a layer of managers who are actually dealing with developers, right? And there are situations where you would want to really understand what's exactly going on, how things are quality-wise, speed-wise, and you really don't have that much time to go out and talk to 45 engineering leaders, engineering managers, engineers, to understand what's exactly going on with them. So there must be some approach that you are following to have that visibility, because you can't just go blind and say, "Okay, whatever the engineering managers are doing, how I'm coaching them, would work out wonders." You have to trust them, but then you have to have a check. You have to understand what exactly is going on. So how do you manage that piece as a director here at Betterworks?

C S Sriram: Yeah, no, that was a very interesting coaching experience for me, where I worked with each of my managers for almost over six months to help them build that discipline. Like any good software engineer will tell you, polling is never a good idea. If you think of your manager as a software service, you don't want to ask them every half an hour or one hour, 'what's the update?' Uh, I like push-based updates. So I helped them set up dashboards. You know, dashboards that talk to them about their team's delivery, their team's quality, uh, their team's motivation and general status, and all of it. Uh, and I worked with them to design these for their purpose. Uh, I think that was the first thing that I was very clear about. This is not a dashboard that I'm designing so that they can present a story to me, but a dashboard that they are using to solve their problems, and I'm just peeking in to see what's happening. So that made it very usable. I used those dashboards to inform myself. I asked the questions that I would expect a manager to ask of them. And over time, you know, they got into the habit of asking them themselves, because in every 1-on-1 we'd spend 10-15 minutes discussing those numbers. By the time we had done it for three to six months, it had become internalized. They knew to look for, you know, signs; they knew to look for challenges. So that became quite natural from there on. And I again want to emphasize that one part: these were dashboards that were designed to solve their problems. If there was a dashboard or information that I had to design to relay some information or story to a leadership team or to some other team or something like that, that would be something very different. But this is primarily a dashboard that a team uses to run itself. And I was just peeking into that. I was just looking at it to gather some information for myself. So that made a big difference. The second thing that I also did was skip-level 1-on-1s. It took me, I think, almost six months to learn how to do skip-level 1-on-1s, uh, because of the two challenges that I faced with them. The first was that it turned out to be another project status update session initially. I was getting the same information from 2-3 places, which was inefficient. It was also a waste of time for the engineers to come and report what they'd already done. And the second thing was, there were a lot of complaints coming into my skip-level 1-on-1s initially as well, especially more so because many of the engineers that I was doing skip-level 1-on-1s with were engineers whom I had managed earlier. So I had to slowly cut that relationship and just connect them to their new managers. And I started turning the skip-level 1-on-1s into sessions where I can coach and give people some guidance. And I can also use them to get the pulse of the team. Like, is the team generally positive, or is the team generally frustrated? And who are the second-level leaders that I need to be aware of? Whose stories do I have to carry on? Who do I think can become the core of the business after my first-level leaders? So I changed the purpose of the skip-level 1-on-1s, and over time that also developed into a good thing.
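
As a rough illustration of the push-based dashboards Sriram mentions, here is a minimal sketch, assuming the per-team numbers are already collected somewhere. The metric names, thresholds, and team name are hypothetical, not Betterworks' actual setup.

```python
# Hypothetical weekly digest that is pushed to a manager instead of
# anyone polling for status. All names and thresholds are illustrative.
team_metrics = {
    "delivery":   {"stories_done": 14, "sprint_goal_met": True},
    "quality":    {"escaped_bugs": 2, "p0_test_coverage": 0.93},
    "motivation": {"survey_score": 4.1},  # out of 5
}

def weekly_digest(team: str, metrics: dict) -> str:
    """Format one team's numbers as a digest a manager can act on."""
    lines = [f"Weekly digest for {team}:"]
    for area, values in metrics.items():
        summary = ", ".join(f"{key}={value}" for key, value in values.items())
        lines.append(f"  {area}: {summary}")
    # Flag anything worth raising in the next 1-on-1 or retrospective.
    if metrics["quality"]["escaped_bugs"] > 0:
        lines.append("  flag: escaped bugs > 0, worth a retro discussion")
    return "\n".join(lines)

print(weekly_digest("Team A", team_metrics))
```

The design point, as he says, is that the dashboard solves the manager's own problems; the director only peeks at the same view rather than requesting a separate report.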

Kovid Batra: Great. Great. There is a lot that we can go in and talk about here, but we are running out of time. So I will put a pause to this session here. But before we end, I would love for you to share one of the best learnings that you think made you an accomplished engineering leader, and that can drive real growth for someone who is in that position and looking for the next job.

C S Sriram: Got it. Yeah. The one thing that, uh, was a breakthrough learning for me was mentorship and coaching. My then boss, who moved on to another company, I spoke with him and turned him into a mentor. His name is Chris Lanier. He's an exceptional executive. I connect with him very regularly to discuss a lot of the challenges that I face. It helps me in two ways. The first thing is that I get an outsider's perspective on certain problems that I can't even take to my leaders, because those are problems I am expecting no answers for. And the second thing is, the more you grow in this career, the bigger the imposter syndrome gets. So the reassurance that someone with the kind of experience and success that he has still goes through all of those things, that's quite reassuring. You know, you steady yourself and then you move forward. The next thing that I would recommend for anybody who is looking at going into this role is to get a coach. A coach is different from a mentor. A coach is going to diagnose challenges that you have and work on specific areas. I had two specific challenges about two years ago, and Betterworks was generous enough to give me a coach at that time. Challenge number one was that my peer-to-peer relationships were terrible. It's not even that they were poor relationships; there were no relationships at all. As an introvert, I didn't see the value of building them either. The second was public speaking skills. Almost 40% of my speaking was filler words. So I worked on both of those with the help of a coach, got them addressed, and they made a huge difference. And at this level, you can't afford unknown unknowns, the way you can at an engineer level or a manager level. If you don't know what you're missing, that can turn into a disaster for both the business and for you at the executive level. So a mentor and a coach are two things that I would highly recommend.

Kovid Batra: Makes sense. And I think I can't agree more on that front, because we as humans have this tendency to stay in our zones and think that whatever we are doing is fine and we are doing the right things. But when a third-person perspective comes in, it humbles you, gives you more perspective to look at things, and helps you improve way faster than you could have from your own journey or your own mistakes alone. So I totally agree on that. And with that, I think, thanks a lot, Sriram. This was a really good experience.

C S Sriram: Yeah, sorry to, sorry to interrupt you. If you've got a minute, I did pick something to read. You asked at the beginning, something from my writing, do we have a minute for that? 

Kovid Batra: Yes, for sure. Please go ahead. 

C S Sriram: Cool. Perfect. Okay. This is something that I wrote in 2020. Uh, it's a poem called "No Magic". This is how it goes: 

There is no magic in this world.
No magical letter shall arrive
to grant us freedom from the cupboard under the stairs,
and the tyrants who put us there.
No wizard shall scratch our door
with his mischievous staff
and pack us off unwilling on an adventure
that will draw forth our hidden courage.
No peddler shall sell us a flying horse
made of the darkest ebony
to exile us away to mystic lands
and there to find love and friendship.
No letters, no wizards, no winged horses.
In our lives of facts, laws, and immovable rules,
where trees don’t walk, beasts don’t talk,
and we don’t fly.
Except…
when we close our eyes and dream some dreams,
of magic missiles that bring us freedom,
of wily wizards that thrust us into danger,
of soaring speeds that lead us to destiny.
And thence we fly from life to hope and back again.
Birds that fly from the nest to sky and back again.
There is no magic in the world
but in the void of the nests of our mind.
The bird with its hollow bones,
where will it fly, if not in the unreachable sky?

Kovid Batra: Amazing! I mean, I could get like 60% of it, but I could feel what you are trying to say here. And I think it's what's within us that makes us go far, makes us go everywhere. It's not the magic; we need to believe in the magic that we have in us. So I think, a really inspiring one.

C S Sriram: Thanks. Thank you so much. 

Kovid Batra: Great, Sriram, this session was really amazing. We would love to connect with you once again. Talk more about your current role, more into leadership. But for today, I think this is our time. Thank you so much. Thank you for joining. 

C S Sriram: Absolutely. Thanks for having me, Kovid. I really enjoyed it.

Developer Productivity in the Age of AI

Are you tired of feeling like you’re constantly playing catch-up with the latest AI tools, trying to figure out how they fit into your workflow? Many developers and managers share that sentiment, caught in a whirlwind of new technologies that promise efficiency but often lead to confusion and frustration.

The problem is clear: while AI offers exciting opportunities to streamline development processes, it can also amplify stress and uncertainty. Developers often struggle with feelings of inadequacy, worrying about how to keep up with rapidly changing demands. This pressure can stifle creativity, leading to burnout and a reluctance to embrace the innovations designed to enhance our work.

But there’s good news. By reframing your relationship with AI and implementing practical strategies, you can turn these challenges into opportunities for growth. In this blog, we’ll explore actionable insights and tools that will empower you to harness AI effectively, reclaim your productivity, and transform your software development journey in this new era.

The Current State of Developer Productivity

Recent industry reports reveal a striking gap between the available tools and the productivity levels many teams achieve. For instance, a survey by GitHub showed that 70% of developers believe repetitive tasks hamper their productivity. Moreover, over half of developers express a desire for tools that enhance their workflow without adding unnecessary complexity.

Understanding the Productivity Paradox

Despite investing heavily in AI, many teams find themselves in a productivity paradox. Research indicates that while AI can handle routine tasks, it can also introduce new complexities and pressures. Developers may feel overwhelmed by the sheer volume of tools at their disposal, leading to burnout. A 2023 report from McKinsey highlights that 60% of developers report higher stress levels due to the rapid pace of change.

Common Emotional Challenges

As we adapt to these changes, feelings of inadequacy and fear of obsolescence may surface. It’s normal to question our skills and relevance in a world where AI plays a growing role. Acknowledging these emotions is crucial for moving forward. For instance, it can be helpful to share your experiences with peers, fostering a sense of community and understanding.

Key Challenges Developers Face in the Age of AI

Understanding the key challenges developers face in the age of AI is essential for identifying effective strategies. This section outlines the evolving nature of job roles, the struggle to balance speed and quality, and the resistance to change that often hinders progress.

Evolving Job Roles

AI is redefining the responsibilities of developers. While automation handles repetitive tasks, new skills are required to manage and integrate AI tools effectively. For example, a developer accustomed to manual testing may need to learn how to work with automated testing frameworks like Selenium or Cypress. This shift can create skill gaps and adaptation challenges, particularly for those who have been in the field for several years.

Balancing Speed and Quality

The demand for quick delivery without compromising quality is more pronounced than ever. Developers often feel torn between meeting tight deadlines and ensuring their work meets high standards. For instance, a team working on a critical software release may rush through testing phases, risking quality for speed. This balancing act can lead to technical debt, which compounds over time and creates more significant problems down the line.

Resistance to Change

Many developers hesitate to adopt AI tools, fearing that they may become obsolete. This resistance can hinder progress and prevent teams from fully leveraging the benefits that AI can provide. A common scenario is when a developer resists using an AI-driven code suggestion tool, preferring to rely on their coding instincts instead. Encouraging a mindset shift within teams can help them embrace AI as a supportive partner rather than a threat.

Strategies for Boosting Developer Productivity

To effectively navigate the challenges posed by AI, developers and managers can implement specific strategies that enhance productivity. This section outlines actionable steps and AI applications that can make a significant impact.

Embracing AI as a Collaborator

To enhance productivity, it’s essential to view AI as a collaborator rather than a competitor. Integrating AI tools into your workflow can automate repetitive tasks, freeing up your time for more complex problem-solving. For example, using tools like GitHub Copilot can help developers generate code snippets quickly, allowing them to focus on architecture and logic rather than boilerplate code.

  • Recommended AI tools: Explore tools that integrate seamlessly with your existing workflow. Platforms like Jira for project management and Test.ai for automated testing can streamline your processes and reduce manual effort.

Actual AI Applications in Developer Productivity

AI offers several applications that can significantly boost developer productivity. Understanding these applications helps teams leverage AI effectively in their daily tasks.

  • Code generation: AI can automate the creation of boilerplate code. For example, tools like Tabnine can suggest entire lines of code based on your existing codebase, speeding up the initial phases of development and allowing developers to focus on unique functionality.
  • Code review: AI tools can analyze code for adherence to best practices and identify potential issues before they become problems. Tools like SonarQube provide actionable insights that help maintain code quality and enforce coding standards.
  • Automated testing: Implementing AI-driven testing frameworks can enhance software reliability. For instance, using platforms like Selenium and integrating them with AI can create smarter testing strategies that adapt to code changes, reducing manual effort and catching bugs early.
  • Intelligent debugging: AI tools assist in quickly identifying and fixing bugs. For example, Sentry offers real-time error tracking and helps developers trace their sources, allowing teams to resolve issues before they impact users.
  • Predictive analytics for sprints/project completion: AI can help forecast project timelines and resource needs. Tools like Azure DevOps leverage historical data to predict delivery dates, enabling better sprint planning and management.
  • Architectural optimization: AI tools suggest improvements to software architecture. For example, the AWS Well-Architected Tool evaluates workloads and recommends changes based on best practices, ensuring optimal performance.
  • Security assessment: AI-driven tools identify vulnerabilities in code before deployment. Platforms like Snyk scan code for known vulnerabilities and suggest fixes, allowing teams to deliver secure applications.

Continuous Learning and Professional Development

Ongoing education in AI technologies is crucial. Developers should actively seek opportunities to learn about the latest tools and methodologies.

Online resources and communities: Utilize platforms like Coursera, Udemy, and edX for courses on AI and machine learning. Participating in online forums such as Stack Overflow and GitHub discussions can provide insights and foster collaboration among peers.

Cultivating a Supportive Team Environment

Collaboration and open communication are vital in overcoming the challenges posed by AI integration. Building a culture that embraces change can lead to improved team morale and productivity.

Building peer support networks: Establish mentorship programs or regular check-ins to foster support among team members. Encourage knowledge sharing and collaborative problem-solving, creating an environment where everyone feels comfortable discussing their challenges.

Setting Effective Productivity Metrics

Rethink how productivity is measured. Focus on metrics that prioritize code quality and project impact rather than just the quantity of code produced.

Tools for measuring productivity: Use analytics tools like Typo that provide insights into meaningful productivity indicators. These tools help teams understand their performance and identify areas for improvement.

How Typo Enhances Developer Productivity

There are many developer productivity tools available in the market for tech companies. One of these tools is Typo, the most comprehensive solution on the market.

Typo surfaces early indicators of developer well-being and actionable insights on the areas that need attention, through signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams, and it measures the team's overall productivity while keeping individual strengths and weaknesses in mind.

Here are three ways in which Typo measures team productivity:

Software Development Lifecycle (SDLC) Visibility

Typo provides complete visibility into software delivery. It helps development teams and engineering leaders identify blockers in real time, predict delays, and maximize business impact. It lets teams dive deep into key DORA metrics and understand how well they are performing against industry-wide benchmarks. Typo also provides real-time predictive analysis of how the team is performing, helps identify the best dev practices, and offers a comprehensive view across velocity, quality, and throughput.

This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks. It ensures that resources are utilized efficiently, resulting in enhanced productivity and better business outcomes.

AI Powered Code Review

Typo helps developers streamline the development process and enhance their productivity by identifying issues in your code and auto-fixing them using AI before you merge to master. This means less time reviewing and more time for important tasks, keeping code error-free and making the whole process faster and smoother. The platform also uses optimized practices and built-in methods spanning multiple languages. Besides this, it standardizes the code and enforces coding standards, which reduces the risk of a security breach and boosts maintainability.

Since the platform automates repetitive tasks, it allows development teams to focus on high-quality work. Moreover, it accelerates the review process and facilitates faster iterations by providing timely feedback. It also offers insights into code quality trends and areas for improvement, fostering an engineering culture that supports learning and development.

Developer Experience

Typo surfaces early indicators of developers' well-being and actionable insights on the areas that need attention, through signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. Its pulse surveys are built on a developer experience framework and are triggered automatically by AI.

Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.

By addressing these aspects, Typo combines data-driven insights with proactive monitoring and strategic intervention to create a supportive and high-performing work environment. This leads to increased developer productivity and satisfaction.

Continuous Learning: Empowering Developers for Future Success

With its robust features tailored for the modern software development environment, Typo acts as a catalyst for productivity. By streamlining workflows, fostering collaboration, integrating with AI tools, and providing personalized support, Typo empowers developers and their managers to navigate the complexities of development with confidence. Embracing Typo can lead to a more productive, engaged, and satisfied development team, ultimately driving successful project outcomes.

AI Code Reviews for Remote Teams

Have you ever felt overwhelmed trying to maintain consistent code quality across a remote team? As more development teams shift to remote work, the challenges of code reviews only grow: slowed communication, lack of real-time feedback, and the creeping possibility of errors slipping through.

Moreover, think about how much time is lost waiting for feedback or having to rework code due to small, overlooked issues. When you're working remotely, these frustrations compound; suddenly, a task that should take hours stretches into days. You might be spending time on repetitive tasks like syntax checking, code formatting, and manually catching errors that could be handled more efficiently. Meanwhile, you're expected to deliver high-quality work without delays.

Fortunately, AI-driven tools offer a solution that can ease this burden. By automating the tedious aspects of code reviews, such as catching syntax errors and formatting inconsistencies, AI can give developers more time to focus on the creative and complex aspects of coding.

In this blog, we'll explore how AI can help remote teams tackle the difficulties of code reviews and how tools like Typo can further improve this process, allowing teams to focus on what truly matters: writing excellent code.

The Unique Challenges of Remote Code Reviews

Remote work has introduced a unique set of challenges that impact the code review process. They are:

Communication barriers

When team members are scattered across different time zones, real-time discussions and feedback become more difficult. The lack of face-to-face interactions can hinder effective communication and lead to misunderstandings.

Delays in feedback

Without the immediacy of in-person collaboration, remote teams often experience delays in receiving feedback on their code changes. This can slow down the development cycle and frustrate team members who are eager to iterate and improve their code.

Increased risk of human error

Complex code reviews conducted remotely are more prone to human oversight and errors. When team members are not physically present to catch each other's mistakes, the risk of introducing bugs or quality issues into the codebase increases.

Emotional stress

Remote work can take a toll on team morale, with feelings of isolation and the pressure to maintain productivity weighing heavily on developers. This emotional stress can negatively impact collaboration and code quality if not properly addressed.

How AI Can Enhance Remote Code Reviews

AI-powered tools are transforming code reviews, helping teams automate repetitive tasks, improve accuracy, and ensure code quality. Let’s explore how AI dives deep into the technical aspects of code reviews and helps developers focus on building robust software.

NLP for Code Comments

Natural Language Processing (NLP) is essential for understanding and interpreting code comments, which often provide critical context:

Tokenization and Parsing

NLP breaks code comments into tokens (individual words or symbols) and parses them to understand the grammatical structure. For example, "This method needs refactoring due to poor performance" would be tokenized into words like ["This", "method", "needs", "refactoring"], and parsed to identify the intent behind the comment.
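To make this concrete, here is a minimal, illustrative Python sketch; the regex-based tokenizer is a deliberately simple stand-in for the richer parsers a production NLP pipeline would use:

```python
import re

def tokenize_comment(comment: str) -> list[str]:
    # Split a review comment into word tokens and punctuation symbols.
    return re.findall(r"[A-Za-z]+|[^\sA-Za-z]", comment)

tokens = tokenize_comment("This method needs refactoring due to poor performance")
print(tokens)
# ['This', 'method', 'needs', 'refactoring', 'due', 'to', 'poor', 'performance']
```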

Sentiment Analysis

Using algorithms like Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks, AI can analyze the tone of code comments. For example, if a reviewer comments, "Great logic, but performance could be optimized," AI might classify it as having a positive sentiment with a constructive critique. This analysis helps distinguish between positive reinforcement and critical feedback, offering insights into reviewer attitudes.
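As a hedged illustration, the sketch below uses a pretrained Hugging Face sentiment pipeline as a stand-in for the RNN/LSTM classifiers described above; the exact model, labels, and scores will vary:

```python
from transformers import pipeline  # pip install transformers

# A pretrained sentiment model stands in for custom RNN/LSTM classifiers.
classifier = pipeline("sentiment-analysis")

comment = "Great logic, but performance could be optimized"
result = classifier(comment)[0]
print(result["label"], round(result["score"], 3))  # e.g. POSITIVE 0.998
```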

Intent Classification

AI models can categorize comments based on intent. For example, comments like "Please optimize this function" can be classified as requests for changes, while "What is the time complexity here?" can be identified as questions. This categorization helps prioritize actions for developers, ensuring important feedback is addressed promptly.

Static Code Analysis

Static code analysis goes beyond syntax checking to identify deeper issues in the code:

Syntax and Semantic Analysis

AI-based static analysis tools not only check for syntax errors but also analyze the semantics of the code. For example, if the tool detects a loop that could potentially cause an infinite loop or identifies an undefined variable, it flags these as high-priority errors. AI tools use machine learning to constantly improve their ability to detect errors in Java, Python, and other languages.
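The following minimal sketch shows the flavor of such a semantic check, using Python's built-in ast module to flag a `while True` loop that contains no break; real static analyzers go much deeper than this:

```python
import ast

SOURCE = """
while True:
    do_work()
"""

def flag_suspicious_loops(source: str) -> list[int]:
    # Report line numbers of `while True:` loops with no break in their body.
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.While):
            always_true = isinstance(node.test, ast.Constant) and node.test.value is True
            has_break = any(isinstance(n, ast.Break) for n in ast.walk(node))
            if always_true and not has_break:
                findings.append(node.lineno)
    return findings

print(flag_suspicious_loops(SOURCE))  # [2]
```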

Pattern Recognition

AI recognizes coding patterns by learning from vast datasets of codebases. For example, it can detect when developers frequently forget to close file handlers or incorrectly handle exceptions, identifying these as anti-patterns. Over time, AI tools can evolve to suggest better practices and help developers adhere to clean code principles.

Vulnerability Detection

AI, trained on datasets of known vulnerabilities, can identify security risks in the code. For example, tools like Typo or Snyk can scan JavaScript or C++ code and flag potential issues like SQL injection, buffer overflows, or improper handling of user input. These tools improve security audits by automating the identification of security loopholes before code goes into production.

Code Similarity Detection

Finding duplicate or redundant code is crucial for maintaining a clean codebase:

Code Embeddings

Neural networks convert code into embeddings (numerical vectors) that represent the code in a high-dimensional space. For example, two pieces of code that perform the same task but use different syntax would be mapped closely in this space. This allows AI tools to recognize similarities in logic, even if the syntax differs.

Similarity Metrics

AI employs metrics like cosine similarity to compare embeddings and detect redundant code. For example, if two functions across different files are 85% similar based on cosine similarity, AI will flag them for review, allowing developers to refactor and eliminate duplication.
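As a rough sketch of the idea (the vectors below are toy values, not output from a real code-embedding model), cosine similarity can be computed directly with NumPy:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means the vectors point the same way; 0.0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for vectors from a code-embedding model.
func_a = np.array([0.90, 0.10, 0.40])
func_b = np.array([0.85, 0.15, 0.45])

if cosine_similarity(func_a, func_b) > 0.85:
    print("Flag for review: likely duplicated logic")
```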

Duplicate Code Detection

Tools like Typo use AI to identify duplicate or near-duplicate code blocks across the codebase. For example, if two modules use nearly identical logic for different purposes, AI can suggest merging them into a reusable function, reducing redundancy and improving maintainability.

Automated Code Suggestions

AI doesn’t just point out problems—it actively suggests solutions:

Generative Models

Models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) can create new code snippets. For example, if a developer writes a function that opens a file but forgets to handle exceptions, an AI tool can generate the missing try-catch block to improve error handling.

Contextual Understanding

AI analyzes code context and suggests relevant modifications. For example, if a developer changes a variable name in one part of the code, AI might suggest updating the same variable name in other related modules to maintain consistency. Tools like GitHub Copilot use models such as GPT to generate code suggestions in real-time based on context, making development faster and more efficient.

Reinforcement Learning for Code Optimization

Reinforcement learning (RL) helps AI continuously optimize code performance:

Reward Functions

In RL, a reward function is defined to evaluate the quality of the code. For example, AI might reward code that reduces runtime by 20% or improves memory efficiency by 30%. The reward function measures not just performance but also readability and maintainability, ensuring a balanced approach to optimization.
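A toy reward function along these lines might look like the sketch below; the weights and inputs are illustrative assumptions, not a published formula:

```python
def reward(runtime_gain: float, memory_gain: float, readability_delta: float) -> float:
    # Weighted reward balancing performance gains against maintainability.
    # Weights are illustrative; a real system would tune them empirically.
    w_runtime, w_memory, w_readability = 0.5, 0.3, 0.2
    return (w_runtime * runtime_gain
            + w_memory * memory_gain
            + w_readability * readability_delta)

# A refactor that cuts runtime by 20% and memory by 30%, readability unchanged:
print(reward(runtime_gain=0.20, memory_gain=0.30, readability_delta=0.0))  # 0.19
```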

Agent Training

Through trial and error, AI agents learn to refactor code to meet specific objectives. For example, an agent might experiment with different ways of parallelizing a loop to improve performance, receiving positive rewards for optimizations and negative rewards for regressions.

Continuous Improvement

The AI’s policy, or strategy, is continuously refined based on past experiences. This allows AI to improve its code optimization capabilities over time. For example, Google’s AlphaCode uses reinforcement learning to compete in coding competitions, showing that AI can autonomously write and optimize highly efficient algorithms.

AI-Assisted Code Review Tools

Modern AI-assisted code review tools offer both rule-based enforcement and machine learning insights:

Rule-Based Systems

These systems enforce strict coding standards. For example, AI tools like ESLint or Pylint enforce coding style guidelines in JavaScript and Python, ensuring developers follow industry best practices such as proper indentation or consistent use of variable names.

Machine Learning Models

AI models can learn from past code reviews, understanding patterns in common feedback. For instance, if a team frequently comments on inefficient data structures, the AI will begin flagging those cases in future code reviews, reducing the need for human intervention.

Hybrid Approaches

Combining rule-based and ML-powered systems, hybrid tools provide a more comprehensive review experience. For example, DeepCode uses a hybrid approach to enforce coding standards while also learning from developer interactions to suggest improvements in real-time. These tools ensure code is not only compliant but also continuously improved based on team dynamics and historical data.

Incorporating AI into code reviews takes your development process to the next level. By automating error detection, analyzing code sentiment, and suggesting optimizations, AI enables your team to focus on what matters most: building high-quality, secure, and scalable software. As these tools continue to learn and improve, the benefits of AI-assisted code reviews will only grow, making them indispensable in modern development environments.

Practical Steps to Implement AI-Driven Code Reviews

To effectively integrate AI into your remote team's code review process, consider the following steps:

Evaluate and choose AI tools: Research and evaluate AI-powered code review tools that align with your team's needs and development workflow.

Start with a gradual approach: Use AI tools to support human-led code reviews before gradually automating simpler tasks. This will allow your team to become comfortable with the technology and see its benefits firsthand.

Foster a culture of collaboration: Encourage your team to view AI as a collaborative partner rather than a replacement for human expertise. Emphasize the importance of human oversight, especially for complex issues that require nuanced judgment.

Provide training and resources: Equip your team with the necessary training and resources to use AI code review tools effectively. This includes tutorials, documentation, and opportunities for hands-on practice.

Leveraging Typo to Streamline Remote Code Reviews

Typo is an AI-powered tool designed to streamline the code review process for remote teams. By integrating seamlessly with your existing development tools, Typo makes it easier to manage feedback, improve code quality, and collaborate across time zones.

Some key benefits of using Typo include:

  • AI code analysis
  • Code context understanding
  • Auto debugging with detailed explanations
  • Proprietary models with known frameworks (OWASP)
  • Auto PR fixes

The Human Element: Combining AI and Human Expertise

While AI can significantly enhance the code review process, it's essential to maintain a balance between AI and human expertise. AI is not a replacement for human intuition, creativity, or judgment, but rather a supportive tool that augments and empowers developers.

By using AI to handle repetitive tasks and provide real-time feedback, developers can focus on higher-level issues that require human problem-solving skills. This division of labor allows teams to work more efficiently and effectively while still maintaining the human touch that is crucial for complex problem-solving and innovation.

Overcoming Emotional Barriers to AI Integration

Introducing new technologies can sometimes be met with resistance or fear. It's important to address these concerns head-on and help your team understand the benefits of AI integration.

Some common fears, such as job replacement or disruption of established workflows, should be directly addressed. Reassure your team that AI is designed to reduce workload and enhance productivity, not to replace human expertise. Foster an environment that embraces new technologies while focusing on the long-term benefits of improved efficiency, collaboration, and job satisfaction.

Elevate Your Code Quality: Embrace AI Solutions

AI-driven code reviews offer a promising solution for remote teams looking to maintain code quality, foster collaboration, and enhance productivity. By embracing AI tools like Typo, you can streamline your code review process, reduce delays, and empower your team to focus on writing great code.

Remember that AI supports and empowers your team; it does not replace human expertise. Explore and experiment with AI code review tools in your teams, and watch as your remote collaboration reaches new heights of efficiency and success.

How does Gen AI address Technical Debt?

The software development field is constantly evolving. While this helps deliver products and services quickly to end-users, it also means that developers might take shortcuts to ship on time. This not only reduces the quality of the software but also leads to increased technical debt.

But with new trends and technologies comes generative AI. It is a promising solution for the software development industry, one that can ultimately lead to higher-quality code and decreased technical debt.

Let’s explore how generative AI can help manage technical debt!

Technical debt: An overview

Technical debt arises when development teams take shortcuts to develop projects. While this gives them short-term gains, it increases their workload in the long run.

In other words, developers prioritize quick solutions over effective solutions. The four main causes behind technical debt are:

  • Business causes: Prioritizing business needs and the company’s evolving conditions can put pressure on development teams to cut corners. This can mean moving up deadlines or cutting costs to achieve desired goals.
  • Development causes: Because new technologies evolve rapidly, it is difficult for teams to switch or upgrade quickly, especially when they are already dealing with the burden of bad code.
  • Human resources causes: Unintentional technical debt can occur when development teams lack the necessary skills or knowledge to implement best practices. This can result in more errors and insufficient solutions.
  • Resources causes: When teams don’t have the time or resources they need, they take shortcuts by choosing the quickest solution. This can be due to budgetary constraints, insufficient processes and culture, deadlines, and so on.

Why is generative AI important for code management?

As per McKinsey’s study,

“… 10 to 20 percent of the technology budget dedicated to new products is diverted to resolving issues related to tech debt. More troubling still, CIOs estimated that tech debt amounts to 20 to 40 percent of the value of their entire technology estate before depreciation.”

But there’s a solution to it. Handling tech debt is possible and can have a significant impact:

“Some companies find that actively managing their tech debt frees up engineers to spend up to 50 percent more of their time on work that supports business goals. The CIO of a leading cloud provider told us, ‘By reinventing our debt management, we went from 75 percent of engineer time paying the [tech debt] tax to 25 percent. It allowed us to be who we are today.’”

There are many traditional ways to minimize technical debt, including manual testing, refactoring, and code review. However, these manual tasks take a lot of time and effort, and due to the ever-evolving nature of the software industry, they are often overlooked or delayed.

Since generative AI tools are on the rise, they are increasingly seen as the right approach to code management, which subsequently lowers technical debt. These tools have already started reaching the market. They integrate into software development environments, gather and process data across the organization in real time, and are then leveraged to lower tech debt.

Some of the key benefits of generative AI are:

  • Identifies redundant code: Generative AI tools like CodeClone analyze code and suggest improvements. This helps improve code readability and maintainability and, subsequently, minimizes technical debt.
  • Generates high-quality code: Automated code review tools such as Typo help in an efficient and effective code review process. They understand the context of the code and accurately fix issues, which leads to high-quality code.
  • Automates manual tasks: Tools like GitHub Copilot automate repetitive tasks and let developers focus on higher-value work.
  • Suggests optimal refactoring strategies: AI tools like DeepCode leverage machine learning models to understand code semantics, break it down into more manageable functions, and improve variable naming.

Case studies and real-life examples

Many industries have already started adopting generative AI technologies for tech debt management. These AI tools assist developers in improving code quality, streamlining SDLC processes, and saving costs.

Below are success stories of a few well-known organizations that have implemented these tools in their organizations:

Microsoft uses Diffblue Cover for Automated Testing and Bug Detection

Microsoft is a global technology leader that implemented Diffblue Cover for automated testing. Through this generative AI tool, Microsoft has experienced a considerable reduction in the number of bugs during the development process. It also ensures that new features don't compromise existing functionality, which positively impacts code quality. This further helps in faster, more reliable releases and cost savings.

Google implements Codex for code documentation

Google is an internet search and technology giant that implemented OpenAI's Codex to streamline its code documentation processes. Integrating this AI tool helped reduce the time and effort spent on manual documentation tasks. The resulting consistency across the entire codebase enhances code quality and allows developers to focus more on core tasks.

Facebook adopts CodeClone to identify redundancy

Facebook, a leading social media company, adopted the generative AI tool CodeClone to identify and eliminate redundant code across its extensive codebase. This resulted in fewer inconsistencies and a more streamlined, efficient codebase, which further led to faster development cycles.

Pioneer Square Labs uses GPT-4 for higher-level planning

Pioneer Square Labs, a studio that launches technology startups, adopted GPT-4 to let AI handle mundane tasks so that developers can focus on core work. GPT-4 also helps with higher-level planning and assists in writing code, streamlining the development process.

How does Typo leverage generative AI to reduce technical debt?

Typo’s automated code review tool enables developers to merge clean, secure, high-quality code faster. It lets developers catch issues related to maintainability, readability, and potential bugs, and it can detect code smells.

Typo also auto-analyzes your codebase and pull requests to find issues and auto-generate fixes before you merge to master. Its Auto-Fix feature leverages GPT 3.5 Pro, trained on millions of open-source examples as well as exclusive anonymized private data, to generate line-by-line code snippets wherever an issue is detected in the codebase.

As a result, Typo helps reduce technical debt by detecting and addressing issues early in the development process, preventing the introduction of new debt, and allowing developers to focus on high-quality tasks.

Issue detection by Typo

Autofixing the codebase with an option to directly create a Pull Request

Key features

Supports top 10+ languages

Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.

Fix every code issue

Typo understands the context of your code and quickly finds and fixes any issues accurately. Hence, empowering developers to work on software projects seamlessly and efficiently.

Efficient code optimization

Typo uses optimized practices and built-in methods spanning multiple languages. Hence, reducing code complexity and ensuring thorough quality assurance throughout the development process.

Professional coding standards

Typo standardizes code and reduces the risk of a security breach.

Click here to know more about our Code Review tool

Can technical debt increase due to generative AI?

While generative AI can help reduce technical debt by analyzing code quality, removing redundant code, and automating the code review process, many engineering leaders believe technical debt can be increased too.

Bob Quillin, vFunction chief ecosystem officer, stated, “These new applications and capabilities will require many new MLOps processes and tools that could overwhelm any existing, already overloaded DevOps team.”

They aren’t wrong either!

Technical debt can increase when organizations don't properly document and train development teams to implement generative AI the right way. When these AI tools are adopted hastily, without considering the long-term implications, they can instead increase developers' workload and add technical debt. The following practices help keep generative AI adoption on track:

Ethical guidelines

Establish ethical guidelines for the use of generative AI in organizations. Understand the potential ethical implications of using AI to generate code, such as the impact on job displacement, intellectual property rights, and biases in AI-generated output.

Diverse training data quality

Ensure the quality and diversity of training data used to train generative AI models. When training data is biased or incomplete, these AI tools can produce biased or incorrect output. Regularly review and update training data to improve the accuracy and reliability of AI-generated code.

Human oversight

Maintain human oversight throughout the generative AI process. While AI can generate code snippets and provide suggestions, the final decision should rest with developers, who must review and validate the output to ensure correctness, security, and adherence to coding standards.

Most importantly, human intervention is a must when using these tools. After all, it's developers' judgment, creativity, and domain knowledge that inform the final decision. Generative AI is indeed helpful for reducing developers' manual tasks; however, it needs to be used properly.

Conclusion

In a nutshell, generative artificial intelligence tools can help manage technical debt when used correctly. These tools help to identify redundancy in code, improve readability and maintainability, and generate high-quality code.

However, it should be noted that these AI tools shouldn't be used independently. They must work only as developers' assistants, and developers must use them transparently and fairly.

How to Reduce Software Cycle Time

Speed matters in software development. Top-performing teams ship code in just two days, while many others lag at seven. 

Software cycle time directly impacts product delivery and customer satisfaction - and it’s equally essential for your team's confidence. 

CTOs and engineering leaders can’t reduce cycle time just by working faster. They must optimize processes, identify and eliminate bottlenecks, and consistently deliver value. 

In this post, we’ll break down the key strategies to reduce cycle time. 

What is Software Cycle Time 

Software cycle time measures how long it takes for code to go from the first commit to production. 

It tracks the time a pull request (PR) spends in various stages of the pipeline, helping teams identify and address workflow inefficiencies. 

(Diagram: Cycle Time vs Lead Time in Software Development, from Typo)

Cycle time consists of four key components (a short sketch after this list shows how they add up):

  1. Coding Time: The time taken from the first commit to raising a PR for review.
  2. Pickup Time: The delay between the PR being raised and the first review comment.
  3. Review Time: The duration from the first review comment to PR approval.
  4. Merge Time: The time between PR approval and merging into the main branch. 
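Here is a minimal sketch of that arithmetic; the timestamps below are hypothetical, and in practice they would come from your Git provider's API:

```python
from datetime import datetime

# Hypothetical PR timestamps (in practice, pulled from your Git provider's API).
pr = {
    "first_commit": datetime(2024, 5, 1, 9, 0),
    "pr_opened":    datetime(2024, 5, 2, 14, 0),
    "first_review": datetime(2024, 5, 3, 10, 0),
    "pr_approved":  datetime(2024, 5, 3, 16, 0),
    "merged":       datetime(2024, 5, 3, 17, 30),
}

coding_time = pr["pr_opened"] - pr["first_commit"]
pickup_time = pr["first_review"] - pr["pr_opened"]
review_time = pr["pr_approved"] - pr["first_review"]
merge_time  = pr["merged"] - pr["pr_approved"]

print("Cycle time:", coding_time + pickup_time + review_time + merge_time)
# Cycle time: 2 days, 8:30:00
```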

Software cycle time is a critical part of DORA metrics, complementing others like deployment frequency, lead time for changes, and MTTR. 

While deployment frequency indicates how often new code is released, cycle time provides insights into the efficiency of the development process itself. 

Why Does Software Cycle Time Matter? 

Understanding and optimizing software cycle time is crucial for several reasons: 

1. Engineering Efficiency 

Cycle time reflects how efficiently engineering teams work. For example, some teams reduce their PR cycle time with automated code reviews and parallel test execution. This change allows developers to focus more on feature development rather than waiting for feedback, resulting in faster, higher-quality code delivery.

2. Time to Market 

Reducing cycle time accelerates product delivery, allowing teams to respond faster to market demands and customer feedback. Remember Amazon’s “two-pizza teams” model? It emphasizes small, independent teams with streamlined processes, enabling them to deploy code thousands of times a day. This agility helps Amazon quickly respond to customer needs, implement new features, and outpace competitors. 

3. Competitive Advantage 

The ability to ship high-quality software quickly can set a company apart from competitors. Faster delivery means quicker innovation and better customer satisfaction. For example, Netflix’s use of chaos engineering and Service-Level Prioritized Load Shedding has allowed it to continuously improve its streaming service, roll out updates seamlessly, and maintain its market leadership in the streaming industry. 

Cycle time is one aspect that engineering teams cannot overlook; apart from all the technical reasons, it also has a psychological impact. When cycle time is high, productivity drops further because of demotivation and procrastination. 

6 Challenges in Reducing Cycle Time 

Reducing cycle time is easier said than done. There are several factors that affect efficiency and workflow. 

  1. Inconsistent Workflows: Non-standardized processes create variability in task durations, making it harder to detect and resolve inefficiencies. Establishing uniform workflows ensures predictable and optimized cycle times. 
  2. Limited Automation: Manual tasks like testing and deployment slow down development. Implementing CI/CD pipelines, test automation, and infrastructure as code reduces these delays significantly. 
  3. Overloaded Teams: Resource constraints and overburdened engineers lead to slower development cycles. Effective workload management and proper resourcing can alleviate this issue. 
  4. Waiting on Dependencies: External dependencies, such as third-party services or slow approval chains, cause idle time. Proactive dependency management and clear communication channels reduce these delays. 
  5. Resistance to Change: Teams hesitant to adopt new tools or practices miss opportunities for optimization. Promoting a culture of continuous learning and incremental changes can ease transitions. 
  6. Unclear Prioritization: When teams lack clarity on task priorities, critical work is delayed. Aligning work with business goals and maintaining a clear backlog ensures efficient resource allocation. 

6 Proven Strategies to Reduce Software Cycle Time 

Reducing software cycle time requires a combination of technical improvements, process optimizations, and cultural shifts. Here are six actionable strategies to implement today:

1. Optimize Code Reviews and Approvals 

Establish clear SLAs for review timelines—e.g., 48 hours for initial feedback. Use tools like GitHub’s code owners to automatically assign reviewers based on file ownership. Implement peer programming for critical features to accelerate feedback loops. Introduce a "reviewer rotation" system to distribute the workload evenly across the team and prevent bottlenecks. 

2. Invest in Automation 

Identify repetitive tasks such as testing, integration, and deployment. And then implement CI/CD pipelines to automate these processes. You can also use test parallelization to speed up execution and set up automatic triggers for deployments to staging and production environments. Ensure robust rollback mechanisms are in place to reduce the risk of deployment failures. 

3. Improve Team Collaboration 

Break down silos by encouraging cross-functional collaboration between developers, QA, and operations. Adopt DevOps principles and use tools like Slack for real-time communication and Jira for task tracking. Schedule regular cross-team sync-ups, and document shared knowledge in Confluence to avoid communication gaps. Establish a "Definition of Ready" and "Definition of Done" to align expectations across teams. 

4. Address Technical Debt Proactively 

Schedule dedicated time each sprint to address technical debt. An effective cycle-time reduction strategy is to categorize debt into critical, moderate, and low-priority issues, then focus first on the high-impact areas that slow down development. Implement a policy where no new feature work is done without addressing related legacy code issues. 

5. Leverage Metrics and Analytics 

Track cycle time by analyzing PR stages: coding, pickup, review, and merge. Use tools like Typo to visualize bottlenecks and benchmark team performance. Establish a regular cadence to review these engineering metrics and correlate them with other DORA metrics to understand their impact on overall delivery performance. If review time consistently exceeds targets, consider adding more reviewers or refining the review process. 

6. Prioritize Backlog Management 

A cluttered backlog leads to confusion and context switching. Use prioritization frameworks like MoSCoW or RICE to focus on high-impact tasks. Ensure stories are clear, with well-defined acceptance criteria. Regularly groom the backlog to remove outdated items and reassess priorities. You can also introduce a “just-in-time” backlog refinement process to prepare stories only when they're close to implementation. 
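As a quick illustration of the RICE arithmetic (the backlog items and numbers below are hypothetical), each score is simply Reach times Impact times Confidence, divided by Effort:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    # RICE = (Reach * Impact * Confidence) / Effort
    return (reach * impact * confidence) / effort

# Hypothetical backlog items with estimated RICE inputs.
backlog = {
    "Speed up CI caching":    {"reach": 500, "impact": 2, "confidence": 0.8, "effort": 3},
    "Redesign settings page": {"reach": 200, "impact": 1, "confidence": 0.5, "effort": 5},
}

# Highest score first: the quick, high-confidence win rises to the top.
for name, f in sorted(backlog.items(), key=lambda kv: rice_score(**kv[1]), reverse=True):
    print(f"{name}: {rice_score(**f):.1f}")
# Speed up CI caching: 266.7
# Redesign settings page: 20.0
```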

Tools to Support Cycle Time Reduction 

Reducing software cycle time requires the right set of tools to streamline development workflows, automate processes, and provide actionable insights. 

Here’s how key tools contribute to cycle time optimization:

1. GitHub/GitLab 

GitHub and GitLab simplify version control, enabling teams to track code changes, collaborate efficiently, and manage pull requests. Features like branch protection rules, code owners, and merge request automation reduce delays in code reviews. Integrated CI/CD pipelines further streamline code integration and testing.

2. Jenkins, CircleCI, or TravisCI 

These CI/CD tools automate build, test, and deployment processes, reducing manual intervention, ensuring faster feedback loops and more effective software delivery. Parallel execution, pipeline caching, and pre-configured environments significantly cut down build times and prevent bottlenecks. 

3. Typo 

Typo provides in-depth insights into cycle time by analyzing Git data across stages like coding, pickup, review, and merge. It highlights bottlenecks, tracks team performance, and offers actionable recommendations for process improvement. By visualizing trends and measuring PR cycle times, Typo helps engineering leaders make data-driven decisions and continuously optimize development workflows. 

Cycle Time as shown in Typo App

Best Practices to Reduce Software Cycle Time 

If you do not want your next development project to feel like it is taking forever, follow these best practices: 

  • Break down large changes into smaller, manageable PRs to simplify reviews and reduce review time. 
  • Define expectations for reviewers (e.g., 24-48 hours) to prevent PRs from being stuck in review. 
  • Reduce merge conflicts by encouraging frequent, small merges to the main branch. 
  • Track cycle time metrics via tools like Typo to identify trends and address recurring bottlenecks. 
  • Deploy incomplete code safely, enabling faster releases without waiting for full feature completion. 
  • Allocate dedicated time each sprint to address technical debt and maintain code maintainability. 

Conclusion  

Reducing software cycle time is critical for both engineering efficiency and business success. It directly impacts product delivery speed, market responsiveness, and overall team performance. 

Engineering leaders should continuously evaluate processes, implement automation tools, and track cycle time metrics to streamline workflows and maintain a competitive edge. 

And it all starts with accurate measurement of software cycle time. 

How to Achieve Effective Software Delivery

Professional service organizations within software companies maintain a delivery success rate hovering in the 70% range. 

This percentage looks good. However, it hides significant inefficiencies given the substantial resources invested in modern software delivery lifecycles. 

Even after investing extensive capital, talent, and time into development cycles, missing targets on every third project should not be acceptable. 

After all, there’s a direct correlation between delivery effectiveness and organizational profitability. 

However, the complexity of modern software development, with its intricate dependencies and quality demands, makes consistent on-time, on-budget delivery persistently challenging. 

This reality makes it critical to master effective software delivery. 

What is the Software Delivery Lifecycle 

The Software Delivery Lifecycle (SDLC) is a structured sequence of stages that guides software from initial concept to deployment and maintenance. 

Consider Netflix's continuous evolution: when transitioning from DVD rentals to streaming, they iteratively developed, tested, and refined their platform, all while maintaining uninterrupted service to millions of users. 

A typical SDLC has six phases: 

  1. Planning: Requirements gathering and resource allocation 
  2. Design: System architecture and technical specifications 
  3. Development: Code writing and unit testing 
  4. Testing: Quality assurance and bug fixing 
  5. Deployment: Release to production environment 
  6. Maintenance: Ongoing updates and performance monitoring 

Each phase builds upon the previous, creating a continuous loop of improvement. 

Modern approaches often adopt Agile methodologies, which enable rapid iterations and frequent releases. This also allows organizations to respond quickly to market demands while maintaining high-quality standards. 

7 Best Practices to Achieve Effective Software Delivery 

Even the best software delivery processes can leak value through poor engineering resource allocation and technical management. By applying these software delivery best practices, you can achieve effectiveness: 

1. Streamline Project Management 

Effective project management requires systematic control over development workflows while maintaining strategic alignment with business objectives. 

Modern software delivery requires precise distribution of resources, timelines, and deliverables.

Here’s what you should implement: 

  • Set Clear Objectives and Scope: Implement SMART criteria for project definition. Document detailed deliverables with explicit acceptance criteria. Establish timeline dependencies using critical path analysis. 
  • Effective Resource Allocation: Deploy project management tools for agile workflow tracking. Implement capacity planning using story point estimation. Utilize resource calendars for optimal task distribution. Configure automated notifications for blocking issues and dependencies.
  • Prioritize Tasks: Apply MoSCoW method (Must-have, Should-have, Could-have, Won't-have) for feature prioritization. Implement RICE scoring (Reach, Impact, Confidence, Effort) for backlog management. Monitor feature value delivery through business impact analysis. 
  • Continuous Monitoring: Track velocity trends across sprints using burndown charts. Monitor issue cycle time variations through Typo dashboards. Implement automated reporting for sprint retrospectives. Maintain real-time visibility through team performance metrics. 

2. Build Quality Assurance into Each Stage 

Quality assurance integration throughout the SDLC significantly reduces defect discovery costs. 

Early detection and prevention strategies prove more effective than late-stage fixes. This ensures that your time is used to its maximum potential, helping you achieve engineering efficiency. 

Some ways to set up a robust QA process: 

  • Shift-Left Testing: Implement behavior-driven development (BDD) using Cucumber or SpecFlow. Integrate unit testing within CI pipelines. Conduct code reviews with automated quality gates. Perform static code analysis during development.
  • Automated Testing: Deploy Selenium WebDriver for cross-browser testing. Implement Cypress for modern web application testing. Utilize JMeter for performance testing automation. Configure API testing using Postman/Newman in CI pipelines.
  • QA as Collaborative Effort: Establish three-amigo sessions (Developer, QA, Product Owner). Implement pair testing practices. Conduct regular bug bashes. Share testing responsibilities across team roles. 

3. Enable Team Collaboration

Efficient collaboration accelerates software delivery cycles while reducing communication overhead. 

There are tools and practices available that facilitate seamless information flow across teams. 

Here’s how you can ensure the collaboration is effective in your engineering team: 

  • Foster open communication with dedicated Slack channels, Notion workspaces, daily standups, and video conferencing. 
  • Encourage cross-functional teams with skill-balanced pods, shared responsibility matrices, cross-training, and role rotations. 
  • Streamline version control and documentation with Git branching strategies, pull request templates, automated pipelines, and wiki systems. 

4. Implement Strong Security Measures

Security integration throughout development prevents vulnerabilities and ensures compliance. Taking preventive measures is more effective than fixing breaches after the fact. 

To implement strong security measures: 

  • Implement SAST tools like SonarQube in CI pipelines. 
  • Deploy DAST tools for runtime analysis. 
  • Conduct regular security reviews using OWASP guidelines. 
  • Implement automated vulnerability scanning.
  • Apply role-based access control (RBAC) principles (see the RBAC sketch after this list). 
  • Implement multi-factor authentication (MFA). 
  • Use secrets management systems. 
  • Monitor access patterns for anomalies. 
  • Maintain GDPR compliance documentation and ISO 27001 controls. 
  • Conduct regular SOC 2 audits and automate compliance reporting. 
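
To make the RBAC item concrete, here’s a minimal sketch of a role-to-permission check. The roles and permissions are illustrative assumptions, not any specific product’s model:

```python
# Minimal RBAC sketch: map roles to permission sets and check requests.
ROLE_PERMISSIONS = {
    "viewer":    {"read"},
    "developer": {"read", "write"},
    "admin":     {"read", "write", "deploy", "manage_users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("developer", "write")
assert not is_allowed("viewer", "deploy")
```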

5. Build Scalability into Process

Scalable architectures directly impact software delivery effectiveness by enabling seamless growth and consistent performance even when the load increases. 

Strategic implementation of scalable processes removes bottlenecks and supports rapid deployment cycles. 

Here’s how you can build scalability into your processes: 

  • Scalable Architecture: Implement microservices architecture patterns. Deploy container orchestration using Kubernetes. Utilize message queues for asynchronous processing. Implement caching strategies (a caching sketch follows this list). 
  • Cloud Infrastructure: Configure auto-scaling groups in AWS/Azure. Implement infrastructure as code using Terraform. Deploy multi-region architectures. Utilize content delivery networks (CDNs). 
  • Monitoring and Performance: Deploy Typo for system health monitoring. Implement distributed tracing using Jaeger. Configure alerting based on SLOs. Maintain performance dashboards. 
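
As one example of a caching strategy, here’s a minimal in-process cache with a time-to-live (TTL). It’s a sketch for illustration; production systems would more likely use a dedicated store such as Redis:

```python
import time

class TTLCache:
    """Tiny in-process cache where entries expire after a fixed TTL."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:  # stale entry: evict and report a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=30)
cache.set("user:42", {"name": "Ada"})
print(cache.get("user:42"))  # cache hit until the TTL expires
```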

6. Leverage CI/CD

CI/CD automation streamlines deployment processes and reduces manual errors. Modern pipelines enable rapid, reliable software delivery through automated testing and deployment sequences. Integration with version control systems ensures consistent code quality and deployment readiness, which means fewer delays and more effective software delivery. 
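
As a rough sketch of the gate a CI/CD pipeline enforces, the script below runs the test suite and blocks deployment on failure. The deploy command is a placeholder assumption; real pipelines would express this in their CI tool’s configuration:

```python
import subprocess
import sys

# Run the test suite; a non-zero exit code means at least one test failed.
result = subprocess.run(["pytest", "-q"])
if result.returncode != 0:
    sys.exit("Tests failed; blocking deployment.")

# Placeholder deploy step; replace with your actual deployment command.
subprocess.run(["./deploy.sh", "production"], check=True)
```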

7. Measure Success Metrics

Effective software delivery requires precise measurement through carefully selected metrics. These metrics provide actionable insights for process optimization and delivery enhancement. 

Here are some metrics to keep an eye on: 

  • Deployment Frequency measures release cadence to production environments. 
  • Change Lead Time spans from code commit to successful production deployment. 
  • Change Failure Rate indicates deployment reliability by measuring failed deployment percentage. 
  • Mean Time to Recovery quantifies service restoration speed after production incidents. 
  • Code Coverage reveals test automation effectiveness across the codebase. 
  • Technical Debt Ratio compares remediation effort against total development cost. 

These metrics provide quantitative insights into delivery pipeline efficiency and help identify areas for continuous improvement. 
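
For illustration, here’s a short sketch computing a few of these metrics from hypothetical deployment records. In practice, the data would come from your CI/CD and incident tooling:

```python
from datetime import datetime

# Hypothetical deployment records for a one-week window.
deployments = [
    {"at": datetime(2024, 6, 3), "failed": False, "lead_time_hours": 20},
    {"at": datetime(2024, 6, 5), "failed": True,  "lead_time_hours": 44},
    {"at": datetime(2024, 6, 9), "failed": False, "lead_time_hours": 12},
]

window_days = 7
deploy_frequency = len(deployments) / window_days  # deployments per day
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments) * 100
avg_lead_time = sum(d["lead_time_hours"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Change failure rate:  {change_failure_rate:.1f}%")
print(f"Average lead time:    {avg_lead_time:.1f}h")
```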

Challenges in the Software Delivery Lifecycle 

The SDLC has multiple technical challenges at each phase. Some of them include: 

1. Planning Phase Challenges 

Teams grapple with requirement volatility leading to scope creep. API dependencies introduce integration uncertainties, while microservices architecture decisions significantly impact system complexity. Resource estimation becomes particularly challenging when accounting for potential technical debt. 

2. Design Phase Challenges 

Design phase complications center on system scalability requirements that conflict with performance constraints. Teams must carefully balance cloud infrastructure selections against cost-performance ratios. Database sharding strategies introduce data consistency challenges, while service mesh implementations add layers of operational complexity. 

3. Development Phase Challenges 

Development phase issues include code versioning conflicts across distributed teams. Software engineers frequently face memory leaks in complex object lifecycles and race conditions in concurrent operations. Rapid sprint cycles often result in technical debt accumulation, while build pipeline failures arise from dependency conflicts. 

4. Testing Phase Challenges 

Testing becomes increasingly complex as teams deal with coverage gaps in async operations and integration failures across microservices. Performance bottlenecks emerge during load testing, while environmental inconsistencies lead to flaky tests. API versioning introduces additional regression testing complications. 

5. Deployment Phase Challenges 

Deployment challenges revolve around container orchestration failures and blue-green deployment synchronization. Teams must manage database migration errors, SSL certificate expirations, and zero-downtime deployment complexities. 

6. Maintenance Phase Challenges 

In the maintenance phase, teams face log aggregation challenges across distributed systems, along with memory utilization spikes during peak loads. Cache invalidation issues and service discovery failures in containerized environments require constant attention, while patch management across multiple environments demands careful orchestration. 

These challenges compound through modern CI/CD pipelines, with Infrastructure as Code introducing additional failure points. 

Effective monitoring and observability become crucial success factors in managing them. 

Software engineering intelligence tools like Typo give you precise visibility into team performance and sprint delivery, helping you optimize resource allocation and reduce tech debt.

Conclusion 

Effective software delivery depends on precise performance measurement. Without visibility into resource allocation and workflow efficiency, optimization remains impossible. 

Typo addresses this fundamental need. The platform delivers insights across development lifecycles - from code commit patterns to deployment metrics. AI-powered code analysis automates optimization, reducing technical debt while accelerating delivery. Real-time dashboards expose productivity trends, helping you with proactive resource allocation. 

Transform your software delivery pipeline with Typo's advanced analytics and AI capabilities.

How to Measure Change Failure Rate?

Smooth and reliable deployments are key to maintaining user satisfaction and business continuity. This is where DORA metrics play a crucial role. 

Among these metrics, the Change Failure Rate provides valuable insight into how frequently deployments lead to failures, helping teams minimize disruptions in production environments.

Let’s read about CFR further! 

What are DORA Metrics? 

In 2015, Gene Kim, Jez Humble, and Nicole Forsgren founded the DORA (DevOps Research and Assessment) team to evaluate and improve software development practices. Their aim was to better understand how organizations can deliver faster, more reliable, and higher-quality software.

DORA metrics help in assessing software delivery performance based on four key metrics (also known as the Accelerate metrics):

  • Deployment Frequency
  • Lead Time for Changes
  • Change Failure Rate
  • Mean Time to Recover

While these metrics provide valuable insights into a team's performance, understanding CFR is crucial. It measures the effectiveness of software changes and their impact on production environments.

Overview of Change Failure Rate

The Change Failure Rate (CFR) measures how often new deployments cause failures, glitches, or unexpected issues in the IT environment. It reflects the stability and reliability of the entire software development and deployment lifecycle.

It is important to measure the Change Failure Rate for various reasons:

  • A lower change failure rate enhances user experience and builds trust by reducing failures. 
  • It protects your business from financial risks, revenue loss, customer churn, and brand damage. 
  • Lower change failures help to allocate resources effectively and focus on delivering new features.

How to Calculate Change Failure Rate? 

To calculate the Change Failure Rate, follow these steps:

  1. Identify Failed Changes: Keep track of the number of changes that resulted in failures during a specific timeframe.
  2. Determine Total Changes Implemented: Count the total changes or deployments made during the same period.

Apply the formula:

CFR = (Number of Failed Changes / Total Number of Changes) * 100 to calculate the Change Failure Rate as a percentage.

For example, suppose that during a month:

Failed Changes = 2

Total Changes = 30

Using the formula: (2/30) * 100 ≈ 6.67

Therefore, the Change Failure Rate for that period is 6.67%.
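
The formula translates directly into code. A minimal sketch:

```python
def change_failure_rate(failed_changes: int, total_changes: int) -> float:
    """CFR = (failed changes / total changes) * 100, as defined above."""
    if total_changes == 0:
        raise ValueError("no changes in the period")
    return failed_changes / total_changes * 100

print(f"{change_failure_rate(2, 30):.2f}%")  # -> 6.67%
```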

What is a Good Failure Rate? 

An ideal failure rate is between 0% and 15%. This is the benchmark that engineering teams should aim to maintain. A low CFR indicates stable, reliable, and well-tested software. 

When the Change Failure Rate is above 15%, it reflects significant issues with code quality, testing, or deployment processes. This leads to increased system downtime, slower deployment cycles, and a negative impact on user experience. 

Hence, it is always advisable to keep CFR as low as possible. 

How to Correctly Measure Change Failure Rate?

Follow the right steps to measure the Change Failure Rate effectively. Here’s how you can do it:

Define ‘Failure’ Criteria

Clearly define what constitutes a ‘Change’ and a ‘Failure,’ such as service disruptions, bugs, or system crashes. Having clear metrics ensures the team is aligned and consistently collecting data.

Accurately Capture and Label Your Data

First, define the scope of changes to include in the CFR calculation, along with the details used to decide whether a change succeeded or failed. Use a change management system to track or log changes in a database. Tools like Jira, Git, or CI/CD pipelines can help automate and review data collection. 

Measure Change Failure, Not Deployment Failure 

Understand the difference between Change Failure and Deployment Failure. 

Deployment Failure: Failures that occur during the process of deploying code or changes to a production environment.

Change Failure: Failures that occur after the deployment when the changes themselves cause issues in the production environment.

This ensures that the team focuses on improving processes rather than troubleshooting unrelated issues. 

Analyze Trends Over Time 

Don’t analyze failures only once. Analyze trends continuously over different time periods, such as weekly, monthly, and quarterly. The trends and patterns help reveal recurring issues, prioritize areas for improvement, and inform strategic decisions. This allows teams to adapt and improve continuously. 

Understand the Limitations of DORA Metrics

DORA Metrics provide valuable insights into software development performance and identify high-level trends. However, they fail to capture the nuances such as the complexity of changes or severity of failures. Use them alongside other metrics for a holistic view. Also, ensure that these metrics are used to drive meaningful improvements rather than just for reporting purposes. 

Consider Contextual Factors

Various factors, including team experience, project complexity, and organizational culture, can influence the Change Failure Rate. These factors can affect both how often failures occur and how well mitigation strategies work. Considering them lets you judge failure rates in a broader context rather than on numbers alone. 

Exclude External Incidents

Filter out the failures caused by external factors such as third-party service outages or hardware failure. This helps accurately measure CFR as external incidents can distort the true failure rate and mislead conclusions about your team’s performance. 
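
Here’s a minimal sketch of that filtering; the "cause" field and its values are illustrative assumptions:

```python
# Exclude externally caused failures before computing CFR.
changes = [
    {"id": 1, "failed": False, "cause": None},
    {"id": 2, "failed": True,  "cause": "bad_migration"},
    {"id": 3, "failed": True,  "cause": "third_party_outage"},  # external
]

EXTERNAL_CAUSES = {"third_party_outage", "hardware_failure"}

relevant = [c for c in changes if c["cause"] not in EXTERNAL_CAUSES]
failures = sum(c["failed"] for c in relevant)
print(f"CFR: {failures / len(relevant) * 100:.1f}%")  # 1 of 2 changes -> 50.0%
```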

How to Reduce Change Failure Rate? 

Identify the root causes of failures and implement best practices in testing, deployment, and monitoring. Here are some effective strategies to minimize CFR: 

Automate Testing Practices

Implement an automated testing strategy during each phase of the development lifecycle. This repeatable, consistent practice helps catch issues early and often, improving code quality to a great extent. Ensure that test results are made accessible so the team can keep a clear focus on the crucial aspects. 

Deploy Small Changes Frequently

Small deployments at more frequent intervals make testing and detecting bugs easier. They reduce the risk of failures when deploying code to production, since issues are caught early and addressed before they become significant problems. Moreover, frequent deployments provide quicker feedback to team members and engineering leaders. 

Adopt CI/CD

Continuous Integration and Continuous Deployment (CI/CD) ensures that code is regularly merged, tested, and deployed automatically. This reduces the deployment complexity and manual errors and allows teams to detect and address issues early in the development process. Hence, ensuring that only high-quality code reaches production. 

Prioritize Code Quality 

Establishing a culture where quality is prioritized helps teams catch issues before they escalate into production failures. Adhering to best practices such as code reviews, coding standards, and continuous refactoring improves code quality. High-quality code is less prone to bugs and vulnerabilities and directly contributes to a lower CFR.  

Implement Real-Time Monitoring and Alerting

Real-time monitoring and alerting systems help teams detect issues early and resolve them quickly. This minimizes the impact of failures, improves overall system reliability, and provides immediate feedback on application performance and user experience. 

Cultivate a Learning Culture 

Creating a learning culture within the development team encourages continuous improvement and knowledge sharing. When teams are encouraged to learn from past mistakes and successes, they are better equipped to avoid repeating errors. This involves conducting post-incident reviews and sharing key insights. This approach also fosters collaboration, accountability, and continuous improvement. 

How Does Typo Help in Reducing CFR? 

Since the definition of failure is specific to each team, there are multiple ways this metric can be configured. Here are some guidelines on what can indicate a failure:

A deployment that needs a rollback or a hotfix

For such cases, any Pull Request having a title/tag/label that represents a rollback/hotfix that is merged to production can be considered a failure.

A high-priority production incident

For such cases, any ticket in your Issue Tracker having a title/tag/label that represents a high-priority production incident can be considered a failure.

A deployment that failed during the production workflow

For such cases, Typo can integrate with your CI/CD tool and consider any failed deployment as a failure. 

To calculate the final percentage, the total number of failures is divided by the total number of deployments (this can be picked either from the Deployment PRs or from the CI/CD tool deployments).
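
As a rough illustration of the first guideline above, the sketch below flags merged PRs as failures based on rollback/hotfix titles or labels. The label names are assumptions, and this is not Typo’s internal implementation:

```python
FAILURE_LABELS = {"rollback", "hotfix"}

def is_failure(pr: dict) -> bool:
    """Treat a merged PR as a failure if its title or labels signal a rollback/hotfix."""
    labels = {label.lower() for label in pr.get("labels", [])}
    title = pr.get("title", "").lower()
    return bool(labels & FAILURE_LABELS) or any(word in title for word in FAILURE_LABELS)

merged_prs = [
    {"title": "Add billing webhooks", "labels": []},
    {"title": "Hotfix: revert payment rounding", "labels": ["hotfix"]},
]
failures = sum(is_failure(pr) for pr in merged_prs)
print(f"CFR: {failures / len(merged_prs) * 100:.0f}%")  # 1 of 2 -> 50%
```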

Conclusion 

Measuring and reducing the Change Failure Rate is a strategic necessity. It enables engineering teams to deliver stable software, leading to happier customers and a stronger competitive advantage. With tools like Typo, organizations can easily track and address failures to ensure successful software deployments.


Product Updates


AI-Powered PR Summary for Efficient Code Reviews

Tired of code reviews disrupting your workflow? As developers know, pull request reviews are crucial for software quality, but they often lead to context switching and time-consuming interruptions. That's why Typo is excited to announce two powerful new features designed to empower reviewers: AI-Generated PR Summaries and Estimated Time to Review Labels. These features are built to minimize interruptions, save time, and ultimately make your life as a reviewer significantly easier.


1. Take Control of Your Schedule with Estimated Time to Review Labels

Imagine knowing exactly how much time a pull request (PR) will take to review. No more guessing, no more unexpected time sinks. Typo's Estimated Time to Review Labels provide a clear, data-driven estimate of the review effort required.

How It Works:

  • Intelligent Analysis: Typo analyzes code changes, file complexity, and the number of lines modified to calculate an estimated review time (an illustrative heuristic follows this list).
  • Clear Labels: The tool automatically assigns labels like "Quick Review (Under 5 minutes)," "Moderate Review (5-15 minutes)," or "In-Depth Review (15+ minutes)."
  • Strategic Prioritization: Reviewers can use these labels to prioritize PRs based on their available time, ensuring they stay focused on their current tasks.
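
For intuition, here’s an illustrative heuristic for assigning such labels. The thresholds and inputs are assumptions for the sketch; the post doesn’t describe Typo’s actual model:

```python
def review_label(lines_changed: int, files_touched: int) -> str:
    """Map a crude effort score to an estimated-review-time label."""
    score = lines_changed + 10 * files_touched
    if score < 50:
        return "Quick Review (Under 5 minutes)"
    if score < 300:
        return "Moderate Review (5-15 minutes)"
    return "In-Depth Review (15+ minutes)"

print(review_label(lines_changed=18, files_touched=2))    # Quick Review
print(review_label(lines_changed=420, files_touched=12))  # In-Depth Review
```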

Benefits:

  • Minimize Interruptions: Easily defer in-depth reviews until you have dedicated time, avoiding context switching.
  • Optimize Workflow: Prioritize quick reviews to clear backlogs and maintain a smooth development pipeline.
  • Improve Time Management: Gain a clear understanding of the time commitment required for each review.

2. Accelerate Approvals with AI-Generated PR Summaries

Time is a precious commodity for developers. Typo's AI-Generated PR Summaries provide a concise and insightful overview of code changes, allowing reviewers to quickly grasp the key modifications without wading through every line of code.

How It Works:

  • AI-Driven Analysis: Typo's advanced algorithms analyze code diffs, commit messages, and associated issues.
  • Concise Summaries: The AI generates a clear summary highlighting the purpose and impact of the changes.
  • Rapid Understanding: Reviewers can quickly understand the context and make informed decisions.

Benefits:

  • Faster Review Cycles: Quickly grasp the essence of PRs and accelerate the approval process.
  • Enhanced Efficiency: Save valuable time by avoiding manual code inspection for every change.
  • Improved Focus: Quickly understand the changes, and get back to your own work.

Typo: Empowering Reviewers, Boosting Productivity

These two features work together to create a more efficient and less disruptive code review process. By providing time estimates and AI-powered summaries, Typo empowers reviewers to:

  • Maintain focus on their primary tasks.
  • Save valuable time and reduce context switching.
  • Accelerate the code review process.
  • Increase developer velocity.

Key Takeaways:

Typo helps developers maintain focus and save time, even when faced with incoming PR reviews.

  • Estimated Time to Review Labels provide valuable insights into review effort, enabling better time management.
  • AI-Generated PR Summaries accelerate approvals by providing concise overviews of code changes.

Ready to transform your code review workflow?

Try Typo today and experience the benefits of AI-powered time estimates and summaries. Streamline your processes, boost productivity, and empower your development team.

Typo Launches groCTO: Community to Empower Engineering Leaders

In an ever-evolving tech world, organizations need to innovate quickly while keeping up high standards of quality and performance. The key to achieving these goals is empowering engineering leaders with the right tools and technologies. 

About Typo

Typo is a software intelligence platform that optimizes software delivery by identifying real-time bottlenecks in SDLC, automating code reviews, and measuring developer experience. We aim to help organizations ship reliable software faster and build high-performing teams. 

However, engineering leaders often struggle to bridge the divide between traditional management practices and modern software development, leading to missed opportunities for growth, ineffective team dynamics, and slower progress toward organizational goals. 

To address this gap, we launched groCTO, a community designed specifically for engineering leaders.

What is groCTO Community? 

Effective engineering leadership is crucial for building high-performing teams and driving innovation. However, many leaders face significant challenges and gaps that hinder their effectiveness. The role of an engineering leader is both demanding and essential. From aligning teams with strategic goals to managing complex projects and fostering a positive culture, they have a lot on their plates. Hence, leaders need to have the right direction and support so they can navigate the challenges and guide their teams efficiently. 

Here’s where groCTO comes in! 

groCTO is a community designed to empower engineering managers on their leadership journey. The aim is to help engineering leaders evolve, navigate complex technical challenges, and drive innovative solutions to create groundbreaking software. Engineering leaders can connect, learn, and grow to enhance their capabilities and, in turn, the performance of their teams. 

Key Components of groCTO 

groCTO Connect

Over 73% of successful tech leaders believe having a mentor is key to their success.

At groCTO, we recognize mentorship as a powerful tool for addressing leadership challenges and offering personalized support and fresh perspectives. That’s why we’ve made Connect a cornerstone of our community, offering 1:1 mentorship sessions with global tech leaders and CTOs. With over 74 mentees and 20 mentors, our Connect program fosters valuable relationships and supports your growth as a tech leader.

These sessions allow emerging leaders to: 

  • Gain personalized advice: Through 1:1 sessions, mentors address individual challenges and tailor guidance to the specific needs and career goals of emerging leaders. 
  • Navigate career growth: These mentors understand the strengths and weaknesses of the individual and help them focus on improving specific leadership skills and competencies while building confidence. 
  • Build valuable professional relationships: Our mentorship sessions expand professional connections and foster collaborations and knowledge sharing that can offer ongoing support and opportunities. 

Weekly Tech Insights

To keep our tech community informed and inspired, groCTO brings you a fresh set of learning resources every week:

  • CTO Diaries: The CTO Diaries provide a unique glimpse into the experiences and lessons learned by seasoned Chief Technology Officers. These include personal stories, challenges faced, and successful strategies implemented by them, helping engineering leaders gain practical insights and real-world examples that can inspire and inform their approach to leadership and team management.
  • Podcasts: 
    • groCTO Originals is a weekly podcast for current and aspiring tech leaders aiming to transform their approach by learning from seasoned industry experts and successful engineering leaders across the globe.
    • ‘The DORA Lab’ by groCTO is an exclusive podcast that’s all about DORA and other engineering metrics. In each episode, expert leaders from the tech world bring their extensive knowledge of the challenges, inspirations, and practical uses of DORA metrics and beyond.
  • Bytes: groCTO Bytes is a weekly Sunday dose of curated wisdom delivered straight to your inbox in the form of a newsletter. Our goal is to keep tech leaders, CTOs, and VPEs up to date on the latest trends and best practices in engineering leadership, tech management, system design, and more.

Looking Ahead: Building a Dynamic Community

At groCTO, we are committed to making this community bigger and better. We want current and aspiring engineering leaders to invest in their growth as well as contribute to pushing the boundaries of what engineering teams can achieve.

We’re just getting started. A few of our future plans for groCTO include:

  • Virtual Events: We plan to conduct interactive webinars and workshops to help engineering leaders and CTOs get deeper dives into specific topics and networking opportunities.
  • Slack Channels: We plan to create Slack channels to allow emerging tech leaders to engage in vibrant discussions and get real-time support tailored to various aspects of engineering leadership.

We envision a community that thrives on continuous engagement and growth. By scaling our resources and expanding our initiatives, we want to ensure that every member of groCTO finds the support and knowledge they need to excel. 

Get in Touch with us! 

At Typo, our vision is clear: to ship reliable software faster and build high-performing engineering teams. With groCTO, we are making significant progress toward this goal by empowering engineering leaders with the tools and support they need to excel. 

Join us in this exciting new chapter and be a part of a community that empowers tech leaders to excel and innovate. 

We’d love to hear from you! For more information about groCTO and how to get involved, write to us at hello@grocto.dev

Why do Companies Choose Typo?

Dev teams hold great importance in the engineering organization. They are essential for building high-quality software products, fostering innovation, and driving the success of technology companies in today’s competitive market.

However, engineering leaders need to understand the bottlenecks holding their teams back, since these blind spots can directly affect projects. This is where software development analytics tools come to the rescue, and such tools stand out when they offer the features and integrations engineering leaders are looking for.

Typo is an intelligent engineering platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. Let’s look at why engineering leaders choose Typo:

You get Customized DORA and other Engineering Metrics

Engineering metrics are measurements of engineering outputs and processes. However, there is no single pre-defined set of metrics that guarantees success; the right set depends on various factors, including team size, the background of team members, and so on.

Typo’s customized DORA (Deployment frequency, Change failure rate, Lead time, and Mean Time to Recover) key metrics and other engineering metrics can be configured in a single dashboard based on specific development processes. This helps benchmark the dev team’s performance and identifies real-time bottlenecks, sprint delays, and blocked PRs. With the user-friendly interface and tailored integrations, engineering leaders can get all the relevant data within minutes and drive continuous improvement.

Typo has an In-Built Automated Code Review Feature

Code review is all about improving code quality. It boosts software teams’ productivity and streamlines the development process. When done manually, however, the code review process can be time-consuming and take a lot of effort.

Typo’s automated code review tool auto-analyses codebase and pull requests to find issues and auto-generates fixes before it merges to master. It understands the context of your code and quickly finds and fixes any issues accurately, making pull requests easy and stress-free. It standardizes your code, reducing the risk of a software security breach and boosting maintainability, while also providing insights into code coverage and code complexity for thorough analysis.

You can Track the Team’s Progress with the Advanced Sprint Analysis Tool

While a burndown chart helps visually monitor teams’ work progress, it is time-consuming and doesn’t provide insights about the specific types of issues or tasks. Hence, it is always advisable to complement it with sprint analysis tools to provide additional insights tailored to agile project management.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. It uses data from Git and the issue management tool to provide insights into how much work has been completed, how much is still in progress, and how much time is left in the sprint. This helps identify potential problems early, spot areas where teams can be more efficient, and meet deadlines.

The Metrics Dashboard Focuses on Team-Level Improvement, Not on Micromanaging Individual Developers

When engineering metrics focus on individual success rather than team performance, they create a sense of surveillance rather than support. This leads to decreased motivation, productivity, and trust among development teams. There are better ways to use engineering metrics.

Typo has a metrics dashboard that focuses on the team’s health and performance. It lets engineering leaders compare the team’s results with healthy benchmarks across industries and drive impactful initiatives for the team. Since it considers only the team’s goals, it lets team members work and solve problems together, fostering a healthier and more productive work environment conducive to innovation and growth.

Typo Takes into Consideration the Human Side of Engineering

Measuring developer experience requires not only quantitative metrics but also qualitative feedback. By prioritizing the human side of team members and developer productivity, engineering managers can create a more inclusive and supportive environment for them.

Typo helps in getting a 360 view of the developer experience as it captures qualitative insights and provides an in-depth view of the real issues that need attention. With signals from work patterns and continuous AI-driven pulse check-ins on the experience of developers in the team, Typo helps with early indicators of their well-being and actionable insights on the areas that need your attention. It also tracks the work habits of developers across multiple activities, such as Commits, PRs, Reviews, Comments, Tasks, and Merges, over a certain period. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system identifies them as being in the Burnout zone or at risk of burnout.
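
As a rough illustration of that kind of signal, the sketch below flags developers whose recent weekly activity consistently exceeds the team average. The threshold and data are assumptions for the sketch, not Typo’s actual detection logic:

```python
from statistics import mean

# Hypothetical events per week (commits + PRs + reviews) per developer.
weekly_activity = {
    "dev_a": [42, 45, 48, 51],
    "dev_b": [20, 22, 19, 21],
    "dev_c": [25, 24, 27, 26],
}

team_avg = mean(v for series in weekly_activity.values() for v in series)

def at_risk(series, threshold=1.3) -> bool:
    """Flag a developer whose every recent week exceeds 1.3x the team average."""
    return all(week > threshold * team_avg for week in series)

for dev, series in weekly_activity.items():
    if at_risk(series):
        print(f"{dev}: sustained high activity, possible burnout risk")
```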

You can Integrate Multiple Tools with Your Dev Stack

The more tools that can be integrated with the platform, the better it is for software developers. Integrations streamline the development process, enforce standardization and consistency, and provide access to valuable resources and functionalities.

Typo lets you see the complete picture of your engineering health by seamlessly connecting to your tech tool stack. This includes:

  • Version control tools that use the Git version control system
  • Issue tracker tools for managing tasks, bug tracking, and other project-related issues
  • CI/CD tools to automate and streamline the software development process
  • Communication tools to facilitate the exchange of ideas and information
  • Incident management tools to resolve unexpected events or failures

Conclusion

Typo is a software delivery tool that can help ship reliable software faster. You can find real-time bottlenecks in your SDLC, automate code reviews, and measure developer experience – all in a single platform.
