Typo's Picks

In the fourth episode of ‘The DORA Lab’ - an exclusive podcast by groCTO, host Kovid Batra engages in an enlightening conversation with Peter Peret Lupo, Head of Software Engineering at Sibli, who brings over a decade of experience in engineering management.

The episode starts with Peter sharing his hobbies, followed by an in-depth discussion on how DORA metrics play a crucial role in transforming organizational culture and establishing a unified framework for measuring DevOps efficiency. He discusses fostering code collaboration through anonymous surveys and key indicators like code reviews. Peter also covers managing technical debt, the challenges of implementing metrics, and the timeline for adoption. He emphasizes the importance of context in analyzing teams based on metrics and advocates for a bottom-up approach.

Lastly, Peter concludes by emphasizing the significant impact that each team member has on engineering metrics. He encourages individual contributors and managers to monitor both their personal & team progress through these metrics.

Timestamps

  • 00:49 - Peter’s introduction
  • 03:27 - How engineering metrics influence org culture
  • 05:08 - Are DORA metrics enough?
  • 09:29 - Code collaboration as a key metric
  • 12:40 - Metrics to address technical debt
  • 17:27 - Overcoming implementation challenges
  • 21:00 - Timeframe & process of adopting metrics
  • 25:19 - Importance of context in analyzing teams
  • 28:31 - Peter’s advice for ICs & managers

Episode Transcript

Kovid Batra: Hi everyone. This is Kovid, back with another episode of our exclusive series, the DORA Lab, where we will be talking about all things DORA, engineering metrics, and their impact. And to make today's show really special, we have Peter with us, who is currently an engineering manager at Sibli. For a big part of his career, he was a teacher at a university, and then he moved into engineering management; he currently holds more than 10 years of engineering management experience. He has great expertise in setting up dev processes and implementing metrics, and that's why we have him on the show today. Welcome to the show, Peter. 

Peter Peret Lupo: Thank you. 

Kovid Batra: Quickly, Peter, uh, before we jump into DORA metrics, engineering metrics and dev processes, and how they impact overall engineering efficiency, we would love to know a little bit more about you. What I have just spoken is more from your LinkedIn profile, so we don't know who the real Peter is. So if you could share something about yourself, your hobbies or some important events of your life which define who you are today, I think that would be really great. 

Peter Peret Lupo: Well, um, hobbies I have a few. I like playing games, computer, VR, sort of like different styles, different, uh, genres. Two things that I'm really passionate about are like playing and studying. So I do study a lot. I've been like taking like one hour every day almost to study new things. So it's always exciting to learn new stuff. 

Kovid Batra: Great, great. 

Peter Peret Lupo: I guess, a big nerd here! 

Kovid Batra: Thank you so much. Yeah. No, I think that that's really, uh, what most software developers and engineering managers would be like, but good to know about you on that note.

Apart from that, uh, Peter, is there anything you really love or would you like to share any, uh, event from your life that you think is memorable and it defines you today who you are? 

Peter Peret Lupo: Well, that's a deep question. Um, I don't know, I guess like, one thing that was like a big game changer for me was, uh, well, I'm Brazilian, I came to Canada, now I'm Canadian too. Um, so I came to Canada like six years ago, and, uh, it has been transformational, I think. Like cultural differences, a lot of interesting things. I feel more at home here, to be honest. Uh, but like, yeah, uh, meeting people from all over the world, it's been a great experience. 

Kovid Batra: Great, great. All right, Peter. So I think, first of all, thanks a lot for that short, sweet intro about yourself. From this point, let's move on to our main topic of today, uh, which is around the engineering metrics and DORA metrics. Before we deep dive, I think the most important part is why DORA metrics, or why engineering metrics, right? So I think let's start from there. Why are these engineering metrics important, why should people actually use them, and in what situations? 

Peter Peret Lupo: I think the DORA metrics are really important because they're kind of changing the culture of many organizations. Like, a lot of people were already into, uh, measuring. Measuring like performance of processes and all, but, uh, sometimes it wasn't like very well seen that people were measuring processes; people took it personally, and all sorts of things. But nowadays, people are more used to metrics. DORA metrics is like a very good framework for DevOps metrics, and so widespread nowadays, it's kind of like a common language, a common jargon. Like when you talk about things like mean lead time for changes, everybody knows that, everybody knows how to calculate that. I guess that's like the first thing, like changing the culture about measuring. And measuring is really important because it allows you to, uh, to establish a baseline and compare the results of your changes to where you were before and, uh, affirm if you actually have improved, if something got worse with your changes, if the benefits of your changes are aligned with the organizational goals. It allows everybody to be engaged at some level in, uh, reaching the organizational goals. 

Kovid Batra: Makes sense. Yeah, absolutely. I think when we always talk about these metrics, most of the people are talking about the first-level DORA metrics, which is your lead time for changes or cycle time, or the deployment frequency, change failure rate, mean time to restore. These metrics define a major part of how you should look at engineering efficiency as a manager, as a leader, or as a part of the team. But do you think is it sufficient enough? Like looking at just the DORA metrics, does it sound enough to actually look at a team's efficiency, engineering efficiency? Or do you think beyond DORA that we should look at metrics that could actually help teams identify other areas of engineering efficiency also? 

Peter Peret Lupo: Well, um, one thing that I like about DORA metrics is that they let us start the culture of measuring. However, I don't see that as like the only source of information, like the only set of metrics that matter. I think there are a lot of things that are not covered in DORA metrics. The way that I see it, it's like a very good subset for DevOps. It covers many different aspects of DevOps, and that's important because when you wanna measure something, it's important to measure different aspects, because if you are trying to improve something, you want to be able to detect like side effects that may be negative on other aspects. So it's important to have like a good framework. However, it's focused a lot on DevOps, and, uh, I'll tell you, like, if you are in a very large organization with a lot of developers pushing features, like many changes daily, and your goal is to be able to continuously deliver them and be able to roll them back and assess like the time to restore the service when something breaks down, that's good, that's very, very interesting. And so I think it's very aligned with like what Google does. Like it's a very big corporation, uh, with a lot of different teams. However, context matters, right? The organizational context matters. Not all companies are able, for instance, to do continuous delivery. And sometimes it's not a matter of like what the company wants or their capability; sometimes their clients don't want that. Like if you have like banks as clients, they don't want you to be changing their production environments every like 12 hours or so. Uh, they want like big, phased releases where they can like do their own testing, do their own validation sometimes. So it's fundamentally different. 

In terms of, uh, the first part of it, because when you get to DevOps and you get to like delivering stuff into production, things were already built, right? So building is also something that you should be looking at. So DORA metrics provide a good entry point to start measuring, but you do need to look at things like quality, for instance, because if you're deploying something and you're rolling back, and I want to make a parenthesis there, if you're measuring deployment frequency, you should be telling those apart, because rolling back a feature is not the same as, like, deploying a feature. But if you're rolling back because something wasn't built right, wasn't built correctly, there's a defect there. DORA metrics won't allow you to understand the nature of the defect, whether it got into, like, the requirements and then propagated to code and tests, or if somebody made a mistake in the code. It doesn't allow you this level of understanding of the nature of your defects, or even productivity. So if you're not in a scenario where you do have a lot of teams, where you do have a lot of like developers pushing code changes all the time, uh, maybe your bottleneck, maybe your concerns are actually on the development side. So you should be looking at metrics on that side, like code quality, or product quality in general, defect density, uh, productivity, these sorts of things. 

Kovid Batra: I think great point there. Uh, actually, context is what is most important, and DORA could be the first step to look into engineering efficiency in general, but the important, or I should say the real point is understanding the context and then applying the metrics, and we would need metrics which go beyond DORA also. Like, as you mentioned, there would be scenarios where you would want to look at defect density, you would want to look at code quality, and from that, uh, I think one of the interesting, uh, metrics that I have recently come across is about code collaboration also, right? So people look at how well the teams are collaborating over the code reviews. So that also becomes an essential part of when you're shipping your software, right? So the quality gets impacted. The velocity of the delivery gets impacted. Have you encountered a scenario where you wanted or you had measured code review collaboration within the team? And if you did so, uh, how did you do it? 

Peter Peret Lupo: Yes, actually in different ways. So one thing that I like to do, it's more of a qualitative measurement, but I do believe there is space for this kind of metric as well. One thing that I like doing, that I'm currently doing, and I've done in other companies as well, is taking some part of the sprint retrospective to share with the team the results of a survey. And one of the things that I do ask on the survey is if they're being supported by team members, if they're supporting team members. So it's just like a Likert scale, like 1 to 5, but it highlights like that kind of collaboration support. 

Kovid Batra: Right.

Peter Peret Lupo: Um, it's anonymous, so I can't tell like who is helping who. Uh, so sometimes somebody's, like, being helped a lot, and sometimes some other person is helping a lot. And maybe they switch, depending on like whether or not they're working on something that they're familiar with and the other person isn't, or vice versa, I don't know. I have no means to do that, and I don't bother about that. Nobody should be bothering about that. I think if you have like a very senior person, they're probably like helping a lot of people and maybe they're not pushing many changes, but like everybody relies on them. Uh, so if you're working on the same team, you should be measuring the team, right? But there are other things as well, like, um, you can see like comments on code reviews, who jumps in to do code reviews, and all those kinds of things, right? These are very important indicators that you have like a healthy team, that they're supporting each other. You can even like identify some things, like if people are learning more about the code component they are changing, or like a service or whatever area, however you define it, uh, if you have like knowledge silos and, um, who should be providing training to whom to break down those silos to improve productivity. So yeah, that's very insightful and very helpful. Yeah, definitely. 

Kovid Batra: Makes sense, makes sense. Um, is there anything that you have used, uh, to look at the technical debt? So that is also something I have, uh, always heard from managers and leaders. Like when you're building, whether you are a large organization or you are a small one moving faster, uh, the degree might vary, but you accumulate technical debt over a period of time. Is there something that you think could be looked at as a metric to indicate that, okay, it's high time now, that we should look at technical debt? Because mostly what happens is like whenever there are team meetings, people just come up with ideas that, okay, this is what we can improve, this is where we are facing a lot of bugs and issues. So let's work on this piece because this has now become a debt for us. But is there something objective that could tell that yes, now it's time that we should sit down and look at the technical debt part? 

Peter Peret Lupo: Well, uh, the problem is like, there are so many, uh, different approaches to technical debt. They're going to be more suited to one organization or another. If you have like a very, uh, engineering-driven organization, you tend to have less technical debt, or you tend to pay that technical debt more often. But if it's not the case, if it's like more product-driven, you tend to accumulate it more often, and then you need to apply different approaches. So, one thing that I like doing is like when we are acquiring the debt; and that's normal, that's part of life. Sometimes you have to, and you should be okay with that. But when we are acquiring debt, we catalog it somewhere. Maybe you have like an internal wiki or something, like whatever documentation tool you use. You add that to a catalog where you basically have like your components or services or however you split your application. And then like what's the technical debt you're acquiring, what would be the appropriate solutions or alternatives, how that's going to impact you, and most importantly, when you believe you should pay it so you don't get like a huge impact, right? 

Kovid Batra: Right. Of course. So just one thing I recently heard from one of my friends. Like they look at the time for a new developer to do the first commit as an indicator of technical debt. So if they... First release, actually. So if someone who is joining new in the team, if they're taking too much time to reach a point where they could actually merge their code and, like, have it on production, uh, if that is high, and they, of course, keep a baseline there, then they consider that there is a lot of debt they might have accumulated, because of which the learning and the implementation for the first release from a new developer is taking time. So do you think this approach could work, or could this approach be inferential to what we are talking about, like the technical debt? 

Peter Peret Lupo: I think that in this particular case, there are so many confounding variables. People join the team at different seniority levels. A senior might take less time than a junior, even in a scenario where there is more technical debt. So that alone is hard to compare. Even at the same level, people join with different skills. So maybe you have like a feature where you need to write frontend and backend code, and some people are, uh, full stack but are more backend-inclined or more frontend-inclined. That alone will change your metric. You are waiting for one person to join that team so you can have like a new point of measurement. So you're not gonna have a lot, and there's gonna be like a lot of variation because of these confounding factors. Even the onboarding process may change in between. The way that I usually like to introduce people to code is asking them to reduce the amount of warnings from like code linters first, and then fixing some simple defects, and then something like a more complex defect, and then writing a new feature. Uh, so even like your own onboarding strategy, the way you define it, is going to affect that metric. So I wouldn't be very confident in that metric for this purpose. 

Kovid Batra: Okay. Got it, got it. Makes sense. All right. I think if I have to ask you, uh, it's never easy, right? Like in the beginning, you mentioned that the first point itself, talking about these metrics, is hard, right? Even if they make a lot of practical sense, talking about them is hard. So when there is inherent resistance towards this topic, uh, in the teams, when you go about implementing it, there could be a hell of a lot of challenges, right? And I'm sure you would have also come across some of those in your journey when you were implementing it. So can you give us some examples from the implementation point of view, like how does the implementation go for, uh, these metrics, and what are the challenges one faces when they're implementing it? And maybe if there are certain timelines to which one should look at for a full-fledged implementation and getting some success from the implementation of these metrics. 

Peter Peret Lupo: Right. So, um, usually you're measuring something because you want to prove something, right? Because you want to like achieve like a certain goal, uh, maybe organizational, or just like the team's. So I think that the first thing to lower, uh, the resistance is having a clear goal, and making sure that everybody understands that, uh, that the goal is not measuring anybody, uh, individually. That already like reduces the resistance a lot, and making sure that people understand why that goal is important and how you're going to measure it is also extremely important.

Another thing that is interesting is to ask people for input on like how they think you could be measuring that. So making them also part of the process. And maybe the way that they're advising is not going to be like the way that you end up measuring. Maybe it influences it, maybe it's exactly what they suggest, but the important thing is to make them part of the process, so they feel that the process of establishing metrics is not something that is being done to them, but something that they are doing with everybody else. 

And so honestly, like so many things are already measured by the team, uh, velocity or however they estimate productivity. Even the estimates themselves are on like tickets, on user stories; uh, these are all, uh, attempts to measure things, and they're used to compare the estimations with, uh, the actual results, so they know what the measures are used for. So sometimes it's just a matter of like establishing these parallels. Look, we measure productivity, we measure velocity to see if we are getting better, if we're getting worse. We also need to measure, uh, the quality to see if we're like catching more defects than before, if we have like more escaped defects. Measurement is in some way already a part of our lives. Most of the time, it's a matter of like highlighting that, and, uh, people are usually comfortable with them, yeah, once you go through all this. 

Kovid Batra: Right. Makes sense. Um, I think the major part is done when the team is aligned on the 'why' part, like why you are doing it, because as soon as they realize that there is some importance to measuring this metric, they would automatically, intuitively be aligned towards measuring that, and it becomes easier, because then if there are challenges related to the implementation process also, they would like come together and maybe find out ways to, uh, build things around that and help in the actual measurement of the metric also.

But if I have to ask, let's say a team is fully aligned and, uh, we are looking at implementing, let's say, DORA metrics for a team. What should be the time frame one should keep in mind to get an understanding of what these metrics are saying? Because it's not like straightforward. You look at the deployment frequency; if it's high, you say things are good, if it's low, things are bad. Of course, it doesn't work like that. You have to understand these metrics in the first place in the cadence of your team, in the situation of your team, and then make sense out of it and find out those bottlenecks or those areas of inefficiency where you could really work upon, right? So what should be that time frame in one's mind? Say someone is an engineering manager who is implementing this for a team. What time frame should that person keep in mind, and what exactly should be the first step towards measuring these once you start implementing them? 

Peter Peret Lupo: Right. So it's a very good question. Time frame varies a lot and I'll tell you why; because more important than the time is the amount of data points that you have. If you wait for, like let's say, a month and you have like three data points, you can't establish any sort of pattern. You don't know if that's increasing, decreasing. There's no confidence. There's no statistical relevance. It's, like, the sample is too small. But like if you collect, like three data points, that's like generic for any metric. If you collect, like three data points every day, maybe in a week you'll have enough. The problem I see here is like, let's say, uh, something happens that is out of the ordinary. I want to investigate that to see if there is room for improvement there, or if that actually indicates that something went like really well and you want to replicate that success in the other cases. Um, you can't tell what's out of the ordinary if you're looking at three, four points. 

Kovid Batra: Right. 

Peter Peret Lupo: Uh, or if it's just like normal variation. So, I think that what's important is to have like a good baseline. So, that's gonna vary from process to process, from organization to organization, but there are some indications in the literature that like you should collect at least 30 data points. I think that with 30 data points you have like somewhat of a good, uh, statistical relevance for your analysis, but I don't think you have to wait for that many points in order to start investigating things. Sometimes you have like 10 or 12 and you already see something that looks like something that you should investigate, or you start having like an idea of what's going on, if it's higher than you expected, if it's lower than you expected, and you can start taking actions and investigating that, as long as you consider that your interpretation may not be valid, because like your sample is small. The time that it takes, like the time frame, I guess that's going to depend on how often you are able to collect a new data point, and that's going to vary from organization to organization and from process to process; like measuring quality is different from measuring productivity, uh, and so on. So, I think all these things need to be taken into consideration. I think that the confidence is important. 

And one other thing that you mentioned there, about like the team analyzing. It's something that I want to touch on because it's an experience that I've had more than once. You mentioned context. Context is super important. So much so that I think that the team that is producing the metrics should be the first one looking at them, not management, higher management, C-level, not them, because they are the only ones that are able to look at data points and say, "Yeah, things here didn't go well. Our only QA was on vacation." Or like somebody took a sick day or whatever reason, like they have the context. So they should be the first ones looking at the metric, analyzing the metric, and conveying the results of their analysis to higher levels, not the other way around, because what happens when you have it the other way around is that, like, they don't have the context, so they're looking at just the numbers, and if the number is bad, they're gonna inquire about it. If it's good, they are usually gonna stay quiet, uh, and they're gonna ask about the bad numbers, whether or not there was a good reason for that, whether or not it was like, uh, let's say, an exception. And then the team is going to feel that they have to defend themselves, to justify themselves every time, and it creates like a very poisonous scenario where the team feels that management is there to question them and they need to defend themselves against management instead of them having the autonomy to report on their success and their failures to management and let management deal with those results instead of the causes. 

Kovid Batra: Totally, totally. 

Peter Peret Lupo: Context is super important. 

Kovid Batra: Great point there. Yeah, of course. Great point there, uh, highlighting the do's and don'ts from your experience, and it's very relevant actually, because the numbers don't always give you the reality of the situation. They could be an indicator, and that's why we have them in place. Like first thing, you measure it. Don't come to a conclusion from it directly. If you see some discrepancy, like if there are some extraordinary data points, as you said, then there is a point where you should come out and inquire to understand what exactly happened, but not directly jump onto the team saying that, 'Oh, you're not doing good,' or the other way around. So I think that totally makes sense, uh, Peter. 

I think it was really, really interesting talking to you about the metrics and the implementation and the experiences that you have shared. Um, we could go on and on about this, but today I think we'll have to stop here and, uh, say goodbye to you. Maybe we can have another round of discussion continuing with those experiences that you have had with the implementation.

Peter Peret Lupo: Definitely. It was a real pleasure. 

Kovid Batra: It would be our pleasure, actually. But, uh, like before you leave, uh, anything that you want to share with our audience as parting advice, uh, would be really appreciated. 

Peter Peret Lupo: All right. Um, look at your metrics as an ally, as a guide to tell you where you're going. Compare what you're doing now with what you were doing before to see if you're improving. When I say 'you', I'm talking to, uh, each individual in the team. Consider your team metrics, look at them; your work is part of the work that is being analyzed, and you have an influence on that at an individual level and with your team. So do look at your metrics, compare where you are at with where you were before, to see if your changes, uh, carried the improvements you're looking for, and talk to your team about these metrics in your sprint retrospective. That's a very powerful tool to tell you, like, if your, uh, retrospective actions are being effective in delivering the change that you want in your process.

Kovid Batra: Great! I think great piece of advice there. Thanks, Peter. Thank you so much. Uh, this was really insightful. Loved talking to you. 

Peter Peret Lupo: All right. Thank you.

The era when development and operations teams worked in isolation, rarely interacting, is over. This outdated approach led to significant delays in developing and launching new applications. Modern IT leaders understand that DevOps is a more effective strategy.

DevOps fosters collaboration between software development and IT operations, enhancing the speed, efficiency, and quality of software delivery. By leveraging DevOps tools, the software development process becomes more streamlined through improved team collaboration and automation.

Understanding DevOps

DevOps is a methodology that merges software development (Dev) with IT operations (Ops) to shorten the development lifecycle while maintaining high software quality.

Creating a DevOps culture promotes collaboration, which is essential for continuous delivery. IT operations and development teams share ideas and provide prompt feedback, accelerating the application launch cycle.

Importance of DevOps for Startups

In the competitive startup environment, time equates to money. Delayed product launches risk competitors beating you to market. Even with an early market entry, inefficient development processes can hinder timely feature rollouts that customers need.

Implementing DevOps practices helps startups keep pace with industry leaders, speeding up development without additional resource expenditure, improving customer experience, and aligning with business needs.

Core Principles of DevOps

The foundation of DevOps rests on the principles of culture, automation, measurement, and sharing (CAMS). These principles drive continuous improvement and innovation in startups.

Key Benefits of DevOps for Startups

Faster Time-to-Market

DevOps accelerates development and release processes through automated workflows and continuous feedback integration.

  • Startups can rapidly launch new features, fix bugs, and update software, gaining a competitive advantage.
  • Implement continuous integration and continuous deployment (CI/CD) pipelines.
  • Use automated testing to identify issues early (see the sketch after this list).
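
To make the automated-testing bullet concrete, here is a toy example of the kind of unit test a CI/CD pipeline would run on every push, written for pytest. The pricing function and its rules are hypothetical, purely for illustration; the point is that checks like this run automatically before any deployment.

```python
# test_pricing.py -- a toy test a CI pipeline would run automatically.
# The discount logic is hypothetical, purely for illustration.
import pytest


def apply_discount(price: float, pct: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)


def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0


def test_rejects_invalid_discount():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```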

Improved Efficiency

DevOps enhances workflow efficiency by automating repetitive tasks and minimizing manual errors.

  • Utilize configuration management tools like Ansible and Chef.
  • Implement containerization with Docker for consistency across environments.
  • Use Jenkins to automate CI/CD pipelines.
  • Orchestrate containers at scale with Kubernetes.

Enhanced Reliability

DevOps ensures code changes are continuously tested and validated, reducing failure risks.

  • Conduct regular automated testing.
  • Continuously monitor applications and infrastructure.
  • Increased reliability leads to higher customer satisfaction and retention.

DevOps Practices for Startups

Embrace Automation with CI/CD Tools

Automation tools are essential for accelerating the software delivery process. Startups should use CI/CD tools to automate testing, integration, and deployment. Recommended tools include:

  • Jenkins: An open-source automation server that supports building and deploying applications.
  • GitLab CI/CD: Integrated CI/CD capabilities within GitLab for seamless pipeline management.
  • CircleCI: A cloud-based CI/CD tool that offers fast builds and easy integration with various services.

Implement Continuous Integration and Continuous Delivery (CI/CD)

CI/CD practices enable frequent code changes and deployments. Key components include:

  • Version Control Systems (VCS): Use Git with platforms like GitHub or Bitbucket for efficient code management.
  • Build Automation: Tools like Maven or Gradle for Java projects, or npm scripts for Node.js, automate the build process.
  • Deployment Automation: Utilize tools like Spinnaker or Argo CD for managing Kubernetes deployments.

Utilize Infrastructure as Code (IaC)

IaC allows startups to manage infrastructure through code, ensuring consistency and reducing manual errors. Consider using:

  • Terraform: For provisioning and managing cloud infrastructure in a declarative manner.
  • AWS CloudFormation: For defining infrastructure using YAML or JSON templates.
  • Ansible: For configuration management and application deployment.

Adopt Containerization

Containerization simplifies deployment and improves resource utilization. Use:

  • Docker: To package applications and their dependencies into lightweight, portable containers.
  • Kubernetes: For orchestrating containerized applications, enabling scaling and management.

Monitor and Measure Performance

Implement robust monitoring tools to gain visibility into application performance; a minimal instrumentation sketch follows the list below. Recommended tools include:

  • Prometheus: For real-time monitoring and alerting.
  • Grafana: For visualizing metrics and logs.
  • ELK Stack (Elasticsearch, Logstash, Kibana): For centralized logging and data analysis.
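
As a concrete illustration of the monitoring idea, here is a minimal sketch of instrumenting a Python service with the official prometheus_client library so Prometheus can scrape it. The metric names, port, and simulated work are all invented for the example.

```python
# A minimal, hypothetical instrumentation sketch using prometheus_client.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


@LATENCY.time()  # records the duration of each call in the histogram
def handle_request():
    REQUESTS.inc()  # count one more request
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request()
```

Grafana can then chart these series, and Prometheus alert rules can fire on them.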

Integrate Security (DevSecOps)

Incorporate security practices into the DevOps pipeline using:

  • Snyk: For identifying vulnerabilities in open-source dependencies.
  • SonarQube: For continuous inspection of code quality and security vulnerabilities.
  • HashiCorp Vault: For managing secrets and protecting sensitive data.

Leverage Software Engineering Intelligence (SEI) Platforms

SEI platforms provide critical insights into the engineering processes, enhancing decision-making and efficiency. Key features include:

  • Data Integration: SEI platforms like Typo ingest data from various tools (e.g., GitHub, JIRA) to provide a holistic view of the development pipeline.
  • Actionable Insights: These platforms analyze data to identify bottlenecks and inefficiencies, enabling teams to optimize workflows and improve delivery speed.
  • DORA Metrics: SEI platforms track key metrics such as deployment frequency, lead time for changes, change failure rate, and time to restore service, helping teams measure their performance against industry standards (see the sketch after this list).
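
To make the data-integration point concrete, here is a rough sketch of pulling deployment records from GitHub's REST API and turning them into a weekly deployment-frequency series. The acme/widget repository is hypothetical, and a real SEI platform would also handle authentication, pagination, and many more data sources; this illustrates the general idea, not any particular product's implementation.

```python
# A rough, illustrative sketch: fetch deployments from GitHub's public REST API
# and count them per ISO week. The repository name is made up.
from collections import Counter
from datetime import datetime

import requests

resp = requests.get(
    "https://api.github.com/repos/acme/widget/deployments",
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
resp.raise_for_status()

per_week = Counter()
for deployment in resp.json():
    created = datetime.fromisoformat(deployment["created_at"].replace("Z", "+00:00"))
    year, week, _ = created.isocalendar()
    per_week[(year, week)] += 1

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deployments")
```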

Foster Collaboration and Communication

Utilize collaborative tools to enhance communication among team members. Recommended tools include:

  • Slack: For real-time communication and integration with other DevOps tools.
  • JIRA: For issue tracking and agile project management.
  • Confluence: For documentation and knowledge sharing.

Encourage Continuous Learning

Promote a culture of continuous learning through:

  • Internal Workshops: Regularly scheduled sessions on new tools or methodologies.
  • Online Courses: Encourage team members to take courses on platforms like Coursera or Udemy.

Establish Clear Standards and Documentation

Create a repository for documentation and coding standards using:

  • Markdown: For easy-to-read documentation within code repositories.
  • GitHub Pages: For hosting project documentation directly from your GitHub repository.

How Does Typo Help DevOps Teams?

Typo is a powerful tool designed specifically for tracking and analyzing DevOps metrics. It provides an efficient solution for dev and ops teams seeking precision in their performance measurement.

  • With pre-built integrations in the dev tool stack, the dashboard provides all the relevant data within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
  • The dashboard lets each team set custom improvement goals and tracks their success in real time.
  • It gives real-time visibility into a team’s KPIs and lets them make informed decisions.

Conclusion

Implementing DevOps best practices can markedly boost the agility, productivity, and dependability of startups.

By integrating continuous integration and deployment, leveraging infrastructure as code, employing automated testing, and maintaining continuous monitoring, startups can effectively tackle issues like limited resources and skill shortages.

Moreover, fostering a cooperative culture is essential for successful DevOps adoption. By adopting these strategies, startups can create durable, scalable solutions for end users and secure long-term success in a competitive landscape.

DORA metrics offer a valuable framework for assessing software delivery performance. By measuring DORA key metrics, organizations can identify bottlenecks, improve efficiency, and enhance software quality. It is also a key indicator for measuring the effectiveness of continuous delivery pipelines.

In this blog post, we delve into the pros and cons of utilizing DORA metrics to optimize continuous delivery processes, exploring their impact on performance, efficiency, and overall software quality.

What are DORA Metrics?

DORA metrics, developed by the DevOps Research and Assessment team, are key performance indicators that measure the effectiveness and efficiency of software development and delivery processes. They provide a data-driven approach to evaluate the impact of operational practices on software delivery performance.

Four Key DORA Metrics

  • Deployment Frequency measures how often code changes are deployed to production.
  • Lead Time for Changes measures the time it takes for a code change to move from commit to production deployment.
  • Change Failure Rate measures the percentage of deployments that cause failures in production, reflecting the quality of the code released.
  • Mean Time to Recover measures the time to recover a system or service after an incident or failure in production.

In 2021, the DORA Team added Reliability as a fifth metric. It is based upon how well the user’s expectations are met, such as availability and performance, and measures modern operational practices.

Importance of Continuous Delivery

Continuous delivery (CD) is a core practice of modern software development in which code changes are automatically prepared for release to a production environment. It is combined with continuous integration (CI), and together these two practices are known as CI/CD.

Continuous delivery offers significant advantages over traditional waterfall-style development. A few of them are:

Faster Time to Market

Continuous delivery enables more frequent releases, allowing new features, improvements, and bug fixes to be delivered to end-users more quickly. It provides a competitive advantage by keeping the product up-to-date and responsive to user needs.

Improved Quality and Reliability

Automated testing and consistent deployment processes catch bugs and issues early. It improves the overall quality and reliability of the software and reduces the chances of defects reaching production.

Reduced Deployment Risk

When updates are smaller and more frequent, it reduces the complexity and risk associated with each deployment. If an issue does arise, it becomes easier to pinpoint the problem and roll back the changes.

Scalability

CD practices can be scaled to accommodate growing development teams and more complex applications. It helps to manage the increasing demands of modern software development.

Innovation and Experimentation

Continuous delivery allows teams to experiment with new ideas and features efficiently. This encourages innovation by allowing quick feedback and iteration cycles.

Pros of DORA Metrics for Continuous Delivery

Enhances Performance Visibility

  • Deployment Frequency: Higher frequencies indicate a team’s ability to deliver updates and new features quickly and consistently.
  • Lead Time for Changes: Short lead times suggest a more efficient delivery process.
  • Change Failure Rate: A lower rate highlights better testing and higher quality in releases.
  • Mean Time to Restore (MTTR): A lower MTTR indicates a team’s capability to respond to and fix issues rapidly.

Increases Operational Efficiency

Implementing DORA metrics encourages teams to streamline their processes, reducing bottlenecks and inefficiencies in the delivery pipeline. It also allows the team to regularly measure and analyze these metrics, which fosters a culture of continuous improvement. As a result, teams are motivated to identify and resolve inefficiencies.

Fosters Collaboration and Communication

Tracking DORA metrics encourages collaboration between DevOps and other stakeholders, fostering a more integrated and cooperative approach to software delivery. It further provides objective data that teams can use to make informed decisions, prioritize work, and align their efforts with business goals.

Improves Software Quality

Continuous delivery relies heavily on automated testing to catch defects early. DORA metrics help software teams track the effectiveness of their testing processes, which ensures higher software quality. Moreover, faster deployment cycles and lower lead times enable quicker feedback from end-users, allowing teams to address issues and improve the product more swiftly.

Increases Reliability and Stability

Software teams can ensure that their deployments are more reliable and less prone to issues by monitoring and aiming to reduce the change failure rate. A low MTTR demonstrates a team’s capability to quickly recover from failures, which minimizes downtime and its impact on users. Hence, it increases the reliability and stability of the software.

Cons of DORA Metrics for Continuous Delivery

Implementation Challenges

The process of setting up the necessary software to measure DORA metrics accurately can be complex and time-consuming. Besides this, inaccurate or incomplete data can lead to misleading metrics which can affect decision-making and process improvements.

Resource Allocation Issues

Implementing and maintaining the necessary infrastructure to track DORA metrics can be resource-intensive. It potentially diverts resources from other important areas and increases the risk of disproportionately allocating resources to high-performing teams or projects to improve metrics.

Limited Scope of Metrics

DORA metrics focus on specific aspects of the delivery process and may not capture other crucial factors including security, compliance, or user satisfaction. It is also not universally applicable as the relevance and effectiveness of DORA metrics can vary across different types of projects, teams, and organizations. What works well for one team may not be suitable for another.

Cultural Resistance

Implementing DORA metrics requires changes in culture and mindset, which can be met with resistance from teams that are accustomed to traditional methods. Apart from this, ensuring that DORA metrics align with broader business goals and are understood by all stakeholders can be challenging.

Subjectivity in Measurement

While DORA metrics are quantitative in nature, their interpretation and application can be highly subjective. The definition and measurement of metrics like ‘Lead Time for Changes’ or ‘MTTR’ can vary significantly across teams, which may result in inconsistencies in how these metrics are understood and applied.

How Does Typo Solve This Issue?

As the tech landscape evolves, there is a need for diverse evaluation tools in software development. Relying solely on DORA metrics can result in a narrow understanding of performance and progress. Hence, software development organizations need a multifaceted evaluation approach.

And that’s why Typo is here to the rescue!

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as Git versioning, issue trackers, and CI/CD tools. It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Features

  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Includes an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint.
  • Provides a 360° view of the developer experience, i.e., captures qualitative insights and provides an in-depth view of the real issues.
  • Offers engineering benchmarks to compare the team’s results across industries.
  • User-friendly interface.

Conclusion

While DORA metrics offer valuable insights into software delivery performance, they have their limitations. Typo provides a robust platform that complements DORA metrics by offering deeper insights into developer productivity and workflow efficiency, helping organizations achieve the best possible software delivery outcomes.

Scrum is a popular methodology for software development. It concentrates on continuous improvement, transparency, and adaptability to changing requirements. Scrum teams hold regular ceremonies, including Sprint Planning, Daily Stand-ups, Sprint Reviews, and Sprint Retrospectives, to keep the process on track and address any issues.

With the help of DORA DevOps Metrics, Scrum teams can gain valuable insights into their development and delivery processes.

In this blog post, we discuss how DORA Metrics helps boost scrum team performance. 

What are DORA Metrics? 

DevOps Research and Assessment (DORA) metrics are a compass for engineering teams striving to optimize their development and operations processes.

In 2015, the DORA team was founded by Gene Kim, Jez Humble, and Dr. Nicole Forsgren to evaluate and improve software development practices. The aim is to enhance the understanding of how development teams can deliver software faster, more reliably, and of higher quality.

Four key metrics are: 

  • Deployment Frequency: Deployment Frequency measures the rate of change in software development and highlights potential bottlenecks. It is a key indicator of agility and efficiency. Regular deployments signify a streamlined pipeline, allowing teams to deliver features and updates faster.
  • Lead Time for Changes: Lead Time for Changes measures the time it takes for code changes to move from inception to deployment. It tracks the speed and efficiency of software delivery and offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.
  • Change Failure Rate: Change Failure Rate measures the frequency at which newly deployed changes lead to failures, glitches, or unexpected outcomes in the IT environment. It reflects reliability and efficiency and is related to team capacity, code complexity, and process efficiency, impacting speed and quality.
  • Mean Time to Recover: Mean Time to Recover measures the average duration taken by a system or application to recover from a failure or incident. It concentrates on determining the efficiency and effectiveness of an organization's incident response and resolution procedures.

Reliability is a fifth metric that was added by the DORA team in 2021. It is based upon how well your user’s expectations are met, such as availability and performance, and measures modern operational practices. It doesn’t have standard quantifiable targets for performance levels; rather, it depends upon service level indicators or service level objectives.

Why Are DORA Metrics Useful for Scrum Team Performance?

DORA metrics are useful for Scrum team performance because they provide key insights into the software development and delivery process.

Measure Key Performance Indicators (KPIs)

DORA metrics track crucial KPIs such as deployment frequency, lead time for changes, mean time to recovery (MTTR), and change failure rate, which helps Scrum teams understand their efficiency and identify areas for improvement.

Enhance Workflow Efficiency

Teams can streamline their processes and reduce bottlenecks by monitoring deployment frequency and lead time for changes, leading to faster delivery of features and bug fixes.

Improve Reliability 

Tracking the change failure rate and MTTR helps software teams focus on improving the reliability and stability of their applications, resulting in more stable releases and fewer disruptions for users.

Data-Driven Decision Making 

DORA metrics give clear data that helps teams decide where to improve, making it easier to prioritize the most impactful actions for better performance.

Continuous Improvement

Regularly reviewing these metrics encourages a culture of continuous improvement. This helps software development teams to set goals, monitor progress, and adjust their practices based on concrete data.

Benchmarking

DORA metrics allow DevOps teams to compare their performance against industry standards or other teams within the organization. This encourages healthy competition and drives overall improvement.

Best Practices for Implementing DORA Metrics in Scrum Teams

Understand the Metrics 

Firstly, understand the importance of DORA Metrics as each metric provides insight into different aspects of the development and delivery process. Together, these metrics offer a comprehensive view of the team’s performance and allow them to make data-driven decisions. 

Set Baselines and Goals

Scrum teams should start by setting baselines for each metric to get a clear starting point and set realistic goals. For instance, if a scrum team currently deploys once a month, it may be unrealistic to aim for multiple deployments per day right away. Instead, they could set a more achievable goal, like deploying once a week, and gradually work towards increasing their frequency.

Regularly Review and Analyze Metrics

Scrum teams must schedule regular reviews (e.g., during sprint retrospectives) to discuss the metrics and identify trends, patterns, and anomalies in the data. This helps to track progress, pinpoint areas for improvement, and further allows them to make data-driven decisions to optimize their processes and adjust their goals as needed.

Foster Continuous Growth

Use the insights gained from the metrics to drive ongoing improvements and foster a culture that values experimentation and learning from mistakes. By creating this environment, Scrum teams can steadily enhance their software delivery performance. Note that this approach should go beyond just focusing on DORA metrics; it should also take into account other factors like team well-being, collaboration, and customer satisfaction.

Ensure Cross-Functional Collaboration and Communicate Transparently

Encourage collaboration between development, operations, and other relevant teams to share insights and work together to address bottlenecks and improve processes. Also, make the metrics and their implications transparent to the entire team. You can use the DORA Metrics dashboard to keep everyone informed and engaged.

How Does Typo Leverage DORA Metrics?

Typo is a powerful tool designed specifically for tracking and analyzing DORA metrics. It provides an efficient solution for DevOps and Scrum teams seeking precision in their performance measurement.

  • With pre-built integrations in the dev tool stack, the DORA metrics dashboard provides all the relevant data within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
  • The dashboard lets each team set custom improvement goals and tracks their success in real time.
  • It gives real-time visibility into a team’s KPIs and allows them to make informed decisions.

Conclusion 

Leveraging DORA Metrics can transform Scrum team performance by providing actionable insights into key aspects of development and delivery. When implemented the right way, teams can optimize their workflows, enhance reliability, and make informed decisions. 


4 Key DevOps Metrics for Improved Performance

Many organizations are prioritizing the adoption and enhancement of their DevOps practices. The aim is to optimize the software development life cycle and increase delivery speed, which enables faster market reach and improved customer service.

In this article, we’ve shared four key DevOps metrics, their importance, and other metrics to consider.

What are DevOps Metrics?

DevOps metrics are the key indicators that showcase the performance of the DevOps software development pipeline. By bridging the gap between development and operations, these metrics are essential for measuring and optimizing the efficiency of both processes and people involved.

Tracking DevOps metrics allows teams to quickly identify and eliminate bottlenecks, streamline workflows, and ensure alignment with business objectives.

Four Key DevOps Metrics 

Here are four important DevOps metrics to consider:

Deployment Frequency 

Deployment Frequency measures how often code is deployed into production per week, taking into account everything from bug fixes and capability improvements to new features. It is a key indicator of agility and efficiency, and a catalyst for the continuous delivery and iterative development practices that align seamlessly with the principles of DevOps. A wrong approach to this first key metric can degrade the other DORA metrics.

Deployment Frequency is measured by dividing the number of deployments made during a given period by the total number of weeks/days. One deployment per week is standard. However, it also depends on the type of product.
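
As a sketch of the calculation just described, the snippet below divides a list of deployment dates by the number of weeks they span. The dates are invented for illustration.

```python
# Deployment frequency = deployments in a period / weeks in that period.
# The deployment dates below are made up.
from datetime import date

deployments = [
    date(2024, 6, 3), date(2024, 6, 5), date(2024, 6, 12),
    date(2024, 6, 19), date(2024, 6, 26),
]

period_days = (max(deployments) - min(deployments)).days + 1
period_weeks = period_days / 7

frequency = len(deployments) / period_weeks
print(f"{frequency:.1f} deployments per week")  # ~1.5 for this sample
```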

Importance of High Deployment Frequency

  • High deployment frequency allows new features, improvements, and fixes to reach users more rapidly. It allows companies to quickly respond to market changes, customer feedback, and emerging trends.
  • Frequent deployments usually involve incremental, manageable changes, which are easier to test, debug, and validate. Moreover, it helps to identify and address bugs and issues more quickly, reducing the risk of significant defects in production.
  • High deployment frequency leads to higher satisfaction and loyalty as it allows continuous improvement and timely resolution of issues. Moreover, users get access to new features and enhancements without long waits which improves their overall experience.
  • Deploying smaller changes reduces the risk associated with each deployment, making rollbacks and fixes simpler. Moreover, continuous integration and deployment provide immediate feedback, allowing teams to address problems before they escalate.
  • Regular, automated deployments reduce the stress and fear often associated with infrequent, large-scale releases. Development teams can iterate on their work more quickly, which leads to faster innovation and problem-solving.

Lead Time for Changes

Lead Time for Changes measures the time it takes for a code change to go through the entire development pipeline and become part of the final product. It is a critical metric for tracking the efficiency and speed of software delivery. The measurement of this metric offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.

To measure this metric, DevOps teams should have:

  • The exact time of the commit 
  • The number of commits within a particular period
  • The exact time of the deployment 

Divide the total time spent from commit to deployment by the number of commits made.
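
A small sketch of this calculation, with invented commit and deployment timestamps:

```python
# Mean lead time = total commit-to-deployment time / number of commits.
from datetime import datetime, timedelta

changes = [  # (commit time, deployment time), both made up
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 4, 15, 0)),
    (datetime(2024, 6, 5, 11, 0), datetime(2024, 6, 7, 10, 0)),
    (datetime(2024, 6, 10, 14, 0), datetime(2024, 6, 11, 9, 0)),
]

total = sum((deploy - commit for commit, deploy in changes), timedelta())
mean_lead_time = total / len(changes)
print(f"Mean lead time for changes: {mean_lead_time}")  # 1 day, 8:00:00 here
```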

Importance of Reduced Lead Time for Changes

  • Short lead times allow new features and improvements to reach users quickly, delivering immediate value and outpacing competitors by responding promptly to market needs and trends.
  • Customers see their feedback addressed promptly, which leads to higher satisfaction and loyalty. Bugs and issues can be fixed and deployed rapidly, which improves user experience.
  • Developers spend less time waiting for deployments and more time on productive work, which reduces context switching. It also enables continuous improvement and innovation, which keeps the development process dynamic and effective.
  • Reduced lead time encourages experimentation. This allows businesses to test new ideas and features rapidly and pivot quickly in response to market changes, regulatory requirements, or new opportunities.
  • Short lead times help in better allocation and utilization of resources. They help avoid prolonged delays and ensure smoother operations.

Change Failure Rate

Change Failure Rate refers to the proportion or percentage of deployments that result in failure or errors, indicating the rate at which changes negatively impact the stability or functionality of the system. It reflects the stability and reliability of the entire software development and deployment lifecycle. Tracking CFR helps identify bottlenecks, flaws, or vulnerabilities in processes, tools, or infrastructure that can negatively impact the quality, speed, and cost of software delivery.

To calculate CFR, follow these steps:

  • Identify Failed Changes: Keep track of the number of changes that resulted in failures during a specific timeframe.
  • Determine Total Changes Implemented: Count the total changes or deployments made during the same period.

Apply the formula:

Use the formula CFR = (Number of Failed Changes / Total Number of Changes) * 100 to calculate the Change Failure Rate as a percentage.
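
The formula translates directly into code; the counts below are illustrative:

```python
# CFR = failed changes / total changes * 100.
def change_failure_rate(failed_changes: int, total_changes: int) -> float:
    """Return the change failure rate as a percentage."""
    if total_changes == 0:
        raise ValueError("no changes were made in the period")
    return failed_changes / total_changes * 100


print(f"CFR: {change_failure_rate(3, 40):.1f}%")  # 7.5%
```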

Importance of Low Change Failure Rate

  • Low change failure rates ensure the system remains stable and reliable, which leads to less downtime and fewer disruptions. Moreover, consistent reliability builds trust with users.
  • Reliable software increases customer satisfaction and loyalty, as users can depend on the product for their needs. This further lowers issues and interruptions, leading to a more seamless and satisfying experience.
  • Reduced change failure rates result in reliable and efficient software, which leads to higher customer retention and positive word-of-mouth referrals. It can also provide a competitive edge in the market that attracts and retains customers.
  • Fewer failures translate to lower costs associated with diagnosing and fixing issues in production. This also allows resources to be better allocated to development and innovation rather than maintenance and support.
  • Low failure rates contribute to a more positive and motivated work environment. They further give teams confidence in their deployment processes and the quality of their code.

Mean Time to Restore

Mean Time to Restore (MTTR) represents the average time taken to resolve a production failure or incident and restore normal system functionality. Measuring MTTR provides crucial insights into an engineering team's incident response and resolution capabilities. It helps identify areas of improvement, optimize processes, and enhance overall team efficiency. 

To calculate this, add the total downtime and divide it by the total number of incidents that occurred within a particular period.
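
A minimal Python sketch of this calculation, assuming a hypothetical list of per-incident downtime figures exported from an incident management tool:

```python
# Hypothetical downtime per incident (in minutes) for one reporting period,
# as exported from an incident management tool.
incident_downtime_minutes = [42, 18, 95, 30]

# MTTR = total downtime / number of incidents in the period.
mttr = sum(incident_downtime_minutes) / len(incident_downtime_minutes)
print(f"Mean Time to Restore: {mttr:.1f} minutes")  # 46.2 minutes
```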

Importance of Reduced Mean Time to Restore

  • Reduced MTTR minimizes system downtime and ensures higher availability of services and systems, which is critical for maintaining user trust and satisfaction.
  • Faster recovery from incidents means that users experience less disruption. This leads to higher customer satisfaction and loyalty, especially in competitive markets where service reliability can be a key differentiator.
  • Frequent or prolonged downtimes can damage a company’s reputation. Quick restoration times help maintain a good reputation by demonstrating reliability and a strong capacity for issue resolution.
  • Keeping MTTR low helps in meeting service level agreements (SLAs), avoiding penalties, and maintaining good relationships with clients and stakeholders.
  • Reduced MTTR encourages a proactive culture of monitoring, alerting, and preventive maintenance. This can lead to identifying and addressing potential issues swiftly, which further enhances system reliability.

Other DevOps Metrics to Consider 

Apart from the above-mentioned key metrics, there are other metrics to take into account. These are: 

Cycle Time 

Cycle time measures the total elapsed time taken to complete a specific task or work item from the beginning to the end of the process.

Mean Time to Failure 

Mean Time to Failure (MTTF) is a reliability metric used to measure the average time a non-repairable system or component operates before it fails.

Error Rates

Error Rates measure the number of errors encountered in the platform. They indicate the platform's stability, reliability, and user experience.

Response Time

Response time is the total time from when a user makes a request to when the system completes the action and returns a result to the user.

How Typo Leverages DevOps Metrics? 

Typo is a powerful tool designed specifically for tracking and analyzing DORA metrics. It provides an efficient solution for development teams seeking precision in their DevOps performance measurement.

  • With pre-built integrations in the dev tool stack, the DORA metrics dashboard provides all the relevant data within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
  • The dashboard sets custom improvement goals for each team and tracks their success in real time.
  • It gives real-time visibility into a team’s KPIs and lets them make informed decisions.

Conclusion

Adopting and enhancing DevOps practices is essential for organizations that aim to optimize their software development lifecycle. Tracking these DevOps metrics helps teams identify bottlenecks, improve efficiency, and deliver high-quality products faster. 

How to Improve Software Delivery Using DORA Metrics

In today's software development landscape, effective collaboration among teams and seamless service orchestration are essential. Achieving these goals requires adherence to organizational standards for quality, security, and compliance. Without diligent monitoring, organizations risk losing sight of their delivery workflows, complicating the assessment of impacts on release velocity, stability, developer experience, and overall application performance.

To address these challenges, many organizations have begun tracking DevOps Research and Assessment (DORA) metrics. These metrics provide crucial insights for any team involved in software development, offering a comprehensive view of the Software Development Life Cycle (SDLC). DORA metrics are particularly useful for teams practising DevOps methodologies, including Continuous Integration/Continuous Deployment (CI/CD) and Site Reliability Engineering (SRE), which focus on enhancing system reliability.

However, the collection and analysis of these metrics can be complex. Decisions about which data points to track and how to gather them often fall to individual team leaders. Additionally, turning this data into actionable insights for engineering teams and leadership can be challenging. 

Understanding DORA DevOps Metrics

The DORA research team at Google conducts annual surveys of IT professionals to gather insights into industry-wide software delivery practices. From these surveys, four key metrics have emerged as indicators of software teams' performance, particularly regarding the speed and reliability of software deployment: Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Time to Restore Service.

DORA metrics connect production-based metrics with development-based metrics, providing quantitative measures that complement qualitative insights into engineering performance. They focus on two primary aspects: speed and stability. Deployment frequency and lead time for changes relate to throughput, while time to restore services and change failure rate address stability.

Contrary to the historical view that speed and stability are opposing forces, research from DORA indicates a strong correlation between these metrics in terms of overall performance. Additionally, these metrics often correlate with key indicators of system success, such as availability, thus offering insights that benefit application performance, reliability, delivery workflows, and developer experience.

Collecting and Analyzing DORA Metrics

While DORA DevOps metrics may seem straightforward, measuring them can involve ambiguity, leading teams to make challenging decisions about which data points to use. Below are guidelines and best practices to ensure accurate and actionable DORA metrics.

Defining the Scope

Establishing a standardized process for monitoring DORA metrics can be complicated due to differing internal procedures and tools across teams. Clearly defining the scope of your analysis—whether for a specific department or a particular aspect of the delivery process—can simplify this effort. It’s essential to consider the type and amount of work involved in different analyses and standardize data points to align with team, departmental, or organizational goals.

For example, platform engineering teams focused on improving delivery workflows may prioritize metrics like deployment frequency and lead time for changes. In contrast, SRE teams focused on application stability might prioritize change failure rate and time to restore service. By scoping metrics to specific repositories, services, and teams, organizations can gain detailed insights that help prioritize impactful changes.

Best Practices for Defining Scope:

  • Engage Stakeholders: Involve stakeholders from various teams (development, QA, operations) to understand their specific needs and objectives.
  • Set Clear Goals: Establish clear goals for what you aim to achieve with DORA metrics, such as improving deployment frequency or reducing change failure rates.
  • Prioritize Based on Objectives: Depending on your team's goals, prioritize metrics accordingly. For example, teams focused on enhancing deployment speed should emphasize deployment frequency and lead time for changes.
  • Standardize Definitions: Create standardized definitions for metrics across teams to ensure consistency in data collection and analysis.

Standardizing Data Collection

To maintain consistency in collecting DORA metrics, address the following questions:

1. What constitutes a successful deployment?

Establish clear criteria for what defines a successful deployment within your organization. Consider the different standards various teams might have regarding deployment stages. For instance, at what point do you consider a progressive release to be "executed"?

2. What defines a failure or response?

Clarify definitions for system failures and incidents to ensure consistency in measuring change failure rates. Differentiate between incidents and failures based on factors such as application performance and service level objectives (SLOs). For example, consider whether to exclude infrastructure-related issues from DORA metrics.

3. When does an incident begin and end?

Determine relevant data points for measuring the start and resolution of incidents, which are critical for calculating time to restore services. Decide whether to measure from when an issue is detected, when an incident is created, or when a fix is deployed.

4. What time spans should be used for analysis?

Select appropriate time frames for analyzing data, taking into account factors like organization size, the age of the technology stack, delivery methodology, and key performance indicators (KPIs). Adjust time spans to align with the frequency of deployments to ensure realistic and comprehensive metrics.

Best Practices for Standardizing Data Collection:

  • Develop Clear Guidelines: Establish clear guidelines and definitions for each metric to minimize ambiguity.
  • Automate Data Collection: Implement automation tools to ensure consistent data collection across teams, thereby reducing human error.
  • Conduct Regular Reviews: Regularly review and update definitions and guidelines to keep them relevant and accurate.

Utilizing DORA Metrics to Enhance CI/CD Workflows

Establishing a Baseline

Before diving into improvements, it’s crucial to establish a baseline for your current continuous integration and continuous delivery performance using DORA metrics. This involves gathering historical data to understand where your organization stands in terms of deployment frequency, lead time, change failure rate, and MTTR. This baseline will serve as a reference point to measure the impact of any changes you implement.

Analyzing Deployment Frequency

Actionable Insights: If your deployment frequency is low, it may indicate issues with your CI/CD pipeline or development process. Investigate potential causes, such as manual steps in deployment, inefficient testing procedures, or coordination issues among team members.

Strategies for Improvement:

  • Automate Testing and Deployment: Implement automated testing frameworks that allow for continuous integration, enabling more frequent and reliable deployments.
  • Adopt Feature Toggles: This technique allows teams to deploy code without exposing it to users immediately, increasing deployment frequency without compromising stability (a minimal sketch follows this list).
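
To make the feature-toggle idea concrete, here is a minimal Python sketch; the flag store, the `checkout` function, and both implementations are hypothetical, and a real project would typically use a feature-flag library or service instead.

```python
def legacy_checkout(cart):
    # Current, proven implementation.
    return f"legacy checkout for {len(cart)} item(s)"

def new_checkout(cart):
    # New code path, shipped to production but hidden behind the toggle.
    return f"new checkout for {len(cart)} item(s)"

# Hypothetical flags, loaded from configuration or a feature-flag service.
flags = {"new_checkout_flow": False}

def checkout(cart):
    # The new path deploys frequently but is exposed only when the flag is on.
    if flags.get("new_checkout_flow", False):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["book", "pen"]))  # uses the legacy path until the flag is flipped
```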

Reducing Lead Time for Changes

Actionable Insights: Long change lead time often points to inefficiencies in the development process. By analyzing your CI/CD pipeline, you can identify delays caused by manual approval processes, inadequate testing, or other obstacles.

Strategies for Improvement:

  • Streamline Code Reviews: Establish clear guidelines and practices for code reviews to minimize bottlenecks.
  • Use Branching Strategies: Adopt effective branching strategies (like trunk-based development) that promote smaller, incremental changes, making the integration process smoother.

Lowering Change Failure Rate

Actionable Insights: A high change failure rate is a clear sign that the quality of code changes needs improvement. This can be due to inadequate testing or rushed deployments.

Strategies for Improvement:

  • Enhance Testing Practices: Implement comprehensive automated tests, including unit, integration, and end-to-end tests, to ensure quality before deployment.
  • Conduct Post-Mortems: Analyze failures to identify root causes and learn from them. Use this knowledge to adjust processes and prevent similar issues in the future.

Improving Mean Time to Recover (MTTR)

Actionable Insights: If your MTTR is high, it suggests challenges in incident management and response capabilities. This can lead to longer downtimes and reduced user trust.

Strategies for Improvement:

  • Invest in Monitoring and Observability: Implement robust monitoring tools to quickly detect and diagnose issues, allowing for rapid recovery.
  • Create Runbooks: Develop detailed runbooks that outline recovery procedures for common incidents, enabling your team to respond quickly and effectively.

Continuous Improvement Cycle

Utilizing DORA metrics is not a one-time activity but part of an ongoing process of continuous improvement. Establish a regular review cycle where teams assess their DORA metrics and adjust practices accordingly. This creates a culture of accountability and encourages teams to seek out ways to improve their CI/CD workflows continually.

Case Studies: Real-World Applications

1. Etsy

Etsy, an online marketplace, adopted DORA metrics to assess and enhance its CI/CD workflows. By focusing on improving its deployment frequency and lead time for changes, Etsy was able to increase deployment frequency from once a week to multiple times a day, significantly improving responsiveness to customer needs.

2. Flickr

Flickr used DORA metrics to track its change failure rate. By implementing rigorous automated testing and post-mortem analysis, Flickr reduced its change failure rate significantly, leading to a more stable production environment.

3. Google

Google's Site Reliability Engineering (SRE) teams utilize DORA metrics to inform their practices. By focusing on MTTR, Google has established an industry-leading incident response culture, resulting in rapid recovery from outages and high service reliability.

Leveraging Typo for Monitoring DORA Metrics

Typo is a powerful tool designed specifically for tracking and analyzing DORA metrics. It provides an efficient solution for development teams seeking precision in their DevOps performance measurement.

  • With pre-built integrations in the dev tool stack, the DORA metrics dashboard provides all the relevant data within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
  • The dashboard sets custom improvement goals for each team and tracks their success in real time.
  • It gives real-time visibility into a team’s KPIs and lets them make informed decisions.

Importance of DORA Metrics for Boosting Tech Team Performance

DORA metrics serve as a compass for engineering teams, optimizing development and operations processes to enhance efficiency, reliability, and continuous improvement in software delivery.

In this blog, we explore how DORA metrics boost tech team performance by providing critical insights into software development and delivery processes.

What are DORA Metrics?

DORA metrics, developed by the DevOps Research and Assessment team, are a set of key performance indicators that measure the effectiveness and efficiency of software development and delivery processes. They provide a data-driven approach to evaluate the impact of operational practices on software delivery performance.

Four Key DORA Metrics

  • Deployment Frequency: It measures how often code is deployed to production in a given time frame. 
  • Lead Time for Changes: It measures the time it takes for code changes to move from inception to deployment. 
  • Change Failure Rate: It measures the percentage of deployments that cause failures or degraded service in production.
  • Mean Time to Recover: It measures the time to recover a system or service after an incident or failure in production.

In 2021, the DORA Team added Reliability as a fifth metric. It is based upon how well the user’s expectations are met, such as availability and performance, and measures modern operational practices.

How do DORA Metrics Drive Performance Improvement for Tech Teams? 

Here’s how key DORA metrics help in boosting performance for tech teams: 

Deployment Frequency 

Deployment Frequency is used to track the rate of change in software development and to highlight potential areas for improvement. Getting this first key metric wrong can drag down the other DORA metrics.

One deployment per week is considered standard, though the right cadence also depends on the type of product.
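
For illustration, the short Python sketch below counts deployments per ISO week from a list of deployment dates; the dates are hypothetical placeholders for records from your CI/CD pipeline.

```python
from collections import Counter
from datetime import date

# Hypothetical production deployment dates from the CI/CD pipeline.
deployments = [
    date(2024, 6, 3), date(2024, 6, 5), date(2024, 6, 12),
    date(2024, 6, 13), date(2024, 6, 14),
]

def iso_week(d: date) -> tuple:
    iso = d.isocalendar()
    return (iso[0], iso[1])  # (ISO year, ISO week number)

# Group deployments by ISO week and count them.
per_week = Counter(iso_week(d) for d in deployments)
for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deployment(s)")
```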

How does it Drive Performance Improvement? 

  • Frequent deployments allow development teams to deliver new features and updates to end-users quickly, enabling them to respond to market demands and feedback promptly.
  • Regular deployments keep changes smaller and more manageable, reducing the risk of errors and making issues easier to identify and fix. 
  • Frequent releases offer continuous feedback on the software’s performance and quality, which facilitates continuous improvement and innovation.

Lead Time for Changes

Lead Time for Changes is a critical metric used to measure the efficiency and speed of software delivery. It is the duration between a code change being committed and its successful deployment to end-users. 

The standard for Lead time for Change is less than one day for elite performers and between one day and one week for high performers.

How does it Drive Performance Improvement? 

  • Shorter lead times mean new features and bug fixes reach customers faster, enhancing customer satisfaction and competitive advantage.
  • Reducing lead time highlights inefficiencies in the development process, prompting software teams to streamline workflows and eliminate bottlenecks.
  • A shorter lead time allows teams to quickly address critical issues and adapt to changes in requirements or market conditions.

Change Failure Rate

CFR, or Change Failure Rate, measures the frequency at which newly deployed changes lead to failures, glitches, or unexpected outcomes in the IT environment.

A CFR between 0% and 15% is considered a good indicator of code quality.

How does it Drive Performance Improvement? 

  • A lower change failure rate highlights higher quality changes and a more stable production environment.
  • Measuring this metric helps teams identify bottlenecks in their development process and improve testing and validation practices.
  • Reducing the change failure rate enhances the confidence of both the development team and stakeholders in the reliability of deployments.

Mean Time to Recover 

MTTR, which stands for Mean Time to Recover, is a valuable metric that provides crucial insights into an engineering team's incident response and resolution capabilities.

An MTTR of less than one hour is considered the benchmark for elite teams.

How does it Drive Performance Improvement? 

  • Reducing MTTR boosts the overall resilience of the system, ensuring that services are restored quickly and downtime is minimized.
  • Users experience less disruption due to quick recovery from failures. This helps in maintaining customer trust and satisfaction. 
  • Tracking MTTR encourages teams to analyze failures, learn from incidents, and implement preventative measures to avoid similar issues in the future.

How to Implement DORA Metrics in Tech Teams? 

Collect the DORA Metrics 

Firstly, you need to collect DORA Metrics effectively. This can be done by integrating tools and systems to gather data on key DORA metrics. There are various DORA metrics trackers in the market that make it easier for development teams to automatically get visual insights in a single dashboard. The aim is to collect the data consistently over time to establish trends and benchmarks. 

Analyze the DORA Metrics 

The next step is to analyze them to understand your development team's performance. Start by comparing metrics to the DORA benchmarks to see if the team is an Elite, High, Medium, or Low performer. Ensure to look at the metrics holistically as improvements in one area may come at the expense of another. So, always strive for balanced improvements. Regularly review the collected metrics to identify areas that need the most improvement and prioritize them first. Don’t forget to track the metrics over time to see if the improvement efforts are working.

Drive Improvements and Foster a DevOps Culture 

Leverage the DORA metrics to drive continuous improvement in engineering practices. Discuss what’s working and what’s not, and set goals to improve metric scores over time. Don’t use DORA metrics on their own; tie them to other engineering metrics for a holistic view, and experiment with changes to tools, processes, and culture. 

Encourage practices like: 

  • Implement small changes and measure their impact.
  • Share the DORA metrics transparently with the team to foster a culture of continuous improvement.
  • Promote cross-functional collaboration between development and operations teams.
  • Focus on learning from failures rather than assigning blame.

Typo - A Leading DORA Metrics Tracker 

Typo is a powerful tool designed specifically for tracking and analyzing DORA metrics, providing an alternative and efficient solution for development teams seeking precision in their DevOps performance measurement.

  • With pre-built integrations in the dev tool stack, the DORA dashboard provides all the relevant data flowing in within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
  • The dashboard sets custom improvement goals for each team and tracks their success in real time.
  • It gives real-time visibility into a team’s KPIs and lets them make informed decisions.

Conclusion

DORA metrics are not just metrics; they are strategic navigators guiding tech teams toward optimized software delivery. By focusing on key DORA metrics, tech teams can pinpoint bottlenecks and drive sustainable performance enhancements. 

The Fifth DORA Metric: Reliability

The DORA (DevOps Research and Assessment) metrics have emerged as a north star for assessing software delivery performance. The fifth metric, Reliability, is often overlooked because it was added after the original four were announced by the DORA research team. 

In this blog, let’s explore Reliability and its importance for software development teams. 

What are DORA Metrics? 

DevOps Research and Assessment (DORA) metrics are a compass for engineering teams striving to optimize their development and operations processes.

In 2015, the DORA (DevOps Research and Assessment) team was founded by Gene Kim, Jez Humble, and Dr. Nicole Forsgren to evaluate and improve software development practices. The aim was to deepen the understanding of how development teams can deliver software faster, more reliably, and with higher quality.

Four key metrics are: 

  • Deployment Frequency: Deployment frequency measures the rate of change in software development and highlights potential bottlenecks. It is a key indicator of agility and efficiency. Regular deployments signify a streamlined pipeline, allowing teams to deliver features and updates faster.
  • Lead Time for Changes: Lead Time for Changes measures the time it takes for code changes to move from inception to deployment. It tracks the speed and efficiency of software delivery and offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.
  • Change Failure Rate: Change failure rate measures the frequency at which newly deployed changes lead to failures, glitches, or unexpected outcomes in the IT environment. It reflects the reliability and efficiency and is related to team capacity, code complexity, and process efficiency, impacting speed and quality.
  • Mean Time to Recover: Mean Time to Recover measures the average duration taken by a system or application to recover from a failure or incident. It concentrates on determining the efficiency and effectiveness of an organization's incident response and resolution procedures.

What is Reliability?

Reliability is the fifth metric, added by the DORA team in 2021. It is based on how well your users’ expectations are met, such as availability and performance, and measures modern operational practices. It doesn’t have standard quantifiable targets for performance levels; rather, it depends on service level indicators and service level objectives. 

While the first four DORA metrics (Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Mean Time to Recover) target speed and efficiency, reliability focuses on system health, production readiness, and stability for delivering software products.  

Reliability comprises various metrics used to assess operational performance including availability, latency, performance, and scalability that measure user-facing behavior, software SLAs, performance targets, and error budgets. It has a substantial impact on customer retention and success. 

Indicators to Follow when Measuring Reliability

A few indicators include:

  • Availability: How long the software was available without incurring any downtime.
  • Error Rates: Number of times software fails or produces incorrect results in a given period. 
  • Mean Time Between Failures (MTBF): The average time that passes between software breakdowns or failures. 
  • Mean Time to Recover (MTTR): The average time it takes for the software to recover from a failure. 

These metrics provide a holistic view of software reliability by measuring different aspects such as failure frequency, downtime, and the ability to quickly restore service. Tracking these indicators helps identify reliability issues, meet service level agreements, and enhance the software’s overall quality and stability. 
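
As an illustration, the sketch below derives availability, MTBF, and MTTR for a 30-day window from a hypothetical list of per-incident downtime figures; real numbers would come from your monitoring and incident tooling.

```python
# Hypothetical figures for a 30-day window; real numbers would come from
# monitoring and incident management tooling.
period_minutes = 30 * 24 * 60        # total minutes in the window
incident_downtime = [12, 45, 7]      # downtime per incident, in minutes

total_downtime = sum(incident_downtime)
uptime = period_minutes - total_downtime

# Availability: share of the window the service was up.
availability_pct = uptime / period_minutes * 100

# MTBF: average operating time between failures.
mtbf_hours = uptime / len(incident_downtime) / 60

# MTTR: average downtime per incident.
mttr_minutes = total_downtime / len(incident_downtime)

print(f"Availability: {availability_pct:.3f}%")
print(f"MTBF: {mtbf_hours:.1f} hours, MTTR: {mttr_minutes:.1f} minutes")
```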

Impact of Reliability on Overall DevOps Performance 

The fifth DevOps metric, Reliability, significantly impacts overall performance. Here are a few ways: 

Enhances Customer Experience

Tracking reliability metrics like uptime, error rates, and mean time to recovery allows DevOps teams to proactively identify and address issues, ensuring a positive customer experience and meeting user expectations. 

Increases Operational Efficiency

Automating monitoring, incident response, and recovery processes helps DevOps teams to focus more on innovation and delivering new features rather than firefighting. This boosts overall operational efficiency.

Better Team Collaboration

Reliability metrics promote a culture of continuous learning and improvement. This breaks down silos between development and operations, fostering better collaboration across the entire DevOps organization.

Reduces Costs

Reliable systems experience fewer failures and less downtime, translating to lower costs for incident response, lost productivity, and customer churn. Investing in reliability metrics pays off through overall cost savings. 

Fosters Continuous Improvement

Reliability metrics offer valuable insights into system performance and bottlenecks. Continuously monitoring these metrics can help identify patterns and root causes of failures, leading to more informed decision-making and continuous improvement efforts.

Role of Reliability in Distinguishing Elite Performers from Low Performers

Importance of Reliability for Elite Performers

  • Reliability provides a more holistic view of software delivery performance. Besides capturing velocity and stability, it also considers the ability to consistently deliver reliable services to users. 
  • Elite-performing teams deploy quickly with high stability and also demonstrate strong operational reliability. They can quickly detect and resolve incidents, minimizing disruptions to the user experience.
  • Low-performing teams may struggle with reliability. This leads to more frequent incidents, longer recovery times, and overall less reliable service for customers.

Distinguishing Elite from Low Performers

  • Elite teams excel across all five DORA Metrics. 
  • Low performers may have acceptable velocity metrics but struggle with stability and reliability. This results in more incidents, longer recovery times, and an overall less reliable service.
  • The reliability metric helps identify teams that have mastered both the development and operational aspects of software delivery. 

Conclusion 

The reliability metric with the other four DORA DevOps metrics offers a more comprehensive evaluation of software delivery performance. By focusing on system health, stability, and the ability to meet user expectations, this metric provides valuable insights into operational practices and their impact on customer satisfaction. 

Implementing DORA DevOps Metrics in Large Organizations

Introduction

In software engineering, aligning your work with business goals is crucial. For startups, this is often straightforward. Small teams work closely together, and objectives are tightly aligned. However, in large enterprises where multiple teams are working on different products with varied timelines, this alignment becomes much more complex. In these scenarios, effective communication with leadership and establishing standard metrics to assess engineering performance is key. DORA metrics are a set of key performance indicators that help organizations measure and improve their software delivery performance.

But first, let’s briefly look at how engineering works in startups vs. large enterprises.

Software Engineering in Startups: A Focused Approach

In startups, small, cross-functional teams work towards a single goal: rapidly developing and delivering a product that meets market needs. The proximity to business objectives is close, and the feedback loop is short. Decision-making is quick, and pivoting based on customer feedback is common. Here, the primary focus is on speed and innovation, with less emphasis on process and documentation.

Success in a startup's engineering efforts can often be measured by a few key metrics: time-to-market, user acquisition rates, and customer satisfaction. These metrics directly reflect the company's ability to achieve its business goals. This simple approach allows for quick adjustments and real-time alignment of engineering efforts with business objectives.

Engineering Goals in Large Enterprises: A Complex Landscape

Large enterprises operate in a vastly different environment. Multiple teams work on various products, each with its own roadmap, release schedules, and dependencies. The scale and complexity of operations require a structured approach to ensure that all teams align with broader organizational goals.

In such settings, communication between teams and leadership becomes more formalized, and standard metrics to assess performance and progress are critical. Unlike startups, where the impact of engineering efforts is immediately visible, large enterprises need a consolidated view of various performance indicators to understand how engineering work contributes to business objectives.

The Challenge of Communication and Metrics in Large Organizations

Effective communication in large organizations involves not just sharing information but ensuring that it's understood and acted upon across all levels. Engineering teams must communicate their progress, challenges, and needs to leadership in a manner that is both comprehensive and actionable. This requires a common language of metrics that can accurately represent the state of development efforts.

Standard metrics are essential for providing this common language. They offer a way to objectively assess the performance of engineering teams, identify areas for improvement, and make informed decisions. However, the selection of these metrics is crucial. They must be relevant, actionable, and aligned with business goals.

Introducing DORA Metrics

DORA Metrics, developed by the DevOps Research and Assessment team, provide a robust framework for measuring the performance and efficiency of software delivery in DevOps and platform engineering. These metrics focus on key aspects of software development and delivery that directly impact business outcomes.

The four primary DORA Metrics are Deployment Frequency, Lead Time for Changes, Mean Time to Recover (MTTR), and Change Failure Rate.

These metrics provide a comprehensive view of the software delivery pipeline, from development to deployment and operational stability. By focusing on these key areas, organizations can drive improvements in their DevOps practices and enhance overall developer efficiency.

Using DORA Metrics in DevOps and Platform Engineering

In large enterprises, the application of DORA Metrics can significantly improve developer efficiency and software delivery processes. Here’s how these metrics can be used effectively:

  1. Deployment Frequency: It is a key indicator of agility and efficiency.
    • Goal: Increase the frequency of deployments to ensure that new features and fixes are delivered to customers quickly.
    • Action: Encourage practices such as Continuous Integration and Continuous Deployment (CI/CD) to automate the build and release process. Monitor deployment frequency across teams to identify bottlenecks and areas for improvement.
  2. Lead Time for Changes: It tracks the speed and efficiency of software delivery.
    • Goal: Reduce the time it takes for changes to go from commit to production.
    • Action: Streamline the development pipeline by automating testing, reducing manual interventions, and optimizing code review processes. Use tools that provide visibility into the pipeline to identify delays and optimize workflows.
  3. Mean Time to Recover (MTTR): It concentrates on the efficiency and effectiveness of incident response and resolution.
    • Goal: Minimize downtime when incidents occur to ensure high availability and reliability of services.
    • Action: Implement robust monitoring and alerting systems to quickly detect and diagnose issues. Foster a culture of incident response and post-mortem analysis to continuously improve response times.
  4. Change Failure Rate: It reflects reliability and efficiency.
    • Goal: Reduce the percentage of changes that fail in production to ensure a stable and reliable release process.
    • Action: Implement practices such as automated testing, code reviews, and canary deployments to catch issues early. Track failure rates and use the data to improve testing and deployment processes.

Integrating DORA Metrics with Other Software Engineering Metrics

While DORA Metrics provide a solid foundation for measuring DevOps performance, they are not exhaustive. Integrating them with other software engineering metrics can provide a more holistic view of engineering performance. Some additional metrics to consider include:

Development Cycle Efficiency:

Metrics: Lead Time for Changes and Deployment Frequency

High Deployment Frequency, Swift Lead Time:

Teams with rapid deployment frequency and short lead time exhibit agile development practices. These efficient processes lead to quick feature releases and bug fixes, ensuring dynamic software development aligned with market demands and ultimately enhancing customer satisfaction.

Low Deployment Frequency despite Swift Lead Time:

A short lead time coupled with infrequent deployments signals potential bottlenecks. Identifying these bottlenecks is vital, and streamlining deployment processes to match development speed is essential for an efficient software delivery process.

Code Review Excellence:

Metrics: Comments per PR and Change Failure Rate

Few Comments per PR, Low Change Failure Rate:

Low comments and minimal deployment failures signify high-quality initial code submissions. This scenario highlights exceptional collaboration and communication within the team, resulting in stable deployments and satisfied end-users.

Abundant Comments per PR, Minimal Change Failure Rate:

Teams with numerous comments per PR yet few deployment issues showcase meticulous review processes. Investigating these instances confirms that review comments align with deployment stability concerns and that constructive feedback leads to refined code.

Developer Responsiveness:

Metrics: Commits after PR Review and Deployment Frequency

Frequent Commits after PR Review, High Deployment Frequency:

Rapid post-review commits and a high deployment frequency reflect agile responsiveness to feedback. This iterative approach, driven by quick feedback incorporation, yields reliable releases, fostering customer trust and satisfaction.

Sparse Commits after PR Review, High Deployment Frequency:

Despite few post-review commits, high deployment frequency signals comprehensive pre-submission feedback integration. Emphasizing thorough code reviews assures stable deployments, showcasing the team’s commitment to quality.

Quality Deployments:

Metrics: Change Failure Rate and Mean Time to Recovery (MTTR)

Low Change Failure Rate, Swift MTTR:

Low deployment failures and a short recovery time exemplify quality deployments and efficient incident response. Robust testing and a prepared incident response strategy minimize downtime, ensuring high-quality releases and exceptional user experiences.

High Change Failure Rate, Rapid MTTR:

A high failure rate alongside swift recovery signifies a team adept at identifying and rectifying deployment issues promptly. Rapid responses minimize impact, allowing quick recovery and valuable learning from failures, strengthening the team’s resilience.

Impact of PR Size on Deployment:

Metrics: Large PR Size and Deployment Frequency

The size of pull requests (PRs) profoundly influences deployment timelines. Correlating Large PR Size with Deployment Frequency enables teams to gauge the effect of extensive code changes on release cycles.

High Deployment Frequency despite Large PR Size:

Maintaining a high deployment frequency with substantial PRs underscores effective testing and automation. Acknowledge this efficiency while monitoring potential code intricacies, ensuring stability amid complexity.

Low Deployment Frequency with Large PR Size:

Infrequent deployments with large PRs might signal challenges in testing or review processes. Dividing large tasks into manageable portions accelerates deployments, addressing potential bottlenecks effectively.

PR Size and Code Quality:

Metrics: Large PR Size and Change Failure Rate

PR size significantly influences code quality and stability. Analyzing Large PR Size alongside Change Failure Rate allows engineering leaders to assess the link between PR complexity and deployment stability.

High Change Failure Rate with Large PR Size:

Frequent deployment failures with extensive PRs indicate the need for rigorous testing and validation. Encourage breaking down large changes into testable units, bolstering stability and confidence in deployments.

Low Change Failure Rate despite Large PR Size:

A minimal failure rate with substantial PRs signifies robust testing practices. Focus on clear team communication to ensure everyone comprehends the implications of significant code changes, sustaining a stable development environment.

Leveraging these correlations empowers engineering teams to make informed, data-driven decisions, optimizing workflows, boosting overall efficiency, and driving business outcomes. These insights chart a course for continuous improvement, nurturing a culture of collaboration, quality, and agility in software development endeavors.

By combining DORA Metrics with these additional metrics, organizations can gain a comprehensive understanding of their engineering performance and make more informed decisions to drive continuous improvement.

Leveraging Software Engineering Intelligence (SEI) Platforms

As organizations grow, the need for sophisticated tools to manage and analyze engineering metrics becomes apparent. This is where Software Engineering Intelligence (SEI) platforms come into play. SEI platforms like Typo aggregate data from various sources, including version control systems, CI/CD pipelines, project management tools, and incident management systems, to provide a unified view of engineering performance.

Benefits of SEI platforms include:

  • Centralized Metrics Dashboard: A single source of truth for all engineering metrics, providing visibility across teams and projects.
  • Advanced Analytics: Use machine learning and data analytics to identify patterns, predict outcomes, and recommend actions.
  • Customizable Reports: Generate tailored reports for different stakeholders, from engineering teams to executive leadership.
  • Real-time Monitoring: Track key metrics in real-time to quickly identify and address issues.

By leveraging SEI platforms, large organizations can harness the power of data to drive strategic decision-making and continuous improvement in their engineering practices.

Conclusion

In large organizations, aligning engineering work with business goals requires effective communication and the use of standardized metrics. DORA Metrics provides a robust framework for measuring the performance of DevOps and platform engineering, enabling organizations to improve developer efficiency and software delivery processes. By integrating DORA Metrics with other software engineering metrics and leveraging Software Engineering Intelligence platforms, organizations can gain a comprehensive understanding of their engineering performance and drive continuous improvement.

Using DORA Metrics in large organizations not only helps in measuring and enhancing performance but also fosters a culture of data-driven decision-making, ultimately leading to better business outcomes. As the industry continues to evolve, staying abreast of best practices and leveraging advanced tools will be key to maintaining a competitive edge in the software development landscape.

What Lies Ahead: Predictions for DORA Metrics in DevOps

The DevOps Research and Assessment (DORA) metrics have long served as a guiding light for organizations to evaluate and enhance their software development practices.

As we look to the future, what changes lie ahead for DORA metrics amidst evolving DevOps trends? In this blog, we will explore the future landscape and strategize how businesses can stay at the forefront of innovation.

What Are DORA Metrics?

Accelerate, the widely used reference book for engineering leaders, introduced the DevOps Research and Assessment (DORA) group’s four metrics, known as the DORA 4 metrics.

These metrics were developed to assist engineering teams in determining two things:

  • The characteristics of a top-performing team.
  • How their performance compares to the rest of the industry.

Four key DevOps measurements:

Deployment Frequency

Deployment Frequency measures how often code is deployed to production or released to end-users in a given time frame. Greater deployment frequency indicates increased agility and the ability to respond quickly to market demands.

Lead Time for Changes

Lead Time for Changes measures the time between a commit being made and that commit making it to production. Short lead times in software development are crucial for success in today’s business environment. When changes are delivered rapidly, organizations can seize new opportunities, stay ahead of competitors, and generate more revenue.

Change Failure Rate

Change failure rate measures the proportion of deployments to production that result in degraded service. A lower change failure rate enhances user experience and builds trust by reducing failures and helping to allocate resources effectively.

Mean Time to Recover

Mean Time to Recover measures the time taken to recover from a failure, showing the team’s ability to respond to and fix issues. Optimizing MTTR minimizes downtime by resolving incidents quickly through production changes, enhancing user satisfaction through shorter resolution times.

In 2021, DORA introduced Reliability as the fifth metric for assessing software delivery performance.

Reliability

It measures modern operational practices and doesn’t have standard quantifiable targets for performance levels. Reliability comprises several metrics used to assess operational performance including availability, latency, performance, and scalability that measure user-facing behavior, software SLAs, performance targets, and error budgets.

DORA Metrics and Their Role in Measuring DevOps Performance

DORA metrics play a vital role in measuring DevOps performance. They provide quantitative, actionable insights into the effectiveness of an organization’s software delivery and operational capabilities.

  • They offer specific, quantifiable indicators that measure various aspects of the software development and delivery process.
  • DORA metrics align DevOps practices with broader business objectives. Metrics like high Deployment Frequency and low Lead Time indicate quick feature delivery and updates to end-users.
  • DORA metrics provide data-driven insights that support informed decision-making at all levels of the organization.
  • They track progress over time, enabling teams to measure the effectiveness of implemented changes.
  • DORA metrics help organizations understand and mitigate the risks associated with deploying new code. Aiming to reduce Change Failure Rate and Mean Time to Restore helps software teams increase systems’ reliability and stability.
  • Continuously monitoring DORA metrics helps identify trends and patterns over time, enabling teams to pinpoint inefficiencies and bottlenecks in their processes.

This further leads to:

  • Streamlined workflows and fewer failures lead to quicker deployments.
  • Reduced failure rates and improved recovery times minimize downtime and associated risks.
  • Stronger communication and collaboration between the development and operations teams.
  • Faster releases and fewer disruptions contribute to a better user experience.

Key Predictions for DORA Metrics in DevOps

Increased Adoption of DORA metrics

One of the major predictions is that the use of DORA metrics in organizations will continue to rise. These metrics will broaden their horizons beyond the five key metrics (Deployment Frequency, Lead Time for Changes, Change Failure Rate, Mean Time to Restore, and Reliability) to cover areas such as security, compliance, and more.

Organizations will start integrating these metrics with DevOps tools, as well as tracking and reporting on them to benchmark performance against industry leaders. This will allow software development teams to collect, analyze, and act on this data.

Emphasizing Observability and Monitoring

Observability and monitoring are becoming non-negotiable for organizations. As systems grow more complex, it becomes challenging to understand a system’s state and diagnose issues without comprehensive observability.

Moreover, businesses increasingly rely on digital services, which raises the cost of downtime. Metrics like average detection and resolution times help pinpoint and rectify glitches early. Emphasizing observability and monitoring will further improve MTTR and CFR by enabling fast detection and diagnosis of issues.

Integration with SPACE Framework

Nowadays, organizations are seeking more comprehensive and accurate metrics to measure software delivery performance. As adoption of DORA metrics rises, they are expected to integrate well with the SPACE framework.

Since DORA and SPACE are complementary in nature, integrating them provides a more holistic view. While DORA focuses on technical outcomes and efficiency, the SPACE framework offers a broader perspective that incorporates developer satisfaction, collaboration, and efficiency (the human factors). Together, they emphasize the importance of continuous improvement and faster feedback loops.

Merging with AI and ML Advancements

AI and ML technologies are emerging. By integrating these tools with DORA metrics, development teams can leverage predictive analytics, proactively identify potential issues, and promote AI-driven decision-making.

DevOps gathers extensive data from diverse sources, which AI and ML tools can process and analyze more efficiently than manual methods. These tools enable software teams to automate decisions based on DORA metrics. For instance, if a deployment is forecasted to have a high failure rate, the tool can automatically initiate additional testing or notify the relevant team member.

Furthermore, continuous analysis of DORA metrics allows teams to pinpoint areas for improvement in the development and deployment processes. They can also create dashboards that highlight key metrics and trends.

Emphasis on Cultural Transformation

DORA metrics alone are insufficient. Engineering teams need more than tools and processes. Soon, there will be a cultural transformation emphasizing teamwork, open communication, and collective accountability for results. Factors such as team morale, collaboration across departments, and psychological safety will be as crucial as operational metrics.

Collectively, these elements will facilitate data-driven decision-making, adaptability to change, experimentation with new concepts, and fostering continuous improvement.

Focus on Security Metrics

As cyber-attacks continue to increase, security is becoming a critical concern for organizations. Hence, a significant upcoming trend is the integration of security with DORA metrics. This means not only implementing but also continually measuring and improving these security practices. Such integration aims to provide a comprehensive view of software development performance. This also allows striking a balance between speed and efficiency on one hand, and security and risk management on the other.

How to Stay Ahead of the Curve?

Stay Informed

Continuously monitor industry trends, research, and case studies related to DORA metrics and DevOps practices.

Experiment and Implement

Don’t hesitate to pilot new DORA metrics and DevOps techniques within your organization to see what works best for your specific context.

Embrace Automation

Automate as much as possible in your software development and delivery pipeline to improve speed, reliability, and the ability to collect metrics effectively.

Collaborate across Teams

Foster collaboration between development, operations, and security teams to ensure alignment on DORA metrics goals and strategies.

Continuous Improvement

Regularly review and optimize your DORA metrics implementation based on feedback and new insights gained from data analysis.

Cultural Alignment

Promote a culture that values continuous improvement, learning, and transparency around DORA metrics to drive organizational alignment and success.

How Typo Leverages DORA Metrics?

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It offers comprehensive insights into the deployment process through key DORA metrics such as change failure rate, time to build, and deployment frequency.

DORA Metrics Dashboard

Typo’s DORA metrics dashboard has a user-friendly interface and robust features tailored for DevOps excellence. The dashboard pulls in data from all the sources and presents it in a visualized and detailed way to engineering leaders and the development team.

Comprehensive Visualization of Key Metrics

Typo’s dashboard provides clear and intuitive visualizations of the four key DORA metrics: Deployment Frequency, Change Failure Rate, Lead Time for Changes, and Mean Time to Restore.

Benchmarking for Context

By providing benchmarks, Typo allows teams to compare their performance against industry standards, helping them understand where they stand. It also allows the team to compare their current performance with their historical data to track improvements or identify regressions.

Conclusion

The rising adoption of DORA metrics in DevOps marks a significant shift towards data-driven software delivery practices. Integrating these metrics with operations, tools, and cultural frameworks enhances agility and resilience. It is crucial to stay ahead of the curve by keeping an eye on trends, embracing automation, and promoting continuous improvement to effectively harness DORA metrics to drive innovation and achieve sustained success.

How to Calculate Cycle Time

Cycle time is one of the important metrics in software development. It measures the time taken from the start to the completion of a process, providing insights into the efficiency and productivity of teams. Understanding and optimizing cycle time can significantly improve overall performance and customer satisfaction.

This blog will guide you through the precise cycle time calculation, highlighting its importance and providing practical steps to measure and optimize it effectively.

What is Cycle Time?

Cycle time measures the total elapsed time taken to complete a specific task or work item from the beginning to the end of the process.

  • The “Coding” stage represents the time taken by developers to write and complete the code changes.
  • The “Pickup” stage denotes the time spent before a pull request is assigned for review.
  • The “Review” stage encompasses the time taken for peer review and feedback on the pull request.
  • Finally, the “Merge” stage shows the duration from the approval of the pull request to its integration into the main codebase.

It is important to differentiate cycle time from other related metrics such as lead time, which includes all delays and waiting periods, and takt time, which is the rate at which a product needs to be completed to meet customer demand. Understanding these differences is crucial for accurately measuring and optimizing cycle time.

Components of Cycle Time Calculation

To calculate total cycle time, you need to consider several components:

  • Net production time: The total time available for production, excluding breaks, maintenance, and downtime.
  • Work items and task duration: Specific tasks or work items and the time taken to complete each.
  • Historical data: Past data on task durations and production times to ensure accurate calculations.

Step-by-Step Guide to Calculating Cycle Time

Step 1: Identify the start and end points of the process

Clearly define the beginning and end of the process you are measuring. This could be initiating and completing a task in a project management tool.

Step 2: Gather the necessary data

Collect data on task durations and time tracking. Use tools like time-tracking software to ensure accurate data collection.

Step 3: Calculate net production time

Net production time is the total time available for production minus any non-productive time. For example, if a team works 8 hours daily but takes 1 hour for breaks and meetings, the net production time is 7 hours.

Step 4: Apply the cycle time formula

The formula for cycle time is:

Cycle Time = Net Production Time / Number of Work Items Completed

Example calculation

If a team has a net production time of 35 hours in a week and completes 10 tasks, the cycle time is:

Cycle Time = 35 hours / 10 tasks = 3.5 hours per task
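
The same arithmetic expressed as a small Python helper, using the numbers from the example above:

```python
def cycle_time(net_production_hours: float, items_completed: int) -> float:
    """Average cycle time per work item, in hours."""
    return net_production_hours / items_completed

print(cycle_time(35, 10))  # 3.5 hours per task
```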

An ideal cycle time should be less than 48 hours. Shorter cycle times in software development indicate that teams can quickly respond to requirements, deliver features faster, and adapt to changes efficiently, reflecting agile and responsive development practices.

Longer cycle times in software development typically indicate underlying issues in the development process, and they can lead to increased costs and delayed delivery of features.

Accounting for Variations in Work Item Complexity

When calculating cycle time, it is crucial to account for variations in the complexity and size of different work items. Larger or more complex tasks can skew the average cycle time. To address this, categorize tasks by size or complexity and calculate cycle time for each category separately.
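A minimal sketch of this categorized approach, using hypothetical items and categories:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical completed work items: (complexity category, cycle time in hours).
items = [
    ("small", 4.0), ("small", 6.5), ("small", 3.0),
    ("medium", 14.0), ("medium", 20.0),
    ("large", 52.0), ("large", 75.0),
]

by_category = defaultdict(list)
for category, hours in items:
    by_category[category].append(hours)

# A single overall average would hide how different the categories are.
for category, hours in by_category.items():
    print(f"{category:>6}: mean cycle time {mean(hours):5.1f} h over {len(hours)} items")
```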

Use of Control Charts

Control charts are a valuable tool for visualizing cycle time data and identifying trends or anomalies. You can quickly spot variations and investigate their causes by plotting cycle times on a control chart.
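The sketch below computes the numbers a control chart is built from: a center line and 3-sigma limits derived from a baseline period, with new data points checked against them. All values are hypothetical.

```python
from statistics import mean, stdev

# Hypothetical cycle times (hours): a stable baseline period, then new tasks.
baseline = [3.5, 4.2, 3.8, 5.0, 4.1, 3.9, 4.4, 4.0]
new_points = [4.3, 12.5, 3.7]

center = mean(baseline)
sigma = stdev(baseline)
upper = center + 3 * sigma          # classic 3-sigma control limits
lower = max(center - 3 * sigma, 0.0)
print(f"center {center:.1f} h, control limits [{lower:.1f}, {upper:.1f}] h")

# Any new point outside the limits is a signal worth investigating.
for i, ct in enumerate(new_points, start=1):
    status = "outside limits, investigate" if not lower <= ct <= upper else "ok"
    print(f"new task {i}: {ct:4.1f} h ({status})")
```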

Statistical Analysis

Performing statistical analysis on cycle time data can provide deeper insights into process performance. Metrics such as standard deviation and percentiles help understand the distribution and variability of cycle times, enabling more precise optimization efforts.
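Python's standard statistics module is enough for a first pass at these numbers; the data below is hypothetical:

```python
from statistics import mean, stdev, quantiles

# Hypothetical cycle times (hours) for recently completed work items.
cycle_times = [2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.5, 8.0, 12.0, 30.0]

# quantiles(..., n=100) returns the 1st..99th percentile cut points.
pct = quantiles(cycle_times, n=100)
print(f"mean : {mean(cycle_times):5.1f} h")
print(f"stdev: {stdev(cycle_times):5.1f} h")
print(f"p50  : {pct[49]:5.1f} h  (half of items finish faster than this)")
print(f"p85  : {pct[84]:5.1f} h  (a common forecasting percentile)")
print(f"p95  : {pct[94]:5.1f} h  (tail behaviour / outliers)")
```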

Tools and Techniques for Accurate Measurement

In order to effectively track task durations and completion times, it’s important to utilize time tracking tools and software such as Jira, Trello, or Asana. These tools can provide a systematic approach to managing tasks and projects by allowing team members to log their time and track task durations consistently.

Consistent data collection is essential for accurate time tracking. Encouraging all team members to consistently log their time and task durations ensures that the data collected is reliable and can be used for analysis and decision-making.

Visual management techniques, such as implementing Kanban boards or other visual tools, can be valuable for tracking progress and identifying bottlenecks in the workflow. These visual aids provide a clear and transparent view of task status and can help teams address any delays or issues promptly.

Optimizing cycle time involves analyzing cycle time data to identify bottlenecks in the workflow. By pinpointing areas where tasks are delayed, teams can take action to remove these bottlenecks and optimize their processes for improved efficiency.

Continuous improvement practices, such as implementing Agile and Lean methodologies, are effective for improving cycle times continuously. These practices emphasize a flexible and iterative approach to project management, allowing teams to adapt to changes and make continuous improvements to their processes.

Furthermore, studying case studies of successful cycle time reduction from industry leaders can provide valuable insights into efficient practices that have led to significant reductions in cycle times. Learning from these examples can inspire and guide teams in implementing effective strategies to reduce cycle times in their own projects and workflows.

How Typo Helps?

Typo is an innovative tool designed to enhance the precision of cycle time calculations and overall productivity.

It seamlessly integrates Git data by analyzing timestamps from commits and merges. This integration ensures that cycle time calculations are based on actual development activities, providing a robust and accurate measurement compared to relying solely on task management tools. This empowers teams with actionable insights for optimizing their workflow and enhancing productivity in software development projects.

Here’s how Typo can help:

Automated time tracking: Typo provides automated time tracking for tasks, eliminating manual entry errors and ensuring accurate data collection.

Real-time analytics: With Typo, you can access real-time analytics to monitor cycle times, identify trends, and make data-driven decisions.

Customizable dashboards: Typo offers customizable dashboards that allow you to visualize cycle time data in a way that suits your needs, making it easier to spot inefficiencies and areas for improvement.

Seamless integration: Typo integrates seamlessly with popular project management tools, ensuring that all your data is synchronized and up-to-date.

Continuous improvement support: Typo supports continuous improvement by providing insights and recommendations based on your cycle time data, helping you implement best practices and optimize your workflows.

By leveraging Typo, you can achieve more precise cycle time calculations, improving efficiency and productivity.

Common Challenges and Solutions

In dealing with variability in task durations, it’s important to use averages as well as historical data to account for the range of possible durations. By doing this, you can better anticipate and plan for potential fluctuations in timing.

When it comes to ensuring data accuracy, it’s essential to implement a system for regularly reviewing and validating data. This can involve cross-referencing data from different sources and conducting periodic audits to verify its accuracy.

Additionally, when balancing speed and quality, the focus should be on maintaining high-quality standards while optimizing cycle time to ensure customer satisfaction. This can involve continuous improvement efforts aimed at increasing efficiency without compromising the quality of the final output.

The Path Forward with Optimized Cycle Time

Accurately calculating and optimizing cycle time is essential for improving efficiency and productivity. By following the steps outlined in this blog and utilizing tools like Typo, you can gain valuable insights into your processes and make informed decisions to enhance performance. Start measuring your cycle time today and reap the benefits of precise and optimized workflows.

DevOps Metrics Mistakes to Avoid in 2024

As DevOps practices continue to evolve, it’s crucial for organizations to effectively measure DevOps metrics to optimize performance.

Here are a few common mistakes to avoid when measuring these metrics to ensure continuous improvement and successful outcomes:

DevOps Landscape in 2024

In 2024, the landscape of DevOps metrics continues to evolve, reflecting the growing maturity and sophistication of DevOps practices. The emphasis is on providing actionable insights into both the development and operational aspects of software delivery.

The integration of AI and machine learning (ML) in DevOps has become increasingly significant in transforming how teams monitor, manage, and improve their software development and operations processes. Apart from this, observability and real-time monitoring have become critical components of modern DevOps practices in 2024. They provide deep insights into system behavior and performance and are enhanced significantly by AI and ML technologies.

Lastly, organizations are prioritizing comprehensive, real-time, and predictive security metrics to enhance their security posture and ensure robust incident response mechanisms.

Importance of Measuring DevOps Metrics

DevOps metrics track both technical capabilities and team processes. They reveal the performance of a DevOps software development pipeline and help to identify and remove any bottlenecks in the process in the early stages.

Below are a few benefits of measuring DevOps metrics:

  • Metrics enable teams to identify bottlenecks, inefficiencies, and areas for improvement. By continuously monitoring these metrics, teams can implement iterative changes and track their effectiveness.
  • DevOps metrics help break down silos between development, operations, and other teams by providing a common language and set of goals. This improves transparency and visibility into the workflow and fosters better collaboration and communication.
  • Metrics ensure the team’s efforts are aligned with customer needs and expectations. Faster and more reliable releases contribute to better customer experiences and satisfaction.
  • DevOps metrics provide objective data that can be used to make informed decisions rather than relying on intuition or subjective opinions. This data-driven approach helps prioritize tasks and allocate resources effectively.
  • DevOps metrics allow teams to set benchmarks and track progress against them. Clear goals and measurable targets motivate teams and provide a sense of achievement when milestones are reached.

Common Mistakes to Avoid when Measuring DevOps Metrics

Not Defining Clear Objectives

When clear objectives are not defined, development teams may measure metrics that do not directly contribute to strategic goals. This scatters effort: teams may achieve high numbers on certain metrics without realizing those metrics contribute little to overall business objectives. Metrics chosen this way also fail to provide actionable insights, so decisions may be based on incomplete or misleading data. Finally, the lack of clear objectives makes it challenging to evaluate performance accurately, leaving it unclear whether performance is meeting expectations or falling short.

Solutions

Below are a few ways to define clear objectives for DevOps metrics:

  • Start by understanding the high-level business goals. Engage with stakeholders to identify what success looks like for the organization.
  • Based on the business goals, identify specific KPIs that can measure progress towards these goals.
  • Ensure that objectives are Specific, Measurable, Achievable, Relevant, and Time-bound (SMART). For example, “Reduce the average lead time for changes from 5 days to 3 days within the next quarter.”
  • Choose metrics that directly measure progress toward the objectives.
  • Regularly review the objectives and the metrics to ensure they remain aligned with evolving business goals and market conditions. Adjust them as needed to reflect new priorities or insights.

Prioritizing Speed over Quality

Organizations often focus on delivering products quickly rather than on quality. However, speed and quality must go hand in hand: DevOps tasks must be accomplished to high standards and still be delivered to end users on time. Development teams frequently face intense pressure to deliver products or updates rapidly to stay competitive, which can lead them to focus excessively on speed metrics, such as deployment frequency or lead time for changes, at the expense of quality metrics.

Solutions

  • Clearly define quality goals alongside speed goals. This involves setting targets for reliability, performance, security, and user experience metrics that are equally important as delivery speed metrics.
  • Implement continuous feedback loops throughout the DevOps process such as feedback from users, automated testing, monitoring, and post-release reviews.
  • Invest in automation and tooling that accelerates delivery as well as enhances quality. Automated testing, continuous integration, and continuous deployment (CI/CD) pipelines can help in achieving both speed and quality goals simultaneously.
  • Educate teams about the importance of balancing speed and quality in DevOps practices.
  • Regularly review and refine metrics based on the evolving needs of the organization and the feedback received from customers and stakeholders.

Tracking Too Much at Once

It is often assumed that the more metrics you track, the better you will understand your DevOps processes. In practice, this leads to an overwhelming number of metrics, most of them redundant or not directly actionable. It usually happens when there is no clear strategy or prioritization framework, so teams attempt to measure everything, which quickly becomes difficult to manage and interpret. It can also result in tracking numerous metrics simply to demonstrate detailed performance, even when those metrics are not particularly meaningful.

Solutions

  • Identify and focus on a few key metrics that are most relevant to your business goals and DevOps objectives.
  • Align your metrics with clear objectives to ensure you are tracking the most impactful data. For example, if your goal is to improve deployment frequency and reliability, focus on metrics like deployment frequency, lead time for changes, and mean time to recovery.
  • Review the metrics you are tracking to determine their relevance and effectiveness. Remove metrics that do not provide value or are redundant.
  • Foster a culture that values the quality and relevance of metrics over the sheer quantity.
  • Use visualizations and summaries to highlight the most important data, making it easier for stakeholders to grasp the critical information without being overwhelmed by the volume of metrics.

Rewarding Performance

Engineering leaders often believe that rewarding performance will motivate developers to work harder and achieve better results. However, this is not true. Rewarding specific metrics can lead to an overemphasis on those metrics at the expense of other important aspects of work. For example, focusing solely on deployment frequency might lead to neglecting code quality or thorough testing. This can also result in short-term improvements but leads to long-term problems such as burnout, reduced intrinsic motivation, and a decline in overall quality. Due to this, developers may manipulate metrics or take shortcuts to achieve rewarded outcomes, compromising the integrity of the process and the quality of the product.

Solutions

  • Cultivate an environment where teams are motivated by the satisfaction of doing good work rather than external rewards.
  • Recognize and appreciate good work through non-monetary means such as public acknowledgment, opportunities for professional development, and increased autonomy.
  • Instead of rewarding individual performance, measure and reward team performance.
  • Encourage knowledge sharing, pair programming, and cross-functional teams to build a cooperative work environment.
  • If rewards are necessary, align them with long-term goals rather than short-term performance metrics.

Lack of Continuous Integration and Testing

Without continuous integration and testing, bugs and defects are more likely to go undetected until later stages of development or production, leading to higher costs and more effort to fix issues. It compromises the quality of the software, resulting in unreliable and unstable products that can damage the organization’s reputation. Moreover, it can result in slower progress over time due to the increased effort required to address accumulated technical debt and defects.

Solutions

  • Allocate resources to implement CI/CD pipelines and automated testing frameworks.
  • Invest in training and upskilling team members on CI/CD practices and tools.
  • Begin with small, incremental implementations of CI and testing. Gradually expand the scope as the team becomes more comfortable and proficient with the tools and processes.
  • Foster a culture that values quality and continuous improvement. Encourage collaboration between development and operations teams to ensure that CI and testing are seen as essential components of the development process.
  • Use automation to handle repetitive and time-consuming tasks such as building, testing, and deploying code. This reduces manual effort and increases efficiency.

Key DevOps Metrics to Measure

Below are a few important DevOps metrics:

Deployment Frequency

Deployment Frequency measures the frequency of code deployment to production and reflects an organization’s efficiency, reliability, and software delivery quality. It is often used to track the rate of change in software development and highlight potential areas for improvement.

Lead Time for Changes

Lead Time for Changes is a critical metric used to measure the efficiency and speed of software delivery. It is the duration between a code change being committed and its successful deployment to end-users. This metric is a good indicator of the team’s capacity, code complexity, and efficiency of the software development process.

Change Failure Rate

Change Failure Rate measures the frequency at which newly deployed changes lead to failures, glitches, or unexpected outcomes in the IT environment. It reflects the stability and reliability of the entire software development and deployment lifecycle. It is related to team capacity, code complexity, and process efficiency, impacting speed and quality.

Mean Time to Recover

Mean Time to Recover is a valuable metric that calculates the average duration taken by a system or application to recover from a failure or incident. It is an essential component of the DORA metrics and concentrates on determining the efficiency and effectiveness of an organization’s incident response and resolution procedures.
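As an illustration, here is a minimal sketch that derives all four metrics from deployment and incident records. The record layout and values are hypothetical; real data would come from your CI/CD and incident management tools.

```python
from datetime import datetime

# Hypothetical deployment records for a four-week window:
# (committed_at, deployed_at, caused_failure)
deployments = [
    (datetime(2024, 6, 3, 9, 0),   datetime(2024, 6, 4, 15, 0),  False),
    (datetime(2024, 6, 7, 11, 0),  datetime(2024, 6, 10, 9, 0),  True),
    (datetime(2024, 6, 14, 10, 0), datetime(2024, 6, 17, 12, 0), False),
    (datetime(2024, 6, 20, 16, 0), datetime(2024, 6, 24, 10, 0), False),
]
# Hypothetical incidents: (detected_at, resolved_at)
incidents = [(datetime(2024, 6, 10, 9, 30), datetime(2024, 6, 10, 13, 30))]
window_weeks = 4

def hours(delta):
    return delta.total_seconds() / 3600

deployment_frequency = len(deployments) / window_weeks
lead_times = [hours(deployed - committed) for committed, deployed, _ in deployments]
avg_lead_time = sum(lead_times) / len(lead_times)
change_failure_rate = sum(1 for *_, failed in deployments if failed) / len(deployments)
mttr = sum(hours(end - start) for start, end in incidents) / len(incidents)

print(f"Deployment frequency : {deployment_frequency:.1f} deploys/week")
print(f"Lead time for changes: {avg_lead_time:.1f} h (mean)")
print(f"Change failure rate  : {change_failure_rate:.0%}")
print(f"Mean time to recover : {mttr:.1f} h")
```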

Conclusion

Optimizing DevOps practices requires avoiding common mistakes in measuring metrics. To optimize DevOps practices and enhance organizational performance, specialized tools like Typo can help simplify the measurement process. It offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.

Top Platform Engineering Tools (2024)

Platform engineering tools empower developers by enhancing their overall experience. By eliminating bottlenecks and reducing daily friction, these tools enable developers to accomplish tasks more efficiently. This efficiency translates into improved cycle times and higher productivity.

In this blog, we explore top platform engineering tools, highlighting their strengths and demonstrating how they benefit engineering teams.

What is Platform Engineering?

Platform Engineering is an emerging technology approach that equips software engineering teams with all the resources required to automate the software development lifecycle end to end. The goal is to reduce overall cognitive load, enhance operational efficiency, and remove process bottlenecks by providing a reliable and scalable platform for building, deploying, and managing applications.

Importance of Platform Engineering

  • Platform engineering involves creating reusable components and standardized processes. It also automates routine tasks, such as deployment, monitoring, and scaling, to speed up the development cycle.
  • Platform engineers integrate security measures into the platform, to ensure that applications are built and deployed securely. They help ensure that the platform meets regulatory and compliance requirements.
  • It ensures efficient use of resources to balance performance and expenditure. It also provides transparency into resource usage and associated costs to help organizations make informed decisions about scaling and investment.
  • By providing tools, frameworks, and services, platform engineers empower developers to build, deploy, and manage applications more effectively.
  • A well-engineered platform allows organizations to adapt quickly to market changes, new technologies, and customer needs.

Best Platform Engineering Tools

Typo

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360-degree view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Kubernetes

An open-source container orchestration platform, Kubernetes is used to automate the deployment, scaling, and management of containerized applications.

Kubernetes is beneficial for applications packaged as many containers; developers can isolate and pack container clusters and deploy them on several machines simultaneously.

Through Kubernetes, engineering leaders can create Docker containers automatically and assign them based on demand and scaling needs. Kubernetes also handles tasks like load balancing, scaling, and service discovery for efficient resource utilization, simplifies infrastructure management, and allows customized CI/CD pipelines to match developers’ needs.

Jenkins

An open-source automation server and CI/CD tool. Jenkins is a self-contained Java-based program that can run out of the box.

It offers an extensive plug-in system to support building and deploying projects. It supports distributing build jobs across multiple machines, which helps in handling large-scale projects efficiently. Jenkins can be seamlessly integrated with various version control systems like Git, Mercurial, and CVS, and communication tools such as Slack and JIRA.

GitHub Actions

A powerful platform engineering tool that automates software development workflows directly from GitHub. GitHub Actions can handle routine development tasks such as code compilation, testing, and packaging, standardizing these processes and making them more efficient.

It creates custom workflows to automate various tasks and manage blue-green deployments for smooth and controlled application deployments.

GitHub Actions allows engineering teams to easily deploy to any cloud, create tickets in Jira, or publish packages.

GitLab CI

GitLab CI uses Auto DevOps to automatically build, test, deploy, and monitor applications. It uses Docker images to define the environments for running CI/CD jobs and can build and publish those images within pipelines. It supports parallel job execution, allowing multiple tasks to run concurrently to speed up build and test processes.

GitLab CI provides caching and artifact management capabilities to optimize build times and preserve build outputs for downstream processes. It can be integrated with various third-party applications including CircleCI, Codefresh, and YouTrack.

AWS CodePipeline

A continuous delivery service provided by Amazon Web Services (AWS). AWS CodePipeline automates the release pipeline and accelerates the workflow with parallel execution.

It offers high-level visibility and control over the build, test, and deploy processes. It can be integrated with other AWS tools such as AWS CodeBuild, AWS CodeDeploy, and AWS Lambda, as well as third-party tools like GitHub, Jenkins, and Bitbucket.

AWS CodePipeline can also be configured to send notifications for pipeline events, helping teams stay informed about the state of deployments.

Argo CD

A GitOps-based continuous deployment tool for Kubernetes applications. Argo CD deploys code changes directly to Kubernetes resources.

It simplifies the management of complex application deployments and promotes a self-service approach for developers. Argo CD defines and automates Kubernetes (K8s) clusters to suit team needs and supports multi-cluster setups for managing multiple environments.

It can seamlessly integrate with third-party tools such as Jenkins, GitHub, and Slack. Moreover, it supports multiple formats for creating Kubernetes manifests, such as plain YAML files and Helm charts.

Azure DevOps Pipeline

A CI/CD tool offered by Microsoft Azure. It supports building, testing, and deploying applications using CI/CD pipelines within the Azure DevOps ecosystem.

Azure DevOps Pipeline lets engineering teams define complex workflows that handle tasks like compiling code, running tests, building Docker images, and deploying to various environments. It can automate the software delivery process, reducing manual intervention, and seamlessly integrates with other Azure services, such as Azure Repos, Azure Artifacts, and Azure Kubernetes Service (AKS).

Moreover, it empowers DevSecOps teams with a self-service portal for accessing tools and workflows.

Terraform

An Infrastructure as Code (IaC) tool. Terraform is a well-known cloud-agnostic platform in the software industry that supports multiple cloud providers and infrastructure technologies.

Terraform can quickly and efficiently manage complex infrastructure and can centralize all the infrastructures. It can seamlessly integrate with tools like Oracle Cloud, AWS, OpenStack, Google Cloud, and many more.

It speeds up the core processes development teams need to follow. Moreover, Terraform can automate security through policy as code.

Heroku

A platform-as-a-service (PaaS) based on a managed container system. Heroku enables developers to build, run, and operate applications entirely in the cloud and automates the setup of development, staging, and production environments by configuring infrastructure, databases, and applications consistently.

It supports multiple deployment methods, including Git, GitHub integration, Docker, and Heroku CLI, and includes built-in monitoring and logging features to track application performance and diagnose issues.

CircleCI

A popular continuous integration/continuous delivery (CI/CD) tool that allows software engineering teams to build, test, and deploy software using intelligent automation. It offers cloud-managed CI hosting.

CircleCI is GitHub-friendly and includes an extensive API for custom integrations. It supports parallelism, i.e., splitting tests across different containers to run as clean, separate builds. It can also be configured to run complex pipelines.

CircleCI has a built-in caching feature that speeds up builds by storing dependencies and other frequently used files, reducing the need to re-download or recompile them for subsequent builds.

How to Choose the Right Platform Engineering Tools?

Know your Requirements

Understand what specific problems or challenges the tools need to solve. This could include scalability, automation, security, compliance, etc. Consider inputs from stakeholders and other relevant teams to understand their requirements and pain points.

Evaluate Core Functionalities

List out the essential features and capabilities needed in platform engineering tools. Also, the tools must integrate well with existing infrastructure, development methodologies (like Agile or DevOps), and technology stack.

Security and Compliance

Check if the tools have built-in security features or support integration with security tools for vulnerability scanning, access control, encryption, etc. The tools must comply with relevant industry regulations and standards applicable to your organization.

Documentation and Support

Check the availability and quality of documentation, tutorials, and support resources. Good support can significantly reduce downtime and troubleshooting efforts.

Flexibility

Choose tools that are flexible and adaptable to future technology trends and changes in the organization’s needs. The tools must integrate smoothly with the existing toolchain, including development frameworks, version control systems, databases, and cloud services.

Proof of Concept (PoC)

Conduct a pilot or proof of concept to test how well the tools perform in your environment. This allows teams to validate a tool’s suitability before committing to full deployment.

Conclusion

Platform engineering tools play a crucial role in the IT industry by enhancing the experience of software developers. They streamline workflows, remove bottlenecks, and reduce friction within developer teams, thereby enabling more efficient task completion and fostering innovation across the software development lifecycle.

Mastering the Art of DORA Metrics

In today's competitive tech landscape, engineering teams need robust and actionable metrics to measure and improve their performance. The DORA (DevOps Research and Assessment) metrics have emerged as a standard for assessing software delivery performance. In this blog, we'll explore what DORA metrics are, why they're important, and how to master their implementation to drive business success.

📊 What are DORA Metrics?

DORA metrics, developed by the DevOps Research and Assessment team, are a set of key performance indicators that measure the effectiveness and efficiency of software development and delivery processes. They provide a data-driven approach to evaluate the impact of operational practices on software delivery performance.

The four primary DORA metrics are:

✅ Deployment Frequency: How often an organization deploys code to production.

✅ Lead Time for Changes: The time it takes for a commit to go into production.

✅ Change Failure Rate: The percentage of deployments causing a failure in production.

✅ Mean Time to Restore (MTTR): The time it takes to recover from a production failure.

📌 But, why are they important?

These metrics offer a comprehensive view of the software delivery process, highlighting areas for improvement and enabling teams to enhance their delivery speed, reliability, and overall quality, leading to better business outcomes.

✅ Objective Measurement of Performance

DORA metrics provide an objective way to measure the performance of software delivery processes. By focusing on these key indicators, dev teams gain a clear and quantifiable understanding of their tech practices.

✅ Benchmarking Against Industry Standards

DORA metrics enable organizations to benchmark their performance against industry standards. The DORA State of DevOps reports provide insights into what high-performing teams look like, offering a target for other organizations to aim for. By comparing your metrics against these benchmarks, you can set realistic goals and understand where your team stands in relation to others in the industry.

✅ Enhancing Collaboration and Communication

DORA metrics promote better collaboration and communication within and across teams. By providing a common language and set of goals, these metrics align development, operations, and business teams around shared objectives. This alignment helps in breaking down silos and fostering a culture of collaboration and transparency.

✅ Improving Business Outcomes

The ultimate goal of tracking DORA metrics is to improve business outcomes. High-performing teams, as measured by DORA metrics, are correlated with faster delivery times, higher quality software, and improved stability. These improvements lead to greater customer satisfaction, increased market competitiveness, and higher revenue growth.

👨🏻‍💻 So, how do we Master the Implementation?

▶️ Define Clear Objectives

Firstly, identify what you want to achieve by tracking DORA metrics. Objectives might include increasing deployment frequency, reducing lead time, decreasing change failure rates, or minimizing MTTR.

▶️ Collect Accurate Data

Ensure your tools are properly configured to collect the necessary data for each metric:

  • Deployment Frequency: Track every deployment to production.
  • Lead Time for Changes: Measure the time from code commit to deployment.
  • Change Failure Rate: Monitor production incidents and link them to specific changes.
  • MTTR: Track the time taken from the detection of a failure to resolution.

▶️ Analyze and Visualize Data

Use dashboards and reports to visualize the metrics. There are many DORA metrics trackers available in the market. Do research and select a tool that can help you create clear and actionable visualizations.

▶️ Set Benchmarks and Targets

Establish benchmarks based on industry standards or your historical data. Set realistic targets for improvement and use these as a guide for your DevOps practices.
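As a rough illustration, the sketch below buckets a deployment frequency into performance bands loosely modelled on the tiers popularized by the DORA State of DevOps reports. The exact thresholds here are assumptions; consult the current report for authoritative definitions.

```python
# Illustrative classification of deployment frequency into performance
# bands. The thresholds below are assumptions for demonstration only,
# not the official DORA band definitions.
def deployment_frequency_band(deploys_per_week: float) -> str:
    if deploys_per_week >= 7:        # roughly daily or more often
        return "elite"
    if deploys_per_week >= 1:        # weekly to daily
        return "high"
    if deploys_per_week >= 0.25:     # roughly monthly to weekly
        return "medium"
    return "low"

for rate in (14, 2, 0.5, 0.1):
    print(f"{rate:>5} deploys/week -> {deployment_frequency_band(rate)}")
```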

▶️ Encourage Continuous Improvement

Use the insights gained from your DORA metrics to identify bottlenecks and areas for improvement. Implement changes and continuously monitor their impact on your metrics. This iterative approach gradually enhances your DevOps performance.

▶️ Regular Reviews and Adjustments

Regularly review metrics and adjust your practices as needed. Objectives and targets must evolve with the organization’s growth and changes in the industry.

Typo is an intelligent engineering management platform for gaining visibility, removing blockers, and maximizing developer effectiveness. Its user-friendly interface and cutting-edge capabilities set it apart in the competitive landscape.

Key Features

  • Customizable DORA metrics dashboard: You can tailor the DORA metrics dashboard to your specific needs for a personalized and efficient monitoring experience. It provides a user-friendly interface and integrates with DevOps tools to ensure a smooth data flow for accurate metric representation.
  • Code review automation: Typo is an automated code review tool that not only enables developers to catch issues related to code maintainability, readability, and potential bugs but also can detect code smells. It identifies issues in the code and auto-fixes them before you merge to master.
  • Predictive sprint analysis: Typo’s intelligent algorithm provides you with complete visibility of your software delivery performance and proactively tells which sprint tasks are blocked, or are at risk of delay by analyzing all activities associated with the task.
  • Measures developer experience: While DORA metrics provide valuable insights, they alone cannot fully address software delivery and team performance. With Typo’s research-backed framework, gain qualitative insights across developer productivity and experience to know what’s causing friction and how to improve.
  • High number of integrations: Typo seamlessly integrates with the tech tool stack. It includes GIT versioning, Issue tracker, CI/CD, communication, Incident management, and observability tools.

🏁 Conclusion

Understanding DORA metrics and effectively implementing and analyzing them can significantly enhance your software delivery performance and overall DevOps practices. These metrics are vital for benchmarking against industry standards, enhancing collaboration and communication, and improving business outcomes.

Software Delivery

Effective DevOps Strategies for Startups

The era when development and operations teams worked in isolation, rarely interacting, is over. This outdated approach led to significant delays in developing and launching new applications. Modern IT leaders understand that DevOps is a more effective strategy.

DevOps fosters collaboration between software development and IT operations, enhancing the speed, efficiency, and quality of software delivery. By leveraging DevOps tools, the software development process becomes more streamlined through improved team collaboration and automation.

Understanding DevOps

DevOps is a methodology that merges software development (Dev) with IT operations (Ops) to shorten the development lifecycle while maintaining high software quality.

Creating a DevOps culture promotes collaboration, which is essential for continuous delivery. IT operations and development teams share ideas and provide prompt feedback, accelerating the application launch cycle.

Importance of DevOps for Startups

In the competitive startup environment, time equates to money. Delayed product launches risk competitors beating you to market. Even with an early market entry, inefficient development processes can hinder timely feature rollouts that customers need.

Implementing DevOps practices helps startups keep pace with industry leaders, speeding up development without additional resource expenditure, improving customer experience, and aligning with business needs.

Core Principles of DevOps

The foundation of DevOps rests on the principles of culture, automation, measurement, and sharing (CAMS). These principles drive continuous improvement and innovation in startups.

Key Benefits of DevOps for Startups

Faster Time-to-Market

DevOps accelerates development and release processes through automated workflows and continuous feedback integration.

  • Startups can rapidly launch new features, fix bugs, and update software, gaining a competitive advantage.
  • Implement continuous integration and continuous deployment (CI/CD) pipelines.
  • Use automated testing to identify issues early.

Improved Efficiency

DevOps enhances workflow efficiency by automating repetitive tasks and minimizing manual errors.

  • Utilize configuration management tools like Ansible and Chef.
  • Implement containerization with Docker for consistency across environments.
  • Automate CI/CD with Jenkins and orchestrate containers with Kubernetes.

Enhanced Reliability

DevOps ensures code changes are continuously tested and validated, reducing failure risks.

  • Conduct regular automated testing.
  • Continuously monitor applications and infrastructure.
  • Increased reliability leads to higher customer satisfaction and retention.

DevOps Practices for Startups

Embrace Automation with CI/CD Tools

Automation tools are essential for accelerating the software delivery process. Startups should use CI/CD tools to automate testing, integration, and deployment. Recommended tools include:

  • Jenkins: An open-source automation server that supports building and deploying applications.
  • GitLab CI/CD: Integrated CI/CD capabilities within GitLab for seamless pipeline management.
  • CircleCI: A cloud-based CI/CD tool that offers fast builds and easy integration with various services.

Implement Continuous Integration and Continuous Delivery (CI/CD)

CI/CD practices enable frequent code changes and deployments. Key components include:

  • Version Control Systems (VCS): Use Git with platforms like GitHub or Bitbucket for efficient code management.
  • Build Automation: Tools like Maven or Gradle for Java projects, or npm scripts for Node.js, automate the build process.
  • Deployment Automation: Utilize tools like Spinnaker or Argo CD for managing Kubernetes deployments.

Utilize Infrastructure as Code (IaC)

IaC allows startups to manage infrastructure through code, ensuring consistency and reducing manual errors. Consider using:

  • Terraform: For provisioning and managing cloud infrastructure in a declarative manner.
  • AWS CloudFormation: For defining infrastructure using YAML or JSON templates.
  • Ansible: For configuration management and application deployment.

Adopt Containerization

Containerization simplifies deployment and improves resource utilization. Use:

  • Docker: To package applications and their dependencies into lightweight, portable containers.
  • Kubernetes: For orchestrating containerized applications, enabling scaling and management.

Monitor and Measure Performance

Implement robust monitoring tools to gain visibility into application performance. Recommended tools include:

  • Prometheus: For real-time monitoring and alerting.
  • Grafana: For visualizing metrics and logs.
  • ELK Stack (Elasticsearch, Logstash, Kibana): For centralized logging and data analysis.

Integrate Security (DevSecOps)

Incorporate security practices into the DevOps pipeline using:

  • Snyk: For identifying vulnerabilities in open-source dependencies.
  • SonarQube: For continuous inspection of code quality and security vulnerabilities.
  • HashiCorp Vault: For managing secrets and protecting sensitive data.

Leverage Software Engineering Intelligence (SEI) Platforms

SEI platforms provide critical insights into the engineering processes, enhancing decision-making and efficiency. Key features include:

  • Data Integration: SEI platforms like Typo ingest data from various tools (e.g., GitHub, JIRA) to provide a holistic view of the development pipeline.
  • Actionable Insights: These platforms analyze data to identify bottlenecks and inefficiencies, enabling teams to optimize workflows and improve delivery speed.
  • DORA Metrics: SEI platforms track key metrics such as deployment frequency, lead time for changes, change failure rate, and time to restore service, helping teams measure their performance against industry standards.

Foster Collaboration and Communication

Utilize collaborative tools to enhance communication among team members. Recommended tools include:

  • Slack: For real-time communication and integration with other DevOps tools.
  • JIRA: For issue tracking and agile project management.
  • Confluence: For documentation and knowledge sharing.

Encourage Continuous Learning

Promote a culture of continuous learning through:

  • Internal Workshops: Regularly scheduled sessions on new tools or methodologies.
  • Online Courses: Encourage team members to take courses on platforms like Coursera or Udemy.

Establish Clear Standards and Documentation

Create a repository for documentation and coding standards using:

  • Markdown: For easy-to-read documentation within code repositories.
  • GitHub Pages: For hosting project documentation directly from your GitHub repository.

How Typo Helps DevOps Teams?

Typo is a powerful tool designed specifically for tracking and analyzing DevOps metrics. It provides an efficient solution for dev and ops teams seeking precision in their performance measurement.

  • With pre-built integrations in the dev tool stack, the dashboard provides all the relevant data within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
  • The dashboard sets custom improvement goals for each team and tracks their success in real time.
  • It gives real-time visibility into a team’s KPIs and lets them make informed decisions.

Conclusion

Implementing DevOps best practices can markedly boost the agility, productivity, and dependability of startups.

By integrating continuous integration and deployment, leveraging infrastructure as code, employing automated testing, and maintaining continuous monitoring, startups can effectively tackle issues like limited resources and skill shortages.

Moreover, fostering a cooperative culture is essential for successful DevOps adoption. By adopting these strategies, startups can create durable, scalable solutions for end users and secure long-term success in a competitive landscape.

Pros and Cons of DORA Metrics for Continuous Delivery

DORA metrics offer a valuable framework for assessing software delivery performance. By measuring DORA key metrics, organizations can identify bottlenecks, improve efficiency, and enhance software quality. It is also a key indicator for measuring the effectiveness of continuous delivery pipelines.

In this blog post, we delve into the pros and cons of utilizing DORA metrics to optimize continuous delivery processes, exploring their impact on performance, efficiency, and overall software quality.

What are DORA Metrics?

DORA metrics, developed by the DevOps Research and Assessment team, are key performance indicators that measure the effectiveness and efficiency of software development and delivery processes. They provide a data-driven approach to evaluate the impact of operational practices on software delivery performance.

Four Key DORA Metrics

  • Deployment Frequency measures how often code is deployed to production.
  • Lead Time for Changes measures the time from a code commit to its successful deployment to end-users.
  • Change Failure Rate measures the percentage of deployments that cause failures in production.
  • Mean Time to Recover measures the time to recover a system or service after an incident or failure in production.

In 2021, the DORA Team added Reliability as a fifth metric. It is based upon how well the user’s expectations are met, such as availability and performance, and measures modern operational practices.

Importance of Continuous Delivery

Continuous delivery (CD) is a primary aspect of modern software development that automatically prepares code changes for release to a production environment. It is combined with continuous integration (CI) and together, these two practices are known as CI/CD.

Continuous delivery holds significant importance compared to traditional waterfall-style development. A few of its benefits are:

Faster Time to Market

Continuous delivery enables more frequent releases, allowing new features, improvements, and bug fixes to be delivered to end-users more quickly. It provides a competitive advantage by keeping the product up-to-date and responsive to user needs.

Improved Quality and Reliability

Automated testing and consistent deployment processes catch bugs and issues early. It improves the overall quality and reliability of the software and reduces the chances of defects reaching production.

Reduced Deployment Risk

When updates are smaller and more frequent, it reduces the complexity and risk associated with each deployment. If an issue does arise, it becomes easier to pinpoint the problem and roll back the changes.

Scalability

CD practices can be scaled to accommodate growing development teams and more complex applications. It helps to manage the increasing demands of modern software development.

Innovation and Experimentation

Continuous delivery allows teams to experiment with new ideas and features efficiently. This encourages innovation by allowing quick feedback and iteration cycles.

Pros of DORA Metrics for Continuous Delivery

Enhances Performance Visibility

  • Deployment Frequency: Higher frequencies indicate a team’s ability to deliver updates and new features quickly and consistently.
  • Lead Time for Changes: Short lead times suggest a more efficient delivery process.
  • Change Failure Rate: A lower rate highlights better testing and higher quality in releases.
  • Mean Time to Restore (MTTR): A lower MTTR indicates a team’s capability to respond to and fix issues rapidly.

Increases Operational Efficiency

Implementing DORA metrics encourages teams to streamline their processes, reducing bottlenecks and inefficiencies in the delivery pipeline. Regularly measuring and analyzing these metrics also fosters a culture of continuous improvement, motivating teams to identify and resolve inefficiencies.

Fosters Collaboration and Communication

Tracking DORA metrics encourages collaboration between DevOps and other stakeholders. Hence, fostering a more integrated and cooperative approach to software delivery. It further provides objective data that teams can use to make informed decisions, prioritize work, and align their efforts with business goals.

Improves Software Quality

Continuous Delivery relies heavily on automated testing to catch defects early. DORA metrics help software teams track the testing processes’ effectiveness which ensures higher software quality. Moreover, faster deployment cycles and lower lead times enable quicker feedback from end-users. It allows teams to address issues and improve the product more swiftly.

Increases Reliability and Stability

Software teams can ensure that their deployments are more reliable and less prone to issues by monitoring and aiming to reduce the change failure rate. A low MTTR demonstrates a team’s capability to quickly recover from failures, which minimizes downtime and its impact on users and increases the reliability and stability of the software.

Cons of DORA Metrics for Continuous Delivery

Implementation Challenges

The process of setting up the necessary software to measure DORA metrics accurately can be complex and time-consuming. Besides this, inaccurate or incomplete data can lead to misleading metrics which can affect decision-making and process improvements.

Resource Allocation Issues

Implementing and maintaining the necessary infrastructure to track DORA metrics can be resource-intensive. It potentially diverts resources from other important areas and increases the risk of disproportionately allocating resources to high-performing teams or projects to improve metrics.

Limited Scope of Metrics

DORA metrics focus on specific aspects of the delivery process and may not capture other crucial factors including security, compliance, or user satisfaction. It is also not universally applicable as the relevance and effectiveness of DORA metrics can vary across different types of projects, teams, and organizations. What works well for one team may not be suitable for another.

Cultural Resistance

Implementing DORA metrics requires changes in culture and mindset, which can be met with resistance from teams that are accustomed to traditional methods. Apart from this, ensuring that DORA metrics align with broader business goals and are understood by all stakeholders can be challenging.

Subjectivity in Measurement

While DORA metrics are quantitative in nature, their interpretation and application can be highly subjective. How metrics like ‘Lead Time for Changes’ or ‘MTTR’ are defined and measured can vary significantly across teams, resulting in inconsistencies in how these metrics are understood and applied.

How does Typo Solve this Issue?

As the tech landscape is evolving, there is a need for diverse evaluation tools in software development. Relying solely on DORA metrics can result in a narrow understanding of performance and progress. Hence, software development organizations necessitate a multifaceted evaluation approach.

And that’s why Typo is here to the rescue!

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools. It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Features

  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Includes effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint.
  • Provides a 360-degree view of the developer experience, i.e., captures qualitative insights and provides an in-depth view of the real issues.
  • Offers engineering benchmarks to compare the team’s results across industries.
  • User-friendly interface.

Conclusion

While DORA metrics offer valuable insights into software delivery performance, they have their limitations. Typo provides a robust platform that complements DORA metrics by offering deeper insights into developer productivity and workflow efficiency, helping organizations achieve the best possible software delivery outcomes.

Improving Scrum Team Performance with DORA Metrics

Scrum is a popular methodology for software development. It concentrates on continuous improvement, transparency, and adaptability to changing requirements. Scrum teams hold regular ceremonies, including Sprint Planning, Daily Stand-ups, Sprint Reviews, and Sprint Retrospectives, to keep the process on track and address any issues.

With the help of DORA DevOps Metrics, Scrum teams can gain valuable insights into their development and delivery processes.

In this blog post, we discuss how DORA Metrics helps boost scrum team performance. 

What are DORA Metrics? 

DevOps Research and Assessment (DORA) metrics are a compass for engineering teams striving to optimize their development and operations processes.

In 2015, the DORA (DevOps Research and Assessment) team was founded by Gene Kim, Jez Humble, and Dr. Nicole Forsgren to evaluate and improve software development practices. The aim is to enhance the understanding of how development teams can deliver software faster, more reliably, and at higher quality.

Four key metrics are: 

  • Deployment Frequency: Deployment Frequency measures the rate of change in software development and highlights potential bottlenecks. It is a key indicator of agility and efficiency. Regular deployments signify a streamlined pipeline, allowing teams to deliver features and updates faster.
  • Lead Time for Changes: Lead Time for Changes measures the time it takes for code changes to move from inception to deployment. It tracks the speed and efficiency of software delivery and offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.
  • Change Failure Rate: Change Failure Rate measures the frequency at which newly deployed changes lead to failures, glitches, or unexpected outcomes in the IT environment. It reflects reliability and efficiency and is related to team capacity, code complexity, and process efficiency, impacting speed and quality.
  • Mean Time to Recover: Mean Time to Recover measures the average duration taken by a system or application to recover from a failure or incident. It concentrates on determining the efficiency and effectiveness of an organization's incident response and resolution procedures.

Reliability is a fifth metric that was added by the DORA team in 2021. It is based upon how well your user’s expectations are met, such as availability and performance, and measures modern operational practices. It doesn’t have standard quantifiable targets for performance levels rather it depends upon service level indicators or service level objectives.

Why DORA Metrics are Useful for Scrum Team Performance? 

DORA metrics are useful for Scrum team performance because they provide key insights into the software development and delivery process.

Measure Key Performance Indicators (KPIs)

DORA metrics track crucial KPIs such as deployment frequency, lead time for changes, mean time to recovery (MTTR), and change failure rate which helps Scrum teams understand their efficiency and identify areas for improvement.

Enhance Workflow Efficiency

Teams can streamline their processes and reduce bottlenecks by monitoring deployment frequency and lead time for changes. Hence, leading to faster delivery of features and bug fixes.

Improve Reliability 

Tracking the change failure rate and MTTR helps software teams focus on improving the reliability and stability of their applications. Hence, resulting in more stable releases and fewer disruptions for users.

Data-Driven Decision Making 

DORA metrics give clear data that helps teams decide where to improve, making it easier to prioritize the most impactful actions for better performance.

Continuous Improvement

Regularly reviewing these metrics encourages a culture of continuous improvement. This helps software development teams to set goals, monitor progress, and adjust their practices based on concrete data.

Benchmarking

DORA metrics allow DevOps teams to compare their performance against industry standards or other teams within the organization. This encourages healthy competition and drives overall improvement.

Best Practices for Implementing DORA Metrics in Scrum Teams

Understand the Metrics 

Firstly, understand the importance of DORA Metrics as each metric provides insight into different aspects of the development and delivery process. Together, these metrics offer a comprehensive view of the team’s performance and allow them to make data-driven decisions. 

Set Baselines and Goals

Scrum teams should start by setting baselines for each metric to get a clear starting point and set realistic goals. For instance, if a scrum team currently deploys once a month, it may be unrealistic to aim for multiple deployments per day right away. Instead, they could set a more achievable goal, like deploying once a week, and gradually work towards increasing their frequency.
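A minimal sketch of this baseline-and-ramp idea, using the monthly-to-weekly example above; all numbers are illustrative:

```python
# Set a baseline and a gradual target, assuming a team currently deploys
# roughly once per month and wants to reach about once per week.
baseline_per_month = 1.0
target_per_month = 4.0          # roughly once per week
quarters_to_target = 3

step = (target_per_month - baseline_per_month) / quarters_to_target
for q in range(1, quarters_to_target + 1):
    goal = baseline_per_month + step * q
    print(f"quarter {q}: aim for ~{goal:.1f} deployments/month")
```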

Regularly Review and Analyze Metrics

Scrum teams must schedule regular reviews (e.g., during sprint retrospectives) to discuss the metrics and identify trends, patterns, and anomalies in the data. This helps to track progress and pinpoint areas for improvement, and further allows them to make data-driven decisions to optimize their processes and adjust their goals as needed.

Foster Continuous Growth

Use the insights gained from the metrics to drive ongoing improvements and foster a culture that values experimentation and learning from mistakes. By creating this environment, Scrum teams can steadily enhance their software delivery performance. Note that this approach should go beyond just focusing on DORA metrics; it should also take into account other factors like team well-being, collaboration, and customer satisfaction.

Ensure Cross-Functional Collaboration and Communicate Transparently

Encourage collaboration between development, operations, and other relevant teams to share insights and work together to address bottlenecks and improve processes. Also, make the metrics and their implications transparent to the entire team. You can use the DORA Metrics dashboard to keep everyone informed and engaged.

How Typo Leverages DORA Metrics? 

Typo is a powerful tool designed specifically for tracking and analyzing DORA metrics. It provides an efficient solution for DevOps and Scrum teams seeking precision in their performance measurement.

  • With pre-built integrations in the dev tool stack, the DORA metrics dashboard provides all the relevant data within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
  • The dashboard sets custom improvement goals for each team and tracks their success in real time.
  • It gives real-time visibility into a team’s KPIs and allows teams to make informed decisions.

Conclusion 

Leveraging DORA Metrics can transform Scrum team performance by providing actionable insights into key aspects of development and delivery. When these metrics are implemented the right way, teams can optimize their workflows, enhance reliability, and make informed decisions. 

Top Platform Engineering KPIs You Need to Monitor

Platform Engineering is becoming increasingly crucial. According to the 2024 State of DevOps Report: The Evolution of Platform Engineering, 43% of organizations have had platform teams for 3-5 years. The field offers numerous benefits, such as faster time-to-market, enhanced developer happiness, and the elimination of team silos.

However, there is one critical piece of advice that Platform Engineers often overlook: treat your platform as an internal product and consider your wider teams as your customers.

So, how can they do this effectively? It's important to measure what’s working and what isn’t using consistent indicators of success.

In this blog, we’ve curated the top platform engineering KPIs that software teams must monitor:

What is Platform Engineering?

Platform Engineering is an emerging technology approach that equips software engineering teams with all the resources they need to automate end-to-end operations across the software development lifecycle. The goal is to reduce overall cognitive load, enhance operational efficiency, and remove process bottlenecks by providing a reliable and scalable platform for building, deploying, and managing applications. 

Importance of Tracking Platform Engineering KPIs

Helps in Performance Monitoring and Optimization

Platform Engineering KPIs offer insights into how well the platform performs under various conditions. They also help identify gaps and areas that need optimization to ensure the platform runs efficiently.

Ensures Scalability and Capacity Planning

These metrics guide decisions on how to scale resources. They also inform capacity planning, ensuring the platform can handle growth and increased load without performance degradation. 

Quality Assurance

Tracking KPIs ensures that the platform remains robust and maintainable. This further helps to reduce technical debt and improve the platform’s overall quality. 

Increases Productivity and Collaboration

They provide in-depth insights into how effectively the engineering team operates and help to identify areas for improvement in team dynamics and processes.

Fosters a Culture of Continuous Improvement

Regularly tracking and analyzing KPIs fosters a culture of continuous improvement, encouraging proactive problem-solving and innovation among platform engineers. 

Top Platform Engineering KPIs to Track 

Deployment Frequency 

Deployment Frequency measures how often code is deployed to production, typically tracked per day or per week. It takes into account everything from bug fixes and capability improvements to new features. It is a key metric for understanding the agility and efficiency of development and operational processes, and it highlights the team’s ability to deliver updates and new features.

A higher frequency with minimal issues reflects mature CI/CD processes and shows that platform engineering teams can quickly adapt to changes. Regularly tracking Deployment Frequency supports continuous improvement, since smaller, more frequent releases reduce the risk of large, disruptive changes and deliver value to end-users effectively. 

Lead Time for Changes

Lead Time for Changes is the duration between a code change being committed and its successful deployment to end-users. It correlates with both the speed and quality of the platform engineering team. A high lead time is a clear sign of roadblocks in the process and that the platform needs attention. 

A low lead time indicates that teams quickly adapt to feedback and deliver products on time. It also gives teams the ability to make rapid changes, allowing them to adapt to evolving user needs and market conditions. Tracking it regularly helps streamline workflows and reduce bottlenecks. 

Change Failure Rate

Change Failure Rate (CFR) refers to the proportion or percentage of deployments that result in failure or errors. It indicates the rate at which changes negatively impact the stability or functionality of the system, and it provides a clear view of the platform’s quality and stability, e.g., how much effort goes into addressing problems and releasing code.

A lower CFR indicates that deployments are reliable and that changes are thoroughly tested and less likely to cause issues in production. It also reflects well-functioning development and deployment processes, boosting team confidence and morale. 
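Expressed as a simple formula (the standard way to compute CFR from deployment counts):

```latex
\mathrm{CFR} = \frac{\text{failed deployments}}{\text{total deployments}} \times 100\%
```

For example, 2 failed deployments out of 50 total gives a CFR of 4%.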

Mean Time to Restore

Mean Time to Restore (MTTR) represents the average time taken to resolve a production failure or incident and restore normal system functionality. A low MTTR indicates that the platform is resilient, recovers quickly from issues, and has an efficient incident response process. 

Faster recovery times minimize the impact on users, increasing their satisfaction and trust in the service. Moreover, they contribute to higher system uptime and availability and enhance your platform’s reputation, giving you a competitive edge. 

Resource Utilization 

This KPI tracks the usage of system resources such as CPU, memory, and storage. It is a critical metric for optimizing resource allocation and cost efficiency, since platform teams must balance several objectives with a fixed amount of resources. 

It allows platform engineers to distribute limited resources evenly and efficiently and to understand exactly where to spend. Resource Utilization also aids in capacity planning and helps avoid potential bottlenecks. 

Error Rates

Error Rates measure the number of errors encountered in the platform, reflecting its stability, reliability, and user experience. High Error Rates indicate underlying problems that need immediate attention and can otherwise degrade the user experience, leading to frustration and potential loss of users.

Monitoring Error Rates helps in the early detection of issues, enabling proactive response, and preventing minor issues from escalating into major outages. It also provides valuable insights into system performance and creates a feedback loop that informs continuous improvement efforts. 

Team Velocity 

Team Velocity is a critical metric that measures the amount of work completed in a given iteration (e.g., a sprint). It highlights developer productivity and efficiency and aids in planning and prioritizing future tasks. 

It helps forecast the completion dates of larger projects or features, aiding in long-term planning and setting stakeholder expectations. Team Velocity also helps platform teams understand their capacity so they can distribute tasks evenly and prevent overloading team members. 
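To illustrate how some of these KPIs can be derived from raw monitoring data, here is a minimal Python sketch; the samples and function names are hypothetical, and a production setup would source these numbers from observability tooling:

```python
# Hypothetical monitoring samples: (total_requests, error_count) per hour.
request_samples = [(1200, 6), (980, 2), (1500, 12)]

# Hypothetical CPU utilization samples (0.0-1.0) across the fleet.
cpu_samples = [0.42, 0.55, 0.61, 0.38]

def error_rate(samples):
    """Errors as a percentage of all requests."""
    total = sum(requests for requests, _ in samples)
    errors = sum(errs for _, errs in samples)
    return 100 * errors / total

def avg_utilization(samples):
    """Mean resource utilization as a percentage."""
    return 100 * sum(samples) / len(samples)

print(f"Error rate: {error_rate(request_samples):.2f}%")        # ~0.54%
print(f"CPU utilization: {avg_utilization(cpu_samples):.1f}%")  # 49.0%
```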

How to Develop a Platform Engineering KPI Plan? 

Define Objectives 

Firstly, ensure that the KPIs support the organization’s broader objectives. A few of them include improving system reliability, enhancing user experience, or increasing development efficiency. Always focus on metrics that reflect the unique aspects of platform engineering. 

Identify Key Performance Indicators 

Select KPIs that provide a comprehensive view of platform engineering performance. We’ve shared some critical KPIs above; choose those that fit your objectives and context. 

Establish Baseline and Targets

Assess the current performance levels of your software engineers to establish baselines. Then set realistic, achievable targets for each KPI, based on historical data, industry benchmarks, and business objectives.

Analyze and Interpret Data

Regularly analyze trends in the data to identify patterns, anomalies, and areas for improvement. Set up alerts for critical KPIs that require immediate attention. Don’t forget to conduct root cause analysis for any deviations from expected performance to understand underlying issues.

Review and Refine KPIs

Lastly, review the relevance and effectiveness of the KPIs periodically to ensure they align with business objectives and provide value. Adjust targets based on changes in business goals, market conditions, or team capacity.

Typo - An Effective Platform Engineering Tool 

Typo is an effective platform engineering tool that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It integrates seamlessly into tech tool stacks such as Git-based version control, issue trackers, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Conclusion

Monitoring the right KPIs is essential for successful platform teams. By treating your platform as an internal product and your teams as customers, you can focus on delivering value and driving continuous improvement. The KPIs discussed above provide a comprehensive view of your platform's performance and areas for enhancement. 

There are other KPIs available as well that we have not mentioned. Do your research and consider those that best suit your team and objectives.

All the best! 

Comparative Analysis of DevOps and Platform Engineering

There are two essential concepts in contemporary software engineering: DevOps and Platform Engineering.

In this article, we dive into how DevOps has revolutionized the industry, explore the emerging role of Platform Engineering, and compare their distinct methodologies and impacts.

What is DevOps?

DevOps is a cultural and technical movement aimed at unifying software development (Dev) and IT operations (Ops) to improve collaboration, streamline processes, and enhance the speed and quality of software delivery. The primary goal of DevOps is to create a more cohesive, continuous workflow from development through to production.

Key Principles of DevOps

  • Automation: Automating repetitive tasks to increase efficiency and reduce errors.
  • Continuous Integration and Continuous Delivery (CI/CD): Integrating code changes frequently and automating the deployment process to ensure rapid, reliable releases.
  • Collaboration and Communication: Fostering a culture of shared responsibility between development and operations teams.
  • Monitoring and Logging: Continuously monitoring applications and infrastructure to identify issues early and improve performance.
  • Infrastructure as Code (IaC): Managing and provisioning computing infrastructure through machine-readable definition files.

What is Platform Engineering?

Platform engineering is the practice of designing and building toolchains and workflows that enable self-service capabilities for software engineering organizations in the cloud-native era. It focuses on creating internal developer platforms (IDPs) that provide standardized environments and services for development teams.

Key Principles of Platform Engineering

  • Self-Service Interfaces: Providing developers with easy access to environments, tools, and infrastructure (see the sketch after this list).
  • Standardization and Consistency: Ensuring that environments and workflows are consistent across different projects and teams.
  • Scalability and Flexibility: Designing platforms that can scale with organizational needs and accommodate different technologies and workflows.
  • Security and Compliance: Embedding security and compliance checks within the platform to ensure that applications meet organizational and regulatory standards.
  • Developer Experience: Improving the overall developer experience by reducing friction and enabling faster delivery cycles.
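To make the self-service principle concrete, here is a minimal, purely illustrative Python sketch of an internal provisioning interface; every name is hypothetical, and a real internal developer platform would call actual infrastructure APIs rather than an in-memory dictionary:

```python
import uuid

# In-memory stand-in for a real environment registry.
ENVIRONMENTS = {}

# Standardized, pre-approved templates (the "golden paths").
TEMPLATES = {
    "web-service": {"cpu": "500m", "memory": "512Mi", "replicas": 2},
    "batch-job": {"cpu": "1000m", "memory": "1Gi", "replicas": 1},
}

def provision(team, template):
    """Self-service entry point: a developer requests an environment
    and gets a standardized one back without filing a ticket."""
    if template not in TEMPLATES:
        raise ValueError(f"Unknown template: {template!r}")
    env_id = f"{team}-{uuid.uuid4().hex[:8]}"
    ENVIRONMENTS[env_id] = {"team": team, **TEMPLATES[template]}
    return env_id

env_id = provision("payments", "web-service")
print(env_id, ENVIRONMENTS[env_id])
```

The design point is that developers consume standardized templates instead of hand-crafting infrastructure, which delivers the self-service and standardization principles at once.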

Comparative Analysis of DevOps and Platform Engineering

DevOps and Platform Engineering can be compared along several dimensions:

  • Overview: DevOps is a cultural and technical movement that unifies development and operations, while Platform Engineering builds internal developer platforms that productize tooling and infrastructure.
  • Technical Foundations: DevOps rests on automation, CI/CD, monitoring, and IaC practices; Platform Engineering rests on self-service interfaces and standardized, consistent environments.
  • Architectural Differences: DevOps practices apply across a team’s existing systems, whereas Platform Engineering centers on a dedicated internal developer platform (IDP).
  • Toolchains and Technologies: In DevOps, individual teams often assemble their own toolchains; platform teams curate and standardize toolchains for the whole organization.
  • Processes and Workflows: DevOps emphasizes shared responsibility across the delivery workflow; Platform Engineering encodes workflows as paved paths that developers consume.
  • Operational Impact: DevOps improves collaboration and delivery speed; Platform Engineering reduces cognitive load and enforces consistency at scale.

Conclusion

DevOps and Platform Engineering offer different yet complementary approaches to enhancing software development and delivery. DevOps focuses on cultural integration and automation, while Platform Engineering emphasizes providing a robust, scalable infrastructure platform. By understanding these technical distinctions, organizations can make informed decisions to optimize their software development processes and achieve their operational goals.

How to Become a Successful Platform Engineer

Platform engineering is a relatively new and evolving field in the tech industry. While it offers many opportunities, certain aspects are often overlooked.

In this blog, we discuss effective strategies for becoming a successful platform engineer and identify common pitfalls to avoid.

What is a Platform Engineer? 

Platform Engineering is an emerging technology approach that equips software engineering teams with all the resources they need to automate end-to-end operations across the software development lifecycle. The goal is to reduce overall cognitive load, enhance operational efficiency, and remove process bottlenecks by providing a reliable and scalable platform for building, deploying, and managing applications. 

Strategies for Being a Great Platform Engineer

Keeping the Entire Engineering Organization Up-to-Date with Platform Team Insights

One important tip for becoming a great platform engineer is keeping the entire engineering organization informed about platform team initiatives. This fosters transparency, alignment, and cross-team collaboration, ensuring everyone is on the same page. When everyone is aware of what’s happening in the platform team, they can plan tasks effectively, offer feedback, raise concerns early, and minimize duplication of effort. As a result, everyone shares an understanding of the platform, its goals, and its challenges. 

Ensure Your Team Possesses Diverse Skills

When everyone on the platform engineering team has varied skill sets, it brings a variety of perspectives and expertise to the table. This further helps in solving problems creatively and approaching challenges from multiple angles. 

It also lets the team handle a wide range of tasks, such as architecture design and maintenance, effectively. Furthermore, team members can learn from each other (and so can you!), which improves collaboration and helps the team understand and address user needs comprehensively.

Automate as Much as Possible 

Pull requests and code reviews, when done manually, take a lot of the team’s time and effort, which is an important reason to use automation tools. Automation allows you to focus on more strategic, high-value tasks and lets you handle an increased workload. This helps accelerate development cycles and time to market for new features and updates, optimizing resource utilization and reducing operational costs over time. 
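As one small, concrete example of such automation, the sketch below uses GitHub's REST API to flag pull requests that have been open too long; the repository name and token are placeholders, and this is only one of many checks a platform team might automate:

```python
from datetime import datetime, timezone

import requests

REPO = "your-org/your-repo"    # placeholder
TOKEN = "ghp_your_token_here"  # placeholder
MAX_AGE_DAYS = 3

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"state": "open"},
    timeout=10,
)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for pr in resp.json():
    opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    age_days = (now - opened).days
    if age_days > MAX_AGE_DAYS:
        print(f"PR #{pr['number']} ({pr['title']}) open for {age_days} days")
```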

Maintain a DevOps Culture 

Platform engineering isn’t only about building the underlying tools; it also means maintaining a DevOps culture. You must partner with development, security, and operations teams to improve efficiency and performance. This enables the right conversations for discovering bottlenecks, allows flexibility in tool choices, and reinforces positive collaboration among teams. 

Moreover, it encourages a feedback-driven culture where teams can continuously learn and improve, aligning the team’s efforts closely with customer requirements and business objectives. 

Stay Abreast of Industry Trends

To be a successful platform engineer, it's important to stay current with the latest trends and technologies. Attending tech workshops, webinars, and conferences is an excellent way to keep up with industry developments. Besides these offline methods, you can read blogs, follow tech influencers, listen to podcasts, and join online discussions to improve your knowledge and stay ahead of industry trends.

Moreover, collaborating with a team that possesses diverse skill sets can help you identify areas that require upskilling and introduce you to new tools, frameworks, and best practices. This combined approach enables you to better anticipate and meet customer needs and expectations.

Take Everything into Consideration and Measure Success Holistically 

Beyond DevOps metrics, consider factors like security improvements, cost optimization, and consistency across the organization. This holistic approach prevents overemphasis on a single area and helps identify potential risks and issues that might be overlooked when focusing solely on individual metrics. Additionally, it highlights areas for improvement and drives ongoing efficiency gains across all dimensions of the platform.

Common Pitfalls that Platform Engineers Ignore 

Failing to Understand the Right Customers 

First things first, understand who your customers are. When platform teams prioritize features or improvements that don't meet software developers' needs, it negatively impacts their user experience. This can lead to poor user interfaces, inadequate documentation, and missing functionalities, directly affecting customers' productivity.

Therefore, it's essential to identify the target audience, understand their key requirements, and align with their requests. Ignoring this in the long run can result in low usage rates and a gradual erosion of customer trust.

Lack of Adequate Tools for Platform Teams

One common mistake platform engineers make is not giving engineering teams enough tooling or ownership. This makes it difficult for teams to diagnose and fix issues in their code and increases the likelihood of errors and downtime, as they may struggle to thoroughly test and monitor code. They may also spend more time on manual processes and troubleshooting, which slows down the development cycle. 

Hence, it is always advisable to provide your team with enough tooling. Discuss with them what tooling they need, whether the existing tools are working well, and what requirements they have. 

Too Much Planning, Not Enough Feature Releases

When too much time is spent on planning, the result is analysis paralysis, i.e., thinking too much about potential features and improvements rather than implementing and testing them. This delays deliveries, slowing down the development process and feedback loops. 

Early and frequent shipping creates the right feedback loops, which enhance the user experience and improve the platform continuously. Prioritize feature releases based on how often the related deployment processes are performed, and make sure to involve software developers as well to discover more effective solutions. 

Neglecting the Documentation Process

The documentation process is often underestimated. Platform engineers may believe the process is self-explanatory, but this isn’t true. Everything around code, feature releases, and the platform itself must be comprehensively documented. This is critical for onboarding, troubleshooting, and knowledge transfer. 

Well-written documentation also helps establish and maintain consistent practices and standards across the team. It also supports understanding of the system’s architecture, dependencies, and known issues. 

Relying Solely on Third-Party Tools for Security

Platform engineers must take full ownership of security issues. A lack of accountability can result in increased security risks and vulnerabilities specific to the platform, and a limited understanding of these unique risks can affect the system. 

That doesn’t mean platform engineers must stop using third-party tools. They should leverage them; however, such tools need to be complemented by internal processes and knowledge and integrated into the design, development, and deployment phases of platform engineering.  

Typo - An Effective Platform Engineering Tool 

Typo is an effective platform engineering tool that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It integrates seamlessly into tech tool stacks such as Git-based version control, issue trackers, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Conclusion 

Implementing these strategies will improve your success as a platform engineer. By prioritizing transparency, diverse skill sets, automation, and a DevOps culture, you can build a robust platform that meets evolving needs efficiently. Staying updated with industry trends and taking a holistic approach to success metrics ensures continuous improvement.

Make sure to avoid the common pitfalls as well. By addressing these challenges, you can create a responsive, secure, and innovative platform environment.

Hope this helps. All the best! :)

Impact of DORA Metrics on SPACE Efficiency in Software Development

Abstract

Efficiency in software development is crucial for delivering high-quality products quickly and reliably. This research investigates the impact of DORA (DevOps Research and Assessment) Metrics — Deployment Frequency, Lead Time for Changes, Mean Time to Recover (MTTR), and Change Failure Rate — on efficiency within the SPACE framework (Satisfaction, Performance, Activity, Collaboration, Efficiency). Through detailed mathematical calculations, correlation with business metrics, and a case study of one of our customers, this study provides empirical evidence of their influence on operational efficiency, customer satisfaction, and financial performance in software development organizations.

Introduction

Efficiency is a fundamental aspect of successful software development, influencing productivity, cost-effectiveness, and customer satisfaction. The DORA Metrics serve as standardized benchmarks to assess and enhance software delivery performance across various dimensions. This paper aims to explore the quantitative impact of these metrics on SPACE efficiency and their correlation with key business metrics, providing insights into how organizations can optimize their software development processes for competitive advantage.

Literature Review

Previous research has highlighted the significance of DORA Metrics in improving software delivery performance and organizational agility (Forsgren et al., 2018). However, detailed empirical studies demonstrating their specific impact on SPACE efficiency and business metrics remain limited, warranting comprehensive analysis and calculation-based research.

Methodology

Case Study Design: One of our customers in the US, a B2B SaaS company with 120+ engineers

Selection Criteria: A leading SaaS company based in the US was chosen for this case study due to the scale and complexity of its software development operations. With over 120 engineers distributed across various teams, the customer faced challenges related to deployment efficiency, reliability, and customer satisfaction.

Data Collection: Utilized the customer’s internal metrics and tools, including deployment logs, incident reports, customer feedback surveys, and performance dashboards. The study focused on a period of 12 months to capture seasonal variations and long-term trends in software delivery performance.

Contextual Insights: Gathered qualitative insights through interviews with the customer’s development and operations teams. These interviews provided valuable context on existing challenges, process bottlenecks, and strategic goals for improving software delivery efficiency.

Selection and Calculation of DORA Metrics

Deployment Frequency: Calculated as the number of deployments per unit time (e.g., per day).

Example: They increased their deployment frequency from 3 deployments per week to 15 deployments per week during the study period.

Calculation: 15 deployments per week ÷ 3 deployments per week = a 5× increase in Deployment Frequency.

Insight: Higher deployment frequency facilitated faster feature delivery and responsiveness to market demands.

Lead Time for Changes: Measured from code commit to deployment completion.

Example: Lead time reduced from 7 days to 1 day due to process optimizations and automation efforts.

Calculation: (7 days - 1 day) ÷ 7 days ≈ an 86% reduction in Lead Time for Changes.

Insight: Shorter lead times enabled Typo’s customer to swiftly adapt to customer feedback and market changes.

MTTR (Mean Time to Recover): Calculated as the average time taken to restore service after an incident.

Example: MTTR decreased from 4 hours to 30 minutes through improved incident response protocols and automated recovery mechanisms.

Calculation: (4 hours - 0.5 hours) ÷ 4 hours = an 87.5% reduction in MTTR.

Insight: Reduced MTTR enhanced system reliability and minimized service disruptions.

Change Failure Rate: Determined by dividing the number of failed deployments by the total number of deployments.

Example: Change failure rate decreased from 8% to 1% due to enhanced testing protocols and deployment automation.

Insight: Lower change failure rate improved product stability and customer satisfaction.
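The four calculations above can be reproduced mechanically. Below is a rough Python sketch that derives all four metrics from simple deployment and incident records; the record format is invented for illustration, while the study itself used the customer’s internal logs and dashboards:

```python
from datetime import datetime, timedelta

# Invented records: (commit_time, deploy_time, failed?).
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 2, 9), False),
    (datetime(2024, 1, 3, 9), datetime(2024, 1, 4, 9), True),
    (datetime(2024, 1, 5, 9), datetime(2024, 1, 6, 9), False),
    (datetime(2024, 1, 7, 9), datetime(2024, 1, 8, 9), False),
]
restore_times = [timedelta(minutes=30), timedelta(minutes=45)]
window_weeks = 1

deploy_freq = len(deployments) / window_weeks
lead_time = sum((d - c for c, d, _ in deployments), timedelta()) / len(deployments)
cfr = 100 * sum(1 for *_, failed in deployments if failed) / len(deployments)
mttr = sum(restore_times, timedelta()) / len(restore_times)

print(f"Deployment frequency: {deploy_freq:.0f}/week")  # 4/week
print(f"Lead time for changes: {lead_time}")            # 1 day, 0:00:00
print(f"Change failure rate: {cfr:.0f}%")               # 25%
print(f"MTTR: {mttr}")                                  # 0:37:30
```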

Correlation with Business Metrics

Revenue Growth: Typo’s customer achieved a 25% increase in revenue, attributed to faster time-to-market and improved customer satisfaction.

Customer Satisfaction: Improved Net Promoter Score (NPS) from 8 to 9, indicating higher customer loyalty and retention rates.

Employee Productivity: Increased by 30% as teams spent less time on firefighting and more on innovation and feature development.

Discussion

The findings from our customer case study illustrate a clear correlation between improved DORA Metrics, enhanced SPACE efficiency, and positive business outcomes. By optimizing Deployment Frequency, Lead Time for Changes, MTTR, and Change Failure Rate, organizations can achieve significant improvements in operational efficiency, customer satisfaction, and financial performance. These results underscore the importance of data-driven decision-making and continuous improvement practices in software development.

How Typo Leverages DORA Metrics

Typo is an intelligent engineering management platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. Its user-friendly interface and cutting-edge capabilities set it apart in the competitive landscape: users can tailor the DORA metrics dashboard to their specific needs for a personalized and efficient monitoring experience, and the platform integrates with DevOps tools to ensure a smooth data flow for accurate metric representation.

Conclusion

In conclusion, leveraging DORA Metrics within software development processes enables organizations to streamline operations, accelerate innovation, and maintain a competitive edge in the market. By aligning these metrics with business objectives and systematically improving their deployment practices, companies can achieve sustainable growth and strategic advantages. Future research should continue to explore emerging trends in DevOps and their implications for optimizing software delivery performance.

Next Steps

Moving forward, Typo and similar organizations should consider the following next steps based on the insights gained from this study:

  • Continuous Optimization: Implement continuous optimization practices to further enhance DORA Metrics and sustain efficiency gains.
  • Expansion of Metrics: Explore additional DORA Metrics and benchmarks to capture broader aspects of software delivery performance.
  • Industry Collaboration: Engage in industry collaborations and benchmarking exercises to validate and benchmark performance against peers.
  • Technology Integration: Invest in advanced technologies such as AI and machine learning to automate and optimize software delivery processes further.

References

  • Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations. IT Revolution Press.

State of DevOps Report 2023 Highlights

Although we are somewhat late in presenting this summary, the insights from the 2023 State of DevOps Report remain highly relevant and valuable for the industry. The DevOps Research and Assessment (DORA) program has significantly influenced software development practices over the past decade. Each year, the State of DevOps Report provides a detailed analysis of the practices and capabilities that drive success in software delivery, offering benchmarks that teams can use to evaluate their own performance. This blog summarizes the key findings from the 2023 report, incorporates additional data and insights from industry developments, and introduces the role of Software Engineering Intelligence (SEI) platforms as highlighted by Gartner in 2024.

Overview of the 2023 Report

The 2023 State of DevOps Report draws from responses provided by over 36,000 professionals across various industries and organizational sizes. This year’s research emphasizes three primary outcomes:

  1. Organizational Performance: Generating value for customers and the community, extending beyond just revenue metrics.
  2. Team Performance: Evaluating teams’ ability to innovate and collaborate effectively.
  3. Employee Well-being: Assessing the impact of organizational strategies on reducing burnout and enhancing job satisfaction and productivity.

Additionally, the report examines two key performance measures:

  • Software Delivery Performance: The efficiency and safety of teams in implementing changes in technology systems.
  • Operational Performance: The reliability and quality of the user experience provided.

Key Takeaways from the Report

Culture Is Critical

The 2023 report highlights the crucial role of culture in developing technical capabilities and driving performance. Teams with a generative culture — characterized by high levels of trust, autonomy, open information flow, and a focus on learning from failures rather than assigning blame — achieve, on average, 30% higher organizational performance. This type of culture is essential for fostering innovation, collaboration, and continuous improvement.

Building a successful organizational culture requires a combination of everyday practices and strategic leadership. Practitioners shape culture through their daily actions, promoting collaboration and trust. Transformational leadership is also vital, emphasizing the importance of a supportive environment that encourages experimentation and autonomy.

User-Centric Approach

A significant finding in this year’s report is that a user-centric approach to software development is a strong predictor of organizational performance. Teams with a strong focus on user needs show 40% higher organizational performance and a 20% increase in job satisfaction. Leaders can foster an environment that prioritizes user value by creating incentive structures that reward teams for delivering meaningful user value rather than merely producing features.

Generative AI: Early Stages

An intriguing insight from the report is that the use of Generative AI, such as coding assistants, has not yet shown a significant impact on performance. This is likely because larger enterprises are slower to adopt emerging technologies. However, as adoption increases and more data becomes available, this trend is expected to evolve.

Investing in Technical Capabilities

Investing in technical capabilities like continuous integration and delivery, trunk-based development, and loosely coupled architectures leads to substantial improvements in performance. For example, reducing code review times can improve software delivery performance by up to 50%. High-quality documentation further enhances these technical practices, with trunk-based development showing a 12.8x greater impact on organizational performance when supported by quality documentation.

Cloud Computing Enhances Flexibility

Leveraging cloud platforms significantly enhances flexibility and, consequently, performance. Using a public cloud platform increases infrastructure flexibility by 22% compared to other environments. While multi-cloud strategies also improve flexibility, they can introduce complexity in managing governance, compliance, and risk. To maximize the benefits of cloud computing, organizations should modernize and refactor workloads to exploit the cloud’s flexibility rather than simply migrating existing infrastructure.

Equitable Work Distribution

The report indicates that individuals from underrepresented groups, including women and those who self-describe their gender, experience higher levels of burnout and are more likely to engage in repetitive work. Implementing formal processes to distribute work evenly can help reduce burnout. However, further efforts are needed to extend these benefits to all underrepresented groups.

Flexible Working Arrangements

The Covid-19 pandemic has reshaped working arrangements, with many employees working remotely. About 33% of respondents in this year’s survey work exclusively from home, while 63% work from home more often than from an office. Although there is no conclusive evidence that remote work impacts team or organizational performance, flexibility in work arrangements correlates with increased value delivered to users and improved employee well-being. This flexibility also applies to new hires, with no observable performance increase linked to office-based onboarding.

Actual Practices and Trends in DevOps

The 2023 report highlights several key practices that are driving success in DevOps:

Continuous Integration/Continuous Delivery (CI/CD)

Implementing CI/CD pipelines is essential for automating the integration and delivery process. This practice allows teams to detect issues early, reduce integration problems, and deliver updates more frequently and reliably.

  • Google: Google has implemented CI/CD pipelines extensively across its development teams. This practice has enabled Google to push thousands of updates daily with minimal disruption. Automated testing and deployment ensure that new code is integrated seamlessly, significantly reducing the risk of integration issues.
  • Netflix: Known for its high-frequency deployments, Netflix utilizes a CI/CD pipeline that includes automated testing, canary releases, and real-time monitoring. This approach allows Netflix to deliver new features and updates quickly while maintaining high reliability and performance.

Trunk-Based Development

This approach involves developers integrating their changes into a shared trunk frequently, reducing the complexity of merging code and improving collaboration. Trunk-based development is linked to faster delivery cycles and higher quality outputs.

  • Facebook: Facebook employs trunk-based development to streamline code integration. Developers frequently merge their changes into the main branch, reducing merge conflicts and integration pain. This practice supports Facebook’s fast-paced release cycles, enabling frequent updates without compromising stability.
  • Etsy: Etsy has adopted trunk-based development to foster collaboration and accelerate delivery. By continuously integrating code into the main branch, Etsy’s development teams can quickly address issues and deliver new features, enhancing their agility and responsiveness to market demands.

Loosely Coupled Architectures

Designing systems as loosely coupled services or microservices helps teams develop, deploy, and scale components independently. This architecture enhances system resilience and flexibility, enabling faster and more reliable updates.

  • Amazon: Amazon’s architecture is built around microservices, allowing teams to develop, deploy, and scale services independently. This decoupled architecture enhances system resilience and flexibility, enabling Amazon to innovate rapidly and handle high traffic volumes efficiently.
  • Spotify: Spotify uses microservices to ensure that different parts of its application can be updated independently. This architecture allows Spotify to scale its services globally, providing a reliable and high-quality user experience even during peak usage times.

Automated Testing

Automated testing is critical for maintaining high-quality code and ensuring that new changes do not introduce defects. This practice supports continuous delivery by providing immediate feedback on code quality.

  • Microsoft: Microsoft has integrated automated testing into its development pipeline for products like Azure. Automated unit, integration, and end-to-end tests ensure that new code meets quality standards before it is deployed, reducing the risk of defects and improving overall software quality.
  • Airbnb: Airbnb relies heavily on automated testing to maintain the quality of its platform. By incorporating automated tests into their CI/CD pipeline, Airbnb can rapidly identify and address issues, ensuring that new features are delivered without introducing bugs.
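For teams new to the practice, automated tests can start very small. The sketch below shows a minimal pytest-style unit test for a hypothetical discount function; the function and its rules are invented purely to illustrate the shape of such a test:

```python
import pytest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Running tests like these with `pytest` in the CI/CD pipeline gives every change immediate feedback before it is merged.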

Monitoring and Observability

Implementing robust monitoring and observability practices allows teams to gain insights into system performance and user behavior. These practices help in quickly identifying and resolving issues, improving system reliability and user satisfaction.

  • LinkedIn: LinkedIn has developed a comprehensive observability platform that provides real-time insights into system performance and user behavior. This platform helps LinkedIn quickly identify and resolve issues, improving system reliability and enhancing the user experience.
  • Uber: Uber uses advanced monitoring and observability tools to track the performance of its services. These tools provide detailed metrics and alerts, enabling Uber to proactively manage system health and ensure a seamless experience for users.

Infrastructure as Code (IaC)

Using IaC enables teams to manage and provision infrastructure through code, making the process more efficient, repeatable, and less prone to human error. IaC practices contribute to faster, more consistent deployment of infrastructure resources.

  • Capital One: Capital One has adopted Infrastructure as Code to automate the provisioning and management of its cloud infrastructure. Using tools like AWS CloudFormation and Terraform, Capital One ensures consistency, reduces manual errors, and accelerates infrastructure deployment.
  • Shopify: Shopify employs IaC to manage its infrastructure across multiple cloud providers. This approach allows Shopify to maintain a consistent and repeatable deployment process, supporting rapid scaling and reducing the time required to provision new environments.
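As a flavor of what IaC looks like in practice, here is a minimal sketch using Pulumi's Python SDK to declare a single AWS S3 bucket; it assumes the pulumi and pulumi_aws packages are installed and cloud credentials are configured, and the resource name is a placeholder:

```python
import pulumi
import pulumi_aws as aws

# Declare the desired state: one S3 bucket. Pulumi compares this
# declaration to real infrastructure and creates, updates, or
# deletes resources to match it.
bucket = aws.s3.Bucket("example-artifacts-bucket")

# Export the generated bucket name for other stacks or tools.
pulumi.export("bucket_name", bucket.id)
```

Running `pulumi up` applies the declaration idempotently, which is what makes provisioning repeatable and less prone to human error.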

The Role of Metrics and Insights

Metrics are vital for guiding teams and driving continuous improvement. However, in line with Goodhart’s law (when a measure becomes a target, it ceases to be a good measure), they should be used to inform and guide rather than serve as rigid targets. Here’s why metrics are crucial:

  • Promoting Accountability and Transparency: Metrics foster a culture of ownership and responsibility, creating transparency and shared goals within teams.
  • Enabling Data-Driven Decisions: Metrics provide objective data for evaluating processes, identifying inefficiencies, and implementing improvements.
  • Facilitating Collaboration and Communication: Shared metrics create a common understanding, making it easier for teams to collaborate effectively.
  • Supporting Continuous Improvement: Regularly measuring and analyzing performance helps teams identify trends, address inefficiencies, and continuously improve.

SEI Platform as Highlighted by Gartner

Software Engineering Intelligence (SEI) platforms like Typo, as highlighted in Gartner’s research, play a pivotal role in advancing DevOps practices. These platforms provide tools and frameworks that help organizations assess their software engineering capabilities and identify areas for improvement, and they emphasize integrating DevOps principles into the entire software development lifecycle, from initial planning to deployment and maintenance.

Gartner’s analysis indicates that organizations leveraging SEI platforms see significant improvements in their DevOps maturity, leading to enhanced performance, reduced time to market, and increased customer satisfaction. Their comprehensive approach ensures that DevOps practices are not just implemented but continuously optimized to meet evolving business needs.

Conclusion

The State of DevOps Report 2023 by DORA offers critical insights into the current state of DevOps, emphasizing the importance of culture, user focus, technical capabilities, cloud flexibility, and equitable work distribution.

Further Reading

For those interested in delving deeper into the State of DevOps Report 2023 and related topics, the full report and DORA’s accompanying research materials provide extensive insights into DevOps principles and practices, offering practical guidance for organizations aiming to enhance their DevOps capabilities and achieve greater success in their software delivery processes.

Understanding the hurdles in sprint reviews

Sprint reviews aim to foster open communication, active engagement, alignment with goals, and clear expectations. Despite these noble goals, many teams face significant hurdles in achieving them. These challenges often stem from the complexities involved in managing these elements effectively.

Common issues in sprint reviews

  • Open Communication: One of the core principles of Agile is transparent and timely information sharing. However, developers often hesitate to provide early feedback due to the fear of premature criticism. This delay in communication can hinder problem-solving and allow minor issues to escalate. Moreover, sprint reviews sometimes become dominated by the Product Owner, overshadowing the collaborative efforts of the entire team.
  • Engagement: For sprint reviews to be effective, active participation from all team members and stakeholders is crucial. Unfortunately, these meetings often become monotonous, with one person presenting while others remain passive. This one-sided narrative stifles the collective intelligence of the group and diminishes the value of the meeting.
  • Goal Alignment: Clear, collaboratively set goals are essential in Agile. These goals provide direction and purpose for the team’s efforts. However, without frequent reinforcement, teams can lose focus. Developers may pursue interesting but unrelated tasks that, while beneficial on their own, can detract from the primary objectives of the sprint. This issue is compounded by unclear definitions of what constitutes “done,” leading to incomplete tasks being presented as finished.
  • Managing Expectations: Misaligned expectations can derail sprint reviews. For instance, if stakeholders expect these meetings to be approval sessions or if developers dive too deeply into unnecessary details, the main points can become obscured, reducing the effectiveness of the meeting.

Strategies for effective sprint reviews

To overcome these challenges, teams should adopt a set of best practices designed to enhance the efficiency and productivity of sprint reviews. The following principles provide a framework for achieving this:

Cultivate open communication

Continuous dialogue is the cornerstone of Agile methodology. For sprint reviews to be effective, a culture of open communication must be established and ingrained in daily interactions. Leaders play a crucial role in fostering an environment where team members feel safe to share concerns and challenges without fear of repercussions. This approach minimizes friction and ensures issues are addressed promptly before they escalate.

Case Study: Atlassian, a leading software company, introduced regular, open discussions about project hurdles. This practice fostered a culture of transparency, allowing the team to address potential issues early and leading to more efficient sprint reviews. As a result, they saw a 30% reduction in unresolved issues by the end of each sprint.

Promote active and inclusive engagement

Sprint reviews should be interactive sessions with two-way communication. Instead of having a single person present, these meetings should involve contributions from all team members. Passing the keyboard around and encouraging real-time discussions can make the review more dynamic and collaborative.

Case Study: HubSpot, a marketing and sales software company, transformed their sprint reviews by making them more interactive. During brainstorming sessions for new campaigns, involving all team members led to more innovative solutions and a greater sense of ownership and engagement across the team. HubSpot reported a 25% increase in team satisfaction scores and a 20% increase in creative solutions presented during sprint reviews.

Set, reinforce, and stick to goals

While setting clear goals is essential, the real challenge lies in revisiting and realigning them throughout the sprint. Regular check-ins with both internal teams and stakeholders help maintain focus and ensure consistency.

Case Study: Epic Systems, a healthcare software company, improved their sprint reviews by regularly revisiting their primary goal of enhancing user experience. By ensuring that all new features and changes aligned with this objective, they were able to maintain focus and deliver a more cohesive product. This led to a 15% increase in user satisfaction ratings and a 10% reduction in feature revisions post-launch.

Ensure clarity in expectations

Effective sprint reviews require clear and mutual understanding. Teams must ensure they are not just explaining but also being understood. Setting the context at the beginning of each meeting, followed by a quick recap of previous interactions, can bridge any gaps.

Case Study: FedEx, a logistics giant, faced challenges with misaligned expectations during sprint reviews. Stakeholders often expected these meetings to be approval sessions, which led to confusion and inefficiency. To address this, FedEx started each sprint review with a clear definition of expectations and a quick recap of previous interactions. This approach ensured that all team members and stakeholders were aligned on objectives and progress, making the discussions more productive. Consequently, FedEx experienced a 20% reduction in project delays and a 15% improvement in stakeholder satisfaction.

Additional strategies for enhancing sprint reviews

Beyond the foundational principles of open communication, engagement, goal alignment, and clear expectations, there are additional strategies that can further enhance the effectiveness of sprint reviews:

Leverage data and metrics

Using data and metrics to track progress can provide objective insights into the team’s performance and highlight areas for improvement. Tools like burn-down charts, velocity charts, and cumulative flow diagrams can be invaluable in providing a clear picture of the team’s progress and identifying potential bottlenecks.

Example: Capital One, a financial services company, used velocity charts to track their sprint progress. By analyzing the data, they were able to identify patterns and trends, which helped them optimize their workflow and improve overall efficiency. They reported a 22% increase in on-time project completion and a 15% decrease in sprint overruns.

Incorporate feedback loops

Continuous improvement is a key tenet of Agile. Incorporating feedback loops within sprint reviews can help teams identify areas for improvement and implement changes more effectively. This can be achieved through regular retrospectives, where the team reflects on what went well, what didn’t, and how they can improve.

Example: Amazon, an e-commerce giant, introduced regular retrospectives at the end of each sprint review. By discussing successes and challenges, they were able to implement changes that significantly improved their workflow and product quality. This practice led to a 30% increase in overall team productivity and a 25% improvement in customer satisfaction ratings.

Facilitate stakeholder involvement

Involving stakeholders in sprint reviews can provide valuable insights and ensure that the team is aligned with business objectives. Stakeholders can offer feedback on the product’s direction, validate the team’s progress, and provide clarity on priorities.

Example: Google started involving stakeholders in their sprint reviews. This practice helped ensure that the team’s work was aligned with business goals and that any potential issues were addressed early. Google reported a 20% improvement in project alignment with business objectives and a 15% decrease in project scope changes.

Real-life case studies

Case study 1: Enhancing communication at Atlassian

Atlassian, a leading software company, faced significant challenges with communication during sprint reviews. Developers were hesitant to share early feedback, which led to delayed problem-solving and escalated issues. The team decided to implement daily check-in meetings where all members could discuss ongoing challenges openly. This practice fostered a culture of transparency and ensured that potential issues were addressed promptly. As a result, the team’s sprint reviews became more efficient, and their overall productivity improved. Atlassian saw a 30% reduction in unresolved issues by the end of each sprint and a 25% increase in overall team morale.

Case Study 2: Boosting engagement at HubSpot

HubSpot, a marketing and sales software company, struggled with engagement during their sprint reviews. Meetings were often dominated by a single presenter, with little input from other team members. To address this, HubSpot introduced interactive brainstorming sessions during sprint reviews, where all team members were encouraged to contribute ideas. This change led to more innovative solutions and a greater sense of ownership and engagement among the team. HubSpot reported a 25% increase in team satisfaction scores, a 20% increase in creative solutions presented during sprint reviews, and a 15% decrease in time to market for new features.

Case Study 3: Aligning goals at Epic Systems

Epic Systems, a healthcare software company, had difficulty maintaining focus on their primary goal of enhancing user experience. Developers frequently pursued interesting but unrelated tasks. The company decided to implement regular check-ins to revisit and realign their goals. This practice ensured that all new features and changes were in line with the overarching objective, leading to a more cohesive product and improved user satisfaction. As a result, Epic Systems experienced a 15% increase in user satisfaction ratings, a 10% reduction in feature revisions post-launch, and a 20% improvement in overall product quality.

Case Study 4: Clarifying expectations at FedEx

FedEx, a logistics giant, faced challenges with misaligned expectations during sprint reviews. Stakeholders often expected these meetings to be approval sessions, which led to confusion and inefficiency. To address this, FedEx started each sprint review with a clear definition of expectations and a quick recap of previous interactions. This approach ensured that all team members and stakeholders were aligned on objectives and progress, making the discussions more productive. Consequently, FedEx experienced a 20% reduction in project delays, a 15% improvement in stakeholder satisfaction, and a 10% increase in overall team efficiency.

Incorporating data and statistics

Data and metrics can provide valuable insights into the effectiveness of sprint reviews. For example, according to a report by VersionOne, 64% of Agile teams use burn-down charts to track their progress. These charts can highlight trends and potential bottlenecks, helping teams optimize their workflow.

Additionally, a study by the Project Management Institute (PMI) found that organizations that use Agile practices are 28% more successful in their projects compared to those that do not. This statistic underscores the importance of implementing effective Agile practices, including efficient sprint reviews.

Conclusion

Sprint reviews are a critical component of the Agile framework, designed to ensure that teams stay aligned on goals and progress. By addressing common challenges such as communication barriers, lack of engagement, misaligned goals, and unclear expectations, teams can significantly improve the effectiveness of their sprint reviews.

Implementing strategies such as fostering open communication, promoting active engagement, setting and reinforcing goals, ensuring clarity in expectations, leveraging data and metrics, incorporating feedback loops, and facilitating stakeholder involvement can transform sprint reviews into highly productive sessions.

By learning from real-life case studies and incorporating data-driven insights, teams can continuously improve their sprint review process, leading to better project outcomes and greater overall success.

Moving beyond JIRA Sprint Reports in 2024

Sprint reports are a crucial part of the software development process. They help in gaining visibility into the team’s progress, how much work is completed, and the remaining tasks.

While there are many tools available for sprint reporting, the JIRA sprint report stands out as one of the most reliable. Thousands of development teams use it on a day-to-day basis. However, as the industry shifts towards continuous improvement, JIRA’s limitations may impact outcomes.

So, what can be the right alternative for sprint reports? And what factors should be weighed when choosing a sprint reporting tool?

Importance of Analyzing Sprint Reports

Sprints are the core of the agile and scrum frameworks. They represent defined periods for completing and reviewing specific work.

Sprints allow developers to focus on pushing out small, incremental changes rather than large, sweeping changes. Note that they aren’t meant to address every technical issue or wishlist improvement; rather, they let team members outline the most important issues and decide how to address them during the sprint.

Analyzing progress through sprint reports is crucial for several reasons:

Transparency

Analyzing sprint reports ensures transparency among team members. The entire scrum or agile team gets a clear, shared view of the work being done and the tasks still pending. There is no duplication of work, since everything is visible to everyone.

Higher Quality Work

Sprint reports give software development teams a clear understanding of their work and its requirements. This allows them to focus on prioritized tasks first, fix bottlenecks in the early stages, and develop the right solutions to problems. For engineering leaders, these reports provide valuable insights into team performance and progress.

Higher Productivity

Sprint reports eliminate unnecessary work and overcommitment for team members. This allows them to allocate time more efficiently to core tasks and to discuss potential issues, risks, and dependencies, which encourages continuous improvement and increases developers’ productivity and efficiency.

Optimize Workflow

Sprint reports give team members a visual representation of how work flows through the system. They help identify slowdowns or blockers so the team can take corrective action, adjust processes and workflows, and prioritize tasks based on importance and dependencies to maximize efficiency.

JIRA sprint reports deliver all of the benefits stated above. Here’s more on JIRA sprint reports:

JIRA Sprint Reports

Among the many sprint reporting tools, the JIRA Sprint Report stands out as an out-of-the-box solution used by many software development organizations. It is a great way to analyze team progress, keep everyone on track, and complete projects on time.

You can easily create simple reports from the range of reports that can be generated from the scrum board:

Projects > Reports > Sprint report

There are many types of JIRA reports available for sprint analysis for agile teams. Some of them are:

  • Sprint burndown charts: Burndown chart measures daily completed work, monitors the total work to be done, and sets intended deadlines.
  • Burnup charts: It displays a sprint’s completed work in relation to its total scope.
  • Velocity chart: Velocity chart shows a Scrum team’s average work completed per sprint.
  • Cumulative flow diagram: It visually represents a Kanban team’s project progress over time.
  • Control chart: It maps the Cycle Time or Lead Time of each issue over a specified period.

JIRA sprint reports are built into JIRA software and are convenient and easy to use. They help developers understand sprint goals, organize and coordinate their work, and retrospect on their performance.

However, a few major problems make it difficult for teams to rely solely on these reports.

What’s Missing in JIRA Sprint Reports?

Measures through Story Points

JIRA sprint reports measure progress predominantly via story points. For teams that don’t estimate with story points, these reports offer little value. They also sideline other potentially useful metrics, making it hard to understand team velocity and see the complete picture.

Can be Misinterpreted in Different Ways

Another limitation is that the team has to read between the lines, since the reports present only raw data. Raw data alone doesn’t give accurate insight into what is truly happening in the organization; each individual can draw slightly different conclusions, so the same report can be misunderstood and misinterpreted in different ways.

Limited Capabilities

JIRA add-ons need installation and have a steep learning curve, which may require training or technical expertise. They are also confined to the JIRA system, making reports hard to share with external stakeholders or clients.

So, what can be done instead? Either supplement the JIRA sprint report with another tool, or adopt an alternative that addresses all of its limitations. The latter is usually the better choice, since a sprint dashboard that shows all the data and reports in a single place saves time and effort.

How does Typo Leverage the Sprint Analysis Reports?

Typo’s sprint analysis is a valuable tool for any team that is using an agile development methodology. It allows you to track and analyze your team’s progress throughout a sprint. It helps you gain visual insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint. This information can help you to identify any potential problems early on and take corrective action.

Our sprint analysis feature uses data from Git and issue management tools to provide insights into how your team is working. You can see how long tasks are taking, how often they’re being blocked, and where bottlenecks are occurring. This information can help you identify areas for improvement and make sure your team is on track to meet their goals.

It is easy to use and can be integrated with existing Git and Jira/Linear/Clickup workflows.

Key Components of Sprint Analysis Tool

Work Progress

Work progress represents the percentage breakdown of issue tickets or story points in the selected sprint according to their current workflow status.

How is it Calculated?

Typo considers all the issues in the sprint and categorizes them based on their current status category, using JIRA status category mapping. It shows three major categories by default:

  • Open
  • In Progress
  • Done

These can be configured as per your custom processes. In the case of a closed sprint, Typo only shows the breakup of work on a ‘Completed’ & ‘Not Completed’ basis.

Work Breakup

Work breakup represents the percentage breakdown of issue tickets in the current sprint according to their issue type or labels. This helps in understanding the kind of work being picked up in the current sprint and in planning accordingly.

How is it Calculated?

Typo considers all the issue tickets in the selected sprint and sums them up based on their issue type.
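Both the Work Progress and Work Breakup rollups above reduce to the same operation: group the sprint’s issues by a field and compute each group’s percentage share. Here is a minimal Python sketch of that idea, assuming the issues have already been exported from the tracker as dicts; the field names (`status_category`, `issue_type`) are illustrative, not Typo’s actual schema.

```python
from collections import Counter

def percentage_breakdown(issues, key):
    """Group sprint issues by a field (e.g. 'status_category' or
    'issue_type') and return each group's percentage share."""
    counts = Counter(issue[key] for issue in issues)
    total = sum(counts.values())
    return {group: round(100 * n / total, 1) for group, n in counts.items()}

# Hypothetical export of a sprint's issues.
sprint_issues = [
    {"id": "TYP-1", "status_category": "Done", "issue_type": "Story"},
    {"id": "TYP-2", "status_category": "In Progress", "issue_type": "Bug"},
    {"id": "TYP-3", "status_category": "Open", "issue_type": "Story"},
    {"id": "TYP-4", "status_category": "Done", "issue_type": "Task"},
]

print(percentage_breakdown(sprint_issues, "status_category"))
# {'Done': 50.0, 'In Progress': 25.0, 'Open': 25.0}  -> Work Progress
print(percentage_breakdown(sprint_issues, "issue_type"))
# {'Story': 50.0, 'Bug': 25.0, 'Task': 25.0}         -> Work Breakup
```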


Team Velocity

Team Velocity represents the average number of completed issue tickets or story points across each sprint.

How is it Calculated?

Typo calculates Team Velocity for each sprint in two ways:

  • For Issue Tickets: Typo calculates the sum of all the issue tickets completed in the sprint
  • For Story Points: Typo calculates the sum of story Points for all the issue tickets completed in the sprint

To calculate the average velocity, the total number of completed issue tickets or story points is divided by the total number of allocated issue tickets or story points for each sprint.
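As a rough illustration of the two counting modes described above, here is a small Python sketch that computes a sprint’s completed total and its completed-to-allocated ratio. The `status` and `story_points` fields are hypothetical stand-ins for whatever your tracker exports.

```python
def sprint_velocity(issues, use_story_points=False):
    """Return (completed total, completed/allocated ratio) for one sprint,
    counting either tickets or story points."""
    def weight(issue):
        return issue.get("story_points", 0) if use_story_points else 1

    completed = sum(weight(i) for i in issues if i["status"] == "Done")
    allocated = sum(weight(i) for i in issues)
    ratio = completed / allocated if allocated else 0.0
    return completed, ratio

issues = [
    {"status": "Done", "story_points": 5},
    {"status": "Done", "story_points": 3},
    {"status": "Open", "story_points": 8},
]
done, ratio = sprint_velocity(issues, use_story_points=True)
print(done, f"{ratio:.0%}")  # 8 50%
```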


Developer Workload

Developer Workload represents the count of issue tickets or story points completed by each developer against the total issue tickets/story points assigned to them in the current sprint.

Once the sprint is marked as ‘Closed’, it starts reflecting the count of Issue tickets/Story points that were not completed and were moved to later sprints as ‘Carry Over’.

How is it Calculated?

Typo calculates Developer Workload by taking all the issue tickets/story points assigned to each developer in the selected sprint and identifying those marked as ‘Done’/’Completed’. Typo categorizes these issues based on their current workflow status, which can be configured as per your custom processes.

By default, the assignee of a ticket is determined in either of two ways:

  • The developer assigned to the ticket at the time it was moved to ‘In Progress’
  • Any custom field that represents the developer of that ticket

This logic is also configurable as per your custom processes.
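A minimal sketch of the same bookkeeping: tally assigned, completed, and carried-over tickets per developer. The `assignee` and `status` fields are illustrative, and real carry-over detection would also check which sprint the ticket moved to.

```python
from collections import defaultdict

def developer_workload(issues):
    """Per-developer assigned vs. completed counts; in a closed sprint,
    anything not completed is treated here as carry-over."""
    workload = defaultdict(lambda: {"assigned": 0, "completed": 0, "carry_over": 0})
    for issue in issues:
        row = workload[issue["assignee"]]
        row["assigned"] += 1
        if issue["status"] in ("Done", "Completed"):
            row["completed"] += 1
        else:
            row["carry_over"] += 1
    return dict(workload)

issues = [
    {"assignee": "asha", "status": "Done"},
    {"assignee": "asha", "status": "Open"},
    {"assignee": "ben", "status": "Completed"},
]
print(developer_workload(issues))
# {'asha': {'assigned': 2, 'completed': 1, 'carry_over': 1},
#  'ben': {'assigned': 1, 'completed': 1, 'carry_over': 0}}
```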


Issue Cycle Time

Issue cycle time represents the average time it takes for an issue ticket to transition from the ‘In Progress’ state to the ‘Completion’ state.

How is it Calculated?

For all the ‘Done’/’Completed’ tickets in a sprint, Typo measures the time spent by each ticket to transition from ‘In Progress’ state to ‘Completion’ state.

By default, Typo assumes a 24-hour day and a 7-day work week. This can be configured as per your custom processes.
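The calculation above is just an average of per-ticket durations. The sketch below computes it in Python on wall-clock time, which matches the default 24-hour-day, 7-day-week assumption; the timestamp fields are hypothetical.

```python
from datetime import datetime

def average_cycle_time(tickets):
    """Mean hours from the 'In Progress' transition to completion.
    Assumes each ticket records both timestamps (hypothetical fields)."""
    durations = [
        (t["completed_at"] - t["in_progress_at"]).total_seconds() / 3600
        for t in tickets
        if t.get("completed_at") and t.get("in_progress_at")
    ]
    return sum(durations) / len(durations) if durations else 0.0

tickets = [
    {"in_progress_at": datetime(2024, 3, 11, 9), "completed_at": datetime(2024, 3, 13, 9)},
    {"in_progress_at": datetime(2024, 3, 12, 9), "completed_at": datetime(2024, 3, 12, 21)},
]
print(f"{average_cycle_time(tickets):.1f} hours")  # 30.0 hours
```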

Scope Creep

Scope creep is one of the most common project management risks. It refers to new requirements added to a project beyond what was originally planned.

Typo’s sprint analysis tool monitors it to quantify its impact on the team’s workload and deliverables.
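One simple way to quantify scope creep, sketched below, is to compare each issue’s added-to-sprint timestamp against the sprint start and report the late additions’ share of total scope. The `added_to_sprint_at` field is an assumption about how your tracker exposes this.

```python
from datetime import datetime

def scope_creep(issues, sprint_start):
    """Return issues added after the sprint started, plus their share
    of the total sprint scope (field names are illustrative)."""
    added_late = [i for i in issues if i["added_to_sprint_at"] > sprint_start]
    share = 100 * len(added_late) / len(issues) if issues else 0.0
    return added_late, share

issues = [
    {"id": "TYP-1", "added_to_sprint_at": datetime(2024, 3, 10)},
    {"id": "TYP-2", "added_to_sprint_at": datetime(2024, 3, 14)},
]
late, share = scope_creep(issues, sprint_start=datetime(2024, 3, 11))
print([i["id"] for i in late], f"{share:.0f}%")  # ['TYP-2'] 50%
```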


Conclusion

A sprint analysis tool is important for sprint planning and for optimizing team performance and project outcomes in agile environments. By offering comprehensive insights into progress and task management, it empowers teams to focus on sprint goals, make informed decisions, and drive continuous improvement.

To learn more about this tool, visit our website!

DevEx

Mastering Developer Productivity with the SPACE Framework

In the crazy world of software development, getting developers to be productive is like finding the Holy Grail for tech companies. When developers hit their stride, turning out valuable work at breakneck speed, it’s a win for everyone. But let’s be honest—traditional productivity metrics, like counting lines of code or tracking hours spent fixing bugs, are about as helpful as a screen door on a submarine.

Say hello to the SPACE framework: your new go-to for cracking the code on developer productivity. This approach doesn’t just dip a toe in the water—it dives in headfirst to give you a clear, comprehensive view of how your team is doing. With the SPACE framework, you’ll ensure your developers aren’t just busy—they’re busy being awesome and delivering top-quality work on the dot. So buckle up, because we’re about to take your team’s productivity to the next level!

Introduction to the SPACE Framework

The SPACE framework is a modern approach to measuring developer productivity, introduced in a 2021 paper by experts from GitHub and Microsoft Research. This framework goes beyond traditional metrics to provide a more accurate and holistic view of productivity.

Nicole Forsgren, the lead author, emphasizes that measuring productivity by lines of code or speed can be misleading. The SPACE framework integrates several key metrics to give a complete picture of developer productivity.

Detailed Breakdown of SPACE Metrics

The five SPACE framework dimensions are:

Satisfaction and Well-being

When developers are happy and healthy, they tend to be more productive. If they enjoy their work and maintain a good work-life balance, they're more likely to produce high-quality results. On the other hand, dissatisfaction and burnout can severely hinder productivity. For example, a study by Haystack Analytics found that during the COVID-19 pandemic, 81% of software developers experienced burnout, which significantly impacted their productivity. The SPACE framework encourages regular surveys to gauge developer satisfaction and well-being, helping you address any issues promptly.

Performance

Traditional metrics often measure performance by the number of features added or bugs fixed. However, this approach can be problematic. According to the SPACE framework, performance should be evaluated based on outcomes rather than output. This means assessing whether the code reliably meets its intended purpose, the time taken to complete tasks, customer satisfaction, and code reliability.

Activity

Activity metrics are commonly used to gauge developer productivity because they are easy to quantify. However, they only provide a limited view. Developer Activity is the count of actions or outputs completed over time, such as coding new features or conducting code reviews. While useful, activity metrics alone cannot capture the full scope of productivity.

Nicole Forsgren points out that factors like overtime, inconsistent hours, and support systems also affect activity metrics. Therefore, it's essential to consider routine tasks like meetings, issue resolution, and brainstorming sessions when measuring activity.

Collaboration and Communication

Effective communication and collaboration are crucial for any development team's success. Poor communication can lead to project failures, as highlighted by 86% of employees in a study who cited ineffective communication as a major reason for business failures. The SPACE framework suggests measuring collaboration through metrics like the discoverability of documentation, integration speed, quality of work reviews, and network connections within the team.

Efficiency and Flow

Flow is a state of deep focus where developers can achieve high levels of productivity. Interruptions and distractions can break this flow, making it challenging to return to the task at hand. The SPACE framework recommends tracking metrics such as the frequency and timing of interruptions, the time spent in various workflow stages, and the ease with which developers maintain their flow.

Benefits of the SPACE Framework

The SPACE framework offers several advantages over traditional productivity metrics. By considering multiple dimensions, it provides a more nuanced view of developer productivity. This comprehensive approach helps avoid the pitfalls of single metrics, such as focusing solely on lines of code or closed tickets, which can lead to gaming the system.

Moreover, the SPACE framework allows you to measure both the quantity and quality of work, ensuring that developers deliver high-quality software efficiently. This integrated view helps organizations make informed decisions about team productivity and optimize their workflows for better outcomes.

Implementing the SPACE Framework in Your Organization

Implementing the SPACE productivity framework effectively requires careful planning and execution. Below is a comprehensive plan and roadmap to guide you through the process. This detailed guide will help you tailor the SPACE framework to your organization's unique needs and ensure a smooth transition to this advanced productivity measurement approach.

Step 1: Understanding Your Current State

Objective: Establish a baseline by understanding your current productivity measurement practices and developer workflow.

  1. Conduct a Productivity Audit
    • Review the existing metrics and tools (such as Typo) used for tracking productivity.
    • Identify gaps and limitations in current measurement methods.
    • Gather feedback from developers and managers on existing practices.
  2. Analyze Team Dynamics and Workflow
    • Map out your development process, identifying key stages and tasks.
    • Observe how teams collaborate, communicate, and handle interruptions.
    • Assess the overall satisfaction and well-being of your developers.

Outcome: A comprehensive report detailing your current productivity measurement practices, team dynamics, and workflow processes.

Step 2: Setting Goals and Objectives

Objective: Define clear goals and objectives for implementing the SPACE framework.

  1. Identify Key Business Objectives
    • Align the goals of the SPACE framework with your company's strategic objectives.
    • Focus on improving areas such as time-to-market, code quality, customer satisfaction, and developer well-being.
  2. Set Specific, Measurable, Achievable, Relevant, and Time-bound (SMART) Goals
    • Example Goals
      • Increase developer satisfaction by 20% within six months.
      • Reduce average bug resolution time by 30% over the next quarter.
      • Improve code review quality scores by 15% within the next year.

Outcome: A set of SMART goals that will guide the implementation of the SPACE framework.

Step 3: Selecting and Customizing SPACE Metrics

Objective: Choose the most relevant SPACE metrics and customize them to fit your organization's needs.

  1. Review SPACE Metrics
    • Satisfaction and Well-being
    • Performance
    • Activity
    • Collaboration and Communication
    • Efficiency and Flow
  2. Customize Metrics
    • Tailor each metric to align with your organization's specific context and objectives.
    • Example Customizations
      • Satisfaction and Well-being: Conduct quarterly surveys to measure job satisfaction and work-life balance.
      • Performance: Track the reliability of code and customer feedback on delivered features.
      • Activity: Measure the number of completed tasks, code commits, and other relevant activities.
      • Collaboration and Communication: Monitor the quality of code reviews and the speed of integrating work.
      • Efficiency and Flow: Track the frequency and duration of interruptions and the time spent in flow states.

Outcome: A customized set of SPACE metrics tailored to your organization's needs.

Step 4: Implementing Measurement Tools and Processes

Objective: Implement tools and processes to measure and track the selected SPACE metrics.

  1. Choose Appropriate Tools
    • Use project management tools like Jira or Trello to track activity and performance metrics.
    • Implement collaboration tools such as Slack, Microsoft Teams, or Confluence to facilitate communication and knowledge sharing.
    • Utilize code review tools like CodeIQ by Typo to monitor the quality of code and collaboration.
  2. Set Up Data Collection Processes
    • Establish processes for collecting and analyzing data for each metric.
    • Ensure that data collection is automated wherever possible to reduce manual effort and improve accuracy (a minimal sketch follows at the end of this step).
  3. Train Your Team
    • Provide training sessions for developers and managers on using the new tools and understanding the SPACE metrics.
    • Encourage open communication and address any concerns or questions from the team.

Outcome: A fully implemented set of tools and processes for measuring and tracking SPACE metrics.
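To make the data-collection step concrete, here is a minimal sketch of turning raw pulse-survey responses into a trackable Satisfaction metric. The 1–5 scale and the red/amber/green thresholds are arbitrary assumptions you would tune to your own surveys.

```python
import statistics

def satisfaction_summary(responses):
    """Aggregate 1-5 survey scores into an average and a simple
    red/amber/green flag (thresholds are arbitrary assumptions)."""
    avg = statistics.mean(responses)
    flag = "green" if avg >= 4 else "amber" if avg >= 3 else "red"
    return round(avg, 2), flag

print(satisfaction_summary([4, 5, 3, 4, 2]))  # (3.6, 'amber')
```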

Step 5: Regular Monitoring and Review

Objective: Continuously monitor and review the metrics to ensure ongoing improvement.

  1. Establish Regular Review Cycles
    • Conduct monthly or quarterly reviews of the SPACE metrics to track progress towards goals.
    • Hold team meetings to discuss the results, identify areas for improvement, and celebrate successes.
  2. Analyze Trends and Patterns
    • Look for trends and patterns in the data to gain insights into team performance and productivity.
    • Use these insights to make informed decisions and adjustments to workflows and processes.
  3. Solicit Feedback
    • Regularly gather feedback from developers and managers on the effectiveness of the SPACE framework.
    • Use this feedback to make continuous improvements to the framework and its implementation.

Outcome: A robust monitoring and review process that ensures the ongoing effectiveness of the SPACE framework.

Step 6: Continuous Improvement and Adaptation

Objective: Adapt and improve the SPACE framework based on feedback and evolving needs.

  1. Iterate and Improve
    • Continuously refine and improve the SPACE metrics based on feedback and observed results.
    • Adapt the framework to address new challenges and opportunities as they arise.
  2. Foster a Culture of Continuous Improvement
    • Encourage a culture of continuous improvement within your development teams.
    • Promote openness to change and a willingness to experiment with new ideas and approaches.
  3. Share Success Stories
    • Share success stories and best practices with the broader organization to demonstrate the value of the SPACE framework.
    • Use these stories to inspire other teams and encourage the adoption of the framework across the organization.

Outcome: A dynamic and adaptable SPACE framework that evolves with your organization's needs.

Conclusion

Implementing the SPACE framework is a strategic investment in your organization's productivity and success. By following this comprehensive plan and roadmap, you can effectively integrate the SPACE metrics into your development process, leading to improved performance, satisfaction, and overall productivity. Embrace the journey of continuous improvement and leverage the insights gained from the SPACE framework to unlock the full potential of your development teams.

SPACE Framework: How to Measure Developer Productivity

In today’s fast-paced software development world, understanding and improving developer productivity is more crucial than ever. One framework that has gained prominence for its comprehensive approach to measuring and enhancing productivity is the SPACE Framework. This framework, developed by industry experts and backed by extensive research, offers a multi-dimensional perspective on productivity that transcends traditional metrics.

This blog delves deep into the genesis of the SPACE Framework, its components, and how it can be effectively implemented to boost developer productivity. We’ll also explore real-world success stories of companies that have benefited from adopting this framework.

The genesis of the SPACE Framework

The SPACE Framework was introduced by researchers Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck, and Jenna Butler. Their work was published in a paper titled “The SPACE of Developer Productivity: There’s More to It than You Think!”, which emphasizes that no single metric can measure developer productivity. Instead, productivity should be viewed through multiple lenses to capture a holistic picture.

Components of the SPACE Framework

SPACE is an acronym that stands for:

  1. Satisfaction and Well-being
  2. Performance
  3. Activity
  4. Communication and Collaboration
  5. Efficiency and Flow

Each component represents a critical aspect of developer productivity, ensuring a balanced approach to measurement and improvement.

Detailed breakdown of the SPACE Framework

1. Satisfaction and Well-being

Definition: This dimension focuses on how satisfied and happy developers are with their work and environment. It also considers their overall well-being, which includes factors like work-life balance, stress levels, and job fulfillment.

Why It Matters: Happy developers are more engaged, creative, and productive. Ensuring high satisfaction and well-being can reduce burnout and turnover, leading to a more stable and effective team.

Metrics to Consider:

  • Employee satisfaction surveys
  • Work-life balance scores
  • Burnout indices
  • Turnover rates

2. Performance

Definition: Performance measures the outcomes of developers’ work, including the quality and impact of the software they produce. This includes assessing code quality, deployment frequency, and the ability to meet user needs.

Why It Matters: High performance indicates that the team is delivering valuable software efficiently. It helps in maintaining a competitive edge and ensuring customer satisfaction.

Metrics to Consider:

  • Code quality metrics (e.g., number of bugs, code review scores)
  • Deployment frequency
  • Customer satisfaction ratings
  • Feature adoption rates

3. Activity

Definition: Activity tracks the actions developers take, such as the number of commits, code reviews, and feature development. This component focuses on the volume and types of activities rather than their outcomes.

Why It Matters: Monitoring activity helps understand workload distribution and identify potential bottlenecks or inefficiencies in the development process.

Metrics to Consider:

  • Number of commits per developer
  • Code review participation
  • Task completion rates
  • Meeting attendance

4. Communication and Collaboration

Definition: This dimension assesses how effectively developers interact with each other and with other stakeholders. It includes evaluating the quality of communication channels and collaboration tools used.

Why It Matters: Effective communication and collaboration are crucial for resolving issues quickly, sharing knowledge, and fostering a cohesive team environment. Poor communication can lead to misunderstandings and project delays.

Metrics to Consider:

  • Frequency and quality of team meetings
  • Use of collaboration tools (e.g., Slack, Jira)
  • Cross-functional team interactions
  • Feedback loops

5. Efficiency and Flow

Definition: Efficiency and flow measure how smoothly the development process operates, including how well developers can focus on their tasks without interruptions. It also looks at the efficiency of the processes and tools in place.

Why It Matters: High efficiency and flow indicate that developers can work without unnecessary disruptions, leading to higher productivity and job satisfaction. It also helps in identifying and eliminating waste in the process.

Metrics to Consider:

  • Cycle time (time from task start to completion)
  • Time spent in meetings vs. coding
  • Context switching frequency
  • Tool and process efficiency

Implementing the SPACE Framework in real life

Implementing the SPACE Framework requires a strategic approach, involving the following steps:

Establish baseline metrics

Before making any changes, establish baseline metrics for each SPACE component. Use existing tools and methods to gather initial data.

Actionable Steps:

  • Conduct surveys to measure satisfaction and well-being.
  • Use code quality tools to assess performance.
  • Track activity through version control systems (see the sketch after this list).
  • Analyze communication patterns via collaboration tools.
  • Measure efficiency and flow using project management software.
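For the version-control bullet above, a baseline for the Activity dimension can be pulled straight from `git log`. This sketch counts commits per author in the current repository; treat it as a starting point, since raw commit counts say nothing about the size or quality of each change.

```python
import subprocess
from collections import Counter

def commits_per_author(since="3 months ago"):
    """Count commits per author in the current repo via `git log`,
    a quick way to baseline the Activity dimension."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--format=%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

print(commits_per_author().most_common(5))
```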

Set clear goals

Define what success looks like for each component of the SPACE Framework. Set achievable and measurable goals.

Actionable Steps:

  • Increase employee satisfaction scores by 10% within six months.
  • Reduce bug rates by 20% over the next quarter.
  • Improve code review participation by 15%.
  • Enhance cross-team communication frequency.
  • Shorten cycle time by 25%.

Implement changes

Based on the goals set, implement changes to processes, tools, and practices. This may involve adopting new tools, changing workflows, or providing additional training.

Actionable Steps:

  • Introduce well-being programs to improve satisfaction.
  • Adopt automated testing tools to enhance performance.
  • Encourage regular code reviews to boost activity.
  • Use collaboration tools like Slack or Microsoft Teams to improve communication.
  • Streamline processes to reduce context switching and improve flow.

Monitor and adjust

Regularly monitor the metrics to evaluate the impact of the changes. Be prepared to make adjustments as necessary to stay on track with your goals.

Actionable Steps:

  • Use dashboards to track key metrics in real time.
  • Hold regular review meetings to discuss progress.
  • Gather feedback from developers to identify areas for improvement.
  • Make iterative changes based on data and feedback.

Integrating the SPACE Framework with DORA Metrics

| SPACE Dimension | Definition | DORA Metric Integration | Actionable Steps |
| --- | --- | --- | --- |
| Satisfaction and Well-being | Measures happiness, job fulfillment, and work-life balance | High deployment frequency and low lead time improve satisfaction; high failure rates increase stress | Conduct satisfaction surveys; correlate with DORA metrics; implement well-being programs |
| Performance | Assesses the outcomes of developers’ work | Direct overlap with DORA metrics like deployment frequency and lead time | Use DORA metrics as benchmarks; track and improve key metrics; address failure causes |
| Activity | Tracks volume and types of work (e.g., commits, reviews) | Frequent, high-quality activities improve deployment frequency and lead time | Track activities alongside DORA metrics; promote high-quality work practices; balance workloads |
| Communication and Collaboration | Evaluates effectiveness of interactions and tools | Effective communication and collaboration reduce failure rates and restoration times | Use communication tools (e.g., Slack); conduct retrospectives; encourage cross-functional teams |
| Efficiency and Flow | Measures smoothness and efficiency of processes | Efficient workflows lead to higher deployment frequencies and shorter lead times | Streamline processes; implement CI/CD pipelines; monitor cycle times and context switching |

Real-world success stories

GitHub

GitHub implemented the SPACE Framework to enhance its developer productivity. By focusing on communication and collaboration, they improved their internal processes and tools, leading to a more cohesive and efficient development team. They introduced regular team-building activities and enhanced their internal communication tools, resulting in a 15% increase in developer satisfaction and a 20% reduction in project completion time.

Microsoft

Microsoft adopted the SPACE Framework across several development teams. They focused on improving efficiency and flow by reducing context switching and streamlining their development processes. This involved adopting continuous integration and continuous deployment (CI/CD) practices, which reduced cycle time by 30% and increased deployment frequency by 25%.

Key software engineering metrics mapped to the SPACE Framework

This table outlines key software engineering metrics mapped to the SPACE Framework, along with how they can be measured and implemented to improve developer productivity and overall team effectiveness.

Satisfaction

| Key Metric | Measurement | Tools/Methods | Implementation Steps |
| --- | --- | --- | --- |
| Satisfaction and Well-being | Employee Satisfaction Score | Employee surveys, engagement platforms (e.g., Typo) | Conduct regular surveys; analyze results to identify pain points; implement programs for well-being and work-life balance |
| Work-life Balance | Survey responses, self-reported hours | Employee surveys, time tracking tools (e.g., Toggl) | Encourage flexible hours and remote work; monitor workload distribution |
| Burnout Index | Burnout survey scores | Surveys, tools like Typo, Gallup Q12 | Monitor and address high burnout scores; offer mental health resources |
| Turnover Rate | Percentage of staff leaving | HR systems, exit interviews | Analyze reasons for turnover; improve work conditions based on feedback |

Performance

| Key Metric | Measurement | Tools/Methods | Implementation Steps |
| --- | --- | --- | --- |
| Code Quality | Number of bugs, code review scores | Static analysis tools (e.g., Typo, SonarQube), code review platforms (e.g., GitHub) | Implement code quality tools; conduct regular code reviews |
| Deployment Frequency | Number of deployments per time period | CI/CD pipelines (e.g., Jenkins, GitLab CI/CD) | Adopt CI/CD practices; automate deployment processes |
| Lead Time for Changes | Time from commit to production | CI/CD pipelines, version control systems (e.g., Git) | Streamline the deployment pipeline; optimize testing processes |
| Change Failure Rate | Percentage of failed deployments | Incident tracking tools (e.g., PagerDuty, Jira) | Implement thorough testing and QA; analyze and learn from failures |
| Time to Restore Service | Time to recover from incidents | Incident tracking tools (e.g., PagerDuty, Jira) | Develop robust incident response plans; conduct post-incident reviews |

Activity

| Key Metric | Measurement | Tools/Methods | Implementation Steps |
| --- | --- | --- | --- |
| Number of Commits | Commits per developer | Version control systems (e.g., Git) | Track commits per developer; ensure commits are meaningful |
| Code Review Participation | Reviews per developer | Code review platforms (e.g., GitHub, Typo) | Encourage regular participation in reviews; recognize and reward contributions |
| Task Completion Rates | Completed tasks vs. assigned tasks | Project management tools (e.g., Jira, Trello) | Monitor task completion; address bottlenecks and redistribute workloads |
| Meeting Attendance | Attendance records | Calendar tools, project management tools | Schedule necessary meetings; ensure meetings are productive and focused |

Communication and Collaboration

| Key Metric | Measurement | Tools/Methods | Implementation Steps |
| --- | --- | --- | --- |
| Team Meeting Frequency | Number of team meetings | Calendar tools, project management tools (e.g., Jira) | Schedule regular team meetings; ensure meetings are structured and purposeful |
| Use of Collaboration Tools | Activity in tools (e.g., Slack messages, Jira comments) | Collaboration tools (e.g., Slack, Jira) | Promote use of collaboration tools; provide training on tool usage |
| Cross-functional Interactions | Number of interactions with other teams | Project management tools, communication tools | Encourage cross-functional projects; facilitate regular cross-team meetings |
| Feedback Loops | Number and quality of feedback instances | Feedback tools, retrospectives | Implement regular feedback sessions; act on feedback to improve processes |

Efficiency and Flow

| Key Metric | Measurement | Tools/Methods | Implementation Steps |
| --- | --- | --- | --- |
| Cycle Time | Time from task start to completion | Project management tools (e.g., Jira) | Monitor cycle times; identify and remove bottlenecks |
| Time Spent in Meetings vs. Coding | Hours logged in meetings vs. coding | Time tracking tools, calendar tools | Optimize meeting schedules; minimize unnecessary meetings |
| Context Switching Frequency | Number of task switches per day | Time tracking tools, self-reporting | Reduce unnecessary interruptions; promote focused work periods |
| Tool and Process Efficiency | Time saved using tools/processes | Productivity tools, surveys | Regularly review tool/process efficiency; implement improvements based on feedback |

What engineering leaders can do

Engineering leaders play a crucial role in the successful implementation of the SPACE Framework. Here are some actionable steps they can take:

Promote a culture of continuous improvement

Encourage a mindset of continuous improvement among the team. This involves being open to feedback and constantly seeking ways to enhance productivity and well-being.

Actionable Steps:

  • Regularly solicit feedback from team members.
  • Celebrate small wins and improvements.
  • Provide opportunities for professional development and growth.

Invest in the right tools and processes

Ensure that developers have access to the tools and processes that enable them to work efficiently and effectively.

Actionable Steps:

  • Conduct regular tool audits to ensure they meet current needs.
  • Invest in training programs for new tools and technologies.
  • Streamline processes to eliminate unnecessary steps and reduce bottlenecks.

Foster collaboration and communication

Create an environment where communication and collaboration are prioritized. This can lead to better problem-solving and more innovative solutions.

Actionable Steps:

  • Organize regular team-building activities.
  • Use collaboration tools to facilitate better communication.
  • Encourage cross-functional projects to enhance team interaction.

Prioritize well-being and satisfaction

Recognize the importance of developer well-being and satisfaction. Implement programs and policies that support a healthy work-life balance.

Actionable Steps:

  • Offer flexible working hours and remote work options.
  • Provide access to mental health resources and support.
  • Recognize and reward achievements and contributions.

Conclusion

The SPACE Framework offers a holistic and actionable approach to understanding and improving developer productivity. By focusing on satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow, organizations can create a more productive and fulfilling work environment for their developers.

Implementing this framework requires a strategic approach, clear goal setting, and ongoing monitoring and adjustment. Real-world success stories from companies like GitHub and Microsoft demonstrate the potential benefits of adopting the SPACE Framework.

Engineering leaders have a pivotal role in driving this change. By promoting a culture of continuous improvement, investing in the right tools and processes, fostering collaboration and communication, and prioritizing well-being and satisfaction, they can significantly enhance developer productivity and overall team success.

Top Developer Experience tools (2024)

In the software development industry, user experience has long been an important aspect of the product life cycle, and organizations are now paying similar attention to Developer Experience.

A positive Developer Experience helps in delivering quality products and allows developers to be happy and healthy in the long run.

However, it is difficult for organizations to measure and improve Developer Experience without good tools and platforms.

What is Developer Experience?

Developer Experience is about the experience software developers have while working in an organization: their journey while working with specific frameworks, programming languages, platforms, documentation, general tools, and open-source solutions.

Positive Developer Experience = Happier teams

Developer Experience has a direct relationship with developer productivity. A positive experience results in high dev productivity, leading to high job satisfaction, performance, and morale. Hence, happier developer teams.

This starts with understanding the unique needs of developers and fostering a positive work culture for them.

Why is Developer Experience important?

Smooth onboarding process

Good DX makes the onboarding process as simple and smooth as possible. It includes familiarizing new developers with the tools and culture and giving them the support they need to progress in their careers. It also helps them get to know other developers, which fosters collaboration, open communication, and asking for help whenever required.

Improves product quality

A positive Developer Experience leads to three effective C’s: collaboration, communication, and coordination. Beyond this, adhering to coding standards, best practices, and automated testing promotes code quality and consistency and helps fix issues early. As a result, development teams can create products that meet customer needs and are free from errors and glitches.

Increases development speed

When Developer Experience is handled with care, software developers can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documentation, streamlined workflows, and a well-configured development environment are a few ways to boost development speed. It also minimizes the need to switch between different tools and platforms, which improves focus and team productivity.

Attracts and retains top talents

Developers look for a strong tech culture where they can focus on their core skills and be acknowledged for their contributions. Great DX increases job satisfaction and aligns developers’ values and goals with the organization’s. In return, developers bring their best to the table and want to stay with the organization for the long run.

Enhances collaboration

The right kind of Developer Experience encourages collaboration through effective communication tools. This fosters teamwork and reduces misunderstandings: developers can easily discuss issues, share feedback, and work together on tasks, which streamlines the development process and results in high-quality work.

Best developer experience tools

Time management tools

Clockwise

A powerful time management tool that streamlines and automates the calendar and protects developers’ flow time. It helps to strike a balance between meetings and coding time with a focus time feature.

Key features
  • Seamlessly integrates with third-party applications such as Slack, Google Calendar, and Asana.
  • Determines the most suitable meeting times for both developers and engineering leaders.
  • Creates custom smart holds, i.e., blocks of protected time on the calendar.
  • Reschedules the meetings that are marked as ‘Flexible’.
  • Provides a quick summary of how much time went to meetings versus focus time last week.

Toggl Track

A straightforward time-tracking, reporting, and billing tool for software developers. It lets development teams view tracked team entries in a grid or calendar format.

Key features
  • ‘Dashboard and Reporting’ feature offers in-depth analysis and lets engineering leaders create customized dashboards.
  • Simple and easy-to-use interface.
  • Preferable for those who would rather log their time manually than track it in real time.
  • Offers a PDF invoice template that can be downloaded easily.
  • Includes optional Pomodoro setting that allows developers to take regular quick breaks.

Software development intelligence

Typo

Typo is an intelligent engineering management platform for gaining visibility, removing blockers, and maximizing developer effectiveness. It gives a comparative view of each team’s performance across velocity, quality, and throughput, and integrates with your tech stack (Git, Slack, calendars, and CI/CD tools, to name a few) to deliver real-time insights.

Key features
  • Seamlessly integrates with third-party applications such as Git, Slack, calendars, and CI/CD tools.
  • ‘Sprint analysis’ feature allows for tracking and analyzing the team’s progress throughout a sprint.
  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Offers engineering benchmark to compare the team’s results across industries.
  • User-friendly interface.

Code intelligence tools

Sourcegraph (Cody)

An AI-powered code assistant that provides code-specific information and helps locate precise code based on natural-language descriptions, file names, or function names.

Key features
  • Explains complex lines of code in simple language.
  • Identifies bugs and errors in a codebase and provides suggestions.
  • Offers documentation generation.
  • Answers questions about existing code.
  • Generates code snippets, and fixes and improves existing code.

GitHub Copilot

Developed by GitHub in collaboration with OpenAI, it uses the OpenAI Codex model to help write code quickly. It draws context from the code and suggests whole lines or complete functions that developers can accept, modify, or reject.

Key features
  • Creates predictive lines of code from comments and existing patterns in the code.
  • Generates code in multiple languages including TypeScript, JavaScript, Ruby, C++, and Python.
  • Seamlessly integrates with popular editors such as Neovim, JetBrains IDEs, and Visual Studio.
  • Creates dictionaries of lookup data.
  • Writes test cases and code comments.

Communication and collaboration

Slack

A widely used communication platform that enables real-time messaging and file sharing among developers. Team members can also create external links for people outside the team.

Key features
  • Seamlessly integrates with third-party applications such as Google Calendar, Hubspot, Clickup, and Salesforce.
  • ‘Huddle’ feature includes audio and video conferencing options.
  • Accessible on both mobile and desktop (Application and browser).
  • Offers ‘Channels’, i.e., group-like spaces where team members can organize projects, teams, and topics.
  • Perfect for asynchronous communication and collaboration.

Project and task management

JIRA

Part of the Atlassian family, JIRA is an umbrella platform that includes Jira Software, Jira Core, and Jira Work Management. It relies on the agile way of working and is purpose-built for developers and engineers.

Key features
  • Built for agile and scrum workflows.
  • Offers Kanban view.
  • JIRA dashboard helps users to plan projects, measure progress, and track due dates.
  • Offers third-party integrations with other parts of Atlassian groups and third-party apps like Github, Gitlab, and Jenkins.
  • Offers customizable workflow states and transitions for every issue type.

Linear

A project management and issue-tracking tool that is tailored for software development teams. It helps the team plan their projects and auto-close and auto-archive issues.

Key features
  • Simple and straightforward UI.
  • Easy to set up.
  • Breaks larger tasks into smaller issues.
  • Switches between list and board layout to view work from any angle.
  • Quickly apply filters and operators to refine issue lists and create custom views.

Automated software testing

LambdaTest

A cloud-based cross-browser testing platform that provides real-time testing on multiple devices and simulators. It is used to create and run both manual and automated tests and works via the Selenium Automation Grid.

Key features
  • Seamlessly integrates with other testing frameworks and CI/CD tools.
  • Offers detailed automated logs such as exception logs, command logs, and metadata.
  • Runs parallel tests in multiple browsers and environments.
  • Offers command screenshots and video recordings of the script execution.
  • Facilitates responsive testing to ensure the application works well on various devices and screen sizes.

Postman

A widely used automated testing tool for APIs. It provides a streamlined process for standardizing API testing and monitors APIs for usage and trend insights.

Key features
  • Seamlessly integrates with CI/CD pipelines.
  • Enables users to mimic real-world scenarios and assess API behavior under various conditions.
  • Creates mock servers and facilitates realistic simulations and comprehensive testing.
  • Provides monitoring features to gain insights into API performance and usage trends.
  • Friendly and easy-to-use interface equipped with code snippets.

Continuous integration/continuous deployment

CircleCI

FedRAMP certified and SOC 2 Type II compliant, CircleCI helps teams achieve CI/CD in open-source and large-scale projects. It streamlines the DevOps process and automates builds across multiple environments.

Key features
  • Seamlessly integrates with third-party applications such as Bitbucket, GitHub, and GitHub Enterprise.
  • Tracks the status of projects and keeps tabs on build processes.
  • ‘Parallel testing’ feature helps run tests in parallel across different executors.
  • Allows a single process per project.
  • Provides ways to troubleshoot problems and inspect things such as directory paths, log files, and running processes.

Documentation

Swimm

Designed specifically for software development teams, Swimm is an innovative cloud-based documentation tool that integrates continuous documentation into the development workflow.

Key features
  • Seamlessly integrates with development tools such as GitHub, VS Code, and JetBrains IDEs.
  • ‘Auto-sync’ feature ensures the document stays up to date with changes in the codebase.
  • Creates new documents, rewrites existing ones, or summarizes information.
  • Creates tutorials and visualizations within the codebase for better understanding and onboarding new members.
  • Analyzes the entire codebase, documentation sources, and data from enterprise tools.

Developer engagement

DevEx by Typo

A valuable tool for development teams that captures a 360-degree view of the developer experience. Through signals from work patterns and continuous AI-driven pulse check-ins, it surfaces early indicators of developer well-being and actionable insights on the areas that need attention.

Key features
  • Research-backed framework that captures parameters and uncovers real issues.
  • In-depth insights are published on the dashboard.
  • Combines data-driven insights with proactive monitoring and strategic intervention.
  • Identifies the key priority areas affecting developer productivity and well-being.
  • Sends automated alerts to identify burnout signs in developers at an early stage.

GetDX

A comprehensive insights platform founded by the researchers behind the DORA and SPACE frameworks. It offers both qualitative and quantitative measures to give a holistic view of the organization.

Key features
  • Provides a suite of tools that capture data from surveys and systems in real-time.
  • Breaks down results based on personas.
  • Streamlines developer onboarding with real-time insights.
  • Contextualizes performance with 180,000+ industry benchmark samples.
  • Uses advanced statistical analysis to identify the top opportunities.

Conclusion

Overall, Developer Experience is crucial today. It facilitates effective collaboration within engineering teams, offers real-time feedback on workflow efficiency and early signs of burnout, and enables informed decision-making. By pinpointing areas for improvement, it cultivates a more productive and enjoyable work environment for developers.

There are various tools available in the market, and we’ve curated some of the best Developer Experience tools for you. Check out other tools as well, do your own research, and see what fits you best.

All the best!

Measuring Developer Productivity: A Comprehensive Guide

The software development industry constantly evolves, and measuring developer productivity has become crucial to success. It is the key to achieving efficiency, quality, and innovation. However, measuring productivity is not a one-size-fits-all process. It requires a deep understanding of productivity in a development context and selecting the right metrics to reflect it accurately.

This guide will help you and your teams navigate the complexities of measuring dev productivity. It offers insights into the process’s nuances and equips teams with the knowledge and tools to optimize performance. By following the tips and best practices outlined in this guide, teams can improve their productivity and deliver better software.

What is Developer Productivity?

Developer productivity extends far beyond the mere output of code. It encompasses a multifaceted spectrum of skills, behaviors, and conditions that contribute to the successful creation of software: technical proficiency, effective collaboration, clear communication, suitable tools, and a conducive work environment are all integral components. Recognizing and understanding these factors is fundamental to devising meaningful metrics and fostering a culture of continuous improvement.

Benefits of developer productivity

  • Increased productivity allows developers to complete tasks more efficiently. It leads to shorter development cycles and quicker delivery of products or features to the market.
  • Productive developers can focus more on code quality, testing, and optimization, resulting in higher-quality software with fewer bugs and issues.
  • Developers can accomplish more in less time, reducing development costs and improving the organization’s overall return on investment.
  • Productive developers often experience less stress and frustration due to reduced workloads and smoother development processes that lead to higher job satisfaction and retention rates.
  • With more time and energy available, developers can dedicate resources to innovation, continuous learning, experimenting with new technologies, and implementing creative solutions to complex problems.

Metrics for Measuring Developer Productivity

Measuring software developers’ productivity cannot rely on arbitrary criteria. This is why several metrics are in place to consider when measuring it. We can divide them into quantitative and qualitative metrics. Here is what they mean:

Quantitative Metrics

Lines of Code (LOC) Written

While counting lines of code isn’t a perfect measure of productivity, it can provide valuable insights into coding activity. A higher number of lines might suggest more work done, but it doesn’t necessarily equate to higher quality or efficiency. However, tracking LOC changes over time can help identify trends and patterns in development velocity. For instance, a sudden spike in LOC might indicate a burst of productivity or potentially code bloat, while a decline could signal optimization efforts or refactoring.
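Tracking LOC trends doesn’t require special tooling; `git log --numstat` already reports lines added and removed per commit. The sketch below totals both across a window, which is enough to spot the spikes and declines described above.

```python
import subprocess

def loc_delta(since="1 month ago"):
    """Total lines added/removed across the repo since a date, from
    `git log --numstat`; binary files report '-' and are skipped."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    added = removed = 0
    for line in out.splitlines():
        parts = line.split("\t")  # numstat rows are: added<TAB>removed<TAB>path
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added += int(parts[0])
            removed += int(parts[1])
    return added, removed

print(loc_delta())  # e.g. (12840, 9311)
```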

Time to Resolve Issues/Bugs

The swift resolution of issues and bugs is indicative of a team’s efficiency in problem-solving and code maintenance. Monitoring the time it takes to identify, address, and resolve issues provides valuable feedback on the team’s responsiveness and effectiveness. A shorter time to resolution suggests agility and proactive debugging practices, while prolonged resolution times may highlight bottlenecks in the development process or technical debt that needs addressing.

Number of Commits or Pull Requests

Active participation in version control systems, as evidenced by the number of commits or pull requests, reflects the level of engagement and contribution to the codebase. A higher number of commits or pull requests may signify active development and collaboration within the team. However, it’s essential to consider the quality, not just quantity, of commits and pull requests. A high volume of low-quality changes may indicate inefficiency or a lack of focus.

Code Churn

Code churn refers to the rate of change in a codebase over time. Monitoring code churn helps identify areas of instability or frequent modification, which may require closer attention or refactoring. High code churn could indicate areas of the code that are particularly complex or prone to bugs, while low churn might suggest stability but could also indicate stagnation if accompanied by a lack of feature development or innovation. Furthermore, focusing on code changes allows teams to track progress and ensure that updates align with project goals, while emphasizing quality code ensures those changes maintain or improve overall codebase integrity and performance.
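A rough churn signal can be computed from the same `git log --numstat` output by ranking files on total lines added plus deleted over a window. Files at the top of the list are candidates for closer review or refactoring; this is a heuristic sketch, not a substitute for a proper churn tool.

```python
import subprocess
from collections import Counter

def churn_by_file(since="1 month ago", top=5):
    """Rank files by total lines added + deleted over a window,
    a rough signal for unstable or frequently revisited code."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = Counter()
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            churn[parts[2]] += int(parts[0]) + int(parts[1])
    return churn.most_common(top)

print(churn_by_file())
```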

Qualitative Metrics

Code Review Feedback

Effective code reviews are crucial for maintaining code quality and fostering a collaborative development environment in an engineering organization. Monitoring code review feedback, such as the frequency of comments, the depth of review, and the incorporation of feedback into subsequent iterations, provides insight into the team’s commitment to quality and continuous improvement. A culture of constructive feedback and iteration during code reviews indicates a quality-driven approach to development.

Team Satisfaction and Morale

High morale and job satisfaction among engineering teams are key indicators of a healthy and productive work environment. Happy and engaged teams tend to be more motivated, creative, and productive. Regularly measuring team satisfaction through surveys, feedback sessions, or one-on-one discussions helps identify areas for improvement and reinforces a positive culture that fosters teamwork, productivity, and collaboration.

Rate of Feature Delivery

Timely delivery of features is essential for meeting project deadlines and delivering value to stakeholders. Monitoring the rate of feature delivery, including the speed and predictability of feature releases, provides insights into the team’s ability to execute and deliver results efficiently. Consistently meeting or exceeding feature delivery targets indicates a well-functioning development process and effective project management practices.

Customer Satisfaction and Feedback

Ultimately, the success of development efforts is measured by the satisfaction of end-users. Monitoring customer satisfaction through feedback channels, such as surveys, reviews, and support tickets, provides valuable insights into the effectiveness of the software in delivering meaningful solutions. Positive feedback and high satisfaction scores indicate that the development team has successfully met user needs and delivered a product that adds value. Conversely, negative feedback or low satisfaction scores highlight areas for improvement and inform future development priorities.

Best Practices for Measuring Developer Productivity

While analyzing the metrics and measuring software developer productivity, here are some things you need to remember:

  • Balance Quantitative and Qualitative Metrics: Combining both types of metrics provides a holistic view of productivity.
  • Customize Metrics to Fit Team Dynamics: Tailor metrics to align with the development team’s unique objectives and working styles.
  • Ensure Transparency and Clarity: Communicate clearly about the purpose and interpretation of metrics to foster trust and accountability.
  • Iterate and Adapt Measurement Strategies: Continuously evaluate and refine measurement approaches based on feedback and evolving project requirements.

How does Generative AI Improve Developer Productivity?

Below are a few ways in which Generative AI can have a positive impact on developer productivity:

Focus on meaningful tasks: Generative AI tools take up tedious and repetitive tasks, allowing developers to give their time and energy to meaningful activities, resulting in productivity gains within the team members’ workflow.

Assist in their learning graph: Generative AI lets software engineers pick up practical insights and examples from these tools, improving both individual learning and team performance.

Assist in pair programming: Through Generative AI, developers can collaborate with other developers easily.

Increase the pace of software development: Generative AI helps in the continuous delivery of products and services and drives business strategy.

How does Typo Measure Developer Productivity?

There are many developer productivity tools available in the market for tech companies. One of the tools is Typo – the most comprehensive solution on the market.

Typo surfaces early indicators of developer well-being and actionable insights on the areas that need attention, through signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers innovative features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams, and it measures the overall team’s productivity while keeping individuals’ strengths and weaknesses in mind.

Here are three ways in which Typo measures team productivity:

Software Development Visibility

Typo provides complete visibility into software delivery. It helps development teams and engineering leaders identify blockers in real time, predict delays, and maximize business impact. It also lets teams dive deep into key DORA metrics and understand how well they are performing against industry-wide benchmarks, get real-time predictive analysis of how the team is performing, identify the best dev practices, and see a comprehensive view across velocity, quality, and throughput.

This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks, ensuring that resources are utilized efficiently for enhanced productivity and better business outcomes.

Code Quality Automation

Typo helps developers streamline the development process and enhance their productivity by identifying issues in the code and auto-fixing them before they are merged to master. This means less time reviewing and more time for important tasks, keeping the code error-free and making the whole process faster and smoother. The platform also uses optimized practices and built-in methods spanning multiple languages. Besides this, it standardizes the code and enforces coding standards, which reduces the risk of a security breach and boosts maintainability.

Since the platform automates repetitive tasks, it allows development teams to focus on high-quality work. Moreover, it accelerates the review process and facilitates faster iterations by providing timely feedback. It also offers insights into code quality trends and areas for improvement, fostering an engineering culture that supports learning and development.
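
The exact mechanics of Typo’s auto-fix pipeline aren’t detailed here, but the general pattern of a pre-merge quality gate is easy to sketch. Below is a minimal, hypothetical Python example that applies automatic lint fixes and blocks the merge if issues remain; it uses the open-source Ruff linter purely as a stand-in:

```python
# Minimal sketch of a pre-merge quality gate (illustrative only).
# Assumes the Ruff linter is installed; any linter with an auto-fix
# mode would fit the same pattern.
import subprocess
import sys

def pre_merge_check(paths: list[str]) -> bool:
    """Auto-fix what we can, then fail the merge if issues remain."""
    # First pass: apply safe automatic fixes in place.
    subprocess.run(["ruff", "check", "--fix", *paths])
    # Second pass: report anything that still needs human attention.
    result = subprocess.run(["ruff", "check", *paths])
    return result.returncode == 0

if __name__ == "__main__":
    ok = pre_merge_check(sys.argv[1:] or ["."])
    sys.exit(0 if ok else 1)  # non-zero exit blocks the merge in CI
```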

Developer Experience

Typo surfaces early indicators of developers’ well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins on the experience of the developers. These check-ins are pulse surveys built on a developer experience framework.

Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.

Hence, by addressing these aspects, Typo’s holistic approach combines data-driven insights with proactive monitoring and strategic intervention to create a supportive and high-performing work environment. This leads to increased developer productivity and satisfaction.

Track Developer Productivity Effectively

Measuring developers’ productivity is not straightforward, as it varies from person to person. It is a dynamic process that requires careful consideration and adaptability.

To achieve greater success in software development, the development teams must embrace the complexity of productivity, select appropriate metrics, use relevant tools, and develop a supportive work culture.

There are many developer productivity tools available in the market, and Typo stands out among them. It’s important to remember that the journey toward productivity is an ongoing process, and each iteration presents new opportunities for growth and innovation.

How to Measure and Improve Engineering Productivity?

As technology rapidly advances, software engineering is becoming an increasingly fast-paced field where maximizing productivity is critical for staying competitive and driving innovation. Efficient resource allocation, streamlined processes, and effective teamwork are all essential components of engineering productivity. In this guide, we will delve into the significance of measuring and improving engineering productivity, explore key metrics, provide strategies for enhancement, and examine the consequences of neglecting productivity tracking.

What is Engineering Productivity?

Engineering productivity refers to the efficiency and effectiveness of engineering teams in producing work output within a specified timeframe while maintaining high-quality standards. It encompasses various factors such as resource utilization, task completion speed, deliverable quality, and overall team performance. Essentially, engineering productivity measures how well a team can translate inputs like time, effort, and resources into valuable outputs such as completed projects, software features, or innovative solutions.

Tracking software engineering productivity involves analyzing key metrics like productivity ratio, throughput, cycle time, and lead time. By assessing these metrics, engineering managers can pinpoint areas for improvement, make informed decisions, and implement strategies to optimize productivity and achieve project objectives. Ultimately, engineering productivity plays a critical role in ensuring the success and competitiveness of engineering projects and organizations in today’s fast-paced technological landscape.
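
To make these definitions concrete, here is a small, self-contained Python sketch that computes average cycle time, lead time, and throughput from a handful of made-up work items; the field names and data are illustrative, not drawn from any particular tool:

```python
# Illustrative sketch: cycle time, lead time, and throughput from
# completed work items. All dates and field names are hypothetical.
from datetime import datetime
from statistics import mean

items = [
    {"created": datetime(2024, 5, 1), "started": datetime(2024, 5, 2),
     "finished": datetime(2024, 5, 4)},
    {"created": datetime(2024, 5, 1), "started": datetime(2024, 5, 3),
     "finished": datetime(2024, 5, 8)},
]

# Cycle time: work start -> done. Lead time: request -> done.
cycle_days = mean((i["finished"] - i["started"]).days for i in items)
lead_days = mean((i["finished"] - i["created"]).days for i in items)

# Throughput: items completed per unit of time (here, per week).
span_days = (max(i["finished"] for i in items)
             - min(i["created"] for i in items)).days or 1
throughput_per_week = len(items) / (span_days / 7)

print(f"avg cycle time: {cycle_days:.1f} days, "
      f"avg lead time: {lead_days:.1f} days, "
      f"throughput: {throughput_per_week:.1f} items/week")
```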

Why does Engineering Productivity Matter?

Impact on Project Timelines and Deadlines

Engineering productivity directly affects project timelines and deadlines. When teams are productive, they can deliver projects on schedule, meeting client expectations and maintaining stakeholder satisfaction.

Influence on Product Quality and Customer Satisfaction

High productivity levels correlate with better product quality. By maximizing productivity, engineering teams can focus on thorough testing, debugging, and refining processes, ultimately leading to increased customer satisfaction.

Role in Resource Allocation and Cost-Effectiveness

Optimized engineering productivity ensures efficient resource allocation, reducing unnecessary expenditures and maximizing ROI. By utilizing resources effectively, tech companies can achieve their goals within budgetary constraints.

The Importance of Tracking Engineering Productivity

Insights for Performance Evaluation and Improvement

Tracking engineering productivity provides valuable insights into team performance. By analyzing productivity metrics, organizations can identify areas for improvement and implement targeted strategies for enhancement.

Facilitates Data-Driven Decision-Making

Data-driven decision-making is essential for optimizing engineering productivity. Organizations can make informed decisions about resource allocation, process optimization, and project prioritization by tracking relevant metrics.

Helps in Setting Realistic Goals and Expectations

Tracking productivity metrics allows organizations to set realistic goals and expectations. By understanding historical productivity data, teams can establish achievable targets and benchmarks for future projects.

Factors Affecting Engineering Productivity

Team Dynamics and Collaboration

Effective teamwork and collaboration are essential for maximizing engineering productivity. By fostering a culture of collaboration and communication, organizations can leverage team members’ diverse skills and expertise to achieve common goals.

Work Environment and Organizational Culture

The work environment and organizational culture play a significant role in determining engineering productivity. A supportive and conducive work environment fosters team members’ creativity, innovation, and productivity.

Resource Allocation and Workload Management

Efficient resource allocation and workload management are critical for optimizing engineering productivity. By allocating resources effectively and balancing workload distribution, organizations can ensure that team members work on tasks that align with their skills and expertise.

Strategies to Improve Engineering Productivity

Identifying Productivity Roadblocks and Bottlenecks

Identifying and addressing productivity roadblocks and bottlenecks is essential for improving engineering productivity. By conducting thorough assessments of workflow processes, organizations can identify inefficiencies, focus on workload distribution, and implement targeted solutions for improvement.

Implementing Effective Tools and Practices for Optimization

Leveraging effective tools and best practices is crucial for optimizing engineering productivity. By adopting agile methodologies, DevOps practices, and automation tools, engineering organizations can streamline processes, reduce manual efforts, enhance code quality, and accelerate delivery timelines.

Prioritizing Tasks Strategically

Strategic task prioritization, along with effective time management and goal setting, is key to maximizing engineering productivity. By prioritizing tasks based on their impact and urgency, organizations can ensure that team members focus on the most critical activities, leading to improved productivity and efficiency.

Promoting Collaboration and Communication

Promoting collaboration and communication within engineering teams is essential for maximizing productivity. By fostering open communication channels, encouraging knowledge sharing, and facilitating cross-functional collaboration, organizations can leverage the collective expertise of team members to drive innovation and motivation and achieve common goals.

Continuous Improvement through Feedback Loops and Iteration

Continuous improvement is essential for maintaining and enhancing engineering productivity. By soliciting feedback from team members, identifying areas for improvement, and iteratively refining processes, organizations can continuously optimize productivity, address technical debt, and adapt to changing requirements and challenges.

Consequences of Not Tracking Engineering Productivity

Risk of Missed Deadlines and Project Delays

Neglecting to track engineering productivity increases the risk of missed deadlines and project delays. Without accurate productivity tracking, organizations may struggle to identify and address issues that could impact project timelines and deliverables.

Decreased Product Quality and Customer Dissatisfaction

Poor engineering productivity can lead to decreased product quality and customer dissatisfaction. Organizations may overlook critical quality issues without effective productivity tracking, resulting in negative business outcomes, subpar products, and unsatisfied customers.

Inefficient Resource Allocation and Higher Costs

Failure to track engineering productivity can lead to inefficient resource allocation and higher costs. Without visibility into productivity metrics, organizations may allocate resources ineffectively, wasting time and effort and incurring budget overruns.

Best Practices for Engineering Productivity

Setting SMART Goals

Setting SMART (specific, measurable, achievable, relevant, time-bound) goals is essential for maximizing engineering productivity. By setting clear and achievable goals, organizations can focus their efforts on activities that drive meaningful results and contribute to overall project success.

Establishing a Culture of Accountability and Ownership

Establishing a culture of accountability and ownership is critical for maximizing engineering productivity. Organizations can foster a sense of ownership and commitment that drives productivity and excellence by empowering team members to take ownership of their work and be accountable for their actions.

Promoting Work-Life Balance

Ensure work-life balance at the organization by promoting policies that support flexible schedules, encouraging regular breaks, and providing opportunities for professional development and personal growth. This can help reduce stress and prevent burnout, leading to higher productivity and job satisfaction.

Embracing Automation and Technology

Embracing automation and technology is key to streamlining processes and accelerating delivery timelines. By leveraging automation tools, DevOps practices, and advanced technologies, organizations can automate repetitive tasks, reduce manual efforts, and improve overall productivity and efficiency.

Investing in Employee Training and Skill Development

Investing in employee training and skill development is essential for maintaining and enhancing engineering productivity. By providing ongoing training and development opportunities, organizations can equip team members with the skills and knowledge they need to excel in their roles and contribute to overall project success.

Using Typo for Improved Engineering Productivity

Typo offers innovative features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams. It includes engineering metrics that can help you take action with in-depth insights.

Understanding Engineering Productivity Metrics

Below are a few important engineering metrics that can help in measuring a team’s productivity:

Merge Frequency

Merge Frequency represents the rate at which Pull Requests are merged into any of the code branches per day. By tracking it, engineering teams can optimize their development workflows, improve collaboration, and increase team efficiency.

Cycle Time

Cycle time measures the time it takes to complete a single iteration of a process or task. Organizations can identify opportunities for process optimization and efficiency improvement by tracking cycle time.

Deployment PR

Deployment PRs represent the average number of Pull Requests merged into the main/master/production branch per week. Measuring it helps improve engineering teams’ efficiency by providing insights into the frequency, timing, and success rate of code deployments.

Planning Accuracy

Planning Accuracy represents the percentage of Tasks Planned versus Tasks Completed within a given time frame. Its benchmarks help engineering teams measure their performance, identify improvement opportunities, and drive continuous enhancement of their planning processes and outcomes.

Code Coverage

Code coverage is a measure that indicates the percentage of a codebase that is tested by automated tests. It helps ensure that the tests cover a significant portion of the code, identifying code quality, untested parts, and potential bugs.
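
As a quick illustration of the arithmetic behind a few of these metrics, the following Python snippet computes merge frequency, planning accuracy, and code coverage from placeholder counts; the numbers are invented, and real values would come from your Git provider, issue tracker, and test runner:

```python
# Hypothetical inputs; replace with counts from your own tooling.
merged_prs = 42                # PRs merged into any branch
days_in_period = 14
merge_frequency = merged_prs / days_in_period               # merges/day

tasks_planned = 30
tasks_completed = 24
planning_accuracy = tasks_completed / tasks_planned * 100   # percent

covered_lines, total_lines = 8_200, 10_000
code_coverage = covered_lines / total_lines * 100           # percent

print(f"merge frequency: {merge_frequency:.1f}/day, "
      f"planning accuracy: {planning_accuracy:.0f}%, "
      f"coverage: {code_coverage:.0f}%")
```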


How does Typo Help in Enhancing Engineering Productivity?

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better software faster. It seamlessly integrates into tech tool stacks such as Git versioning, issue trackers, and CI/CD tools. It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code quality tool helps identify issues in the code and auto-fixes them before you merge to master.

Features

  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Includes an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint.
  • Provides a 360° view of the developer experience, i.e. captures qualitative insights and provides an in-depth view of the real issues.
  • Offers engineering benchmarks to compare the team’s results across industries.
  • User-friendly interface.

Improve Engineering Productivity Always to Stay Ahead

Measuring and improving engineering productivity is essential for achieving project success and driving business growth. By understanding the importance of productivity tracking, leveraging relevant metrics, and implementing effective strategies, organizations can optimize productivity, enhance product quality, and deliver exceptional results in today’s competitive software engineering landscape.

In conclusion, engineering productivity is not just a metric; it’s a mindset and a continuous journey towards excellence.

Measure developer experience with Typo

A software development team is critical for business performance. They wear multiple hats to complete the work and deliver high-quality software to end-users. On the other hand, organizations need to take care of their well-being and measure developer experience to create a positive workplace for them.

Otherwise, developers’ productivity and morale can suffer, making their work less efficient and effective and, as a result, disrupting the developer experience at the workplace.

With Typo, you can capture qualitative insights and get a 360 view of your developer experience. Let’s delve deeper into it in this blog post:

What is developer experience?

Developer experience refers to the overall experience developer teams have when using tools, platforms, and services to build software applications. It spans everything from documentation to coding and deployment, and includes both tangible and intangible aspects of the work.

Happy developers = positive developer experience. It increases their productivity and morale, and it further leads to a faster development cycle and better developer workflows, methods, and working conditions.

Not taking care of developer experience can make it difficult for businesses to retain and attract top talent.

Why is developer experience beneficial?

Developer experience isn’t just a buzzword. It is a crucial aspect of your team’s productivity and satisfaction.

Below are a few benefits of developer experience:

Smooth onboarding process

Good devex ensures the onboarding process is as simple and smooth as possible. It includes making engineering teams familiar with the tools and culture and giving them the support they need to proceed further in their career. It also lets them get to know other developers, which can help with collaboration and mentorship.

Improves product quality

A positive developer experience leads to 3 effective C’s – Collaboration, communication, and coordination. Adhering to coding standards, best practices and automated testing also helps in promoting code quality and consistency and catching and fixing issues early. As a result, they can easily create products that meet customer needs and are free from errors and glitches.  

Increases development speed

When developer experience is handled carefully, team members can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documents, streamlined workflows, and a well-configured development environment are a few of the ways to boost development speed. It lets developers minimize the need to switch between different tools and platforms, which increases focus and team productivity.

Attracts and retains top talents

Developers usually look out for a strong tech culture so they can focus on their core skills and get acknowledged for their contributions. A good developer experience results in developer satisfaction and aligns their values and goals with the organization. In return, developers bring the best to the table and want to stay in the organization for the long run.

Enhances collaboration

Great developer experience encourages collaboration and effective communication tools. This fosters teamwork and reduces misunderstandings. Through collaborative approaches, developers can easily discuss issues, share feedback, and work together on tasks.

How to measure developer experience with Typo?

Typo surfaces early indicators of developers’ well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins on the experience of the developers.

Below is the process that Typo follows to gain insights into developer experience effectively:

Step 1: Pulse surveys

Pulse surveys are short, periodic questionnaires used to gather feedback from developers to assess their engagement, satisfaction, and overall organizational health.

Typo’s pulse surveys are specifically designed for software engineering teams, as they are built on a developer experience framework. Typo triggers AI-driven pulse surveys in which each developer periodically receives a notification with a few conversational questions.

We highly recommend running surveys once a month to keep tabs on your team’s well-being and experiences and build a continuous feedback loop. However, you can customize the frequency of these surveys to suit the company’s needs.

And don’t worry, these surveys are anonymous.


Step 2: Developer Experience analytics

Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help to analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.

Below are key components of Typo’s developer experience analytics dashboard:

DevEx Score

The DevEx score indicates the overall state of well-being or happiness within an organization. It reflects the collective emotional and mental health of the developers.

Also known as the employee net promoter score, this score ranges between 1 and 10 and is based on the developer feedback collected. A high well-being score suggests that people are generally content and satisfied, while a low score may indicate areas of concern or areas needing improvement.
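
Typo’s exact scoring method isn’t spelled out here, but a standard eNPS-style calculation on a 1–10 scale looks like the hypothetical Python sketch below, where 9–10 counts as a promoter and 1–6 as a detractor:

```python
# Sketch of an eNPS-style score on a 1-10 scale (illustrative only;
# the platform's actual formula may differ).
def enps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

print(enps([10, 9, 8, 7, 6, 4, 9]))  # -> ~14.3
```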

Response Rate

It is the percentage of people who responded to the check-in. A higher response rate represents a more reliable dataset for analyzing developer experience metrics and deriving insights.

This is a percentage number shown along with its delta change; you will also see the exact count behind the percentage. It also includes a trend graph showing the data from the last 4 weeks.

It also includes trending sentiments, which segment employees based on the most frequently recurring sentiments mentioned by the developer team.
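
For illustration, the response-rate arithmetic amounts to something like the following; the numbers are placeholders:

```python
# Response rate with a week-over-week delta (placeholder numbers).
responded, surveyed = 34, 50
rate = responded / surveyed * 100        # 68% of developers responded
last_week_rate = 60.0
delta = rate - last_week_rate            # +8 percentage points
print(f"response rate: {rate:.0f}% ({delta:+.0f} pp vs last week)")
```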


Recent comments

This section shows all the concerns raised by developers, which you can reply to in order to drive meaningful conversations. Doing so offers valuable insights into their workflow challenges, helps address issues promptly, and boosts developer satisfaction.


Heatmap

In this section, you can slice and dice your data to dive deeper at the level of different demographics. The list of demographics is as follows:

  • Designation
  • Location
  • Team
  • Tenure

Burnout Alerts

Typo sends automated alerts to your communication channels to help you identify burnout signs in developers at an early stage. This enables leaders to track developer engagement, support their well-being, maintain productivity, and create a positive and thriving work environment.

Typo tracks the work habits of developers across multiple activities, such as commits, PRs, reviews, comments, tasks, and merges, over a certain period. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system identifies them as being in the burnout zone or at risk of burnout. These benchmarks can be customized to meet your specific needs.
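
As a rough illustration of this heuristic, the sketch below flags a developer whose weekly activity stays above the team average by a configurable margin for every week in the window; the data, margin, and names are all invented:

```python
# Invented weekly activity counts (commits, PRs, reviews, ...) per dev.
from statistics import mean

weekly_activity = {
    "ana":   [52, 60, 58, 64],
    "bruno": [30, 28, 33, 29],
    "chen":  [35, 31, 36, 34],
}

MARGIN = 1.25  # flag if above 1.25x the team average every week

# Average activity across the team for each of the four weeks.
team_avg_per_week = [mean(w) for w in zip(*weekly_activity.values())]

at_risk = [
    dev for dev, weeks in weekly_activity.items()
    if all(v > MARGIN * avg for v, avg in zip(weeks, team_avg_per_week))
]
print(at_risk)  # -> ['ana'], whose load stays well above average
```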

Developer experience framework, powered by Typo

Typo’s developer experience framework suggests what engineering leaders should focus on when measuring developer productivity and experience.

Below are the key focus areas and their drivers incorporated in the developer experience framework:

Learn more about Typo's DevEx Framework

Key focus areas

Manager support

It refers to the level of assistance, guidance, and resources provided by managers or team leads to support developers in their work.


Empathy

The ability to understand and relate to developers, actively listen, and show compassion in interactions.

  • Do you feel comfortable sharing your concerns or personal challenges with your manager?
  • Do you feel comfortable expressing yourself in this space?
  • Does your manager actively listen to your ideas without judgment?

Coach and guide

The role of managers is to provide expertise, advice, and support to help developers improve their skills, overcome challenges, and achieve career goals.

  • Does your manager give constructive feedback regularly?
  • Does your manager give you the guidance you need in your work?
  • Does your manager help you learn and develop new skills?

Feedback

The ability to provide timely and constructive feedback on performance, skills, and growth areas, helping developers gain insights, refine their skills, and work towards achieving their career objectives.

  • Do you feel that your manager’s feedback helps you understand your strengths and areas for improvement?
  • Do you feel comfortable providing feedback to your manager?
  • How effectively does your manager help you get support for technical growth?

Developer flow

It is a state of optimal engagement and productivity that developers experience when fully immersed and focused on their work.


Work-life balance

Maintaining a healthy equilibrium between work responsibilities and personal life, supported by clear boundaries and the resources to manage workload effectively.

  • How would you rate the work-life balance in your current role?
  • Do you feel supported by your team in maintaining a good work-life balance?

Autonomy

Providing developers with the freedom and independence to make decisions, set goals, and determine their approach and execution of tasks.

  • Do you feel free to make decisions for your work?
  • Do you feel encouraged to explore new ideas and experiment with different solutions?
  • Do you think your ideas are well-supported by the team?

Focus time

The dedicated periods of uninterrupted work where developers can deeply concentrate on their tasks without distractions or interruptions.

  • How often do you have time for focused work without interruptions?
  • How often do you switch context during focus time?
  • How often can you adjust your work schedule to improve conditions for focused work when needed?

Goals

Setting clear objectives that provide direction, motivation, and a sense of purpose in developers’ work enhances their overall experience and productivity.

  • Have you experienced success in meeting your goals?
  • Are you able to track your progress towards your goals?
  • How satisfied are you with the goal-setting process within your team?

Product management

The practices involved in overseeing a software product’s lifecycle, from ideation to development, launch, and ongoing management.


Clear requirements

Providing developers with precise and unambiguous specifications, ensuring clarity, reducing ambiguity, and enabling them to meet the expectations of stakeholders and end-users.

  • Are the requirements provided for your projects clear and well-defined?
  • Do you have the necessary information you need for your tasks?
  • Do you think the project documentation covers everything you need?

Reasonable timelines

Setting achievable and realistic project deadlines, allowing developers ample time to complete tasks without undue pressure or unrealistic expectations.

  • Do you have manageable timeframes and deadlines that enhance the quality of your work?
  • Are you provided with the resources you need to meet the project timelines?
  • How often do you encounter unrealistic project timelines?

Collaborative discussions

Fostering open communication among developers, product managers, and stakeholders, enabling constructive discussions to align product strategies, share ideas, and resolve issues.

  • Are your inputs valued during collaborative discussions?
  • Does your team handle conflicts well in product meetings?
  • How often do you actively participate during collaborative discussions?

Development releases

It refers to creating and deploying software solutions or updates, emphasizing collaboration, streamlined workflows, and reliable deployment to enhance the developer experience.


Tools and technology

Providing developers with the necessary software tools, frameworks, and technologies to facilitate their work in creating and deploying software solutions. 

  • Are you satisfied with the tools provided to you for your development work?
  • Has the availability of tools positively impacted your development process?
  • To what extent do you believe that testing tools adequately support your work?

Code review

Evaluating code changes for quality, adherence to standards, and identifying issues to enhance software quality and promote collaboration among developers.

  • Do you feel that code reviews contribute to your growth and development as a developer?
  • How well does your team address the issues identified during code reviews?
  • How often do you receive constructive feedback during code reviews that helps improve your coding skills?

Code health

Involves activities like code refactoring, performance optimization, and enforcing best practices to ensure code quality, maintainability, and efficiency, thereby enhancing the developer experience and software longevity.

  • Are coding standards and best practices consistently followed in the development process?
  • Do you get enough support with technical debt & code-related issues?
  • Are you satisfied with the overall health of the codebase you’re currently working on?

Frictionless releases

Streamlining software deployment through automation, standardized procedures, and effective coordination, reducing errors and delays for a seamless and efficient process that enhances the developer experience.

  • Do you often have post-release reviews to identify areas for improvement?
  • Do you feel that the release process is streamlined in your projects?
  • Is the release process in your projects efficient?

Culture and values

It refers to shared beliefs, norms, and principles that shape a positive work environment: collaboration, open communication, respect, innovation, diversity, and inclusion, fostering creativity, productivity, and satisfaction among developers.


Psychological safety

Creating an environment where developers feel safe to express their opinions, take risks, and share their ideas without fear of judgment or negative consequences.

  • Do you feel that your team creates an atmosphere where trust, respect, and openness are valued?
  • Do you feel comfortable sharing your thoughts without worrying about judgement?
  • Do you believe that your team fosters a culture where everyone’s opinions are valued?

Recognition

Acknowledging and appreciating developers’ contributions and achievements through meaningful recognition, fostering a positive and motivating environment that boosts morale and engagement.

  • Does recognition at your workplace make you happier and more involved in your job?
  • Do you feel that your hard work is acknowledged by your team members and manager?
  • Do you believe that recognition motivates you to perform better in your role?

Team collaboration

Fostering open communication, trust, and knowledge sharing among developers, enabling seamless collaboration and idea exchange, and leveraging strengths to achieve common goals.

  • Is there a strong sense of teamwork and cooperation within your team?
  • Are you confident in your team’s ability to solve problems together?
  • Do you believe that your team leverages individual expertise to enhance collaboration?

Learning and growth

Continuous learning and professional development, offering skill-enhancing opportunities, encouraging a growth mindset, fostering curiosity and innovation, and supporting career progression.

  • Does your organization encourage your professional growth?
  • Are there any training programs you would like to see implemented?
  • Does your organization invest enough in employee training and development?

Conclusion

Measuring developer experience continuously is crucial in today’s times. It helps to provide real-time feedback on workflow efficiency, early signs of burnout, and overall satisfaction levels. This further identifies areas for improvement and fosters a more productive and enjoyable work environment for developers.

To learn more about DevEx, visit our website!


Developer Experience Framework: A Comprehensive Guide to Improving Developer Productivity

In today’s times, developer experience has become an integral part of any software development company. A direct relationship exists between developer experience and developer productivity. A positive developer experience leads to high developer productivity, increasing job satisfaction, efficiency, and high-quality products.

When organizations don’t focus on developer experience, they may encounter many problems in workflow. This negatively impacts the overall business performance.

In this blog, let’s learn more about the developer experience framework that is beneficial to developers, engineering managers, and organizations.

What is Developer Experience?

In simple words, Developer experience is about the experience software developers have while working in the organization.

It is the developers’ journey while working with a specific framework, programming languages, platform, documentation, general tools, and open-source solutions.

Positive developer experience = Happier teams

Developer experience has a direct relationship with developer productivity. A positive experience results in high dev productivity which further leads to high job satisfaction, performance, and morale. Hence, happier developer teams.

This starts with understanding the unique needs of developers and fostering a positive work culture for them.

Benefits of Developer Experience

Smooth Onboarding Process

DX ensures that the onboarding process is as simple and smooth as possible. This includes making new developers familiar with the tools and culture as well as giving them the support they need to proceed further in their career.

It also allows them to get to know other developers, which helps in collaboration, open communication, and seeking help whenever required.

Improves Product Quality

A positive developer experience leads to 3 effective C’s - Collaboration, communication, and coordination. Besides this, adhering to coding standards, best practices, and automated testing helps in promoting code quality and consistency and catching and fixing issues early.

As a result, they can easily create products that can meet customer needs and are free from errors and glitches.  

Increases Development Speed

When developer experience is handled with care, software developers can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documents, streamlined workflow, and a well-configured development environment are a few of the ways to boost development speed.

It also lets them minimize the need to switch between different tools and platforms which increases the focus and team productivity.

Attract and Retain Top Talents

Developers usually look out for a strong tech culture so they can focus on their core skills and get acknowledged for their contributions. A good developer experience increases job satisfaction and aligns their values and goals with the organization.

In return, developers bring the best to the table and want to stay in the organization for the long run.

Enhanced Collaboration

The right kind of developer experience encourages collaboration and effective communication tools. This fosters teamwork and reduces misunderstandings.

Through collaborative approaches, developers can easily discuss issues, share feedback, and work together on tasks. It helps streamline the development process and results in high-quality work.

Two Key Frameworks and Their Limitations

There are two frameworks to measure developer productivity. However, they come with certain drawbacks. Hence, a new developer framework is required to bridge the gap in how organizations approach developer experience and productivity.

Let’s take a look at DORA metrics and SPACE frameworks along with their limitations:

DORA Metrics

DORA metrics were identified after 6 years of research and surveys by DORA. They help engineering leaders determine two things:

  • The characteristics of a top-performing team
  • How their performance compares to the rest of the industry

It defines 4 key metrics:

Deployment frequency

Deployment Frequency measures the frequency of deployment of code to production or releases to end-users in a given time frame.

Lead Time for Changes

Also known as cycle time, Lead Time for Changes measures the time between a commit being made and that commit making it to production.

Mean Time to Recover

This metric is also known as the mean time to restore. Mean Time to Recover measures the time required to resolve an incident, i.e. a service incident or defect impacting end-users.

Change Failure Rate

Change Failure Rate measures the proportion of deployments to production that result in degraded service.
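
To ground these definitions, here is a back-of-the-envelope Python sketch computing all four metrics from a handful of invented deployment and incident records:

```python
# Hypothetical deployment and incident records; field names are made up.
from datetime import datetime, timedelta

deployments = [
    {"committed": datetime(2024, 5, 1, 9),
     "deployed": datetime(2024, 5, 2, 15), "failed": False},
    {"committed": datetime(2024, 5, 3, 10),
     "deployed": datetime(2024, 5, 3, 18), "failed": True},
    {"committed": datetime(2024, 5, 6, 11),
     "deployed": datetime(2024, 5, 7, 9), "failed": False},
]
incidents = [timedelta(hours=2), timedelta(hours=5)]  # time to restore

period_days = 7
deployment_frequency = len(deployments) / period_days      # deploys/day
lead_time = sum((d["deployed"] - d["committed"] for d in deployments),
                timedelta()) / len(deployments)            # commit -> prod
mttr = sum(incidents, timedelta()) / len(incidents)        # avg restore
change_failure_rate = (sum(d["failed"] for d in deployments)
                       / len(deployments) * 100)           # percent

print(f"deploys/day: {deployment_frequency:.2f}, lead time: {lead_time}, "
      f"MTTR: {mttr}, CFR: {change_failure_rate:.0f}%")
```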


Limitations of DORA metrics

It Doesn't Take into Consideration All the Factors that Add to the Success of the Development Process

DORA metrics are a useful tool for tracking and comparing DevOps team performance. Unfortunately, it doesn’t take into account all the factors for a successful software development process. For example, assessing coding skills across teams can be challenging due to varying levels of expertise. These metrics also overlook the actual efforts behind the scenes, such as debugging, feature development, and more.

It Doesn't Provide Full Context

While DORA metrics tell us which metric is low or high, they don’t reveal the reason behind it. Suppose there is an increase in lead time for changes; it could be due to various reasons. For example, DORA metrics might not reflect the effectiveness of feedback provided during code review, thereby overlooking the true impact and value of the code review process.

The Software Development Landscape is Constantly Evolving

The software development landscape is changing rapidly, and DORA metrics may not be able to quickly adapt to emerging programming practices, coding standards, and other software trends. For instance, code review has evolved to include not only traditional peer reviews but also practices like automated code analysis. DORA metrics may not fully capture these new approaches and, hence, may not properly assess the effectiveness of such reviews.

SPACE Framework

This framework helps in understanding and measuring developer productivity. It takes into consideration both the qualitative and quantitative aspects and uses various data points to gauge the team's productivity.

The 5 dimensions of this framework are:

Satisfaction and Well-Being

The dimension of developers’ satisfaction and well-being is often evaluated through developer surveys, which assess whether team members are content, happy, and exhibiting healthy work practices. There is a strong connection between contentment, well-being, and productivity, and teams that are highly productive but dissatisfied are at risk of burning out if their well-being is not improved.

Performance

The SPACE Framework originators recommend evaluating a developer’s performance based on their work outcome, using metrics like Defect Rate and Change Failure Rate. Every failure in production takes away time from developing new features and ultimately harms customers.

Activity

The SPACE framework includes activity metrics that provide insights into developer outputs, such as on-call participation, pull requests opened, the volume of code reviewed, or documents written, which are similar to older productivity measures. However, the framework emphasizes that such activity metrics should not be viewed in isolation but should be considered in conjunction with other metrics and qualitative information.

Communication and Collaboration

Teams that are highly transparent and communicative tend to be the most successful. This enables developers to have a clear understanding of their priorities and how their work contributes to larger projects, and it also facilitates knowledge sharing among team members.

Indicators that can be used to measure collaboration and communication may include the extent of code review coverage and the quality of documentation.

Efficiency and Flow

The concept of efficiency in the SPACE framework pertains to an individual’s ability to complete tasks quickly with minimal disruption, while team efficiency refers to the ability of a group to work effectively together. These are essential factors in reducing developer frustration.


Limitations of SPACE framework

It Doesn’t Tell You WHY

While the SPACE framework measures dev productivity, it doesn’t tell you why certain measurements have a specific value, nor can it identify the events that triggered a change. The framework offers a structured approach to evaluating internal and external factors but doesn’t delve into the deeper motivations driving them.

Limited Scope for Innovation

Too much focus on efficiency and stability can stifle developers’ creativity and innovation. The framework can make teams focus more on hitting specific targets. A culture that embraces change, experiments, and a certain level of uncertainty doesn’t align with the framework principles.

Too Many Metrics

This framework has 5 different dimensions and multiple metrics, so it produces an overwhelming amount of data. Engineering leaders then need to set up data collection, maintain data accuracy, and analyze the results. This makes it difficult to identify critical insights and prioritize actions.

Need for a new Developer Experience Framework

This new framework suggests what organizations and engineering leaders should focus on when measuring developer productivity and experience.

Below are the key focus areas and their drivers incorporated in the Developer Experience Framework:

Manager Support

Refers to the level of assistance, guidance, and resources provided by managers or team leads to support developers in their work.

Empathy

The ability to understand and relate to developers, actively listen, and show compassion in interactions.

Coach and Guide

The role of managers is to provide expertise, advice, and support to help developers improve their skills, overcome challenges, and achieve career goals.

Feedback

The ability to provide timely and constructive feedback on performance, skills, and growth areas, helping developers gain insights, refine their skills, and work towards achieving their career objectives.

Developer flow

Refers to a state of optimal engagement and productivity that developers experience when they are fully immersed and focused on their work.

Work-Life Balance

Maintaining a healthy equilibrium between work responsibilities and personal life, supported by clear boundaries and the resources to manage workload effectively.

Autonomy

Providing developers with the freedom and independence to make decisions, set goals, and determine their approach and execution of tasks.

Focus Time

The dedicated periods of uninterrupted work where developers can deeply concentrate on their tasks without distractions or interruptions.

Goals

Setting clear objectives that provide direction, motivation, and a sense of purpose in developers’ work enhances their overall experience and productivity.

Product Management

Refers to the practices involved in overseeing the lifecycle of a software product, from ideation to development, launch, and ongoing management.

Clear Requirements

Providing developers with precise and unambiguous specifications, ensuring clarity, reducing ambiguity, and enabling them to meet the expectations of stakeholders and end-users.

Reasonable Timelines

Setting achievable and realistic project deadlines, allowing developers ample time to complete tasks without undue pressure or unrealistic expectations.

Collaborative Discussions

Fostering open communication among developers, product managers, and stakeholders, enabling constructive discussions to align product strategies, share ideas, and resolve issues.

Development and Releases

Refers to creating and deploying software solutions or updates, emphasizing collaboration, streamlined workflows, and reliable deployment to enhance the developer experience.

Tools and Technology

Providing developers with the necessary software tools, frameworks, and technologies to facilitate their work in creating and deploying software solutions.

Code Health

Involves activities like code refactoring, performance optimization, and enforcing best practices to ensure code quality, maintainability, and efficiency, thereby enhancing the developer experience and software longevity.

Frictionless Releases

Streamlining software deployment through automation, standardized procedures, and effective coordination, reducing errors and delays for a seamless and efficient process that enhances the developer experience.

Culture and Values

Refers to shared beliefs, norms, and principles that shape a positive work environment. It includes collaboration, open communication, respect, innovation, diversity, and inclusion, fostering creativity, productivity, and satisfaction among developers.

Psychological Safety

Creating an environment where developers feel safe to express their opinions, take risks, and share their ideas without fear of judgment or negative consequences.

Recognition

Acknowledging and appreciating developers' contributions and achievements through meaningful recognition, fostering a positive and motivating environment that boosts morale and engagement.

Team Collaboration

Fostering open communication, trust, and knowledge sharing among developers, enabling seamless collaboration and idea exchange, and leveraging strengths to achieve common goals.

Learning and Growth

Continuous learning and professional development, offering skill-enhancing opportunities, encouraging a growth mindset, fostering curiosity and innovation, and supporting career progression.

Conclusion

The developer experience framework creates an indispensable link between developer experience and productivity. Organizations that neglect developer experience face workflow challenges that can harm business performance.  

Prioritizing developer experience isn’t just about efficiency. It includes creating a work culture that values individual developers, fosters innovation, and propels software development teams toward unparalleled success.

Typo aligns seamlessly with the principles of the Developer Experience Framework, empowering engineering leaders to revolutionize their teams.


7 Tips to Improve Developer Happiness

Happy developers are more engaged and productive in the organization. They are more creative and less likely to quit.

But, does developer happiness only come from the fair compensation provided to them? While it is one of the key factors, other aspects also contribute to their happiness.

As the times are changing, there has been a shift in developers’ perspective too. From ‘What we do for a Living’, they now believe in ‘How we want to live’. Happiness is now becoming a major driving force in their decision whether to take, stay, or leave a job.

In this blog, let’s delve deeper into developer happiness and ways to improve it in the organization:

What is Developer Happiness?

In simple words, developer happiness can be defined as a ‘state of having a positive attitude and outlook on one’s work’.

It is one of the essential elements of organizational success. An increase in developer happiness results in higher engagement and job satisfaction, giving software developers the freedom to be human and to survive and thrive in the organization.


Below are a few benefits of having happy developers in the workplace:

Breed Innovation

Happy developers have a positive mindset toward their jobs and organization. They are more likely to experiment with new ideas and contribute creative solutions, and to take calculated risks and step out of their comfort zone to foster innovation and try new approaches.

Faster Problem Resolution

Having a positive mindset leads to quicker problem-solving. When software developers are content, they are open to collaborating with other developers and communicate more. This facilitates faster issue resolution and helps them anticipate potential issues and address them before they escalate.

Ownership and Accountability

Developer happiness comes from a positive work environment. When developers feel valued and happy about their work, they take responsibility for resolving issues promptly and align their work with the company’s goals. They become accountable for their work and want to give their best. This not only increases their work satisfaction but their developer experience as well.

Improves Code Quality

Happy developers are more likely to pay attention to the details of their code. They ensure that the work is clean and adheres to the best practices. They are more open to and cooperative during code reviews and take feedback as a way to improve and not criticism.

Health and Well-Being

A positive work environment, supportive team, and job satisfaction result in developer happiness. This reduces their stress and burnout and hence, improves developers’ overall mental and physical well-being.

The above-mentioned points also result in increased developer productivity. In the next section, let’s understand how developer happiness is related to developer productivity.

How Are Developer Happiness and Developer Productivity Related?

According to the Social Market Foundation, happy employees are on average 12% more productive than unhappy employees.

Developer Happiness is closely linked to Developer Productivity.

Happy developers perform their tasks well. They treat customers well and take their queries seriously, which results in happy customers too. These developers are also likely to take fewer sick leaves and work breaks, showcasing their organization and its work culture in a good light.

Moreover, software developers find fulfillment in their roles. This increases their enthusiasm and commitment to their work. They wouldn’t mind going the extra mile to achieve project goals and perform their tasks well.

As a result, happy developers are highly motivated and engaged in their work which leads to increased productivity and developer experience.

Three Core Aspects of Developer Happiness

Following are the three pillars of developer happiness:

Right Tools and Technologies

Tools have a huge impact on developer happiness and retention. The latest and most reliable tools save a lot of time and effort, making developers more effective in their roles and improving their day-to-day tasks. This helps in creating a flow state, a comfortable cognitive load, and fast feedback loops.

Passionate Developer Teams

When developers have more control over their roadmaps, it challenges them intellectually and allows them to make meaningful decisions. Having autonomy and a sense of ownership over their work allows them to deliver efficient and high-quality software products.

Positive Engineering Culture

The right engineering culture creates space for developers to learn, experiment, and share. It allows them to have an ownership mindset, encourages strong agile practices, and is a foundation for productive and efficient teams that drive the business forward. A positive engineering culture also prioritizes psychological safety.

Ways to Boost Developer Happiness in the Workplace

Use Relevant Tools and Technologies

One of the main ways to improve developer happiness is to invest in the right tools and technologies. Experiment with these tools from time to time and monitor their impact. If a tool seems to be the right fit, go ahead with it. However, be cautious not to adopt every latest tool and technology that comes your way; use those that are relevant, up to date, and compatible with your software. You can also set policies for how someone can obtain new equipment.

The combination of efficient workspace and modern learning tools helps in getting better work from developers. This also increases their productivity and hence, results in developer happiness.

Flexible Work Arrangement

When developers have control over their working style and work schedules, it gives them a healthy work-life balance. As times change, so do our perspectives; not everyone is meant for 9-5 jobs. Flexibility allows developers to work when their productivity is at its peak. This becomes a win-win situation for developer satisfaction and project success.

For team communication and collaboration, particular core hours can be set, e.g. 12 PM - 5 PM; after that, anyone can work at any time of the day. Apart from this, asynchronous communication can also be encouraged to accommodate varied work schedules.

Ensure that there are open communication channels to understand the evolving needs and preferences of the development team.

Keep Realistic Expectations

Ensure that you don’t get caught up in simply completing objectives. Understand that you are dealing with human beings, so set realistic expectations and deadlines for your developers. Know that good software takes time; it involves a lot of planning, effort, energy, and commitment to create meaningful projects. Also, consider their time beyond work. Taking note of all of this, set expectations accordingly.

But that’s not all! Ensure that you prioritize quality over quantity. This not only boosts their confidence in skills and abilities but also allows them to be productive and satisfied with their role.


Enable Deep Focus

A software development job is demanding and requires a lot of undivided focus. Too many meetings or overwork can distract developers and make them lose focus, and the state of flow is important for deep work. If a developer is working from the office, having booths can be helpful: they can isolate themselves and focus deeply on work. If they are working remotely, developers can turn off notifications or set their status to focus time.

Focus sessions can range from less than half an hour to two hours or more. Make sure developers take breaks between these focus sessions. You can also make them aware of time management techniques so that they know how to manage their time effectively and efficiently.

Promote Continuous Learning

Software development is an ever-changing field. Hence, developers need to upskill and stay up to date with the latest developments. Have cross-sharing and recorded training sessions so that they are aware of the current trends in the software development industry. You can also provide them with the necessary courses, books, and newsletters.

Apart from this, you can also rotate tasks so their work doesn’t feel monotonous. Give them time and space to skill up before any new project starts.

Appreciation and Recognition

Developers want to know their work counts and to feel proud of their job. They want to be seen, valued, heard, and understood. To foster a positive work environment, celebrate their achievements. Even a ‘Thank you’ goes a long way.

Since developers’ jobs are demanding, they have a strong emotional need to be recognized for their accomplishments. They expect genuine appreciation and recognition that matches the impact of their output, acknowledged publicly.

You can give them credit for their work in daily or weekly group meetings. This increases their job satisfaction, retention, and productivity.

Improve Overall Well-Being

The above-mentioned points help developers improve their physical and mental well-being. It not only helps them in the work front but also their personal lives. When developers aren’t loaded with lots of work, it lets them be more creative in solving problems and decision-making. It also encourages a healthy lifestyle and allows them to have proper sleep.

You can also share mental health resources and therapists' details with your developers. Besides this, you can have seminars and workshops on how health is important and promote physical activities such as walking, playing outdoor games, swimming, and so on.

Conclusion

Fostering developer happiness is not just a desirable goal but a driving force for an organization’s success. By investing in supportive cultures, effective tools, and learning opportunities, organizations can empower developers to perform their development tasks well and unleash their full potential.

Typo helps in revolutionizing your team's efficiency and happiness.


10 Ways to Boost Developer Productivity

If you are leading a developer team, you can relate to this line – ‘No matter what the situation is, developers won’t ever stop being busy.’

There will always be important decisions to make, keeping up with customers’ demands, maintaining the software development process, and whatnot. Indeed, it is the nature of their work. But if this remains for the long run, it can hamper developer productivity.

In simple terms, when developers are constantly busy with their work, it can lead to burnout and frustration, which lowers their productivity. The work can’t be compromised or reduced, but there are ways to unblock their workflows and enhance the developer experience.

In this blog, let’s dive further into the ways to increase developer productivity.

What is Developer Productivity?

Productivity is a measure of how efficiently people, teams, or organizations convert inputs into valuable outputs, with the aim of accomplishing more and reaching bigger, more ambitious goals.

Developer productivity refers to how productive a developer is in a given time frame. It is a measure of the team’s ability to ship high-quality products to end users. When it is properly taken care of, it can improve developer experience and performance while fostering a positive work culture.

Why is it Important?

A few of the reasons why developer productivity is important are stated below:

Business Outcomes

Efficient developers can ship high-quality code that delivers business value. An accelerated development process not only increases customer satisfaction but also results in faster product releases and new features.

Improves Overall Well-Being

Developer productivity is not just important for business outcomes. It improves the physical and mental well-being of the developers as well. High productivity among developers leads to reduced stress and helps prevent burnout. Hence, it contributes to their positive mental and physical health.

Developer Satisfaction

Productivity lets developers feel highly satisfied with their work, because they can see its impact and accomplish more in less time. Besides this, they can focus better, set better goals, and be more effective at their jobs.

Improves Quality of Work

Productive developers are more efficient at work. Productivity helps them strike the right balance between speed and quality, which reduces the need for rework and extensive bug fixes.

Improves Creativity and Problem-Solving Ability

Productive developers have more time to focus on creative problem-solving and innovation. They have better analytical and decision-making abilities. Hence, they can introduce new and innovative features and technologies that enhance their products or services.

What Kills Developer Productivity?

Software developers face a distinct set of productivity challenges. Left unaddressed, these can hurt their performance and, in turn, business outcomes.

Below are some of the obstacles that kill the productivity of the developers:

Ineffective Meetings

Meetings are an important way of keeping developers on the same page. But when several meetings are held each day without a clear agenda, they hamper productivity: they break developers’ focus and cut into the time available for actual work, impeding progress and performance.

Too Many Manual Processes

A lack of automation also hinders developer productivity. Developers end up spending a lot of time on manual, repetitive tasks, which overwhelms them and adds stress, leaving them unable to focus on core tasks.

Slow Code Reviews

Code reviews are a common bottleneck in most software companies. They leave developers frustrated when PRs sit unmerged for too long or when they have to juggle new and old work, negatively impacting motivation and productivity.

No Alignment between Tech and Business Teams

Although tech and business teams have different expertise, they have the same core product vision. When they aren’t on the same page, developers may not be able to fully grasp business objectives and feel disconnected from the work. As a result, developers may feel frustrated and directionless.

Lack of Adequate Sleep and Free Time

When developers don’t get adequate sleep, their brains can’t function at optimum levels. Lack of proper sleep may stem from overtime, a heavy workload, or constant work stress. In these cases, burnout is likely, which further decreases performance and productivity.

10 Ways to Increase Developer Productivity

Someone rightly said - ‘Where there is a will, there is a way!’ Undoubtedly, this is the case in improving developer productivity too.

As team leaders, you must take care of your teams' productivity. Below are the top 10 ways you can implement at your workplace to enhance developer productivity:

Clearly Define the Project Scope

Engineering managers must inform the development team clearly how to get started on their projects. You can create an outline of specific features, functions, and tasks so that developers are aware of what tools to use, what things to keep in mind, and what shouldn’t be done.

Vague or ambiguous project requirements create misunderstandings and confusion among team members. Developers may fail to see the bigger picture and end up making changes continuously.

When project requirements are clearly defined, developers can set the scope for better time management and reduce miscommunication and the need for additional meetings. They can create project plans accordingly, set realistic timelines, and allocate resources.

This also motivates the team since they can see the direct impact of their work on the project’s objectives.  

Set Realistic Deadlines

If your developers are continuously missing deadlines, understand that it is a productivity problem: the deadlines are likely unrealistic, and developers are failing to meet them.

Unrealistic deadlines can cause burnout, poor-quality code, and negligence in PR reviews, which leads to technical debt. If this persists in the long run, it can negatively impact business performance.

Always involve your development team while setting deadlines. When set right, it can help them plan and prioritize their tasks.

Involving your developers also gives you an idea of how long each task will take, so you can set the deadline accordingly. Ensure you build in buffer time for roadblocks, unexpected bugs, and other priorities.

Be Well Aware of your Development Environment

Ensure that you and your team members are fully informed about the development environment and prioritize developer performance. In other words, everyone should be familiar with the set of tools, resources, and configurations they use: IDEs, programming languages and frameworks, operating systems, version control systems, software applications, and so on.

This allows developers to complete tasks faster and more accurately, contributing to better performance and code quality. It also enables faster problem-solving, innovative thinking, and speedier development. When developers are well versed in their tools and resources, they can collaborate with other developers and mentor juniors more effectively. Inform new joiners about your existing tools and resources on their first day; being transparent about these things lets them perform their tasks without confusion or frustration.

This not only results in better resource utilization but also helps developers understand their environment far more efficiently.

Encourage Frequent Communication and Two-Way Feedback

Regular communication between team leaders and developers lets them share important information on a priority basis. It helps work get done effectively, since progress and blockers are communicated even as tasks move forward.

There are various ways to encourage frequent communication. A few of them are:

Daily Standups

Daily standups can be inefficient when the agenda is unclear or when they turn into hour-long meetings. However, when customized to your team’s size and preferences, with a well-known agenda, they can work wonders.

Slack (or Any Other Communication Platform)

Various unified communication platforms such as Slack and Microsoft Teams allow teams to stay in touch. These chat-first tools offer multiple integration options that make everyone’s work easier and simpler.

Team Lunch

Another way to share important updates and progress is a team lunch. These can be held once or twice a month, with team members sharing the updates, blockers, and achievements from their project work.

However, don’t overdo it. Keep the communication short and engaging; otherwise it will eat into developers’ schedules and leave them distracted and frustrated.

While frequent communication is important, nurturing a two-way feedback loop is another vital aspect of improving developer productivity. It allows developers to be accountable, improve their skills, and know where they are going wrong.

Don’t forget to document everything, or developers may spend a lot of time figuring things out on their own. As ideas are shared and cooperation is encouraged, developers become more satisfied with their work, and their productivity rises as a result.

Implement Agile Methodology

Most organizations nowadays are implementing agile methodology and emphasizing agile development. The key reason is that it breaks a project down into several phases, promoting efficient project management.

This allows the development team to focus on achievable tasks that result in quicker, tangible progress. This increases their efficiency and speed and allows them to continuously release and test updates and features. As a result, it allows them to hit the market faster.

The agile methodology also creates a culture of continuous improvement and learning. Regular feedback loops help identify and address issues early, improving the overall project management process and reducing time-consuming revisions. It also allows developers to take ownership of their work.

This autonomy often leads to higher motivation, positively impacting both productivity and performance.

Automate your Workflow

Developers usually become overwhelmed when they are continuously working on tedious, repetitive tasks such as generating API documentation or creating data reports. This hampers team productivity and efficiency, leaving them unable to focus well on core activities.

This is why organizations need to automate their workflows. Automation frees developers from repetitive tasks so they can focus on work that genuinely requires their attention. Automated testing also minimizes human error and accelerates the development cycle.

Organizations can adopt a CI/CD approach, which provides continuous monitoring from integration through to the delivery and deployment of applications.

Besides this, various test automation tools are available in the market, such as Selenium, TestSigma, and Katalon.
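
To make this concrete, here is a minimal sketch of the kind of repetitive browser check that can be scripted once instead of being clicked through by hand. It is written in Python against Selenium’s WebDriver API; the target URL and the title assertion are placeholders for illustration, and it assumes the `selenium` package (v4+) and a local Chrome install:

```python
# A minimal sketch of a scripted browser check, assuming the `selenium`
# package (v4+) is installed and Chrome is available locally.
from selenium import webdriver

def smoke_test(url: str = "https://example.com") -> None:
    """Load a page and assert on its title, replacing a manual spot-check."""
    driver = webdriver.Chrome()  # Selenium 4 can locate the driver itself
    try:
        driver.get(url)
        assert "Example" in driver.title, f"Unexpected title: {driver.title!r}"
        print("Smoke test passed:", driver.title)
    finally:
        driver.quit()

if __name__ == "__main__":
    smoke_test()
```

Hooked into a CI pipeline, a script like this can replace a manual spot-check after every deployment.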

Use the Right Productivity Tools

Choose high-quality tools that are best suited to the task and the team’s preferences. Start by choosing the right hardware: ensure the laptop fits the task and supports the required software and applications. Afterward, pick developer productivity tools and applications well suited to your projects and tasks.

There are various tools in the market, such as Typo, GitHub, and Codacy. These let developers automate non-core activities, set timelines for their work, and clearly define project requirements, which improves the efficiency and quality of software development.

Below are certain criteria for choosing the right productivity tools:

  • Relevance
  • User-friendly interface
  • Customization
  • Cross-platform compatibility
  • Cost-effective

Encourage Frequent Breaks

Here, breaks don’t mean the standard lunch break. They mean short, frequent pauses to relax the mind and regain energy. One technique that usually helps is the Pomodoro Technique, a time management method that breaks work into intervals, typically 25 minutes long, separated by short breaks.
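
As a rough sketch of that cadence, the Pomodoro loop fits in a few lines of Python; the interval lengths below are the conventional defaults, not a prescription:

```python
import time

WORK_MINUTES = 25  # conventional Pomodoro focus interval
BREAK_MINUTES = 5  # short break between intervals

def countdown(minutes: int, label: str) -> None:
    """Announce an interval, then wait it out."""
    print(f"{label}: {minutes} minutes, starting now")
    time.sleep(minutes * 60)
    print(f"{label} finished")

def pomodoro(cycles: int = 4) -> None:
    """Alternate focus sessions and short breaks for a few cycles."""
    for i in range(1, cycles + 1):
        countdown(WORK_MINUTES, f"Focus session {i}")
        if i < cycles:
            countdown(BREAK_MINUTES, "Break")

if __name__ == "__main__":
    pomodoro()  # a real tool would add desktop notifications or sounds
```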

During these breaks, developers can take a nap, exercise, or go for a walk. This helps restore their productivity and, in turn, improves their performance.

Make sure you have a flexible schedule at your workplace. Not every developer can work from 9-5 and they may find it better to balance their work and personal lives with flexible schedules. Besides this, when developers work during their peak hours, it results in higher-quality work and better focus.

Match Developers’ Strengths with Projects

Developers tend to be more productive when they are doing something that interests them. Notice your team’s strengths and weaknesses, then assign work according to their strengths. Task allocation also becomes easier when strengths are matched.

This lets them feel satisfied with their work and contribute positively to their job. They can think creatively and find unique ways to solve problems, and hence be more efficient at work.

It is also possible that developers are strong in specific skills but want to learn something else. In such cases, ask your developers directly what they want to do.

Mentoring and Pair Programming

To increase developer productivity, knowledge sharing and collaboration are important. One route is pair programming: collaborating with another developer lets them work on more complex problems and write code together in parallel.

Besides this, it also results in effective communication as well as accountability for each other’s work.

Another way is mentoring and training junior and new developers. Through knowledge transfer, they can acquire new skills and learn best practices more efficiently. It also reduces the time spent on trial and error.

Both mentoring and pair programming contribute to developers’ satisfaction and happiness and, as a result, increase retention.

Conclusion

Productivity is not a one-day task. It is a journey, and hence it needs to be prioritized continually.

Developers will face many productivity challenges, even when their careers are going well. Team leaders must make things easier for them so that they can contribute positively to business success.

The above-mentioned ideas may help you increase their productivity. Feel free to follow or customize them as per your needs and preferences.


Are daily standups inefficient?

Daily standup meetings are a ritual usually followed by scrum and agile teams. According to sources, 81% of scrum teams hold daily standups, and agile and non-agile teams alike have adopted them.

While they are considered an important ritual, their execution is often overlooked. Daily standups can be a waste of the team’s time when not done correctly; they aren’t worth it if they fail to provide value and keep the team on the same page.

Let’s dive further to explore this in detail, along with the various formats you may consider.

What are daily standups?

Daily standups are brief meetings where team members share updates about their progress and discuss blockers. The aim is to keep the team in sync on the project, and they usually last 15 minutes or less.

The motive behind these meetings is to promote productivity and efficiency among team members.

Where do they go wrong?

Daily standups are meant to drive progress. However, when they take a wrong turn, they can cause trouble. Below are a few warning signs:

When it is stretched for too long

A few of the reasons why standup meetings last for more than 15 minutes are:

  • These meetings are overly detailed. 
  • These meetings include gossip and meaningless discussions. 
  • The team is large, so it takes longer to hear everyone’s updates and blockers. 

When topics unrelated to other members’ work are discussed

Another factor behind inefficient standups is discussion of topics unrelated to the rest of the team’s work. This not only causes a loss of focus but also lowers participation. It dilutes the meeting’s purpose and leaves less time for real issues, hurting the team’s overall progress and performance.

When it becomes monotonous

This is one of the most common pitfalls of daily standups. When tasks or updates remain unchanged for an extended period, the meetings become repetitive, stop adding value, and team members start finding them boring. This shrinks the opportunities for addressing challenges and for collaboration among team members.

When managers consider it to be an opportunity to micromanage their team

Daily standups lose their essence when engineering leaders start micromanaging their teams. Closely monitoring and scrutinizing progress is detrimental to the team’s productivity; it can disrupt the flow of work and erode problem-solving skills.

When trying to solve a problem or challenges in standups

Standup meetings are meant to be brief and straight to the point. When engineering leaders start solving problems in the standup itself, they defeat its purpose; problem-solving belongs in a follow-up meeting. Standups are for daily updates, progress, and surfacing blockers.

But, are they really a waste of time?

If done correctly, daily standups aren’t a waste of time. Accountability and transparency are two pillars of standup meetings. When engineering leaders stick to the right format, these meetings remain efficient and straightforward.

Effective daily standups are short. They should be interactive and track team members’ progress without controlling every aspect of it. Done well, they deliver the following benefits:

Improves communication

Daily standups act as a vital communication tool within agile and scrum teams. When engineering leaders ask the three standard standup questions (we will discuss them in the next section), need-to-know information is conveyed quickly and updates are distilled into clear, brief statements.

Align projects with people and sprint goals

Daily standups help keep sprint goals clear: what has been accomplished and what still needs to be addressed. This gives everyone visibility into each other’s progress and keeps the focus on the team’s common goals and sprint objectives.

Address blockers and blind spots

This doesn’t mean engineering leaders need to solve problems during the daily standup. Rather, they need to be aware of and acknowledge the challenges the team is facing, since awareness is the first step to addressing blockers and blind spots.

Fosters accountability

Standup meetings give team members a sense of accountability and ownership because they share their progress and commitments with the rest of the team. This encourages them to meet their obligations and deliver results on time.

Time management

Standup meetings allow team members to discuss their tasks and work on them according to their priorities. This helps them to set a clear daily focus, stay on track, and adapt to changing circumstances smoothly.

Different standup meeting formats to try

There are various daily standup formats you can try to keep standups from becoming monotonous and ineffective. A few of them include:

Scrum standard questions

This is a well-known standup format where three questions are asked during the daily scrum or daily standup meetings. These include:

What did you do yesterday?

This question encourages team members to share what tasks they have completed the previous day. It gives them a sense of accomplishment and an update on how much progress has been made.

What will you do today?

This question allows team members to outline their plans and tasks for the current workday. This lets them prioritize their work according to scrum planning and align with other individuals.

Any blockers or impediments preventing you from doing your work?

Team members can discuss the blockers and challenges they are facing while doing a specific task. This allows the team to address the blindspots early and ensure the team stays on track.

This format can be a good starting point for new agile and scrum teams, as it helps create small, achievable goals that everyone can share. However, ensure it doesn’t turn into another dreaded status update. It may also not suit large teams, where it can become time-consuming and unproductive.

Walk the board format

This standup format focuses on managing the work, not the people. The question asked during these meetings is simple: “What can we finish today?”

This helps the team avoid giving needless status updates just to prove they are working.

In this format, all you have to do is:

  • Review WIP (work in progress) on the scrum board.
  • Go from right to left: start with ‘What is closest to getting done?’ and end with ‘What has just been started?’ 

This is visual progress tracking: it lets team members understand tasks better and prioritize them accordingly.

However, it may be considered a rigid format, and it may not always work for remote teams.

Traffic light standup

This format focuses on the emotional state of the team members. It helps in keeping a check on how they feel about their work.

In this format, you have to ask individuals whether they are feeling red, yellow, or green (representing traffic light colors).

Red

Red means they are feeling blocked, distracted, overwhelmed, or exhausted. The reason may vary from person to person. It tells you to focus on them first and set up a one-on-one meeting. Prioritize these individuals, as they may otherwise disengage or even leave the organization.

Yellow

Yellow sits in between: team members are present yet unable to focus fully. They are probably facing minor issues or delays that need to be addressed early, and it may signify that they are looking for help or collaboration.

Green

Green signifies that team members are feeling happy, energized, and confident at their workplace. The reasons may vary: work is going as per the sprint plan, they are aligning well with their team, or they have no blockers.

Although it may not work as a daily standup on its own, you can combine it with other standup formats. Ensure you don’t use it as team therapy; rather, use it to understand the team’s mental well-being and any blockers they are facing.

Asynchronous standup

Also known as ‘async standups’ or ‘written standups’, this format has team members share their updates in written form, for example over email, Slack, or Microsoft Teams.

This lets them provide updates at a time that is convenient for them, which makes the format best suited for remote teams spread across time zones. Because the information is written down, it becomes easily searchable and accessible. However, asynchronous standups can be ineffective in some cases, such as when a high level of collaboration and problem-solving is required, or when quick feedback and immediate adaptation are critical.
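
As one possible setup, the daily prompt itself can be automated. The Python sketch below posts the three standard standup questions to a channel through a Slack incoming webhook; the webhook URL and question wording are placeholders rather than a prescribed integration:

```python
import requests

# Placeholder URL: generate a real incoming webhook in your Slack workspace.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

QUESTIONS = [
    "What did you do yesterday?",
    "What will you do today?",
    "Any blockers or impediments?",
]

def post_standup_prompt() -> None:
    """Post the written-standup prompt; teammates reply in a thread."""
    text = "*Async standup* - please reply in this thread:\n" + "\n".join(
        f"- {q}" for q in QUESTIONS
    )
    response = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
    response.raise_for_status()

if __name__ == "__main__":
    post_standup_prompt()  # schedule via cron or a CI job per time zone
```

Scheduled this way, the prompt lands at a sensible hour for each team, and replies collect in a searchable thread.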

‘Wins and blockers’ standup format

With this format, two things are in focus:

Important progress updates or accomplishments as a team

Team members share their wins, progress made, or any other positive developments since the last meeting. This encourages them to celebrate achievements and lifts morale. It also lets members acknowledge each other’s work and contributions.

Blockers or challenges preventing the team from moving toward the goal

Team members share the obstacles or challenges blocking their progress. These can be anything hindering their performance, such as technical difficulties, an unclear task, or a dependency on another team member. Surfacing them allows blockers to be addressed and resolved promptly.

These two aspects help in identifying the blind spots earlier as well as building a positive environment for team members.

However, this format may not be able to give a clear picture of the tasks the team is currently working on.

Remember to choose what suits you best. For example, if your team is small and new, you can go for the scrum standard questions; as your team grows, you can switch to other formats or customize them accordingly.

How Typo can help you

To make your task easier, you can sign up for Typo, an intelligent engineering platform focusing on developers’ productivity and well-being. 

  • With the ‘Work log’ feature, you can keep track of your team members’ tasks and progress. It also lets you check who is overworked and at risk of burnout. 
  • The ‘Sprint management’ feature lets you track the team’s progress daily. With this, important tasks are prioritized, and new work is added only after that.

These two features not only give you visibility into your team members’ work but also act as an upgrade to traditional standup meetings. Book your demo today!

Conclusion

Daily standups are a vital part of an organization. They aren’t inefficient or a waste of time if you understand the motive behind them.

Ensure that you don’t follow age-old practices blindly. Customize these standup meetings according to your team’s size, preferences, and other circumstances.

After all, the best results come from what is thoughtful and deliberate. Not what is easy and familiar.


Podcasts


The DORA Lab EP #04 | Peter Peret Lupo - Head of Engineering at Sibli

In the fourth episode of ‘The DORA Lab’ - an exclusive podcast by groCTO, host Kovid Batra engages in an enlightening conversation with Peter Peret Lupo, Head of Software Engineering at Sibli, who brings over a decade of experience in engineering management.

The episode starts with Peter sharing his hobbies, followed by an in-depth discussion on how DORA metrics play a crucial role in transforming organizational culture and establishing a unified framework for measuring DevOps efficiency. He discusses fostering code collaboration through anonymous surveys and key indicators like code reviews. Peter also covers managing technical debt, the challenges of implementing metrics, and the timeline for adoption. He emphasizes the importance of context in analyzing teams based on metrics and advocates for a bottom-up approach.

Lastly, Peter concludes by emphasizing the significant impact that each team member has on engineering metrics. He encourages individual contributors and managers to monitor both their personal & team progress through these metrics.

Timestamps

  • 00:49 - Peter’s introduction
  • 03:27 - How engineering metrics influence org culture
  • 05:08 - Are DORA metrics enough?
  • 09:29 - Code collaboration as a key metric
  • 12:40 - Metrics to address technical debt
  • 17:27 - Overcoming implementation challenges
  • 21:00 - Timeframe & process of adopting metrics
  • 25:19 - Importance of context in analyzing teams
  • 28:31 - Peter’s advice for ICs & managers

Links and Mentions 

Episode Transcript

Kovid Batra: Hi everyone. This is Kovid, back with another episode of our exclusive series, the DORA Lab, where we will be talking about all things DORA, engineering metrics, and their impact, and to make today's show really special, we have Peter with us, who is currently an engineering manager at Sibli. For a big part of his career, he has been a teacher at a university and then he moved into the career of engineering management and currently, holds more than 10 plus years of engineering management experience. He has his great expertise in setting up dev processes and implementing metrics, and that's why we have him on the show today. Welcome to the show, Peter. 

Peter Peret Lupo: Thank you. 

Kovid Batra: Quickly, Peter, uh, before we jump into DORA metrics, engineering metrics and dev processes, how it impacts the overall engineering efficiency, we would love to know a little bit more about you. What I have just spoken is more from your LinkedIn profile. So we don't know who the real Peter is. So if you could share something about yourself, your hobby or some important events of your life which define you today, I think that would be really great. 

Peter Peret Lupo: Well, um, hobbies I have a few. I like playing games, computer, VR, sort of like different styles, different, uh, genres. Two things that I'm really passionate about are like playing and studying. So I do study a lot. I've been like taking like one hour every day almost to study new things. So it's always exciting to learn new stuff. 

Kovid Batra: Great, great. 

Peter Peret Lupo: I guess, a big nerd here! 

Kovid Batra: Thank you so much. Yeah. No, I think that that's really, uh, what most software developers and engineering managers would be like, but good to know about you on that note.

Apart from that, uh, Peter, is there anything you really love or would you like to share any, uh, event from your life that you think is memorable and it defines you today who you are? 

Peter Peret Lupo: Well, that's a deep question. Um, I don't know, I guess like, one thing that was like a big game changer for me was, uh, well, I'm Brazilian, I came to Canada, now I'm Canadian too. Um, so I came to Canada like six years ago, and, uh, it has been transformational, I think. Like cultural differences, a lot of interesting things. I feel more at home here, to be honest. Uh, but like, yeah, uh, meeting people from all over the world, it's been a great experience. 

Kovid Batra: Great, great. All right, Peter. So I think, first of all, thanks a lot for that short, sweet intro about yourself. From this point, let's move on to our main topic of today, uh, which is around the engineering metrics and DORA metrics. Before we deep dive, I think the most important part is why DORA metrics or why engineering metrics, right? So I think let's start from there. Why these engineering metrics are important and why people should actually use it and in what situations? 

Peter Peret Lupo: I think the DORA metrics are really important because it's kind of changing the culture of many organizations, like a lot of people were already into, uh, measuring. Measuring like performance of processes and all, but, uh, it was kind of like, sometimes it wasn't like very well seen that people were measuring processes and people took it personally and it's like all sort of things. But nowadays, people are more used to metrics. DORA metrics is like a very good framework for DevOps metrics, and so widespread nowadays, it's kind of like a common language, a common jargon, like when you talk about things like mean lead time for changes, everybody knows that, everybody knows how to calculate that. I guess that's like the first thing, like the changing the culture about measuring and measuring is really important because it allows you to, uh, to establish a baseline and compare the results of your changes to where you were before and, uh, affirm if you actually have improved, if something got worse with your changes, if your, the benefits of your changes are aligned with the organizational goals. It allows everybody to be engaged at some level to, uh, reaching the organizational goals. 

Kovid Batra: Makes sense. Yeah, absolutely. I think when we always talk about these metrics, most of the people are talking about the first-level DORA metrics, which is your lead time for changes or cycle time, or the deployment frequency, change failure rate, mean time to restore. These metrics define a major part of how you should look at engineering efficiency as a manager, as a leader, or as a part of the team. But do you think is it sufficient enough? Like looking at just the DORA metrics, does it sound enough to actually look at a team's efficiency, engineering efficiency? Or do you think beyond DORA that we should look at metrics that could actually help teams identify other areas of engineering efficiency also? 

Peter Peret Lupo: Well, um, one thing that I like about our metrics is that it lets us start the culture of measuring. However, I don't see that as like the only source of information, like the only set of metrics that matter. I think there are a lot of things that are not covered in DORA metrics. The way that I see, it's like it's a very good subset for DevOps, it covers many different aspects of DevOps, and that's important because when you wanna measure something, it's important to measure different aspects because if you are trying to improve something, you want to be able to detect like side effects that may be negative on other aspects. So it's important to have like a good framework. However, it's focused a lot on DevOps, and, uh, I'll tell you, like, if you are on a very large organization with a lot of developers pushing features, like many changes daily, and your goal is to be able to continuously deliver them and be able to roll back them and assess like the time to restore the service when something breaks down. That's good, that's very, very interesting. And so I think it's very aligned with like what Google does. Like it's a very big corporation, uh, with a lot of different teams. However, context matters, right? The organizational context matters. Not all companies are able, for instance, to do continuous delivery. And sometimes in our matter of like what the company wants or their capability, sometimes their clients don't want that, like if you have like banks as clients, they don't want you to be changing their production environments every like 12 hours or so. Uh, they want like big phases, uh, releases where they can like do their own testing, do their own validation sometimes. So it's fundamentally different. 

In terms of, uh, the first part of it, because when you get to DevOps and you get to like delivery stuff into production, things were already built, right? So building is also something that you should be looking at. So DORA metrics provide a good entry point to start measuring, but you do need to look at things like quality, for instance, because if you're deploying something and you're rolling back, and I want to make a parenthesis there, if you're measuring deployment frequency, you should be telling those apart because rolling back a feature is not the same as, like, deploying a feature. But if you're rolling back because something wasn't built right, wasn't built correctly, there's a defect there. DORA metrics won't allow you to understand the nature of the defect, where you got into, like, got into, like the requirements and continue what's propagated to codes and tests, or if somebody made a mistake on the codes, like it doesn't allow you for this level of understanding of the nature of your defects or even productivity. So if you're not in a scenario where you do have a lot of teams, you do have a lot of like developers pushing codes, code changes all the time. Uh, maybe your bottleneck, maybe your concerns are actually on the development side. So you should be looking at metrics on that side, like code quality, or product quality in general, defect density, uh, productivity, these sorts of things. 

Kovid Batra: I think great point there. Uh, actually, context is what is most important and DORA could be the first step to look into engineering efficiency in general, but the important, or I should say the real point is understanding the context and then applying the metrics and we would need metrics which are on DORA also. Like, as you mentioned, like there would be scenarios where you would want to look at defect density, you would want to look at code quality, and from that, uh, I think one of the interesting, uh, metrics that I have recently come across is about code collaboration also, right? So people look at how well the teams are collaborating over the code reviews. So that also becomes an essential part of when you're shipping your software, right? So the quality gets impacted. The velocity of the delivery gets impacted. Have you encountered a scenario where you wanted or you had measured code review collaboration within the team? And if you did so, uh, how did you do it? 

Peter Peret Lupo: Yes, actually in different ways. So one thing that I like to do, it's more of a qualitative measurement, but I do believe there is space for this kind of metric as well. One thing that I like doing, that I'm currently doing, and I've done in other companies as well, is taking some part of the Sprint's retrospective to share with the team, results of a survey. And one of the things that I do ask on the survey is if they're being supported by team members, if they're supporting team members. So it's just like a Likert Scale, like 1 to 5, but it highlights like that kind of collaboration support. 

Kovid Batra: Right.

Peter Peret Lupo: Um, it's anonymous, so I can't tell like who is helping who. Uh, so sometimes somebody's, like, being very, like being helped a lot, and sometimes some other person is helping a lot. And maybe they switch, depending on like whether or not they're working on something that they're familiar with and the other person isn't or vice versa, I don't know. I have no means to do that, and I don't bother about that. Nobody should be bothering about that. I think if you have like a very senior person, they're probably like helping a lot of people and maybe they're not pushing many changes, but like everybody relies on them. Uh, so if you're working on the same, you should be measuring the team, right? But there are other things as well, like, um, you can see like comments on code reviews, who jumps to do code reviews, and all those kinds of things, right? These are very important indicators that they have like a healthy team, that they're supporting each other. You can even like indicate some things like if people are getting, uh, are learning more about the codes component they are changing or like some, like a service or whatever area, how you define it, uh, if you have like knowledge silos and, um, who should be providing training to whom to break out those silos to improve productivity. So yeah, that's very insightful and very helpful. Yeah, definitely. 

Kovid Batra: Makes sense, makes sense Um, is there anything that you have used, uh, to look at the technical debt? So that is also something I have, uh, always heard from managers and leaders. Like when you're building, whether you are a large organization or you are a small one moving faster, uh, the degree might vary, but you accumulate technical debt over a period of time. Is there something that you think could be looked at as a metric to indicate that, okay, it's high time now, that we should look at technical debt? Because mostly what happens is like whenever there are team meetings, people just come up with ideas that, okay, this is what we can improve, this is where we are facing a lot of bugs and issues. So let's work on this piece because this has now become a debt for us, but is there something objective that could tell that yes, now it's time that we should sit down and look at the technical debt part? 

Peter Peret Lupo: Well, uh, the problem is like, there are so many, uh, different approaches to technical debt. They're going to be more suited to one organization or another organization. If you have like a very, uh, engineering-driven organization, you tend to have less technical debt or you tend to pay that technical debt more often. But if it's not the case, if it's like more product-driven, you tend to accumulate those more often, and then you need to apply different approaches. So, one thing that I like doing is like when we are acquiring the debt; and that's normal, that's part of life. Sometimes you have to, and you should be okay with that. But when we are acquiring debt, we catalog it somewhere. Maybe you have like an internal wiki or something, like whatever documentation tool you use. You add that to a catalog where you basically have like your components or services or however you split your application. And then like what's the technical data you're acquiring, which would be the appropriate solutions or alternatives, how that's going to impact, and most importantly, when you believe you should pay that so you don't get like a huge impact, right? 

Kovid Batra: Right. Of course. So just one thing I recently heard from one of my friends. Like they look at the time for a new developer to do the first commit as an indicator of technical debt. So if they.. First release, actually. So if someone who is joining new in the team, if they're taking too much time to reach a point where they could actually merge their code, and like have it on production, uh, if that is high and they, of course, keep a baseline there, then they consider that there is a lot of debt they might have accumulated because of which the learning and the implementation for the first release from a new developer is taking time. So do you think this approach could work or this approach could be inferential to what we are talking about, like the technical debt? 

Peter Peret Lupo: I think that in this particular case, there are so many confounding variables. People join the team at different seniority levels. A senior might take less time than a junior, even in a scenario where there is more technical debt. So that alone is hard to compare. Even at the same level, people join with different skills. So maybe you have like a feature you need to write frontend and backend code, and some people are, uh, full stack but are more backend-inclined, more frontend-inclined. That alone will change your metric. You are waiting for one person to join that team so you can have like a new point of measurement. So you're not gonna have a lot, and there's gonna have like a lot of variation because of these confounding factors. Even that the onboarding process may change in between. The way that I usually like to introduce people to code is asking them to reduce the amount of warnings from like code linters first, and then fixing some simple defects, and then something like a more complex defect, and then writing a new feature. Uh, so, even like depending on your own onboarding strategy, your model strategy you're defining is going to affect that metric. So I wouldn't be very confident on that metric for this purpose. 

Kovid Batra: Okay. Got it, got it. Makes sense. All right. I think if I have to ask you, uh, it's never easy, right? Like in the beginning, you mentioned that the first point itself is talking about these metrics is hard, right? Even if they make a lot of practical sense, but talking about it is hard. So when there is inherited resistance towards this topic, uh in the teams, when you go about implementing it, there could be a hell of a lot of challenges, right? And I'm sure, you would have also come across some of those in your journey when you were implementing it. So can you give us some examples from the implementation point of view, like how does the implementation go for, uh, these metrics and what are the challenges one faces when they're implementing it? And maybe if there are certain timelines to which one should look at for a full-fledged implementation and getting some success from the implementation of these metrics. 

Peter Peret Lupo: Right. So, um, usually you're measuring something because you want to prove something, right? Because you want to like achieve like a certain goal, uh, maybe organizational, or just like the team. So I think that the first thing to lower, uh, the resistance is having a clear goal, and making sure that everybody understands that, uh, that the goal is not measuring anybody, uh, individually. That already like reduces the resistance a lot, and making sure that people understand why that goal is important and how you're going to measure in it is also extremely important.

Another thing that is interesting is to ask people for inputs on like how they think you could be measuring that. So making them also part of the process, and maybe the way that they're advising is not going to be like the way that you end up measuring. Maybe it influences, maybe it's exactly what they suggest, but the important thing is to make them part of the process, so they don't feel that the process, like the process of establishing metrics is not something that is being done to them, but something that they are doing with everybody else. 

And so honestly, like so many things are already measured by the team, uh, velocity or however they estimate productivity. Even the estimates themselves are on like tickets on user stories or, uh, these are all, uh, attempts to measure things and they're used to compare the destinations with, uh, the actual results, so they know what the measures are used for. So sometimes it's just a matter of like establishing these parallels. Look, we measure productivity, we measure velocity to see if we are getting better, if we're getting worse. We also need to measure, uh, the quality to see if we're like catching more defects than before, if we have like more escape defects. Measurement is in some way already a part of our lives. Most of the times, it's a matter of like highlighting that, and, uh, people are usually comfortable with them, yeah, once you go through all this. 

Kovid Batra: Right. Makes sense. Um, I think the major part is done when the team is aligned on the 'why' part, like why you are doing it, because as soon as they realize that there is some importance to measuring this metric, they would automatically be, uh, intuitively be aligned towards measuring that, and it becomes easier because then if there are challenges related to the implementation process also, they would like come together and maybe find out ways to, uh, build things around that and help in actual measurement of the metric also.

But if I have to ask, let's say a team is fully aligned and, uh, we are looking at implementing, let's say DORA metrics for a team, what should be the time frame one should keep in mind to get an understanding of what these metrics are saying? Because it's not like straightforward. You look at the common frequency, if it's high, you say things are good. If it's low, things are bad. Of course, it doesn't work like that. You have to understand these metrics in the first place in the cadence of your team, in the situation of your team, and then make sense out of it and find out those bottlenecks or those areas of inefficiency where you could really work upon, right? So what should be that time frame in one's mind that someone is an engineering manager who is implementing this for a team? What time frame should that person keep in mind and what exactly should be the first step towards measuring these once you start implementing them? 

Peter Peret Lupo: Right. So it's a very good question. Time frame varies a lot and I'll tell you why; because more important than the time is the amount of data points that you have. If you wait for, like let's say, a month and you have like three data points, you can't establish any sort of pattern. You don't know if that's increasing, decreasing. There's no confidence. There's no statistical relevance. It's, like, the sample is too small. But like if you collect, like three data points, that's like generic for any metric. If you collect, like three data points every day, maybe in a week you'll have enough. The problem I see here is like, let's say, uh, something happens that is out of the ordinary. I want to investigate that to see if there is room for improvement there, or if that actually indicates that something went like really well and you want to replicate that success in the other cases. Um, you can't tell what's out of the ordinary if you're looking at three, four points. 

Kovid Batra: Right. 

Peter Peret Lupo: Uh, or if it's just like normal variation. So, I think that what's important is to have like a good baseline. So, that's gonna vary from process to process, from organization to organization, but there are some indications in the literature that like you should collect at least 30 data points. I think that with 30 data points you have like somewhat of a good, uh, statistical relevance for it, for your analysis, but I don't think you should, you have to wait for those many points in order to start investigating things. Sometimes you have like 10 or 12 and you already see something that looks like something that you should investigate or you start having like an idea of what's going on, if it's higher than you expected, if it's lower than you expected, and you can start taking actions and investigating that as long as you consider that your interpretation may not be valid, because like your sample is small. The time that it takes, like time frame, I guess that's going to depend on how often you are able to collect a new data point, and that's going to vary from organization to organization and from process to process, like measuring quality is different from measuring productivity, uh, and so on. So, I think all these things need to be taken into consideration. I think that the confidence is important. 

And one other thing that you mentioned there, about like the team analyzing. It's something that I want to touch on because it's an experience that I've had more than once. You mentioned context. Context is super important. So much so that I think that the team that is producing the metrics should be the first one looking at them, not management, higher management, C-level, not them, because they are the only ones that are able to look at data points and say, "Yeah, things here didn't go well. Our only QA was on vacation." Or like somebody took a sick day or whatever reason, like they have the context. So they should be the first ones looking at the metric, analyzing the metric, and conveying the results of their analysis to higher levels, not the other way around, because what happens when you have it the other way around is that, like, they don't have the context, so they're looking at just the numbers, and if the number is bad, they're gonna inquire about it. If it's good, they are usually gonna stay quiet, uh, and they're gonna ask about the bad numbers, whether or not there was a good reason for that, whether or not it was like, uh, let's say, an exception. And then the team is going to feel that they have to defend themselves, to justify themselves every time, and it creates like a very poisonous scenario where the team feels that management is there to question them and they need to defend themselves against management instead of them having the autonomy to report on their success and their failures to management and let management deal with those results instead of the causes. 

Kovid Batra: Totally, totally. 

Peter Peret Lupo: Context is super important. 

Kovid Batra: Great point there. Yeah, of course. Great point there, uh, highlighting the do's and don'ts from your experience and it's very relevant actually because the numbers don't always give you the reality of the situation. They could be an indicator and that's why we have them in place. Like first thing, you measure it. Don't come to a conclusion from it directly. If you see some discrepancy, like if there are some extraordinary data points, as you said, then there is a point which you should come out and inquire to understand what exactly happened here, but not directly jump onto the team saying that, Oh, you're not doing good or the other way around. So I think that that totally makes sense, uh, Peter. 

I think it was really, really interesting talking to you about the metrics and the implementation and the experiences that you have shared. Um, we could go on on this, but today I think we'll have to stop here and, uh, say goodbye to you. Maybe we can have another round of discussion continuing with those experiences that you have had with the implementation.

Peter Peret Lupo: Definitely. It was a real pleasure. 

Kovid Batra: It would be our pleasure, actually. But, uh, like before you leave, uh, anything that you want to share with our audience as parting advice, uh, would be really appreciated. 

Peter Peret Lupo: All right. Um, look at your metrics as an ally, as a guide to tell you where you're going. Compare what you're doing now with what you were doing before to see if you're improving. When I say 'you', I'm talking to, uh, each individual in the team. Consider your team metrics, look at them, your work is part of the work that is being analyzed, and you have an influence on that at an individual level and with your team. So do look at your metrics, compare where you are at with where you were before to see if your changes were improved, see if your changes, uh, carried improvements you're looking for, and talk to your team about these metrics on your sprint retrospective. That's a very powerful tool to tell you, like, if your, uh, retrospective actions are being effective in delivering the change that you want in your process.

Kovid Batra: Great! I think great piece of advice there. Thanks, Peter. Thank you so much. Uh, this was really insightful. Loved talking to you. 

Peter Peret Lupo: All right. Thank you.

The DORA Lab EP #03 | Ben Parisot - Engineering Manager at Planet Argon

In the third episode of ‘The DORA Lab’ - an exclusive podcast by groCTO, host Kovid Batra has an engaging conversation with Ben Parisot, Software Engineering Manager at Planet Argon, with over 10 years of experience in engineering and engineering management.

The episode starts with Ben offering a glimpse into his personal life. Following that, he delves into engineering metrics, specifically DORA & the SPACE framework. He highlights their significance in improving the overall efficiency of development processes, ultimately benefiting customers & dev teams alike. He discusses the specific metrics he monitors for team satisfaction and the crucial areas that affect engineering efficiency, underscoring the importance of code quality & longevity. Ben also discusses the challenges faced when implementing these metrics, providing effective strategies to tackle them.

Towards the end, Ben provides parting advice for engineering managers leading small teams emphasizing the importance of identifying & utilizing metrics tailored to their specific needs.

Timestamps

  • 00:09 - Ben’s Introduction
  • 03:05 - Understanding DORA & Engineering Metrics
  • 07:51 - Code Quality, Collaboration & Roadmap Contribution
  • 11:34 - Team Satisfaction & DevEx
  • 16:52 - Focus Areas of Engineering Efficiency
  • 24:39 - Implementing Metrics Challenges
  • 32:11 - Ben’s Parting Advice

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo, and today's episode is a bit special. This is part of The DORA Lab series and this episode becomes even more special with our guest who comes with an amazing experience of 10 plus years in engineering and engineering management. He's currently working as an engineering manager with Planet Argon. We are grateful to have you here, Ben. Welcome to the show. 

Ben Parisot: Thank you, Kovid. It's really great to be here. 

Kovid Batra: Cool, Ben. So today, I think when we talk about The DORA Lab, which is our exclusive series, where we talk only about DORA, engineering metrics beyond DORA, and things related to implementation of these metrics and their impact in the engineering teams. This is going to be a big topic where we will deep dive into the nitty gritties that you have experienced with this framework. But before that, we would love to know about you. Something interesting, uh, about your life, your hobby and your role at your company. So, please go ahead and let us know. 

Ben Parisot: Sure. Um, well, my name is Ben Parisot, uh, as you said, and I am the engineering manager at Planet Argon. We are a Ruby on Rails agency. Uh, we are headquartered in Portland, Oregon in the US but we have a distributed team across the US and, uh, many different countries around the world. We specifically work with, uh, companies that have legacy rails applications that are becoming difficult to maintain, um, either because of outdated versions, um, or just like complicated legacy code. We all know how the older an application gets, the more complex and, um, difficult it can be to work within that code. So we try to come in, uh, help people pull back from the brink of having to do a big rewrite and modernize and update their applications. 

Um, for myself, I am an Engineering Manager. I'm a writer, parts, uh, very, very non-professional musician. Um, I like to read, I really like comic books. I currently am working on a mural for my son, uh, he's turning 11 in about a week, and he requested a giant Godzilla mural painted on his bedroom wall. This is the first time I've ever done a giant mural, so we'll see how it goes. So far, so good. Uh, but he did tell me that, uh, he said, "Dad, even if it's bad, it's just paint." So, I think that.. Uh, still trying to make it look good, but, um, he's, he's got the right mindset, I think about it. 

Kovid Batra: Yeah, I mean, I have to appreciate you for that and honestly, great courage and initiative from your end to take up this for the kid. I am sure you will do a great job there. All the best, man. And thanks a lot for this quick, interesting intro about yourself. 

Let's get going for The DORA Lab. So I think before we deep dive into, uh, what these metrics are all about and what you do, let's have a quick definition of DORA from you, like what is DORA and why is it important and maybe not just DORA, but other metrics, engineering metrics, why they are important. 

Ben Parisot: Sure. So my understanding of DORA is sort of the classical, like it's the DevOps Research and Assessment. It was a think tank type of group just to, I can't remember the company that they started with, but it was essentially to improve productivity specifically around deployments, I believe, and like smoothing out some deployment, uh, and more DevOps-related processes, I think. But, uh, it's essentially evolved to be more about engineering metrics in a broader sense, still pretty focused on deployment. So specifically, like how fast can teams deploy code, the frequency of those deployments and changes, uh, to the codebase. Um, and then also metrics around failures and response to failures and how fast people, uh, or engineering teams can respond to incidents. 

Beyond DORA, there's of course the SPACE framework, which is a little bit broader and looks at some of the more day-to-day processes involved in software engineering, um, and also developer experience. We at Planet Argon, we do a little bit of DORA. We focus mainly on more SPACE-related metrics, um, although there's a bunch of crossover. For us, metrics are very important both for, you know, evaluating the performance of our team, so that we can, you know, show value to our clients and prove, you know, "Hey, this is the value that we are providing beyond just the deliverable." Sometimes, because of the nature of our work, we do a lot of work on, like, the backend or improvements that are not necessarily super-apparent to an end user or even, you know, the stakeholder within the project. So having metrics that we can show to our clients to say, "Hey, this is, um, you know, this is improving our processes and our team's efficiency and therefore, that's getting more value for your budget because we're able to move faster and accomplish more." That's a good thing. Also, it's just very helpful to, you know, keep up good team morale and for longevity's sake, like, engineers on our team really like to know where they stand. They like to know how they're doing. Um, they like to have benchmarks on which they can, uh, measure their own growth and know where in sort of the role advancement phase they are based on some, you know, quantifiable metric that is not just, you know, feedback from their coworkers or from clients. 

Kovid Batra: Yeah, I think that's totally making sense to me and while you were talking about the purpose of bringing these metrics in place and going beyond DORA also, that totally relates to the modern software development processes, because you just don't want to restrict yourself to a certain part of engineering efficiency when you measure it, you just don't want to look at the lead time for change, or you just don't want to look at the deployment frequency. There are things beyond these, which are also very important and become, uh, the area of inefficiency or bottlenecks in the team's overall delivery. So, just for example, I mean, this is a question also, whether there is good collaboration between the team or not, right? If there is no good code collaboration, that is a big bottleneck, right? Getting reviews done in a proper way where the quality of the base is intact, that really, really matters. Similarly, if you talk about things like delivery, when you're delivering the software from the planning phase to the actual feature rollout and users using it, so cycle time probably in DORA will cover that, but going beyond that space to understand the project management piece and making sure how much time in total goes into it is again an aspect. Then, there are areas where you would want to understand your team satisfaction and how much teams are contributing towards the roadmap, because that's also how you define whether you have accumulated too much technical debt or there are too many bugs coming in and the team is involved right over there. 

And an interesting one which I recently came across was someone was measuring that when new engineers are getting onboarded, uh, how much time does it take to get to the first commit, right? So, these small metrics really matter in defining what the overall efficiency of the engineering or development process looks like. So, I just wanted to understand from you, just for example, as I mentioned, how do you look at code collaboration or how do you look at, uh, roadmap contribution or how do you look at initial code quality, deliverability, uh, when it comes to your team. And I understand like you are a software agency, maybe a roadmap contribution thing might not be very relevant. So, maybe we can skip that. But otherwise, I think everything else would definitely be relevant to your context. 

Ben Parisot: Sure. Yeah, being an agency, we work with multiple different clients, um, different repos in different locations even, some of them in GitHub, Bitbucket, um, GitLab, like we've got clients with code everywhere. Um, so having consistent metrics across all of like the DORA or SPACE framework is pretty difficult. So we've been able to piecemeal together metrics that make sense for our team. And as you said, like a lot of the metrics, they're for productivity and efficiency sake for sure, but they also then, if you like dig one level deeper, there is a developer experience metric below that. Um, so for instance, PR review, you know, you mentioned, um, like turnaround time on PRs, how quickly are people that are being assigned to review getting to it, how quickly are changes being implemented after a review has been submitted; um, those are, on the surface level, very productivity-driven metrics. We want our team to be moving quickly and reviewing things in a timely manner. But as you mentioned, like a slow PR turnaround time can be a symptom of bad communication and that can lead to a lot of frustration, um, and even like disagreement amongst team members. So that's a really like developer satisfaction metric as well, um, because we want to make sure no one's frustrated with any of their coworkers, uh, or bottlenecked and just stuck not knowing what to do because they have a PR that hasn't been touched. 

We use a couple of different tools. We're luckily a pretty small team, so my job as a manager in collecting all this data from the metrics is doable for now, not necessarily scalable, but doable with the size of our team. I do a lot of manual data collection, and then we also have various third-party integrations and sort of marketplace plugins. So, we work a lot in GitHub, and we use some plugins in GitHub to help give us some insight into, for instance, like you said, commit time, or number of commits within a PR, size of those commits. You know, we have an engineering handbook that has a lot of our, you know, agreed-upon best practices, and those are generally in place so that our developers can be more efficient and happy in their work. So, you know, it can feel a little nitpicky to be like, "Oh, you only had two commits in this giant PR." Like, if the work's getting done, the work's getting done. However, you know, good commits, best practice. We try to practice atomic commits here at Planet Argon. That is going to, you know, not only, like, create easier rollbacks if necessary; there's just a lot of reasons for our best practices. So the metrics try to enforce the best practices that we have in mind already, or that we have in place already. And then, yeah, uh, you asked what other tools, or? 
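
For readers who want to see what pulling a turnaround metric like the one Ben describes might look like, here is a minimal sketch against the GitHub REST API. The `acme/legacy-app` repo name is hypothetical, and the sketch assumes a token in a `GITHUB_TOKEN` environment variable; it is an illustration of the idea, not Planet Argon's actual tooling.

```python
# Minimal sketch: time from PR creation to its first submitted review,
# via the GitHub REST API. Repo name and token are assumptions.
import os
from datetime import datetime

import requests

API = "https://api.github.com"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def parse(ts: str) -> datetime:
    # GitHub timestamps look like "2024-05-01T12:34:56Z".
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

def review_turnaround_hours(repo: str, limit: int = 30) -> list:
    """Hours from each recent PR's creation to its first review."""
    prs = requests.get(
        f"{API}/repos/{repo}/pulls",
        params={"state": "closed", "per_page": limit},
        headers=HEADERS,
    ).json()
    hours = []
    for pr in prs:
        reviews = requests.get(
            f"{API}/repos/{repo}/pulls/{pr['number']}/reviews",
            headers=HEADERS,
        ).json()
        stamps = [parse(r["submitted_at"]) for r in reviews if r.get("submitted_at")]
        if stamps:
            delta = min(stamps) - parse(pr["created_at"])
            hours.append(delta.total_seconds() / 3600)
    return hours

if __name__ == "__main__":
    times = review_turnaround_hours("acme/legacy-app")  # hypothetical repo
    if times:
        print(f"average first-review turnaround: "
              f"{sum(times) / len(times):.1f}h across {len(times)} PRs")
```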

Kovid Batra: So, yeah, I mean talking specifically about the team satisfaction piece. I think that's very critical. Like, that's one of the fundamental things that should be there in the team so that you make sure the team is actually productive, right? From what you have explained, uh, the kind of best practices you're following and the kind of things that you are doing within the team reflect that you are concerned about that. So, are there any specific metrics there which you track? Can you like name a few of them for us? 

Ben Parisot: Sure, sure. Um, so for team satisfaction, we track the following metrics. We track build processes, code review, deep work, documentation, ease of release, local development, local environment setup, managing technical debt, review turnaround, uh, roadmap and priorities, and test coverage and test efficiency. So these are all sentiment metrics. I use them from a management perspective to not only get a feeling of how the team is doing in terms of where their frustrations lie, but I also use it to direct my work. If I see that some of these metrics or some of these areas of focus are receiving like consistently low sentiment scores, then I can brainstorm with the team, bring it to an all-hands meeting to be like, "Here's some ideas that I have for improving these. What is your input? What does a reasonable timeline look like?" And then, show them that, you know, their continued participation in these, um, these surveys, these developer experience surveys is leading to results that are improving their work experience. 

Kovid Batra: Makes sense. Out of all these metrics that you mentioned, which are those top three or four, maybe? Because it's very difficult to, uh, look at 10, 12 metrics every time, right? So.. 

Ben Parisot: Yes. 

Kovid Batra: There is a go-to metric or there are a few go-to metrics that quickly tell you okay, what's going on, right? So for me, sometimes what I basically do is like if I want to see whether the initial code quality is coming out good or not, I'm mostly looking at how many commits are happening after the PRs are raised for review and how many comments were there. So when I look at these two, I quickly understand, okay, there is too much to and fro happening and the quality initially is not coming out well. But in the case of team satisfaction, of course, it's a very feeling, qualitative-driven, uh, piece we are talking about. But still, if you have to objectify it with, let's say three or four metrics, what would be those three or four important metrics that you think impact the developer's experience or developer's satisfaction in your team? 

Ben Parisot: Sure. So we actually have like 4 KPIs that we track in addition to those sentiment metrics, and they are also sort of sentiment metrics as well, but they're a little higher level. Um, we track weekly time loss, ease of delivery, engagement, uh, and perceived productivity. So we feel like those touch pretty much all of the different aspects of the software development life cycle or the developer's day-to-day experience. So, ease of delivery, how, you know, how easy is it for you to be, uh, deploying your code, um, that touches on any bottlenecks in the deployment pipelines, any issues with PRs, PR reviews, that sort of thing. Um, engagement speaks to how excited or interested people are about the work that they're doing. So that's the more meat of the software development work. Um, perceived productivity is how, you know, how well you think you are being productive or how productive you feel like you are being. Um, and that's really important because sometimes the hard metrics of productivity and the perceived productivity can be very different, and not only for like, "Oh, you think you're being very productive, but you're not on paper." Um, oftentimes, it's the reverse where someone feels like they aren't being productive at all and I can go, and I know that from their sentiment score. Um, but then I can go and pull up PRs that they've submitted or work that they've been doing in JIRA and just show them a whole list of work that they've done. I feel like sometimes developers are very in the weeds of the work and they don't have a chance to step back and look at all that they've accomplished. So that's an important metric to make sure that people are recognizing and appreciating all of the work and their contributions to a project and not feeling like, "Oh, this one ticket, I haven't been very productive on. So, therefore, I'm not a very good developer." Uh, and then finally, weekly time loss is a big one. This one is more about everything outside of the development work. So this also focuses on like, how often are you in your flow? Are you having too many meetings? Do you feel like, you know, the asynchronous communication that is just the nature of our distributed team? Is that blocking you? And are you being, you know, held up too much by waiting around for a response from someone? So that's an important metric that we look at as well. 
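
As an illustration of how sentiment KPIs like these might be rolled up, here is a toy sketch. The survey rows, the 1-to-5 scale, and the alert threshold are all invented for the example, not Planet Argon's actual setup.

```python
# Toy sketch: average hypothetical sentiment-survey scores per KPI and
# flag the low ones for discussion. All data here is illustrative.
from statistics import mean

# Hypothetical weekly survey responses, scored 1 (poor) to 5 (great).
responses = [
    {"ease_of_delivery": 4, "engagement": 5, "perceived_productivity": 2, "weekly_time_loss": 3},
    {"ease_of_delivery": 3, "engagement": 4, "perceived_productivity": 2, "weekly_time_loss": 2},
    {"ease_of_delivery": 4, "engagement": 4, "perceived_productivity": 3, "weekly_time_loss": 3},
]

THRESHOLD = 3.0  # assumed cutoff: scores below this get raised at all-hands

def kpi_averages(rows: list) -> dict:
    """Average each KPI column across all survey responses."""
    keys = rows[0].keys()
    return {k: round(mean(r[k] for r in rows), 2) for k in keys}

averages = kpi_averages(responses)
for kpi, score in sorted(averages.items(), key=lambda kv: kv[1]):
    flag = "  <-- discuss with the team" if score < THRESHOLD else ""
    print(f"{kpi:22s} {score:.2f}{flag}")
```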

Kovid Batra: Makes sense. Thanks. Thanks for that detailed overview. I think team satisfaction is, of course, something that I also really, really care about. Beyond that, what do you think are those important areas of engineering efficiency that really impact the broader piece of efficiency? So, uh, just to give you an example, are you focusing mostly in your teams on deliverability, or are you focusing more on, uh, the quality of the work, or is it something related to maybe sprints? I'm really just throwing out ideas here to understand from you how you, uh, look at which are those important areas of engineering efficiency other than developer satisfaction. 

Ben Parisot: Yeah. I think, right. I think, um, for our company, we're a little bit different even than other agencies. Companies don't come to us often for large new feature development. You know, as I mentioned at the top of the recording, we inherit really old applications. We inherit applications that have, you know, the developers have just given up on. So a lot of our job is modernizing and improving the quality of the code. So, you know, we try to keep our deployment metrics, you know, looking nice and having all of the metrics around deployment and, uh, post-deployment, obviously. Um, but from my standpoint, I really focus on the quality of the code and sort of the longevity of the code that the team is writing. So, you know, we look at coding practices at Planet Argon, we measure, you know, quality in a lot of different ways. Some of them are, like I said earlier, like practicing atomic commits, size of PRs. Uh, because we have multiple projects that people are working on, we have different levels of understanding of those projects. So there's like, you know, some people that have a very high domain knowledge of an application and some people that don't. So when we are submitting PRs, we try to have more than one person look at a PR and one person is often coming with a higher domain knowledge and reviewing that code as in, uh, does it satisfy the requirements? Is it high-quality code within the sort of ecosystem of that existing application? And then, another person is looking at more of the, the best practices and the coding standards side of it, and reviewing it just from a more, a little more objective viewpoint and not necessarily as it's related to that.

Let's see, I'm trying to find some specific metrics around code quality. Um, commits after a PR submission is one, you know; if we are finding that our team is often submitting a PR and then having to go back and work a lot more on it and change a lot more things, that means that those PRs are probably not ready or they're being submitted a little early. Sometimes that's a reflection of the developer's understanding of the task or of the code. Sometimes it's a reflection of the clarity of the issue that they've been assigned or the requirements. You know, if the client hasn't very clearly defined what they want, then we submit a PR and they're like, "Oh, that's not what I meant," so that's an important one that we look at. And then, PR approval time, I would say is another one. Again, that one is both for our clients because we want to be moving as quickly with them as we can, even though we don't often work against hard deadlines. We still like to keep a nice pace and show that our team is active on their projects. And then, it's also important for our team because nobody likes to be waiting around for days and days for their PR to be reviewed. 
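
Here is a similar hedged sketch for the "commits after PR submission" signal Ben describes. As with the earlier sketch, the repo, PR number, and token are assumptions, and rebases can skew commit dates, so treat the count as a rough signal rather than a precise measure.

```python
# Sketch: count commits whose author date is later than the PR's
# creation, i.e. rework pushed during review. Names are hypothetical.
import os
from datetime import datetime

import requests

API = "https://api.github.com"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

def commits_after_open(repo: str, pr_number: int) -> int:
    """Commits authored after the PR was opened, a rough rework signal."""
    pr = requests.get(
        f"{API}/repos/{repo}/pulls/{pr_number}", headers=HEADERS
    ).json()
    opened = parse(pr["created_at"])
    commits = requests.get(
        f"{API}/repos/{repo}/pulls/{pr_number}/commits", headers=HEADERS
    ).json()
    return sum(1 for c in commits if parse(c["commit"]["author"]["date"]) > opened)

# A consistently high count across PRs may mean work is submitted before
# it is ready, or that the requirements were unclear, as Ben notes.
print(commits_after_open("acme/legacy-app", 42))  # hypothetical repo and PR
```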

Kovid Batra: Makes sense. I think, yeah, uh, these are some critical areas and almost every engineering team struggles with them in terms of efficiency, and what I have felt also is that not just organizations, right, but individual teams have different challenges, and for each team, you could be looking at different metrics to solve their problems. So one team would be having a low deployment frequency because of maybe not enough tooling in place and a lot of manual intervention being there, right? That's when their deployments are not coming out well or breaking most of the time. Or it could be, for another team, the same low deployment frequency could be because the developers are actually not writing or raising enough, like, PRs in a defined period of time. So there is a velocity challenge there, basically. That's why the deployment frequency is low. So most of the time, I think, for each team, the challenge would be different and the metrics that you pick would be different. So in your case, as you mentioned, like how you do it for your clients and for your teams is a different method. Cool. I think with that, I... Yeah, you were saying something. 

Ben Parisot: Oh, I was, yeah. I was gonna say, I think that, uh, also, you know, we have sort of across-the-board company best practices or benchmarks that we try to reach for a lot of different things. For instance, like test coverage or code coverage, technical debt, and because we inherit codebases in various levels of, um, quality, the metric itself is not necessarily good or bad. The progress towards a goal is where we look. So we have a code coverage metric, uh, or goal across the company of like 80, 85%, um, test coverage, code coverage. And we've inherited apps, big applications, live applications that have zero test coverage. And so, you know, when I'm looking at metrics for tests, uh, you know, it's not necessarily, "Hey, is this application's test coverage meeting our standards?" It's, "Is it moving towards our standards?" And then it also gets into the individual developers. Like, "Are you writing the tests for the new code that you're writing? And then also, is there any time carved out of the work that you have on that project to backfill tests?" And similarly, with, uh, technical debt, you know, we use a technical debt tagging tool and oftentimes, like every three months or so, we have a group session where we spend like an hour, hour and a half with our cameras off on Zoom, just going into codebases that we're currently working on and finding as much technical debt as we can. Um, and that's not necessarily like, oh, we're trying to, you know, find who's not writing the best code or, uh, you know, trying to find all the problems that previous developers caused. But it's more of, are there, you know, other areas for, like, you know, improvement, right? And also, um, are there any, like, potential risks in this codebase that we've overlooked just by going through the day-to-day? And so, the goal is not, "Hey, we need to have no technical debt ever." It's, "Are we reducing the backlog of tech debt that we're currently working within?" 
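
Here is a small sketch of that "progress towards the goal" framing, judging the trend rather than the raw number. The coverage figures and the 80% target are illustrative only.

```python
# Toy sketch: compare the latest coverage snapshot to the previous one
# and report direction of travel against a goal. Data is invented.
GOAL = 0.80

# Hypothetical monthly line-coverage snapshots for an inherited app
# that started with zero tests.
history = {"2024-01": 0.00, "2024-02": 0.08, "2024-03": 0.15, "2024-04": 0.21}

months = sorted(history)
latest, previous = history[months[-1]], history[months[-2]]
trend = latest - previous

print(f"coverage: {latest:.0%} (goal {GOAL:.0%})")
if latest >= GOAL:
    print("meeting the standard")
elif trend > 0:
    print(f"below the standard but moving towards it (+{trend:.0%} last month)")
else:
    print("below the standard and not improving; worth a team conversation")
```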

Kovid Batra: Totally, totally. And I think this again brings up that point that for every team, the need for a metric would be very different. In your case, the kind of projects you are getting, by default, have so much technical debt; that's why they're coming to you. People are not managing it and then the project is handed over to you. So, having that test coverage as a goal or a metric is making more sense for your team. So, agreed. I think I am a hundred percent in line with that. But one thing is for sure that there must be some level of, uh, implementation challenges there, right? Uh, it's not like straightforward, like you, you are coming in with a project and you say, "Okay, these are the metrics we'll be tracking to make sure the efficiency is in place or not." There are always implementation challenges that are coming with that. So, I mean, with your examples or with your experience, uh, what do you think most of the teams struggle with while implementing these metrics? And I would be more than happy to hear about some successes or maybe failures also related to your implementation experiences. 

Ben Parisot: Yeah. So I would just say the very first challenge that we face is always, um. I don't want to say team morale, but, um, the, somewhat like overwhelming nature depending on the state of the codebase. Like, if you inherit a codebase that's really large and there are no tests, that's, you know, overwhelming to think about, having to go and write all those tests, but it's also overwhelming and scary to think, "Oh, what if something breaks?" Like, a test is a really good indicator of where things might be breaking and there's none of that, so the guardrails are off. Um, and that's scary. So helping people get used to, especially newer developers who have just joined the team, helping them get used to working within a codebase that might not be as nice and clean as previous ones that they've worked with is a big challenge. In terms of actual implementation, uh, we face a number of challenges being an agency. Like I mentioned earlier, some codebases are in, um, different places like GitHub or Bitbucket. You know, obviously those tools have generally the same features and generally the same, you know, sort of dashboard type of things. Um, but if we are using any sort of like integrated tool to measure metrics around those things, if we get, um, a repo that's not on the platform where we have that integration happening, then we don't get the metrics on that, or we have to spin up a new integration. 

Kovid Batra: Yeah. 

Ben Parisot: Um, for some of our clients, we have access to some of their repos and not others, and so, like we are working in an app ecosystem where the application that we are responsible for is communicating and integrated with another application that we don't, we can't see; and so that's very difficult at times. That can be a challenge for implementing certain metrics, because we need to know, like, especially performance metrics for the application. Something might be happening on this hidden application that we don't have any control over or visibility into. 

Um, and then what else? Just, I would say, also a challenge that we face is that, um, most of our developers are working on 2 to 3 applications at a time, and depending on the length of the engagement, um, sometimes people will switch on and off. So it can be difficult to track metrics for just a single project when developers are working on it for like maybe a few weeks or a few months and then leaving. Sometimes we have like a dedicated developer who's the lead and then have a support developer come in when necessary. And so that can be challenging if we're trying to parse out, like why there might've been a shift in the metrics or like a spike in one metric or another, or a drop and be like, "Okay, well, let's contextualize that around who was working on this project, try to determine like, okay, is this telling us something important about the project itself, or is it just data that is displaying the adding or subtracting of different developers to the project?" So that can be a challenge. 

Specifically, I can mention an actual sort of case study that we had. Uh, we were using Code Climate, which is a tool that we still use. We use the quality tool for like audits and stuff. Um, but when I first started at Planet Argon, I wanted to implement its velocity tool, which is like the sister tool to quality and is, like, very heavily around cycle time. Um, and it was all great, I was very excited about it. Went and signed up, um, went and connected our GitHub accounts, or I guess I did the Bitbucket account at the time 'cause most of our repos were in Bitbucket. Um, didn't realize at the time, at least, that you could only integrate with one platform. And so, even though we had, uh, we had accounts and we had clients with applications on GitHub, we were only able to integrate with Bitbucket. So some engineers' work was not being caught in that tool at all because they were working primarily in applications on GitHub. And again, like I said, sometimes developers would then go to one of the applications in Bitbucket, help out and then drop off. So it was just causing a lot of fluctuations in data and also not giving us metrics for the entire team consistently. So we eventually had to drop it because it was just not a very valuable tool, um, in that it was not capturing all of the activities of all of our engineers everywhere they were working. Um, I wish that it was, but it's the nature of the agency work and also, you know, having people that are, um. 

Kovid Batra: Yeah, I totally agree on that point and the challenges that you're facing are actually very common, but at the same time, uh, having said that, I believe the tooling around metrics observation and metrics monitoring has come way ahead of what you have been using in Code Climate. So, the challenge still remains: most teams try to gather metrics manually, which is time-consuming, or, in your case, where agencies are working on different projects or different codebases, it's very difficult to gather the right metrics for individual developers there also. Somehow, I think the challenges are very valid, but now, the tooling that is available in the market is able to cater to all those challenges. So maybe you want to give it a try and see, uh, your metrics implementation getting in place. But yeah, I think, thanks for highlighting these pointers and I think a lot of people, a lot of engineering managers and engineering leaders struggle with the same challenges while implementing those. So totally, uh, bringing these challenges in front of the audience and talking about them would bring some level of awareness to handle these challenges as well. 

Great. Great, Ben. I think with this, uh, we would like to bring an end to our today's episode. It was really amazing to understand how Planet Argon is working and how you are dealing with those challenges of implementing metrics and how well you are actually doing, even though the right tooling or right things are not in place, but the important part is you realize the purpose. You don't probably go ahead and do it for the sake of doing it. You're actually doing it where you have a purpose and you know that this can impact the overall productivity of the team and also bring credibility with your clientele that yes, we are doing something and you have something to show in numbers also. So, I really appreciate that. 

And, like, before we say goodbye, is there parting advice or something that you would like to share with the audience? Please go ahead. 

Ben Parisot: Oh, wow! Um, yeah, sure. So I think your point about like understanding the purpose of the metrics is important. You know, my team, uh, I am the manager of a small team at a small company. I wear a lot of hats and I do a lot of different things for my team. They show me a lot of grace, I suppose, when I have, you know, incomplete data for them. Like you said, there's a lot of tools out there that can provide a more holistic, uh, look. Um, and I think that if you are an agency, uh, if you're a manager on a small team and you sort of struggle to keep up with all of the metrics that you have even promised for your team or that you know that you should be doing, uh, if you really focus on the ones that are impacting their day-to-day experience as well as like the value that they're providing for either, you know, your company's internal stakeholders or external clients, you're going to quickly see the metrics that are most important and your team is going to appreciate that you're focusing on those, and then, the rest of it is going to fall into place when it does. And when it doesn't, um, you know, your team's not going to really be too upset because they know, they see you focusing on the stuff that matters most to them. 

Kovid Batra: Great. Thanks a lot, Ben. Thank you so much for such great, insightful experiences that you have shared with us. And, uh, we wish you all the best, uh, and your kid a very happy birthday in advance. 

Ben Parisot: Thank you. 

Kovid Batra: All right, Ben. Thank you so much for your time. Have a great day. 

Ben Parisot: Yes. Thanks.

‘Evolution of Software Testing: From Brick Phones to AI’ with Leigh Rathbone, Head of Quality Engineering at CAVU

In the latest episode of ‘groCTO: Originals’ (formerly ‘Beyond the Code: Originals’), host Kovid Batra engages with Leigh Rathbone, Head of Quality Engineering at CAVU, who has a rich technical background with reputable organizations like Sony Ericsson and The Very Group. He has been at the forefront of tech innovation, working on the first touchscreen smartphone and smartwatch, and later with AR, VR, & AI tech. The conversation revolves around ‘Evolution of Software Testing: From Brick Phones to AI’.

The podcast begins with Leigh introducing himself and sharing a life-defining moment in his career. He further highlights the shift from manual testing to automation, discussing in depth the automation framework for touchscreen smartphones from 19 years ago. Leigh also addresses the challenges of implementing AI and how to motivate teams to explore automation opportunities. He then discusses the evolution of AR, VR, and 3D gaming & their role in shaping modern-day testing practices, emphasizing the importance of health and safety considerations for testers.

Lastly, Leigh offers parting advice urging software makers & testers to always prioritize user experience & code quality when creating software.

Timestamps

  • 00:06 - Leigh’s Introduction
  • 01:07 - Life-defining Moment in Leigh’s Career
  • 04:10 - Evolution of Software Testing
  • 09:20 - Role of AI in Testing
  • 11:14 - Conflicts with Implementing AI
  • 15:29 - Adapting to AI with Upskilling
  • 21:02 - Evolution of AR, VR & 3D Gaming
  • 25:45 - Unique Value of Humans in Testing
  • 32:58 - Conclusion & Parting Advice

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today, we are lucky to have a tech industry veteran with us on our show. He is the Head of Quality Engineering at CAVU. He has had a fascinating 25-plus years of engineering and leadership experience, working on cutting-edge technologies including the world's first smartphone and smartwatch. He was also involved in the development of progressive download and DRM technologies that laid the groundwork for modern streaming services. We are grateful to have you on the show, Leigh. 

Leigh Rathbone: Thank you, Kovid. It's great to be here. I'm really happy to be invited. I'm looking forward to sharing a few experiences and a few stories in order to hopefully inspire and help other people in the tech industry. 

Kovid Batra: Great, Leigh. And today, I think we would have a lot of things to deep dive into and learn from you, from your experience. But before we go there, where we talk about the changing landscape of software testing, coming from brick phones to AI, let's get to know a little bit more about each other. Can you just tell us something about yourself, some of your life-defining experiences, so that the audience and I can know you a little more? 

Leigh Rathbone: Yeah. Well, I'm Leigh Rathbone. I live in the UK, uh, in England. I live just North of a city called Liverpool. People might've heard of Liverpool because there's a few famous football teams that come from there, but there's also a famous musical band called the Beatles that came from Liverpool. So, I live just North of Liverpool. I have two children. I'm happily married, been married for over 20 years. I am actually an Aston Villa football fan. I don't support any of the Liverpool football clubs. I'm not a cricket fan or a rugby fan. It's 100 percent football for me. I do like a bit of fitness, so I like to get out on my bike. I like to go to the gym. I like to drink alcohol. Am I allowed to say that, Kovid? Am I allowed to say that? I do like a little bit of alcohol. Um, and like everybody else, I think I'm addicted to Netflix and all the streaming services, which is quite emotional for me, Kovid, because having played a part in the building blocks and a tiny, tiny part in the building blocks of what later became streaming, when I'm listening to Spotify or when I'm watching something on Amazon Video or Netflix, I do get a little bit emotional at times thinking, "Oh my God! I played a minute part of that technology that we now take for granted." So, that's my sort of out-of-work stuff that, um, I hope people will either find very boring or very interesting, I don't know. 

Kovid Batra: No, I definitely relate to it and I would love to know, like, which was the last, uh, series you watched or a movie you watched on Netflix and what did you love about it? 

Leigh Rathbone: So, I watched a film last night called 'No Escape'. Um, it's a family that goes to, uh, a country in Asia and they don't say the name of the country for legal reasons. Um, but they get captured in a hotel and it's how they escape from some terrorists in a hotel with the help of Brosnan, who's also in the film. So, yeah, it was, uh, it was high intensity, high energy and I think that's probably why I liked it because from the very first 5-10 minutes, it's like, whoa, what's going on here? So, it was called 'No Escape'. It's on Netflix in the UK. I don't know whether it'll be on Netflix across the world. But yeah, it's an old film. It's not new. I think it's about three years old. But yeah, it was quite enjoyable. 

Kovid Batra: Cool, cool. I think that that's really interesting and thank you for such a quick, sweet intro about yourself. And of course, your contributions are not minute. Uh, I'm sure you would have done that in that initial stage of tech when the technology was building up. So, thanks on behalf of the tech community there. 

Uh, all right, Leigh, thank you so much and let's get started on today's main topic. So, you come from a background where you have seen the evolution of this landscape of software testing and as I said earlier, also like from brick phones to AI, I'm sure, uh, you have a lot of experiences to share from the days when it all started. So, let's start from the part where there was no automation, there was manual testing, and how that evolved from manual testing to automation today, and how things are being balanced today because we are still not 100 percent automated. So, let's talk about something like, uh, your first smartphone, uh, maybe where you might not have had all the automation testing or sophisticated automation possible. How was your experience in that phase? 

Leigh Rathbone: Well, I am actually holding up, for those people that, uh, can watch the video. 

Kovid Batra: Oh my God! Oh my God! 

Leigh Rathbone: I'm holding up the world's first touchscreen smartphone and you can see my reflection and your reflection on the screen there. This is called the Sony Ericsson P800. I worked on this in 2002 and it hit the market in 2003 as the world's first touchscreen smartphone, way before Apple came to the market. But actually, if I could, Kovid, can I go back maybe four years before this? Because there's a story to be told around manual testing and automation before I got to this, because there is automation, there is an automation story for this phone as well. But if I can start in 1999, I've been in testing for 12 months and I moved around a lot in my first four years, Kovid, because I wanted to build up my skillsets and the only way to do that was to move jobs. So, my first four years, I had four jobs. So, in 1999, I'm in my second job. I'm 12 months into my testing career and I explore a tool called WinRunner. I think it was the first automation tool. So, there I am in 1999, writing automation scripts without really knowing the scale of what was going to unfold in front of not just the testing community, but the tech community. And when I was using this tool called WinRunner... oh, Kovid, it was horrible. Oh my God! So, I would be writing scripts and it was pretty much record and playback, okay? So, I was clicking around, I was looking at the code, I was going, "Oh, this is exciting." And every time a new release would come from the developers, none of my scripts would work. You know the story here, Kovid. As soon as a new release of code comes out, there's bug fixes, things move around on the screens, you know, different classes change, there might be new classes. This just knocks out all of my scripts. So, I'd spend the next sort of, I don't know, eight days, working, reworking, refactoring my automation scripts. And then, I'd just get to the point where I was tackling new scripts for the new code that dropped and a new drop of code would come. And I found myself in this cycle in 1999 of using WinRunner and getting a little bit excited but getting really frustrated. And I thought, "Where is this going to go? Has it got a future in tech? Has it got a future in testing?" 'Cause I'm not seeing the return on investment with the way I was using it. So, at that point in time, 1999, I saw a glimpse, a tiny glimpse of the future, Kovid. And that was 25 years ago. And for the next couple of years, I saw this slow introduction, very, very slow back then, Kovid, of manual testing and automation. And the two were very separate, and that continued for a long, long time, whereby you'd have manual testers and automation testers. And I feel that's almost leading and jumping ahead because I do want to talk about this phone, Kovid, because this phone was touchscreen, and we had automation in 2005. We built an automation framework bespoke to Sony Ericsson that would do stress testing, soak testing, um, you know, um, it would actually do functional automation testing on a touchscreen smartphone. Let that sink in: 19 years ago. We built a bespoke automation framework for the touchscreen smartphone. Let that sink in, folks. 

Kovid Batra: Yeah. 

Leigh Rathbone: Unreal, absolutely unreal, Kovid. Back in the day, that was pretty much unheard of. Unbelievable. It still blows my mind to this day that in 2005, 19 years ago, on a touchscreen smartphone, we had an automation framework that added loads and loads of value. 

Kovid Batra: Totally, totally. And was this your first project wherein you actually had a chance to work hands-on with this automation piece? Like, was that your first project? 

Leigh Rathbone: So, what happened here, Kovid, and this is a trend that happened throughout the testing and tech industry right up until about, I'd say, seven years ago, we had an automation team and a manual team. I'll give you some context for the size. The automation team was about five people. The manual test team was about 80 people. So, you can see the contrast there. So, they were doing pretty much what I was doing in 1999. They were writing some functional test scripts that we could use for regression testing. Uh, but they were mostly using it for soak testing. So in other words, random touches on the screen; these scripts needed to run for 24 hours in order for us to be able to say, "Yes, that software will exist in live with a customer touching the screen for 24 hours without having memory leaks," as an example. So, their work felt very separate to what we were doing. There was a slight overlap with the functional testing where they'd take some of our scripts and turn them into, um, automated regression packs. But they were way ahead of the curve. They were using this automation pack for soak testing to make sure there were no memory leaks by randomly dibbing on a screen. And I say dibbing, Kovid, because you touched the screen with a dibber, right? It wasn't a finger. Yeah, you needed this little dibber that clicked onto the side of the phone in order to touch the screen. So, they managed to mimic random clicks on the screen in order to test for memory leaks. Fascinating. Absolutely fascinating. So at that point, we started to see a nice little return on investment on automation being used. 
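
For flavour, here is a rough sketch of the soak-testing idea Leigh describes: fire random touches for a fixed window and watch memory growth. The `tap()` and `used_memory_kb()` hooks are hypothetical stand-ins, not a real Sony Ericsson API; they exist only so the sketch runs.

```python
# Sketch of a random-touch soak test: tap random screen coordinates for
# a fixed duration and sample memory periodically to spot leak trends.
import random
import time

SCREEN_W, SCREEN_H = 208, 320   # P800-era screen resolution, for flavour
SAMPLE_EVERY = 250_000          # sample memory every N taps

def tap(x: int, y: int) -> None:
    """Hypothetical hook that would send a touch event to the device."""
    pass

def used_memory_kb() -> int:
    """Hypothetical hook; returns a fixed value here so the sketch runs."""
    return 4096

def soak(duration_s: int) -> None:
    deadline = time.monotonic() + duration_s
    baseline = used_memory_kb()
    taps = 0
    while time.monotonic() < deadline:
        # Random "dibs" anywhere on the screen, as Leigh describes.
        tap(random.randrange(SCREEN_W), random.randrange(SCREEN_H))
        taps += 1
        if taps % SAMPLE_EVERY == 0:
            growth = used_memory_kb() - baseline
            print(f"{taps} taps, memory growth {growth} KB")
            # Growth climbing steadily across samples is the leak signal.

if __name__ == "__main__":
    soak(duration_s=1)  # a real soak run would use 24 * 60 * 60
```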

Kovid Batra: Got it. Got it. And from there, how did it get picked up over the years? Like, how have teams collaborated? Was there any resistance? Of course, every time this automation piece comes in, uh, there is resistance also, right? People start pointing things out. So, how was that journey at that point? 

Leigh Rathbone: I think there's always resistance to change and we'll see it with AI. When we come on to the AI section of the talk, we'll see it there. There will always be resistance to change because people go through fear when change is announced. So, if you're talking to a tester, a QA or a QE and you're saying, "Look, you're going to have to change your skill sets in order to learn this." They're gonna go through fear before they spot the opportunity and come up the other side. So, everywhere I've gone, there's been resistance to automation and there's something else here, Kovid, from the years 1998 to 2015, test teams were massive. They were huge. And because we were in the waterfall methodology, they were pretty much standalone teams and all the people that were in charge of running these big teams, they had empires, and they didn't want to see those empires come down. So actually, resistance wasn't just sometimes from testers themselves, it was from the top, where they might say, "Oh, this might mean that the number of testers I need goes down, so, therefore, my empire shrinks." And there were test leaders out there, Kovid, doing that, very, very naughty people. Like, honestly, trying to protect their own, their own job, instead of thinking about the future. So, I saw some testers try and accelerate the use of automation. I also saw test leaders put the brakes on it because they were worried about the status of their jobs and the size of their empires. 

Kovid Batra: Right. And I think fast-forward to today, we won't take much longer to jump to the AI part here. Like, a lot of automation is already in place. According to your, uh, view of the tech industry right now, uh, let's say, if there are a hundred companies; out of those hundred, how many are at a scale where they have done like 80 percent or 90 percent of automation of testing? 

Leigh Rathbone: God! 80 to 90 percent automation of testing. You'll never ever reach that number because you can do infinite amounts of testing, okay? So, let's put that one out there. The question still stands. You're asking, of 100 companies, how many have automation embedded in their DNA? So I would probably, yeah, I would probably say it's in the region of 70 to 80 percent. And I wouldn't be surprised if it's higher, though I've got no data to back that up. What I have got to back it up is the fact that I've worked in 14 different companies, and I spend a lot of time in the tech industry, the tech communities, talking to other companies. And it's very rare now that you come across a company that doesn't have automation. 

But here's the twist, Kovid, there's a massive twist here. I don't employ automation testers, okay? So 2015, I made a conscious effort and decision not to employ automation testers. I actually employed testers who can do the exploratory side and the automation side. And that is a trend now, Kovid, that really is a thing. Not many companies now are after QAs that only do automation. They want QAs that can do the exploratory, the automation, a little bit of performance, a little bit of security, and the people skills are obviously rife. You know, you've got to put those in there with everything else. 

Kovid Batra: Of course. 

Leigh Rathbone: Yeah. So for me now, this trend that I sort of spotted in 2014 and started doing in 2015, and I've done it at every company I've been to, that really is the big trend in testers and QAs right now. 

Kovid Batra: Got it. I think it's more like, uh, it's an ever-growing evolutionary discipline, right? Uh, every time you explore new use cases and it also depends on the kind of business, the products the company is rolling out. If there are new use cases coming in, if there are new products coming in, you can't just have everything automated every time. So yeah, I mean, uh, having that 80-90% testing scale automated is something quite far-fetched for most of the teams, until and unless you are stagnating on one product and you're just running it for years and years, which is definitely not, uh, sustainable for any business. 

So here, my question would be, like, how do you ensure that your teams are always up for exploring that side which can be automated and making sure that it's being done? So, how do you make sure? One is, of course, having the right hires in the team, but what are the processes and what are the workflows that you implement there from time to time? People are, of course, doing the manual testing also, and with the existing use cases where they can automate, they're doing that as well. 

Leigh Rathbone: It's a really good question, Kovid, and I'll just roll back in the process a little bit because for me, automation is not just the QA person's task, and not even just test automation. I believe that is a shared responsibility. So, quality is owned by everybody in the team and everyone plays their different parts. So for me, the automation starts right with the developers, to say, "Well, what are you automating? What are your developer checks that you're going to automate?" Because you don't want developers doing manual checks either. You want them to automate as much as they can because at the unit test level and even the integration test level, the feedback loops are really quick. So, that means the test is really cheap. So, you're getting some really good, rich feedback initially to show that nothing obvious has broken early on when a developer adds new code. So, that's the first part. And that, now, I think, is industry standard. There aren't many places where developers are sat there going, "I'm not going to write any tests at all." Those days are long, long gone, Kovid. I think all, you know, modern developers that live by the modern coding principles know that they have to write automated checks.
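
As a concrete example of the cheap, fast developer-level check Leigh means, here is a minimal pytest sketch; the function and its tests are invented for illustration, and tests at this level run in milliseconds, so a developer gets feedback on every code drop.

```python
# test_pricing.py -- an invented example of a fast unit-level check.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Toy production function: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_basic_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount_is_identity():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_rejected():
    # The guard rail: bad input fails loudly instead of mispricing.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```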

But I think your question is targeted at the QAs. So, how do we get QAs involved? So, you have to breed the curiosity gene in people, Kovid. So, you're absolutely right. You have to bring people in who have the skills because it's very, very hard to start with a team of 10 QAs where no one can automate. That's really hard. I've only ever done that once. That's really hard. So, what I have done is I've brought people in with the mindset of thinking about automation first. The mindset of collaborating with developers to see what they're automating. The curiosity and the skill sets to grow and develop and learn more about the tools. And then, you have to give people time, Kovid. There is no way you can expect people who don't have the automation skills to just upskill like that. It's just not fair. You have to support, support, and support some more. And that comes from myself giving people time. It's understanding how people learn, Kovid.

So, I'll give you an example. Pair learning. That's one technique where you can get somebody who can't automate and maybe you get them pairing with someone else who can't automate and you give them a course. That's point number one. Pair learning could be pairing with someone who does know automation and pairing them with someone who doesn't know automation. But guess what? Not everyone likes pairing because it's quite a stressful environment for some people. Jumping on a screen and sharing your screen while you type, and them saying, "No, you've typed that wrong." That can be quite stressful. So, some people prefer to learn in isolation but they like to do a brief course first, and then come back and actually start writing something right in the moment, like taking a ticket now that they're manually testing, and doing something and practising, then getting someone to peer review it. So, not everyone likes pair learning. Not everybody likes to learn in isolation. You have to understand your people. How do they like to learn and grow? And then, you have to relay to them why you are asking them to learn and grow. Why am I asking people to change? 'Cause the skill bases that are needed tomorrow and the day after and in two years' time are different to the skill bases we need right now or even 10 years ago. And if people don't upskill, how are they going to stay relevant? 

Kovid Batra: Right. 

Leigh Rathbone: Everything is about staying relevant, especially when test automation came along, Kovid, and people were saying, "Hey, we won't need QAs because the automation will get rid of them." And you'd be amazed how many people believed that, Kovid, you'd be absolutely gobsmacked how many tech leaders had in their minds that automation would get rid of QAs. So, we've been fighting an uphill struggle since then to justify our existence in some cases, which is wrong because I think the value addition of QAs and all the crafts when they come together is brilliant. But for me, for people who struggle to understand why they need to upskill in automation, it's the need to stay relevant and keep adding value to the company that they're in.

Kovid Batra: Makes sense. And what about, uh, the changing landscape here? So basically, uh, you have seen that part where you moved to phones and when these phones were being built, you said like, that was the first time you built something for touchscreen testing, right? Now, I think in the last five to seven years, we have seen AR, VR coming into the picture, right? Uh, the processes that you are following, let's say the pair learning and other things that you bring along to make sure that the testing piece, the quality assurance piece is in place as you grow as a company, as you grow as a tech team. For VR, AR kind of technologies, how has it changed? How has it evolved? 

Leigh Rathbone: Well, massively, because if you think about testing back in the day, everybody tested on a screen. And most of us are still doing that. And this is why this phone felt different and even the world's first smartwatch, which is here. When I tested these two things, I wasn't testing on a screen. I was wearing it on my wrist, the watch, and I was using the phone in my hand in the environment that the end user would use it in. So, when I went to PlayStation, Kovid, and I was head of European Test Operations for Europe with PlayStation, we had a number of new technologies that came in and they changed the way we had to think about testing. So, I'll give you some examples. Uh, the PlayStation Move, where you had the two controllers that can control the game, uh, VR, AR, um, 3D gaming. Those four bits of technology, and I've only reeled off four, there was more. Just in three years at PlayStation, I saw how that changed testing. So, for VR and 3D, you've got to think about health and safety of the tester. Why? Because the VR has bugs in it, the 3D has bugs in it, so it makes the tester disorientated. You're wearing... they're not doing stuff through their eyes, their true eyes, they're doing it through a screen that has bugs in it, but the screen is right up and close to their eyes. So there was motion sickness to think about. And then, of course, there was the physical space that the testers were in. You can't test VR sat at a desk, you have to stand up. Look, because that's how the end users do it. When we tested the PlayStation Move with the two controllers, we had to build physical lounges for testers to then go into to test the Move because that's how gamers were going to use the game. Uh, I remember Microsoft saying that they actually went and bought a load of props for the Kinect. Um, so wigs and blow-up bodies to mimic different shapes of people's bodies because the camera needed to pick up everybody's style of hair, whether you're bald like me, or whether you have an afro, the camera needed to be able to pick you up. So all of a sudden, the whole testing dynamics have changed from just being 2 plus 2 equals 4 in a field, to actually can the camera recognize a bald, fat person playing the game. 

Everything changed. And this is what I mean. Now, it's performance. Uh, for VR, for augmented reality, mixed reality glasses, there's gonna be usability, there's gonna be performance, there's gonna be security. I'll give you one example if I can, Kovid. I'm walking down the road, and I'm wearing, uh, mixed reality glasses, and there's a person coming towards me in a t-shirt that I like, and all of a sudden, my pupils dilate, a bit of sweat comes out my wrist. That's data. That's collected by the wearable tech and the glasses. They know that I like that t-shirt. All of a sudden, at the top right-hand corner of those glasses, a picture of me wearing that t-shirt appears, and a voice appears on the arm and goes, "Would you like to purchase?" And I say, "Yes." And a purchase is made with no interaction with the phone. No interaction with anything except me saying 'yes' to a picture that appeared in the top right-hand corner of my glasses. Performance was key there. Security was really key because there's a transaction of payments that's taken place. And usability, Kovid. If that picture appeared in the middle of the glasses, and appeared on both glasses, I'm walking out into the road in front of a bus, the bus is going to hit me, bang, I'm dead, because of usability. So, the world is changing; how we need to think about quality and the user's experience with mixed reality, VR, AR has changed overnight.

Kovid Batra: I think I would like to go back to the point where you mentioned automation replacing humans, right? Uh, and that was a problem. And of course, that's not the reality, that cannot happen, but can we just deep dive into the testing and QA space itself and highlight what exactly today humans are doing that automation cannot replace? 

Leigh Rathbone: Ooh! Okay. Well, first of all, I think there's some things that need to be said before we answer that, Kovid. So, what's in your head? So, when I think of AI, when I think of companies, and this does answer your question, actually, every company that I've been into, and I've already mentioned that I've worked in a lot, the culture, the people, the tech stack, the customers, when you combine all of those together for each company, they're unique, absolutely unique to every single company. When you blend all of those and the culture and make a little pot of ingredients as to what that company is, it's unique. So, I think point number one is I think AI will always help and assist and is a tool to help everyone, but we have to remember, every company is unique and AI doesn't know that. So, AI is not a human being. AI is not creative. I think AI should be seen as a member of the team. Now if we took that mindset, would we invite everybody who's a member of the team into a meeting, into an agile ceremony, and then ignore one member of that team? We wouldn't, would we? So, AI is a tool and if we see it as a member of the team, not a human being, but a member of the team, why wouldn't we ask AI its opinion with everything that we do as QAs, but as an Agile team? So if we roll right back, even before a feature or an epic gets written, you can use AI for research. It's a member of the team. What do you think? What happened previously? It can give you trends. It can give you trends on bugs with previous projects that have been similar. So, you can use AI as a member of the team to help you before you even get going. What were the previous risks on this project that look similar? Then when you start getting to writing the stories, why wouldn't you ask AI its opinion? It's a member of the team. But guess what? Whatever it gives you, the team can then choose whether they want to use it, or tweak it, or not use it, just like any other member of the team. If I say this is my opinion, and I think we should write the story with this, the team might disagree. And I go, "Okay, let's go with the team." So, why don't we use AI as exactly the same, Kovid, and say, "When we're writing stories, let's ask it. In fact, let's ask it to start with 'cause it might get us into a place where we can refactor that story much quicker." Then when we write code, why aren't we as devs using AI as a member, doing pair programming with it? And if you're already pair programming with another developer, add AI as the third person to pair program with. It'll help you with writing code, spotting errors with code, peer reviews, pull requests. And then, when we come to tests, use it as a member of the team. "I'm new to learning Cypress, can you help me?" Goddamn right, it can. "I've written my first Cypress test. What have I done wrong?" It's just like asking another colleague, right? Except it's got a wider sort of knowledge base and a wider set of parameters that it's pulling from. 

So for me, will AI replace people? Yes, absolutely. But not just in testing, not just in tech; AI has made things easily accessible to more people outside of tech as well. So, will it replace people's jobs? I'm afraid it will. Yes. But the people who survive this will be the ones who know how to use AI and treat it as a member of the team. Those will be the last pools of people. They will be the ones who probably stay. AI will replace jobs. I don't care what people say; it will happen. Will it happen on a large scale? I don't know. And I don't think anyone does. But it will start reducing the number of people in jobs, not just in tech. 

Kovid Batra: That would happen across all domains, actually. I think that's very true. Yeah. 

So basically, I think it's more around the creativity piece, wherein if there are new use cases coming in, the AI is not yet there to write the best, uh, test case for it and do the testing for you, or, in fact, automate that piece for the coming, uh, use cases. But if the teams are using it wisely and as a team member, as you said, and that's a very good analogy, by the way, I think that's the best way to build context for that team member, so that it knows what the whole journey has been while releasing an epic or a story. And then, probably, it would have that creativity or that, uh, expertise to give you the use case and help you in a much better way than it could today, without hallucinating, without giving you results that are completely irrelevant. 

Leigh Rathbone: Yeah, I totally agree, Kovid. And I think this is, um, if you think about what companies should be doing, companies should be creating new code, new experiences for their customers, value-add code. If we're just recreating old stuff, the company might not be around much longer. So, if we are creating new stuff, and let's make an assumption that, I don't know, 50 percent of code is actually new stuff that's never been out there before, well, yeah, AI is going to struggle with knowing what to do or what the automation test could be. It can have a good stab, because you can set parameters and you can give it a role, as an example. So, when you're working with ChatGPT, you can say, as a professional tester or as a, you know, long-term developer, what would be my mindset on writing JavaScript code for blah, blah, blah, blah? And it will have a good stab at doing it. But if it's for a space rocket that can go 20 times the speed of light, it might struggle, because no one's done that yet and put the data back into the LLM yet. 
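Leigh's tip about giving the model a role is also easy to try programmatically. Below is a minimal sketch using the OpenAI Node SDK; the model name, persona wording, and question are illustrative assumptions, not something prescribed in the episode.

```typescript
// A minimal sketch of role-prompting with the OpenAI Node SDK (npm install openai).
// Model name, persona, and question are assumptions for illustration only.
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function askAsProfessionalTester(question: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini', // any chat-capable model would do
    messages: [
      // The system message sets the role, as Leigh suggests.
      { role: 'system', content: 'You are a professional software tester with years of experience in risk-based testing.' },
      { role: 'user', content: question },
    ],
  });
  return response.choices[0].message.content ?? '';
}

askAsProfessionalTester('What would be my mindset on writing JavaScript tests for a checkout flow?')
  .then(console.log)
  .catch(console.error);
```

Setting the persona in the system message is what steers the "good stab" Leigh mentions; the model anchors its answer in the testing mindset rather than answering generically.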

Kovid Batra: Totally. Totally makes sense. Great. I think, Leigh, uh, with this thought, I think we'll bring our discussion to an end for today. I loved talking to you so much and I have to really appreciate the way you explain things. Great storytelling, great explanation. And you're one of the finest ones whom I have brought on the show, probably, so I would love to have another show with you, uh, and talk and deep dive more into such topics. But for today, I think we'll have to say goodbye to you, and before we say that, I would love for you to give our audience parting advice on how they should look at software quality testing in their career. 

Leigh Rathbone: I think that's a really good question. My piece of advice, regardless of what craft you're practicing in tech, is to always try and think quality and always put the customer at the heart of what you're trying to do, because too many times we create software without thinking about the customer. I'll give you one example, Kovid, as a parting gift. Anybody can go and sit in a contact centre and watch how people in contact centres work, and you'll understand the thing that I'm saying, because we never, ever create software for people who work in contact centres. We always think we're creating software that's solving their problems, but go and watch how they work. They work at speed. They'll have about three different systems open at once. They'll have a notepad open that they're copying and pasting stuff into. What a terrible user experience. Why? Because we've never created the software with them at the heart of what we were trying to do. And that's just one example, Kovid. The world is full of software examples where we do not put the customer first. So, we all own quality; put the customer front and centre. 

Kovid Batra: Great. I think, uh, that's the best advice, not just in software testing or any aspect of business that you're doing, but also, I think, in life.. So I believe in this philosophy that if you're in this world, you have to give some value to this world, and you can create value only if you understand your environment, your surroundings, your people. So, always have that empathy, that understanding of what people expect from you and what value you want to deliver. I really second that thought, and it's very critical to building great pieces of software, uh, in the industry also. 

Leigh Rathbone: Well, Kovid, you've got a great value there, and it ties very closely with people who write code, but leaders as well. So, developers should always leave the code base in a better state than they found it. And leaders should always leave their people in a much better place than when they found them or when they came into the company. So, I think your value is really strong there. 

Kovid Batra: Thank you so much. All right, Leigh, thank you. Thank you so much for your time today. It was great, great talking to you. Talk to you soon. 

Leigh Rathbone: Thank you, Kovid. Thank you. Bye. 

‘Team Building 101: Communication & Innovation’ with Paul Lewis, CTO at Pythian

In the latest episode of the ‘groCTO: Originals’ podcast (formerly Beyond the Code), host Kovid Batra welcomes Paul Lewis, CTO at Pythian and board member at the Schulich School of Business, who brings extensive experience from companies like Hitachi Vantara & Davis + Henderson. The topic for today’s discussion is ‘Team Building 101: Communication & Innovation’.

The episode begins with an introduction to Paul, offering insights into his background. During the discussion, Paul stresses the foundational aspects of building strong tech teams, starting with trusted leadership and clearly defining vision and technology goals. He provides strategies for fostering effective processes and communication within large, hybrid, and remote teams, and explores methods for keeping developers motivated and aligned with the broader product vision. He also shares challenges he encountered while scaling at Pythian and discusses how his teams manage the balance between tech and business goals, emphasizing the need for innovation & planning for future tech.

Lastly, Paul advises aspiring tech leaders to prioritize communication skills alongside technical skills, underscoring the pivotal role of 'code communication' in shaping successful careers.

Timestamps

  • 00:05 - Paul’s introduction
  • 02:47 - Establishing a collaborative team culture
  • 07:01 - Adapting to business objectives
  • 10:00 - Aligning developers to the basic principles of the org
  • 12:57 - Hiring & talent acquisition strategy
  • 17:31 - Processes & communication in growing teams
  • 22:15 - Communicating & imbibing team values
  • 24:33 - Challenges faced at Pythian
  • 26:00 - Aligning tech innovation with business requirements
  • 30:24 - Parting advice for aspiring tech leaders

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have a special guest. He has more than 25 years of engineering leadership experience. He has been a CTO for organizations like Hitachi Vantara and today, he's working as a CTO with Pythian. Welcome to the show. Great to have you here, Paul. 

Paul Lewis: Hi there. Great to be here. And sadly, it's slightly more than 30 years versus 25 years. Don't want to shame you. 

Kovid Batra: My bad. All right, Paul. So, before we dive into today's topic, by the way, audience, today's topic for you, uh, is building tech teams from scratch. But before we move there, before we hear out Paul's thoughts on that, uh, Paul, can you give us a quick intro about yourself? Or maybe you would like to share some life-defining moments. Can you just give us a quick intro there? 

Paul Lewis: Sure. Sure. So I've been doing this for a long time, as we just mentioned. Uh, but I've had the privilege of seeing sort of the spectrum of technology. First 17 years in IT, like 5,000 workloads and 29 data centers. You know, I was involved in the purchase of billions of dollars of hardware and software and services, and then, moving to Hitachi, a decade of OT, right? So, I get to see what technology looks like in the real world, the impact on, uh, sort of the human side of the world, in nuclear power plants and manufacturing and hospitals.

Uh, and then, the last three years at Pythian, uh, which is more cloud and data services. So, I get to see sort of the insight side of the equation and what the new innovation and technology might look like in the real future. I do spend time with academics. I'm on the board of Schulich School of Business, Masters of Technology Leadership, and I spend time with the students on Masters of Management and AI, Masters of Business Analytics. 

And then, I spend time at least once a quarter with a hundred CIOs and CTOs, right? So, we talk about trends, we talk about application, we talk about innovation. So, I get to see a lot of different dimensions of the technology side. 

Kovid Batra: Oh, that's great. Thanks for that quick intro. And of course, I feel that today I'm sitting in front of somebody who has immense experience, who has seen the change from when the internet was coming in to the point where AI is coming in. So, I'm sure there is a lot to learn from you today. 

Paul Lewis: That sounds like a very old statement. Yes, I have used mainframe. I have used AS400. 

Kovid Batra: I have no such intentions. Great, man. Great. Thank you so much. So, let's get started. When we are talking about building teams from scratch, I think laying the foundation is the first thing that comes to mind, like having that culture, having that vision, right? That's how you define the foundation for any strong tech team that needs to come in. So, how do you establish that? How do you establish that collaborative, positive team culture in the beginning? And how do you ensure the team aligns with the overall vision of the business, the product? So, let's hear it from you. 

Paul Lewis: Sure. Well, realistically, I don't think you start with the team and team culture. I think you start with the team leadership. As recently as the last three years, when we built out the Pythian software engineering practice, well, I started by bringing in somebody who's worked for me and with me for 15 years, right, somebody who I trusted, who has an enterprise perspective of maturity, who I knew had a detailed understanding of building software, who has built, you know, hundreds of millions of dollars worth of software over a period of time, and who I knew could determine what skill set was necessary. But in combination with that person, I also needed sort of the program management side, because this practice didn't exist; there was no sense of communications or project agility or even project management capability. So, I had to bring in sort of program management leadership and software delivery leadership, and then start the practice of building the team. And of course, it always starts with, well, what are we actually building? You can't just hire in technologists assuming that they'll be able to build everything. It's saying, what's our technology goal? What are our technology principles? What do we think the technology strategy should be to implement, you know, whatever software we think we want to build? And from that, we can say, well, we need at least, you know, these five different skill sets, and let's bring in those five skill sets in order to coordinate sort of the creation of, at the very least, you know, the estimates, the foundation of the software. 

Kovid Batra: Makes sense. So, I think when you say bringing in that right leadership, that's the first step. But then, with that leadership, is your thought to bring in a certain type of personality that would create the culture, or do you need people who align with your thoughts as a CTO, and then you bring those people in? I think I would want to understand that. 

Paul Lewis: I'm less sure you need to decide between the two. I know my choice usually is bringing in somebody who already knows how to manage me. Right? As you can imagine, CTOs, CIOs have personalities, and those personalities sometimes can be straining, sometimes can be motivational, sometimes can be inspirational. But I knew I needed to bring somebody in who already knew how to communicate with me effectively, who already knew my sort of expectations of delivery, expectations of quality, expectations of timeliness, expectations of adhering to technology principles and technology maturity. So, they passed that gate, right? So now, I had, sort of right out of the gate, trust between me and the leadership that was going to deliver on the software, which is sort of the first requirement. From there, I expect them to both evolve the maturity of the practice, in other words, the documentation and technology principles, and build out the team itself from scratch. 

So, determine what five skills are necessary, and then acquire those skills and bring them into the organization. It doesn't necessarily mean hiring. In fact, for the vast majority of the software which I've built over time, we started with partnerships with ecosystems, right? So, ecosystems of QA partnerships and development partnerships. Bring those skill sets in, and as we determine we need sort of permanent Pythian resources, like software architecture resources or DevOps architecture resources or, you know, skilled senior developers, we start to hire them into our organization as the primary decision-makers and primary implementers of technology. 

Kovid Batra: Makes sense. And, uh, Paul, does this change with the type of business the org is in, or do you look at it from a single lens, like if the tech team is there, it has to function like this? Uh, does it change with the business or not? 

Paul Lewis: I think it changes based on the business objectives. So, some businesses are highly regulated, and therefore quality might be more important than for others. The reality is, you know, the triangle of time, cost, and quality. For the most part, quality is the most fungible, right? There are industries, where I'm landing a plane, where quality needs to be, you know, near zero bugs, and then tech startups where there's an assumption that there'll be severe, if not damaging, bugs in production, right, 'cause I'm trying to deploy in a highly agile environment. So, yes, different organizations have different sort of, uh, appetites for time, cost, and quality, quality being the biggest measure that you can sort of change the scale on. And the smaller the organization, the more agile it becomes, the more likelihood that you can do things quickly with, I'll call it, less maturity out of the gate, and assume that you can grow maturity over time. 

So, Pythian is an example. Out of the gate, we had relatively zero sense of maturity, right? No documentation, no process, no real sort of project management implementation. It was really smart people doing really good work. Um, and then we said, "Wow, that's interesting. That's kind of a superhero implementation, which just has no longevity to it, because those superheroes could easily win the lottery and move on." Right? So, we had to think about, well, how do we move away from the superhero implementation to the repeatable, scalable implementation? And that requires process, and I know development isn't a big fan of process holistically, but they are a fan of consistency, right? They are a fan of proper decision-making. They are a fan of documented decisions, so that the next person who's auditing or reviewing or updating the code knows the purpose and value of that particular code, right? So, some things they enjoy, some things they don't, uh, but we can improve that maturity over time. So, I can say every year we want to go from 0 to 1, 1 to 2, 2 to 3, never to go past 3, right? Because Pythian, for example, isn't a bank, right, isn't an insurance company, isn't a telco; we're not landing planes, we're not solving complex, uh, healthcare issues. So, we don't have to be as mature as any one of those organizations, but we need to have documents at least, right? We need to ensure that we have automation, automated procedures to push to production, instead of direct access, DBA access, to the database in a production environment. So, that's kind of the evolution that we had. 

Kovid Batra: It's so amazing to hear these kinds of thoughts, and I'm just trying to capture how you are enabling your developers, how you are ensuring that your developers, your teams are aligned with a similar kind of thought. What's your style of communicating and imbibing that in the team? 

Paul Lewis: We like to do that with technology principles, written technology principles. So, think of it as a, you know, top 10 of what the CTO thinks is most important when we build software. Right? So, on what the top 10 things are, let's mutually agree that automation is key for everything we do, right? So, automation to move code, automation to implement code, uh, automation to test, automation in terms of reporting; that's key. In the top 10 is also that we need to sort of implement security by design. We need to make sure that, um, it has a secure foundation, because it's not just internal users; we're deploying the software to 2,000 endpoints, and therefore, I need to account for endpoints which I don't control, and therefore I need a sort of zero-trust implementation. I need to make sure that I'm using current technology standards and architectural patterns, right? I want to make sure that I have implemented such a framework that I can easily hire other people who would be just as interested in seeing and using this technology, and we want to be, in many ways, a beacon for new technologies. I want the software we build to be an inspirational part of why somebody would want to come to work at Pythian, because they can see us using and innovating on current, practical architectural standards in the cloud, as an example.

So, you know, you go through those technology principles and you say, "This is what I think an ideal software engineering outcome, set of outcomes look like. Who wants to subscribe to that?" And then, you see the volunteers putting up their hands saying, "Yeah, I believe in these principles. These principles are what I would put in place, or I would expect if I was running a team, therefore I want to join." Does that make sense? 

Kovid Batra: Yeah, definitely. And I think these are the folks who then become the leaders for the next set of people who need to like follow them on it. 

Paul Lewis: Yeah, exactly. 

Kovid Batra: It totally makes sense. 

Paul Lewis: And if you have a set of rules, you know, I use the word 'rules', you know, loosely, I really just mean principles, right? To say, "Here are the set of things we believe and want to be true, even if there's different maturities to them. Yes, we want a fully automated system, but year one, we don't. Year three, we might." Right? So, they know sometimes it's a goal, sometimes it's a principle, sometimes it's a requirement. Right? We're not going to implement low-quality code, right? We're not going to implement unsecured code. But if you have a team that buys into those principles, then they know it's not just the outcome of the software they're building, but the outcome of the practice that they're building. 

Kovid Batra: Totally, totally. And when it comes to bringing that kind of people to the team, I think one is of course enabling the existing team members to abide by that and believe in those principles, but when you're hiring, there has to be a good talent acquisition strategy there, right? You can't just go on hiring, uh, if you are scaling, like you're on a hiring spree and you're just bringing in people. So how do you keep that quality check when people are coming in, joining in from the lowest level, like from the developer point, we should say, to the highest leadership level also, like what's your strategy there? How do you ensure this team-building? 

Paul Lewis: Well, on the recruiting side, we make sure we talk about our outcomes frequently, both internally in the organization and externally to, uh, you know, the world at large. So internally, I do like a CTO 'ask me anything', right? It's open to everybody, you know, everybody can access it, and it's almost like a townhall. That's where we do a couple of things. We disclose things I'm hearing externally that might be motivating, inspiring to you. It's, "Here's how we're maturing and the outcomes we've produced in software over this quarter", let's say. And then, we'll do a technology debate to say, "You know what, there's a new principle I need to think about, and that new principle might be generative AI. Let's all jump in and have a, you know, a reasonably interesting technology debate on the best implications and applications of that technology." So, it's almost like they get to have a group think, or group input, into those technology principles before we write it down and put it into the document. And then, if I present that, which I do frequently externally, then I galvanize, you know, whole networks of people to say, "Wow, that is an interesting set of policies. That's an interesting set of, um, sort of guiding principles. I want to participate in that." And that actually creates recruiting opportunities. At least 50 percent of my LinkedIn, um, sort of contributions and engagements are from people saying, "I thought what you said was interesting. That sounds like a team I want to join. Do you have openings to make that happen?" Right? So, we actually don't have, in many ways, a lack of recruiting opportunity. If anything, we might have too much opportunity. But that's how we create that engagement, that excitement, that motivation, that inspiration, both internally and externally. 

Kovid Batra: And usually, like, when someone is getting hired into your team, do you handpick, like, is there at least one round which you take with the developers, or are you relying mostly on your leadership next in line to take that up? How does that work for your team? 

Paul Lewis: I absolutely rely on my leadership team, mostly because they're pretty seasoned and they've worked with me for a while, right? So, they fully appreciate the kind of things that I would expect. There are some exceptions, right? So, if there are some key technologists who I think will drive inspirational, motivational behavior, or where they are implementing sort of the core or complex patterns that I think are important for the software. So, for things like, uh, software architecture, I would absolutely be involved in the software architecture conversations and recruiting and sort of the interviewing and hiring process, because it's not just about sort of their technology acumen, it's also about their communication capabilities, right? They're going to be part of the architectural review board, and therefore, I need to know whether they can motivate, inspire, and persuade other parts of the organization to make these decisions, right? That they can communicate both verbally and in written form, that when they create an architectural diagram, it's easy to understand, sort of that side. And even sort of DevOps-type architects, where, you know, automation is so key in most of the software we develop, and that'll lead into, you know, not just infrastructure as code, but potentially even the AI deployment of infrastructure as code, which means not only do they need to have, you know, the technical chops now, I need them to be well read on what technical chops are needed for tomorrow. Right? That also becomes important. So, I'll get involved with those key resources that I know will have a pretty big impact on the future of the organization, versus, you know, the developers, the QAs, the BAs, the product owners, the project managers; you know, I'm not necessarily involved in every part of that interview process.

Kovid Batra: Totally, totally. I think one good point you just touched upon right now is about processes and the communication bit of it. Right? I think that's also very critical in a team, at least in large-scale teams, because as you grow, things are going hybrid and remote, and the processes and the communication become even more critical there, right? So, in your teams, how do you ensure that the right processes are there? I mean, you can give some examples, like how do you ensure that teams are not overloaded, or, in fact, that the work is rightly distributed amongst the team, and that teams are communicating well wherever there is a cross-functional requirement to be delivered and the requirements are coming in? So, something around process and communication which you are doing, I would love to hear that. 

Paul Lewis: Good questions. I think communication is on three fronts, but I'll talk about the internal communication first, the communication within the teams. We have a relatively unique set of sort of development processes that are federated. So, think of it as: there is a software engineering team that is dedicated to doing software engineering work, but for scale, we get to dip into the billable or the customer practices. So, if I need to deliver an epic or a series of stories that requires more than one, uh, Go developer or data engineer or DevOps practitioner, then I have the ability to dip into those resources, into those practices, assign them temporarily to these epics and stories, uh, or just a ticket that I want them to deliver on, and then they can deliver on them, as long as everybody's already been trained on how to implement the appropriate architectural frameworks and they're subscribing to the PR process that is equivalent both internally and externally to the team. We do that with standard agile processes, right? We do standups on a daily basis. We break down all of our epics into stories, and we deliver stories as tickets, and tickets get assigned to people; like, this is a standard process with standard PM, with standard architectural frameworks, standard automation, deployments. And we have specific people assigned to do most of the PRs, right? So, not only PR reviews, but doing sort of code creation and code deployment, so that, you know, I rely on the experts to do the expert work, but we can reach out into those teams when we need to, and they can be temporary, right? I don't need to assign somebody for an entire eight-week journey. Um, I could just assign them to a particular story to implement that work, which is great. So, I can expand any one particular stream from five people to 15 people at any one period of time. That's kind of the internal communication.

So, they do standups. We do, you know, fine-tuned documentation. Uh, we have a pretty prescriptive understanding of what's in the backlog and how and what we have to deliver from the backlog. We actually detail a one-year plan with multiple releases. So, we actually have a pretty detailed, we'll call it 'product roadmap' or 'project roadmap' to deliver in the year, and therefore, it's pretty predictable. Every eight weeks, we're delivering something new to production, right? But that's only one of those communication patterns. The other communication pattern is to the other internal technology teams, because we're talking about, you know, six, seven hundred internal technologists, and we want them to be aware of not just the things that we've successfully implemented in the past and how they're working in production, but what the future looks like and how they might want to participate in the future functions and features that we deliver on. 

But even those two communication patterns arguably aren't the most important part. The most important part might actually be the communication up. Right? So now, I have to have communication on a quarterly basis with my peers, with the CEO and the CFO, to say not only how well we're spending money, how well we're achieving our technological goals and our technological maturity, but even more importantly, are we getting the gain in the organization? So, are we matching the revenue growth of the organization? Are we creating the operational efficiency that we expect to create with the software? Right? 'Cause I have to measure what I produce based on the value created, not just because I happen to like building software. And that's arguably the most difficult part, right, to say, "I built software for a purpose, an organizational purpose. Am I achieving the organizational goals?" Much more difficult calculus as compared to, "I said I was going to do five features. I delivered five features. Let's celebrate." 

Kovid Batra: But I think that's the tricky part, right? And as you said, it's the hardest part. How do you make sure, like, as in, see, the leaders are probably communicating with the business teams and they have that visibility into how it's going to impact the product or how it's going to impact the users, but when it comes to the developers particularly, uh, who are just coding on a day-to-day basis, how do you ensure that the teams are motivated that way and are communicating along those lines of delivering the outcomes which the leaders also see? So, that's.. 

Paul Lewis: Well, they get to participate in the backlog prioritization, right? So, in fact, I like to have most of the development team consider themselves, in many ways, owners of the software. They might not have the Product Owner title, or they might not be the Product Manager of the product, but I want them to feel like it's theirs. And therefore, I need them to participate in architectural decisions. I want them to buy in to what the priority of the next set of features is going to be. I want to be able to celebrate with them when they do something successful, right? I want them to be at the forefront of presenting the value back to the rest of the team, which is why, in that second communication to the rest of the, you know, six or seven hundred technologists, they're the ones presenting what they created, versus me presenting it and taking the credit. I want them to be proud of the code that they've built, proud of the feature that they've implemented, and to talk about it as something that they, you know, had to spend a good portion of their waking hours on, right? That there was a technology innovation or an R&D exercise that they had to overcome. That's kind of the best part. So, they're motivated to participate in the, um, in the prioritization, they're motivated to implement good code, and then they're motivated to present that as if it were an offering they were presenting to a board of decision-makers, right? It's almost as if they're going and getting new money to do new work, right? So, it's a Dragons' Den kind of world, which I think they enjoy. 

Kovid Batra: No, I think that's a great thought, and I think this makes them feel accountable. This makes them feel motivated in whatever they are doing, and at the end of the day, if the developers start thinking along those lines, I think you have cracked it, probably. That's the criterion for successful engineering, for sure. 

Apart from that, any other challenges while you were scaling, any particular example from your journey at Pythian that you felt is worth sharing with our audience here?

Paul Lewis: The challenge is always the 'what next?'. Right? So let's say, it takes 24 months to build a substantial piece of software. Part of my job, my leadership's job, is to prepare for the next two years, right? So, we're in deep dive, we're in year one, we're halfway through delivering some interesting piece of software, but I need to prep year three and year four, right? I need to have the negotiations with my peers and my leaders to say, "Once we complete this journey, what's the next big thing on the list? How are we going to articulate the value to the organization, either in growth or efficiency? How are we going to determine how we spend? Is this a $1m piece of software, or is this a $10m piece of software?" And then, you know, preparing the team for the shift between development and steady state, right? From building something to running something. And that's a pretty big mindset shift, as you know, right? It's no longer focused on automation of releases between dev, QA & production. It's saying, "It's in production now. It's now locked down. I need you to refocus development on something else and let some other team run this system." So, both sides of that equation: how do I move from build to run in that team? And then, how do I prepare for the next thing that they build? 

Kovid Batra: And how do you think your tech teams contribute here? Because what needs to be built next is something that always flows in, in terms of features or stories, from the product teams, right? Or other business teams, right? Here, how do you operate? Like, in your org, let's say there is a new technology which can completely revamp the way you have been delivering value; are tech team members open to speak up and, like, let the business people know that this is what we could do, or is it more like only the technical goals are being set by the tech team and the rest of the product goals are given by the product team? How does it work for your team here? 

Paul Lewis: It's pretty mutual, in fairness, right? So, when we determine sort of the feature backlog of a piece of software, there's contribution from product management, think of that as the business, right? And from the technology architecture team, right? So, we mutually determine our next bets in terms of features that will both improve the application functionally and improve the application technically. So, that's good. 

When it comes to the bigger piece of software, so we want to put this software in steady state, do minor feature adjustments instead of major feature adjustments, that's when it requires much more of a, sort of a business headspace, right? 'Cause it's less about technology innovation at that point. However, sometimes it is, right? Sometimes I'll get, "Hey, what are we doing for generative AI? What new software can we build to be an exemplar of generative AI?" And I can bring that to the table. So, I have my team bringing to the decision-making table, "Here's some technology innovation that's happening in the world that I think we should apply." And then, from the business side, "Here's a set of business problems or revenue opportunities that we can match." So now, it's a matching process. How can I match generative AI, an interesting technology, with, you know, acquiring a new customer segment we currently don't acquire now, right? And so, how do I sort of bring both of those worlds together and say, "Given this matching process, I'm going to circle what I think is possible within the budget that we have"? 

Kovid Batra: Right. Right. And my question is exactly this, like, what exactly makes sure that the innovation on technology and the requirements from the customer end are there on the same table, same decision-making table? So, if I'm a developer in your team, even if I'm well aware of the customer needs and requirements and I've seen certain new technologies coming up, trending up, how do I make sure that my thought is put on the right table in front of the right team and members? 

Paul Lewis: Well, fortunately, like most organizations, but definitely Pythian, we've created like an architectural review board, right? So, that includes development, architecture, product management, but it's not the executive team, right? It's the real architectural practitioners, and they get to have the debate, the discussion, on what they think is the most technologically challenging thing that we want to solve, or the innovation that we think matters, or the evolution of technology that we think we want to implement within our technologies, moving from, you know, an IaaS to a PaaS to a SaaS, as an example. Those are all decisions that in many ways we let the technology practitioners make, and then they bring that set of decisions to the business to say, "Well, let's match this set of architectural principles with a bunch of business problems." Right? So, it's not top-down. It's not me saying, "Thou shalt build software. Thou shalt use Gen AI. Make it so." It rarely is that. It's the technology practitioners saying, "We think this innovation is occurring. It's a trend. It's important. We think we should apply it knowing its implications. Let's match that to one of a hundred business problems which we know the business has." Right? The reality is the business has an endless amount of business problems. Technology has an endless amount of innovation, right? 

Kovid Batra: Yeah, yeah. 

Paul Lewis: There's no shortlist in either of those equations. 

Kovid Batra: Correct. Absolutely. Perfect, perfect. I think this was great. I think I can go on talking with you. Uh, this is so interesting, but we'll take a hard stop here for today's episode and thank you so much for taking out time and sharing these thoughts with us, Paul. I would love to have you on another show with us, talking about many more problems of engineering teams. 

Paul Lewis: Sure. 

Kovid Batra: But thanks for today and it was great meeting you. Before you leave, um, is there a parting advice for our audience who are mostly like aspiring engineering managers, engineering leaders of the modern tech world? 

Paul Lewis: Um, the gap with most technologists is that they tend to, you know, put their glasses on, close the lights in the room, focus on the code, and that's amazing. But the best part of the thing you develop is the communication part. So, don't be just a 'code creator', be a 'code communicator'. The most important part of your career as a developer is to present the wares that you just built outside of your own headspace. That's what makes the difference between a junior, an intermediate, a senior developer, and an architect. So, think about that. 

Kovid Batra: Great, great piece of advice, Paul. Thank you so much. With that, we'll say, have a great evening. Have a great day and see you soon! 

Paul Lewis: Thank you.

‘Enhancing DevEx, Code Review and Leading Gen Z’ with Jacob Singh, CTO in Residence, Alpha Wave Global

In the latest episode of 'groCTO Originals' podcast (formerly: 'Beyond the Code'), host Kovid Batra engages in a thought-provoking discussion with Jacob Singh, Chief Technology Officer in Residence at Alpha Wave Global. He brings extensive experience from his roles at Blinkit, Acquia, and Sequoia Capital. The heart of their conversation revolves around ‘Enhancing DevEx, Code Review and Leading Gen Z’.

https://youtu.be/TFTrSjXI3Tg?si=H_KxnZGlFOsBtw7Y

The discussion begins with Jacob's reflection on India and his career break. Moving on to the main section, he explores the evolving definition and widespread adoption of developer experience. He also draws comparisons between tech culture in Indian versus Western companies and addresses strategies for cultivating effective DevEx for Gen Z & younger generations. Furthermore, he shares practical advice for tech leaders to navigate the ‘culture-market fit’ challenge and team structuring ideas from his hands-on experience at Grofers (now ‘Blinkit’).

Lastly, Jacob offers parting advice to developers and leaders to embrace AI tools like Copilot and Cursor for maximizing efficiency and productivity, advocating for investing in quality tooling without hesitation.

Timestamps

  • 00:06 - Jacob’s introduction
  • 00:39 - Getting candid
  • 04:22 - Defining ‘Developer Experience’
  • 05:11 - Why is DevEx trending?
  • 07:02 - Comparing tech culture in India & the West
  • 09:39 - How to create good DevEx for Gen Z & beyond?
  • 13:37 - Advice for leaders in misaligned organizations
  • 17:03 - How Grofers improved their DevEx
  • 22:04 - Measuring DevEx in multiple teams
  • 25:49 - Enhancing code review experience
  • 31:51 - Parting advice for developers & leaders

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone! This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have a special guest. He's currently a CTO in Residence with Alpha Wave Group, which is a VC group. He comes with 20-plus years of engineering and leadership experience. He has worked with multiple startups and orgs like Blinkit as a CTO. He's the guest whom I have met and he's the only guest whom I have met in person, and I really liked talking to him at the SaaSBoomi event. Welcome to the show, Jacob. Great to have you here.Jacob Singh: Thanks. Good to be here, to chat with you.Kovid Batra: Cool. I think, let's start with something very unique that I've seen experienced with you, that is your name. It's Jacob Singh, right? So, how did that fusion happen?Jacob Singh: Just seemed like fun, you know? Just can't, since I was living in India anyway, I figured Smith is harder to pronounce, so.. I'm just kidding. My dad's from here. My dad's Punjabi. So, you know, when a brown boy and a white girl, they love each other a lot, then, you know, you end up with a Jacob Singh. That's about it. There's not much else to it. I grew up in the States, but I've lived in India on and off for the past 20 years.Kovid Batra: Great, great. Perfect. That's rare to see, at least in India. Most of the generation, maybe half-Indian, half-American are in the U.S. But what did you love about India? What made you come here?Jacob Singh: Good question. I was trying to escape my tech stuff. So, I sort of started very early. I taught myself to code as a teenager and started my first business when I was 18 and I'd done pretty well. And then, when I was like 21-22, I just sort of decided I wanted to do something different, do something in the social sector, helping people. So, I took a job with an NGO in West Delhi and sort of shifted for that. That was the original purpose. Why did I stay? I guess there's something dynamic and interesting about the place. India's changed a lot in 20 years, as everybody knows. And, I think that's been continuously exciting to be a part of. It doesn't feel stagnant. It doesn't feel like, I mean, a lot of changes are not in the direction I would love, to be honest, but, you know, but it's an interesting place. There's always something happening. And I found that, and then eventually you build your community, your friends and your family and all that kind of stuff. So, this is home. Yeah, that's about it.Kovid Batra: Perfect. Perfect. Talking about the family, I was just talking to you on LinkedIn. I found that there was like a one-year break that you took in your career and you were just parenting at that time. And honestly, that's very different and very unique to a culture, to an Indian culture, actually. So, I just wanted to know how was your experience there. I'm sure it would have made you learn a lot of things, as it does for a lot of other parents. But I just wanted to hear about your experience with your kid.Jacob Singh: Hopefully, it's not so uncommon. I think it's starting to change especially the role of men as fathers in India. I think it's traditionally, like my dad's father, he just saw him for tea, you know, and he was reading the newspaper and he'd meet him once a year on his birthday and take him to a quality restaurant for a coke, you know, that was their relationship. I think things are different with Indian fathers these days. I think for me, you know, we were just, perfectly honest, was going through a divorce. Difficult. 
I needed to be there for my daughter and I was, you know, sort of taking half the responsibility in terms of time with her. This was eight years ago. And I think my parents had also divorced. So, I was kind of, my dad was a very active part of my upbringing and did all the things, did all the dishes, cooked all the meals, you know, was around. He was also working as a programmer and did all that, but he was at home as well. And he found ways to make it work, even if it had meant sacrificing his career to some extent. He was working from home when I was a kid in the 80s. So, he had a giant IBM 880, or whatever computer, a little tiny green screen, a 300-bot modem, you know, to connect and send his work up. So, that's how I grew up. Turned out to benefit me a lot, uh, when it came to learning computers, but, um, you know, he would convince him to do that cause he was good at his job, and he's like, I have to be there for my kids. And he made it work, you know? I think we all find those times where we need to lean into family or lean into work in different proportions, you know?Kovid Batra: Hmm. Great. I think amazing job there honestly, Jacob All right, that was great knowing you and thanks for that quick intro. Moving on to the main section of our today's podcast, enhancing the developer experience. So, that's our topic for today. So let's start very basic, very simple. What is developer experience according to you?Jacob Singh: What is developer experience? It's an interesting term. I guess it's, you know, the day-to-day of how a programmer gets their job done. I guess the term would be everything encapsulated from, how their boss talks to them, how they work with their teammates, the kind of tools they use for project management down to the quality of documentation, APIs, the kind of tools they use on their computer, the tools they use in the cloud that they work with, et cetera. And all of that encapsulated is how effective can they be at their job, you know, and the environment around them that allows them to be effective. I guess what I would define it as.Kovid Batra: And why do you think it's trending so much these days? I think what you mentioned and what I have also read everywhere about developer experience is the same fundamental that has been existing all the years, right? But why is it trending these days? Why do you think this is something up in the market?Jacob Singh: Good question. I guess, I mean, I've been around for a while, so I think in the earlier days of the industry, when I first started engineers were a little expensive, but they were also looked at as like a commodity that you could just use. Like, you put them on a spreadsheet, you pay the engineers, you get the work done. They weren't considered really central. They were considered sort of like factory workers in an expensive factory, to some extent. I think it was more so in the 80s and 90s, right? But like, it's still trending more and more in the direction of engineers kind of being very expensive and being decision-makers, moving into C-level positions, having more authority, seeing that, like, if you just look at, you know, the S&P 500, you look at the, you look at the stock market and you see that the top companies are all tech companies and they command most of the market cap. I think those days are over. So now, it's very much that if you're able to execute with your engineering roadmap, you're able to win the market. 
It's considered the basis of a lot of companies, sort of strategies, whether they're a tech company or not, and therefore the effectiveness of the developers and the team plays into which developers will join you. And when they join you, how likely are they to be engaged and to be retained? And then, how effective, you know, is each developer because they're a rare commodity because it's hard to find good developers. There's so much demand, et cetera, even in recessionary times, most of the layoffs are not engineering teams. And so, the effectiveness of each engineer and their ability to motivate and retain them becomes paramount to a company strategy.Kovid Batra: Right. Makes sense. So, according to you, I'm sure you have had the experience of seeing this shift in the West as well as in companies in India, right? When you see the culture, particularly talking about the tech culture in a company, like maybe, for example, you work with a company like Blinkit, which is huge today in India and you must have worked with other companies in the West. How would you compare, like, how are teams being led in these two different cultures? Jacob Singh: Well, I would say those kind of, you know, anything I say is going to be a gross generalization, and it's going to be incorrect more often than it's correct. I think there's more difference between two Indian companies than there is between any given American or any Indian company, you know. There's a lot of variation. But if I'm being put on the spot to make such generalizations, I would say that one big difference is the age and maturity of engineers. So, like, when I was 29, I got hired as an IC, a Principal Engineer at this company called Acquia. They kind of acquired my startup and I joined there, and, you know, we had PhDs on the team who were ICs, right? Our VP Engineering, you know, had 25 years of experience in the field and was, you know, sort of. You know, one of my colleagues was like heading R&D for the RFID team at Sun. You know, like the very senior guys were still writing code.Kovid Batra: Yeah.Jacob Singh: It's like, very much just like in the weeds writing code. They're not trying to be managers and an architect. They're just like a programmer, right? I got my first team, like to manage, like I got a small team like at 25-26, but really I got my first team of like really qualified, expensive engineers when I was like 32. Whereas here, I'm a VP Engineering at Grofers at like 29. It's like managing a 100 people. It's very common to be much early in your career. Engineers with 3-4 years of experience are sort of talking about, "I should be an SDE-IV". So, the whole like, that scale is different. You have a much younger audience. In the States, at least when I was coming up, there's a lot more earning your stripes over a long time before you go into management. Whereas here, we have a lot more very junior managers. I think that's a big difference.Kovid Batra: Okay. And that's interesting, actually.Jacob Singh: That creates a lot of other dynamics, right? I mean, you just have like, generally you know, you have more, I would, and I hate to say this, probably going to take shit for this, but having been an engineering leader in both places, I feel like you have more like discipline and like professionalism issues here, generally, which is not to do with Indians. It's just to do with the age of people. Like, they're 24 years old, they're not gonna be very professional, right? Like a lot of your team.Kovid Batra: No, totally. 
I think, we are not generalizing this, but as you said, it's probably about the age. In one of my podcasts, I was just talking to this fellow from Spain. He's leading a team of almost 30 folks now.Jacob Singh: Yeah.Kovid Batra: And 50% of them were early hires, like early 20 hires, right?Jacob Singh: Yeah.Kovid Batra: And he's that guy. And then I was talking to somebody in India who was leading a 50-member team there. Again, 50% of the folks were like junior engineers in their 25-26. And both of them had the same issue of handling Gen Zs. So, I think from where you are coming, it's totally relatable and I think we touched on a very good topic here. How to create a good developer experience for these early-age, 25-26-year-olds in the system? Because they're not stable, they are not, So again, I am also not generalizing. A lot of good folks are there, but they're not like in the right mindset of sticking to a place, learning gradually and then making that impact. They just like really want to hop on fast on everything.Jacob Singh: Yeah.Kovid Batra: So, how do you handle that? Because that definitely is a pain for a lot of us, not just in India, but across the globe.Jacob Singh: No, no, I've heard this a lot, you know, and I'm not really sure. I mean, I'm far from Gen Z. I was actually having this exact same conversation with the CTO of an Indian unicorn, a pretty large one, who was talking about the same thing. He's like, "How do I motivate these?" This seems like the younger guys, they don't really want to work and they're constantly, you know, making noise and they think it's their right to work from home. They think it's their right to work 20-30 hours a week. They don't want, they don't want to advance and follow all this sort of stuff. I guess my advice to him was maybe a bit generic and maybe incorrect. You know, I think there are differences in generations, but I think some truths are fairly universal and I've uncovered a couple of things which have worked for me. And every manager has their own style and because of that, and every company has its own culture and its own goals. And so, there's a thing that's 'culture-market fit'. So, certain leaders will do well in certain kinds of companies, right, who have certain kinds of cultures made for the market they're in. This is not generic advice for everybody. But for me, I like to work in startups and I like to work in you know, startups, which are working on, I would say, kind of top-line problems which means not efficiency-related problems so much as innovation-related problems. How do we discover the next big thing? What feature is going to work with customers? Et cetera. And in such places, you need people who are motivated by intrinsic, they need intrinsic creative motivation. Carrot and Stick stuff doesn't work. Carrot and Stick will work for a customer service team, for a sales team, it'll work for an IT team at a Fortune 500 who's shipping the same things over and over again, but for creative teams, they really need to care about it intrinsically. They need to be interested in the puzzle. They want to solve the puzzle. You can't sort of motivate them in other ways. And I think this applies to the younger generation as much as the older ones. 
You know, the best thing to do is to, basically, it's a very simple formula, it sounds cliché but figure out where the hell you're going, why you should go there and everyone in the team should know where you're going and they should know why they're important to that strategy, what they're doing that's important, you know, and they should believe it themselves that it can work. And then, they should believe that if it works, you're gonna take care of them. And if you solve those things, they will work hard and they will try to solve problems. A lot of companies think they can shortchange that by having a crappy strategy, by having, you know, a lot of program management, which removes people from the real problem they're solving by treating people as numbers on a spreadsheet, naturally, you're going to get, you know, poor results when you do that.Kovid Batra: Totally. I think very well answered, Jacob. I think finding that culture-market fit and finding the place where you will also fit in naturally, you would be easily able to align with the folks who are working there and maybe lead them better. I think that that analogy is really, really good here. Apart from that, do you think like not everyone gets that opportunity to find the right place for themselves, right, when there is a dearth of opportunities in the market? What would be the advice for the leaders that they should apply to them when they are not in the culture-market fit scenario?Jacob Singh: Leaders? You mean, if a leader is in an organization where they don't feel like the values of the tech team aligned to their value, whether it be engineer or CTO or something?Kovid Batra: Correct, yes.Jacob Singh: Good question. The best thing to do is probably to quit. But if you can't afford, I mean, I say that a bit flippantly. I'm not saying "quit early". I'm not saying "don't try". I mean, if you really have a true values alignment problem you know, then that's hard to get over. If it's tactical, if it's relationship-based, you can work on those things, right? If you feel like you have to be someone you don't like to fit in an organization, then that's unlikely to change if it comes from the top, right? There's a very cliché saying, but you know, "Be careful who you work with. You're more likely to become them than they are to become you." And so, I would say that. But to get more practical, let's say if you can't, or you're feeling challenged, et cetera. Your question was basically, okay, so let's say you're a VP Engineering or Director of Engineering and you're unhappy with the leadership in some way, right?Kovid Batra: Yeah. So, no, I think it makes sense. The advice is generic, but it definitely gives you the direction of where you should be thinking when you are stuck in such a situation. You can't really fight against it.Jacob Singh: Yeah. I will say a couple of things. This is also the same conversation I had mentioned earlier. This also came up with the typical thing of leadership not trusting tech. You know, they don't think we're doing anything. They think we're moving too slow. They think we're, you know, sandbagging everything, etc. And to solve that, I think, which is not a values problem. That's the case in almost every organization. I mean, there's never been a CEO who said, "Oh, man! The tech team is so fast. They just keep.. I can't come up with dumb ideas fast enough. They keep implementing everything." So, you know, it's never happened in the history of companies. 
So, there's always that conflict, which is natural and healthy. But to get over it, that's basically a transparency problem, usually. It's like: are you clear in your sprint reviews? Do you do them in public? Do you share a lot of information about the progress made? Do you share it in a way that gets consumed or not? Are you A/B testing stuff? Are you able to look at numbers, able to talk numbers with your business teams? Do you know the impact of the features you're releasing? Can you measure it? Right? If you can measure it, release that information. If you can give timely updates in a way which is entertaining and appropriate for the audience, so that they actually listen, those problems tend to go away. The main problem there is not that people don't trust you. It's that you're a black box to them. They don't understand your language. And so, you have to find, you know, techniques to get over that. Yeah.

Kovid Batra: Yeah. Makes sense. Great, great. All right, coming back to enhancing the developer experience. There are multiple areas where you can see developer experience taking a hit or working well, right? So, which are the areas that you feel get impacted by this work on developer experience, and how have you worked on those in any of your past experiences with any of the companies?

Jacob Singh: You said "enhancement of developer experience". What do you mean by that?

Kovid Batra: So, yeah, I'll repeat my question. Maybe I confused you with too many words there. In your previous experiences, you must have worked with many teams, and there would have been various areas that impacted the developer experience. Just to give you a very simple example, you talked about the tooling and the people they're working with. There could be multiple issues that impact the developer experience, right? So, from your previous experiences, where did you find a specific developer experience problem, how did you solve it, and how was it impacting the overall delivery of the development team? I just want to deep dive into that experience of yours.

Jacob Singh: Yeah. So I think a big one was... I can talk about Grofers. When I first came to Grofers, we had about 50-60 people in tech: product, engineering, data, design, et cetera. We had them in different pods. That was good. Someone had created different teams for different parts of the application, so it wasn't just a free-flowing pool of labor. There was, you know, the shopping cart team and the browsing team and the supply chain teams, like the warehouse management team and the last mile team. It was, you know, four or five teams. But there was a shared mobile team. So at the front end, there was one Android team, one iOS team, and one web team, which again is very common and not necessarily a bad idea. But what ended up happening was that the business teams wouldn't trust the tech deadlines, because a lot of the time there'd be a bunch of backend engineers in the shopping cart team who'd finish something and then be stuck waiting on the front end team, because the front end team, or the Android team, was working on something for the browsing team, right? The iOS team was free, so they would build that shopping cart feature.
But they couldn't really release it yet because the releases had to go out together with Android and iOS, because, you know, the backend release had to go with that. So, we're waiting on this one. Then we're waiting on the web one. There's this cascading waiting that's happening. And now, the shopping cart team is like, "We're done with our part. We're waiting on Android. So we're going to start a new feature." They start a new feature. And then the problem starts again: that feature is waiting on somebody else, waiting on the QA team or someone else. All of these waiting aspects ruin the developer experience, because the developer can't finish something. They get something half done, but it's not out in production, so they can't forget about it. Production is a moving target. The more teams you have, the more frequently it's changing. So, you have to go back and revisit that code. It's stale now. You've forgotten it, right? And you haven't actually released anything to customers. So, the business teams are like, "What the hell? You said you're done. You said you'd be done two weeks ago." And you're like, "Oh, I'm waiting for this team." "Stop giving me excuses." Right?

Kovid Batra: Right.

Jacob Singh: That team's waiting on the other team. So, one of the big things we did was take a hard call. At the time, Grofers was not quick commerce. At that time, Grofers was like DMart online: cheap discounting, 2-3 day delivery, and we scaled up massively on that proposition. And we said, hey, people who care about the price of bread being 5% less or more, do they own iPhones? No, they do not own iPhones. That's like 5% of our population. So we just ditched the iPhone team and cross-trained people on Android. We took all the Android engineers and put them into the individual teams. We also took QA, automated most of our tests, and put QA resources, SDETs, into each of the teams, and kind of removed those horizontal shared-services teams and put everyone into fully cross-functional teams. And that improved developer experience a lot. It's kind of obvious: people talk about cross-functional teams being able to get everything done within a team, being more efficient, with less waiting between teams. But it has another huge benefit, and that is motivation. Like I said earlier, you want your engineers to care about the business outcomes. You want them to understand the strategy. But if an engineer has to build something, send it to another team, get that team to send it to some other team, and eventually to a release team to get released, and then the results come back three months later, you can't expect that engineer to be excited about their metrics, their business metrics and the outcomes.

Kovid Batra: Right.

Jacob Singh: If you put everyone in one team, they operate like a small startup. They continually crank that wheel and put out new things, continually get feedback and learn, and they improve their part of the business. It's much more motivating, and they're much more creative as a result. And I think that changes the whole experience for a developer. It reduces those loops, those learning loops. You get a sense of progress and a sense of productivity. And that's really the most motivating thing.

Kovid Batra: Totally makes sense. And it's a very good example.
I think this is how you should reshape teams from time to time based on the business requirements, and the business scale is what's going to impact the developer experience here. But what I'm thinking is that this would have become a very evident problem while executing, right? Your project is not getting shipped and the business teams are out there, waiting for the release to happen. You started feeling that pain, saw why it was happening, and went on to solve it. But there could be multiple other issues when you scale up. 50-60 is a very good number, actually, but when you go beyond that, there are small teams releasing something or the other on an everyday basis. How exactly would you go about measuring the developer experience across different areas? Of course, this was one: your sprints were delayed or your deliverables were delayed. That was evident. But how do you go about finding, in general, what's going on and how developers are feeling?

Jacob Singh: Well, we hit those scaling things, and like you said, yes, people are delayed. It sounds obvious, but it's mostly not. Most leaders actually take exactly the opposite approach. They say, "Okay. No more excuses. We're going to plan everything out at the beginning of the quarter. You plan your project. We'll do all the downstream mapping with every other Android, iOS, QA team required. We'll allocate all their bandwidth ahead of time, so we'll never have this problem again. And we'll put program managers in place to make sure the whole thing happens." They go the opposite direction, which, well, never works, to be honest.

Kovid Batra: Yeah, of course.

Jacob Singh: In terms of measuring developer experience as you scale: we got up to about 220 people in tech, I think, at some point in Grofers, and we scaled up very quickly. That was within a year and a half or something. And, you know, that became much more challenging. I honestly don't love the term 'developer experience' because it doesn't mean anything specific. There's your experience as an employee, right, the HR-related stuff, your manager or whatever. There's your experience as an engineer, like the tools you're using and stuff like that. And then there's your experience as a team member: your colleagues, your manager, that kind of thing, which is slightly different from the employee side in that it's not about company confidence or strategy, but more about your relationships. So, there are different areas of it. For measuring general satisfaction, we used things like Officevibe; we used continuous performance improvement tools like 15Five; we brought in OKRs, a lot of things which are there to connect people to strategy, regularly check in, and make sure we're not missing things. All of those were effective in pockets, depending on usage. But by far the most effective thing, and I know this might not be the popular answer when it comes to what Typo sells, although I do like the product a lot, which is why I'm doing this, I think it's a cool product, a lot of it is really just 1-on-1s. Just making sure that every manager does a 1-on-1 every two weeks.
And making it like, put it in some kind of spreadsheet, some kind of lightweight thing, but making sure that new managers learn they have to do it, how to do them well, how to, you know, connect with individuals, understand their motivations, follow through on stuff and make small improvements in their lives. That's the cheat code. It's just doing the hard work of being a manager, following through with people, listening to them. That being said, data helps. So, like what you guys have built, I've built small versions of that. I wrote a script which would look at all the repositories and see how long things are sitting in PR, look at Jira and see how long things are sitting in wait, you know, and build continuous-flow sort of diagrams, just showing people where your team is getting stuck. I've hand-coded some of that stuff, and it's been helpful to start conversations. It's been helpful to convince people they need to change their ideas about what's happening. I think those are some of the ideas.
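The kind of hand-rolled script Jacob describes, checking how long pull requests sit waiting, is easy to approximate against the GitHub REST API. Below is a minimal sketch in TypeScript, not Jacob's actual script: the owner and repo names are placeholders, a `GITHUB_TOKEN` environment variable is assumed, and time from open to merge is used as a rough proxy for review wait.

```typescript
// Sketch: how long do merged PRs sit open before they land?
// Requires Node 18+ (global fetch) and a GitHub personal access token.

interface PullRequest {
  number: number;
  created_at: string;
  merged_at: string | null;
}

async function medianHoursToMerge(owner: string, repo: string): Promise<number> {
  // Fetch the 50 most recently updated closed PRs.
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/pulls?state=closed&per_page=50`,
    { headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } }
  );
  const prs = (await res.json()) as PullRequest[];

  // Hours from PR opened to PR merged; skip PRs closed without merging.
  const waits = prs
    .filter((pr) => pr.merged_at !== null)
    .map(
      (pr) =>
        (new Date(pr.merged_at as string).getTime() -
          new Date(pr.created_at).getTime()) / 3.6e6
    )
    .sort((a, b) => a - b);

  return waits[Math.floor(waits.length / 2)] ?? 0;
}

medianHoursToMerge("my-org", "my-repo")
  .then((h) => console.log(`Median hours from open to merge: ${h.toFixed(1)}`))
  .catch(console.error);
```

Even a crude number like this is enough to start the conversations Jacob mentions; a fuller version would also pull review-request timestamps and Jira status transitions to separate coding time from waiting time.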
Kovid Batra: Thanks for promoting Typo there. But yeah, I think that also makes sense. It's not necessary that you have tooling in place, but in general, keeping a tab on these metrics, or an understanding of how things are flowing in the team, is critical, and that's probably where you see where the developer experience and the experience of the team members would be impacted. One thing you mentioned was that you scaled very rapidly at Grofers, from 50 to 250, right? One interesting piece, I think we were discussing this at SaaSBoomi as well, is the code review experience. Because at that scale, it becomes really difficult, even for a Director of Engineering, to go into the code and see how it is flowing and where things are getting stuck. So this code review experience in itself, I'm sure, impacts a lot of the developer experience, so to say. So, how did you manage that, and how did it flow for you?

Jacob Singh: Well, one is that I didn't manage it directly. Grofers was big enough that I had a Director of Engineering or VP Engineering for the different divisions, and at that level of hands-on work I wouldn't really participate in code reviews, other than, you know, sometimes as an observer, sometimes to show proof. If we were doing something new, like automation, I might whip up some sample code, show people, do a review here and there for a specific purpose, but never generally manage those. Grofers was really good this way. I think we had a really good academic culture where people did public code reviews. There wasn't this protectiveness and shame kind of thing. It was very open. We had a big open-source culture. We used to host lots of open-source meetups. There was a lot of positive regard for inquiry and learning. It wasn't looked at as a threatening thing, which in a lot of organizations it is. And the gatekeeping thing, I think we tried to discourage. I think we had a lot of really positive aspects there. Vedic Kapoor was heading the DevOps and security infrastructure stuff that I worked with a lot. He's consulting now, doing this kind of work. He did a lot of great workshops and a continuous improvement program with his teams around this kind of stuff, where they'd do public reviews every week or so. The DevOps teams made a big point of being a service team for the engineers, so they would build features for engineers. We had a developer experience team, essentially, because we were that size. As for the review process generally, I gave this rant at SaaSBoomi, and I think I've given it often: I think PRs are one of the worst things that's happened to software engineering in the last 20 years.

Kovid Batra: Yeah, you mentioned that.

Jacob Singh: Yeah, and I think it's a real problem. And it's this funny thing where everyone assumes that progress moves forward and never goes backwards, and the younger generation doesn't necessarily believe that things could have been better before. But I'll tell you the reason why I say that. You know, Git was created by Linus, the creator of Linux, because they needed, well, they needed something immediately, but also something which would allow thousands and thousands of people, working at different companies with different motivations, to all collaborate on the same project. And so, it's the branching and the sub-branching and the sub-sub-branching: Git allowed people to simultaneously work on many different branches, merge them late, review them in any order they wanted, and discuss them at length before getting them in. And it had to be very secure and very stable at the end of the day. And that made a lot of sense, right? It's very asynchronous, and things take a very long time to finish. But that's not how your software team works. You guys are all sitting at the same table. What are you doing? You don't need to do that. You can just be like, "Hey, look at this. There's a different way to do it." Even if you're on a remote team, you can say, "Let's do a screen share." Or, "Can I meet you tomorrow at two, and we'll go through this?" Or, "I had some ideas, pasted them in Slack, get back to me when you can." You know, "Here's my patch." Whatever. And I think what ends up happening is this whole GitHub thing... and for open-source projects, it's huge. Git is amazing. Pull requests are amazing. They've enabled a lot. But if you are a team where you all work on the same codebase and work closely together, there's no reason to set up this long asynchronous process where it can take anywhere from a couple of hours to, I've seen, a couple of weeks to get a review done. It creates a logjam, that slows down teams, and that again lengthens the loop. I'm big on loops. I think you should be able to run your code in less than a second. You should be able to compile as quickly as possible. You should be able to test things as quickly as possible. You should be able to get things to market and review them as quickly as possible. You should get feedback from your colleague as soon as possible. And I think a lot of that has gotten worse. Engineers learn slower and they're waiting more: they're waiting for PRs to get reviewed, and there's politics around it. So I think that process probably should change. More frequent reviews, pairing, you know, less formal reviews, et cetera. And TDD, if you can do it. That's the way to get much faster productivity loops going and get things out sooner.
Sorry, a bit of a long rant, but yeah, PRs suck.

Kovid Batra: No, I think this is really interesting, how we moved from enhancing developer experience to how code review should be done better, because that ultimately impacts the overall experience, and that's what most developers are doing most of the time. So, I think that makes sense. And it was.. Yeah?

Jacob Singh: I just want to caveat that before you misquote me. I'm not saying formal reviews are bad. You should also have a formal review, but it should be a formality. You should have already done so many reviews informally along the way that whoever is reviewing it already kind of knows what's there, and then the formal review happens. It's on the books, we've looked at it, we've put in the comments. It shouldn't just be, "Here's a 20K patch," a day before the deadline. You know what I mean? That shouldn't happen anymore. I think that's what I'm trying to say.

Kovid Batra: Yeah. No, no, totally makes sense. I think this last piece was very interesting, and we could do a complete podcast discussion on this piece alone. So, I'll stop here. Thank you so much for your time today, for discussing all these aspects of developer experience and how code reviews should be done better. Any parting advice, Jacob, for the dev teams of the modern age?

Jacob Singh: The dev teams or the managers? I guess the managers are probably watching this more than the developers.

Kovid Batra: Whichever you like.

Jacob Singh: Parting advice. I would say that we're at the most exciting time to be an engineer since, well, maybe I'm biased, but since I started coding. When I started coding, it was just as the web was taking off. You know, I remember when CSS was released; that's old. So I was like, "Oh, shit, this is great. This is so much fun!" when it started getting adopted, right? The sort of dynamic, programmable web was the exciting thing when I started. Now, we're at the second most exciting time, in my opinion, as an engineer. And it's surprising to me. I work with a lot of portfolio companies at Alpha Wave. I talk to new companies that I'm looking at investing in. It's really surprising how few of them use Copilot or Cursor or these various AI tools to assist in development. Or everyone uses them a little bit, but not programmatically. They don't really look into it too much. And I think that's a missed opportunity. I still code, and when I code, I use them extensively. Like, extensively. I'm on ChatGPT. I'm on Copilot. I pay for all these subscriptions. I use ShellGPT. I don't remember shell commands anymore. ShellGPT, by the way, is great, to plug it: write what you want to do, hit ctrl+L, and it'll generate the shell command for you. Even for stuff I know how to do, it's faster. But the main thing is that the yak shaving goes away. I don't know if you guys know yak shaving, but yak shaving is this idea of having to do all this configuration, all this setup, all this screwing around to get the thing actually working before you can start coding. Like learning some new framework, or dependency management, bullshit like that. That stuff is so much better now. You take your errors, you paste them into ChatGPT, it explains them. It's right most of the time. You can ask it to build a config script. So, I think if you know how to use the tool, you can just be a million times more productive. So, I would say lean into it.
Don't be intimidated by it. Definitely don't shortchange it. Dedicate some research effort. Trust your engineers. Buy those subscriptions. It's 20 bucks a month. Don't be so cheap, please. Indian engineering managers are really cheap with tooling, I think, a lot of the time. Just spend it. It's fine. It's going to be worth it. I think that would be my big thing right now.

Kovid Batra: Great, Jacob. Thank you. Thank you so much for this. We'd love to have another discussion with you on any of the topics you love in the coming shows. And for today, thanks a lot once again.

Jacob Singh: My pleasure. Same here. Good talking to you, Kovid. All right. See you.

Kovid Batra: Thank you. All right, take care. Bye-bye.

Jacob Singh: Happy hacking!


‘Scaling Startups with a People-first Approach’ with Roman Kuba, VP of Engineering at Tset

In the latest episode of the 'groCTO Originals' podcast (formerly 'Beyond the Code'), host Kovid Batra engages in a thought-provoking discussion with Roman Kuba, VP of Engineering at the tech startup Tset. Roman brings a wealth of experience from leadership roles at renowned organizations including GitLab, CloudBees, and CodeSandbox. The heart of their conversation revolves around ‘Scaling startups with a people-first approach’.

https://youtu.be/lTtwQ6PPyq8?si=lnOtCuVxjtvu9Ui2

The episode begins with Roman discussing the key events that shaped his life, followed by his responsibilities at Tset. He then details strategies for aligning the team with high-level goals, fostering a collaborative effort rather than a top-down directive. He also addresses challenges encountered during the zero-to-one phase and effective ways to overcome them. Lastly, Roman leaves the audience with essential advice: prioritise user value creation and genuinely care for your team’s ambitions and well-being.

Timestamps

  • (0:06): Roman’s introduction & background
  • (0:52): Life-defining moments
  • (3:54): Roman’s responsibilities at Tset
  • (7:29): Aligning & keeping young devs motivated
  • (10:29): Challenges in the ‘Zero to One’ journey
  • (19:37): Balancing effort & value delivery
  • (22:33): Advice for aspiring engineering leaders

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have an amazing guest who has 12-plus years of engineering and leadership experience. He's currently serving as VP of Engineering at a tech startup called Tset. He has worked with prestigious names like GitLab as an engineering leader and manager in the past. He's a strong believer in giving back to the community. Great to have you on the show, Roman. Welcome to the show.

Roman Kuba: Hi, Kovid. Thank you for having me.

Kovid Batra: Great, Roman.

Roman Kuba: And by the way, a small thing to correct: the company is pronounced 'Set'.

Kovid Batra: Yeah.

Roman Kuba: I know a lot of people call it 'T-Set', but it's 'Set'.

Kovid Batra: Oh, I'm so sorry for that.

Roman Kuba: All good, all good.

Kovid Batra: I'll call it 'Set' from now on. Is that cool?

Roman Kuba: That sounds good. Yes, sure.

Kovid Batra: So, we have the VP of Engineering at Tset on the show today. Before we get started, Roman, I would love to know a little bit about you. Let's just start with a life-defining moment that has been really impactful in making Roman who he is today.

Roman Kuba: Life-defining moments. Well, there are definitely not too many of them, but I would say one, thinking back very far in the past, to when I was actually six years old. That's a crazy long time ago, right? But I had a great opportunity with my parents back then. My dad was still a student and this kind of thing, and we didn't have a lot of money, but they saved for a long time, and we spent, I think, over half a year in Australia. I'm currently in Austria, and Australia was a lifelong dream for my parents to get to. And that was one of those things.

Kovid Batra: I'll just stop you here. Just for our audience: Austria and Australia are two different countries, guys.

Roman Kuba: Yes, no kangaroos.

Kovid Batra: Yeah. So, you were in Australia.

Roman Kuba: Yeah, and interestingly, to this day I still have a lot of memories, even though it goes back a long time. On the one side, it changed a lot in how I started looking at the world and various things, right? It gave me this constant interest: there are so many cool places to see, so many different kinds of cultures to discover. And I think this is one of the things that changed a lot in how I look at certain things, how I look at certain stuff happening in my own country. I always try to keep a very open mind, and having this experience as a young kid really changed how I look at new things. And my early career continued that: traveling to other countries, using job opportunities as a way of seeing more parts of the world was a big, big driving factor for me, to find a job that allows us to do this. By now, I've had the opportunity to work with so many international people. Really looking back, that was, for me, the one thing: okay, cool, there's so much more out there than this little Austria. Austria is a very, very small country.
And it shaped my interest in the wider world more than anything else could have at that age.

Kovid Batra: Yeah. I really relate to that sentiment. The curiosity and enthusiasm it brings is amazing to have, and it impacts you in so many ways. Even today, when we are talking to people, the openness that we have is driven by these experiences of meeting different people and being in different cultures. So I really relate to it, and it's really interesting. Thank you, Roman, for this quick brief about yourself. Moving on, you are holding this role of VP Engineering at Tset. I hope this time it's right, right?

Roman Kuba: Yeah.

Kovid Batra: So, what's your daily routine like? What do you do at Tset? What are the major responsibilities that come with this role?

Roman Kuba: So, the VP of Engineering role, even if it's more of a leading kind of role, for me one important part of the whole thing is this: every day, my main focus is, okay, what can I improve in our organization today? How can I make the life of each of our engineers better in how they do their day-to-day job, right? That supporting thought is the main thing driving a lot of my actions. I joined the current company only last June, so not that long ago, actually. And one of the first things when I started, I saw, okay, there are a lot of organically grown processes the company fell into. There are a lot of these processes that just start to grow and grow and grow, and not in a way that helps the engineers on a daily basis; often people would think, oh, I cannot do this faster because there's this overhead and this kind of stuff, right? The other thing I uncovered is that we also have a lot of implicit knowledge just sitting in people's heads, with no shared understanding of certain things. Why do we want certain things done in a certain way? How do we handle certain situations? What if we make a certain failure, and how do we recover from it? And we had a very outdated knowledge base at this point. If I asked somebody, "Hey, by the way, how can I learn about this?", they would first go through their Slack history to see where they had posted a link to a certain page, because they couldn't find it by just searching. They'd say, "Oh yeah, I have this information somewhere." So that was one of the things I started on very early: how can I make life better for everybody in engineering? That is the main driver. So I focused a lot on the knowledge base and these kinds of things. The knowledge base part was, for me, the critical one: how can I make information discoverable by everybody, but also contributable by everybody, with a very small barrier to entry? And basically, all of this is also a big part of what it means to lead an entire team. It shouldn't be that I constantly tell everybody what to do; ideally, I want to give people context.
I want people to know how I make certain decisions, or which pieces of information I have that they might not have, right? Opening this up, making it discoverable, making it useful for everybody: that's my day-to-day. And on top of that, we of course have challenges like scaling the team as we grow. To even be able to do this, you need a very solid foundation as a company: this is how we operate, this is how we work, this is how you can join the team. That way, on the one side, we don't lose a lot of knowledge if somebody leaves, but we also don't have to spend a ton of time giving somebody who joins the company all this implicit knowledge. Ideally, you can say, "Hey, go in there, you'll know everything about where we are, and you can join the journey from day one, basically." That's my focus: to help, right?

Kovid Batra: That's the interesting as well as one of the hardest things to bring to a company. When we think about things from a high-level perspective as a leader, this seems to be one of the most critical things to have if you want to scale. But the problem that I've seen with most teams is that the junior folks who are working don't intuitively align towards this, and they find it hard to contribute there. As a leader, you know it's important, so you can dedicate the time and focus and set that goal for the team. Did you do anything specific to trigger that counterintuitive behavior, to get people to actually go and contribute back, to help you make this a crowdsourced thing rather than just one person doing it? Because it's an incremental process and it has to come from every angle, right? So, were you able to discover anything interesting there and implement it for the team?

Roman Kuba: At the start, I nagged everybody: "Where's this documented?" If I asked a question and somebody would tell me, "Oh, it's like this," I would immediately say, "Hey, please document it." And once it was written down, I then continued to share it further in the team. So the work people create when they write things down, I try to leverage right away, so that the person writing it down also sees the value in it right away.

Kovid Batra: That's a really good idea, actually.

Roman Kuba: Yeah, for me, that is the number one thing. And I really hate these weird icons that pop up in the Mac camera recently; it's funny, so please don't get distracted by them. But I think this is so critical, right? If you try to make changes, as a manager, as a leader, you're used to having a longer feedback cycle in general: I make a change, and I see the success or impact of that change after a certain amount of time. But as soon as you start involving other people, I think it's critical that they get very instant gratification for how their work contributes to the overarching goal. By leveraging what they do and saying, "Hey, look, this is what you contributed and it already creates value from the first day you did it," I think it's incredibly important that people don't lose track of the change you want to make overall, but say, "Cool, I'm now part of this journey." And then they go in and start pinging other people.
So I'm like, "Hey, have you documented this somewhere?" And it started to be a joke at the start where I say, "Oh, please, please, please document it." By now, but people like constantly ask her, "Where's this documented?" So, you know, it's become like a viral way of like how we write things down. And that was pretty cool.Kovid Batra: No, I think that that's a pretty good idea, actually. We just have to like go to the very basics of how we can really incentivize and reward people for doing this activity. And it makes a lot of sense. Cool, Roman. I think this was something about when you were scaling, really scaling the startup, right? But when it comes to the journey of zero to one, like, you have been at Codeship, right? This was a startup where you were part of the zero-to-one journey. I think those are the one of the most challenging times. Of course, scaling is a problem, I don't deny that, but in zero to one, you're fighting for survival, right? So, in that journey, as an engineering leader, I'm sure you must have tackled a lot of problems. But tell us about one initiative or one challenge that you think was really a challenge for you. And how did you handle it? And what did you learn from that?Roman Kuba: Yeah. So, like for everybody listening, like Codeship, that was my first really, I would say tech startup challenge in this case or so. I joined the company in 2014. Yes, that's a long time ago, actually, yeah. And we were like a CI/CD startup, right? So that means when, basically, this constant delivery and testing of software was pretty like, I don't want to say a super-new thing, but having like a SaaS product doing this, it was a thing that was slowly kind of getting more and more adoption in the market and people getting used to it. I joined that company actually as one of their very early members, as an engineer, I was like still really a front end developer or full stack developer back then, even like, and focused a lot on the front end part. For me, like the, cause you say, what is the challenge? Like there were a ton of challenges on a daily basis back then. Like, everything there, I had to learn a ton of stuff, like, how do startups work, how to make really, basically, build a SaaS product, right, that you have like a ton of other developers rely on now on a daily basis to say like, "Okay, how can we ship things without breaking it? How do we recover from mistakes?" And these kinds of things, right? That was amazing. And I would say, if I think about one specific thing that the biggest one is been there for some time and like we started to introduce a lot of like different kinds of JavaScript stuff, of course, like to make, drive a lot of the very interactive parts of the application, like think of a log output, right? If you know, run something on your terminal, of course, your terminal prints all this stuff. If you do it in the web, right, then you suddenly, basically, need to take all this kind of terminal content and present it to the user in pretty much real time in the web interface, right? And that was at a time where say, jQuery was like still very, very active. And Angular was one of the bigger frameworks and it was Angular 1. And React was like just coming, was slowly the thing kind of driving in, right? And we had this one page of a new part of our product where you could run like a lot of like really complex bills and you would get like a ton of terminal output. 
I think we're talking about basically 50-60K DOM nodes that you need to render in real time, constantly, and it keeps expanding and this kind of stuff. And there was this one big challenge: we had big customers, and they had very big logs, and that page was just crashing the browser for those users. They were not able to look at their log output because there were simply too many DOM nodes to properly handle the refresh in the way we did it back then. From the engineering side, the interesting part was that I really needed to spend a lot of time dissecting the problem: where was the big bottleneck? We looked at a ton of different metrics: time to paint, the refresh, when we touch which kinds of things. We had Angular back then as the main driver for this front end page. Within basically a week, I did two POCs: one with React and the other with Vue. And Vue was completely unknown back then. It was this little fun side project from one person; nobody had heard of it. It was at version zero point something. And I had knowledge in neither React nor Vue, right? My main go-to was: okay, look at the documentation and ask, "What can I learn about this piece of tech to solve my problem?", because I had identified that rendering was the biggest part of the refreshing, and Angular was notoriously bad at refreshing a lot of nodes. With React, there were constant, I would say, roadblocks we hit back then, because it was much more complicated. And the big roadblocks were on the technical side, because we also had to take into account the knowledge we currently had within our team, right? It's not good if I build something that only I understand and nobody else in the company can easily contribute to. Taking these constraints into account is incredibly important, especially in the early parts of a startup's journey. You need to use all the resources you have in a smart way. With Vue, I was able to build this page in such an easy way that even a backend developer could look at the code, understand how it works, and understand how to contribute to it very easily, without having to jump through a ton of build hoops and steps to understand it, because it looked so similar to plain HTML in the way it was rendered. So, we built this POC within a week, and then it took me another half week to a week to implement it properly in the product with everything we wanted to do. And then there was this defining moment. I recall it: you click the button in your own UI and say, "Okay, let's merge it to production." All the tests ran through, it went live, and you see it's deployed. Okay, now all the users are seeing that piece of code running. And then suddenly, the browsers stopped crashing. You had users saying, "Hey, it's working for me." For me, that moment was very defining in the way I started to look at what value tests bring, what value documentation brings, and what value other parts of the company bring in terms of the knowledge we have and the constraints.
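Roman's actual fix was the framework switch he describes, but the standard technique for log views of this size today is windowed (or virtualized) rendering: only the rows visible in the viewport exist as DOM nodes, so the browser never holds tens of thousands of elements at once. Here is a small framework-agnostic sketch of that idea in TypeScript; the row height, buffer size, and names are illustrative, not Codeship's code.

```typescript
// Sketch: compute the slice of log lines worth rendering for the
// current scroll position, plus spacer heights that keep the
// scrollbar proportional to the full log.

const ROW_HEIGHT = 18;  // px per log line (assumes fixed-height rows)
const BUFFER_ROWS = 20; // extra rows above/below to hide scroll flicker

interface LogWindow {
  rows: string[];    // the only lines that become DOM nodes
  topPad: number;    // height of the spacer above the rendered rows
  bottomPad: number; // height of the spacer below the rendered rows
}

function visibleSlice(
  lines: string[],
  scrollTop: number,
  viewportHeight: number
): LogWindow {
  const first = Math.max(0, Math.floor(scrollTop / ROW_HEIGHT) - BUFFER_ROWS);
  const count = Math.ceil(viewportHeight / ROW_HEIGHT) + 2 * BUFFER_ROWS;
  const last = Math.min(lines.length, first + count);
  return {
    rows: lines.slice(first, last),
    topPad: first * ROW_HEIGHT,
    bottomPad: (lines.length - last) * ROW_HEIGHT,
  };
}
```

On every scroll event, and on every batch of new log output, you recompute the slice and re-render only those rows between two spacer elements, so a 60K-line log never costs more than a few hundred live DOM nodes.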
And yeah, it was a fun one, and I think it helped a lot in the company's journey.

Kovid Batra: So now, going back and thinking about the same incident, what do you think drove you towards solving this problem? What comes to your mind? Was it your curiosity to explore something else, or was it the need of the hour with no escaping it? What pushed you to go forward and take up this challenge?

Roman Kuba: I was basically the only front end developer back then, right? I was the only one able to fix it.

Kovid Batra: Yeah. So, it was more a question of survival for you then.

Roman Kuba: Yes. For us as a company, it was really: we have this product, we want to sell it, this customer is curious, right? But if there's this negative connotation around our product, where people say, "Oh, it's not working," that's just not great. And as a startup, I think you always need to make very conscious decisions about what you do and what you focus on. There's always the ideal solution, the best solution you can build, and then there's: okay, what is the solution I can build now that provides the most value to our users? And sometimes the value for the user and the ideal solution go in very separate directions. And I think that is the thing I learned at that point: when do you prioritize the right pieces of code? When do you look at what you really need to invest in long term as a company? It also changed a lot of my perception when designing things, about where metrics play a bigger part in what I build. I started to look at performance metrics from the start. I look at real edge cases in how I build things: how fast am I actually at deploying and recovering from these kinds of things? So, yeah. Ideally, if I can go incrementally forward with these kinds of changes, that's always a better approach than just throwing everything over the fence and restarting. That rewrite was more of an escape hatch because we had a really big problem. But usually, it's about making smart decisions with the constraints you have in mind and saying, "Okay, what small step do I need to take to bring me closer to the ideal value I want to create for the user?" My ideal solution can be the North Star, but it shouldn't be my first stepping stone.

Kovid Batra: First step towards that. Totally. I think it's a great piece of advice. Being in that position and taking that call is the hardest part. When you talk about it, you can say: keep that balance, don't go for the ideal solution, just look at what's the next best step. But that really requires some level of research, some level of understanding of what the next first step should be, because you can end up in a tech debt situation with things like these, and you might delay the delivery. So, a great piece of advice, but if I have to ask: what exactly is that framework, or even if it's not a defined framework, how do you take that call? What's that feeling you get when you see: okay, this is what I'm going to implement, and this is what I'm going to leave for the next step?
How do you decide that?

Roman Kuba: I would say I always weigh the risk a little bit. Especially in a startup, everything you do has a lot of risk involved, right? If you build new features, have you validated them with users? Will users like them? These kinds of things. For me, it's always, on one side... I don't want to say minimize risk, but I want to keep the risk low relative to the effort invested. If I need to spend a month building something and there's super-high risk involved, is that even a month spent well? Especially in the startup world, a month is a ton of time. You're never getting it back. So if there's a step I can take that takes me only a week, maybe at a lower risk, then I think that's always the better thing to do. Even if I then need another month afterwards to validate what I'm doing, and later a user says, "Yes, it's the right journey," then you invested a week, right? Whereas if you invest a month and it turns out to be the wrong path, you're not getting that back. But spending a week, and then an extra month to confirm, "Okay, yes, it was a good idea": that is how I usually try to look at things. Getting to the goal and getting confirmation that you're not wasting your resources: that is the big thing for me.

Kovid Batra: Makes sense. Just to add to it, I think a lot of times we as developers make a decision purely based on the effort it's going to take, and we just find the shortest path to it. What I loved about your narrative is that at no single point were you thinking about the effort that would go into it. You were always thinking from a customer standpoint: what value should be delivered right now. So, even without you saying it, I'm taking this from you: thinking for the customer and delivering the value should be the primary objective in your mind. The effort, whether it is one week or 10 days, or diving into a new technology to ramp up your learning, never became the hurdle for you. It was always just the path you had to take to deliver that value. Amazing, Roman. I already feel inspired by the way you are thinking and doing things. All the best for your upcoming endeavors, and may you shine like this always. On that note, I would ask for one last piece of advice for our audience today, from you as an engineering leader. For people who are aspiring to be engineering managers and leaders, what would you like to tell them?

Roman Kuba: That's a fun one. The engineer in me would say: really focusing on the value is the number one priority, because your users just don't care which piece of tech you're using, right? They don't care which framework you're using and all this kind of stuff. And for developers, that's very uncomfortable; it's something I needed to learn. But make the decision that says: okay, even if it takes you a little longer, what is the thing you want to create for the user, for them to get the benefit? That is, for me, the number one thing. Start thinking about the value you want to create.
The leader in me would say, for anybody, because I went through this journey, right, being a developer, then leading a front end team, then stepping up to become an Engineering Manager: what I always did, and do to this day, is really honestly care for the people you work with. Understand their ambitions, understand what they want to achieve. Everybody, even when we talk about tech, also has fears: am I making the right decisions? Be genuinely interested in what people think and how they feel about certain situations, because you also want to create value for the people you work with. I think that is the number one thing: genuinely care for each other, and take this journey, on a startup or whatever tech product you're building, together. That's my advice, I would say.

Kovid Batra: Great, Roman. Thank you. Thank you so much for this advice and for your time today. We'd love to see you on the show again, and we would love to call guests like you back in future episodes. Really loved the talk.

Roman Kuba: Thank you, Kovid, for having me. It was a pleasure being here.

Kovid Batra: Thank you, Roman. See ya!

Roman Kuba: All right. Take care. Bye-bye.

‘DORA, SonarQube & First 90 Days of Leadership’ with Glenn Santos, VP of Engineering at PDAX

In the latest episode of the ‘groCTO Originals’ podcast (formerly ‘Beyond the Code’), host Kovid Batra welcomes Glenn Santos, VP of Engineering at PDAX. Glenn is also dedicated to empowering developers to become leaders through his initiative ‘eHeads’, a community for engineering leaders to exchange experiences and insights. His vast experience includes valuable contributions to renowned companies such as Salarium, TraXion Tech Inc., and HCX Technology Partners. The discussion revolves around ‘First 90 Days of Leadership, DORA & SonarQube’.

The episode kicks off with Glenn sharing his hobbies and life-defining moments, followed by an insightful discussion on how Glenn as a VP of Engineering manages his role, the company’s mission, and day-to-day challenges. Further, he shares strategies for aligning developers with business goals and navigating compliances in FinTech while maintaining velocity. He also shares his 90-day draft for building trust in the initial days as a leader and highlights the use of DORA metrics and SonarQube to measure team success, address integration challenges, and plan targeted improvements.

Lastly, he offers parting advice to aspiring leaders and engineering managers to embrace leadership opportunities and prioritize personal growth over comparing themselves to others’ progress.

Timestamps

  • (0:06): Glenn’s background
  • (0:37): Glenn’s hobbies & life-defining moment
  • (2:38): Role & daily challenges at PDAX
  • (3:37): Aligning tech strategy with business goals
  • (5:22): Aligning team & individual goals
  • (8:00): Managing velocity and compliance in FinTech
  • (11:31): First 90-day leadership plan
  • (14:56): Measure engineering team success
  • (17:24): Implementing DORA & SonarQube
  • (21:58): Parting advice for aspiring leaders

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have a special guest. He’s currently VP of Engineering at PDAX, which is one of the best crypto and digital asset exchanges in the Philippines. He’s also known as the organizer of eHeads, a fellowship for local engineering leaders. His career trajectory has mainly revolved around helping others work better. So he’s a passionate tech leader with a lot of compassion and empathy. Welcome to the show, Glenn. Happy to have you here.

Glenn Santos: Thanks. Thanks for having me.

Kovid Batra: Great, Glenn. So, before we start off and learn some great stuff from you, from your experience, we would love to know a little bit more about you, like your hobbies, your day-to-day activities. So quickly, if you could introduce us with yourself and tell us about your life-defining moments and some of the best experiences that you have had so far, your hobbies, how you unwind your day, I think that would be great.

Glenn Santos: So probably, my most life-defining experience was when I discovered TechCrunch before. So, when TechCrunch was just starting out, I was just a usual rank-and-file worker in a big company. I wasn’t a developer at all. So, when TechCrunch published like 10 ideas on how to create a startup, these were the ideas that they thought would boom. I found one that was particularly something that Filipinos here where I am from could do, which is some sort of labor arbitrage. So, it’s called outsourcing now. It’s very popular across the world. But at that time, we did not have the technology to make it easy, so I had to build my own forum, I had to create my own website, and do all the other stuff needed to get that business up and running.

For my hobbies, I’m actually an avid fan of cars. And I’m also a foodie, as they call it. So, I like trying new foods. Technology-wise, I still read Hacker News to keep up to date, but I also mix it up with some newsletters to supplement my knowledge of engineering management. And I share my learnings on LinkedIn. That’s maybe a quick run-through of what I’ve done.

Kovid Batra: Yeah. Yeah. That really helps. Thank you so much for this intro and coming to the main section, the main discussion, like when we want to learn something from engineering leaders like you, I think the best would be to start with your current role wherein if you could just tell us, what you do as a VP of Engineering at PDAX, what PDAX exactly does, and what your day-to-day challenges are. I think let’s get started from there first.

Glenn Santos: So, as VP of Engineering, I handle most of the people side in the engineering management role. So, my focus is really on people and the processes that enable them to work better. So right now, one of my initiatives in the company is to roll out DORA metrics and SonarQube. In my day-to-day, I actually do 1-on-1s, I join meetings with engineers and I also help plan out what, since we’re at the start of the year, I’m helping plan out what we’re going to do for 2024.
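As context for the rollout Glenn mentions: the four DORA metrics (deployment frequency, lead time for changes, change failure rate, and time to restore service) are usually derived by joining version-control data with deployment events. As one illustration, here is a minimal TypeScript sketch of ‘lead time for changes’, the median time from commit to production deploy; the data shapes below are assumptions made for the example, not PDAX’s actual pipeline.

```typescript
// Sketch: DORA "lead time for changes" from commit and deploy records.

interface Commit {
  sha: string;
  committedAt: Date;
}

interface Deploy {
  shas: string[];   // commits first shipped by this deploy
  deployedAt: Date; // when it reached production
}

function medianLeadTimeHours(commits: Commit[], deploys: Deploy[]): number {
  const bySha = new Map(commits.map((c) => [c.sha, c]));
  const hours: number[] = [];

  for (const deploy of deploys) {
    for (const sha of deploy.shas) {
      const commit = bySha.get(sha);
      if (commit) {
        // Milliseconds between commit and deploy, converted to hours.
        hours.push(
          (deploy.deployedAt.getTime() - commit.committedAt.getTime()) / 3.6e6
        );
      }
    }
  }

  hours.sort((a, b) => a - b);
  return hours[Math.floor(hours.length / 2)] ?? 0;
}
```

Deployment frequency, change failure rate, and time to restore fall out of the same deploy records plus incident data, which is why rollouts like this usually start by getting those event streams into one place.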

Kovid Batra: So, when you say you are planning what is gonna go out for 2024, I mean, this is basically what a VP Engineering would be doing, like connecting the business to the tech side and, like enabling the teams to be aware of what we should build and what strategy we should follow. So, I think this is one interesting piece. And this is one of the challenging things of a leadership role where you bring in the business and align it with the technical strategy. How do you exactly do that? And if you would share some of your experiences with aligning your teams to that technical strategy which ultimately aligns with the business goals. So, how do you do that?

Glenn Santos: So, first I need to understand the business goals for this year that’s going to be actually rolled out next week. But right now, it’s pretty clear what our direction will be business-wise. So on my end, I have to translate that into something that will help the engineering team with the help of product, of course. In our organization, product is separate from engineering. So, we align ourselves with each other so that the features that we build are according to that roadmap from the executives.

And aside from that, I also have to make sure that we keep code quality high because one of the things that I’d like to implement is that we build features more quickly. So, that’s enabled by better code which actually is a very good flywheel that contributes to the speed of development as well. So, we can do more with less once we have better code.

Kovid Batra: That totally makes sense. When it comes to aligning the team with these goals, it’s always a challenge to get the developers intuitively or naturally aligned with them. So, what exactly do you do in your meetings or day-to-day work to ensure that people are aligned, on a day-to-day level, with the business goals and the work they are doing? Because it’s always a second-order effect, right? The developer builds the code, ships it, and only then sees the results, when the features are being adopted or the system is getting faster. So, in the day-to-day, how do you make sure they relate to those second-order goals and get rewarded for that?

Glenn Santos: So, when we’re building things, aside from the regular, I think most of your audience would be familiar with the stand-ups and the catch-ups with product and the entire team. So, that’s part of it. But another part is, I always reiterate during our own engineering meetings why we’re doing these things, because we need to connect that with their own motivations. Each person has a different motivation, but I’m hoping that most of our engineers are motivated by growth and learning, as well as achieving something that’s impactful. So, we share metrics in our meetings. We share how users are accepting their features. And we also want them to connect the goals of whatever they’re doing with their own personal goals. We’re a fintech company that’s focused on wealth, increasing wealth, and most of our engineers are actually crypto traders as well. So we roll out initiatives like helping them learn more about crypto and also how to handle their own funds. That’s also something that we strive to do at PDAX.

Kovid Batra: I think that’s the best thing that you can have, like the people who are building the tool for a user, they themselves are users of that product and they understand the problem statement, they understand the solution around it. I think you answered the question really well here and you’re lucky to have that kind of motivation because in a lot of cases when people are building, let’s say some B2B products, the developers are totally disconnected from the kind of audience they have to build the product for, right? So, I think you’re in a good position and it’s a good strategy actually.

Cool. I’ll move on to the next question, Glenn. So, you’re working at PDAX, and it’s a financial institution, so I’m sure a lot of compliance and security issues come into the picture; and being a fast-moving company, you have to roll out many features in as little time as possible. As a leader, how do you manage that balance? Because that is a bit of a challenge, and when compliance comes into the picture, specifically in fintech, it’s a big, big challenge. So how do you manage your velocity while making sure that you are fulfilling all the compliance requirements?

Glenn Santos: Yeah. Good question. Currently, there’s really a push and pull here. Other teams need to meet central bank requirements because that’s part of the regulations, but we also want to build things quickly, which is also a mandate from our CEO, actually. So, what we do here is we actually set aside time for that compliance work. And, for the most part, it’s not handled by the engineers themselves; I let the managers handle that. But if, for example, we need input from tech leads or engineers, we bake that in so that it doesn’t disrupt their flow. I’m still a big believer in giving engineers maker flow, because that’s the only time when they can deliver quickly and come up with well-thought-out solutions to the problems that we give them. So, we don’t want to interrupt that. But at the same time, we also want them to be communicative and collaborative. So, I think having those standups and having them work more async is key here, so that they can contribute when they need to, but we don’t rush them into these admin tasks.

Another thing is that we want to build rapport with these other teams who are not technical, who might not understand what we’re doing, so that when we’re a bit delayed in giving responses because we’re working on other stuff, we can smooth things out. And that’s actually another part of what we’re doing in the company currently: some process improvements so that these asks from external parties are handled well.

Kovid Batra: I think this is a good strategy, where you segregate their work so that they don’t get interrupted in the flow of work on an everyday basis, but they’re also aware of what’s going on and understand the context, so that, as you said, when they need to contribute in that direction, they have that context and implement things accordingly. So, yeah, I think balancing this on a day-to-day level is probably the key here, because you don’t know when things tip into the territory where you’re interrupting them all the time. And there could be situations where they’re not completely aware of what’s going on on the compliance side, and they end up building something that doesn’t fit the picture and have to rework it. So, I think the balancing act is a must here, which I’m sure you’re doing. And you joined PDAX, I think, very recently, most likely a year or so ago.

Glenn Santos: Less than a year actually.

Kovid Batra: Less than a year? All right. So, I think this is also an interesting piece, coming in at a leadership position and building that trust with the team, right? This is something where I see a lot of people struggle; people just don’t accept authority. In today’s world, at least, they don’t just accept authority; they want to be influenced by their leaders in a subtler way. So, how are you handling this role? How are you building that trust with the team in this initial year?

Glenn Santos: So, before I even started, I interviewed all the leaders that would be under me and that I’d be working closely with, because I wanted to establish that initial rapport. I wanted to know if we clicked, because for the most part, you don’t want your reports going against you; you want to have some harmony. When I started, I also did, and still do, frequent 1-on-1s with my direct reports, who are engineering managers, because I want to get to know them and not just their work. I also want to know what makes them tick, what they’re really like, and maybe some of their interests outside of work. So, that’s part of it.

And aside from that, for the engineers themselves, I’ll start doing some skip-level 1-on-1s as well so that I get a holistic picture of the engineering team. But when building trust with them, I’m very transparent and open about what we’re going to do. So, I always reiterate that our goals for this year are code quality, that this is how we will be measured, and that I also want people to be more learning-focused this year. I’m really hoping that aligns with them, because one thing that I’ve learned recently is that you cannot give people motivation. They have their own motivations. You just have to align yourself with them so that they can do their best work.

Kovid Batra: Is there anything that you specifically follow when you join? Any basic rule or format or template, like, okay, whenever I’m going into a new position as a leader, this is something that I should continue doing for some time?

Glenn Santos: Yeah. Since I’ve joined a few companies recently, I’ve actually created my own draft of what I should do in the first 90 days. It’s split into 30, 60, and 90 days. While the goals are not set in stone, I do want to get as much information about the organization as possible in the first 30 days, as well as talk to as many people in the organization as I can. I actually go on-site, since we’ve returned to the office already, and I need to talk to as many people as possible, because in companies like ours you have to interact with people who are not in tech, maybe some people in operations, some people in sales. You want to build that rapport with them so that they understand you and you understand them when they have things to ask of you.

Kovid Batra: Right.

Glenn Santos: Yeah, so that template has been very useful for me. There’s actually a book called ‘The First 90 Days’, I think, that I use as a basis.

Kovid Batra: Perfect. Great. Glenn, the last thing I want to touch on in this discussion is something you just mentioned you have taken up as an initiative: setting up the DORA metrics. So, DORA metrics is one aspect I want to understand, but broadly, I want to understand your way of measuring the success of an engineering team. If your team is doing well, how do you define that for the team, and how do you make sure that everyone is aligned on those KPIs and metrics, maybe DORA metrics? And what all goes into setting up that momentum for the team so that everyone is motivated towards those goals? They shouldn’t feel that they are under a microscope when you talk about KPIs; they should naturally feel like working towards those KPIs in their day-to-day work. I just want to understand this whole piece from you.

Glenn Santos: So, I guess our rallying cry this year is to be better engineers, and I’m pretty sure most engineers want to be better. With DORA metrics, I tell them that this is not some sort of measurement that we use just for its own sake or to rank you. I want to use it to really create better engineers, because when you follow the metrics, you’ll naturally hit roadblocks. Engineers love problem-solving, so this is one way to attack that part of the brain that loves feedback. It’s a very quick way to reinforce the feedback loop. That and SonarQube, which is also an automated way to collect metrics.

So, people love games, and we’ve seen that gamification is very, very effective in producing the behaviors that we want. And this is one way for them to really see if they’re doing well: they’re publishing clean code, they’re creating code that has no bugs, no vulnerabilities. We want that. And also, it’s a team metric more than an individual metric, because the emphasis of DORA is really on teams. I want them to be more collaborative. So, if something fails, we’re not singling out one person. I’d rather tell the team, “Hey, you’re not doing well, help each other out, raise these metrics so that we can deliver better products to our customers.”

Kovid Batra: Right. Makes sense. Apart from building this first-level alignment of the team towards these metrics, what challenges did you see while implementing these success metrics? Any specific example? I’m not sure if you have implemented them yet, but let’s say you’re looking at implementing them: what would be your go-to strategy? What would be the one or two important metrics that you would be tracking? And how, again, would you bring the team into alignment that these are the right metrics to focus on right now?

Glenn Santos: So, we’re actually in the process of implementing these metrics. We’ve ranked them accordingly. One thing that really stands out that I’d like to measure is the reliability of our code, which is automatically measured by SonarQube. One of the roadblocks, sorry, that we’ve come across is that if you’re using different systems, say one for CI/CD, another company for your repos, and maybe another for your servers, it might be best to streamline these first, because that was one of the challenges we had. We had to string them together, and DORA metrics are ideally collected in real time. So, for now, we’re not collecting them in real time. If, for example, everything you have is in AWS, it might be simpler, or if everything you have is in Atlassian, that’d be simpler. And probably one of the people-side challenges of implementing metrics is actually getting engineers to do the integrations, especially if you have lots of repos.

Kovid Batra: Yeah.

Glenn Santos: So, finding time for them to do that, that’s usually the challenge. Do they build the features, or do they do these integrations? So, I have to work with product and say that we need to slot this in, maybe before the sprint or during the sprint, so that we can start collecting the metrics, because we can only act upon the metrics once we’ve collected them. And we’re actually at just that part right now.

So, the next phase would be creating a plan for how to improve those metrics. We are not there yet, and we don’t want to plan too far ahead, because we might assume something is wrong while the metrics say we’re actually doing okay there. Then we can focus on the metrics that are not up to par and put our engineering efforts there, so it’s more targeted and has more impact.
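
To make the “stringing together” Glenn describes concrete, here is a minimal sketch that merges deployment events from two hypothetical systems into one log and counts deployment frequency per ISO week. The sources, field names, and event shapes here are illustrative assumptions, not PDAX’s actual stack:

```python
from collections import Counter
from datetime import datetime

def normalize(events, source, ts_key):
    # Map each raw event to a (timestamp, source) pair.
    return [(datetime.fromisoformat(e[ts_key]), source) for e in events]

# Hypothetical raw events exported from two separate systems.
ci_events = [{"finished_at": "2024-01-08T10:00:00"}]
cloud_events = [{"deployTime": "2024-01-10T15:30:00"}]

deployments = sorted(
    normalize(ci_events, "ci", "finished_at")
    + normalize(cloud_events, "cloud", "deployTime")
)

# Deployment frequency, grouped by ISO week.
per_week = Counter(ts.strftime("%G-W%V") for ts, _ in deployments)
print(per_week)  # e.g. Counter({'2024-W02': 2})
```

In practice, an engineering analytics tool would do this normalization continuously; the point is simply that all sources have to land in one consistent event log before any DORA metric can be computed.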

Kovid Batra: Yeah, that makes sense too. You cannot really jump the gun here. The whole point of having metrics is to first understand the problem. If you just pick a metric and start working on it with the team from day one, it might not actually align with the real improvement areas. So, I think the thought process you have right now for implementing these metrics makes a lot of sense: first-level implementation, getting the data in place, people looking at it regularly. From there, you will start getting indications of where the inefficiencies lie. For example, take change failure rate, one of the important DORA metrics to track. If your team sees a lot of failures once you release to production, that becomes the area of focus, and you start taking measures towards it. But if at the very beginning you say, okay, let’s start working on change failure rate, and surprisingly your team is already doing well on that metric, the team would ask why we are doing that, and the exercise would lose its purpose. So, it totally makes sense to look at it deeply and understand, at every team’s level, which metrics would really work for them. A metric that works for one team won’t necessarily work for another. It’s a process, and I’d say taking it up phase-wise, the way you are, is the right way to go about it.
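
For the change failure rate example above, the arithmetic is simply failed production deployments divided by total production deployments. A minimal sketch, assuming each deployment is recorded with a flag for whether it caused a failure (the record format is an illustrative assumption):

```python
# Each production deployment, flagged if it led to an incident,
# rollback, or hotfix. The shape of these records is assumed.
deployments = [
    {"id": "d1", "caused_failure": False},
    {"id": "d2", "caused_failure": True},   # e.g. triggered a rollback
    {"id": "d3", "caused_failure": False},
    {"id": "d4", "caused_failure": False},
]

failures = sum(d["caused_failure"] for d in deployments)
cfr = failures / len(deployments)
print(f"Change failure rate: {cfr:.0%}")  # 25% here
```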

And, great. Glenn, it’s really nice to understand these things from someone implementing them hands-on. I’d love to hear more insights from you once you’ve done the implementation. Maybe we can have another session, another podcast, where we discuss how you implemented those metrics and what rewards you got out of them. So, great. It was a great, great talk. And before leaving, I would love for you to give some parting advice to the aspiring leaders and engineering managers listening to this podcast on how they should move ahead in their careers.

Glenn Santos: So, one of my big pushes, really, and you can see it on my LinkedIn, is that I want developers to become leaders. We don’t have enough engineering leaders, actually, and not enough developers are interested in leading. So, my advice is for people to try it out. You can pedal back if it doesn’t really fit you, but it might be another way for you to grow. Try it out within your own company, or maybe you can help another startup out. And when you’re going through this career journey, it’s not for you to compare yourself with others. Some people will have done pretty well, and others might not have progressed that quickly, but don’t compare yourself to them. Compare yourself to what you were, maybe one year ago, five years ago. As long as you’re progressing at a good pace, I think your career as an engineering leader or an engineer will really go far.

Kovid Batra: That’s really great advice. And I think the best part was when you said, “Keep trying it out, with other startups and companies.” Having that hands-on experience and being in those situations really builds that quality in you. Reading books or listening to a few podcasts might give you an initial framework for how you should do it, but the real belief, I would say, comes when you have done it hands-on multiple times.

So, great advice, and thank you so much for this amazing, amazing discussion. I would be more than interested in talking to you again about your experiences of implementing DORA and handling your team, maybe six months down the line, to hear how it went. So, let’s get in touch again. Thank you for today. I’ll see you soon.

Glenn Santos: Thanks. Thanks, Kovid. Great to talk to you.

Kovid Batra: Thank you.

‘Leading Dev Teams through Acquisitions’ with Francis Lacoste and Miroslaw Stanek

In the latest episode of the ‘groCTO: Originals’ podcast (Formerly: Beyond the Code), host Kovid Batra engages in an insightful discussion with two dynamic engineering leaders: Francis Lacoste and Miroslaw Stanek.

Francis has formerly worked with Heroku and Salesforce & is now a VPE and CTO coach specializing in scaling up startups. Miroslaw is the Director of Engineering & the PL Engineering Site Lead for the Poland R&D division at Papaya Global. He’s also the author of the newsletter ‘Practical Engineering Management’. Together they explore the theme of ‘Leading Dev Teams through Acquisitions’, delving into their real-life experiences.

The episode kicks off with Francis and Miroslaw talking about their personal lives and hobbies. Moving on to the main section, they dive into their acquisition experiences and the pivotal hurdles faced by engineering leaders in their respective organizations. They stress the importance of swiftly merging company cultures post-acquisition while addressing the challenges of navigating the ‘us’ versus ‘them’ dynamic. The conversation also explores strategies for maintaining engineering team efficiency without sacrificing value delivery.

Lastly, Francis and Miroslaw share parting advice with engineering leaders who are navigating similar challenges.

Timestamps

  • (00:05): Miroslaw & Francis’ background
  • (04:23): Challenges of leading dev teams through acquisitions
  • (07:40): Navigating the transition period
  • (20:50): Lessons learned & areas for improvement
  • (27:20): Maintaining team motivation
  • (35:22): Measuring efficiency during transition
  • (41:02): Aligning team practices with new requirements
  • (42:54): Parting advice by Miroslaw & Francis

Links and Mentions

Episode Transcript

Kovid Batra: Hi everyone! This is Kovid, back with another episode of Beyond the Code by Typo. Today, it’s a unique episode for us and we have some special guests. In fact, we have two amazing guests with us, Mirek and Francis. Both of them are accomplished engineering leaders, but they have one thing in common: their passion for contributing back to the engineering community. And that’s why we connected. So, Mirek has been on our show previously, but let me introduce him again. He writes the newsletter ‘Practical Engineering Management’ and is the Director of Engineering at Papaya Global. Francis is coming to our show for the first time. He’s an engineering leadership coach and a seasoned engineering leader who has worked with companies like Heroku, Salesforce and more. I’m glad to have both of you on the show. Thanks for coming. Thanks for joining in.

Francis Lacoste: Hi, Kovid.

Miroslaw Stanek: Yeah, thank you, Kovid. Hey, Francis. Thanks for having us.

Kovid Batra: Great. Francis, Mirek, it’s a basic format: before we jump on to today’s topic of leading dev teams through acquisitions, I think it would be great if you could share some of your hobbies, some personal things about yourselves with the audience so that they know you a little more. We can start with you, Mirek. Would you like to go first?

Miroslaw Stanek: Yeah, yeah, sure. Like Kovid said, it’s my second time on this podcast. But for new people listening to us, my name is Mirek, or Miroslaw, depending on which pronunciation you prefer. Like Kovid said, I’m currently the Director of Engineering at Papaya Global, and I’m also the site leader, leading the Polish R&D site of the company. I also write the newsletter ‘Practical Engineering Management’, where I basically try to help engineering leaders maximize the impact of their work and make their teams successful.

Personally, I’m the father of a three-year-old daughter, so I’m showing her the way, exploring the world, answering all of the questions. And recently, I’m also becoming a professional athlete. Yes, even after turning 35, you can still apply for the license. I’m an obstacle course racer. I have some aspirations, maybe, you know, not the Olympics, but still, I’m enjoying the ride, and hopefully I’ll be able to share some successes over time. Yeah, so, thanks for having me.

Kovid Batra: All the best. All the best. All the best. Thanks, Mirek. Thank you so much for this lovely intro. Francis, your turn, man.

Francis Lacoste: So I’m Francis Lacoste. I’m based in Montreal, in Canada. I’m an executive coach working mainly with CTOs and VPs of Engineering at startups. I help them specifically when they need to scale their team, and this is where they need to get really deliberate about culture. This is my passion, really: making sure that teams have a great engineering environment like the ones I’ve experienced. Before that, I was an engineering leader at Salesforce and Heroku, and I started my leadership career at Canonical, the open-source company that made Ubuntu. That is where I started learning remote management, back in 2000.

Outside of work, I play in an electronic ambient band. I play a hands-free instrument, the theremin, the one that makes that spacey type of sound. I also have a long practice of meditation, and I now teach meditation with Buddhist Geeks, which is an online organization.

And it’s a pleasure to be here, Kovid. Thank you for inviting me.

Kovid Batra: Great, Francis. Thank you. Thank you so much for that lovely intro. I would love to hear you sing and play the music sometime.

Francis Lacoste: Well, we have a Bandcamp and we’re on Spotify, so I can give you the link in the show notes.

Kovid Batra: Oh yeah, that’s cool! Great. Francis, Mirek, we are here today to discuss the challenges that engineering leaders face post-acquisition. And both of you come with immense experience. You have spent time in different-sized organizations: you have had startups, and you’ve worked with companies as big as Salesforce, Francis, as you just mentioned. I’m sure you have had experiences of acquisitions, right? And of various types.

So, to start off, tell us about what kind of acquisition experiences you have had, and what were the biggest challenges you saw, as an engineering leader or as an engineering team, for the company getting acquired?

Francis Lacoste: Well, at Salesforce, there were many acquisitions. I came in with Heroku just after it had been acquired. And the Heroku acquisition was kind of a weird one because the integration took a very long time; Heroku operated somewhat independently. That was part of the main challenge. You know, the challenge is how do you integrate the culture? It’s an integration problem. The big challenge was the ‘identity’ one. We identified as Heroku, but Heroku is now part of Salesforce. How can we be seen? How can we embrace the bigger identity of Salesforce? That’s how I would characterize the essence of the challenge we faced. And Heroku was not unique in that; there were many other acquisitions, some more rapid, where you’re acquired and, if it’s a technology acquisition, the product kind of shuts down very rapidly, things like that.

Those are other challenges, but there’s still this identity issue that’s very present there, because usually people are not happy about losing their identity.

Kovid Batra: Sure. I think we’ll come back to you for more details on that and discuss more things in depth. Mirek, what about you?

Miroslaw Stanek: From my side, the company I’m working for now, Papaya Global, acquired my previous company, Azimo, where I worked for almost eight years. What was the challenge of the acquisition? I think the merging process in general. My role in the company was, I would say, middle-level manager: a Director of Engineering who leads leaders who lead individual contributors. Basically, our main challenge was to make sure that the entire know-how acquired by the bigger company is utilized, because we came with know-how, with experience, with long histories and ways of working, but all of that is still just potential that you can offer to the company. And as a leader, as a manager, you need to make sure that this potential is actually utilized. So, I think this is the biggest challenge: finding good places for the skills which we were bringing. And that opens up all of the challenges around it: the organization, the culture, the team structure, and everything. So yeah, this is how it looks in the general view.

Kovid Batra: Makes sense. Diving a little deeper into this challenge: how are you navigating this situation on a day-to-day basis? And Francis, please feel free to share your opinion; Mirek, please feel free to discuss anything you’re facing as a challenge today that you feel the community should know about. Francis comes with a lot of experience, and I’m sure he has advice on how to navigate this situation.

Francis Lacoste: Yeah. I mean, I’ve coached people who went through acquisitions as well, so that’s another source. I think one of the things that is very important early on is to know what the context of the acquisition is. There are multiple reasons for an acquisition; I’d say there are three main ones. The first one is usually a strategic product acquisition: your business is acquired because it’s seen as complementary. Actually, there are two variants there. It can be because they want the revenue: you’re in the same space, the acquirer is kind of a competitor, and they want to add your customers to theirs. That’s one kind of strategic acquisition.

The other one, which was more like Heroku and, I think, Mirek’s case here, is the complementary product. Salesforce wanted to expand its reach in the developer space, and Heroku had very good traction in the developer space, so it was a good fit. And you’ve seen that at Salesforce: they have a portfolio of companies they acquired, ExactTarget to add marketing capabilities to the CRM, Tableau to add analytics. These are complementary products, and the idea is that when you go to sell to customers, you have a more comprehensive solution to sell them, so that will drive more revenue.

Kovid Batra: Right.

Francis Lacoste: So, this is the strategic acquisition, and how it goes will be very different from the other two. The second type is the ‘acqui-hire’, you know. You’re acquiring a company because of its talent. Usually this will be when you acquire a small startup where you’re not really interested in their product or their technology.

Kovid Batra: It’s just the team that you need.

Francis Lacoste: It’s, “I want the team.” And usually it might even be one person on the team, somebody with very deep expertise. They have a stake in the company, so they’re not willing to jump ship, and so the acquirer buys the company so that they can work for the bigger corporation. That’s a very different context from the first two.

And the third one is a tech acquisition. There, you don’t really have traction; it’s not about your customers or things like that, but there’s complementary technology. They want that tech. You’ve solved a problem for them, and instead of building it themselves, they buy you. And depending on that context, how the acquisition goes will change a lot.

But what’s your experience with it, Mirek? Was it more of a technology acquisition, a talent acquisition, or a strategic acquisition?

Miroslaw Stanek: Well, you know, I think in the end those types of acquisitions have a lot in common, because yes, you can acquire the product, but in the end, there are people behind the product. So even if you have this piece of technology, you still need those talented people who can maintain it, who can plug it into the new structures and who can continue the growth. I think in our case we were kind of a mix of both. Obviously, we expanded the new company’s portfolio, but we also brought fresh talent, new perspectives and fresh know-how for problems which can also be strategic problems for the company, yeah? The company wants to grow, the company wants to expand its portfolio, so bringing in fresh talent who spent years building this or that can be a part of the acquisition.

Kovid Batra: Cool. Francis, do you have any questions that you wanted to ask Mirek?

Francis Lacoste: I think Mirek is right here in the sense that these three types I described, or four if you split the first two, will often overlap. This is what was always interesting about the Heroku acquisition. Heroku was a strategic acquisition, and what that means is that the first thing the acquirer usually does is give autonomy to the product, because you don’t want to kill the golden goose. And that creates a challenge, because it means you will have two independent or semi-independent organizations going along, and in Heroku’s case, it took basically seven years to complete the integration. Actually, that’s not quite true: for the first five years after I joined, Heroku had its own CEO and the technology team was reporting to the product organization. So, the Heroku engineering organization was totally separate from the rest of the Salesforce engineering organization. And what we’ve seen is that in other acquisitions, that changed: in some acquisitions, they try to merge the technology organization. And this is where you get the process questions, because if you’re independent, you can have these processes going on here and those processes going on over there, and that’s fine; unless you need to align roadmaps, there will be friction, but those are frictions you can deal with. Whereas if they’re acquiring the technology, or the talent, it’s kind of, “We don’t care about how you’re working.” Usually, the way it goes is that they will say, “Here’s how we work, and you need to align with that. Sure, we’re open.”

And then there’s the challenge of how you can influence the culture as part of the acquisition, because you have good things too. And there’s a size differential, you know; it’s usually the smaller one trying to influence the bigger one, and that’s very hard. It will really depend on how you’re able to hook into the processes, build the relationships, all of these things.

So, even though all the problems will happen at some point, the schedule on which they happen differs based on the acquisition type. When it’s a product acquisition, you can usually expect that they will merge the sales teams rapidly. In Heroku’s case, that took a while, but in other acquisitions it didn’t take long: the sales teams, the go-to-market at ExactTarget or Tableau, were integrated into the general go-to-market, because you want to go to the customer with a unified product offering, even if the customer experience is still, “We’re using two different products here.” You know.

Kovid Batra: Right. Coming back to Mirek’s challenge after the acquisition: getting capacity utilization done properly. Is that something you have also experienced, and is there anything specific you did at that point in time? Because I can feel this too: as soon as an acquisition is done, there is a lot of context to gain. There are a lot of things for people to first get on board with before seeing how teams can be utilized at every level. And the operating style of every company that comes in will be different, right? So, there are multiple areas where you need to first get yourself onboarded after the acquisition and then ensure that everyone is utilized in the right place. So, Francis, a question for you: have you experienced such a thing, and how did you navigate that situation?

Francis Lacoste: Yeah, I mean, Heroku’s acquisition was kind of special in that case, you know, because these questions really took years to materialize. Heroku Engineering and Heroku Product were split: Engineering went to report into the general Engineering org, and the same thing happened with Product. And then, these questions started to come up.

And then, there are these things: okay, well, is there capacity here? Can we use it for something else? Do we want this? This is less prioritized. And the challenge there was that often there’s not a lot of knowledge; you have to explain how your product and your technology fit together, and they really need to dive into an understanding of each part.

Kovid Batra: Yeah.

Francis Lacoste: And especially in a big organization, the decisions are made without really knowing the details of the context. So they will say, “Oh, we can cut that.” You know? Or, “We’re going to ask them to take this on.” But then it has a huge impact on the product because it’s..

Kovid Batra: It’s not looked into deeply. Yeah.

Francis Lacoste: This is critical infrastructure. It doesn’t seem like much, but if we don’t develop this, then we’re going to have problems, or these other things are going to have problems, things like that: dependencies. And at the same time, there’s often not a lot of understanding on the other side of what you’re trying to achieve. So, the advice I would give is: if you’re being acquired, you need to understand very rapidly what the business of the acquirer, the company making the acquisition, is, how the tech fits, and how you fit into that, because you cannot really rely on them understanding what’s going on.

Kovid Batra: Exactly.

Francis Lacoste: So, you need to understand them so that you can make your case to them, you know, in the terms that they understand.

Kovid Batra: Right. Right. Mirek, for you: after the acquisition, you were heading the engineering team there. When you moved over, did the developers, the team members working with you, have expectations from you? Were they looking up to you to sort out their lives in this new space? And what exactly did you do? I want your first-hand experience there: what exactly did you do to solve these problems for them and help them get on track, or maybe you’re getting them on track right now, I don’t know. Just share that experience with us.

Miroslaw Stanek: Yes. So, one of the biggest challenges for me, as, like I said, not a senior manager but a mid-level manager, is that I got a lot of questions, with the expectation that I could answer all of them, which obviously wasn’t true. When a company is acquired, on the strategic level you have a product, so the new management thinks about how to use this product in their strategy. You have a pool of talents, so they think about how to utilize those talents. And they think long-term. My role was bridging the gap between those strategic decisions, which were basically still under discussion, and the leaders and engineers, translating them into their day-to-day activities. It’s very similar to what you do as a fresh manager in a company, yeah? What you need to do in the first 100 days, for example. You need to learn as much as possible about the business and the product. You need to understand what the problems of the company are that you need to solve. And then, looking at your team, at the individuals, you need to find the best fit for their skills within the scope of problems the company has. Like I said at the beginning, we joined the company with experience, with a track record, but we needed to build credibility, because that is just potential, and we needed to find a way to utilize this potential and start providing value.

So, basically, my 100 days were full of 1-on-1s with people in all positions, from software engineers to their managers, to the directors, and also product people, marketing people, data people and others, to build context. For example, one of the projects I led at the very beginning post-acquisition was building front-end infrastructure, because we realized that with the monolithic system we had back then, we couldn’t move as fast as we wanted. And this was actually some of the know-how we brought to the organization, because we had done that kind of thing in the past. So, next to the big strategic things, the product and the entire talent pool, we also brought some very specific experience, and there was a problem in the company which we could solve with it.

A year and a half later, I can say that our entire front-end application is built on top of micro front-ends. We have tens of them, compared to the single monolithic one a year ago. So, this went well. But like I said, it had to start with understanding that this was a real problem for the company, and that we had the resources, the experience and the people who could address just that. This was one of the experiences I had at the beginning of the acquisition.

Kovid Batra: Perfect. Great job there, first of all. One thing I feel is that when you have traveled this journey, there is always some looking back and saying, “Okay, I could have done this better.” Right? So, is there anything of that sort, Mirek, that you could share with the audience, something you could have done better? Broadly, I feel you did the right thing, and as Francis also said, first you have to understand the business, understand the need; that’s fundamental. And you got to that point rightly: having 1-on-1s, aligning the teams, bridging that gap, bringing everyone on board. This is amazing. But was there anything else you could have done, or anything you did that you could have done better in some way?

Miroslaw Stanek: I think one of the super-important things which I underestimated at the beginning is a quick merger of the companies’ cultures. As long as you have ‘us’ and ‘them’, and we work this way and they work that way, it’s super hard to navigate, yeah? The truth is that usually the bigger organizations, which are more bureaucratic and more formalized, are acquiring smaller organizations that move faster. But, you know, they are moving faster and breaking things.

Kovid Batra: Yeah. Yeah. Yeah, there are pros and cons.

Miroslaw Stanek: Yeah. So, I think those are the non-technical challenges that you should address from day one: bridging this gap, stopping the ‘us’ versus ‘them’ talk, and seeing how quickly we can become one organization focused on a single goal. Rather than expecting the company to adjust to us, we need to find a way to influence it, to bring our experience, to help change the culture into something that works for all of us, rather than saying, “Okay, we used to work this way, and the new way is not as effective, so I cannot push to production once a day anymore because of this or that.” I think my role as a leader was to answer all of those questions. Why can’t we push as fast as we did in the past? Why do we have more compliance rules? Why this or that? That is the thing I should have done more of at the beginning: just provide all of the needed context to the former team, to help them become good, empowered employees of the new organization. This is it.

Francis Lacoste: I agree completely with what Mirek said, and it’s similar to what we would have done differently. I think for us it took way too long; we stayed ‘us’ and ‘them’ for way too long. It was still going on when I left Heroku. In my last year, this is what I was trying to get across to the team. We kept asking what Heroku’s mission was, and I was saying, look, we get briefed every year at the company kickoff, this big event at Salesforce where we hear the strategy for the year, and we want to know what our business is. We need to listen to that and work out how we fit into it, what our contribution is. Salesforce is in the business of digital transformation; how do we help customers with their digital transformation? And Heroku had a big part to play there, around development. But the ‘us’ and ‘them’ was strong, and this is why I said at the beginning that it’s an identity problem. There’s also the fact that an acquisition means you were successful, you made an exit, especially for the founders. Even if it’s not at the valuation you were expecting, it’s still, “Oh! We’re a big deal. We got acquired.” I wasn’t there at the acquisition, but when Salesforce acquired Heroku, it was a big deal, on Hacker News and all of that; people were saying, “Oh, Heroku being acquired gives Salesforce a lot of cred.” And I’ve seen in other acquisitions this sense of pride and arrogance in being the smaller one: “We are a startup.” “We’re nimble.” “We have traction.” “This is why we got acquired, so they should listen to us.” “We know a lot. They don’t.” But the truth is, especially the bigger the size differential, we need some humility, and we need to get genuinely interested in why things are the way they are. Because the bureaucracy thing is funny. Usually what startups appreciate are the HR policies of the bigger corporation, because they are more formal: better insurance, health insurance, all of that. “Oh, this is great!” But then it’s, “Okay, this is how you should deploy to production,” because there are compliance issues, and usually the bigger company has to deal with them. Oh no! So, as startups entering this world, we need the humility to say, “Okay, we probably have something to learn from them, and it’s on us to understand what the pain points are and how we can help solve them.”

I loved Mirek’s story around the front-end development. It’s a great example: there was a problem, and this is how we can solve it. Heroku was not successful in that way, you know. I mean, we knew how to do deployments and all of that, but we were not really able to solve the deployment problem for Salesforce as a whole, and so Salesforce created its own Heroku, because we at Heroku were not interested. So, the arrogance was at the leadership level. You need to be able to jump ship, in a way, and embrace the new culture, because otherwise you become very protective of what you have, and down the line, that’s not good. You see it: usually the leaders stick around for their golden handcuffs and then they leave, because they were not really able to integrate and find the value in it. And the people who stayed are kind of miserable. So, yeah.

Kovid Batra: Totally, totally. One thing you just mentioned is how that cultural difference plays a role in different aspects of how you operate. It could be something related to hierarchy. People moving from a small team to a large organization would be happy about the HR policies, as you just mentioned. I have had the experience of working with an MNC and the experience of working with a startup, right? The thing is that everyone, even the MNCs, the large-scale organizations, wants the team to move faster, of course without breaking things. And startups usually move faster, even though they break things. So, when this cultural shift happens, when a startup gets acquired by a large-scale organization, how does the transition work for keeping the team motivated, a team that has been working at such a good pace, releasing features, having clarity on what they are doing, seeing the impact? I need to understand some detail around that part. Francis, Mirek, either of you can answer: how do you keep your teams motivated with the fact that, okay, we were running at 100, and it’s going to be 50 now? Things could slow down, and you still need to keep them motivated on that journey. How would you do that?

Miroslaw Stanek: So, from my experience during the acquisition, as an individual contributor you either join an existing team, which is basically like being hired into the company, or you stay as an entire team, an entire entity: you build your stuff, and your job is only to expose an interface or some way of integrating your stuff with the rest of the product, with the rest of the business. I think the second scenario is easier, because you can still build things your way. You can keep your ceremonies and ways of working; sometimes you even keep the entire SDLC process or the tech stack. This is nice; you just take care of exposing the API or the contract or whatever.

When you join a team as an individual, I think it’s a good exercise for the acquiring company to see how its onboarding processes work for this particular person. I personally look at things like: how quickly can you commit to production, for example? How much do you need to learn? Do you have materials you can learn from, and can you then use them to push even a one-line change to production? If you touch production, it’s a success, because you went the entire way, and then you can start generating real value and expanding.

Yeah. So, I personally believe that the best motivation for people is to give them the possibility to generate value. And like I said, those are the two ways of maximizing that, yeah? And this is basically my experience from the last year and a half.

Kovid Batra: Totally. Totally makes sense. Yeah. Yeah. What’s your take on this, Francis?

Francis Lacoste: Yeah. I mean, I agree here again. The choice between the two will depend; it’s not necessarily in your hands, unfortunately. Whether you’re able to maintain autonomy will depend on the context of the acquisition. If it’s, “We want to keep this product,” they won’t refactor the teams, or they’ll try to maintain the teams’ autonomy, at least for a while, so that the product can continue to grow and develop. If it’s more of a technical or hiring acquisition, then you cannot really expect autonomy, and then it’s about leveraging the onboarding process and so on. And it’s hard, because you’re really changing things for folks. The trick is that even with autonomy, there’s a clock ticking. You might not be aware of it because you have autonomy, but autonomy always has an expiration date, you know? At Heroku, it was a lot of years. For most other acquisitions, it’s usually more like a year, 18 months, six months, and then you’re on a timeline. What is tough there as a leader is that you’re expected to continue building the product as you are, and you’re expected, implicitly or explicitly, to integrate with the rest of the engineering org. You want to get ahead of it. Even if it’s only in six months or a year, you want to start building the relationships now. How do they do planning? How do they push to production? What are the integration points? All while keeping your team autonomous. But you want to initiate those relationships. Get ahead. I mean, this is what I would have liked to do.

Kovid Batra: Yeah, yeah, I understand. And I think it’s good, actually. See, setting expectations brings a lot more certainty to the situation, and people get prepared for it. So, it definitely makes sense. First, you give them the positive side of being there and keep them motivated, and you set the right expectations for the future so that they are prepared for it. I think that’s one good way of moving through it.

Francis Lacoste: There’s something I want to add, because I don’t feel I really answered your initial question, which was about how you maintain the speed and agility of the original context. The truth there, unfortunately, is that to maintain speed, you need autonomy. It’s when you centralize decision-making that things get slow and you get bogged down in all the coordination processes needed to make a decision. And this is what plagues larger organizations; it’s an organizational philosophy of management. So, there is an uphill battle there, because the larger organizations that can move fast allow a lot of autonomy and decentralized decision-making, and that’s not the common thinking in larger-organization management. This is why it’s often unsuccessful: if you add up centralized decision-making and centralized process, you end up with these slow things. That’s just the nature of it. So, that’s the challenge.

Kovid Batra: Yeah, looking at the bright side would be the only option, like looking at the HR policies.

Francis Lacoste: And I mean, there are nuances. The trick, and this is why I insist on relationship building, is that especially in the larger organizations, you can build some autonomy. Even though officially there’s only a single way to do things, in practice there will be multiple ways, because of the history of acquisitions and all of that. So, if you know this, if you built the relationships when you did your inventory of the lay of the land, then you know, okay, I can gain more time here and help steer that part of the organization toward something saner. You can influence the culture, but you’re not going to transform it in six months. You’re starting a journey to nudge the larger organization, a little bit, toward saner practices. We saw that at Heroku, especially around remote work. When I joined, it was to build a remote culture there, and when the pandemic hit, there was a lot of interest from the larger Salesforce organization: oh, what can we learn from Heroku? They’ve done that. So, our experience was welcomed, and we were able to shift things a little in that area around remote work, a bit like Mirek was able to do with front-end development. This is why understanding where the pain points are and where you can contribute can help with these micro-shifts.

Kovid Batra: Yeah, yeah. Makes sense. All right, moving on to the last piece of our discussion on acquisitions. This is a time of transition and turmoil; leaders themselves are figuring out the space, finding their footing in the new organization, and trying to set things up with the existing team and the incoming team. At a time like this, how do you think you can look at the efficiency of an engineering team? How can you go about measuring it? Or maybe you should not measure it, because there could be other aspects to look at at that point in time. How do you ensure that the people who are getting paid are delivering value in that moment of transition, and how do you ensure that people are efficient?

Miroslaw Stanek: So, from my perspective, I take into consideration the four stages of team development: forming, storming, norming, and performing. And I assume that if a company is being acquired, it’s mature enough and fast-moving enough that it’s close to the ‘performing’ stage, where you measure efficiency and speed; you can implement DORA metrics, measure the number of deployments, whatever. But when you are acquired, I assume you come back to the forming phase. Obviously, if you stay as a single team, a single entity, you can still move really, really fast. You can keep deploying your stuff to production every single day. We are moving fast, but the question is whether we are moving in the right direction, yeah? That’s why you can still keep measuring those things.

But I think that at the beginning, in ‘forming’, you need to get to know the people, the company, the business and everything, so that you understand how you can contribute to the company’s success rather than just moving fast in a totally random direction. So, I would come back to my answer from a few minutes ago: I would measure the onboarding time, basic stuff, how quickly you can get into production, because you need to get access to your repositories, you need to go through all of the documentation and things like that, while in the meantime learning the company, the teams, your colleagues and everything. Then, obviously, you will go through the ‘storming’ phase, where everyone is debating the ways of working and why we don’t work this way but that way and so on. But after this turbulent time, you can come back to the performing phase, where you are optimizing, but only when you know that you are going in the right direction.
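
A minimal sketch of the onboarding measure Mirek describes, the time from joining to a first change reaching production; the data and field names here are illustrative assumptions:

```python
from datetime import date
from statistics import median

# Hypothetical joiners, each with a start date and the date their
# first change reached production.
joiners = [
    {"name": "a", "start": date(2024, 1, 8), "first_prod_change": date(2024, 1, 19)},
    {"name": "b", "start": date(2024, 1, 15), "first_prod_change": date(2024, 2, 2)},
]

days = [(j["first_prod_change"] - j["start"]).days for j in joiners]
print(f"Median days to first production change: {median(days)}")
```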

Kovid Batra: Makes sense. Perfect. Perfect. What’s your take on this, Francis?

Francis Lacoste: Well, what I’d add, again: it depends. It’s really about understanding how the acquiring organization answers that question, because they probably already have a framework for how they think about performance, how they do performance management, for instance. That’s also one of the usual sources of friction: we like the HR processes, but not necessarily the way they do performance management, because they have a very formalized one and our organization was always smaller. So, in a way, it’s about understanding how these questions are framed and processed at the bigger level, and then seeing, okay, how is that compatible with us? How are we going to need to adjust? What are we already doing? Because there will be an impedance mismatch that needs to be negotiated, and if you want to negotiate it, you have to get ahead of it; otherwise the expectation will just be, “You’ll use ours.”

Kovid Batra: Yeah, yeah.

Francis Lacoste: That’s very tough. The other thing around that question is often removing duplication. It’s not so much about whether everybody is busy; everybody’s busy in every company. The question, like Mirek said, is, “Are they busy on the right stuff?” And this is where I always recommend looking at outcomes, actually outcomes more than output, rather than busyness, people’s time sheets, the number of pull requests, the number of lines of code, all of these metrics which are kind of irrelevant in many ways.

But really: how is the business doing? Are we meeting our business outcomes? Give transparency on how you’re making progress on those so that they can have conversations. Because often what happens is more like: you have a Platform Engineering team in your startup, and we have Platform Engineering, so we’re just going to merge those, because obviously you should not have two Platform Engineering teams. That’s kind of naive, and it’s also a source of a lot of confusion. But this is a conversation that’s going to come, so you want to ask, “What is this Platform Engineering team doing?” “What is their charter?” “How is it compatible with ours (or not)?” “Is merging really the right thing?” So, get these collaborations going between peers at the startup and at the bigger company. If the teams have talked and have some idea, then when the execs come in and say you need to merge, you can actually say, “Well, actually, this is how we think we should do it.” And then it’s much easier, because the people with the most understanding of the context are able to weigh in on the decision.

Kovid Batra: Yeah. So here, let’s take your example, Francis. When Heroku merged into Salesforce, there must have been certain performance practices you had already taken up, right? And then, there must be something that Salesforce enforces on the team, right? There must have been some clashes there. Can you give us an example of that? And how did you, as a leader, navigate your team and align them with that? Because it completely changes the way you are thinking, how you’re incentivized to do something in a team, right? And if that happens, it’s a big shift, according to me. How you handled that is something that I would like to know.

Francis Lacoste: Yeah. I mean, two examples of that. One was performance management, which, I mean, Salesforce didn’t have a very formal one at the beginning; it came in later, but then it was required, along with the way they do promotions and things like that. So it’s kind of, okay, we need to align more with that. And it was about understanding their process and understanding how we do things. And then, there’s a phase where it’s about how we can continue to keep the spirit and the principles we have in that different process and hybridize the two. Another one was the career ladders. So, we had our own career ladders, and then there’s kind of the, okay, well, these are the different roles, and harmonizing that. Often, I mean, the biggest job was managing expectations on both sides. Basically, what we had was an interpretation: this is that level; here’s what that level means here. And you were seeing that even though officially you should not be doing that (the HR folks really hate that), in practice, contexts are different and you need to have that adaptation. So, even though it was not recognized, it was happening all over the organization. It wasn’t just our group doing that; other teams were also, in a way, doing commentary on the official career ladder.

Kovid Batra: Yeah, of course. That’s there. Great, guys. I am out of my questions for now. It was lovely discussing all these challenges with you and going over all the practical tips that you shared. Any parting advice from both of you for the engineering leaders who are in a similar situation, on what they should be doing and what they should be taking as next steps?

Miroslaw Stanek: So, from my perspective, I would say that your role as a leader is to find a good match between the skills you are bringing to the new company (you know, your team, the know-how, the solutions, the product) and the problems which the new company has. And start by doing that. Start by showing what the value of your stuff is in the context of the new reality. And the quicker you sort it out, the quicker you become, you know, successful in the new organization.

Francis Lacoste: That’s a very good tip. So, two things for me. The first, most practical one is to get the conversation going. You know, look at the org chart and find people who are in similar roles, or where you can see that, oh, this looks similar and somebody may want to merge these things. Start talking with those teams, and get your team to actually start talking to those teams, just to get to know each other, to learn from each other, that sort of thing. Very informal. It is just to encourage cross-organization conversations, because that makes everything easier afterwards. You get to know people, you get to relate to them as humans. They’re not some monster that wants to eat you or anything like that. So, just encourage multilateral conversation between similar roles and similar teams, between engineers, well, across the org. So, conversations. Then, the same thing at the leadership level.

The other aspect I’d say is, keep in mind that there’s an identity shift that needs to happen, you know, from “we are this company” to “we are this bigger company”. The mission is changing, that sort of thing. And when there is an identity shift, there will be a grieving process, because you’re losing an identity and you’re embracing a new one. So, be prepared to accompany people on that journey, the journey of losing the “Oh, this is how we were” and “These were our startup times” and things like that. The loss of that, because it’s a real loss, will have an emotional impact. So, acknowledge it and normalize it, support people through it, and help them embrace the bigger identity: “Hey, this is the new mission. This is bigger. We can do more things together.”

Kovid Batra: Totally. I think, both of you, thanks a lot for such great pieces of advice. Can’t thank you enough. Let’s keep this passion of contributing to the community going and let’s build great dev teams together, man.

Francis Lacoste: Thank you so much, Kovid, for providing this space.

Kovid Batra: Thanks.

Miroslaw Stanek: Thank you.

'Building Teams from Scratch vs Branching' with Lubo Drobny, Software Engineering Coach at Cisco

In a recent episode of groCTO Originals (Formerly: Beyond the Code: Originals), host Kovid Batra welcomes Lubo Drobny, Software Engineering Coach at Cisco. With an impressive professional background at Siemens & SAP Labs, Lubo also actively contributes to the tech community through blogging, hosting podcasts, and organizing meetups centred around product and software engineering topics. Their discussion revolves around ‘Building teams from scratch vs branching’.

The episode begins with Lubo sharing his love for programming, hiking & gardening. He also gets into the details of a life-defining moment in his career that shaped his current coaching style.

Moving on to Lubo’s career progression, we get a glimpse of his journey from Slido to its acquisition by Cisco, highlighting the key differences between a startup & a corporation. He also shares strategies for team building through hiring, onboarding & training while focusing on delivery.

Lastly, Lubo highlights the key engineering metrics for assessing team excellence and their impact on delivery and quality, while underscoring the importance of prioritization.

Timestamps

  • (0:06): Lubo’s background
  • (0:41): Lubo’s hobbies & life-defining moments
  • (5:56): Journey from Slido to Cisco
  • (11:15): Balancing hiring, training & delivery while scaling up
  • (15:18): Branching strategy for building teams
  • (17:05): Working at Slido vs Cisco
  • (21:41): How to evaluate tech excellence?
  • (25:42): Finding the root cause of inefficiency in teams
  • (28:47): Conclusion

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have a special guest who loves to organize meetups with product and tech fellows. He loves to host engineering podcasts. He has 16 years of engineering and leadership experience. Currently, he’s serving as a senior engineering leader at Cisco. Welcome to the show, Lubo. Great to have you here.

Lubo Drobny: Hi! Hi, everyone. Thank you for having me. I hope you will enjoy this episode.

Kovid Batra: Of course, we will. And I’m sure you would have a lot of things to share from your journey at your startup and acquisition by Cisco. So, we’ll definitely enjoy this. But before we get started on that part, we would love to know you a little more. So, if you’re comfortable, can you just tell us about your hobbies? What do you like? Some life-defining moments for you? That would be great.

Lubo Drobny: Yeah, perfect. When we’re talking about my hobbies, I would say that my first hobby is, uh, programming; I think this is not a surprise. This was probably also kind of a life-changing moment. I was just 11 or 12, a teenager, and I got my first computer at that time. It was an 8-bit Atari XL and I fell in love with this machine, not for the games, but for, let’s say, the possibility of creating new stuff, programming stuff and so on. This was a very important moment for me.

But another one, I would say another life-changing moment, was probably when I was at a Swiss company, RSD, and my boss was a very good mentor. This is something I had probably never felt before; I didn’t have the experience of a manager or a leader who is also a great mentor, helping you to grow. And this was very, very eye-opening for me. And it worked well.

And then about my hobbies, um, I like hiking. Usually in Slovakia, we have nice mountains, but also around Europe, for example, Italy, Austria, very nice mountains. I like gardening, so this is also connecting with nature, and I like to play in my small garden. And I’m a proud father of three kids. It’s not a hobby, but it is something which defines me.

Kovid Batra: That’s great. Thanks a lot for sharing that. And you mentioned that your mentor, your boss at one of your companies was a really, really good leader and he guided you well. Can you tell us more about that experience of yours?

Lubo Drobny: Yeah. Uh, it was, I think, about 10 or 12 years ago, I don’t remember exactly. I was switching my role from technical developer to manager or engineering leader, and what needs to be said is that it is a different role. It is not an evolution where you go step-by-step; it is a different role to lead people, to manage people. It’s different from coding, because computers do what you want, but people are very different. And for me, it was very important to tell him, okay, I’m kind of new to this, I will need help, because otherwise it could be a disaster. And he was very open, and what was important was that he discussed a lot with me about what he planned to do. He asked me a lot of questions: how I would approach those problems, what needed to be done. But he also really reassured me: okay, this is good, done. And he helped me to focus on the important stuff, because at the beginning you cannot keep all the balls in the air. When you are juggling, some of them can fall, but we always talked about it and he would tell me, “Okay, this is not important. Don’t focus on it. It’s okay. You are doing well. It could be better, but we’ll get there.” So, he was very encouraging. And this is a rare quality; usually, a lot of people only criticize. But he was also very encouraging, always very helpful to me. So this was a very nice experience, and I think it really helped me to survive and grow as an engineering leader. Without him, I would probably have gone back to coding and engineering.

Kovid Batra: I think that that’s a really great example. When you find someone at that stage of your life when you’re actually growing or making such transitions, and somebody guides you well, you tend to actually learn that trait from them and try to implement it yourself. So I’m sure that today your team members who look up to you as a leader share the same emotion and feeling.

Lubo Drobny: I hope so, because I think I copied this coaching or mentoring style of management. So I’m not very direct. With my team, I usually talk to them, asking questions and trying to help them find the answers. So I’m not bringing the solution upfront; I’m trying to help them to find it. And I also try to coach or mentor the people around me to grow, not only as a team but also as persons. It’s very important for me, because if you have good people on the bus, it’s a perfect setup.

Kovid Batra: Yeah, right. Absolutely. Great, Lubo, I think it’s a really great start. And talking about compassionate, empathetic leadership, I think it is very important these days. So let’s get started. Let’s look into more of your journey as a tech person, as a tech leader. Your stint at Slido was quite long. You spent almost six years scaling that startup from zero to one, and then the acquisition happened and you moved to Cisco. So this journey is quite interesting, at least from the outside. Tell us about your role at different points in time, how you took the team from zero to 50 members, and then how this acquisition happened where now you are serving at Cisco as an Engineering Leader. How have things changed for you from Slido to here? I think it’s a big question to answer. You can take your time and let us know some interesting facts from the story.

Lubo Drobny: Yeah. I hope it will also be interesting for the audience. In general, I joined Slido, I think, seven years ago. At that time, it was a promising Slovak startup and there were around 40 people, but only five developers and two students. But at the time it was like, yeah, this could be interesting, because they had started to make some profit. And the Slido application, if you don’t know it, is about engagement in meetings and conferences. It’s a polling and questions-and-answers tool, so the presenter can communicate with the audience: the presenter can see the questions online in soft real-time, or can poll the audience, again in a soft real-time manner, and see the results, and so on. And it looked like this was an interesting niche at the time and it could grow, and the CEO at the time and the company decided that, yeah, this was a good time to scale the team and try to push more on engineering.

And my role since then has been the same: build world-class engineering and world-class products. This was kind of the mission from the beginning. It’s kind of a cliché, of course, but this is usually the mission of everyone at a startup; you would like to build something great. But to build something great, you need very good or great people. As I said, it’s very important who is on the bus with you. So my first role, of course, was to start hiring and put together, let’s say, a solid process. And there were two levels to this. One was, let’s say, kind of short-term: find the gaps in the team, fill the positions so we are strong, and double the team in a short time. But on the other hand, we also had to think about, let’s say, the mid- to long-term. We discussed that we should also build some awareness in the engineering community that we are good, because otherwise nobody knows. So I was focusing with the HR team on typical hiring: looking for people, preparing the process, the stages and all the stuff. But we also worked on some, let’s say, ‘engineering marketing’ or ‘advocacy’, as we call it. So we started to write some blogs, and we started to be more visible at meetups. Later, we started to organize meetups; today, I’m helping to organize the product meetups and the engineering meetups here in Bratislava. We started to be visible at conferences, because we believe that in the long term it’s also important to increase the awareness that we are here and we would like to build a great team. So, this was at the beginning.

Uh, then the next challenge, I would say, was finding a good structure for our teams and deciding how we would like to work. What we put together, also with the product leads, is that we would like to have small teams, because in our point of view they are the most effective for what we need. So, up to 8-10 people, not bigger. And we would like to have cross-functional teams. So there is a product part, which we call ‘discovery’: a Product Manager, a Designer and, later, also a User Researcher. And then a ‘delivery’ part, which is the Tech Lead, the engineers, front-end, back-end, and a tester. So it’s kind of a typical setup, but we experimented, let’s say, with the size of the team and with the roles. In the end, we found out that this is probably the best template for us for how to create a team. Of course, we made a few mistakes. Maybe the big mistake was, for example, starting new teams from scratch, because usually we lost the culture that way. So, therefore, we decided that it’s better, for example, to start a new team by splitting it off from an older one, which worked better.

Kovid Batra: Sorry to interrupt here, I have a question. You had this great strategy of hiring the right folks by creating that awareness, so you started this community aspect, right? From there to hiring more people... it seems that at that stage, hiring becomes the highest priority, right? You want to scale, you want to grow, and everything is going on. At that moment, when you are hiring and new people are coming in, the onboarding period, broadly speaking, takes 8 to 10 months before somebody can actually show you something productive, because the person comes in, gains the context and then gets familiar with the things that are going on. So it takes at least eight months for somebody to come up to speed and deliver something. And at a stage where you are fast-growing, how did you manage to deliver alongside hiring such folks, and train them faster, if that is what you were doing?

Lubo Drobny: I understand the question. The first point was that, uh, we were focusing on experienced people. So let’s say, seniors because usually they are able to be onboarded faster. So..

Kovid Batra: Yeah.

Lubo Drobny: So, for example, in my experience, how we did it: when someone senior joins the team, the first month is, okay, setting up everything. But we are a startup; we don’t need a lot of permissions, so it’s very quick: this is your laptop and accessories. Then we have, I think, one or two weeks of general onboarding to the product, to the company, everything. And then, after one month, I was like, “Okay, guys, what I expect is that you will do your first small release to production.” Because we are a web application, we can release very quickly. We are using a common tech stack for the web: TypeScript, Node.js, React or Angular. And when we hire people who are proficient in those technologies, that is great. We are not using some special internal frameworks or something where, you know, you need to figure out how it works. And also, we have very, very light processes. So, even when we hire, for example, a Java Engineer, after three months they will be ready to code, ship, and deliver. So..

Kovid Batra: Oh, that’s great.

Lubo Drobny: But the best guys did their first release in one month. So it was very quick, but we were focused on seniors. Then there was a question: okay, what about the juniors? Because you can’t hire only seniors.

Kovid Batra: Right.

Lubo Drobny: And what I really liked: we started the internship program, which still remains. We decided to do a three-month, full-time, paid summer internship. Usually, four or five people join, and if they decide to continue part-time, that is really great. We were focusing on university students, of course. And this was a very good way to find very, very good junior people who are at the top level, I would say. So I can recommend an internship program for others as well. It is working very well for us.

Kovid Batra: Yeah. I think hiring expert folks, having your tech stack pretty common, simple and sorted, having the least number of processes, and having automation in the right places can really accelerate that onboarding process. And hence, when you’re hiring someone who is good, you can get to productivity as fast as possible, right? So..

Lubo Drobny: Yeah.

Kovid Batra: That, that really worked well for you there. So, I think that’s a good piece of advice, to keep such things in mind when we are proceeding to scale, at least at that point when you are navigating towards two different goals. One is, of course, bringing in people, hiring them and training them, and at the same time, delivery. This could work out really well. And the point you started on also seemed very interesting. Like you said, instead of hiring people and putting them into brand-new teams, you just branched out new teams from the existing teams. This seems to be a very interesting strategy. I think you could just continue on that; I’d love to hear more.

Lubo Drobny: Yeah. Because what we realized is that when we started teams from scratch, they came with some culture or habits from previous companies and started to replicate them, because they didn’t have, let’s say, experience of our teams, our culture and the way we work. And in the end, we realized that this is not the best. Maybe it is smarter to, for example, take a good team, hire a few more people into it, and then turn around and split it. It’s also easier for the team, and it keeps the culture: if you already work in a good team and you understand how we work, what the habits are, what is important for us, you can easily continue. So this is, let’s say, the mechanics of how it works for us. I think it’s better, at least in our practice.

Kovid Batra: Definitely.

Lubo Drobny: But, of course, sometimes you need to start from scratch, if you do not have, let’s say, the skills or the technology, or you are starting something really new, so you cannot use this approach for everything. But if you would like to have a new team with the same tech stack and the same culture, this is the better way, from my point of view.

Kovid Batra: Definitely. Even I agree with that point. All right. I think, apart from this, when you scaled up and, uh, I’m just going back to that piece where the company got acquired. The way you were operating at Slido and now, working as an engineering leader with Cisco. How have things changed? Like, give me some examples, like, okay, this is how we used to do here. And now, things have changed for the better or maybe for a little worse here.

Lubo Drobny: Yeah. Of course, some things changed. The thing is that we joined a very big company, a corporation, and corporations definitely focus on security. So, what definitely changed was that we had to implement more certifications, around ISO 9000 and ISO 27000, and also another American certification for software development, security and quality. This was kind of challenging for us, because we didn’t want to sacrifice, let’s say, our way of working, but we had to change some processes, of course. We didn’t want to slow down our release process and our ability to be fast. Therefore, we had to implement a lot of automation, and we had a lot of discussions with the experts in these certifications about how to do it so that it is compliant and okay from the security and quality points of view. But we had to make some sacrifices, I would say. So, it is not the same as before. But on the other side, we are shipping more secure products, so it’s not bad.

Kovid Batra: Yeah. Yeah.

Lubo Drobny: On the other hand, we joined Cisco as a business unit. So, they didn’t change the way we are organized or how our teams work, and Slido still continues as a standalone offer; the Slido brand still exists. This is kind of different, so they didn’t swallow us, I would say; we are still living as Slido, which is kind of nice. And therefore we are keeping some autonomy, which is good for us, to the extent that we can continue working, let’s say, the way we consider best for Slido. On the other hand, as I mentioned, Cisco brought us more focus on security and quality, of course, because this company requires high levels of that, and more opportunities in, let’s say, integration. So, we started with the integration with Webex, of course.

Kovid Batra: Yeah. Yeah.

Lubo Drobny: This is a Cisco tool, but then we continued with integrations with other video tooling as well. And this cooperation with the Webex teams gave us a lot of experience in how to do it the right way, and so on. And of course, it gave us the opportunity to reach a broader audience, especially in the enterprise environment, where startups are usually not, let’s say, preferred.

Kovid Batra: Yeah, I know.

Lubo Drobny: Usually, corporations like to buy tools that are strong on maintenance and all the certifications and all that stuff.

Kovid Batra: There is one more important thing that I just felt like asking. When you were building Slido as an independent company, there is a certain level of impact you feel you’re creating with the product that you’re rolling out. And then, integrating with a tool like Webex and reaching millions of users, right? That changes the overall feel of how you’re building it, how you’re doing it. So I think that must have been a good experience.

Lubo Drobny: Yeah. This is definitely the positive thing, that we were able to put Slido in the hands of enterprise users, through Webex and the other integrations. So this was definitely very positive, because otherwise we would probably not have been able to go in this direction.

Kovid Batra: Of course. I mean, even to reach so many people on your own, it would have taken a few years to get there, right? So that’s a good jump there. Cool. I think that’s interesting.

And now, the last piece that I wanted to understand here. When you are operating with 50 developers, I am sure that, being an empathetic leader, you are trying to understand every aspect of each developer who is part of the team. But how do you exactly measure their overall excellence? How well are they doing? How do you measure their work and their achievements? That is what I want to understand from you.

Lubo Drobny: Um-hm. So, what is most important for us, in the end, is the product itself and the value that we are bringing to our customers. For example, if we build something on time, in high quality, very secure, but nobody’s using it, in the end it is a failure, even if we did a good engineering job. If it is not working for our customers, we will scrap it, or discard it, or trim that part of the product. So, in the end, this cooperation with Product is very important for us, because we are a product company. And my evaluation of what we are doing is connected with this: when we’re delivering something, we believe, of course, that Product did their job and suggested a good feature or product to build. But it is still a very important part of evaluating what we are doing in engineering: whether, in the end, what we built is used, and whether it is built in a way that people can use it. The second important thing for us is quality and usability, which is reflected in, for example, the net promoter score or statistics like that. So you want to measure that what you deliver is a good idea from the product side, but also, on the other hand, that it is delivered with good quality, because we have experienced that if there is some big bug in the product, our NPS goes down the next week.

Kovid Batra: Yeah.

Lubo Drobny: It’s very connected, and for us it’s a very important metric. So, it’s NPS. Then, of course, we evaluate quality by measuring, for example, the total number of bugs, or the trends, and whether we are able to keep it at, let’s say, a reasonable level. We have kind of a ‘zero medium+’ bugs policy: we are okay with small hiccups in the product, but we would like to fix medium bugs as soon as possible. So, this is important for us. Then, deployment pace is important for us: that our CI/CD, our whole continuous integration and release process, is fast, that there are no problems, that test automation is working well. So, we are measuring weekly deployment trends. And again, if we see that there are some problems, or developers are complaining that something is taking too long or that the tests are unstable, we want to fix and address it very quickly, because this is very important for, let’s say, developer experience. If you have eight A-class people on the team, they just want to release. They don’t want to wait. They don’t look for some..

Kovid Batra: Reviews or anything.

Lubo Drobny: ..Some excuses for why they can’t push it. They want to see their work out there. So, this is very important for us.
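(As a rough illustration of the signals Lubo describes, a minimal TypeScript sketch of weekly deployment counts and a ‘zero medium+’ open-bug check is below; the data shapes and field names are hypothetical, not Slido’s actual tooling.)

```typescript
type Severity = "low" | "medium" | "high" | "critical";

interface Deploy { at: Date }
interface Bug { severity: Severity; openedAt: Date; resolvedAt?: Date }

// ISO-ish week key, e.g. "2024-W07"; crude, but good enough for trend charts.
const weekKey = (d: Date): string => {
  const jan1 = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((d.getTime() - jan1.getTime()) / 86_400_000 + 1) / 7);
  return `${d.getUTCFullYear()}-W${String(week).padStart(2, "0")}`;
};

// Deployments per week: a dip here is the early warning Lubo mentions.
const deploysPerWeek = (deploys: Deploy[]): Map<string, number> => {
  const counts = new Map<string, number>();
  for (const d of deploys) {
    const k = weekKey(d.at);
    counts.set(k, (counts.get(k) ?? 0) + 1);
  }
  return counts;
};

// 'Zero medium+' policy check: any unresolved bug at medium severity or above.
const mediumPlusOpen = (bugs: Bug[]): Bug[] =>
  bugs.filter(b => !b.resolvedAt && b.severity !== "low");
```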

Kovid Batra: I think this brings me to one important point. Like you said, you look at the bug rate, and you have this policy that as soon as there is a medium-severity bug out there, you have to resolve it as soon as possible. These things ultimately tell you, okay, this is the problem; it’s a symptom kind of thing, right? But in the end, when you have to drill down and understand the core, like, is it a bad review process or is the initial code quality not good, how do you end up finding that? Of course, you can go by the brute-force method, but I’m just curious to know how you do it.

Lubo Drobny: For, let’s say, the more critical bugs or high-severity bugs, we do postmortems. It’s usually a very interesting process, and it usually takes two or three weeks. So, if something happens, there is an owner of the postmortem. It is not about who is guilty or not; there is someone who is the owner, who is able to put it together. And it usually gets investigated: you need to check the log files, you need to talk to people about what happened, you need to check the Slack communication, and you put together some scenario of what happened before and what happened during the incident. And you want to evaluate more things: how we reacted, whether our reaction was good or slow, because maybe we could have reverted or we could have fixed it. There are a lot of nuances you can evaluate in our reaction to this bug or this problem. And then, of course, there are the five ‘whys’, the typical “why is this happening?”, and why, why, why, why. You ask yourself more than five times to really find the root cause of what happened. And then you want to suggest, let’s say, some good short-term fixes and maybe also some long-term ones, because maybe you just need to fix some code because there is a bug, but maybe you also need to fix the process, or fix some communication issues, or fix something else. Because sometimes problems happen, but if you are able to react in seconds or minutes, it’s perfect from some point of view. If you can improve your reaction time from, let’s say, hours to minutes, that’s also a good improvement. This is what I want to say.

So for us, improvement here is also important: the time to detect the problem and, of course, the time to fix it, and whether we are able to decrease those times to a minimum.
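(A minimal sketch of the two times Lubo wants driven down, time to detect and time to resolve, might look like this in TypeScript; the incident shape and field names are illustrative assumptions only.)

```typescript
// Hypothetical incident timeline; the field names are illustrative.
interface Incident {
  introducedAt: Date; // when the faulty change shipped
  detectedAt: Date;   // when someone noticed the problem
  resolvedAt: Date;   // when the fix or revert landed
}

const minutesBetween = (a: Date, b: Date): number =>
  (b.getTime() - a.getTime()) / 60_000;

// The two times to drive down to a minimum.
const timeToDetect = (i: Incident) => minutesBetween(i.introducedAt, i.detectedAt);
const timeToResolve = (i: Incident) => minutesBetween(i.detectedAt, i.resolvedAt);

// Means over recent incidents: going from hours to minutes counts as a win.
const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;
const mttd = (incidents: Incident[]) => mean(incidents.map(timeToDetect));
const mttr = (incidents: Incident[]) => mean(incidents.map(timeToResolve));
```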

Kovid Batra: That makes sense. Perfect, perfect. Great, Lubo, this was really interesting. And when someone shares this level of detail, with examples, I think that’s the best part, and I loved that while discussing this with you. So, I’d surely love to have another round of discussion with you sometime on other engineering topics.

But today, in the interest of time, I think we’ll have to cut it short. Thanks a lot once again for giving us your time and sharing your learnings and experiences with the community.

Lubo Drobny: Okay. Thank you again for having me. I really enjoyed it. And I wish you the best.

Kovid Batra: Thank you so much. See you.

Lubo Drobny: Bye bye.

‘Leading with Empathy & Compassion’ with Jörg Godau, Chief Digital Officer at Doctorly

In the latest episode of the ‘groCTO: Originals’ podcast (Formerly: Beyond the Code: Originals), host Kovid Batra welcomes Jörg Godau, Chief Digital Officer at Doctorly and one of the founding members of The EL Club in Berlin, Germany. His vast experience includes valuable contributions to renowned companies such as VRR Consulting UG, Nortal, and IBM. The discussion revolves around ‘Leading with Empathy & Compassion’.

The episode kicks off with Jörg discussing his hobbies and life-defining events before delving into his role and daily challenges at Doctorly. He emphasizes leveraging user insights and business understanding for software development and aligning individual career aspirations with organizational needs during team scaling.

Furthermore, Jörg explores measuring engineering team success both qualitatively and quantitatively. Wrapping up, he shares his final thoughts on remote work.

Timestamps

  • (0:06): Jörg’s background
  • (0:45): Jörg’s hobbies & life-defining moments
  • (4:52): What is Doctorly?
  • (8:51): Adoption challenges for Doctorly
  • (10:57): Leveraging user & business insights when building products
  • (13:00): Biggest role challenges and their impact
  • (17:38): Aligning team goals with individual aspirations
  • (22:45): How to define success for an engineering team?
  • (25:06): DORA metrics for measuring teams’ visibility
  • (28:55): How to gauge developer experience?
  • (32:13): Final thoughts on remote working

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have an amazing guest who is the founding member of the Engineering Leadership Club, Germany. This is a group of empathetic leaders who believe in supporting and mentoring other engineering leaders to lead with compassion. He has 20+ years of engineering and leadership experience himself. He’s currently working as a Chief Digital Officer at Doctorly. Welcome to the show, Jack. Great to have you here.

Jörg Godau: Thank you so much, Kovid. It’s great to be here. Just to be clear, one of the founding members, not the only one; I don’t want to take credit for everybody else’s great work as well.

Kovid Batra: All right. All right. My bad there. Perfect. So, Jack, I think before we get started and have a lot of things to learn from you, I would first want you to introduce yourself with some of your hobbies, some of your life-defining events, so that our audience knows you more.

Jörg Godau: Sure. Yeah, I can do that. My name is Jack, but actually, my name is Jörg. I was born in Germany a long, long, long time ago, and then immigrated to Australia as a very small child, and I lived there for about 30 years. And the umlauts and the pronunciation were not possible. People in Australia don’t have umlauts; they don’t have them on the keyboard. It’s not compatible with the Australian way of saying things. So I gave up and said, right, “In English, just call me Jack.” Um, I lived in Australia for almost 30 years, got married there, and then moved to Berlin for one year in 2006-2007. I plan to register at some point with the Guinness Book of Records for the world’s longest year, because I’m still here. And now I have two kids, and I have lived here happily with my wife and kids in Berlin for a long time now, when I think about it.

As far as hobbies and relaxation go, I very much like going for hikes, like long-distance walks. We’ve done the Camino, we’ve done the Tour du Mont Blanc, both with our children. And this year, we’re going to do the Fisherman’s Trail in the south of Portugal. That’s two weeks where we carry all of our stuff, so it forces me to not carry a laptop or other things, so in that time I also can’t work. It’s a very good way to switch off and have a bit of a digital detox.

Kovid Batra: Perfect, perfect. What about some of your life-defining moments? I mean, anything that defines who you are today.

Jörg Godau: I think I’d really pick this move to Germany and this plan to, you know, travel around Europe and do random things for a year. That was a big difference. Obviously, as a parent, having children... every parent will tell you that children change things quite a lot. And most recently, probably actually joining Doctorly and having the chance to almost build something from scratch in a startup environment and be able to very directly shape the organization and the way things move. And they’re on different levels: one is personal, travel, seeing the world, experiencing different cultures; one is more the family life; and the other is certainly the work life.

Kovid Batra: Great. I totally relate to it. I personally love to travel. Though I don’t have a kid right now, I definitely feel that it changes your life completely. So I totally relate to that.

Jörg Godau: Yeah. And my wife was also an immigrant to Australia. And for us, Australia is very far away, right? It’s far away physically, and it’s far away in terms of its involvement with world politics. In Europe, world politics is two hours’ drive away; it’s the next country, right? In Australia, two hours’ drive away, that’s a trip to see your friends. It’s just not the same.

And also in terms of cultural access. Yes, like people go to Australia with art exhibitions and cultural exhibitions and concerts, but even for those people, it’s a lot of effort to go. So it’s less accessible. Right? In Europe, if you want to see anything, like cultural concerts, ballet, art, it like, there’s just so much here that it’s, I think actually impossible to see it all, which is a different approach.

Kovid Batra: Yeah, absolutely. I agree with that. Great, Jack. Thanks a lot for sharing that with us. And now, moving on to our main section, where we would love to learn a lot from you, but keeping the time in mind, let’s start with something very, very basic. You are currently working as Chief Digital Officer at Doctorly. So, tell us: what does Doctorly do, what is your role there, and what are your daily challenges?

Jörg Godau: So, Doctorly’s vision is to enable people to live healthier lives. This sounds beautiful and, you know, cloudy, but, okay, how? So, when the founders of Doctorly originally started the company, they looked at what the real problems in healthcare are, in Germany and probably in many other countries. One of the problems is the communication and the digitalization of healthcare. In Germany, patients become data mules. You go to a doctor, they give you a piece of paper, you carry that piece of paper somewhere else, they give you more paper, you carry it back, and you end up with these massive folders of paper which you probably don’t understand and don’t want. If you lose it, they get very angry at you because they have to print it again or something. So, this process is terrible. So we thought, okay, let’s build something for the patients to improve it. But you can’t, because it’s not the patient’s job to enter this data. The doctor has to put it in and the doctor has to get it out. At that point, we realized that the source of the issue and the core of the problem is that doctors are confronted with very old-fashioned software. The software that doctors use in Germany today generally started to be built in the 90s and 2000s. If you’ve been around for a while and you can recognize a Delphi application by how it looks, this is how they look. They look like Windows 95 Minesweeper. Gray bevels. Push the wrong button, it explodes, right? It’s really, really bad. And they run it on computers in their office. So, backups, security, any of these topics: super, super challenging, because while they do do backups, they never test the restore. And if you don’t test the restore, you haven’t done a backup, right? So all of these things led us to start building the core Doctorly product, which is Practice Management software for German doctors, fully cloud-based. They don’t have to worry about anything. They get updates every night; they get data backups, we do it. It runs in a professional data center, with professional people supporting the machines. So they just don’t have to care, and they can concentrate on the patient. But now, already, the data is digital and the data is somewhere central, so we have the first step in being able to transfer the data. And in the next period of time, we’ll start also building the patient app and a platform and a marketplace, so that the patients get control of their data and can say, “Hey, I want to send it to this other doctor.” But we had to start with the doctor first. That was the real core for us.

Kovid Batra: That’s great. I think that’s a good finding there. Yeah. Please continue. Sorry.

Jörg Godau: Sorry. In my daily business, I run everything to do with technology. So the CTO reports to me; the developers, scrum masters, QA, architecture, cloud, all of this is my responsibility. And it goes a little bit further: as Chief Digital Officer, I’m also responsible for security, data privacy and topics like this. So it’s managing all of the software development, delivery, and running of the software for the doctors, but also making sure we’re doing it in the right way, so that it’s compliant with regulations. And it’s Germany, so we have many, many, many regulations. I think if you printed the regulations and the source code, the regulations would be bigger.

Kovid Batra: Yeah, that could well be. One interesting question here: are these doctors ready to use your software immediately, or is there an adoption challenge? And do they pay for it?

Jörg Godau: So the doctors pay for the software, yes. Our prices are very similar to the prices that they normally pay for what they’re used to at the moment. A lot of doctors are ready for this because if you go to a doctor’s office and ask them, “Do you like your software in Germany?” The answer will be no, but they have very little choice. There’s not very many companies that do this. And some of the big companies actually have six or seven products. So the doctor can switch from one product to another, but it’s actually still the same company in the background.

Kovid Batra: Yeah.

Jörg Godau: And one of the things that these companies also do very badly is updates. We’ve seen them send, not floppy disks, but CD-ROM disks to the doctors, and the doctor then has to install the update. Or with some of them, you can download the updates. But if somebody accidentally clicks ‘update now’, then the practice can’t work for two or three hours. And you’ve got all these angry patients who want their treatment, and your computers are just effectively broken.

Also, terrible customer support is another problem. We have very good customer support; we have people who actually used to work in doctors’ offices working in our customer support. So when somebody calls up, they know what this is, they know how important it is, and they can actually really help these people. So, doctors are ready. There is an adoption challenge, because we have to get the data out of these old systems into our system. That’s the biggest challenge: lifting the data out of the physical office. Sometimes the doctor has hundreds of gigabytes of attachments that they’ve kept over the last 20 years, and a very bad internet connection, so it takes a long time to upload. Yeah, but that’s just a feature of Germany and its internet providers as well.

Kovid Batra: But as you said, doctors are not always very adaptive or receptive to new tools. First of all, I really appreciate the fact that you bring a lot of business-side and user-side information to your job; being a digital officer, a technology officer, I really appreciate that you have that business perspective in place. What exactly do you do with all this information and understanding of your user when you build your product? Because that’s very important: when you’re building technology, if you have that level of empathy, that level of understanding of the users, I think you can do a tremendous job of building the software right. So can you just give me some examples?

Jörg Godau: So, we actually have partner practices which we work with, and they worked with us even before we launched. We worked very closely together with our partners, and our product owners and our designers were able to go to the doctor’s office, sit with them, and watch how they work, and watch what’s not working in the old software and what is. And the old software is not all terrible, right? It’s old, but it works, and some things are actually quite good. So they were able to go there and see what the processes in the doctor’s office are and where we can have the biggest impact. Our aim is to reduce the doctor’s admin by 40-50% by building systems that take that admin away, so that they can treat faster, and in the limited time they have, they can focus on the patient. The average German doctor’s visit for a normal patient is six minutes, including ‘Hello’, ‘How are you?’, ‘Goodbye’, ‘Here’s your medication, take it three times a day.’ And in that time, the doctor also has to write down all of the billing information. So, making all of this admin stuff easier means that in those six minutes, the doctor can at least concentrate on the person in front of them and what they need. So this is super important.

Kovid Batra: Makes sense. So, what are the biggest challenges you see today in your role that you are tackling and that have the biggest impact?

Jörg Godau: Right now, organizationally, we’ve reached a point where we are focusing more on scale. Having great software that does the right things is certainly an essential first step, but now we have to focus on scale. So, instead of adding 10 customers a month, adding a hundred a month, adding a thousand a month: what processes do we need to make sure that each of those also gets the same great support that the first 10 got? Because if you have 10 customers and one customer support person, okay, he can talk to all 10 every day for an hour, and it’s fine, yeah? But if you have a hundred, a thousand, 10,000, it becomes much more about processes for scale, giving people access to their own support. So, self-service support, really clear instructions, or even better, building applications that you don’t need instructions for. And this is super important, that it’s really intuitive, that it’s very easy.

On the other side, as we’re thinking about the platform, integrations and the marketplace: how do we enable somebody else to build plugins for our product? I don’t want to build everything myself. There are, for example, different medical image formats, and people have built great viewers for them that display all the information with different colors and everything. They work; they’re really, really good. How can I enable that company to build a plugin that integrates with my software so that it runs, and the doctor can go to a marketplace page and say, “I want to use this viewer.” “I want to use this telemedicine thing.” “I want to use this prescription stuff.”? Then they have a choice, but they don’t have to use 12, 15, 20 different products, which they don’t like, because those products don’t work well together. So this integration and scale challenge, those are the biggest topics that we’re working on this year.

Kovid Batra: How do you exactly tackle this problem? If you could give me an example, I think I would be able to relate more here. Let’s say we talk about having integrations with third-party software. What kind of challenges do you really face on the ground when you go about doing this? And as a team leader, or the organization’s technical leader, what steps do you take to enable your team to do that efficiently?

Jörg Godau: Yeah. There are all of the usual challenges when you integrate with a third party: how do you exchange information with them, how do you assure that the data is travelling in the right ways, that data security is met? This is something where we have to be very careful when we’re integrating with third parties, that they don’t do things in a way that is against German regulations or against data privacy regulations. So for example, even if you take something as simple as appointment booking: the patient wants to book the appointment, the doctor wants the patient to book the appointment, but which data is shared? If you book an appointment with a psychotherapist, this already gives quite sensitive information about you as an individual, right? Because somebody can, just from the calendar entry, understand, hmm, Jack has booked an appointment with a psychotherapist; maybe there’s something wrong with him. So, we have to be very careful about those regulations. And then, it’s all of the standard stuff. How can we secure the communication? How can we make sure that the data is transferred accurately? How can we keep the systems reasonably decoupled? You don’t want to be reliant on somebody else, and they don’t want to be reliant on you, so you build in these principles of decoupling. Those are the architecture challenges. And then you have, on top of that: how do you share authentication? How do you validate the users? Where is the primary source: our system or the other system? How do you match? You know, many people have the same name, right, and even the same date of birth. And Germany has a population of 80-90 million people, so a lot of those are doubled up; we have a lot of Müllers and Schmitzes. So you have to be very careful that you don’t match the wrong appointment to the wrong person. So, some things that seem simple become bigger challenges at scale.

Kovid Batra: Makes sense, totally. When you encounter these challenges, these are things to deal with on the product and technology side, right? Along with that, I’m sure you’re handling a big team there, so there are people challenges also. This is one important topic that we usually discuss with the CTOs and other engineering leaders who come on the show. While you’re managing people, it is very important, as your company scales, that the people progress, right? And when you’re enabling a team, you need to make sure that people take the right career path. You wouldn’t want to push a person who aspires to management, let’s say to become an Engineering Manager, towards a technology role. So, you need to find that alignment. And you can’t go to each and every person, talk to them, and understand everything; when you are at scale, you have hundreds of developers and team members working with you. How do you impart that thought to people so that they consciously decide what they want to do? That makes your job easier, but I think it’s also very important for them, to understand themselves and align better.

Jörg Godau: A lot of this comes from company culture and values. If you set up the right company culture, the right company values, then you are actually in a very good place to allow people to grow in the right way. At Doctorly, even though it’s a startup, altogether we’re about 70 people now: development, or let’s call it technology, about 30-35, and a lot of other people in sales, customer onboarding, support, you know, these other organizational roles. So, we have four values. ‘Excellence’: people should strive to do great work. Yeah, fairly normal. ‘Integrity’: you must do what you say you’re going to do, or try to do what you say you’re going to do, and if it doesn’t work, you must tell somebody and not just hide it, yeah? Fairly normal as well. ‘Kindness’: yeah, this is super, super important. And this is not just kindness to the employee, but kindness to the customer, kindness to the patient who is sitting in front of our customer, kindness to each other, in how we talk to each other and how we behave. If you make a mistake, or if you accidentally talk to somebody the wrong way, go and say, “Hey, I’m sorry.” Right? This is part of it. And ‘Ownership’: taking ownership of the work that you do, being responsible for the things that you do and accountable for them. And using these four values, we talk about them all the time. I refuse to let them be written on the walls. I think once you start writing them on the walls or putting them in pamphlets, values are no longer useful.

I actually did this: I went to a presentation and gave a talk in front of a bigger group of people, and I asked, “A show of hands, does your company have values?” And most people put up their hands. I’m like, “Okay. Do you know the values?” And, like, half the hands go down. At Doctorly, every single person knows the values, because we try to refer to them always and we try to use them in our daily business. So we say, “Thank you for taking ownership.” “Thank you for doing this work.” “Thank you for being kind and helping me.” And that’s really important. And when people feel comfortable and safe, then you can talk about personal growth. Do you want to become a better technical expert? Do you want to become a manager? Are you happy doing what you’re doing, so we don’t need to move you anywhere? Sometimes people are just happy doing their job, you know? They don’t want to be something else; they just want to be good at their job and do this. Of course, in technology, everybody must still continue to learn, because the technology changes, so you can’t be completely static. But if somebody is a great backend developer and they want to continue to be a great backend developer, and they have no vision of leadership for themselves, why should I force them? It just hurts them and hurts me in the end. So, this is really important. And then, taking the time to talk to people, you know? Those are the secrets. I think we all know them; it’s the doing that’s harder. Yeah.

Kovid Batra: Exactly. I mean, I was just about to say this. Even though the values you mentioned are pretty common, the important point I took away is that you are not putting them on the walls; you are bringing them into the discussions on an everyday basis while you’re working. And I think that’s how the human brain works: you have to do that reinforcement in the right way so that people live by it. So, I think that’s pretty good advice, actually.

Jörg Godau: It’s like learning a language. If you don’t use it, you can’t learn it, but you can study it and it’s okay. But if you don’t use it, if you don’t live with the language, it’s not possible to really learn it. And if you have values that you don’t use, what’s the point, right? Like..

Kovid Batra: Absolutely, absolutely. Perfect. So with this, one question that comes to my mind is: when everything is aligned on the culture and values part, you’re doing well, and you get that feeling from the team that they have integrity and they’re putting in their best, right? Then how do you exactly measure their success? For an engineering team which is basically enabling the product, how do you, as a technical leader, define the success of an engineering team so that they also remain motivated to achieve it, right?

Jörg Godau: It’s super difficult, right? Metrics, measurements: a super difficult topic. And it’s one that we’re just revisiting ourselves at the moment, considering what we should measure. At the moment, we are measuring very obvious things: customer bugs, customer satisfaction. This is quite simple. If there are no bugs that customers find, it doesn’t mean your software is good, but it means that it’s working in a way that they expect, you know? So that’s one very easy thing. I think all development companies can measure this.

The other thing that we’re trying to do is, when we ask the teams to build something, we actually ask them, “Okay, you tell me how long.” And they get to choose. Will it be four weeks, seven weeks, five weeks, eight weeks? And then we measure, did they get that right? So, are they able to deliver at the time when they say they want to deliver? And if not, then we have to look at what causes this, obviously. And this is a big change. We used to work using Scrum, two-week sprints, deliver something every two weeks. We don’t do that anymore. Because the things that we build are either too small, so two weeks is too much, or too big and take many months. If we have a new complicated regulation that we have to implement, you can’t do this in two weeks. And yes, you can build it iteratively, but it provides no value until it’s finished. And then, there’s the certification. So you can never give it to a customer until you have the signed piece of paper from, like, the regulatory body.

So in this sense, we’ve now aligned our development process more to how the real world expects us to work. And that’s been a big change, but I think overall now that it’s been going for a few months, that’s been actually quite good.

Kovid Batra: Anything on the DORA metrics piece that you have seen, being implemented or thinking of implementing in your teams? Like particularly, let’s say, cycle time or change failure rate so that the teams have visibility there, or do you just think that these metrics put in the right process, which you’re already measuring would do the purpose, fulfill the purpose?

Jörg Godau: We do measure some of these things. Deployment frequency for us is not relevant because our customers don’t want the software to change during the day. You’re a doctor and you’re using the software. It should not change.

Kovid Batra: Yeah. Yeah.

Jörg Godau: Or if you’re Amazon or eBay or something and you have customers 24/7, you can do different things. Yeah, fine. But for a doctor, if he’s in the middle of making a prescription and the form suddenly changes and there’s a new box, it’s like, no. So our deployment frequency is once per night. Finished. So then there’s no point to measure it, you know. Obviously we deploy when there’s something that needs to be deployed, but otherwise, that path for us is useless.

What we do measure is if there is a critical bug. So, something that is stopping a doctor from doing something that’s important for the patient. These ones we want to solve on the same day so that the patient can get his medication or his sick note or whatever they need. And this is something we track: the resolution time on bugs. So, critical bugs must be resolved within one day, and that’s working very well. Other bugs, we want them to be resolved within the times that we give in our SLAs, so we track the SLA resolution on those. But if there’s a spelling mistake, you know, if it says ‘calendar’ with, like, a double ‘a’ instead of a double ‘e’, nobody cares when this is resolved. Yeah, it’s an example that I’m pulling from nowhere, but it’s not important, because everybody still understands it’s the calendar. They can find it. They can use it. Everything works. So these ones we don’t care about. Any of the low-level bugs, we don’t track the time on. They have to be done. Yes, it’s wrong, yes, it must be fixed, but it’s not such that people can’t work. So, low-level bugs we ignore in terms of tracking metrics, because it just adds effort. Every measurement that you make costs time. Every time you look at the measurements: “Oh, we’re not resolving our low-level bugs in 16 weeks.” Yeah, and? What does it matter?

So, this is the important thing. When you’re measuring something, you must determine what are you going to do with the answer? So, if you’re measuring a piece of wood, you’re asking the question, is it big enough to make what I want to make from this piece of wood? Yeah. It’s a very specific question. If you are measuring development teams, it’s much more complicated, obviously, but what do you want to do with the answer? If you have no, like, if you don’t know what the answer is for, you shouldn’t measure it.

Kovid Batra: Absolutely. I think it’s a very valid point that DORA metrics, or in general any engineering metric you’re looking at, won’t be the same for another team working on a different product, right? Every organization, every team has their own areas where they need to focus. And you have to choose these metrics rightly so that you can make an impact, rather than just putting down all those gazillion metrics and overloading the team with something completely unnecessary. So I totally agree with that point, and it makes sense. Deployment frequency was a very good example. Like, in your case, it doesn’t make sense to measure it, right? You can deploy only once each night.

Cool. I think that that’s really great. That was something on the quantitative part. You’re looking at engineering efficiency here. But another important aspect is the developer experience, right? Uh, you have to be empathetic toward your team, trying to understand what they feel, what their basic needs are, whether there are any kind of challenges. So, do you do any measurements or pulse check-ins there to understand what they need as a team, as an organization to work swiftly?

Jörg Godau: So we do the usual things like we do like 1-on-1s, we do skip-level meetings. So, managers talk to them. At the moment, actually our CEO is in South Africa. A lot of our team is actually based in South Africa. And he then met personally with all of the people in South Africa.

Kovid Batra: That’s great.

Jörg Godau: Twice a year, we have events where people come together. Our team is very distributed. So we have Germany, Eastern Europe, Lebanon, South Africa. But twice a year we bring people together, not all at once, because we can’t afford flying 20 people from South Africa to Europe or vice versa. But we have one event in South Africa, one event in Europe, and people get to spend time with each other. This is very important for the feeling. And we do measure employee NPS. So internally, every month we send a very quick survey, just three questions, you know, an NPS-style survey. And then once per quarter, we actually do a feedback cycle, a proper feedback cycle where people get feedback from their peers, from their manager, from their direct reports. And we gather all of this feedback, and the managers then look at it together with the people and say, “Hey, this is the feedback you got. Like, your team members are really happy with the way that you work, but not so happy with how you communicate. So what can we do to help you communicate better?” Or, “You’re doing good work, but your colleagues don’t like that you sometimes don’t write enough unit tests. So, what can we do to help you write more unit tests?” So very specific conversations can then happen out of this.

And we also then rate the employees, like how well are they doing now and what’s their future potential. So we have a like a grid system. Are they doing really well now? What’s their potential like in the future? And we reward the ones that are doing really well with extra shares or opportunities to do more work or not more work, but like to like grow in their career in different ways. So if somebody says, “I want to become Senior Engineer.” Or, “I want to become Team Lead.” We then try and look at that with them together and say, “Okay. So, what are the steps that we need to take? What’s the path?” And have very clear discussions with them.

Kovid Batra: That’s really amazing. And managing remote teams like this is kind of necessary now. And if not done well, I think you will not have the right team with the right enthusiasm in place. So, totally appreciate that.

Perfect, Jörg. Thanks for sharing all these insights and learnings with us. I hope our audience will love it, and thanks a lot once again.

Jörg Godau: I’m very, very happy to have this conversation. Thank you for giving me the opportunity. I think just one last thought on the whole, like remote work point.

Kovid Batra: Yeah.

Jörg Godau: There are a bunch of companies now that are saying you must come to the office two or three days, or like some rule for coming back to the office. For me, I think this should be taken under the premise that as a management team, as a leadership team, we cannot support you remotely. It is not about the employees; it’s that the organization can’t do it. If you force people to come to the office because you don’t trust them, you can’t see their work, you can’t measure what they’re doing, this is not their fault. You have to find ways to actually be able to do these things remotely. It is much more work as an organizational leader, as a team lead, as a manager, to have a remote team. Because if you have a local team, sure, you walk into the office, you look, “Ah, Mary, she looks a bit sad.” “John, he seems like he’s not having a good day. I’ll talk to him.” With a remote team, you have to actually spend time. You have to talk to them. Not every day, maybe, if you have too many people, but regularly or in group settings. And you have to provide this. And that means you as a manager have to find somewhere the extra hours to do it. And this is the thing where I think companies are misrepresenting. It’s like, ah, we need it for collaboration. It’s very good if people can meet and collaborate. We make hackathons, but people can participate remotely, so they’re able to collaborate, able to work together, or we have these events where people come together. These work. But if you force people to go to an office and sit at their desks, especially if you’re an international company, what am I supposed to do? Make the people in South Africa go to the South African office to have a video call with the people in Eastern Europe? What’s the point? Because we’re so spread out, it’s now obvious that it won’t work.

Kovid Batra: Yeah.

Jörg Godau: So, I think that’s super important and we’ve seen a lot of, like news, big companies forcing people back to the office full-time, part-time. I think that this is a failure of people to adapt and not of the individuals, but of the organizations. And this is something that I’m very passionate about, like holding up a flag for.

Kovid Batra: This is a little counterintuitive thought for me, but I think it’s very true, actually. It’s the organization that has to take care of it, not the employees.

Jörg Godau: And if I as a manager can’t do it, if I’m not capable as a manager of managing a remote team, that’s okay. But I have to say, “I, as a manager, am not capable of managing a remote team. So you must all come to my office.” It’s not his fault that I can’t manage him when he’s two hours away, right? Or her fault. It’s my fault, because it’s my job as a manager to manage these people. And some people are not good at remote work. There are individuals who, if they work from home, they don’t perform. Yeah? But you have to either help them learn how to work in this way or they have to find a job where they go to the office. Yeah? But it’s not every employee’s fault that one manager is not capable. If you think about it this way: if there are 10 people and one of them has a problem and nine don’t, which one most likely has to change?

Kovid Batra: The organization, probably. Yeah.

Jörg Godau: Yeah. Cool.

Kovid Batra: Perfect, Jörg. Can’t thank you enough for all the insights and learnings. I would love to have another show with you and go into more detail on how to manage these remote teams better, because that looks like a very interesting topic to me now.

Jörg Godau: Yeah. Thank you very much, Kovid. It was a real pleasure to talk to you and certainly very happy to talk again in the future. Yeah, thank you.


AI


How does Gen AI address Technical Debt?

The software development field is constantly evolving. While rapid delivery gets products and services to end-users quickly, it also means developers may take shortcuts to ship on time. This not only reduces the quality of the software but also increases technical debt.

But with new trends and technologies comes generative AI. It is a promising development for the software industry, one that can ultimately lead to higher-quality code and reduced technical debt.

Let’s explore more about how generative AI can help manage technical debt!

Technical debt: An overview

Technical debt arises when development teams take shortcuts to develop projects. While this gives them short-term gains, it increases their workload in the long run.

In other words, developers prioritize quick solutions over effective solutions. The four main causes behind technical debt are:

  • Business causes: Prioritizing business needs and the company’s evolving conditions can pressure development teams to cut corners, for example by moving up deadlines or cutting costs to reach desired goals.
  • Development causes: New technologies evolve rapidly, which makes it difficult for teams to switch or upgrade quickly, especially when they are already dealing with the burden of bad code.
  • Human resources causes: Unintentional technical debt can occur when development teams lack the necessary skills or knowledge to implement best practices, resulting in more errors and insufficient solutions.
  • Resource causes: When teams don’t have enough time or resources, they take shortcuts by choosing the quickest solution. This can be due to budgetary constraints, insufficient processes and culture, deadlines, and so on.

Why is generative AI important for code management?

As per McKinsey’s study,

“… 10 to 20 percent of the technology budget dedicated to new products is diverted to resolving issues related to tech debt. More troubling still, CIOs estimated that tech debt amounts to 20 to 40 percent of the value of their entire technology estate before depreciation.”

But there’s a solution to it. Handling tech debt is possible and can have a significant impact:

“Some companies find that actively managing their tech debt frees up engineers to spend up to 50 percent more of their time on work that supports business goals. The CIO of a leading cloud provider told us, ‘By reinventing our debt management, we went from 75 percent of engineer time paying the [tech debt] ‘tax’ to 25 percent. It allowed us to be who we are today.’”

There are many traditional ways to minimize technical debt, including manual testing, refactoring, and code review. However, these manual tasks take a lot of time and effort, and given the ever-evolving nature of the software industry, they are often overlooked or delayed.

With generative AI tools on the rise, they are increasingly seen as an effective way to manage code and, in turn, lower technical debt. These tools have already started reaching the market. They integrate into software development environments, gather and process data across the organization in real time, and are then leveraged to lower tech debt.

Some of the key benefits of generative AI are:

  • Identifies redundant code: Generative AI tools like CodeClone analyze code and suggest improvements. This helps improve code readability and maintainability and, in turn, minimizes technical debt.
  • Generates high-quality code: Automated code review tools such as Typo support an efficient and effective code review process. They understand the context of the code and accurately fix issues, which leads to high-quality code.
  • Automates manual tasks: Tools like GitHub Copilot automate repetitive tasks and let developers focus on higher-value work.
  • Suggests optimal refactoring strategies: AI tools like DeepCode leverage machine learning models to understand code semantics, break code down into more manageable functions, and improve variable naming.

Case studies and real-life examples

Many industries have already started adopting generative AI technologies for tech debt management. These AI tools assist developers in improving code quality, streamlining SDLC processes, and reducing costs.

Below are success stories from a few well-known organizations that have implemented these tools:

Microsoft uses Diffblue Cover for Automated Testing and Bug Detection

Microsoft is a global technology leader that implemented Diffblue Cover for automated testing. Through this generative AI, Microsoft has seen a considerable reduction in the number of bugs during the development process. It also ensures that new features don’t compromise existing functionality, which positively impacts code quality. This further helps in faster, more reliable releases and cost savings.

Google implements Codex for code documentation

Google is an internet search and technology giant that implemented OpenAI’s Codex to streamline its code documentation processes. Integrating this AI tool helped reduce the time and effort spent on manual documentation tasks. The resulting consistency across the entire codebase enhances code quality and allows developers to focus more on core tasks.

Facebook adopts CodeClone to identify redundancy

Facebook, a leading social media platform, has adopted a generative AI tool, CodeClone, to identify and eliminate redundant code across its extensive codebase. This resulted in fewer inconsistencies and a more streamlined, efficient codebase, which in turn led to faster development cycles.

Pioneer Square Labs uses GPT-4 for higher-level planning

Pioneer Square Labs, a studio that launches technology startups, adopted GPT-4 to handle mundane tasks so that developers can focus on core work. This frees them up for high-level planning, with the AI also assisting in writing code. Hence, it streamlines the development process.

How does Typo leverage generative AI to reduce technical debt?

Typo’s automated code review tool enables developers to merge clean, secure, high-quality code, faster. It lets developers catch issues related to maintainability, readability, and potential bugs and can detect code smells.

Typo also auto-analyses your codebase and pull requests to find issues and auto-generates fixes before you merge to master. Its Auto-Fix feature leverages GPT 3.5 Pro, trained on millions of open-source data points as well as exclusive anonymised private data, to generate line-by-line code snippets wherever an issue is detected in the codebase.

As a result, Typo helps reduce technical debt by detecting and addressing issues early in the development process, preventing the introduction of new debt, and allowing developers to focus on high-quality tasks.

Issue detection by Typo

Auto-fixing the codebase with an option to directly create a Pull Request

Key features

Supports top 10+ languages

Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.

Fix every code issue

Typo understands the context of your code and quickly finds and fixes any issues accurately. Hence, empowering developers to work on software projects seamlessly and efficiently.

Efficient code optimization

Typo uses optimized practices and built-in methods spanning multiple languages. Hence, reducing code complexity and ensuring thorough quality assurance throughout the development process.

Professional coding standards

Typo standardizes code and reduces the risk of a security breach.


Click here to know more about our Code Review tool

Can technical debt increase due to generative AI?

While generative AI can help reduce technical debt by analyzing code quality, removing redundant code, and automating the code review process, many engineering leaders believe it can increase technical debt too.

Bob Quillin, vFunction chief ecosystem officer, stated: “These new applications and capabilities will require many new MLOps processes and tools that could overwhelm any existing, already overloaded DevOps team.”

They aren’t wrong either!

Technical debt can increase when organizations don’t document properly and don’t train development teams to implement generative AI the right way. When these AI tools are adopted hastily, without considering the long-term implications, they can instead add to developers’ workload and increase technical debt. A few practices help keep that risk in check:

Ethical guidelines

Establish ethical guidelines for the use of generative AI in organizations. Understand the potential ethical implications of using AI to generate code, such as the impact on job displacement, intellectual property rights, and biases in AI-generated output.

Diverse training data quality

Ensure the quality and diversity of training data used to train generative AI models. When training data is biased or incomplete, these AI tools can produce biased or incorrect output. Regularly review and update training data to improve the accuracy and reliability of AI-generated code.

Human oversight

Maintain human oversight throughout the generative AI process. While AI can generate code snippets and provide suggestions, the final decision should rest with developers, who review and validate the output to ensure correctness, security, and adherence to coding standards.

Most importantly, human intervention is a must when using these tools. After all, it is developers’ judgment, creativity, and domain knowledge that drive the final decision. Generative AI is indeed helpful for reducing developers’ manual tasks; however, it needs to be used properly.

Conclusion

In a nutshell, generative artificial intelligence tools can help manage technical debt when used correctly. These tools help to identify redundancy in code, improve readability and maintainability, and generate high-quality code.

However, it should be noted that these AI tools shouldn’t be used independently. They must work only as developers’ assistants, and developers must use them transparently and fairly.

Use of AI in the code review process

The code review process is one of the major contributors to developer burnout. This not only hinders developer productivity but also negatively affects software delivery. At the same time, it is a crucial aspect of software development that shouldn’t be compromised.

So, what is the alternative to manual code review? Let’s dive in further to know more about it:

The Current State of Manual Code Review

Manual code reviews are crucial to the software development process. They can help identify bugs, mentor new developers, and promote a collaborative culture among team members. However, they come with their own set of limitations.

Software development is a demanding job with many projects and processes. Code review, when done manually, can take a lot of developers’ time and effort, especially when reviewing an extensive codebase. It not only prevents them from working on other core tasks but also leads to fatigue and burnout, resulting in decreased productivity.

Since reviewers have to read the source code line by line to identify issues and vulnerabilities, the work can overwhelm them, and they may miss some critical paths. This can result in human errors, especially when a deadline is approaching, negatively impacting project efficiency and straining team resources.

In short, manual code review demands significant time, effort, and coordination from the development team.

This is when AI code review comes to the rescue. AI code review tools are becoming increasingly popular. Let’s read more about AI code review and why it is important for developers:

What is AI Code Review?

AI code review is an automated process that examines and analyzes the code of software applications. It uses artificial intelligence and machine learning techniques to identify patterns and detect potential problems, common programming mistakes, and security vulnerabilities. Because these tools work from data rather than opinion, they are more consistent than human reviewers and can read vast amounts of code in seconds.
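
At its core, this usually means sending code (or a diff) to a large language model with review instructions. Here is a minimal sketch using the OpenAI Python client; the model name, prompt, and review_diff helper are illustrative assumptions, not any particular product’s implementation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_diff(diff: str) -> str:
    """Ask the model to flag bugs, code smells, and security issues in a diff."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. Flag bugs, code smells, "
                        "and security vulnerabilities. Be concise."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

# Example: review a tiny snippet with an obvious gap
print(review_diff("def div(a, b):\n    return a / b  # no zero check"))
```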

Why is AI in the Code Review Process Important?

Augmenting human efforts with AI code review has various benefits:

Enhance Overall Quality

Generative AI in code review tools can detect issues like potential bugs, security vulnerabilities, code smells, bottlenecks, and more, issues the human code review process usually overlooks. It helps identify patterns and recommend code improvements that enhance efficiency and maintainability and reduce technical debt. This leads to robust and reliable software that meets the highest quality standards.

Improve Productivity

AI-powered tools can scan and analyze large volumes of code within minutes. They not only detect potential issues but also suggest improvements according to coding standards and practices. By providing immediate feedback, they allow the development team to catch errors early in the development cycle. This saves the time spent on manual inspection, and developers can instead focus on the more intricate and imaginative parts of their work.

Better Compliance with Coding Standards

The automated code review process ensures that code conforms to coding standards and best practices, making it more readable, understandable, and maintainable and thereby improving code quality. Moreover, it enhances teamwork and collaboration among developers, as all of them adhere to the same guidelines and the review process stays consistent.

Enhance Accuracy

The major disadvantage of manual code reviews is that they are prone to human error and bias, which can let critical issues related to structural quality or architectural decisions slip through and negatively impact the software application. Generative AI in code reviews can analyze code much faster and more consistently than humans, maintaining accuracy and reducing bias since it works entirely from data.

Increase Scalability

When software projects grow in complexity and size, manual code reviews become increasingly time-consuming, and reviewers may struggle to keep up with the scale of the codebase, further delaying the review process. As mentioned before, AI code review tools can handle large codebases in a fraction of the time and can help development teams maintain high standards of code quality and maintainability.

How Does Typo Leverage Gen AI to Automate Code Reviews?

Typo’s automated code review tool enables developers to merge clean, secure, high-quality code, faster. It lets developers catch issues related to maintainability, readability, and potential bugs, and can detect code smells. It auto-analyses your codebase and pull requests to find issues and auto-generates fixes before you merge to master.

Typo’s Auto-Fix feature leverages GPT 3.5 Pro to generate line-by-line code snippets wherever an issue is detected in the codebase. This means less time reviewing and more time for important tasks, making the whole process faster and smoother.

Issue detection by Typo

Auto fixing the codebase with an option to directly create a Pull Request

Key Features

Supports Top 10+ Languages

Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.

Fix Every Code Issue

Typo understands the context of your code and quickly finds and fixes any issues accurately. Hence, empowering developers to work on software projects seamlessly and efficiently.

Efficient Code Optimization

Typo uses optimized practices and built-in methods spanning multiple languages. Hence, reducing code complexity and ensuring thorough quality assurance throughout the development process.

Professional Coding Standards

Typo standardizes code and reduces the risk of a security breach.

Comparing Typo with Other AI Code Review Tools

There are other popular AI code review tools available in the market. Let’s compare how we stack up against them:

|                    | Typo                                           | Sonarcloud | Codacy | Codecov |
|--------------------|------------------------------------------------|------------|--------|---------|
| Code analysis      | AI analysis and static code analysis           | No         | No     | No      |
| Code context       | Deep understanding                             | No         | No     | No      |
| Proprietary models | Yes                                            | No         | No     | No      |
| Auto debugging     | Automated debugging with detailed explanations | Manual     | No     | No      |
| Auto pull request  | Automated pull requests and fixes              | No         | No     | No      |

AI vs. Humans: The Future of Code Reviews?

AI code review tools are becoming increasingly popular. One question that has been on everyone’s mind is whether these AI code review tools will take away developers’ jobs.

The answer is NO.

Generative AI in code reviews is designed to enhance and streamline the development process. It lets developers automate repetitive and time-consuming tasks and focus on other core aspects of software applications. Moreover, human judgment, creativity, and domain knowledge are crucial to software development in ways that AI cannot fully replicate.

While these tools excel at certain tasks, like analyzing codebases, identifying code patterns, and software testing, they still cannot fully understand complex business requirements and user needs, or make subjective decisions.

As a result, the combination of AI code review tools and developers’ intervention is an effective approach to ensure high-quality code.

Conclusion

The tech industry is demanding, and software engineering teams need to stay ahead of industry trends. New AI tools and technologies can complement their skills and expertise and make their tasks easier.

AI in the code review process offers remarkable benefits, including reduced human error and consistent accuracy. But remember that these tools are here to assist you with your tasks, not to define your whole strategy or to replace you.


How Generative AI Is Revolutionising Developer Productivity

Generative AI has become a transformative force in the tech world, and it isn’t going to stop anytime soon. It will continue to have a major impact, especially in the software development industry.

Generative AI, when used in the right way, can save developers time and effort. It allows them to focus on core tasks and upskilling, helps streamline various stages of the SDLC, and improves developer productivity. In this article, let’s dive deeper into how generative AI can positively impact developer productivity.

What is Generative AI?

Generative AI is a category of AI models and tools designed to create new content: images, videos, text, music, or code. It uses various techniques, including neural networks and deep learning algorithms, to generate that content.

Generative artificial intelligence holds great advantages for software developers looking to improve their productivity. It not only improves code quality and helps deliver better products and services but also allows them to stay ahead of their competitors. Below are a few benefits of generative AI:

Increases Efficiency

With the help of Generative AI, developers can automate tasks that are either repetitive or don’t require much attention. This saves a lot of time and energy and allows developers to be more productive and efficient in their work. Hence, they can focus on more complex and critical aspects of the software without constantly stressing about other work.

Improves Quality

Generative AI can help minimize errors and address potential issues early. When aligned with your coding standards, it can contribute to more effective code reviews. This increases code quality and decreases costly downtime and data loss.

Helps in Learning and Assisting with Work

Generative AI can assist developers by analyzing and generating examples of well-structured code, providing suggestions for refactoring, generating code snippets, and detecting blind spots. This further helps developers in upskilling and gaining knowledge about their tasks.

Cost Savings

Integrating generative AI tools can reduce costs. It enables developers to use existing codebases effectively and complete projects faster, even with smaller teams. Generative AI can streamline the stages of the software development life cycle and stretch a limited budget further.

Predictive Analytics

Generative AI can help in detecting potential issues in the early stages by analyzing historical data. It can also make predictions about future trends. This allows developers to make informed decisions about their projects, streamline their workflow, and hence, deliver high-quality products and services.

How does Generative AI Help Software Developers?

Below are four key areas in which Generative AI can be a great asset to software developers:

It Eliminates Manual and Repetitive Tasks

Generative AI can take over the manual and routine tasks of software development teams, such as test automation, completing coding statements, and writing documentation. Developers provide the prompt, i.e. information about their code and documentation that adheres to best practices, and it generates the required content accordingly, minimizing human error and increasing accuracy.

This frees up developers’ creativity and problem-solving skills and lets them focus on solving complex business challenges and fast-tracking new software capabilities. Hence, it helps in faster delivery of products and services to end users.

It Helps Developers to Tackle New Challenges

When developers face challenges or obstacles in their projects, they can turn to these AI tools for assistance. The tools can track performance, provide feedback, offer predictions, and find the optimal path to complete tasks. Given clear, well-formed prompts, they can provide problem-specific recommendations and proven solutions.

This keeps developers from getting stressed out over certain tasks; instead, they can use their time and energy for other important work or take breaks. It increases their productivity and performance and, hence, improves the overall developer experience.

It Helps in Creating the First Draft of the Code

With the help of generative artificial intelligence, developers can get helpful code suggestions and generate initial drafts, either by entering a prompt in a separate window or within the IDE used to develop the software. This keeps developers from falling into a slump and helps them get into the flow sooner. Besides this, these AI tools can also assist in root cause analysis and generate new system designs. Hence, they allow developers to reflect on code at a higher, more abstract level and focus more on what they want to build.

It Helps in Making Changes to Existing Code Faster

Generative AI can accelerate updates to existing code. Developers simply provide the criteria, and the AI tool proceeds from there. This usually covers tasks that get sidelined due to workload and lack of time; refactoring existing code, for example, involves making small changes that improve code readability and performance.

As a result, developers can focus on high-level design and critical decision-making without worrying much about existing tasks.

How does Generative AI Improve Developer Productivity?

Below are a few ways in which Generative AI can have a positive impact on developer productivity:

Focus on Meaningful Tasks

As generative AI tools take over tedious and repetitive tasks, they allow developers to give their time and energy to meaningful activities. This avoids distraction and prevents stress and burnout. Hence, it increases their productivity and positively impacts the overall developer experience.

Assist in their Learning Graph

Generative AI lets developers be less dependent on their seniors and co-workers, since they can gain practical insights and examples from these AI tools. It allows them to enter their flow state faster and reduces their stress level.

Assist in Pair Programming

Through generative AI, developers can collaborate with each other more easily. These AI tools provide intelligent suggestions and feedback during coding sessions, stimulating discussion and leading to better, more creative solutions.

Increase the Pace of Software Development

Generative AI helps in the continuous delivery of products and services and drives business strategy. It addresses potential issues in the early stages and provides suggestions for improvements. Hence, it not only accelerates the phases of SDLC but improves overall quality as well.

5 Top Generative AI Tools for Software Developers

Typo

Typo auto-analyzes your code and pull requests to find issues and suggests auto-fixes before they get merged.

Use Case

The code review process is time-consuming. Typo enables developers to find issues as soon as a PR is raised and shows alerts within the git account. It gives you a detailed summary of security, vulnerability, and performance issues. To streamline the whole process, it suggests auto-fixes and best practices to move things along faster and better.

GitHub Copilot

GitHub Copilot is an AI pair programmer that provides autocomplete-style suggestions for your code.

Use Case

Coding is an integral part of your software development project; however, when done manually, it takes a lot of effort. GitHub Copilot picks suggestions from your current or related code files and lets you test and select code to perform different actions. It also ensures that vulnerable coding patterns are filtered out and blocks problematic public code suggestions.

Tabnine

Tabnine is an AI-powered code completion tool that uses deep learning to suggest code as you type.

Use Case

Writing code can keep you from focusing on other core activities. Tabnine provides increasingly accurate suggestions over time, based on your coding habits, and personalizes code completions. It supports programming languages such as JavaScript and Python and integrates with popular IDEs for speedy setup and reduced context switching.

ChatGPT

ChatGPT is a language model developed by OpenAI to understand prompts and generate human-like texts.

Use Case

Developers need to brainstorm ideas and get feedback on their projects. This is when ChatGPT comes to their rescue. It helps them quickly find answers about coding, technical documentation, programming concepts, and much more. It uses natural language to understand questions and provide relevant suggestions.

Mintlify

Mintlify is an AI-powered documentation writer that allows developers to quickly and accurately generate code documentation.

Use Case

Code documentation can be a tedious process. Mintlify can analyze code, quickly understand complicated functions, and includes built-in analytics to help developers understand how users engage with the documentation. It also has Mintlify Chat, which reads documents and answers user questions instantly.

How to Mitigate Risks Associated with Generative AI?

No matter how effective generative AI is becoming, it still produces defects and errors. Its output is not always correct, so human review remains important after handing tasks to AI tools. Below are a few ways you can reduce the risks related to generative AI:

Implement Quality Control Practices

Develop guidelines and policies to address ethical challenges such as fairness, privacy, transparency, and accuracy in software development projects. Make sure to put monitoring in place that tracks model accuracy, performance metrics, and potential biases.

Provide Generative AI Training

Offer mentorship and training on generative AI. This will increase AI literacy across departments and mitigate risk. Help developers learn how to use these tools effectively and understand their capabilities and limitations.

Understand AI is an Assistant, Not a Replacement

Make sure your developers understand that these generative tools should be viewed as assistants only. Encourage collaboration between the tools and human operators to leverage the strengths of AI.

Conclusion

In a nutshell, generative AI stands as a game-changer in the software development industry. When harnessed effectively, it can bring a multitude of benefits to the table. However, ensure that your developers approach its integration with caution.


Tutorials


How Typo Uses DORA Metrics to Boost Efficiency?

DORA metrics are a compass for engineering teams striving to optimise their development and operations processes.

Consistently tracking these metrics can lead to significant and lasting improvements in your software delivery processes and overall business performance.

Below is a detailed guide on how Typo uses DORA to improve DevOps performance and boost efficiency:

What are DORA Metrics?

In 2015, the DORA (DevOps Research and Assessment) team was founded by Gene Kim, Jez Humble, and Nicole Forsgren to evaluate and improve software development practices. The aim was to better understand how organisations can deliver software faster, more reliably, and with higher quality.

They developed DORA metrics that provide insights into the performance of DevOps practices and help organisations improve their software development and delivery processes. These metrics help in finding answers to these two questions:

  • How can an organisation’s elite performers be identified?
  • What should low-performing teams focus on?

The Four DORA Metrics

DORA metrics help assess software delivery performance based on four key (or accelerate) metrics:

  • Deployment Frequency
  • Lead Time for Changes
  • Change Failure Rate
  • Mean Time to Recover

Deployment Frequency

Deployment Frequency measures how often code is deployed to production. It helps in understanding the team’s throughput and quantifying how much value is delivered to customers.

When organizations achieve a high Deployment Frequency, they can enjoy rapid releases without compromising the software’s robustness. This can be a powerful driver of agility and efficiency, making it an essential component for software development teams.

One deployment per week is standard; however, it also depends on the type of product.
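
To make the calculation concrete, here is a minimal sketch that turns a list of production deployment timestamps into a deploys-per-week figure; the data is illustrative:

```python
from datetime import datetime

# Production deployment timestamps (illustrative data)
deployments = [
    datetime(2024, 6, 3), datetime(2024, 6, 5),
    datetime(2024, 6, 10), datetime(2024, 6, 14),
]

# Span of the observation window; guard against a single-day window
window_days = (max(deployments) - min(deployments)).days or 1
deploys_per_week = len(deployments) / (window_days / 7)
print(f"Deployment frequency: {deploys_per_week:.1f} deploys/week")
```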

Why is it Important?

  • It provides insights into the overall efficiency and speed of the DevOps team’s processes.
  • It helps in identifying pitfalls and areas for improvement in the software development life cycle.
  • It helps in making data-driven decisions to optimise the process.
  • It helps in understanding the impact of changes on system performance.

Lead Time for Changes

Lead Time for Changes measures the time it takes for code changes to move from inception to deployment. The measurement of this metric offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.

By analysing Lead Time for Changes, development teams can identify bottlenecks in the delivery pipeline and streamline their workflows to improve the overall speed and efficiency of software delivery. A shorter lead time indicates that the DevOps team is more efficient in deploying code.
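
A minimal sketch of the calculation, averaging the commit-to-deploy gap over illustrative (commit, deploy) pairs:

```python
from datetime import datetime, timedelta

# (committed_at, deployed_at) pairs for each change (illustrative data)
changes = [
    (datetime(2024, 6, 3, 9), datetime(2024, 6, 4, 17)),
    (datetime(2024, 6, 5, 11), datetime(2024, 6, 6, 10)),
]

lead_times = [deployed - committed for committed, deployed in changes]
# sum() needs a timedelta start value to add timedeltas together
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
print(f"Lead time for changes: {avg_lead_time}")
```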

Why is it Important?

  • It helps organisations gather feedback and validate assumptions quickly, leading to informed decision-making and aligning software development with customer needs.
  • It helps organizations gain agility and adaptability, allowing them to swiftly respond to market changes, embrace new technologies, and meet evolving business needs.
  • It enables experimentation, learning, and continuous improvement, empowering organizations to stay competitive in dynamic environments.
  • It demands collaborative teamwork, breaking silos, fostering shared ownership, and improving communication, coordination, and efficiency.

Change Failure Rate

Change Failure Rate gauges the percentage of changes that require hotfixes or other remediation after reaching production. It reflects the stability and reliability of the entire software development and deployment lifecycle.

By tracking CFR, teams can identify bottlenecks, flaws, or vulnerabilities in their processes, tools, or infrastructure that can negatively impact the quality, speed, and cost of software delivery.

A CFR between 0% and 15% is considered a good indicator of your code quality.
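
The underlying arithmetic is simple. A minimal sketch with illustrative counts:

```python
# Deployment counts over the selected time range (illustrative data)
total_deployments = 40
failed_deployments = 3  # needed a rollback, hotfix, or caused an incident

change_failure_rate = failed_deployments / total_deployments * 100
print(f"Change failure rate: {change_failure_rate:.1f}%")  # 7.5%, inside the 0-15% band
```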

Why is it Important?

  • It enhances user experience and builds trust by reducing failures.
  • It protects your business from financial risk, helping you avoid revenue loss, customer churn, and brand damage.
  • It helps in allocating resources effectively and focuses on delivering new features.
  • It ensures changes are implemented smoothly and with minimal disruption.

Mean Time to Recovery

Mean Time to Recovery measures how quickly a team can bounce back from incidents or failures. It concentrates on determining the efficiency and effectiveness of an organisation’s incident response and resolution procedures.

A lower mean time to recovery is synonymous with a resilient system capable of handling challenges effectively.

The response time should be as short as possible; 24 hours is considered a good rule of thumb.
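
A minimal sketch of the calculation over illustrative incident data:

```python
from datetime import datetime, timedelta

# (failed_at, restored_at) pairs for production incidents (illustrative data)
incidents = [
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 3, 12, 30)),
    (datetime(2024, 6, 10, 14, 0), datetime(2024, 6, 10, 18, 0)),
]

recovery_times = [restored - failed for failed, restored in incidents]
mttr = sum(recovery_times, timedelta()) / len(recovery_times)
print(f"Mean time to recovery: {mttr}")  # aim for well under the 24-hour rule of thumb
```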

Why is it Important?

  • It enhances user satisfaction by reducing downtime and resolution times.
  • It mitigates the negative impacts of downtime on business operations, including financial losses, missed opportunities, and reputational damage.
  • It helps meet service level agreements (SLAs) that are vital for upholding client trust and fulfilling contractual commitments.
  • It provides valuable insights into day-to-day practices such as incident management and engineering team performance, and helps elevate customer satisfaction.

The Fifth Metric: Reliability

Reliability is a fifth metric, added by the DORA team in 2021. It measures modern operational practices and doesn’t have standard quantifiable targets for performance levels.

Reliability comprises several measures of operational performance, including availability, latency, performance, and scalability, covering user-facing behaviour, software SLAs, performance targets, and error budgets.

How Typo Uses DORA to Boost Dev Efficiency?

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It offers comprehensive insights into the deployment process through key DORA metrics such as change failure rate, time to build, and deployment frequency.

Below is a detailed view of how Typo uses DORA to boost dev efficiency and team performance:

DORA Metrics Dashboard

Typo’s DORA metrics dashboard has a user-friendly interface and robust features tailored for DevOps excellence. It helps identify bottlenecks, improves collaboration between teams, optimises delivery speed, and effectively communicates a team’s success.

The DORA metrics dashboard pulls in data from all your sources and presents it in a visual, detailed way to engineering leaders and the development team.

The dashboard helps in many ways:

  • With pre-built integrations into the dev tool stack, the DORA dashboard gets all the relevant data flowing in within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency and much more from a single dashboard.
  • The dashboard sets custom improvement goals for each team and tracks their success in real-time.
  • It gives real-time visibility into a team’s KPI and lets them make informed decisions.

How to Build your DORA Metrics Dashboard?

Define your objectives

Firstly, define clear and measurable objectives. Consider KPIs that align with your organisational goals. Whether it’s improving deployment speed, reducing failure rates, or enhancing overall efficiency, having a well-defined set of objectives will help guide your implementation of the dashboard.

Understanding DORA metrics

Gain a deeper understanding of DORA metrics by exploring the nuances of Deployment Frequency, Lead Time, Change Failure Rate, and MTTR. Then, connect each of these metrics with your organisation’s DevOps goals to have a comprehensive understanding of how they contribute towards improving overall performance and efficiency.

Dashboard configuration

Follow specific guidelines to properly configure your dashboard. Customise the widgets to accurately represent important metrics and personalise the layout to create a clear and intuitive visualisation of your data. This ensures that your team can easily interpret the insights provided by the dashboard and take appropriate actions.

Implementing data collection mechanisms

To ensure the accuracy and reliability of your DORA Metrics, establish strong data collection mechanisms. Configure your dashboard to collect real-time data from relevant sources, so that the metrics reflect the current state of your DevOps processes.
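
As one example of such a mechanism, here is a minimal sketch that pulls deployment records from GitHub’s REST API with the requests library; the repository name and token handling are illustrative:

```python
import os
import requests

def fetch_deployments(owner: str, repo: str) -> list:
    """Return deployment records for a repository via GitHub's REST API."""
    response = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/deployments",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example: timestamps that could feed a deployment-frequency calculation
deployments = fetch_deployments("acme", "storefront")  # hypothetical repo
print([d["created_at"] for d in deployments])
```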

Integrating automation tools

Integrate automation tools to optimise the performance of your DORA Metrics Dashboard.

By utilising automation for data collection, analysis, and reporting processes, you can streamline routine tasks. This will free up your team’s time and allow them to focus on making strategic decisions and improvements.

Utilising the dashboard effectively

To get the most out of your well-configured DORA Metrics Dashboard, use the insights gained to identify bottlenecks, streamline processes, and improve overall DevOps efficiency. Analyse the dashboard data regularly to drive continuous improvement initiatives and make informed decisions that will positively impact your software development lifecycle.

Comprehensive Visualization of Key Metrics

Typo’s dashboard provides clear and intuitive visualisations of the four key DORA metrics:

Deployment Frequency

It tracks how often new code is deployed to production, highlighting the team’s productivity.

By integrating with your CI/CD tool, Typo calculates Deployment Frequency by counting the number of unique production deployments within the selected time range. You can configure which workflows and repositories count as production.

Cycle Time (Lead Time for Changes)

It measures the time it takes from code being committed to it being deployed in production, indicating the efficiency of the development pipeline.

In the context of Typo, it is the average time all pull requests have spent in the “Coding”, “Pickup”, “Review” and “Merge” stages of the pipeline. Typo considers all merged pull requests to the main/master/production branch within the selected time range and calculates the average time spent by each pull request in every stage of the pipeline. Open and draft pull requests are not considered in this calculation.
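
A minimal sketch of the stage-wise averaging described above (not Typo’s actual implementation; the per-stage hours are illustrative):

```python
from statistics import mean

# Hours each merged PR spent per stage (illustrative data)
merged_prs = [
    {"Coding": 20.0, "Pickup": 6.0, "Review": 10.0, "Merge": 1.0},
    {"Coding": 35.0, "Pickup": 2.0, "Review": 16.0, "Merge": 0.5},
]

stages = ["Coding", "Pickup", "Review", "Merge"]
avg_per_stage = {s: mean(pr[s] for pr in merged_prs) for s in stages}
cycle_time = sum(avg_per_stage.values())

print(avg_per_stage)                      # average hours per stage
print(f"Cycle time: {cycle_time:.1f} h")  # sum of the stage averages
```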

Change Failure Rate

It shows the percentage of deployments causing a failure in production, reflecting the quality and stability of releases.

There are multiple ways this metric can be configured:

  • A deployment that needs a rollback or a hotfix: For such cases, any Pull Request having a title/tag/label that represents a rollback/hotfix that is merged to production can be considered as a failure.
  • A high-priority production incident: For such cases, any ticket in your Issue Tracker having a title/tag/label that represents a high-priority production incident can be considered as a failure.
  • A deployment that failed during the production workflow: For such cases, Typo can integrate with your CI/CD tool and consider any failed deployment as a failure.

To calculate the final percentage, the total number of failures is divided by the total number of deployments (taken either from the deployment PRs or from the CI/CD tool’s deployments).
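
As an illustration of the first configuration, here is a minimal sketch that counts a merged PR as a failure when its title or labels indicate a rollback or hotfix; the data shape is hypothetical, not Typo’s actual implementation:

```python
FAILURE_MARKERS = ("rollback", "hotfix")

def is_failure(pr: dict) -> bool:
    """Count a merged PR as a failure if its title or labels flag a rollback/hotfix."""
    text = " ".join([pr["title"], *pr.get("labels", [])]).lower()
    return any(marker in text for marker in FAILURE_MARKERS)

# Merged production PRs (hypothetical data shape)
merged_prs = [
    {"title": "Add billing export", "labels": []},
    {"title": "Hotfix: null check in scheduler", "labels": ["hotfix"]},
]

failures = sum(is_failure(pr) for pr in merged_prs)
print(f"Change failure rate: {failures / len(merged_prs) * 100:.0f}%")  # 50% on this toy sample
```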

Mean Time to Restore (MTTR)

It measures the time taken to recover from a failure, showing the team’s ability to respond to and fix issues.

The way a team tracks production failures (CFR) defines how MTTR is calculated for that team. If a team considers a production failure as:

  • Pull Request tagging to track a deployment that needs a rollback or a hotfix: In such a case, MTTR is calculated as the time between the last deployment and the merge of such a Pull Request to main/master/production.
  • Ticket tagging for high-priority production incidents: In such a case, MTTR is calculated as the average time such a ticket takes from the ‘In Progress’ state to the ‘Done’ state.
  • CI/CD integration to track deployments that failed during the production workflow: In such a case, MTTR is calculated as the average time between a deployment failure and the next successful deployment.

Benchmarking for Context

  • Industry Standards: By providing benchmarks, Typo allows teams to compare their performance against industry standards, helping them understand where they stand.
  • Historical Performance: Teams can also compare their current performance with their historical data to track improvements or identify regressions.


How Does it Help Engineering Leaders?

  • Typo provides a clear, data-driven view of software development performance. It offers insights into various aspects of development and operational processes.
  • It helps in tracking progress over time. Through continuous tracking, it monitors improvements or regressions in a team’s performance.
  • It supports DevOps practices that focus on both development speed and operational stability.
  • DORA metrics help in mitigating risk. With the help of CFR and MTTR, engineering leaders can manage and lower risk, ensuring more stability and reliability associated with software changes.
  • It identifies bottlenecks and inefficiencies and pinpoints where the team is struggling such as longer lead times or high failure rates.

How Does it Help Development Teams?

  • Typo provides a clear, real-time view of a team’s performance and lets the team make informed decisions based on empirical data rather than guesswork.
  • It encourages balance between speed and quality by providing metrics that highlight both aspects.
  • It helps in predicting future performance based on historical data. This helps in better planning and resource allocation.
  • It helps in identifying potential risks early and taking proactive measures to mitigate them.

Conclusion

DORA metrics deliver crucial insights into team performance. Monitoring Change Failure Rate and Mean Time to Recovery helps leaders ensure their teams are building resilient services with minimal downtime. Similarly, keeping an eye on Deployment Frequency and Lead Time for Changes assures engineering leaders that the team is maintaining a swift pace.

Together, these metrics offer a clear picture of how well the team balances speed and quality in their workflows.

How to engineer your feedback?

One practice many organizations are adopting is a continuous feedback process. While it may seem straightforward, it is not: every developer takes feedback differently. Hence, it is important to engineer feedback the right way.

Why is the feedback process important?

Below are a few ways in which continuous feedback benefits both developers and engineering leaders:

Keeps everyone on the same page: Feedback keeps individuals aligned, no matter what type of tasks they are working on. It allows them to understand their strengths and improve on their blind spots and, hence, deliver high-quality work.

Facilitates improvement: Feedback shows developers the areas they need to improve and the opportunities they can grab based on their strengths. With the right context and motivation, it can encourage software developers to work on their personal and professional growth.

Nurtures healthy relationships: Feedback fosters open and honest communication. It lets developers feel comfortable sharing ideas and seeking support without judgement, even when they aren’t performing well.

Enhances user satisfaction: Feedback helps developers improve the quality of their work, which has a direct impact on user satisfaction and, in turn, benefits the organization.

Strengthens performance management: Feedback enables you to set clear expectations, track progress, and provide ongoing support and guidance to developers. This further strengthens their performance and streamlines their workflow.

How to engineer your feedback?

There are a lot of things to consider when giving effective and honest feedback. We’ve divided the process into three sections; check them out below:

Before the feedback session

Frame the context of the developer feedback

Plan in advance how you will start the conversation, what is worth mentioning, and what is not. For example, if the feedback is related to pull requests, you can start by discussing the developer’s past performance on them. Further, you can talk about how well they are performing, whether they are delivering work on time, how you rate their performance and action plan, and whether there are any challenges they are facing. Make sure to relate it to the bigger picture.

When framed appropriately and constructively, it helps in focusing on improvement rather than criticism. It also enables developers to take feedback the right way and help them grow and succeed.

Keep tracking continuously

Observe and note down everything related to the developers, and track their performance continuously. Jot down whatever you notice, even if it does not seem worth mentioning during the feedback session. This allows you to share feedback more accurately and comprehensively. It also helps you identify trends and patterns in developer performance, and it lets developers know that the feedback isn’t based on isolated incidents but rather on consistent observation.

For example, XYZ is a software developer at ABC organization. The engineering leader observed XYZ for three months before delivering effective feedback. She told him:

  • In the first month, XYZ wasn’t able to work well on the initial implementation strategy, so she provided him with resources.
  • In the second month, he showed signs of improvement, yet he hesitated to participate in team meetings.
  • In the third month, XYZ’s technical skills kept improving, but he still struggled to engage in meetings and share his ideas.

As a result, the engineering leader was able to discuss his strengths and areas of improvement effectively.

Understand the difference between feedback and criticism

Before offering feedback to software development teams, make sure you are well aware of the differences between constructive feedback and criticism. Constructive feedback encourages developers to pursue their personal and professional development. Criticism, on the other hand, makes developers defensive and hinders their progress.

Constructive feedback focuses on the behavior and outcomes of the developers and helps them with actionable insights, while criticism dwells on faults and mistakes without providing the right guidance.

For example,

Situation: A developer’s recent code review missed several critical issues.

Feedback: “Your recent code review missed a few critical issues, like the memory leak in the data processing module. Next time, please double-check for potential memory leaks. If you’re unsure how to spot them, let’s review some strategies together.”

Criticism: “Your code reviews are sloppy and miss too many important issues. You need to do a better job.”

Collect all important information

Review previous feedback given to developers before the session. Check what was last discussed and make sure to bring it up again. Also, include the observations you tracked during this time and connect them with the previous feedback. Look at metrics such as pull request activity, work progress, team velocity, work logs, check-ins, and more to get in-depth insights about their work. You can also gather peer reviews to get 360-degree feedback and better understand how well individuals are performing.

This makes your feedback balanced and takes into account all aspects of developers’ contributions and challenges.

During the feedback session

Two-way feedback

Feedback shouldn’t be a top-down approach; it must go both ways. You can start by bringing up the discussion from the previous feedback session. Ask for their opinion and perspective on certain topics and ideas, and ask questions that show you respect their opinions and want to hear what they would like to discuss.

Now, share your feedback based on the last discussion, observations, and performance. You can also modify your feedback based on their perspective and reflections. It allows the feedback to be detailed and comprehensive.

Establish clear steps for improvement

When you have shared their areas of improvement, make sure you also provide them with clear, actionable plans. Discuss what needs immediate attention and what steps they can take. Set small goals with them, as these are easier to focus on and show that their goals matter. Schedule follow-up meetings after they reach each step to understand whether they are facing any challenges. You can also provide resources and tools that help them attain their goals.

Apply the SBI framework

Developed by the Center for Creative Leadership, SBI stands for Situation, Behavior, and Impact. The framework includes:

  • Situation: First, describe the specific context or scenario in which the observation/behavior took place. Provide factual details and avoid vague descriptions.

Example: Last week’s team collaboration on the new feature development.

  • Behavior: Now, articulate specific behavior you observed or experienced during that situation. Focus only on tangible actions or words instead of assumptions or generalizations.

Example: “You did not participate actively in the brainstorming sessions and missed a few important meetings.”

  • Impact: Lastly, explain the impact of the behavior on you or others involved. Share the consequences for the team, the project, and the organization.

Example: “This led to a lack of input from your side, and we missed out on potentially valuable ideas. It also caused some delays as we had to reschedule discussions.”

Final words could be: “Please ensure to attend all relevant meetings and actively participate in discussions. Your contributions are important to the team.”

This allows you to deliver feedback that is clear, actionable, and respectful, and keeps it relevant and directly tied to the situation. Note that this framework works for both positive and negative feedback.

Understand constraints and personal circumstances

It is also important to know whether any constraints are negatively impacting their performance. These could include tight deadlines, a heavy workload hampering their productivity, or health issues that make it hard to focus. Ask about them while you deliver feedback, and create actionable plans accordingly. This shows developers that you care about them and makes the feedback more personalized and relevant. It also allows you to propose tangible improvements rather than adding more pressure.

For example: “During the last sprint, there were a few missed deadlines. Is there something outside of work that might be affecting your ability to meet these deadlines? Please let me know if there’s anything we can do to accommodate your situation.”

Ask them if there’s anything else to discuss and summarize the feedback

Before concluding the meeting, ask them if there’s anything they would like to discuss. They may have missed something, or a topic may not have been brought up during the session.

Afterwards, summarize what has been discussed. Ask the developers what their key takeaways from the session are and share your perspective as well. You can document the summary to help both you and the developers in future feedback meetings. This builds mutual understanding and ensures that both sides are on the same page.

After the feedback session

Write a summary for yourself

Keep a record of what was discussed during the session and the action plans provided to the developers. You can refer to them in future feedback meetings or performance evaluations. An example structure for the summary:

  • Record the date and time of the session.
  • List the main topics and specific behaviors discussed.
  • Include any constraints, personal circumstances, or insights the developer shared.
  • Outline the specific actions, along with any support or resources you committed to providing.
  • Detail the agreed-upon timeline for follow-up meetings or check-ins to monitor progress.
  • Add any personal observations or reflections that might help in future interactions.

Monitor the progress

Ensure you give them measurable goals and timelines during the feedback session. Monitor their progress through check-ins, provide ongoing support and guidance, and keep discussing the challenges or roadblocks they are facing. It helps the developers stay on track and feel supported throughout their journey.

How can Typo help enhance the feedback process?

Typo is an effective software engineering intelligence platform that can help in improving the feedback process within development teams. Here’s how Typo’s features can be leveraged to enhance feedback sessions:

  • By providing visibility into key SDLC metrics, engineering managers can give more precise and data-driven feedback.
  • It also captures qualitative insights and provides a 360-degree view of the developer experience allowing managers to understand the real issues developers face.
  • Comparing the team’s performance across industry benchmarks can help in understanding where the developers stand.
  • Customizable dashboards allow teams to focus on the most relevant metrics, ensuring feedback is aligned with the team’s specific goals and challenges.
  • The sprint analysis feature tracks and analyzes the progress throughout a sprint, making it easier to identify bottlenecks and areas for improvement. This makes the feedback more timely and targeted.

For more information, visit our website!

Conclusion

Software developers deserve high-quality feedback. It not only helps them identify their blind spots but also polishes their skills. The feedback loop lets developers know where they stand and the recognition they deserve.

Building and structuring an effective engineering team

Building a high-performing engineering team is crucial for the success of any company, especially in the dynamic and constantly evolving world of technology. Whether you’re a startup on the rise or an established enterprise looking to maintain your competitive edge, having a well-structured engineering team is essential.

This blog will explore the intricacies of building and structuring engineering teams for scale and success. We’ll cover many topics, including talent acquisition, skill development, team management, and more.

Whether you’re a CTO, a team leader, or an entrepreneur looking to build your own engineering team, this blog will equip you with the knowledge and tools to create a high-performing engineering team that can drive innovation and help you achieve your business goals.

What are the dynamics of engineering teams?

Before we dive into the specifics of team structure, it’s vital to understand the dynamics that shape engineering teams. Various factors, including team size, communication channels, leadership style, and cultural fit, influence these dynamics. Each factor plays a significant role in determining how well a team operates.

Team size

The size of a team can significantly impact its operation. Smaller teams tend to be more agile and flexible, making it easier for them to make quick decisions and respond to project changes. On the other hand, larger teams can provide more resources, skills, and knowledge, but they may struggle with communication and coordination.

Communication channels

Effective communication is essential for any team’s success. In engineering teams, communication channels play a significant role in ensuring team members can collaborate effectively. Different communication channels, such as email, chat, video conferencing, or face-to-face, can impact the team’s effectiveness.

Leadership style

A team leader’s leadership style can significantly impact the team’s effectiveness. Autocratic leaders tend to make decisions without input from team members, while democratic leaders encourage team members to participate in decision-making. Moreover, transformational leaders inspire and motivate team members to achieve their best.

Cultural fit

Cultural fit refers to how well team members align with the team’s values, norms, and beliefs. A team that has members with similar values and beliefs is more likely to work well together and be more productive. In contrast, a team with members with conflicting values and beliefs may struggle to work effectively.

Scaling engineering teams can present challenges, and planning and strategizing thoughtfully is crucial to ensure that the team remains effective. Understanding the dynamics that shape engineering teams can help teams overcome these challenges and work together effectively.

Key roles in engineering teams

An engineering team must be diverse and collaborative. Each team member should specialize in a particular area but also be able to comprehend and collaborate with others in building a product.

A few of them include:

Software development team lead and manager

The software development team lead plays a crucial role in guiding and coordinating the efforts of the software development team. They may lead anywhere from fewer than ten to hundreds of team members.

Software developer

Software developers write the code; their job is purely technical, and they build the product. Most of them are individual contributors, i.e., they have no management or HR responsibilities.

Product managers

Product managers define the product vision, gather and prioritize requirements, and collaborate closely with engineering teams.

Designers

Designers create user-friendly interfaces, develop prototypes to visualize concepts, and iterate on designs based on feedback.

Key principles for building and structuring engineering teams

Once the dynamics of engineering teams are understood, organizations can apply key principles to build and structure teams for scale. From defining goals and establishing role clarity to fostering a culture of collaboration and innovation, these principles serve as a foundation for effective team building.

  • Setting clear goals ensures everyone is aligned and working towards the same vision.
  • Clearly defined roles and responsibilities help prevent confusion and promote accountability within the team.
  • Foster an environment where team members feel empowered to collaborate, share ideas, and innovate.
  • Communication is the backbone of any successful team. Establishing efficient communication channels is vital for sharing information and maintaining transparency.
  • Encourage continuous learning and professional development to keep your team members motivated and up-to-date with the latest technologies and trends.
  • Allow individual team members autonomy while ensuring alignment with the organization’s overall goals and objectives.

Different approaches to structuring engineering teams

There is no one-size-fits-all approach to structuring engineering teams. Different structures may be more suitable depending on the organization’s size, industry, and goals. Organizations can identify the structure that best aligns with their unique needs and objectives by exploring various approaches.

The top two approaches are:

Project-based structure

In a project-based structure, teams are formed around a project for a defined period. It is a traditional approach in which engineers and designers are selected from their respective departments and tasked with project-related work.

It may seem logical, but it poses challenges. Project-based teams can over-prioritize short-term objectives, and collaborating with unfamiliar team members can lead to communication gaps, particularly between developers and other project stakeholders.

Product-based structure

In a product-based structure, teams are aligned around specific products or features to promote ownership and accountability. Since this structure is centered on the product, the work is long-term, and team members are bound to work together more efficiently.

As the product gains traction and attracts users, the team needs to adapt to the changing environment, i.e., restructuring and hiring specialists.

Other approaches include:

  • Functional-based structure: Organizing teams based on specialized functions such as backend, frontend, or QA.
  • Matrix-based structure: Combining functional and product-based structures to leverage expertise and resources efficiently.
  • Hybrid models: Tailoring the team structure to fit your organization’s unique needs and challenges.

Top pain points in building engineering teams

Sharing responsibilities

In engineering organizations, there is a tendency to rely heavily on one person for all responsibilities rather than distributing them among team members. This not only leads to bottlenecks and inefficiencies but also slows down progress and undermines the ability to deliver quality products.

Broken communication

The two most common communication issues while structuring and building engineering teams are alignment and context-switching between engineering teams. These increase miscommunication among team members and lead to duplicated work, neglected responsibilities, and coverage gaps.

Lack of independence

When engineering leaders micromanage developers, it can hinder productivity, innovation, and overall team effectiveness. Hence, having a structure that fosters optimization, ownership, and effectiveness is important for building an effective team.

Best practices for scaling engineering teams

Scaling an engineering team requires careful planning and execution. Here are the best practices to build a team that scales well:

  • Streamline your hiring and onboarding processes to attract top talent and integrate new team members seamlessly.
  • Develop scalable processes and workflows to accommodate growth and maintain efficiency.
  • Foster a diverse and inclusive workplace culture to attract and retain top talent from all backgrounds.
  • Invest in the right tools and technologies to streamline development workflows and enhance collaboration.
  • Continuously evaluate your team structure and processes, making adjustments as necessary to adapt to changing needs and challenges.

Build an engineering team that sets your team up for success!

Building and structuring engineering teams for scale is a multifaceted endeavor that requires careful planning, execution, and adaptation.

But this doesn’t end here! Measuring a team’s performance is equally important to build an effective team. This is where Typo comes in!

It is an intelligent engineering management platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. It gives a comparative view of each team’s performance across velocity, quality, and throughput.


Key features

  • Seamlessly integrates with third-party applications such as Git, Slack, calendars, and CI/CD tools.
  • ‘Sprint analysis’ feature allows for tracking and analyzing the team’s progress throughout a sprint.
  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Offers engineering benchmarks to compare the team’s results across industries.
  • User-friendly interface.

For more information, check out our website!

Iteration burndown chart: Tips for effective use

Agile project management relies on iterative development cycles to deliver value efficiently. Central to this methodology is the iteration burndown chart, a visual representation of work progress over time. In this blog, we’ll explore how to leverage and enhance the iteration burndown chart to optimize Agile project outcomes and team collaboration.

What is an iteration burndown chart?

An iteration burndown chart is a graphical representation of the total work remaining over time in an Agile iteration, helping teams visualize progress toward completing their planned work.


Components

It typically includes an ideal line representing the planned progress, an actual line indicating the real progress, and axes to represent time and work remaining.

Purpose

The chart enables teams to monitor their velocity, identify potential bottlenecks, and make data-driven decisions to ensure successful iteration completion.

Benefits of using iteration burndown charts

Understanding the advantages of iteration burndown charts is key to appreciating their value in Agile project management. From enhanced visibility to improved decision-making, these charts offer numerous benefits that can positively impact project outcomes.

  • Improved visibility: provides stakeholders with a clear view of project progress.
  • Early risk identification: helps identify and address issues early in the iteration.
  • Enhanced communication: facilitates transparent communication within the team and with stakeholders.
  • Data-driven decisions: enables teams to make informed decisions based on real-time progress data.

How to create an effective iteration burndown chart

Crafting an effective iteration burndown chart requires a thorough and step-by-step approach. Here are some detailed guidelines to help you create a well-designed burndown chart that accurately reflects progress and facilitates efficient project management:

  • Set clear goals: Before you start creating your chart, it’s essential to define clear objectives and expectations for the iteration. Be specific about what you want to achieve, what tasks need to be completed, and what resources you’ll need to get there.
  • Break down tasks: Once you’ve established your goals, you’ll need to break down tasks into manageable units to track progress effectively. Divide the work into smaller tasks that can be completed within a reasonable timeframe and assign them to team members accordingly.
  • Accurate estimation: Accurate estimation of effort required for each task is crucial for creating an effective burndown chart. Make sure to involve team members in the estimation process, and use historical data to improve accuracy. This will help you to determine how much work is left to be done and when the iteration will be completed.
  • Choose the right tools: Creating an effective burndown chart requires selecting the appropriate tools for tracking and visualizing data. Typo is a great option for creating and managing burndown charts, as it allows you to customize the chart’s appearance and track progress in real time.
  • Regular updates: Updating the chart regularly is essential for keeping track of progress and making necessary adjustments. Set a regular schedule for updating the chart, and ensure that team members are aware of the latest updates. This will help you to identify potential issues early on and adjust the plan accordingly.

By following these detailed guidelines, you’ll be able to create an accurate and effective iteration burndown chart that can help you and your team monitor your project’s progress and manage it more efficiently.
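
If you want to prototype such a chart outside a dedicated tool, here is a minimal sketch using Python and matplotlib; the story-point figures are made up for illustration:

```python
import matplotlib.pyplot as plt

total_points = 40        # planned story points for the iteration
days = list(range(11))   # a 10-working-day iteration; day 0 is the start

# Ideal line: work burns down linearly from the total to zero.
ideal = [total_points * (1 - d / (len(days) - 1)) for d in days]

# Actual remaining points recorded at the end of each day (illustrative data).
actual = [40, 40, 36, 33, 33, 28, 24, 20, 14, 8, 2]

plt.plot(days, ideal, "--", label="Ideal remaining")
plt.plot(days, actual, marker="o", label="Actual remaining")
plt.xlabel("Iteration day")
plt.ylabel("Story points remaining")
plt.title("Iteration burndown")
plt.legend()
plt.show()
```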

Tips for using iteration burndown charts effectively

While creating a burndown chart is a crucial first step, maximizing its effectiveness requires ongoing attention and refinement. These tips will help you harness the full potential of your iteration burndown chart, empowering your development teams to achieve greater success in Agile projects.

  • Simplicity: keep the chart simple and easy to understand.
  • Consistency: use consistent data and metrics for accurate analysis.
  • Collaboration: encourage team collaboration and transparency in updating the chart.
  • Analytical approach: analyze trends and patterns to identify areas for improvement.
  • Adaptability: adjust the chart based on feedback and lessons learned during the iteration.

Improving your iteration burndown chart

Continuous improvement lies at the heart of Agile methodology, and your iteration burndown chart is no exception. By incorporating feedback, analyzing historical data, and experimenting with different approaches, you can refine your chart to better meet your team’s and stakeholders’ needs.

  • Review historical data: analyze past iterations to identify trends and improve future performance.
  • Incorporate feedback: gather input from team members and stakeholders to refine the chart’s effectiveness.
  • Experiment with formats: try different chart formats and visualizations to find what works best for your team.
  • Additional metrics: integrate additional metrics to provide deeper insights into project progress.

Are iteration burndown charts worth it?

A burndown chart is great for evaluating the ratio of work remaining to the time it takes to complete that work. However, relying solely on a burndown chart is unwise due to certain limitations.

Time-consuming and manual process

Although creating a burndown chart in Excel is easy, entering data manually takes time and effort, and the work becomes repetitive and tiresome after a certain point.

Unable to give insights into the types of issues

The burndown chart helps track the progress of completing tasks or user stories over time within a sprint or iteration. But it doesn’t provide insights into the specific types of issues or tasks being worked on, such as whether the team is shipping new features, paying down technical debt, and so on.

Gives equal weight to all the tasks

A burndown chart doesn’t differentiate between an easy and a difficult task. It considers all of them equal, regardless of the size, complexity, or effort required to complete them. This leads to an inaccurate picture of project progress, which can mask critical issues and hinder project management efforts.

Unable to give complete information on sprint predictability

The burndown chart primarily focuses on tracking remaining work throughout a sprint, but it doesn’t directly indicate the predictability of completing that work within the sprint timeframe. It lacks insight into factors like team velocity fluctuations or scope changes, which are crucial for assessing sprint predictability accurately.

How does Typo improve sprint predictability?

Typo’s sprint analysis is an essential tool for any team using an agile development methodology. It allows agile teams to track and analyze overall progress throughout a sprint timeline and to gain visual insights into how much work has been completed, how much is still in progress, and how much time is left in the sprint. This information helps identify potential problems early and take corrective action.


Our sprint analysis feature uses data from Git and issue management tools to provide insights into how software development teams are working. They can see how long tasks are taking, how often they’re being blocked, and where bottlenecks are occurring.

It is easy to use and can be integrated with existing Git and Jira/Linear/Clickup workflows.

Key features

  • A velocity chart shows how much work has been completed in previous sprints.
  • A sprint backlog that shows all of the work that needs to be completed in the sprint.
  • A list of sprint issues that shows the status of each issue.
  • Time tracking to see how long tasks are taking.
  • Blockage tracking to check how often tasks are being blocked and what the causes of those blocks are.
  • Bottleneck identification to identify areas where work is slowing down.
  • Historical data analysis to compare sprint data over time.

Constantly improve your charts!

The iteration burndown chart is a vital tool in Agile project management. It offers agile and scrum teams a clear, concise way to track progress and make data-driven decisions.

However, one shouldn’t rely solely on burndown charts. Advanced sprint analysis tools such as Typo allow teams to track and gain visual insights into the overall progress of the work.

What are Jira Dashboards and How to Create Them?

Jira is a widely used project management tool that enables teams to work together efficiently and achieve outstanding outcomes. The Jira dashboard is a vital component of this tool, offering teams valuable insights, metrics, and project visibility. In this journey, we will explore the potential of Jira dashboards and learn how to leverage their full capabilities.

What is a Jira Dashboard?

A Jira dashboard serves as the nerve center of project activity, offering a consolidated view of tasks, progress, and key metrics. It gives stakeholders a centralized location to monitor project health, track progress, and make informed decisions.


What are the Components of a Jira Dashboard?

Gadgets

These modular components provide specific information and functionality, such as task lists, burndown charts, and activity streams. Several gadgets are built into Jira, such as the filter results gadget, the issue statistics gadget, and the road map gadget. Additional gadgets, such as the pivot gadget and the gauge gadget, can be downloaded from the Atlassian Marketplace.

Reports

Jira dashboards host various reports, including velocity charts, sprint summaries, and issue statistics, offering valuable insights into team performance and project trends.

Why is it Used?

Jira dashboards are used for several reasons:

  • Visibility: Dashboards offer stakeholders a real-time snapshot of project status and progress, promoting transparency and accountability.
  • Decision Making: By providing access to actionable insights and performance metrics, dashboards enable data-driven decision-making, leading to more informed choices.
  • Collaboration: Dashboards foster collaboration by providing a centralized platform for teams to track tasks, share updates and communicate effectively.
  • Efficiency: Dashboards streamline project management processes and enhance team productivity by consolidating project information and metrics in one location.

The default Jira dashboard

The default dashboard is also known as the system dashboard. It is the screen Jira users see the first time they log in. It includes gadgets from Jira’s pre-installed selection and is limited to only one dashboard page.

Creating your Jira dashboard

Creating custom dashboards requires careful planning and consideration of project objectives and team requirements. Let’s explore the step-by-step process of crafting a bespoke dashboard:

Create a New Dashboard

Log in to your Jira account. Go to the dashboard and click ‘Create Dashboard’.

Define Dashboard Objectives

Start by defining the objectives and goals of your dashboard page. Determine what information is crucial for your team to track and monitor, and tailor your dashboard accordingly.

Select Relevant Gadgets and Reports

Choose gadgets and reports that align with your project’s needs and objectives. When curating your dashboard content, consider factors such as team workflow, project complexity, and stakeholder requirements.

Opt for your Preferred Layout and Configuration

Choose your preferred dashboard layout and configuration to ensure optimal visibility and usability for all stakeholders. Arrange gadgets and reports logically and intuitively to facilitate easy navigation and information access.

Iterative Refinement

Embrace an iterative dashboard refinement approach. Solicit user and stakeholder feedback to improve its effectiveness and usability continuously. Regularly assess and update your dashboard to reflect evolving project needs and priorities.

Share the Dashboard with Team Members

Don’t forget to share the Jira dashboard with the team. This ensures transparency and fosters a collaborative culture. By granting appropriate permissions, they can view and interact with the dashboard and get real-time updates.
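
Dashboards can also be read programmatically over Jira’s REST API, which is handy for auditing what your team has access to. Here is a minimal sketch using Python’s requests library; the site URL and credentials are placeholders, and the exact response shape may vary by Jira version:

```python
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder site
AUTH = ("you@example.com", "your-api-token")    # Jira Cloud: email + API token

# List the dashboards visible to the authenticated user.
resp = requests.get(f"{JIRA_URL}/rest/api/2/dashboard", auth=AUTH)
resp.raise_for_status()

for dashboard in resp.json().get("dashboards", []):
    print(dashboard["id"], dashboard["name"])
```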

Jira Dashboard Examples

Personal Dashboard

A personal dashboard is tailored to individual needs and offers various advantages in streamlining workflow management and improving productivity. It provides a centralized platform for organizing and visualizing a user’s tasks, projects, issues, and more.

Sprint Burndown Dashboard

This dashboard gives real-time updates on whether the team is on pace to meet the sprint goal. It offers a glimpse of how much work is left in the queue and how long your team will take to complete it. Moreover, the sprint burndown dashboard lets you jump on any issue when the remaining workload is burning down more slowly than the delivery date allows.

Workload Dashboard

The workload dashboard, also known as the resource monitoring dashboard, tracks the amount of work assigned to each team member so their workload can be adjusted accordingly. It helps identify workload patterns and plan resource allocation.

Issue Tracking Dashboard

The issue tracking dashboard allows users to quickly identify and prioritize the most important issues. It focuses on providing visibility into the status and progress of issues or tickets within a project.

Maximizing Dashboard Impact

To maximize the impact of your Jira dashboard, consider the following best practices:

Promote Transparency and Collaboration

Share your dashboard with relevant stakeholders to promote transparency and collaboration. Encourage team members to actively engage with the dashboard and provide feedback to drive continuous improvement.

Leverage Automation and Integration

Integrating your Jira dashboard with other tools and systems is the best way to automate data capture and reporting processes. Leverage integration capabilities to streamline workflow management and enhance productivity.

Foster Data-Driven Decision Making

Empower project teams and leaders to make informed decisions by providing access to actionable insights and performance metrics through the dashboard. Encourage data-driven discussions and decision-making to drive project success.

Advanced dashboard customization

Take your Jira dashboard customization to the next level with advanced techniques and strategies:

Dashboard Filters and Contextualization

Implement filters and contextualization techniques to personalize the dashboard experience for individual users or specific project phases. Allow users to tailor the dashboard view based on their preferences and requirements.

Dynamic Dashboard Updates

Utilize dynamic updating capabilities to ensure that your dashboard reflects real-time changes and updates in project data. Implement automated refresh intervals and notifications to keep stakeholders informed and engaged.

Custom Gadgets and Extensions

Explore the possibilities of custom gadgets and extensions to extend the functionality of your Jira dashboard. Develop custom gadgets or integrate third-party extensions to address unique project requirements and enhance user experience.

How is Typo's Sprint Analysis Feature Useful for the Jira Dashboard?

Typo’s sprint analysis feature can be seamlessly integrated with the Jira dashboard. It allows teams to track and analyze their progress throughout a sprint and provides valuable insights into work progress, work breakup, team velocity, developer workload, and issue cycle time.

The benefits of the sprint analysis feature are:

  • It helps spot potential issues early, allowing for corrective action to avoid major problems.
  • Pinpointing inefficiencies, such as excessive time spent on tasks, enables workflow improvements to boost team productivity.
  • Provides real-time progress updates, ensuring deadlines are met by highlighting areas needing adjustments.

The Better Way to Achieve Project Excellence

A well-designed Jira dashboard is a catalyst for project excellence, providing teams with the insights and visibility they need to succeed. By understanding its components, crafting a tailored dashboard, and maximizing its impact, you can unlock Jira dashboards’ full potential and drive your projects toward success.

Furthermore, while Jira dashboards offer extensive functionalities, it’s essential to explore alternative tools that may simplify the process and enhance user experience. Typo is one such tool that streamlines project management by offering intuitive dashboard creation, seamless integration, and a user-friendly interface. With Typo, teams can effortlessly visualize project data, track progress, and collaborate effectively, ultimately leading to improved productivity and project outcomes. Explore Typo today and revolutionize your project management experience.

How to fix scrum anti-patterns?

Scrum has become one of the most popular project management frameworks, but like any methodology, it’s not without its challenges. Scrum anti-patterns are common obstacles that teams may face, leading to decreased productivity, low morale, and project failure. Let’s explore the most prevalent Scrum anti-patterns and look at practical solutions to overcome them.

Lack of clear definition of done

A lack of a clear Definition of Done (DoD) can cause teams to struggle to deliver shippable increments at the end of each sprint. It can be due to a lack of communication and transparency. This ambiguity leads to rework and dissatisfaction among stakeholders.

Fix

Collaboration is key to establishing a robust DoD. Scrum team members should work together to define clear criteria for completing each user story. These criteria should encompass all necessary steps, from development to testing and acceptance. The DoD should be regularly reviewed and refined to adapt to evolving project needs and ensure stakeholder satisfaction.

Overcommitting in sprint planning

One of the most common anti-patterns is overcommitment during sprint planning meetings. It sets unrealistic expectations, leading to compromised quality and missed deadlines.

Fix

Base sprint commitments on past performance and team capacity rather than wishful thinking. Focus on realistic sprint goal setting to ensure the team can deliver commitments consistently. Emphasize the importance of transparency and communication in setting and adjusting sprint goals.

Micromanagement by the scrum master

Micromanagement stifles team autonomy and creativity, leading to disengagement, lack of trust and reduced productivity.

Fix

Scrum Masters should adopt a servant-leadership approach, empowering teams to self-organize and make decisions autonomously. They should foster a culture of trust and collaboration where team members feel comfortable taking ownership of their work. They should provide support and guidance when needed, but avoid dictating tasks or solutions.

Lack of product owner engagement

Disengaged Product Owners fail to provide clear direction and effectively prioritize the product backlog, leading to confusion and inefficiency.

Fix

Encourage regular communication and collaboration between the Product Owner and the development team. Ensure that the Product Owner is actively involved in sprint planning, backlog refinement, and sprint reviews. Establish clear channels for feedback and decision-making to ensure alignment with project goals and stakeholder expectations.

Failure to adapt and improve

Failing to embrace a mindset of continuous improvement and adaptation leads to stagnation and inefficiency.

Fix

Prioritize retrospectives and experimentation to identify areas for improvement. Encourage a culture of learning and innovation where team members feel empowered to suggest and implement changes. Emphasize the importance of feedback loops and iterative development to drive continuous improvement and adaptation.

Scope creep

Allowing the project scope to expand unchecked during the sprint leads to incomplete work and missed deadlines.

Fix

Define a clear product vision and prioritize features based on value and feasibility. Review and refine the product backlog regularly to ensure that it reflects the most valuable and achievable items. Encourage stakeholder collaboration and feedback to validate assumptions and manage expectations.

Lack of cross-functional collaboration

Siloed teams hinder communication and collaboration, leading to bottlenecks and inefficiencies.

Fix

Foster a collaboration and knowledge-sharing culture across teams and disciplines. Encourage cross-functional teams to work together towards common goals. Implement practices such as pair programming, code reviews, and knowledge-sharing sessions to facilitate collaboration and break down silos.

Inadequate Sprint review and retrospective

Rushing through sprint retrospective and review meetings results in missed opportunities for feedback and improvement.

Fix

Allocate sufficient time for thorough discussion and reflection during sprint review and retrospective meetings. Encourage open and honest communication and ensure that all development team members have a chance to share their insights and observations. Based on feedback and retrospective findings, prioritize action items for continuous improvement.

Unrealistic commitments by the product owner

Product Owners making unrealistic commitments disrupt the team’s focus and cause delays.

Fix

Establish a clear process for managing changes to the product backlog. Encourage collaboration between the Product Owner and the development team to negotiate realistic commitments and minimize disruptions during the sprint. Prioritize backlog items based on value and effort to ensure the team consistently delivers on its commitments.

Lack of stakeholder involvement

Limited involvement or feedback from stakeholders leads to misunderstandings and dissatisfaction with the final product.

Fix

Engage stakeholders early and often throughout the project lifecycle. Solicit feedback and involve stakeholders in key decision-making processes. Communicate project progress regularly and solicit input to ensure alignment with stakeholder expectations and requirements.

Ignoring technical debt

Neglecting to address technical debt results in decreased code quality, increased bugs, and slower development velocity over time.

Fix

Allocate time during each sprint for addressing technical debt alongside new feature development. Encourage collaboration between developers and stakeholders to prioritize and tackle technical debt incrementally. Invest in automated testing and refactoring to maintain code quality and reduce technical debt accumulation.

Lack of continuous integration and deployment

Failing to implement continuous integration and deployment practices leads to integration issues, longer release cycles, and reduced agility.

Fix

Establish automated CI/CD pipelines to ensure that code changes are integrated and deployed frequently and reliably. Invest in infrastructure and tools that support automated testing and deployment. Encourage a culture of automation and DevOps practices to streamline the development and delivery process.

Daily scrum meetings are inefficient

The daily scrum is often treated as a daily status meeting, which loses its focus on collaboration and decision-making. Sometimes team members don’t find any value in these meetings, leading to disengagement and decreased motivation.

Fix

In daily scrums, the focus should be on talking to each other about the most important work to get done that day and how to do it. Encourage team members to collaborate to tackle problems and achieve sprint goals. Moreover, keep daily scrums short and timeboxed, typically to 15 minutes.

Navigating scrum challenges with confidence

Successfully implementing Scrum requires more than just following the framework; it demands a keen understanding of potential pitfalls and proactive strategies to overcome them. By addressing common Scrum anti-patterns, teams can cultivate a culture of collaboration, efficiency, and continuous improvement, leading to better project outcomes and stakeholder satisfaction.

However, without the right tools, identifying and addressing these anti-patterns can be daunting. That’s where Typo comes in. Typo is an intuitive project management platform designed to streamline Agile processes, enhance team communication, and mitigate common Scrum challenges.

With Typo, teams can effortlessly manage their Scrum projects, identify and address anti-patterns in real-time, and achieve greater success in their Agile endeavors.

So why wait? Try Typo today and elevate your Scrum experience to new heights!

How to Improve Your Jira Ticket Management?

Jira software has become the backbone of project management for many teams across various industries. Its flexibility and powerful features make it an invaluable tool for organizing tasks, tracking progress, and collaborating effectively. However, maximizing its potential requires more than just basic knowledge. To truly excel in Jira ticket management, you must implement strategies and best practices that streamline your workflows and enhance productivity.

What is Jira Ticket Management?

Jira is a popular project management tool developed by Atlassian, commonly used for issue tracking, bug tracking, and project management. Jira ticket management refers to the process of creating, updating, assigning, prioritizing, and tracking issues within Jira.


Key Challenges in Jira Ticketing System

Requires Significant Manual Work

One of the major challenges with the Jira ticketing platform is that it requires a lot of tedious, manual work. This leads to developer frustration, incomplete ticket updates, and undocumented work.

Complexity of Configuration

Setting up Jira software to align with the specific needs of a team or project can be complicated. Configuring workflows, custom fields, and permissions requires careful planning and may involve a learning curve for administrators.

Lacks Data Hygiene

Because of the points above, software development work can become untracked and invisible. The team then lacks data hygiene, which leads top management to make decisions with incomplete information and further hurts planning accuracy.

How to Manage JIRA Tickets Better?

Below are some essential tips to help you manage your Jira tickets better:

JIRA Automations

Developers often find it labor-intensive to keep tickets updated. Hence, Jira provides automations that ease the work of developers. Although these automations are a bit complex initially, once mastered they offer significant efficiency gains, and they can be customized as well.

Here are a few Jira automations to take note of:

Smart Auto-Assign

This is one of the most commonly used automations: it ensures accountability for an issue by automatically assigning it to its creator. There is always a designated individual responsible for addressing the matter, which streamlines workflow management and accountability within the team.

Auto-Create Sub-Tasks

This automation can be customized to suit various scenarios, such as applying it to epics and stories or refining it with specific conditions tailored to your workflow. For example, when a bug issue is reported, you can set up automation to automatically create tasks aimed at resolving the problem. It not only streamlines the process but also ensures that necessary tasks are promptly initiated, enhancing overall efficiency in issue management.
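
Jira automation rules themselves are configured in the UI, but the same pattern can be sketched against the REST API. In this illustrative snippet, the site URL, credentials, parent issue key, and sub-task type name are all assumptions:

```python
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder site
AUTH = ("you@example.com", "your-api-token")    # Jira Cloud: email + API token

def create_subtask(parent_key, summary):
    """Create a sub-task under a parent issue (e.g. a newly reported bug)."""
    fields = {
        "project": {"key": parent_key.split("-")[0]},
        "parent": {"key": parent_key},
        "summary": summary,
        "issuetype": {"name": "Sub-task"},  # assumed default sub-task type name
    }
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue", json={"fields": fields}, auth=AUTH
    )
    resp.raise_for_status()
    return resp.json()["key"]

# e.g. when bug PROJ-42 is reported, spin up the standard resolution steps:
for step in ["Reproduce the bug", "Write a failing test", "Fix and verify"]:
    print(create_subtask("PROJ-42", step))
```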

Clone Issues

Implementing this advanced automation involves creating a duplicate of an issue in a different project when it undergoes a specific transition. It also leaves a comment on the original issue to establish a connection between them. It becomes particularly valuable in scenarios where one project is dedicated to managing customer requests, while another project is focused on executing the actual work.

Change Due Date

This automation automatically computes and assigns a due date to an issue when it’s moved from the backlog to the ‘In Progress’ status. This streamlines the process of managing task timelines, ensuring that deadlines are promptly established as tasks transition into active development stages.
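
As with the sub-task example above, this rule would normally live in Jira’s automation UI, but the idea can be sketched as a small webhook handler. The URL, credentials, and five-day window below are assumptions for illustration:

```python
from datetime import date, timedelta

import requests
from flask import Flask, request

app = Flask(__name__)
JIRA_URL = "https://your-domain.atlassian.net"  # placeholder site
AUTH = ("you@example.com", "your-api-token")    # Jira Cloud: email + API token

@app.route("/jira-webhook", methods=["POST"])
def on_issue_updated():
    payload = request.get_json()
    # Look for a status transition to "In Progress" in the webhook changelog.
    items = payload.get("changelog", {}).get("items", [])
    if any(i.get("field") == "status" and i.get("toString") == "In Progress"
           for i in items):
        key = payload["issue"]["key"]
        due = (date.today() + timedelta(days=5)).isoformat()  # assumed window
        requests.put(
            f"{JIRA_URL}/rest/api/2/issue/{key}",
            json={"fields": {"duedate": due}},
            auth=AUTH,
        ).raise_for_status()
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)
```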

Standardize Ticket Creation

Establishing clear guidelines for creating tickets ensures consistency across your projects. Include essential details such as a descriptive title, priority level, assignee, and due date. This ensures that everyone understands what needs to be done at a glance, reducing confusion and streamlining the workflow.

Moreover, standardizing ticket creation practices fosters alignment within your team and improves communication. When everyone follows the same format for ticket creation, it becomes easier to track progress, assign tasks, and prioritize work effectively. Consistency also enhances transparency, as stakeholders can quickly grasp the status of each ticket without needing to decipher varying formats.

Customize Workflows

Tailoring Jira workflows to match your team’s specific processes and requirements is essential for efficient ticket management. Whether you follow Agile, Scrum, Kanban, or a hybrid methodology, configure workflows that accurately reflect your workflow stages and transitions. This customization ensures your team can work seamlessly within Jira, optimizing productivity and collaboration.

Customizing workflows allows you to streamline your team’s unique processes and adapt to changing project needs. For example, you can define distinct stages for task assignment, development, testing, and deployment that reflect your team’s workflow. Custom workflows empower teams to work more efficiently by clarifying task progression and facilitating smoother handoffs between team members.

Prioritize Effectively

Not all tasks are created equal in Jira. Use priority fields to categorize tickets based on urgency and importance. This strategic prioritization helps your team focus on high-priority items and prevents critical tasks from slipping through the cracks. By prioritizing effectively, you can ensure that important deadlines are met and resources are allocated efficiently.

Effective prioritization involves considering various factors, such as project deadlines, stakeholder requirements, and resource availability. By assessing the impact and urgency of each task, teams can more effectively allocate their time and resources. Regularly reviewing and updating priorities ensures your team remains agile and responsive to changing project needs.

Utilize Labels and Tags

Leverage tags or custom fields to add context to your tickets. Whether it’s categorizing tasks by feature, department, or milestone, these metadata elements make it easier to filter and search for relevant tickets. By utilizing labels and tags effectively, you can improve organization and streamline ticket management within Jira.

Furthermore, consistent labeling conventions enhance collaboration and communication across teams. When everyone adopts a standardized approach to labeling tickets, it becomes simpler to locate specific tasks and understand their context. Moreover, labels and tags can provide valuable insights for reporting and analytics, enabling teams to track progress and identify trends over time.
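
As an illustration, labeled tickets can be pulled back with a JQL filter over the REST API; in this sketch the label name, site URL, and credentials are placeholders:

```python
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder site
AUTH = ("you@example.com", "your-api-token")    # Jira Cloud: email + API token

# Find open issues carrying a hypothetical "checkout-feature" label.
params = {
    "jql": 'labels = "checkout-feature" AND statusCategory != Done',
    "fields": "summary,priority",
}
resp = requests.get(f"{JIRA_URL}/rest/api/2/search", params=params, auth=AUTH)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])
```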

Encourage Clear Communication

Effective communication is the cornerstone of successful project management. Encourage team members to provide detailed updates, ask questions, and collaborate openly within Jira ticket comments. This transparent communication ensures that everyone stays informed and aligned, fostering a collaborative environment conducive to productivity and success.

Clear communication within Jira ticket comments keeps team members informed and facilitates knowledge sharing and problem-solving. Encouraging open dialogue enables team members to provide feedback, offer assistance, and address potential roadblocks promptly. Additionally, documenting discussions within ticket comments provides valuable context for future reference, aiding in project continuity and decision-making.

Automate Repetitive Tasks

Identify repetitive tasks or processes and automate them using Jira’s built-in automation features or third-party integrations. This not only saves time but also reduces the likelihood of human error. By automating repetitive tasks, you can free up valuable resources and focus on more strategic initiatives, improving overall efficiency and productivity.

Moreover, automation can standardize workflows and enforce best practices, ensuring project consistency. By defining automated rules and triggers, teams can streamline repetitive processes such as task assignments, status updates, and notifications. This minimizes manual intervention and enables team members to devote their time and energy to tasks that require human judgment and creativity.

Regularly Review and Refine

Continuously reviewing your Jira setup and workflows is essential to identify areas for improvement. Solicit feedback from team members and stakeholders to understand pain points and make necessary adjustments. By regularly reviewing and refining your Jira configuration, you can optimize processes and adapt to evolving project requirements effectively.

Moreover, regular reviews foster a culture of continuous improvement within your team. By actively seeking feedback and incorporating suggestions for enhancement, you demonstrate a commitment to excellence and encourage team members to engage. Additionally, periodic reviews help identify bottlenecks and inefficiencies, allowing teams to address them proactively and maintain high productivity levels.

Integrate with Other Tools

Jira seamlessly integrates with a wide range of third-party tools and services, enhancing its capabilities and extending its functionality. Integrating with other tools can streamline your development process and enhance collaboration, whether it’s version control systems, CI/CD pipelines, or communication platforms. Incorporating workflow automation tools into the mix further enhances efficiency by automating repetitive tasks and reducing manual intervention, ultimately accelerating project delivery and reducing errors.

Furthermore, integrating Jira with other tools promotes cross-functional collaboration and data sharing. By connecting disparate systems and centralizing information within Jira, teams can eliminate silos and improve visibility into project progress. Additionally, integrating with complementary tools allows teams to leverage existing investments and build upon established workflows, maximizing efficiency and effectiveness.

Foster a Culture of Continuous Improvement

Encourage a mindset of continuous improvement within your software teams. Encourage feedback, experimentation, and learning from both successes and failures. By embracing a culture of constant improvement, you can adapt to changing requirements and drive greater efficiency in your Jira ticket management process while also building a robust knowledge base of best practices and lessons learned.

Moreover, fostering a culture of continuous improvement empowers team members to take ownership of their work and seek opportunities for growth and innovation. By encouraging experimentation and learning from failures, teams can cultivate resilience and agility, enabling them to thrive in dynamic environments. Additionally, celebrating successes and acknowledging contributions fosters morale and motivation, creating a positive and supportive work culture.

How Can These Strategies Help in Better Planning?

Better Jira ticket management improves planning accuracy. Below are a few ways these strategies help:

  • Automating these tasks reduces the likelihood of human error and ensures that essential tasks are promptly initiated and tracked, leading to better planning accuracy.
  • Establishing clear guidelines for creating tickets reduces confusion and ensures that all necessary details are included from the start, facilitating more accurate planning and resource allocation.
  • Clear communication within JIRA comments ensures that everyone understands project requirements and updates, reducing misunderstandings and enhancing planning accuracy by facilitating effective coordination and decision-making.
  • Connecting disparate systems and centralizing information improves visibility into project progress and facilitates data sharing, which improves planning by providing a comprehensive view of project status and dependencies.
  • When you consistently follow through on your commitments, you build trust not just within your own team but across the entire company. This allows other teams to confidently align their timelines with development timelines, leading to a tightly aligned, high-velocity organization.

Plan your Way into a Good Jira Ticket System!

Improving your Jira ticket management, essential for effective task management, requires thoughtful planning, ongoing refinement, and a commitment to best practices. Implementing these tips and fostering a culture of continuous improvement can optimize your workflows, enhance collaboration, and drive greater project success, benefiting both internal teams and external customers.

If you need further help in optimizing your engineering processes, Typo is here to help you.

Curious to know more? Learn about Typo here!

How to Create a Burndown Chart in Excel?

In Agile project management, it is crucial to get a clear picture of the project’s reality, and one of the best ways to do that is to visualize progress.

A Burndown chart is a project management chart that shows the remaining work needed to reach project completion over time.

Let’s understand how you can create a burndown chart in Excel:

What is a Burndown Chart?

A Burndown chart visually represents a team’s or project’s progress over time. It shows the team’s pace, reflects progress, and indicates whether they are on track to finish on time.

Burndown charts are generally of three types:

Product Burndown Chart

The product burndown chart focuses on the big picture and visualizes the entire project. It determines how many product goals the development team has achieved so far and the remaining work.

Sprint Burndown Chart

A sprint burndown chart focuses on the current sprint and indicates progress toward completing the sprint backlog.

Epic Burndown Chart

This chart focuses on how your team is performing against the work in the epic over time. It helps to track the advancement of major deliverables within a project.

Components of Burndown Chart

Axes

A burndown chart has two axes: X and Y. The horizontal axis represents the time or iteration and the vertical axis displays user story points.

Ideal Work Remaining

It is the diagonal line sloping downwards that represents the remaining work a team has at a specific point of the project or sprint under ideal conditions.

Actual Work Remaining

It is a realistic depiction of the team’s performance that is updated in real-time. It is drawn as the teams progress and complete user stories.  

Story Points

Each point on the work lines displays a measurement of work remaining at a given time.

Project/Sprint End

It is the rightmost point of your burndown chart that represents whether the team has completed a project/sprint on time, behind, or ahead of schedule.

Benefits of Burndown Chart

Visual Representation of Work

A Burndown chart helps in keeping an eye on the team’s work progress visually. It is not only simple to use but also motivates the team to perform well.

Shows a Direct Comparison

A burndown chart is useful to show the direct comparison between planned work and actual progress over time. This helps in quickly assessing whether the team is on track to meet its goals.

Better Team Productivity

A burndown chart also acts as a motivational tool. By transparently showing progress and work efficiency, it improves collaboration and cooperation between team members.

Quickly Identifies or Spots Blockers

A burndown chart must be updated daily. This helps in tracking progress in real time, identifying problems early, and completing the project on time.

How to Create a Burndown Chart in Excel?

Step 1: Create Your Table

Open a new sheet in Excel and create a new table that includes 3 columns.

The first column should include the dates of each sprint day, the second column should hold the ideal burndown (the ideal rate at which work will be completed), and the last column should hold the actual burndown (updated as story points get completed).

Step 2: Add Data in these Columns

Now, fill in the data accordingly. This includes the dates of your sprint and numbers in the Ideal Burndown column indicating the desired number of tasks remaining after each day throughout a, say, 10-day sprint.

As you complete tasks each day, update the ‘Actual Burndown’ column with the number of tasks remaining.

Step 3: Create a Burndown Chart

Now, it’s time to convert the data into a graph. To create a chart, follow these steps: Select the three columns > Click ‘Insert’ on the menu bar > Select the ‘Line chart’ icon, and generate a line graph to visualize the different data points you have in your chart.
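
If you would rather script the chart than build it by hand, here is a minimal sketch of the same idea in Python using matplotlib; the 10-day sprint and story-point numbers are made up for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical 10-day sprint starting with 40 story points.
days = list(range(11))                                # Day 0 through Day 10
ideal = [40 - 4 * d for d in days]                    # Ideal: burn 4 points/day
actual = [40, 38, 36, 35, 30, 28, 25, 20, 14, 8, 0]   # Points left each day

plt.plot(days, ideal, linestyle="--", label="Ideal Burndown")
plt.plot(days, actual, marker="o", label="Actual Burndown")
plt.xlabel("Sprint Day")
plt.ylabel("Story Points Remaining")
plt.title("Sprint Burndown Chart")
plt.legend()
plt.show()
```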

How to Use a Burndown Chart in the Best Possible Way?

Determine the Project Scope

Study the project scope and divide the project or sprint into short-term tasks. Be sure to review them and estimate the time required to complete each task based on the project deadline.

Check the Chart Often

The Scrum master must check the chart often and update it daily. This helps spot flagging trends, catch pitfalls early, and ensure progress aligns with expectations.

Pay Attention to the Outcome

Don’t lose sight of the outcome. By focusing on it, software development teams can ensure they are making progress toward their goals and adjust their efforts accordingly to stay on track for successful project completion.

Exclude Weekends

Teams pause work during weekends and holidays. Excluding weekends keeps the chart accurate by focusing solely on the days when active work is being done, giving a clearer representation of progress and highlighting the team’s actual productivity on working days.

Encourage Team Ownership

A burndown chart that is accessible to the entire team fosters collaboration and accountability. It gives members a sense of ownership, prompting them to discuss challenges and celebrate achievements together.

Limitations of a Burndown Chart

A burndown chart is great for evaluating the ratio of work remaining to the time it takes to complete that work. However, relying solely on a burndown chart is unwise due to certain limitations.

A Time-Consuming and Manual Process

Although creating a burndown chart in Excel is easy, entering data manually requires more time and effort. This makes the work repetitive and tiresome after a certain point.

There are various tools available in the market that offer collaboration and automation features including Jira, Trello, and Asana.

It Doesn’t Give Insights into the Types of Issues

The Burndown chart helps in tracking the progress of completing tasks or user stories over time within a sprint or iteration. But it doesn’t provide insights into the specific types of issues or tasks being worked on, such as shipping new features or paying down technical debt.

It Gives Equal Weight to all the Tasks

A burndown chart doesn’t differentiate between an easy and a difficult task. It treats them all as equal, regardless of the size, complexity, or effort required to complete them. This can paint a misleading picture of project progress, potentially masking critical issues and hindering project management efforts.

As a result, the burndown chart is not a metric engineering leaders can rely on alone. It is always better to complement it with sprint analysis tools that provide additional insights tailored to agile project management. A few of the reasons are stated below:

  • Sprint analysis software can offer a wider range of metrics such as velocity, cycle time, throughput, and cumulative flow diagrams to provide a more comprehensive understanding of team performance and process efficiency.
  • These tools typically offer customization options to tailor metrics and reports according to the team’s specific needs and preferences.
  • They are designed with Agile principles in mind which incorporate concepts such as iterative improvement, feedback loops, and continuous delivery.

Typo - An Effective Sprint Analysis Tool

Typo’s sprint analysis feature allows engineering leaders to track and analyze their team’s progress throughout a sprint. It uses data from Git and the issue management tool to provide insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint, helping identify potential problems early and take corrective action.

Key Features:

  • A velocity chart shows how much work has been completed in previous sprints.
  • A sprint backlog that shows all of the work that needs to be completed in the sprint.
  • A list of sprint issues that shows the status of each issue.
  • Time tracking to see how long tasks are taking.
  • Blockage tracking to check how often tasks are being blocked, and what the causes of those blocks are.
  • Bottleneck identification to identify areas where work is slowing down.
  • Historical data analysis to compare sprint data over time.

How to Write Clean Code?

Martin Fowler once said, “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

Clean code is an essential component of software development.

Writing code is a lot like giving a sales pitch: when you use words full of technical jargon, you end up losing your target audience. The same is true of code. Writing clean code enhances the readability, maintainability, and understandability of the software.

What is Clean Code?

Robert C. Martin, in his book “Clean Code: A Handbook of Agile Software Craftsmanship”, defined clean code as:

“A code that has been taken care of. Someone has taken the time to keep it simple and orderly. They have laid appropriate attention to details. They have cared.”

Clean code is clear, understandable, and maintainable. It is well-organized, properly documented, and follows standard conventions. The purpose behind clean code is to create software that is not just functional but readable and efficient throughout its lifecycle. After all, the audience isn’t just a computer but real, live people.

Why is Clean Code Important?

Clean code is the foundation of sustainable software development. Below are a few reasons why clean code is important:

Reduce Technical Debt

Technical debt can slow down the development process in the long run. Clean code ensures that future modifications will be a smoother and less costly process.

Increase Code Readability and Maintainability

Clean code means that the developers are prioritizing clarity. When it is easier to read, understand, and modify code, it leads to faster software development.

Enhance Collaboration

Good code means that the code is accessible to all team members and follows coding standards. This improves communication and collaboration among them.

Debugging and Issue Resolution

Clean code is designed with clarity and simplicity, making it easier to locate and understand specific sections of the codebase. This, in turn, helps identify and resolve issues early.

Ease of Testing

Clean code facilitates unit testing, integration testing, and other forms of automated testing, leading to increased reliability and maintainability of the software.

Clean Code Principles and Best Practices

Below are some established clean code principles that most developers find useful.

KISS Rule

Apply the KISS (Keep it simple, stupid) rule. It is one of the oldest principles of clean code. It means you shouldn’t make the code unnecessarily complex: keep it as simple as possible, so that it takes less time to write, has less chance of bugs, and is easier to understand and modify.

Curly’s Law

This law states that the entity (class, function, or variable) must have a single, defined goal. It should only do one thing in one circumstance.

DRY Rule

DRY (Don’t repeat yourself) is closely related to the KISS rule and Curly’s law. It states that you should avoid unnecessary repetition or duplication of code. Ignoring it makes the code prone to bugs and makes changes difficult, as shown in the sketch below.
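
Here is a minimal Python sketch of the rule; the discount logic is a made-up example. The first two functions duplicate the same calculation, so a change to the rule has to be made twice, while the third expresses it once.

```python
# Violates DRY: the same discount rule is written twice.
def price_for_member(base: float) -> float:
    return round(base - base * 0.10, 2)

def price_for_student(base: float) -> float:
    return round(base - base * 0.10, 2)

# DRY version: the rule lives in one place and is easy to change.
def discounted_price(base: float, rate: float = 0.10) -> float:
    """Apply a discount rate to a base price."""
    return round(base * (1 - rate), 2)
```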

YAGNI Rule

YAGNI (You aren’t gonna need it) rule is an extreme programming practice that states that the developers shouldn’t add functionality unless deemed necessary. It should be used in conjunction with continuous refactoring, unit testing, and integration.

Fail Fast

It means that the code should fail as early as possible, so that issues can be identified and resolved quickly, limiting the number of bugs that make it into production.
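
A short Python sketch of the idea, with purely illustrative transfer rules: invalid input is rejected immediately, instead of letting a bad value surface later in an unrelated part of the system.

```python
def transfer(amount: float, balance: float) -> float:
    """Return the new balance after withdrawing `amount`."""
    # Fail fast: validate inputs up front and raise immediately.
    if amount <= 0:
        raise ValueError(f"amount must be positive, got {amount}")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount
```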

Boy Scout Rule

This rule by Uncle Bob states that always leave the code cleaner than you found it. It means that software developers must incrementally improve parts of the codebase they interact with, no matter how minute the enhancement might be.

SOLID Principles

Apply the SOLID principles. This refers to:

S: The Single Responsibility Principle means that a class must have only a single responsibility.

O: The Open-Closed Principle states that a piece of software should be open for extension but closed for modification.

L: The Liskov Substitution Principle means that subclasses should be able to substitute for their base class without producing incorrect results.

I: The Interface Segregation Principle states that interfaces should be specific to clients instead of being generic for all clients.

D: The Dependency Inversion Principle means that classes should depend on abstractions (interfaces) rather than concrete implementations.
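
To make two of these concrete, here is a minimal Python sketch of the Dependency Inversion and Open-Closed principles; the Notifier and OrderService classes are invented for illustration.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Abstraction that high-level code depends on (Dependency Inversion)."""
    @abstractmethod
    def send(self, message: str) -> None: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"Emailing: {message}")

class SlackNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"Posting to Slack: {message}")

class OrderService:
    # Depends on the Notifier abstraction, not a concrete class, so new
    # channels can be added without modifying this class (Open-Closed).
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier

    def place_order(self, item: str) -> None:
        self.notifier.send(f"Order placed: {item}")

OrderService(SlackNotifier()).place_order("keyboard")
```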

A few of the best practices include:

Use Descriptive and Meaningful Names

Choose descriptive and clear names for variables, functions, classes, and other identifiers. They should be easy to remember, fit their context, and convey purpose and behavior so that the code is understandable.
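
For example, here is a made-up function shown badly and then well; the second version needs no comment because the names carry the units and the intent.

```python
# Hard to follow: the names reveal nothing about intent.
def calc(d, r):
    return d / r

# Clear: the names convey purpose, behavior, and units.
def average_speed_kmh(distance_km: float, duration_hours: float) -> float:
    return distance_km / duration_hours
```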

Follow Established Code-Writing Standards

Most programming languages have community-accepted coding standards and style guides, such as the Google Java Style Guide for Java and PEP 8 for Python. Organizations should also have internal coding rules and standards that provide guidelines for consistent formatting, naming conventions, and overall code organization.

Avoid Writing Unnecessary Comments

Comments help explain the code. However, the codebase changes continuously, so comments can quickly become outdated or obsolete, creating confusion and distraction among software developers. Make sure to keep comments updated, and avoid poorly written or redundant comments, as they increase the cognitive load of software engineering teams.

Avoid Magic Numbers

Magic numbers are hard-coded numbers in code. They are considered bad practice since they cause ambiguity and confusion among developers. Instead of using them directly, create symbolic constants for hard-coded values. This makes it easy to change the value later and improves the readability and maintainability of the code.
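
A quick Python illustration, using a hypothetical expiry rule: the named constant explains itself and can be changed in one place.

```python
# Magic number: a reader has to guess what 604800 means.
def is_expired(age_in_seconds: int) -> bool:
    return age_in_seconds > 604800

# Symbolic constant: intent is explicit and easy to change later.
SECONDS_PER_WEEK = 7 * 24 * 60 * 60

def is_expired_clean(age_in_seconds: int) -> bool:
    return age_in_seconds > SECONDS_PER_WEEK
```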

Refactor Continuously

Ensure that you refactor regularly to enhance the structure and readability of the code. Refactoring also improves flexibility and cleans up code that is overly complex, poorly structured, or duplicated.

You can apply refactoring techniques such as extracting methods, renaming variables, and consolidating duplicate code to keep the codebase cleaner.
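
Here is a small before-and-after sketch of the extract-method technique in Python, using a made-up reporting function: each extracted piece gets one job and a reusable name.

```python
# Before: one function mixes validation, math, and formatting.
def report(scores):
    if not scores:
        raise ValueError("no scores")
    avg = sum(scores) / len(scores)
    return f"average: {avg:.1f}"

# After extracting methods: each step is named and reusable.
def validate(scores: list[float]) -> None:
    if not scores:
        raise ValueError("no scores")

def average(scores: list[float]) -> float:
    return sum(scores) / len(scores)

def report_refactored(scores: list[float]) -> str:
    validate(scores)
    return f"average: {average(scores):.1f}"
```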

Version Control

Version control systems such as Git, SVN, and Mercurial help track changes to your code and roll back to previous versions if necessary. Before refactoring, ensure that the code is under version control so you can safely experiment with changes. Moreover, version control helps you understand the evolution of the project and maintains the integrity of the codebase by enforcing a structured workflow.

Testing

Software developers can write unit tests to verify the code’s correctness, as well-tested code is reliable and easier to refactor. Test-driven development also encourages cleaner code because it considers edge cases and provides immediate feedback on code changes.
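
For instance, here is a minimal test written with Python’s built-in unittest module; the slugify function is a made-up example, and the empty-string test shows how edge cases get pinned down early.

```python
import unittest

def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Clean Code Rocks"), "clean-code-rocks")

    def test_empty_title(self):
        # Edge case: an empty title should produce an empty slug.
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```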

Code Reviews

Continuous code review helps ensure code quality by identifying potential issues, catching bugs, and enforcing coding standards. It also facilitates collaboration between software developers, letting them learn from each other’s strengths and review mistakes together.

Typo - An Automated Code Review Tool

Typo’s automated code review tool not only enables developers to catch issues related to code maintainability, readability, and potential bugs, but can also detect code smells. It identifies issues in the code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps the code error-free, making the whole process faster and smoother.

Key features:

  • Supports top 10+ languages including JS, Python, Ruby
  • Understands the context of the code and fixes issues accurately
  • Optimizes code efficiently
  • Standardizes code and reduces the risk of a security breach
  • Provides automated debugging with detailed explanations

Conclusion

Writing clean code isn’t just a crucial skill for developers. It is an important way to sustain software development projects.

By following the above-mentioned principles and best practices, you can develop a habit of writing clean code. It will take time but it will be worth it in the end.

Hope this was helpful. All the best!

How to identify and remove dead code?

Dead code is one of the most overlooked aspects of software development projects. It tends to accumulate as projects evolve, and a large amount of dead code can be harmful to software.

The best way to prevent this is to detect dead code in the early stages and maintain the quality of the software application.

Let’s talk more about dead code below:

What is Dead Code?

Dead code refers to segments of code that are unnecessary to the software program: they are executed, but their results are never used or accessed.

Dead code is also known as zombie code. Such portions of code may have been part of earlier versions, experimental features, or functions that are no longer needed. If dead code remains in the software, it can decrease the software’s efficiency and add unnecessary complexity, which makes the code harder to understand and maintain.

Common Types of Dead Code

Unreachable Code

The segment of code that is never executed under any condition during program runtime. It could be due to conditional statements, loops, or other control flow structures. Besides this, the issue may even arise during development because of coding errors, incorrect logic, or unintended consequences of code refactoring.
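
A small Python illustration with invented shipping rules; both commented statements can never execute, one because it follows a raise and one because every path has already returned.

```python
def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
        print("invalid weight")   # unreachable: nothing runs after raise
    if weight_kg < 5:
        return 10.0
    return 25.0
    log_quote = True              # unreachable: the function has returned
```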

Obsolete Code

The portion of code that was once useful but no longer is. It has become outdated or irrelevant due to changes in software requirements or function, technology, or best practices. Obsolete code may still be present in the codebase but is no longer recommended for use.

Orphaned Code

Code that was once part of a functional feature or system but is now left behind or isolated. This can result from changes in project requirements, refactoring, feature removal, or other modifications in the development process. Like obsolete code, it may still be present but is no longer integrated into, or contributing to, the application’s functionality.

Commented out Code

Sometimes, developers ‘comment out’ code rather than deleting it, intending to use it in the future. However, when they forget about it, it becomes dead code. While commenting out code is a common practice, developers must keep track of it; otherwise it reduces code readability and maintainability.

Why Remove Dead Code?

Dead code is a major contributor to technical debt. While a small amount of technical debt is fine, if it grows, it can negatively affect the team’s progress. It can also increase time to market and reduce customer satisfaction for end users.

Hence, it is important to monitor technical debt through engineering metrics to take note of dead code as well.

Besides this, there are other reasons why removing dead code is crucial:

Improves Maintainability

When dead code is present, it complicates the understanding and maintenance of software systems. It can lead to confusion and misunderstandings, which increase the cognitive load of the engineering team.

Eliminating dead code lets the team focus on relevant code, which increases code readability and facilitates feature updates and bug fixes.

Reduces Security Risks

Dead code can be a hidden backdoor entry point into the system, which is a threat to the security of the software. Moreover, dead code can carry dependencies that are no longer needed.

Removing dead code simplifies code complexities, and improves code review and analysis processes. This further helps to address and reduce security vulnerabilities easily.

Decreases Code and Cognitive Complexity

Dead code disrupts the understanding of the codebase structure. It not only slows down the development process but also reduces developers’ productivity and effectiveness.

Eliminating dead code reduces the overall size of the code, making it more concise and easier to manage, which enhances developers’ performance.

Avoids Code Duplication

Duplicate code is a considerable strain on the software development process, and when dead code is present, it distracts developers from identifying and addressing the areas where duplication occurs.

Hence, eliminating dead code avoids code duplication and improves the codebase’s quality.

Streamlines Development

When dead code is absent from the software, developers can focus on the relevant, active parts of the codebase. It also streamlines the process: with no unnecessary distractions, issues are easier to identify and address.

How to Identify and Remove Dead Code?

Static Analysis Tools

Dead code can often be removed with static code analysis tools. Automated tools such as code quality checkers can detect unused variables, classes, imports, or modules. This allows developers to address and eliminate dead code easily, reducing development cost and improving the overall quality of the system.

However, the drawback is that when a program’s behavior is uncertain, static analysis may fail to flag dead code, so static code analysis tools are not a complete solution.
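
For example, a Python linter such as flake8 flags exactly these patterns; the rule codes in the comments below (F401 for an unused import, F841 for an unused local variable) are flake8’s, and the snippet itself is a toy example.

```python
import json  # dead: imported but never used (flake8 would report F401)

def total(prices):
    tax_rate = 0.2  # dead: assigned but never used (flake8 would report F841)
    return sum(prices)
```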

Dynamic Analysis Tools

Dynamic code analysis tools involve running the program to see which lines are executed and which code paths are never reached. Code that is never executed or used, i.e. dead code, can then be eliminated.

However, most of these tools are specific to programming languages.
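
As one example, Python’s coverage.py package records which lines actually run; here is a minimal sketch in which myapp is a hypothetical module standing in for your own code.

```python
# Requires the third-party coverage.py package: pip install coverage
import coverage

cov = coverage.Coverage()
cov.start()

import myapp      # hypothetical module under measurement
myapp.main()      # exercise the real code paths

cov.stop()
cov.save()
cov.report(show_missing=True)  # never-executed lines appear as "missing"
```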

Version Control History

Leverage version control systems such as Git to identify code that was once active but is now deprecated or replaced. Commits that removed or modified code can point to areas where dead code may be found.

In case of a mistake, the code can be retrieved from the version control system, making removal less risky and easier to manage.

Refactoring

Through refactoring, developers carefully examine the codebase to identify sections that include unused or old code and unnecessary variables, functions, or classes, revealing dead code that can be safely removed. Moreover, refactoring aims to optimize code for performance, maintainability, and readability, allowing developers to replace or redesign inefficient or unnecessary segments.

Code Reviews

Code review is an effective method to maintain the quality of code, promoting simplicity and clarity in the codebase. Reviews can help detect dead code by applying best practices, standards, and conventions. However, when not automated, they can be time-consuming and hard to implement, so it is recommended to use automated code review tools to speed up the process.

Typo - Automated Code Review Tool

Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.

Key features:

  • Supports top 8 languages including C++ and C#
  • Understands the context of the code and fixes issues accurately
  • Optimizes code efficiently
  • Provides automated debugging with detailed explanations
  • Standardizes code and reduces the risk of a security breach

Conclusion

In software engineering, detecting and removing dead code is imperative for streamlining the development process. You can choose the method or combination of methods to remove dead code that best aligns with your project’s needs, resources, and constraints.

All the best!

Product Updates

Why do Companies Choose Typo?

Dev teams hold great importance in the engineering organization. They are essential for building high-quality software products, fostering innovation, and driving the success of technology companies in today’s competitive market.

However, engineering leaders need to understand the bottlenecks holding their teams back, since these blind spots can directly affect projects. This is where software development analytics tools come to the rescue, and such tools stand out when they offer the features and integrations engineering leaders are looking for.

Typo is an intelligent engineering platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. Let’s look at why engineering leaders choose Typo as a key tool:

You Get Customized DORA and Other Engineering Metrics

Engineering metrics are measurements of engineering outputs and processes. However, there isn’t a single pre-defined set of metrics that software development teams use to ensure success; it depends on various factors, including team size, the background of team members, and so on.

Typo’s customized DORA (Deployment frequency, Change failure rate, Lead time, and Mean Time to Recover) key metrics and other engineering metrics can be configured in a single dashboard based on specific development processes. This helps benchmark the dev team’s performance and identifies real-time bottlenecks, sprint delays, and blocked PRs. With the user-friendly interface and tailored integrations, engineering leaders can get all the relevant data within minutes and drive continuous improvement.

Typo has an In-Built Automated Code Review Feature

Code review is all about improving code quality. It improves software teams’ productivity and streamlines the development process. However, when done manually, the code review process can be time-consuming and take a lot of effort.

Typo’s automated code review tool auto-analyses codebase and pull requests to find issues and auto-generates fixes before it merges to master. It understands the context of your code and quickly finds and fixes any issues accurately, making pull requests easy and stress-free. It standardizes your code, reducing the risk of a software security breach and boosting maintainability, while also providing insights into code coverage and code complexity for thorough analysis.

You Can Track the Team’s Progress with an Advanced Sprint Analysis Tool

While a burndown chart helps visually monitor teams’ work progress, it is time-consuming and doesn’t provide insights about the specific types of issues or tasks. Hence, it is always advisable to complement it with sprint analysis tools to provide additional insights tailored to agile project management.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. It uses data from Git and the issue management tool to provide insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint. This helps in identifying potential problems early, spotting areas where teams can be more efficient, and meeting deadlines.

The Metrics Dashboard Focuses on Team-Level Improvement, Not Micromanaging Individual Developers

When engineering metrics focus on individual success rather than team performance, they create a sense of surveillance rather than support. This leads to decreased motivation, productivity, and trust among development teams. There are better ways to use engineering metrics.

Typo has a metrics dashboard that focuses on the team’s health and performance. It lets engineering leaders compare the team’s results with healthy benchmarks across industries and drive impactful initiatives for the team. Since it considers only the team’s goals, it encourages team members to work and solve problems together, fostering a healthier, more productive work environment conducive to innovation and growth.

Typo Takes into Consideration the Human Side of Engineering

Measuring developer experience requires not only quantitative metrics but also qualitative feedback. By prioritizing the human side of team members and developer productivity, engineering managers can create a more inclusive and supportive environment for them.

Typo helps in getting a 360° view of the developer experience: it captures qualitative insights and provides an in-depth view of the real issues that need attention. With signals from work patterns and continuous AI-driven pulse check-ins on the experience of developers in the team, Typo provides early indicators of their well-being and actionable insights on the areas that need your attention. It also tracks the work habits of developers across multiple activities, such as commits, PRs, reviews, comments, tasks, and merges, over a period of time. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system flags the developer as being in the burnout zone or at risk of burnout.

You Can Integrate Many Tools with the Dev Stack

The more tools that can be integrated with the software, the better it is for developers. Integrations streamline the development process, enforce standardization and consistency, and provide access to valuable resources and functionality.

Typo lets you see the complete picture of your engineering health by seamlessly connecting to your tech tool stack. This includes:

  • Git versioning tools that use the Git version control system
  • Issue tracker tools for managing tasks, bug tracking, and other project-related issues
  • CI/CD tools to automate and streamline the software development process
  • Communication tools to facilitate the exchange of ideas and information
  • Incident management tools to resolve unexpected events or failures

Conclusion

Typo is a software delivery tool that can help ship reliable software faster. You can find real-time bottlenecks in your SDLC, automate code reviews, and measure developer experience – all in a single platform.

Typo ranked as a Leader in G2 Summer 2023 Reports

The G2 Summer 2023 report is out!

We are delighted to share that Typo ranks as a leader in the Software Development analytics tool category. A big thank you to all our customers who supported us in this journey and took the time to write reviews about their experience. It really got us motivated to keep moving forward and bring the best to the table in the coming weeks.

Typo taking the lead

Typo is placed among the leaders in Software Development Analytics. Besides this, we earned the ‘Users Love Us’ badge as well.

Our wall of fame shines bright with –

  • Leader in the overall Grid® Report for Software Development Analytics Tools category
  • Leader in the Mid Market Grid® Report for Software Development Analytics Tools category
  • Rated #1 for Likelihood to Recommend
  • Rated #1 for Quality of Support
  • Rated #1 for Meets Requirements
  • Rated #1 for Ease of Use
  • Rated #1 for Analytics and Trends

Typo has been ranked a Leader in the Grid Report for Software Development Analytics Tool | Summer 2023. This is a testament to our continuous efforts toward building a product that engineering teams love to use.

The ratings also include –

  • 97% of the reviewers have rated Typo high in analyzing historical data to highlight trends, statistics & KPIs
  • 100% of the reviewers have rated us high in Productivity Updates

We, as a team, achieved the feat of attaining these scores:

Typo user ratings

Here’s what our customers say about Typo

Check out what other users have to say about Typo here.

What makes Typo different?

Typo is an intelligent AI-driven Engineering Management platform that enables modern software teams with visibility, insights & tools to code better, deploy faster & stay aligned with business goals.

Having launched on Product Hunt, we started with 15 engineers working with sheer hard work and dedication, and we have since impacted 5,000+ developers and engineering leaders globally, along with 400,000+ PRs & 1.5M+ commits.

We are NOT just another software delivery analytics platform. We go beyond SDLC metrics to build an ecosystem that combines intelligent insights, impactful actions & automated workflows to help managers lead better & developers perform better.

As the first step, Typo gives core insights into dev velocity, quality & throughput that have helped engineering leaders reduce their PR cycle time by almost 57% and deliver projects 2x faster.

Continuous Improvement with Typo

Typo empowers continuous improvement for developers & managers through goal setting & visibility that is specific to developers themselves.

Leaders can set goals to enforce best practices such as keeping PR sizes small, avoiding PRs merged without review, and identifying high-risk work. Typo nudges the key stakeholders on Slack as soon as a goal is breached. Typo also automates workflows on Slack to help developers with faster PR shipping and code reviews.

Developer’s view

Typo provides core insights to your developers that are 100% confidential to them. It helps developers identify their strengths and the core areas of improvement that impact software delivery, and it lets them gain visibility & measure the impact of their work on team efficiency & goals.

Developer’s well-being

We believe that all three aspects – work, collaboration & well-being – need to fall in place to help an individual deliver their best. Inspired by the SPACE framework for developer productivity, we support Pulse Check-Ins, Developer Experience insights, Burnout predictions & Engineering surveys to paint a complete picture.

10X your dev teams’ efficiency with Typo

It’s all of your immense love and support that made us a leader in such a short period. We are grateful to you!

But this is just the beginning. Our aim has always been to level up your dev game, and we will be coming out with exciting new releases in the next few weeks.

Interested in using Typo? Sign up for FREE today and get insights in 5 min.
