Impact of Low Code Quality on Software Development
Maintaining a balance between speed and code quality is a challenge for every developer.
Deadlines and fast-paced projects often push teams to prioritize rapid delivery, leading to compromises in code quality that can have long-lasting consequences. While cutting corners might seem efficient in the moment, it often results in technical debt and a codebase that becomes increasingly difficult to manage.
The hidden costs of poor code quality are real, impacting everything from development cycles to team morale. This blog delves into the real impact of low code quality, its common causes, and actionable solutions tailored to developers looking to elevate their code standards.
Understanding the Core Elements of Code Quality
Code quality goes beyond writing functional code. High-quality code is characterized by readability, maintainability, scalability, and reliability. Ensuring these aspects helps the software evolve efficiently without causing long-term issues for developers. Let’s break down these core elements further; a short code sketch after the list ties them together:
Readability: Code that follows consistent formatting, uses meaningful variable and function names, and includes clear inline documentation or comments. Readable code allows any developer to quickly understand its purpose and logic.
Maintainability: Modular code that is organized with reusable functions and components. Maintainability ensures that code changes, whether for bug fixes or new features, don’t introduce cascading errors throughout the codebase.
Scalability: Code designed with an architecture that supports growth. This involves using design patterns that decouple different parts of the code and make it easier to extend functionality.
Reliability: Robust code that has been tested under different scenarios to minimize bugs and unexpected behavior.
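To make these qualities concrete, here is a minimal Python sketch; the Order type, the discount rule, and all names are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Order:
    subtotal: float
    is_member: bool

MEMBER_DISCOUNT_RATE = 0.10  # a named constant instead of a magic number

def calculate_total(order: Order) -> float:
    """Return the order total, applying the member discount if eligible."""
    if order.subtotal < 0:
        raise ValueError("subtotal must be non-negative")  # fail fast on bad input
    discount = order.subtotal * MEMBER_DISCOUNT_RATE if order.is_member else 0.0
    return round(order.subtotal - discount, 2)
```

Because the pricing rule lives in one small, well-named function, any developer can read it at a glance, test it in isolation, and change it without side effects elsewhere.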
The Real Costs of Low Code Quality
Low code quality can significantly impact various facets of software development. Below are key issues developers face when working with substandard code:
Sluggish Development Cycles
Low-quality code often involves unclear logic and inconsistent practices, making it difficult for developers to trace bugs or implement new features. This can turn straightforward tasks into hours of frustrating work, delaying project milestones and adding stress to sprints.
Escalating Technical Debt
Technical debt accrues when suboptimal code is written to meet short-term goals. While it may offer an immediate solution, it complicates future updates. Developers need to spend significant time refactoring or rewriting code, which detracts from new development and wastes resources.
Bug-Prone Software
Substandard code tends to harbor hidden bugs that may not surface until they affect end-users. These bugs can be challenging to isolate and fix, leading to patchwork solutions that degrade the codebase further over time.
Collaboration Friction
When multiple developers contribute to a project, low code quality can cause misalignment and confusion. Developers might spend more time deciphering each other’s work than contributing to new development, leading to decreased team efficiency and a lower-quality product.
Scalability Bottlenecks
A codebase that doesn’t follow proper architectural principles will struggle when scaling. For instance, tightly coupled components make it hard to isolate and upgrade parts of the system, leading to performance issues and reduced flexibility.
Developer Burnout
Constantly working with poorly structured code is taxing. The mental effort needed to debug or refactor a convoluted codebase can demoralize even the most passionate developers, leading to frustration, reduced job satisfaction, and burnout.
Root Causes of Low Code Quality
Understanding the reasons behind low code quality helps in developing practical solutions. Here are some of the main causes:
Pressure to Deliver Rapidly
Tight project deadlines often push developers to prioritize quick delivery over thorough, well-thought-out code. While this may solve immediate business needs, it sacrifices code quality and introduces problems that require significant time and resources to fix later.
Lack of Unified Coding Standards
Without established coding standards, developers may approach problems in inconsistent ways. This lack of uniformity leads to a codebase that’s difficult to maintain, read, and extend. Coding standards help enforce best practices and maintain consistent formatting and documentation.
Insufficient Code Reviews
Skipping code reviews means missing opportunities to catch errors, bad practices, or code smells before they enter the main codebase. Peer reviews help maintain quality, share knowledge, and align the team on best practices.
Limited Testing Strategies
A codebase without sufficient testing coverage is bound to have undetected errors. Tests, especially automated ones, help identify issues early and ensure that any code changes do not break existing features.
Overreliance on Low-Code/No-Code Solutions
Low-code platforms offer rapid development but often generate code that isn’t optimized for long-term use. This code can be bloated, inefficient, and difficult to debug or extend, causing problems when the project scales or requires custom functionality.
Comprehensive Solutions to Improve Code Quality
Addressing low code quality requires deliberate, consistent effort. Here are expanded solutions with practical tips to help developers maintain and improve code standards:
Adopt Rigorous Code Reviews
Code reviews should be an integral part of the development process. They serve as a quality checkpoint to catch issues such as inefficient algorithms, missing documentation, or security vulnerabilities. To make code reviews effective:
Create a structured code review checklist that focuses on readability, adherence to coding standards, potential performance issues, and proper error handling.
Foster a culture where code reviews are seen as collaborative learning opportunities rather than criticism.
Use tools like GitHub’s review features or Bitbucket’s pull request comments for in-depth code discussions.
Integrate Linters and Static Analysis Tools
Linters help maintain consistent formatting and detect common errors automatically. Tools like ESLint (JavaScript), RuboCop (Ruby), and Pylint (Python) check your code for syntax issues and adherence to coding standards. Static analysis tools go a step further by analyzing code for complex logic, performance issues, and potential vulnerabilities. To optimize their use:
Configure these tools to align with your project’s coding standards.
Run these tools in pre-commit hooks (for example, with Husky in JavaScript projects) or integrate them into your CI/CD pipelines to ensure code quality checks are performed automatically; a minimal sketch of such a hook follows this list.
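As a rough illustration of the pre-commit idea, here is a hypothetical Git hook for a Python project. It assumes Pylint is installed and that the script is saved as .git/hooks/pre-commit and marked executable:

```python
#!/usr/bin/env python3
"""Minimal Git pre-commit hook: lint staged Python files before committing."""
import subprocess
import sys

def staged_python_files() -> list[str]:
    # Ask Git for staged files, keeping only Python sources.
    output = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in output.splitlines() if f.endswith(".py")]

def main() -> int:
    files = staged_python_files()
    if not files:
        return 0  # nothing to lint, allow the commit
    result = subprocess.run(["pylint", *files])
    if result.returncode != 0:
        print("Lint errors found; commit aborted.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```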
Prioritize Comprehensive Testing
Adopt a multi-layered testing strategy to ensure that code is reliable and bug-free:
Unit Tests: Write unit tests for individual functions or methods to verify they work as expected. Frameworks like Jest for JavaScript, PyTest for Python, and JUnit for Java are popular choices (a short PyTest sketch follows this list).
Integration Tests: Ensure that different parts of your application work together smoothly. Tools like Cypress and Selenium can help automate these tests.
End-to-End Tests: Simulate real user interactions to catch potential issues that unit and integration tests might miss.
Integrate testing into your CI/CD pipeline so that tests run automatically on every code push or pull request.
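Assuming the calculate_total sketch shown earlier lives in a module named pricing (a hypothetical name), a PyTest unit test for it might look like this:

```python
# test_pricing.py -- run with: pytest test_pricing.py
import pytest

from pricing import Order, calculate_total  # hypothetical module name

def test_member_gets_discount():
    assert calculate_total(Order(subtotal=100.0, is_member=True)) == 90.0

def test_non_member_pays_full_price():
    assert calculate_total(Order(subtotal=100.0, is_member=False)) == 100.0

def test_negative_subtotal_is_rejected():
    with pytest.raises(ValueError):
        calculate_total(Order(subtotal=-5.0, is_member=False))
```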
Dedicate Time for Refactoring
Refactoring helps improve code structure without changing its behavior. Regularly refactoring prevents code rot and keeps the codebase maintainable. Practical strategies include:
Identify “code smells” such as duplicated code, overly complex functions, or tightly coupled modules (see the before-and-after sketch following this list).
Apply design patterns where appropriate, such as Factory or Observer, to simplify complex logic.
Use IDE refactoring tools like IntelliJ IDEA’s refactor feature or Visual Studio Code extensions to speed up the process.
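As a small before-and-after illustration (the functions and fees are invented for this example), here is what extracting duplicated logic can look like:

```python
# Before: the same calculation is duplicated, with a magic number baked in.
def shipping_for_book(weight_kg):
    return 4.99 + weight_kg * 1.5

def shipping_for_electronics(weight_kg):
    return 4.99 + weight_kg * 1.5 + 2.0  # extra handling fee

# After: the shared calculation is extracted, so a rate change
# now happens in exactly one place.
BASE_FEE = 4.99
RATE_PER_KG = 1.5

def base_shipping(weight_kg: float) -> float:
    """Shared shipping calculation used by every product category."""
    return BASE_FEE + weight_kg * RATE_PER_KG

def shipping_for_book_refactored(weight_kg: float) -> float:
    return base_shipping(weight_kg)

def shipping_for_electronics_refactored(weight_kg: float) -> float:
    return base_shipping(weight_kg) + 2.0  # extra handling fee
```

The behavior is unchanged, which is the defining property of a refactoring, but the duplication and the risk of the two copies drifting apart are gone.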
Create and Enforce Coding Standards
Having a shared set of coding standards ensures that everyone on the team writes code with consistent formatting and practices. To create effective standards:
Collaborate with the team to create a coding guideline that includes best practices, naming conventions, and common pitfalls to avoid.
Document the guideline in a format accessible to all team members, such as a README file or a Confluence page.
Conduct periodic training sessions to reinforce these standards.
Leverage Typo for Enhanced Code Quality
Typo can be a game-changer for teams looking to automate code quality checks and streamline reviews. It offers a range of features:
Automated Code Review: Detects common issues, code smells, and inconsistencies, supplementing manual code reviews.
Detailed Reports: Provides actionable insights, allowing developers to understand code weaknesses and focus on the most critical issues.
Seamless Collaboration: Enables teams to leave comments and feedback directly on code, enhancing peer review discussions and improving code knowledge sharing.
Continuous Monitoring: Tracks changes in code quality over time, helping teams spot regressions early and maintain consistent standards.
Enhance Knowledge Sharing and Training
Keeping the team informed on best practices and industry trends strengthens overall code quality. To foster continuous learning:
Organize workshops, code review sessions, and tech talks where team members share insights or recent challenges they overcame.
Encourage developers to participate in webinars, online courses, and conferences.
Create a mentorship program where senior developers guide junior members through complex code and teach them best practices.
Strategically Use Low-Code Tools
Leverage low-code tools for non-critical components or rapid prototyping, but ensure the generated code is thoroughly reviewed and optimized. For more complex or business-critical parts of a project:
Supplement low-code solutions with custom coding to improve performance and maintainability.
Regularly review and refactor code generated by these platforms to align with project standards.
Commit to Continuous Improvement
Improving code quality is a continuous process that requires commitment, collaboration, and the right tools. Developers should assess current practices, adopt new ones gradually, and leverage automated tools like Typo to streamline quality checks.
By incorporating these strategies, teams can create a strong foundation for building maintainable, scalable, and high-quality software. Investing in code quality now paves the way for sustainable development, better project outcomes, and a healthier, more productive team.
Mobile development comes with a unique set of challenges: rapid release cycles, stringent user expectations, and the complexities of maintaining quality across diverse devices and operating systems. Engineering teams need robust frameworks to measure their performance and optimize their development processes effectively.
DORA metrics—Deployment Frequency, Lead Time for Changes, Mean Time to Recovery (MTTR), and Change Failure Rate—are key indicators that provide valuable insights into a team’s DevOps performance. Leveraging these metrics can empower mobile development teams to make data-driven improvements that boost efficiency and enhance user satisfaction.
Importance of DORA Metrics in Mobile Development
DORA metrics, rooted in research from the DevOps Research and Assessment (DORA) group, help teams measure key aspects of software delivery performance.
Here's why they matter for mobile development (a small worked example follows the list):
Deployment Frequency: Mobile teams need to keep up with the fast pace of updates required to satisfy user demand. Frequent, smooth deployments signal a team’s ability to deliver features, fixes, and updates consistently.
Lead Time for Changes: This metric tracks the time between code commit and deployment. For mobile teams, shorter lead times mean a streamlined process, allowing quicker responses to user feedback and faster feature rollouts.
MTTR: Downtime in mobile apps can result in frustrated users and poor reviews. By tracking MTTR, teams can assess and improve their incident response processes, minimizing the time an app remains in a broken state.
Change Failure Rate: A high change failure rate can indicate inadequate testing or rushed releases. Monitoring this helps mobile teams enhance their quality assurance practices and prevent issues from reaching production.
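To make these definitions concrete, here is a small worked example that derives all four metrics from a handful of deployment records. The record shape and the numbers are invented for illustration; real data would come from your CI/CD and incident-tracking tools:

```python
from datetime import datetime, timedelta

# Each record: (commit time, deploy time, caused an incident?, time to recover)
deployments = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 15), False, None),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 11), True, timedelta(hours=2)),
    (datetime(2024, 5, 6, 8), datetime(2024, 5, 6, 12), False, None),
]
days_observed = 7

deployment_frequency = len(deployments) / days_observed  # deploys per day
lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
average_lead_time = sum(lead_times, timedelta()) / len(lead_times)
recovery_times = [r for _, _, failed, r in deployments if failed]
change_failure_rate = len(recovery_times) / len(deployments)
mttr = sum(recovery_times, timedelta()) / len(recovery_times) if recovery_times else None

print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Average lead time:    {average_lead_time}")
print(f"Change failure rate:  {change_failure_rate:.0%}")
print(f"MTTR:                 {mttr}")
```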
Deep Dive into Practical Solutions for Tracking DORA Metrics
Tracking DORA metrics in mobile app development involves a range of technical strategies. Here, we explore practical approaches to implement effective measurement and visualization of these metrics.
Implementing a Measurement Framework
Integrating DORA metrics into existing workflows requires more than a simple add-on; it demands technical adjustments and robust toolchains that support continuous data collection and analysis.
Automated Data Collection
Automating the collection of DORA metrics starts with choosing the right CI/CD platforms and tools that align with mobile development. Popular options include:
Jenkins Pipelines: Set up custom pipeline scripts that log deployment events and timestamps, capturing deployment frequency and lead times. Use plugins like the Pipeline Stage View for visual insights.
GitLab CI/CD: With GitLab's built-in analytics, teams can monitor deployment frequency and lead time for changes directly within their CI/CD pipeline.
GitHub Actions: Utilize workflows that trigger on commits and deployments. Custom actions can be developed to log data and push it to external observability platforms for visualization.
Technical setup: For accurate deployment tracking, implement triggers in your CI/CD pipelines that capture key timestamps at each stage (e.g., start and end of builds, start of deployment). This can be done using shell scripts that append timestamps to a database or monitoring tool.
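The same idea as a short Python helper that a pipeline step can call; the stage names, environment variable, and JSON-lines file are illustrative choices, and a real setup might POST to a metrics API instead:

```python
#!/usr/bin/env python3
"""Append a pipeline-stage timestamp to a simple metrics store."""
import json
import os
import sys
import time

def record_stage_event(stage: str, event: str, path: str = "dora_events.jsonl") -> None:
    record = {
        "pipeline_id": os.environ.get("PIPELINE_ID", "local"),
        "stage": stage,            # e.g. "build", "test", "deploy"
        "event": event,            # "start" or "end"
        "timestamp": time.time(),  # epoch seconds, for later lead-time math
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    # Usage from a pipeline step: python record_event.py deploy start
    record_stage_event(sys.argv[1], sys.argv[2])
```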
Real-Time Monitoring and Visualization
To make sense of the collected data, teams need a robust visualization strategy. Here’s a deeper look at setting up effective dashboards:
Prometheus with Grafana: Integrate Prometheus to scrape data from CI/CD pipelines, and use Grafana to create dashboards with deployment trends and lead time breakdowns.
Elastic Stack (ELK): Ship logs from your CI/CD process to Elasticsearch and build visualizations in Kibana. This setup provides detailed logs alongside high-level metrics.
Technical Implementation Tips:
Use Prometheus exporters or custom scripts that expose metric data as HTTP endpoints (a minimal exporter sketch follows this list).
Design Grafana dashboards to show current and historical trends for DORA metrics, using panels that highlight anomalies or spikes in lead time or failure rates.
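On the exporter side, a minimal sketch using the prometheus_client Python package (the metric names and port are illustrative, not a standard) could look like this:

```python
"""Expose deployment metrics over HTTP for Prometheus to scrape."""
import time
from prometheus_client import Counter, Gauge, start_http_server

DEPLOYMENTS = Counter(
    "mobile_deployments_total",
    "Number of deployments; deployment frequency is derived from this.",
)
LEAD_TIME_SECONDS = Gauge(
    "mobile_lead_time_seconds",
    "Commit-to-deploy lead time of the most recent release.",
)

def on_deployment(commit_ts: float, deploy_ts: float) -> None:
    """Call this from the CI/CD hook when a release goes out."""
    DEPLOYMENTS.inc()
    LEAD_TIME_SECONDS.set(deploy_ts - commit_ts)

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics on port 8000
    while True:
        time.sleep(60)  # keep the process alive between scrapes
```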
Comprehensive Testing Pipelines
Testing is integral to maintaining a low change failure rate. To align with this, engineering teams should develop thorough, automated testing strategies:
Unit Testing: Implement unit tests with frameworks like JUnit for Android or XCTest for iOS. Ensure these are part of every build to catch low-level issues early.
Integration Testing: Use tools such as Espresso and UIAutomator for Android and XCUITest for iOS to validate complex user interactions and integrations.
End-to-End Testing: Integrate Appium or Selenium to automate tests across different devices and OS versions. End-to-end testing helps simulate real-world usage and ensures new deployments don't break critical app flows.
Pipeline Integration:
Set up your CI/CD pipeline to trigger these tests automatically post-build. Configure your pipeline to fail early if a test doesn’t pass, preventing faulty code from being deployed.
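One simple way to express that fail-early behavior is a small gate script the pipeline invokes after the build; the test directories and runners below are placeholders for your own suites:

```python
#!/usr/bin/env python3
"""Fail-fast test gate: run suites cheapest-first, stop on the first failure."""
import subprocess
import sys

TEST_STAGES = [
    ("unit", ["pytest", "tests/unit", "-q"]),
    ("integration", ["pytest", "tests/integration", "-q"]),
    ("end-to-end", ["pytest", "tests/e2e", "-q"]),
]

def main() -> int:
    for name, command in TEST_STAGES:
        print(f"Running {name} tests...")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"{name} tests failed; stopping the pipeline.", file=sys.stderr)
            return result.returncode  # non-zero exit blocks the deployment
    print("All test stages passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```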
Incident Response and MTTR Management
Reducing MTTR requires visibility into incidents and the ability to act swiftly. Engineering teams should:
Implement Monitoring Tools: Use tools like Firebase Crashlytics for crash reporting and monitoring. Integrate with third-party tools like Sentry for comprehensive error tracking.
Set Up Automated Alerts: Configure alerts for critical failures using observability tools like Grafana Loki, Prometheus Alertmanager, or PagerDuty. This ensures that the team is notified as soon as an issue arises.
Strategies for Quick Recovery:
Implement automatic rollback procedures using feature flags and deployment strategies such as blue-green deployments or canary releases (a simplified sketch follows this list).
Use scripts or custom CI/CD logic to switch between versions if a critical incident is detected.
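As a simplified sketch of flag-based rollback, where check_error_rate, the flag store, and the threshold are hypothetical stand-ins for your monitoring and deployment APIs:

```python
"""Disable a risky code path automatically when the canary looks unhealthy."""

ERROR_RATE_THRESHOLD = 0.05  # roll back if more than 5% of requests fail

feature_flags = {"new_checkout_flow": True}  # stand-in for a real flag service

def check_error_rate() -> float:
    """Placeholder: query your monitoring tool (Crashlytics, Sentry, ...)."""
    return 0.08  # hard-coded here so the sketch is runnable

def rollback_if_unhealthy() -> bool:
    """Flip the flag off, instantly routing users back to the old path."""
    if check_error_rate() > ERROR_RATE_THRESHOLD:
        feature_flags["new_checkout_flow"] = False
        return True
    return False

if __name__ == "__main__":
    print("Rolled back!" if rollback_if_unhealthy() else "Canary healthy.")
```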
Weaving Typo into Your Workflow
After implementing these technical solutions, teams looking to streamline DORA metrics tracking can leverage Typo, which consolidates data and makes metric tracking more efficient and less time-consuming. Typo provides:
Automated Deployment Tracking: By integrating with existing CI/CD tools, Typo collects deployment data and visualizes trends, simplifying the tracking of deployment frequency.
Detailed Lead Time Analysis: Typo’s analytics engine breaks down lead times by stages in your pipeline, helping teams pinpoint delays in specific steps, such as code review or testing.
Real-Time Incident Response Support: Typo includes incident monitoring capabilities that assist in tracking MTTR and offering insights into incident trends, facilitating better response strategies.
Seamless Integration: Typo connects effortlessly with platforms like Jenkins, GitLab, GitHub, and Jira, centralizing DORA metrics in one place without disrupting existing workflows.
Typo’s integration capabilities mean engineering teams don’t need to build custom scripts or additional data pipelines. With Typo, developers can focus on analyzing data rather than collecting it, ultimately accelerating their journey toward continuous improvement.
Establishing a Continuous Improvement Cycle
To fully leverage DORA metrics, teams must establish a feedback loop that drives continuous improvement. This section outlines how to create a process that ensures long-term optimization and alignment with development goals.
Regular Data Reviews: Conduct data-driven retrospectives to analyze trends and set goals for improvements.
Iterative Process Enhancements: Use findings to adjust coding practices, enhance automated testing coverage, or refine build processes.
Team Collaboration and Learning: Share knowledge across teams to spread best practices and avoid repeating mistakes.
Empowering Your Mobile Development Process
DORA metrics provide mobile engineering teams with the tools needed to measure and optimize their development processes, enhancing their ability to release high-quality apps efficiently. By integrating DORA metrics tracking through automated data collection, real-time monitoring, comprehensive testing pipelines, and advanced incident response practices, teams can achieve continuous improvement.
Tools like Typo make these practices even more effective by offering seamless integration and real-time insights, allowing developers to focus on innovation and delivering exceptional user experiences.
In this episode of the groCTO Podcast, host Kovid Batra engages in a comprehensive discussion with Geoffrey Teale, the Principal Product Engineer at Upvest, who brings over 25 years of engineering and leadership experience.
The episode begins with Geoffrey's role at Upvest, where he has transitioned from Head of Developer Experience to Principal Product Engineer, emphasizing a holistic approach to improving both developer experience and engineering standards across the organization. Upvest's business model as a financial infrastructure company providing investment banking services through APIs is also examined. Geoffrey underscores the multifaceted engineering requirements, including security, performance, and reliability, essential for meeting regulatory standards and customer expectations. The discussion further delves into the significance of product thinking for internal teams, highlighting the challenges and strategies of building platforms that resonate with developers' needs while competing with external solutions.
Throughout the episode, Geoffrey offers valuable insights into the decision-making processes, the importance of simplicity in early-phase startups, and the crucial role of documentation in fostering team cohesion and efficient communication. Geoffrey also shares his personal interests outside work, including his passion for music, open-source projects, and low-carbon footprint computing, providing a holistic view of his professional and personal journey.
Timestamps
00:00 - Introduction
00:49 - Welcome to the groCTO Podcast
01:22 - Meet Geoffrey: Principal Engineer at Upvest
01:54 - Understanding Upvest's Business & Engineering Challenges
03:43 - Geoffrey's Role & Personal Interests
05:48 - Improving Developer Experience at Upvest
08:25 - Challenges in Platform Development and Team Cohesion
13:03 - Product Thinking for Internal Teams
16:48 - Decision-Making in Platform Development
19:26 - Early-Phase Startups: Balancing Resources and Growth
Kovid Batra: Hi, everyone. This is Kovid, back with another episode of groCTO Podcast. Today with us, we have a very special guest who has great expertise in managing developer experience at small scale and large scale organizations. He is currently the Principal Engineer at Upvest, and has almost 25 plus years of experience in engineering and leadership. Welcome to the show, Geoffrey. Great to have you here.
Geoffrey Teale: Great to be here. Thank you.
Kovid Batra: So Geoffrey, I think, uh, today's theme is more around improving the developer experience, bringing the product thinking while building the platform teams, the platform. Uh, and you, you have been, uh, doing all this from quite some time now, like at Upvest and previous organizations that you've worked with, but at your current company, uh, like Upvest, first of all, we would like to know what kind of a business you're into, what does Upvest do, and let's then deep dive into how engineering is, uh, getting streamlined there according to the business.
Geoffrey Teale: Yeah. So, um, Upvest is a financial infrastructure company. Um, we provide, uh, essentially investment banking services, a complete, uh, solution for building investment banking experiences, uh, for, for client organizations. So we're business to business to customer. We provide our services via an API and client organizations, uh, names that you'd heard of people like Revolut and N26 build their client-facing applications using our backend services to provide that complete investment experience, um, currently within the European Union. Um, but, uh, we'll be expanding out from there shortly.
Kovid Batra: Great. Great. So I think, uh, when you talk about investment banking and supporting the companies with APIs, what kind of engineering is required here? Is it like more, uh, secure-oriented, secure-focused, or is it more like delivering on time? Or is it more like, uh, making things very very robust? How do you see it right now in your organization?
Geoffrey Teale: Well, yeah, I mean, I think in the space that we're in the, the answer unfortunately is all of the above, right? So all those things are our requirements. It has to be secure. It has to meet the, uh, the regulatory standards that we, we have in our industry. Um, it has to be performant enough for our customers who are scaling out to quite large scales, quite large numbers of customers. Um, has to be reliable. Um, so there's a lot of uh, uh, how would I say that? Pressure, uh, to perform well and to make sure that things are done to the highest possible standard in order to deliver for our customers. And, uh, if we don't do that, then, then, well, the customers won't trust us. If they don't trust us, then we wouldn't be where we are today. So, uh, yeah.
Kovid Batra: No, I totally get that. Uh, so talking more about you now, like, what's your current role in the organization? And even before that, tell us something about yourself which the LinkedIn doesn't know. Uh, I think the audience would love to know you a little bit more. Uh, let's start from there. Uh, maybe things that you do to unwind or your hobbies or you're passionate about anything else apart from your job that you're doing?
Geoffrey Teale: Oh, well, um, so, I'm, I'm quite old now. I have a family. I have two daughters, a dog, a cat, fish, quail. Keep quail in the garden. Uh, and that occupies most of my time outside of work. Actually my passions outside of work were always um, music. So I play guitar, and actually technology itself. So outside of work, I'm involved and have been involved in, in open source and free software for, for longer than I've been working. And, uh, I have a particular interest in, in low carbon footprint computing that I pursue outside of, out of work.
Kovid Batra: That's really amazing. So, um, like when you say low carbon, uh, cloud computing, what exactly are you doing to do that?
Geoffrey Teale: Oh, not specifically cloud computing, but that would be involved. So yeah, there's, there's multiple streams to this. So one thing is about using, um, low power platforms, things like RISC-V. Um, the other is about streamlining of software to make it more efficient so we can look into lots of different, uh, topics there about operating systems, tools, programming languages, how they, uh, how they perform. Um, sort of reversing a trend, uh, that's been going on for as long as I've been in computing, which is that we use more and more power, both in terms of computing resource, but also actual electricity for the network, um, to deliver more and more functionality, but we're also programming more and more abstracted ways with more and more layers, which means that we're actually sort of getting less, uh, less bang for buck, if you, if you like, than we used to. So, uh, trying to reverse those trends a little bit.
Kovid Batra: Perfect. Perfect. All right. That's really interesting. Thanks for that quick, uh, cute little intro. Uh, and, uh, now moving on to your work, like we were talking about your experience and your specialization in DevEx, right, improving the developer experience in teams. So what's your current, uh, role, responsibility that comes with, uh, within Upvest? Uh, and what are those interesting initiatives that you have, you're working on?
Geoffrey Teale: Yeah. So I've actually just changed roles at Upvest. I've been at Upvest for a little bit over two years now, and the first two years I spent as the Head of Developer Experience. So running a tribe with a specific responsibility for client-facing developer experience. Um, now I've switched into a Principal Engineering role, which means that I have, um, a scope now which is across the whole of our engineering department, uh, with a, yeah, a view for improving experience and improving standards and quality of engineering internally as well. So, um, a slight shift in role, but my, my previous five years before, uh, Upvest, were all in, uh, internal development experience. So I think, um, quite a lot of that skill, um, coming into play in the new role which um, yeah, in terms of challenges actually, we're just at the very beginning of what we're doing on that side. So, um, early challenges are actually about identifying what problems do exist inside the company and where we can improve and how we can make ourselves ready for the next phase of the company's lifetime. So, um, I think some of those topics would be quite familiar to any company that's relatively modern in terms of its developer practices. If you're using microservices, um, there's this aspect of Conway's law, which is to say that your organizational structure starts to follow the program structure and vice versa. And, um, in that sense, you can easily get into this world where teams have autonomy, which is wonderful, but they can be, um, sort of pushed into working in a, in a siloized fashion, which can be very efficient within the team, but then you have to worry about cohesion within the organization and about making sure that people are doing the right things, uh, to, to make the services work together, in terms of design, in terms of the technology that we develop there. So that bridges a lot into this world of developer experience, into platform drives, I think you mentioned already, and about the way in which you think about your internal development, uh, as opposed to just what you do for customers.
Kovid Batra: I agree. I mean, uh, as you said, like when the teams are siloed, they might be thinking they are efficient within themselves. And that's mostly the use case, the case. But when it comes to integrating different pieces together, that cohesion has to fall in. What is the biggest challenge you have seen, uh, in, in the teams in the last few years of your experience that prevents this cohesion? And what is it that works the best to bring in this cohesion in the teams?
Geoffrey Teale: Yeah. So I think there's, there's, there's a lot of factors there. The, the, the, the biggest one I think is pressure, right? So teams in most companies have customers that they're working for, they have pressure to get things done, and that tends to make you focus on the problem in front of you, rather than the bigger picture, right? So, um, dealing, dealing with that and reinforcing the message to engineers that it's actually okay to do good engineering and to worry about the other people, um, is a big part of that. I've always said, actually, that in developer experience, a big part of what you have to do, the first thing you have to do is actually teach people about why developer experience is important. And, uh, one of those reasons is actually sort of saying, you know, promoting good behavior within engineering teams themselves and saying, we only succeed together. We only do that when we make the situation for ourselves that allows us to engineer well. And when we sort of step away from good practice and rush, rush, um, that maybe works for a short period of time. But, uh, in the long term that actually creates a situation where there's a lot of mess and you have to deal with, uh, getting past, we talk about factors like technical debt. There's a lot of things that you have to get past before you can actually get on and do the productive things that you want to do. Um, so teaching organizations and engineers to think that way is, uh, is, uh, I think a big, uh, a big part of the work that has to be done, finding ways to then take that message and put it into a package that is acceptable to people outside of engineering so that they understand why this is a priority and why it should be worked on is, I think, probably the second biggest part of that as well.
Kovid Batra: Makes sense. I think, uh, most of the, so is it like a behavioral challenge, uh, where, uh, developers and team members really don't like the fact that they have to work in cohesion with the teams? Or is it more like the organizational structure that put people into a certain kind of mindset and then they start growing with that and that becomes a problem in the later phase of the organization? What, what you have seen, uh, from your experience?
Geoffrey Teale: Yeah. So I mean, I think growth is a big part of this. So, um, I mean, I've, I've worked with a number of startups. I've also worked in much bigger organizations. And what happens in that transition is that you move from a small tight-knit group of people who sort of inherently have this very good interpersonal communication, they all know what's going on with the company as a whole, and they build trust between them. And that way, this, this early stage organization works very well, and even though you might be working on disparate tasks, you always have some kind of cohesion there. You know what to do. And if something comes up that affects all of you, it's very easy to identify the people that you need to talk to and find a solution for it. Then as you grow, you start to have this situation where you start to take domains and say, okay, this particular part of, of what we do now belongs in a team, it has a leader and this piece over here goes over there. And that still works quite well up into a certain scale, right? But after time in an organization, several things happen. Okay, so your priorities drift apart, right? You no longer have such good understanding of the common goal. You tend to start prioritizing your work within those departments. So you can have some, some tension between those goals. It's not always clear that Department A should be working together with Department B on the same priority. You also have natural staff turnover. So those people who are there at the beginning, they start to leave, some of them, at least, and these trust relationships break down, the communication channels break down. And the third factor is that new people coming into the organization, they haven't got these relationships, they haven't got this experience. They usually don't have, uh, the position to, to have influence over things on such a large scale. So they get an expectation of these people that they're going to be effective across the organization in the way that people who've been there a long time are, and it tends not to happen. And if you haven't set up for that, if you haven't built the support systems for that and the internal processes and tooling for that, then that communication stops happening in the way that it was happening before.
So all of those things create pressure to, to siloes, then you put it on the pressure of growth and customers and, and it just, um, uh, ossifies in that state.
Kovid Batra: Totally. Totally. And I think, um, talking about the customers, uh, last time when we were discussing, uh, you very beautifully put across this point of bringing that product thinking, not just for the products that you're building for the customer, but when you're building it for the teams. And I, what I feel is that, the people who are working on the platform teams have come across this situation more than anyone else in the team as a developer, where they have to put in that thought of product thinking for the people within the team. So what, what, what, uh, from where does this philosophy come? How you have fitted it into, uh, how platform teams should be built? Just tell us something about that.
Geoffrey Teale: Yeah. So this is something I talk about a little bit when I do presentations, uh, about developer experience. And one of the points that I make actually, particularly for platform teams, but any kind of internal team that's serving other internal teams is that you have to think about yourself, not as a mandatory piece that the company will always support and say, "You must use this, this platform that we have." Because I have direct experience, not in my current company, but in previous, uh, in previous employers where a lot of investment has been made into making a platform, but no thought really was given to this kind of developer experience, or actually even the idea of selling the platform internally, right? It was just an assumption that people would have to use it and so they would use it. And that creates a different set of forces than you'll find elsewhere. And, and people start to ignore the fact that, you know, if you've got a cloud platform in this case, um, there is competition, right? Every day as an engineer, you run into people out there working in the wide world, working for, for companies, the Amazons, AWS of this world, as your Google, they're all producing cloud platform tools. They're all promoting their cloud native development environments with their own reasons for doing that. But they expend a lot of money developing those things, developing them to a very high standard and a lot of money promoting and marketing those things. And it doesn't take very much when we talk just now about trust breaking down, the cohesion between teams breaking down. It doesn't take very much for a platform to start looking like less of a solution and more of a problem if it's taking you a long time to get things done, if you can't find out how to do things, if you, um, you have bad experiences with deployment. This all turns that product into an internal problem.
Kovid Batra: In context of an internal problem for the teams.
Geoffrey Teale: Yeah, and in that context, and this is what I, what I've seen, when you then either have someone coming in from outside with experience with another, a product that you could use, or you get this kind of marketing push and sales push from one of these big companies saying, "Hey, look at this, this platform that we've got that you could just buy into." um, it, it puts you in direct competition and you can lose that, that, right? So I have seen whole divisions of a, of a very large company switch away from the internal platform to using cloud native development, right, on, on a particular platform. Now there are downsides for that. There are all sorts of things that they didn't realize they would have to do that they end up having to do. But once they've made the decision, that battle is lost. And I think that's a really key topic to understand that you are in competition, even though you're an internal team, you are in competition with other people, and you have to do some of the things that they do to convince the people in your organization that what you're doing is beneficial, that it's, it's, it's useful, and it's better in some very distinct way than what they would get off the shelf from, from somewhere else.
Kovid Batra: Got it. Got it. So, when, uh, whenever the teams are making this decision, let's, let's take something, build a platform, what are those nitty gritties that one should be taking care of? Like, either people can go with off the shelf solutions, right? And then they start building. What, what should be the mindset, what should be the decision-making mindset, I must say, uh, for, for this kind of a process when they have to go through?
Geoffrey Teale: So I think, um, uh, we within Upvest, follow a very, um, uh, prescribed is not the right word, but we have a, we have a process for how we think about things, and I think that's actually a very useful example of how to think about any technical project, right? So we start with this 'why' question and the 'why' question is really important. We talk about product thinking. Um, this is, you know, who are we doing this for and what are the business outcomes that we want to achieve? And that's where we have to start from, right? So we define that very, very clearly because, and this is a really important part, there's no value, uh, in anybody within the organization saying, "Let's go and build a platform." For example, if that doesn't deliver what the company needs. So you have to have clarity about this. What is the best way to build this? I mean, nobody builds a platform, well not nobody, but very few people build a platform in the cloud starting from scratch. Most people are taking some existing solution, be that a cloud native solution from a big public cloud, or be that Kubernetes or Cloud Foundry. People take these tools and they wrap them up in their own processes, their own software tools around it to package them up as a, uh, a nice application platform for, for development to happen, right? So why do you do that? What, what purpose are you, are you serving in doing this? How will this bring your business forward? And if you can't answer those questions, then you probably should never even start the project, right? That's, that's my, my view. And if you can't continuously keep those, um, ideas in mind and repeat them back, right? Repeat them back in terms of what are we delivering? What do we measure up against to the, to the, to the company? Then again, you're not doing a very good job of, of, of communicating why that product exists. If you can't think of a reason why your platform delivers more to your company and the people working in your company than one of the off the shelf solutions, then what are you for, right? That's the fundamental question.
So we start there, we think about those things well before we even start talking about solution space and, and, um, you know, what kind of technology we're going to use, how we're going to build that. That's the first lesson.
Kovid Batra: Makes sense. A follow-up question on that. Uh, let's say a team is let's say 20-30 folks right now, okay? I'm talking about an engineering team, uh, who are not like super-funded right now or not in a very profit making business. This comes with a cost, right? You will have to deploy resources. You will have to invest time and effort, right? So is it a good idea according to you to have shared resources for such an initiative or it doesn't work out that way? You need to have dedicated resources, uh, working on this project separately or how, how do you contemplate that?
Geoffrey Teale: My experience of early-phase startups is that people have to be multitaskers and they have to work on multiple things to make it work, right? It just doesn't make sense in the early phase of a company to invest so heavily in a single solution. Um, and I think one of the mistakes that I see people making now actually is that they start off with this, this predefined idea of where they're going to be in five years. And so they sort of go away and say, "Okay, well, I want my, my, my system to run on microservices on Kubernetes." And they invest in setting up Kubernetes, right, which has got a lot easier over the last few years, I have to say. Um, you can, to some degree, go and just pick that stuff off the shelf and pay for it. Um, but it's an example of, of a technical decision that, that's putting the cart before the horse, right? So, of course, you want to make architectural decisions. You don't want to make investments on something that isn't going to last, but you also have to remember that you don't know what's going to happen. And actually, getting to a product quickly, uh, is more important than, than, you know, doing everything perfectly the first time around. So, when I talk about these, these things, I think uh, we have to accept that there is a difference between being like the scrappy little startup and then being in growth phase and being a, a mega corporation. These are different environments with different pressures.
Kovid Batra: Got it. So, when, when teams start, let's say, work on it, working on it and uh, they have started and taken up this project for let's say, next six months to at least go out with the first phase of it. Uh, what are those challenges which, uh, the platform heads or the people who are working, the engineers who are working on it, should be aware of and how to like dodge those? Something from your experience that you can share.
Geoffrey Teale: Yes. So I mean, in, in, in the, the very earliest phase, I mean, as I just alluded to that keeping it simple is, is a, a, a big benefit. And actually keeping it simple sometimes means, uh, spending money upfront. So what I've, what I've seen is, is, um, many times I've, I've worked at companies, um, but so many, at least three times who've invested in a monitoring platform. So they've bought a off the shelf software as a service monitoring platform, uh, and used that effectively up until a certain point of growth. Now the reason they only use it up into a certain point of growth is because these tools are extremely expensive and those costs tend to scale with your company and your organization. And so, there comes a point in the life of that organization where that no longer makes sense financially. And then you withdraw from that and actually invest in, in specialist resources, either internally or using open source tools or whatever it is. It could just be optimization of the tool that you're using to reduce those costs. But all of those things have a, a time and financial costs associated with them. Whereas at the beginning, when the costs are quite low to use these services, it actually tends to make more sense to just focus on your own project and, and, you know, pick those things up off the shelf because that's easier and quicker. And I think, uh, again, I've seen some companies fail because they tried to do everything themselves from scratch and that, that doesn't work in the beginning. So yeah, I think that's a, it's a big one.
The second one is actually slightly later as you start to grow, getting something up and running at all is a challenge. Um, what tends to happen as you get a little bit bigger is this effect that I was talking about before where people get siloized, um, the communication starts to break down and people aren't aware of the differing concerns. So if you start worrying about things that you might not worry about at first, like system recovery, uh, compliance in some cases, like there's laws around what you do in terms of your platform and your recoverability and data protection and all these things, all of these topics tend to take focus away, um, from what the developers are doing. So on the first hand, that tends to slow down delivery of, of, features that the engineers within your company want in favor of things that they don't really want to know about. Now, all the time you're doing this, you're taking problems away from them and solving them for them. But if you don't talk about that, then you're not, you're not, you may be delivering value, but nobody knows you're delivering value. So that's the first thing.
The other thing is that you then tend to start losing focus on, on the impact that some of these things have. If you stop thinking about the developers as the primary stakeholders and you get obsessed about these other technical and legal factors, um, then you can start putting barriers into place. You can start, um, making the interfaces to the system the way in which it's used, become more complicated. And if you don't really focus then on the developer experience, right, what it is like to use that platform, then you start to turn into the problem, which I mentioned before, because, um, if you're regularly doing something, if you're deploying or testing on a platform and you have to do that over and over again, and it's slowed down by some bureaucracy or some practice or just literally running slowly, um, then that starts to be the thing that irritates you. It starts to be the thing that's in your way, stopping you doing what you're doing. And so, I mean, one thing is, is, is recognizing when this point happens, when your concerns start to deviate and actually explicitly saying, "Okay, yes, we're going to focus on all these things we have to focus on technically, but we're going to make sure that we reserve some technical resource for monitoring our performance and the way in which our customers interact with the system, failure cases, complaints that come up often."
One thing I saw in much bigger companies is that they migrated to the cloud from legacy systems in data centers. They were used to turnaround times on procedures for deploying software that took at least weeks, or to month-long projects, because they had to wait for specific training or get sign-off. They thought that by moving to an internal cloud platform, they would solve these things and have a rapid development and deployment cycle. They sort of did, in some ways, but when they were planning it out, they forgot to make the developers a stakeholder and ask, "What do you need to achieve that?" And what the developers actually needed was a change in the mindset around the bureaucracy that came with it. It's all well and good not having to physically put a machine in a rack and order it from a company. But if you still have rules that say you need to attend a training course before you can do anything with the platform, and there's a six-month waiting list for that course, or that everything has to be approved by five managers who can only be contacted by email, those processes slow things down. I mentioned the company where we lost a whole department from the platform we had internally. One of the reasons was that just getting started with that platform took months, whereas if you went to a public cloud service, all you needed was a credit card, and you wouldn't be breaking any company rules in doing that. As long as you had the right to spend the money on the credit card, it was fine.
So that difference of experience, that difference of understanding, starts to grow as you grow. I think that's a thing to look out for as you move from being 10 or 20 people in the whole company to being about 100 to 200 people. That's when these forces start to become apparent.
Kovid Batra: Got it. So when you touch that point of 100-200 people, there is definitely a different journey ahead, with its own set of challenges. From that zero-to-one and then one-to-X journey, what have you experienced? This would be my last question for today, but I'd be really interested, for people listening who are heading teams of a hundred and above: what kind of things should they be looking at when they are, let's say, moving from an off-the-shelf to an in-house product and building these teams together?
Geoffrey Teale: What should they be looking at? I think we just covered one of the big ones. I'd say one of the biggest things for engineers, and for managers of engineers, is resistance to documentation and the ideas people have about documentation. When you're a very small company, it's very easy to just know what's going on. As you grow, new people come into your team, and they have the same questions that have been asked and answered before, or that were simply known things. So you get this pattern where the same information is repeatedly requested, and it's nice and normal to have those conversations; it builds teams. But there's a key phrase here: 'Documentation is automation.' Engineers understand automation. They understand why automation is required to scale, but they tend to completely discount that when it comes to documentation. Almost every engineer I've ever met hates writing documentation. Not everyone, but almost everyone. Yet if you speak to engineers about what they need to start working with a new product, and again, we should think about the platform as a product, they'll say, of course, I need some documentation. If you dive into that, they don't really want fancy YouTube videos, although those sometimes help people overcome a resistance to learning; having anything at all is useful. But here is the key learning about documentation: you need to treat it a little bit like you treat code. A very natural observation from most engineers is, "If I write a document about this, it will just sit there and rot, and then it will be worse than useless because it will say the wrong thing." That's absolutely true, but the problem is that someone let it sit there and rot. That shouldn't be the case. If you need documentation to scale out, you need these pieces to support new people coming into the company and to reduce the overhead of communication, because the more people you have, the more directions of communication there are, and the more costly that gets for the organization. Documentation is boring and old-fashioned, but it is the solution that works for fixing that.
The only other thing I'll say about mindset is that it's really important to teach engineers what to document. Get them away from the idea that documentation means writing reams and reams of text explaining things in detail. It's about documenting the right things in the right place. At code level, that means commenting: saying not what the code does but, more importantly, why it does that. What decision was made that led to it? What customer requirement, what piece of regulation? Link out to the resources that explain that. At slightly higher levels, it means making things discoverable. In DevEx we talk about things like service catalogs, so people can find out what services are running and what APIs are available internally. Documentation also has to be structured in a way that meets the use cases: not individual departments dropping little bits of information all over a wiki with an arcane structure, but a centralized resource. That's one thing I did at a bigger company. I came into the platform team and said, "Nobody can find any information about your platform. You need a central website, and you need to promote that website and tell people, 'Hey, this is here. This is how you get the information you need to understand this platform.' And at the very front of that page, include why this platform is better than just going somewhere else," to come back to an earlier topic.
Documentation isn't a silver bullet, but it's the closest thing I'm aware of in tech organizations, and it's the thing that we routinely get wrong.
Kovid Batra: Great. In the interest of time, we'll have to stop here. But Geoffrey, this was really interesting; I explored a few things that were very new to me from the platform perspective. We would love to have you for another episode, discussing and deep-diving into more such topics. But for today, this is our time. Thank you once again for joining in and taking out time for this. Appreciate it.
For agile teams, tracking productivity can quickly become overwhelming, especially when too many metrics clutter the process. Many teams feel they’re working hard without seeing the progress they expect. By focusing on a handful of high-impact JIRA metrics, teams can gain clear, actionable insights that streamline decision-making and help them stay on course.
These five essential metrics highlight what truly drives productivity, enabling teams to make informed adjustments that propel their work forward.
Why JIRA Metrics Matter for Agile Teams
Agile teams often face missed deadlines, unclear priorities, and resource management issues. Without effective metrics, these issues remain hidden, leading to frustration. JIRA metrics provide clarity on team performance, enabling early identification of bottlenecks and allowing teams to stay agile and efficient. By tracking just a few high-impact metrics, teams can make informed, data-driven decisions that improve workflows and outcomes.
Top 5 JIRA Metrics to Improve Your Team’s Productivity
1. Work In Progress (WIP)
Work In Progress (WIP) measures the number of tasks actively being worked on. Setting WIP limits encourages teams to complete existing tasks before starting new ones, which reduces task-switching, increases focus, and improves overall workflow efficiency.
Technical applications:
Setting WIP limits: On JIRA Kanban boards, teams can set WIP limits for each stage, like “In Progress” or “Review.” This prevents overloading and helps teams maintain steady productivity without overwhelming team members (see the sketch after this list).
Identifying bottlenecks: WIP metrics highlight bottlenecks in real time. If tasks accumulate in a specific stage (e.g., “In Review”), it signals a need to address delays, such as availability of reviewers or unclear review standards.
Using cumulative flow diagrams: JIRA’s cumulative flow diagrams visualize WIP across stages, showing where tasks are getting stuck and helping teams keep workflows balanced.
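As a rough illustration of automating the WIP checks above, the sketch below pulls live issue counts per status with the `jira` Python client. The server URL, credentials, project key, and limit values are placeholder assumptions, not real settings.

```python
# Minimal sketch: count in-flight issues per status and flag WIP-limit
# breaches. Server, credentials, project, and limits are hypothetical.
from collections import Counter

from jira import JIRA  # pip install jira

client = JIRA(server="https://your-company.atlassian.net",
              basic_auth=("bot@example.com", "api-token"))

issues = client.search_issues(
    'project = DEMO AND statusCategory = "In Progress"', maxResults=200)
wip_by_status = Counter(issue.fields.status.name for issue in issues)

WIP_LIMITS = {"In Progress": 5, "In Review": 3}  # example limits only
for status, count in sorted(wip_by_status.items()):
    limit = WIP_LIMITS.get(status)
    flag = " (over limit!)" if limit and count > limit else ""
    print(f"{status}: {count}{flag}")
```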
2. Work Breakdown
Work Breakdown details how tasks are distributed across project components, priorities, and team members. Breaking down tasks into manageable parts (Epics, Stories, Subtasks) provides clarity on resource allocation and ensures each project aspect receives adequate attention.
Technical applications:
Epics and stories in JIRA: JIRA enables teams to organize large projects by breaking them into Epics, Stories, and Subtasks, making complex tasks more manageable and easier to track.
Advanced roadmaps: JIRA’s Advanced Roadmaps allow visualization of task breakdown in a timeline, displaying dependencies and resource allocations. This overview helps maintain balanced workloads across project components.
Tracking priority and status: Custom filters in JIRA allow teams to view high-priority tasks across Epics and Stories, ensuring critical items are progressing as expected.
3. Developer Workload
Developer Workload monitors the task volume and complexity assigned to each developer. This metric ensures balanced workload distribution, preventing burnout and optimizing each developer’s capacity.
Technical applications:
JIRA workload reports: Workload reports aggregate task counts, estimated hours, and priority levels for each developer, helping project managers reallocate tasks if certain team members are overloaded (a scripted equivalent is sketched after this list).
Time tracking and estimation: JIRA allows developers to log actual time spent on tasks, making it possible to compare against estimates for improved workload planning.
Capacity-based assignment: Project managers can analyze workload data to assign tasks based on each developer’s availability and capacity, ensuring sustainable productivity.
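For the workload reports mentioned above, a minimal scripted equivalent might look like the following. It assumes the same hypothetical `jira` client setup, a placeholder project key, and that time tracking is enabled.

```python
# Rough sketch: open-issue count and estimated hours per assignee.
from collections import defaultdict

from jira import JIRA  # pip install jira

client = JIRA(server="https://your-company.atlassian.net",
              basic_auth=("bot@example.com", "api-token"))

workload = defaultdict(lambda: {"count": 0, "hours": 0.0})
for issue in client.search_issues(
        'project = DEMO AND statusCategory != Done', maxResults=500):
    assignee = issue.fields.assignee
    name = assignee.displayName if assignee else "Unassigned"
    workload[name]["count"] += 1
    # The estimate fields are absent when time tracking is disabled.
    tt = getattr(issue.fields, "timetracking", None)
    seconds = getattr(tt, "originalEstimateSeconds", 0) or 0
    workload[name]["hours"] += seconds / 3600

for name, stats in sorted(workload.items()):
    print(f"{name}: {stats['count']} open, ~{stats['hours']:.1f}h estimated")
```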
4. Team Velocity
Team Velocity measures the amount of work completed in each sprint, establishing a baseline for sprint planning and setting realistic goals.
Technical applications:
Velocity chart: JIRA’s Velocity Chart displays work completed versus planned work, helping teams gauge their performance trends and establish realistic goals for future sprints.
Estimating story points: Story points assigned to tasks allow teams to calculate velocity and capacity more accurately, improving sprint planning and goal setting.
Historical analysis for planning: Historical velocity data enables teams to look back at performance trends, helping identify factors that impacted past sprints and optimizing future planning.
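Velocity math itself is simple enough to sketch without any API. The snippet below averages completed story points over a few hypothetical past sprints and derives a conservative commitment; all numbers are purely illustrative.

```python
# Completed story points per past sprint (made-up numbers).
completed_points = {
    "Sprint 21": 34,
    "Sprint 22": 28,
    "Sprint 23": 31,
    "Sprint 24": 25,
}

velocities = list(completed_points.values())
average = sum(velocities) / len(velocities)
print(f"Average velocity: {average:.1f} points")
print(f"Range: {min(velocities)}-{max(velocities)} points")

# Plan slightly below average so the commitment reflects demonstrated
# capacity rather than the best case.
commitment = int(average * 0.9)
print(f"Suggested next-sprint commitment: ~{commitment} points")
```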
5. Cycle Time
Cycle Time tracks how long tasks take from start to completion, highlighting process inefficiencies. Shorter cycle times generally mean faster delivery. A scripted sketch of the calculation follows the list below.
Technical applications:
Control chart: The Control Chart in JIRA visualizes Cycle Time, displaying how long tasks spend in each stage, helping to identify where delays occur.
Custom workflows and time tracking: Customizable workflows allow teams to assign specific time limits to each stage, identifying areas for improvement and reducing Cycle Time.
SLAs for timely completion: For teams with service-level agreements, setting cycle-time goals can help track SLA adherence, providing benchmarks for performance.
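As a hedged sketch of the cycle-time calculation behind the control chart, the snippet below walks an issue's status changelog via the `jira` client, taking the first transition into "In Progress" through the transition into "Done". The server, credentials, status names, and issue key are assumptions to adapt to your own workflow.

```python
# Sketch: cycle time in days from an issue's changelog.
from datetime import datetime

from jira import JIRA  # pip install jira

client = JIRA(server="https://your-company.atlassian.net",
              basic_auth=("bot@example.com", "api-token"))

def parse(ts: str) -> datetime:
    # JIRA timestamps look like "2024-05-01T09:30:00.000+0000".
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

def cycle_time_days(issue_key: str):
    issue = client.issue(issue_key, expand="changelog")
    started = finished = None
    for history in issue.changelog.histories:
        for item in history.items:
            if item.field != "status":
                continue
            if item.toString == "In Progress" and started is None:
                started = parse(history.created)
            elif item.toString == "Done":
                finished = parse(history.created)
    if started and finished:
        return (finished - started).total_seconds() / 86400
    return None  # issue never started or never finished

print(cycle_time_days("DEMO-123"))  # hypothetical issue key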
How to Set Up JIRA Metrics for Success: Practical Tips for Maximizing Their Benefits with Typo
Effectively setting up and using JIRA metrics requires strategic configuration and the right tools to turn raw data into actionable insights. Here’s a practical, step-by-step guide to configuring these metrics in JIRA for optimal tracking and collaboration. With Typo’s integration, teams gain additional capabilities for managing, analyzing, and discussing metrics collaboratively.
Step 1: Configure Key Dashboards for Visibility
Setting up dashboards in JIRA for metrics like Cycle Time, Developer Workload, and Team Velocity allows for quick access to critical data.
How to set up:
Go to the Dashboards section in JIRA, select Create Dashboard, and add specific gadgets such as Cumulative Flow Diagram for WIP and Velocity Chart for Team Velocity.
Position each gadget for easy reference, giving your team a visual summary of project progress at a glance.
Step 2: Use Typo’s Sprint Analysis for Enhanced Sprint Visibility
Typo’s sprint analysis offers an in-depth view of your team’s progress throughout a sprint, enabling engineering managers and developers to better understand performance trends, spot blockers, and refine future planning. Typo integrates seamlessly with JIRA to provide real-time sprint insights, including data on team velocity, task distribution, and completion rates.
Key features of Typo’s sprint analysis:
Detailed sprint performance summaries: Typo automatically generates sprint performance summaries, giving teams a clear view of completed tasks, WIP, and uncompleted items.
Sprint progress tracking: Typo visualizes your team’s progress across each sprint phase, enabling managers to identify trends and respond to bottlenecks faster.
Velocity trend analysis: Track velocity over multiple sprints to understand performance patterns. Typo’s charts display average, maximum, and minimum velocities, helping teams make data-backed decisions for future sprint planning.
Step 3: Leverage Typo’s Customizable Reports for Deeper Analysis
Typo enables engineering teams to go beyond JIRA’s native reporting by offering customizable reports. These reports allow teams to focus on specific metrics that matter most to them, creating targeted views that support sprint retrospectives and help track ongoing improvements.
Key benefits of Typo reports:
Customized metrics views: Typo’s reporting feature allows you to tailor reports by sprint, team member, or task type, enabling you to create a focused analysis that meets team objectives.
Sprint performance comparison: Easily compare current sprint performance with past sprints to understand progress trends and potential areas for optimization.
Collaborative insights: Typo’s centralized platform allows team members to add comments and insights directly into reports, facilitating discussion and shared understanding of sprint outcomes.
Step 4: Track Team Velocity with Typo’s Velocity Trend Analysis
Typo’s Velocity Trend Analysis provides a comprehensive view of team capacity and productivity over multiple sprints, allowing managers to set realistic goals and adjust plans according to past performance data.
How to use:
Access Typo’s Velocity Trend Analysis to view velocity averages and deviations over time, helping your team anticipate work capacity more accurately.
Use Typo’s charts to visualize and discuss the effects of any changes made to workflows or team processes, allowing for data-backed sprint planning.
Incorporate these insights into future sprint planning meetings to establish achievable targets and manage team workload effectively.
Step 5: Automate Alerts and Notifications for Key Metrics
Setting up automated alerts in JIRA and Typo helps teams stay on top of metrics without manual checking, ensuring that critical changes are visible in real-time.
How to set up:
Use JIRA’s automation rules to create alerts for specific metrics. For example, set a notification if a task’s Cycle Time exceeds a predefined threshold, signaling potential delays.
Enable notifications in Typo for sprint analysis updates, such as velocity changes or WIP limits being exceeded, to keep team members informed throughout the sprint.
Automate report generation in Typo, allowing your team to receive regular updates on sprint performance without needing to pull data manually.
Step 6: Host Collaborative Retrospectives with Typo
Typo’s integration makes retrospectives more effective by offering a shared space for reviewing metrics and discussing improvement opportunities as a team.
How to use:
Use Typo’s reports and sprint analysis as discussion points in retrospective meetings, focusing on completed vs. planned work, Cycle Time efficiency, and WIP trends.
Encourage team members to add insights or suggestions directly into Typo, fostering collaborative improvement and shared accountability.
Document key takeaways and actionable steps in Typo, ensuring continuous tracking and follow-through on improvement efforts in future sprints.
Scope creep—when a project’s scope expands beyond its original objectives—can disrupt timelines, strain resources, and lead to project overruns. Monitoring scope creep is essential for agile teams that need to stay on track without sacrificing quality.
In JIRA, tracking scope creep involves setting clear boundaries for task assignments, monitoring changes, and evaluating their impact on team workload and sprint goals.
How to Monitor Scope Creep in JIRA
Define scope boundaries: Start by clearly defining the scope of each project, sprint, or epic in JIRA, detailing the specific tasks and goals that align with project objectives. Make sure these definitions are accessible to all team members.
Use the issue history and custom fields: Track changes in task descriptions, deadlines, and priorities by utilizing JIRA’s issue history and custom fields. By setting up custom fields for scope-related tags or labels, teams can flag tasks or sub-tasks that deviate from the original project scope, making scope creep more visible (see the sketch after this list).
Monitor workload adjustments with Typo: When scope changes are approved, Typo’s integration with JIRA can help assess their impact on the team’s workload. Use Typo’s reporting to analyze new tasks added mid-sprint or shifts in priorities, ensuring the team remains balanced and prepared for adjusted goals.
Sprint retrospectives for reflection: During sprint retrospectives, review any instances of scope creep and assess the reasons behind the adjustments. This allows the team to identify recurring patterns, evaluate the necessity of certain changes, and refine future project scoping processes.
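One lightweight way to implement the change monitoring above is to snapshot the sprint's issue keys at sprint start and diff against the current set later. The sketch below does this with the `jira` client and a local JSON file; the server, credentials, project key, and file name are all illustrative.

```python
# Sketch: flag issues added to the open sprint after its baseline.
import json
from pathlib import Path

from jira import JIRA  # pip install jira

client = JIRA(server="https://your-company.atlassian.net",
              basic_auth=("bot@example.com", "api-token"))
SNAPSHOT = Path("sprint_baseline.json")  # hypothetical local store

def current_sprint_keys() -> set:
    issues = client.search_issues(
        'project = DEMO AND sprint in openSprints()', maxResults=500)
    return {issue.key for issue in issues}

if not SNAPSHOT.exists():  # run once at sprint start
    SNAPSHOT.write_text(json.dumps(sorted(current_sprint_keys())))
else:                      # run any time mid-sprint
    added = current_sprint_keys() - set(json.loads(SNAPSHOT.read_text()))
    for key in sorted(added):
        print(f"Added after sprint start (possible scope creep): {key}")
```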
By closely monitoring and managing scope creep, agile teams can keep their projects within boundaries, maintain productivity, and make adjustments only when they align with strategic objectives.
Building a Data-Driven Engineering Culture
Building a data-driven culture goes beyond tracking metrics; it’s about engaging the entire team in understanding and applying these insights to support shared goals. By fostering collaboration and using metrics as a foundation for continuous improvement, teams can align more effectively and adapt to challenges with agility.
Regularly revisiting and refining metrics ensures they stay relevant and actionable as team priorities evolve. To see how Typo can help you create a streamlined, data-driven approach, schedule a personalized demo today and unlock your team’s full potential.
Mobile development comes with a unique set of challenges: rapid release cycles, stringent user expectations, and the complexities of maintaining quality across diverse devices and operating systems. Engineering teams need robust frameworks to measure their performance and optimize their development processes effectively.
DORA metrics—Deployment Frequency, Lead Time for Changes, Mean Time to Recovery (MTTR), and Change Failure Rate—are key indicators that provide valuable insights into a team’s DevOps performance. Leveraging these metrics can empower mobile development teams to make data-driven improvements that boost efficiency and enhance user satisfaction.
Importance of DORA Metrics in Mobile Development
DORA metrics, rooted in research from the DevOps Research and Assessment (DORA) group, help teams measure key aspects of software delivery performance.
Here's why they matter for mobile development:
Deployment Frequency: Mobile teams need to keep up with the fast pace of updates required to satisfy user demand. Frequent, smooth deployments signal a team’s ability to deliver features, fixes, and updates consistently.
Lead Time for Changes: This metric tracks the time between code commit and deployment. For mobile teams, shorter lead times mean a streamlined process, allowing quicker responses to user feedback and faster feature rollouts.
MTTR: Downtime in mobile apps can result in frustrated users and poor reviews. By tracking MTTR, teams can assess and improve their incident response processes, minimizing the time an app remains in a broken state.
Change Failure Rate: A high change failure rate can indicate inadequate testing or rushed releases. Monitoring this helps mobile teams enhance their quality assurance practices and prevent issues from reaching production.
Deep Dive into Practical Solutions for Tracking DORA Metrics
Tracking DORA metrics in mobile app development involves a range of technical strategies. Here, we explore practical approaches to implement effective measurement and visualization of these metrics.
Implementing a Measurement Framework
Integrating DORA metrics into existing workflows requires more than a simple add-on; it demands technical adjustments and robust toolchains that support continuous data collection and analysis.
Automated Data Collection
Automating the collection of DORA metrics starts with choosing the right CI/CD platforms and tools that align with mobile development. Popular options include:
Jenkins Pipelines: Set up custom pipeline scripts that log deployment events and timestamps, capturing deployment frequency and lead times. Use plugins like the Pipeline Stage View for visual insights.
GitLab CI/CD: With GitLab's built-in analytics, teams can monitor deployment frequency and lead time for changes directly within their CI/CD pipeline.
GitHub Actions: Utilize workflows that trigger on commits and deployments. Custom actions can be developed to log data and push it to external observability platforms for visualization.
Technical setup: For accurate deployment tracking, implement triggers in your CI/CD pipelines that capture key timestamps at each stage (e.g., start and end of builds, start of deployment). This can be done using shell scripts that append timestamps to a database or monitoring tool.
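As one possible shape for such a trigger, the sketch below appends a JSON line per pipeline event; the log path, event names, and environment variable names are assumptions to adapt to your CI system. Lead time then falls out as the gap between a commit's first event and its deployment event, and deployment frequency is a count of deploy events per window.

```python
# Sketch: record a timestamped pipeline event for DORA bookkeeping.
import json
import os
import sys
import time

LOG_PATH = "/var/log/dora/deploy_events.jsonl"  # hypothetical sink

def record_event(stage: str) -> None:
    event = {
        "stage": stage,                       # e.g. "build_start", "deploy_end"
        "timestamp": time.time(),             # epoch seconds
        "commit": os.environ.get("GIT_COMMIT", "unknown"),
        "pipeline": os.environ.get("CI_PIPELINE_ID", "unknown"),
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(event) + "\n")

if __name__ == "__main__":
    # Called from a pipeline step, e.g.: python record_event.py deploy_end
    record_event(sys.argv[1])
```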
Real-Time Monitoring and Visualization
To make sense of the collected data, teams need a robust visualization strategy. Here’s a deeper look at setting up effective dashboards:
Prometheus with Grafana: Integrate Prometheus to scrape data from CI/CD pipelines, and use Grafana to create dashboards with deployment trends and lead time breakdowns.
Elastic Stack (ELK): Ship logs from your CI/CD process to Elasticsearch and build visualizations in Kibana. This setup provides detailed logs alongside high-level metrics.
Technical Implementation Tips:
Use Prometheus exporters or custom scripts that expose metric data as HTTP endpoints (a minimal sketch follows this list).
Design Grafana dashboards to show current and historical trends for DORA metrics, using panels that highlight anomalies or spikes in lead time or failure rates.
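A minimal version of the exporter tip might look like this, using the `prometheus_client` library. The metric names, histogram buckets, and the simulated event loop are illustrative stand-ins for real pipeline hooks.

```python
# Sketch: expose deployment count and lead time for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

DEPLOYS = Counter("app_deployments_total", "Completed deployments")
LEAD_TIME = Histogram("app_lead_time_seconds",
                      "Commit-to-deploy lead time in seconds",
                      buckets=(300, 900, 3600, 14400, 86400))

def on_deployment_finished(lead_time_seconds: float) -> None:
    DEPLOYS.inc()
    LEAD_TIME.observe(lead_time_seconds)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:              # stand-in for real pipeline events
        on_deployment_finished(random.uniform(600, 7200))
        time.sleep(60)
```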
Comprehensive Testing Pipelines
Testing is integral to maintaining a low change failure rate. To align with this, engineering teams should develop thorough, automated testing strategies:
Unit Testing: Implement unit tests with frameworks like JUnit for Android or XCTest for iOS. Ensure these are part of every build to catch low-level issues early.
Integration Testing: Use tools such as Espresso and UIAutomator for Android and XCUITest for iOS to validate complex user interactions and integrations.
End-to-End Testing: Integrate Appium or Selenium to automate tests across different devices and OS versions. End-to-end testing helps simulate real-world usage and ensures new deployments don't break critical app flows.
Pipeline Integration:
Set up your CI/CD pipeline to trigger these tests automatically post-build. Configure your pipeline to fail early if a test doesn’t pass, preventing faulty code from being deployed.
Incident Response and MTTR Management
Reducing MTTR requires visibility into incidents and the ability to act swiftly. Engineering teams should:
Implement Monitoring Tools: Use tools like Firebase Crashlytics for crash reporting and monitoring. Integrate with third-party tools like Sentry for comprehensive error tracking.
Set Up Automated Alerts: Configure alerts for critical failures using observability tools like Grafana Loki, Prometheus Alertmanager, or PagerDuty. This ensures that the team is notified as soon as an issue arises.
Strategies for Quick Recovery:
Implement automatic rollback procedures using feature flags and deployment strategies such as blue-green deployments or canary releases.
Use scripts or custom CI/CD logic to switch between versions if a critical incident is detected.
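A rough sketch of that rollback logic appears below; the error-rate source and flag store are hypothetical stand-ins for your crash-reporting and feature-flag services, and the threshold is an example value.

```python
# Sketch: flip a feature flag back to the stable release when the
# canary's error rate crosses a threshold.
ERROR_RATE_THRESHOLD = 0.05  # e.g. 5% of sessions failing

def get_canary_error_rate() -> float:
    # Stand-in for a query to your crash-reporting tool.
    return 0.08  # illustrative value

def set_flag(name: str, value: str) -> None:
    # Stand-in for your feature-flag service's API call.
    print(f"flag {name} -> {value}")

def check_and_rollback() -> None:
    rate = get_canary_error_rate()
    if rate > ERROR_RATE_THRESHOLD:
        set_flag("active_release", "stable")  # route traffic back
        print(f"Rolled back: canary error rate {rate:.1%}")
    else:
        print(f"Canary healthy: error rate {rate:.1%}")

check_and_rollback()
```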
Weaving Typo into Your Workflow
After implementing these technical solutions, teams can leverage Typo for seamless DORA metrics integration. Typo can help consolidate data and make metric tracking more efficient and less time-consuming.
For teams looking to streamline the integration of DORA metrics tracking, Typo offers a solution that is both powerful and easy to adopt. Typo provides:
Automated Deployment Tracking: By integrating with existing CI/CD tools, Typo collects deployment data and visualizes trends, simplifying the tracking of deployment frequency.
Detailed Lead Time Analysis: Typo’s analytics engine breaks down lead times by stages in your pipeline, helping teams pinpoint delays in specific steps, such as code review or testing.
Real-Time Incident Response Support: Typo includes incident monitoring capabilities that assist in tracking MTTR and offering insights into incident trends, facilitating better response strategies.
Seamless Integration: Typo connects effortlessly with platforms like Jenkins, GitLab, GitHub, and Jira, centralizing DORA metrics in one place without disrupting existing workflows.
Typo’s integration capabilities mean engineering teams don’t need to build custom scripts or additional data pipelines. With Typo, developers can focus on analyzing data rather than collecting it, ultimately accelerating their journey toward continuous improvement.
Establishing a Continuous Improvement Cycle
To fully leverage DORA metrics, teams must establish a feedback loop that drives continuous improvement. This section outlines how to create a process that ensures long-term optimization and alignment with development goals.
Regular Data Reviews: Conduct data-driven retrospectives to analyze trends and set goals for improvements.
Iterative Process Enhancements: Use findings to adjust coding practices, enhance automated testing coverage, or refine build processes.
Team Collaboration and Learning: Share knowledge across teams to spread best practices and avoid repeating mistakes.
Empowering Your Mobile Development Process
DORA metrics provide mobile engineering teams with the tools needed to measure and optimize their development processes, enhancing their ability to release high-quality apps efficiently. By integrating DORA metrics tracking through automated data collection, real-time monitoring, comprehensive testing pipelines, and advanced incident response practices, teams can achieve continuous improvement.
Tools like Typo make these practices even more effective by offering seamless integration and real-time insights, allowing developers to focus on innovation and delivering exceptional user experiences.
Think of reading a book with multiple plot twists and branching storylines. While engaging, it can also be confusing and overwhelming when there are too many paths to follow. Just as a complex storyline can confuse readers, high Cyclomatic Complexity can make code hard to understand, maintain, and test, leading to bugs and errors.
In this blog, we will discuss why high cyclomatic complexity can be problematic and ways to reduce it.
What is Cyclomatic Complexity?
Cyclomatic Complexity is a software metric developed by Thomas J. McCabe in 1976. It indicates the complexity of a program by counting its decision points.
A higher Cyclomatic Complexity score reflects more execution paths and, therefore, greater complexity. A low score signifies fewer paths and, hence, less complexity.
Cyclomatic Complexity is calculated from the control flow graph of a program:
M = E - N + 2P
where:
M = Cyclomatic Complexity
E = number of edges (flow of control)
N = number of nodes (blocks of code)
P = number of connected components
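As a quick worked example, consider the small function below. For structured code in a single connected component (P = 1), M also equals the number of decision points plus one, which is how most tools count it in practice.

```python
# Illustrative only: three decision points (loop condition, if, elif),
# so M = 3 + 1 = 4.
def classify_and_sum(values):
    total = 0
    for v in values:      # decision point 1: loop condition
        if v > 0:         # decision point 2
            total += v
        elif v < 0:       # decision point 3
            total -= v
        # v == 0 contributes nothing
    return total
```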
Why is High Cyclomatic Complexity Problematic?
Increases Error Proneness
The more complex the code, the greater the chance of bugs. When there are many possible paths and conditions, developers may overlook certain conditions or edge cases during testing, and it becomes challenging to test all of them, leading to defects in the software.
Leads to Cognitive Complexity
Cognitive complexity refers to the level of difficulty in understanding a piece of code.
Cyclomatic Complexity is one of the factors that increases cognitive complexity: many branches make it overwhelming for developers to process the information, which makes the overall logic of the code harder to understand.
Difficulty in Onboarding
Codebases with high Cyclomatic Complexity make onboarding difficult for new developers or team members. The learning curve becomes steeper, and they need more time and effort to understand the code and become productive. It also invites misunderstanding: they may misinterpret the logic or overlook critical paths.
Higher Risks of Defects
More complex code leads to more misunderstandings, which in turn results in more defects in the codebase. Complex code is also more error-prone because it hinders adherence to coding standards and best practices.
Rise in Maintenance Efforts
With a complex codebase, the software development team may struggle to grasp the full impact of their changes, which introduces new errors and slows down the process. It also produces ripple effects: isolating changes becomes difficult because one modification can impact multiple areas of the application.
How to Reduce Cyclomatic Complexity?
Function Decomposition
Single Responsibility Principle (SRP): This principle states that each module or function should have a defined responsibility and one reason to change. If a function is responsible for multiple tasks, it can result in bloated and hard-to-maintain code.
Modularity: This means dividing large, complex functions into smaller, modular units so that each piece serves a focused purpose. It makes individual functions easier to understand, test, and modify without affecting other parts of the code.
Cohesion: Cohesion means keeping related code together within functions and modules. When related functions are grouped, the result is high cohesion, which helps with readability and maintainability.
Coupling: Avoid excessive dependencies between modules. This reduces complexity and makes each module more self-contained, enabling changes without affecting other parts of the system.
Conditional Logic Simplification
Guard Clauses: Implement guard clauses to exit from a function as soon as a disqualifying condition is met. This avoids deep nesting and keeps the main logic of the function readable and simple (see the sketch after this list).
Boolean Expressions: Use De Morgan's laws to simplify Boolean expressions and reduce the complexity of conditions. For example, rewriting !(A && B) as !A || !B can sometimes make the code easier to understand.
Conditional Expressions: Consider using ternary operators or switch statements where appropriate. This condenses complex conditional branches into more concise expressions, which enhances readability and reduces code size.
Flag Variables: Avoid unnecessary flag variables that track control flow. Restructuring the logic to eliminate these flags usually leads to simpler and cleaner code.
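Here is a minimal before/after sketch of the guard-clause item above, with `dispatch` as a stand-in for real shipping logic and an order represented as a plain dict.

```python
def dispatch(order):
    # Stand-in for real shipping logic.
    return "shipped"

def ship_order_nested(order):
    # Before: three levels of nesting to read through.
    if order is not None:
        if order["paid"]:
            if order["in_stock"]:
                return dispatch(order)
            else:
                return "backorder"
        else:
            return "awaiting payment"
    else:
        return "no order"

def ship_order_guarded(order):
    # After: guard clauses exit early; the happy path reads flat.
    if order is None:
        return "no order"
    if not order["paid"]:
        return "awaiting payment"
    if not order["in_stock"]:
        return "backorder"
    return dispatch(order)
```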
Loop Optimization
Loop Unrolling: Expand the loop body to perform multiple operations in each iteration. This is useful for loops with a small number of iterations as it reduces loop overhead and improves performance.
Loop Fusion: When two loops iterate over the same data, you may be able to combine them into a single loop. This enhances performance by reducing the number of loop iterations and boosting data locality.
Loop Strength Reduction: Consider replacing costly operations in loops with less expensive ones, such as using addition instead of multiplication where possible. This will reduce the computational cost within the loop.
Loop Invariant Code Motion: Prevent redundant computation by moving calculations that do not change with each loop iteration outside of the loop.
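To illustrate the last item, here is a small before/after sketch in which an invariant factor is hoisted out of the loop; the discount formula is invented for the example.

```python
import math

def total_before(prices, rate):
    total = 0.0
    for p in prices:
        factor = 1.0 - math.tanh(rate)  # invariant: recomputed every pass
        total += p * factor
    return total

def total_after(prices, rate):
    factor = 1.0 - math.tanh(rate)      # hoisted out of the loop
    return sum(p * factor for p in prices)
```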
Code Refactoring
Extract Method: Move repetitive or complex code segments into separate functions. This simplifies the original function, reduces complexity, and makes code easier to reuse.
Introduce Explanatory Variables: Use intermediate variables to hold the results of complex expressions. This makes code more readable and lets others understand its purpose without deciphering complex operations (see the sketch after this list).
Replace Magic Numbers with Named Constants: Magic numbers are hard-coded numbers in code. Instead of directly using them, create symbolic constants for hard-coded values. It makes it easy to change the value at a later stage and improves the readability and maintainability of the code.
Simplify Complex Expressions: Break down long, complex expressions into smaller, more digestible parts to improve readability and reduce cognitive load on the reader.
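The following sketch combines explanatory variables with a named constant replacing a magic number; the retry rules are invented purely for illustration.

```python
MAX_RETRIES = 3  # named constant instead of a magic number

def should_retry_dense(attempt, status, elapsed):
    return attempt < 3 and (status >= 500 or status == 429) and elapsed < 30

def should_retry_readable(attempt, status, elapsed):
    under_retry_limit = attempt < MAX_RETRIES
    transient_failure = status >= 500 or status == 429
    within_time_budget = elapsed < 30
    return under_retry_limit and transient_failure and within_time_budget
```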
Design Patterns
Strategy Pattern: This pattern allows developers to encapsulate algorithms within separate classes or callables. By delegating responsibilities to these strategies, you can avoid complex conditional statements and reduce overall code complexity (a compact sketch follows this list).
State Pattern: When an object has multiple states, the State Pattern can represent each state as a separate class. This simplifies conditional code related to state transitions.
Observer Pattern: The Observer Pattern helps decouple components by allowing objects to communicate without direct dependencies. This reduces complexity by minimizing the interconnectedness of code components.
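Here is a compact, hedged sketch of the Strategy Pattern item above, using plain callables in a registry rather than full classes; the shipping rules are invented for the example, but the complexity reduction is the same idea.

```python
# Each shipping strategy is a plain callable registered in a dict.
def flat_rate(weight_kg: float) -> float:
    return 5.0

def by_weight(weight_kg: float) -> float:
    return 1.2 * weight_kg

def express(weight_kg: float) -> float:
    return 10.0 + 2.5 * weight_kg

SHIPPING_STRATEGIES = {
    "flat": flat_rate,
    "weight": by_weight,
    "express": express,
}

def shipping_cost(method: str, weight_kg: float) -> float:
    # One lookup instead of an if/elif chain per method.
    return SHIPPING_STRATEGIES[method](weight_kg)

print(shipping_cost("express", 2.0))  # 15.0
```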
Code Analysis Tools
Static Code Analyzers: Static code analysis tools like Typo or SonarQube can automatically highlight areas of high complexity, unused code, or potential errors, allowing developers to identify and address complex code areas proactively.
Code Coverage Tools: Code coverage measures the percentage of a codebase exercised by automated tests. Tools like Typo measure code coverage and highlight untested areas, helping ensure that tests cover a significant portion of the code and surfacing untested parts and potential bugs.
Other Ways to Reduce Cyclomatic Complexity
Identify and remove dead code to simplify the codebase and reduce maintenance efforts. This keeps the code clean, improves performance, and reduces potential confusion.
Consolidate duplicate code into reusable functions to reduce redundancy and improve consistency. This makes it easier to update logic in one place and avoid potential bugs from inconsistent changes.
Continuously improve code structure by refactoring regularly to enhance readability and maintainability and to reduce technical debt. This ensures the codebase evolves to stay efficient and adaptable to future needs.
Perform peer reviews to catch issues early, promote coding best practices, and maintain high code quality. Code reviews encourage knowledge sharing and help align the team on coding standards.
Write comprehensive unit tests to ensure code functions correctly and to support easier refactoring in the future. They provide a safety net that makes it easier to identify issues when changes are made.
Typo - An Automated Code Review Tool
Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.
Key Features:
Supports top 8 languages including C++ and C#.
Understands the context of the code and fixes issues accurately.
Optimizes code efficiently.
Provides automated debugging with detailed explanations.
Standardizes code and reduces the risk of security breaches.
The Cyclomatic Complexity metric is critical in software engineering. Reducing it improves code maintainability, readability, and simplicity. By implementing the strategies above, software engineering teams can reduce complexity and create a more streamlined codebase. Tools like Typo's automated code review also help identify complexity issues early and provide quick fixes, enhancing overall code quality.
Burndown charts are essential instruments for tracking the progress of agile teams. They are simple and effective ways to determine whether the team is on track or falling behind. However, there may be times when a burndown chart is not ideal for teams, as it may not capture a holistic view of the agile team’s progress.
In this blog, we discuss that latter point, when and why burndown charts fall short, in greater detail.
What is a Burndown Chart?
A Burndown Chart is a visual representation of a team's progress, used in agile project management. It helps scrum teams and agile project managers assess whether a project is on track.
Its primary objective is to accurately depict time allocations and to plan for future resources.
Components of Burndown Chart
Axes
There are two axes: the horizontal axis represents time or iterations, and the vertical axis displays user story points.
Ideal Work Remaining
It represents the work an agile team would have remaining at a specific point in the project or sprint under ideal conditions.
Actual Work Remaining
It is a realistic indication of the team's progress, updated in real time. When this line is consistently below the ideal line, the team is ahead of schedule; when it is above, the team is falling behind.
Project/Sprint End
It indicates whether the team completed the project or sprint on time, behind schedule, or ahead of schedule.
Data Points
The data points on the actual work remaining line represent the amount of work left at specific intervals, i.e., daily updates (illustrated in the sketch below).
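To make the ideal and actual lines concrete, here is a tiny sketch with made-up numbers for a 10-day, 40-point sprint.

```python
# Sketch: ideal vs. actual remaining work on a sprint burndown.
total_points, sprint_days = 40, 10

# Ideal line: remaining work falls linearly to zero.
ideal = [total_points * (1 - day / sprint_days)
         for day in range(sprint_days + 1)]

# Actual line: remaining points recorded each day (illustrative data).
actual = [40, 40, 36, 33, 33, 28, 22, 18, 12, 5, 0]

for day, (i, a) in enumerate(zip(ideal, actual)):
    status = "behind" if a > i else "on/ahead"
    print(f"day {day:2}: ideal {i:5.1f}, actual {a:2} -> {status}")
```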
Types of Burndown Chart
There are two types of Burndown Chart:
Product Burndown Chart
This type of burndown chart focuses on the big picture and visualizes the entire project. It helps project managers and teams monitor the completion of work across multiple sprints and iterations.
Sprint Burndown Chart
The Sprint Burndown Chart tracks the remaining work within a single sprint, indicating progress toward completing the sprint backlog.
Advantages of Burndown Chart
Visualizes Progress
A Burndown Chart captures how much work is completed and how much is left. It allows the agile team to compare actual progress against the ideal progress line and see whether they are ahead of or behind schedule.
Encourages Teams
A Burndown Chart motivates teams to align their progress with the ideal line. Small milestones boost morale and keep motivation high throughout the sprint, reinforcing the sense of achievement when tasks are completed on time.
Informs Retrospectives
It helps in analyzing performance during sprint retrospectives. Agile teams can review past burndown data to identify patterns, adjust future estimates, and refine processes for improved efficiency. It also lets them pinpoint periods where progress slowed, helping to uncover blockers that need to be addressed.
Shows a Direct Comparison
A Burndown Chart visualizes the direct comparison between planned work and actual progress. Teams can quickly assess whether they are on track to meet their goals and monitor trends or recurring issues such as over-committing or underestimating tasks.
The Burndown Chart Can Be Misleading Too. Here's Why
While the Burndown Chart comes with plenty of pros, it can be misleading as well. It focuses solely on tasks, without accounting for individual developer productivity, and it ignores aspects of agile software development such as code quality, team collaboration, and problem-solving.
A Burndown Chart doesn't explain how tasks affected developer productivity, or fluctuations caused by factors such as team morale, external dependencies, or unexpected challenges. It also says nothing about work quality, leaving underlying issues unaddressed.
Other Limitations of Burndown Chart
Oversimplification of Complex Projects
While the Burndown Chart is a visual representation of an agile team's progress, it fails to capture the intricate layers and interdependencies within a project. It overlooks critical factors that influence project outcomes, which may lead to misinformed decisions and unrealistic expectations.
Ignores Scope Changes
Scope creep refers to modifications in project requirements, such as adding new features or altering existing tasks. The Burndown Chart doesn't account for this; instead, it may show a flat line or even a decline in progress, which can suggest the team is underperforming when that isn't actually the case. This leads to misinterpretation of the team's progress and of overall project health.
Gives Equal Weight to all the Tasks
The Burndown Chart doesn't differentiate between easy and difficult tasks. It treats every task as equal, regardless of size, complexity, or effort required, and regardless of whether the task is high priority or low impact, obscuring insight into what truly matters for the project's success.
Neglects Team Dynamics
The Burndown Chart treats team members as interchangeable. It doesn't take individual contributions into consideration, nor factors such as personal challenges. It also neglects how well people work with each other, share knowledge, or support each other in completing tasks.
What are the Alternatives to the Burndown Chart?
Gantt Charts
Gantt charts are ideal for complex projects. They visually represent a project schedule along a horizontal time axis, providing a clear timeline for each task, when it starts and ends, and showing overlapping tasks and the dependencies between them.
Cumulative Flow Diagram
A CFD visualizes how work moves through different stages. It offers insight into workflow status and helps identify trends and bottlenecks. It also supports measuring key metrics such as cycle time and throughput.
Kanban Boards
Kanban boards are an agile management tool best suited for ongoing work. They help visualize work, limit work in progress, and manage workflows, and they can easily accommodate changes in project scope without requiring timeline adjustments.
Burnup Chart
A Burnup Chart plots work along two lines against a vertical axis: how much work has been done, and the total scope of the project. This provides a clearer picture of project completion.
Developer Intelligence Platforms
DI platforms focus on how smooth and satisfying the developer experience is. They streamline the development process and offer a holistic view of team productivity, code quality, and developer satisfaction. These platforms also provide real-time insights into metrics that reflect the team's overall health and efficiency, beyond task completion alone.
Typo - An Effective Sprint Analysis Tool
One such platform is Typo, which goes beyond traditional metrics. Its sprint analysis is an essential tool for any team using an agile development methodology. It allows agile teams to monitor and assess progress across the sprint timeline, providing visual insights into completed work, ongoing tasks, and remaining time. This visual representation helps teams spot potential issues early and make timely adjustments.
Our sprint analysis feature leverages data from Git and issue management tools to focus on team workflows, so teams can track task durations, identify frequent blockers, and pinpoint bottlenecks.
With easy integration into existing Git and Jira/Linear/ClickUp workflows, Typo offers:
Velocity Chart that shows completed work in past sprints
Sprint Backlog that displays all tasks slated for completion within the sprint
Issue tracking that shows the status of each sprint issue
Task duration measurement across the sprint
Blocker detection that highlights where work is delayed and identifies task blocks and their causes
Historical Data Analysis that compares sprint performance over time.
Together, these help agile teams stay on track, optimize processes, and deliver quality results efficiently.
While the burndown chart is a valuable tool for visualizing task completion and tracking progress, it often overlooks critical aspects like team morale, collaboration, code quality, and factors impacting developer productivity. There are several alternatives to the burndown chart, with Typo’s sprint analysis tool standing out as a powerful option. With it, agile teams gain a more comprehensive view of progress, fostering resilience, motivation, and peak performance.
Understanding the Human Side of DevOps: Aligning Goals Across Teams
One of the biggest hurdles in a DevOps transformation is not the technical implementation of tools but aligning the human side—culture, collaboration, and incentives. As a leader, it’s essential to recognize that different, sometimes conflicting, objectives drive both Software Engineering and Operations teams.
Engineering often views success as delivering features quickly, whereas Operations focuses on minimizing downtime and maintaining stability. These differing incentives naturally create friction, resulting in delayed deployment cycles, subpar product quality, and even a toxic work environment.
The key to solving this? Cross-functional team alignment.
Before implementing DORA metrics, you need to ensure both teams share a unified vision: delivering high-quality software at speed, with a shared understanding of responsibility. This requires fostering an environment of continuous communication and trust, where both teams collaborate to achieve overarching business goals, not just individual metrics.
Why DORA Metrics Outshine Traditional Metrics
Traditional performance metrics, often focused on specific teams (like uptime for Operations or feature count for Engineering), incentivize siloed thinking and can lead to metric manipulation. Operations might delay deployments to maintain uptime, while Engineering rushes features without considering quality.
DORA metrics, however, provide a balanced framework that encourages cooperative success. For example, by focusing on Change Failure Rate and Deployment Frequency, you create a feedback loop where neither team can game the system. High deployment frequency is only valuable if it’s accompanied by low failure rates, ensuring that the product's quality improves alongside speed.
In contrast to traditional metrics, DORA's approach emphasizes continuous improvement across the entire delivery pipeline, leading to better collaboration between teams and improved outcomes for the business. The holistic nature of these metrics also forces leaders to look at the entire value stream, making it easier to identify bottlenecks or systemic issues early on.
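For illustration only, here is one simple way the paired metrics could be derived from a deployment log. The record format is invented; in practice these numbers come from your CI/CD pipeline and incident tracker.

```python
from datetime import datetime

# Hypothetical deployment log: timestamp plus whether the change caused a failure.
deployments = [
    {"at": datetime(2024, 9, 2, 10), "failed": False},
    {"at": datetime(2024, 9, 3, 15), "failed": True},
    {"at": datetime(2024, 9, 5, 9), "failed": False},
    {"at": datetime(2024, 9, 6, 14), "failed": False},
    {"at": datetime(2024, 9, 9, 11), "failed": False},
]

# Deployment Frequency: deploys per week over the observed window.
window = deployments[-1]["at"] - deployments[0]["at"]
deploys_per_week = len(deployments) / (window.days / 7)

# Change Failure Rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"deployment frequency: {deploys_per_week:.1f} per week")
print(f"change failure rate: {change_failure_rate:.0%}")
```

Reporting the two together is the point: neither number looks good on its own if the other deteriorates.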
Leveraging DORA Metrics for Long-Term Innovation
While the initial focus during your DevOps transformation should be on Deployment Frequency and Change Failure Rate, it’s important to recognize the long-term benefits of adding Lead Time for Changes and Time to Restore Service to your evaluation. Once your teams have achieved a healthy rhythm of frequent, reliable deployments, you can start optimizing for faster recovery and shorter change times.
A mature DevOps organization that excels in these areas positions itself to innovate rapidly. By decreasing lead times and recovery times, your team can respond faster to market changes, giving you a competitive edge in industries that demand agility. Over time, these metrics will also reduce technical debt, enabling faster, more reliable development cycles and an enhanced customer experience.
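Extending the same hypothetical log format, the other two DORA metrics could be sketched like this; the timestamps and record structure are again invented for illustration.

```python
from datetime import datetime

# Lead time runs commit -> production deploy; time to restore runs
# incident detected -> service restored. All records are hypothetical.
changes = [
    (datetime(2024, 9, 2, 9), datetime(2024, 9, 2, 16)),   # (committed, deployed)
    (datetime(2024, 9, 3, 11), datetime(2024, 9, 4, 10)),
    (datetime(2024, 9, 5, 8), datetime(2024, 9, 5, 13)),
]
incidents = [
    (datetime(2024, 9, 3, 15), datetime(2024, 9, 3, 17)),  # (detected, restored)
]

lead_hours = [(deployed - committed).total_seconds() / 3600
              for committed, deployed in changes]
restore_hours = [(restored - detected).total_seconds() / 3600
                 for detected, restored in incidents]

print(f"average lead time for changes: {sum(lead_hours) / len(lead_hours):.1f} h")
print(f"mean time to restore: {sum(restore_hours) / len(restore_hours):.1f} h")
```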
Building a Culture of Accountability with Metrics Pairing
One overlooked aspect of DORA metrics is their ability to promote accountability across teams. By pairing Deployment Frequency with Change Failure Rate, for example, you prevent one team from achieving its goals at the expense of the other. Similarly, pairing Lead Time for Changes with Time to Restore Service encourages teams to both move quickly and fix issues effectively when things go wrong.
This pairing strategy fosters a culture of accountability, where each team is responsible not just for hitting its own goals but also for contributing to the success of the entire delivery pipeline. This mindset shift is crucial for the success of any DevOps transformation. It encourages teams to think beyond their silos and work together toward shared outcomes, resulting in better software and a more collaborative work environment.
Early Wins and Psychological Momentum: The Power of Small Gains
DevOps transformations can be daunting, especially for teams that are already overwhelmed by high workloads and a fast-paced development environment. One strategic benefit of starting with just two metrics—Deployment Frequency and Change Failure Rate—is the opportunity to achieve quick wins.
Quick wins, such as reducing deployment time or lowering failure rates, have a significant psychological impact on teams. By showing progress early in the transformation, you can generate excitement and buy-in across the organization. These wins build momentum, making teams more eager to tackle the larger, more complex challenges that lie ahead in the DevOps journey.
As these small victories accumulate, the organizational culture shifts toward one of continuous improvement, where teams feel empowered to take ownership of their roles in the transformation. This incremental approach reduces resistance to change and ensures that even larger-scale initiatives, such as optimizing Lead Time for Changes and Time to Restore Service, feel achievable and less stressful for teams.
The Role of Leadership in DevOps Success
Leadership plays a critical role in ensuring that DORA metrics are not just implemented but fully integrated into the company’s DevOps practices. To achieve true transformation, leaders must:
Set the right expectations: Make it clear that the goal of using DORA metrics is not just to “move the needle” but to deliver better software faster. Explain how the metrics contribute to business outcomes.
Foster a culture of psychological safety: Encourage teams to see failures as learning opportunities. This cultural shift helps improve the Change Failure Rate without resorting to blame or fear.
Lead by example: Show that leadership is equally committed to the DevOps transformation by adopting new tools, improving communication, and advocating for cross-functional collaboration.
Provide the right tools and resources: For DORA metrics to be effective, teams need the right tools to measure and act on them. Leaders must ensure their teams have access to automated pipelines, robust monitoring tools, and the support needed to interpret and respond to the data.
Typo: Accelerating Your DevOps Transformation with Streamlined Documentation
In your DevOps journey, the right tools can make all the difference. One often overlooked aspect of DevOps success is the need for effective, transparent documentation that evolves as your systems change. Typo, a dynamic documentation tool, plays a critical role in supporting your transformation by ensuring that everyone—from engineers to operations teams—can easily access, update, and collaborate on essential documents.
Typo helps you:
Maintain up-to-date documentation that adapts with every deployment, ensuring that your team never has to work with outdated information.
Reduce confusion during deployments by providing clear, accessible, and centralized documentation for processes and changes.
Improve collaboration between teams, as Typo makes it easy to contribute and maintain critical project information, supporting transparency and alignment across your DevOps efforts.
With Typo, you streamline not only the technical but also the operational aspects of your DevOps transformation, making it easier to implement and act on DORA metrics while fostering a culture of shared responsibility.
Starting a DevOps transformation can feel overwhelming, but with the focus on DORA metrics—especially Deployment Frequency and Change Failure Rate—you can begin making meaningful improvements right away. Your organization can smoothly transition into a high-performing, innovative powerhouse by fostering a collaborative culture, aligning team goals, and leveraging tools like Typo for documentation.
The key is starting with what matters most: getting your teams aligned on quality and speed, measuring the right things, and celebrating the small wins along the way. From there, your DevOps transformation will gain the momentum needed to drive long-term success.
Webinar: ‘The Hows and Whats of DORA' with Dave Farley and Denis Čahuk
October 7, 2024 • 51 min read
In this DORA exclusive webinar, hosted by Kovid from Typo, notable software engineers Dave Farley and Denis Čahuk discuss the profound impact of DORA metrics on engineering productivity.
Dave, co-author of 'Continuous Delivery,' emphasized the transition to continuous delivery (CD) and its significant benefits, including systematic quality improvements and efficient software release cycles. Denis, a technical coach and TDD/DDD expert, shared insights into overcoming resistance to CD adoption. The discussion covered the challenges associated with measuring productivity, differentiating between continuous delivery and continuous deployment, and the essential role of team dynamics in successful implementation. The session also addressed audience questions about balancing speed and quality, using DORA metrics effectively, and handling burnout and engineering well-being.
Timestamps
00:00 - Introduction
00:14 - Meet the Experts: Dave Farley and Denis Čahuk
01:01 - Dave Farley's Journey and Hobbies
02:38 - Denis Čahuk's Passion for Problem Solving
06:37 - Challenges in Adopting Continuous Delivery
11:34 - Engineering Mindset and Continuous Improvement
Kovid Batra: All right. So time to get started. Uh, thanks for joining in for this DORA exclusive webinar, The Hows and Whats of DORA session three, powered by Typo. I am Kovid, founding member at Typo and your host for today's webinar. With me today, I have two extremely passionate software engineers. Please welcome the DORA expert tonight, Dave Farley. Dave is a co-author of award-winning books, Continuous Delivery, Modern Software Engineering, and a pioneer in DevOps. Along with him, we have the technical coach, Denis Čahuk, who is TDD, DDD expert, and he is a stress-free high-performance development culture provider in the tech teams. Welcome to the show, both of you. Thank you so much for joining in.
Dave Farley: Pleasure. Thank you for having me.
Denis Čahuk: Thank you for having me.
Kovid Batra: Great guys. So I think we will take it one by one. Uh, so let's, let's, let's start with, uh, I think, uh, Dave first. Uh, so Dave, uh, this is a ritual that we follow on this webinar. You have to tell us about yourself, uh, that your LinkedIn profile doesn't tell. So you have to give us a quick, sweet intro about yourself.
Dave Farley: Okay. Um, I'm a long-time software developer who really enjoys problem-solving. I really enjoy that aspect of the job. I, if you want, if you want to get me, get me to come and work at your place, you tell me that the problem's hard to solve. And that's, that's the kind of stuff that I like, and I've spent much of my career doing some of those hard to solve problems and figuring out ways in which to make that easier.
Kovid Batra: Great. All right. So I think, Dave, uh, apart from that, uh, anything that you love beyond software engineering that you enjoy doing?
Dave Farley: Yeah, my wife says that my hobby is collecting hobbies. So, so I'm, I'm a guitarist. I used to, I used to play in rock bands years ago. Um, I, until fairly recently, I was a member of the British aerobatics team, flying competition aerobatics in a 300 horsepower, plus 10, minus 10 G, uh, aerobatic airplane, which, which was awesome, but, uh, I don't do that anymore. I've stopped very recently.
Kovid Batra: That's amazing, man. That's really amazing. Great. Thank you. Thank you so much for that, uh, intro about yourself and, uh, Denis over to you, man.
Denis Čahuk: Um, like Dave, I really like problem solving, but, but I like involving, uh, I spent the beginning of my career in focusing too much on the compiler and I like focusing on the human problems as well. So how, what, what makes the team tick and in particular with TDD, it really, really scratched an itch about what makes teams resistant and what makes teams a little bit more open to change and improvement and dialogue, especially dialogue. Uh, that has become my specialty since. So yes, I brand myself as a TDD, DDD coach, but that's primarily there to drive engagement. I'm, I'm super interested in engineering leadership and specifically what drives trends and what helps people, what helps, uh, engineers, engineering teams overcome their own resistance, sort of, if they're in their own way, you know, why is that there, how to, how to resolve any kind of, um, blockers, let's say, human blockers, not, not, not the compiler kind, uh, in engineering things. I don't plan any planes, but I do have, I do share, uh, Dave's passion for music. So I do have a guitar and, uh, the drum there behind me. So whenever I'm not streaming or coding, I am jamming out as much as I can.
Kovid Batra: Perfect. Perfect, man. All right. So I think it's time we get started and move to the, to move to the main section. Uh, so the first thing that I love to talk to you, uh, Dave first, uh, so you have this, uh, YouTube channel, uh, and it's not in your name, right? It's, it's Continuous Delivery. Uh, what, what makes Continuous Delivery so important to you?
Dave Farley: Somebody else said this to me very recently, which, which I agree with, which is that I think that Continuous Delivery, without seeming too immodest, because my name's associated with it, but I think it represents a step change in what we can do as software developers. I think it's a significant step forward in our ability to create better software faster. If you embrace the ideas of continuous delivery, which includes things like test-driven development and DDD, as Denis was describing, and is very team-centered as well, which Denis was also talking about. If you, if you embrace those ideas and adopt the disciplines of continuous delivery, which fundamentally, all devolve into one idea, which is working software is always in a releasable state, then you get quite dramatically better outcomes. And I think without too much fear of contradiction, continuous delivery represents the state of the art in software development. It's what the best organizations at software development do. And so, I think it's an important idea and it's as I said, although I sound rather immodest because I'm one of the people that helped at least put the language to it, but people were doing these things, but Jez, Jez and my book define the language around which continuous delivery talking is usually structured these days. Um, and so, so I think it's an important idea and I think that software engineering is one of the most important things that we do in our society and it matters a lot and we ought to be better at it as an industry and I think that this is how we get better at it. So, so I get an awful lot of job satisfaction and personal pleasure on trying to help people on their journey towards achieving continuous delivery.
Kovid Batra: And I think you're being just modest here. Your book just didn't define or give a language there. It did way, way more than that. And, uh, kudos to you for that. Uh, I think my next question would be like, what's that main ingredient, uh, that separates a team following CD and a team not following CD? What do you think makes the big difference there?
Dave Farley: There are some easy answers. Let me just tackle the difficult answer first, because I think the difficulty with continuous delivery is that the idea is simple, but it's so challenging to most people that it's very difficult to adopt. It challenges the way in which we think about software. I think it challenges to some degree. I'm a bit of a pop psychologist. I think in many ways it challenges, um, our very understanding of what software is to some extent, and certainly what software development is. And that's difficult. That means that it changes every person's role in undertaking this. It, as I said already, it's a much more team centered approach, I think, uh, to be able to achieve this permanent releasability of our software. But fundamentally, I think if you want to boil it down to more straightforward concepts to think about, I think that what we're talking about here is kind of applying what I think of as a kind of scientific rationalism to solving problems in software. And so the biggest part of that, the two biggest ideas there, from my point of view, are working in small steps and essentially, treating each of those steps as a little experiment and assuming that we're going to be wrong. So it's always one of the big ideas in science is that you start off assuming that your ideas are wrong, and then you try and figure out how and why they're wrong. I think we do the same thing in continuous delivery and software engineering, modern software engineering. We try to figure out how can we detect where our ideas are wrong, and then we try and detect where they're wrong, in those places and find out if they're wrong or not and then correct them. And that's how we build a better software. And so this, I think that goes quite deep and it affects quite a lot about how we undertake our work. But I think that one of the step changes in capability is that I think that previous thinking about software development kind of started off from the assumption that our job is to get everything perfectly right from the start. And that's simply irrational and impossible. And so, instead of taking a more scientific mindset and starting off assuming that we will be wrong, and so we give ourselves the freedom to be wrong and the ability to um, recover from it easily is almost the whole game.
Kovid Batra: Got it. I think Denis has a question. He wants to, yeah, please go ahead.
Denis Čahuk: Sure. I'm going to go off script. I think I like that distinction of psychologist. Sometimes I feel myself, find myself in a similar role. And I think the core disagreement comes from this idea of a lot of engineers, organizational owners, CTOs don't like this idea that their code is an experiment. They want some like certain assurances that it has been inspected and that it's, it's not, it's not something that we expect to fail. So from their perspective, non-CD adopters think that the scientific rationale is hard inspection towards requirements rather than conducting an experiment. And I see that, um, sort of providing a lot of resistance regarding CD adoption cause it is very hard to do, or it's very hard to come from that rationale and say, okay, we're now doing CD, but we're not doing CD right now. We're adopting CD right now. So we're kind of doing it, but not doing it. And it just creates a lot of tension and resistance in companies. Did you find similar situations? How do you, how do you sort of massage this sort of identity shift identity crisis?
Dave Farley: Yeah. Yeah I think, I think absolutely that's a thing and, and that is the challenge. It is that is to try and find ways to help those people to see the light. So I know I sound like an evangelist. Yeah, but, but I guess I see that as part of my role. But..
Denis Čahuk: You did write the book, so..
Dave Farley: Yeah, so, so, so I think this is in everybody's interest. I mean, the data backs me up. The DORA data says that if you adopt the practices of continuous delivery, you spend 44 percent as an organization more time on building new features than if you don't. That's pretty slam dunk in terms of value as far as I'm concerned, and there's lots more to it than that. But, you know, so why wouldn't anybody want to be able to build better software faster? And this is the best way that we know of so far, how to do that. So, so that seems like a reasonably rational way of deciding that this is a good idea, but that's not enough to change people's minds. And you've got to change people's minds in all sorts of different ways. Um, I think it's important to make these sorts of things, but going back to those people that you said that, you know, engineers who think it's their job to get it right first time, they don't understand what engineering is. Managers who want to build the software more quickly, get more features out. They don't understand what building software more quickly really means because if either of those groups knew those things, they'd be shouting out and demanding continuous delivery, because it's the thing that you need. We don't know the right answers first time. Look at any technology. Let alone any product and its history. Look at the aeroplane. The first aeroplane that could carry a person under power in a controllable way was the Wright Flyer in 1903. And for the first 20 or 30 years, all aeroplanes were death traps. People were, they were such dangerous devices. But engineering as a discipline adopted an incremental approach to learning and discovery to improve the airplane. And by 2017, two thirds of the planet, the equivalent of two thirds of the population of the planet, flew in commercial airliners and nobody was killed. That's what engineering does. It's an incremental process. It doesn't, we don't, we never ever, ever get it right first time. The iPhone, the first iPhone didn't have an app store, didn't have a camera, didn't have Siri, didn't have none of these things, didn't..
Denis Čahuk: Multitasking.
Dave Farley: Didn't have multitasking, all of these things. And now we have these amazing devices in our pockets that can do all sorts of amazing things that the original designers of the iPhone didn't actually predict. I'm sure that they had vague wishes in their minds, but they didn't predict them ahead of time. That's not how engineering works. So the way that engineering works is by exploration and discovery. And we need to, to be good at it, we need to organize to be good at exploration and discovery. And the way that, you know, so if we want to build things more efficiently, then we would, we need to adopt the disciplines that allow us to make these mistakes and accept that we will and look, you know, detect them as quickly as we can and learn from them as quickly as we can. And that's, you know, that's why, to my mind, you know, the results of the DORA thing, so there's no trade-off between speed and quality because you work in these small steps, you get faster feedback on, on whether your ideas are good or bad. So those small steps are important. And then when you find out that they're a bad idea, you correct them. And that's how you get to good.
Kovid Batra: Totally. I think, uh, one very good point, uh, here, we are sure like now CD and other practices like TDD impact engineering in a very positive way, improving the overall productivity and actually delivering value and the slam dunk like 44 percent more value delivered, right? But when it really comes to proving that number to these teams, uh, do you, like, do you use any framework? Do you use like DORA or SPACE to tell whether implementing CD was effective in a way? How do you measure that impact?
Dave Farley: No, most, mostly I recommend that people use the DORA metrics. Um, I, let me just talk momentarily about that because I think that that's important. I think the work of Nicole and the rest of the team in starting off the DORA was close to genius in identifying, as far as I can think of, the only generic measures in software. If you think about what, what the, the DORA metrics of stability and throughput measure, um, it's, um, the quality of the software that we produce and the rate at which we can produce software of that quality. That stability is the quality. Throughput is the efficiency with which we can produce software of that quality. Those are fundamental. They say nothing at all about the nature of the problem we're solving, the technology we're using, or anything else. If you're writing, if you're configuring SAP to do a better job of whatever it is that you're trying to do, that's still a good measure of success, stability and throughput. Um, if I'm writing some low-level code for an operating system, that's still a good measure of success. It doesn't matter. So, so we have these generic measures. Now they aren't enough to measure everything that's important in software. What they do is that they tell us whether we're building software right. They don't tell us whether we're building the right software, for example. So we need different kinds of experiments to understand other aspects of software. But I don't think there's much else. There's nothing else that I can think of that's in the same category as stability and throughput in terms of the generality of those measurements. And so, if you want a place to start of what to measure, start with stability and throughput and then figure out how to measure the other things because they're going to be dependent on your context.
I'm a big fan of Site Reliability Engineering as a model for this. It talks in terms of, um, um, SLOs and SLIs, Service Level Indicators and Service Level Objectives. So the Service Level Indicator is what measure will determine the success of this service. So you identify, for every single feature, you identify what you should measure to know whether it's good or not. And then you set an objective of what score on that scale you want to achieve for this thing. That's a good way of measuring things, but it's kinda difficult. The huge difference is it's completely contextual, not even application by application, but feature by feature. So one feature might improve the latency, another feature might improve the rate at which we recruit new customers. And we've got to figure out, you know, that's how we get experimental with those kinds of things, by being more specific about and targeted with what we measure. I am skeptical of most of the generic measures. Not because I don't want them, it's just that I don't think that most of the others are generic and do what we want them to. Um, I'm not quite sure what I make of the SPACE framework, which is Nicole's new work on measuring developer, developer productivity. She's very smart and very good at the research-driven stuff. Uh, I spoke to her about some of this stuff on my, my podcast and, um, she had interesting things to say about it. I am still nervous of measuring individual developer productivity because as Denis said in his introduction, one of the really important things is how well a team works. So I think modern software development, unless it's building something trivial, usually is a team game. It's a matter of people coming together and organizing themselves in a way to be able to achieve some goal. And that takes an awful lot, and you can have people working with different levels of skill, experience, diligence, who may be still contributing strongly to the team, even if they're not pulling their weight in other respects. So I think it's a complicated thing to measure, a very human thing to measure. So, so I'm a bit suspect of that, but I'm fairly confident that Nicole will have some data that proves me wrong. But I, you know, that's, that's my position so far.
Kovid Batra: Totally makes sense. I think with almost all the frameworks, there have been some level of challenges and so is with DORA, SPACE, but I think in your experience, when, when you have seen, uh, and you have helped teams implement such practices, uh, what do you think have become the major reasons where they get stuck, not implementing these frameworks, not implementing proper engineering metrics? What, what, what stops them from doing it? What stops them from adopting it?
Dave Farley: I think specifically with using DORA, um, there are some complexities. If you, if you, if you are in a, a regular kind of organization that hasn't been working in the ways in which we've been talking about so far, um, then measuring stuff, just, just measuring stuff is hard. You're not used to doing it. The number of organizations that I talked to that couldn't even tell you how much, excuse me, time was spent on a feature, they don't measure it. They don't know. And so just getting the basics in, the thinking in, to be able to start to be a little bit more quantitative on these things is hard. And that's hard for people like us probably to get our heads around a little bit because when you've got a working deployment pipeline, this stuff is actually pretty easy because you just instrument your deployment pipeline and it gives you all the answers pretty much. So I think that there's that kind of practical difficulty, but I don't think that's the big ticket problem. The big ticket problem is just the mindset, my, I am old enough and comfortable enough in my shoes to recognize that I'm a grumpy old man. Um, and part of my grumpy old manness is to look at our industry and think that our industry is largely a fashion industry. It's not a technical industry. And there's an awful lot of mythology that goes on in the software industry. That's simply nothing to do with doing a good job. It's just what everybody thinks everybody else is doing. And I think that's incredibly common. And you've got to overcome that because if you're talking to a team, I'm going to trample on some people's sacred cow right now, but if you're talking to a team that works with feature branching, the evidence is in. Feature branching doesn't work as well as trunk-based development. That's more learning that we got from the DORA metrics, measuring those. Teams that work with feature branches build slightly lower quality code and they do it slightly more slowly than teams working on trunk. Now the problem is, is that it's almost inconceivable how you can do trunk-based development safely to people that buy into the, what I would think of as the mythology of feature branching. The fact that it, it, you can do it safely and you can do it safely at scale with complicated software, they start to deny because they assume that, that, that you can't, because they can't think of how you would do it. And that's the kind of difficulty that, that you face. It's not that it's a rational way of thinking about it, because I, I think it's very easy to defend why trunk-based development and continuous integration are more true, more, more, more accurate. You know, you, you organize things so that there's one point of truth. And in feature branching, you don't have one point of truth, you have multiple points of truth. And so it's clear that it's easier to determine whether the one point of truth is correct than deciding that multiple points of truth, that you don't know how you're going to integrate them together yet, is correct. You can't tell.
So it's, it's, it's tricky. So I think that there are rational ways of thinking that help us to do this, which is why I started, I've started to think about and talk about what we do as engineering more than as craft or just software development. If we do it well, uh, it's engineering and if we do it well and use engineering, we get a better result, which is kind of the definition of what engineering is in another discipline. If we work in certain ways, we do get better results. I think that's important stuff. So it's very, very hard to convince people and to get them away from their, what I would think of as mythologies sometimes. Um, and it's also difficult to be able to have these kinds of conversations and not seem very dogmatic. I get accused of being dogmatic about this stuff all of the time. Being arrogant for a moment. I think there's a big difference between being dogmatic and being right. I, I think that if we talk about, you know, having evidence like the DORA metrics, having a model like the way that I describe how these things stitch together and the reasons why they work and just having a favorite way of doing things, there's a difference between those things. I don't like continuous integration because it's my favorite. I like continuous integration because it works better than anything else. I like TDD not because I think it's my ideal for designing software. It's just that it's a better way of designing software than anything else. That's my belief. And, and so it's difficult to have these kinds of conversations because inevitably, you know, my viewpoints are going to be covered, colored by my experiences and what I've seen. But I try hard to be honest myself as an aspiring engineer and scientific rationalist. I try to be true to myself and try to critique my own ideas to find the holes in them. And I think that's the best that we can do in terms of staying sane on these things.
Kovid Batra: Sure. I think on that note, I think Denis would also resonate with that fact, because last time when Denis and I were talking, he mentioned about how he's helping teams implement TDD and like taking away those roadblocks time to time. So I'm sure Denis has certain questions around that, and he would like to jump in. Denis, uh, do you have any questions?
Denis Čahuk: I have a few, actually, I need your help a little bit to stay on topic. Um, so Dave mentioned something really important that sort of touched me more than the rest, which is this sort of concern for measuring individual performance. And I've been following Nicole's work as well, um, especially with SPACE metrics and what the team topology community is doing now with flow engineering. Um, there, there is a, let's say, strong interest in the community and the engineering intelligence community to measure burnout, to measure.
Dave Farley: Mm-Hmm.
Denis Čahuk: So, so the, so to clarify, do we have a high-performing team that's burnt out or do we have a healthy team that's low-performing? And to really, and to really sort of start course correct in the right areas is very difficult to measure burnout without being individual because of the need for it to be a subjective experience. Um, and I share Dave's concern where the productivity metrics are being put in the same bucket as the psychological safety and burnout research. So, I'm wondering when you're dealing with teams, because I see this with product engineering, I see this with TDD, I see this with engineering leaders who are just resistant to this idea of, you know, are we burned out? Are we just tired and we're following the right process? Or is the process correct, but it's being implemented incorrectly? How do you, how do you navigate this rift? I mean, specifically, do you find any quick, uh, lagging indicators from the DORA metrics to help you a little bit, like to cajole the conversation a little bit more? Um, or do you go to other metrics, like SPACE metrics, et cetera, to sort of, or surveying to help you start some kind of continuous delivery initiative? So a lot of teams who are not doing CD, they do complain about burnout when they're sort of being asked to start just measuring everything, just out of, um, out of, I would say, fatigue.
Dave Farley: Yeah, and, and, uh, and, uh, it gets to the, uh, Matt and Manuel's thing from the team, the Team Topologies guys, you know, uh, uh, description of cognitive load. I know it's not their, their, their idea originally, but, but, but applying it to software teams. It's, it, I, I think burnout is primarily a matter of, a mix of cognitive load and excessive cognitive load and the freedom to direct your own destiny within a team, you know? You need, you need kind of the Daniel Pink thing, autonomy, mastery and purpose. You need freedom to do a good job. You need enough scope to be, and, and that those are the kinds of things that I think are important in terms of measuring high-performance teams. I think that it's a false correlation. Um, I know that recent versions of the, the DORA reports have thrown up some, what seemed to me to be, um, counterintuitive findings. So people saying things like working with continuous integration has, is correlated with increased levels of burnout. That makes no sense to me. I put this to, to Nicole when I spoke to her as well, and she was a little skeptical of that too, in terms of the methodology for collecting the data. That's no, it's no aspersion on the people. We all get these things wrong from time to time, but I'm distrustful of that result. But if that is the result, you know, I've got to change my views on things. But my experience, and that's in the absence of, of hard data, except that previous versions of DORA gave us hard data and now the finding seems to have changed. But my experience has been that teams that are good at continuous delivery don't burn out, because it's, it's sustainable. It's long-term sustainable. The LMAX team that, that I led in the beginning of that team have been going, how long is it now? Uh, about 15 years. And those, those people weren't burning, people weren't burning out, you know, and they're producing high-quality software still, um, and their process is working still. Um, so I I'm not, I, I think that mostly burnout is a symptom of something being wrong. Um, and something being wrong in terms of too much cognitive load and not enough control of your own destiny within the team. Now, that's complicated stuff to do well, and it gets into some of the, for want of a better term, softer things, the less technical aspects of organizing teams and leading teams and so on. So we need leaders that are inspirational, that can kind of set a vision and a direction, and also demonstrating the, the right behavior. So going home on time, not, not working all hours and, you know, not telling people off if things go wrong, if it's not their fault, and all these kinds of things. So we need.. The best teams in my experience, take a lot of personal responsibility for their work, but that's, that's doing it themselves. That's not externally forced on them, and that's a good thing because that makes you both be prouder of the things that you do and more committed to doing a good job, which is in everybody's interest.
So, so I think there's, I think there's quite a lot to this. And again, it's, none of it's easy, but I think that shaping to be able to keep our software in a releasable state and working in small steps, gathering feedback, focusing on learning all of those techniques, the kind of things that I talk about all the time are some of the tools that help us to at least have a better chance of reducing burnout. Now that, there are always going to be some individuals in any system that get burnt out for other reasons. You get burnt out because of pressures from home or because your dog died or whatever it might be. Um, but, you know, we need, we need to treat this stuff seriously because we need to take care of people even if that's only for pragmatic commercial reasons, that we don't want to burn people because that's not going to be good for us long term as an industry. I, I, I, that's not more the primary reason why I would do it. But if I'm talking to a hard-nosed commercial person, I still think it's in their interest to treat people well. And so, and so we need to be cautious of people and more caring of people in the workplace. It's one of the things that I think that ensemble programming, whether it's pairing or mobbing, are significantly better for, and probably that's counterintuitive to many people, because there's a degree to which pair programming in particular applies a bit of extra pressure. You're a bit more on your game. You get a bit more, more tired at the end of each day's work, but you also build better friendships amongst your, your, your team workers and you learn from one another more effectively and you can depend on one another. If you're having a bad day, your, your, your pair might pick up the pace and be, you know, sustaining productivity or whatever else. There are all these kinds of subtle complex interactions that go on to producing a healthy workspace where, where people can keep at it for a long, you know, a long time, years at a time. And I, I think that's really important.
I worked at a company called ThoughtWorks in, in the early 2000s, and during that period, ThoughtWorks and ThoughtWorks in London in particular where I worked, where I think some of the thought leaders in agile thinking, we were pushing the boundaries of agile projects at that time and doing all sorts of interesting things. So we experimented a lot. We tried out lots of different, you know, leading edge, bleeding edge, often ideas in, in development. One of those, I worked on one of the early teams in London that was doing full-blown lean and applying that to software development. Um, and one of the things that we found was that that tended to, to, to burn us out a little bit over months because it just started to feel a bit like a treadmill. There was no kind of cadence to it because you just pick up a feature off the Kanban board, you'd work on that feature, you'd deliver the feature, you'd showcase the feature, you'd pick the next feature and you'd, and so on and so on and so on, and that was it. And you did that for months on end. And we were, we were, we were building good software. We were mostly having a good time, but over, over time it made us tired. So we started to think about how to make more social variants in the way in which we could do things. And we ended up doing the same thing, but also having iterations or most people would call them 'sprints' these days, of two weeks so that we could have a party at the end and celebrate the things that we did release, even though we weren't committing to what we'd release in the next two weeks. And, you know, we'd have some cake and stuff like that at the end, and all of those sorts of human things that just made it feel a little bit more different. We could celebrate our success and forget about our losses. Software development is a human endeavor. Let's not forget that and not try and talk, turn us into cogs in a machine. Let's treat us like human beings. Sorry. I'm off-road. I'm not sure if I answered your question.
Denis Čahuk: This is great. This is great, Dave. No need to apologize. We're enjoying this and I think our audiences as well.
Kovid Batra: I'm sure. All right. So, Denis, uh, do you have any other question?
Denis Čahuk: Well, I would like to follow up on the story, the ThoughtWorks story that Dave just mentioned. You know, you mentioned you had evidence of high performance in that team. You know, we tend to forget that lean is primarily a product concern, not an engineering concern. So it sort of has to go through the wringer and to make sure, you know, does it apply to software engineering as well? And I have similar findings with things like lean, things like Kanban, particularly Scrum or the bad ways of doing Scrum is that it is, it can, it can show evidence of high performance, but not sustainably due to its lack of social component. And the retrospectives are a lame excuse at social components. It's just forcing people to judge each other and usually produces negative results rather than positive ones. So I'm wondering, you just mentioned this two-week iteration cycle for increments, but also you're leaning towards small batches. Are you still adamant on like this two-week barrier for social engagement? There does seem to be a difference.
Dave Farley: Yeah, so, so, so what we did is that we retained the lean kind of Kanban style planning. We just kept that as it was, but we kind of overlaid a two-week schedule where we would have a kickoff meeting at the start of an iteration, and we would have a little retrospective at the end of an iteration and we, you know, we would talk about the work that we did over that period. So, so we had this, this kind of different cycle and that was purely human stuff. It wasn't even visible really outside of the team. It was just the way that we organized our work so that we could just look ahead for, for, for what's coming downstream as far as our Kanban board said today, and look back at what, what, what we'd, you know, what we delivered over the pre, you know, the previous iteration. It was just that kind of thing. And that was enough to give us this more human cycle, you know, because we could be, we could be looking forward to, so I'm releasing this feature, we're nearly at the end, you know, we'll talk about that tomorrow or whatever else it is, you know, and it was just nice to kind of reconnect with the rest of the team in that way. And it just, we used it essentially, I suppose you could pragmatically look at it as just as a meeting schedule for, for, for the team-level stuff. I suppose you could look at it like that, but it was, it felt like a bit more, more than that to us. But I've, by default, if I'm in a position to control these things, that's how I've organized teams ever since. And that, that's how, that's how we worked at LMAX where we built our financial exchange. That's the organization that's been going for 15 odd years, um, doing this real high-performance version of continuous delivery.
Denis Čahuk: But to pick your brain, uh, Dave, sorry to interject. When you said, you separated out the work cycles from the social cycles, that does involve daily deployments, right? Like daily pairing, daily deployments. So the releases were separate from the meeting, uh, routine.
Dave Farley: Yes. Yeah, so, so, so we, we were, we were doing the, we were doing the, the, the, the Kanban continuous delivery kind of thing of when a feature was finished, it was ready to go. So, so we were working that way. Um, there were some limitations on that sometimes, but, but, but pretty much that, that's a very close approximation to an accurate statement, at least. Um, so, so we, we were working that way. Yeah. So we'd really, we'd essentially release on demand. We'd, we'd release when, you know, at every point when we were ready. And that was more often, usually, than once every two weeks. So the releases weren't, weren't forced to be aligned with those two week schedules. So it wasn't a technical thing at all. It was, uh, it was primarily a team social thing, but, but it worked. It worked very well.
Denis Čahuk: I really liked the brief mention about SPACE and Nicole's other work. Kovid and I are very active in the Google community. It's sort of organizing DORA-related events. And Google does have a very heavy interest in measuring well-being, measuring burnout, or just, you know, trying to figure out whether engineers and managers are actually really contributing or whether they're just slowing things down. And it's very hard to just judge from DORA metrics alone, or at least to get a clearer picture. Um, is there anything else you use for situational awareness? What would you recommend for either evidence of micromanagement, or maybe the team wants to do TDD, but there's sort of an anti-pairing stigma, if you have to, how would you approach, um, the sort of more survey-oriented, SPACE-oriented?
Dave Farley: From my experience, and I'm saying that with reservations, not with not, not, not boasting. I'm not saying because I've got great experience, but, but from my experience, I, I'm a little bit wary of trying to find quantitative ways of evaluating those things. These are very human things. So stuff like some of the examples that you mentioned, I, I've spent a significant proportion of my career as a leader of technical teams and I've always thought that it was a failure on my part as a leader of a technical team if I don't notice that somebody's struggling or that somebody's not pulling their weight or, or I haven't got the kind of relation, relationship where the team, if I, if I don't, if I don't know something, the team doesn't come and tell me and then I can help. I'm kind of in a weird position, for example, I'm in a slightly weird position in terms of career reviews. I think that as a manager or a leader, if you don't know the stuff that you find out in a review, you're not doing your job. You should be knowing that stuff all of the time. And it's kind of the Gemba thing. It's kind of walking around and being with the team. It's, it's spending time and understanding the team as a member of the team because that's what you are. You're not outside it. You're not different. You're a member of the team, so you should feel part of that and you should be there to help, help people guide their careers and steer them in the right direction of doing better and doing, doing good things from their point of view and from the organization's point of view. But to do that, you've got, you've got to understand a little bit about what's going on. And that feels like one of those very, very human things. It's about empathy, and it's about understanding. It's about communication, and it's about trust between, between the people. And I'm not quite sure how well you can quantify that stuff.
Denis Čahuk: I coach teams primarily through this kind of engagement, to rebuild trust.
Dave Farley: Yes.
Denis Čahuk: So I have found I have zero success rate in adopting TDD if the team isn't prepared to pair on it.
Dave Farley: Yeah.
Denis Čahuk: Once the team is pairing, once the team is assembling, TDD, continuous delivery, trunk-based development, no problem. Once they're prepared to sort of invest time into each other, just form friendships or if nothing else, cordial acquaintances, sort of, we can sort of, bridge that gap of, well, I want you to write a test so that he can go home and spend time with his kids without worrying about deployment. So that, that is the ulterior motive, not that there is some like, you know, fairytale fashion metric to tick a box on.
Dave Farley: Yeah.
Denis Čahuk: Um, since you mentioned quantitative metrics, to sort of backtrack a little bit on that and tie it together with TDD, did you find any lagging indicators of a team that, that did adopt TDD after you came in that, you know, what, what are the key metrics that are getting better, different after TDD adoption, or maybe leading indicators or perhaps leading indicators that say, hey, this more than anything else needs attention?
Dave Farley: So, so, so, so I think, I think, I think mostly, uh, stability. So, so it's a lagging indicator, but I, I think that's the measure that, you know, tells us whether you're doing a good enough job on quality. And if you're not doing TDD, mostly the data says you're not doing a good enough job on quality. There's a lot of other measures that kind of reinforce that picture, but fundamentally in terms of monitoring our performance day-to-day, I think stability is the best tool for that. Um, and, you know, so, so some, you know, so there's, I, I, I'm interested as a technologist from a technical point of view in some of the work that, um, Adam Tornhill, uh, uh, and CodeScene are doing in terms of red code and things like that. So patterns of use in code, the stuff that changes a lot and monitoring the stuff that changes a lot versus this stuff that, you know, where, where defects happen and all that kind of stuff. And so, you know, the crossover between sort of cyclomatic complexity and other measures of complexity in code and the need to change it a lot and all that kind of stuff. I think that's all interesting and kind of, but I see that as reinforcing this view of how important quality is. And fundamentally, we need to find ways of doing less work, managing our cognitive load to achieve higher quality, and that's what TDD does. So TDD isn't the end in itself. It's, it's a tool that gives us, that pushes us in the direction of the end that matters, which is building high-quality software and maintaining our ability to change it. And that's, again, that's what TDD does. So, so, so I think that TDD influences software in some deep ways that people that don't practice TDD miss all of the time.
And it's linked to lots of other practices. Like you said, um, you know, pairing is a great way of helping to introduce TDD, uh, particularly for our people that already know how to do TDD in the team. That's, that's the way that you spread it, certainly, but it's, I can't, I can't think of many things that, that, as I say, I'm wary of measures. I tend to either use tactical measures that just seem right in the context of what we're doing now, sort of treating each thing as an experiment and trying to figure out how to experiment on this thing and what do I need to measure to, to do that, or I use stability and throughput primarily.
Kovid Batra: Uh, I'll just, uh, take a pause here for all of us because, uh, we have a QnA lined up for the audience. And, uh, we will try to take like 30, 30 seconds of a break here and, uh, audience, you can get started, posting your questions. Uh, we are ready to take them.
Denis Čahuk: We already have a few comments and we had, uh,
Kovid Batra: Okay. I think, uh, we can start with the questions.
Denis Čahuk: Before we go into Paul's question. Paul has a great question. I just want to preface that by saying that not this one, the DORA-related one.
Kovid Batra: But I like this one more.
Denis Čahuk: Yes.
Kovid Batra: Dave, I think you have to answer this one. Uh, where do you get your array of t-shirts?
Dave Farley: So, so, so mostly I buy my t-shirts off a company based in Ireland called QWERTEE. "QWERTEE". And if you go to, if you go to any of my videos, there's usually a link underneath them where you can get a discount off the t-shirts because we did a deal with QWERTEE because, because so many people commented on my t-shirts.
Denis Čahuk: Great t-shirts. Well done.
Kovid Batra: Yeah. Denis.
Denis Čahuk: I just wanted to, I just wanted to preface Paul's other question regarding how to measure that, you know, Kovid and I are very active in the DORA communities on the Google, Google group, and by far the most asked questions are, how do I precisely measure X? How do I correctly measure this? My team does not follow continuous delivery. We have feature branches. How do I correctly measure this metric, that metric? Before we go into too much detail, I just wanna emphasize that if you're not measuring, if you're not doing continuous delivery, then the metrics will tell you that you should probably be doing continuous delivery. And..
Dave Farley: Yeah.
Denis Čahuk: The ulterior motive is how can we get to continuous delivery sooner? Not how can we correctly measure DORA metrics and continue doing feature branching. Yeah, that's that is generally the most trending conversation topic on these groups. And I just want to take a lot of time to sort of nail, like the, it's about the business. It's about continuous delivery, running experiments quickly, smoother, safely, sustainably, rather than directly measuring any kind of dysfunctional workflow. Or even if you can judge that your workflow is bad because the metrics don't track properly, which is usually where people turn towards DORA metrics.
Dave Farley: Yeah, I would add to that as well is that even if you, even if you get the measures and you use the measures, you're still not going to convince people; the measures alone aren't enough. You need, you need to approach this from a variety of different directions to start convincing people to change their minds over things, and that's without being disrespectful from those, of those people that differ in terms of their viewpoints, because it's hard to change your mind about something if you've, if you've made a career working in a certain way, it's hard to change the things that from the things that you've learned. Um, so this is challenging, and that's the downside of continuous delivery. It works better than anything else. It's the most fun way of organizing our work. It does tend to eliminate, in my experience, burnout in teams, all of these good things. You build better software more quickly working this way. But it's hard to adopt when you're not, when you've not done it before. Everybody that I know that's tried likes it better, but it's hard to make the change.
Denis Čahuk: It's a worthwhile change that manages a lot of stress and burnout, but that doesn't mean there aren't difficult conversations along the way.
Dave Farley: Sure.
Kovid Batra: All right, uh, moving on to the next one. Uh, how do you find the right balance between speed and quality while delivering software?
Dave Farley: The DORA metrics answer this question: there is no trade-off, so there is no need to balance. If you want more speed, you need to build with higher quality. If you want more quality, you need to build faster. Let's just explain that a little bit, because it's useful to have this idea in mind. We have to defend ourselves here, because it seems like a reasonable idea that there's a trade-off between speed and quality. It's just not true, but it seems like a reasonable idea. So if I build bad software this week, then next week I've got a load more pressure on me: I'm going to have all of next week's work, plus all of the cost of the bad software that I wrote this week. So it's obviously more efficient if I build good software this week, and then I don't have that extra work next week, and I can build good software next week as well. And what that plays out to is that that's where the 44 percent comes from. That's where the increase in productivity comes from. If we concentrate and organize our work to build higher-quality software, we save time. It doesn't cost time.
Now, there's a transition period. If you're busy working in a poor software development environment that's building crap software, then it's going to take you a while to learn some of these things. So there's an activation energy to get better at building software. But once you do, you will be going faster and building higher-quality software at the same time, because they come together. So what do we mean by fast, when we talk about going fast if you want high-quality software? Fundamentally, that's about working in smaller steps. We want to organize our work into much smaller steps so that after each small step, we can evaluate where we are and whether that step we took was a good one. And that's in all kinds of ways. Does my software do what I think it does? Does it do what the customer wants it to do? Is it making money in production, or whatever else it is? All of these things are learning points, and we need to build that more experimental mindset deep into the way that we work.
And the smart thing to do, to optimize all of this, is to make it easy to do the right things. Make it easy for us to carry out these small steps and these experiments. And that's what continuous delivery does. That's what the deployment pipeline fundamentally is for. It's an experimental platform that will give us a definitive statement on the releasability of our software multiple times per day. And that makes it easier to work in these small steps, do that quickly, and get high-quality results back.
Kovid Batra: Totally makes sense. Moving on, a question from Agustin: why is it so important, in your opinion, to differentiate between continuous delivery and continuous deployment, and how does that affect delivery process performance, also known as the DORA metrics?
Dave Farley: So let me first differentiate between them and then explain why I think it matters. Continuous delivery is working so that our software is always in a releasable state. Continuous deployment is built on top of continuous delivery: if all of your tests pass, you just push the change out automatically into production. And that's a really, really good thing. If you can get to that point where you can release small changes all of the time, that's probably the best way of optimising for this really fast feedback, all the way out to your end users. Now the problem is that there are some kinds of software where that doesn't make any sense. There are some kinds of software where, for a variety of reasons, depending on the technology, the regulation, or real practical limitations, we can't do that. So, Tesla are a continuous delivery company, but part of what they are continuously delivering is software embodied as silicon burnt into devices in the car. There's physics involved in burning the silicon, so you can't always release every change immediately that the software is done. That's not practical, so you have to manage that slightly differently. One of my clients, Siemens, build medical devices, and within the regulatory framework for medical devices that can kill people, you're not allowed to release into production all of the time. And so continuous delivery is the foundational idea, but continuous deployment is kind of the limit, I suppose, of where you can get to. If you're Amazon, continuous deployment makes a huge amount of sense. Amazon are pushing out changes, I think it's currently 1.5 changes per second. It might be more than that, it might be five changes per second, something ridiculous like that. But that's what they're doing, and so they're able to move ridiculously fast and learn ridiculously quickly, and so build better software. I think you can think of it from a more internally focused viewpoint as: they each optimize for slightly different things.
Continuous delivery gives us feedback on whether we are building things right, and continuous deployment gives us feedback on whether we're building the right things. We learn more about our product from continuous deployment by getting it into the hands of real users, monitoring that, and understanding the impact. We can't really get that kind of feedback any other way than getting out to real users; we don't learn those lessons until real users are really using it. Continuous delivery, though, gives us feedback on: does this do what we think it's doing? Is it good quality? Is it fast enough? Is it resilient enough? All of those kinds of things we can measure, and we can know them before we release. So they are slightly different things, and they balance off in different ways. They give us different levels of value. There's an excellent book that's recently been released on continuous deployment. I've forgotten the name of the author. Valentina, somebody, I think. I wrote the foreword, so I should remember the name of the author; I'm very embarrassed. But it's a really good book, and it goes into lots of detail about continuous deployment as distinct from continuous delivery. I think, but I suppose I would say this, wouldn't I, that continuous delivery is the more foundational practice here. I think this is one of the very, very few ideas where Jez Humble and I would come at it from slightly different perspectives. I've tended to spend the latter part of my career working in environments where continuous deployment wasn't practical. I was never going to get my clients to do it in the environments in which they were building things, and sometimes they couldn't even if they wanted to. I think Jez has worked in environments where continuous deployment was a little easier, and so that seems more natural. And I think that is why some of the DORA metrics, for example, measure efficiency based on assumptions, really, of continuous deployment.
So I think continuous deployment is the right target to aim for. You want to be able to release as frequently as is practicable, given the constraints on you, and you want to push at the boundaries of those constraints where you can. So, for example, working with Siemens, we weren't allowed to release software into production on medical systems in clinical settings, but we could release much more frequently to non-clinical settings. So we did that: we identified some non-clinical settings, and we released frequently to those places, in university hospitals, for example, and so on.
Kovid Batra: So I think it's almost time. Uh, and, uh, we do have more questions, but just because the stream is for an hour, uh, it's going to end. So we'll take those questions offline. Uh, I'll email the answers to you. Uh, audience, please don't be disappointed here. It's just in the interest of time that we'll have to stop here. Thank you so much, Dave, Denis, for this amazing, amazing session. It was nice talking to you and learning so much about CD, TDD, engineering metrics from you. Thank you so much once again.
Dave Farley: It's a pleasure. Thank you. Bye-bye. Thanks everyone.
Denis Čahuk: Thanks!
Measuring Project Success with DevOps Metrics
October 4, 2024 • 11 min read
Are you feeling unsure if your team is making real progress, even though you’re following DevOps practices? Maybe you’ve implemented tools and automation but still struggle to identify what’s working and what’s holding your projects back. You’re not alone. Many teams face similar frustrations when they can’t measure their success effectively.
But here’s the truth: without clear metrics, it’s nearly impossible to know if your DevOps processes are driving the results you need. Tracking the right DevOps metrics can make all the difference, offering insights that help you streamline workflows, fix bottlenecks, and make data-driven decisions.
In this blog, we’ll dive into the essential DevOps metrics that empower teams to confidently measure success. Whether you’re just getting started or looking to refine your approach, these metrics will give you the clarity you need to drive continuous improvement. Ready to take control of your project’s success? Let’s get started.
What Are DevOps Metrics?
DevOps metrics are statistics and data points that reflect the performance of a team's DevOps model. They measure process efficiency and reveal areas of friction between the phases of the software delivery pipeline.
These metrics are essential for tracking progress toward achieving overarching goals set by the team. The primary purpose of DevOps metrics is to provide insight into technical capabilities, team processes, and overall organizational culture.
By quantifying performance, teams can identify bottlenecks, assess quality improvements, and measure application performance gains. Ultimately, if you don’t measure it, you can’t improve it.
Key Categories of DevOps Metrics
DevOps metrics fall into these primary categories:
Software Delivery Metrics: Measure the speed and efficiency of software delivery.
Stability Metrics: Assess the reliability and quality of software in production.
Operational Performance Metrics: Evaluate system performance under load.
Security Metrics: Monitor vulnerabilities and compliance within the software development lifecycle.
Cost Efficiency Metrics: Analyze resource utilization and cost-effectiveness in DevOps practices.
Understanding these categories helps organizations select relevant metrics tailored to their specific challenges.
Why Metrics Matter: Driving Measurable Success with DevOps
DevOps is often associated with automation and speed, but at its core, it is about achieving measurable success. Many teams struggle with measuring their success due to inconsistent performance or unclear goals. It's understandable to feel lost when confronted with vast amounts of data and competing priorities.
However, the right metrics can simplify this process.
They help clarify what success looks like for your team and provide a framework for continuous improvement. Remember, you don't have to tackle everything at once; focusing on a few key metrics can lead to significant progress.
Key DevOps Metrics to Track for Success
To effectively measure your project's success, consider tracking the following essential DevOps metrics:
Deployment Frequency
This metric tracks how often your team releases new code. A higher frequency indicates a more agile development process. Deployment frequency is measured by dividing the number of deployments made during a given period by the total number of weeks/days. One deployment per week is standard, but it also depends on the type of product.
For example, a team working on a mission-critical financial application may aim for daily deployments to fix bugs and ensure system stability quickly. In contrast, a team developing a mobile game might release updates weekly to coincide with the app store's review process.
Lead Time for Changes
Measure how quickly changes move from development to production. Shorter lead times suggest a more efficient workflow. Lead time for changes is the length of time between when a code change is committed to the trunk branch and when it is in a deployable state, such as when code passes all necessary pre-release tests.
Consider a scenario where a developer submits a bug fix to the main codebase. The change is automatically tested, approved, and deployed to production within an hour. This rapid turnaround allows the team to quickly address customer issues and maintain a high level of service.
Change Failure Rate
This assesses the percentage of changes that cause issues requiring a rollback. Lower rates indicate better quality control. The change failure rate is the percentage of code changes that require hot fixes or other remediation after production, excluding failures caught by testing and fixed before deployment.
Imagine a team that deploys 100 changes per month, with 10 of those changes requiring a rollback due to production issues. Their change failure rate would be 10%. By tracking this metric over time and implementing practices like thorough testing and canary deployments, they can work to reduce the failure rate and improve overall stability.
Mean Time to Recovery (MTTR)
Evaluate how quickly your team can recover from failures. A shorter recovery time reflects resilience and effective incident management. MTTR measures how long it takes to recover from a partial service interruption or total failure, regardless of whether the interruption is the result of a recent deployment or an isolated system failure.
In a scenario where a production server crashes due to a hardware failure, the team's MTTR is the time it takes to restore service. If they can bring the server back online and restore functionality within 30 minutes, that's a strong MTTR. Tracking this metric helps teams identify areas for improvement in their incident response processes and infrastructure resilience.
These metrics are not about achieving perfection; they are tools designed to help you focus on continuous improvement. High-performing teams typically measure lead times in hours, have change failure rates in the 0-15 percent range, can deploy changes on demand, and often do so many times a day.
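To make the arithmetic concrete, here is a minimal Python sketch that computes all four metrics from a small set of records. The record layout (deployment timestamps, the originating commit time, a failure flag, and incident open/resolve times) is illustrative rather than a prescribed schema; in practice, these values would come from your CI/CD and incident-management tooling.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records: (deployed_at, commit_at, failed)
deployments = [
    (datetime(2024, 9, 2, 14, 0), datetime(2024, 9, 2, 9, 0), False),
    (datetime(2024, 9, 4, 11, 0), datetime(2024, 9, 3, 16, 0), True),
    (datetime(2024, 9, 9, 10, 0), datetime(2024, 9, 8, 15, 0), False),
]
# Hypothetical incident records: (opened_at, resolved_at)
incidents = [(datetime(2024, 9, 4, 11, 30), datetime(2024, 9, 4, 12, 0))]

weeks = 2  # length of the observation window

# Deployment frequency: deployments divided by the period length
deployment_frequency = len(deployments) / weeks

# Lead time for changes: median commit-to-deploy time, in hours
lead_time_h = median(
    (deployed - committed).total_seconds() / 3600
    for deployed, committed, _ in deployments
)

# Change failure rate: failed deployments as a share of all deployments
cfr_pct = 100 * sum(failed for _, _, failed in deployments) / len(deployments)

# MTTR: median time from incident open to resolution, in minutes
mttr_min = median(
    (resolved - opened).total_seconds() / 60 for opened, resolved in incidents
)

print(f"DF: {deployment_frequency:.1f}/week | lead time: {lead_time_h:.1f} h | "
      f"CFR: {cfr_pct:.0f}% | MTTR: {mttr_min:.0f} min")
```

Run against real data, numbers like these become the baselines discussed in the next sections.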
Common Challenges When Measuring DevOps Success
While measuring success is essential, it's important to acknowledge the emotional and practical hurdles that come with it:
Resistance to change
People often resist change, especially when it disrupts established routines or processes. Overcoming this resistance is crucial for fostering a culture of improvement.
For example, a team that has been manually deploying code for years may be hesitant to adopt an automated deployment pipeline. Addressing their concerns, providing training, and demonstrating the benefits can help ease the transition.
Lack of time
Teams frequently find themselves caught up in day-to-day demands, leaving little time for proactive improvement efforts. This can create a cycle where urgent tasks overshadow long-term goals.
A development team working on a tight deadline may struggle to find time to optimize their deployment process or write automated tests. Prioritizing these activities as part of the sprint planning process can help ensure they are not overlooked.
Complacency
Organizations may become complacent when things seem to be functioning adequately, preventing them from seeking further improvements. The danger lies in assuming that "good enough" will suffice without striving for excellence.
A team that has achieved a 95% test coverage rate may be tempted to focus on other priorities, even though further improvements could catch additional bugs and reduce technical debt. Regularly reviewing metrics and setting stretch goals can help avoid complacency.
Data overload
With numerous metrics available, teams might struggle to determine which ones are most relevant to their goals. This can lead to confusion and frustration rather than clarity.
A large organization with dozens of teams and applications may find itself drowning in DevOps metrics data. Focusing on a core set of key metrics that align with overall business objectives and tailoring dashboards for each team's specific needs can help manage this challenge.
Measuring success
Determining what success looks like and how to measure it in a continuous improvement culture can be challenging. Setting clear goals and KPIs is essential but often overlooked.
A team may struggle to define what "success" means for their project. Collaborating with stakeholders to establish measurable goals, such as reducing customer support tickets by 20% or increasing revenue by 5%, can provide a clear target to work towards.
If you're facing these challenges, remember that you are not alone. Start by identifying the most actionable metrics that resonate with your current goals. Focusing on a few key areas can make the process feel more manageable and less daunting.
How to Use DevOps Metrics for Continuous Improvement
Once you've identified the key metrics to track, it's time to leverage them for continuous improvement:
Establish baselines: Begin by establishing baseline measurements for each metric you plan to track. This will give you a reference point against which you can measure progress over time.
For example, if your current deployment frequency is once every two weeks, establish that as your baseline before setting a goal to deploy weekly within three months.
Set clear objectives: Define specific objectives for each metric based on your baseline measurements. For instance, if your current deployment frequency is once every two weeks, aim for weekly deployments within three months.
Implement feedback loops: Create mechanisms for gathering feedback from team members about processes and tools regularly used in development cycles. This could be through retrospectives or dedicated feedback sessions focusing on specific metrics.
After each deployment, hold a brief retrospective to discuss what went well, what could be improved, and any insights gained from the deployment metrics. Use this feedback to refine processes and inform future improvements.
Analyze trends: Regularly analyze trends in your metrics data rather than just looking at snapshots in time. For example, if you notice an increase in change failure rate over several weeks, investigate potential causes such as code complexity or inadequate testing practices.
Use tools like Typo to visualize trends in your DevOps metrics over time. Look for patterns and correlations that can help identify areas for improvement. For instance, if you notice that deployments with more than 50 commits tend to have higher failure rates, consider breaking changes into smaller batches.
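As an illustration of that kind of trend analysis, the sketch below groups deployments by batch size and compares failure rates. The 50-commit cutoff and the sample data are hypothetical; substitute whatever batch-size signal your own deployment log provides.

```python
from collections import defaultdict

# Hypothetical deployment log: (number_of_commits, failed)
deploys = [(12, False), (55, True), (8, False), (70, True), (30, False), (60, False)]

stats = defaultdict(lambda: [0, 0])  # batch-size bucket -> [failures, total]
for commits, failed in deploys:
    bucket = "large (>50 commits)" if commits > 50 else "small (<=50 commits)"
    stats[bucket][0] += failed
    stats[bucket][1] += 1

for bucket, (failures, total) in stats.items():
    print(f"{bucket}: {failures / total:.0%} failure rate across {total} deploys")
```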
Encourage experimentation: Foster an environment where team members feel comfortable experimenting with new processes or tools based on insights gained from metrics analysis. Encourage them to share their findings with others in the organization.
If a developer discovers a new testing framework that significantly reduces the time required to validate changes, support them in implementing it and sharing their experience with the broader team. Celebrating successful experiments helps reinforce a culture of continuous improvement.
Celebrate improvements: Recognize and celebrate improvements achieved through data-driven decision-making efforts—whether it's reducing MTTR or increasing deployment frequency—this reinforces positive behavior within teams.
When a team hits a key milestone, such as deploying 100 changes without a single failure, take time to acknowledge their achievement. Sharing success stories helps motivate teams and demonstrates the value of DevOps metrics.
Iterate regularly: Continuous improvement is not a one-time effort; it requires ongoing iteration based on what works best for your team's unique context and challenges encountered along the way.
As your team matures in its DevOps practices, regularly review and adjust your metrics strategy. What worked well in the early stages may need to evolve as your organization scales or faces new challenges. Remain flexible and open to experimenting with different approaches.
By following these steps consistently over time, you'll create an environment where continuous improvement becomes ingrained within your team's culture—ultimately leading toward greater efficiency and higher-quality outputs across all projects.
Overcoming Obstacles with Typo: A Powerful DevOps Metrics Tracking Solution
One tool that can significantly ease the process of tracking DevOps metrics is Typo—a user-friendly platform designed specifically for streamlining metric collection while integrating seamlessly into existing workflows:
Key Features of Typo
Intuitive interface: Typo's user-friendly interface allows teams to easily monitor critical metrics such as deployment frequency and lead time for changes, without extensive training or onboarding.
For example, the Typo dashboard provides a clear view of key metrics like deployment frequency over time so teams can quickly see if they are meeting their goals or if adjustments are needed.
Automated data collection
By automating data collection through integrations with popular CI/CD tools like Jenkins or GitLab CI/CD, Typo eliminates the manual reporting burden placed on developers, freeing them up to focus on delivering value rather than managing spreadsheets.
Typo automatically gathers deployment data from your CI/CD tools, so developers save time, reduce the risk of human error associated with manual data entry, and can concentrate instead on acting on insights derived directly from their own data.
Real-time performance dashboards
Typo provides real-time performance dashboards that visualize key metrics at a glance, enabling quick decision-making based on current performance trends rather than relying solely on historical data points.
The Typo dashboard updates in real time as new deployments occur, giving teams an immediate view of their current performance against goals. This allows them to quickly identify and address any issues arising.
Customizable alerts & notifications
With customizable alerts set up around specific thresholds (e.g., if the change failure rate exceeds 10%), teams receive timely notifications that prompt them to take action before issues escalate.
Typo allows teams to set custom alerts based on specific goals and thresholds—for example, receiving notification if the change failure rate rises above 5% over three consecutive deployments, helping catch potential issues early before they cause major problems.
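The underlying threshold logic is easy to express generically. The sketch below is a hypothetical illustration of such an alert rule, not Typo's actual configuration API: it flags when the failure rate over the last three deployments crosses a threshold.

```python
# Hypothetical outcomes of recent deployments, oldest first (True = failed)
recent_outcomes = [False, False, True, False, True, True]

WINDOW = 3        # number of consecutive deployments to evaluate
THRESHOLD = 0.05  # alert if the failure rate exceeds 5%

window = recent_outcomes[-WINDOW:]
failure_rate = sum(window) / len(window)

if failure_rate > THRESHOLD:
    print(f"ALERT: {failure_rate:.0%} failure rate over the last {WINDOW} deployments")
```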
Integration capabilities
Typo effortlessly integrates with various project management tools (like Jira) alongside monitoring solutions (such as Datadog), providing comprehensive insights into both development processes and operational performance simultaneously.
Typo empowers organizations to simplify metric tracking without overwhelming users, letting them concentrate on improving results through informed decision-making based on actionable insights derived directly from their own data.
Embracing the DevOps Metrics Journey
As we conclude this discussion of measuring project success, effective DevOps metrics are an invaluable tool for driving continuous improvement initiatives and enhancing collaboration among stakeholders at every stage, from development through deployment to final delivery. By focusing on key indicators like deployment frequency and lead time for changes, together with change failure rate and mean time to recovery, you'll gain deeper insight into bottlenecks and optimize workflows accordingly.
While challenges may arise along the journey toward excellence in software delivery, tools like Typo, combined with a supportive culture fostered throughout the organization, will help you navigate these obstacles and unlock the full potential of every team member involved.
So take those first steps today!
Start tracking relevant metrics now, and watch improvements unfold, transforming not only how projects are executed but also the overall quality delivered across every product released moving forward.
“Why does it feel like no matter how hard we try, our software deployments are always delayed or riddled with issues?”
Many development teams ask this question as they face the ongoing challenges of delivering software quickly while maintaining quality. Constant bottlenecks, long lead times, and recurring production failures can make it seem like smooth, efficient releases are out of reach.
But there’s a way forward: DORA Metrics.
By focusing on these key metrics, teams can gain clarity on where their processes are breaking down and make meaningful improvements. With tools like Typo, you can simplify tracking and start taking real, actionable steps toward faster, more reliable software delivery. Let’s explore how DORA Metrics can help you transform your process.
What are DORA Metrics?
DORA Metrics consist of four key indicators that help teams assess their software delivery performance:
Deployment Frequency: This metric measures how often new releases are deployed to production. High deployment frequency indicates a responsive and agile development process.
Lead Time for Changes: This tracks the time it takes for a code change to go from commit to deployment. Short lead times reflect an efficient workflow and the ability to respond quickly to user feedback.
Mean Time to Recovery (MTTR): This indicates how quickly a team can recover from a failure in production. A lower MTTR signifies strong incident management practices and resilience in the face of challenges.
Change Failure Rate: This measures the percentage of deployments that result in failures, such as system outages or degraded performance. A lower change failure rate indicates higher quality releases and effective testing processes.
These metrics are essential for teams striving to deliver high-quality software efficiently and can significantly impact overall performance.
Challenges teams commonly face
While DORA Metrics provide valuable insights, teams often encounter several common challenges:
Data overload and complexity: Tracking too many metrics can lead to confusion and overwhelm, making it difficult to identify key areas for improvement. Teams may find themselves lost in data without clear direction.
Misaligned priorities: Different teams may have conflicting goals, making it challenging to work towards shared objectives. Without alignment, efforts can become fragmented, leading to inefficiencies.
Fear of failure: A culture that penalizes mistakes can hinder innovation and slow down progress. Teams may become risk-averse, avoiding necessary changes that could enhance their delivery processes.
Breaking down the 4 DORA Metrics
Understanding each DORA Metric in depth is crucial for improving software delivery performance. Let's dive deeper into what each metric measures and why it's important:
Deployment Frequency
Deployment frequency measures how often an organization successfully releases code to production. This metric is an indicator of overall DevOps efficiency and the speed of the development team. Higher deployment frequency suggests a more agile and responsive delivery process.
To calculate deployment frequency:
Track the number of successful deployments to production per day, week, or month.
Determine the median number of days per week with at least one successful deployment.
If the median is 3 or more days per week, it falls into the "Daily" deployment frequency bucket.
If the median is less than 3 days per week but the team deploys most weeks, it's considered "Weekly" frequency.
Monthly or lower frequency is considered "Monthly" or "Yearly" respectively.
The definition of a "successful" deployment depends on your team's requirements. It could be any deployment to production or only those that reach a certain traffic percentage. Adjust this threshold based on your business needs.
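A minimal sketch of that bucketing logic, assuming a simple list of successful-deployment dates. Note that weeks with zero deployments are ignored here; a fuller version would include them when computing the median.

```python
from datetime import date
from statistics import median

# Hypothetical dates of successful production deployments
deploy_dates = [date(2024, 9, 2), date(2024, 9, 2), date(2024, 9, 4),
                date(2024, 9, 10), date(2024, 9, 11), date(2024, 9, 19)]

# Count distinct deploy days per ISO week
days_per_week = {}
for d in deploy_dates:
    week = d.isocalendar()[:2]  # (ISO year, ISO week number)
    days_per_week.setdefault(week, set()).add(d)

median_days = median(len(days) for days in days_per_week.values())

# Bucket according to the rule described above
if median_days >= 3:
    bucket = "Daily"
elif median_days >= 1:
    bucket = "Weekly"
else:
    bucket = "Monthly or lower"

print(f"Median deploy days per week: {median_days} -> {bucket}")
```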
Lead Time for Changes
Lead time for changes measures the amount of time it takes a code commit to reach production. This metric reflects the efficiency and complexity of the delivery pipeline. Shorter lead times indicate an optimized workflow and the ability to respond quickly to user feedback.
To calculate lead time for changes:
Maintain a list of all changes included in each deployment, mapping each change back to the original commit SHA.
Join this list with the changes table to get the commit timestamp.
Calculate the time difference between when the commit occurred and when it was deployed.
Use the median time across all deployments as the lead time metric.
Lead time for Changes is a key indicator of how quickly your team can deliver value to customers. Reducing the amount of work in each deployment, improving code reviews, and increasing automation can help shorten lead times.
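Here is a minimal sketch of that calculation. The commit SHAs, timestamps, and per-deployment change lists are hypothetical stand-ins for what your version control and deployment systems would provide.

```python
from datetime import datetime
from statistics import median

# Hypothetical commit timestamps keyed by SHA (from version control)
commit_times = {
    "a1b2c3": datetime(2024, 9, 2, 9, 15),
    "d4e5f6": datetime(2024, 9, 2, 13, 40),
    "071819": datetime(2024, 9, 3, 10, 5),
}

# Hypothetical deployments, each listing the commit SHAs it shipped
deployments = [
    {"deployed_at": datetime(2024, 9, 2, 17, 0), "shas": ["a1b2c3", "d4e5f6"]},
    {"deployed_at": datetime(2024, 9, 3, 18, 30), "shas": ["071819"]},
]

# Join each shipped change back to its commit time, then take the median
lead_times_h = [
    (dep["deployed_at"] - commit_times[sha]).total_seconds() / 3600
    for dep in deployments
    for sha in dep["shas"]
]
print(f"Lead time for changes (median): {median(lead_times_h):.1f} hours")
```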
Change Failure Rate (CFR)
Change failure rate measures the percentage of deployments that result in failures requiring a rollback, fix, or incident. This metric is an important indicator of delivery quality and reliability. A lower change failure rate suggests more robust testing practices and a stable production environment.
To calculate change failure rate:
Track the total number of deployments attempted.
Count the number of those deployments that caused a failure or needed to be rolled back.
Divide the number of failed deployments by the total to get the percentage.
Change failure rate is a counterbalance to deployment frequency and lead time. While those metrics focus on speed, change failure rate ensures that rapid delivery doesn't come at the expense of quality. Reducing batch sizes and improving testing can lower this rate.
Mean Time to Recovery (MTTR)
Mean time to recovery measures how long it takes to recover from a failure or incident in production. This metric indicates a team's ability to respond to issues and minimize downtime. A lower MTTR suggests strong incident management practices and resilience.
To calculate MTTR:
For each incident, note when it was opened.
Track when a deployment occurred that resolved the incident.
Calculate the time difference between incident creation and resolution.
Use the median time across all incidents as your MTTR metric.
Restoring service quickly is critical for maintaining customer trust and satisfaction. Improving monitoring, automating rollbacks, and having clear runbooks can help teams recover faster from failures.
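A minimal sketch of the MTTR calculation, assuming incident records with open and resolution timestamps (the field names are illustrative):

```python
from datetime import datetime
from statistics import median

# Hypothetical incidents: opened when detected, resolved by the fixing deploy
incidents = [
    {"opened": datetime(2024, 9, 4, 11, 0), "resolved": datetime(2024, 9, 4, 11, 45)},
    {"opened": datetime(2024, 9, 12, 22, 10), "resolved": datetime(2024, 9, 13, 0, 10)},
]

recovery_minutes = [
    (inc["resolved"] - inc["opened"]).total_seconds() / 60 for inc in incidents
]
print(f"MTTR (median): {median(recovery_minutes):.0f} minutes")
```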
By understanding these metrics in depth and tracking them over time, teams can identify areas for improvement and measure the impact of changes to their delivery processes. Focusing on the right metrics helps optimize for both speed and stability in software delivery.
If you are looking to implement DORA Metrics in your team, download the guide curated by DORA experts at Typo.
Starting with DORA Metrics can feel daunting, but here are some practical steps you can take:
Step 1: Identify your goals
Begin by clarifying what you want to achieve with DORA Metrics. Are you looking to improve deployment frequency? Reduce lead time? Understanding your primary objectives will help you focus your efforts effectively.
Step 2: Choose one metric
Select one metric that aligns most closely with your current goals or pain points. For instance:
If your team struggles with frequent outages, focus on reducing the Change Failure Rate.
If you need faster releases, prioritize Deployment Frequency.
Step 3: Establish baselines
Before implementing changes, gather baseline data for your chosen metric over a set period (e.g., last month). This will help you understand your starting point and measure progress accurately.
Step 4: Implement changes gradually
Make small adjustments based on insights from your baseline data. For example:
If focusing on Deployment Frequency, consider adopting continuous integration practices or automating parts of your deployment process.
Step 5: Monitor progress regularly
Use tools like Typo to track your chosen metric consistently. Set up regular check-ins (weekly or bi-weekly) to review progress against your baseline data and adjust strategies as needed.
Step 6: Iterate based on feedback
Encourage team members to share their experiences with implemented changes regularly. Gather feedback continuously and be open to iterating on your processes based on what works best for your team.
How Typo helps with DORA Metrics
Typo simplifies tracking and optimizing DORA Metrics through its user-friendly features:
Intuitive dashboards: Typo's dashboards allow teams to visualize their chosen metric clearly, making it easy to monitor progress at a glance while customizing views based on specific needs or roles within the team.
Focused tracking: By enabling teams to concentrate on one metric at a time, Typo reduces information overload. This focused approach helps ensure that improvements are actionable and manageable.
Automated reporting: Typo automates data collection and reporting processes, saving time while reducing errors associated with manual tracking so you receive regular updates without extensive administrative overhead.
Actionable insights: The platform provides insights into bottlenecks or areas needing improvement based on real-time data analysis; if cycle time increases, Typo highlights the specific stages in your deployment pipeline requiring attention.
By leveraging Typo's capabilities, teams can effectively reduce lead times, enhance deployment processes, and foster a culture of continuous improvement without feeling overwhelmed by data complexity.
“When I was looking for an Engineering KPI platform, Typo was the only one with an amazing tailored proposal that fits with my needs. Their dashboard is very organized and has a good user experience, it has been months of use with good experience and really good support”
When implementing DORA Metrics, teams often encounter several pitfalls that can hinder progress:
Over-focusing on one metric: While it's sensible to prioritize certain metrics based on team goals, overemphasizing one at the expense of the others can lead to unbalanced improvements; ensure all four metrics are considered in your strategy for a holistic view of performance.
Ignoring contextual factors: Failing to consider external factors (like market changes or organizational shifts) when analyzing metrics can lead you astray; always contextualize data within broader business objectives and industry trends to draw meaningful insights.
Neglecting team dynamics: Focusing solely on metrics without considering team dynamics can create a toxic environment where individuals feel pressured to hit numbers rather than encouraged to collaborate; foster open communication about successes and challenges, promoting a culture of learning from failures.
Setting unrealistic targets: Establishing overly ambitious targets can frustrate team members if they feel these goals are unattainable within reasonable timeframes; set realistic targets based on historical performance data while encouraging gradual improvement over time.
Key Approaches to Implementing DORA Metrics
When implementing DORA (DevOps Research and Assessment) metrics, it is crucial to adhere to best practices to ensure accurate measurement of key performance indicators and successful evaluation of your organization's DevOps practices. By following established guidelines for DORA metrics implementation, teams can effectively track their progress, identify areas for improvement, and drive meaningful changes to enhance their DevOps capabilities.
Customize DORA metrics to fit your team's needs
Every team operates with its own unique processes and goals. To maximize the effectiveness of DORA metrics, consider the following steps:
Identify relevant metrics: Determine which metrics align best with your team's current challenges and objectives.
Adjust targets: Use historical data and industry benchmarks to set realistic targets that reflect your team's context.
By customizing these metrics, you ensure they provide meaningful insights that drive improvements tailored to your specific needs.
Foster leadership support for DORA metrics
Leadership plays a vital role in cultivating a culture of continuous improvement. To effectively support DORA metrics, leaders should:
Encourage transparency: Promote open sharing of metrics and progress among all team members to foster accountability.
Provide resources: Offer training and resources that focus on best practices for implementing DORA metrics.
By actively engaging with their teams about these metrics, leaders can create an environment where everyone feels empowered to contribute toward collective goals.
Track progress and celebrate wins
Regularly monitoring progress using DORA metrics is essential for sustained improvement. Consider the following practices:
Schedule regular check-ins: Hold retrospectives focused on evaluating progress and discussing challenges.
Celebrate achievements: Take the time to recognize both small and significant successes. Celebrating wins boosts morale and motivates the team to continue striving for improvement.
DORA Metrics offer valuable insights that can transform software delivery processes, enhance collaboration, and improve quality. Understanding them deeply and implementing them thoughtfully positions an organization for success in delivering high-quality software efficiently.
Start with small, manageable changes, focus on one metric at a time, and leverage tools like Typo to support the journey toward better performance. Remember, every step forward counts toward creating a more effective development environment where continuous improvement thrives.
Typo hosted an exclusive live webinar titled 'The Hows and Whats of DORA', featuring Bryan Finster and Richard Pangborn. With over 150 attendees, we explored how DORA can be misused and learnt practical tips for turning engineering metrics into dev team success.
Bryan Finster, Value Stream Architect at Defense Unicorns and co-author of 'How to Misuse and Abuse DORA Metrics’, and Richard Pangborn, Software Development Manager at Method and advocate for Typo, brought valuable perspectives to the table.
The discussion covered DORA metrics' implementation and challenges, highlighting the critical role of continuous delivery and value stream management. Bryan provided insights from his experience at Walmart and Defense Unicorns, explaining the pitfalls of misusing DORA metrics. Meanwhile, Richard shared his hands-on experience with implementation challenges, including data collection difficulties and the crucial need for accurate observability. They also reinforced the idea that DORA metrics should serve as health indicators rather than direct targets. Bryan and Richard offered parting advice on using observability effectively and ensuring that metrics lead to meaningful improvements rather than superficial compliance. They both emphasized the importance of a supportive culture that sees metrics as tools for improvement rather than instruments of pressure.
The event concluded with an interactive Q&A session, allowing attendees to ask questions and gain deeper insights.
P.S.: Our next live webinar is on September 25, featuring DORA expert Dave Farley. We hope to see you there!
Timestamps
00:00 - Introduction
00:59 - Meet Richard Pangborn
02:58 - Meet Bryan Finster
04:49 - Bryan's Journey with Continuous Delivery
07:33 - Challenges & Misuse of DORA Metrics
20:55 - Richard's Experience with DORA Metrics
27:43 - Ownership of MTTR & Measurement Challenges
Kovid Batra: Hi, everyone. Thanks for joining in for our DORA exclusive webinar, The Hows and Whats of DORA, powered by Typo. I'm Kovid, founding member at Typo and your host for today's webinar. With me, we have two special people. Please welcome the DORA expert for tonight, Bryan Finster, who is an exceptional Value Stream Architect at Defense Unicorns and the co-author of the ebook, 'How to Misuse and Abuse DORA Metrics', and one of our product mentors, and Typo advocates, Richard Pangborn, who is a Software Development Manager at Method. Thanks, Bryan. Thanks, Rich, for joining in.
Bryan Finster: Thanks for having me.
Richard Pangborn: Yeah, no problem.
Kovid Batra: Great. So, like, before we, uh, get started and discuss about how to implement DORA, how to misuse DORA, uh, Rich, you have some questions to ask, uh, we would love to know a little bit about you both. So if you could just spare a minute and tell us about yourself. So I think we can get started with you, Rich. Uh, and then we can come back to Bryan.
Richard Pangborn: Sure. Yeah, sounds good. My name is Richard Pangborn. I'm the Software Development Manager here at Method. I've been a manager for about three years now, but I come from a Tech Lead role of five or more years. I started here as a junior developer when we were just in the startup phase, went through the series funding, the investments, the exponential growth. Today we're over a 100-person company with six software development teams. And yeah, Typo is definitely something that we've been using to help us measure ourselves and succeed. Some interesting things about myself: I was part of the company and team that succeeded when we did an Intuit hackathon. It was pretty impactful to me. We brought this giant check back with us from Cali all the way to Toronto, where we're located, and we got to celebrate with the whole company, everyone who put in all the hard work to help us succeed. That's what pushed me onto a management path, to mentor and help those that are junior or intermediate have that same sort of career path, and set them up for success.
Kovid Batra: Perfect. Perfect. Thanks, Richard. And something apart from your professional life, anything that you want to share with the audience about yourself?
Richard Pangborn: Uh, myself, um, I'm a gamer, um, I do like to golf, I do like to, um, exercise, uh, something interesting also is, um, I met my, uh, wife here at the company who I still work with today.
Kovid Batra: Great. Thank you so much, Rich. Bryan, over to you.
Bryan Finster: Oh, yes. I'm Bryan Finster. I've been a software developer for, oh, well, since 1996. So I'll let you do the math. I'm mostly doing enterprise development. I worked for Walmart for 19 of those years, um, in logistics for most of that time and, uh, helped pilot continuous delivery at Walmart inside logistics. I've got scars to show for it. Um, and then later moved to platform at Walmart, where I was originally in charge of the delivery metrics we were gathering to help teams understand how to do continuous delivery so they can compare themselves to what good continuous delivery looked like. And then later was asked to start a dojo at Walmart to directly pair with teams to help them solve the problem of how do we do CD. And then about a little over three years ago, I was, I joined Defense Unicorns as employee number three of three, uh, and we're, we're now, um, over 150 people. We're focused on how do we help the Department of Defense deliver, um, you know, do continuous delivery and secure environments. So it's a fun path.
Kovid Batra: Great, great. Perfect. And the same question to you. Something that LinkedIn doesn't tell about you, you would like to share with the audience.
Bryan Finster: Um, computers aren't my hobby. Uh, I, you know, it's a lot better than roofing. My dad had a construction company, so I know what that's like. Um, but I, I very much enjoy photography, uh, collecting watches, ride motorcycles, and build plastic models. So that's where I spend my time.
Kovid Batra: Nice. Great to know that. All right. So now I think, uh, we are good to go and start with the main section of, of our webinar. So I think first, uh, let's, let's start with you, Bryan. Um, I think you have been a long-time advocate of value streams, continuous delivery, DORA metrics. You just briefly told us about how this journey started, but let's, let's deep dive a little bit more into this. Uh, tell us about how value stream management, continuous delivery, all this as a concept started appealing to you from the point of Walmart and then how it has evolved over time for you in your life.
Bryan Finster: Sure. At Walmart, continuous delivery was the answer to a problem. We had a business problem: our lead time for change in logistics was a year. We were delivering every quarter with massive explosions. Every time we piloted, it was really stressful. Any time we did a big change of that order, we had planned 24-by-7 support for at least a week, and sometimes longer, and it was just a complete nightmare. And our SVP, instead of hiring in a bunch of consultants, because we'd been through a whole bunch of agile transformations over the years, asked the senior engineers in the area to figure out how we could deliver every two weeks. Now, if you can imagine these giant explosions happening every two weeks instead of every quarter, we didn't want that. And so we started digging in: how do we get that done? My partner in crime bought a copy of Continuous Delivery. We started reading that book cover to cover, pulling out everything we could, started building Jenkins pipelines with templates so the teams didn't have to go and build their own pipeline; they could just extend the base template, which was a pattern we took forward later. And we built a global platform. I started trying to figure out how we actually do the workflow that enables continuous delivery. I mean, we weren't testing at all. Think how scary that is. Other than, you know, handing it off to QA and saying, "Hey, test this for us."
And so I had to really dig into how we do continuous integration. And then that led into: what are the communication problems that are stopping us from getting information so we can test before we commit code? And then once you start doing that at the team level, what's preventing us from getting all the other information that we need outside the team? How do we get the connection? All the roadblocks that are preventing us from doing continuous delivery, how do we fix those? Which kind of let me fall backwards into value stream management, because now you're looking at the broader value stream; it's beyond just what your team can do. And so it's been a journey of solving that problem of how we allow every team to deploy independently of any other team, as frequently as they can.
Kovid Batra: Great. And how do DORA metrics and engineering metrics play a role while you are implementing these projects and taking up these initiatives?
Bryan Finster: Well, all this effort that we went on predated Accelerate coming out, but I was going to DevOps Enterprise Summit and learning as much as I could, starting in 2015, and talking to people about how we measure things, because I was actually sent to DevOps Enterprise Summit the first time to figure out how we measure whether we're doing it well. And then I started pulling together some metrics to show that we're progressing on this path to CD: how frequently we're integrating code, how many defects are being generated over time, and how often individuals on the team can deploy. Deploys per day per developer was a metric that Jim proposed back in 2015 as just a health metric: how are we doing? And then later, when we started the dojo in platform at Walmart, we were using a metrics-based approach to help teams. Continuous delivery was the method we were using to improve engineering excellence in the organization. We weren't doing any Agile frameworks; it was just, why can't we deliver change daily? And early on when we started building the platform, the first tool was the CI tool, and the second tool was how do we measure. We brought in Capital One's Hygieia, and then we gamified delivery metrics so we could show teams with a star rating how they were doing on integration frequency, build time, build success rate, deploy frequency, code complexity, that sort of thing, to show them: this is what good looks like, and here's where you are. That's it. Now, I learned a lot from that, and there are some things I would still gamify today, and some things I would absolutely not gamify. But that's where I spent a long time running that as the game master: how do we run the game to get teams to want to move, and show them where to go.
And then later, Accelerate came out, and the big thing that Accelerate did was validate everything we thought was true, all the experiences we had. The reason I'm so passionate about it is that the first experience with CD was such a morale improvement on the team that nobody ever wanted to work any other way, and when things later changed and they were forced by new leadership not to work that way, everyone who could left. That's just the reality of it. But Accelerate came out and said these exact things that we were seeing. And it wasn't just a one-off. It wasn't just localized to what we were seeing; it was everywhere.
Kovid Batra: Yeah, totally makes sense. I think it's been a burning topic now, and a lot of talks have been around it, at the team level and the system level. In fact, there's the McKinsey article that came out talking about dev productivity also. So I actually have a question there.
Bryan Finster: Oh, I shouldn't have read the article. Yeah, go ahead.
Kovid Batra: I mean, it's basically, it's basically talking about individual, uh, dev productivity, right? People say that it can be measured. So yeah. What's your take on that?
Bryan Finster: That's really dumb. If you want to absolutely kill outcomes, focus on HR metrics instead of outcome metrics. And so, I want to touch a little bit on the DORA metrics, I think. Having worked to apply those metrics on top of the metrics we were already using, some of them are useful, but you have to understand those came from surveys, and there are some of them where, if you try to measure them directly, you won't get useful data. And they don't tell you things are going well; they only tell you things are going poorly. You can't use them as the thing that tells you whether you're delivering value well, you know? They're something that causes you to ask questions about what might be going wrong or not, but it's not something you use like a dashboard.
Kovid Batra: Makes sense. And I think the book that you have written, 'How to Misuse and Abuse DORA Metrics', let's talk about that a little bit. You have summarized a lot of things there about how DORA metrics, or engineering metrics for that matter, should not be used. So when and how do you think teams should be using them? When do teams actually feel the need for these metrics, and in which areas?
Bryan Finster: Well, I think observability in general is something people don't pay enough attention to. And not just production observability, but how we are working as a team. Really, you have to think of it first from: what are we trying to do with product development? A big mistake people make is assuming that their idea is correct, and all we have to do is build something according to spec, make sure it tests according to spec, and deliver it when we're done, when fundamentally, the idea is probably wrong. And so the question is: how big of a chunk of wrong idea do I want to deliver to the end user, and how much money do I want to spend doing that? What we're trying to do is become much more efficient about how we make change, so we can make smaller changes at lower cost, so that we can be more effective about delivering value and deliver less wrong stuff. So what you're really trying to do is measure the way that we work and the way we test, to find areas where we can improve that workflow, so that we can reduce the cost and increase the velocity with which we can deliver change. So we can deliver smaller units of work more frequently, get faster feedback, and adjust our idea, right? If you're just looking at "Oh, we just need to deliver faster," you're not looking at why we want to deliver faster, which is to get faster feedback on the idea. And also, from my perspective, after 20 years of carrying a pager: to fix production very, very quickly and safely. I think those are both key things.
And so what we're trying to do with the metrics is identify where those problems are. In the paper I wrote for IT Revolution, which was about twice as long as they asked me for, on how to misuse and abuse DORA metrics, I went into the details of how we apply those metrics in real life, at Walmart, when we were working with teams to help them improve, and also using them on ourselves. I think if a team really wants to focus on improving, the first thing they should measure is how well they're doing at continuous integration: how frequently we're integrating code, how long it takes us to finish whatever a unit of work is, and how many defects we're generating over time as a trend. Measure trends, and improve all those trends at the same time.
Kovid Batra: How do we measure this piece where we are talking about measuring the continuous integration?
Bryan Finster: So, as an average on the team, how frequently are we integrating code? And you really want to be at least daily, right? And that's integrated to the trunk, not to some develop branch. And then also, people generally work on a task or a story or whatever it is; how long does it take to go from when we start that work until it's delivered? What's that time frame? There are other times within that we can measure, and that's when we get into value stream mapping; we can talk about that later. But we want small units of work, because you get higher-quality information from smaller units of work, and you're more predictable on delivery of that unit of work, which takes a lot of pressure off; it eliminates story points. But then you also have to balance those with the quality of what we did, and you can't measure that quality until it's in production, because test-to-spec doesn't mean it's good. 'Fit for purpose' means the user finds it good.
Kovid Batra: Right. Can you give us some examples of where you have seen implementing these metrics went completely south instead of working positively? Like how exactly were they abused and misused in a scenario?
Bryan Finster: Yeah, every single time somebody builds a dashboard without really understanding what the problems they're trying to solve are. I've seen lots of people over the years since Accelerate was published building dashboards to sell, but they don't understand the core problem they're trying to solve. But also, you have management who reads the book and says, "Hey, look at these four key metrics." You know, I helped cause this problem, which is why I work so hard to fix it. These metrics can tell you some things, but then they start using them as goals instead of health indicators that are contextual to individual teams. And when you start saying, "Hey, all teams must have this level of delivery frequency," well, maybe, but everybody has their own delivery context. You're not going to deliver to an air-gapped environment as frequently as you are to, you know, AWS, right? And so you have to understand what it is you're actually trying to do. What decisions are you going to make with any metric? What questions are you trying to answer before you go and measure it? You have to define what the problem is before you try to measure whether you're successful at correcting the problem.
Kovid Batra: Right. Makes sense. There are challenges that I've seen in teams. Of course, Typo is getting implemented in various organizations, and what we have commonly come across is that teams start using it, but when certain indicators are highlighted by those metrics, they're not sure what to do next.
Bryan Finster: Right.
Kovid Batra: So I'm sure you must.
Bryan Finster: Well, the reason why is that they didn't know why they were measuring it in the first place, right? Like I said, DORA metrics specifically tell you something, but they're very much trailing metrics, which is why I point to CI: the CI workflow is really the engine that starts driving improvement. Once you get better at that, you say, "Well, why can't I deliver today's work today?" and you start finding other things in the value stream that are broken. But then you have to identify them: okay, we see this issue here with code review; we have this handoff to another team downstream of development before we can deploy. How do we improve those? And how can we measure that we are improving? You have to ask the question first, and then come up with the metrics that you're using to evaluate success.
So when people say, "I don't know what to do with this number," it's because they started with a metric and then tried to figure out what to do with it, because someone told them it was a good metric. No metric is a good metric unless you know what you're doing with it. If I put a tachometer on a car and you think that more is better but you don't understand what the tachometer is telling you, you'll just blow up your engine.
Kovid Batra: But don't you think there's a way to start without knowing what to measure, and to identify what to measure from these metrics themselves? For example, we have certain benchmarks for different industries for each metric, right? Let's say I start looking at lead time, deployment frequency, mean time to restore, and various other metrics, and from there I try to identify where my engineering efficiency or productivity is getting impacted. Can it not be a top-down approach, where we find out what we need to actually measure and improve upon from those metrics themselves?
Bryan Finster: Only if you start with a question you're trying to answer. But I wouldn't compare. One of the other problems I have with the DORA metrics specifically, and I've talked to DORA at Google about this as well, is that some of the survey questions are nonspecific. "For the system you work on most of the time, how frequently do you deliver?" Well, are you talking about a thousand developers, a hundred developers, a team of eight? Your delivery frequency is going to be very much relative to the number of people working on it, plus other constraints outside of it. Yes, high performers deliver multiple times a day with lead times of less than an hour, except: what's the definition of lead time? There are two inside Accelerate, and they're different depending on how you read it. But that doesn't mean you should just copy what it says. You should look at it and say, "Okay, what am I trying to accomplish? How can I apply these ideas, not necessarily the metrics directly, to measure what I'm trying to measure and find out where my problems are?" You have to deep-dive into where your problems are, not just "measure these things and here are your benchmarks."
Kovid Batra: Makes sense. Makes sense. Richard, I think we have been talking for a while; if you have any questions, let's hear from you as well. You have used Typo, have been using it for a while now, and I'm sure that in this journey of implementing engineering metrics, DORA metrics, in your team, you would have seen certain challenges. Richard, the stage is yours.
Richard Pangborn: Yeah, sure. My research into using DORA metrics stemmed from building high-performing teams. We're always looking for continuous improvement, but we're really looking for ways to measure ourselves that make sense, that can't be totally gamed, that are like standards. What I liked about DORA was that it had counterbalancing metrics: throughput versus quality, time to repair versus time to build, speed versus stability. It's a nice counterbalancing effect. High-performing teams care about continuous improvement; they want to do better than they did last quarter or last month, and they want help with decision-making: better data to take some of the guesswork out of which area needs the most improvement, or which part of our pipeline is broken, maybe for continuous delivery or for quality. I also want to make sure the metrics ladder up. A lot of companies have different measurements at different levels: company level, department level, team level, individual level. With DORA, we were able to identify some that do ladder up, which is great.
There were some challenges with implementing DORA when we first started. One of the first was the complexity around data collection: accurately tracking and measuring DORA metrics. Deployment frequency, lead time for changes, change failure rate, time to recovery, they all come from different sources: CI/CD pipelines, version control systems, incident management tools. Integrating these data sources and ensuring they provide consistent results can be time-consuming, and it can be a little difficult to understand. So that was definitely one part of it. We haven't rolled out all four yet; we're still in the process, making sure that what we are measuring is accurate.
Bryan Finster: Yeah, and I'm glad you touched on the accuracy thing. When we would go work with teams and start collecting data, number one, we had data from the pipeline because it was embedded into the platform, and we knew the Git data was accurate, but the workflow data was going to be garbage unless the teams actually cared about using Jira correctly. So education step number one, while we were cleaning up the data in Jira, was educating them on why Jira should actually matter to them: it's not a time-tracking tool, it's a communication tool. We educated them so they would take it seriously, so the workflow data would be accurate, so they could then use it to identify where improvements could happen, because we were trying to teach them how to improve, not just to do what we said. I've built a few data collection tools since we started this, and collecting the data and showing where accuracy problems happen as part of the dashboard is something that needs to be understood, because people will just say, "Oh, the data's right." Especially with workflow data: one of the things we did on the last one I built was show where we're out of bounds, very high or very low. I'd have management telling me, "Look, we're doing really well, I've got stuff closing here really fast," and I'd say, you're telling me it took 30 seconds to do that piece of work? Yeah, the accuracy issues. And MTTR is something DORA has talked about ditching entirely, because it's a far too noisy metric if you're trying to collect it automatically.
Richard Pangborn: Yeah, we haven't started tracking MTTR yet. We're more concerned with throughput versus stability, which would drive the biggest change at the department level and the team level. I think that's made the difference so far. We also have a challenge with doing a lot of stuff manually, a lack of tooling and automation. There are a lot of manual measurements taking place, so, like you said, error-prone data collection and inconsistent processes. Once we get to a more automated state, I feel it will be a bit more successful.
Bryan Finster: Yeah. There's a dashboard I built for the Air Force; I'll send you a link later. It might be useful, I'm not sure. The other thing is that change failure rate is something people misunderstand a lot. I've combed through Accelerate multiple times; Walmart actually asked to reverse-engineer the survey for the book, so I've gone back through it in depth. Change failure rate is any defect. It's not an incident. If you go and read what Accelerate says about change failure rate, it's any defect, which it should be, because the idea can be wrong too. If the user reports it's defective and you say, "Well, that's a new feature," no: the idea was defective. It's not fit for purpose, unless it's some edge case, and we should track that as well, because that's part of our quality process, and change failure rate is trying to track our quality process.
Richard Pangborn: Another problem we had is mean time to recovery. Because we track our bugs and defects differently, they have different priorities. P0s here have to be fixed in less than 24 hours, priority 1 means five days, and priority 2 gives you two weeks. So trying to come up with an algorithm to accurately identify time to fix, I guess you'd have three or four different ones instead of one.
Bryan Finster: I've tried to solve that problem too, and especially on distributed systems it becomes very difficult. Who's getting measured on MTTR? Because MTTR, by definition, starts when the user sees impact, so really, whoever owns the user interface owns that metric, even if you're trying to help some other team improve their processes for recovery. It's just a really difficult metric to do anything with. I've tried to measure it directly; I've talked to Verizon, Capital One, and other people in the Dojo Consortium, and they've tried as well. Nobody's been successful at measuring it. I think there are better metrics out there for how fast we can resolve defects.
Richard Pangborn: One of the things we were concerned about at the beginning was resistance to measurement. Some people don't want to be measured.
Bryan Finster: That's because they've had management beating them over the head with metrics, so it's a massive fear thing, and it's cultural. You have to have a generative culture to make these metrics effective. One of the things we would do when we started working with teams is, number one, explain to them: we're not trying to judge you. We're like your doctor. We're working with you, in the trenches with you. These are all of our metrics, not just yours, and here's how to use them to help you improve. And if a manager comes and starts trying to beat you up with them, the data just stops being valid.
Richard Pangborn: Yeah. Well, some developers do want to know, "Am I doing well? How do I measure myself?" So this gives them a way to do that a little bit, but we told them: you set your own goals, improve yourself. Don't measure yourself against another developer on your team or anyone else; look for your own improvement.
Bryan Finster: Well, I think it's also really important that the smallest unit measured with delivery metrics is the team, not the person. If individuals are being measured, they're going to optimize for themselves instead of optimizing for team goals. This is something I've seen frequently. On our dojo team, we could walk into a team and see that if there were filters by individual developer, the team was seriously broken. I've seen managers who measured team members by how many Jira issues they closed, which meant code review was going to be delayed, mentoring was not going to happen, senior engineers would focus on easy tasks to get their numbers up instead of solving the hard problems, and design wasn't going to happen well because it wasn't a ticket. So you focus on team outcomes, measure team goals, and handle individual performance separately, because everybody has different roles on the team. People know that from an HR perspective, coaching by walking around is how you find out who's struggling. You go to the gemba and find out who's struggling. You can't measure people directly that way without impacting team goals and business goals.
Richard Pangborn: Yeah, we don't measure it as whether someone is successful or not; it's just something for them to watch themselves.
Bryan Finster: As long as somebody else can see it. I mean.
Richard Pangborn: Yeah, it's just for them, isn't it? Not for anyone else.
Bryan Finster: Yeah.
Richard Pangborn: Cool. Yeah, that's about it for me at the moment.
Kovid Batra: Perfect, perfect. Rich, if you are done with your questions, we have already started seeing questions from the audience.
Bryan Finster: There's one other thing I'd like to mention real quick before we go there.
Kovid Batra: Sure.
Bryan Finster: I also gave a talk about how to misuse and abuse DORA metrics. People think there are four key metrics to focus on, but read Accelerate: there's a lot more in that book that you should measure, including culture. It's important to look at this holistically and not just focus on these metrics to show how well we're doing at CD. The most valuable thing in Accelerate is Appendix A, not the four key metrics. That's number one. Number two: value stream maps. They're manual, but they give you far deeper insights into what's going wrong than the four key metrics will. So learn how to do value stream maps, and learn how to use them to identify problems and fix those problems.
Kovid Batra: And how exactly? I'm expecting an example here. When you're dealing with value stream maps, are you collecting data from systems, collecting data from people through surveys? What exactly are you creating here?
Bryan Finster: No, I don't collect any data from the system initially. If I'm doing a value stream map, it'll be bringing a team together. We're not doing it at the organization level; we're doing it at the team level. You bring a team together and talk about the process, starting from delivery and working backwards to initiation of how we deliver change. You get a consensus from the team about how long things take and how long things wait to start. Then you start seeing things like: oh, we do asynchronous code review, so I'm ready for code review to start, and four to eight hours later somebody picks it up and reviews it. Then I find out later that they're done and there are changes requested, maybe the next day. I go make those changes, resubmit it, and four to eight hours later somebody re-reviews it. And you see things like: well, what if we just sat down and discussed the change together and fixed it on the fly, and removed all that wait time? That would encourage smaller pieces of work, and we could deliver more frequently and get faster feedback. You can see immediate improvements from things like that, just by doing a value stream map. Bringing the team together will give you much higher-quality data than trying to instrument it, because not all of those things have data being collected anywhere.
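To make the arithmetic behind such a map concrete, here is a toy sketch of the summary a team might compute after agreeing on its steps. The step names and hours are hypothetical, and "flow efficiency" (active time divided by total lead time) is a standard lean summary rather than Bryan's specific method:

```python
# Each mapped step has active (touch) time and wait time in hours,
# as agreed by the team during the mapping session. Numbers are invented.
steps = [
    {"step": "waiting for code review", "active": 0.0, "wait": 6.0},
    {"step": "code review",             "active": 0.5, "wait": 0.0},
    {"step": "rework and re-review",    "active": 1.0, "wait": 6.0},
    {"step": "deploy approval",         "active": 0.2, "wait": 24.0},
]

active = sum(s["active"] for s in steps)
wait = sum(s["wait"] for s in steps)
total = active + wait
print(f"Total lead time: {total:.1f}h, of which waiting: {wait:.1f}h")
print(f"Flow efficiency: {active / total:.0%}")  # often surprisingly low
```

Seeing that most of the lead time is waiting, not working, is usually what motivates changes like synchronous review.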
Kovid Batra: Makes sense. All right, we'll take a minute's break and then start with the Q&A. So audience, please send in all the questions you have.
All right. Uh, we have the first question.
Bryan Finster: Yeah. So MTTR is a metric measuring customer impact: the moment from when a customer or user is impacted until they are no longer impacted. That doesn't mean you fixed the defect; it means they are no longer being impacted. Roll back, roll forward, it doesn't matter. That's what MTTR measures.
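A minimal sketch of that definition as arithmetic: MTTR as the mean of impact windows, where each window ends when users stop being impacted, whether by rollback or fix. The incident records here are hypothetical:

```python
# MTTR as the mean duration of user-impact windows.
# Each record is (impact_start, impact_end) in ISO format; invented data.
from datetime import datetime

incidents = [
    ("2024-06-01T10:00", "2024-06-01T10:25"),  # rolled back after 25 min
    ("2024-06-09T14:10", "2024-06-09T15:40"),  # rolled forward after 90 min
]

def mttr_minutes(records) -> float:
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60
        for start, end in records
    ]
    return sum(durations) / len(durations)

print(f"MTTR: {mttr_minutes(incidents):.0f} minutes")
```

The hard part, as discussed above, is not this calculation but collecting accurate start and end times automatically.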
Kovid Batra: Perfect. Let's, let's move on to the next one.
Bryan Finster: Yeah. So, there are some things I can set hard targets on as ways to know we're doing well. Integration frequency is one of those: if we're integrating once per day or better into the trunk, then we're doing a really good job of breaking down our work, and we're probably doing a good job of testing, as long as we keep our defects from blowing up. You can set targets for that. You can also set targets as a team, not something imposed on a team: something we as a team decide, like keeping story size to two days or less. Paul Hammant would say one day or less, but I think two days is a good limit; if it takes us more than two days, we start running into other dysfunctions that cause quality impact and issues with delivery. So I've built dashboards with a line on those two graphs that says "this is what good looks like," so teams can compare themselves to good. Other things you don't ever want to gamify: you don't ever want to measure test coverage and say, "Hey, this is what good test coverage looks like," because test coverage doesn't measure quality. It just measures how much code is executed by code that says it's a test, whether it's really a test or not. So you don't want to do that; that's a fail, and I learned it the hard way. Delivery frequency, of course, is relative to your delivery problem. You may be delivering every day, every hour, or every week, and that all could be good; it just depends. But you can make objective measurements on integration frequency and how long a unit of work takes to do.
Kovid Batra: Cool. Moving on to the next one: any recommendations on where we can learn value stream maps?
Bryan Finster: Yeah, Steve Pereira and Andrew Davis released 'Flow Engineering'. There are lots of older books on value stream mapping, but they're mostly focused on manufacturing. Steve and Andrew's book talks about using value stream maps to identify problems and how to go about fixing them. It was just released earlier this year.
Kovid Batra: Cool. Moving on to the next one: when would you start, and how do you convince upper management? They want KPIs now, and we are trying to get a VSM expert to come in and help. It's a hard sell.
Bryan Finster: Yeah, yeah. We want easy numbers. Okay, well, I would start with having a conversation about what problems we're trying to solve. It's very much like the conversation you have when you're trying to convince management to do continuous delivery. They don't care about continuous delivery unless they're deep into the topic, but they do care about delivering business value better. So talk about the business value. When you're talking about performance indicators: well, what performance are we trying to measure? We need to have that hard conversation. Are we trying to measure how many lines of code get dumped onto the end user? How much value we're delivering? Are we trying to reduce the size and cost of delivering change so we can be more effective, or are we just trying to make sure people are busy? And if you have management that just wants to make sure people look productive, and they're not open to listening to why they're wrong, I'd quit.
Kovid Batra: All right. Can we move on to the next one then?
Bryan Finster: Where's the next one?
Kovid Batra: Yeah.
Bryan Finster: Oh, okay.
Kovid Batra: Is there any scientific evidence we can use to point out that working on small steps iteratively is better than working in larger batches? The goal is to avoid anecdotal evidence while discussing what can improve the development process.
Bryan Finster: You know, the hard thing about software as an industry is that people don't like sharing their real information, because it can be stock-impacting, so we're not going to get a scientific study from a private company. But we have a few centuries' worth of knowledge from manufacturing telling us that if you build a whole bunch of the wrong thing, you're not going to sell it. You don't need a scientific study for that. There's the 'documentary' The Simpsons, where they talk about the Homer car: they build the entirely wrong car and put the company out of business, because there was no feedback loop on that car at all until it was unveiled, right? That's really the problem: we're doing product development. Or like Silicon Valley, where they spent so much money building something nobody wanted, and they kept iterating to find the right thing, but they kept building the complete thing, building the wrong thing, and just burning money. This is the problem we're trying to solve: getting faster feedback about when we're wrong, because we're inventing something new. Edison didn't build a million wrong light bulbs and then see if any of them worked.
Kovid Batra: All right. I think we can move on to the next one. Uh, what strategies do you recommend for setting realistic yet ambitious goals based on our current DORA metrics?
Bryan Finster: I would start with "why can't we deliver today's work today?", right after "why can't we integrate today's work today?", and then start finding out what those problems are and solving them. As far as ambitious goals, I think it's ambitious to be doing continuous delivery. Why can't we do continuous delivery? One of the reasons we put minimumcd.org together several years ago is that it's a list of problems to solve, and you can't solve those problems in an organization that's not a great place to work. You just can't. And the goal is to make it a better place to work. So solve those problems. That's an ambitious goal: do CD.
Kovid Batra: Richard, do you have a question?
Richard Pangborn: Uh, myself? No?
Kovid Batra: Yup.
Richard Pangborn: Nope.
Kovid Batra: Okay. One last one we'll take here. Uh, yeah.
Bryan Finster: Yeah, so common pitfalls, and I think we touched on some of these before: trying to instrument all of them, when you can really only instrument two of them, mostly. Change failure rate is not named well, given its description; it's really defect arrival rate. But even that depends on being able to collect data from defects, and on whether that data is being collected in a disciplined manner. Delivery frequency, people frequently measure at the organization level, but that doesn't really tell you anything; you need to get down to where the work is happening and measure it there. Then there's setting targets around delivery frequency instead of asking how we improve, because that's all this is: how do we get better? And using them as goals: they're absolutely not goals, they're health indicators. Like the tachometer I talked about before, I don't have a goal of running at 5,000 RPM. Number one, it depends on the engine: that would be terrible for a sport bike and would blow up a diesel. Using them naively, without understanding what they mean and what it is we're trying to do: I see it constantly. I and others who were early adopters of these metrics have been screaming about this for several years, and that's why I'm here today. Please don't use them incorrectly, because it just hurts things.
Kovid Batra: Perfect. Bryan, I have one question. When teams are setting benchmarks for the different metrics they have identified to measure, what should be the ideal strategy, the ideal way of setting those benchmarks? That's a question I get asked a lot.
Bryan Finster: Well, they were never benchmarks in Accelerate either. What it said was that we're seeing a correlation between companies with these outcomes and metrics that look like this. Those aren't industry benchmarks; that's a correlation they're making, and correlation does not equal causation. I will tell you that being really good at continuous delivery means that, if you have good ideas, you can deliver good ideas well, but being good at CD doesn't mean you're going to be good at meeting your business goals, because garbage in, garbage out. So don't set them as benchmarks. They're not benchmarks. They're health indicators; use them as health indicators. How do we make this better? Use them as things that cause you to ask questions: why can't we deliver more than once a month?
Kovid Batra: So basically, if we, for lack of a better term, use 'benchmarks', those should be set on the basis of our own team's cadence, how they are working, how they are designed to deliver. Is that what you mean?
Bryan Finster: No, I would absolutely use them as health indicators: track trends. Are we trending up? Are we trending down? Then use that as the basis for starting an investigation into why. Are we trending up because people think it's a goal? Is there some other metric going south that we're not aware of while we focus on this one thing getting better? Richard, you pointed this out exactly: it's a good, balanced set of metrics if it's measured correctly and collected correctly. And you can't peel one out. Another problem I see is people focusing on just one. I remember a director telling his area, "Hey, we're going to start using DORA metrics, but for change management purposes we're only going to start by focusing on MTTR." They're a set; they go together. You can't just peel one out.
Kovid Batra: Got it, got it. Yeah, that absolutely answers my question. All right, I think with that we come to the end of this session. Before we part, any parting advice from you, Bryan, Rich?
Richard Pangborn: Just to share what we found successful in our own journey. Every company is different; they all have their own processes, their own way of doing things, their own way of building things. There's not exactly one right way to do it; it's usually trial and error for each company, depending on the tooling you choose and the way you want to break down tasks and deliver stories. For us, we chose one-day tasks in Jira. We didn't choose long-lived branches; we're not explicitly trunk-based, but our PRs last no longer than a day. This is what we find works well for us. We're delivering daily; we haven't yet gotten to delivering multiple times a day, but that's somewhere in the future. You have to balance that with business goals, and you need buy-in from stakeholders before you can get development time to build out that structure. It's a process, and everyone's different, but bringing in some of these KPIs, or benchmarks, or health metrics, whatever you want to call them, has given us more observability into how we operate as engineers than we've ever had in the past. So it's been pretty beneficial for us.
Bryan Finster: Yeah, I'd say the observability is critical. I've built a few dashboards for showing these things, and development teams that were focused on "we want to improve" always found value in them. But one caution I have: if you are showing metrics on a dashboard, understand that the user experience of that dashboard will change people's behaviors, and it's important people understand that. Whenever I'm building a dashboard, I show offsetting metrics together in a way that they can't be separated, because otherwise people will just focus on one. I want you to focus on those offsetting metrics as a group and make them all better. But it only matters if people are looking at it; if it's not a constant topic of conversation, it won't help at all. And I know Abi Noda and I have a difference of opinion on data collection. I'm big on real-time data, because I'm trying to improve quickly; he's big on surveys. For me, I don't get feedback fast enough from a survey to course-correct if I'm trying to improve CI and CD. Surveys are good for other stuff, good for culture; that's the difference. But make sure you're not just going out and buying a tool that shows data in a way that causes bad behavior, or that collects data incorrectly. Really understand what you're doing before you go and implement a tool.
Kovid Batra: Cool. Thanks for that piece of advice, Bryan, Rich. With that, I think that's our time. Just a quick announcement about the next webinar session, which is with the pioneer of CD, the co-author of the book 'Continuous Delivery', Dave Farley. That will be on the 25th of September. So audience, stay tuned; I'll be sharing the link with you and sending you emails. Thank you so much. That's it for today.
Software engineering teams are important assets for an organization. They build high-quality products, gather and analyze requirements, design system architecture and components, and write clean, efficient code. Measuring their success and identifying the challenges they may be facing is important. However, this isn't always easy, and it takes a lot of time.
That's where engineering analytics tools come to the rescue. One of the most popular is Jellyfish, which is widely used by engineering leaders and CTOs across the globe.
While Jellyfish is a strong choice for many organizations, there's a chance it won't work for you. Worry not! We've curated the top six Jellyfish alternatives you can consider when choosing an engineering analytics tool for your company.
What is Jellyfish?
Jellyfish is a popular engineering management platform that offers real-time visibility into the engineering organization and team progress. It translates tech data into information that the business side can understand and offers multiple perspectives on resource allocation. It also shows the status of every pull request and commit on the team. Jellyfish integrates with third-party tools such as Bitbucket, GitHub, GitLab, JIRA, and other popular HR, calendar, and roadmap tools.
However, its UI can be tricky at first, and it has a steep learning curve due to the vast amount of data it provides, which can overwhelm new users.
Top Jellyfish Alternatives
Typo
Typo is a Jellyfish alternative that maximizes the business value of software delivery by offering features that improve SDLC visibility, developer insights, and workflow automation. It provides comprehensive insights into the deployment process through key DORA and other engineering metrics and offers engineering benchmarks to compare the team's results across industries. Its automated code review tool helps development teams identify code issues and auto-fix them before merging to master. It captures a 360-degree view of developers' experience and includes an effective sprint analysis that tracks and analyzes the team's progress. Typo integrates with tech tools such as GitHub, GitLab, Jira, Linear, and Jenkins.
Price
Free: $0/dev/month
Starter: $16/dev/month
Pro: $24/dev/month
Enterprise: Quotation on request
LinearB
LinearB is another leading software engineering intelligence platform that provides insights for identifying bottlenecks and streamlining software development workflow. It highlights automatable tasks to save time and enhance developer productivity. It also tracks DORA metrics and collects data from other tools to provide a holistic view of performance. Its project delivery tracker reflects project delivery status updates using planning accuracy and delivery reports. LinearB can be integrated with third-party applications such as Jira, Slack, and Shortcut.
Price
Free: $0/dev/month
Business: $49/dev/month
Enterprise: Quotation on request
Waydev
Waydev is a software development analytics platform that provides actionable insights on metrics related to bug fixes, velocity, and more. It uses an agile method for tracking output during the development process and allows engineering leaders to see data from different perspectives. Unlike other platforms, it emphasizes market-based metrics and ROI. Its resource planning assistance feature helps avoid scope creep and offers an understanding of the cost and progress of deliverables and key initiatives. Waydev integrates with well-known tools such as GitLab, GitHub, CircleCI, and Azure DevOps.
Price
Quotation on request
Pluralsight Flow
Pluralsight Flow is a popular tool that tracks DORA metrics and helps benchmark DevOps practices. It aggregates Git data into comprehensive insights and offers a bird's-eye view of what's happening in development teams. Its sprint feature helps teams plan better and dig into the work the team accomplished, and whether that work was committed or unplanned. Its team-level ticket filters, Git tags, and other lightweight signals streamline pulling data from different sources. Pluralsight Flow integrates with tools such as Azure DevOps and GitLab.
Price
Core: $38/mo
Plus: $50/mo
Code Climate Velocity
Code Climate Velocity is a popular tool that synthesizes data from repositories and offers visibility into code coverage, coding practices, and security risks. It tracks issues in real time to help teams move quickly through existing workflows and allows engineering leaders to compile data on dev velocity and code quality. It has JIRA and Git support that feeds into real-time analytics. Its customizable dashboards and trends provide a view of everything from each individual's day-to-day tasks to long-term progress. Code Climate Velocity also provides technical debt assessment and style checks in every pull request.
Swarmia
Swarmia is another well-known engineering effectiveness platform that provides quantitative insights into the software development pipeline. It offers visibility into three key areas: business outcomes, developer productivity, and developer experience. It allows engineering leaders to create flexible and audit-ready software cost capitalization reports. It also identifies and fixes common teamwork antipatterns such as siloing and too much work in progress. Swarmia integrates with popular tools such as Slack, JIRA, GitLab, Azure DevOps, and more.
Price
Free: £0/dev/month
Lite: £20/dev/month
Standard: £39/dev/month
Conclusion
While we have shared the top software development analytics tools, don't forget to conduct thorough research before selecting one for your engineering team. Check whether it aligns well with your requirements, facilitates team collaboration and continuous improvement, integrates seamlessly with your existing and upcoming tools, and so on.
Developer Burnout
Constantly working with poorly structured code is taxing. The mental effort needed to debug or refactor a convoluted codebase can demoralize even the most passionate developers, leading to frustration, reduced job satisfaction, and burnout.
Root Causes of Low Code Quality
Understanding the reasons behind low code quality helps in developing practical solutions. Here are some of the main causes:
Pressure to Deliver Rapidly
Tight project deadlines often push developers to prioritize quick delivery over thorough, well-thought-out code. While this may solve immediate business needs, it sacrifices code quality and introduces problems that require significant time and resources to fix later.
Lack of Unified Coding Standards
Without established coding standards, developers may approach problems in inconsistent ways. This lack of uniformity leads to a codebase that’s difficult to maintain, read, and extend. Coding standards help enforce best practices and maintain consistent formatting and documentation.
Insufficient Code Reviews
Skipping code reviews means missing opportunities to catch errors, bad practices, or code smells before they enter the main codebase. Peer reviews help maintain quality, share knowledge, and align the team on best practices.
Limited Testing Strategies
A codebase without sufficient testing coverage is bound to have undetected errors. Tests, especially automated ones, help identify issues early and ensure that any code changes do not break existing features.
Overreliance on Low-Code/No-Code Solutions
Low-code platforms offer rapid development but often generate code that isn’t optimized for long-term use. This code can be bloated, inefficient, and difficult to debug or extend, causing problems when the project scales or requires custom functionality.
Comprehensive Solutions to Improve Code Quality
Addressing low code quality requires deliberate, consistent effort. Here are expanded solutions with practical tips to help developers maintain and improve code standards:
Adopt Rigorous Code Reviews
Code reviews should be an integral part of the development process. They serve as a quality checkpoint to catch issues such as inefficient algorithms, missing documentation, or security vulnerabilities. To make code reviews effective:
Create a structured code review checklist that focuses on readability, adherence to coding standards, potential performance issues, and proper error handling.
Foster a culture where code reviews are seen as collaborative learning opportunities rather than criticism.
Implement tools like GitHub’s review features or Bitbucket for in-depth code discussions.
Integrate Linters and Static Analysis Tools
Linters help maintain consistent formatting and detect common errors automatically. Tools like ESLint (JavaScript), RuboCop (Ruby), and Pylint (Python) check your code for syntax issues and adherence to coding standards. Static analysis tools go a step further by analyzing code for complex logic, performance issues, and potential vulnerabilities. To optimize their use:
Configure these tools to align with your project’s coding standards.
Run these tools in pre-commit hooks with Husky or integrate them into your CI/CD pipelines to ensure code quality checks are performed automatically; a minimal hook sketch follows this list.
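Below is a minimal sketch of that idea as a plain Python pre-commit hook. It assumes Pylint is installed and that the script is saved as .git/hooks/pre-commit and made executable; Husky users would wire an equivalent command into their package.json instead.

```python
#!/usr/bin/env python3
# Pre-commit hook sketch: lint staged Python files and block the
# commit if the linter reports problems. Tool choice is an assumption;
# swap in ESLint, RuboCop, etc. for other stacks.
import subprocess
import sys

def staged_python_files() -> list[str]:
    """List staged files (added/copied/modified) ending in .py."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [f for f in out if f.endswith(".py")]

files = staged_python_files()
if files:
    result = subprocess.run(["pylint", *files])
    if result.returncode != 0:
        print("Lint errors found; commit blocked.", file=sys.stderr)
        sys.exit(1)
```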
Prioritize Comprehensive Testing
Adopt a multi-layered testing strategy to ensure that code is reliable and bug-free:
Unit Tests: Write unit tests for individual functions or methods to verify they work as expected. Frameworks like Jest for JavaScript, PyTest for Python, and JUnit for Java are popular choices (see the example after this list).
Integration Tests: Ensure that different parts of your application work together smoothly. Tools like Cypress and Selenium can help automate these tests.
End-to-End Tests: Simulate real user interactions to catch potential issues that unit and integration tests might miss.
Integrate testing into your CI/CD pipeline so that tests run automatically on every code push or pull request.
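As a small illustration of the unit-test layer, here is a hypothetical PyTest example. The function under test and the file name are invented for the sketch; in a real project, the function would be imported from your codebase rather than defined in the test file.

```python
# Save as test_pricing.py and run with `pytest`. Each behavior,
# including the error path, gets its own small, fast test.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """The unit under test; normally imported from your codebase."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(200.0, 25) == 150.0

def test_apply_discount_zero_percent_is_identity():
    assert apply_discount(99.99, 0) == 99.99

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 120)
```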
Dedicate Time for Refactoring
Refactoring helps improve code structure without changing its behavior. Regularly refactoring prevents code rot and keeps the codebase maintainable. Practical strategies include:
Identify “code smells” such as duplicated code, overly complex functions, or tightly coupled modules; a small before-and-after example follows this list.
Apply design patterns where appropriate, such as Factory or Observer, to simplify complex logic.
Use IDE refactoring tools like IntelliJ IDEA’s refactor feature or Visual Studio Code extensions to speed up the process.
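To make "improving structure without changing behavior" concrete, here is a small, hypothetical before-and-after: nested conditionals, a common code smell, replaced with guard clauses. Both functions return exactly the same results.

```python
# Before: the logic is buried three levels deep in nested conditionals.
def ship_order_before(order):
    if order is not None:
        if order["paid"]:
            if order["in_stock"]:
                return f"shipping {order['id']}"
            else:
                return "backordered"
        else:
            return "awaiting payment"
    return "no order"

# After: guard clauses handle the exceptional cases first, so the
# happy path reads top to bottom. Behavior is unchanged.
def ship_order_after(order):
    if order is None:
        return "no order"
    if not order["paid"]:
        return "awaiting payment"
    if not order["in_stock"]:
        return "backordered"
    return f"shipping {order['id']}"
```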
Create and Enforce Coding Standards
Having a shared set of coding standards ensures that everyone on the team writes code with consistent formatting and practices. To create effective standards:
Collaborate with the team to create a coding guideline that includes best practices, naming conventions, and common pitfalls to avoid.
Document the guideline in a format accessible to all team members, such as a README file or a Confluence page.
Conduct periodic training sessions to reinforce these standards.
Leverage Typo for Enhanced Code Quality
Typo can be a game-changer for teams looking to automate code quality checks and streamline reviews. It offers a range of features:
Automated Code Review: Detects common issues, code smells, and inconsistencies, supplementing manual code reviews.
Detailed Reports: Provides actionable insights, allowing developers to understand code weaknesses and focus on the most critical issues.
Seamless Collaboration: Enables teams to leave comments and feedback directly on code, enhancing peer review discussions and improving code knowledge sharing.
Continuous Monitoring: Tracks changes in code quality over time, helping teams spot regressions early and maintain consistent standards.
Enhance Knowledge Sharing and Training
Keeping the team informed on best practices and industry trends strengthens overall code quality. To foster continuous learning:
Organize workshops, code review sessions, and tech talks where team members share insights or recent challenges they overcame.
Encourage developers to participate in webinars, online courses, and conferences.
Create a mentorship program where senior developers guide junior members through complex code and teach them best practices.
Strategically Use Low-Code Tools
Low-code tools should be leveraged for non-critical components or rapid prototyping, but ensure that the code generated is thoroughly reviewed and optimized. For more complex or business-critical parts of a project:
Supplement low-code solutions with custom coding to improve performance and maintainability.
Regularly review and refactor code generated by these platforms to align with project standards.
Commit to Continuous Improvement
Improving code quality is a continuous process that requires commitment, collaboration, and the right tools. Developers should assess current practices, adopt new ones gradually, and leverage automated tools like Typo to streamline quality checks.
By incorporating these strategies, teams can create a strong foundation for building maintainable, scalable, and high-quality software. Investing in code quality now paves the way for sustainable development, better project outcomes, and a healthier, more productive team.
In today's fast-paced and rapidly evolving software development landscape, effective project management is crucial for engineering teams striving to meet deadlines, deliver quality products, and maintain customer satisfaction. Project management not only ensures that tasks are completed on time but also optimizes resource allocation, enhances team collaboration, and improves communication across all stakeholders. A key tool that has gained prominence in this domain is JIRA, widely recognized for its robust features tailored for agile project management.
However, while JIRA offers numerous advantages, such as customizable workflows, detailed reporting, and integration capabilities with other tools, it also comes with limitations that can hinder its effectiveness. For instance, teams relying solely on JIRA dashboard gadgets may find themselves missing critical contextual data from the development process. They may obtain a snapshot of project statuses but fail to appreciate the underlying issues impacting progress. Understanding both the strengths and weaknesses of JIRA dashboard gadgets is vital for engineering managers to make informed decisions about their project management strategies.
The Limitations of JIRA Dashboard Gadgets
Lack of Contextual Data
JIRA dashboard gadgets primarily focus on issue tracking and project management, often missing critical contextual data from the development process. While JIRA can show the status of tasks and issues, it does not provide insights into the actual code changes, commits, or branch activities that contribute to those tasks. This lack of context can lead to misunderstandings about project progress and team performance. For example, a task may be marked as "in progress," but without visibility into the associated Git commits, managers may not know if the team is encountering blockers or if significant progress has been made. This disconnect can result in misaligned expectations and hinder effective decision-making.
Static Information
JIRA dashboards built around the roadmap or sprint burndown gadgets can present a static view of project progress that may not reflect real-time changes in the development process. For instance, while a burndown gadget may indicate that a task is "done," it does not account for recent changes or updates made in the codebase. This static nature can hinder proactive decision-making, as managers may not have access to the most current information about the project's health. Additionally, relying on historical data, such as the issue statistics gadget, creates a lag in responding to emerging issues. In a rapidly changing development environment, the ability to react quickly to new information is crucial for maintaining project momentum, which is why teams need to move beyond the default gadgets like the roadmap and burndown chart.
Limited Collaboration Insights
Collaboration is essential in software development, yet JIRA dashboards often do not capture the collaborative efforts of the team. Metrics such as code reviews, pull requests, and team discussions are crucial for understanding how well the team is working together. Without this information, managers may overlook opportunities for improvement in team dynamics and communication. For example, if a team is actively engaged in code reviews but this activity is not reflected in JIRA gadgets, managers may mistakenly assume that collaboration is lacking. This oversight can lead to missed opportunities to foster a more cohesive team environment and improve overall productivity.
Overemphasis on Individual Metrics
JIRA dashboards can sometimes encourage a focus on individual performance metrics rather than team outcomes. This can foster an environment of unhealthy competition, where developers prioritize personal achievements over collaborative success. Such an approach can undermine team cohesion and lead to burnout. When individual metrics are emphasized, developers may feel pressured to complete tasks quickly, potentially sacrificing code quality and collaboration. This focus on personal performance can create a culture where teamwork and knowledge sharing are undervalued, ultimately hindering project success.
Inflexibility in Reporting
JIRA dashboard layouts often rely on predefined metrics and reports, which may not align with the unique needs of every project or team. This inflexibility can result in a lack of relevant insights that are critical for effective project management. For example, a team working on a highly innovative project may require different metrics than a team maintaining legacy software. The inability to customize reports can lead to frustration and a sense of disconnect from the data being presented.
The Power of Integrating Git Data with JIRA
Integrating Git data with JIRA provides a more holistic view of project performance and developer productivity. Here’s how this integration can enhance insights:
Real-Time Visibility into Development Activity
By connecting Git repositories with JIRA, engineering managers gain real-time visibility into commits, branches, and pull requests associated with JIRA issues and issue statistics. This integration allows teams to see the actual development work being done, providing context to the status of tasks on the JIRA dashboard gadget. For instance, if a developer submits a pull request that relates to a specific JIRA ticket, the project manager instantly knows that work is ongoing, fostering transparency. Additionally, automated notifications for changes in the codebase linked to JIRA issues keep everyone updated without digging through multiple tools. This integrated approach ensures that management has a clear understanding of actual progress rather than relying on static task statuses.
Enhanced Collaboration and Communication
Integrating Git data with JIRA facilitates better collaboration among team members. Developers can reference JIRA issues in their commit messages, making it easier for the team to track changes related to specific tasks. This transparency fosters a culture of collaboration, as everyone can see how their work contributes to the overall project goals. Moreover, by having a clear link between code changes and JIRA issues, team members can engage in more meaningful discussions during stand-ups and retrospectives. This enhanced communication can lead to improved problem-solving and a stronger sense of shared ownership over the project.
Improved Risk Management
With integrated Git and JIRA data, engineering managers can identify potential risks more effectively. By monitoring commit activity and pull requests alongside JIRA issue statuses, managers can spot trends and anomalies that may indicate project delays or technical challenges. For example, if there is a sudden decrease in commit activity for a specific task, it may signal that the team is facing challenges or blockers. This proactive approach allows teams to address issues before they escalate, ultimately improving project outcomes and reducing the likelihood of last-minute crises.
Comprehensive Reporting and Analytics
The combination of JIRA and Git data enables more comprehensive reporting and analytics. Engineering managers can analyze not only task completion rates but also the underlying development activity that drives those metrics. This deeper understanding can inform better decision-making and strategic planning for future projects. For instance, by analyzing commit patterns and pull request activity, managers can identify trends in team performance and areas for improvement. This data-driven approach allows for more informed resource allocation and project planning, ultimately leading to more successful outcomes.
Best Practices for Integrating Git Data with JIRA
To maximize the benefits of integrating Git data with JIRA, engineering managers should consider the following best practices:
Select the Right Tools
Choose integration tools that fit your team's specific needs. Tools like Typo can facilitate the connection between Git and JIRA smoothly. Additionally, JIRA integrates directly with several source control systems, allowing for automatic updates and real-time visibility.
If you're ready to enhance your project delivery speed and predictability, consider integrating Git data with your JIRA dashboards. Explore Typo: we can help you do this in a few clicks and make it one of your favorite dashboards.
Standardize Commit Messages
Encourage your team to adopt consistent commit message guidelines. Including JIRA issue keys in commit messages creates a direct link between the code change and the JIRA issue. This practice not only enhances traceability but also aids in generating meaningful reports and insights. For example, a commit message like 'JIRA-123: Fixed the login issue' helps managers quickly identify relevant commits related to specific tasks.
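A minimal sketch of how tooling can exploit that convention: a regular expression that pulls issue keys out of commit messages. The key pattern shown is an assumption; adjust it to your own JIRA project keys.

```python
# Extract JIRA issue keys (e.g. "JIRA-123") from commit messages.
import re

ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # assumed key format

def issue_keys(commit_message: str) -> list[str]:
    return ISSUE_KEY.findall(commit_message)

print(issue_keys("JIRA-123: Fixed the login issue"))  # ['JIRA-123']
print(issue_keys("refactor: no ticket referenced"))   # [] -> could warn here
```

The same check can run in a commit-msg hook to warn developers when a message has no ticket reference.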
Automate Workflows
Leverage automation features available in both JIRA and Git platforms to streamline the integration process. For instance, set up automated triggers that update JIRA issues based on events in Git, such as moving a JIRA issue to 'In Review' once a pull request is submitted in Git. This reduces manual updates and alleviates the administrative burden on the team.
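Here is a hedged sketch of that kind of trigger as a small Python function a webhook handler might call. The base URL, credentials, and transition id are placeholders, not real values; transition ids vary per JIRA workflow, so you would look yours up via the issue's transitions endpoint before using this.

```python
# When a pull request opens, move the referenced JIRA issue(s) to
# "In Review" via JIRA's REST API. Uses the `requests` library.
import re
import requests

JIRA_BASE = "https://your-company.atlassian.net"  # placeholder
AUTH = ("bot@example.com", "api-token-here")      # placeholder credentials
IN_REVIEW_TRANSITION_ID = "21"                    # placeholder: varies per workflow

def on_pull_request_opened(pr_title: str) -> None:
    """Call from your webhook handler with the PR title, e.g. 'JIRA-123: ...'."""
    for key in re.findall(r"\b[A-Z][A-Z0-9]+-\d+\b", pr_title):
        resp = requests.post(
            f"{JIRA_BASE}/rest/api/2/issue/{key}/transitions",
            json={"transition": {"id": IN_REVIEW_TRANSITION_ID}},
            auth=AUTH,
            timeout=10,
        )
        resp.raise_for_status()  # surface failed transitions loudly
```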
Train Your Team
Providing adequate training to your team ensures everyone understands the integration process and how to effectively use both tools together. Conduct workshops or create user guides that outline the key benefits of integrating Git and JIRA, along with tips on how to leverage their combined functionalities for improved workflows.
Monitor and Adapt
Implement regular check-ins to assess the effectiveness of the integration. Gather feedback from team members on how well the integration is functioning and identify any pain points. This ongoing feedback loop allows you to make incremental improvements, ensuring the integration continues to meet the needs of the team.
Utilize Dashboards for Visualization
Create comprehensive dashboards that visually represent combined metrics from both Git and JIRA. Tools like JIRA dashboards, Confluence, or custom-built data visualization platforms can provide a clearer picture of project health. Metrics can include the number of active pull requests, average time in code review, or commit activity relevant to JIRA task completion.
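For example, a metric like average time in code review can be computed directly from pull-request timestamps. A minimal sketch, assuming you have already fetched created and merged times from your Git provider's API:

from datetime import datetime, timedelta

def average_review_time(pull_requests):
    # Average time from PR creation to merge; unmerged PRs are ignored.
    durations = [
        pr["merged_at"] - pr["created_at"]
        for pr in pull_requests
        if pr.get("merged_at") is not None
    ]
    return sum(durations, timedelta()) / len(durations)

prs = [
    {"created_at": datetime(2024, 5, 1, 9), "merged_at": datetime(2024, 5, 2, 15)},
    {"created_at": datetime(2024, 5, 3, 10), "merged_at": datetime(2024, 5, 3, 18)},
    {"created_at": datetime(2024, 5, 4, 11), "merged_at": None},  # still open
]
print(average_review_time(prs))  # averages the two merged PRs: 19 hours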
Encourage Regular Code Reviews
With the changes being reflected in JIRA, create a culture around regular code reviews linked to specific JIRA tasks. This practice encourages collaboration among team members, ensures code quality, and keeps everyone aligned with project objectives. Regular code reviews also lead to knowledge sharing, which strengthens the team's overall skill set.
Case Study: 25% Improvement in Task Completion with JIRA-Git Integration at Trackso
To illustrate the benefits of integrating Git data with JIRA, let’s consider a case study of a software development team at a company called Trackso.
Background
Trackso, a remote monitoring platform for solar energy, was developing a new SaaS platform with a diverse team of developers, designers, and project managers. The team relied heavily on JIRA for tracking project statuses, but found its productivity hampered by several issues:
Tasks had vague statuses that did not reflect actual progress to project managers.
Developers frequently worked in isolation without insight into each other's code contributions.
They could not correlate project delays with specific code changes or reviews, leading to poor risk management.
Implementation of Git and JIRA Integration
In 2022, Trackso's engineering manager decided to integrate Git data with JIRA. They chose GitHub for version control, given its robust collaborative features. The team set up automatic links between their JIRA tickets and corresponding GitHub pull requests and standardized their commit messages to include JIRA issue keys.
Metrics of Improvement
After implementing the integration, Trackso experienced significant improvements within three months:
Increased Collaboration: There was a 40% increase in code review participation as developers began referencing JIRA issues in their commits, facilitating clearer discussions during code reviews.
Reduced Delivery Times: Average task completion times decreased by 25%, as developers could see almost immediately when tasks were being actively worked on or if blockers arose.
Improved Risk Management: The team reduced project delays by 30% due to enhanced visibility. For example, the integration helped identify that a critical feature was lagging due to slow pull request reviews. This enabled team leads to improve their code review workflows.
Boosted Developer Morale: Developer satisfaction surveys indicated that 85% of team members felt more engaged in their work due to improved communication and clarity around task statuses.
Challenges Faced
Despite these successes, Trackso faced challenges during the integration process:
Initial Resistance: Some team members were hesitant to adopt the new practices and personal dashboards. The engineering manager organized training sessions to showcase the benefits of integrating Git and JIRA and of having a personal dashboard, promoting buy-in from the team and easing the move away from the default dashboard.
Maintaining Commit Message Standards: Initially, not all developers consistently used the issue keys in their commit messages. The team revisited training sessions and created a shared repository of best practices to ensure adherence.
Conclusion
While JIRA dashboards are valuable tools for project management, they are insufficient on their own for engineering managers seeking to improve project delivery speed and predictability. By integrating Git data with JIRA, teams can gain richer insights into development activity, enhance collaboration, and manage risks more effectively. This holistic approach empowers engineering leaders to make informed decisions and drive continuous improvement in their software development processes. Embracing this integration will ultimately lead to better project outcomes and a more productive engineering culture. As the software development landscape continues to evolve, leveraging the power of both JIRA and Git data will be essential for teams looking to stay competitive and deliver high-quality products efficiently.
As platform engineering continues to evolve, it brings both promising opportunities and potential challenges.
As we look to the future, what changes lie ahead for Platform Engineering? In this blog, we will explore the future landscape of platform engineering and strategize how organizations can stay at the forefront of innovation.
What is Platform Engineering?
Platform Engineering is an emerging technology approach that equips software developers with all the resources they require. It acts as a bridge between development and infrastructure, simplifying complex tasks and enhancing development velocity. The primary goal is to improve developer experience, operational efficiency, and the overall speed of software delivery.
Importance of Platform Engineering
Platform engineering helps in creating reusable components and standardized processes. It also automates routine tasks, such as deployment, monitoring, and scaling, to speed up the development cycle.
Platform engineering integrates security measures into the platform to ensure that applications are built and deployed securely. This allows the platform to meet regulatory and compliance requirements.
It ensures efficient use of resources to balance performance and expenditure. It also provides transparency into resource usage and associated costs to help organizations make informed decisions about scaling and investment.
By providing tools, frameworks, and services, platform engineering empowers developers to build, deploy, and manage applications more effectively.
A well-engineered platform allows organizations to adapt quickly to market changes, new technologies, and customer needs.
Key Predictions for Platform Engineering
More Focus on Developer Experience
The rise of Platform Engineering will enhance developer experience by creating standard toolchains and workflows. In the coming years, platform engineering teams will work closely with developers to understand what they need to be productive. Moreover, platform tools will be integrated and closely monitored through developer experience (DevEx) surveys and reports. This will enable developers to work efficiently and focus on core tasks by automating repetitive ones, further improving their productivity and satisfaction.
Rise of Internal Developer Platforms
Platform engineering is closely associated with the development of internal developer platforms (IDPs). As organizations strive for efficiency, the creation and adoption of internal developer platforms will rise. This will streamline operations, provide a standardized way of deploying and managing applications, and reduce cognitive load, shortening time to market for new features and products and allowing developers to focus on delivering high-quality products rather than managing infrastructure.
Growing Trend of Ephemeral Environments
Modern software development demands rapid iteration. Ephemeral environments (temporary, on-demand environments) will be an effective way to test new features and bug fixes before changes are merged into the main codebase. These environments prioritize speed, flexibility, and cost efficiency. Since they are created on demand and are short-lived, they align well with modern development practices.
Integration with Generative AI
As AI-driven tools become more prevalent, generative AI tools such as GitHub Copilot and Google Gemini will enhance capabilities such as infrastructure as code, governance as code, and security as code. This will not only automate manual tasks but also support smoother operations and improved documentation processes, driving innovation and automating dev workflows.
Extension to DevOps
Platform engineering is a natural extension of DevOps. In the future, platform engineers will work alongside DevOps teams rather than replacing them, addressing their complexity and scalability challenges. This will provide a standardized and automated approach to software development and deployment, leading to faster project initialization, reduced lead time, and increased productivity.
Shift to Product-Centric Funding Model
Software organizations are now shifting from a project-centric funding model towards a product-centric funding model. When platforms are fully-fledged products, they serve internal customers and require a thoughtful, user-centric approach to their ongoing development. This aligns well with a product lifecycle that is ongoing and continuous, which enhances innovation and reduces operational friction. It also decentralizes decision-making, allowing platform engineering leaders to make and adjust funding decisions for their teams.
Why Staying Updated on Platform Engineering Trends is Crucial?
Platform Engineering is a relatively new and evolving field. Hence, platform engineering teams need to keep up with rapid tech changes and ensure the platform remains robust and efficient.
Emerging technologies such as serverless computing and edge computing will shape the future of platform engineering. Moreover, artificial intelligence and machine learning also help optimize various aspects of software development, such as testing and monitoring.
Platform engineering trends are introducing new ways to automate processes, manage infrastructure, and optimize workflows. This enables organizations to streamline operations, reduce manual work, and focus on more strategic tasks, leading to enhanced developer productivity.
A platform aims to deliver a superior user experience. When platform engineers stay ahead of the learning curve, they can implement features and improvements that improve the end-user experience, resulting in higher customer satisfaction and retention.
Trends in platform engineering highlight new methods for building scalable and flexible systems. These allow platform engineers to design platforms that can easily adapt to changing demands and scale without compromising performance.
Typo - An Effective Platform Engineering Tool
Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools.
It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, capturing qualitative insights and providing an in-depth view of the real issues.
The future of platform engineering is both exciting and dynamic. As this field continues to evolve, staying ahead of these developments is crucial for organizations aiming to maintain a competitive edge. By embracing these predictions and proactively adapting to changes, platform engineering teams can drive innovation, improve efficiency, and deliver high-quality products that meet the demands of an ever-changing tech landscape.
Platform engineering is a relatively new and evolving field in the tech industry. However, like any evolving field, it comes with its share of challenges which, if overlooked, can limit its effectiveness.
In this blog post, we dive deep into these common missteps and provide actionable insights to overcome them, so that your platform engineering efforts are both successful and sustainable.
What is Platform Engineering?
Platform Engineering refers to providing foundational tools and services to the development team that allow them to quickly and safely deliver their applications. It aims to increase developer productivity through a unified technical platform that streamlines processes, reduces errors, and enhances reliability.
Core Components of Platform Engineering
Internal Developer Platforms (IDPs)
The core component of platform engineering is the IDP: a centralized collection of tools, services, and automated workflows that enables developers to self-serve the resources needed for building, testing, and deploying applications. It empowers developers to deliver faster by reducing reliance on other teams, automating repetitive tasks, reducing the risk of errors, and ensuring every application adheres to organizational standards.
Platform Team
The platform team consists of platform engineers who are responsible for building, maintaining, and configuring the IDP. The platform team standardizes workflows, automates repetitive tasks, and ensures that developers have access to the necessary tools and resources. The aim is to create a seamless experience for developers. Hence, allowing them to focus on building applications rather than managing infrastructure.
Automation and Standardization
Platform engineering focuses on the importance of standardizing processes and automating infrastructure management. This includes creating paved roads for common development tasks such as deployment scripts, testing, and scaling to simplify workflows and reduce friction for developers. Curating a catalog of resources, following predefined templates, and establishing best practices ensure that every deployment follows the same standards, thus enhancing consistency across development efforts while allowing flexibility for individual preferences.
Continuous Improvement
Platform engineering is an iterative process, requiring ongoing assessment and enhancement based on developer feedback and changing business needs. This results in continuous improvement that ensures the platform evolves to meet the demands of its users and incorporates new technologies and practices as they emerge.
Security and Compliance
Security is a key component of platform engineering. Integrating security best practices into the platform, such as automated vulnerability scanning, encryption, and compliance monitoring, is the best way to protect against vulnerabilities and ensure compliance with relevant regulations. This proactive approach, integrated into all stages of the platform, helps mitigate risks associated with software delivery and fosters a secure development environment.
Common Mistakes in Platform Engineering
Focusing Solely on Dashboards
One of the common mistakes platform engineers make is focusing solely on dashboards without addressing the underlying issues that need solving. While dashboards provide a good overview, they can lead to a superficial understanding of problems instead of encouraging genuine process improvements.
To avoid this, teams must combine dashboards with automated alerts, tracing, and log analysis to get actionable insights and a more comprehensive observability strategy for faster incident detection and resolution.
Building without Understanding the Developers’ Needs
Developing a platform based on assumptions ends up not addressing real problems and not meeting developers’ needs. The platform may lack features that matter to developers, leading to dissatisfaction and low adoption.
Hence, establishing clear objectives and success criteria is vital for guiding development efforts. Engage with developers regularly: conduct surveys, interviews, or workshops to gather insights into their pain points and needs before building the platform.
Overengineering the Platform
Building an overly complex platform hinders rather than helps development efforts. When the platform contains features that aren’t necessary or used by developers, it leads to increased maintenance costs and confusion among developers, further hampering their productivity.
The goal must be to find the right balance between functionality and simplicity, ensuring the platform effectively meets developers’ needs without unnecessary complications, and iterating on it based on actual usage and feedback.
Encouraging a One-Size-Fits-All Solution
The belief that a single platform caters to all development teams and use cases uniformly is a fallacy. Different teams and applications have varying needs, workflows, and technology stacks, necessitating tailored solutions rather than a uniform approach. As a result, the platform may end up being too rigid for some teams and overly complex for others, resulting in low adoption and inefficiencies.
Hence, design a flexible and customizable platform that adapts to diverse requirements. This allows teams to tailor the platform to their specific workflows while maintaining shared standards and governance.
Overplanning and Under-Executing
Spending excessive time in the planning phase leads to delays in implementation, missed opportunities, and a platform that fails to meet the evolving needs of end-users. When teams focus on perfecting every detail before implementation, the platform remains theoretical instead of delivering real value.
An effective way is to create a balance between planning and executing by adopting an iterative approach. In other words, focus on delivering a minimum viable product (MVP) quickly and continuously improving it based on real user feedback. This allows the platform to evolve in alignment with actual developer needs which ensures better adoption and more effective outcomes.
Failing to Prioritize Security
Building the platform without incorporating security measures from the beginning can create opportunities for cyber threats and attacks. This also exposes the organization to compliance risks, vulnerabilities, and potential breaches that could be costly to resolve.
Implementing automated security tools, such as identity and access management (IAM), encrypted communications, and code analysis tools, helps continuously monitor for security issues and ensure compliance with best practices. Besides this, provide ongoing security training that covers common vulnerabilities, secure coding practices, and awareness of evolving threats.
Benefits of Platform Engineering
When used correctly, platform engineering offers many benefits:
Platform engineering improves developer experience by offering self-service capabilities and standardized tools. It allows the team to focus on building features and deliver products more efficiently and effectively.
It increases the reliability and security of applications by providing a stable foundation and centralized infrastructure management.
Engineering teams can deploy applications and updates faster with a robust and automated platform that accelerates the time-to-market for new features and products.
Focusing on scalable solutions enables the underlying systems to handle increased demand without compromising performance, allowing teams to grow their applications and services efficiently.
A solid platform foundation allows teams to experiment with new technologies and methodologies. Hence, supporting innovation and the adoption of modern practices.
Typo - An Effective Platform Engineering Tool
Typo is an effective platform engineering tool that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools.
It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, capturing qualitative insights and providing an in-depth view of the real issues.
Platform engineering has immense potential to streamline development and improve efficiency, but avoiding common pitfalls is key. By focusing on the pitfalls mentioned above, you can create a platform that drives productivity and innovation.
Robert C. Martin introduced the ‘Clean Code’ concept in his book ‘Clean Code: A Handbook of Agile Software Craftsmanship’. He defined clean code as:
“A code that has been taken care of. Someone has taken the time to keep it simple and orderly. They have laid appropriate attention to details. They have cared.”
Clean code is easy to read, understand, and maintain. It is well structured and free of unnecessary complexity, code smell, and anti-patterns.
Key Characteristics that Define Clean Code
Readable: The code is easy to read and understand, with descriptive names for variables, functions, and classes, and a structure that makes its purpose clear.
Simple: The code doesn’t include any unnecessary complexity.
Consistent: Naming conventions, formatting, and organization are consistent, which helps maintain readability.
Testable: The code is easy to test and free from bugs and errors.
Maintainable: The code is easy to update and modify.
Refactored: Clean code is regularly refactored and free from redundancy.
Clean Code Principles
Single Responsibility Principle
This principle states that each module or function should have a defined responsibility and one reason to change. Otherwise, it can result in bloated and hard-to-maintain code.
Example: The code’s responsibilities are separated into three distinct classes: User, Authentication, and EmailService. This makes the code more modular, easier to test, and easier to maintain.
class User {
  constructor(name, email, password) {
    this.name = name;
    this.email = email;
    this.password = password;
  }
}

class Authentication {
  login(user, password) {
    // ... login logic
  }
  register(user, password) {
    // ... registration logic
  }
}

class EmailService {
  sendVerificationEmail(email) {
    // ... email sending logic
  }
}
DRY Principle (Don’t Repeat Yourself)
The DRY Principle states that unnecessary duplication and repetition of code must be avoided. If not followed, it can increase the risk of inconsistency and redundancy. Instead, you can abstract common functionality into reusable functions, classes, or modules.
Example: The common greeting formatting logic is extracted into a reusable formatGreeting function, which makes the code DRY and easier to maintain.
function formatGreeting(name, message) {
  return message + ", " + name + "!";
}

function greetUser(name) {
  console.log(formatGreeting(name, "Hello"));
}

function sayGoodbye(name) {
  console.log(formatGreeting(name, "Goodbye"));
}
YAGNI – You Aren’t Gonna Need It
YAGNI is an extreme programming practice that states “Always implement things when you actually need them, never when you just foresee that you need them.”
It doesn’t mean avoiding flexibility in code, but rather not overengineering everything based on assumptions about future needs. The principle means delivering the most critical features on time and prioritizing them based on necessity.
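A hypothetical before-and-after illustration: the first version speculatively supports options no feature currently requires, while the YAGNI-friendly version implements only the CSV export the product needs today.

# Speculative: parameters and branches for needs that may never materialize.
def export_report(data, fmt="csv", encrypt=False, compress=False, upload_to=None):
    ...

# YAGNI: only the CSV export that is actually needed today.
def export_report(data):
    return "\n".join(",".join(str(value) for value in row) for row in data)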
KISS – Keep It Simple, Stupid
This principle states that code should favor simplicity over complexity to enhance comprehensibility, usability, and maintainability. Direct and clear code is better than code that is bloated or confusing.
Example: The function directly multiplies the length and width to calculate the area and there are no extra steps or conditions that might confuse or complicate the code.
def calculate_area(length, width):
    return length * width
The Boy Scout Rule
According to ‘The Boy Scout Rule’, always leave the code in a better state than you found it. In other words, make continuous, small enhancements whenever engaging with the codebase. It could be either adding a feature or fixing a bug. It encourages continuous improvement and maintains a high-quality codebase over time.
Example: The original code used a redundant intermediate variable, and its conditional can be collapsed. The cleaned-up version is more concise and easier to understand.
Before:
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

result = factorial(5)
print(result)
After:
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))
Fail Fast
This principle indicates that code should fail as early as possible. Failing early limits the bugs that make it into production and ensures errors are addressed promptly, keeping the code clean, reliable, and usable.
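A small sketch of failing fast (a hypothetical example): inputs are validated at the boundary so a bad value is reported immediately instead of silently producing a wrong result.

def apply_discount(price, rate):
    # Fail fast: reject invalid input with a clear error right away.
    if price < 0:
        raise ValueError(f"price must be non-negative, got {price}")
    if not 0 <= rate <= 1:
        raise ValueError(f"rate must be between 0 and 1, got {rate}")
    return price * (1 - rate)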
Open/Closed Principle
As per the Open/Closed Principle, software entities should be open to extension but closed to modification. This means new functionality should be added to an existing software system without changing the existing code.
Example: The Open/Closed Principle allows adding new employee types (like "intern" or "contractor") without modifying the existing calculate_salary function. This makes the system more flexible and maintainable.
Without the Open/Closed Principle
def calculate_salary(employee_type):
    if employee_type == "regular":
        return base_salary
    elif employee_type == "manager":
        return base_salary * 1.5
    elif employee_type == "executive":
        return base_salary * 2
    else:
        raise ValueError("Invalid employee type")
With the Open/Closed Principle
class Employee:
    def calculate_salary(self):
        raise NotImplementedError()

class RegularEmployee(Employee):
    def calculate_salary(self):
        return base_salary

class Manager(Employee):
    def calculate_salary(self):
        return base_salary * 1.5

class Executive(Employee):
    def calculate_salary(self):
        return base_salary * 2
Practice Consistently
When you choose to approach something in a specific way, ensure maintaining consistency throughout the entire project. This includes consistent naming conventions, coding styles, and formatting. It also ensures that the code aligns with team standards, to make it easier for others to understand and work with. Consistent practice also allows you to identify areas for improvement and learn new techniques.
Favor Composition Over Inheritance
This means to use ‘has-a’ relationships (containing instances of other classes) instead of ‘is-a’ relationships (inheriting from a superclass). This makes the code more flexible and maintainable.
Example: Here, the SportsCar class has a Car object as a member (plus an additional spoiler component) instead of inheriting from Car. This makes the design more flexible, as we can easily create different types of cars with different combinations of components.
class Engine:
    def start(self):
        pass

class Car:
    def __init__(self, engine):
        self.engine = engine  # Car 'has-a' Engine

class SportsCar:
    def __init__(self, car, spoiler):
        self.car = car  # SportsCar 'has-a' Car instead of inheriting from it
        self.spoiler = spoiler
Avoid Hard-Coded Numbers
Avoid hard-coded numbers; use named constants or variables instead to make the code more readable and maintainable.
Example:
Instead of:
total = price - price * 0.2
Use:
DISCOUNT_RATE = 0.2
total = price - price * DISCOUNT_RATE
This makes the code more readable and easier to modify if the discount rate needs to be changed.
Typo - An Automated Code Review Tool
Typo’s automated code review tool enables developers to catch code issues and detect code smells and potential bugs promptly.
With automated code reviews, auto-generated fixes, and highlighted hotspots, Typo streamlines the process of merging clean, secure, and high-quality code. It automatically scans your codebase and pull requests for issues, generating safe fixes before merging to master. Hence, ensuring your code stays efficient and error-free.
The ‘Goals’ feature empowers engineering leaders to set specific objectives for their tech teams that directly support writing clean code. By tracking progress and providing performance insights, Typo helps align teams with best practices, making it easier to maintain clean, efficient code. The goals are fully customizable, allowing you to set tailored objectives for different teams simultaneously.
Platform engineering is a relatively new and evolving field in the tech industry. To make the most of Platform Engineering, there are several best practices you should be aware of.
In this blog, we explore these practices in detail and provide insights into how you can effectively implement them to optimize your development processes and foster innovation.
What is Platform Engineering?
Platform Engineering, an emerging technology approach, is the practice of designing and managing the infrastructure and tools that support software development and deployment, helping teams automate the software development lifecycle end to end. The aim is to reduce overall cognitive load, increase operational efficiency, and remove process bottlenecks by providing a reliable and scalable platform for building, deploying, and managing applications.
Importance of Platform Engineering
Platform engineering improves developer experience by offering self-service capabilities and standardized tools. It allows the team to focus on building features and deliver products more efficiently and effectively.
It increases the reliability and security of applications by providing a stable foundation and centralized infrastructure management.
Engineering teams can deploy applications and updates faster with a robust and automated platform that accelerates the time-to-market for new features and products.
Focusing on scalable solutions enables the underlying systems to handle increased demand without compromising performance, allowing teams to grow their applications and services efficiently.
A solid platform foundation allows teams to experiment with new technologies and methodologies. Hence, supporting innovation and the adoption of modern practices.
Platform Engineering Best Practices
The Platform Must Be Developer-Centric
Always treat the developers who use your platform as paying customers. This allows you to understand their pain points, preferences, and requirements and focus on making the development process easier and more efficient. Some of the key points to take into consideration:
User-friendly tools that streamline the workflow.
Developers must feel at ease while navigating the platform.
The platform seamlessly integrates with existing and other third-party applications.
Developers can access and manage resources without needing extensive support.
When the above-mentioned needs and requirements are met, end-users are likely to adopt this platform enthusiastically. Hence, making the platform more effective and productive.
Adopt Security Best Practices
Implement security controls at every layer of the platform. Make sure security posture audits are conducted regularly and that everyone on the team is updated with the latest security patches. Besides this, conduct code reviews and code analysis to identify and fix security vulnerabilities quickly. Educate your platform engineering team about security practices and offer them ongoing training and mentorship so they are constantly upskilling.
Foster Continuous Improvement and Feedback Loops
Continuous improvement must be a core principle to allow the platform to evolve according to technical trends. Integrate feedback mechanisms with the internal developer platform to gather insights from the software development lifecycle. Regularly review and improve the platform based on feedback from development teams. This enables rapid responses to any impediments developers face.
Encourage a Culture of Collaboration
Foster communication and knowledge sharing among platform engineers. Align them with common goals and objectives and recognize their collaborative efforts. This helps teams understand how their work contributes to the overall success of the platform, which fosters a sense of unity and purpose. It also ensures that all stakeholders understand how to effectively use the platform and contribute to its continuous improvement.
Platform Team must have a Product Mindset
View your internal platform as a product that requires management and ongoing development. The platform team must be driven by a product mindset that includes publishing roadmaps, gathering user feedback, and fostering a customer-centric approach. They must focus on what offers real value to their internal customers and app developers based on the feedback, so it addresses the pain points quickly.
Maintain DevOps Culture
Emphasize the importance of a DevOps culture that prioritizes collaboration between development and operations teams and focuses on learning and improvement rather than assigning blame. It is crucial to cultivate an environment where platform engineering can thrive and a shared responsibility for the software lifecycle takes hold.
Typo - An Effective Platform Engineering Tool
Typo is an effective platform engineering tool that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools.
It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, capturing qualitative insights and providing an in-depth view of the real issues.
Platform Engineering is reshaping how we approach software development by streamlining infrastructure management and improving operational efficiency. Adhering to best practices allows organizations to harness the full potential of their platforms. Embracing these principles will optimize your development processes, drive innovation, and ensure a stable foundation for future growth.
The era when development and operations teams worked in isolation, rarely interacting, is over. This outdated approach led to significant delays in developing and launching new applications. Modern IT leaders understand that DevOps is a more effective strategy.
DevOps fosters collaboration between software development and IT operations, enhancing the speed, efficiency, and quality of software delivery. By leveraging DevOps tools, the software development process becomes more streamlined through improved team collaboration and automation.
Understanding DevOps
DevOps is a methodology that merges software development (Dev) with IT operations (Ops) to shorten the development lifecycle while maintaining high software quality.
Creating a DevOps culture promotes collaboration, which is essential for continuous delivery. IT operations and development teams share ideas and provide prompt feedback, accelerating the application launch cycle.
Importance of DevOps for Startups
In the competitive startup environment, time equates to money. Delayed product launches risk competitors beating you to market. Even with an early market entry, inefficient development processes can hinder timely feature rollouts that customers need.
Implementing DevOps practices helps startups keep pace with industry leaders, speeding up development without additional resource expenditure, improving customer experience, and aligning with business needs.
Core Principles of DevOps
The foundation of DevOps rests on the principles of culture, automation, measurement, and sharing (CAMS). These principles drive continuous improvement and innovation in startups.
Key Benefits of DevOps for Startups
Faster Time-to-Market
DevOps accelerates development and release processes through automated workflows and continuous feedback integration.
Startups can rapidly launch new features, fix bugs, and update software, gaining a competitive advantage.
Implement continuous integration and continuous deployment (CI/CD) pipelines.
Use automated testing to identify issues early.
Improved Efficiency
DevOps enhances workflow efficiency by automating repetitive tasks and minimizing manual errors.
Utilize configuration management tools like Ansible and Chef.
Implement containerization with Docker for consistency across environments.
Popular tools in this space include:
Jenkins for CI/CD
Docker for containerization
Kubernetes for orchestration
Enhanced Reliability
DevOps ensures code changes are continuously tested and validated, reducing failure risks.
Conduct regular automated testing.
Continuously monitor applications and infrastructure.
Increased reliability leads to higher customer satisfaction and retention.
DevOps Practices for Startups
Embrace Automation with CI/CD Tools
Automation tools are essential for accelerating the software delivery process. Startups should use CI/CD tools to automate testing, integration, and deployment. Recommended tools include:
Jenkins: An open-source automation server that supports building and deploying applications.
GitLab CI/CD: Integrated CI/CD capabilities within GitLab for seamless pipeline management.
CircleCI: A cloud-based CI/CD tool that offers fast builds and easy integration with various services.
Leverage Software Engineering Intelligence (SEI) Platforms
SEI platforms provide critical insights into the engineering processes, enhancing decision-making and efficiency. Key features include:
Data Integration: SEI platforms like Typo ingest data from various tools (e.g., GitHub, JIRA) to provide a holistic view of the development pipeline.
Actionable Insights: These platforms analyze data to identify bottlenecks and inefficiencies, enabling teams to optimize workflows and improve delivery speed.
DORA Metrics: SEI platforms track key metrics such as deployment frequency, lead time for changes, change failure rate, and time to restore service, helping teams measure their performance against industry standards.
Foster Collaboration and Communication
Utilize collaborative tools to enhance communication among team members. Recommended tools include:
Slack: For real-time communication and integration with other DevOps tools.
JIRA: For issue tracking and agile project management.
Confluence: For documentation and knowledge sharing.
Encourage Continuous Learning
Promote a culture of continuous learning through:
Internal Workshops: Regularly scheduled sessions on new tools or methodologies.
Online Courses: Encourage team members to take courses on platforms like Coursera or Udemy.
Establish Clear Standards and Documentation
Create a repository for documentation and coding standards using:
Markdown: For easy-to-read documentation within code repositories.
GitHub Pages: For hosting project documentation directly from your GitHub repository.
How Typo Helps DevOps Teams?
Typo is a powerful tool designed specifically for tracking and analyzing DevOps metrics. It provides an efficient solution for dev and ops teams seeking precision in their performance measurement.
With pre-built integrations in the dev tool stack, the dashboard provides all the relevant data within minutes.
It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
The dashboard lets teams set custom improvement goals and tracks their success in real time.
It gives real-time visibility into a team’s KPIs and lets them make informed decisions.
Implementing DevOps best practices can markedly boost the agility, productivity, and dependability of startups.
By integrating continuous integration and deployment, leveraging infrastructure as code, employing automated testing, and maintaining continuous monitoring, startups can effectively tackle issues like limited resources and skill shortages.
Moreover, fostering a cooperative culture is essential for successful DevOps adoption. By adopting these strategies, startups can create durable, scalable solutions for end users and secure long-term success in a competitive landscape.
Pros and Cons of DORA Metrics for Continuous Delivery
DORA metrics offer a valuable framework for assessing software delivery performance throughout the software delivery lifecycle. Measuring DORA key metrics allows engineering leaders to identify bottlenecks, improve efficiency, and enhance software quality, which impacts customer satisfaction. It is also a key indicator for measuring the effectiveness of continuous delivery pipelines.
In this blog post, we delve into the pros and cons of utilizing DORA metrics to optimize continuous delivery processes, exploring their impact on performance, efficiency, and the delivery of high-quality software.
What are DORA Metrics?
DORA metrics were developed by the DORA team founded by Gene Kim, Jez Humble, and Dr. Nicole Forsgren. These metrics are key performance indicators that measure the effectiveness and efficiency of the software delivery process and provide a data-driven approach to evaluate the impact of operational practices on software delivery performance.
Four Key DORA Metrics
Deployment Frequency measures how often code is deployed into production per week.
Lead Time for Changes measures the time it takes for code changes to move from inception to deployment.
Change Failure Rate measures the percentage of deployments that cause failures in production.
Mean Time to Recover measures the time to recover a system or service after an incident or failure in production.
In 2021, the DORA Team added Reliability as a fifth metric. It is based upon how well the user’s expectations are met, such as availability and performance, and measures modern operational practices.
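To make the four key metrics concrete, here is a minimal sketch of how they can be computed from deployment records; the field names are illustrative, not a standard schema.

from datetime import datetime, timedelta

# Illustrative records: one dict per production deployment in the window.
deployments = [
    {"at": datetime(2024, 6, 3), "commit_at": datetime(2024, 6, 1), "failed": False},
    {"at": datetime(2024, 6, 5), "commit_at": datetime(2024, 6, 4), "failed": True,
     "restored_at": datetime(2024, 6, 5, 4)},
    {"at": datetime(2024, 6, 7), "commit_at": datetime(2024, 6, 6), "failed": False},
]
weeks = 1  # length of the observation window

deployment_frequency = len(deployments) / weeks

lead_times = [d["at"] - d["commit_at"] for d in deployments]
lead_time_for_changes = sum(lead_times, timedelta()) / len(lead_times)

failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

recovery_times = [d["restored_at"] - d["at"] for d in failures]
mean_time_to_recover = sum(recovery_times, timedelta()) / len(recovery_times)

print(deployment_frequency, lead_time_for_changes,
      change_failure_rate, mean_time_to_recover)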
Importance of Continuous Delivery for DORA Metrics
Continuous delivery (CD) is a primary aspect of modern software development that automatically prepares code changes for release to a production environment. It is combined with continuous integration (CI) and together, these two practices are known as CI/CD.
CD pipelines offer significant advantages over traditional waterfall-style development. A few of them are:
Faster Time to Market
Continuous Delivery enables more frequent releases, allowing new features, improvements, and bug fixes to be delivered to end-users more quickly. It provides a competitive advantage by keeping the product up-to-date and responsive to user needs, which enhances customer satisfaction.
Improved Quality and Reliability
Automated testing and consistent deployment processes catch bugs and issues early. It improves the overall quality and reliability of the software and reduces the chances of defects reaching production.
Reduced Deployment Risk
When updates are smaller and more frequent, it reduces the complexity and risk associated with each deployment. If an issue does arise, it becomes easier to pinpoint the problem and roll back the changes.
Scalability
CD practices can be scaled to accommodate growing development teams and more complex applications. It helps to manage the increasing demands of modern software development.
Innovation and Experimentation
Continuous delivery allows teams to experiment with new ideas and features efficiently. This encourages innovation by allowing quick feedback and iteration cycles.
Enhances Performance Visibility
Deployment Frequency: High deployment frequency indicates a team’s ability to deliver updates and new features quickly and consistently.
Lead Time for Changes: Short lead times suggest a more efficient delivery process.
Change Failure Rate: A lower rate highlights better testing and higher quality in releases.
Mean Time to Restore (MTTR): A lower MTTR indicates a team’s capability to respond to and fix issues rapidly.
Increases Operational Efficiency
Implementing DORA metrics encourages teams to streamline their processes, reducing bottlenecks and inefficiencies in the delivery pipeline. It also allows the team to regularly measure and analyze these metrics which fosters a culture of continuous improvement. As a result, teams are motivated to identify and resolve inefficiencies.
Fosters Collaboration and Communication
Tracking DORA metrics encourages collaboration between DevOps and other stakeholders. Hence, fostering a more integrated and cooperative approach to software delivery. It further provides objective data that teams can use to make informed decisions, prioritize work, and align their efforts with business goals.
Improves Software Quality
Continuous Delivery relies heavily on automated testing to catch defects early. DORA metrics help software teams track the testing processes’ effectiveness which ensures higher software quality. Faster deployment cycles and lower lead times enable quicker feedback from end-users. It allows software development teams to address issues and improve the product more swiftly.
Increases Reliability and Stability
Software teams can ensure that their deployments are more reliable and less prone to issues by monitoring and aiming to reduce the change failure rate. A low MTTR demonstrates a team’s capability to quickly recover from failures which minimizes downtime and its impact on users. Hence, increases the reliability and stability of the software.
Effective Incident Management
Incident management is an integral part of CD as it helps quickly address and resolve any issues that arise. This aligns with the DORA metric for Time to Restore Service as it ensures that any disruptions are quickly addressed, minimizing downtime, and maintaining service reliability.
Cons of DORA Metrics for Continuous Delivery
Implementation Challenges
The process of setting up the necessary software to measure DORA metrics accurately can be complex and time-consuming. Besides this, inaccurate or incomplete data can lead to misleading metrics which can affect decision-making and process improvements.
Resource Allocation Issues
Implementing and maintaining the necessary infrastructure to track DORA metrics can be resource-intensive. It potentially diverts resources from other important areas and increases the risk of disproportionately allocating resources to high-performing teams or projects to improve metrics.
Limited Scope of Metrics
DORA metrics focus on specific aspects of the delivery process and may not capture other crucial factors including security, compliance, or user satisfaction. It is also not universally applicable as the relevance and effectiveness of DORA metrics can vary across different types of projects, teams, and organizations. What works well for one team may not be suitable for another.
Cultural Resistance
Implementing DORA DevOps metrics requires changes in culture and mindset, which can be met with resistance from teams that are accustomed to traditional methods. Apart from this, ensuring that DORA metrics align with broader business goals and are understood by all stakeholders can be challenging.
Subjectivity in Measurement
While DORA metrics are quantitative in nature, their interpretation and application can be highly subjective. How metrics like ‘Lead Time for Changes’ or ‘MTTR’ are defined and measured can vary significantly across teams, resulting in inconsistencies in how these metrics are understood and applied.
How does Typo Solve this Issue?
As the tech landscape is evolving, there is a need for diverse evaluation tools in software development. Relying solely on DORA metrics can result in a narrow understanding of performance and progress. Hence, software development organizations necessitate a multifaceted evaluation approach.
And that’s why Typo is here to the rescue!
Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools. It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.
Features
Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
Includes an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint.
Provides a 360° view of the developer experience, capturing qualitative insights and an in-depth view of the real issues.
Offers engineering benchmarks to compare the team’s results across industries.
While DORA metrics offer valuable insights into software delivery performance, they have their limitations. Typo provides a robust platform that complements DORA metrics by offering deeper insights into developer productivity and workflow efficiency, helping engineering teams achieve the best possible software delivery outcomes.
Improving Scrum Team Performance with DORA Metrics
Scrum is known to be a popular methodology for software development. It concentrates on continuous improvement, transparency, and adaptability to changing requirements. Scrum teams hold regular ceremonies, including Sprint Planning, Daily Stand-ups, Sprint Reviews, and Sprint Retrospectives, to keep the process on track and address any issues.
With the help of DORA DevOps Metrics, Scrum teams can gain valuable insights into their development and delivery processes.
In this blog post, we discuss how DORA metrics help boost scrum team performance.
In 2015, the DORA (DevOps Research and Assessment) team was founded by Gene Kim, Jez Humble, and Dr. Nicole Forsgren to evaluate and improve software development practices. The aim is to enhance understanding of how development teams can deliver software faster, more reliably, and at higher quality.
Four key DORA metrics are:
Deployment Frequency: Deployment Frequency measures the rate of change in software development and highlights potential bottlenecks. It is a key indicator of agility and efficiency. High Deployment Frequency signifies a streamlined pipeline, allowing teams to deliver features and updates faster.
Lead Time for Changes: Lead Time for Changes measures the time it takes for code changes to move from inception to deployment. It tracks the speed and efficiency of software delivery and offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.
Change Failure Rate: Change Failure Rate measures the frequency of newly deployed changes leading to failures, glitches, or unexpected outcomes in the IT environment. It reflects reliability and efficiency and is related to team capacity, code complexity, and process efficiency, impacting speed and quality.
Mean Time to Recover: Mean Time to Recover measures the average duration a system or application takes to recover from a failure or incident. It concentrates on determining the efficiency and effectiveness of an organization's incident response and resolution procedures.
Reliability is a fifth metric that was added by the DORA team in 2021. It is based upon how well your users’ expectations are met, such as availability and performance, and measures modern operational practices. It doesn’t have standard quantifiable targets for performance levels; rather, it depends upon service level indicators or service level objectives.
Why DORA Metrics are Useful for Scrum Team Performance?
DORA metrics are useful for Scrum team performance because they provide key insights into the software development and delivery process. Hence, driving operational performance and improving developer experience.
Measure Key Performance Indicators (KPIs)
DORA metrics track crucial KPIs such as deployment frequency, lead time for changes, mean time to recovery (MTTR), and change failure rate which helps Scrum teams understand their efficiency and identify areas for improvement.
Enhance Workflow Efficiency
Teams can streamline their software delivery process and reduce bottlenecks by monitoring deployment frequency and lead time for changes. Hence, leading to faster delivery of features and bug fixes.
Improve Reliability
Tracking the change failure rate and MTTR helps software teams focus on improving the reliability and stability of their applications. Hence, resulting in more stable releases and fewer disruptions for users.
Encourage Data-Driven Decision Making
DORA metrics give clear data that helps teams decide where to improve, making it easier to prioritize the most impactful actions for better performance and enhanced customer satisfaction.
Foster Continuous Improvement
Regularly reviewing these metrics encourages a culture of continuous improvement. This helps software development teams to set goals, monitor progress, and adjust their practices based on concrete data.
Benchmarking
DORA metrics allow DevOps teams to compare their performance against industry standards or other teams within the organization. This encourages healthy competition and drives overall improvement.
Provide Actionable Insights
DORA metrics provide actionable data that helps Scrum teams identify inefficiencies and bottlenecks in their processes. Analyzing these metrics allows engineering leaders to make informed decisions about where to focus improvement efforts and reduce recovery time.
Best Practices for Implementing DORA Metrics in Scrum Teams
Understand the Metrics
Firstly, understand the importance of DORA Metrics as each metric provides insight into different aspects of the development and delivery process. Together, these metrics offer a comprehensive view of the team’s performance and allow them to make data-driven decisions.
Set Baselines and Goals
Scrum teams should start by setting baselines for each metric to get a clear starting point and set realistic goals. For instance, if a scrum team currently deploys once a month, it may be unrealistic to aim for multiple deployments per day right away. Instead, they could set a more achievable goal, like deploying once a week, and gradually work towards increasing their frequency.
Regularly Review and Analyze Metrics
Scrum teams must schedule regular reviews (e.g., during sprint retrospectives) to discuss the metrics and identify trends, patterns, and anomalies in the data. This helps track progress, pinpoint areas for improvement, and allows teams to make data-driven decisions to optimize their processes and adjust their goals as needed.
Foster Continuous Growth
Use the insights gained from the metrics to drive ongoing improvements and foster a culture that values experimentation and learning from mistakes. By creating this environment, Scrum teams can steadily enhance their software delivery performance. Note that this approach should go beyond just focusing on DORA metrics; it should also take into account other factors like developer productivity and well-being, collaboration, and customer satisfaction.
Ensure Cross-Functional Collaboration and Communicate Transparently
Encourage collaboration between development, operations, and other relevant teams to share insights and work together to address bottlenecks and improve processes. Make the metrics and their implications transparent to the entire team. You can use the DORA Metrics dashboard to keep everyone informed and engaged.
How Typo Leverages DORA Metrics?
Typo is a powerful tool designed specifically for tracking and analyzing DORA metrics. It provides an efficient solution for DevOps and Scrum teams seeking precision in their performance measurement.
With pre-built integrations in the dev tool stack, the DORA metrics dashboard provides all the relevant data within minutes.
It enables deep dives and correlation across metrics to identify bottlenecks, sprint delays, blocked PRs, deployment inefficiencies, and much more from a single dashboard, in real time.
The dashboard lets you set custom improvement goals for each team and tracks progress against them in real time.
It gives real-time visibility into a team’s KPIs and enables them to make informed decisions in real time.
Wanna Improve your Team Performance with DORA Metrics?
Leveraging DORA Metrics can transform Scrum team performance by providing actionable insights into key aspects of development and delivery. When implemented the right way, teams can optimize their workflows, enhance reliability, and make informed decisions to build high-quality software.
Platform Engineering is becoming increasingly crucial. According to the 2024 State of DevOps Report: The Evolution of Platform Engineering, 43% of organizations have had platform teams for 3-5 years. The field offers numerous benefits, such as faster time-to-market, enhanced developer happiness, and the elimination of team silos.
However, there is one critical piece of advice that Platform Engineers often overlook: treat your platform as an internal product and consider your wider teams as your customers.
So, how can they do this effectively? It's important to measure what’s working and what isn’t using consistent indicators of success.
In this blog, we’ve curated the top platform engineering KPIs that software teams must monitor:
What is Platform Engineering?
Platform Engineering is an emerging approach that equips software engineering teams with the resources they need to automate the software development lifecycle end to end. The goal is to reduce overall cognitive load, enhance operational efficiency, and remove process bottlenecks by providing a reliable, scalable platform for building, deploying, and managing applications.
Importance of Tracking Platform Engineering KPIs
Helps in Performance Monitoring and Optimization
Platform Engineering KPIs offer insights into how well the platform performs under various conditions. They also help identify gaps and areas that need optimization to ensure the platform runs efficiently.
Ensures Scalability and Capacity Planning
These metrics guide decisions on how to scale resources and support capacity planning, ensuring the platform can handle growth and increased load without performance degradation.
Quality Assurance
Tracking KPIs ensures that the platform remains robust and maintainable, which helps reduce technical debt and improve the platform’s overall quality.
Increases Productivity and Collaboration
They provide in-depth insights into how effectively the engineering team operates and help to identify areas for improvement in team dynamics and processes.
Fosters a Culture of Continuous Improvement
Regularly tracking and analyzing KPIs fosters a culture of continuous improvement, encouraging proactive problem-solving and innovation among platform engineers.
Top Platform Engineering KPIs to Track
Deployment Frequency
Deployment Frequency measures how often code is deployed to production. It takes into account everything from bug fixes and capability improvements to new features. It is a key metric for understanding the agility and efficiency of development and operational processes, and it highlights the team’s ability to deliver updates and new features.
A higher frequency with minimal issues reflects mature CI/CD processes and a platform team that can quickly adapt to change. Tracking Deployment Frequency regularly supports continuous improvement: frequent, small releases reduce the risk of large, disruptive changes and deliver value to end-users sooner.
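In practice, this metric is easy to derive once you can export production deployment timestamps from your CI/CD tool. Here is a minimal Python sketch of the calculation; the export format and sample data are assumptions for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical export of production deployment timestamps from a CI/CD tool.
deployments = [
    "2024-05-06T10:12:00", "2024-05-08T16:40:00", "2024-05-09T09:05:00",
    "2024-05-14T11:30:00", "2024-05-21T15:22:00",
]

# Bucket deployments by ISO (year, week) to get a weekly frequency.
per_week = Counter(
    datetime.fromisoformat(ts).isocalendar()[:2] for ts in deployments
)

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deployment(s)")
```

The same bucketing works per day or per month, depending on the release cadence you are targeting.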
Lead Time for Changes
Lead Time for Changes is the duration between a code change being committed and its successful deployment to end-users. It reflects both the speed and the quality of the platform engineering team’s process. A high lead time is a clear sign of roadblocks in the process and signals that the platform needs attention.
A low lead time indicates that teams quickly act on feedback and deliver on time. It also gives teams the ability to make rapid changes, allowing them to adapt to evolving user needs and market conditions. Tracking it regularly helps streamline workflows and reduce bottlenecks.
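To measure it, join commit timestamps from Git history with deployment timestamps from CI/CD logs. A minimal sketch, with hypothetical data:

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs joined from Git and CI/CD logs.
changes = [
    ("2024-05-06T09:00:00", "2024-05-06T15:30:00"),
    ("2024-05-07T11:00:00", "2024-05-09T10:00:00"),
    ("2024-05-08T14:00:00", "2024-05-08T18:45:00"),
]

lead_times_hours = [
    (datetime.fromisoformat(done) - datetime.fromisoformat(start)).total_seconds() / 3600
    for start, done in changes
]

# The median is more robust than the mean when a few changes sit in review
# for a long time.
print(f"Median lead time: {median(lead_times_hours):.1f} hours")
```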
Change Failure Rate
Change Failure Rate (CFR) is the proportion of deployments that result in a failure or error. It indicates how often changes negatively impact the stability or functionality of the system. CFR also provides a clear view of the platform’s quality and stability, e.g., how much effort goes into addressing problems after releasing code.
A lower CFR indicates that deployments are reliable and that changes are thoroughly tested and less likely to cause issues in production. It also reflects well-functioning development and deployment processes, boosting team confidence and morale.
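The calculation itself is a simple ratio once you label which deployments caused a failure (a rollback, hotfix, or incident). A toy sketch with made-up data:

```python
# Hypothetical deployment log: True marks a deployment that caused a
# production failure (rollback, hotfix, or incident).
deployment_failed = [False, False, True, False, False, False, True, False]

cfr = sum(deployment_failed) / len(deployment_failed) * 100
print(f"Change Failure Rate: {cfr:.1f}%")  # 25.0% for this toy sample
```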
Mean Time to Restore
Mean Time to Restore (MTTR) is the average time taken to resolve a production failure or incident and restore normal system functionality. A low MTTR indicates a resilient platform, quick recovery from issues, and an efficient incident response.
Faster recovery minimizes the impact on users, increasing their satisfaction and trust in the service. It also contributes to higher system uptime and availability, enhancing your platform’s reputation and giving you a competitive edge.
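From an incident log with detection and restoration timestamps, MTTR is just an average of durations. A minimal sketch, with hypothetical incidents:

```python
from datetime import datetime

# Hypothetical incident log: (detected_at, restored_at) per production incident.
incidents = [
    ("2024-05-06T10:00:00", "2024-05-06T10:45:00"),
    ("2024-05-12T22:10:00", "2024-05-13T01:40:00"),
]

durations_min = [
    (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60
    for start, end in incidents
]

mttr = sum(durations_min) / len(durations_min)
print(f"Mean Time to Restore: {mttr:.0f} minutes")
```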
Resource Utilization
This KPI tracks how heavily system resources such as CPU, memory, and storage are used. It is critical for optimizing resource allocation and cost efficiency, since a platform must balance several objectives with a fixed amount of resources.
It allows platform engineers to distribute limited resources efficiently and understand exactly where to spend. Resource Utilization also aids capacity planning and helps avoid potential bottlenecks.
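A simple starting point is to summarize average and peak utilization from monitoring samples. A sketch with made-up samples; how you collect them depends on your monitoring stack:

```python
# Hypothetical samples of CPU and memory usage (as fractions of capacity)
# collected from a monitoring agent at regular intervals.
cpu_samples = [0.42, 0.55, 0.61, 0.48, 0.90]
mem_samples = [0.70, 0.72, 0.69, 0.75, 0.74]

def utilization_summary(name: str, samples: list[float]) -> None:
    # A sustained average near capacity suggests scaling up; a very low
    # average suggests over-provisioning and wasted spend.
    avg = sum(samples) / len(samples)
    peak = max(samples)
    print(f"{name}: avg {avg:.0%}, peak {peak:.0%}")

utilization_summary("CPU", cpu_samples)
utilization_summary("Memory", mem_samples)
```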
Error Rates
Error Rates measure the number of errors encountered in the platform, typically as a proportion of total requests. They reflect the platform’s stability, reliability, and user experience. High error rates indicate underlying problems that need immediate attention; left unaddressed, they degrade the user experience, leading to frustration and potential loss of users.
Monitoring Error Rates helps in the early detection of issues, enabling proactive response, and preventing minor issues from escalating into major outages. It also provides valuable insights into system performance and creates a feedback loop that informs continuous improvement efforts.
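A basic implementation divides error counts by request counts per time window and alerts when a threshold is crossed. A minimal sketch with illustrative numbers and a hypothetical 1% threshold:

```python
# Hypothetical per-minute counters scraped from an application's metrics.
requests_per_min = [1200, 1150, 1300, 1250, 1100]
errors_per_min = [3, 2, 40, 4, 3]

ERROR_RATE_THRESHOLD = 0.01  # alert if more than 1% of requests fail

for minute, (reqs, errs) in enumerate(zip(requests_per_min, errors_per_min)):
    rate = errs / reqs
    status = "ALERT" if rate > ERROR_RATE_THRESHOLD else "ok"
    print(f"minute {minute}: error rate {rate:.2%} [{status}]")
```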
Team Velocity
Team Velocity is a critical metric that measures the amount of work completed in a given iteration (e.g., a sprint). It highlights developer productivity and efficiency and aids in planning and prioritizing future tasks.
It helps forecast the completion dates of larger projects or features, aiding long-term planning and setting stakeholder expectations. Team Velocity also helps leaders understand the platform team’s capacity so they can distribute tasks evenly and prevent overloading team members.
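For forecasting, a rolling average of recent sprints is usually more reliable than the latest sprint alone. A minimal sketch with a hypothetical story-point history:

```python
from statistics import mean

# Hypothetical story points completed in the last six sprints.
velocity_history = [21, 25, 19, 24, 26, 23]

# A rolling average of recent sprints smooths out one-off spikes and dips.
forecast = mean(velocity_history[-3:])

remaining_backlog_points = 120
sprints_needed = remaining_backlog_points / forecast
print(f"Forecast velocity: {forecast:.1f} points/sprint")
print(f"Estimated sprints to finish backlog: {sprints_needed:.1f}")
```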
How to Develop a Platform Engineering KPI Plan?
Define Objectives
First, ensure the KPIs support the organization’s broader objectives, such as improving system reliability, enhancing user experience, or increasing development efficiency. Always focus on metrics that reflect the unique aspects of platform engineering.
Identify Key Performance Indicators
Select KPIs that provide a comprehensive view of platform engineering performance. We’ve shared some critical KPIs above; choose the ones that fit your objectives and context.
Establish Baseline and Targets
Assess current performance levels to establish baselines, then set realistic, achievable targets for each KPI based on historical data, industry benchmarks, and business objectives.
Analyze and Interpret Data
Regularly analyze trends in the data to identify patterns, anomalies, and areas for improvement. Set up alerts for critical KPIs that require immediate attention. Don’t forget to conduct root cause analysis for any deviations from expected performance to understand underlying issues.
Review and Refine KPIs
Lastly, review the relevance and effectiveness of the KPIs periodically to ensure they align with business objectives and provide value. Adjust targets based on changes in business goals, market conditions, or team capacity.
Typo - An Effective Platform Engineering Tool
Typo is an effective platform engineering tool that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It seamlessly integrates into the existing tool stack, including Git hosting, issue trackers, and CI/CD tools.
It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360-degree view of the developer experience, capturing qualitative insights and providing an in-depth view of the real issues.
Monitoring the right KPIs is essential for successful platform teams. By treating your platform as an internal product and your teams as customers, you can focus on delivering value and driving continuous improvement. The KPIs discussed above provide a comprehensive view of your platform's performance and areas for enhancement.
There are other KPIs available as well that we have not mentioned. Do your research and consider those that best suit your team and objectives.
What if we told you that writing more code could be making you less productive?
While equating productivity with output is tempting, developer efficiency is far more complex. The real challenge often lies in processes, collaboration, and well-being. Without addressing these, inefficiencies and burnout will inevitably follow.
You may spend hours coding, only to feel your work isn’t making an impact—projects get delayed, bug fixes drag on, and constant context switching drains your focus. The key isn’t to work harder but smarter by solving the root causes of these issues.
The SPACE framework addresses this by focusing on five dimensions: Satisfaction, Performance, Activity, Communication, and Efficiency. It helps teams improve how much they do and how effectively they work, reducing workflow friction, improving collaboration, and supporting well-being to boost long-term productivity.
Understanding the SPACE Framework
The SPACE framework addresses five key dimensions of developer productivity: satisfaction and well-being, performance, activity, collaboration and communication, and efficiency and flow. Together, these dimensions provide a comprehensive view of how developers work and where improvements can be made, beyond just measuring output.
By taking these factors into account, teams can better support developers, helping them not only produce better work but also maintain their motivation and well-being. Let’s take a closer look at each part of the framework and how it can help your team achieve a balance between productivity and a healthy work environment.
Common Developer Challenges that SPACE Addresses
In fast-paced, tech-driven environments, developers face several roadblocks to productivity:
Constant interruptions: Developers often deal with frequent context switching, from bug fixes to feature development to emergency support, making it hard to stay focused.
Cross-team collaboration: Working with multiple teams, such as DevOps, QA, and product management, can lead to miscommunication and misaligned priorities.
Lack of real-time feedback: Without timely feedback, developers may unknowingly veer off course or miss performance issues until much later in the development cycle.
Technical debt: Legacy systems and inconsistent coding practices create overhead and slow down development cycles, making it harder to move quickly on new features.
The SPACE framework helps identify and address these challenges by focusing on improving both the technical processes and the developer experience.
How SPACE can help: A Deep Dive into Each Dimension
Let’s explore how each aspect of the SPACE framework can directly impact technical teams:
Satisfaction and well-being
Developers are more productive when they feel engaged and valued. It's important to create an environment where developers are recognized for their contributions and have a healthy work-life balance. This can include feedback mechanisms, peer recognition, or even mental health initiatives. Automated tools that reduce repetitive tasks can also contribute to overall well-being.
Performance
Measuring performance should go beyond tracking the number of commits or pull requests. It’s about understanding the impact of the work being done. High-performing teams focus on delivering high-quality code and minimizing technical debt. Integrating automated testing and static code analysis tools into your CI/CD pipeline ensures code quality is maintained without manual intervention.
Activity
Focusing on meaningful developer activity, such as code reviews, tests written, and pull requests merged, helps align efforts with goals. Tools that track and visualize developer activities provide insight into how time is spent. For example, tracking code review completion times or how often changes are being pushed can reveal bottlenecks or opportunities for improving workflows.
Collaboration and communication
Effective communication across teams reduces friction in the development process. By integrating communication tools directly into the workflow, such as through Git or CI/CD notifications, teams can stay aligned on project goals. Automating feedback loops within the development process, such as notifications when builds succeed or fail, helps teams respond faster to issues.
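As one concrete example of such a feedback loop, a CI step can push build results into the team’s chat so nobody has to poll dashboards. Here is a minimal Python sketch, assuming a Slack-style incoming webhook; the URL is a placeholder:

```python
import json
import urllib.request

# Placeholder Slack-style incoming-webhook URL; any chat tool that accepts
# a JSON POST works the same way.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def notify_build_status(pipeline: str, status: str) -> None:
    """Post a build result to the team channel so nobody has to poll CI."""
    payload = {"text": f"Pipeline `{pipeline}` finished: *{status}*"}
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Called from a CI step after tests run, e.g.:
# notify_build_status("backend-tests", "passed")
```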
Efficiency and flow
Developers enter a “flow state” when they can work on a task without distractions. One way to foster this is by reducing manual tasks and interruptions. Implementing CI/CD tools that automate repetitive tasks—like build testing or deployments—frees up developers to focus on writing code. It’s also important to create dedicated time blocks where developers can work without interruptions, helping them enter and maintain that flow.
Practical Strategies for Applying the SPACE Framework
To make the SPACE framework actionable, here are some practical strategies your team can implement:
Automate repetitive tasks to enhance focus
A large portion of developer time is spent on tasks that can easily be automated, such as code formatting, linting, and testing. By introducing tools that handle these tasks automatically, developers can focus on the more meaningful aspects of their work, like writing new features or fixing bugs. This is where tools like Typo can make a difference. Typo integrates seamlessly into your development process, ensuring that code adheres to best practices by automating code quality checks and providing real-time feedback. Automating these reviews reduces the time developers spend on manual reviews and ensures consistency across the codebase.
Track meaningful metrics
Instead of focusing on superficial metrics like lines of code written or hours logged, focus on tracking activities that lead to tangible progress. Typo, for example, helps track key metrics like the number of pull requests merged, the percentage of code coverage, or the speed at which developers address code reviews. These insights give team leads a clearer picture of where bottlenecks are occurring and help teams prioritize tasks that move the project forward.
Improve communication and collaboration through integrated tools
Miscommunication between developers, product managers, and QA teams can cause delays and frustration. Integrating feedback systems that provide automatic notifications when tests fail or builds succeed can significantly improve collaboration. Typo plays a role here by streamlining communication between teams. By automatically reporting code review statuses or deployment readiness, Typo ensures that everyone stays informed without the need for constant manual updates or status meetings.
Protect flow time and eliminate disruptions
Protecting developer flow is essential to maintaining efficiency. Schedule dedicated “flow” periods where meetings are minimized, and developers can focus solely on their tasks. Typo enhances this by minimizing the need for developers to leave their coding environment to check on build statuses or review feedback. With automated reports, developers can stay updated without disrupting their focus. This helps ensure that developers can spend more time in their flow state and less time on administrative tasks.
Identify bottlenecks in your workflow
Using metrics from tools like Typo, you can gain visibility into where delays are happening in your development process—whether it's slow code review cycles, inefficient testing processes, or unclear requirements. With this insight, you can make targeted improvements, such as adjusting team structures, automating manual testing processes, or dedicating more resources to code reviews to ensure smoother project progression.
How Typo supports the SPACE framework
By using Typo as part of your workflow, you can naturally align with many of the principles of the SPACE framework:
Automated code quality: Typo ensures code quality through automated reviews and real-time feedback, reducing the manual effort required during code review processes.
Tracking developer metrics: Typo tracks key activities that are directly related to developer efficiency, helping teams stay on track with performance goals.
Seamless communication: With automatic notifications and updates, Typo ensures that developers and other team members stay in sync without manual reporting, which helps maintain flow and improve collaboration.
Supporting flow: Typo’s integrations provide updates within the development environment, reducing the need for developers to context switch between tasks.
Bringing it all together: Maximizing Developer Productivity with SPACE
The SPACE framework offers a well-rounded approach to improving developer productivity and well-being. By focusing on automating repetitive tasks, improving collaboration, and fostering uninterrupted flow time, your team can achieve more without sacrificing quality or developer satisfaction. Tools like Typo naturally fit into this process, helping teams streamline workflows, enhance communication, and maintain high code quality.
If you’re looking to implement the SPACE framework, start by automating repetitive tasks and protecting your developers' flow time. Gradually introduce improvements in collaboration and tracking meaningful activity. Over time, you’ll notice improvements in both productivity and the overall well-being of your development team.
What challenges are you facing in your development workflow?
Share your experiences and let us know how tools like Typo could help your team implement the SPACE framework to improve productivity and collaboration!
Developer productivity is the new buzzword across the industry. Measuring developer productivity has gone mainstream since the shift to remote work, and companies like McKinsey are publishing articles titled “Yes, you can measure software developer productivity,” causing a stir in the software development community. So we thought we should share our take on developer productivity.
We will be covering the following whats, whys, and hows of developer productivity in this piece:
What is developer productivity?
Why do we need to measure developer productivity?
How do we measure it at the team and individual levels, and why is it more complicated to measure developer productivity than sales or hiring productivity?
What are the challenges and dangers of measuring developer productivity, and what should we not measure?
What is the impact of measuring developer productivity on engineering culture?
What is Developer Productivity?
Developer productivity refers to the effectiveness and efficiency with which software developers create high-quality software that meets business goals. It encompasses various dimensions, including code quality, development speed, team collaboration, and adherence to best practices. For engineering managers and leaders, understanding developer productivity is essential for driving continuous improvement and achieving successful project outcomes.
Key Aspects of Developer Productivity
Quality of Output: Developer productivity is not just about the quantity of code or code changes produced; it also involves the quality of that code. High-quality code is maintainable, readable, and free of significant bugs, which ultimately contributes to the overall success of a project.
Development Speed: This aspect measures how quickly developers can deliver features, fixes, and updates (usually referred to as developer velocity). While velocity is important, it should not come at the expense of code quality. Effective engineering teams strike a balance between delivering quickly and maintaining high standards.
Collaboration and Team Dynamics: Successful software development relies heavily on effective teamwork. Collaboration tools and practices that foster communication and knowledge sharing can significantly enhance developer productivity. Engineering managers should prioritize creating a collaborative environment that encourages teamwork.
Adherence to Best Practices: Following coding standards, conducting code reviews, and implementing testing protocols are essential for maintaining development productivity. These practices ensure that developers produce high-quality work consistently, leading to improved project outcomes.
We all know that no one loves to be measured, but CEOs and CFOs have an undying love for measuring the ROI of their teams, and we can't ignore that. The higher the development productivity, the higher the ROI. However, measuring developer productivity is essential for engineering managers and leaders too, if they want to optimize their teams' performance: we can't improve something that we don't measure.
Understanding how effectively developers work can lead to improved project outcomes, better resource allocation, and enhanced team morale. In this section, we will explore the key reasons why measuring developer productivity is crucial for engineering management.
Enhancing Team Performance
Measuring developer productivity allows engineering managers to identify strengths and weaknesses within their teams. By analyzing developer productivity metrics, leaders can pinpoint areas where developers excel and where they may need additional support or resources. This insight enables managers to tailor training programs, allocate tasks more effectively, and foster a culture of continuous improvement.
Driving Business Outcomes
Developer productivity is directly linked to business success. By measuring development team productivity, managers can assess how effectively their teams deliver features, fix bugs, and contribute to overall project goals. Understanding productivity levels helps align development efforts with business objectives, ensuring that the team is focused on delivering value that meets customer needs.
Improving Resource Allocation
Effective measurement of developer productivity enables better resource allocation. By understanding how much time and effort are required for various tasks, managers can make informed decisions about staffing, project timelines, and budget allocation. This ensures that resources are utilized efficiently, minimizing waste and maximizing output.
Fostering a Positive Work Environment
Measuring developer productivity can also contribute to a positive work environment. By recognizing high-performing teams and individuals, managers can boost morale and motivation. Additionally, understanding productivity trends can help identify burnout or dissatisfaction, allowing leaders to address issues proactively and create a healthier workplace culture.
Facilitating Data-Driven Decisions
In today’s fast-paced software development landscape, data-driven decision-making is essential. Measuring developer productivity provides concrete data that can inform strategic decisions. Whether it's choosing new tools, adopting agile methodologies, or implementing process changes, having reliable developer productivity metrics allows managers to make informed choices that enhance team performance.
Encouraging Collaboration and Communication
Regularly measuring productivity can highlight the importance of collaboration and communication within teams. By assessing metrics related to teamwork, such as code reviews and pair programming sessions, managers can encourage practices that foster collaboration. This improves not only productivity but also the overall developer experience, by strengthening team dynamics and knowledge sharing.
Ultimately, understanding developer experience and measuring developer productivity leads to better outcomes for both the team and the organization as a whole.
How do we measure Developer Productivity?
Measuring developer productivity is essential for engineering managers and leaders who want to optimize their teams' performance.
Strategies for Measuring Productivity
Focus on Outcomes, Not Outputs: Shift the emphasis from measuring outputs like lines of code to focusing on outcomes that align with business objectives. This encourages developers to think more strategically about the impact of their work.
Measure at the Team Level: Assess productivity at the team level rather than at the individual level. This fosters team collaboration, knowledge sharing, and a focus on collective goals rather than individual competition.
Incorporate Qualitative Feedback: Balance quantitative metrics with qualitative feedback from developers through surveys, interviews, and regular check-ins. This provides valuable context and helps identify areas for improvement.
Encourage Continuous Improvement: Position productivity measurement as a tool for continuous improvement rather than a means of evaluation. Encourage developers to use metrics to identify areas for growth and work together to optimize workflows and development processes.
Lead by Example: As engineering managers and leaders, model the behavior you want to see in your team & team members. Prioritize work-life balance, encourage risk-taking and innovation, and create an environment where developers feel supported and empowered.
Measuring developer productivity involves assessing both team and individual contributions to understand how effectively developers are delivering value through their development processes. Here’s how to approach measuring productivity at both levels:
Team-Level Developer Productivity
Measuring productivity at the team level provides a more comprehensive view of how collaborative efforts contribute to project success. Here are some effective metrics:
DORA Metrics
The DevOps Research and Assessment (DORA) metrics are widely recognized for evaluating team performance. Key metrics include:
Deployment Frequency: How often the software engineering team releases code to production.
Lead Time for Changes: The time taken for committed code to reach production.
Change Failure Rate: The percentage of deployments that result in failures.
Time to Restore Service: The time taken to recover from a failure.
Issue Cycle Time
This metric measures the time taken from the start of work on a task to its completion, providing insights into the efficiency of the software development process.
Team Satisfaction and Engagement
Surveys and feedback mechanisms can gauge team morale and satisfaction, which are critical for long-term productivity.
Collaboration Metrics
Assessing the frequency and quality of code reviews, pair programming sessions, and communication can provide insights into how well the software engineering team collaborates.
Individual-Level Developer Productivity
While team-level metrics are crucial, individual developer productivity also matters, particularly for performance evaluations and personal development. Here are some metrics to consider:
Pull Requests and Code Reviews: Tracking the number of pull requests submitted and the quality of code reviews can provide insights into an individual developer's engagement and effectiveness.
Commit Frequency: Measuring how often a developer commits code can indicate their active participation in projects, though it should be interpreted with caution to avoid incentivizing quantity over quality.
Personal Goals and Outcomes: Setting individual objectives related to project deliverables and tracking their completion can help assess individual productivity in a meaningful way.
Skill Development: Encouraging developers to pursue training and certifications can enhance their skills, contributing to overall productivity.
Measuring developer productivity metrics presents unique challenges compared to more straightforward metrics used in sales or hiring. Here are some reasons why:
Complexity of Work: Software development involves intricate problem-solving, creativity, and collaboration, making it difficult to quantify contributions accurately. Unlike sales, where metrics like revenue generated are clear-cut, developer productivity encompasses qualitative aspects that are much harder to measure.
Collaborative Nature: Development work is highly collaborative. Individual contributions often intertwine with team efforts, making it challenging to isolate the impact of one developer's work. In sales, individual performance is typically more straightforward to assess based on personal sales figures.
Inadequate Traditional Metrics: Metrics such as Lines of Code (LOC) and commit frequency often fail to capture what actually matters. They can incentivize quantity over quality, leading developers to produce more code without improving the software's functionality or maintainability, and distorting the picture of a developer's actual contributions.
Varied Work Activities: Developers engage in various activities beyond coding, including debugging, code reviews, and meetings. These essential tasks are often overlooked in productivity measurements, whereas sales roles typically have more consistent and quantifiable activities.
Evolving Tools and Processes: The developer productivity tools and methodologies used in software development are constantly changing, making it difficult to establish consistent metrics. In contrast, sales processes tend to be more stable, allowing for easier benchmarking and comparison.
By employing a balanced approach that considers both quantitative and qualitative factors, supported by the right developer productivity tools, engineering leaders can gain valuable insights into their teams' productivity and foster an environment of continuous improvement and a better developer experience.
Challenges of measuring Developer Productivity - What not to Measure?
Measuring developer productivity is a critical task for engineering managers and leaders, yet it comes with its own set of challenges and potential pitfalls. Understanding these challenges is essential to avoid the dangers of misinterpretation and to ensure that developer productivity metrics genuinely reflect the contributions of developers. In this section, we will explore the challenges of measuring developer productivity and highlight what not to measure.
Challenges of Measuring Developer Productivity
Complexity of Software Development: Software development is inherently complex, involving creativity, problem-solving, and collaboration. Unlike more straightforward fields like sales, where performance can be quantified through clear metrics (e.g., sales volume), developer productivity is multifaceted and includes various non-tangible elements. This complexity makes it difficult to establish a one-size-fits-all metric.
Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of developer productivity. These metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
Team Dynamics and Collaboration: Measuring individual productivity can overlook the collaborative nature of software development. Developers often work in teams where their contributions are interdependent. Focusing solely on individual metrics may ignore the synergistic effects of collaboration, mentorship, and knowledge sharing, which are crucial for a team's overall success.
Context Ignorance: Developer productivity metrics often fail to consider the context in which developers work. Factors such as project complexity, team dynamics, and external dependencies can significantly impact productivity but are often overlooked in traditional assessments. This lack of context can lead to misleading conclusions about a developer's performance.
Potential for Misguided Incentives: Relying heavily on specific metrics can create perverse incentives. For example, if developers are rewarded based on the number of commits, they may prioritize frequent small commits over meaningful contributions. This can lead to a culture of "gaming the system" rather than fostering genuine productivity and innovation.
What Not to Measure
Lines of Code (LOC): While LOC can provide some insight into coding activity, it is not a reliable measure of productivity. More code does not necessarily equate to better software. Instead, focus on the quality and impact of the code produced.
Commit Frequency: Tracking how often developers commit code can give a false sense of productivity. Frequent commits do not always indicate meaningful progress and can encourage developers to break down their work into smaller, less significant pieces.
Bug Counts: Focusing on the number of bugs reported or fixed can create a negative environment where developers feel pressured to avoid complex tasks that may introduce bugs. This can stifle innovation and lead to a culture of risk aversion.
Time Spent on Tasks: Measuring how long developers spend on specific tasks can be misleading. Developers may take longer on complex problems that require deep thinking and creativity, which are essential for high-quality software development.
Measuring developer productivity is fraught with challenges and dangers that engineering managers must navigate carefully. By understanding these complexities and avoiding outdated or superficial metrics, leaders can foster a more accurate and supportive environment for their development teams.
What is the impact of measuring developer productivity on engineering culture?
Developer productivity improvements are a critical factor in the success of software development projects. For engineering managers and technology leaders, measuring and optimizing developer productivity is essential for delivering successful outcomes. However, measurement can have a significant impact on engineering culture and on retaining engineering talent, and it must be navigated carefully. Let's talk about measuring developer productivity while maintaining a healthy and productive engineering culture.
Measuring developer productivity presents unique challenges compared to other fields. The complexity of software development, inadequate traditional metrics, team dynamics, and lack of context can all lead to misguided incentives and decreased morale. It's crucial for engineering managers to understand these challenges to avoid the pitfalls of misinterpretation and ensure that developer productivity metrics genuinely reflect the contributions of developers.
Remember, the goal is not to maximize metrics but to create a development environment where software engineers can thrive and deliver maximum value to the organization.
Development teams using Typo experience a 30% improvement in Developer Productivity. Want to Try Typo?
Code review is all about improving code quality. However, it can be a nightmare for developers when not done correctly. Poorly run reviews create challenges that slow down the entire development process, reduce morale and efficiency, and can lead to developer burnout.
Hence, optimizing the code review process is crucial for both code reviewers and developers. In this blog post, we have shared a few tips on optimizing code reviews to boost developer productivity.
Importance of Code Reviews
The code review process is an essential stage in the software development life cycle and a defining principle of agile methodologies. It ensures high-quality code and identifies potential issues or bugs before they are deployed to production.
Another notable benefit of code reviews is that they help maintain a continuous integration and delivery pipeline by ensuring code changes are aligned with project requirements. They also ensure the product meets quality standards, contributing to the overall success of the sprint or iteration.
With a consistent code review process, the development team can limit the risks of unnoticed mistakes and prevent a significant amount of tech debt.
Reviews also verify that the code meets the agreed acceptance criteria and functional specifications and that it follows consistent coding styles across the codebase.
Lastly, they provide an opportunity for developers to learn from each other and improve their coding skills, fostering continuous growth and raising overall code quality.
How do Ineffective Code Reviews Decrease Developer Productivity?
Unclear Standards and Inconsistencies
When code reviews lack clear guidelines or consistent evaluation criteria, developers are left uncertain about what is expected of them. Varied interpretations of code quality and style create ambiguity, and developers spend a lot of time fixing issues based on different reviewers’ subjective opinions. This leads to frustration and decreased morale.
Increase in Bottlenecks and Delays
When developers wait for feedback for an extended period, it prevents them from progressing. This slows down the entire software development lifecycle, resulting in missed deadlines and decreased morale, and negatively affects the deployment timeline, customer satisfaction, and overall business outcomes.
Low Quality and Delayed Feedback
When reviewers communicate vague, unclear, or delayed feedback, critical information is usually missed. Delayed reviews also force context switching: developers lose focus on their current tasks and must refamiliarize themselves with the old code once the review finally comes back, costing further productivity.
Increased Cognitive Load
Frequent switching between writing and reviewing code requires significant mental effort, making it harder for developers to stay focused and productive. Poorly structured, conflicting, or unclear feedback also leaves developers unsure which changes to prioritize and why they were suggested. This slows progress, leads to decision fatigue, and reduces the quality of work.
Knowledge Gaps and Lack of Context
Knowledge gaps arise when reviewers lack the necessary domain knowledge or context about specific parts of the codebase. Without that context, reviewers can misguide developers or overlook important issues, and developers may need extra time to justify their decisions and educate reviewers.
How to Optimize Code Review Process to Improve Developer Productivity?
Set Clear Goals and Standards
Establish clear objectives, coding standards, and expectations for code reviews. Communicate logistics in advance, such as how long reviews should take and who will review the code. This allows both reviewers and developers to focus their efforts on relevant issues and prevents time being wasted on insignificant matters.
Use a Code Review Checklist
Code review checklists include a predetermined set of questions and rules that the team will follow during the code review process. A few of the necessary quality checks include:
Readability and maintainability: This is the first criterion, and its importance cannot be overstated.
Uniform formatting: Is the code easy to understand, with consistent indentation, spacing, and naming conventions?
Testing and quality assurance: Has the code been through meticulous testing and quality assurance?
Boundary testing: Are we exploring extreme scenarios and boundary conditions to identify hidden problems?
Security and performance: Are we ensuring security and performance in our source code?
Architectural integrity: Is the code scalable, sustainable, and built on a solid architectural design?
Prioritize High-Impact Issues
Not every issue in the code review process is equally important, so prioritize issues based on their severity and impact. Address issues that affect system performance, security, or major features first, and review them more thoroughly than smaller, less impactful changes. This helps allocate time and resources effectively.
Encourage Constructive Feedback
Always share specific, honest, and actionable feedback with developers. Feedback should point in the right direction and explain the ‘why’ behind it; this reduces follow-ups and gives developers the necessary context. It also helps the engineering team improve their skills and produce better code, resulting in a higher-quality codebase.
Automate Wherever Possible
Use automation tools such as style checkers, syntax checkers, and static code analysis tools to speed up the review process. These handle routine checks for style, syntax errors, potential bugs, and performance issues, reducing the manual effort needed on such tasks. Automation allows developers to focus on more complex issues and allocate time more effectively.
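For example, a team might gate every pull request on a small script that runs the automated checks before any human review. A minimal Python sketch, assuming ruff and pytest are the chosen linter and test runner:

```python
import subprocess
import sys

# Hypothetical pre-merge check script: run the automated gates before a
# human ever looks at the code, so reviews focus on design and logic.
CHECKS = [
    ["ruff", "check", "."],  # style and lint rules (assumes ruff is installed)
    ["pytest", "--quiet"],   # unit tests (assumes pytest is installed)
]

def main() -> int:
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print("Check failed; fix issues before requesting review.")
            return result.returncode
    print("All automated checks passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Wiring this into CI as a required check keeps low-level issues out of human review entirely.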
Keep Reviews Small and Focused
Break code changes into smaller, manageable chunks. Smaller reviews are less overwhelming and time-consuming: reviewers can concentrate on details, check adherence to the style guide and coding standards, and identify potential bugs, which lets them provide meaningful feedback and build a deeper understanding of the change’s impact on the overall project.
Recognize and Reward Good Work
Acknowledge and celebrate developers who consistently produce high-quality code. This enables developers to feel valued for their contributions, leading to increased engagement, job satisfaction, and a sense of ownership in the project’s success. They are also more likely to continue producing high-quality code and actively participate in the review process.
Encourage Pair Programming or Pre-Review
Encourage pair programming or pre-review sessions to enable real-time feedback, reduce review time, and improve code quality. This fosters collaboration, enhances knowledge sharing, and helps catch issues early, leading to smoother and more effective reviews. It also promotes team bonding, streamlines communication, and cultivates a culture of continuous learning and improvement.
Use a Software Engineering Analytics Platform
Using an engineering analytics platform is a powerful way to optimize the code review process and improve developer productivity. It provides comprehensive insights into code quality, technical debt, and bug frequency, which allows teams to proactively identify bottlenecks and address issues in real time before they escalate. It also allows teams to monitor their practices continuously and make adjustments as needed.
Typo — Automated Code Review Tool
Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.
Key Features
Supports top 8 languages including C++ and C#.
Understands the context of the code and fixes issues accurately.
Optimizes code efficiently.
Provides automated debugging with detailed explanations.
Standardizes code and reduces the risk of a security breach.
If you prioritize the code review process, follow the tips above. They will help maximize code quality, improve developer productivity, and streamline the development process.
Happy reviewing!
Mastering Developer Productivity with the SPACE Framework
In the crazy world of software development, getting developers to be productive is like finding the Holy Grail for tech companies. When developers hit their stride, turning out valuable work at breakneck speed, it’s a win for everyone. But let’s be honest—traditional productivity metrics, like counting lines of code or tracking hours spent fixing bugs, are about as helpful as a screen door on a submarine.
Say hello to the SPACE framework: your new go-to for cracking the code on developer productivity. This approach doesn’t just dip a toe in the water—it dives in headfirst to give you a clear, comprehensive view of how your team is doing. With the SPACE framework, you’ll ensure your developers aren’t just busy—they’re busy being awesome and delivering top-quality work on the dot. So buckle up, because we’re about to take your team’s productivity to the next level!
Introduction to the SPACE Framework
The SPACE framework is a modern approach to measuring developer productivity, introduced in a 2021 paper by experts from GitHub and Microsoft Research. This framework goes beyond traditional metrics to provide a more accurate and holistic view of productivity.
Nicole Forsgren, the lead author, emphasizes that measuring productivity by lines of code or speed can be misleading. The SPACE framework integrates several key metrics to give a complete picture of developer productivity.
Detailed Breakdown of SPACE Metrics
The five SPACE framework dimensions are:
Satisfaction and Well-being
When developers are happy and healthy, they tend to be more productive. If they enjoy their work and maintain a good work-life balance, they're more likely to produce high-quality results. On the other hand, dissatisfaction and burnout can severely hinder productivity. For example, a study by Haystack Analytics found that during the COVID-19 pandemic, 81% of software developers experienced burnout, which significantly impacted their productivity. The SPACE framework encourages regular surveys to gauge developer satisfaction and well-being, helping you address any issues promptly.
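Surveys don’t need heavy tooling to be useful. Here is a minimal Python sketch for scoring a recurring satisfaction question; the 1-5 scale and the responses are assumptions for illustration:

```python
# Hypothetical responses to "How satisfied are you with your work?" on a
# 1-5 scale, gathered from an anonymous quarterly survey.
responses = [5, 4, 4, 3, 5, 2, 4, 4, 3, 5]

avg = sum(responses) / len(responses)
satisfied_share = sum(r >= 4 for r in responses) / len(responses)

print(f"Average satisfaction: {avg:.2f} / 5")
print(f"Share of satisfied developers: {satisfied_share:.0%}")
# Track these per quarter; a downward trend is an early burnout signal.
```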
Performance
Traditional metrics often measure performance by the number of features added or bugs fixed. However, this approach can be problematic. According to the SPACE framework, performance should be evaluated based on outcomes rather than output. This means assessing whether the code reliably meets its intended purpose, the time taken to complete tasks, customer satisfaction, and code reliability.
Activity
Activity metrics are commonly used to gauge developer productivity because they are easy to quantify. However, they only provide a limited view. Developer Activity is the count of actions or outputs completed over time, such as coding new features or conducting code reviews. While useful, activity metrics alone cannot capture the full scope of productivity.
Nicole Forsgren points out that factors like overtime, inconsistent hours, and support systems also affect activity metrics. Therefore, it's essential to consider routine tasks like meetings, issue resolution, and brainstorming sessions when measuring activity.
Collaboration and Communication
Effective communication and collaboration are crucial for any development team's success. Poor communication can lead to project failures: in one study, 86% of employees cited ineffective communication as a major reason for business failures. The SPACE framework suggests measuring collaboration through metrics like the discoverability of documentation, integration speed, quality of work reviews, and network connections within the team.
Efficiency and Flow
Flow is a state of deep focus where developers can achieve high levels of productivity. Interruptions and distractions can break this flow, making it challenging to return to the task at hand. The SPACE framework recommends tracking metrics such as the frequency and timing of interruptions, the time spent in various workflow stages, and the ease with which developers maintain their flow.
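If your calendar or IDE tooling can export focus sessions, you can quantify flow directly. A minimal Python sketch, assuming hypothetical (start, end) session exports for one developer's day:

```python
from datetime import datetime

# Hypothetical exported focus sessions: each tuple is the (start, end) of
# an uninterrupted coding session.
focus_sessions = [
    ("2024-05-06T09:00", "2024-05-06T10:30"),
    ("2024-05-06T11:00", "2024-05-06T11:20"),
    ("2024-05-06T13:00", "2024-05-06T15:45"),
]

def minutes(start: str, end: str) -> float:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60

lengths = [minutes(s, e) for s, e in focus_sessions]
deep_work = sum(m for m in lengths if m >= 60)  # blocks long enough for flow

print(f"Sessions: {len(lengths)}, longest: {max(lengths):.0f} min")
print(f"Deep-work minutes (sessions >= 60 min): {deep_work:.0f}")
```

Watching how much of the day falls into long, uninterrupted blocks makes the effect of meeting-free periods visible.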
Benefits of the SPACE Framework
The SPACE framework offers several advantages over traditional productivity metrics. By considering multiple dimensions, it provides a more nuanced view of developer productivity. This comprehensive approach helps avoid the pitfalls of single metrics, such as focusing solely on lines of code or closed tickets, which can lead to gaming the system.
Moreover, the SPACE framework allows you to measure both the quantity and quality of work, ensuring that developers deliver high-quality software efficiently. This integrated view helps organizations make informed decisions about team productivity and optimize their workflows for better outcomes.
Implementing the SPACE Framework in Your Organization
Implementing the SPACE productivity framework effectively requires careful planning and execution. Below is a comprehensive plan and roadmap to guide you through the process. This detailed guide will help you tailor the SPACE framework to your organization's unique needs and ensure a smooth transition to this advanced productivity measurement approach.
Step 1: Understanding Your Current State
Objective: Establish a baseline by understanding your current productivity measurement practices and developer workflow.
Conduct a Productivity Audit
Review the existing metrics and tools (like Typo) used for tracking productivity.
Identify gaps and limitations in current measurement methods.
Gather feedback from developers and managers on existing practices.
Analyze Team Dynamics and Workflow
Map out your development process, identifying key stages and tasks.
Observe how teams collaborate, communicate, and handle interruptions.
Assess the overall satisfaction and well-being of your developers.
Outcome: A comprehensive report detailing your current productivity measurement practices, team dynamics, and workflow processes.
Step 2: Setting Goals and Objectives
Objective: Define clear goals and objectives for implementing the SPACE framework.
Identify Key Business Objectives
Align the goals of the SPACE framework with your company's strategic objectives.
Focus on improving areas such as time-to-market, code quality, customer satisfaction, and developer well-being.
Set Specific, Measurable, Achievable, Relevant, and Time-bound (SMART) Goals
Example Goals
Increase developer satisfaction by 20% within six months.
Reduce average bug resolution time by 30% over the next quarter.
Improve code review quality scores by 15% within the next year.
Outcome: A set of SMART goals that will guide the implementation of the SPACE framework.
Step 3: Selecting and Customizing SPACE Metrics
Objective: Choose the most relevant SPACE metrics and customize them to fit your organization's needs.
Review SPACE Metrics
Satisfaction and Well-being
Performance
Activity
Collaboration and Communication
Efficiency and Flow
Customize Metrics
Tailor each metric to align with your organization's specific context and objectives.
Example Customizations
Satisfaction and Well-being: Conduct quarterly surveys to measure job satisfaction and work-life balance.
Performance: Track the reliability of code and customer feedback on delivered features.
Activity: Measure the number of completed tasks, code commits, and other relevant activities.
Collaboration and Communication: Monitor the quality of code reviews and the speed of integrating work.
Efficiency and Flow: Track the frequency and duration of interruptions and the time spent in flow states.
Outcome: A customized set of SPACE metrics tailored to your organization's needs.
Step 4: Implementing Measurement Tools and Processes
Objective: Implement tools and processes to measure and track the selected SPACE metrics.
Choose Appropriate Tools
Use project management tools like Jira or Trello to track activity and performance metrics.
Implement collaboration tools such as Slack, Microsoft Teams, or Confluence to facilitate communication and knowledge sharing.
Utilize code review tools like CodeIQ by Typo to monitor the quality of code and collaboration.
Set Up Data Collection Processes
Establish processes for collecting and analyzing data for each metric.
Ensure that data collection is automated wherever possible to reduce manual effort and improve accuracy.
Train Your Team
Provide training sessions for developers and managers on using the new tools and understanding the SPACE metrics.
Encourage open communication and address any concerns or questions from the team.
Outcome: A fully implemented set of tools and processes for measuring and tracking SPACE metrics.
Step 5: Regular Monitoring and Review
Objective: Continuously monitor and review the metrics to ensure ongoing improvement.
Establish Regular Review Cycles
Conduct monthly or quarterly reviews of the SPACE metrics to track progress towards goals.
Hold team meetings to discuss the results, identify areas for improvement, and celebrate successes.
Analyze Trends and Patterns
Look for trends and patterns in the data to gain insights into team performance and productivity.
Use these insights to make informed decisions and adjustments to workflows and processes.
Solicit Feedback
Regularly gather feedback from developers and managers on the effectiveness of the SPACE framework.
Use this feedback to make continuous improvements to the framework and its implementation.
Outcome: A robust monitoring and review process that ensures the ongoing effectiveness of the SPACE framework.
Step 6: Continuous Improvement and Adaptation
Objective: Adapt and improve the SPACE framework based on feedback and evolving needs.
Iterate and Improve
Continuously refine and improve the SPACE metrics based on feedback and observed results.
Adapt the framework to address new challenges and opportunities as they arise.
Foster a Culture of Continuous Improvement
Encourage a culture of continuous improvement within your development teams.
Promote openness to change and a willingness to experiment with new ideas and approaches.
Share Success Stories
Share success stories and best practices with the broader organization to demonstrate the value of the SPACE framework.
Use these stories to inspire other teams and encourage the adoption of the framework across the organization.
Outcome: A dynamic and adaptable SPACE framework that evolves with your organization's needs.
Conclusion
Implementing the SPACE framework is a strategic investment in your organization's productivity and success. By following this comprehensive plan and roadmap, you can effectively integrate the SPACE metrics into your development process, leading to improved performance, satisfaction, and overall productivity. Embrace the journey of continuous improvement and leverage the insights gained from the SPACE framework to unlock the full potential of your development teams.
SPACE Framework: How to Measure Developer Productivity
In today’s fast-paced software development world, understanding and improving developer productivity is more crucial than ever. One framework that has gained prominence for its comprehensive approach to measuring and enhancing productivity is the SPACE Framework. This framework, developed by industry experts and backed by extensive research, offers a multi-dimensional perspective on productivity that transcends traditional metrics.
This blog delves deep into the genesis of the SPACE Framework, its components, and how it can be effectively implemented to boost developer productivity. We’ll also explore real-world success stories of companies that have benefited from adopting this framework.
The genesis of the SPACE Framework
The SPACE Framework was introduced by researchers Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck, and Jenna Butler. Their work, published in a paper titled “The SPACE of Developer Productivity: There’s More to it than You Think!”, emphasizes that a single metric cannot measure developer productivity. Instead, productivity should be viewed through multiple lenses to capture a holistic picture.
Components of the SPACE Framework
The SPACE Framework is an acronym that stands for:
Satisfaction and Well-being
Performance
Activity
Communication and Collaboration
Efficiency and Flow
Each component represents a critical aspect of developer productivity, ensuring a balanced approach to measurement and improvement.
Detailed breakdown of the SPACE Framework
1. Satisfaction and Well-being
Definition: This dimension focuses on how satisfied and happy developers are with their work and environment. It also considers their overall well-being, which includes factors like work-life balance, stress levels, and job fulfillment.
Why It Matters: Happy developers are more engaged, creative, and productive. Ensuring high satisfaction and well-being can reduce burnout and turnover, leading to a more stable and effective team.
Metrics to Consider:
Employee satisfaction surveys
Work-life balance scores
Burnout indices
Turnover rates
2. Performance
Definition: Performance measures the outcomes of developers’ work, including the quality and impact of the software they produce. This includes assessing code quality, deployment frequency, and the ability to meet user needs.
Why It Matters: High performance indicates that the team is delivering valuable software efficiently. It helps in maintaining a competitive edge and ensuring customer satisfaction.
Metrics to Consider:
Code quality metrics (e.g., number of bugs, code review scores)
Deployment frequency
Customer satisfaction ratings
Feature adoption rates
3. Activity
Definition: Activity tracks the actions developers take, such as the number of commits, code reviews, and feature development. This component focuses on the volume and types of activities rather than their outcomes.
Why It Matters: Monitoring activity helps understand workload distribution and identify potential bottlenecks or inefficiencies in the development process.
Metrics to Consider:
Number of commits per developer
Code review participation
Task completion rates
Meeting attendance
4. Communication and Collaboration
Definition: This dimension assesses how effectively developers interact with each other and with other stakeholders. It includes evaluating the quality of communication channels and collaboration tools used.
Why It Matters: Effective communication and collaboration are crucial for resolving issues quickly, sharing knowledge, and fostering a cohesive team environment. Poor communication can lead to misunderstandings and project delays.
Metrics to Consider:
Frequency and quality of team meetings
Use of collaboration tools (e.g., Slack, Jira)
Cross-functional team interactions
Feedback loops
5. Efficiency and Flow
Definition: Efficiency and flow measure how smoothly the development process operates, including how well developers can focus on their tasks without interruptions. It also looks at the efficiency of the processes and tools in place.
Why It Matters: High efficiency and flow indicate that developers can work without unnecessary disruptions, leading to higher productivity and job satisfaction. It also helps in identifying and eliminating waste in the process.
Metrics to Consider:
Cycle time (time from task start to completion; see the sketch after this list)
Time spent in meetings vs. coding
Context switching frequency
Tool and process efficiency
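Of the metrics above, cycle time is the most mechanical to compute: it is simply the elapsed time between task start and completion. A minimal sketch, assuming a hypothetical issue-tracker export with started and completed timestamps:

```python
# A minimal cycle-time sketch; record structure and field names are assumptions.
from datetime import datetime, timedelta

tasks = [  # hypothetical issue-tracker export rows
    {"id": "T-1", "started": "2024-05-01T09:00", "completed": "2024-05-03T17:00"},
    {"id": "T-2", "started": "2024-05-02T10:00", "completed": "2024-05-02T16:00"},
]

def cycle_times(rows):
    """Elapsed time from task start to completion, per task."""
    fmt = "%Y-%m-%dT%H:%M"
    return [
        datetime.strptime(r["completed"], fmt) - datetime.strptime(r["started"], fmt)
        for r in rows
    ]

durations = cycle_times(tasks)
avg = sum(durations, timedelta()) / len(durations)
print(f"average cycle time: {avg}")
```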
Implementing the SPACE Framework in real life
Implementing the SPACE Framework requires a strategic approach, involving the following steps:
Establish baseline metrics
Before making any changes, establish baseline metrics for each SPACE component. Use existing tools and methods to gather initial data.
Actionable Steps:
Conduct surveys to measure satisfaction and well-being.
Use code quality tools to assess performance.
Track activity through version control systems (see the Git sketch after this list).
Analyze communication patterns via collaboration tools.
Measure efficiency and flow using project management software.
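For the version-control step, a commit-count baseline can be pulled straight from Git history. A minimal sketch, run inside a repository; the 90-day window is an arbitrary assumption:

```python
# A minimal baseline sketch: commits per author over a recent window,
# read from `git log` (run inside a Git repository).
import subprocess
from collections import Counter

def commits_per_author(since="90 days ago"):
    """Count commits per author email since the given date."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

baseline = commits_per_author()
for author, count in baseline.most_common():
    print(f"{author}: {count} commits")
```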
Set clear goals
Define what success looks like for each component of the SPACE Framework and set achievable, measurable goals; a progress-check sketch follows the examples below.
Actionable Steps:
Increase employee satisfaction scores by 10% within six months.
Reduce bug rates by 20% over the next quarter.
Improve code review participation by 15%.
Enhance cross-team communication frequency.
Shorten cycle time by 25%.
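Each of these goals can be checked mechanically against its baseline. A minimal progress-check sketch, with hypothetical baseline and current values:

```python
# A minimal goal-tracking sketch; all numbers are hypothetical.
goals = {
    # metric: (baseline, current, target_change); negative targets are reductions
    "satisfaction_score": (7.0, 7.4, +0.10),
    "bug_rate_per_release": (25.0, 21.0, -0.20),
    "review_participation": (0.60, 0.68, +0.15),
}

for metric, (baseline, current, target) in goals.items():
    change = (current - baseline) / baseline
    met = change >= target if target >= 0 else change <= target
    status = "met" if met else "in progress"
    print(f"{metric}: {change:+.1%} (target {target:+.0%}) -> {status}")
```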
Implement changes
Based on the goals set, implement changes to processes, tools, and practices. This may involve adopting new tools, changing workflows, or providing additional training.
Actionable Steps:
Introduce well-being programs to improve satisfaction.
Adopt automated testing tools to enhance performance.
Encourage regular code reviews to boost activity.
Use collaboration tools like Slack or Microsoft Teams to improve communication.
Streamline processes to reduce context switching and improve flow.
Monitor and adjust
Regularly monitor the metrics to evaluate the impact of the changes. Be prepared to make adjustments as necessary to stay on track with your goals.
Actionable Steps:
Use dashboards to track key metrics in real time.
Hold regular review meetings to discuss progress.
Gather feedback from developers to identify areas for improvement.
Make iterative changes based on data and feedback.
Integrating the SPACE Framework with DORA Metrics
| SPACE Dimension | Definition | DORA Metric Integration | Actionable Steps |
| --- | --- | --- | --- |
| Satisfaction and Well-being | Measures happiness, job fulfillment, and work-life balance | High deployment frequency and low lead time improve satisfaction; high failure rates increase stress | Conduct satisfaction surveys; correlate with DORA metrics (see the sketch below); implement well-being programs |
| Performance | Assesses the outcomes of developers’ work | Direct overlap with DORA metrics like deployment frequency and lead time | Use DORA metrics for benchmarking; track and improve key metrics; address failure causes |
| Activity | Tracks volume and types of work (e.g., commits, reviews) | Frequent, high-quality activities improve deployment frequency and lead time | Track activities alongside DORA metrics; promote high-quality work practices; balance workloads |
| Communication and Collaboration | Evaluates effectiveness of interactions and tools | Effective communication and collaboration reduce failure rates and restoration times | Use communication tools (e.g., Slack); conduct retrospectives; encourage cross-functional teams |
| Efficiency and Flow | Measures smoothness and efficiency of processes | Efficient workflows lead to higher deployment frequencies and shorter lead times |  |
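Picking up the “correlate with DORA metrics” step from the Satisfaction row, here is a minimal sketch of correlating monthly satisfaction scores with deployment frequency; the paired values are hypothetical, and `statistics.correlation` requires Python 3.10 or newer:

```python
# A minimal sketch: Pearson correlation between monthly satisfaction scores
# and deployment frequency. Both series are hypothetical.
from statistics import correlation  # available in Python 3.10+

satisfaction = [6.9, 7.1, 7.0, 7.4, 7.6, 7.8]   # avg survey score per month
deploys_per_week = [3, 4, 4, 6, 7, 8]           # deployment frequency per month

r = correlation(satisfaction, deploys_per_week)
print(f"Pearson r = {r:.2f}")  # values near +1 suggest the two move together
```

Correlation alone does not establish causation, but a strong positive r is a useful prompt for deeper investigation.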
Real-world success stories
GitHub
GitHub implemented the SPACE Framework to enhance its developer productivity. By focusing on communication and collaboration, they improved their internal processes and tools, leading to a more cohesive and efficient development team. They introduced regular team-building activities and enhanced their internal communication tools, resulting in a 15% increase in developer satisfaction and a 20% reduction in project completion time.
Microsoft
Microsoft adopted the SPACE Framework across several development teams. They focused on improving efficiency and flow by reducing context switching and streamlining their development processes. This involved adopting continuous integration and continuous deployment (CI/CD) practices, which reduced cycle time by 30% and increased deployment frequency by 25%.
Key software engineering metrics mapped to the SPACE Framework
This table outlines key software engineering metrics mapped to the SPACE Framework, along with how they can be measured and implemented to improve developer productivity and overall team effectiveness.
Communication and Collaboration

| Key Metric | Measurement | Tools/Methods | Implementation Steps |
| --- | --- | --- | --- |
|  | Activity in tools (e.g., Slack messages, Jira comments) | Collaboration tools (e.g., Slack, Jira) | Promote use of collaboration tools; provide training on tool usage |
| Cross-functional Interactions | Number of interactions with other teams | Project management tools, communication tools | Encourage cross-functional projects; facilitate regular cross-team meetings |
| Feedback Loops | Number and quality of feedback instances | Feedback tools, retrospectives | Implement regular feedback sessions; act on feedback to improve processes |

Efficiency and Flow

| Key Metric | Measurement | Tools/Methods | Implementation Steps |
| --- | --- | --- | --- |
| Cycle Time | Time from task start to completion | Project management tools (e.g., Jira) | Monitor cycle times; identify and remove bottlenecks |
| Time Spent in Meetings vs. Coding | Hours logged in meetings vs. coding | Time tracking tools, calendar tools | Optimize meeting schedules; minimize unnecessary meetings |
| Context Switching Frequency | Number of task switches per day | Time tracking tools, self-reporting | Reduce unnecessary interruptions; promote focused work periods |
| Tool and Process Efficiency | Time saved using tools/processes | Productivity tools, surveys | Regularly review tool/process efficiency; implement improvements based on feedback |
What engineering leaders can do
Engineering leaders play a crucial role in the successful implementation of the SPACE Framework. Here are some actionable steps they can take:
Promote a culture of continuous improvement
Encourage a mindset of continuous improvement among the team. This involves being open to feedback and constantly seeking ways to enhance productivity and well-being.
Actionable Steps:
Regularly solicit feedback from team members.
Celebrate small wins and improvements.
Provide opportunities for professional development and growth.
Invest in the right tools and processes
Ensure that developers have access to the tools and processes that enable them to work efficiently and effectively.
Actionable Steps:
Conduct regular tool audits to ensure they meet current needs.
Invest in training programs for new tools and technologies.
Streamline processes to eliminate unnecessary steps and reduce bottlenecks.
Foster collaboration and communication
Create an environment where communication and collaboration are prioritized. This can lead to better problem-solving and more innovative solutions.
Actionable Steps:
Organize regular team-building activities.
Use collaboration tools to facilitate better communication.
Encourage cross-functional projects to enhance team interaction.
Prioritize well-being and satisfaction
Recognize the importance of developer well-being and satisfaction. Implement programs and policies that support a healthy work-life balance.
Actionable Steps:
Offer flexible working hours and remote work options.
Provide access to mental health resources and support.
Recognize and reward achievements and contributions.
Conclusion
The SPACE Framework offers a holistic and actionable approach to understanding and improving developer productivity. By focusing on satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow, organizations can create a more productive and fulfilling work environment for their developers.
Implementing this framework requires a strategic approach, clear goal setting, and ongoing monitoring and adjustment. Real-world success stories from companies like GitHub and Microsoft demonstrate the potential benefits of adopting the SPACE Framework.
Engineering leaders have a pivotal role in driving this change. By promoting a culture of continuous improvement, investing in the right tools and processes, fostering collaboration and communication, and prioritizing well-being and satisfaction, they can significantly enhance developer productivity and overall team success.
In the software development industry, while user experience is an important aspect of the product life cycle, organizations are also considering Developer Experience.
A positive Developer Experience helps in delivering quality products and allows developers to be happy and healthy in the long run.
However, it is difficult for organizations to measure and improve developer experience without the right tools and platforms.
What is Developer Experience?
Developer Experience is about the experience software developers have while working in the organization. It is the developers’ journey while working with specific frameworks, programming languages, platforms, documentation, general tools, and open-source solutions.
Positive Developer Experience = Happier teams
Developer Experience has a direct relationship with developer productivity. A positive experience results in high dev productivity, leading to high job satisfaction, performance, and morale. Hence, happier developer teams.
This starts with understanding the unique needs of developers and fostering a positive work culture for them.
Why is Developer Experience important?
Smooth onboarding process
Good DX ensures the onboarding process is as simple and smooth as possible. It includes making new developers familiar with the tools and culture and giving them the support they need to proceed further in their careers. It also allows them to get to know other developers, which helps with collaboration, open communication, and seeking help whenever required.
Improves product quality
A positive Developer Experience leads to three effective C’s: collaboration, communication, and coordination. Besides this, adhering to coding standards, best practices, and automated testing helps promote code quality and consistency and fix issues early. As a result, development teams can more easily create products that meet customer needs and are free from errors and glitches.
Increases development speed
When Developer Experience is handled with care, software developers can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documents, streamlined workflows, and a well-configured development environment are a few ways to boost development speed. It also lets developers minimize the need to switch between different tools and platforms, which increases focus and team productivity.
Attracts and retains top talents
Developers usually look for a strong tech culture where they can focus on their core skills and get acknowledged for their contributions. Great DX increases job satisfaction and aligns developers’ values and goals with the organization’s. In return, developers bring their best to the table and want to stay with the organization for the long run.
Enhances collaboration
The right kind of Developer Experience encourages collaboration and provides effective communication tools. This fosters teamwork and reduces misunderstandings. Developers can easily discuss issues, share feedback, and work together on tasks, which helps streamline the development process and results in high-quality work.
A powerful time management tool that streamlines and automates the calendar and protects developers’ flow time. It helps to strike a balance between meetings and coding time with a focus time feature.
Key features
Seamlessly integrates with third-party applications such as Slack, Google Calendar, and Asana.
Determines the most suitable meeting times for both developers and engineering leaders.
Creates custom smart holds, i.e., time that stays protected throughout the hold.
Reschedules the meetings that are marked as ‘Flexible’.
Provides a quick summary of how much meeting and focus time was spent last week.
A straightforward time-tracking, reporting, and billing tool for software developers. It lets development teams view tracked team entries in a grid or calendar format.
Key features
‘Dashboard and Reporting’ feature offers in-depth analysis and lets engineering leaders create customized dashboards.
Simple and easy-to-use interface.
Preferable for those who would rather log their time manually than track it in real time.
Offers a PDF invoice template that can be downloaded easily.
Includes optional Pomodoro setting that allows developers to take regular quick breaks.
Typo is an intelligent engineering management platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. It gives a comparative view of each team’s performance across velocity, quality, and throughput. This tool can be integrated with the tech stack (Git, Slack, calendars, and CI/CD tools, to name a few) to deliver real-time insights.
Key features
Seamlessly integrates with third-party applications such as Git, Slack, calendars, and CI/CD tools.
‘Sprint analysis’ feature allows for tracking and analyzing the team’s progress throughout a sprint.
Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
Offers engineering benchmark to compare the team’s results across industries.
An AI code assistant tool that provides code-specific information and helps locate precise code based on natural language descriptions, file names, or function names.
Key features
Explains complex lines of code in simple language.
Identifies bugs and errors in a codebase and provides suggestions.
Offers documentation generation.
Answers questions about existing code.
Generates code snippets, fixes, and improves code.
GitHub Copilot
Developed by GitHub in collaboration with OpenAI, GitHub Copilot uses the OpenAI Codex to help write code quickly. It draws context from the code and suggests whole lines or complete functions that developers can accept, modify, or reject.
Key features
Creates predictive lines of code from comments and existing patterns in the code.
Generates code in multiple languages including TypeScript, JavaScript, Ruby, C++, and Python.
Seamlessly integrates with popular editors such as Neovim, JetBrains IDEs, and Visual Studio.
Slack
A widely used communication platform that enables real-time communication among developers. It also allows team members to share and download files and create external links for people outside of the team.
Key features
Seamlessly integrates with third-party applications such as Google Calendar, Hubspot, Clickup, and Salesforce.
‘Huddle’ feature that includes phone and video conferencing options.
Accessible on both mobile and desktop (Application and browser).
Offers a ‘Channels’ feature, similar to groups, in which team members can organize work around projects, teams, and topics.
Perfect for asynchronous communication and collaboration.
A part of the Atlassian group, JIRA is an umbrella platform that includes JIRA Software, JIRA Core, and JIRA Work Management. It relies on the agile way of working and is purpose-built for developers and engineers.
Key features
Built for agile and scrum workflows.
Offers Kanban view.
JIRA dashboard helps users to plan projects, measure progress, and track due dates.
Offers integrations with other Atlassian products and third-party apps like GitHub, GitLab, and Jenkins.
Offers customizable workflow states and transitions for every issue type.
A project management and issue-tracking tool that is tailored for software development teams. It helps the team plan their projects and auto-close and auto-archive issues.
Key features
Simple and straightforward UI.
Easy to set up.
Breaks larger tasks into smaller issues.
Switches between list and board layout to view work from any angle.
Quickly apply filters and operators to refine issue lists and create custom views.
A cloud-based cross-browser testing platform that provides real-time testing on multiple devices and simulators. It is used to create and run both manual and automatic tests and functions via the Selenium Automation Grid.
Key features
Seamlessly integrates with other testing frameworks and CI/CD tools.
Offers detailed automated logs such as exception logs, command logs, and metadata.
Runs parallel tests in multiple browsers and environments.
Offers command screenshots and video recordings of the script execution.
Facilitates responsive testing to ensure the application works well on various devices and screen sizes.
Postman
A widely used automation testing tool for APIs. It provides a streamlined process for standardizing API testing and monitoring APIs for usage and trend insights.
Key features
Seamlessly integrates with CI/CD pipelines.
Enables users to mimic real-world scenarios and assess API behavior under various conditions.
Creates mock servers and facilitates realistic simulations and comprehensive testing.
Provides monitoring features to gain insights into API performance and usage trends.
Friendly and easy-to-use interface equipped with code snippets.
CircleCI
FedRAMP certified and SOC 2 Type II compliant, CircleCI helps achieve CI/CD in open-source and large-scale projects. It streamlines the DevOps process and automates builds across multiple environments.
Key features
Seamlessly integrates with third-party applications such as Bitbucket, GitHub, and GitHub Enterprise.
Tracks the status of projects and keeps tabs on build processes.
‘Parallel testing’ feature helps in running tests in parallel across different executors.
Allows a single process per project.
Provides ways to troubleshoot problems and inspect things such as directory paths, log files, and running processes.
Designed specifically for software development teams, Swimm is an innovative cloud-based documentation tool that integrates continuous documentation into the development workflow.
Key features
Seamlessly integrates with development tools such as GitHub, VSC, and JetBrains IDEs.
‘Auto-sync’ feature ensures the document stays up to date with changes in the codebase.
Creates new documents, rewrites existing ones, or summarizes information.
Creates tutorials and visualizations within the codebase for better understanding and onboarding new members.
Analyzes the entire codebase, documentation sources, and data from enterprise tools.
A valuable tool for development teams that captures a 360-degree view of the developer experience. Through signals from work patterns and continuous AI-driven pulse check-ins, it provides early indicators of developer well-being and actionable insights on the areas that need attention.
Key features
Research-backed framework that captures parameters and uncovers real issues.
In-depth insights are published on the dashboard.
Combines data-driven insights with proactive monitoring and strategic intervention.
Identifies the key priority areas affecting developer productivity and well-being.
Sends automated alerts to identify burnout signs in developers at an early stage.
A comprehensive insights platform founded by the researchers behind the DORA metrics and the SPACE Framework. It offers both qualitative and quantitative measures to give a holistic view of the organization.
Key features
Provides a suite of tools that capture data from surveys and systems in real-time.
Breaks down results based on personas.
Streamlines developer onboarding with real-time insights.
Contextualizes performance with 180,000+ industry benchmark samples.
Uses advanced statistical analysis to identify the top opportunities.
Conclusion
Overall, Developer Experience is crucial today. It facilitates effective collaboration within engineering teams, offers real-time feedback on workflow efficiency and early signs of burnout, and enables informed decision-making. By pinpointing areas for improvement, it cultivates a more productive and enjoyable work environment for developers.
Various tools are available in the market, and we’ve curated the best Developer Experience tools for you. You can check out other tools as well; do your own research and see what fits you best.
All the best!
Measuring Developer Productivity: A Comprehensive Guide
The software development industry constantly evolves, and measuring developer productivity has become crucial to success. It is the key to achieving efficiency, quality, and innovation. However, measuring productivity is not a one-size-fits-all process. It requires a deep understanding of productivity in a development context and selecting the right metrics to reflect it accurately.
This guide will help you and your teams navigate the complexities of measuring dev productivity. It offers insights into the process’s nuances and equips teams with the knowledge and tools to optimize performance. By following the tips and best practices outlined in this guide, teams can improve their productivity and deliver better software.
What is Developer Productivity?
Development productivity extends far beyond the mere output of code. It encompasses a multifaceted spectrum of skills, behaviors, and conditions that contribute to the successful creation of software solutions. Technical proficiency, effective collaboration, clear communication, suitable tools, and a conducive work environment are all integral components of developer productivity. Recognizing and understanding these factors is fundamental to devising meaningful metrics and fostering a culture of continuous improvement.
Benefits of developer productivity
Increased productivity allows developers to complete tasks more efficiently. It leads to shorter development cycles and quicker delivery of products or features to the market.
Productive developers can focus more on code quality, testing, and optimization, resulting in higher-quality software with fewer bugs and issues.
Developers can accomplish more in less time, reducing development costs and improving the organization’s overall return on investment.
Productive developers often experience less stress and frustration due to reduced workloads and smoother development processes that lead to higher job satisfaction and retention rates.
With more time and energy available, developers can dedicate resources to innovation, continuous learning, experimenting with new technologies, and implementing creative solutions to complex problems.
Metrics for Measuring Developer Productivity
Measuring software developers’ productivity cannot rely on arbitrary criteria. This is why there are several metrics that can be considered while measuring it. They can be divided into quantitative and qualitative metrics. Here is what they mean:
Quantitative Metrics
Lines of Code (LOC) Written
While counting lines of code isn’t a perfect measure of productivity, it can provide valuable insights into coding activity. A higher number of lines might suggest more work done, but it doesn’t necessarily equate to higher quality or efficiency. However, tracking LOC changes over time can help identify trends and patterns in development velocity. For instance, a sudden spike in LOC might indicate a burst of productivity or potentially code bloat, while a decline could signal optimization efforts or refactoring.
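A minimal sketch of tracking those LOC changes over time, summing weekly lines added and deleted from `git log --numstat`; run it inside a repository, and treat the weekly bucketing as one reasonable choice rather than a standard:

```python
# A minimal sketch: weekly lines added/deleted from `git log --numstat`.
import subprocess
from collections import defaultdict

def weekly_loc():
    """Sum lines added and deleted per ISO-ish week from git history."""
    out = subprocess.run(
        ["git", "log", "--numstat", "--pretty=WEEK:%cd", "--date=format:%Y-%W"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = defaultdict(lambda: [0, 0])
    week = None
    for line in out.splitlines():
        if line.startswith("WEEK:"):
            week = line[5:]
        elif line.strip():
            added, deleted, _path = line.split("\t", 2)
            if added.isdigit() and deleted.isdigit():  # '-' marks binary files
                totals[week][0] += int(added)
                totals[week][1] += int(deleted)
    return totals

for week, (added, deleted) in sorted(weekly_loc().items()):
    print(f"{week}: +{added} / -{deleted}")
```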
Time to Resolve Issues/Bugs
The swift resolution of issues and bugs is indicative of a team’s efficiency in problem-solving and code maintenance. Monitoring the time it takes to identify, address, and resolve issues provides valuable feedback on the team’s responsiveness and effectiveness. A shorter time to resolution suggests agility and proactive debugging practices, while prolonged resolution times may highlight bottlenecks in the development process or technical debt that needs addressing.
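Time to resolution reduces to a date difference once issues carry open and close timestamps. A minimal sketch over a hypothetical issue-tracker export:

```python
# A minimal time-to-resolution sketch; the records are hypothetical.
from datetime import datetime

issues = [
    {"id": 101, "opened": "2024-05-01", "closed": "2024-05-02"},
    {"id": 102, "opened": "2024-05-01", "closed": "2024-05-08"},
]

def mean_days_to_resolve(rows):
    """Average number of days between opening and closing an issue."""
    days = [
        (datetime.fromisoformat(r["closed"]) - datetime.fromisoformat(r["opened"])).days
        for r in rows
    ]
    return sum(days) / len(days)

print(f"mean time to resolution: {mean_days_to_resolve(issues):.1f} days")
```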
Number of Commits or Pull Requests
Active participation in version control systems, as evidenced by the number of commits or pull requests, reflects the level of engagement and contribution to the codebase. A higher number of commits or pull requests may signify active development and collaboration within the team. However, it’s essential to consider the quality, not just quantity, of commits and pull requests. A high volume of low-quality changes may indicate inefficiency or a lack of focus.
Code Churn
Code churn refers to the rate of change in a codebase over time. Monitoring code churn helps identify areas of instability or frequent modification, which may require closer attention or refactoring. High code churn could indicate areas of the code that are particularly complex or prone to bugs, while low churn might suggest stability but could also indicate stagnation if accompanied by a lack of feature development or innovation. Furthermore, focusing on code changes allows teams to track progress and ensure that updates align with project goals, while an emphasis on quality code ensures that those changes maintain or improve overall codebase integrity and performance.
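A minimal sketch for surfacing churn hotspots: the files with the most total line changes over a recent window, again read from `git log --numstat` inside a repository; the 30-day window is an arbitrary assumption:

```python
# A minimal churn-hotspot sketch using `git log --numstat`.
import subprocess
from collections import Counter

def churn_by_file(since="30 days ago"):
    """Total lines added + deleted per file over the given window."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--pretty="],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = Counter()
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            churn[parts[2]] += int(parts[0]) + int(parts[1])
    return churn

for path, changes in churn_by_file().most_common(10):
    print(f"{changes:6d}  {path}")
```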
Qualitative Metrics
Code Review Feedback
Effective code reviews are crucial for maintaining code quality and fostering a collaborative development environment in an engineering org. Monitoring code review feedback, such as the frequency of comments, the depth of review, and the incorporation of feedback into subsequent iterations, provides insight into the team’s commitment to quality and continuous improvement. A culture of constructive feedback and iteration during code reviews indicates a quality-driven approach to development.
Team Satisfaction and Morale
High morale and job satisfaction among engineering teams are key indicators of a healthy and productive work environment. Happy and engaged teams tend to be more motivated, creative, and productive. Regularly measuring team satisfaction through surveys, feedback sessions, or one-on-one discussions helps identify areas for improvement and reinforces a positive culture that fosters teamwork, productivity, and collaboration.
Rate of Feature Delivery
Timely delivery of features is essential for meeting project deadlines and delivering value to stakeholders. Monitoring the rate of feature delivery, including the speed and predictability of feature releases, provides insights into the team’s ability to execute and deliver results efficiently. Consistently meeting or exceeding feature delivery targets indicates a well-functioning development process and effective project management practices.
Customer Satisfaction and Feedback
Ultimately, the success of development efforts is measured by the satisfaction of end-users. Monitoring customer satisfaction through feedback channels, such as surveys, reviews, and support tickets, provides valuable insights into the effectiveness of the software in delivering meaningful solutions. Positive feedback and high satisfaction scores indicate that the development team has successfully met user needs and delivered a product that adds value. Conversely, negative feedback or low satisfaction scores highlight areas for improvement and inform future development priorities.
Best Practices for Measuring Developer Productivity
While analyzing the metrics and measuring software developer productivity, here are some things you need to remember:
Balance Quantitative and Qualitative Metrics: Combining both types of metrics provides a holistic view of productivity.
Customize Metrics to Fit Team Dynamics: Tailor metrics to align with the development team’s unique objectives and working styles.
Ensure Transparency and Clarity: Communicate clearly about the purpose and interpretation of metrics to foster trust and accountability.
Iterate and Adapt Measurement Strategies: Continuously evaluate and refine measurement approaches based on feedback and evolving project requirements.
How does Generative AI Improve Developer Productivity?
Below are a few ways in which Generative AI can have a positive impact on developer productivity:
Focus on meaningful tasks: Generative AI tools take on tedious and repetitive tasks, allowing developers to give their time and energy to meaningful activities, resulting in productivity gains within the team’s workflow.
Assist in their learning curve: Generative AI lets software engineers gain practical insights and examples, enhancing team performance.
Assist in pair programming: Through Generative AI, developers can collaborate with other developers easily.
Increase the pace of software development: Generative AI helps in the continuous delivery of products and services and drives business strategy.
How does Typo Measure Developer Productivity?
There are many developer productivity tools available in the market for tech companies. One of the tools is Typo – the most comprehensive solution on the market.
Typo provides early indicators of developers’ well-being and actionable insights on the areas that need attention, through signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams, measuring the team’s productivity while keeping individuals’ strengths and weaknesses in mind.
Here are three ways in which Typo measures team productivity:
Software Development Visibility
Typo provides complete visibility into software delivery. It helps development teams and engineering leaders identify blockers in real time, predict delays, and maximize business impact. Moreover, it lets teams dive deep into key DORA metrics and understand how well they are performing against industry-wide benchmarks. Typo also provides real-time predictive analysis of how the team is performing, helps identify the best dev practices, and offers a comprehensive view across velocity, quality, and throughput.
This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks, ensuring that resources are utilized efficiently for enhanced productivity and better business outcomes.
Code Quality Automation
Typo helps developers streamline the development process and enhance their productivity by identifying issues in the code and auto-fixing them before merging to master. This means less time reviewing and more time for important tasks, keeping code error-free and making the whole process faster and smoother. The platform also uses optimized practices and built-in methods spanning multiple languages. Besides this, it standardizes the code and enforces coding standards, which reduces the risk of a security breach and boosts maintainability.
Since the platform automates repetitive tasks, it allows development teams to focus on high-quality work. Moreover, it accelerates the review process and facilitates faster iterations by providing timely feedback. This offers insights into code quality trends and areas for improvement, fostering an engineering culture that supports learning and development.
Developer Experience
Typo helps with early indicators of developers’ well-being and actionable insights on the areas that need attention through signals from work patterns and continuous AI-driven pulse check-ins. These check-ins are built on a developer experience framework that triggers AI-driven pulse surveys.
Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.
Hence, by addressing these aspects, Typo’s holistic approach combines data-driven insights with proactive monitoring and strategic intervention to create a supportive and high-performing work environment. This leads to increased developer productivity and satisfaction.
Track Developer Productivity Effectively
Measuring developers’ productivity is not straightforward, as it varies from person to person. It is a dynamic process that requires careful consideration and adaptability.
To achieve greater success in software development, the development teams must embrace the complexity of productivity, select appropriate metrics, use relevant tools, and develop a supportive work culture.
There are many developer productivity tools available in the market, and Typo stands out as a prevalent one. It’s important to remember that the journey toward productivity is an ongoing process, and each iteration presents new opportunities for growth and innovation.
As technology rapidly advances, software engineering is becoming an increasingly fast-paced field where maximizing productivity is critical for staying competitive and driving innovation. Efficient resource allocation, streamlined processes, and effective teamwork are all essential components of engineering productivity. In this guide, we will delve into the significance of measuring and improving engineering productivity, explore key metrics, provide strategies for enhancement, and examine the consequences of neglecting productivity tracking.
What is Engineering Productivity?
Engineering productivity refers to the efficiency and effectiveness of engineering teams in producing work output within a specified timeframe while maintaining high-quality standards. It encompasses various factors such as resource utilization, task completion speed, deliverable quality, and overall team performance. Essentially, engineering productivity measures how well a team can translate inputs like time, effort, and resources into valuable outputs such as completed projects, software features, or innovative solutions.
Tracking software engineering productivity involves analyzing key metrics like productivity ratio, throughput, cycle time, and lead time. By assessing these metrics, engineering managers can pinpoint areas for improvement, make informed decisions, and implement strategies to optimize productivity and achieve project objectives. Ultimately, engineering productivity plays a critical role in ensuring the success and competitiveness of engineering projects and organizations in today’s fast-paced technological landscape.
Why does Engineering Productivity Matter?
Impact on Project Timelines and Deadlines
Engineering productivity directly affects project timelines and deadlines. When teams are productive, they can deliver projects on schedule, meeting client expectations and maintaining stakeholder satisfaction.
Influence on Product Quality and Customer Satisfaction
High productivity levels correlate with better product quality. By maximizing productivity, engineering teams can focus on thorough testing, debugging, and refining processes, ultimately leading to increased customer satisfaction.
Role in Resource Allocation and Cost-Effectiveness
Optimized engineering productivity ensures efficient resource allocation, reducing unnecessary expenditures and maximizing ROI. By utilizing resources effectively, tech companies can achieve their goals within budgetary constraints.
The Importance of Tracking Engineering Productivity
Insights for Performance Evaluation and Improvement
Tracking engineering productivity provides valuable insights into team performance. By analyzing productivity metrics, organizations can identify areas for improvement and implement targeted strategies for enhancement.
Facilitates Data-Driven Decision-Making
Data-driven decision-making is essential for optimizing engineering productivity. Organizations can make informed decisions about resource allocation, process optimization, and project prioritization by tracking relevant metrics.
Helps in Setting Realistic Goals and Expectations
Tracking productivity metrics allows organizations to set realistic goals and expectations. By understanding historical productivity data, teams can establish achievable targets and benchmarks for future projects.
Factors Affecting Engineering Productivity
Team Dynamics and Collaboration
Effective teamwork and collaboration are essential for maximizing engineering productivity. By fostering a culture of collaboration and communication, organizations can leverage team members’ diverse skills and expertise to achieve common goals.
Work Environment and Organizational Culture
The work environment and organizational culture play a significant role in determining engineering productivity. A supportive and conducive work environment fosters team members’ creativity, innovation, and productivity.
Resource Allocation and Workload Management
Efficient resource allocation and workload management are critical for optimizing engineering productivity. By allocating resources effectively and balancing workload distribution, organizations can ensure that team members work on tasks that align with their skills and expertise.
Strategies to Improve Engineering Productivity
Identifying Productivity Roadblocks and Bottlenecks
Identifying and addressing productivity roadblocks and bottlenecks is essential for improving engineering productivity. By conducting thorough assessments of workflow processes, organizations can identify inefficiencies, focus on workload distribution, and implement targeted solutions for improvement.
Implementing Effective Tools and Practices for Optimization
Leveraging effective tools and best practices is crucial for optimizing engineering productivity. By adopting agile methodologies, DevOps practices, and automation tools, engineering organizations can streamline processes, reduce manual efforts, enhance code quality, and accelerate delivery timelines.
Prioritizing Tasks Strategically
Strategic task prioritization, along with effective time management and goal setting, is key to maximizing engineering productivity. By prioritizing tasks based on their impact and urgency, organizations can ensure that team members focus on the most critical activities, leading to improved productivity and efficiency.
Promoting Collaboration and Communication
Promoting collaboration and communication within engineering teams is essential for maximizing productivity. By fostering open communication channels, encouraging knowledge sharing, and facilitating cross-functional collaboration, organizations can leverage the collective expertise of team members to drive innovation and motivation and achieve common goals.
Continuous Improvement through Feedback Loops and Iteration
Continuous improvement is essential for maintaining and enhancing engineering productivity. By soliciting feedback from team members, identifying areas for improvement, and iteratively refining processes, organizations can continuously optimize productivity, address technical debt, and adapt to changing requirements and challenges.
Consequences of Not Tracking Engineering Productivity
Risk of Missed Deadlines and Project Delays
Neglecting to track engineering productivity increases the risk of missed deadlines and project delays. Without accurate productivity tracking, organizations may struggle to identify and address issues that could impact project timelines and deliverables.
Decreased Product Quality and Customer Dissatisfaction
Poor engineering productivity can lead to decreased product quality and customer dissatisfaction. Organizations may overlook critical quality issues without effective productivity tracking, resulting in negative business outcomes, subpar products, and unsatisfied customers.
Inefficient Resource Allocation and Higher Costs
Failure to track engineering productivity can lead to inefficient resource allocation and higher costs. Without visibility into productivity metrics, organizations may allocate resources ineffectively, wasting time and effort and incurring budget overruns.
Best Practices for Engineering Productivity
Setting SMART Goals
Setting SMART (specific, measurable, achievable, relevant, time-bound) goals is essential for maximizing engineering productivity. By setting clear and achievable goals, organizations can focus their efforts on activities that drive meaningful results and contribute to overall project success.
Establishing a Culture of Accountability and Ownership
Establishing a culture of accountability and ownership is critical for maximizing engineering productivity. Organizations can foster a sense of ownership and commitment that drives productivity and excellence by empowering team members to take ownership of their work and be accountable for their actions.
Promoting Work-Life Balance
Ensure work-life balance at the organization by promoting policies that support flexible schedules, encouraging regular breaks, and providing opportunities for professional development and personal growth. This can help reduce stress and prevent burnout, leading to higher productivity and job satisfaction.
Embracing Automation and Technology
Embracing automation and technology is key to streamlining processes and accelerating delivery timelines. By leveraging automation tools, DevOps practices, and advanced technologies, organizations can automate repetitive tasks, reduce manual efforts, and improve overall productivity and efficiency.
Investing in Employee Training and Skill Development
Investing in employee training and skill development is essential for maintaining and enhancing engineering productivity. By providing ongoing training and development opportunities, organizations can equip team members with the skills and knowledge they need to excel in their roles and contribute to overall project success.
Using Typo for Improved Engineering Productivity
Typo offers innovative features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams. It includes engineering metrics that can help you take action with in-depth insights.
Understanding Engineering Productivity Metrics
Below are a few important engineering metrics that can help in measuring team productivity:
Merge Frequency
Merge Frequency represents the rate at which Pull Requests are merged into any of the code branches per day. By tracking it, engineering teams can optimize their development workflows, improve collaboration, and increase team efficiency.
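A minimal sketch of measuring merge frequency from Git history by counting merge commits per day. Note the caveat: this approximates PR merges only under a merge-commit strategy; squash or rebase merges would need data from the Git host’s API instead:

```python
# A minimal merge-frequency sketch: merge commits per day on the current branch.
import subprocess
from collections import Counter

def merges_per_day(since="14 days ago"):
    """Count merge commits per calendar day since the given date."""
    out = subprocess.run(
        ["git", "log", "--merges", f"--since={since}",
         "--pretty=%cd", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(out.splitlines())

for day, count in sorted(merges_per_day().items()):
    print(f"{day}: {count} merges")
```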
Cycle Time
Cycle time measures the time it takes to complete a single iteration of a process or task. Organizations can identify opportunities for process optimization and efficiency improvement by tracking cycle time.
Deployment PR
Deployment PRs represent the average number of Pull Requests merged into the main/master/production branch per week. Measuring it helps improve engineering teams’ efficiency by providing insights into the frequency, timing, and success rate of code deployments.
Planning Accuracy
Planning Accuracy represents the percentage of Tasks Planned versus Tasks Completed within a given time frame. Its benchmarks help engineering teams measure their performance, identify improvement opportunities, and drive continuous enhancement of their planning processes and outcomes.
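The computation itself is simple arithmetic once planned and completed counts are known. A minimal sketch with hypothetical sprint numbers:

```python
# A minimal planning-accuracy sketch; the sprint counts are hypothetical.
def planning_accuracy(planned: int, completed: int) -> float:
    """Percentage of planned tasks actually completed in the time frame."""
    return 100.0 * completed / planned if planned else 0.0

print(f"{planning_accuracy(planned=40, completed=34):.0f}%")  # -> 85%
```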
Code Coverage
Code coverage is a measure that indicates the percentage of a codebase that is tested by automated tests. It helps ensure that the tests cover a significant portion of the code, identifying code quality, untested parts, and potential bugs.
How does Typo Help in Enhancing Engineering Productivity?
Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It seamlessly integrates into tech stacks such as Git versioning, issue trackers, and CI/CD tools. It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.
Features
Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
Includes effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint.
Provides a 360-degree view of the developer experience, i.e., captures qualitative insights and provides an in-depth view of the real issues.
Offers engineering benchmark to compare the team’s results across industries.
Improve Engineering Productivity Always to Stay Ahead
Measuring and improving engineering productivity is essential for achieving project success and driving business growth. By understanding the importance of productivity tracking, leveraging relevant metrics, and implementing effective strategies, organizations can optimize productivity, enhance product quality, and deliver exceptional results in today’s competitive software engineering landscape.
In conclusion, engineering productivity is not just a metric; it’s a mindset and a continuous journey towards excellence.
A software development team is critical for business performance. They wear multiple hats to complete the work and deliver high-quality software to end-users. On the other hand, organizations need to take care of their well-being and measure developer experience to create a positive workplace for them.
Otherwise, developers’ productivity and morale can take a hit, making their work less efficient and effective and disrupting the developer experience at the workplace.
With Typo, you can capture qualitative insights and get a 360 view of your developer experience. Let’s delve deeper into it in this blog post:
What is Developer Experience?
Developer experience refers to the overall experience of developer teams when using tools, platforms, and services to build software applications. It spans everything from documentation to coding and deployment, and includes both tangible and intangible experiences.
Happy developers = positive developer experience. A positive experience increases developers’ productivity and morale, and further leads to faster development cycles and better workflows, methods, and working conditions.
Not taking care of developer experience can make it difficult for businesses to retain and attract top talent.
Why is Developer Experience Beneficial?
Developer experience isn’t just a buzzword. It is a crucial aspect of your team’s productivity and satisfaction.
Below are a few benefits of developer experience:
Smooth Onboarding Process
Good devex ensures the onboarding process is as simple and smooth as possible. It includes making engineering teams familiar with the tools and culture and giving them the support they need to proceed further in their career. It also allows them to know other developers which can help them in collaboration and mentorship.
Improves Product Quality
A positive developer experience leads to 3 effective C’s – Collaboration, communication, and coordination. Adhering to coding standards, best practices and automated testing also helps in promoting code quality and consistency and catching and fixing issues early. As a result, they can easily create products that meet customer needs and are free from errors and glitches.
Increases Development Speed
When developer experience is handled carefully, team members can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documents, streamlined workflow, and a well-configured development environment are a few of the ways to boost development speed. It lets them minimize the need to switch between different tools and platforms which increases the focus and team productivity.
Attracts and Retains Top Talents
Developers usually look out for a strong tech culture so they can focus on their core skills and get acknowledged for their contributions. A good developer experience results in developer satisfaction and aligns their values and goals with the organization. In return, developers bring the best to the table and want to stay in the organization for the long run.
Enhances Collaboration
Great developer experience encourages collaboration and effective communication tools. This fosters teamwork and reduces misunderstandings. Through collaborative approaches, developers can easily discuss issues, share feedback, and work together on tasks.
How to Measure Developer Experience with Typo?
Typo helps with early indicators of their well-being and actionable insights on the areas that need attention through signals from work patterns and continuous AI-driven pulse check-ins on the experience of the developers.
Below is the process that Typo follows to gain insights into developer experience effectively:
Step 1: Pulse Surveys
Pulse surveys refer to short, periodic questionnaires used to gather feedback from developers to assess their engagement, satisfaction, and overall organizational health.
Typo’s pulse surveys are specifically designed for the software engineering team as it is built on a developer experience framework. It triggers AI-driven pulse surveys where each developer receives a notification periodically with a few conversational questions.
We highly recommend running surveys once a month to keep a tab on your team’s well-being and experiences and build a continuous feedback loop. However, you can customize the frequency of these surveys according to the company’s suitability and needs.
And don’t worry, these surveys are anonymous.
Step 2: Developer Experience Analytics
Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help to analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.
Below are key components of Typo’s developer experience analytics dashboard:
DevEx Score
The DevEx score indicates the overall state of well-being or happiness within an organization. It reflects the collective emotional and mental health of the developers.
Also known as the employee net promoter score, this score ranges between 1 and 10. It is based on the developer feedback collected. A high well-being score suggests that people are generally content and satisfied, while a low score may indicate areas of concern or areas needing improvement.
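For illustration, here is a minimal sketch of an eNPS-style calculation using the conventional banding (scores of 9 or 10 are promoters, 7 or 8 are passives, the rest are detractors); Typo’s exact scoring formula is not documented here, so treat this as an assumption:

```python
# A minimal eNPS-style sketch with conventional banding; responses are
# hypothetical, and the vendor's actual formula may differ.
def enps(scores):
    """Percentage of promoters minus percentage of detractors."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

responses = [9, 8, 10, 6, 7, 9, 5, 8, 10, 7]
print(f"eNPS: {enps(responses):+.0f}")  # -> +20
```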
Response Rate
It is the percentage of people who responded to the check-in. A higher response rate represents a more reliable dataset for analyzing developer experience metrics and deriving insights.
This is a percentage number along with the delta change; you will also see the exact count behind the percentage, plus a trend graph showing data from the last 4 weeks.
It also includes trending sentiments, which segment employees based on the most frequently recurring sentiments mentioned by the developer team.
Recent comments
This section shows all the concerns raised by developers, which you can reply to in order to drive meaningful conversations. Doing so offers valuable insights into their workflow challenges, helps address issues promptly, and boosts developer satisfaction.
Heatmap
In this section, you can slice and dice your data to deep-dive further at the level of different demographics. The list of demographics is as follows:
Designation
Location
Team
Tenure
Burnout Alerts
Typo sends automated alerts to your communication channels to help you identify burnout signs in developers at an early stage. This enables leaders to track developer engagement, support their well-being, maintain productivity, and create a positive and thriving work environment.
Typo tracks the work habits of developers across multiple activities, such as commits, PRs, reviews, comments, tasks, and merges, over a certain period. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system identifies them as being in the burnout zone or at risk of burnout. These benchmarks can be customized to meet your specific needs.
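To illustrate the pattern described above (a sketch of the general idea, not Typo’s actual model), the following flags developers whose average weekly activity sits consistently far above that of the rest of the team; the activity counts and z-score threshold are hypothetical:

```python
# A burnout-risk sketch: compare each developer's average weekly activity
# (commits + PRs + reviews) against the rest of the team.
# The data and threshold are hypothetical assumptions, not Typo's model.
from statistics import mean, stdev

weekly_activity = {
    "dev_a": [22, 25, 24, 27],
    "dev_b": [14, 15, 13, 16],
    "dev_c": [38, 41, 45, 44],  # consistently far above teammates
}

def burnout_risk(activity, z_threshold=2.0):
    """Flag developers whose mean activity is far above their peers'."""
    at_risk = []
    for dev, weeks in activity.items():
        others = [v for d, w in activity.items() if d != dev for v in w]
        mu, sigma = mean(others), stdev(others)
        z = (mean(weeks) - mu) / sigma  # distance from the rest of the team
        if z > z_threshold:
            at_risk.append(dev)
    return at_risk

print(burnout_risk(weekly_activity))  # -> ['dev_c']
```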
Developer Experience Framework, Powered by Typo
Typo’s developer experience framework suggests what engineering leaders should focus on when measuring developer productivity and experience.
Below are the key focus areas and their drivers incorporated in the developer experience framework:
Manager Support
It refers to the level of assistance, guidance, and resources provided by managers or team leads to support developers in their work.
| Sub focus area | Description | Questions |
| --- | --- | --- |
| Empathy | The ability to understand and relate to developers, actively listen, and show compassion in interactions. | Do you feel comfortable sharing your concerns or personal challenges with your manager?<br>Do you feel comfortable expressing yourself in this space?<br>Does your manager actively listen to your ideas without judgment? |
| Coach and guide | The role of managers is to provide expertise, advice, and support to help developers improve their skills, overcome challenges, and achieve career goals. | Does your manager give constructive feedback regularly?<br>Does your manager give you the guidance you need in your work?<br>Does your manager help you learn and develop new skills? |
| Feedback | The ability to provide timely and constructive feedback on performance, skills, and growth areas, helping developers gain insights, refine their skills, and work towards achieving their career objectives. | Do you feel that your manager’s feedback helps you understand your strengths and areas for improvement?<br>Do you feel comfortable providing feedback to your manager?<br>How effectively does your manager help you get support for technical growth? |
Developer Flow
It is a state of optimal engagement and productivity that developers experience when fully immersed and focused on their work.
Sub focus areas
Description
Questions
Work-life balance
Maintaining a healthy equilibrium between work responsibilities and personal life promotes well-being, boundaries, and resources for managing workload effectively.
How would you rate the work-life balance in your current role?
Do you feel supported by your team in maintaining a good work-life balance?
How would you rate the work-life balance in your current role?
Autonomy
Providing developers with the freedom and independence to make decisions, set goals, and determine their approach and execution of tasks.
Do you feel free to make decisions for your work?
Do you feel encouraged to explore new ideas and experiment with different solutions?
Do you think your ideas are well-supported by the team?
Focus time
The dedicated periods of uninterrupted work where developers can deeply concentrate on their tasks without distractions or interruptions.
How often do you have time for focused work without interruptions?
How often do you switch context during focus time?
How often can you adjust your work schedule to improve conditions for focused work when needed?
Goals
Setting clear objectives that provide direction, motivation, and a sense of purpose in developers’ work enhances their overall experience and productivity.
Have you experienced success in meeting your goals?
Are you able to track your progress towards your goals?
How satisfied are you with the goal-setting process within your team?
Product Management
The practices involved in overseeing a software product’s lifecycle, from ideation to development, launch, and ongoing management.
Sub focus areas
Description
Questions
Clear requirements
Providing developers with precise and unambiguous specifications, ensuring clarity, reducing ambiguity, and enabling them to meet the expectations of stakeholders and end-users.
Are the requirements provided for your projects clear and well-defined?
Do you have the necessary information you need for your tasks?
Do you think the project documentation covers everything you need?
Reasonable timelines
Setting achievable and realistic project deadlines, allowing developers ample time to complete tasks without undue pressure or unrealistic expectations.
Do you have manageable timeframes and deadlines that enhance the quality of your work?
Are you provided with the resources you need to meet the project timelines?
How often do you encounter unrealistic project timelines?
Collaborative discussions
Fostering open communication among developers, product managers, and stakeholders, enabling constructive discussions to align product strategies, share ideas, and resolve issues.
Are your inputs valued during collaborative discussions?
Does your team handle conflicts well in product meetings?
How often do you actively participate during collaborative discussions?
Development and Releases
It refers to creating and deploying software solutions or updates, emphasizing collaboration, streamlined workflows, and reliable deployment to enhance the developer experience.
Sub focus areas
Description
Questions
Tools and technology
Providing developers with the necessary software tools, frameworks, and technologies to facilitate their work in creating and deploying software solutions.
Are you satisfied with the tools provided to you for your development work?
Has the availability of tools positively impacted your development process?
To what extent do you believe that testing tools adequately support your work?
Code review
Evaluating code changes for quality, adherence to standards, and identifying issues to enhance software quality and promote collaboration among developers.
Do you feel that code reviews contribute to your growth and development as a developer?
How well does your team address the issues identified during code reviews?
How often do you receive constructive feedback during code reviews that helps improve your coding skills?
Code health
Involves activities like code refactoring, performance optimization, and enforcing best practices to ensure code quality, maintainability, and efficiency, thereby enhancing the developer experience and software longevity.
Are coding standards and best practices consistently followed in the development process?
Do you get enough support with technical debt & code-related issues?
Are you satisfied with the overall health of the codebase you’re currently working on?
Frictionless releases
Streamlining software deployment through automation, standardized procedures, and effective coordination, reducing errors and delays for a seamless and efficient process that enhances the developer experience.
Do you often have post-release reviews to identify areas for improvement?
Do you feel that the release process is streamlined in your projects?
Is the release process in your projects efficient?
Culture and Values
It refers to shared beliefs, norms, and principles that shape a positive work environment. It includes collaboration, open communication, respect, innovation, diversity, and inclusion, fostering creativity, productivity, and satisfaction among developers.
Sub focus areas
Description
Questions
Psychological safety
Creating an environment where developers feel safe to express their opinions, take risks, and share their ideas without fear of judgment or negative consequences.
Do you feel that your team creates an atmosphere where trust, respect, and openness are valued?
Do you feel comfortable sharing your thoughts without worrying about judgment?
Do you believe that your team fosters a culture where everyone’s opinions are valued?
Recognition
Acknowledging and appreciating developers’ contributions and achievements through meaningful recognition, fostering a positive and motivating environment that boosts morale and engagement.
Does recognition at your workplace make you happier and more involved in your job?
Do you feel that your hard work is acknowledged by your team members and manager?
Do you believe that recognition motivates you to perform better in your role?
Team collaboration
Fostering open communication, trust, and knowledge sharing among developers, enabling seamless collaboration and idea exchange, and leveraging strengths to achieve common goals.
Is there a strong sense of teamwork and cooperation within your team?
Are you confident in your team’s ability to solve problems together?
Do you believe that your team leverages individual expertise to enhance collaboration?
Learning and growth
Continuous learning and professional development, offering skill-enhancing opportunities, encouraging a growth mindset, fostering curiosity and innovation, and supporting career progression.
Does your organization encourage your professional growth?
Are there any training programs you would like to see implemented?
Does your organization invest enough in employee training and development?
Conclusion
Measuring developer experience continuously is crucial. It provides real-time feedback on workflow efficiency, early signs of burnout, and overall satisfaction levels, which helps identify areas for improvement and fosters a more productive and enjoyable work environment for developers.
Developer experience has become an integral part of any software development company. A direct relationship exists between developer experience and developer productivity: a positive developer experience leads to high developer productivity, greater job satisfaction, efficiency, and high-quality products.
When organizations don’t focus on developer experience, they may encounter many workflow problems that negatively impact overall business performance.
In this blog, let’s learn more about the developer experience framework that is beneficial to developers, engineering managers, and organizations.
What is Developer Experience?
In simple words, developer experience is about the experience software developers have while working in an organization.
It covers the developers’ journey while working with specific frameworks, programming languages, platforms, documentation, general tools, and open-source solutions.
Positive developer experience = Happier teams
Developer experience has a direct relationship with developer productivity. A positive experience results in high dev productivity, which further leads to high job satisfaction, performance, and morale. Hence, happier developer teams.
This starts with understanding the unique needs of developers and fostering a positive work culture for them.
Benefits of Developer Experience
Smooth Onboarding Process
DX ensures that the onboarding process is as simple and smooth as possible. This includes familiarizing new developers with the tools and culture, as well as giving them the support they need to progress in their careers.
It also helps them get to know other developers, which aids collaboration, open communication, and seeking help whenever required.
Improves Product Quality
A positive developer experience leads to three effective C’s: collaboration, communication, and coordination. Besides this, adhering to coding standards, best practices, and automated testing promotes code quality and consistency and helps catch and fix issues early.
As a result, teams can more easily create products that meet customer needs and are free from errors and glitches.
Increases Development Speed
When developer experience is handled with care, software developers can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documentation, streamlined workflows, and a well-configured development environment are a few ways to boost development speed.
It also minimizes the need to switch between different tools and platforms, which increases focus and team productivity.
Attract and Retain Top Talent
Developers usually look for a strong tech culture where they can focus on their core skills and be acknowledged for their contributions. A good developer experience increases job satisfaction and aligns their values and goals with the organization.
In return, developers bring the best to the table and want to stay in the organization for the long run.
Enhanced Collaboration
The right kind of developer experience encourages collaboration and provides effective communication tools. This fosters teamwork and reduces misunderstandings.
Through collaborative approaches, developers can easily discuss issues, share feedback, and work together on tasks. It helps streamline the development process and results in high-quality work.
Two Key Frameworks and Their Limitations
There are two well-known frameworks for measuring developer productivity. However, they come with certain drawbacks, so a new developer experience framework is needed to bridge the gap in how organizations approach developer experience and productivity.
Let’s take a look at DORA metrics and SPACE frameworks along with their limitations:
DORA Metrics
DORA metrics were identified after six years of research and surveys by DORA (DevOps Research and Assessment). They help engineering leaders determine two things:
The characteristics of a top-performing team
How their performance compares to the rest of the industry
It defines four key metrics (a computation sketch follows the definitions):
Deployment Frequency
Deployment Frequency measures how often code is deployed to production or released to end-users in a given time frame.
Lead Time for Changes
Also known as cycle time, Lead Time for Changes measures the time between a commit being made and that commit reaching production.
Mean Time to Recover
This metric is also known as mean time to restore. Mean Time to Recover measures the time required to resolve an incident, i.e., a service incident or defect impacting end-users.
Change Failure Rate
Change Failure Rate measures the proportion of deployments to production that result in degraded service.
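All four metrics are straightforward to compute once deployment and incident records are available. The sketch below is illustrative only; the record shapes (commit time, deploy time, a degradation flag, and incident start/end times) are hypothetical stand-ins for what a CI/CD pipeline and incident tracker would provide, not any particular tool's schema.

```python
from datetime import datetime, timedelta

# Hypothetical records; real data would come from CI/CD and incident tooling.
deployments = [
    # (commit_time, deploy_time, caused_degradation)
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 2, 10), False),
    (datetime(2024, 5, 3, 14), datetime(2024, 5, 3, 18), True),
    (datetime(2024, 5, 6, 11), datetime(2024, 5, 7, 9), False),
    (datetime(2024, 5, 8, 16), datetime(2024, 5, 9, 12), False),
]
incidents = [
    # (detected_at, resolved_at)
    (datetime(2024, 5, 3, 19), datetime(2024, 5, 3, 22)),
]
window_days = 7  # reporting window

# Deployment Frequency: deployments per day over the window.
deployment_frequency = len(deployments) / window_days

# Lead Time for Changes: mean commit-to-production time.
lead_time = sum(
    ((deploy - commit) for commit, deploy, _ in deployments), timedelta()
) / len(deployments)

# Mean Time to Recover: mean detection-to-resolution time.
mttr = sum(((end - start) for start, end in incidents), timedelta()) / len(incidents)

# Change Failure Rate: share of deployments that degraded service.
change_failure_rate = sum(flag for *_, flag in deployments) / len(deployments)

print(deployment_frequency, lead_time, mttr, change_failure_rate)
```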
Limitations of DORA metrics
It Doesn't Take into Consideration All the Factors that Add to the Success of the Development Process
DORA metrics are a useful tool for tracking and comparing DevOps team performance. Unfortunately, they don’t take into account all the factors behind a successful software development process. For example, assessing coding skills across teams can be challenging due to varying levels of expertise. These metrics also overlook the actual effort behind the scenes, such as debugging, feature development, and more.
It Doesn't Provide Full Context
While DORA metrics tell us whether a metric is low or high, they don’t reveal the reason behind it. Suppose there is an increase in lead time for changes; it could be due to various reasons. For example, DORA metrics might not reflect the effectiveness of feedback provided during code review, thereby overlooking the true impact and value of the code review process.
The Software Development Landscape is Constantly Evolving
The software development landscape is changing rapidly, and DORA metrics may not quickly adapt to emerging programming practices, coding standards, and other software trends. For instance, code review has evolved to include not only traditional peer reviews but also practices like automated code analysis. DORA metrics may not capture such new approaches fully and hence may not properly assess the effectiveness of these reviews.
SPACE Framework
This framework helps in understanding and measuring developer productivity. It takes into consideration both the qualitative and quantitative aspects and uses various data points to gauge the team's productivity.
The 5 dimensions of this framework are:
Satisfaction and Well-Being
The dimension of developers’ satisfaction and well-being is often evaluated through developer surveys, which assess whether team members are content, happy, and exhibiting healthy work practices. There is a strong connection between contentment, well-being, and productivity, and teams that are highly productive but dissatisfied are at risk of burning out if their well-being is not improved.
Performance
The SPACE Framework originators recommend evaluating a developer’s performance based on their work outcome, using metrics like Defect Rate and Change Failure Rate. Every failure in production takes away time from developing new features and ultimately harms customers.
Activity
The SPACE framework includes activity metrics that provide insight into developer outputs, such as on-call participation, pull requests opened, the volume of code reviewed, or documents written, which resemble older productivity measures. However, the framework emphasizes that such activity metrics should not be viewed in isolation but should be considered in conjunction with other metrics and qualitative information.
Communication and Collaboration
Teams that are highly transparent and communicative tend to be the most successful. This enables developers to have a clear understanding of their priorities and how their work contributes to larger projects, and it also facilitates knowledge sharing among team members.
Indicators that can be used to measure collaboration and communication may include the extent of code review coverage and the quality of documentation.
Efficiency and Flow
The concept of efficiency in the SPACE framework pertains to an individual’s ability to complete tasks quickly with minimal disruption, while team efficiency refers to the ability of a group to work effectively together. These are essential factors in reducing developer frustration.
Limitations of SPACE framework
It Doesn’t Tell You WHY
While the SPACE framework measures developer productivity, it doesn’t tell you why certain measurements have a specific value, nor can it identify the events that triggered a change. The framework offers a structured approach to evaluating internal and external factors but doesn’t delve into the deeper motivations driving them.
Limited Scope for Innovation
Too much focus on efficiency and stability can stifle developers’ creativity and innovation. The framework can push teams to focus more on hitting specific targets, while a culture that embraces change, experimentation, and a certain level of uncertainty doesn’t align well with the framework’s principles.
Too Many Metrics
This framework has five dimensions and multiple metrics, so it produces an overwhelming amount of data. Further, engineering leaders need to set up data collection, maintain data accuracy, and analyze the results. This makes it difficult to identify critical insights and prioritize actions.
Need for a new Developer Experience Framework
This new framework suggests to organizations and engineering leaders what they should focus on when measuring developer productivity and experience.
Below are the key focus areas and their drivers incorporated in the Developer Experience Framework:
Manager Support
Refers to the level of assistance, guidance, and resources provided by managers or team leads to support developers in their work.
Empathy
The ability to understand and relate to developers, actively listen, and show compassion in interactions.
Coach and Guide
The role of managers is to provide expertise, advice, and support to help developers improve their skills, overcome challenges, and achieve career goals.
Feedback
The ability to provide timely and constructive feedback on performance, skills, and growth areas, helping developers gain insights, refine their skills, and work towards achieving their career objectives.
Developer Flow
Refers to a state of optimal engagement and productivity that developers experience when they are fully immersed and focused on their work.
Work-Life Balance
Maintaining a healthy equilibrium between work responsibilities and personal life, with clear boundaries and resources for managing workload effectively, promotes well-being.
Autonomy
Providing developers with the freedom and independence to make decisions, set goals, and determine their approach and execution of tasks.
Focus Time
The dedicated periods of uninterrupted work where developers can deeply concentrate on their tasks without distractions or interruptions.
Goals
Setting clear objectives that provide direction, motivation, and a sense of purpose in developers' work enhances their overall experience and productivity.
Product Management
Refers to the practices involved in overseeing the lifecycle of a software product, from ideation to development, launch, and ongoing management.
Clear Requirements
Providing developers with precise and unambiguous specifications, ensuring clarity, reducing ambiguity, and enabling them to meet the expectations of stakeholders and end-users.
Reasonable Timelines
Setting achievable and realistic project deadlines, allowing developers ample time to complete tasks without undue pressure or unrealistic expectations.
Collaborative Discussions
Fostering open communication among developers, product managers, and stakeholders, enabling constructive discussions to align product strategies, share ideas, and resolve issues.
Development and Releases
Refers to creating and deploying software solutions or updates, emphasizing collaboration, streamlined workflows, and reliable deployment to enhance the developer experience.
Tools and Technology
Providing developers with the necessary software tools, frameworks, and technologies to facilitate their work in creating and deploying software solutions.
Code Health
Involves activities like code refactoring, performance optimization, and enforcing best practices to ensure code quality, maintainability, and efficiency, thereby enhancing the developer experience and software longevity.
Frictionless Releases
Streamlining software deployment through automation, standardized procedures, and effective coordination, reducing errors and delays for a seamless and efficient process that enhances the developer experience.
Culture and Values
Refers to shared beliefs, norms, and principles that shape a positive work environment. It includes collaboration, open communication, respect, innovation, diversity, and inclusion, fostering creativity, productivity, and satisfaction among developers.
Psychological Safety
Creating an environment where developers feel safe to express their opinions, take risks, and share their ideas without fear of judgment or negative consequences.
Recognition
Acknowledging and appreciating developers' contributions and achievements through meaningful recognition, fostering a positive and motivating environment that boosts morale and engagement.
Team Collaboration
Fostering open communication, trust, and knowledge sharing among developers, enabling seamless collaboration and idea exchange, and leveraging strengths to achieve common goals.
Learning and Growth
Continuous learning and professional development, offering skill-enhancing opportunities, encouraging a growth mindset, fostering curiosity and innovation, and supporting career progression.
Conclusion
The developer experience framework creates an indispensable link between developer experience and productivity. Organizations that neglect developer experience face workflow challenges that can harm business performance.
Prioritizing developer experience isn’t just about efficiency. It includes creating a work culture that values individual developers, fosters innovation, and propels software development teams toward unparalleled success.
Typo aligns seamlessly with the principles of the Developer Experience Framework, empowering engineering leaders to revolutionize their teams.
'Product Thinking Secrets for Platform Teams' with Geoffrey Teale, Principal Product Engineer, Upvest
November 15, 2024
•
31 min read
In this episode of the groCTO Podcast, host Kovid Batra engages in a comprehensive discussion with Geoffrey Teale, the Principal Product Engineer at Upvest, who brings over 25 years of engineering and leadership experience.
The episode begins with Geoffrey's role at Upvest, where he has transitioned from Head of Developer Experience to Principal Product Engineer, emphasizing a holistic approach to improving both developer experience and engineering standards across the organization. Upvest's business model as a financial infrastructure company providing investment banking services through APIs is also examined. Geoffrey underscores the multifaceted engineering requirements, including security, performance, and reliability, essential for meeting regulatory standards and customer expectations. The discussion further delves into the significance of product thinking for internal teams, highlighting the challenges and strategies of building platforms that resonate with developers' needs while competing with external solutions.
Throughout the episode, Geoffrey offers valuable insights into the decision-making processes, the importance of simplicity in early-phase startups, and the crucial role of documentation in fostering team cohesion and efficient communication. Geoffrey also shares his personal interests outside work, including his passion for music, open-source projects, and low-carbon footprint computing, providing a holistic view of his professional and personal journey.
Timestamps
00:00 - Introduction
00:49 - Welcome to the groCTO Podcast
01:22 - Meet Geoffrey: Principal Engineer at Upvest
01:54 - Understanding Upvest's Business & Engineering Challenges
03:43 - Geoffrey's Role & Personal Interests
05:48 - Improving Developer Experience at Upvest
08:25 - Challenges in Platform Development and Team Cohesion
13:03 - Product Thinking for Internal Teams
16:48 - Decision-Making in Platform Development
19:26 - Early-Phase Startups: Balancing Resources and Growth
Kovid Batra: Hi, everyone. This is Kovid, back with another episode of groCTO Podcast. Today with us, we have a very special guest who has great expertise in managing developer experience at small scale and large scale organizations. He is currently the Principal Engineer at Upvest, and has almost 25 plus years of experience in engineering and leadership. Welcome to the show, Geoffrey. Great to have you here.
Geoffrey Teale: Great to be here. Thank you.
Kovid Batra: So Geoffrey, I think, uh, today's theme is more around improving the developer experience, bringing the product thinking while building the platform teams, the platform. Uh, and you, you have been, uh, doing all this from quite some time now, like at Upvest and previous organizations that you've worked with, but at your current company, uh, like Upvest, first of all, we would like to know what kind of a business you're into, what does Upvest do, and let's then deep dive into how engineering is, uh, getting streamlined there according to the business.
Geoffrey Teale: Yeah. So, um, Upvest is a financial infrastructure company. Um, we provide, uh, essentially investment banking services, a complete, uh, solution for building investment banking experiences, uh, for, for client organizations. So we're business to business to customer. We provide our services via an API and client organizations, uh, names that you'd heard of people like Revolut and N26 build their client-facing applications using our backend services to provide that complete investment experience, um, currently within the European Union. Um, but, uh, we'll be expanding out from there shortly.
Kovid Batra: Great. Great. So I think, uh, when you talk about investment banking and supporting the companies with APIs, what kind of engineering is required here? Is it like more, uh, secure-oriented, secure-focused, or is it more like delivering on time? Or is it more like, uh, making things very very robust? How do you see it right now in your organization?
Geoffrey Teale: Well, yeah, I mean, I think in the space that we're in the, the answer unfortunately is all of the above, right? So all those things are our requirements. It has to be secure. It has to meet the, uh, the regulatory standards that we, we have in our industry. Um, it has to be performant enough for our customers who are scaling out to quite large scales, quite large numbers of customers. Um, has to be reliable. Um, so there's a lot of uh, uh, how would I say that? Pressure, uh, to perform well and to make sure that things are done to the highest possible standard in order to deliver for our customers. And, uh, if we don't do that, then, then, well, the customers won't trust us. If they don't trust us, then we wouldn't be where we are today. So, uh, yeah.
Kovid Batra: No, I totally get that. Uh, so talking more about you now, like, what's your current role in the organization? And even before that, tell us something about yourself which the LinkedIn doesn't know. Uh, I think the audience would love to know you a little bit more. Uh, let's start from there. Uh, maybe things that you do to unwind or your hobbies or you're passionate about anything else apart from your job that you're doing?
Geoffrey Teale: Oh, well, um, so, I'm, I'm quite old now. I have a family. I have two daughters, a dog, a cat, fish, quail. Keep quail in the garden. Uh, and that occupies most of my time outside of work. Actually my passions outside of work were always um, music. So I play guitar, and actually technology itself. So outside of work, I'm involved and have been involved in, in open source and free software for, for longer than I've been working. And, uh, I have a particular interest in, in low carbon footprint computing that I pursue outside of, out of work.
Kovid Batra: That's really amazing. So, um, like when you say low carbon, uh, cloud computing, what exactly are you doing to do that?
Geoffrey Teale: Oh, not specifically cloud computing, but that would be involved. So yeah, there's, there's multiple streams to this. So one thing is about using, um, low power platforms, things like RISC-V. Um, the other is about streamlining of software to make it more efficient so we can look into lots of different, uh, topics there about operating systems, tools, programming languages, how they, uh, how they perform. Um, sort of reversing a trend, uh, that's been going on for as long as I've been in computing, which is that we use more and more power, both in terms of computing resource, but also actual electricity for the network, um, to deliver more and more functionality, but we're also programming more and more abstracted ways with more and more layers, which means that we're actually sort of getting less, uh, less bang for buck, if you, if you like, than we used to. So, uh, trying to reverse those trends a little bit.
Kovid Batra: Perfect. Perfect. All right. That's really interesting. Thanks for that quick, uh, cute little intro. Uh, and, uh, now moving on to your work, like we were talking about your experience and your specialization in DevEx, right, improving the developer experience in teams. So what's your current, uh, role, responsibility that comes with, uh, within Upvest? Uh, and what are those interesting initiatives that you have, you're working on?
Geoffrey Teale: Yeah. So I've actually just changed roles at Upvest. I've been at Upvest for a little bit over two years now, and the first two years I spent as the Head of Developer Experience. So running a tribe with a specific responsibility for client-facing developer experience. Um, now I've switched into a Principal Engineering role, which means that I have, um, a scope now which is across the whole of our engineering department, uh, with a, yeah, a view for improving experience and improving standards and quality of engineering internally as well. So, um, a slight shift in role, but my, my previous five years before, uh, Upvest, were all in, uh, internal development experience. So I think, um, quite a lot of that skill, um, coming into play in the new role which um, yeah, in terms of challenges actually, we're just at the very beginning of what we're doing on that side. So, um, early challenges are actually about identifying what problems do exist inside the company and where we can improve and how we can make ourselves ready for the next phase of the company's lifetime. So, um, I think some of those topics would be quite familiar to any company that's relatively modern in terms of its developer practices. If you're using microservices, um, there's this aspect of Conway's law, which is to say that your organizational structure starts to follow the program structure and vice versa. And, um, in that sense, you can easily get into this world where teams have autonomy, which is wonderful, but they can be, um, sort of pushed into working in a, in a siloized fashion, which can be very efficient within the team, but then you have to worry about cohesion within the organization and about making sure that people are doing the right things, uh, to, to make the services work together, in terms of design, in terms of the technology that we develop there. So that bridges a lot into this world of developer experience, into platform drives, I think you mentioned already, and about the way in which you think about your internal development, uh, as opposed to just what you do for customers.
Kovid Batra: I agree. I mean, uh, as you said, like when the teams are siloed, they might be thinking they are efficient within themselves. And that's mostly the use case, the case. But when it comes to integrating different pieces together, that cohesion has to fall in. What is the biggest challenge you have seen, uh, in, in the teams in the last few years of your experience that prevents this cohesion? And what is it that works the best to bring in this cohesion in the teams?
Geoffrey Teale: Yeah. So I think there's, there's, there's a lot of factors there. The, the, the, the biggest one I think is pressure, right? So teams in most companies have customers that they're working for, they have pressure to get things done, and that tends to make you focus on the problem in front of you, rather than the bigger picture, right? So, um, dealing, dealing with that and reinforcing the message to engineers that it's actually okay to do good engineering and to worry about the other people, um, is a big part of that. I've always said, actually, that in developer experience, a big part of what you have to do, the first thing you have to do is actually teach people about why developer experience is important. And, uh, one of those reasons is actually sort of saying, you know, promoting good behavior within engineering teams themselves and saying, we only succeed together. We only do that when we make the situation for ourselves that allows us to engineer well. And when we sort of step away from good practice and rush, rush, um, that maybe works for a short period of time. But, uh, in the long term that actually creates a situation where there's a lot of mess and you have to deal with, uh, getting past, we talk about factors like technical debt. There's a lot of things that you have to get past before you can actually get on and do the productive things that you want to do. Um, so teaching organizations and engineers to think that way is, uh, is, uh, I think a big, uh, a big part of the work that has to be done, finding ways to then take that message and put it into a package that is acceptable to people outside of engineering so that they understand why this is a priority and why it should be worked on is, I think, probably the second biggest part of that as well.
Kovid Batra: Makes sense. I think, uh, most of the, so is it like a behavioral challenge, uh, where, uh, developers and team members really don't like the fact that they have to work in cohesion with the teams? Or is it more like the organizational structure that put people into a certain kind of mindset and then they start growing with that and that becomes a problem in the later phase of the organization? What, what you have seen, uh, from your experience?
Geoffrey Teale: Yeah. So I mean, I think growth is a big part of this. So, um, I mean, I've, I've worked with a number of startups. I've also worked in much bigger organizations. And what happens in that transition is that you move from a small tight-knit group of people who sort of inherently have this very good interpersonal communication, they all know what's going on with the company as a whole, and they build trust between them. And that way, this, this early stage organization works very well, and even though you might be working on disparate tasks, you always have some kind of cohesion there. You know what to do. And if something comes up that affects all of you, it's very easy to identify the people that you need to talk to and find a solution for it. Then as you grow, you start to have this situation where you start to take domains and say, okay, this particular part of, of what we do now belongs in a team, it has a leader and this piece over here goes over there. And that still works quite well up into a certain scale, right? But after time in an organization, several things happen. Okay, so your priorities drift apart, right? You no longer have such good understanding of the common goal. You tend to start prioritizing your work within those departments. So you can have some, some tension between those goals. It's not always clear that Department A should be working together with Department B on the same priority. You also have natural staff turnover. So those people who are there at the beginning, they start to leave, some of them, at least, and these trust relationships break down, the communication channels break down. And the third factor is that new people coming into the organization, they haven't got these relationships, they haven't got this experience. They usually don't have, uh, the position to, to have influence over things on such a large scale. So they get an expectation of these people that they're going to be effective across the organization in the way that people who've been there a long time are, and it tends not to happen. And if you haven't set up for that, if you haven't built the support systems for that and the internal processes and tooling for that, then that communication stops happening in the way that it was happening before.
So all of those things create pressure to, to siloes, then you put it on the pressure of growth and customers and, and it just, um, uh, ossifies in that state.
Kovid Batra: Totally. Totally. And I think, um, talking about the customers, uh, last time when we were discussing, uh, you very beautifully put across this point of bringing that product thinking, not just for the products that you're building for the customer, but when you're building it for the teams. And I, what I feel is that, the people who are working on the platform teams have come across this situation more than anyone else in the team as a developer, where they have to put in that thought of product thinking for the people within the team. So what, what, what, uh, from where does this philosophy come? How you have fitted it into, uh, how platform teams should be built? Just tell us something about that.
Geoffrey Teale: Yeah. So this is something I talk about a little bit when I do presentations, uh, about developer experience. And one of the points that I make actually, particularly for platform teams, but any kind of internal team that's serving other internal teams is that you have to think about yourself, not as a mandatory piece that the company will always support and say, "You must use this, this platform that we have." Because I have direct experience, not in my current company, but in previous, uh, in previous employers where a lot of investment has been made into making a platform, but no thought really was given to this kind of developer experience, or actually even the idea of selling the platform internally, right? It was just an assumption that people would have to use it and so they would use it. And that creates a different set of forces than you'll find elsewhere. And, and people start to ignore the fact that, you know, if you've got a cloud platform in this case, um, there is competition, right? Every day as an engineer, you run into people out there working in the wide world, working for, for companies, the Amazons, AWS of this world, as your Google, they're all producing cloud platform tools. They're all promoting their cloud native development environments with their own reasons for doing that. But they expend a lot of money developing those things, developing them to a very high standard and a lot of money promoting and marketing those things. And it doesn't take very much when we talk just now about trust breaking down, the cohesion between teams breaking down. It doesn't take very much for a platform to start looking like less of a solution and more of a problem if it's taking you a long time to get things done, if you can't find out how to do things, if you, um, you have bad experiences with deployment. This all turns that product into an internal problem.
Kovid Batra: In context of an internal problem for the teams.
Geoffrey Teale: Yeah, and in that context, and this is what I, what I've seen, when you then either have someone coming in from outside with experience with another, a product that you could use, or you get this kind of marketing push and sales push from one of these big companies saying, "Hey, look at this, this platform that we've got that you could just buy into." um, it, it puts you in direct competition and you can lose that, that, right? So I have seen whole divisions of a, of a very large company switch away from the internal platform to using cloud native development, right, on, on a particular platform. Now there are downsides for that. There are all sorts of things that they didn't realize they would have to do that they end up having to do. But once they've made the decision, that battle is lost. And I think that's a really key topic to understand that you are in competition, even though you're an internal team, you are in competition with other people, and you have to do some of the things that they do to convince the people in your organization that what you're doing is beneficial, that it's, it's, it's useful, and it's better in some very distinct way than what they would get off the shelf from, from somewhere else.
Kovid Batra: Got it. Got it. So, when, uh, whenever the teams are making this decision, let's, let's take something, build a platform, what are those nitty gritties that one should be taking care of? Like, either people can go with off the shelf solutions, right? And then they start building. What, what should be the mindset, what should be the decision-making mindset, I must say, uh, for, for this kind of a process when they have to go through?
Geoffrey Teale: So I think, um, uh, we within Upvest, follow a very, um, uh, prescribed is not the right word, but we have a, we have a process for how we think about things, and I think that's actually a very useful example of how to think about any technical project, right? So we start with this 'why' question and the 'why' question is really important. We talk about product thinking. Um, this is, you know, who are we doing this for and what are the business outcomes that we want to achieve? And that's where we have to start from, right? So we define that very, very clearly because, and this is a really important part, there's no value, uh, in anybody within the organization saying, "Let's go and build a platform." For example, if that doesn't deliver what the company needs. So you have to have clarity about this. What is the best way to build this? I mean, nobody builds a platform, well not nobody, but very few people build a platform in the cloud starting from scratch. Most people are taking some existing solution, be that a cloud native solution from a big public cloud, or be that Kubernetes or Cloud Foundry. People take these tools and they wrap them up in their own processes, their own software tools around it to package them up as a, uh, a nice application platform for, for development to happen, right? So why do you do that? What, what purpose are you, are you serving in doing this? How will this bring your business forward? And if you can't answer those questions, then you probably should never even start the project, right? That's, that's my, my view. And if you can't continuously keep those, um, ideas in mind and repeat them back, right? Repeat them back in terms of what are we delivering? What do we measure up against to the, to the, to the company? Then again, you're not doing a very good job of, of, of communicating why that product exists. If you can't think of a reason why your platform delivers more to your company and the people working in your company than one of the off the shelf solutions, then what are you for, right? That's the fundamental question.
So we start there, we think about those things well before we even start talking about solution space and, and, um, you know, what kind of technology we're going to use, how we're going to build that. That's the first lesson.
Kovid Batra: Makes sense. A follow-up question on that. Uh, let's say a team is let's say 20-30 folks right now, okay? I'm talking about an engineering team, uh, who are not like super-funded right now or not in a very profit making business. This comes with a cost, right? You will have to deploy resources. You will have to invest time and effort, right? So is it a good idea according to you to have shared resources for such an initiative or it doesn't work out that way? You need to have dedicated resources, uh, working on this project separately or how, how do you contemplate that?
Geoffrey Teale: My experience of early-phase startups is that people have to be multitaskers and they have to work on multiple things to make it work, right? It just doesn't make sense in the early phase of a company to invest so heavily in a single solution. Um, and I think one of the mistakes that I see people making now actually is that they start off with this, this predefined idea of where they're going to be in five years. And so they sort of go away and say, "Okay, well, I want my, my, my system to run on microservices on Kubernetes." And they invest in setting up Kubernetes, right, which has got a lot easier over the last few years, I have to say. Um, you can, to some degree, go and just pick that stuff off the shelf and pay for it. Um, but it's an example of, of a technical decision that, that's putting the cart before the horse, right? So, of course, you want to make architectural decisions. You don't want to make investments on something that isn't going to last, but you also have to remember that you don't know what's going to happen. And actually, getting to a product quickly, uh, is more important than, than, you know, doing everything perfectly the first time around. So, when I talk about these, these things, I think uh, we have to accept that there is a difference between being like the scrappy little startup and then being in growth phase and being a, a mega corporation. These are different environments with different pressures.
Kovid Batra: Got it. So, when, when teams start, let's say, work on it, working on it and uh, they have started and taken up this project for let's say, next six months to at least go out with the first phase of it. Uh, what are those challenges which, uh, the platform heads or the people who are working, the engineers who are working on it, should be aware of and how to like dodge those? Something from your experience that you can share.
Geoffrey Teale: Yes. So I mean, in, in, in the, the very earliest phase, I mean, as I just alluded to that keeping it simple is, is a, a, a big benefit. And actually keeping it simple sometimes means, uh, spending money upfront. So what I've, what I've seen is, is, um, many times I've, I've worked at companies, um, but so many, at least three times who've invested in a monitoring platform. So they've bought a off the shelf software as a service monitoring platform, uh, and used that effectively up until a certain point of growth. Now the reason they only use it up into a certain point of growth is because these tools are extremely expensive and those costs tend to scale with your company and your organization. And so, there comes a point in the life of that organization where that no longer makes sense financially. And then you withdraw from that and actually invest in, in specialist resources, either internally or using open source tools or whatever it is. It could just be optimization of the tool that you're using to reduce those costs. But all of those things have a, a time and financial costs associated with them. Whereas at the beginning, when the costs are quite low to use these services, it actually tends to make more sense to just focus on your own project and, and, you know, pick those things up off the shelf because that's easier and quicker. And I think, uh, again, I've seen some companies fail because they tried to do everything themselves from scratch and that, that doesn't work in the beginning. So yeah, I think that's a, it's a big one.
The second one is actually slightly later as you start to grow, getting something up and running at all is a challenge. Um, what tends to happen as you get a little bit bigger is this effect that I was talking about before where people get siloized, um, the communication starts to break down and people aren't aware of the differing concerns. So if you start worrying about things that you might not worry about at first, like system recovery, uh, compliance in some cases, like there's laws around what you do in terms of your platform and your recoverability and data protection and all these things, all of these topics tend to take focus away, um, from what the developers are doing. So on the first hand, that tends to slow down delivery of, of, features that the engineers within your company want in favor of things that they don't really want to know about. Now, all the time you're doing this, you're taking problems away from them and solving them for them. But if you don't talk about that, then you're not, you're not, you may be delivering value, but nobody knows you're delivering value. So that's the first thing.
The other thing is that you then tend to start losing focus on, on the impact that some of these things have. If you stop thinking about the developers as the primary stakeholders and you get obsessed about these other technical and legal factors, um, then you can start putting barriers into place. You can start, um, making the interfaces to the system the way in which it's used, become more complicated. And if you don't really focus then on the developer experience, right, what it is like to use that platform, then you start to turn into the problem, which I mentioned before, because, um, if you're regularly doing something, if you're deploying or testing on a platform and you have to do that over and over again, and it's slowed down by some bureaucracy or some practice or just literally running slowly, um, then that starts to be the thing that irritates you. It starts to be the thing that's in your way, stopping you doing what you're doing. And so, I mean, one thing is, is, is recognizing when this point happens, when your concerns start to deviate and actually explicitly saying, "Okay, yes, we're going to focus on all these things we have to focus on technically, but we're going to make sure that we reserve some technical resource for monitoring our performance and the way in which our customers interact with the system, failure cases, complaints that come up often."
Um, so one thing, again, I saw in much bigger companies, is they migrated to the cloud from, from legacy systems in data centers. And they were used to having turnaround times on, on procedures for deploying software that took at least weeks or having month-long projects because they had to wait for specific training that they had to get sign off. And they thought that by moving to an internal cloud platform, they would solve these things and have this kind of rapid development and deployment cycle. They sort of did in some ways, but they forgot, right? When they were scaling out, they forgot to make the developers a stakeholder and saying, "What do you need to achieve that?" And what they actually need to achieve that is a change in the mindset around the bureaucracy that came around. It's all well and good, like not having to physically put a machine in a rack and order it from a company. But if you still have these rules that say, okay, you need to go in this training course before you can do anything with this, and there's a six month waiting list for that training course, or this has to be approved by five managers who can only be contacted by email before you can do it. These processes are slowing things down. So actually, I mentioned that company that, uh, we lost the whole department from the, from the, uh, platform that we had internally. One of the reasons actually was that just getting started with this platform took months. Whereas if you went to a public cloud service, all you needed was a credit card and you could do it and you wouldn't be breaking any rules in the company in doing that. As long as you had the, the right to spend the money on the credit card, it was fine.
So, you know, that difference of experience, that difference of, uh, of understanding something that starts to grow out as you, as you grow, right? So I think that's a, uh, a thing to look out for as you move from the situation when you're 10, 20 people in the whole company to when you're about, I would say, 100 to 200 people in the whole company. These forces start to become apparent.
Kovid Batra: Got it. So when, when you touch that point of 100-200, uh, then there is definitely a different journey that you have to look up to, right? And there are their own set of challenges. So from that zero to one and then one to X, uh, journey, what, what things have you experienced? Like, this would be my last question for, for today, but yeah, I would be really interested for people who are listening to you heading teams of sizes, a hundred and above. What kind of things they should be looking at when they are, let's say, moving from an off the shelf to an in-house product and then building these teams together?
Geoffrey Teale: Oh, what should they be looking at? I mean, I think we just covered, uh, one of the big ones. I'd say actually that one of the, the biggest things for engineers particularly, um, and managers of engineers is resistance to documentation and, and sort of ideas about documentation that people have. So, um, when you're again, when you're that very small company, it's very easy to just know what's going on. As you grow, what happens, new people come into your team and they have the same questions that have been asked and answered before, or were just known things. So you get this pattern where you repeatedly get the same information being requested by people and it's very nice and normal to have conversations. It builds teams. Um, but there's this kind of key phrase, which is, 'Documentation is automation', right? So engineers understand automation. They understand why automation is required to scale, but they tend to completely discount that when it comes to documentation. So almost every engineer that I've ever met hates writing documentation. Not everyone, but almost everyone. Uh, but if you go and speak to engineers about what they need to start working with a new product, and again, we think about this as a product, um, they'll say, of course, I need some documentation. Uh, and if you dive into that, they don't really want to have fancy YouTube videos. And so, that sometimes that helps people overcome a resistance to learning. Um, but, uh, having anything at all is useful, right? But this is a key, key learning documentation. You need to treat it a little bit like you treat code, right? So it's a very natural, um, observation from, from most engineers. Well, if I write a document about this, that document is just going to sit there and, and rot, and then it will be worse than useless because it will say the wrong thing, which is absolutely true. But the problem there is that someone said it will sit there and rot, right? It shouldn't be the case, right? If you need the documentation to scale out, you need these pieces to, to support new people coming into the company and to actually reduce the overhead of communication because more people, the more different directions of communication you have, the more costly it gets for the organization. Documentation is boring. It's old-fashioned, but it is the solution that works for fixing that.
The only other thing I'm going to say about is mindset, is it's really important to teach engineers what to document, right? Get them away from this mindset that documentation means writing massive, uh, uh, reams and reams of, of text explaining things in, in detail. It's about, you know, documenting the right things in the right place. So at code-level, commenting, um, saying not what the code there does, but more importantly, generally, why it does that. You know, what decision was made that led to that? What customer requirement led to that? What piece of regulation led to that? Linking out to the resources that explain that. And then at slightly higher levels, making things discoverable. So we talk actually in DevEx about things like, um, service catalogs so people can find out what services are running, what APIs are available internally. But also actually documentation has to be structured in a way that meets the use cases. And so, actually not having individual departments dropping little bits of information all over a wiki with an arcane structure, but actually sort of having a centralized resource. Again, that's one thing that I did actually in a bigger company. I came into the platform team and said, "Nobody can find any information about your platform. You actually need like a central website and you need to promote that website and tell people, 'Hey, this is here. This is how you get the information that you need to understand this platform.' And actually including at the very front of that page why this platform is better than just going out somewhere else to come back to the same topic."
Documentation isn't a silver bullet, but it's the closest thing I'm aware of in tech organizations, and it's the thing that we routinely get wrong.
Kovid Batra: Great. I think, uh, just in the interest of time, we'll have to stop here. But, uh, Geoffrey, this was something really, really interesting. I also explored a few things, uh, which were very new to me from the platform perspective. Uh, we would love to, uh, have you for another episode discussing and deep diving more into such topics. But for today, I think this is our time. And, uh, thank you once again for joining in, taking out time for this. Appreciate it.
Geoffrey Teale: Thank you. It's my pleasure.
'The Art & Science of Leading Global Dev Teams' with Christopher Zotter, Head of Engineering, Sky Germany
November 1, 2024
•
29 min read
In this episode of the groCTO Originals podcast, host Kovid Batra engages in an insightful conversation with Christopher Zotter, the Head of Digital Engineering at Sky, Germany. Christopher brings a wealth of experience, including a decade of leading engineering teams and founding a software development agency.
Known for his unique leadership philosophy, Christopher believes in the power of building trust, embracing failures, and fostering a transparent culture. He shares his journey from an apprentice in Germany to a leadership role, emphasizing the importance of hands-on experience and continuous learning. The discussion delves into the challenges and strategies of managing culturally diverse remote teams, effective communication, and transitioning from legacy systems to cutting-edge technologies.
Christopher also highlights the significance of being a role model and integrating community involvement into one’s career. This episode offers a deep dive into the principles and practices that can guide leaders in nurturing successful global development teams.
Timestamps
00:00 — Introduction
00:49 — Welcome to the groCTO Podcast
01:39 — Meet Christopher: Personal and Professional Background
03:34 — Christopher’s Career Journey and Key Learnings
05:38 — The Importance of Community and Respect in Leadership
07:42 — Balancing Side Projects and Career Growth
11:33 — Leading Global Teams at Sky
15:20 — Challenges and Strategies in Remote Team Management
21:48 — Navigating Major System Migrations
24:26 — Ensuring Team Motivation and Embracing Change
Kovid Batra: Hi, everyone. This is Kovid, back with another episode of the groCTO podcast. And today with us, we have a very special guest. He's Head of Engineering at Sky Germany. He is also the founder of a software dev agency, and he has been leading engineering teams for the past 10 years now. Today, we are going to talk to him about how to lead global dev teams, because he has been an expert at doing that. So welcome to the show, Christopher. Great to have you here.
Christopher Zotter: Thanks for having me. I'm really excited to be here, part of this great podcast. I got to know it over the last months, with its key insights, and I hope I can provide some of my learnings from past experience to your great audience as well. So happy, happy to be here.
Kovid Batra: I'm sure you can do that. All right. But before we get started into knowing something about your team and your areas of expertise in how you lead teams, we would love to know a little bit about you. Something that LinkedIn doesn't know, something that has been very impactful in your life, from your childhood, from your teenage years. Anything that you would like to share?
Christopher Zotter: So first of all, the most important part is not business, it's my family. I'm a proud father of two kids and I have a lovely wife. That is the foundation of everything I do, including doing my job properly, to be honest, and it gives me energy. And what is not on LinkedIn, or it's on LinkedIn but worth mentioning, is that I didn't study anything. So you see my title now, which, I also have to reflect, is impressive even to myself, but I only did a normal apprenticeship in Germany to work as a software developer. So I really started at the core of things, and I made my way through by doing the things, getting hands-on, and not fearing mistakes. I learned from the things I did. I once deployed a hard-coded ID to production while testing software in the past. Yeah, that never happened again. So I really get hands-on and collect these kinds of experiences. And what is also important, I think, is to not only focus on the software things, but also to do something for society, for the community beside the work, which gives me balance. This is not on LinkedIn, but it is something that has had a very positive impact on my path. So, yeah, that's roughly who I am, but I can also continue with a bit of my journey to this position, if you're interested.
Kovid Batra: Sure, why not? Please go ahead.
Christopher Zotter: Um, yeah. As I said, I did an apprenticeship in Germany, which takes mostly three to three and a half years, and I had the chance to work at a very small company. The company doesn't exist anymore, I think, but I got the chance to work in a very small team with great experts, and I got responsibility from day one. So I didn't develop anything for the trash; it was really something that could go to production, of course with a review process, et cetera. And the advice I can already share is: try to do as many things as possible. In the younger years, you have the time. I see that now with family, the priorities shift obviously, but use the time you have, do side projects if possible, because nothing beats getting hands-on experience. And this is, I think, also the big learning I had over the time: I got all of my promotions, all of my way through the career, starting from an apprenticeship, junior developer, senior developer, lead developer, and now Head of Engineering, through my experience. I was hands-on and I can prove and showcase what I did, starting from core skills, a simple HTML page with a simple contact form, everything. I got my hands on different things to get the knowledge, and I think knowledge and experience beat most things, but you can't study them. You need to get hands-on. Yeah, just briefly, and now I'm here.
Kovid Batra: Yeah, no, I think that was a very, very nice intro, and now we know you a little more. One thing that I really loved is when you said that it's not just about work; there is family, there's the community that you want to give to. I'm sure this community work would have helped in shaping some level of leadership, some level of giving back. I think leadership is another name for giving back, so it should come from there. So can you share some of your experience from there that helped you in your career, moving from, let's say, an IC to an EM and then growing into a leadership position?
Christopher Zotter: I like that you say leadership is giving back. Yes. I didn't see it that way, but it totally resonates with me. At the end, it's all about the people. We have so many wars happening on this planet, so many people working against each other, and I try to do the opposite, because we're all humans. I learned that through working for the community in a certain way. I worked for one year supporting disabled young people, going with them to school, and there I learned: hey, these are all humans and everybody's trying their best. Also now, in my position, it's about people: understanding their feelings, their circumstances, their perspectives, their culture. We will come to that topic later, because there are different cultures working together; in software development you work across the globe. And there you always need to think about that, and not act like everybody is under pressure to just get it done, get it done. We need to consider the humans behind it and try to create a win-win situation where everybody feels confident and comfortable and respected. I'm a very value-driven person, and my key value is respect, because respect applies to everything, no matter what you're doing. It starts when you go into the office: greet the cleaning person the same way you greet the CEO. We are all humans, everybody's putting the bits and pieces together, and this we sometimes forget in our daily business. So this is what I definitely learned from being there, giving something back to the community. So yeah.
Kovid Batra: Perfect. Perfect. Another interesting piece in your career is the lack of an academic background in engineering, and then doing things hands-on. And you were working on a side business as well, which you just mentioned, where you recommend people do that at an early age, because that's where you get most of your experience and knowledge of how to complete things. How exactly has that contributed to your career growth? I also come from a similar experience, so I would love for you to explain how this has contributed.
Christopher Zotter: Okay. Yeah, great. I started my side business, I think, eight or nine years ago. And by the way, it is now coming to an end; it has already more or less ended, because my daily job requires full attention, plus family. There is no time, and you also need to say no to things. But in that time it was pretty important for me, because what I did is take the things I learned in my company, in my apprenticeship, and do some projects, first for my own and then for my inner circle. Some friends had built up a company and needed a home page or a web application, whatever it was, and I built it in my side business. I adapted the things I learned in my daily business, enhanced them in my own environment, tested them, and built up the knowledge. Trying things out in smaller pieces to see if they work, not in the big company you're working at, helped me a lot to grow: trial and error. At the end, that's the experience you get, and if you bring that experience back to your company, if you want to make a career, this is where you benefit from it. Yeah, that knowledge beats everything in the end.
Kovid Batra: Sure. I also had a side business, and how it has helped me is that I was interacting with the customers directly, right? For me, that was a great experience, because in a larger organization, where you have people doing the customer-facing work and you are just getting the requirements, that relatability with the problem statement and with the audience is much lower. So I think that way it has helped me much more, from that point of view.
Christopher Zotter: Interesting, because at Sky, our claim is that the customer, the user, is at the center of everything, and I'm a soccer fan myself. Just to explain what Sky is doing, because there may be some confusion for your audience in India: the Sky channel known there is a bit different from what Sky Germany is doing. For you: we are the major entertainment provider here in Germany, pay TV. We have sports, mostly the Bundesliga, the German soccer rights, and some own-produced movies. You can watch Netflix and more over our platform, either via streaming or via our Sky Q receiver. And as I'm a big Bayern Munich fan, I have used Sky, or Premiere as it was named a long time ago, for a long, long time. So on the one hand I'm also the customer, using our product, knowing what's going on and what the issues are; and on the other hand I can bring that in and learn from it, which is a great benefit. But I can echo it: it's definitely one of the key things to know who your audience is, who the users and customers are, and to go out and get to know them and their behavior in order to deliver them the best product, the best experience they can have.
Kovid Batra: Sure, sure. Absolutely. All right. I think we've discussed most of what you do outside Sky. Now, moving from that note into the world of Sky, where you are heading teams, most of them working remotely from India, from Germany and other parts of the world. First, I would like to understand how things have changed in the last four or five years from your perspective. You have grown from a manager to a leadership profile. What were the responsibilities that came into your role with these global teams that helped you grow here? How was the experience of the last four years?
Christopher Zotter: It was an amazing ride. I think every step has its challenges in a certain way. As a developer, you can go to other developers, or you have your scrum master and feature teams. But becoming a leader of such a big team (my team is currently five people here in Germany and 15–16 right now sitting in Chennai, India), you have to think about different things. You have to think about team harmony, how people work together; you have to think about communication; you have to think about values, how everything works together. It's not only about getting the code done in a proper way with all the quality checks in between. It helps that I got the experience beforehand, to know what is technically possible and what we need to do in order to shape the best and most effective process. We will talk later, I think, about what can be done there. But also, as I said previously, you have to consider the different perspectives. Everybody is on a different level, has different circumstances. Somebody has personal circumstances right now, so probably not that much focus on work, which is fine; we need to deal with that too and support wherever we can. Somebody is getting sick, and all the other things you need to consider. It was also a big change for me, and I'm still in progress, to be honest, because I started my journey as a developer and I love to code. But that much coding is not possible anymore in this position. You need to build up a team you can trust, give them a task and get it done, or get the right feedback, whatever it is. So one of the things is to build trust by having a lot of conversations, having a lot of coffee in the office with the different guys to get to know what's going on. And of course, I am now in a position of having stakeholder communication with our CTO, COO, different areas, which you don't normally have as a developer, where you only get the requirements. So again, I'm a bit closer to the customer, right? Because I can also bring my bits and pieces into some of the features and decisions. This is one of the biggest changes: to step out of the real hands-on work and bring in the layer on top, to prepare everything and protect everything so that my developers and architects can really focus on the work without any disruption and make the work as smooth and as fast as possible.
Kovid Batra: But I think in your case, compared to, I would say, a single-culture team, your case is different. You have people in India and across the globe. This collaboration, I'm sure, becomes a little difficult, and it's a challenge for a lot of companies after COVID, because things have gone remote and people are hiring across borders. How has the experience been for you, handling these remote teams from different cultures? What really worked out, what didn't work out? Some of those examples from your journey.
Christopher Zotter: Uh, yes, this is definitely a challenge, and I have to say I'm the only German-speaking guy in my team. We are a German company, but I'm the only German-speaking guy. Here in Germany we have some Indian colleagues, some from Russia, uh, sorry, from Ukraine, some from Egypt. So it's mixed. And as you said, a lot of people are coming from Chennai, India. Imagine: that is thousands of kilometers of distance and, at the end, two different cultures. And this was the biggest learning I got at the beginning. Just an example: a yes doesn't mean a yes. We had some requirements, we talked about them, and I got the feedback, "Yes." Okay, so I assumed the ticket would be done, but it was only, "Yes, I understand that I need to do that," not, "Yes, I understand it." So there's a learning in communication over time, which the whole company has to go through. We all need to transform, here at Sky and also at Comcast Engineering in India, so that we move forward together, find a way of communicating, and get to know the other culture, the other people, the other behavior, how they're working.
Um, and of course, I'm a fan of remote working, but also a fan of getting in touch, getting into personal conversations with people, not only via camera but in person. That's also why we have some mandatory days at Sky where we need to go to the office. But I'm also there in India once or twice a year, even if it's a long trip and, you know, a challenge with family; the investment is worth it. I got to know the Indian culture very well, and it also shows appreciation; they recognize, "Hey, they really care about us, and we're not only there as an outsourced team to get things done." As I said, my goal is to take care of the people, to treat them with respect, and to find the way together. And if you're having the 1-on-1 conversations in person, getting to know the culture, going to temples, getting to know everything around, the food. Oh! The food is amazing in India. Then you grow together. After my second visit, I can say the communication was totally different. I really feel the trust of my team now; they say, "Hey, Christopher, this doesn't work." And you know, this is a cultural topic, because in India they're not used to saying, "No, it's not working." They say yes and try to make it work anyhow, but that doesn't help in the daily business. It's better to say, "I need help" in the first place, and then we can get it done as a team. But coming to that point: that's one of the biggest challenges I faced. It's still not perfect yet, but this is where we always think about what their circumstances are. Is that 'yes' really "yes, I got it", or do they need some other kind of help that we can provide to them?
Kovid Batra: I think a very, very good example. Being an Indian, I can totally relate to it. Uh, we go with that mindset and at times it is not, uh, beneficial for the business as such, but there is a natural instinct which says, okay, let’s say yes. Let’s say, “Yeah, we are trying.” And try to fight for it maybe. Not sure what exactly drives that, but yeah, a very, uh, important point to understand and look at.
All right. So this is definitely one example which our audience, if they are leading teams from India, should keep in mind. Anything else that comes to your mind that you would want to do to ensure good communication or collaboration across these teams?
Christopher Zotter: Sticking to the topic, I think it is to be a role model. I said it in my introduction: I deployed something hard-coded to production with an ID. I always bring that up as an example to say, "Yes, this was a failure, but I took a great learning out of it." So establish these kinds of things; act as a role model, especially as a leader, because then you lead and the people will follow you. My claim is to act as a leader who doesn't stand apart: I'm the same, I only have another title, but we are all equal. I can't do my work without you, and the other way around. We're one team, no matter who is at which level, junior or whoever, so we work together as a team, and I'm there to support everybody. And I always say, "If they don't need me anymore, I did my job perfectly." That is what I'm aiming for: to really be a leader, to be a role model, to say, "Hey, this doesn't work," or, "Oh, this was my failure of the week." That's what we now try to establish, a 'failure of the week', where everybody turns a failure into a learning and shares it with the audience. It breaks down the barriers a bit. They see, "Hey, they are doing it, so I can do that as well." And this takes away the fear of "if I admit too many things I can't do, I get fired." That's the biggest fear, as I got to know while talking to the people. And that's not the case; I appreciate it more if you say it to me instead of hiding it. So, um, yeah, this is definitely the thing.
Kovid Batra: True. One example that comes to my mind: when I talk to my friends and colleagues who are working across different organizations, like Amazon, Microsoft and the like, handling teams in India for the US or vice versa, whenever there are huge transitions, let's say from legacy systems to a new architecture, I've seen them in a stressed situation for 6 to 12 months, saying, "The team is not here; communicating and managing that stuff is becoming difficult for me." They were making multiple trips to the main home ground to get things done. In your case, you are remote-first, and I'm assuming most of the time you're dealing with such situations remotely. So has there been a situation where you had to migrate from legacy systems to new systems, a new architecture, and there were challenges on that journey?
Christopher Zotter: We're currently in one. We are in a big transformation phase at Sky; it has been going on for some years, and let's say we're in the final steps. A few years back we challenged every technology we had and asked: what can we provide best to our customers? What technology is cutting edge? What technology brings us faster cycles of deployment, faster cycles of change? We challenged everything, from our content management system up to our complete CRM system, and we're currently in the middle of it. The challenge is obviously the past: something is not documented, some processes are simply there, and not everybody wants to challenge all of the things that happened before. But it's exactly the right time to do so, to challenge what was there: do we really need to convert it and migrate it to the new system or not? And to get better at doing it: take the learnings, challenge them, and bring them to the new system. That's also why I started at Sky, to kick off that journey; at that point in time I was the developer who started it, and now I'm happy to say that we are in very good shape. We are live with most of the things already. The migration is still going on, but our sales journey and more is already live and going to customers. We have proper monitoring set up. We have good testing in place. But again, I also see the old worlds, the old systems, and we all have to be open-minded about transferring to new things, to always learn every day. I think your audience knows that pretty well: in software development, every day there is a new tool, a new change, a new version, new things you need to update here and there. To always stay at that level is a challenge we face every day, but we're trying to do our best to always get the latest version and the best features out for our customers.
Kovid Batra: Sure. I think one very good point you highlighted: as a leader, as a manager, you might realize that this change is for the good, that it is going to impact us in much better ways from a business point of view, from an engineering point of view. But when it comes to the people who are actually developing and coding, how do you ensure that such big migrations go smoothly and people don't resist? A plan and a strategy are definitely one thing you have to craft carefully, but another very important thing is the innate motivation of people to execute it, so that they think of use cases and make it even better than what you have planned, at least on paper. So what do you do to ensure that kind of culture shift, that kind of culture being instilled in people to embrace change?
Christopher Zotter: First of all, be your own customer; you need to consume your own product as well, so dogfood it. It's a bit difficult with India, but we have possibilities to use Sky at least in the office, to play around, to watch the movies and the content, so that we can identify with it. That's the first thing: we know what we're doing and how our customers are acting. And I use a lot of data: hey, how many visits do we have on these pages? Does this feature have an impact on our sales, whatever it is? Using that data to show: hey, the button you're changing right now is not only a color change; it has a psychological effect. If you change it to green, it gives positive feedback to our customers so that they click and buy; just a simple example. And when we put that in production or do some user tests, you directly see your impact, and it goes to millions of customers. Bringing that to the table every day opens things up: the things they're doing have a real impact, and that is something everybody can be proud of. I always say: look, you can show that to your family, to your mother. That's a good thing about this kind of development: you can showcase it. If you're doing an API, it's also important, but it's a different thing. So we're constantly measuring, constantly improving, and this gives the developers a sense of, "Hey, what I'm doing here is really important, and this is the impact." And in order not to put too much pressure on the people, we are working in a SAFe environment, a scaled agile framework, where we plan the next three months ahead. The planning is done by the developers, and the developers commit to the plan provided by the business; they commit to what they can achieve. So they have the plan and they have an influence on it. This gives us a balance: to be predictable, but also to have the developers identify with the things they're developing.
Kovid Batra: Got it. Got it. Makes sense. I think it revolves around creating the right incentives, creating the right experiences for the developers to understand and relate to. While you're talking about having the right incentives and measuring the impactful areas, I'm sure you must be using some metrics, some processes, to ensure that you continuously improve on these things and keep working on the impactful areas. So at Sky or at your previous organizations, what kind of frameworks have you deployed? What kind of metrics do you look at for different initiatives?
Christopher Zotter: First of all, I learned that you can only improve what you measure; that's the one claim I always come back to. It can take a while, but then you also see the improvements. Just an example. I'm a developer, so let's start with the coding part, probably GitHub. In GitHub there are a lot of different cycles, starting from creating a pull request, reviewing a pull request, checking whether it gets rejected or not, how many comments you get, up to the connected CI/CD, where our testing frameworks run against the features we want to merge. This is one of the key indicators we look into: how big is a pull request? How much time does it take for it to get reviewed? There are KPIs behind all of that, but my goal is to identify whether I need to go deeper into some topic to find a root cause. The same happens at the delivery level, not the code level: there we have our tickets and our story points, and we can roughly say a story point is one day, more or less. If I see there's one story point, but the ticket has been in development for five days, I need to go into communication and ask, "Hey, are there any challenges? Do you need some support? Is there a knowledge gap?" Or if a feature has too many bugs assigned after it's merged to our development stage, we probably have a lack of quality; it could point to a knowledge gap here and there. So these are my measures, and this again comes back to culture: use the data the right way, and not to say, "I micromanage you; you get fired if you don't hit the KPIs." No. The key is that these KPIs give me an alert as early as possible, so that I can go into communication, take the people by the hand, and work on strategies together. It could be knowledge sharing, it could be coaching, it could be whatever. It could also be that I identify issues with one of the product owners, for example, who doesn't provide all of the details in a ticket before it comes to development. It can be a lot of things, but if I don't do that, I only get to know about it weeks later, and then it's too late. So it gives me an indicator of where I need to get into communication to improve the process and the people, to make them better and to support them.
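As a rough illustration of the early-warning checks Christopher describes, here is a minimal Python sketch. The data shapes and thresholds are assumptions made for the example, and the one-day-per-story-point rule of thumb is taken from his description; none of this is Sky's actual tooling:

```python
from datetime import datetime, timezone

MAX_PR_LINES = 400          # assumed threshold: bigger PRs are hard to review well
MAX_REVIEW_WAIT_HOURS = 24  # assumed threshold before nudging a reviewer
DAYS_PER_STORY_POINT = 1.0  # "a story point is one day, more or less"

def pr_alerts(pull_requests):
    """Yield alerts for risky pull requests.

    Each item is assumed to be a dict like:
    {"number": 42, "lines_changed": 910,
     "opened_at": <tz-aware datetime>, "first_review_at": datetime or None}
    """
    now = datetime.now(timezone.utc)
    for pr in pull_requests:
        if pr["lines_changed"] > MAX_PR_LINES:
            yield f"PR #{pr['number']}: {pr['lines_changed']} lines changed; consider splitting it."
        if pr["first_review_at"] is None:
            waiting_h = (now - pr["opened_at"]).total_seconds() / 3600
            if waiting_h > MAX_REVIEW_WAIT_HOURS:
                yield f"PR #{pr['number']}: unreviewed for {waiting_h:.0f}h; ping a reviewer."

def ticket_alerts(tickets):
    """Yield alerts for tickets drifting far past their story-point estimate."""
    for t in tickets:
        budget_days = t["story_points"] * DAYS_PER_STORY_POINT
        if t["days_in_development"] > 3 * budget_days:
            yield (f"{t['key']}: {t['days_in_development']} days in development for "
                   f"{t['story_points']} point(s); time for a conversation.")
```

In practice this kind of check would sit on top of data pulled from GitHub and the ticketing system; the point, as Christopher stresses, is that an alert triggers a conversation, not a punishment.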
Kovid Batra: Makes sense. Very rightly said: using these metrics always makes sense, but how you use them is ultimately the core thing, whether they are going to help you or backfire. So yeah, great advice there, Christopher. In the interest of time, we'll have to take a pause here, though I really loved the discussion and would love to dive deeper into how you're managing your teams; maybe another episode for that. Once again, thanks a lot for taking out the time, sharing your experience at Sky and telling us about yourself. Thank you so much.
Christopher Zotter: Thanks for having me. It was a pleasure to be here. Happy to come a second time to deep dive into some of the topics, if there's interest. And kudos to you; it's a great podcast. I love listening to it myself, because I pick up some nuggets each time. So keep pushing it. Thanks a lot.
Kovid Batra: Thank you so much, Christopher.
'DevEx: It's NOT Just About Dev Tools!' with Vilas Veeraraghavan, Startup Advisor, Ex-Walmart
October 18, 2024
•
35 min read
In this episode of the groCTO Originals podcast, host Kovid Batra engages with Vilas, an accomplished engineering leader with significant experience at companies like Walmart, Netflix, and Bill.com.
Vilas discusses the concept of Developer Experience (DevEx) and how it extends beyond simply providing tools. Vilas highlights the importance of enabling developers with frictionless processes and addresses the multidimensional challenges involved. The conversation delves into Vilas’s journey in DevEx, insights from designing platforms and enabling developer productivity, and the necessity of engaging with key opinion leaders for successful adoption. Vilas shares personal anecdotes and learning experiences, stressing the significance of treating developer enablement as a product and encouraging collaboration.
The discussion concludes with advice for those stepping into DevEx roles, underlining the evolving significance of this field in the industry.
Timestamps
00:00 — Introduction
00:51 — Meet Vilas: The Man Behind the Expertise
04:28 — Diving into DevEx: Concepts and Definitions
06:32 — The Evolution of DevEx: From Platform to Productivity
13:19 — Challenges and Strategies in DevEx Implementation
Kovid Batra: Hi everyone, this is Kovid, back with a new episode of the groCTO podcast. Today with us, we have a very special guest. He's an accomplished engineering leader who has been building successful teams for the last 15 years at Walmart, Netflix and Bill.com, and with his expertise in DevEx and dev productivity, he's now very well known. We found Vilas through LinkedIn, and his posts around DevEx and dev productivity just started resonating with me. So welcome to the show, Vilas. Great to have you here.
Vilas Veeraraghavan: Thanks Kovid. I am grateful for getting to meet people like yourself who are interested in this topic and want to talk about it. Um, so yeah, I’m looking forward to having a discussion.
Kovid Batra: Perfect. Perfect. But Vilas, before we get started, um, this is a ritual for groCTO podcast.
Vilas Veeraraghavan: Okay.
Kovid Batra: Uh, we will have to know you a little more beyond what LinkedIn tells about you. So tell us about yourself: your hobbies, how you unwind your day, something from your childhood memories that tells us who Vilas is today. So, yeah.
Vilas Veeraraghavan: Okay. Okay. I was not prepared for that, but I'll share it anyway. The thing that most people don't know about me is that I am a big movie fan. I watch movies of all languages, all kinds, and I pride myself on knowing most of the details around why a movie was made. Like, you know, I really want to get into those details; I want to get the inspiration behind the movie. It's almost like appreciating art: you want to get into why this person did this. So I'm very passionate about that; that's something that people don't necessarily know. And apart from that, I enjoy running and walking. It sounds weird to say I enjoy walking, but I genuinely do. That's the place where I do most of my thinking, analysing, all of that.
Kovid Batra: Perfect. Which one's the weirdest movie that you have watched, where you found out details that were very surprising for you as well?
Vilas Veeraraghavan: I don’t know if I would say weird, but you know, all of, every director, every film director has one movie that, you know, they have always yearned to make. So they, their entire career goes in sort of trying to get to that movie, right? Because it’s their magnum opus, right? That’s the, that’s the term that people use. Um, I always find that fascinating. So I always try to look for, for every director, what was their magnum opus, right? Uh, so for example, for Raj Kapoor, it was Mera Naam Joker, and that was his magnum opus. Like what went into really making that film? Why did he make it? Like what? And you’ll realize also that their vision, the director’s vision is actually very, um, pure in those, in a sense that they will not listen to anyone else. They will not edit it short. They will not cut off songs or scenes. It’s such a, uh, important thing for them that they will deliver it. So I always chase that. That’s the story I chase.
Kovid Batra: Got it. Perfect. I think that was a very quick, interesting intro about yourself. Good to know that you are a movie buff. Now let's move on to the main section. Just so the audience knows, we're going to talk about DevEx and dev productivity, which is Vilas's main area of expertise. And his quote from my last discussion with him was that DevEx is not just some tools, some dev productivity tools, being brought in. So with that note, let's get started, Vilas.
Vilas Veeraraghavan: Sure.
Kovid Batra: What according to you defines DevEx? Like let’s start with that first basic question. What is DevEx for you?
Vilas Veeraraghavan: Okay. So before I jump into that, I want to give the context behind that statement, right? It's not about throwing tools at someone and expecting that things will get better. I learnt that over time. I was a big fan of automation and creating tools to help people, and I would often be surprised by why people were not using them the way I thought they should. And then I realized it's about the fact that the process they follow today does not allow them to include this; if they bring in a new tool, there is too much friction. And then I realized it's also about the people, about management, all of that stuff. So it's a multidimensional problem. I just want to set that context, because that's how I define DevEx. DevEx, or as I like to call it, dev enablement, is about making sure that your developers have the best possible path through which they can deliver features to production. And so it's not about productivity; I think productivity is inherent in the fact that if you enable someone, you are providing them with the shortest paved road to get to their destination. They will become productive; it sort of follows automatically, if you will. So that's how I define DevEx. And the context is important, because that was my journey of learning as well.
Kovid Batra: So I think, before the discussion started, we were talking about how you got into this role and how DevEx came into play. Let the audience also hear it from you. We know DevEx is a very new term; it has been introduced quite recently. But back in the day, when you started working on these things, what defined DevEx at that time, and how did you get involved in it?
Vilas Veeraraghavan: Back in the day, when I started working in a software organization, the thing that drew me to what we would call 'platform' back then was the fact that there were a lot of opportunities to see quick wins from doing improvements for other teams. For example, if I improved something at the platform layer, it would not benefit one team; it would benefit all teams, multiple teams. So the impact is actually pretty widespread, and it's immediate. You can see the joy of making someone happy; someone will come to you and say, "Oh, I was spending so much time and now I don't have to do this." So that drew me in. It wasn't called DevEx; it wasn't even called dev productivity at that time. I'm talking about the 2007–2008 timeframe. But what happened over time was that I realized how much of a superpower automation and tool-building can be for a company that invests in it, because it has a multifold impact on how quickly people can get features out. How quickly you innovate, how efficient your engineering team is, how excellent the practices are within the engineering organization: they can all be lifted by providing your engineers something they can use every day, so they don't have to reinvent new ways or relitigate the same problem again and again.
So that drew me in. Over time I've seen it evolve from just platform. There used to be common libraries that people would write, which other teams would ingest and then release, and we did not have continuous delivery. Funnily enough, we used to ship CDs, compact discs, for those who are new to this process. We would actually ship physical media: we would burn all the software onto it and ship it to the data center, and an admin would install it. So there was no concept of that level of continuous delivery, but we did have CI, and we did have a sense of automation within the actual software delivery pipeline. That is still valid.
Kovid Batra: There is one interesting question. This is something that I have also felt, coming from an engineering background: people usually don't have an interest in moving into platform teams, DevOps kinds of things, right? You say that you are passionate about it, so I just want to hear it from you: what drives that passion? You just mentioned the impact that you're creating with all the teams working there. Is that the key thing, or is it something else driving that passion?
Vilas Veeraraghavan: I mean, I feel like that is the key thing, because I derive a lot of joy out of it. When you make a change, sometimes the impact of that change is not visible till it's actually live and people use it. For example, let's say you're moving from a GitLab pipeline to using Argo CD for something; you're doing a massive migration. It can be very daunting when you step back and look at it as a big picture. But then when all of the change is done and you see how it has impacted things, how fast you're running, something like that, that is obviously a big motivator. But here's the other thing, and this is a secret that I hope others realize was right there all along: by being in a space like DevEx, you actually solve problems across multiple different domain areas. For example, at Walmart I had a chance to deeply understand supply chain issues; supply chain teams had issues that were different from, say, teams doing payment management. The problems are different, but to look at a problem, you have to understand that technology deeply. So you end up having really broad knowledge across multiple domain areas. And when you solve a problem for one domain area, you will be surprised to find: oh, this actually solves it for five other areas as well. It's a fascinating thing that I think people don't realize immediately. So it feels less glamorous than something else, like a feature team maybe, but in fact it's, in my opinion, more powerful.
Kovid Batra: Got it. Is this the effect of working with large organizations particularly? Like, uh..
Vilas Veeraraghavan: It’s possible.
Kovid Batra: I’m not making any assumptions here but I’m just asking a question.
Vilas Veeraraghavan: Yeah. It’s possible.
Kovid Batra: Okay.
Vilas Veeraraghavan: Yeah, it is. Yes. I will say that there is definitely a privilege I should call out here: the privilege for me was to work at companies that allowed me the ability to learn this. There was a lot of bandwidth offered to me to learn all of this. Netflix was, and is, always good about a lot of transparency across the organization. As an engineer working for a company like Netflix, you absorb a lot of information, and if you're curious, you can do a lot. And obviously Walmart, Fortune One, the biggest company I've ever worked for; I think it is the biggest company in terms of size as well. Again, you have the ability to learn, and you work your way out of ambiguity by defining structure yourself. I think I've been lucky in that way as well, to learn from all of the folks who worked there; obviously amazing, talented people work in these places. You keep hearing about things, you keep learning, and it makes you better as an engineer as well.
Kovid Batra: Makes sense. So let's deep dive into some of these situations where you applied your great brains to designing platform teams and defining things for these platforms. Can you bring up some examples from your journey at Netflix or Walmart or Bill.com where you had a great challenge in front of you? What were the decision-making frameworks you deployed at that point in time, and how did things pan out during the journey? This might be a long question, but I just wanted to dive into any one of those journeys, if you're okay with that.
Vilas Veeraraghavan: Okay. I think you've had Bryan Finster on the podcast in the past. This was something we traversed together along with many other people; we were all part of the same team when we did this. So I'll start with Walmart as an example. I'll keep it generic and not give you specifics, but the challenge at a company like Walmart is that, as a big company, there are a lot of established practices, established processes, established tools that teams use and businesses rely on. Each of these areas within the company is a business by itself. They obviously want the best possible output for their customers, and they rely on a bunch of processes, tools, people, all of that. If you now go in and say, "Hey, I'm going to introduce something brand new," or if you're going to change something drastically, you are creating unnecessary churn and unnecessary friction within the system. So in order to think about how we wanted to do dev enablement within Walmart, it was important to understand that you had to address that friction. If you are providing a solution that replaces an existing solution and does just barely enough, that's not going to cut it. It has to be a sea change; it has to be something that significantly changes how the company does software delivery. One thing I'll say is that I was very lucky to work for someone, for leaders at Walmart, who also understood this at that time. For all those who are in this process right now: you cannot do it unless you have buy-in from your leadership and sponsorship from your executive teams. That helped us a lot.
Now, once you have buy-in, you still have to produce something that is of value, right? And that is where this next thing is important. Initially, naively, my expectation was: we build some amazing tools, we provide them to these teams, and of course they'll be super happy, the word of mouth will spread, and that's it. All done. What I found was that in order to solve a problem where engineers were spending a lot of time on toil, doing a lot of manual processes or repeated work, throwing a tool at them was actually exacerbating the cognitive load problem, right?
Kovid Batra: Yeah.
Vilas Veeraraghavan: So now, while they maintain existing solutions, they have to learn something new, migrate to it, then convince their leaders and their teams to say, "Yeah, this is how we have to do things," and then move forward. So you're making that bandwidth problem worse: I'm a developer, I have a certain amount of time to spend on feature delivery, I don't have time for everything. So now I'm squeezing this into my 20 percent time, my own free time outside of work, to learn what this new thing is about. What that meant was that adoption would not succeed. And if adoption doesn't succeed, if your customers are not using you, you're a failed product, right? So what we realized was that there are two other aspects we had not thought about. One was process and the other one was people. When I say people, I mean it could be management, it could be a key opinion leader within the space. That's what we attacked, and you can obviously ask Bryan more about it; he knows all about it. The way we attacked it was that we created programs which were more grassroots, a more bottoms-up view of saying, "Hey, we are starting to use these new tools. Come join us as we learn together. Let's discuss what problems we have. Let's talk about successes. Let's talk about how we want to do this well." And we were open to feedback. Inside my organization, the dev enablement area, there was also a product organization, so we had product owners with each of the teams building these tools, and the product owners had a pulse on the customers' needs.
So that is how we found success over time. We obviously did not succeed at the start, and there were obviously a lot of challenges we had to work through, but adoption only kicked up when we were able to, one, provide a solution that is X times better than where we were. So if you were maintaining five different configs, now you just have to maintain one YAML file that's checked into GitHub, or something like that. That's a big difference productivity-wise: fewer errors. The second thing is the process itself: how many times do I have to look at the build, and then the security review after the build, and all that? So you say, okay, let's do security scanning before the build, so that even before you build a binary, you know whether it's safe to build based on your code scan. Things like that we did to improve the process itself. And then we educated our teams about it. All of our teams. We upskilled them; we gave them a chance to upskill themselves by giving them lots of references. We showed them what the industry standards are, and by doing that, you created a need inside them: "Hey, we need to be like that. Why can't we do this?" That essentially became a motivating factor, and most managers and directors and VPs started saying, "Hey, I want all of my teams to do exactly that. We need to be that kind of a team." And that introduced a sort of gamification, because when you look at dashboards that look slick, you're like, "Hey, why can't I do this? Why can't my team do this?" It created a very natural tension, a very natural competition within the company, which served adoption well. Once adoption grew beyond a certain threshold, it became very natural; we didn't have to go asking for customers, customers came looking for us. And so that's how we got to the point where there was more uniformity in how software is delivered.
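As a rough sketch of the 'scan before build' ordering Vilas describes, here is a minimal pipeline runner in Python. The stage names and commands (a Semgrep source scan, make targets) are illustrative assumptions, not Walmart's actual tooling:

```python
import subprocess
import sys

# Ordered stages: the security scan runs on the source first, so an unsafe
# change fails fast, before any time is spent producing a binary.
STAGES = [
    ("security-scan", ["semgrep", "scan", "--error", "."]),
    ("build", ["make", "build"]),
    ("test", ["make", "test"]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"Stage '{name}' failed; aborting before later stages run.")

if __name__ == "__main__":
    run_pipeline()
```

The design choice is simply ordering: the cheapest gate that can reject a change runs first, which is the same logic behind consolidating five configs into one reviewed YAML file.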
Kovid Batra: Perfect. So I think it's about defining the right problem for the teams you're going to work with, defining a priority on those problems, and how you can slide very smoothly into their existing system so that adoption is not a barrier in the first place. So, the basic principles of how you bring a product into the market. Similarly, you just have to..
Vilas Veeraraghavan: It is the exact same.
Kovid Batra: Yeah.
Vilas Veeraraghavan: Platform, dev enablement, tooling, all of this: these are all products. Your developers are your customers. If your customers are not happy and they don't use you, yeah, you are a failed organization then. That's how it is. If you feel like, just because you are part of a DevEx team, what you say has to be the law of the land, it doesn't work that way. The customers vote with the time that they give you. And if, let's say, in an organization you see that some tools have been released by the developer productivity or DevEx or enablement or platform engineering organization, but most people are using workarounds to do something, then I hope the teams understand that there needs to be some serious change in the DevEx organization.
Kovid Batra: Cool. I'll just go back to the first point, where you start. Is there any specific way to identify which teams are dealing with the most impactful problems right now, and then you go about tackling that? Or is it more that you talk to a lot of engineering leaders around you and then think, "Okay, this is something that we can easily solve and it seems impactful. Let's pick this up."? How does that work?
Vilas Veeraraghavan: That's actually a very important thing to think about, and thanks for reminding me, because I did neglect to say that last time. You do need some champions, and that's why I said key opinion leaders. In the company, you need champions who can help do that early adoption and then find success. That comes not just from impact. Let's say that someone is doing a hundred million dollars of business every year, and a change they make saves a significant amount of money; that can be big impact. But it's also about what their ambition is. If I am a hundred million dollar business, but my ambition is to be a hundred million dollar business next year as well, I may not be the person who's pushing at the boundaries, right?
Kovid Batra: Got it.
Vilas Veeraraghavan: They may be saying, "Oh yeah, it's fine. Everything is working just fine. I don't want to break anything. I don't want to touch anything. I don't want to innovate. Let's keep going." But on the other hand, and this is common in many big companies, there will always be pockets of rapid innovation. And with the folks in those spaces and their decision makers, having a really deep, open discussion, almost like a partnership, saying, "Hey, I'm building this tool. Let's imagine you have to use this tool. What would you want me to change so that it fits you?" Obviously, you're going to take all of their input and decide which parts will be more useful to others as well; you're not going to build something for just one team. But at the same time you get to know what it is that is not getting them to adopt this right now. So you do need a set of those key opinion leaders very early in the process, because they are not just going to influence their own team; they are going to influence other teams, and that's how the word of mouth is going to spread. So that's the first step. It's not just impact; it's impact with ambition, which is where..
Kovid Batra: There should be some inherent motivation there to actually work on it, only then..
Vilas Veeraraghavan: I will say one other thing, Kovid. If there's a team that doesn't necessarily have ambition, but it's more of a top-down "get this done", I have often found that leaders saying "get this done" can sometimes backfire, because the team feels like it's an imposition on them. They may be very happy with their current state of tools, but it's an imposition: why do you have to change this? Everything works just fine, right? You always have that inertia; not everyone wants change, and sometimes change might not be needed either. You might actually already be efficient. But that top-down approach doesn't always work, which is why, for me, the greatest learning was seeing how much the bottoms-up approach worked at Walmart. It was actually very encouraging, because I realized that you have to convince an engineer to see this for themselves. That's why key opinion leaders are not necessarily VPs. They could be, but it could also be someone who's well-respected in an area, someone like a distinguished engineer whose word carries a lot of value within an organization. Those are the people who tend to be the key opinion leaders. So top-down also doesn't work. You can't just say: your VP is ambitious, but you are not. That doesn't work either.
Kovid Batra: Makes sense. Makes sense. All right. So once you have defined the team priority problem that you need to solve, you start hustling, start building. Of course, that phase involves a lot of to and fro, patience, transition, MVPs. Anything from that phase of implementation that came out to be a great learning for you that you would like to share?
Vilas Veeraraghavan: There was obviously a lot of learning. It is never a straight path when you’re doing something like this. But one thing that evolved during that time was that at the start, I was definitely operating in a bit of a “but this is the best way to do it” mindset. We were so convinced that there was no other way to do it. That slight arrogance sometimes leads you down a path where you’re not listening to what people are saying, right? If people are saying, “Hey, I’m facing this pain,” and you’re hearing that across different organizations, different areas, and you dismiss it as, “Oh, it’s just a small thing. Don’t worry about it,” that small thing can snowball into a very big problem that you cannot avoid, eventually. I used to go into meetings being very defensive about what we had already created, because the way I would look at it was, “Oh, well, that team can do it. Why can’t you?” That was very naive at the time. But then, in one of those meetings, for some reason, I basically said, “Okay, fine. Tell me exactly how you would have solved the problem.” Maybe I was annoyed, I don’t know, but I said, “Okay, how would you solve the problem if you were doing this?” And that person was so happy to hear that. That person actually sat down with me for the next two hours and designed exactly how things could have been better, and I was happy to go into detail. It made me realize these are actually all allies that I should be adding to my list, as opposed to saying, “No, no, you have to use this. Go away.” That was a big mistake I made. I probably did that for about six months. It was a bad idea; don’t do it. But after that, the team was able to flourish, because everyone saw us as partners in this thing, right?
So then we would go and we would say, “Okay, fine. You have this tool that we built, but don’t think about that. Think about the ideal tool that you need, and let’s find out how much of that this satisfies. Whatever it doesn’t, we will accept as feedback. Then we’ll go back, think about it, and share with you what our priorities are. You tell us if this makes sense to you or not, and we’ll keep this communication going.” That was a big evolution.
Kovid Batra: I totally relate to that. I have been back and forth on this thought of bringing in opinions and then taking a decision, rather than just taking a decision and then pushing it. I think it’s a matter of the kind of people you’re working with. You have to make a wise choice about whom you want to listen to and whom you don’t. Both things can backfire. I’ve actually experienced both.
Vilas Veeraraghavan: Oh yeah. It goes without saying that there are going to be some people who are giving you the right advice, right? And some people are just complaining because they are complaining. That’s it.
Kovid Batra: Yeah.
Vilas Veeraraghavan: Right? You have to separate that. But I’m saying there are two ways to do this, right? When that initial adoption starts hitting, you can’t go into your shell and be like, “Okay, that’s it. My job is done. People will keep using it.” That is what we fell into for a brief amount of time, when we said, “No, it’s all working just fine. What are you complaining about?” And then I realized, and maybe other folks on my team realized it earlier, that as a strategy we needed to change that. And that put a very different face on our team, because our team then started getting welcomed into meetings we were originally never a part of. It allowed us to see into their decision process, because they were like, “Oh no, it’s important for you to know this because there is a lot of dependency on tools. We can’t change this process, but maybe we can adjust the tools and the settings to help us with this.” Right? So it was a very different perspective. And I was able to carry that learning into other initiatives, projects, companies, all of that. It has definitely served me well. Even now, if I’m listening to someone, I’ll usually say, “What would you do if you were in this space?” And then let’s talk about it, very openly. But it is important to leave your ego outside.
Kovid Batra: Yeah, totally. I think that’s a very good point you just mentioned, taking that constant feedback in some form or the other. But when you’re dealing with large teams and large systems, I get the sense that you need to have a system in place along with 1-on-1s and discussions with people. I’m sure you were focusing on making delivery more efficient and faster, with better quality and fewer failures, right? At the beginning of any project, there must be some metrics that you define: “Okay, this is what the current scenario is, and during this phase, these are our KPIs which we need to look at every 15 or 30 days.” And then finally, when you put an accomplishment mark on the change you have brought in, there is a goal that you must be hitting, right? So during this whole journey, what were your benchmarks? What were your ways of evaluating that system data? Most of the time it’s for our own benefit, so we know whether things are working or not. And at the same time, when you’re working with so many teams and stakeholders, you have some factual things in front of you saying, “Okay, this is what has changed.”
Vilas Veeraraghavan: Sure. I’ll say this: the team used to do regular roadshows, which means we would go around to different teams. We would have weekly and monthly meetings where we would showcase what’s coming, what’s happened, and how this is a fit for them, and we would always try to demo it with something the team we were talking to was actually doing, saying, “Hey, look, this is a build that you wanted to run. It’s slow right now and you wanted it to speed up. This is how much we sped it up.” The reason I’m mentioning the roadshows is because that brings me to the metrics, right? Metrics, in the sense of day-to-day metrics, evolved over time, right up until I left. At the very start, our metric was adoption, obviously, when we started creating the tool and sending it out. The mission statement for us was that we wanted to get code into production in less than 60 minutes. And when I say ‘code to production’, it is not just any code. It’s code that is tested. Which means we had to build it fast, we had to run unit tests, we had to run integration tests, and we also intended to run performance testing, right? And then deploy it, or at least make it ready to deploy, without having to go trouble the team again for details. And then you obviously have some gate that will say, “Okay, ready to deploy. Check.” Someone checks it and then it goes to production, right? We wanted this process to take 60 minutes or less. So that was the mission statement.
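The 60-minute code-to-production target described above is essentially a time budget spread across pipeline stages. Here is a minimal sketch of that idea, assuming hypothetical stage names and timings rather than anything from Walmart’s actual tooling:

```python
# Hypothetical sketch: enforce a 60-minute code-to-production budget
# across pipeline stages. Stage names and durations are illustrative,
# not taken from any real system.
from dataclasses import dataclass

BUDGET_MINUTES = 60.0

@dataclass
class StageResult:
    name: str
    minutes: float
    passed: bool

def evaluate_pipeline(stages):
    """Return True if every stage passed and the total time stayed in budget."""
    total = 0.0
    for stage in stages:
        total += stage.minutes
        if not stage.passed:
            print(f"FAIL: {stage.name} failed after {stage.minutes:.1f} min")
            return False
        if total > BUDGET_MINUTES:
            print(f"OVER BUDGET: {total:.1f} min used by stage {stage.name}")
            return False
    print(f"Ready to deploy: {total:.1f} of {BUDGET_MINUTES:.0f} min used")
    return True

# Example run with made-up timings
evaluate_pipeline([
    StageResult("build", 12.0, True),
    StageResult("unit-tests", 8.5, True),
    StageResult("integration-tests", 20.0, True),
    StageResult("performance-tests", 15.0, True),
])
```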
Kovid Batra: Got it, got it, yeah.
Vilas Veeraraghavan: But the metrics evolved over time. Initially, it was adoption: how many people are using this tool? It was also about some common things. For example, a lot of folks within Walmart were using different code repositories, all of them maintained by different parts of the organization. Because we unified those, we started checking: okay, is everything in one place? How much code is maybe not in a secure space? That became an open thing to share. And we got a lot of partnership from our sister teams in InfoSec and all of these compliance areas. They started helping us a lot, because they established policies that became metrics for us to measure. Like I said, how secure is the codebase? That is a great policy: “We need to have secure codebases that do not have high-level and medium-level vulnerabilities.” That meant we could measure it by doing code scans and saying, “Okay, we still have this many to go. We can point out exactly which teams need to do what.” And then we would slide in our tool, saying, “Hey, by the way, this tool can do it for you if you just did this.” Immediately, that affected adoption, right? So that is how we started off with metrics.
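The policy-as-metric idea above, counting how many high- and medium-severity findings remain per team, reduces to a small aggregation over scan results. A minimal sketch, assuming an invented findings format rather than any specific scanner’s output:

```python
# Minimal sketch: turn a security policy ("no high/medium vulnerabilities")
# into a per-team metric from scan findings. The input format is assumed,
# not taken from any specific scanner.
from collections import Counter

findings = [
    {"team": "checkout", "severity": "high"},
    {"team": "checkout", "severity": "medium"},
    {"team": "search", "severity": "low"},
    {"team": "search", "severity": "high"},
]

def remaining_violations(findings):
    """Count policy violations (high/medium severity) per team."""
    violations = Counter()
    for finding in findings:
        if finding["severity"] in ("high", "medium"):
            violations[finding["team"]] += 1
    return violations

for team, count in remaining_violations(findings).items():
    print(f"{team}: {count} vulnerabilities still to fix")
```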
But over time, as we consolidated the space, we realized that once adoption was at 75 to 80 percent, we didn’t need to track it anymore. Then it’s diminishing returns; the long tail is long, and it’ll take its time. At that time we switched to looking at more efficiency metrics, which means we wanted to see how much scale was costing us as a team. Are we scaling well to handle the load of builds coming to us? Are the builds slowing down week over week for other teams? Things like that. We wanted to get a sense of how much the developer is spending on things like long builds. If you’re like, “Oh, I start this build and I have to go away for an hour and come back,” that is a serious loss of productivity for that person. The context switch penalty is high, right? When you come back, you’re like, “I forgot what I was even doing.” So we wanted to minimize that. It became about efficiency metrics, and that led to the goals and the strategy we had to decide for the next year: okay, we need to fix this one next time. So it was as much about adoption as about saying, “Okay, make sure we’re still continuing the roadshows and things like that, but we’ll shift our attention to this.” In the roadshows, we would call out those metrics, so you would start the discussion by saying, “Here is where we are right now.” There were publicly accessible dashboards, which is another thing we truly believed in as a DevEx team, or a dev enablement team: every action we take should be public to the organization, because that’s our customer, right? We need to tell them exactly what we’re doing. The investment comes from these people; the other VPs or the execs are sponsoring this, so they need to see where their money is going. Transparency was key, and that’s why metrics were helpful. We showed them everything, from adoption to tuning to efficiency. That’s roughly how it went.
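The week-over-week build slowdown check mentioned above boils down to comparing a central tendency of build durations across weekly windows and flagging regressions. A minimal sketch, with invented build records and an assumed 10% regression threshold:

```python
# Sketch: flag teams whose median build duration regressed week over week.
# Build records and the 10% regression threshold are illustrative assumptions.
from statistics import median

builds = {
    # team -> {iso_week: [build durations in minutes]}
    "checkout": {"2024-W30": [14, 15, 13], "2024-W31": [18, 19, 17]},
    "search":   {"2024-W30": [22, 21],     "2024-W31": [20, 22]},
}

THRESHOLD = 1.10  # flag if the median duration grew by more than 10%

for team, weeks in builds.items():
    # Compare the two most recent weeks (ISO week keys sort chronologically).
    (prev_week, prev), (curr_week, curr) = sorted(weeks.items())[-2:]
    prev_med, curr_med = median(prev), median(curr)
    if curr_med > prev_med * THRESHOLD:
        print(f"{team}: builds slowed {prev_med:.1f} -> {curr_med:.1f} min "
              f"({prev_week} -> {curr_week})")
```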
Kovid Batra: Cool. It was really interesting to learn about this whole journey and the phases you have been through. Just in the interest of time, I think we’ll have to take a pause here, but this was an amazing discussion. Would you like to share some parting advice for people who are maybe stepping into this role, or have been in this role for some time?
Vilas Veeraraghavan: First of all, thanks, Kovid. This was great. I really enjoyed this conversation, and I also appreciate the curiosity you had to have this discussion in the first place. So, thanks for that. The message is simple. I don’t know how this happened, but DevEx never used to be cool in the past, in the sense that DevEx felt like one of those things where people would say, “Hey, you’re doing DevEx. You’re not necessarily releasing features.” But in reality, there were tons of features that we had to create before the feature teams could deliver theirs. DevEx teams need to be three to six months ahead of where the feature teams are, so that when it comes to delivery, feature teams are not waiting on tools. We have to have them ready. So I believed it was cool back then, but I’m very happy to hear that DevEx is actually turning cooler, because there is a lot of industry backing for it now. There’s a lot of push, a lot of people talking about it, like yourself, and like we are doing right now. My only advice, for those who are interested in it, is to at least speak to the right people so you know what the opportunities look like before you say no. That’s all I ask.
Kovid Batra: Perfect. All right, that’s our time. Bye for now. But we would love to have you on another episode talking more about DevOps, DevX, dev productivity. Thanks, Vilas. Thank you for your time.
Vilas Veeraraghavan: Yeah. Thanks, Kovid. I’m happy to return anytime.
‘Mastering the IC to Engineering Manager Transition’ with Carlos Neves, Head of Engineering at Vitality
October 4, 2024
•
32 min read
In this episode of the groCTO Originals podcast, host Kovid Batra is joined by Carlos Neves, the Head of Engineering at Vitality, as they explore the often challenging transition from an individual contributor (IC) to an Engineering Manager (EM).
With over 15 years of experience in engineering and leadership, Carlos shares his journey from Portugal to the UK, his initial interest in computer science influenced by a cousin, and his passion for salsa dancing. The discussion delves into the importance of gaining horizontal exposure within an organization, understanding the nuances of management beyond technical skills, and building confidence to overcome imposter syndrome. Carlos emphasizes the significance of proactive communication, trusting the team through delegation, and seeking mentorship. He shares insights into making a conscious decision to transition into management, highlighting the need for self-assessment regarding technical passions and people management skills.
The episode concludes with advice for those considering this career path and the introduction of groCTO Connect, a mentoring initiative aimed at helping technical leaders advance.
Timestamps
00:52 - Meet Carlos
01:49 - Carlos' Journey: From Portugal to the UK
03:39 - Balancing Interests: Sports vs. Computer Science
05:45 - Current Role: Head of Engineering at Vitality
07:58 - Transitioning from IC to EM: Carlos' Experience
13:00 - Key Traits for a Successful Management Transition
17:15 - Financial Considerations: Technical vs. Management Roles
20:13 - Steps to Transition: From Senior Engineer to Manager
26:23 - Overcoming Challenges in a New Management Role
Kovid Batra: Hi everyone. This is Kovid, back with another episode of the groCTO podcast. And today with us, we have our special guest, Carlos. He is Head of Engineering at Vitality, with more than 15 years of engineering and leadership experience. Welcome to the show, Carlos. Happy to have you here.
Carlos Neves: Thank you. It’s a pleasure to be here and share my experience with you today.
Kovid Batra: Of course, we are looking forward to a lot of learning. And before we get started on today’s topic, which is the ‘Not-so-easy transition from an IC to an EM role’, we would love to know a little bit more about you. I had a very brief intro here, but I would love to know more about you: your hobbies, your childhood, your teenage years, and how you transitioned into who you are today. So, over to you. Tell us about yourself, something that social media probably doesn’t know.
Carlos Neves: Well, there’s a lot of that. So first of all, I’m actually Portuguese. I moved to the UK about eight years ago. It was an interesting transition, a new culture, a new way of living, but I’m very happy with that move, so far at least. In terms of how I got to where I am today, I guess it was mainly influenced by one of my cousins. I saw him as a bit of a mentor. When I was a teenager, he was very keen on computers and computer science and programming. And I was like, “Oh, that looks interesting. It’s something that I would actually enjoy doing.” I remember that I was a little bit on the fence between following a computer science degree or going into physical education at the time, so being a PE teacher. But yeah, in the end, computer science won, and I never looked back. It’s been a very rewarding journey so far, if I may say so. And something personal that my friends know about, but social media doesn’t, is that I’m a very avid salsa dancer.
Kovid Batra: Oh, nice!
Carlos Neves: Yeah, it’s sort of my hobby outside of work.
Kovid Batra: Perfect. So you have a partner with you?
Carlos Neves: Well, usually when you go to these social events, you tend to find multiple partners there. Sometimes I do go with friends, not necessarily a set partner, so you get to swap partners during the event, and it’s a lot of fun. It’s a good way to interact and socialize with people. I do recommend it for anyone that hasn’t tried it before.
Kovid Batra: Perfect. Perfect. I think that was really interesting. But you mentioned it was between physical education and computer science, right? So from childhood into your teenage years, was there any sport that you were really interested in and playing, or was it just out of curiosity, or did you like physical education in general?
Carlos Neves: No, I was very active as a kid. When I was six or seven, my parents put me into swimming, and I swam until I was 15, doing some competition. Then I transitioned to athletics, which I did from the age of 12 until I was 18, again competitively, and I really did enjoy the competition side of it, and the training with colleagues, which was also a lot of fun. Because I enjoyed that part so much, and it made me feel really good about myself, I did think that maybe this was something that I actually wanted to do full time. But then, looking at all the options and all the alternatives, computer science just won in the end. I’m still very physically active. I do try to hit the gym multiple times a week. I’m not saying that I’m a hundred percent successful at that, but I try my best. I still like to keep myself fit and healthy as much as possible.
Kovid Batra: No, I think that’s really great. When you are involved in sports as a kid, and I’ve seen a lot of my peers who have been there, playing state-level and national-level competitions, they ultimately turned out to be very good leaders in their professional careers as well. I am sure there is some linkage there: you are more motivated, there’s more of a fighter spirit, basically. So maybe that really impacts the overall professional journey too. Cool, I think that’s really interesting. So from there, moving into the present as Head of Engineering at Vitality: tell us something about the company. What’s your role here? What do you do as a Head of Engineering? What kind of responsibilities do you have? And, of course, we would love to know about the transition, when you were an IC and moved into a management role. How did that transition happen?
Carlos Neves: Sure. So currently, as you said, I’m Head of Engineering at Vitality. For those that don’t know, Vitality is an insurance company that operates within the health and life space. I’m responsible for the systems that support our members in both their health and life claims journeys. There’s a big focus right now for us in terms of increasing our digital capability, allowing members to service themselves mostly digitally. Of course, there’s going to be the need to sometimes reach out by email or call, but we’re trying to minimize that as much as possible. There’s also been a lot of focus on allowing the member, after they get treatment or a consultation, to continue that care online as much as possible. There’s a lot of modernization of our systems, which comes as part of the engineering role, and a lot of engagement with other departments, like the product department and eventually sales. I think one of the things that I enjoy the most as part of my role is that I tend to talk to a lot of different people that do a lot of different things. There’s a lot of forward-looking in terms of what we want to do in the future: what’s the plan for the next two, three years, where do we want to take our products? And this is something that we’ll get into in more detail later, but it’s one of the big differences that I see between the role you have as an IC versus an EM, or a Head of specifically: as an IC, the vision that you have is shorter term, versus a medium- to long-term vision for someone that operates at this level.
Specifically about my transition, let me think, this was a while back. As an individual contributor, I started with Microsoft technologies, doing C#, messing with SQL databases, mainly full stack at the time, which was actually a very good learning opportunity, because you get to learn how an application works full stack: a little bit of the back end, a little bit of the front end, a little bit of your data store. That allows you to understand the effort that goes into each of the different components to have an application up and running. This was still in the times when monoliths were the trend, not as it is today where everything is, well, microservices. Not everything, but that seems to be the trend right now, even if I’ve seen that some corporations are going back to monoliths, which would be a completely different podcast; we would spend enough time just discussing that. But in terms of transitioning to an EM, or a team leader to be more specific, it happened when my manager at the time had to leave the business for personal reasons and I was invited to replace him. It was a surprise, a good surprise, because it’s something that I really wanted to do, but still a surprise. It was interesting because when I transitioned, I was told that I could choose some of the team members that I would want to work with, which in my opinion helped quite a lot, because having people with you that you can trust, people that you have actually worked with before, does help in that transition. But I did feel at the time that I had a little bit of impostor syndrome. I said, “Well, why am I doing this? Why isn’t someone else doing this? Why was I invited when there are people that have been here maybe longer than I have, and are as good or even better than I am?” But then, after going through that process, I said, “Well, if they chose me, there must be a reason why. So let’s trust the process.” And I tried to use that to build my confidence, because it is a shift, it is a change, and you need to start thinking differently. For example, when I was working as a software engineer, I was very much focused on my tasks: what do I need to do today? I did have to interact with colleagues and understand what they were doing, but it was very much, not siloed, but focused on what I had to do. When I went through this transition, it became: okay, what does my team need to do? What do they need to perform their tasks? How can I help them? How can I support them to achieve their goals, their objectives, our common goals, our common objectives? That was one of the shifts and one of the changes that I had to face. The fact that you are no longer as close to the detail as before was something that I struggled with quite a lot in the beginning, and I remember a situation where I went to my manager at the time and said, “How do you know everything that’s going on around you?
Because I’m struggling to support my team, knowing what they need to do, while also knowing everything that the other teams are working on.” And he said, “Well, sometimes you just have to trust the people that you work with, trust the process, and wait for them to come to you with problems. Apply the premise that no news is good news as much as possible. Only focus on what you really need to focus on.” That example actually helped quite a lot. I’m using the word ‘trust’ a lot because that’s one of the core values that I believe I need to have when working with a team, or with multiple teams, as is my case today. But going back to what I was saying: by just focusing on the problems, you allow them to operate how they need to operate, and you say, “Okay, I’m here to help you. I’m here to support you. I’m here for what you need, and if what you need is actually just to go out for coffee, for example, let’s do that. Let’s talk.” Sometimes it’s not necessarily just about work.
Kovid Batra: Yeah. I think for you, um, it happened coincidentally that the manager left and you got the opportunity to move into this role.
Carlos Neves: Um, yeah.
Kovid Batra: I think, now that you have been on this journey for more than a few years, let’s say there is someone who is at the point where they can consciously make a choice of transitioning into a technical route versus a management route. What do you think are the core beliefs that person should have to do great on the management side of the technical vertical, I would say? And what does the change take? You have already highlighted a few points; the changes are really drastic, because initially you are, not siloed exactly, but working on specific things that are bound to you, and the impact is right in front of you: you do things and you see changes. So the changes are there, but at the core, when you’re making a conscious choice, you need to know who you are, right? What are those things one should identify in themselves to do well in this journey?
Carlos Neves: Um, the first thing that I would say is how much do you love being a technical-minded person?
Kovid Batra: Okay.
Carlos Neves: To me, that’s the fundamental thing. Talking about engineering specifically: if you love coding, if you love being part of the technical discussions, if it’s something that you know you’re going to miss, maybe being an engineering manager or a team leader is not for you, because the higher up you go, the less opportunity you’re going to have to do that. There are some exceptions, of course; there are some Head of Engineering roles or even CTO roles that are hands-on, but in my experience, that’s the exception. So if you do really enjoy that aspect of the job, being technical, being hands-on, maybe moving into that Engineering Manager role is not necessarily for you. Also, how much do you enjoy managing people? This is also something that is very, very important, because you are no longer focusing just on yourself as an individual. You’re supposed to nurture, guide, mentor, and find opportunities for the people that you’re responsible for to grow. So if you don’t like that aspect of the job, then again, maybe it’s not for you.
But if you do, and if you do enjoy talking to other people, if you enjoy learning more about the wider aspect of the business that you’re trying to support and work for, if you enjoy guiding people and giving them direction, showing them how their day-to-day work is positively influencing the goals of the company, then yes, by all means go for it. Be intentional about it. Try to find opportunities within your team to take on some of the tasks that your current team leader does. One of the things that I always tried to do was to identify within my teams whether there were people that actually wanted to take that step in the near future, and try to expose them to some of the activities that were my responsibility. So I would delegate to them, let’s say, talking to architects or talking to some of the people from the product teams. By doing that, you can actually assess: “Okay, do I enjoy doing this, or is it something that I had in my mind, but not something that I actually see myself doing every single day?” Because that’s the thing: doing it every single day is different from doing it every now and then.
Kovid Batra: Yes.
Carlos Neves: The good thing is you can also try it for a while, and if it doesn’t work out, you can always go back to the role that you had before. And I think that’s one of the things that people sometimes need to consider: a choice that you make today is not necessarily a choice for life.
Kovid Batra: Yeah. I think that’s very good advice, and I feel if someone wants to even try that, one can actually get a taste of it in a technical leader role, right? A team lead role, basically, where you are involved technically. I have seen most team leads and tech leads coding, and at the same time coaching their teams in every possible way. So for anyone who wants to see how things would look, they can get a taste of it as soon as they step into a team lead kind of role. But the thing is, most people are driven by two primary reasons to make those career moves. One is, of course, what you like to do, what aligns with your character, your identity, your personality. And the second is, of course, how it is going to progress financially, right? That also becomes a concern for people. So in your opinion, looking ahead, how can taking the technical route or the management route in a company impact someone financially? Maybe you can’t generalize it, but I am asking a general question. You can, of course, answer it the way you feel about it.
Carlos Neves: Well, I guess it all depends on where you want to get to. When you get to that Senior Software Engineer, Principal Software Engineer, or Principal Test Engineer role, you are considered a specialist that people can look to for guidance, right? Someone that’s going to help shape a technical decision. Someone that’s going to help define the best technical standards for software engineering and test engineering. From there, eventually the path can become one of being an architect: solutions architect, enterprise architect, chief enterprise architect. So I think there are ways to progress where you can keep being very close to what you enjoy and also see that financial benefit. But if you would rather be a people manager, where you go through the Engineering Manager, Head of, and CTO roles, then again, there are different paths, but you can still get the financial benefits that you were talking about. It’s just making sure that at the end of the day, you still enjoy what you’re doing. In my case, one of the things that actually made me make this shift wasn’t necessarily, well, of course, the financial gains are important, but it was the fact that I enjoy working with people and working as part of a team, and trying to expand my remit in terms of who I was interacting with day-to-day. I like to get a better understanding of how what I’m doing is impacting the wider business, and I think that’s where this want came from. It wasn’t just the financial benefits.
But just going back to what I was saying: try to understand which path makes more sense to you, but I wouldn’t necessarily say that one would be detrimental in terms of the financial benefit. There are plenty of situations where even software engineers are quite well paid, if the skills that they have are quite uncommon in the market. If you’re a specialist in an area where there’s not a lot of supply, then you also get the benefit of being well rewarded financially and still doing what you love.
Kovid Batra: Makes sense. So let’s talk about the point where, let’s say, I have taken the decision to move from an IC to a management role. What should I start doing today? Let’s say today I’m a Senior Software Engineer, or let’s say I’m a Tech Lead. What should I start doing to get to the next step? What kind of impact should I be reflecting on the team through the things I’m doing, so that the managers and leaders of the teams feel, okay, I am the right person to be pulled up into this particular profile? It happened for you coincidentally, but I’m sure in retrospect you can tell what they saw in you and how it turned out. So what do you think one should start doing today?
Carlos Neves: So I think the first thing is to look at the people that you report into and let them know that this is something that you want to do. That should be the first step. Second, if you feel that the person you report into is not giving you the opportunities to get exposed to some of the activities that would normally be theirs, then ask: “Is it okay if next time I do this presentation?”, “Is it okay if next time I get the data for this report?” For example, one of the things that an engineering manager has to do is look at their team metrics to understand how the team is progressing and whether things are going according to plan. Is that something that I can do on my own, even if my Engineering Manager or my Team Lead is actually doing it? I have access to the information, so I can go and have a look, understand how my team is performing, and if there’s something that is not right, figure out what I can do to change things. I guess all this summarizes into being intentional. Identify the areas where your Team Lead needs to operate and have a look at what you would need to do. But again, it all comes down to being supported by the person that you’re reporting to, your Line Manager. If that’s not really an option, then sometimes you need to look for that opportunity elsewhere, even though it’s more difficult, because people don’t tend to hire based on the belief that you can do a job; you need to prove that you can do the job itself. So it’s usually easier to find that opportunity within the organization that you’re already working in. But it’s about trying to find that opportunity, if not in your team, then within the business in a different team. Don’t be afraid of moving horizontally, because that can bring benefits. It’s also going to give you exposure to other parts of the business, give you more knowledge, and make you well-rounded across the business, and that’s something that is really valued when you go into more senior roles, I would say.
Kovid Batra: Makes sense. I think this is one very good way: going out and explicitly mentioning to your manager that you want to move into that role. That really helps in terms of highlighting it. For the manager also, it becomes easier to align people and make sure that they stick around, because their role is to keep people happy, right? And when they know what people want, it’s much easier for them to deliver that. But let’s say there are situations where the opportunity is not being given by the manager. What else can someone do on their own? What can they do in their day-to-day routine to actually reflect those traits, so that maybe the manager comes asking for it themselves, or, let’s say, when working with a cross-functional team, the other people appreciate that trait and start looking at you from that point of view: oh yeah, this person could be moved into a management role or a Tech Lead role going forward. So what are those kinds of things that a Senior Software Engineer or a Tech Lead should start doing from today, on their own?
Carlos Neves: So one of the things that you mentioned that is very, very important is being someone that is good technically, that a team can rely on for support and guidance. But it’s also trying to be a leader underneath your leader, if that makes sense. What do I mean by that? Someone that your team can go to and trust if they feel that they need some support. Someone that people from outside your team can go to if they have any questions. You need to be seen as someone that knows what they’re doing, that understands the benefit that the team brings, that understands other parts of the business, someone that is seen as an expert in their field. I think that would be the first thing. But it’s also putting yourself out there, as I said before, and telling your manager that you have this want and this objective, but also talking to other people about it. One thing that I did indirectly that I think also helped when people thought about me at the time was looking for guidance and mentors outside of my most immediate circle, because when you do that, people realize that you want to do more, that you’re ambitious, that you’re trying to get outside of what you do now and step into a more senior role. And not only that, people get to know you, and that’s one very important thing: if people don’t know you, they’re not going to think about you when an opportunity comes, because there’s going to be someone else they think of first. So put yourself out there.
Kovid Batra: Makes sense. Totally makes sense. So moving on from what one should be doing when they want to be there, the next step is foreseeing the challenges coming at them. You briefly talked about it already, but I want to deep dive into those experiences. If you could just give me some examples: as soon as you moved into that role, what was the first experience which made you realize, where am I, what should I be doing now? Something of that sort, so that people who are looking up to that role know what’s on their way.
Carlos Neves: Well, I guess it depends on the team that you’re going to be looking after. But two things might happen, in the way that they kind of happened to me. One is: trust yourself, otherwise that impostor syndrome I mentioned before might consume you, and then you’re going to be so focused on trying to prove to others that you can actually do it that you’re going to forget to focus on the job itself. I’ll explain a little bit more. There were two things that I faced. One was that impostor syndrome, which in the beginning affected my confidence; I got so concerned about what others were thinking that I forgot about doing the job itself. I was so concerned: what if they think that I’m not good enough? What if they think that I’m not the best person for the job? Don’t fall into that trap. As I said before, if you’re appointed to do something, trust that you’re the right person for the job. Focus on your skills, focus on the benefits that you believe you can bring to the team, because we’re all different. Different people will manage differently. There’s not necessarily a one-size-fits-all when it comes to management.
And then, I guess, the other thing is the fact that some people will, again, try to question you. So it’s the same thing, but coming from others. You get to experience people coming to you, not necessarily asking, “Why are you my manager now when two weeks ago we were peers?”
Kovid Batra: Yeah.
Carlos Neves: But there are some things you can pick up where you can sense that people are almost trying to test you. Don’t fall into the trap of trying to convince them that you’re the right person for the job. Focus on what you think the job is. Look upwards for guidance, not necessarily your Line Manager, but other people that you tend to work with, as long as they have more experience than you. It might be another Team Lead or another Engineering Manager that has done it for a lot longer than you, and you can look to them for guidance and say, “Well, I’m doing this. Do you think this is working, or do you have any advice for me to do something slightly differently?” Try to use that as a sounding board, but don’t fall into the trap of trying to convince others that you’re the right person for the job. Focus on you.
Kovid Batra: And just to add to it, I have a few friends who have moved into this role, and they’re mostly troubled by the fact that now they are not actually doing something related to engineering; they’re mostly managing people, right? You also mentioned in the beginning that it becomes more about that. And of course, it doesn’t come very naturally to a lot of people who have been in the tech space for, let’s say, a good 5 to 10 years and are then moving into this role. In that situation, what would be the right piece of advice for people to change that core belief system? Because you become like that, right? You tend to be more, I wouldn’t say introverted, introverted could be the wrong word here, but something of that sort, where the right communication and handling things proactively, so that they don’t end up getting messed up, really matters. I think the core thing lies in having the right communication style. So how should one learn to do that? It’s very evident that one needs to. How should one be doing that in this role?
Carlos Neves: Just a few things on that. In terms of letting go, I think the best thing that you can do is actually just delegate. And by delegating, I don’t mean delegating your new tasks to your team. Delegate the tasks that you believe you should still be doing to your team, because in the first few months, what’s going to happen is your mindset is going to be, “Oh, I need to go and look at the code.”, “I need to go and check that pull request to make sure that it’s following the standards.” I’m not saying let it go completely, but if you know the people that you’re working with and you know that you can trust them, just delegate it to them. Try not to think about it. Tell them that if there’s anything wrong, if there’s a problem, come to me. Leave that to the side and focus on: what does my team need? How are they performing? What does my team require to perform this task? Are they blocked by something? Is there something that I can do differently that would benefit them? I think that’s when things start to settle down in that shift into an Engineering Manager role: when you start thinking about the team first.
Kovid Batra: Got it.
Carlos Neves: And in terms of communication, one of the things that I do even today is talk to everyone individually. Make time to talk to your team individually. Try to understand what their motivations are. Try to understand what drives them. Try to understand how things are going even outside of work, because we’re not just employees. We have a life outside of work.
Kovid Batra: Yeah.
Carlos Neves: That is more important, I would say, at least for me, than going into the office nine to five and that being all of your life. And that has a big influence on how you perform at work. So if there’s anything happening, try to be available if they want to talk to you. And find that space where people start to trust you, where they come to you with problems and they come to you with good things; that’s when the communication is actually flowing. The communication is good between us. They trust me. They feel like I’m here to help them, to guide them, and to do what’s best for them. It takes a lot of time to get to that point, but the main thing is: stop thinking about what you can do and how your own individual work is going to impact you, and think more about what your team needs, what the group of people you’re responsible for can drive and succeed at, because your success comes from their success.
Kovid Batra: Cool. I think the last line you said is probably the most impactful one for this role: their success is my success. That’s how one should be progressing, and that’s the mindset shift one needs when moving from an IC role to an EM kind of role. So cool, Carlos. There is a lot more to talk about on this topic, but I’m sorry, we can’t cover it all in one session. We’d love to have you for another session, maybe on how you progressed from an EM role to a Head of Engineering role. That could be a whole other discussion. Happy to have you again anytime, whenever you have time to discuss it.
And, talking about the mentoring piece, just to let our audience know, groCTO has come up with the groCTO Connect initiative, where we are helping EMs, ICs, and technical leaders connect with leadership people for mentorship to grow to the next level. We’d be happy if people want to send in requests; I’ll share the link to our groCTO Connect page in the comments. And with that, Carlos, thank you so much for your time. Loved having you here, really insightful talk. See you soon.
Carlos Neves: Thank you very much for the opportunity again. It was a pleasure. And reach out; I’ll always be available.
Kovid Batra: Thank you. Thank you so much, Carlos.
'Impactful Engineering: The Secret to Customer Delight' with Jagannath Kintali, Ex-Head of Engineering at Dojo
September 20, 2024
•
30 min read
In this episode of the groCTO Originals Podcast, Kovid Batra talks with Jagannath Kintali, former Head of Engineering at Dojo and ex-startup co-founder, about building impactful engineering teams focused on customer delight.
Jagannath shares his extensive experience of over 18 years in engineering, discussing the importance of building what is needed rather than overshooting with extravagant systems. He emphasizes creating high-performance teams through trust, purpose, and customer empathy. Jagannath highlights his journey, the learnings from his startup, and how he implemented these insights at Dojo, including stories about curtain ordering systems and observability projects. This episode provides valuable insights on leadership, team building, and aligning engineering efforts towards solving real customer problems.
Timestamps
00:00 — Introduction
01:03 — Meet Jag: A Journey in Engineering
05:23 — Startup Lessons: Failures and Learnings
15:22 — Building High-Performance Teams
26:06 — The Importance of Customer Empathy
30:28 — Implementing Observability at Dojo
36:25 — Conclusion: Reflections and Future Insights
Kovid Batra: Hi everyone. This is Kovid, back with another episode of the groCTO Podcast. And today with us, we have a very special guest. He’s ex-Head of Engineering at Dojo and an ex-startup co-founder. Welcome to the show, Jag.
Jagannath Kintali: Thank you very much, Kovid. It’s, uh, it’s been a pleasure and thank you for having me on your show.
Kovid Batra: Great. So for the audience, uh, Jag is short for Jagannath and on this show, I think we’ll be calling you Jag. Is that okay with you?
Jagannath Kintali: Oh, that’s absolutely fine. Thank you. Yes, uh, Jagannath, it’s usually not the most common name in the Western world. So short form is Jag.
Kovid Batra: Yeah, that’s, that’s really cool. I think, uh, be a Roman when you are in Rome. So, that works. Yeah.
Cool. So, on that note, for the audience, today’s topic is how to build impactful engineering teams that really build for customer delight, and I think Jag has really good hands-on experience with nurturing such teams. But before we dive deep into that part, we would love to know more about you, Jag. You have been a startup co-founder, and it’s been a long journey of 18 years in the engineering world. Tell us something about yourself so that the audience gets to know you a little more: your personal life, your hobbies, what you have been doing, maybe about your startup.
Jagannath Kintali: Oh, absolutely. My name is Jagannath. I actually come from Orissa, where Lord Jagannath of Puri hails from. After doing my engineering in Orissa, I decided to come to the UK for a master’s degree, and that’s where I started my software engineering career, so to say. I started as a software engineer, but once you come from this background of engineering, there’s a whole world to explore. The Western world, and especially the UK, was completely new to me, and the opportunities you see over here were so many. I always wanted to go into building something of my own, to start something which would serve the community and a certain customer segment in general. So after several years in software engineering roles, especially in solution architecture, which is my expertise, I decided to take the plunge, like everybody else wants to. But yes, I have got to warn everybody in the audience that my startup does belong to the 9 out of 10 that fail. I’ve done that and I have no regrets in giving it a try, and it was the most beautiful experience I’ve had, that startup time. We tried to do it for just over two years. It was all about hyper-local services: providing services to customers within a certain community. But yes, ever since then, I’m still very much passionate about engineering, and what I’m very passionate about is building, or engineering, beautiful products for customers who have a need for them, a particular challenge that they solve. Solving customer problems is my main aim in life, and I’ve grown up with the ethos, the principle, that service is godliness. That’s where it all comes from. And I’m learning to be a pilot, which has been a dream of mine for a very long time. So let’s see how that goes; hopefully I will get that license in this lifetime.
Kovid Batra: All the best to you for that.
Jagannath Kintali: Thank you.
Kovid Batra: Like you said, you had this beautiful experience of being a co-founder and having that startup experience. What was your major learning from it? When you came out of it, I’m sure, it’s never a failure, obviously. I mean, I have been…
Jagannath Kintali: Absolutely.
Kovid Batra: So what you learn out of it is something very different from what you do in a job. You get such a holistic experience of solving problems and building solutions when you are doing things as a co-founder, or probably in the leadership of a startup, for that matter. So what was your learning from that journey? If you could highlight that for us.
Jagannath Kintali: As you said, it’s never a failure, and actually, based on the learnings from the startup, I’ve had many successful jobs. I’ve tried many times to think about and summarize what I could have done completely differently, and that’s what I keep using in my subsequent roles. I boiled it down to basically three different learnings: first the product, then the people aspect of it, and then how you execute it. Those were the three main learnings. First, it was the product, the service that we were trying to provide. It was a very simple concept, a matchmaking process where a service provider can provide a service to a person who is in need of that service, and very hyper-local at that point. So within 15 or 20 minutes, you get your service sorted, whether it is looking for a cleaner, looking for a locksmith, or wanting somebody to get some groceries from the store to you. Nowadays, it sounds so common; it wasn’t that common in 2012, 2013 when we started this. The first learning was: the opportunity was so big, we got a little lost, in my opinion, as to which area we should concentrate on. There were just so many avenues we wanted to go down. We should probably have honed in on a small set of services, tried to build the platform around those, and repeatedly perfected and made more efficient the end-to-end process: somebody requesting a service, somebody getting a service, and the feedback loop going back and forth, repeatedly, through our systems and through customer feedback, for one or two particular services. Instead, we tried to widen it straight away with 10 to 12 different services. And what happens is every service type has different kinds of needs. The need when you’re looking for a cleaner or a maid is completely different from looking for a locksmith, or looking for a nanny, and trying to funnel all of those requirements efficiently into one single channel was the most difficult thing. What we should have done is pick one particular vertical, get some traction on it, and then you have your learning and can use that learning in other services, slowly adding them.
And it is very behavioral, because this is not an Uber-type concept where the service is provided outside your house. The service we were trying to provide was within the house, so a big trust factor needed to come in, and that differs in every country you go to. We were trying to do it in the Middle East, which is a services Mecca, and we wanted to get traction there, but I was in London at the time. I visited quite often, but I did not spend enough time there. Be there, be immersed in the local community, and figure it out for yourself. Go back to first principles: totally immerse yourself in finding out where the needs are, what the actual requirements are, where the actual inefficiencies are, and how to join the dots. Sitting completely apart and trying to imagine the inefficiencies instead of looking at the reality was one of the challenges we faced, and the biggest learning I've had from doing that. That was the product side. The second learning, on the people side, is very related to the topic we'll be talking about: building a very strong team.
When you are a startup, it is very difficult to get the right set of people. You're looking for funding and finances, so you're not going to get the star players you want on your team from day one, and it's also very difficult to build a team that is totally dedicated to the purpose. What we did was go out and find a team, a third-party software provider, a small software consultancy, and bring them in. That could have worked, but what we failed to do, in my opinion, is give them that purpose. They always worked as consultants. They were not integrated; they were not bought into the product they were building or the company. The company had a mission, the company was trying to solve a particular customer challenge, but we did not expose that team to it, so they were literally just taking instructions and building a software system. They had no direct interaction with the customers and no understanding of the customer problem we were trying to solve. That was the biggest gap, and this is where impactful engineering comes into play. I'm a true believer in building teams that are totally exposed to the customer challenges. That doesn't mean you have to go and talk to the customer every single day, but you've got to understand, every single day, the customer problem you're trying to solve. Find out why you are doing what you're doing, and how everything you're building impacts the challenge you're trying to solve. If you don't have that purpose, if you don't have the belief that you are actually doing something that solves a customer problem, you have lost the interest and engagement of the team, and that's where it goes downhill.
We'll talk about many different things, and I'm sure we'll go in-depth into them, but those were the biggest learnings, along with the execution of it. Being in three different geographical locations, we were trying to coordinate remotely. If you want to do a startup, be there: be in the location, be amongst your customers, understand the problem, even be the person delivering the service, and try to understand the entire life cycle of the product. It's not about building a software system you think will be very useful; if there are no customers using it, and customers are not willing to pay for it, it's not going to work out for you. That's why 9 out of 10 startups do fail. Those were my biggest learnings from doing a startup, but I wouldn't trade the experience, ever. It was probably the hardest two and a half years of my life, and we lost a lot of money too, but I wouldn't change the experience for any amount of money, for sure.
Kovid Batra: Perfect. I think the best part about such journeys is that in those hardships, you actually see a significant change in your mindset, in how you think about things. It's more like reality coming to you; it's reality slapping you, saying, okay, this is how things should be working, right?
Jagannath Kintali: Yeah, absolutely.
Kovid Batra: So I think that's when you evolve the most. According to my understanding, the more you understand how reality works, the more empathetic and compassionate you become towards people, nature, and how things work around you, and the better decision-making you bring into your life. The startup did that to me, at least, and I feel the same when you talk about building great teams that focus on customer empathy and customer delight, so that they can bring out solutions that really solve the problem. You don't just become a feature factory, taking instructions and delivering features; you actually deliver value. That's how the mindset changes. And on that note, which is, of course, the topic for today: now that you are four or five years ahead in that journey and have been leading an engineering team at Dojo, I'm sure you have incorporated some framework or practices that inculcate this customer empathy, or teams that are fundamentally aligned towards solving problems rather than just building features. Can you tell us about some of your experiences, how things worked out for you after that, and how you implemented this learning in your teams?
Jagannath Kintali: I'll start with a story that came to mind while having this conversation. Before Dojo, I was working for a software consultancy, on an engagement with one of the biggest retailers in the UK, and I'll tell you about the first time I was ever truly delighted with the work I was doing. This retailer is a superstore: they sell everything from food to clothing, you name it, and they also sell curtains. This was early in my career, and I was given the responsibility to design a curtain ordering system. I had no knowledge about curtains. I didn't even know there were so many types of curtains in this world, so many textures, types of cloth, how the look will be, how to hang them, and all of that. And again, I never interacted with any of the actual users. It was a consultancy, so you went into a dark room, you designed a system, and you delivered it to the retailer. I took my time trying to understand the business: how the curtain ordering system works, how an order goes from A to B, from the customer to the manufacturer, back to the retailer, and how they deliver it. Everything went fine; you design to the best of your ability, trying to understand what the customer, or the shop assistant using your system to serve the customer, might need. The system was delivered well on time. But I never felt proud of that project. I treated it as a job: I went to work, I did some coding, I built a system, it runs absolutely fine, you input A and output B comes out, and those were the right inputs and outputs everybody was looking for. Job done. But then, on my day off, I visited one of these stores with my partner, and I was trying to figure out how I could use the system I had built. I actually went and ordered some curtains, and the lady serving us in the store pulled out one of their hand-held devices, running the UI built by me and two other engineers. As soon as I saw that UI and the way she was using it, the satisfaction and joy I felt, seeing your hard work being utilized by somebody and being genuinely useful to them; from that day onwards, something completely changed in the way I think, the way I approach my work, and the way I try to find that delight every single time I do something in my professional career.
And the best learning, as we were saying in the introduction, from my 18 or 19 years of engineering and leadership, is one thing I learned from some of my senior engineers on a project, one sentence that sums it all up: 'Build what you need, not what you want.' There's this tendency to overshoot; it feels like we should be building this wonderful system, the most efficient, the most effective. But no, you just need to build what you and the customer actually need; that's the most delightful thing you can do for a customer. That's one of the best lessons I've ever received, and from that moment onwards, it changed my perspective on software engineering in general, and on engineering leadership.
So coming back to the original question, I know it's a roundabout way, about how I've done this in my work at Dojo: I tried to find the purpose, or even build this purpose, within a team. Building a high-performance team, in my opinion, has nothing to do with tech or with what you are trying to achieve. It's about building trust and finding that purpose. You will find star engineers, you will find all the right people in the right places, but if they don't have a purpose, a goal, a direction to move towards, none of it works. Bringing in that trust factor is the glue that binds the team together in moving towards that goal, towards delivering the customer delight, the customer impact, that I keep going back to. And my way of doing it involved no framework. I know this might be controversial, but it has nothing to do with frameworks or reading books on engineering leadership; it was pure and simple people relationships: building relationships and understanding each and every person on your team. The more you do it, the more it trickles down. I started with a simple team of five; by the time I finished at Dojo, I was looking after more than 60-70 engineers. Once you build this environment where you build relationships, where you play the long game, never the short game, where you try to understand each person on your team, give them that purpose, give them that direction, give them validation and recognition, which is the most undervalued aspect of software engineering, and provide them the right scenarios and the right environment, you will have a high-performing team every single time. I can guarantee you that.
That has been my mantra: personal relationships, building relationships, understanding people, and going back to the first principles of why we are doing it, giving everyone the same input. Ideally, even if somebody replaces you, every team member should be able to reiterate the same purpose of the team. That's how I always see it. Everybody having the same mentality, that collective hive mentality, trying to achieve the same goal, does a lot of good in the longer run. You might not see it in the short term, but over the long run, it is the best thing you can do.
Kovid Batra: I absolutely agree with you. In fact, you said there is no framework as such, that this is simply what you do to build better teams. But to me, this is the framework. On your behalf, I would say: if you really want to build a team that cares for the customer, and you are the one leading the team, you build that relationship and trust with your team members; and in every discussion, every sprint, every procedure you follow, you put that out in your thoughts and your documents, maybe even in the PRDs, mentioning why we are solving this along with what needs to be built. That's when you crack it, because if every day there is a discussion in the room about solving problems for the customer, automatically everyone starts thinking like that. Of course, there has to be a first level of trust in place, so that everyone looks up to the mindset you are adopting and preaching. So this is the key, I believe: in every team, big or small, whatever philosophy you follow while building products for customers needs to propagate into every discussion and every document that goes out from you, and people will automatically start following it. That's how, over the long term, things get imbibed fundamentally..
Jagannath Kintali: Fundamentally. It's the basic fundamentals that you target, and everything falls into place afterwards. And one thing I'll tell you for sure: once you have this, your work becomes a side effect, because once you're building that mentality, that mindset, across the team, you move like a single unit towards what you were aiming for. One thing I forgot to mention and wanted to bring up: people talk about resolving conflict. How do you resolve a conflict between two competing ideas, when you're having heated arguments or discussions about the best way forward? I simply ask the question: which solution will have the biggest impact for our customer and the problem we are trying to solve? And there is almost always a single answer, whether you measure it by how beneficial it is for the customer, its cost impact, its long-term effect, or how it reduces tech debt in the longer run. Ask which option will have the most impact on the challenge you are trying to solve, and there is a resolution, most of the time.
Kovid Batra: Okay. Yeah, I totally get it.
Jagannath Kintali: And we do use sprints as well. Every sprint, we reiterate. We follow the OKR process, objectives and key results, and every sprint we make sure the objectives and key results stay aligned to the needs of the company. First of all you have your customer, but you also have your company goals to meet, so you have to keep these in balance: make sure it is still very much valid, still very much aligned to what we are moving towards. If we were to talk about frameworks, it's a pyramid kind of structure: each team sets its own objectives and key results that it wants to achieve, but those objectives and key results also need to come from the top down, so we meet in the middle. You have very strategic goals set by the founders and Execs of the company right from the beginning, saying these are the different areas we will be targeting, and the squads, the teams, set their objectives accordingly from an engineering perspective and a product perspective, and they meet in the middle. That's how we have always done it, so it is very much aligned to the company's and the customer's aims, the customer challenges we are aiming for. It's never going to be perfect, but the best results we have got so far have come from this OKR framework.
Kovid Batra: Got it. I think one more thing I realized: setting up these objectives and key results definitely brings a structural angle to solving problems and doing something as a team. But going back to the first point, the story you started with, I really loved that. As a leader, if that is how you explain to your team members, to the developers, how one should be thinking about things, that can also go a very long way, right?
Jagannath Kintali: Long way. Absolutely.
Kovid Batra: So basically, having team meetings sometimes revolve around sharing such stories, where developers actually experience what customers feel, get into their shoes, and then go back to their laptops and code; I think that is a big need, at least in the developer space, because most of the time they are coding in their own zones and there is a very big disconnect. If we propagate this thought and incentivize it, that can also go a long way in building teams that think with empathy and compassion for the customers.
Jagannath Kintali: There's another story I'd like to tell you. At Dojo, we wanted to introduce a particular engineering paradigm around observability. The whole idea is that every single system that exists in Dojo should be totally observable: we should know how it is performing, how much traffic is coming through, how much CPU or memory it's using, the whole shebang. But it was a niche concept we were trying to introduce, and Dojo was in the middle of its scaling journey. So how do we make an impact? How do we tell this story and show how useful this is going to be? What we did was a very small project around observability that we called the 'architectural pane of glass'. Dojo has massive TV screens on the engineering floor displaying numbers, Grafana dashboards, all sorts of stats flying around. We took one complete product, broke it down into all the components that build the system, the system that builds the product, and showed everything pictorially on a Grafana dashboard. Any time a problem occurred within one of those components or systems, it would flash, saying: hey, there's an error, a metric failure, or the SLAs and SLIs you have set are dropping, with all the variances. What happened is that every person who passed that screen, and we have multiple products in Dojo, so members of other product teams passing by, including our CTO and founders, would stop and say, 'Oh, what's this about? This is something we haven't seen.' And red/green is a universal language: if it is showing red, there is a problem; if it is green, it is all going well. 'Oh, why is there a problem?' It became a conversation starter.
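To make the red/green idea concrete, here is a minimal sketch of how such a status check might work: poll an error-rate SLI from a Prometheus-style metrics API and flag each component that breaches its SLO. The metric name, query shape, component names, and 1% threshold are all illustrative assumptions, not Dojo's actual configuration.

```typescript
// Hypothetical "architectural pane of glass" status check.
// Assumes a Prometheus-compatible server on localhost:9090; every name
// below is invented for illustration.
const PROM_URL = "http://localhost:9090/api/v1/query";

async function componentStatus(component: string, slo: number): Promise<string> {
  // 5-minute error ratio for one component (query shape is an assumption).
  const query =
    `sum(rate(http_requests_total{job="${component}",code=~"5.."}[5m])) / ` +
    `sum(rate(http_requests_total{job="${component}"}[5m]))`;
  const res = await fetch(`${PROM_URL}?query=${encodeURIComponent(query)}`);
  const body = await res.json();
  // Prometheus instant queries return vectors of [timestamp, "value"] pairs.
  const errorRate = parseFloat(body.data.result[0]?.value[1] ?? "0");
  return errorRate > slo ? `RED   ${component}` : `GREEN ${component}`;
}

// Hypothetical components, each allowed a 1% error budget.
for (const c of ["payments", "onboarding", "terminal-gateway"]) {
  componentStatus(c, 0.01).then(console.log).catch(console.error);
}
```

In practice, Grafana threshold panels do this evaluation natively; the sketch just shows the underlying check that turns a tile red or green.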
Kovid Batra: Got it.
Jagannath Kintali: What we were pushing for was the effective operation of all the different systems. The screen was right next to where the team was sitting, so every time somebody came around and talked about this big screen, the team felt really good about what they had built; they could see the usefulness of what we were pushing for. And what that resulted in: we got the funding to build a team, to take it even further and spread it across the whole of Dojo Engineering. I haven't been to Dojo in a while, but last time I checked, the observability system we built is, I can put my hand on my heart and say, probably one of the best in the UK market, among fintechs. I'd even go further and say in the world, though I haven't seen many of the other systems. It's one of the best systems we have built, and it has been a journey of two years.
What I was trying to get to is that even small things can create that customer delight; in this case, it was an internal community of engineers we were serving. You can see how you can capture the imagination of the customers and users you are solving the problem for and get them engaged. And it's a two-way street: because the customers are getting engaged, the team now gets engaged, and they find that people are talking about this particular product: 'I was meeting with someone from that team, and they were asking, hey, how can we get that system built for us?' It becomes a conversation-starting point, and it spreads all by itself. And all of this without any company directive or top-down approach: doing things very organically, capturing the imagination and showing that, hey, this is possible, this is something that can be done. And of course, the product was genuinely useful: we didn't previously have that much observability into our systems, and this allowed us to observe them much better. So it worked out beautifully, and it's a story I will probably never forget for as long as I am in this profession. That's how the whole observability team started.
Kovid Batra: Got it. Amazing. I think this is a really good example of thinking not just about business customers: these developers are your internal customers whom you have to cater to. And as a leader, if you are compassionate and empathetic about how to push them towards success metrics, and get them thinking about things they align with, bringing this to a large scale ultimately impacts your customers too. A very good example, and it was a really good session.
As we are running out of time, I'd like to bring this to a close. Thanks a lot, Jag, for bringing such beautiful insights on how to build great engineering teams and for sharing your experiences at Dojo. It was a lovely experience for sure.
Jagannath Kintali: Thank you very much for having me on. It's always nice to look back; we as engineers, as professionals, don't do this enough, stopping to take a pause and look back at our previous experiences. It brings me great joy to talk about all these different experiences; it brings a smile to my face as well. So it was very delightful for me too. Thank you very much for the opportunity.
Kovid Batra: Great. We would definitely love to have you back sometime, talking about more such engineering challenges and how things work out in the engineering world.
Jagannath Kintali: 100%.
Kovid Batra: Thank you for today. Thank you, Jag.
Jagannath Kintali: Thank you. Have a good day. Bye.
Kovid Batra: Thank you. Bye.
'How AI is Revolutionizing Software Engineering' with Venkat Rangasamy, Director of Engineering at Oracle
In this episode of the groCTO Originals podcast, host Kovid Batra talks to Venkat Rangasamy, the Director of Engineering at Oracle & an advisory member at HBR, about 'How AI is Revolutionizing Software Engineering'.
Venkat discusses his journey from a humble background to his current role and his passion for mentorship and generative AI. The main focus is on the revolutionary impact of AI on the Software Development Life Cycle (SDLC), making product development cheaper, more efficient, and of higher quality. The conversation covers the challenges of using public LLMs versus local LLMs, the evolving role of developers, and actionable advice for engineering leaders in startups navigating this transformative phase.
Timestamps
00:00 - Introduction
00:58 - Venkat's background
01:59 - Venkat's Personal and Professional Journey
Kovid Batra: Hi, everyone. This is Kovid, back with another episode of the groCTO podcast. And today with us, we have a very special guest, Mr. Venkat Rangasamy. He's the Director of Engineering at Oracle, and an advisor on the HBR Advisory Council, where he's helping HBR create content on leadership and management. He comes with 18-plus years of engineering and leadership experience. It's a pleasure to have you on the show, Venkat. Welcome.
Venkat Rangasamy: Yup. Likewise. Thank you. Thanks for the opportunity to discuss some of these hot topics. I'm pleased to be here.
Kovid Batra: Great, Venkat. So I think there is a lot to talk about regarding what's going on in the engineering landscape. Just for the audience, today's topic is how AI is impacting the overall engineering landscape, and with Venkat coming from that space with immense experience and exposure, I think there will be a lot of insights from his end. But before we move on to that section, I would love to know a little bit more about you, and our audience would too. So anything you would like to share from your personal life or professional journey: any hobbies, any childhood memories that shaped who you are today, how things have changed for you. We would love to hear about you.
Venkat Rangasamy: Yup. I come from a humble background. I started with nothing much in place; when I went through my schooling, we didn't even have electricity. That's how I did my whole schooling, and then moved on to college, again everything on scholarship. That's where I started my career. One thing kept me motivated to go places, explore different things and opportunities: mentorship. That is what shaped me, from my school days when I didn't even have food to eat for a day. The mentorship and the people who helped me are why I do what I do today.
With that context, why I'm passionate about generative AI, and where I connect the dots, is this: we used to have mentorship where people would help you, push you, take you in the right direction through the different challenges they put in front of you, right? Over a period of time, that mentorship evolves. I started with a physical mentor who handheld me through each and every step of the way. As your career moves along, that handholding tapers off; it slowly becomes more like instructions: hey, this is how you need to get it done. The more you grow, the more abstracted it becomes. The one piece I miss is that handholding mentorship. Even as you grow in your career, in the long run, you need something handholding you to help you progress. What motivated me to get into generative AI and see what is going on is that it could be another mentor for you: to shape your roles and responsibilities and your career, to decide how you want to proceed, to bounce your ideas off and see where you want to go with the problem you have, in the context of your work.
I'm invested in how you, as a person, can shape your career; I'm interested in making people successful. In the long run, that's my passion. Given the path I've gone through, I just want to help people in a way that makes them successful. That's my belief. How many people can you pull up with you, 10 to 100? The way you grow others as you grow is equally important; it's not just your own growth. Being part of the whole ecosystem and bringing everybody along, everybody's career, is equally important. I'm passionate about that and happy to do it. In my own way, as people come in, I want to make sure we grow together and make them successful.
Kovid Batra: Yeah, I think it's because of your humble background and the hardships you saw early on, growing up, that you share that passion and want to help other folks grow and evolve in their journeys. But the biggest problem I see with people today is that they lack that empathy and the motivation to help people. Why do you think that is, and how can one really overcome it? Because in my fundamental beliefs, we as humans are here to give back to the community, to give back to this world, and that's the best feeling I have experienced in my life over the last few years. I'm not sure how to instill that in people who lack the motivation to do so. In your experience, how do you want to inspire people to inspire others?
Venkat Rangasamy: Yeah. It goes both ways, right? It's when you try to bring people up and make them better that you grow yourself. Over the last five to ten years, the whole industry has become really mechanical; expectations have gone up so much that we have no breathing space: I want to chase my next thing, then the next one. We leave behind the bottom of the food chain. Instead, bring the whole food chain with you until they taste the success of building a product. Bringing the entire food chain along in the ecosystem, making them successful, is what makes your team at the end of the day. If we start seeing the value of that, people will start spending more time growing other people, and those people will make you successful. It's important. If that food chain breaks, or you keep it outside of your own progression and growth, that's not actual growth, because at some point you hit roadblocks, and at that point your complete food chain is broken; the whole team's food chain is broken. It's hard to bring them back and get the product launched when you want to. It's about building trust, bringing them up to speed, making them part of you; that's what you have to do to make yourself successful. Once you start seeing that in building products, that becomes the model, and I think people will follow it.
And you rightly pointed out empathy. Have some empathy. A career can take its own course; don't squeeze it too hard expecting a step up every six months or every year. No, it takes its own course of action. Go with it and make it happen. There are ups and downs in careers. Don't think your career should advance every quarter and every year; that's not how it works, because then there's no room for failure in your career, and that's not equilibrium. If that happened, everybody would become evil, and that's not the point. Everything in the context of how you uplift people is equally important, and I think people should focus more on empathy and the rest, rather than just succeeding as an individual contributor. If you want to be successful in your own role, be an IC; but if you are a professional manager, there is a chain of people under you who trust you and build their careers on top of your growth. That's important. When you have that responsibility, be meaningful about it; how you bring them along and uplift them is equally important.
Kovid Batra: Cool. Thanks a lot for this sweet and real intro about yourself; we got to know you a little more now. And I've been following you on LinkedIn for some time, and I see that you have been passionately working with different startups and companies in the space of AI. So on that note, let's move on to our main section: how AI is impacting and changing the engineering landscape. Starting with the advisories and startups you are working with: what are the latest things going on in the market you are associated with, and how is technology being impacted there?
Venkat Rangasamy: Here's a good analogy. I want to give some historical background on how AI is going mainstream while people don't quite realize what's happening around us. Around 2010, when we started presenting cloud computing to folks in the banking industry, I used to work for a banking customer, people really laughed at it: 'My data will stay with me. I don't think it will move to the cloud any time soon. It will stay with me on-prem; that is not going to change.' But over a period of time, cloud made it easy. Any startup building an application doesn't need to set up any infrastructure, because the cloud gives you an easy way to do it: just put in your card and your infrastructure is up and running in a couple of hours. That revolutionized the way we deploy and manage our applications.
The second pivotal moment in our history is mobile apps. Before that, application dominance sat with the enterprise most of the time. When mobile got introduced, the distribution channels made it easier to reach end users, and a lot of billion-dollar unicorns like Uber and Spotify got built. That's the second big revolution. After mobile, I would say foundational things happened, like big data and data analytics, and some ML along the way. But in terms of revolutionizing the whole aspect of software, the way cloud and mobile impacted the industry, I see AI as the next one. The reason is that, as of now, software is built through a traditional SDLC practice set up a long time ago. What's happening now is that this practice is being questioned and changed in terms of how we develop software: making it cheaper, more productive, with quality deliverables. If you worked in the 90s, with COBOL and other things, we used to do something called extreme programming. In pair programming and extreme programming, you have an assistant: you sit together and write a bunch of instructions together; that's how you would code in COBOL and validate your procedures. Extreme programming went away, and we moved to IDE-based code suggestions for developers. But now it's coming full circle, 360 degrees: generative AI is going to influence every single phase of the software life cycle.
And it's just at the initial stage; people are figuring out what to do. From my interactions, and what I do in my free time with generative AI to change this SDLC process in a meaningful way, I see there will be a profound impact on what we do as software developers. From gathering requirements, to deploying the software to customers, to post-release support, every part of the life cycle will be meaningfully impacted. What does that mean? Cheaper product development, quality deliverables, and good customer service. There will be trade-offs over time, but that's where I think it's heading. Some folks have started realizing this and are injecting generative AI into their SDLC process in some shape or form to make it better.
We can go into detail on what each phase will look like, but that's what I see from an industry point of view, how folks are approaching generative AI. It's very conservative, and I understand that, because that's how we started with cloud and other areas too. But it's going to be mainstream; each and every aspect will be relooked at, and from a change management point of view, in a couple of years the way we see the SDLC will be quite different from today. That's my belief and what I see in the industry. Software development itself is a case of eating your own dog food: for the first time, the development process itself is going to be disrupted. It'll have a profound impact on the whole of product development, and it will be cheaper; go-to-market will be much cheaper. Like how mobile revolutionized things, the next evolution will be using generative AI-like capabilities to make your product cheaper and get to market in a shorter time. That's going to happen eventually.
Kovid Batra: Right. I think this is bound to happen; even I believe so. It's already there. I mean, you are not talking about the distant future; it's happening right now. But what do you think about the point that this technology is, right now, not hosted locally? We are talking about running LLMs locally on your servers and systems. How do you see that piece evolving? Lately I have been seeing a lot of concerns from companies and leaders around the security aspect and the IP aspect, where you are putting all your code into a third-party server to generate new code. You can't stop developers from doing that; they have already started. Earlier, the method was going to Stack Overflow and taking some code from there, or going to GitHub or GitLab repositories. But now this is happening from a single source, which is cloud-hosted, and you have to share your code with third parties. That has started becoming a concern. So although the whole landscape is going to change, as you said, there is a specific direction in which things are moving: very soon people realized that there is a security and IP aspect that comes with using such tools. How do you see that piece progressing in the market right now, and what are the products and services coming up that impact this landscape?
Venkat Rangasamy: It's a good question, actually. After a couple of years, the realization even I came to is this: the services hosted in the cloud, the public LLMs you can use to generate some of this, look great from a POC point of view; you can see what's coming your way. But when it comes to a real product in a production environment, it's not well defined, because, as I said, there's security, audit, compliance, and code IP: who owns the IP, and your compliance team cares about that. There's also the concern of your code, your IP, going into some public LLM's training. It's a compromise, there's real concern in that area, and enterprises have started looking for something that stays within their workspace. At the end of the day, from a developer's point of view, the experience has to be within the IDE itself; that's where it becomes successful. Anything outside the IDE is not fully baked into the developer life cycle, which means the tool set has to behave as if it's running locally. If you ask me whether that's doable: for sure, yes. If you had asked me a year back, I would have said no. Running your own LLM on a laptop, alongside your IDE, would have been really challenging a year ago. But today, I've been doing some recent experiments on exactly these challenges. For corporates, for any big enterprise concerned with security, and talk to startup founders too, the major roadblock is: 'I don't want to share my IP, my code, outside of my workspace.' So bringing that experience into your workspace is equally important.
With that context, I was doing some research on a POC project: bring in Code Llama, one of the public LLMs, trained by Meta for different languages. At the end of the day, the smaller the LLM, the better for these kinds of tasks. You don't need 70 or 700 billion parameters; that's irrelevant for coding, because coding is a bounded set of instructions the model needs to be trained on, plus your custom code and templates. So how do you solve this problem? Set up your own local LLM. I've tested and benchmarked on both Mac and PC; the Mac performs phenomenally well, and I don't see any real difference. There is a product called Ollama, which lets you set up an LLM within your workspace as if it's running on your laptop; nothing goes out of your laptop. Set that up, then go to your IDE and create a simple plugin. I created a VS Code plugin connected to the local LLM; Ollama gives you a REST API, so you just connect to it. Now, within your IDE, whatever code is there talks to your LLM, which means every developer can have their own LLM. As long as you have the right trained dataset for the basic languages, Java, Python, and so on, it works phenomenally well, because the model is already trained for them. If you want custom coding and custom templating, you just need to train it on that aspect, on your coding standards.
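As a rough illustration of the setup Venkat describes, here is a minimal sketch of calling Ollama's local REST API (which listens on port 11434 by default) from TypeScript, the language a VS Code plugin would typically be written in. The model tag and prompt are assumptions for illustration; this is not the actual plugin he built.

```typescript
// Minimal sketch: send code to a locally running Code Llama model via
// Ollama's REST API. Nothing leaves the machine. Assumes Ollama is running
// and `ollama pull codellama` has been done; prompt and model are examples.
async function askLocalModel(code: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama",   // any locally pulled model tag works here
      prompt: `Review this code and suggest improvements:\n${code}`,
      stream: false,        // ask for a single JSON response, not a stream
    }),
  });
  const data = await res.json();
  return data.response;     // Ollama returns the completion in `response`
}

askLocalModel("function add(a: number, b: number) { return a - b; }")
  .then(console.log)
  .catch(console.error);
```

A VS Code extension would wrap a call like this in a command or an inline-completion provider, but the transport is the same local HTTP request, so no code ever leaves the developer's machine.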
Once you train it, keep it local and run it as part of the IDE. A whole integrated experience that runs within the developer's workspace is what's scalable in the long run. If anything goes outside of that, and we've seen this many times over the past couple of years, you hit limits. Even though public LLMs are good enough for larger coding tasks, if you want to analyze a complete file and send it to a public LLM through the coding and testing services available, the challenge is the size limit on the tokens you can send. There is a cap on the number of tokens, which means analyzing your entire project repository isn't possible with the way public services are set up now. Which means you need your own LLM within the workspace; it becomes part of your workspace, just like how you run a database: run it as part of your workspace and make it happen. That is possible, and that's going to be the future. I don't think relying on a public LLM is the viable option; instead, have the pipeline set up, like patching or giving a database to your developers, so it runs locally and everybody can use it within their local workspace. The tools and tool sets around this are really happening, and it's at the stage where, in a year's time, you won't even see it as a big thing. It'll just be part of your setup: connect whatever source code editor you have to the LLM and run with it. That's the future for the coding part. Other SDLC phases have different nuances, but coding, I think, will be pretty straightforward within a year's time frame. That's going to be the normal practice.
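To illustrate the token-limit point, here is a hypothetical sketch that walks a repository and feeds files to a local model in bounded chunks, the kind of whole-repo pass that a public API's context window and rate limits tend to rule out. The chunk size, file filter, and prompt are all assumptions, not a prescribed workflow.

```typescript
// Hypothetical repo-scale analysis against a local Ollama endpoint.
// Splits each file into bounded chunks so no request exceeds the model's
// context budget; all limits and paths here are illustrative.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const MAX_CHARS = 8_000; // crude stand-in for a context-window budget

// Recursively yield source files under a directory.
function* sourceFiles(dir: string): Generator<string> {
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) yield* sourceFiles(path);
    else if (path.endsWith(".ts") || path.endsWith(".py")) yield path;
  }
}

// Send one bounded chunk to the local model (same Ollama API as above).
async function analyzeChunk(chunk: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama",
      prompt: `Flag potential bugs in this code:\n${chunk}`,
      stream: false,
    }),
  });
  return (await res.json()).response;
}

async function analyzeRepo(root: string) {
  for (const file of sourceFiles(root)) {
    const text = readFileSync(file, "utf8");
    for (let i = 0; i < text.length; i += MAX_CHARS) {
      console.log(file, await analyzeChunk(text.slice(i, i + MAX_CHARS)));
    }
  }
}

analyzeRepo("./src").catch(console.error);
```

Because the model runs locally, nothing in this loop is throttled or billed per token, which is what makes repository-wide passes practical.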
Kovid Batra: So from what I understand of your opinion, most of the market will shift towards local LLM models; that's going to be the future. I'm not sure if this is the right analogy, but let's talk about something like GitHub: there's the cloud-hosted version and the in-house version. Teams and companies have always had the option of hosting it locally, right? I'm not sure of the percentage of teams using cloud-based GitHub versus a locally operated GitHub, but with the cloud version, they are hosting their code with a third party. The code is there.
Venkat Rangasamy: Yup.
Kovid Batra: The market didn't shape up that way, if we look at it from the perspective of code security and IP. Why do you think it would happen for local LLMs? Wouldn't the market be fragmented? Large-scale organizations that have grown beyond a certain size have the 'let's have it in-house' mindset, and they would go for local LLMs, whereas small companies are still establishing themselves. I mean, couldn't it follow a similar path to how code hosting played out?
Venkat Rangasamy: I think that's very possible. The only difference between GitHub and an LLM is the artifact: GitHub is more like artifact management. You have your IP and you're keeping it in a repository to keep everything safe, with versioning, branching, and so on.
Kovid Batra: Right.
Venkat Rangasamy: The only problems there related to security are whether there's any vulnerability within your code and whether your repository is secure; there's a compliance bar that needs to be met, and as long as that's satisfied, you're good. But from an LLM life-cycle point of view, the IP in software is the code you write, the business logic associated with that code, and the customizations happening around it. As of now, those IPs are patented: 'this is what my patent is, this is my IP.' Once you start giving your IP data to a public LLM, it becomes challenging, because at the end of the day, any data that goes through can be used for training; an LLM can be trained on the dataset its users send through it. Is every application so critical that you cannot share its IP? Not really. Building simple web pages or REST services is okay, because I don't think much IP is bound up in those. It's where you have the core business logic, your own workflows, your own calculations, that it's going to be harder to use any public LLM.
Another thing I see in the community: small startups don't do much customization on the frameworks. They take Java as Java, Node as Node, React as plain vanilla, and just run end to end, because their goal is to get the product to market quicker in the initial stage, when they have 5-10 developers. But as the team grows, every enterprise is bound to do this, and I've gone through a couple of cycles of it, you start putting a framework around standardization: coding standards, scaffolding, how you create test cases; the whole life cycle gets your own standards enforced on top of it, to keep things consistent across developers, because when the team goes from 5 to 1,000, or 1,000 to 10,000, it's hard to manage without standards. That's where you hit challenges with a public LLM: your own code follows your own standards, which the LLM was not trained on, even for a simple application. You can still use it for the basics, but you'll also run into the limit on how big a file you can analyze with a public LLM. But to answer your question: yes, it will be hybrid. It won't be 100 percent 'everybody needs their own trained LLM setup'. In the initial stages, it's totally fine to use public ones, because startup companies don't have the resources to build their own frameworks. But once they reach the point where they want standardized practices, just as they build their own frameworks and other things, at some point they will want to bring it in-house and run with it. Large enterprises will certainly have their own developer productivity suite, like they did with their frameworks and platforms. Small startups might start with public, but in the long run, over a period of time, that may change.
And the benefit of going hybrid is that you get your product to market quickly, because at the end of the day, that's what matters for startups. It's not about setting everything up exactly the way you want; that's important, but at the same time you need to go to market with the money you have, so you prioritize your spend. Either way, code generation and LLMs will play a crucial role in development. As for which third parties they can use: I think in the future, these LLMs will be set up and trained on your own data in more of a hybrid cloud rather than a public cloud, which means the LLM you train in a hybrid cloud has visibility only into your code. It's not a public LLM; it's more of a private LLM, trained and deployed on a cloud, usable by your team. That hybrid approach is what's going to scale in the long run.
Kovid Batra: Got it. Great. With that, just to put out some actionable advice for all the engineering leaders out there going through this phase of AI transformation: anything from your end on what they should focus on right now and how they should make the transition? I'm talking about companies where these engineering leaders work that are in the Series A, Series B, Series C bracket. I know that's a huge bracket, but what advice would you give these companies? They're in the growing phase of the whole life cycle of a company, so what should they focus on at this stage?
Venkat Rangasamy: I was talking to a couple of ventures recently about this same topic: how the landscape is going to change for software development. One thing that came up frequently on that call was making it cheaper to develop a product and going to market faster; the expectations around software development, and the cost associated with it, have been changing and will keep changing drastically. At the same time, be clear about your strategy. It's not as if you can change productivity by 50 percent overnight. Keep it realistic: 'this is what I want to achieve; here is my charter, from ideation to go-to-market; here are the meaningful places where I can introduce something that helps developers and other roles, like PMs, maybe even post-sales support.' Have a meaningful strategy. Don't just carry on in the traditional way, because your investors and advisors are going to start asking questions once they see the pattern from others, because that's how others have started looking at it. I would say proactively go through that landscape and map your process to see where you can inject generative AI in meaningful areas where it can have impact.
And be practical about it. Don't commit to 'I'll make my development 50 percent cheaper overnight'; you might get burned, because that's not reality. Instead: 'in my unit test cases and certain areas, I can build quality products within this budget, benchmarked against the industry; I can do it by introducing these practices: generating test cases, post-sales customer support, writing code in some aspects.' That's what you need to set up when you go for venture funding. And take a fresh look at your SDLC process; that's important. See where you can inject it, and in the long term, it will help you. It will be iterative, but at the end of the day, we've gone from waterfall to agile, and from agile to many other paradigms within agile over the years. The one thing we're good at as a software industry is adapting to a new trend, and this could be another one. Keep an eye on it and make it something that has a meaningful impact on your products. Before your investor comes and asks, 'can you optimize here? I see another portfolio company of mine doing this and this,' it's better to start yourself. Be collaborative, see if you can make something meaningful, and share your learnings in the community where other founders can leverage them. That's my advice to any startup founder who wants to make a difference. Yep.
Kovid Batra: Perfect. Perfect. Thank you, Venkat. Thank you so much for this insightful, uh, uh, information about how to navigate the changing landscape due to AI. So, uh, it was really interesting. Uh, we would love to have you another time on this show. I am sure, uh, you have many more insights to share with us, but I think in the interest of time, we'll have to close it for today, and, uh, we'll see you soon again.
Venkat Rangasamy: See you. Bye.
‘Product vs Engineering: Building Bridges, Not Walls’ with James Charlesworth, Director of Engineering at Pendo
In the recent episode of ‘groCTO: Originals’, host Kovid Batra engages in a thoughtful discussion with James Charlesworth, Director of Engineering at Pendo, who brings over 15 years of experience in engineering and leadership roles. The episode centers around the theme “Product vs Engineering: Building Bridges, Not Walls.”
James begins by sharing how his lifelong passion for technology and software engineering, along with pivotal moments in his life, have shaped his career. Moving on to the main section, James addresses the age-old tussle between product and engineering teams, emphasizing that these teams should collaborate closely rather than operate in silos. He shares strategies for fostering empathy and effective collaboration while highlighting the importance of understanding each team’s priorities and the impact of misalignment.
James also underscores the value of one-on-one meetings for having meaningful conversations, building strong relationships, and understanding team members on a deeper level. He also explores the significant role of Engineering Managers in enabling their teams to overcome these challenges, ensuring smooth team dynamics, and achieving successful product outcomes.
Kovid Batra: Hi everyone. This is Kovid, back with a new episode of the groCTO podcast, and today with us, we have a very special guest. He's the Head of Engineering at Pendo, and he has more than 15 years of engineering and leadership experience. Welcome to the show, James. Happy to have you here.
James Charlesworth: Hi, Kovid. Thank you so much for having me on. I'm actually not Head of Engineering at Pendo. I am a Director of Engineering and I run the Sheffield office here in the UK. So thank you for having me on.
Kovid Batra: Oh, all right. My bad then. Okay. So I think today, uh, we are going to have a very interesting discussion with James. We're going to talk about the age-old tussle between product and engineering, and James, uh, is an expert at handling those situations. So he's going to tell us what are the tactics and what are the frameworks he's using here. But before, James, we move on to that section, uh, we would love to know a little bit more about you. Uh, maybe some of your hobbies, some of the life-defining events for you, who James is basically. Please go on.
James Charlesworth: Thanks, Kovid. Um, yeah, this sounds super nerdy, but my hobby has always been technology and software engineering. Um, I first started doing software engineering when I was probably about 11 or 12 years old. I had a Psion Series 3 that my parents bought me from a boot fair, and I just learned how to program that. Like, I'd just sit there for ages typing on these tiny little keys. Um, and my hobby has been like using software and coding to actually solve problems in the real world and build products. And that's kind of led me towards web development and SaaS products. And that's ultimately what we do at Pendo, is help people build better products. So, um, yeah, that's a pretty boring answer to your question about my hobbies. I do also like play music and things. Um, I played guitar in a band for a long time. Um, so that's the only non-techie hobby I guess I have.
Kovid Batra: No, that's great. Thank you for that sweet, small intro about yourself. Anything that, uh, that excites you from your childhood or maybe from your teenage years that has defined you a lot? I mean, this, this is something that I usually ask a lot of people, but from there, we, we get to know a lot more about our guests. So if you don't mind, can you just share some of that, uh, experience from your past that defines you today?
James Charlesworth: Yeah, I think the biggest defining moment that a lot of people go through is when they first leave home for the first time and they don't have a direction, because I didn't have much of a direction when I was like 18 years old and I left home. I did the wrong degree. I did a degree in control systems engineering and I ended up doing software. So it took me a while to get into web development because of doing the wrong degree. Um, and actually because I had no real direction, I was just sort of fluttering in the wind and just doing whatever. But through that process of just giving yourself a bit of freedom and going out into the world and doing whatever you want, you really learn about yourself and you learn about other people, and I think that's when I went from being obsessed with computers to being obsessed with people and the way that people interact with each other and, um, you know, like I met people from all different walks of life, and you notice the similarities between anybody from all across the world, but you also notice the differences, and how you can celebrate those differences.
And so I think, like, having that moment of moving away from home and, um, like, living by yourself and stuff like that, um, really opens your eyes up to, like, who you want to be and where you want your place in the world to be. So I'm sorry if that's a little bit, um, esoteric but it's, yeah, there was no like one defining moment really. I mean, it was just one of those and then like being in a band goes in with that because I always wanted to be a rock star. It never really worked out. But this idea of you can just get some friends, get together, get a van and just like go touring and play music, um, across the country, that's really cool, and that's really cool when you're in your sort of early twenties and you just want that freedom. Um, and that goes hand in hand with meeting people from, from all over the place. So yeah, like, you know, I'm obsessed with people. I'm obsessed with like human interactions and the way people, um, the way people like carry themselves and interact with each other and what they care about and how we can all align that. Yeah.
Kovid Batra: That's really interesting. I mean, uh, with the kind of role you are in, where you are in leadership, you're leading teams, you're a Director of Engineering, this awareness of different people and cultures makes you more comfortable when you are, uh, leading people. You, you bring more empathy, you bring more understanding to their situations, and I'm sure that has come, uh, from there, and it, it is definitely growing as you move through your career.
So I think, James, this was, this was, uh, really, really interesting. Uh, let's move on to our main section. I think, uh, everyone is waiting to hear about that. Uh, so this has been an age-old tussle, as I said. Uh, the engineers have never liked the product managers. I'm not generalizing it, but just saying, so please, please don't, don't take it wrong. Uh, but yeah, usually the engineers are not very comfortable, uh, in those discussions, and this has been an age-old tussle, we all know, know about it. When we talk about product and engineering teams, I personally never think of these two as two separate teams. Like, it, it never works like that. One thing that I learned as soon as I moved into this industry is that it's 'product engineering'. It's not product and engineering separately. So it's not healthy for a team to have this kind of a tussle when you actually are moving towards the same goal, and in almost every engineering team that I see, there, there is some level of friction there, and it's, it's natural to be there, because the product managers usually might not be that well, uh, hands-on with the code, hands-on with the kind of daily practices our devs go through, and then planning according to that, keeping in mind that, okay, it should be, uh, pushy as well as comfortable for the developers to deliver something. So that's where the main friction starts, and you come up with unreasonable requirements which the developers might not be able to relate to, in terms of how they are going to impact the product.
So there are multiple reasons due to which this gap, this friction could be there. So today, I think with that note, I would, uh, hand over the mic to you and, uh, would want to know how you have had such experiences in your past and in your current role, and how you end up resolving them, so that developers and product operate as one single team towards that one goal of making the business successful.
James Charlesworth: Yeah, absolutely. And what you said there about coming together to solve a problem together is really, really important. I think like the number one thing that underpins this is that everybody, product managers, engineers, designers, managers, needs to remember that you're all employed by the same organization and you've got the same shared goals and your, um, contribution to that is no more or less valuable than anybody else's. Like you mentioned that word 'empathy' in your introduction, like empathy is, we're gonna talk about empathy a lot today, right, because empathy is all about putting yourself in somebody else's shoes and seeing what their goals are. Um, and firstly, like trying to steer their goals to what you need, but also trying to like, um, emphasize what your own goals are, um, and align those to the others.
Like, the way I always think about product managers is, a lot of engineers, they feel like they're on feature factory teams. They feel like they're just being told what to build, and you get into this feature factory loop. Um, and it just seems like all the Product Manager wants to do is add features into the product, add features into the product, add features into the product. Um, and it can feel sometimes like product managers are paid like on commission, like they get a certain commission based on how many features they deliver at a point or something. That's not true. Product managers are paid a salary just like you are, and the way that your success is ultimately measured is the same way that your product manager's success is ultimately measured. And so, it's really, really important to realize that you do align around this goal, and you need to have a two-way conversation about it. Like you need to, you need to really, really explain what you think the priorities should be, and you need to encourage your product manager to explain what they think their priorities should be for the team, and then you can align and find some middle ground that ultimately works best for the business.
But yeah, like in my experience anyway, it's just, you say age-old, like this has been quite a long-standing thing. And before product managers, it was business people. Maybe, you know, in one of my first jobs in software engineering, um, we didn't really have product managers. We just had like the Director of Engineering, product research, design, whatever, um, who would just come up with the idea and just say, "This is what we're building." And that's very difficult, um, because you reported into that person. So you basically had to just do exactly what they said, and that was super, super unhealthy because that builds up a huge amount of resentment. And I much, much prefer the model we have now, where we have product managers, where engineers don't report into the product managers, because that means that product managers have to lead the product without authority, um, and engineers have to lead the best engineering direction without authority. So you have this thing where you're encouraged to influence your peers on the same team as opposed to just doing the thing that your boss tells you to do, which is how it used to work when I started in this industry.
So it's got, it's got a lot better. Um, and the, yeah, as I've gone through my career and I've worked with some really, really good product managers and some really good product leaders, I've noticed a pattern: the product managers that are really, really good, that are really successful, are the ones that have that empathy, and we will talk about empathy a lot, right? Because it's super, super important. The product managers that have that empathy can empathize with what engineers actually want to get out of a situation, um, and then align that with their own goals.
Kovid Batra: I have a question here. When you, when you say empathy, I think, uh, in your introduction also, you mentioned, like, when you meet different people from different cultures, different backgrounds, you tend to understand. Your, your brain develops that empathy naturally towards different situations and different people. But that has happened only because you have seen things differently, right? When we put this context into product and engineering, a product manager who has probably never coded in their life, right? Who does not have the context of, uh, how things work in, in the development workspace, right? In that situation, how a manager like that should be able to come to this piece that, okay, uh, if the developer is saying that this is going to take five days or this is difficult and this is complicated to implement and it won't add much value. So in those scenarios, a person who is not hands-on with coding or has never done that piece on his own or her own, uh, how do you think, uh, in a professional environment that empathy would come in? And of course, the Product Manager has his or her own, uh, deliverables, the metrics that need to be looked at. So how does that work in that situation?
James Charlesworth: Well, the same way it works the other way around as well. So the situation you've just described, right? You've got a Product Manager who is trying to get what they need to get done, but they don't understand the full details behind the implementation. You've also got an engineer that does understand the full details behind the implementation, but they don't understand the full business context behind what you're trying to build, right? Because that's the Product Manager's job. So the engineers, they might know exactly how the database is structured and how all of the backend architecture works, which is very complicated, but they don't understand, like they haven't been speaking to customers. They don't know the kinds of things that the Product Manager knows. So both sides need to essentially understand what the other person's priorities are, and that's what empathy is. Empathy is understanding what somebody wants, and not necessarily always giving them what they want, but the very least like comprehending and considering what somebody's goals are in your own way you deal with them, right?
So, um, back to your situation about software engineering. Okay, so let's say a Product Manager has come to you and said, "We just need to add this button to this page. It's super, super important. We want to, we want this button to send an email out." And the engineers come back and they say, "Oh, we actually don't have any backend email architecture that can send emails out. So we're going to actually have to build all of that." Um, that, you know, the Product Manager can go, "Well, what's so difficult about that? Just put a button there and send an email out." And the engineers are kind of caught between a rock and a hard place, where they're sort of saying, "Well, this is a lot of work. Like that's weeks and weeks and weeks of work, but how do I go to the business and say 'It's a lot of work'?" Um, and so, the solution is to really, really explain and break down to your Product Manager why this is more work than they realize, and the Product Manager's job is to turn around to you and explain why we really, really need this. So you both need to align and you both need to understand. Product managers need to understand that some stuff is complicated, and the only way they're going to understand it's complicated is if you just explain it to them, right? Like there's no secrets in software engineering. If you spend an hour sitting down and explaining to a Product Manager how your backend is architected and how your databases all fit together and, you know, what email service we're using and what the limitations are of that email service, then they'll understand it. It'll take you an hour to explain that. And equally, your Product Manager can sit down with you and they can show you the customer calls where people are really, really wanting this feature, right? And they can educate you on why we really need this feature, and then ultimately, you'll come, you'll come together where you understand why your Product Manager is pushing for this so hard, and your Product Manager will understand why you're pushing back against this so hard, and you'll find a solution that makes everybody happy in the end. But you do need to listen. You need to listen to the other person's, um, goals and what they want to get out of it. Um, and that's the empathy side. Essentially, it's, it's about respecting somebody's motives. It's respecting somebody's, um, like what they've been given, the mandate that they've been given for a certain situation.
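To make James's example concrete, here is a minimal sketch, assuming a Python backend. The happy-path send is genuinely a few lines; the names `EmailJob` and `send_email`, and the production concerns noted in the comments, are illustrative assumptions rather than any specific team's design, but they show where the "weeks of work" actually live.

```python
# Minimal sketch, assuming a Python backend; smtplib and email.message
# are standard library. EmailJob is a hypothetical record, not a real API.
import smtplib
from dataclasses import dataclass
from email.message import EmailMessage

@dataclass
class EmailJob:
    sender: str   # real systems would persist jobs in a durable queue
    to: str
    subject: str
    body: str

def send_email(job: EmailJob, host: str = "localhost", port: int = 25) -> None:
    msg = EmailMessage()
    msg["From"] = job.sender
    msg["To"] = job.to
    msg["Subject"] = job.subject
    msg.set_content(job.body)
    # The send itself is this simple. The backlog item the PM doesn't see
    # is everything around it: queueing and retries, HTML templating,
    # rate limiting, bounce and complaint handling, unsubscribe links,
    # and deliverability setup (SPF/DKIM/DMARC).
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)
```

Walking a Product Manager through the gap between the function above and the comment inside it is exactly the hour-long explanation James describes.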
Kovid Batra: Right. I think this is one scenario where I definitely see putting in effort in explaining to the other person what it really means, what it stands for. Obviously, no one can be so inconsiderate about the other person when they're working together. So maybe in one or two, uh, situations like this, let's say, I'm a Product Manager, uh, where I have to explain things to the developer, and if I do that for, let's say, two or three such instances, from the fourth or fifth time, automatically that level of trust is built, and you are in a position to maybe, uh, not even have to explain a lot of the time. You get that synchronization in place where things are working well with you.
And on that note, I really feel that for people who are joining large-size teams, like, uh, a Product Manager joining in or a developer joining in, usually in large-size teams, we have started to see this pattern of having engineering managers also, right? So in your perspective, uh, how much does an Engineering Manager play a role in, uh, bridging this gap and reducing this friction? Because, uh, a few of my very close friends who have been from the engineering background have chosen to be in the management space now, and they, they usually tell us what things they are working on right now. And I feel that that really helps the business as well as the developers to deliver the right things on time, and you get a lot of context from both sides. So what's your perspective on that, uh, of bringing those engineering managers into the system?
James Charlesworth: Yeah. I mean, I think the primary, number one responsibility of an Engineering Manager is to empower the engineers to do all those things that you've just been speaking about, right? So that's your number one responsibility. Engineering managers tend to have better people skills than engineers; that's why people go into management. Um, and your job is to teach the engineers on your team how to do that, all of those things you've just described. Sometimes you have to step in, and sometimes there's a high-pressure situation where you do actually have to say, you know, "I'm going to bridge the gap here between engineers and product." But your primary job as an Engineering Manager is to enable the engineers on your team to all have that kind of conversation with the product managers and with the business. Um, and so it's coaching. So it's support. So it's, um, career development, and also, you know, hiring the right people; that's quite a large part of an Engineering Manager's job. Performance management. Um, and so, a lot of that. Engineering managers should never be the one person that bridges the gap between product and engineering, because then they're going to become a bottleneck, and also the engineers are never really going to learn to do that themselves.
Um, so yeah, that's always been, and I learned from some really good engineering managers or software development managers about this, about, like, um, you know, empowering the people that you've put in charge. Engineering managers aren't in their position because they're necessarily better at everything than the engineers. They're usually better at one or two things. Um, but they're not as good at things like technical architecture. So as an Engineering Manager myself, I would never overrule an IC's opinion on a software architecture because that's not my job. My job is not that. I might know, I might have been doing software for years and years and years and I understand how systems are architected and how databases work and stuff. But I'm also employing people who are better at that than me, and that's the point. And so I would never overrule them, and I would never overrule how they collaborate with their Product Manager. But I would guide and coach them towards being able to do that. Um, and so, that's the case of speaking to engineers, speaking to product managers, trying to find out if they're talking past each other, trying to find out, you know, where, where the disconnect is, and then trying to solve that between the two groups of people. So I think the answer to your question is, like, the main role of an Engineering Manager is to become a force multiplier on their team, essentially, and to enable everybody to do that. Um, yeah, you can't have engineering managers who are just there to fix the gap. It's just not scalable. That's not a good thing.
Kovid Batra: No, I totally understand that. So when we are talking about bringing, uh, this level of comfort where people are working together, talking about your experience with your teams, there must have been such scenarios, and you must have, like, put in some thought at the time of orientation, at the time of onboarding team members, into how they should be working to ensure that things work as a team. Uh, can you just tell us about a few incidents, how you ended up solving them, and how you put in place the right, uh, onboarding for team members to have that inculcated in the culture?
James Charlesworth: Yeah, the best onboarding is like that group effect of just observing something happening and then joining in with it. So like, by far the best way to onboard somebody is just to add them to a high-performing team. Like, honestly, you just put someone on a team that's super, super collaborative and they will witness how people can collaborate. But I've had, you know, I've had positive and negative experiences in the past with joining a team, primarily back when I was a software engineer. I remember I once joined a team for the very first time and I just never really got on with my Product Manager. Like, I don't think we clicked as people. We never really had any kind of conversations or anything. Um, and I was never really onboarded properly. So at the start I did have a slightly, um, rocky relationship with this Product Manager where I just couldn't understand, I couldn't understand what they were trying to do. They never explained anything. They just said, "This is what we are doing." So I just had to say, "Well, that's going to take longer than you think." And I tried this for ages. And I spoke to my manager. My manager sort of gave the advice that I've just been trying to give, um, your listeners here, which is like, you know, you need to go out and do it yourself. I'm not going to fix this for you. So what I did is I took this Product Manager and I just said, "Look, let's go for a coffee once every two weeks. We'll just have a one-to-one." This was before COVID, so you could actually go out and do these kinds of things. Um, and every Monday lunch, every two weeks, we would just go down the road and have a coffee in a cafe, um, in London where I was working at the time. And I just got to know them as a person. And I really, really got to understand that, like, this is a person that is under a lot of pressure in their job and they're very, very stressed out, and they sometimes take that out on their team. It's not necessarily their fault, but that is the way that they deal with things. Um, and if I could just have a little bit of sympathy for the sort of situation they're put in, I could work out what's going on behind that. And I would ask them about, like, what they want to do, what their career aspirations are, you know, what they want to be one day, where they want to work and this sort of stuff. And those kinds of small conversations, like I say, half an hour every two weeks, just a one-to-one, um, completely fixed the relationship and completely fixed everything else, because you just build up so much more trust with somebody if you're just having small one-on-one conversations with them.
And my kind of hack for engineers, if you like, is to have one-to-ones. People think one-to-ones are just for managers, and are for people to talk to their boss or for people to talk to people that report to them. Anyone can have a one-to-one with anyone in the business and set up a regular, no-agenda meeting every couple of weeks. That's just half an hour where you just chat with somebody, and that is a super, super valuable way of building up rapport with people that will pay dividends in the future. Like, half an hour invested between a Senior Engineer and their Product Manager, half an hour every couple of weeks, will pay dividends in the future when you meet, uh, when you meet a conflict and you realize that, oh! Actually, I know this person really quite well now because we have coffee every two weeks, every Monday, right? And so, you don't need to be somebody's manager; you don't really need a massive reason to have a one-to-one with somebody. Just put it in the calendar, chat to them and say, "Look, I, you know, I'd really like us to work more effectively together. Um, let's have half an hour every Monday. I'll buy you a croissant or something, whatever it is you want to do." Um, and then just ask them about their life, ask them about their career goals, ask them about, like, what kind of challenges they're facing. And yeah, before you know it, you'll be helping each other out. You'll be desperate to help each other out because that is human nature. We like helping each other. So yeah.
Kovid Batra: I would like to add something here, just because I've been working with a lot of engineers and engineering managers these days. What I have really felt is that throughout the initial years of their career, they have been talking with a computer, right? It's very difficult to find out what to talk about. I think the advice that you have given is very simple and, I think, very impactful. I have experienced that myself, but I have, I would say, I have been an exception in the engineering and development space because I have been a little extroverted and have been talking about things, at least in my comfort zone. Uh, so I was able to find that space with people who are themselves very introverted, uh, but still I could break through and I could break that ice.
It's very difficult for the people on the other side, who have been developers throughout their career, to come out and, like, start these conversations on their own. So what are the things that you really think we should be talking about? Like, even if a Product Manager is going to the engineer, or the engineer is wanting to break that ice and, like, build in that empathy and understand that person, what kind of things do you look forward to, uh, in such conversations, let's say?
James Charlesworth: That's an interesting one. Like, there's a lot of people that are introverted in this industry. A lot of people use introversion as, like, a crutch or as an excuse, and they shouldn't. Just, you know, being introverted doesn't mean you can't connect with other people. It just means you connect with other people differently. It means that, you know, you look inwards for experiences and things. Um, and so the practical advice would be to try and recognize how another person functions. You might find that your Product Manager is actually more introverted than they let on. A lot of people just put on a show. A lot of people are super, super introverted, but they put on a show day-to-day, especially in work life, and they'll, you know, pretend to be all extroverted and they'll pretend to be all confident, but they're not. And I've known many people like this, that if you have an actual conversation with them, they'll admit to you that they're actually super introverted and they get super, super nervous whenever they have to talk to people, but they do it and they force themselves to do it because they've learned throughout their careers.
So, um, yeah, I'm not suggesting people should push themselves too far out of their comfort zone, but in terms of practical advice, speaking in statements is quite a big thing, though a lot of people don't realize it. Um, I can't remember where I read this, it's from some book or something on, like, how to make friends and influence people, I don't think it's that exact book, but essentially, if all you're doing is asking somebody questions, then you're putting all of the onus in the conversation on them, and that's not actually that comfortable in conversation. So if you're talking to a Product Manager and you're just asking them, like, "Why are we doing this?" "Why does the customer want that?" "What's the point of this feature?" That's actually not, that's not a nice thing to do because you're making them lead everything. What you really want to do is just talk in statements. You just want to say, "Hey, like, I'm just building this. It's really cool. We've got, we've got connectivity going on between this WebSocket and this backend database. There you go. That's the thing. Um, we've just realized that this is a little bit late. And so, it's going to be a few days extra, but we found an area over here where we can cut some corners." Like, just say things, say things that are going on, tell people stuff that's going on in your life. Um, and then there's no pressure on them to intervene. And this is like standard small talk, right? If you just tell people things, then they can decide to walk away or they can decide to engage you in conversation, but you're not putting too much pressure on them. You're not asking them a barrage of questions that they feel like they have to answer. So that's standard advice for introverts. I think if you are introverted and you feel like you need to talk to somebody, share, share details about what you're working on, share details about, um, you know, your current goals and where you are, and see what happens. They might do the same. You might learn something.
Kovid Batra: Yeah, sure. I think, I think everyone actually wants to do that, but it's just that there has to be an initiation from one side. And if it's more relevant and feels like, uh, coming very naturally to build that bond, I think this would really, really work out.
Cool. So I think this was, this was really, uh, again, I would say a very simple, but a very impactful advice that one-on-ones are really things that work, right? At the end of the day, we are humans and at least for developers, I think because they are day-in and day-out just interacting with their computers, I think this is a good escape for them to actually probably go back and talk to people and have those real conversations and build that bond. So yeah, I think that's really amazing. Anything else?
James Charlesworth: You can talk about computers.
Kovid Batra: Sorry?
James Charlesworth: You can talk about computers. Like, if you're really into software, like, this is why gaming, I'm not really a gamer, but I know a lot of people connect over gaming, a lot of people bond over gaming, because that's something that you get into as, like, an introspective thing, and then you find out that somebody else is also into it, you're into the same game, and you can connect over that, and it turns something that was a really insular, inward-looking experience into, like, a shared, sociable group thing, right? So like, yeah, in many ways, I'm quite jealous of people that are into gaming because it does have that social aspect. So yeah, talk about computers. Like, just because your life is staring at a screen and talking to a computer, you can still share that with other people, and even product managers as well. Product managers in tech companies, they're super, super technical. They might not be able to code, but they definitely know how computers work and they definitely know how systems work. They're designing these things. So talk to them about it. Talk to them about, um, you know, the latest Microsoft Windows version that can spy on your history with AI. Like, talk to your Product Manager about that. They will have opinions on this sort of stuff. And so, yeah, sorry to cut you off, but like, honestly, just because you're into computers and you're into coding, like, that, that can be a way to connect with people. It doesn't have to be a way to stay isolated.
Kovid Batra: Definitely, definitely. Uh, perfect. I think, more or less, the idea is to have that empathy for people, do more one-on-ones and build that trust in the team, and I think that would really solve this problem. And I think one thing that we should have actually talked about in the beginning itself is, uh, the impact of this problem, actually. Like, for our audience, I won't let that question go away like that. Uh, there must have been experiences where you would have dealt with consequences of having this friction in the team, right? So maybe the engineering managers, the product managers, the engineering teams out there, uh, who are looking at delivering successfully, I think they should be, uh, aware of the consequences of not putting some focus and effort into solving this problem before it becomes something big. So any of your, uh, experiences that could highlight the impact of this problem in teams, I think, would be appreciated.
James Charlesworth: I mean, like, pretty much every system out there that is absolutely laden with tech debt, where every product is late, is the result of this. It's the result of the breakdown between what the business needs or what the product owner needs and what the engineers are building. And I've worked on many, many systems that have been massively over-engineered because the engineers were given too much free rein. And so it was, you know, you know, not much tech debt, but really, really, really overcomplicated, and it took forever to deliver any value to any customers. I've also worked on systems that were massively under-engineered and they fall down and they break and there's bugs all the time. And that's because the engineers weren't given enough rein to do things properly. So you need to find that middle ground. And yeah, like, honestly, I've seen so many situations where the breakdown in conversation between product managers and engineers has just led to runaway bugs, runaway tech debt, runaway, like, people leaving their jobs. I've seen that happen before as well. Yeah, and that's, that's really bad. These are all bad outcomes for the entire business, right? You don't want engineers quitting because they don't get on with their Product Manager, and that's something I've seen before. Um, you don't want a huge amount of tech debt piling up because engineers are too scared to put their hands up and say, "Look, we're accruing tech debt here. This approach isn't working." So they're too scared to do that. So they just do the feature factory thing and they ultimately build up loads of tech debt, and then a huge bug is released. You don't want that. Um, but you also don't want engineers to be just left to it and put in a room for a month and come back with some massively elaborate, overengineered system that doesn't actually solve the problem for the customer. So all of these things are bad situations. The good situation is the one that is an iterative approach with feedback and collaboration between engineers and product. It's the only real way of doing it.
Kovid Batra: Definitely. Great, James. Thanks a lot, uh, for giving us so much practical and insightful advice on, uh, how to deal with this situation, and I'm sure this would be really helpful for a lot of engineering teams, product engineering teams out there to be more successful. And we would love to have you back on the show once again to talk about such insightful topics and challenges of the engineering ecosystem. But for today, I think it's time. Uh, thank you so much once again. It was great to have you on the show.
James Charlesworth: No worries. Thanks very much, Kovid.
‘Inside Jedox: The Buy vs. Build Debate’ with Vladislav Maličević, CTO at Jedox
In the recent episode of ‘groCTO: Originals’, host Kovid Batra engages in an insightful conversation with Vladislav Maličević, CTO at Jedox. The central theme of the discussion revolves around “Inside Jedox: The Buy vs. Build Debate”.
The episode starts with Vladislav recounting his 20-year journey from being one of Jedox’s first developers to stepping into the role of CTO. Moving forward, he sheds light on the company's vision, the transformation from an open-source project to a full-fledged cloud platform, and the various hurdles and achievements along the way, such as competing with industry giants like IBM and SAP. He also points out that many early team members remain with the company to this day.
Vladislav then dives into important decisions surrounding whether to build in-house or outsource various parts of their product, explaining that spending constraints often guide these choices. He also emphasizes the 80/20 rule (Pareto principle) and highlights the importance of integrating with Microsoft Excel as a key factor in their success.
Kovid Batra: Hi, everyone. This is Kovid, back with a new episode of groCTO. Today on our show, we have Vlado from Jedox. Welcome to the show. Great to have you here.
Vladislav Maličević: It's a pleasure to be here. Hi.
Kovid Batra: Hey, Vlado. All right. Like, before, um, I start off with a beautiful discussion with you around the age-old 'Buy vs Build' debate, I would love to know a little bit more about you, um, your hobbies, what you do at Jedox. So let's, let's start with a quick, cute intro about yourself.
Vladislav Maličević: Yeah. So, uh, my name is Vlado. The long name is Vladislav Maličević, uh, a long name. It's a, it's a Serbian name, and, uh, I'm coming, coming originally from, from Bosnia, but I've been living and working here in Germany for the past 22 or 23 years. I started, uh, 20 years ago this year, uh, with Jedox. I was one of the first, uh, employees, one of the first developers slash, uh, employees of the company, and went through the ranks over the years. I was lucky to follow the growth of the company and went through the ranks in the, in the engineering department. I was the Head of, uh, Development and the Director of Development, VP of, um, Development, uh, and later on added support and, uh, um, formed the cloud team back in the day. And, um, a few years back, I joined the C-level as CTO, with the company at 450-500 people today. Right. And it was an incredible journey, um, to, to, to look at, uh, from, from within, uh, to observe and participate in, in this, uh, in this long journey.
So, um, yeah, but more about personal. So I'm a, I'm a father of three girls. Um, I also have a sausage dog and, uh, yeah, with my wife, uh, we live, uh, with my, with our kids here in Karlsruhe in Germany, which is a university city, um, let's say, more, uh, in the southern part of Germany. Yeah. So that's, that's about it.
Kovid Batra: Cool. I think, uh, this is really amazing to see. I mean, rarely do we see someone spending such a long time, joining as an employee and then growing to the C-level over a 20-25 year journey. So that, that has been, uh, a first for me, actually, with someone on this show. I would love to know how it all started and what Jedox is about, uh, what your vision and the whole company's goals and vision were at that point of time. Now, 20, 20 years hence, how, how do things look?
Vladislav Maličević: Yeah, sure. Yeah, I mean, the only constant in life is change, right? And, and many things, uh, stay unplanned. Initially, I, I really didn't, didn't intend to, to stay with Jedox. I thought it was in-between, uh, just an in-between-jobs kind of thing, and, um, also the setup, it was a very small company, small office, uh, just a few, few people, uh, who, by the way, are all still with the company. So all the people that were in the company when I joined are still with the company, which is also one, one quality, I must say. Like I said, I initially didn't, didn't, didn't plan to stick around too long, but the challenge was there and it, it, it became more and more interesting from day to day. And, um, we were kind of, it's, it's easy when you have a kind of a black, uh, blank canvas, yeah. There is no product and then you start from scratch and you start building something, and you know, over time, you see more and more of a product and you, you see more and more customers, and it's sticking with, resonating well with the, with the, with this huge community. And then you also add the ecosystem to the, to the mix, you have partners in between the customers, growing globally, opening new offices, adding more people and things like that. So it is, it is simply, um, it was, it was an incredible journey. Usually you start off, like you said, either you hop from one, one, one job to another every few years and change, or you join as a, as a founder, right? Uh, you could also be, uh, it's not, not unusual to have a founder on the team, uh, being early on there and then, you know, doing something with the company and moving on, right? Indeed, I wasn't the founder, but I was one of the first early people. So I, I stuck with the, with the, with the product and with the company, and this, um, resonates well with my, uh, passion. I kind of map myself onto it, or I reflect a lot of my, my work life and life in the product that we built over the years.
What Jedox actually is, is, um, I mean, uh, we are a proud, uh, leader in the Magic Quadrant, Gartner's Magic Quadrant for, for EPM, CPM, or enterprise performance management, corporate performance management, or xP&A, as they call it nowadays. Being in the upper right corner, it was obviously not, uh, not, it was a journey. It's not like we showed up immediately there, right? From, from zero to hero, right? It took us a few years to move slowly through the, through the, from the lower left to the, to the upper right corner. And certainly, you know, competing there with the, with others, with the big names like Oracle, like SAP, like Anaplan, um, it certainly make, makes us proud, because we are by, by far a much smaller company, uh, by, by sheer size, and, um, to some extent also by history or by tenure, right? Um, but yeah, it shows that, um, you don't need a lot of people and a lot of money to build good products and, and make them stick with the, with the customer.
One of the things that helped us in the beginning, I mean, that, that's also, we evolved over time. Uh, one of the things that helped us in the beginning to put a foot in the door in the market is the fact that initially we were, um, actually we started as, as freeware and then switched to open source, which is kinda, you know, 20 years ago, things like Linux started showing up around. I mean, actually it was, uh, uh, Linux was, was way before, but, but around that time, there was like a boom of open source. And we were, um, I believe, the first product in the market to offer planning, uh, as open source, and that was a big shock in the market, and it helped us a lot to, to, to spread the word, uh, globally and become known in the market, although we had, uh, low or no, no marketing budget whatsoever. Right? Um, and then over the years we, we matured, we kind of, um, made a clear separation between, between open source and the commercial bit. And, and, uh, we curated both brands in parallel. But over time, we, we, we shifted focus; nowadays, we are focused totally on, on our cloud product under the name Jedox. And, um, basically open source is, is the past. It's also not something that we see in the market nowadays anymore in this, in this, uh, let's say in this bubble. It's relatively, you could say, it's a, it's a niche, but it's a quite, quite, uh, I wouldn't say lucrative, but quite, quite a big niche. It's a specific need from the business to be able to quickly plan any kind of data, usually finance data, but any kind of numbers, be it headcount, in any vertical, in any industry. Yeah. Nowadays, you see it everywhere; literally every company needs to do some kind of planning. And doing that with a tool like, like Jedox makes it less error-prone and, uh, very, very seamlessly integrated, allowing, um, you to connect to, to the existing third-party systems, um, connecting all the data from all the different systems that you usually find; on average, companies nowadays have 150-plus tools or services that they consume. Jedox is well-versed in, in accessing all these different existing products in the, in the customer's ecosystem and then combining those in, in Jedox.
In a nutshell, Jedox is, is a, is a platform. It's a low-code platform for building business applications, right, speaking less technically. But what you have in that platform are components. There's a lot of IP, Jedox IP, in there. You have your own in-memory database. You have your own ETL tool. You have your, your backend, middleware. You have, uh, a frontend for, for mobile, for web, obviously, and we have quite a good integration with, uh, Office, in particular with Microsoft Excel, which is kind of a go-to application for any business user nowadays, right? Most of the time, the journey of our future customers starts somewhere in Excel. They did something over the years in Excel, and, um, they built it, they invested hours and hours in it and they've been living in it, but, you know, over the years it, it became cumbersome to, to maintain it, multiple copies of it, multiple versions of it, uh, sharing it across the team or even teams, uh, error-prone, and it's, it's, uh, known as Excel chaos, which we actually try to, to solve. Right.
A lot of product, obviously; in 20 years we weren't sitting still, um, we were quite busy developing all that, and nowadays it's a quite extensive and mature, very grown-up, uh, enterprise platform for building business applications, right? And coincidentally, the majority of the, let's say, first-time users come from somewhere in the office of finance. Usually, that is the, that is the entry point where users come from. But, uh, it's not limited to that, right? It's just, usually, the entry point, but we spread quite quickly within organizations because they see the value of the product.
Kovid Batra: Got it. I think this is very, uh, interesting, competing in a landscape where you have MNCs and legacy players already there. You have been there from the very beginning, so the founders' and the company's belief in that respect on day zero and today, uh, would be very different, right? At that time, you guys might not have even imagined where you would be 20 years hence. Of course, people have a vision there, but what was it like for the Jedox team and the Jedox founders at that point of time?
Vladislav Maličević: Yeah, I mean, I mean the, the vision was there, but the, the vision, I could say that the, the vision was to, to rule the world, or rule the bubble, rule this, this, let's say, small niche, even back in the day. The appetite was certainly there, but we were also realistic. We knew that, you know, it, it would take a while to, um, even meet, uh, let alone exceed, the functionalities of the, of the established products in the market back in the day. Already, the market was there. It was booming. It was ruled by IBM. IBM was the absolute leader. A company called Infor was, was, uh, also quite prominent in the market back in the day. Actually, they weren't even called Infor back in the day, but through acquisitions, they, they grew into Infor, um, and they still exist, uh, to date. We knew we were on the, on the, let's say, on the lower end, and we weren't the disruptor yet, but one of the vehicles was, was definitely open source, and coming through the open source: uh, on the one hand, you have a, you have a behemoth, let's say, or a mature, um, established leader in the market selling, you know, I don't know, a couple thousand dollars per user, per seat, um, licenses. And then all of a sudden, this small team from Germany comes with a product that almost, almost, right, not, not really back in the day, but, but, um, almost matches the, the functionality, brings in, let's say, a subset of the functionality for no, no cost at all. It was open source and everybody was open also to contribute back in the day. There was no GitHub back in the day. We used to use SourceForge, sourceforge.net. That was, uh, that was the platform of choice back in the day, where we hosted our code.
The word spread quite quickly and, um, the adoption, we saw traction very early. I think I joined in October-November 2004, and we had the first version of the product, one that you can pretty much recognize even in today's products. So everything you need to know, everything you need to be able to work, you already had. Um, I believe we, we shipped in February of 2006. So it's a year and a half; it took us 18 months to put, uh, put the product together, and already there was, uh, an in-memory database. We had a frontend for Excel. We also had, uh, let's say, some primitive way of ingesting data, let's say, um, some, some baby version of, of ETL within the product. We had a predecessor to our, uh, today's web frontend. We had it, uh, it was, uh, Web 1.0, the old, old-school, uh, web frontend that was already connected to, to, to this, um, to, to Excel and to the database. So we, we had a web frontend. So we were ready to, to, to rock, or ready to run.
Later on, additions came in, including ETL, including a modernized version of, uh, the web frontend. And nowadays, obviously, everything is happening in the web and you are doing also the authoring within the web; Web 2.0 was a thing back in the day, and we quickly jumped on the boat. Later on, other innovations happened: the shift to Kubernetes, so, microservices and things like that, going from the legacy. I mean, actually, the, the first shift was the cloud. Cloud was the thing. There was no cloud back in the day, right? Maybe there was some hosting somewhere, but usually customers were running it on their own, even on their laptops or, um, within their corporate network, a client-server kind of thing. And then later, 2012-2013, we saw cloud kind of picking up, and this is where we started our excursion into cloud. And from there, um, we moved on. Today, we are a cloud company, a SaaS product.
Kovid Batra: Yeah. So, in this, in this whole journey, I think you survived, in fact, you, you thrived as, as a product, as a, as, as a company, right? Of course, you mentioned that you became an open-source product, right? So that, that was one critical move which probably helped you a lot in exceeding what your competitors or counterparts were doing, but there must have been much deeper technical decisions too, and with this question, I think I'm trying to understand from you how many times you had to take those critical calls which impacted the business in an immense way, and whether those were decisions where you were building products in-house or outsourcing them, and how, how did that journey come along, now, 20 years hence, when you look at it in, in, in retrospect?
Vladislav Maličević: I mean, in retrospect, I wouldn't say there were too many critical events, right? The situation in the market is dynamic and you have to react on, literally on a daily basis. You have to make decisions, uh, really, literally, some, some important decisions are made on a daily basis. However, the strategy, you don't change every, every two days. I would say we had three, four waves, uh, over the course of 20 years. For example, the cloud decision, to go all in on cloud. Um, it took us a while, right? Because, I'd say, uh, first of all, the market itself is quite conservative. You are usually working with financial data. Financial data is very critical. People are not eager to, to expose financial data outside of their corporate network. So when the cloud showed up, it was kind of, oh, do we, do we, do we even jump onto this boat? And, and I remember, uh, vividly, there were, there were like pros and cons, and, and there were voices in the company. I would say the majority of the voices were, were, uh, against cloud. "Hey, nobody will jump on this boat." "Hey, nobody wants to put data in public cloud." "This will not fly." And indeed, it didn't, uh, it didn't, uh, didn't fly immediately, right? It took us a while, and depending on the market, obviously here in Germany, it's quite, I would say, a bit of a conservative market. So it takes a few years for, for things to become mainstream. And for the adoption, one thing that was technically, I wouldn't say, anything special, but was a smart move by Microsoft back in the day, was when they introduced, uh, something, a thing called German cloud, um, which was kind of an idea to, to bring in a sovereign German cloud, on German soil, operated on German soil, by a German company, right? Disconnected from the rest of the world, kind of from, from the corp in, in the US and things like that. This kind of brought more trust into it, of course, with additional marketing and massaging customers, um, across the DACH region, Germany, Austria, and Switzerland. But it definitely helped in adopting cloud more. And then a few, few years later, all of a sudden, it became, you know, cloud became a commodity, somewhat delayed, but it became a commodity even in Germany. And then it was a no-brainer. Yeah, okay. Let's go. You don't have these conversations anymore, or, or very rarely. Yeah.
That was one thing which was kind of critical back in the day. We started talking about it 2011-2012, but the real push came around 2015-2016. So it took us a few years to come from zero to, to really, hey, full-steam ahead, let's, let's, let's do this, this cloud thing. That would be one thing. Then, I think, being close to, to, to Excel. So initially, being open source helped promote the brand. You would need to spend millions and millions globally on marketing to spread, spread the word about Jedox, um, normally, and with open source, it kind of went word of mouth and it very quickly spread. Um, again, context, right? We're not in the Google business, right? We're not a commodity that every consumer is using, but let's say, in our bubble, the word spread super-quickly. And, um, later, I remember, we acquired one company, uh, in, in Australia, uh, back in the day, and, and before we acquired them, they ran as a partner for a few years. And, um, I had the pleasure of talking to, to one of the founders of that company, and he said that he remembers well when we announced the first initial version. He was back in the day using IBM, and he read on some forum that there is, um, there is a new product called Jedox and it's open source and it's free, where you can download it and use it, and, and it does almost everything, um, that we used to pay for, and he wrote emails to his colleagues saying, "Germans are coming." And I use that reference often. It's quite, quite interesting because, hey, where's Australia from here, right? On the opposite side of the world, but it really, uh, the word spread quickly. So I think that was a good decision, to go with open source initially. On the other hand, it didn't create any traction, I must say, very little, almost no traction, in the development community, right? We had it, we had it up, but we were the sole maintainers, very little input from the community, right? Maybe it's too technical, yeah? And again, maybe it's the context and the niche which we are in. It's not some commodity that everybody needs on their desktop. So maybe that's one thing.
So: open source, cloud, and being close to Excel, which I think was always a good thing. There's this big question: what is the most common functionality in every enterprise software? It's actually 'export to Excel'; in every enterprise software, you will find that button or option. There are certainly hundreds of millions of Excel users globally, and they are a kind of citizen developer. If you look at it from that aspect, we are close to them in both senses: we are very compatible with Excel, we integrate well with it, we understand the Excel format well and can import and export it. But beyond that, we also embraced the concept of spreadsheets itself. I think that was also the right choice.
Kovid Batra: Yeah, I think that most of the time it really helps, rather than going out and introducing something completely new that is not part of the user's existing behavior. It might be a hit, of course; there are revolutionary ideas that are not part of existing behavior, and people then form a behavior around them. But mostly, what I've seen is that when a product team builds close to people's existing behavior and the way they use current solutions, adoption is much easier; people understand it, relate to it, and quickly start using it. Then, of course, you can put them on a journey of gradual learning where you introduce new features and more services, and they grow from there. But the initial hook should be close to the existing solution while still offering something very impactful, with more value than the current solution gives them. So yeah, I think it's a mix of a few good decisions. There is no single magic bullet that sustains a business or makes it thrive in this market. It was your constant eagerness to learn and explore, and then changing and adapting to what was coming in, like cloud; and at every point you understood user behavior and market trends and moved in that direction. That really worked out well for you.
A few more things I would want to learn here. On this journey, when you're building such an immense product used by thousands and maybe millions of users (I'm not sure how many users you have right now), I'm sure there are hard decisions to make about building something in-house versus acquiring other companies, like the one you just mentioned. When there is a decision between building something in-house and outsourcing it, can you give us an example from your journey and how you concluded on decisions like that?
Vladislav Maličević: Yeah, some of those decisions were made quite easily due to constraints. If money is your constraint and you have a few idle people on payroll, obviously you start building, especially when you don't understand the magnitude and complexity of the problem. You don't see the big picture in the beginning; you go into it somewhat naive. I can say that, I mean, I was quite young at the time. If you don't know the magnitude of the problem, if you don't see its whole scope, and you have a white canvas, you go for it and simply give it a try. So in the beginning there was actually no decision to be made about whether to invest or acquire or whatnot, because the money was not there. Let's say that in the beginning the majority of decisions were "build". Then, over time, you have the opportunity to change: you can keep the core of the product to yourself, and whatever is not core you can try to outsource. The good thing with a platform like ours is that once the platform is in place, you can start building applications on top of it and turn those applications into products. For example, one more thing that happened last year: we entered another Magic Quadrant, for financial consolidation, which is an off-the-shelf product separate from our core product, but built on top of the platform and sold separately. There are obviously players in the market who do just that, who have a product just for that, but we built it on top of our platform. So you get the platform, you get financial consolidation on top of it, and you can build any kind of business application with it. Once you have applications like that, and if they are easy to package, as they are with Jedox, then you can come up with things like a marketplace, which we do have. We put these applications in the marketplace, you can easily install them from there, and they cover 60, 70, 80, some even 90 percent of the functionality out of the box. Then you just fill them with life, with your information, and off you go: it's configured and ready to use.
So the decision is definitely shaped by spend constraints; you have to be cautious about spend. Similarly with cloud: do you go to the public cloud, or do you build and host yourself, colocating your own servers somewhere? Someone needs to make that kind of decision, and in the beginning, maybe decision-making is super easy: of course, you buy a pizza box, put disks and CPUs in it, and colocate it at your nearest host of choice. But if you want to run at scale, things like compliance come into play, and you need attestations: ISO, SOX, CSA STAR, and whatnot. You cannot manage those anymore on commodity hardware that fails every three months, where something breaks and you need to take the system offline. This is where you go into the cloud, use services, build it like Lego by cherry-picking the services you need, and outsource that responsibility to an infrastructure provider of your choice. Same with code. Usually, in the beginning, if you have monetary constraints but you have the means, just a couple of people should be sufficient to get you going. That's the thing with the Pareto 80/20: with 20 percent of the people, you can build 80 percent of the product. But the last 20 percent of the product is the hardest, and then you need the additional 80 percent of people on board. If you have the means, this is where you can decide whether to outsource it, have it built, or take a managed service or OEM solution for the particular case. Nowadays it would be totally different; you look at it from different perspectives and dimensions, with cost being just one of them.
Kovid Batra: Yeah.
Vladislav Maličević: So yeah, it depends on the context, on what kind of setup you are in. We are scaled up and a mature, profitable company nowadays, so we can afford to take a bit more time over decisions such as outsourcing or acquiring pieces of product into the whole. Whereas when you are at the beginning, you usually only have an idea and your free time, so you roll up your sleeves and you code; you usually don't have the money to go and buy expensive services from others.
Kovid Batra: Cool, Vlado. I think that's interesting. We are running short on time, so we'll have to wrap up now, but it was a really interesting talk. I would love to talk more and learn the details of what happened in those 20 years and which challenges you found hardest to solve, but that would need another episode, and I would love to have you back for one, absolutely.
Vladislav Maličević: Thanks a lot. Yeah, let's do that some other time.
Kovid Batra: Sure, absolutely. All right, so that's it for today. Thank you, Vlado, thank you for your time. Looking forward to hosting you again very soon.
Vladislav Maličević: Thanks a lot. Bye-bye.
Kovid Batra: All right. See you. Bye.
‘Thriving in Recession: Guide for Tech Leaders’ with Leonid Bugaev, Head of Engineering at Tyk
In the latest episode of the 'groCTO Originals' podcast, host Kovid Batra engages in a thought-provoking conversation with Leonid Bugaev, Head of Engineering at Tyk. The episode delves into 'Thriving in Recession: Guide for Tech Leaders.'
The episode starts with Leonid sharing his background, his approach to balancing work at Tyk with side projects, and the key differences between remote and distributed companies. He explains the impact of economic downturns on businesses, stressing survival as the primary objective. He also shares communication techniques for announcing layoffs to developers and explores the challenges of managing teams and maintaining operational efficiency in difficult situations. Leo advises engineering leaders to prioritize customer retention and to think in business terms rather than purely in engineering and R&D terms, and he suggests encouraging employees who stay after layoffs with additional bonuses and learning opportunities.
Lastly, Leonid concludes with essential advice to view change as a driver of innovation and growth rather than a threat.
Kovid Batra: Hi, everyone. This is Kovid, back with an all-new episode of the groCTO podcast, formerly known as Beyond the Code. Today we have a very special guest with 18 years of engineering and leadership experience. He's currently heading Engineering at Tyk. Welcome to the show, Leonid.
Leonid Bugaev: Hello.
Kovid Batra: Great to have you here.
Leonid Bugaev: Yeah, glad to be here. It's a good opportunity, and a very interesting topic was suggested. I've been in IT for the last 20 years, so I've seen a lot: companies rising and falling, trying various technologies. I've been very deep in engineering, and for the last 10 years I've been in more leadership-oriented roles. So it will be interesting to share some of this experience, I guess.
Kovid Batra: Sure. Absolutely, absolutely. But before we get started on today's topic, which is very interesting and which I think nobody has talked about yet, even though it has been with us for the last few years: for the audience, I'm putting it out loud that we are going to talk about how to navigate and lead dev teams during a recession. That's the topic for today. But before we jump in, Leo, we would love to know a little more about you, something about your hobbies or your personal life that you think would be interesting to share with the audience. So please go ahead and tell us who Leo is.
Leonid Bugaev: Yeah, absolutely. I'm a person who likes challenges, and I was always into startups, side projects, and that kind of stuff. I had my first business at 17, in university, and I've always worked for startups and really enjoyed it; I've never really been in a big corporate environment. It's always a fast-paced rhythm, and I really enjoy it. As for now, I'm currently living in Istanbul. I have two kids and a beautiful wife. My hobbies and my work are a good intersection: to this day I have a lot of side projects, some of which I even try to monetize, some just for fun, and I always stay up to date, especially with the current AI hype. So I'm very, very curious. And, yeah, that's it.
Kovid Batra: Great, great. And what about your current role at Tyk? When you are heading engineering for a multinational company with offices in different parts of the world, what is that experience like? And when you say you are very curious and have a lot of side projects, don't you find that contrasting with how you see things in daily life?
Leonid Bugaev: Well, I don't know if it's actually contrasting or not, but an interesting thing is that I would probably never again work in an office. That has definitely affected my life: for the last 10 years I have been working fully remotely for various clients in the US, in Europe, et cetera. It changed my lifestyle a lot. It changed how I manage my work-life balance and how I find time for my side projects, because it allows you to save some time. And being a distributed company is interesting; there is a difference between a remote company and a distributed one. In a remote company, you have an office in your country and employees work from other cities in that country, still close to each other. Being distributed means people are literally spread across the world, across many different time zones. That is very challenging for building teams; communication, the channels, and being efficient in communication become super important, actually essential for the survival of such teams, I guess.
Kovid Batra: Yeah.
Leonid Bugaev: So it has for sure affected a lot how I think, how I build teams, what kind of people I hire, and so on. Yeah.
Kovid Batra: Perfect. Great. All right, thanks for that quick intro, Leo. Let's move on to our main topic for the day and start discussing the economic crises that happen from time to time in the world. The latest one is something we are going through and are almost out of. We are really not sure about it, but we have seen the, yeah.
Leonid Bugaev: I highly doubt it.
Kovid Batra: Yeah, but we have definitely seen its consequences from various angles, in our society, in our companies, everywhere. So I first want to understand, from your perspective, how do you see these economic downturns, and how do they affect companies and businesses when they come?
Leonid Bugaev: Yeah. Overall, when you live long enough, you start seeing the patterns. When it repeats again and again, you are not that surprised anymore, and you roughly know what to expect and what to prepare for. I think that's one of the most important things to understand here. Economics always moves in cycles. We had a period of cheap money: low interest rates, a lot of loans, a booming market, everyone getting investments, and so on. Now we are in the opposite phase. Money is very expensive, loans come at high interest rates, and the way companies get treated, from the outside and from the inside, changes dramatically, because the values inside the company change a lot. I think one of the main things to understand about these times is that, first of all, it's a time of opportunity, because the ones who survive this period will afterwards get a very nice bonus and a very big boost.
So your first goal as a company during such times is to survive, and that's actually not that easy, especially if the company got used to VC money, constant growth, and so on. As I mentioned, I have some insight into how this works, and right now getting investment is much, much harder. In the past, it was enough to convince investors that you had some traction, good ideas, numbers, et cetera. Now they're looking at cash flow: How much money do you have in the bank? How much money do you spend, and how much runway does that give you? Are you profitable or not? I remember the times when acquisitions were measured in Instagrams: this company was bought for two Instagrams, that one for three, and some of them didn't even have revenue; the market was booming. I also see a lot of consolidation happening in the market. Applied to our market, API management, I see some vendors literally going bankrupt, and the same in the wider industry; some get bought by bigger vendors, et cetera. So these are very challenging times. As I mentioned, survival, not even growth, is probably the main idea you need to internalize when going through a recession, and it involves a lot of steps and a change of mindset in how you view a company and its values. Yeah.
Kovid Batra: Cool. Totally. I think it's very important to understand, as you said, that these are patterns and they are bound to happen. With that, I want to move on to something I feel everyone should keep in mind: whether you are in the engineering department or any other department of a company, you should be financially and economically aware that these things can happen anytime, and you need to be prepared for such turmoil. That applies not just to individuals but to businesses as well. What's your thought on that? How have the companies that came out of this situation well been able to navigate or handle it better?
Leonid Bugaev: Yeah. Well, first of all, my role is engineering leader, and I understand that if you spend 90 percent of your time with engineering and not with the leadership team and the business, you're doing the job wrong, especially in such times. In such times, you need to spend far more time communicating with and understanding the business side: how you actually earn money. If we're talking about metrics, actual money on the table becomes the main metric, and you need to clearly understand where you spend money and what your return on investment is. That's why, during such times, you may see some research and development projects get closed, and some optimization of the talent that perhaps grew too much during the good times. The important part is that if it's not done right, it can actually harm the company. If your cash flow is not where you want it to be, and the leadership team is not technical and you don't have a good connection with them, the decisions are still made from above; and if they don't fully understand the product, and you don't fully understand the product, there can be consequences. All of it needs to be synchronized with the business.
Kovid Batra: Right.
Leonid Bugaev: And so, if you offer multiple products to customers, you need to clearly understand how much money each of those products actually brings in, and how much time you spend on customer support, on development, and so on. You need to manage all of this, even in spreadsheets, and the same with money: understand things like budgeting for tooling, for HR, and so on. I know that for a lot of people, especially from engineering, it's very hard to talk about money and to deal with these kinds of routine tasks.
And I've been there myself. It's still very hard for me; it still causes procrastination and fear, and I just want to build things. But the tip here is: if you are not able to optimize the money side, you will not have a company to work with or for, and you won't be able to build things anymore.
Kovid Batra: And I think the biggest challenge for someone coming from a technical background is getting that context right. The first step is understanding what exactly the business is saying. As a leader, as you said, you have experienced it yourself, and I'm sure you've been in that place where you built that understanding. But then you have a whole team to lead along and align with the business goals, and that's more difficult than understanding it yourself. As an engineering leader, I have day-to-day exposure to the business and the product, so it's a little easier for me to understand what's going on, gain that context, and align my thinking to it. But delivering that context to someone with less exposure to the business and the product is different. Let's be frank about it: we try to bring developers to a point where they have complete customer empathy and exposure to everything happening in the business, but there is a layer where leaders, engineering managers, and product managers are talking, people who have context from both sides, while developers are still in a silo; when you start explaining these concepts to them, it might not come easily, right?
Leonid Bugaev: Yeah, it's really tricky. Especially when a company starts to mature, you have to build the layers: managers, managers of managers, the board, et cetera. And not everything can be exposed; sometimes you can expose it only at the final minute. There were a bunch of posts on Hacker News with examples of people getting fired via Zoom without previous notice, and I've been through similar situations. There is always a story behind it. It's not easy, and there is no easy answer for how to deliver such news. Working in this role, I've been through multiple leadership trainings, and one I found very interesting basically interviews you over multiple sessions and builds a psychological profile of your values. It's quite an interesting document in the end; you can understand yourself better, and you can share it with your peers so they understand what kind of person you are and what your values are. My profile was quite unique in the sense that I had two major motivators, as they call them. The first was 'peace and harmony', and the second was 'enjoy life and be happy'. That really contrasts with what I sometimes actually have to do when such things happen. It required a mental shift in how I approach these situations: how not to blame myself, how to be more peaceful and reasonable, and how to explain to the people I manage why these changes are required.
But it's never easy. It's never easy, but once you have a clear picture and a clear message, especially if that clear message comes from the company and not just from you ("this is the direction we're going, this is what we need to do"), it becomes much easier. For example, at Tyk we try to share all our financial numbers, churn, new customers, et cetera, in a company dashboard, and we share it with everyone publicly in our Slack every month. Every month, everyone can see our numbers and where we're going: are we good, are we bad? And every few months, we have a call with the leadership team where we also try to be open about the challenges. Obviously, sometimes you can't mention everything; we had one round of layoffs about a year and a half ago. But you can start preparing people for the challenges, you can start showing the data, and it takes some time for people to prepare as well. It obviously also raises anxiety, and you need to deal with that anxiety somehow. That's where personal relationships with your team are very, very important. You can't treat them just as employees; you have to be very close to them so that they trust you and your judgment. But yeah, it's challenging. It's challenging.
Kovid Batra: I totally understand, and I feel you there. In these situations, I think the most important part is to first keep your calm in place, and then treat everyone as humans, not just as employees. That's the biggest factor in how you communicate. Of course, it really matters what the company communicates, and that within that concise communication you lay out the next steps, because for any human this is very true: when you share bad news and leave it open-ended, people run into chaos. They don't know what's going on, they form their own interpretations, and they decide their own next steps. But when it comes with empathy and clear, concise communication, and you as a leader are connected and have your calm in place, you can navigate the situation much, much better than someone who isn't doing this.
So, in such a situation, can you recollect some incidents, some anecdotes from that time, that would be good examples of where you took care of these aspects and felt you did right? Where a person came to you asking what was going on, and you were able to help them understand what exactly happened and what things could look like in the near future? Has anything of that sort happened with you?
Leonid Bugaev: Yeah. First of all, as I mentioned, not all information can, unfortunately, be shared with the people you manage, and there comes a period when you are on your own: you know the news, but you can't tell them. It's a tricky situation when you jump on a call with someone knowing, for example, that they will be leaving, but you can't say it. The last time we had a similar situation, it was very important to also track some metrics, because there is one case where you had to let go of a very good employee and everyone is asking, "Why? What happened?", and nobody knows why; and another case where there is a known issue, one you maybe tried to fix with a performance improvement plan or similar, and it's backed by metrics like sprint points, especially if such metrics are visible to your engineering leaders and your managers. In such situations there will always be people who are confused, afraid, angry, and so on. What's important to accept is that you can't please everyone; it's a bad situation from all angles, no matter how you take it. But it's very important that your core team, your managing team, is prepared for it and has the right answers to the questions. In advance, before it all happens, I've tried to prepare a questionnaire of whatever questions they may get from the people they manage. It makes them much calmer and makes things much easier, because they know what to answer.
And another factor: one case is that a person leaves without any exit package; another is knowing that this person will be treated well, that the company will give them a good exit package, and so on. Having such details and mentioning them to the managers is also very important. You need to clearly explain the reasons, and these should be very valid reasons, and give them all the documentation and all the numbers.
Kovid Batra: I think it becomes all the more important to have this level of clear communication, better performance reviews, and understanding, both for yourself and for the person you're talking to. Starting with clear, concise communication that is transparent enough to earn trust, and then doing proper performance reviews, with even more emphasis in those times, because people in a chaotic state of mind will ask for explanations. So for sure, these are some of the leadership techniques one can learn from this discussion for navigating this particular situation. Apart from this, yeah.
Leonid Bugaev: I just wanted to add one more thing, actually from the positive side. The trick is that sometimes such a shake-up is actually a very good thing for the company. Sometimes you get used to a more relaxed rhythm where everything is good and everyone is doing well, and people start to be calm and relaxed. When we actually did this round of layoffs, and, as I mentioned, we focused only on people who had identified performance issues, we found that our overall velocity and ability to deliver actually increased.
Kovid Batra: Yeah.
Leonid Bugaev: So that actually went really well for the company.
Kovid Batra: And I think it should be like that in those situations, because the leadership and the founders become more aggressive in such situations; they have to adjust and adapt to what is going on and fight aggressively through it. Similarly, if as a team and as a company you're not doing that, there are of course chances you will fall apart. So it's definitely good, what you are saying; it puts everything in place, from a leader's point of view, to deliver what is needed in those situations. Cool. And on that note, while we're talking about optimizing things and going through this stress, there is a lot that needs to be done in those times, because you now have fewer resources and you have to deliver more, right? So how do you take up that challenge? However we may put it, when people feel the fear of losing their jobs, that situation sometimes pushes them to do even more, but people also take a backseat; they know that anything can go wrong. So how do you manage teams and deliver operational efficiency in that situation?
Leonid Bugaev: Yeah. If your company has done it at least once, people will expect that more may come in the future, and the overall level of anxiety will rise no matter what you do; you can't just smooth it away. But if you handle it with, as I mentioned, clear communication and real openness about your actions, they start to understand it. And it's not only about managing these actions while they happen; it's also about giving people what actually creates a feeling of productivity and of feeling good, which is usually progress: seeing that you are progressing somewhere. That can show up in various things, for example in enhancing performance reviews, as you mentioned. One of the actions we took was to provide a very clear format for how performance reviews should be done, focused not on punishing people if they miss some goals, but on working together with them and providing opportunities to grow. "You want to go through a Kubernetes certification course? No problem, we'll give you budget for it." Let's give opportunities for learning. The people who remain in the company should be encouraged and motivated, maybe with additional bonuses; and if your company offers it, it sometimes also makes sense to motivate with additional stock options, salary bumps, or similar. That's also essential.
Product metrics are also very important, because when a team is working on some feature or product and has no visibility into whether it actually brings money to the company, no feedback loops from the customers, they feel disconnected from the business and don't really understand what's happening. But when you start connecting them with the business metrics, when you stream them the customer feedback, they start to feel part of something bigger. They get faster feedback loops. They start to see: we got this client because of you guys, because you built this. And that brings a lot of motivation. Product feedback loops are one of the most important things for motivating your teams and improving stability. Yeah.
Kovid Batra: Yeah. And I think in these times it becomes even more important, right? Fewer resources, you have to move fast, you have to innovate more to stay up in the game. In general, the ideal is that one should always work on high-impact features, but in these times it becomes even more critical. You have to focus on these areas and work on these metrics. You can't let your customers churn in this situation; you have to do everything possible to keep your customers there and paying you. And of course, that cannot happen without prioritizing in the right direction. So in these situations, how do you ensure that things move in the right direction at an even faster rate?
Leonid Bugaev: Yeah, I think it's also important to understand that the overall company structure, and how the teams are built, makes a lot of difference, and sometimes it makes sense to rethink how you structure your processes and your teams. Let me give you an example. We came up with a scheme about two years ago, and right now we're refining it; we are actually constantly refining it. First of all, constantly revisiting and changing your processes is a key thing; don't be afraid of change. Second, don't be afraid to hire people who do a good job at the things you don't like to do. For example, I'm good with people and good at engineering, but I really don't like, and I'm pretty bad at, budgeting, long-term planning, and various kinds of reporting. A few years ago, when we started scaling the team, there was a steep jump, from about 10 to 50, a five-fold change. We realized that to keep up this growth, we needed to support operational and delivery initiatives, so we hired a really amazing person, with whom I had a really good bond, who took some of those responsibilities off me. The same applies to product vision. When you grow, especially with multiple departments and teams, you need someone to hold a coherent vision so that all those teams work together and everything they deliver connects; you develop as one team, which is much more efficient than everyone working separately. Such teams also gave us enough time to build good benchmarks and good metrics for the product: what we want to measure, how we want to measure it, and what exactly we want to build. This process is super important. Especially in such times, going very deep into engineering is not the answer; you need to optimize your product for the customers, for churn, maybe for user experience. It's not the time for research and development.
So yeah, we've changed a lot. Right now, for example, we're also trying to change and optimize some of the processes by involving our technical leads even more in the product area, in non-engineering tasks. Together with the product managers, they plan roadmaps in advance, find blockers, and try to think as a product, as a business, from the money point of view: How are we going to sell it? What kinds of customers and jobs-to-be-done do we want to cover? We're trying to teach people to think business-wise, in business language, not in engineering language.
Kovid Batra: Makes sense. The understanding I'm building here is that navigating such situations requires the right culture to be in place with people, and along with that, more focus on the important metrics that will be impactful, whether from the operational-efficiency point of view or metrics that link directly to customer satisfaction and engagement. A combination of culture, the right operational metrics, and the right product and business metrics sums up to a framework for how teams should operate in such situations. We've talked about the operational part and customer-centricity, but I want to understand: when you're trying to establish this culture, communicate that things have changed, bring in transparent communication, and bring people together, how do you ensure that people are really on board? Do you go the extra mile to learn that from your teams? What exactly do you do to understand whether everyone is getting on the same boat or not?
Leonid Bugaev: Yes. First of all, when such big changes happen, they go through multiple phases. First there is a preparation phase, during which I usually try to prepare the engineering leaders with the message, as I mentioned, in advance, while the majority of team members still don't know about it. When a big announcement happens, it's usually company-wide, from the founders or the board, and then it should be followed up at the team level with more private calls. It's very important that every team has a 'safe zone' to speak up. People feel safer in a smaller group, especially within their own team. They also need to feel that you are part of the team, not just some boss from the top, so they can speak openly with you about problems and concerns. And when you join such calls, you should be super transparent and super open. You can't afford to say "I don't know," and you can't afford to say "I can't tell you this," or similar. You should be confident and able to answer any kind of concern, or follow up on it later. If you are not confident, they will not be confident either, and that's super important.
So preparation, getting all the questions and answers ready in advance, is super important. Another thing: when you build the management hierarchy, a lot also depends on cultures. Working in distributed teams, I've realized that people from different cultures communicate very differently. Some are very open, some are very direct, some are afraid to ask questions in public. When you know these nuances, you can handle it better. You need to know your people, you need to know your team. For some team members who are quiet, where I can see by their face and their emotions that something is not right, I will follow up directly one-on-one; or, if I know they have a very good relationship with their manager, I will ask the manager to follow up with them and to bring any concerns to me directly. But you need to be constantly on the line and constantly available for any kind of communication. Especially for the first week, it should be your main job: communication, communication, communication. Your main job when such things are happening is to come to people, give them answers, and be available to them. The worst thing you can do is send an announcement, then close Slack or whatever you use, not answer, and come back a few days later. That's the worst thing.
Kovid Batra: Cool. That's some really hands-on advice, coming from someone who has seen these situations; it seems very helpful and we can totally relate to it. Now that you've helped us understand how things should move in such situations, what is your forward-looking strategy, or the major pointers someone should take away from this whole discussion? I'd like to quickly summarize, as we are running out of time, and hear your last few important concluding pointers.
Leonid Bugaev: Yeah, right. Number one: stop being afraid of money-related questions. Money becomes the number one essential metric, and you need to understand how the company makes money, how the product makes money, which exact parts of the product make money, and which of them don't and actually drain your money. You can't do that without proper metrics in place. Configuring product usage metrics is super important, but having team metrics is equally important: knowing how much time you spend on each category of tickets and features, what your change failure rate is, and the cycle times for specific product features. In the end, if you know, for example, that your team spends $20,000 per sprint, then when a customer asks you to build some feature, you can say: okay, it will take us one sprint to build, so it will cost you $20k. Is it actually worth doing or not?
Kovid Batra: Yeah.
Leonid Bugaev: So money becomes very important. And when you need to make the hard decisions, communication is the key, and transparency. If you as a leader don't have the answers, if you're not confident, people will not be confident either. You should be very close to your team; they should treat you as a team member. You should build those lines of trust beforehand. You can't come in one day and say, "Hello, I'm your friend." It takes time. Especially if you have a bigger, more complex team, you should have a very good chain of communication. Another thing is that you should have a very good performance review process and performance improvement process. Everything should be audited, everything should be written down; it will help you so much in the future when you need to make hard decisions. And when you make them, when, for example, you need to lay off people, the best thing you can do as a leader is to be with the people: "I'll be available." Your calendar should be open to anyone, and you should proactively follow up with everyone, answering all questions and being as transparent as possible.
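As an editorial aside, Leonid's sprint-cost arithmetic is easy to turn into a quick sanity check. Here is a minimal sketch of that calculation (our illustration, not code from the conversation; the function names, the $20k sprint cost, and the return-multiple threshold are all hypothetical):

```python
# Hypothetical sketch of the "is this feature worth one sprint?" arithmetic.
# All figures and thresholds below are invented for illustration.

def feature_cost(sprint_cost: float, sprints_needed: float) -> float:
    """Rough cost of a feature, given a fully loaded cost per sprint."""
    return sprint_cost * sprints_needed

def is_worth_building(cost: float, expected_annual_revenue: float,
                      min_return_multiple: float = 3.0) -> bool:
    """Crude go/no-go: demand some multiple of the cost back in revenue."""
    return expected_annual_revenue >= cost * min_return_multiple

# A team spending $20,000 per sprint, asked for a one-sprint feature:
cost = feature_cost(sprint_cost=20_000, sprints_needed=1)
print(cost)                                                      # 20000
print(is_worth_building(cost, expected_annual_revenue=45_000))   # False
print(is_worth_building(cost, expected_annual_revenue=90_000))   # True
```

The point is not the specific threshold but the habit: once cost per sprint is known, every feature request gets a price tag, which makes the "is it worth doing?" conversation concrete.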
Kovid Batra: Makes sense. Makes sense, totally. It is very, very important to communicate to the teams that they should stay involved during these phases in constant innovation and learning, so they can find the areas where they can actually create impact even in such times. Motivating them in that direction, telling them that things will be fine if we move with this motivation of continuous learning, improvement, and steps toward more innovation, would definitely bring that change and make it easier for everyone to navigate such situations.
With that, I think we come to the closing notes. Any parting advice, Leo, for our audience, the aspiring and passionate engineering leaders out there? Anything you want to share?
Leonid Bugaev: Don't be afraid of change. Change is always frightening, but it's essential for innovation and growth. That's the major advice, I think.
Kovid Batra: Perfect, man. Perfect. All right, thanks a lot, Leo. It was great having you on the show; really insightful thoughts. Hands-on experience always speaks for itself. So cheers, man.
Leonid Bugaev: Thank you so much for having me here.
‘Maslow's Hierarchy for Tech Teams’ with Sergio Visinoni, Fractional CTO and Tech Advisor
In the latest episode of the 'groCTO Originals' podcast (formerly 'Beyond the Code'), host Kovid Batra engages in a thought-provoking discussion with Sergio Visinoni, a Fractional CTO, Tech Advisor & Mentor at groCTO Community. He's also the author of the newsletter 'Sudo Make Me a CTO' and runs a tech leadership coaching startup as a solopreneur. The heart of their conversation revolves around 'Maslow's Hierarchy for Tech Teams'.
The episode kicks off with Sergio discussing his background & then introducing a framework inspired by Maslow’s Hierarchy, categorising tech team maturity into 3 levels: vital infrastructure, application service quality, and developer performance. He explains how this framework helps align technical and business strategies, identify issues, and communicate needs to business leaders. Through a case study, Sergio illustrates that high feature output doesn’t always equate to high performance.
Lastly, Sergio addresses challenges like standardizing DORA metrics across diverse teams and justifying infrastructure needs, emphasizing how the framework aids in balancing stability and performance for data-driven decision-making and effective communication.
Timestamps
00:57 - Sergio’s background
05:28 - Creating Maslow’s hierarchy for tech teams
Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today we have a very special, humble guest. He comes with 24-plus years of engineering and leadership experience, has been a tech mentor and a fractional CTO at multiple startups and orgs, and is the author of the newsletter 'Sudo Make Me a CTO'. Currently, he's a solopreneur running his own tech leadership coaching startup. I would like to welcome our guest from Spain, Sergio. Welcome to the show.
Sergio Visinoni: Hi, everyone. Hi, Kovid. I'm really happy to be here.
Kovid Batra: Same here. So today, Sergio, we have a very interesting topic to talk about, and I derived it from the previous conversation we were having: Maslow's hierarchy for tech teams. I think it's very interesting; it relates to personal life too, how Maslow's hierarchy plays a role in our lives and how that maps onto tech teams. But before we jump into that, the audience would love to know a little more about you. You can share personal things too. Let us know who Sergio is. Over to you.
Sergio Visinoni: Yeah, sure. Thank you, Kovid. So before we get into the very interesting topic, let me bore you to death with some personal details. No, jokes aside: I'm Sergio. As you said, I've spent more than 20 years in the tech industry. I'm originally from Italy, and I've lived in many countries, mostly in Europe, but I also spent some time in Mexico before moving to Spain in 2016. The biggest chunk of my career has been in online marketplaces; that's where I went from being a software engineer to being a VP overseeing more than 300 engineers across multiple countries. That's been the peak of complexity I've dealt with. After that, I got my hands dirty again with smaller companies and startups, until finally, at the end of last year, I took the decision to move out of the traditional employment setup, because I felt I wanted more flexibility. I have a family, I have two kids, since you want personal details, and I want to be able to spend more time with them when needed. If there is something at school, I want to be able to go without too much justification, and I also want a bit more time for myself. That's why I took the jump: from January 1st, I became what I like to call a 'solopreneur', which makes it sound cool. The reality is that what I'm doing is mostly consulting for now. At the moment I have three B2B clients, three companies I'm consulting with in different capacities; one is a traditional fractional CTO engagement, the others are more project-specific. In parallel, I'm building up my own coaching and mentoring practice. I really like working with individuals, be they senior engineers or tech leaders who want to get better at their profession. I really enjoy it because it reminds me of one of the things I loved most when I was an engineering leader, which was working with the people in my team, right?
Kovid Batra: Yeah.
Sergio Visinoni: And then, eventually, I also want to start building online products that can sell themselves. That's why I consider this solopreneurship: consulting is a part of it, but I don't want to be a full-time consultant forever. I need to start close to what I can do right now, as I build my business over time.
On the personal side, as I said, I have a family and two kids. My main hobby is actually woodworking; I like having analog activities, and I've built lots of the pieces of furniture here at home myself. You can actually see the progression: the first pieces are not very good, and the later pieces get better. It's like software; when you look back at the software you wrote a year ago, you realize it's not very good. Same with woodworking. And I like reading a lot; I'm an avid reader. I also spend time producing my own content, as you said: I have my own newsletter and I write a lot on LinkedIn, trying to be part of this community. I'm not hiding that it's a big channel for marketing my own brand; I try to show my knowledge through that content so that hopefully people will feel confident signing up for my services.
Kovid Batra: Sure, I'm sure they will. Great, thank you so much for that sweet, quick intro. I wish you all the best on this solopreneur journey, and I really appreciate you for it.
Cool. So now is the time we move on to making dev teams better and enabling engineering leaders to do so. I would really love to hear your theory of building great dev teams and how you bring Maslow's hierarchy into it. Over to you, man.
Sergio Visinoni: Yeah. I'm going to start with a bit of context. I remember the year the 'Accelerate' book came out; that was 2018. It's funny, I still have a clear, vivid memory of a Slack chat I was having with a software architect in Norway, while I was already here in Barcelona. We were talking about engineering metrics and how they were using certain metrics in their team, because I had been banging my head against the wall on this topic. We know it's a difficult topic; it's very challenging, and it's very easy to come up with the wrong metrics. At the same time, I didn't want to just give up and say, "It's impossible to measure anything." And he said, "You know what? This new book just came out, and they claim they found a causal relationship between certain tech metrics and business outcomes." I said, "Wow, I need to read this book." I saw the link on Amazon, bought it immediately, and started reading. It came at a very good moment, because, as I said, I was already thinking a lot about that space. After reading the book, I started working on this with a collaborator, and I actually need to say that a lot of the credit for what we did goes to him. He was the Director of Engineering on my team, and he was tasked with helping me figure out how we could look at metrics across the whole portfolio.
And this is the second piece of context that is important. At that moment in time, as I said, I was overseeing a big engineering organization: multiple teams operating in different countries for different marketplaces. So it was a very heterogeneous setup, where we had a combination of shared platform services, but a lot, I would say more than 80 percent, of the code and systems were run locally, right? And this was for historical reasons. The company had been growing very quickly by forking different versions of the same platform in different countries to allow for maximum customization, but also through acquisitions. It's very common; actually, no company grows following the ideal path. There are always bumps in the road, and on the engineering side, you're left with some historical legacy to deal with. So we were facing this very heterogeneous setup, and at the same time I was responsible for making something out of it. I was responsible for overseeing all of it and trying to figure out how we could look at this portfolio and understand where each team, each organization, was in terms of maturity. That's where we started thinking, okay, we need to develop a 'tech maturity framework' to be able to look at this, to be able to talk about where every team is, and also to help the GMs and the CEOs understand what type of investments they're supposed to make to improve the situation. We wanted this to become not a tool to just assess or judge or evaluate; actually, we wanted it to be a tool to help all the local tech leads in the different countries build stronger, better arguments to justify investments in architectural refactoring, building better tools, improving processes, and whatnot.
So on the one side, the Accelerate book came out. On the other side, we were having this challenge of our own. That's where, with my colleague Marco, Marco Cupidi (I recommend him for another podcast, by the way; I think you should definitely interview him), we came up with this idea of a Maslow pyramid for engineering teams. We started investigating the idea of building an analogue of the Maslow pyramid for tech teams, with the aim of figuring out how we could look at tech teams in terms of their maturity. Because if you think about the Maslow pyramid for humans, you can roughly say that the more you move up the pyramid, able to focus on higher and higher needs, the higher the level of maturity your personality has reached. You're not only focusing on surviving, not only focusing on recognition; at some point, you reach the level of self-realization. I think that's the peak of the pyramid.
Kovid Batra: Yes, self-actualization.
Sergio Visinoni: Self-actualization, exactly. So we started working with this idea, and we didn't complete the work, meaning that initially we thought we would have five or six levels of the pyramid. We ended up with three, but that was enough. Honestly, we didn't want to stick to the same number of levels as the Maslow pyramid anyway, and the three levels were organized as such. The first level, the base of the pyramid, which is analogous to feeding, to having food, for Maslow, was focused on what we were calling the vital infrastructure layer. So basically, "Is your platform stable?" The reason for that is that I've seen teams focus too much on velocity, on going faster, when actually their main problem is quality and stability. Whenever a team is facing a lot of stability issues, that has a lot of implications for their ability to work effectively. It introduces a lot of disruptions, right? They are constantly interrupted by issues, by problems. So there is no way for the team to focus on improving output before they improve the predictability of their cycle.
Kovid Batra: Right.
Sergio Visinoni: And predictability is always negatively affected by how many incidents you're having, right? Every time there is an incident, your attention is redirected to dealing with it, and therefore it disrupts the process and creates a lot of waste.
So that first layer, I remember, included metrics such as availability, but it also had metrics around latency of the most important pages. We were mostly dealing with websites, and with mobile apps as well, so there were metrics on crash rates on the apps, for instance, et cetera. I don't remember the full list because this was a few years ago, but it doesn't really matter, because what we came up with was... Yeah?
Kovid Batra: The highlight is stability as the first level.
Sergio Visinoni: Exactly. So 'vital infrastructure' is the basic level: is what you've built operating in a healthy way, or does it need to constantly look for food because it's starving, right? That was kind of the concept.
And then the second layer above that focused more on the application level: what is the quality of service of your application? Here we were looking more at error rates on the app, and some of the Core Web Vitals were included there. Not all of them, because it was still early days; actually, I don't think we were calling them Core Web Vitals yet, if my memory serves me well, but Google was already quite advanced in pushing the idea of these important web performance metrics that reflect the user experience. So this was a combination of application-level metrics and the quality of service you're providing to your users, not in terms of user flow. It was not the functional part of the user experience but the non-functional part: Is it fast enough? Is it reliable? Is it loading in the right way? Et cetera. That was the second layer. The third layer was basically what we were calling developer performance, I think, and it was mapped onto the DORA metrics. These were the four DORA metrics, because that's where, from our perspective, you get to the optimization stage. Once the first two layers are in a good place, you can really start focusing your attention on the third layer.
Now, as Maslow says as well, you don't need to complete 100 percent of one layer before you start looking at the next layer. It's more like a slice where the base is always much larger than the layers above; you don't want to only focus on feeding yourself until you have enough food to survive for the rest of your life. There's always a balance. But going back to when we introduced this framework, and again, lots of credit goes to Marco; I was there mostly to help socialize it and get buy-in from the business counterparts, because until that point, the engineering side of the organization was really considered a black box by anyone outside of engineering.
Kovid Batra: Yeah.
Sergio Visinoni: So, yeah, it was, "No, things are slow. We have lots of legacy. Old platform." There were all these common words being thrown around, but most people didn't really understand what was going on, and so they resorted to comparing across countries based on how quickly they could ship new features, regardless of whether those features were actually the same or not. There was a lot of oversimplification in the discourse around, "Is my team or is your team performing well?" from a CEO perspective. I wanted not to put an end to that, but actually to help business leaders have a deeper understanding of, okay, this is the situation within your team; now, how do we help them move to a better place, right?
So using this analogy of the Maslow pyramid turned out to be a very effective and powerful way to communicate, because every person and their mother has heard about the Maslow pyramid, and even if they haven't, it's very easy to understand the concept once you explain it. Of course, we were always adding a lot of caveats, saying, "Don't just translate everything Maslow said into this context and assume that it's going to work," because there is a lot of simplification going on here. But analogies are useful because they help people grasp the concept more easily.
Kovid Batra: Yeah, it creates a model in your head where you can always understand where you are and what exactly you need to do. That makes it easier for people to make those decisions and then move forward accordingly. Yeah. Makes sense. I understand.
Sergio Visinoni: Yeah. So on the one side, there was the benefit for the business leaders: they would understand their situation better. Secondly, it was helping local CTOs, because again, I was VP of Engineering, but I had CTOs reporting to me; we had different titles depending on the country. It was very heterogeneous, so don't get too fixated on the titles. Basically, I had local tech leads who were reporting to their CEO with a dotted line to me, and many of them were struggling to find arguments to justify investments in technology. Especially when you're a small or medium-sized team and competition on the market is very fierce, you get a lot of pressure to just push new features, right? "You need to launch this new feature because we need the revenue," and we've all been there. There are lots of good reasons for that to be the case, and some bad ones. First-time tech leaders especially tend to have a hard time balancing that business need with, okay, I also need to make sure this system will perform well in the long run, not just tomorrow. And these were well-funded businesses with good business models, not typical startups that might disappear in a couple of months. It was a proven model that we were rolling out across different countries, so the long-term perspective was justified even in the early stages.
So we put together the model and a series of metrics, but then the hard part of the work started: how do we collect the metrics? Initially, it was a massive Google spreadsheet, and we were asking local CTOs to fill in the data on a monthly basis. The interesting thing for us was to look at the trends. As we learned from working with DORA metrics, the absolute value is largely meaningless, because it's very context-dependent and also depends on how you define the metric. With every one of these metrics, you need to start by defining, okay, "How do I actually want to measure it?" DORA doesn't tell you how to measure them, and even when you talk about availability, there are different ways to look at the numbers and figure out how to calculate it. So it was a lot of work to standardize those metrics. Once we got to that level of standardization, every team would see how they were evolving on a monthly basis, and we were generating monthly reports with different levels of granularity. For the executive team, we would look at an index calculated as a kind of summary or aggregation of all the submetrics to show, okay, "This team is at 75 percent on the journey; last month they were at 70 percent, so there's been a significant improvement. And if you look back over the last six months, they've been improving month over month," et cetera.
Whereas some other teams maybe were either flat or even declining. That proved to be another very useful tool, because from my perspective, one of the best uses of engineering metrics is as what they call conversation starters. Those metrics are a way for you to spot that there is something going on. It's very dangerous to jump to conclusions based on metrics alone.
Kovid Batra: One question.
Sergio Visinoni: But at least they're telling you... Yeah, go ahead. Sorry.
Kovid Batra: So I think what I'm understanding from your analogy, and how this hierarchy is helping people actually build a strategy, is that it's more towards a technical strategy. Am I getting it right, or is it also about mapping the business strategy? That's the real problem for tech leaders, right? You are really not sure about what your business needs and what your tech needs, and you're trying to map both of them so that you survive well, and in fact thrive in situations where you really want to move fast. So is this hierarchy analogy helping purely on the tech strategy side, or does it bring in the business aspect as well?
Sergio Visinoni: That is a fantastic question, Kovid. I would start by saying that you cannot have a tech strategy that is decoupled from the business strategy to begin with, because there is no absolute tech strategy that is right for every context. The tech strategy needs to be tailored to the specific company, team, or whatever scope we're looking at. So by definition, this was helping them better translate the business strategy into the tech strategy. At the same time, it was also helping CTOs in some cases define a clear tech strategy, because they were finally seeing clearly, you know, this is where you have problems, and based on the model, we recommend you don't start optimizing for speed if your site is down every other day. You should start by creating that stability first. But it also helped them communicate with the business and manage business expectations in a more constructive way: in order for us to get to a place where we can fulfill all the needs of the business in terms of new features and capabilities, first we need to address this. Until we address this, we're not going to be able to do our job predictably, and we'll constantly be facing urgent fires that disrupt our ability to plan and execute on those plans. So in that sense, there is a clear connection.
Now, what the framework didn't tell you was what features to build or how to solve a problem, right? That's where, in my team, I was combining the framework as a diagnostic tool on the one hand. It was actually an observability tool; it was giving us information about where the potential problems were. And then I had a pool of experts and software engineers that in some cases I was able to deploy to help specific teams improve in certain areas. For instance: "You guys need help refactoring this part of the architecture on your side. I have an architect here; I'm going to have this person work with you next quarter to help you move from here to there."
Kovid Batra: Yeah, exactly. I think that's the best part: it's not a static framework, right? You are always moving between these layers depending on the situation. All you have to do is make sure you understand what each stage, what the metrics of each stage, tell you, and based on that, you can at least formulate in your strategy, okay, this is where we need to focus, and if this is a business requirement, are we equipped to do it or not? You can actually make decisions by knowing the reality of your system.
Sergio Visinoni: Exactly. You're absolutely right. And basically, it's a framework that helps you prioritize what to do, right?
Kovid Batra: Yeah.
Sergio Visinoni: It gives you clear insights: okay, I should look here or I should look there. So, what's the plan, y'know? This is the problem, this is the symptom; now, how do we go about addressing the underlying problem? And again, that was happening either in isolation within the team or with support from people who were located with me in Barcelona. The 'how' is really context-dependent, because depending on what the specific problem is, and on the local pool of competences in the team that is suffering, you might come up with different ways to address it.
The other interesting side effect of this approach, by the way: it was not easy to deploy, because in the beginning a lot of teams saw this just as extra toil. "You're just asking me to collect metrics. What's in it for me?" It took some time to prove the value. That's where strong conviction, but also repeating why we were doing something and being very consistent in the messaging, was helpful. But in any organization, especially big organizations, there is a lot of competition for attention, right?
Kovid Batra: So what was your takeaway on incentivizing the implementation? Like, why did people start sticking to it with you?
Sergio Visinoni: I think there were probably two broad categories of people; it's a generalization, but to simplify things. One category was people who understood this at an intuitive level and saw it as a way to achieve something they already wanted to do. These people were thinking, okay, "How can we become a bit more data-driven in deciding on this type of work?" For this group, it was obvious. Of course, nobody likes extra work in general, but they understood why. And then there were the detractors, people who were resisting this type of change, and what worked with them was spending time helping them understand that this would actually help them do what they wanted to do but were frustrated they couldn't. Typically, and this roughly mapped, not exactly one to one, to the more junior profiles: those who were saying, okay, "I don't have time. I don't have resources to do all the important things I need to do, because my boss tells me we need to prioritize X, Y, and Z." And I was telling them, okay, "So how do we change that? How can we help you build stronger arguments to put those on the table and have a proper prioritization discussion, and say, okay, if we do this, we're going to improve that, and if we improve that, these will be the potential consequences?"
Kovid Batra: Right.
Sergio Visinoni: So this has been the main driver to get most of the people on board.
Kovid Batra: Makes sense. And as you were talking about the challenges, if you could give me one example, some anecdote about how things worked out when you were implementing it, that would be very insightful.
Sergio Visinoni: Yeah. So on one side, we discovered, for instance, without naming names, that one specific organization that was considered (quote, unquote) "high-performing" because they were churning out lots of features turned out to have metrics that didn't look very good. This was an interesting case of doing a lot of average work rather than a little bit of good work. The first thing those metrics did was put us onto something; we realized there was something there that required further investigation. So we started working more closely with the team, and we started realizing there were basically gaps in their knowledge. Nobody had any bad intentions, but the problem is that, especially in a big organization, there is a certain element of competition across countries. Nobody wants to look bad; everybody wants to look better, right? They were a bit trapped in this self-fulfilling prophecy of showing off that they were good, even though they weren't, and therefore they didn't really ask for help, because asking for help would have meant admitting they didn't necessarily know exactly what they were doing. In this discipline, it's very hard to know exactly what you're doing; there's a lot of uncertainty, lots of surprises around the corner. So in this specific case, the framework allowed us to identify something that had been very much below the radar until that moment, which led us to spend more time with this team. This team, from a business perspective, was one of the most important countries in my portfolio, so it became quite evident that my team and I should prioritize our attention there, and over time, that led us to help make significant changes at the organizational level and in the way they worked, and therefore in the end results.
So that was one of the biggest investments. The other one, actually the most important asset in that portfolio, fell into the category of those who understood this from day one. These were mature people who saw this as potential support, so the collaboration from day one was extremely easy, and it really became a partnership. I actually took a big part of my team to work with them for a sustained amount of time to move them very close to excellence on the DORA ranges of that time, if you remember, and that has had a massive impact on the business. The business was going through an important transformation, with changes in the leadership at the top, and these changes on the tech side made it possible to support a lot of the initiatives the business wanted to push. So this was a very good success case where all the pieces came together: the local tech team, my team centrally, and the business realizing, okay, we need to invest here, we need to do things in a better way. Nowadays, that country is still thriving, and part of it is because of some of the investments we made during those days.
Kovid Batra: I think this tech philosophy, I must say, solves a lot of problems. First and foremost, decision-making: anyone trying to make a decision gets to a very good point where you know your reality, you know what is needed from the business, and you can map those two and come out with, okay, "This is the priority list for us." That's the best thing I see coming out of this. And then, of course, it makes you achieve as much as you can, not as much as you want. That is also very important, because sometimes we overstretch ourselves to do things that are probably not realistic or achievable. This really helps you run a fundamental check, and then you can say, okay, "This is what needs to be done there."
One more thing. I discussed this in another podcast as well: it is very difficult for tech leaders to explain to their business counterparts why this tech thing is very important, right? Now they have a very good framework in place, and a few metrics, to tell them, okay, "This is where we need to focus right now." And if it is needed right now, then you have to bring it up. So data-driven decision-making makes it much more convincing for the business counterparts to back the decision.
So yeah, I think this was really great, Sergio. A very, very good thing I learned today, and hopefully the audience did too. With that, I would like to say buh-bye, and I would love to have you on another episode anytime soon to talk more about such topics. Thank you so much, Sergio, for today.
Sergio Visinoni: Thank you, Kovid, and you know, just hit me up when you want me to join again, I will be very happy to have another chat.
Kovid Batra: Thank you. Thank you so much. See you.
Are you tired of feeling like you’re constantly playing catch-up with the latest AI tools, trying to figure out how they fit into your workflow? Many developers and managers share that sentiment, caught in a whirlwind of new technologies that promise efficiency but often lead to confusion and frustration.
The problem is clear: while AI offers exciting opportunities to streamline development processes, it can also amplify stress and uncertainty. Developers often struggle with feelings of inadequacy, worrying about how to keep up with rapidly changing demands. This pressure can stifle creativity, leading to burnout and a reluctance to embrace the innovations designed to enhance our work.
But there’s good news. By reframing your relationship with AI and implementing practical strategies, you can turn these challenges into opportunities for growth. In this blog, we’ll explore actionable insights and tools that will empower you to harness AI effectively, reclaim your productivity, and transform your software development journey in this new era.
The Current State of Developer Productivity
Recent industry reports reveal a striking gap between the available tools and the productivity levels many teams achieve. For instance, a survey by GitHub showed that 70% of developers believe repetitive tasks hamper their productivity. Moreover, over half of developers express a desire for tools that enhance their workflow without adding unnecessary complexity.
Understanding the Productivity Paradox
Despite investing heavily in AI, many teams find themselves in a productivity paradox. Research indicates that while AI can handle routine tasks, it can also introduce new complexities and pressures. Developers may feel overwhelmed by the sheer volume of tools at their disposal, leading to burnout. A 2023 report from McKinsey highlights that 60% of developers report higher stress levels due to the rapid pace of change.
Common Emotional Challenges
As we adapt to these changes, feelings of inadequacy and fear of obsolescence may surface. It’s normal to question our skills and relevance in a world where AI plays a growing role. Acknowledging these emotions is crucial for moving forward. For instance, it can be helpful to share your experiences with peers, fostering a sense of community and understanding.
Key Challenges Developers Face in the Age of AI
Understanding the key challenges developers face in the age of AI is essential for identifying effective strategies. This section outlines the evolving nature of job roles, the struggle to balance speed and quality, and the resistance to change that often hinders progress.
Evolving Job Roles
AI is redefining the responsibilities of developers. While automation handles repetitive tasks, new skills are required to manage and integrate AI tools effectively. For example, a developer accustomed to manual testing may need to learn how to work with automated testing frameworks like Selenium or Cypress. This shift can create skill gaps and adaptation challenges, particularly for those who have been in the field for several years.
Balancing Speed and Quality
The demand for quick delivery without compromising quality is more pronounced than ever. Developers often feel torn between meeting tight deadlines and ensuring their work meets high standards. For instance, a team working on a critical software release may rush through testing phases, risking quality for speed. This balancing act can lead to technical debt, which compounds over time and creates more significant problems down the line.
Resistance to Change
Many developers hesitate to adopt AI tools, fearing that they may become obsolete. This resistance can hinder progress and prevent teams from fully leveraging the benefits that AI can provide. A common scenario is when a developer resists using an AI-driven code suggestion tool, preferring to rely on their coding instincts instead. Encouraging a mindset shift within teams can help them embrace AI as a supportive partner rather than a threat.
Strategies for Boosting Developer Productivity
To effectively navigate the challenges posed by AI, developers and managers can implement specific strategies that enhance productivity. This section outlines actionable steps and AI applications that can make a significant impact.
Embracing AI as a Collaborator
To enhance productivity, it’s essential to view AI as a collaborator rather than a competitor. Integrating AI tools into your workflow can automate repetitive tasks, freeing up your time for more complex problem-solving. For example, using tools like GitHub Copilot can help developers generate code snippets quickly, allowing them to focus on architecture and logic rather than boilerplate code.
Recommended AI tools: Explore tools that integrate seamlessly with your existing workflow. Platforms like Jira for project management and Test.ai for automated testing can streamline your processes and reduce manual effort.
Actual AI Applications in Developer Productivity
AI offers several applications that can significantly boost developer productivity. Understanding these applications helps teams leverage AI effectively in their daily tasks.
Code generation: AI can automate the creation of boilerplate code. For example, tools like Tabnine can suggest entire lines of code based on your existing codebase, speeding up the initial phases of development and allowing developers to focus on unique functionality.
Code review: AI tools can analyze code for adherence to best practices and identify potential issues before they become problems. Tools like SonarQube provide actionable insights that help maintain code quality and enforce coding standards.
Automated testing: Implementing AI-driven testing frameworks can enhance software reliability. For instance, using platforms like Selenium and integrating them with AI can create smarter testing strategies that adapt to code changes, reducing manual effort and catching bugs early.
Intelligent debugging: AI tools assist in quickly identifying and fixing bugs. For example, Sentry offers real-time error tracking and helps developers trace errors to their sources, allowing teams to resolve issues before they impact users.
Predictive analytics for sprints/project completion: AI can help forecast project timelines and resource needs. Tools like Azure DevOps leverage historical data to predict delivery dates, enabling better sprint planning and management.
Architectural optimization: AI tools suggest improvements to software architecture. For example, the AWS Well-Architected Tool evaluates workloads and recommends changes based on best practices, ensuring optimal performance.
Security assessment: AI-driven tools identify vulnerabilities in code before deployment. Platforms like Snyk scan code for known vulnerabilities and suggest fixes, allowing teams to deliver secure applications.
Continuous Learning and Professional Development
Ongoing education in AI technologies is crucial. Developers should actively seek opportunities to learn about the latest tools and methodologies.
Online resources and communities: Utilize platforms like Coursera, Udemy, and edX for courses on AI and machine learning. Participating in online forums such as Stack Overflow and GitHub discussions can provide insights and foster collaboration among peers.
Cultivating a Supportive Team Environment
Collaboration and open communication are vital in overcoming the challenges posed by AI integration. Building a culture that embraces change can lead to improved team morale and productivity.
Building peer support networks: Establish mentorship programs or regular check-ins to foster support among team members. Encourage knowledge sharing and collaborative problem-solving, creating an environment where everyone feels comfortable discussing their challenges.
Setting Effective Productivity Metrics
Rethink how productivity is measured. Focus on metrics that prioritize code quality and project impact rather than just the quantity of code produced.
Tools for measuring productivity: Use analytics tools like Typo that provide insights into meaningful productivity indicators. These tools help teams understand their performance and identify areas for improvement.
How Does Typo Enhance Developer Productivity?
There are many developer productivity tools available for tech companies. One of them is Typo, the most comprehensive solution on the market.
Typo surfaces early indicators of developer well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams, measuring the team's overall productivity while keeping individuals' strengths and weaknesses in mind.
Here are three ways in which Typo measures team productivity:
Software Development Lifecycle (SDLC) Visibility
Typo provides complete visibility into software delivery. It helps development teams and engineering leaders identify blockers in real time, predict delays, and maximize business impact. It lets teams dive deep into key DORA metrics and understand how well they are performing against industry-wide benchmarks. Typo also gives them a real-time predictive analysis of how the team is performing, helps identify best dev practices, and provides a comprehensive view across velocity, quality, and throughput.
This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks, ensuring that resources are utilized efficiently and resulting in enhanced productivity and better business outcomes.
AI-Powered Code Review
Typo helps developers streamline the development process and enhance their productivity by identifying issues in the code and auto-fixing them with AI before merging to master. That means less time reviewing and more time for important tasks, keeping the code error-free and making the whole process faster and smoother. The platform also uses optimized practices and built-in methods spanning multiple languages. Besides this, it standardizes the code and enforces coding standards, which reduces the risk of a security breach and boosts maintainability.
Since the platform automates repetitive tasks, it allows development teams to focus on high-value work. It accelerates the review process and facilitates faster iterations by providing timely feedback, and it offers insights into code quality trends and areas for improvement, fostering an engineering culture that supports learning and development.
Developer Experience
Typo surfaces early indicators of developers' well-being and actionable insights on the areas that need attention through signals from work patterns and continuous AI-driven pulse check-ins on developer experience. It includes pulse surveys built on a developer experience framework.
Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.
Hence, by addressing these aspects, Typo’s holistic approach combines data-driven insights with proactive monitoring and strategic intervention to create a supportive and high-performing work environment. This leads to increased developer productivity and satisfaction.
Continuous Learning: Empowering Developers for Future Success
With its robust features tailored for the modern software development environment, Typo acts as a catalyst for productivity. By streamlining workflows, fostering collaboration, integrating with AI tools, and providing personalized support, Typo empowers developers and their managers to navigate the complexities of development with confidence. Embracing Typo can lead to a more productive, engaged, and satisfied development team, ultimately driving successful project outcomes.
Have you ever felt overwhelmed trying to maintain consistent code quality across a remote team? As more development teams shift to remote work, the challenges of code reviews only grow: slowed communication, lack of real-time feedback, and the creeping possibility of errors slipping through.

Moreover, think about how much time is lost waiting for feedback or having to rework code due to small, overlooked issues. When you're working remotely, these frustrations compound; suddenly, a task that should take hours stretches into days. You might be spending time on repetitive tasks like syntax checking, code formatting, and manually catching errors that could be handled more efficiently. Meanwhile, you're expected to deliver high-quality work without delays.

Fortunately, AI-driven tools offer a solution that can ease this burden. By automating the tedious aspects of code reviews, such as catching syntax errors and formatting inconsistencies, AI can give developers more time to focus on the creative and complex aspects of coding.

In this blog, we'll explore how AI can help remote teams tackle the difficulties of code reviews and how tools like Typo can further improve this process, allowing teams to focus on what truly matters: writing excellent code.

Remote work has introduced a unique set of challenges that impact the code review process. They are:

Communication barriers

When team members are scattered across different time zones, real-time discussions and feedback become more difficult. The lack of face-to-face interactions can hinder effective communication and lead to misunderstandings.

Delays in feedback

Without the immediacy of in-person collaboration, remote teams often experience delays in receiving feedback on their code changes. This can slow down the development cycle and frustrate team members who are eager to iterate and improve their code.

Increased risk of human error

Complex code reviews conducted remotely are more prone to human oversight and errors. When team members are not physically present to catch each other's mistakes, the risk of introducing bugs or quality issues into the codebase increases.

Emotional stress

Remote work can take a toll on team morale, with feelings of isolation and the pressure to maintain productivity weighing heavily on developers. This emotional stress can negatively impact collaboration and code quality if not properly addressed.

How AI Can Enhance Remote Code Reviews
AI-powered tools are transforming code reviews, helping teams automate repetitive tasks, improve accuracy, and ensure code quality. Let’s explore how AI dives deep into the technical aspects of code reviews and helps developers focus on building robust software.
NLP for Code Comments
Natural Language Processing (NLP) is essential for understanding and interpreting code comments, which often provide critical context:
Tokenization and Parsing
NLP breaks code comments into tokens (individual words or symbols) and parses them to understand the grammatical structure. For example, "This method needs refactoring due to poor performance" would be tokenized into words like ["This", "method", "needs", "refactoring"], and parsed to identify the intent behind the comment.
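To make that first step concrete, here is a minimal sketch in Python using a simple regular expression in place of a full NLP tokenizer; real pipelines use dedicated tokenizers and grammatical parsers rather than a one-line pattern.

```python
import re

# Illustrative review comment (the example string from above).
comment = "This method needs refactoring due to poor performance"

# Split into word and punctuation tokens; a stand-in for real tokenization.
tokens = re.findall(r"\w+|[^\w\s]", comment)
print(tokens)
# ['This', 'method', 'needs', 'refactoring', 'due', 'to', 'poor', 'performance']
```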
Sentiment Analysis
Using algorithms like Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks, AI can analyze the tone of code comments. For example, if a reviewer comments, "Great logic, but performance could be optimized," AI might classify it as having a positive sentiment with a constructive critique. This analysis helps distinguish between positive reinforcement and critical feedback, offering insights into reviewer attitudes.
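As a rough illustration of the idea (not of any specific RNN or LSTM), a sentiment pass can be sketched with hand-picked word lists standing in for a trained model:

```python
# Toy sentiment scorer; the word lists are stand-ins for a learned classifier.
POSITIVE = {"great", "good", "clean", "nice"}
NEGATIVE = {"slow", "poor", "buggy", "confusing"}

def sentiment(comment: str) -> str:
    words = set(comment.lower().replace(",", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Great logic, but performance could be optimized"))  # positive
```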
Intent Classification
AI models can categorize comments based on intent. For example, comments like "Please optimize this function" can be classified as requests for changes, while "What is the time complexity here?" can be identified as questions. This categorization helps prioritize actions for developers, ensuring important feedback is addressed promptly.
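A hypothetical keyword-based version of such a classifier might look like this; real systems learn these categories from labeled review data rather than hard-coded cues:

```python
# Cue words that often signal a change request (illustrative, not exhaustive).
REQUEST_CUES = ("please", "should", "optimize", "refactor", "fix")

def classify_intent(comment: str) -> str:
    text = comment.lower().strip()
    if text.endswith("?"):
        return "question"
    if any(cue in text for cue in REQUEST_CUES):
        return "change_request"
    return "remark"

print(classify_intent("Please optimize this function"))      # change_request
print(classify_intent("What is the time complexity here?"))  # question
```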
Static Code Analysis
Static code analysis goes beyond syntax checking to identify deeper issues in the code:
Syntax and Semantic Analysis
AI-based static analysis tools not only check for syntax errors but also analyze the semantics of the code. For example, if the tool detects a loop that could potentially cause an infinite loop or identifies an undefined variable, it flags these as high-priority errors. AI tools use machine learning to constantly improve their ability to detect errors in Java, Python, and other languages.
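For a flavor of what a semantic check involves beyond syntax, here is a small sketch using Python's built-in `ast` module to flag `while True:` loops that contain no `break`; real analyzers perform much deeper control-flow and data-flow analysis than this.

```python
import ast

SOURCE = """
def poll():
    while True:
        do_work()
"""

class InfiniteLoopChecker(ast.NodeVisitor):
    def visit_While(self, node: ast.While) -> None:
        # `while True:` parses to a constant True loop condition.
        loops_forever = isinstance(node.test, ast.Constant) and node.test.value is True
        has_break = any(isinstance(n, ast.Break) for n in ast.walk(node))
        if loops_forever and not has_break:
            print(f"line {node.lineno}: possible infinite loop (no break)")
        self.generic_visit(node)

InfiniteLoopChecker().visit(ast.parse(SOURCE))
# line 3: possible infinite loop (no break)
```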
Pattern Recognition
AI recognizes coding patterns by learning from vast datasets of codebases. For example, it can detect when developers frequently forget to close file handlers or incorrectly handle exceptions, identifying these as anti-patterns. Over time, AI tools can evolve to suggest better practices and help developers adhere to clean code principles.
Vulnerability Detection
AI, trained on datasets of known vulnerabilities, can identify security risks in the code. For example, tools like Typo or Snyk can scan JavaScript or C++ code and flag potential issues like SQL injection, buffer overflows, or improper handling of user input. These tools improve security audits by automating the identification of security loopholes before code goes into production.
Code Similarity Detection
Finding duplicate or redundant code is crucial for maintaining a clean codebase:
Code Embeddings
Neural networks convert code into embeddings (numerical vectors) that represent the code in a high-dimensional space. For example, two pieces of code that perform the same task but use different syntax would be mapped closely in this space. This allows AI tools to recognize similarities in logic, even if the syntax differs.
Similarity Metrics
AI employs metrics like cosine similarity to compare embeddings and detect redundant code. For example, if two functions across different files are 85% similar based on cosine similarity, AI will flag them for review, allowing developers to refactor and eliminate duplication.
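The similarity computation itself is straightforward once embeddings exist. A minimal sketch with NumPy, using made-up vectors and an illustrative 0.85 threshold:

```python
import numpy as np

# Hypothetical embeddings for two functions; in practice these come from
# a trained code-embedding model.
func_a = np.array([0.12, 0.87, 0.44, 0.91])
func_b = np.array([0.10, 0.80, 0.52, 0.88])

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

score = cosine_similarity(func_a, func_b)
if score >= 0.85:  # threshold chosen for illustration only
    print(f"similarity {score:.2f}: flag for refactoring review")
```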
Duplicate Code Detection
Tools like Typo use AI to identify duplicate or near-duplicate code blocks across the codebase. For example, if two modules use nearly identical logic for different purposes, AI can suggest merging them into a reusable function, reducing redundancy and improving maintainability.
Automated Code Suggestions
AI doesn’t just point out problems—it actively suggests solutions:
Generative Models
Models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) can create new code snippets. For example, if a developer writes a function that opens a file but forgets to handle exceptions, an AI tool can generate the missing try-catch block to improve error handling.
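The before/after below is hand-written to illustrate the kind of fix such a model might propose; it is not the output of any particular tool.

```python
# Before: file I/O with no error handling.
def read_config(path):
    with open(path) as f:
        return f.read()

# After: the missing try/except a generative model might suggest.
def read_config_safe(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        # Report the failure instead of crashing the caller.
        print(f"could not read config {path!r}: {err}")
        return None
```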
Contextual Understanding
AI analyzes code context and suggests relevant modifications. For example, if a developer changes a variable name in one part of the code, AI might suggest updating the same variable name in other related modules to maintain consistency. Tools like GitHub Copilot use models such as GPT to generate code suggestions in real-time based on context, making development faster and more efficient.
Reinforcement Learning for Code Optimization
Reinforcement learning (RL) helps AI continuously optimize code performance:
Reward Functions
In RL, a reward function is defined to evaluate the quality of the code. For example, AI might reward code that reduces runtime by 20% or improves memory efficiency by 30%. The reward function measures not just performance but also readability and maintainability, ensuring a balanced approach to optimization.
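A toy version of such a reward function might weight those signals as below; the weights and inputs are illustrative, and a real setup would derive them from profilers and lint scores.

```python
def reward(runtime_gain: float, memory_gain: float, readability_delta: float) -> float:
    # runtime_gain / memory_gain: fractional improvements (0.2 == 20% better).
    # readability_delta: change in a readability score; negative if it worsens.
    return 0.5 * runtime_gain + 0.3 * memory_gain + 0.2 * readability_delta

# A refactor that is 20% faster and 30% leaner but slightly less readable:
print(reward(0.20, 0.30, -0.05))  # ~0.18: net positive, so it is rewarded
```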
Agent Training
Through trial and error, AI agents learn to refactor code to meet specific objectives. For example, an agent might experiment with different ways of parallelizing a loop to improve performance, receiving positive rewards for optimizations and negative rewards for regressions.
Continuous Improvement
The AI’s policy, or strategy, is continuously refined based on past experiences. This allows AI to improve its code optimization capabilities over time. For example, Google’s AlphaCode uses reinforcement learning to compete in coding competitions, showing that AI can autonomously write and optimize highly efficient algorithms.
AI-Assisted Code Review Tools
Modern AI-assisted code review tools offer both rule-based enforcement and machine learning insights:
Rule-Based Systems
These systems enforce strict coding standards. For example, AI tools like ESLint or Pylint enforce coding style guidelines in JavaScript and Python, ensuring developers follow industry best practices such as proper indentation or consistent use of variable names.
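In miniature, a rule-based check is just a fixed pattern applied to every line; the two rules below are illustrative stand-ins for the hundreds that tools like ESLint and Pylint ship with.

```python
import re

RULES = [
    (re.compile(r"^.{80,}$"), "line exceeds 79 characters"),
    (re.compile(r"^\t"), "tab used for indentation"),
]

def lint(source: str) -> None:
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                print(f"line {lineno}: {message}")

lint("def f():\n\treturn 1\n")  # line 2: tab used for indentation
```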
Machine Learning Models
AI models can learn from past code reviews, understanding patterns in common feedback. For instance, if a team frequently comments on inefficient data structures, the AI will begin flagging those cases in future code reviews, reducing the need for human intervention.
Hybrid Approaches
Combining rule-based and ML-powered systems, hybrid tools provide a more comprehensive review experience. For example, DeepCode uses a hybrid approach to enforce coding standards while also learning from developer interactions to suggest improvements in real-time. These tools ensure code is not only compliant but also continuously improved based on team dynamics and historical data.
Incorporating AI into code reviews takes your development process to the next level. By automating error detection, analyzing code sentiment, and suggesting optimizations, AI enables your team to focus on what matters most: building high-quality, secure, and scalable software. As these tools continue to learn and improve, the benefits of AI-assisted code reviews will only grow, making them indispensable in modern development environments.
Here’s a table to help you understand AI-assisted code reviews at a glance:
Practical Steps to Implement AI-Driven Code Reviews

To effectively integrate AI into your remote team's code review process, consider the following steps:

Evaluate and choose AI tools: Research and evaluate AI-powered code review tools that align with your team's needs and development workflow.

Start with a gradual approach: Use AI tools to support human-led code reviews before gradually automating simpler tasks. This will allow your team to become comfortable with the technology and see its benefits firsthand.

Foster a culture of collaboration: Encourage your team to view AI as a collaborative partner rather than a replacement for human expertise. Emphasize the importance of human oversight, especially for complex issues that require nuanced judgment.

Provide training and resources: Equip your team with the necessary training and resources to use AI code review tools effectively. This includes tutorials, documentation, and opportunities for hands-on practice.

Leveraging Typo to Streamline Remote Code Reviews

Typo is an AI-powered tool designed to streamline the code review process for remote teams. By integrating seamlessly with your existing development tools, Typo makes it easier to manage feedback, improve code quality, and collaborate across time zones.

Some key benefits of using Typo include:
AI code analysis
Code context understanding
Auto debugging with detailed explanations
Proprietary models with known frameworks (OWASP)
Auto PR fixes
Here's a brief comparison of how Typo differs from other code review tools:
The Human Element: Combining AI and Human Expertise

While AI can significantly enhance the code review process, it's essential to maintain a balance between AI and human expertise. AI is not a replacement for human intuition, creativity, or judgment, but rather a supportive tool that augments and empowers developers.

By using AI to handle repetitive tasks and provide real-time feedback, developers can focus on higher-level issues that require human problem-solving skills. This division of labor allows teams to work more efficiently and effectively while still maintaining the human touch that is crucial for complex problem-solving and innovation.

Overcoming Emotional Barriers to AI Integration

Introducing new technologies can sometimes be met with resistance or fear. It's important to address these concerns head-on and help your team understand the benefits of AI integration.

Some common fears, such as job replacement or disruption of established workflows, should be directly addressed. Reassure your team that AI is designed to reduce workload and enhance productivity, not to replace human expertise. Foster an environment that embraces new technologies while focusing on the long-term benefits of improved efficiency, collaboration, and job satisfaction.

Elevate Your Code Quality: Embrace AI Solutions

AI-driven code reviews offer a promising solution for remote teams looking to maintain code quality, foster collaboration, and enhance productivity. By embracing AI tools like Typo, you can streamline your code review process, reduce delays, and empower your team to focus on writing great code.

Remember that AI supports and empowers your team; it does not replace human expertise. Explore and experiment with AI code review tools in your teams, and watch as your remote collaboration reaches new heights of efficiency and success.
The software development field is constantly evolving. While this helps deliver products and services quickly to end-users, it also means developers might take shortcuts to deliver on time. This not only reduces the quality of the software but also leads to increased technical debt.
But with new trends and technologies comes generative AI. It is a promising development for the software industry, one that can ultimately lead to high-quality code and decreased technical debt.
Let’s explore more about how generative AI can help manage technical debt!
Technical debt: An overview
Technical debt arises when development teams take shortcuts to develop projects. While this gives them short-term gains, it increases their workload in the long run.
In other words, developers prioritize quick solutions over effective solutions. The four main causes behind technical debt are:
Business causes: Prioritizing business needs and the company’s evolving conditions can put pressure on development teams to cut corners. It can result in preponing deadlines or reducing costs to achieve desired goals.
Development causes: As new technologies evolve rapidly, it becomes difficult for teams to switch or upgrade quickly, especially when they are already dealing with the burden of bad code.
Human resources causes: Unintentional technical debt can occur when development teams lack the necessary skills or knowledge to implement best practices. It can result in more errors and insufficient solutions.
Resources causes: When teams don’t have time or sufficient resources, they take shortcuts by choosing the quickest solution. It can be due to budgetary constraints, insufficient processes and culture, deadlines, and so on.
Why is generative AI important for code management?
As per McKinsey’s study,
“… 10 to 20 percent of the technology budget dedicated to new products is diverted to resolving issues related to tech debt. More troubling still, CIOs estimated that tech debt amounts to 20 to 40 percent of the value of their entire technology estate before depreciation.”
But there’s a solution to it. Handling tech debt is possible and can have a significant impact:
“Some companies find that actively managing their tech debt frees up engineers to spend up to 50 percent more of their time on work that supports business goals. The CIO of a leading cloud provider told us, ‘By reinventing our debt management, we went from 75 percent of engineer time paying the [tech debt] ‘tax’ to 25 percent. It allowed us to be who we are today.’”
There are many traditional ways to minimize technical debt, including manual testing, refactoring, and code review. However, these manual tasks take a lot of time and effort, and given the fast-moving nature of the software industry, they are often overlooked or delayed.
Since generative AI tools are on the rise, they are seen as a practical route to code management that subsequently lowers technical debt. These tools have already started reaching the market. They integrate into software development environments, gather and process data across the organization in real time, and are then leveraged to lower tech debt.
Some of the key benefits of generative AI are:
Identify redundant code: Generative AI tools like CodeClone analyze code and suggest improvements. This helps improve code readability and maintainability, and subsequently minimizes technical debt.
Generates high-quality code: Automated code review tools such as Typo help in an efficient and effective code review process. They understand the context of the code and accurately fix issues which leads to high-quality code.
Automate manual tasks: Tools like GitHub Copilot automate repetitive tasks and let developers focus on higher-value work.
Optimal refactoring strategies: AI tools like DeepCode leverage machine learning models to understand code semantics, break code down into more manageable functions, and improve variable naming.
Case studies and real-life examples
Many industries have started adopting generative AI technologies already for tech debt management. These AI tools assist developers in improving code quality, streamlining SDLC processes, and cost savings.
Below are success stories of a few well-known organizations that have implemented these tools in their organizations:
Microsoft uses Diffblue Cover for automated testing and bug detection
Microsoft is a global technology leader that implemented Diffblue Cover for automated testing. Through this generative AI tool, Microsoft has seen a considerable reduction in the number of bugs during the development process. It also ensures that new features don't compromise existing functionality, which positively impacts code quality. This further helps with faster, more reliable releases and cost savings.
Google implements Codex for code documentation
Google is an internet search and technology giant that implemented OpenAI's Codex to streamline its code documentation processes. Integrating this AI tool helped reduce the time and effort spent on manual documentation tasks. The resulting consistency across the entire codebase enhances code quality and allows developers to focus more on core tasks.
Facebook adopts CodeClone to identify redundancy
Facebook, a leading social media company, adopted the generative AI tool CodeClone to identify and eliminate redundant code across its extensive codebase. This resulted in fewer inconsistencies and a more streamlined, efficient codebase, which in turn led to faster development cycles.
Pioneer Square Labs uses GPT-4 for higher-level planning
Pioneer Square Labs, a studio that launches technology startups, adopted GPT-4 to handle mundane tasks and assist in writing code, allowing developers to focus on core tasks and higher-level planning, thereby streamlining the development process.
How does Typo leverage generative AI to reduce technical debt?
Typo’s automated code review tool enables developers to merge clean, secure, high-quality code faster. It lets developers catch issues related to maintainability, readability, and potential bugs, and it can detect code smells.
Typo also auto-analyses your codebase and pull requests to find issues and auto-generates fixes before you merge to master. Its Auto-Fix feature leverages GPT 3.5 Pro, trained on millions of open-source records as well as exclusive anonymised private data, to generate line-by-line code snippets wherever an issue is detected in the codebase.
As a result, Typo helps reduce technical debt by detecting and addressing issues early in the development process, preventing the introduction of new debt, and allowing developers to focus on high-quality tasks.
Issue detection by Typo
Autofixing the codebase with an option to directly create a Pull Request
Key features
Supports top 10+ languages
Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.
Fix every code issue
Typo understands the context of your code and quickly and accurately finds and fixes issues, empowering developers to work on software projects seamlessly and efficiently.
Efficient code optimization
Typo uses optimized practices and built-in methods spanning multiple languages, reducing code complexity and ensuring thorough quality assurance throughout the development process.
Professional coding standards
Typo standardizes code and reduces the risk of a security breach.
While generative AI can help reduce technical debt by analyzing code quality, removing redundant code, and automating the code review process, many engineering leaders believe it can increase technical debt too.
Bob Quillin, vFunction chief ecosystem officer, stated: “These new applications and capabilities will require many new MLOps processes and tools that could overwhelm any existing, already overloaded DevOps team.”
They aren’t wrong either!
Technical debt can increase when organizations don’t properly document their practices and train development teams to implement generative AI the right way. When these AI tools are adopted hastily, without considering the long-term implications, they can instead increase developers’ workload and add technical debt.
Below are a few practices that help keep generative AI from adding to technical debt:
Ethical guidelines
Establish ethical guidelines for the use of generative AI in organizations. Understand the potential ethical implications of using AI to generate code, such as the impact on job displacement, intellectual property rights, and biases in AI-generated output.
Diverse training data quality
Ensure the quality and diversity of training data used to train generative AI models. When training data is biased or incomplete, these AI tools can produce biased or incorrect output. Regularly review and update training data to improve the accuracy and reliability of AI-generated code.
Human oversight
Maintain human oversight throughout the generative AI process. While AI can generate code snippets and provide suggestions, the final decision should rest with developers, who must review and validate the output to ensure correctness, security, and adherence to coding standards.
Most importantly, human intervention is a must when using these tools. After all, it is developers’ judgment, creativity, and domain knowledge that inform the final decision. Generative AI is indeed helpful for reducing developers’ manual tasks; however, it needs to be used properly.
Conclusion
In a nutshell, generative artificial intelligence tools can help manage technical debt when used correctly. These tools help to identify redundancy in code, improve readability and maintainability, and generate high-quality code.
However, it should be noted that these AI tools shouldn’t be used independently. They must work only as developers’ assistants, and teams must use them transparently and fairly.
The code review process is one of the major causes of developer burnout. It not only hinders developer productivity but also negatively affects software delivery. Yet it is a crucial aspect of software development that shouldn’t be compromised.
So, what is the alternative to manual code review? Let’s dive in further to know more about it:
The Current State of Manual Code Review
Manual code reviews are crucial to the software development process. They help identify bugs, mentor new developers, and promote a collaborative culture among team members. However, they come with their own set of limitations.
Software development is a demanding job with many projects and processes. Code review, when done manually, can take a lot of a developer’s time and effort, especially when reviewing an extensive codebase. It not only keeps developers from other core tasks but also leads to fatigue and burnout, resulting in decreased productivity.
Since reviewers have to read the source code line by line to identify issues and vulnerabilities, the work can overwhelm them, and they may miss some critical paths. This can result in human error, especially when a deadline is approaching, negatively impacting project efficiency and straining team resources.
In short, manual code review demands significant time, effort, and coordination from the development team.
This is when AI code review comes to the rescue. AI code review tools are becoming increasingly popular. Let’s look at what AI code review is and why it is important for developers:
What is AI Code Review?
AI code review is an automated process that examines and analyzes the code of software applications. It uses artificial intelligence and machine learning techniques to identify patterns and detect potential problems, common programming mistakes, and security vulnerabilities. Because these tools are driven by data, they are less prone to individual reviewer bias and can read vast amounts of code in seconds.
Why is AI Important in the Code Review Process?
Augmenting human efforts with AI code review has various benefits:
Enhance Overall Quality
Generative AI in code review tools can detect issues like potential bugs, security vulnerabilities, code smells, and bottlenecks that human reviews often overlook. It helps identify patterns and recommend code improvements that enhance efficiency and maintainability and reduce technical debt. This leads to robust, reliable software that meets the highest quality standards.
Improve Productivity
AI-powered tools can scan and analyze large volumes of code within minutes. They not only detect potential issues but also suggest improvements aligned with coding standards and practices. By providing immediate feedback, they allow the development team to catch errors early in the development cycle. This saves time otherwise spent on manual inspection, so developers can focus on the more intricate and creative parts of their work.
Better Compliance with Coding Standards
The automated code review process ensures that code conforms to coding standards and best practices, making it more readable, understandable, and maintainable and thereby improving code quality. It also enhances teamwork and collaboration, since all developers adhere to the same guidelines and the review process stays consistent.
Enhance Accuracy
The major disadvantage of manual code reviews is that they are prone to human error and bias, which can compound into critical issues around structural quality and architectural decisions that negatively impact the software. Generative AI in code reviews can analyze code much faster and more consistently than humans, maintaining accuracy and reducing bias because the analysis is driven by data.
Increase Scalability
When software projects grow in complexity and size, manual code reviews become increasingly time-consuming and may struggle to keep up with the scale of the codebase, further delaying the review process. As mentioned before, AI code review tools can handle large codebases in seconds and help development teams maintain high standards of code quality and maintainability.
How Typo Leverages Gen AI to Automate Code Reviews
Typo’s automated code review tool enables developers to merge clean, secure, high-quality code, faster. It lets developers catch issues related to maintainability, readability, and potential bugs, and can detect code smells. It auto-analyses your codebase and pull requests to find issues and auto-generates fixes before you merge to master.
Typo’s Auto-Fix feature leverages GPT 3.5 Pro to generate line-by-line code snippets wherever an issue is detected in the codebase. This means less time reviewing and more time for important tasks, making the whole process faster and smoother.
Issue detection by Typo
Auto fixing the codebase with an option to directly create a Pull Request
Key Features
Supports Top 10+ Languages
Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.
Fix Every Code Issue
Typo understands the context of your code and quickly and accurately finds and fixes issues, empowering developers to work on software projects seamlessly and efficiently.
Efficient Code Optimization
Typo uses optimized practices and built-in methods spanning multiple languages, reducing code complexity and ensuring thorough quality assurance throughout the development process.
Professional Coding Standards
Typo standardizes code and reduces the risk of a security breach.
Comparing Typo with Other AI Code Review Tools
There are other popular AI code review tools available in the market. Let’s compare how we stack against others:
| Feature | Typo | Sonarcloud | Codacy | Codecov |
| --- | --- | --- | --- | --- |
| Code analysis | AI analysis and static code analysis | No | No | No |
| Code context | Deep understanding | No | No | No |
| Proprietary models | Yes | No | No | No |
| Auto debugging | Automated debugging with detailed explanations | Manual | No | No |
| Auto pull request | Automated pull requests and fixes | No | No | No |
AI vs. Humans: The Future of Code Reviews?
AI code review tools are becoming increasingly popular, and one question on everyone’s mind is whether they will take away developers’ jobs.
The answer is NO.
Generative AI in code reviews is designed to enhance and streamline the development process. It lets developers automate repetitive, time-consuming tasks and focus on other core aspects of software applications. Moreover, human judgment, creativity, and domain knowledge are crucial to software development in ways AI cannot fully replicate.
While these tools excel at certain tasks, like analyzing codebases, identifying code patterns, and software testing, they still cannot fully understand complex business requirements or user needs, or make subjective decisions.
As a result, the combination of AI code review tools and developers’ intervention is an effective approach to ensure high-quality code.
Conclusion
The tech industry is demanding, and software engineering teams need to stay ahead of industry trends. New AI tools and technologies can complement their skills and expertise and make their work easier.
AI in the code review process offers remarkable benefits, including fewer human errors and consistent accuracy. Just remember that these tools are there to assist with your tasks, not to replace you or define your whole strategy.
How Generative AI Is Revolutionising Developer Productivity
Generative AI has become a transformative force in the tech world, and it isn’t going to stop anytime soon. It will continue to have a major impact, especially in the software development industry. Generative AI, used in the right way, can save developers time and effort, allowing them to focus on core tasks and upskilling. It also helps streamline various stages of the SDLC and improves developer productivity. In this article, let’s dive deeper into how generative AI can positively impact developer productivity.
What is Generative AI?
Generative AI is a category of AI models and tools designed to create new content: images, videos, text, music, or code. It uses techniques including neural networks and deep learning algorithms to generate that content. Generative AI offers software developers a great advantage in improving their productivity. It not only improves code quality and helps deliver better products and services, but also allows teams to stay ahead of their competitors. Below are a few benefits of generative AI:
Increases Efficiency
With the help of Generative AI, developers can automate tasks that are either repetitive or don’t require much attention. This saves a lot of time and energy and allows developers to be more productive and efficient in their work. Hence, they can focus on more complex and critical aspects of the software without constantly stressing about other work.
Improves Quality
Generative AI can help minimize errors and address potential issues early. When configured according to your coding standards, it can contribute to more effective code reviews. This increases code quality and decreases costly downtime and data loss.
Helps in Learning and Assisting with Work
Generative AI can assist developers by analyzing and generating examples of well-structured code, providing suggestions for refactoring, generating code snippets, and detecting blind spots. This further helps developers in upskilling and gaining knowledge about their tasks.
Cost Savings
Integrating generative AI tools can reduce costs. It enables developers to use existing codebases effectively and complete projects faster, even with smaller teams. Generative AI can streamline the stages of the software development life cycle and get the most out of a limited budget.
Predictive Analytics
Generative AI can help in detecting potential issues in the early stages by analyzing historical data. It can also make predictions about future trends. This allows developers to make informed decisions about their projects, streamline their workflow, and hence, deliver high-quality products and services.
How does Generative AI Help Software Developers?
Below are four key areas in which Generative AI can be a great asset to software developers:
It Eliminates Manual and Repetitive Tasks
Generative AI can take over the manual, routine tasks of software development teams: test automation, completing coding statements, writing documentation, and so on. Developers provide a prompt, i.e., information about their code and documentation that adheres to best practices, and it generates the required content accordingly, minimizing human error and increasing accuracy. This frees up developers’ creativity and problem-solving, letting them focus on solving complex business challenges and fast-tracking new software capabilities. Hence, it helps in faster delivery of products and services to end users.
It Helps Developers to Tackle New Challenges
When developers face challenges or obstacles in their projects, they can turn to these AI tools for assistance. The tools can track performance, provide feedback, offer predictions, and find the optimal path to complete tasks. Given clear, well-crafted prompts, they can provide problem-specific recommendations and proven solutions. This keeps developers from getting stuck and stressed over particular tasks; instead, they can spend their time and energy on other important work, or take breaks. It increases their productivity and performance and improves the overall developer experience.
It Helps in Creating the First Draft of the Code
With generative AI, developers can get helpful code suggestions and generate initial drafts, either by entering a prompt in a separate window or directly within the IDE used to develop the software. This keeps developers from falling into a slump and helps them get into the flow sooner. Beyond that, these AI tools can assist with root cause analysis and generate new system designs, allowing developers to reason about code at a higher, more abstract level and focus on what they want to build.
It Helps in Making Changes to Existing Code Faster
Generative AI can accelerate updates to existing code. Developers simply provide the criteria, and the AI tool proceeds from there. This usually covers tasks that get sidelined due to workload and lack of time; for example, refactoring existing code in small steps to improve readability and performance. As a result, developers can focus on high-level design and critical decision-making without worrying as much about existing tasks.
How does Generative AI Improve Developer Productivity?
Below are a few ways in which Generative AI can have a positive impact on developer productivity:
Focus on Meaningful Tasks
As generative AI tools take on tedious, repetitive tasks, they let developers devote their time and energy to meaningful activities. This reduces distractions and protects them from stress and burnout, increasing their productivity and positively impacting the overall developer experience.
Assist in their Learning Graph
Generative AI lets developers be less dependent on seniors and co-workers, since they can get practical insights and examples directly from these tools. This helps them reach their flow state faster and reduces their stress levels.
Assist in Pair Programming
Through Generative AI, developers can collaborate with other developers easily. These AI tools help in providing intelligent suggestions and feedback during coding sessions. This stimulates discussion between them and leads to better and more creative solutions.
Increase the Pace of Software Development
Generative AI helps in the continuous delivery of products and services and drives business strategy. It addresses potential issues in the early stages and provides suggestions for improvement. Hence, it not only accelerates the phases of the SDLC but also improves overall quality.
Popular Generative AI Tools for Developers
Typo auto-analyzes your code and pull requests to find issues and suggests auto-fixes before they get merged.
Use Case
The code review process is time-consuming. Typo enables developers to find issues as soon as a PR is raised and shows alerts within the Git account. It gives you a detailed summary of security, vulnerability, and performance issues. To streamline the whole process, it suggests auto-fixes and best practices to move things along faster and better.
GitHub Copilot is an AI pair programmer that provides autocomplete-style suggestions for your code.
Use Case
Coding is an integral part of any software development project, but done entirely by hand it takes a lot of effort. GitHub Copilot picks suggestions from your current and related code files and lets you test and select code to perform different actions. It also ensures that vulnerable coding patterns are filtered out and blocks problematic public code suggestions.
Tabnine is an AI-powered code completion tool that uses deep learning to suggest code as you type.
Use Case
Writing repetitive code can keep you from focusing on other core activities. Tabnine provides increasingly accurate suggestions over time, adapting to your coding habits and personalizing its completions. It supports programming languages such as JavaScript and Python and integrates with popular IDEs for speedy setup and reduced context switching.
ChatGPT is a language model developed by OpenAI to understand prompts and generate human-like texts.
Use Case
Developers need to brainstorm ideas and get feedback on their projects. This is where ChatGPT comes to the rescue. It helps them quickly find answers to questions about coding, technical documentation, programming concepts, and much more. It uses natural language to understand questions and provide relevant suggestions.
Mintlify is an AI-powered documentation writer that allows developers to quickly and accurately generate code documentation.
Use Case
Code documentation can be a tedious process. Mintlify can analyze code, quickly understand complicated functions, and include built-in analytics to help developers understand how users engage with the documentation. It also has a Mintlify chat that reads documents and answers user questions instantly.
How to Mitigate Risks Associated with Generative AI?
However effective generative AI is becoming, it still produces defects and errors. Its output is not always correct, so human review remains important after delegating tasks to AI tools. Below are a few ways you can reduce the risks related to generative AI:
Implement Quality Control Practices
Develop guidelines and policies to address ethical challenges such as fairness, privacy, transparency, and accuracy in software development projects. Put a monitoring system in place that tracks model accuracy, performance metrics, and potential biases.
Provide Generative AI Training
Offer mentorship and training on generative AI. This increases AI literacy across departments and mitigates risk. Help teams learn how to use these tools effectively and understand their capabilities and limitations.
Understand AI is an Assistant, Not a Replacement
Make your developers understand that these generative tools should be viewed as assistants only. Encourage collaboration between these tools and human operators to leverage the strength of AI.
Conclusion
In a nutshell, generative AI stands as a game-changer in the software development industry. When harnessed effectively, it can bring a multitude of benefits to the table. However, ensure that your developers approach its integration with caution.
Scope creep is one of the most challenging—and often frustrating—issues engineering managers face. As projects progress, new requirements, changing technologies, and evolving stakeholder demands can all lead to incremental additions that push your project beyond its original scope. Left unchecked, scope creep strains resources, raises costs, and jeopardizes deadlines, ultimately threatening project success.
This guide is here to help you take control. We’ll delve into advanced strategies and practical solutions specifically for managers to spot and manage scope creep before it disrupts your project. With detailed steps, technical insights, and tools like Typo, you can set boundaries, keep your team aligned, and drive projects to a successful, timely completion.
Understanding Scope Creep in Sprints
Scope creep can significantly impact projects, affecting resource allocation, team morale, and project outcomes. Understanding what scope creep is and why it frequently occurs provides a solid foundation for developing effective strategies to manage it.
What is Scope Creep?
Scope creep in projects refers to the gradual addition of requirements beyond what was originally defined. Unlike industries with stable parameters, software projects often encounter rapid changes, such as emerging features, stakeholder requests, or unanticipated technical complexities, that challenge the initial project boundaries.
While additional features can improve the end product, they can also risk the project's success if not managed carefully. Common triggers for scope creep include unclear project requirements, mid-project requests from stakeholders, and iterative development cycles, all of which require proactive management to keep projects on track.
Why does Scope Creep Happen?
Scope creep often results from several factors unique to the field. By understanding these drivers, you can develop processes that minimize their impact and keep your project on target:
Unclear requirements: At the start of a project, unclear or vague requirements can lead to an ever-expanding set of deliverables. For engineering managers, ensuring all requirements are well-defined is critical to setting project boundaries.
Shifting technological needs: IT projects must often adapt to new technology or security requirements that weren’t anticipated initially, leading to added complexity and potential delays.
Stakeholder influence and client requests: Frequent client input can introduce scope creep, especially if changes are not formally documented or accounted for in resources and timelines.
Agile development: Agile development allows flexibility and iterative updates, but without careful scope management, it can lead to feature creep.
These challenges make it essential for managers to recognize scope creep indicators early and develop robust systems to manage new requests and technical changes.
Identifying Scope Creep Early in the Sprints
Identifying scope creep early is key to preventing it from derailing your project. By setting clear boundaries and maintaining consistent communication with stakeholders, you can catch scope changes before they become a problem.
Define Clear Project Scope and Objectives
The first step in minimizing scope creep is establishing a well-defined project scope that explicitly outlines deliverables, timelines, and performance metrics. In sprints, this scope must include technical details like software requirements, infrastructure needs, and integration points.
Regular Stakeholder Check-Ins
Frequent communication with stakeholders is crucial to ensure alignment on the project’s progress. Schedule periodic reviews to present progress, confirm objectives, and clarify any evolving requirements.
Routine Project Reviews and Status Updates
Integrate routine reviews into the project workflow to regularly assess the project’s alignment with its scope. Typo enables teams to conduct these reviews seamlessly, providing a comprehensive view of the project’s current state. This structured approach allows managers to address any adjustments or unexpected tasks before they escalate into significant scope creep issues.
Strategies for Managing Scope Creep
Once scope creep has been identified, implementing specific strategies can help prevent it from escalating. With the following approaches, you can address new requests without compromising your project timeline or objectives.
Implement a Change Control Process
One of the most effective ways to manage scope creep is to establish a formal change control process. A structured approach allows managers to evaluate each change request based on its technical impact, resource requirements, and alignment with project goals.
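To make this concrete, here is a minimal sketch of what a change control gate might look like in code. It is illustrative only: the ChangeRequest fields, thresholds, and rules are hypothetical stand-ins for whatever criteria your team agrees on.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    title: str
    estimated_hours: float   # effort the team expects the change to take
    aligns_with_goals: bool  # does it serve the project's stated objectives?
    technical_risk: str      # "low", "medium", or "high"

def evaluate(request: ChangeRequest, remaining_capacity_hours: float) -> str:
    """Run every request through the same explicit, documented rule set."""
    if not request.aligns_with_goals:
        return "reject: out of scope"
    if request.technical_risk == "high":
        return "defer: needs architecture review"
    if request.estimated_hours > remaining_capacity_hours:
        return "defer: exceeds remaining sprint capacity"
    return "accept"

print(evaluate(ChangeRequest("Add SSO login", 16, True, "medium"), 12.0))
# -> defer: exceeds remaining sprint capacity
```

In practice the inputs would come from your project management tool; the value is that every request is judged by the same visible criteria rather than ad hoc negotiation.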
Effective Communication and Real-Time Updates
Communication breakdowns can lead to unnecessary scope expansion, especially in complex team environments. Use Typo’s Sprint Analysis to track project changes and real-time developments. This level of visibility gives stakeholders a clear understanding of trade-offs and allows managers to communicate the impact of requests, whether related to resource allocation, budget implications, or timeline shifts.
Prioritize and Adjust Requirements in Real Time
In software development, feature prioritization can be a strategic way to handle evolving needs without disrupting core project objectives. When a high-priority change arises, use Typo to evaluate resource availability, timelines, and dependencies, making necessary adjustments without jeopardizing essential project elements.
Advanced Tools and Techniques to Prevent Scope Creep
Beyond basic strategies, specific tools and advanced techniques can further safeguard your IT project against scope creep. Leveraging project management solutions and rigorous documentation practices are particularly effective.
Leverage Typo for End-to-End Project Management
For projects, having a comprehensive project management tool can make all the difference. Typo provides robust tracking for timelines, tasks, and resources that align directly with project objectives. Typo also offers visibility into task assignments and dependencies, which helps managers monitor all project facets and mitigate scope risks proactively.
Detailed Change Tracking and Documentation
Documentation is vital in managing scope creep, especially in projects where technical requirements can evolve quickly. By creating a “single source of truth,” Typo enables the team to stay aligned, with full visibility into any shifts in project requirements.
Budget and Timeline Contingencies
Software projects benefit greatly from budget and time contingencies that allow for minor, unexpected adjustments. By pre-allocating resources for possible scope adjustments, managers have the flexibility to accommodate minor changes without impacting the project’s overall trajectory.
Maintaining Team Morale and Focus amid Scope Creep
As scope adjustments occur, it’s important to maintain team morale and motivation. Empowering the team and celebrating their progress can help keep everyone focused and resilient.
Empower the Team to Decline Non-Essential Changes
Encouraging team members to communicate openly about their workload and project demands is crucial for maintaining productivity and morale.
Recognize and Celebrate Milestones
Managing IT projects with scope creep can be challenging, so it’s essential to celebrate milestones and acknowledge team achievements.
Typo - An Effective Sprint Analysis Tool
Typo’s sprint analysis monitors scope creep to quantify its impact on the team’s workload and deliverables. It allows you to track and analyze your team’s progress throughout a sprint and helps you gain visual insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint. This information enables you to identify any potential problems early on and take corrective action.
Our sprint analysis feature uses data from Git and issue management tools to provide insights into how your team is working. You can see how long tasks are taking, how often they’re being blocked, and where bottlenecks are occurring. This information can help you identify areas for improvement and make sure your team is on track to meet their goals.
Taking Charge of Scope Creep
Effective management of scope creep in IT projects requires a balance of proactive planning, structured communication, and robust change management. With the right strategies and tools like Typo, managers can control project scope while keeping the team focused and aligned with project goals.
If you’re facing scope creep challenges, consider implementing these best practices and exploring Typo’s project management capabilities. By using Typo to centralize communication, track progress, and evaluate change requests, IT managers can prevent scope creep and lead their projects to successful, timely completion.
Are your code reviews fostering constructive discussions or stuck in endless cycles of revisions?
Let’s change that.
In many development teams, code reviews have become a necessary but frustrating part of the workflow. Rather than enhancing collaboration and improvement, they often drag on, leaving developers feeling drained and disengaged.
This inefficiency can lead to rushed releases, increased bugs in production, and a demotivated team. As deadlines approach, the very process meant to elevate code quality can become a barrier to success, creating a culture where developers feel undervalued and hesitant to share their insights.
The good news? You can transform your code review process into a constructive and engaging experience. By implementing strategic changes, you can cultivate a culture of open communication, collaborative learning, and continuous improvement.
This blog aims to provide developers and engineering managers with a comprehensive framework for optimizing the code review process, incorporating insights on leveraging tools like Typo and discussing the technical nuances that underpin effective code reviews.
The Importance of Code Reviews
Code reviews are a critical aspect of the software development lifecycle. They provide an opportunity to scrutinize code, catch errors early, and ensure adherence to coding standards. Here’s why code reviews are indispensable:
Error detection and bug prevention
The primary function of code reviews is to identify issues before they escalate into costly bugs or security vulnerabilities. By implementing rigorous review protocols, teams can detect errors at an early stage, reducing technical debt and enhancing code stability.
Utilizing static code analysis tools like SonarQube and ESLint can automate the detection of common issues, allowing developers to focus on more intricate code quality aspects.
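As an illustration, a pre-review gate can be a small script in your CI pipeline that runs these linters and fails the build when they report issues. The specific commands below are assumptions about your toolchain (an ESLint setup run via npx, plus pyflakes for Python files) and should be adapted to match it.

```python
import subprocess
import sys

def run_check(command: list[str]) -> bool:
    """Run a lint command; a non-zero exit code means issues were found."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout or result.stderr)
        return False
    return True

if __name__ == "__main__":
    checks = [
        ["npx", "eslint", "src/"],             # JavaScript/TypeScript lint
        ["python", "-m", "pyflakes", "app/"],  # lightweight Python static check
    ]
    if not all(run_check(cmd) for cmd in checks):
        sys.exit(1)  # fail the pipeline so issues are fixed before human review
```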
Knowledge sharing
Code reviews foster an environment of shared learning and expertise. When developers engage in peer reviews, they expose themselves to different coding styles, techniques, and frameworks. This collaborative process enhances individual skill sets and strengthens the team’s collective knowledge base.
To facilitate this knowledge transfer, teams should maintain documentation of coding standards and review insights, which can serve as a reference for future projects.
Maintaining code quality
Adherence to coding standards and best practices is crucial for maintaining a high-quality codebase. Effective code reviews enforce guidelines related to design patterns, performance optimization, and security practices.
By prioritizing clean, maintainable code, teams can reduce the likelihood of introducing technical debt. Establishing clear documentation for coding standards and conducting periodic training sessions can reinforce these practices.
Enhanced collaboration
The code review process inherently encourages open dialogue and constructive feedback. It creates a culture where developers feel comfortable discussing their approaches, leading to richer collaboration. Implementing pair programming alongside code reviews can provide real-time feedback and enhance team cohesion.
Accelerated onboarding
For new team members, code reviews are an invaluable resource for understanding the team’s coding conventions and practices. Engaging in the review process allows them to learn from experienced colleagues while providing opportunities for immediate feedback.
Pairing new hires with seasoned developers during the review process accelerates their integration into the team.
Common Challenges in Code Reviews
Despite their advantages, code reviews can present challenges that hinder productivity. It’s crucial to identify and address these issues to optimize the process effectively:
Lengthy review cycles
Extended review cycles can impede development timelines and lead to frustration among developers. This issue often arises from an overload of reviewers or complex pull requests. To combat this, implement guidelines that limit the size of pull requests, making them more manageable and allowing for quicker reviews. Additionally, establishing defined review timelines can help maintain momentum.
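One way to enforce such a guideline is a lightweight size check on every pull request. The sketch below assumes the addition and deletion counts come from your Git hosting API; the 400-line threshold is an arbitrary example to tune for your team.

```python
MAX_CHANGED_LINES = 400  # example threshold; adjust to your team's norms

def check_pr_size(additions: int, deletions: int) -> str:
    """Flag pull requests that are too large to review effectively."""
    changed = additions + deletions
    if changed > MAX_CHANGED_LINES:
        return (f"PR changes {changed} lines (limit {MAX_CHANGED_LINES}); "
                "consider splitting it into smaller, focused PRs.")
    return "PR size OK"

# additions/deletions would typically come from your Git hosting API
print(check_pr_size(additions=350, deletions=120))
```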
Inconsistent feedback
A lack of standardization in feedback can create confusion and frustration among team members. Inconsistency often stems from varying reviewer expectations. Implementing a standardized checklist or rubric for code reviews can ensure uniformity in feedback and clarify expectations for all team members.
Bottlenecks and lack of accountability
If code reviews are concentrated among a few individuals, it can lead to bottlenecks that slow down the entire process. Distributing review responsibilities evenly among team members is essential to ensure timely feedback. Utilizing tools like GitHub and GitLab can facilitate the assignment of reviewers and track progress in real-time.
Limited collaboration and feedback
Sparse or overly critical feedback can hinder the collaborative nature of code reviews. Encouraging a culture of constructive criticism is vital. Train reviewers to provide specific, actionable feedback that emphasizes improvement rather than criticism.
Regularly scheduled code review sessions can enhance collaboration and ensure engagement from all team members.
How Typo can Streamline your Code Review Process
To optimize your code review process effectively, leveraging the right tools is paramount. Typo offers a suite of features designed to enhance productivity and code quality:
Automated code analysis
Automating code analysis through Typo significantly streamlines the review process. Built-in linting and static analysis tools flag potential issues before the review begins, enabling developers to concentrate on complex aspects of the code. Integrating Typo with CI/CD pipelines ensures that only code that meets quality standards enters the review process.
Feedback and commenting system
Typo features an intuitive commenting system that allows reviewers to leave clear, actionable feedback directly within the code. This approach ensures developers receive specific suggestions, leading to more effective revisions. Implementing a tagging system for comments can categorize feedback and prioritize issues efficiently.
Metrics and insights
Typo provides detailed metrics and insights into code review performance. Engineering managers can analyze trends, such as recurring bottlenecks or areas for improvement, allowing for data-driven decision-making. Tracking metrics like review time, comment density, and acceptance rates can reveal deeper insights into team performance and highlight areas needing further training or resources.
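For illustration, the snippet below computes two such metrics, average review time and acceptance rate, from a handful of hypothetical review records; in practice the timestamps would come from your review tool’s API.

```python
from datetime import datetime, timedelta

# Hypothetical review records: (opened_at, closed_at, accepted)
reviews = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 15), True),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 4, 11), True),
    (datetime(2024, 5, 3, 14), datetime(2024, 5, 6, 9), False),
]

durations = [closed - opened for opened, closed, _ in reviews]
avg_review_time = sum(durations, timedelta()) / len(durations)
acceptance_rate = sum(1 for *_, ok in reviews if ok) / len(reviews)

print(f"Average review time: {avg_review_time}")
print(f"Acceptance rate: {acceptance_rate:.0%}")
```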
Best Practices to Enhance Your Code Review Process
In addition to leveraging tools like Typo, adopting best practices can further enhance your code review process:
1. Set clear objectives and standards
Define clear objectives for code reviews, detailing what reviewers should focus on during evaluations. Developing a comprehensive checklist that includes adherence to coding conventions, performance considerations, and testing coverage ensures consistency and clarity in expectations.
2. Leverage automation tools
Employ automation tools to reduce manual effort and improve review quality. Automating code analysis helps identify common mistakes early, freeing reviewers to address more complex issues. Integrating automated testing frameworks validates code functionality before reaching the review stage.
3. Encourage constructive feedback
Fostering a culture of constructive feedback is crucial for effective code reviews. Encourage reviewers to provide specific, actionable comments emphasizing improvement. Implementing a “no blame” policy during reviews promotes an environment where developers feel safe to make mistakes and learn from them.
4. Balance thoroughness and speed
Finding the right balance between thorough reviews and maintaining development velocity is essential. Establish reasonable time limits for reviews to prevent bottlenecks while ensuring reviewers dedicate adequate time to assess code quality thoroughly. Timeboxing reviews can help maintain focus and reduce reviewer fatigue.
5. Rotate reviewers and share responsibilities
Regularly rotating reviewers prevents burnout and ensures diverse perspectives in the review process. Sharing responsibilities promotes knowledge transfer across the team and mitigates the risk of bottlenecks. Implementing a rotation schedule that pairs developers with different reviewers fosters collaboration and learning.
The Role of Engineering Managers in Code Reviews
While developers execute the code review process, engineering managers have a critical role in optimizing and supporting it. Here’s how they can contribute effectively:
Facilitating communication and support
Engineering managers must actively facilitate communication within the team, ensuring alignment on the goals and expectations of code reviews. Regular check-ins can help identify roadblocks and provide opportunities for team members to express concerns or seek guidance.
Setting expectations and accountability
Establishing a culture of accountability around code reviews is essential. Engineering managers should communicate clear expectations for both developers and reviewers, creating a shared understanding of responsibilities. Providing ongoing training on effective review practices reinforces these expectations.
Monitoring metrics and performance
Utilizing the metrics and insights provided by Typo enables engineering managers to monitor team performance during code reviews. Analyzing this data allows managers to identify trends and make informed decisions about adjustments to the review process, ensuring continuous improvement.
Promoting a growth mindset
Engineering managers should cultivate a growth mindset within the team, encouraging developers to view feedback as an opportunity for learning and improvement. Creating an environment where constructive criticism is welcomed fosters a culture of continuous development and innovation. Encouraging participation in code review workshops or technical training sessions can reinforce this mindset.
Wrapping up: Elevating your code review process
An optimized code review process is not merely a procedural necessity; it is a cornerstone of developer productivity and code quality. By establishing clear guidelines, promoting collaboration, and leveraging tools like Typo, you can streamline the review process and foster a culture of continuous improvement within your team.
Typo serves as a robust platform that enhances the efficiency and effectiveness of code reviews, allowing teams to deliver higher-quality software at an accelerated pace. By embracing best practices and adopting a collaborative mindset, you can transform your code review process into a powerful driver of success.
In an ever-changing tech landscape, organizations need to stay agile and deliver high-quality software rapidly. DevOps plays a crucial role in achieving these goals by bridging the gap between development and operations teams.
In this blog, we will delve into how to build a DevOps culture within your organization and explore the fundamental practices and strategies that can lead to more efficient, reliable, and customer-focused software development.
What is DevOps?
DevOps is a software development methodology that integrates development (Dev) and IT operations (Ops) to enhance software delivery’s speed, efficiency, and quality. The primary goal is to break down traditional silos between development and operations teams and foster a culture of collaboration and communication throughout the software development lifecycle. This creates a more efficient and agile workflow that allows organizations to respond quickly to changes and deliver value to customers faster.
Why DevOps Culture is Beneficial?
DevOps culture refers to a collaborative and integrated approach between development and operations teams. It focuses on breaking down silos, fostering a shared sense of responsibility, and improving processes through automation and continuous feedback.
Fostering collaboration between development and operations allows organizations to innovate more rapidly, and respond to market changes and customer needs effectively.
Automation and streamlined processes reduce manual tasks and errors to increase efficiency in software delivery. This efficiency results in faster time-to-market for new features and updates.
Continuous integration and delivery practices improve software quality by early detection of issues. This helps maintain system stability and reliability.
A DevOps culture encourages teamwork and mutual trust to improve collaboration between previously siloed teams. This cohesive environment fosters innovation and collective problem-solving.
DevOps culture results in faster recovery time as they can identify and address issues more swiftly, reducing downtime and improving overall service reliability.
Delivering high-quality software quickly and efficiently enhances customer satisfaction and loyalty, which is vital for long-term success.
The CALMS Framework of DevOps
The CALMS framework is used to understand and implement DevOps principles effectively. It breaks down DevOps into five key components:
Culture
The culture pillar focuses on fostering a collaborative environment where shared responsibility and open communication are prioritized. It is crucial to break down silos between development and operations teams and allow them to work together more effectively.
Automation
Automation emphasizes minimizing manual intervention in processes. This includes automating testing, deployment, and infrastructure management to enhance efficiency and reliability.
Lean
The lean aspect aims to optimize workflows, manage work-in-progress (WIP), and eliminate non-value-adding activities. This is to streamline processes to accelerate software delivery and improve overall quality.
Measurement
Measurement involves collecting data to assess the effectiveness of software delivery processes and practices. It enables teams to make informed, fact-based decisions, identify areas for improvement, and track progress.
Sharing
The sharing component promotes open communication and knowledge transfer among teams. It facilitates cross-team collaboration, fosters a learning environment, and ensures that successful practices and insights are shared and adopted widely.
Tips to Build a DevOps Culture
Start Simple
Don’t overwhelm teams with a complete DevOps overhaul. Begin small and implement DevOps practices gradually. Start with the team that is best aligned with DevOps principles, then move on to other teams in the organization. Build momentum with early wins and evolve practices as you gain experience.
Foster Communication and Collaborative Environment
Communication is key. Done correctly, it promotes collaboration and a smooth flow of information across the organization, aligning operations and letting engineering leaders make informed decisions.
Moreover, a combined working environment between the development and operations teams promotes a culture of shared responsibility and common objectives. Teams can openly communicate ideas and challenges, holding mutual conversations about resources, schedules, required features, and project execution.
Create a Common Goal
Apart from encouraging communication and a collaborative environment, create a clear plan that outlines where you want to go and how you will get there. Ensure that these goals are realistic and achievable. This will allow teams to see the bigger picture and understand the desired outcome, motivating them to move in the right direction.
Focus on Automation
Tools such as Slack, Kubernetes, Docker, and JFrog help build automation capabilities for DevOps teams. They automate repetitive, mundane tasks and let teams focus on value-adding work, allowing them to fail fast, build fast, and deliver quickly. This enhances efficiency and accelerates processes, positively impacting DevOps culture. Instead of assuming, ask your team directly which parts of their work can be automated, and then provide the support needed to automate them.
Implement CI/CD pipeline
The organization must fully understand and implement CI/CD to establish a DevOps culture and streamline the software delivery process. This allows for automating deployment from development to production and releasing the software more frequently with better quality and reduced risks. The CI/CD tools further allow teams to catch bugs early in the development cycle, reduce manual work, and minimize downtime between releases.
Foster Continuous Learning and Improvement
Continuous improvement is a key principle of DevOps culture. Engineering leaders must look for ways to encourage continuous learning and improvement such as by training and providing upskilling opportunities. Besides this, give them the freedom to experiment with new tools and techniques. Create a culture where they feel comfortable making mistakes and learning from them.
Balance Speed and Security
The teams must ensure that delivering products quickly doesn’t mean compromising security. In DevOps culture, the organization must adopt a ‘Security-first approach’ by integrating security practices into the DevOps pipeline. To maintain a strong security posture, regular security audits and compliance checks are essential. Security scans should be conducted at every stage of the development lifecycle to continuously monitor and assess security.
Monitor and Measure
Regularly monitor and track system performance to detect issues early and ensure smooth operation. Use metrics and data to guide decisions, optimize processes, and continuously improve DevOps practices. Implement comprehensive dashboards and alerts to ensure teams can quickly respond to performance issues and maintain optimal health.
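A minimal version of such an alert is a threshold check over metrics you already collect. The metric names and thresholds below are hypothetical examples, not prescriptions; real values should follow your SLOs.

```python
# Hypothetical thresholds; real values depend on your SLOs.
THRESHOLDS = {"error_rate": 0.02, "p95_latency_ms": 500}

def check_health(metrics: dict[str, float]) -> list[str]:
    """Return an alert message for every metric that breaches its threshold."""
    return [
        f"ALERT: {name} = {value} exceeds threshold {THRESHOLDS[name]}"
        for name, value in metrics.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

for alert in check_health({"error_rate": 0.035, "p95_latency_ms": 420}):
    print(alert)  # wire this to Slack/PagerDuty in a real pipeline
```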
Prioritize Customer Needs
In DevOps culture, the organization must emphasize the ever-evolving needs of the customers. Encourage teams to think from the customer’s perspective and keep their needs and satisfaction at the forefront of the software delivery processes. Regularly incorporate customer feedback into the development cycle to ensure the product aligns with user expectations.
Typo - An Effective Platform to Promote DevOps Culture
Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It seamlessly integrates into tech tool stacks such as Git versioning, issue trackers, and CI/CD tools.
It also offers comprehensive insights into the deployment process through DORA and other key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it provides a 360° view of the developer experience, capturing qualitative insights and offering an in-depth view of the real issues.
Building a DevOps culture is essential for organizations to improve their software delivery capabilities and maintain a competitive edge. Implementing key practices as mentioned above will pave the way for a successful DevOps transformation.
DORA metrics are a compass for engineering teams striving to optimise their development and operations processes.
Consistently tracking these metrics can lead to significant and lasting improvements in your software delivery processes and overall business performance.
Below is a detailed guide on how Typo uses DORA to improve DevOps performance and boost efficiency:
What are DORA Metrics?
In 2015, the DORA (DevOps Research and Assessment) team was founded by Gene Kim, Jez Humble, and Nicole Forsgren to evaluate and improve software development practices. The aim was to better understand how organisations can deliver software faster, more reliably, and at higher quality.
They developed DORA metrics that provide insights into the performance of DevOps practices and help organisations improve their software development and delivery processes. These metrics help in finding answers to these two questions:
How can organisations identify their elite performers?
What should low-performing teams focus on?
The Four DORA Metrics
DORA metrics help assess software delivery performance based on four key (or “accelerate”) metrics:
Deployment Frequency
Lead Time for Changes
Change Failure Rate
Mean Time to Recover
Deployment Frequency
Deployment Frequency measures how often code is deployed to production. It helps in understanding a team’s throughput and quantifying how much value is delivered to customers.
When organizations achieve a high Deployment Frequency, they can enjoy rapid releases without compromising the software’s robustness. This can be a powerful driver of agility and efficiency, making it an essential component for software development teams.
One deployment per week is standard. However, it also depends on the type of product.
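As a rough sketch of the calculation, Deployment Frequency is just a count of production deployments over the observation window. The deployment dates below are made up for illustration; real ones would come from your CI/CD tool.

```python
from collections import Counter
from datetime import date

# Hypothetical production deployment dates pulled from a CI/CD tool
deployments = [date(2024, 5, 1), date(2024, 5, 3), date(2024, 5, 9), date(2024, 5, 16)]

per_week = Counter(d.isocalendar().week for d in deployments)
# Note: this averages only over weeks that saw at least one deployment.
avg_per_week = len(deployments) / len(per_week)
print(f"Average deployment frequency: {avg_per_week:.1f} per week")
```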
Why is it Important?
It provides insights into the overall efficiency and speed of the DevOps team’s processes.
It helps in identifying pitfalls and areas for improvement in the software development life cycle.
It helps in making data-driven decisions to optimise the process.
It helps in understanding the impact of changes on system performance.
Lead Time for Changes
Lead Time for Changes measures the time it takes for code changes to move from inception to deployment. The measurement of this metric offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.
By analysing Lead Time for Changes, development teams can identify bottlenecks in the delivery pipeline and streamline their workflows to improve the overall speed and efficiency of software delivery. A shorter lead time indicates that the DevOps team is more efficient at deploying code.
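In its simplest form, the metric is the average of (deployment time minus first commit time) across recent changes, as in this illustrative sketch with made-up timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical (first_commit_at, deployed_at) pairs for recent changes
changes = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 2, 17, 0)),
    (datetime(2024, 5, 3, 11, 0), datetime(2024, 5, 3, 18, 30)),
    (datetime(2024, 5, 6, 10, 0), datetime(2024, 5, 9, 12, 0)),
]

lead_times = [deployed - committed for committed, deployed in changes]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
print(f"Average lead time for changes: {avg_lead_time}")
```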
Why is it Important?
It helps organisations gather feedback and validate assumptions quickly, leading to informed decision-making and aligning software development with customer needs.
It helps organizations gain agility and adaptability, allowing them to swiftly respond to market changes, embrace new technologies, and meet evolving business needs.
It enables experimentation, learning, and continuous improvement, empowering organizations to stay competitive in dynamic environments.
It demands collaborative teamwork, breaking silos, fostering shared ownership, and improving communication, coordination, and efficiency.
Change Failure Rate
Change Failure Rate gauges the percentage of changes that require hotfixes or other remediation after release to production. It reflects the stability and reliability of the entire software development and deployment lifecycle.
By tracking CFR, teams can identify bottlenecks, flaws, or vulnerabilities in their processes, tools, or infrastructure that can negatively impact the quality, speed, and cost of software delivery.
A CFR between 0% and 15% is considered a good indicator of code quality.
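The arithmetic itself is simple, as the worked example below shows. The counts are hypothetical, and what counts as a “failure” (rollback, hotfix, incident) is something each team must define for itself.

```python
deployments_total = 40   # production deployments in the period
failed_deployments = 4   # deployments that needed a rollback or hotfix

cfr = failed_deployments / deployments_total * 100
print(f"Change Failure Rate: {cfr:.1f}%")  # 10.0%, within the 0-15% band
```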
Why is it Important?
It enhances user experience and builds trust by reducing failures.
It protects your business from financial risks, helping avoid revenue loss, customer churn, and brand damage.
It helps in allocating resources effectively and focuses on delivering new features.
It ensures changes are implemented smoothly and with minimal disruption.
Mean Time to Recovery
Mean Time to Recovery measures how quickly a team can bounce back from incidents or failures. It concentrates on determining the efficiency and effectiveness of an organisation’s incident response and resolution procedures.
A lower mean time to recovery is synonymous with a resilient system capable of handling challenges effectively.
The response time should be as short as possible. 24 hours is considered to be a good rule of thumb.
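Computationally, MTTR is the average of (resolution time minus detection time) across incidents. The incident timestamps below are invented for illustration; real ones would come from your incident management tool.

```python
from datetime import datetime, timedelta

# Hypothetical (detected_at, resolved_at) pairs from incident tickets
incidents = [
    (datetime(2024, 5, 2, 14, 0), datetime(2024, 5, 2, 16, 30)),
    (datetime(2024, 5, 7, 9, 15), datetime(2024, 5, 7, 20, 45)),
]

recovery_times = [resolved - detected for detected, resolved in incidents]
mttr = sum(recovery_times, timedelta()) / len(recovery_times)
print(f"Mean Time to Recovery: {mttr}")  # well under the 24-hour rule of thumb
```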
Why is it Important?
It enhances user satisfaction by reducing downtime and resolution times.
It mitigates the negative impacts of downtime on business operations, including financial losses, missed opportunities, and reputational damage.
It helps meet service level agreements (SLAs) that are vital for upholding client trust and fulfilling contractual commitments.
It provides valuable insights into day-to-day practices such as incident management and engineering team performance, and helps elevate customer satisfaction.
The Fifth Metric: Reliability
Reliability is a fifth metric, added by the DORA team in 2021. It measures modern operational practices and doesn’t have standard quantifiable targets for performance levels.
Reliability comprises several measures of operational performance, including availability, latency, performance, and scalability, assessed against user-facing behaviour, software SLAs, performance targets, and error budgets.
How Typo Uses DORA to Boost Dev Efficiency?
Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It offers comprehensive insights into the deployment process through key DORA metrics such as change failure rate, time to build, and deployment frequency.
Below is a detailed view of how Typo uses DORA to boost dev efficiency and team performance:
DORA Metrics Dashboard
Typo’s DORA metrics dashboard has a user-friendly interface and robust features tailored for DevOps excellence. It helps identify bottlenecks, improves collaboration between teams, optimises delivery speed, and effectively communicates a team’s success.
The DORA metrics dashboard pulls in data from all your sources and presents it to engineering leaders and the development team in a detailed, visual way.
The dashboard helps in many ways:
With pre-built integrations in the dev tool stack, DORA dashboard provides all the relevant data flowing in within minutes.
It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency and much more from a single dashboard.
The dashboard lets you set custom improvement goals for each team and track progress toward them in real time.
It gives real-time visibility into a team’s KPIs and lets them make informed decisions.
Defining clear objectives
First, define clear and measurable objectives. Consider KPIs that align with your organisational goals. Whether it’s improving deployment speed, reducing failure rates, or enhancing overall efficiency, a well-defined set of objectives will guide your implementation of the dashboard.
Understanding DORA metrics
Gain a deeper understanding of DORA metrics by exploring the nuances of Deployment Frequency, Lead Time, Change Failure Rate, and MTTR. Then, connect each of these metrics with your organisation’s DevOps goals to have a comprehensive understanding of how they contribute towards improving overall performance and efficiency.
Dashboard configuration
Follow specific guidelines to properly configure your dashboard. Customise the widgets to accurately represent important metrics and personalise the layout to create a clear and intuitive visualisation of your data. This ensures that your team can easily interpret the insights provided by the dashboard and take appropriate actions.
Implementing data collection mechanisms
To ensure the accuracy and reliability of your DORA Metrics, establish strong data collection mechanisms. Configure your dashboard to collect real-time data from relevant sources, so that the metrics reflect the current state of your DevOps processes.
Integrating automation tools
Integrate automation tools to optimise the performance of your DORA Metrics Dashboard.
By utilising automation for data collection, analysis, and reporting processes, you can streamline routine tasks. This will free up your team’s time and allow them to focus on making strategic decisions and improvements.
Utilising the dashboard effectively
To get the most out of your well-configured DORA Metrics Dashboard, use the insights gained to identify bottlenecks, streamline processes, and improve overall DevOps efficiency. Analyse the dashboard data regularly to drive continuous improvement initiatives and make informed decisions that will positively impact your software development lifecycle.
Comprehensive Visualization of Key Metrics
Typo’s dashboard provides clear and intuitive visualisations of the four key DORA metrics:
Deployment Frequency
It tracks how often new code is deployed to production, highlighting the team’s productivity.
By integrating with your CI/CD tool, Typo calculates Deployment Frequency by counting the number of unique production deployments within the selected time range. You can configure which workflows and repositories count as production.
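As a rough illustration of the counting logic described above (not Typo’s actual implementation), deployment frequency reduces to counting unique production deployments within a window; the records and field names below are invented:

```python
# Rough sketch of deployment-frequency counting; the records and
# field names are invented for illustration.
from datetime import datetime, timedelta

deployments = [
    {"id": "d1", "env": "production", "at": datetime(2024, 6, 3)},
    {"id": "d2", "env": "staging",    "at": datetime(2024, 6, 4)},
    {"id": "d3", "env": "production", "at": datetime(2024, 6, 10)},
]

def deployment_frequency(deploys, start, end):
    """Count unique production deployments within [start, end]."""
    unique_ids = {
        d["id"] for d in deploys
        if d["env"] == "production" and start <= d["at"] <= end
    }
    return len(unique_ids)

window_end = datetime(2024, 6, 30)
window_start = window_end - timedelta(days=30)
print(deployment_frequency(deployments, window_start, window_end))  # 2
```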
Cycle Time (Lead Time for Changes)
It measures the time it takes from code being committed to it being deployed in production, indicating the efficiency of the development pipeline.
In the context of Typo, it is the average time all pull requests have spent in the “Coding”, “Pickup”, “Review”, and “Merge” stages of the pipeline. Typo considers all Pull Requests merged to the main/master/production branch within the selected time range and calculates the average time each Pull Request spent in every stage of the pipeline. Open and draft Pull Requests are excluded from this calculation.
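A simplified sketch of that averaging, where the per-PR stage durations (in hours) are hypothetical numbers rather than real Typo data:

```python
# Simplified sketch of the cycle-time averaging described above.
# Per-PR stage durations (in hours) are hypothetical.
STAGES = ("coding", "pickup", "review", "merge")

merged_prs = [
    {"coding": 20.0, "pickup": 4.0, "review": 10.0, "merge": 1.0},
    {"coding": 32.0, "pickup": 8.0, "review": 6.0,  "merge": 2.0},
]

def average_cycle_time(prs):
    """Average total hours a merged PR spends across all stages."""
    totals = [sum(pr[stage] for stage in STAGES) for pr in prs]
    return sum(totals) / len(totals)

print(f"{average_cycle_time(merged_prs):.1f} hours")  # 41.5 hours
```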
Change Failure Rate
It shows the percentage of deployments causing a failure in production, reflecting the quality and stability of releases.
There are multiple ways this metric can be configured:
A deployment that needs a rollback or a hotfix: For such cases, any Pull Request having a title/tag/label that represents a rollback/hotfix that is merged to production can be considered as a failure.
A high-priority production incident: For such cases, any ticket in your Issue Tracker having a title/tag/label that represents a high-priority production incident can be considered as a failure.
A deployment that failed during the production workflow: For such cases, Typo can integrate with your CI/CD tool and consider any failed deployment as a failure.
To calculate the final percentage, the total number of failures is divided by the total number of deployments (taken either from the Deployment PRs or from the CI/CD tool’s deployments).
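Here is a toy sketch of that arithmetic, assuming failures are flagged by a hotfix or rollback label on deployment PRs; the label names and records are illustrative, not Typo’s data model:

```python
# Toy sketch of the CFR arithmetic; label names and records are
# illustrative, not Typo's data model.
FAILURE_LABELS = {"hotfix", "rollback"}

deployment_prs = [
    {"number": 101, "labels": set()},
    {"number": 102, "labels": {"hotfix"}},
    {"number": 103, "labels": set()},
    {"number": 104, "labels": {"rollback"}},
]

def change_failure_rate(prs):
    """Failures divided by total deployments, as a percentage."""
    failures = sum(1 for pr in prs if pr["labels"] & FAILURE_LABELS)
    return 100.0 * failures / len(prs)

print(f"{change_failure_rate(deployment_prs):.0f}%")  # 50%
```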
Mean Time to Restore (MTTR)
It measures the time taken to recover from a failure, showing the team’s ability to respond to and fix issues.
The way a team tracks production failures (CFR) defines how MTTR is calculated for that team. If a team tracks a production failure as:
Pull Request tagging for a deployment that needs a rollback or a hotfix: MTTR is calculated as the time between the last deployment and the merge of such a Pull Request to main/master/production.
Ticket tagging for high-priority production incidents: MTTR is calculated as the average time such a ticket takes from the ‘In Progress’ state to the ‘Done’ state.
CI/CD integration to track deployments that failed during the production workflow: MTTR is calculated as the average time from a deployment failure to its next successful deployment.
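For the ticket-based variant, here is a minimal sketch of the averaging; the ticket shape and timestamps are hypothetical, not Typo’s data model:

```python
# Minimal sketch of ticket-based MTTR; ticket shape and timestamps
# are hypothetical.
from datetime import datetime

incidents = [
    {"in_progress": datetime(2024, 6, 1, 9, 0),
     "done":        datetime(2024, 6, 1, 13, 0)},  # resolved in 4h
    {"in_progress": datetime(2024, 6, 5, 10, 0),
     "done":        datetime(2024, 6, 5, 12, 0)},  # resolved in 2h
]

def mttr_hours(tickets):
    """Average hours from 'In Progress' to 'Done' across incidents."""
    durations = [
        (t["done"] - t["in_progress"]).total_seconds() / 3600
        for t in tickets
    ]
    return sum(durations) / len(durations)

print(f"{mttr_hours(incidents):.1f} hours")  # 3.0 hours
```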
Benchmarking for Context
Industry Standards: By providing benchmarks, Typo allows teams to compare their performance against industry standards, helping them understand where they stand.
Historical Performance: Teams can also compare their current performance with their historical data to track improvements or identify regressions.
Find out what it takes to build reliable high-velocity dev teams:
Typo provides a clear, data-driven view of software development performance. It offers insights into various aspects of development and operational processes.
It helps in tracking progress over time. Through continuous tracking, it monitors improvements or regressions in a team’s performance.
It supports DevOps practices that focus on both development speed and operational stability.
DORA metrics help in mitigating risk. With CFR and MTTR, engineering leaders can manage and lower the risk associated with software changes, ensuring greater stability and reliability.
It identifies bottlenecks and inefficiencies, pinpointing where the team is struggling, such as long lead times or high failure rates.
How Does it Help Development Teams?
Typo provides a clear, real-time view of a team’s performance and lets the team make informed decisions based on empirical data rather than guesswork.
It encourages balance between speed and quality by providing metrics that highlight both aspects.
It helps in predicting future performance based on historical data. This helps in better planning and resource allocation.
It helps in identifying potential risks early and taking proactive measures to mitigate them.
Conclusion
DORA metrics deliver crucial insights into team performance. Monitoring Change Failure Rate and Mean Time to Recovery helps leaders ensure their teams are building resilient services with minimal downtime. Similarly, keeping an eye on Deployment Frequency and Lead Time for Changes assures engineering leaders that the team is maintaining a swift pace.
Together, these metrics offer a clear picture of how well the team balances speed and quality in their workflows.
One of the ways organizations put this into practice is through a continuous feedback process. While it may seem straightforward, it is not: every developer takes feedback in a different way, so it is important to engineer the feedback the right way.
Why is the feedback process important?
Below are a few reasons why continuous feedback benefits both developers and engineering leaders:
Keeps everyone on the same page: Feedback aligns individuals, no matter what type of task they are working on. It helps them understand their strengths, address their blind spots, and deliver high-quality work.
Facilitates improvement: Feedback shows developers the areas they need to improve and the opportunities that match their strengths. With the right context and motivation, it encourages software developers to invest in their personal and professional growth.
Nurtures healthy relationships: Feedback fosters open and honest communication. It lets developers feel comfortable sharing ideas and seeking support without judgement, even when they aren’t performing well.
Enhances user satisfaction: Feedback helps developers raise the quality of their work, which has a direct impact on user satisfaction and, in turn, benefits the organization.
Strengthens performance management: Feedback enables you to set clear expectations, track progress, and provide ongoing support and guidance to developers. This strengthens their performance and streamlines their workflow.
How to engineer your feedback?
There is a lot to consider when giving effective, honest feedback. We’ve divided the process into three stages: before, during, and after the feedback session.
Before the feedback session
Frame the context of the developer feedback
Plan in advance how you will start the conversation, what is worth mentioning, and what is not. For example, if the feedback relates to pull requests, you can start by discussing the developer’s past performance in that area. Then cover how well they are performing, whether they are delivering work on time, how you rate their performance and action plan, and any challenges they are facing. Make sure to relate it to the bigger picture.
When framed appropriately and constructively, feedback focuses on improvement rather than criticism. It also enables developers to take feedback the right way and helps them grow and succeed.
Keep tracking continuously
Observe and note down everything related to the developers, and track their performance continuously. Jot down whatever you notice, even if it doesn’t seem worth mentioning in a feedback session. This allows you to share feedback more accurately and comprehensively. It also helps you identify trends and patterns in developer performance, and it shows developers that the feedback isn’t based on isolated incidents but on consistent observation.
For example, XYZ is a software developer at ABC organization. The engineering leader observed XYZ for three months before delivering feedback:
In the first month, XYZ struggled with the initial implementation strategy, so she provided him with resources.
In the second month, he showed signs of improvement, yet he hesitated to participate in team meetings.
In the third month, XYZ’s technical skills kept improving, but he still struggled to engage in meetings and share his ideas.
With these observations, the engineering leader was able to discuss his strengths and areas of improvement effectively.
Understand the difference between feedback and criticism
Before offering feedback to software development teams, make sure you understand the difference between constructive feedback and criticism. Constructive feedback encourages developers to pursue their personal and professional development; criticism makes developers defensive and hinders their progress.
Constructive feedback focuses on the developer’s behavior and outcomes and helps them with actionable insights, while criticism dwells on faults and mistakes without providing the right guidance.
For example,
Situation: A developer’s recent code review missed several critical issues.
Feedback: “Your recent code review missed a few critical issues, like the memory leak in the data processing module. Next time, please double-check for potential memory leaks. If you’re unsure how to spot them, let’s review some strategies together.”
Criticism: “Your code reviews are sloppy and miss too many important issues. You need to do a better job.”
Collect all important information
Review previous feedback given to developers before the session. Check what was last discussed and make sure to bring it up again. Also include the observations you tracked during this period and connect them with the previous feedback. Look at metrics such as pull request activity, work progress, team velocity, work logs, and check-ins to get in-depth insight into their work. You can also gather peer reviews for 360-degree feedback and a better understanding of how individuals are performing.
This makes your feedback balanced and takes into account all aspects of developers’ contributions and challenges.
During the feedback session
Two-way feedback
The feedback shouldn’t be a top-down exercise; it must go both ways. You can start by bringing up the discussion from the previous feedback session. Learn their opinion and perspective on certain topics and ideas, and ask questions that show you respect their views and want to hear what they’d like to discuss.
Now, share your feedback based on the last discussion, observations, and performance. You can also modify your feedback based on their perspective and reflections. It allows the feedback to be detailed and comprehensive.
Establish clear steps for improvement
Once you have shared their areas of improvement, make sure you also provide clear, actionable plans. Discuss what needs immediate attention and what steps they can take. Set small goals together, as this makes it easier to stay focused and signals that their goals matter. Schedule follow-up meetings after each milestone to understand whether they are facing any challenges, and provide resources and tools that can help them reach their goals.
Apply the SBI framework
Developed by the Center for Creative Leadership, SBI stands for Situation, Behavior, and Impact. The framework includes:
Situation: First, describe the specific context or scenario in which the observation/behavior took place. Provide factual details and avoid vague descriptions.
Example: Last week’s team collaboration on the new feature development.
Behavior: Now, articulate specific behavior you observed or experienced during that situation. Focus only on tangible actions or words instead of assumptions or generalizations.
Example: “You did not participate actively in the brainstorming sessions and missed a few important meetings.”
Impact: Lastly, explain the impact of the behavior on you or others involved. Share the consequences for the team, the project, and the organization.
Example: “This led to a lack of input from your side, and we missed out on potentially valuable ideas. It also caused some delays as we had to reschedule discussions.”
Final words could be: “Please ensure to attend all relevant meetings and actively participate in discussions. Your contributions are important to the team.”
This helps you deliver feedback that is clear, actionable, and respectful, and keeps it relevant and directly tied to the situation. Note that this framework works for both positive and negative feedback.
Understand constraints and personal circumstances
It is also important to know whether any constraints are negatively impacting a developer’s performance. These could include tight deadlines, a heavy workload, or health issues that keep them from focusing properly. Ask about such constraints while delivering feedback, and shape your action plans accordingly. This shows developers that you care, makes the feedback more personalized and relevant, and lets you propose tangible improvements rather than adding more pressure.
For example: “During the last sprint, there were a few missed deadlines. Is there something outside of work that might be affecting your ability to meet these deadlines? Please let me know if there’s anything we can do to accommodate your situation.”
Ask them if there’s anything else to discuss and summarize the feedback
Before concluding the meeting, ask whether there’s anything else they would like to discuss. They may have missed something, or a topic may not have been brought up during the session.
Afterwards, summarize what has been discussed. Ask the developers what their key takeaways are and share your perspective as well. Document the summary to help both you and the developers in future feedback meetings. This builds mutual understanding and ensures that both of you are on the same page.
After the feedback session
Write a summary for yourself
Keep a record of what was discussed during the session and the action plans given to the developers. You can refer to it in future feedback meetings or performance evaluations. An example summary structure:
Date and time
List the main topics and specific behaviors discussed.
Include any constraints, personal circumstances, or insights the developer shared.
Outline the specific actions, along with any support or resources you committed to providing.
Detail the agreed-upon timeline for follow-up meetings or check-ins to monitor progress.
Add any personal observations or reflections that might help in future interactions.
Monitor the progress
Ensure you give them measurable goals and timelines during the feedback session. Monitor their progress through check-ins, provide ongoing support and guidance, and keep discussing the challenges or roadblocks they are facing. It helps the developers stay on track and feel supported throughout their journey.
How Typo can help enhance the feedback process?
Typo is an effective software engineering intelligence platform that can help in improving the feedback process within development teams. Here’s how Typo’s features can be leveraged to enhance feedback sessions:
It provides visibility into key SDLC metrics, enabling engineering managers to give more precise, data-driven feedback.
It also captures qualitative insights and provides a 360-degree view of the developer experience, allowing managers to understand the real issues developers face.
Comparing the team’s performance across industry benchmarks can help in understanding where the developers stand.
Customizable dashboards allow teams to focus on the most relevant metrics, ensuring feedback is aligned with the team’s specific goals and challenges.
The sprint analysis feature tracks and analyzes the progress throughout a sprint, making it easier to identify bottlenecks and areas for improvement. This makes the feedback more timely and targeted.
Software developers deserve high-quality feedback. It not only helps them identify their blind spots but also polishes their skills. The feedback loop lets developers know where they stand and the recognition they deserve.
Building and structuring an effective engineering team
Building a high-performing engineering team is crucial for the success of any company, especially in the dynamic and constantly evolving world of technology. Whether you’re a startup on the rise or an established enterprise looking to maintain your competitive edge, having a well-structured engineering team is essential.
This blog will explore the intricacies of building and structuring engineering teams for scale and success. We’ll cover many topics, including talent acquisition, skill development, team management, and more.
Whether you’re a CTO, a team leader, or an entrepreneur looking to build your own engineering team, this blog will equip you with the knowledge and tools to create a high-performing engineering team that can drive innovation and help you achieve your business goals.
What are the dynamics of engineering teams?
Before we dive into the specifics of team structure, it’s vital to understand the dynamics that shape engineering teams. Various factors, including team size, communication channels, leadership style, and cultural fit, influence these dynamics. Each factor plays a significant role in determining how well a team operates.
Team size
The size of a team can significantly impact its operation. Smaller teams tend to be more agile and flexible, making it easier for them to make quick decisions and respond to project changes. On the other hand, larger teams can provide more resources, skills, and knowledge, but they may struggle with communication and coordination.
Communication channels
Effective communication is essential for any team’s success. In engineering teams, communication channels play a significant role in ensuring team members can collaborate effectively. Different communication channels, such as email, chat, video conferencing, or face-to-face, can impact the team’s effectiveness.
Leadership style
A team leader’s leadership style can significantly impact the team’s effectiveness. Autocratic leaders tend to make decisions without input from team members, while democratic leaders encourage team members to participate in decision-making. Moreover, transformational leaders inspire and motivate team members to achieve their best.
Cultural fit
Cultural fit refers to how well team members align with the team’s values, norms, and beliefs. A team that has members with similar values and beliefs is more likely to work well together and be more productive. In contrast, a team with members with conflicting values and beliefs may struggle to work effectively.
Scaling engineering teams can present challenges, and planning and strategizing thoughtfully is crucial to ensure that the team remains effective. Understanding the dynamics that shape engineering teams can help teams overcome these challenges and work together effectively.
Key roles in engineering teams
An engineering team must be diverse and collaborative. Each team member should specialize in a particular area but also be able to comprehend and collaborate with others in building a product.
A few of them include:
Software development team lead and manager
The software development team lead plays a crucial role in guiding and coordinating the efforts of the software development team. They may lead anywhere from fewer than ten to hundreds of team members.
Software developer
Software developers write the code; their role is purely technical, and they build the product. Most are individual contributors, i.e., they have no management or HR responsibilities.
Product managers
Product managers define the product vision, gather and prioritize requirements, and collaborate closely with engineering teams.
Designers
Designers create user-friendly interfaces, develop prototypes to visualize concepts, and iterate on designs based on feedback.
Key principles for building and structuring engineering teams
Once the dynamics of engineering teams are understood, organizations can apply key principles to build and structure teams for scale. From defining goals and establishing role clarity to fostering a culture of collaboration and innovation, these principles serve as a foundation for effective team building.
Setting clear goals ensures everyone is aligned and working towards the same vision.
Clearly defined roles and responsibilities help prevent confusion and promote accountability within the team.
Foster an environment where team members feel empowered to collaborate, share ideas, and innovate.
Communication is the backbone of any successful team. Establishing efficient communication channels is vital for sharing information and maintaining transparency.
Encourage continuous learning and professional development to keep your team members motivated and up-to-date with the latest technologies and trends.
Allow individual team members autonomy while ensuring alignment with the organization’s overall goals and objectives.
Different approaches to structuring engineering teams
There is no one-size-fits-all approach to structuring engineering teams. Different structures may be more suitable depending on the organization’s size, industry, and goals. Organizations can identify the structure that best aligns with their unique needs and objectives by exploring various approaches.
The top two approaches are:
Project-based structure
Teams are formed around a project for a defined period. It is the traditional approach, where engineers and designers are drawn from their respective departments and tasked with project-related work.
This may seem logical, but it poses challenges. Project-based teams tend to prioritize short-term objectives, and collaborating with unfamiliar team members can lead to communication gaps, particularly between developers and other project stakeholders.
Product-based structure
Teams are aligned around specific products or features to promote ownership and accountability. Because this structure is centered on a long-lived product, team members tend to work together more efficiently over time.
As the product gains traction and attracts users, the team needs to adapt to a changing environment, e.g., by restructuring and hiring specialists.
Other approaches include:
Functional-based structure: Organizing teams based on specialized functions such as backend, frontend, or QA.
Matrix-based structure: Combining functional and product-based structures to leverage expertise and resources efficiently.
Hybrid models: Tailoring the team structure to fit your organization’s unique needs and challenges.
Top pain points in building engineering teams
Sharing responsibilities
In engineering organizations, there is a tendency to rely heavily on one person for all responsibilities rather than distributing them among team members. This not only creates bottlenecks and inefficiencies but also slows progress and undermines the ability to deliver quality products.
Broken communication
The two most common communication issues when structuring and building engineering teams are alignment and context-switching between engineering teams. These increase miscommunication among team members and lead to duplicated work, neglected responsibilities, and coverage gaps.
Lack of independence
When engineering leaders micromanage developers, it can hinder productivity, innovation, and overall team effectiveness. Hence, having a structure that fosters optimization, ownership, and effectiveness is important for building an effective team.
Best practices for scaling engineering teams
Scaling an engineering team requires careful planning and execution. Here are the best practices to build a team that scales well:
Streamline your hiring and onboarding processes to attract top talent and integrate new team members seamlessly.
Develop scalable processes and workflows to accommodate growth and maintain efficiency.
Foster a diverse and inclusive workplace culture to attract and retain top talent from all backgrounds.
Invest in the right tools and technologies to streamline development workflows and enhance collaboration.
Continuously evaluate your team structure and processes, making adjustments as necessary to adapt to changing needs and challenges.
Build an engineering team that sets your organization up for success!
Building and structuring engineering teams for scale is a multifaceted endeavor that requires careful planning, execution, and adaptation.
But this doesn’t end here! Measuring a team’s performance is equally important to build an effective team. This is where Typo comes in!
It is an intelligent engineering management platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. It gives a comparative view of each team’s performance across velocity, quality, and throughput.
Key features
Seamlessly integrates with third-party applications such as Git, Slack, Calendars, and CI/CD tools.
‘Sprint analysis’ feature allows for tracking and analyzing the team’s progress throughout a sprint.
Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
Offers engineering benchmarks to compare the team’s results across industries.
Agile project management relies on iterative development cycles to deliver value efficiently. Central to this methodology is the iteration burndown chart, a visual representation of work progress over time. In this blog, we’ll explore how to leverage and enhance the iteration burndown chart to optimize Agile project outcomes and team collaboration.
What is an iteration burndown chart?
An iteration burndown chart is a graphical representation of the total work remaining over time in an Agile iteration, helping teams visualize progress toward completing their planned work.
Components
It typically includes an ideal line representing the planned progress, an actual line indicating the real progress, and axes to represent time and work remaining.
Purpose
The chart enables teams to monitor their velocity, identify potential bottlenecks, and make data-driven decisions to ensure successful iteration completion.
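To make these components concrete, the small sketch below computes the ideal and actual lines for a hypothetical ten-day sprint; the sprint length and story-point numbers are invented for illustration:

```python
# Small sketch computing the two lines of a burndown chart.
# Sprint length and story-point numbers are invented for illustration.
SPRINT_DAYS = 10
TOTAL_POINTS = 40

# Story points actually completed on each elapsed day of the sprint.
completed_per_day = [4, 3, 0, 6, 5]

# Ideal line: work remaining if progress were perfectly linear.
ideal = [TOTAL_POINTS - TOTAL_POINTS * day / SPRINT_DAYS
         for day in range(SPRINT_DAYS + 1)]

# Actual line: work remaining after each day's completed points.
actual = [TOTAL_POINTS]
for done in completed_per_day:
    actual.append(actual[-1] - done)

print("ideal :", ideal)   # 40.0, 36.0, 32.0, ..., 0.0
print("actual:", actual)  # [40, 36, 33, 33, 27, 22]
```

Plotting the two series against the day axis yields the chart: whenever the actual line sits above the ideal line, the team is behind plan.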
Benefits of using iteration burndown charts
Understanding the advantages of iteration burndown charts is key to appreciating their value in Agile project management. From enhanced visibility to improved decision-making, these charts offer numerous benefits that can positively impact project outcomes.
Improved visibility: provides stakeholders with a clear view of project progress.
Early risk identification: helps identify and address issues early in the iteration.
Enhanced communication: facilitates transparent communication within the team and with stakeholders.
Data-driven decisions: enables teams to make informed decisions based on real-time progress data.
How to create an effective iteration burndown chart
Crafting an effective iteration burndown chart requires a thorough and step-by-step approach. Here are some detailed guidelines to help you create a well-designed burndown chart that accurately reflects progress and facilitates efficient project management:
Set clear goals: Before you start creating your chart, it’s essential to define clear objectives and expectations for the iteration. Be specific about what you want to achieve, what tasks need to be completed, and what resources you’ll need to get there.
Break down tasks: Once you’ve established your goals, you’ll need to break down tasks into manageable units to track progress effectively. Divide the work into smaller tasks that can be completed within a reasonable timeframe and assign them to team members accordingly.
Accurate estimation: Accurate estimation of effort required for each task is crucial for creating an effective burndown chart. Make sure to involve team members in the estimation process, and use historical data to improve accuracy. This will help you to determine how much work is left to be done and when the iteration will be completed.
Choose the right tools: Creating an effective burndown chart requires selecting the appropriate tools for tracking and visualizing data. Typo is a great option for creating and managing burndown charts, as it allows you to customize the chart’s appearance and track progress in real time.
Regular updates: Updating the chart regularly is essential for keeping track of progress and making necessary adjustments. Set a regular schedule for updating the chart, and ensure that team members are aware of the latest updates. This will help you to identify potential issues early on and adjust the plan accordingly.
By following these detailed guidelines, you’ll be able to create an accurate and effective iteration burndown chart that can help you and your team monitor your project’s progress and manage it more efficiently.
Tips for using iteration burndown charts effectively
While creating a burndown chart is a crucial first step, maximizing its effectiveness requires ongoing attention and refinement. These tips will help you harness the full potential of your iteration burndown chart, empowering your development teams to achieve greater success in Agile projects.
Simplicity: keep the chart simple and easy to understand.
Consistency: use consistent data and metrics for accurate analysis.
Collaboration: encourage team collaboration and transparency in updating the chart.
Analytical approach: analyze trends and patterns to identify areas for improvement.
Adaptability: adjust the chart based on feedback and lessons learned during the iteration.
Improving your iteration burndown chart
Continuous improvement lies at the heart of Agile methodology, and your iteration burndown chart is no exception. By incorporating feedback, analyzing historical data, and experimenting with different approaches, you can refine your chart to better meet your team’s and stakeholders’ needs.
Review historical data: analyze past iterations to identify trends and improve future performance.
Incorporate feedback: gather input from team members and stakeholders to refine the chart’s effectiveness.
Experiment with formats: try different chart formats and visualizations to find what works best for your team.
Additional metrics: integrate additional metrics to provide deeper insights into project progress.
Are iteration burndown charts worth it?
A burndown chart is great for evaluating the ratio of work remaining to the time available to complete it. However, relying solely on a burndown chart is unwise due to certain limitations.
Time-consuming and manual process
Although creating a burndown chart in Excel is easy, entering data manually requires more time and effort. This makes the work repetitive and tiresome after a certain point.
Unable to give insights into the types of issues
The burndown chart helps track progress in completing tasks or user stories over time within a sprint or iteration. But it doesn’t provide insight into the specific types of work being done, such as whether the team is shipping new features or paying down technical debt.
Gives equal weight to all the tasks
A burndown chart doesn’t differentiate between easy and difficult tasks. It treats them all as equal, regardless of their size, complexity, or the effort required to complete them. This paints a misleading picture of project progress, potentially masking critical issues and hindering project management efforts.
Unable to give complete information on sprint predictability
The burndown chart primarily focuses on tracking remaining work throughout a sprint, but it doesn’t directly indicate the predictability of completing that work within the sprint timeframe. It lacks insight into factors like team velocity fluctuations or scope changes, which are crucial for assessing sprint predictability accurately.
How does Typo improve sprint predictability?
Typo’s sprint analysis is an essential tool for any team using an agile development methodology. It allows agile teams to track and analyze overall progress throughout a sprint timeline. It helps to gain visual insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint. This information can help to identify any potential problems early on and take corrective action.
Our sprint analysis feature uses data from Git and issue management tools to provide insights into how software development teams are working. They can see how long tasks are taking, how often they’re being blocked, and where bottlenecks are occurring.
It is easy to use and can be integrated with existing Git and Jira/Linear/Clickup workflows.
Key features
A velocity chart shows how much work has been completed in previous sprints.
A sprint backlog that shows all of the work that needs to be completed in the sprint.
A list of sprint issues that shows the status of each issue.
Time tracking to see how long tasks are taking.
Blockage tracking to check how often tasks are being blocked, and what are the causes of those blocks.
Bottleneck identification to identify areas where work is slowing down.
Historical data analysis to compare sprint data over time.
Constantly improve your charts!
The iteration burndown chart is a vital tool in Agile project management. It offers agile and scrum teams a clear, concise way to track progress and make data-driven decisions.
However, one shouldn’t rely solely on burndown charts. Advanced sprint analysis tools such as Typo allow teams to track and gain visual insights into the overall progress of work.
Jira is a widely used project management tool that enables teams to work together efficiently and achieve outstanding outcomes. The Jira dashboard is a vital component of this tool, offering teams valuable insights, metrics, and project visibility. In this journey, we will explore the potential of Jira dashboards and learn how to leverage their full capabilities.
What is a Jira Dashboard?
A Jira dashboard serves as the nerve center of project activity, offering a consolidated view of tasks, progress, and key metrics. It gives stakeholders a centralized location to monitor project health, track progress, and make informed decisions.
What are the Components of a Jira Dashboard?
Gadgets
These modular components provide specific information and functionality, such as task lists, burndown charts, and activity streams. Several gadgets are built into Jira, such as the filter results, issue statistics, and road map gadgets. Additional gadgets, such as the pivot gadget and the gauge gadget, can be downloaded from the Atlassian Marketplace.
Reports
Jira dashboards host various reports, including velocity charts, sprint summaries, and issue statistics, offering valuable insights into team performance and project trends.
Why is it Used?
Jira dashboards are used for several reasons:
Visibility: Dashboards offer stakeholders a real-time snapshot of project status and progress, promoting transparency and accountability.
Decision Making: By providing access to actionable insights and performance metrics, dashboards enable data-driven decision-making, leading to more informed choices.
Collaboration: Dashboards foster collaboration by providing a centralized platform for teams to track tasks, share updates and communicate effectively.
Efficiency: Dashboards streamline project management processes and enhance team productivity by consolidating project information and metrics in one location.
The default Jira dashboard
The default dashboard is also known as the system dashboard. It is the screen Jira users see the first time they log in. It includes gadgets from Jira’s pre-installed selection and is limited to only one dashboard page.
Creating your Jira dashboard
Creating custom dashboards requires careful planning and consideration of project objectives and team requirements. Let’s explore the step-by-step process of crafting a bespoke dashboard:
Create a New Dashboard
Log in to your Jira account. Go to the dashboard and click ‘Create Dashboard’.
Define Dashboard Objectives
Start by defining the objectives and goals of your dashboard page. Determine what information is crucial for your team to track and monitor, and tailor your dashboard accordingly.
Select Relevant Gadgets and Reports
Choose gadgets and reports that align with your project’s needs and objectives. When curating your dashboard content, consider factors such as team workflow, project complexity, and stakeholder requirements.
Opt for your Preferred Layout and Configuration
Choose your preferred dashboard layout and configuration to ensure optimal visibility and usability for all stakeholders. Arrange gadgets and reports logically and intuitively to facilitate easy navigation and information access.
Iterative Refinement
Embrace an iterative dashboard refinement approach. Solicit user and stakeholder feedback to improve its effectiveness and usability continuously. Regularly assess and update your dashboard to reflect evolving project needs and priorities.
Share the Dashboard with Team Members
Don’t forget to share the Jira dashboard with the team. This ensures transparency and fosters a collaborative culture. By granting appropriate permissions, they can view and interact with the dashboard and get real-time updates.
JIRA Dashboard Examples
Personal Dashboard
A personal dashboard is tailored to individual needs and offers various advantages in streamlining workflow management and improving productivity. It provides a centralized platform for organizing and visualizing a user’s tasks, projects, issues, and more.
Sprint Burndown Dashboard
This dashboard gives real-time updates on whether the team is on pace to meet a sprint goal. It offers a glimpse of how much work is left in the queue and how long your team will take to complete it. Moreover, the sprint burndown dashboard allows you to jump on any issue when the remaining workload is pacing slower than the delivery date.
Workload Dashboard
The workload dashboard, also known as the resource-monitoring dashboard, tracks the amount of work assigned to each team member so workloads can be adjusted accordingly. It helps identify workload patterns and plan resource allocation.
Issue Tracking Dashboard
The issue tracking dashboard allows users to quickly identify and prioritize the most important issues. It focuses on providing visibility into the status and progress of issues or tickets within a project.
Maximizing Dashboard Impact
To maximize the impact of your Jira dashboard, consider the following best practices:
Promote Transparency and Collaboration
Share your dashboard with relevant stakeholders to promote transparency and collaboration. Encourage team members to actively engage with the dashboard and provide feedback to drive continuous improvement.
Leverage Automation and Integration
Integrating your Jira dashboard with other tools and systems is the best way to automate data capture and reporting processes. Leverage integration capabilities to streamline workflow management and enhance productivity.
Foster Data-Driven Decision Making
Empower project teams and leaders to make informed decisions by providing access to actionable insights and performance metrics through the dashboard. Encourage data-driven discussions and decision-making to drive project success.
Advanced dashboard customization
Take your Jira dashboard customization to the next level with advanced techniques and strategies:
Dashboard Filters and Contextualization
Implement filters and contextualization techniques to personalize the dashboard experience for individual users or specific project phases. Allow users to tailor the dashboard view based on their preferences and requirements.
Dynamic Dashboard Updates
Utilize dynamic updating capabilities to ensure that your dashboard reflects real-time changes and updates in project data. Implement automated refresh intervals and notifications to keep stakeholders informed and engaged.
Custom Gadgets and Extensions
Explore the possibilities of custom gadgets and extensions to extend the functionality of your Jira dashboard. Develop custom gadgets or integrate third-party extensions to address unique project requirements and enhance user experience.
How Typo's Sprint Analysis Feature is Useful for the Jira Dashboard?
Typo’s sprint analysis feature can be seamlessly integrated with the Jira dashboard. It allows teams to track and analyze their progress throughout a sprint and provides valuable insights into work progress, work breakup, team velocity, developer workload, and issue cycle time.
The benefits of the sprint analysis feature are:
It helps spot potential issues early, allowing for corrective action to avoid major problems.
Pinpointing inefficiencies, such as excessive time spent on tasks, enables workflow improvements to boost team productivity.
Provides real-time progress updates, ensuring deadlines are met by highlighting areas needing adjustments.
A well-designed Jira dashboard is a catalyst for project excellence, providing teams with the insights and visibility they need to succeed. By understanding its components, crafting a tailored dashboard, and maximizing its impact, you can unlock Jira dashboards’ full potential and drive your projects toward success.
Furthermore, while Jira dashboards offer extensive functionalities, it’s essential to explore alternative tools that may simplify the process and enhance user experience. Typo is one such tool that streamlines project management by offering intuitive dashboard creation, seamless integration, and a user-friendly interface. With Typo, teams can effortlessly visualize project data, track progress, and collaborate effectively, ultimately leading to improved productivity and project outcomes. Explore Typo today and revolutionize your project management experience.
Scrum has become one of the most popular project management frameworks, but like any methodology, it’s not without its challenges. Scrum anti-patterns are common obstacles that teams may face, leading to decreased productivity, low morale, and project failure. Let’s explore the most prevalent Scrum anti-patterns and provide practical solutions to overcome them.
Lack of clear definition of done
A lack of a clear Definition of Done (DoD) leaves teams struggling to deliver shippable increments at the end of each sprint, often stemming from poor communication and transparency. This ambiguity leads to rework and dissatisfaction among stakeholders.
Fix
Collaboration is key to establishing a robust DoD. Scrum team members should work together to define clear criteria for completing each user story. These criteria should encompass all necessary steps, from development to testing and acceptance. The DoD should be regularly reviewed and refined to adapt to evolving project needs and ensure stakeholder satisfaction.
Overcommitting in sprint planning
One of the common anti patterns is overcommitment during sprint planning meetings. It sets unrealistic expectations, leading to compromised quality and missed deadlines.
Fix
Base sprint commitments on past performance and team capacity rather than wishful thinking. Focus on realistic sprint goal setting to ensure the team can deliver commitments consistently. Emphasize the importance of transparency and communication in setting and adjusting sprint goals.
Micromanagement by the scrum master
Micromanagement stifles team autonomy and creativity, leading to disengagement, lack of trust and reduced productivity.
Fix
Scrum Masters should adopt a servant-leadership approach, empowering teams to self-organize and make decisions autonomously. They should foster a culture of trust and collaboration where team members feel comfortable taking ownership of their work. They should provide support and guidance when needed, but avoid dictating tasks or solutions.
Lack of product owner engagement
Disengaged Product Owners fail to provide clear direction and effectively prioritize the product backlog, leading to confusion and inefficiency.
Fix
Encourage regular communication and collaboration between the Product Owner and the development team. Ensure that the Product Owner is actively involved in sprint planning, backlog refinement, and sprint reviews. Establish clear channels for feedback and decision-making to ensure alignment with project goals and stakeholder expectations.
Failure to adapt and improve
Failing to embrace a mindset of continuous improvement and adaptation leads to stagnation and inefficiency.
Fix
Prioritize retrospectives and experimentation to identify areas for improvement. Encourage a culture of learning and innovation where team members feel empowered to suggest and implement changes. Emphasize the importance of feedback loops and iterative development to drive continuous improvement and adaptation.
Scope creep
Allowing the project scope to expand unchecked during the sprint leads to incomplete work and missed deadlines.
Fix
Define a clear product vision and prioritize features based on value and feasibility. Review and refine the product backlog regularly to ensure that it reflects the most valuable and achievable items. Encourage stakeholder collaboration and feedback to validate assumptions and manage expectations.
Lack of cross-functional collaboration
Siloed teams hinder communication and collaboration, leading to bottlenecks and inefficiencies.
Fix
Foster a collaboration and knowledge-sharing culture across teams and disciplines. Encourage cross-functional teams to work together towards common goals. Implement practices such as pair programming, code reviews, and knowledge-sharing sessions to facilitate collaboration and break down silos.
Inadequate Sprint review and retrospective
Rushing through sprint retrospective and review meetings results in missed opportunities for feedback and improvement.
Fix
Allocate sufficient time for thorough discussion and reflection during sprint review and retrospective meetings. Encourage open and honest communication and ensure that all development team members have a chance to share their insights and observations. Based on feedback and retrospective findings, prioritize action items for continuous improvement.
Unrealistic commitments by the product owner
Product Owners making unrealistic commitments disrupt the team’s focus and cause delays.
Fix
Establish a clear process for managing changes to the product backlog. Encourage collaboration between the Product Owner and the development team to negotiate realistic commitments and minimize disruptions during the sprint. Prioritize backlog items based on value and effort to ensure the team consistently delivers on its commitments.
Lack of stakeholder involvement
Limited involvement or feedback from stakeholders leads to misunderstandings and dissatisfaction with the final product.
Fix
Engage stakeholders early and often throughout the project lifecycle. Solicit feedback and involve stakeholders in key decision-making processes. Communicate project progress regularly and solicit input to ensure alignment with stakeholder expectations and requirements.
Ignoring technical debt
Neglecting to address technical debt results in decreased code quality, increased bugs, and slower development velocity over time.
Fix
Allocate time during each sprint for addressing technical debt alongside new feature development. Encourage collaboration between developers and stakeholders to prioritize and tackle technical debt incrementally. Invest in automated testing and refactoring to maintain code quality and reduce technical debt accumulation.
Lack of continuous integration and deployment
Failing to implement continuous integration and deployment practices leads to integration issues, longer release cycles, and reduced agility.
Fix
Establish automated CI/CD pipelines to ensure that code changes are integrated and deployed frequently and reliably. Invest in infrastructure and tools that support automated testing and deployment. Encourage a culture of automation and DevOps practices to streamline the development and delivery process.
Daily scrum meetings are inefficient
Daily scrum meetings are often treated as daily status meetings, which loses their focus on collaboration and decision-making. Team members then see little value in these meetings, leading to disengagement and decreased motivation.
Fix
In daily scrums, the focus should be on discussing the most important work to get done that day and how to do it. Encourage team members to collaborate to tackle problems and achieve sprint goals, and keep the daily scrum short and timeboxed, typically to 15 minutes.
Navigating scrum challenges with confidence
Successfully implementing Scrum requires more than just following the framework: it demands a keen understanding of potential pitfalls and proactive strategies to overcome them. By addressing common Scrum anti-patterns, teams can cultivate a culture of collaboration, efficiency, and continuous improvement, leading to better project outcomes and stakeholder satisfaction.
However, without the right tools, identifying and addressing these anti-patterns can be daunting. That’s where Typo comes in. Typo is an intuitive project management platform designed to streamline Agile processes, enhance team communication, and mitigate common Scrum challenges.
With Typo, teams can effortlessly manage their Scrum projects, identify and address anti-patterns in real-time, and achieve greater success in their Agile endeavors.
So why wait? Try Typo today and elevate your Scrum experience to new heights!
Jira software has become the backbone of project management for many teams across various industries. Its flexibility and powerful features make it an invaluable tool for organizing tasks, tracking progress, and collaborating effectively. However, maximizing its potential requires more than just basic knowledge. To truly excel in Jira ticket management, you must implement strategies and best practices that streamline your workflows and enhance productivity.
What is Jira Ticket Management?
Jira is a popular project management tool developed by Atlassian, commonly used for issue tracking, bug tracking, and project management. Jira ticket management refers to the process of creating, updating, assigning, prioritizing, and tracking issues within Jira.
Key Challenges in Jira Ticketing System
Requires Significant Manual Work
One of the major challenges with the Jira ticketing platform is that it requires a lot of tedious, manual work. This leads to developer frustration, incomplete ticket updates, and undocumented work.
Complexity of Configuration
Setting up Jira software to align with the specific needs of a team or project can be complicated. Configuring workflows, custom fields, and permissions requires careful planning and may involve a learning curve for administrators.
Lacks Data Hygiene
Because of the issues above, much of the development team’s work can go untracked and invisible. The resulting poor data hygiene leads top management to make decisions with incomplete information, which in turn hurts planning accuracy.
How to Manage JIRA Tickets Better?
Below are some essential tips to help you manage your Jira tickets better:
JIRA Automations
Developers often find it labor-intensive to keep tickets updated, so Jira provides automations that ease their work. Although these automations are a bit complex initially, once mastered they offer significant efficiency gains, and they can be customized as well.
Auto-Assign Issues
One of the most commonly used automations ensures accountability by automatically assigning an issue to its creator. This guarantees there is always a designated individual responsible for addressing the matter, streamlining workflow management and accountability within the team.
Auto-Create Sub-Tasks
This automation can be customized to suit various scenarios, such as applying it to epics and stories or refining it with specific conditions tailored to your workflow. For example, when a bug issue is reported, you can set up automation to automatically create tasks aimed at resolving the problem. It not only streamlines the process but also ensures that necessary tasks are promptly initiated, enhancing overall efficiency in issue management.
Clone Issues
Implementing this advanced automation involves creating a duplicate of an issue in a different project when it undergoes a specific transition. It also leaves a comment on the original issue to establish a connection between them. It becomes particularly valuable in scenarios where one project is dedicated to managing customer requests, while another project is focused on executing the actual work.
Change Due Date
This automation automatically computes and assigns a due date to an issue when it’s moved from the backlog to the ‘In Progress’ status. This streamlines the process of managing task timelines, ensuring that deadlines are promptly established as tasks transition into active development stages.
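Beyond the built-in rules, ticket automation can also be scripted against Jira’s REST API. The sketch below creates a sub-task through the v2 issue endpoint; the site URL, credentials, project key, and parent issue key are placeholders, so confirm the exact fields against your Jira version’s API documentation before relying on it:

```python
# Sketch: create a Jira sub-task via the REST API (v2).
# Site URL, credentials, and issue keys are placeholders.
import requests

JIRA_SITE = "https://your-domain.atlassian.net"  # placeholder
AUTH = ("you@example.com", "api-token")          # placeholder credentials

def create_subtask(parent_key: str, summary: str) -> str:
    """Create a sub-task under parent_key and return its issue key."""
    payload = {
        "fields": {
            "project": {"key": parent_key.split("-")[0]},
            "parent": {"key": parent_key},
            "summary": summary,
            "issuetype": {"name": "Sub-task"},
        }
    }
    resp = requests.post(
        f"{JIRA_SITE}/rest/api/2/issue", json=payload, auth=AUTH, timeout=30
    )
    resp.raise_for_status()
    return resp.json()["key"]

# e.g. create_subtask("PROJ-123", "Add regression test for reported bug")
```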
Standardize Ticket Creation
Establishing clear guidelines for creating tickets ensures consistency across your projects. Include essential details such as a descriptive title, priority level, assignee, and due date. This ensures that everyone understands what needs to be done at a glance, reducing confusion and streamlining the workflow.
Moreover, standardizing ticket creation practices fosters alignment within your team and improves communication. When everyone follows the same format for ticket creation, it becomes easier to track progress, assign tasks, and prioritize work effectively. Consistency also enhances transparency, as stakeholders can quickly grasp the status of each ticket without needing to decipher varying formats.
Customize Workflows
Tailoring Jira workflows to match your team’s specific processes and requirements is essential for efficient ticket management. Whether you follow Agile, Scrum, Kanban, or a hybrid methodology, configure workflows that accurately reflect your workflow stages and transitions. This customization ensures your team can work seamlessly within Jira, optimizing productivity and collaboration.
Customizing workflows allows you to streamline your team’s unique processes and adapt to changing project needs. For example, you can define distinct stages for task assignment, development, testing, and deployment that reflect your team’s workflow. Custom workflows empower teams to work more efficiently by clarifying task progression and facilitating smoother handoffs between team members.
Prioritize Effectively
Not all tasks are created equal in Jira. Use priority fields to categorize tickets based on urgency and importance. This strategic prioritization helps your team focus on high-priority items and prevents critical tasks from slipping through the cracks. By prioritizing effectively, you can ensure that important deadlines are met and resources are allocated efficiently.
Effective prioritization involves considering various factors, such as project deadlines, stakeholder requirements, and resource availability. By assessing the impact and urgency of each task, teams can more effectively allocate their time and resources. Regularly reviewing and updating priorities ensures your team remains agile and responsive to changing project needs.
Utilize Labels and Tags
Leverage tags or custom fields to add context to your tickets. Whether it’s categorizing tasks by feature, department, or milestone, these metadata elements make it easier to filter and search for relevant tickets. By utilizing labels and tags effectively, you can improve organization and streamline ticket management within Jira.
Furthermore, consistent labeling conventions enhance collaboration and communication across teams. When everyone adopts a standardized approach to labeling tickets, it becomes simpler to locate specific tasks and understand their context. Moreover, labels and tags can provide valuable insights for reporting and analytics, enabling teams to track progress and identify trends over time.
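Labels pay off most in JQL queries and saved filters. For example, a sketch along these lines pulls every ticket tagged with a given feature and milestone; the label names and project key are made up for illustration.

```python
# Minimal sketch: filter the backlog by labels using a JQL search.
import requests
from requests.auth import HTTPBasicAuth

JIRA_URL = "https://your-domain.atlassian.net"
AUTH = HTTPBasicAuth("you@example.com", "YOUR_API_TOKEN")

jql = ('project = PROJ AND labels IN ("checkout-feature", "q3-milestone") '
       'ORDER BY priority DESC')
resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    auth=AUTH, params={"jql": jql, "fields": "summary,priority,labels"},
)
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])
```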
Encourage Clear Communication
Effective communication is the cornerstone of successful project management. Encourage team members to provide detailed updates, ask questions, and collaborate openly within Jira ticket comments. This transparent communication ensures that everyone stays informed and aligned, fostering a collaborative environment conducive to productivity and success.
Clear communication within Jira ticket comments keeps team members informed and facilitates knowledge sharing and problem-solving. Encouraging open dialogue enables team members to provide feedback, offer assistance, and address potential roadblocks promptly. Additionally, documenting discussions within ticket comments provides valuable context for future reference, aiding in project continuity and decision-making.
Automate Repetitive Tasks
Identify repetitive tasks or processes and automate them using Jira’s built-in automation features or third-party integrations. This not only saves time but also reduces the likelihood of human error. By automating repetitive tasks, you can free up valuable resources and focus on more strategic initiatives, improving overall efficiency and productivity.
Moreover, automation can standardize workflows and enforce best practices, ensuring project consistency. By defining automated rules and triggers, teams can streamline repetitive processes such as task assignments, status updates, and notifications. This minimizes manual intervention and enables team members to devote their time and energy to tasks that require human judgment and creativity.
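As a concrete example of a repetitive chore worth automating, the sketch below finds tickets that have sat "In Progress" with no updates for a week and posts a reminder comment; the seven-day threshold and the wording are arbitrary choices, and the credentials are placeholders.

```python
# Minimal sketch: nudge stale "In Progress" tickets with a comment.
import requests
from requests.auth import HTTPBasicAuth

JIRA_URL = "https://your-domain.atlassian.net"
AUTH = HTTPBasicAuth("you@example.com", "YOUR_API_TOKEN")

stale_jql = 'status = "In Progress" AND updated <= -7d'
issues = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    auth=AUTH, params={"jql": stale_jql},
).json()["issues"]

for issue in issues:
    requests.post(
        f"{JIRA_URL}/rest/api/2/issue/{issue['key']}/comment",
        auth=AUTH,
        json={"body": "Friendly reminder: no updates here in 7 days."},
    ).raise_for_status()
```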
Regularly Review and Refine
Continuously reviewing your Jira setup and workflows is essential to identify areas for improvement. Solicit feedback from team members and stakeholders to understand pain points and make necessary adjustments. By regularly reviewing and refining your Jira configuration, you can optimize processes and adapt to evolving project requirements effectively.
Moreover, regular reviews foster a culture of continuous improvement within your team. By actively seeking feedback and incorporating suggestions for enhancement, you demonstrate a commitment to excellence and encourage team members to engage. Additionally, periodic reviews help identify bottlenecks and inefficiencies, allowing teams to address them proactively and maintain high productivity levels.
Integrate with Other Tools
Jira seamlessly integrates with a wide range of third-party tools and services, enhancing its capabilities and extending its functionality. Integrating with other tools can streamline your development process and enhance collaboration, whether it’s version control systems, CI/CD pipelines, or communication platforms. Incorporating workflow automation tools into the mix further enhances efficiency by automating repetitive tasks and reducing manual intervention, ultimately accelerating project delivery and reducing errors.
Furthermore, integrating Jira with other tools promotes cross-functional collaboration and data sharing. By connecting disparate systems and centralizing information within Jira, teams can eliminate silos and improve visibility into project progress. Additionally, integrating with complementary tools allows teams to leverage existing investments and build upon established workflows, maximizing efficiency and effectiveness.
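To make the CI/CD case concrete: a deploy script can close out the Jira issue it just shipped. The sketch below looks up the workflow transition by name rather than hard-coding an ID, since transition IDs vary between workflows; the URL, credentials, and issue key are placeholders.

```python
# Minimal sketch: transition a Jira issue to "Done" from a deploy script.
import requests
from requests.auth import HTTPBasicAuth

JIRA_URL = "https://your-domain.atlassian.net"
AUTH = HTTPBasicAuth("you@example.com", "YOUR_API_TOKEN")

def transition_issue(issue_key: str, target: str = "Done") -> None:
    transitions = requests.get(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}/transitions", auth=AUTH,
    ).json()["transitions"]
    # Find the transition whose name matches the target status.
    match = next(t for t in transitions if t["name"] == target)
    requests.post(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}/transitions",
        auth=AUTH, json={"transition": {"id": match["id"]}},
    ).raise_for_status()

transition_issue("PROJ-123")  # e.g. invoked after a successful deploy
```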
Foster a Culture of Continuous Improvement
Encourage a mindset of continuous improvement within your software teams. Encourage feedback, experimentation, and learning from both successes and failures. By embracing a culture of constant improvement, you can adapt to changing requirements and drive greater efficiency in your Jira ticket management process while also building a robust knowledge base of best practices and lessons learned.
Moreover, fostering a culture of continuous improvement empowers team members to take ownership of their work and seek opportunities for growth and innovation. By encouraging experimentation and learning from failures, teams can cultivate resilience and agility, enabling them to thrive in dynamic environments. Additionally, celebrating successes and acknowledging contributions fosters morale and motivation, creating a positive and supportive work culture.
How Can These Strategies Help in Better Planning?
Better Jira ticket management improves planning accuracy. Here are a few ways these strategies help:
Automation: Automating routine ticket work reduces the likelihood of human error and ensures that essential tasks are promptly initiated and tracked, leading to better planning accuracy.
Standardized ticket creation: Clear guidelines for creating tickets reduce confusion and ensure that all necessary details are included from the start, facilitating more accurate planning and resource allocation.
Clear communication: Detailed updates and discussion within Jira comments ensure that everyone understands project requirements, reducing misunderstandings and enhancing planning accuracy through effective coordination and decision-making.
Tool integration: Connecting disparate systems and centralizing information improves visibility into project progress and facilitates data sharing, providing a comprehensive view of project status and dependencies.
Predictable delivery: When you consistently follow through on your commitments, you build trust not just within your own team but across the entire company, allowing other teams to confidently align their timelines with development timelines and creating a tightly aligned, high-velocity organization.
Plan your Way into a Good Jira Ticket System!
Improving your Jira ticket management is essential for effective task management, and it requires thoughtful planning, ongoing refinement, and a commitment to best practices. Implementing these tips and fostering a culture of continuous improvement can optimize your workflows, enhance collaboration, and drive greater project success, benefiting both internal teams and external customers.
If you need further help in optimizing your engineering processes, Typo is here to help you.
In an ever-evolving tech world, organizations need to innovate quickly while maintaining high standards of quality and performance. The key to achieving these goals is empowering engineering leaders with the right tools and technologies.
About Typo
Typo is a software intelligence platform that optimizes software delivery by identifying real-time bottlenecks in the SDLC, automating code reviews, and measuring developer experience. We aim to help organizations ship reliable software faster and build high-performing teams.
However, engineering leaders often struggle to bridge the divide between traditional management practices and modern software development, leading to missed opportunities for growth, ineffective team dynamics, and slower progress toward organizational goals.
To address this gap, we launched groCTO, a community designed specifically for engineering leaders.
What is groCTO Community?
Effective engineering leadership is crucial for building high-performing teams and driving innovation, yet many leaders face significant challenges and gaps that hinder their effectiveness. The role of an engineering leader is both demanding and essential: from aligning teams with strategic goals to managing complex projects and fostering a positive culture, they have a lot on their plates. Leaders therefore need the right direction and support to navigate these challenges and guide their teams efficiently.
Here's where groCTO comes in!
groCTO is a community designed to empower engineering managers on their leadership journey. The aim is to help engineering leaders evolve, navigate complex technical challenges, and drive innovative solutions to create groundbreaking software. Engineering leaders can connect, learn, and grow to enhance their capabilities and, in turn, the performance of their teams.
Key Components of groCTO
groCTO Connect
Over 73% of successful tech leaders believe having a mentor is key to their success.
At groCTO, we recognize mentorship as a powerful tool for addressing leadership challenges and offering personalized support and fresh perspectives. That's why we've made Connect a cornerstone of our community - offering 1:1 mentorship sessions with global tech leaders and CTOs. With over 74 mentees and 20 mentors, our Connect program fosters valuable relationships and supports your growth as a tech leader.
Gain personalized advice: Through 1:1 sessions, mentors address individual challenges and tailor guidance to the specific needs and career goals of emerging leaders.
Navigate career growth: Mentors understand an individual's strengths and weaknesses and help them focus on improving specific leadership skills and competencies while building confidence.
Build valuable professional relationships: Our mentorship sessions expand professional connections and foster collaborations and knowledge sharing that can offer ongoing support and opportunities.
Weekly Tech Insights
To keep our tech community informed and inspired, groCTO brings you a fresh set of learning resources every week:
CTO Diaries: The CTO Diaries provide a unique glimpse into the experiences and lessons learned of seasoned Chief Technology Officers, including personal stories, challenges faced, and successful strategies they implemented. They help engineering leaders gain practical insights and real-world examples that can inspire and inform their approach to leadership and team management.
groCTO Originals: A weekly podcast for current and aspiring tech leaders aiming to transform their approach by learning from seasoned industry experts and successful engineering leaders across the globe.
The DORA Lab: An exclusive groCTO podcast all about DORA and other engineering metrics. In each episode, expert leaders from the tech world share their extensive knowledge of the challenges, inspirations, and practical uses of DORA metrics and beyond.
Bytes: groCTO Bytes is a weekly Sunday dose of curated wisdom delivered straight to your inbox as a newsletter. Our goal is to keep CTOs, VPs of Engineering, and other tech leaders up-to-date on the latest trends and best practices in engineering leadership, tech management, system design, and more.
At groCTO, we are committed to making this community bigger and better. We want current and aspiring engineering leaders to invest in their growth as well as contribute to pushing the boundaries of what engineering teams can achieve.
We’re just getting started. A few of our future plans for groCTO include:
Virtual Events: We plan to conduct interactive webinars and workshops that give engineering leaders and CTOs deeper dives into specific topics, along with networking opportunities.
Slack Channels: We plan to create Slack channels to allow emerging tech leaders to engage in vibrant discussions and get real-time support tailored to various aspects of engineering leadership.
We envision a community that thrives on continuous engagement and growth. By scaling our resources and expanding our initiatives, we want to ensure that every member of groCTO finds the support and knowledge they need to excel.
Get in Touch with us!
At Typo, our vision is clear: to ship reliable software faster and build high-performing engineering teams. With groCTO, we are making significant progress toward this goal by empowering engineering leaders with the tools and support they need to excel.
Join us in this exciting new chapter and be a part of a community that empowers tech leaders to excel and innovate.
We’d love to hear from you! For more information about groCTO and how to get involved, write to us at hello@grocto.dev
Dev teams are central to any engineering organization. They are essential for building high-quality software products, fostering innovation, and driving the success of technology companies in today's competitive market.
However, engineering leaders need to understand the bottlenecks holding their teams back, since these blind spots can directly affect projects. This is where software development analytics tools come to the rescue, and such tools stand out when they offer the features and integrations engineering leaders are typically looking for.
Typo is an intelligent engineering platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. Let's look at why engineering leaders choose Typo as a core tool:
You get Customized DORA and other Engineering Metrics
Engineering metrics are measurements of engineering outputs and processes. However, there isn't a single pre-defined set of metrics that every software development team should measure to ensure success; the right set depends on various factors, including team size, the background of the team members, and so on.
Typo's customizable DORA metrics (deployment frequency, change failure rate, lead time for changes, and mean time to recovery) and other engineering metrics can be configured in a single dashboard based on your specific development processes. This helps benchmark the dev team's performance and identify real-time bottlenecks, sprint delays, and blocked PRs. With a user-friendly interface and tailored integrations, engineering leaders can get all the relevant data within minutes and drive continuous improvement.
Typo has an In-Built Automated Code Review Feature
Code review is all about improving code quality. It boosts software teams' productivity and streamlines the development process. However, when done manually, code review is time-consuming and takes a lot of effort.
Typo's automated code review tool auto-analyzes the codebase and pull requests to find issues and auto-generate fixes before PRs merge to master. It understands the context of your code and quickly finds and fixes issues accurately, making pull requests easy and stress-free. It standardizes your code, reducing the risk of a software security breach and boosting maintainability, while also providing insights into code coverage and code complexity for thorough analysis.
You Can Track the Team's Progress with an Advanced Sprint Analysis Tool
While a burndown chart helps visually monitor a team's work progress, it is time-consuming to maintain and doesn't provide insights into the specific types of issues or tasks. Hence, it is advisable to complement it with a sprint analysis tool that provides additional insights tailored to agile project management.
Typo's sprint analysis feature tracks and analyzes the team's progress throughout a sprint. It uses data from Git and your issue management tool to show how much work has been completed, how much is still in progress, and how much time is left in the sprint. This helps identify potential problems early, spot areas where the team can be more efficient, and meet deadlines.
The Metrics Dashboard Focuses on Team-Level Improvement, Not on Micromanaging Individual Developers
When engineering metrics focus on individual success rather than team performance, they create a sense of surveillance rather than support, decreasing motivation, productivity, and trust among development teams. There are better ways to use engineering metrics.
Typo's metrics dashboard focuses on the team's health and performance. It lets engineering leaders compare the team's results against healthy benchmarks across industries and drive impactful initiatives for the team. Because it considers only team-level goals, it encourages team members to work together and solve problems together, fostering a healthier, more productive work environment conducive to innovation and growth.
Typo Takes into Consideration the Human Side of Engineering
Measuring developer experience requires not only quantitative metrics but also qualitative feedback. By prioritizing the human side of team members and developer productivity, engineering managers can create a more inclusive and supportive environment for them.
Typo helps you get a 360° view of developer experience: it captures qualitative insights and provides an in-depth view of the real issues that need attention. With signals from work patterns and continuous AI-driven pulse check-ins on developers' experience, Typo surfaces early indicators of their well-being and actionable insights on the areas that need your attention. It also tracks developers' work habits across multiple activities, such as Commits, PRs, Reviews, Comments, Tasks, and Merges, over a period of time. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system flags those developers as being in the burnout zone or at risk of burnout.
You Can Integrate a Wide Range of Tools with Your Dev Stack
The more tools that can be integrated, the better it is for software developers: integration streamlines the development process, enforces standardization and consistency, and provides access to valuable resources and functionality.
Typo lets you see the complete picture of your engineering health by seamlessly connecting to your tech tool stack. This includes:
Git-based version control tools
Issue tracker tools for managing tasks, bug tracking, and other project-related issues
CI/CD tools to automate and streamline the software development process
Communication tools to facilitate the exchange of ideas and information
Incident management tools to resolve unexpected events or failures
Conclusion
Typo is a software delivery tool that can help ship reliable software faster. You can find real-time bottlenecks in your SDLC, automate code reviews, and measure developer experience – all in a single platform.
We are delighted to share that Typo ranks as a leader in the Software Development Analytics Tools category. A big thank you to all our customers who supported us on this journey and took the time to write reviews about their experience. It truly motivates us to keep moving forward and bring our best to the table in the coming weeks.
Typo Taking the Lead
Typo is placed among the leaders in Software Development Analytics. Besides this, we earned the 'Users Love Us' badge as well.
Our wall of fame shines bright with –
Leader in the overall Grid® Report for Software Development Analytics Tools category
Leader in the Mid Market Grid® Report for Software Development Analytics Tools category
Rated #1 for Likelihood to Recommend
Rated #1 for Quality of Support
Rated #1 for Meets Requirements
Rated #1 for Ease of Use
Rated #1 for Analytics and Trends
Typo has been ranked a Leader in the Grid® Report for Software Development Analytics Tools | Summer 2023. This is a testament to our continuous efforts toward building a product that engineering teams love to use.
The ratings also include –
97% of the reviewers have rated Typo high in analyzing historical data to highlight trends, statistics & KPIs
100% of the reviewers have rated us high in Productivity Updates
Here’s What our Customers Say about Typo
Check out what other users have to say about Typo here.
What Makes Typo Different?
Typo is an intelligent AI-driven Engineering Management platform that enables modern software teams with visibility, insights & tools to code better, deploy faster & stay aligned with business goals.
Having launched on Product Hunt, we started with 15 engineers working with sheer hard work and dedication, and we have since impacted 5,000+ developers and engineering leaders globally, covering 400,000+ PRs & 1.5M+ commits.
We are NOT just another software delivery analytics platform. We go beyond SDLC metrics to build an ecosystem that combines intelligent insights, impactful actions & automated workflows – helping managers lead better & developers perform better.
As a first step, Typo gives core insights into dev velocity, quality & throughput that have helped engineering leaders reduce their PR cycle time by almost 57% and deliver projects 2x faster.
Continuous Improvement with Typo
Typo empowers continuous improvement in the developers & managers with goal setting & specific visibility to developers themselves.
Leaders can set goals to enforce best practices, such as keeping PR sizes in check, avoiding merging PRs without review, and identifying high-risk work. Typo nudges the key stakeholders on Slack as soon as a goal is breached. Typo also automates workflows on Slack to help developers ship PRs and complete code reviews faster.
Developer’s View
Typo provides core insights to your developers that are 100% confidential to them. These insights help developers identify their strengths and the core areas of improvement that affect software delivery, gain visibility into their work, and measure its impact on team efficiency & goals.
Developer’s Well-Being
We believe that all three aspects – work, collaboration & well-being – need to fall in place to help an individual deliver their best. Inspired by the SPACE framework for developer productivity, we support Pulse Check-Ins, Developer Experience insights, Burnout predictions & Engineering surveys to paint a complete picture.
10X your Dev Teams’ Efficiency with Typo
It’s all of your immense love and support that made us a leader in such a short period. We are grateful to you!
But this is just the beginning. Our aim has always been to level up your dev game, and we will be coming out with exciting new releases in the next few weeks.