Typo's Picks

Maintaining a balance between speed and code quality is a challenge for every developer. 

Deadlines and fast-paced projects often push teams to prioritize rapid delivery, leading to compromises in code quality that can have long-lasting consequences. While cutting corners might seem efficient in the moment, it often results in technical debt and a codebase that becomes increasingly difficult to manage.

The hidden costs of poor code quality are real, impacting everything from development cycles to team morale. This blog delves into the real impact of low code quality, its common causes, and actionable solutions tailored to developers looking to elevate their code standards.

Understanding the Core Elements of Code Quality

Code quality goes beyond writing functional code. High-quality code is characterized by readability, maintainability, scalability, and reliability. Ensuring these aspects helps the software evolve efficiently without causing long-term issues for developers. Let’s break down these core elements further:

  • Readability: Code that follows consistent formatting, uses meaningful variable and function names, and includes clear inline documentation or comments. Readable code allows any developer to quickly understand its purpose and logic.
  • Maintainability: Modular code that is organized with reusable functions and components. Maintainability ensures that code changes, whether for bug fixes or new features, don’t introduce cascading errors throughout the codebase.
  • Scalability: Code designed with an architecture that supports growth. This involves using design patterns that decouple different parts of the code and make it easier to extend functionalities.
  • Reliability: Robust code that has been tested under different scenarios to minimize bugs and unexpected behavior.

The Real Costs of Low Code Quality

Low code quality can significantly impact various facets of software development. Below are key issues developers face when working with substandard code:

Sluggish Development Cycles

Low-quality code often involves unclear logic and inconsistent practices, making it difficult for developers to trace bugs or implement new features. This can turn straightforward tasks into hours of frustrating work, delaying project milestones and adding stress to sprints.

Escalating Technical Debt

Technical debt accrues when suboptimal code is written to meet short-term goals. While it may offer an immediate solution, it complicates future updates. Developers need to spend significant time refactoring or rewriting code, which detracts from new development and wastes resources.

Bug-Prone Software

Substandard code tends to harbor hidden bugs that may not surface until they affect end-users. These bugs can be challenging to isolate and fix, leading to patchwork solutions that degrade the codebase further over time.

Collaboration Friction

When multiple developers contribute to a project, low code quality can cause misalignment and confusion. Developers might spend more time deciphering each other’s work than contributing to new development, leading to decreased team efficiency and a lower-quality product.

Scalability Bottlenecks

A codebase that doesn’t follow proper architectural principles will struggle when scaling. For instance, tightly coupled components make it hard to isolate and upgrade parts of the system, leading to performance issues and reduced flexibility.

Developer Burnout

Constantly working with poorly structured code is taxing. The mental effort needed to debug or refactor a convoluted codebase can demoralize even the most passionate developers, leading to frustration, reduced job satisfaction, and burnout.

Root Causes of Low Code Quality

Understanding the reasons behind low code quality helps in developing practical solutions. Here are some of the main causes:

Pressure to Deliver Rapidly

Tight project deadlines often push developers to prioritize quick delivery over thorough, well-thought-out code. While this may solve immediate business needs, it sacrifices code quality and introduces problems that require significant time and resources to fix later.

Lack of Unified Coding Standards

Without established coding standards, developers may approach problems in inconsistent ways. This lack of uniformity leads to a codebase that’s difficult to maintain, read, and extend. Coding standards help enforce best practices and maintain consistent formatting and documentation.

Insufficient Code Reviews

Skipping code reviews means missing opportunities to catch errors, bad practices, or code smells before they enter the main codebase. Peer reviews help maintain quality, share knowledge, and align the team on best practices.

Limited Testing Strategies

A codebase without sufficient testing coverage is bound to have undetected errors. Tests, especially automated ones, help identify issues early and ensure that any code changes do not break existing features.

Overreliance on Low-Code/No-Code Solutions

Low-code platforms offer rapid development but often generate code that isn’t optimized for long-term use. This code can be bloated, inefficient, and difficult to debug or extend, causing problems when the project scales or requires custom functionality.

Comprehensive Solutions to Improve Code Quality

Addressing low code quality requires deliberate, consistent effort. Here are expanded solutions with practical tips to help developers maintain and improve code standards:

  1. Adopt Rigorous Code Reviews

Code reviews should be an integral part of the development process. They serve as a quality checkpoint to catch issues such as inefficient algorithms, missing documentation, or security vulnerabilities. To make code reviews effective:

  • Create a structured code review checklist that focuses on readability, adherence to coding standards, potential performance issues, and proper error handling.
  • Foster a culture where code reviews are seen as collaborative learning opportunities rather than criticism.
  • Use tools like GitHub’s or Bitbucket’s built-in review features for in-depth code discussions.

  2. Integrate Linters and Static Analysis Tools

Linters help maintain consistent formatting and detect common errors automatically. Tools like ESLint (JavaScript), RuboCop (Ruby), and Pylint (Python) check your code for syntax issues and adherence to coding standards. Static analysis tools go a step further by analyzing code for complex logic, performance issues, and potential vulnerabilities. To optimize their use:

  • Configure these tools to align with your project’s coding standards.
  • Run these tools in pre-commit hooks with Husky or integrate them into your CI/CD pipelines to ensure code quality checks are performed automatically.
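The pre-commit idea can be sketched with a short hook script. Below is a hypothetical Python variant (Husky plays the same role in JavaScript projects): it lints only the staged files and blocks the commit if the linter reports errors. Pylint and its `--errors-only` flag are assumed to be available; swap in ESLint or RuboCop as your stack requires.

```python
#!/usr/bin/env python3
"""Minimal Git pre-commit hook: lint staged Python files before each commit.

Save as .git/hooks/pre-commit (executable). Pylint is assumed here;
swap in ESLint, RuboCop, etc. as needed.
"""
import subprocess
import sys


def staged_python_files() -> list:
    """Ask Git for staged added/copied/modified files and keep .py paths."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [path for path in out.splitlines() if path.endswith(".py")]


def should_block_commit(linter_exit_code: int) -> bool:
    """Any nonzero linter exit code blocks the commit."""
    return linter_exit_code != 0


def main() -> int:
    files = staged_python_files()
    if not files:
        return 0  # nothing staged to lint
    result = subprocess.run(["pylint", "--errors-only", *files])
    return 1 if should_block_commit(result.returncode) else 0
```

The hook file would end with `sys.exit(main())`, so a failing lint aborts the commit; the same check can be rerun in CI to catch commits made with `--no-verify`.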

  3. Prioritize Comprehensive Testing

Adopt a multi-layered testing strategy to keep code reliable and catch defects before they ship:

  • Unit Tests: Write unit tests for individual functions or methods to verify they work as expected. Frameworks like Jest for JavaScript, PyTest for Python, and JUnit for Java are popular choices.
  • Integration Tests: Ensure that different parts of your application work together smoothly. Tools like Cypress and Selenium can help automate these tests.
  • End-to-End Tests: Simulate real user interactions to catch potential issues that unit and integration tests might miss.
  • Integrate testing into your CI/CD pipeline so that tests run automatically on every code push or pull request.
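To make the unit-testing layer concrete, here is a minimal example for a hypothetical `apply_discount` function (invented for illustration). It is written with the standard-library unittest so it runs anywhere; the same assertions translate directly to pytest test functions.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount; reject out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    def test_applies_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_out_of_range_percent_raises(self):
        # Invalid input should fail loudly, not silently return a wrong price.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest`; wiring the same command into the CI pipeline makes the tests execute on every push automatically.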

  4. Dedicate Time for Refactoring

Refactoring improves code structure without changing its behavior. Regular refactoring prevents code rot and keeps the codebase maintainable. Practical strategies include:

  • Identify “code smells” such as duplicated code, overly complex functions, or tightly coupled modules.
  • Apply design patterns where appropriate, such as Factory or Observer, to simplify complex logic.
  • Use IDE refactoring tools like IntelliJ IDEA’s refactor feature or Visual Studio Code extensions to speed up the process.
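To ground the "duplicated code" smell, the sketch below (all names hypothetical) extracts a shared helper. Note that the observable output is identical before and after, which is the defining property of a refactor.

```python
# Before: two report functions duplicate the same formatting logic,
# so any formatting change must be made twice.
def format_user_report(users):
    lines = []
    for u in users:
        lines.append(f"{u['name']}: {u['count']}")
    return "\n".join(lines)


def format_error_report(errors):
    lines = []
    for e in errors:
        lines.append(f"{e['name']}: {e['count']}")
    return "\n".join(lines)


# After: the duplication is extracted into one reusable helper,
# so the formatting rule now lives in exactly one place.
def format_report(rows):
    return "\n".join(f"{r['name']}: {r['count']}" for r in rows)
```

A unit test that asserts the old and new functions produce identical output for the same input is a cheap safety net while refactoring.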

  5. Create and Enforce Coding Standards

Having a shared set of coding standards ensures that everyone on the team writes code with consistent formatting and practices. To create effective standards:

  • Collaborate with the team to create a coding guideline that includes best practices, naming conventions, and common pitfalls to avoid.
  • Document the guideline in a format accessible to all team members, such as a README file or a Confluence page.
  • Conduct periodic training sessions to reinforce these standards.

  6. Leverage Typo for Enhanced Code Quality

Typo can be a game-changer for teams looking to automate code quality checks and streamline reviews. It offers a range of features:

  • Automated Code Review: Detects common issues, code smells, and inconsistencies, supplementing manual code reviews.
  • Detailed Reports: Provides actionable insights, allowing developers to understand code weaknesses and focus on the most critical issues.
  • Seamless Collaboration: Enables teams to leave comments and feedback directly on code, enhancing peer review discussions and improving code knowledge sharing.
  • Continuous Monitoring: Tracks changes in code quality over time, helping teams spot regressions early and maintain consistent standards.

  7. Enhance Knowledge Sharing and Training

Keeping the team informed on best practices and industry trends strengthens overall code quality. To foster continuous learning:

  • Organize workshops, code review sessions, and tech talks where team members share insights or recent challenges they overcame.
  • Encourage developers to participate in webinars, online courses, and conferences.
  • Create a mentorship program where senior developers guide junior members through complex code and teach them best practices.

  8. Strategically Use Low-Code Tools

Leverage low-code tools for non-critical components or rapid prototyping, but ensure the generated code is thoroughly reviewed and optimized. For more complex or business-critical parts of a project:

  • Supplement low-code solutions with custom coding to improve performance and maintainability.
  • Regularly review and refactor code generated by these platforms to align with project standards.

Commit to Continuous Improvement

Improving code quality is a continuous process that requires commitment, collaboration, and the right tools. Developers should assess current practices, adopt new ones gradually, and leverage automated tools like Typo to streamline quality checks. 

By incorporating these strategies, teams can create a strong foundation for building maintainable, scalable, and high-quality software. Investing in code quality now paves the way for sustainable development, better project outcomes, and a healthier, more productive team.

Sign up for a quick demo with Typo to learn more!

Mobile development comes with a unique set of challenges: rapid release cycles, stringent user expectations, and the complexities of maintaining quality across diverse devices and operating systems. Engineering teams need robust frameworks to measure their performance and optimize their development processes effectively. 

DORA metrics—Deployment Frequency, Lead Time for Changes, Mean Time to Recovery (MTTR), and Change Failure Rate—are key indicators that provide valuable insights into a team’s DevOps performance. Leveraging these metrics can empower mobile development teams to make data-driven improvements that boost efficiency and enhance user satisfaction.

Importance of DORA Metrics in Mobile Development

DORA metrics, rooted in research from the DevOps Research and Assessment (DORA) group, help teams measure key aspects of software delivery performance.

Here's why they matter for mobile development:

  • Deployment Frequency: Mobile teams need to keep up with the fast pace of updates required to satisfy user demand. Frequent, smooth deployments signal a team’s ability to deliver features, fixes, and updates consistently.
  • Lead Time for Changes: This metric tracks the time between code commit and deployment. For mobile teams, shorter lead times mean a streamlined process, allowing quicker responses to user feedback and faster feature rollouts.
  • MTTR: Downtime in mobile apps can result in frustrated users and poor reviews. By tracking MTTR, teams can assess and improve their incident response processes, minimizing the time an app remains in a broken state.
  • Change Failure Rate: A high change failure rate can indicate inadequate testing or rushed releases. Monitoring this helps mobile teams enhance their quality assurance practices and prevent issues from reaching production.
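As a rough sketch of what these four metrics boil down to, the snippet below computes them from a handful of deployment and incident records. The record shapes and numbers are invented for illustration; a real pipeline would pull them from CI/CD and incident tooling.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, failed)
deployments = [
    (datetime(2024, 5, 1, 9),  datetime(2024, 5, 1, 15), False),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 2, 12), True),
    (datetime(2024, 5, 3, 8),  datetime(2024, 5, 3, 9),  False),
    (datetime(2024, 5, 4, 11), datetime(2024, 5, 4, 20), False),
]
# Hypothetical incidents: (detected, resolved)
incidents = [(datetime(2024, 5, 2, 12), datetime(2024, 5, 2, 13, 30))]

days_observed = 7

# Deployment Frequency: deployments per day over the observation window.
deployment_frequency = len(deployments) / days_observed

# Lead Time for Changes: average time from commit to deployment.
lead_times = [deploy - commit for commit, deploy, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change Failure Rate: share of deployments that caused a failure.
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)

# MTTR: average time from incident detection to resolution.
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Avg lead time: {avg_lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"MTTR: {mttr}")
```

Even this toy version shows why the metrics pair well: a team could "improve" deployment frequency while the change failure rate quietly rises, so they should be reviewed together.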

Deep Dive into Practical Solutions for Tracking DORA Metrics

Tracking DORA metrics in mobile app development involves a range of technical strategies. Here, we explore practical approaches to implement effective measurement and visualization of these metrics.

Implementing a Measurement Framework

Integrating DORA metrics into existing workflows requires more than a simple add-on; it demands technical adjustments and robust toolchains that support continuous data collection and analysis.

  1. Automated Data Collection

Automating the collection of DORA metrics starts with choosing the right CI/CD platforms and tools that align with mobile development. Popular options include:

  • Jenkins Pipelines: Set up custom pipeline scripts that log deployment events and timestamps, capturing deployment frequency and lead times. Use plugins like the Pipeline Stage View for visual insights.
  • GitLab CI/CD: With GitLab's built-in analytics, teams can monitor deployment frequency and lead time for changes directly within their CI/CD pipeline.
  • GitHub Actions: Utilize workflows that trigger on commits and deployments. Custom actions can be developed to log data and push it to external observability platforms for visualization.

Technical setup: For accurate deployment tracking, implement triggers in your CI/CD pipelines that capture key timestamps at each stage (e.g., start and end of builds, start of deployment). This can be done using shell scripts that append timestamps to a database or monitoring tool.
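One lightweight way to implement that setup is a small script, invoked from each pipeline stage, that appends a timestamped event to a log file. It is shown in Python rather than shell for readability; the NDJSON file, pipeline IDs, and stage names are illustrative.

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def record_event(log_path: Path, pipeline_id: str, stage: str) -> dict:
    """Append one timestamped pipeline event (build_start, deploy_end, ...)
    as a line of JSON, so downstream tools can compute lead times."""
    event = {
        "pipeline_id": pipeline_id,
        "stage": stage,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with log_path.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return event
```

A Jenkins stage or GitHub Actions step would then call something like `python track.py "$BUILD_ID" deploy_start`, and the resulting log can be shipped to a database or monitoring backend for analysis.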

  2. Real-Time Monitoring and Visualization

To make sense of the collected data, teams need a robust visualization strategy. Here’s a deeper look at setting up effective dashboards:

  • Prometheus with Grafana: Integrate Prometheus to scrape data from CI/CD pipelines, and use Grafana to create dashboards with deployment trends and lead time breakdowns.
  • Elastic Stack (ELK): Ship logs from your CI/CD process to Elasticsearch and build visualizations in Kibana. This setup provides detailed logs alongside high-level metrics.

Technical Implementation Tips:

  • Use Prometheus exporters or custom scripts that expose metric data as HTTP endpoints.
  • Design Grafana dashboards to show current and historical trends for DORA metrics, using panels that highlight anomalies or spikes in lead time or failure rates.
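For teams not ready to adopt the official Prometheus client library, a custom exporter only needs to serve Prometheus' text exposition format over HTTP. The sketch below uses just the standard library; the metric names and values are placeholders standing in for data read from the CI/CD event log.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


def render_metrics(metrics: dict) -> str:
    """Render a flat name -> value mapping in Prometheus' text exposition
    format: one 'metric_name value' pair per line."""
    lines = [f"{name} {value}" for name, value in sorted(metrics.items())]
    return "\n".join(lines) + "\n"


class MetricsHandler(BaseHTTPRequestHandler):
    # Placeholder values; a real exporter would compute these from pipeline data.
    metrics = {
        "deployments_total": 128,
        "lead_time_seconds_avg": 16200,
        "change_failures_total": 7,
    }

    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics(self.metrics).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To serve: HTTPServer(("", 9100), MetricsHandler).serve_forever()
```

Prometheus would then be configured to scrape this endpoint on an interval, and Grafana panels query the resulting time series.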

  3. Comprehensive Testing Pipelines

Testing is integral to maintaining a low change failure rate. To align with this, engineering teams should develop thorough, automated testing strategies:

  • Unit Testing: Implement unit tests with frameworks like JUnit for Android or XCTest for iOS. Ensure these are part of every build to catch low-level issues early.
  • Integration Testing: Use tools such as Espresso and UIAutomator for Android and XCUITest for iOS to validate complex user interactions and integrations.
  • End-to-End Testing: Integrate Appium or Selenium to automate tests across different devices and OS versions. End-to-end testing helps simulate real-world usage and ensures new deployments don't break critical app flows.

Pipeline Integration:

  • Set up your CI/CD pipeline to trigger these tests automatically post-build. Configure your pipeline to fail early if a test doesn’t pass, preventing faulty code from being deployed.

  4. Incident Response and MTTR Management

Reducing MTTR requires visibility into incidents and the ability to act swiftly. Engineering teams should:

  • Implement Monitoring Tools: Use tools like Firebase Crashlytics for crash reporting and monitoring. Integrate with third-party tools like Sentry for comprehensive error tracking.
  • Set Up Automated Alerts: Configure alerts for critical failures using observability tools like Grafana Loki, Prometheus Alertmanager, or PagerDuty. This ensures that the team is notified as soon as an issue arises.

Strategies for Quick Recovery:

  • Implement automatic rollback procedures using feature flags and deployment strategies such as blue-green deployments or canary releases.
  • Use scripts or custom CI/CD logic to switch between versions if a critical incident is detected.
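A minimal illustration of the flag-based rollback idea: traffic is routed by a feature flag that an automated health check can flip, so "rolling back" becomes a configuration change rather than a redeploy. All names and the threshold here are hypothetical; a real setup would use a flag service and live error-rate data.

```python
# Hypothetical feature-flag rollback: an automated check flips the flag
# that routes traffic between the stable build and the canary build.

ERROR_RATE_THRESHOLD = 0.05  # illustrative cutoff: 5% of requests failing


def active_version(flags: dict) -> str:
    """Report which build currently receives traffic."""
    return "canary" if flags.get("use_canary_build") else "stable"


def evaluate_canary(error_rate: float, flags: dict) -> str:
    """Flip the flag back to stable if the canary's error rate is too high,
    then return the version now serving traffic."""
    if error_rate > ERROR_RATE_THRESHOLD:
        flags["use_canary_build"] = False  # instant rollback, no redeploy
    return active_version(flags)
```

The same decision logic can drive blue-green switches or canary weight adjustments; the key property is that recovery takes effect immediately, which directly lowers MTTR.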

Weaving Typo into Your Workflow

After implementing these technical solutions, teams can lean on Typo to consolidate the data and make DORA metrics tracking both less time-consuming and easier to adopt. Typo provides:

  • Automated Deployment Tracking: By integrating with existing CI/CD tools, Typo collects deployment data and visualizes trends, simplifying the tracking of deployment frequency.
  • Detailed Lead Time Analysis: Typo’s analytics engine breaks down lead times by stages in your pipeline, helping teams pinpoint delays in specific steps, such as code review or testing.
  • Real-Time Incident Response Support: Typo includes incident monitoring capabilities that assist in tracking MTTR and offering insights into incident trends, facilitating better response strategies.
  • Seamless Integration: Typo connects effortlessly with platforms like Jenkins, GitLab, GitHub, and Jira, centralizing DORA metrics in one place without disrupting existing workflows.

Typo’s integration capabilities mean engineering teams don’t need to build custom scripts or additional data pipelines. With Typo, developers can focus on analyzing data rather than collecting it, ultimately accelerating their journey toward continuous improvement.

Establishing a Continuous Improvement Cycle

To fully leverage DORA metrics, teams must establish a feedback loop that drives continuous improvement. This section outlines how to create a process that ensures long-term optimization and alignment with development goals.

  1. Regular Data Reviews: Conduct data-driven retrospectives to analyze trends and set goals for improvements.
  2. Iterative Process Enhancements: Use findings to adjust coding practices, enhance automated testing coverage, or refine build processes.
  3. Team Collaboration and Learning: Share knowledge across teams to spread best practices and avoid repeating mistakes.

Empowering Your Mobile Development Process

DORA metrics provide mobile engineering teams with the tools needed to measure and optimize their development processes, enhancing their ability to release high-quality apps efficiently. By integrating DORA metrics tracking through automated data collection, real-time monitoring, comprehensive testing pipelines, and advanced incident response practices, teams can achieve continuous improvement. 

Tools like Typo make these practices even more effective by offering seamless integration and real-time insights, allowing developers to focus on innovation and delivering exceptional user experiences.

In this episode of the groCTO Podcast, host Kovid Batra engages in a comprehensive discussion with Geoffrey Teale, the Principal Product Engineer at Upvest, who brings over 25 years of engineering and leadership experience.

The episode begins with Geoffrey's role at Upvest, where he has transitioned from Head of Developer Experience to Principal Product Engineer, emphasizing a holistic approach to improving both developer experience and engineering standards across the organization. Upvest's business model as a financial infrastructure company providing investment banking services through APIs is also examined. Geoffrey underscores the multifaceted engineering requirements, including security, performance, and reliability, essential for meeting regulatory standards and customer expectations. The discussion further delves into the significance of product thinking for internal teams, highlighting the challenges and strategies of building platforms that resonate with developers' needs while competing with external solutions.

Throughout the episode, Geoffrey offers valuable insights into the decision-making processes, the importance of simplicity in early-phase startups, and the crucial role of documentation in fostering team cohesion and efficient communication. Geoffrey also shares his personal interests outside work, including his passion for music, open-source projects, and low-carbon footprint computing, providing a holistic view of his professional and personal journey.

Timestamps

  • 00:00 - Introduction
  • 00:49 - Welcome to the groCTO Podcast
  • 01:22 - Meet Geoffrey: Principal Engineer at Upvest
  • 01:54 - Understanding Upvest's Business & Engineering Challenges
  • 03:43 - Geoffrey's Role & Personal Interests
  • 05:48 - Improving Developer Experience at Upvest
  • 08:25 - Challenges in Platform Development and Team Cohesion
  • 13:03 - Product Thinking for Internal Teams
  • 16:48 - Decision-Making in Platform Development
  • 19:26 - Early-Phase Startups: Balancing Resources and Growth
  • 27:25 - Scaling Challenges & Documentation Importance
  • 31:52 - Conclusion

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of groCTO Podcast. Today with us, we have a very special guest who has great expertise in managing developer experience at small scale and large scale organizations. He is currently the Principal Engineer at Upvest, and has almost 25 plus years of experience in engineering and leadership. Welcome to the show, Geoffrey. Great to have you here. 

Geoffrey Teale: Great to be here. Thank you. 

Kovid Batra: So Geoffrey, I think, uh, today's theme is more around improving the developer experience, bringing the product thinking while building the platform teams, the platform. Uh, and you, you have been, uh, doing all this from quite some time now, like at Upvest and previous organizations that you've worked with, but at your current company, uh, like Upvest, first of all, we would like to know what kind of a business you're into, what does Upvest do, and let's then deep dive into how engineering is, uh, getting streamlined there according to the business.

Geoffrey Teale: Yeah. So, um, Upvest is a financial infrastructure company. Um, we provide, uh, essentially investment banking services, a complete, uh, solution for building investment banking experiences, uh, for, for client organizations. So we're business to business to customer. We provide our services via an API and client organizations, uh, names that you'd heard of people like Revolut and N26 build their client-facing applications using our backend services to provide that complete investment experience, um, currently within the European Union. Um, but, uh, we'll be expanding out from there shortly. 

Kovid Batra: Great. Great. So I think, uh, when you talk about investment banking and supporting the companies with APIs, what kind of engineering is required here? Is it like more, uh, secure-oriented, secure-focused, or is it more like delivering on time? Or is it more like, uh, making things very very robust? How do you see it right now in your organization? 

Geoffrey Teale: Well, yeah, I mean, I think in the space that we're in the, the answer unfortunately is all of the above, right? So all those things are our requirements. It has to be secure. It has to meet the, uh, the regulatory standards that we, we have in our industry. Um, it has to be performant enough for our customers who are scaling out to quite large scales, quite large numbers of customers. Um, has to be reliable. Um, so there's a lot of uh, uh, how would I say that? Pressure, uh, to perform well and to make sure that things are done to the highest possible standard in order to deliver for our customers. And, uh, if we don't do that, then, then, well, the customers won't trust us. If they don't trust us, then we wouldn't be where we are today. So, uh, yeah. 

Kovid Batra: No, I totally get that. Uh, so talking more about you now, like, what's your current role in the organization? And even before that, tell us something about yourself which the LinkedIn doesn't know. Uh, I think the audience would love to know you a little bit more. Uh, let's start from there. Uh, maybe things that you do to unwind or your hobbies or you're passionate about anything else apart from your job that you're doing? 

Geoffrey Teale: Oh, well, um, so, I'm, I'm quite old now. I have a family. I have two daughters, a dog, a cat, fish, quail. Keep quail in the garden. Uh, and that occupies most of my time outside of work. Actually my passions outside of work were always um, music. So I play guitar, and actually technology itself. So outside of work, I'm involved and have been involved in, in open source and free software for, for longer than I've been working. And, uh, I have a particular interest in, in low carbon footprint computing that I pursue outside of, out of work.

Kovid Batra: That's really amazing. So, um, like when you say low carbon, uh, cloud computing, what exactly are you doing to do that? 

Geoffrey Teale: Oh, not specifically cloud computing, but that would be involved. So yeah, there's, there's multiple streams to this. So one thing is about using, um, low power platforms, things like RISC-V. Um, the other is about streamlining of software to make it more efficient so we can look into lots of different, uh, topics there about operating systems, tools, programming languages, how they, uh, how they perform. Um, sort of reversing a trend, uh, that's been going on for as long as I've been in computing, which is that we use more and more power, both in terms of computing resource, but also actual electricity for the network, um, to deliver more and more functionality, but we're also programming more and more abstracted ways with more and more layers, which means that we're actually sort of getting less, uh, less bang for buck, if you, if you like, than we used to. So, uh, trying to reverse those trends a little bit. 

Kovid Batra: Perfect. Perfect. All right. That's really interesting. Thanks for that quick, uh, cute little intro. Uh, and, uh, now moving on to your work, like we were talking about your experience and your specialization in DevEx, right, improving the developer experience in teams. So what's your current, uh, role, responsibility that comes with, uh, within Upvest? Uh, and what are those interesting initiatives that you have, you're working on? 

Geoffrey Teale: Yeah. So I've actually just changed roles at Upvest. I've been at Upvest for a little bit over two years now, and the first two years I spent as the Head of Developer Experience. So running a tribe with a specific responsibility for client-facing developer experience. Um, now I've switched into a Principal Engineering role, which means that I have, um, a scope now which is across the whole of our engineering department, uh, with a, yeah, a view for improving experience and improving standards and quality of engineering internally as well. So, um, a slight shift in role, but my, my previous five years before, uh, Upvest, were all in, uh, internal development experience. So I think, um, quite a lot of that skill, um, coming into play in the new role which um, yeah, in terms of challenges actually, we're just at the very beginning of what we're doing on that side. So, um, early challenges are actually about identifying what problems do exist inside the company and where we can improve and how we can make ourselves ready for the next phase of the company's lifetime. So, um, I think some of those topics would be quite familiar to any company that's relatively modern in terms of its developer practices. If you're using microservices, um, there's this aspect of Conway's law, which is to say that your organizational structure starts to follow the program structure and vice versa. And, um, in that sense, you can easily get into this world where teams have autonomy, which is wonderful, but they can be, um, sort of pushed into working in a, in a siloized fashion, which can be very efficient within the team, but then you have to worry about cohesion within the organization and about making sure that people are doing the right things, uh, to, to make the services work together, in terms of design, in terms of the technology that we develop there. 
So that bridges a lot into this world of developer experience, into platform drives, I think you mentioned already, and about the way in which you think about your internal development, uh, as opposed to just what you do for customers. 

Kovid Batra: I agree. I mean, uh, as you said, like when the teams are siloed, they might be thinking they are efficient within themselves. And that's mostly the use case, the case. But when it comes to integrating different pieces together, that cohesion has to fall in. What is the biggest challenge you have seen, uh, in, in the teams in the last few years of your experience that prevents this cohesion? And what is it that works the best to bring in this cohesion in the teams? 

Geoffrey Teale: Yeah. So I think there's, there's, there's a lot of factors there. The, the, the, the biggest one I think is pressure, right? So teams in most companies have customers that they're working for, they have pressure to get things done, and that tends to make you focus on the problem in front of you, rather than the bigger picture, right? So, um, dealing, dealing with that and reinforcing the message to engineers that it's actually okay to do good engineering and to worry about the other people, um, is a big part of that. I've always said, actually, that in developer experience, a big part of what you have to do, the first thing you have to do is actually teach people about why developer experience is important. And, uh, one of those reasons is actually sort of saying, you know, promoting good behavior within engineering teams themselves and saying, we only succeed together. We only do that when we make the situation for ourselves that allows us to engineer well. And when we sort of step away from good practice and rush, rush, um, that maybe works for a short period of time. But, uh, in the long term that actually creates a situation where there's a lot of mess and you have to deal with, uh, getting past, we talk about factors like technical debt. There's a lot of things that you have to get past before you can actually get on and do the productive things that you want to do. Um, so teaching organizations and engineers to think that way is, uh, is, uh, I think a big, uh, a big part of the work that has to be done, finding ways to then take that message and put it into a package that is acceptable to people outside of engineering so that they understand why this is a priority and why it should be worked on is, I think, probably the second biggest part of that as well.

Kovid Batra: Makes sense. I think, uh, most of the, so is it like a behavioral challenge, uh, where, uh, developers and team members really don't like the fact that they have to work in cohesion with the teams? Or is it more like the organizational structure that put people into a certain kind of mindset and then they start growing with that and that becomes a problem in the later phase of the organization? What, what you have seen, uh, from your experience? 

Geoffrey Teale: Yeah. So I mean, I think growth is a big part of this. So, um, I mean, I've, I've worked with a number of startups. I've also worked in much bigger organizations. And what happens in that transition is that you move from a small tight-knit group of people who sort of inherently have this very good interpersonal communication, they all know what's going on with the company as a whole, and they build trust between them. And that way, this, this early stage organization works very well, and even though you might be working on disparate tasks, you always have some kind of cohesion there. You know what to do. And if something comes up that affects all of you, it's very easy to identify the people that you need to talk to and find a solution for it. Then as you grow, you start to have this situation where you start to take domains and say, okay, this particular part of, of what we do now belongs in a team, it has a leader and this piece over here goes over there. And that still works quite well up into a certain scale, right? But after time in an organization, several things happen. Okay, so your priorities drift apart, right? You no longer have such good understanding of the common goal. You tend to start prioritizing your work within those departments. So you can have some, some tension between those goals. It's not always clear that Department A should be working together with Department B on the same priority. You also have natural staff turnover. So those people who are there at the beginning, they start to leave, some of them, at least, and these trust relationships break down, the communication channels break down. And the third factor is that new people coming into the organization, they haven't got these relationships, they haven't got this experience. They usually don't have, uh, the position to, to have influence over things on such a large scale. 
So there's an expectation of these people, that they're going to be effective across the organization in the way that people who've been there a long time are, and it tends not to happen. And if you haven't set up for that, if you haven't built the support systems for that and the internal processes and tooling for that, then that communication stops happening in the way that it was happening before.

So all of those things create pressure towards silos, then you add on top the pressure of growth and customers and, and it just, um, uh, ossifies in that state. 

Kovid Batra: Totally. Totally. And I think, um, talking about the customers, uh, last time when we were discussing, uh, you very beautifully put across this point of bringing that product thinking, not just for the products that you're building for the customer, but when you're building it for the teams. And I, what I feel is that, the people who are working on the platform teams have come across this situation more than anyone else in the team as a developer, where they have to put in that thought of product thinking for the people within the team. So what, what, what, uh, from where does this philosophy come? How you have fitted it into, uh, how platform teams should be built? Just tell us something about that. 

Geoffrey Teale: Yeah. So this is something I talk about a little bit when I do presentations, uh, about developer experience. And one of the points that I make actually, particularly for platform teams, but any kind of internal team that's serving other internal teams, is that you have to think about yourself, not as a mandatory piece that the company will always support and say, "You must use this, this platform that we have." Because I have direct experience, not in my current company, but in previous, uh, in previous employers, where a lot of investment has been made into making a platform, but no thought really was given to this kind of developer experience, or actually even the idea of selling the platform internally, right? It was just an assumption that people would have to use it and so they would use it. And that creates a different set of forces than you'll find elsewhere. And, and people start to ignore the fact that, you know, if you've got a cloud platform in this case, um, there is competition, right? Every day as an engineer, you run into people out there working in the wide world, working for companies, the Amazons of this world, AWS, Azure, Google, they're all producing cloud platform tools. They're all promoting their cloud native development environments with their own reasons for doing that. But they expend a lot of money developing those things, developing them to a very high standard, and a lot of money promoting and marketing those things. And it doesn't take very much, when we talk just now about trust breaking down, the cohesion between teams breaking down, it doesn't take very much for a platform to start looking like less of a solution and more of a problem: if it's taking you a long time to get things done, if you can't find out how to do things, if you, um, you have bad experiences with deployment. This all turns that product into an internal problem. 

Kovid Batra: In the context of an internal problem for the teams. 

Geoffrey Teale: Yeah, and in that context, and this is what I, what I've seen, when you then either have someone coming in from outside with experience with another, a product that you could use, or you get this kind of marketing push and sales push from one of these big companies saying, "Hey, look at this, this platform that we've got that you could just buy into." um, it, it puts you in direct competition and you can lose that, that, right? So I have seen whole divisions of a, of a very large company switch away from the internal platform to using cloud native development, right, on, on a particular platform. Now there are downsides for that. There are all sorts of things that they didn't realize they would have to do that they end up having to do. But once they've made the decision, that battle is lost. And I think that's a really key topic to understand that you are in competition, even though you're an internal team, you are in competition with other people, and you have to do some of the things that they do to convince the people in your organization that what you're doing is beneficial, that it's, it's, it's useful, and it's better in some very distinct way than what they would get off the shelf from, from somewhere else. 

Kovid Batra: Got it. Got it. So, when, uh, whenever the teams are making this decision, let's, let's take something, build a platform, what are those nitty gritties that one should be taking care of? Like, either people can go with off the shelf solutions, right? And then they start building. What, what should be the mindset, what should be the decision-making mindset, I must say, uh, for, for this kind of a process when they have to go through? 

Geoffrey Teale: So I think, um, uh, we within Upvest, follow a very, um, uh, prescribed is not the right word, but we have a, we have a process for how we think about things, and I think that's actually a very useful example of how to think about any technical project, right? So we start with this 'why' question and the 'why' question is really important. We talk about product thinking. Um, this is, you know, who are we doing this for and what are the business outcomes that we want to achieve? And that's where we have to start from, right? So we define that very, very clearly because, and this is a really important part, there's no value, uh, in anybody within the organization saying, "Let's go and build a platform." For example, if that doesn't deliver what the company needs. So you have to have clarity about this. What is the best way to build this? I mean, nobody builds a platform, well not nobody, but very few people build a platform in the cloud starting from scratch. Most people are taking some existing solution, be that a cloud native solution from a big public cloud, or be that Kubernetes or Cloud Foundry. People take these tools and they wrap them up in their own processes, their own software tools around it to package them up as a, uh, a nice application platform for, for development to happen, right? So why do you do that? What, what purpose are you, are you serving in doing this? How will this bring your business forward? And if you can't answer those questions, then you probably should never even start the project, right? That's, that's my, my view. And if you can't continuously keep those, um, ideas in mind and repeat them back, right? Repeat them back in terms of what are we delivering? What do we measure up against to the, to the, to the company? Then again, you're not doing a very good job of, of, of communicating why that product exists. 
If you can't think of a reason why your platform delivers more to your company and the people working in your company than one of the off the shelf solutions, then what are you for, right? That's the fundamental question.

So we start there, we think about those things well before we even start talking about solution space and, and, um, you know, what kind of technology we're going to use, how we're going to build that. That's the first lesson. 

Kovid Batra: Makes sense. A follow-up question on that. Uh, let's say a team is let's say 20-30 folks right now, okay? I'm talking about an engineering team, uh, who are not like super-funded right now or not in a very profit making business. This comes with a cost, right? You will have to deploy resources. You will have to invest time and effort, right? So is it a good idea according to you to have shared resources for such an initiative or it doesn't work out that way? You need to have dedicated resources, uh, working on this project separately or how, how do you contemplate that? 

Geoffrey Teale: My experience of early-phase startups is that people have to be multitaskers and they have to work on multiple things to make it work, right? It just doesn't make sense in the early phase of a company to invest so heavily in a single solution. Um, and I think one of the mistakes that I see people making now actually is that they start off with this, this predefined idea of where they're going to be in five years. And so they sort of go away and say, "Okay, well, I want my, my, my system to run on microservices on Kubernetes." And they invest in setting up Kubernetes, right, which has got a lot easier over the last few years, I have to say. Um, you can, to some degree, go and just pick that stuff off the shelf and pay for it. Um, but it's an example of, of a technical decision that, that's putting the cart before the horse, right? So, of course, you want to make architectural decisions. You don't want to make investments in something that isn't going to last, but you also have to remember that you don't know what's going to happen. And actually, getting to a product quickly, uh, is more important than, than, you know, doing everything perfectly the first time around. So, when I talk about these, these things, I think, uh, we have to accept that there is a difference between being like the scrappy little startup and then being in growth phase and being a, a mega corporation. These are different environments with different pressures. 

Kovid Batra: Got it. So, when, when teams start, let's say, work on it, working on it and uh, they have started and taken up this project for let's say, next six months to at least go out with the first phase of it. Uh, what are those challenges which, uh, the platform heads or the people who are working, the engineers who are working on it, should be aware of and how to like dodge those? Something from your experience that you can share.

Geoffrey Teale: Yes. So I mean, in, in, in the, the very earliest phase, I mean, as I just alluded to, keeping it simple is, is a, a, a big benefit. And actually keeping it simple sometimes means, uh, spending money upfront. So what I've, what I've seen is, is, um, many times I've, I've worked at companies, um, at least three times, who've invested in a monitoring platform. So they've bought an off-the-shelf software-as-a-service monitoring platform, uh, and used that effectively up until a certain point of growth. Now the reason they only use it up to a certain point of growth is because these tools are extremely expensive and those costs tend to scale with your company and your organization. And so, there comes a point in the life of that organization where that no longer makes sense financially. And then you withdraw from that and actually invest in, in specialist resources, either internally or using open source tools or whatever it is. It could just be optimization of the tool that you're using to reduce those costs. But all of those things have a, a time and financial cost associated with them. Whereas at the beginning, when the costs are quite low to use these services, it actually tends to make more sense to just focus on your own project and, and, you know, pick those things up off the shelf because that's easier and quicker. And I think, uh, again, I've seen some companies fail because they tried to do everything themselves from scratch and that, that doesn't work in the beginning. So yeah, I think that's a, it's a big one. 

The second one is actually slightly later, as you start to grow. Getting something up and running at all is a challenge. Um, what tends to happen as you get a little bit bigger is this effect that I was talking about before, where people get siloized, um, the communication starts to break down and people aren't aware of the differing concerns. So you start worrying about things that you might not worry about at first, like system recovery, uh, compliance in some cases, like there's laws around what you do in terms of your platform and your recoverability and data protection and all these things. All of these topics tend to take focus away, um, from what the developers are doing. So on the one hand, that tends to slow down delivery of, of features that the engineers within your company want in favor of things that they don't really want to know about. Now, all the time you're doing this, you're taking problems away from them and solving them for them. But if you don't talk about that, then you're not, you're not, you may be delivering value, but nobody knows you're delivering value. So that's the first thing. 

The other thing is that you then tend to start losing focus on, on the impact that some of these things have. If you stop thinking about the developers as the primary stakeholders and you get obsessed about these other technical and legal factors, um, then you can start putting barriers into place. You can start, um, making the interfaces to the system the way in which it's used, become more complicated. And if you don't really focus then on the developer experience, right, what it is like to use that platform, then you start to turn into the problem, which I mentioned before, because, um, if you're regularly doing something, if you're deploying or testing on a platform and you have to do that over and over again, and it's slowed down by some bureaucracy or some practice or just literally running slowly, um, then that starts to be the thing that irritates you. It starts to be the thing that's in your way, stopping you doing what you're doing. And so, I mean, one thing is, is, is recognizing when this point happens, when your concerns start to deviate and actually explicitly saying, "Okay, yes, we're going to focus on all these things we have to focus on technically, but we're going to make sure that we reserve some technical resource for monitoring our performance and the way in which our customers interact with the system, failure cases, complaints that come up often."

Um, so one thing, again, I saw in much bigger companies, is they migrated to the cloud from, from legacy systems in data centers. And they were used to having turnaround times on, on procedures for deploying software that took at least weeks, or having month-long projects because they had to wait for specific training or they had to get sign-off. And they thought that by moving to an internal cloud platform, they would solve these things and have this kind of rapid development and deployment cycle. They sort of did in some ways, but they forgot, right? When they were speccing it out, they forgot to make the developers a stakeholder and say, "What do you need to achieve that?" And what they actually needed to achieve that was a change in the mindset around the bureaucracy that came with it. It's all well and good, like, not having to physically put a machine in a rack and order it from a company. But if you still have these rules that say, okay, you need to go on this training course before you can do anything with this, and there's a six-month waiting list for that training course, or this has to be approved by five managers who can only be contacted by email before you can do it, these processes are slowing things down. So actually, I mentioned that company where, uh, we lost a whole department from the, from the, uh, platform that we had internally. One of the reasons actually was that just getting started with this platform took months. Whereas if you went to a public cloud service, all you needed was a credit card and you could do it, and you wouldn't be breaking any rules in the company in doing that. As long as you had the, the right to spend the money on the credit card, it was fine.

So, you know, that difference of experience, that difference of, uh, of understanding something that starts to grow out as you, as you grow, right? So I think that's a, uh, a thing to look out for as you move from the situation when you're 10, 20 people in the whole company to when you're about, I would say, 100 to 200 people in the whole company. These forces start to become apparent. 

Kovid Batra: Got it. So when, when you touch that point of 100-200, uh, then there is definitely a different journey that you have to look up to, right? And there are their own set of challenges. So from that zero to one and then one to X, uh, journey, what, what things have you experienced? Like, this would be my last question for, for today, but yeah, I would be really interested for people who are listening to you heading teams of sizes, a hundred and above. What kind of things they should be looking at when they are, let's say, moving from an off the shelf to an in-house product and then building these teams together?

Geoffrey Teale: Oh, what should they be looking at? I mean, I think we just covered, uh, one of the big ones. I'd say actually that one of the, the biggest things for engineers particularly, um, and managers of engineers is resistance to documentation and, and sort of ideas about documentation that people have. So, um, when you're, again, when you're that very small company, it's very easy to just know what's going on. As you grow, what happens is new people come into your team and they have the same questions that have been asked and answered before, or were just known things. So you get this pattern where you repeatedly get the same information being requested by people, and it's very nice and normal to have conversations. It builds teams. Um, but there's this kind of key phrase, which is, 'Documentation is automation', right? So engineers understand automation. They understand why automation is required to scale, but they tend to completely discount that when it comes to documentation. So almost every engineer that I've ever met hates writing documentation. Not everyone, but almost everyone. Uh, but if you go and speak to engineers about what they need to start working with a new product, and again, we think about this as a product, um, they'll say, of course, I need some documentation. Uh, and if you dive into that, they don't really want to have fancy YouTube videos, although sometimes that helps people overcome a resistance to learning. Um, but, uh, having anything at all is useful, right? But this is a key, key learning: documentation, you need to treat it a little bit like you treat code, right? So it's a very natural, um, observation from, from most engineers: well, if I write a document about this, that document is just going to sit there and, and rot, and then it will be worse than useless because it will say the wrong thing, which is absolutely true. But the problem there is the assumption that it will just sit there and rot, right? 
It shouldn't be the case, right? If you need the documentation to scale out, you need these pieces to, to support new people coming into the company and to actually reduce the overhead of communication because more people, the more different directions of communication you have, the more costly it gets for the organization. Documentation is boring. It's old-fashioned, but it is the solution that works for fixing that. 

The only other thing I'm going to say about this is mindset: it's really important to teach engineers what to document, right? Get them away from this mindset that documentation means writing massive, uh, uh, reams and reams of, of text explaining things in, in detail. It's about, you know, documenting the right things in the right place. So at code level, commenting, um, saying not what the code there does, but more importantly, generally, why it does that. You know, what decision was made that led to that? What customer requirement led to that? What piece of regulation led to that? Linking out to the resources that explain that. And then at slightly higher levels, making things discoverable. So we talk actually in DevEx about things like, um, service catalogs, so people can find out what services are running, what APIs are available internally. But also actually documentation has to be structured in a way that meets the use cases. And so, actually not having individual departments dropping little bits of information all over a wiki with an arcane structure, but actually sort of having a centralized resource. Again, that's one thing that I did actually in a bigger company. I came into the platform team and said, "Nobody can find any information about your platform. You actually need like a central website and you need to promote that website and tell people, 'Hey, this is here. This is how you get the information that you need to understand this platform.' And actually including at the very front of that page why this platform is better than just going out somewhere else, to come back to the same topic."

Documentation isn't a silver bullet, but it's the closest thing I'm aware of in tech organizations, and it's the thing that we routinely get wrong.

Kovid Batra: Great. I think, uh, just in the interest of time, we'll have to stop here. But, uh, Geoffrey, this was something really, really interesting. I also explored a few things, uh, which were very new to me from the platform perspective. Uh, we would love to, uh, have you for another episode discussing and deep diving more into such topics. But for today, I think this is our time. And, uh, thank you once again for joining in, taking out time for this. Appreciate it.

Geoffrey Teale: Thank you. It's my pleasure.

For agile teams, tracking productivity can quickly become overwhelming, especially when too many metrics clutter the process. Many teams feel they’re working hard without seeing the progress they expect. By focusing on a handful of high-impact JIRA metrics, teams can gain clear, actionable insights that streamline decision-making and help them stay on course. 

These five essential metrics highlight what truly drives productivity, enabling teams to make informed adjustments that propel their work forward.

Why JIRA Metrics Matter for Agile Teams

Agile teams often face missed deadlines, unclear priorities, and resource management issues. Without effective metrics, these issues remain hidden, leading to frustration. JIRA metrics provide clarity on team performance, enabling early identification of bottlenecks and allowing teams to stay agile and efficient. By tracking just a few high-impact metrics, teams can make informed, data-driven decisions that improve workflows and outcomes.

Top 5 JIRA Metrics to Improve Your Team’s Productivity

1. Work In Progress (WIP)

Work In Progress (WIP) measures the number of tasks actively being worked on. Setting WIP limits encourages teams to complete existing tasks before starting new ones, which reduces task-switching, increases focus, and improves overall workflow efficiency.

Technical applications: 

Setting WIP limits: On JIRA Kanban boards, teams can set WIP limits for each stage, like “In Progress” or “Review.” This prevents overloading and helps teams maintain steady productivity without overwhelming team members.

Identifying bottlenecks: WIP metrics highlight bottlenecks in real time. If tasks accumulate in a specific stage (e.g., “In Review”), it signals a need to address delays, such as availability of reviewers or unclear review standards.

Using cumulative flow diagrams: JIRA’s cumulative flow diagrams visualize WIP across stages, showing where tasks are getting stuck and helping teams keep workflows balanced.
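The WIP-limit check described above can be sketched in a few lines of plain Python. The issue keys, statuses, and limits below are all invented for illustration; this is the logic a board enforces, not a real JIRA API call:

```python
from collections import Counter

# Hypothetical board snapshot: (issue key, current status).
issues = [
    ("PROJ-1", "In Progress"), ("PROJ-2", "In Progress"),
    ("PROJ-3", "In Review"), ("PROJ-4", "In Review"),
    ("PROJ-5", "In Review"), ("PROJ-6", "To Do"),
]

# Assumed WIP limits per stage (values are illustrative).
wip_limits = {"In Progress": 3, "In Review": 2}

def check_wip(issues, limits):
    """Return stages whose current task count exceeds their WIP limit."""
    counts = Counter(status for _, status in issues)
    return {stage: counts[stage] for stage, limit in limits.items()
            if counts[stage] > limit}

violations = check_wip(issues, wip_limits)
print(violations)  # {'In Review': 3} — the review stage is over its limit
```

A stage that keeps showing up in `violations` sprint after sprint is exactly the bottleneck a cumulative flow diagram would surface visually.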

2. Work Breakdown

Work Breakdown details how tasks are distributed across project components, priorities, and team members. Breaking down tasks into manageable parts (Epics, Stories, Subtasks) provides clarity on resource allocation and ensures each project aspect receives adequate attention.

Technical applications:

Epics and stories in JIRA: JIRA enables teams to organize large projects by breaking them into Epics, Stories, and Subtasks, making complex tasks more manageable and easier to track.

Advanced roadmaps: JIRA’s Advanced Roadmaps allow visualization of task breakdown in a timeline, displaying dependencies and resource allocations. This overview helps maintain balanced workloads across project components.

Tracking priority and status: Custom filters in JIRA allow teams to view high-priority tasks across Epics and Stories, ensuring critical items are progressing as expected.

3. Developer Workload

Developer Workload monitors the task volume and complexity assigned to each developer. This metric ensures balanced workload distribution, preventing burnout and optimizing each developer’s capacity.

Technical applications:

JIRA workload reports: Workload reports aggregate task counts, hours estimated, and priority levels for each developer. This helps project managers reallocate tasks if certain team members are overloaded.

Time tracking and estimation: JIRA allows developers to log actual time spent on tasks, making it possible to compare against estimates for improved workload planning.

Capacity-based assignment: Project managers can analyze workload data to assign tasks based on each developer’s availability and capacity, ensuring sustainable productivity.
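The capacity-based reallocation idea can be illustrated with a minimal sketch. The assignees, hour estimates, and the 30-hour capacity figure are hypothetical, chosen only to show the aggregation:

```python
# Hypothetical issue list: (assignee, estimated hours).
tasks = [
    ("alice", 8), ("alice", 13), ("bob", 5),
    ("bob", 3), ("carol", 21), ("carol", 13),
]

CAPACITY_HOURS = 30  # assumed per-person capacity for the sprint

def workload_by_assignee(tasks):
    """Sum estimated hours per assignee."""
    totals = {}
    for assignee, hours in tasks:
        totals[assignee] = totals.get(assignee, 0) + hours
    return totals

def overloaded(tasks, capacity=CAPACITY_HOURS):
    """List assignees whose total estimate exceeds capacity."""
    return [a for a, h in workload_by_assignee(tasks).items() if h > capacity]

print(workload_by_assignee(tasks))  # {'alice': 21, 'bob': 8, 'carol': 34}
print(overloaded(tasks))            # ['carol'] — a candidate for reallocation
```

In practice the inputs would come from a JIRA workload report rather than a hand-written list, but the balancing decision reduces to this comparison.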

4. Team Velocity

Team Velocity measures the amount of work completed in each sprint, establishing a baseline for sprint planning and setting realistic goals.

Technical applications:

Velocity chart: JIRA’s Velocity Chart displays work completed versus planned work, helping teams gauge their performance trends and establish realistic goals for future sprints.

Estimating story points: Story points assigned to tasks allow teams to calculate velocity and capacity more accurately, improving sprint planning and goal setting.

Historical analysis for planning: Historical velocity data enables teams to look back at performance trends, helping identify factors that impacted past sprints and optimizing future planning.
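As a rough sketch, the historical analysis above amounts to summarizing completed story points per sprint and planning the next sprint around the average rather than the best sprint. The point totals below are invented for illustration:

```python
# Completed story points from the last five sprints (illustrative numbers).
completed_points = [23, 30, 27, 21, 29]

def velocity_stats(points):
    """Average, minimum, and maximum velocity across past sprints."""
    return {
        "avg": sum(points) / len(points),
        "min": min(points),
        "max": max(points),
    }

stats = velocity_stats(completed_points)

# Commit to roughly the average velocity, not the best-case sprint.
next_sprint_target = round(stats["avg"])
print(stats, next_sprint_target)  # avg 26.0, min 21, max 30 → target 26
```

The spread between `min` and `max` is itself informative: a wide gap suggests unstable sprints worth investigating before trusting the average.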

5. Cycle Time

Cycle Time tracks how long tasks take from start to completion, highlighting process inefficiencies. Shorter cycle times generally mean faster delivery.

Technical applications:

Control chart: The Control Chart in JIRA visualizes Cycle Time, displaying how long tasks spend in each stage, helping to identify where delays occur.

Custom workflows and time tracking: Customizable workflows allow teams to assign specific time limits to each stage, identifying areas for improvement and reducing Cycle Time.

SLAs for timely completion: For teams with service-level agreements, setting cycle-time goals can help track SLA adherence, providing benchmarks for performance.
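The cycle-time and SLA-adherence checks can be sketched as follows. The issue keys, timestamps, and 48-hour goal are all illustrative assumptions, standing in for the status-transition history JIRA records:

```python
from datetime import datetime

# Hypothetical per-issue timestamps: (work started, work completed).
issues = {
    "PROJ-10": ("2024-05-01T09:00", "2024-05-03T17:00"),
    "PROJ-11": ("2024-05-02T10:00", "2024-05-02T16:00"),
}

SLA_HOURS = 48  # assumed cycle-time goal

def cycle_time_hours(start, end):
    """Elapsed hours between two ISO-style timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Issues whose cycle time breaches the SLA goal.
breaches = [key for key, (s, e) in issues.items()
            if cycle_time_hours(s, e) > SLA_HOURS]
print(breaches)  # ['PROJ-10'] — 56 hours, over the 48-hour goal
```

A control chart plots exactly these per-issue durations over time, so recurring breaches point at the stage where work is stalling.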

How to Set Up JIRA Metrics for Success: Practical Tips for Maximizing Their Benefits with Typo


Effectively setting up and using JIRA metrics requires strategic configuration and the right tools to turn raw data into actionable insights. Here’s a practical, step-by-step guide to configuring these metrics in JIRA for optimal tracking and collaboration. With Typo’s integration, teams gain additional capabilities for managing, analyzing, and discussing metrics collaboratively.

Step 1: Configure Key Dashboards for Visibility

Setting up dashboards in JIRA for metrics like Cycle Time, Developer Workload, and Team Velocity allows for quick access to critical data.

How to set up:

  1. Go to the Dashboards section in JIRA, select Create Dashboard, and add specific gadgets such as Cumulative Flow Diagram for WIP and Velocity Chart for Team Velocity.
  2. Position each gadget for easy reference, giving your team a visual summary of project progress at a glance.

Step 2: Use Typo’s Sprint Analysis for Enhanced Sprint Visibility

Typo’s sprint analysis offers an in-depth view of your team’s progress throughout a sprint, enabling engineering managers and developers to better understand performance trends, spot blockers, and refine future planning. Typo integrates seamlessly with JIRA to provide real-time sprint insights, including data on team velocity, task distribution, and completion rates.

Key features of Typo’s sprint analysis:

Detailed sprint performance summaries: Typo automatically generates sprint performance summaries, giving teams a clear view of completed tasks, WIP, and uncompleted items.

Sprint progress tracking: Typo visualizes your team’s progress across each sprint phase, enabling managers to identify trends and respond to bottlenecks faster.

Velocity trend analysis: Track velocity over multiple sprints to understand performance patterns. Typo’s charts display average, maximum, and minimum velocities, helping teams make data-backed decisions for future sprint planning.

Step 3: Leverage Typo’s Customizable Reports for Deeper Analysis

Typo enables engineering teams to go beyond JIRA’s native reporting by offering customizable reports. These reports allow teams to focus on specific metrics that matter most to them, creating targeted views that support sprint retrospectives and help track ongoing improvements.

Key benefits of Typo reports:

Customized metrics views: Typo’s reporting feature allows you to tailor reports by sprint, team member, or task type, enabling you to create a focused analysis that meets team objectives.

Sprint performance comparison: Easily compare current sprint performance with past sprints to understand progress trends and potential areas for optimization.

Collaborative insights: Typo’s centralized platform allows team members to add comments and insights directly into reports, facilitating discussion and shared understanding of sprint outcomes.

Step 4: Track Team Velocity with Typo’s Velocity Trend Analysis

Typo’s Velocity Trend Analysis provides a comprehensive view of team capacity and productivity over multiple sprints, allowing managers to set realistic goals and adjust plans according to past performance data.

How to use:

  1. Access Typo’s Velocity Trend Analysis to view velocity averages and deviations over time, helping your team anticipate work capacity more accurately.
  2. Use Typo’s charts to visualize and discuss the effects of any changes made to workflows or team processes, allowing for data-backed sprint planning.
  3. Incorporate these insights into future sprint planning meetings to establish achievable targets and manage team workload effectively.

Step 5: Automate Alerts and Notifications for Key Metrics

Setting up automated alerts in JIRA and Typo helps teams stay on top of metrics without manual checking, ensuring that critical changes are visible in real time.

How to set up:

  1. Use JIRA’s automation rules to create alerts for specific metrics. For example, set a notification if a task’s Cycle Time exceeds a predefined threshold, signaling potential delays.
  2. Enable notifications in Typo for sprint analysis updates, such as velocity changes or WIP limits being exceeded, to keep team members informed throughout the sprint.
  3. Automate report generation in Typo, allowing your team to receive regular updates on sprint performance without needing to pull data manually.
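The threshold logic behind an alert like the velocity-change notification in step 2 might look like this minimal sketch. The 20% drop threshold is an assumption, and this is plain Python illustrating the rule's condition, not a real JIRA or Typo automation configuration:

```python
def velocity_alert(history, current, drop_threshold=0.2):
    """Fire when the current sprint's velocity falls more than
    drop_threshold below the historical average (threshold is illustrative)."""
    avg = sum(history) / len(history)
    return current < avg * (1 - drop_threshold)

# Historical average here is 27 points; the alert line is 21.6.
print(velocity_alert([25, 28, 26, 29], 18))  # True — notify the team
print(velocity_alert([25, 28, 26, 29], 25))  # False — within normal range
```

Expressing the condition this explicitly is useful even if the actual rule lives in a tool: the team can agree on the threshold before wiring up the notification.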

Step 6: Host Collaborative Retrospectives with Typo

Typo’s integration makes retrospectives more effective by offering a shared space for reviewing metrics and discussing improvement opportunities as a team.

How to use:

  1. Use Typo’s reports and sprint analysis as discussion points in retrospective meetings, focusing on completed vs. planned work, Cycle Time efficiency, and WIP trends.
  2. Encourage team members to add insights or suggestions directly into Typo, fostering collaborative improvement and shared accountability.
  3. Document key takeaways and actionable steps in Typo, ensuring continuous tracking and follow-through on improvement efforts in future sprints.

Read more: Moving beyond JIRA Sprint Reports 

Monitoring Scope Creep

Scope creep—when a project’s scope expands beyond its original objectives—can disrupt timelines, strain resources, and lead to project overruns. Monitoring scope creep is essential for agile teams that need to stay on track without sacrificing quality. 

In JIRA, tracking scope creep involves setting clear boundaries for task assignments, monitoring changes, and evaluating their impact on team workload and sprint goals.

How to Monitor Scope Creep in JIRA

  1. Define scope boundaries: Start by clearly defining the scope of each project, sprint, or epic in JIRA, detailing the specific tasks and goals that align with project objectives. Make sure these definitions are accessible to all team members.
  2. Use the issue history and custom fields: Track changes in task descriptions, deadlines, and priorities by utilizing JIRA’s issue history and custom fields. By setting up custom fields for scope-related tags or labels, teams can flag tasks or sub-tasks that deviate from the original project scope, making scope creep more visible.
  3. Monitor workload adjustments with Typo: When scope changes are approved, Typo’s integration with JIRA can help assess their impact on the team’s workload. Use Typo’s reporting to analyze new tasks added mid-sprint or shifts in priorities, ensuring the team remains balanced and prepared for adjusted goals.
  4. Sprint retrospectives for reflection: During sprint retrospectives, review any instances of scope creep and assess the reasons behind the adjustments. This allows the team to identify recurring patterns, evaluate the necessity of certain changes, and refine future project scoping processes.

By closely monitoring and managing scope creep, agile teams can keep their projects within boundaries, maintain productivity, and make adjustments only when they align with strategic objectives.

Building a Data-Driven Engineering Culture

Building a data-driven culture goes beyond tracking metrics; it’s about engaging the entire team in understanding and applying these insights to support shared goals. By fostering collaboration and using metrics as a foundation for continuous improvement, teams can align more effectively and adapt to challenges with agility.

Regularly revisiting and refining metrics ensures they stay relevant and actionable as team priorities evolve. To see how Typo can help you create a streamlined, data-driven approach, schedule a personalized demo today and unlock your team’s full potential.

Engineering Analytics


Tracking DORA Metrics for Mobile Apps

Mobile development comes with a unique set of challenges: rapid release cycles, stringent user expectations, and the complexities of maintaining quality across diverse devices and operating systems. Engineering teams need robust frameworks to measure their performance and optimize their development processes effectively. 

DORA metrics—Deployment Frequency, Lead Time for Changes, Mean Time to Recovery (MTTR), and Change Failure Rate—are key indicators that provide valuable insights into a team’s DevOps performance. Leveraging these metrics can empower mobile development teams to make data-driven improvements that boost efficiency and enhance user satisfaction.

Importance of DORA Metrics in Mobile Development

DORA metrics, rooted in research from the DevOps Research and Assessment (DORA) group, help teams measure key aspects of software delivery performance.

Here's why they matter for mobile development:

  • Deployment Frequency: Mobile teams need to keep up with the fast pace of updates required to satisfy user demand. Frequent, smooth deployments signal a team’s ability to deliver features, fixes, and updates consistently.
  • Lead Time for Changes: This metric tracks the time between code commit and deployment. For mobile teams, shorter lead times mean a streamlined process, allowing quicker responses to user feedback and faster feature rollouts.
  • MTTR: Downtime in mobile apps can result in frustrated users and poor reviews. By tracking MTTR, teams can assess and improve their incident response processes, minimizing the time an app remains in a broken state.
  • Change Failure Rate: A high change failure rate can indicate inadequate testing or rushed releases. Monitoring this helps mobile teams enhance their quality assurance practices and prevent issues from reaching production.
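To make these definitions concrete, here is a minimal sketch of how the four metrics might be derived from a deployment log. The record fields (committed_at, deployed_at, failed, restored_at) are illustrative, not any specific tool's schema:

```python
from datetime import datetime, timedelta

def dora_metrics(deployments, days):
    """Derive the four DORA metrics from a list of deployment records.

    Each record has 'committed_at', 'deployed_at', a 'failed' flag, and,
    for failed deployments, a 'restored_at' timestamp.
    """
    total = len(deployments)
    failed = [d for d in deployments if d["failed"]]
    deployment_frequency = total / days  # deployments per day
    lead_time = sum((d["deployed_at"] - d["committed_at"] for d in deployments),
                    timedelta()) / total  # average commit-to-deploy time
    change_failure_rate = len(failed) / total
    recoveries = [d["restored_at"] - d["deployed_at"] for d in failed]
    mttr = sum(recoveries, timedelta()) / len(recoveries) if recoveries else timedelta()
    return deployment_frequency, lead_time, change_failure_rate, mttr
```

In practice these records would come from your CI/CD system; the point is that all four metrics fall out of the same small set of timestamps.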

Deep Dive into Practical Solutions for Tracking DORA Metrics

Tracking DORA metrics in mobile app development involves a range of technical strategies. Here, we explore practical approaches to implement effective measurement and visualization of these metrics.

Implementing a Measurement Framework

Integrating DORA metrics into existing workflows requires more than a simple add-on; it demands technical adjustments and robust toolchains that support continuous data collection and analysis.

  1. Automated Data Collection

Automating the collection of DORA metrics starts with choosing the right CI/CD platforms and tools that align with mobile development. Popular options include:

  • Jenkins Pipelines: Set up custom pipeline scripts that log deployment events and timestamps, capturing deployment frequency and lead times. Use plugins like the Pipeline Stage View for visual insights.
  • GitLab CI/CD: With GitLab's built-in analytics, teams can monitor deployment frequency and lead time for changes directly within their CI/CD pipeline.
  • GitHub Actions: Utilize workflows that trigger on commits and deployments. Custom actions can be developed to log data and push it to external observability platforms for visualization.

Technical setup: For accurate deployment tracking, implement triggers in your CI/CD pipelines that capture key timestamps at each stage (e.g., start and end of builds, start of deployment). This can be done with shell scripts that record each timestamp in a database or monitoring tool.
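As a sketch of this idea, a small helper like the following could be called from pipeline steps to record stage timestamps and compute durations between them. The in-memory dict stands in for a real database or monitoring backend, and the stage names are illustrative:

```python
import time

class StageClock:
    """Record stage timestamps during a pipeline run (an in-memory
    stand-in for appending to a database or monitoring tool)."""

    def __init__(self):
        self.marks = {}

    def mark(self, stage):
        """Record the current time for a named pipeline stage."""
        self.marks[stage] = time.time()

    def elapsed(self, start_stage, end_stage):
        """Seconds between two recorded stages, e.g. commit -> deploy."""
        return self.marks[end_stage] - self.marks[start_stage]
```

A pipeline step would call `mark("build_start")`, `mark("deploy_end")`, and so on, and the elapsed values feed directly into lead-time reporting.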

  2. Real-Time Monitoring and Visualization

To make sense of the collected data, teams need a robust visualization strategy. Here’s a deeper look at setting up effective dashboards:

  • Prometheus with Grafana: Integrate Prometheus to scrape data from CI/CD pipelines, and use Grafana to create dashboards with deployment trends and lead time breakdowns.
  • Elastic Stack (ELK): Ship logs from your CI/CD process to Elasticsearch and build visualizations in Kibana. This setup provides detailed logs alongside high-level metrics.

Technical Implementation Tips:

  • Use Prometheus exporters or custom scripts that expose metric data as HTTP endpoints.
  • Design Grafana dashboards to show current and historical trends for DORA metrics, using panels that highlight anomalies or spikes in lead time or failure rates.
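For illustration, a custom exporter ultimately just serves metrics in Prometheus's text exposition format over HTTP. A minimal rendering sketch might look like this (metric names are examples; in practice the official prometheus_client library handles this for you):

```python
def render_prometheus(metrics):
    """Render a dict of gauge values in Prometheus text exposition format."""
    lines = []
    for name, value in metrics.items():
        lines.append(f"# TYPE {name} gauge")  # metric type declaration
        lines.append(f"{name} {value}")       # sample line: name value
    return "\n".join(lines) + "\n"
```

Serving this string from an HTTP endpoint is all Prometheus needs to scrape it on its regular interval.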

  3. Comprehensive Testing Pipelines

Testing is integral to maintaining a low change failure rate. To align with this, engineering teams should develop thorough, automated testing strategies:

  • Unit Testing: Implement unit tests with frameworks like JUnit for Android or XCTest for iOS. Ensure these are part of every build to catch low-level issues early.
  • Integration Testing: Use tools such as Espresso and UIAutomator for Android and XCUITest for iOS to validate complex user interactions and integrations.
  • End-to-End Testing: Integrate Appium or Selenium to automate tests across different devices and OS versions. End-to-end testing helps simulate real-world usage and ensures new deployments don't break critical app flows.

Pipeline Integration:

  • Set up your CI/CD pipeline to trigger these tests automatically post-build. Configure your pipeline to fail early if a test doesn’t pass, preventing faulty code from being deployed.

  4. Incident Response and MTTR Management

Reducing MTTR requires visibility into incidents and the ability to act swiftly. Engineering teams should:

  • Implement Monitoring Tools: Use tools like Firebase Crashlytics for crash reporting and monitoring. Integrate with third-party tools like Sentry for comprehensive error tracking.
  • Set Up Automated Alerts: Configure alerts for critical failures using observability tools like Grafana Loki, Prometheus Alertmanager, or PagerDuty. This ensures that the team is notified as soon as an issue arises.

Strategies for Quick Recovery:

  • Implement automatic rollback procedures using feature flags and deployment strategies such as blue-green deployments or canary releases.
  • Use scripts or custom CI/CD logic to switch between versions if a critical incident is detected.
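A minimal sketch of such a switch might look like the following, where traffic is routed to the canary release until its error rate crosses a threshold (version labels and the threshold are illustrative):

```python
def select_version(error_rate, threshold=0.05, canary="v2", stable="v1"):
    """Roll back to the stable release when the canary's error rate
    crosses the threshold (a stand-in for real CI/CD rollback logic)."""
    return stable if error_rate > threshold else canary
```

In a real pipeline the error rate would come from your monitoring stack, and the return value would drive a feature flag or load-balancer configuration.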

Weaving Typo into Your Workflow

After implementing these technical solutions, teams can leverage Typo to consolidate data and make DORA metrics tracking more efficient and less time-consuming. For teams looking to streamline this process, Typo provides:

  • Automated Deployment Tracking: By integrating with existing CI/CD tools, Typo collects deployment data and visualizes trends, simplifying the tracking of deployment frequency.
  • Detailed Lead Time Analysis: Typo’s analytics engine breaks down lead times by stages in your pipeline, helping teams pinpoint delays in specific steps, such as code review or testing.
  • Real-Time Incident Response Support: Typo includes incident monitoring capabilities that assist in tracking MTTR and offering insights into incident trends, facilitating better response strategies.
  • Seamless Integration: Typo connects effortlessly with platforms like Jenkins, GitLab, GitHub, and Jira, centralizing DORA metrics in one place without disrupting existing workflows.

Typo’s integration capabilities mean engineering teams don’t need to build custom scripts or additional data pipelines. With Typo, developers can focus on analyzing data rather than collecting it, ultimately accelerating their journey toward continuous improvement.

Establishing a Continuous Improvement Cycle

To fully leverage DORA metrics, teams must establish a feedback loop that drives continuous improvement. This section outlines how to create a process that ensures long-term optimization and alignment with development goals.

  1. Regular Data Reviews: Conduct data-driven retrospectives to analyze trends and set goals for improvements.
  2. Iterative Process Enhancements: Use findings to adjust coding practices, enhance automated testing coverage, or refine build processes.
  3. Team Collaboration and Learning: Share knowledge across teams to spread best practices and avoid repeating mistakes.

Empowering Your Mobile Development Process

DORA metrics provide mobile engineering teams with the tools needed to measure and optimize their development processes, enhancing their ability to release high-quality apps efficiently. By integrating DORA metrics tracking through automated data collection, real-time monitoring, comprehensive testing pipelines, and advanced incident response practices, teams can achieve continuous improvement. 

Tools like Typo make these practices even more effective by offering seamless integration and real-time insights, allowing developers to focus on innovation and delivering exceptional user experiences.

Top 5 JIRA Metrics to Boost Productivity

For agile teams, tracking productivity can quickly become overwhelming, especially when too many metrics clutter the process. Many teams feel they’re working hard without seeing the progress they expect. By focusing on a handful of high-impact JIRA metrics, teams can gain clear, actionable insights that streamline decision-making and help them stay on course. 

These five essential metrics highlight what truly drives productivity, enabling teams to make informed adjustments that propel their work forward.

Why JIRA Metrics Matter for Agile Teams

Agile teams often face missed deadlines, unclear priorities, and resource management issues. Without effective metrics, these issues remain hidden, leading to frustration. JIRA metrics provide clarity on team performance, enabling early identification of bottlenecks and allowing teams to stay agile and efficient. By tracking just a few high-impact metrics, teams can make informed, data-driven decisions that improve workflows and outcomes.

Top 5 JIRA Metrics to Improve Your Team’s Productivity

1. Work In Progress (WIP)

Work In Progress (WIP) measures the number of tasks actively being worked on. Setting WIP limits encourages teams to complete existing tasks before starting new ones, which reduces task-switching, increases focus, and improves overall workflow efficiency.

Technical applications: 

Setting WIP limits: On JIRA Kanban boards, teams can set WIP limits for each stage, like “In Progress” or “Review.” This prevents overloading and helps teams maintain steady productivity without overwhelming team members.

Identifying bottlenecks: WIP metrics highlight bottlenecks in real time. If tasks accumulate in a specific stage (e.g., “In Review”), it signals a need to address delays, such as availability of reviewers or unclear review standards.

Using cumulative flow diagrams: JIRA’s cumulative flow diagrams visualize WIP across stages, showing where tasks are getting stuck and helping teams keep workflows balanced.
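As an illustration, if issues are exported from a board (for example via JIRA's REST API), WIP counts per stage and limit breaches can be checked with a few lines. The field names are assumptions about the export shape:

```python
from collections import Counter

def wip_breaches(issues, limits):
    """Count issues per workflow stage and return stages over their WIP limit.

    `issues` is a list of dicts with a 'status' key; `limits` maps a
    stage name to its WIP limit.
    """
    counts = Counter(issue["status"] for issue in issues)
    return {stage: counts[stage] for stage in limits if counts[stage] > limits[stage]}
```

Running this on each board snapshot gives an early warning before a stage silently becomes a bottleneck.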

2. Work Breakdown

Work Breakdown details how tasks are distributed across project components, priorities, and team members. Breaking down tasks into manageable parts (Epics, Stories, Subtasks) provides clarity on resource allocation and ensures each project aspect receives adequate attention.

Technical applications:

Epics and stories in JIRA: JIRA enables teams to organize large projects by breaking them into Epics, Stories, and Subtasks, making complex tasks more manageable and easier to track.

Advanced roadmaps: JIRA’s Advanced Roadmaps allow visualization of task breakdown in a timeline, displaying dependencies and resource allocations. This overview helps maintain balanced workloads across project components.

Tracking priority and status: Custom filters in JIRA allow teams to view high-priority tasks across Epics and Stories, ensuring critical items are progressing as expected.

3. Developer Workload

Developer Workload monitors the task volume and complexity assigned to each developer. This metric ensures balanced workload distribution, preventing burnout and optimizing each developer’s capacity.

Technical applications:

JIRA workload reports: Workload reports aggregate task counts, hours estimated, and priority levels for each developer. This helps project managers reallocate tasks if certain team members are overloaded.

Time tracking and estimation: JIRA allows developers to log actual time spent on tasks, making it possible to compare against estimates for improved workload planning.

Capacity-based assignment: Project managers can analyze workload data to assign tasks based on each developer’s availability and capacity, ensuring sustainable productivity.

4. Team Velocity

Team Velocity measures the amount of work completed in each sprint, establishing a baseline for sprint planning and setting realistic goals.

Technical applications:

Velocity chart: JIRA’s Velocity Chart displays work completed versus planned work, helping teams gauge their performance trends and establish realistic goals for future sprints.

Estimating story points: Story points assigned to tasks allow teams to calculate velocity and capacity more accurately, improving sprint planning and goal setting.

Historical analysis for planning: Historical velocity data enables teams to look back at performance trends, helping identify factors that impacted past sprints and optimizing future planning.
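A small sketch of how historical velocity data might feed planning: compute the average, maximum, and minimum over a rolling window of recent sprints (the window size and story-point numbers are illustrative):

```python
def velocity_stats(completed_points, window=3):
    """Average, max, and min velocity over the last `window` sprints.

    `completed_points` is a list of story points completed per sprint,
    oldest first.
    """
    recent = completed_points[-window:]
    return sum(recent) / len(recent), max(recent), min(recent)
```

The average gives a realistic commitment target, while the spread between max and min hints at how predictable the team's delivery is.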

5. Cycle Time

Cycle Time tracks how long tasks take from start to completion, highlighting process inefficiencies. Shorter cycle times generally mean faster delivery.

Technical applications:

Control chart: The Control Chart in JIRA visualizes Cycle Time, displaying how long tasks spend in each stage, helping to identify where delays occur.

Custom workflows and time tracking: Customizable workflows allow teams to assign specific time limits to each stage, identifying areas for improvement and reducing Cycle Time.

SLAs for timely completion: For teams with service-level agreements, setting cycle-time goals can help track SLA adherence, providing benchmarks for performance.
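For illustration, cycle times can be computed from status-transition timestamps exported from an issue tracker. The ISO-8601 fields below are assumptions about the export shape:

```python
from datetime import datetime

def cycle_times(tasks):
    """Hours from 'started' to 'done' for each task, keyed by issue id.

    Timestamps are ISO-8601 strings as they might appear in an export.
    """
    result = {}
    for key, t in tasks.items():
        started = datetime.fromisoformat(t["started"])
        done = datetime.fromisoformat(t["done"])
        result[key] = (done - started).total_seconds() / 3600
    return result
```

Comparing these values against an SLA threshold is then a simple filter over the returned dict.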

How to Set Up JIRA Metrics for Success: Practical Tips with Typo

Effectively setting up and using JIRA metrics requires strategic configuration and the right tools to turn raw data into actionable insights. Here’s a practical, step-by-step guide to configuring these metrics in JIRA for optimal tracking and collaboration. With Typo’s integration, teams gain additional capabilities for managing, analyzing, and discussing metrics collaboratively.

Step 1: Configure Key Dashboards for Visibility

Setting up dashboards in JIRA for metrics like Cycle Time, Developer Workload, and Team Velocity allows for quick access to critical data.

How to set up:

  1. Go to the Dashboards section in JIRA, select Create Dashboard, and add specific gadgets such as Cumulative Flow Diagram for WIP and Velocity Chart for Team Velocity.
  2. Position each gadget for easy reference, giving your team a visual summary of project progress at a glance.

Step 2: Use Typo’s Sprint Analysis for Enhanced Sprint Visibility

Typo’s sprint analysis offers an in-depth view of your team’s progress throughout a sprint, enabling engineering managers and developers to better understand performance trends, spot blockers, and refine future planning. Typo integrates seamlessly with JIRA to provide real-time sprint insights, including data on team velocity, task distribution, and completion rates.

Key features of Typo’s sprint analysis:

Detailed sprint performance summaries: Typo automatically generates sprint performance summaries, giving teams a clear view of completed tasks, WIP, and uncompleted items.

Sprint progress tracking: Typo visualizes your team’s progress across each sprint phase, enabling managers to identify trends and respond to bottlenecks faster.

Velocity trend analysis: Track velocity over multiple sprints to understand performance patterns. Typo’s charts display average, maximum, and minimum velocities, helping teams make data-backed decisions for future sprint planning.

Step 3: Leverage Typo’s Customizable Reports for Deeper Analysis

Typo enables engineering teams to go beyond JIRA’s native reporting by offering customizable reports. These reports allow teams to focus on specific metrics that matter most to them, creating targeted views that support sprint retrospectives and help track ongoing improvements.

Key benefits of Typo reports:

Customized metrics views: Typo’s reporting feature allows you to tailor reports by sprint, team member, or task type, enabling you to create a focused analysis that meets team objectives.

Sprint performance comparison: Easily compare current sprint performance with past sprints to understand progress trends and potential areas for optimization.

Collaborative insights: Typo’s centralized platform allows team members to add comments and insights directly into reports, facilitating discussion and shared understanding of sprint outcomes.

Step 4: Track Team Velocity with Typo’s Velocity Trend Analysis

Typo’s Velocity Trend Analysis provides a comprehensive view of team capacity and productivity over multiple sprints, allowing managers to set realistic goals and adjust plans according to past performance data.

How to use:

  1. Access Typo’s Velocity Trend Analysis to view velocity averages and deviations over time, helping your team anticipate work capacity more accurately.
  2. Use Typo’s charts to visualize and discuss the effects of any changes made to workflows or team processes, allowing for data-backed sprint planning.

How to Reduce Cyclomatic Complexity?

Think of reading a book with multiple plot twists and branching storylines. While engaging, it can also be confusing and overwhelming when there are too many paths to follow. Just as a complex storyline can confuse readers, high cyclomatic complexity can make code hard to understand, maintain, and test, leading to bugs and errors. 

In this blog, we will discuss why high cyclomatic complexity can be problematic and ways to reduce it.

What is Cyclomatic Complexity? 

Cyclomatic complexity is a software metric developed by Thomas J. McCabe in 1976. It indicates the complexity of a program by counting its decision points. 

A higher cyclomatic complexity score reflects more execution paths and therefore more complexity; a low score signifies fewer paths and less complexity. 

Cyclomatic Complexity is calculated using a control flow graph: 

M = E - N + 2P

M = Cyclomatic complexity

E = Edges (flow of control)

N = Nodes (blocks of code) 

P = Number of connected components 
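As a worked example, consider the control flow graph of a single if/else: a decision node, two branch blocks, and a merge node (N = 4), connected by four edges (E = 4) in one connected component (P = 1), giving M = 4 - 4 + 2 = 2, one for each independent path:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """M = E - N + 2P, computed from the control flow graph."""
    return edges - nodes + 2 * components

# A single if/else: 4 nodes, 4 edges, 1 component -> M = 2.
# Straight-line code with two nodes and one edge -> M = 1.
```
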

Why is High Cyclomatic Complexity Problematic? 

Increases Error Proneness 

The more complex the code, the greater the chance of bugs. When there are many possible paths and conditions, developers may overlook certain conditions or edge cases during testing, making it challenging to cover them all and leading to defects in the software. 

Leads to Cognitive Complexity 

Cognitive complexity refers to the level of difficulty in understanding a piece of code. 

Cyclomatic complexity is one of the factors that increases cognitive complexity: dense branching makes it overwhelming for developers to process the information, which makes the overall logic of the code harder to understand.

Difficulty in Onboarding 

Codebases with high cyclomatic complexity make onboarding difficult for new developers and team members. The learning curve becomes steeper, and they need more time and effort to understand the code and become productive. It also increases the risk that they misinterpret the logic or overlook critical paths. 

Higher Risks of Defects

More complex code leads to more misunderstandings, which results in more defects in the codebase. Complex code is also more error-prone because it hinders adherence to coding standards and best practices. 

Rise in Maintenance Effort 

In a complex codebase, the software development team may struggle to grasp the full impact of their changes, which introduces new errors and slows down the process. It also creates ripple effects: isolating changes becomes difficult because one modification can impact multiple areas of the application. 

How to Reduce Cyclomatic Complexity? 

Function Decomposition

  • Single Responsibility Principle (SRP): This principle states that each module or function should have a defined responsibility and one reason to change. If a function is responsible for multiple tasks, it can result in bloated and hard-to-maintain code. 
  • Modularity: This means dividing large, complex functions into smaller, modular units so that each piece serves a focused purpose. It makes individual functions easier to understand, test, and modify without affecting other parts of the code.
  • Cohesion: Cohesion means keeping related code together within a function or module. When related operations are grouped, cohesion is high, which helps readability and maintainability.
  • Coupling: Avoid excessive dependencies between modules. Low coupling reduces complexity and makes each module more self-contained, enabling changes without affecting other parts of the system.

Conditional Logic Simplification

  • Guard Clauses: Developers must implement guard clauses to exit from a function as soon as a condition is met. This avoids deep nesting and enhances the readability and simplicity of the main logic of the function. 
  • Boolean Expressions: Use De Morgan's laws to simplify Boolean expressions and reduce the complexity of conditions. For example, rewriting !(A && B) as !A || !B can sometimes make the code easier to understand.
  • Conditional Expressions: Consider using ternary operators or switch statements where appropriate. This will condense complex conditional branches into more concise expressions which further enhance their readability and reduce code size.
  • Flag Variables: Avoid unnecessary flag variables that track control flow. Developers should restructure the logic to eliminate these flags which can lead to simpler and cleaner code.
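To illustrate the guard-clause idea, the two functions below behave identically, but the second exits early on each failing condition, keeping the main path flat and easier to follow (the order-shipping scenario is hypothetical):

```python
def ship_order_nested(order):
    """Deeply nested version: each condition adds another level."""
    if order is not None:
        if order["paid"]:
            if order["in_stock"]:
                return "shipped"
            else:
                return "backordered"
        else:
            return "awaiting payment"
    return "no order"

def ship_order_guarded(order):
    """Guard clauses exit as soon as a condition fails."""
    if order is None:
        return "no order"
    if not order["paid"]:
        return "awaiting payment"
    if not order["in_stock"]:
        return "backordered"
    return "shipped"
```

Both have the same number of decision points, but the guarded version reads linearly and nests no deeper than one level.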

Loop Optimization

  • Loop Unrolling: Expand the loop body to perform multiple operations in each iteration. This is useful for loops with a small number of iterations as it reduces loop overhead and improves performance.
  • Loop Fusion: When two loops iterate over the same data, you may be able to combine them into a single loop. This enhances performance by reducing the number of loop iterations and boosting data locality.
  • Loop Strength Reduction: Consider replacing costly operations in loops with less expensive ones, such as using addition instead of multiplication where possible. This will reduce the computational cost within the loop.
  • Loop Invariant Code Motion: Prevent redundant computation by moving calculations that do not change with each loop iteration outside of the loop. 
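A small example of loop-invariant code motion: both functions below produce the same result, but the second computes the invariant denominator once instead of on every iteration:

```python
import math

def normalize_naive(values):
    """Recomputes the invariant denominator on every iteration."""
    result = []
    for v in values:
        result.append(v / math.sqrt(sum(x * x for x in values)))
    return result

def normalize_hoisted(values):
    """Loop-invariant code motion: the denominator is computed once."""
    norm = math.sqrt(sum(x * x for x in values))
    return [v / norm for v in values]
```

Besides the obvious performance win, the hoisted version makes it explicit that the denominator does not depend on the loop variable.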

Code Refactoring

  • Extract Method: Move repetitive or complex code segments into separate functions. This simplifies the original function, reduces complexity, and makes code easier to reuse.
  • Introduce Explanatory Variables: Use intermediate variables to hold the results of complex expressions. This can make code more readable and allow others to understand its purpose without deciphering complex operations.
  • Replace Magic Numbers with Named Constants: Magic numbers are hard-coded numbers in code. Instead of directly using them, create symbolic constants for hard-coded values. It makes it easy to change the value at a later stage and improves the readability and maintainability of the code.
  • Simplify Complex Expressions: Break down long, complex expressions into smaller, more digestible parts to improve readability and reduce cognitive load on the reader.
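For example, named constants and explanatory variables can replace magic numbers in a hypothetical payroll calculation, so each intermediate value has a readable name:

```python
SECONDS_PER_HOUR = 3600
OVERTIME_THRESHOLD_HOURS = 40
OVERTIME_MULTIPLIER = 1.5

def weekly_pay(seconds_worked, hourly_rate):
    """Pay for one week, with overtime beyond the threshold."""
    hours = seconds_worked / SECONDS_PER_HOUR
    regular_hours = min(hours, OVERTIME_THRESHOLD_HOURS)
    overtime_hours = max(hours - OVERTIME_THRESHOLD_HOURS, 0)
    return (regular_hours * hourly_rate
            + overtime_hours * hourly_rate * OVERTIME_MULTIPLIER)
```

If the overtime threshold changes, the edit happens in exactly one place, and no reader has to guess what 3600 or 1.5 meant.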

Design Patterns

  • Strategy Pattern: This pattern allows developers to encapsulate algorithms within separate classes. By delegating responsibilities to these classes, you can avoid complex conditional statements and reduce overall code complexity.
  • State Pattern: When an object has multiple states, the State Pattern can represent each state as a separate class. This simplifies conditional code related to state transitions.
  • Observer Pattern: The Observer Pattern helps decouple components by allowing objects to communicate without direct dependencies. This reduces complexity by minimizing the interconnectedness of code components.
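A minimal sketch of the Strategy pattern in this spirit, with each pricing calculation encapsulated in its own class instead of a chain of if/elif branches (the shipping scenario and class names are illustrative):

```python
class FlatRate:
    """One strategy: a fixed shipping cost."""
    def cost(self, weight_kg):
        return 5.0

class PerKilo:
    """Another strategy: cost proportional to weight."""
    def cost(self, weight_kg):
        return 2.0 * weight_kg

class Quote:
    """The context delegates to whichever strategy it was given."""
    def __init__(self, strategy):
        self.strategy = strategy

    def total(self, weight_kg):
        return self.strategy.cost(weight_kg)
```

Adding a new pricing rule means adding a class, not threading another branch through existing conditional logic.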

Code Analysis Tools

  • Static Code Analyzers: Static code analysis tools like Typo or SonarQube can automatically highlight areas of high complexity, unused code, or potential errors, allowing developers to identify and address complex code proactively.
  • Code Coverage Tools: Code coverage measures the percentage of a codebase exercised by automated tests. Tools like Typo measure coverage and highlight untested areas, helping ensure that tests cover a significant portion of the code and surfacing untested paths and potential bugs.

Other Ways to Reduce Cyclomatic Complexity 

  • Identify and remove dead code to simplify the codebase and reduce maintenance efforts. This keeps the code clean, improves performance, and reduces potential confusion.
  • Consolidate duplicate code into reusable functions to reduce redundancy and improve consistency. This makes it easier to update logic in one place and avoid potential bugs from inconsistent changes.
  • Continuously improve code structure by refactoring regularly to enhance readability and maintainability and reduce technical debt. This ensures the codebase stays efficient and adaptable to future needs.
  • Perform peer reviews to catch issues early, promote coding best practices, and maintain high code quality. Code reviews encourage knowledge sharing and help align the team on coding standards.
  • Write comprehensive unit tests to ensure code functions correctly and to support easier refactoring in the future. They provide a safety net that makes it easier to identify issues when changes are made.

Typo - An Automated Code Review Tool

Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.

Key Features:

  • Supports the top 8 languages, including C++ and C#.
  • Understands the context of the code and fixes issues accurately.
  • Optimizes code efficiently.
  • Provides automated debugging with detailed explanations.
  • Standardizes code and reduces the risk of security breaches.


Conclusion 

The cyclomatic complexity metric is critical in software engineering. Reducing cyclomatic complexity improves code maintainability, readability, and simplicity. By implementing the strategies above, software engineering teams can reduce complexity and create a more streamlined codebase. Tools like Typo's automated code review also help by identifying complexity issues early and providing quick fixes, enhancing overall code quality.


Impact of Low Code Quality on Software Development

Maintaining a balance between speed and code quality is a challenge for every developer. 

Deadlines and fast-paced projects often push teams to prioritize rapid delivery, leading to compromises in code quality that can have long-lasting consequences. While cutting corners might seem efficient in the moment, it often results in technical debt and a codebase that becomes increasingly difficult to manage.

The hidden costs of poor code quality are real, impacting everything from development cycles to team morale. This blog delves into the real impact of low code quality, its common causes, and actionable solutions tailored to developers looking to elevate their code standards.

Understanding the Core Elements of Code Quality

Code quality goes beyond writing functional code. High-quality code is characterized by readability, maintainability, scalability, and reliability. Ensuring these aspects helps the software evolve efficiently without causing long-term issues for developers. Let’s break down these core elements further:

  • Readability: Code that follows consistent formatting, uses meaningful variable and function names, and includes clear inline documentation or comments. Readable code allows any developer to quickly understand its purpose and logic.
  • Maintainability: Modular code that is organized with reusable functions and components. Maintainability ensures that code changes, whether for bug fixes or new features, don’t introduce cascading errors throughout the codebase.
  • Scalability: Code designed with an architecture that supports growth. This involves using design patterns that decouple different parts of the code and make it easier to extend functionality.
  • Reliability: Robust code that has been tested under different scenarios to minimize bugs and unexpected behavior.

The Real Costs of Low Code Quality

Low code quality can significantly impact various facets of software development. Below are key issues developers face when working with substandard code:

Sluggish Development Cycles

Low-quality code often involves unclear logic and inconsistent practices, making it difficult for developers to trace bugs or implement new features. This can turn straightforward tasks into hours of frustrating work, delaying project milestones and adding stress to sprints.

Escalating Technical Debt

Technical debt accrues when suboptimal code is written to meet short-term goals. While it may offer an immediate solution, it complicates future updates. Developers need to spend significant time refactoring or rewriting code, which detracts from new development and wastes resources.

Bug-Prone Software

Substandard code tends to harbor hidden bugs that may not surface until they affect end-users. These bugs can be challenging to isolate and fix, leading to patchwork solutions that degrade the codebase further over time.

Collaboration Friction

When multiple developers contribute to a project, low code quality can cause misalignment and confusion. Developers might spend more time deciphering each other’s work than contributing to new development, leading to decreased team efficiency and a lower-quality product.

Scalability Bottlenecks

A codebase that doesn’t follow proper architectural principles will struggle when scaling. For instance, tightly coupled components make it hard to isolate and upgrade parts of the system, leading to performance issues and reduced flexibility.

Developer Burnout

Constantly working with poorly structured code is taxing. The mental effort needed to debug or refactor a convoluted codebase can demoralize even the most passionate developers, leading to frustration, reduced job satisfaction, and burnout.

Root Causes of Low Code Quality

Understanding the reasons behind low code quality helps in developing practical solutions. Here are some of the main causes:

Pressure to Deliver Rapidly

Tight project deadlines often push developers to prioritize quick delivery over thorough, well-thought-out code. While this may solve immediate business needs, it sacrifices code quality and introduces problems that require significant time and resources to fix later.

Lack of Unified Coding Standards

Without established coding standards, developers may approach problems in inconsistent ways. This lack of uniformity leads to a codebase that’s difficult to maintain, read, and extend. Coding standards help enforce best practices and maintain consistent formatting and documentation.

Insufficient Code Reviews

Skipping code reviews means missing opportunities to catch errors, bad practices, or code smells before they enter the main codebase. Peer reviews help maintain quality, share knowledge, and align the team on best practices.

Limited Testing Strategies

A codebase without sufficient testing coverage is bound to have undetected errors. Tests, especially automated ones, help identify issues early and ensure that any code changes do not break existing features.

Overreliance on Low-Code/No-Code Solutions

Low-code platforms offer rapid development but often generate code that isn’t optimized for long-term use. This code can be bloated, inefficient, and difficult to debug or extend, causing problems when the project scales or requires custom functionality.

Comprehensive Solutions to Improve Code Quality

Addressing low code quality requires deliberate, consistent effort. Here are expanded solutions with practical tips to help developers maintain and improve code standards:

  1. Adopt Rigorous Code Reviews

Code reviews should be an integral part of the development process. They serve as a quality checkpoint to catch issues such as inefficient algorithms, missing documentation, or security vulnerabilities. To make code reviews effective:

  • Create a structured code review checklist that focuses on readability, adherence to coding standards, potential performance issues, and proper error handling.
  • Foster a culture where code reviews are seen as collaborative learning opportunities rather than criticism.
  • Implement tools like GitHub’s review features or Bitbucket for in-depth code discussions.

  2. Integrate Linters and Static Analysis Tools

Linters help maintain consistent formatting and detect common errors automatically. Tools like ESLint (JavaScript), RuboCop (Ruby), and Pylint (Python) check your code for syntax issues and adherence to coding standards. Static analysis tools go a step further by analyzing code for complex logic, performance issues, and potential vulnerabilities. To optimize their use:

  • Configure these tools to align with your project’s coding standards.
  • Run these tools in pre-commit hooks with Husky or integrate them into your CI/CD pipelines to ensure code quality checks are performed automatically.

  3. Prioritize Comprehensive Testing

Adopt a multi-layered testing strategy to ensure that code is reliable and bug-free:

  • Unit Tests: Write unit tests for individual functions or methods to verify they work as expected. Frameworks like Jest for JavaScript, PyTest for Python, and JUnit for Java are popular choices.
  • Integration Tests: Ensure that different parts of your application work together smoothly. Tools like Cypress and Selenium can help automate these tests.
  • End-to-End Tests: Simulate real user interactions to catch potential issues that unit and integration tests might miss.
  • Integrate testing into your CI/CD pipeline so that tests run automatically on every code push or pull request.
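As a sketch of the unit-test layer, here is what assert-based tests look like in the PyTest style (the `slugify` helper is a hypothetical example, not from the frameworks named above):

```python
def slugify(title):
    """Convert a title to a URL-friendly slug (example code under test)."""
    words = "".join(c.lower() if c.isalnum() else " " for c in title).split()
    return "-".join(words)


# PyTest auto-discovers functions named test_*; each assert is one expectation.
def test_basic_title():
    assert slugify("Hello World") == "hello-world"


def test_punctuation_is_stripped():
    assert slugify("C++ & C#: A Comparison!") == "c-c-a-comparison"


def test_empty_string():
    assert slugify("") == ""
```

Small, behavior-focused tests like these are what make later refactoring safe: if the implementation changes but the asserts still pass, the observable behavior is preserved.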

  4. Dedicate Time for Refactoring

Refactoring helps improve code structure without changing its behavior. Regularly refactoring prevents code rot and keeps the codebase maintainable. Practical strategies include:

  • Identify “code smells” such as duplicated code, overly complex functions, or tightly coupled modules.
  • Apply design patterns where appropriate, such as Factory or Observer, to simplify complex logic.
  • Use IDE refactoring tools like IntelliJ IDEA’s refactor feature or Visual Studio Code extensions to speed up the process.

  5. Create and Enforce Coding Standards

Having a shared set of coding standards ensures that everyone on the team writes code with consistent formatting and practices. To create effective standards:

  • Collaborate with the team to create a coding guideline that includes best practices, naming conventions, and common pitfalls to avoid.
  • Document the guideline in a format accessible to all team members, such as a README file or a Confluence page.
  • Conduct periodic training sessions to reinforce these standards.

  6. Leverage Typo for Enhanced Code Quality

Typo can be a game-changer for teams looking to automate code quality checks and streamline reviews. It offers a range of features:

  • Automated Code Review: Detects common issues, code smells, and inconsistencies, supplementing manual code reviews.
  • Detailed Reports: Provides actionable insights, allowing developers to understand code weaknesses and focus on the most critical issues.
  • Seamless Collaboration: Enables teams to leave comments and feedback directly on code, enhancing peer review discussions and improving code knowledge sharing.
  • Continuous Monitoring: Tracks changes in code quality over time, helping teams spot regressions early and maintain consistent standards.

  7. Enhance Knowledge Sharing and Training

Keeping the team informed on best practices and industry trends strengthens overall code quality. To foster continuous learning:

  • Organize workshops, code review sessions, and tech talks where team members share insights or recent challenges they overcame.
  • Encourage developers to participate in webinars, online courses, and conferences.
  • Create a mentorship program where senior developers guide junior members through complex code and teach them best practices.

  8. Strategically Use Low-Code Tools

Low-code tools should be leveraged for non-critical components or rapid prototyping, but ensure that the code generated is thoroughly reviewed and optimized. For more complex or business-critical parts of a project:

  • Supplement low-code solutions with custom coding to improve performance and maintainability.
  • Regularly review and refactor code generated by these platforms to align with project standards.

Commit to Continuous Improvement

Improving code quality is a continuous process that requires commitment, collaboration, and the right tools. Developers should assess current practices, adopt new ones gradually, and leverage automated tools like Typo to streamline quality checks. 

By incorporating these strategies, teams can create a strong foundation for building maintainable, scalable, and high-quality software. Investing in code quality now paves the way for sustainable development, better project outcomes, and a healthier, more productive team.

Sign up for a quick demo with Typo to learn more!

Why the JIRA Dashboard Is Insufficient: Time for JIRA-Git Data Integration

Introduction

In today's fast-paced and rapidly evolving software development landscape, effective project management is crucial for engineering teams striving to meet deadlines, deliver quality products, and maintain customer satisfaction. Project management not only ensures that tasks are completed on time but also optimizes resource allocation, enhances team collaboration, and improves communication across all stakeholders. A key tool that has gained prominence in this domain is JIRA, which is widely recognized for its robust features tailored for agile project management.

However, while JIRA offers numerous advantages, such as customizable workflows, detailed reporting, and integration capabilities with other tools, it also comes with limitations that can hinder its effectiveness. For instance, teams relying solely on JIRA dashboard gadgets may find themselves missing critical contextual data from the development process. They may obtain a snapshot of project statuses but fail to appreciate the underlying issues impacting progress. Understanding both the strengths and weaknesses of JIRA dashboard gadgets is vital for engineering managers to make informed decisions about their project management strategies.

The Limitations of JIRA Dashboard Gadgets

Lack of Contextual Data

JIRA dashboard gadgets primarily focus on issue tracking and project management, often missing critical contextual data from the development process. While JIRA can show the status of tasks and issues, it does not provide insights into the actual code changes, commits, or branch activities that contribute to those tasks. This lack of context can lead to misunderstandings about project progress and team performance. For example, a task may be marked as "in progress," but without visibility into the associated Git commits, managers may not know if the team is encountering blockers or if significant progress has been made. This disconnect can result in misaligned expectations and hinder effective decision-making.

Static Information

JIRA dashboards built from gadgets such as the road map or sprint burndown can present a static view of project progress that may not reflect real-time changes in the development process. For instance, while a sprint burndown gadget may indicate that a task is "done," it does not account for any recent changes or updates made in the codebase. This static nature can hinder proactive decision-making, as managers may not have access to the most current information about the project's health. Additionally, relying on the historical data behind gadgets like issue statistics creates a lag in responding to emerging issues. In a rapidly changing development environment, the ability to react quickly to new information is crucial for maintaining project momentum, which is why teams need to move beyond the default road map and burndown chart gadgets.

Limited Collaboration Insights

Collaboration is essential in software development, yet JIRA dashboards often do not capture the collaborative efforts of the team. Metrics such as code reviews, pull requests, and team discussions are crucial for understanding how well the team is working together. Without this information, managers may overlook opportunities for improvement in team dynamics and communication. For example, if a team is actively engaged in code reviews but this activity is not reflected in any JIRA gadget, managers may mistakenly assume that collaboration is lacking. This oversight can lead to missed opportunities to foster a more cohesive team environment and improve overall productivity.

Overemphasis on Individual Metrics

JIRA dashboards can sometimes encourage a focus on individual performance metrics rather than team outcomes. This can foster an environment of unhealthy competition, where developers prioritize personal achievements over collaborative success. Such an approach can undermine team cohesion and lead to burnout. When individual metrics are emphasized, developers may feel pressured to complete tasks quickly, potentially sacrificing code quality and collaboration. This focus on personal performance can create a culture where teamwork and knowledge sharing are undervalued, ultimately hindering project success.

Inflexibility in Reporting

JIRA dashboard layouts often rely on predefined metrics and reports, which may not align with the unique needs of every project or team. This inflexibility can result in a lack of relevant insights that are critical for effective project management. For example, a team working on a highly innovative project may require different metrics than a team maintaining legacy software. The inability to customize reports can lead to frustration and a sense of disconnect from the data being presented.

The Power of Integrating Git Data with JIRA

Integrating Git data with JIRA provides a more holistic view of project performance and developer productivity. Here’s how this integration can enhance insights:

Real-Time Visibility into Development Activity

By connecting Git repositories with JIRA, engineering managers can gain real-time visibility into commits, branches, and pull requests associated with JIRA issues. This integration allows teams to see the actual development work being done, providing context to the status of tasks on the JIRA dashboard. For instance, if a developer submits a pull request that relates to a specific JIRA ticket, the project manager instantly knows that work is ongoing, fostering transparency. Additionally, automated notifications for changes in the codebase linked to JIRA issues keep everyone updated without having to dig through multiple tools. This integrated approach ensures that management has a clear understanding of actual progress rather than relying on static task statuses.

Enhanced Collaboration and Communication

Integrating Git data with JIRA facilitates better collaboration among team members. Developers can reference JIRA issues in their commit messages, making it easier for the team to track changes related to specific tasks. This transparency fosters a culture of collaboration, as everyone can see how their work contributes to the overall project goals. Moreover, by having a clear link between code changes and JIRA issues, team members can engage in more meaningful discussions during stand-ups and retrospectives. This enhanced communication can lead to improved problem-solving and a stronger sense of shared ownership over the project.

Improved Risk Management

With integrated Git and JIRA data, engineering managers can identify potential risks more effectively. By monitoring commit activity and pull requests alongside JIRA issue statuses, managers can spot trends and anomalies that may indicate project delays or technical challenges. For example, if there is a sudden decrease in commit activity for a specific task, it may signal that the team is facing challenges or blockers. This proactive approach allows teams to address issues before they escalate, ultimately improving project outcomes and reducing the likelihood of last-minute crises.

Comprehensive Reporting and Analytics

The combination of JIRA and Git data enables more comprehensive reporting and analytics. Engineering managers can analyze not only task completion rates but also the underlying development activity that drives those metrics. This deeper understanding can inform better decision-making and strategic planning for future projects. For instance, by analyzing commit patterns and pull request activity, managers can identify trends in team performance and areas for improvement. This data-driven approach allows for more informed resource allocation and project planning, ultimately leading to more successful outcomes.

Best Practices for Integrating Git Data with JIRA

To maximize the benefits of integrating Git data with JIRA, engineering managers should consider the following best practices:

Select the Right Tools

Choose integration tools that fit your team's specific needs. Tools like Typo can facilitate the connection between Git and JIRA smoothly. Additionally, JIRA integrates directly with several source control systems, allowing for automatic updates and real-time visibility.

Sprint analysis in Typo

If you’re ready to enhance your project delivery speed and predictability, consider integrating Git data with your JIRA dashboards. Explore Typo! We can help you do this in a few clicks & make it one of your favorite dashboards.

Establish Commit Message Guidelines

Encourage your team to adopt consistent commit message guidelines. Including JIRA issue keys in commit messages will create a direct link between the code change and the JIRA issue. This practice not only enhances traceability but also aids in generating meaningful reports and insights. For example, a commit message like 'JIRA-123: Fixed the login issue' can help managers quickly identify relevant commits related to specific tasks.
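The convention can even be enforced mechanically, for example in a Git commit-msg hook. The sketch below only shows the core check; a real hook script would read the message from the file Git passes as its first argument and exit non-zero to reject the commit:

```python
import re

# Matches an uppercase JIRA project key followed by an issue number at the
# start of the message, e.g. "JIRA-123" or "PAY-42". The key format is a
# common convention; adjust the pattern to your project keys.
ISSUE_KEY = re.compile(r"^[A-Z][A-Z0-9]+-\d+")


def has_issue_key(message):
    """Return True if the commit message starts with a JIRA issue key."""
    return bool(ISSUE_KEY.match(message))


print(has_issue_key("JIRA-123: Fixed the login issue"))  # True
print(has_issue_key("fixed the login issue"))            # False
```

Rejecting non-conforming messages at commit time is far cheaper than cleaning up untraceable history later.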

Automate Workflows

Leverage automation features available in both JIRA and Git platforms to streamline the integration process. For instance, set up automated triggers that update JIRA issues based on events in Git, such as moving a JIRA issue to 'In Review' once a pull request is submitted in Git. This reduces manual updates and alleviates the administrative burden on the team.
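As a sketch of what such a trigger prepares under the hood, the function below builds the URL and body for JIRA's issue-transition REST endpoint. The base URL and transition ID are placeholders for illustration, and the actual authenticated POST (fired when the Git host's pull-request webhook arrives) is deliberately omitted:

```python
def build_transition_request(base_url, issue_key, transition_id):
    """Build the URL and JSON body for a JIRA issue transition.

    Sketch only: a real automation would POST this payload with
    authentication when a "pull request opened" webhook event arrives,
    moving the linked issue to a state such as 'In Review'.
    """
    url = f"{base_url}/rest/api/2/issue/{issue_key}/transitions"
    body = {"transition": {"id": str(transition_id)}}
    return url, body


url, body = build_transition_request(
    "https://example.atlassian.net", "JIRA-123", 31)
print(url)   # https://example.atlassian.net/rest/api/2/issue/JIRA-123/transitions
print(body)  # {'transition': {'id': '31'}}
```

Transition IDs are workflow-specific, so an automation normally looks them up once per project rather than hard-coding them.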

Train Your Team

Providing adequate training to your team ensures everyone understands the integration process and how to effectively use both tools together. Conduct workshops or create user guides that outline the key benefits of integrating Git and JIRA, along with tips on how to leverage their combined functionalities for improved workflows.

Monitor and Adapt

Implement regular check-ins to assess the effectiveness of the integration. Gather feedback from team members on how well the integration is functioning and identify any pain points. This ongoing feedback loop allows you to make incremental improvements, ensuring the integration continues to meet the needs of the team.

Utilize Dashboards for Visualization

Create comprehensive dashboards that visually represent combined metrics from both Git and JIRA. Tools like JIRA dashboards, Confluence, or custom-built data visualization platforms can provide a clearer picture of project health. Metrics can include the number of active pull requests, average time in code review, or commit activity relevant to JIRA task completion.

Encourage Regular Code Reviews

With the changes being reflected in JIRA, create a culture around regular code reviews linked to specific JIRA tasks. This practice encourages collaboration among team members, ensures code quality, and keeps everyone aligned with project objectives. Regular code reviews also lead to knowledge sharing, which strengthens the team's overall skill set.

Case Study:

25% Improvement in Task Completion with JIRA-Git Integration at Trackso

To illustrate the benefits of integrating Git data with JIRA, let’s consider a case study of a software development team at a company called Trackso.

Background

Trackso, a remote monitoring platform for solar energy, was developing a new SaaS product with a diverse team of developers, designers, and project managers. The team relied heavily on JIRA for tracking project statuses, but they found their productivity hampered by several issues:

  • Tasks had vague statuses that did not reflect actual progress to project managers.
  • Developers frequently worked in isolation without insight into each other's code contributions.
  • They could not correlate project delays with specific code changes or reviews, leading to poor risk management.

Implementation of Git and JIRA Integration

In 2022, Trackso's engineering manager decided to integrate Git data with JIRA. They chose GitHub for version control, given its robust collaborative features. The team set up automatic links between their JIRA tickets and corresponding GitHub pull requests and standardized their commit messages to include JIRA issue keys.

Metrics of Improvement

After implementing the integration, Trackso experienced significant improvements within three months:

  • Increased Collaboration: There was a 40% increase in code review participation as developers began referencing JIRA issues in their commits, facilitating clearer discussions during code reviews.
  • Reduced Delivery Times: Average task completion times decreased by 25%, as developers could see almost immediately when tasks were being actively worked on or if blockers arose.
  • Improved Risk Management: The team reduced project delays by 30% due to enhanced visibility. For example, the integration helped identify that a critical feature was lagging due to slow pull request reviews. This enabled team leads to improve their code review workflows.
  • Boosted Developer Morale: Developer satisfaction surveys indicated that 85% of team members felt more engaged in their work due to improved communication and clarity around task statuses.

Challenges Faced

Despite these successes, Trackso faced challenges during the integration process:

  • Initial Resistance: Some team members were hesitant to adopt the new practices and dashboards. The engineering manager organized training sessions to showcase the benefits of integrating Git and JIRA, promoting buy-in from the team and easing the move away from the default dashboard.
  • Maintaining Commit Message Standards: Initially, not all developers consistently used the issue keys in their commit messages. The team revisited training sessions and created a shared repository of best practices to ensure adherence.

Conclusion

While JIRA dashboards are valuable tools for project management, they are insufficient on their own for engineering managers seeking to improve project delivery speed and predictability. By integrating Git data with JIRA, teams can gain richer insights into development activity, enhance collaboration, and manage risks more effectively. This holistic approach empowers engineering leaders to make informed decisions and drive continuous improvement in their software development processes. Embracing this integration will ultimately lead to better project outcomes and a more productive engineering culture. As the software development landscape continues to evolve, leveraging the power of both JIRA and Git data will be essential for teams looking to stay competitive and deliver high-quality products efficiently.

What Lies Ahead: Platform Engineering Predictions

As platform engineering continues to evolve, it brings both promising opportunities and potential challenges. 

As we look to the future, what changes lie ahead for Platform Engineering? In this blog, we will explore the future landscape of platform engineering and strategize how organizations can stay at the forefront of innovation.

What is Platform Engineering? 

Platform engineering is an emerging technology approach that equips software developers with all the resources they need. It acts as a bridge between development and infrastructure, simplifying complex tasks and enhancing development velocity. The primary goal is to improve developer experience, operational efficiency, and the overall speed of software delivery.

Importance of Platform Engineering

  • Platform engineering helps in creating reusable components and standardized processes. It also automates routine tasks, such as deployment, monitoring, and scaling, to speed up the development cycle.
  • Platform engineering integrates security measures into the platform to ensure that applications are built and deployed securely. This allows the platform to meet regulatory and compliance requirements.
  • It ensures efficient use of resources to balance performance and expenditure. It also provides transparency into resource usage and associated costs to help organizations make informed decisions about scaling and investment.
  • By providing tools, frameworks, and services, platform engineering empowers developers to build, deploy, and manage applications more effectively.
  • A well-engineered platform allows organizations to adapt quickly to market changes, new technologies, and customer needs.

Key Predictions for Platform Engineering

More Focus on Developer Experience

The rise of platform engineering will enhance developer experience by creating standard toolchains and workflows. Going forward, platform engineering teams will work closely with developers to understand what they need to be productive. Moreover, platform tools will be integrated and closely monitored through DevEx metrics and reports. This will enable developers to work efficiently and focus on core tasks by automating repetitive ones, further improving their productivity and satisfaction.

Rise of Internal Developer Platforms

Platform engineering is closely associated with the development of internal developer platforms (IDPs). As organizations strive for efficiency, the creation and adoption of IDPs will rise. This will streamline operations, provide a standardized way of deploying and managing applications, and reduce cognitive load, shortening time to market for new features and products and allowing developers to focus on delivering high-quality software rather than managing infrastructure.

Growing Trend of Ephemeral Environments

Modern software development demands rapid iteration. Ephemeral environments, which are temporary, on-demand environments, will be an effective way to test new features and bug fixes before they are merged into the main codebase. These environments prioritize speed, flexibility, and cost efficiency. Since they are created on demand and are short-lived, they align well with modern development practices.

Integration with Generative AI 

AI-driven tools are becoming more prevalent. Generative AI tools such as GitHub Copilot and Google Gemini will enhance capabilities such as infrastructure as code, governance as code, and security as code. This will not only automate manual tasks but also support smoother operations and improved documentation processes, driving innovation and automating developer workflows.

Extension to DevOps 

Platform engineering is a natural extension of DevOps. In the future, platform engineers will work alongside DevOps teams rather than replacing them, addressing DevOps complexity and scalability challenges. This will provide a standardized and automated approach to software development and deployment, leading to faster project initialization, reduced lead time, and increased productivity.

Shift to Product-Centric Funding Model 

Software organizations are now shifting from a project-centric model toward a product-centric funding model. When platforms are fully fledged products, they serve internal customers and require a thoughtful, user-centric approach to their ongoing development. This aligns well with a product lifecycle that is ongoing and continuous, which enhances innovation and reduces operational friction. It also decentralizes decision-making, allowing platform engineering leaders to make and adjust funding decisions for their teams.

Why Staying Updated on Platform Engineering Trends Is Crucial

  • Platform Engineering is a relatively new and evolving field. Hence, platform engineering teams need to keep up with rapid tech changes and ensure the platform remains robust and efficient.
  • Emerging technologies such as serverless computing and edge computing will shape the future of platform engineering. Moreover, artificial intelligence and machine learning also help optimize various aspects of software development, such as testing and monitoring.
  • Platform engineering trends are introducing new ways to automate processes, manage infrastructure, and optimize workflows. This enables organizations to streamline operations, reduce manual work, and focus on more strategic tasks, leading to enhanced developer productivity. 
  • A platform aims to deliver a superior user experience. When platform engineers stay ahead of the learning curve, they can implement features and improvements that improve the end-user experience, resulting in higher customer satisfaction and retention.
  • Trends in platform engineering highlight new methods for building scalable and flexible systems. It allows platform engineers to design platforms that can easily adapt to changing demands and scale without compromising performance.

Typo - An Effective Platform Engineering Tool 

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to help teams build better software faster. It integrates seamlessly into existing tool stacks, including Git-based version control, issue trackers, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo also has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it provides a 360° view of the developer experience, capturing qualitative insights and offering an in-depth view of the real issues.

Conclusion 

The future of platform engineering is both exciting and dynamic. As this field continues to evolve, staying ahead of these developments is crucial for organizations aiming to maintain a competitive edge. By embracing these predictions and proactively adapting to changes, platform engineering teams can drive innovation, improve efficiency, and deliver high-quality products that meet the demands of an ever-changing tech landscape.


SPACE Framework: Strategies for Maximum Efficiency in Developer Productivity

What if we told you that writing more code could be making you less productive? 

While equating productivity with output is tempting, developer efficiency is far more complex. The real challenge often lies in processes, collaboration, and well-being. Without addressing these, inefficiencies and burnout will inevitably follow.

You may spend hours coding, only to feel your work isn’t making an impact—projects get delayed, bug fixes drag on, and constant context switching drains your focus. The key isn’t to work harder but smarter by solving the root causes of these issues.

The SPACE framework addresses this by focusing on five dimensions: Satisfaction, Performance, Activity, Communication, and Efficiency. It helps teams improve how much they do and how effectively they work, reducing workflow friction, improving collaboration, and supporting well-being to boost long-term productivity.

Understanding the SPACE Framework

The SPACE framework addresses five key dimensions of developer productivity: satisfaction and well-being, performance, activity, collaboration and communication, and efficiency and flow. Together, these dimensions provide a comprehensive view of how developers work and where improvements can be made, beyond just measuring output.

By taking these factors into account, teams can better support developers, helping them not only produce better work but also maintain their motivation and well-being. Let’s take a closer look at each part of the framework and how it can help your team achieve a balance between productivity and a healthy work environment.

Common Developer Challenges that SPACE Addresses

In fast-paced, tech-driven environments, developers face several roadblocks to productivity:

  • Constant interruptions: Developers often deal with frequent context switching, from bug fixes to feature development to emergency support, making it hard to stay focused.
  • Cross-team collaboration: Working with multiple teams, such as DevOps, QA, and product management, can lead to miscommunication and misaligned priorities.
  • Lack of real-time feedback: Without timely feedback, developers may unknowingly veer off course or miss performance issues until much later in the development cycle.
  • Technical debt: Legacy systems and inconsistent coding practices create overhead and slow down development cycles, making it harder to move quickly on new features.

The SPACE framework helps identify and address these challenges by focusing on improving both the technical processes and the developer experience.

How SPACE can help: A Deep Dive into Each Dimension

Let’s explore how each aspect of the SPACE framework can directly impact technical teams:

Satisfaction and well-being

Developers are more productive when they feel engaged and valued. It's important to create an environment where developers are recognized for their contributions and have a healthy work-life balance. This can include feedback mechanisms, peer recognition, or even mental health initiatives. Automated tools that reduce repetitive tasks can also contribute to overall well-being.

Performance

Measuring performance should go beyond tracking the number of commits or pull requests. It’s about understanding the impact of the work being done. High-performing teams focus on delivering high-quality code and minimizing technical debt. Integrating automated testing and static code analysis tools into your CI/CD pipeline ensures code quality is maintained without manual intervention.

Activity

Focusing on meaningful developer activity, such as code reviews, tests written, and pull requests merged, helps align efforts with goals. Tools that track and visualize developer activities provide insight into how time is spent. For example, tracking code review completion times or how often changes are being pushed can reveal bottlenecks or opportunities for improving workflows.

Collaboration and communication

Effective communication across teams reduces friction in the development process. By integrating communication tools directly into the workflow, such as through Git or CI/CD notifications, teams can stay aligned on project goals. Automating feedback loops within the development process, such as notifications when builds succeed or fail, helps teams respond faster to issues.
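As a rough sketch of such an automated feedback loop, the helper below formats a CI build event into a chat-ready message. The event keys (`repo`, `branch`, `status`, `url`) are hypothetical placeholders for illustration; real CI systems and chat webhooks each define their own payload shapes.

```python
def format_build_notification(event: dict) -> str:
    """Turn a CI build event into a short chat message.

    The event shape here is a hypothetical example; adapt the keys to
    whatever your CI system actually emits.
    """
    icon = "PASS" if event["status"] == "success" else "FAIL"
    return (
        f"[{icon}] {event['repo']}@{event['branch']}: "
        f"build {event['status']} ({event['url']})"
    )
```

In practice, a pipeline step would post this string to a Slack or Teams incoming webhook so the team sees build results without leaving their chat tool.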

Efficiency and flow

Developers enter a “flow state” when they can work on a task without distractions. One way to foster this is by reducing manual tasks and interruptions. Implementing CI/CD tools that automate repetitive tasks—like build testing or deployments—frees up developers to focus on writing code. It’s also important to create dedicated time blocks where developers can work without interruptions, helping them enter and maintain that flow.

Practical Strategies for Applying the SPACE Framework

To make the SPACE framework actionable, here are some practical strategies your team can implement:

Automate repetitive tasks to enhance focus

A large portion of developer time is spent on tasks that can easily be automated, such as code formatting, linting, and testing. By introducing tools that handle these tasks automatically, developers can focus on the more meaningful aspects of their work, like writing new features or fixing bugs. This is where tools like Typo can make a difference. Typo integrates seamlessly into your development process, ensuring that code adheres to best practices by automating code quality checks and providing real-time feedback. Automating these reviews reduces the time developers spend on manual reviews and ensures consistency across the codebase.

Track meaningful metrics

Instead of focusing on superficial metrics like lines of code written or hours logged, focus on tracking activities that lead to tangible progress. Typo, for example, helps track key metrics like the number of pull requests merged, the percentage of code coverage, or the speed at which developers address code reviews. These insights give team leads a clearer picture of where bottlenecks are occurring and help teams prioritize tasks that move the project forward.

Improve communication and collaboration through integrated tools

Miscommunication between developers, product managers, and QA teams can cause delays and frustration. Integrating feedback systems that provide automatic notifications when tests fail or builds succeed can significantly improve collaboration. Typo plays a role here by streamlining communication between teams. By automatically reporting code review statuses or deployment readiness, Typo ensures that everyone stays informed without the need for constant manual updates or status meetings.

Protect flow time and eliminate disruptions

Protecting developer flow is essential to maintaining efficiency. Schedule dedicated “flow” periods where meetings are minimized, and developers can focus solely on their tasks. Typo enhances this by minimizing the need for developers to leave their coding environment to check on build statuses or review feedback. With automated reports, developers can stay updated without disrupting their focus. This helps ensure that developers can spend more time in their flow state and less time on administrative tasks.

Identify bottlenecks in your workflow

Using metrics from tools like Typo, you can gain visibility into where delays are happening in your development process—whether it's slow code review cycles, inefficient testing processes, or unclear requirements. With this insight, you can make targeted improvements, such as adjusting team structures, automating manual testing processes, or dedicating more resources to code reviews to ensure smoother project progression.

How Typo supports the SPACE framework

By using Typo as part of your workflow, you can naturally align with many of the principles of the SPACE framework:

  • Automated code quality: Typo ensures code quality through automated reviews and real-time feedback, reducing the manual effort required during code review processes.
  • Tracking developer metrics: Typo tracks key activities that are directly related to developer efficiency, helping teams stay on track with performance goals.
  • Seamless communication: With automatic notifications and updates, Typo ensures that developers and other team members stay in sync without manual reporting, which helps maintain flow and improve collaboration.
  • Supporting flow: Typo’s integrations provide updates within the development environment, reducing the need for developers to context switch between tasks.

Bringing it all together: Maximizing Developer Productivity with SPACE

The SPACE framework offers a well-rounded approach to improving developer productivity and well-being. By focusing on automating repetitive tasks, improving collaboration, and fostering uninterrupted flow time, your team can achieve more without sacrificing quality or developer satisfaction. Tools like Typo naturally fit into this process, helping teams streamline workflows, enhance communication, and maintain high code quality.

If you’re looking to implement the SPACE framework, start by automating repetitive tasks and protecting your developers' flow time. Gradually introduce improvements in collaboration and tracking meaningful activity. Over time, you’ll notice improvements in both productivity and the overall well-being of your development team.

What challenges are you facing in your development workflow? 

Share your experiences and let us know how tools like Typo could help your team implement the SPACE framework to improve productivity and collaboration!

Schedule a demo with Typo today


Measuring and Improving Developer Productivity

Developer productivity is the new buzzword across the industry. Measuring developer productivity has gone mainstream since the shift to remote work, and companies like McKinsey are publishing articles such as “Yes, you can measure software developer productivity,” causing a stir in the software development community. So we thought we should share our take on developer productivity.

We will be covering the following whats, whys, and hows of developer productivity in this piece:

  • What is developer productivity?
  • Why do we need to measure developer productivity?
  • How do we measure it at the team and individual levels, and why is it harder to measure developer productivity than sales or hiring productivity?
  • Challenges and dangers of measuring developer productivity, and what not to measure.
  • What is the impact of measuring developer productivity on engineering culture?

What is Developer Productivity?

Developer productivity refers to the effectiveness and efficiency with which software developers create high-quality software that meets business goals. It encompasses various dimensions, including code quality, development speed, team collaboration, and adherence to best practices. For engineering managers and leaders, understanding developer productivity is essential for driving continuous improvement and achieving successful project outcomes.

Key Aspects of Developer Productivity

Quality of Output: Developer productivity is not just about the quantity of code or code changes produced; it also involves the quality of that code. High-quality code is maintainable, readable, and free of significant bugs, which ultimately contributes to the overall success of a project.

Development Speed: This aspect measures how quickly developers can deliver features, fixes, and updates (usually referred to as developer velocity). While velocity is important, it should not come at the expense of code quality. Effective engineering teams strike a balance between delivering quickly and maintaining high standards.

Collaboration and Team Dynamics: Successful software development relies heavily on effective teamwork. Collaboration tools and practices that foster communication and knowledge sharing can significantly enhance developer productivity. Engineering managers should prioritize creating a collaborative environment that encourages teamwork.

Adherence to Best Practices for Outcomes: Following coding standards, conducting code review, and implementing testing protocols are essential for maintaining development productivity. These practices ensure that developers produce high-quality work consistently, which can lead to improved project outcomes.

Wanna Improve your Dev Productivity?

Why do we need to measure dev productivity?

We all know that no one loves to be measured, but CEOs and CFOs have an undying love for measuring the ROI of their teams, which we can't ignore. The higher the development productivity, the higher the ROI. However, measuring developer productivity is also essential for engineering managers and leaders who want to optimize their teams' performance: we can't improve something we don't measure.

Understanding how effectively developers work can lead to improved project outcomes, better resource allocation, and enhanced team morale. In this section, we will explore the key reasons why measuring developer productivity is crucial for engineering management.

Enhancing Team Performance

Measuring developer productivity allows engineering managers to identify strengths and weaknesses within their teams. By analyzing productivity metrics, leaders can pinpoint areas where developers excel and where they may need additional support or resources. This insight enables managers to tailor training programs, allocate tasks more effectively, and foster a culture of continuous improvement.

Team's insights in Typo

Driving Business Outcomes

Developer productivity is directly linked to business success. By measuring development team productivity, managers can assess how effectively their teams deliver features, fix bugs, and contribute to overall project goals. Understanding productivity levels helps align development efforts with business objectives, ensuring that the team is focused on delivering value that meets customer needs.

Improving Resource Allocation

Effective measurement of developer productivity enables better resource allocation. By understanding how much time and effort are required for various tasks, managers can make informed decisions about staffing, project timelines, and budget allocation. This ensures that resources are utilized efficiently, minimizing waste and maximizing output.

Fostering a Positive Work Environment

Measuring developer productivity can also contribute to a positive work environment. By recognizing high-performing teams and individuals, managers can boost morale and motivation. Additionally, understanding productivity trends can help identify burnout or dissatisfaction, allowing leaders to address issues proactively and create a healthier workplace culture.

Developer surveys insights in Typo

Facilitating Data-Driven Decisions

In today’s fast-paced software development landscape, data-driven decision-making is essential. Measuring developer productivity provides concrete data that can inform strategic decisions. Whether it's choosing new tools, adopting agile methodologies, or implementing process changes, having reliable developer productivity metrics allows managers to make informed choices that enhance team performance.

Investment distribution in Typo

Encouraging Collaboration and Communication

Regularly measuring productivity can highlight the importance of collaboration and communication within teams. By assessing metrics related to teamwork, such as code reviews and pair programming sessions, managers can encourage practices that foster collaboration. This improves not only productivity but also the overall developer experience by strengthening team dynamics and knowledge sharing.

Ultimately, understanding developer experience and measuring developer productivity leads to better outcomes for both the team and the organization as a whole.

How do we measure Developer Productivity?

Measuring developer productivity is essential for engineering managers and leaders who want to optimize their teams' performance.

Strategies for Measuring Productivity

Focus on Outcomes, Not Outputs: Shift the emphasis from measuring outputs like lines of code to focusing on outcomes that align with business objectives. This encourages developers to think more strategically about the impact of their work.

Measure at the Team Level: Assess productivity at the team level rather than at the individual level. This fosters team collaboration, knowledge sharing, and a focus on collective goals rather than individual competition.

Incorporate Qualitative Feedback: Balance quantitative metrics with qualitative feedback from developers through surveys, interviews, and regular check-ins. This provides valuable context and helps identify areas for improvement.

Encourage Continuous Improvement: Position productivity measurement as a tool for continuous improvement rather than a means of evaluation. Encourage developers to use metrics to identify areas for growth and work together to optimize workflows and development processes.

Lead by Example: As engineering managers and leaders, model the behavior you want to see in your team & team members. Prioritize work-life balance, encourage risk-taking and innovation, and create an environment where developers feel supported and empowered.

Measuring developer productivity involves assessing both team and individual contributions to understand how effectively developers deliver value through their development processes. Here’s how to approach measuring productivity at both levels:

Team-Level Developer Productivity

Measuring productivity at the team level provides a more comprehensive view of how collaborative efforts contribute to project success. Here are some effective metrics:

DORA Metrics

The DevOps Research and Assessment (DORA) metrics are widely recognized for evaluating team performance. Key metrics include:

  • Deployment Frequency: How often the software engineering team releases code to production.
  • Lead Time for Changes: The time taken for committed code to reach production.
  • Change Failure Rate: The percentage of deployments that result in failures.
  • Time to Restore Service: The time taken to recover from a failure.
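To make these definitions concrete, here is a minimal sketch of how the four metrics could be computed from a list of deployment records. The record fields (`deployed_at`, `committed_at`, `failed`, `restored_at`) are assumptions for illustration, not any particular tool's schema.

```python
from datetime import datetime, timedelta

def dora_metrics(deployments: list[dict], period_days: int) -> dict:
    """Compute simple DORA metrics from deployment records.

    Each record is assumed to look like:
      {"deployed_at": datetime, "committed_at": datetime,
       "failed": bool, "restored_at": datetime | None}
    """
    n = len(deployments)
    failures = [d for d in deployments if d["failed"]]
    lead_times = [d["deployed_at"] - d["committed_at"] for d in deployments]
    restore_times = [
        d["restored_at"] - d["deployed_at"] for d in failures if d["restored_at"]
    ]
    return {
        "deployment_frequency_per_day": n / period_days,
        # rough (upper) median of commit-to-deploy lead time
        "median_lead_time_hours": (
            sorted(lead_times)[n // 2].total_seconds() / 3600 if n else None
        ),
        "change_failure_rate": len(failures) / n if n else None,
        "mean_time_to_restore_hours": (
            sum(restore_times, timedelta()).total_seconds() / 3600 / len(restore_times)
            if restore_times else None
        ),
    }
```

Real pipelines would pull these records from deployment logs or the CI/CD system rather than building them by hand, but the arithmetic is the same.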

Issue Cycle Time

This metric measures the time taken from the start of work on a task to its completion, providing insights into the efficiency of the software development process.
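A minimal sketch of that calculation, assuming each issue records when work started and when it was completed (field names are illustrative, not a specific issue tracker's schema):

```python
from datetime import datetime

def average_cycle_time_days(issues: list[dict]) -> float:
    """Average days from work start to completion across issues.

    Each issue is assumed to carry 'started_at' and 'completed_at' datetimes.
    """
    durations = [
        (i["completed_at"] - i["started_at"]).total_seconds() / 86400
        for i in issues
    ]
    return sum(durations) / len(durations)
```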

Team Satisfaction and Engagement

Surveys and feedback mechanisms can gauge team morale and satisfaction, which are critical for long-term productivity.

Collaboration Metrics

Assessing the frequency and quality of code reviews, pair programming sessions, and communication can provide insights into how well the software engineering team collaborates.

Individual Developer Productivity

While team-level metrics are crucial, individual developer productivity also matters, particularly for performance evaluations and personal development. Here are some metrics to consider:

  • Pull Requests and Code Reviews: Tracking the number of pull requests submitted and the quality of code reviews can provide insights into an individual developer's engagement and effectiveness.
  • Commit Frequency: Measuring how often a developer commits code can indicate their active participation in projects, though it should be interpreted with caution to avoid incentivizing quantity over quality.
  • Personal Goals and Outcomes: Setting individual objectives related to project deliverables and tracking their completion can help assess individual productivity in a meaningful way.
  • Skill Development: Encouraging developers to pursue training and certifications can enhance their skills, contributing to overall productivity.
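As one way to gather such signals, the sketch below aggregates commit counts per author per ISO week from (author, date) pairs, which could be extracted from `git log`. As the caveats above note, treat the result as a conversation starter, not a ranking.

```python
from collections import Counter
from datetime import datetime

def weekly_commit_counts(commits: list[tuple[str, datetime]]) -> Counter:
    """Count commits per (author, ISO week).

    The (author, date) pairs could be parsed from
    `git log --pretty=format:'%an|%aI'`; they are passed in directly
    here to keep the sketch self-contained.
    """
    counts: Counter = Counter()
    for author, when in commits:
        year, week, _ = when.isocalendar()
        counts[(author, f"{year}-W{week:02d}")] += 1
    return counts
```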

Measuring developer productivity presents unique challenges compared to the more straightforward metrics used in sales or hiring. Here are some reasons why:

  • Complexity of Work: Software development involves intricate problem-solving, creativity, and collaboration, making it difficult to quantify contributions accurately. Unlike sales, where metrics like revenue generated are clear-cut, developer productivity encompasses qualitative aspects that are much harder to measure.
  • Collaborative Nature: Development work is highly collaborative. Individual contributions often intertwine with team efforts, making it challenging to isolate the impact of one developer's work. In sales, individual performance is typically more straightforward to assess from personal sales figures.
  • Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics distorts the understanding of a developer's actual contributions.
  • Varied Work Activities: Developers engage in various activities beyond coding, including debugging, code reviews, and meetings. These essential tasks are often overlooked in productivity measurements, whereas sales roles typically have more consistent and quantifiable activities.
  • Evolving Tools and Processes: The productivity tools and methodologies used in software development are constantly changing, making it difficult to establish consistent metrics. In contrast, sales processes tend to be more stable, allowing easier benchmarking and comparison.

By employing a balanced approach that considers both quantitative and qualitative factors, supported by the right developer productivity tools, engineering leaders can gain valuable insights into their teams' productivity and foster an environment of continuous improvement and a better developer experience.

Challenges of Measuring Developer Productivity: What Not to Measure

Measuring developer productivity is a critical task for engineering managers and leaders, yet it comes with its own set of challenges and potential pitfalls. Understanding these challenges is essential to avoid the dangers of misinterpretation and to ensure that developer productivity metrics genuinely reflect the contributions of developers. In this section, we will explore the challenges of measuring developer productivity and highlight what not to measure.

Challenges of Measuring Developer Productivity

  • Complexity of Software Development: Software development is inherently complex, involving creativity, problem-solving, and collaboration. Unlike more straightforward fields like sales, where performance can be quantified through clear metrics (e.g., sales volume), developer productivity is multifaceted and includes various non-tangible elements. This complexity makes it difficult to establish a one-size-fits-all metric.
  • Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of developer productivity. These metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
  • Team Dynamics and Collaboration: Measuring individual productivity can overlook the collaborative nature of software development. Developers often work in teams where their contributions are interdependent. Focusing solely on individual metrics may ignore the synergistic effects of collaboration, mentorship, and knowledge sharing, which are crucial for a team's overall success.
  • Context Ignorance: Developer productivity metrics often fail to consider the context in which developers work. Factors such as project complexity, team dynamics, and external dependencies can significantly impact productivity but are often overlooked in traditional assessments. This lack of context can lead to misleading conclusions about a developer's performance.
  • Potential for Misguided Incentives: Relying heavily on specific metrics can create perverse incentives. For example, if developers are rewarded based on the number of commits, they may prioritize frequent small commits over meaningful contributions. This can lead to a culture of "gaming the system" rather than fostering genuine productivity and innovation.

What Not to Measure

  • Lines of Code (LOC): While LOC can provide some insight into coding activity, it is not a reliable measure of productivity. More code does not necessarily equate to better software. Instead, focus on the quality and impact of the code produced.
  • Commit Frequency: Tracking how often developers commit code can give a false sense of productivity. Frequent commits do not always indicate meaningful progress and can encourage developers to break down their work into smaller, less significant pieces.
  • Bug Counts: Focusing on the number of bugs reported or fixed can create a negative environment where developers feel pressured to avoid complex tasks that may introduce bugs. This can stifle innovation and lead to a culture of risk aversion.
  • Time Spent on Tasks: Measuring how long developers spend on specific tasks can be misleading. Developers may take longer on complex problems that require deep thinking and creativity, which are essential for high-quality software development.

Measuring developer productivity is fraught with challenges and dangers that engineering managers must navigate carefully. By understanding these complexities and avoiding outdated or superficial metrics, leaders can foster a more accurate and supportive environment for their development teams.

What is the impact of measuring Dev productivity on engineering culture?

Developer productivity is a critical factor in the success of software development projects. For engineering managers and technology leaders, measuring and optimizing it is essential for driving team performance and delivering successful outcomes. However, measuring productivity can have a significant impact on engineering culture and on retaining engineering talent, which must be carefully navigated. Let's talk about measuring developer productivity while maintaining a healthy and productive engineering culture.

Measuring developer productivity presents unique challenges compared to other fields. The complexity of software development, inadequate traditional metrics, team dynamics, and lack of context can all lead to misguided incentives and decreased morale. It's crucial for engineering managers to understand these challenges to avoid the pitfalls of misinterpretation and ensure that developer productivity metrics genuinely reflect the contributions of developers.

Remember, the goal is not to maximize metrics but to create a development environment where software engineers can thrive and deliver maximum value to the organization.

Development teams using Typo experience a 30% improvement in Developer Productivity. Want to Try Typo?

Member's insights in Typo
Wanna Improve your Dev Productivity?

Optimizing Code Reviews to Boost Developer Productivity

Code review is all about improving code quality. Done poorly, however, it can be a nightmare for developers: review challenges slow down the entire development process, reduce morale and efficiency, and can lead to developer burnout.

Hence, optimizing the code review process is crucial for both code reviewers and developers. In this blog post, we have shared a few tips on optimizing code reviews to boost developer productivity.

Importance of Code Reviews

The code review process is an essential stage in the software development life cycle and a defining principle of agile methodologies. It ensures high-quality code and identifies potential issues or bugs before they are deployed to production.

Another notable benefit of code reviews is that they help maintain a continuous integration and delivery pipeline, ensuring code changes are aligned with project requirements. They also ensure the product meets quality standards, contributing to the overall success of the sprint or iteration.

With a consistent code review process, the development team can limit the risks of unnoticed mistakes and prevent a significant amount of tech debt.

Reviews also verify that the code meets the set acceptance criteria and functional specifications, and that consistent coding styles are followed across the codebase.

Lastly, code reviews give developers an opportunity to learn from each other and improve their coding skills, fostering continuous growth and raising the overall quality of the codebase.

How do Ineffective Code Reviews Decrease Developer Productivity?

Unclear Standards and Inconsistencies

When code reviews lack clear guidelines or consistent evaluation criteria, developers may feel uncertain about what is expected of them. Varied interpretations of code quality and style create ambiguity, and developers spend significant time fixing issues based on different reviewers’ subjective opinions. This leads to frustration and decreased morale.

Increase in Bottlenecks and Delays

When developers wait extended periods for feedback, they cannot make progress. This slows down the entire software development lifecycle, resulting in missed deadlines and decreased morale, and negatively affects the deployment timeline, customer satisfaction, and overall business outcomes.

Low-Quality and Delayed Feedback

When reviewers deliver vague, unclear, or delayed feedback, critical information often gets lost. Delayed reviews also force developers to context-switch away from their current tasks and then refamiliarize themselves with the code once the review finally arrives, draining their productivity.

Increased Cognitive Load

Frequently switching between writing and reviewing code demands significant mental effort, making it harder for developers to stay focused and productive. Poorly structured, conflicting, or unclear feedback also leaves developers unsure which changes to prioritize and what the rationale behind them is. This slows progress, leads to decision fatigue, and reduces the quality of work.

Knowledge Gaps and Lack of Context

Knowledge gaps arise when reviewers lack the necessary domain knowledge or context about specific parts of the codebase. Without that context, reviewers may misguide developers or overlook important issues, and developers may need extra time to justify their decisions and educate reviewers.

How to Optimize Code Review Process to Improve Developer Productivity?

Set Clear Goals and Standards

Establish clear objectives, coding standards, and expectations for code reviews. Communicate logistics in advance, such as how long reviews should take and who will review the code. This lets both reviewers and developers focus on relevant issues rather than wasting time on insignificant matters.

Use a Code Review Checklist

Code review checklists include a predetermined set of questions and rules that the team will follow during the code review process. A few of the necessary quality checks include:

  • Readability and maintainability: This is the first criterion, and its importance cannot be overstated.
  • Uniform formatting: Is the code easy to understand, with consistent indentation, spacing, and naming conventions?
  • Testing and quality assurance: Has the code been through meticulous testing and quality assurance?
  • Boundary testing: Are we exploring extreme scenarios and boundary conditions to uncover hidden problems?
  • Security and performance: Are we ensuring security and performance in our source code?
  • Architectural integrity: Is the code scalable and sustainable, with a solid architectural design?
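
The boundary-testing item in the checklist above is easiest to see in code. The sketch below is a hypothetical illustration (the `clamp()` function is invented for this example, not taken from any real codebase) of what a reviewer should look for: tests that exercise values exactly at, just beyond, and degenerately inside the limits, not just the happy path.

```python
# Hypothetical example for the "boundary testing" checklist item:
# exercise extreme scenarios and boundary conditions, not only typical input.

def clamp(value: int, low: int, high: int) -> int:
    """Restrict value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Typical case
assert clamp(5, 0, 10) == 5
# Boundary conditions: exactly at the limits
assert clamp(0, 0, 10) == 0
assert clamp(10, 0, 10) == 10
# Just beyond the limits
assert clamp(-1, 0, 10) == 0
assert clamp(11, 0, 10) == 10
# Degenerate range where low == high
assert clamp(7, 3, 3) == 3
```

A review that checks only the typical case would miss off-by-one errors at the edges; a checklist makes reviewers ask for the boundary assertions explicitly.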

Prioritize High-Impact Issues

Not every issue flagged in a code review is equally important, so prioritize issues based on their severity and impact. Address issues affecting system performance, security, or major features first, and review them more thoroughly than smaller, less impactful changes. This helps allocate time and resources effectively.

Encourage Constructive Feedback

Always share specific, honest, and actionable feedback with developers. Feedback should point in the right direction and explain the 'why' behind each suggestion; this reduces follow-ups and gives developers the necessary context. It also helps the engineering team improve their skills and produce better code, resulting in a higher-quality codebase.

Automate Wherever Possible

Use automation tools such as style checkers, syntax checkers, and static code analysis tools to speed up the review process. These handle routine checks for style, syntax errors, potential bugs, and performance issues, reducing the manual effort spent on such tasks and freeing developers to focus on more complex issues and allocate time more effectively.
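
As a rough illustration of what such routine checks do, here is a minimal, hypothetical automated checker in Python; real teams would normally rely on established linters and static analyzers wired into CI rather than writing their own. The `MAX_LINE_LENGTH` rule is an assumption made for this sketch. It flags syntax errors and overlong lines, the kind of mechanical findings reviewers should not have to spot by hand.

```python
# Minimal sketch of automated pre-review checks: a syntax check plus one
# style check. Illustrative only; use established tools in practice.
import ast
import tempfile
from pathlib import Path

MAX_LINE_LENGTH = 99  # assumed style rule for this sketch

def check_file(path: Path) -> list[str]:
    """Return a list of style/syntax findings for one Python source file."""
    findings = []
    source = path.read_text(encoding="utf-8")
    try:
        ast.parse(source)  # syntax check: fails fast on invalid Python
    except SyntaxError as exc:
        findings.append(f"{path}:{exc.lineno}: syntax error: {exc.msg}")
        return findings  # no point style-checking a file that won't parse
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE_LENGTH:  # routine style check
            findings.append(f"{path}:{lineno}: line too long ({len(line)} chars)")
    return findings

# Quick demonstration on a throwaway file containing an overlong line:
demo = Path(tempfile.mkdtemp()) / "demo.py"
demo.write_text("x = 1  # " + "a" * 120 + "\n", encoding="utf-8")
for finding in check_file(demo):
    print(finding)  # reports the overlong line with its location
```

Running checks like these automatically on every pull request means human reviewers only see code that already passes the mechanical bar.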

Keep Reviews Small and Focused

Break code changes into smaller, manageable chunks. Smaller reviews are less overwhelming and time-consuming: reviewers can concentrate on details, check adherence to the style guide and coding standards, identify potential bugs, and provide meaningful feedback more effectively, with a deeper understanding of the code's impact on the overall project.

Recognize and Reward Good Work

Acknowledge and celebrate developers who consistently produce high-quality code. This enables developers to feel valued for their contributions, leading to increased engagement, job satisfaction, and a sense of ownership in the project’s success. They are also more likely to continue producing high-quality code and actively participate in the review process.

Encourage Pair Programming or Pre-Review

Encourage pair programming or pre-review sessions to enable real-time feedback, reduce review time, and improve code quality. This fosters collaboration, enhances knowledge sharing, and helps catch issues early, leading to smoother and more effective reviews. It also promotes team bonding, streamlines communication, and cultivates a culture of continuous learning and improvement.

Use a Software Engineering Analytics Platform

An engineering analytics platform is a powerful way to optimize the code review process and improve developer productivity. It provides comprehensive insights into code quality, technical debt, and bug frequency, allowing teams to proactively identify bottlenecks and address issues in real time before they escalate. It also lets teams monitor their practices continuously and make adjustments as needed.

Typo — Automated Code Review Tool

Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.

Key Features

  • Supports top 8 languages including C++ and C#.
  • Understands the context of the code and fixes issues accurately.
  • Optimizes code efficiently.
  • Provides automated debugging with detailed explanations.
  • Standardizes code and reduces the risk of security breaches.

Learn More About Typo

Conclusion

Prioritize the code review process and follow the tips above. Doing so will help maximize code quality, improve developer productivity, and streamline the development process.

Happy reviewing!

Podcasts

'Product Thinking Secrets for Platform Teams' with Geoffrey Teale, Principal Product Engineer, Upvest

In this episode of the groCTO Podcast, host Kovid Batra engages in a comprehensive discussion with Geoffrey Teale, the Principal Product Engineer at Upvest, who brings over 25 years of engineering and leadership experience.

The episode begins with Geoffrey's role at Upvest, where he has transitioned from Head of Developer Experience to Principal Product Engineer, emphasizing a holistic approach to improving both developer experience and engineering standards across the organization. Upvest's business model as a financial infrastructure company providing investment banking services through APIs is also examined. Geoffrey underscores the multifaceted engineering requirements, including security, performance, and reliability, essential for meeting regulatory standards and customer expectations. The discussion further delves into the significance of product thinking for internal teams, highlighting the challenges and strategies of building platforms that resonate with developers' needs while competing with external solutions.

Throughout the episode, Geoffrey offers valuable insights into the decision-making processes, the importance of simplicity in early-phase startups, and the crucial role of documentation in fostering team cohesion and efficient communication. Geoffrey also shares his personal interests outside work, including his passion for music, open-source projects, and low-carbon footprint computing, providing a holistic view of his professional and personal journey.

Timestamps

  • 00:00 - Introduction
  • 00:49 - Welcome to the groCTO Podcast
  • 01:22 - Meet Geoffrey: Principal Engineer at Upvest
  • 01:54 - Understanding Upvest's Business & Engineering Challenges
  • 03:43 - Geoffrey's Role & Personal Interests
  • 05:48 - Improving Developer Experience at Upvest
  • 08:25 - Challenges in Platform Development and Team Cohesion
  • 13:03 - Product Thinking for Internal Teams
  • 16:48 - Decision-Making in Platform Development
  • 19:26 - Early-Phase Startups: Balancing Resources and Growth
  • 27:25 - Scaling Challenges & Documentation Importance
  • 31:52 - Conclusion

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of groCTO Podcast. Today with us, we have a very special guest who has great expertise in managing developer experience at small scale and large scale organizations. He is currently the Principal Engineer at Upvest, and has almost 25 plus years of experience in engineering and leadership. Welcome to the show, Geoffrey. Great to have you here. 

Geoffrey Teale: Great to be here. Thank you. 

Kovid Batra: So Geoffrey, I think, uh, today's theme is more around improving the developer experience, bringing the product thinking while building the platform teams, the platform. Uh, and you, you have been, uh, doing all this from quite some time now, like at Upvest and previous organizations that you've worked with, but at your current company, uh, like Upvest, first of all, we would like to know what kind of a business you're into, what does Upvest do, and let's then deep dive into how engineering is, uh, getting streamlined there according to the business.

Geoffrey Teale: Yeah. So, um, Upvest is a financial infrastructure company. Um, we provide, uh, essentially investment banking services, a complete, uh, solution for building investment banking experiences, uh, for, for client organizations. So we're business to business to customer. We provide our services via an API and client organizations, uh, names that you'd heard of people like Revolut and N26 build their client-facing applications using our backend services to provide that complete investment experience, um, currently within the European Union. Um, but, uh, we'll be expanding out from there shortly. 

Kovid Batra: Great. Great. So I think, uh, when you talk about investment banking and supporting the companies with APIs, what kind of engineering is required here? Is it like more, uh, secure-oriented, secure-focused, or is it more like delivering on time? Or is it more like, uh, making things very very robust? How do you see it right now in your organization? 

Geoffrey Teale: Well, yeah, I mean, I think in the space that we're in the, the answer unfortunately is all of the above, right? So all those things are our requirements. It has to be secure. It has to meet the, uh, the regulatory standards that we, we have in our industry. Um, it has to be performant enough for our customers who are scaling out to quite large scales, quite large numbers of customers. Um, has to be reliable. Um, so there's a lot of uh, uh, how would I say that? Pressure, uh, to perform well and to make sure that things are done to the highest possible standard in order to deliver for our customers. And, uh, if we don't do that, then, then, well, the customers won't trust us. If they don't trust us, then we wouldn't be where we are today. So, uh, yeah. 

Kovid Batra: No, I totally get that. Uh, so talking more about you now, like, what's your current role in the organization? And even before that, tell us something about yourself which the LinkedIn doesn't know. Uh, I think the audience would love to know you a little bit more. Uh, let's start from there. Uh, maybe things that you do to unwind or your hobbies or you're passionate about anything else apart from your job that you're doing? 

Geoffrey Teale: Oh, well, um, so, I'm, I'm quite old now. I have a family. I have two daughters, a dog, a cat, fish, quail. Keep quail in the garden. Uh, and that occupies most of my time outside of work. Actually my passions outside of work were always um, music. So I play guitar, and actually technology itself. So outside of work, I'm involved and have been involved in, in open source and free software for, for longer than I've been working. And, uh, I have a particular interest in, in low carbon footprint computing that I pursue outside of, out of work.

Kovid Batra: That's really amazing. So, um, like when you say low carbon, uh, cloud computing, what exactly are you doing to do that? 

Geoffrey Teale: Oh, not specifically cloud computing, but that would be involved. So yeah, there's, there's multiple streams to this. So one thing is about using, um, low power platforms, things like RISC-V. Um, the other is about streamlining of software to make it more efficient so we can look into lots of different, uh, topics there about operating systems, tools, programming languages, how they, uh, how they perform. Um, sort of reversing a trend, uh, that's been going on for as long as I've been in computing, which is that we use more and more power, both in terms of computing resource, but also actual electricity for the network, um, to deliver more and more functionality, but we're also programming more and more abstracted ways with more and more layers, which means that we're actually sort of getting less, uh, less bang for buck, if you, if you like, than we used to. So, uh, trying to reverse those trends a little bit. 

Kovid Batra: Perfect. Perfect. All right. That's really interesting. Thanks for that quick, uh, cute little intro. Uh, and, uh, now moving on to your work, like we were talking about your experience and your specialization in DevEx, right, improving the developer experience in teams. So what's your current, uh, role, responsibility that comes with, uh, within Upvest? Uh, and what are those interesting initiatives that you have, you're working on? 

Geoffrey Teale: Yeah. So I've actually just changed roles at Upvest. I've been at Upvest for a little bit over two years now, and the first two years I spent as the Head of Developer Experience. So running a tribe with a specific responsibility for client-facing developer experience. Um, now I've switched into a Principal Engineering role, which means that I have, um, a scope now which is across the whole of our engineering department, uh, with a, yeah, a view for improving experience and improving standards and quality of engineering internally as well. So, um, a slight shift in role, but my, my previous five years before, uh, Upvest, were all in, uh, internal development experience. So I think, um, quite a lot of that skill, um, coming into play in the new role which um, yeah, in terms of challenges actually, we're just at the very beginning of what we're doing on that side. So, um, early challenges are actually about identifying what problems do exist inside the company and where we can improve and how we can make ourselves ready for the next phase of the company's lifetime. So, um, I think some of those topics would be quite familiar to any company that's relatively modern in terms of its developer practices. If you're using microservices, um, there's this aspect of Conway's law, which is to say that your organizational structure starts to follow the program structure and vice versa. And, um, in that sense, you can easily get into this world where teams have autonomy, which is wonderful, but they can be, um, sort of pushed into working in a, in a siloized fashion, which can be very efficient within the team, but then you have to worry about cohesion within the organization and about making sure that people are doing the right things, uh, to, to make the services work together, in terms of design, in terms of the technology that we develop there. 
So that bridges a lot into this world of developer experience, into platform drives, I think you mentioned already, and about the way in which you think about your internal development, uh, as opposed to just what you do for customers. 

Kovid Batra: I agree. I mean, uh, as you said, like when the teams are siloed, they might be thinking they are efficient within themselves. And that's mostly the use case, the case. But when it comes to integrating different pieces together, that cohesion has to fall in. What is the biggest challenge you have seen, uh, in, in the teams in the last few years of your experience that prevents this cohesion? And what is it that works the best to bring in this cohesion in the teams? 

Geoffrey Teale: Yeah. So I think there's, there's, there's a lot of factors there. The, the, the, the biggest one I think is pressure, right? So teams in most companies have customers that they're working for, they have pressure to get things done, and that tends to make you focus on the problem in front of you, rather than the bigger picture, right? So, um, dealing, dealing with that and reinforcing the message to engineers that it's actually okay to do good engineering and to worry about the other people, um, is a big part of that. I've always said, actually, that in developer experience, a big part of what you have to do, the first thing you have to do is actually teach people about why developer experience is important. And, uh, one of those reasons is actually sort of saying, you know, promoting good behavior within engineering teams themselves and saying, we only succeed together. We only do that when we make the situation for ourselves that allows us to engineer well. And when we sort of step away from good practice and rush, rush, um, that maybe works for a short period of time. But, uh, in the long term that actually creates a situation where there's a lot of mess and you have to deal with, uh, getting past, we talk about factors like technical debt. There's a lot of things that you have to get past before you can actually get on and do the productive things that you want to do. Um, so teaching organizations and engineers to think that way is, uh, is, uh, I think a big, uh, a big part of the work that has to be done, finding ways to then take that message and put it into a package that is acceptable to people outside of engineering so that they understand why this is a priority and why it should be worked on is, I think, probably the second biggest part of that as well.

Kovid Batra: Makes sense. I think, uh, most of the, so is it like a behavioral challenge, uh, where, uh, developers and team members really don't like the fact that they have to work in cohesion with the teams? Or is it more like the organizational structure that put people into a certain kind of mindset and then they start growing with that and that becomes a problem in the later phase of the organization? What, what you have seen, uh, from your experience? 

Geoffrey Teale: Yeah. So I mean, I think growth is a big part of this. So, um, I mean, I've, I've worked with a number of startups. I've also worked in much bigger organizations. And what happens in that transition is that you move from a small tight-knit group of people who sort of inherently have this very good interpersonal communication, they all know what's going on with the company as a whole, and they build trust between them. And that way, this, this early stage organization works very well, and even though you might be working on disparate tasks, you always have some kind of cohesion there. You know what to do. And if something comes up that affects all of you, it's very easy to identify the people that you need to talk to and find a solution for it. Then as you grow, you start to have this situation where you start to take domains and say, okay, this particular part of, of what we do now belongs in a team, it has a leader and this piece over here goes over there. And that still works quite well up into a certain scale, right? But after time in an organization, several things happen. Okay, so your priorities drift apart, right? You no longer have such good understanding of the common goal. You tend to start prioritizing your work within those departments. So you can have some, some tension between those goals. It's not always clear that Department A should be working together with Department B on the same priority. You also have natural staff turnover. So those people who are there at the beginning, they start to leave, some of them, at least, and these trust relationships break down, the communication channels break down. And the third factor is that new people coming into the organization, they haven't got these relationships, they haven't got this experience. They usually don't have, uh, the position to, to have influence over things on such a large scale. 
So they get an expectation of these people that they're going to be effective across the organization in the way that people who've been there a long time are, and it tends not to happen. And if you haven't set up for that, if you haven't built the support systems for that and the internal processes and tooling for that, then that communication stops happening in the way that it was happening before.

So all of those things create pressure to, to siloes, then you put it on the pressure of growth and customers and, and it just, um, uh, ossifies in that state. 

Kovid Batra: Totally. Totally. And I think, um, talking about the customers, uh, last time when we were discussing, uh, you very beautifully put across this point of bringing that product thinking, not just for the products that you're building for the customer, but when you're building it for the teams. And I, what I feel is that, the people who are working on the platform teams have come across this situation more than anyone else in the team as a developer, where they have to put in that thought of product thinking for the people within the team. So what, what, what, uh, from where does this philosophy come? How you have fitted it into, uh, how platform teams should be built? Just tell us something about that. 

Geoffrey Teale: Yeah. So this is something I talk about a little bit when I do presentations, uh, about developer experience. And one of the points that I make actually, particularly for platform teams, but any kind of internal team that's serving other internal teams is that you have to think about yourself, not as a mandatory piece that the company will always support and say, "You must use this, this platform that we have." Because I have direct experience, not in my current company, but in previous, uh, in previous employers where a lot of investment has been made into making a platform, but no thought really was given to this kind of developer experience, or actually even the idea of selling the platform internally, right? It was just an assumption that people would have to use it and so they would use it. And that creates a different set of forces than you'll find elsewhere. And, and people start to ignore the fact that, you know, if you've got a cloud platform in this case, um, there is competition, right? Every day as an engineer, you run into people out there working in the wide world, working for, for companies, the Amazons, AWS of this world, as your Google, they're all producing cloud platform tools. They're all promoting their cloud native development environments with their own reasons for doing that. But they expend a lot of money developing those things, developing them to a very high standard and a lot of money promoting and marketing those things. And it doesn't take very much when we talk just now about trust breaking down, the cohesion between teams breaking down. It doesn't take very much for a platform to start looking like less of a solution and more of a problem if it's taking you a long time to get things done, if you can't find out how to do things, if you, um, you have bad experiences with deployment. This all turns that product into an internal problem. 

Kovid Batra: In context of an internal problem for the teams. 

Geoffrey Teale: Yeah, and in that context, and this is what I, what I've seen, when you then either have someone coming in from outside with experience with another, a product that you could use, or you get this kind of marketing push and sales push from one of these big companies saying, "Hey, look at this, this platform that we've got that you could just buy into." um, it, it puts you in direct competition and you can lose that, that, right? So I have seen whole divisions of a, of a very large company switch away from the internal platform to using cloud native development, right, on, on a particular platform. Now there are downsides for that. There are all sorts of things that they didn't realize they would have to do that they end up having to do. But once they've made the decision, that battle is lost. And I think that's a really key topic to understand that you are in competition, even though you're an internal team, you are in competition with other people, and you have to do some of the things that they do to convince the people in your organization that what you're doing is beneficial, that it's, it's, it's useful, and it's better in some very distinct way than what they would get off the shelf from, from somewhere else. 

Kovid Batra: Got it. Got it. So, when, uh, whenever the teams are making this decision, let's, let's take something, build a platform, what are those nitty gritties that one should be taking care of? Like, either people can go with off the shelf solutions, right? And then they start building. What, what should be the mindset, what should be the decision-making mindset, I must say, uh, for, for this kind of a process when they have to go through? 

Geoffrey Teale: So I think, um, uh, we within Upvest, follow a very, um, uh, prescribed is not the right word, but we have a, we have a process for how we think about things, and I think that's actually a very useful example of how to think about any technical project, right? So we start with this 'why' question and the 'why' question is really important. We talk about product thinking. Um, this is, you know, who are we doing this for and what are the business outcomes that we want to achieve? And that's where we have to start from, right? So we define that very, very clearly because, and this is a really important part, there's no value, uh, in anybody within the organization saying, "Let's go and build a platform." For example, if that doesn't deliver what the company needs. So you have to have clarity about this. What is the best way to build this? I mean, nobody builds a platform, well not nobody, but very few people build a platform in the cloud starting from scratch. Most people are taking some existing solution, be that a cloud native solution from a big public cloud, or be that Kubernetes or Cloud Foundry. People take these tools and they wrap them up in their own processes, their own software tools around it to package them up as a, uh, a nice application platform for, for development to happen, right? So why do you do that? What, what purpose are you, are you serving in doing this? How will this bring your business forward? And if you can't answer those questions, then you probably should never even start the project, right? That's, that's my, my view. And if you can't continuously keep those, um, ideas in mind and repeat them back, right? Repeat them back in terms of what are we delivering? What do we measure up against to the, to the, to the company? Then again, you're not doing a very good job of, of, of communicating why that product exists. 
If you can't think of a reason why your platform delivers more to your company and the people working in your company than one of the off the shelf solutions, then what are you for, right? That's the fundamental question.

So we start there, we think about those things well before we even start talking about solution space and, and, um, you know, what kind of technology we're going to use, how we're going to build that. That's the first lesson. 

Kovid Batra: Makes sense. A follow-up question on that. Uh, let's say a team is let's say 20-30 folks right now, okay? I'm talking about an engineering team, uh, who are not like super-funded right now or not in a very profit making business. This comes with a cost, right? You will have to deploy resources. You will have to invest time and effort, right? So is it a good idea according to you to have shared resources for such an initiative or it doesn't work out that way? You need to have dedicated resources, uh, working on this project separately or how, how do you contemplate that? 

Geoffrey Teale: My experience of early-phase startups is that people have to be multitaskers and they have to work on multiple things to make it work, right? It just doesn't make sense in the early phase of a company to invest so heavily in a single solution. Um, and I think one of the mistakes that I see people making now actually is that they start off with this, this predefined idea of where they're going to be in five years. And so they sort of go away and say, "Okay, well, I want my, my, my system to run on microservices on Kubernetes." And they invest in setting up Kubernetes, right, which has got a lot easier over the last few years, I have to say. Um, you can, to some degree, go and just pick that stuff off the shelf and pay for it. Um, but it's an example of, of a technical decision that, that's putting the cart before the horse, right? So, of course, you want to make architectural decisions. You don't want to make investments on something that isn't going to last, but you also have to remember that you don't know what's going to happen. And actually, getting to a product quickly, uh, is more important than, than, you know, doing everything perfectly the first time around. So, when I talk about these, these things, I think uh, we have to accept that there is a difference between being like the scrappy little startup and then being in growth phase and being a, a mega corporation. These are different environments with different pressures 

Kovid Batra: Got it. So, when, when teams start, let's say, work on it, working on it and uh, they have started and taken up this project for let's say, next six months to at least go out with the first phase of it. Uh, what are those challenges which, uh, the platform heads or the people who are working, the engineers who are working on it, should be aware of and how to like dodge those? Something from your experience that you can share.

Geoffrey Teale: Yes. So I mean, in, in, in the, the very earliest phase, I mean, as I just alluded to that keeping it simple is, is a, a, a big benefit. And actually keeping it simple sometimes means, uh, spending money upfront. So what I've, what I've seen is, is, um, many times I've, I've worked at companies, um, but so many, at least three times who've invested in a monitoring platform. So they've bought a off the shelf software as a service monitoring platform, uh, and used that effectively up until a certain point of growth. Now the reason they only use it up into a certain point of growth is because these tools are extremely expensive and those costs tend to scale with your company and your organization. And so, there comes a point in the life of that organization where that no longer makes sense financially. And then you withdraw from that and actually invest in, in specialist resources, either internally or using open source tools or whatever it is. It could just be optimization of the tool that you're using to reduce those costs. But all of those things have a, a time and financial costs associated with them. Whereas at the beginning, when the costs are quite low to use these services, it actually tends to make more sense to just focus on your own project and, and, you know, pick those things up off the shelf because that's easier and quicker. And I think, uh, again, I've seen some companies fail because they tried to do everything themselves from scratch and that, that doesn't work in the beginning. So yeah, I think that's a, it's a big one. 

The second one is actually slightly later, as you start to grow. Getting something up and running at all is a challenge at first. Um, what tends to happen as you get a little bit bigger is this effect that I was talking about before, where people get siloed, um, the communication starts to break down and people aren't aware of the differing concerns. So you start worrying about things that you might not worry about at first, like system recovery, uh, compliance in some cases; there are laws around what you do in terms of your platform and your recoverability and data protection and all these things. All of these topics tend to take focus away, um, from what the developers are doing. So on the one hand, that tends to slow down delivery of, of features that the engineers within your company want in favor of things that they don't really want to know about. Now, all the time you're doing this, you're taking problems away from them and solving them for them. But if you don't talk about that, then you're not, you're not, you may be delivering value, but nobody knows you're delivering value. So that's the first thing. 

The other thing is that you then tend to start losing focus on, on the impact that some of these things have. If you stop thinking about the developers as the primary stakeholders and you get obsessed about these other technical and legal factors, um, then you can start putting barriers into place. You can start, um, making the interfaces to the system, the way in which it's used, more complicated. And if you don't really focus then on the developer experience, right, what it is like to use that platform, then you start to turn into the problem which I mentioned before, because, um, if you're regularly doing something, if you're deploying or testing on a platform and you have to do that over and over again, and it's slowed down by some bureaucracy or some practice or just literally running slowly, um, then that starts to be the thing that irritates you. It starts to be the thing that's in your way, stopping you doing what you're doing. And so, I mean, one thing is, is, is recognizing when this point happens, when your concerns start to deviate, and actually explicitly saying, "Okay, yes, we're going to focus on all these things we have to focus on technically, but we're going to make sure that we reserve some technical resource for monitoring our performance and the way in which our customers interact with the system, failure cases, complaints that come up often."

Um, so one thing, again, I saw in much bigger companies, is they migrated to the cloud from, from legacy systems in data centers. And they were used to having turnaround times on, on procedures for deploying software that took at least weeks, or having month-long projects because they had to wait for specific training or they had to get sign-off. And they thought that by moving to an internal cloud platform, they would solve these things and have this kind of rapid development and deployment cycle. They sort of did in some ways, but they forgot, right? When they were speccing it out, they forgot to make the developers a stakeholder and to say, "What do you need to achieve that?" And what they actually needed to achieve that was a change in the mindset around the bureaucracy that came with it. It's all well and good, like, not having to physically put a machine in a rack and order it from a company. But if you still have these rules that say, okay, you need to go on this training course before you can do anything with this, and there's a six-month waiting list for that training course, or this has to be approved by five managers who can only be contacted by email before you can do it, these processes are slowing things down. So actually, I mentioned that company where, uh, we lost a whole department from the, from the, uh, platform that we had internally. One of the reasons actually was that just getting started with this platform took months. Whereas if you went to a public cloud service, all you needed was a credit card and you could do it, and you wouldn't be breaking any rules in the company in doing that. As long as you had the, the right to spend the money on the credit card, it was fine.

So, you know, that difference of experience, that difference of, uh, of understanding is something that starts to grow as you, as you grow, right? So I think that's a, uh, a thing to look out for as you move from the situation when you're 10, 20 people in the whole company to when you're about, I would say, 100 to 200 people in the whole company. These forces start to become apparent. 

Kovid Batra: Got it. So when, when you touch that point of 100-200, uh, then there is definitely a different journey ahead of you, right? And it comes with its own set of challenges. So from that zero-to-one and then one-to-X, uh, journey, what, what things have you experienced? Like, this would be my last question for, for today, but yeah, I would be really interested, for people who are listening to you heading teams of sizes a hundred and above: what kind of things should they be looking at when they are, let's say, moving from an off-the-shelf to an in-house product and then building these teams together?

Geoffrey Teale: Oh, what should they be looking at? I mean, I think we just covered, uh, one of the big ones. I'd say actually that one of the, the biggest things for engineers particularly, um, and managers of engineers, is resistance to documentation and, and sort of ideas about documentation that people have. So, um, again, when you're that very small company, it's very easy to just know what's going on. As you grow, what happens is new people come into your team and they have the same questions that have been asked and answered before, or that were just known things. So you get this pattern where you repeatedly get the same information being requested by people. And it's very nice and normal to have conversations; it builds teams. Um, but there's this kind of key phrase, which is, 'Documentation is automation', right? So engineers understand automation. They understand why automation is required to scale, but they tend to completely discount that when it comes to documentation. So almost every engineer that I've ever met hates writing documentation. Not everyone, but almost everyone. Uh, but if you go and speak to engineers about what they need to start working with a new product, and again, we think about this as a product, um, they'll say, of course, I need some documentation. Uh, and if you dive into that, they don't really want to have fancy YouTube videos, although sometimes that helps people overcome a resistance to learning. Um, but, uh, having anything at all is useful, right? But this is a key, key learning about documentation: you need to treat it a little bit like you treat code, right? So it's a very natural, um, observation from, from most engineers: well, if I write a document about this, that document is just going to sit there and, and rot, and then it will be worse than useless because it will say the wrong thing. Which is absolutely true. But the problem there is that someone let it sit there and rot, right? 
It shouldn't be the case, right? If you need the documentation to scale out, you need these pieces to, to support new people coming into the company and to actually reduce the overhead of communication, because the more people you have, the more different directions of communication you have, and the more costly it gets for the organization. Documentation is boring. It's old-fashioned, but it is the solution that works for fixing that. 

The only other thing I'm going to say about mindset is that it's really important to teach engineers what to document, right? Get them away from this mindset that documentation means writing massive, uh, uh, reams and reams of, of text explaining things in, in detail. It's about, you know, documenting the right things in the right place. So at code level, commenting, um, saying not what the code there does, but more importantly, generally, why it does that. You know, what decision was made that led to that? What customer requirement led to that? What piece of regulation led to that? Linking out to the resources that explain that. And then at slightly higher levels, making things discoverable. So we talk actually in DevEx about things like, um, service catalogs, so people can find out what services are running, what APIs are available internally. But also, actually, documentation has to be structured in a way that meets the use cases. And so, actually not having individual departments dropping little bits of information all over a wiki with an arcane structure, but actually sort of having a centralized resource. Again, that's one thing that I did actually in a bigger company. I came into the platform team and said, "Nobody can find any information about your platform. You actually need like a central website and you need to promote that website and tell people, 'Hey, this is here. This is how you get the information that you need to understand this platform.' And actually include at the very front of that page why this platform is better than just going out somewhere else, to come back to the same topic."

Documentation isn't a silver bullet, but it's the closest thing I'm aware of in tech organizations, and it's the thing that we routinely get wrong.

Kovid Batra: Great. I think, uh, just in the interest of time, we'll have to stop here. But, uh, Geoffrey, this was something really, really interesting. I also explored a few things, uh, which were very new to me from the platform perspective. Uh, we would love to, uh, have you for another episode discussing and deep diving more into such topics. But for today, I think this is our time. And, uh, thank you once again for joining in, taking out time for this. Appreciate it.

Geoffrey Teale: Thank you. It's my pleasure.

'The Art & Science of Leading Global Dev Teams' with Christopher Zotter, Head of Engineering, Sky Germany

In this episode of the groCTO Originals podcast, host Kovid Batra engages in an insightful conversation with Christopher Zotter, the Head of Digital Engineering at Sky Germany. Christopher brings a wealth of experience, including a decade of leading engineering teams and founding a software development agency.

Known for his unique leadership philosophy, Christopher believes in the power of building trust, embracing failures, and fostering a transparent culture. He shares his journey from an apprentice in Germany to a leadership role, emphasizing the importance of hands-on experience and continuous learning. The discussion delves into the challenges and strategies of managing culturally diverse remote teams, effective communication, and transitioning from legacy systems to cutting-edge technologies.

Christopher also highlights the significance of being a role model and integrating community involvement into one’s career. This episode offers a deep dive into the principles and practices that can guide leaders in nurturing successful global development teams.

Timestamps

  • 00:00 — Introduction
  • 00:49 — Welcome to the groCTO Podcast
  • 01:39 — Meet Christopher: Personal and Professional Background
  • 03:34 — Christopher’s Career Journey and Key Learnings
  • 05:38 — The Importance of Community and Respect in Leadership
  • 07:42 — Balancing Side Projects and Career Growth
  • 11:33 — Leading Global Teams at Sky
  • 15:20 — Challenges and Strategies in Remote Team Management
  • 21:48 — Navigating Major System Migrations
  • 24:26 — Ensuring Team Motivation and Embracing Change
  • 27:35 — Using Metrics to Drive Improvement
  • 30:59 — Conclusion and Final Thoughts

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of the groCTO podcast. And today with us, we have a very special guest. Uh, he’s Head of Engineering at Sky Germany. He is also the founder of a software dev agency, and he has been leading engineering teams for the past 10 years now. And today, we are going to talk to him about how to lead those global dev teams, because he has been an expert at doing that. So welcome to the show, Christopher. Great to have you here.

Christopher Zotter: Thanks for having me. I’m really excited to be here, part of this great podcast. I got to know it over the last months, with its key insights, and I hope I can provide some of my learnings from past experience to your great audience as well. So happy, happy to be here.

Kovid Batra: I’m sure you can do that. All right. But before we get started into, um, knowing something about your team and your, uh, areas of expertise of how you lead teams, we would love to know a little bit about you. Like something that LinkedIn doesn’t know, something that is very impactful in your life, from your childhood, from your teenage years. Um, anything that you would like to share?

Christopher Zotter: So first of all, the most important part is not business, it’s my family. So I’m a proud father of two kids and I have a lovely wife. So this is the foundation of everything, that I can do my job properly, to be honest, and it gives me energy. Um, and also what is not on LinkedIn, or it’s on LinkedIn but it’s worth mentioning, is I didn’t study anything. So you see now my title, which is, I also need to reflect, impressive to be honest, also to myself, but I only did a normal apprenticeship in Germany to work as a software developer. So I really started at the core of the things, but now I managed to do so. So I made my, my way through doing the things, getting hands-on, and not fearing to make mistakes. I learned from things, um, I did. I once deployed a hard-coded ID to production while testing on a piece of software in the past. Yeah, that never happened again. So I really get hands-on and get these kinds of experiences. Um, and what is also, I think, important is to not only focus on, on the software things, but also do some things for society, for the community beside the work, which, which gives me balance. So this is not on LinkedIn. This is something that has had a very positive impact on, on my, on my past. So, um, yeah, that’s roughly who I am, but I can also continue a bit about my journey to, to reaching that position if you’re interested too.

Kovid Batra: Sure, why not? Please go ahead.

Christopher Zotter: Um, yeah, my, my, as I said, I, I did an apprenticeship in Germany, which takes mostly three, three and a half years, and I had the chance to work at a very small company. It’s not, it’s not, the company doesn’t exist anymore, I think, but I got the chance to work in a very small team with great experts, and I got responsibility from day one. So I didn’t develop something for the trash. It was really something which could go to production, of course, with a review process, et cetera. And again, the advice I can already share is: try to do as many things as possible. In the younger years, you have the time. I see that now with family, the priority shifts obviously, but use the time you have, do side projects if possible, because getting hands-on with things, nothing can beat experience. And this is, I think, also the big learning I had over the, uh, over the time: I got all of my, um, promotions, all of my way through the career, starting from an apprenticeship, junior developer, senior developer, lead developer, and now Head of Engineering, um, through my experience. I did hands-on work and I can prove, showcase what I did, starting from coding skills, a simple HTML page with a, with a simple contact form, everything. So I got my hands on different things to get, uh, get, get the knowledge, and I think knowledge and experience beat most of the, of the things, but you can’t study it. Um, you need to get hands-on. Yeah, just briefly, and now I’m here.

Kovid Batra: Yeah, no, I think that was a very, very nice intro, and I think we now, we now know you a little more. And one, one thing that I really loved when you said that, uh, it’s not just about work. Uh, there is family, there’s community that you want to do for. So I’m sure this community thing which you are doing, uh, this, this would have helped in shaping up, uh, some level of leadership, some level of giving back. I think leadership is another name for giving back. So from there, it should be coming in. So can you share some of your experience from there that helped you in your career moving from let’s say an IC to an EM and then growing to a leadership position?

Christopher Zotter: I like that you say leadership is giving back. Yes. Um, I didn’t see it that way, but it totally resonates with me. Um, at the end, it’s all about the people. Um, I think we have, we have on this planet so many, uh, wars happening, so many people working against each other, and I, I try to do the opposite, because we’re all humans. And I learned that also through working for the community in a certain way. So I, I worked for one year to support disabled people, to go with them to school, young people, and there I learned, hey, these are all humans and everybody’s trying their best. Also now, in my position, it’s about people. It’s about getting their feelings, getting their circumstances and getting their perspectives, getting their culture. We will come to the topic later, um, because there are different cultures. We are working together, even in software development, across the globe. Um, and there, you always need to, to think about that and not act like everybody is just under pressure to get it done, get it done. So, we need to consider the humans behind it and try to create a win-win situation for everybody, so that everybody feels confident and comfortable and respected. And, um, this I learned. I’m a very value-driven person. And my key value is respect, because respect is there for everything, no matter what you’re doing. Um, it starts with going into the office: greet the cleaning person the same way as you greet the CEO. Um, it’s, it’s, we are all humans, everybody’s putting the bits and pieces together, and this we sometimes, we forget in our daily business. So, um, this is what I definitely learned from being there, giving something back to the community or whatever it is. So yeah.

Kovid Batra: Perfect. Perfect. And another interesting piece in your career is, uh, no academic background, uh, in engineering, and then doing things hands-on. And then, uh, you were working on a side business as well, which you just mentioned, where you, you recommend people to do that at an early age, because that’s where you get most of your experience and knowledge of how to do things, how to complete things. How exactly has that contributed to your career growth? Because I also come from a similar experience. I would love for you to explain it, if this has contributed in some way.

Christopher Zotter: Okay. Yeah, great. Um, that’s, yeah. I started my side business, I think, now eight, nine years ago. Um, and by the way, this will now come to an end. It’s already more or less ended, because my, my daily job requires full attention, plus family. There is no time, and you need to also say no to things. Um, but in that time it was, uh, it was pretty important for me, because what I did is the things I learned in my company, in my apprenticeship, um, I tried to apply in some projects, first for my own and then for my inner circle. So for some friends who had also built up a company, whatever that is, needed a home page, needed a web application. Um, and I built it in my side business. Then I could adapt the things I learned in my, in my daily business, enhance them in a certain way in my own environment, test them, work against them and enhance the knowledge. Try things out, see if they’re working there in smaller bits and pieces, not in the big company where you’re working. Um, it helped me a lot to grow, trying out, trial and error. Uh, and in the end, that’s the experience you get, and this experience, if you bring it back to your company, if you want to make a career, um, this is what you can benefit from, and yeah, that knowledge beats everything at the end.

Kovid Batra: Sure. I think for me, like, I also had a side business, and how it has helped me is that I was interacting with the customers directly, right? So that was for me a great experience. When you are in a larger organization, where you have people doing the front-end job and then you are getting just the requirements, that relatability with the problem statement, with the audience, is much lower. So I think that way it has helped me much more, from that point of view.

Christopher Zotter: Interesting, because we at Sky, we have, our claim is to put the customer, the users, at the center of everything, and I, I’m a Sky, a soccer fan. And, and, and Sky, probably just to name what we are doing, um, because there is probably some confusion for your audience from India, because the Sky channel there is known and it’s a bit of a different thing than what Sky Germany is doing. So, um, for, for, for you: we are the major entertainment provider here in Germany, pay TV. We have sports, um, mostly the Bundesliga, so we have the German soccer, football, uh, um, rights in place, and some, uh, self-produced movies. Uh, you can watch Netflix and stuff over our platform, either via streaming or via our Q receiver. And, um, as I’m a big Bayern Munich fan, I have used Sky, or previously it was named Premiere, uh, for a long, long time. So I’m also the customer on the one hand side, using our product, and I know what’s going on and know the issues, and can bring that in and learn from it on, on the other side, which is now a great benefit. But I can echo that. It’s, it’s definitely one of the key things to know who your audience is and who the users and the customers are, and to go out and get to know them, what their behavior is, in order to deliver them the best product, the best experience they can, they can have.

Kovid Batra: Sure, sure. Absolutely. All right. I think, uh, that was, outside what you do at Sky, most of it, uh, we discussed. Now moving from that note into the world of Sky, where you are heading teams and, uh, most of them are working remotely from India, from Germany and other parts of the world. So first thing I would like to understand, like, how have things changed in the last four or five years from your perspective? Um, you have grown from a manager to a leadership profile. What were those things that came into, uh, into your role as a responsibility, uh, that you took up with these global teams, that helped you grow here? How was the experience over the last four years?

Christopher Zotter: It was an amazing ride. Um, I think every, every, every step has its challenges in, in a certain way. Um, being a developer, you can then go to either other developers or have your scrum master and feature teams. Um, but coming to be, um, a leader for such, such a, such a big team. So my team is currently, we have five people here in Germany and we have 15–16 right now sitting in Chennai, India. You have to think about different things. You have to think about the team harmony, how the people work together. You have to think about communication. You have to think about values, how everything then works together, and not only getting the code done in a proper way with all of the quality checks in between. And for what I now need to consider, it helps me to have the experience from beforehand, to know what is technically possible, what we need to do in order to shape, um, the best and the most effective process. We will talk about that, I think, later also, what can be done there. But also, um, yeah, to consider, as I said previously, the different perspectives. Everybody is on a different level, um, has different circumstances. Somebody is now getting it further earlier. So probably not that much focus on work, which is fine. We need to deal with that also, to support wherever we can. Somebody is getting sick, and all of the things you need to consider. Um, and it’s, it was also a big change for me, and I’m still in progress, to be honest, because I started my journey as a developer and I love to code also. Um, but so much coding in that position is not possible anymore. And you need to build up a team you can trust, give them the task and get it back done, or get, get the right feedback, uh, whatever that is. So this is one of the things: to build trust, to have a lot of conversations. So having a lot of coffee in the office with the different guys to get to know what’s going on. 
And of course, um, you are now, or I am now, in a position of having, uh, stakeholder, uh, communication with our CTO, COO, uh, different, different areas, which you don’t normally have as a developer, where you only get the requirements. So again, I’m a bit closer to the customer, right? Because I can also bring my bits and pieces into some of the features and decisions. Um, and this, this is one of the biggest changes: to, to step out of the real hands-on work and, and yeah, bring in the layer on top, to prepare everything and protect everything so that my developers can really focus, or my architects can focus, on the work without any disruption and make the work as smooth and as fast as possible.

Kovid Batra: But I think in your case, um, as compared to, uh, I would say, a single-culture, a uniculture team, um, your case is different. You have people in India, across the globe. This collaboration, uh, I’m sure this becomes a little difficult, and it’s a challenge for a lot of companies after COVID, uh, because things have gone remote and people are hiring from across borders. How, how has the experience been for you, handling these remote teams who are from a different culture? And what, what really worked out, what didn’t work out? Some of those examples from your journey?

Christopher Zotter: Uh, yes, this is definitely a challenge, and I have to say I’m the only German-speaking guy in my team. So we are a German company, but I’m the only German-speaking guy. So in Germany, we have also some Indian colleagues, some from Russia, uh, sorry, from Ukraine. We have some from, uh, Egypt. So it’s mixed. And as, as you said, a lot of people are coming from, from Chennai, India. And imagine, this is about 4,000 kilometers of distance, um, a lot, uh, at the end. And we have two different cultures. And this was the biggest learning I got to know at the beginning, just an example: a yes doesn’t mean a yes. Um, we had some requirements, we talked about that, and I got the feedback, “Yes.” Okay, and then I assumed the ticket would be done, but it was only, “Yes, I got to know that I need to do that.” But not, “Yes, I understand it.” So there’s a communication learning over time, which the whole company has to do. So we all need to transform, here at Sky and also at Comcast Engineering in India, so that we come together, find a way of communication, get to know the, the other, uh, the other culture, the other people, the other behavior, how they’re working.

Um, and of course, I’m also a fan of remote working, but also a fan of getting in touch, uh, getting into, into personal conversations with people, um, not only, uh, not via camera, but in person. So that’s also why we have some mandatory days at Sky where we need to go to the office. But I’ll also be there in India once or twice a year, even if it’s a long travel and, you know, a challenge with family, but, um, the investment is, is worth it. Um, I got to know the, the Indian culture very well. Um, and it’s also kind to them, to show appreciation. So they recognize, “Hey, they really take care about us and we’re not only there as an outsource to get the things done.” And as I said, I’m taking care of, at least my goal is to take care of the people, to treat them with respect and try to find the way together. And if you’re having the 1-on-1 conversations in person, get to know the culture, go to temples, get to know all of the things around, the food. Oh! It’s amazing in India. Um, everything. Um, then you grow together, and after my second visit, I can say, um, the communication was a totally different one. So I got to know then, or I really feel the trust of my team now, to say, “Hey, Christopher, this doesn’t work.” So they say it, and you know, this is a cultural topic, because in India, normally, uh, they’re not used to saying, “No, it’s not working.” They say yes and try to make it work anyhow, but it doesn’t help in the, in the daily business. So it’s better to say, “Uh, I need help,” in the first place, and then we can get it done as a team. But coming to that point, that’s one of the biggest challenges I faced. It’s still not perfect yet, but this is where we always think about: what are their circumstances? Is that really a yes, they got it, or do they need some other kind of help, um, that we can provide to them?

Kovid Batra: I think a very, very good example. Being an Indian, I can totally relate to it. Uh, we go with that mindset and at times it is not, uh, beneficial for the business as such, but there is a natural instinct which says, okay, let’s say yes. Let’s say, “Yeah, we are trying.” And try to fight for it maybe. Not sure what exactly drives that, but yeah, a very, uh, important point to understand and look at.

All right. So I think this is, this is definitely one example which, uh, our audience, if they are leading some teams from India, would keep in mind when they’re leading them. Anything else that comes to your mind that you would want to do to ensure good communication or collaboration across these teams?

Christopher Zotter: I think, when we stick to the topic, it is to be a role model. Um, I said it in my introduction: I deployed something hard-coded to production with an ID. I bring that up always as an example, to say, “Yes, this was a failure, but I took a great learning out of it.” So, to establish these kinds of things, act as a role model, especially as a leader, because then you lead and the people will follow you, and you should. My claim is to act as a leader who is not above the others. I’m the same, I only have another title, but we are all equal. I can’t do my work without you, and the other way around. So we’re one team, no matter which level somebody has, a junior or, uh, whoever that is. So, working together as a team and being there and supporting everybody. And I always say, “If they don’t need me anymore, I did my job perfectly.” Um, so this is what I, what I’m aiming for: to really be a leader, to be a role model, to, to say, “Hey, this doesn’t work.” “Oh, this was my failure of the week.” That’s what we now try to establish, a ‘failure of the week’, where everybody, uh, turns that failure into learning and shares it with the audience. Um, it breaks down barriers a bit. So they see, “Hey, they are doing it, so I can do that as well.” And this takes away the fear of: if I say too many things I can’t do, I get fired. That’s the biggest fear, which I also got to know while talking to the people. And as I know, that’s not the case. I appreciate it more if you say it to me instead of hiding it. So, um, yeah, this is definitely, definitely the thing.

Kovid Batra: True. I think one example that comes to my mind, uh, when I talk to my, um, friends and colleagues who are working across different organizations, like the Amazon, Microsoft world, handling teams from India for the US or vice versa. Um, whenever there are huge transitions, let’s say from legacy systems to new architecture, for, like, 6 to 10 to 12 months, I’ve seen they were in a stressful situation where they’re saying, like, “The team is not here; communicating and managing that stuff is becoming difficult for me.” They were making multiple trips to, to the, uh, to the main home ground and then getting things done. So in your case, you, you guys are remote-first, and I’m assuming most of the time you’re dealing with such situations remotely. So has there been a situation where you had to migrate from some legacy systems to new systems, new architecture, and, uh, there were challenges on that journey?

Christopher Zotter: Um, we’re currently in one. So we are in a big transformation phase at Sky. This has been going on for some years, and, uh, let’s say we’re in the final steps. A few years back we started by challenging every technology we had and asking, “What can we provide best to our customers? What technology is cutting edge? What technology brings us faster cycles of deployment, faster cycles of changes?” We challenged everything, from our content management system all the way up to our CRM system. Um, and we’re currently in the middle of it. The challenge is, obviously, that things were done in the past that are not documented, some processes are just there, and not everybody is keen to challenge all of the things which happened in the past. But it’s exactly the right time to do so, to challenge what was there. Do we really need to convert it and migrate it to a new system or not? Um, and get better at doing that. So take the learnings, challenge it, and bring it to the new system. That’s what we’re in the middle of, and that’s also why I started at Sky, to kick off that journey. At that point in time I was the developer who started it, and now I’m happy to say that we are in very good shape. We are live with most of the things already; the migration is still going on, but our sales journey and so on is already live and going to customers. We have proper monitoring set up. We have good testing in place. So, um, yeah, but again, what I said: I also see the old worlds, the old systems, and we all have to be open-minded about being transferred to new things, to always learn every day. I think your audience knows that pretty well: in software development, every day there is a new tool, every day a new change, a new version and new things you need to update here and there.
To always stick to that level is a challenge we face every day, but we’re trying to do our best to always get the latest version and the best features out for our customers.

Kovid Batra: Sure. I think one very good point you highlighted: as a leader, as a manager, you might realize that this change is for the good, and that it is going to impact us in much better ways from the business point of view, from the engineering point of view. But when it comes to the people who are actually developing and coding, how do you ensure such big migrations go smoothly and people don’t have resistance? Because a plan and a strategy is definitely one thing which you have to craft carefully. But one very important thing is the innate motivation of people to execute it, so that they think of use cases and make it even better than what you have planned for, at least on paper. So what do you do to ensure that kind of culture shift, that kind of culture being instilled in people to embrace the change?

Christopher Zotter: Um, first of all, I think you should be your own customer. You need to consume your own product as well, so dogfood it. Um, it’s a bit difficult with India, but we have possibilities to also use Sky, at least in the office, to play around, to watch the movies, to watch the things, so that we can identify with it. That’s the first thing: we know what we’re doing and how our customers are acting. And I always said, I use a lot of data: hey, how many visits do we have on these pages? Or check this feature, does it have an impact on our sales, whatever that is. We use that data to show: hey, the button you’re changing right now is not only a color change. It has a psychological effect; if you change it to a green one, it gives positive feedback to our customers so that they click and buy, just a stupid example. And when we put that on production or do some user tests, you see your impact directly, and it goes out to millions of customers. Bringing that to the table every day opens people up: hey, the things they’re doing have a real impact, and that’s something everybody can be proud of. And I always say: look, you can show that to your family and your mother, and that’s a good thing about this kind of development, you can showcase the things. If you’re doing an API, it’s also important, but it’s a different thing. So we’re constantly measuring, constantly improving. And this gives the developers a sense of, “Hey, this is really important, what I’m doing here, and this is the impact.” Um, and that’s in order not to, you know, put too much pressure on the people.
We are working with SAFe, the Scaled Agile Framework, where we plan the next three months ahead. The planning is done by the developers; they commit to the plan provided by the business, and they commit to what they can achieve. So they have the plan and they have an influence on it. And this gives us a balance: first to be predictable, but also to make the developers identify with the things they’re developing.

Kovid Batra: Got it. Got it. Makes sense. I think it revolves around creating the right incentives, creating the right experiences for the developers to understand and relate to. So while you’re talking about having those right incentives and measuring the impactful areas, I’m sure you must be using some metrics, some processes, to ensure that you continuously improve on these things and keep working on the impactful areas. So at Sky, or at your previous organizations, what kind of frameworks have you deployed? What kind of metrics do you look at for different initiatives?

Christopher Zotter: Um, first of all, I got to know that only what you measure, you can improve. That’s the one claim I always come back to. Um, it can take a while, but then you also see some improvements. So just an example. I’m a developer, so let’s start with the coding part, probably GitHub. In GitHub there are a lot of different cycles, starting from creating a pull request, reviewing a pull request, checking if it gets rejected or not, how many comments you get, up to the connected CI/CD, where some of our testing frameworks run against the features we want to merge. This is one of the key indicators we look into, where we say, “Okay, how big is a pull request? How much time does it take to get reviewed?” There are KPIs behind all of that, but my goal is to identify whether I need to go deeper into some of the topics to find a root cause. The same happens on the delivery level, so not on the code level but on the delivery level, where we have our tickets and our story points, and where we can roughly say a story point is one day, more or less. If I see there’s one story point but the ticket is in development for five days, I need to go into communication and say, “Hey, are there any challenges? Do you need some support? Is there a knowledge gap?” Or if a feature has too many bugs after it’s merged to our development stage, we probably have a lack of quality. It could come down to a lack of knowledge here and there. So these are my measures, and this is again a culture topic: to use the data the right way and not to say, “I micromanage you. You get fired if you don’t hit the KPIs.” No.
Um, the key with these KPIs is that I get an alert as early as possible, so that I can go into communication, take the people by the hand, and work together on some strategies. It could be knowledge sharing, it could be coaching, it could be whatever. It could also be that I identify we have some issues with one of the product owners, for example, who doesn’t provide all of the details in a ticket before it comes to development. It can be a lot of things, but if I don’t do that, I only get to know about it many weeks later, and then it’s too late. So it gives me an indicator of where I need to get into communication to improve the process, to improve the people, to make them better and, yeah, to support them.
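The early-warning checks Christopher describes (pull-request size, time-to-review, and tickets overrunning their story-point estimate) can be sketched as a simple alerting pass over exported data. This is an illustrative sketch with made-up sample records and thresholds, not Sky's actual tooling:

```python
# Illustrative KPI pass over hypothetical PR and ticket exports.
# Thresholds (400 changed lines, 24h to review, "1 story point ~ 1 day")
# are assumptions for the example, not recommended values.
from datetime import datetime

prs = [
    {"id": 1, "lines_changed": 40,  "opened": "2024-05-01T09:00", "reviewed": "2024-05-01T15:00"},
    {"id": 2, "lines_changed": 900, "opened": "2024-05-02T09:00", "reviewed": "2024-05-06T09:00"},
]

def hours_to_review(pr):
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(pr["reviewed"], fmt) - datetime.strptime(pr["opened"], fmt)
    return delta.total_seconds() / 3600

# Flag PRs that are unusually large or slow to review -- a prompt to start
# a conversation, not to blame anyone.
flagged = [pr["id"] for pr in prs
           if pr["lines_changed"] > 400 or hours_to_review(pr) > 24]
print(flagged)  # [2]

# Flag tickets whose days in development exceed the rough
# "one story point is one day, more or less" heuristic.
tickets = [{"key": "SKY-1", "story_points": 1, "days_in_dev": 5}]
stuck = [t["key"] for t in tickets if t["days_in_dev"] > t["story_points"] + 1]
print(stuck)  # ['SKY-1']
```

In practice the records would come from the GitHub API and the ticketing system rather than literals, but the alerting logic stays this simple.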

Kovid Batra: Makes sense. I think very rightly said: using these metrics always makes sense, but how you use them will ultimately be the core thing, whether they help you or backfire. So yeah, great advice there, Christopher. And in the interest of time, we’ll have to take a pause here, though I really loved the discussion and would love to dive deeper into how you’re managing your teams, but maybe another episode for that. Once again, thanks a lot for taking the time, sharing your experience at Sky, and telling us about yourself. Thank you so much.

Christopher Zotter: Thanks for having me. It was a pleasure to be here. Happy to come a second time to deep dive into some of the topics, if there’s interest. And also kudos to you, it’s a great podcast. I love listening to it myself because I pick up some nuggets each time. So keep pushing that. Thanks a lot.

Kovid Batra: Thank you so much, Christopher.

'DevEx: It's NOT Just About Dev Tools!' with Vilas Veeraraghavan, Startup Advisor, Ex-Walmart

In this episode of the groCTO Originals podcast, host Kovid Batra engages with Vilas, an accomplished engineering leader with significant experience at companies like Walmart, Netflix, and Bill.com.

Vilas discusses the concept of Developer Experience (DevEx) and how it extends beyond simply providing tools. Vilas highlights the importance of enabling developers with frictionless processes and addresses the multidimensional challenges involved. The conversation delves into Vilas’s journey in DevEx, insights from designing platforms and enabling developer productivity, and the necessity of engaging with key opinion leaders for successful adoption. Vilas shares personal anecdotes and learning experiences, stressing the significance of treating developer enablement as a product and encouraging collaboration.

The discussion concludes with advice for those stepping into DevEx roles, underlining the evolving significance of this field in the industry.

Timestamps

  • 00:00 — Introduction
  • 00:51 — Meet Vilas: The Man Behind the Expertise
  • 04:28 — Diving into DevEx: Concepts and Definitions
  • 06:32 — The Evolution of DevEx: From Platform to Productivity
  • 13:19 — Challenges and Strategies in DevEx Implementation
  • 31:34 — Metrics and Measuring Success in DevEx
  • 37:46 — Final Thoughts and Parting Advice

Links and Mentions

Episode Transcript

Kovid Batra: Hi everyone, this is Kovid, back with a new episode of the groCTO podcast. Today with us, we have a very special guest. He’s an accomplished engineering leader who has been building successful teams for the last 15 years at Walmart, Netflix, and Bill.com, and with his expertise in DevEx and dev productivity, he’s now very well renowned. We found Vilas through LinkedIn and his posts around DevEx and dev productivity, and I just started resonating with them. So, welcome to the show, Vilas, great to have you here.

Vilas Veeraraghavan: Thanks Kovid. I am grateful for getting to meet people like yourself who are interested in this topic and want to talk about it. Um, so yeah, I’m looking forward to having a discussion.

Kovid Batra: Perfect. Perfect. But Vilas, before we get started, um, this is a ritual for groCTO podcast.

Vilas Veeraraghavan: Okay.

Kovid Batra: Uh, we would like to know you a little more, beyond what LinkedIn tells about you. So tell us about yourself: your hobbies, how do you unwind after your day? Something from your childhood memories that tells us who the Vilas of today is. So, yeah.

Vilas Veeraraghavan: Okay. Okay. That’s, I was not prepared for it, but I’ll, I’ll share it anyway. Um, so I am a, the thing that most people don’t know about me, uh, is that I am a big movie fan. Like I watch movies of all languages, all kinds, and I pride myself on knowing, uh, most of the details around why the movie was made. Um, like, you know, I really want to get into those details. Like I want to get the inspiration of behind the movie. It’s almost like appreciating art. You want to get into like, why did this person do this? Uh, so I’m very passionate about that. Um, so that’s, that’s something that people don’t necessarily know. Um, and apart from that, like, I, I enjoy, uh, running and walking. It sounds weird to say I enjoy walking, but I genuinely do that. Like that’s my, that’s the place where I do most of my thinking, analysing, all of that.

Kovid Batra: Perfect. Which one’s the weirdest movie that you have watched and like found out certain details which were like very surprising for you as well?

Vilas Veeraraghavan: I don’t know if I would say weird, but you know, all of, every director, every film director has one movie that, you know, they have always yearned to make. So they, their entire career goes in sort of trying to get to that movie, right? Because it’s their magnum opus, right? That’s the, that’s the term that people use. Um, I always find that fascinating. So I always try to look for, for every director, what was their magnum opus, right? Uh, so for example, for Raj Kapoor, it was Mera Naam Joker, and that was his magnum opus. Like what went into really making that film? Why did he make it? Like what? And you’ll realize also that their vision, the director’s vision is actually very, um, pure in those, in a sense that they will not listen to anyone else. They will not edit it short. They will not cut off songs or scenes. It’s such a, uh, important thing for them that they will deliver it. So I always chase that. That’s the story I chase.

Kovid Batra: Got it. Perfect. I think that was a very quick, interesting intro about yourself. Good to know that you are a movie buff. And now let’s move on to the main section. Just so the audience knows, we’re going to talk about DevEx and dev productivity, which is Vilas’s main area of expertise. And his quote from my last discussion with him was that DevEx is not just about some tools, some dev productivity tools, being brought in. So with that note, let’s get started, Vilas.

Vilas Veeraraghavan: Sure.

Kovid Batra: What according to you defines DevEx? Like let’s start with that first basic question. What is DevEx for you?

Vilas Veeraraghavan: Okay. So before I jump into that, I want to give the context behind that statement I made, right? Um, it’s not about throwing tools at someone and expecting that things will get better. I learnt that over time. I was a big fan of automation and creating tools to help people, and I would often be surprised by why people were not using them the way I thought they should. And then I realized it’s about the fact that the process they are following today does not allow them to include this. If they bring in a new tool, there is too much friction. And then I realized it’s also about the people, about management, all of that stuff. So it’s a multidimensional problem. I just want to set that context, because that’s how I define DevEx, right? DevEx, or as I prefer to call it, dev enablement, is about making sure that your developers have the best possible path through which they can deliver features to production. Right? And so it’s not about productivity. I think productivity is inherent in the fact that if you enable someone, if you provide them with the shortest paved road to their destination, they will become productive. It’s sort of an automatic extrapolation, if you will. So that’s how I define DevEx. Um, but it’s important, because that was my journey of learning as well.

Kovid Batra: So I think, before the discussion started, we were talking about how you got into this role and how DevEx came into play. Let the audience also hear it from you. We know DevEx is a very new term; it has been introduced only recently. But back in the day, when you started working on things, what defined DevEx at that time, and how did you get involved in it?

Vilas Veeraraghavan: Um, so back in the day, when I started working in a software organization, the thing that drew me to what we would call ‘platform’ back then was the fact that there were a lot of opportunities to see quick wins from doing improvements for other teams. For example, if I improved something at the platform layer, it would not benefit one team; it would benefit all teams, multiple teams. So the impact is actually pretty widespread, and it’s immediate. You can see the joy of making someone happy. Someone will come to you and say, “Oh, I was spending so much time, and now I don’t have to do this.” So that drew me in. It wasn’t called DevEx. It wasn’t even called Dev Productivity at that time. I’m talking about the 2007–2008 timeframe. But what happened over time was that I realized how much of a superpower automation and tooling can be for a company to invest in, because it has a multifold impact on how quickly people can get features out. How quickly you innovate, how efficient your engineering team is, how excellent the practices are within the engineering organization: they can all be defined by providing your engineers something they can use every day, so they don’t have to reinvent new ways and don’t have to relitigate the same problem again and again.

Um, so that drew me in. Over time I’ve seen it evolve from just platform. There used to be common libraries that people would write, which other teams would ingest, and then they would release. We did not have continuous delivery. Funnily enough, we used to ship CDs, compact discs, for those who are new to this process. So we would actually ship physical media: we would burn all the software onto it, ship it to the data center, and an admin would install it. So there was no concept of that level of continuous delivery, but we did have CI, and we did have a sense of automation within the actual software delivery pipeline. That is still valid.

Kovid Batra: There is one interesting question. This is something that I have also felt, coming from an engineering background: people usually don’t have an interest in moving into platform teams, DevOps, that kind of thing, right? You say that you are passionate about it, so I want to hear it from you: what drives that passion? You just mentioned the impact you create for all the teams working there. Is that the key thing, or is it something else driving that passion?

Vilas Veeraraghavan: I mean, I feel like that is the key thing, because I derive a lot of joy out of it. Sometimes the impact of a change is not visible until it’s actually live and people use it. For example, let’s say you’re moving from a GitLab pipeline to using Argo CD, or something like that. You’re doing a massive migration. It can be very troubling when you step back and look at it as a big picture. But when all of the change is done and you see how it has impacted things, you see how fast you’re running. So that, obviously, is a big motivator. But here’s the other thing, right? And this is a secret that I hope others realize was right there all along; they just haven’t seen it. The secret is that by being in a space like DevEx, you actually solve problems across multiple different domain areas. For example, at Walmart, I had a chance to deeply understand supply chain issues; supply chain teams had issues that were different from, say, teams doing payment management. The problems are different, but to look at a problem, you have to understand deeply what that technology is. So you end up having really broad knowledge across multiple domain areas. And when you solve a problem for one domain area, you will be surprised to find: oh, this actually solves it for five other areas as well. Right? So it’s a fascinating thing that I think people don’t realize immediately. It feels less glamorous than something else, like a feature team maybe. But in fact, in my opinion, it’s actually more powerful.

Kovid Batra: Got it. Is this the effect of working with large organizations particularly? Like, uh..

Vilas Veeraraghavan: It’s possible.

Kovid Batra: I’m not making any assumptions here but I’m just asking a question.

Vilas Veeraraghavan: Yeah. It’s possible.

Kovid Batra: Okay.

Vilas Veeraraghavan: Yeah, it is. Uh, yes. I will say that there is definitely a privilege I should call out here: the privilege for me was to work in companies which allowed me the ability to learn this, right? There was a lot of bandwidth offered to me to learn all of this. Netflix was, and is, always good about a lot of transparency across the organization. As an engineer working for a company like Netflix, you absorb a lot of information, and if you’re curious, you can do a lot. And obviously Walmart, Fortune One, the biggest company I’ve ever worked for; I think it is the biggest company in terms of size as well. Again, you have the ability to learn, and you work your way out of ambiguity by defining structure yourself. I think I’ve been lucky in that way as well, to learn from all of the folks who worked there, and obviously amazing, talented people work in these places. So you keep hearing about things, you keep learning about things, and it makes you better as an engineer as well.

Kovid Batra: Makes sense. So let’s deep dive into some of these situations where you applied your great brain to designing the platform teams, defining things for these platforms. Can you bring up some examples from your journey at Netflix or Walmart or Bill.com where you had a great challenge in front of you? What decision-making frameworks did you deploy at that point in time, and how did things pan out during the journey? This might be a long question, but I just want to dive into any one of those journeys, if you’re okay with it.

Vilas Veeraraghavan: Okay. I think you’ve had Bryan Finster on in the past. This was something that we traversed together, along with many other people; we were all part of the same team when we did this. So I’ll start with Walmart as an example. I’ll keep it to generics and not give you specifics. The challenge at a big company like Walmart is that there are a lot of established practices, established processes, and established tools that teams use and businesses rely on, right? Each of these areas within the company is a business by itself. They obviously want to get the best possible output for their customers, and they rely on a bunch of processes, tools, people, all of that. Now, if you go in and say, “Hey, I’m going to introduce something that’s brand new,” or if you’re going to change something drastically, you are creating unnecessary churn and unnecessary friction within the system. So in order to think about how we wanted to do dev enablement within Walmart, it was important to understand that you had to address that friction. If you are providing a solution that replaces an existing solution and does just enough, that’s not going to cut it. It has to be a sea change; it has to be something that significantly changes how the company does software delivery. One thing I’ll say is that I was very lucky to work for leaders at Walmart who also understood this at that time. For all those who are in this process right now: you cannot do it unless you have buy-in from your leadership and sponsorship from your executive teams. That helped us a lot.

Now, once you have buy-in, you still have to produce something that is of value, right? And so that is where this next thing is important. Initially, naively, my expectation was: we build some amazing tools, we provide them to these teams, and of course they’ll be super happy, the word of mouth will spread, and that’s it. All done. What I found was that in order to solve a problem where engineers were spending a lot of time on toil, doing a lot of manual processes or repeated work, throwing a tool at them was actually exacerbating the cognitive load problem, right?

Kovid Batra: Yeah.

Vilas Veeraraghavan: So now, while they maintain existing solutions, they have to learn something new, migrate to it, then convince their leaders and their teams: “Yeah, this is how we have to do things,” and then move forward. So you’re making that bandwidth problem worse: I’m a developer, I have a certain amount of time to spend on feature delivery, I don’t have time for everything. So now I’m squeezing this into my 20 percent time, or my own free time outside of work, to learn what this new thing is about. What that meant was that adoption would not succeed. And if adoption doesn’t succeed, if your customers are not using you, you’re a failed product, right? So what we realized was that there were two other aspects we had not thought about: one was process, and the other was people. When I say people, I mean it could be management, it could be a key opinion leader within the space, right? That’s what we attacked. And you can obviously ask Bryan more about it; he knows all about it. The way we attacked it was by creating programs which were more grassroots, a more bottom-up view of saying, “Hey, we are starting to use these new tools. Come join us as we learn together. Let’s discuss what problems we have. Let’s talk about successes. Let’s talk about how we want to do this well.” And we were open to feedback. Inside my organization, the dev enablement area, there was also a product organization. We had product owners with each of the teams building these tools, and the product owners had a pulse on the customers’ needs.

So that is how we found success over time. We did not succeed at the start, and there were a lot of challenges we had to work through, but adoption only kicked up when we were able to provide a solution that was X times better than where we were. For example, if you were maintaining five different configs, now you just have to maintain one YAML file that’s checked into GitHub, or something like that. That’s a big difference productivity-wise: fewer errors. The second thing is: how many times do I have to look at the build, and then the security review after the build, and all that? So you say, okay, let’s do the security scanning before the build. Even before you build a binary, you know whether it’s safe to build, based on your code scan. Things like that we did to improve the process itself. And then we educated all of our teams about it. We upskilled them; we gave them a chance to upskill themselves by giving them lots of references. We showed them what the industry standards are. By showing them the industry standards, you create a need inside them: “Hey, we need to be like that. Why can’t we do this?” And so that essentially became a motivating factor, and most managers and directors and VPs started saying, “Hey, I want all of my teams to do exactly that. We need to be that kind of a team.” And that introduced a lot of gamification, right? Because when you look at dashboards that look slick and you’re like, “Hey, why can’t I do this? Why can’t my team do this?”, it created a very natural tension, a very natural competition within the company, which served adoption well.
Once adoption started to grow beyond a certain threshold, we didn’t have to go asking for customers; customers came looking for us. And so that’s how we got to the point where there was more uniformity in how software is delivered.
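The "scan before build" ordering Vilas describes can be illustrated with a toy pipeline: the scan gates the build step, so an unsafe binary is never produced. The scanner rule and function names here are hypothetical, purely for illustration:

```python
# Toy pipeline sketch (not Walmart's actual system): run a source scan
# first, and only build a binary if the scan comes back clean.

def scan_source(files):
    """Pretend code scanner: flags any file containing a hard-coded secret."""
    return [name for name, text in files.items() if "SECRET_KEY=" in text]

def build(files):
    """Pretend build step: produce an artifact name from the source files."""
    return "binary-of-" + "-".join(sorted(files))

def pipeline(files):
    # Shift-left ordering: scan gates the build, not the other way around.
    findings = scan_source(files)
    if findings:
        return {"status": "blocked", "findings": findings}
    return {"status": "built", "artifact": build(files)}

safe = {"app.py": "print('hello')"}
unsafe = {"app.py": "SECRET_KEY=abc123"}
print(pipeline(safe)["status"])    # built
print(pipeline(unsafe)["status"])  # blocked
```

In a real CI system the same ordering would be expressed as job dependencies (the build job requiring the scan job to pass), but the gating logic is the same.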

Kovid Batra: Perfect. So I think it’s about defining the right problems for the teams you’re going to work with, defining a priority on those problems, and sliding very swiftly into their existing system so that adoption is not a barrier in the first place. So the basic principles of how you bring a product to market. Similarly, you just have to..

Vilas Veeraraghavan: It is the exact same.

Kovid Batra: Yeah.

Vilas Veeraraghavan: Uh, platform, dev enablement, tooling, all of this: these are all products. Your developers are your customers. If your customers are not happy and they don’t use you, yeah, you are a failed organization then. That’s how it is, right? If you feel that, just because you are part of a DevEx team, what you say has to be the law of the land, it doesn’t work that way. The customers vote with the time that they give you. And if, let’s say, in an organization you see that some tools have been released by the developer productivity or DevEx or enablement or platform engineering organization, but most people are using workarounds instead, then I hope those teams understand that there needs to be some serious change in that DevEx organization.

Kovid Batra: Cool. I’ll just go back to the first point, where you start. Is there any specific way to identify which teams are dealing with the most impactful problems right now, and then you go about tackling that? Or is it more like you are talking to a lot of engineering leaders around you and then you think, “Okay, this is something that we can easily solve and it seems impactful. Let’s pick this up.”? How does that work?

Vilas Veeraraghavan: That’s actually a very important thing to think about, and thanks for reminding me of it, because I forgot to say this last time. You do need some champions, and that’s why I said key opinion leaders, right? In the company, you need champions who can help do that early adoption and then find success. That comes from not just impact. Let’s say that someone is doing a hundred million dollars of business every year; if a change they make saves a significant amount of money, that can be a big impact. But it’s also about what their ambition is. If I am a hundred million dollar business, but my ambition is to be a hundred million dollar business next year as well, I may not be the person who’s pushing at the boundaries, right?

Kovid Batra: Got it.

Vilas Veeraraghavan: They may be saying, “Oh yeah, it’s fine. I mean, everything is working just fine. I don’t want to break anything. I don’t want to touch anything. I don’t want to innovate. Let’s keep going.” But on the other hand, you will see, and this is common in many big companies, there’ll always be pockets of rapid innovation, right? And so, with these folks who are in that space and the decision makers in those spaces, having a really deep, very open discussion with them, uh, almost like a partnership, right? Saying, “Hey, I’m building this tool. Let’s imagine you have to use this tool. What would you want me to change in this so that it fits you?” And obviously, you’re going to take all of their input and decide which ones will be more useful to others as well. You’re not going to, obviously, build something for just one team, but at the same time you get to know, like, you know, what is it that is not getting them to adopt this right now? So you do need a set of those key opinion leaders very early in the process because they are not just going to influence their team; they are going to influence other teams. And that’s how the word of mouth is going to spread. So that’s the first step. So it’s not just impact; it’s impact with ambition, which is where..

Kovid Batra: There should be some inherent motivation there to actually work on it, only then..

Vilas Veeraraghavan: I will, I will say one other thing, Kovid. If there’s a team that doesn’t necessarily have ambition, but it’s doing more of a top-down, like, “get this done,” right? I have often found that leaders saying “get this done” can sometimes backfire because the team feels like it’s an imposition on them. They may be very happy with their current state of tools, but it’s an imposition. Like, now, why do you have to change this? Everything works just fine, right? You always have that inertia; not everyone wants change, and sometimes change might not be needed either. You might actually already be efficient, right? But that top-down approach doesn’t always work, which is why, for me, the greatest learning was seeing how much the bottom-up approach worked at Walmart. It was actually very encouraging because I realized that you have to convince an engineer to see this for themselves. That’s why I think key opinion leaders are not necessarily VPs, though they could be; it could be someone who’s well-respected in an area. It could be someone who is, um, like a distinguished engineer, uh, right, whose word carries a lot of value within an organization. Those are the people who tend to be those key opinion leaders, right? Uh, so top-down also doesn’t work. You can’t just be like, uh, your VP is ambitious, but you are not. That, that doesn’t work either.

Kovid Batra: Makes sense. Makes sense. All right. So when you have defined the team priority problem that you need to solve, you start hustling, start building. Of course, that phase involves a lot of to and fro, patience, transition, MVPs. Anything from that phase of implementation that came out to be a great learning for you that you would like to share?

Vilas Veeraraghavan: There was obviously a lot of learning. It is never a straight path, right, when you’re doing something like this. But one thing that evolved, uh, during that time: at the start, I was definitely operating in a bit of a “But this is the best way to do it” mindset. We were so convinced that there is no other way but this to do it. That slight arrogance sometimes leads you down a path where you’re not listening to what people are saying, right? If people are saying, “Hey, I’m facing this pain,” and you’re hearing that across different organizations, different areas, and you dismiss it as, “Oh, it’s just a small thing. Don’t worry about it,” right? That small thing can snowball into a very big problem that you cannot avoid, eventually. What I learned over time: I used to go into meetings being very defensive about what we already created, because the way I would look at it was, “Oh, well, that team can do it. Why can’t you?” And, uh, that was very naive at the time. But then, in one of those meetings, for some reason, I basically said, “Okay, fine. Tell me exactly how you would have solved the problem.” Maybe I was annoyed, I don’t know, but I said, “Okay, how would you solve the problem if you were doing this?” And that person was so happy to hear that. That person actually sat down with me for the next two hours and designed exactly how things could have been better, all of that. I was happy to go into detail, but it made me realize these are actually all allies that I should be adding to my list, right, as opposed to saying, “No, no, you have to use this. Go away.” That was a big mistake I made. I probably did that for like six months. I will say that that was a bad idea. Uh, don’t do it.
Uh, but after that, the team was able to flourish because everyone saw us as partners in this thing, right?

So then we would go and we would say, “Okay, fine. You have this tool that we built, but don’t think about that. Think about what is the ideal tool that you need and let’s find out how much of this, this satisfies, right. And then whatever it doesn’t, we will accept that as feedback. And then we’ll go back and we’ll see and think about it and all that. And we will share with you what our priorities are. You tell us if this is making sense to you or not, and then we’ll keep this communication going.” That is a big evolution.

Kovid Batra: I totally relate to that. I have been back and forth on this thought of bringing in opinions and then taking a decision, rather than just taking a decision and then, like, pushing it. I think it’s a matter of the kind of people you’re working with. You have to make a wise choice about whom you want to listen to and whom you don’t. Both things can backfire. I’ve actually experienced both.

Vilas Veeraraghavan: Oh yeah. You don’t want to. Yeah, obviously, it goes without saying that there are going to be some people who are, uh, giving you the right advice, right? And some people are just complaining for the sake of complaining. That’s it.

Kovid Batra: Yeah.

Vilas Veeraraghavan: Right? Uh, oh yeah, you have to separate that. But I’m saying there’s two ways to do this, right? Like when you find that initial adoption starts hitting and all that, you can’t go into your shell and be like, “Okay, that’s it. My job is done. People will keep..” That is what we felt for a brief amount of time, right? When we said, “No, it’s all working just fine. Like, what are you complaining about?” And then I realized, I don’t know if maybe other folks in my team realized it earlier, but I realized it as a strategy: we needed to change that. And that put a very different face on our team because our team then started getting welcomed into meetings which we originally were never a part of. It allowed us to see, uh, into their decision process because they were like, “Oh no, it’s important for you to know this because there is a lot of dependency on tools. We can’t change this process, but maybe we can adjust the tools and the settings to help us with this.” Right? So it was a very different perspective. And that learning, I was able to carry it into other, uh, initiatives, projects, companies, all of that. It has definitely served me well. Even now, if I’m listening to someone, I’ll usually say, “What would you do if you were in this space?” Right. And then let’s talk about it. Very open. Um, but it is important to leave your ego outside.

Kovid Batra: Yeah, totally. That’s a very good point you just mentioned, taking that constant feedback in some form or the other. But when you’re dealing with large teams and large systems, I get the sense that you need a system in place along with 1-on-1s and discussions with people. I’m sure you are focusing on making delivery more efficient and faster, with better quality and fewer failures, right? At the beginning of a journey, let’s say any project, there must be some metrics that you define: “Okay, this is what the current scenario is. During this phase, these are our KPIs, which we need to look at every 15 or 30 days.” And then finally, when you are putting an accomplishment mark on the change that you have brought in, there is a goal that you must be hitting, right? So during this whole journey, what were your benchmarks? What were your ways of evaluating that system data? Most of the time it’s for our own benefit; we know whether things are working or not. And at the same time, when you’re working with so many teams and so many stakeholders, you have some factual things in front of you saying, “Okay, this is what has changed.”

Vilas Veeraraghavan: Sure. Um, I’ll say this: the team used to do regular roadshows, which means we would go around to different teams. We would have weekly and monthly meetings where we would showcase what’s coming, what’s happened, and how it fits, and we would always try to demo it with something the team we’re talking to is actually doing, right, saying, “Hey, look, this is a build that you wanted to run. It’s slow right now and you wanted it to speed up. This is how much we sped it up,” and all that. So that is the roadshow thing. The reason I’m mentioning that is because that brings me to the metrics, right? Metrics, in the sense of day-to-day metrics, evolved over time, uh, till, like, when I left. At the very start, our metric was adoption, obviously, when we started creating the tool and sending it out. So for us, it was adoption. The mission statement for us was we wanted to get code into production in less than 60 minutes. When I say ‘code to production’, it is not just any code. It’s code that is tested. Which means we had to build it fast. We had to run unit tests. We had to run integration tests. We also, uh, intended to run performance evaluation, performance testing, right? And then deploy it without having to go trouble the team again for details, right? Deploy it, or at least make it ready to deploy. And then you obviously have some gate that will say, “Okay, ready to deploy. Check.” Someone checks it and then it goes to production, right? We wanted this process to take 60 minutes or less. So that was the very mission statement kind of thing.

Kovid Batra: Got it, got it, yeah.

Vilas Veeraraghavan: But the metrics evolved over time. So initially, it was adoption, like how many people are using this tool? Um, it was about, uh, some common things, for example, um, a lot of folks within Walmart were using different code repositories, right? All of them, because they’re maintained by different parts of the organization. But because we unified those, we started checking, okay, is everything in one place? What is this amount of code that is maybe not in a secure space? Or something like that. Like that became an open thing to share. And we got a lot of partnership from our sister teams in InfoSec, in, uh, like all of these compliance areas, they started helping us a lot because they established policies that became metrics for us to measure. So just like I said, how secure is the code base? That is a great policy saying, “We need to have secure codebases that do not have high-level and medium-level vulnerabilities.” That meant we could measure those by doing code scans and saying, “Okay, we still have these many to go. We can point out exactly what teams need to do what.” And then we would slide in our tool saying, “Hey, by the way, this tool can do it for you if you just did this.” And so, immediately, it affected adoption, right? So, so that is how we started off with metrics.

Uh, but over time, as we consolidated the space, we realized that, I mean, once adoption was at like 75, 80 percent, we didn’t need to track it anymore. Then it’s diminishing returns. The long tail is long; it’ll take its time. At that point we switched, uh, to looking at more efficiency metrics. Which means we wanted to see how much the scale is costing us as a team. Like, are we scaling well to handle the load of builds that are coming to us, right? Are the builds slowing down week over week for other teams, right? Things like that. We wanted to get a sense of how much time the developer is spending on things like long builds. If you’re like, “Oh, I start this build and I have to go away for an hour and come back,” it is a serious loss of productivity for that person. The context-switch penalty is high, right? And when you come back, you’re like, “I forgot what I was even doing.” So we wanted to minimize that. So it became about efficiency metrics, and that led to the goals and the strategy that we had to decide for the next year: okay, we need to fix this one next. So it was adoption as much as saying, “Okay, make sure that we are still continuing on the roadshows and things like that, but we’ll shift our attention to this.” In the roadshows, we would call out those metrics. You would start the discussion by saying, “Here is where we are right now.” There were publicly accessible dashboards, which is another thing we truly believed in as a DevEx team or a dev enablement team: every action that we take is very public. It should be public to the whole organization, because that’s our customer, right? We need to tell them exactly what we’re doing. The investment in money comes from these people, right? 
The other VPs or the execs are sponsoring this, so they need to see where their money is going. Transparency was key, and that’s why metrics were helpful. We showed them all the way from adoption to tuning to efficiency. That’s how, sort of, the thing went.

Kovid Batra: Cool. It was really interesting to hear this whole journey and the phases you have had. Just in the interest of time, we’ll have to take a pause here, but, uh, this was an amazing discussion. Would you like to share some parting advice for people who are maybe stepping into this role, or have been in this role for some time?

Vilas Veeraraghavan: First of all, thanks, Kovid. This is great. Uh, I really enjoyed this conversation. Um, and I also appreciate the curiosity you had, uh, to have this discussion in the first place. So, thanks for that. Um, my message is simple, right? I don’t know how this happened, but DevEx never used to be cool in the past, right? In the sense that DevEx felt like one of those things where people would say, “Hey, you’re doing DevEx. You’re not necessarily releasing features.” But in reality, there were tons of features that the feature teams needed to deliver their features, features we had to create before they did. DevEx teams need to be three to six months ahead of where the feature teams are so that when it comes to delivery, feature teams are not waiting on tools. We have to have them ready. So I believed it was cool back then, but I’m very happy to see that DevEx is actually turning cooler because there is a lot of industry backing for it, right? There’s a lot of push, a lot of people talking about it, like yourself, and like we are doing right now. My only advice, for those who are interested in it: I would suggest at least speaking to the right people so you know what the opportunities look like, right, before you say no. That’s all I ask.

Kovid Batra: Perfect. All right, that’s our time. Bye for now. But we would love to have you on another episode talking more about DevOps, DevX, dev productivity. Thanks, Vilas. Thank you for your time.

Vilas Veeraraghavan: Yeah. Thanks, Kovid. I’m happy to return anytime.


Developer Productivity in the Age of AI

Are you tired of feeling like you’re constantly playing catch-up with the latest AI tools, trying to figure out how they fit into your workflow? Many developers and managers share that sentiment, caught in a whirlwind of new technologies that promise efficiency but often lead to confusion and frustration.

The problem is clear: while AI offers exciting opportunities to streamline development processes, it can also amplify stress and uncertainty. Developers often struggle with feelings of inadequacy, worrying about how to keep up with rapidly changing demands. This pressure can stifle creativity, leading to burnout and a reluctance to embrace the innovations designed to enhance our work.

But there’s good news. By reframing your relationship with AI and implementing practical strategies, you can turn these challenges into opportunities for growth. In this blog, we’ll explore actionable insights and tools that will empower you to harness AI effectively, reclaim your productivity, and transform your software development journey in this new era.

The Current State of Developer Productivity

Recent industry reports reveal a striking gap between the available tools and the productivity levels many teams achieve. For instance, a survey by GitHub showed that 70% of developers believe repetitive tasks hamper their productivity. Moreover, over half of developers express a desire for tools that enhance their workflow without adding unnecessary complexity.

Understanding the Productivity Paradox

Despite investing heavily in AI, many teams find themselves in a productivity paradox. Research indicates that while AI can handle routine tasks, it can also introduce new complexities and pressures. Developers may feel overwhelmed by the sheer volume of tools at their disposal, leading to burnout. A 2023 report from McKinsey highlights that 60% of developers report higher stress levels due to the rapid pace of change.

Common Emotional Challenges

As we adapt to these changes, feelings of inadequacy and fear of obsolescence may surface. It’s normal to question our skills and relevance in a world where AI plays a growing role. Acknowledging these emotions is crucial for moving forward. For instance, it can be helpful to share your experiences with peers, fostering a sense of community and understanding.

Key Challenges Developers Face in the Age of AI

Understanding the key challenges developers face in the age of AI is essential for identifying effective strategies. This section outlines the evolving nature of job roles, the struggle to balance speed and quality, and the resistance to change that often hinders progress.

Evolving Job Roles

AI is redefining the responsibilities of developers. While automation handles repetitive tasks, new skills are required to manage and integrate AI tools effectively. For example, a developer accustomed to manual testing may need to learn how to work with automated testing frameworks like Selenium or Cypress. This shift can create skill gaps and adaptation challenges, particularly for those who have been in the field for several years.

Balancing Speed and Quality

The demand for quick delivery without compromising quality is more pronounced than ever. Developers often feel torn between meeting tight deadlines and ensuring their work meets high standards. For instance, a team working on a critical software release may rush through testing phases, risking quality for speed. This balancing act can lead to technical debt, which compounds over time and creates more significant problems down the line.

Resistance to Change

Many developers hesitate to adopt AI tools, fearing that they may become obsolete. This resistance can hinder progress and prevent teams from fully leveraging the benefits that AI can provide. A common scenario is when a developer resists using an AI-driven code suggestion tool, preferring to rely on their coding instincts instead. Encouraging a mindset shift within teams can help them embrace AI as a supportive partner rather than a threat.

Strategies for Boosting Developer Productivity

To effectively navigate the challenges posed by AI, developers and managers can implement specific strategies that enhance productivity. This section outlines actionable steps and AI applications that can make a significant impact.

Embracing AI as a Collaborator

To enhance productivity, it’s essential to view AI as a collaborator rather than a competitor. Integrating AI tools into your workflow can automate repetitive tasks, freeing up your time for more complex problem-solving. For example, using tools like GitHub Copilot can help developers generate code snippets quickly, allowing them to focus on architecture and logic rather than boilerplate code.

  • Recommended AI tools: Explore tools that integrate seamlessly with your existing workflow. Platforms like Jira for project management and Test.ai for automated testing can streamline your processes and reduce manual effort.

Actual AI Applications in Developer Productivity

AI offers several applications that can significantly boost developer productivity. Understanding these applications helps teams leverage AI effectively in their daily tasks.

  • Code generation: AI can automate the creation of boilerplate code. For example, tools like Tabnine can suggest entire lines of code based on your existing codebase, speeding up the initial phases of development and allowing developers to focus on unique functionality.
  • Code review: AI tools can analyze code for adherence to best practices and identify potential issues before they become problems. Tools like SonarQube provide actionable insights that help maintain code quality and enforce coding standards.
  • Automated testing: Implementing AI-driven testing frameworks can enhance software reliability. For instance, using platforms like Selenium and integrating them with AI can create smarter testing strategies that adapt to code changes, reducing manual effort and catching bugs early.
  • Intelligent debugging: AI tools assist in quickly identifying and fixing bugs. For example, Sentry offers real-time error tracking and helps developers trace their sources, allowing teams to resolve issues before they impact users.
  • Predictive analytics for sprints/project completion: AI can help forecast project timelines and resource needs. Tools like Azure DevOps leverage historical data to predict delivery dates, enabling better sprint planning and management.
  • Architectural optimization: AI tools suggest improvements to software architecture. For example, the AWS Well-Architected Tool evaluates workloads and recommends changes based on best practices, ensuring optimal performance.
  • Security assessment: AI-driven tools identify vulnerabilities in code before deployment. Platforms like Snyk scan code for known vulnerabilities and suggest fixes, allowing teams to deliver secure applications.
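The predictive-analytics idea in the list above can be sketched in a few lines. This is a toy illustration, not the method any particular tool uses; the function name, the sprint length, and the velocity figures are assumptions invented for the example.

```python
# Toy sketch of predictive analytics for sprint/project completion:
# estimate a finish date from historical throughput (story points per
# sprint). All numbers and names here are illustrative assumptions.
from datetime import date, timedelta
from statistics import mean

def forecast_completion(remaining_points: float,
                        historical_velocity: list[float],
                        start: date,
                        sprint_days: int = 14) -> date:
    """Project a completion date from average points delivered per sprint."""
    avg_velocity = mean(historical_velocity)          # points per sprint
    sprints_needed = remaining_points / avg_velocity  # fractional sprints
    return start + timedelta(days=round(sprints_needed * sprint_days))

# Example: 60 points left, past sprints delivered 18-22 points.
print(forecast_completion(60, [18, 20, 22], date(2024, 1, 1)))  # 2024-02-12
```

Real tools refine this with uncertainty ranges (e.g. Monte Carlo simulation over historical cycle times) rather than a single point estimate, but the input data is the same: past throughput and remaining work.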

Continuous Learning and Professional Development

Ongoing education in AI technologies is crucial. Developers should actively seek opportunities to learn about the latest tools and methodologies.

Online resources and communities: Utilize platforms like Coursera, Udemy, and edX for courses on AI and machine learning. Participating in online forums such as Stack Overflow and GitHub discussions can provide insights and foster collaboration among peers.

Cultivating a Supportive Team Environment

Collaboration and open communication are vital in overcoming the challenges posed by AI integration. Building a culture that embraces change can lead to improved team morale and productivity.

Building peer support networks: Establish mentorship programs or regular check-ins to foster support among team members. Encourage knowledge sharing and collaborative problem-solving, creating an environment where everyone feels comfortable discussing their challenges.

Setting Effective Productivity Metrics

Rethink how productivity is measured. Focus on metrics that prioritize code quality and project impact rather than just the quantity of code produced.

Tools for measuring productivity: Use analytics tools like Typo that provide insights into meaningful productivity indicators. These tools help teams understand their performance and identify areas for improvement.
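As a concrete example of one such meaningful indicator, here is a minimal sketch computing lead time for changes (commit to deploy) from timestamp pairs. The record layout and field names are assumptions made up for the example, not the schema of any specific analytics tool.

```python
# Minimal sketch of a quality-of-delivery metric: median lead time
# (hours between commit and deployment). The dict keys are illustrative
# assumptions, not a real tool's data model.
from datetime import datetime
from statistics import median

def median_lead_time_hours(changes: list[dict]) -> float:
    """Median hours between commit and deployment across changes."""
    deltas = [
        (c["deployed_at"] - c["committed_at"]).total_seconds() / 3600
        for c in changes
    ]
    return median(deltas)

changes = [
    {"committed_at": datetime(2024, 5, 1, 9),  "deployed_at": datetime(2024, 5, 1, 13)},
    {"committed_at": datetime(2024, 5, 2, 10), "deployed_at": datetime(2024, 5, 3, 10)},
]
print(median_lead_time_hours(changes))  # median of 4h and 24h -> 14.0
```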

How Typo Enhances Developer Productivity

There are many developer productivity tools available for tech companies. One of these tools is Typo, the most comprehensive solution on the market.

Typo surfaces early indicators of developers’ well-being and actionable insights on the areas that need attention, drawing on signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams. It measures the overall team’s productivity while keeping individuals’ strengths and weaknesses in mind.

Here are three ways in which Typo measures the team productivity:

Software Development Lifecycle (SDLC) Visibility

Typo provides complete visibility into software delivery. It helps development teams and engineering leaders identify blockers in real time, predict delays, and maximize business impact. Moreover, it lets teams dive deep into key DORA metrics and understand how well they are performing against industry-wide benchmarks. Typo also gives them real-time predictive analysis of how the team is performing, helps identify the best dev practices, and provides a comprehensive view across velocity, quality, and throughput.

This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks. It ensures that resources are utilized efficiently, resulting in enhanced productivity and better business outcomes.

AI-Powered Code Review

Typo helps developers streamline the development process and enhance their productivity by identifying issues in code and auto-fixing them using AI before merging to master. This means less time spent reviewing and more time for important tasks, keeping code error-free and making the whole process faster and smoother. The platform also applies optimized practices and built-in methods spanning multiple languages. Besides this, it standardizes code and enforces coding standards, which reduces the risk of a security breach and boosts maintainability.

Since the platform automates repetitive tasks, development teams can focus on high-quality work. It also accelerates the review process and facilitates faster iterations by providing timely feedback, offering insights into code quality trends and areas for improvement, and fostering an engineering culture that supports learning and development.

Developer Experience

Typo surfaces early indicators of developers’ well-being and actionable insights on the areas that need attention, through signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. These pulse surveys are built on a developer experience framework.

Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.

By addressing these aspects, Typo combines data-driven insights with proactive monitoring and strategic intervention to create a supportive, high-performing work environment. This leads to increased developer productivity and satisfaction.

Continuous Learning: Empowering Developers for Future Success

With its robust features tailored for the modern software development environment, Typo acts as a catalyst for productivity. By streamlining workflows, fostering collaboration, integrating with AI tools, and providing personalized support, Typo empowers developers and their managers to navigate the complexities of development with confidence. Embracing Typo can lead to a more productive, engaged, and satisfied development team, ultimately driving successful project outcomes.


AI Code Reviews for Remote Teams

Have you ever felt overwhelmed trying to maintain consistent code quality across a remote team? As more development teams shift to remote work, the challenges of code reviews only grow—slowed communication, lack of real-time feedback, and the creeping possibility of errors slipping through.

Moreover, think about how much time is lost waiting for feedback or having to rework code due to small, overlooked issues. When you’re working remotely, these frustrations compound—suddenly, a task that should take hours stretches into days. You might be spending time on repetitive tasks like syntax checking, code formatting, and manually catching errors that could be handled more efficiently. Meanwhile, you’re expected to deliver high-quality work without delays.

Fortunately, AI-driven tools offer a solution that can ease this burden. By automating the tedious aspects of code reviews, such as catching syntax errors and formatting inconsistencies, AI can give developers more time to focus on the creative and complex aspects of coding.

In this blog, we’ll explore how AI can help remote teams tackle the difficulties of code reviews and how tools like Typo can further improve this process, allowing teams to focus on what truly matters—writing excellent code.

The Unique Challenges of Remote Code Reviews

Remote work has introduced a unique set of challenges that impact the code review process. They are:

Communication barriers

When team members are scattered across different time zones, real-time discussions and feedback become more difficult. The lack of face-to-face interaction can hinder effective communication and lead to misunderstandings.

Delays in feedback

Without the immediacy of in-person collaboration, remote teams often experience delays in receiving feedback on their code changes. This can slow down the development cycle and frustrate team members who are eager to iterate and improve their code.

Increased risk of human error

Complex code reviews conducted remotely are more prone to human oversight and errors. When team members are not physically present to catch each other's mistakes, the risk of introducing bugs or quality issues into the codebase increases.

Emotional stress

Remote work can take a toll on team morale, with feelings of isolation and the pressure to maintain productivity weighing heavily on developers. This emotional stress can negatively impact collaboration and code quality if not properly addressed.

How AI Can Enhance Remote Code Reviews

AI-powered tools are transforming code reviews, helping teams automate repetitive tasks, improve accuracy, and ensure code quality. Let’s explore how AI dives deep into the technical aspects of code reviews and helps developers focus on building robust software.

NLP for Code Comments

Natural Language Processing (NLP) is essential for understanding and interpreting code comments, which often provide critical context:

Tokenization and Parsing

NLP breaks code comments into tokens (individual words or symbols) and parses them to understand the grammatical structure. For example, "This method needs refactoring due to poor performance" would be tokenized into words like ["This", "method", "needs", "refactoring"], and parsed to identify the intent behind the comment.
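
As a minimal sketch (not any specific vendor's pipeline), the tokenization step can be approximated with a regular expression that splits a comment into word and punctuation tokens:

```python
import re

def tokenize_comment(comment: str) -> list[str]:
    # Split a review comment into word tokens and individual punctuation marks.
    return re.findall(r"[A-Za-z_]+|[^\sA-Za-z_]", comment)

tokens = tokenize_comment("This method needs refactoring due to poor performance")
print(tokens[:4])  # ['This', 'method', 'needs', 'refactoring']
```

Real NLP pipelines would follow this with part-of-speech tagging and dependency parsing to recover the grammatical structure of the comment.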

Sentiment Analysis

Using algorithms like Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks, AI can analyze the tone of code comments. For example, if a reviewer comments, "Great logic, but performance could be optimized," AI might classify it as having a positive sentiment with a constructive critique. This analysis helps distinguish between positive reinforcement and critical feedback, offering insights into reviewer attitudes.

Intent Classification

AI models can categorize comments based on intent. For example, comments like "Please optimize this function" can be classified as requests for changes, while "What is the time complexity here?" can be identified as questions. This categorization helps prioritize actions for developers, ensuring important feedback is addressed promptly.
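
A toy illustration of the idea, with a simple rule-based classifier standing in for a trained model, might look like this:

```python
def classify_intent(comment: str) -> str:
    """Toy rule-based intent classifier for review comments (a trained
    model would learn these cues from labeled review data instead)."""
    text = comment.strip().lower()
    if text.endswith("?"):
        return "question"
    # Imperative openers are a crude signal of a change request.
    if any(text.startswith(verb) for verb in ("please", "fix", "optimize", "refactor", "rename")):
        return "change_request"
    return "remark"

print(classify_intent("Please optimize this function"))    # change_request
print(classify_intent("What is the time complexity here?"))  # question
```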

Static Code Analysis

Static code analysis goes beyond syntax checking to identify deeper issues in the code:

Syntax and Semantic Analysis

AI-based static analysis tools not only check for syntax errors but also analyze the semantics of the code. For example, if the tool detects a loop that could potentially cause an infinite loop or identifies an undefined variable, it flags these as high-priority errors. AI tools use machine learning to constantly improve their ability to detect errors in Java, Python, and other languages.
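
For a flavor of semantic (rather than purely syntactic) checking, here is a small sketch built on Python's `ast` module that flags names which are read but never defined. It is flow-insensitive and far simpler than a production analyzer:

```python
import ast
import builtins

def undefined_names(source: str) -> set[str]:
    """Flag names that are loaded but never assigned, imported, or built in.
    A deliberately simplified, flow-insensitive approximation."""
    tree = ast.parse(source)
    assigned, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            (loaded if isinstance(node.ctx, ast.Load) else assigned).add(node.id)
        elif isinstance(node, ast.arg):          # function parameters
            assigned.add(node.arg)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            assigned.update(a.asname or a.name.split(".")[0] for a in node.names)
        elif isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            assigned.add(node.name)
    return loaded - assigned - set(dir(builtins))

print(undefined_names("def f(x):\n    return x + y\n"))  # {'y'}
```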

Pattern Recognition

AI recognizes coding patterns by learning from vast datasets of codebases. For example, it can detect when developers frequently forget to close file handlers or incorrectly handle exceptions, identifying these as anti-patterns. Over time, AI tools can evolve to suggest better practices and help developers adhere to clean code principles.
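
As a toy example of anti-pattern detection, the following sketch uses Python's `ast` module to flag `open()` calls that are not wrapped in a `with` block, which is one common way file handlers end up unclosed:

```python
import ast

def flag_unmanaged_open(source: str) -> list[int]:
    """Report line numbers where open() is called outside a `with` block."""
    tree = ast.parse(source)
    # Collect calls that are already managed by a `with` statement.
    with_calls = {
        id(item.context_expr)
        for node in ast.walk(tree) if isinstance(node, ast.With)
        for item in node.items
    }
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name) and node.func.id == "open"
        and id(node) not in with_calls
    ]

code = "f = open('a.txt')\nwith open('b.txt') as g:\n    pass\n"
print(flag_unmanaged_open(code))  # [1]
```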

Vulnerability Detection

AI, trained on datasets of known vulnerabilities, can identify security risks in the code. For example, tools like Typo or Snyk can scan JavaScript or C++ code and flag potential issues like SQL injection, buffer overflows, or improper handling of user input. These tools improve security audits by automating the identification of security loopholes before code goes into production.

Code Similarity Detection

Finding duplicate or redundant code is crucial for maintaining a clean codebase:

Code Embeddings

Neural networks convert code into embeddings (numerical vectors) that represent the code in a high-dimensional space. For example, two pieces of code that perform the same task but use different syntax would be mapped closely in this space. This allows AI tools to recognize similarities in logic, even if the syntax differs.

Similarity Metrics

AI employs metrics like cosine similarity to compare embeddings and detect redundant code. For example, if two functions across different files are 85% similar based on cosine similarity, AI will flag them for review, allowing developers to refactor and eliminate duplication.
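
The mechanics can be sketched with a deliberately simplified stand-in: bag-of-token counts in place of learned neural embeddings, compared with cosine similarity:

```python
import math
from collections import Counter

def embed(code: str) -> Counter:
    # Toy "embedding": bag-of-token counts. Real tools use learned
    # neural embeddings that capture semantics, not just surface tokens.
    return Counter(code.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

f1 = "total = 0\nfor x in items:\n    total += x"
f2 = "s = 0\nfor v in items:\n    s += v"
print(round(cosine(embed(f1), embed(f2)), 2))  # ≈ 0.43: same loop shape, renamed variables
```

With neural embeddings the two loops above would score much closer to 1.0, since only the variable names differ; the token-count stand-in is penalized by the renames.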

Duplicate Code Detection

Tools like Typo use AI to identify duplicate or near-duplicate code blocks across the codebase. For example, if two modules use nearly identical logic for different purposes, AI can suggest merging them into a reusable function, reducing redundancy and improving maintainability.

Automated Code Suggestions

AI doesn’t just point out problems—it actively suggests solutions:

Generative Models

Models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) can create new code snippets. For example, if a developer writes a function that opens a file but forgets to handle exceptions, an AI tool can generate the missing try-catch block to improve error handling.
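
To illustrate the kind of suggestion described (in Python the construct is try/except rather than try-catch), here is a hypothetical before/after pair; the "after" version is the sort of guarded code an AI assistant might propose:

```python
# Before: a file-handling function with no error handling.
def read_config(path):
    with open(path) as f:
        return f.read()

# After: the guarded version an assistant might generate.
def read_config_safe(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        # Missing file, permission error, etc. are reported, not raised.
        print(f"Could not read {path}: {err}")
        return None

print(read_config_safe("missing.conf"))  # prints a warning, then None
```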

Contextual Understanding

AI analyzes code context and suggests relevant modifications. For example, if a developer changes a variable name in one part of the code, AI might suggest updating the same variable name in other related modules to maintain consistency. Tools like GitHub Copilot use models such as GPT to generate code suggestions in real-time based on context, making development faster and more efficient.

Reinforcement Learning for Code Optimization

Reinforcement learning (RL) helps AI continuously optimize code performance:

Reward Functions

In RL, a reward function is defined to evaluate the quality of the code. For example, AI might reward code that reduces runtime by 20% or improves memory efficiency by 30%. The reward function measures not just performance but also readability and maintainability, ensuring a balanced approach to optimization.
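
A toy reward function along these lines might combine relative runtime improvement with a readability proxy such as cyclomatic-complexity reduction; the weights below are illustrative assumptions, not values from any real system:

```python
def reward(runtime_before: float, runtime_after: float,
           complexity_before: int, complexity_after: int,
           w_speed: float = 0.7, w_readability: float = 0.3) -> float:
    """Toy RL-style reward: weighted mix of relative runtime gain and
    cyclomatic-complexity reduction. Regressions yield negative terms."""
    speed_gain = (runtime_before - runtime_after) / runtime_before
    readability_gain = (complexity_before - complexity_after) / complexity_before
    return w_speed * speed_gain + w_readability * readability_gain

# A refactor that cuts runtime 20% and complexity 30%:
print(reward(1.0, 0.8, 10, 7))  # ≈ 0.7*0.2 + 0.3*0.3 = 0.23
```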

Agent Training

Through trial and error, AI agents learn to refactor code to meet specific objectives. For example, an agent might experiment with different ways of parallelizing a loop to improve performance, receiving positive rewards for optimizations and negative rewards for regressions.

Continuous Improvement

The AI’s policy, or strategy, is continuously refined based on past experiences. This allows AI to improve its code optimization capabilities over time. For example, DeepMind’s AlphaCode uses reinforcement learning to compete in coding competitions, showing that AI can autonomously write and optimize highly efficient algorithms.

AI-Assisted Code Review Tools

Modern AI-assisted code review tools offer both rule-based enforcement and machine learning insights:

Rule-Based Systems

These systems enforce strict coding standards. For example, AI tools like ESLint or Pylint enforce coding style guidelines in JavaScript and Python, ensuring developers follow industry best practices such as proper indentation or consistent use of variable names.
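
In the same spirit, a miniature rule-based checker can be sketched in a few lines; the three rules below are illustrative, not ESLint's or Pylint's actual rule set:

```python
import re

# Each rule pairs a pattern with the message a reviewer would otherwise type.
RULES = [
    (re.compile(r".{80}"), "line exceeds 79 characters"),
    (re.compile(r"\t"), "tab used for indentation"),
    (re.compile(r"==\s*None\b"), "use `is None` instead of `== None`"),
]

def lint(source: str) -> list[str]:
    """Tiny rule-based linter in the spirit of ESLint/Pylint style checks."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append(f"line {lineno}: {message}")
    return findings

print(lint("if x == None:\n\treturn"))
```

Production linters work on the parsed syntax tree rather than raw lines, which is what lets them enforce structural rules such as naming conventions and unused imports.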

Machine Learning Models

AI models can learn from past code reviews, understanding patterns in common feedback. For instance, if a team frequently comments on inefficient data structures, the AI will begin flagging those cases in future code reviews, reducing the need for human intervention.

Hybrid Approaches

Combining rule-based and ML-powered systems, hybrid tools provide a more comprehensive review experience. For example, DeepCode uses a hybrid approach to enforce coding standards while also learning from developer interactions to suggest improvements in real-time. These tools ensure code is not only compliant but also continuously improved based on team dynamics and historical data.

Incorporating AI into code reviews takes your development process to the next level. By automating error detection, analyzing code sentiment, and suggesting optimizations, AI enables your team to focus on what matters most: building high-quality, secure, and scalable software. As these tools continue to learn and improve, the benefits of AI-assisted code reviews will only grow, making them indispensable in modern development environments.

Here’s a table to help you understand these AI code review capabilities at a glance:

Practical Steps to Implement AI-Driven Code Reviews

To effectively integrate AI into your remote team's code review process, consider the following steps:

Evaluate and choose AI tools: Research and evaluate AI-powered code review tools that align with your team's needs and development workflow.

Start with a gradual approach: Use AI tools to support human-led code reviews before gradually automating simpler tasks. This will allow your team to become comfortable with the technology and see its benefits firsthand.

Foster a culture of collaboration: Encourage your team to view AI as a collaborative partner rather than a replacement for human expertise. Emphasize the importance of human oversight, especially for complex issues that require nuanced judgment.

Provide training and resources: Equip your team with the necessary training and resources to use AI code review tools effectively. This includes tutorials, documentation, and opportunities for hands-on practice.

Leveraging Typo to Streamline Remote Code Reviews

Typo is an AI-powered tool designed to streamline the code review process for remote teams. By integrating seamlessly with your existing development tools, Typo makes it easier to manage feedback, improve code quality, and collaborate across time zones.

Some key benefits of using Typo include:

  • AI code analysis
  • Code context understanding
  • Auto debugging with detailed explanations
  • Proprietary models with known frameworks (OWASP)
  • Auto PR fixes

Here's a brief comparison of how Typo differs from other code review tools:

The Human Element: Combining AI and Human Expertise

While AI can significantly enhance the code review process, it's essential to maintain a balance between AI and human expertise. AI is not a replacement for human intuition, creativity, or judgment but rather a supportive tool that augments and empowers developers.

By using AI to handle repetitive tasks and provide real-time feedback, developers can focus on higher-level issues that require human problem-solving skills. This division of labor allows teams to work more efficiently and effectively while still maintaining the human touch that is crucial for complex problem-solving and innovation.

Overcoming Emotional Barriers to AI Integration

Introducing new technologies can sometimes be met with resistance or fear. It's important to address these concerns head-on and help your team understand the benefits of AI integration.

Some common fears—such as job replacement or disruption of established workflows—should be directly addressed. Reassure your team that AI is designed to reduce workload and enhance productivity, not replace human expertise. Foster an environment that embraces new technologies while focusing on the long-term benefits of improved efficiency, collaboration, and job satisfaction.

Elevate Your Code Quality: Embrace AI Solutions

AI-driven code reviews offer a promising solution for remote teams looking to maintain code quality, foster collaboration, and enhance productivity. By embracing AI tools like Typo, you can streamline your code review process, reduce delays, and empower your team to focus on writing great code.

Remember that AI supports and empowers your team—it does not replace human expertise. Explore and experiment with AI code review tools in your team, and watch as your remote collaboration reaches new heights of efficiency and success.

How does Gen AI address Technical Debt?

The software development field is constantly evolving. While this helps deliver products and services to end-users quickly, it also means developers may take shortcuts to ship on time. This not only reduces software quality but also increases technical debt.

Among these new trends and technologies comes generative AI. It is a promising solution for the software development industry, one that can ultimately lead to higher-quality code and decreased technical debt.

Let’s explore more about how generative AI can help manage technical debt!

Technical debt: An overview

Technical debt arises when development teams take shortcuts to develop projects. While this gives them short-term gains, it increases their workload in the long run.

In other words, developers prioritize quick solutions over effective solutions. The four main causes behind technical debt are:

  • Business causes: Prioritizing business needs and the company’s evolving conditions can put pressure on development teams to cut corners. It can result in preponing deadlines or reducing costs to achieve desired goals.
  • Development causes: Rapidly evolving technologies make it difficult for teams to switch or upgrade quickly, especially when they are already dealing with the burden of bad code.
  • Human resources causes: Unintentional technical debt can occur when development teams lack the necessary skills or knowledge to implement best practices. It can result in more errors and insufficient solutions.
  • Resources causes: When teams don’t have time or sufficient resources, they take shortcuts by choosing the quickest solution. It can be due to budgetary constraints, insufficient processes and culture, deadlines, and so on.

Why is generative AI important for code management?

As per McKinsey’s study,

“… 10 to 20 percent of the technology budget dedicated to new products is diverted to resolving issues related to tech debt. More troubling still, CIOs estimated that tech debt amounts to 20 to 40 percent of the value of their entire technology estate before depreciation.”

But there’s a solution to it. Handling tech debt is possible and can have a significant impact:

“Some companies find that actively managing their tech debt frees up engineers to spend up to 50 percent more of their time on work that supports business goals. The CIO of a leading cloud provider told us, ‘By reinventing our debt management, we went from 75 percent of engineer time paying the [tech debt] ‘tax’ to 25 percent. It allowed us to be who we are today.’”

There are many traditional ways to minimize technical debt, including manual testing, refactoring, and code review. However, these manual tasks take a lot of time and effort, and in a fast-moving industry they are often overlooked or delayed.

Generative AI tools, now on the rise, are increasingly seen as the right approach to code management and, by extension, to lowering technical debt. These tools have already started reaching the market: they integrate into software development environments, gather and process data across the organization in real time, and are then leveraged to reduce tech debt.

Some of the key benefits of generative AI are:

  • Identifies redundant code: Generative AI tools like CodeClone analyze code and suggest improvements. This helps improve code readability and maintainability and, subsequently, minimizes technical debt.
  • Generates high-quality code: Automated code review tools such as Typo enable an efficient and effective code review process. They understand the context of the code and accurately fix issues, which leads to high-quality code.
  • Automates manual tasks: Tools like GitHub Copilot automate repetitive tasks and let developers focus on high-value work.
  • Suggests optimal refactoring strategies: AI tools like DeepCode leverage machine learning models to understand code semantics, break code down into more manageable functions, and improve variable naming.

Case studies and real-life examples

Many industries have already started adopting generative AI technologies for tech debt management. These AI tools assist developers in improving code quality, streamlining SDLC processes, and saving costs.

Below are success stories of a few well-known organizations that have implemented these tools in their organizations:

Microsoft uses Diffblue Cover for automated testing and bug detection

Microsoft is a global technology leader that implemented Diffblue Cover for automated testing. Through this generative AI tool, Microsoft has seen a considerable reduction in the number of bugs during the development process. It also ensures that new features don’t compromise existing functionality, which positively impacts code quality. This enables faster, more reliable releases and cost savings.

Google implements Codex for code documentation

Google is an internet search and technology giant that implemented OpenAI’s Codex to streamline its code documentation processes. Integrating this AI tool reduced the time and effort spent on manual documentation tasks. The resulting consistency across the entire codebase enhances code quality and allows developers to focus more on core tasks.

Facebook adopts CodeClone to identify redundancy

Facebook, a leading social media platform, adopted the generative AI tool CodeClone to identify and eliminate redundant code across its extensive codebase. This resulted in fewer inconsistencies and a more streamlined, efficient codebase, which in turn led to faster development cycles.

Pioneer Square Labs uses GPT-4 for higher-level planning

Pioneer Square Labs, a studio that launches technology startups, adopted GPT-4 to handle mundane tasks so that developers can focus on core work. The model also assists with higher-level planning and with writing code, streamlining the development process.

How does Typo leverage generative AI to reduce technical debt?

Typo’s automated code review tool enables developers to merge clean, secure, high-quality code, faster. It lets developers catch issues related to maintainability, readability, and potential bugs and can detect code smells.

Typo also auto-analyzes your codebase and pull requests to find issues and auto-generates fixes before you merge to master. Its Auto-Fix feature leverages GPT 3.5 Pro, trained on millions of open-source data points as well as exclusive anonymized private data, to generate line-by-line code snippets wherever an issue is detected in the codebase.

As a result, Typo helps reduce technical debt by detecting and addressing issues early in the development process, preventing the introduction of new debt, and allowing developers to focus on high-quality tasks.

Issue detection by Typo


Autofixing the codebase with an option to directly create a Pull Request


Key features

Supports top 10+ languages

Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.

Fix every code issue

Typo understands the context of your code and quickly finds and fixes any issues accurately. Hence, empowering developers to work on software projects seamlessly and efficiently.

Efficient code optimization

Typo uses optimized practices and built-in methods spanning multiple languages. Hence, reducing code complexity and ensuring thorough quality assurance throughout the development process.

Professional coding standards

Typo standardizes code and reduces the risk of a security breach.


Click here to learn more about our Code Review tool.

Can technical debt increase due to generative AI?

While generative AI can help reduce technical debt by analyzing code quality, removing redundant code, and automating the code review process, many engineering leaders believe technical debt can be increased too.

Bob Quillin, vFunction chief ecosystem officer, stated: “These new applications and capabilities will require many new MLOps processes and tools that could overwhelm any existing, already overloaded DevOps team.”

They aren’t wrong either!

Technical debt can increase when organizations don’t properly document their practices and train development teams to implement generative AI the right way. When these AI tools are adopted hastily, without considering the long-term implications, they can instead increase developers’ workload and add to technical debt.

Ethical guidelines

Establish ethical guidelines for the use of generative AI in organizations. Understand the potential ethical implications of using AI to generate code, such as the impact on job displacement, intellectual property rights, and biases in AI-generated output.

Diverse training data quality

Ensure the quality and diversity of training data used to train generative AI models. When training data is biased or incomplete, these AI tools can produce biased or incorrect output. Regularly review and update training data to improve the accuracy and reliability of AI-generated code.

Human oversight

Maintain human oversight throughout the generative AI process. While AI can generate code snippets and provide suggestions, the final decision should rest with developers, who must review and validate the output to ensure correctness, security, and adherence to coding standards.

Most importantly, human intervention is a must when using these tools. After all, it’s developers’ judgment, creativity, and domain knowledge that drive the final decision. Generative AI is indeed helpful for reducing developers’ manual tasks; however, it must be used properly.

Conclusion

In a nutshell, generative artificial intelligence tools can help manage technical debt when used correctly. These tools help to identify redundancy in code, improve readability and maintainability, and generate high-quality code.

However, note that these AI tools shouldn’t be used independently. They must work only as developers’ assistants, and developers must use them transparently and fairly.


How to Manage Scope Creep?

Scope creep is one of the most challenging—and often frustrating—issues engineering managers face. As projects progress, new requirements, changing technologies, and evolving stakeholder demands can all lead to incremental additions that push your project beyond its original scope. Left unchecked, scope creep strains resources, raises costs, and jeopardizes deadlines, ultimately threatening project success.

This guide is here to help you take control. We’ll delve into advanced strategies and practical solutions specifically for managers to spot and manage scope creep before it disrupts your project. With detailed steps, technical insights, and tools like Typo, you can set boundaries, keep your team aligned, and drive projects to a successful, timely completion.

Understanding Scope Creep in Sprints

Scope creep can significantly impact projects, affecting resource allocation, team morale, and project outcomes. Understanding what scope creep is and why it frequently occurs provides a solid foundation for developing effective strategies to manage it.

What is Scope Creep?

Scope creep in projects refers to the gradual addition of project requirements beyond what was originally defined. Unlike industries with stable parameters, software projects often encounter rapid changes—emerging features, stakeholder requests, or even unanticipated technical complexities—that challenge the initial project boundaries.

While additional features can improve the end product, they can also risk the project's success if not managed carefully. Common triggers for scope creep include unclear project requirements, mid-project requests from stakeholders, and iterative development cycles, all of which require proactive management to keep projects on track.

Why does Scope Creep Happen?

Scope creep often results from several factors unique to the software industry. By understanding these drivers, you can develop processes that minimize their impact and keep your project on target:

  • Unclear requirements: At the start of a project, unclear or vague requirements can lead to an ever-expanding set of deliverables. For engineering managers, ensuring all requirements are well-defined is critical to setting project boundaries.
  • Shifting technological needs: IT projects must often adapt to new technology or security requirements that weren’t anticipated initially, leading to added complexity and potential delays.
  • Stakeholder influence and client requests: Frequent client input can introduce scope creep, especially if changes are not formally documented or accounted for in resources and timelines.
  • Agile development: Agile development allows flexibility and iterative updates, but without careful scope management, it can lead to feature creep.

These challenges make it essential for managers to recognize scope creep indicators early and develop robust systems to manage new requests and technical changes.

Identifying Scope Creep Early in the Sprints

Identifying scope creep early is key to preventing it from derailing your project. By setting clear boundaries and maintaining consistent communication with stakeholders, you can catch scope changes before they become a problem.

Define Clear Project Scope and Objectives

The first step in minimizing scope creep is establishing a well-defined project scope that explicitly outlines deliverables, timelines, and performance metrics. In sprints, this scope must include technical details like software requirements, infrastructure needs, and integration points.

Regular Stakeholder Check-Ins

Frequent communication with stakeholders is crucial to ensure alignment on the project’s progress. Schedule periodic reviews to present progress, confirm objectives, and clarify any evolving requirements.

Routine Project Reviews and Status Updates

Integrate routine reviews into the project workflow to regularly assess the project’s alignment with its scope. Typo enables teams to conduct these reviews seamlessly, providing a comprehensive view of the project’s current state. This structured approach allows managers to address any adjustments or unexpected tasks before they escalate into significant scope creep issues.

Strategies for Managing Scope Creep

Once scope creep has been identified, implementing specific strategies can help prevent it from escalating. With the following approaches, you can address new requests without compromising your project timeline or objectives.

Implement a Change Control Process

One of the most effective ways to manage scope creep is to establish a formal change control process. A structured approach allows managers to evaluate each change request based on its technical impact, resource requirements, and alignment with project goals.
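
As an illustrative sketch (not a prescribed workflow), a change-control gate can be thought of as a small triage function over each request's goal alignment and schedule cost; the field names and thresholds below are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    title: str
    extra_dev_days: float   # estimated resource cost of the change
    aligns_with_goals: bool  # does it serve the project's objectives?

def triage(req: ChangeRequest, buffer_days: float) -> str:
    """Toy change-control gate: approve only in-scope requests that fit
    the remaining schedule buffer; everything else is deferred or escalated."""
    if not req.aligns_with_goals:
        return "defer"
    if req.extra_dev_days <= buffer_days:
        return "approve"
    return "escalate"

print(triage(ChangeRequest("Add SSO login", 3, True), buffer_days=5))  # approve
print(triage(ChangeRequest("Dark mode", 8, True), buffer_days=5))      # escalate
```

The point of formalizing the gate, even this crudely, is that every request gets an explicit decision and a recorded rationale instead of silently expanding the scope.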

Effective Communication and Real-Time Updates 

Communication breakdowns can lead to unnecessary scope expansion, especially in complex team environments. Use Typo’s Sprint Analysis to track project changes and real-time developments. This level of visibility gives stakeholders a clear understanding of trade-offs and allows managers to communicate the impact of requests, whether related to resource allocation, budget implications, or timeline shifts.

Prioritize and Adjust Requirements in Real Time

In software development, feature prioritization can be a strategic way to handle evolving needs without disrupting core project objectives. When a high-priority change arises, use Typo to evaluate resource availability, timelines, and dependencies, making necessary adjustments without jeopardizing essential project elements.

Advanced Tools and Techniques to Prevent Scope Creep

Beyond basic strategies, specific tools and advanced techniques can further safeguard your IT project against scope creep. Leveraging project management solutions and rigorous documentation practices are particularly effective.

Leverage Typo for End-to-End Project Management

For projects, having a comprehensive project management tool can make all the difference. Typo provides robust tracking for timelines, tasks, and resources that align directly with project objectives. Typo also offers visibility into task assignments and dependencies, which helps managers monitor all project facets and mitigate scope risks proactively.

Detailed Change Tracking and Documentation

Documentation is vital in managing scope creep, especially in projects where technical requirements can evolve quickly. By creating a “single source of truth,” Typo enables the team to stay aligned, with full visibility into any shifts in project requirements.

Budget and Timeline Contingencies

Software projects benefit greatly from budget and time contingencies that allow for minor, unexpected adjustments. By pre-allocating resources for possible scope adjustments, managers have the flexibility to accommodate minor changes without impacting the project’s overall trajectory.

Maintaining Team Morale and Focus amid Scope Creep 

As scope adjustments occur, it’s important to maintain team morale and motivation. Empowering the team and celebrating their progress can help keep everyone focused and resilient.

Empower the Team to Decline Non-Essential Changes

Encouraging team members to communicate openly about their workload and project demands is crucial for maintaining productivity and morale.

Recognize and Celebrate Milestones

Managing IT projects with scope creep can be challenging, so it’s essential to celebrate milestones and acknowledge team achievements. 

Typo - An Effective Sprint Analysis Tool

Typo’s sprint analysis monitors scope creep to quantify its impact on the team’s workload and deliverables. It allows you to track and analyze your team’s progress throughout a sprint and helps you gain visual insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint. This information enables you to identify any potential problems early on and take corrective action.

Our sprint analysis feature uses data from Git and issue management tools to provide insights into how your team is working. You can see how long tasks are taking, how often they’re being blocked, and where bottlenecks are occurring. This information can help you identify areas for improvement and make sure your team is on track to meet their goals.


Taking Charge of Scope Creep

Effective management of scope creep in IT projects requires a balance of proactive planning, structured communication, and robust change management. With the right strategies and tools like Typo, managers can control project scope while keeping the team focused and aligned with project goals.

If you’re facing scope creep challenges, consider implementing these best practices and exploring Typo’s project management capabilities. By using Typo to centralize communication, track progress, and evaluate change requests, IT managers can prevent scope creep and lead their projects to successful, timely completion.


How Efficient Code Review Impacts Developer Productivity

Are your code reviews fostering constructive discussions or stuck in endless cycles of revisions?

Let’s change that. 

In many development teams, code reviews have become a necessary but frustrating part of the workflow. Rather than enhancing collaboration and improvement, they often drag on, leaving developers feeling drained and disengaged.

This inefficiency can lead to rushed releases, increased bugs in production, and a demotivated team. As deadlines approach, the very process meant to elevate code quality can become a barrier to success, creating a culture where developers feel undervalued and hesitant to share their insights.

The good news? You can transform your code review process into a constructive and engaging experience. By implementing strategic changes, you can cultivate a culture of open communication, collaborative learning, and continuous improvement.

This blog aims to provide developers and engineering managers with a comprehensive framework for optimizing the code review process, incorporating insights on leveraging tools like Typo and discussing the technical nuances that underpin effective code reviews.

The Importance of Code Reviews

Code reviews are a critical aspect of the software development lifecycle. They provide an opportunity to scrutinize code, catch errors early, and ensure adherence to coding standards. Here’s why code reviews are indispensable:

Error detection and bug prevention

The primary function of code reviews is to identify issues before they escalate into costly bugs or security vulnerabilities. By implementing rigorous review protocols, teams can detect errors at an early stage, reducing technical debt and enhancing code stability. 

Utilizing static code analysis tools like SonarQube and ESLint can automate the detection of common issues, allowing developers to focus on more intricate code quality aspects.

Knowledge sharing

Code reviews foster an environment of shared learning and expertise. When developers engage in peer reviews, they expose themselves to different coding styles, techniques, and frameworks. This collaborative process enhances individual skill sets and strengthens the team’s collective knowledge base. 

To facilitate this knowledge transfer, teams should maintain documentation of coding standards and review insights, which can serve as a reference for future projects.

Maintaining code quality

Adherence to coding standards and best practices is crucial for maintaining a high-quality codebase. Effective code reviews enforce guidelines related to design patterns, performance optimization, and security practices. 

By prioritizing clean, maintainable code, teams can reduce the likelihood of introducing technical debt. Establishing clear documentation for coding standards and conducting periodic training sessions can reinforce these practices.

Enhanced collaboration

The code review process inherently encourages open dialogue and constructive feedback. It creates a culture where developers feel comfortable discussing their approaches, leading to richer collaboration. Implementing pair programming alongside code reviews can provide real-time feedback and enhance team cohesion.

Accelerated onboarding

For new team members, code reviews are an invaluable resource for understanding the team’s coding conventions and practices. Engaging in the review process allows them to learn from experienced colleagues while providing opportunities for immediate feedback. 

Pairing new hires with seasoned developers during the review process accelerates their integration into the team.

Common Challenges in Code Reviews

Despite their advantages, code reviews can present challenges that hinder productivity. It’s crucial to identify and address these issues to optimize the process effectively:

Lengthy review cycles

Extended review cycles can impede development timelines and lead to frustration among developers. This issue often arises from an overload of reviewers or complex pull requests. To combat this, implement guidelines that limit the size of pull requests, making them more manageable and allowing for quicker reviews. Additionally, establishing defined review timelines can help maintain momentum.
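
One lightweight way to enforce such a limit is a pre-review check on diff size. The sketch below is illustrative only; the threshold and function names are assumptions, not part of any particular tool:

```python
# Hypothetical helper: flag pull requests whose diffs exceed a team-agreed
# size limit, so reviewers can ask for them to be split before review starts.

MAX_CHANGED_LINES = 400  # illustrative team guideline, not a universal rule


def pr_size_status(additions: int, deletions: int,
                   max_lines: int = MAX_CHANGED_LINES) -> str:
    """Return 'ok' for reviewable PRs, 'split' for oversized ones."""
    changed = additions + deletions
    return "ok" if changed <= max_lines else "split"
```

A check like this can run in CI and post a comment asking the author to break the change into smaller, reviewable pieces.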

Inconsistent feedback

A lack of standardization in feedback can create confusion and frustration among team members. Inconsistency often stems from varying reviewer expectations. Implementing a standardized checklist or rubric for code reviews can ensure uniformity in feedback and clarify expectations for all team members.

Bottlenecks and lack of accountability 

If code reviews are concentrated among a few individuals, it can lead to bottlenecks that slow down the entire process. Distributing review responsibilities evenly among team members is essential to ensure timely feedback. Utilizing tools like GitHub and GitLab can facilitate the assignment of reviewers and track progress in real-time.

Limited collaboration and feedback

Sparse or overly critical feedback can hinder the collaborative nature of code reviews. Encouraging a culture of constructive criticism is vital. Train reviewers to provide specific, actionable feedback that emphasizes improvement rather than criticism. 

Regularly scheduled code review sessions can enhance collaboration and ensure engagement from all team members.

How Typo can Streamline your Code Review Process

To optimize your code review process effectively, leveraging the right tools is paramount. Typo offers a suite of features designed to enhance productivity and code quality:

Automated code analysis

Automating code analysis through Typo significantly streamlines the review process. Built-in linting and static analysis tools flag potential issues before the review begins, enabling developers to concentrate on complex aspects of the code. Integrating Typo with CI/CD pipelines ensures that only code that meets quality standards enters the review process.

Feedback and commenting system

Typo features an intuitive commenting system that allows reviewers to leave clear, actionable feedback directly within the code. This approach ensures developers receive specific suggestions, leading to more effective revisions. Implementing a tagging system for comments can categorize feedback and prioritize issues efficiently.

Metrics and insights

Typo provides detailed metrics and insights into code review performance. Engineering managers can analyze trends, such as recurring bottlenecks or areas for improvement, allowing for data-driven decision-making. Tracking metrics like review time, comment density, and acceptance rates can reveal deeper insights into team performance and highlight areas needing further training or resources.
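
As a rough illustration of how such metrics can be derived, here is a minimal sketch; the record fields and function names are invented for the example and do not reflect Typo's actual schema:

```python
from datetime import datetime

# Illustrative sketch: derive review-cycle metrics from raw review records.
# The timestamp format and field names (opened_at, merged_at, comments,
# changed_lines) are assumptions for the example.

def review_hours(opened_at: str, merged_at: str) -> float:
    """Time a PR spent in review, in hours (ISO 8601 timestamps)."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(merged_at, fmt) - datetime.strptime(opened_at, fmt)
    return delta.total_seconds() / 3600


def comment_density(comments: int, changed_lines: int) -> float:
    """Review comments per 100 changed lines."""
    return 100 * comments / changed_lines if changed_lines else 0.0
```

Tracking these numbers over time, rather than judging single PRs, is what makes them useful for spotting trends.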

Also read: Best Code Review Tools

Best Practices for Optimizing Code Reviews

In addition to leveraging tools like Typo, adopting best practices can further enhance your code review process:

1. Set clear objectives and standards

Define clear objectives for code reviews, detailing what reviewers should focus on during evaluations. Developing a comprehensive checklist that includes adherence to coding conventions, performance considerations, and testing coverage ensures consistency and clarity in expectations.

2. Leverage automation tools

Employ automation tools to reduce manual effort and improve review quality. Automating code analysis helps identify common mistakes early, freeing reviewers to address more complex issues. Integrating automated testing frameworks validates code functionality before reaching the review stage.

3. Encourage constructive feedback

Fostering a culture of constructive feedback is crucial for effective code reviews. Encourage reviewers to provide specific, actionable comments emphasizing improvement. Implementing a “no blame” policy during reviews promotes an environment where developers feel safe to make mistakes and learn from them.

4. Balance thoroughness and speed

Finding the right balance between thorough reviews and maintaining development velocity is essential. Establish reasonable time limits for reviews to prevent bottlenecks while ensuring reviewers dedicate adequate time to assess code quality thoroughly. Timeboxing reviews can help maintain focus and reduce reviewer fatigue.

5. Rotate reviewers and share responsibilities

Regularly rotating reviewers prevents burnout and ensures diverse perspectives in the review process. Sharing responsibilities promotes knowledge transfer across the team and mitigates the risk of bottlenecks. Implementing a rotation schedule that pairs developers with different reviewers fosters collaboration and learning.

Also read: AI Code Reviews for Remote Teams

The Role of Engineering Managers

While developers execute the code review process, engineering managers have a critical role in optimizing and supporting it. Here’s how they can contribute effectively:

Facilitating communication and support

Engineering managers must actively facilitate communication within the team, ensuring alignment on the goals and expectations of code reviews. Regular check-ins can help identify roadblocks and provide opportunities for team members to express concerns or seek guidance.

Setting expectations and accountability

Establishing a culture of accountability around code reviews is essential. Engineering managers should communicate clear expectations for both developers and reviewers, creating a shared understanding of responsibilities. Providing ongoing training on effective review practices reinforces these expectations.

Monitoring metrics and performance

Utilizing the metrics and insights provided by Typo enables engineering managers to monitor team performance during code reviews. Analyzing this data allows managers to identify trends and make informed decisions about adjustments to the review process, ensuring continuous improvement.

Promoting a growth mindset

Engineering managers should cultivate a growth mindset within the team, encouraging developers to view feedback as an opportunity for learning and improvement. Creating an environment where constructive criticism is welcomed fosters a culture of continuous development and innovation. Encouraging participation in code review workshops or technical training sessions can reinforce this mindset.

Wrapping up: Elevating your code review process

An optimized code review process is not merely a procedural necessity; it is a cornerstone of developer productivity and code quality. By establishing clear guidelines, promoting collaboration, and leveraging tools like Typo, you can streamline the review process and foster a culture of continuous improvement within your team.

Typo serves as a robust platform that enhances the efficiency and effectiveness of code reviews, allowing teams to deliver higher-quality software at an accelerated pace. By embracing best practices and adopting a collaborative mindset, you can transform your code review process into a powerful driver of success.

Book a demo with Typo today!

How to Build a DevOps Culture?

In an ever-changing tech landscape, organizations need to stay agile and deliver high-quality software rapidly. DevOps plays a crucial role in achieving these goals by bridging the gap between development and operations teams. 

In this blog, we will delve into how to build a DevOps culture within your organization and explore the fundamental practices and strategies that can lead to more efficient, reliable, and customer-focused software development.

What is DevOps? 

DevOps is a software development methodology that integrates development (Dev) and IT operations (Ops) to enhance software delivery’s speed, efficiency, and quality. The primary goal is to break down traditional silos between development and operations teams and foster a culture of collaboration and communication throughout the software development lifecycle.  This creates a more efficient and agile workflow that allows organizations to respond quickly to changes and deliver value to customers faster.

Why is a DevOps Culture Beneficial? 

DevOps culture refers to a collaborative and integrated approach between development and operations teams. It focuses on breaking down silos, fostering a shared sense of responsibility, and improving processes through automation and continuous feedback.

  • Fostering collaboration between development and operations allows organizations to innovate more rapidly and respond to market changes and customer needs effectively. 
  • Automation and streamlined processes reduce manual tasks and errors, increasing efficiency in software delivery. This efficiency results in faster time-to-market for new features and updates.
  • Continuous integration and delivery practices improve software quality through early detection of issues. This helps maintain system stability and reliability.
  • A DevOps culture encourages teamwork and mutual trust, improving collaboration between previously siloed teams. This cohesive environment fosters innovation and collective problem-solving. 
  • A DevOps culture also results in faster recovery times, as teams can identify and address issues more swiftly, reducing downtime and improving overall service reliability.
  • Delivering high-quality software quickly and efficiently enhances customer satisfaction and loyalty, which is vital for long-term success. 

The CALMS Framework of DevOps 

The CALMS framework is used to understand and implement DevOps principles effectively. It breaks down DevOps into five key components:

Culture

The culture pillar focuses on fostering a collaborative environment where shared responsibility and open communication are prioritized. It is crucial to break down silos between development and operations teams and allow them to work together more effectively. 

Automation

Automation emphasizes minimizing manual intervention in processes. This includes automating testing, deployment, and infrastructure management to enhance efficiency and reliability.

Lean

The lean aspect aims to optimize workflows, manage work-in-progress (WIP), and eliminate non-value-adding activities. This is to streamline processes to accelerate software delivery and improve overall quality.

Measurement

Measurement involves collecting data to assess the effectiveness of software delivery processes and practices. It enables teams to make informed, fact-based decisions, identify areas for improvement, and track progress. 

Sharing

The sharing component promotes open communication and knowledge transfer among teams. It facilitates cross-team collaboration, fosters a learning environment, and ensures that successful practices and insights are shared and adopted widely.

Tips to Build a DevOps Culture

Start Simple 

Don’t overwhelm teams with a complete DevOps overhaul. Begin small and implement DevOps practices gradually. Start with the team that is best aligned with DevOps principles, then move on to other teams in the organization. Build momentum with early wins and evolve practices as you gain experience.

Foster Communication and Collaborative Environment 

Communication is key. When done correctly, it promotes collaboration and a smooth flow of information across the organization. This aligns the organization’s operations and lets engineering leaders make informed decisions. 

Moreover, the combined working environment between the development and operations teams promotes a culture of shared responsibility and common objectives. They can openly communicate ideas and challenges, allowing them to have a mutual conversation about resources, schedules, required features, and execution of projects. 

Create a Common Goal 

Apart from encouraging communication and a collaborative environment, create a clear plan that outlines where you want to go and how you will get there. Ensure that these goals are realistic and achievable. This will allow teams to see the bigger picture and understand the desired outcome, motivating them to move in the right direction.

Focus on Automation 

Tools such as Slack, Kubernetes, Docker, and JFrog help build automation capabilities for DevOps teams. They automate repetitive and mundane tasks, allowing teams to focus on value-adding work. This lets them fail fast, build fast, and deliver quickly, which enhances efficiency and accelerates processes, positively impacting DevOps culture. Instead of assuming, ask your team directly which tasks can be automated, and then provide the support to automate them. 

Implement CI/CD pipeline

The organization must fully understand and implement CI/CD to establish a DevOps culture and streamline the software delivery process. This allows for automating deployment from development to production and releasing the software more frequently with better quality and reduced risks. The CI/CD tools further allow teams to catch bugs early in the development cycle, reduce manual work, and minimize downtime between releases. 

Foster Continuous Learning and Improvement

Continuous improvement is a key principle of DevOps culture. Engineering leaders must look for ways to encourage continuous learning and improvement such as by training and providing upskilling opportunities. Besides this, give them the freedom to experiment with new tools and techniques. Create a culture where they feel comfortable making mistakes and learning from them. 

Balance Speed and Security 

The teams must ensure that delivering products quickly doesn’t mean compromising security. In DevOps culture, the organization must adopt a ‘Security-first approach’ by integrating security practices into the DevOps pipeline. To maintain a strong security posture, regular security audits and compliance checks are essential. Security scans should be conducted at every stage of the development lifecycle to continuously monitor and assess security.

Monitor and Measure 

Regularly monitor and track system performance to detect issues early and ensure smooth operation. Use metrics and data to guide decisions, optimize processes, and continuously improve DevOps practices. Implement comprehensive dashboards and alerts to ensure teams can quickly respond to performance issues and maintain optimal health. 

Prioritize Customer Needs

In DevOps culture, the organization must emphasize the ever-evolving needs of the customers. Encourage teams to think from the customer’s perspective and keep their needs and satisfaction at the forefront of the software delivery processes. Regularly incorporate customer feedback into the development cycle to ensure the product aligns with user expectations.

Typo - An Effective Platform to Promote DevOps Culture

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It seamlessly integrates into your tech stack, including Git versioning tools, issue trackers, and CI/CD tools.

It also offers comprehensive insights into the deployment process through DORA and other key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fix them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, capturing qualitative insights and an in-depth view of the real issues.

Conclusion 

Building a DevOps culture is essential for organizations to improve their software delivery capabilities and maintain a competitive edge. Implementing key practices as mentioned above will pave the way for a successful DevOps transformation. 

Product Updates

Typo Launches groCTO: Community to Empower Engineering Leaders

In an ever-evolving tech world, organizations need to innovate quickly while keeping up high standards of quality and performance. The key to achieving these goals is empowering engineering leaders with the right tools and technologies. 

About Typo

Typo is a software intelligence platform that optimizes software delivery by identifying real-time bottlenecks in SDLC, automating code reviews, and measuring developer experience. We aim to help organizations ship reliable software faster and build high-performing teams. 

However, engineering leaders often struggle to bridge the divide between traditional management practices and modern software development leading to missed opportunities for growth, ineffective team dynamics, and slower progress in achieving organizational goals. 

To address this gap, we launched groCTO, a community designed specifically for engineering leaders.

What is groCTO Community? 

Effective engineering leadership is crucial for building high-performing teams and driving innovation. However, many leaders face significant challenges and gaps that hinder their effectiveness. The role of an engineering leader is both demanding and essential. From aligning teams with strategic goals to managing complex projects and fostering a positive culture, they have a lot on their plates. Hence, leaders need to have the right direction and support so they can navigate the challenges and guide their teams efficiently. 

Here’s when groCTO comes in! 

groCTO is a community designed to empower engineering managers on their leadership journey. The aim is to help engineering leaders evolve, navigate complex technical challenges, and drive innovative solutions to create groundbreaking software. Engineering leaders can connect, learn, and grow to enhance their capabilities and, in turn, the performance of their teams. 

Key Components of groCTO 

groCTO Connect

Over 73% of successful tech leaders believe having a mentor is key to their success.

At groCTO, we recognize mentorship as a powerful tool for addressing leadership challenges and offering personalised support and fresh perspectives. That’s why we’ve kept Connect a cornerstone of our community - offering 1:1 mentorship sessions with global tech leaders and CTOs. With over 74 mentees and 20 mentors, our Connect program fosters valuable relationships and supports your growth as a tech leader.

These sessions allow emerging leaders to: 

  • Gain personalised advice: Through 1:1 sessions, mentors address individual challenges and tailor guidance to the specific needs and career goals of emerging leaders. 
  • Navigate career growth: These mentors understand the strengths and weaknesses of the individual and help them focus on improving specific leadership skills and competencies and build confidence. 
  • Build valuable professional relationships: Our mentorship sessions expand professional connections and foster collaborations and knowledge sharing that can offer ongoing support and opportunities. 

Weekly Tech Insights

To keep our tech community informed and inspired, groCTO brings you a fresh set of learning resources every week:

  • CTO Diaries: The CTO Diaries provide a unique glimpse into the experiences and lessons learned by seasoned Chief Technology Officers. These include personal stories, challenges faced, and successful strategies implemented by them. Hence, helping engineering leaders gain practical insights and real-world examples that can inspire and inform their approach to leadership and team management.
  • Podcasts: 
    • groCTO Originals is a weekly podcast for current and aspiring tech leaders aiming to transform their approach by learning from seasoned industry experts and successful engineering leaders across the globe.
    • ‘The DORA Lab’ by groCTO is an exclusive podcast that’s all about DORA and other engineering metrics. In each episode, expert leaders from the tech world bring their extensive knowledge of the challenges, inspirations, and practical uses of DORA metrics and beyond.
  • Bytes: groCTO Bytes is a weekly Sunday dose of curated wisdom delivered straight to your inbox as a newsletter. Our goal is to keep tech leaders, CTOs, and VPEs up-to-date on the latest trends and best practices in engineering leadership, tech management, system design, and more.

Are you a tech coach looking to make an impact? 

Looking Ahead: Building a Dynamic Community

At groCTO, we are committed to making this community bigger and better. We want current and aspiring engineering leaders to invest in their growth as well as contribute to pushing the boundaries of what engineering teams can achieve.

We’re just getting started. A few of our future plans for groCTO include:

  • Virtual Events: We plan to conduct interactive webinars and workshops to help engineering leaders and CTOs get deeper dives into specific topics and networking opportunities.
  • Slack Channels: We plan to create Slack channels to allow emerging tech leaders to engage in vibrant discussions and get real-time support tailored to various aspects of engineering leadership.

We envision a community that thrives on continuous engagement and growth. Scaling our resources and expanding our initiatives, we want to ensure that every member of groCTO finds the support and knowledge they need to excel. 

Get in Touch with us! 

At Typo, our vision is clear: to ship reliable software faster and build high-performing engineering teams. With groCTO, we are making significant progress toward this goal by empowering engineering leaders with the tools and support they need to excel. 

Join us in this exciting new chapter and be a part of a community that empowers tech leaders to excel and innovate. 

We’d love to hear from you! For more information about groCTO and how to get involved, write to us at hello@grocto.dev

Why do Companies Choose Typo?

Dev teams hold great importance in the engineering organization. They are essential for building high-quality software products, fostering innovation, and driving the success of technology companies in today’s competitive market.

However, engineering leaders need to understand the bottlenecks holding their teams back, since these blind spots can directly affect projects. This is where software development analytics tools come to the rescue, and such tools serve teams best when they offer the features and integrations engineering leaders are actually looking for.

Typo is an intelligent engineering platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. Let’s look at why engineering leaders choose Typo as an important part of their toolkit:

You get Customized DORA and other Engineering Metrics

Engineering metrics are measurements of engineering outputs and processes. However, there isn’t a single pre-defined set of metrics that software development teams measure to ensure success; the right set depends on various factors, including team size, the background of the team members, and so on.

Typo’s customized DORA key metrics (Deployment Frequency, Change Failure Rate, Lead Time, and Mean Time to Recover) and other engineering metrics can be configured in a single dashboard based on your specific development processes. This helps benchmark the dev team’s performance and identify real-time bottlenecks, sprint delays, and blocked PRs. With a user-friendly interface and tailored integrations, engineering leaders can get all the relevant data within minutes and drive continuous improvement.
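
For intuition, two of these DORA metrics reduce to simple arithmetic over a deployment log. The sketch below is a simplified illustration (the log format is an assumption made for the example); platforms like Typo derive these automatically from CI/CD integrations:

```python
# Back-of-the-envelope sketch of two DORA metrics from a deployment log.
# The log shape (list of dicts with a 'failed' flag) is invented for
# illustration, not a real Typo data model.

def deployment_frequency(deploy_count: int, days: int) -> float:
    """Average deployments per day over the observed window."""
    return deploy_count / days


def change_failure_rate(deployments: list[dict]) -> float:
    """Share of deployments that caused a failure in production."""
    if not deployments:
        return 0.0
    failures = sum(1 for d in deployments if d["failed"])
    return failures / len(deployments)
```

The value of a dashboard is less in the arithmetic than in collecting these events consistently and trending them over time.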

Typo has an In-Built Automated Code Review Feature

Code review is all about improving the code quality. It improves the software teams’ productivity and streamlines the development process. However, when done manually, the code review process can be time-consuming and takes a lot of effort.

Typo’s automated code review tool auto-analyses codebase and pull requests to find issues and auto-generates fixes before it merges to master. It understands the context of your code and quickly finds and fixes any issues accurately, making pull requests easy and stress-free. It standardizes your code, reducing the risk of a software security breach and boosting maintainability, while also providing insights into code coverage and code complexity for thorough analysis.

You can Track the Team’s Progress with the Advanced Sprint Analysis Tool

While a burndown chart helps visually monitor teams’ work progress, it is time-consuming and doesn’t provide insights about the specific types of issues or tasks. Hence, it is always advisable to complement it with sprint analysis tools to provide additional insights tailored to agile project management.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. It uses data from Git and the issue management tool to provide insights into how much work has been completed, how much is still in progress, and how much time is left in the sprint. This helps in identifying potential problems early, spotting areas where teams can be more efficient, and meeting deadlines.
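
The underlying arithmetic of such sprint tracking is straightforward. Here is a toy sketch, with invented field names, of the completed-versus-expected comparison a burndown encodes:

```python
# Toy illustration of sprint-progress arithmetic: completed story points
# versus total, compared against elapsed time. Real sprint analysis pulls
# these numbers from Git and the issue tracker.

def sprint_completion_pct(done_points: int, total_points: int) -> float:
    """Percentage of committed story points completed so far."""
    return 100 * done_points / total_points if total_points else 0.0


def on_track(done_points: int, total_points: int,
             days_elapsed: int, sprint_days: int) -> bool:
    """Crude check: is completed work keeping pace with elapsed time?"""
    expected_pct = 100 * days_elapsed / sprint_days
    return sprint_completion_pct(done_points, total_points) >= expected_pct
```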

The Metrics Dashboard Focuses on Team-Level Improvement, Not on Micromanaging Individual Developers

When engineering metrics focus on individual success rather than team performance, it creates a sense of surveillance rather than support. This leads to decreased motivation, productivity, and trust among development teams. Hence, there are better ways to use the engineering metrics.

Typo has a metrics dashboard that focuses on the team’s health and performance. It lets engineering leaders compare the team’s results against healthy industry benchmarks and drive impactful initiatives for the team. Since it considers only team-level goals, it encourages team members to collaborate and solve problems together, fostering a healthier, more productive work environment conducive to innovation and growth.

Typo Takes into Consideration the Human Side of Engineering

Measuring developer experience requires not only quantitative metrics but also qualitative feedback. By prioritizing the human side of team members and developer productivity, engineering managers can create a more inclusive and supportive environment for them.

Typo provides a 360° view of the developer experience, capturing qualitative insights and an in-depth view of the real issues that need attention. With signals from work patterns and continuous AI-driven pulse check-ins on how developers in the team are doing, Typo surfaces early indicators of their well-being and actionable insights on the areas that need your attention. It also tracks developers’ work habits across multiple activities, such as Commits, PRs, Reviews, Comments, Tasks, and Merges, over a given period. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system identifies those developers as being in the burnout zone or at risk of burnout.
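
As a simplified illustration of that pattern-versus-benchmark check (the data shape and the 1.5x threshold are invented for the example):

```python
from statistics import mean

# Simplified sketch of the idea described above: compare each developer's
# average weekly activity (commits, PRs, reviews, ...) against the team
# average and flag sustained outliers as a crude proxy for overwork.

def burnout_risk(activity_by_dev: dict[str, list[int]],
                 factor: float = 1.5) -> list[str]:
    """Flag developers whose average weekly activity exceeds the team
    average by `factor`."""
    team_avg = mean(mean(weeks) for weeks in activity_by_dev.values())
    return [dev for dev, weeks in activity_by_dev.items()
            if mean(weeks) > factor * team_avg]
```

A real system would combine many more signals (including qualitative check-ins) rather than relying on a single activity threshold.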

You can Integrate a Wide Range of Tools with Your Dev Stack

The more tools that can be integrated with the software, the better it is for developers. Integrations streamline the development process, enforce standardization and consistency, and provide access to valuable resources and functionalities.

Typo lets you see the complete picture of your engineering health by seamlessly connecting to your tech tool stack. This includes:

  • Git versioning tools built on the Git version control system
  • Issue tracker tools for managing tasks, bug tracking, and other project-related issues
  • CI/CD tools to automate and streamline the software development process
  • Communication tools to facilitate the exchange of ideas and information
  • Incident management tools to resolve unexpected events or failures

Conclusion

Typo is a software delivery tool that can help ship reliable software faster. You can find real-time bottlenecks in your SDLC, automate code reviews, and measure developer experience – all in a single platform.

Typo Ranked as a Leader in G2 Summer 2023 Reports

The G2 Summer 2023 report is out!

We are delighted to share that Typo ranks as a leader in the Software Development analytics tool category. A big thank you to all our customers who supported us in this journey and took the time to write reviews about their experience. It really got us motivated to keep moving forward and bring the best to the table in the coming weeks.

Typo Taking the Lead

Typo is placed among the leaders in Software Development Analytics. Besides this, we earned the ‘Users Love Us’ badge as well.

Our wall of fame shines bright with –

  • Leader in the overall Grid® Report for Software Development Analytics Tools category
  • Leader in the Mid Market Grid® Report for Software Development Analytics Tools category
  • Rated #1 for Likelihood to Recommend
  • Rated #1 for Quality of Support
  • Rated #1 for Meets Requirements
  • Rated #1 for Ease of Use
  • Rated #1 for Analytics and Trends

Typo has been ranked a Leader in the Grid Report for Software Development Analytics Tool | Summer 2023. This is a testament to our continuous efforts toward building a product that engineering teams love to use.

The ratings also include –

  • 97% of the reviewers have rated Typo high in analyzing historical data to highlight trends, statistics & KPIs
  • 100% of the reviewers have rated us high in Productivity Updates

We, as a team, achieved the following user ratings:

Typo user ratings

Here’s What our Customers Say about Typo

Check out what other users have to say about Typo here.

What Makes Typo Different?

Typo is an intelligent AI-driven Engineering Management platform that enables modern software teams with visibility, insights & tools to code better, deploy faster & stay aligned with business goals.

Having launched on Product Hunt, we started with 15 engineers working with sheer hard work and dedication, and have since impacted 5,000+ developers and engineering leaders globally across 400,000+ PRs & 1.5M+ commits.

We are NOT just another software delivery analytics platform. We go beyond SDLC metrics to build an ecosystem that combines intelligent insights, impactful actions & automated workflows – helping managers lead better & developers perform better.

As the first step, Typo gives core insights into dev velocity, quality & throughput, which has helped engineering leaders reduce their PR cycle time by almost 57% and deliver projects 2x faster.

PR cycle time

Continuous Improvement with Typo

Typo empowers continuous improvement for developers & managers through goal setting & insights visible only to the developers themselves.

Leaders can set goals to enforce best practices, such as keeping PR sizes in check, avoiding PRs merged without review & identifying high-risk work. Typo nudges the key stakeholders on Slack as soon as a goal is breached. Typo also automates workflows on Slack to help developers ship PRs and complete code reviews faster.


Developer’s View

Typo provides core insights to your developers that are 100% confidential to them. These insights help developers identify their strengths and the core areas of improvement that affect software delivery, and let them gain visibility into & measure the impact of their work on team efficiency & goals.

Developer’s view

Developer’s Well-Being

We believe that all three aspects – work, collaboration & well-being – need to fall in place to help an individual deliver their best. Inspired by the SPACE framework for developer productivity, we support Pulse Check-Ins, Developer Experience insights, Burnout predictions & Engineering surveys to paint a complete picture.

Developer’s well-being

10X your Dev Teams’ Efficiency with Typo

It’s all of your immense love and support that made us a leader in such a short period. We are grateful to you!

But this is just the beginning. Our aim has always been to level up your dev game, and we will be coming out with exciting new releases in the next few weeks.

Interested in using Typo? Sign up for FREE today and get insights in 5 min.
