Typo's Picks

Many Agile teams confuse velocity with capacity. Both measure work, but they serve different purposes. Understanding the difference is key to better planning and execution. 

Agile’s rise in popularity is no surprise—it helps teams deliver on time. Velocity tracks completed work over time, guiding future estimates. Capacity measures available resources, ensuring realistic commitments. 

Misusing these metrics can lead to missed deadlines and inefficiencies. Used correctly, they boost productivity and streamline workflows. 

In this blog, we’ll break down velocity vs. capacity, highlight their differences, and share best practices for Agile success. 

What is Agile Velocity? 

Agile velocity measures the amount of work a team completes in a sprint, typically using story points. It reflects a team’s actual output over time. By tracking velocity, teams can forecast output for future sprints and set realistic goals. 

Velocity is not fixed—it evolves as teams improve. New teams may start with lower velocity, which grows as they refine their processes. However, it is not a direct measure of efficiency. High velocity does not always mean better performance. 

Understanding velocity helps teams make data-driven decisions. It ensures sprint planning aligns with past performance, reducing the risk of overcommitment. 

How to Calculate Agile Velocity? 

Velocity is calculated by averaging the total story points completed over multiple sprints. 

Example:

  • Sprint 1: Team completes 30 story points
  • Sprint 2: Team completes 25 story points
  • Sprint 3: Team completes 35 story points

Average velocity = (30 + 25 + 35) ÷ 3 = 30 story points per sprint 

This means the team can reasonably commit to about 30 story points in upcoming sprints. 
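If you track velocity in a script or spreadsheet export, the same calculation takes only a few lines. Here is a minimal Python sketch using the example figures above (the numbers are illustrative, not a benchmark):

```python
def average_velocity(completed_points):
    """Average story points completed across past sprints."""
    if not completed_points:
        raise ValueError("need at least one completed sprint")
    return sum(completed_points) / len(completed_points)

# Story points completed in the last three sprints (example figures from above)
recent_sprints = [30, 25, 35]
print(average_velocity(recent_sprints))  # 30.0 story points per sprint
```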

What is Agile Capacity? 

Agile capacity is the total available working hours for a team in a sprint. It factors in team size, holidays, and non-project work. Unlike velocity, which shows actual output, capacity focuses on potential workload. 

Capacity planning helps teams set realistic expectations. It prevents burnout by ensuring workload matches availability. 

Capacity fluctuates based on external factors. A fully staffed sprint has more capacity than one with multiple absences. Tracking it ensures smoother sprint execution and better resource management. 

How to Calculate Agile Capacity? 

Capacity is based on available working hours in a sprint. It factors in team size, work hours per day, and non-project time. 

Example: 

  • Team of 5 members
  • Each works 8 hours per day
  • Sprint length: 10 working days
  • Total capacity: 5 × 8 × 10 = 400 hours

If one member is on leave for 2 days, the adjusted capacity is:
(4 × 8 × 10) + (1 × 8 × 8) = 384 hours
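Here is a similar Python sketch for capacity, using the assumed team size, hours, and leave from the example above:

```python
def sprint_capacity_hours(available_days_per_member, hours_per_day=8):
    """Total sprint capacity in hours, given each member's available days."""
    return sum(days * hours_per_day for days in available_days_per_member)

# 5-person team, 10-day sprint, one member on leave for 2 days (example figures)
available_days = [10, 10, 10, 10, 8]
print(sprint_capacity_hours(available_days))  # 384 hours
```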

Velocity shows past output, while capacity shows available effort. Both help teams plan sprints effectively. 

Differences Between Agile Velocity and Capacity 

While both velocity and capacity deal with workload, they serve different roles. The confusion arises when teams assume high capacity means high velocity. 

But velocity depends on factors beyond available hours—such as efficiency, experience, and blockers. 

Here’s a deeper look at their key differences: 

1. Measurement Units 

Velocity is measured in story points, reflecting completed work. It captures complexity and effort rather than just time. Capacity, on the other hand, is measured in hours or workdays. It represents the total time available, not the work accomplished. 

For example, a team with a capacity of 400 hours may complete only 30 story points. The work done depends on efficiency, not just available hours. 

2. Predictability vs. Availability 

Velocity helps predict future output based on historical data. It evolves with team performance. Capacity only shows available effort in a sprint. It does not indicate how much work will actually be completed. 

A team may have 500 hours of capacity but deliver only 35 story points. Predictability relies on velocity, while availability depends on capacity. 

3. Influence of Team Experience and Efficiency 

Velocity changes as teams gain experience and refine processes. A team working together for months will likely have a higher velocity than a newly formed team. Capacity, by contrast, does not depend on experience; it changes only when team size, sprint length, or availability changes. 

For example, two teams with the same capacity (400 hours) may have different velocities—one completing 40 story points, another only 25. Experience and engineering efficiency are the reasons behind this gap. 

4. Impact of External Factors 

Capacity is affected by leaves, training, and holidays. Velocity is influenced by dependencies, technical debt, and workflow efficiency. 

Example:

  • A team with 10 members and 800 capacity hours may lose 100 hours due to vacations. 
  • However, velocity might drop due to unexpected blockers, not just reduced capacity. 

External factors impact both, but their effects differ. Capacity loss is predictable, while velocity fluctuations are harder to forecast. 

5. Use in Sprint Planning 

Capacity helps determine how much work the team could take on. Velocity helps decide how much work the team should take on based on past performance. 

If a team has a velocity of 30 story points but a capacity of 500 hours, taking on 50 story points will likely lead to failure. Sprint planning should balance both, prioritizing past velocity over raw capacity. 
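One simple way to encode that rule of thumb is a planning check that starts from average velocity and uses capacity only as an upper bound. The sketch below is illustrative; the hours-per-point conversion is a team-specific assumption, not a standard figure:

```python
def recommended_commitment(velocity_history, capacity_hours, hours_per_point=10):
    """Suggest a sprint commitment in story points, led by past velocity.

    hours_per_point is a rough, team-specific conversion (an assumption here),
    used only to check that the velocity-based number also fits within capacity.
    """
    average_velocity = sum(velocity_history) / len(velocity_history)
    capacity_ceiling = capacity_hours / hours_per_point
    return min(average_velocity, capacity_ceiling)

# Velocity around 30 points and 500 hours of capacity: commit to ~30, not 50
print(recommended_commitment([30, 25, 35], capacity_hours=500))  # 30.0
```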

6. Adjustments Over Time 

Velocity is dynamic. It shifts due to process improvements, team changes, and work complexity. Capacity remains relatively stable unless the team structure changes. 

For example, a team with a velocity of 25 story points may improve to 35 story points after optimizing workflows. Capacity (e.g., 400 hours) remains the same unless sprint length or team size changes. 

Velocity improves with Agile maturity, while capacity remains a logistical factor. 

7. Risk of Misinterpretation 

Using capacity as a performance metric can mislead teams. A high capacity does not mean a team should take on more work. Similarly, a drop in velocity does not always indicate lower performance—it may mean more complex work was tackled. 

Example: 

  • A team’s velocity drops from 40 to 30 story points. Instead of assuming inefficiency, check if the complexity of tasks increased. 
  • A team with 600 capacity hours should not assume they can complete 60 story points if past velocity suggests 45 is realistic. 

Misinterpreting these metrics can lead to overloading, burnout, and poor sprint outcomes. 

Best Practices to Follow for Agile Velocity and Capacity 

Here are some best practices to follow to strike the right balance between agile velocity and capacity: 

  • Track Velocity Over Multiple Sprints: Use an average to get a reliable estimate rather than relying on a single sprint’s data. 
  • Don’t Overcommit Based on Capacity: Always plan work based on past velocity, not just available hours. 
  • Account for Non-Project Time: Factor in meetings, training, and unforeseen blockers when calculating capacity. 
  • Adjust for Team Changes: Both will fluctuate if team members join or leave, so recalibrate expectations accordingly. 
  • Use Capacity for Workload Balancing: Ensure tasks are evenly distributed to prevent burnout. 
  • Avoid Comparing Teams’ Velocities: Each team has different workflows and efficiencies; velocity isn’t a competition. 
  • Monitor Trends, Not Just Numbers: Look for patterns in velocity and capacity changes to improve forecasting. 
  • Use Both Metrics Together: Velocity ensures realistic commitments, while capacity prevents overloading. 
  • Reassess Regularly: Review both metrics after each sprint to refine planning. 
  • Communicate Changes Transparently: Keep stakeholders informed when capacity or velocity shifts impact delivery. 

Conclusion 

Understanding the difference between velocity and capacity is key to Agile success. 

Companies can enhance agility by integrating AI into their engineering process with Typo. It enables AI-powered engineering analytics that tracks both metrics, identifies bottlenecks, and optimizes sprint planning. Automated fixes and intelligent recommendations help teams improve velocity without overloading capacity. 

By leveraging AI-driven insights, businesses can make smarter decisions and accelerate delivery. 

Want to see how AI can streamline your Agile processes?

Many confuse engineering management with project management. The overlap makes it easy to see why. 

Both involve leadership, planning, and execution. Both drive projects to completion. But their goals, focus areas, and responsibilities differ significantly. 

This confusion can lead to hiring mistakes and inefficient workflows. 

A project manager ensures a project is delivered on time and within scope. An engineering manager looks beyond a single project, focusing on team growth, technical strategy, and long-term impact. 

Understanding these differences is crucial for businesses and employees alike. 

Let’s break down the key differences. 

What is Engineering Management? 

Engineering management focuses on leading engineering teams and driving technical success. It involves decisions related to engineering resource allocation, team growth, and process optimization. 

In a software company, an engineering manager oversees multiple teams building a new AI feature. They ensure the teams follow best practices and meet high technical standards. 

Their role extends beyond individual projects. They also mentor engineers and help them adapt to team workflows. 

What is Engineering Project Management? 

Engineering project management focuses on delivering specific projects on time and within scope. 

For the same AI feature, the project manager coordinates deadlines, assigns tasks, and tracks progress. They manage dependencies, remove roadblocks, and ensure developers have what they need. 

Differences Between Engineering Management and Project Management 

Both engineering management and engineering project management fall under classical project management. 

However, their roles differ based on the organization’s structure. 

In Engineering, Procurement, and Construction (EPC) organizations, project managers play a central role, while engineering managers operate within project constraints. 

In contrast, in pure engineering firms, the difference fades, and project managers often assume engineering management responsibilities. 

1. Scope of Responsibility 

Engineering management focuses on the broader development of engineering teams and processes. It is not tied to a single project but instead ensures long-term success by improving technical strategy. 

On the other hand, engineering project management is centered on delivering a specific project within defined constraints. The project manager ensures clear goals, proper task delegation, and timely execution. Once the project is completed, their role shifts to the next initiative. 

2. Temporal Orientation 

The core difference lies in time and continuity. Engineering managers operate on an ongoing basis without a defined endpoint. Their role is to ensure that engineering teams continuously improve and adapt to evolving technologies. 

Even when individual projects end, their responsibilities persist as they focus on optimizing workflows. 

Engineering project managers, in contrast, work within fixed project timelines. Their focus is to ensure that specific engineering initiatives are delivered on time and under budget. 

Each software project has a lifecycle, typically consisting of phases such as initiation, planning, execution, monitoring, and closure. 

For example, if a company is building a recommendation engine, the engineering manager ensures the team is well-trained and the technical processes are set up for accuracy and efficiency. Meanwhile, the project manager tracks the AI model’s development timeline, coordinates testing, and ensures deployment deadlines are met. 

Once the recommendation engine is live, the project manager moves on to the next project, while the engineering manager continues refining the system and supporting the team. 

3. Resource Governance Models 

Engineering managers allocate resources based on long-term strategy. They focus on team stability, ensuring individual engineers work on projects that align with their expertise. 

Project managers, however, use temporary resource allocation models. They often rely on tools like RACI matrices and effort-based planning to distribute workload efficiently. 

If a company is launching a new mobile app, the project manager might pull engineers from different teams temporarily, ensuring the right expertise is available without long-term restructuring. 

4. Knowledge Management Approaches 

Engineering management establishes structured frameworks like communities of practice, where engineers collaborate, share expertise, and refine best practices. 

Technical mentorship programs ensure that senior engineers pass down insights to junior team members, strengthening the organization’s technical depth. Additionally, capability models help map out engineering competencies. 

In contrast, engineering project management prioritizes short-term knowledge capture for specific projects. 

Project managers implement processes to document key artifacts, such as technical specifications, decision logs, and handover materials. These artifacts ensure smooth project transitions and prevent knowledge loss when team members move to new initiatives. 

5. Decision Framework Complexity 

Engineering managers operate within highly complex decision environments, balancing competing priorities like architectural governance, technical debt, scalability, and engineering culture. 

They must ensure long-term sustainability while managing trade-offs between innovation, cost, and maintainability. Decisions often involve cross-functional collaboration, requiring alignment with product teams, executive leadership, and engineering specialists. 

Engineering project management, however, works within defined decision constraints. Their focus is on scope, cost, and time. Project managers are in charge of achieving as much balance as possible among the three constraints. 

They use structured frameworks like critical path analysis and earned value management to optimize project execution. 

While they have some influence over technical decisions, their primary concern is delivering within set parameters rather than shaping the technical direction. 

6. Performance Evaluation Methodologies 

Engineering management performance is measured on criteria such as code quality improvements, process optimizations, mentorship impact, and technical thought leadership. The focus is on continuous improvement, not immediate project outcomes. 

Engineering project management, on the other hand, relies on quantifiable delivery metrics. 

A project manager’s success is determined by on-time milestone completion, adherence to budget, risk mitigation effectiveness, and variance analysis against project baselines. Engineering metrics like cycle times, defect rates, and stakeholder satisfaction scores ensure that projects remain aligned with business objectives. 

7. Value Creation Mechanisms 

Engineering managers drive value through capability development and innovation enablement. They focus on building scalable processes and investing in the right talent. 

Their work leads to long-term competitive advantages, ensuring that engineering teams remain adaptable and technically strong. 

Engineering project managers create value by delivering projects predictably and efficiently. Their role ensures that cross-functional teams work in sync and delivery remains structured. 

By implementing agile workflows, dependency mapping, and phased execution models, they ensure business goals are met without unnecessary delays. 

8. Organizational Interfacing Patterns 

Engineering management requires deep engagement with leadership, product teams, and functional stakeholders. 

Engineering managers participate in long-term planning discussions, ensuring that engineering priorities align with broader business goals. They also establish feedback loops with teams, improving alignment between technical execution and market needs. 

Engineering project management, however, relies on temporary, tactical stakeholder interactions. 

Project managers coordinate status updates, cross-functional meetings, and expectation management efforts. Their primary interfaces are delivery teams, sponsors, and key decision-makers involved in a specific initiative. 

Unlike engineering managers, who shape organizational direction, project managers ensure smooth execution within predefined constraints. 

Conclusion 

Visibility is key to effective engineering and project management. Without clear insights, inefficiencies go unnoticed, risks escalate, and productivity suffers. Engineering analytics bridge this gap by providing real-time data on team performance, code quality, and project health. 

Typo enhances this further with AI-powered code analysis and auto-fixes, improving efficiency and reducing technical debt. It also offers developer experience visibility, helping teams identify bottlenecks and streamline workflows. 

With better visibility, teams can make informed decisions, optimize resources, and accelerate delivery. 

Ensuring software quality is non-negotiable. Every software project needs a dedicated quality assurance mechanism. 

But measuring quality is not always so simple. 

There are numerous metrics available, each providing different insights. However, not all metrics need equal attention. 

The key is to track those that have a direct impact on software performance and user experience. 

Metrics You Must Measure for Software Quality 

Here are the numbers you need to keep a close watch on: 

1. Code Quality 

Code quality measures how well-written and maintainable a software codebase is. 

Poor code quality leads to increased technical debt, making future updates and debugging more difficult. It directly affects software performance and scalability. 

Measuring code quality requires static code analysis, which helps detect vulnerabilities, code smells, and non-compliance with coding standards. 

Platforms like Typo assist in evaluating factors such as complexity, duplication, and adherence to best practices. 

Additionally, code reviews provide qualitative insights by assessing readability and overall structure. Frequent defects in a specific module can help identify code quality issues that require attention. 

2. Defect Density 

Defect density determines the number of defects relative to the size of the codebase. 

It is calculated by dividing the total number of defects by the total lines of code or function points. 

A higher defect density indicates a higher likelihood of software failure, while a lower defect density suggests better software quality. 

This metric is particularly useful when comparing different releases or modules within the same project. 
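For illustration, defect density is often normalized per thousand lines of code (KLOC). A minimal Python sketch with made-up figures:

```python
def defect_density_per_kloc(total_defects, total_lines_of_code):
    """Defects per 1,000 lines of code (KLOC)."""
    return total_defects / (total_lines_of_code / 1000)

# 45 defects found in a 60,000-line release (example figures)
print(defect_density_per_kloc(45, 60_000))  # 0.75 defects per KLOC
```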

3. Mean Time To Recovery (MTTR) 

MTTR measures how quickly a system can recover from failures. It is crucial for assessing software resilience and minimizing downtime. 

MTTR is calculated by dividing the total downtime caused by failures by the number of incidents. 

A lower MTTR indicates that the team can identify, troubleshoot, and resolve issues efficiently; a high MTTR signals gaps in incident detection or resolution. 

This metric measures the effectiveness of incident response processes and the ability of the system to return to operational status quickly. 

Ideally, you should set up automated monitoring and well-defined recovery strategies to improve MTTR. 

4. Mean Time Between Failures (MTBF) 

MTBF measures the average time a system operates before running into a failure. It reflects software reliability and the likelihood of experiencing downtime. 

MTBF is calculated by dividing the total operational time by the number of failures. 

A higher MTBF means better stability, while a lower MTBF indicates frequent failures that may require architectural improvements. 

Tracking MTBF over time helps teams predict potential failures and implement preventive measures. 

How to increase it? Invest in regular software updates, performance optimizations, and proactive monitoring. 
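Both incident metrics are simple ratios. Here is a minimal sketch with assumed incident data:

```python
def mttr_hours(downtime_hours_per_incident):
    """Mean Time To Recovery: average downtime per incident."""
    return sum(downtime_hours_per_incident) / len(downtime_hours_per_incident)

def mtbf_hours(total_operational_hours, failure_count):
    """Mean Time Between Failures: average uptime between failures."""
    return total_operational_hours / failure_count

# Example figures: 3 incidents over a 720-hour month
incident_downtimes = [1.5, 0.5, 4.0]
print(mttr_hours(incident_downtimes))                # 2.0 hours to recover, on average
print(mtbf_hours(720 - sum(incident_downtimes), 3))  # 238.0 hours between failures
```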

5. Cyclomatic Complexity 

Cyclomatic complexity measures the complexity of a codebase by analyzing the number of independent execution paths within a program. 

High cyclomatic complexity increases the risk of defects and makes code harder to test and maintain. 

This metric is determined by counting the number of decision points, such as loops and conditionals, in a function. 

Lower complexity results in simpler, more maintainable code, while higher complexity suggests the need for refactoring. 
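As a toy illustration, the function below has three decision points (two conditionals and one loop), so its cyclomatic complexity is 4 under the common "decision points + 1" counting; in practice, static analysis tools compute this for you:

```python
def classify_order(order):
    """Toy function with three decision points: two ifs and one loop."""
    if not order["items"]:                    # decision point 1
        return "empty"
    total = 0
    for item in order["items"]:               # decision point 2
        total += item["price"] * item["quantity"]
    if total > 1000:                          # decision point 3
        return "large"
    return "standard"

# 3 decision points + 1 = cyclomatic complexity of 4
print(classify_order({"items": [{"price": 600, "quantity": 2}]}))  # large
```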

6. Code Coverage 

Code coverage measures the percentage of source code executed during automated testing. 

A higher percentage means better test coverage, reducing the chances of undetected defects. 

This metric is calculated by dividing the number of executed lines of code by the total lines of code. 

While high coverage is desirable, it does not guarantee the absence of bugs, as it does not account for the effectiveness of test cases. 

Note: Maintaining balanced coverage with meaningful test scenarios is essential for reliable software. 

7. Test Coverage 

Test coverage assesses how well test cases cover software functionality. 

Unlike code coverage, which measures executed code, test coverage focuses on functional completeness by evaluating whether all critical paths, edge cases, and requirements are tested. This metric helps teams identify untested areas and improve test strategies. 

Measuring test coverage requires you to track executed test cases against total planned test cases and ensure all requirements are validated. The higher the test coverage, the more you can rely on the software. 

8. Static Code Analysis Defects 

Static code analysis identifies defects without executing the code. It detects vulnerabilities, security risks, and deviations from coding standards. 

Automated tools like Typo can scan the codebase to flag issues like uninitialized variables, memory leaks, and syntax violations. The number of defects found per scan indicates code stability. 

Frequent or recurring issues suggest poor coding practices or inadequate developer training. 

9. Lead Time for Changes 

Lead time for changes measures how long it takes for a code change to move from development to deployment. 

A shorter lead time indicates an efficient development pipeline. 

It is calculated from the moment a change request is made to when it is successfully deployed. 

Continuous integration, automated testing, and streamlined workflows help reduce this metric, ensuring faster software improvements. 
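A minimal sketch of the calculation from two timestamps; the dates and variable names are illustrative assumptions rather than any specific tool's schema:

```python
from datetime import datetime

def lead_time_hours(change_requested_at, deployed_at):
    """Hours from the change request to successful deployment."""
    return (deployed_at - change_requested_at).total_seconds() / 3600

requested = datetime(2024, 3, 1, 9, 0)   # change requested (example)
deployed = datetime(2024, 3, 4, 15, 0)   # deployed to production (example)
print(lead_time_hours(requested, deployed))  # 78.0 hours
```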

10. Response Time 

Response time measures how quickly a system reacts to a user request. Slow response times degrade user experience and impact performance. 

It is measured in milliseconds or seconds, depending on the operation. 

Web applications, APIs, and databases must maintain low response times for optimal performance. 

Monitoring tools track response times, helping teams identify and resolve performance bottlenecks. 

11. Resource Utilization 

Resource utilization evaluates how efficiently a system uses CPU, memory, disk, and network resources. 

High resource consumption without proportional performance gains indicates inefficiencies. 

Engineering monitoring platforms measure resource usage over time, helping teams optimize software to prevent excessive load. 

Optimized algorithms, caching mechanisms, and load balancing can help improve resource efficiency. 

12. Crash Rate 

Crash rate measures how often an application unexpectedly terminates. Frequent crashes mean the software is unstable. 

It is calculated by dividing the number of crashes by the total number of user sessions or active users. 

Crash reports provide insights into root causes, allowing developers to fix issues before they impact a larger audience. 
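The calculation itself is a simple ratio. A sketch with example figures:

```python
def crash_rate_percent(crash_count, total_sessions):
    """Crashes as a percentage of user sessions."""
    return (crash_count / total_sessions) * 100

# 120 crashes across 48,000 sessions (example figures)
print(crash_rate_percent(120, 48_000))  # 0.25 percent of sessions end in a crash
```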

13. Customer-reported Bugs 

Customer-reported bugs are the number of defects identified by users. If it’s high, it means the testing process is neither adequate nor effective. 

These bugs are usually reported through support tickets, reviews, or feedback forms. Tracking them helps assess software reliability from the end-user perspective. 

A decrease in customer-reported bugs over time signals improvements in testing and quality assurance. 

Proactive debugging, thorough testing, and quick issue resolution reduce reliance on user feedback for defect detection. 

14. Release Frequency 

Release frequency measures how often new software versions are deployed. Frequent releases suggest an agile and responsive development process. 

This metric is especially critical in DevOps and continuous delivery environments. 

A high release frequency enables faster feature updates and bug fixes. However, too many releases without proper quality control can lead to instability. 

When you balance speed and stability, you can rest assured there will be continuous improvements without compromising user experience. 

15. Customer Satisfaction Score (CSAT) 

CSAT measures user satisfaction with software performance, usability, and reliability. It is gathered through post-interaction surveys where users rate their experience. 

A high CSAT indicates a positive user experience, while a low score suggests dissatisfaction with performance, bugs, or usability. 

Conclusion 

You must track essential software quality metrics to ensure the software is reliable and there are no performance gaps. 

However, simply measuring them is not enough—real-time insights and automation are crucial for continuous improvement. 

Platforms like Typo help teams monitor quality metrics alongside velocity, DORA insights, and delivery performance, ensuring faster issue detection and resolution. 

AI-powered code analysis and auto-fixes further enhance software quality by identifying and addressing defects proactively. 

With the right tools, teams can maintain high standards while accelerating development and deployment. 

How do engineering leaders stay relevant in the age of Generative AI?

With the rise of GenAI, engineering teams are rethinking productivity, prototyping, and scalability. But AI is only as powerful as the engineering practices behind it.

In this episode of the groCTO by Typo Podcast, host Kovid Batra speaks with Suresh Bysani, Director of Engineering at Eightfold, about the real-world impact of AI on engineering leadership. From writing boilerplate code to scaling enterprise platforms, Suresh shares practical insights and hard-earned lessons from the frontlines of tech.

What You’ll Learn in This Episode:

  • AI Meets Engineering: How GenAI is transforming productivity, prototyping & software workflows.
  • Platform vs. Product Teams: Why technical expectations differ — and how to lead both effectively.
  • Engineering Practices Still Matter: Why GenAI can’t replace fundamental principles like scalability, testing, and reliability.
  • Avoiding AI Pitfalls: Common mistakes in adopting AI for internal tooling & how to avoid them.
  • Upskilling for the Future: Why managers & engineers need to build AI fluency now.
  • A Leader’s Journey: Suresh shares personal stories that shaped his perspective as a people-first tech leader.

Closing Insight: AI isn’t a silver bullet, but a powerful tool. The best engineering leaders combine AI innovation with strong fundamentals, people-centric leadership, and a long-term view.

Timestamps

  • 00:00 — Let’s Begin!
  • 00:55 — Suresh at Eightfold: Role & Background
  • 02:00 — Career Milestones & Turning Points
  • 04:15 — GenAI’s Impact on Engineering Management
  • 07:59 — Why Technical Depth Still Matters
  • 11:58 — AI + Legacy Systems: Key Insights
  • 15:40 — Common GenAI Adoption Mistakes
  • 23:42 — Measuring AI Success
  • 28:08 — AI Use Cases in Engineering
  • 31:05 — Final Advice for Tech Leaders

Episode Transcript

Kovid Batra: Hi everyone. This is Kovid, back with another episode of groCTO by Typo. Today with us, we have a very special guest who is an expert in AI and machine learning. So we are gonna talk a lot about Gen AI, engineering management with them, but let me quickly introduce Suresh to all of you. Hi, Suresh.

Suresh Bysani: Hello.

Kovid Batra: So, Suresh is an Engineering, uh, Director at Eightfold and he holds a postgraduate degree in AI and machine learning from USC, and he has almost 10 to 12 years of experience in engineering and leadership. So today, uh, Suresh, we are, we are grateful to have you here. And before we get started with the main section, which is engineering management in the age of GenAI, we would love to know a little bit more about you, maybe your hobbies, something inspiring from your life that defines who you are today. So if you could just take the stage and tell us something about yourself that your LinkedIn profile doesn’t tell.

Suresh Bysani: Okay. So, thanks Kovid for having me. Hello everybody. Um, yeah, so if I have to recall a few incidents, I’ll probably recall one or two, right? So right from my childhood, um, I was not an outstanding student, let me put it that way. I have a record of, uh, you know, failing every subject until 10th grade, right? So I’m a totally different person. I feel sometimes, you know, uh, that gave me a lot of confidence in life because, uh, at a very early age, I was, you know, uh, exposed to what failure means, or how does being in failure for a very long time mean, right. That kind of gave me a lot of, you know, mental stability or courage to face failures, right? I’ve seen a lot of friends who were, you know, outstanding students right from the beginning and they get shaken aback when they see a setback or a failure in life. Right? So I feel that defined my personality to take aggressive decisions and moves in my life. That’s, that’s one thing.

Kovid Batra: That’s interesting.

Suresh Bysani: Yeah. And the second thing is, uh, during undergrad we went to a program called Net Tech. So it’s organized by, um, a very famous person in India. It’s most of, mostly an educational thing, right, around, uh, cybersecurity and ethical hacking. So I kind of met the country’s brightest minds in this program. All people from all sorts of background came to this program. Mostly, mostly the good ones, right? So it kind of helped me calibrate where I am across the country’s talent and gave me a fresh perspective of looking beyond my current institution, et cetera. Right. So these are two life defining moments for me in terms of my career growth.

Kovid Batra: Perfect. Perfect. I think you become more resilient, uh, when you’ve seen failures, and I think the openness to learn and exposure definitely gives you a perspective that takes you, uh, in your career, not linearly, but it gives you a geometric progression probably, or exponential progression in your life. So totally relate to that and great start to this. Uh, so Suresh, I think today, now we can jump onto the main section and, uh, talk more about, uh, AI, implementation of AI, Agent ai. But again, that is something that I would like to touch upon, uh, little later. First, I would want to understand from your journey, you are an engineering director, uh, and you have spent good enough time in this management and moving from management to the senior management or a leadership position, I would say. Uh, what’s your perspective of engineering management in today’s world? How is it evolving? What are the things that you see, uh, are kind of set and set as ideals in, um, in engineering management, but might not be very right? So just throw some light on your journey of engineering management and how you see it today evolving.

Suresh Bysani: Yep. Um, before we talk about the evolution, I will just share my thoughts about what does being an engineering manager or a leader means in general, and how is it very different from an IC. I get, I get asked this question quite a lot. A lot of people, a lot of, you know, very strong ICs come to me, uh, with this question of, I want to become a manager or can I become a manager? Right. And this happens quite a lot in, in Bay Area as well as Bangalore.

Kovid Batra: Yeah.

Suresh Bysani: So the first question I ask them is, why do you want to become a manager? Right? What are your reasons for it? I, I hear all great sorts of answers, right? Some folks generally come and say, I like execution. I like to drive from front. I’m responsible. I mean, I want to be the team leader sort of thing, right? I mean, all great answers, right? But if you think about it, execution, project management, JIRA management, or leading from the front; these are all characteristics of any technical leader, not just engineering manager. Even if you’re a staff engineer or an architect or a principal engineer, uh, you are responsible for a reasonable degree of execution, project management, planning, mentorship, getting things done, et cetera. After all, we are all evaluated by execution. So that is not a satisfactory answer for me. The main answer that I’m looking for is I like to grow people. I can, I want to see success in people who are around me. So as an engineering manager, it’s quite a tricky role because most of the time you are only as good as your team. You are evaluated by your team’s progress, team’s success, team’s delivery. Until that point, most ICs are only responsible for their work, right? I mean, they’re doing a project.

Kovid Batra: Yeah.

Suresh Bysani: They do amazing work in their project, and most of the time they get fantastic ratings and materialistic benefits. But all of a sudden when you become an engineering manager or leader, you are spending probably more number of hours to get things done because you have to coordinate the rest of the team, but they don’t necessarily translate to your, you know, growth or materialistic benefits because you are only as good as an average person in your team. So the first thing people have to evaluate is, am I or do I get happiness in growing others? If the answer is yes, if that’s your P0, you are going to be a great engineering leader. Everything else will follow. Now to the second question that you asked. This has been, this remained constant across the years from last 25 years. This is the number one characteristic of an engineering leader. Now, the evolution part. As the technology evolves, what I see as challenge is, uh, in a, an engineering manager should typically understand or go to a reasonable depth into people’s work. Technically, I mean. So as the technologies evolves, most of the engineering managers are typically 10 years, 15 years, 20 years experienced as ICs, right?

Kovid Batra: Yeah.

Suresh Bysani: Now, uh, most of these new engineering managers or seasoned engineering managers, they don’t understand what new technology evolution is. For example, all the recent advancements that we are seeing in AI, GenAI, you know, the engineering managers have no clue about it. If the, most of the time when there is bottom up innovation, how are engineering managers going to look at all of this and evaluate all of this from a technical standpoint? What this means is that there is a constant need for upskilling, and we’ll talk about that, uh, you know, in your questions.

Kovid Batra: Sure. Yeah. But I think, uh, I, I would just like to, uh, ask one question here. I mean, I have been working with a lot of engineering managers in my career as well, and, uh, I’ve been talking to a lot of them. There is always a debate around how much technical an engineering manager should be.

Suresh Bysani: Yeah.

Kovid Batra: And I think that lies in a little more detail and probably you could tell with some of your examples. Uh, an engineering manager who is working more on the product team and product side, uh, and an engineering manager who is probably involved in a platform team or maybe infrastructure team, I think things change a little bit. What’s, what’s your thought on that part?

Suresh Bysani: Yeah, so I think, uh, good question by the way. Uh, my general guidance to most engineering manager’s is they have to be reasonably technical. I mean, it is just that they are given a different responsibility in the company, but that’s it. Right? The, it is not an excuse for them, not for not being technical. Yes, they don’t have to code a 100% of the time that’s given. Right. It, it, so how much time they should be spending coding or doing the technical design? It totally depends on the company, project, situation, et cetera. Right? But they have to be technical. But you have a very interesting question around product teams versus platform teams, right?

Kovid Batra: Yeah.

Suresh Bysani: Engineering manager for product teams generally, you know, deals with a lot of stakeholders, whether it is PMs or customers or you know, uh, uh, the, the potential people and the potential new customers that are going to the company. So their time, uh, is mostly spent there. They hardly have enough time to, you know, go deep within the product. That’s the nature of their job. But at the same time, they do, uh, they are also expected to be, uh, reasonably technical, but not as technical as engineering leaders of platform teams or infrastructure teams. The plat for the platform teams and infrastructure teams, yes. They also engage with stakeholders, but their stakeholders are mostly internal and other engineering managers. That’s, That’s the general setup.

Kovid Batra: Yeah. Yeah.

Suresh Bysani: And, you know, uh, just like how engineering managers are able to guide how the product should look like, platform managers and infrastructure managers should, you know, uh, go deep into what platform or infrastructure we should provide to the rest of the company. And obviously, as the problem statement sounds, that requires a lot more technical depth, focus than, than the rest of the engineering leaders. So yes, engineering managers for platform and infrastructure are required to be reasonably technically stronger than the rest of the leaders.

Kovid Batra: Totally. I think that’s, that’s the key here. And the balance is something that one needs to identify based on their situation, project, how much of things they need to take care of in their teams. So totally agree to it. Uh, moving on. Uh, I think the most burning piece, uh, I think everyone is talking about it, which is AI, Agent AI, implementing, uh, AI into the most core legacy services in, in a team, in a company. But I think things need to be highlighted, uh, in a way where people need to understand what needs to be done, why it needs to be done, and, uh, while we were talking a few days back, uh, you mentioned about mentoring a few startups and technical founders who are actually doing it at this point of time, and you’re guiding them and you have seen certain patterns where you feel that there is a guidance required in the industry now.

Suresh Bysani: Yeah.

Kovid Batra: So while we have you here, my next question to you is like, what should an engineering manager do in this age of GenAI to, let’s say, stay technically equipped and take the right decisions moving forward?

Suresh Bysani: Yeah. I, I’ll start with this. The first thing is upskilling, right? As we were talking about in our previous, uh, question, uh, most engineering managers have not coded in the GenAI era, right? Because it’s just started.

Kovid Batra: Yeah.

Suresh Bysani: So, but all the new ideas or the new projects, uh, there is a GenAI or an AI flavor to it. That’s where the world is moving towards. I mean, uh, let’s be honest, right? If we don’t upskill ourselves in AI right now, we will be termed legacy. So when there is bottom up innovation happening within the team, how is the engineering manager supposed to, you know, uh, technically calibrate the project/design/code that is happening in the team? So that is why I say there is a need for upskilling. At Eightfold, uh, what we did is one of our leader, uh, he said, uh, all the engineering managers, let’s not do anything for a week. Let’s create something with GenAI that is useful for the company and all of you code it, right? I really loved the idea because periodically engineering managers are supposed to step back like this, whether it is in the form of hackathons or ideas or whatever it is, right? They should get their hands dirty in this new tech to get some perspective. And once I did that, it gave me a totally new perspective and I started seeing every idea with this new lens of GenAI, right? And I started asking fundamental questions like why can’t we write an agent to do this? Why can’t we do this? Should we spend a lot of time writing business logic for this, right? That is important for every engineering leader. How do you periodically step back and get your hands dirty and go to the roots? Sometimes it’s not easy because of the commitments that you have. So you have to spend your weekends or, you know, or after time to go read about some of this, read some papers, write some code, or it could, it doesn’t have to be something outside. It can be, you know, uh, part of your projects too. Go pick up like five to 10% of your code in one of the projects. Get your hands dirty. So you’ll start being relevant and the amount of confidence that you will get will automatically improve. And the kind of questions that you’ll start asking for your, you know, uh, immediate reportees will also change and they will start seeing this too. They’ll start feeling that my leader is reasonably technical and I can go and talk to him about anything. So this aspect is very, very important.

Now coming to your second question which is, uh, what are the common mistakes people are doing with this, you know, GenAI or this advancements of technologies? See, um, GenAI is great in terms of, you know, um, writing a lot of code on behalf of an engineer, right? Writing a lot of monotonic code on behalf of an engineer. But it is an evolving technology. It’ll have limitations. The fundamental mistake that I’m seeing a lot of people are making is they’re assuming that GenAI or the LLMs can replace a lot of strong engineers; maybe in the future, but that’s not the case right now. They’re great for prototyping. They’re great for writing agents. They’re great for, you know, automating some routine mundane tasks, right, and make your product agentic too. That’s all great. They’re moving with great velocity. But the thing is, there’s a lot of difference, uh, between showing this initial prototype and productionizing this. Let’s face it, enterprise customers have a very high bar. They don’t want, you know, something that breaks at scalability or reliability in production, right? Which means while LLM and Agentic worlds offer a lot of fancy ways of doing things, you still need solid engineering design practices around all of this to make sure that your product does not break in production. So that is where I spend a lot of time advising these new founders or, you know, people in large companies who are trying to adopt AI into their SDLC, that this is not going to be a, you know, magical replacement for everything that you guys are doing. It is, think of it as a friend who is going to assist you or you know, improve your productivity by 10x, but everything around a solid engineering design or an organization, it’s not a replacement for that or at least not yet.

Kovid Batra: Makes sense. I think I’d like to deep dive a little bit more on this piece itself, where if you could give us some examples of how, so first of all, where you have seen these problems occurring, like people just going out and implementing AI or agents, uh, without even thinking whether it is gonna make some sense or not, and if you need to do it in the right way..

Suresh Bysani: Yeah.

Kovid Batra: Can you give us some examples? Like, okay, if this is a case, this is how one should proceed step by step. And I think I, I don’t mind if you get a little more technical here explaining what exactly needs to be done.

Suresh Bysani: Yeah. So let’s take a very basic product, right, which, uh, any SaaS application which has all the layers from infrastructure to authentication to product, to, you know, some workflow that SaaS application is supposed to do. So in the non-agentic/AI world, we are all familiar with how to do this, right? We probably do some microservices, we deploy them in Kubernetes or any other compute infrastructure that people are comfortable with. And you know, we write tons and tons of business logic saying, if this is the request, do this. If this is the request, do this. That’s, that is the programming style we are used to, and that’s still very popular. In the world of agents, agents can be thought of, you know, uh, an LLM abstraction where instead of writing a lot of business logic yourself, you have a set of tools that you author, typically the functions or utils that you call, you, you have in your microservices. And agents kind of decide what are the right set of tools to execute in order to get things done. The claim is there’s a lot of time people spend in writing business logic and not the utils itself. So you write this utils/tools one time and let agents do the business logic. That’s okay. That’s a very beautiful claim, right? But where it’ll fail is if I, if you think about enterprise customers, yes, we’ll talk about consumer applications, but let’s talk about enterprise because that’s where most of the immediate money is, right? Enterprise customers allow determinism. So for example, let’s take an application like Jira or you know, Asana, or whatever application you want to think about, right? They expect a lot of determinism. So let’s say you move a Jira ticket from ‘in-progress’ to say ‘completed’, I mean, I, I’m taking Jira as an example because this is a common enterprise product everybody is familiar with, so they expect it to work deterministically. Agents, as we know, are just wrappers around LLM and they are still hallucinating models, right? Uh, so, determinism is a question mark, right? Yes, we, we, there are a lot of techniques and tools people are using to improve the determinism factor, but if the determinism is a 100%, it’s as good as AI can do everything, right? It’s never going to be the case. So we have to carefully pick and choose the parts of the product, which are okay to be non-deterministic. We’ll talk about what they can be. And we have, we obviously know the parts of the product which cannot be non-deterministic. For example, all the permission boundaries, right? One of the common mistakes I see early startups making is they just code permission boundaries with agents. So let’s say given a logged in user, what are the permissions this person is supposed to have? We can’t let agents guess that. It has to be deterministic because what if there is a mistake and you start seeing your boss’s salary, right? It’s not acceptable. Uh, so similarly, permission boundaries, authentications, authorizations, any, anything in this layer, definitely no agents. Uh, anything that has a strong deterministic workflow requirements, basically moving the state missions and moving from one state to another in a very deterministic way, definitely no agents, but there’s, a lot of parts of the product where we can get away with not having deterministic code. It’s okay to take one path versus the other, for example, you know, uh, uh, how, how do I, how do I say it? 
Uh, let’s say you have an agent, you know, which is trying to, uh, act as a, as a, um, as a, as a persona, let me put it that way. So one of the common example I can take is, let’s say you are trying to use Jira, uh, and somebody’s trying to generate some reports with Jira, right? So think of it as offline reporting. So whether you do report number 1, 2, 3, or whether you do report number 3, 2, 1 in different order, it’s okay. Nobody’s going to, you know, uh, nobody’s going to make a big deal about it. So you get the idea, right? So anywhere there is acceptability in terms of non-determinism, it’s okay to code agents, so that you will reduce on the time you’re spending on the business logic. But any, anywhere you need determinism, you definitely have to have solid code which obeys, you know, the rules of determinism.

Kovid Batra: Yeah, totally. I think that’s a very good example to explain where things can be implemented and where you need to be a little cautious. I think one more thing that comes to my mind is that every time you’re implementing something, uh, talking in terms of AI, uh, you also need to show the results.

Suresh Bysani: Yeah.

Kovid Batra: Right? Let’s say if I implement GitHub Copilot in my team, I need to make sure, uh, the coding standards are improving, or at least the speed of writing the code is improving. There are lesser performance issues. There are lesser, let’s say, vulnerability or security issues. So similarly, I think, uh, at Eightfold or at any other startup where you are an advisor, do you see these implementations happening and people consciously measuring whether, uh, things are improving or not, or they’re just going by, uh, the thing that, okay, if it’s the age of AI, let’s implement and do it, and everything is all positive? They’re not looking at results. They’re not measuring. And if they are, how are they measuring? Can you again, give an example and help us understand?

Suresh Bysani: Yeah. So I think I’ve seen both styles. Uh, the answer to this largely relies on the, you know, influence of, uh, founders and technical leaders within the team. For example, Eightfold is an AI company. Most of the leaders at Eightfold are from strong AI backgrounds. So even before GenAI, they knew how to evaluate a, a model and how to make sure that AI is doing its job. So that goodness will continue even in the GenAI world, right? Typically people do this with Evals frameworks, right? They log everything that is done by AI. And, you know, they kind of understand if, uh, what percentage of it is accurate, right? I mean, they can start with something simple and we can take it all the way fancy. But yes, there are many companies where founders or technical leaders have not worked or they don’t understand AI a lot, right? I mean, there’s, they’re still upskilling, just like all of us.

Kovid Batra: Yeah.

Suresh Bysani: And they don’t know how to really evaluate how good of a job AI is doing. Right? I mean, they are just checking their box saying that yes, I have agents. Yes, I’m using AI. Yes, I’m using LLMs, and whatnot, right? So that’s where the danger is. And, and that’s where I spend a lot of time advising them that you should have a solid framework around observability to understand, you know, how much of these decisions are accurate. You know, what part, how much of your productivity is getting a boost, right? Uh, totally. Right. I think people are now upskilling. That’s where I spend a lot of time educating these new age founders, especially the ones who do not have the AI background, uh, to help them understand that you need to have strong Evals frameworks to understand accuracy and use of this AI for everything that you are, that you’re doing. And, and I see a huge, you know, improvement in, in, in their understanding over time.

Kovid Batra: Perfect. Anything specific that you would like to mention here in terms of your evaluation frameworks for AI, uh, that could really help the larger audience to maybe approach things fundamentally?

Suresh Bysani: Oh, I mean, so there are tons of Evals frameworks on the internet, right? I mean, pick a basic one. Nothing fancy. Especially, I mean, obviously it depends on the size of your project and the impact of your AI model. Things can change significantly. But for most of the agents that people are developing in-house, pick a very simple Evals framework. I mean, if you, I, I see a lot of people are using LangGraph and LangSmith nowadays, right? I mean, I’m not married to a framework. People can, are free to use any framework, but. LangSmith is a good example of what observability in the GenAI world should look like, right? So they’ll, they’re, they’re nicely logging all the conversations that we are having with, with LLM, and you can start looking at the impact of each of these conversations. And over time, you will start understanding whether to tweak your prompt or start providing more context or, you know, maybe build a RAG around it. The whole idea is to understand your interactions with AI because these are all headless agents, right? These are not GPT-like conversations where a user is trying to enter this conversation. Your product is doing this on behalf of you, so which means you are not actually seeing what is happening in terms of interactions with this LLM. So having these Evals frameworks will, you know, kind of nicely log everything that we are doing with LLM and we can start observing what to do in order to improve the accuracy and, you know, get, get better results. That’s, that’s the first idea. So I, I would, I would start with LangSmith and people can get a lot of ideas from LangSmith, and yes, we can go all fancy from there.

Kovid Batra: Great. I think before we, uh, uh, complete this discussion and, uh, say goodbye to you, I think one important thing that comes to my mind is that implementing AI in any tech organization, there could be various areas, various dimensions where you can take it to, but anything that you think is kind of proven already where people should invest, engineering managers should invest, like blindly, okay, this is something that we can pick and like, see the impact and improve the overall engineering efficiency?

Suresh Bysani: Yes. I, I generally recommend people to start with internal productivity because it is not customer-facing AI. So you’re okay to do experiments and fail, and it’ll give a nice headway for people within the company to upskill for Agentic worlds. There are tons of problems, right? Whether it is, I mean, I have a simple goal. 10% of my PRs that are generated within the company should be AI-generated. It looks like a very big number, but if you think about it, you can, all the unit tests can be written by AI, all the, you know, uh, PagerDuty problems can be, can, can be taken at first shot by agents and write simple PRs, right? There are tons of internal things that we can just do with agents. Now, agents are becoming very good at code writing and, you know, code generation. Obviously there are still limitations, but for simple things like unit test, bugs, failures, agents can definitely take a first shot at it. That’s one. And second thing is if we think about all these retro documents, internal confluence documents, or bunch of non-productive things that a lot of engineering people do, right? Uh, agents can do it without getting any boredom, right? I mean, think about it. You don’t need to pay any salaries for agents, right? They can continuously work for you. They’ll automate and do all the repetitive and mundane tasks. But in this process, as we’re talking about it, we should start learning the several frameworks and improve the accuracy of these internal agents, and thereby, because internal agents are easy to measure, right? 10% of my PRs. My bugs have reduced this much by a month or 1 month. Bugs as in, the overall bugs will not reduce. The number of bugs that a developer had to fix versus an agent had to fix, that will reduce, uh, over time, right? So these are very simple metrics to measure and learn, and improve on the agent’s accuracy. Once you have this solid understanding, engineers are the best people. They have fantastic product context, so they will start looking at gaps. Oh, I can put an agent here. I can put an agent here. Maybe I can do an agent for this part in the product. That’s the natural evolution I recommend people. I don’t recommend people to start agents in the product direction.

Kovid Batra: Makes sense. Great. I think Suresh, this was a really interesting session. We got some very practical advice around implementing AI and avoiding the pitfalls. Uh, anything else that you would like to say to our audience, uh, as parting advice?

Suresh Bysani: Yeah, so, uh, I’m sure there is a lot of technical audience that are going to see this. Uh, upskill yourself in agents or AI in general. Uh, I think five years ago it was probably not seen as a requirement, uh, with, there was a group of people who were doing AI and generating models, and majority of the world was just doing backend/full stack engineering. But right now, the definition of a full stack engineer has changed completely. Right? So a full stack engineer is now writing agents, right? So it doesn’t have to be fine tuning your models or going into the depth of models, right? That is still models experts’ job, you know? Uh, but at least learning to write programs using agents and incorporating agents as a first class citizen in your projects; definitely spend a lot of time on that.

Kovid Batra: Great. Thank you so much. That’s our time for today. Pleasure having you.

Suresh Bysani: Thank you. Bye-bye.

Engineering Analytics


Essential Software Quality Metrics That Truly Matter

Ensuring software quality is non-negotiable. Every software project needs a dedicated quality assurance mechanism. 

But measuring quality is not always so simple. 

There are numerous metrics available, each providing different insights. However, not all metrics need equal attention. 

The key is to track those that have a direct impact on software performance and user experience. 

Metrics you must measure for software quality 

Here are the numbers you need to keep a close watch on: 

1. Code Quality 

Code quality measures how well-written and maintainable a software codebase is. 

Poor code quality leads to increased technical debt, making future updates and debugging more difficult. It directly affects software performance and scalability. 

Measuring code quality requires static code analysis, which helps detect vulnerabilities, code smells, and non-compliance with coding standards. 

Platforms like Typo assist in evaluating factors such as complexity, duplication, and adherence to best practices. 

Additionally, code reviews provide qualitative insights by assessing readability and overall structure. Frequent defects in a specific module can help identify code quality issues that require attention. 

2. Defect Density 

Defect density determines the number of defects relative to the size of the codebase. 

It is calculated by dividing the total number of defects by the total lines of code or function points. 

A higher defect density indicates a higher likelihood of software failure, while a lower defect density suggests better software quality. 

This metric is particularly useful when comparing different releases or modules within the same project. 
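
To make the formula concrete, here is a minimal Python sketch; the defect counts and codebase sizes are hypothetical, and normalizing per 1,000 lines of code (KLOC) is one common convention.

```python
def defect_density(total_defects: int, total_lines_of_code: int) -> float:
    """Defects per 1,000 lines of code (KLOC)."""
    if total_lines_of_code <= 0:
        raise ValueError("Codebase size must be positive")
    return total_defects / total_lines_of_code * 1000

# Hypothetical comparison of two releases of the same project.
print(defect_density(42, 120_000))  # ~0.35 defects per KLOC
print(defect_density(65, 135_000))  # ~0.48 defects per KLOC -> quality slipped
```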

3. Mean Time To Recovery (MTTR) 

MTTR measures how quickly a system can recover from failures. It is crucial for assessing software resilience and minimizing downtime. 

MTTR is calculated by dividing the total downtime caused by failures by the number of incidents. 

A lower MTTR indicates that the team can identify, troubleshoot, and resolve issues efficiently, while a high MTTR points to gaps in incident response. 

This metric measures the effectiveness of incident response processes and the ability of the system to return to operational status quickly. 

Ideally, you should set up automated monitoring and well-defined recovery strategies to improve MTTR. 
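
As a rough sketch of the calculation (total downtime divided by the number of incidents), assuming you already have per-incident downtime figures:

```python
from datetime import timedelta

def mean_time_to_recovery(incident_downtimes: list[timedelta]) -> timedelta:
    """Total downtime caused by failures divided by the number of incidents."""
    if not incident_downtimes:
        raise ValueError("No incidents to average over")
    return sum(incident_downtimes, timedelta()) / len(incident_downtimes)

# Hypothetical incident log for one month.
downtimes = [timedelta(minutes=18), timedelta(minutes=42), timedelta(minutes=30)]
print(mean_time_to_recovery(downtimes))  # 0:30:00, i.e. an MTTR of 30 minutes
```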

4. Mean Time Between Failures (MTBF) 

MTBF measures the average time a system operates before running into a failure. It reflects software reliability and the likelihood of experiencing downtime. 

MTBF is calculated by dividing the total operational time by the number of failures. 

A higher MTBF means better stability, while a lower MTBF indicates frequent failures that may require improvements at the architectural level. 

Tracking MTBF over time helps teams predict potential failures and implement preventive measures. 

How to increase it? Invest in regular software updates, performance optimizations, and proactive monitoring. 
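
A matching sketch for MTBF, assuming you track an observation window and the failures that occurred within it (downtime is ignored here for simplicity):

```python
from datetime import timedelta

def mean_time_between_failures(operational_time: timedelta,
                               failure_count: int) -> timedelta:
    """Total operational time divided by the number of failures."""
    if failure_count == 0:
        raise ValueError("MTBF is undefined without at least one failure")
    return operational_time / failure_count

# Hypothetical month of operation with three failures.
print(mean_time_between_failures(timedelta(days=30), 3))  # 10 days, 0:00:00
```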

5. Cyclomatic Complexity 

Cyclomatic complexity measures the complexity of a codebase by analyzing the number of independent execution paths within a program. 

High cyclomatic complexity increases the risk of defects and makes code harder to test and maintain. 

This metric is determined by counting the number of decision points, such as loops and conditionals, in a function. 

Lower complexity results in simpler, more maintainable code, while higher complexity suggests the need for refactoring. 
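
Static analysis tools compute this automatically, but a rough approximation for Python code can be sketched with the standard library's ast module: count decision points (branches, loops, boolean operators) and add one. This is only an illustration, not a full implementation of McCabe's metric.

```python
import ast

def approximate_cyclomatic_complexity(source: str) -> int:
    """1 + the number of decision points found in the parsed source."""
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

sample = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
"""
print(approximate_cyclomatic_complexity(sample))  # 3: two branches plus the default path
```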

6. Code Coverage 

Code coverage measures the percentage of source code executed during automated testing. 

A higher percentage means more of the code is exercised by tests, reducing the chances of undetected defects. 

This metric is calculated by dividing the number of executed lines of code by the total lines of code and expressing the result as a percentage. 

While high coverage is desirable, it does not guarantee the absence of bugs, as it does not account for the effectiveness of test cases. 

Note: Maintaining balanced coverage with meaningful test scenarios is essential for reliable software. 

7. Test Coverage 

Test coverage assesses how well test cases cover software functionality. 

Unlike code coverage, which measures executed code, test coverage focuses on functional completeness by evaluating whether all critical paths, edge cases, and requirements are tested. This metric helps teams identify untested areas and improve test strategies. 

Measuring test coverage requires tracking executed test cases against total planned test cases and ensuring all requirements are validated. The higher the test coverage, the more you can rely on the software. 

8. Static Code Analysis Defects 

Static code analysis identifies defects without executing the code. It detects vulnerabilities, security risks, and deviations from coding standards. 

Automated tools like Typo can scan the codebase to flag issues like uninitialized variables, memory leaks, and syntax violations. The number of defects found per scan indicates code stability. 

Frequent or recurring issues suggest poor coding practices or inadequate developer training. 

9. Lead Time for Changes 

Lead time for changes measures how long it takes for a code change to move from development to deployment. 

A shorter lead time indicates an efficient development pipeline. 

It is calculated from the moment a change request is made to when it is successfully deployed. 

Continuous integration, automated testing, and streamlined workflows help reduce this metric, ensuring faster software improvements. 
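
A minimal sketch of the calculation, assuming you record when a change was requested (or first committed) and when it reached production:

```python
from datetime import datetime, timedelta

def average_lead_time(changes: list[tuple[datetime, datetime]]) -> timedelta:
    """Average of (deployed_at - requested_at) across changes."""
    lead_times = [deployed - requested for requested, deployed in changes]
    return sum(lead_times, timedelta()) / len(lead_times)

# Hypothetical changes: (requested_at, deployed_at).
changes = [
    (datetime(2025, 3, 3, 9, 0), datetime(2025, 3, 5, 16, 30)),
    (datetime(2025, 3, 4, 11, 0), datetime(2025, 3, 6, 10, 0)),
]
print(average_lead_time(changes))  # 2 days, 3:15:00
```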

10. Response Time 

Response time measures how quickly a system reacts to a user request. Slow response times degrade user experience and impact performance. 

It is measured in milliseconds or seconds, depending on the operation. 

Web applications, APIs, and databases must maintain low response times for optimal performance. 

Monitoring tools track response times, helping teams identify and resolve performance bottlenecks. 
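
For a single operation, response time can be sampled with a simple timer. Real systems would use an APM or monitoring agent instead; the handler below is only a hypothetical stand-in.

```python
import time

def measure_response_time_ms(operation) -> float:
    """Wall-clock time taken by one request, in milliseconds."""
    start = time.perf_counter()
    operation()
    return (time.perf_counter() - start) * 1000

def handle_request():          # hypothetical handler simulating ~50 ms of work
    time.sleep(0.05)

print(f"{measure_response_time_ms(handle_request):.1f} ms")
```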

11. Resource Utilization 

Resource utilization evaluates how efficiently a system uses CPU, memory, disk, and network resources. 

High resource consumption without proportional performance gains indicates inefficiencies. 

Engineering monitoring platforms measure resource usage over time, helping teams optimize software to prevent excessive load. 

Optimized algorithms, caching mechanisms, and load balancing can help improve resource efficiency. 

12. Crash Rate 

Crash rate measures how often an application unexpectedly terminates. Frequent crashes mean the software is not stable. 

It is calculated by dividing the number of crashes by the total number of user sessions or active users. 

Crash reports provide insights into root causes, allowing developers to fix issues before they impact a larger audience. 

13. Customer-reported Bugs 

Customer-reported bugs count the defects identified by users. A high number suggests the testing process is neither adequate nor effective. 

These bugs are usually reported through support tickets, reviews, or feedback forms. Tracking them helps assess software reliability from the end-user perspective. 

A decrease in customer-reported bugs over time signals improvements in testing and quality assurance. 

Proactive debugging, thorough testing, and quick issue resolution reduce reliance on user feedback for defect detection. 

14. Release Frequency 

Release frequency measures how often new software versions are deployed. Frequent releases suggest an agile and responsive development process. 

This metric is especially critical in DevOps and continuous delivery environments. 

A high release frequency enables faster feature updates and bug fixes. However, too many releases without proper quality control can lead to instability. 

When you balance speed and stability, you can rest assured there will be continuous improvements without compromising user experience. 

15. Customer Satisfaction Score (CSAT) 

CSAT measures user satisfaction with software performance, usability, and reliability. It is gathered through post-interaction surveys where users rate their experience. 

A high CSAT indicates a positive user experience, while a low score suggests dissatisfaction with performance, bugs, or usability. 

Conclusion 

You must track essential software quality metrics to ensure the software is reliable and there are no performance gaps. 

However, simply measuring them is not enough—real-time insights and automation are crucial for continuous improvement. 

Platforms like Typo help teams monitor quality metrics alongside velocity, DORA insights, and delivery performance, ensuring faster issue detection and resolution. 

AI-powered code analysis and auto-fixes further enhance software quality by identifying and addressing defects proactively. 

With the right tools, teams can maintain high standards while accelerating development and deployment. 


Mastering GitHub Analytics

In today's fast-paced software development world, tracking progress and understanding project dynamics is crucial. GitHub Analytics transforms raw data from repositories into actionable intelligence, offering insights that enable teams to optimize workflows, enhance collaboration, and improve software delivery. This guide explores the core aspects of GitHub Analytics, from key metrics to best practices, helping you leverage data to drive informed decision-making.

Why GitHub Analytics Matters

GitHub Analytics provides invaluable insights into project activity, empowering developers and project managers to track performance, identify bottlenecks, and enhance productivity. Unlike generic analytics tools, GitHub Analytics focuses on software development-specific metrics such as commits, pull requests, issue tracking, and cycle time analysis. This targeted approach allows for a deeper understanding of development workflows and enables teams to make data-driven decisions that directly impact project success.

Understanding GitHub Analytics

GitHub Analytics encompasses a suite of metrics and tools that help developers assess repository activity and project health.

Key Components of GitHub Analytics:

  • Data and Process Hygiene: Establishing standardized workflows through consistent labeling, commit keywords, and issue tracking is paramount. This ensures data accuracy and facilitates meaningful analysis.
    • Real-World Example: A team standardizes issue labels (e.g., "bug," "feature," "enhancement," "documentation") to categorize issues effectively and track trends in different issue types.
  • Pulse and Contribution Tracking: Monitoring repository activity, including commit frequency, work distribution among team members, and overall activity trends.
    • Real-World Example: A team uses GitHub Analytics to identify periods of low activity, which might indicate potential roadblocks or demotivation, allowing them to proactively address the issue.
  • Team Performance Metrics: Analyzing key metrics like cycle time (the time taken to complete a piece of work), lead time for changes, and DORA metrics (Deployment Frequency, Change Failure Rate, Mean Time to Recovery, Lead Time for Changes) to identify inefficiencies and improve productivity.
    • Real-World Example: A team uses DORA metrics to track deployment frequency and identify areas for improvement in their continuous delivery pipeline, leading to faster releases and reduced time to market.

GitHub Analytics vs. Other Analytics Tools

While other analytics platforms focus on user behavior or application performance, GitHub Analytics specifically tracks code contributions, repository health, and team collaboration, making it an indispensable tool for software development teams. This focus on development-specific data provides unique insights that are not readily available from generic analytics platforms.

Role of GitHub Analytics in Project Management

  • Performance Monitoring: Analytics provide real-time visibility into how and when contributions are made, enabling project managers to track progress against milestones and identify potential delays.
    • Real-World Example: A project manager uses GitHub Analytics to track the progress of critical features and identify any potential bottlenecks that might impact the project timeline.
  • Resource Allocation: Data-driven insights from GitHub Analytics help optimize resource allocation, ensuring that team members are working on the most impactful tasks and that their skills are effectively utilized.
    • Real-World Example: A project manager analyzes team member contributions and identifies areas where specific skillsets are lacking, informing decisions on hiring or training.
  • Quality Assurance: Identifying recurring issues, analyzing code review comments, and tracking bug trends helps teams proactively refine processes, improve code quality, and reduce the number of defects.
    • Real-World Example: A team analyzes code review comments to identify common code quality issues and implement best practices to prevent them in the future.
  • Strategic Planning: Historical project data, including past performance metrics, successful strategies, and areas for improvement, informs future roadmaps, enabling teams to predict and mitigate potential risks.
    • Real-World Example: A team analyzes past project data to identify trends in development velocity and predict future project timelines more accurately.

Getting Started with GitHub Analytics

Accessing GitHub Analytics:

  • Connect Your GitHub Account: Integrate analytics tools via GitHub settings or utilize GitHub's built-in insights.
  • Use GitHub's Built-in Insights: Access repository insights to track contributions, trends, and identify areas for improvement.
  • Customize Your Dashboard: Set up personalized views with relevant KPIs (Key Performance Indicators) that are most important to your team and project goals.

Navigating GitHub Analytics:

  • Real-Time Dashboards: Monitor KPIs such as deployment frequency and failure rates in real-time to gain immediate insights into project health.
  • Filtering Data: Focus on relevant insights using custom filters based on time frames, contributors, issue labels, and other criteria.
  • Multi-Repository Monitoring: Track multiple projects from a single dashboard to gain a comprehensive overview of team performance across different initiatives.

Configuring GitHub Analytics for Efficiency:

  • Customize Dashboard Templates: Create and save custom dashboard templates for different projects or teams to streamline analysis and reporting.
  • Optimize Data Insights: Aggregate pull requests, issues, and commits to generate meaningful reports and identify trends.
  • Foster Collaboration: Share dashboards with the entire team to promote transparency, foster a data-driven culture, and encourage open discussion around project performance.

Key GitHub Analytics Metrics

Software Development Cycle Time Metrics:

  • Coding Time: Duration from the start of development to when the code is ready for review.
  • Review Time: Measures the efficiency of collaboration in code reviews, indicating potential bottlenecks or areas for improvement in the review process.
  • Merge Time: Time taken from the completion of the code review to the integration of the code into the main branch.

Software Delivery Speed Metrics:

  • Average Pull Request Size: Tracks the scope of merged pull requests, providing insights into the team's approach to code changes and identifying potential areas for improvement in code modularity.
  • DORA Metrics (see the computation sketch after this list):
    • Deployment Frequency: How often changes are deployed to production.
    • Change Failure Rate: Percentage of deployments that result in failures.
    • Lead Time for Changes: The time it takes to go from code commit to code in production.
    • Mean Time to Recovery: The average time it takes to restore service after a deployment failure.
  • Issue Queue Time: Measures how long issues remain unaddressed, highlighting potential delays in issue resolution and potential impacts on project progress.
  • Overdue Items: Tracks tasks that exceed their expected completion times, identifying potential bottlenecks and areas for improvement in project planning and execution.
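
A minimal, hedged sketch of how the four DORA metrics could be derived from a simple deployment log. The record fields and figures below are hypothetical; production tooling would compute these from real pipeline data.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when the change was committed, when it was
# deployed, whether the deployment failed, and how long recovery took if so.
deployments = [
    {"committed": datetime(2025, 2, 1), "deployed": datetime(2025, 2, 3),
     "failed": False, "recovery": None},
    {"committed": datetime(2025, 2, 4), "deployed": datetime(2025, 2, 7),
     "failed": True, "recovery": timedelta(hours=2)},
    {"committed": datetime(2025, 2, 10), "deployed": datetime(2025, 2, 12),
     "failed": False, "recovery": None},
]

observation_days = 14
deployment_frequency = len(deployments) / observation_days        # deploys per day
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
lead_times = [d["deployed"] - d["committed"] for d in deployments]
lead_time_for_changes = sum(lead_times, timedelta()) / len(lead_times)
recoveries = [d["recovery"] for d in deployments if d["failed"]]
mean_time_to_recovery = sum(recoveries, timedelta()) / len(recoveries)

print(deployment_frequency, change_failure_rate,
      lead_time_for_changes, mean_time_to_recovery)
```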

Process Quality and Compliance Metrics:

  • Bug Lead Time for Changes (BLTC): Tracks the speed of bug resolution, providing insights into the team's responsiveness to and efficiency in addressing defects.
  • Raised Bugs Tracker (RBT): Monitors the frequency of bug identification, highlighting areas where improvements in code quality and testing can be made.
  • Pull Request Review Ratio (PRRR): Ensures adequate peer review coverage for all code changes, promoting code quality and knowledge sharing within the team.

Best Practices for Monitoring and Improving Performance

Regular Analytics Reviews:

  • Scheduled Checks: Conduct weekly or bi-weekly reviews of key metrics to track progress toward project goals and identify any emerging issues.

  • Sprint Planning Integration: Incorporate GitHub Analytics data into sprint planning meetings to refine sprint objectives, allocate resources effectively, and make data-driven decisions about scope and priorities.

  • CI/CD Monitoring: Track deployment success rates and identify areas for improvement in the continuous integration and continuous delivery pipeline.

Encouraging Team Engagement:

  • Open Data Access: Promote transparency by sharing analytics dashboards and reports with the entire team, fostering a shared understanding of project performance.
  • Training on Analytics: Provide training to team members on how to effectively interpret and utilize GitHub Analytics data to make informed decisions.
  • Recognition Based on Metrics: Acknowledge and reward team members and teams for achieving positive performance outcomes as measured by key metrics.

Unlocking the Potential of GitHub Analytics

GitHub analytics tools like Typo give software teams critical insights into development performance, collaboration, and project health. By embracing these analytics, teams can streamline workflows, enhance software quality, improve team communication, and make informed, data-driven decisions that ultimately lead to greater project success.

GitHub Analytics FAQs

  • What is GitHub Analytics?
    • A toolset that provides insights into repository activity, collaboration, and project performance.
  • How does GitHub Analytics support project management?
    • It helps monitor team performance, allocate resources effectively, identify inefficiencies, and make data-driven decisions to improve project outcomes.
  • Can GitHub Analytics be customized?
    • Yes, users can tailor dashboards, select specific metrics, and configure reports to meet their unique needs and project requirements.
  • What key metrics are available?
    • Key metrics include development cycle time metrics, software delivery speed metrics (including DORA metrics), and process quality and compliance metrics.
  • Can analytics improve code quality?
    • Yes, by tracking bug reports, analyzing code review trends, and identifying recurring issues, teams can proactively address code quality concerns and implement strategies for improvement.
  • Can GitHub Analytics help manage technical debt?
    • Absolutely. By monitoring changes, identifying areas needing improvement, and tracking the impact of technical debt on development velocity, teams can strategically address technical debt and maintain a healthy codebase.


Engineering Metrics: The Boardroom Perspective

Achieving engineering excellence isn’t just about clean code or high velocity. It’s about how engineering drives business outcomes. 

Every CTO and engineering department manager must know the importance of metrics like cycle time, deployment frequency, or mean time to recovery. These numbers are crucial for gauging team performance and delivery efficiency. 

But here’s the challenge: converting these metrics into language that resonates in the boardroom. 

In this blog, we’re going to share how you make these numbers more understandable. 

What are Engineering Metrics? 

Engineering metrics are quantifiable measures that assess various aspects of software development processes. They provide insights into team efficiency, software quality, and delivery speed. 

Some believe that engineering productivity can be effectively measured through data. Others argue that metrics oversimplify the complexity of high-performing teams. 

While the topic is controversial, the focus of metrics in the boardroom is different. 

In the board meeting, these metrics are a means to show that the team is delivering value, that engineering operations are efficient, and that the investments being made by the company are justified. 

Challenges in Communicating Engineering Metrics to the Board 

Communicating engineering metrics to the board isn’t always easy. Here are some common hurdles you might face: 

1. The Language Barrier 

Engineering metrics often rely on technical terms like “cycle time” or “MTTR” (mean time to recovery). To someone outside the tech domain, these might mean little. 

For example, discussing “code coverage” without tying it to reduced defect rates and faster releases can leave board members disengaged. 

The challenge is conveying these technical terms into business language—terms that resonate with growth, revenue, and strategic impact. 

2. Data Overload 

Engineering teams track countless metrics, from pull request volumes to production incidents. While this is valuable internally, presenting too much data in board meetings can overwhelm your board members. 

A cluttered slide deck filled with metrics risks diluting your message. Granular operational details are for managers running their teams day to day. Board members, however, care about the bigger picture. 

3. Misalignment with Business Goals 

Metrics without context can feel irrelevant. For example, sharing deployment frequency might seem insignificant unless you explain how it accelerates time-to-market. 

Aligning metrics with business priorities, like reducing churn or scaling efficiently, ensures the board sees their true value. 

Key Metrics CTOs Should Highlight in the Boardroom 

Before we go on to solve the above-mentioned challenges, let’s talk about the five key categories of metrics one should be mapping: 

1. R&D Investment Distribution 

These metrics show the engineering resource allocation and the return they generate. 

  • R&D Spend as a Percentage of Revenue: Tracks how much is invested in engineering relative to the company's revenue. Demonstrates commitment to innovation.
  • CapEx vs. OpEx Ratio: This shows the balance between long-term investments (e.g., infrastructure) and ongoing operational costs. 
  • Allocation by Initiative: Shows how engineering time and money are split between new product development, maintenance, and technical debt. 

2. Deliverables

These metrics focus on the team’s output and alignment with business goals. 

  • Feature Throughput: Tracks the number of features delivered within a timeframe. The higher it is, the happier the board. 
  • Roadmap Completion Rate: Measures how much of the planned roadmap was delivered on time. Gives predictability to your fellow board members. 
  • Time-to-Market: Tracks the duration from idea inception to product delivery. It has a huge impact on competitive advantage. 

3. Quality

Metrics in this category emphasize the reliability and performance of engineering outputs. 

  • Defect Density: Measures the number of defects per unit of code. Indicates code quality.
  • Customer-Reported Incidents: Tracks issues reported by customers. Board members use it to get an idea of the end-user experience. 
  • Uptime/Availability: Monitors system reliability. Tied directly to customer satisfaction and trust. 

4. Delivery & Operations

These metrics focus on engineering efficiency and operational stability.

  • Cycle Time: Measures the time taken from work start to completion. Indicates engineering workflow efficiency.
  • Deployment Frequency: Tracks how often code is deployed. Reflects agility and responsiveness.
  • Mean Time to Recovery (MTTR): Measures how quickly issues are resolved. Impacts customer trust and operational stability. 

5. People & Recruiting

These metrics highlight team growth, engagement, and retention. 

  • Offer Acceptance Rate: Tracks how many job offers are accepted. Reflects employer appeal. 
  • Attrition Rate: Measures employee turnover. High attrition signals team instability. 
  • Employee Satisfaction (e.g., via surveys): Gauges team morale and engagement. Impacts productivity and retention. 

By focusing on these categories, you can show the board how engineering contributes to your company's growth. 

Tools for Tracking and Presenting Engineering Metrics 

Here are three tools that can help CTOs streamline the process and ensure their message resonates in the boardroom: 

1. Typo

Typo is an AI-powered platform designed to amplify engineering productivity. It unifies data from your software development lifecycle (SDLC) into a single platform, offering deep visibility and actionable insights. 

Key Features:

  • Real-time SDLC visibility to identify blockers and predict sprint delays.
  • Automated code reviews to analyze pull requests, identify issues, and suggest fixes.
  • DORA and SDLC metrics dashboards for tracking deployment frequency, cycle time, and other critical metrics.
  • Developer experience insights to benchmark productivity and improve team morale. 
  • SOC2 Type II compliant

2. Dashboards with Tableau or Looker

For customizable data visualization, tools like Tableau or Looker are invaluable. They allow you to create dashboards that present engineering metrics in an easy-to-digest format. With these, you can highlight trends, focus on key metrics, and connect them to business outcomes effectively. 

3. Slide Decks

Slide decks remain a classic tool for boardroom presentations. Summarize key takeaways, use simple visuals, and focus on the business impact of metrics. A clear, concise deck ensures your message stays sharp and engaging. 

Best Practices and Tips for CTOs for Presenting Engineering Metrics to the Board 

Presenting engineering metrics to the board is about more than data; it is about delivering a narrative that connects engineering performance to business goals. 

Here are some best practices to follow: 

1. Educate the Board About Metrics 

Start by offering a brief overview of key metrics like DORA metrics. Explain how these metrics—deployment frequency, MTTR, etc.—drive business outcomes such as faster product delivery or increased customer satisfaction. Always include trends and real-world examples. For example, show how improving cycle time has accelerated a recent product launch. 

2. Align Metrics with Investment Decisions

Tie metrics directly to budgetary impact. For example, show how allocating additional funds for DevOps could reduce MTTR by 20%, which could lead to faster recoveries and an estimated Y% revenue boost. You must include context and recommendations so the board understands both the problem and the solution. 

3. Highlight Actionable Insights 

Data alone isn’t enough. Share actionable takeaways. For example: “To reduce MTTR by 20%, we recommend investing in observability tools and expanding on-call rotations.” Use concise slides with 5-7 metrics max, supported by simple and consistent visualizations. 

4. Emphasize Strategic Value

Position engineering as a business enabler. You should show its role in driving innovation, increasing market share, and maintaining competitive advantage. For example, connect your team’s efforts in improving system uptime to better customer retention. 

5. Tailor Your Communication Style

Understand your board members' technical backgrounds and priorities. Begin with business impact, then dive into the technical details. Use clear charts (e.g., trend lines, bar graphs) and executive summaries to convey your message. Tell the stories behind the numbers to make them relatable. 

Conclusion 

Engineering metrics are more than numbers—they’re a bridge between technical performance and business outcomes. Focus on metrics that resonate with the board and align them with strategic goals. 

When done right, your metrics can show how engineering is at the core of value and growth.


Software Delivery


Agile Velocity vs. Capacity: Key Differences and Best Practices

Many Agile teams confuse velocity with capacity. Both measure work, but they serve different purposes. Understanding the difference is key to better planning and execution. 

Agile’s rise in popularity is no surprise—it helps teams deliver on time. Velocity tracks completed work over time, guiding future estimates. Capacity measures available resources, ensuring realistic commitments. 

Misusing these metrics can lead to missed deadlines and inefficiencies. Used correctly, they boost productivity and streamline workflows. 

In this blog, we’ll break down velocity vs. capacity, highlight their differences, and share best practices to ensure agile success for you. 

What is Agile Velocity? 

Agile velocity measures the amount of work a team completes in a sprint, typically using story points. It reflects a team’s actual output over time. By tracking velocity, teams can predict future sprint capacity and set realistic goals. 

Velocity is not fixed—it evolves as teams improve. New teams may start with lower velocity, which grows as they refine their processes. However, it is not a direct measure of efficiency. High velocity does not always mean better performance. 

Understanding velocity helps teams make data-driven decisions. It ensures sprint planning aligns with past performance, reducing the risk of overcommitment. 

How to Calculate Agile Velocity? 

Velocity is calculated by averaging the total story points completed over multiple sprints. 

Example:

  • Sprint 1: Team completes 30 story points
  • Sprint 2: Team completes 25 story points
  • Sprint 3: Team completes 35 story points

Average velocity = (30 + 25 + 35) ÷ 3 = 30 story points per sprint 

This means the team can reasonably commit to about 30 story points in upcoming sprints. 

What is Agile Capacity? 

Agile capacity is the total available working hours for a team in a sprint. It factors in team size, holidays, and non-project work. Unlike velocity, which shows actual output, capacity focuses on potential workload. 

Capacity planning helps teams set realistic expectations. It prevents burnout by ensuring workload matches availability. 

Capacity fluctuates based on external factors. A fully staffed sprint has more capacity than one with multiple absences. Tracking it ensures smoother sprint execution and better resource management. 

How to Calculate Agile Capacity? 

Capacity is based on available working hours in a sprint. It factors in team size, work hours per day, and non-project time. 

Example: 

  • Team of 5 members
  • Each works 8 hours per day
  • Sprint length: 10 working days
  • Total capacity: 5 × 8 × 10 = 400 hours

If one member is on leave for 2 days, the adjusted capacity is:
(4 × 8 × 10) + (1 × 8 × 8) = 384 hours

Velocity shows past output, while capacity shows available effort. Both help teams plan sprints effectively. 
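
Putting the two calculations above into a small Python sketch, using the same numbers as the examples:

```python
def average_velocity(story_points_per_sprint: list[int]) -> float:
    """Average story points completed across past sprints."""
    return sum(story_points_per_sprint) / len(story_points_per_sprint)

def sprint_capacity(members: int, hours_per_day: int, sprint_days: int,
                    leave_days: int = 0) -> int:
    """Available working hours in a sprint, adjusted for leave."""
    return members * hours_per_day * sprint_days - leave_days * hours_per_day

print(average_velocity([30, 25, 35]))            # 30.0 story points per sprint
print(sprint_capacity(5, 8, 10))                 # 400 hours fully staffed
print(sprint_capacity(5, 8, 10, leave_days=2))   # 384 hours with 2 days of leave
```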

Differences Between Agile Velocity and Capacity 

While both velocity and capacity deal with workload, they serve different roles. The confusion arises when teams assume high capacity means high velocity. 

But velocity depends on factors beyond available hours—such as efficiency, experience, and blockers. 

Here’s a deeper look at their key differences: 

1. Measurement Units 

Velocity is measured in story points, reflecting completed work. It captures complexity and effort rather than just time. Capacity, on the other hand, is measured in hours or workdays. It represents the total time available, not the work accomplished. 

For example, a team with a capacity of 400 hours may complete only 30 story points. The work done depends on efficiency, not just available hours. 

2. Predictability vs. Availability 

Velocity helps predict future output based on historical data. It evolves with team performance. Capacity only shows available effort in a sprint. It does not indicate how much work will actually be completed. 

A team may have 500 hours of capacity but deliver only 35 story points. Predictability relies on velocity, while availability depends on capacity. 

3. Influence of Team Experience and Efficiency 

Velocity changes as teams gain experience and refine processes. A team working together for months will likely have a higher velocity than a newly formed team. Capacity remains fixed unless team size or sprint duration changes. 

For example, two teams with the same capacity (400 hours) may have different velocities—one completing 40 story points, another only 25. Experience and engineering efficiency are the reasons behind this gap. 

4. Impact of External Factors 

Capacity is affected by leaves, training, and holidays. Velocity is influenced by dependencies, technical debt, and workflow efficiency. 

Example:

  • A team with 10 members and 800 capacity hours may lose 100 hours due to vacations. 
  • However, velocity might drop due to unexpected blockers, not just reduced capacity. 

External factors impact both, but their effects differ. Capacity loss is predictable, while velocity fluctuations are harder to forecast. 

5. Use in Sprint Planning 

Capacity helps determine how much work the team could take on. Velocity helps decide how much work the team should take on based on past performance. 

If a team has a velocity of 30 story points but a capacity of 500 hours, taking on 50 story points will likely lead to failure. Sprint planning should balance both, prioritizing past velocity over raw capacity. 

6. Adjustments Over Time 

Velocity is dynamic. It shifts due to process improvements, team changes, and work complexity. Capacity remains relatively stable unless the team structure changes. 

For example, a team with a velocity of 25 story points may improve to 35 story points after optimizing workflows. Capacity (e.g., 400 hours) remains the same unless sprint length or team size changes. 

Velocity improves with Agile maturity, while capacity remains a logistical factor. 

7. Risk of Misinterpretation 

Using capacity as a performance metric can mislead teams. A high capacity does not mean a team should take on more work. Similarly, a drop in velocity does not always indicate lower performance—it may mean more complex work was tackled. 

Example: 

  • A team’s velocity drops from 40 to 30 story points. Instead of assuming inefficiency, check if the complexity of tasks increased. 
  • A team with 600 capacity hours should not assume they can complete 60 story points if past velocity suggests 45 is realistic. 

Misinterpreting these metrics can lead to overloading, burnout, and poor sprint outcomes. 

Best Practices to Follow for Agile Velocity and Capacity 

Here are some best practices to follow to strike the right balance between agile velocity and capacity: 

  • Track Velocity Over Multiple Sprints: Use an average to get a reliable estimate rather than relying on a single sprint’s data. 
  • Don’t Overcommit Based on Capacity: Always plan work based on past velocity, not just available hours. 
  • Account for Non-Project Time: Factor in meetings, training, and unforeseen blockers when calculating capacity. 
  • Adjust for Team Changes: Both will fluctuate if team members join or leave, so recalibrate expectations accordingly. 
  • Use Capacity for Workload Balancing: Ensure tasks are evenly distributed to prevent burnout. 
  • Avoid Comparing Teams’ Velocities: Each team has different workflows and efficiencies; velocity isn’t a competition. 
  • Monitor Trends, Not Just Numbers: Look for patterns in velocity and capacity changes to improve forecasting. 
  • Use Both Metrics Together: Velocity ensures realistic commitments, while capacity prevents overloading. 
  • Reassess Regularly: Review both metrics after each sprint to refine planning. 
  • Communicate Changes Transparently: Keep stakeholders informed when capacity or velocity shifts impact delivery. 

Conclusion 

Understanding the difference between velocity and capacity is key to Agile success. 

Companies can enhance agility by integrating AI into their engineering process with Typo. It enables AI-powered engineering analytics that tracks both metrics, identifies bottlenecks, and optimizes sprint planning. Automated fixes and intelligent recommendations help teams improve velocity without overloading capacity. 

By leveraging AI-driven insights, businesses can make smarter decisions and accelerate delivery. 

Want to see how AI can streamline your Agile processes?


Engineering Management vs. Project Management: Key Differences Explained

Many confuse engineering management with project management. The overlap makes it easy to see why. 

Both involve leadership, planning, and execution. Both drive projects to completion. But their goals, focus areas, and responsibilities differ significantly. 

This confusion can lead to hiring mistakes and inefficient workflows. 

A project manager ensures a project is delivered on time and within scope. An engineering manager looks beyond a single project, focusing on team growth, technical strategy, and long-term impact. 

Understanding these differences is crucial for businesses and employees alike. 

Let’s break down the key differences. 

What is Engineering Management? 

Engineering management focuses on leading engineering teams and driving technical success. It involves decisions related to engineering resource allocation, team growth, and process optimization. 

In a software company, an engineering manager oversees multiple teams building a new AI feature. They ensure the teams follow best practices and meet high technical standards. 

Their role extends beyond individual projects. They also have to mentor engineers and help them adjust to workflows. 

What is Engineering Project Management? 

Engineering project management focuses on delivering specific projects on time and within scope. 

For the same AI feature, the project manager coordinates deadlines, assigns tasks, and tracks progress. They manage dependencies, remove roadblocks, and ensure developers have what they need. 

Differences Between Engineering Management and Project Management 

Both engineering management and engineering project management fall under classical project management. 

However, their roles differ based on the organization’s structure. 

In Engineering, Procurement, and Construction (EPC) organizations, project managers play a central role, while engineering managers operate within project constraints. 

In contrast, in pure engineering firms, the difference fades, and project managers often assume engineering management responsibilities. 

1. Scope of Responsibility 

Engineering management focuses on the broader development of engineering teams and processes. It is not tied to a single project but instead ensures long-term success by improving technical strategy. 

On the other hand, engineering project management is centered on delivering a specific project within defined constraints. The project manager ensures clear goals, proper task delegation, and timely execution. Once the project is completed, their role shifts to the next initiative. 

2. Temporal Orientation 

The core lies in time and continuity. Engineering managers operate on an ongoing basis without a defined endpoint. Their role is to ensure that engineering teams continuously improve and adapt to evolving technologies. 

Even when individual projects end, their responsibilities persist as they focus on optimizing workflows. 

Engineering project managers, in contrast, work within fixed project timelines. Their focus is to ensure that specific engineering initiatives are delivered on time and under budget. 

Each software project has a lifecycle, typically consisting of phases such as initiation, planning, execution, monitoring, and closure. 

For example, if a company is building a recommendation engine, the engineering manager ensures the team is well-trained and the technical processes are set up for accuracy and efficiency. Meanwhile, the project manager tracks the AI model’s development timeline, coordinates testing, and ensures deployment deadlines are met. 

Once the recommendation engine is live, the project manager moves on to the next project, while the engineering manager continues refining the system and supporting the team. 

3. Resource Governance Models 

Engineering managers allocate resources based on long-term strategy. They focus on team stability, ensuring individual engineers work on projects that align with their expertise. 

Project managers, however, use temporary resource allocation models. They often rely on tools like RACI matrices and effort-based planning to distribute workload efficiently. 

If a company is launching a new mobile app, the project manager might pull engineers from different teams temporarily, ensuring the right expertise is available without long-term restructuring. 

4. Knowledge Management Approaches 

Engineering management establishes structured frameworks like communities of practice, where engineers collaborate, share expertise, and refine best practices. 

Technical mentorship programs ensure that senior engineers pass down insights to junior team members, strengthening the organization’s technical depth. Additionally, capability models help map out engineering competencies. 

In contrast, engineering project management prioritizes short-term knowledge capture for specific projects. 

Project managers implement processes to document key artifacts, such as technical specifications, decision logs, and handover materials. These artifacts ensure smooth project transitions and prevent knowledge loss when team members move to new initiatives. 

5. Decision Framework Complexity 

Engineering managers operate within highly complex decision environments, balancing competing priorities like architectural governance, technical debt, scalability, and engineering culture. 

They must ensure long-term sustainability while managing trade-offs between innovation, cost, and maintainability. Decisions often involve cross-functional collaboration, requiring alignment with product teams, executive leadership, and engineering specialists. 

Engineering project management, however, works within defined decision constraints. Their focus is on scope, cost, and time. Project managers are in charge of achieving as much balance as possible among the three constraints. 

They use structured frameworks like critical path analysis and earned value management to optimize project execution. 

While they have some influence over technical decisions, their primary concern is delivering within set parameters rather than shaping the technical direction. 

6. Performance Evaluation Methodologies 

Engineering management performance is measured on criteria such as code quality improvements, process optimizations, mentorship impact, and technical thought leadership. The focus is on continuous improvement, not immediate project outcomes. 

Engineering project management, on the other hand, relies on quantifiable delivery metrics. 

A project manager’s success is determined by on-time milestone completion, adherence to budget, risk mitigation effectiveness, and variance analysis against project baselines. Engineering metrics like cycle times, defect rates, and stakeholder satisfaction scores ensure that projects remain aligned with business objectives. 

7. Value Creation Mechanisms 

Engineering managers drive value through capability development and innovation enablement. They focus on building scalable processes and investing in the right talent. 

Their work leads to long-term competitive advantages, ensuring that engineering teams remain adaptable and technically strong. 

Engineering project managers create value by delivering projects predictably and efficiently. Their role ensures that cross-functional teams work in sync and delivery remains structured. 

By implementing agile workflows, dependency mapping, and phased execution models, they ensure business goals are met without unnecessary delays. 

8. Organizational Interfacing Patterns 

Engineering management requires deep engagement with leadership, product teams, and functional stakeholders. 

Engineering managers participate in long-term planning discussions, ensuring that engineering priorities align with broader business goals. They also establish feedback loops with teams, improving alignment between technical execution and market needs. 

Engineering project management, however, relies on temporary, tactical stakeholder interactions. 

Project managers coordinate status updates, cross-functional meetings, and expectation management efforts. Their primary interfaces are delivery teams, sponsors, and key decision-makers involved in a specific initiative. 

Unlike engineering managers, who shape organizational direction, project managers ensure smooth execution within predefined constraints. 

Conclusion 

Visibility is key to effective engineering and project management. Without clear insights, inefficiencies go unnoticed, risks escalate, and productivity suffers. Engineering analytics bridge this gap by providing real-time data on team performance, code quality, and project health. 

Typo enhances this further with AI-powered code analysis and auto-fixes, improving efficiency and reducing technical debt. It also offers developer experience visibility, helping teams identify bottlenecks and streamline workflows. 

With better visibility, teams can make informed decisions, optimize resources, and accelerate delivery. 


The Power of GitHub & JIRA Integration

In the ever-changing world of software development, tracking progress and gaining insights into your projects is crucial. While GitHub Analytics provides developers and teams with valuable data-driven intelligence, relying solely on GitHub data may not provide the full picture needed for making informed decisions. By integrating GitHub Analytics with JIRA, engineering teams can gain a more comprehensive view of their development workflows, enabling them to take more meaningful actions.

Why GitHub Analytics Alone is Insufficient

GitHub Analytics offers valuable insights into:

  • Repository Activity: Tracking commits, pull requests and contributor activity within repositories.
  • Collaboration Effectiveness: Evaluating how effectively teams collaborate on code reviews and issue resolution.
  • Workflow Identification: Identifying potential bottlenecks and inefficiencies within the development process.
  • Project Management Support: Providing data-backed insights for improving project management decisions.

However, GitHub Analytics primarily focuses on repository activity and code contributions. It lacks visibility into broader project management aspects such as sprint progress, backlog prioritization, and cross-team dependencies. This limited perspective can hinder a team's ability to understand the complete picture of their development workflow and make informed decisions.

The Power of GitHub & JIRA Integration

JIRA is a widely used platform for issue tracking, sprint planning, and agile project management. When combined with GitHub Analytics, it creates a powerful ecosystem that:

  • Connects Code Changes with Project Tasks and Business Objectives: By linking GitHub commits and pull requests to specific JIRA issues (like user stories, bugs, and epics), teams can understand how their code changes contribute to overall project goals.
    • Real-World Example: A developer fixes a bug in a specific feature. By linking the GitHub pull request to the corresponding JIRA bug ticket, the team can track the resolution of the issue and its impact on the overall product.
  • Provides Deeper Insights into Development Velocity, Bottlenecks, and Blockers: Analyzing data from both GitHub and JIRA allows teams to identify bottlenecks in the development process that might not be apparent when looking at GitHub data alone.
    • Real-World Example: If a team observes a sudden drop in commit frequency, they can investigate JIRA issues to determine if it's caused by unresolved dependencies, unclear requirements, or other blockers.
  • Enhances Collaboration Between Engineering and Product Management Teams: By providing a shared view of project progress, GitHub and JIRA integration fosters better communication and collaboration between engineering and product management teams.
    • Real-World Example: Product managers can gain insights into the engineering team's progress on specific features by tracking the progress of related JIRA issues and linked GitHub pull requests.
  • Ensures Traceability from Feature Requests to Code Deployments: By linking JIRA issues to GitHub pull requests and ultimately to production deployments, teams can establish clear traceability from initial feature requests to their implementation and release.
    • Real-World Example: A team can track the journey of a feature from its initial conception in JIRA to its final deployment to production by analyzing the linked GitHub commits, pull requests, and deployment information.


More Examples of How JIRA + GitHub Analytics Brings More Insights

  • Tracking Work from Planning to Deployment:
    • Without JIRA: GitHub Analytics shows PR activity and commit frequency but doesn't provide context on whether work is aligned with business goals.
    • With JIRA: Teams can link commits and PRs to specific JIRA tickets, tracking the progress of user stories and epics from the backlog to release, ensuring that development efforts are aligned with business priorities.
  • Identifying Bottlenecks in the Development Process:
    • Without JIRA: GitHub Analytics highlights cycle time, but it doesn't explain why a delay is happening.
    • With JIRA: Teams can analyze blockers within JIRA issues—whether due to unresolved dependencies, pending stakeholder approvals, unclear requirements, or other factors—to pinpoint the root cause of delays and address them effectively.
  • Enhanced Sprint Planning & Resource Allocation:
    • Without JIRA: Engineering teams rely on GitHub metrics to gauge performance but may struggle to connect them with workload distribution.
    • With JIRA: Managers can assess how many tasks remain open versus completed, analyze team workloads, and adjust priorities in real-time to ensure efficient resource allocation and maximize team productivity.
  • Connecting Engineering Efforts to Business Goals:
    • Without JIRA: GitHub Analytics tracks technical contributions but doesn't show their impact on business priorities.
    • With JIRA: Product owners can track how engineering efforts align with strategic objectives by analyzing the progress of JIRA issues linked to key business goals, ensuring that the team is working on the most impactful tasks.

Getting Started with GitHub & JIRA Analytics Integration

Start leveraging the power of integrated analytics with tools like Typo, a dynamic platform designed to optimize your GitHub and JIRA experience. Whether you're working on a startup project or managing an enterprise-scale development team, such tools offer powerful analytics tailored to your specific needs.

How to Integrate GitHub & JIRA with Typo:

  1. Connect Your GitHub and JIRA Accounts: Visit Typo's platform and seamlessly link both tools to establish a unified view of your development data.
  2. Configure Dashboards: Build custom analytics dashboards that track both code contributions (from GitHub) and issue progress (from JIRA) in a single, integrated view.
  3. Analyze Insights Together: Gain deeper insights by analyzing GitHub commit trends alongside JIRA sprint performance, identifying correlations and uncovering hidden patterns within your development workflow.

Conclusion

While GitHub Analytics is a valuable tool for tracking repository activity, integrating it with JIRA unlocks deeper engineering insights, allowing teams to make smarter, data-driven decisions. By bridging the gap between code contributions and project management, teams can improve efficiency, enhance collaboration, and ensure that engineering efforts align with business goals.

Sign Up for Typo’s GitHub & JIRA Analytics Today!

Whether you aim to enhance software delivery, improve team collaboration, or refine project workflows, Typo provides a flexible, data-driven platform to meet your needs.

FAQs

1. How to integrate GitHub with JIRA for better analytics?

  • Utilize native integrations: Some tools offer native integrations between GitHub and JIRA.
  • Leverage third-party apps: Apps like Typo can streamline the integration process and provide advanced analytics capabilities.
  • Utilize APIs: For more advanced integrations, you can utilize the APIs provided by GitHub and JIRA.

2. What are some common challenges in integrating JIRA with GitHub?

  • Data inconsistency: Ensuring data accuracy and consistency between the two platforms can be challenging.
  • Integration complexity: Setting up and maintaining integrations can sometimes be technically complex.
  • Data overload: Integrating data from both platforms can generate a large volume of data, making it difficult to analyze and interpret.

3. How can I ensure the accuracy of data in my integrated GitHub and JIRA analytics?

  • Establish clear data entry guidelines: Ensure that all team members adhere to consistent data entry practices in both GitHub and JIRA.
  • Regularly review and clean data: Conduct regular data audits to identify and correct any inconsistencies or errors.
  • Utilize data validation rules: Implement data validation rules within JIRA to ensure data accuracy and consistency.

DevEx


10 Best Developer Experience (DevEx) Tools in 2025

Developer Experience (DevEx) is essential for boosting productivity, collaboration, and overall efficiency in software development. The right DevEx tools streamline workflows, provide actionable insights, and enhance code quality.

We’ve explored the 10 best Developer Experience tools in 2025, highlighting their key features and limitations to help you choose the best fit for your team.

Key Features to Look For in DevEx Tools 

Integrated Development Environment (IDE) Plugins

DevEx tools should include IDE plugins that enhance coding environments with syntax highlighting, code completion, and error detection. They should also allow integration with external tools directly from the IDE and support multiple programming languages for versatility. 

Collaboration Features

The tools must promote teamwork through seamless collaboration, such as shared workspaces, real-time editing capabilities, and in-context discussions. These features facilitate better communication among teams and improve project outcomes. 

Developer Insights and Analytics

The Developer Experience tool could also offer insights into developer performance through quantitative and qualitative metrics, including deployment frequency and planning accuracy. This helps engineering leaders understand the developer experience holistically. 

Feedback Loops 

Developers need timely feedback to keep the software process efficient. Hence, ensure that the tools and processes empower teams to exchange feedback through real-time feedback mechanisms, code quality analysis, or live updates that show the effect of changes immediately. 

Impact on Productivity

Evaluate how the tool affects workflow efficiency and developers’ productivity. Assess it based on whether it reduces time spent on repetitive tasks or facilitates easier collaboration. Analyzing these factors can help gauge the tool's potential impact on productivity. 

Top 10 Developer Experience Tools 

Typo 

Typo is an intelligent engineering management platform for gaining visibility, removing blockers, and maximizing developer effectiveness. It captures a 360° view of the developer experience and uncovers real issues. It provides early indicators of developer well-being and actionable insights on the areas that need attention, based on signals from work patterns and continuous AI-driven pulse check-ins. Typo also sends automated alerts to identify burnout signs in developers at an early stage. It can seamlessly integrate with third-party applications such as Git, Slack, calendars, and CI/CD tools.

GetDX

GetDX is a comprehensive insights platform founded by researchers behind the DORA and SPACE frameworks. It offers both qualitative and quantitative measures to give a holistic view of the organization. GetDX breaks down results by persona and streamlines developer onboarding with real-time insights. 

Key Features

  • Provides a suite of tools that capture data from surveys and systems in real time.
  • Contextualizes performance with 180,000+ industry benchmark samples.
  • Uses advanced statistical analysis to identify the top opportunities.

Limitations 

  • GetDX’s frequent updates and new features can disrupt the user experience and confuse teams. 
  • New managers often face a steep learning curve. 
  • Users managing multiple teams report difficulties configuring and managing team data. 

Jellyfish 

Jellyfish is a developer experience platform that combines developer-reported insights with system metrics. It captures qualitative and quantitative data to provide a complete picture of the development ecosystem and identify bottlenecks. Jellyfish integrates seamlessly with survey tools and can use sentiment analysis to gather direct feedback from developers. 

Key Features

  • Enables continuous feedback loops and rapid response to developer needs.
  • Allows teams to track effort without time tracking. 
  • Tracks team health metrics such as code churn and pull request review times. 

Limitations

  • Problems integrating with popular tools like Jira and Okta complicate the initial setup process and affect the overall user experience.
  • Absence of an API restricts users from exporting metrics for further analysis in other systems. 
  • Overlooks important aspects of developer productivity by emphasizing throughput over qualitative metrics. 

LinearB

LinearB provides engineering teams with data-driven insights and automation capabilities. This software delivery intelligence platform gives teams full visibility and control over developer experience and productivity. LinearB also helps them focus on the most important aspects of coding to speed up project delivery. 

Key Features

  • Automates routine tasks and processes to reduce manual effort and cognitive load. 
  • Offers visibility into team workload and capacity. 
  • Helps maximize DevOps groups’ efficiency with various metrics.

Limitations 

  • Teams that do not use a Git-based workflow may find that many of the features are not applicable or useful to their processes.
  • Lacks comprehensive historical data or external benchmarks.
  • Needs to rely on separate tools for comprehensive project tracking and management. 

GitHub Copilot 

GitHub Copilot was developed by GitHub in collaboration with OpenAI. It uses the OpenAI Codex model to write code, test cases, and code comments quickly. It draws context from the code and suggests whole lines or complete functions that developers can accept, modify, or reject. GitHub Copilot can generate code in multiple languages, including TypeScript, JavaScript, and C++. 

Key Features

  • Creates predictive lines of code from comments and existing patterns in the code.
  • Seamlessly integrates with popular editors such as Neovim, JetBrains IDEs, and Visual Studio.
  • Creates dictionaries of lookup data. 

Limitations 

  • Struggles to fully grasp the context of complex coding tasks or specific project requirements.
  • Less experienced developers may become overly reliant on Copilot for coding tasks.
  • Can be costly for smaller teams. 

Postman 

Postman is a widely used API testing and automation tool. It provides a streamlined process for standardizing API testing and monitoring usage and trend insights. The tool provides a collaborative environment for designing APIs using specifications like OpenAPI and a robust testing framework for ensuring API functionality and reliability. 

 

Key Features

  • Enables users to mimic real-world scenarios and assess API behavior under various conditions.
  • Creates mock servers, and facilitates realistic simulations and comprehensive testing.
  • Auto-generates documentation to make APIs easily understandable and accessible.

Limitations 

  • The user interface is not beginner-friendly. 
  • Heavy reliance on Postman may create challenges when migrating workflows to other tools or platforms.
  • Better suited to manual testing than to fully automated testing. 

Sourcegraph 

Sourcegraph is an AI-powered code assistant that provides code-specific information and helps locate precise code based on natural-language descriptions, file names, or function names. 

It improves the developer experience by simplifying the development process in intricate enterprise environments. 

Key Features

  • Explains complex lines of code in simple language.
  • Identifies bugs and errors in a codebase and provides suggestions.
  • Offers documentation generation.

Limitations

  • Doesn’t support creating insights over specific branches or revisions.
  • Codebase size and project complexity may impact performance.
  • Certain features are only available when running insights over all repositories. 

Code Climate Velocity 

Code Climate Velocity is an engineering intelligence platform that provides leaders with customized solutions based on data-driven insights. Teams using Code Climate Velocity follow a three-step approach: a diagnostic workshop with Code Climate experts, a personalized dashboard with insight reports, and a customized action plan tailored to their business.

Key Features

  • Seamlessly integrates with developer tools such as Jira, GitLab, and Bitbucket. 
  • Supports long-term strategic planning and process improvement efforts.
  • Offers insights tailored for managers to help them understand team dynamics and individual contributions.

Limitations

  • Relies heavily on the quality and comprehensiveness of the data it analyzes.
  • Overlooks qualitative aspects of software development, such as team collaboration, creativity, and problem-solving skills.
  • Offers limited customization options.

Vercel 

Vercel is a cloud platform that gives frontend developers space to focus on coding and innovation. It simplifies the entire lifecycle of web applications by automating the entire deployment pipeline. Vercel has collaborative features such as preview environments to help iterate quickly while maintaining high code quality. 

Key Features

  • Applications can be deployed directly from their Git repositories. 
  • Includes pre-built templates to jumpstart the app development process.
  • Allows developers to create APIs without managing traditional backend infrastructure.

Limitations

  • Projects hosted on Vercel may rely on various third-party services for functionality, which can impact the performance and reliability of applications. 
  • Limited features available with the free version. 
  • Lacks robust documentation and support resources.

Qovery 

Qovery is a cloud deployment platform that simplifies the deployment and management of applications. 

It automates essential tasks such as server setup, scaling, and configuration management, allowing developers to prioritize faster time to market instead of handling infrastructure.

Key Features

  • Supports the creation of ephemeral environments for testing and development. 
  • Scales applications automatically on demand.
  • Includes built-in security measures such as multi-factor authentication and fine-grained access controls. 

Limitations

  • Occasionally experiences minor bugs.
  • Can be overwhelming for those new to cloud and DevOps.
  • Deployment times may be slow.

Conclusion 

We’ve curated the best Developer Experience tools for you in 2025. Feel free to explore other options as well. Make sure to do your own research and choose what fits best for you.

All the best!

CTO’s Guide to Software Engineering Efficiency

As a CTO, you often face a dilemma: should you prioritize efficiency or effectiveness? It’s a tough call. 

Engineering efficiency ensures your team delivers quickly and with fewer resources. On the other hand, effectiveness ensures those efforts create real business impact. 

So choosing one over the other is definitely not the solution. 

That’s why we came up with this guide to software engineering efficiency. 

Defining Software Engineering Efficiency 

Software engineering efficiency is the intersection of speed, quality, and cost. It’s not just about how quickly code ships or how flawless it is; it’s about delivering value to the business while optimizing resources. 

True efficiency is when engineering outputs directly contribute to achieving strategic business goals—without overextending timelines, compromising quality, or overspending. 

A holistic approach to efficiency means addressing every layer of the engineering process. It starts with streamlining workflows to minimize bottlenecks, adopting tools that enhance productivity, and setting clear KPIs for code quality and delivery timelines. 

As a CTO, to architect this balance, you need to foster collaboration between cross-functional teams, define clear metrics for efficiency, and ensure that resource allocation prioritizes high-impact initiatives. 

Establishing Tech Governance 

Tech governance refers to the framework of policies, processes, and standards that guide how technology is used, managed, and maintained within an organization. 

For CTOs, it’s the backbone of engineering efficiency, ensuring consistency, security, and scalability across teams and projects. 

Here’s why tech governance is so important: 

  • Standardization: Promotes uniformity in tools, processes, and coding practices.
  • Risk Mitigation: Reduces vulnerabilities by enforcing compliance with security protocols.
  • Operational Efficiency: Streamlines workflows by minimizing ad-hoc decisions and redundant efforts.
  • Scalability: Prepares systems and teams to handle growth without compromising performance.
  • Transparency: Provides clarity into processes, enabling better decision-making and accountability.

For engineering efficiency, tech governance should focus on three core categories: 

1. Configuration Management

Configuration management is foundational to maintaining consistency across systems and software, ensuring predictable performance and behavior. 

It involves rigorously tracking changes to code, dependencies, and environments to eliminate discrepancies that often cause deployment failures or bugs. 

Using tools like Git for version control, Terraform for infrastructure configurations, or Ansible for automation ensures that configurations are standardized and baselines are consistently enforced. 

This approach not only minimizes errors during rollouts but also reduces the time required to identify and resolve issues, thereby enhancing overall system reliability and deployment efficiency. 
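
To make the idea concrete, here is a minimal sketch, assuming JSON config files and hypothetical paths, of one piece of configuration management: detecting drift between a version-controlled baseline and the configuration actually running in an environment. In practice, teams usually lean on tools like Ansible or Terraform to enforce baselines rather than hand-rolled scripts.

```python
import json

def load_config(path):
    """Load a JSON configuration file into a dictionary."""
    with open(path) as f:
        return json.load(f)

def find_drift(baseline, actual):
    """Return the keys whose values differ between the baseline and the live config."""
    drift = {}
    for key, expected in baseline.items():
        current = actual.get(key)
        if current != expected:
            drift[key] = {"expected": expected, "actual": current}
    return drift

if __name__ == "__main__":
    # Hypothetical paths: a version-controlled baseline vs. a config exported from production.
    baseline = load_config("config/baseline.json")
    actual = load_config("exports/production.json")

    drift = find_drift(baseline, actual)
    if drift:
        print("Configuration drift detected:")
        for key, values in drift.items():
            print(f"  {key}: expected {values['expected']!r}, got {values['actual']!r}")
    else:
        print("Production configuration matches the approved baseline.")
```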

2. Infrastructure Management 

Infrastructure management focuses on effectively provisioning and maintaining the physical and cloud-based resources that support software engineering operations. 

The adoption of Infrastructure as Code (IaC) practices allows teams to automate resource provisioning, scaling, and configuration updates, ensuring infrastructure remains agile and cost-effective. 

Advanced monitoring tools like Typo provide real-time SDLC insights, enabling proactive issue resolution and resource optimization. 

By automating repetitive tasks, infrastructure management frees engineering teams to concentrate on innovation rather than maintenance, driving operational efficiency at scale. 
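
As a simplified illustration of proactive monitoring, the sketch below checks utilization samples against fixed thresholds and prints alerts. The resource names, sample values, and thresholds are assumptions; real data would come from your monitoring stack, and alerts would flow through your incident tooling.

```python
from dataclasses import dataclass

@dataclass
class ResourceSample:
    name: str           # e.g. "web-1"
    cpu_percent: float  # average CPU utilization over the sampling window
    mem_percent: float  # average memory utilization over the sampling window

# Hypothetical thresholds; real values come out of capacity planning.
CPU_LIMIT = 80.0
MEM_LIMIT = 85.0

def check_utilization(samples):
    """Return a human-readable alert for every resource that exceeds a threshold."""
    alerts = []
    for s in samples:
        if s.cpu_percent > CPU_LIMIT:
            alerts.append(f"{s.name}: CPU at {s.cpu_percent:.0f}% (limit {CPU_LIMIT:.0f}%)")
        if s.mem_percent > MEM_LIMIT:
            alerts.append(f"{s.name}: memory at {s.mem_percent:.0f}% (limit {MEM_LIMIT:.0f}%)")
    return alerts

if __name__ == "__main__":
    # In practice these samples would come from your monitoring stack.
    samples = [
        ResourceSample("web-1", cpu_percent=92.0, mem_percent=70.0),
        ResourceSample("worker-1", cpu_percent=45.0, mem_percent=60.0),
    ]
    for alert in check_utilization(samples):
        print("ALERT:", alert)
```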

3. Frameworks for Deployment 

Frameworks for deployment establish the structured processes and tools required to release code into production environments seamlessly. 

A well-designed CI/CD pipeline automates the stages of building, testing, and deploying code, ensuring that releases are both fast and reliable. 

Additionally, rollback mechanisms safeguard against potential issues during deployment, allowing for quick restoration of stable environments. This streamlined approach reduces downtime, accelerates time-to-market, and fosters a collaborative engineering culture. 

Together, these deployment frameworks enhance software delivery and ensure that systems remain resilient under changing business demands. 
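
A rollback safeguard can be as simple as the hedged sketch below: deploy a new version, poll a health endpoint, and restore the previous release if the check fails. The version numbers, health-check URL, and the deploy and rollback helpers are placeholders for whatever your pipeline actually does.

```python
import time
import urllib.request

def healthy(url, attempts=5, delay=2.0):
    """Poll a health endpoint; return True once it responds with HTTP 200."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass
        time.sleep(delay)
    return False

def deploy(version):
    """Placeholder for the real rollout step (e.g. updating a container image tag)."""
    print(f"Deploying version {version}...")

def rollback(version):
    """Placeholder for restoring the last known-good release."""
    print(f"Rolling back to version {version}...")

if __name__ == "__main__":
    new_version, stable_version = "1.4.0", "1.3.2"  # hypothetical versions
    deploy(new_version)
    if healthy("http://localhost:8080/health"):     # hypothetical health-check URL
        print("Release promoted.")
    else:
        rollback(stable_version)
        print("Health check failed; previous release restored.")
```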

By focusing on these tech governance categories, CTOs can build a governance model that maximizes efficiency while aligning engineering operations with strategic objectives. 

Balancing Business Impact and Engineering Productivity 

If your engineering team’s efforts don’t align with key objectives like revenue growth, customer satisfaction, or market positioning, you’re not doing justice to your organization. 

To ensure alignment, focus on building features that solve real problems, not just “cool” additions. 

1. Chase value addition, not cool features 

Rather than developing flashy tools that don’t address user needs, prioritize features that improve user experience or address pain points. This prevents your engineering team from being consumed by tasks that don’t add value and keeps their efforts laser-focused on meeting demand. 

2. Decision-making is a crucial factor 

You need to know when to prioritize speed over quality or vice versa. For example, during a high-stakes product launch, speed might be crucial to seize market opportunities. However, if a feature underpins critical infrastructure, you’d prioritize quality and scalability to avoid long-term failures. Balancing these decisions requires clear communication and understanding of business priorities. 

3. Balance innovation and engineering efficiency 

Encourage your team to explore new ideas, but within a framework that ensures tangible outcomes. Innovation should drive value, not just technical novelty. This approach ensures every project contributes meaningfully to the organization’s success. 

Communicating Efficiency to the CEO and Board 

If you’re at a company where the CEO doesn’t come from a technical background, you will face some communication challenges. There will always be questions about why new features are not being shipped despite having a good number of software engineers. 

What you should focus on is giving the stakeholders insights into how the engineering headcount is being utilized. 

1. Reporting Software Engineering Efficiency 

Instead of presenting granular task lists, focus on providing a high-level summary of accomplishments tied to business objectives. For example, show the percentage of technical debt reduced, the cycle time improvements, or the new features delivered and their impact on customer satisfaction or revenue. 

Include visualizations like charts or dashboards to offer a clear, data-driven view of progress. Highlight key milestones, ongoing priorities, and how resources are being allocated to align with organizational goals. 

2. Translating Technical Metrics into Business Language

Board members and CEOs may not resonate with terms like “code churn” or “defect density,” but they understand business KPIs like revenue growth, customer retention, and market expansion. 

For instance, instead of saying, “We reduced bug rate by 15%,” explain, “Our improvements in code quality have resulted in a 10% reduction in downtime, enhancing user experience and supporting retention.” 

3. Building Trust Through Transparency

Trust is built when you are upfront about trade-offs, challenges, and achievements. 

For example, if you chose to delay a feature release to improve scalability, explain the rationale: “While this slowed our time-to-market, it prevents future bottlenecks, ensuring long-term reliability.” 

4. Framing Discussions Around ROI and Risk Management

Frame engineering decisions in terms of ROI, risk mitigation, and long-term impact. For example, explain how automating infrastructure saves costs in the long run or how adopting robust CI/CD practices reduces deployment risks. Linking these outcomes to strategic goals ensures the board sees technology investments as valuable, forward-thinking decisions that drive sustained business growth. 

Build vs. Buy Decisions 

Deciding whether to build a solution in-house or purchase off-the-shelf technology is crucial for maintaining software engineering efficiency. Here’s what to take into account: 

1. Cost Considerations 

From an engineering efficiency standpoint, building in-house often requires significant engineering hours that could be spent on higher-value projects. The direct costs include developer time, testing, and ongoing maintenance. Hidden costs like delays or knowledge silos can also reduce operational efficiency. 

Conversely, buying off-the-shelf technology allows immediate deployment and support, freeing the engineering team to focus on core business challenges. 

However, it’s crucial to evaluate licensing and customization costs to ensure they don’t create inefficiencies later. 

2. Strategic Alignment 

For software engineering efficiency, the choice must align with broader business goals. Building in-house may be more efficient if it allows your team to streamline unique workflows or gain a competitive edge. 

However, if the solution is not central to your business’s differentiation, buying ensures the engineering team isn’t bogged down by unnecessary development tasks, maintaining their focus on high-impact initiatives. 

3. Scalability, Flexibility, and Integration 

An efficient engineering process requires solutions that scale with the business, integrate seamlessly into existing systems, and adapt to future needs. 

While in-house builds offer customization, they can overburden teams if integration or scaling challenges arise. 

Off-the-shelf solutions, though less flexible, often come with pre-tested scalability and integrations, reducing friction and enabling smoother operations. 

Key Metrics CTOs Should Measure for Software Engineering Efficiency 

While the CTO’s role is rooted in shaping the company’s vision and direction, it also requires ensuring that software engineering teams maintain high productivity. 

Here are some of the metrics you should keep an eye on: 

1. Cycle Time 

Cycle time measures how long it takes to move a feature or task from development to deployment. A shorter cycle time means faster iterations, enabling quicker feedback loops and faster value delivery. Monitoring this helps identify bottlenecks and improve development workflows. 

2. Lead Time 

Lead time tracks the duration from ideation to delivery. It encompasses planning, design, development, and deployment phases. A long lead time might indicate inefficiencies in prioritization or resource allocation. By optimizing this, CTOs ensure that the team delivers what matters most to the business in a timely manner.
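
Both cycle time and lead time reduce to simple date arithmetic over task records. The sketch below, using made-up timestamps, shows one way to compute team averages; real data would come from your issue tracker or an engineering analytics tool.

```python
from datetime import datetime
from statistics import mean

# Hypothetical task records: when an idea was logged, when work started, and when it shipped.
tasks = [
    {"created": "2025-01-02", "started": "2025-01-06", "deployed": "2025-01-10"},
    {"created": "2025-01-03", "started": "2025-01-09", "deployed": "2025-01-16"},
    {"created": "2025-01-05", "started": "2025-01-07", "deployed": "2025-01-12"},
]

def days_between(start, end):
    """Whole days between two ISO dates."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# Cycle time: development start to deployment. Lead time: idea to delivery.
cycle_times = [days_between(t["started"], t["deployed"]) for t in tasks]
lead_times = [days_between(t["created"], t["deployed"]) for t in tasks]

print(f"Average cycle time: {mean(cycle_times):.1f} days")
print(f"Average lead time:  {mean(lead_times):.1f} days")
```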

3. Velocity 

Velocity measures how much work a team completes in a sprint or milestone. This metric reflects team productivity and helps forecast delivery timelines. Consistent or improving velocity is a strong indicator of operational efficiency and team stability.

4. Bug Rate and Defect Density

Bug rate and defect density assess the quality and reliability of the codebase. High values indicate a need for better testing or development practices. Tracking these ensures that speed doesn’t come at the expense of quality, which can lead to technical debt.
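
Definitions vary by team, but a common convention is defects per thousand lines of code (KLOC) for defect density and bugs per unit of delivered work for bug rate. The figures below are purely illustrative.

```python
# Hypothetical figures for one release; real values would come from your issue tracker and VCS.
defects_found = 18           # bugs attributed to this release
lines_of_code = 42_000       # size of the code delivered or touched in the release
story_points_delivered = 60  # completed work in the same period

defect_density = defects_found / (lines_of_code / 1000)  # defects per KLOC
bug_rate = defects_found / story_points_delivered        # bugs per story point delivered

print(f"Defect density: {defect_density:.2f} defects per KLOC")
print(f"Bug rate: {bug_rate:.2f} bugs per story point")
```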

5. Code Churn 

Code churn tracks how often code changes after the initial commit. Excessive churn may signal unclear requirements or poor initial implementation. Keeping this in check ensures efficiency and reduces rework. 
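
A rough proxy for churn can be pulled straight from version control. The sketch below sums lines added and deleted over the last 30 days using git log --numstat; dedicated churn metrics are usually more precise, tracking how much recently written code is modified or deleted soon after it lands.

```python
import subprocess

def churn_last_30_days(repo_path="."):
    """Sum lines added and deleted over the last 30 days using `git log --numstat`."""
    out = subprocess.run(
        ["git", "log", "--since=30 days ago", "--numstat", "--pretty=format:"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in out.splitlines():
        parts = line.split("\t")
        # numstat lines look like "<added>\t<deleted>\t<path>"; binary files show "-".
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added += int(parts[0])
            deleted += int(parts[1])
    return added, deleted

if __name__ == "__main__":
    added, deleted = churn_last_30_days()
    print(f"Last 30 days: +{added} / -{deleted} lines (total churn volume: {added + deleted})")
```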

By selecting and monitoring these metrics, you can align engineering outcomes with strategic objectives while building a culture of accountability and continuous improvement. 

Conclusion 

The CTO plays a crucial role in driving software engineering efficiency, balancing technical execution with business goals. 

By focusing on key metrics, establishing strong governance, and ensuring that engineering efforts align with broader company objectives, CTOs help maximize productivity while minimizing waste. 

A balanced approach to decision-making—whether prioritizing speed or quality—ensures both immediate impact and long-term scalability. 

Effective CTOs deliver efficiency through clear communication, data-driven insights, and the ability to guide engineering teams toward solutions that support the company’s strategic vision. 

What is Developer Experience?

Let’s take a look at the situation below: 

You are driving a high-performance car, but the controls are clunky, the dashboard is confusing, and the engine constantly overheats. 

Frustrating, right? 

When developers work in a similar environment, dealing with inefficient tools, unclear processes, and a lack of collaboration, it leads to decreased morale and productivity. 

Just as a smooth, responsive driving experience makes all the difference on the road, a seamless Developer Experience (DX) is essential for developer teams.

DX isn't just a buzzword; it's a key factor in how developers interact with their work environments and produce innovative solutions. In this blog, let’s explore what Developer Experience truly means and why it is crucial for developers. 

What is Developer Experience? 

Developer Experience, commonly known as DX, is the overall quality of developers’ interactions with their work environment. It encompasses tools, processes, and organizational culture. It aims to create an environment where developers can work efficiently, stay focused, and produce high-quality code with minimal friction. 

Why Does Developer Experience Matter? 

Developer Experience is a critical factor in enhancing organizational performance and innovation. It matters because:

Boosts Developer Productivity 

When developers have access to intuitive tools, clear documentation, and streamlined workflows, they can complete tasks more quickly and focus on core activities. This leads to faster development cycles and improved efficiency, as developers can engage more deeply with their work. 

According to Gartner’s report, Developer Experience is the key indicator of Developer Productivity.

High Product Quality 

A positive developer experience leads to improved code quality and higher-quality work, which increases customer satisfaction and reduces defects in software products. DX also fosters effective communication and collaboration, which reduces cognitive load among developers and helps teams implement best practices thoroughly. 

Talent Attraction and Retention 

A positive work environment appeals to skilled developers and retains top talent. When the organization supports developers’ creativity and innovation, it significantly reduces turnover rates. Moreover, when developers feel psychologically safe to express ideas and take risks, they are more likely to stay with the organization for the long run. 

Enhances Developer Morale 

When developers feel empowered and supported at their workplace, they are more likely to be engaged with their work. This further leads to high morale and job satisfaction. When organizations minimize common pain points, developers encounter fewer obstacles, allowing them to focus more on productive tasks rather than tedious ones.

Competitive Advantage 

Organizations with positive developer experiences often gain a competitive edge in the market. Enabling faster development cycles and higher-quality software delivery allows companies to respond more swiftly to market demands and customer needs. This agility improves customer satisfaction and positions the organization favorably against competitors. 

What is Flow State and Why Consider it as a Core Goal of a Great DX? 

In simple words, flow state means ‘being in the zone’. Also known as deep work, it refers to a mental state characterized by complete immersion and focused engagement in an activity. Achieving flow results in a strong sense of engagement, enjoyment, and productivity. 

Flow state is considered a core goal of a great DX because it allows developers to work with remarkable efficiency, completing tasks faster and with higher quality. Deep engagement also enables developers to generate innovative solutions and ideas, leading to better problem-solving outcomes. 

Flow isn’t limited to individual work; it can also be experienced collectively within teams. When development teams achieve flow together, they operate with synchronized efficiency, which enhances collaboration and communication. 

What Developer Experience Is Not 

Developer Experience is Not Just Good Tooling 

Tools like IDEs, frameworks, and libraries play a vital role in a positive developer experience, but they are not the sole component. Good tooling is merely one part of the overall experience. It helps streamline workflows and reduce friction, but DX encompasses much more, such as documentation, support, learning resources, and community. Tools alone cannot address issues like poor communication, lack of feedback, or insufficient documentation; without a holistic approach, developer satisfaction and productivity can still suffer.

Developer Experience is Not a Quick Fix 

Improving DX isn’t a one-off task that can be patched quickly. It requires a long-term commitment and a deep understanding of developer needs, consistent feedback loops, and iterative improvements. Great developer experience involves ongoing evaluation and adaptation of processes, tools, and team dynamics to create an environment where developers can thrive over time. 

Developer Experience isn’t About Pampering Developers or Using AI tools to Cut Costs

One common myth about DX is that it focuses solely on pampering developers or on using AI tools as cost-cutting measures. True DX aims to create an environment where developers can work efficiently and effectively. In other words, it is about empowering developers with the right resources, autonomy, and opportunities for growth. While AI tools help simplify tasks, deploying them without considering the broader context of developer needs may lead to dissatisfaction if those tools do not genuinely enhance the work experience. 

Developer Experience is Not User Experience 

DX and UX look alike; however, they target different audiences and goals. User Experience is about how end-users interact with a product, while Developer Experience concerns the experience of developers who build, test, and deploy products. Improving DX involves understanding developers' unique challenges and needs rather than only applying UX principles meant for end-users.

Developer Experience is Not the Same as Developer Productivity 

Developer Experience and Developer Productivity are interrelated yet not identical. While a positive developer experience can lead to increased productivity, productivity metrics alone don’t reflect the quality of the developer experience. These metrics often focus on output (like lines of code or hours worked), which can be misleading. True DX encompasses emotional satisfaction, engagement levels, and the overall environment in which developers work. A positive developer experience creates the conditions that naturally lead to higher productivity, rather than something that can be measured directly through traditional output metrics.

How Does Typo Help Improve DevEx?

Typo is a valuable tool for software development teams that captures a 360° view of the developer experience. It surfaces early indicators of developer well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins.

Key features

  • Research-backed framework that captures parameters and uncovers real issues.
  • In-depth insights are published on the dashboard.
  • Combines data-driven insights with proactive monitoring and strategic intervention.
  • Identifies the key priority areas affecting developer productivity and well-being.
  • Sends automated alerts to identify burnout signs in developers at an early stage.

Conclusion 

Developer Experience empowers developers to focus on building exceptional solutions. A great DX fosters innovation, enhances productivity, and creates an environment where developers can thrive individually and collaboratively.

Implementing the right developer tools helps organizations enhance DX, prevent burnout, and enable teams to reach their full potential.

Podcasts

'Engineering Management in the Age of GenAI' with Suresh Bysani, Director of Engineering, Eightfold

How do engineering leaders stay relevant in the age of Generative AI?

With the rise of GenAI, engineering teams are rethinking productivity, prototyping, and scalability. But AI is only as powerful as the engineering practices behind it.

In this episode of the groCTO by Typo Podcast, host Kovid Batra speaks with Suresh Bysani, Director of Engineering at Eightfold, about the real-world impact of AI on engineering leadership. From writing boilerplate code to scaling enterprise platforms, Suresh shares practical insights and hard-earned lessons from the frontlines of tech.

What You’ll Learn in This Episode:

  • AI Meets Engineering: How GenAI is transforming productivity, prototyping & software workflows.
  • Platform vs. Product Teams: Why technical expectations differ — and how to lead both effectively.
  • Engineering Practices Still Matter: Why GenAI can’t replace fundamental principles like scalability, testing, and reliability.
  • Avoiding AI Pitfalls: Common mistakes in adopting AI for internal tooling & how to avoid them.
  • Upskilling for the Future: Why managers & engineers need to build AI fluency now.
  • A Leader’s Journey: Suresh shares personal stories that shaped his perspective as a people-first tech leader.

Closing Insight: AI isn’t a silver bullet, but a powerful tool. The best engineering leaders combine AI innovation with strong fundamentals, people-centric leadership, and a long-term view.

Timestamps

  • 00:00 — Let’s Begin!
  • 00:55 — Suresh at Eightfold: Role & Background
  • 02:00 — Career Milestones & Turning Points
  • 04:15 — GenAI’s Impact on Engineering Management
  • 07:59 — Why Technical Depth Still Matters
  • 11:58 — AI + Legacy Systems: Key Insights
  • 15:40 — Common GenAI Adoption Mistakes
  • 23:42 — Measuring AI Success
  • 28:08 — AI Use Cases in Engineering
  • 31:05 — Final Advice for Tech Leaders

Links & Mentions

Episode Transcript

Kovid Batra: Hi everyone. This is Kovid, back with another episode of groCTO by Typo. Today with us, we have a very special guest who is an expert in AI and machine learning. So we are gonna talk a lot about Gen AI, engineering management with them, but let me quickly introduce Suresh to all of you. Hi, Suresh.

Suresh Bysani: Hello.

Kovid Batra: So, Suresh is an Engineering, uh, Director at Eightfold and he holds a postgraduate degree in AI and machine learning from USC, and he has almost 10 to 12 years of experience in engineering and leadership. So today, uh, Suresh, we are, we are grateful to have you here. And before we get started with the main section, which is engineering management in the age of GenAI, we would love to know a little bit more about you, maybe your hobbies, something inspiring from your life that defines who you are today. So if you could just take the stage and tell us something about yourself that your LinkedIn profile doesn’t tell.

Suresh Bysani: Okay. So, thanks Kovid for having me. Hello everybody. Um, yeah, so if I have to recall a few incidents, I’ll probably recall one or two, right? So right from my childhood, um, I was not an outstanding student, let me put it that way. I have a record of, uh, you know, failing every subject until 10th grade, right? So I’m a totally different person. I feel sometimes, you know, uh, that gave me a lot of confidence in life because, uh, at a very early age, I was, you know, uh, exposed to what failure means, or how does being in failure for a very long time mean, right. That kind of gave me a lot of, you know, mental stability or courage to face failures, right? I’ve seen a lot of friends who were, you know, outstanding students right from the beginning and they get shaken aback when they see a setback or a failure in life. Right? So I feel that defined my personality to take aggressive decisions and moves in my life. That’s, that’s one thing.

Kovid Batra: That’s interesting.

Suresh Bysani: Yeah. And the second thing is, uh, during undergrad we went to a program called Net Tech. So it’s organized by, um, a very famous person in India. It’s most of, mostly an educational thing, right, around, uh, cybersecurity and ethical hacking. So I kind of met the country’s brightest minds in this program. All people from all sorts of background came to this program. Mostly, mostly the good ones, right? So it kind of helped me calibrate where I am across the country’s talent and gave me a fresh perspective of looking beyond my current institution, et cetera. Right. So these are two life defining moments for me in terms of my career growth.

Kovid Batra: Perfect. Perfect. I think you become more resilient, uh, when you’ve seen failures, and I think the openness to learn and exposure definitely gives you a perspective that takes you, uh, in your career, not linearly, but it gives you a geometric progression probably, or exponential progression in your life. So totally relate to that and great start to this. Uh, so Suresh, I think today, now we can jump onto the main section and, uh, talk more about, uh, AI, implementation of AI, Agent ai. But again, that is something that I would like to touch upon, uh, little later. First, I would want to understand from your journey, you are an engineering director, uh, and you have spent good enough time in this management and moving from management to the senior management or a leadership position, I would say. Uh, what’s your perspective of engineering management in today’s world? How is it evolving? What are the things that you see, uh, are kind of set and set as ideals in, um, in engineering management, but might not be very right? So just throw some light on your journey of engineering management and how you see it today evolving.

Suresh Bysani: Yep. Um, before we talk about the evolution, I will just share my thoughts about what does being an engineering manager or a leader means in general, and how is it very different from an IC. I get, I get asked this question quite a lot. A lot of people, a lot of, you know, very strong ICs come to me, uh, with this question of, I want to become a manager or can I become a manager? Right. And this happens quite a lot in, in Bay Area as well as Bangalore.

Kovid Batra: Yeah.

Suresh Bysani: So the first question I ask them is, why do you want to become a manager? Right? What are your reasons for it? I, I hear all great sorts of answers, right? Some folks generally come and say, I like execution. I like to drive from front. I’m responsible. I mean, I want to be the team leader sort of thing, right? I mean, all great answers, right? But if you think about it, execution, project management, JIRA management, or leading from the front; these are all characteristics of any technical leader, not just engineering manager. Even if you’re a staff engineer or an architect or a principal engineer, uh, you are responsible for a reasonable degree of execution, project management, planning, mentorship, getting things done, et cetera. After all, we are all evaluated by execution. So that is not a satisfactory answer for me. The main answer that I’m looking for is I like to grow people. I can, I want to see success in people who are around me. So as an engineering manager, it’s quite a tricky role because most of the time you are only as good as your team. You are evaluated by your team’s progress, team’s success, team’s delivery. Until that point, most ICs are only responsible for their work, right? I mean, they’re doing a project.

Kovid Batra: Yeah.

Suresh Bysani: They do amazing work in their project, and most of the time they get fantastic ratings and materialistic benefits. But all of a sudden when you become an engineering manager or leader, you are spending probably more number of hours to get things done because you have to coordinate the rest of the team, but they don’t necessarily translate to your, you know, growth or materialistic benefits because you are only as good as an average person in your team. So the first thing people have to evaluate is, am I or do I get happiness in growing others? If the answer is yes, if that’s your P0, you are going to be a great engineering leader. Everything else will follow. Now to the second question that you asked. This has been, this remained constant across the years from last 25 years. This is the number one characteristic of an engineering leader. Now, the evolution part. As the technology evolves, what I see as challenge is, uh, in a, an engineering manager should typically understand or go to a reasonable depth into people’s work. Technically, I mean. So as the technologies evolves, most of the engineering managers are typically 10 years, 15 years, 20 years experienced as ICs, right?

Kovid Batra: Yeah.

Suresh Bysani: Now, uh, most of these new engineering managers or seasoned engineering managers, they don’t understand what new technology evolution is. For example, all the recent advancements that we are seeing in AI, GenAI, you know, the engineering managers have no clue about it. If the, most of the time when there is bottom up innovation, how are engineering managers going to look at all of this and evaluate all of this from a technical standpoint? What this means is that there is a constant need for upskilling, and we’ll talk about that, uh, you know, in your questions.

Kovid Batra: Sure. Yeah. But I think, uh, I, I would just like to, uh, ask one question here. I mean, I have been working with a lot of engineering managers in my career as well, and, uh, I’ve been talking to a lot of them. There is always a debate around how much technical an engineering manager should be.

Suresh Bysani: Yeah.

Kovid Batra: And I think that lies in a little more detail and probably you could tell with some of your examples. Uh, an engineering manager who is working more on the product team and product side, uh, and an engineering manager who is probably involved in a platform team or maybe infrastructure team, I think things change a little bit. What’s, what’s your thought on that part?

Suresh Bysani: Yeah, so I think, uh, good question by the way. Uh, my general guidance to most engineering managers is they have to be reasonably technical. I mean, it is just that they are given a different responsibility in the company, but that’s it. Right? The, it is not an excuse for them, not for not being technical. Yes, they don’t have to code a 100% of the time that’s given. Right. It, it, so how much time they should be spending coding or doing the technical design? It totally depends on the company, project, situation, et cetera. Right? But they have to be technical. But you have a very interesting question around product teams versus platform teams, right?

Kovid Batra: Yeah.

Suresh Bysani: Engineering manager for product teams generally, you know, deals with a lot of stakeholders, whether it is PMs or customers or you know, uh, uh, the, the potential people and the potential new customers that are going to the company. So their time, uh, is mostly spent there. They hardly have enough time to, you know, go deep within the product. That’s the nature of their job. But at the same time, they do, uh, they are also expected to be, uh, reasonably technical, but not as technical as engineering leaders of platform teams or infrastructure teams. The plat for the platform teams and infrastructure teams, yes. They also engage with stakeholders, but their stakeholders are mostly internal and other engineering managers. That’s, That’s the general setup.

Kovid Batra: Yeah. Yeah.

Suresh Bysani: And, you know, uh, just like how engineering managers are able to guide how the product should look like, platform managers and infrastructure managers should, you know, uh, go deep into what platform or infrastructure we should provide to the rest of the company. And obviously, as the problem statement sounds, that requires a lot more technical depth, focus than, than the rest of the engineering leaders. So yes, engineering managers for platform and infrastructure are required to be reasonably technically stronger than the rest of the leaders.

Kovid Batra: Totally. I think that’s, that’s the key here. And the balance is something that one needs to identify based on their situation, project, how much of things they need to take care of in their teams. So totally agree to it. Uh, moving on. Uh, I think the most burning piece, uh, I think everyone is talking about it, which is AI, Agent AI, implementing, uh, AI into the most core legacy services in, in a team, in a company. But I think things need to be highlighted, uh, in a way where people need to understand what needs to be done, why it needs to be done, and, uh, while we were talking a few days back, uh, you mentioned about mentoring a few startups and technical founders who are actually doing it at this point of time, and you’re guiding them and you have seen certain patterns where you feel that there is a guidance required in the industry now.

Suresh Bysani: Yeah.

Kovid Batra: So while we have you here, my next question to you is like, what should an engineering manager do in this age of GenAI to, let’s say, stay technically equipped and take the right decisions moving forward?

Suresh Bysani: Yeah. I, I’ll start with this. The first thing is upskilling, right? As we were talking about in our previous, uh, question, uh, most engineering managers have not coded in the GenAI era, right? Because it’s just started.

Kovid Batra: Yeah.

Suresh Bysani: So, but all the new ideas or the new projects, uh, there is a GenAI or an AI flavor to it. That’s where the world is moving towards. I mean, uh, let’s be honest, right? If we don’t upskill ourselves in AI right now, we will be termed legacy. So when there is bottom up innovation happening within the team, how is the engineering manager supposed to, you know, uh, technically calibrate the project/design/code that is happening in the team? So that is why I say there is a need for upskilling. At Eightfold, uh, what we did is one of our leader, uh, he said, uh, all the engineering managers, let’s not do anything for a week. Let’s create something with GenAI that is useful for the company and all of you code it, right? I really loved the idea because periodically engineering managers are supposed to step back like this, whether it is in the form of hackathons or ideas or whatever it is, right? They should get their hands dirty in this new tech to get some perspective. And once I did that, it gave me a totally new perspective and I started seeing every idea with this new lens of GenAI, right? And I started asking fundamental questions like why can’t we write an agent to do this? Why can’t we do this? Should we spend a lot of time writing business logic for this, right? That is important for every engineering leader. How do you periodically step back and get your hands dirty and go to the roots? Sometimes it’s not easy because of the commitments that you have. So you have to spend your weekends or, you know, or after time to go read about some of this, read some papers, write some code, or it could, it doesn’t have to be something outside. It can be, you know, uh, part of your projects too. Go pick up like five to 10% of your code in one of the projects. Get your hands dirty. So you’ll start being relevant and the amount of confidence that you will get will automatically improve. And the kind of questions that you’ll start asking for your, you know, uh, immediate reportees will also change and they will start seeing this too. They’ll start feeling that my leader is reasonably technical and I can go and talk to him about anything. So this aspect is very, very important.

Now coming to your second question which is, uh, what are the common mistakes people are doing with this, you know, GenAI or this advancements of technologies? See, um, GenAI is great in terms of, you know, um, writing a lot of code on behalf of an engineer, right? Writing a lot of monotonic code on behalf of an engineer. But it is an evolving technology. It’ll have limitations. The fundamental mistake that I’m seeing a lot of people are making is they’re assuming that GenAI or the LLMs can replace a lot of strong engineers; maybe in the future, but that’s not the case right now. They’re great for prototyping. They’re great for writing agents. They’re great for, you know, automating some routine mundane tasks, right, and make your product agentic too. That’s all great. They’re moving with great velocity. But the thing is, there’s a lot of difference, uh, between showing this initial prototype and productionizing this. Let’s face it, enterprise customers have a very high bar. They don’t want, you know, something that breaks at scalability or reliability in production, right? Which means while LLM and Agentic worlds offer a lot of fancy ways of doing things, you still need solid engineering design practices around all of this to make sure that your product does not break in production. So that is where I spend a lot of time advising these new founders or, you know, people in large companies who are trying to adopt AI into their SDLC, that this is not going to be a, you know, magical replacement for everything that you guys are doing. It is, think of it as a friend who is going to assist you or you know, improve your productivity by 10x, but everything around a solid engineering design or an organization, it’s not a replacement for that or at least not yet.

Kovid Batra: Makes sense. I think I’d like to deep dive a little bit more on this piece itself, where if you could give us some examples of how, so first of all, where you have seen these problems occurring, like people just going out and implementing AI or agents, uh, without even thinking whether it is gonna make some sense or not, and if you need to do it in the right way..

Suresh Bysani: Yeah.

Kovid Batra: Can you give us some examples? Like, okay, if this is a case, this is how one should proceed step by step. And I think I, I don’t mind if you get a little more technical here explaining what exactly needs to be done.

Suresh Bysani: Yeah. So let’s take a very basic product, right, which, uh, any SaaS application which has all the layers from infrastructure to authentication to product, to, you know, some workflow that SaaS application is supposed to do. So in the non-agentic/AI world, we are all familiar with how to do this, right? We probably do some microservices, we deploy them in Kubernetes or any other compute infrastructure that people are comfortable with. And you know, we write tons and tons of business logic saying, if this is the request, do this. If this is the request, do this. That’s, that is the programming style we are used to, and that’s still very popular. In the world of agents, agents can be thought of, you know, uh, an LLM abstraction where instead of writing a lot of business logic yourself, you have a set of tools that you author, typically the functions or utils that you call, you, you have in your microservices. And agents kind of decide what are the right set of tools to execute in order to get things done. The claim is there’s a lot of time people spend in writing business logic and not the utils itself. So you write this utils/tools one time and let agents do the business logic. That’s okay. That’s a very beautiful claim, right? But where it’ll fail is if I, if you think about enterprise customers, yes, we’ll talk about consumer applications, but let’s talk about enterprise because that’s where most of the immediate money is, right? Enterprise customers allow determinism. So for example, let’s take an application like Jira or you know, Asana, or whatever application you want to think about, right? They expect a lot of determinism. So let’s say you move a Jira ticket from ‘in-progress’ to say ‘completed’, I mean, I, I’m taking Jira as an example because this is a common enterprise product everybody is familiar with, so they expect it to work deterministically. Agents, as we know, are just wrappers around LLM and they are still hallucinating models, right? Uh, so, determinism is a question mark, right? Yes, we, we, there are a lot of techniques and tools people are using to improve the determinism factor, but if the determinism is a 100%, it’s as good as AI can do everything, right? It’s never going to be the case. So we have to carefully pick and choose the parts of the product, which are okay to be non-deterministic. We’ll talk about what they can be. And we have, we obviously know the parts of the product which cannot be non-deterministic. For example, all the permission boundaries, right? One of the common mistakes I see early startups making is they just code permission boundaries with agents. So let’s say given a logged in user, what are the permissions this person is supposed to have? We can’t let agents guess that. It has to be deterministic because what if there is a mistake and you start seeing your boss’s salary, right? It’s not acceptable. Uh, so similarly, permission boundaries, authentications, authorizations, any, anything in this layer, definitely no agents. Uh, anything that has a strong deterministic workflow requirements, basically moving the state missions and moving from one state to another in a very deterministic way, definitely no agents, but there’s, a lot of parts of the product where we can get away with not having deterministic code. It’s okay to take one path versus the other, for example, you know, uh, uh, how, how do I, how do I say it? 
Uh, let’s say you have an agent, you know, which is trying to, uh, act as a, as a, um, as a, as a persona, let me put it that way. So one of the common example I can take is, let’s say you are trying to use Jira, uh, and somebody’s trying to generate some reports with Jira, right? So think of it as offline reporting. So whether you do report number 1, 2, 3, or whether you do report number 3, 2, 1 in different order, it’s okay. Nobody’s going to, you know, uh, nobody’s going to make a big deal about it. So you get the idea, right? So anywhere there is acceptability in terms of non-determinism, it’s okay to code agents, so that you will reduce on the time you’re spending on the business logic. But any, anywhere you need determinism, you definitely have to have solid code which obeys, you know, the rules of determinism.

Kovid Batra: Yeah, totally. I think that’s a very good example to explain where things can be implemented and where you need to be a little cautious. I think one more thing that comes to my mind is that every time you’re implementing something, uh, talking in terms of AI, uh, you also need to show the results.

Suresh Bysani: Yeah.

Kovid Batra: Right? Let’s say if I implement GitHub Copilot in my team, I need to make sure, uh, the coding standards are improving, or at least the speed of writing the code is improving. There are lesser performance issues. There are lesser, let’s say, vulnerability or security issues. So similarly, I think, uh, at Eightfold or at any other startup where you are an advisor, do you see these implementations happening and people consciously measuring whether, uh, things are improving or not, or they’re just going by, uh, the thing that, okay, if it’s the age of AI, let’s implement and do it, and everything is all positive? They’re not looking at results. They’re not measuring. And if they are, how are they measuring? Can you again, give an example and help us understand?

Suresh Bysani: Yeah. So I think I’ve seen both styles. Uh, the answer to this largely relies on the, you know, influence of, uh, founders and technical leaders within the team. For example, Eightfold is an AI company. Most of the leaders at Eightfold are from strong AI backgrounds. So even before GenAI, they knew how to evaluate a, a model and how to make sure that AI is doing its job. So that goodness will continue even in the GenAI world, right? Typically people do this with Evals frameworks, right? They log everything that is done by AI. And, you know, they kind of understand if, uh, what percentage of it is accurate, right? I mean, they can start with something simple and we can take it all the way fancy. But yes, there are many companies where founders or technical leaders have not worked or they don’t understand AI a lot, right? I mean, there’s, they’re still upskilling, just like all of us.

Kovid Batra: Yeah.

Suresh Bysani: And they don’t know how to really evaluate how good of a job AI is doing. Right? I mean, they are just checking their box saying that yes, I have agents. Yes, I’m using AI. Yes, I’m using LLMs, and whatnot, right? So that’s where the danger is. And, and that’s where I spend a lot of time advising them that you should have a solid framework around observability to understand, you know, how much of these decisions are accurate. You know, what part, how much of your productivity is getting a boost, right? Uh, totally. Right. I think people are now upskilling. That’s where I spend a lot of time educating these new age founders, especially the ones who do not have the AI background, uh, to help them understand that you need to have strong Evals frameworks to understand accuracy and use of this AI for everything that you are, that you’re doing. And, and I see a huge, you know, improvement in, in, in their understanding over time.

Kovid Batra: Perfect. Anything specific that you would like to mention here in terms of your evaluation frameworks for AI, uh, that could really help the larger audience to maybe approach things fundamentally?

Suresh Bysani: Oh, I mean, so there are tons of Evals frameworks on the internet, right? I mean, pick a basic one. Nothing fancy. Especially, I mean, obviously it depends on the size of your project and the impact of your AI model. Things can change significantly. But for most of the agents that people are developing in-house, pick a very simple Evals framework. I mean, if you, I, I see a lot of people are using LangGraph and LangSmith nowadays, right? I mean, I’m not married to a framework. People can, are free to use any framework, but. LangSmith is a good example of what observability in the GenAI world should look like, right? So they’ll, they’re, they’re nicely logging all the conversations that we are having with, with LLM, and you can start looking at the impact of each of these conversations. And over time, you will start understanding whether to tweak your prompt or start providing more context or, you know, maybe build a RAG around it. The whole idea is to understand your interactions with AI because these are all headless agents, right? These are not GPT-like conversations where a user is trying to enter this conversation. Your product is doing this on behalf of you, so which means you are not actually seeing what is happening in terms of interactions with this LLM. So having these Evals frameworks will, you know, kind of nicely log everything that we are doing with LLM and we can start observing what to do in order to improve the accuracy and, you know, get, get better results. That’s, that’s the first idea. So I, I would, I would start with LangSmith and people can get a lot of ideas from LangSmith, and yes, we can go all fancy from there.

Kovid Batra: Great. I think before we, uh, uh, complete this discussion and, uh, say goodbye to you, I think one important thing that comes to my mind is that implementing AI in any tech organization, there could be various areas, various dimensions where you can take it to, but anything that you think is kind of proven already where people should invest, engineering managers should invest, like blindly, okay, this is something that we can pick and like, see the impact and improve the overall engineering efficiency?

Suresh Bysani: Yes. I, I generally recommend people to start with internal productivity because it is not customer-facing AI. So you’re okay to do experiments and fail, and it’ll give a nice headway for people within the company to upskill for Agentic worlds. There are tons of problems, right? Whether it is, I mean, I have a simple goal. 10% of my PRs that are generated within the company should be AI-generated. It looks like a very big number, but if you think about it, you can, all the unit tests can be written by AI, all the, you know, uh, PagerDuty problems can be, can, can be taken at first shot by agents and write simple PRs, right? There are tons of internal things that we can just do with agents. Now, agents are becoming very good at code writing and, you know, code generation. Obviously there are still limitations, but for simple things like unit test, bugs, failures, agents can definitely take a first shot at it. That’s one. And second thing is if we think about all these retro documents, internal confluence documents, or bunch of non-productive things that a lot of engineering people do, right? Uh, agents can do it without getting any boredom, right? I mean, think about it. You don’t need to pay any salaries for agents, right? They can continuously work for you. They’ll automate and do all the repetitive and mundane tasks. But in this process, as we’re talking about it, we should start learning the several frameworks and improve the accuracy of these internal agents, and thereby, because internal agents are easy to measure, right? 10% of my PRs. My bugs have reduced this much by a month or 1 month. Bugs as in, the overall bugs will not reduce. The number of bugs that a developer had to fix versus an agent had to fix, that will reduce, uh, over time, right? So these are very simple metrics to measure and learn, and improve on the agent’s accuracy. Once you have this solid understanding, engineers are the best people. They have fantastic product context, so they will start looking at gaps. Oh, I can put an agent here. I can put an agent here. Maybe I can do an agent for this part in the product. That’s the natural evolution I recommend people. I don’t recommend people to start agents in the product direction.

Kovid Batra: Makes sense. Great. I think Suresh, this was a really interesting session. We got some very practical advice around implementing AI and avoiding the pitfalls. Uh, anything else that you would like to say to our audience, uh, as parting advice?

Suresh Bysani: Yeah, I'm sure a lot of the audience watching this is technical. Upskill yourself in agents, or in AI in general. Five years ago it was probably not seen as a requirement; there was a group of people doing AI and building models, and the majority of the world was just doing backend or full stack engineering. But right now, the definition of a full stack engineer has changed completely. A full stack engineer is now writing agents. It doesn't have to be fine-tuning models or going into the depths of modeling; that is still the model experts' job. But at least learn to write programs using agents and to incorporate agents as first-class citizens in your projects. Definitely spend a lot of time on that.

Kovid Batra: Great. Thank you so much. That’s our time for today. Pleasure having you.

Suresh Bysani: Thank you. Bye-bye.

Webinar: The Hows & Whats of DORA with Mario Mechoulam & Kshitij Mohan

Are your engineering metrics actually driving impact? 🚀
Many teams struggle with slow cycle times, high work-in-progress, and unclear efficiency benchmarks. The right DORA metrics can change that — but only when used effectively.

In this session of ‘The Hows & Whats of DORA’ webinar powered by Typo, host Kovid Batra is joined by:
🎙️ Mario Viktorov Mechoulam — Senior Engineering Manager at Contentsquare & metrics expert
🎙️ Kshitij Mohan — Co-Founder & CEO of Typo

Together, they break down the science of engineering metrics, how they’ve evolved, and their direct impact on team performance, shipping velocity, and business outcomes.

What You’ll Learn in This Episode:

Why DORA Metrics Matter — The role of cycle time, deployment frequency & work in progress

Optimizing Engineering Efficiency — How to balance speed, stability & delivery quality

Avoiding Common Pitfalls — The biggest mistakes teams make with metrics

Connecting Metrics to Business Outcomes — How they influence revenue & customer satisfaction

Getting Leadership Buy-In — Strategies to align executives on the value of engineering metrics

Live Q&A — Addressing audience questions on industry benchmarks, emerging trends & best practices

Timestamps

  • 00:00 — Let’s begin!
  • 00:58 — Meet the Speakers
  • 03:00 — Personal Inspirations
  • 04:52 — Importance of Engineering Metrics
  • 10:00 — Challenges in Implementing Metrics
  • 18:20 — Starting with Metrics
  • 28:03 — Identifying Patterns and Taking Actions
  • 29:15 — Choosing the Right Metrics: Averages vs. Deviations
  • 30:57 — Qualitative Analysis in Metrics
  • 34:31 — Pitfalls in Using Engineering Metrics
  • 42:30 — Q&A Session
  • 47:59 — Balancing Speed and Stability in DORA Metrics
  • 56:04 — Concluding Thoughts and Parting Advice

Episode Transcript

Kovid Batra: Hi everyone. Thanks for joining in for the DORA Metrics Webinar ‘The Hows and Whats of DORA’, powered by Typo. This is Kovid, your host, and with me today we have two amazing speakers who are the front runners, uh, the promoters of bringing data-driven engineering to the engineering world and making dev teams even more impactful. Please welcome the metrics expert tonight, Mario.

Mario Viktorov Mechoulam: Hello. Hello everybody. I’m very happy to be here again, uh, with you Kovid, and meeting you, Mohan.

Kshitij Mohan: Yup. Same, same here, Mario. Glad to be here, uh, with you and Kovid. So yeah, thanks. Thanks for joining in.

Kovid Batra: Thank you, Mario. Thank you for joining in. And the second guy on the screen is the co-founder and CEO of Typo, Kshitij Mohan. I'll be talking more about Kshitij in the coming minutes, but before we get started, I'd like to tell you something. My idea of bringing these two people onto this episode was to bring together two of the best guys I have known in the past few years. Mario and I met recently, but both of them have very deep knowledge and a very deep understanding of how the engineering world works, and how engineering metrics in particular work. Mario has been there to help us out over the last month for any kind of webinar or podcast, and he has been in the industry for the last 15 years, a true metrics enthusiast working with his team at Contentsquare, implementing those metrics and tackling things at the ground level, all the hurdles. And with Kshitij, we have spent almost three to four years now, and I've seen him doing the hard work building Typo, talking to multiple engineering teams, understanding their problems, shaping Typo into what it is today and, honestly, helping and enabling those teams to get more success. So with that, I introduce both of my guests, Mario and Kshitij, to the show. Welcome to the show once again, guys.

Kshitij Mohan: Thank you. Thank you. Thank you so much, Kovid. I think you’ve already said too much about us, so really humbled to, to hear what you are, what you have been saying. But yeah, thanks. Thanks for having us here, man. Yeah.

Kovid Batra: Great, guys. So Mario, you are going to be our key speaker today, and we are looking forward to a lot of learnings coming from your end. But before we jump into that section, I would love to know something about you, and the audience would love to know something about you too. Tell us something about your hobbies, or about people who inspire you, whom you look up to, and what trait or quality in them inspires you.

Mario Viktorov Mechoulam: Right. Uh, I have to go here with my wife, not because she’s listening, uh, but because I’ve never seen anybody so determined and so hardworking. I think for her, nothing is impossible and, uh, I, yeah, I can’t stop admiring her since the day we met and I want to be more like her.

Kovid Batra: Perfect. Perfect.

Kshitij Mohan: You already chose the best answer, Mario. I don’t think we have anything else left to say.

Kovid Batra: Uh, same question to you, uh, Kshitij, uh, who, uh, who inspires you? What quality in them inspire you?

Kshitij Mohan: Sure. Other than my wife, who is definitely the most hardworking person and the breadwinner of our house, because you know how a startup runs. I deeply admire Roger Federer; he's one of my heroes, very close to my heart. I'm a deep sports enthusiast, and Federer has been a symbol for me of hard work, persistence, flawlessness, and the elegance he brought to the game. That is what I have been trying to learn and apply to what we are building today. So yeah, that's something I really admire about him.

Kovid Batra: A really interesting choice. I admire him a lot as well, and the hard work, the game he has, I think I love him even more for that. Great inspiration for you. Perfect, guys. Thank you so much. I think we can jump into the main section now, where we talk about engineering metrics, DORA and beyond: implementing them, and the challenges you have seen implementing them in teams. We'll start with the first, very basic question, and it is for you, Mario. Why do you think these metrics are important? Of course, you are an enthusiast; you know how it is to have a team with or without metrics. So why should teams implement them, and how are things evolving in this space? We are talking about DORA, but things have moved beyond it, because DORA alone is not sufficient to measure engineering efficiency or productivity. What are your thoughts on that?

Mario Viktorov Mechoulam: Absolutely. Thanks for the question. I think they're important because they can be a stepping stone to doing better. We are looking at business impact and business results, and I think we can all agree that there are some metrics that are good proxy indicators of what can go well or what can be improved. If we agree on that, I think we can also agree that it's smart to measure them. Once we measure them, we become accountable for them. So, summarizing this loop a bit: we need to make space for the team so that they can discuss and analyze what is going on. We need to make sure that if there is any knowledge gap, it gets covered, so that we can really understand what impact there could be in changing something we have. And of course, we have to prevent the metrics from being misused, because otherwise we lose all the trust. In the end, what we want to make sure is that we have quality data that we can base our decisions on. If we manage to do that, that's already a big milestone. DORA is great; it's one of the best things we have in the industry. If I'm not mistaken, it's the only peer-reviewed set of metrics we have that has stood the test of time and scrutiny, which is good. So by all means, if you don't know where to start, DORA can be a good place. Now, as with any metrics, just using DORA as a target can be wrong. Using specific metric values as a target can also be wrong. So if I look a bit broader, what do I see? I see that many teams nowadays are no longer caught up in old-fashioned ways of doing things. We tried to bring manufacturing into engineering, and that went very wrong; we all remember the Gantt charts and the multi-year programs that never met their deadlines. Today, if you want to start with DORA, you might often find that your team is already high-performing. What do I mean by that? If we talk about lead time for change, we know we can already ship in a few hours up to a few days. If I look at deployment frequency, we know we can ship as many times per day as we need. So we are probably already high-performing teams. What do we do from there? I think we have to look further, and one way to do that is to stop putting the focus only on engineering. DORA might be great if you already feel you have a big issue inside your coding or pipeline areas. It can also be great if you want to optimize that final 2–5%, but it often cannot change a lot in the grand scheme of things. When we talk about engineering, we are often talking about 10–30% of the lead time from the moment a customer requests something, or the moment we have an idea that we want to deliver to our customers. And if, from this 10–30%, we narrow it down further to just the coding part, that's maybe 10% or less of the total time. So we are missing a lot. And also, by doing that, we are sending a message that this is just engineering.
This is not product, this is not design; it is focused on just the team rather than looking broader. So what I would do is start expanding to other metrics. We have customer metrics, we have well-being metrics, we have cycle time and lead time metrics that take much more into consideration. We have operational metrics that, for example, can help us identify how frustrated the customer is. These types of things.

Kovid Batra: Makes sense. A lot of times I have seen this while talking to clients and understanding what exactly they're looking for in these metrics: they started with DORA. They were good with cycle time and deployment frequency at the first go. But as they evolved, they started looking at the bigger picture. As you said, they wanted more metrics, more visibility, not just metrics but the visibility that DORA was not able to provide. So if I have to ask you, at Contentsquare, or at companies where you have worked previously, how did you start and how did it evolve? Can you share one example so that we can relate to how things evolve when someone starts with DORA and then moves to other metrics that help them understand a better picture?

Mario Viktorov Mechoulam: Of course. I have to confess that I have never started with DORA. I think it can be a great place to start if you don't know where to start, but if you are already aiming high, there's no reason to start with it. To me, there are two key metrics if we look at the efficiency domain of things, that is, doing the things right. One of them is cycle time: from the moment we start working on something until the moment we put it in the customer's hands. This includes not only pull request review time and deployment time; we're including the coding time, the design time, the conception time, these types of things. This one is a lagging indicator, so by the time we have some actionable data on this metric it might already be too late, but we are still on time to make changes in the future. We can talk about that in detail; this is one of the most important things that the team should sit down and do together. The other, which is not a lagging indicator, is work in progress. There is no better indicator for realizing whether you're being efficient and focused than looking at work in progress. I mentioned it last time, but if you have 30 things ongoing with a team of five, there is something deeply wrong in the way you are working with your team.

Kovid Batra: Great. I think that that’s a good example. Great.

Kshitij Mohan: Sorry, that's a great point, Mario, and this is where I would like to add a few things and talk more on that front as well. What we have seen across our own journey is that the first question that always comes up is: why do we need metrics? That has been the starting point for almost every engineering leader today, which is pretty surprising to us, because at a fundamental level, as you mentioned, engineering should be run with clarity and not with chaos. That is the core mission statement we started off with. Every other critical function in an organization has some indicators, some metrics, some goals that they are tracking and moving towards. With tech consuming so much effort and so much cost, it is really surprising that we still have to debate whether there should be a core set of metrics, because there should be. However, the biggest challenge that always comes up, and this is what we have heard most folks say upfront, is that metrics can be gamed. This is what we have been constantly trying to solve in some way or the other. Yes, if you put out very rudimentary metrics and start holding people accountable for them one-on-one, they are not going to feel driven by them; they will feel pushed by them, and that's where the gaming of the system starts happening. So it becomes really important how you design and define your metrics. This is what I would love to talk more about and hear your experiences on: once there is the initial intent of, yes, let's go and implement this metrics system, how do you go about deciding and defining the right areas to start with? If you have any thoughts, or have done this in some way or the other, I think it would be really great for everyone to know what those starting points could look like.

Mario Viktorov Mechoulam: Right. First of all, I think you're right; there is always the risk that metrics get gamed. That's why neither the metric itself nor its value should be the target. The metric should be additional data that we can use to make better decisions. I have evolved what I do over the years, and I don't think what I do today is perfect, far from it; I think it will continue to evolve a lot in the coming years. But I think we need at least two main areas or domains when we design our metrics. And by the way, one way is to design them alone, and the other is to make the team part of this decision and this discussion. Sometimes one is easier, sometimes you have more time to do the other. Other times the team asks you, "Just give me a jumpstart, since you know about it; recommend something and we can move on from there." The two main areas that I think should exist are areas targeting efficiency, that is, doing the things right, and areas targeting effectiveness, that is, doing the right things, because we want to correlate the output of what we do with the impact we want to see, and then be able to adapt. With time, I have realized how important it is also to have more self-reported metrics, well-being metrics. Two that I think are staples, and that some frameworks like SPACE recommend, are well-being and satisfaction with the work you do. The other one that is very important for me is focus time. In the end, I don't think this will come as a surprise to anybody: there was a new paper published, I think by at least one of the authors of DORA, Nicole, I forgot her last name.

Kovid Batra: Nicole Forsgren.

Mario Viktorov Mechoulam: Yeah. And unsurprisingly, even though it was a small study, they saw a correlation between time available to do coding and work being delivered faster and with fewer defects. Now, I haven't read that study all the way through yet, but it was focusing on individual contributors. I think it's much more important to focus on teams. Teams are already a local optimum, right? So let's not make that even more local with another one. So I think it's important to measure this at a team level. And finally, I think we should not forget, especially with the advent of DevOps, that you own what you ship, and it's really important that your ownership and accountability don't end when you put something in production. So here I like to track how many defects end up appearing in production. That can mean there are quality gates that are not good enough, and we might then decide whether it makes sense to invest further based on the defects that end up in production. But there is only one thing worse than having defects in production, and that is the customer discovering those defects before you. If this happens, you should definitely track it, because it means that not only are your quality gates faulty or improvable, but your monitors and alerts are also lacking, and you want to prevent that frustration.

Kovid Batra: True, there are some great insights here, and what was just mentioned is one of the problems that we see. Circling back to the hurdles that teams actually face while implementing these metrics: gaming of the metrics is one problem, but along with it come many more challenges, right? And these are the next things we want to discuss with you, Mario, given your experience. The first thing is that people hesitate to start, to even take up these initiatives, to think about getting buy-in from, let's say, the executives; there is pushback from peer teams on whether it is required or not. So there are many reasons why people don't even start. How do you recommend this initiative should start within teams? It's not just about how to implement certain metrics for certain use cases; of course, that is the next layer of problems to solve once metrics are getting implemented. But before that, there is a lot more to know about how to take up these initiatives and how they shape up in a team in the best way possible. I'm sure you have experiences on that as well. Would you like to throw some light on how this works out and how one should start?

Mario Viktorov Mechoulam: Yes, with pleasure. The best moment to start is now. Well, not right now; after the webinar, of course. You won't find the perfect moment in which your team aligns with the way you think and your executives align with the way you think. That's not going to happen, and a magical new budget is not going to appear. So the best thing you can do is start right away. There is also free material that can be very helpful. One book I always recommend is 'Flow Metrics for Scrum Teams'. It's a free book from the ProKanban organization, and I think it's great; at least the first chapters help you understand the concept of flow. After that, you have to crunch some of the metrics. Unfortunately, most of the popular software we use, like Jira, is notoriously bad for anything that is not Scrum, so it might be a bit complex. You can also do this manually, but I advise against that. So that leaves us with the option of choosing a tool. We are lucky to live in a moment where there is a competitive market, so there are plenty of tools to choose from, and many of them are nice enough to offer free trials and to onboard you and your teams. Why is this important? Because it is much easier to get support from execs and from your team if you already have some data to show. Don't get me wrong, you still have to sell it, and sometimes you have to over-promise a little, and sometimes you have to shorten all the links between starting to implement metrics and seeing some results. Sometimes you have to say, oh, we'll move faster, or we'll be more predictable. That's not really the end goal; that might be a byproduct. The end goal is to change the culture a bit, to change the way people think, to help them see data the same way they see code reviews, pair programming, test-driven development, or a CI/CD pipeline, these types of things.

Kovid Batra: Perfect. One thing you just mentioned was around the adoption part, where you go out, try something, do a POC, and then with that POC you go back to the teams and executives saying what exactly worked out and what did not work out with these kinds of tools. So when somebody is in that situation, how exactly do you recommend they go about picking certain metrics? Can we take an example here, maybe from your experience, of how one should go about picking a few metrics, working on those, showing those numbers to the executives, and then moving on to the next step for the wider teams and the whole organization?

Mario Viktorov Mechoulam: Yes, engineering organization. Of course.

Kshitij Mohan: Also, just to add one more thing here. What we have realized, and I'm pretty sure you have some experience on this front as well, is that it's never possible to show any change in, let's say, the first 14 days, or within a month or a month and a half. So what we usually recommend is that you need to actively spend at least the first three months building the processes: start, identify, measure, execute, right? But that also becomes one of the core challenges, because firstly, if there are no dedicated folks working on that front, things start slacking. And the other part is that if there are no results, everything gets blamed on whatever methodology or tool the team is adopting. So how do you balance that front as well?

Mario Viktorov Mechoulam: Definitely, yeah. Lots of very good points. So yes, you may end up having to spend some of your own money, and that is a good thing: it means you believe in it, and it will give you more time to show some results. Second, it is very important not to mention any Methodology with a capital M, or any framework, so that people don't link it to something they can blame afterwards. Link it to outcomes and to information they can use. And finally, it is true that it might be hard to see results quickly. That is why, initially, I recommend not going all in and trying 10, 20, 30 metrics. Go with one or two, ideally one or two that can be combined and have synergies with each other. For me, those would definitely be cycle time and work in progress. So I can tell you what I like doing with cycle time. I like to track it from start to end, and I think this is a staple, because you can apply it not only at team level; the beauty of it is that you can then apply it at group level, at line level, and at organizational level. And at each of these flight levels, you can measure something different. For a team, maybe what matters is tasks, because you're seeing how quickly you can get into flow with the right priorities and sequencing. At line level, maybe it matters because you have a strategic initiative for the year that you really want to see delivered, and it helps you see when it is moving and when it is being blocked, because maybe not all teams are aligned around the same OKRs. So with cycle time, what I like doing is, every two weeks or so, we sit together, we identify everything that has been shipped in this period, and we check the cycle time. What you can see is a bell-shaped distribution, right? Not every task takes the same time; you go from a few hours up to a few days, maybe a week, sometimes two weeks, or there is more complicated work that, because of its nature and conditions, can take even longer. This bell-shaped curve can also be bimodal, so it could have two or more peaks, or it could have a long, thin tail. If you see something uniform, you're definitely doing something wrong. But where I'm going with this is that, in the end, this gives you a spectrum of probability. You know that after a certain time, unless the nature of the work you do changes drastically, and that's unlikely for teams, except maybe for the yearly re-org that we all do.

Kovid Batra: Yeah, yeah.

Mario Viktorov Mechoulam: Or except if the team's composition changes drastically, this same probability distribution is going to be maintained. After two or three weeks you will already have some degree of confidence that it will hold. That means you can use it to forecast the future a bit, to understand your pace of delivery, your takt time. Okay, but what do we do with it? Maybe we see something that is bimodal, or has a long, thin tail, and we want to be better. To be better, we have to narrow this bell shape, reduce the standard deviation, and understand which of the things that occur are not adding value, which are waste: waiting time, handovers, et cetera. So what I like doing is, we look at the period and we grab the outliers, the items that had the highest cycle time, and we analyze them as a team. No blaming, but we go through the events and conditions that made this thing take, for example, four or five weeks or whatever. And oftentimes we find the same culprits. We're not going to publish a paper by doing this; we're not doing rocket science. The slices of work are much larger than they should be. You have scope creep. You have ill-defined work, so you had to do something, but it was not clear, and we did something else. You have defects, you have waiting time, you have handovers. You have process issues, infrastructure issues, so many things, and they're always the same. Analyzing two or three of those can take anywhere from half an hour to an hour, but this is the qualitative analysis that is useful. You can already identify some patterns. For me, at one point it was that the pipeline was not automated and we could not release every day. Some of the time it was that there were a lot of dependencies between this work and other teams, and we didn't identify them earlier. You identify these patterns, and then this gives you ground to take actions. As a team, you can agree: this happened, we know what caused it, what actions do we take to prevent it? And over time, the goal is that these outliers in the cycle time disappear, because if they disappear, or move to a lower cycle time, not only does your average cycle time look better, but the whole standard deviation gets narrower and narrower. So at some point, if you can say my 85th or 95th quantile is below a week, or around a week, or below two weeks, that's already great. And people can decide to game that or not, but it gives you an idea of how good you are at sequencing and having a cadence of delivery.
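
A small sketch of the quantile analysis Mario describes, assuming cycle times have already been collected as a list of days per completed item; the numbers below are made up for illustration.

```python
# Compute average, 85th/95th quantile, and the outliers worth a blameless team review.
import statistics

cycle_times_days = [0.5, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 21]  # completed items this period

quantiles = statistics.quantiles(cycle_times_days, n=100, method="inclusive")
p85, p95 = quantiles[84], quantiles[94]

print(f"Average: {statistics.mean(cycle_times_days):.1f} days")
print(f"85th quantile: {p85:.1f} days, 95th quantile: {p95:.1f} days")

# Items far above the 85th quantile are the outliers to analyze together, without blame.
outliers = [t for t in cycle_times_days if t > p85]
print("Outliers to discuss:", outliers)
```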

Kovid Batra: Yeah, makes sense. Totally makes sense. Very good example there. But, uh, I, I have, uh, one, one doubt here. Uh, but, uh, Kshitij, do you have something to add here or do you have a question?

Kshitij Mohan: No, I think that's fair enough. But just one more point on it. There is also a question, when we are looking at some metrics, maybe cycle time or review times, that some folks have come back and asked us as well: hey, should we look at the averages, the medians, or should we be looking at the deviations when deciding how to act? Any thoughts on that as well, Mario?

Mario Viktorov Mechoulam: I think it depends, but this is one of the lowest-hanging fruits in terms of conversations you can have openly with your team. Maybe for which specific metrics to start with the team prefers your advice, but this one you can settle just by looking at the numbers. Normally, for cycle time, I like looking at the 85th quantile, because averages are sometimes too kind, and we are not doing metrics to pat ourselves on the back or as a vanity thing, right? One of the purposes of cycle time is to understand how likely we are to deliver something on time, or what our cadence is. The average doesn't help there, because half of the time you'll deliver within that amount of time, but what about the rest? The rest could go close to infinity, and you're not interested in that. You're interested in using this to generate some trust with your stakeholders. At some point, when your stakeholders and the execs see that you deliver as long as you're allowed to focus on one thing and have the time, that's it. You stop getting the question of when something is going to be delivered, because they know it's a matter of letting you do the work.

Kshitij Mohan: Yeah, right.

Kovid Batra: But I think one thing, and Kshitij would also be able to relate to this: a lot of teams have come back to us saying that these metrics don't tell the real picture and that we need to understand the context behind them. You already raised the point that the qualitative analysis of these metrics is also very important. When we say qualitative analysis, in my understanding we are talking about understanding the context around the particular problem in the engineering team for which we are looking at the metrics. How do we go about it?

Mario Viktorov Mechoulam: Okay. So if I understood the question, um, how do we make sure that when we discuss metrics, we include things that are, um, directly impacting the, the, the metric that we see, right?

Kovid Batra: Yes, yes.

Mario Viktorov Mechoulam: Yes. This is a bit more complicated. It's somewhat unrelated, but you definitely need it, and it is one of the first things you should do with your teams: generating trust. You want to make sure that when you sit together, people feel comfortable enough to tell you the truth without fear of being blamed. If somebody has a perception that something is a problem, or is going wrong, or is not good, or that we lack something, you want to know about it, because when you know about it, you can try to measure it. You can try to do something and see whether the result moves the needle in the direction you want or not.

Kovid Batra: Perfect.

Kshitij Mohan: Yeah, just to add on to that, there is also qualitative data that can be looked at, and this is something we are trying to make more robust as well: comparing the qualitative with the quantitative. For example, if you look at PR size or review time as one of the metrics, that is definitely a good indicator. But you can also start correlating it with data coming directly from developer surveys on developer flow. That becomes a good way to see the complete picture, because on one side you are looking, at a transactional level, at how pull requests are flowing, and on top of it, if developers come back and share feedback that, hey, our flow is mostly blocked because of X, Y, Z reasons, then this gives a good overall picture: this is the real context we need to solve, because until we solve it, the review times are always going to be spread across the spectrum. So maybe that could be one of the ways to add to the picture as well.

Mario Viktorov Mechoulam: Yeah, definitely.

Kovid Batra: Yeah. To sum up the point from where we started, on what exactly one should take up initially to do that POC and then go back to the team and the execs to get a bigger buy-in for wider adoption: look at one or two important metrics, and understand the quantitative as well as the qualitative part along with them, so you have a complete understanding of what strategies and action points need to be taken. Then, when you have results, you go back to the team and the execs to agree that, yes, this is something we should go ahead with and adopt in the team. Great, guys, this is one interesting piece I really wanted to discuss. Other than that, a lot of people do this, but they don't do it the right way; there are pitfalls in how you use or implement these engineering metrics. Any examples you would like to share, Mario, of how not to use the metrics, basically?

Mario Viktorov Mechoulam: Yes. So there are three areas you normally have to take care of when you start using metrics: covering the knowledge gap, preventing the metrics from being misused, and, right now I forget the third one, but it'll come to mind afterwards. One of the key things we need to do when we start implementing metrics is to avoid falling into the pitfall of keeping everybody busy, 100% utilization. I understand the context right now is delicate; many companies are looking to optimize costs and make sure that people and resources are invested in what's best. This might get translated into: are people working on something, or are they idle? And 'best' is debatable here. So I would like to start with a question that you can also ask your team, which I think comes from Taiichi Ohno. I could not find the quote when I looked for it, but I remember somebody telling me about it. It is: what do you find more bothersome when you walk into a workplace, seeing people or resources idle, or seeing everybody busy? If you're watching this live, you'll have some time to think; if you're watching the recording, you can pause and think about the question and about why. I think for most of us, even though being idle sounds strange, the gut feeling is to say, no, I think seeing everybody busy is more bothersome. And to help with the answer, there is another question: would you want to go into a hospital where everybody is busy 100% of the time? No, of course not, especially if it's a life-or-death situation. So keeping people busy just for the sake of it can be very dangerous, and it might slow everything down. Why don't you see it? One exercise I like doing with my teams is to invite them to model our system through something that is closer to them, like technology. In this case, I usually use Kafka, producers and consumers. The first question I ask them is: if you were to design producers and consumers with Kafka in between, would you like it if the input is higher than the output, so you're not able to process all the traffic in time? What happens then? Naturally, everybody knows what happens: if you have a queue, this queue becomes larger and larger, so the service time tends to infinity. So this is the first lesson: if you have a system, you want this system to be stable. But you can go further. I don't think any of us would design a system with producers, consumers, and Kafka in between where the input exactly matches the processing capacity; that's extremely risky. The same thing happens when you work with teams. You want to leave some space for maintenance, for downtime, for reflection, for improving, for these types of things. That's the second learning.
Then we can take it a step further and start thinking about where the bottlenecks are. You don't have a single system, you have a chain of them, and your speed, your delivery, will only be as good as the slowest link. We've seen this reflected in teams many times. Who is responsible for the conception, the technical design documents? Everybody. Who's responsible for coding? Everybody. Who is doing pull request review, code review? Everybody. Who's doing QA? Oh, that poor guy over there. Who's doing product validation, UAT? Oh, that other person, right? This is wrong. I think we have to design our system in a way where everybody is responsible for everything, so that we don't have bottlenecks. The fourth learning we can take from this exercise is that it's not about designing the system and leaving it there; you have to be able to adapt and adjust to the demands. Sometimes we know there's going to be an event and the capacity we have is not good enough. The same thing happens the other way around: sometimes I'm idle and I feel bad, so instead of pairing with somebody, I say I'm going to start this new thing. What happens next is that the priority and the focus shift, we have not adjusted the capacity to the demand, and that thing stays there for weeks or months and eventually becomes an outlier. So I think this type of exercise, which is very close to engineering, can help teams understand how these same principles apply to teams. In the end, what we want to make sure is that engineers have time to think, product managers have time to think, designers have time to think, teams have time to think; quality time. Thinking is not being idle, and this is something very important that we have to communicate. Some of the changes we made in our team, and I started talking about pitfalls because it's very easy when you start with a team and metrics to try to optimize the metrics by making people busy, and you normally get the opposite effect. One of the changes was limiting work in progress. You don't want to have five people in a team, everybody working on a project alone. That might, with big quotes here, give you a boost in speed, or however you want to call it, in the short term, but in the long term it's bad: all this knowledge is going to be lost when people leave or projects get stalled. The second thing we did is we favored pair programming, even mob programming. All these changes contributed in the end to doing less and achieving more. And sometimes the best thing you can do to go fast is not to start new work.
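
One standard way to connect the WIP-limiting argument above to cycle time is Little's Law (average cycle time = WIP ÷ throughput). Mario does not cite it by name, so the sketch below is an illustrative addition with made-up numbers.

```python
# Little's Law ties WIP to cycle time: avg cycle time = WIP / throughput.
wip_items = 30            # items in progress for a team of five
throughput_per_week = 6   # items finished per week

avg_cycle_time_weeks = wip_items / throughput_per_week
print(f"Average cycle time: {avg_cycle_time_weeks:.1f} weeks")  # 5.0 weeks

# Limiting WIP while keeping the same throughput shortens the expected cycle time.
wip_items = 8
print(f"With WIP limited to 8: {wip_items / throughput_per_week:.1f} weeks")  # ~1.3 weeks
```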

Kovid Batra: Makes sense. Great.

Kshitij Mohan: ‘Going faster is not necessarily the fastest way to go’ is basically what you wanna say.

Mario Viktorov Mechoulam: Say again?

Kshitij Mohan: Going fast is not always the fastest way to go.

Mario Viktorov Mechoulam: Exactly, exactly.

Kovid Batra: Great, guys. I think we'll break for a minute. We now have the Q&A session for 15 minutes, so there will be some questions that our audience would like to ask. I request the audience to put in all the questions they have in the next two minutes, so that we can take them up. Till then, you have some music to listen to.

Mario Viktorov Mechoulam: Okay.

Kovid Batra: All right. Uh, we have a few questions here now. Uh, I think we can, we are good to go. Uh, I’ll just highlight the first one. This comes from Daniel. How do you handle the continuous nature of the cycle? For example, there might not be one moment of prioritization or there might be a break because of other business priorities taking over between prioritization and engineering started. Interesting question. I think we touched a little bit on this, but Mario, uh, please take this up.

Mario Viktorov Mechoulam: Yes, very good question. To me, this is very much connected with how many companies do roadmaps and delivery plans. I think we're often planning way too far ahead, and in doing so we are investing effort and time that is then never capitalized on. One way to break this is to do fewer things. I'm not against roadmaps, I think roadmaps are great; the problem is when you start treating your roadmap as a delivery plan or as a strategy, with deadlines that are fixed. So one way to handle it is to do fewer things. What this means is that, if we put it in the perspective of many teams working together, there is always a chance that something new comes up that changes the plan you had so far. So what is the least amount of work you can start so that you have the minimal risk of being blocked, interrupted, and having to switch to something else? That's normally one project, maybe two if you have a big team. The same thing can be applied within the team. I think this connects a bit to the point I was making about adjusting capacity to demand. If you use Kanban, for example, though I'm sure other methodologies have different ways, there's this replenishment meeting: putting something on the board means there is a commitment to see it through all the way to the end. So if we are not sure about the priority something has today, or if we cannot be confident that it will maintain the same priority throughout its cycle, the safest thing might be not to start it, and to work on something else with somebody else within your team.

Kovid Batra: And I would just like to add here on this question, Daniel: there are unavoidable situations, we know that, where you need to jump onto other tasks. Then, at least from the metrics perspective, it is going to change the complete picture of your cycle time, because one of the tasks is lying there and taking too much time. We can actually build a system that either excludes those PRs whose cycle time is running high, or reduces the number of hours that just went in vain where there was no activity. If that was something you were also trying to understand or ask here, that was my take on this question.

Mario Viktorov Mechoulam: One comment on my end: you can definitely do that. There is one side effect of doing it, though, which is that you won't be able to communicate to the business, or to the people making this decision, what impact it has on the team. Sometimes I think it's better that when we look at metrics we feel bad, we feel uncomfortable, because that's the trigger to try to do something about it. But while you were talking, Kovid, you mentioned something important that resonated with me and gave me another idea, which is: if you work small, if we manage to work in tasks of around one to three days, we can afford to delay something, no matter how urgent it is, or almost no matter how urgent, for a couple of days, to make sure that this is..

Kovid Batra: So, that’s also a good solution here actually, breaking your work into batches, small batches. I think that would be great. Uh, creating small batch of work would really help there. Yeah. Perfect. Uh, moving on to the next question that’s from Nisha. Uh, how do you recommend balancing speed and stability when optimizing DORA metrics? Yeah, I think, uh, uh, yeah, go ahead. Please go ahead.

Mario Viktorov Mechoulam: Okay. My first reflection on this is that speed and stability are the same thing, or at least they should be. You want stability in order to have speed. So I am of the opinion that unless you're a startup at the point where a few weeks really decide whether you run out of budget or whether you sign those critical clients, stability is normally going to have the better long-term effect on speed. So for me, if we realize that we are not fast enough to fix and deploy a resolution, that's the first point we should start addressing.

Kshitij Mohan: Sure, I'll just add one critical point to that: what we have realized is that it also depends on the nature of your work and the market you are in. There are some kinds of users to whom you cannot ship unstable features, whereas when you are going for some kind of novelty, a tryout, or a beta, then speed definitely matters, because it's okay if it breaks. But that's a call you have to take based on what exactly the situation is and where you have the leeway to prioritize what. That's the call that also comes into play.

Mario Viktorov Mechoulam: Yeah. Very good point.

Kovid Batra: Okay. We have a question from Narendra. There are different formulas available to calculate metrics. For example, lead time for change is calculated as the time taken for a change to go from the master branch to production; on some other sites it says it should be the time from when the task was taken into development till the deployment. What is the ideal way to do this? Okay. Mario, do you want to take it, or Kshitij, I think you are equally excited to take this up?

Kshitij Mohan: Fair question, Narendra. It's a funny one as well, because everyone has their own way of suggesting what it is, but it depends on what you actually want to measure. If you want to measure your lead time for change and you're in a situation where you want to understand the time it takes from the inception of the work, from when you thought about it and got started, to it finally moving into production, then you go with the formula that calculates it from the inception time. But if you are in a regular Scrum-based approach, or you are looking at more of a continuous CI/CD type of ecosystem, then you might want to look at just the point where you started the work and the time it finally got merged into whichever branches you consider. So what we recommend is that it should be configurable to suit different teams and different businesses. There is no one defined formula for this; you need to look at it from a broader perspective on what translates best to your engineering ecosystem. That's what my take would be. But Mario, if you have anything to add, please feel free.
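
A brief sketch of this "configurable lead time" idea: the same work item yields different lead times depending on which start event you pick. The field names and timestamps below are assumptions for illustration.

```python
# Lead time for change under two start-event definitions for the same work item.
from datetime import datetime

work_item = {
    "created_at":        datetime(2024, 3, 1, 9, 0),   # task written in the backlog
    "development_start": datetime(2024, 3, 4, 10, 0),  # first commit / moved to In Progress
    "deployed_at":       datetime(2024, 3, 7, 16, 0),  # released to production
}

def lead_time_days(item: dict, start_field: str) -> float:
    """Lead time in days from the chosen start event until deployment."""
    delta = item["deployed_at"] - item[start_field]
    return delta.total_seconds() / 86400

print(f"From inception:  {lead_time_days(work_item, 'created_at'):.1f} days")        # ~6.3
print(f"From dev start:  {lead_time_days(work_item, 'development_start'):.1f} days") # ~3.3
```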

Mario Viktorov Mechoulam: Perfect answer. You, you read my mind.

Kovid Batra: Great. I think we can move on to the next one then. Uh, it’s from Priya. Are there any new or emerging metrics that you believe will become industry standards soon?

Mario Viktorov Mechoulam: Good question. I don't know. I think we are far away from industry standards for many things, and when there have been industry standards, often they have been wrong or, let's say, things have been mismeasured. What do you think, Mohan? Is there anything?

Kshitij Mohan: Yeah, firstly, it's a very good question to think about how this current ecosystem is changing. With the advent of new frameworks like SPACE, DevEx, and DORA, and a lot more now coming in, I think we are at a pretty interesting inflection point for the whole metric-defining space. As you mentioned, we are not sure what becomes the norm, but what I feel is there is not going to be any one specific way to measure things, because for every team, every organization, every stage your company is at, there are so many different nuances that everyone wants to track and understand. So I don't think there is going to be one specific metric, but there are definitely going to be a lot more frameworks coming in over the next few years that will help us identify and define the metrics in a better way.

Kovid Batra: Perfect. Uh, moving on. Uh, I think this is the last one we can take. Uh, this is from Vishal. Uh, how do DORA Metrics correlate with business outcomes like revenue growth or customer satisfaction in your experience? Uh, very relevant question I think. Uh, Mario, would you like to take this up?

Kshitij Mohan: Yup.

Mario Viktorov Mechoulam: Yes, sure. So there is not a direct link, right? Your metrics are information that you can use to improve how you ship things. How you ship things allows you to create business impact, which might not always be direct to the customer; maybe your business impact is that our pipelines are now better, more stable, faster. And by having this type of business impact, we can more easily unlock business value. To make sure that the business value is the outcome we want, we also need to measure it. This is what I meant when talking about effectiveness metrics; these are normally the business or product metrics. So we have to, of course, have everything before that, including a very fast pipeline, so we can correlate the changes that we ship with the outcomes that we see.

Kovid Batra: Totally. So there is no direct impact, but there is definitely a goal-related impact that we can see, and there are a few examples I would like to share here. Change failure rate, for that matter, could be something you might find very close to, if not revenue, at least customer satisfaction, because if the failure rate is higher, customer satisfaction will definitely drop. So you can easily find a correlation there with at least that metric. But for metrics like cycle time, you might not necessarily find a direct impact; still, if it's better, it is really going to help you ship more features faster, and that ultimately could help you, maybe at the stage where you are finding PMF, or in having more revenue through the next feature you just released. So that's my answer to Vishal's question.

Kshitij Mohan: And definitely, and I think just to add one last thing to it. So fundamentally we, uh, talk a lot about DORA, uh, but I think somehow this is something that I find very less talked about is that fundamentally DORA is trying to capture, uh, simplistically the velocity, quality, throughput of your systems. And these could be translated into multiple ways and into multiple forms. But if used in the right effective way, there would be some ways that you can start impacting and understanding the overall outcomes. But yeah, they have to be now correlated with what makes, what metrics make more sense to you. So if you already have some key metrics, if you start fitting them around the DORA ecosystem, then that really works well is what we have seen. But yeah, uh, it kind of, uh, depends on your use case as well.

Kovid Batra: Great, Kshitij. Thank you so much. Uh, I think guys, uh, we are, uh, that’s our time, actually. Uh, but before we just go, uh, both of you, uh, it was really great talking to you, uh, learning from you both. Uh, any parting thoughts that you would like to share? Uh, I think we’ll go with Mario first. Uh, Mario, any, any parting thoughts for the audience?

Mario Viktorov Mechoulam: Yes. Um, I hope I’m not repeating myself, but, uh, when we talk about the pitfall of 100% utilization, um, we have to take that into, into the whole business, uh, domain. So oftentimes we are hearing people need to innovate. People need to automate. People need to improve. People need to this. People need to that. It, it’s a zero sum game, right? So if we want people to do that, that type of things, there needs to be a change in culture that accompanies the, the words we talk. And when we do that, we have to be mindful of this, of, of Kafka, right, of the stability of the system if we want people to invest there. And as leaders, as managers, it’s our responsibility to, to enable that.

Kshitij Mohan: Totally. Perfect. I think just to add on to what Mario said. So, exactly. So it should not be a zero sum game. It should be a positive sum game. I think this is all how we should be defining everything for, for the entire ecosystem out there.

Kovid Batra: Alright, thank you. Thank you guys. Uh, it was a pleasure having you both here and there are many more such sessions to come. And Mario, we’ll keep bugging you, uh, to share your insights on those. Till then, uh, this is our time guys. Thank you so much.

Mario Viktorov Mechoulam: Thank you.

Kshitij Mohan: Thank you. Thanks, Mario. Thank you so much.

'Data-Driven Engineering: Building a Culture of Metrics' with Mario Viktorov Mechoulam, Sr. Engineering Manager, Contentsquare

How do you build a culture of engineering metrics that drives real impact? Engineering teams often struggle with inefficiencies — high work-in-progress, unpredictable cycle times, and slow shipping. But what if the right metrics could change that?

In this episode of the groCTO by Typo Podcast, host Kovid Batra speaks with Mario Viktorov Mechoulam, Senior Engineering Manager at Contentsquare, about how to establish a data-driven engineering culture using effective metrics. From overcoming cultural resistance to getting executive buy-in, Mario shares his insights on making metrics work for your team.

What You’ll Learn in This Episode:

Why Metrics Matter: How the lack of metrics creates inefficiencies & frustrations in tech teams.

Building a Metrics-Driven Culture: The five key steps — observability, accountability, understanding, discussions, and agreements.

Overcoming Resistance: How to tackle biases, cultural pushback, and skepticism around metrics.

Practical Tips for Engineering Managers: Early success indicators like reduced work-in-progress & improved predictability.

Getting Executive Buy-In: How to align leadership on the value of engineering metrics.

A Musician’s Path to Engineering Metrics: Mario’s unique journey from music to Lean & Toyota Production System-inspired engineering.

Timestamps

  • 00:00 — Let’s begin!
  • 00:47 — Meet the Guest: Mario
  • 01:48 — Mario’s Journey into Engineering Metrics
  • 03:22 — Building a Metrics-Driven Engineering Culture
  • 06:49 — Challenges & Solutions in Metrics Adoption
  • 07:37 — Why Observability & Accountability Matter
  • 11:12 — Driving Cultural Change for Long-Term Success
  • 20:05 — Getting Leadership Buy-In for Metrics
  • 28:17 — Key Metrics & Early Success Indicators
  • 30:34 — Final Insights & Takeaways

Episode Transcript

Kovid Batra: Hi, everyone. Welcome to the all new episode of groCTO by Typo. This is Kovid, your host. Today with us, we have a very special guest whom I found after stalking a lot of people on LinkedIn, but found him in my nearest circle. Uh, welcome, welcome to the show, Mario. Uh, Mario is a Senior Engineering Manager at Contentsquare and, uh, he is an engineering metrics enthusiast, and that’s where we connected. We talked a lot about it and I was sure that he’s the guy we should have on the podcast to talk about it. And that’s why we thought today’s topic should be something that is very close to Mario, which is setting metrics culture in the engineering teams. So once again, welcome, welcome to the show, Mario. It’s great to have you here.

Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. I’m very happy to join this series.

Kovid Batra: Great. So Mario, I think before we get started, one quick question, so that we know you a little bit more. Uh, this is kind of a ritual we always have, so don’t get surprised by it. Uh, tell us something about yourself from your childhood or from your teenage that defines who you are today.

Mario Viktorov Mechoulam: Right. I think my, my, both of my parents are musicians and I played violin for a few years, um, also in the junior orchestra. I think this contact with music and with the orchestra in particular, uh, was very important to define who I am today because of teamwork and synchronicity. So, orchestras need to work together and need to have very, very good collaboration. So, this part stuck somewhere on the back of my brain. And teamwork and collaboration is something that defines me today and I value a lot in others as well.

Kovid Batra: That’s really interesting. That is one unique thing that I got to learn today. And I’m sure orchestra must have been fun.

Mario Viktorov Mechoulam: Yes.

Kovid Batra: Do you do that, uh, even today?

Mario Viktorov Mechoulam: Uh, no, no, unfortunately I’m, I’m like the black sheep of my family because I, once I discovered computers and switched to that, um, I have not looked back. Uh, some days I regret it a bit, uh, but this new adventure, this journey that I’m going through, um, I don’t think it’s, it’s irreplaceable. So I’m, I’m happy with what I’m doing.

Kovid Batra: Great! Thank you for sharing this. Uh, moving on, uh, to our main section, which is setting a culture of metrics in engineering teams. I think a very known topic, a very difficult to do thing, but I think we’ll address the elephant in the room today because we have an expert here with us today. So Mario, I think I’ll, I’ll start with this. Uh, sorry to say this, but, uh, this looks like a boring topic to a lot of engineering teams, right? People are not immediately aligned towards having metrics and measurement and people looking at what they’re doing. And of course, there are biases around it. It’s a good practice. It’s an ideal practice to have in high performing engineering teams. But what made you, uh, go behind this, uh, what excited you to go behind this?

Mario Viktorov Mechoulam: A very good question. And I agree that, uh, it’s not an easy topic. I think that, uh, what’s behind the metrics is around us, whether we like it or not. Efficiency, effectiveness, optimization, productivity. It’s, it’s in everything we do in the world. So, for example, even if you, if you go to the airport and you stay in a queue for your baggage check in, um, I’m sure there’s some metrics there, whether they track it or not, I don’t know. And, um, and I discovered in my, my university years, I had, uh, first contact with, uh, Toyota production system with Lean, how we call it in the West, and I discovered how there were, there were things that looked like, like magic that you could simply by observing and applying use to transform the landscape of organizations and the landscape systems. And I was very lucky to be in touch with this, uh, with this one professor who is, uh, uh, the Director of the Lean Institute in Spain. Um, and I was surprised to see how no matter how big the corporation, how powerful the people, how much money they have, there were inefficiencies everywhere. And in my eyes, it looks like a magic wand. Uh, you just, uh, weave it around and then you magically solve stuff that could not be solved, uh, no matter how much money you put on them. And this was, yeah, this stuck with me for quite some time, but I never realized until a few years into the industry that, that was not just for manufacturing, but, uh, lean and metrics, they’re around us and it’s our responsibility to seize it and to make them, to put them to good use.

Kovid Batra: Interesting. Interesting. So I think from here, I would love to know some of the things that you have encountered in your journey, um, as an engineering leader. Uh, when you start implementing or bringing this thought at first point in the teams, what’s their reaction? How do you deal with it? I know it’s an obvious question to ask because I have been dealing with a lot of teams, uh, while working at Typo, but I want to hear it from you firsthand. What’s the experience like? How do you bring it in? How do you motivate those people to actually come on board? So maybe if you have an example, if you have a story to tell us from there, please go ahead.

Mario Viktorov Mechoulam: Of course, of course. It’s not easy and I’ve made a lot of mistakes and one thing that I learned is that there is no fast track. It doesn’t matter if you know, if you know how to do it. If you’ve done it a hundred times, there’s no fast track. Most of the times it’s a slow grind and requires walking the path with people. I like to follow the, these steps. We start with observability, then accountability, then understanding, then discussions and finally agreements. Um, but of course, we cannot, we cannot, uh, uh, drop everything at, at, at, at once at the team because as you said, there are people who are generally wary of, of this, uh, because of, um, bad, bad practices, because of, um, unmet expectations, frustrations in the past. So indeed, um, I have, I have had to be very, very careful about it. So to me, the first thing is starting with observability, you need to be transparent with your intentions. And I think one, one key sentence that has helped me there is that trying to understand what are the things that people care about. Do you care about your customers? Do you care about how much focus time, how much quality focus time do you have? Do you care about the quality of what you ship? Do you care about the impact of what you ship? So if the answer to these questions is yes, and for the majority of engineers, and not only engineers, it’s, it’s yes, uh, then if you care about something, it might be smart to measure it. So that’s a, that’s a good first start. Um, then by asking questions about what are the pains or generating curiosity, like for example, where do you think we spend the most time when we are working to ship something? You can, uh, you can get to a point where the team agrees to have some observability, some metrics in place. So that’s the first step.

Uh, the second step is to generate accountability. And that is arguably harder. Why so? Because in my career, I’ve seen sometimes people, um, who think that these are management metrics. Um, and they are, so don’t get me wrong. I think management can put these metrics to good use, um, but this sends a message in that nobody else is responsible for them, and I disagree with this. I think that everybody is responsible. Of course, I’m ultimately responsible. So, what I do here is I try to help teams understand how they are accountable of this. So if it was me, then I get to decide how it really works, how they do the work, what tools they use, what process they use. This is boring. It’s boring for me, but it’s also boring and frustrating for the people. People might see this as micromanagement. I think it’s, uh, it’s much more intellectually interesting if you get to decide how you do the work. And this is how I connect the accountability so that we can get teams to accept that okay, these metrics that we see, they are a result of how we have decided to work together. The things, the practices, the habits that we do. And we can, we can influence them.

Kovid Batra: Totally. But the thing is, uh, when you say that everyone should be onboarded with this thought that it is not just for the management, for the engineering, what exactly, uh, are those action items that you plan that get this into the team as a culture? Because I, I feel, uh, I’ll touch this topic again when we move ahead, but when we talk about culture, it comes with a lot of aspects that you can, you can not just define, uh, in two days or three days or five days of time. There is a mindset that already exists and everything that you add on top of it comes only or fits only if it aligns with that because changing culture is a hard thing, right? So when you say that people usually feel that these are management metrics, somehow I feel that this is part of the culture. But when you bring it, when you bring it in a way that everyone is accountable, bringing that change into the mindset is, is, is a little hard, I feel. So what exactly do you do there is what I want to understand from you.

Mario Viktorov Mechoulam: Sure. Um, so just, just to be, to be clear, at the point where you introduce this observability and accountability, it’s not, it’s not part of the culture yet. I think this is the, like, putting the foot on the door, uh, to get people to start, um, to start looking at these, using these and eventually they become a culture, but way, way later down the line.

Kovid Batra: Got it, got it. Yeah.

Mario Viktorov Mechoulam: Another thing is that culture takes, takes a lot of time. It’s, uh, um, how can we say? Um, organic adoption is very slow. And after organic adoption, you eventually get a shifting culture. Um, so I was talking to somebody a few weeks back, and they were telling me a senior leader for one of another company, and they were telling me that it took a good 3–4 years to roll out metrics in a company. And even then, they did not have all the levels of adoption, all the cultural changes everywhere in all the layers that they wanted to. Um, so, so this, there’s no fast track. This, this takes time. And when you say that, uh, people are wary about metrics or people think that manage, this is management metrics when they, when, when you say this is part of culture, it’s true. And it comes maybe from a place where people have been kept out of it, or where they have seen that metrics have been misused to do precisely micromanagement, right?

Kovid Batra: Right.

Mario Viktorov Mechoulam: So, yeah, people feel like, oh, with this, my work is going to be scrutinized. Perhaps I’m going to have to cut corners. I’m going to be forced to cut corners. I will have less satisfaction in the work we do. So, so we need to break that, um, to change the culture. We need to break the existing culture and that, that takes time. Um, so for me, this is just the first step. Uh, just the first step to, um, to make people feel responsible, because at the end of the day, um, every, every team costs some, some, some budget, right, to the company. So for an average sized team, we might be talking $1 million, depending on where you’re located, of course. But $1 million per year. So, of course, this, each of these teams, they need to make $1 million in, uh, in impact to at least break even, but we need more. Um, how do we do that? So two things. First, you need, you need to track the impact of the work you do. So that already tells you that if we care about this, there is a metric that we have to incorporate. We have to track the impact, the effect that the work we ship has in the product. But then the second, second thing is to be able to correlate this, um, to correlate what we ship with the impact that we see. And, and there is a very, very, uh, narrow window to do that. You cannot start working on something and then ship it three years later and say, Oh, I had this impact. No, in three years, landscape changed a lot, right? So we need to be quicker in shipping and we need to be tracking what we ship. Therefore, um, measuring lead time, for example, or cycle time becomes one of the highest expressions of being agile, for example.

Kovid Batra: Got it.

Mario Viktorov Mechoulam: So it’s, it’s through these, uh, constant repetition and helping people see how the way they do work, how, whether they track or not, and can improve or not, um, has repercussions in the customer. Um, it’s, it’s the way to start, uh, introducing this, this, uh, this metric concept and eventually helping shift the culture.

Kovid Batra: So is, let’s say cycle time for, for that matter, uh, is, is a metric that is generally applicable in every situation and we can start introducing it at, at the first step and then maybe explore more and, uh, go for some specifics or cycle time is specific to a situation in itself?

Mario Viktorov Mechoulam: I think cycle time is one of these beautiful metrics that you can apply everywhere. Uh, normally you see it applied on the teams. To do, doing, done. But, uh, what I like is that you can apply it, um, everywhere. So you can apply it, um, across teams, you can apply, apply it at line level, you can even apply it at company level. Um, which is not done often. And I think this is, this is a problem. But applying it outside of teams, it’s definitely part of the cultural change. Um, I’ve seen that the focus is often on teams. There’s a lot of focus in optimizing teams, but when you look at the whole picture, um, there are many other places that present opportunities for optimization, and one way to do that is to start, to start measuring.

Kovid Batra: Mario, did you get a chance where you could see, uh, or compare basically, uh, teams or organizations where people are using engineering metrics, and let’s say, a team which doesn’t use engineering metrics? How does the value delivery in these systems, uh, vary, and to what extent, basically?

Mario Viktorov Mechoulam: Let me preface that. Um, metrics are just a cornerstone, but they don’t guarantee that you’d do better or worse than the teams that don’t apply them. However, it’s, it’s very hard, uh, sometimes to know whether you’re doing good or bad if you don’t have something measurable, um, to, to do that. What I’ve seen is much more frustration generally in teams that do not have metrics. But because not having them, uh, forces them into some bad habits. One of the typical things that I, that I see when I join a team or do a Gemba Walk, uh, on some of the teams that are not using engineering metrics, is high work in progress. We’re talking 30+ things are ongoing for a team of five engineers. This means that on average, everybody’s doing 5–6 things at the same time. A lot of context switching, a lot of multitasking, a lot of frustration and leading to things taking months to ship instead of days. Of course, as I said, we can have teams that are doing great without this, but, um, if you’re already doing this, I think just adding the metric to validate it is a very small price to pay. And even if you’re doing great, this can start to change in any moment because of changes in the team composition, changes in the domain, changes in the company, changes in the process that is top-down. So it’s, uh, normally it’s, it’s, it’s very safe to have the metrics to be able to identify this type of drift, this type of degradation as soon as they happen. What I’ve seen also with teams that do have metric adoption is first this eventual cultural change, but then in general, uh, one thing that they do is that they keep, um, they keep the pieces of work small, they limit the work in progress and they are very, very much on top of the results on a regular basis and discussing these results. Um, so this is where we can continue with the, uh, cultural change.

Uh, so after we have, uh, accountability, uh, the next thing, step is understanding. So helping people through documentation, but also through coaching, understand how the choices that we make, the decisions, the events, produce the results that we see for which we’re responsible. And after that, fostering discussion for which you need to have trust, because here we don’t want blaming. We don’t want comparing teams. We want to understand what happened, what led to this. And then, with these discussions, see what can we do to prevent these things. Um, which leads to agreement. So doing this circle, closing the circle, doing it constantly, creates habits. Habits create continuous improvement, continuous learning. And at a certain point, you have the feeling that the team already understands the concepts and is able to work autonomously on this. And this is the moment where you delegate responsibility, um, of this and of the execution as well. And you have created, you have changed a bit the culture in one team.

Kovid Batra: Makes sense. What else does it take, uh, to actually bring in this culture? What else do you think is, uh, missing in this recipe yet?

Mario Viktorov Mechoulam: Yes. Um, I think working with teams is one thing. It’s a small and controlled environment. But the next thing is that you need executive sponsorship. You need to work at the organization level. And that is, that is a bit harder. Let’s say just a bit harder. Um, why is it hard?

Kovid Batra: I see some personal pain coming in there, right?

Mario Viktorov Mechoulam: Um, well, no, it depends. I think it can be harder or it can be easier. So, for example, uh, my experience with startups is that in general, getting executive sponsorship there, the buy-in, is way easier. Um, at the same time, the, because it’s flatter, so you’re in contact day to day with the people who, who need to give you this buy-in. At the same time, very interestingly, engineers in these organizations often are, often need these metrics much less at that point. Why? Because when we talk about startups, we’re talking about much less meetings, much less process. A lot of times, a lot of, um, people usually wear multiple hats, boundaries between roles are not clear. So there’s a lot of collaboration. People usually sit in the very same room. Um, so, so these are engineers that don’t need it, but it’s also a good moment to plant the seed because when these companies grow, uh, you’ll be thankful for that later. Uh, where it’s harder to get it, it’s in bigger corporations. But it’s in these places where I think that it’s most needed because the amount of process, the amount of bureaucracy, the amount of meetings, is very, very draining to the teams in those places. And usually you see all these just piles up. It seldom gets removed. Um, that, maybe it’s a topic for a different discussion. But I think people are very afraid of removing something and then be responsible of the result that removal brings. But yeah, I have, I have had, um, we can say fairly, a fair success of also getting the executive sponsorship, uh, in, in organizations to, to support this and I have learned a few things also along the way.

Kovid Batra: Would you like to share some of the examples? Not specifically from, let’s say, uh, getting sponsorship from the executives, I would be interested because you say it’s a little hard in places. So what things do you think, uh, can work out when you are in that room where you need to get a buy-in on this? What exactly drives that?

Mario Viktorov Mechoulam: Yes. The first point is the same, both for grassroots movements with teams and executive sponsorship, and that is to be transparent. Transparent with what, what do you want to do? What’s your intent and why do you think this is important? Uh, now here, and I’m embarrassed to say this, um, we, we want to change the culture, right? So we should focus on talking about habits, um, right? About culture, about people, et cetera. Not that much about, um, magic to say that, but I, but I’m guilty of using that because, um, people, people like how this sounds, people like to see, to, to, to hear, oh, we’ll introduce metrics and they will be faster and we’ll be more efficient. Um, so it’s not a direct relationship. As I said, it’s a stepping stone that can help you get there. Um, but, but it’s not, it’s not a one month journey or a one year journey. It can take slightly longer, but sometimes to get, to get the attention, you have to have a pitch which focuses more on efficiency, which focuses more on predictability and these type of things. So that’s definitely one, one learning. Um, second learning is that it’s very important, no matter who you are, but it’s even more important when you are, uh, not at the top of the, uh, of the management, uh, uh, pyramid to get, um, by, uh, so to get coaching from your, your direct manager. So if you have somebody that, uh, makes your goals, your objectives, their own, uh, it’s great because they have more experience, uh, they can help you navigate these and present the cases, uh, in a much better and structured way for the, for the intent that you have. And I was very lucky there as well to count on people that were supportive, uh, that were coaching me along the way. Um, yes.

So, first step is the same. First step is to be transparent and, uh, with your intent and share something that you have done already. Uh, here we are often in a situation where you have to put your money where your mouth is, and sometimes you have to invest from your own pocket if you want, for example, um, to use a specific tool. So to me, tools don’t really matter. So what’s important is start with some, something and then build up on top of it, change the culture, and then you’ll find the perfect tool that serves your purpose. Um, exactly. So sometimes you have to, you have to initiate this if you want to have some, some, some metrics. Of course, you can always do this manually. I’ve done it in the past, but I definitely don’t recommend it because it’s a lot of work. In an era where most of these tools are commodities, so we’re lucky enough to be able to gather this metric, this information. Yeah, so usually after this PoC, this experiment for three to six months with the team, you should have some results that you can present, um, to, um, to get executive sponsorship. Something that’s important here that I learned is that you need to present the results very, very precisely. Uh, so what was the problem? What are the actions we did? What’s the result? And that’s not always easy because when you, when you work with metrics for a while, you quickly start to see that there are a lot of synergies. There’s overlapping. There are things that impact other things, right? So sometimes you see a change in the trend, you see an improvement somewhere, uh, you see the cultural impact also happening, but you’re not able to define exactly what’s one thing that we need or two things that we, that we need to change that. Um, so, so that part, I think is very important, but it’s not always easy. So it has to be prepared clearly. Um, the second part is that unfortunately, I discovered that not many people are familiar with the topics. So when introducing it to get the exact sponsorship, you need to, you need to be able to explain them in a very simple, uh, and an easy way and also be mindful of the time because most of the people are very busy. Um, so you don’t want to go in a full, uh, full blown explanation of several hours.

Kovid Batra: I think those people should watch these kinds of podcasts.

Mario Viktorov Mechoulam: Yeah. Um, but, but, yeah, so it’s, it’s, it’s the experiment, it’s the results, it’s the actions, but also it’s a bit of background of why is this important and, um, yeah, and, and how did it influence what we did.

Kovid Batra: Yeah, I mean, there’s always, uh, different, uh, levels where people are in this journey. Let’s, let’s call this a journey where you are super aware, you know what needs to be done. And then there is a place where you’re not aware of the problem itself. So when you go through this funnel, there are people whom you need to onboard in your team, who need to first understand what we are talking about what does it mean, how it’s going to impact, and what exactly it is, in very simple layman language. So I totally understand that point and realize that how easy as well as difficult it is to get these things in place, bring that culture of metrics, engineering metrics in the engineering teams.

Well, I think this was something really, really interesting. Uh, one last piece that I want to touch upon is when you put in all these efforts into onboarding the teams, fostering that culture, getting buy-in from the executives, doing your PoCs and then presenting it, getting in sync with the team, there must be some specific indicators, right, that you start seeing in the teams. I know you have just covered it, but I want to again highlight that point that what exactly someone who is, let’s say an engineering manager and trying to implement it in the team should be looking for early on, or let’s say maybe one month, two months down the line when they started doing that PoC in their teams.

Mario Viktorov Mechoulam: I think, um, how comfortable the people in the team get in discussing and explaining the concepts during analysis of the metrics, this quality analysis is key. Um, and this is probably where most of the effort goes in the first months. We need to make sure that people do understand the metrics, what they represent, how the work we do has an impact on those. And, um, when we reached that point, um, one, one cue for me was the people in my teams, uh, telling me, I want to run this. This meant to me that we had closed the circle and we were close to having a habit and people were, uh, were ready to have this responsibility delegated to them to execute this. So it put people in a place where, um, they had to drive a conversation and they had to think about okay, what am I seeing? What happened? But what could it mean? But then what actions do we want to take? But this is something that we saw in the past already, and we tried to address, and then maybe we made it worse. And then you should also see, um, a change in the trend of metrics. For example, work in progress, getting from 30+ down to something close to the team size. Uh, it could be even better because even then it means that people are working independently and maybe you want them to collaborate. Um, some of the metrics change drastically. Uh, we can, we can talk about it another time, but the standard deviation of the cycle time, you can see how it squeezes, which means that, uh, it, it doesn’t, uh, feel random anymore. When, when I’m going to ship something, but now right now we can make a very, um, a very accurate guess of when, when it’s going to happen. So these types of things to me, mark, uh, good, good changes and that you’re on the right path.

Kovid Batra: Uh, honestly, Mario, very insightful, very practical tips that I have heard today about the implementation piece, and I’m sure this doesn’t end here. Uh, we are going to have more such discussions on this topic, and I want to deep dive into what exact metrics, how to use them, what suits which situation, talking about things like standard deviation from your cycle time would start changing, and that is in itself an interesting thing to talk about. So probably we’ll cover that in the next podcast that we have with you. For today, uh, this is our time. Any parting advice that you would like to share with the audience? Let’s say, there is an Engineering Manager. Let’s say, Mario five years back, who is thinking to go in this direction, what piece of advice would you give that person to get on this journey and what’s the incentive for that person?

Mario Viktorov Mechoulam: Yes. Okay. Clear. In, in general, you, you’ll, you’ll hear that people and teams are too busy to improve. We all know that. So I think as a manager who wants to start introducing these, uh, these concepts and these metrics, your, one of your responsibilities is to make room, to make space for the team, so that they can sit down and have a quality, quality time for this type of conversation. Without it, it’s not, uh, it’s not going to happen.

Kovid Batra: Okay, perfect. Great, Mario. It was great having you here. And I’m sure, uh, we are recording a few more sessions on this topic because this is close to us as well. But for today, this is our time. Thank you so much. See you once again.

Mario Viktorov Mechoulam: Thank you, Kovid. Pleasure is mine. Bye-bye!

Kovid Batra: Bye.

AI-Driven SDLC: The Future of Software Development

Leveraging AI-driven tools for the Software Development Life Cycle (SDLC) has reshaped how software is planned, developed, tested, and deployed. By automating repetitive tasks, analyzing vast datasets, and predicting future trends, AI enhances efficiency, accuracy, and decision-making across all SDLC phases.

Let's explore the impact of AI on SDLC and highlight must-have AI tools for streamlining software development workflows.

How AI Transforms SDLC?

The SDLC comprises seven phases, each with specific objectives and deliverables that ensure the efficient development and deployment of high-quality software. Here is an overview of how AI influences each stage of the SDLC:

Requirement Analysis and Gathering

This is the first phase of the SDLC, and it directly shapes every step that follows. Here, the team gathers and analyzes the project's requirements.

How AI Impacts Requirement Analysis and Gathering?

  • AI-driven tools assist with quality checks, data collection, and requirement analysis tasks such as requirement classification, requirement modeling, and traceability (a simplified classification example follows this list).
  • They analyze historical data to predict future trends, resource needs, and potential risks, helping teams optimize planning and resource allocation.
  • AI tools detect patterns in new data and forecast upcoming trends over specific periods, supporting data-driven decisions.
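
To make the requirement-classification idea above more concrete, here is a minimal sketch that trains a tiny text classifier to label requirements as functional or non-functional. It is an illustration only, with made-up training sentences, and does not reflect any specific vendor's implementation.

```python
# Minimal sketch: classifying requirements as functional vs. non-functional.
# Illustrative only -- real tools use far richer models and training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set (label 1 = functional, 0 = non-functional).
requirements = [
    "The user shall be able to reset their password via email",
    "The system shall export monthly reports as CSV",
    "The API must respond within 200 ms under normal load",
    "All personal data must be encrypted at rest",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(requirements, labels)

new_req = "Users shall be able to filter invoices by date range"
print("functional" if model.predict([new_req])[0] == 1 else "non-functional")
```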

Planning

This stage covers project planning and preparation before development begins. It involves defining the project scope, setting objectives, allocating resources, understanding business requirements, and creating a roadmap for the development process.

How AI Impacts Planning?

  • AI tools analyze historical data, market trajectories, and technological advancements to anticipate future needs and shape forward-looking roadmaps (see the forecasting sketch after this list).
  • They examine past trends, team performance, and resource needs to allocate resources optimally across project phases.
  • They also help facilitate communication among stakeholders by automating meeting scheduling, summarizing discussions, and generating actionable insights.
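
As a simplified illustration of how historical data can inform forecasts, the sketch below fits a linear regression on hypothetical past-sprint data to estimate how long a planned scope might take. Real planning tools combine many more signals than story points alone.

```python
# Minimal sketch: forecasting delivery time from historical sprint data.
# The numbers are made up; real AI planning tools use many more features.
import numpy as np
from sklearn.linear_model import LinearRegression

# Past sprints: committed story points -> calendar days actually taken.
story_points = np.array([[20], [25], [30], [35], [40]])
days_taken = np.array([9, 11, 13, 16, 18])

model = LinearRegression().fit(story_points, days_taken)

planned_scope = np.array([[32]])
print(f"Estimated duration: {model.predict(planned_scope)[0]:.1f} days")
```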

Design and Prototype

The third step of the SDLC is producing a software prototype or concept aligned with the chosen architecture or development pattern. This involves creating a detailed blueprint of the software based on the requirements, outlining its components and how it will be built.

How AI Impacts Design and Prototype?

  • AI-powered tools use natural language processing (NLP) to turn plain-language descriptions into UI mockups, wireframes, and even design documents.
  • They also suggest optimal design patterns based on project requirements and assist in creating more scalable software architecture.
  • AI tools can simulate different scenarios, enabling developers to visualize the impact of their choices and pick the optimal design.

Microservices Architecture and AI-Driven SDLC

The adoption of microservices architecture has transformed how modern applications are designed and built. When combined with AI-driven development approaches, microservices offer unprecedented flexibility, scalability, and resilience.

How AI Impacts Microservices Implementation

  • Service Boundary Optimization: AI analyzes domain models and data flow patterns to recommend optimal service boundaries, ensuring high cohesion and low coupling between microservices.

  • API Design Assistance: Machine learning models examine existing APIs and suggest design improvements, consistency patterns, and potential breaking changes before they affect consumers.

  • Service Mesh Intelligence: AI-enhanced service meshes like Istio can dynamically adjust routing rules, implement circuit breaking, and optimize load balancing based on real-time traffic patterns and service health metrics.

  • Automated Canary Analysis: AI systems evaluate the performance of new service versions against baseline metrics, automatically controlling the traffic distribution during deployments to minimize risk. A simplified version of this comparison is sketched after this list.
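
The sketch below is a deliberately naive version of that canary comparison: it compares error rates between a baseline and a canary window and returns a promote-or-rollback verdict. Production canary analysis (for example, Kayenta-style tooling) runs statistical tests over many metrics; the threshold and numbers here are purely illustrative.

```python
# Minimal sketch: naive canary analysis comparing error rates.
# Real canary analysis uses statistical tests across many metrics.
from dataclasses import dataclass

@dataclass
class WindowStats:
    requests: int
    errors: int

    @property
    def error_rate(self) -> float:
        return self.errors / self.requests if self.requests else 0.0

def canary_verdict(baseline: WindowStats, canary: WindowStats,
                   max_relative_increase: float = 0.2) -> str:
    """Promote the canary unless its error rate is noticeably worse."""
    allowed = baseline.error_rate * (1 + max_relative_increase) + 1e-4
    return "promote" if canary.error_rate <= allowed else "rollback"

# Hypothetical metrics gathered from the last 10-minute window.
baseline = WindowStats(requests=10_000, errors=12)
canary = WindowStats(requests=1_000, errors=9)
print(canary_verdict(baseline, canary))  # -> "rollback" (0.9% vs ~0.12%)
```

The small additive term keeps the check from rejecting a canary just because the baseline happens to have a near-zero error rate in the sampled window.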

Development

The development stage aims to produce software that is efficient, functional, and user-friendly. Here, the design is transformed into a working application: actual coding takes place based on the design specifications.

How AI Impacts Development?

  • AI-driven coding assistants write and explain code and generate documentation and code snippets, speeding up time-consuming, resource-intensive tasks.
  • These tools also act as a virtual pair-programming partner, offering insights and solutions to complex coding problems.
  • They enforce best practices and coding standards by automatically analyzing code to flag violations and detect issues like code duplication and potential security vulnerabilities (a toy duplication check is sketched below).
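
As a toy illustration of one such check, the sketch below flags lines duplicated across source files by normalizing and indexing them. Real analyzers compare token streams or syntax trees rather than raw lines, so treat this only as a sketch of the idea.

```python
# Minimal sketch: flagging duplicated lines across source files.
# Real tools compare token streams or ASTs, not normalized text lines.
import sys
from collections import defaultdict
from pathlib import Path

def find_duplicate_lines(paths):
    seen = defaultdict(list)  # normalized line -> [(file, line_no), ...]
    for path in paths:
        for i, line in enumerate(Path(path).read_text().splitlines(), start=1):
            normalized = " ".join(line.split())
            if len(normalized) > 30:        # ignore short/common lines
                seen[normalized].append((path, i))
    return {line: locs for line, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    for line, locations in find_duplicate_lines(sys.argv[1:]).items():
        print(f"Duplicated in {len(locations)} places: {line[:60]}...")
        for file, line_no in locations:
            print(f"  {file}:{line_no}")
```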

Testing

Once development is done, the entire codebase is thoroughly examined and optimized. Testing ensures the software operates reliably before it reaches end users and identifies opportunities for enhancement.

How AI Impacts Testing?

  • Machine learning algorithms analyze past test results to identify patterns and predict the areas of code most likely to fail (a naive risk-ranking sketch follows this list).
  • AI tools mine software requirements, user stories, and historical data to automatically generate test cases, ensuring comprehensive coverage of both functional and non-functional aspects of the application.
  • AI and ML automate visual testing by comparing the user interface (UI) across platforms and devices, keeping design and functionality consistent.
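
Here is a very small sketch of the "predict likely-to-fail areas" idea: score each file by how often it changed recently and how often past test failures involved it, then rank files by risk. The history is hypothetical, and real models use far richer features such as churn, coverage, and ownership.

```python
# Minimal sketch: ranking files by historical failure risk.
# Hypothetical history; real models use churn, coverage, ownership, etc.
history = [
    # (file, times_changed_last_90_days, test_failures_involving_file)
    ("billing/invoice.py", 24, 6),
    ("auth/session.py", 15, 1),
    ("search/indexer.py", 30, 9),
    ("ui/theme.py", 5, 0),
]

def risk_score(changes: int, failures: int) -> float:
    # Naive heuristic: failure rate weighted by recent change volume.
    return (failures / (changes + 1)) * changes

ranked = sorted(history, key=lambda r: risk_score(r[1], r[2]), reverse=True)
for file, changes, failures in ranked:
    print(f"{file:20s} risk={risk_score(changes, failures):.2f}")
```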

Deployment

The deployment phase involves releasing the tested and optimized software to end-users. This stage serves as a gateway to post-deployment activities like maintenance and updates.

How AI Impacts Deployment?

  • AI tools streamline deployment by automating routine tasks, optimizing resource allocation, collecting user feedback, and addressing issues as they arise.
  • AI-driven CI/CD pipelines monitor the deployment environment, predict potential issues, and automatically roll back changes if necessary (a toy rollback check is sketched after this list).
  • They also analyze deployment data to predict and mitigate potential issues, smoothing the transition from development to production.
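
To illustrate the monitor-and-roll-back idea, the sketch below runs a toy post-deployment watch: it samples an error-rate signal and triggers a rollback after two consecutive bad readings. The metric source and rollback command are placeholders, not a real pipeline integration.

```python
# Minimal sketch: post-deployment health check with automatic rollback.
# fetch_error_rate() and rollback() are placeholders for real integrations.
import random
import time

def fetch_error_rate() -> float:
    """Stand-in for querying a metrics backend (e.g., HTTP 5xx ratio)."""
    return random.uniform(0.0, 0.05)  # simulated value for the sketch

def rollback() -> None:
    # In a real pipeline this might shell out to your deploy tool, e.g.
    # "kubectl rollout undo deployment/my-service".
    print("Rolling back to previous release...")

def watch_deployment(threshold: float = 0.02, checks: int = 5, interval: int = 2):
    bad_checks = 0
    for _ in range(checks):
        rate = fetch_error_rate()
        print(f"error rate = {rate:.3f}")
        bad_checks = bad_checks + 1 if rate > threshold else 0
        if bad_checks >= 2:          # two consecutive bad readings
            rollback()
            return False
        time.sleep(interval)
    print("Deployment looks healthy.")
    return True

if __name__ == "__main__":
    watch_deployment()
```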

DevOps Integration in AI-Driven SDLC

The integration of DevOps principles with AI-driven SDLC creates a powerful synergy that enhances collaboration between development and operations teams while automating crucial processes. DevOps practices ensure continuous integration, delivery, and deployment, which complements the AI capabilities throughout the SDLC.

How AI Enhances DevOps Integration

  • Infrastructure as Code (IaC) Optimization: AI algorithms analyze infrastructure configurations to suggest optimizations, identify potential security vulnerabilities, and ensure compliance with organizational standards. Tools like HashiCorp's Terraform with AI plugins can predict resource requirements based on application behavior patterns.

  • Automated Environment Synchronization: AI-powered tools detect discrepancies between development, staging, and production environments, reducing the "it works on my machine" syndrome. This capability ensures consistent behavior across all deployment stages.

  • Anomaly Detection in CI/CD Pipelines: Machine learning models identify abnormal patterns in build and deployment processes, flagging potential issues before they impact production. These systems learn from historical pipeline executions to establish baselines for normal operation. A simplified baseline check is sketched after this list.

  • Self-Healing Infrastructure: AI systems monitor application health metrics and can automatically initiate remediation actions when predefined thresholds are breached, reducing mean time to recovery (MTTR) significantly.
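
As a simplified stand-in for the pipeline anomaly detection described above, the sketch below flags build durations that deviate strongly from a recent baseline using a z-score. The durations are invented, and production systems learn richer baselines per pipeline and per stage.

```python
# Minimal sketch: flagging anomalous CI build durations with a z-score.
# Durations are invented; real systems model each pipeline stage separately.
import statistics

recent_build_minutes = [8.1, 7.9, 8.4, 8.0, 8.3, 7.8, 8.2, 8.5, 7.7, 8.1]

def is_anomalous(duration: float, history: list, z_threshold: float = 3.0) -> bool:
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history) or 1e-9  # guard against zero spread
    return abs(duration - mean) / stdev > z_threshold

for new_duration in (8.2, 14.5):
    flag = "ANOMALY" if is_anomalous(new_duration, recent_build_minutes) else "ok"
    print(f"build took {new_duration} min -> {flag}")
```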

Maintenance

This is the final, ongoing phase of the software development life cycle. Maintenance ensures that the software continues to function effectively and evolves with user needs and technical advancements over time.

How AI Impacts Maintenance?

  • AI analyzes performance metrics and logs to identify potential bottlenecks and suggest targeted fixes.
  • AI-powered chatbots and virtual assistants handle user queries, generate self-service documentation, and escalate complex issues to the relevant team.
  • These tools also keep routine system updates, security patching, and database maintenance on schedule, improving accuracy and reducing manual intervention.

Observability and AIOps

Traditional monitoring approaches are insufficient for today's complex distributed systems. AI-driven observability platforms provide deeper insights into system behavior, enabling teams to understand not just what's happening, but why.

How AI Enhances Observability

  • Distributed Tracing Intelligence: AI analyzes trace data across microservices to identify performance bottlenecks and optimize service dependencies automatically.

  • Predictive Alert Correlation: Machine learning algorithms correlate seemingly unrelated alerts across different systems, identifying root causes more quickly and reducing alert fatigue among operations teams.

  • Log Pattern Recognition: Natural language processing extracts actionable insights from unstructured log data, identifying unusual patterns that might indicate security breaches or impending system failures. A toy template-mining example is sketched after this list.

  • Service Level Objective (SLO) Optimization: AI systems continuously analyze system performance against defined SLOs, recommending adjustments to maintain reliability while optimizing resource utilization.
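
A toy version of log pattern recognition: group log lines into templates by masking the variable parts (numbers, hex-like IDs) and count how often each template occurs, so that sudden spikes stand out. Real AIOps platforms use far more robust template mining (Drain-style parsers, for example) and streaming statistics.

```python
# Minimal sketch: grouping log lines into templates and counting them.
# Real log-pattern mining (e.g., Drain-style parsers) is far more robust.
import re
from collections import Counter

logs = [
    "user 4182 logged in from 10.0.0.7",
    "user 993 logged in from 10.0.3.2",
    "payment 7781 failed: card declined",
    "payment 7790 failed: card declined",
    "payment 7794 failed: card declined",
    "cache miss for key session:88ab",
]

def template(line: str) -> str:
    line = re.sub(r"\b\d+(\.\d+)*\b", "<NUM>", line)   # numbers and IPs
    line = re.sub(r"\b[0-9a-f]{4,}\b", "<HEX>", line)  # hex-like ids
    return line

counts = Counter(template(line) for line in logs)
for tmpl, count in counts.most_common():
    print(f"{count:3d}  {tmpl}")
```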

Security and Compliance in AI-Driven SDLC

With increasing regulatory requirements and sophisticated cyber threats, integrating security and compliance throughout the SDLC is no longer optional. AI-driven approaches have transformed this traditionally manual area into a proactive and automated discipline.

How AI Transforms Security and Compliance

  • Shift-Left Security Testing: AI-powered static application security testing (SAST) and dynamic application security testing (DAST) tools identify vulnerabilities during development rather than after deployment. Tools like Snyk and SonarQube with AI capabilities detect security issues contextually within code review processes. A simplified, rule-based pre-review check is sketched after this list.

  • Regulatory Compliance Automation: Natural language processing models analyze regulatory requirements and automatically map them to code implementations, ensuring continuous compliance with standards like GDPR, HIPAA, or PCI-DSS.

  • Threat Modeling Assistance: AI systems analyze application architectures to identify potential threats, recommend mitigation strategies, and prioritize security concerns based on risk impact.

  • Runtime Application Self-Protection (RASP): AI-driven RASP solutions monitor application behavior in production, detecting and blocking exploitation attempts in real-time without human intervention.
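
As a deliberately simple, rule-based stand-in for a shift-left check, the sketch below scans changed files for hard-coded secrets before they reach review. The patterns are illustrative rather than exhaustive, and real AI-assisted SAST tools go far beyond pattern matching.

```python
# Minimal sketch: pre-review scan for hard-coded secrets in changed files.
# Rule-based and illustrative; real SAST tools go far beyond pattern matching.
import re
import sys
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key id (heuristic)
    re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{12,}['\"]"),
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def scan(paths):
    findings = []
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        for i, line in enumerate(text.splitlines(), start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append((path, i, line.strip()[:60]))
    return findings

if __name__ == "__main__":
    hits = scan(sys.argv[1:])
    for path, line_no, snippet in hits:
        print(f"{path}:{line_no}: possible secret: {snippet}")
    sys.exit(1 if hits else 0)
```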

Top Must-Have AI Tools for SDLC

Requirement Analysis and Gathering

  • ChatGPT/OpenAI: Generates user stories, asks clarifying questions, gathers requirements and functional specifications based on minimal input.
  • IBM Watson: Uses natural language processing (NLP) to analyze large volumes of unstructured data, such as customer feedback or stakeholder interviews.

Planning

  • Jira (AI Plugins): AI plugins like BigPicture or Elements.ai help with task automation, risk prediction, and scheduling optimization.
  • Microsoft Project AI: Microsoft integrates AI and machine learning features for forecasting timelines and costs and for optimizing resource allocation.

Design and Prototype

  • Figma: Integrates AI plugins like Uizard or Galileo AI for generating design prototypes from text descriptions or wireframes.
  • Lucidchart: Suggests design patterns, optimizes workflows, and automates the creation of diagrams like ERDs, flowcharts, and wireframes.

Microservices Architecture

  • Kong Konnect: AI-powered API gateway that optimizes routing and provides insights into API usage patterns.
  • MeshDynamics: Uses machine learning to optimize service mesh configurations and detect anomalies.

Development

  • GitHub Copilot: Suggests code snippets, functions, and even entire blocks of code based on the context of the project.
  • Tabnine: Supports multiple programming languages and learns from your codebase to provide accurate, context-aware suggestions.

Testing

  • Testim: Creates, executes, and maintains automated tests. It can self-heal tests by adapting to changes in the application's UI.
  • Applitools: Leverages AI for visual testing and detects visual regressions automatically.

Deployment

  • Harness: Automates deployment pipelines, monitors deployments, detects anomalies and rolls back deployments automatically if issues are detected.
  • Jenkins (AI Plugins): Automates CI/CD pipelines with predictive analytics for deployment risks.

DevOps Integration

  • GitLab AI: Provides insights into CI/CD pipelines, suggesting optimizations and identifying potential bottlenecks.
  • Dynatrace: Uses AI to provide full-stack observability and automate operational tasks.

Security and Compliance

  • Checkmarx: AI-driven application security testing that identifies vulnerabilities with context-aware coding suggestions.
  • Prisma Cloud: Provides AI-powered cloud security posture management across the application lifecycle.

Maintenance

  • Datadog: Uses AI to provide insights into application performance, infrastructure, and logs.
  • PagerDuty: Prioritizes alerts, automates responses, and predicts potential outages.

Observability and AIOps

  • New Relic One: Combines AI-powered observability with automatic anomaly detection and root cause analysis.
  • Splunk IT Service Intelligence: Uses machine learning to predict and prevent service degradations and outages.

How does Typo help in improving SDLC visibility?

Typo is an intelligent engineering management platform used to gain visibility, remove blockers, and maximize developer effectiveness. Through SDLC metrics, you can ensure alignment with business goals and prevent developer burnout. It integrates with your tech stack (Git, Slack, calendars, and CI/CD tools, to name a few) to deliver real-time insights.

Typo Key Features:

  • Cycle time breakdown
  • Work log
  • Investment distribution
  • Goal setting for continuous improvement
  • Developer burnout alert
  • PR insights
  • Developer workflow automation

Future Trends in AI-Driven SDLC

As AI technologies continue to evolve, several emerging trends are set to further transform the software development lifecycle:

  • Generative AI for Complete Application Creation: Beyond code snippets, future AI systems will generate entire applications from high-level descriptions, with humans focusing on requirements and business logic rather than implementation details.

  • Autonomous Testing Evolution: AI will eventually create and maintain test suites independently, adjusting coverage based on code changes and user behavior without human intervention.

  • Digital Twins for SDLC: Creating virtual replicas of the entire development environment will enable simulations of changes before implementation, predicting impacts across the system landscape.

  • Cross-Functional AI Assistants: Future development environments will feature AI assistants that understand business requirements, technical constraints, and user needs simultaneously, bridging gaps between stakeholders.

  • Quantum Computing Integration: As quantum computing matures, it will enhance AI capabilities in the SDLC, enabling complex simulations and optimizations currently beyond classical computing capabilities.

Conclusion

AI-driven SDLC has revolutionized software development, helping businesses enhance productivity, reduce errors, and optimize resource allocation. These tools ensure that software is not only developed efficiently but also evolves in response to user needs and technological advancements.

As AI continues to evolve, it is crucial for organizations to embrace these changes to stay ahead of the curve in the ever-changing software landscape.

Developer Productivity in the Age of AI

Are you tired of feeling like you’re constantly playing catch-up with the latest AI tools, trying to figure out how they fit into your workflow? Many developers and managers share that sentiment, caught in a whirlwind of new technologies that promise efficiency but often lead to confusion and frustration.

The problem is clear: while AI offers exciting opportunities to streamline development processes, it can also amplify stress and uncertainty. Developers often struggle with feelings of inadequacy, worrying about how to keep up with rapidly changing demands. This pressure can stifle creativity, leading to burnout and a reluctance to embrace the innovations designed to enhance our work.

But there’s good news. By reframing your relationship with AI and implementing practical strategies, you can turn these challenges into opportunities for growth. In this blog, we’ll explore actionable insights and tools that will empower you to harness AI effectively, reclaim your productivity, and transform your software development journey in this new era.

The Current State of Developer Productivity

Recent industry reports reveal a striking gap between the available tools and the productivity levels many teams achieve. For instance, a survey by GitHub showed that 70% of developers believe repetitive tasks hamper their productivity. Moreover, over half of developers express a desire for tools that enhance their workflow without adding unnecessary complexity.

Understanding the Productivity Paradox

Despite investing heavily in AI, many teams find themselves in a productivity paradox. Research indicates that while AI can handle routine tasks, it can also introduce new complexities and pressures. Developers may feel overwhelmed by the sheer volume of tools at their disposal, leading to burnout. A 2023 report from McKinsey highlights that 60% of developers report higher stress levels due to the rapid pace of change.

Common Emotional Challenges

As we adapt to these changes, feelings of inadequacy and fear of obsolescence may surface. It’s normal to question our skills and relevance in a world where AI plays a growing role. Acknowledging these emotions is crucial for moving forward. For instance, it can be helpful to share your experiences with peers, fostering a sense of community and understanding.

Key Challenges Developers Face in the Age of AI

Understanding the key challenges developers face in the age of AI is essential for identifying effective strategies. This section outlines the evolving nature of job roles, the struggle to balance speed and quality, and the resistance to change that often hinders progress.

Evolving Job Roles

AI is redefining the responsibilities of developers. While automation handles repetitive tasks, new skills are required to manage and integrate AI tools effectively. For example, a developer accustomed to manual testing may need to learn how to work with automated testing frameworks like Selenium or Cypress. This shift can create skill gaps and adaptation challenges, particularly for those who have been in the field for several years.

Balancing Speed and Quality

The demand for quick delivery without compromising quality is more pronounced than ever. Developers often feel torn between meeting tight deadlines and ensuring their work meets high standards. For instance, a team working on a critical software release may rush through testing phases, risking quality for speed. This balancing act can lead to technical debt, which compounds over time and creates more significant problems down the line.

Resistance to Change

Many developers hesitate to adopt AI tools, fearing that they may become obsolete. This resistance can hinder progress and prevent teams from fully leveraging the benefits that AI can provide. A common scenario is when a developer resists using an AI-driven code suggestion tool, preferring to rely on their coding instincts instead. Encouraging a mindset shift within teams can help them embrace AI as a supportive partner rather than a threat.

Strategies for Boosting Developer Productivity

To effectively navigate the challenges posed by AI, developers and managers can implement specific strategies that enhance productivity. This section outlines actionable steps and AI applications that can make a significant impact.

Embracing AI as a Collaborator

To enhance productivity, it’s essential to view AI as a collaborator rather than a competitor. Integrating AI tools into your workflow can automate repetitive tasks, freeing up your time for more complex problem-solving. For example, using tools like GitHub Copilot can help developers generate code snippets quickly, allowing them to focus on architecture and logic rather than boilerplate code.

  • Recommended AI tools: Explore tools that integrate seamlessly with your existing workflow. Platforms like Jira for project management and Test.ai for automated testing can streamline your processes and reduce manual effort.

Actual AI Applications in Developer Productivity

AI offers several applications that can significantly boost developer productivity. Understanding these applications helps teams leverage AI effectively in their daily tasks.

  • Code generation: AI can automate the creation of boilerplate code. For example, tools like Tabnine can suggest entire lines of code based on your existing codebase, speeding up the initial phases of development and allowing developers to focus on unique functionality.
  • Code review: AI tools can analyze code for adherence to best practices and identify potential issues before they become problems. Tools like SonarQube provide actionable insights that help maintain code quality and enforce coding standards.
  • Automated testing: Implementing AI-driven testing frameworks can enhance software reliability. For instance, using platforms like Selenium and integrating them with AI can create smarter testing strategies that adapt to code changes, reducing manual effort and catching bugs early.
  • Intelligent debugging: AI tools assist in quickly identifying and fixing bugs. For example, Sentry offers real-time error tracking and helps developers trace errors to their sources, allowing teams to resolve issues before they impact users.
  • Predictive analytics for sprints/project completion: AI can help forecast project timelines and resource needs. Tools like Azure DevOps leverage historical data to predict delivery dates, enabling better sprint planning and management.
  • Architectural optimization: AI tools suggest improvements to software architecture. For example, the AWS Well-Architected Tool evaluates workloads and recommends changes based on best practices, ensuring optimal performance.
  • Security assessment: AI-driven tools identify vulnerabilities in code before deployment. Platforms like Snyk scan code for known vulnerabilities and suggest fixes, allowing teams to deliver secure applications.

Continuous Learning and Professional Development

Ongoing education in AI technologies is crucial. Developers should actively seek opportunities to learn about the latest tools and methodologies.

Online resources and communities: Utilize platforms like Coursera, Udemy, and edX for courses on AI and machine learning. Participating in online forums such as Stack Overflow and GitHub discussions can provide insights and foster collaboration among peers.

Cultivating a Supportive Team Environment

Collaboration and open communication are vital in overcoming the challenges posed by AI integration. Building a culture that embraces change can lead to improved team morale and productivity.

Building peer support networks: Establish mentorship programs or regular check-ins to foster support among team members. Encourage knowledge sharing and collaborative problem-solving, creating an environment where everyone feels comfortable discussing their challenges.

Setting Effective Productivity Metrics

Rethink how productivity is measured. Focus on metrics that prioritize code quality and project impact rather than just the quantity of code produced.

Tools for measuring productivity: Use analytics tools like Typo that provide insights into meaningful productivity indicators. These tools help teams understand their performance and identify areas for improvement.

How Does Typo Enhance Developer Productivity?

There are many developer productivity tools available to tech companies. One of them is Typo – the most comprehensive solution on the market.

Typo surfaces early indicators of developers’ well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers innovative features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams. It measures the overall team’s productivity while keeping individuals’ strengths and weaknesses in mind.

Here are three ways in which Typo measures team productivity:

Software Development Lifecycle (SDLC) Visibility

Typo provides complete visibility into software delivery. It helps development teams and engineering leaders identify blockers in real time, predict delays, and maximize business impact. Moreover, it lets the team dive deep into key DORA metrics and understand how well they are performing against industry-wide benchmarks. Typo also provides real-time predictive analysis of how the team is performing, helps identify the best dev practices, and offers a comprehensive view across velocity, quality, and throughput.

This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks. It ensures that resources are utilized efficiently, resulting in enhanced productivity and better business outcomes.

AI-Powered Code Review

Typo helps developers streamline the development process and enhance their productivity by identifying issues in the code and auto-fixing them using AI before merging to master. This means less time spent reviewing and more time for important tasks, keeping code error-free and making the whole process faster and smoother. The platform also applies optimized practices and built-in methods spanning multiple languages. Besides this, it standardizes the code and enforces coding standards, which reduces the risk of security breaches and boosts maintainability.

Since the platform automates repetitive tasks, it allows development teams to focus on high-quality work. Moreover, it accelerates the review process and facilitates faster iterations by providing timely feedback.  This offers insights into code quality trends and areas for improvement, fostering an engineering culture that supports learning and development.

Developer Experience

Typo surfaces early indicators of developers’ well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. These check-ins are built on a developer experience framework that triggers short AI-driven pulse surveys.

Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.

By addressing these aspects, Typo’s holistic approach combines data-driven insights with proactive monitoring and strategic intervention to create a supportive and high-performing work environment. This leads to increased developer productivity and satisfaction.

Continuous Learning: Empowering Developers for Future Success

With its robust features tailored for the modern software development environment, Typo acts as a catalyst for productivity. By streamlining workflows, fostering collaboration, integrating with AI tools, and providing personalized support, Typo empowers developers and their managers to navigate the complexities of development with confidence. Embracing Typo can lead to a more productive, engaged, and satisfied development team, ultimately driving successful project outcomes.


AI Code Reviews for Remote Teams

Have you ever felt overwhelmed trying to maintain consistent code quality across a remote team? As more development teams shift to remote work, the challenges of code reviews only grow—slowed communication, lack of real-time feedback, and the creeping possibility of errors slipping through.

Moreover, think about how much time is lost waiting for feedback or having to rework code due to small, overlooked issues. When you’re working remotely, these frustrations compound—suddenly, a task that should take hours stretches into days. You might be spending time on repetitive tasks like syntax checking, code formatting, and manually catching errors that could be handled more efficiently. Meanwhile, you’re expected to deliver high-quality work without delays.

Fortunately, AI-driven tools offer a solution that can ease this burden. By automating the tedious aspects of code reviews, such as catching syntax errors and formatting inconsistencies, AI can give developers more time to focus on the creative and complex aspects of coding.

In this blog, we’ll explore how AI can help remote teams tackle the difficulties of code reviews and how tools like Typo can further improve this process, allowing teams to focus on what truly matters—writing excellent code.

The Unique Challenges of Remote Code Reviews

Remote work has introduced a unique set of challenges that impact the code review process. They are:

Communication barriers

When team members are scattered across different time zones, real-time discussions and feedback become more difficult. The lack of face-to-face interactions can hinder effective communication and lead to misunderstandings.

Delays in feedback

Without the immediacy of in-person collaboration, remote teams often experience delays in receiving feedback on their code changes. This can slow down the development cycle and frustrate team members who are eager to iterate and improve their code.

Increased risk of human error

Complex code reviews conducted remotely are more prone to human oversight and errors. When team members are not physically present to catch each other's mistakes, the risk of introducing bugs or quality issues into the codebase increases.

Emotional stress

Remote work can take a toll on team morale, with feelings of isolation and the pressure to maintain productivity weighing heavily on developers. This emotional stress can negatively impact collaboration and code quality if not properly addressed.

How AI Can Enhance Remote Code Reviews

AI-powered tools are transforming code reviews, helping teams automate repetitive tasks, improve accuracy, and ensure code quality. Let’s explore how AI dives deep into the technical aspects of code reviews and helps developers focus on building robust software.

NLP for Code Comments

Natural Language Processing (NLP) is essential for understanding and interpreting code comments, which often provide critical context:

Tokenization and Parsing

NLP breaks code comments into tokens (individual words or symbols) and parses them to understand the grammatical structure. For example, "This method needs refactoring due to poor performance" would be tokenized into words like ["This", "method", "needs", "refactoring"], and parsed to identify the intent behind the comment.
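
As a rough illustration, here is a minimal tokenizer built only on Python's standard library re module; real NLP pipelines (spaCy, NLTK, or transformer-based models) add part-of-speech tagging and full parsing on top of this step.

import re

def tokenize(comment):
    # Split a review comment into word, number, and punctuation tokens
    return re.findall(r"[A-Za-z_]+|\d+|[^\sA-Za-z_\d]", comment)

print(tokenize("This method needs refactoring due to poor performance"))
# ['This', 'method', 'needs', 'refactoring', 'due', 'to', 'poor', 'performance']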

Sentiment Analysis

Using algorithms like Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks, AI can analyze the tone of code comments. For example, if a reviewer comments, "Great logic, but performance could be optimized," AI might classify it as having a positive sentiment with a constructive critique. This analysis helps distinguish between positive reinforcement and critical feedback, offering insights into reviewer attitudes.
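
As a toy stand-in for the RNN/LSTM classifiers described above, the sketch below scores sentiment with a tiny hand-written word lexicon; the word lists are illustrative assumptions, not a trained model.

POSITIVE = {"great", "good", "clean", "nice", "elegant"}
NEGATIVE = {"slow", "poor", "broken", "confusing", "buggy"}

def sentiment(comment):
    # Normalize words and count matches against each lexicon
    words = {w.strip(",.!?").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Great logic, but performance could be optimized"))  # positive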

Intent Classification

AI models can categorize comments based on intent. For example, comments like "Please optimize this function" can be classified as requests for changes, while "What is the time complexity here?" can be identified as questions. This categorization helps prioritize actions for developers, ensuring important feedback is addressed promptly.
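
A minimal, rule-based sketch of intent classification is shown below; production systems would train a text classifier on labelled review comments rather than rely on keyword matching.

def classify_intent(comment):
    text = comment.lower().strip()
    # Questions: interrogative openers or a trailing question mark
    if text.endswith("?") or text.startswith(("what", "why", "how", "where")):
        return "question"
    # Change requests: imperative keywords commonly seen in reviews
    if any(word in text for word in ("please", "optimize", "fix", "refactor", "rename")):
        return "change_request"
    return "remark"

print(classify_intent("Please optimize this function"))      # change_request
print(classify_intent("What is the time complexity here?"))  # question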

Static Code Analysis

Static code analysis goes beyond syntax checking to identify deeper issues in the code:

Syntax and Semantic Analysis

AI-based static analysis tools not only check for syntax errors but also analyze the semantics of the code. For example, if the tool detects a loop that could potentially cause an infinite loop or identifies an undefined variable, it flags these as high-priority errors. AI tools use machine learning to constantly improve their ability to detect errors in Java, Python, and other languages.
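
To make the idea concrete, here is a minimal semantic check written with Python's built-in ast module: it flags while-True loops that contain no break, one naive signal of a possible infinite loop. Dedicated analyzers apply hundreds of such rules plus data-flow analysis.

import ast

SOURCE = """
def poll():
    while True:
        handle_event()
"""

class InfiniteLoopCheck(ast.NodeVisitor):
    def visit_While(self, node):
        # `while True:` with no break anywhere inside is suspicious
        is_while_true = isinstance(node.test, ast.Constant) and node.test.value is True
        has_break = any(isinstance(n, ast.Break) for n in ast.walk(node))
        if is_while_true and not has_break:
            print(f"line {node.lineno}: possible infinite loop (while True with no break)")
        self.generic_visit(node)

InfiniteLoopCheck().visit(ast.parse(SOURCE))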

Pattern Recognition

AI recognizes coding patterns by learning from vast datasets of codebases. For example, it can detect when developers frequently forget to close file handlers or incorrectly handle exceptions, identifying these as anti-patterns. Over time, AI tools can evolve to suggest better practices and help developers adhere to clean code principles.

Vulnerability Detection

AI, trained on datasets of known vulnerabilities, can identify security risks in the code. For example, tools like Typo or Snyk can scan JavaScript or C++ code and flag potential issues like SQL injection, buffer overflows, or improper handling of user input. These tools improve security audits by automating the identification of security loopholes before code goes into production.

Code Similarity Detection

Finding duplicate or redundant code is crucial for maintaining a clean codebase:

Code Embeddings

Neural networks convert code into embeddings (numerical vectors) that represent the code in a high-dimensional space. For example, two pieces of code that perform the same task but use different syntax would be mapped closely in this space. This allows AI tools to recognize similarities in logic, even if the syntax differs.
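
As a crude stand-in for learned embeddings, the sketch below represents each snippet as a token-frequency vector over a shared vocabulary; real tools use neural encoders that capture semantics far better than raw counts, but the vector-space intuition is the same.

import re
from collections import Counter

def embed(code, vocabulary):
    # Count identifier/keyword occurrences and project them onto the vocabulary
    counts = Counter(re.findall(r"[A-Za-z_]\w*", code))
    return [counts[token] for token in vocabulary]

snippet_a = "total = 0\nfor x in items:\n    total += x"
snippet_b = "result = sum(items)"
vocab = sorted(set(re.findall(r"[A-Za-z_]\w*", snippet_a + " " + snippet_b)))

print(embed(snippet_a, vocab))  # [1, 1, 1, 0, 0, 2, 2]
print(embed(snippet_b, vocab))  # [0, 0, 1, 1, 1, 0, 0]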

Similarity Metrics

AI employs metrics like cosine similarity to compare embeddings and detect redundant code. For example, if two functions across different files are 85% similar based on cosine similarity, AI will flag them for review, allowing developers to refactor and eliminate duplication.
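
The metric itself is easy to compute. Below is a self-contained sketch of cosine similarity over two hypothetical embedding vectors; in practice the vectors would come from a trained code-embedding model rather than being hand-written.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

vec_fn_one = [0.91, 0.10, 0.42]  # hypothetical embedding of function A
vec_fn_two = [0.88, 0.14, 0.45]  # hypothetical embedding of function B

print(round(cosine_similarity(vec_fn_one, vec_fn_two), 3))
# 0.998: a score this close to 1.0 would flag the functions as near-duplicates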

Duplicate Code Detection

Tools like Typo use AI to identify duplicate or near-duplicate code blocks across the codebase. For example, if two modules use nearly identical logic for different purposes, AI can suggest merging them into a reusable function, reducing redundancy and improving maintainability.

Automated Code Suggestions

AI doesn’t just point out problems—it actively suggests solutions:

Generative Models

Models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) can create new code snippets. For example, if a developer writes a function that opens a file but forgets to handle exceptions, an AI tool can generate the missing try-catch block to improve error handling.

Contextual Understanding

AI analyzes code context and suggests relevant modifications. For example, if a developer changes a variable name in one part of the code, AI might suggest updating the same variable name in other related modules to maintain consistency. Tools like GitHub Copilot use models such as GPT to generate code suggestions in real-time based on context, making development faster and more efficient.

Reinforcement Learning for Code Optimization

Reinforcement learning (RL) helps AI continuously optimize code performance:

Reward Functions

In RL, a reward function is defined to evaluate the quality of the code. For example, AI might reward code that reduces runtime by 20% or improves memory efficiency by 30%. The reward function measures not just performance but also readability and maintainability, ensuring a balanced approach to optimization.
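
A reward function like that could be as simple as a weighted sum. The sketch below is purely illustrative; the weights and the measured inputs (runtime, memory, lint score) are assumptions, not a published formula.

def reward(baseline, candidate):
    # Relative improvements over the baseline implementation
    runtime_gain = (baseline["runtime_s"] - candidate["runtime_s"]) / baseline["runtime_s"]
    memory_gain = (baseline["memory_mb"] - candidate["memory_mb"]) / baseline["memory_mb"]
    # Penalize the agent if readability (approximated by a lint score) regresses
    readability_penalty = max(0.0, baseline["lint_score"] - candidate["lint_score"])
    return 0.6 * runtime_gain + 0.3 * memory_gain - 0.1 * readability_penalty

baseline = {"runtime_s": 2.0, "memory_mb": 100.0, "lint_score": 9.0}
candidate = {"runtime_s": 1.6, "memory_mb": 70.0, "lint_score": 8.5}

print(round(reward(baseline, candidate), 2))  # 0.16: faster and leaner, slightly less readable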

Agent Training

Through trial and error, AI agents learn to refactor code to meet specific objectives. For example, an agent might experiment with different ways of parallelizing a loop to improve performance, receiving positive rewards for optimizations and negative rewards for regressions.

Continuous Improvement

The AI’s policy, or strategy, is continuously refined based on past experiences. This allows AI to improve its code optimization capabilities over time. For example, Google’s AlphaCode uses reinforcement learning to compete in coding competitions, showing that AI can autonomously write and optimize highly efficient algorithms.

AI-Assisted Code Review Tools

Modern AI-assisted code review tools offer both rule-based enforcement and machine learning insights:

Rule-Based Systems

These systems enforce strict coding standards. For example, AI tools like ESLint or Pylint enforce coding style guidelines in JavaScript and Python, ensuring developers follow industry best practices such as proper indentation or consistent use of variable names.

Machine Learning Models

AI models can learn from past code reviews, understanding patterns in common feedback. For instance, if a team frequently comments on inefficient data structures, the AI will begin flagging those cases in future code reviews, reducing the need for human intervention.

Hybrid Approaches

Combining rule-based and ML-powered systems, hybrid tools provide a more comprehensive review experience. For example, DeepCode uses a hybrid approach to enforce coding standards while also learning from developer interactions to suggest improvements in real-time. These tools ensure code is not only compliant but also continuously improved based on team dynamics and historical data.

Incorporating AI into code reviews takes your development process to the next level. By automating error detection, analyzing code sentiment, and suggesting optimizations, AI enables your team to focus on what matters most: building high-quality, secure, and scalable software. As these tools continue to learn and improve, the benefits of AI-assisted code reviews will only grow, making them indispensable in modern development environments.

Here’s a table to help you understand these code review techniques at a glance:

Practical Steps to Implement AI-Driven Code Reviews

To effectively integrate AI into your remote team's code review process, consider the following steps:

Evaluate and choose AI tools: Research and evaluate AI-powered code review tools that align with your team's needs and development workflow.

Start with a gradual approach: Use AI tools to support human-led code reviews before gradually automating simpler tasks. This will allow your team to become comfortable with the technology and see its benefits firsthand.

Foster a culture of collaboration: Encourage your team to view AI as a collaborative partner rather than a replacement for human expertise. Emphasize the importance of human oversight, especially for complex issues that require nuanced judgment.

Provide training and resources: Equip your team with the necessary training and resources to use AI code review tools effectively. This includes tutorials, documentation, and opportunities for hands-on practice.

Leveraging Typo to Streamline Remote Code Reviews

Typo is an AI-powered tool designed to streamline the code review process for remote teams. By integrating seamlessly with your existing development tools, Typo makes it easier to manage feedback, improve code quality, and collaborate across time zones.

Some key benefits of using Typo include:

  • AI code analysis
  • Code context understanding
  • Auto debugging with detailed explanations
  • Proprietary models with known frameworks (OWASP)
  • Auto PR fixes

Here's a brief comparison of how Typo differs from other code review tools:

The Human Element: Combining AI and Human Expertise

While AI can significantly enhance the code review process, it's essential to maintain a balance between AI and human expertise. AI is not a replacement for human intuition, creativity, or judgment but rather a supportive tool that augments and empowers developers.

By using AI to handle repetitive tasks and provide real-time feedback, developers can focus on higher-level issues that require human problem-solving skills. This division of labor allows teams to work more efficiently and effectively while still maintaining the human touch that is crucial for complex problem-solving and innovation.

Overcoming Emotional Barriers to AI Integration

Introducing new technologies can sometimes be met with resistance or fear. It's important to address these concerns head-on and help your team understand the benefits of AI integration.

Some common fears—such as job replacement or disruption of established workflows—should be directly addressed. Reassure your team that AI is designed to reduce workload and enhance productivity, not replace human expertise. Foster an environment that embraces new technologies while focusing on the long-term benefits of improved efficiency, collaboration, and job satisfaction.

Elevate Your Code Quality: Embrace AI Solutions

AI-driven code reviews offer a promising solution for remote teams looking to maintain code quality, foster collaboration, and enhance productivity. By embracing AI tools like Typo, you can streamline your code review process, reduce delays, and empower your team to focus on writing great code.

Remember that AI supports and empowers your team—it does not replace human expertise. Explore and experiment with AI code review tools in your teams, and watch as your remote collaboration reaches new heights of efficiency and success.


Tutorials


What are Git Bash Commands?

For developers working in Windows environments, Git Bash offers a powerful bridge between the Unix command line world and Windows operating systems. This guide will walk you through essential Git Bash commands, practical workflows, and time-saving techniques that will transform how you interact with your code repositories.

Understanding Git Bash and Its Role in Development

Git Bash serves as a command-line terminal for Windows users that combines Git functionality with the Unix Bash shell environment. Unlike the standard Windows Command Prompt, Git Bash provides access to both Git commands and Unix utilities, creating a consistent environment across different operating systems.

At its core, Git Bash offers:

  • A Unix-style command-line interface in Windows
  • Integrated Git version control commands
  • Access to common Unix tools and utilities
  • Support for shell scripting and automation
  • Consistent terminal experience across platforms

For Windows developers, Git Bash eliminates the barrier between operating systems, providing the same powerful command-line tools that macOS and Linux users enjoy. Rather than switching contexts between different command interfaces, Git Bash creates a unified experience.

Setting Up Your Git Bash Environment

Before diving into commands, let's ensure your Git Bash environment is properly configured.

Installation Steps

  1. Download Git for Windows from the official Git website
  2. During installation, accept the default options unless you have specific preferences
  3. Ensure "Git Bash" is selected as a component to install
  4. Complete the installation and launch Git Bash from the Start menu

First-Time Configuration

When using Git for the first time, set up your identity:

# Set your username
git config --global user.name "Your Name"

# Set your email
git config --global user.email "youremail@example.com"

# Verify your settings
git config --list


Customizing Your Terminal

Make Git Bash your own with these customizations:

# Enable colorful output
git config --global color.ui auto

# Set your preferred text editor
git config --global core.editor "code --wait"  # For VS Code


For a more informative prompt, create or edit your .bash_profile file to show your current branch:

# Add this to your .bash_profile
parse_git_branch() {
    git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
}
export PS1="\[\033[36m\]\u\[\033[m\]@\[\033[32m\]\h:\[\033[33;1m\]\w\[\033[m\]\[\033[32m\]\$(parse_git_branch)\[\033[m\]$ "


Essential Navigation and File Operations

Git Bash's power begins with basic file system navigation and management.

Directory Navigation

# Show current directory
pwd

# List files and directories
ls
ls -la  # Show hidden files and details

# Change directory
cd project-folder
cd ..   # Go up one level
cd ~    # Go to home directory
cd /c/  # Access C: drive


File Management

# Create a new directory
mkdir new-project

# Create a new file
touch README.md

# Copy files
cp original.txt copy.txt
cp -r source-folder/ destination-folder/  # Copy directory

# Move or rename files
mv oldname.txt newname.txt
mv file.txt /path/to/destination/

# Delete files and directories
rm unwanted.txt
rm -rf old-directory/  # Be careful with this!


Reading and Searching File Content

# View file content
cat config.json

# View file with pagination
less large-file.log

# Search for text in files
grep "function" *.js
grep -r "TODO" .  # Search recursively in current directory


Repository Management Commands

These commands form the foundation of Git operations in your daily workflow.

Creating and Cloning Repositories

# Initialize a new repository
git init

# Clone an existing repository
git clone https://github.com/username/repository.git

# Clone to a specific folder
git clone https://github.com/username/repository.git custom-folder-name


Tracking Changes

# Check repository status
git status

# Add files to staging area
git add filename.txt       # Add specific file
git add .                  # Add all changes
git add *.js               # Add all JavaScript files
git add src/               # Add entire directory

# Commit changes
git commit -m "Add user authentication feature"

# Amend the last commit
git commit --amend -m "Updated message"


Viewing History

# View commit history
git log

# Compact view of history
git log --oneline

# Graph view with branches
git log --graph --oneline --decorate

# View changes in a commit
git show commit-hash

# View changes between commits
git diff commit1..commit2


Mastering Branches with Git Bash

Branching is where Git's power truly shines, allowing parallel development streams.

Branch Management

# List all branches
git branch               # Local branches
git branch -r            # Remote branches
git branch -a            # All branches

# Create a new branch
git branch feature-login

# Create and switch to a new branch
git checkout -b feature-payment

# Switch branches
git checkout main

# Rename a branch
git branch -m old-name new-name

# Delete a branch
git branch -d feature-complete
git branch -D feature-broken  # Force delete


Merging and Rebasing

# Merge a branch into current branch
git merge feature-complete

# Merge with no fast-forward (creates a merge commit)
git merge --no-ff feature-login

# Rebase current branch onto another
git rebase main

# Interactive rebase to clean up commits
git rebase -i HEAD~5


Remote Repository Interactions

Connect your local work with remote repositories for collaboration.

Managing Remotes

# List remote repositories
git remote -v

# Add a remote
git remote add origin https://github.com/username/repo.git

# Change remote URL
git remote set-url origin https://github.com/username/new-repo.git

# Remove a remote
git remote remove upstream


Syncing with Remotes

# Download changes without merging
git fetch origin

# Download and merge changes
git pull origin main

# Upload local changes
git push origin feature-branch

# Set up branch tracking
git branch --set-upstream-to=origin/main main


Time-Saving Command Shortcuts

Save precious keystrokes with Git aliases and Bash shortcuts.

Git Aliases

Add these to your .gitconfig file:

[alias]
    # Status, add, and commit shortcuts
    s = status
    a = add
    aa = add --all
    c = commit -m
    ca = commit --amend
    
    # Branch operations
    b = branch
    co = checkout
    cob = checkout -b
    
    # History viewing
    l = log --oneline --graph --decorate --all
    ld = log --pretty=format:"%C(yellow)%h%Cred%d\\ %Creset%s%Cblue\\ [%cn]" --decorate
    
    # Useful combinations
    save = !git add --all && git commit -m 'SAVEPOINT'
    undo = reset HEAD~1 --mixed
    wipe = !git add --all && git commit -qm 'WIPE SAVEPOINT' && git reset HEAD~1 --hard


Bash Aliases for Git

Add these to your .bash_profile or .bashrc:

# Quick status check
alias gs='git status'

# Branch management
alias gb='git branch'
alias gba='git branch -a'
alias gbd='git branch -d'

# Checkout shortcuts
alias gco='git checkout'
alias gcb='git checkout -b'
alias gcm='git checkout main'

# Pull and push simplified
alias gpl='git pull'
alias gps='git push'
alias gpom='git push origin main'

# Log visualization
alias glog='git log --oneline --graph --decorate'
alias gloga='git log --oneline --graph --decorate --all'


Advanced Command Line Techniques

Level up your Git Bash skills with these powerful techniques.

Temporary Work Storage with Stash

# Save changes temporarily
git stash

# Save with a description
git stash push -m "Work in progress for feature X"

# List all stashes
git stash list

# Apply most recent stash
git stash apply

# Apply specific stash
git stash apply stash@{2}

# Apply and remove from stash list
git stash pop

# Remove a stash
git stash drop stash@{0}

# Clear all stashes
git stash clear


Finding Information

# Search commit messages
git log --grep="bug fix"

# Find who changed a line
git blame filename.js

# Find when a function was added/removed
git log -L :functionName:filename.js

# Find branches containing a commit
git branch --contains commit-hash

# Find all commits that modified a file
git log -- filename.txt


Advanced History Manipulation

# Cherry-pick a commit
git cherry-pick commit-hash

# Revert a commit
git revert commit-hash

# Interactive rebase for cleanup
git rebase -i HEAD~5

# View reflog (history of HEAD changes)
git reflog

# Reset to a previous state
git reset --soft HEAD~3  # Keep changes staged
git reset --mixed HEAD~3  # Keep changes unstaged
git reset --hard HEAD~3  # Discard changes (careful!)

Problem-Solving with Git Bash

Git Bash excels at solving common Git predicaments.

Fixing Commit Mistakes

# Forgot to add a file to commit
git add forgotten-file.txt
git commit --amend --no-edit

# Committed to wrong branch
git branch correct-branch  # Create the right branch
git reset HEAD~ --soft     # Undo the commit but keep changes
git stash                  # Stash the changes
git checkout correct-branch
git stash pop              # Apply changes to correct branch
git add .                  # Stage changes
git commit -m "Commit message"  # Commit to correct branch


Resolving Merge Conflicts

# When merge conflict occurs
git status  # Check which files have conflicts

# After manually resolving conflicts
git add resolved-file.txt
git commit  # Completes the merge


For more complex conflicts:

# Use merge tool
git mergetool

# Abort a problematic merge
git merge --abort


Recovering Lost Work

# Find deleted commits with reflog
git reflog

# Restore lost commit
git checkout commit-hash

# Create branch from detached HEAD
git checkout -b recovery-branch


When Command Line Beats GUI Tools

While graphical Git clients are convenient, Git Bash provides superior capabilities in several scenarios:

Complex Operations

Scenario: Cleanup branches after sprint completion

GUI approach: Manually select and delete each branch - tedious and error-prone.

Git Bash solution:

# Delete all local branches that have been merged to main
git checkout main
git branch --merged | grep -v "main" | xargs git branch -d


Search and Analysis

Scenario: Find who introduced a bug and when

GUI approach: Scroll through commit history hoping to spot the culprit.

Git Bash solution:

# Find when a line was changed
git blame -L15,25 problematic-file.js

# Find commits mentioning the feature
git log --grep="feature name"

# Find commits that changed specific functions
git log -p -S "functionName"


Automation Workflows

Scenario: Standardize commit formatting for team

GUI approach: Distribute written guidelines and hope team follows them.

Git Bash solution:

# Set up a commit template
git config --global commit.template ~/.gitmessage

# Create ~/.gitmessage with your template
# Then add a pre-commit hook to enforce standards


These examples demonstrate how Git Bash can handle complex scenarios more efficiently than GUI tools, especially for batch operations, deep repository analysis, and customized workflows.

Frequently Asked Questions

How does Git Bash differ from Windows Command Prompt?

Git Bash provides a Unix-like shell environment on Windows, including Bash commands (like grep, ls, and cd) that work differently from their CMD equivalents. It also comes pre-loaded with Git commands and supports Unix-style paths using forward slashes, making it more consistent with macOS and Linux environments.

Do I need Git Bash if I use a Git GUI client?

While GUI clients are user-friendly, Git Bash offers powerful capabilities for complex operations, scripting, and automation that most GUIs can't match. Even if you primarily use a GUI, learning Git Bash gives you a fallback for situations where the GUI is insufficient or unavailable.

How do I install Git Bash on different operating systems?

Windows: Download Git for Windows from git-scm.com, which includes Git Bash.

macOS: Git Bash isn't necessary since macOS already has a Unix-based Terminal. Install Git via Homebrew with brew install git.

Linux: Similarly, Linux distributions have native Bash terminals. Install Git with your package manager (e.g., apt-get install git for Ubuntu).

Is Git Bash only for Git operations?

No! Git Bash provides a full Bash shell environment. You can use it for any command-line tasks, including file management, text processing, and running scripts—even in projects that don't use Git.

How can I make Git Bash remember my credentials?

Set up credential storage with:

# Cache credentials for 15 minutes
git config --global credential.helper cache

# Store credentials permanently
git config --global credential.helper store

# Use Windows credential manager
git config --global credential.helper wincred


Can I use Git Bash for multiple GitHub/GitLab accounts?

Yes, you can set up SSH keys for different accounts and create a config file to specify which key to use for which repository. This allows you to manage multiple accounts without constant credential switching.
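
One common approach, shown here with hypothetical key names, is to give each account its own Host alias in ~/.ssh/config and clone through that alias:

# ~/.ssh/config
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_work

Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_personal

Then clone with git clone git@github-work:your-org/repo.git, and SSH will pick the matching key automatically.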

By mastering Git Bash commands, you'll gain powerful tools that extend far beyond basic version control. The command line gives you precision, automation, and deep insight into your repositories that point-and-click interfaces simply can't match. Start with the basics, gradually incorporate more advanced commands, and soon you'll find Git Bash becoming an indispensable part of your development workflow.

Whether you're resolving complex merge conflicts, automating repetitive tasks, or diving deep into your project's history, Git Bash provides the tools you need to work efficiently and effectively. Embrace the command line, and watch your productivity soar.


AI Engineer vs. Software Engineer: How They Compare

Software engineering is a vast field, so much so that most people outside the tech world don’t realize just how many roles exist within it. 

To them, software development is just about "coding," and they may not even know that roles like Quality Assurance (QA) testers exist. DevOps might as well be science fiction to the non-technical crowd. 

One such specialized niche within software engineering is artificial intelligence (AI). However, an AI engineer isn’t just a developer who uses AI tools to write code. AI engineering is a discipline of its own, requiring expertise in machine learning, data science, and algorithm optimization. 

In this post, we give you a detailed comparison. 

Who is an AI engineer? 

An AI engineer specializes in designing, building, and optimizing artificial intelligence systems. Their work revolves around machine learning models, neural networks, and data-driven algorithms. 

Unlike traditional developers, AI engineers focus on training models to learn from vast datasets and make predictions or decisions without explicit programming. 

For example, an AI engineer building a skin analysis tool for a beauty app would train a model on thousands of skin images. The model would then identify skin conditions and recommend personalized products. 

This role demands expertise in data science, mathematics, and more importantly—expertise in the industry. AI engineers don’t just write code—they enable machines to learn, reason, and improve over time. 

Who is a software engineer? 

A software engineer designs, develops, and maintains applications, systems, and platforms. Their expertise lies in programming, algorithms, and system architecture. 

Unlike AI engineers, who focus on training models, software engineers build the infrastructure that powers software applications. 

They work with languages like JavaScript, Python, and Java to create web apps, mobile apps, and enterprise systems. 

For example, a software engineer working on an eCommerce mobile app ensures that customers can browse products, add items to their cart, and complete transactions seamlessly. They integrate APIs, optimize database queries, and handle authentication systems. 

While some software engineers may use AI models in their applications, they don’t typically build or train them. Their primary role is to develop functional, efficient, and user-friendly software solutions. 

Difference between AI engineer and software engineer 

Now that you have a gist of who they are, let’s understand how these roles differ. While both require programming expertise, their focus, skill set, and day-to-day tasks set them apart. 

1. Focus area 

Software engineers work on designing, building, testing, and maintaining software applications across various industries. Their role is broad, covering everything from front-end and back-end development to cloud infrastructure and database management. They build web platforms, mobile apps, enterprise systems, and more. 

AI engineers, however, specialize in creating intelligent systems that learn from data. Their focus is on building machine learning models, fine-tuning algorithms, and optimizing AI-powered solutions. Rather than developing entire applications, they work on AI components like recommendation engines, chatbots, and computer vision systems. 

2. Required skills 

AI engineers need a deep understanding of machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn. They must be proficient in data science, statistics, and probability. Their role also demands expertise in neural networks, deep learning architectures, and data visualization. Strong mathematical skills are essential. 

Software engineers, on the other hand, require a broader programming skill set. They must be proficient in languages like Python, Java, C++, or JavaScript. Their expertise lies in system architecture, object-oriented programming, database management, and API integration. Unlike AI engineers, they do not need in-depth knowledge of machine learning models. 

3. Lifecycle differences 

Software engineering follows a structured development lifecycle: requirement analysis, design, coding, testing, deployment, and maintenance. 

AI development, however, starts with data collection and preprocessing, as models require vast amounts of structured data to learn. Instead of traditional coding, AI engineers focus on selecting algorithms, training models, and fine-tuning hyperparameters. 

Evaluation is iterative—models must be tested against new data, adjusted, and retrained for accuracy. Deployment involves integrating models into applications while monitoring for drift (when models become less effective over time). 

Unlike traditional software, which works deterministically based on logic, AI systems evolve. Continuous updates and retraining are essential to maintain accuracy. This makes AI development more experimental and iterative than traditional software engineering. 
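
As a minimal illustration of drift monitoring, the sketch below assumes the team logs model accuracy over time and flags retraining when recent accuracy falls well below the level measured at deployment; the threshold and numbers are made up for the example.

DEPLOYMENT_ACCURACY = 0.91  # hypothetical baseline captured at release
DRIFT_TOLERANCE = 0.05      # assumed acceptable drop before retraining

def needs_retraining(recent_accuracies):
    recent = sum(recent_accuracies) / len(recent_accuracies)
    return (DEPLOYMENT_ACCURACY - recent) > DRIFT_TOLERANCE

print(needs_retraining([0.90, 0.88, 0.83, 0.81]))  # True: accuracy has drifted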

4. Tools and technologies 

AI engineers use specialized tools designed for machine learning and data analysis. They work with frameworks like TensorFlow, PyTorch, and Scikit-learn to build and train models. They also use data visualization platforms such as Tableau and Power BI to analyze patterns. Statistical tools like MATLAB and R help with modeling and prediction. Additionally, they rely on cloud-based AI services like Google Vertex AI and AWS SageMaker for model deployment. 

Software engineers use more general-purpose tools for coding, debugging, and deployment. They work with IDEs like Visual Studio Code, JetBrains, and Eclipse. They manage databases with MySQL, PostgreSQL, or MongoDB. For version control, they use GitHub or GitLab. Cloud platforms like AWS, Azure, and Google Cloud are essential for hosting and scaling applications. 

5. Collaboration patterns 

AI engineers collaborate closely with data scientists, who provide insights and help refine models. They also work with domain experts to ensure AI solutions align with business needs. AI projects often require coordination with DevOps engineers to deploy models efficiently. 

Software engineers typically collaborate with other developers, UX designers, product managers, and business stakeholders. Their goal is to create a better experience. They engage with QA engineers for testing and security teams to ensure robust applications. 

6. Problem approach 

AI engineers focus on making systems learn from data and improve over time. Their solutions involve probabilities, pattern recognition, and adaptive decision-making. AI models can evolve as they receive more data. 

Software engineers build deterministic systems that follow explicit logic. They design algorithms, write structured code, and ensure the software meets predefined requirements without changing behavior over time unless manually updated. 

Is AI going to replace software engineers? 

If you’re comparing AI engineers and software engineers, chances are you’ve also wondered—will AI replace software engineers? The short answer is no. 

AI is making software delivery more effective and efficient. Large language models can generate code, automate testing, and assist with debugging. Some believe this will make software engineers obsolete, just like past predictions about no-code platforms and automated tools. But history tells a different story. 

For decades, people have claimed that programmers would become unnecessary. From code generation tools in the 1990s to frameworks like Rails and Django, every breakthrough was expected to eliminate the need for engineers. Yet, demand for software engineers has only increased. 

The reality is that the world still needs more software, not less. Businesses struggle with outdated systems and inefficiencies. AI can help write code, but it can’t replace critical thinking, problem-solving, or system design. 

Instead of replacing software engineers, AI will make their work more productive, efficient, and valuable. 

Conclusion 

With advancements in AI, the focus for software engineering teams should be on improving the quality of their outputs while achieving efficiency. 

AI is not here to replace engineers but to enhance their capabilities—automating repetitive tasks, optimizing workflows, and enabling smarter decision-making. The challenge now is not just writing code but delivering high-quality software faster and more effectively. 

This is where Typo comes in. With AI-powered SDLC insights, automated code reviews, and business-aligned investments, it streamlines the development process. It helps engineering teams ensure that the efforts are focused on what truly matters—delivering impactful software solutions. 


Code Rot: What It Is and How to Identify It

Code rot, also known as software rot, refers to the gradual deterioration of code quality over time. 

The term was more common in the early days of software engineering but is now often grouped under technical debt. 

Research published on ResearchGate has found that maintenance consumes 40-80% of a software project’s total cost, much of it due to code rot. 

In this blog, we’ll explore its types, causes, consequences, and how to prevent it. 

What is Code Rot? 

Code rot occurs when software degrades over time, becoming harder to maintain, modify, or scale. This happens due to accumulating inefficiencies and poor design decisions. Code that isn’t updated regularly is especially prone to it. As these inefficiencies pile up, developers face increased bugs, longer development cycles, and higher maintenance costs. 

Types of Code Rot 

  1. Active Code Rot: This happens when frequent changes increase complexity, which makes the codebase harder to manage. Poorly implemented features, inconsistent coding styles, and rushed fixes also contribute to this. 
  2. Dormant Code Rot: Occurs when unused or outdated code remains in the system, leading to confusion and potential security risks. 

Let’s say you’re building an eCommerce platform where each update introduces duplicate logic. This will create an unstructured and tangled codebase, which is a form of active code rot. 

The same platform also has a legacy API integration. If it is no longer in use but still exists in the codebase, it causes unnecessary dependencies and maintenance overhead. This is a form of dormant code rot. 

Note that both types increase technical debt, slowing down future development. 

What Are the Causes of Code Rot? 

The uncomfortable truth is that even your best code is actively decaying right now. And your development practices are probably accelerating its demise. 

Here are some common causes of code rot: 

1. Lack of Regular Maintenance 

Code that isn’t actively maintained tends to decay. Unpatched dependencies, minor bugs, or problematic sections that aren’t refactored — these small inefficiencies compound into major problems. Unmaintained code becomes outdated and difficult to work with.

2. Poor Documentation 

Without proper documentation, developers struggle to understand original design decisions. Over time, outdated or missing documentation leads to incorrect assumptions and unnecessary workarounds. This lack of context results in code that becomes increasingly fragile and difficult to modify. 

3. Technical Debt Accumulation 

Quick fixes and rushed implementations create technical debt. While shortcuts may be necessary in the short term, they result in complex, fragile code that requires increasing effort to maintain. If left unaddressed, technical debt compounds, making future development error-prone. 

4. Inconsistent Coding Standards 

A lack of uniform coding practices leads to a patchwork of different styles, patterns, and architectures. This inconsistency makes the codebase harder to read and debug, which increases the risk of defects. 

5. Changing Requirements Without Refactoring 

Adapting code to new business requirements without refactoring leads to convoluted logic. Instead of restructuring for maintainability, developers often bolt on new functionality, which brings unnecessary complexity. Over time, this results in an unmanageable codebase. 

What Are the Symptoms of Code Rot? 

If your development team is constantly struggling with unexpected bugs, slow feature development, or unclear logic, your code might be rotting. 

Recognizing these early symptoms can help prevent long-term damage. 

  • Increasing Bug Frequency: Fixing one bug introduces new ones, indicating fragile and overly complex code. 
  • Slower Development Cycles: New features take longer to implement due to tangled dependencies and unclear logic. 
  • High Onboarding Time for New Developers: New team members struggle to understand the codebase due to poor documentation and inconsistent structures. 
  • Frequent Workarounds: Developers avoid touching certain parts of the code, relying on hacks instead of proper fixes. 
  • Performance Degradation: As the codebase grows, the system becomes slower and less efficient, often due to redundant or inefficient code paths. 

What is the Impact of Code Rot? 

Code rot doesn’t just make development frustrating—it has tangible consequences that affect productivity, costs, and business performance. 

Left unchecked, it can even lead to system failures. Here’s how code rot impacts different aspects of software development: 

1. Increased Maintenance Costs 

As code becomes more difficult to modify, even small changes require more effort. Developers spend more time debugging and troubleshooting rather than building new features. Over time, maintenance costs can surpass the original development costs. 

2. Reduced Developer Productivity 

A messy, inconsistent codebase forces developers to work around issues instead of solving problems efficiently. Poorly structured code increases cognitive load, leading to slower progress and higher turnover rates in development teams. 

3. Higher Risk of System Failures 

Unstable, outdated, or overly complex code increases the risk of crashes, data corruption, and security vulnerabilities. A single unpatched dependency or fragile module can bring down an entire application. 

4. Slower Feature Delivery

With a decaying codebase, adding new functionality becomes a challenge. Developers must navigate and untangle existing complexities, slowing down innovation and making it harder to stay agile. It only increases software delivery risks. 

5. Poor User Experience 

Code rot can lead to performance issues and inconsistent behavior in production. Users may experience slower load times, unresponsive interfaces, or frequent crashes, all of which negatively impact customer satisfaction and retention. Ignoring code rot directly impacts business success. 

How to Fix Code Rot? 

Code rot is inevitable, but it can be managed and reversed with proactive strategies. Addressing it requires a combination of better coding practices. Here’s how to fix code rot effectively: 

1. Perform Regular Code Reviews

Frequent code reviews help catch issues early, ensuring that poor coding practices don’t accumulate. Encourage team-wide adherence to clean code principles, and use automated tools to detect code smells and inefficiencies. 

2. Refactor Incrementally 

Instead of attempting a full system rewrite, adopt a continuous refactoring approach. Identify problematic areas and improve them gradually while implementing new features. This prevents disruption while steadily improving the codebase. 

3. Keep Dependencies Up to Date 

Outdated libraries and frameworks can introduce security risks and compatibility issues. Regularly update dependencies and remove unused packages to keep the codebase lean and maintainable. 

4. Standardize Coding Practices

Enforce consistent coding styles, naming conventions, and architectural patterns across the team. Use linters and formatting tools to maintain uniformity, reducing confusion and technical debt accumulation. 

5. Improve Documentation

Well-documented code is easier to maintain and modify. Ensure that function descriptions, API references, and architectural decisions are clearly documented so future developers can understand and extend the code without unnecessary guesswork. 

6. Automate Testing

A robust test suite prevents regressions and helps maintain code quality. Implement unit, integration, and end-to-end tests to catch issues early, ensuring new changes don’t introduce hidden bugs. 

7. Allocate Time for Maintenance

Allocate engineering resources and dedicated time for refactoring and maintenance in each sprint. Technical debt should be addressed alongside feature development to prevent long-term decay. 

8. Track Code Quality Metrics 

Track engineering metrics like code complexity, duplication, cyclomatic complexity, and maintainability index to assess code health. Tools like Typo can help identify problem areas before they spiral into code rot. 
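
For a sense of what such a metric looks like, here is a naive cyclomatic-complexity estimate using Python's ast module: start at 1 and add 1 for each branching construct. Dedicated analyzers compute this far more rigorously across whole repositories.

import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

SOURCE = """
def ship(order):
    if order.paid and order.in_stock:
        for item in order.items:
            dispatch(item)
    else:
        raise ValueError("cannot ship")
"""

print(cyclomatic_complexity(SOURCE))  # 4: the if, the 'and', the for, plus 1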

By implementing these strategies, teams can reduce code rot and maintain a scalable and sustainable codebase. 

Conclusion 

Code rot is an unavoidable challenge, but proactive maintenance, refactoring, and standardization can keep it under control. Ignoring it leads to higher costs, slower development, and poor user experience. 

To effectively track and prevent code rot, you can use engineering analytics platforms like Typo, which provide insights into code quality and team productivity. 

Start optimizing your codebase with Typo today!


Product Updates


Typo is now SOC 2 Type II compliant

We are pleased to announce that Typo has successfully achieved SOC 2 Type II certification, a significant milestone in our ongoing commitment to security excellence and data protection. This certification reflects our dedication to implementing and maintaining the highest standards of security controls to protect our customers' valuable development data.

Understanding SOC 2 Type II Certification

SOC 2 (Service Organization Control 2) is a framework developed by the American Institute of Certified Public Accountants (AICPA) that establishes comprehensive standards for managing customer data based on five "trust service criteria": security, availability, processing integrity, confidentiality, and privacy.

The distinction between Type I and Type II certification is substantial. While Type I examines whether a company's security controls are suitably designed at a specific point in time, Type II requires a more rigorous evaluation of these controls over an extended period—typically 6-12 months. This provides a more thorough verification that our security practices are not only well-designed but consistently operational.

Why SOC 2 Type II Matters for Typo Customers

For organizations relying on Typo's software engineering intelligence platform, this certification delivers several meaningful benefits:

  • Independently Verified Security: Our security controls have been thoroughly examined by independent auditors who have confirmed their consistent effectiveness over time.
  • Proactive Risk Management: Our systematic approach to identifying and addressing potential security vulnerabilities helps protect your development data from emerging threats.
  • Simplified Compliance: Working with certified vendors like Typo can streamline your organization's own compliance efforts, particularly important for teams operating in regulated industries.
  • Enhanced Trust: In today's security-conscious environment, partnering with SOC 2 Type II certified vendors demonstrates your commitment to protecting sensitive information.

What This Means for You

The SOC 2 Type II report represents a comprehensive assessment of Typo's security infrastructure and practices. This independent verification covers several critical dimensions of our security program:

  • Infrastructure and Application Security: Our certification validates the robustness of our technical architecture, from our development practices to our cloud infrastructure security. The connections between our analytics tools and your development environment are secured through enterprise-grade protections that have been independently verified.
  • Comprehensive Risk Management: The report confirms our methodical approach to assessing, prioritizing, and mitigating security risks. This includes our vulnerability management program, regularly scheduled penetration testing, and systematic processes for addressing emerging threats in the security landscape.
  • Security Governance and Team Readiness: Beyond technical controls, the certification evaluates our organizational security culture, from our hiring practices to our security awareness program. This ensures that everyone at Typo understands their responsibilities in safeguarding customer data.
  • Operational Security Controls: The certification verifies our day-to-day security operations, including access management protocols, data encryption standards, network security measures, and monitoring systems that protect your development analytics data.

Our Certification Journey

Achieving SOC 2 Type II certification required a comprehensive effort across our organization and consisted of several key phases:

Preparation and Gap Analysis

We began with a thorough assessment of our existing security controls against SOC 2 requirements, identifying areas for enhancement. This systematic gap analysis was essential for establishing a clear roadmap toward certification, particularly regarding our integration capabilities that connect with customers' sensitive development environments.

Implementation of Controls

Based on our assessment findings, we implemented enhanced security measures across multiple domains:

  • Information Security: We strengthened our policies and procedures to ensure comprehensive protection of customer data throughout its lifecycle.
  • Access Management: We implemented rigorous access controls following the principle of least privilege, ensuring appropriate access limitations across our systems.
  • Risk Assessment: We established formal, documented processes for regular risk assessments and vulnerability management.
  • Change Management: We developed structured protocols to manage system changes while maintaining security integrity.
  • Incident Response: We refined our procedures for detecting, responding to, and recovering from potential security incidents.
  • Vendor Management: We enhanced our due diligence processes for evaluating and monitoring third-party vendors that support our operations.

Continuous Monitoring

A distinguishing feature of Type II certification is the requirement to demonstrate consistent adherence to security controls over time. This necessitated implementing robust monitoring systems and conducting regular internal audits to ensure sustained compliance with SOC 2 standards.

Independent Audit

The final phase involved a thorough examination by an independent CPA firm, which conducted a comprehensive assessment of our security controls and their operational effectiveness over the specified period. Their verification confirmed our adherence to the rigorous standards required for SOC 2 Type II certification.

How to Request Our SOC 2 Report

We understand that many organizations need to review our security practices as part of their vendor assessment process. To request our SOC 2 Type II report:

  • Please email hello@typoapp.io with "SOC 2 Report Request" in the subject line
  • Include your organization name and primary contact information
  • Specify whether you are a current customer or evaluating Typo for potential implementation
  • Note any specific security concerns or areas of particular interest regarding our practices

Our team will respond within two business days with next steps, which may include a standard non-disclosure agreement to protect the confidential information contained in the report.

The comprehensive report provides detailed information about our control environment, risk assessment methodologies, control activities, information and communication systems, and monitoring procedures—all independently evaluated by third-party auditors.

Looking Forward: Our Ongoing Commitment

While achieving SOC 2 Type II certification marks an important milestone, we recognize that security is a continuous journey rather than a destination. As the threat landscape evolves, so too must our security practices.

Our ongoing security initiatives include:

  • Conducting regular security assessments and penetration testing
  • Expanding our security awareness program for all team members
  • Enhancing our monitoring capabilities and alert systems
  • Maintaining transparent communication regarding our security practices

These efforts underscore our enduring commitment to protecting the development data our customers entrust to us.

Conclusion

At Typo, we believe that robust security is foundational to delivering effective developer analytics that engineering teams can confidently rely upon. Our SOC 2 Type II certification demonstrates our commitment to protecting your valuable data while providing the insights your development teams need to excel.

By choosing Typo, organizations gain not only powerful development analytics but also a partner dedicated to maintaining the highest standards of security and compliance—particularly important for teams operating in regulated environments with stringent requirements.

We appreciate the trust our customers place in us and remain committed to maintaining and enhancing the security controls that protect your development data. If you have questions about our security practices or SOC 2 certification, please contact us at hello@typoapp.io.

AI-Powered PR Summary for Efficient Code Reviews

Tired of code reviews disrupting your workflow? As developers know, pull request reviews are crucial for software quality, but they often lead to context switching and time-consuming interruptions. That's why Typo is excited to announce powerful new features designed to empower reviewers: AI-Generated PR Summaries with Estimated Time to Review Labels. These features are built to minimize interruptions, save time, and ultimately make your life as a reviewer significantly easier.

1. Take Control of Your Schedule with Estimated Time to Review Labels

Imagine knowing exactly how much time a pull request (PR) will take to review. No more guessing, no more unexpected time sinks. Typo's Estimated Time to Review Labels provide a clear, data-driven estimate of the review effort required.

How It Works:

  • Intelligent Analysis: Typo analyzes code changes, file complexity, and the number of lines modified to calculate an estimated review time (a simplified sketch follows this list).
  • Clear Labels: The tool automatically assigns labels like "Quick Review (Under 5 minutes)," "Moderate Review (5-15 minutes)," or "In-Depth Review (15+ minutes)."
  • Strategic Prioritization: Reviewers can use these labels to prioritize PRs based on their available time, ensuring they stay focused on their current tasks.
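
To make the idea concrete, here is a minimal sketch of how a review-time label could be derived from diff statistics alone. The weights, thresholds, and helper names below are illustrative assumptions, not Typo's production model, which also factors in file complexity.

```python
# Illustrative heuristic only: label a PR by estimated review effort.
# Weights and thresholds are assumptions, not Typo's actual model.

def estimate_review_minutes(files_changed: int, lines_added: int, lines_deleted: int) -> float:
    """Rough review-time estimate from diff size (assumed heuristic)."""
    churn = lines_added + lines_deleted
    # Assume ~0.05 minutes per changed line plus ~1 minute of overhead per file.
    return 0.05 * churn + 1.0 * files_changed

def review_label(minutes: float) -> str:
    """Map an estimate to the labels described above."""
    if minutes < 5:
        return "Quick Review (Under 5 minutes)"
    if minutes <= 15:
        return "Moderate Review (5-15 minutes)"
    return "In-Depth Review (15+ minutes)"

# Example: a 3-file PR with 120 added and 40 deleted lines lands at ~11 minutes.
print(review_label(estimate_review_minutes(3, 120, 40)))  # Moderate Review (5-15 minutes)
```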

Benefits:

  • Minimize Interruptions: Easily defer in-depth reviews until you have dedicated time, avoiding context switching.
  • Optimize Workflow: Prioritize quick reviews to clear backlogs and maintain a smooth development pipeline.
  • Improve Time Management: Gain a clear understanding of the time commitment required for each review.

2. Accelerate Approvals with AI-Generated PR Summaries

Time is a precious commodity for developers. Typo's AI-Generated PR Summaries provide a concise and insightful overview of code changes, allowing reviewers to quickly grasp the key modifications without wading through every line of code.

How It Works:

  • AI-Driven Analysis: Typo's advanced algorithms analyze code diffs, commit messages, and associated issues (see the sketch after this list).
  • Concise Summaries: The AI generates a clear summary highlighting the purpose and impact of the changes.
  • Rapid Understanding: Reviewers can quickly understand the context and make informed decisions.
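
To make this concrete, here is a minimal sketch of how a diff and its commit messages could be condensed into a reviewer-facing summary with a general-purpose LLM API. The prompt, model choice, and the summarize_pr helper are illustrative assumptions, not Typo's production pipeline.

```python
# Illustrative sketch only: summarize a pull request with an LLM.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model and prompt are placeholders, not Typo's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_pr(diff_text: str, commit_messages: list[str]) -> str:
    """Return a short, reviewer-facing summary of a pull request."""
    prompt = (
        "Summarize the purpose and impact of this pull request in 3-5 "
        "bullet points for a code reviewer.\n\n"
        "Commit messages:\n- " + "\n- ".join(commit_messages) + "\n\n"
        "Diff:\n" + diff_text[:20000]  # truncate very large diffs
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model for the sketch
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```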

Benefits:

  • Faster Review Cycles: Quickly grasp the essence of PRs and accelerate the approval process.
  • Enhanced Efficiency: Save valuable time by avoiding manual code inspection for every change.
  • Improved Focus: Quickly understand the changes, and get back to your own work.

Typo: Empowering Reviewers, Boosting Productivity

These two features work together to create a more efficient and less disruptive code review process. By providing time estimates and AI-powered summaries, Typo empowers reviewers to:

  • Maintain focus on their primary tasks.
  • Save valuable time and reduce context switching.
  • Accelerate the code review process.
  • Increase developer velocity.

Key Takeaways:

Typo helps developers maintain focus and save time, even when faced with incoming PR reviews.

  • Estimated Time to Review Labels provide valuable insights into review effort, enabling better time management.
  • AI-Generated PR Summaries accelerate approvals by providing concise overviews of code changes.

Ready to transform your code review workflow?

Try Typo today and experience the benefits of AI-powered time estimates and summaries. Streamline your processes, boost productivity, and empower your development team.

Typo Launches groCTO: Community to Empower Engineering Leaders

In an ever-evolving tech world, organizations need to innovate quickly while maintaining high standards of quality and performance. The key to achieving these goals is empowering engineering leaders with the right tools and technologies.

About Typo

Typo is a software intelligence platform that optimizes software delivery by identifying real-time bottlenecks in the SDLC, automating code reviews, and measuring developer experience. We aim to help organizations ship reliable software faster and build high-performing teams.

However, engineering leaders often struggle to bridge the divide between traditional management practices and modern software development, leading to missed opportunities for growth, ineffective team dynamics, and slower progress toward organizational goals.

To address this gap, we launched groCTO, a community designed specifically for engineering leaders.

What is groCTO Community? 

Effective engineering leadership is crucial for building high-performing teams and driving innovation, yet many leaders face significant challenges that hinder their effectiveness. The role is both demanding and essential: from aligning teams with strategic goals to managing complex projects and fostering a positive culture, engineering leaders have a lot on their plates. They need the right direction and support to navigate these challenges and guide their teams efficiently.

Here’s where groCTO comes in!

groCTO is a community designed to empower engineering managers on their leadership journey. The aim is to help engineering leaders evolve, navigate complex technical challenges, and drive innovative solutions to create groundbreaking software. Engineering leaders can connect, learn, and grow to enhance their capabilities and, in turn, the performance of their teams. 

Key Components of groCTO 

groCTO Connect

Over 73% of successful tech leaders believe having a mentor is key to their success.

At groCTO, we recognize mentorship as a powerful tool for addressing leadership challenges and offering personalized support and fresh perspectives. That’s why we’ve made Connect a cornerstone of our community: 1:1 mentorship sessions with global tech leaders and CTOs. With over 74 mentees and 20 mentors, our Connect program fosters valuable relationships and supports your growth as a tech leader.

These sessions allow emerging leaders to: 

  • Gain personalized advice: Through 1:1 sessions, mentors address individual challenges and tailor guidance to the specific needs and career goals of emerging leaders.
  • Navigate career growth: Mentors understand each individual's strengths and weaknesses and help them focus on improving specific leadership skills and competencies while building confidence.
  • Build valuable professional relationships: Our mentorship sessions expand professional connections and foster collaboration and knowledge sharing that can offer ongoing support and opportunities.

Weekly Tech Insights

To keep our tech community informed and inspired, groCTO brings you a fresh set of learning resources every week:

  • CTO Diaries: The CTO Diaries provide a unique glimpse into the experiences and lessons learned by seasoned Chief Technology Officers, including personal stories, challenges faced, and successful strategies. They help engineering leaders gain practical insights and real-world examples that can inspire and inform their approach to leadership and team management.
  • Podcasts: 
    • groCTO Originals is a weekly podcast for current and aspiring tech leaders aiming to transform their approach by learning from seasoned industry experts and successful engineering leaders across the globe.
    • ‘The DORA Lab’ by groCTO is an exclusive podcast that’s all about DORA and other engineering metrics. In each episode, expert leaders from the tech world bring their extensive knowledge of the challenges, inspirations, and practical uses of DORA metrics and beyond.
  • Bytes: groCTO Bytes is a weekly Sunday dose of curated wisdom delivered straight to your inbox as a newsletter. Our goal is to keep tech leaders, CTOs, and VPEs up to date on the latest trends and best practices in engineering leadership, tech management, system design, and more.

Looking Ahead: Building a Dynamic Community

At groCTO, we are committed to making this community bigger and better. We want current and aspiring engineering leaders to invest in their growth as well as contribute to pushing the boundaries of what engineering teams can achieve.

We’re just getting started. A few of our future plans for groCTO include:

  • Virtual Events: We plan to host interactive webinars and workshops that give engineering leaders and CTOs deeper dives into specific topics, along with networking opportunities.
  • Slack Channels: We plan to create Slack channels to allow emerging tech leaders to engage in vibrant discussions and get real-time support tailored to various aspects of engineering leadership.

We envision a community that thrives on continuous engagement and growth. By scaling our resources and expanding our initiatives, we want to ensure that every member of groCTO finds the support and knowledge they need to excel.

Get in Touch with us! 

At Typo, our vision is clear: to ship reliable software faster and build high-performing engineering teams. With groCTO, we are making significant progress toward this goal by empowering engineering leaders with the tools and support they need to excel. 

Join us in this exciting new chapter and be a part of a community that empowers tech leaders to excel and innovate. 

We’d love to hear from you! For more information about groCTO and how to get involved, write to us at hello@grocto.dev
