
Most companies treat software development costs as just another expense and are unsure how certain costs can be capitalized. 

Recording the true value of software development means recognizing the development effort as a long-term, high-return asset. 

That’s what software capitalization is for.

This article will answer the what, why, and when of software capitalization.

What is Software Capitalization?

Software capitalization is an accounting process that recognizes the incurred software development costs and treats them as long-term assets rather than immediate expenses. 

Typical costs include employee wages, third-party app expenses, consultation fees, and license purchases. 

The idea is to amortize these costs over the software’s lifetime, thus aligning expenses with future revenues generated by the software.

Why is Software Capitalization Important?

Treating developed software as a revenue-generating asset rather than an expense comes with some key advantages:

1. Preserves profitability

Capitalization helps preserve reported profitability over the longer term by spreading development costs across multiple periods. Because you amortize the capitalized costs rather than expensing them immediately, the impact on any single period’s income statement is reduced.

2. Reflects asset value

Capitalizing software development costs results in higher reported asset value and reduces short-term expenses, which ultimately improves profitability metrics like net profit margin and ROA (return on assets).

3. Complies with accounting standards

Capitalizing software in line with major accounting standards, such as U.S. GAAP (specifically ASC 350-40 for internal-use software) and IFRS (IAS 38), makes it easier for companies to undergo audits.

When is Software Capitalization Applicable?

Here’s when it’s acceptable to capitalize software costs:

1. Development stage

The application development stage begins once the preliminary project stage is complete and management commits funding to active development. Here, you can capitalize any cost directly related to development, provided the software is for internal use. 

Example costs include interface design, coding, configuration, installation, and testing.

2. Technical feasibility

If the software is intended for external use, then your costs can be capitalized when the software reaches the technical feasibility stage, i.e., when it’s viable. Example costs include coding, testing, and employee wages. 

3. Future economic benefits

The software must be likely to generate consistent economic benefits for your company in the long run before it can be considered an “asset”. For external-use software, this usually means there is an expectation of selling, leasing, or licensing it.

4. Measurable costs 

The overall software development costs must be accurately measurable. This way, you ensure that the capitalized amount reflects the actual investment in the software. 

Key Costs that can be Capitalized

The five main costs you can capitalize for software are:

1. Direct development costs

Direct costs that go into your active development phase can be capitalized. These include payroll costs of employees who were directly part of the software development, additional software purchase fees, and travel costs.

2. External development costs

These costs include the ones incurred by the developers when working with external service providers. Examples include travel costs, technical support, outsourcing expenses, and more.

3. Software licensing fees

License fees can be capitalized instead of being treated as an expense, although this depends on the applicable accounting standard. Under GAAP, for example, a one-time software license purchase can be capitalized when it provides long-term benefits.

4. Acquisition costs

Acquisition costs can be capitalized as assets, provided your software is intended for internal use. 

5. Training and documentation costs

Training and documentation costs are considered assets only if you incur them during the development phase. Post-implementation, these costs become operating expenses and cannot be capitalized. 

Costs that should NOT be Capitalized

Here are a few costs that do not qualify for software capitalization and are expensed:

1. Research and planning costs 

Research and planning activities fall under the preliminary software development stage. Costs incurred here are expensed and cannot be capitalized. The GAAP accounting standard, for example, states that an organization can begin capitalizing costs only after completing these stages. 

2. Post-implementation costs 

Post-implementation, or the operational stage, is the maintenance period after the software is fully deployed. Any costs incurred during this time, whether for training, support, or other operations, are expensed as incurred. 

3. Costs for upgrades and enhancements

Costs related to routine software upgrades, minor modernization, or enhancements that merely maintain existing functionality cannot be capitalized. For example, money spent on bug fixes, future modifications, and routine maintenance activities is expensed as incurred. 

Accounting Standards you should know for Software Capitalization

Below are the two most common accounting standards that state the eligibility criteria for software capitalization: 

1. U.S. GAAP (Generally Accepted Accounting Principles)

GAAP is a set of rules and procedures that organizations must follow while preparing their financial statements. These standards ensure accuracy and transparency in reporting across industries, including software. 

Understanding GAAP and key takeaways for software capitalization:

  • GAAP allows capitalization for internal and external costs directly related to the software development process. Examples of costs include licensing fees, third-party development costs, and wages of employees who are part of the project.
  • Costs incurred after the software is deemed viable but before it is ready for use can be capitalized. Example costs can be for coding, installation, and testing. 
  • Every post-implementation cost is expensed.
  • A development project still in the preliminary or planning phase is too early to capitalize. 

2. IFRS (International Financial Reporting Standards)

IFRS is an alternative to GAAP and is used worldwide. Compared to GAAP, IFRS permits capitalization of development costs more broadly, provided you meet every criterion, which naturally makes the standard more complex to apply.

Understanding IFRS and key takeaways for software capitalization:

  • IFRS treats computer software as an intangible asset. For internally developed software (whether for internal use, external use, or sale), costs are charged to expense until the project reaches technical feasibility.
  • All research and planning costs are charged as expenses.
  • Development costs are capitalized only after the software’s technical and commercial feasibility for sale or use has been established.

Financial Implications of Software Capitalization

Software capitalization, from a financial perspective, has the following effects:

1. Impact on profit and loss statement

A company’s profit and loss (P&L) statement is an income report that shows the company’s overall expenses and revenues. If your company capitalizes some of its software R&D costs, they are recognized as assets rather than immediate losses, so the development spend can be amortized over a period of time. 

2. Balance sheet impact

Software capitalization treats your development-related costs as long-term assets rather than incurred expenses. This means these costs sit on the balance sheet as an asset and are recognized as expenses only gradually, through amortization, once you have a viable finished product that generates revenue. 

As a result, expense recognition is deferred, which leads to higher reported net income over that period.

3. Tax considerations 

Although tax implications can be complex, capitalizing on software can often lead to tax deferral. That’s because amortization deductions are spread across multiple periods, reducing your company’s tax burden for the time being. 
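To make the deferral effect concrete, here is a minimal, hypothetical sketch (all figures invented for illustration; tax treatment varies by jurisdiction) comparing reported pre-tax income when a development cost is expensed immediately versus capitalized and amortized straight-line over four years:

```python
# Hypothetical comparison of expensing vs. capitalizing a development cost.
# All figures are illustrative only.

def yearly_pretax_income(annual_revenue, dev_cost, capitalize, useful_life=4):
    """Reported pre-tax income per year over the useful life."""
    incomes = []
    for year in range(1, useful_life + 1):
        if capitalize:
            expense = dev_cost / useful_life          # amortized evenly
        else:
            expense = dev_cost if year == 1 else 0    # hit taken up front
        incomes.append(annual_revenue - expense)
    return incomes

# Expensing: a $100,000 loss in year 1, then full revenue flows through.
print(yearly_pretax_income(300_000, 400_000, capitalize=False))
# Capitalizing: a smooth $200,000 of pre-tax income every year.
print(yearly_pretax_income(300_000, 400_000, capitalize=True))
```

The totals over the four years are identical; only the timing of the reported expense changes.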

Detailed Software Capitalization Financial Model

Workforce and Development Parameters

Team Composition

  • Senior Software Engineers: 4
  • Mid-level Software Engineers: 6
  • Junior Software Engineers: 3
  • Total Team: 13 engineers

Compensation Structure (Annual)

  1. Senior Engineers
    • Base Salary: $180,000
    • Fully Loaded Cost: $235,000 (includes benefits, taxes, equipment)
    • Hourly Rate: $113 (2,080 working hours/year)
  2. Mid-level Engineers
    • Base Salary: $130,000
    • Fully Loaded Cost: $169,000
    • Hourly Rate: $81
  3. Junior Engineers
    • Base Salary: $90,000
    • Fully Loaded Cost: $117,000
    • Hourly Rate: $56
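The hourly rates above follow directly from the fully loaded annual costs. As a quick sketch (assuming simple rounding to the nearest dollar):

```python
# Hourly rate = fully loaded annual cost / 2,080 working hours per year,
# rounded to the nearest dollar.
WORKING_HOURS_PER_YEAR = 2_080

def hourly_rate(fully_loaded_annual_cost):
    return round(fully_loaded_annual_cost / WORKING_HOURS_PER_YEAR)

for tier, cost in [("Senior", 235_000), ("Mid-level", 169_000), ("Junior", 117_000)]:
    print(f"{tier}: ${hourly_rate(cost)}/hour")
# Senior: $113/hour, Mid-level: $81/hour, Junior: $56/hour
```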

Story Point Economics

Story Point Allocation Model

  • 1 Story Point = 1 hour of work
  • Complexity-based hourly rates:
    • Junior: $56/SP
    • Mid-level: $81/SP
    • Senior: $113/SP

Project Capitalization Worksheet

Project: Enterprise Security Enhancement Module

Detailed Story Point Breakdown

Indirect Costs Allocation

  1. Infrastructure Costs
    • Cloud Development Environments: $75,000
    • Security Testing Platforms: $45,000
    • Development Tools Licensing: $30,000
    • Total: $150,000
  2. Overhead Allocation
    • Project Management (15%): $37,697
    • DevOps Support (10%): $25,132
    • Total Overhead: $62,829

Total Capitalization Calculation

  • Direct Labor Costs: $251,316
  • Infrastructure Costs: $150,000
  • Overhead Costs: $62,829
  • Total Capitalizable Costs: $464,145
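The worksheet arithmetic above can be reproduced in a few lines. Note that the overhead percentages work out as fractions of direct labor costs (an assumption inferred from the figures):

```python
# Sketch reproducing the capitalization worksheet arithmetic above.
direct_labor = 251_316
infrastructure = 75_000 + 45_000 + 30_000   # cloud + security testing + tooling

project_mgmt = round(direct_labor * 0.15)   # 15% of direct labor -> 37,697
devops = round(direct_labor * 0.10)         # 10% of direct labor -> 25,132
overhead = project_mgmt + devops            # 62,829

total_capitalizable = direct_labor + infrastructure + overhead
print(f"Total capitalizable costs: ${total_capitalizable:,}")  # $464,145
```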

Capitalization Eligibility Assessment

Capitalization Criteria Checklist

✓ Specific identifiable project 

✓ Intent to complete and use the software 

✓ Technical feasibility demonstrated 

✓ Expected future economic benefits 

✓ Sufficient resources to complete project 

✓ Ability to reliably measure development costs

Amortization Schedule

Useful Life Estimation

  • Estimated Useful Life: 4 years
  • Amortization Method: Straight-line
  • Annual Amortization: $116,036 ($464,145 ÷ 4)
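A minimal sketch of the straight-line schedule above (rounding the annual charge to whole dollars leaves a $1 residual in the final year):

```python
# Straight-line amortization: the capitalized balance is reduced by the
# same charge each year over the useful life.
def amortization_schedule(capitalized_cost, useful_life_years):
    annual_charge = round(capitalized_cost / useful_life_years)
    balance = capitalized_cost
    schedule = []
    for year in range(1, useful_life_years + 1):
        balance -= annual_charge
        schedule.append((year, annual_charge, balance))
    return schedule

for year, charge, remaining in amortization_schedule(464_145, 4):
    print(f"Year {year}: amortization ${charge:,}, remaining balance ${remaining:,}")
```

In practice, the final year’s charge would be adjusted to absorb any rounding residual.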

Financial Impact Analysis

Income Statement Projection

Risk Mitigation Factors

Capitalization Risk Assessment

  1. Over-capitalization probability: Low (15%)
  2. Underestimation risk: Moderate (25%)
  3. Compliance deviation risk: Low (10%)

Sensitivity Analysis

Cost Variation Scenarios

  • Best Case: $440,938 (5% cost reduction)
  • Base Case: $464,145 (current estimate)
  • Worst Case: $487,352 (5% cost increase)
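The scenario arithmetic is a simple plus-or-minus 5% swing on the base case; 5% of $464,145 rounds to $23,207, so the best case works out to $440,938:

```python
# Sensitivity sketch: a +/-5% swing applied to the base-case estimate.
base_case = 464_145
delta = round(base_case * 0.05)     # 5% swing = $23,207

best_case = base_case - delta       # $440,938
worst_case = base_case + delta      # $487,352

print(f"Best: ${best_case:,}  Base: ${base_case:,}  Worst: ${worst_case:,}")
```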

Compliance Considerations

Key Observations

  1. Precise tracking of story points allows granular cost allocation
  2. Multi-tier engineer cost model reflects skill complexity
  3. Comprehensive overhead and infrastructure costs included
  4. Rigorous capitalization criteria applied

Recommendation

Capitalize the entire $464,145 as an intangible asset, amortizing over 4 years.

How Typo can help 

Tracking R&D investments is a major part of streamlining software capitalization while leaving no room for manual errors. With Typo, you streamline this entire process by automating the reporting and management of R&D costs.

Typo’s best features and benefits for software capitalization include:

  • Automated Reporting: Generates customizable reports for capitalizable and non-capitalizable work.
  • Resource Allocation: Provides visibility into team investments, allowing for realignment with business objectives.
  • Custom Dashboards: Offers real-time tracking of expenditures and resource allocation.
  • Predictive Insights: Uses KPIs to forecast project timelines and delivery risks.
  • DORA Metrics: Assesses software delivery performance, enhancing productivity.

Typo transforms R&D from a cost center into a revenue-generating function by optimizing financial workflows and improving engineering efficiency, thus maximizing your returns on software development investments.

Wrapping up

Capitalizing software costs allows tech companies to secure better investment opportunities by increasing profits legitimately. 

Although software capitalization can be quite challenging, it presents massive future revenue potential.

With a tool like Typo, you rapidly maximize returns on software development investments with its automated capitalized asset reporting and real-time effort tracking. 

In this episode of the groCTO Podcast, host Kovid Batra interviews David Archer, the Director of Software Engineering at Imagine Learning, with over 12 years of experience in engineering and leadership, including a tenure at Amazon.

The discussion centers on successfully integrating acquired teams, a critical issue following company mergers and acquisitions. David shares his approach to onboarding new team members, implementing a buddy system, and fostering a growth mindset and no-blame culture to mitigate high attrition rates. He further discusses the importance of having clear documentation, pairing sessions, and promoting collaboration across international teams. Additionally, David touches on his personal interests, emphasizing the impact of his time in Japan and his love for Formula 1 and rugby. The episode provides insights into the challenges and strategies for creating stable and cohesive engineering teams in a dynamic corporate landscape.

Timestamps

  • 00:00 - Introduction
  • 00:57 - Welcome to the Podcast
  • 01:06 - Guest Introduction: David's Background
  • 03:25 - Transitioning from Amazon to Imagine Learning
  • 10:49 - Integrating Acquired Teams: Challenges and Strategies
  • 14:57 - Building a No-Blame Culture
  • 18:32 - Retaining Talent and Knowledge Sharing
  • 24:22 - Skill Development and Cultural Alignment
  • 29:10 - Conclusion and Final Thoughts


Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of groCTO podcast. And today with us, we have a very special guest. He has 12 plus years of engineering and leadership experience. He has been an ex-Software Development Manager for Amazon and currently working as Director of Engineering for Imagine Learning. Welcome to the show, David. Great to have you here.

David Archer: Thanks very much. Thanks for the introduction.

Kovid Batra: All right. Um, so there is a ritual, uh, whosoever comes to our podcast, before we get down to the main section. So for the audience, the main section, uh, today’s topic of discussion is how to integrate the acquired teams successfully, uh, which has been a burning topic in the last four years because there have been a lot of acquisitions. There have been a lot of mergers. But before we move there, uh, David, we would love to know something about you, uh, your hobbies, something from your childhood, from your teenage or your, from personal life, which LinkedIn doesn’t tell and you would like to share with us.

David Archer: Sure. Um, so in terms of my personal life, the things that I’ve enjoyed the most, um, I always used to love video games as a child. And so, one of the things that I am very proud of is that I went to go and live in Japan for university and, and that was, um, a genuinely life-changing experience. Um, and I absolutely loved my time there. And I think it’s, it’s had a bit of an effect on my time, uh, since then. But with that, um, I’m very much a fan of formula one and rugby. And so, I’ve been very happy in the last, in the post-COVID-19 years, um, of spending a lot of time over in Silverstone and Murrayfield to go and see some of those things. So, um, that’s something that most people don’t know about me, but I actually quite like my sports of all things. So, yeah.

Kovid Batra: Great. Thanks for that little, uh, cute intro and, uh, with that, I think, uh, let’s get going with the main section. Uh, so integrating, uh, your acquired team successfully has been a challenge with a lot of, uh, engineering leaders, engineering managers with whom I have talked. And, uh, you come with an immense experience, like you have had been, uh, engineering manager for OVO and then for, uh, Amazon. I mean, you have been leading teams at large organizations and then moving into Imagine Learning. So before we touch on the topic of how you absorbed such teams successfully, I would love to know, how does this transition look like? Like Amazon is a giant, right? And then you’re moving to Imagine Learning. Of course, that is also a very big company. But there is definitely a shift there. So what made you move? How was this transition? Maybe some goods or bads, if you can share without getting your job impacted.

David Archer: Yeah, no problem. Um, so once upon a time, um, you’re correct in terms of that I’ve got, you know, over 12 years experience in the industry. Um, but before that, I was a teacher. So for me, education is extremely important and I still think it’s one of the most rewarding things that as a human you can be a part of. Helping to bring the next generation, or in terms of their education, give them better, uh, capabilities and potential for the future. Um, and so when somebody approached me with the position here at Imagine Learning, um, I had to jump at the chance. It sounded extremely exciting and, um, I was correct. It was extremely exciting. There’s definitely been a lot of movement and, and I’m sure we’ll touch on that in a little while, but there is definitely a, a, quite a major cultural shift. Um, and then obviously there is the fact that Amazon being a US-centric company with a UK arm, which I was a part of, um, Imagine Learning is very similar. Um, it’s a US-centric company with a US-centric educational stance. Um, and then, yeah, me being part of the UK arm of the company means that there are some cultural challenges that Amazon has already worked through that Imagine Learning still needed to work through. Um, and so part of that challenge is, you know, sort of educating up the chain, if you like, um, on the cultural differences between the two. So, um, definitely some, some big changes. It’s less easy to sort of move sideways as you can in companies like Amazon, um, where you can transition from one team to another. Um, here, it’s a little bit more, um, put together. There’s, there’s, there’s only one or two teams here that you could potentially work for. Um, but that’s not to say that the opportunities aren’t there. And again, we’ll touch on that in a little bit, I’m sure.

Kovid Batra: Perfect. Perfect. All right. So one, one question I think, uh, all the audience would love to know, like, in a company like Amazon, what is it like to get there? Because it takes almost eight to 10 years if you’re really good at something in Amazon, spend that time and then you move into that profile of a Software Development Manager, right? So how, how was that experience for you? And what do you think it, it requires, uh, in an Engineering Manager at Amazon to be there?

David Archer: That’s a difficult question to answer because it changes upon the person. Um, I jumped straight in as a Software Development Manager. And in terms of what they’re looking for, anybody that has looked into the company will be aware of their leadership principles. And being able to display their leadership principles through previous experiences, that’s the thing that will get you in. So if you naturally have that capability to always put the customer first, to ensure that you are data-driven, to ensure that you have, they call it a bias for action, but that you move quickly is kind of what it comes down to. Um, and that you earn trust in a meaningful way. Those are some of the things that I think most managers would be looking for, and when interviewing, of course, there is a technical aspect to this. You need to be able to talk the talk, and, um, I think if you are not able to be able to reel off the information in an intrinsic manner, as in you’ve internalized how the technology works, that will get picked up. Of course it will. You can’t prepare for it like you can an exam. There is an element of this that requires experience. That being said, there are definitely some areas that people can prepare for. Um, and those are primarily in the area of ensuring that you get the experiences that meet the leadership principles that will push you into that position. In order to succeed, it requires a lot of real work. Um, I’m not going to pretend that it’s easy to work at a company like Amazon. They are well known for, um, ensuring that the staff that they have are the best and that they’re working with the best. And you have to, as a manager, ensure that the team that you’re building up can fulfill what you require them to do. 
If you’re not able to do that, if you’re taking people on because they seem like they might be a good fit for now, you will in the medium to long-term find that that is detrimental to you as a manager, as well as your team and its capabilities, and you need to be able to then resolve that potential problem by making some difficult decisions and having some difficult conversations with individuals, because at the end of the day, you as a manager are measured on what your team output, not what you as an individual output. And that’s a real shift in thinking from being a, even a Technical Lead to being an Engineering Manager.

Kovid Batra: That’s for sure there. One thing, uh, that you feel, uh, stands out in you, uh, that has put you in this position where you are an SDM at Amazon and then you transitioned to a leadership position now, which is Director of Engineering at Imagine Learning. So what is that, uh, one or two traits of yourself that you might have reflected upon that have made you move here, grow in the career?

David Archer: I think you have to be very flexible in your thinking. You have to have a manner of thinking that enables for a much wider scope and you have to be able to let go of an individual product. If your thinking is really focused on one team and one product and it stays in that single first party of what you’re concentrating on that moment in time, then it really limits your ability to look a little bit further beyond the scope and start to move into that strategic thinking. That’s where you start moving from a Software Development Manager into a more senior position is with that strategic thinking mindset where you’re thinking beyond the three months and beyond the single product and you’re starting to move into the half-yearly, full-yearly thinking is a minimum. And you start thinking about how you can bring your team along for a strategic vision as opposed to a tactical goal.

Kovid Batra: Got it. Perfect. All right. So with that, moving to Imagine Learning, uh, and your experience here in the last, uh, one, one and a half years, a little more than that, actually, uh, you, you have, uh, gone through the phase of your self-learning and then getting teams onboarded that were from the acquired product companies and that experience when you started sharing with me on our last, last call, I found that very interesting. So I think we can start off with that point here. Uh, like how this journey of, uh, rearranging teams, bringing different teams together started happening for you. What were the challenges? What was your roadmap in your head and your team? How will you align them? How will you make the right impact in the fastest timeframe possible? So how things shaped up around that.

David Archer: Sure. Initially, um, the biggest challenge I had was that there was a very significant knowledge drain before I had started. Um, so in the year before I came on board, and it was in the first year post-acquisition, the attrition rate for the digital part of the company was somewhere in the region of 50%. Um, so people were leaving at a very fast pace. Um, I had to find a way to plug that drain quickly because we couldn’t continue to have such a large knowledge drain. Um, now the way that I did that was I, I believe in, in the engineers that I have in front of me. They wouldn’t be in the position that they’re in if they didn’t have a significant amount of capability. But I also wanted to ensure that they had and acquired a growth mindset. Um, and that was something that I think up until that point they were more interested in just getting work done as opposed to wanting to grow into a, a sort of more senior position or a position with more responsibility and a bigger challenge. And so I ensured that I mixed the teams together. We had, you know, front enders and back enders in separate teams initially. And so I joined them together to make sure that they held responsibility for a piece of work from beginning to end, um, which gave them autonomy on the work that they were doing. I ensured that I earned trust with that team as well. And most importantly, I put in a ‘no-blame culture’, um, because my expectation is that everybody’s always acting with the best of intentions and that usually when something is going wrong, there is a mechanism that is missing that would have resolved the issue.

Kovid Batra: But, uh, sorry to interrupt you here. Um, do you think, uh, the reasons for attrition were aligned with these factors in the team where people didn’t have autonomy, uh, there was a blame game happening? Were these the reasons or, uh, the reasons were different? I mean, if you’re comfortable sharing, cool, but otherwise, like we can just move on.

David Archer: No, yeah, I think that in reality there, there was an element of that there, there was a, um, a somewhat, not toxic necessarily culture, but definitely a culture of, um, moving fast just to get things done as opposed to trying to work in the correct manner. And that means that people then did feel blamed. They felt pressured. They felt that they had no autonomy. Every decision was made for them. And so, uh, with more senior staff, especially, you know, looking at an M&A situation where that didn’t change, they didn’t see a future in their career there because they didn’t know where they could possibly move forward into because they had no decision-making or autonomy capability themselves.

Kovid Batra: Makes sense. Got it. Yeah, please go on. Yeah.

David Archer: Sorry, yes. So, um, we’re putting these things in place, giving everybody a growth mindset mentality and ensuring that, um, you know, there was a no-blame culture. There were some changes in personnel as well. Um, I identified a couple of individuals that were detrimental to the team and those sort of things are quite difficult, you know, moving people on who, um, they’re trying their best and I don’t deny that they are, but their way of working is, is detrimental to a team. But with those changes, um, we then moved from a 50% regretted attrition to a 5% regretted attrition over the course of ’23 and ’24, which is a very, very significant change in, um, in attrition. And, uh, we also, at that point in time, were able to start implementing new methodologies of bringing in talent from, from below. So we started partnering with Glasgow University to bring in an internship program. We also took on some of their graduates to ensure that we had, um, for want of a better phrase, new blood in the team to ensure that we’re bringing new ideas in. Um, and then we prepared people through the training programs that they would need.

Kovid Batra: I’m curious about one thing, uh, saying that stopping this culture of blame game, uh, is definitely, uh, good to hear, but what exactly did you do in practice on a daily level or on a weekly level or on every sprint level that impacted and changed this mindset? What, what were the things that you inculcated in the culture?

David Archer: So initially, um, and some people think that this might be a trite point, but, um, I actually put out the policy in front of people. I wrote it down and put it in front of people and gave them a document review session to say, “This is a no-blame culture, and this is what I mean by that.” So that people understood what my meaning was from that. Following that, um, I then did have a conversation with some of the parts of, you know, some people in other parts of the company to say, “Please, reroute your conversations through me. Don’t go directly to engineers. I want to be that, that point of contact going forward so that I can ensure that communication is felt in the right manner and the right capacity.” And then, um, the, the other thing is that we started bringing in things like, um, postmortems or incident response management, um, sessions that, that where we, I was very forceful on ensuring that no names were put into these documents because until that point, people did put other people’s names in, um, and wanted to make sure that it was noted that it was so and so’s fault. Um, and I had to step on that very, very strongly. I was like, this could have been anyone’s fault. It’s just that they happen to be at that line of code at that point in time. Um, and made that decision, which they did with a good intention. Um, so I had to really step in with the team and every single post mortem, every major decision in that, that area, every sprint where we went through what the team had completed in terms of work and made sure we did pick out individuals in terms of particularly good work that they did, but then stepped very strongly on any hint of trying to blame someone for a problem that had happened and made it very clear to them again that this could have happened to anyone and we need to work together to ensure it can’t happen to anyone ever again.

Kovid Batra: Makes sense. So when, when this, uh, impact started happening, uh, did you see, uh, people from the previous, uh, developers, like who were already the part of Imagine Learning, those were getting retained or, uh, the ones who joined after acquisition from the other company, those developers were also getting retained? How, how did it impact the two groups and how did they like, gel up later on?

David Archer: Both actually. Yeah. So the, the staff who were already here, um, effectively the, the, the drain stopped and there weren’t people leaving anymore that had had, you know, some level of tenure longer than six months, um, at all from that point forward, and new staff that were joining, they were getting integrated with these new teams. I implemented a buddy system so that every new engineer that came in would have somebody that they could work alongside for the first six months, so that they had some, somebody to contact for the whole time that they were, um, getting used to the company. And, uh, I frequently say that as you join a company like this, you are drinking from a fire hose for the first couple of months. There’s a lot of information that comes your way. Um, and so having a buddy there helped there. Um, I added software engineering managers to the team to ensure that there were people who specifically looked after the team, continue to ensure there was a growth mindset to continue to implement the plans that I had, um, to make these teams more stable. Um, and that took a while to find the right people, I will say that. Um, there was also a challenge with integrating the teams from our vendors in, um, international, uh, countries. So we worked with some teams in India and some teams in Ukraine. Um, and with integrating people from those teams, there was some level of separation, and I think one of the major things we started doing then was getting the people to meet in a more personal manner, bringing them across to our team to actually meet each other face-to-face, um, and realize that these are very talented individuals, just like we are. They’re, they’re no different just because they, you know, live a five and a half hour time zone away doesn’t mean that they’re any less capable. Um, they just have a different way of working and we can absolutely work with these very talented people. 
And bringing them into the teams via a buddy, ensuring that they have someone to work with, making sure that the no-blame culture continued, even into our contractors, it took a while, don’t get me wrong. And there were definitely some missteps, um, but it was vital to ensuring that there was team cohesion all the way across.

Kovid Batra: Definitely. And, uh, I’ve also experienced this, uh, when talking to other, uh, engineering leaders that when teams come in, usually it is hard to find space for them to do that impactful work, right? So you, you need to give those people that space in general in the team, which you did. But also at the same time, the kind of work they are picking up, that also becomes a challenge sometimes. So was that a case in your scenario as well? And did you like find a way out there?

David Archer: It was the case here. Um, there definitely was a case of the, the work was predefined, if you like, to some extent by the, the most senior personnel. And so one of the things that we ensured that we did, uh, I worked very closely with our product team to ensure that this happened is that we brought the engineers in a lot sooner. We ensured that this wasn’t just the most senior member of the team, but instead that we worked with different personnel and de-siloing that information from one person to another was extremely important because there were silos of information within our teams. And I made it very clear that if there’s an incident and somebody needs some help, and there’s only one person on the team, um, that is capable of actually working, then, um, we’re going to find ourselves in, in a real problem. Um, and I think people understood that intrinsically because of the knowledge loss that had happened before I started, or just as I was coming on board, um, because they knew that there were people who, you know, knew this part of the code base or this database or how this part of infrastructure worked, and suddenly we didn’t have anybody that had that knowledge. So we now needed to reacquire it. And so, I ensured that the, you know, this comes from an Amazon background, so anybody that, that has worked at this company will know what I’m talking about here, but documentation is key. Ensuring document reviews was extremely important. Um, those are the kind of things, ensuring that we could pass on information from one person to another from one team to another in the most scalable fashion, it does slow you down in delivery, but it speeds you up in the longer term because it enables more people to do a wider range of work without needing to rely on that one person that knows everything.

Kovid Batra: Sure, definitely. I think documentation has been like always on the top of, uh, the priority list itself now whomsoever I’m talking to, because once there are downturns and you face such problems, you realize the importance of it. In the early phase, you are just running, building, not focusing on that piece, but later on, it becomes a matter of priority for sure. And I can totally relate to it. Um, so talking about these people, uh, who have joined in and you’re trying to integrate, uh, they definitely need some level of cultural alignment also, like they are coming from a different background, coming into a new company. Along with that, there might be requirements, you mentioned like skill development, right? So were there any skill development plans that worked out, that worked out here that you implemented? Anything from that end you want to share?

David Archer: Yeah, absolutely. So with joining together our teams of frontend and backend developers, um, that’s obviously going to cause some issues. So some developers are not going to be quite as excited about working in a different area. Um, but I think with knowing that the siloing of information was there and that we had to resolve that as an issue and then ensuring that people who are being brought on via, you know, vendors from international countries and things like that, um, what we started to do was to ensure that we put in, um, pairing sessions with all of our developers. Up until that point, they kind of worked on their own and so, um, I find that working one-to-one with another individual tends to be the fastest way to learn how the things work, work in the same way as, um, a child learns their language from their parents far faster than they ever would from watching TV. Um, although sometimes I do wonder about that myself with my daughter singing Baby Shark to me 16 times and I don’t think I’ve ever sung that. So let’s see where that goes. Um, but having that one-to-one, um, relationship with the person means that we’re able to ask questions, we’re able to gain that knowledge very quickly. Having the documentation backing that up means that you’ve got a frame of reference to keep going to as well. And then if you keep doing that quite frequently and add in some of the more abstract knowledge sharing sessions, I’m thinking like, um, ‘lunch and learn’ type sessions or lightning talks, as well as having a, a base of, sort of a knowledge base that people can learn from. So, obvious examples of things like Pluralsight or O’Reilly’s library. 
Um, but we also have our own internal documentation as well where we give people tutorials, we walk people through things, we added in a code review session, we added in a ‘code of the sprint’, and a session as well for our, um, sprint reviews that went out to the whole team and to the rest of the company where we showed that we’re optimizing where we can. And all these things, they didn’t just enable the team to, to become full stack, and I will say all of our developers now are full stack. I’d be very surprised if there are any developers I’m working with that are not able to make a switch. But it also built trust with the rest of the company as well, and that’s the thing with being a company that has been acquired is that we need to, um, very quickly and very deliberately shout about how well we’re doing as a company so that they can look at what we’re doing and use us, as has frequently been the case recently actually, as a best practice, a company that’s doing things well and doing things meaningfully and has that growth mindset. And we start then to have conversations with the wider company, which enables things like a tiger team type session that enables us to widen our scope. It’s kind of a spiral at that point in time because you start to increase your scope and with doing that, it means that your team can grow because, you know, they know that thing, they can trust us to do things effectively. And it also gives, going back to what I said at the beginning, people more autonomy, and more decision-making capabilities as they get further out into a company.

Kovid Batra: And in such situations, the opinions that they’re bringing in are more customer-centric. They have more understanding of the business. All those things ultimately add up to a lot of intrinsic incentivization, I would say. That if I’m being heard in the team, being a developer, I feel good about it, right? And all of this is like connected there. So I, it totally makes sense. And I think that’s a very good hack to bringing new, uh, people, new teams into the same, uh, journey where you are already continuing. So, great. I think, uh, with that, we have, uh, come to, uh, the end of this discussion. And in the interest of time, we’ll have to pause here. Uh, really loved talking to you, would love to know more such experiences from you, but it will be in the, maybe in the next episodes. So, David, once again, thanks a lot for your time. Thanks for sharing your experiences. It was great to have you here.

David Archer: Thank you so much and I really appreciate, uh, the time that you’ve taken with me. I hope that this proves useful to at least one person and they can gain something from this. So, thank you.

Kovid Batra: I’m sure it will be. Thank you. Thank you so much. Have a great day ahead.

David Archer: Thank you. Cheers now!

In the first session of the ‘Unlocking Engineering Productivity’ webinar series, host Kovid Batra from Typo welcomes two prominent engineering leaders: Paulo André, CTO of Resquared, and Denis Čahuk, a technical coach and TDD/DDD expert.

They discuss the importance of engineering productivity and share insights about their journeys. Paulo emphasizes the significance of collaboration in software development and the pitfalls of focusing solely on individual productivity metrics. Denis highlights the value of consistent improvement and reliability over individual velocity. Both guests underline the importance of creating clarity and making work visible within teams to enhance productivity. Audience questions address topics such as balancing technical debt with innovation and integrating new tools without disrupting workflows. Overall, the session offers practical strategies for engineering leaders to build effective and cohesive teams.

Timestamps

  • 00:00 — Introduction
  • 00:52 — Meet the Experts: Paulo and Denis
  • 03:13 — Childhood Stories that Shaped Careers
  • 05:37 — Defining Engineering Productivity
  • 11:18 — Why Focus on Engineering Productivity Now?
  • 15:47 — When and How to Measure Productivity
  • 22:00 — Team vs. Individual Productivity
  • 35:35 — Real-World Examples and Insights
  • 37:17 — Addressing Common Engineering Challenges
  • 38:34 — The Importance of Team Reliability
  • 40:32 — Planning and Execution Strategies
  • 45:31 — Creating Clarity and Competence
  • 53:24 — Audience Q&A: Balancing Technical Debt and Innovation
  • 57:02 — Audience Q&A: Overlooked Metrics and Security
  • 01:02:49 — Audience Q&A: Integrating New Tools and Frameworks
  • 01:08:47 — Final Thoughts and Farewell

Links and Mentions

Transcript

Kovid Batra: All right. Time to get started. Uh, welcome everyone. Welcome to the first episode, first session of our new, all new webinar series, Unlocking Engineering Productivity. So after the success of our previous webinar The Hows and Whats of DORA, we are even more excited to bring you this webinar series which is totally designed to help the engineering leaders become better, learn more and build successful, impactful dev teams. And today with us, uh, we have two passionate engineering leaders. Uh, I have known them for a while now. They have been super helpful, all the time up for helping us out. So let me start with the introduction. Uh, Paulo, Paulo André, uh, CTO of Resquared, a YC-backed startup. He has been the, he has been ex-engineering leadership coach for Hotjar, and he has, he’s an author of the Hagakure newsletter. So welcome to, welcome to the unlocking, uh, engineering productivity webinar, Paulo.

Paulo André: Thanks for having me. It’s a real pleasure to be here.

Kovid Batra: Great. Uh, then we have Denis. Uh, he’s coming to this for the second time. And, uh, Denis is a tech leadership coach, TDD expert, and author of Crafting Tech Teams. And he’s also a guitar player, a professional gamer. Uh, hi, hi, Denis. Welcome, welcome to the episode.

Denis Čahuk: Hi, thanks for inviting me again. Always a pleasure. And hey, Paulo, it’s our first time meeting on stage.

Paulo André: Good to meet you, Denis.

Kovid Batra: I think I missed mentioning one thing about Paulo. Like, uh, he is like a very, uh, he’s an avid book reader and a coffee lover, just like me. So on that note, Paulo, uh, which book you’re reading these days?

Paulo André: Oh, that’s a good question. Let, let me pull up my, because I’m always reading a bunch of them at the same time, sort of. So right now, I’m very interested, I wonder why in, you know, geopolitical topics. So I’m reading a lot about, you know, superpowers and how this has played out, uh, in history. I’m also reading a fiction book from an author called David Baldacci. It’s this series that I recommend everyone who likes to read thrillers and stuff like that. It’s called the 6:20 Man. So.

Kovid Batra: Great.

Paulo André: That’s what I’m reading right now.

Kovid Batra: So what’s going to be the next superpower then? Is it, is it, is it China, Russia coming in together or it’s the USA?

Paulo André: I’ll tell you offline. I’ll tell you offline.

Kovid Batra: All right. All right. Let’s get started then. Um, I think before actually we move on to the main section, uh, there is one ritual that we have to follow every time so that our audience gets to know you a little more. Uh, this is my favorite question. So I think I’ll, I’ll start with Paulo, you once again. Uh, you have to tell us something from your childhood or from teenage, uh, that defines you, who you are today. So over to you.

Paulo André: I mean, you already talked about the books. I think the reason why I became such a book lover was because there were a ton of books in my house, even though my parents were not readers. So I don’t know, it was more decorative. But I think more importantly for this conversation, I think the one thing about my childhood was when they gifted me a computer when I was six years old. We’re talking about 88, 89 of the type that you still connected to your big TV in the living room. So that changed my life because it came with an instruction manual that had code listings. Then you could type it in and you can see what happens on the screen and the rest is history. So I think that was definitely the most consequential thing that happened in my childhood when you consider how my life and career has played out.

Kovid Batra: Definitely. Cool. Um, Denis, I think the same question to you, man. Uh, what, what has been that childhood teenage memory that has been defining you today?

Denis Čahuk: Oh, you’re putting me on the spot here. I’ll have to come up with a new story every time I join a new webinar. Uh, no, no, I had a similar experience as Paulo. Um, I have an older brother and our household got our first computer when I was five-six years old, the first Commodore 64. So I learned how to code before I could read. Uh, I knew, I knew what keys to press so I could load Donald Duck into the, into the TV. Um, yeah, other than that when I, when I got a little bit, you know, into the teenage years, I, um, World of Warcraft and playing games online became my passion project when I, when I received access to the internet. Um, so that’s, you know, I played World of Warcraft professionally, semi-professionally for quite a few years, like almost an entire decade, you know, and that, that was sort of parallel with my, with my sort of tech career, because we’re usually doing it in a very large organization, game-wise. Yeah. And that, that, that had a huge influence because it gave me an outlet for my competitiveness.

Kovid Batra: That’s interesting. All right, guys. Thanks. Thanks for sharing this with us. Uh, I think we’ll now move on to the main section and discuss something around which our audience would love to learn from you both. Uh, so let’s, let’s start with the first basic fundamental definition of what productivity, what dev productivity or engineering productivity looks like to you. So Paulo, would you like to take this first? Like, how do you define productivity?

Paulo André: So you start with a very small question, right? Um, you actually start with a million-dollar question. What is productivity? I’m happy to take a stab at it, but I think it’s one of those things that everyone has their own definition. For what it’s worth, when I think about productivity of engineering teams, I cannot decouple it from the purpose of an engineering team. And then ultimately, the way I see it is that an engineering team serves a business and serves the users of that business in case it’s a product company, obviously, um, but any, any kind of company kind of has that as the delivery of value, right? So with that in mind, is this team doing their part in the delivery of value, whatever value is for that business and for those users, right? And so having that sort of frame in mind, I also break it down in my mind, at least, in terms of like winning right now and increasing our capacity to win in the future. So a productive team is not just a team that delivers today, but it’s also a team that is getting better and better at delivering tomorrow, right? And so productivity would be, are we doing what it takes to deliver that value regardless of the output? Um, it is necessary to have output to have results and outcomes, but at the end of the day, how are we contributing to the outcomes rather than to the, um, the just purely to the outputs? And the reason why I bring this up has to do obviously with sometimes you see the obsession about things like story points and you know, all of that stuff that ultimately you can be working a lot, but achieving very little or nothing at all. So, yeah, I would never decouple, um, the delivery of value from how well an engineering team is doing.

Kovid Batra: Perfect. I think very well framed here and the perspective makes a lot of sense. Um, by the way, uh, audience, uh, while we are talking, discussing this EP, please feel free to shoot out all the questions that you have in the comments section. We’ll definitely be taking them at the end of the session. Uh, but it would be great if you could just throw in questions right now. Well, this was an advice from Denis, so I wouldn’t want to forget this. Okay. Uh, I think coming back, Denis, what’s your take on, uh, productivity, engineering productivity, dev productivity?

Denis Čahuk: Well, as Paulo said, that’s a million dollar question. I think, I think coming from a, from like a more analytical perspective, more data-driven perspective, I think we like to use the, the financial analogies, metaphors a lot for things like technical debt and, you know, good story points. It’s all about estimating something, you know, value of something or, or scale of something, scope of something. I think just using two metaphors is very useful for productivity. One is, you know, how risky is the team itself? And risk can come from many different places. It can be their methodologies, their personalities, the age of the company, the maturity of the company. The project can be risky. The timing on the market can be risky, right? So, but there is an inherent risk coming from the team itself. That’s, that’s what I mean. So how risky is it to work with this team in particular? Uh, and the other thing is to what degree does the team reason about, um, “I will produce this output for this outcome.” versus “I need to fill my schedule with activity because this input is demanded of me.” Right? So if I, if I use the four pillars that you probably know from business model canvases for activity, input, output, outcome, um, a productive team would not be measuring productivity per se. They will be more aligned with their business, aligned with their product and focusing on what, which of their outputs can provide what kind of outcomes for the business, right? So it’s not so much about measuring it or discussing it. It’s more about a, you know, are we shifting our mentality far enough into the things that matter, or are we chasing our own tail, essentially, um, protecting our calendars and making sure we didn’t over-promise or under-promise, etc.?

Kovid Batra: Got it. Makes sense.

Paulo André: Can I just add one, one last thing here, because Denis got my, my brain kinda going? Um, just to make the point that I think the industry spends a lot of time thinking about what is productivity and trying to define productivity. I think there is value in really getting clear about what productivity is not. And so I think what both Denis and I are definitely aligned on among other things is that it’s not output. That’s not what productivity is in isolation. So output is necessary, but it is not sufficient. And unfortunately, a lot of these conversations end up being purely about output because it’s easy to measure and because it’s easy to measure, that’s where we stop. And so we need to do the homework and measure what’s hard as well, so we can get to the real insight.

Kovid Batra: No, totally makes sense. I think I relate to this because when I talk to so many engineering leaders and almost all the time this, this comes into discussion, like how exactly they should be doing it. But what, what is becoming more interesting for me is that this million dollar question has suddenly started raising concerns, right? I mean, almost everywhere in like in business, uh, people are measuring productivity in some or the other way, right? But somehow engineering teams have suddenly come into the focus. So this, this perspective of bringing more focus now, why do you think it has come into the picture now?

Paulo André: Is that for me or Denis? Who should go first?

Kovid Batra: Anyone. Maybe Paulo, you can go ahead. No problem.

Paulo André: Okay. So, look. In, in my opinion, I think I was thinking a little bit about this. I think it’s a good question. And I think there’s at least three things, three main things that are kind of conspiring for this renewed focus or double down on engineering productivity specifically. I think on the one hand, it’s what I already mentioned, right? It’s easier to measure engineering than anything else. Um, at least in the product design and engineering world, of course, sales are very easy to measure. Did you close or not? And that sort of thing. But when it comes to product design and engineering, engineering, especially if you focus on outputs is so much easier to measure. And then someone gets a good sense of ROI from that, which may or may not be accurate. But I think that’s one of the things. The other thing is that when times get more lean or things get more difficult and funding kind of dries up, um, then, of course, you need to tighten the belt and where are you going to tighten the belt? And at the end of the day, I always say this to my teams, like, engineering is not more special in any way than any other team in a company. That being said, when it comes to a software company, the engineering team is where the rubber meets the road. In other words, you do absolutely need some degree of engineering team or engineering capacity to translate ideas and designs and so on into actual software. So it’s very easy to kind of just look at it as in, “Oh, engineers are absolutely critical. Everything else, maybe are nice to have.” Or something of that, to that effect, right? And then lastly, I think the so-called Elon Musk effect definitely is a thing. 
I mean, when someone with that prominence and with, you know, the soapbox that he has, comes in and says, you know, we’re going to focus on engineers and it’s about builders, and even Marc Andreessen wrote an article like three years ago or so saying it’s time to build, all of that speaks like engineering, engineering, engineering. Um, and so when you put that all together and how influenceable all of us are, but I think especially then founders and CEOs are kind of really attuned to their industry and to investors and so on, and I think there’s this, um, feedback loop where engineering is where it’s at right now, especially in the age of AI and so on. So yeah, I’m not surprised that when you put this all together in this day and age, we have what we have in terms of engineering being like the holy grail and the focus.

Kovid Batra: Uh, Denis, you, you have something to add on this?

Denis Čahuk: I mean, when it comes to the timing, I don’t think anything comes to mind, you know, why now? What I can definitely say is that engineering of everything that’s going on is the biggest cost in a, in a large company. I mean, it’s not, not to say that it’s all about salaries or operational expenses, but it is also from a business’s perspective, engineering is, you know, if I put a price to the business being wrong on an experiment, the engineering side of things, the product engineering side of things defines most of that cost, right? So when it comes to experiments, the likelihood of it succeeding or not succeeding, or the how fast you gain feedback to be able to, you know, to, to think of experiment feedback as cashflow, you know, you want the big bet that you do once every three months, or do you want to do a bunch of small bets continuously several times per day? You know, all of that is decided and all of that happens in engineering and it also happens to be the biggest fiscal costs. So it makes sense that, hey, there’s an, you know, there’s a big thing that costs a lot, that is very complex and it’s defining the company. Yeah, of course, business owners would want to measure it. It will be irresponsible not to. It doesn’t mean that it, that productivity from a team’s or an engineer’s, an individual’s perspective is the most sensible thing to measure. But I, you know, I understand the people that would intuitively come to that conclusion.

Kovid Batra: Yeah. I think that makes a lot of sense. And what do you think, like, this should be done, that is totally, uh, understandable, but when is the right time to start doing this and how should one start it? Because every time an engineering leader is held accountable for a team, whether big or small, there is a point where you have to decide your priorities and think about things that you are going to do, right? So how and when should an engineering leader or an engineering manager for a team start taking up this journey?

Paulo André: I think Denis can go first on this one.

Denis Čahuk: Well, I would never, you know, I would never start measuring. So I coach teams professionally, you know, they, they reach out to me because something about my communication on LinkedIn or newsletter resonated with them regarding, you know, a very no-nonsense way of how to deal with customers, how to communicate, how to plan, how to not plan, how to, how to bring, you know, that excitement into engineering, that makes engineering very hyperproductive and fun. And then they come to me and ask, well, you know, “I want to measure all these things to see what I can do.” I think that context is always misleading. You know, we don’t just go in, you know, it’s not a speedometer like the, I think the very, very first intuition that people still have from the 90s, from the, from the, like the initial scrum and Kanban, um, modes of thought that, “Oh, I can just put up speedometer on the team and it will have a velocity and it, you know, it will just be a number.” Um, I think that is naive. That is not what measuring is. And that is not the right time ever to measure that. Like that I think is my say. Um, the right time to measure is when you say, “I am improving A or B. I am consciously trying to figure out continuously, consciously trying to figure out what will make my teams better.” So a leader might approach, “Okay. If I introduce this initiative, how can I tell if things are better?” And then you can say, “Well, I’ll eyeball it or I’ll survey the team.” And at a certain point, the eyeballing is too inaccurate or it requires too many disagreeing eyeballs, or, um, you run the risk of a survey fatiguing the team, so it’s just way too many surveys asking boring questions, and when you ask engineers to do repetitive, boring things, they will start giving you nonsense answers, right? So that would be the point where I think measuring makes sense, right? 
Where you basically take a little bit of subjective opinion out, with the exception of surveys, qualitative surveys, and you introduce a machine that says, “Hey, this is a process.” You know, it’s one computer talking to the other computer, you know, in the case of GitHub and similar, which seems to be the primary vector for measurement. Um, can I just extract some metrics of, you know, what are the characteristics of the machine? It doesn’t tell you how fast or how slow it’s going. Just what are the characteristics? Maybe I can get some insights too and decide whether this was a good idea or a bad idea, or if we’re missing something. But the decision to help your teams improve on some initiative and introducing the initiative comes first. And then you measure if you have no other alternative or if the alternatives are way too fuzzy.

Kovid Batra: Makes sense. Paulo, would you like to add something?

Paulo André: Yeah, I mean, I think my, my perspective on this is not very different from, from Denis. Uh, maybe it comes from a slightly different angle and I’ll explain what I mean. So, at the end of the day, if you want to create an outcome, right? And you want to change customer behavior, you want to create results for the business, you’re going to have to build something. And where I would not start is with the metrics, right? So you asked Kovid, like where, where do we start in this journey? I would say do not start with the metrics because in my mind, the metrics are a source of insight or answers to a set of questions. And so start with the questions, right? Start with the challenges that we, that you have to get to where you want to be, right? And so, coming back to what I was saying, if you want to create value, you’re going to have to build something, typically, most of the time, sometimes it creates value by removing something, but in general, you are building and iterating on your products. And, and so with that in mind, what is going back to first principles? What is the nature of software development? Well, it’s a collaborative effort. Nobody does everything end-to-end by themselves. And so with that in mind, there’s going to be handoffs. There’s going to be collaboration. There’s going to be all, all of that sort of flow, right? Where, where the work goes through a certain, you can see it as a pipeline. And so then when it comes to productivity, to me is, is, you know, from a lean software development perspective is how do we increase the flow? If you think of a Kanban board, how do you go, you know, in a smooth way, as smooth as possible from left to right, from something being ready for development to being shipped in production and creating value for the user and for the company? And so if you see it that way with that mental model, then it becomes like, where is the constraint? What is the bottleneck? And then how do we measure that? 
How do we get the answers is by measuring. And so when it comes to the DORA metrics that you guys obviously with Typo provide, um, you know, a good, good insight into, and, and other such things, generally cycle time, lead time really allows us to start understanding where’s this getting stuck. And that leads to then conversations around what can we do about that? And ultimately everybody can rally around the idea of how do we increase flow? And so that’s where I would start is what are we trying to do? What is getting in our way? And then let’s look at the data that we have available without going too crazy about that into like, what can we learn and where can we improve and where’s the biggest leverage?

Kovid Batra: Makes sense. I think one, one good point that you brought here is that software development is a collaborative effort, right? And every time when we go about doing that, there are people, there are teams, uh, there are processes, right? Uh, how, how would you define in a situation that whether you should go about measuring, uh, at an individual-level productivity, a developer-level productivity, and, uh, and then when, when we are talking about this collaborative effort, the engineering productivity? So how do you differentiate and how do you make sure that you are measuring things right? And sometimes the terminologies also bring in a lot of confusion. Uh, like, I would never perceive developer productivity to be something, uh, specific to developers. It ultimately boils down to the team. So I would want to hear both of you on this point, like how, how do you differentiate or what’s your perspective on that? When you talk to your team that, okay, this is what we are going to measure, uh, your teams are not taken aback by that, and there is a smooth transition of thought, goals when we are talking about improving the productivity. Uh, Paulo, maybe you could answer that.

Paulo André: I was trying to unmute myself. I was actually gonna.. Um, and Denis, feel free to kind of like interject at any point with your thinking as well. You know, if I follow up on what I was just saying, that this is a team sport, then the unit of value is going to be the team. Are there individual productivity metrics? Yes. Are they insightful? Yes, they can be. But for what end? What can you actually infer from them? What can you learn from them? Personally, as an engineering leader, the way I look at individual productivity metrics is more like a smoke alarm. So, for example, if someone is not pushing code for long periods of time, that’s a question. Like, what’s going on? There might be some very good reasons for that, or maybe this person is struggling, and so I’m glad that I saw that in the metrics, right? And then we can have a conversation around it. Again, the individual is necessary, but it’s not sufficient to deliver value. And so I need to focus on the team-level productivity metrics, right? Um, so that’s kind of like how I disambiguate, if you will, these two, the individual and the team: the team comes first. I look at the individual to understand to what degree is the individual or the individuals serving the team, because it comes back to also questions, obviously, of performance and performance reviews and compensation and promotions, like all of that stuff, right? Um, but do I look at the metrics to decide on that? Personally, I don’t. What I do look at is what can I see in the metrics in terms of what this person’s contribution to the team is, and for the team to be able to be successful and productive.

Kovid Batra: Got it. Denis, uh, you have something to add here?

Denis Čahuk: It’s, it’s such an interesting topic that sort of has nuances from many different perspectives that my brain just wants to talk about all three at the same time. So I want to do a quick dip into all three areas. First is the business side, right? So, uh, for example, let’s take the examples of baseball and soccer, and what happens when off-season comes. Baseball is more of an individual sport than soccer, you know, like the individual performance stands out way more than in soccer, where everything’s moving all the time. Um, it’s very difficult to individuate performance in soccer, although you still can, and people still do, and it’s still very sexy. When it’s off-season, people want to decide, okay, which players do we keep? Which players do we trade? Which players do we replace? You know, this is completely normal, and you would want to do this, and you would want to have some kind of metrics, ideally merit-based metrics of, yeah, this person performed better; having this person on the team makes the team better. In baseball, this makes perfect sense. In soccer, not so much, but you still have to decide, well, how much do we pay each player? And you can probably tell, if you’re following the scene, that every soccer player’s salary, their, um, their contracts are priced individually based on their value to the brand of the team, all the way to public relations, marketing, and yes, performance on the field. Even if they’re on the bench all the time, you know, they might have a positive effect on the team as a coach or as a mentor, as a captain. Um, so that’s one aspect. Now bringing it back into software teams, that’s the business side of things. Yes, these decisions have to be made.

Then there’s the other side of things, which is how does the team work? You know, from my perspective, if output or outcomes can be traced back to one individual person, I think there’s something wrong. I think there’s a lot of sort of value left on the table if you can say, “Oh, this thing was done by this one person.” Generally, it’s a team effort, and the more complex the problems get, the harder it is. You know, look, for example, at NASA, um, the Apollo missions. Which one engineer, you know, made the rocket fly? You don’t have an answer to that, because it was thousands of people collaborating together. You know, which one person made a movie? Yes, the director or the producer or the main actor, like they stand out when it comes to branding. But there were tens of thousands of people involved, right? So, you know, at the end of the day, what matters is the box office. So I think that’s what it really comes down to, uh, is that yes, generally there will be like a few stars and some smoke alarms, as Paulo mentioned, I really liked that analogy, right? So you’re sort of checking for, hey, is anybody below standard and does anybody sort of stand out? Usually in branding and communication, not in technical skill. Um, and then try to reason about the team as a whole.

And then there’s the third aspect, which is how productive does the individual feel? You know, if somebody says they’re a senior with seven years of experience, how productive do they feel? Do they get to do everything they wanted to in a day? You know, and then keep going up. Does the product owner feel productive or efficient? Or does the leader feel that they’re supporting their teams enough, right? So it also comes down to perception. We saw this recently in the various surveys regarding AI usage and coding assistants, where developers say, “Yeah, it makes me feel amazing because I feel more productive.” But in reality, the outcomes that it produces didn’t change, or the change was so insignificant that it was very difficult to measure.

So with those three angles to consider, I would say, you know, the way to approach measuring, and particularly this individual versus team performance, is that it’s a moving target. You sort of need to have a plan for why you’re measuring and what you’re measuring, and ideally, once you know that you’re measuring the right things when it comes to the business, it’ll be very difficult, um, to trace it back to an individual. If tracing it back to an individual is very easy, or if that’s an outcome that you’re pursuing, I would say there are other issues or potential improvements afoot. And again, measuring those might show you that measuring them is a bad idea.

Paulo André: Can I just add one, one quick thing again? Like, this is something that took me a little while to understand for myself and to become intuitive, because it’s not intuitive at all. Um, but I think it’s an important pitfall to kind of highlight, which is: if we incentivize individual behaviors, individual productivity, that can really backfire on the team. And again, I remind you that the team is the unit of value. And so if we incentivize throughput or output from individual developers, how does that hurt the team? It doesn’t sound very intuitive, but think about, for example, a very prolific developer that is constantly just taking on more tickets and creating more pull requests, and those pull requests are just piling up because there’s no capacity in the team to review them. The customer is not getting any value on the other side. That work in progress is, in lean terminology, just waste at that point, right? But that developer can be regarded, depending on how you look at it, as a very productive developer. But is it? Or could it be that that developer could be testing something? Or could it be that that developer is helping doing code reviews and so on and so forth, right? So again, team and individual productivity can lead to wildly different results. And sometimes you have teams that are very unproductive despite having very productive developers in them, but they are looking at, in my opinion, the wrong definition of what productivity is and where it comes from, and what the unit of value is; like I said, it’s the team.

Kovid Batra: Yeah.

Denis Čahuk: Can I jump in quickly, Kovid?

Kovid Batra: Yeah.

Denis Čahuk: There’s something I’ve always said. Um, it’s very unintuitive, and I can give you a complete example from coaching, that it throws leaders off-guard every time I suggest it, and it ends up being a very positive outcome. I always ask them, you know, “What are you using to assign tickets? Are you assigning them?” And they say, “Yes, we use Jira.” Or something equivalent. And I ask them, “Well, have you considered not assigning the tickets?” Right? And, well, who should own it? And I say, “Well, it’s in the team’s backlog. The team owns it. Stop assigning an individual.” Right? And they’re usually taken aback. It’s like, “What do you mean? Like, it won’t get done if I don’t assign it.” No, it’s in the team’s backlog, of course it’ll get done. Right? And if not, if they can’t decide who will do it, then that’s a conversation they should have, and then keep it unassigned. Or, alternatively, use some kind of software that allows multiple people to be assigned. But you don’t need to, because Jira, for example, has like a full activity log, so I comment on it, you comment on it, you review, I review, we merge, I merge, I ask a question. You have a full paper trail of everybody who was involved. Why would you need an owner, right? So this idea of an owner is, again, going back to lean activities and talking about handoffs, right? So I hand it off to you, you’re now the owner, and you’ll hand it off to somebody else. But having many handoffs is an anti-pattern in itself, usually, in most contexts. Actually, the better idea would be: how can we have fewer handoffs than we have people? If there are seven people in the pipeline, there shouldn’t be seven handoffs. You know, how can we have just one deliverable, just one thing to assign, and seven people working on it? 
That would be the best sort of positive outcome because then you don’t cap, you know, how much money you can put around a problem because that allows you to sort of scale your efforts in intensity, not just in parallelism. Um, and usually that parallelism comes at a very, very steep cost.

Paulo André: Yeah.

Denis Čahuk: Um, so incentivizing methods that make individual work activity untraceable can unintuitively have, and usually does have, drastic and immediate positive benefits for the team. Also, if the team is lacking in psychological safety, this will immediately sort of wash over them and they’ll have to have some like really rough conversations in the first week, and then things drastically start improving. At least that’s my experience.

Paulo André: Yeah. And the handoff piece is a very interesting one. I’ll be very quick, uh, Kovid. From the perspective of a piece of work, a work package, a ticket or whatever, it’s either being actively worked on or it’s waiting for someone to do something about it, right? And if we measure these things, what we realize, and it’s the same thing if you go to the airport and think about how much time we’re actually spending on something like checking in or boarding the plane versus waiting at some of the stages, is that the waiting time is typically way more than the active time. And so that waiting time is waste as well. That’s an opportunity. Those delays, we can think about how we can reduce those, and the more handoffs we have in the process, the more opportunity for delay creeps in, right? So it’s a very different way of looking at things. But when we talk about estimates and so on, estimates are all about like active time. It’s how long it’s going to take, but we don’t realize that nothing is done individually, and because of the handoffs, you cannot possibly predict the waiting times. So the best that you can do is to reduce the handoffs, so you have less opportunity for those delays to creep in.
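Paulo’s active-versus-waiting observation is often summarized as flow efficiency: the share of a ticket’s total cycle time that is actually spent working on it. A minimal sketch with invented numbers:

```python
# Hypothetical breakdown of one ticket's cycle time (hours).
# Each handoff in the pipeline tends to add another waiting period.
active_hours = [4, 6, 2]        # e.g. coding, addressing review comments, deploy work
waiting_hours = [20, 48, 12]    # e.g. waiting for review, for QA, for a release slot

flow_efficiency = sum(active_hours) / (sum(active_hours) + sum(waiting_hours))
print(f"Flow efficiency: {flow_efficiency:.0%}")
```

Teams measuring this for the first time are routinely surprised that waiting dominates. Note that removing a handoff deletes an entire waiting entry, while better estimates only refine the active entries, which is Paulo’s argument for reducing handoffs rather than polishing estimation.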

Kovid Batra: Totally. I think to summarize both of your points, what I have understood is: keep those smoke alarms ready at the individual level and at the process level, so that you are able to spot those gaps if there is something falling apart. But at the end of the day, if you’re measuring productivity for a team, it has to be a collaborative team-level thing that you’re looking at, and looking at value delivery. So I think it’s a very interesting thing. Uh, I think there’s a lot of learning for us when we are working at Typo, that we need to think more on the angle of how we bring in those pointers, those metrics which work as those smoke alarms, rather than just looking at individual efficiency or productivity and defining that for somebody. Uh, I think that makes a lot of sense. All right. I think we are into a very interesting conversation and I would like to ask one of you to tell us something from your experience. So let’s start with you, Denis. Um, like you have been coaching a lot of teams, right? And, uh, there are instances where you deal with large-scale teams, small teams, startups, right? There are different combinations. Anything that you feel is an interesting experience to share here about how a team approached solving a particular problem or a bottleneck in their team that was slowing them down, basically like not having the right impact that they wanted to, and what did they do about it? And then how they arrived at the goal that they were looking at?

Denis Čahuk: Well, I can, I can list many. I’ll focus on two. One is, generally the team knows what the problem is. Generally, the team knows already: hey, yeah, we don’t have enough tests, or, ah, yeah, we keep missing deadlines, or our relationship with stakeholders is very bad, and they just communicate with us through, you know, strict roadmaps and strict deadlines and strict expectations. Um, that’s a problem to be solved. It doesn’t have to be that way. So if you know what the problem is, there’s no point measuring, because there’s no further insight to be gained. Yeah, this is a problem, but hey, now let’s get distracted with this insight? No, like, you know what the problem is, you can just decide what to do, and then if you need help along the way, maybe measurements would help. Or maybe measurements on an organizational level would help, not just engineering. Um, or you bring on a coach to sort of help you, you know, gain clarity. That’s one aspect. If you know what the problem is, you don’t need to measure. Usually people ask me, “Denis, what should I measure? Should I introduce DORA metrics?” And I usually tell them, “Oh, what’s the main problem? What’s the problem this week?” “Oh yeah, a lot of PRs are waiting around and we’re not writing enough tests.” Okay, that’s actionable. Like, that’s enough. Do you want more? Do you need a bigger problem? Because then you just, you know, spend a lot of time looking for a problem that you wish was bigger than that, so that you wouldn’t have to act, right? Because that’s just resistance, that’s just either your ego, or trying to play it safe, or trying to put it into the next quarter when maybe there’s less stress, and, right, there isn’t. That’s one aspect.

The other aspect, you know, is this idea of.. how did you phrase it? Approaches that work that aren’t generally recognized as approaches that work. You know, I always say that everything we do nowadays is basically a proxy to eliminating handoffs, right? Getting the engineers very close to the customer and, um, you know, getting closer to continuous delivery. Continuous integration at the very minimum, but continuous delivery, right? So that when software is ready, it’s releasable on demand, and there isn’t this long waiting that Paulo mentioned earlier, right? Like, this is just a general form of waste. Um, but potentially something that both of these cases handle unintuitively, that I like to bring in as a sort of more qualitative metric, is the reliability of the team. You know, we like to measure the reliability of systems, and the whole Scrum movement introduced this idea of velocity, and I like to bring in this idea of, let’s say you want to be on time as a leader. Um, I’m interested in proving the theory that, hey, if you want to be on time, you probably need to be on time every week, and in order to be on time in the week, you probably need to be on time every day. So if you don’t know what an on-time day looks like, there’s no point planning roadmaps and saying that deadlines are a primary focus. Maybe the team should be planning in smaller batches, not trying to chase higher accuracy in something very large. And what I usually use as a proxy metric is just to say, how risky is your word? Right, so how reliable is your promise? Uh, we don’t measure how fast the team is moving. What I like to measure with them is to say, okay, when do you think this will be done? They say Friday. Okay. If you’re right, Monday needs to look like this. Tuesday needs to look like this. Let me just try to reverse engineer it from that. It’s very basic. And then I’m trying to figure out how many days or hours or minutes into a plan they’re off-track. 
I don’t care about velocity. So no proxy metrics. I’m just interested, if they create like a three-month roadmap, in how many hours into the three-month roadmap they’re off-course. Because that’s what I’m interested in, because that’s actionable. Okay. You said three months from now, this is done. One month from now, there’ll be a milestone. But yesterday you said that today something would be done. It’s not done. Maybe we should work on that. Maybe we should really get down to a much smaller batch size and just try to make the communication structures around the team building stuff more reliable. That would de-stress a lot of people at the same time and sort of reduce anxiety. And maybe the problem is that you have a building-to-deploying gap, and maybe that’s also part of the problem. It usually is. And then there might be a planning-to-building gap that also needs to be addressed. And then we basically come down to this idea of continuous delivery, extreme programming, you know: let’s plan a little bit. Let’s build a little bit. Let’s test it. Let’s test our assumptions. And behind the scenes, once we do that for a few days, once we have evidence that we’re reliable, then let’s plan the next two weeks. Only when we have shown evidence that the team understands what a reliable work week looks like for them. If they’ve never experienced that and they’ve been chasing their own tail deadline after deadline, um, there’s not much you can do with such a team. And a lot of people just need a wake-up call to see that, “Hey, you know what? I actually don’t know how to plan. You know, I don’t know how to estimate.” And that’s okay, as long as you have this intention of trying to improve or trying to look for alternatives to become better.

Kovid Batra: I think my next question would be, uh, like when you’re talking about, uh, this aspect in the teams, how do you exactly go about having those conversations, or having that visibility on a day-to-day basis? Like most of the things that you mentioned were qualitative in nature, right? So how do you exactly go about doing that? Like if someone wants to understand and deploy the same thought process in a team, how should they actually do it and measure it?

Denis Čahuk: Well, from a leader’s perspective, it’s very simple, you know, because I can just ask them, “Hey, is it done? Is it on anybody’s mind today?” Um, and they might tell me, “Yeah, it’s done, but not merged.” Or, “It’s waiting for review, but it’s done, but it’s kind of waiting for review.” And then that might be one possible answer. Um, it doesn’t need to be qualitative in the sense that I need a human for that. What, you know, what I’m looking for is precision. Like, is it, is it definitively done? Was there an increment? You know, did we test our assumptions? What, is there a releasable artifact? Is it possible to gain feedback on this?

Kovid Batra: Got it.

Denis Čahuk: Did you, did you talk to the team to establish if we deploy this as soon as possible, what question do we want to answer? Like what feedback, what kind of product feedback are we looking for? Or are we just blindly going through a list of features? Like, are we making improvements to our software or is somebody else who is not an engineer? Maybe that’s the problem, right? So it’s very difficult to pinpoint to like one generic thing. But a team that I worked with, the best proxy for these kinds of improvements from the leader was how ready they felt to be interrupted and get course correction. Right? Because the main thing with priorities in a team is that, you know, the main unintuitive thing is that you need to make bets and you need to reduce the cost of you being wrong, right? So the business is making bets on the market, on the product and working with this particular team with these particular individuals. The team is making bets with implementation details to a choice of technology, ratio between keeping the lights on, technical debt and new features, support and communication styles, you know, change of technology maybe. Um, so you need to just make sure that you’re playing with the market. The upside will take care of itself. You just need to make sure that you’re not making stupid mistakes that cost you a lot, either in opportunity or actual fiscal value. Um, but once you got that out of the way, you know, sky’s the limit. A lot of engineers think that we’re expensive. It’s large projects. We gotta get it right the first time. So they try to measure how often they got it right the first time, which is silly. And usually that’s where most measurements go. Are we getting it right the first time? We need to do this to get it right the first time, right? So failure is not an option. Whereas my mantra would be, no, you are going to fail. 
Just make sure it happens sooner rather than later and with as little intensity as possible so that we can act on it while there’s still time.

Kovid Batra: Got it. Makes sense. Makes sense. All right. Uh, Paulo, I think, uh, we are just running short on time, but I really want to ask this question to you as well, uh, just like Denis has shared something from his experience, and that’s really interesting to know, like how qualitatively you can measure or see things every time and solve for those. In your experience, um, you have, uh, recently joined this startup as a CTO, right? So maybe, how does it feel being a new CTO, and what things come to your mind when you think of improving productivity in your teams and building a team which is impactful?

Paulo André: Yeah, I joined this company as a CTO six months ago. It’s been quite a journey, so it’s very fresh in my mind. And of course, every team is different and every starting point is different and so on, but ultimately, I think the pattern that I’ve always seen in my career is that some things are just not connected and the work is not visible and there’s a lack of clarity about what’s valuable, uh, about what are the goals, what are the priorities, how do we make decisions, like all of that stuff, right? And so, every hour that I’ve been putting into this role with my team so far in these six months has been really either about creating clarity or about developing competence to the extent that I can. And so the development of competence is basically: every opportunity is an opportunity to learn, both for myself and for anyone else in the team. And I can try to leverage my coaching skills, um, in making those learning conversations effective. And then the creation of clarity, in my role, I happen to lead both product and engineering, so I cannot blame somebody else for lack of clarity on what the product should be or where it should go. It’s on me. And I’ve been working with some really good people in terms of what is our product strategy? What do we focus on and not focus on? Why this and not that? What are we trying to accomplish? What are those outcomes that we were talking about that we want to drive, right? So all of that is hard to answer. It’s deceptively difficult to answer. But at the end of the day, it’s what’s most important for that engineering productivity piece, because if you have an engineering team that is, you know, doing wasted work left and right, or things are not connected, and they’re just, like, not clear about what they should be doing in the first place, that doesn’t sound like the ingredients for a productive team, right? 
And ultimately, the product side needs to answer to a large extent those, those difficult questions. So obviously, I could go into a lot of specific details about how we’re doing this and that. I don’t think we have at least today the time for that. Maybe we can do a deep dive later. But ultimately, it’s all about how do I create clarity for everyone and for myself in the first place so I can give it and then also developing the competence of the people that we do have. And that’s the increasing the capacity to win that I was talking about earlier. And if we make good progress on these two things, then we can give a lot of control and autonomy to people because they understand what we’re going for, and they have the skills to actually deliver on that, right? That’s, that’s the holy grail. And that’s motivation, right? That’s happiness. That’s a moment at work that is so elusive. But at the end of the day, I think that’s what we’re, we’re working towards.

Kovid Batra: Totally. I’d still, uh, want to deep dive a little bit into any one of those, uh, instances. Like, if you have something to share from the last six months where you actually prioritized this transparency for the team, uh, how exactly you executed it, a small instance or maybe a small meeting that you have had and..

Paulo André: Very simple example. Very simple example. Um, one of the things that I immediately noticed in the team is that a lot of the work that was happening was just not visible. It was not on a ticket. It was not in a Notion document. It was nowhere, right? Because knowledge was in people’s minds, and so there were a lot of, like, gaps of understanding, and things would just take a lot longer than they think they should. And so I already mentioned my bias towards lean software development. What does that mean? First and foremost, make the work visible, because if you don’t make the work visible, you have no chance of optimizing the process and getting better at what you do. So I’ve been hammering this idea of making the work visible. I think my team is sick of me asking: is there a ticket for it? Did you create a ticket for it? Where is the ticket? And so on. Because the way we work with Jira, that’s where the work becomes visible. And I think now we got to a point where this just became second nature, uh, for all of us. So that would be one example where it’s like a very basic, fundamental thing. Don’t need to measure anything. Don’t need complicated KPIs and whatnot. What we do need is to make the work visible so we can reason about it together. That’s it.

Kovid Batra: Makes sense. And anything which you found very unique about this team and you took a unique approach to solve it? Any, anything of that sort?

Paulo André: Unique? Oh, that’s a really good question. I mean, everyone is different, but at the end of the day, we’re all human beings trying to work together towards something that is somehow meaningful. And so from that perspective, frankly, no real surprises. If anything, I’m really grateful for the team to be so driven to do better, even if, you know, we lack the experience in many areas that we need to level up. Um, but as far as something being really unique, I think maybe one tough technical challenge our team really has to deal with is around email deliverability, for example. That’s not necessarily unique; of course, there are other companies that need to grapple with the exact same problems. But in my career, that’s not a particular topic that I have had to deal with a lot. And I’m seeing, like, just how complex and how tricky it is to get right. Um, and it’s an always-evolving sort of landscape, for those that are familiar with that type of stuff. So, yeah, not a good answer to your question. There’s nothing unique. It’s just that, yeah, what’s unique is the team. The team is unique. There’s no other team like this one, like these individuals doing this thing right here, right now, in this company in 2024.

Kovid Batra: Great, man. I think your team is gonna love you for that. All right. I think there will be a lot more questions from the audience now. We’ll dedicate some time to that. We’ll take a minute’s break here and we’ll just gather all the questions that the audience has put in. Uh, though we are running a little out of time, is it okay for you guys to like extend for 5–10 minutes? Perfect. All right. Uh, so we’ll take a break for a minute and, uh, just gather the questions here.

All right. I think time to get started with the questions. Uh, I see a lot of them. Uh, let’s take them one by one on the screen and start answering those. Okay. So the first one is coming from, uh, Kshitij Mohan. That’s, uh, the CEO of Typo. Hi, Kshitij. Uh, everything is going good here. Uh, so this is for Denis. Uh, as someone working at the intersection of engineering and cloud technologies, how do you prioritize between technical debt and innovation?

Denis Čahuk: It’s a great question. Hey, Kshitij. Well, I think first of all, I need to know whether it’s actual debt or whether it’s just crap code. You know, like, a crappy implementation is not an excuse to call it debt, right? So for you to have debt, three things need to have happened. At some point in the past, you had two choices, A or B, and you made a choice with insufficient knowledge. And later on, you figured out that either something in the market changed, or timing changed, or we gained more knowledge, and we realized that now the other one is better, for whatever reason. I mean, it’s not necessarily that it was wrong at the time, but we now have more information and we need to go from A to B. Uh, originally we picked A. Now you also need to know how much it costs to go from A to B and how much you stand to gain or trade if you decide not to do that, right? So maybe going from A to B now costs you two months and ten thousand euros, and doing it later next year, maybe it’s going to double the cost and add an extra week. That’s technical debt. Like, the nature of that decision, that’s technical debt. If you made the wrong decision in the past and you know it was the wrong decision and now you’re trying to explore whether you want to do something about it, that’s not technical debt. That’s just, you know, you seeking excuses to not do a rewrite. So first of all, you need to identify: is it debt? If it is debt, you know the cost, you know the trade-off, and you can either put it on a timeline or you can measure some kind of business outcome with it. So that’s one side.

On the, on the innovation side, you need to decide: what is innovation, exactly? You know, is it like an investment? Is it a capital expense, where I am building a laboratory and we’re going to innovate with new technologies, and then once we build them, we will find, um, sort of private market applications for them or B2B applications for them? Like, is it that kind of innovation? Or is innovation an umbrella term for new features, right? Cause that’s operational. That’s much closer to an operational expense, right? So it’s just something you do continuously and you deliver continuously, and that continuous feature development will also produce new debt. So once you’ve got these two sides figured out, then it’s a very simple decision. How much debt can you live with? How fast are you creating new debt compared to how fast you’re paying it off? And what can you do to get rid of all the non-debt, all the crap, essentially? That’s it, you know. Then you just make sure that you balance out those activities and that you consistently do them. It isn’t just, “Oh yeah, we do innovation for nine months and then we pay off debt.” That usually doesn’t go very well.
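Denis’s A-to-B framing reduces to a small cost comparison. A back-of-the-envelope sketch using the hypothetical figures from his example (two months and ten thousand euros now; double the cost plus an extra week of work if deferred a year):

```python
# Sketch of the A-to-B trade-off: pay the debt down now, or defer it.
# All figures are the hypothetical ones from the example above.
cost_now_eur = 10_000
duration_now_weeks = 8            # roughly two months

cost_later_eur = 2 * cost_now_eur           # deferring doubles the cost
duration_later_weeks = duration_now_weeks + 1  # and adds an extra week

deferral_premium_eur = cost_later_eur - cost_now_eur
print(f"Paying later costs €{deferral_premium_eur} more "
      f"and {duration_later_weeks - duration_now_weeks} extra week(s) of work")
```

The point of writing it down is that it becomes a business decision with a price tag, which is what distinguishes real technical debt from "crap code" in Denis's framing: debt has a known switching cost and a known cost of deferral.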

Kovid Batra: I think this is coming from a very personal pain point. Now we’re really moving towards the AI wave and building things at Typo. That’s where Kshitij is coming from. Uh, totally. I think, thanks, thanks, Denis. I think we’ll move on to the next question now. Uh, that’s from, uh, Madhurima. Yeah. Hey Paulo, this one’s for you. Uh, which metric do you think is often overlooked in engineering teams but has significant impact on long-term success?

Paulo André: Yeah, that’s a great question. I’m going to, I’m going to give a bit of a cheeky answer and I’m going to say, disclaimer, this is not a metric that I track with, we track with, with my team, and it’s also not, I don’t know, a very scientific way or concrete way of measuring it. However, to the question, what is overlooked in engineering teams and has significant long-term impact, or success, on long-term success, that’s what I would call ‘mean time to clarity’. How quickly do we get clear on where we need to be and how do we get there? Right? And we don’t have all the answers upfront. We need to, as Denis mentioned earlier, experiment and iterate and learn and we’ll get smarter, hopefully, as we go along, as we learn. But how quickly we get to that clarity in every which way that we’re working. I think that’s, that’s the one that is most important because it has implications, right? Um, if we don’t look at that and if we don’t care about that, are we doing what it takes to create that clarity in the first place? And if that’s not the case, the waste is going to be abundant, right? So that’s the one I would say as an engineering leader, how do I get for myself all the clarity that I need to be able to pass it along to others and create that sense that we know where we’re going and what we don’t know, we have the means to learn and to keep getting smarter.

Kovid Batra: Cool. Great answer there. Uh, let’s move on to the next one. I think this one is again for Paulo. Yeah.

Paulo André: Okay, so you know what? Maybe this is going to be a bit, uh, I don’t know what to call it, but considering that I don’t think the most important things are gonna change in the next five years, um, AI notwithstanding, and what are the most important things? It’s still a bunch of people working together and depending on each other to achieve common goals. We may have less people with more artificial intelligence, but I don’t think we’re anywhere near the point where the artificial intelligence just does everything, including the thinking for itself. And so with that in mind, it’s still back to what I said earlier, um, in the session. It’s really about how is the work flowing from left to right? And I don’t know of a better, um, sort of set of metrics than the DORA metrics for this, particularly cycle time and deployment frequency and that sort of stuff that is more about the actual flow. Um, but like, you know, let’s not get into the DORA metrics. I’m sure the audience here already knows a lot about it, but that’s, that’s, I think, what, what is the most important, um, and will continue to be critical in the next five years, um, that’s, that’s basically it.

Kovid Batra: Cool. Moving on. All right. That’s again for, oh, this one, Denis. How do you ensure cloud solutions remain secure and scalable while addressing ever-changing customer demands?

Denis Čahuk: Well, there’s two parts to that question. You know, one is security, the other one is ever-changing customer demands. I think, you know, security will be a sort of an expression of the standard, or at least some degree of sensible defaults within the team. So the better question would be, what do engineers need to not have to consciously, to not have to constantly and consciously and deliberately think about security, right? So do they have support by, are they supported by a security expert? Do they have platform engineering teams that are supporting with security initiatives, right? So if there’s a product team that’s focusing on product, support them so that they also don’t have to become an expert in security, cause that’s where all the problems start, where you basically have a team of five and they need to wear 20 hats and they start triaging the hats and making trade-offs in security, you know. And usually, usually large teams that are overwhelmed, love doing privacy or security trade-offs because they don’t have skin in the game. The business has skin in the game, right? And then when you individuate incentive to such a degree that it becomes dysfunctional, um, security usually doesn’t bode well. Um, at least not till there’s some incident or maybe some security review or some inspection, et cetera.

So give the teams what they need. If they’re not a security expert, provide them support. Um, and the same thing with scalability. Scalability is also something that can benefit from tighter collaboration, even more so than security. Um, so just make sure that the team is able to express itself as a team through pair programming or having more immediate conversations rather than just, you know, asynchronous code review conversations or stand-up conversations way at the end of the cycle. At the end of the cycle when the code is written and it’s going into merging or QA, it’s too late, the code is written, right? So you want to preempt that. The solution comes from the team being able to express itself as a team rather than just a group of individuals pursuing individual goals.

Kovid Batra: Cool. I think, uh, we have a few more questions, but running way out of time now. Uh, maybe we can take one more last, last question and then we can wrap it up.

Paulo André: Sounds good. Okay, so this one is for me, right? How do I approach, uh, integrating new tools and frameworks into engineering workflows without disrupting productivity? That, that final piece is interesting. I think it also starts with how we frame this type of stuff. So there is a cost to making improvements. I don’t think we can have our cake and eat it, too, necessarily. And it’s just part of the job, and it’s part of what we do. And so, um, you know, for example, if you take the time to have a regular retrospective with your team, right, is that going to impact productivity? I mean, you could be coding for an extra hour every two weeks. It’s certainly going to have some impact. But then it also depends on what is the outcome of that retrospective, and how much does it impact the long-term, um, you know, capacity to win of the team. So with that in mind, what I would say is that the most important thing I find is that you don’t just, again, as an engineering leader, as an engineering manager, you just don’t, you don’t just download certain practices and tools and frameworks on the teams. You always start from what are we trying to solve here and why does it matter, and get that shared understanding to the point where we’re all looking at the same problem roughly the same way. We can then disagree on solutions, but we agree that this is a problem worth solving right now, and we’re gonna go and do that. And so the tools and the frameworks are kind of like downstream from that. Okay, now what do we need to gain the insight? Oh, now what do we need to solve the problem? Then we can talk about those things. Okay? So as an example, one thing I’m working on now with my team, I mentioned this earlier, I believe, is like, uh, a bit of a full-on product discovery and delivery, um, process, right? That includes a product strategy, um, that shouldn’t change that much that often. And then there are a lot of tools and frameworks that we can use.
Tools, we use three different types of projects in Jira, for example. And when it comes to frameworks, we’re starting to adopt something called opportunity solution trees, which is just a fancy way of saying what outcomes are we trying to generate, what opportunities do we see to, to get there and what are the solutions that can capitalize on these opportunities, right? That sort of thing. But it all starts with we need to gain clarity about where we’re gonna go as a business and as a product and everything kind of comes downstream from that, right? So I think if you take the time and this is where I’ll leave it. If you take the time and I think you should to start there and to do this groundwork and create this shared context and understanding with your teams, everything else downstream becomes so much easier because you can connect it to the problem that you’re solving. Otherwise, you’re just talking solutions for problems that most people will think they are inexistent or they just look completely different, right? And this takes work, this takes time, this takes energy, this takes attention, takes all of those things. But frankly, if you ask me, that’s the work of leadership. That’s the work of management.

Kovid Batra: Great. Well said, Paulo. I think Denis has a point to add here.

Denis Čahuk: Yeah, I had a conversation this week with one of the CEOs and founders of one of Ljubljana, Slovenia’s biggest agencies, because we were talking about this. And, and, and they asked me this question, they said, “Denis, you don’t have a catalog. Like, what do you do? Like, what does working with you look like? Do we do a workshop or something?” And I asked, “Do you want to do a workshop?” And I saw it on their face, they said, “Well..” I told them, “Yes, exactly, exactly. That’s why I don’t have a catalog, because the workshops are this: I will show you how a great team works, right? I will give you all of this fancy storytelling about how productive teams work, and then you’re like, ‘Great. Cool. But we’re not that and we can’t have that in our team.’ So great, now I’d go away feeling demoralized, right? Like that’s not a good way of approaching working with that team.” I always tell them, “Look, I don’t know what will help you. You probably also don’t know what will help you. We need to figure it out together. But generally, what’s more important than figuring out how to help you is to figure out how much you are willing to invest consistently in improvement. Because maybe I teach you something and you only have 10 minutes. That’s the wrong way about it, right? I need to ask you, how much time do you have consistently every week? 15 minutes? Okay, then I need to teach you something that you can put in practice in 15 minutes. Otherwise, I’m robbing you of your time. Otherwise, I’m wasting your time. If you have three-hour retrospectives and we’re putting nothing into action, I’m wasting your time, right? So we need to figure out: what is consistent for you? What kind of improvement, how intense do you want it? How do you know if you’re making progress?”

Those two are the most important things, because I always come to these kinds of questions about new tools and frameworks because people love asking me about, “Hey, Denis. Can you do a TDD workshop?”, “Denis, can you do a domain-driven design workshop?”, “Denis, can you help us do event storming?” And I always say, “If what you need is that one workshop, it’s not going to solve any problems because I’m all about consistent improvement, about learning, about growing your team, about, you know, investing into the people, not about changing, you know, changing some label or some other label.” And I always come back to the mantra of what can you do consistently starting this week so that the product and the team is much better six months from now? That’s the big question. That’s, that should be the focus. Cause if you need to learn something, you know, go do a certification that takes you a year to perform correctly, and then you need to renew it every year. That’s nonsense. This week, what can we do this week? Start this week, apply this week, and then consistently grow and apply every single week for the next six months. That would be huge. Or you can go to a conference and send everybody on vacation and pretend the workshop was very productive. Thank you.

Kovid Batra: Perfect. I think that brings us to the end of this episode. Uh, I think the next episode that we’re going to have would be in the next year, which is not very far. So, before we depart, uh, I think I would like to wish the audience, uh, a very Happy New Year in advance, a Merry Christmas in advance. And to both of our panelists also, Paulo, Denis, thank you, thank you so much, uh, for taking out the time. It was really great talking to you. I would love to have you both again here, talking more in depth about different topics and how to make teams better. But for today, that’s our time. Anything that you would like to, that you guys would want to add, please feel free. All right. Yeah, please go ahead.

Denis Čahuk: Thanks for inviting us.

Paulo André: Yeah, exactly. From my side, I was just going to say that thanks for having us. Thanks also to the audience that has put up with us and also asked very good questions, to be honest. Unfortunately, we couldn’t get to a few more that are still there that I think are very good ones. Um, but yeah, looking forward to coming back and deep diving into, into some of the topics that we talked about here.

Kovid Batra: Great. Definitely.

Denis Čahuk: And thank you, Kovid, for inviting us and for introducing us to each other, and to everybody backstage and at Typo, who are probably doing a lot of annoying groundwork in the background that makes all of this so much more enjoyable. Thank you.

Kovid Batra: All right, guys. Thank you. Thank you so much. Have a great evening ahead. Bye!

Your engineering team is the biggest asset of your organization. They work tirelessly on software projects, despite tight deadlines. 

However, there could be times when bottlenecks arise unexpectedly, and you struggle to get a clear picture of how resources are being utilized. 

This is where an Engineering Management Platform (EMP) comes into play.

An EMP acts as a central hub for engineering teams. It transforms chaos into clarity by offering actionable insights and aligning engineering efforts with broader business goals.

In this blog, we’ll discuss the essentials of EMPs and how to choose the best one for your team.

What are Engineering Management Platforms? 

Engineering Management Platforms (EMPs) are comprehensive tools that enhance the visibility and efficiency of engineering teams. They serve as a bridge between engineering processes and project management, enabling teams to optimize workflows, track how they allocate time and resources, monitor performance metrics, assess progress on key deliverables, and make informed decisions based on data-driven insights. This in turn helps identify bottlenecks, streamline processes, and improve the developer experience (DX). 

Core Functionalities 

Actionable Insights 

One of an EMP's main functions is transforming raw data into actionable insights. It does this by analyzing performance metrics to identify trends, inefficiencies, and potential bottlenecks in the software delivery process. 

Risk Management 

An Engineering Management Platform supports risk management by identifying potential vulnerabilities in the codebase, monitoring technical debt, and assessing the impact of changes in real time. 

Team Collaboration

These platforms foster collaboration between cross-functional teams (developers, testers, product managers, etc.) and integrate with team collaboration tools like Slack, Jira, and MS Teams. They promote knowledge sharing and reduce silos through shared insights and transparent reporting. 

Performance Management 

EMPs provide metrics to track performance against predefined benchmarks and allow organizations to assess development process effectiveness. By measuring KPIs, engineering leaders can identify areas of improvement and optimize workflows for better efficiency. 

Essential Elements of an Engineering Management Platform

Developer Experience 

Developer Experience refers to how easily developers can perform their tasks. When the right tools are available, workflows are streamlined, and a good DX leads to increased productivity and job satisfaction. 

Key aspects include: 

  • Streamlined workflows such as seamless integration with IDEs, CI/CD pipelines, and VCS. 
  • Metrics such as work in progress (WIP) and merge frequency to identify areas for improvement. 
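To make these two metrics concrete, here is a minimal sketch of how they could be computed from pull-request records. The data structure and function names are hypothetical illustrations, not Typo's API; a real EMP would pull these records from the VCS provider.

```python
from datetime import date

# Hypothetical pull-request records; a real platform ingests these from the VCS API.
prs = [
    {"author": "ana", "opened": date(2024, 6, 3),  "merged": date(2024, 6, 4)},
    {"author": "ben", "opened": date(2024, 6, 5),  "merged": None},  # still open -> WIP
    {"author": "ana", "opened": date(2024, 6, 10), "merged": date(2024, 6, 12)},
    {"author": "cid", "opened": date(2024, 6, 11), "merged": None},  # still open -> WIP
]

def merge_frequency(prs, weeks):
    """Average number of merged PRs per week over the observed window."""
    merged = sum(1 for pr in prs if pr["merged"] is not None)
    return merged / weeks

def work_in_progress(prs):
    """Count of PRs opened but not yet merged."""
    return sum(1 for pr in prs if pr["merged"] is None)

print(merge_frequency(prs, weeks=2))  # 1.0 merged PRs per week
print(work_in_progress(prs))          # 2 items in progress
```

A rising WIP count alongside a flat merge frequency is the classic signal that work is piling up faster than it ships.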

Engineering Velocity 

Engineering Velocity can be defined as the team’s speed and efficiency during software delivery. To track it, the engineering leader must have a bird’s-eye view of the team’s performance and areas of bottlenecks. 

Key aspects include:

  • Monitor DORA metrics to track the team’s performance 
  • Provide resources and tools to track progress toward goals 
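As an illustration of the DORA metrics mentioned above, the sketch below computes two of them, deployment frequency and change failure rate, from a deployment log. The record format is a hypothetical assumption; real platforms derive it from CI/CD tooling.

```python
from datetime import datetime

# Hypothetical deployment log; a real EMP ingests this from the CI/CD pipeline.
deployments = [
    {"at": datetime(2024, 6, 3, 10),  "failed": False},
    {"at": datetime(2024, 6, 5, 16),  "failed": True},   # caused a production incident
    {"at": datetime(2024, 6, 10, 9),  "failed": False},
    {"at": datetime(2024, 6, 12, 14), "failed": False},
]

def deployment_frequency(deployments, window_days):
    """Average deployments per day over the observation window."""
    return len(deployments) / window_days

def change_failure_rate(deployments):
    """Share of deployments that led to a failure in production."""
    failed = sum(1 for d in deployments if d["failed"])
    return failed / len(deployments)

print(f"{deployment_frequency(deployments, 14):.2f} deploys/day")   # 0.29 deploys/day
print(f"{change_failure_rate(deployments):.0%} change failure rate")  # 25% change failure rate
```

Tracked over time, these two numbers give the bird's-eye view described above: frequency shows speed, failure rate shows whether that speed is sustainable.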

Business Alignment 

Engineering management software must align with broader business goals to keep engineering moving in the right direction. This alignment is necessary for maximizing the impact of engineering work on organizational goals.

Key aspects include: 

  • Track where engineering resources (time and people) are being allocated. 
  • Improved project forecasting and sprint planning to meet deadlines and commitments. 
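A simple way to picture resource-allocation tracking is a percentage breakdown of logged engineering time per work category. The categories and numbers below are hypothetical; an EMP would derive them from issue-tracker data.

```python
# Hypothetical engineering hours logged per work category over one quarter.
logged_hours = [
    ("new_features", 120),
    ("tech_debt", 40),
    ("bugs", 25),
    ("support", 15),
]

def allocation_breakdown(entries):
    """Return the percentage of total engineering time spent per category."""
    total = sum(hours for _, hours in entries)
    return {category: round(100 * hours / total, 1) for category, hours in entries}

print(allocation_breakdown(logged_hours))
# {'new_features': 60.0, 'tech_debt': 20.0, 'bugs': 12.5, 'support': 7.5}
```

A breakdown like this makes alignment conversations concrete: if the business priority is stability but 60% of time goes to new features, the mismatch is immediately visible.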

Benefits of Engineering Management Platform 

Enhances Team Collaboration

An engineering management platform offers end-to-end visibility into developer workload, processes, and potential bottlenecks. By integrating with platforms like Slack or MS Teams, it gives the software engineering team centralized tools to communicate and coordinate seamlessly. It also gives engineering leaders and developers sufficient, data-driven context for 1:1s. 

Increases Visibility 

An EMP offers 360-degree visibility into engineering workflows so that all stakeholders understand project statuses, deadlines, and risks. This helps identify blockers and monitor progress in real time. It also provides engineering managers with actionable data to guide and supervise engineering teams.

Facilitates Continuous Improvement 

EMPs allow developers to adapt quickly to changes in project demands or market conditions. They foster post-mortems and continuous learning, enabling team members to learn retrospectively from successes and failures. 

Improves Developer Well-being 

EMPs provide real-time visibility into developers' workloads, allowing engineering managers to understand where team members' time is being invested. This helps managers protect developers' schedules and flow state, reducing burnout and improving workload management.

Fosters Data-driven Decision-Making 

Engineering project management software provides actionable insights into a team’s performance and complex engineering projects. It further allows the development team to prioritize tasks effectively and engage in strategic discussions with stakeholders. 

How to Choose an Engineering Management Platform for Your Team? 

Understanding Your Team’s Needs

The first and foremost step is to assess your team's pain points. Identify current challenges, such as tracking progress, communication gaps, or workload management. Also consider team size and structure: whether your team is small or large, distributed or co-located will influence the type of platform you need.

Be clear about what you want the platform to achieve, for example: improving efficiency, streamlining processes, or enhancing collaboration.

Evaluate Key Categories

When choosing the right EMP for your team, consider assessing the following categories:

Processes and Team Health

A good EMP supports efficient workflows and provides a multidimensional picture of team health, including well-being, collaboration, and productivity.

User Experience and Customization 

The Engineering Management Platform must have an intuitive, user-friendly interface for both technical and non-technical users. It should also allow customization of dashboards, repositories, and metrics to cater to your specific needs and workflows. 

Allocation and Business Value 

The right platform helps in assessing resource allocation across various projects and tasks such as time spent on different activities, identifying over or under-utilization of resources, and quantifying the value delivered by the engineering team. 

Integration Capabilities 

Strong integrations centralize the workflow, reduce fragmentation, and improve efficiency. These platforms must integrate seamlessly with existing tools, such as project management software, communication platforms, and CRMs.

Customer Support 

The platform must offer reliable customer support through multiple channels such as chat, email, or phone. You can also take note of extensive self-help resources like FAQs, tutorials, and forums.

Research and Compare Options 

Research the various EMPs available in the market. Then, based on your key needs, shortlist the platforms that fit your requirements. Use resources like reviews, comparisons, and recommendations from industry peers to understand real-world experiences. You can also schedule demos with shortlisted providers to explore features and usability in detail. 

Conduct a Trial Run

Opt for a free trial or pilot phase to test the platform with a small group of users and get a hands-on feel. Afterward, gather feedback from your team to evaluate how well the tool fits into their workflows.

Select your Best Fit 

Finally, choose the EMP that best meets your requirements based on the above-mentioned categories and feedback provided by the team members. 

Typo: An Engineering Management Platform 

Typo is an effective engineering management platform that offers SDLC visibility, developer insights, and workflow automation to help teams build better software faster. It integrates seamlessly with your existing tech stack, including Git version control, issue trackers, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo also has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it provides a 360-degree view of the developer experience, i.e., it captures qualitative insights and offers an in-depth view of the real issues.

Conclusion

An Engineering Management Platform (EMP) not only streamlines workflow but transforms the way teams operate. These platforms foster collaboration, reduce bottlenecks, and provide real-time visibility into progress and performance. 


Webinar: 'Unlocking Engineering Productivity' with Paulo André & Denis Čahuk

In the first session of the ‘Unlocking Engineering Productivity’ webinar series, host Kovid Batra from Typo welcomes two prominent engineering leaders: Paulo André, CTO of Resquared, and Denis Čahuk, a technical coach and TDD/DDD expert.

They discuss the importance of engineering productivity and share insights about their journeys. Paulo emphasizes the significance of collaboration in software development and the pitfalls of focusing solely on individual productivity metrics. Denis highlights the value of consistent improvement and reliability over individual velocity. Both guests underline the importance of creating clarity and making work visible within teams to enhance productivity. Audience questions address topics such as balancing technical debt with innovation and integrating new tools without disrupting workflows. Overall, the session offers practical strategies for engineering leaders to build effective and cohesive teams.

Timestamps

  • 00:00 — Introduction
  • 00:52 — Meet the Experts: Paulo and Denis
  • 03:13 — Childhood Stories that Shaped Careers
  • 05:37 — Defining Engineering Productivity
  • 11:18 — Why Focus on Engineering Productivity Now?
  • 15:47 — When and How to Measure Productivity
  • 22:00 — Team vs. Individual Productivity
  • 35:35 — Real-World Examples and Insights
  • 37:17 — Addressing Common Engineering Challenges
  • 38:34 — The Importance of Team Reliability
  • 40:32 — Planning and Execution Strategies
  • 45:31 — Creating Clarity and Competence
  • 53:24 — Audience Q&A: Balancing Technical Debt and Innovation
  • 57:02 — Audience Q&A: Overlooked Metrics and Security
  • 01:02:49 — Audience Q&A: Integrating New Tools and Frameworks
  • 01:08:47 — Final Thoughts and Farewell

Transcript

Kovid Batra: All right. Time to get started. Uh, welcome everyone. Welcome to the first episode, first session of our new, all new webinar series, Unlocking Engineering Productivity. So after the success of our previous webinar The Hows and Whats of DORA, we are even more excited to bring you this webinar series which is totally designed to help the engineering leaders become better, learn more and build successful, impactful dev teams. And today with us, uh, we have two passionate engineering leaders. Uh, I have known them for a while now. They have been super helpful, all the time up for helping us out. So let me start with the introduction. Uh, Paulo, Paulo André, uh, CTO of Resquared, a YC-backed startup. He has been the, he has been ex-engineering leadership coach for Hotjar, and he has, he’s an author of the Hagakure newsletter. So welcome to, welcome to the unlocking, uh, engineering productivity webinar, Paulo.

Paulo André: Thanks for having me. It’s a real pleasure to be here.

Kovid Batra: Great. Uh, then we have Denis. Uh, he’s coming to this for the second time. And, uh, Denis is a tech leadership coach, TDD expert, and author of Crafting Tech Teams. And he’s also a guitar player, a professional gamer. Uh, hi, hi, Denis. Welcome, welcome to the episode.

Denis Čahuk: Hi, thanks for inviting me again. Always a pleasure. And Hey, Paulo, it’s our first time meeting on stage.

Paulo André: Good to meet you, Denis.

Kovid Batra: I think I missed mentioning one thing about Paulo. Like, uh, he is like a very, uh, he’s an avid book reader and a coffee lover, just like me. So on that note, Paulo, uh, which book you’re reading these days?

Paulo André: Oh, that’s a good question. Let, let me pull up my, because I’m always reading a bunch of them at the same time, sort of. So right now, I’m very interested, I wonder why in, you know, geopolitical topics. So I’m reading a lot about, you know, superpowers and how this has played out, uh, in history. I’m also reading a fiction book from an author called David Baldacci. It’s this series that I recommend everyone who likes to read thrillers and stuff like that. It’s called the 6:20 Man. So.

Kovid Batra: Great.

Paulo André: That’s what I’m reading right now.

Kovid Batra: So what’s going to be the next superpower then? Is it, is it, is it China, Russia coming in together or it’s the USA?

Paulo André: I’ll tell you offline. I’ll tell you offline.

Kovid Batra: All right. All right. Let’s get started then. Um, I think before actually we move on to the main section, uh, there is one ritual that we have to follow every time so that our audience gets to know you a little more. Uh, this is my favorite question. So I think I’ll, I’ll start with Paulo, you once again. Uh, you have to tell us something from your childhood or from teenage, uh, that defines you, who you are today. So over to you.

Paulo André: I mean, you already talked about the books. I think the reason why I became such a book lover was because there were a ton of books in my house, even though my parents were not readers. So I don’t know, it was more decorative. But I think more importantly for this conversation, I think the one thing about my childhood was when they gifted me a computer when I was six years old. We’re talking about 88, 89 of the type that you still connected to your big TV in the living room. So that changed my life because it came with an instruction manual that had code listings. Then you could type it in and you can see what happens on the screen and the rest is history. So I think that was definitely the most consequential thing that happened in my childhood when you consider how my life and career has played out.

Kovid Batra: Definitely. Cool. Um, Denis, I think the same question to you, man. Uh, what, what has been that childhood teenage memory that has been defining you today?

Denis Čahuk: Oh, you’re putting me on the spot here. I’ll have to come up with a new story every time I join a new webinar. Uh, no, no, I had a similar experience as Paulo. Um, I have an older brother, and our household got our first computer when I was five or six years old, a Commodore 64. So I learned how to code before I could read. Uh, I knew, I knew what keys to press so I could load Donald Duck into the, into the TV. Um, yeah, other than that, when I, when I got a little bit, you know, into the teenage years, I, um, World of Warcraft and playing games online became my passion project when I, when I received access to the internet. Um, so that’s, you know, I played World of Warcraft professionally, semi-professionally for quite a few years, like almost an entire decade, you know, and that, that was sort of parallel with my, with my sort of tech career, because we’re usually doing it in a very large organization, game-wise. Yeah. And that, that, that had a huge influence because it gave me an outlet for my competitiveness.

Kovid Batra: That’s interesting. All right, guys. Thanks. Thanks for sharing this with us. Uh, I think we’ll now move on to the main section and discuss something around which our audience would love to learn from you both. Uh, so let’s, let’s start with the first basic fundamental definition of what productivity, what dev productivity or engineering productivity looks like to you. So Paulo, would you like to take this first? Like, how do you define productivity?

Paulo André: So you start with a very small question, right? Um, you actually start with a million-dollar question. What is productivity? I’m happy to take a stab at it, but I think it’s one of those things that everyone has their own definition. For what it’s worth, when I think about productivity of engineering teams, I cannot decouple it from the purpose of an engineering team. And then ultimately, the way I see it is that an engineering team serves a business and serves the users of that business in case it’s a product company, obviously, um, but any, any kind of company kind of has that as the delivery of value, right? So with that in mind, is this team doing their part in the delivery of value, whatever value is for that business and for those users, right? And so having that sort of frame in mind, I also break it down in my mind, at least, in terms of like winning right now and increasing our capacity to win in the future. So a productive team is not just a team that delivers today, but it’s also a team that is getting better and better at delivering tomorrow, right? And so productivity would be, are we doing what it takes to deliver that value regardless of the output? Um, it is necessary to have output to have results and outcomes, but at the end of the day, how are we contributing to the outcomes rather than to the, um, the just purely to the outputs? And the reason why I bring this up has to do obviously with sometimes you see the obsession about things like story points and you know, all of that stuff that ultimately you can be working a lot, but achieving very little or nothing at all. So, yeah, I would never decouple, um, the delivery of value from how well an engineering team is doing.

Kovid Batra: Perfect. I think very well framed here and the perspective makes a lot of sense. Um, by the way, uh, audience, uh, while we are talking, discussing this EP, please feel free to shoot out all the questions that you have in the comments section. We’ll definitely be taking them at the end of the session. Uh, but it would be great if you could just throw in questions right now. Well, this was an advice from Denis, so I wouldn’t want to forget this. Okay. Uh, I think coming back, Denis, what’s your take on, uh, productivity, engineering productivity, dev productivity?

Denis Čahuk: Well, as Paulo said, that’s a million-dollar question. I think, coming from a more analytical, more data-driven perspective, we like to use financial analogies and metaphors a lot for things like technical debt and, you know, story points. It’s all about estimating something, you know, the value of something, or the scale of something, the scope of something. I think two metaphors in particular are very useful for productivity. One is, you know, how risky is the team itself? And risk can come from many different places. It can be their methodologies, their personalities, the age of the company, the maturity of the company. The project can be risky. The timing on the market can be risky, right? But there is an inherent risk coming from the team itself. That’s what I mean. So how risky is it to work with this team in particular? Uh, and the other thing is to what degree does the team reason about, um, “I will produce this output for this outcome.” versus “I need to fill my schedule with activity because this input is demanded of me.” Right? So if I use the four pillars that you probably know from business model canvases, activity, input, output, outcome, um, a productive team would not be measuring productivity per se. They will be more aligned with their business, aligned with their product, and focusing on which of their outputs can provide what kind of outcomes for the business, right? So it’s not so much about measuring it or discussing it. It’s more about, you know, are we shifting our mentality far enough into the things that matter, or are we chasing our own tail, essentially, um, protecting our calendars and making sure we didn’t over-promise or under-promise, etc.?

Kovid Batra: Got it. Makes sense.

Paulo André: Can I just add one, one last thing here, because Denis got my, my brain kinda going? Um, just to make the point that I think the industry spends a lot of time thinking about what is productivity and trying to define productivity. I think there is value in really getting clear about what productivity is not. And so I think what both Denis and I are definitely aligned on among other things is that it’s not output. That’s not what productivity is in isolation. So output is necessary, but it is not sufficient. And unfortunately, a lot of these conversations end up being purely about output because it’s easy to measure and because it’s easy to measure, that’s where we stop. And so we need to do the homework and measure what’s hard as well, so we can get to the real insight.

Kovid Batra: No, totally makes sense. I think I relate to this because when I talk to so many engineering leaders, almost all the time this comes into discussion, like how exactly they should be doing it. But what is becoming more interesting for me is that this million-dollar question has suddenly started raising concerns, right? I mean, almost everywhere in business, uh, people are measuring productivity in some way or the other, right? But somehow engineering teams have suddenly come into focus. So this perspective of bringing more focus now, why do you think it has come into the picture now?

Paulo André: Is that for me or Denis? Who should go first?

Kovid Batra: Anyone. Maybe Paulo, you can go ahead. No problem.

Paulo André: Okay. So, look. In, in my opinion, I think I was thinking a little bit about this. I think it’s a good question. And I think there’s at least three things, three main things that are kind of conspiring for this renewed focus or double down on engineering productivity specifically. I think on the one hand, it’s what I already mentioned, right? It’s easier to measure engineering than anything else. Um, at least in the product design and engineering world, of course, sales are very easy to measure. Did you close or not? And that sort of thing. But when it comes to product design and engineering, engineering, especially if you focus on outputs is so much easier to measure. And then someone gets a good sense of ROI from that, which may or may not be accurate. But I think that’s one of the things. The other thing is that when times get more lean or things get more difficult and funding kind of dries up, um, then, of course, you need to tighten the belt and where are you going to tighten the belt? And at the end of the day, I always say this to my teams, like, engineering is not more special in any way than any other team in a company. That being said, when it comes to a software company, the engineering team is where the rubber meets the road. In other words, you do absolutely need some degree of engineering team or engineering capacity to translate ideas and designs and so on into actual software. So it’s very easy to kind of just look at it as in, “Oh, engineers are absolutely critical. Everything else, maybe are nice to have.” Or something of that, to that effect, right? And then lastly, I think the so-called Elon Musk effect definitely is a thing. 
I mean, when someone with that prominence and with, you know, the soapbox that he has, comes in and says, you know, we’re going to focus on engineers and it’s about builders, and even Marc Andreessen wrote an article like three years ago or so saying it’s time to build, all of that speaks like engineering, engineering, engineering. Um, and so when you put that all together and how impressionable all of us are, but I think especially founders and CEOs, who are really attuned to their industry and to investors and so on, I think there’s this, um, feedback loop where engineering is where it’s at right now, especially in the age of AI and so on. So yeah, I’m not surprised that when you put this all together in this day and age, we have what we have in terms of engineering being like the holy grail and the focus.

Kovid Batra: Uh, Denis, you, you have something to add on this?

Denis Čahuk: I mean, when it comes to the timing, I don’t think anything comes to mind, you know, why now? What I can definitely say is that, of everything that’s going on, engineering is the biggest cost in a large company. I mean, not to say that it’s all about salaries or operational expenses, but from a business’s perspective, if I put a price on the business being wrong about an experiment, the engineering side of things, the product engineering side of things, defines most of that cost, right? So when it comes to experiments, the likelihood of one succeeding or not succeeding, or how fast you gain feedback, you know, to think of experiment feedback as cash flow: do you want the big bet that you do once every three months, or do you want to do a bunch of small bets continuously, several times per day? You know, all of that is decided and all of that happens in engineering, and it also happens to be the biggest fiscal cost. So it makes sense that, hey, there’s this big thing that costs a lot, that is very complex, and it’s defining the company. Yeah, of course, business owners would want to measure it. It would be irresponsible not to. It doesn’t mean that productivity from a team’s or an individual engineer’s perspective is the most sensible thing to measure. But, you know, I understand the people that would intuitively come to that conclusion.

Kovid Batra: Yeah. I think that makes a lot of sense. And what do you think? Like, that this should be done is totally, uh, understandable, but when is the right time to start doing this, and how should one start? Because every time an engineering leader is held accountable for a team, whether big or small, there is a point where you have to decide your priorities and think about the things that you are going to do, right? So how and when should an engineering leader or an engineering manager for a team start taking up this journey?

Paulo André: I think Denis can go first on this one.

Denis Čahuk: Well, I would never, you know, I would never start measuring. So I coach teams professionally, you know, they reach out to me because something about my communication on LinkedIn or my newsletter resonated with them regarding, you know, a very no-nonsense way of how to deal with customers, how to communicate, how to plan, how to not plan, how to bring, you know, that excitement into engineering that makes engineering very hyperproductive and fun. And then they come to me and ask, well, you know, “I want to measure all these things to see what I can do.” I think that context is always misleading. You know, we don’t just go in; it’s not a speedometer, like the very, very first intuition that people still have from the 90s, from, like, the initial Scrum and Kanban, um, modes of thought: “Oh, I can just put a speedometer on the team and it will have a velocity and, you know, it will just be a number.” Um, I think that is naive. That is not what measuring is. And that is never the right time to measure. That, I think, is my take. Um, the right time to measure is when you say, “I am improving A or B. I am consciously trying to figure out, continuously, consciously trying to figure out, what will make my teams better.” So a leader might approach it as, “Okay. If I introduce this initiative, how can I tell if things are better?” And then you can say, “Well, I’ll eyeball it or I’ll survey the team.” And at a certain point, the eyeballing is too inaccurate or it requires too many disagreeing eyeballs, or, um, you run the risk of survey-fatiguing the team, so there are just way too many surveys asking boring questions, and when you ask engineers to do repetitive, boring things, they will start giving you nonsense answers, right? So that would be the point where I think measuring makes sense, right?
Where you basically take a little bit of subjective opinion out, with the exception of surveys, qualitative surveys, and you introduce a machine that says, “Hey, this is a process.” You know, it’s one computer talking to the other computer, you know, in the case of GitHub and similar, which seems to be the primary vector for measurement. Um, can I just extract some metrics of, you know, what are the characteristics of the machine? It doesn’t tell you how fast or how slow it’s going. Just what are the characteristics? Maybe I can get some insights too and decide whether this was a good idea or a bad idea, or if we’re missing something. But the decision to help your teams improve on some initiative and introducing the initiative comes first. And then you measure if you have no other alternative or if the alternatives are way too fuzzy.

Kovid Batra: Makes sense. Paulo, would you like to add something?

Paulo André: Yeah, I mean, I think my, my perspective on this is not very different from, from Denis. Uh, maybe it comes from a slightly different angle and I’ll explain what I mean. So, at the end of the day, if you want to create an outcome, right? And you want to change customer behavior, you want to create results for the business, you’re going to have to build something. And where I would not start is with the metrics, right? So you asked Kovid, like where, where do we start in this journey? I would say do not start with the metrics because in my mind, the metrics are a source of insight or answers to a set of questions. And so start with the questions, right? Start with the challenges that we, that you have to get to where you want to be, right? And so, coming back to what I was saying, if you want to create value, you’re going to have to build something, typically, most of the time, sometimes it creates value by removing something, but in general, you are building and iterating on your products. And, and so with that in mind, what is going back to first principles? What is the nature of software development? Well, it’s a collaborative effort. Nobody does everything end-to-end by themselves. And so with that in mind, there’s going to be handoffs. There’s going to be collaboration. There’s going to be all, all of that sort of flow, right? Where, where the work goes through a certain, you can see it as a pipeline. And so then when it comes to productivity, to me is, is, you know, from a lean software development perspective is how do we increase the flow? If you think of a Kanban board, how do you go, you know, in a smooth way, as smooth as possible from left to right, from something being ready for development to being shipped in production and creating value for the user and for the company? And so if you see it that way with that mental model, then it becomes like, where is the constraint? What is the bottleneck? And then how do we measure that? 
How do we get the answers? By measuring. And so when it comes to the DORA metrics, which you guys at Typo obviously provide good insight into, and other such things, generally cycle time and lead time really allow us to start understanding where things are getting stuck. And that leads to conversations around what we can do about that. And ultimately everybody can rally around the idea of how do we increase flow. And so that’s where I would start: what are we trying to do? What is getting in our way? And then let’s look at the data that we have available, without going too crazy, into what can we learn, where can we improve, and where’s the biggest leverage?
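
Paulo’s framing of finding the constraint in the flow can be made concrete with a small script. The pull requests, timestamps, and field names below are hypothetical stand-ins for what a Git hosting API or a tool like Typo would report, not any real schema:

```python
from datetime import datetime

# Hypothetical review-pipeline timestamps for two pull requests; in practice
# these would come from a Git hosting API or a delivery-metrics tool.
prs = [
    {"opened": "2024-05-01T09:00", "first_review": "2024-05-02T15:00",
     "approved": "2024-05-02T16:00", "merged": "2024-05-03T10:00"},
    {"opened": "2024-05-06T10:00", "first_review": "2024-05-06T12:00",
     "approved": "2024-05-06T14:00", "merged": "2024-05-08T09:00"},
]

def stage_hours(pr):
    """Hours a PR spent in each stage of the pipeline."""
    o, r, a, m = (datetime.fromisoformat(pr[k])
                  for k in ("opened", "first_review", "approved", "merged"))
    to_h = lambda d: d.total_seconds() / 3600
    return {"waiting_for_review": to_h(r - o),
            "in_review": to_h(a - r),
            "waiting_to_merge": to_h(m - a)}

def bottleneck(prs):
    """The stage where, summed across PRs, work sits the longest."""
    totals = {}
    for pr in prs:
        for stage, h in stage_hours(pr).items():
            totals[stage] = totals.get(stage, 0.0) + h
    return max(totals, key=totals.get)
```

Summing hours per stage and taking the maximum points at where work sits the longest; in this made-up data, PRs wait longest between approval and merge, which is where the conversation about improving flow would start.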

Kovid Batra: Makes sense. I think one good point that you brought up here is that software development is a collaborative effort, right? And every time we go about doing that, there are people, there are teams, uh, there are processes, right? Uh, how would you decide, in a given situation, whether you should go about measuring productivity at an individual level, a developer level, versus, when we are talking about this collaborative effort, engineering productivity as a whole? So how do you differentiate, and how do you make sure that you are measuring things right? And sometimes the terminologies also bring in a lot of confusion. Uh, like, I would never perceive developer productivity to be something, uh, specific to developers. It ultimately boils down to the team. So I would want to hear both of you on this point: how do you differentiate, or what’s your perspective on that? So that when you talk to your team about what you are going to measure, your teams are not taken aback by it, and there is a smooth transition of thought and goals when we are talking about improving productivity. Uh, Paulo, maybe you could answer that.

Paulo André: I was trying to unmute myself. I was actually gonna... Um, Denis, feel free to kind of like interject at any point with your thinking as well. You know, if I follow up on what I was just saying, that this is a team sport, then the unit of value is going to be the team. Are there individual productivity metrics? Yes. Are they insightful? Yes, they can be. But to what end? What can you actually infer from them? What can you learn from them? Personally, as an engineering leader, the way I look at individual productivity metrics is more like a smoke alarm. So, for example, if someone is not pushing code for long periods of time, that’s a question. Like, what’s going on? There might be some very good reasons for that, or maybe this person is struggling, and so I’m glad that I saw that in the metrics, right? And then we can have a conversation around it. Again, the individual is necessary, but not sufficient, to deliver value. And so I need to focus on the team-level productivity metrics, right? Um, so that’s kind of how I disambiguate, if you will, these two, the individual and the team: the team comes first. I look at the individual to understand to what degree the individual or the individuals are serving the team, because it comes back to questions, obviously, of performance and performance reviews and compensation and promotions, like all of that stuff, right? Um, but do I look at the metrics to decide on that? Personally, I don’t. What I do look at is what I can see in the metrics in terms of this person’s contribution to the team, and to the team being able to be successful and productive.

Kovid Batra: Got it. Denis, uh, you have something to add here?

Denis Čahuk: It’s such an interesting topic that has nuances from many different perspectives, and my brain just wants to talk about all three at the same time. So I want to do a quick dip into all three areas. First is the business side, right? So, uh, for example, let’s take the examples of baseball and soccer. Um, baseball is more of an individual sport than soccer, you know, the individual performance stands out way more than in soccer, where everything’s moving all the time. It’s very difficult to individuate performance in soccer, although you still can, and people still do, and it’s still very sexy. Um, when off-season comes, people want to decide, okay, which players do we keep? Which players do we trade? Which players do we replace? You know, this is completely normal, and you would want to do this, and you would want to have some kind of metrics, ideally merit-based metrics, of, yeah, this person performed better; having this person on the team makes the team better. In baseball, this makes perfect sense. In soccer, not so much, but you still have to decide, well, how much do we pay each player? And you can probably tell, if you’re following the scene, that every soccer player’s salary, their, um, their contract, is priced individually based on their value to the brand of the team, all the way to public relations, marketing, and yes, performance on the field. Even if they’re on the bench all the time, you know, they might have a positive effect on the team as a coach or as a mentor, as a captain. Um, so that’s one aspect. Now bringing it back into software teams, that’s the business side of things. Yes, these decisions have to be made.

Then there’s the other side of things, which is how does the team work? You know, from my perspective, if output or outcomes can be traced back to one individual person, I think there’s something wrong. I think there’s a lot of sort of value left on the table if you can say, “Oh, this thing was done by this one person.” Generally, it’s a team effort and the more complex the problems get, the harder it is, you know, look, look, for example, NASA, um, the Apollo missions. Which one engineer, you know, made the rocket fly? You don’t have an answer to that because it was thousands of people collaborating together. You know, which one person made a movie? Yes, the director or the producer or the main actor, like they are, they stand out when it comes to branding. But there were tens of thousands of people involved, right? So like to, you know, at the end of the day, what matters is the box office. So I think that that’s what it really comes down to, uh, is that yes, generally there will be like a few stars and some smoke alarms, as Paulo mentioned, I really liked that analogy, right? So you’re sort of checking for, hey, is anybody below standard and does anybody sort of stand out? Usually in branding and communication, not in technical skill. Um, and then try to reason about the team as a whole.

And then there’s the third aspect, which is how productive does the individual feel? You know, if somebody says they’re a senior with seven years of experience, how productive do they feel? Do they get to do everything they wanted to in a day? You know, and then keep going up. Does the product owner feel productive or efficient? Does the leader feel that they’re supporting their teams enough, right? So it also comes down to perception. We saw this recently with various surveys regarding AI usage and coding assistants, where developers say, “Yeah, it makes me feel amazing because I feel more productive.” But in reality, the outcomes it produces didn’t change, or the change was so insignificant that it was very difficult to measure.

So with those three sort of three angles to consider, I would say, you know, the way to approach measuring and particularly this individual versus team performance, is that it’s a moving target. You sort of need to have a plan for why you’re measuring and what you’re measuring and ideally, once you know that you’re measuring the right things when it comes to the business, it’ll be very difficult, um, to trace it back to an individual. If tracing it back to an individual is very easy, or if that’s an outcome that you’re pursuing, I would say there’s other issues or potential improvements afoot. And again, measuring those might show you that measuring them is a wrong, is a bad idea.

Paulo André: Can I just add one quick thing again? Like, this is something that took me a little while to understand for myself and to become intuitive, because it is not intuitive at all. Um, but I think it’s an important pitfall to highlight, which is: if we incentivize individual behaviors, individual productivity, that can really backfire on the team. And again, I remind you that the team is the unit of value. And so if we incentivize throughput or output from individual developers, how does that hurt the team? It doesn’t sound very intuitive, but think about, for example, a very prolific developer that is constantly just taking on more tickets and creating more pull requests, and those pull requests are just piling up because there’s no capacity in the team to review them; the customer is not getting any value on the other side. That work in progress is, in lean terminology, just waste at that point, right? But that developer can be regarded, depending on how you look at it, as a very productive developer. But is that productive? Or could it be that that developer could be testing something? Or could it be that that developer could be helping with code reviews, and so on and so forth, right? So again, team and individual productivity can lead to wildly different results. And sometimes you have teams that are very unproductive despite having very productive developers in them, because they are looking at the, sort of, in my opinion, wrong definition of what productivity is and where it comes from, and what the unit of value is; like I said, it’s the team.

Kovid Batra: Yeah.

Denis Čahuk: Can I jump in quickly, Kovid?

Kovid Batra: Yeah.

Denis Čahuk: There’s something I’ve always said. Um, it’s very unintuitive, and I can give you a complete example from coaching; it throws leaders off-guard every time I suggest it, and it ends up being a very positive outcome. I always ask them, you know, “What are you using to assign tickets? Are you assigning them?” And they say, “Yes, we use Jira.” Or something equivalent. And I ask them, “Well, have you considered not assigning the tickets?” Right? And, well, who should own it? And I say, “Well, it’s in the team’s backlog. The team owns it. Stop assigning an individual.” Right? And they’re usually taken aback. It’s like, “What do you mean? Like, it won’t get done if I don’t assign it.” No, it’s in the team’s backlog, of course it’ll get done. Right? And if not, if they can’t decide who will do it, then that’s a conversation they should have, and then keep it unassigned. Or, alternatively, use some kind of software that allows multiple people to be assigned. But you don’t need to, because, you know, Jira, for example, has a full activity log: I comment on it, you comment on it, you review, I review, we merge, I merge, I ask a question. You have a full paper trail of everybody who was involved. Why would you need an owner, right? So this idea of an owner is, again, going back to lean activities and talking about handoffs, right? So I hand it off to you, you’re now the owner, and you’ll hand it off to somebody else. But having many handoffs is an anti-pattern in itself, usually, in most contexts. Actually, the better question would be, how can we have fewer handoffs than we have people? If there are seven people in the pipeline, there shouldn’t be seven handoffs. You know, how can we have just one deliverable, just one thing to assign, and seven people working on it?
That would be the best sort of positive outcome because then you don’t cap, you know, how much money you can put around a problem because that allows you to sort of scale your efforts in intensity, not just in parallelism. Um, and usually that parallelism comes at a very, very steep cost.

Paulo André: Yeah.

Denis Čahuk: Um, so introducing methods that make individual work activity untraceable can unintuitively have, and usually does have, drastic and immediate positive benefits for the team. Also, if the team is lacking in psychological safety, this will immediately sort of wash over them, and they’ll have to have some really rough conversations in the first week, and then things drastically start improving. At least that’s my experience.

Paulo André: Yeah. And the handoff piece is a very interesting one. I’ll be very quick, uh, Kovid. From the perspective of a piece of work, a work package, a ticket or whatever, it’s either being actively worked on or it’s waiting for someone to do something about it, right? And if we measure these things, what we realize, and it’s the same thing if you go to the airport and think about how much time you actually spend on something like checking in or boarding the plane versus waiting at some of the stages, is that the waiting time is typically way more than the active time. And so that waiting time is waste as well. That’s an opportunity. Those delays, we can think about how we can reduce them, and the more handoffs we have in the process, the more opportunity for delay creeps in, right? So it’s a very different way of looking at things. But sometimes when I talk about estimates and so on, estimates are all about active time. It’s how long it’s going to take, but we don’t realize that nothing is done individually, and because of the handoffs, you cannot possibly predict the waiting times. So the best that you can do is to reduce the handoffs, so you have less opportunity for those delays to creep in.
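
Paulo’s waiting-versus-active observation is what lean literature calls flow efficiency: active time as a share of total lead time. A minimal sketch, with made-up numbers:

```python
def flow_efficiency(active_hours, waiting_hours):
    """Share of total lead time spent actively working, as opposed to
    waiting in a queue between handoffs (lean 'flow efficiency')."""
    total = active_hours + waiting_hours
    return active_hours / total if total else 0.0

# A ticket touched for 6 hours but parked for 42 hours across handoffs
# spends only 12.5% of its lead time actually being worked on.
```

With 6 active hours against 42 waiting hours, only 12.5% of the lead time is spent working; the rest is the delay that each additional handoff invites.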

Kovid Batra: Totally. I think, to summarize both of your points, what I understood is: keep those smoke alarms ready at the individual level, and at the process level as well, so that you are able to spot the gaps if something is falling apart. But at the end of the day, if you’re measuring productivity for a team, it has to be a collaborative, team-level thing that you’re looking at, looking at value delivery. So I think it’s a very interesting thing. Uh, I think there’s a lot of learning for us working at Typo, that we need to think more about how we bring in those pointers, those metrics which work as smoke alarms, rather than just looking at individual efficiency or productivity and defining that for somebody. Uh, I think that makes a lot of sense. All right. I think we are into a very interesting conversation, and I would like to ask one of you to tell us something from your experience. So let’s start with you, Denis. Um, like, you have been coaching a lot of teams, right? And there are instances where you deal with large-scale teams, small teams, startups, right? There are different combinations. Anything that you feel is an interesting experience to share here about how a team approached solving a particular problem or a bottleneck that was slowing them down, basically not having the impact that they wanted, and what did they do about it? And then how did they arrive at the goal that they were looking at?

Denis Čahuk: Well, I can list many. I’ll focus on two. One is, generally the team knows what the problem is. Generally, the team knows already: hey, yeah, we don’t have enough tests, or, ah, yeah, we keep missing deadlines, or our relationship with stakeholders is very bad and they just communicate with us through, you know, strict roadmaps and strict deadlines and strict expectations. Um, that’s a problem to be solved. It doesn’t have to be that way. So if you know what the problem is, there’s no point measuring, because there’s no further insight to be gained; yeah, this is the problem, why get distracted with more insight? No, you know what the problem is, you can just decide what to do, and then if you need help along the way, maybe measurements would help. Or maybe measurements on an organizational level would help, not just engineering. Um, or you bring on a coach to sort of help you, you know, gain clarity. That’s one aspect. If you know what the problem is, you don’t need to measure. Usually people ask me, “Denis, what should I measure? Should I introduce DORA metrics?” And I usually tell them, “Oh, what’s the main problem? What’s the problem this week?” “Oh yeah, a lot of PRs are waiting around and we’re not writing enough tests.” Okay, that’s actionable. Like, that’s enough. Do you want more? Do you need a bigger problem? Because then you just, you know, spend a lot of time looking for a problem that you wish was bigger than that so that you wouldn’t have to act, right? Because that’s just resistance: either your ego, or trying to play it safe, or trying to put it into the next quarter when maybe there’s less stress, and, right, there isn’t. That’s one aspect.

The other aspect, you know, is this idea of.. how did you phrase it? An approach that worked. You know, I always say that everything we do nowadays is basically a proxy for eliminating handoffs, right? Getting the engineers very close to the customer and, um, you know, getting closer to continuous delivery. Continuous integration at the very minimum, but continuous delivery, right? So that when software is ready, it’s releasable on demand, and there isn’t this long waiting that Paulo mentioned earlier, right? Like, this is just a general form of waste. Um, but something that both of these cases handle unintuitively, that I like to bring in as a more qualitative metric, is, um, the reliability of the team. You know, we like to measure the reliability of systems, and the whole Scrum movement introduced this idea of velocity, and I like to bring in this idea of, let’s say you want to be on time as a leader. Um, I’m interested in proving the theory that, hey, if you want to be on time, you probably need to be on time every week, and in order to be on time in the week, you probably need to be on time every day. So if you don’t know what an on-time day looks like, there’s no point planning roadmaps and saying that deadlines are a primary focus. Maybe the team should be planning in smaller batches, not trying to chase higher accuracy on something very large. And what I usually use as a proxy metric is just to ask, how risky is your word? Right, so how reliable is your promise? Uh, and we don’t measure how fast the team is moving. What I like to do with them is say, okay, when do you think this will be done? They say Friday. Okay. If you’re right, Monday needs to look like this. Tuesday needs to look like this. Let me just try to reverse engineer it from that. It’s very basic. And then I’m trying to figure out how many days or hours or minutes into a plan they’re off-track.
I don’t care about velocity. So no proxy metrics. I’m just interested, if they create a three-month roadmap, how many hours into the three-month roadmap are they off-course? Because that’s what I’m interested in, because that’s actionable. Okay. You said three months from now, this is done. One month from now, there’ll be a milestone. But yesterday you said that today something would be done. It’s not done. Maybe we should work on that. Maybe we should really get down to a much smaller batch size and just try to make the communication structures around the team building stuff more reliable. That would de-stress a lot of people at the same time and sort of reduce anxiety. And maybe the problem is that you have a building-to-deploying nuance, and maybe that’s also part of the problem. It usually is. And then there might be a planning-to-building nuance that also needs to be addressed. And then we basically come down to this idea of continuous delivery and extreme programming, you know, let’s plan a little bit. Let’s build a little bit. Let’s test it. Let’s test our assumptions. And behind the scenes, once we do that for a few days, once we have evidence that we’re reliable, then let’s plan the next two weeks. Only when the team has shown evidence that it understands what a reliable work week looks like. If they’ve never experienced that and they’ve been chasing their own tail deadline after deadline, um, there’s not much you can do with such a team. And a lot of people just need a wake-up call to see that, “Hey, you know what? I actually don’t know how to plan. You know, I don’t know how to estimate.” And that’s okay, as long as you have this intention of trying to improve or trying to look for alternatives in order to become better.

Kovid Batra: I think my next question would be, uh, like when you’re talking about, uh, this aspect in the teams, how do you exactly go about having those conversations or having that, that visibility on a day-to-day basis? Like most, most of the things that you mentioned were qualitative in nature, right, as, as you mentioned? So how, how do you exactly go about doing that? Like if someone wants to understand and deploy the same thought process in a team, how should they actually go about doing and measuring it?

Denis Čahuk: Well, from a leader’s perspective, it’s very simple, you know, because I can just ask them, “Hey, is it done? Is it on anybody’s mind today?” Um, and they might tell me, “Yeah, it’s done, but not merged.” Or, “It’s waiting for review, but it’s done, but it’s kind of waiting for review.” And then that might be one possible answer. Um, it doesn’t need to be qualitative in the sense that I need a human for that. What, you know, what I’m looking for is precision. Like, is it, is it definitively done? Was there an increment? You know, did we test our assumptions? What, is there a releasable artifact? Is it possible to gain feedback on this?

Kovid Batra: Got it.

Denis Čahuk: Did you, did you talk to the team to establish if we deploy this as soon as possible, what question do we want to answer? Like what feedback, what kind of product feedback are we looking for? Or are we just blindly going through a list of features? Like, are we making improvements to our software or is somebody else who is not an engineer? Maybe that’s the problem, right? So it’s very difficult to pinpoint to like one generic thing. But a team that I worked with, the best proxy for these kinds of improvements from the leader was how ready they felt to be interrupted and get course correction. Right? Because the main thing with priorities in a team is that, you know, the main unintuitive thing is that you need to make bets and you need to reduce the cost of you being wrong, right? So the business is making bets on the market, on the product and working with this particular team with these particular individuals. The team is making bets with implementation details to a choice of technology, ratio between keeping the lights on, technical debt and new features, support and communication styles, you know, change of technology maybe. Um, so you need to just make sure that you’re playing with the market. The upside will take care of itself. You just need to make sure that you’re not making stupid mistakes that cost you a lot, either in opportunity or actual fiscal value. Um, but once you got that out of the way, you know, sky’s the limit. A lot of engineers think that we’re expensive. It’s large projects. We gotta get it right the first time. So they try to measure how often they got it right the first time, which is silly. And usually that’s where most measurements go. Are we getting it right the first time? We need to do this to get it right the first time, right? So failure is not an option. Whereas my mantra would be, no, you are going to fail. 
Just make sure it happens sooner rather than later and with as little intensity as possible so that we can act on it while there’s still time.

Kovid Batra: Got it. Makes sense. Makes sense. All right. Uh, Paulo, I think, uh, we are just running short on time, but I really want to ask this question to you as well. Uh, just like Denis has shared something from his experience, and it’s really interesting to know how you can qualitatively measure or see things every time and solve for those. In your experience, um, you have, uh, recently joined this startup as, as a CTO, right? So maybe, how does it feel being a new CTO, and what things come to your mind when you think of improving productivity in your teams and building a team which is impactful?

Paulo André: Yeah, I joined this company as a CTO six months ago. It’s been quite a journey and it’s, so it’s very fresh in my mind. And of course, every team is different and every starting point is different and so on, but ultimately, I think the pattern that I’ve always seen in my career is that some things are just not connected and the work is not visible and there’s a lack of clarity about what’s value, uh, about what are the goals, what are the priorities, how do we make decisions, like all of that stuff, right? And so, every hour that I’ve been putting into this role with my team so far in these six months has been really either, either about creating clarity or about developing competence to the extent that I can. And so the development of competence is, is basically every opportunity is an opportunity to learn, both for myself and for anyone else in the team. And I can try to leverage my coaching skills, um, in making those learning conversations effective. And then the creation of clarity in my role, I happen to lead both product and engineering, so I cannot blame somebody else for lack of clarity on what the product should be or where it should go. It’s, it’s on me. And I’ve been working with some really good people in terms of what is our product strategy? What do we focus on and not focus on? Why this and not that? What are we trying to accomplish? What are those outcomes that we were talking about that we want to drive, right? So all of that is hard to answer. It’s deceptively difficult to answer. But at the end of the day, it’s what’s most important for that engineering productivity piece, because if you have an engineering team that is, you know, doing wasted work left and right, or things are not connected, and they’re just, like, not clear about what they should be doing in the first place, that doesn’t sound like the ingredients for a productive team, right?
And ultimately, the product side needs to answer to a large extent those, those difficult questions. So obviously, I could go into a lot of specific details about how we’re doing this and that. I don’t think we have at least today the time for that. Maybe we can do a deep dive later. But ultimately, it’s all about how do I create clarity for everyone and for myself in the first place so I can give it and then also developing the competence of the people that we do have. And that’s the increasing the capacity to win that I was talking about earlier. And if we make good progress on these two things, then we can give a lot of control and autonomy to people because they understand what we’re going for, and they have the skills to actually deliver on that, right? That’s, that’s the holy grail. And that’s motivation, right? That’s happiness. That’s a moment at work that is so elusive. But at the end of the day, I think that’s what we’re, we’re working towards.

Kovid Batra: Totally. I’d still, uh, want to deep dive a little bit into any one of those, uh, instances. Like, if you have something to share from the last six months where you actually prioritized this transparency for the team, how exactly did you execute it? A small instance, or maybe a small meeting that you have had and..

Paulo André: Very simple example. Very simple example. Um, one of the things that I immediately noticed in the team is that a lot of the work that was happening was just not visible. It was not on a ticket. It was not on a notion document. It was nowhere, right? Because knowledge was in people’s minds, and so there was a lot of like, gaps of understanding and things that would just take a lot longer than they think they should. And so I already mentioned my bias towards lean software development. What does that mean? First and foremost, make the work visible because if you don’t make the work visible, you have no chance of optimizing the process and getting better at what you do. So I’ve been hammering this idea of making the work visible. I think my team is sick of me pointing to is there a ticket for it? Did you create a ticket for it? Where is the ticket? And so on. Because the way we work with Jira, that’s, that’s where the work becomes visible. And I think now we got to a point where this just became second nature, uh, for all of us. So that would be one example where it’s like very basic fundamental thing. Don’t need to measure anything. Don’t need complicated KPIs and whatnot. What we do need is to make the work visible so we can reason about it together. That’s it.

Kovid Batra: Makes sense. And anything which you found very unique about this team and you took a unique approach to solve it? Any, anything of that sort?

Paulo André: Unique? Oh, that’s a, that’s a really good question. I mean, everyone is different, but at the end of the day, we’re all human beings trying to work together towards something that is somehow meaningful. And so from that perspective, frankly, no real surprises. I think what I’m, if anything, I’m really grateful for is the team being so driven to do better, even if, you know, we lack the experience in many areas that we need to level up. Um, but as far as something being really unique, I think maybe a tough technical challenge our team has to really deal with is around email deliverability, for example. That’s not necessarily unique. Of course, there are other companies that need to contend with the exact same problems. But in my career, that’s not a particular topic that I have had to deal with a lot. And I’m seeing, like, just how complex and how tricky it is to get right. Um, and it’s an always evolving sort of landscape, for those that are familiar with that type of stuff. So, yeah, not a good, not a good answer to your question. There’s nothing unique. It’s just that, yeah, what’s unique is the team. The team is unique. There’s no other team like this one, like these individuals doing this thing right here, right now in this company in 2024.

Kovid Batra: Great, man. I think your team is gonna love you for that. All right. I think there will be a lot more questions from the audience now. We’ll dedicate some time to that. We’ll take a minute’s break here and we’ll just gather all the questions that the audience has put in. Uh, though we are running a little out of time, is it okay for you guys to like extend for 5–10 minutes? Perfect. All right. Uh, so we’ll take a break for a minute and, uh, just gather the questions here.

All right. I think time to get started with the questions. Uh, I see a lot of them. Uh, let’s take them one by one on the screen and start answering those. Okay. So the first one is coming from, uh, Kshitij Mohan. That’s, uh, the CEO of Typo. Hi, Kshitij. Uh, everything is going good here. Uh, so this is for Denis. Uh, as someone working at the intersection of engineering and cloud technologies, how do you prioritize between technical debt and innovation?

Denis Čahuk: It’s a great question. Hey, Kshitij. Well, I think first of all, I need to know whether it’s actual debt or whether it’s just crap code. You know, a crappy implementation is not an excuse for debt, right? So for you to have debt, three things need to have happened. At some point in the past, you had two choices, A or B. And you made a choice with insufficient knowledge. And later on, you figured out that either something in the market changed, or timing changed, or we gained more knowledge, and we realized that now the other one is better, for whatever reason. I mean, it’s not necessarily that it was wrong at the time, but we now have more information that says we need to go from A to B. Uh, originally we picked A. Now you also need to know how much it costs to go from A to B, and how much you stand to gain or trade if you decide not to do that, right? So maybe going from A to B now costs you two months and ten thousand euros, and doing it later next year, maybe it’s going to double the cost and add an extra week. That’s technical debt. The nature of that decision, that’s technical debt. If you made the wrong decision in the past and you know it was the wrong decision, and now you’re trying to explore whether you want to do something about it, that’s not technical debt. That’s just, you know, you seeking excuses to not do a rewrite. So first of all, you need to identify: is it debt? If it is debt, you know the cost, you know the trade-off, and you can either put it on a timeline or you can measure some kind of business outcome with it. So that’s one side.

On the, on the innovation side, you need to decide what is innovation, exactly? You know, is it like an investment? Is it a capital expense where I am building a laboratory and we’re going to innovate with new technologies? And then once we build them, we will find, um, sort of private market applications for them or B2B applications for them. Like, is it that kind of innovation? Or is innovation an umbrella term for new features, right? Cause that’s operational. That’s much closer to an operational expense, right? So it’s just something you do continuously and you deliver continuously, and that continuous feature development will also produce new debt. So once you’ve got these two sides figured out, then it’s a very simple decision. How much debt can you live with? How fast are you creating new debt compared to how fast you’re paying it off? And what can you do to get rid of all the non-debt, all the crap, essentially? That’s it, you know. Then you just make sure that you balance out those activities and that you consistently do them. It isn’t just, oh yeah, we do innovation for nine months and then we pay off debt. That usually doesn’t go very well.

Kovid Batra: I think this is coming from a very personal pain point. Now we’re really moving towards the AI wave and building things at Typo. That’s where Kshitij is coming from. Uh, totally. I think, thanks, thanks, Denis. I think we’ll move on to the next question now. Uh, that’s from, uh, Madhurima. Yeah. Hey Paulo, this one’s for you. Uh, which metric do you think is often overlooked in engineering teams but has significant impact on long-term success?

Paulo André: Yeah, that’s a great question. I’m going to, I’m going to give a bit of a cheeky answer and I’m going to say, disclaimer, this is not a metric that I track, that we track with, with my team, and it’s also not, I don’t know, a very scientific or concrete way of measuring it. However, to the question, what is overlooked in engineering teams and has a significant impact on long-term success, that’s what I would call ‘mean time to clarity’. How quickly do we get clear on where we need to be and how we get there? Right? And we don’t have all the answers upfront. We need to, as Denis mentioned earlier, experiment and iterate and learn, and we’ll get smarter, hopefully, as we go along, as we learn. But how quickly we get to that clarity in every which way that we’re working, I think that’s, that’s the one that is most important, because it has implications, right? Um, if we don’t look at that and if we don’t care about that, are we doing what it takes to create that clarity in the first place? And if that’s not the case, the waste is going to be abundant, right? So that’s the one I would say: as an engineering leader, how do I get for myself all the clarity that I need to be able to pass it along to others and create that sense that we know where we’re going, and what we don’t know, we have the means to learn and to keep getting smarter.

Kovid Batra: Cool. Great answer there. Uh, let’s move on to the next one. I think this one is again for Paulo. Yeah.

Paulo André: Okay, so you know what? Maybe this is going to be a bit, uh, I don’t know what to call it, but considering that I don’t think the most important things are gonna change in the next five years, um, AI notwithstanding, and what are the most important things? It’s still a bunch of people working together and depending on each other to achieve common goals. We may have less people with more artificial intelligence, but I don’t think we’re anywhere near the point where the artificial intelligence just does everything, including the thinking for itself. And so with that in mind, it’s still back to what I said earlier, um, in the session. It’s really about how is the work flowing from left to right? And I don’t know of a better, um, sort of set of metrics than the DORA metrics for this, particularly cycle time and deployment frequency and that sort of stuff that is more about the actual flow. Um, but like, you know, let’s not get into the DORA metrics. I’m sure the audience here already knows a lot about it, but that’s, that’s, I think, what, what is the most important, um, and will continue to be critical in the next five years, um, that’s, that’s basically it.

Kovid Batra: Cool. Moving on. All right. That’s again for, oh, this one, Denis. How do you ensure cloud solutions remain secure and scalable while addressing ever-changing customer demands?

Denis Čahuk: Well, there’s two parts to that question. You know, one is security, the other one is ever-changing customer demands. I think, you know, security will be a sort of expression of the standard, or at least some degree of sensible defaults, within the team. So the better question would be, what do engineers need so that they don’t have to constantly, consciously, and deliberately think about security, right? Are they supported by a security expert? Do they have platform engineering teams that are supporting them with security initiatives, right? So if there’s a product team that’s focusing on product, support them so that they don’t also have to become experts in security, ’cause that’s where all the problems start, where you basically have a team of five and they need to wear 20 hats, and they start triaging the hats and making trade-offs in security, you know. And usually, large teams that are overwhelmed love making privacy or security trade-offs because they don’t have skin in the game. The business has skin in the game, right? And when you individuate incentives to such a degree that it becomes dysfunctional, um, security usually doesn’t fare well. Um, at least not until there’s some incident or maybe some security review or some inspection, et cetera.

So give the teams what they need. If they’re not security experts, provide them support. Um, and the same thing with scalability. Scalability is also something that can benefit from tighter collaboration, even more so than security. Um, so just make sure that the team is able to express itself as a team, through pair programming or having more immediate conversations rather than just, you know, asynchronous code review conversations or stand-up conversations way at the end of the cycle. At the end of the cycle, when the code is written and it’s going into merging or QA, it’s too late, the code is written, right? So you want to preempt that. The solution comes from the team being able to express itself as a team rather than just a group of individuals pursuing individual goals.

Kovid Batra: Cool. I think, uh, we have a few more questions, but running way out of time now. Uh, maybe we can take one more last, last question and then we can wrap it up.

Paulo André: Sounds good. Okay, so this one is for me, right? How do I approach, uh, integrating new tools and frameworks into engineering workflows without disrupting productivity? That, that final piece is interesting. I think it also starts with how we frame this type of stuff. So there is a cost to making improvements. I don’t think we can have our cake and eat it, too, necessarily. And it’s just part of the job, and it’s part of what we do. And so, um, you know, for example, if you take the time to have a regular retrospective with your team, right, is that going to impact productivity? I mean, you could be coding for an extra hour every two weeks. It’s certainly going to have some impact. But then it also depends on what is the outcome of that retrospective, and how much does it impact the long-term, um, you know, capacity to win of the team. So with that in mind, what I would say is that the most important thing I find is that you don’t just, again, as an engineering leader, as an engineering manager, you don’t just download certain practices and tools and frameworks on the teams. You always start from what are we trying to solve here and why does it matter, and get that shared understanding to the point where we’re all looking at the same problem roughly the same way. We can then disagree on solutions, but we agree that this is a problem worth solving right now, and we’re gonna go and do that. And so the tools and the frameworks are kind of like downstream from that. Okay, now what do we need to gain the insight? Oh, now what do we need to solve the problem? Then we can talk about those things. Okay? So as an example, one thing I’m working on now with my team, I mentioned this earlier, I believe, is like, uh, a bit of a full-on product discovery and delivery, um, process, right? That includes a product strategy, um, that shouldn’t change that much that often. And then there are a lot of tools and frameworks that we can use.
Tools, we use three different types of projects in Jira, for example. And when it comes to frameworks, we’re starting to adopt something called opportunity solution trees, which is just a fancy way of saying what outcomes are we trying to generate, what opportunities do we see to, to get there, and what are the solutions that can capitalize on these opportunities, right? That sort of thing. But it all starts with: we need to gain clarity about where we’re gonna go as a business and as a product, and everything kind of comes downstream from that, right? So I think if you take the time, and this is where I’ll leave it, if you take the time, and I think you should, to start there and to do this groundwork and create this shared context and understanding with your teams, everything else downstream becomes so much easier, because you can connect it to the problem that you’re solving. Otherwise, you’re just talking solutions for problems that most people will think are nonexistent or that just look completely different, right? And this takes work, this takes time, this takes energy, this takes attention, takes all of those things. But frankly, if you ask me, that’s the work of leadership. That’s the work of management.

Kovid Batra: Great. Well said, Paulo. I think Denis has a point to add here.

Denis Čahuk: Yeah, I had a conversation this week with one of the CEOs and founders of one of Ljubljana, Slovenia’s biggest agencies, because we were talking about this. And, and, and they asked me this question, they said, “Denis, you don’t have a catalog. Like, what do you do? Like, what does working with you look like? Do we do a workshop or something?” And I asked, “Do you want to do a workshop?” And I saw it on their face. They said, “Well..” I told them, “Yes, exactly, exactly. That’s why I don’t have a catalog. Because the workshops are this: I will show you how a great team works, right? I will give you all of this fancy storytelling about how productive teams work, and then you’re like, ‘Great. Cool. But we’re not that and we can’t have that in our team.’ So great, now I’d go away feeling demoralized, right? That’s not a good way of approaching working with that team. I always tell them, look, I don’t know what will help you. You probably also don’t know what will help you. We need to figure it out together. But generally, what’s more important than figuring out how to help you is to figure out how much you are willing to invest consistently in improvement. Because maybe I teach you something and you only have 10 minutes. That’s the wrong way about it, right? I need to ask you, how much time do you have consistently every week? 15 minutes? Okay, then I need to teach you something that you can put into practice in 15 minutes. Otherwise, I’m robbing you of your time. Otherwise, I’m wasting your time. If you have three-hour retrospectives and we’re putting nothing into action, I’m wasting your time, right? So we need to personally figure out, like, what is consistent for you? What kind of improvement, how intense do you want it? How do you know if you’re making progress?”

Those two are the most important things, because I always come to these kinds of questions about new tools and frameworks because people love asking me about, “Hey, Denis. Can you do a TDD workshop?”, “Denis, can you do a domain-driven design workshop?”, “Denis, can you help us do event storming?” And I always say, “If what you need is that one workshop, it’s not going to solve any problems because I’m all about consistent improvement, about learning, about growing your team, about, you know, investing into the people, not about changing, you know, changing some label or some other label.” And I always come back to the mantra of what can you do consistently starting this week so that the product and the team is much better six months from now? That’s the big question. That’s, that should be the focus. Cause if you need to learn something, you know, go do a certification that takes you a year to perform correctly, and then you need to renew it every year. That’s nonsense. This week, what can we do this week? Start this week, apply this week, and then consistently grow and apply every single week for the next six months. That would be huge. Or you can go to a conference and send everybody on vacation and pretend the workshop was very productive. Thank you.

Kovid Batra: Perfect. I think that brings us to the end of this episode. Uh, I think the next episode that we’re going to have would be in the next year, which is not very far. So, before we depart, uh, I think I would like to wish the audience, uh, a very Happy New Year in advance, and a Merry Christmas in advance. And to both of our panelists also, Paulo, Denis, thank you, thank you so much, uh, for taking out time. It was really great talking to you. I would love to have you both again here, talking more in depth about different topics and how to make teams better. But for today, that’s our time. Anything that you guys would want to add, please feel free. All right. Yeah, please go ahead.

Denis Čahuk: Thanks for inviting us.

Paulo André: Yeah, exactly. From my side, I was just going to say that thanks for having us. Thanks also to the audience that has put up with us and also asked very good questions, to be honest. Unfortunately, we couldn’t get to a few more that are still there that I think are very good ones. Um, but yeah, looking forward to coming back and deep diving into, into some of the topics that we talked about here.

Kovid Batra: Great. Definitely.

Denis Čahuk: And thank you, Kovid, for inviting us and for introducing us to each other, and to everybody backstage and at Typo, who are probably doing a lot of the annoying groundwork in the background that makes all of this so much more enjoyable. Thank you.

Kovid Batra: All right, guys. Thank you. Thank you so much. Have a great evening ahead. Bye!

Best Practices of CI/CD Optimization Using DORA Metrics

Every delay in your deployment could mean losing a customer. Speed and reliability are crucial, yet many teams struggle with slow deployment cycles, frustrating rollbacks, and poor visibility into performance metrics.

When you’ve worked hard on a feature, it is frustrating when a last-minute bug derails the deployment. Or you face a rollback that disrupts workflows and undermines team confidence. These familiar scenarios breed anxiety and inefficiency, impacting team dynamics and business outcomes.

Fortunately, DORA metrics offer a practical framework to address these challenges. By leveraging these metrics, organizations can gain insights into their CI/CD practices, pinpoint areas for improvement, and cultivate a culture of accountability. This blog will explore how to optimize CI/CD processes using DORA metrics, providing best practices and actionable strategies to help teams deliver quality software faster and more reliably.

Understanding the challenges in CI/CD optimization

Before we dive into solutions, it’s important to recognize the common challenges teams face in CI/CD optimization. By understanding these issues, we can better appreciate the strategies needed to overcome them.

Slow deployment cycles

Development teams frequently experience slow deployment cycles due to a variety of factors, including complex code bases, inadequate testing, and manual processes. Each of these elements can create significant bottlenecks. A sluggish cycle not only hampers agility but also reduces responsiveness to customer needs and market changes. To address this, teams can adopt practices like:

  • Streamlining the pipeline: Evaluate each step in your deployment pipeline to identify redundancies or unnecessary manual interventions. Aim to automate where possible.
  • Using feature flags: Implement feature toggles to enable or disable features without deploying new code. This allows you to deploy more frequently while managing risk effectively.
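
To make the feature-flag idea concrete, here is a minimal sketch in Python. The `FeatureFlags` class and the `new-checkout` flag are purely illustrative assumptions; in practice you would more likely use a dedicated flag service, but the core mechanic is the same: hash each user into a stable bucket and compare it against a rollout percentage.

```python
import hashlib

class FeatureFlags:
    """Toy in-memory flag store (illustrative, not a real library)."""
    def __init__(self):
        self._rollout = {}  # flag name -> rollout percentage, 0..100

    def set_rollout(self, flag, percent):
        self._rollout[flag] = percent

    def is_enabled(self, flag, user_id):
        percent = self._rollout.get(flag, 0)  # unknown flags default to off
        # Hash flag+user so each user lands in a stable bucket in [0, 100).
        digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < percent

flags = FeatureFlags()
flags.set_rollout("new-checkout", 0)    # code is deployed, feature stays dark
assert not flags.is_enabled("new-checkout", "user-42")

flags.set_rollout("new-checkout", 100)  # full release, with no new deploy
assert flags.is_enabled("new-checkout", "user-42")
```

Because the bucket is derived from a hash of the flag and user id, each user keeps the same on/off decision as the rollout percentage grows, which makes gradual rollouts predictable and a rollback as simple as setting the percentage back to zero.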

Frequent rollbacks

Frequent rollbacks can significantly disrupt workflows and erode team confidence. They typically indicate issues such as inadequate testing, lack of integration processes, or insufficient quality assurance. To mitigate this:

  • Enhance testing practices: Invest in automated testing at all levels—unit, integration, and end-to-end testing. This ensures that issues are caught early in the development process.
  • Implement a staging environment: To conduct final tests before deployment, use a staging environment that mirrors production. This practice helps catch integration issues that might not appear in earlier testing phases.
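
The test ladder above can be sketched as a simple promotion gate: each stage must pass before the next runs, and a failure anywhere blocks the deploy. Everything here is an illustrative assumption rather than a real CI system; the stage names, the toy checks, and the `promote` helper stand in for a pipeline whose checks would actually invoke your test runner and hit the staging environment.

```python
def run_stage(name, checks):
    """Run every check in a stage; return the names of the ones that failed."""
    return [check.__name__ for check in checks if not check()]

def promote(test_ladder):
    """Run stages in order; stop at the first stage with a failure."""
    for name, checks in test_ladder:
        failures = run_stage(name, checks)
        if failures:
            return f"blocked at {name}: {failures}"
    return "promoted to production"

# Illustrative checks; real ones would shell out to pytest or probe staging.
def unit_tax_rounding():
    return round(19.999, 2) == 20.0

def unit_cart_total():
    return sum([3, 4]) == 7

def staging_smoke():
    return True  # e.g. GET /health against the staging environment

ladder = [
    ("unit",        [unit_tax_rounding, unit_cart_total]),
    ("integration", [staging_smoke]),
]
print(promote(ladder))  # prints: promoted to production
```

The value of the ladder ordering is that cheap unit checks run first, so a failing staging environment is only ever consulted for code that has already passed the fast feedback loops.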

Visibility gaps

A lack of visibility into your CI/CD pipeline can make it challenging to track performance and pinpoint areas for improvement. This opacity can lead to delays and hinder your ability to make data-driven decisions. To improve visibility:

  • Adopt dashboard tools: Use dashboards that visualize key metrics in real time, allowing teams to monitor the health of the CI/CD pipeline effectively.
  • Regularly review performance: Schedule consistent review meetings to discuss metrics, successes, and areas for improvement. This fosters a culture of transparency and accountability.

Cultural barriers

Cultural barriers between development and operations teams can lead to misunderstandings and inefficiencies. To foster a more collaborative environment:

  • Encourage cross-team collaboration: Hold regular meetings that bring developers and operations staff together to discuss challenges and share knowledge.
  • Cultivate a DevOps mindset: Promote the principles of DevOps across your organization to break down silos and encourage shared responsibility for software delivery.

We understand how these challenges can create stress and hinder your team’s well-being. Addressing them is crucial not just for project success but also for maintaining a positive and productive work environment.

Introduction to DORA metrics

DORA (DevOps Research and Assessment) metrics are key performance indicators that provide valuable insights into your software delivery performance. They help measure and improve the effectiveness of your CI/CD practices, making them crucial for software teams aiming for excellence.

Overview of the four key metrics

  • Deployment frequency: This metric indicates how often code is successfully deployed to production. High deployment frequency shows a responsive and agile team.
  • Lead time for changes: This measures the time it takes for code to go from committed to deployed in production. Short lead times indicate efficient processes and quick feedback loops.
  • Change failure rate: This tracks the percentage of deployments that lead to failures in production. A lower change failure rate reflects higher code quality and effective testing practices.
  • Mean time to recovery (MTTR): This metric assesses how quickly the team can restore service after a failure. A shorter MTTR indicates a resilient system and effective incident management practices.
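As a rough illustration, all four metrics can be derived from a log of deployment records. The record layout below (commit time, deploy time, failure flag, recovery minutes) is an assumption for the sketch, not a standard schema:

```python
# Sketch: compute the four DORA metrics from a list of deployment records.
# The record fields are assumed for illustration; real pipelines emit richer data.
from datetime import datetime

deployments = [
    {"commit_time": datetime(2024, 1, 1, 9), "deploy_time": datetime(2024, 1, 1, 12),
     "failed": False, "recovery_minutes": 0},
    {"commit_time": datetime(2024, 1, 2, 10), "deploy_time": datetime(2024, 1, 2, 16),
     "failed": True, "recovery_minutes": 45},
    {"commit_time": datetime(2024, 1, 3, 8), "deploy_time": datetime(2024, 1, 3, 11),
     "failed": False, "recovery_minutes": 0},
]

days_in_period = 7

# Deployment frequency: deploys per day over the observed window.
deployment_frequency = len(deployments) / days_in_period

# Lead time for changes: mean hours from commit to production deploy.
lead_times = [(d["deploy_time"] - d["commit_time"]).total_seconds() / 3600
              for d in deployments]
lead_time_hours = sum(lead_times) / len(lead_times)

# Change failure rate: fraction of deployments that failed in production.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# MTTR: mean minutes to restore service after a failed deployment.
mttr_minutes = (sum(d["recovery_minutes"] for d in failures) / len(failures)
                if failures else 0.0)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Lead time for changes: {lead_time_hours:.1f} h")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"MTTR: {mttr_minutes:.0f} min")
```

Even this toy calculation shows why the metrics pair well: a team could raise deployment frequency while quietly worsening change failure rate, so they should always be read together.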

By understanding and utilizing these metrics, software teams gain actionable insights that foster continuous improvement and a culture of accountability.

Best practices for CI/CD optimization using DORA metrics

Implementing best practices is crucial for optimizing your CI/CD processes. Each practice provides actionable insights that can lead to substantial improvements.

Measure and analyze current performance

To effectively measure and analyze your current performance, start by utilizing the right tools to gather valuable data. This foundational step is essential for identifying areas that need improvement.

  • Utilize tools: Use tools like GitLab, Jenkins, and Typo to collect and visualize data on your DORA metrics. This data forms a solid foundation for identifying performance gaps.
  • Conduct regular performance reviews: Regularly review performance to pinpoint bottlenecks and areas needing improvement. A data-driven approach can reveal insights that may not be immediately obvious.
  • Establish baseline metrics: Set baseline metrics to understand your current performance, allowing you to set realistic improvement targets.

How Typo helps: Typo seamlessly integrates with your CI/CD tools, offering real-time insights into DORA metrics. This integration simplifies assessment and helps identify specific areas for enhancement.

Set specific, measurable goals

Clearly defined goals are crucial for driving performance. Establishing specific, measurable goals aligns your team's efforts with broader organizational objectives.

  • Define SMART goals: Establish goals that are Specific, Measurable, Achievable, Relevant, and Time-bound (SMART), and align them with your DORA metrics to ensure clarity in your objectives.
  • Communicate goals clearly: Ensure that these goals are communicated effectively to all team members. Utilize project management tools like ClickUp to track progress and maintain accountability.
  • Align with business goals: Align your objectives with broader business goals to support overall company strategy, reinforcing the importance of each team member's contribution.

How Typo helps: Typo's goal-setting and tracking capabilities promote accountability within your team, helping monitor progress toward targets and keeping everyone aligned and focused.

Implement incremental changes

Implementing gradual changes based on data insights can lead to more sustainable improvements. Focusing on small, manageable changes can often yield better results than sweeping overhauls.

  • Introduce gradual improvements: Focus on small, achievable changes based on insights from your DORA metrics. This approach is often more effective than trying to overhaul the entire system at once.
  • Enhance automation and testing: Work on enhancing automation and testing processes to reduce lead times and failure rates. Continuous integration practices should include automated unit and integration tests.
  • Incorporate continuous testing: Implement a CI/CD pipeline that includes continuous testing. By catching issues early, teams can significantly reduce lead times and minimize the impact of failures.

How Typo helps: Typo provides actionable recommendations based on performance data, guiding teams through effective process changes that can be implemented incrementally.

Foster a culture of collaboration

A collaborative environment fosters innovation and efficiency. Encouraging open communication and shared responsibility can significantly enhance team dynamics.

  • Encourage open communication: Promote transparent communication among team members using tools like Slack or Microsoft Teams.
  • Utilize retrospectives: Regularly hold retrospectives to celebrate successes and learn collectively from setbacks. This practice can improve team dynamics and help identify areas for improvement.
  • Promote cross-functional collaboration: Foster collaboration between development and operations teams. Conduct joint planning sessions to ensure alignment on objectives and priorities.

How Typo helps: With features like shared dashboards and performance reports, Typo facilitates transparency and alignment, breaking down silos and ensuring everyone is on the same page.

Review and adapt regularly

Regular reviews are essential for maintaining momentum and ensuring alignment with goals. Establishing a routine for evaluation can help your team adapt to changes effectively.

  • Establish a routine: Create a routine for evaluating your DORA metrics and adjusting strategies accordingly. Regular check-ins help ensure that your team remains aligned with its goals.
  • Conduct retrospectives: Use retrospectives to gather insights and continuously improve processes. Cultivate a safe environment where team members can express concerns and suggest improvements.
  • Consider A/B testing: Implement A/B testing in your CI/CD process to measure effectiveness. Testing different approaches can help identify the most effective practices.

How Typo helps: Typo’s advanced analytics capabilities support in-depth reviews, making it easier to identify trends and adapt your strategies effectively. This ongoing evaluation is key to maintaining momentum and achieving long-term success.

Additional strategies for faster deployments

To enhance your CI/CD process and achieve faster deployments, consider implementing the following strategies:

Automation

Automate various aspects of the development lifecycle to improve efficiency. For build automation, utilize tools like Jenkins, GitLab CI/CD, or CircleCI to streamline the process of building applications from source code. This reduces errors and increases speed. Implementing automated unit, integration, and regression tests allows teams to catch defects early in the development process, significantly reducing the time spent on manual testing and enhancing code quality. 

Additionally, automate the deployment of applications to different environments (development, staging, production) using tools like Ansible, Puppet, or Chef to ensure consistency and minimize the risk of human error during deployments.

Version Control

Employ a version control system like Git to effectively track changes to your codebase and facilitate collaboration among developers. Implementing effective branching strategies such as Gitflow or GitHub Flow helps manage different versions of your code and isolate development work, allowing multiple team members to work on features simultaneously without conflicts.

Continuous Integration

Encourage developers to commit their code changes frequently to the main branch. This practice helps reduce integration issues and allows conflicts to be identified early. Set up automated builds and tests that run whenever new code is committed to the main branch. 

This ensures that issues are caught immediately, allowing for quicker resolutions. Providing developers with immediate feedback on the success or failure of their builds and tests fosters a culture of accountability and promotes continuous improvement.

Continuous Delivery

Automate the deployment of applications to various environments, which reduces manual effort and minimizes the potential for errors. Ensure consistency between different environments to minimize deployment risks; utilizing containers or virtualization can help achieve this. 

Additionally, consider implementing canary releases, where new features are gradually rolled out to a small subset of users before a full deployment. This allows teams to monitor performance and address any issues before they impact the entire user base.

Infrastructure as Code (IaC)

Use tools like Terraform or CloudFormation to manage infrastructure resources (e.g., servers, networks, storage) as code. This approach simplifies infrastructure management and enhances consistency across environments. Store infrastructure code in a version control system to track changes and facilitate collaboration. 

This practice enables teams to maintain a history of infrastructure changes and revert if necessary. Ensuring consistent infrastructure across different environments through IaC reduces discrepancies that can lead to deployment failures.

Monitoring and Feedback

Implement monitoring tools to track the performance and health of your applications in production. Continuous monitoring allows teams to proactively identify and resolve issues before they escalate. Set up automated alerts to notify teams of critical issues or performance degradation. 

Quick alerts enable faster responses to potential problems. Use feedback from monitoring and alerting systems to identify and address problems proactively, helping teams learn from past deployments and improve future processes.

Final thoughts

By implementing these best practices, you will improve your deployment speed and reliability while also boosting team satisfaction and delivering better experiences to your customers. Remember, you’re not alone on this journey—resources and communities are available to support you every step of the way.

Your best bet for seamless collaboration is Typo. Sign up for a personalized demo and find out for yourself! 

Tracking DORA Metrics for Mobile Apps

Mobile development comes with a unique set of challenges: rapid release cycles, stringent user expectations, and the complexities of maintaining quality across diverse devices and operating systems. Engineering teams need robust frameworks to measure their performance and optimize their development processes effectively. 

DORA metrics—Deployment Frequency, Lead Time for Changes, Mean Time to Recovery (MTTR), and Change Failure Rate—are key indicators that provide valuable insights into a team’s DevOps performance. Leveraging these metrics can empower mobile development teams to make data-driven improvements that boost efficiency and enhance user satisfaction.

Importance of DORA Metrics in Mobile Development

DORA metrics, rooted in research from the DevOps Research and Assessment (DORA) group, help teams measure key aspects of software delivery performance.

Here's why they matter for mobile development:

  • Deployment Frequency: Mobile teams need to keep up with the fast pace of updates required to satisfy user demand. Frequent, smooth deployments signal a team’s ability to deliver features, fixes, and updates consistently.
  • Lead Time for Changes: This metric tracks the time between code commit and deployment. For mobile teams, shorter lead times mean a streamlined process, allowing quicker responses to user feedback and faster feature rollouts.
  • MTTR: Downtime in mobile apps can result in frustrated users and poor reviews. By tracking MTTR, teams can assess and improve their incident response processes, minimizing the time an app remains in a broken state.
  • Change Failure Rate: A high change failure rate can indicate inadequate testing or rushed releases. Monitoring this helps mobile teams enhance their quality assurance practices and prevent issues from reaching production.

Deep Dive into Practical Solutions for Tracking DORA Metrics

Tracking DORA metrics in mobile app development involves a range of technical strategies. Here, we explore practical approaches to implement effective measurement and visualization of these metrics.

Implementing a Measurement Framework

Integrating DORA metrics into existing workflows requires more than a simple add-on; it demands technical adjustments and robust toolchains that support continuous data collection and analysis.

  1. Automated Data Collection

Automating the collection of DORA metrics starts with choosing the right CI/CD platforms and tools that align with mobile development. Popular options include:

  • Jenkins Pipelines: Set up custom pipeline scripts that log deployment events and timestamps, capturing deployment frequency and lead times. Use plugins like the Pipeline Stage View for visual insights.
  • GitLab CI/CD: With GitLab's built-in analytics, teams can monitor deployment frequency and lead time for changes directly within their CI/CD pipeline.
  • GitHub Actions: Utilize workflows that trigger on commits and deployments. Custom actions can be developed to log data and push it to external observability platforms for visualization.

Technical setup: For accurate deployment tracking, implement triggers in your CI/CD pipelines that capture key timestamps at each stage (e.g., start and end of builds, start of deployment). This can be done using shell scripts that append timestamps to a database or monitoring tool.
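The timestamp-capture step described above can be as simple as a small helper invoked from pipeline hooks. The stage names and the JSON-lines sink here are assumptions for illustration; in practice the sink would be a database or a monitoring tool's ingestion endpoint:

```python
# Sketch: append pipeline stage timestamps to a JSON-lines log.
# Stage/event names and the log path are illustrative choices.
import json
from datetime import datetime, timezone

def log_stage_event(log_path: str, pipeline_id: str, stage: str, event: str) -> dict:
    """Record one pipeline event (e.g. build start, deploy end) with a UTC timestamp."""
    record = {
        "pipeline_id": pipeline_id,
        "stage": stage,          # e.g. "build", "test", "deploy"
        "event": event,          # e.g. "start", "end"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Called from pipeline hooks, for example:
# log_stage_event("pipeline_events.jsonl", "build-1234", "deploy", "start")
```

Pairing the `start` and `end` events per pipeline run then yields the durations needed for lead-time and deployment-frequency calculations.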

  2. Real-Time Monitoring and Visualization

To make sense of the collected data, teams need a robust visualization strategy. Here’s a deeper look at setting up effective dashboards:

  • Prometheus with Grafana: Integrate Prometheus to scrape data from CI/CD pipelines, and use Grafana to create dashboards with deployment trends and lead time breakdowns.
  • Elastic Stack (ELK): Ship logs from your CI/CD process to Elasticsearch and build visualizations in Kibana. This setup provides detailed logs alongside high-level metrics.

Technical Implementation Tips:

  • Use Prometheus exporters or custom scripts that expose metric data as HTTP endpoints.
  • Design Grafana dashboards to show current and historical trends for DORA metrics, using panels that highlight anomalies or spikes in lead time or failure rates.
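The exporter idea can be sketched with only the standard library. This is a minimal illustration of exposing metric values in Prometheus' plain-text exposition format; real exporters normally use the official prometheus_client package, and the metric names below are assumptions:

```python
# Sketch: serve DORA metric values at /metrics in Prometheus' text format,
# using only the standard library. Metric names and values are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

METRICS = {
    "dora_deployment_frequency_per_day": 3.2,
    "dora_lead_time_hours": 4.5,
    "dora_change_failure_rate": 0.08,
    "dora_mttr_minutes": 32.0,
}

def render_metrics(metrics: dict) -> str:
    """Render metrics as 'name value' lines, the core of the exposition format."""
    return "".join(f"{name} {value}\n" for name, value in metrics.items())

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics(METRICS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To run as a scrape target for Prometheus:
# HTTPServer(("", 9100), MetricsHandler).serve_forever()
```

Prometheus would then scrape this endpoint on a schedule, and Grafana panels query the resulting time series.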

  3. Comprehensive Testing Pipelines

Testing is integral to maintaining a low change failure rate. To align with this, engineering teams should develop thorough, automated testing strategies:

  • Unit Testing: Implement unit tests with frameworks like JUnit for Android or XCTest for iOS. Ensure these are part of every build to catch low-level issues early.
  • Integration Testing: Use tools such as Espresso and UIAutomator for Android and XCUITest for iOS to validate complex user interactions and integrations.
  • End-to-End Testing: Integrate Appium or Selenium to automate tests across different devices and OS versions. End-to-end testing helps simulate real-world usage and ensures new deployments don't break critical app flows.

Pipeline Integration:

  • Set up your CI/CD pipeline to trigger these tests automatically post-build. Configure your pipeline to fail early if a test doesn’t pass, preventing faulty code from being deployed.

  4. Incident Response and MTTR Management

Reducing MTTR requires visibility into incidents and the ability to act swiftly. Engineering teams should:

  • Implement Monitoring Tools: Use tools like Firebase Crashlytics for crash reporting and monitoring. Integrate with third-party tools like Sentry for comprehensive error tracking.
  • Set Up Automated Alerts: Configure alerts for critical failures using observability tools like Grafana Loki, Prometheus Alertmanager, or PagerDuty. This ensures that the team is notified as soon as an issue arises.

Strategies for Quick Recovery:

  • Implement automatic rollback procedures using feature flags and deployment strategies such as blue-green deployments or canary releases.
  • Use scripts or custom CI/CD logic to switch between versions if a critical incident is detected.
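The version-switch logic mentioned above can be reduced to a small decision function. This is a hedged sketch: the 5% error-rate threshold and the version labels are illustrative choices, and a real canary controller would also consider latency, crash rates, and sample size:

```python
# Sketch: rollback decision for a canary or blue-green deployment.
# The 5% error-rate threshold and version labels are illustrative.

def choose_active_version(stable: str, candidate: str,
                          candidate_error_rate: float,
                          threshold: float = 0.05) -> str:
    """Route traffic to the candidate unless its error rate breaches the threshold."""
    if candidate_error_rate > threshold:
        return stable        # roll back: keep serving the known-good version
    return candidate         # promote the new release

print(choose_active_version("v1.4.2", "v1.5.0", candidate_error_rate=0.12))  # v1.4.2
print(choose_active_version("v1.4.2", "v1.5.0", candidate_error_rate=0.01))  # v1.5.0
```

Wiring a check like this into CI/CD logic or a feature-flag service is what turns a multi-hour manual rollback into a near-instant automated one, directly improving MTTR.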

Weaving Typo into Your Workflow

After implementing these technical solutions, teams can leverage Typo for seamless DORA metrics integration. Typo can help consolidate data and make metric tracking more efficient and less time-consuming.

For teams looking to streamline the integration of DORA metrics tracking, Typo offers a solution that is both powerful and easy to adopt. Typo provides:

  • Automated Deployment Tracking: By integrating with existing CI/CD tools, Typo collects deployment data and visualizes trends, simplifying the tracking of deployment frequency.
  • Detailed Lead Time Analysis: Typo’s analytics engine breaks down lead times by stages in your pipeline, helping teams pinpoint delays in specific steps, such as code review or testing.
  • Real-Time Incident Response Support: Typo includes incident monitoring capabilities that assist in tracking MTTR and offering insights into incident trends, facilitating better response strategies.
  • Seamless Integration: Typo connects effortlessly with platforms like Jenkins, GitLab, GitHub, and Jira, centralizing DORA metrics in one place without disrupting existing workflows.

Typo’s integration capabilities mean engineering teams don’t need to build custom scripts or additional data pipelines. With Typo, developers can focus on analyzing data rather than collecting it, ultimately accelerating their journey toward continuous improvement.

Establishing a Continuous Improvement Cycle

To fully leverage DORA metrics, teams must establish a feedback loop that drives continuous improvement. This section outlines how to create a process that ensures long-term optimization and alignment with development goals.

  1. Regular Data Reviews: Conduct data-driven retrospectives to analyze trends and set goals for improvements.
  2. Iterative Process Enhancements: Use findings to adjust coding practices, enhance automated testing coverage, or refine build processes.
  3. Team Collaboration and Learning: Share knowledge across teams to spread best practices and avoid repeating mistakes.

Empowering Your Mobile Development Process

DORA metrics provide mobile engineering teams with the tools needed to measure and optimize their development processes, enhancing their ability to release high-quality apps efficiently. By integrating DORA metrics tracking through automated data collection, real-time monitoring, comprehensive testing pipelines, and advanced incident response practices, teams can achieve continuous improvement. 

Tools like Typo make these practices even more effective by offering seamless integration and real-time insights, allowing developers to focus on innovation and delivering exceptional user experiences.


Engineering Management Platform: A Quick Overview

Your engineering team is the biggest asset of your organization. They work tirelessly on software projects, despite the tight deadlines. 

However, there could be times when bottlenecks arise unexpectedly, and you struggle to get a clear picture of how resources are being utilized. 

This is where an Engineering Management Platform (EMP) comes into play.

An EMP acts as a central hub for engineering teams. It transforms chaos into clarity by offering actionable insights and aligning engineering efforts with broader business goals.

In this blog, we’ll discuss the essentials of EMPs and how to choose the best one for your team.

What are Engineering Management Platforms? 

Engineering Management Platforms (EMPs) are comprehensive tools that enhance the visibility and efficiency of engineering teams. They serve as a bridge between engineering processes and project management, enabling teams to optimize workflows, track how time and resources are allocated, monitor performance metrics, assess progress on key deliverables, and make informed decisions based on data-driven insights. This helps identify bottlenecks, streamline processes, and improve the developer experience (DX). 

Core Functionalities 

Actionable Insights 

One main functionality of EMP is transforming raw data into actionable insights. This is done by analyzing performance metrics to identify trends, inefficiencies, and potential bottlenecks in the software delivery process. 

Risk Management 

The Engineering Management Platform helps risk management by identifying potential vulnerabilities in the codebase, monitoring technical debt, and assessing the impact of changes in real time. 

Team Collaboration

These platforms foster collaboration between cross-functional teams (Developers, testers, product managers, etc). They can be integrated with team collaboration tools like Slack, JIRA, and MS Teams. It promotes knowledge sharing and reduces silos through shared insights and transparent reporting. 

Performance Management 

EMPs provide metrics to track performance against predefined benchmarks and allow organizations to assess development process effectiveness. By measuring KPIs, engineering leaders can identify areas of improvement and optimize workflows for better efficiency. 

Essential Elements of an Engineering Management Platform

Developer Experience 

Developer Experience refers to how easily developers can perform their tasks. When the right tools are available, workflows are streamlined, and a good DX increases productivity and job satisfaction. 

Key aspects include: 

  • Streamlined workflows such as seamless integration with IDEs, CI/CD pipelines, and VCS. 
  • Metrics such as WIP and Merge Frequency to identify areas for improvement. 
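The WIP and Merge Frequency metrics mentioned above can be derived directly from pull-request data. The record layout below is an assumption for illustration; an EMP would pull these fields from the Git provider's API:

```python
# Sketch: compute merge frequency and current WIP from pull-request records.
# The record fields (state, merged_at) are assumed for illustration.
from datetime import date

pull_requests = [
    {"state": "merged", "merged_at": date(2024, 3, 4)},
    {"state": "merged", "merged_at": date(2024, 3, 5)},
    {"state": "open",   "merged_at": None},
    {"state": "merged", "merged_at": date(2024, 3, 7)},
    {"state": "open",   "merged_at": None},
]

days_in_period = 7

# Merge frequency: merged PRs per day over the observed window.
merged = [pr for pr in pull_requests if pr["state"] == "merged"]
merge_frequency = len(merged) / days_in_period

# WIP: pull requests still open, a proxy for in-flight work.
wip = sum(1 for pr in pull_requests if pr["state"] == "open")

print(f"Merge frequency: {merge_frequency:.2f}/day, WIP: {wip}")
```

A low merge frequency combined with high WIP often signals review bottlenecks or oversized pull requests, which is exactly the kind of pattern an EMP surfaces.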

Engineering Velocity 

Engineering Velocity can be defined as the team’s speed and efficiency during software delivery. To track it, the engineering leader needs a bird’s-eye view of the team’s performance and any bottlenecks. 

Key aspects include:

  • Monitor DORA metrics to track the team’s performance 
  • Provide resources and tools to track progress toward goals 

Business Alignment 

Engineering Management Software must align with broader business goals to help move in the right direction. This alignment is necessary for maximizing the impact of engineering work on organizational goals.

Key aspects include: 

  • Track where engineering resources (Time and People) are being allocated. 
  • Improved project forecasting and sprint planning to meet deadlines and commitments. 

Benefits of Engineering Management Platform 

Enhances Team Collaboration

The engineering management platform offers end-to-end visibility into developer workload, processes, and potential bottlenecks. It provides centralized tools for the software engineering team to communicate and coordinate seamlessly by integrating with platforms like Slack or MS Teams. It also gives engineering leaders and developers sufficient, data-driven context for 1:1s. 

Increases Visibility 

Engineering software offers 360-degree visibility into engineering workflows to understand project statuses, deadlines, and risks for all stakeholders. This helps identify blockers and monitor progress in real-time. It also provides engineering managers with actionable data to guide and supervise engineering teams.

Facilitates Continuous Improvement 

EMPs allow developers to adapt quickly to changes in project demands or market conditions. They foster post-mortems and continuous learning, enabling team members to learn retrospectively from both successes and failures. 

Improves Developer Well-being 

EMPs provide real-time visibility into developers' workloads, allowing engineering managers to understand where team members' time is being invested. This helps them manage workloads, protect developers’ flow state, and reduce burnout.

Fosters Data-driven Decision-Making 

Engineering project management software provides actionable insights into a team’s performance and complex engineering projects. It further allows the development team to prioritize tasks effectively and engage in strategic discussions with stakeholders. 

How to Choose an Engineering Management Platform for Your Team? 

Understanding Your Team’s Needs

The first and foremost step is to assess your team’s pain points. Identify current challenges such as tracking progress, communication gaps, or workload management. Also consider team size and structure, such as whether your team is small or large, distributed or co-located, as this will influence the type of platform you need.

Be clear about what you want the platform to achieve, for example: improving efficiency, streamlining processes, or enhancing collaboration.

Evaluate Key Categories

When choosing the right EMP for your team, consider assessing the following categories:

Processes and Team Health

Evaluate how well the platform supports efficient workflows and provides a multidimensional picture of team health, including team well-being, collaboration, and productivity.

User Experience and Customization 

The Engineering Management Platform must have an intuitive and user-friendly interface for both tech and non-tech users. It should also include customization of dashboards, repositories, and metrics that cater to specific needs and workflow. 

Allocation and Business Value 

The right platform helps in assessing resource allocation across various projects and tasks such as time spent on different activities, identifying over or under-utilization of resources, and quantifying the value delivered by the engineering team. 

Integration Capabilities 

Strong integrations centralize the workflow, reduce fragmentation, and improve efficiency. These platforms must integrate seamlessly with existing tools, such as project management software, communication platforms, and CRMs.

Customer Support 

The platform must offer reliable customer support through multiple channels such as chat, email, or phone. You can also take note of extensive self-help resources like FAQs, tutorials, and forums.

Research and Compare Options 

Research the various EMPs available in the market. Then, based on your key needs, narrow down the platforms that fit your requirements. Use resources like reviews, comparisons, and recommendations from industry peers to understand real-world experiences. You can also schedule demos with shortlisted providers to explore their features and usability in detail. 

Conduct a Trial Run

Opt for a free trial or pilot phase to test the platform with a small group of users and get a hands-on feel. Afterward, gather feedback from your team to evaluate how well the tool fits into their workflows.

Select your Best Fit 

Finally, choose the EMP that best meets your requirements based on the above-mentioned categories and feedback provided by the team members. 

Typo: An Engineering Management Platform 

Typo is an effective engineering management platform that offers SDLC visibility, developer insights, and workflow automation to build better software faster. It seamlessly integrates into existing tech stacks, including Git version control, issue trackers, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Conclusion

An Engineering Management Platform (EMP) not only streamlines workflow but transforms the way teams operate. These platforms foster collaboration, reduce bottlenecks, and provide real-time visibility into progress and performance. 

Impact of Low Code Quality on Software Development

Maintaining a balance between speed and code quality is a challenge for every developer. 

Deadlines and fast-paced projects often push teams to prioritize rapid delivery, leading to compromises in code quality that can have long-lasting consequences. While cutting corners might seem efficient in the moment, it often results in technical debt and a codebase that becomes increasingly difficult to manage.

The hidden costs of poor code quality are real, impacting everything from development cycles to team morale. This blog delves into the real impact of low code quality, its common causes, and actionable solutions tailored to developers looking to elevate their code standards.

Understanding the Core Elements of Code Quality

Code quality goes beyond writing functional code. High-quality code is characterized by readability, maintainability, scalability, and reliability. Ensuring these aspects helps the software evolve efficiently without causing long-term issues for developers. Let’s break down these core elements further:

  • Readability: Code that follows consistent formatting, uses meaningful variable and function names, and includes clear inline documentation or comments. Readable code allows any developer to quickly understand its purpose and logic.
  • Maintainability: Modular code that is organized with reusable functions and components. Maintainability ensures that code changes, whether for bug fixes or new features, don’t introduce cascading errors throughout the codebase.
  • Scalability: Code designed with an architecture that supports growth. This involves using design patterns that decouple different parts of the code and make it easier to extend functionalities.
  • Reliability: Robust code that has been tested under different scenarios to minimize bugs and unexpected behavior.
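A small before/after example illustrates the readability and maintainability points above. The domain (order totals with a bulk discount) is invented for the illustration:

```python
# Low-quality version: cryptic names, magic numbers, no documented intent.
def f(x):
    t = 0
    for i in x:
        t += i[0] * i[1]
    if t > 100:
        t = t * 0.9
    return t

# Higher-quality version of the same logic: descriptive names, named
# constants, and a docstring make intent obvious and future changes safer.
BULK_DISCOUNT_THRESHOLD = 100
BULK_DISCOUNT_RATE = 0.10

def order_total(line_items):
    """Sum (unit_price, quantity) pairs, applying a bulk discount over the threshold."""
    total = sum(unit_price * quantity for unit_price, quantity in line_items)
    if total > BULK_DISCOUNT_THRESHOLD:
        total *= 1 - BULK_DISCOUNT_RATE
    return total

# Both compute the same result, but only one explains itself.
assert f([(10, 5), (20, 4)]) == order_total([(10, 5), (20, 4)])
```

The behavior is identical; what changes is how quickly the next developer can understand, test, and safely modify the discount rule.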

The Real Costs of Low Code Quality

Low code quality can significantly impact various facets of software development. Below are key issues developers face when working with substandard code:

Sluggish Development Cycles

Low-quality code often involves unclear logic and inconsistent practices, making it difficult for developers to trace bugs or implement new features. This can turn straightforward tasks into hours of frustrating work, delaying project milestones and adding stress to sprints.

Escalating Technical Debt

Technical debt accrues when suboptimal code is written to meet short-term goals. While it may offer an immediate solution, it complicates future updates. Developers need to spend significant time refactoring or rewriting code, which detracts from new development and wastes resources.

Bug-Prone Software

Substandard code tends to harbor hidden bugs that may not surface until they affect end-users. These bugs can be challenging to isolate and fix, leading to patchwork solutions that degrade the codebase further over time.

Collaboration Friction

When multiple developers contribute to a project, low code quality can cause misalignment and confusion. Developers might spend more time deciphering each other’s work than contributing to new development, leading to decreased team efficiency and a lower-quality product.

Scalability Bottlenecks

A codebase that doesn’t follow proper architectural principles will struggle when scaling. For instance, tightly coupled components make it hard to isolate and upgrade parts of the system, leading to performance issues and reduced flexibility.

Developer Burnout

Constantly working with poorly structured code is taxing. The mental effort needed to debug or refactor a convoluted codebase can demoralize even the most passionate developers, leading to frustration, reduced job satisfaction, and burnout.

Root Causes of Low Code Quality

Understanding the reasons behind low code quality helps in developing practical solutions. Here are some of the main causes:

Pressure to Deliver Rapidly

Tight project deadlines often push developers to prioritize quick delivery over thorough, well-thought-out code. While this may solve immediate business needs, it sacrifices code quality and introduces problems that require significant time and resources to fix later.

Lack of Unified Coding Standards

Without established coding standards, developers may approach problems in inconsistent ways. This lack of uniformity leads to a codebase that’s difficult to maintain, read, and extend. Coding standards help enforce best practices and maintain consistent formatting and documentation.

Insufficient Code Reviews

Skipping code reviews means missing opportunities to catch errors, bad practices, or code smells before they enter the main codebase. Peer reviews help maintain quality, share knowledge, and align the team on best practices.

Limited Testing Strategies

A codebase without sufficient testing coverage is bound to have undetected errors. Tests, especially automated ones, help identify issues early and ensure that any code changes do not break existing features.

Overreliance on Low-Code/No-Code Solutions

Low-code platforms offer rapid development but often generate code that isn’t optimized for long-term use. This code can be bloated, inefficient, and difficult to debug or extend, causing problems when the project scales or requires custom functionality.

Comprehensive Solutions to Improve Code Quality

Addressing low code quality requires deliberate, consistent effort. Here are expanded solutions with practical tips to help developers maintain and improve code standards:

Adopt Rigorous Code Reviews

Code reviews should be an integral part of the development process. They serve as a quality checkpoint to catch issues such as inefficient algorithms, missing documentation, or security vulnerabilities. To make code reviews effective:

  • Create a structured code review checklist that focuses on readability, adherence to coding standards, potential performance issues, and proper error handling.
  • Foster a culture where code reviews are seen as collaborative learning opportunities rather than criticism.
  • Use tools like GitHub’s or Bitbucket’s review features for in-depth code discussions.

Integrate Linters and Static Analysis Tools

Linters help maintain consistent formatting and detect common errors automatically. Tools like ESLint (JavaScript), RuboCop (Ruby), and Pylint (Python) check your code for syntax issues and adherence to coding standards. Static analysis tools go a step further by analyzing code for complex logic, performance issues, and potential vulnerabilities. To optimize their use:

  • Configure these tools to align with your project’s coding standards.
  • Run these tools in pre-commit hooks with Husky or integrate them into your CI/CD pipelines to ensure code quality checks are performed automatically.
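As a sketch of the pre-commit idea, the following hypothetical Python hook lints only the files staged for commit. It assumes `git` and `pylint` are on PATH; swap in whatever linter your project uses.

```python
"""Minimal pre-commit hook sketch: lint only the staged Python files."""
import subprocess

def staged_python_files(paths):
    """Keep only the Python sources from a list of staged file paths."""
    return [p for p in paths if p.endswith(".py")]

def main():
    # Ask git which files are staged (added, copied, or modified).
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    targets = staged_python_files(out.splitlines())
    if not targets:
        return 0  # nothing to lint; allow the commit
    # A non-zero exit status from the linter blocks the commit.
    return subprocess.run(["pylint", *targets]).returncode

# In the actual .git/hooks/pre-commit file: sys.exit(main())
```

Because the hook exits with the linter’s status code, a failing check stops the commit before low-quality code reaches the repository.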

Prioritize Comprehensive Testing

Adopt a multi-layered testing strategy to ensure that code is reliable and bug-free:

  • Unit Tests: Write unit tests for individual functions or methods to verify they work as expected. Frameworks like Jest for JavaScript, PyTest for Python, and JUnit for Java are popular choices.
  • Integration Tests: Ensure that different parts of your application work together smoothly. Tools like Cypress and Selenium can help automate these tests.
  • End-to-End Tests: Simulate real user interactions to catch potential issues that unit and integration tests might miss.
  • Integrate testing into your CI/CD pipeline so that tests run automatically on every code push or pull request.
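For instance, a minimal PyTest-style unit test might look like the sketch below; `slugify` is a hypothetical helper invented for the example, and PyTest would discover the `test_` functions automatically.

```python
import re

def slugify(title):
    """Lowercase a title and replace runs of non-alphanumerics with hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_basic():
    # Punctuation and case are normalized away.
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_whitespace():
    # Leading, trailing, and repeated separators collapse cleanly.
    assert slugify("  many   spaces ") == "many-spaces"
```

Small, focused tests like these run in milliseconds, so they can execute on every push without slowing the pipeline down.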

Dedicate Time for Refactoring

Refactoring helps improve code structure without changing its behavior. Regularly refactoring prevents code rot and keeps the codebase maintainable. Practical strategies include:

  • Identify “code smells” such as duplicated code, overly complex functions, or tightly coupled modules.
  • Apply design patterns where appropriate, such as Factory or Observer, to simplify complex logic.
  • Use IDE refactoring tools like IntelliJ IDEA’s refactor feature or Visual Studio Code extensions to speed up the process.
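As a small, hypothetical illustration of removing the “duplicated code” smell, two near-identical checkout functions collapse into one parameterised helper without changing behaviour:

```python
# Before: the same discount logic pasted twice.
def checkout_price_members(items):
    total = sum(item["price"] for item in items)
    return total - total * 0.10

def checkout_price_seniors(items):
    total = sum(item["price"] for item in items)
    return total - total * 0.15

# After: one helper takes the discount as a parameter,
# so a pricing change now touches a single function.
def checkout_price(items, discount):
    total = sum(item["price"] for item in items)
    return total * (1 - discount)
```

The refactored version is shorter, and a future discount tier becomes a call-site argument rather than another copy-pasted function.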

Create and Enforce Coding Standards

Having a shared set of coding standards ensures that everyone on the team writes code with consistent formatting and practices. To create effective standards:

  • Collaborate with the team to create a coding guideline that includes best practices, naming conventions, and common pitfalls to avoid.
  • Document the guideline in a format accessible to all team members, such as a README file or a Confluence page.
  • Conduct periodic training sessions to reinforce these standards.

Leverage Typo for Enhanced Code Quality

Typo can be a game-changer for teams looking to automate code quality checks and streamline reviews. It offers a range of features:

  • Automated Code Review: Detects common issues, code smells, and inconsistencies, supplementing manual code reviews.
  • Detailed Reports: Provides actionable insights, allowing developers to understand code weaknesses and focus on the most critical issues.
  • Seamless Collaboration: Enables teams to leave comments and feedback directly on code, enhancing peer review discussions and improving code knowledge sharing.
  • Continuous Monitoring: Tracks changes in code quality over time, helping teams spot regressions early and maintain consistent standards.

Enhance Knowledge Sharing and Training

Keeping the team informed on best practices and industry trends strengthens overall code quality. To foster continuous learning:

  • Organize workshops, code review sessions, and tech talks where team members share insights or recent challenges they overcame.
  • Encourage developers to participate in webinars, online courses, and conferences.
  • Create a mentorship program where senior developers guide junior members through complex code and teach them best practices.

Strategically Use Low-Code Tools

Low-code tools should be leveraged for non-critical components or rapid prototyping, but ensure that the code generated is thoroughly reviewed and optimized. For more complex or business-critical parts of a project:

  • Supplement low-code solutions with custom coding to improve performance and maintainability.
  • Regularly review and refactor code generated by these platforms to align with project standards.

Commit to Continuous Improvement

Improving code quality is a continuous process that requires commitment, collaboration, and the right tools. Developers should assess current practices, adopt new ones gradually, and leverage automated tools like Typo to streamline quality checks. 

By incorporating these strategies, teams can create a strong foundation for building maintainable, scalable, and high-quality software. Investing in code quality now paves the way for sustainable development, better project outcomes, and a healthier, more productive team.

Sign up for a quick demo with Typo to learn more!


Why the JIRA Dashboard Is Insufficient: Time for JIRA-Git Data Integration

Introduction

In today's fast-paced software development landscape, effective project management is crucial for engineering teams striving to meet deadlines, deliver quality products, and maintain customer satisfaction. Project management not only ensures that tasks are completed on time but also optimizes resource allocation, enhances team collaboration, and improves communication across all stakeholders. A key tool that has gained prominence in this domain is JIRA, which is widely recognized for its robust features tailored for agile project management.

However, while JIRA offers numerous advantages, such as customizable workflows, detailed reporting, and integration capabilities with other tools, it also comes with limitations that can hinder its effectiveness. For instance, teams relying solely on JIRA dashboard gadgets may find themselves missing critical contextual data from the development process. They may obtain a snapshot of project statuses but fail to appreciate the underlying issues impacting progress. Understanding both the strengths and weaknesses of JIRA dashboard gadgets is vital for engineering managers to make informed decisions about their project management strategies.

The Limitations of JIRA Dashboard Gadgets

Lack of Contextual Data

JIRA dashboard gadgets primarily focus on issue tracking and project management, often missing critical contextual data from the development process. While JIRA can show the status of tasks and issues, it does not provide insights into the actual code changes, commits, or branch activities that contribute to those tasks. This lack of context can lead to misunderstandings about project progress and team performance. For example, a task may be marked as "in progress," but without visibility into the associated Git commits, managers may not know if the team is encountering blockers or if significant progress has been made. This disconnect can result in misaligned expectations and hinder effective decision-making.

Static Information

JIRA dashboards built from gadgets such as the road map or sprint burndown can present a static view of project progress that may not reflect real-time changes in the development process. For instance, a sprint burndown gadget may indicate that a task is "done," but it does not account for any recent changes or updates made in the codebase. This static nature can hinder proactive decision-making, as managers may not have access to the most current information about the project's health. Additionally, relying on historical data, as the issue statistics gadget does, can create a lag in responding to emerging issues. In a rapidly changing development environment, the ability to react quickly to new information is crucial for maintaining project momentum, which is why teams need to move beyond default gadgets like the road map and burndown chart.

Limited Collaboration Insights

Collaboration is essential in software development, yet JIRA dashboards often do not capture the collaborative efforts of the team. Metrics such as code reviews, pull requests, and team discussions are crucial for understanding how well the team is working together. Without this information, managers may overlook opportunities for improvement in team dynamics and communication. For example, if a team is actively engaged in code reviews but this activity is not reflected in any JIRA gadget, managers may mistakenly assume that collaboration is lacking. This oversight can lead to missed opportunities to foster a more cohesive team environment and improve overall productivity.

Overemphasis on Individual Metrics

JIRA dashboards can sometimes encourage a focus on individual performance metrics rather than team outcomes. This can foster an environment of unhealthy competition, where developers prioritize personal achievements over collaborative success. Such an approach can undermine team cohesion and lead to burnout. When individual metrics are emphasized, developers may feel pressured to complete tasks quickly, potentially sacrificing code quality and collaboration. This focus on personal performance can create a culture where teamwork and knowledge sharing are undervalued, ultimately hindering project success.

Inflexibility in Reporting

JIRA dashboard layouts often rely on predefined metrics and reports, which may not align with the unique needs of every project or team. This inflexibility can result in a lack of relevant insights that are critical for effective project management. For example, a team working on a highly innovative project may require different metrics than a team maintaining legacy software. The inability to customize reports can lead to frustration and a sense of disconnect from the data being presented.

The Power of Integrating Git Data with JIRA

Integrating Git data with JIRA provides a more holistic view of project performance and developer productivity. Here’s how this integration can enhance insights:

Real-Time Visibility into Development Activity

By connecting Git repositories with JIRA, engineering managers can gain real-time visibility into commits, branches, and pull requests associated with JIRA issues. This integration allows teams to see the actual development work being done, providing context to the status of tasks on the JIRA dashboard. For instance, if a developer submits a pull request that relates to a specific JIRA ticket, the project manager instantly knows that work is ongoing, fostering transparency. Additionally, automated notifications for changes in the codebase linked to JIRA issues keep everyone updated without having to dig through multiple tools. This integrated approach ensures that management has a clear understanding of actual progress rather than relying on static task statuses.

Enhanced Collaboration and Communication

Integrating Git data with JIRA facilitates better collaboration among team members. Developers can reference JIRA issues in their commit messages, making it easier for the team to track changes related to specific tasks. This transparency fosters a culture of collaboration, as everyone can see how their work contributes to the overall project goals. Moreover, by having a clear link between code changes and JIRA issues, team members can engage in more meaningful discussions during stand-ups and retrospectives. This enhanced communication can lead to improved problem-solving and a stronger sense of shared ownership over the project.

Improved Risk Management

With integrated Git and JIRA data, engineering managers can identify potential risks more effectively. By monitoring commit activity and pull requests alongside JIRA issue statuses, managers can spot trends and anomalies that may indicate project delays or technical challenges. For example, if there is a sudden decrease in commit activity for a specific task, it may signal that the team is facing challenges or blockers. This proactive approach allows teams to address issues before they escalate, ultimately improving project outcomes and reducing the likelihood of last-minute crises.

Comprehensive Reporting and Analytics

The combination of JIRA and Git data enables more comprehensive reporting and analytics. Engineering managers can analyze not only task completion rates but also the underlying development activity that drives those metrics. This deeper understanding can inform better decision-making and strategic planning for future projects. For instance, by analyzing commit patterns and pull request activity, managers can identify trends in team performance and areas for improvement. This data-driven approach allows for more informed resource allocation and project planning, ultimately leading to more successful outcomes.

Best Practices for Integrating Git Data with JIRA

To maximize the benefits of integrating Git data with JIRA, engineering managers should consider the following best practices:

Select the Right Tools

Choose integration tools that fit your team's specific needs. Tools like Typo can facilitate the connection between Git and JIRA smoothly. Additionally, JIRA integrates directly with several source control systems, allowing for automatic updates and real-time visibility.

Sprint analysis in Typo

If you’re ready to enhance your project delivery speed and predictability, consider integrating Git data with your JIRA dashboards. Explore Typo: it can connect the two in a few clicks and make the combined view one of your favorite dashboards.

Establish Commit Message Guidelines

Encourage your team to adopt consistent commit message guidelines. Including JIRA issue keys in commit messages will create a direct link between the code change and the JIRA issue. This practice not only enhances traceability but also aids in generating meaningful reports and insights. For example, a commit message like 'JIRA-123: Fixed the login issue' can help managers quickly identify relevant commits related to specific tasks.
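One way to enforce such a guideline is a small commit-msg check. The sketch below is hypothetical; the key pattern is an assumption and should be adjusted to your tracker's project keys. It accepts messages like 'JIRA-123: Fixed the login issue'.

```python
import re

# Require a JIRA-style issue key prefix such as "JIRA-123: " at the
# start of every commit message.
ISSUE_KEY = re.compile(r"^[A-Z][A-Z0-9]+-\d+: ")

def has_issue_key(message):
    """Return True if the commit message starts with a 'KEY-123: ' prefix."""
    return bool(ISSUE_KEY.match(message))
```

Saved as a `commit-msg` hook, a wrapper script would read the message file git passes as its first argument and reject the commit whenever `has_issue_key` returns False.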

Automate Workflows

Leverage automation features available in both JIRA and Git platforms to streamline the integration process. For instance, set up automated triggers that update JIRA issues based on events in Git, such as moving a JIRA issue to 'In Review' once a pull request is submitted in Git. This reduces manual updates and alleviates the administrative burden on the team.
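As a rough sketch of such a trigger, the hypothetical webhook-handler fragment below extracts an issue key from a pull request's branch name and calls JIRA's issue-transitions endpoint. The base URL, transition ID, and `session` object (e.g. an authenticated `requests.Session`) are assumptions to adapt to your setup.

```python
import re

# Matches JIRA-style keys such as "JIRA-123" inside a branch name.
KEY_PATTERN = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def issue_key_from_branch(branch_name):
    """Pull the first issue key out of a branch like 'feature/JIRA-123-login'."""
    match = KEY_PATTERN.search(branch_name)
    return match.group(1) if match else None

def transition_issue(session, base_url, key, transition_id):
    """Move the issue (e.g. to 'In Review') via JIRA's transitions endpoint."""
    return session.post(
        f"{base_url}/rest/api/2/issue/{key}/transitions",
        json={"transition": {"id": transition_id}},
    )
```

A webhook handler for "pull request opened" events would call `issue_key_from_branch` on the PR's source branch and, when a key is found, invoke `transition_issue` so the JIRA board updates itself.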

Train Your Team

Providing adequate training to your team ensures everyone understands the integration process and how to effectively use both tools together. Conduct workshops or create user guides that outline the key benefits of integrating Git and JIRA, along with tips on how to leverage their combined functionalities for improved workflows.

Monitor and Adapt

Implement regular check-ins to assess the effectiveness of the integration. Gather feedback from team members on how well the integration is functioning and identify any pain points. This ongoing feedback loop allows you to make incremental improvements, ensuring the integration continues to meet the needs of the team.

Utilize Dashboards for Visualization

Create comprehensive dashboards that visually represent combined metrics from both Git and JIRA. Tools like JIRA dashboards, Confluence, or custom-built data visualization platforms can provide a clearer picture of project health. Metrics can include the number of active pull requests, average time in code review, or commit activity relevant to JIRA task completion.
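For example, "average time in code review" can be computed from pull request timestamps exported from your Git host; the sketch below is hypothetical and assumes you can obtain (opened, merged) datetime pairs.

```python
from datetime import datetime

def avg_review_hours(pull_requests):
    """Average hours from PR opened to merged.

    pull_requests: list of (opened_at, merged_at) datetime pairs.
    """
    if not pull_requests:
        return 0.0
    total_seconds = sum(
        (merged - opened).total_seconds() for opened, merged in pull_requests
    )
    return total_seconds / len(pull_requests) / 3600
```

Plotting this number per sprint alongside JIRA task completion rates makes review bottlenecks visible on the same dashboard as delivery metrics.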

Encourage Regular Code Reviews

With the changes being reflected in JIRA, create a culture around regular code reviews linked to specific JIRA tasks. This practice encourages collaboration among team members, ensures code quality, and keeps everyone aligned with project objectives. Regular code reviews also lead to knowledge sharing, which strengthens the team's overall skill set.

Case Study: 25% Improvement in Task Completion with JIRA-Git Integration at Trackso

To illustrate the benefits of integrating Git data with JIRA, let’s consider a case study of a software development team at a company called Trackso.

Background

Trackso, a remote monitoring platform for solar energy, was developing a new SaaS platform with a diverse team of developers, designers, and project managers. The team relied heavily on JIRA for tracking project statuses, but found their productivity hampered by several issues:

  • Tasks had vague statuses that did not reflect actual progress to project managers.
  • Developers frequently worked in isolation without insight into each other's code contributions.
  • They could not correlate project delays with specific code changes or reviews, leading to poor risk management.

Implementation of Git and JIRA Integration

In 2022, Trackso's engineering manager decided to integrate Git data with JIRA. They chose GitHub for version control, given its robust collaborative features. The team set up automatic links between their JIRA tickets and corresponding GitHub pull requests and standardized their commit messages to include JIRA issue keys.

Metrics of Improvement

After implementing the integration, Trackso experienced significant improvements within three months:

  • Increased Collaboration: There was a 40% increase in code review participation as developers began referencing JIRA issues in their commits, facilitating clearer discussions during code reviews.
  • Reduced Delivery Times: Average task completion times decreased by 25%, as developers could see almost immediately when tasks were being actively worked on or if blockers arose.
  • Improved Risk Management: The team reduced project delays by 30% due to enhanced visibility. For example, the integration helped identify that a critical feature was lagging due to slow pull request reviews. This enabled team leads to improve their code review workflows.
  • Boosted Developer Morale: Developer satisfaction surveys indicated that 85% of team members felt more engaged in their work due to improved communication and clarity around task statuses.

Challenges Faced

Despite these successes, Trackso faced challenges during the integration process:

  • Initial Resistance: Some team members were hesitant to adopt the new practices and personalized dashboards. The engineering manager organized training sessions to showcase the benefits of integrating Git and JIRA, promoting buy-in from the team and helping everyone move beyond the default dashboard.
  • Maintaining Commit Message Standards: Initially, not all developers consistently used the issue keys in their commit messages. The team revisited training sessions and created a shared repository of best practices to ensure adherence.

Conclusion

While JIRA dashboards are valuable tools for project management, they are insufficient on their own for engineering managers seeking to improve project delivery speed and predictability. By integrating Git data with JIRA, teams can gain richer insights into development activity, enhance collaboration, and manage risks more effectively. This holistic approach empowers engineering leaders to make informed decisions and drive continuous improvement in their software development processes. Embracing this integration will ultimately lead to better project outcomes and a more productive engineering culture. As the software development landscape continues to evolve, leveraging the power of both JIRA and Git data will be essential for teams looking to stay competitive and deliver high-quality products efficiently.


What is Developer Experience?

Let’s take a look at the situation below: 

You are driving a high-performance car, but the controls are clunky, the dashboard is confusing, and the engine constantly overheats. 

Frustrating, right? 

When developers work in a similar environment, dealing with inefficient tools, unclear processes, and a lack of collaboration, it leads to decreased morale and productivity. 

Just as a smooth, responsive driving experience makes all the difference on the road, a seamless Developer Experience (DX) is essential for developer teams.

DX isn't just a buzzword; it's a key factor in how developers interact with their work environments and produce innovative solutions. In this blog, let’s explore what Developer Experience truly means and why it is crucial for developers. 

What is Developer Experience? 

Developer Experience, commonly known as DX, is the overall quality of developers’ interactions with their work environment. It encompasses tools, processes, and organizational culture, and it aims to create an environment where developers can work efficiently, stay focused, and produce high-quality code with minimal friction. 

Why Does Developer Experience Matter? 

Developer Experience is a critical factor in enhancing organizational performance and innovation. It matters because:

Boosts Developer Productivity 

When developers have access to intuitive tools, clear documentation, and streamlined workflows, they can complete tasks more quickly and focus on core activities. This leads to faster development cycles and improved efficiency, as developers can engage more deeply with their work. 

According to Gartner's report, Developer Experience is the key indicator of Developer Productivity.

High Product Quality 

A positive developer experience leads to improved code quality, which results in higher-quality products, greater customer satisfaction, and fewer defects. It also fosters effective communication and collaboration, reducing developers’ cognitive load and making it easier to implement best practices thoroughly. 

Talent Attraction and Retention 

A positive work environment attracts skilled developers and retains top talent. When the organization supports developers’ creativity and innovation, it significantly reduces turnover rates. Moreover, when developers feel psychologically safe to express ideas and take risks, they are more likely to stay with the organization for the long run. 

Enhances Developer Morale 

When developers feel empowered and supported at their workplace, they are more likely to be engaged with their work. This further leads to high morale and job satisfaction. When organizations minimize common pain points, developers encounter fewer obstacles, allowing them to focus more on productive tasks rather than tedious ones.

Competitive Advantage 

Organizations with positive developer experiences often gain a competitive edge in the market. Enabling faster development cycles and higher-quality software delivery allows companies to respond more swiftly to market demands and customer needs. This agility improves customer satisfaction and positions the organization favorably against competitors. 

What is Flow State and Why Consider it as a Core Goal of a Great DX? 

In simple words, flow state means ‘being in the zone’. Also known as deep work, it refers to a mental state characterized by complete immersion and focused engagement in an activity. Achieving flow brings a strong sense of engagement, enjoyment, and productivity. 

Flow state is considered a core goal of a great DX because it allows developers to work with remarkable efficiency, completing tasks faster and with higher quality. When developers are deeply engaged in their work, they generate innovative solutions and ideas, leading to better problem-solving outcomes. 

Also, flow isn’t limited to individual work; it can also be experienced collectively within teams. When development teams achieve flow together, they operate with synchronized efficiency, which enhances collaboration and communication. 

What Developer Experience Is Not

Developer Experience Is Not Just Good Tooling 

Tools like IDEs, frameworks, and libraries play a vital role in a positive developer experience, but they are not the sole component. Good tooling is merely a part of the overall experience. It helps streamline workflows and reduce friction, but DX encompasses much more, such as documentation, support, learning resources, and community. Tools alone cannot address issues like poor communication, lack of feedback, or insufficient documentation, and without a holistic approach, developer satisfaction and productivity can still suffer.

Developer Experience is Not a Quick Fix 

Improving DX isn’t a one-off task that can be patched quickly. It requires a long-term commitment and a deep understanding of developer needs, consistent feedback loops, and iterative improvements. Great developer experience involves ongoing evaluation and adaptation of processes, tools, and team dynamics to create an environment where developers can thrive over time. 

Developer Experience isn’t About Pampering Developers or Using AI tools to Cut Costs

One common myth about DX is that it focuses solely on pampering developers or on using AI tools as cost-cutting measures. True DX aims to create an environment where developers can work efficiently and effectively; in other words, it is about empowering developers with the right resources, autonomy, and opportunities for growth. While AI tools help simplify tasks, adopting them without considering the broader context of developer needs can lead to dissatisfaction if they do not genuinely enhance the work experience. 

Developer Experience is Not User Experience 

DX and UX sound alike; however, they target different audiences and goals. User Experience is about how end-users interact with a product, while Developer Experience concerns the experience of the developers who build, test, and deploy products. Improving DX involves understanding developers' unique challenges and needs rather than simply applying UX principles meant for end-users.

Developer Experience Is Not the Same as Developer Productivity 

Developer Experience and Developer Productivity are interrelated yet not identical. While a positive developer experience can lead to increased productivity, productivity metrics alone don’t reflect the quality of the developer experience. These metrics often focus on output (like lines of code or hours worked), which can be misleading. True DX encompasses emotional satisfaction, engagement levels, and the overall environment in which developers work. A positive developer experience creates conditions that naturally lead to higher productivity rather than measuring it directly through traditional metrics.

How does Typo Help to Improve DevEx?

Typo is a valuable tool for software development teams that captures a 360° view of developer experience. It provides early indicators of developer well-being and actionable insights on the areas that need attention, drawing on signals from work patterns and continuous AI-driven pulse check-ins.

Key features

  • Research-backed framework that captures parameters and uncovers real issues.
  • In-depth insights are published on the dashboard.
  • Combines data-driven insights with proactive monitoring and strategic intervention.
  • Identifies the key priority areas affecting developer productivity and well-being.
  • Sends automated alerts to identify burnout signs in developers at an early stage.

Conclusion 

Developer Experience empowers developers to focus on building exceptional solutions. A great DX fosters innovation, enhances productivity, and creates an environment where developers can thrive individually and collaboratively.

Implementing the right developer tools empowers organizations to enhance DX, enabling teams to prevent burnout and reach their full potential.


SPACE Framework: Strategies for Maximum Efficiency in Developer Productivity

What if we told you that writing more code could be making you less productive? 

While equating productivity with output is tempting, developer efficiency is far more complex. The real challenge often lies in processes, collaboration, and well-being. Without addressing these, inefficiencies and burnout will inevitably follow.

You may spend hours coding, only to feel your work isn’t making an impact—projects get delayed, bug fixes drag on, and constant context switching drains your focus. The key isn’t to work harder but smarter by solving the root causes of these issues.

The SPACE framework addresses this by focusing on five dimensions: Satisfaction, Performance, Activity, Communication, and Efficiency. It helps teams improve how much they do and how effectively they work, reducing workflow friction, improving collaboration, and supporting well-being to boost long-term productivity.

Understanding the SPACE Framework

The SPACE framework addresses five key dimensions of developer productivity: satisfaction and well-being, performance, activity, collaboration and communication, and efficiency and flow. Together, these dimensions provide a comprehensive view of how developers work and where improvements can be made, beyond just measuring output.

By taking these factors into account, teams can better support developers, helping them not only produce better work but also maintain their motivation and well-being. Let’s take a closer look at each part of the framework and how it can help your team achieve a balance between productivity and a healthy work environment.

Common Developer Challenges that SPACE Addresses

In fast-paced, tech-driven environments, developers face several roadblocks to productivity:

  • Constant interruptions: Developers often deal with frequent context switching, from bug fixes to feature development to emergency support, making it hard to stay focused.
  • Cross-team collaboration: Working with multiple teams, such as DevOps, QA, and product management, can lead to miscommunication and misaligned priorities.
  • Lack of real-time feedback: Without timely feedback, developers may unknowingly veer off course or miss performance issues until much later in the development cycle.
  • Technical debt: Legacy systems and inconsistent coding practices create overhead and slow down development cycles, making it harder to move quickly on new features.

The SPACE framework helps identify and address these challenges by focusing on improving both the technical processes and the developer experience.

How SPACE can help: A Deep Dive into Each Dimension

Let’s explore how each aspect of the SPACE framework can directly impact technical teams:

Satisfaction and well-being

Developers are more productive when they feel engaged and valued. It's important to create an environment where developers are recognized for their contributions and have a healthy work-life balance. This can include feedback mechanisms, peer recognition, or even mental health initiatives. Automated tools that reduce repetitive tasks can also contribute to overall well-being.

Performance

Measuring performance should go beyond tracking the number of commits or pull requests. It’s about understanding the impact of the work being done. High-performing teams focus on delivering high-quality code and minimizing technical debt. Integrating automated testing and static code analysis tools into your CI/CD pipeline ensures code quality is maintained without manual intervention.

Activity

Focusing on meaningful developer activity, such as code reviews, tests written, and pull requests merged, helps align efforts with goals. Tools that track and visualize developer activities provide insight into how time is spent. For example, tracking code review completion times or how often changes are being pushed can reveal bottlenecks or opportunities for improving workflows.
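For example, review turnaround can be computed directly from pull request timestamps. The sketch below uses hypothetical PR records with `opened` and `first_review` fields; a real implementation would pull these events from your Git hosting provider's API.

```python
from datetime import datetime
from statistics import median

# Hypothetical pull request events: when each PR was opened and first reviewed.
pull_requests = [
    {"opened": datetime(2024, 5, 1, 9), "first_review": datetime(2024, 5, 1, 13)},
    {"opened": datetime(2024, 5, 2, 14), "first_review": datetime(2024, 5, 4, 10)},
    {"opened": datetime(2024, 5, 5, 8), "first_review": datetime(2024, 5, 5, 9)},
]

# Hours each PR waited for its first review.
wait_hours = [
    (pr["first_review"] - pr["opened"]).total_seconds() / 3600
    for pr in pull_requests
]

# A high median wait signals a review bottleneck; the slowest PR shows the worst case.
median_wait = median(wait_hours)
slowest = max(wait_hours)
```

A persistently high median here is exactly the kind of workflow bottleneck the activity dimension is meant to surface.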

Collaboration and communication

Effective communication across teams reduces friction in the development process. By integrating communication tools directly into the workflow, such as through Git or CI/CD notifications, teams can stay aligned on project goals. Automating feedback loops within the development process, such as notifications when builds succeed or fail, helps teams respond faster to issues.

Efficiency and flow

Developers enter a “flow state” when they can work on a task without distractions. One way to foster this is by reducing manual tasks and interruptions. Implementing CI/CD tools that automate repetitive tasks—like build testing or deployments—frees up developers to focus on writing code. It’s also important to create dedicated time blocks where developers can work without interruptions, helping them enter and maintain that flow.

Practical Strategies for Applying the SPACE Framework

To make the SPACE framework actionable, here are some practical strategies your team can implement:

Automate repetitive tasks to enhance focus

A large portion of developer time is spent on tasks that can easily be automated, such as code formatting, linting, and testing. By introducing tools that handle these tasks automatically, developers can focus on the more meaningful aspects of their work, like writing new features or fixing bugs. This is where tools like Typo can make a difference. Typo integrates seamlessly into your development process, ensuring that code adheres to best practices by automating code quality checks and providing real-time feedback. Automating these reviews reduces the time developers spend on manual reviews and ensures consistency across the codebase.

Track meaningful metrics

Instead of focusing on superficial metrics like lines of code written or hours logged, focus on tracking activities that lead to tangible progress. Typo, for example, helps track key metrics like the number of pull requests merged, the percentage of code coverage, or the speed at which developers address code reviews. These insights give team leads a clearer picture of where bottlenecks are occurring and help teams prioritize tasks that move the project forward.

Improve communication and collaboration through integrated tools

Miscommunication between developers, product managers, and QA teams can cause delays and frustration. Integrating feedback systems that provide automatic notifications when tests fail or builds succeed can significantly improve collaboration. Typo plays a role here by streamlining communication between teams. By automatically reporting code review statuses or deployment readiness, Typo ensures that everyone stays informed without the need for constant manual updates or status meetings.

Protect flow time and eliminate disruptions

Protecting developer flow is essential to maintaining efficiency. Schedule dedicated “flow” periods where meetings are minimized, and developers can focus solely on their tasks. Typo enhances this by minimizing the need for developers to leave their coding environment to check on build statuses or review feedback. With automated reports, developers can stay updated without disrupting their focus. This helps ensure that developers can spend more time in their flow state and less time on administrative tasks.

Identify bottlenecks in your workflow

Using metrics from tools like Typo, you can gain visibility into where delays are happening in your development process—whether it's slow code review cycles, inefficient testing processes, or unclear requirements. With this insight, you can make targeted improvements, such as adjusting team structures, automating manual testing processes, or dedicating more resources to code reviews to ensure smoother project progression.

How Typo supports the SPACE framework

By using Typo as part of your workflow, you can naturally align with many of the principles of the SPACE framework:

  • Automated code quality: Typo ensures code quality through automated reviews and real-time feedback, reducing the manual effort required during code review processes.
  • Tracking developer metrics: Typo tracks key activities that are directly related to developer efficiency, helping teams stay on track with performance goals.
  • Seamless communication: With automatic notifications and updates, Typo ensures that developers and other team members stay in sync without manual reporting, which helps maintain flow and improve collaboration.
  • Supporting flow: Typo’s integrations provide updates within the development environment, reducing the need for developers to context switch between tasks.

Bringing it all together: Maximizing Developer Productivity with SPACE

The SPACE framework offers a well-rounded approach to improving developer productivity and well-being. By focusing on automating repetitive tasks, improving collaboration, and fostering uninterrupted flow time, your team can achieve more without sacrificing quality or developer satisfaction. Tools like Typo naturally fit into this process, helping teams streamline workflows, enhance communication, and maintain high code quality.

If you’re looking to implement the SPACE framework, start by automating repetitive tasks and protecting your developers' flow time. Gradually introduce improvements in collaboration and tracking meaningful activity. Over time, you’ll notice improvements in both productivity and the overall well-being of your development team.

What challenges are you facing in your development workflow? 

Share your experiences and let us know how tools like Typo could help your team implement the SPACE framework to improve productivity and collaboration!

Schedule a demo with Typo today

measuring developer productivity

Measuring and Improving Developer Productivity

Developer productivity is the new buzzword across the industry. Measuring it has gone mainstream since the rise of remote work, and companies like McKinsey publishing articles titled “Yes, you can measure software developer productivity” have caused a stir in the software development community. So we thought we should share our take on developer productivity.

We will be covering the following Whats, Whys & Hows about Developer Productivity in this piece-

  • What is developer productivity?
  • Why do we need to measure developer productivity?
  • How do we measure it at the team and individual level, and why is it more complicated to measure developer productivity than sales or hiring productivity?
  • Challenges and dangers of measuring developer productivity, and what not to measure.
  • What is the impact of measuring developer productivity on engineering culture?

What is Developer Productivity?

Developer productivity refers to the effectiveness and efficiency with which software developers create high-quality software that meets business goals. It encompasses various dimensions, including code quality, development speed, team collaboration, and adherence to best practices. For engineering managers and leaders, understanding developer productivity is essential for driving continuous improvement and achieving successful project outcomes.

Key Aspects of Developer Productivity

Quality of Output: Developer productivity is not just about the quantity of code or code changes produced; it also involves the quality of that code. High-quality code is maintainable, readable, and free of significant bugs, which ultimately contributes to the overall success of a project.

Development Speed: This aspect measures how quickly developers can deliver features, fixes, and updates, often referred to as developer velocity. While velocity is important, it should not come at the expense of code quality. Effective engineering teams strike a balance between delivering quickly and maintaining high standards.

Collaboration and Team Dynamics: Successful software development relies heavily on effective teamwork. Collaboration tools and practices that foster communication and knowledge sharing can significantly enhance developer productivity. Engineering managers should prioritize creating a collaborative environment that encourages teamwork.

Adherence to Best Practices: Following coding standards, conducting code reviews, and implementing testing protocols are essential for maintaining development productivity. These practices ensure that developers produce high-quality work consistently, which can lead to improved project outcomes.

Wanna Improve your Dev Productivity?

Why do we need to measure dev productivity?

We all know that no one loves being measured, but CEOs and CFOs have an undying love for measuring the ROI of their teams, and that can't be ignored: the higher the development productivity, the higher the ROI. Measuring developer productivity is also essential for engineering managers and leaders who want to optimize their teams' performance, because we can't improve something that we don't measure.

Understanding how effectively developers work can lead to improved project outcomes, better resource allocation, and enhanced team morale. In this section, we will explore the key reasons why measuring developer productivity is crucial for engineering management.

Enhancing Team Performance

Measuring developer productivity allows engineering managers to identify strengths and weaknesses within their teams. By analyzing developer productivity metrics, leaders can pinpoint areas where developers excel and where they may need additional support or resources. This insight enables managers to tailor training programs, allocate tasks more effectively, and foster a culture of continuous improvement.

Team's insights in Typo

Driving Business Outcomes

Developer productivity is directly linked to business success. By measuring development team productivity, managers can assess how effectively their teams deliver features, fix bugs, and contribute to overall project goals. Understanding productivity levels helps align development efforts with business objectives, ensuring that the team is focused on delivering value that meets customer needs.

Improving Resource Allocation

Effective measurement of developer productivity enables better resource allocation. By understanding how much time and effort are required for various tasks, managers can make informed decisions about staffing, project timelines, and budget allocation. This ensures that resources are utilized efficiently, minimizing waste and maximizing output.

Fostering a Positive Work Environment

Measuring developer productivity can also contribute to a positive work environment. By recognizing high-performing teams and individuals, managers can boost morale and motivation. Additionally, understanding productivity trends can help identify burnout or dissatisfaction, allowing leaders to address issues proactively and create a healthier workplace culture.

Developer surveys insights in Typo

Facilitating Data-Driven Decisions

In today’s fast-paced software development landscape, data-driven decision-making is essential. Measuring developer productivity provides concrete data that can inform strategic decisions. Whether it's choosing new tools, adopting agile methodologies, or implementing process changes, having reliable developer productivity metrics allows managers to make informed choices that enhance team performance.

Investment distribution in Typo

Encouraging Collaboration and Communication

Regularly measuring productivity can highlight the importance of collaboration and communication within teams. By assessing metrics related to teamwork, such as code reviews and pair programming sessions, managers can encourage practices that foster collaboration. This not only improves productivity but also the overall developer experience, strengthening team dynamics and knowledge sharing.

Ultimately, understanding developer experience and measuring developer productivity leads to better outcomes for both the team and the organization as a whole.

How do we measure Developer Productivity?

Measuring developer productivity is essential for engineering managers and leaders who want to optimize their teams' performance.

Strategies for Measuring Productivity

Focus on Outcomes, Not Outputs: Shift the emphasis from measuring outputs like lines of code to focusing on outcomes that align with business objectives. This encourages developers to think more strategically about the impact of their work.

Measure at the Team Level: Assess productivity at the team level rather than at the individual level. This fosters team collaboration, knowledge sharing, and a focus on collective goals rather than individual competition.

Incorporate Qualitative Feedback: Balance quantitative metrics with qualitative feedback from developers through surveys, interviews, and regular check-ins. This provides valuable context and helps identify areas for improvement.

Encourage Continuous Improvement: Position productivity measurement as a tool for continuous improvement rather than a means of evaluation. Encourage developers to use metrics to identify areas for growth and work together to optimize workflows and development processes.

Lead by Example: As engineering managers and leaders, model the behavior you want to see in your team. Prioritize work-life balance, encourage risk-taking and innovation, and create an environment where developers feel supported and empowered.

Measuring developer productivity involves assessing both team and individual contributions to understand how effectively developers are delivering value through their development processes. Here’s how to approach measuring productivity at both levels:

Team-Level Developer Productivity

Measuring productivity at the team level provides a more comprehensive view of how collaborative efforts contribute to project success. Here are some effective metrics:

DORA Metrics

The DevOps Research and Assessment (DORA) metrics are widely recognized for evaluating team performance. Key metrics include:

  • Deployment Frequency: How often the software engineering team releases code to production.
  • Lead Time for Changes: The time taken for committed code to reach production.
  • Change Failure Rate: The percentage of deployments that result in failures.
  • Time to Restore Service: The time taken to recover from a failure.
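As a rough sketch, the four DORA metrics above can be derived from deployment and incident records. The record shapes and values below are hypothetical; in practice, they would come from your CI/CD pipeline and incident-management tooling.

```python
from datetime import datetime

# Hypothetical deployment records: commit time, deploy time, and outcome.
deployments = [
    {"committed": datetime(2024, 5, 1, 9), "deployed": datetime(2024, 5, 1, 15), "failed": False},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 3, 11), "failed": True},
    {"committed": datetime(2024, 5, 4, 8), "deployed": datetime(2024, 5, 4, 12), "failed": False},
]
# Hypothetical incident records: when service broke and when it was restored.
incidents = [
    {"start": datetime(2024, 5, 3, 11), "restored": datetime(2024, 5, 3, 14)},
]

period_days = 7

# Deployment frequency: deployments per day over the period.
deployment_frequency = len(deployments) / period_days

# Lead time for changes: median hours from commit to production.
lead_times = sorted(
    (d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deployments
)
median_lead_time = lead_times[len(lead_times) // 2]

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Time to restore service: mean hours from incident start to recovery.
mttr = sum(
    (i["restored"] - i["start"]).total_seconds() / 3600 for i in incidents
) / len(incidents)
```

Tracked over successive periods, these four numbers give a compact trend line for team-level delivery performance.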

Issue Cycle Time

This metric measures the time taken from the start of work on a task to its completion, providing insights into the efficiency of the software development process.
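A minimal sketch of computing cycle time from issue timestamps follows; the issue records are hypothetical, and a real version would read start and completion events from your issue tracker.

```python
from datetime import datetime

# Hypothetical issue records with start-of-work and completion timestamps.
issues = [
    {"started": datetime(2024, 5, 1, 9), "completed": datetime(2024, 5, 2, 17)},
    {"started": datetime(2024, 5, 3, 10), "completed": datetime(2024, 5, 3, 16)},
    {"started": datetime(2024, 5, 6, 9), "completed": datetime(2024, 5, 10, 9)},
]

# Cycle time per issue, in hours.
cycle_times = [
    (i["completed"] - i["started"]).total_seconds() / 3600 for i in issues
]

# The average shows overall pace; the maximum flags outliers worth investigating.
avg_cycle_time = sum(cycle_times) / len(cycle_times)
max_cycle_time = max(cycle_times)
```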

Team Satisfaction and Engagement

Surveys and feedback mechanisms can gauge team morale and satisfaction, which are critical for long-term productivity.

Collaboration Metrics

Assessing the frequency and quality of code reviews, pair programming sessions, and communication can provide insights into how well the software engineering team collaborates.

Individual Developer Productivity

While team-level metrics are crucial, individual developer productivity also matters, particularly for performance evaluations and personal development. Here are some metrics to consider:

  • Pull Requests and Code Reviews: Tracking the number of pull requests submitted and the quality of code reviews can provide insights into an individual developer's engagement and effectiveness.
  • Commit Frequency: Measuring how often a developer commits code can indicate their active participation in projects, though it should be interpreted with caution to avoid incentivizing quantity over quality.
  • Personal Goals and Outcomes: Setting individual objectives related to project deliverables and tracking their completion can help assess individual productivity in a meaningful way.
  • Skill Development: Encouraging developers to pursue training and certifications can enhance their skills, contributing to overall productivity.
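As an illustration of the caution above, commit frequency can be aggregated per developer from a simple commit log. The entries below are hypothetical, and the counts should be read as a participation signal, not a measure of value delivered.

```python
from collections import Counter

# Hypothetical commit log entries: (author, commit hash).
commits = [
    ("alice", "a1f"), ("bob", "b2c"), ("alice", "c3d"),
    ("alice", "d4e"), ("bob", "e5f"),
]

# Commits per developer over the period.
commit_counts = Counter(author for author, _ in commits)

# Interpret alongside qualitative context such as review quality
# and goal completion, never in isolation.
ranked = commit_counts.most_common()
```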

Measuring developer productivity metrics presents unique challenges compared to more straightforward metrics used in sales or hiring. Here are some reasons why:

  • Complexity of Work: Software development involves intricate problem-solving, creativity, and collaboration, making it difficult to quantify contributions accurately. Unlike sales, where metrics like revenue generated are clear-cut, developer productivity encompasses many qualitative aspects that are far harder to measure.
  • Collaborative Nature: Development work is highly collaborative. Individual contributions often intertwine with team efforts, making it challenging to isolate the impact of one developer's work. In sales, individual performance is typically more straightforward to assess based on personal sales figures.
  • Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of developer productivity. These metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
  • Varied Work Activities: Developers engage in various activities beyond coding, including debugging, code reviews, and meetings. These essential tasks are often overlooked in productivity measurements, whereas sales roles typically have more consistent and quantifiable activities.
  • Changing Tools and Processes: The tools and methodologies used in software development are constantly evolving, making it difficult to establish consistent metrics. In contrast, sales processes tend to be more stable, allowing for easier benchmarking and comparison.

By employing a balanced approach that considers both quantitative and qualitative factors, with a few developer productivity tools, engineering leaders can gain valuable insights into their teams' productivity and foster an environment of continuous improvement & better developer experience.

Challenges of measuring Developer Productivity - What not to Measure?

Measuring developer productivity is a critical task for engineering managers and leaders, yet it comes with its own set of challenges and potential pitfalls. Understanding these challenges is essential to avoid the dangers of misinterpretation and to ensure that developer productivity metrics genuinely reflect the contributions of developers. In this section, we will explore the challenges of measuring developer productivity and highlight what not to measure.

Challenges of Measuring Developer Productivity

  • Complexity of Software Development: Software development is inherently complex, involving creativity, problem-solving, and collaboration. Unlike more straightforward fields like sales, where performance can be quantified through clear metrics (e.g., sales volume), developer productivity is multifaceted and includes various non-tangible elements. This complexity makes it difficult to establish a one-size-fits-all metric.
  • Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of developer productivity. These metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
  • Team Dynamics and Collaboration: Measuring individual productivity can overlook the collaborative nature of software development. Developers often work in teams where their contributions are interdependent. Focusing solely on individual metrics may ignore the synergistic effects of collaboration, mentorship, and knowledge sharing, which are crucial for a team's overall success.
  • Context Ignorance: Developer productivity metrics often fail to consider the context in which developers work. Factors such as project complexity, team dynamics, and external dependencies can significantly impact productivity but are often overlooked in traditional assessments. This lack of context can lead to misleading conclusions about a developer's performance.
  • Potential for Misguided Incentives: Relying heavily on specific metrics can create perverse incentives. For example, if developers are rewarded based on the number of commits, they may prioritize frequent small commits over meaningful contributions. This can lead to a culture of "gaming the system" rather than fostering genuine productivity and innovation.

What Not to Measure

  • Lines of Code (LOC): While LOC can provide some insight into coding activity, it is not a reliable measure of productivity. More code does not necessarily equate to better software. Instead, focus on the quality and impact of the code produced.
  • Commit Frequency: Tracking how often developers commit code can give a false sense of productivity. Frequent commits do not always indicate meaningful progress and can encourage developers to break down their work into smaller, less significant pieces.
  • Bug Counts: Focusing on the number of bugs reported or fixed can create a negative environment where developers feel pressured to avoid complex tasks that may introduce bugs. This can stifle innovation and lead to a culture of risk aversion.
  • Time Spent on Tasks: Measuring how long developers spend on specific tasks can be misleading. Developers may take longer on complex problems that require deep thinking and creativity, which are essential for high-quality software development.

Measuring developer productivity is fraught with challenges and dangers that engineering managers must navigate carefully. By understanding these complexities and avoiding outdated or superficial metrics, leaders can foster a more accurate and supportive environment for their development team productivity.

What is the impact of measuring Dev productivity on engineering culture?

Developer productivity improvements are a critical factor in the success of software development projects. As engineering managers or technology leaders, measuring and optimizing developer productivity is essential for driving development team productivity and delivering successful outcomes. However, measuring development productivity can have a significant impact on engineering culture & software engineering talent, which must be carefully navigated. Let's talk about measuring developer productivity while maintaining a healthy and productive engineering culture.

Measuring developer productivity presents unique challenges compared to other fields. The complexity of software development, inadequate traditional metrics, team dynamics, and lack of context can all lead to misguided incentives and decreased morale. It's crucial for engineering managers to understand these challenges to avoid the pitfalls of misinterpretation and ensure that developer productivity metrics genuinely reflect the contributions of developers.

Remember, the goal is not to maximize metrics but to create a development environment where software engineers can thrive and deliver maximum value to the organization.

Development teams using Typo experience a 30% improvement in Developer Productivity. Want to Try Typo?

Member's insights in Typo

Podcasts


‘Integrating Acquired Tech Teams’ with David Archer, Director of Software Engineering, Imagine Learning

In this episode of the groCTO Podcast, host Kovid Batra interviews David Archer, the Director of Software Engineering at Imagine Learning, with over 12 years of experience in engineering and leadership, including a tenure at Amazon.

The discussion centers on successfully integrating acquired teams, a critical issue following company mergers and acquisitions. David shares his approach to onboarding new team members, implementing a buddy system, and fostering a growth mindset and no-blame culture to mitigate high attrition rates. He further discusses the importance of having clear documentation, pairing sessions, and promoting collaboration across international teams. Additionally, David touches on his personal interests, emphasizing the impact of his time in Japan and his love for Formula 1 and rugby. The episode provides insights into the challenges and strategies for creating stable and cohesive engineering teams in a dynamic corporate landscape.

Timestamps

  • 00:00 - Introduction
  • 00:57 - Welcome to the Podcast
  • 01:06 - Guest Introduction: David's Background
  • 03:25 - Transitioning from Amazon to Imagine Learning
  • 10:49 - Integrating Acquired Teams: Challenges and Strategies
  • 14:57 - Building a No-Blame Culture
  • 18:32 - Retaining Talent and Knowledge Sharing
  • 24:22 - Skill Development and Cultural Alignment
  • 29:10 - Conclusion and Final Thoughts

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of groCTO podcast. And today with us, we have a very special guest. He has 12 plus years of engineering and leadership experience. He has been an ex-Software Development Manager for Amazon and currently working as Director of Engineering for Imagine Learning. Welcome to the show, David. Great to have you here.

David Archer: Thanks very much. Thanks for the introduction.

Kovid Batra: All right. Um, so there is a ritual, uh, whosoever comes to our podcast, before we get down to the main section. So for the audience, the main section, uh, today’s topic of discussion is how to integrate the acquired teams successfully, uh, which has been a burning topic in the last four years because there have been a lot of acquisitions. There have been a lot of mergers. But before we move there, uh, David, we would love to know something about you, uh, your hobbies, something from your childhood, from your teenage or your, from personal life, which LinkedIn doesn’t tell and you would like to share with us.

David Archer: Sure. Um, so in terms of my personal life, the things that I’ve enjoyed the most, um, I always used to love video games as a child. And so, one of the things that I am very proud of is that I went to go and live in Japan for university and, and that was, um, a genuinely life-changing experience. Um, and I absolutely loved my time there. And I think it’s, it’s had a bit of an effect on my time, uh, since then. But with that, um, I’m very much a fan of formula one and rugby. And so, I’ve been very happy in the last, in the post-COVID-19 years, um, of spending a lot of time over in Silverstone and Murrayfield to go and see some of those things. So, um, that’s something that most people don’t know about me, but I actually quite like my sports of all things. So, yeah.

Kovid Batra: Great. Thanks for that little, uh, cute intro and, uh, with that, I think, uh, let’s get going with the main section. Uh, so integrating, uh, your acquired team successfully has been a challenge with a lot of, uh, engineering leaders, engineering managers with whom I have talked. And, uh, you come with an immense experience, like you have had been, uh, engineering manager for OVO and then for, uh, Amazon. I mean, you have been leading teams at large organizations and then moving into Imagine Learning. So before we touch on the topic of how you absorbed such teams successfully, I would love to know, how does this transition look like? Like Amazon is a giant, right? And then you’re moving to Imagine Learning. Of course, that is also a very big company. But there is definitely a shift there. So what made you move? How was this transition? Maybe some goods or bads, if you can share without getting your job impacted.

David Archer: Yeah, no problem. Um, so once upon a time, um, you’re correct in terms of that I’ve got, you know, over 12 years experience in the industry. Um, but before that, I was a teacher. So for me, education is extremely important and I still think it’s one of the most rewarding things that as a human you can be a part of. Helping to bring the next generation, or in terms of their education, give them better, uh, capabilities and potential for the future. Um, and so when somebody approached me with the position here at Imagine Learning, um, I had to jump at the chance. It sounded extremely exciting and, um, I was correct. It was extremely exciting. There’s definitely been a lot of movement and, and I’m sure we’ll touch on that in a little while, but there is definitely a, a, quite a major cultural shift. Um, and then obviously there is the fact that Amazon being a US-centric company with a UK arm, which I was a part of, um, Imagine Learning is very similar. Um, it’s a US-centric company with a US-centric educational stance. Um, and then, yeah, me being part of the UK arm of the company means that there are some cultural challenges that Amazon has already worked through that Imagine Learning still needed to work through. Um, and so part of that challenge is, you know, sort of educating up the chain, if you like, um, on the cultural differences between the two. So, um, definitely some, some big changes. It’s less easy to sort of move sideways as you can in companies like Amazon, um, where you can transition from one team to another. Um, here, it’s a little bit more, um, put together. There’s, there’s, there’s only one or two teams here that you could potentially work for. Um, but that’s not to say that the opportunities aren’t there. And again, we’ll touch on that in a little bit, I’m sure.

Kovid Batra: Perfect. Perfect. All right. So one, one question I think, uh, all the audience would love to know: in a company like Amazon, what is it like to get there? Because it takes almost eight to 10 years, if you’re really good at something at Amazon, to spend that time and then move into that profile of a Software Development Manager, right? So how, how was that experience for you? And what do you think it, it requires, uh, of an Engineering Manager at Amazon to be there?

David Archer: That’s a difficult question to answer because it changes depending on the person. Um, I jumped straight in as a Software Development Manager. And in terms of what they’re looking for, anybody that has looked into the company will be aware of their leadership principles. And being able to display their leadership principles through previous experiences, that’s the thing that will get you in. So if you naturally have that capability to always put the customer first, to ensure that you are data-driven, to ensure that you have, they call it a bias for action, but that you move quickly is kind of what it comes down to. Um, and that you earn trust in a meaningful way. Those are some of the things that I think most managers would be looking for, and when interviewing, of course, there is a technical aspect to this. You need to be able to talk the talk, and, um, I think if you are not able to reel off the information in an intrinsic manner, as in you’ve internalized how the technology works, that will get picked up. Of course it will. You can’t prepare for it like you can an exam. There is an element of this that requires experience. That being said, there are definitely some areas that people can prepare for. Um, and those are primarily in the area of ensuring that you get the experiences that meet the leadership principles that will push you into that position. In order to succeed, it requires a lot of real work. Um, I’m not going to pretend that it’s easy to work at a company like Amazon. They are well known for, um, ensuring that the staff that they have are the best and that they’re working with the best. And you have to, as a manager, ensure that the team that you’re building up can fulfill what you require them to do.
If you’re not able to do that, if you’re taking people on because they seem like they might be a good fit for now, you will in the medium to long term find that that is detrimental to you as a manager, as well as to your team and its capabilities, and you need to be able to then resolve that potential problem by making some difficult decisions and having some difficult conversations with individuals, because at the end of the day, you as a manager are measured on what your team outputs, not what you as an individual output. And that’s a real shift in thinking from being a, even a Technical Lead to being an Engineering Manager.

Kovid Batra: That’s for sure. One thing, uh, that you feel, uh, stands out in you, uh, that has put you in this position, where you were an SDM at Amazon and then you transitioned to a leadership position now, which is Director of Engineering at Imagine Learning. So what are those, uh, one or two traits of yourself that you might have reflected upon that have made you move here, grow in your career?

David Archer: I think you have to be very flexible in your thinking. You have to have a manner of thinking that allows for a much wider scope and you have to be able to let go of an individual product. If your thinking is really focused on one team and one product and it stays in that single purview of what you’re concentrating on at that moment in time, then it really limits your ability to look a little bit further beyond the scope and start to move into that strategic thinking. Where you start moving from a Software Development Manager into a more senior position is with that strategic thinking mindset, where you’re thinking beyond the three months and beyond the single product and you’re starting to move into the half-yearly, full-yearly thinking as a minimum. And you start thinking about how you can bring your team along for a strategic vision as opposed to a tactical goal.

Kovid Batra: Got it. Perfect. All right. So with that, moving to Imagine Learning, uh, and your experience here in the last, uh, one, one and a half years, a little more than that, actually. Uh, you, you have, uh, gone through the phase of your self-learning and then getting teams onboarded that were from the acquired product companies, and that experience, when you started sharing it with me on our last, last call, I found very interesting. So I think we can start off with that point here. Uh, like, how did this journey of, uh, rearranging teams, bringing different teams together, start happening for you? What were the challenges? What was your roadmap in your head and for your team? How would you align them? How would you make the right impact in the fastest timeframe possible? So how did things shape up around that?

David Archer: Sure. Initially, um, the biggest challenge I had was that there was a very significant knowledge drain before I had started. Um, so in the year before I came on board, which was the first year post-acquisition, the attrition rate for the digital part of the company was somewhere in the region of 50%. Um, so people were leaving at a very fast pace. Um, I had to find a way to plug that drain quickly because we couldn’t continue to have such a large knowledge drain. Um, now, the way that I did that was that I, I believe in, in the engineers that I have in front of me. They wouldn’t be in the position that they’re in if they didn’t have a significant amount of capability. But I also wanted to ensure that they had and acquired a growth mindset. Um, and I think that, up until that point, they were more interested in just getting work done as opposed to wanting to grow into a, a sort of more senior position or a position with more responsibility and a bigger challenge. And so I ensured that I mixed the teams together. We had, you know, front enders and back enders in separate teams initially. And so I joined them together to make sure that they held responsibility for a piece of work from beginning to end, um, which gave them autonomy on the work that they were doing. I ensured that I earned trust with that team as well. And most importantly, I put in a ‘no-blame culture’, um, because my expectation is that everybody’s always acting with the best of intentions and that usually when something is going wrong, there is a mechanism that is missing that would have resolved the issue.

Kovid Batra: But, uh, sorry to interrupt you here. Um, do you think, uh, the reasons for attrition were aligned with these factors in the team, where people didn’t have autonomy, uh, there was a blame game happening? Were these the reasons, or were, uh, the reasons different? I mean, if you’re comfortable sharing, cool, but otherwise, like, we can just move on.

David Archer: No, yeah, I think that in reality there, there was an element of that. There was a, um, somewhat, not necessarily toxic culture, but definitely a culture of, um, moving fast just to get things done as opposed to trying to work in the correct manner. And that means that people then did feel blamed. They felt pressured. They felt that they had no autonomy. Every decision was made for them. And so, uh, with more senior staff, especially, you know, looking at an M&A situation where that didn’t change, they didn’t see a future in their career there, because they didn’t know where they could possibly move forward into, because they had no decision-making or autonomy capability themselves.

Kovid Batra: Makes sense. Got it. Yeah, please go on. Yeah.

David Archer: Sorry, yes. So, um, we put these things in place, giving everybody a growth mindset mentality and ensuring that, um, you know, there was a no-blame culture. There were some changes in personnel as well. Um, I identified a couple of individuals that were detrimental to the team, and those sorts of things are quite difficult, you know, moving people on who, um, they’re trying their best and I don’t deny that they are, but their way of working is, is detrimental to a team. But with those changes, um, we then moved from 50% regretted attrition to 5% regretted attrition over the course of ’23 and ’24, which is a very, very significant change in, um, in attrition. And, uh, we also, at that point in time, were able to start implementing new methodologies of bringing in talent from, from below. So we started partnering with Glasgow University to bring in an internship program. We also took on some of their graduates to ensure that we had, um, for want of a better phrase, new blood in the team, to ensure that we’re bringing new ideas in. Um, and then we prepared people through the training programs that they would need.

Kovid Batra: I’m curious about one thing. Uh, saying that you stopped this blame-game culture, uh, is definitely, uh, good to hear, but what exactly did you do in practice on a daily level or on a weekly level or on every sprint level that impacted and changed this mindset? What, what were the things that you inculcated in the culture?

David Archer: So initially, um, and some people think that this might be a trite point, but, um, I actually put out the policy in front of people. I wrote it down and put it in front of people and gave them a document review session to say, “This is a no-blame culture, and this is what I mean by that.” So that people understood what my meaning was from that. Following that, um, I then did have a conversation with some people in other parts of the company to say, “Please, reroute your conversations through me. Don’t go directly to engineers. I want to be that, that point of contact going forward so that I can ensure that communication is felt in the right manner and the right capacity.” And then, um, the, the other thing is that we started bringing in things like, um, postmortems or incident response management, um, sessions where, where I was very forceful on ensuring that no names were put into these documents, because until that point, people did put other people’s names in, um, and wanted to make sure that it was noted that it was so-and-so’s fault. Um, and I had to step on that very, very strongly. I was like, this could have been anyone’s fault. It’s just that they happened to be at that line of code at that point in time, um, and made that decision, which they did with a good intention. Um, so I had to really step in with the team in every single postmortem, every major decision in that, that area, every sprint where we went through what the team had completed in terms of work, and made sure we did pick out individuals in terms of particularly good work that they did, but then stepped very strongly on any hint of trying to blame someone for a problem that had happened, and made it very clear to them again that this could have happened to anyone and we need to work together to ensure it can’t happen to anyone ever again.

Kovid Batra: Makes sense. So when, when this, uh, impact started happening, uh, did you see, uh, the developers who were already part of Imagine Learning getting retained, or, uh, were the developers who joined after the acquisition from the other company also getting retained? How, how did it impact the two groups, and how did they, like, gel up later on?

David Archer: Both actually. Yeah. So for the staff who were already here, um, effectively the, the drain stopped, and there weren’t people leaving anymore that had, you know, some level of tenure longer than six months, um, at all from that point forward, and new staff that were joining were getting integrated with these new teams. I implemented a buddy system so that every new engineer that came in would have somebody that they could work alongside for the first six months, ensuring that they had somebody to contact for the whole time that they were, um, getting used to the company. And, uh, I frequently say that as you join a company like this, you are drinking from a fire hose for the first couple of months. There’s a lot of information that comes your way. Um, and so having a buddy there helped. Um, I added software engineering managers to the team to ensure that there were people who specifically looked after the team, continued to ensure there was a growth mindset, and continued to implement the plans that I had, um, to make these teams more stable. Um, and that took a while to find the right people, I will say that. Um, there was also a challenge with integrating the teams from our vendors in, um, international, uh, countries. So we worked with some teams in India and some teams in Ukraine. Um, and with integrating people from those teams, there was some level of separation, and I think one of the major things we started doing then was getting the people to meet in a more personal manner, bringing them across to our team to actually meet each other face-to-face, um, and realize that these are very talented individuals, just like we are. They’re, they’re no different just because they, you know, live five and a half time zones away; it doesn’t mean that they’re any less capable. Um, they just have a different way of working and we can absolutely work with these very talented people.
And bringing them into the teams via a buddy, ensuring that they have someone to work with, making sure that the no-blame culture continued, even into our contractors, it took a while, don’t get me wrong. And there were definitely some missteps, um, but it was vital to ensuring that there was team cohesion all the way across.

Kovid Batra: Definitely. And, uh, I’ve also experienced this, uh, when talking to other, uh, engineering leaders that when teams come in, usually it is hard to find space for them to do that impactful work, right? So you, you need to give those people that space in general in the team, which you did. But also at the same time, the kind of work they are picking up, that also becomes a challenge sometimes. So was that a case in your scenario as well? And did you like find a way out there?

David Archer: It was the case here. Um, there definitely was a case of the, the work being predefined, if you like, to some extent by the, the most senior personnel. And so one of the things that we ensured that we did, uh, I worked very closely with our product team to ensure that this happened, is that we brought the engineers in a lot sooner. We ensured that this wasn’t just the most senior member of the team, but instead that we worked with different personnel, and de-siloing that information from one person to another was extremely important, because there were silos of information within our teams. And I made it very clear that if there’s an incident and somebody needs some help, and there’s only one person on the team, um, that is capable of actually working on it, then, um, we’re going to find ourselves in, in a real problem. Um, and I think people understood that intrinsically because of the knowledge loss that had happened before I started, or just as I was coming on board, um, because they knew that there were people who, you know, knew this part of the code base or this database or how this part of infrastructure worked, and suddenly we didn’t have anybody that had that knowledge. So we now needed to reacquire it. And so, you know, this comes from an Amazon background, so anybody that, that has worked at this company will know what I’m talking about here, but documentation is key. Ensuring document reviews was extremely important. Um, those are the kind of things, ensuring that we could pass on information from one person to another, from one team to another, in the most scalable fashion. It does slow you down in delivery, but it speeds you up in the longer term because it enables more people to do a wider range of work without needing to rely on that one person that knows everything.

Kovid Batra: Sure, definitely. I think documentation has, like, always been at the top of, uh, the priority list for whomsoever I’m talking to, because once there are downturns and you face such problems, you realize the importance of it. In the early phase, you are just running, building, not focusing on that piece, but later on, it becomes a matter of priority for sure. And I can totally relate to it. Um, so talking about these people, uh, who have joined in and you’re trying to integrate, uh, they definitely need some level of cultural alignment also; like, they are coming from a different background, coming into a new company. Along with that, there might be requirements, you mentioned, like skill development, right? So were there any skill development plans that worked out here that you implemented? Anything from that end you want to share?

David Archer: Yeah, absolutely. So with joining together our teams of frontend and backend developers, um, that’s obviously going to cause some issues. So some developers are not going to be quite as excited about working in a different area. Um, but I think with knowing that the siloing of information was there and that we had to resolve that as an issue, and then ensuring that people who are being brought on via, you know, vendors from international countries and things like that, um, what we started to do was to ensure that we put in, um, pairing sessions with all of our developers. Up until that point, they kind of worked on their own, and so, um, I find that working one-to-one with another individual tends to be the fastest way to learn how things work, in the same way that, um, a child learns their language from their parents far faster than they ever would from watching TV. Um, although sometimes I do wonder about that myself, with my daughter singing Baby Shark to me 16 times, and I don’t think I’ve ever sung that. So let’s see where that goes. Um, but having that one-to-one, um, relationship with the person means that we’re able to ask questions, we’re able to gain that knowledge very quickly. Having the documentation backing that up means that you’ve got a frame of reference to keep going to as well. And then if you keep doing that quite frequently and add in some of the more abstract knowledge sharing sessions, I’m thinking, like, um, ‘lunch and learn’ type sessions or lightning talks, as well as having a, a base of, sort of a knowledge base that people can learn from. So, obvious examples of things like Pluralsight or O’Reilly’s library.
Um, but we also have our own internal documentation as well, where we give people tutorials, we walk people through things. We added in a code review session, we added in a ‘code of the sprint’, and a session as well for our, um, sprint reviews that went out to the whole team and to the rest of the company, where we showed that we’re optimizing where we can. And all these things, they didn’t just enable the team to, to become full stack, and I will say all of our developers now are full stack. I’d be very surprised if there are any developers I’m working with that are not able to make a switch. But it also built trust with the rest of the company as well, and that’s the thing with being a company that has been acquired, is that we need to, um, very quickly and very deliberately shout about how well we’re doing as a company so that they can look at what we’re doing and use us, as has frequently been the case recently, actually, as a best practice: a company that’s doing things well and doing things meaningfully and has that growth mindset. And we start then to have conversations with the wider company, which enables things like a tiger team type session that enables us to widen our scope and have more say in the company. It’s kind of a spiral at that point in time, because you start to increase your scope, and with doing that, it means that your team can grow, because, you know, they know that they can trust us to do things effectively. And it also gives, going back to what I said at the beginning, people more autonomy and more of the decision-making capability they need to get further out into a company.

Kovid Batra: And in such situations, the opinions that they’re bringing in are more customer-centric. They have more understanding of the business. All those things ultimately add up to a lot of intrinsic incentivization, I would say: if I’m being heard in the team, being a developer, I feel good about it, right? And all of this is, like, connected there. So it totally makes sense. And I think that’s a very good hack for bringing new, uh, people, new teams onto the same, uh, journey that you are already on. So, great. I think, uh, with that, we have, uh, come to, uh, the end of this discussion. And in the interest of time, we’ll have to pause here. Uh, really loved talking to you, would love to know more such experiences from you, but it will be in the, maybe in the next episodes. So, David, once again, thanks a lot for your time. Thanks for sharing your experiences. It was great to have you here.

David Archer: Thank you so much and I really appreciate, uh, the time that you’ve taken with me. I hope that this proves useful to at least one person and they can gain something from this. So, thank you.

Kovid Batra: I’m sure it will be. Thank you. Thank you so much. Have a great day ahead.

David Archer: Thank you. Cheers now!

'Leading Tech Teams at Stack Overflow' with Ben Matthews, Senior Director of Engineering, Stack Overflow

In this episode of the groCTO Podcast, host Kovid Batra is joined by Ben Matthews, Senior Director of Engineering at Stack Overflow, with over 20 years of experience in engineering and leadership.

Ben shares his career journey from QA to engineering leadership, shedding light on the importance of creating organizations that function collaboratively rather than just executing tasks independently. He underscores the need for cross-functional teamwork and reducing friction points to build cohesive and successful teams. Ben also addresses the challenges and opportunities presented by the AI revolution, emphasizing Stack Overflow’s strategy to embrace and leverage AI innovations. Additionally, he offers valuable advice for onboarding junior developers, such as involving them in code reviews and emphasizing documentation.

Throughout the discussion, Ben highlights essential leadership principles like advocating for oneself and one’s team, managing team dynamics, and setting clear expectations. He provides practical tips for engineering managers on creating value, addressing organizational weaknesses, and fostering a supportive environment for continuous growth and learning. The episode wraps up with Ben sharing his thoughts on maintaining a vision and connecting it with new technological developments.

Timestamps

  • 00:00 - Introduction
  • 01:08 - Meet Ben Matthews
  • 01:22 - Ben's Journey from QA to Engineering Leadership
  • 03:21 - The Importance of Team Collaboration
  • 04:03 - Current Role and Responsibilities at Stack Overflow
  • 09:12 - Advice for Aspiring Technologists
  • 17:41 - Embracing AI at Stack Overflow
  • 23:30 - Onboarding and Nurturing Junior Developers
  • 26:59 - Parting Advice for Engineering Managers
  • 29:36 - Conclusion

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of the groCTO Podcast. And today with us, we have an exciting guest. This is the Senior Director of Engineering at Stack Overflow with 20-plus years of experience in engineering and leadership, Ben Matthews. Hey, Ben.

Ben Matthews: Thanks for having me. I just wanted to cover you there.

Kovid Batra: All right. So I think, uh, today, uh, we’re going to talk about, uh, Ben’s journey and how he moved from QA to an engineering leadership position at Stack Overflow. And here we are primarily interested in knowing how they are scaling tech and teams at Stack Overflow. So we are totally excited about this episode, man. But before we jump on to the main section, uh, there is a small ritual that we have. So you have to tell us something about yourself that your LinkedIn profile doesn’t.

Ben Matthews: Okay. Uh, well, that’s not in my LinkedIn profile. Well, um, so I am the Senior Director of Engineering at Stack Overflow for our community products, but something about myself that’s not, uh... I, I love to snowboard. I’m a huge fan of calzones and I’m a total movie nerd. Is that what you had in mind?

Kovid Batra: Yeah, of course. I mean, uh, I would love you to talk a little more, even if there is something that you want to share that tells about you in terms of who you are. Maybe something from your childhood, from your teenage years, anything, anything of that sort that you think defines who you are today.

Ben Matthews: Uh, yeah. Um, yeah, that’s a great question. Of, of really just getting into tech in general, a lot of that did come from some natural inclinations, uh, that have kind of always been there. For the longest time I didn’t think I would really enjoy technology. There was the stereotype of the person who sat in the corner, just coded all day and never talked to people, like kind of the Hollywood impression of what a developer was. That didn’t seem very appealing. I like interacting with people. I like actually making some tangible differences. But once I actually dug into it, there was that click that a lot of people have the first time that you compile and run your code and you’re like, wait, I made that happen. I made that change. And that’s when kind of the addiction started. But even after that, I still loved interacting with people. Um, and I think we were very lucky. I came at a time where the industry was starting to change, where it was no longer people working in isolation. This, this is a team sport now; like, developers have to work together. You’re working with other departments. And that’s actually kind of what I really enjoy. I love, I love interacting with people and building things that people like to work with. So, um, that’s really kind of what sings to me about tech: it’s a quick way to build things that other people can interact with and bring value to them. And I get to do it together with another team of people who, who enjoy it as well. So I would say, like, that’s kind of what gets me out of bed in the morning: trying to help people do more with their day and build something that helps them.

Kovid Batra: Great, great. Thanks for that intro. Um, I think, uh, I’m really interested to start with the part, uh, with your current role and responsibility at Stack Overflow. Uh, like, uh, like how, uh, you, you started here or in fact, like, we can go a little back also, like from where you actually started. So wherever you are comfortable, like, uh, you can just begin. Yeah.

Ben Matthews: Yeah. Um, so the, the full journey has its interesting and boring parts altogether, but how it really started was, out of school, I still had that feeling of I didn’t know if development was for me because of the perception I had. But I actually got my first job as a quality assurance engineer for a small startup. Uh, now the best part about working at a small company is that you’re forced to wear multiple hats. That, you know, you don’t just have one role. I was also doing tech support. And then I also looked at some of the code. I helped to do some small code reviews. And from there, I thought, like, you know, I would love to take a shot at doing this development thing. Maybe, maybe I would like it more. Um, and then I did. I kind of got that high of, like, I pushed this live and people are using it, and, you know, that’s mine and they’re enjoying it, and that kind of became addictive to me, where I really liked being a developer. So I really leaned into that. Um, and then enjoying that startup and having a great mentor there, uh, that really kind of set a foundation for how I view how I want to develop and the things I want to build, uh, of really taking the point of view of how I’m creating value for the users. And my, and my next role, I actually worked for a marketing agency doing digital marketing. Um, and that took that up to 11 in terms of the number of things I had to interact with and be prepared for. Like, every week or every couple weeks I had a new project, a new customer, a new problem to solve, and I had to solve it, usually with code, sometimes not with code. We were solving these problems and creating value and getting that whole high-level view of working on databases, kind of doing QA for other people, doing development front and back, and I got to see what I really like to do.
But I also got an insight into how organizations work, how pieces of a company work together, pieces of a development team work together, and how that really creates value for, for users and customers, which in the end, that’s what we’re here to do is to create value for people.

Um, so my next role after that was my first foray into leadership. I went to another digital agency, leading a small development team. And, um, it had its highs and lows. There was definitely a learning curve there. Um, there, there was that ache of not being able to develop, of, of enabling other people to develop instead.

Kovid Batra: Yeah. And this was, and this was a startup or this was an organization like, uh, medium or large-scale organization?

Ben Matthews: This was a medium-sized organization, much more, uh, established. They, they were trying to start up a new tech department, so I had a little freedom in setting some standards. But it was a mature organization. Um, they kind of knew what they wanted to accomplish. Um, so like then I had a big learning curve, excuse me, of what it’s like to work there, how do I lead people, how do I set expectations for them, um, how do I advocate for myself and others, and, you know, I had plenty of missteps that, like, looking back now, there’s a bunch of times I wish I could go back and say, “Nope, this is totally the wrong direction. Your instincts are wrong. You need to learn and grow.” Um, and then after that I went to a couple of other organizations, doing leadership there, some very, very large, some smaller, getting that whole view of kind of the ins and outs and the stacks of what I would like to be. Then I landed here on Stack, which has been a terrific fit for me of, of getting to work directly with users and, uh, and knowing that the people I’m leading are customers of Stack Overflow just as much as they are employees here, which is very satisfying. We really feel like we’re helping people. I get to have a big impact on a very large application and, um, there’s still a lot of freedom for me to, to execute on the vision. Working with the other leaders here has been a joy as well, since we’re kind of like-minded, which I think is very important for people looking for a place to land. Uh, I know in a lot of interviews, you rarely get to interact with people who will be your peers, but when you do, like, really see how well you bounce off of each other. Um, are you all alike? Cause that’s not great. Or are you all different? That’s not great either. You want to have, like, a little bit of friction there so you can create great ideas. And I think that’s what we have at Stack, and it’s been wonderful.

Kovid Batra: No, I think that’s great. But, uh, one question here. Like, um, you were very, uh, passionate when you told us how you started your journey, uh, with the, with the startup. You got exposure, uh, from the business level to, uh, product teams to developers, and that really opened your mind. Um, would you recommend this for anyone who is beginning their journey in, in, in tech? Like, uh, would this be a recommended way of going about how you, uh, set your foundation?

Ben Matthews: Yeah, that’s a great question. I think a lot of people are going to have very different journeys. Um, that I think, you know, one thing that really stuck out to me actually just recently talking to someone when I was, I was at a panel just this past weekend and the variety of journeys that people took of where they started. I think one of the most fascinating ones was someone who was not in tech at all. They’ve been a teacher for 15 years, teaching parts of computer science and design, never professionally worked on one. And now they’re breaking into it now and having a lot of success. Um, I mean, I think my advice to people is like, like your journey is not right or wrong, whatever you’re trying to get to, I think there’s plenty of ways to get to it. What I would say that you do want to focus on though, is that you keep challenging yourself of what I thought I would be working on now is certainly not, uh, what I’m actually working on today, uh, even, whether, I think that’s at all levels, whether at senior, uh, executive, down to like junior engineer, uh, from year to year, the technology landscape changes. How we organize people and execute on that changes. Um, so whatever that journey is, whatever you think it’s going to be, I’m 99 percent sure it’s going to be different than what you envisioned and you have to be prepared to shift that way and keep learning and challenging yourself and it’ll be uncomfortable but that, that’s part of the journey.

Kovid Batra: Yeah, I think that’s the way to go, actually. Then that’s the area when you learn the maximum I think. Uh, so yeah, totally agree with that. Uh, when, uh, when you reflect back, when you see your journey from a QA to a Senior Director at Stack Overflow, I’m curious to know, like, do you know what is that quality in you, uh, that made you stand out and grow to such a profile in, in a, in a reputed organization?

Ben Matthews: Yeah, I think, um, I had a great mentor that pointed out a lot of things that weren’t obvious to me. Um, and I think being a developer, um, I think sometimes for, for us being a people leader is it doesn’t come as naturally sometimes because we tend to think more functionally, which isn’t a bad thing. But there’s some things that at least for me, it didn’t jump out, obviously. I remember one great piece of feedback that took me from just a team manager to get me into a higher level piece was really advocating for yourself. Uh, that didn’t come naturally to me. And I don’t think that comes naturally to a lot of people in our industry. Um, some like to just label it as bragging or see it as bragging, but if you’re not being proud of your successes, other people won’t know they’re there. But it’s not even just for you, but you should be bragging and, and communicating the successes of your team, communicating the successes of your organization. That’s a big part of letting people know of what’s worked, what hasn’t. So one that you can keep doing it. But also other people can emulate it, emulate it and other people in your organization can see you there. There needs to be a profile there. You need to be visible to be a leader. Uh, and I separate that from manager. Being a manager, you don’t necessarily have to be visible. You, there’s very good managers that don’t like to be in the limelight. They’re still supporting their people and moving things forward. But if you’re going to be a leader and set an example and set hard expectations of the vision of where things are going to go, you need to be visible and part of that is advocating and communicating more broadly.

Kovid Batra: Sure. Makes sense. Okay, coming back to your, your current, uh, roles and responsibilities at Stack Overflow. I’m sure working with developers, uh, who know, uh, what the product is about and they are themselves the users. What is that, uh, one thing that you really, uh, abide by as a principle for leading your teams? How, how you’re leading it differently at Stack Overflow, making things successful, scalable, robust?

Ben Matthews: Yeah. Um, and that’s a great question. Cause every organization is different, I’ve had to tackle this problem in different ways at different places. At Stack, I’ve been very fortunate that, uh, there’s already a very talented group of people here that I’ve been able to expand on and keep growing. Um, people tend to be very passionate about the project already, the project and products that we build. That’s a great benefit to have as well. You’re not really trying to talk people into the vision of Stack Overflow; they were users before they were customers. So that, that was great. But, um, but with that also comes like a different way of how do you leverage the most out of people given this hand? Um, and I know it’s partially a cliché, but with that vision that’s already there with already talented people, um, kind of the steps of making sure you’re setting clear expectations for your folks, setting that vision very loudly, broadly, and clearly to them, um, and then making sure they have all the resources they need to do that. Sometimes it’s time, sometimes it’s, it’s some money or equipment. And then lastly, kind of getting out of their way and removing all the roadblocks. Those three steps are kind of the big parts that I think are a general rule of thumb, but, um, given that a lot of other friction points were out of the way, I could really lean into that.

A great example was, uh, I had a team that, uh, was trying to work on a brand new product that, uh, no, it didn’t quite work out before, but we were going to give it another try. We were starting over. And looking at some of the things that went well and what didn’t, it was honestly just a clear lack of vision was their problem. They kept changing directions often. And I was talking to product of like, “Hey, what went wrong?” And they had their own internal struggles. We had our struggles and just aligning that saying like, “Hey, this is going to be a little bit more broad. We’re specifically trying to accomplish this. How do we do it?” And from a bottom-up approach, they set the goals, they set what they think the milestone should be, and that was so much more successful. Um, like that formula that doesn’t work everywhere, but it really thrives here at Stack of like, “Hey, what do you think? How is the best way to execute this?” And we tweak it, we manage it, we keep it on the rails. But once they started moving into it, um, it actually launched and became very successful. So that’s another way of like, kind of reading your team, reading the other stakeholders and, and leveraging their strengths.

Kovid Batra: But what I feel is that, uh, it’s great. Like this approach works at, uh, Stack, but usually what I felt is that when you go with the bottom-up approach, uh, there is an imbalance, uh, like developers are usually inclined towards taking care of the infra, managing the tech debt and not really intuitively prioritizing your, uh, customer needs and requirements, even though they relate to it at times, at least in case of Stack, I can say that. But still there is a, there is a bias in the developer to make the code better before looking at the customer side of it. So how, how do you take care of that?

Ben Matthews: That’s a, that’s a great point. Um, and just to be clear to other developers listening, I love that instinct if you have it, it’s so valuable that you want to leave code better than you found it. But, uh, to your point, I think that goes back to setting those clear expectations again of, “Hey, like this is what we’re going to accomplish. This is how we need to do it. Um, if we can address tech debt along the way, you need to justify that. I give you the freedom to justify that. But in the end, I, I’m setting these goals. This is what has to happen by then and I’m happy to support you in what we need to get there.” Um, and then also sharing advice and, and, and you know, learning where the minds are on some of those paths. Uh, some people have experience in making these mistakes like I have. I’ve, uh, tried to say, “Well, we could also do this and then also do this and then also do our goal.” And then we’ve taken on too much, and we’re, you know, we’re trying to do too many things at once that we can’t execute.

So you’re right in that. Just kind of not giving any clear direction or expectations, things can kind of go off the rails and what they want to work on isn’t always what we need to focus on. I think there’s a balance there. But, uh, yeah, I mean, setting those expectations is a key part to those three steps, I would say arguably the most important part. If they don’t know which way they’re supposed to be aiming for, they can’t execute on it.

Kovid Batra: Makes sense. Okay, um, next thing that I want to know is, uh, in the last few, few, not actually, actually few years, it’s just been a year or two when the AI wave has like taken over the industry, right? And everyone’s rushing. Um, I’m sure there was a huge impact on the user base, but maybe I’m wrong, on the user base of Stack because people go there to see code, uh, libraries and like code which is there. Now, uh, ChatGPT and tools like that are really helping developers do like automated code. Uh, how you have, uh, taken up with that and what’s your new strategy? I mean, of course you can say everything here, but I would love to know, like how it has been absorbed in the team now.

Ben Matthews: Now, um, I think for the most part, we’ve kind of worn our strategy on our sleeve. Our, our CEO and Chief Product Officer and our CTO have talked about this a bit of, I mean, Stack is, is there to help educate and empower technologists of the world. This is a new tool that’s part of the landscape now and there are a lot of companies that are concerned about it or feel like it’s a doomsday. Um, we’re embracing it. It’s a new way for information to get in and out of people’s hands. Uh, and this is something we were going to try to be a part of. I think we’ve made some great steps of leveraging AI, uh, we’re trying to build some partnerships with people to kind of get a hand on the wheel to make sure that like this is going in the right direction. But, um, there’s technical revolutions every couple years, and this is another one. Uh, and how Stack fits into it is we’re still going to try to provide that value to folks and AI is a new part of it. Uh, we’re building new products that leverage AI. Um, we actually have a couple that are hopefully going to be launching soon that try to improve the experience for users on the site, leveraging AI. We’re going to try to find new ways for people to interact with AI to know that Stack Overflow is a part of what that experience is and to kind of create a cycle there. Um, but it’s changed how people work. But I think Stack Overflow is still a big part of that equation. Uh, we are a big knowledge repository, uh, like along with Reddit or, or news articles, like all of these things need to be there to even power AI. That, that’s sort of the cycle. Like, um, that has to go there. Without human beings, without a community generating content, AI is pretty powerless. But, um, so there has to be a way for us to keep that feedback loop going. And we’re excited about all the opportunities to be a part of that and to find new ways to keep educating people.

Kovid Batra: Definitely. I think that’s a very good point, actually. Like, without humans feeding that information, at least right now AI is not at that stage that it can generate things on its own. It’s the community that would always be driving things at the end. So I also believe in that fact. My question, uh, a follow-up question on that is that when such kind of, uh, big changes happen, how, how your teams are taking it? Like, at Stack, how people are embracing it, particularly developers? I’m just saying that if there are new products that we are going to work on or new tech that we are going to build, how people are embracing it, how fast they are adopting to the new requirements and the new thought process which the company’s adopting?

Ben Matthews: Uh, through the context of AI or just in general?

Kovid Batra: Just, just in the context of AI.

Ben Matthews: Oh yeah. Um, well, in a fun way, there’s been a wide range of opinions on how we should embrace or try to channel the AI capabilities that are now very pervasive in the industry. Um, so the first part of it starts with gathering as much data and information as we can. Again, we have a good user base. So we’re able to interact with them and ask them questions. We’re looking at behavior changes. And so from there, we try to make a data-informed decision to our teams of like, “Hey, this is what we’re seeing. So this is what we’re going to try.” Um, I mean, the beauty of data is there’s a bunch of ways to interpret it and our developers are no different. They have some thoughts on, on the best ways to go about it. But I think this also goes to a general leadership technique is you’re never going to get unanimous consent on an idea. If that’s what your requirement is, you’re never going to move forward. What you do have to get is people to at least agree that this is worth trying or like understand that I might be wrong. And a lot of people feel like this is the best way, so we’ll give it a shot. Uh, and that’s something I’ve been proud of to be able to achieve at Stack. It’s something that is very important for a leader of saying, “Hey, I know you don’t agree, but I need you to roll along with me on this. I understand your point. You’ve been heard, but this is the decision we’re making.” Um, a lot of people agree with the idea. Some don’t, but trying to get the enthusiasm there, and I think also connecting the dots on those ideas with the larger picture. I think that’s also something people miss a lot during these revolutions of if you start out with like vision A. And then something big happens and now you have vision B, um, you still have to connect the dots in like, “Hey, we’re still trying to, to like provide value the same way. We’re still the same company. This new thing that you’re doing still connects to what we want to do. There’s still a path there. We’re not like totally pivoting to blockchain or something like that. It’s not a huge change for us.” So I think that also motivates people like we’re still trying to build the same vision, the same power for the company. We’re just doing it in a different way. And what you’re doing is still really creating value. I think that’s a big part for leaders to, to keep people motivated.

Kovid Batra: Makes sense. When it comes to, uh, bringing developers on board and nurturing them, I think the biggest challenge that I have always heard from managers, particularly is, uh, getting these new-age, uh, junior developers and the fresh ones coming into the picture. Um, any thoughts, any techniques that you have used to, uh, bring these people on board, nurture them well, and so that they can contribute and create that impact?

Ben Matthews: Yeah. Uh, onboarding people is a huge thing that I try to give the other managers that work for me that are bringing on new team members. Um, uh, I mean, a big part of it, it goes back to empowerment, but I think a lot of it is also the same challenges we’ve had I think for decades, of me even having my own Computer Science degree. In my first development job, there was a huge gap of what I learned in school versus what I’m doing day-to-day as an actual developer. Uh, as far as I can tell, that hasn’t really changed that much. People come in from bootcamps or not. Uh, funny enough, we’ve had a really good experience of people that don’t have formal degrees coming in, who have just been coding their whole time. They tend to actually have an easier time working within a team. That’s not to disparage any Computer Science degree, it’s still very valuable, but it’s just to highlight the gap between what you actually do and what they’ve been trained in. A great example is, um, what we try to get junior engineers to really focus on initially: just doing code reviews. That is a huge part of what we do in modern development. It’s a great way for you to understand the code base, understand how your team works, understand like kind of the ins and outs and where some of the scary parts of the code are. And, um, and even though that can be intimidating, the best thing I think you can do in a code review is just ask questions of like, “Hey, I see you’re doing this. This doesn’t make sense to me. Can you explain why?” And after time, even a senior engineer will read them and be like, “You know what? That is kind of confusing. Why did we do it that way? Let me..” And they’ll even update their PR. I think that’s one of the best tools to get a junior engineer up to speed is just like get them in the code and reviewing it.

Um, the other part of kind of the unsung hero of all of software development that never gets enough love is just documentation, of having them go through some of the pieces of the product, commenting and documenting how things work. That, one, it helps onboard other people, but two, that, that forces them to have an understanding of how parts of the code work. Uh, and then from there at their own pace, here at Stack, we, we try to have people push code to production on day one. Uh, we find something small for them to do, work them through the whole build pipeline process so they can see how it works and like, kind of get that scary part out of the way. Like something you wrote is now in production on Stack Overflow in front of hundreds of millions of people. Congratulations! But let’s just get that part out of the way. Um, but then how they can actually understand the code and keep building things, take on new tickets, work with product, size, refinement, all of that, we just ease them into that at their own pace, but keeping them exposed to that code through documentation and PRs really shortens the learning curve.

Kovid Batra: Cool. Makes sense. I think, uh, most of the things, uh, that I have seen, uh, working out for the developers, for, uh, the, the teams that are working well, the managers play a really, really good role there. Like the team managers who are leading them play a very good role there. So before we like end this discussion, I would love for you, uh, to give some parting advice to the engineering managers who are leading such teams, uh, who are looking forward to growing in their career also, uh, that would be helpful for them. Yeah.

Ben Matthews: Yeah. I, I, I, uh, I would say three big points that were big for me from that mentor. One, I’ve already spoken on: advocating for yourself. And, um, and for you, your team and your people, that’s a big part of getting visibility to, to try to grow, to show that you’re being successful. And, and, and honestly, just helping your other peers be successful. It’s a great way for people to see that you’re good at what you do. Another thing that, that I think people could focus on is building an organization that functions and not just executes. Those are kind of two different things, though they sound similar. For example, I can have a front end team that is great at pumping out front end code or building a new front end framework, and that’s valuable. They’re executing. But they have to work in concert with our back end team or DBA team, with product to align things, getting those things to work together, that’s an organization that functions. And though it may seem like you might be slowing down one to get them to work in tandem or in line with another one, um, that’s actually what’s really going to make your organization successful. If you can show that you have teams working together, reducing friction points and actually building things as one unit, that shows you’re being a good leader, you’re setting a clear vision and you’re, you’re creating the most value you can out of that organization. Um, and last I would say is, um, really identifying friction points or slowdowns in your organization, owning them and setting a plan on how to tackle them. There, I had a natural inclination as I was moving up to hide my weaknesses, like to hide what was not going well in my organization. Um, and because of that, I wasn’t able to get feedback from my fellow leaders, from my manager or help. Um, but I would say if you have a problem that you’re tackling, own it and be like, “Hey, this is what’s going on. This is a problem I’m having here. 
So I’m going to address it.” And welcome any thoughts, but that’s another success story to share that you can tackle problems and things that are going wrong and also advocate for those. Uh, show that you can address problems and keep improving and making things better.

Uh, those three things I think have really helped me move forward in my career of kind of that mindset has made my organizations better, made my people better and let people know that, um, you know, I’m there to try to create the most value I can in the organization.

Kovid Batra: Makes sense. Thank you, Ben. Thank you so much for such a, such a great session, uh, and such great advice. Uh, for today, uh, in the interest of time, we’ll have to stop here, but we would love to know more of your, uh, stories and experiences, maybe on another episode. It was great to have you today here.

Ben Matthews: Thank you, Kovid. It was great to be here.

'Product Thinking Secrets for Platform Teams' with Geoffrey Teale, Principal Product Engineer, Upvest

In this episode of the groCTO Podcast, host Kovid Batra engages in a comprehensive discussion with Geoffrey Teale, the Principal Product Engineer at Upvest, who brings over 25 years of engineering and leadership experience.

The episode begins with Geoffrey's role at Upvest, where he has transitioned from Head of Developer Experience to Principal Product Engineer, emphasizing a holistic approach to improving both developer experience and engineering standards across the organization. Upvest's business model as a financial infrastructure company providing investment banking services through APIs is also examined. Geoffrey underscores the multifaceted engineering requirements, including security, performance, and reliability, essential for meeting regulatory standards and customer expectations. The discussion further delves into the significance of product thinking for internal teams, highlighting the challenges and strategies of building platforms that resonate with developers' needs while competing with external solutions.

Throughout the episode, Geoffrey offers valuable insights into the decision-making processes, the importance of simplicity in early-phase startups, and the crucial role of documentation in fostering team cohesion and efficient communication. Geoffrey also shares his personal interests outside work, including his passion for music, open-source projects, and low-carbon footprint computing, providing a holistic view of his professional and personal journey.

Timestamps

  • 00:00 - Introduction
  • 00:49 - Welcome to the groCTO Podcast
  • 01:22 - Meet Geoffrey: Principal Engineer at Upvest
  • 01:54 - Understanding Upvest's Business & Engineering Challenges
  • 03:43 - Geoffrey's Role & Personal Interests
  • 05:48 - Improving Developer Experience at Upvest
  • 08:25 - Challenges in Platform Development and Team Cohesion
  • 13:03 - Product Thinking for Internal Teams
  • 16:48 - Decision-Making in Platform Development
  • 19:26 - Early-Phase Startups: Balancing Resources and Growth
  • 27:25 - Scaling Challenges & Documentation Importance
  • 31:52 - Conclusion

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of groCTO Podcast. Today with us, we have a very special guest who has great expertise in managing developer experience at small scale and large scale organizations. He is currently the Principal Engineer at Upvest, and has 25-plus years of experience in engineering and leadership. Welcome to the show, Geoffrey. Great to have you here. 

Geoffrey Teale: Great to be here. Thank you. 

Kovid Batra: So Geoffrey, I think, uh, today's theme is more around improving the developer experience, bringing the product thinking while building the platform teams, the platform. Uh, and you, you have been, uh, doing all this for quite some time now, like at Upvest and previous organizations that you've worked with, but at your current company, uh, like Upvest, first of all, we would like to know what kind of a business you're into, what does Upvest do, and let's then deep dive into how engineering is, uh, getting streamlined there according to the business.

Geoffrey Teale: Yeah. So, um, Upvest is a financial infrastructure company. Um, we provide, uh, essentially investment banking services, a complete, uh, solution for building investment banking experiences, uh, for, for client organizations. So we're business to business to customer. We provide our services via an API and client organizations, uh, names that you'd have heard of, like Revolut and N26, build their client-facing applications using our backend services to provide that complete investment experience, um, currently within the European Union. Um, but, uh, we'll be expanding out from there shortly. 

Kovid Batra: Great. Great. So I think, uh, when you talk about investment banking and supporting the companies with APIs, what kind of engineering is required here? Is it like more, uh, secure-oriented, secure-focused, or is it more like delivering on time? Or is it more like, uh, making things very very robust? How do you see it right now in your organization? 

Geoffrey Teale: Well, yeah, I mean, I think in the space that we're in the, the answer unfortunately is all of the above, right? So all those things are our requirements. It has to be secure. It has to meet the, uh, the regulatory standards that we, we have in our industry. Um, it has to be performant enough for our customers who are scaling out to quite large scales, quite large numbers of customers. Um, has to be reliable. Um, so there's a lot of uh, uh, how would I say that? Pressure, uh, to perform well and to make sure that things are done to the highest possible standard in order to deliver for our customers. And, uh, if we don't do that, then, then, well, the customers won't trust us. If they don't trust us, then we wouldn't be where we are today. So, uh, yeah. 

Kovid Batra: No, I totally get that. Uh, so talking more about you now, like, what's your current role in the organization? And even before that, tell us something about yourself which the LinkedIn doesn't know. Uh, I think the audience would love to know you a little bit more. Uh, let's start from there. Uh, maybe things that you do to unwind or your hobbies or you're passionate about anything else apart from your job that you're doing? 

Geoffrey Teale: Oh, well, um, so, I'm, I'm quite old now. I have a family. I have two daughters, a dog, a cat, fish, quail. Keep quail in the garden. Uh, and that occupies most of my time outside of work. Actually my passions outside of work were always um, music. So I play guitar, and actually technology itself. So outside of work, I'm involved and have been involved in, in open source and free software for, for longer than I've been working. And, uh, I have a particular interest in, in low carbon footprint computing that I pursue outside of, out of work.

Kovid Batra: That's really amazing. So, um, like when you say low carbon, uh, cloud computing, what exactly are you doing to do that? 

Geoffrey Teale: Oh, not specifically cloud computing, but that would be involved. So yeah, there's, there's multiple streams to this. So one thing is about using, um, low power platforms, things like RISC-V. Um, the other is about streamlining of software to make it more efficient so we can look into lots of different, uh, topics there about operating systems, tools, programming languages, how they, uh, how they perform. Um, sort of reversing a trend, uh, that's been going on for as long as I've been in computing, which is that we use more and more power, both in terms of computing resource, but also actual electricity for the network, um, to deliver more and more functionality, but we're also programming more and more abstracted ways with more and more layers, which means that we're actually sort of getting less, uh, less bang for buck, if you, if you like, than we used to. So, uh, trying to reverse those trends a little bit. 

Kovid Batra: Perfect. Perfect. All right. That's really interesting. Thanks for that quick, uh, cute little intro. Uh, and, uh, now moving on to your work, like we were talking about your experience and your specialization in DevEx, right, improving the developer experience in teams. So what's your current, uh, role, responsibility that comes with, uh, within Upvest? Uh, and what are those interesting initiatives that you have, you're working on? 

Geoffrey Teale: Yeah. So I've actually just changed roles at Upvest. I've been at Upvest for a little bit over two years now, and the first two years I spent as the Head of Developer Experience. So running a tribe with a specific responsibility for client-facing developer experience. Um, now I've switched into a Principal Engineering role, which means that I have, um, a scope now which is across the whole of our engineering department, uh, with a, yeah, a view for improving experience and improving standards and quality of engineering internally as well. So, um, a slight shift in role, but my, my previous five years before, uh, Upvest, were all in, uh, internal development experience. So I think, um, quite a lot of that skill, um, coming into play in the new role which um, yeah, in terms of challenges actually, we're just at the very beginning of what we're doing on that side. So, um, early challenges are actually about identifying what problems do exist inside the company and where we can improve and how we can make ourselves ready for the next phase of the company's lifetime. So, um, I think some of those topics would be quite familiar to any company that's relatively modern in terms of its developer practices. If you're using microservices, um, there's this aspect of Conway's law, which is to say that your organizational structure starts to follow the program structure and vice versa. And, um, in that sense, you can easily get into this world where teams have autonomy, which is wonderful, but they can be, um, sort of pushed into working in a, in a siloized fashion, which can be very efficient within the team, but then you have to worry about cohesion within the organization and about making sure that people are doing the right things, uh, to, to make the services work together, in terms of design, in terms of the technology that we develop there. 
So that bridges a lot into this world of developer experience, into platform drives, I think you mentioned already, and about the way in which you think about your internal development, uh, as opposed to just what you do for customers. 

Kovid Batra: I agree. I mean, uh, as you said, like when the teams are siloed, they might be thinking they are efficient within themselves. And that's mostly the use case, the case. But when it comes to integrating different pieces together, that cohesion has to fall in. What is the biggest challenge you have seen, uh, in, in the teams in the last few years of your experience that prevents this cohesion? And what is it that works the best to bring in this cohesion in the teams? 

Geoffrey Teale: Yeah. So I think there's, there's, there's a lot of factors there. The, the, the, the biggest one I think is pressure, right? So teams in most companies have customers that they're working for, they have pressure to get things done, and that tends to make you focus on the problem in front of you, rather than the bigger picture, right? So, um, dealing, dealing with that and reinforcing the message to engineers that it's actually okay to do good engineering and to worry about the other people, um, is a big part of that. I've always said, actually, that in developer experience, a big part of what you have to do, the first thing you have to do is actually teach people about why developer experience is important. And, uh, one of those reasons is actually sort of saying, you know, promoting good behavior within engineering teams themselves and saying, we only succeed together. We only do that when we make the situation for ourselves that allows us to engineer well. And when we sort of step away from good practice and rush, rush, um, that maybe works for a short period of time. But, uh, in the long term that actually creates a situation where there's a lot of mess and you have to deal with, uh, getting past, we talk about factors like technical debt. There's a lot of things that you have to get past before you can actually get on and do the productive things that you want to do. Um, so teaching organizations and engineers to think that way is, uh, is, uh, I think a big, uh, a big part of the work that has to be done, finding ways to then take that message and put it into a package that is acceptable to people outside of engineering so that they understand why this is a priority and why it should be worked on is, I think, probably the second biggest part of that as well.

Kovid Batra: Makes sense. So is it like a behavioral challenge, uh, where developers and team members really don't like the fact that they have to work in cohesion with other teams? Or is it more like the organizational structure that puts people into a certain kind of mindset, and then they start growing with that and it becomes a problem in the later phase of the organization? What have you seen, uh, from your experience? 

Geoffrey Teale: Yeah. So I mean, I think growth is a big part of this. So, um, I've worked with a number of startups. I've also worked in much bigger organizations. And what happens in that transition is that you move from a small, tight-knit group of people who sort of inherently have this very good interpersonal communication, they all know what's going on with the company as a whole, and they build trust between them. And in that way, this early-stage organization works very well, and even though you might be working on disparate tasks, you always have some kind of cohesion there. You know what to do. And if something comes up that affects all of you, it's very easy to identify the people that you need to talk to and find a solution for it. Then as you grow, you start to have this situation where you start to take domains and say, okay, this particular part of what we do now belongs in a team, it has a leader, and this piece over here goes over there. And that still works quite well up to a certain scale, right? But over time in an organization, several things happen. Okay, so your priorities drift apart, right? You no longer have such a good understanding of the common goal. You tend to start prioritizing your work within those departments. So you can have some tension between those goals. It's not always clear that Department A should be working together with Department B on the same priority. You also have natural staff turnover. So those people who were there at the beginning, they start to leave, some of them at least, and these trust relationships break down, the communication channels break down. And the third factor is that new people coming into the organization, they haven't got these relationships, they haven't got this experience. They usually don't have, uh, the position to have influence over things on such a large scale. 
So there's an expectation that these people are going to be effective across the organization in the way that people who've been there a long time are, and it tends not to happen. And if you haven't set up for that, if you haven't built the support systems for that and the internal processes and tooling for that, then that communication stops happening in the way that it was happening before.

So all of those things create pressure towards silos, then you add on the pressure of growth and customers, and it just, um, ossifies in that state. 

Kovid Batra: Totally. Totally. And I think, um, talking about customers, uh, last time when we were discussing, you very beautifully put across this point of bringing in product thinking, not just for the products that you're building for the customer, but when you're building them for the teams. And what I feel is that the people who are working on platform teams have come across this situation more than anyone else, where they have to apply that product thinking for the people within the team. So where does this philosophy come from? How have you fitted it into how platform teams should be built? Just tell us something about that. 

Geoffrey Teale: Yeah. So this is something I talk about a little bit when I do presentations, uh, about developer experience. And one of the points that I make, actually, particularly for platform teams, but any kind of internal team that's serving other internal teams, is that you have to think about yourself not as a mandatory piece that the company will always support and say, "You must use this platform that we have." Because I have direct experience, not in my current company, but with previous employers, where a lot of investment had been made into making a platform, but no thought really was given to this kind of developer experience, or actually even the idea of selling the platform internally, right? It was just an assumption that people would have to use it and so they would use it. And that creates a different set of forces than you'll find elsewhere. And people start to ignore the fact that, you know, if you've got a cloud platform in this case, um, there is competition, right? Every day as an engineer, you run into people out there working in the wide world, working for companies, the Amazons and AWSes of this world, Azure, Google, they're all producing cloud platform tools. They're all promoting their cloud-native development environments with their own reasons for doing that. But they expend a lot of money developing those things, developing them to a very high standard, and a lot of money promoting and marketing those things. And it doesn't take very much, when we talked just now about trust breaking down, the cohesion between teams breaking down, it doesn't take very much for a platform to start looking like less of a solution and more of a problem if it's taking you a long time to get things done, if you can't find out how to do things, if you, um, have bad experiences with deployment. This all turns that product into an internal problem. 

Kovid Batra: In the context of an internal problem for the teams. 

Geoffrey Teale: Yeah, and in that context, and this is what I've seen, when you then either have someone coming in from outside with experience with another product that you could use, or you get this kind of marketing push and sales push from one of these big companies saying, "Hey, look at this platform that we've got that you could just buy into," um, it puts you in direct competition, and you can lose that competition, right? So I have seen whole divisions of a very large company switch away from the internal platform to using cloud-native development, right, on a particular platform. Now, there are downsides to that. There are all sorts of things that they didn't realize they would have to do that they end up having to do. But once they've made the decision, that battle is lost. And I think that's a really key topic to understand: you are in competition. Even though you're an internal team, you are in competition with other people, and you have to do some of the things that they do to convince the people in your organization that what you're doing is beneficial, that it's useful, and that it's better in some very distinct way than what they would get off the shelf from somewhere else. 

Kovid Batra: Got it. Got it. So whenever the teams are making this decision, let's take something, say, building a platform, what are those nitty-gritties that one should be taking care of? Like, people can either go with off-the-shelf solutions, right? Or they start building. What should be the mindset, the decision-making mindset, I must say, uh, for this kind of a process when they have to go through it? 

Geoffrey Teale: So I think, um, we within Upvest follow a very, um, prescribed is not the right word, but we have a process for how we think about things, and I think that's actually a very useful example of how to think about any technical project, right? So we start with this 'why' question, and the 'why' question is really important. We talk about product thinking. Um, this is, you know, who are we doing this for, and what are the business outcomes that we want to achieve? And that's where we have to start from, right? So we define that very, very clearly, because, and this is a really important part, there's no value, uh, in anybody within the organization saying, "Let's go and build a platform," for example, if that doesn't deliver what the company needs. So you have to have clarity about this. What is the best way to build this? I mean, nobody builds a platform, well not nobody, but very few people build a platform in the cloud starting from scratch. Most people are taking some existing solution, be that a cloud-native solution from a big public cloud, or be that Kubernetes or Cloud Foundry. People take these tools and wrap them up in their own processes, their own software tools around them, to package them up as a nice application platform for development to happen on, right? So why do you do that? What purpose are you serving in doing this? How will this bring your business forward? And if you can't answer those questions, then you probably should never even start the project, right? That's my view. And if you can't continuously keep those ideas in mind and repeat them back, right? Repeat them back in terms of what are we delivering, what do we measure up against for the company? Then again, you're not doing a very good job of communicating why that product exists. 
If you can't think of a reason why your platform delivers more to your company and the people working in your company than one of the off the shelf solutions, then what are you for, right? That's the fundamental question.

So we start there, we think about those things well before we even start talking about solution space and, and, um, you know, what kind of technology we're going to use, how we're going to build that. That's the first lesson. 

Kovid Batra: Makes sense. A follow-up question on that. Uh, let's say a team is, let's say, 20-30 folks right now, okay? I'm talking about an engineering team who are not, like, super-funded right now or not in a very profit-making business. This comes with a cost, right? You will have to deploy resources. You will have to invest time and effort, right? So is it a good idea, according to you, to have shared resources for such an initiative, or does it not work out that way? Do you need to have dedicated resources working on this project separately? How do you contemplate that? 

Geoffrey Teale: My experience of early-phase startups is that people have to be multitaskers and they have to work on multiple things to make it work, right? It just doesn't make sense in the early phase of a company to invest so heavily in a single solution. Um, and I think one of the mistakes that I see people making now, actually, is that they start off with this predefined idea of where they're going to be in five years. And so they sort of go away and say, "Okay, well, I want my system to run on microservices on Kubernetes." And they invest in setting up Kubernetes, right, which has got a lot easier over the last few years, I have to say. Um, you can, to some degree, go and just pick that stuff off the shelf and pay for it. Um, but it's an example of a technical decision that's putting the cart before the horse, right? So, of course, you want to make architectural decisions. You don't want to make investments in something that isn't going to last, but you also have to remember that you don't know what's going to happen. And actually, getting to a product quickly, uh, is more important than, you know, doing everything perfectly the first time around. So, when I talk about these things, I think, uh, we have to accept that there is a difference between being the scrappy little startup, then being in growth phase, and then being a mega corporation. These are different environments with different pressures. 

Kovid Batra: Got it. So when teams start, let's say, working on it, and they have taken up this project for, let's say, the next six months to at least go out with the first phase of it, uh, what are those challenges which the platform heads, or the engineers who are working on it, should be aware of, and how do they dodge those? Something from your experience that you can share.

Geoffrey Teale: Yes. So I mean, in the very earliest phase, as I just alluded to, keeping it simple is a big benefit. And actually, keeping it simple sometimes means, uh, spending money upfront. So what I've seen many times, I've worked at companies, at least three times now, who've invested in a monitoring platform. So they've bought an off-the-shelf software-as-a-service monitoring platform, uh, and used that effectively up until a certain point of growth. Now, the reason they only use it up to a certain point of growth is because these tools are extremely expensive, and those costs tend to scale with your company and your organization. And so there comes a point in the life of that organization where that no longer makes sense financially. And then you withdraw from that and actually invest in specialist resources, either internally or using open-source tools or whatever it is. It could just be optimization of the tool that you're using to reduce those costs. But all of those things have time and financial costs associated with them. Whereas at the beginning, when the costs are quite low to use these services, it actually tends to make more sense to just focus on your own project and, you know, pick those things up off the shelf, because that's easier and quicker. And I think, uh, again, I've seen some companies fail because they tried to do everything themselves from scratch, and that doesn't work in the beginning. So yeah, I think that's a big one. 

The second one is actually slightly later. As you start to grow, getting something up and running at all is a challenge. Um, what tends to happen as you get a little bit bigger is this effect that I was talking about before, where people get siloed, um, the communication starts to break down, and people aren't aware of the differing concerns. So you start worrying about things that you might not worry about at first, like system recovery, uh, compliance in some cases, like there are laws around what you do in terms of your platform and your recoverability and data protection and all these things. All of these topics tend to take focus away, um, from what the developers are doing. So on the one hand, that tends to slow down delivery of features that the engineers within your company want, in favor of things that they don't really want to know about. Now, all the time you're doing this, you're taking problems away from them and solving them for them. But if you don't talk about that, then you may be delivering value, but nobody knows you're delivering value. So that's the first thing. 

The other thing is that you then tend to start losing focus on the impact that some of these things have. If you stop thinking about the developers as the primary stakeholders and you get obsessed with these other technical and legal factors, um, then you can start putting barriers into place. You can start, um, making the interfaces to the system, the way in which it's used, become more complicated. And if you don't really focus then on the developer experience, right, what it is like to use that platform, then you start to turn into the problem which I mentioned before. Because, um, if you're regularly doing something, if you're deploying or testing on a platform and you have to do that over and over again, and it's slowed down by some bureaucracy or some practice or just literally running slowly, um, then that starts to be the thing that irritates you. It starts to be the thing that's in your way, stopping you doing what you're doing. And so, I mean, one thing is recognizing when this point happens, when your concerns start to deviate, and actually explicitly saying, "Okay, yes, we're going to focus on all these things we have to focus on technically, but we're going to make sure that we reserve some technical resource for monitoring our performance and the way in which our customers interact with the system, failure cases, complaints that come up often."

Um, so one thing, again, I saw in much bigger companies is they migrated to the cloud from legacy systems in data centers. And they were used to having turnaround times on procedures for deploying software that took at least weeks, or having month-long projects because they had to wait for specific training or they had to get sign-off. And they thought that by moving to an internal cloud platform, they would solve these things and have this kind of rapid development and deployment cycle. They sort of did, in some ways, but they forgot, right? When they were spec'ing it out, they forgot to make the developers a stakeholder and ask, "What do you need to achieve that?" And what they actually needed to achieve that was a change in the mindset around the bureaucracy that came with it. It's all well and good not having to physically put a machine in a rack and order it from a company. But if you still have these rules that say, okay, you need to go on this training course before you can do anything with this, and there's a six-month waiting list for that training course, or this has to be approved by five managers who can only be contacted by email before you can do it, these processes are slowing things down. So actually, I mentioned that company where we lost a whole department from the platform that we had internally. One of the reasons, actually, was that just getting started with this platform took months. Whereas if you went to a public cloud service, all you needed was a credit card and you could do it, and you wouldn't be breaking any rules in the company in doing that. As long as you had the right to spend the money on the credit card, it was fine.

So, you know, that difference of experience, that difference of understanding, is something that starts to show up as you grow, right? So I think that's a thing to look out for as you move from the situation when you're 10, 20 people in the whole company to when you're about, I would say, 100 to 200 people in the whole company. These forces start to become apparent. 

Kovid Batra: Got it. So when you touch that point of 100-200, uh, then there is definitely a different journey ahead of you, right? With its own set of challenges. So from that zero-to-one and then one-to-X journey, what have you experienced? Like, this would be my last question for today, but yeah, it would be really interesting for people who are listening and heading teams of a hundred and above. What kind of things should they be looking at when they are, let's say, moving from an off-the-shelf to an in-house product and building these teams together?

Geoffrey Teale: Oh, what should they be looking at? I mean, I think we just covered, uh, one of the big ones. I'd say, actually, that one of the biggest things for engineers particularly, um, and managers of engineers, is resistance to documentation and the sort of ideas about documentation that people have. So, um, again, when you're that very small company, it's very easy to just know what's going on. As you grow, what happens is new people come into your team and they have the same questions that have been asked and answered before, or that were just known things. So you get this pattern where you repeatedly get the same information being requested by people, and it's very nice and normal to have conversations. It builds teams. Um, but there's this key phrase, which is 'documentation is automation', right? So engineers understand automation. They understand why automation is required to scale, but they tend to completely discount that when it comes to documentation. Almost every engineer that I've ever met hates writing documentation. Not everyone, but almost everyone. Uh, but if you go and speak to engineers about what they need to start working with a new product, and again, we think about this as a product, um, they'll say, of course, I need some documentation. Uh, and if you dive into that, they don't really want fancy YouTube videos, though sometimes that helps people overcome a resistance to learning. Um, but having anything at all is useful, right? But this is a key learning: documentation, you need to treat it a little bit like you treat code, right? So it's a very natural, um, observation from most engineers: well, if I write a document about this, that document is just going to sit there and rot, and then it will be worse than useless, because it will say the wrong thing. Which is absolutely true. But the problem there is that someone let it sit there and rot, right? 
It shouldn't be the case, right? If you need documentation to scale out, you need these pieces to support new people coming into the company and to actually reduce the overhead of communication, because the more people you have, the more different directions of communication there are, and the more costly it gets for the organization. Documentation is boring. It's old-fashioned. But it is the solution that works for fixing that. 

The only other thing I'm going to say about this is mindset: it's really important to teach engineers what to document, right? Get them away from this mindset that documentation means writing massive reams and reams of text explaining things in detail. It's about, you know, documenting the right things in the right place. So at code level, commenting: saying not what the code does, but, more importantly, why it does that. You know, what decision was made that led to that? What customer requirement led to that? What piece of regulation led to that? Linking out to the resources that explain that. And then at slightly higher levels, making things discoverable. So we talk, actually, in DevEx about things like, um, service catalogs, so people can find out what services are running, what APIs are available internally. But also, documentation has to be structured in a way that meets the use cases. And so, actually, not having individual departments dropping little bits of information all over a wiki with an arcane structure, but having a centralized resource. Again, that's one thing that I did in a bigger company. I came into the platform team and said, "Nobody can find any information about your platform. You actually need, like, a central website, and you need to promote that website and tell people, 'Hey, this is here. This is how you get the information that you need to understand this platform.' And actually include at the very front of that page why this platform is better than just going out somewhere else," to come back to the same topic.

Documentation isn't a silver bullet, but it's the closest thing I'm aware of in tech organizations, and it's the thing that we routinely get wrong.

Kovid Batra: Great. I think, uh, just in the interest of time, we'll have to stop here. But, uh, Geoffrey, this was something really, really interesting. I also explored a few things, uh, which were very new to me from the platform perspective. Uh, we would love to, uh, have you for another episode discussing and deep diving more into such topics. But for today, I think this is our time. And, uh, thank you once again for joining in, taking out time for this. Appreciate it.

Geoffrey Teale: Thank you. It's my pleasure.


Developer Productivity in the Age of AI

Are you tired of feeling like you’re constantly playing catch-up with the latest AI tools, trying to figure out how they fit into your workflow? Many developers and managers share that sentiment, caught in a whirlwind of new technologies that promise efficiency but often lead to confusion and frustration.

The problem is clear: while AI offers exciting opportunities to streamline development processes, it can also amplify stress and uncertainty. Developers often struggle with feelings of inadequacy, worrying about how to keep up with rapidly changing demands. This pressure can stifle creativity, leading to burnout and a reluctance to embrace the innovations designed to enhance our work.

But there’s good news. By reframing your relationship with AI and implementing practical strategies, you can turn these challenges into opportunities for growth. In this blog, we’ll explore actionable insights and tools that will empower you to harness AI effectively, reclaim your productivity, and transform your software development journey in this new era.

The Current State of Developer Productivity

Recent industry reports reveal a striking gap between the available tools and the productivity levels many teams achieve. For instance, a survey by GitHub showed that 70% of developers believe repetitive tasks hamper their productivity. Moreover, over half of developers express a desire for tools that enhance their workflow without adding unnecessary complexity.

Understanding the Productivity Paradox

Despite investing heavily in AI, many teams find themselves in a productivity paradox. Research indicates that while AI can handle routine tasks, it can also introduce new complexities and pressures. Developers may feel overwhelmed by the sheer volume of tools at their disposal, leading to burnout. A 2023 report from McKinsey highlights that 60% of developers report higher stress levels due to the rapid pace of change.

Common Emotional Challenges

As we adapt to these changes, feelings of inadequacy and fear of obsolescence may surface. It’s normal to question our skills and relevance in a world where AI plays a growing role. Acknowledging these emotions is crucial for moving forward. For instance, it can be helpful to share your experiences with peers, fostering a sense of community and understanding.

Key Challenges Developers Face in the Age of AI

Understanding the key challenges developers face in the age of AI is essential for identifying effective strategies. This section outlines the evolving nature of job roles, the struggle to balance speed and quality, and the resistance to change that often hinders progress.

Evolving Job Roles

AI is redefining the responsibilities of developers. While automation handles repetitive tasks, new skills are required to manage and integrate AI tools effectively. For example, a developer accustomed to manual testing may need to learn how to work with automated testing frameworks like Selenium or Cypress. This shift can create skill gaps and adaptation challenges, particularly for those who have been in the field for several years.
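To make the shift concrete, here is a minimal sketch of what moving a manual check into an automated test looks like. The `discount_price` function and its values are hypothetical, invented for illustration; frameworks like Selenium or Cypress apply the same idea to browser interactions rather than plain functions:

```python
# Hypothetical pricing function and an automated check for it.
def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount, never going below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_price():
    # What a tester might once have verified by hand, now run on every change.
    assert discount_price(100.0, 20) == 80.0
    assert discount_price(100.0, 0) == 100.0
    assert discount_price(50.0, 100) == 0.0

test_discount_price()  # a test runner such as pytest would discover this automatically
```

Once checks like these exist, they run on every commit at no extra effort, which is exactly the repetitive work developers report wanting off their plates.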

Balancing Speed and Quality

The demand for quick delivery without compromising quality is more pronounced than ever. Developers often feel torn between meeting tight deadlines and ensuring their work meets high standards. For instance, a team working on a critical software release may rush through testing phases, risking quality for speed. This balancing act can lead to technical debt, which compounds over time and creates more significant problems down the line.

Resistance to Change

Many developers hesitate to adopt AI tools, fearing that they may become obsolete. This resistance can hinder progress and prevent teams from fully leveraging the benefits that AI can provide. A common scenario is when a developer resists using an AI-driven code suggestion tool, preferring to rely on their coding instincts instead. Encouraging a mindset shift within teams can help them embrace AI as a supportive partner rather than a threat.

Strategies for Boosting Developer Productivity

To effectively navigate the challenges posed by AI, developers and managers can implement specific strategies that enhance productivity. This section outlines actionable steps and AI applications that can make a significant impact.

Embracing AI as a Collaborator

To enhance productivity, it’s essential to view AI as a collaborator rather than a competitor. Integrating AI tools into your workflow can automate repetitive tasks, freeing up your time for more complex problem-solving. For example, using tools like GitHub Copilot can help developers generate code snippets quickly, allowing them to focus on architecture and logic rather than boilerplate code.

  • Recommended AI tools: Explore tools that integrate seamlessly with your existing workflow. Platforms like Jira for project management and Test.ai for automated testing can streamline your processes and reduce manual effort.

Practical AI Applications in Developer Productivity

AI offers several applications that can significantly boost developer productivity. Understanding these applications helps teams leverage AI effectively in their daily tasks.

  • Code generation: AI can automate the creation of boilerplate code. For example, tools like Tabnine can suggest entire lines of code based on your existing codebase, speeding up the initial phases of development and allowing developers to focus on unique functionality.
  • Code review: AI tools can analyze code for adherence to best practices and identify potential issues before they become problems. Tools like SonarQube provide actionable insights that help maintain code quality and enforce coding standards.
  • Automated testing: Implementing AI-driven testing frameworks can enhance software reliability. For instance, using platforms like Selenium and integrating them with AI can create smarter testing strategies that adapt to code changes, reducing manual effort and catching bugs early.
  • Intelligent debugging: AI tools assist in quickly identifying and fixing bugs. For example, Sentry offers real-time error tracking and helps developers trace errors to their sources, allowing teams to resolve issues before they impact users.
  • Predictive analytics for sprints/project completion: AI can help forecast project timelines and resource needs. Tools like Azure DevOps leverage historical data to predict delivery dates, enabling better sprint planning and management.
  • Architectural optimization: AI tools suggest improvements to software architecture. For example, the AWS Well-Architected Tool evaluates workloads and recommends changes based on best practices, ensuring optimal performance.
  • Security assessment: AI-driven tools identify vulnerabilities in code before deployment. Platforms like Snyk scan code for known vulnerabilities and suggest fixes, allowing teams to deliver secure applications.
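As a rough illustration of the kind of rule-based check that code review tools automate, here is a minimal sketch using Python's `ast` module to flag functions without docstrings. This is a toy single-rule example, not how SonarQube or any specific tool works internally; real analyzers apply hundreds of far deeper checks:

```python
import ast

def find_undocumented_functions(source: str) -> list[str]:
    """Return names of functions in the source that lack a docstring."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if ast.get_docstring(node) is None:
                offenders.append(node.name)
    return offenders

# Hypothetical code under review:
sample = '''
def documented():
    """I have a docstring."""
    return 1

def undocumented():
    return 2
'''

print(find_undocumented_functions(sample))  # ['undocumented']
```

Scaling this idea up, with many rules, data-flow analysis, and learned patterns, is what turns a simple linter into the AI-assisted review tools described above.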

Continuous Learning and Professional Development

Ongoing education in AI technologies is crucial. Developers should actively seek opportunities to learn about the latest tools and methodologies.

Online resources and communities: Utilize platforms like Coursera, Udemy, and edX for courses on AI and machine learning. Participating in online forums such as Stack Overflow and GitHub discussions can provide insights and foster collaboration among peers.

Cultivating a Supportive Team Environment

Collaboration and open communication are vital in overcoming the challenges posed by AI integration. Building a culture that embraces change can lead to improved team morale and productivity.

Building peer support networks: Establish mentorship programs or regular check-ins to foster support among team members. Encourage knowledge sharing and collaborative problem-solving, creating an environment where everyone feels comfortable discussing their challenges.

Setting Effective Productivity Metrics

Rethink how productivity is measured. Focus on metrics that prioritize code quality and project impact rather than just the quantity of code produced.

Tools for measuring productivity: Use analytics tools like Typo that provide insights into meaningful productivity indicators. These tools help teams understand their performance and identify areas for improvement.
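To show what such meaningful indicators can look like, here is a minimal sketch computing two DORA-style metrics, median lead time for changes and deployment frequency, from commit and deploy timestamps. The data and function names are illustrative assumptions, not any tool's API:

```python
from datetime import datetime
from statistics import median

# Hypothetical records: (commit_time, deploy_time) pairs for shipped changes.
changes = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 17, 0)),   # 8 hours
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 3, 10, 0)),  # 24 hours
    (datetime(2024, 5, 4, 8, 0), datetime(2024, 5, 4, 20, 0)),   # 12 hours
]

def median_lead_time_hours(pairs):
    """Median time from commit to production deploy, in hours."""
    return median((deploy - commit).total_seconds() / 3600 for commit, deploy in pairs)

def deploys_per_week(pairs, weeks: float):
    """Average number of production deployments per week."""
    return len(pairs) / weeks

print(median_lead_time_hours(changes))   # 12.0
print(deploys_per_week(changes, weeks=1))  # 3.0
```

Metrics like these describe the delivery process rather than individual output, which is why they are harder to game than lines of code and more useful for spotting bottlenecks.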

How Does Typo Enhance Developer Productivity?

Many developer productivity tools are available to tech companies. One of them is Typo, a comprehensive solution for engineering teams.

Typo surfaces early indicators of developer well-being and actionable insights on the areas that need attention, drawing on signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers features that streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams, measuring the team’s productivity while keeping individual strengths and weaknesses in mind.

Here are three ways in which Typo measures team productivity:

Software Development Lifecycle (SDLC) Visibility

Typo provides complete visibility into software delivery. It helps development teams and engineering leaders identify blockers in real time, predict delays, and maximize business impact. It also lets teams dive deep into key DORA metrics and understand how well they perform against industry-wide benchmarks. In addition, Typo offers real-time predictive analysis of how the team is performing, helps identify the best dev practices, and provides a comprehensive view across velocity, quality, and throughput.

This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks, ensuring that resources are used efficiently for enhanced productivity and better business outcomes.

AI-Powered Code Review

Typo helps developers streamline the development process and enhance their productivity by identifying issues in code and auto-fixing them with AI before they are merged to master. That means less time reviewing and more time for important tasks, keeping code error-free and making the whole process faster and smoother. The platform applies optimized practices and built-in methods spanning multiple languages. It also standardizes code and enforces coding standards, which reduces the risk of a security breach and boosts maintainability.

Since the platform automates repetitive tasks, development teams can focus on high-quality work. It also accelerates the review process and facilitates faster iterations by providing timely feedback, offering insights into code quality trends and areas for improvement and fostering an engineering culture that supports learning and development.

Developer Experience

Typo surfaces early indicators of developers’ well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It includes pulse surveys built on a developer experience framework.

Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.

By addressing these aspects, Typo’s holistic approach combines data-driven insights with proactive monitoring and strategic intervention to create a supportive, high-performing work environment, leading to increased developer productivity and satisfaction.

Continuous Learning: Empowering Developers for Future Success

With its robust features tailored for the modern software development environment, Typo acts as a catalyst for productivity. By streamlining workflows, fostering collaboration, integrating with AI tools, and providing personalized support, Typo empowers developers and their managers to navigate the complexities of development with confidence. Embracing Typo can lead to a more productive, engaged, and satisfied development team, ultimately driving successful project outcomes.

Want to know more?

AI code reviews

AI Code Reviews for Remote Teams

Have you ever felt overwhelmed trying to maintain consistent code quality across a remote team? As more development teams shift to remote work, the challenges of code reviews only grow: slowed communication, lack of real-time feedback, and the creeping possibility of errors slipping through.

Moreover, think about how much time is lost waiting for feedback or having to rework code due to small, overlooked issues. When you’re working remotely, these frustrations compound; suddenly, a task that should take hours stretches into days. You might be spending time on repetitive tasks like syntax checking, code formatting, and manually catching errors that could be handled more efficiently. Meanwhile, you’re expected to deliver high-quality work without delays.

Fortunately, AI-driven tools offer a solution that can ease this burden. By automating the tedious aspects of code reviews, such as catching syntax errors and formatting inconsistencies, AI can give developers more time to focus on the creative and complex aspects of coding.

In this blog, we’ll explore how AI can help remote teams tackle the difficulties of code reviews and how tools like Typo can further improve this process, allowing teams to focus on what truly matters: writing excellent code.

The Unique Challenges of Remote Code Reviews

Remote work has introduced a unique set of challenges that impact the code review process:

Communication barriers

When team members are scattered across different time zones, real-time discussions and feedback become more difficult. The lack of face-to-face interaction can hinder effective communication and lead to misunderstandings.

Delays in feedback

Without the immediacy of in-person collaboration, remote teams often experience delays in receiving feedback on their code changes. This can slow down the development cycle and frustrate team members who are eager to iterate and improve their code.

Increased risk of human error

Complex code reviews conducted remotely are more prone to human oversight and errors. When team members are not physically present to catch each other's mistakes, the risk of introducing bugs or quality issues into the codebase increases.

Emotional stress

Remote work can take a toll on team morale, with feelings of isolation and the pressure to maintain productivity weighing heavily on developers. This emotional stress can negatively impact collaboration and code quality if not properly addressed.

Ho͏w AI Ca͏n͏ Enhance ͏Remote Co͏d͏e Reviews

AI-powered tools are transforming code reviews, helping teams automate repetitive tasks, improve accuracy, and ensure code quality. Let’s explore how AI dives deep into the technical aspects of code reviews and helps developers focus on building robust software.

NLP for Code Comments

Natural Language Processing (NLP) is essential for understanding and interpreting code comments, which often provide critical context:

Tokenization and Parsing

NLP breaks code comments into tokens (individual words or symbols) and parses them to understand the grammatical structure. For example, "This method needs refactoring due to poor performance" would be tokenized into words like ["This", "method", "needs", "refactoring"], and parsed to identify the intent behind the comment.
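As a minimal sketch of that step, a regular expression can split a comment into word and symbol tokens; production NLP pipelines use trained tokenizers and parsers, so treat this only as an illustration of the input/output shape:

```python
import re

def tokenize_comment(comment: str) -> list[str]:
    """Split a review comment into word, number, and symbol tokens."""
    return re.findall(r"[A-Za-z]+|\d+|[^\sA-Za-z\d]", comment)

tokenize_comment("This method needs refactoring due to poor performance")
# ['This', 'method', 'needs', 'refactoring', 'due', 'to', 'poor', 'performance']
```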

Sentiment Analysis

Using algorithms like Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks, AI can analyze the tone of code comments. For example, if a reviewer comments, "Great logic, but performance could be optimized," AI might classify it as having a positive sentiment with a constructive critique. This analysis helps distinguish between positive reinforcement and critical feedback, offering insights into reviewer attitudes.
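A real sentiment model would be trained (for example, an LSTM over labeled review comments); as a hedged stand-in, a simple lexicon-based scorer shows the shape of the task, with word lists chosen purely for illustration:

```python
POSITIVE = {"great", "good", "clean", "nice", "excellent"}
NEGATIVE = {"bug", "slow", "poor", "wrong", "broken"}

def comment_sentiment(comment: str) -> str:
    """Toy lexicon-based scorer; a trained model replaces this in practice."""
    words = {w.strip(",.!?").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

comment_sentiment("Great logic, but performance could be optimized")  # 'positive'
```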

Intent Classification

AI models can categorize comments based on intent. For example, comments like "Please optimize this function" can be classified as requests for changes, while "What is the time complexity here?" can be identified as questions. This categorization helps prioritize actions for developers, ensuring important feedback is addressed promptly.
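The categories above can be sketched with a toy rule-based classifier; real tools use trained text classifiers, and the cue words here are assumptions for illustration:

```python
def classify_intent(comment: str) -> str:
    """Toy rule-based intent classifier for review comments."""
    text = comment.lower().strip()
    if text.endswith("?"):
        return "question"
    if any(cue in text for cue in ("please", "optimize", "fix", "refactor", "rename")):
        return "change_request"
    return "remark"

classify_intent("Please optimize this function")      # 'change_request'
classify_intent("What is the time complexity here?")  # 'question'
```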

Static Code Analysis

Static code analysis goes beyond syntax checking to identify deeper issues in the code:

Syntax and Semantic Analysis

AI-based static analysis tools not only check for syntax errors but also analyze the semantics of the code. For example, if the tool detects a loop that could potentially cause an infinite loop or identifies an undefined variable, it flags these as high-priority errors. AI tools use machine learning to constantly improve their ability to detect errors in Java, Python, and other languages.
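As a concrete taste of semantic analysis, here is a minimal check built on Python's standard `ast` module that flags `while True` loops containing no `break` (one shape the infinite-loop example above can take); real analyzers apply hundreds of such rules plus learned models:

```python
import ast

def find_suspicious_loops(source: str) -> list[int]:
    """Return line numbers of `while True` loops that contain no break."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.While)
                and isinstance(node.test, ast.Constant)
                and node.test.value is True
                and not any(isinstance(n, ast.Break) for n in ast.walk(node))):
            flagged.append(node.lineno)
    return flagged

find_suspicious_loops("while True:\n    x = 1\n")  # [1]
```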

Pattern Recognition

AI recognizes coding patterns by learning from vast datasets of codebases. For example, it can detect when developers frequently forget to close file handlers or incorrectly handle exceptions, identifying these as anti-patterns. Over time, AI tools can evolve to suggest better practices and help developers adhere to clean code principles.

Vulnerability Detection

AI, trained on datasets of known vulnerabilities, can identify security risks in the code. For example, tools like Typo or Snyk can scan JavaScript or C++ code and flag potential issues like SQL injection, buffer overflows, or improper handling of user input. These tools improve security audits by automating the identification of security loopholes before code goes into production.

Code Similarity Detection

Finding duplicate or redundant code is crucial for maintaining a clean codebase:

Code Embeddings

Neural networks convert code into embeddings (numerical vectors) that represent the code in a high-dimensional space. For example, two pieces of code that perform the same task but use different syntax would be mapped closely in this space. This allows AI tools to recognize similarities in logic, even if the syntax differs.

Similarity Metrics

AI employs metrics like cosine similarity to compare embeddings and detect redundant code. For example, if two functions across different files are 85% similar based on cosine similarity, AI will flag them for review, allowing developers to refactor and eliminate duplication.
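As a simplified sketch of both ideas, the snippet below "embeds" code as a bag-of-tokens frequency vector and compares two functions with cosine similarity; real systems use learned neural embeddings, but the flag-above-a-threshold logic is the same:

```python
import math
from collections import Counter

def embed(code: str) -> Counter:
    """Toy embedding: a token-frequency vector (real tools use neural embeddings)."""
    return Counter(code.replace("(", " ").replace(")", " ").replace(":", " ").split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

f1 = embed("def total(xs): return sum(xs)")
f2 = embed("def total(ys): return sum(ys)")
needs_review = cosine_similarity(f1, f2) > 0.85  # flag near-duplicates for refactoring
```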

Duplicate Code Detection

Tools like Typo use AI to identify duplicate or near-duplicate code blocks across the codebase. For example, if two modules use nearly identical logic for different purposes, AI can suggest merging them into a reusable function, reducing redundancy and improving maintainability.

Automated Code Suggestions

AI doesn’t just point out problems—it actively suggests solutions:

Generative Models

Models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) can create new code snippets. For example, if a developer writes a function that opens a file but forgets to handle exceptions, an AI tool can generate the missing try-catch block to improve error handling.
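To make that concrete, here is the shape of such a fix: the first function has no error path, and the second is the kind of guarded version a generative model might propose (illustrative only; actual suggestions vary by tool and context):

```python
# Before: reads a file with no error handling
def read_config(path):
    with open(path) as f:
        return f.read()

# After: the kind of try/except guard an AI assistant might generate
def read_config_safe(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        print(f"could not read {path}: {err}")
        return None
```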

Contextual Understanding

AI analyzes code context and suggests relevant modifications. For example, if a developer changes a variable name in one part of the code, AI might suggest updating the same variable name in other related modules to maintain consistency. Tools like GitHub Copilot use models such as GPT to generate code suggestions in real-time based on context, making development faster and more efficient.

Reinforcement Learning for Code Optimization

Reinforcement learning (RL) helps AI continuously optimize code performance:

Reward Functions

In RL, a reward function is defined to evaluate the quality of the code. For example, AI might reward code that reduces runtime by 20% or improves memory efficiency by 30%. The reward function measures not just performance but also readability and maintainability, ensuring a balanced approach to optimization.
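Such a reward function can be sketched as a weighted score over measured properties of a refactoring candidate; the weights and metric names below are illustrative assumptions, not any tool's actual formula:

```python
def reward(baseline: dict, candidate: dict) -> float:
    """Toy RL reward: weigh runtime and memory gains, penalize added complexity."""
    runtime_gain = (baseline["runtime_ms"] - candidate["runtime_ms"]) / baseline["runtime_ms"]
    memory_gain = (baseline["memory_mb"] - candidate["memory_mb"]) / baseline["memory_mb"]
    complexity_penalty = max(0, candidate["complexity"] - baseline["complexity"]) * 0.1
    return 0.6 * runtime_gain + 0.4 * memory_gain - complexity_penalty

baseline = {"runtime_ms": 100, "memory_mb": 50, "complexity": 8}
candidate = {"runtime_ms": 80, "memory_mb": 35, "complexity": 8}
reward(baseline, candidate)  # ≈ 0.24: 20% faster and 30% leaner, no added complexity
```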

Agent Training

Through trial and error, AI agents learn to refactor code to meet specific objectives. For example, an agent might experiment with different ways of parallelizing a loop to improve performance, receiving positive rewards for optimizations and negative rewards for regressions.

Continuous Improvement

The AI’s policy, or strategy, is continuously refined based on past experiences. This allows AI to improve its code optimization capabilities over time. For example, Google’s AlphaCode uses reinforcement learning to compete in coding competitions, showing that AI can autonomously write and optimize highly efficient algorithms.

AI-Assisted Code Review Tools

Modern AI-assisted code review tools offer both rule-based enforcement and machine learning insights:

Rule-Based Systems

These systems enforce strict coding standards. For example, AI tools like ESLint or Pylint enforce coding style guidelines in JavaScript and Python, ensuring developers follow industry best practices such as proper indentation or consistent use of variable names.
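Under the hood, a rule is just a predicate over source lines or the syntax tree. This hypothetical two-rule linter (maximum line length and snake_case function names) sketches the idea; ESLint and Pylint apply hundreds of configurable rules of this kind:

```python
import re

MAX_LINE = 79
DEF_PATTERN = re.compile(r"^\s*def\s+([A-Za-z_]\w*)")

def lint(source: str) -> list[str]:
    """Toy linter with two rules: max line length and snake_case function names."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE:
            problems.append(f"line {lineno}: longer than {MAX_LINE} characters")
        match = DEF_PATTERN.match(line)
        if match and not re.fullmatch(r"[a-z_][a-z0-9_]*", match.group(1)):
            problems.append(f"line {lineno}: '{match.group(1)}' is not snake_case")
    return problems

lint("def BadName():\n    pass\n")  # ["line 1: 'BadName' is not snake_case"]
```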

Machine Learning Models

AI models can learn from past code reviews, understanding patterns in common feedback. For instance, if a team frequently comments on inefficient data structures, the AI will begin flagging those cases in future code reviews, reducing the need for human intervention.

Hybrid Approaches

Combining rule-based and ML-powered systems, hybrid tools provide a more comprehensive review experience. For example, DeepCode uses a hybrid approach to enforce coding standards while also learning from developer interactions to suggest improvements in real-time. These tools ensure code is not only compliant but also continuously improved based on team dynamics and historical data.

Incorporating AI into code reviews takes your development process to the next level. By automating error detection, analyzing code sentiment, and suggesting optimizations, AI enables your team to focus on what matters most: building high-quality, secure, and scalable software. As these tools continue to learn and improve, the benefits of AI-assisted code reviews will only grow, making them indispensable in modern development environments.

Practical Steps to Implement AI-Driven Code Reviews

To effectively integrate AI into your remote team's code review process, consider the following steps:

Evaluate and choose AI tools: Research and evaluate AI-powered code review tools that align with your team's needs and development workflow.

Start with a gradual approach: Use AI tools to support human-led code reviews before gradually automating simpler tasks. This will allow your team to become comfortable with the technology and see its benefits firsthand.

Foster a culture of collaboration: Encourage your team to view AI as a collaborative partner rather than a replacement for human expertise. Emphasize the importance of human oversight, especially for complex issues that require nuanced judgment.

Provide training and resources: Equip your team with the necessary training and resources to use AI code review tools effectively. This includes tutorials, documentation, and opportunities for hands-on practice.

Leveraging Typo to Streamline Remote Code Reviews

Typo is an AI-powered tool designed to streamline the code review process for remote teams. By integrating seamlessly with your existing development tools, Typo makes it easier to manage feedback, improve code quality, and collaborate across time zones.

Some key benefits of using Typo include:

  • AI code analysis
  • Code context understanding
  • Auto debugging with detailed explanations
  • Proprietary models with known frameworks (OWASP)
  • Auto PR fixes

The Human Element: Combining AI and Human Expertise

While AI can significantly enhance the code review process, it's essential to maintain a balance between AI and human expertise. AI is not a replacement for human intuition, creativity, or judgment but rather a supportive tool that augments and empowers developers.

By using AI to handle repetitive tasks and provide real-time feedback, developers can focus on higher-level issues that require human problem-solving skills. This division of labor allows teams to work more efficiently and effectively while still maintaining the human touch that is crucial for complex problem-solving and innovation.

Overcoming Emotional Barriers to AI Integration

Introducing new technologies can sometimes be met with resistance or fear. It's important to address these concerns head-on and help your team understand the benefits of AI integration.

Some common fears, such as job replacement or disruption of established workflows, should be directly addressed. Reassure your team that AI is designed to reduce workload and enhance productivity, not replace human expertise. Foster an environment that embraces new technologies while focusing on the long-term benefits of improved efficiency, collaboration, and job satisfaction.

Elevate Your Code Quality: Embrace AI Solutions

AI-driven code reviews offer a promising solution for remote teams looking to maintain code quality, foster collaboration, and enhance productivity. By embracing AI tools like Typo, you can streamline your code review process, reduce delays, and empower your team to focus on writing great code.

Remember that AI supports and empowers your team; it does not replace human expertise. Explore and experiment with AI code review tools in your teams, and watch as your remote collaboration reaches new heights of efficiency and success.

How does Gen AI address Technical Debt?

The software development field is constantly evolving. While this helps deliver products and services quickly to end-users, it also means developers might take shortcuts to deliver on time. This not only reduces the quality of the software but also increases technical debt.

But with new trends and technologies comes generative AI. It is a promising development for the software industry that can ultimately lead to high-quality code and decreased technical debt.

Let’s explore more about how generative AI can help manage technical debt!

Technical debt: An overview

Technical debt arises when development teams take shortcuts to develop projects. While this gives them short-term gains, it increases their workload in the long run.

In other words, developers prioritize quick solutions over effective solutions. The four main causes behind technical debt are:

  • Business causes: Prioritizing business needs and the company’s evolving conditions can put pressure on development teams to cut corners. This can mean moving deadlines up or cutting costs to achieve desired goals.
  • Development causes: Rapidly evolving technologies make it difficult for teams to switch or upgrade quickly, especially when they are already dealing with the burden of bad code.
  • Human resources causes: Unintentional technical debt can occur when development teams lack the skills or knowledge needed to implement best practices, resulting in more errors and insufficient solutions.
  • Resources causes: When teams lack time or sufficient resources, they take shortcuts by choosing the quickest solution. This can be due to budgetary constraints, insufficient processes and culture, deadlines, and so on.

Why is generative AI for code management important?

As per McKinsey’s study,

“… 10 to 20 percent of the technology budget dedicated to new products is diverted to resolving issues related to tech debt. More troubling still, CIOs estimated that tech debt amounts to 20 to 40 percent of the value of their entire technology estate before depreciation.”

But there’s a solution to it. Handling tech debt is possible and can have a significant impact:

“Some companies find that actively managing their tech debt frees up engineers to spend up to 50 percent more of their time on work that supports business goals. The CIO of a leading cloud provider told us, ‘By reinventing our debt management, we went from 75 percent of engineer time paying the [tech debt] ‘tax’ to 25 percent. It allowed us to be who we are today.”

There are many traditional ways to minimize technical debt which includes manual testing, refactoring, and code review. However, these manual tasks take a lot of time and effort. Due to the ever-evolving nature of the software industry, these are often overlooked and delayed.

With generative AI tools on the rise, they are increasingly seen as the right approach to code management and, subsequently, to lowering technical debt. Such tools have already started reaching the market: they integrate into software development environments, gather and process data across the organization in real time, and are then leveraged to lower tech debt.

Some of the key benefits of generative AI are:

  • Identifies redundant code: Generative AI tools like CodeClone analyze code and suggest improvements, which improves code readability and maintainability and, in turn, minimizes technical debt.
  • Generates high-quality code: Automated code review tools such as Typo enable an efficient and effective review process. They understand the context of the code and accurately fix issues, which leads to high-quality code.
  • Automates manual tasks: Tools like GitHub Copilot automate repetitive tasks and let developers focus on high-value work.
  • Suggests optimal refactoring strategies: AI tools like DeepCode leverage machine learning models to understand code semantics, break code down into more manageable functions, and improve variable naming.

Case studies and real-life examples

Many industries have started adopting generative AI technologies already for tech debt management. These AI tools assist developers in improving code quality, streamlining SDLC processes, and cost savings.

Below are success stories of a few well-known organizations that have implemented these tools in their organizations:

Microsoft uses Diffblue Cover for automated testing and bug detection

Microsoft, a global technology leader, implemented Diffblue Cover for automated testing. With this generative AI tool, Microsoft has seen a considerable reduction in the number of bugs during development. It also helps ensure that new features don’t compromise existing functionality, which positively impacts code quality and supports faster, more reliable releases as well as cost savings.

Google implements Codex for code documentation

Google, the internet search and technology giant, implemented OpenAI’s Codex to streamline its code documentation processes. Integrating the AI tool reduced the time and effort spent on manual documentation tasks, and the resulting consistency across the entire codebase enhances code quality and allows developers to focus more on core tasks.

Facebook adopts CodeClone to identify redundancy

Facebook, a leading social media company, adopted the generative AI tool CodeClone to identify and eliminate redundant code across its extensive codebase. This reduced inconsistencies and produced a more streamlined, efficient codebase, which in turn led to faster development cycles.

Pioneer Square Labs uses GPT-4 for higher-level planning

Pioneer Square Labs, a studio that launches technology startups, adopted GPT-4 to handle mundane tasks so developers can focus on core work. This frees them up for higher-level planning and assists in writing code, streamlining the development process.

How does Typo leverage generative AI to reduce technical debt?

Typo’s automated code review tool enables developers to merge clean, secure, high-quality code faster. It lets developers catch issues related to maintainability, readability, and potential bugs, and it can detect code smells.

Typo also auto-analyzes your codebase and pull requests to find issues and auto-generates fixes before you merge to master. Its Auto-Fix feature leverages GPT 3.5 Pro, trained on millions of open-source data points as well as exclusive anonymised private data, to generate line-by-line code snippets wherever an issue is detected in the codebase.

As a result, Typo helps reduce technical debt by detecting and addressing issues early in the development process, preventing the introduction of new debt, and allowing developers to focus on high-quality tasks.

Issue detection by Typo

Autofixing the codebase with an option to directly create a Pull Request

Key features

Supports top 10+ languages

Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.

Fix every code issue

Typo understands the context of your code and quickly finds and fixes any issues accurately. Hence, empowering developers to work on software projects seamlessly and efficiently.

Efficient code optimization

Typo uses optimized practices and built-in methods spanning multiple languages. Hence, reducing code complexity and ensuring thorough quality assurance throughout the development process.

Professional coding standards

Typo standardizes code and reduces the risk of a security breach.

Click here to learn more about our Code Review tool

Can technical debt increase due to generative AI?

While generative AI can help reduce technical debt by analyzing code quality, removing redundant code, and automating the code review process, many engineering leaders believe it can also increase technical debt.

Bob Quillin, vFunction chief ecosystem officer, stated: “These new applications and capabilities will require many new MLOps processes and tools that could overwhelm any existing, already overloaded DevOps team.”

They aren’t wrong either!

Technical debt can increase when organizations don’t properly document and train development teams to implement generative AI the right way. When these AI tools are adopted hastily, without considering the long-term implications, they can instead increase developers’ workload and add to technical debt. A few practices help keep generative AI from becoming a source of debt itself.

Ethical guidelines

Establish ethical guidelines for the use of generative AI in organizations. Understand the potential ethical implications of using AI to generate code, such as the impact on job displacement, intellectual property rights, and biases in AI-generated output.

Diverse training data quality

Ensure the quality and diversity of training data used to train generative AI models. When training data is biased or incomplete, these AI tools can produce biased or incorrect output. Regularly review and update training data to improve the accuracy and reliability of AI-generated code.

Human oversight

Maintain human oversight throughout the generative AI process. While AI can generate code snippets and provide suggestions, the final decision should rest with developers, who must review and validate the output to ensure correctness, security, and adherence to coding standards.

Most importantly, human intervention is a must when using these tools. After all, it is developers’ judgment, creativity, and domain knowledge that drive the final decision. Generative AI is indeed helpful for reducing developers’ manual tasks, but it needs to be used properly.

Conclusion

In a nutshell, generative AI tools can help manage technical debt when used correctly. They help identify redundancy in code, improve readability and maintainability, and generate high-quality code.

However, note that these AI tools shouldn’t be used independently. They must work only as developers’ assistants, and developers must use them transparently and fairly.


What is Software Capitalization?

Most companies treat software development costs as just another expense and are unsure how certain costs can be capitalized. 

Recording the actual value of a software development effort requires recognizing the development process as a high-return asset.

That’s what software capitalization is for.

This article will answer all the what’s, why’s, and when’s of software capitalization.

What is Software Capitalization?

Software capitalization is an accounting process that recognizes the incurred software development costs and treats them as long-term assets rather than immediate expenses. 

Typical costs include employee wages, third-party app expenses, consultation fees, and license purchases. 

The idea is to amortize these costs over the software’s lifetime, thus aligning expenses with future revenues generated by the software.
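For illustration, amortizing a capitalized cost over a useful life can be computed as follows; the dollar figures, the five-year life, and the straight-line method are assumptions for the example, not prescriptions from any accounting standard:

```python
def straight_line_amortization(capitalized_cost: float, useful_life_years: int) -> list[float]:
    """Spread a capitalized software cost evenly across its useful life."""
    annual_expense = capitalized_cost / useful_life_years
    return [round(annual_expense, 2)] * useful_life_years

# e.g. $600,000 of capitalized development cost over a 5-year useful life
straight_line_amortization(600_000, 5)  # $120,000 recognized as expense each year
```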

Why is Software Capitalization Important?

Shifting a developed software’s narrative from being an expense to a revenue-generating asset comes with some key advantages:

1. Preserves profitability

Capitalization helps preserve profitability for the longer term by reducing the impact on the company’s expenses. That’s because you amortize intangible and tangible asset expenses, thus minimizing cash flow impact.   

2. Reflects asset value

Capitalizing software development costs results in higher reported asset value and reduces short-term expenses, which ultimately improves your profitability metrics like net profit margin, ARR growth, and ROA (return on assets).

3. Complies with accounting standards

Software capitalization complies with the rules set by major accounting standards like ASC 350-40, U.S. GAAP, and IFRS and makes it easier for companies to undergo audits.

When is Software Capitalization Applicable?

Here’s when it’s acceptable to capitalize software costs:

1. Development stage

The software development stage begins once you receive funding and enter an active development phase. Here, you can capitalize any cost directly related to development, provided the software is for internal use.

Example costs include interface designing, coding, configuring, installation, and testing.

2. Technical feasibility

If the software is intended for external use, then your costs can be capitalized when the software reaches the technical feasibility stage, i.e., when it’s viable. Example costs include coding, testing, and employee wages. 

3. Future economic benefits

The software must be a probable candidate to generate consistent revenue for your company in the long run and thus be considered an “asset.” For external-use software, this can mean there is an expectation of selling or leasing it.

4. Measurable costs 

The overall software development costs must be accurately measurable. This ensures that the capitalized amount reflects the actual amount invested in the software.

Key Costs that can be Capitalized

The five main costs you can capitalize for software are:

1. Direct development costs

Direct costs that go into your active development phase can be capitalized. These include payroll costs of employees who were directly part of the software development, additional software purchase fees, and travel costs.

2. External development costs

These are costs incurred when working with external service providers. Examples include travel costs, technical support, outsourcing expenses, and more.

3. Software licensing fees

License fees can be capitalized instead of being treated as an expense, although this depends on the accounting standard applied. Under GAAP, for example, a one-time software license purchase can be capitalized when it provides long-term benefits.

4. Acquisition costs

Acquisition costs can be capitalized as assets, provided your software is intended for internal use. 

5. Training and documentation costs

Training and documentation costs are considered assets only if you’re investing in them during the development phase. Post-implementation, these costs turn into operating expenses and cannot be amortized. 

Costs that should NOT be Capitalized

Here are a few costs that do not qualify for software capitalization and are expensed:

1. Research and planning costs 

Research and planning stages are categorized under the preliminary software development stage. These incurred costs are expensed and cannot be capitalized. The GAAP accounting standard, for example, states that an organization can begin to capitalize costs only after completing these stages. 

2. Post-implementation costs 

Post-implementation, or the operational stage, is the maintenance period after the software is fully deployed. Any costs during this time, whether training, support, or other operational charges, are expensed as incurred. 

3. Costs for upgrades and enhancements

Any costs related to routine software upgrades, modernization, or enhancements cannot be capitalized; examples include money spent on bug fixes, minor modifications, and routine maintenance activities. 

Accounting Standards you should know for Software Capitalization

Below are the two most common accounting standards that state the eligibility criteria for software capitalization: 

1. U.S. GAAP (Generally Accepted Accounting Principles)

GAAP is a set of rules and procedures that organizations must follow while preparing their financial statements. These standards ensure accuracy and transparency in reporting across industries, including software. 

Understanding GAAP and key takeaways for software capitalization:

  • GAAP allows capitalization for internal and external costs directly related to the software development process. Examples of costs include licensing fees, third-party development costs, and wages of employees who are part of the project.
  • Costs incurred after the software is deemed viable but before it is ready for use can be capitalized. Example costs can be for coding, installation, and testing. 
  • Every post-implementation cost is expensed.
  • A development project still in the preliminary or planning phase is too early to capitalize. 

2. IFRS (International Financial Reporting Standards)

IFRS is an alternative to GAAP and is used worldwide. Compared to GAAP, IFRS allows broader capitalization of development costs, provided you meet every criterion, which naturally makes the standard more complex to apply.

Understanding IFRS and key takeaways for software capitalization:

  • IFRS treats computer software as an intangible asset. If it’s internally developed software (for internal/external use or sale), it is charged to expense until it reaches technical feasibility.
  • All research and planning costs are charged as expenses.
  • Development costs are capitalized only after technical feasibility and, for software intended for sale, commercial feasibility have been established.  

Financial Implications of Software Capitalization

Software capitalization, from a financial perspective, can have the following aftereffects:

1. Impact on profit and loss statement

A company’s profit and loss (P&L) statement is an income report that shows the company’s overall expenses and revenues. If your company capitalizes some of the software’s development costs, they are recognized as assets rather than immediate losses, and the investment is amortized over time instead of hitting the P&L all at once. 

2. Balance sheet impact

Software capitalization treats your development-related costs as long-term assets rather than incurred expenses. The costs sit on the balance sheet as an asset and are expensed gradually through amortization once the finished product is in use and generating revenue. 

As a result, reported net income is higher in the early periods than it would be if the costs were expensed immediately.

3. Tax considerations 

Although tax implications can be complex, capitalizing software can often lead to tax deferral. That’s because amortization deductions are spread across multiple periods, reducing your company’s tax burden for the time being. 

Detailed Software Capitalization Financial Model

Workforce and Development Parameters

Team Composition

  • Senior Software Engineers: 4
  • Mid-level Software Engineers: 6
  • Junior Software Engineers: 3
  • Total Team: 13 engineers

Compensation Structure (Annual)

  1. Senior Engineers
    • Base Salary: $180,000
    • Fully Loaded Cost: $235,000 (includes benefits, taxes, equipment)
    • Hourly Rate: $113 (2,080 working hours/year)
  2. Mid-level Engineers
    • Base Salary: $130,000
    • Fully Loaded Cost: $169,000
    • Hourly Rate: $81
  3. Junior Engineers
    • Base Salary: $90,000
    • Fully Loaded Cost: $117,000
    • Hourly Rate: $56

Story Point Economics

Story Point Allocation Model

  • 1 Story Point = 1 hour of work
  • Complexity-based rates per story point:
    • Junior: $56/SP
    • Mid-level: $81/SP
    • Senior: $113/SP

Project Capitalization Worksheet

Project: Enterprise Security Enhancement Module

Detailed Story Point Breakdown

Indirect Costs Allocation

  1. Infrastructure Costs
    • Cloud Development Environments: $75,000
    • Security Testing Platforms: $45,000
    • Development Tools Licensing: $30,000
    • Total: $150,000
  2. Overhead Allocation
    • Project Management (15%): $37,697
    • DevOps Support (10%): $25,132
    • Total Overhead: $62,829

Total Capitalization Calculation

  • Direct Labor Costs: $251,316
  • Infrastructure Costs: $150,000
  • Overhead Costs: $62,829
  • Total Capitalizable Costs: $464,145
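
As a sanity check, the worksheet arithmetic above can be reproduced in a few lines; all figures come from the model, and the overhead percentages are applied to the direct labor base:

```python
# Reproduce the capitalization worksheet totals (figures from the model above).
direct_labor = 251_316
infrastructure = 75_000 + 45_000 + 30_000      # cloud envs + security testing + tool licenses

pm_overhead = round(direct_labor * 0.15)       # project management, 15% of direct labor
devops_overhead = round(direct_labor * 0.10)   # DevOps support, 10% of direct labor
overhead = pm_overhead + devops_overhead

total_capitalizable = direct_labor + infrastructure + overhead
print(pm_overhead, devops_overhead, overhead)  # 37697 25132 62829
print(total_capitalizable)                     # 464145
```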

Capitalization Eligibility Assessment

Capitalization Criteria Checklist

✓ Specific identifiable project 

✓ Intent to complete and use the software 

✓ Technical feasibility demonstrated 

✓ Expected future economic benefits 

✓ Sufficient resources to complete project 

✓ Ability to reliably measure development costs

Amortization Schedule

Useful Life Estimation

  • Estimated Useful Life: 4 years
  • Amortization Method: Straight-line
  • Annual Amortization: $116,036 ($464,145 ÷ 4)
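
Straight-line amortization books the same charge in each of the four years. A minimal sketch of the resulting schedule, using the totals above:

```python
# Straight-line amortization schedule for the capitalized asset.
total_capitalized = 464_145
useful_life_years = 4

annual_amortization = total_capitalized / useful_life_years  # 116,036.25 per year
book_value = float(total_capitalized)
for year in range(1, useful_life_years + 1):
    book_value -= annual_amortization
    print(f"Year {year}: expense ${annual_amortization:,.2f}, "
          f"remaining book value ${book_value:,.2f}")
# Year 1 leaves $348,108.75 on the books; Year 4 brings the balance to $0.00.
```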

Financial Impact Analysis

Income Statement Projection

Risk Mitigation Factors

Capitalization Risk Assessment

  1. Over-capitalization probability: Low (15%)
  2. Underestimation risk: Moderate (25%)
  3. Compliance deviation risk: Low (10%)

Sensitivity Analysis

Cost Variation Scenarios

  • Best Case: $440,938 (5% cost reduction)
  • Base Case: $464,145 (current estimate)
  • Worst Case: $487,352 (5% cost increase)

Compliance Considerations

Key Observations

  1. Precise tracking of story points allows granular cost allocation
  2. Multi-tier engineer cost model reflects skill complexity
  3. Comprehensive overhead and infrastructure costs included
  4. Rigorous capitalization criteria applied

Recommendation

Capitalize the entire $464,145 as an intangible asset, amortizing over 4 years.

How Typo can help 

Tracking R&D investments is a major part of streamlining software capitalization while leaving no room for manual errors. With Typo, you streamline this entire process by automating the reporting and management of R&D costs.

Typo’s best features and benefits for software capitalization include:

  • Automated Reporting: Generates customizable reports for capitalizable and non-capitalizable work.
  • Resource Allocation: Provides visibility into team investments, allowing for realignment with business objectives.
  • Custom Dashboards: Offers real-time tracking of expenditures and resource allocation.
  • Predictive Insights: Uses KPIs to forecast project timelines and delivery risks.
  • DORA Metrics: Assesses software delivery performance, enhancing productivity.

Typo transforms R&D from a cost center into a revenue-generating function by optimizing financial workflows and improving engineering efficiency, thus maximizing your returns on software development investments.

Wrapping up

Capitalizing software costs allows tech companies to secure better investment opportunities by increasing profits legitimately. 

Although software capitalization can be quite challenging, it presents massive future revenue potential.

With a tool like Typo, you rapidly maximize returns on software development investments with its automated capitalized asset reporting and real-time effort tracking. 

Understanding Cyclomatic Complexity: A Developer's Comprehensive Guide

Introduction

Look, let's cut to the chase. As a software developer, you've probably heard about cyclomatic complexity, but maybe you've never really dug deep into what it means or why it matters. This guide is going to change that. We'll break down everything you need to know about cyclomatic complexity - from its fundamental concepts to practical implementation strategies.

What is Cyclomatic Complexity?

Cyclomatic complexity is essentially a software metric that measures the structural complexity of your code. Think of it as a way to quantify how complicated your software's control flow is. The higher the number, the more complex and potentially difficult to understand and maintain your code becomes.

Imagine your code as a roadmap. Cyclomatic complexity tells you how many different paths or "roads" exist through that map. Each decision point, each branch, each conditional statement adds another potential route. More routes mean more complexity, more potential for bugs, and more challenging maintenance.

Why Should You Care?

  1. Code Maintainability: Higher complexity means harder-to-maintain code
  2. Testing Effort: More complex code requires more comprehensive testing
  3. Potential Bug Zones: Increased complexity correlates with higher bug probability
  4. Performance Implications: Complex code can lead to performance bottlenecks

What is the Formula for Cyclomatic Complexity?

The classic formula for cyclomatic complexity is beautifully simple:

V(G) = E − N + 2P

Where:

  • V(G): Cyclomatic complexity
  • E: Number of edges in the control flow graph
  • N: Number of nodes in the control flow graph
  • P: Number of connected components (typically 1 for a single function/method)
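
As a quick worked example, consider a hypothetical control flow graph for a single if/else: the condition node branches to two blocks that rejoin at an exit node, giving 4 edges and 4 nodes in one connected component:

```python
# CFG of a single if/else:
#   cond -> then, cond -> else, then -> exit, else -> exit
edges, nodes, components = 4, 4, 1
v_g = edges - nodes + 2 * components
print(v_g)  # 2: two independent paths (the then-branch and the else-branch)
```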

Alternatively, you can calculate it by counting decision points:

Cyclomatic Complexity = Number of Decision Points + 1

Decision points include:

  • if statements
  • else clauses
  • switch cases
  • for loops
  • while loops
  • && and || operators
  • catch blocks
  • Ternary operators

Practical Calculation Example

Let's break down a code snippet:
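
The original snippet isn’t reproduced here; as an illustration, a hypothetical function with exactly four decision points (a `for`, an `if`, an `and`, and an `elif`) would match the calculation below:

```python
def order_total(orders):
    total = 0.0
    for order in orders:                                       # decision point 1
        if order["status"] == "paid" and order["amount"] > 0:  # decision points 2, 3
            total += order["amount"]
        elif order["status"] == "pending":                     # decision point 4
            total += order["amount"] * 0.5                     # pending orders count half
    return total
```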

Calculation:

  • Decision points: 4
  • Cyclomatic Complexity: 4 + 1 = 5

Practical Example of Cyclomatic Complexity

Let's walk through a real-world scenario to demonstrate how complexity increases.

Low Complexity Example
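
A hypothetical straight-line function with no branches at all:

```python
def format_name(first, last):
    # No conditionals, loops, or boolean operators: a single execution path.
    return f"{first.strip()} {last.strip()}"
```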

Cyclomatic Complexity: 1 (No decision points)

Medium Complexity Example
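
A hypothetical function with two independent decision points:

```python
def shipping_cost(weight_kg, express):
    cost = 5.0
    if weight_kg > 10:   # decision point 1: heavy-item surcharge
        cost += 7.5
    if express:          # decision point 2: express doubles the price
        cost *= 2
    return cost
```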

Cyclomatic Complexity: 3 (Two decision points)

High Complexity Example
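
A hypothetical function with nested conditions and a boolean operator, six decision points in total:

```python
def loan_decision(applicant):
    if applicant["age"] < 18:                        # decision point 1
        return "reject"
    if applicant["income"] > 50_000:                 # decision point 2
        if applicant["credit_score"] >= 700:         # decision point 3
            return "approve"
        elif applicant["credit_score"] >= 600:       # decision point 4
            return "review"
    if applicant["has_guarantor"] and applicant["credit_score"] >= 550:  # points 5, 6
        return "review"
    return "reject"
```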

Cyclomatic Complexity: 7-8 (Multiple nested conditions)

How to Test Cyclomatic Complexity

Manual Inspection Method

  1. Count decision points in your function
  2. Add 1 to the total number of decision points
  3. Verify the complexity makes sense for the function's purpose
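
The manual count above can be automated with a short script. This sketch uses Python’s standard `ast` module and a simplified decision-node model (real tools such as radon also handle comprehensions and assertions, and strict decision-point counting counts each boolean operator separately):

```python
import ast

# Simplified model: each of these node types adds one decision point.
# Note: `a and b and c` parses as a single BoolOp here, so chained
# operators are under-counted relative to strict decision-point counting.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Count decision points in the source and add 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

sample = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(sample))  # 2 decision points (if + elif) -> 3
```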

Automated Testing Approaches

Most modern programming languages have tools to automatically calculate cyclomatic complexity:

  • Python: radon, pylint
  • Java: SonarQube, JDepend
  • JavaScript: eslint-plugin-complexity
  • .NET: Visual Studio's built-in metrics

Recommended Complexity Thresholds

  • Low Complexity (1-5): Easily maintainable, minimal testing required
  • Medium Complexity (6-10): Requires careful testing, potential refactoring
  • High Complexity (11-20): Significant refactoring needed
  • Very High Complexity (20+): Immediate refactoring required

Cyclomatic Complexity Analysis Techniques

Static Code Analysis

  • Use automated tools to scan your codebase
  • Generate complexity reports
  • Identify high-complexity functions
  • Prioritize refactoring efforts

Refactoring Strategies

  • Extract Method: Break complex methods into smaller, focused methods
  • Replace Conditional with Polymorphism: Use object-oriented design principles
  • Simplify Conditional Logic: Reduce nested conditions
  • Use Guard Clauses: Eliminate deep nesting

Code Example: Refactoring for Lower Complexity

Before (High Complexity):
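
The original snippet isn’t shown; as a hypothetical stand-in, consider a deeply nested discount calculation:

```python
def apply_discount(user, cart_total):
    # Every case is handled by nesting one level deeper.
    if user is not None:
        if user["active"]:
            if cart_total > 100:
                if user["tier"] == "gold":
                    return cart_total * 0.80
                else:
                    return cart_total * 0.90
            else:
                return cart_total
        else:
            return cart_total
    else:
        return cart_total
```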

After (Lower Complexity):
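
As an illustration of the guard-clause strategy, a flattened version of a nested discount calculation might look like this (again a hypothetical sketch):

```python
def apply_discount(user, cart_total):
    # Guard clauses: handle each non-qualifying case with an early return
    # instead of nesting deeper.
    if user is None or not user["active"]:
        return cart_total
    if cart_total <= 100:
        return cart_total
    if user["tier"] == "gold":
        return cart_total * 0.80
    return cart_total * 0.90
```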

Tools and Software for Cyclomatic Complexity

Integrated Development Environment (IDE) Tools

  • Visual Studio Code: Extensions like "Code Metrics"
  • JetBrains IDEs: Built-in code complexity analysis
  • Eclipse: Various complexity measurement plugins

Cloud-Based Analysis Platforms

  • GitHub Actions
  • GitLab CI/CD
  • Typo AI
  • SonarCloud

How Typo Solves for Cyclomatic Complexity

Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother by optimizing complex methods, reducing cyclomatic complexity, and standardizing code efficiently.

Key Features of Typo

  1. Complexity Measurement
    • Detailed cyclomatic complexity tracking
    • Real-time complexity score generation
    • Granular analysis at function and method levels
  2. Code Quality Metrics
    • Automated code smell detection
    • Technical debt estimation
  3. Integration Capabilities
    • Seamless GitHub/GitLab integration
    • CI/CD pipeline support
    • Continuous monitoring of code repositories
  4. Language Support

Conclusion

Cyclomatic complexity isn't just a theoretical concept—it's a practical tool for writing better, more maintainable code. By understanding and managing complexity, you transform yourself from a mere coder to a software craftsman.

Remember: Lower complexity means:

  • Easier debugging
  • Simpler testing
  • More readable code
  • Fewer potential bugs

Keep your code clean, your complexity low, and your coffee strong! 🚀👩‍💻👨‍💻

Pro Tip: Make complexity measurement a regular part of your code review process. Set team standards and continuously refactor to keep your codebase healthy.

How to Manage Scope Creep?

Scope creep is one of the most challenging—and often frustrating—issues engineering managers face. As projects progress, new requirements, changing technologies, and evolving stakeholder demands can all lead to incremental additions that push your project beyond its original scope. Left unchecked, scope creep strains resources, raises costs, and jeopardizes deadlines, ultimately threatening project success.

This guide is here to help you take control. We’ll delve into advanced strategies and practical solutions specifically for managers to spot and manage scope creep before it disrupts your project. With detailed steps, technical insights, and tools like Typo, you can set boundaries, keep your team aligned, and drive projects to a successful, timely completion.

Understanding Scope Creep in Sprints

Scope creep can significantly impact projects, affecting resource allocation, team morale, and project outcomes. Understanding what scope creep is and why it frequently occurs provides a solid foundation for developing effective strategies to manage it.

What is Scope Creep?

Scope creep in projects refers to the gradual addition of project requirements beyond what was originally defined. Unlike industries with stable parameters, software projects often encounter rapid changes—emerging features, stakeholder requests, or even unanticipated technical complexities—that challenge the initial project boundaries.

While additional features can improve the end product, they can also risk the project's success if not managed carefully. Common triggers for scope creep include unclear project requirements, mid-project requests from stakeholders, and iterative development cycles, all of which require proactive management to keep projects on track.

Why does Scope Creep Happen?

Scope creep often results from the unique factors inherent to the industry. By understanding these drivers, you can develop processes that minimize their impact and keep your project on target.

Common drivers include:

  • Unclear requirements: At the start of a project, unclear or vague requirements can lead to an ever-expanding set of deliverables. For engineering managers, ensuring all requirements are well-defined is critical to setting project boundaries.
  • Shifting technological needs: IT projects must often adapt to new technology or security requirements that weren’t anticipated initially, leading to added complexity and potential delays.
  • Stakeholder influence and client requests: Frequent client input can introduce scope creep, especially if changes are not formally documented or accounted for in resources and timelines.
  • Agile development: Agile development allows flexibility and iterative updates, but without careful scope management, it can lead to feature creep.

These challenges make it essential for managers to recognize scope creep indicators early and develop robust systems to manage new requests and technical changes.

Identifying Scope Creep Early in the Sprints

Identifying scope creep early is key to preventing it from derailing your project. By setting clear boundaries and maintaining consistent communication with stakeholders, you can catch scope changes before they become a problem.

Define Clear Project Scope and Objectives

The first step in minimizing scope creep is establishing a well-defined project scope that explicitly outlines deliverables, timelines, and performance metrics. In sprints, this scope must include technical details like software requirements, infrastructure needs, and integration points.

Regular Stakeholder Check-Ins

Frequent communication with stakeholders is crucial to ensure alignment on the project’s progress. Schedule periodic reviews to present progress, confirm objectives, and clarify any evolving requirements.

Routine Project Reviews and Status Updates

Integrate routine reviews into the project workflow to regularly assess the project’s alignment with its scope. Typo enables teams to conduct these reviews seamlessly, providing a comprehensive view of the project’s current state. This structured approach allows managers to address any adjustments or unexpected tasks before they escalate into significant scope creep issues.

Strategies for Managing Scope Creep

Once scope creep has been identified, implementing specific strategies can help prevent it from escalating. With the following approaches, you can address new requests without compromising your project timeline or objectives.

Implement a Change Control Process

One of the most effective ways to manage scope creep is to establish a formal change control process. A structured approach allows managers to evaluate each change request based on its technical impact, resource requirements, and alignment with project goals.

Effective Communication and Real-Time Updates 

Communication breakdowns can lead to unnecessary scope expansion, especially in complex team environments. Use Typo’s Sprint Analysis to track project changes and real-time developments. This level of visibility gives stakeholders a clear understanding of trade-offs and allows managers to communicate the impact of requests, whether related to resource allocation, budget implications, or timeline shifts.

Prioritize and Adjust Requirements in Real Time

In software development, feature prioritization can be a strategic way to handle evolving needs without disrupting core project objectives. When a high-priority change arises, use Typo to evaluate resource availability, timelines, and dependencies, making necessary adjustments without jeopardizing essential project elements.

Advanced Tools and Techniques to Prevent Scope Creep

Beyond basic strategies, specific tools and advanced techniques can further safeguard your IT project against scope creep. Leveraging project management solutions and rigorous documentation practices are particularly effective.

Leverage Typo for End-to-End Project Management

For projects, having a comprehensive project management tool can make all the difference. Typo provides robust tracking for timelines, tasks, and resources that align directly with project objectives. Typo also offers visibility into task assignments and dependencies, which helps managers monitor all project facets and mitigate scope risks proactively.

Detailed Change Tracking and Documentation

Documentation is vital in managing scope creep, especially in projects where technical requirements can evolve quickly. By creating a “single source of truth,” Typo enables the team to stay aligned, with full visibility into any shifts in project requirements.

Budget and Timeline Contingencies

Software projects benefit greatly from budget and time contingencies that allow for minor, unexpected adjustments. By pre-allocating resources for possible scope adjustments, managers have the flexibility to accommodate minor changes without impacting the project’s overall trajectory.

Maintaining Team Morale and Focus amid Scope Creep 

As scope adjustments occur, it’s important to maintain team morale and motivation. Empowering the team and celebrating their progress can help keep everyone focused and resilient.

Empower the Team to Decline Non-Essential Changes

Encouraging team members to communicate openly about their workload and project demands is crucial for maintaining productivity and morale.

Recognize and Celebrate Milestones

Managing IT projects with scope creep can be challenging, so it’s essential to celebrate milestones and acknowledge team achievements. 

Typo - An Effective Sprint Analysis Tool

Typo’s sprint analysis monitors scope creep to quantify its impact on the team’s workload and deliverables. It allows you to track and analyze your team’s progress throughout a sprint and helps you gain visual insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint. This information enables you to identify any potential problems early on and take corrective action.

Our sprint analysis feature uses data from Git and issue management tools to provide insights into how your team is working. You can see how long tasks are taking, how often they’re being blocked, and where bottlenecks are occurring. This information can help you identify areas for improvement and make sure your team is on track to meet their goals.


Taking Charge of Scope Creep

Effective management of scope creep in IT projects requires a balance of proactive planning, structured communication, and robust change management. With the right strategies and tools like Typo, managers can control project scope while keeping the team focused and aligned with project goals.

If you’re facing scope creep challenges, consider implementing these best practices and exploring Typo’s project management capabilities. By using Typo to centralize communication, track progress, and evaluate change requests, IT managers can prevent scope creep and lead their projects to successful, timely completion.


Typo Launches groCTO: Community to Empower Engineering Leaders

In an ever-evolving tech world, organizations need to innovate quickly while keeping up high standards of quality and performance. The key to achieving these goals is empowering engineering leaders with the right tools and technologies. 

About Typo

Typo is a software intelligence platform that optimizes software delivery by identifying real-time bottlenecks in SDLC, automating code reviews, and measuring developer experience. We aim to help organizations ship reliable software faster and build high-performing teams. 

However, engineering leaders often struggle to bridge the divide between traditional management practices and modern software development, leading to missed opportunities for growth, ineffective team dynamics, and slower progress in achieving organizational goals. 

To address this gap, we launched groCTO, a community designed specifically for engineering leaders.

What is groCTO Community? 

Effective engineering leadership is crucial for building high-performing teams and driving innovation. However, many leaders face significant challenges and gaps that hinder their effectiveness. The role of an engineering leader is both demanding and essential. From aligning teams with strategic goals to managing complex projects and fostering a positive culture, they have a lot on their plates. Hence, leaders need to have the right direction and support so they can navigate the challenges and guide their teams efficiently. 

Here’s where groCTO comes in! 

groCTO is a community designed to empower engineering managers on their leadership journey. The aim is to help engineering leaders evolve, navigate complex technical challenges, and drive innovative solutions to create groundbreaking software. Engineering leaders can connect, learn, and grow to enhance their capabilities and, in turn, the performance of their teams. 

Key Components of groCTO 

groCTO Connect

Over 73% of successful tech leaders believe having a mentor is key to their success.

At groCTO, we recognize mentorship as a powerful tool for addressing leadership challenges and offering personalised support and fresh perspectives. That’s why we’ve made Connect a cornerstone of our community, offering 1:1 mentorship sessions with global tech leaders and CTOs. With over 74 mentees and 20 mentors, our Connect program fosters valuable relationships and supports your growth as a tech leader.

These sessions allow emerging leaders to: 

  • Gain personalised advice: Through 1:1 sessions, mentors address individual challenges and tailor guidance to the specific needs and career goals of emerging leaders. 
  • Navigate career growth: These mentors understand the strengths and weaknesses of the individual and help them focus on improving specific leadership skills and competencies and build confidence. 
  • Build valuable professional relationships: Our mentorship sessions expand professional connections and foster collaborations and knowledge sharing that can offer ongoing support and opportunities. 

Weekly Tech Insights

To keep our tech community informed and inspired, groCTO brings you a fresh set of learning resources every week:

  • CTO Diaries: The CTO Diaries provide a unique glimpse into the experiences and lessons learned by seasoned Chief Technology Officers. These include personal stories, challenges faced, and successful strategies they implemented, helping engineering leaders gain practical insights and real-world examples that can inspire and inform their approach to leadership and team management.
  • Podcasts: 
    • groCTO Originals is a weekly podcast for current and aspiring tech leaders aiming to transform their approach by learning from seasoned industry experts and successful engineering leaders across the globe.
    • ‘The DORA Lab’ by groCTO is an exclusive podcast that’s all about DORA and other engineering metrics. In each episode, expert leaders from the tech world bring their extensive knowledge of the challenges, inspirations, and practical uses of DORA metrics and beyond.
  • Bytes: groCTO Bytes is a weekly Sunday dose of curated wisdom delivered straight to your inbox as a newsletter. Our goal is to keep tech leaders, CTOs, and VPEs up to date on the latest trends and best practices in engineering leadership, tech management, system design, and more.

Looking Ahead: Building a Dynamic Community

At groCTO, we are committed to making this community bigger and better. We want current and aspiring engineering leaders to invest in their growth as well as contribute to pushing the boundaries of what engineering teams can achieve.

We’re just getting started. A few of our future plans for groCTO include:

  • Virtual Events: We plan to conduct interactive webinars and workshops to help engineering leaders and CTOs get deeper dives into specific topics and networking opportunities.
  • Slack Channels: We plan to create Slack channels to allow emerging tech leaders to engage in vibrant discussions and get real-time support tailored to various aspects of engineering leadership.

We envision a community that thrives on continuous engagement and growth. By scaling our resources and expanding our initiatives, we want to ensure that every member of groCTO finds the support and knowledge they need to excel. 

Get in Touch with us! 

At Typo, our vision is clear: to ship reliable software faster and build high-performing engineering teams. With groCTO, we are making significant progress toward this goal by empowering engineering leaders with the tools and support they need to excel. 

Join us in this exciting new chapter and be a part of a community that empowers tech leaders to excel and innovate. 

We’d love to hear from you! For more information about groCTO and how to get involved, write to us at hello@grocto.dev

Why do Companies Choose Typo?

Dev teams hold great importance in the engineering organization. They are essential for building high-quality software products, fostering innovation, and driving the success of technology companies in today’s competitive market.

However, engineering leaders need to understand the bottlenecks holding their teams back, since these blind spots can directly affect projects. This is where software development analytics tools come to the rescue, and such tools stand out when they offer the range of features and integrations engineering leaders are usually looking for.

Typo is an intelligent engineering platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. Here’s why engineering leaders choose Typo as a key tool:

You get Customized DORA and other Engineering Metrics

Engineering metrics are the measurements of engineering outputs and processes. However, there isn’t a pre-defined set of metrics that the software development teams use to measure to ensure success. This depends on various factors including team size, the background of the team members, and so on.

Typo’s customized DORA (Deployment frequency, Change failure rate, Lead time, and Mean Time to Recover) key metrics and other engineering metrics can be configured in a single dashboard based on specific development processes. This helps benchmark the dev team’s performance and identifies real-time bottlenecks, sprint delays, and blocked PRs. With the user-friendly interface and tailored integrations, engineering leaders can get all the relevant data within minutes and drive continuous improvement.

Typo has an In-Built Automated Code Review Feature

Code review is all about improving the code quality. It improves the software teams’ productivity and streamlines the development process. However, when done manually, the code review process can be time-consuming and takes a lot of effort.

Typo’s automated code review tool auto-analyses codebase and pull requests to find issues and auto-generates fixes before it merges to master. It understands the context of your code and quickly finds and fixes any issues accurately, making pull requests easy and stress-free. It standardizes your code, reducing the risk of a software security breach and boosting maintainability, while also providing insights into code coverage and code complexity for thorough analysis.

You can Track the Team’s Progress with an Advanced Sprint Analysis Tool

While a burndown chart helps visually monitor a team’s work progress, it is time-consuming to maintain and doesn’t provide insights into the specific types of issues or tasks. Hence, it is advisable to complement it with a sprint analysis tool that provides additional insights tailored to agile project management.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. It uses data from Git and your issue management tool to show how much work has been completed, how much is still in progress, and how much time is left in the sprint. This helps identify potential problems early, spot areas where teams can be more efficient, and meet deadlines.
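At its core, that kind of sprint summary reduces to a few counts over issue-tracker data. A minimal sketch, assuming a hypothetical `status` field on each issue (not Typo’s actual data model):

```python
from datetime import date

def sprint_progress(issues, sprint_end, today):
    """Summarize sprint state from issue-tracker data.

    `issues` is a list of dicts with a hypothetical `status` field
    whose value is "done", "in_progress", or "todo".
    """
    total = len(issues)
    done = sum(1 for i in issues if i["status"] == "done")
    in_progress = sum(1 for i in issues if i["status"] == "in_progress")
    days_left = max((sprint_end - today).days, 0)  # never negative

    return {
        "completed_pct": 100 * done / total if total else 0.0,
        "in_progress": in_progress,
        "remaining": total - done,
        "days_left": days_left,
    }
```

Comparing `completed_pct` against the share of the sprint already elapsed is one simple way such a tool can surface delays early.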

The Metrics Dashboard Focuses on Team-Level Improvement, Not Micromanaging Individual Developers

When engineering metrics focus on individual success rather than team performance, they create a sense of surveillance rather than support. This leads to decreased motivation, productivity, and trust among development teams. There are better ways to use engineering metrics.

Typo’s metrics dashboard focuses on the team’s health and performance. It lets engineering leaders compare their team’s results with healthy benchmarks across industries and drive impactful initiatives. Since it considers only team goals, it encourages team members to work together and solve problems together, fostering a healthier, more productive work environment conducive to innovation and growth.

Typo Takes into Consideration the Human Side of Engineering

Measuring developer experience requires not only quantitative metrics but also qualitative feedback. By prioritizing the human side of team members alongside developer productivity, engineering managers can create a more inclusive and supportive environment for them.

Typo helps you get a 360° view of the developer experience: it captures qualitative insights and provides an in-depth view of the real issues that need attention. With signals from work patterns and continuous AI-driven pulse check-ins on developers’ experience, Typo surfaces early indicators of their well-being and actionable insights on the areas that need your attention. It also tracks developers’ work habits across multiple activities, such as Commits, PRs, Reviews, Comments, Tasks, and Merges, over a period of time. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system flags the developer as being in the burnout zone or at risk of burnout.
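The flagging rule described above, comparing each developer’s activity against the team average and an absolute benchmark, can be sketched roughly as follows. The thresholds here are illustrative assumptions, not Typo’s actual values:

```python
from statistics import mean

def burnout_flags(weekly_activity, benchmark=50, ratio=1.5):
    """Flag developers whose activity volume is well above their peers'.

    `weekly_activity` maps developer -> total weekly events
    (commits, PRs, reviews, comments, and merges combined).
    A developer is flagged when their volume exceeds the team average
    by `ratio`, or crosses the absolute `benchmark`. Both thresholds
    are hypothetical defaults for illustration.
    """
    avg = mean(weekly_activity.values())
    return {
        dev: count > ratio * avg or count > benchmark
        for dev, count in weekly_activity.items()
    }
```

In practice such a signal only matters when it is *sustained* across weeks, which is why the original rule stresses patterns that "consistently" exceed the norm.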

You can Integrate as Many Tools as Needed with the Dev Stack

The more tools that can be integrated with the software, the better it is for developers: integrations streamline the development process, enforce standardization and consistency, and provide access to valuable resources and functionality.

Typo lets you see the complete picture of your engineering health by seamlessly connecting to your tech tool stack. This includes:

  • Git version control tools for managing source code
  • Issue tracker tools for managing tasks, bug tracking, and other project-related issues
  • CI/CD tools to automate and streamline the software development process
  • Communication tools to facilitate the exchange of ideas and information
  • Incident management tools to resolve unexpected events or failures

Conclusion

Typo is a software delivery tool that can help ship reliable software faster. You can find real-time bottlenecks in your SDLC, automate code reviews, and measure developer experience – all in a single platform.

Typo Ranked as a Leader in G2 Summer 2023 Reports

The G2 Summer 2023 report is out!

We are delighted to share that Typo ranks as a leader in the Software Development analytics tool category. A big thank you to all our customers who supported us in this journey and took the time to write reviews about their experience. It really got us motivated to keep moving forward and bring the best to the table in the coming weeks.

Typo Taking the Lead

Typo is placed among the leaders in Software Development Analytics. Besides this, we earned the ‘Users Love Us’ badge as well.

Our wall of fame shines bright with –

  • Leader in the overall Grid® Report for Software Development Analytics Tools category
  • Leader in the Mid Market Grid® Report for Software Development Analytics Tools category
  • Rated #1 for Likelihood to Recommend
  • Rated #1 for Quality of Support
  • Rated #1 for Meets Requirements
  • Rated #1 for Ease of Use
  • Rated #1 for Analytics and Trends

Typo has been ranked a Leader in the Grid Report for Software Development Analytics Tool | Summer 2023. This is a testament to our continuous efforts toward building a product that engineering teams love to use.

The ratings also include –

  • 97% of the reviewers have rated Typo high in analyzing historical data to highlight trends, statistics & KPIs
  • 100% of the reviewers have rated us high in Productivity Updates

As a team, we achieved the following scores:

Typo user ratings

Here’s What our Customers Say about Typo

Check out what other users have to say about Typo here.

What Makes Typo Different?

Typo is an intelligent AI-driven Engineering Management platform that enables modern software teams with visibility, insights & tools to code better, deploy faster & stay aligned with business goals.

Having launched on Product Hunt, we started with 15 engineers working with sheer hard work and dedication, and we have since impacted 5,000+ developers and engineering leaders globally, covering 400,000+ PRs & 1.5M+ commits.

We are NOT just another software delivery analytics platform. We go beyond SDLC metrics to build an ecosystem that combines intelligent insights, impactful actions & automated workflows, helping managers lead better & developers perform better.

As a first step, Typo gives core insights into dev velocity, quality & throughput that have helped engineering leaders reduce their PR cycle time by almost 57% and deliver projects 2x faster.

PR cycle time

Continuous Improvement with Typo

Typo empowers continuous improvement for developers & managers through goal setting & visibility tailored to developers themselves.

Leaders can set goals to enforce best practices: keep PR sizes small, avoid merging PRs without review, identify high-risk work, and more. Typo nudges the key stakeholders on Slack as soon as a goal is breached. Typo also automates Slack workflows to help developers ship PRs and complete code reviews faster.
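Conceptually, each nudge is just a rule evaluated against a pull request. A minimal sketch of such goal checks, with hypothetical field and goal names (not Typo’s configuration format):

```python
def goal_breaches(pr, goals):
    """Return the list of goals a pull request breaches.

    `pr` is a dict with hypothetical fields: lines_changed,
    review_count, and touches_critical_path. `goals` carries the
    team's configured thresholds. Illustrative rules only.
    """
    breaches = []
    if pr["lines_changed"] > goals["max_pr_size"]:
        breaches.append("PR too large")
    if pr["review_count"] == 0:
        breaches.append("merged without review")
    if pr["touches_critical_path"]:
        breaches.append("high-risk change")
    # In a tool like Typo, each breach would trigger a Slack nudge
    # to the PR author and reviewers.
    return breaches
```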

Continuous Improvement with Typo

Developer’s View

Typo provides core insights to your developers that are 100% confidential to them. It helps developers identify their strengths and the core areas of improvement that have affected software delivery, and it helps them gain visibility & measure the impact of their work on team efficiency & goals.

Developer’s view

Developer’s Well-Being

We believe that all three aspects – work, collaboration & well-being – need to fall in place to help an individual deliver their best. Inspired by the SPACE framework for developer productivity, we support Pulse Check-Ins, Developer Experience insights, Burnout predictions & Engineering surveys to paint a complete picture.

Developer’s well-being

10X your Dev Teams’ Efficiency with Typo

It’s all of your immense love and support that made us a leader in such a short period. We are grateful to you!

But this is just the beginning. Our aim has always been to level up your dev game, and we will be coming out with exciting new releases in the next few weeks.

Interested in using Typo? Sign up for FREE today and get insights in 5 min.
