Think of reading a book with multiple plot twists and branching storylines. While engaging, it can also be confusing and overwhelming when there are too many paths to follow. Just as a complex storyline can confuse readers, high cyclomatic complexity can make code hard to understand, maintain, and test, leading to bugs and errors.
In this blog, we will discuss why high cyclomatic complexity can be problematic and ways to reduce it.
Cyclomatic complexity is a software metric developed by Thomas J. McCabe in 1976. It indicates the complexity of a program by counting its decision points.
A higher cyclomatic complexity score reflects more execution paths and therefore more complex code. A lower score signifies fewer paths and, hence, less complexity.
Cyclomatic Complexity is calculated using a control flow graph:
M = E - N + 2P
where:
M = cyclomatic complexity
E = number of edges (flow of control)
N = number of nodes (blocks of code)
P = number of connected components
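To make the formula concrete: a single function's control flow graph has one connected component (P = 1), so M reduces to the number of decision points plus one. A minimal Python illustration, counting each if and loop condition (not tied to any particular analysis tool):

```python
def classify(n):
    """Two ifs and one loop -> three decision points -> M = 3 + 1 = 4."""
    if n < 0:              # decision point 1
        return "negative"
    total = 0
    for digit in str(n):   # decision point 2 (loop condition)
        total += int(digit)
    if total % 2 == 0:     # decision point 3
        return "even digit sum"
    return "odd digit sum"
```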
The more complex the code, the higher the chances of bugs. When there are many possible paths and conditions, developers may overlook certain conditions or edge cases during testing, and exercising every path becomes impractical. The paths that go untested turn into defects in the software.
Cognitive complexity refers to the level of difficulty in understanding a piece of code.
Cyclomatic complexity is one of the factors that increases cognitive complexity. With many branches to hold in mind at once, developers cannot process the information effectively, which makes the overall logic of the code harder to understand.
Codebases with high cyclomatic complexity make onboarding difficult for new developers or team members. The learning curve becomes steeper, and they require more time and effort to understand the code and become productive. It also invites misunderstandings: they may misinterpret the logic or overlook critical paths.
These misunderstandings, in turn, result in more defects in the codebase. Complex code is also more prone to errors because it hinders adherence to coding standards and best practices.
With a complex codebase, the software development team may struggle to grasp the full impact of their changes, which introduces new errors and slows the process down. It also creates ripple effects: changes are hard to isolate because one modification can impact multiple areas of the application.
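One common reduction strategy, shown here as a minimal sketch, is replacing long branch chains with data-driven dispatch; the handler functions are hypothetical stand-ins:

```python
def on_open():    return "opened"
def on_close():   return "closed"
def on_save():    return "saved"
def on_unknown(): return "unsupported event"

# Before: each event type adds a decision point (M = 4).
def handle_event_branchy(event):
    if event == "open":
        return on_open()
    elif event == "close":
        return on_close()
    elif event == "save":
        return on_save()
    else:
        return on_unknown()

# After: a dispatch table removes the branch chain entirely (M = 1).
HANDLERS = {"open": on_open, "close": on_close, "save": on_save}

def handle_event(event):
    return HANDLERS.get(event, on_unknown)()
```

The dispatch version also stays at M = 1 as new event types are added, whereas the branch chain grows by one decision point per type.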
Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.
The cyclomatic complexity metric is critical in software engineering. Reducing cyclomatic complexity makes code more maintainable, readable, and simple. By applying strategies like the refactoring sketched above, software engineering teams can reduce complexity and create a more streamlined codebase. Tools like Typo's automated code review also help by identifying complexity issues early and providing quick fixes, enhancing overall code quality.
Burndown charts are essential instruments for tracking the progress of agile teams. They are simple and effective ways to determine whether the team is on track or falling behind. However, there may be times when a burndown chart is not ideal for teams, as it may not capture a holistic view of the agile team’s progress.
In this blog, we discuss the latter in greater detail.
A burndown chart is a visual representation of a team's progress, used in agile project management. It helps scrum teams and agile project managers assess whether a project is on track.
The primary objective is to accurately depict time allocation and help plan future resources.
The chart has two axes: the horizontal axis represents time or iterations, and the vertical axis displays the remaining user story points.
The ideal work remaining line represents the work an agile team would have left at each point of the project or sprint under ideal conditions.
The actual work remaining line is a realistic indication of the team's progress, updated in real time. When this line is consistently below the ideal line, the team is ahead of schedule; when it is above, the team is falling behind.
Where the actual line ends indicates whether the team completed the project or sprint on time, behind schedule, or ahead of it.
The data points on the actual work remaining line represent the amount of work left at specific intervals, typically daily updates.
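To see how these lines relate, here is a minimal sketch with hypothetical sprint data; a real chart pulls the actual values from the issue tracker's daily updates:

```python
# Hypothetical 10-day sprint committing 40 story points.
sprint_days = 10
committed_points = 40

# Ideal line: work burns down linearly to zero.
ideal = [committed_points * (1 - day / sprint_days) for day in range(sprint_days + 1)]

# Actual line: remaining points recorded at each daily update.
actual = [40, 40, 36, 33, 33, 30, 24, 20, 14, 6, 0]

for day, (i, a) in enumerate(zip(ideal, actual)):
    status = "ahead" if a < i else "behind" if a > i else "on track"
    print(f"Day {day:2d}: ideal {i:5.1f}, actual {a:3d} -> {status}")
```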
There are two types of burndown charts:
The project burndown chart focuses on the big picture and visualizes the entire project. It helps project managers and teams monitor the completion of work across multiple sprints and iterations.
The sprint burndown chart tracks the remaining work within a single sprint, indicating progress toward completing the sprint backlog.
A burndown chart captures how much work is completed and how much is left. It lets the agile team compare actual progress against the ideal progress line to see whether they are ahead of or behind schedule.
A burndown chart motivates teams to align their progress with the ideal line. These small milestones boost morale and keep motivation high throughout the sprint, and seeing tasks completed on time reinforces a sense of achievement.
It helps in analyzing performance over a sprint during retrospectives. Agile teams can review past burndown charts to identify patterns, adjust future estimates, and refine processes for improved efficiency. The charts let them pinpoint periods where progress slowed and help uncover blockers that need to be addressed.
A burndown chart visualizes the direct comparison between planned work and actual progress. Teams can quickly assess whether they are on track to meet their goals and monitor trends or recurring issues such as over-committing or underestimating tasks.
While the burndown chart comes with lots of pros, it can also be misleading. It focuses solely on task completion without accounting for individual developer productivity, and it ignores aspects of agile software development such as code quality, team collaboration, and problem-solving.
A burndown chart doesn't explain how tasks affected developer productivity, or why progress fluctuated due to factors such as team morale, external dependencies, or unexpected challenges. It also ignores work quality, which leaves underlying issues unaddressed.
While the burndown chart is a visual representation of an agile team's progress, it fails to capture the intricate layers and interdependencies within a project. It overlooks critical factors that influence project outcomes, which may lead to misinformed decisions and unrealistic expectations.
Scope creep refers to modifications in project requirements, such as adding new features or altering existing tasks. A burndown chart doesn't account for this; instead it shows a flat line or even an apparent decline in progress, which can suggest the team is underperforming when that isn't actually the case. This leads to misinterpretation of the team's progress and overall project health.
A burndown chart doesn't differentiate between easy and difficult tasks. It considers all tasks equal, regardless of their size, complexity, or effort required. Whether a task is high priority or low impact, it is treated the same, obscuring insight into what truly matters for the project's success.
A burndown chart treats all team members alike. It doesn't take individual contributions into consideration, or factors such as personal challenges. It also neglects how well members work with each other, share knowledge, or support each other in completing tasks.
Gantt charts are ideal for complex projects. They visually represent a project schedule using horizontal bars, providing a clear timeline for each task, i.e., when it starts and ends, and making overlapping tasks and the dependencies between them easy to see.
A cumulative flow diagram (CFD) visualizes how work moves through different stages. It offers insight into workflow status and helps identify trends and bottlenecks. It also supports measuring key metrics such as cycle time and throughput, as sketched below.
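As a toy sketch of how those metrics fall out of the same stage-transition data a CFD plots (the timestamps are hypothetical):

```python
from datetime import date

# Hypothetical start/done dates for work items finished in one sampled window.
items = {
    "TASK-1": (date(2024, 6, 3), date(2024, 6, 7)),
    "TASK-2": (date(2024, 6, 4), date(2024, 6, 5)),
    "TASK-3": (date(2024, 6, 4), date(2024, 6, 11)),
}

# Cycle time: elapsed days between starting and finishing an item.
cycle_times = {key: (done - start).days for key, (start, done) in items.items()}

# Throughput: number of items completed in the window.
throughput = len(items)

average = sum(cycle_times.values()) / len(cycle_times)
print(f"cycle times: {cycle_times}")
print(f"average cycle time: {average:.1f} days, throughput: {throughput} items")
```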
Kanban boards are an agile management tool best suited for ongoing work. They help visualize work, limit work in progress, and manage workflows, and they can easily accommodate changes in project scope without the need to adjust timelines.
A burnup chart is a quick, easy way to plot work along two lines against a vertical axis: how much work has been done and the total scope of the project. This provides a clearer picture of project completion, especially when scope changes.
Developer intelligence (DI) platforms focus on how smooth and satisfying the developer experience is. They streamline the development process and offer a holistic view of team productivity, code quality, and developer satisfaction. These platforms also provide real-time insights into metrics that reflect the team's overall health and efficiency beyond task completion alone.
One such platform is Typo, which goes beyond traditional metrics. Its sprint analysis is an essential tool for any team using an agile development methodology. It allows agile teams to monitor and assess progress across the sprint timeline, providing visual insights into completed work, ongoing tasks, and remaining time. This visual representation lets teams spot potential issues early and make timely adjustments.
Our sprint analysis feature leverages data from Git and issue management tools to focus on team workflows. Teams can track task durations, identify frequent blockers, and pinpoint bottlenecks.
With easy integration into existing Git and Jira/Linear/ClickUp workflows, Typo helps agile teams stay on track, optimize processes, and deliver quality results efficiently.
While the burndown chart is a valuable tool for visualizing task completion and tracking progress, it often overlooks critical aspects like team morale, collaboration, code quality, and factors impacting developer productivity. There are several alternatives to the burndown chart, with Typo’s sprint analysis tool standing out as a powerful option. Through this, agile teams gain a more comprehensive view of progress, fostering resilience, motivation, and peak performance.
In this episode of the groCTO Originals podcast, host Kovid Batra talks to Venkat Rangasamy, the Director of Engineering at Oracle & an advisory member at HBR, about 'How AI is Revolutionizing Software Engineering'.
Venkat discusses his journey from a humble background to his current role and his passion for mentorship and generative AI. The main focus is on the revolutionary impact of AI on the Software Development Life Cycle (SDLC), making product development cheaper, more efficient, and of higher quality. The conversation covers the challenges of using public LLMs versus local LLMs, the evolving role of developers, and actionable advice for engineering leaders in startups navigating this transformative phase.
Kovid Batra: Hi, everyone. This is Kovid, back with another episode of the groCTO podcast. And today with us, we have a very special guest, Mr. Venkat Rangasamy. He's the Director of Engineering at Oracle. He is the advisor at HBR Advisory Council, where he's helping HBR create content on leadership and management. He comes with 18 plus years of engineering and leadership experience. It's a pleasure to have you on the show, Venkat. Welcome.
Venkat Rangasamy: Yup. Likewise. Thank you. Thanks for the opportunity to discuss on some of the hot topics what we have. I'm, I'm pleasured to be here.
Kovid Batra: Great, Venkat. So I think there is a lot to talk about, uh, what's going on in the engineering landscape. And just for the audience, uh, today's topic is around, uh, how AI is impacting the overall engineering landscape and Venkat coming from that space with an immense experience and exposure, I think there will be a lot of insights coming in from your end. Uh, but before we move on to that section, uh, I would love to know a little bit more about you. Our audience would also love to know a little bit more about you. So anything that you would like to share, uh, from your personal life, from your professional journey, any hobbies, any childhood memories that shape up who you are today, how things have changed for you. We would love to hear about you. Yeah.
Venkat Rangasamy: Yup. Um, in, in, in my humble background, I started, um, without nothing much in place, where, um, started my career and even studies, I did really, really on like, not even electricity to go through to, when we went for studies. That's how I started my study, whole schooling and everything. Then moved on to my college. Again, everything on scholarship. It's, it's like, that's where I started my career. One thing kept me motivated to go to places where, uh, different things and exploring opportunities, mentorship, right? That something is what shaped me from my school when I didn't have even, have food to eat for a day. Still, the mentorship and people who helped me is what I do today.
With that context, why I'm passionate about the generative AI and other areas where I, I connect the dots is usually we used to have mentorship where people will help you, push you, take you in the right direction where you want to be in the different challenges they put together, right? Over a period of time, the mentorship evolved. Hey, I started with a physical mentor. Hey, this is how they handhold you, right? Each and every step of the way what you do. Then when your career moves along, then that, that handholding becomes little off, like it becomes slowly, it becomes like more of like instructions. Hey, this is how you need to do, get it done, right? The more you grow, even it will be abstracted. The one piece what I miss is having the handholding mentorship, right? Even though you grow your career, in the long run, you need something to be handholding you to progress along the way as needed. I see one thing that's motivated me to be part of the generative AI and see what is going on is, it could be another mentor for you to shape your roles and responsibility, your career, how do you want to proceed, bounce your ideas and see where, where you want to go from there on the problem that you have, right? In the context of the work-related stuff.
Um, how, how you can, as a person, you can shape your career is something I'm vested, interested in people to be successful. In the long run, that's my passion to make people successful. The path that I've gone through, I just want to help people in a way to make them successful. That's my belief. I think making, pulling like 10 to 100, how many people you can pull out. The way when you grow is equally important. It's just not your growth yourself. Being part of that whole ecosystem, bring everybody around it. Everybody's career is equally important. I'm passionate about that and I'm happy to do that. And in my way, people come in. I want to make sure we grow together and and make them successful.
Kovid Batra: Yeah, I think it's, uh, it's because of your humble background and the hardships that you've seen in the early of your, uh, childhood and while growing up, uh, you, you share that passion and, uh, you want to help other folks to grow and evolve in their journeys. But, uh, the biggest problem, uh, like when, when I see, uh, with people today is they, they lack that empathy and they lack that motivation to help people. Why do you think it's there and how one can really overcome this? Because in my foundation, uh, in my fundamental beliefs, we, as humans are here to give back to the community, give back to this world, and that's the best feeling, uh, that I have also experienced in my life, uh, over the last few years. I am not sure how to instill that in people who are lacking that motivation to do so. In your experience, how do you, how do you see, how do you want to inspire people to inspire others?
Venkat Rangasamy: Yeah. No, it's, it's, it's like, um, It goes both ways, right? When you try to bring people and make them better is where you can grow yourself. And it becomes like, like last five to 10 years, the whole industry's become like really mechanics, like the expectation went so much, the breathing space. We do not have a breathing space. Hey, I want to chase my next, chase my next, chasing the next one. We leave the bottom food chain, like, hey, bring the food chain entirely with you until you see the taste of it in one product building. Bringing entire food chain to the ecosystem to bring them success is what makes your team at the end of the day. If we start seeing the value for that, people start spending more time on growing other people where they will make you successful. It's important. And that food chain, if it breaks, if it broke, or you, you kind of keep the food chain outside of your progression or growth, that's not actual growth because at one point of time, you get the roadblocks, right? At that point of time, your complete food chain is broken, right? Similar way, your career, the whole team, food chain is, it's completely broken. It's hard to bring them back, get the product launched at the time what you want to do. It's, it's, it's about building a trust, bring them up to speed, make them part of you, is what you have to do make yourself successful. Once you start seeing that in building a products, that will be the model. I think the people will follow that.
The part is you rightly pointed out empathy, right? Have some empathy, right? Career can, it can be, can, can, it can go its own progress, but don't, don't squeeze too much to make it like I want to be like, it won't happen like in a timely manner like every six months and a year. No, it takes its own course of action. Go with this and make it happen, right? There are ups and downs in careers. Don't make, don't think like every, every quarter and every year, my career should be successful. No, that's not how it works. Then, then there is no way you see failure in your career, right? That's not the way equilibrium is. If that happened, everybody becomes evil. That's not a point, right? Every, everything in the context of how do you bring, uplift people is equally important. And I think people should start focusing more on the empathy and other stuff than just bringing as an IC contributor. Then you want to be successful in your own role, be an IC contributor, then don't be a professional manager bringing your whole.. There's a chain under you who trust you and build their career on top of your growth, right? That's important. When you have that responsibility, be meaningful, how do you bring them and uplift them is equally important.
Kovid Batra: Cool. I think, uh, thanks a lot, uh, for this sweet and, uh, real intro about yourself. Uh, we got to, uh, know you a little more now. And with that, I, I'm sorry, but I was talking to you on LinkedIn, uh, from some time and I see that you have been passionately working with different startups and companies also, right, uh, in the space of AI. So, uh, With this note, I think let's move on to our main section, um, where you would, uh, be, where we would be interested in knowing, uh, what kind of, uh, ideas and thoughts, uh, are, uh, encompassing this AI landscape now, where engineering is changing on a day-in and day-out basis. So let's move on to our main section, uh, how AI is impacting or changing the engineering landscape. So, starting with your, uh, uh, advisories and your startups that you're working with, what are the latest things that are going on in the market you are associated with and how, how is technology getting impacted there?
Venkat Rangasamy: Here is, here is what the.. Git analogy, I just want to give some history background about how AI is getting mainstream and people are not quite realizing what's happening around us, right? The part is I think 2010, when we started presenting cloud computing to folks, um, in the banking industry, I used to work for a banking customer. People really laughed at it. Hey, my data will be with me. I don't think it will move any time closer to cloud or anything. It will be with, with and on from, it is not going to change, right? But, you know, over a period of time, cloud made it easy. And, and any startups that build an application don't need to set up any infrastructure or anything, because it gives an easy way to do it. Just put your card, your infrastructure is up and running in a couple of hours, right? That revolutionized a lot the way we deploy and manage our applications.
The second pivotal moment in our history is mobile apps, right? After that, you see the application dominance was with enterprise most of the time. Over a period of time, when mobile got introduced, the distribution channels became easier to reach out to end users, right? Then a lot of billion-dollar unicorns like Uber and Spotify, everything got built out. That's the second big revolution happening. After mobile, I would say there were foundations happening like big data and data analytics. There is some part of ML, it, over a period of time it happened. But revolutionizing the whole aspect of the software, like how cloud and mobile had an impact on the industry, I see AI become the next one. The reason is, um, as of now, the software are built in a way, it's traditional SDLC practice, practice set up a long time ago. What, what's happening around now is that practice is getting questioned and changed a bit in the context of how are we going to develop a software, make them cheaper, more productive and quality deliverables. We used to do it in the 90s. If you've worked during that time, right, COBOL and other things, we used to do something called extreme programming. Peer programming and extreme programming is you, you have an assistant, you sit together, write together a bunch of instructions, right? That's how you start coding and COBOL and other things to validate your procedures. The extreme programming went away. And we started doing code based, IDE based suggestions and other things for developers. But now what's happening is it's coming 360, and everything is how Generative AI is influencing the whole aspect of software industry is, is, is it's going to be impactful for each and every life cycle of the software industry.
And it's just at the initial stage, people are figuring out what to do. From my, my interaction and what I do in my free time with generative AI to change this SDLC process in a meaningful way, I see there will be a profound impact on what we do in a software as software developers. From gathering requirements until deploying, deploying that software into customers and post support into a lifecycle will have a meaningful impact, impact. What does that mean? It'll have cheaper product development, quality deliverables, and good customer service. What does it bring in over a period of time? It'll be a trade off, but that's where I think it's heading at this point of time. Some folks have started realizing, injecting their SDLC process into generative AI in some shape and form to make them better.
We can go in detail of like how each phases will look like, but that's, that's what I see from industry point of view, how folks are approaching generative AI. There is, there is, it's very conservative. I understand because that's how we started with cloud and other areas, but it's going to be mainstream, but it's going to be like, each and every aspect of it will be relooked and the chain management point of view in a couple of years, the way we see an SDLC will be quite different than what we have today. That's my, my, my belief and what I see in the industry. That's how it's getting there. Yep. Especially the software development itself. It's like eating your own dog food, right? It happened for a long time. This is the first time we do a software development, that whole development itself, it's going to be disturbed in a way. It'll be, it'll be, it'll be more, uh, profound impact on the whole product development. And it'll be cheaper. The product, go to market will be much cheaper. Like how mobile revolutionized, the next evolution will be on using, um, generative AI-like capability to make your product cheaper and go to market in a short term. That's, that's, that's going to happen eventually.
Kovid Batra: Right. I think, uh, this, this is bound to happen. Even I believe so. It is, it is already there. I mean, it's not like, uh, you're talking about real future, future. It's almost there. It's happening right now. But what do you think on the point where this technology, which is right now, uh, not hosted locally, right? Uh, we are talking about inventing, uh, LLMs locally into your servers, into your systems. How do you see that piece evolving? Because lately I have been seeing a lot of concerns from a lot of companies and leaders around the security aspect, around the IP aspect where you are putting all your code into a third-party server to generate new code, right? You can't stop developers from doing that because they've already started doing it. Earlier, the method was going to stack overflow, taking up some code from there, going to GitHub repositories or GitLab repositories, taking up some code. But now this is happening from a single point of source, which is cloud hosted and you have to share your code with third parties. That has started becoming a concern. So though the whole landscape is going to change, as you said, but I think there is a specific direction in which things are moving, right? Very soon people realized that there is an aspect of security and IP that comes along with using such tools in the system. So how do you see that piece progressing in the market right now? And what are the things, what are the products, what are the services that are coming up, impacting this landscape?
Venkat Rangasamy: It's a good question, actually. We, after a couple of years, right, what the realization even I came up with now, the services which are hosted on a cloud, like, uh, like, uh, public LLMs, right, which, you can use an LLM to generate some of these aspects. From a POC point of view, it looks great. You can see it, what is coming your way. But when it comes to the real product, making product in a production environment is not, um, well-defined because as I said, right, security, audit, compliance, code IP, right? And, and your compliance team, it's about who owned the IP part of it, right? It's those aspects as well as having the code, your IP goes to some trained public LLM. And it's, it's kind of a compromise where there is, there is, there is some concern around that area and people have started and enterprises have started looking upon something to make it within their workspace. End of the day, from a developer point of view, the experience what developer has, it has to be within that IDE itself, right? That's where it becomes successful. And keeping outside of that IDE is not fully baked-in or it's not fully baked-in part of the developer life cycle, which means the tool set, it has to be as if like it's running in local, right? If you ask me, like, is it doable? For sure. Yes. If you'd asked me a year back, I'd have said no. Um, running your own LLM within a laptop, like another IDE, like how do you run an IDE? It's going to be really challenging if you'd asked me a year back. But today, I was doing some recent experiment on this, um, similar challenges, right? Where corporates and other folks, then the, the, the, any, any big enterprises, right? Any security or any talk to a startup founders, the major, the major roadblock is I didn't want to share my IPR code outside of my workspace. Then bringing that experience into your workspace is equally important.
With that context, I was doing some research with one of the POC project with, uh, bringing your Code Llama. Code Llama is one of the LLMs, public LLM, uh, trained by Meta for different languages, right? It's just the end of the day, the smaller the LLMs, the better on these kinds of tasks, right? You don't need to have 700 billion, 70 billion, those, those parameters are, is, it's irrelevant at this point of coding because coding is all about a bunch of instructions which need to be trained, right? And on top of it, your custom coding and templates, just a coding example. Now, how to solve this problem, set up your own local LLM. Um, I've tested and benchmarked in both Mac and PC. Mac does phenomenally well; I don't see any difference. You should be able to set up your LLM. There is a product called Ollama. Ollama is, uh, where you can use, set up your LLM within your workspace as if it's running, like running in your laptop. There's nothing going out of your laptop. Set up that and go to your IDE, create a simple plugin. I created a VS Code plugin, a Visual Studio Code plugin, connected to your local LLM, because Ollama will give you like a REST API, just connect it. Now, now, within your IDE, whatever code is there, that is going to talk to your LLM, which means every developer can have their own LLM. And as long as you have a right trained data set for basic language, Java, Python, and other thing, it works phenomenally well, because it's already trained for it. If you want to have a custom coding and custom templating, you just need to train that aspect of it, of your coding standards.
Once you train, keep it in your local, just run like part of an IDE. It's a whole integrated experience, which runs within developer workspaces, is what? Scalable and long run. It, if anything, if it goes out of that, which we, we, we have seen that many times, right, past couple of years. Even though we say our LLMs are good enough to do larger tasks in the coding side, if it's, if you want to analyze the complete file, if you send it to a public LLM, with some services available, uh, through some coding and other testing services, what we have, the challenges, number of the size of the tokens what you can send back, right? There is a limit in the number of tokens, which means if you want to analyze the entire project repository what you have, it's not possible with the way it's, these are set up now in a public site, right? Which means you need to have your own LLM within the workspace, which can work and in, in, it's like a, it's part of your workspace, that's what I would say. Like, how do you run your database? Run it part of your workspace, just make it happen. That is possible. And that's going to be the future. I don't think going any public LLM or setting up is, is, is not a viable option, but having the pipeline set up, it's like a patching or giving a database to your developers, it runs in local. Have that set up where everybody can use it within the local workspace itself. It's going to be the future and the tools and tool sets around that is really happening. And it's, it's at the phase where in a year's time from here, you won't even see that's a big thing. It's just like part of your skill. Just set up and connect your editor, whatever source code editor you have, just connect it to LLM, just run with it. I see that's the future for the coding part of it. Other SDLCs have different nuance to it, but coding, I think it should be pretty straightforward in a year time frame. That's going to be the normal practice.
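The rough shape of the local setup Venkat describes is: pull a code model with Ollama, then have an editor plugin POST prompts to its local REST endpoint. A minimal sketch, assuming Ollama is running on its default port with the codellama model pulled (`ollama pull codellama`):

```python
import json
import urllib.request

# Ollama exposes a local REST API; nothing leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def complete_code(prompt: str, model: str = "codellama") -> str:
    """Send a prompt to the locally running model and return its output."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object instead of a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(complete_code("# Write a Python function that reverses a string\n"))
```

An editor plugin would call the same endpoint, passing the file or selection being edited as the prompt.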
Kovid Batra: So I think, uh, from what I understand of your opinion is that the, most of the market would be shifting towards their local LLM models, right? Yeah. Uh, that that's going to be the future, but I'm not sure if I'm having the right analogy here, but let's talk about, uh, something like GitHub, which is, uh, cloud-sourced and one, which is in-house, right? Uh, the teams, the companies always had that option of having it locally, right? But today, um, I'm not sure of the percentage, uh, how many teams are using a cloud-based GitHub versus a locally, uh, operated GitHub. But in that situation, they are hosting their code on a third party, right? The code is there.
Venkat Rangasamy: Yup.
Kovid Batra: The market didn't shape that way if we look at it from that perspective of code security and IP and everything. Uh, why do you think that this would happen for, uh, local LLMs? Like wouldn't the market be fragmented? Like large-scale organizations who have grown beyond a size have that mindset now, “Let's have something in-house.” and they would put it out for the local LLMs. Whereas the small companies who are establishing themselves and then, I mean, can it not be the similar path that happened for how you manage your code?
Venkat Rangasamy: I think it is very well possible. The only difference between GitHub and LLM is, um, the artifact, the, GitHub is more like an artifact management, right? When you have your IP, you're just keeping it's kind of first repository to keep everything safe, right? It just with the versioning, branching and other stuff.
Kovid Batra: Right.
Venkat Rangasamy: Um, the only problem there related to security is who's, um, is there any vulnerability within your code? Or it's that your repository is secure, right? That is kind of a compliance or everything needs to be there. As long as that's satisfied, we're good for that. But from an LLM lifecycle point of view, the, the IP, what we call so far in a software is a code, what you write as a code. Um, and the business logic associated to that code and the customizations happening around that is what your IP is all about. Now, as of now, those IPs are patent, which means, hey, this is what my patent is all about. This is my IP all about. Now you have started giving your IP data to a public LLM, it'll be challenging because end of the day, any data goes through, it can be trained on its own. Using the data set, what user is going through, any LLM can be trained using the dataset. If you ask me, like, every application is critical where you cannot share an IP, not really. Building simple web pages or having REST services is okay because those things, I don't think any IP is bound to have. Where you have the core business of running your own workflows or your own calculations and that is where it's going to be more tough to use any public LLM.
And another challenge is, what I see in a community is, the small startups, right, they won't do much customization on the frameworks. Like they take Java means Java, right, Node means Node, they take React, just plain vanilla, just run through end-to-end, right? Their, their goal is to get the product up to market quicker, right, in the initial stage of when we have 5-10 developers. But when it grows, the team grows, what happens is, we, the, every enterprise it's bound to happen, I, I've gone through a couple of cycles of that, you start putting together a framework around the whole standardization of coding, the, the scaffolding, the creating your test cases, the whole life cycle will have enforced your own standard on top of it, because to make it consistent across different developers, and because the team became 5 to 1000, 1000 to 10,000, it's hard to manage if you don't have standards around it, right? That's where you have challenges using public LLM because you will have challenges of having your own code with your own standards, which is not trained by LLM, even though it's a simple application. Even simple application will have a challenge at those points of time. But from a basic point of view, still you can use it. But again, you will have a challenge of how big a file you can analyze using public LLM. It's the one challenge you might have. But the answer to your question, yes, it will be hybrid. It won't be 100 percent saying everybody needs to have their own LLM trained and set up. Initial stages, it's totally fine to use it because that's how it's going to grow, because startup companies don't have much resources to put together to build their own frameworks. But once they get in a shape where they want to have the standardized practices, like how they build their own frameworks and other things. Similar way, one point of time, they'd want to bring it up on their own setup and run with it. For large enterprise, for sure, they are going to have their own developer productivity suite, like what they did with their frameworks and other platforms. But for a small startup, start with, they might use public, but long run, eventually over a point of, over a period of time, that might get changed.
And the benefit of getting hybrid is where you will, you'll make your product quick to market, right? Because end of the day, that's important for startups. It's not about getting everything the way they want to set up. It's important, but at the same time, you need to go to market, the amount of money what you have, where you want to prioritize your money. If I take it that way, still code generation and the whole LLM will play a crucial role on a, on the development. But how do you use and what third-party they can use? Of course, there will be some choices where I think in the future, what this, what I see is even these LLMs will be set up and trained for your own data in a, in a more of a hybrid cloud instead of a public cloud, which means your LLM, what you trained in a, in a hybrid cloud has visibility only to your code. It's not going, it's not a public LLM, it's more of a private LLM trained and deployed on, on a cloud can be used by your team. That'll, that'll, that'll be the hybrid approach in the long run. It's going to scale.
Kovid Batra: Got it. Great. Uh, with that, I think, uh, just to put out some actionable advice, uh, for all the engineering leaders out there who are going through this phase of the AI transformation, uh, anything as an actionable advice for those leaders from your end, like what should they focus on right now, how they should make that transition? And I'm talking about, uh, companies where these engineering leaders are working, which are, uh, Series B, Series A, Series C kind of a bracket. I know this is a huge bracket, but what kind of advice you would give to these companies? Because they're in the growing phase of the, of the whole cycle of a company, right? So what, what should they focus on right now at this stage?
Venkat Rangasamy: Here, here is where some start. I was talking to some couple of, uh, uh, ventures, uh, recently about similar topic, how the landscape is going to change as for software development, right? One thing came up in that call frequently was cheaper to develop a product, go to market faster, and the expectation around software development has been changing for quite a while, right? In the sense, the expectation around software development and the cost associated to that software development is where it's going to, it's going to be changing drastically. Same time, be clear about your strategy. It's not like we can change 50 percent of productivity overnight now. But at the same time, keep it realistic, right? Hey, this is what I want to make. Here is my charter to go through, from start from ideations to go to market. Here are the meaningful places where I can introduce something which can help the developers and other roles like PMs. Could be even post support, right? Have a meaningful strategy. Just don't go blank with the traditional way what you have, because your investors and advisors are going to start asking questions because they're going to see a similar pattern from others, right? Because that's how others have started looking into it. I would say proactively start going through that landscape and map your process to see where we can inject some of the meaningful, uh, area where it can have impacts, right?
And, and have, be practical about it. Don't think, don't give a commitment. Hey, I make 50 percent cheaper on my development and overnight you might burn because that's not reality, but just.. In my unit test cases and areas where I can build quality products within this money and I can guarantee that can be an industry benchmark. I can do that with introducing some of these practices like test cases, post customer support, writing code in some aspects, right? Um, that is what you need to set up, uh, when you started, uh, going for a venture fund. And have a relook of your SDLC process. That's important. And see how do you inject, and in the long term, that'll help you. And it'll be iterative, but at the end of the day, see, we've gone from waterfall to agile. Agile to many, many other paradigms within agile over a period of time. But, uh, the one thing what we're good at doing is in a software as an industry adapting to a new trend, right? This could be another trend. Keep an eye on it. Make it something where you can make it, make some meaningful impact on your products. I would, I would say, before your investor comes and talked about hey, can you do optimization here? I see another, my portfolio company does this, does this, does this. That's, it's, it's better to start yourself. Be collaborative and see if we can make something meaningful and learn across, share it in the community where other founders can leverage something from you. It will be great. That's my advice to any startup founders who can make a difference. Yep.
Kovid Batra: Perfect. Perfect. Thank you, Venkat. Thank you so much for this insightful, uh, uh, information about how to navigate the situation of changing landscape due to AI. So, uh, it was really interesting. Uh, we would love to have you one another time on this show. I am sure, uh, you have more than these insights to share with us, but I think in the interest of time, we'll have to close it for today, and, uh, we'll see you soon again.
Venkat Rangasamy: See you. Bye.
Typo hosted an exclusive live webinar titled 'The Hows and Whats of DORA', featuring Bryan Finster and Richard Pangborn. With 150+ attendees, we explored how DORA can be misused and learned practical tips for turning engineering metrics into dev team success.
Bryan Finster, Value Stream Architect at Defense Unicorns and co-author of 'How to Misuse and Abuse DORA Metrics’, and Richard Pangborn, Software Development Manager at Method and advocate for Typo, brought valuable perspectives to the table.
The discussion covered DORA metrics' implementation and challenges, highlighting the critical role of continuous delivery and value stream management. Bryan provided insights from his experience at Walmart and Defense Unicorns, explaining the pitfalls of misusing DORA metrics. Meanwhile, Richard shared his hands-on experience with implementation challenges, including data collection difficulties and the crucial need for accurate observability. They also reinforced the idea that DORA metrics should serve as health indicators rather than direct targets. Bryan and Richard offered parting advice on using observability effectively and ensuring that metrics lead to meaningful improvements rather than superficial compliance. They both emphasized the importance of a supportive culture that sees metrics as tools for improvement rather than instruments of pressure.
The event concluded with an interactive Q&A session, allowing attendees to ask questions and gain deeper insights.
P.S.: Our next live webinar is on September 25, featuring DORA expert Dave Farley. We hope to see you there!
Kovid Batra: Hi, everyone. Thanks for joining in for our DORA exclusive webinar, The Hows and Whats of DORA, powered by Typo. I'm Kovid, founding member at Typo and your host for today's webinar. With me, we have two special people. Please welcome the DORA expert for tonight, Bryan Finster, who is an exceptional Value Stream Architect at Defense Unicorns and the co-author of the ebook, 'How to Misuse and Abuse DORA Metrics', and one of our product mentors, and Typo advocates, Richard Pangborn, who is a Software Development Manager at Method. Thanks, Bryan. Thanks, Rich, for joining in.
Bryan Finster: Thanks for having me.
Richard Pangborn: Yeah, no problem.
Kovid Batra: Great. So, like, before we, uh, get started and discuss about how to implement DORA, how to misuse DORA, uh, Rich, you have some questions to ask, uh, we would love to know a little bit about you both. So if you could just spare a minute and tell us about yourself. So I think we can get started with you, Rich. Uh, and then we can come back to Bryan.
Richard Pangborn: Sure. Yeah, sounds good. Uh, my name is Richard Pangborn. I'm the Software Developer Manager here at Method. Uh, I've been a manager for about three years now. Um, but I do come from a Tech Lead role of five or more years. Um, I started here as a junior developer when we were just in the startup phase. Um, went through the series funding, the investments, the exponential growth. Today we're, you know, over a 100-person company with six software development teams. Um, and yeah, Typo is definitely something that we've been using to help us measure ourselves and succeed. Um, some interesting things about myself, I guess, is I was part of the company and team that succeeded when we did an Intuit hackathon. Um, it was pretty impactful to me. Um, we brought this giant check, uh, back with us from Cali all the way to Toronto, where we're located. Uh, we got to celebrate with, uh, all of the company, everyone who put in all the hard, hard work to, to help us succeed. Um, that's, that's sort of what pushed me into sort of a management path to sort of mentor and help those, um, that are junior or intermediate, uh, have that same sort of career path, uh, and set them up for success.
Kovid Batra: Perfect. Perfect. Thanks, Richard. And something apart from your professional life, anything that you want to share with the audience about yourself?
Richard Pangborn: Uh, myself, um, I'm a gamer, um, I do like to golf, I do like to, um, exercise, uh, something interesting also is, um, I met my, uh, wife here at the company who I still work with today.
Kovid Batra: Great. Thank you so much, Rich. Bryan, over to you.
Bryan Finster: Oh, yes. I'm Bryan Finster. I've been a software developer for, oh, well, since 1996. So I'll let you do the math. I'm mostly doing enterprise development. I worked for Walmart for 19 of those years, um, in logistics for most of that time and, uh, helped pilot continuous delivery at Walmart inside logistics. I've got scars to show for it. Um, and then later moved to platform at Walmart, where I was originally in charge of the delivery metrics we were gathering to help teams understand how to do continuous delivery so they can compare themselves to what good continuous delivery looked like. And then later was asked to start a dojo at Walmart to directly pair with teams to help them solve the problem of how do we do CD. And then about a little over three years ago, I was, I joined Defense Unicorns as employee number three of three, uh, and we're, we're now, um, over 150 people. We're focused on how do we help the Department of Defense deliver, um, you know, do continuous delivery and secure environments. So it's a fun path.
Kovid Batra: Great, great. Perfect. And the same question to you. Something that LinkedIn doesn't tell about you, you would like to share with the audience.
Bryan Finster: Um, computers aren't my hobby. Uh, I, you know, it's a lot better than roofing. My dad had a construction company, so I know what that's like. Um, but I, I very much enjoy photography, uh, collecting watches, ride motorcycles, and build plastic models. So that's where I spend my time.
Kovid Batra: Nice. Great to know that. All right. So now I think, uh, we are good to go and start with the main section of, of our webinar. So I think first, uh, let's, let's start with you, Bryan. Um, I think you have been a long-time advocate of value streams, continuous delivery, DORA metrics. You just briefly told us about how this journey started, but let's, let's deep dive a little bit more into this. Uh, tell us about how value stream management, continuous delivery, all this as a concept started appealing to you from the point of Walmart and then how it has evolved over time for you in your life.
Bryan Finster: Sure. Uh, no, at Walmart, um, continuous delivery was the answer to a problem. It wasn't, it was, we had a business problem, you know, our lead time for change in logistics was a year. We were delivering every quarter with massive explosions. Every time we piloted, I mean, it was really stressful. Um, any, anytime we did a big change of recorder, we had planned 24 by 7 support for at least a week and sometimes longer, um, and it was just a complete nightmare. And our SVP, instead of hiring in a bunch of consultants, cause we've been through a whole bunch of agile transformations over the years, asked the senior engineers in the area to figure out how we can deliver every two weeks. Now, if you can imagine these giant explosions happening every two weeks instead of every quarter, we didn't want that. And so we started digging in, how do we get that done? And my partner in crime bought a copy of Continuous Delivery. We started reading that book cover to cover, pulling out everything we could, uh, started building Jenkins pipelines with templates, so the teams didn't have to go and build their own pipeline. They can just extend the base template which was a pattern we took forward later. And, and, uh, we built a global platform. I started trying to figure out how do we actually do the workflow that enables continuous delivery. I mean, we weren't testing at all. Think how scary that is. Uh, other than, you know, handing it off to QA and say, "Hey, test this for us."
And so I had to really dig into how do we do continuous integration. And then that led into what's the communication problems that are stopping us from getting information so we can test before we commit code. Um, and then once you start doing that at the team level, what's preventing us from getting all the other information that we need outside the team? How do we get the connection? You know that, all the, all the roadblocks that are preventing us from doing continuous delivery, how do we fix those? Which kind of let me fall backwards in the value stream management because now you're looking at the broader value stream. It's beyond just what your team can do. Um, and so it's, uh, it's, it's been just a journey of solving that problem of how do we allow every team to independently deploy from any other team as frequently as they can.
Kovid Batra: Great. And, and how do, uh, DORA metrics and engineering metrics, while you are implementing these projects, taking up these initiatives, play a role in it?
Bryan Finster: Well, so, you know, all this effort that we went on predated Accelerate coming out, but I was going to DevOps Enterprise Summit and learning as much as I could starting in 2015 and talking to people about how do we measure things, cause I was actually sent to DevOps Enterprise Summit the first time to figure out how do we measure if we're doing it well, and then started pulling together, you know, some metrics to show that we're progressing on this path to CD, you know, how frequently integrating code, how many defects are being generated over time, you know, and how, how often can individuals on the team deploy as like, you know, deploys per day per developer was a metric that Jim proposed back in 2015 as just a health metric. How are we doing? And then later in the, and when we started the dojo in platform at Walmart, we were using a metrics-based approach to help teams. Continuous delivery was the method we were using to improve engineering excellence in the organization. We, you know, we weren't doing any Agile frameworks. It was just, why can't we deliver change daily? Um, and early on when we started building the platform, the first tool was the CI tool. Second tool was how do we measure. And we brought in CapitalOne's Hygieia, and then we gamified delivery metrics so we can show teams with a star rating how they were doing on integration frequency, build time, build success rate, deploy frequency, you know, and code complexity, that sort of thing, to show them, you know, this is what good looks like, and here's where you are. That's it. Now, I learned a lot from that, and there's some things I would still game today, and some things I would absolutely not gamify. Um, but that's where I, you know, I spent a long time running that as the game master about how do we, how do we run the game to get teams to want to, want to move and have shown where to go.
And then later, Accelerate came out, and the big thing that Accelerate did was it validated everything we thought was true. All the experiences we had, because the reason I'm so passionate about it is that first, first experience with CD was such a morale improvement on the team that I, nobody ever wanted to work any other way, and when things later changed, they were forced to not work that way by new leadership, everyone who could left. And that's just the reality of it. And, but Accelerate came out and said these exact things that we were seeing. And it wasn't just a one-off. It wasn't just this, you know, just localized to what we were seeing. It was everywhere.
Kovid Batra: Yeah, totally makes sense. I think, uh, it's been a burning topic now, and a lot of, uh, talks have been around it. In fact, like, these things are at team-level, system-level. In fact, uh, I'm, uh, the McKinsey article that came out, uh, talking about dev productivity also. So I, I have actually a question there. So, uh.
Bryan Finster: Oh, I shouldn't have read the article. Yeah, go ahead.
Kovid Batra: I mean, it's basically, it's basically talking about individual, uh, dev productivity, right? People say that it can be measured. So yeah. What's your take on that?
Bryan Finster: That's, that's really dumb. If you want to absolutely kill outcomes, uh, focus on HR metrics instead of outcome metrics, you know. And, and so, I want to touch a little bit on the DORA metrics I think. You know, I've, having worked to apply those metrics on top of the metrics we're already using, there's some of them that are useful, but you have to understand those came from surveys, and there's some of them that are, that if you try to measure them directly, you won't get the results you want, you won't get useful data from measuring directly. Um, you know, and they don't tell you things are going well, they only tell you things are going poorly and you can't use those as your, your, the thing that tells you whether, whether you're delivering value well, you know? It's just something that you, cause you to ask questions about what might be going wrong or not, but it's not, it's not something you use like a dashboard.
Kovid Batra: Makes sense. And I think, uh, the book that you have written, uh, 'How to Misuse and Abuse DORA Metrics', I think, let's, let's talk, talk about that a little bit. Like you have summarized a lot of things there, how DORA metrics should not be used, or Engineering metrics for that matter should not be used. So like, when do you think, how do you think teams should be using it? When do the teams actually feel the need of using these metrics and in which areas?
Bryan Finster: Well, I think observability in general is something people don't pay enough attention to. And not just, you know, not just production observability, but how are we working as a team. And, and really what we're trying to do is you have to think of it first from what are we trying to do with product development. Um, a big mistake people make is assuming that their idea is correct, and all we have to do is build something according to spec, make sure it tests according to spec and deliver it when we're done. When fundamentally, the idea is probably wrong. And so the question is, how big of a chunk of wrong idea do I want to deliver to the end user and which money do I want to spend doing that? So what we're trying to do is we're trying to become much more efficient about how we make change so we can make smaller change at lower costs so that we can be more effective about delivering value and deliver less wrong stuff. And so what you're really trying to do is you're trying to measure the, the, the way that we work, the way we test, to find areas where we can improve that workflow, so that we can reduce the cost and increase the velocity, which we can deliver change. So we can deliver smaller units of work more frequently, get faster feedback and adjust our idea, right? And so if you're not, if you're just looking at, "Oh, we just need to deliver faster." But you're not looking at why do we want to deliver faster is to get faster feedback on the idea. And also from my perspective, after 20 years of carrying a pager, fix production very, very quickly and safely, I think those are both key things.
And so what we're trying to do with the metrics is identify where those problems are. In the paper I wrote for IT Revolution on how to misuse and abuse DORA metrics, which was about twice as long as they asked me for, I went into the details of how we applied those metrics in real life at Walmart, when we were working with teams to help them improve, and also using them on ourselves. I think if a team really wants to focus on improving, the first thing they should measure is how well they're doing at continuous integration: how frequently are we integrating code, how long does it take us to finish whatever a unit of work is, and how many defects are we generating over time as a trend. And measure trends, and improve all those trends at the same time.
Kovid Batra: How do we measure this piece, when we are talking about measuring continuous integration?
Bryan Finster: So, as an average on the team, how frequently are we integrating code? And you really want to be at least daily, right? And that's integrated to the trunk, not to some develop branch. And then also, people generally work on a task or a story or whatever it is. How long does it take to go from when we start that work until it's delivered? What's that time frame? There are other times within that we can measure, and that's when we get into value stream mapping; we can talk about that later. But we want small units of work, because you get higher-quality information from smaller units of work, and you're more predictable on delivery of that unit of work, which takes a lot of pressure off; it eliminates story points. But then you also have to balance those with the quality of what we did, and you can't measure that quality until it's in production, because testing to spec doesn't mean it's good. 'Fit for purpose' means the user finds it good.
Kovid Batra: Right. Can you give us some examples of where you have seen implementing these metrics go completely south instead of working positively? How exactly were they abused and misused in a scenario?
Bryan Finster: Yeah, every single time somebody builds a dashboard without really understanding what the problems they're trying to solve are. I've seen lots of people over the years since Accelerate was published building dashboards to sell, but they don't understand the core problem they're trying to solve. But also, you have management who reads the book and says, "Oh look, here's the answer." I helped cause this problem, which is why I work so hard to fix it, by saying, "Hey, look at these four key metrics." These can tell you some things, but then they start using them as goals instead of health indicators that are contextual to individual teams. When you start saying, "Hey, all teams must have this level of delivery frequency," well, maybe, but everybody has their own delivery context. You're not going to deliver to an air-gapped environment as frequently as you are to, say, AWS, right? So you have to understand what it is you're actually trying to do. What decisions are you going to make with any metric? What questions are you trying to answer before you go and measure it? You have to define what the problem is before you try to measure whether you're successful at correcting the problem.
Kovid Batra: Right. Makes sense. There are challenges that I've seen in teams. Of course, Typo is getting implemented in various organizations here, and what we have commonly come across is that teams tend to start using it, but sometimes, when certain indicators are highlighted by those metrics, they're not sure what to do next.
Bryan Finster: Right.
Kovid Batra: So I'm sure you must...
Bryan Finster: Well, the reason why is because they didn't know why they were measuring it in the first place, right? Like I said, DORA metrics specifically tell you something, but they're very much trailing metrics, which is why I point to CI, because the CI workflow is really the engine that starts driving improvement. And then, once you get better at that, you say, "Well, why can't I deliver today's work today?" And you start finding other things in the value stream that are broken. But then you have to identify: okay, we see this issue here with code review, we see this issue here, we have this handoff to another team downstream of development before we can deploy. How do we improve those? And how can we measure that we are improving? So you have to ask the question first, and then come up with the metrics that you're using to evaluate success.
And so, when people say, "Well, I don't know what to do with this number," it's because they started with a metric and then tried to figure out what to do with it, because someone told them it was a good metric. No metric is a good metric unless you know what you're doing with it. I mean, if I put a tachometer on a car and you think that more is better, but you don't understand what the tachometer is telling you, then you'll just blow up your engine.
Kovid Batra: But don't you think there is a way to identify what to measure from these metrics themselves, even when you don't know what to measure going in? For example, we have certain benchmarks for different industries for each metric, right? Let's say I start looking at the lead time, the deployment frequency, mean time to restore, and various other metrics, and from there I try to identify where my engineering efficiency or productivity is getting impacted. Can it not be a top-down approach, where we find out what we need to actually measure and improve upon from those metrics themselves?
Bryan Finster: Only if you start with a question you're trying to answer. But I wouldn't compare. One of the other problems I have with the DORA metrics specifically, and I've talked to DORA at Google about this as well, is that some of the survey questions are nonspecific. "For the system you work on most of the time, how frequently do you deliver?" Well, are you talking about a thousand developers, a hundred developers, a team of eight? Your delivery frequency is going to be very much relative to the number of people working on it, plus other constraints outside of it. And yes, high performers deliver multiple times a day with lead times of less than an hour, except, what's the definition of lead time? Well, there are two inside Accelerate, and they're different depending on how you read it. But that doesn't mean you should just copy what it says. You should look at that and say, "Okay, what am I trying to accomplish? And how can I apply these ideas, not necessarily the metrics directly, but these ideas, to measure what I'm trying to measure and find out where my problems are?" You have to deep-dive into where your problems are, not just, "Hey, measure these things and here are your benchmarks."
Kovid Batra: Makes sense. Makes sense. Richard, I think we have been talking for a long time; if you have a question, let's hear from you as well. You have used Typo, been using it for a while now, and I'm sure that in this journey of implementing engineering metrics, DORA metrics in your team, you would have seen certain challenges. Richard, the stage is yours.
Richard Pangborn: Yeah, sure. So my research into using DORA metrics stemmed from building high-performing teams. We're always looking for continuous improvement, but we're really looking for ways to measure ourselves that make sense, that can't be totally gamed, that are like standards. What I liked about DORA was it had some counterbalancing metrics: throughput versus quality, time to repair versus time to build, speed versus stability. It's a nice counterbalancing effect. And high-performing teams care about stuff like continuous improvement. They want to do better than they did last quarter or last month, and they want help with decision-making: better data to drive out some of the guesswork about what area needs the most improvement, or what area is broken in our pipeline, maybe for continuous delivery or for quality. They want to make sure that they're making a difference, that they're moving the needle, that it ladders up. A lot of companies have different measurements at different levels: company level, department level, team level, individual level. With DORA, we were able to identify some that do ladder up, which is great.
There were some challenges with implementing DORA when we first started, though. I think one of the first ones was the complexity around data collection, accurately tracking and measuring the DORA metrics. Deployment frequency, lead time for changes, change failure rate, recovery time: they all come from different sources, like CI/CD pipelines, version control systems, and incident management tools. Integrating these data sources and ensuring they provide consistent results can be a little time-consuming, and a little difficult to understand. So that was definitely one part of it. We haven't rolled out all four yet. We're still in the process, just ensuring that what we are measuring is accurate.
Bryan Finster: Yeah, and I'm glad you touched on the accuracy thing. When we would go and work with teams and start collecting data, number one, we had data from the pipeline because it was embedded into the platform. But we also knew, when we worked with teams, that the Git data was accurate but the workflow data was going to be garbage unless the teams actually cared about using Jira correctly. So education step number one, while we were cleaning up the data in Jira, was educating them on why Jira should actually matter to them: it's not a time-tracking tool, it's a communication tool. We educated them so that they would take it seriously, so that the workflow data would be accurate, so that they could then use it to identify where the improvements could happen, because we were trying to teach them how to improve, not just to do what we said. I've built a few data collection tools since we started this, and collecting the data and showing where accuracy problems happen as part of the dashboard is something that needs to be understood, because otherwise people will just say, "Oh, the data's right." Especially with workflow data, one of the things we really did on the last one I built was show where we're out of bounds, very high or very low. Someone in management said, "Well, look, we're doing really good. I've got stuff closing here really fast." And I'm like, "You're telling me it took 30 seconds to do that piece of work?" Yeah, the accuracy issues. And MTTR is something that DORA has talked about ditching entirely, because it's a far too noisy metric if you're trying to collect it automatically.
Richard Pangborn: Yeah, we haven't started tracking MTTR yet. We're more concerned with throughput versus stability, which would have the biggest impact at the department level and the team level. I think that's made the difference so far. Also, we have a challenge with just doing a lot of stuff manually, so a lack of tooling and automation. There are a lot of manual measurements taking place. So like you said, error-prone data collection, inconsistent processes. Once we get to a more automated state, I feel like it will be a bit more successful.
Bryan Finster: Yeah. There's a dashboard I built for the Air Force. I'll send you a link later; it might be useful, I'm not sure. But also, the other thing is that change failure rate is something people misunderstand a lot, and I've combed through Accelerate multiple times. Walmart actually asked to reverse engineer the survey for the book, so I've gone back in depth. Change failure rate is any defect. It's not an incident. If you go and read what it says about change failure rate, it's any defect, which it should be, because the idea can be wrong too. If the user is reporting it's defective and you say, "Well, that's a new feature," no, the idea was defective. It's not fit for purpose, unless it's some edge case, but we should track that as well, because that's part of our quality process, and change failure rate is trying to track our quality process.
Richard Pangborn: Another problem we had is mean time to recovery. Because we track our bugs or defects differently, they have different priorities. P0s here have to be fixed in less than 24 hours. Priority 1 means, you know, five days; priority 2, you have two weeks. So in trying to come up with an algorithm to accurately identify time to fix, I guess you'd have three or four different ones instead of one.
Bryan Finster: I've tried to solve that problem too, and especially on distributed systems it becomes very difficult. Who's getting measured on MTTR, right? Because MTTR, by definition, starts when the user sees impact. So really, whoever has the user interface owns that metric, if you're trying to help a team improve their processes for recovery. It's just a really difficult metric to try to do anything with. I've tried to measure it directly. I've talked to Verizon, Capital One, and other people in the Dojo Consortium; they've tried, and nobody's been successful at measuring it. I think better metrics are out there for how fast we can resolve defects.
Richard Pangborn: One of the things we were concerned about at the beginning was resistance to measurement. Some people don't want to be measured.
Bryan Finster: That's because they've had management beating them over the head with metrics and using them against them. It's a massive fear thing, and it's a cultural thing. You have to have a generative culture to make these metrics effective. One of the things we would do when we started working with teams is, number one, explain to them: we're not trying to judge you. We're like your doctor. We're working with you; we're in the trenches with you. These are all of our metrics, not just yours, and here's how to use them to help you improve. And if a manager comes and starts trying to beat you up with them, well, just, you know, stop making the data valid.
Richard Pangborn: Yeah. Well, some developers do want to know: am I doing well? How do I measure myself? So this gives them a way to do it a little bit. But we told them: you set your own goals, improve yourself. Don't measure yourself against another developer on your team or someone else; you're looking for your own improvement.
Bryan Finster: Well, I think it's also really important that the smallest unit measured with delivery metrics is the team, not the person. If individuals are being measured, they're going to optimize for themselves instead of optimizing for team goals. And this is something I've seen frequently. On our dojo team, we could walk into your team and see that if there were filters by individual developer, your team was seriously broken. I've seen managers who measured team members by how many Jira issues they closed, which meant that code review was going to be delayed, mentoring was not going to happen, you'd have senior engineers focusing on easy tasks to get their numbers up instead of solving the hard problems, and design was not going to happen well because it wasn't a ticket. So you focus on team outcomes and measure team goals, and you handle individual performance differently, because everybody has different roles on the team. People know from an HR perspective that coaching by walking around is how you find out who's struggling. You go to the gemba; you find out who's struggling. You can't measure people directly that way, or it'll impact team goals and business goals.
Richard Pangborn: Yeah, I don't think we measure it as whether or not they're successful. It's just something for them to watch themselves.
Bryan Finster: As long as somebody else can see it, I mean...
Richard Pangborn: Yeah, it's just for them, isn't it? Not for anyone else.
Bryan Finster: Yeah.
Richard Pangborn: Cool. Yeah, that's about it for me, I think, at the moment.
Kovid Batra: Perfect, perfect. Rich, if you are done with your questions, we have already started seeing questions from the audience.
Bryan Finster: There's one other thing I'd like to mention real quick before we go there.
Kovid Batra: Sure.
Bryan Finster: I also gave a talk about how to misuse and abuse DORA metrics, and the fact that people think there are just these four key metrics to focus on. Read Accelerate. There's a lot more in that book on things that you should measure, including culture. It's important that you look at this as a holistic thing and not just focus on these metrics to show how well we're doing at CD. Cool, but the most valuable thing in Accelerate is Appendix A, not the four key metrics. So that's number one. Number two: value stream maps. They're manual, but they give you far deeper insights into what's going wrong than the four key metrics will. So learn how to do value stream maps, and learn how to use them to identify problems and fix those problems.
Kovid Batra: And how exactly? I'm expecting an example here. When you are dealing with value stream maps, are you collecting data from systems, collecting data from people through surveys? What exactly are you creating here?
Bryan Finster: No, I don't collect any data from the system initially. If I'm doing a value stream map, it'll be bringing a team together. We're not doing it at the organization level; we're doing it at the team level. So you bring a team together, and then you talk about the process, starting from delivery and working backwards to initiation, of how we deliver change. You get a consensus from the team about how long things take and how long things are waiting to start. And then you start seeing things like: oh, we do asynchronous code review, so I'm ready for code review to start, and four to eight hours later somebody picks it up and reviews it. Then I find out later that they're done and there are changes to be made, maybe the next day. Then I go make those changes, resubmit it, and four to eight hours later somebody re-reviews it. And you see things like: oh, what if we just sat down and discussed the change together and fixed it on the fly, and removed all that wait time? You know how much that would encourage smaller pieces of work? Then we can deliver more frequently and get faster feedback. You can see immediate improvements from things like that, just by doing a value stream map. Bringing the team together will give you much higher-quality data than trying to instrument it, because not all of those things have data being collected anywhere.
Kovid Batra: Makes sense. All right. We'll take a minute break and we'll start with the Q and A after that. So audience, uh, please shoot out all your questions that you have.
All right. Uh, we have the first question.
Bryan Finster: Yeah. So MTTR is a metric measuring customer impact: the moment from when a customer or user is impacted until they are no longer impacted. And that doesn't mean you fixed the defect. It means they are no longer being impacted. So roll back, roll forward, doesn't matter. That's what MTTR measures.
Kovid Batra: Perfect. Let's, let's move on to the next one.
Bryan Finster: Yeah. So, there are some things where I can set hard targets as ways to know that we're doing well. Integration frequency is one of those. If we're integrating once per day or better into the trunk, then we're doing a really good job of breaking down our work, and we're doing a good job of testing, as long as we keep our defects from blowing up. So you can set targets for that. You can also set targets as a team, not something you impose on a team, but something we as a team decide, like keeping story size to two days or less. Paul Hammant would say one day or less, but I think two days is a good time limit; if it takes us more than two days, we'll start running into other dysfunctions that cause quality impact and issues with delivery. So I've built dashboards where I have a line on those two graphs that says "this is what good looks like," so the teams can compare themselves to good. There are other things you don't want to gamify. You don't ever want to measure test coverage and say, "Hey, this is what good test coverage looks like," because test coverage doesn't measure quality. It just measures how much code is executed by code that says it's a test, whether it's a test or not. So you don't want to do that. That's a fail; I learned that the hard way. Delivery frequency, of course, is relative to the delivery problem. You may be delivering every day, every hour, every week, and that all could be good; it just depends. But you can make objective measurements on integration frequency and how long a unit of work takes to do.
Kovid Batra: Cool. Moving on to the next one. Any recommendations on where we can learn value stream mapping?
Bryan Finster: Yeah, Steve Pereira and Andrew Davis released 'Flow Engineering'. There are lots of books on value stream mapping from the past, but they're mostly focused on manufacturing. In the Flow Engineering book, Steve and Andrew talk about using value stream maps to identify problems and how to go about fixing those things. It was just released earlier this year.
Kovid Batra: Cool. Moving on to the next one. When would you start, and how do you convince upper management? They want KPIs now, and we are trying to get a VSM expert to come in and help. It's a hard sell.
Bryan Finster: Yeah, yeah. We want easy numbers. Okay. Well, I would start with having a conversation about what problems we're trying to solve. It's very much like the conversation you have when you're trying to convince management to do continuous delivery. They don't care about continuous delivery unless they're deep into the topic, but they do care about delivering business value better. So you talk about the business value. When you're talking about performance indicators, well, what performance are we trying to measure? And we really need to have that hard conversation: are we trying to measure how many lines of code are getting dumped onto the end user? How much value are we delivering? Are we trying to reduce the size and cost of delivering change so we can be more effective, or are we just trying to make sure people are busy? And if you have management that just wants to make sure people are busy, and they're not open to listening to why they're wrong, I'd quit.
Kovid Batra: All right. Can we move on to the next one then?
Bryan Finster: Where's the next one?
Kovid Batra: Yeah.
Bryan Finster: Oh, okay.
Kovid Batra: Is there any scientific evidence we can use to point out that working on small steps iteratively is better than working in larger batches? The goal is to avoid anecdotal evidence while discussing what can improve the development process.
Bryan Finster: You know, the hard thing about software as an industry is that people don't like sharing their information, the real information, because it can be stock-impacting. So we're not going to get a scientific study from a private company. But we have a few centuries' worth of knowledge telling us that if you build a whole bunch of the wrong thing, you're not going to sell it. You don't have to do a scientific study, because we have knowledge from manufacturing. You know, The Simpsons, the documentary, where they talk about the Homer car: they build the entirely wrong car and put the company out of business, because there was no feedback loop on that car at all until it was unveiled, right? That's really the problem. We're doing product development. And if you go off and say, "I have this brilliant idea," well, like Silicon Valley, where they spent so much money building something nobody wanted: they kept iterating and trying to find the right thing, but they kept building the complete thing, building the wrong thing, and just burning money. This is the problem we're trying to solve. You're trying to get faster feedback about when you're wrong, because you're inventing something new. Edison didn't build a million wrong light bulbs and then see if they worked.
Kovid Batra: All right. I think we can move on to the next one. Uh, what strategies do you recommend for setting realistic yet ambitious goals based on our current DORA metrics?
Bryan Finster: I would start with "why can't we deliver today's work today?" Well, I'd do that right after "why can't we integrate today's work today?" And then start finding out what those problems are and solving them. As far as ambitious goals, I think it's ambitious to be doing continuous delivery. Why can't we do continuous delivery? One of the reasons why we put minimumcd.org together several years ago was because it's a list of problems to solve, and you can't solve those problems in an organization that's not a great place to work. You just can't. And the goal is to make it a better place to work. So solve those problems. That's an ambitious goal. Do CD.
Kovid Batra: Richard, do you have a question?
Richard Pangborn: Uh, myself? No?
Kovid Batra: Yup.
Richard Pangborn: Nope.
Kovid Batra: Okay. One last one we'll take here. Uh, yeah.
Bryan Finster: Yeah, so common pitfalls, and I think we touched on some of these before. One is trying to instrument all of them, when you can really only instrument two of them, mostly. And change fail rate is not named well, given its description; it's really defect arrival rate. But even then, that depends on being able to collect data from defects, and whether or not that's being collected in a disciplined manner. Delivery frequency, people frequently measure at the organization level, but that doesn't really tell you anything. You need to get down to where the work is happening and measure it there. Then there's setting targets around delivery frequency instead of identifying how we improve, right? All it is, is: how do we get better? Using them as goals is a pitfall; they're absolutely not goals. They're health indicators. Like I talked about the tachometer before: I don't have a goal of running at 5,000 RPM. Number one, it depends on the engine, right? That would be really terrible for a sport bike and would blow up a diesel. So, using them naively, without understanding what they mean and what it is we're trying to do: I see it constantly. I and others who were early adopters of these metrics have been screaming about this for several years, and that's why I'm on here today. Please, please don't use them incorrectly, because it just hurts things.
Kovid Batra: Perfect. Bryan, I have one question. When teams are setting these benchmarks for the different metrics they have identified to measure, what should be the ideal strategy, the ideal way of setting those benchmarks? Because that's a question I get asked a lot.
Bryan Finster: Let's see, they were never benchmarks in Accelerate either. What they said was that they were seeing a correlation between companies with these outcomes and metrics that look like this. So those aren't industry benchmarks; that's a correlation they're making. And correlation does not equal causation. I will tell you that being really good at continuous delivery means that, if you have good ideas, you can deliver good ideas well. But being good at CD doesn't mean you're going to be good at meeting your business goals, because it depends: garbage in, garbage out. So you don't set them as benchmarks. They're not benchmarks; they're health indicators. Use them as health indicators. How do we make this better? Use them as things that cause you to ask questions. Why can't we deliver more than once a month?
Kovid Batra: So basically, if, for lack of a better term, we use 'benchmarks', those should be set on the basis of the cadence of our own team, how they are working, how they are designed to deliver. That's how we should be doing it. Is that what you mean?
Bryan Finster: No, I would absolutely use them as health indicators. Track trends. Are we trending up? Are we trending down? And then use that as the basis for starting an investigation into why we're trending up or down. Are we trending up because people think it's a goal? And is there some other metric going south that we're not aware of while we're focusing on this one thing getting better? I mean, Richard, you pointed it out exactly: it's a good, balanced set of metrics if they're measured correctly, if the data is collected correctly. Another problem I see is people focusing on one. I remember a director telling his area, "Hey, we're going to start using DORA metrics. But for change management purposes, we're only going to start by focusing on MTTR instead of anything else." They're a set; they go together, you know? You can't just peel one out.
Kovid Batra: Got it, got it. Yeah, that absolutely answers my question. All right, I think with that, we come to the end of this session. Before we part, any parting advice from you, Bryan, Rich?
Richard Pangborn: Just what we found successful in our own journey: every company is different. They all have their own processes, their own way of doing things, their own way of building things. So there's not exactly one right way to do it. It's usually trial and error for each company, I would say, depending on the tooling you want to choose and the way you want to break down tasks and deliver stories. For us, we chose one-day tasks in Jira. We didn't choose long-lived branches; we're not trunk-based explicitly, but our PRs last no longer than a day. This is what we find works well for us. We're delivering daily. We haven't yet gotten to delivering multiple times a day, but that's somewhere in the future that we'll get to; you have to balance that with business goals. You need buy-in from stakeholders before you can get development time to build out that structure. So it's a process, and everyone's different. But I think bringing in some of these KPIs, or sorry, benchmarks or health metrics, whatever you want to call them, has worked for us in that we have more observability into how we operate as engineers than we've ever had in the past. So it's been pretty beneficial for us.
Bryan Finster: Yeah. I'd say that the observability is critical. I've built a few dashboards for showing these things, and development teams who were focused on "we want to improve" always found value in them. But one caution I have is that if you are showing metrics on a dashboard, understand that the user experience of that dashboard will change people's behaviors. It's so important people understand that. Whenever I'm building a dashboard, I'm showing offsetting metrics together in a way that they can't be separated, because otherwise you'll just focus on one. I want you to focus on those offsetting metrics as a group and make them all better. But it only matters if people are looking at it, and if it's not a constant topic of conversation, it won't help at all. And I know Abi Noda and I have a difference of opinion on data collection. I'm big on real-time data, because I'm trying to improve quickly. He's big on surveys. But for me, I don't get feedback fast enough with a survey to course-correct if I'm trying to improve CI and CD. Surveys are good for other stuff, good for culture; that's the difference. But make sure you're not just going out and buying a tool that shows data in a way that causes bad behavior, or collects data in a way where it's not collected correctly. Really understand what you're doing before you go and implement a tool.
Kovid Batra: Cool. Thanks for that piece of advice, Bryan, Rich. With that, I think that's our time. Just a quick announcement about the next webinar session, which is with the pioneer of CD, the co-author of the book 'Continuous Delivery', Dave Farley. That will be on the 25th of September. So audience, stay tuned; I'll be sharing the link with you and sending you emails. Thank you so much. That's it for today.
Bryan Finster: Thanks so much.
Richard Pangborn: I appreciate it.
Kovid Batra: Thanks, Rich. Thanks, Bryan.
Code review is all about improving code quality. However, it can be a nightmare for developers when not done correctly. They may experience several code review challenges that slow down the entire development process, which reduces their morale and efficiency and results in developer burnout.
Hence, optimizing the code review process is crucial for both code reviewers and developers. In this blog post, we have shared a few tips on optimizing code reviews to boost developer productivity.
The code review process is an essential stage in the software development life cycle and has been a defining principle of agile methodologies. It ensures high-quality code and identifies potential issues or bugs before they are deployed into production.
Another notable benefit of code reviews is that they help maintain a continuous integration and delivery pipeline by ensuring code changes are aligned with project requirements. They also ensure that the product meets quality standards, contributing to the overall success of the sprint or iteration.
With a consistent code review process, the development team can limit the risks of unnoticed mistakes and prevent a significant amount of tech debt.
Code reviews also make sure that the code meets the set acceptance criteria and functional specifications, and that consistent coding styles are followed across the codebase.
Lastly, they provide an opportunity for developers to learn from each other and improve their coding skills, which fosters continuous growth and helps raise the overall code quality.
When code reviews lack clear guidelines or consistent criteria for evaluation, developers may feel uncertain about what is expected from them. Varied interpretations of code quality and style create ambiguity, and it takes a lot of time to fix issues based on different reviewers’ subjective opinions. This leads to frustration and decreased morale among developers.
When developers wait for feedback for an extended period, it prevents them from progressing. This slows down the entire software development lifecycle, resulting in missed deadlines and decreased morale, and negatively affects the deployment timeline, customer satisfaction, and overall business outcomes.
When reviewers communicate vague, unclear, or delayed feedback, they usually miss out on critical information. This forces context-switching on developers, who lose focus on their current tasks and need to refamiliarize themselves with the code once the review is completed, costing them productivity.
Frequent switching between writing and reviewing code requires a lot of mental effort, making it harder for developers to stay focused and productive. Poorly structured, conflicting, or unclear feedback also confuses developers about which items to prioritize first and makes it harder to understand the rationale behind suggested changes. This slows down progress, leading to decision fatigue and reducing the quality of work.
Knowledge gaps usually arise when reviewers lack the necessary domain knowledge or context about specific parts of the codebase. The resulting feedback can misguide developers or overlook important issues, and developers may need extra time to justify their decisions and educate reviewers.
Establish clear objectives, coding standards, and expectations for code reviews. Communicate with developers in advance about details such as how long reviews should take and who will review the code. This allows both reviewers and developers to focus their efforts on relevant issues and prevents their time from being wasted on insignificant matters.
Code review checklists include a predetermined set of questions and rules that the team will follow during the code review process. A few of the common quality checks include readability and maintainability, adherence to coding standards, adequate test coverage, correct error handling, security considerations, and up-to-date documentation.
Issues must be prioritized based on their severity and impact; not every issue in the code review process is equally important. Take up the issues that affect system performance, security, or major features first, and review them more thoroughly than smaller, less impactful changes. This helps in allocating time and resources effectively.
Always share specific, honest, and actionable feedback with developers. The feedback must point in the right direction and explain the ‘why’ behind it. This reduces follow-ups and gives developers the necessary context. It also helps the engineering team improve their skills and produce better code, which results in a higher-quality codebase.
Use automation tools such as style checkers, syntax checkers, and static code analysis tools to speed up the review process. These run routine checks for style, syntax errors, potential bugs, and performance issues, reducing the manual effort needed for such tasks. Automation lets reviewers focus on more complex issues and allocate time more effectively, as the sketch below illustrates.
Break the code into smaller, manageable chunks. This makes reviews less overwhelming and time-consuming. Code reviewers can concentrate on the details, check adherence to the style guide and coding standards, and identify potential bugs, which allows them to provide meaningful feedback more effectively and develop a deeper understanding of the code’s impact on the overall project.
Acknowledge and celebrate developers who consistently produce high-quality code. This enables developers to feel valued for their contributions, leading to increased engagement, job satisfaction, and a sense of ownership in the project’s success. They are also more likely to continue producing high-quality code and actively participate in the review process.
Encourage pair programming or pre-review sessions to enable real-time feedback, reduce review time, and improve code quality. This fosters collaboration, enhances knowledge sharing, and helps catch issues early, leading to smoother and more effective reviews. It also promotes team bonding, streamlines communication, and cultivates a culture of continuous learning and improvement.
Using an engineering analytics platform is a powerful way to optimize the code review process and improve developer productivity. It provides comprehensive insights into code quality, technical debt, and bug frequency, which allows teams to proactively identify bottlenecks and address issues in real time before they escalate. It also allows teams to monitor their practices continuously and make adjustments as needed.
Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.
If you prioritize the code review process, follow the above-mentioned tips. They will help maximize code quality, improve developer productivity, and streamline the development process.
Happy reviewing!
Software engineering teams are important assets for an organization. They build high-quality products, gather and analyze requirements, design system architecture and components, and write clean, efficient code. Measuring their success and identifying the potential challenges they may be facing is important. However, this isn’t always easy, and it takes a lot of time.
And that’s how Engineering Analytics Tools comes to the rescue. One of the popular tools is Jellyfish which is widely used by engineering leaders and CTOs across the globe.
While it is often a good choice for organizations, there’s a chance it doesn’t work for you. Worry not! We’ve curated the top 6 Jellyfish alternatives that you can consider when choosing an engineering analytics tool for your company.
Jellyfish is a popular engineering management platform that offers real-time visibility into the engineering organization and team progress. It translates tech data into information that the business side can understand and offers multiple perspectives on resource allocation. It also shows the status of every pull request and commit on the team. Jellyfish can be integrated with third-party tools such as Bitbucket, GitHub, GitLab, Jira, and other popular HR, calendar, and roadmap tools.
However, its UI can be tricky initially, and it has a steep learning curve due to the vast amount of data it provides, which can be overwhelming for new users.
Typo is a Jellyfish alternative that maximizes the business value of software delivery by offering features that improve SDLC visibility, developer insights, and workflow automation. It provides comprehensive insights into the deployment process through DORA and other key engineering metrics, and offers engineering benchmarks to compare the team’s results across industries. Its automated code review tool helps development teams identify code issues and auto-fix them before merging to master. It captures a 360-degree view of the developer experience and includes an effective sprint analysis that tracks and analyzes the team’s progress. Typo can be integrated with tools such as GitHub, GitLab, Jira, Linear, and Jenkins.
LinearB is a leading software engineering intelligence platform that provides insights for identifying bottlenecks and streamlining the software development workflow. It highlights automatable tasks to save time and enhance developer productivity. It also tracks DORA metrics and collects data from other tools to provide a holistic view of performance. Its project delivery tracker reflects project delivery status using planning accuracy and delivery reports. LinearB can be integrated with third-party applications such as Jira, Slack, and Shortcut.
Waydev is a software development analytics platform that provides actionable insights on metrics related to bug fixes, velocity, and more. It uses the agile method for tracking output during the development process and allows engineering leaders to see data from different perspectives. Unlike other platforms, it emphasizes market-based metrics and ROI. Its resource planning assistance feature helps avoid scope creep and offers an understanding of the cost and progress of deliverables and key initiatives. Waydev can be integrated with well-known tools such as GitLab, GitHub, CircleCI, and Azure DevOps.
Pluralsight Flow is a popular tool that tracks DORA metrics and helps benchmark DevOps practices. It aggregates Git data into comprehensive insights and offers a bird’s-eye view of what’s happening in development teams. Its sprint feature helps teams make better plans and dive into the work the team accomplished, including whether items were committed or unplanned. Its team-level ticket filters, Git tags, and other lightweight signals streamline pulling data from different sources. Pluralsight Flow can be integrated with tools such as Azure DevOps and GitLab.
Code Climate Velocity is a popular tool that synthesizes data from repos and offers visibility into code coverage, coding practices, and security risks. It tracks issues in real time to help teams move quickly through existing workflows and allows engineering leaders to compile data on dev velocity and code quality. Its Jira and Git support is compressed into real-time analytics. Its customizable dashboards and trends provide a view of everything from each individual’s day-to-day tasks to long-term progress. Code Climate Velocity also provides technical debt assessment and style checks in every pull request.
Swarmia is a well-known engineering effectiveness platform that provides quantitative insights into the software development pipeline. It offers visibility into three key areas: business outcomes, developer productivity, and developer experience. It allows engineering leaders to create flexible and audit-ready software cost capitalization reports. It also identifies and fixes common teamwork antipatterns such as siloing and too much work in progress. Swarmia can be integrated with popular tools such as Slack, Jira, GitLab, Azure DevOps, and more.
While we have shared the top software development analytics tools, don’t forget to conduct thorough research before selecting one for your engineering team. Check whether it aligns well with your requirements, facilitates team collaboration and continuous improvement, integrates seamlessly with your existing and upcoming tools, and so on.
All the best!
In an ever-changing tech landscape, organizations need to stay agile and deliver high-quality software rapidly. DevOps plays a crucial role in achieving these goals by bridging the gap between development and operations teams.
In this blog, we will delve into how to build a DevOps culture within your organization and explore the fundamental practices and strategies that can lead to more efficient, reliable, and customer-focused software development.
DevOps is a software development methodology that integrates development (Dev) and IT operations (Ops) to enhance software delivery’s speed, efficiency, and quality. The primary goal is to break down traditional silos between development and operations teams and foster a culture of collaboration and communication throughout the software development lifecycle. This creates a more efficient and agile workflow that allows organizations to respond quickly to changes and deliver value to customers faster.
DevOps culture refers to a collaborative and integrated approach between development and operations teams. It focuses on breaking down silos, fostering a shared sense of responsibility, and improving processes through automation and continuous feedback.
The CALMS framework is used to understand and implement DevOps principles effectively. It breaks down DevOps into five key components:
The culture pillar focuses on fostering a collaborative environment where shared responsibility and open communication are prioritized. It is crucial to break down silos between development and operations teams and allow them to work together more effectively.
Automation emphasizes minimizing manual intervention in processes. This includes automating testing, deployment, and infrastructure management to enhance efficiency and reliability.
The lean aspect aims to optimize workflows, manage work-in-progress (WIP), and eliminate non-value-adding activities. The goal is to streamline processes to accelerate software delivery and improve overall quality.
Measurement involves collecting data to assess the effectiveness of software delivery processes and practices. It enables teams to make informed, fact-based decisions, identify areas for improvement, and track progress.
The sharing component promotes open communication and knowledge transfer among teams. It facilitates cross-team collaboration, fosters a learning environment, and ensures that successful practices and insights are shared and adopted widely.
Don’t overwhelm teams completely with the DevOps haul. Begin small and implement DevOps practice gradually. You can start first with the team that is better aligned with DevOps principles and then move ahead with other teams in the organization. Build momentum with early wins and evolve practices as you gain experience.
Communication is key. When done correctly, it promotes collaboration and a smooth flow of information across the organization. This aligns the organization’s operations and lets engineering leaders make informed decisions.
Moreover, a combined working environment between the development and operations teams promotes a culture of shared responsibility and common objectives. Teams can openly communicate ideas and challenges, allowing them to have mutual conversations about resources, schedules, required features, and the execution of projects.
Apart from encouraging communication and a collaborative environment, create a clear plan that outlines where you want to go and how you will get there. Ensure that these goals are realistic and achievable. This will allow teams to see the bigger picture and understand the desired outcome, motivating them to move in the right direction.
Tools such as Slack, Kubernetes, Docker, and JFrog help build automation capabilities for DevOps teams. These tools automate repetitive and mundane tasks and allow teams to focus on value-adding work. This lets them fail fast, build fast, and deliver quickly, which enhances their efficiency and accelerates processes, positively impacting DevOps culture. Instead of assuming, ask your team directly which parts can be automated, and then provide the support needed to automate them.
The organization must fully understand and implement CI/CD to establish a DevOps culture and streamline the software delivery process. CI/CD automates the path from development to production and enables releasing software more frequently, with better quality and reduced risk. CI/CD tools also allow teams to catch bugs early in the development cycle, reduce manual work, and minimize downtime between releases. A simple sketch of what such a pipeline gate might look like follows below.
Continuous improvement is a key principle of DevOps culture. Engineering leaders must look for ways to encourage continuous learning and improvement such as by training and providing upskilling opportunities. Besides this, give them the freedom to experiment with new tools and techniques. Create a culture where they feel comfortable making mistakes and learning from them.
Teams must ensure that delivering products quickly doesn’t mean compromising security. In a DevOps culture, the organization must adopt a ‘security-first’ approach by integrating security practices into the DevOps pipeline. To maintain a strong security posture, regular security audits and compliance checks are essential, and security scans should be conducted at every stage of the development lifecycle to continuously monitor and assess security.
Regularly monitor and track system performance to detect issues early and ensure smooth operation. Use metrics and data to guide decisions, optimize processes, and continuously improve DevOps practices. Implement comprehensive dashboards and alerts so teams can quickly respond to performance issues and maintain optimal health; a small sketch of such an alerting check follows below.
In DevOps culture, the organization must emphasize the ever-evolving needs of the customers. Encourage teams to think from the customer’s perspective and keep their needs and satisfaction at the forefront of the software delivery processes. Regularly incorporate customer feedback into the development cycle to ensure the product aligns with user expectations.
Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech stacks, including Git versioning, issue trackers, and CI/CD tools.
It also offers comprehensive insights into the deployment process through DORA and other key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360-degree view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.
Building a DevOps culture is essential for organizations to improve their software delivery capabilities and maintain a competitive edge. Implementing key practices as mentioned above will pave the way for a successful DevOps transformation.
In a recent episode of ‘groCTO: Originals’, host Kovid Batra engages in a thoughtful discussion with James Charlesworth, Director of Engineering at Pendo, who brings over 15 years of experience in engineering and leadership roles. The episode centers on the theme “Product vs Engineering: Building Bridges, Not Walls.”
James begins by sharing how his lifelong passion for technology and software engineering, along with pivotal moments in his life, has shaped his career. Moving on to the main section, James addresses the age-old tussle between product and engineering teams, emphasizing that these teams should collaborate closely rather than operate in silos. He shares strategies for fostering empathy and effective collaboration, while highlighting the importance of understanding each team’s priorities and the impact of misalignment.
James also underscores the value of one-on-one meetings for having meaningful conversations, building strong relationships, and understanding team members on a deeper level. He also explores the significant role of Engineering Managers in enabling their teams to overcome these challenges, ensuring smooth team dynamics, and achieving successful product outcomes.
Kovid Batra: Hi everyone. This is Kovid, back with a new episode of the groCTO podcast, and today with us, we have a very special guest. He's the Head of Engineering at Pendo, and he has more than 15 years of engineering and leadership experience. Welcome to the show, James. Happy to have you here.
James Charlesworth: Hi, Kovid. Thank you so much for having me on. I'm actually not Head of Engineering at Pendo. I am a Director of Engineering and I run the Sheffield office here in the UK. So thank you for having me on.
Kovid Batra: Oh, all right. My bad then. Okay. So I think today we are going to have a very interesting discussion with James. We're going to talk about the age-old tussle between product and engineering, and James is an expert at handling those situations. So he's going to tell us what tactics and frameworks he's using here. But before we move on to that section, James, we would love to know a little bit more about you: maybe some of your hobbies, some of the life-defining events for you, who James is, basically. Please go on.
James Charlesworth: Thanks, Kovid. Yeah, this sounds super nerdy, but my hobby has always been technology and software engineering. I first started doing software engineering when I was probably about 11 or 12 years old. I had a Psion Series 3 that my parents bought me from a boot fair, and I just learned how to program on it. I'd sit there for ages typing on these tiny little keys. My hobby has been using software and coding to actually solve problems in the real world and build products, and that's kind of led me towards web development and SaaS products. That's ultimately what we do at Pendo: help people build better products. So yeah, that's a pretty boring answer to your question about my hobbies. I do also play music and things; I played guitar in a band for a long time. So that's the only non-techie hobby I guess I have.
Kovid Batra: No, that's great. Thank you for that sweet, small intro about yourself. Anything that, uh, that entices you from your childhood or from your maybe teenage that has defined you a lot? I mean, this, this is something that I usually ask a lot of people, but from there, we, we get to know a lot more about our guests. So if you don't mind, can you just share some of that, uh, experience from your past that defines you today?
James Charlesworth: Yeah, I think the biggest defining moment that a lot of people go through is when they first leave home for the first time and they don't have a direction because I didn't have much of a direction when I was like 18 years old and I left home. I did the wrong degree. I did a degree in control systems engineering and I ended up doing software. So it took me a while to get into web development because of doing the wrong degree. Um, and actually because I had no real direction, I was just sort of fluttering in the wind and just doing whatever. But through that process of just giving yourself a bit of freedom and going out and into the world and doing whatever you want, you really learn about yourself and you learn about other people, and I think that's when I went from being obsessed with computers to being obsessed with people and the way that people interact with each other and, um, you know, like I met people from all different walks of life, and you notice the similarities between anybody from all across the world, but you want to notice the differences, and you can notice how you can celebrate those differences.
And so I think, like, having that moment of moving away from home and, um, like, living by yourself and stuff like that, um, really opens your eyes up to, like, who you want to be and where you want your place in the world to be. So I'm sorry if that's a little bit, um, esoteric but it's, yeah, there was no like one defining moment really. I mean, it was just one of those and then like being in a band goes in with that because I always wanted to be a rock star. It never really worked out. But this idea of you can just get some friends, get together, get a van and just like go touring and play music, um, across the country, that's really cool, and that's really cool when you're in your sort of early twenties and you just want that freedom. Um, and that goes hand in hand with meeting people from, from all over the place. So yeah, like, you know, I'm obsessed with people. I'm obsessed with like human interactions and the way people, um, the way people like carry themselves and interact with each other and what they care about and how we can all align that. Yeah.
Kovid Batra: That's really interesting now. I mean, uh, the kind of role you are into where you are into leadership, you're leading teams, you're a Director of Engineering and this aspect of being aware of different aspects of different people and culture makes you more comfortable when you are, uh, leading people, you, you bring more empathy, you bring more understanding to their situations, and I'm sure that has come, uh, from there, and it, it is definitely growing as you are moving into your career.
So I think, James, this was, this was, uh, really, really interesting. Uh, let's move on to our main section. I think, uh, everyone is waiting to hear about that. Uh, so this has been an age-old tussle, as I said. Uh, the engineers have never liked the product managers. I'm not generalizing it, but just saying, so please, please don't, don't take it wrong. Uh, but yeah, usually the engineers are not very comfortable, uh, in those discussions and this has been an age-old tussle, we all know, know about it. When we talk about product and engineering teams, I personally never think these two as two separate teams. Like it, it never works like that. One thing that I learned as soon as I moved into this industry is it's 'product engineering'. It's not product and engineering separately. So it's not healthy for a team to have this kind of a tussle when you actually are moving towards the same goal and almost every engineering team that I see, there, there is some level of friction there and it's, it's natural to be there because the product managers usually might not be that well, uh, hands-on with the code, hands-on with the kind of daily practices our dev goes through. And then, planning according to that, keeping in mind that, okay, it should be, uh, pushy as well as comfortable for the developers to deliver something. So that's where the main friction starts and you come up with unreasonable requirements which the developers might not be able to relate to that, how it is going to impact the product.
So there are multiple reasons due to which this gap, this friction could be there. So today, I think with that note, I would, uh, hand over the mic to you and, uh, would want to know how you have had such experiences in your past, in your current role and how you end up resolving this so that people operate, like developers and product operates as one single team towards that one goal of making the business successful.
James Charlesworth: Yeah, absolutely. And what you said there about coming together to solve a problem together is really, really important. I think like the number one thing that underpins this is that everybody, product managers, engineers, designers, managers, needs to remember that you're all employed by the same organization and you've got the same shared goals and your, um, contribution to that is no more or less valuable than anybody else's. Like you mentioned that word 'empathy' in your introduction, like empathy is, we're gonna talk about empathy a lot today, right, because empathy is all about putting yourself in somebody else's shoes and seeing what their goals are. Um, and firstly, like trying to steer their goals to what you need, but also trying to like, um, emphasize what your own goals are, um, and align those to the others.
Like the way I always think about product managers is a lot of engineers, they feel like they're on feature factory teams. They feel like they're just being told what to build, and you get into this feature factory loop. Um, and it just seems like all the Product Manager wants to do is add features into the product, add features into the product, add features into the product. Um, and it can feel sometimes like product managers are paid like on commission, like they get a certain commission based on how many features they deliver at a point or something. That's not true. Product managers are paid a salary just like you do, and the way that your success is ultimately measured is the same way that your product manager's success is ultimately measured. And so, it's really, really important to realize that you do align around this goal, and you need to have a two-way conversation about it. Like you need to, you need to really, really explain like what your, you think the priorities should be, and you need to encourage your product manager to explain what they think their priorities should be for the team, and then you can align and find some middle ground that ultimately works best for the business.
But yeah, like in my experience anyway, it's just, you say age-old, like this has been quite a long thing. And before product managers, it was business people. It was maybe, you know, one of my first jobs in software engineering, um, we didn't really have product managers. We just had like the Director of Engineering, product research, design, whatever, um, who would just come up with the idea and just say, "This is what we're building." And that's very difficult, um, because you reported in to that person. So you basically had to just do exactly what they said, and that was super, super unhealthy because that builds up a huge amount of resentment. And I much, much prefer the model we have now, where we have product managers, where engineers don't report into the product managers, because that means that product managers have to lead the product without authority, um, and engineers have to lead the best engineering direction without authority. So you have this thing where you're encouraged to influence your peers on the same team as opposed to just doing the thing that your boss tells you to do, which is how it used to work when I started in this industry.
So it's got, it's got a lot better. Um, and the, yeah, as I've gone through my career and I've worked with some really, really good product managers and some really good product leaders, I've noticed that pattern between the, the product managers that are really, really good, that are really successful are the ones that have that empathy and we will talk about empathy a lot, right? Because it's super, super important. But product managers can have that empathy that can empathize with what engineers actually want to get out of a situation, um, and then align that with their own goals.
Kovid Batra: I have a question here. When you, when you say empathy, I think, uh, in your introduction also, you mentioned, like, when you meet different people from different cultures, different backgrounds, you tend to understand. Your, your brain develops that empathy naturally towards different situations and different people. But that has happened only because you have seen things differently, right? When we put this context into product and engineering, a product manager who has probably never coded in their life, right? Who does not have the context of, uh, how things work in, in the development workspace, right? In that situation, how a manager like that should be able to come to this piece that, okay, uh, if the developer is saying that this is going to take five days or this is difficult and this is complicated to implement and it won't add much value. So in those scenarios, a person who is not hands-on with coding or has never done that piece on his own or her own, uh, how do you think, uh, in a professional environment that empathy would come in? And of course, the Product Manager has his or her own, uh, deliverables, the metrics that need to be looked at. So how does that work in that situation?
James Charlesworth: Well, the same way it works the other way around as well. So the situation you've just described, right? You've got a Product Manager who is trying to get what they need to get done, but they don't understand the full details behind the implementation. You've also got an engineer that does understand the full details behind the implementation, but they don't understand the full business context behind what you're trying to build, right? Because that's the Product Manager's job. So the engineers, they might know exactly how the database is structured and how all of the backend architecture works, which is very complicated, but they don't understand, like they haven't been speaking to customers. They don't know the kinds of things that the Product Manager knows. So both sides need to essentially understand what the other person's priorities are, and that's what empathy is. Empathy is understanding what somebody wants, and not necessarily always giving them what they want, but the very least like comprehending and considering what somebody's goals are in your own way you deal with them, right?
So, um, back to your situation about software engineering. Okay, so if let's say, a Product Manager has come to you and said, "We just need to add this button to this page. It's super, super important. We want to, we want this button to send an email out." And the engineers come back and they say, "Oh, we actually don't have any backend email architecture that can send emails out. So we're going to actually have to build all of that." Um, that, you know, the Product Manager can go, "Well, what's so difficult about that? Just put a button there and send an email out." And the engineers are kind of caught in this rock where they're in a hard place where they're sort of saying, "Well, this is a lot of work. Like that's weeks and weeks and weeks of work, but how do I go to the business and say 'It's a lot of work'?" Um, and so, the solution is to really, really explain and break down to your Product Manager why this is more work than they realize and the Product Manager's job is to turn around to you and explain why we really, really need this. So you both need to align and you both need to understand. Product managers need to understand that some stuff is complicated and the only way they're going to understand it is complicated is if you just explain it to them, right? Like there's no secrets in software engineering. If you spend an hour sitting down and explaining to a Product Manager how your backend is architected and how your databases all fit together and you know, what email service we're using and what the limitations are of that email service, and then they'll understand it. It'll take you an hour to explain that. And equally, your Product Manager can sit down with you and they can show you the customer calls where people are really, really wanting this feature, right? And they can educate you on why we really need this feature, and then ultimately, you'll come, you'll come together where you understand why your Product Manager is pushing for this so hard, and your Product Manager will understand why you're pushing back against this so hard, and you'll find a solution that makes everybody happy in the end. But you do need to listen. You need to listen to the other person's, um, goals and what they want to get out of it. Um, and that's the empathy side. Eventually, it's like, it's on this respecting somebody's motives. It's respecting somebody's, um, like what they've been given, their mandate that they've been given for a certain situation.
Kovid Batra: Right. I think this is one scenario where I definitely see putting in effort in explaining to the other person what it really means, what it stands for. Obviously, anyone cannot be so inconsiderate about the other person when they're working together. So maybe in one or two, uh, situations like this, let's say, I'm a Product Manager, uh, where I have to explain things to the developer, and if I do that for, let's say, two or three such instances, from the fourth or fifth time, automatically that level of trust is built, and you are in a position to maybe, uh, not even explain a lot of times. You get that synchronization in place where things are working well with you.
And on that note, I really feel that people who are joining in large size teams, like, uh, a Product Manager joining in or a developer joining in, usually in large size teams, we have started to see this pattern of having engineering managers also, right? So in your perspective, uh, how much does an Engineering Manager play a role in, uh, bridging this gap and reducing this friction? Because, uh, few of my very close friends who have been from the engineering background have chosen to be in the management space now, and they, they usually tell us what things they are working upon right now. And I feel that that really helps the business as well as the developers to deliver the right things on time and you get a lot of context from both the sides. So what's your perspective on that, uh, of bringing those engineering managers into the system?
James Charlesworth: Yeah. I mean, I think the primary, number one responsibility of an Engineering Manager is to empower the engineers to do all those things that you've just been speaking about, right? So like your number one responsibility, engineering managers tend to have better people skills than engineers. That's why people go into management. Um, and your job is to teach the engineers on your team how to do that, all of those things you've just described. Sometimes you have to step in and sometimes there's a high pressure situation where you do actually have to say, you know, "I'm going to bridge the gap here between engineers and product." But your primary job as an Engineering Manager is to enable the engineers on your team to all have that kind of conversation with the product managers and with the business. Um, and so it's coaching. So it's support. So it's, um, career development, and also, you know, hiring the right people, that's quite a large job of an Engineering Manager. Performance management. Um, and so, a lot of that. Engineering managers should never be the one person that bridges the gap between product and engineering because then they're going to become a bottleneck, and also the engineers are never really going to learn to do that themselves.
Um, so yeah, that's always been, and I learned from some really good engineering managers or software development managers about this, about like, um, you know, empowering the people that you've put in charge. Engineering managers aren't in their position because they're necessarily better at everything than the engineers. They're usually better at one or two things. Um, but they're not as good at things like technical architecture. So as an Engineering Manager myself, I would never overrule an IC's opinion on a software architecture because that's not my job. My job is not that. I might know, I might have been doing software for years and years and years and I understand how systems are architected and how databases work and stuff. But I'm also employing people who are better at that than me, and that's the point. And so I would never overrule them, and I would never overrule how they collaborate with their Product Manager. But I would guide and coach them towards being able to do that. Um, and so, that's the case of speaking to engineers, speaking to product managers, trying to find out if they're talking past each other, trying to find out, you know, where, where the disconnect is, and then trying to solve that between the two groups of people. So I think the answer to your question is like the main role of an Engineering Manager is to become a force multiplier on their team, essentially, and to enable everybody to do that. Um, yeah, you can't have engineering managers who are just there to fix the gap. It's just not scalable. That's not a good thing.
Kovid Batra: No, I totally understand that. So when we are talking about bringing, uh, this level of comfort where people are working together, talking about your experience in your teams, there must have been such scenarios and you must have like put in some thoughts at the time of orientation, at the time of onboarding the team members that how they should be working to ensure that things work as a team, uh, can you just tell us about a few incidents and how you ended up solving them and how you put in the right, uh, onboarding for the team members to have that inculcated in the culture?
James Charlesworth: Yeah, the best onboarding is like that group effect of just observing something happening and then joining in with it. So like, by far the best way to onboard somebody is just to add them to a high-performing team. Like honestly, you just put someone on the team that's super, super collaborative and they will witness how people can collaborate. But I've had, you know, I've had positive and negative experiences in the past with joining a team, primarily back when I was a software engineer. I remember I once joined a team for the very first time and I just never really got on with my Product Manager. Like I don't think we clicked as people. We never really had any kind of conversations or anything. Um, and I was never really onboarded properly. So at the start I did have a slightly, um, rocky relationship with this Product Manager where I just couldn't understand, I couldn't understand what they were trying to do. They never explained anything. They just said, "This is what we are doing." So I just had to say, "Well, that's going to take longer than you think." And I tried this for ages. And I spoke to my manager. My manager sort of gave the advice that I've just been trying to give, um, your listeners here, which is like, you know, you need to go out and do it yourself. I'm not going to fix this for you. So what I did is I took this Product Manager and I just said, "Look, let's go for a coffee once every two weeks. We'll just have a one-to-one." This was before COVID. So you could actually go out and do these kinds of things. Um, and we would, every lunch, every Monday, every two weeks, we would just go down the road and have a coffee in a cafe, um, in London where I was working at the time. And I just got to know them as a person. And I really, really got to understand that like, this is a person that is under a lot of pressure in their job and they're very, very stressed out, and they take that sometimes out on their team. It's not necessarily their fault, but that is the way that they deal with things. Um, and if I can just be a little bit, have a little bit of sympathy for that sort of situation they're put in and I can work out what's going on behind that, and I would ask them about like what they want to do, what their career aspirations are, you know, what they want to be one day, where they want to work and this sort of stuff. And those kinds of small conversations, like I say, half an hour every two weeks, just a one-to-one, um, completely fixed the relationship and completely fixed everything else, because you just build up so much more trust with somebody if you're just having small one-on-one conversations with them.
And my kind of hack for engineers, if you like, is to have one-to-ones. People think one-to-ones is just for managers and it's for people to talk to their boss or it's for people to talk to people that report to them. Anyone can have a one-to-one with anyone in the business and set up a regular, no-agenda meeting every couple of weeks. That's just half an hour where you just chat with somebody, and that is a super, super valuable way of building up rapport with people that will pay off dividends in the future, like half an hour invested between a Senior Engineer and their Product Manager, half an hour every couple of weeks will pay off dividends in the future when you meet, uh, when you meet a conflict and you realize that, oh! Actually, I know this person really quite well now because we have coffee every two weeks, every Monday, right? And so you don't need to be somebody, you don't really need a massive reason to have a one-to-one with somebody. Just put it in the calendar, chat to them and say, "Look, I, you know, I really like us to work more effectively together. Um, let's have half an hour every Monday. I'll buy you a croissant or something, whatever it is you want to do." Um, and then just ask them about their life, asking about their career goals, ask them about like what kind of challenges they're facing. And yeah, before you know it, you'll be helping each other out. You'll be desperate to help each other out because that is human nature. We like helping each other. So yeah.
Kovid Batra: I think I would like to, just because I've been working with a lot of engineers and engineering managers these days, what I have really felt is that throughout their initial years of career, they have been talking with a computer, right? It's very difficult to find out what to talk about. I think the advice that you have given is very simple and I think very impactful. I have experienced that myself, but I have, I would say, I have been an exception to the engineering and development space because I have been a little extrovert and have been talking about things lately, at least in my comfort zone. Uh, so I was able to find out that space with people who are themselves very introvert, uh, but still I could break through and I could break that ice.
It's very difficult for the other side of the people who are developers throughout their career to come back and like start these conversations on their own. So what are the things that you really think we should be talking about? Like, even if a Product Manager is going to the engineer or the engineer is wanting to break that ice and like build in that empathy and understand that person, what kind of things you look forward to, uh, in such kind of conversations, let's say?
James Charlesworth: That's an interesting one. Like a lot of, there's a lot of people that are introverted in this industry. A lot of people use introversion as like a crutch or as an excuse, and they shouldn't, because, you know, being introverted doesn't mean you can't connect with other people. It just means you connect with other people differently. It means that, you know, you look inwards for experiences and things. Um, and so the practical advice would be to try and recognize how another person functions. You might find that your Product Manager is actually more introverted than they let on. A lot of people just put on a show. A lot of people are super, super introverted, but they put on a show in day-to-day, especially in work life, and they'll, you know, pretend to be all extroverted and they'll pretend to be all confident, but they're not. And I've known many people like this, that if you have an actual conversation with them, they'll admit to you that they're actually super introverted and they get super, super nervous whenever they have to talk to people, but they do it and they force themselves to do it because they've learned throughout their careers.
So, um, yeah, I'm not suggesting people should push themselves too far out of their comfort zone, but in terms of practical advice, speaking in statements is quite a big thing, though a lot of people don't realize it, but, um, I can't remember where I read this, this is from some book or something on like how to make friends and influence people, I don't think it's that exact book, but essentially, if all you're doing is asking somebody questions, then you're putting all of the onus in the conversation on them, and that's not actually that comfortable in conversation. So if you're talking to a Product Manager and you're just asking them, like, "Why are we doing this?" "Why does the customer want that?" "What's the point of this feature?" That's actually not, that's not a nice thing to do because you're making them lead everything. What you really want to do is just talk in statements. You just want to say, "Hey, like I'm just building this. It's really cool. We've got, we've got connectivity going on between this WebSocket and this backend database. There you go. That's the thing. Um, we've just realized that this is a little bit late. And so, it's going to be a few days extra, but we found an area over here where we can cut some corners." Like just say things, say things that are going on, tell people stuff that's going on in your life. Um, and then there's no pressure on them to intervene. And this is like standard small talk, right? If you just tell people things, then they can decide to walk away or they can decide to engage you in conversation, but you're not putting too much pressure on them. You're not asking them a barrage of questions that they feel like they have to answer. So that's standard advice for introverts. I think if you are introverted and you feel like you need to talk to somebody, share details about what you're working on, share details about, um, you know, your current goals and where you are, and see what happens, they might do the same. You might learn something.
Kovid Batra: Yeah, sure. I think, I think everyone actually wants to do that, but it's just that there has to be an initiation from one side. And if it's more relevant and feels like, uh, coming very naturally to build that bond, I think this would really, really work out.
Cool. So I think this was, this was really, uh, again, I would say a very simple, but a very impactful advice that one-on-ones are really things that work, right? At the end of the day, we are humans and at least for developers, I think because they are day-in and day-out just interacting with their computers, I think this is a good escape for them to actually probably go back and talk to people and have those real conversations and build that bond. So yeah, I think that's really amazing. Anything else?
James Charlesworth: You can talk about computers.
Kovid Batra: Sorry?
James Charlesworth: You can talk about computers. Like, if you're really into software, like, this is why gaming, I'm not really a gamer, but I know a lot of people connect over gaming, a lot of people bond over gaming because that's something that you get into as like an introspective thing, and then you find out that somebody else is also into it, you're into the same game, and you can connect over that, and it turns something that was a really insular, inward looking experience into like a shared group sociable thing, right? So like, yeah, I'm in many ways, I'm quite jealous of people that are into gaming because it does have that social aspect. So yeah, talk about computers. Like, just because your life is staring at a screen and talking to a computer, you can still share that with other people and even product managers as well. Product managers in tech companies, they're super, super technical. They might not be able to code, but they definitely know how computers work and they definitely know how systems work. They're designing these things. So talk to them about it. Talk to them about, um, you know, the latest Microsoft Windows version that can spy on your history of AI. Like, talk to your Product Manager about that. They will have opinions on this sort of stuff. And so, yeah, sorry to cut you off, but like, honestly, just because you're into computers and you're into coding, like that, that can be a way to connect with people. It doesn't have to be a way to stay isolated.
Kovid Batra: Definitely, definitely. Uh, perfect. I think more or less the idea is to have that empathy for people, do more one-on-ones and build that trust in the team, and I think that would really solve this problem. And I think one thing that we should have actually talked in the beginning itself, uh, the impact of this problem, actually. Like for our audience, I won't let that question go away like that. Uh, there must have been experiences where you would have dealt with consequences of having this friction in the team, right? So maybe the engineering managers, the product managers, the engineering teams out there, uh, who are looking at delivering successfully, I think they should be, uh, aware of what are the consequences of not putting some focus and effort in solving this problem before it becomes something big. So any of your, uh, experiences that could highlight the impact of this problem in the teams, I think that would be appreciated.
James Charlesworth: I mean, like pretty much all systems out there that are absolutely laden with tech debt, and every product that is late, is the result of this. It's the result of the breakdown between what the business needs or what the product owner needs and what the engineers are building. And I've worked on many, many systems that have been massively over-engineered because the engineers were given too much free rein. And so it was, you know, you know, not much tech debt, but really, really, really overcomplicated and took forever to deliver any value to any customers. I've also worked on systems that were massively under-engineered and they fall down and they break and there's bugs all the time. And that's because the engineers weren't given enough rein to do things properly. So you need to find that middle ground. And yeah, like honestly, I've worked, I've seen so many situations where the breakdown in conversation between product managers and engineers has just led to runaway bugs, runaway tech debt, runaway, like, people leaving their jobs. I've seen that happen before as well. Yeah, and that's, that's really bad. These are all bad outcomes for the entire business, right? You don't want engineers quitting because they don't get on with their Product Manager, and that's something I've seen before. Um, you don't want a huge amount of tech debt piling up because engineers are too scared to put their hands up and say, "Look, we're accruing tech debt here. This approach isn't working." So they're too scared to do that. So they just do the feature factory thing and they ultimately build up loads of tech debt and then, a huge bug is released. You don't want that. Um, but you also don't want engineers to be just left to it and put in a room for a month and come back with some massively elaborate overengineered system that doesn't actually solve the problem for the customer. So all of these things are bad situations. The good situation is the one that is an iterative approach with feedback and collaboration between engineers and product. It's the only real way of doing it.
Kovid Batra: Definitely. Great, James. Thanks a lot, uh, for giving us so much practical and insightful advice on, uh, how to deal with this situation, and I'm sure this would be really helpful for a lot of engineering teams, product engineering teams out there to be more successful. And we would love to have you back on the show once again and talking about such insightful topics and challenges of the engineering ecosystem. But for today, I think it's time. Uh, thank you so much once again. It was great to have you on the show.
James Charlesworth: No worries. Thanks very much, Kovid.
As platform engineering continues to evolve, it brings both promising opportunities and potential challenges.
As we look to the future, what changes lie ahead for Platform Engineering? In this blog, we will explore the future landscape of platform engineering and strategize how organizations can stay at the forefront of innovation.
Platform Engineering is an emerging technology approach that equips software developers with all the required resources. It acts as a bridge between development and infrastructure, helping simplify complex tasks and enhance development velocity. The primary goal is to improve developer experience, operational efficiency, and the overall speed of software delivery.
The rise in Platform Engineering will enhance developer experience by creating standardized toolchains and workflows. Going forward, platform engineering teams will work closely with developers to understand what they need to be productive. Moreover, platform tools will be integrated and closely monitored through DevEx insights and reports. This will enable developers to work efficiently and focus on core tasks by automating repetitive work, further improving their productivity and satisfaction.
Platform engineering is closely associated with the development of internal development platforms (IDPs). As organizations strive for efficiency, the creation and adoption of internal development platforms will rise. This will streamline operations, provide a standardized way of deploying and managing applications, and reduce cognitive load. Hence, it will reduce time to market for new features and products, allowing developers to focus on delivering high-quality products more efficiently rather than managing infrastructure.
Modern software development demands rapid iteration. Ephemeral environments, temporary, isolated environments created on demand, will be an effective way to test new features and bug fixes before they are merged into the main codebase. These environments will prioritize speed, flexibility, and cost efficiency. Since they are short-lived and spun up only when needed, they will align perfectly with modern development practices.
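As a rough illustration, the sketch below spins up a throwaway Kubernetes namespace for a preview deployment and tears it down afterwards. The namespace naming scheme, manifest file, and smoke-test step are illustrative assumptions, not a prescription for any particular platform.

```python
# A minimal sketch of an ephemeral test environment, assuming a Kubernetes
# cluster reachable via kubectl and a manifest for the feature under test.
import subprocess
import uuid

def run(cmd: list[str]) -> None:
    """Run a shell command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

def with_ephemeral_env(manifest: str = "preview-app.yaml") -> None:
    # Unique, short-lived namespace per feature branch or pull request
    # (the "preview-" prefix is a hypothetical convention).
    namespace = f"preview-{uuid.uuid4().hex[:8]}"
    run(["kubectl", "create", "namespace", namespace])
    try:
        # Deploy the branch build into its isolated namespace.
        run(["kubectl", "apply", "-n", namespace, "-f", manifest])
        # ... run smoke tests against the preview deployment here ...
    finally:
        # Tear down on completion so the environment stays short-lived.
        run(["kubectl", "delete", "namespace", namespace])

if __name__ == "__main__":
    with_ephemeral_env()
```

Creating and destroying the whole namespace per run is what keeps these environments cheap and reproducible.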
AI-driven tools are becoming more prevalent. Generative AI tools such as GitHub Copilot and Google Gemini will enhance capabilities such as infrastructure as code, governance as code, and security as code. This will not only automate manual tasks but also support smoother operations and improved documentation processes, driving innovation and automating dev workflows.
Platform engineering is a natural extension of DevOps. In the future, platform engineers will work alongside DevOps rather than replace it, addressing its complexity and scalability challenges. This will provide a standardized and automated approach to software development and deployment, leading to faster project initialization, reduced lead time, and increased productivity.
Software organizations are now shifting from a project-centric model toward a product-centric funding model. When platforms are fully fledged products, they serve internal customers and require a thoughtful, user-centric approach in their ongoing development. This aligns well with a product lifecycle that is ongoing and continuous, which enhances innovation and reduces operational friction. It will also decentralize decision-making, allowing platform engineering leaders to make and adjust funding decisions for their teams.
Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as Git-based version control, issue trackers, and CI/CD tools.
It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.
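For a feel of what two of these metrics measure, here is a minimal sketch that derives change failure rate and deployment frequency from a simple deployment log. The record format is a hypothetical illustration, not Typo's actual data model.

```python
# A minimal sketch of computing change failure rate and deployment
# frequency from a list of deployment records (illustrative schema).
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    day: date
    caused_failure: bool  # e.g., triggered a rollback or hotfix

def change_failure_rate(deploys: list[Deployment]) -> float:
    """Share of deployments that led to a failure in production."""
    if not deploys:
        return 0.0
    return sum(d.caused_failure for d in deploys) / len(deploys)

def deployment_frequency(deploys: list[Deployment], days: int) -> float:
    """Average deployments per day over an observation window."""
    return len(deploys) / days if days else 0.0

deploys = [
    Deployment(date(2024, 6, 3), False),
    Deployment(date(2024, 6, 5), True),
    Deployment(date(2024, 6, 7), False),
]
print(f"CFR: {change_failure_rate(deploys):.0%}")              # CFR: 33%
print(f"Frequency: {deployment_frequency(deploys, 7):.2f}/day")
```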
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, i.e. it captures qualitative insights and provides an in-depth view of the real issues.
The future of platform engineering is both exciting and dynamic. As this field continues to evolve, staying ahead of these developments is crucial for organizations aiming to maintain a competitive edge. By embracing these predictions and proactively adapting to changes, platform engineering teams can drive innovation, improve efficiency, and deliver high-quality products that meet the demands of an ever-changing tech landscape.
Platform engineering is a relatively new and evolving field in the tech industry. However, like any evolving field, it comes with its share of challenges that, if overlooked, can limit its effectiveness.
In this blog post, we dive deep into these common missteps and provide actionable insights to overcome them, so that your platform engineering efforts are both successful and sustainable.
Platform Engineering refers to providing foundational tools and services to the development team that allow them to quickly and safely deliver their applications. It aims to increase developer productivity by providing a unified technical platform that streamlines the process, which helps reduce errors and enhance reliability.
The core component of Platform Engineering is the IDP, i.e. a centralized collection of tools, services, and automated workflows that enables developers to self-serve the resources needed for building, testing, and deploying applications. It empowers developers to deliver faster by reducing reliance on other teams, automating repetitive tasks, reducing the risk of errors, and ensuring every application adheres to organizational standards.
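To make the self-serve idea concrete, here is a minimal sketch in which developers request resources from an approved catalog instead of filing tickets. The catalog entries, naming rule, and owner label are illustrative assumptions.

```python
# A minimal sketch of catalog-driven self-serve provisioning.
# The catalog contents and naming convention are hypothetical.
SERVICE_CATALOG = {
    "postgres": {"tier": "small", "backups": True},
    "redis": {"tier": "small", "backups": False},
}

def provision(resource: str, team: str) -> dict:
    """Validate a request against the catalog, apply organizational
    defaults, and return the resulting configuration."""
    if resource not in SERVICE_CATALOG:
        raise ValueError(f"{resource!r} is not in the approved catalog")
    config = dict(SERVICE_CATALOG[resource])
    # Every resource gets a standard, auditable name and owner label,
    # so provisioning stays consistent without a human in the loop.
    config["name"] = f"{team}-{resource}"
    config["owner"] = team
    return config

print(provision("postgres", "payments"))
```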
The platform team consists of platform engineers who are responsible for building, maintaining, and configuring the IDP. The platform team standardizes workflows, automates repetitive tasks, and ensures that developers have access to the necessary tools and resources. The aim is to create a seamless experience for developers, allowing them to focus on building applications rather than managing infrastructure.
Platform engineering emphasizes standardizing processes and automating infrastructure management. This includes creating paved roads for common development tasks such as deployment scripts, testing, and scaling to simplify workflows and reduce friction for developers. Curating a catalog of resources, following predefined templates, and establishing best practices ensure that every deployment follows the same standards, enhancing consistency across development efforts while allowing flexibility for individual preferences.
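As a small illustration of a paved road, the sketch below renders a deployment manifest from a single vetted template using only the Python standard library; the manifest fields and defaults are hypothetical.

```python
# A minimal sketch of a "paved road" deployment template: every service
# deploys from the same vetted template, so naming, labels, and replica
# defaults stay consistent across teams.
from string import Template

DEPLOYMENT_TEMPLATE = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $service
  labels:
    team: $team
spec:
  replicas: $replicas
""")

def render_manifest(service: str, team: str, replicas: int = 2) -> str:
    """Fill the shared template with service-specific values."""
    return DEPLOYMENT_TEMPLATE.substitute(
        service=service, team=team, replicas=replicas
    )

print(render_manifest("checkout-api", "payments"))
```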
Platform engineering is an iterative process, requiring ongoing assessment and enhancement based on developer feedback and changing business needs. This results in continuous improvement that ensures the platform evolves to meet the demands of its users and incorporates new technologies and practices as they emerge.
Security is a key component of platform engineering. Integrating security best practices into the platform, such as automated vulnerability scanning, encryption, and compliance monitoring, is the best way to protect against vulnerabilities and ensure compliance with relevant regulations. This proactive approach, integrated into all stages of the platform, helps mitigate risks associated with software delivery and fosters a secure development environment.
One of the common mistakes platform engineers make is focusing solely on dashboards without addressing the underlying issues that need solving. While dashboards provide a good overview, they can lead to a superficial understanding of problems instead of encouraging genuine process improvements.
To avoid this, teams must combine dashboards with automated alerts, tracing, and log analysis to get actionable insights and a more comprehensive observability strategy for faster incident detection and resolution.
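As an illustration of going beyond the dashboard, the sketch below turns the same error-rate data a chart would show into an automated alert; the metric feed, threshold, and logging destination are illustrative assumptions.

```python
# A minimal sketch of an automated threshold alert on top of the same
# data a dashboard would chart.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("alerts")

ERROR_RATE_THRESHOLD = 0.05  # alert when more than 5% of requests fail

def check_error_rate(errors: int, requests: int) -> None:
    """Fire an alert instead of waiting for someone to read a chart."""
    rate = errors / requests if requests else 0.0
    if rate > ERROR_RATE_THRESHOLD:
        # In practice this would page on-call via an incident tool.
        log.warning("error rate %.1f%% exceeds %.0f%% threshold",
                    rate * 100, ERROR_RATE_THRESHOLD * 100)
    else:
        log.info("error rate %.1f%% within bounds", rate * 100)

check_error_rate(errors=12, requests=180)  # 6.7%, triggers the alert
check_error_rate(errors=2, requests=180)   # 1.1%, stays quiet
```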
Developing a platform based on assumptions ends up not addressing real problems and not meeting developers’ needs. The platform may lack features that matter to developers, leading to dissatisfaction and low adoption.
Hence, establishing clear objectives and success criteria is vital for guiding development efforts. Engage with developers regularly: conduct surveys, interviews, or workshops to gather insights into their pain points and needs before building the platform.
Building an overly complex platform hinders rather than helps development efforts. When the platform contains features that aren’t necessary or used by developers, it leads to increased maintenance costs and confusion among developers, further hampering their productivity.
The goal must be finding the right balance between functionality and simplicity, ensuring the platform effectively meets the needs of developers without unnecessary complications and iterating on it based on actual usage and feedback.
The belief that a single platform caters to all development teams and use cases uniformly is a fallacy. Different teams and applications have varying needs, workflows, and technology stacks, necessitating tailored solutions rather than a uniform approach. As a result, the platform may end up being too rigid for some teams and overly complex for others, resulting in low adoption and inefficiencies.
Hence, design a flexible and customizable platform that adapts to diverse requirements. This allows teams to tailor the platform to their specific workflows while maintaining shared standards and governance.
Spending excessive time in the planning phase leads to delays in implementation, missed opportunities, and a platform that does not fully meet the evolving needs of end users. When teams focus on perfecting every detail before implementation, the platform remains theoretical instead of delivering real value.
An effective way is to balance planning and execution by adopting an iterative approach. In other words, focus on delivering a minimum viable product (MVP) quickly and continuously improving it based on real user feedback. This allows the platform to evolve in alignment with actual developer needs, which ensures better adoption and more effective outcomes.
Building the platform without incorporating security measures from the beginning can create opportunities for cyber threats and attacks. This also exposes the organization to compliance risks, vulnerabilities, and potential breaches that could be costly to resolve.
Implementing automated security tools, such as identity and access management (IAM), encrypted communications, and code analysis tools, helps continuously monitor for security issues and ensure compliance with best practices. Besides this, provide ongoing security training that covers common vulnerabilities, secure coding practices, and awareness of evolving threats.
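One way to wire such a check into a pipeline is sketched below, assuming the open-source pip-audit CLI is installed in the build environment; the pipeline wiring itself is an illustrative assumption.

```python
# A minimal sketch of running an automated dependency audit as a
# pipeline step, assuming the pip-audit CLI is available.
import subprocess
import sys

def audit_dependencies() -> bool:
    """Run pip-audit against the current environment; a non-zero exit
    code indicates known vulnerabilities were found."""
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)  # surface the findings in the build log
        return False
    return True

if __name__ == "__main__":
    # Failing the build keeps vulnerable dependencies out of production.
    sys.exit(0 if audit_dependencies() else 1)
```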
When used correctly, platform engineering offers many benefits: higher developer productivity, faster and safer delivery, fewer errors, reduced cognitive load, and consistent, reliable deployments.
Typo is an effective platform engineering tool that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as Git-based version control, issue trackers, and CI/CD tools.
It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.
Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, i.e. it captures qualitative insights and provides an in-depth view of the real issues.
Platform engineering has immense potential to streamline development and improve efficiency, but avoiding common pitfalls is key. By steering clear of the pitfalls mentioned above, you can create a platform that drives productivity and innovation.
All the best! :)
In the recent episode of ‘groCTO: Originals’, host Kovid Batra engages in an insightful conversation with Vladislav Maličević, CTO at Jedox. The central theme of the discussion revolves around “Inside Jedox: The Buy vs. Build Debate”.
The episode starts with Vladislav recounting his 20-year journey from being one of Jedox’s first developers to stepping into the role of CTO. Moving forward, he sheds light on the company's vision, the transformation from an open-source project to a full-fledged cloud platform, and the various hurdles and achievements along the way, such as competing with industry giants like IBM and SAP. He also points out that many early team members remain with the company to this day.
Vladislav then dives into important decisions surrounding whether to build in-house or outsource various parts of their product, explaining that spending constraints often guide these choices. He also emphasizes the 80/20 rule (Pareto principle) and highlights the importance of integrating with Microsoft Excel as a key factor in their success.
Kovid Batra: Hi, everyone. This is Kovid, back with a new episode of groCTO. Today on our show, we have Vlado from Jedox. Welcome to the show. Great to have you here.
Vladislav Maličević: It's a pleasure to be here. Hi.
Kovid Batra: Hey, Vlado. All right. Like before, um, I start off with a beautiful discussion with you around the age-old 'Buy vs Build', I would love to know a little bit more about you, um, your hobbies, what you do at Jedox. So let's, let's start with a quick, cute intro about yourself.
Vladislav Maličević: Yeah. So, uh, my name is Vlado. The long name is Vladislav Maličević, uh, long name coming. It's a, it's a Serbian name and, uh, coming, coming originally from, from Bosnia, but I've been living and working here in Germany for the past 22 or 23 years. I started, uh, 20 years ago this year, uh, with Jedox. I was one of the first, uh, employees, one of the first developers, uh, slash employees of the company, went through the ranks over the years. I was lucky to follow the growth of the company and went through the ranks in the, in the engineering department. I was the Head of, uh, Development and the Director of Development, VP, um, Development, uh, and later on added support, uh, um, coined the cloud team back in the day. And, um, a few years back, I joined the C-level as a CTO with a company of 450-500 people today. Right. And it was an incredible journey, um, to, to, to look at, uh, from, from within, uh, observe and participate in, in this, uh, in this long journey.
So, um, yeah, but more about personal. So I'm a, I'm a father of three girls. Um, I also have a sausage dog and, uh, yeah, with my wife, uh, we live, uh, with my, with our kids here in Karlsruhe in Germany, which is a university city, um, let's say, more, uh, in the southern part of Germany. Yeah. So that's, that's about it.
Kovid Batra: Cool. I think, uh, this is really amazing to see. I mean, rarely we see someone spending such a long time, joining in as an employee and then growing to that C-level in a 20-25 year journey. So that, that has been, uh, one of my first experiences, actually, with someone on this show. I would love to know how it all started and what Jedox is about, uh, what was your vision and the whole company's goals and vision at that point of time? Now, 20 years hence, how, how do things look?
Vladislav Maličević: Yeah, sure. Yeah, I mean, the only constant in life is change, right? And, and many things, uh, stay unplanned. Initially, I, I really didn't, didn't intend to, to stay with Jedox. I thought it was in-between, uh, just an in-between jobs kind of thing, and, um, also the setup, it was a very small company, small office, uh, just a few, few people, uh, which by the way, all of them are still with the company. So all the people that were in the company when I joined are still with the company, which is also one, one quality, I must say. Like I said, I initially didn't, didn't, didn't plan to stick too long, but the challenge was there and it, it, it became more and more interesting from day-to-day. And, um, we were kind of, it's, it's easy when you have a kind of a black, uh, blank canvas, yeah. There is no product and then you start from scratch and you start building something and you know, over time, it becomes, uh, you see more and more of a product and you, you see more and more customers and it's sticking with the, resonating well with the, with the, with this huge community. And then you also add ecosystem to the, to the mix, you have partners in between the customers, growing globally, opening new offices, adding more people and things like that. So it is, it is simply, um, it was, it was an incredible journey. Usually you start off, like you said, either you hop from one, one, one job to another every few years and change, or you join as a, as a founder, right? Uh, you could also be, uh, it's not, not unusual to have a founder on the team, uh, being early on there and then, you know, doing something with the company and moving on, right? I indeed, I wasn't the founder, but was one of the first early people. So I, I sticked with the, with the, with the product and with the company, and, this is, um, resonates well with my, uh, passion. I kind of map myself or I reflect a lot of my, my work life and life with the product that we built over the years.
What Jedox actually is, is, um, I mean, uh, we are proud, uh, leader in the magic quadrant, Gartner's magic quadrant for, for EPM, CPM, or enterprise performance management, corporate performance management, or XPNA, how they call it nowadays. Being in the upper right corner, it was obviously not, uh, not, it was a journey. It's not like we showed up immediately there, right? From, from zero to hero, right? It took us a few years to move slowly through the, through the, from the lower left to the, to the upper right corner. And certainly, you know, competing there with the, with others, with the big names like Oracle, like SAP, like Anaplan, um, it certainly make, makes us proud because we are by, by far a much smaller company, uh, by, by sheer size, and, um, to some extent also by history or by tenure, right? Um, but yeah, it shows that, um, you don't need a lot of people and a lot of money to build good products and, and make them stick with the, with the customer.
One of the things that helped us in the beginning, I mean, that, that's also, we evolved over time. Uh, one of the things that helped us in the beginning to put a foot in the door in the market is the fact that initially we were, um, actually we started as, as a freeware and then switched to open source, which is kinda, you know, 20 years ago, it was things like Linux started showing up around. I mean, actually it was, uh, uh, Linux was, was way before, but, but around that time, there was like a boom of open source. And we were, um, I believe, the first product in the market to offer planning, uh, as open source, and that was a big shock in the market and it helped us a lot to, to, to spread the word, uh, globally and become known in the market, although we, uh, we had low or no, no marketing budget whatsoever. Right? Um, and then over the years we, we matured, we kind of, um, made a clear separation between, between open source and the commercial bit. And, and, uh, we curated both brands in parallel. But over time, we, we, we focus nowadays, we are focused totally on, on our cloud product under the name Jedox. And, um, basically open source is, is the past. It's also not something that we see in the market nowadays anymore in this, in this, uh, let's say in this bubble. It's relatively, you could say, it's a, it's a niche, but it's a quite, quite, uh, I wouldn't say lucrative, but quite, quite a big niche. It's a specific need from the business to be able to quickly plan any kind of data, usually finance data, but any kind of numbers, be it headcount, in any vertical, in any industry. Yeah. Nowadays, it's even, you see it in every, literally every company needs to do some kind of planning. And doing that with a tool like, like Jedox, makes it less error prone and, uh, very, very seamlessly integrated, allowing, um, to connect to, to the existing third-party systems, um, connecting all the data from all the different systems that you usually find, on average companies nowadays have 150 plus tools or services that they consume. Jedox is well-versed in, in accessing all these different existing products in the, in the customer's ecosystem and then combining those in, in Jedox.
In a nutshell, Jedox is, is a, is a platform. It's a low-code platform for building business applications, right, speaking less technically. But, what you have in that platform are components. There's a lot of IP, Jedox IP in there. You have your own in-memory database. You have your own ETL tool. You have your, your backend, middleware. You have, uh, frontend for, for mobile, for web, obviously, and we have quite a good integration with, uh, Office, in particular with Microsoft Excel, which is kind of a go-to application for any business user nowadays, right? Most of the time, the journey of our future customers starts somewhere in Excel, they did something over the years in Excel, and, um, they built it, they invested hours and hours in it and they've been living it, but, you know, they, over the years it, it became cumbersome to, to maintain it, multiple copies of it, multiple versions of it, uh, sharing it across the team or even teams, uh, error prone, and it's, it's, uh, known as an Excel chaos, which we actually try to, to solve. Right.
A lot of product, obviously; for 20 years we weren't sitting idle, um, so we were quite busy developing that, but nowadays, it's quite an extensive and mature, very grown up, uh, enterprise platform for building business applications, right? And coincidentally, the majority of, let's say, first-time users come from somewhere from the office of finance. Usually, that is the, that is the entry point where users come from. But, uh, it's not limited to, right, it's just the, usually the entry point, but we spread quite quickly within the organizations because they see the value of the product.
Kovid Batra: Got it. I think this is very, uh, interesting, competing in a landscape where you have MNCs and legacy players already there. You have been there from the very beginning, so the company founders and the company belief in that respect on day zero and today, uh, would be very different, right? At that time, you guys might not have even imagined where you would be 20 years hence. Of course, people have a vision there, but what was it like for the Jedox team and Jedox founders at that point of time?
Vladislav Maličević: Yeah, I mean, I mean the, the vision was there, but the, the vision, I could say that the, the vision was to, to, to rule the world or rule the bubble, rule the, rule, this, this, let's say, small niche, even back in the day. Appetite was certainly there, but we were also realistic. We knew that, you know, it, it will take a while to, um, even meet, uh, let alone exceed, the functionalities of the, of the established product in the market back in the day. Already, the market was there. It was booming. It was ruled by IBM. IBM was the absolute leader. A company called Infor was, was, uh, also quite prominent in the market back in the day. Actually, they weren't even called Infor back in the day, but through acquisitions, they, they grew into Infor, um, and they still exist, uh, to date. We knew we were on the, on the, let's say, on the, on the lower end and we weren't the disruptor, but one of the vehicles was, was definitely open source and coming through the open source, uh, on the one hand side, you have a, you have a behemoth, let's say, or a mature, um, established leader in the market selling, you know, I don't know, a couple thousand dollars per user, per seat, um, license. And then all of a sudden, this small team from Germany comes with a product that almost, almost, right, not, not really back in the day, but, but, um, almost matches the, the functionality, brings in, let's say, a subset of the functionality for no, no cost at all. It was open source and everybody was open also to contribute back in the day. There was no GitHub back in the day. We used to use SourceForge, sourceforge.net. That was, uh, that was the platform of choice back in the day where we hosted our code.
The word spread quite quickly and we saw traction very early. I joined around October-November 2004, and the first version of the product, one you can pretty much recognize even in today's product, shipped, I believe, in February 2006. So it took us about 18 months to put the product together, and everything you needed in order to work was already there: an in-memory database, a frontend for Excel, some primitive way of ingesting data, a baby version of today's ETL, and a predecessor of our current web frontend. It was old-school Web 1.0, but it was already connected to Excel and to the database. So we had a web frontend, and we were ready to rock, ready to run.
Later on, additions came, including the ETL and a modernized version of the web frontend. Nowadays, obviously, everything happens in the web, including authoring. Web 2.0 was the thing back then, and we quickly jumped on that boat. Later, other innovations followed: the shift to Kubernetes, microservices, and so on. Actually, the first big shift was the cloud. There was no cloud back in the day; maybe there was some hosting somewhere, but usually customers ran it on their own, even on their laptops, or within their corporate network in a client-server setup. Then around 2012-2013 we saw the cloud picking up, and that is where we started our excursion into it. From there we moved on, and today we are a cloud company with a SaaS product.
Kovid Batra: Yeah. So in this whole journey you survived, in fact you thrived, as a product and as a company. You mentioned that going open source was one critical move which probably helped you a lot in exceeding what your competitors were doing, but there must have been much deeper technical decisions as well. With this question I'm trying to understand how many times you had to take critical calls that impacted the business in an immense way, whether those were decisions about building products in-house or outsourcing them, and how that journey looks now, 20 years hence, when you see it in retrospect.
Vladislav Maličević: In retrospect, I wouldn't say there were too many critical events. The market situation is dynamic and you have to react literally on a daily basis; some important decisions are made every day. The strategy, however, you don't change every two days. I would say we had three or four waves over the course of 20 years. For example, the decision to go all in on the cloud took us a while. First of all, the market itself is quite conservative. You are usually working with financial data, which is very critical, and people are not eager to expose financial data outside their corporate network. So when the cloud showed up, the question was whether we should even jump onto that boat. I remember vividly that there were pros and cons, and there were voices in the company, I would say the majority, against the cloud: "Nobody will jump on this boat." "Nobody wants to put data in a public cloud." "This will not fly." And indeed, it didn't fly immediately. It took a while, and it depends on the market; here in Germany the market is a bit conservative, so it takes a few years for things to become mainstream. One thing that helped adoption, technically nothing special but a smart move by Microsoft back in the day, was the so-called German cloud: the idea of a sovereign cloud on German soil, operated by a German company, disconnected from the rest of the world and from the parent corporation in the US. That brought more trust, of course together with additional marketing and reassuring customers across the DACH region of Germany, Austria, and Switzerland. It definitely helped cloud adoption. A few years later, all of a sudden, the cloud became a commodity, somewhat delayed, but a commodity even in Germany. Then it was a no-brainer: okay, let's go. You don't have those conversations anymore, or only very rarely.
That was one critical thing back in the day. We started talking about it in 2011-2012, but the real push came around 2015-2016, so it took us a few years to go from zero to full steam ahead on the cloud. Another thing was being open source initially, which helped promote the brand. Normally you would need to spend millions globally on marketing to spread the word about Jedox; with open source, it spread by word of mouth very quickly. Again, context matters: we're not in the Google business, we're not a commodity every consumer uses, but within our bubble the word spread super quickly. Later, we acquired a company in Australia; before the acquisition they had run as a partner for a few years. I had the pleasure of talking to one of that company's founders, and he said he remembers well when we announced the first version. He was using IBM at the time, and he read on some forum that there was a new product called Jedox, open source and free to download and use, that did almost everything they used to pay for. He emailed his colleagues saying, "The Germans are coming." I use that reference often; it's quite interesting, because where is Australia from here? On the opposite side of the world, yet the word really did spread quickly. So I think going open source initially was a good decision. On the other hand, it created very little traction in the developer community, almost none. We had the code up, but we were the sole maintainers, with very little input from the community. Maybe it's too technical, and again, maybe it's the context and the niche we are in; it's not some commodity that everybody needs on their desktop.
So: open source, the cloud, and being close to Excel. That was always a good thing. There's this big question: what is the most common functionality in every enterprise software? The answer is 'export to Excel'. In every enterprise software you will find that button or option. There are certainly hundreds of millions of Excel users globally, and those are a kind of citizen developer. We are close to that in both senses: literally, in that we are very compatible with Excel, integrate well with it, understand the Excel format, and can import and export it; and conceptually, in embracing the spreadsheet model. I think that was also the right choice.
Kovid Batra: Yeah, I think most of the time that really helps. Instead of introducing something completely new that is not part of the user's behavior, which might still be a hit, of course, since there are revolutionary ideas people eventually form a behavior around, what I've mostly seen is that a product built close to people's existing behavior and the way they use current solutions is much easier to adopt. It's easier for people to understand and relate to, and they quickly start using it. Then you can put them on a journey of gradual learning, introduce new features and more services, and they grow from there. But the initial hook should stay close to the existing solution while offering something very impactful, giving them more than the current solution does. So yeah, I think it's a mix of a few good decisions. There is no single magic bullet that sustains a business or makes it thrive in this market. It was your constant eagerness to learn and explore, then changing and adapting to what was coming, like the cloud, while at every point understanding user behavior and market trends and moving in that direction. That really worked out well for you.
A few more things I'd like to learn here. On this journey, when you're building such an immense product used by thousands and maybe millions of users, I'm not sure how many users you have right now, I'm sure there are hard decisions about building something in-house versus acquiring other companies, like the one you just mentioned. When there is a decision between building something in-house and outsourcing it, can you give us some examples from your journey and how you concluded on decisions like that?
Vladislav Maličević: Yeah, some of those decisions were made quite easily because of constraints. If money is your constraint and you have a few idle people on the payroll, obviously you start building, especially when you don't yet understand the magnitude and complexity of the problem. You don't see the big picture in the beginning; you go into it somewhat naive. I was quite young at the time. If you don't know the magnitude of the problem, if you don't see its whole scope, and you have a white canvas, you go for it and simply give it a try. So in the beginning there was actually no decision to be made about investing or acquiring, because the money wasn't there; the majority of early decisions were 'build'. Over time you get the opportunity to change: you keep the core of the product to yourself, and whatever is not core you can try to outsource. The good thing with a platform like ours is that once the platform is in place, you can build applications on top of it and turn those applications into products. For example, last year we entered another Magic Quadrant, for financial consolidation, which is an off-the-shelf product separate from our core product but built on top of the platform and sold separately. There are players in the market who have a product just for that; we built ours on top of our platform, so you get the platform, you get financial consolidation with it, and you can build any kind of business application on it. Once you have applications like that, and if they are easy to package, as they are with Jedox, you can offer a marketplace, which we do have. We put these applications in the marketplace so they can be installed easily from there; some cover 60, 70, 80, even 90 percent of the functionality out of the box, and then you just fill them with life, with your information, and off you go. It's configured and ready to use.
So the decision usually comes down to spend constraints; you have to be cautious about spend. Similarly with the cloud: do you go to the public cloud, or do you build and host yourself, colocating servers you built and run on your own somewhere? Someone needs to make that kind of decision, and in the beginning it's super easy: of course you buy a pizza box, put disks and CPUs in it, and colocate it at your nearest host of choice. But if you want to run at scale, compliance comes into play: you need attestations, ISO, SOX, CSA STAR, and whatnot. You cannot manage that anymore on commodity hardware that fails every three months, where something breaks and you need to take the system offline. That is where you go into the cloud, use services, and build it like Lego: cherry-pick the services you need, build something out of them, and outsource that responsibility to the infrastructure provider of your choice. The same goes for code. In the beginning, under monetary constraints, a couple of people should be sufficient to get you going. That's the Pareto 80/20 rule: with 20 percent of the people you can build 80 percent of the product. But the last 20 percent of the product is the hardest, and that is where you need the additional 80 percent of people on board. If you have the means, that is where you can decide whether to outsource, have it built, or take a managed service or an OEM solution for the particular case. Nowadays it would be totally different; you'd look at it from different perspectives and dimensions, with cost being just one of them.
Kovid Batra: Yeah.
Vladislav Maličević: So yeah, it depends on the context, on what kind of setup you are in. We are scaled up now, a mature, profitable company, so we can afford to take a bit more time over decisions such as outsourcing or acquiring pieces of product into the whole. Whereas at the beginning, you usually have only an idea and your free time, so you roll up your sleeves and you code. You usually don't have the money to buy expensive services from others.
Kovid Batra: Cool, Vlado. That's interesting. We are running short on time, so we'll have to wrap up now, but it was a really interesting talk. I would love to talk more and hear the details of those 20 years, the challenges you found hardest and how they were solved, but that would need another episode, and I would love to have you back for one, absolutely.
Vladislav Maličević: Thanks a lot. Yeah, let's do that some other time.
Kovid Batra: Sure, absolutely. All right, that's it for today. Thank you, Vlado, thank you for your time. Looking forward to hosting you again very soon.
Vladislav Maličević: Thanks a lot. Bye-bye.
Kovid Batra: All right. See you. Bye.
Robert C. Martin introduced the ‘Clean Code’ concept in his book ‘Clean Code: A Handbook of Agile Software Craftsmanship’. He defined clean code as:
“Code that has been taken care of. Someone has taken the time to keep it simple and orderly. They have paid appropriate attention to details. They have cared.”
Clean code is easy to read, understand, and maintain. It is well structured and free of unnecessary complexity, code smell, and anti-patterns.
The Single Responsibility Principle states that each module or function should have one well-defined responsibility and only one reason to change. Otherwise, the result is bloated, hard-to-maintain code.
Example: the code’s responsibilities are separated into three distinct classes: User, Authentication, and EmailService. This makes the code more modular, easier to test, and easier to maintain.
class User {
  constructor(name, email, password) {
    this.name = name;
    this.email = email;
    this.password = password;
  }
}

class Authentication {
  login(user, password) {
    // ... login logic
  }

  register(user, password) {
    // ... registration logic
  }
}

class EmailService {
  sendVerificationEmail(email) {
    // ... email sending logic
  }
}
The DRY (‘Don’t Repeat Yourself’) principle states that unnecessary duplication and repetition of code must be avoided; otherwise, it increases the risk of inconsistency and redundancy. Instead, abstract common functionality into reusable functions, classes, or modules.
Example: The common greeting formatting logic is extracted into a reusable formatGreeting function, which makes the code DRY and easier to maintain.
function formatGreeting(name, message) {
  return message + ", " + name + "!";
}

function greetUser(name) {
  console.log(formatGreeting(name, "Hello"));
}

function sayGoodbye(name) {
  console.log(formatGreeting(name, "Goodbye"));
}
YAGNI (‘You Aren’t Gonna Need It’) is an extreme programming practice that states: “Always implement things when you actually need them, never when you just foresee that you need them.”
It doesn’t mean avoiding flexibility in code, but rather not overengineering everything based on assumptions about future needs. The principle means delivering the most critical features on time and prioritizing them based on necessity.
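As a small illustrative sketch (the report-export scenario and function names are assumptions, not from the original text), compare a speculative design with a YAGNI-style one:

# Speculative design: parameters and branches for formats nobody has asked for yet.
def export_report_speculative(rows, fmt="csv", compress=False, encrypt=False):
    if fmt == "csv":
        return "\n".join(",".join(str(v) for v in row) for row in rows)
    raise NotImplementedError(fmt)  # xml/pdf "for the future"

# YAGNI version: only CSV is needed today, so only CSV exists.
def export_report(rows):
    return "\n".join(",".join(str(v) for v in row) for row in rows)

print(export_report([[1, 2], [3, 4]]))  # prints: 1,2 then 3,4

If XML export is ever genuinely needed, it can be added then, informed by the real requirement rather than a guess.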
The KISS (‘Keep It Simple, Stupid’) principle states that simple code should be preferred over complex code to enhance comprehensibility, usability, and maintainability. Direct, clear code is better than code that is bloated or confusing.
Example: The function directly multiplies the length and width to calculate the area and there are no extra steps or conditions that might confuse or complicate the code.
def calculate_area(length, width):
    return length * width
According to ‘The Boy Scout Rule’, always leave the code in a better state than you found it. In other words, make continuous, small enhancements whenever engaging with the codebase. It could be either adding a feature or fixing a bug. It encourages continuous improvement and maintains a high-quality codebase over time.
Example: The original code had unnecessary complexity due to the redundant variable and nested conditional. The cleaned-up code is more concise and easier to understand.
Before:

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

result = factorial(5)
print(result)

After:

def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))
The Fail Fast principle indicates that code should fail as early as possible. This limits the number of bugs that make it into production and ensures errors are addressed promptly, keeping the code clean, reliable, and usable.
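A minimal sketch of failing fast (the pricing function is a hypothetical example): validate inputs at the top and raise immediately, rather than letting a bad value propagate into later computations.

def apply_discount(price, discount_rate):
    # Fail fast: reject invalid inputs before any computation happens.
    if price < 0:
        raise ValueError(f"price must be non-negative, got {price}")
    if not 0 <= discount_rate <= 1:
        raise ValueError(f"discount_rate must be between 0 and 1, got {discount_rate}")
    return price * (1 - discount_rate)

print(apply_discount(100, 0.2))  # 80.0

The early checks mean a caller passing a discount of 20 (instead of 0.2) gets an immediate, descriptive error instead of a silently wrong price.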
As per the Open/Closed Principle, software entities should be open for extension but closed for modification. This means team members should be able to add new functionality to an existing software system without changing its existing code.
Example: The Open/Closed Principle allows adding new employee types (like "intern" or "contractor") without modifying the existing calculate_salary function. This makes the system more flexible and maintainable.
Without the Open/Closed Principle
base_salary = 50000  # example figure, defined here so the snippet runs

def calculate_salary(employee_type):
    if employee_type == "regular":
        return base_salary
    elif employee_type == "manager":
        return base_salary * 1.5
    elif employee_type == "executive":
        return base_salary * 2
    else:
        raise ValueError("Invalid employee type")
With the Open/Closed Principle
class Employee:
    def calculate_salary(self):
        raise NotImplementedError()

class RegularEmployee(Employee):
    def calculate_salary(self):
        return base_salary  # base_salary as defined above

class Manager(Employee):
    def calculate_salary(self):
        return base_salary * 1.5

class Executive(Employee):
    def calculate_salary(self):
        return base_salary * 2
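To show the "open for extension" side in action, here is a minimal sketch building on the classes above (the Intern class and its 0.5 multiplier are illustrative assumptions, not from the original article):

class Intern(Employee):
    def calculate_salary(self):
        return base_salary * 0.5  # new behavior added purely by extension

# Existing classes and call sites are untouched; polymorphism picks the right calculation.
for employee in [RegularEmployee(), Manager(), Intern()]:
    print(employee.calculate_salary())

Compare this with the first version, where supporting interns would have meant editing (and re-testing) the calculate_salary function itself.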
When you choose to approach something in a specific way, maintain that choice consistently throughout the entire project. This includes consistent naming conventions, coding styles, and formatting. It also means keeping the code aligned with team standards, making it easier for others to understand and work with. Consistent practice also helps you identify areas for improvement and learn new techniques.
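As a quick illustrative sketch (the function names are hypothetical), contrast mixed naming styles with a single consistent convention:

# Inconsistent: three different styles for three similar operations.
def getUser(user_id): ...
def fetch_account(account_id): ...
def RetrieveOrder(order_id): ...

# Consistent: one verb and one naming convention throughout.
def fetch_user(user_id): ...
def fetch_account(account_id): ...
def fetch_order(order_id): ...

With the consistent version, a reader who has seen one function can correctly guess the names of the others.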
This means using ‘has-a’ relationships (containing instances of other classes) instead of ‘is-a’ relationships (inheriting from a superclass), which makes the code more flexible and maintainable.
Example: In this example, the SportsCar class has a Car object as a member, and it can also have additional components like a spoiler. This makes it more flexible, as we can easily create different types of cars with different combinations of components.
class Engine:
    def start(self):
        pass

class Car:
    def __init__(self, engine):
        self.engine = engine  # 'has-a': a Car contains an Engine

class SportsCar:
    def __init__(self, engine, spoiler):
        self.car = Car(engine)  # 'has-a': a SportsCar contains a Car rather than inheriting from it
        self.spoiler = spoiler
Avoid hardcoded numbers; instead, use named constants or variables to make the code more readable and maintainable.
Example:
Instead of burying the bare number in an expression:

final_price = price - price * 0.2

Use a named constant:

DISCOUNT_RATE = 0.2
final_price = price - price * DISCOUNT_RATE

This makes the code more readable and easier to modify if the discount rate needs to be changed.
Typo’s automated code review tool enables developers to catch code issues and detect code smells and potential bugs promptly.
With automated code reviews, auto-generated fixes, and highlighted hotspots, Typo streamlines the process of merging clean, secure, and high-quality code. It automatically scans your codebase and pull requests for issues and generates safe fixes before merging to master, ensuring your code stays efficient and error-free.
The ‘Goals’ feature empowers engineering leaders to set specific objectives for their tech teams that directly support writing clean code. By tracking progress and providing performance insights, Typo helps align teams with best practices, making it easier to maintain clean, efficient code. The goals are fully customizable, allowing you to set tailored objectives for different teams simultaneously.
Writing clean code isn’t just a crucial skill for developers. It is an important way to sustain software development projects.
By following the above-mentioned principles, you can develop a habit of writing clean code. It will take time but it will be worth it in the end.
Sign up now and you’ll be up and running on Typo in just minutes