‘Implementing Technology & Processes in the Age of AI’ with Jeff Watkins, CPTO, xDesign

In this episode of ‘Beyond the Code’, host Kovid Batra welcomes Jeff Watkins, CPTO at xDesign. Jeff has previously contributed his expertise to organizations such as AND Digital and BJSS Ltd. Their conversation revolves around implementing technology and processes in the age of AI.

The episode starts with a fireside chat that offers a glimpse into Jeff’s personal background. In the main section, Jeff shares his take on the ‘Iron Triangle’ and the ways to balance it, delves into implementing the latest technologies involving AI and security, and explores effective methods for defining team success. He also sheds light on how understanding the psychology of programmers helps in leading engineering organizations effectively.

Wrapping up, Jeff leaves the audience with advice to approach their work with a focus on efficiency and smarter practices.

Timestamps

  • (0:07): Jeff’s background 
  • (1:19): Fireside chat
  • (7:27): Diving deep into the philosophy of the ‘Iron Triangle’
  • (11:04): How do you implement the latest technologies that involve AI and security?
  • (16:59): How do you define success for your team?
  • (23:14): How do you measure and enforce process adherence for success?
  • (24:58): How does grasping the psychology of programmers help in leading organizations?
  • (28:15): Parting advice for the audience 

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have a fascinating personality who is a technology leader, tech blogger, public speaker, and technical architect. He has 25 years of experience in engineering and leadership, and currently serves as CPTO at xDesign, leading 250+ engineers.

He has a great hunger for learning in technology. And I can surely say that because at this point in his career, after 25 years, he has gone for a degree in Cyber Security, and he is going for another degree in Implementing AI. Happy to have you here, Jeff. Welcome to the show.

Jeff Watkins: Thank you very much. I’m glad to be here, Kovid. Yeah, you probably give me too much credit there. I’ve really enjoyed my career in tech and it’s been hugely rewarding, and subjects such as Cyber Security and Artificial Intelligence are so of our time. I think it’s really important for somebody who sits in the exec space to understand them as best as possible, really.

Kovid Batra: No, I think you deserve this. I’m sure people around you feel so. All right. So, we will get started on this. The first part is going to be a quick fireside chat where we’re going to know more about Jeff personally rather than professionally. So, are you ready for that? 

Jeff Watkins: Sure. Yes. Looking forward to it. 

Kovid Batra: Sure. So, the first question. I already know that you have a cat; last time we talked, she also joined the call. I just wanted to know, why a cat and not a dog?

Jeff Watkins: Oh, I think.. So, I love both cats and dogs. In fact, I’ve got a cat, sorry, a dog sat next to me here just out of camera shot at the moment. 

Kovid Batra: Oh! That’s amazing. 

Jeff Watkins: So the cats are definitely not here, ’cause they’re not really good friends at the moment. I’ve grown up with both of them, but cats, certainly, I think they’re quite independent. They’ve got a lot of personality. They knock everything off your desk. I think they’re just great fun to be around. I love both, but for me, I just slightly prefer cats. I’m trying not to make the dog jealous, though. I’m sure she’s listening.

Kovid Batra: Amazing, amazing. So, this is the next question. What was your first encounter with technology?

Jeff Watkins: So, yeah, I remember it really well, which is bizarre because I was around 6 years old, so I was quite young. My father had just bought a Commodore VIC-20, which is an exceedingly old computer. It had 4 kilobytes of RAM, it couldn’t support things like sprites, and it just had a beeper for a sound chip. It was very, very basic and rudimentary, but that was my first real introduction to technology. I mean, obviously we use other bits of technology in our everyday lives, but this was a bit that you could program and do things with. I could put in 10 PRINT “Hello World” and 20 GOTO 10, and be amazed that you could do that. So that was my first, and that’s what got me hooked. I started learning to code pretty soon after that.

Kovid Batra: Oh, that’s amazing. And that’s quite old. Which year are we talking about here? 

Jeff Watkins: So, this would be around maybe 1983. So, a long time ago. I mean, it’s a MOS 6502 processor running at point something of a megahertz. We’re talking seriously primitive technology. I certainly never imagined that in my lifetime the entire way of living would change. The internet, I think, has existed since the 50s in some respects, but the actual web, really, in my lifetime, I’ve seen it flourish and become a way of life, you know. When I grew up, we didn’t even have a telephone inside the house. We had a telephone box at the end of the street. To go from that to having always-on instant communication throughout the entire world, and this has all happened in my lifetime, is absolutely crazy to me.

Kovid Batra: Yeah, of course. I mean, seeing this transition.. I was not even born then. So, you have seen technology since that time, not to make you feel old at all. I’m sure you are younger at heart than me, of course. But this transition. 

Jeff Watkins: Nice save! 

Kovid Batra: But this transition from that point till today, I think you have seen a lot. I’m sure you have seen a lot. 

Jeff Watkins: And the thing is, looking at the little taste we’ve had this year of what’s happening with artificial intelligence, I don’t think it’s the last one of these we’ll see. We’ll see another great change in our lifetimes. Now, if I could predict it, I would be the next Bill Gates or Steve Jobs, but I think there’ll be another one or two. If you look at the industrial revolutions, they’re getting shorter and closer together. The first one happened, I can’t remember the exact date, some time ago, and then they’ve suddenly started squeezing together. When will we see Industry 4.0, 5.0? They’re getting closer together, and we will see these huge changes in how we live and even how we think, you know. AI and Google have already started to change how we think. There are different studies on things like neuroplasticity, looking at how people retrieve rather than store information. School used to be about learning loads of facts, cramming it all in and keeping it in there. Now it’s about how you go and get information. So, yeah, it’s a very exciting time to be alive.

Kovid Batra: All right. Amazing! Moving on to the next one. I’m sure you read a lot of books. Which one was your last one? And share some learnings from it.

Jeff Watkins: I buy far too many books. I love to read, but I also buy more than I could possibly read; I think there’s more than I could read in my lifetime. The last one I read was ‘Team Topologies’. I’d been thinking for some time about how teams are formed and also how static they can be, and I think this backs up the idea of moving to more dynamic team structures, of flipping Conway’s law on its head and actually designing your organization, certainly within your digital arm, around what you want to achieve, rather than letting your systems end up being designed the other way around. It’s also about breaking down silos within the business. Traditionally, things like architecture were quite siloed activities. Once upon a time QA was a silo; security still is, and that’s one of the big things my fellow podcast host and I are trying to break down. It’s like whack-a-mole: every time we break down a silo in tech, another one appears somewhere else as the tech world gets bigger and bigger. So, there are a lot of lessons like that in there, and things around reducing cognitive load, because you want to be able to give people information and communication, but not too much, because it becomes overbearing. It’s good to have a big organization and the support of a big organization, but sometimes it can be a bit of a blunderbuss of information. It’s far too much.

Kovid Batra: Right. Makes sense. Cool. I think this is an amazing read for the audience as well.

Jeff Watkins: It’s quite a short book, so don’t worry too much about the time investment. I think you’ll take a lot from it quite quickly.

Kovid Batra: Cool. All right. So, that was a nice fireside chat with you. Now we’re moving to the main section, where we want to learn from your experiences, your successes and failures, and the challenges you have had. The last time we talked, you mentioned your philosophy of the ‘iron triangle’. I just wanted to do a deep dive on it and help the audience learn the same aspects you were describing. So, can you throw some light on that philosophy of yours?

Jeff Watkins: Yeah. For the listener who’s maybe not aware of the concept, the ‘iron triangle’ is an agile project management concept. Once upon a time, it used to be time, cost, and quality, but then people changed that to time, cost, and scope. The idea is you can choose two of them, but you can’t have all three, because you can’t cross the triangle. You put quality in the middle and hopefully make that invariant, because really, people usually don’t want poor-quality software. And the idea is, well, if it’s going to be cheap and fast, then you need to lose scope. And if it’s going to be cheap and have the scope, then maybe you’ll just need to take longer, ’cause you’ll need a smaller, more focused team, etcetera.

Well, how do you beat the iron triangle? In tech, I remember working on a system a few years ago now where we were using containerized Java services. And we looked at it and went, “But these services are really simple. Why not just use serverless tech and make this as light as possible?” And people went, “Oh, but we haven’t done much of that before.” Well, learning’s fun. I personally did some of the first integration and went, “Oh look, that took us from zero to actually working in maybe a couple of hours.” And really, if you were trying to build all that up and then put in all the infrastructure as code, etcetera, that would probably take days or weeks. So it’s about using the technology smarter. That, in my opinion, is the key to beating the iron triangle in tech: using smart technology. What that does is it gets you more from your time and your money without sacrificing scope at all. There’s a set of scales here, because you couldn’t just go, “Well, let’s just use a no-code platform.” That’s kind of okay for some things; there are some, I’d say back-office, applications where it’s fine, possibly. But if it’s a user-facing application, it might not be fine to use a no-code app, because some of them don’t necessarily have the most amazing user experiences.

An ex-colleague of mine came up with the term ‘lower code’. So, it’s not just a low-code platform; ‘lower code’ is about leveraging as much of cloud native as possible, and leaning into, well, what can something like Azure DevOps or Azure Functions do for you, making sure you use things like managed database services. What can you use with the minimum amount of configuration and the minimum amount of boilerplate to get you where you need to be, securely and scalably? And if you can crack that, then those kinds of services don’t really put any restrictions on how you deploy them and how you use them, compared to maybe a no-code platform.

Kovid Batra: Yeah. 

Jeff Watkins: So actually, this gives you all the flexibility of coding it, but also a lot of the speed of not coding it, which I think is a..

Kovid Batra: Is a big advantage. 

Jeff Watkins: And we’re starting to, yeah, we’re starting to harness that appropriately. I think this is where consultancies really do help, ’cause they know how to do this. And there are loads of people still hand-rolling loads of stuff and wasting a lot of time and money, I think.
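As an illustrative aside, here is a minimal sketch of the kind of ‘lower code’ endpoint Jeff describes, written against the Azure Functions Python (v2) programming model that he name-checks. The route, the function name, and the idea of delegating storage to a managed database are assumptions made for the example, not details from the episode.

```python
# Minimal "lower code" HTTP endpoint: the platform handles hosting, scaling,
# TLS and routing; we only write the business logic.
# Assumes the Azure Functions Python v2 programming model (azure-functions package).
import json

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)


@app.route(route="orders/{order_id}")
def get_order(req: func.HttpRequest) -> func.HttpResponse:
    """Return a single order. Hypothetical endpoint, for illustration only."""
    order_id = req.route_params.get("order_id")
    if not order_id:
        return func.HttpResponse("order_id is required", status_code=400)

    # In a real service this lookup would go to a managed database
    # (a hosted SQL or document store) rather than hand-rolled storage.
    order = {"id": order_id, "status": "dispatched"}

    return func.HttpResponse(
        body=json.dumps(order),
        mimetype="application/json",
        status_code=200,
    )
```

Compared with a hand-rolled containerized service, there is no web framework, Dockerfile, or cluster configuration to maintain; the trade-off is accepting the platform’s conventions.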

Kovid Batra: I think it totally makes sense. And I think that’s how you break the age-old iron triangle: by thinking out of the box and thinking about new technologies to implement, you can save a lot of time, deliver faster, and maybe do it a little cheaper too. So yeah, cool. I think that was really insightful.

I have another aspect that I wanted to ask about, because you have been leading teams and building things at such scale in organizations. So, how do you and your team approach implementing the latest technologies that involve security and AI? How does that work right now?

Jeff Watkins: I think there are a couple of things if we’re talking about security tech or AI technologies. One of the key things, if you’re going to take something into your organization, is doing a proper technology diligence process. And that sounds awfully long and boring, but actually you boil it down to: what are the key things that matter to you? It’d be slightly different between, say, an open-source project and a proprietary or SaaS product, but for open-source: how mature is this technology? How many people are contributing to it? When was the last major release? If you scan it, is it vulnerable or not? Is it still in good use by the community? Going on to Stack Overflow and looking at how many questions are about it is a good yardstick for how popular something is. But also, could it be too mature? You know, there’s a point where Log4j became so old and long in the tooth that the creator of Log4j said, “No, don’t use that. Use Logback instead.” So, it’s that balance of looking at maturity, and developer support as well. It’s no good procuring a technology if you have to train everybody to use it; it becomes quite an onerous task to upskill everybody on it, and then you have the problem that, well, how do you know best practice if nobody else in the team knows it? So, that’s choosing technology. But when it comes to things like AI, a lot of it is around, well, what’s this going to cost? Because AI is expensive to run quite often. And also, where’s the data going? We’re in GDPR countries, so you want to be sure that you’re not giving away personally identifiable information, or clients’ or your own intellectual property, because it’s not just the AI provider who might run away with that. They probably won’t, you know, it’s not like somebody like OpenAI would, but they themselves are subject to attack. They could be part of a supply chain attack, so your data may or may not be safe. And safety is never 100%. There’s no such thing as 100% security unless you turn off all your computers, seal them in 7 meters of reinforced concrete and dump them in the Mariana Trench. That’ll be secure, but it won’t be usable. So, it’s that trade-off of risk enabling innovation. But if you’re choosing AI tech, things like, do they have a UK data center? That makes things much easier. And if they’re within a GDPR country, that makes these things way, way easier.
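As a rough illustration of what a lightweight technology diligence process can look like in practice, here is a sketch that checks an open-source candidate against the kinds of questions Jeff lists: maturity, contributors, last release, vulnerabilities, community use, and team familiarity. The thresholds and field names are invented for the example; real criteria would reflect your own risk appetite.

```python
# A toy technology-diligence check for an open-source candidate.
# The criteria mirror the questions discussed above; thresholds are
# illustrative only, not a recommendation.
from dataclasses import dataclass
from datetime import date


@dataclass
class OssCandidate:
    name: str
    last_major_release: date
    active_contributors: int
    open_vulnerabilities: int     # known, unpatched issues from a scan
    stackoverflow_questions: int  # rough proxy for community adoption
    devs_familiar_in_team: int


def diligence_findings(c: OssCandidate, today: date) -> list[str]:
    findings = []
    if (today - c.last_major_release).days > 2 * 365:
        findings.append("No major release in 2+ years: possibly abandoned or too mature.")
    if c.active_contributors < 3:
        findings.append("Very small contributor base: bus-factor risk.")
    if c.open_vulnerabilities > 0:
        findings.append(f"{c.open_vulnerabilities} unpatched vulnerability(ies): review required.")
    if c.stackoverflow_questions < 100:
        findings.append("Little community usage: best practice will be hard to find.")
    if c.devs_familiar_in_team == 0:
        findings.append("Nobody on the team knows it: budget for upskilling.")
    return findings


if __name__ == "__main__":
    candidate = OssCandidate(
        name="example-logging-lib",  # hypothetical library
        last_major_release=date(2020, 1, 15),
        active_contributors=2,
        open_vulnerabilities=1,
        stackoverflow_questions=40,
        devs_familiar_in_team=0,
    )
    for finding in diligence_findings(candidate, today=date.today()):
        print("-", finding)
```

The same shape works for SaaS or proprietary tools; only the questions change, for example data residency and running cost instead of contributor counts.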

Kovid Batra: Yeah, yeah, definitely it does. So, basically, whenever you are looking at implementing AI, there is definitely the aspect of security, which you’re really emphasizing. When you take care of the security part alongside it, is there a framework? Are there steps or guidelines that we actually have to follow when we are implementing it? That would be my follow-up question there.

Jeff Watkins: Yeah. So, if you’re building an AI into a product, one thing about AI is that it’s really quite hard to secure its use if it’s ChatGPT-based, because people try to put protections around it, but what you don’t want to do is end up becoming, effectively, a free version of ChatGPT for people to abuse. So, we really recommend thinking about the design of the security from as far left as possible, when you’re actually in the product and design space, because it’s really hard and expensive to fix fundamental security flaws once you get all the way to release. So you’re thinking about, well, how do you make this safe? And it’s not just from a technical point of view; it’s almost all from a “who might abuse this service?” point of view. There’s something called the ‘persona non grata’, and it’s the opposite of the persona. You know, when you do product personas, this is the opposite: it’s the people you don’t want, the abusers.

Kovid Batra: Yeah, I got it. 

Jeff Watkins: And then we put those into misuse cases and abuser stories, and link them to our security and our functional requirements. And when it comes to AI, it’s really hard. It’s like, well, what countermeasures do you put in place to stop people abusing it? But there are obviously other sides to AI too: people could try a model inversion attack, they might try to poison your model, they might use other adversarial AI attacks. So, it’s also the protections you need to put around that model to avoid it being unduly influenced. And how do you tear that model down? If you think your model has been poisoned, how do you tear it down and recreate it in a safe way? You have to think about all of that before you build it all out. And the other thing is, they could effectively attack your infrastructure by overloading that model, or just blow your budget by spamming it with requests. So, there are a lot of considerations. And the main thing is putting in threat modeling to start with. That can be product-based threat modeling through personas non grata, misuse cases, abuser stories, attack trees, and things like that. But then, once you’ve got the basics there, move on to using something like STRIDE to do structured threat modeling. And then make sure you have a very robust way of managing your security threats and risks.

And the other thing is, make sure your entire development pipeline has got a really good automated security pipeline in place as well.
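To illustrate a couple of the countermeasures implied above, not becoming a free ChatGPT for abusers and not letting request spam blow the budget, here is a hedged sketch of pre-flight guardrails in front of a model call. The limits, the PII pattern, and the call_model placeholder are assumptions made for the example; they complement, rather than replace, the threat modeling (persona non grata, misuse cases, STRIDE) and the automated security pipeline described here.

```python
# Toy guardrails applied before forwarding a request to an LLM provider.
# Limits, patterns and call_model() are illustrative placeholders.
import re
import time
from collections import defaultdict, deque

MAX_PROMPT_CHARS = 2_000              # keep abuse and cost bounded
REQUESTS_PER_MINUTE = 10              # crude per-user budget guard
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # very rough PII check

_recent_calls = defaultdict(deque)    # user_id -> timestamps of recent calls


def allow_request(user_id, prompt):
    """Return (allowed, reason); reject oversized, PII-bearing or spammy requests."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt too long"
    if EMAIL_PATTERN.search(prompt):
        return False, "possible personal data in prompt"

    now = time.monotonic()
    window = _recent_calls[user_id]
    while window and now - window[0] > 60:
        window.popleft()              # drop calls outside the 60-second window
    if len(window) >= REQUESTS_PER_MINUTE:
        return False, "rate limit exceeded"
    window.append(now)
    return True, "ok"


def call_model(prompt):
    """Placeholder for the real provider SDK call."""
    return "model response"


def handle(user_id, prompt):
    allowed, reason = allow_request(user_id, prompt)
    if not allowed:
        return f"request rejected: {reason}"
    return call_model(prompt)
```

Model-side attacks such as inversion or poisoning need their own controls (monitoring, retraining and teardown procedures); this sketch only covers the request path.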

Kovid Batra: That sounds like good step-by-step advice for implementing AI while taking care of security alongside. That’s great. I think that really helps our audience learn more about it.

Next question. Again, there are things apart from implementing technologies and delivering projects; we particularly look into the developer experience. When we are at this scale, we need to have a good culture in place. So, when you try to make sure that your team is efficient, what kind of metrics do you look at? How do you define success for a team? And when you identify problems in your team, how exactly do you end up solving them? You can give some examples here of things you have seen at your current or previous organizations.

Jeff Watkins: So, obviously there are the old-fashioned… traditional, I think old-fashioned is too harsh, traditional metrics like velocity, but then you can start thinking about DORA metrics. And a lot of it is around helping people understand Lean and Agile together. Nowadays, we’re implementing SPACE metrics, which cover quite a broad range of things. I think it’s really useful to look at where our pipeline stalls and where there’s waste in the process. If something keeps going around the loop because it keeps failing, or you’re getting a high defect injection rate, well, why is that? It’s not about blaming people. It’s about figuring out what part of that process or culture is broken. Maybe things are staying in review for too long, maybe things are taking too long to release. And quite often, for the most part, I’d say most teams do want to do a good job; it’s just that the environment around them isn’t allowing them to do so, because it’s inefficient in some way.

Kovid Batra: Yeah. 

Jeff Watkins: It might be poor infrastructure, it might be a poor release process. It might be tedious red tape. It might be an over-the-wall mentality because you’ve got a separate team doing QA rather than bringing them together. So there’s a whole raft, and it’s about measuring the waste. Basically, for the most part, it’s measuring the time wasted because there’s a gap in the process, or because it hits the process and has to go back again. So, any measurements like that, and obviously things like actual quality measurements, stuff that’s relatively non-controversial, like SonarCloud coverage. Test coverage was a hot topic for a long time because people would try to get towards 100% coverage, but actually, if you just measure coverage, that’s a Goodhart’s Law thing: when a measurement becomes a target, it ceases to be a good measurement, and people would just fake tests to hit 100% coverage. It’s all about the actual quality of that coverage as well. I don’t think there’s a shortcut to quality, but what we can do is try to bake it in as much as possible by measuring what you can in a sensible way. I think McKinsey drew a lot of ire from the industry by saying, “Oh, you can measure developer productivity.” Well, I think if you’re measuring productivity in a way that actually helps, like removing blockers and building the ideal culture and an efficient way of working, that’s fine. If you’re measuring productivity so you can shoot the bottom 10%, that’s not fine at all.

Kovid Batra: Yeah, of course. 

Jeff Watkins: So I think there are different ways of looking through that lens. And actually, if you take away the emotive aspects of what McKinsey published and look at some of the areas they’re examining, like organizational maturity and engineering maturity, it’s very sensible stuff to measure. And we’ve actually started building out all these measurements. We’ve been assessing our internal projects, and you find some really interesting things. You find places where CI/CD can be improved, and places where things like the security champions program can be improved.

And when you start measuring, you have to understand that you’ll probably find a lot of stuff. Then, once you’ve measured, it’s actually about prioritizing what’s going to be the most impactful change.

Kovid Batra: Right. 

Jeff Watkins: Things change, because you might find 50 things, and which one of those is going to give you the best return on investment? That’s where you need to start evaluating and think, “Well, actually, this is costing us, say, 80 developer hours a week because it’s really hitting the team, so this probably needs fixing.” If it’s an annoyance that’s costing us half an hour every sprint, well, we can probably live with it. But then there’s the impact of other things as well. Security deficiencies may not have any actual productivity impact, but they sure will if you get fined because you’ve lost all your data, or your system goes down, or your data is stolen.
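As a small sketch of this ‘measure the waste, then price it’ approach, here is an illustrative calculation of cycle time and review wait from pull-request timestamps, turned into a rough weekly figure. The sample data and the cost conversion are invented; a real setup would pull these timestamps from the build system or the Git hosting API, as discussed above.

```python
# Illustrative waste measurement: median cycle time and review wait from PR
# timestamps, plus a deliberately crude "hours lost per week" estimate.
# All numbers below are made up for the example.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median


@dataclass
class PullRequest:
    opened: datetime
    first_review: datetime
    merged: datetime


def hours(delta: timedelta) -> float:
    return delta.total_seconds() / 3600


def waste_report(prs: list[PullRequest]) -> dict:
    cycle_times = [hours(pr.merged - pr.opened) for pr in prs]
    review_waits = [hours(pr.first_review - pr.opened) for pr in prs]
    return {
        "median_cycle_time_h": median(cycle_times),
        "median_review_wait_h": median(review_waits),
        # Rough cost: if every PR waits this long for a first review, the
        # weekly figure is roughly wait * PRs per week.
        "est_review_wait_h_per_week": median(review_waits) * len(prs),
    }


if __name__ == "__main__":
    base = datetime(2024, 1, 8, 9, 0)
    sample = [  # treated as one week's PRs
        PullRequest(base, base + timedelta(hours=20), base + timedelta(hours=30)),
        PullRequest(base, base + timedelta(hours=5), base + timedelta(hours=9)),
        PullRequest(base, base + timedelta(hours=46), base + timedelta(hours=52)),
    ]
    print(waste_report(sample))
```

A number like est_review_wait_h_per_week is what lets you compare a “costing us 80 developer hours a week” problem against a “half an hour every sprint” annoyance.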

Kovid Batra: So, this is something that would really help in identifying the inefficiencies or the bottlenecks in the whole software delivery pipeline. I would also like to emphasize the point of defining success for your engineering teams. Of course, these metrics would give you symptoms or tell you where problems exist, but ultimately… So this is, again, a generic as well as personal opinion: for engineering teams, there should be certain goals that define success for them. How do you see that success? I would be interested, and the audience would definitely be interested, in knowing that.

Jeff Watkins: Yeah. So again, I don’t think it’s the age-old thing of how many lines of code you’ve delivered. For me, it’s more about reducing cycle times and increasing the quality of builds, reducing failures, reducing the waste. So, you should be aiming towards, well, what’s our effective uptime of our builds, and incentivizing on things like that, incentivizing on quality going up. The measurement should be: if you’re improving quality, then you are being a good citizen. If you’re leaving the campground in a better state than you found it, you are being a good citizen. That should be acknowledged. People should…

Kovid Batra: Rewarded for that. 

Jeff Watkins: …celebrate that people have taken an interest in quality. And then there should also be notifications if there are problems, if quality is dropping, if cycle times are getting longer, because sometimes it might just be, “Well actually, we need to fix our build process because it’s taking too long.” I’ve had that in the past, where people were going, “Our productivity is awful because our build process is too long.” We found some ways of parallelizing pieces and took the build down from half an hour or 40 minutes to about 4 minutes. And that’s not about anybody being non-productive. It’s just that, unfortunately, over time things change. But if you’re measuring that, you understand where the waste in the system is, and you can set targets for people. If they’re not meeting the targets, it’s not there to beat them over the head with a stick; it’s, what’s preventing you from hitting these targets?

Kovid Batra: Absolutely. So, in an organization where you have 250+ engineers, how do you measure it? How do you ensure it? Is it done through processes? And then how do you ensure that the process is followed? Or do you use some tooling around it? I would like to understand that as well.

Jeff Watkins: So, there’s a mixture, because we’re a consultancy and we use a number of different tech stacks. In some ways it’d be harder if we had nearly 300 engineers to measure in one giant team, but in some ways it’s easier, because the individual engineering managers can see at their smaller level. So currently, it’s a mixture: we take metrics from things like the build systems we’re using, the CI/CD we’re using, and things like Jira. And then there’s an element of manual inspection. Basically, we have an engineering maturity framework where we come in and lift the lid on each project, on a rotation.

I think the move is towards a more holistic dashboard, and I think this will be even more useful for product companies with very large engineering teams all working towards the same kind of goals and the same kind of metrics: having a metrics platform where you can actually see what your build stability looks like, what your throughput looks like, what your waste looks like. We’ve come a long way with these platforms in the last few years, but I think there’s still room for challenges and room for new innovation there, because our ideas of metrics are also evolving. And a lot of companies still haven’t adopted any at all, or if they have, only the most rudimentary ones.

Obviously, you know, as the industry continues to grow and becomes more cost conscious, I think it will be a growth area.

Kovid Batra: Thanks a lot for this insight. And one last thing, which is definitely of interest to you: psychology. You mentioned that you are into psychology and, particularly, understanding the psychology of a programmer. So, how has your understanding of the psychology of a programmer helped you lead organizations?

Jeff Watkins: So, yeah. You know, first of all, you can’t completely, 100%, put people in one box, but a good chunk of us got into software for one type of reason, some for another. A lot of us got into it because we enjoy solving problems, we enjoy learning, and we enjoy the challenge, rather than just the money. And, this is backed up by research into what developers want, I’d break it into several key things. There’s enablement: people who are very smart like to be able to work creatively. I think there’s a misconception that developers aren’t creative; it’s just not splashing paint on the walls or making music. It’s a different kind of symphony. It’s a symphony of code. So there’s enablement, giving them the tech and the tools to automate, and to actually build and problem-solve. And then there’s the excitement as well. Developers work so much better, in my opinion and from my experience, when they’ve bought into the vision of what they’re building. Twenty or so years ago, it really was, “Well, we’ll pay you some money, build some stuff,” and a lot of it was backend stuff, not so much web stuff. But the modern graduate and the modern younger developer really do care about the ethics, and the company ethos and culture, of what they’re building and where they’re working. So having a really, really clear set of values is really important to the modern generation of developers.

And then there’s engagement as well. Engineers are thinkers; they like to be engaged, they like to have their opinion heard, and they like to see that people act on that opinion. It’s no good asking for feedback if you then just discard it. So, it’s those 3 things: enable, excite, and engage. There are a number of different topics under there, and almost none of them really mention money. People will stay somewhere if they feel all of those 3 factors, if they feel that they’re excited and enabled and engaged. And it goes back to some of the measurements as well, you know. If you actually start to reduce the waste in the system, produce enablement, and allow people to work super-efficiently with modern tech, then people will stay, even if they can see a higher salary somewhere else.

Kovid Batra: Yeah. I think this all brings it back to the point that you have to have a good understanding of who your audience is. When you’re leading a dev team, the developers are your audience and you have to build an experience for them. It totally makes sense to understand their psychology and then build around it.

Jeff Watkins: Well, there’s a whole field of developer user experience. People go, “Oh, user experience?” and think about the end customer. But actually, there is another user, and that user is often the developer or the other people in the team. The team is having an experience at the same time, and you neglect that at your peril.

Kovid Batra: Yes, absolutely. Great, Jeff. I think that was really, really insightful and great talking to you. Any parting words for our audience? 

Jeff Watkins: I think, whenever you’re next looking at doing something new, think about how you could be doing it smarter, because you can’t beat the iron triangle any other way, in my opinion. It’s about thinking about doing things smarter.

Kovid Batra: Amazing. Perfect. Thank you, Jeff. 

Jeff Watkins: Thank you very much for having me today. It’s been wonderful.

Kovid Batra: It’s our pleasure. All right. See you. Bye bye. 

Jeff Watkins: Bye.