Modernizing from the Mainframe: An Exploration of Distributed Systems
Director, PwC UK
Chris Stura, Director at PwC UK, has spent over 20 years in the world of software, wearing many hats from Software Engineer to Chief Architect to Director of Technology Delivery.
During his time at PwC, Chris’ projects have spanned helping banks migrate from legacy systems to modern architecture, working on the performance analytics software Cloud Cost Assurance, and technical consulting for financial services and capital markets.
Join as we discuss:
The banks are really facing this huge transition: they've got these old legacy mainframe systems, and I think that's coming to a head. They're starting to think, okay, I'm going to need to redevelop these systems that were originally engineered by engineers who should now retire, and there isn't a real way to maintain those-
What is up everyone, and welcome to the latest episode of the Big Ideas in App Architecture Podcast. Our guest today is Chris Stura, who works as a director at PwC. In today's podcast, we talk about Chris's amazing career and work at PwC, his passion for distributed computing and open-source products, and advice on how engineers and architects should go about implementing the latest tech trends for end users and their customer experience. So, pump up that volume for an insightful conversation with Chris Stura. All right, awesome. Well, welcome to the podcast, Chris. How are you doing today?
I'm doing great. I'm doing great. It's been a bit busy, but I'm happy to be here.
Awesome. Well, busy is usually good. When it's not busy, that's when you're like, well, I'm not sure what to do. Especially for someone like you who's been so active in the distributed space, and the open-source community, and working on cool things at PwC, right?
Yeah. Well, there are different types of busy, I think. Today was more of the busy, talkative type of thing. Busy coding is a different area. It's probably the one that I prefer a bit more, but nevertheless, both are good types of busy, I suppose.
That's awesome. That's awesome. Well, what we'll do is we'll just jump into this podcast conversation. First of all, I just want to say thank you for accepting and finding some time for us to get on this podcast. And while I was researching you, I was looking at what you've done in your career.
So, you've had this amazing career, almost 20 years wearing different hats, from software engineer to chief architect to now being a director at PwC. What I was thinking about was, 20 years ago, how did a young Chris decide that this is what he wanted to do? So, maybe we'll start the conversation by getting to know what motivated you to get into tech, and just where you started.
It actually stretches back further than 20 years. 20 years is what's on my LinkedIn. Go back a bit, it's probably 24, 25. So, actually, I love programming on computers, and part of the reason for that is that I think I wanted to play video games, but my parents wouldn't buy them for me. So, at a certain point, I decided to write my own. I picked up Sams Teach Yourself C in 21 Days.
I used to write them myself, and part of the reason was that I couldn't get them any other way. So, that's the way I got into coding. In terms of why I chose technology, actually, it was when I was at university. I was a bit torn and I wanted to do a mixture: it was either English literature or software development, which is an interesting mix of the two things.
But in the end, I opted for software because, simply, there was more money in it. There isn't much money in being an author unless you're famous. And I did the statistics on that, and yeah, it just tells you a bit about my analytical mindset; I've always been a bit more analytical than creative.
That's awesome, you made the right choice.
That’s awesome. So, when you started your career, did you start off as a software engineer directly? Was that what you wanted to do and that was what you followed?
Yeah. Yeah. So, I mean, I'd always worked to try and be a software engineer. The very first program I wrote was actually off my own bat. I wrote an inventory management system for a small shop called S and G Computers that used to repair computers in San Diego and assemble the old PC clones back then.
And I built this inventory system for them to know what was in the shop, who was coming into the shop, who had things to repair, what their names were, and some of the inventory off the back of that. I had written that in C, and behind that, I'd actually written my own database as well. Because clearly, I couldn't afford to license any software way back then; anything and everything was quite expensive. So, I built that with the Borland C compiler on Windows at the time.
Awesome. I wish you could have told a young Chris: if there had been a lot of open-source databases at that time, things would've been so much easier.
So many things would've been easier if they were done today. I've trained a lot of people in my career and I keep telling them, you guys are incredibly lucky: all the information you could possibly want is at your fingertips. I remember hanging out in bookstores to try and read the books that I couldn't afford to buy, to learn some of the programming concepts and different frameworks at the time, which were actually quite complex. They have it way too easy nowadays.
I agree. I think I have a similar thought. I feel like in today's day and age, the number of places where you can get information makes things so easy now. Of course, we'll talk about this towards the end as well; I was thinking that with ChatGPT, you can actually write bona fide code with just a few lines of prompt.
So, it’s fascinating how things are right now. So, let’s accelerate the conversation to where you are right now, which is you’re now the director at PwC, and how does what you do each day matter at PwC? What is it that you do?
So, actually, I've recently moved roles laterally. I used to work with clients in the banking and capital markets space, and that was a really interesting thing to do because the banks are really facing this huge transition. They've got these old legacy mainframe systems, and I think that's coming to a head. They're starting to think, okay, I'm going to need to redevelop these systems that were originally engineered by engineers who should now retire.
And there isn't a real way to maintain those. I think those systems came from the days when I was programming back in C, building inventory systems. That was a skillset where programming was for the few and not for the many. And those few just aren't there anymore. And the many that we have today don't have the same understanding and the same appreciation for the hardware, and how the hardware and the software bind together.
So, a lot of those systems are not maintainable in the way that they were; they aren't providing the same efficiency and value that they were before. So, it was very interesting to go and chat with banks and insurance companies, understand their transitions, and the journey across from those legacy systems into more modern architectures.
And thinking about how you have to actually change the way applications are designed to meet the challenge. Because distributed systems are the way cloud applications are designed today. I'd call Kubernetes basically the operating system for the cloud: you build distributed systems by default, and microservices architecture patterns lend themselves to that.
And clearly, the nice distributed database systems which we find ourselves with today now offer ACID compliance, things that you couldn't dream of before. I think the best you could get back in the day was the Cassandra stuff, and otherwise, you'd be limited in the horizontal scale that you could have. So, that was a very exciting chapter.
While I was doing that, I worked for another department in PwC and I built one of PwC's only, or pretty much only, software products, something called Cloud Cost Assurance, which is focused on performance analytics and how you build efficient software for the cloud. And if you haven't, it would point out where you haven't done that, and how much money you were wasting as a result.
And what you may be able to do to remediate that, helping customers really slice millions off their cloud bill by adopting that type of software. I was quite proud of that. And that set me up for the role that I'm in today, which is thinking about how, as PwC, we can be building software that's going to help our clients and accelerate their time to market.
So, consulting is a very traditional piece of work: you do everything bespoke for a client, and there isn't much muscle memory or knowledge that carries over. So, there's this new challenge, now that we have generative AI, of thinking about how we can codify some of that knowledge, deliver consistency to our clients, and increase value for the same amount of money. All of that's important, especially given the cost of living crisis we're all in today.
So, I'm very excited to be in that new role, and to see what I can do pushing the codification agenda inside of PwC and driving those types of engagements with more modern architectures: systems which are going to get you going off the bat and clearly deliver much higher quality than we would've been able to before, but that also run on modern technology. So yeah, quite exciting.
That's awesome. So, Chris, tell us about this product that you worked on, in the sense that when you work with different companies in the banking space, how does the product really look at the old legacy hardware and then recommend a journey towards cloud? What did you have to consider when you were building this to give that recommendation?
So, there's a bit of secret sauce, okay. The key to that software is we developed an algorithm; it was actually maths that I wrote up, I still remember the whiteboard I wrote the stuff up on, and we tweaked the algorithm to make it work. But we've come up with a way to combine the core metrics for compute.
So, CPU, memory, disk IO and network IO for the interaction between different systems, because software today isn't designed for a single piece of machinery, it's designed across multiple pieces of machinery. And the real challenge is not how you benchmark the performance of a piece of software on a single machine, that serves no purpose, but rather how it behaves across a big distributed system, and how you get a number for that.
So, we came up with a calculation which gives us the transactions per dollar ratio. And using that, which is a proxy for computational density, we're then able to look at various scenarios, various different architecture patterns, for lack of a better word, brute force them against the computational density ratio that we have, based on some of the analytics that we've built and some of the tuning we've done across the different infrastructure components.
And come up with: this is the best way to do it, it's the most cost-efficient way, it's the way you get the highest TPD. And based on that, we'll then make the recommendations on how you actually move from A to B, and what that's going to save you.
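The actual algorithm is, as Chris says, secret sauce. But the shape of the idea, score each candidate architecture by transactions per dollar and pick the winner, can be sketched in a few lines. All the scenario names and numbers below are invented for illustration, not from Cloud Cost Assurance:

```python
# Illustrative sketch: rank deployment scenarios by transactions per dollar
# (TPD, a proxy for computational density), then pick the most cost-efficient.
# Throughput and cost figures are made up for the example.

def transactions_per_dollar(throughput_tps: float, hourly_cost_usd: float) -> float:
    """Transactions served per dollar of infrastructure spend."""
    transactions_per_hour = throughput_tps * 3600
    return transactions_per_hour / hourly_cost_usd

scenarios = {
    "lift-and-shift VMs": {"tps": 1200, "cost": 40.0},
    "kubernetes microservices": {"tps": 1500, "cost": 25.0},
    "serverless functions": {"tps": 900, "cost": 30.0},
}

# Brute-force the candidates against the density metric, best first.
ranked = sorted(
    scenarios.items(),
    key=lambda kv: transactions_per_dollar(kv[1]["tps"], kv[1]["cost"]),
    reverse=True,
)
best_name, best = ranked[0]
print(best_name, transactions_per_dollar(best["tps"], best["cost"]))
```

The real system folds CPU, memory, disk IO, and network IO into that density number across many machines; this sketch only shows the ranking step.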
That's brilliant. Yeah. So, one of the things that you brought up was the fact that enterprises are typically looking to save and bring more value. So, I know the challenge of cost is there, but what else have you observed as a challenge for enterprises or companies who are trying to modernize?
I know the banking sector is heavily mainframe and that adds its own challenges. So, when you go into conversations, what are the things that you feel companies need to consider when they're looking at moving to modern infrastructure and thinking about distributed scale? What's the motivation?
So, I think there are possibly two answers to that: there's a story on the business side and there's a story on the technical side. You can tell I'm a bit more of a technologist, so I'll answer the technical one first; I think that's quite interesting. So, mainframes are really fast, and they're quite easy to develop scalable applications on, because they have the right components to ensure basic things like management of transactions, rollback, ACID compliance.
They give you 100% uptime, they give you all of this stuff for free: really big boxes that manage this at a hardware level. So, the type of software that you would build on those, you could make a monolith scale, and you could do so quite easily. Provided, of course, you knew how to develop on mainframes and knew how to use the subsystem components.
Move out into the distributed age, and all of a sudden, components can fail at any time. And you see those early Netflix designs: ultimate resilience, high scalability, with things like Chaos Monkey running and destroying things just to prove that it will be up all the time. You've got the same sort of model, but very, very different architecture patterns and designs.
So, what you had to wait for, to enable this for enterprise systems, is: how do you bring that very complicated architecture and make it consumable by the masses so that the transition can take place? And I think that took quite a bit of time. So, Kubernetes was clearly a building block: distributed computing systems, the ability to break down the problem into small components, loosely coupled architectures, the ability to deploy different components at different times without any downtime.
So, a big enabler there, and then of course, you had to solve for the data layer. For the longest time you had distributed databases, yes, but they were eventually consistent. Only in recent times have you started to see some of the ACID-compliant distributed systems come into place, and some of the more modern distribution protocols come about.
And then, you've got patterns around those two components. Cockroach is clearly one of my favorite solutions: a distributed SQL database based on Raft, absolutely brilliant, I quite like that. And then, of course, you have enabling components like Kafka and Zookeeper. Kafka in its own right is quite unique in the way that it manages scale.
And it allows you to build software components that are more reactive than the traditional imperative programming paradigms. So, that was a really big enabler for building these large-scale distributed systems. And then, of course, I think the father of all of these was the Hadoop ecosystem, which started the big data revolution with some of the components that came out of that open-source initiative.
Yeah. I was going to say, when you were saying that, I was just thinking I had a similar struggle when I was working on applications. I've used Spark heavily in my career, working on data sciency stuff, and we would do a lot of data engineering. So, I know how Hadoop was amazing, but then Spark came in at the right time to do in-memory processing.
And everybody was like, "Wow, this is way quicker than what we expected from Hadoop." And again, talking about distributed databases, I used Cassandra for a long time and was part of the Cassandra community. And I would go into creating new software, new requirements, and I would see that consistency was something I was looking for, and eventual consistency would not do it.
And my introduction to Cockroach was basically because I was looking for something that had consistency, but at the same time was distributed and gave that performance. So, I feel like that is the paradigm we have naturally reached. Because in 2006 and '07, those two influential papers were written at AWS and at Google that led to the formation of something like Cassandra.
I feel like at that time, nobody thought about consistency. They were looking at, well, how can we squeeze the most transactions and data into the machines we have in a distributed way, but never considered the scenarios where you need consistency. So, I definitely feel like that's a challenge, and I'm glad that you and your team are tackling that for enterprises.
When you talk to architects, what is the biggest challenge that you see when you tell them, hey, you need to consider distributed architecture, when they have worked on something monolithic for such a long time?
So, I think this goes back to the earlier question: there's a business problem behind change, and this problem is actually associated with risk. A lot of these systems that they're trying to fix and replace are very complex. And as I mentioned, there are differences in monolithic architectures: things that you'd find on the mainframe, things that you'd find in traditional three-tier systems, et cetera.
Those are very difficult to break apart, and to break them apart successfully without breaking the basket of eggs, let's say, is very, very difficult. So, I would say that the majority of people I talk to are challenged by this amount of change. They're challenged by the risk it introduces, and often are quite wary of being able to communicate that change and manage the risk around it.
And hence, you do see slowness in the ecosystem in terms of modernization. And to a certain extent, you have to wonder to what point traditional businesses will be able to sustain not changing, to avoid the risk, whilst new entrants are building on modern platforms which are distributed and of much, much lower cost and complexity, with higher availability and a series of other advantages behind them.
I think what we're seeing in reality is that the digital channel has greatly expanded. So, digital is now first, it's part of the fabric, it's part of who we are, and this is ever more true for the up-and-coming generations. So, I think now is the time for a lot of that modernization to take place. But what I am still seeing in business is a lot of risk aversion, especially on the change agenda.
And I think there is difficulty in quantifying or qualifying the motivation for that change. I'm talking about defining the core business cases around changing a piece of software which has worked for decades. It's very difficult to bite the bullet and say, "Yeah, we need to do that." You find instances of organizations that are doing it, but equally, I think you find far more which are still sitting on the fence and waiting to see what happens.
Right. So, when you talk about that challenge and people having that risk aversion, what are the key things that you identify as the selling factors for distributed architecture that you present to these people, that help convince them, "Okay, I know this software has worked for such a long time, but what Chris is saying makes sense, this is where the future is"? So, what are those factors? I know you brought up consistency, and when you talk about business cases, you just mentioned high availability. Could you expand on that a bit more?
I think you have to give them comfort that the new systems are going to do what the old systems do. And the next biggest problem is how you modernize progressively. Modern architectures allow you to upgrade without risk, but that only works if the entire system is modern.
So, one of the biggest challenges is how you architect in that hybrid way, so that you enable the transition from one architecture pattern to the other, acknowledging that it may take a large quantity of time. At the same time, you're going to have an evolution of the newer software paradigms, whilst having the necessity to maintain the old.
So, it's managing the pace of that transition such that you can move across progressively, which I think is one of the biggest challenges that you see. And it's the type of modernization which I would go out and propose to my clients. You can't really do things big bang: that's very risky, very expensive, and you don't know whether it's going to work.
On top of that, there are the risks around waterfall. You're migrating something which, basically, is a train that's running. So, you need to get on that journey and start to do it progressively, so that you can enhance and follow the market, because the market is-
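The hybrid, progressive transition described above is commonly implemented as a strangler-fig facade: a thin routing layer sends already-migrated capabilities to the new stack while everything else still hits the legacy system, so the cutover happens one endpoint at a time. A minimal sketch (the endpoint names and backend labels are hypothetical, not from any PwC engagement):

```python
# Strangler-fig sketch: a facade routes each request either to the modern
# stack or to the legacy mainframe, depending on what has been migrated.

# Endpoints already moved to the new architecture; grows over time.
MIGRATED = {"/balances", "/statements"}

def route(path: str) -> str:
    """Return which backend should serve this path during the transition."""
    return "modern-service" if path in MIGRATED else "legacy-mainframe"

print(route("/balances"))   # already migrated
print(route("/payments"))   # still on the old system
```

The point of the pattern is that adding a path to `MIGRATED` is a small, reversible change, which is exactly the risk profile Chris argues for over a big-bang rewrite.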
A hundred percent. So, I was also thinking, in the last 10 years especially, we have had the advent of cloud and distributed technology, but there's also been a push towards serverless. A lot of people are like, "Hey, let's not even put these things together. Let's use serverless." How much does serverless come into the conversation for you? And even with the PwC software that you have, Cloud Cost Assurance, do you consider serverless as a solution that you present to these people?
I think it's interesting that we call it a solution. I like to think of technology as tools. So, you've got a hammer, you've got nails, you've got some wood, and ultimately, you need to assemble a solution to what is a business problem. The solution is really how you solve the business problem: a piece of software assembled from the toolbox, and the combination of people that then service that demand.
Serverless is a tool like all the others. And you have to think about its computational density, and I think you have to pair that with its economics. From a compute density perspective, serverless is very high, because the unitary function only runs for the time in which that function is necessary to run. So, the density is very high.
But the economics that govern serverless can be different depending on the provider and on the way in which you deploy serverless. Look at the differences between AWS's Lambda versus Knative, a framework running on Kubernetes. In one case, you have an instantiated baseline.
Clearly, you're able to utilize the nodes on an exceptional basis, but you do have a base overhead to run that. Lambda has no overhead, but it has a cost per execution. Depending on the volumes that you're running through the system, each of those patterns will be more or less effective in delivering the right financial results.
So, the thing is that there isn't really a "by default, do serverless." There isn't an easy answer to that; it depends on where you are. If you're a startup and you're unsure about what you're building, serverless is a great idea. Is that going to be the answer going forward? No, maybe not, right? Because as the volumes increase, it may become more efficient to provision the infrastructure permanently to service the demand of your clients.
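The Lambda-versus-Knative trade-off Chris describes is essentially a break-even calculation: per-execution pricing with no baseline, versus a flat provisioned baseline that absorbs any volume. A toy model makes the crossover visible; all the prices here are invented for illustration and are not real AWS or Knative figures:

```python
# Toy cost model: per-execution (Lambda-style) vs flat provisioned baseline
# (Knative on a cluster you already pay for). Prices are illustrative only.

def serverless_cost(requests_per_month: int, price_per_million: float = 20.0) -> float:
    """Pay-per-execution: cost scales linearly with volume, no baseline."""
    return requests_per_month / 1_000_000 * price_per_million

def provisioned_cost(requests_per_month: int, baseline_usd: float = 400.0) -> float:
    """Flat baseline regardless of volume (autoscaling ignored for simplicity)."""
    return baseline_usd

for volume in (1_000_000, 10_000_000, 50_000_000):
    cheaper = ("serverless"
               if serverless_cost(volume) < provisioned_cost(volume)
               else "provisioned")
    print(f"{volume:>12,} req/month -> {cheaper}")
```

At low volume the pay-per-execution model wins; past the break-even point (here 20 million requests a month, purely an artifact of the invented prices) the provisioned baseline becomes cheaper, which is exactly the startup-to-scale arc discussed above.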
A hundred percent. I think I agree with that thought as well, because I've observed that when you start something from scratch, serverless is definitely a good place to start. But as you scale, the cost and the value of serverless depletes a little bit, and you may want to consider provisioned infrastructure where that makes sense.
So, let's pivot to a question that I've been meaning to ask you for a while, since our last conversation just prepping: you are a great enthusiast of distributed systems, but you're also an avid contributor to open source. So, tell us how you got into open-source technology. I know you were writing your own database, but tell us more about how you got into open source, and what value it has brought to you personally, in terms of the effect it's had on you and your career.
Open source has always been something I believed in. And I think, going back to the library story, where I'd just sit in libraries to read books: access to knowledge and the access to build new things has to be free. And open source enables this. It enables people to collaborate without boundaries. It enables innovation.
And if we look at innovation: where are these distributed systems coming from? Where is this new technology coming from? It's coming from the open-source community, because it's about swarms of people coming together and doing things they're passionate about. And I think it's one of those things: when people do things out of passion, they just turn out better.
And in terms of the open-source community, that's exactly why I continue to contribute today across a variety of things, including owning my own project, the Eclipse Jemo Project, which you can go and check out: an open-source distributed compute engine for Java, with horizontal scale over verticalization of compute workload, with little to no footprint or overhead, based on Kubernetes or bare metal, trying to solve the problem of Java startup times and to make better use of the JVM.
But you've got a variety of these things: Knative was a similar solution, maybe with a tighter overhead, but a broader use case across different functional areas. And then, you've got other passions. I still like to write in C, and the Postgres community, and I'm passionate about data distribution and the Spark community. But it's the ability to be part of these organizations, whether it's a small open-source project that starts from nothing, or the more mature open-source projects with their politics and governance.
It's incredibly exciting to be able to be part of what is the future of technology. And you don't need a job to do that. You don't need anybody to accept you. You just need to write good code, and that's it. And it's that low barrier and that meritocracy which I think is just fascinating to be part of.
I agree. I think it's fascinating how open source has evolved. There was a time when everybody was really careful about picking open-source technology to start off with. But from there, we have had these amazing communities for Kafka or Cassandra. Even we at Cockroach have a great community of folks who are using the open-source product that we provide.
My personal experience has been with Spark. I used open-source Spark so much that I reached a point where I had to start using other things because I felt, okay, I'm too heavily involved in Spark. So, for you personally, what was the first open-source project that you tried? I'm just curious to know what that was.
The first bit of open source I contributed was a CRUD generator for PHP. I don't remember exactly, it's probably still on GitHub, but basically, it was a tool that would generate PHP scaffolding for Postgres, I think it was the 6.5 database at the time, and it would give you full end-to-end CRUD capabilities, basically built off the database schema.
So, it would go and read the database schema and assemble the CRUD application, and on the basis of that, generate the code so you could modify it, and then off you go. And everybody was using PHP to build CRUD applications at the time, so I thought it was a good thing to give back to the community.
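The core of a generator like that is mechanical: read table metadata, emit the four statements. Chris's original tool emitted PHP against Postgres 6.5; the sketch below just shows the schema-driven idea in Python with generated SQL, and the table and column names are invented:

```python
# Sketch of schema-driven CRUD generation: given table metadata, emit
# parameterized INSERT/SELECT/UPDATE/DELETE statements a scaffold could use.

def generate_crud(table: str, columns: list[str], key: str) -> dict[str, str]:
    """Build the four CRUD statements from a table's column list and key."""
    cols = ", ".join(columns)
    placeholders = ", ".join(["%s"] * len(columns))
    sets = ", ".join(f"{c} = %s" for c in columns if c != key)
    return {
        "create": f"INSERT INTO {table} ({cols}) VALUES ({placeholders})",
        "read":   f"SELECT {cols} FROM {table} WHERE {key} = %s",
        "update": f"UPDATE {table} SET {sets} WHERE {key} = %s",
        "delete": f"DELETE FROM {table} WHERE {key} = %s",
    }

crud = generate_crud("inventory", ["id", "name", "qty"], key="id")
print(crud["read"])
```

A real tool would pull `columns` out of the catalog (today, `information_schema`) instead of taking them as arguments, which is exactly the "read the schema, assemble the application" step described above.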
That's awesome. I'm glad that you're not just one of those people who wants to use what the community brings out; you also want to contribute to the community and help shape it. You've been really involved in the Kubernetes community, and I was watching and reading your LinkedIn posts on Knative and some recommendations around infrastructure, where you're saying, "Hey, why can't we have a minimum of four MB of RAM to just run some infrastructure?" Tell me more about that.
So, there's an interesting debate. I was having this with one of my colleagues, he comes from, I think, the Cloudera world, a guy called Mitchell. I highly recommend him, he's really great. And we were talking about the need to eliminate the boilerplate. One of the things about Kubernetes is that there's a whole lot of config, and a whole load of boilerplate, and you've got the operating system underneath, and the libraries, and the footprint is just really, really high.
And we're looking at this, and in reality, all you want to do is execute some business logic, which tends to be 20, 30 lines of code. Why is there all the faff, right? And if I think about a serverless platform, a serverless platform is really just about loading a library, instantiating some scaffolding around it, running it, and then exiting.
And if you think about this at a very basic level, you could do this with a C++ application and FFI, and it would occupy a few kilobytes of memory. So, one of my thought processes, actually, as part of the application framework and some of the designs I was putting together: I was looking at the base requirements and I'm like, "Well, that's a whole load of resources just to instantiate, just to be able to run something."
If I think back to the old days, you used to be able to run entire email servers for 11,000 people on 386s. So, clearly, the computers have become faster, but the software has become far more inefficient, and there's a question of which has gone faster than the other.
A hundred percent, yeah. So, I saw the debate and I was curious that you're on the side where you're like, "Well, why can't we just get things started quickly?" Am I right?
Yeah. Yeah. Loading a library, loading a shared object, takes milliseconds, microseconds even. Running that within the context of an existing application where the runtime has already loaded: again, microseconds. So, there really is no reason that I can see as to why you would need all the scaffolding.
Got it. Yeah.
I mean, think about the way Kubernetes works: I can instantiate multiple pods to do that. I can share disk between them. They can all have access to the same infrastructure and applications. You have dynamic routing, you've got service catalogs, you've got all the rest. So, there are a variety of different ways in which you can do this and still make it safe, secure, scalable, immutable, et cetera.
And you could put all the scaffolding on top: REST web services, protocol buffers, anything you like, because that all sits within the framework executing the business logic.
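The "loading a shared object takes microseconds" point is easy to demonstrate. Python's `ctypes` does a dlopen-style load and an FFI call in a few lines; on POSIX systems, `CDLL(None)` exposes the symbols already mapped into the running process, libc included, so no extra container or framework is involved (this is a generic illustration, not Chris's framework):

```python
import ctypes

# On POSIX, CDLL(None) hands back the symbols already loaded into this
# process, so calling into C is just a symbol lookup plus a function call.
libc = ctypes.CDLL(None)
libc.abs.restype = ctypes.c_int     # declare the C signature: int abs(int)
libc.abs.argtypes = [ctypes.c_int]

print(libc.abs(-42))  # the C library's abs(), invoked through FFI
```

The marginal cost of executing that 20-to-30 lines of business logic this way is tiny, which is the contrast being drawn with the operating system, libraries, and config a typical containerized deployment carries.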
I think that's the key, right? Because when you're passionate about building something, and I've seen this in other communities as well, you sometimes have a single-minded thought process, and sometimes the business logic is not considered as much, or we forget about the struggles that somebody who's going to use this in a project may have.
And what you bring to the table is some really good real-life experience on how enterprises would want to use this as they put the software into production. So, tell me about an instance where you… I'm asking this question because I'm always curious about the bad things that can happen when you use a piece of software that you really believe in.
So, tell me about an instance from your career where you really believed, "Hey, this is the right direction," and then you put that into production, and something really went off and you were like, "Oh my god, this is not what we thought was going to happen"?
That's an interesting question. Actually, there is a story about that. At a certain point, I was looking to… and it should have been something simple. At the time, I was working for a small company, and one of the services that we offered was email servers. So, everybody's email at the time [inaudible 00:30:22].
And there was a new project based on the Avalon framework from Apache, I don't know if it exists anymore. And there was an email server called James, which was built on this framework. It was quite new, but it had the advantage of being in Java, and it had the advantage of being flexible. It had an aspect-oriented programming paradigm, it was composable.
Logically, it looked pretty cool. And so, we said, “All right, well, let’s use this as the email server.” And that thing was plagued with problems. We ended up having to write an enormous amount of code, and plugins, and whatnot to make it work. And there were two sides to that story: it was interesting to go and fix it, but at the same time, it was something that should have just worked, something you would never have wanted to get into, or modify, or change in any way.
Right. And I think the key when you’re building software is to consider how easy it would be to roll back. I’ve had instances where we believed in a project, took the code, and started using it. It works really well for the first day, the second day. And then suddenly, on the third day, you start having issues.
And you’re looking for documentation on how to roll this back, because the community never thought about putting that documentation together. So, I’ve had instances like that as well. All right, so let’s pivot to a question that any podcast now is going to have: what are your thoughts on generative AI, and how are enterprises, especially in finance, and PwC thinking about leveraging it in terms of the grand architecture?
That’s interesting. So, actually, it’s in the PwC slogan around technology. PwC has a human-led, tech-powered ethos, and I actually think that’s the best way to describe where we stand with generative AI today. If you go back to the day, way back, Sun Microsystems said that a good way to introduce inference into software was something called sensible defaults.
And so, the software would suggest a series of things and you would say that looks like the right thing. And of course, at the time, it didn’t learn anything, and the suggestions were quite simple and whatnot. I think generative AI is in the same place. It’s a productivity tool. So, what it does really, really well is deliver quite verbose, sometimes accurate, sometimes not, answers to very short pointed questions.
And as human beings, we like verbosity, eloquence in language. That’s part of the reason why my other passion was English literature: it’s nice to describe things and listen to the tonality, that brilliant sort of thing. Equally, when you communicate with pretty much anybody, even as we’re doing today, I’m going to use big, nice words, try not to repeat the same things twice, and there’s a certain tonality to that, and people like that.
So, in the same way, I think generative AI is a brilliant enabler all along. It helps us to write code from concepts. That’s brilliant. Does that take away from, say, wanting to optimize a particular piece of code so I can shift memory around in low-level C? Probably not. It’s not really going to do it the way I’d like, because I’d like to see it written in a certain way, with certain macros.
Is it going to give me the scaffolding to then start to change it to make it look like something that I want? Yeah, it’s going to help me to do that. It’s going to make it a bit faster so I can get around, and get on with the job, and move on to the next exciting thing that I want to do. So, it’s going to allow us to be that more productive equally.
We all have a series of bullet points of things we want to say. How many times have we toiled over putting that into an email? Again, a beautiful, beautiful use case for generative AI: we give it the concepts, it comes up with something, we tweak it ever so slightly, and off it goes. So, it allows us to do our jobs quicker, faster, better. And I think it’ll only improve.
So, I really do see it in that productivity space, and it’s part of the area where I look to, when I think about how do I bring that to business, let’s look at some of those areas where you have those interactions. How can we enhance those interactions? How can we enhance the way in which our employees work? How do we make their lives better by introducing generative AI so that they can do better things?
They can make your customers happy. They can deliver a better service. And I think that’s really what generative AI is going to enable for us. We’re all going to be able to get that time back to deliver the quality that we need to do, which is going to overall, make the world a better place to live in.
That’s awesome. Yeah, I think I also look at it in a similar way around how it helps. It does provide great boilerplate code that you use and get started with. But at the same time, I think your point on human-led ethos of having human intervention is so critical too, if you are trying to build anything unique. So, one of the questions I also wanted to tee up was around the ethics of AI.
And I know you have been an advocate of looking at that. You were recently looking at a post by Sam Altman about a month ago, and I was also following that. So, how do you look at the ethics around AI and generative AI, and how to use it? Especially when you work out of the UK, where data security and data privacy are very important, how do you see all of that coming together with AI and that idea of ethics?
So, there is this concept of data sharing, right? I think you have to be as careful sharing data with AI as you would be sharing data on social media. Once it’s out there, it’s out there. That, I don’t see much of a challenge with; we make it a bigger problem than it naturally is. If we use common sense and think about whether we would share that with the world,
you’re probably going to go the right way: if the answer is no, you shouldn’t. Now, equally, generative AI is built on models, and you can train LLMs for use within a business so that you can share those within a smaller community. Those are enabled, and you should absolutely do that to enhance the productivity of your business so that you can have wider [inaudible 00:36:34].
So, it’s a time-and-place thing. For certain things, public LLMs are going to be useful, the same way social media is today, or search engines, for that matter. Anything in the public domain is clearly going to be accessible via the search engines anyway, so your opinions are everywhere. The twist with generative AI is that you’re now adding knowledge into the community.
And that knowledge can then be regurgitated in a format which is, let’s say, non-IP-protected. There may be a bit of legislation now catching up with that. Copyright law is quite old and may need to modernize for generative AI to work. So there’s a legislative angle to it, but largely, the ethics around its use, in terms of not losing your data, is something that can be managed with a bit of common sense and a bit of guidance.
I think one of the things that’s more worrying is what impact it’s going to have on how we go about structuring and optimizing our businesses. Every wave of modernization, all the industrial revolutions and whatnot, had winners and losers. And the question is how we can be a bit fairer about who those winners and losers are, and how we learn to share the wealth, something we haven’t been good at in the past, and something which isn’t really wired into the fabric of capitalism.
So, one of the things I’ve said before is generative AI will make the world a better place if we enable better customer experiences. If we deliver the same customer experiences or worse because we can get that without people, then we’re not going to have advanced any further. In fact, we’ll have only gone backwards, and I don’t know who that benefits. So, I think we just have to be conscious in enterprises and businesses as to how we deploy that to make sure that we’re upping the game and not the contrary.
Right. So, you’re basically looking at avoiding the scenario where, just because something is trending and looks fancy, you apply it even though your software might not need it. Do you think that’s also the case with AI, where everybody is talking about it and using it because everybody else is, but they might really be struggling to find a use case where it needs to be applied?
That’s an interesting one. I think incorporating the latest thing is sometimes good, sometimes bad, depending on whether it’s applicable. Now, generative AI is an interesting one because it’s got so many use cases. So, are you ramming it in? It’s a difficult question to ask. But then again, I’ve seen lots of people ram frameworks in all over the place.
I’m certain anybody who’s ever programmed in Spring has seen the orgy of design patterns that comes out of the applications written within it, even when they’re not necessary. So, there’s always good and bad use of technology, and will people inevitably get it wrong? Yeah, maybe. Are there a whole load of use cases for generative AI? Yeah, absolutely.
And as with all things, good business sense, experience, and feedback from customers will tell us whether incorporating it has been good or bad, based on whether they find it useful or not.
Awesome. Yeah. As you were saying that, I was thinking: with your amazing experience, recommendations, consulting background, and work in open source, what would your advice be to listeners, the architects and engineers who see different AI projects or solutions that they feel they need to apply to their product?
What is your advice to them around how to make that decision? What would your one-two-three-four-five steps be: “Hey, look at this, look at this, and then, if you feel this makes sense, apply it to the project you’re working on”? I’m curious to see how you approach it and what your advice would be.
Well, my advice is more to ask and listen. As software engineers, architects, the engineering community in general, we’re terrible at listening to the people that use our stuff. And I think one of the things to be successful at generative AI, and artificial intelligence in general, is you really have to get a sense for how people are using your software.
Once you have that sense, you’ll figure out where AI fits best, how it fits best, and where it’s going to deliver value, as opposed to just being there because we want to say the buzzword and put it in our marketing material. It has to be tied to the user experience. And in general, yeah, I think we have gone a bit downhill in terms of the user experience.
With the exception of a few companies which are user-centric, and clearly laser-focused on how the user is going to feel, how they’re going to interact, how great the software is going to be for the end user, I think we’ve been quite industrialized and requirements-led, and not always had the end user at the heart of what we’ve been delivering.
And I think all engineers are guilty of this to one extent or another. The difference is, when you’re building an extra bit of scaffolding or implementing an extra pattern in Spring, the end user is not really going to see that. So, it’s probably not great, but okay. On the generative AI side, hammering it into some place where no one’s ever going to use it may be more obnoxious and do more harm than good. So, there’s probably a bit more listening necessary there.
Right. So, the biggest advice is: make sure you’re listening to your users, listen to the business use case, or understand the business objectives. Is that what you think is the best approach?
End users. Because businesses, again, are not always attuned to customers and end users. Got it, a hundred percent. So, I know we have a few more minutes left before we go. Tell us more about what you’re passionate about when you’re not working on technology.
I know you’re an active technologist and thought leader, and thank you so much for sharing all these amazing thoughts. But what else are you passionate about? When Chris has put his laptop down or put his mobile down, what is Chris’s favorite activity?
So, on the tech side, I actually still quite enjoy doing macros, C macros and the like, something I’ve always found fascinating. Outside of that, I love to ride my mountain bike. I’m a big motorcycle fan. I’ve got all sorts of motorbikes, including an electric one, which I think is awesome.
I love bikes, I’ve got petrol bikes and all sorts of them, and they’re all wonderful and fascinating in their own right. So, just trying out different things, maybe that’s just engineer’s curiosity. And then, of course, I’ve got three wonderful kids at home, and I spend a lot of time with my family. It’s fascinating to see little kids grow up, and it’s one of those things that fills your heart with joy. So, that’s me in a nutshell.
That’s awesome. Yeah. So, as we close, I wanted to ask: how do you continue to learn in your free time, and how do you schedule things so that you can be a father and a technologist, and stay active on all these projects?
Because one of the things many of us struggle with is that there are a lot of different projects and things we’re working on, and then we also have children to look after. How do you keep your passion for learning alive while being a dad and a family man as well?
I think it’s just about being passionate in general. I’m passionate about my kids and my family the same way I am about technology. Passion breeds curiosity, and it makes it not a job but more fun. And I think that’s really the secret sauce: if you care about what you do, you naturally do it well and it doesn’t seem like a chore. The moment it’s not a chore, you don’t mind doing it, and everything just seems to fall together.
Awesome. Yeah. That sounds like one of the pieces of advice I got: do what you love, and what you love will not feel like a job.
Or you won’t have a job, you’ll just be given money to play around, basically, which is how I feel anyway. I’ve worked myself into a position like that, which is a great place to be.
Awesome. Well, thank you so much, Chris, for the conversation and for sharing these insightful thoughts with me. I know we’re almost hitting the limit in terms of time. This is one of our first conversations, and I hope we can have more, but for whatever we could discuss today, I appreciate it.
Once again, thank you for jumping on the podcast. For everyone, Chris is an active member of the community. Can you tell us what that community is, and how folks can find you, or that community itself, Chris, a little bit more?
Yeah. So, basically, I’m all over the place, and probably the best place is LinkedIn, but I post on Twitter and the forums, and you’ll find me over in the open-source community. So, if you’re interested in Spark, Postgres, or if you’ve subscribed to the Eclipse Jemo project before, by all means, it’s quite interesting, it’s got some nice code in there. But I’ll be out and about, contributing different bits and bobs of code all around.
Awesome. Well, thank you so much, Chris, for taking the time to come on, for jumping on this podcast, and for your time.
Big Ideas in App Architecture
A podcast for architects and engineers who are building modern, data-intensive applications and systems. In each weekly episode, an innovator joins host David Joy to share useful insights from their experiences building reliable, scalable, maintainable systems.