title SAP: Bringing the ‘Operating System’ of a Company into the AI Era with CTO Philipp Herzig

description More than fifty years ago, the modern idea of standard enterprise software was born at SAP. Now, after guiding companies through technological shifts from the mainframe to mobile, SAP is at the forefront of closing the AI adoption gap for its customers. SAP Chief Technology Officer Philipp Herzig joins Sarah Guo to talk about how SAP has remained a durable end-to-end “operating system” for its more than 400,000 customers, from finance to supply chain. Philipp argues that the AI transition in businesses should focus on customer outcomes, UI changes, business processes, and the data layer. He also explains the challenges in enterprise AI adoption, including security, scaling, and data fragmentation, as well as the importance of evals and verifiability. They also discuss SAP’s suite of AI products, the limitations of predictive tabular models, how SAP is shifting its pricing models in the AI era, and Philipp’s interest in quantum computing optimization.

Sign up for new podcasts every week. Email feedback to [email protected]

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @pheartig | @SAP

Chapters:

00:00 – Cold Open

00:42 – Philipp Herzig Introduction

01:18 – What SAP Does

02:51 – Why SAP Endures

06:53 – CTO Priorities and AI Push

12:14 – Scaling AI in Enterprise

17:06 – Verifiability and Agent Mining

20:42 – Tool Calling vs. Computer Use

22:11 – Domains Where Agents Deliver Value

24:58 – Limitations of Predictive Tabular Models

29:07 – Barriers to Enterprise Adoption

31:54 – How AI Will ‘Uplevel’ Work

34:03 – How AI Changes SAP’s Pricing Model

36:41 – What Makes a Winner in the AI Era

38:53 – Day in the Life of a CTO

40:08 – Customer Challenges

42:36 – Business Problem of Quantum Computing

46:21 – Conclusion

pubDate Thu, 23 Apr 2026 10:00:00 GMT

author Conviction

duration 2744000

transcript

Speaker 1:
[00:05] Hey, listeners, welcome back to No Priors. Today, I'm here with Philipp Herzig, the CTO of SAP, the enterprise juggernaut. We talk about their AI strategy, why SAP has endured and thrived through several technology transitions, why entrepreneurs are underestimating the challenges of scale, why AI is a business model transition and not just a technology transition, why he thinks that LLMs are not enough for predictive analytics, and even about the traveling salesman problem in the real world and the Strait of Hormuz. Philipp, thanks so much for being with us.

Speaker 2:
[00:39] Yeah, it's a pleasure to be here. Thank you.

Speaker 1:
[00:41] Everybody knows the name SAP, but I do think that for lots of engineers or people who aren't close to the system in a larger enterprise, they don't really know the breadth and function of the platform. Can you just describe what you guys do for customers?

Speaker 2:
[00:59] Oh, absolutely. Look, SAP is the market leader in enterprise software applications and platforms. It has 400,000 enterprise customers, usually running their finance, HR, supply chain, manufacturing execution, logistics, warehouse management, and then of course everything on the customer side: sales, services, commerce, procurement, you name it. At SAP, we always say we have the broadest portfolio in terms of running the business end-to-end. This is where SAP started, giving real-time insight. I really describe it as not just software in itself; it's the operating system of a company, essentially, managing everything from order to cash and from source to pay, end-to-end, for companies around the entire globe.

Speaker 1:
[01:57] I definitely want to talk about AI and LLMs, some of the stuff that you guys are doing internally, and then around predictive models as well. But because the macro backdrop is on everyone's mind, both from a technology and an economic perspective, I want to talk about SAP's position in the market a little bit. SAP has stood the test of time through multiple technology and market cycles. As an early-stage venture capitalist, I'm on the other side of this, where the narrative is: well, when you have internet and cloud and mobile and AI and social, you have an opportunity for new players. Yet SAP even today is, I believe, the largest enterprise software vendor by market cap, versus the last generation of the new guard, the Salesforces of the world. How does that happen? How did you do it? And what makes it so durable?

Speaker 2:
[03:04] Well, what makes it so durable, right? At the end of the day, if you think about this, it's happening a little bit the same way when we talk about the "SaaS is dead" narrative or the SaaS-pocalypse. I have the feeling that in this market, last year AI was in a big bubble and everybody was saying, no, it's not, and now this year it's "SaaS is dead," and so on and so forth. Look, the reality is, of course, the cost of building is now so low, specifically with agentic coding and all these latest powerful models. But something has always prevailed over the years. When SAP was founded in 1972, a long time ago, why was it started? Because in the 70s, when the founders of SAP were still at IBM, what did they do? They went to each customer and implemented the finance system again and again and again. And then they said, hey, this makes no sense, because the economics don't scale. You can do this, but you can only add so much value in any given time, and by the way, we are basically programming the system very similarly each time. Of course, there's always a little bit that is specific to the customer. And this is where the notion of standard software was born, essentially. And that stood the test of time, because there are simply things in companies that need to get managed, end to end. That has also transformed throughout the years. You mentioned it: first from the mainframe to client-server, then to the internet, then mobile, and now of course AI. So the software has changed and evolved along with these technologies. What hasn't changed is what customers are seeking, which is outcomes, right? Outcomes and return on their investment in order to get things done.
And of course, now AI is an amazing technology that again helps to get more things done in the enterprise, and that is what SAP stands for. So what we are really doing, given the breadth of the portfolio and the customers, is helping customers achieve more by deeply embedding AI and AI agents, transforming the user interface, and so on, to help them get more done in whichever industry they are working in. And we believe that will continue, because this is exactly what we're seeing right now: there's tremendous progress, but we also see that AI adoption in the enterprise is still not where we want it to be. There's this Gartner curve, right? There's an AI innovation race, and then there's an AI outcome race, and the gap almost increases rather than getting narrower. That is what we are really focused on: that customers get the outcome from AI to achieve more, given the foundation we have. But simultaneously, we are re-engineering the entire system, with the help of AI, in a totally new way.

Speaker 1:
[06:16] You now as CTO of SAP have like a very broad purview. It also includes the AI strategy piece of it, internal and for your customers. Like, what do you think of as your own top priorities for the organization? And where is SAP on this re-engineering or reimagination journey?

Speaker 2:
[06:39] No, look, we are in the meantime all in on AI. Everybody in the company is using agentic coding, because that's an amazing productivity boost for our developers, no matter which programming language they are building the software in for the customers. But it's also really, again, about focusing on customer outcomes. We've seen this, for example, in the early days with consulting. We built this thing called Joule for Consultants, which is phenomenal; it's one of our fastest-growing AI products, because what it actually does is help the consultants in an SAP project or in a complex landscape. We are serving some of the largest customers; they have a lot of heritage and a lot of complex landscape, and it helps them move into the cloud and adopt the latest AI capabilities. With Joule for Consultants, they can reduce their efforts by 30% and get to the outcome faster, which directly reduces not just the time but also the costs necessary to get to the latest software. We've seen this with Concur, for example, where our travel booking agents and our expense agents are now live. There are many of these outcomes that we are designing. But from a CTO perspective, in my mind there are three things that are, not disrupted, but massively changing. To me, the metaphor is a little bit like when we moved from on-prem to the cloud. Originally, everybody thought, hey, we just take the on-prem software we already built, put it on the internet, and call it cloud. But only over a certain period of time did people realize, oh, what does CI/CD actually mean? You can deploy daily or multiple times per day. Then you realize, oh, we always had multi-tenancy in the on-prem software already.
But then, of course, in the cloud, you have to learn how to scale it up and down. All of a sudden, the software got re-engineered to build real SaaS software with all the concepts of cloud computing. With AI, the same is happening, and it is happening on three levels. It happens, of course, on the UI side. The time is clearly over where you design dumb software that requires the intelligence to sit in front of the computer. If you look at classical software, what did you do? You designed a user interface; hopefully you did some user research and tried to figure out the easiest or most intuitive way to teach a human how to get their tasks done by clicking through the UI, essentially. This is over. We call this generative UI: the UIs get dynamically generated. If you have analytical questions, for example, or if you want to do your deep research, not just the deep research you find on Perplexity or the usual chatbots, but deeply rooted: say tariffs are being introduced, or new taxes, or something happens in the Strait of Hormuz, what does this mean for my supply chain? Then you can analyze this in conjunction with your SAP data. There are a lot of exciting opportunities and new things you can build that we only dreamed of in the last 20 years, at least since I've been a developer, where the system becomes much more multimodal and much more proactive, because it can run overnight. Then when you wake up in the morning, it tells you, hey Sarah, have you looked at this? Here's a problem on the sales side, maybe the order entry is going down, you should do something here, and here are already some recommendations. Or here's a problem in the supply chain, because if you're an oil and gas customer, obviously you want to know what your options are so you can re-plan. All these things become super important for customers, and that changes the UI. Then the second one is, of course, the business processes, like order to cash in the past.
Of course, it has variances and so on and so forth, but it was a rather rigid process, like the standard operating procedure of a company. Now, with these agents, we can blend the structured and unstructured worlds more seamlessly, to actually get more work done. So this whole move from software-as-a-service to what some call service-as-a-software, or outcome-as-a-service, that is what these agents are building for us. And then below that, you have the whole data layer. SAP holds a lot of super valuable data for a company, like all your general ledger, your invoices, your warehouse and inventory information, etc. But you now want to combine this with the plethora of other data, in order to build this one harmonized, semantic view, because, we always say, AI is only as powerful as the data. That is exactly what we are doing and transforming on the data side, to help our customers benefit from a globally harmonized data model to fuel the AI.

Speaker 1:
[11:37] What is the biggest engineering or technical challenge for you guys when you look at these three bodies of work or anything else that you're doing in SAP?

Speaker 2:
[11:48] Well, the biggest challenge, quite frankly, is actually not the AI so much; it's teaching the AI to do the right thing at scale. Look, two years ago everybody built a RAG service, and with a POC you could easily blow the CEO's socks off: look how easy it is to build a chatbot on 10 documents. But SAP and these large customers always have a problem of scale. With 100 documents it becomes a little harder; a thousand documents becomes a deeper engineering challenge. And now if you go into Joule, say, Sarah, you're an SAP US employee: if you ask a question about travel policy, for example, you expect a very different answer than I as a German employee would get. So you now need to connect this with your master data. Where are you located? In which country are you? Under which payroll are you? Which taxes apply to you? And so on. All of a sudden it becomes a very, very tricky problem. Same with MCP. Last year, everybody could build an MCP server; it was super simple to hook one up and do amazing things with it. For 10 APIs, that's not an issue; at 100, you already get context bloat and all these challenges. But we have 20,000 APIs. So it becomes a problem of scale, because it's so huge, there are so many things. And you have to do this really end-to-end for the customer, because what we also build is an integrated experience across the portfolio. You can ask a finance question, an HR question, a supply chain question, and correlate them. This is the biggest challenge: bringing that together and designing it for the right outcome. You mentioned this too; it was interesting.
Another interesting thing, from my perspective: you had this other podcast recently, I think with Andrej. The most important thing from a development perspective is that people actually start writing their evals. I've been on this topic for a very long time. Why does agentic coding work so well, Sarah? Because, of course, you can verify the outcome. You can say, hey, does the program compile, do your unit tests pass, does it work, etc. Combined with a little bit of taste and a lot of hard engineering work, Anthropic and OpenAI built these phenomenal code generation models. The problem is, if you now want to build a reliable outcome in finance and so on, you need the data that says, hey, with this input, that's the output, so that the coding agent can validate and assert against the reliable outcome. That's a mindset shift in terms of how you describe the right boundary conditions to your coding agent. The harness, all the boundary conditions, need to hold from a security perspective and from a data privacy perspective, plus all the code quality, because you still want to maintain that code on day 2 and day 3 and day 4, not just get the first version vibe-coded. Then, of course, these evals tell you, hey, this agent is actually doing what it's supposed to be doing, in a variety of ways. Sometimes you have to laugh. Do you still remember, when I was a computer science student, the Google guys came into a lecture and said, hey, I can go home at 5 PM because I wrote my tests first. Of course, that was new to us back then. Remember that, test-first or test-driven development?
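The eval discipline Philipp describes, "with this input, that's the output," can be sketched as a tiny harness of verified input/output pairs that an agent's results are asserted against. The toy agent, the cases, and the tolerance below are hypothetical stand-ins for illustration, not SAP's implementation:

```python
# Minimal sketch of eval-driven agent verification: a set of verified
# input/output pairs that an agent's results are asserted against.
# The agent and the cases are invented; a real harness would also log
# traces and cover security and privacy boundary conditions.

def toy_invoice_agent(invoice):
    """Hypothetical agent: computes gross amount from net + tax rate."""
    return round(invoice["net"] * (1 + invoice["tax_rate"]), 2)

# Verified cases captured from the system of record.
EVAL_CASES = [
    ({"net": 100.0, "tax_rate": 0.19},  119.0),   # German VAT
    ({"net": 250.0, "tax_rate": 0.077}, 269.25),  # Swiss VAT
    ({"net": 80.0,  "tax_rate": 0.0},   80.0),    # tax-exempt
]

def run_evals(agent, cases):
    """Return (passed, failed) counts over the verified cases."""
    passed = failed = 0
    for inputs, expected in cases:
        if abs(agent(inputs) - expected) < 0.01:
            passed += 1
        else:
            failed += 1
    return passed, failed

print(run_evals(toy_invoice_agent, EVAL_CASES))  # -> (3, 0)
```

The point is the shape, not the arithmetic: the verified pairs play the role that a compiler and unit tests play for coding agents.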

Speaker 1:
[15:34] Of course, yeah, it's coming back.

Speaker 2:
[15:37] It's coming back. The reality is nobody did it; at least I never did it, and it was not very popular in the end. Why? Because, A, it was so much more fun to write the code first, and B, usually the product manager gave you a very messy requirement, so it was very hard to actually write the test first. While you wrote the code, you iteratively discovered how the system would actually behave. Now writing the code is so much more automated, because software can be written almost completely on its own. But now you need to describe the right outcome, what you want from this thing. And that changes very much how developers need to work, specifically now that the models have made a step change since December last year.

Speaker 1:
[16:29] It's really hard, or I think it's not obvious, to picture whether there's a version of agents, and the models powering those agents, in enterprise systems like SAP getting better in a compounding way, the way they have in generic code generation. Do you think that's possible in terms of verifiability, the ability to understand and evaluate against that intent? Because, I don't know if I would say it's more diverse than code, but it's not obviously verifiable, as you pointed out. Do you think it can be?

Speaker 2:
[17:12] That's exactly the point. That is where the starting condition is great. I think in terms of two lanes. The first lane is, of course, you have the system of record today. You know exactly, in the system, given this or that instruction, what the outcome is, because you can see it in the database. And then you can construct: hey, if the order-to-cash process runs like this, then you need to expect that the cash, the accounts receivable, comes in this way, with the following taxes, and so on. So that gives you verifiability. Now the challenge is, this is never enough, because if you just look into the system of record today, that data is insufficient for this grand vision that everybody has, where it becomes this autonomous enterprise, where the agency of these agents increases over time. At the beginning, the agents, of course, come back to you. Some people call this human-in-the-loop. They need to come back to you, as Claude Code or Codex still do, and ask clarifying questions: hey, I could go this way, or I could do it that way. And what you want to design for is that you start to capture more of that context. I always call this the tribal knowledge: the stuff that is not stored in the system, that just lives in people's heads, or maybe in Slack channels, in Teams channels, or maybe it was just a discussion on the phone. It's not stored anywhere, so how can you derive a decision from that? So the agent needs to come back and ask you for input, and now you want to store that. What we called process mining in the past, we now call agent mining, because we record all these decision traces, these contexts, what the users are entering into the system. And then you can either use it to say, hey, wait a minute, this is actually an anomaly.
The folks in, I don't know, the UK from our company, or the folks in Australia, shouldn't do this, because the standard operating procedure is this. Or you say, oh, that's actually a very good improvement, and then you can elevate it to be the new standard operating procedure, maybe not just for Australia but for more countries or the rest of the world, to run your company more efficiently, because now you've learned something about how the organization behaves. It can go two ways: it could be a good thing or a bad thing, and then you maybe want to streamline how people actually conduct the process. That then leads to what I call the data flywheel. With every trace, every input a user gives you, with all the observability an agent writes for you, you have new data sources that can lead to new evals, where somebody says, yes, that's a verified output that I want. And then you can optimize the system further toward that outcome, depending on which data you gathered.
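The agent-mining loop he describes, capture decision traces and then flag deviations from the de facto standard operating procedure, might look minimally like this. The trace fields, countries, and majority-vote rule are all invented for illustration:

```python
# Sketch of "agent mining": record every clarifying question an agent
# asks and the human's answer as a decision trace, then flag answers
# that deviate from the majority (the de facto standard operating
# procedure). All fields and rows here are hypothetical.
from collections import Counter

traces = []  # the captured "tribal knowledge"

def record_decision(country, question, answer):
    traces.append({"country": country, "question": question, "answer": answer})

record_decision("DE", "approval threshold?", "2-step")
record_decision("UK", "approval threshold?", "2-step")
record_decision("AU", "approval threshold?", "1-step")  # deviation

def find_anomalies(traces):
    """Flag traces whose answer differs from the majority answer."""
    majority = Counter(t["answer"] for t in traces).most_common(1)[0][0]
    return [t for t in traces if t["answer"] != majority]

print(find_anomalies(traces))  # flags the AU trace
```

A flagged trace could then be reviewed and either corrected or promoted to the new standard operating procedure, which is the flywheel he describes.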

Speaker 1:
[20:05] Do you have a strong point of view today as to whether agents operating against these business processes within SAP or otherwise in enterprise software, do you think it's going to be computer use? Do you think it is all code and tool use on APIs?

Speaker 2:
[20:28] It's an interesting question, and I don't have a definitive answer yet. Given how clunky UIs are, and knowing the challenges of UI automation from the past, it's phenomenal what these systems can already do today, quite frankly, even if they're still a little bit slow. But I still believe that for the most part, the majority will live with tool calling and agents running in the background, because you don't necessarily want to have the browser open all the time. You can do this with headless browsers and so on, but if you can do it with a more structured approach from an integration point of view, I think that will be the preferred method. Then, of course, there will always be cases where an API is not available, or you have a legacy system for the time being, and there these computer-use approaches will tie in nicely as well.

Speaker 1:
[21:34] If we zoom out a little bit and just think about agents and automated business processes, in which domains do you expect customers to see agents be most effective first?

Speaker 2:
[21:50] Well, we need to be clear: it has been, for the most part, very productive in what I call the unstructured world, because, let's face it, large language models are very good in the unstructured world, text and images and stuff like that. So everything where unstructured data is concerned, like services and support, and maybe sales, and anything related to knowledge work that deals a lot with documents. This is where we see it, like the Joule for Consultants product I've mentioned; that's a lot of unstructured information, and it was the easiest place to get quickly to a return on investment. It was hard enough to combine all of this. You mentioned tool use, for example: the models had to learn to get better at using tools. Then you need to build orchestrators and disambiguate: what does an order actually mean? Do you mean a maintenance order, a sales order, a purchase order? Order is a very overloaded, very ambiguous term, and that orchestration logic is a hard thing to build. But now that it's gotten better, you can do things like chat with your data. Instead of going to the data analyst or business analyst who curates some dashboards for you, and in 80 percent of cases that might be a good enough dashboard, but for the other 20 percent of questions you always had to go back to your IT department, now you can just converse in natural language with the system. It translates natural language to SQL, or what have you, pulls the data, and you converse with it until you have the view of the data that you want. Then you just pin it and say, okay, that's actually my problem.
Now I want to manage that problem for the next, I don't know, two or three weeks, until the problem maybe has disappeared, and then you move on and delete that tile, and so on. So this combination of the structured and unstructured worlds is required if you want to go into the tabular world, because lots of data in finance, sales, and the supply chain is stored in tables. Unlocking that took a little bit of time, but now we are seeing it through, for example, the SAP knowledge graph that we've built, which is the glue between natural language and the structured data in the system, to really bring this together.
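The "chat with your data" flow he describes bottoms out in translating a question to SQL and executing it against the structured store. Here is a toy sketch using Python's built-in sqlite3, where the schema, rows, and the supposedly model-generated query are all hypothetical:

```python
# Toy sketch of the "chat with your data" flow: a model translates a
# natural-language question into SQL, the system executes it against
# the structured store, and the result comes back for conversation.
# The table, rows, and pre-translated query are invented examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales_orders VALUES (?, ?)",
    [("EMEA", 1200.0), ("EMEA", 800.0), ("APJ", 500.0)],
)

# Question: "What is total order value per region?"
# Assume a language model produced this SQL from that question.
generated_sql = """
    SELECT region, SUM(amount) AS total
    FROM sales_orders
    GROUP BY region
    ORDER BY total DESC
"""

for region, total in conn.execute(generated_sql):
    print(region, total)
```

In a real system, the knowledge graph he mentions would supply the schema semantics (what "order" means here) that make the translation step reliable.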

Speaker 1:
[24:22] That actually leads to one of your, I know it's unconventional, or at least it's certainly not the dominant narrative in AI right now: your interest in predictive and tabular models. Can you talk about why LLMs aren't the be-all and end-all here, or why we can't just use tools and calculation external to the model, in combination with LLMs, to achieve what you want to achieve?

Speaker 2:
[24:56] Yeah. First of all, it's a great question, Sarah. From the business motivation point of view: again, LLMs, unstructured world, that's all good. But most of the time, if you want to plan forward, if you want to make good decisions in a company, you need predictions. What's my demand? Depending on seasonality effects and so on, what's my demand forecast for my products in the retail store, or for my products so I can plan my manufacturing accordingly, if I'm a manufacturing customer? Or you want to predict your cash flow, and that has a bunch of input variables, like my days sales outstanding, which is determined by whether our customers are paying, yes or no; that's a classification question. And if you then ask, if a customer is not paying within the payment terms, what's the payment delay? That's a classical regression question, and so on. Now the problem is, still today, if you look at these predictive questions, and you maybe want to do a what-if analysis on top, large language models are not made for this. The way they generate one token after another, in sequence-to-sequence modeling, they're language models, and they do that phenomenally well. But if you want to do these predictions, you have to go back to classical machine learning approaches: XGBoost or AutoGluon and the many AutoML approaches that are still out there. The problem is just that it doesn't scale. We haven't seen the same level of democratization in the predictive space.
You still need to hire very good talent, a data scientist. And if you are a large company, we did this, for example, at a pharmaceutical company: if you just want to solve the payment delay prediction problem I've mentioned, and they are running in 90 countries around the world, and they need those two models per country, you end up with 180 models you need to train. You need to create the data, train the models, figure out what the right model is, do the feature engineering, the classical machine learning approach used in the past. What we said all along is: okay, look, we have all this data stored in these tables, thousands of tables, where all this information lives. Can we not apply the same idea that large language models or multimodal models brought to the unstructured world to the structured world, in order to start predicting things? You just provide a little bit of context, a small amount of data, not a large amount, because that was always the problem, and start making highly accurate predictions in that domain. That led to two years of research. We published it at NeurIPS and a bunch of other conferences. We call it RPT-1, which stands for relational pre-trained transformer. It's still based on the transformer architecture, but with a very different design. We released it, and meanwhile we see some very, very good results from it in various domains, where classification and regression, and sometimes time series, are concerned. We believe this will be huge, because it will allow way more people, from a business impact perspective, to make these predictions, which large language models have a really hard time with.
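For contrast, the classical per-country, per-task tabular approach he describes (one classifier for "will the customer pay late?", one regressor for "by how many days?") can be sketched with a deliberately tiny predictor standing in for XGBoost or AutoGluon. Every feature, row, and threshold below is invented:

```python
# Sketch of the classical tabular ML framing Philipp contrasts with
# RPT-1: two separate supervised problems -- a classifier ("will this
# invoice be paid late?") and a regressor ("by how many days?") --
# which in his example must be repeated per country (2 x 90 = 180
# models). A 1-nearest-neighbour predictor stands in for XGBoost;
# all features and rows are invented for illustration.

# (invoice_amount, customer_age_years) -> (paid_late, delay_days)
TRAIN = [
    ((1000.0, 1.0),  (True, 12.0)),
    ((1200.0, 1.5),  (True, 9.0)),
    ((500.0,  8.0),  (False, 0.0)),
    ((450.0, 10.0),  (False, 0.0)),
]

def nearest(x):
    """Return the label of the training row closest to x."""
    return min(
        TRAIN,
        key=lambda row: sum((a - b) ** 2 for a, b in zip(row[0], x)),
    )[1]

def predict_late(x):   # classification: will the invoice be paid late?
    return nearest(x)[0]

def predict_delay(x):  # regression: expected payment delay in days
    return nearest(x)[1]

print(predict_late((1100.0, 1.2)))   # True
print(predict_delay((480.0, 9.0)))   # 0.0
```

The point of a relational pre-trained model is to replace this per-problem training loop with one model that predicts from a small amount of in-context tabular data.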

Speaker 1:
[28:30] When you think about the gap in, I think you described it as like hype versus adoption within the enterprise customers.

Speaker 2:
[28:43] The innovation race versus the outcome race.

Speaker 1:
[28:46] Yes. Innovation race versus outcome race. It's a good framing. The change is happening very quickly. That's hard for companies to absorb. Where do you see challenges for the enterprise and adoption today, and where are customers making the most progress with you, or where are they most excited?

Speaker 2:
[29:08] Yeah, that's a good question. Usually, I say the primary problem, as I said, is, A, the data, because most of the time the data is very disaggregated in a company, for a variety of reasons: either because of how you purchased solutions in the past, or you did an M&A, so you acquired a company, and naturally they bring a very different IT system landscape with them, and so on. So you have to bring that information together. The problem is that this limits the potential of what you can actually do with AI, and the next question is how you integrate it safely. What I see clearly is that customers who did that homework, and of course it's not a new topic, we've been discussing this for 10, 15, maybe more years, the ones that did their homework have a much easier life reaping the benefits of AI. The second one, as I mentioned, is the problem of scale: the bigger and more complex the landscape, the harder it is to bring it together in a unified experience. Then finally, there's everything around security. There's always this gap between an amazing innovation, take OpenClaw, for example: amazing what it has brought to the world in terms of further ideas, but from a security perspective, that's a problem. You don't want to just take it as it is there on GitHub and deploy it in your organization; hopefully nobody would ever do that. You need to make it secure. We have seen this with LiteLLM, how long ago is this now, two weeks or something? You probably saw it, the vulnerability where all of a sudden all your keys and credentials get stolen, and so on.
Like, and you don't want to, right? If you were to achieve information and secure the security officer in the company, you don't have a job anymore, right? And that's, of course, another big challenge from an adoption perspective as well.

Speaker 1:
[31:17] Think about the function of a finance or an HR or a supply chain team that would have been operating out of SAP in their day-to-day work a year ago. What do you think that looks like a few years from now, if you and your customers are successful with the AI transformation?

Speaker 2:
[31:44] Yeah. First of all, it's very simple: they will get rid of a lot of the mundane work, like collecting information and preparing PowerPoints for decision-making. What we're going to see is a much, much faster way of making decisions, making better decisions, and automating the mundane work. What people will do is run more scenarios and get better, deeper insights in a much faster way, in order to really do what we always call more strategic thinking. In a way, Sarah, if you will, for me, everybody who works today in, say, a finance shared service center is the equivalent of a junior developer today with Claude Code. They get one level higher: they're no longer tasked so much with writing a lot of the code themselves. With Codex or with Claude Code, they start supervising the code, giving feedback, capturing the essence of what the code should look like, doing much more review, and then thinking about what to build next, the next requirement, and how that actually differentiates. So every role, every level will get upleveled, so to speak, because the work being done today will be pushed down to these agents. Therefore, I believe what we will see in general is that people will achieve so much more, because there is a lot of intelligence baked into the system that gets rid of many of the things we're doing today, things that are, at least in many cases, not a lot of fun.

Speaker 1:
[33:26] I must admit my ignorance here. I'm thinking about this, and I want to talk a little bit about the impact on the business, if you're right, as well. I don't actually know how SAP prices broadly today, but the question would be: how do you price? And if you're delivering more outcomes for customers, or serving them services and software in a different way, do you think that changes the business model for SAP?

Speaker 2:
[33:50] It does, absolutely. There's no question; we have prepared for this already. For me, it was always very clear. For the most part, SAP software is seat-based licensed today, with a few exceptions like Concur or Fieldglass, for example, or the business network. But with AI, it was very clear for us that step by step it will go towards a consumption-based world: first consumptive, and then maybe in a next step, once we have more verifiability in the system, towards an outcome-based license model, similar to what Sierra is doing, for example. But the reality is that today it's a hybrid model for us. It's consumptive, but it still has a certain element of seats in there, because it's a joint journey with the customer. Customers are saying they're not yet ready, in many cases, for a purely consumptive model, because they need predictability, and they're not yet fully trusting the outcome everywhere; is the value already there? And they're afraid that the costs may explode on a pure consumption model. So at the end of the day, what we have designed is a hybrid that is basically ready for this consumptive world, but meets the customers where they are today, knowing that they still demand a lot of predictability in the enterprise space in order to keep the whole thing under cost control for themselves as well.
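The hybrid model described above (a fixed seat component for predictability, plus metered consumption with a cap so costs cannot explode) can be sketched in a few lines. This is purely a hypothetical illustration of the structure; the function name, prices, and cap are invented for the example and are not SAP's actual pricing.

```python
def monthly_bill(seats: int, seat_price: float,
                 credits_used: int, credit_price: float,
                 credit_cap: int) -> float:
    """Hypothetical hybrid bill: a fixed seat-based component that gives the
    customer predictability, plus a metered consumption component that is
    capped so a runaway agent cannot blow up the invoice."""
    seat_component = seats * seat_price
    # Only charge for consumption up to the agreed cap.
    consumption_component = min(credits_used, credit_cap) * credit_price
    return seat_component + consumption_component

# Example: 100 seats at $50/seat, 12,000 credits used at $0.01/credit,
# with metered charges capped at 10,000 credits.
print(monthly_bill(100, 50.0, 12_000, 0.01, 10_000))  # → 5100.0
```

The cap is what "meets customers where they are": the consumption term scales with actual usage, while the worst-case bill stays bounded and budgetable.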

Speaker 1:
[35:28] That makes sense. I also believe that transition is going to happen; it's unclear how quickly it will.

Speaker 2:
[35:34] Exactly. No, absolutely, I agree. Nobody knows this. You have a wide range of opinions among customers: some are a bit more forward-leaning already, and others are still asking or demanding a classical model, so to speak. So therefore, it's a journey.

Speaker 1:
[35:58] Let me rephrase that for a second. When you look forward and think about SAP's position five years from now, and you compare it to the broad market pivot away from SaaS and software, in terms of how investors are valuing these businesses and their enthusiasm about their durability: my own opinion is that the challenge is real, and yet it will affect incumbent software companies very differently. There will be winners and losers, rather than everybody's market cap universally coming down. What do you think will be characteristic of a winner, or why does SAP get to endure again?

Speaker 2:
[36:48] I think at the end of the day, it's all about adoption and the outcome you bring to the customer. The reality is that for most companies, the technology doesn't matter. I always tell my developers: our job at SAP is to make the technology disappear. We need to get the outcome in front of the customer. And not just deliver the value itself; you also have to be able to produce and price it in a way that's a win-win for the customer and, of course, the vendor at the end of the day. This is also why we are so flexible architecturally: we said we don't over-index on a specific area, we have partnerships with all of them, and we really only invest in the things that are actually differentiating for our customers, versus the things that will likely get commoditized in the tech stack anyway. Then we make sure we bake the enterprise qualities in, the integration is there, and customers can turn these capabilities on almost instantaneously in order to benefit from them. Why is this important? Because if it takes a lot of time to reap the value, then your return on investment is essentially gone, or the business case becomes harder. So what we are really focusing on is delivering these outcomes to customers, and I think that is what will differentiate the winners from the losers at the end of the day: really focusing on the business outcomes for the customer.

Speaker 1:
[38:16] As we wrap up, I want to ask you a few quickfire questions, including a slightly more personal one. Our listeners always want to know: what do you do all day as the CTO of SAP? Can you describe how you spend your time?

Speaker 2:
[38:33] Well, I spend most of the time reviewing progress with the teams and thinking along with them across all the layers, from the database to the models to the UI: review the progress, give guidance and feedback, learn something new, and of course study what happens outside. I also do a lot of prototyping. As we speak, I have a bunch of command-line interface instances running here, prototyping things, trying things out, seeing what works and what doesn't, and then giving that back to the team as inspiration. And then, of course, I work a lot with the teams on how we connect the vision to the execution, and I work a lot with customers as well. I love working with customers because they always keep you honest about what you can do already and where you can still get better.

Speaker 1:
[39:31] Because you have such an amazing global customer base, can you give us a flavor of something challenging or interesting that a customer is asking you to do right now?

Speaker 2:
[39:43] Well, there are plenty of things. As I said, the predictive one immediately comes to mind: if you can improve the accuracy of a demand forecast by 3 to 4 percent, that is huge for them. That is multi-million value you can deliver to customers immediately. Or I have many customers who come and say: Philipp, can you actually help me? Look at the commodity prices, all through the roof: oil and gold and silver. If you're a jewelry company, you have a challenge, because now you need to rethink your entire business model. Then you discuss with the customers, starting from the business challenge: hey, how can we actually help you address this? Because now they need to change their manufacturing, change their processes, and come up with new products. And then the question was: how can you help me research the right product and find product-market fit, and out of that determine how to source new suppliers, because maybe I need to source different materials from different suppliers. These challenges are real in this ever-changing world. And for some people, it's other things, because they're challenged elsewhere. The world is very, very diverse in terms of the business challenges customers have out there.

Speaker 1:
[41:03] Yes. So, to something you mentioned earlier: being the partner that folks come to and say, how does my business change given what's happening in the Strait of Hormuz, which is a hard question, right? And make it better, make it easier for me to navigate around it.

Speaker 2:
[41:23] Yeah, that's an interesting one. But look, even though I'm the CTO, and people usually expect me to tell them a lot of technical things, what I learned, and I probably made this mistake more than anybody else in this world, is that pitching the technology is completely wrong. When I sit together with a CFO or a CIO, the first question is: hey, what's top of mind for your business? What are your current challenges? And then work backwards to the technology. I always found that this is the most useful approach.

Speaker 1:
[41:59] And then a last question for you. Personally, outside of everything you're doing at SAP, just as a technologist, what else are you interested in and paying attention to in tech or AI, or a belief you have about something that's going to happen?

Speaker 2:
[42:15] Well, obviously, AI is the dominating thing, because it's so pervasive and ubiquitous. But we're also very, very excited about the work we are doing in the quantum computing space, trying to find new algorithms there, because programming quantum computers is very, very different compared to everything else. That's at a very early research stage, but it's super exciting as well.

Speaker 1:
[42:45] Why is that commercially relevant to you? What's your angle, solving backwards from the business problem?

Speaker 2:
[42:53] Let me put it this way. The hypothesis is that once the hardware matures in the quantum space, there are certain problems you can address that are hard to address today. What we are focusing on is the optimization domain, obviously: if you go into things like logistics, traveling salesman problems, knapsack problems, all these classically hard problems in computer science, these are problems where we believe a different kind of computing paradigm could be interesting to solve for in the future. We try to learn early, so to speak, and we will be very hardware-agnostic in that sense; SAP has always run on different computers in this world. But what is important to us is that we engage early, intellectually, and also find these new algorithms that can then propel that forward. Because if you can load your trucks and do the route planning even better, emissions go down and you save a lot of money, and so on. There's a lot you can still optimize there; today, depending on how large the problem size is, you can only approximate and then live with the best solution you can get in a finite amount of time.
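The "approximate and live with the best solution you can get" point can be made concrete with the traveling salesman problem Philipp mentions. The exact optimum is NP-hard, so in practice one settles for a heuristic; the nearest-neighbor greedy tour below is a classic, minimal example (a sketch, not what SAP runs, and the city coordinates are invented for illustration).

```python
import math

def tour_length(points, order):
    """Total length of the closed tour that visits points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: always visit the closest unvisited city next.

    Runs in O(n^2) and yields an approximate tour. The exact optimum would
    require exploring factorially many orderings, which is infeasible at the
    scale of real logistics networks.
    """
    unvisited = set(range(1, len(points)))
    tour = [0]  # start at an arbitrary city
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Five warehouses as (x, y) coordinates.
cities = [(0, 0), (0, 1), (2, 0), (2, 1), (1, 3)]
tour = nearest_neighbor_tour(cities)
print(tour, round(tour_length(cities, tour), 2))
```

Heuristics like this (and stronger ones such as 2-opt or Lin-Kernighan) are exactly the class of approximations one lives with today, and the class of problems where a future computing paradigm might move the quality/time trade-off.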

Speaker 1:
[44:17] Yeah, I think that's a great note to end on, because it reminds everyone, including me, that there are actually interesting computer science problems everywhere in the enterprise. A lot of people, if they haven't worked on these problems of scale with real customers, might assume it's all easy: building a CRUD application is pretty easy at minimal scale in 2026, it's actually remarkably easy with coding agents.

Speaker 2:
[44:49] It is. Oh, absolutely.

Speaker 1:
[44:51] And yet, there are problems where we are limited by algorithms and computation everywhere, right? And it takes some imagination to just go attack them.

Speaker 2:
And that's also your point, right? Yes, a CRUD application is a solved problem, if you will. But most of the time, it's not just building a little CRUD application for a data object in the database. This software is usually a little bit more complicated than that.

Speaker 1:
[45:20] Thank you so much for the time, Philipp. This is great.

Speaker 2:
[45:22] Yeah, thank you for having me.

Speaker 1:
[45:26] Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at nopriors.com.