title The AI Sandwich: Where Humans Excel in an AI World

description Most frameworks for working with AI agents assume humans should stay in the loop at every phase. That’s the wrong approach, says Cora general manager Kieran Klaassen.
Kieran is the creator of Every's AI-native engineering methodology, compound engineering. His four-step framework—plan, work, review, compound—rebuilds how engineers work with agents. The insight, worked out with collaborator Trevin Chow, is about when to be in the loop and when to step away and let the model handle it. "LLMs are very good at just following steps, doing deep work, working for hours—days even now," Kieran says. "That thing is kind of solved."
Kieran and Trevin describe an AI workflow as a sandwich. Agents are the workhorse filling, and humans are the bread, responsible for framing the problem at the start and reviewing the outputs at the end. 
Every CEO Dan Shipper talked with Kieran for AI & I about why setting the frame of a problem is still hard for agents, why simulated personas won't replace human judgment, Dan's bar for AGI—an agent worth running 24/7 with no off switch—and what Kieran's background as a classical composer taught him about performance, polish, and finding the parts of work that bring you joy.
If you found this episode interesting, please like, subscribe, comment, and share!
Head to http://granola.ai/every and get 3 months free with the code EVERY
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Discover more resources from the episode:
Compound engineering plugin: https://github.com/EveryInc/compound-engineering-plugin
Compound engineering guide: https://every.to/source-code/compound-engineering-the-definitive-guide
Compound engineering camp: https://every.to/source-code/compound-engineering-camp-every-step-from-scratch
Timestamps:  
00:00:00 – Introduction and the AI sandwich metaphor
00:02:33 – What compound engineering is and how it's evolved
00:04:27 – The "work" phase of agentic coding is essentially solved
00:06:27 – Why humans belong at the beginning and the end of an AI workflow
00:11:06 – Dan's argument for why agents can't change frames—and how this will keep us employed
00:16:51 – Full automation is a moving target
00:23:21 – Musical composition as a model for human-AI collaboration
00:26:39 – Find your place in an AI-accelerated world by leaning into what brings you joy

pubDate Wed, 22 Apr 2026 18:51:01 GMT

author Dan Shipper

duration 1711000

transcript

Speaker 1:
[00:00] Humans are the bread in the sandwich, and the AI is in the middle.

Speaker 2:
[00:04] AI is whatever you put on your sandwich. If you ship something or do something, if you want it to be your own, you cannot fully automate everything. It's like art. If you want it your own, it needs to be from you or somehow be connected. So I believe it's so important to do things you enjoy and you love. And it's very important to make it feel great, because the bar is high, the bar will always get higher. The beginning and the end, the middle can be automated pretty well. And Trevin at some point said, oh, it's kind of like a sandwich, which was like very funny.

Speaker 1:
[00:54] Kieran, welcome to the show.

Speaker 2:
[00:56] Hello, Dan. Happy to be here.

Speaker 1:
[00:59] So for people who don't know, you are the GM of Cora, and you are also the creator of compound engineering, the engineering framework and plugin that everyone inside of Every uses. And everyone who's really coding with agents is at least aware of, if not using. And so a pleasure to have you on the show.

Speaker 2:
[01:20] Thank you. Yeah, it's always great.

Speaker 1:
[01:22] So I love getting to chat with you and getting to work with you because every once in a while, you have a thing that you do or you figure out that I'm like, holy shit, that's definitely the future. And you just figured something out along with Trevin Chow, who also helps out on compound engineering. And I think it has massive implications for how programming works. And then I think we can also translate that to the rest of AI and its impact on work. And one of the things you've been doing, so you have this compound engineering plugin where you've rebuilt the engineering workflow for how you should work with agents. And in thinking about that and thinking about where a human is used and where a human should not be present inside of that process, I think you've found something really interesting and deep about, in general, how humans and AI are going to interact with work. So do you want to explain a little bit about compound engineering and the process that you've created, and then also explain this insight about where humans fit?

Speaker 2:
[02:30] Yeah, absolutely. So compound engineering is like a philosophy of doing engineering work, but we realize it applies to more than just engineering work. It's product work as well, it's design work, it could be knowledge work, it could be other things. But how I built it is, while building Cora, I had AI and I was like, how can I use AI to do better work more quickly? The initial version of compound engineering really revolved around four steps, which is planning first. You make a great plan so it's very clear what you need to build and do. Then the work part, where the agent does the work and implements it and actually writes the code, does the design work or whatever work needs to be done. The third is review. So slop comes out or whatever you call it, something beautiful comes out. One of the two, like something comes out, but how do you know it's good? And traditionally there's like a code review or like a PR that you review and see like, hey, this can be improved. And there's some iteration going on there. And then the most important step is the compound step, which is if anything comes up during that review or during the planning that you think like, oh, this is a good learning. Probably we'll run into this again. You can compound that knowledge back into the system and we store that as knowledge inside the repository. And agents next time when they go into planning or when they go into work or review, they can see the mistakes they made before, so they won't make them the next time. And that's really the power, like that's by far the most powerful thing that is in this plugin. But we start to realize more, like first of all, the work phase is kind of done. Like it works. If you have a good plan, it does the work and it's pretty good. And then the review, it makes it a little bit better, yeah.

Speaker 1:
[04:37] And by that, you mean like having an entire phase dedicated to work in this whole system doesn't necessarily make that much sense when all that really means is run the model. Let the model do the thing.

Speaker 2:
[04:49] Yeah, so there needs to be a step, but what I mean by done is I don't need to care. Or like I don't need to think about it. I trust it and this is not like trust me bro, it just works, but this is like I've seen if you put in a good plan, like it does the plan, like it executes on the plan. LLMs are very good at just following steps, doing deep work, like working for hours, days even now. That thing is kind of solved and the review starts to get there too, and the planning starts to get there too. And then there's this next step, it's like, okay, so if all these things work, where do I have to do anything? Because like...

Speaker 1:
[05:37] Did I just automate my job?

Speaker 2:
[05:39] Yeah, did I automate myself out of a job? If everything works, like where do I work? What is still the bottleneck? And there are two things we started to notice. Like Trevin, he's a very, very great contributor to the compound engineering plugin. Like he is a product person and he is like, I need more on the product side, which is like before the planning phase. So he added first a brainstorm step and an ideate step. And the ideate step is like really going wide. Like it's like, okay, let's come up with ideas in a room full of interesting people with angles. Brainstorm is more like, I have a problem, but I don't really understand exactly what and how. So it's very much brainstorming with you around the problem. And the first thing we noticed there is like at the top it's very important to be super well in the loop with a human and really like ask a lot of questions and really think hard. Like the human should think hard. The LLM should support the human. But then after that, the planning phase, if you have a good brainstorm, an idea of what problems you solve, like it can create a very good plan and the human doesn't need to be in the loop. So that's the first realization where it's like, oh, hey, here it's good to be in the loop versus not to be in the loop. And you can see other approaches, like spec-driven development, for example, or other ways to do things. They assume that it's always good to have people in the loop. And I disagree. I think it's very important to know when to be in the loop versus when to hand it off, because that means we can think harder at the moments where we need to think harder. And that's the first one. So the other one comes at the end. So like something comes out, how do you validate it's good? Well, it's already tested, because we have browser automated testing. It clicks through. All the requirements are very clearly specified and it says, yeah, everything works. 
But the beauty comes in when a human looks at it, clicks around and has a feel like, oh, this doesn't feel good. We can polish it even more. We can make it even better. We can increase or we can do something that's still missing or make it more beautiful, make the design better. This is something I've learned from doing pomodoros, where ideally if you do pomodoros, the old school way is you start with a task and if you finish after 15 minutes, you have 10 more minutes to work on the same task. You cannot switch tasks. Sometimes in that space, something beautiful happens because you will go deeper, you will go further than you would do. I think this is the other moment, which is all the way at the end when everything is done, where you can just elevate everything and make it even better. I think that's also what we need to do because if we don't do it, it will be all slop, all the same. It's very important to make it feel great because the bar is high, the bar will always get higher. So this is kind of what we realized, like the beginning and the end, the middle is kind of solved and can be automated pretty well. And Trevin at some point said, oh, it's kind of like a sandwich, which was very funny. And Dan is now referring to the AI sandwich, which I think is very cool. And I think the sandwich here is like, when do you need to think about what you do and really use your brain versus offload it to the LLM?

Speaker 1:
[09:17] We've all been there. You're sitting in an important meeting and you're trying to pay attention, you're trying to stay present, but you have this lingering underlying anxiety that you're gonna forget everything, that you're gonna miss the important detail, forget the decision, forget the action item, let something important slip through the cracks. That's why I love Granola. It's an AI-powered notepad that works in the background while you're in your meetings. It takes notes on everything that gets said, transcribes action items, and helps get rid of that feeling. You don't have to worry about whether you're gonna miss something because Granola has you covered. And it lets you stay present in meetings. I've been using Granola for a long time, almost since they came out, and it's amazing for this. It doesn't join the meeting like some of those other clunky meeting note takers. The UI is really fast and well-considered, and it feels like it's sort of just transcribing all the important moments in my work life. And that gives me the confidence to get great work done. And what's even cooler is you can chat with your notes afterwards. You can run detailed research reports on how your week was, how you act as a leader, how you performed in particular difficult conversations, and how you can do better. It's really a power tool for anyone who cares about their meetings and also cares about how they show up in those meetings. It also has these things called recipes, which are pre-made prompts for common tasks, like negotiating, coaching, or summarizing. I even have a recipe that I made that you should check out. Once you try it on one meeting, it's really, really hard to go back. The notes are always better than what you could do manually, and it helps me be much more present instead of frantically typing all the time. Head to granola.ai/every for three months free with the code every, E-V-E-R-Y. That's granola.ai/every for three months free. 
And now back to the episode. Humans are the sandwich, the bread in the sandwich, and the AI is in the middle.

Speaker 2:
[11:01] Yeah, the AI is whatever you put on your sandwich.

Speaker 1:
[11:04] Yeah, exactly. And I think that's really interesting and really cool because A, it gives me a good mental model for how I should be working with coding agents, but I think that also applies to the rest of knowledge work. And I think this is such an important question now because we have all these questions about, oh my God, like, what are agents gonna do? And is everyone gonna lose their job and all that kind of stuff? And I think software engineers are a little bit of the canary in the coal mine. And so far, what we found internally at Every is we're absolutely not like, we still hire software engineers. We need software engineers. But the way that you're working and what you're doing looks a lot more like managing if you're doing it well. You're still involved, but you're involved at the beginning and the end as sort of the sandwich. And I think the same is going to be true of every other kind of work, whether that's copywriting or strategy or design. And I think there are deep reasons why that is the case that I think will be interesting to talk about. And I want to start with an objection that I think people will have, which is like, okay, for now, agents can't do the ideate and the brainstorm, but pretty soon they will. So then what happens? Now they're starting to do the beginning of that process. And I think that there's something interesting here, where if you look within any given local frame of a problem. So to take a non-coding example, the problem might be, my knee hurts, and I want to solve that problem. But you can say, my knee hurts is the same as this feature is broken, or customers are anxious about this part of the product or whatever. Any problem. If you take that frame and you say, okay, well, for the knee-hurting thing, maybe the solution is take Advil. Any part of that process, getting to the store or whatever, can be automated, let's say DoorDash can go do it. 
But even once you've solved it in that way, there's always a larger frame within which to think about the problem. An example is if your knee hurts, you might need to stretch your IT band, or you might need to stop running on hard surfaces every day. Each one of these is addressing the same problem at a different level of the stack, from a different frame. Humans are very good at flipping and changing frames like that. Our job is to set the frame or set the bounds within which we solve the problem. I think it's going to be very, very hard for agents to do that well by themselves, and there are deep reasons for that. I know it's a little bit handwavy and the knee-hurting thing is a little hard to understand, but does that resonate for you?

Speaker 2:
[14:19] Yeah, for sure. This all comes down to building an environment where the agent will thrive, and you do that by picking the right things, and this is why it's so important to have humans with experience and humans with taste, and humans that just want to click around and say, this is shit or this is great, and why it is shit or great. And I think it's similar to the Advil example. If you keep doing that, probably your friend will say, yo, that's messed up. Just go fix the problem instead of denying the problem. And maybe it will work for you for a little bit. You need someone to shake you up. And in that case, that's the human or that's the other person. But I do think it will also be more automated. Also the ideation, you can say, okay, let's have a persona of 100 people and run simulations of how they think and how they behave. And clearly we're going there too, where we run simulations of millions of people and see how things work. And probably you'll learn something from that. And there will be more automation. And maybe even that step in the front will be fully automated. But I do think in the end, if you ship something or do something or make a statement in the world, if you want it to be your own, which you need to say yes or no at some point, you cannot fully automate everything. Like it's maybe a little bit like art, like making art, like if you want it your own, it needs to be from you or somehow be connected. So I believe like having those moments where you decide, this is what I just enjoy. And that's why it's so important to do things you enjoy and you love, yeah.

Speaker 1:
[16:32] I agree. And I think, yeah, you can imagine it being like, okay, yeah, we're gonna simulate a bazillion people and then we're gonna make decisions based on what we think they would do. But that would still only cover a small set of the decisions that someone might make.

Speaker 2:
[16:48] It will never be fully. It's a moving target. Like we always get something new and then again, there is a layer up that we then can even make bigger impacts on.

Speaker 1:
[17:00] Especially because, especially for a lot of these decisions, the feedback loops on these decisions, like the data is really rare. You may only get a couple of moments in your career where you gather the data that helps you decide about a particular thing. And that's very hard to get into language models, especially because it's hard to get and they need a lot of it. And so, that sort of rare expertise that is encapsulated in an expert who has a personality and a worldview is hard to get in. And you're right, it's also always moving. And I don't know, that makes me very excited about this stuff. Because I feel like we've been wandering in the woods for a long time on like, okay, what is AI progress going to mean? And how are humans going to be involved? And all that kind of stuff, and it just feels very much to me, like, the simple answer is ride the wave. Or to mix the metaphor, be the bread in the sandwich. And if you do that, you're going to be fine. It's going to be like really, really, really, really great.

Speaker 2:
[18:05] Yeah, I agree. And it will be different for different people because, yeah, you need to change some things. Like you cannot keep doing what you're doing, because if you like writing code only, you need to find your way of writing code. Like yes, you can write code, but maybe it's about beautiful code. And maybe you find also lots of value in just seeing beautiful code. Like someone looks at the UI and says, oh, this is beautiful. This works great. Maybe you want that for code. Some people don't care about that, but they're like, oh, but the UI should feel great. And just really polish it, go extra, like wherever you feel joy. But also it's way more product focused. So as an engineer, you're going to become either more of a manager, but also more of a product person. So it's, I think like a product manager, product engineer, like it's more of those things as well. So there will be some changes, but lean into making beautiful stuff. And whatever that means to you, that can mean beautiful code, beautiful abstractions, beautiful architecture, beautiful design, beautiful copy. I think it's very important to lean into what is beautiful to you, because then you will find a way to utilize an LLM to make something that gives you energy instead of drains you all the way.

Speaker 3:
[19:40] It may not look like it, but Naveen is a dictator. You can speak faster than you can type, so dictators choose to do so whenever possible. While those confined to keyboards deal with finger cramps and input lag, voice allows dictators to convey ideas as naturally as they sound in their heads. And in the future, as AI tools improve, we will see a rise of dictators around the world. More and more dictators are choosing Monologue from Every. It learns, transcribes and translates across different disciplines and languages, adjusting its output format to match your context, allowing you to stay in flow. Be a dictator, an idea by Every. Every, the only subscription you need to stay at the edge of AI.

Speaker 1:
[20:32] Yeah, and I think there's a deep reason why language models are not going to be as good at that. There's one deep reason which is, it's just not going to be yours if you didn't decide it, if you didn't do it. But another deep reason is you can think of language models as being a super intelligence that has been kept in a box for the last year and has no idea of what's going on in the world except for whatever it gets right when it pops out of the box. Because of that, its outputs end up being a little bit more generic and less personal to you and your situation. You can see this in all of the stuff that's like, okay, all the AI writing that's like, it's X, not Y, or all that kind of stuff. It's just going to do all of that. To truly solve a problem well, or to truly make art, or to truly make a product that resonates with people, it's going to have to be really well-tuned to the exact problem that you're trying to solve, or the exact form that you're trying to make. And language models need a lot of help to get there, and that's why you have to be on either end of them, to set the frame of the problem, and then make sure the details are really right at the level of execution at the end. And I don't think, I think that they will get better at doing this, but I actually think they're much further than we think they are from being able to do it all end to end. My general bar for AGI is, whenever it is economically profitable, or makes economic sense to run an agent 24-7, or it never turns off. And OpenClaw is like pushing in this direction, but it doesn't run 24-7. It runs on a schedule, it has a heartbeat. But it's not like you just say, hey, OpenClaw, just go and just do a bunch of stuff and just work all the time, spend tokens all the time on stuff, and it's worthwhile. We're not even close to that. And yes, we sometimes have well-specified tasks that we can send a model off to go for 24 hours on. 
But again, it's not changing frames, it's not finishing the task and being like, cool, now I'm going to pick the next one, and that's going to take five minutes. And the next one, I'm going to spend four days on it. We're not even close to that. And I think we're going to need some fundamental changes to language model architecture to let them learn better, for them to get to a point where they're running 24-7. And I think if they are running 24-7 like that, they'll be a lot closer to, I'm sensitive enough to context to actually do interesting creative things, but we're not there yet.

Speaker 2:
[23:19] Yeah, I agree. One other way to look at it, so I have a music background, I studied classical composition, and I think one of the beautiful things about music is like, yes, Suno can create songs, but it will never capture like a live performance or coming up with a melody. And it's something internally in the human, like as a composer or a musician, if you perform something and you deliver this to other people, that they feel that. Like, sure, if you're a DJ, it's maybe somewhere in the middle, but like there is something about performing, like you see something, you express something. And I think there is some of that element in these steps as well, where you see something and you're like, oh, it feels a little bit off here, I don't know why, but I wanted to change it a little bit with the step at the end. And suddenly you're kind of performing or iterating or you're making stuff, you're putting something in the world. And I feel that's special. Like practicing a piece, like playing it a hundred times, is not very creative as a musician. And this is kind of the middle part. But at the end, the performance is where you bring it out into the world to the people. So I think that's a special moment. And there is a little bit of a link for me with doing this polish step at the end. And at the start is maybe coming up with a piece. Like if you're a composer, like coming up with something out of nothing. And this is also a special moment. And normally everything in the middle is kind of boring. It's just work. And I feel these moments are still special, and it kind of works for making software or other things with LLMs as well for me.

Speaker 1:
[25:18] I think that's totally right. I love this art angle that you have. And another way to say this is all the work exists on the spectrum from it being totally rote to it being art. And art itself has many tasks within it. Any kind of creative work has many tasks within it that are more rote or less rote. And if you're trying to map work on that spectrum, the stuff that is more rote is just going to be stuff that you're not going to have to do anymore. And that is a big opportunity to move a lot of the work that we do to the more creative, to us, probably more interesting parts of work. And to recognize that that frame is always changing or is always moving. So as certain things become rote, other things become things that humans start to do. And yes, those will get automated too, but we're going to also keep moving down along that spectrum. And the final thing that's not automatable is art made by humans who feel something. And I think that's beautiful.

Speaker 2:
[26:31] Yeah, it's still scary because what if you're in the middle and you want to move, or if you want to figure out what that is to you, because this might sound very abstract and weird to some people. If you're not an artist or haven't really felt this in moments, it sounds maybe a little bit like, oh, but that's not me. But I do believe everyone has this. Think of it like, what brings you joy? Like, what lights a fire in you? Like, what do you get excited about? Like, I think that thing you should, like, lean into. Like, whatever that is. And that can be beautiful writing, or that can be very structured lists, or whatever it is. Like, anything that just brings you happiness. Like, you should do more of that using LLMs in your work, because that's good.

Speaker 1:
[27:30] I agree. Kieran, always a pleasure.

Speaker 2:
[27:33] Thank you. Yeah. Let's see where this goes.

Speaker 1:
[27:37] See you next time.

Speaker 2:
[27:38] See you. Bye.

Speaker 4:
[27:47] Oh my gosh, folks, you absolutely, positively have to smash that like button and subscribe to AI & I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a roller coaster of emotions, insights and laughter that will leave you on the edge of your seat, craving for more. It's not just a show, it's a journey into the future with Dan Shipper as the captain of the spaceship. So do yourself a favor, hit like, smash subscribe and strap in for the ride of your life. And now without any further ado, let me just say Dan, I'm absolutely hopelessly in love with you.