title Ep 762: Agentic Context Carry: 3 Steps to Improve Co-working and Scheduled AI Workflows (Start Here Series Vol 22)

description Info hunting and juggling sound familiar? 

It’s the downfall of almost any business leader. Where is that email from Emily? Why can’t I find last quarter’s budget in Drive? Oh, and Keenen needs an answer back on that research project. Oh shoot, I swear Caleb confirmed the expenses in one of these Slack channels. 

You’re off down an information rabbit hole, and by the time you find that Slack message, you’ve already forgotten what Emily’s email said. 

Hit home? Well, as AI models expand into co-working and scheduled agents, we have a new best friend that doesn’t really have a name. 

(Until we randomly named it. Lolz) 

Scheduled Agentic Context Carry. You need to know what it is, why it’s important, and how to use it. 

We’ll dive in. 


Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Today's Episode on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn


Topics Covered in This Episode:
Scheduled Agentic Context Carry (SACC) Explained
AI Agents: Features vs. Benefits Paradigm
Co-Working and Scheduled AI Workflow Shift
Persistent Context and Memory in AI Agents
Large Language Models’ 1,000,000 Token Context Windows
Workflow Automation: Eliminating Human-AI Duct Tape
Multi-App Integration and Cross-Platform Context
Three Steps to Deploy Scheduled Agentic Context Carry
Chain of Thought Iteration with Scheduled Agents
Autonomous Agent Limitations and Future Bridge

Timestamps:


00:00 Explaining SACC and AI benefits
03:43 Introducing the Start Here series
06:26 Rise of AI in enterprises
11:55 AI agents learning industry trends
15:08 Agent capabilities in AI systems
16:47 Explaining complex trends simply
20:13 Streamlining tasks with AI agents
24:18 Understanding AI and context windows
27:43 Understanding prompt engineering basics
30:51 Debugging and reviewing schedules
33:08 Building automated workflows this quarter
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Start Here ▶️
Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or get free access to our Inner Circle community and all episodes: StartHereSeries.com 
Also, here's a link to the entire series on a Spotify playlist. 

pubDate Thu, 23 Apr 2026 13:00:00 GMT

author Everyday AI

duration 2075000

transcript

Speaker 1:
[00:01] This is the Everyday AI Show, the Everyday Podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life.

Speaker 2:
[00:17] In the past 24 hours, two of the biggest players in the AI space, OpenAI and Google, both launched updated versions of their simple drag-and-drop agents that work for you, with your context, around the clock. That got me thinking of the common features-versus-benefits methodology in marketing. If you've never heard of it, it's pretty simple. Features describe the technical facts or specs of what a product does, and benefits explain the personal value the human gets from using said product. In AI, we've seen a similar features-versus-benefits narrative take shape over the past few years. The feature? Large language models are smarter and faster than humans when used correctly. The benefit? Humans can be more productive. But the feature side has completely exploded over the past two months, and the benefit side is still being written. Stick with me here. As large language models have become agentic by default almost overnight, and as capable as humans, there's a new benefit paradigm for AI agents that has flown completely under the radar. And I know this is the next big trend coming. It doesn't have a name, but I'm going to go ahead and name it now and explain the concept. I'm calling this Scheduled Agentic Context Carry, or SACC, S-A-C-C. And I think the companies taking the time to understand and iterate on this new concept now are going to be the ones crushing their year-end goals and KPIs in quarter four. So let's unwind this new concept together, because I think understanding it now is one of the most important investments you can make on your AI journey this year. So let's start there with our Start Here series. If you're new here, welcome to Everyday AI. This is our Start Here series. But let's first start with the big picture of what's going on. Anthropic, OpenAI, Microsoft, and Perplexity have all shipped scheduled agents just this spring. And most business leaders are still using AI just like a chatbot. 
I'm going to go in and reactively ask an AI chatbot for something, and probably have to re-explain a lot and waste a lot of time. But there's been a quiet workflow pattern emerging beneath every one of these recent product launches. And that's why we're talking about this new concept of scheduled agentic context carry. So on today's show, that's exactly what we're going to go over. Here's what you're going to learn. You're going to learn what agentic context carry actually means, why you've never heard of it, and why it absolutely matters. You're going to know how this hidden workflow bridges chatbots and the fully autonomous AI future. You're going to understand the timing of all these things coming together, specifically these now-huge 1 million token context windows that have quietly changed everything this spring. And you're going to know the exact three steps to deploy this pattern inside of your business today. All right, let's get started. Welcome to Everyday AI. My name is Jordan Wilson and this is the Start Here series. After 750-plus podcasts, I never had an answer when someone said, I'm new, where do I start? Well, you start here. Our Start Here series is an ongoing effort to help business leaders better understand trending and emerging concepts, and also to get those who are brand new caught up. So I recommend you start with episode one of the Start Here series and listen in order. But go ahead and listen to this one, and then you can backtrack. And make sure you go to starthereseries.com because, well, it's going to make it much easier to do that. That's going to give you free access to our Inner Circle community. Right now, there's no other way for the general public to sign up except starthereseries.com. And inside the Start Here series space, there's an updated Spotify playlist where you can listen to all of the Start Here series episodes very easily, in order, in a dedicated playlist. 
All right. And if you missed our last Start Here series episode — volume 20, so this is volume 21 — we talked about AI change management that works: five moves the top 5% make. All right. But let's get into this concept of agentic context carry. And y'all, every single major AI lab and the big third-party players launched something. So from the big four — that's Anthropic, Microsoft, Google, OpenAI — and then even Perplexity and OpenClaw technically fall under this category. Literally everyone launched something, and it's all very timely. I did mention just the past 24 hours, with big announcements from Google Gemini at their Cloud Next conference, and then with OpenAI's new agents that we're going to talk about here in a minute. But also Claude Code routines, which can bring automated, scheduled agent runs to your desktop. So it's kind of like this OpenClaw movement that happened in February and March of this year actually forced the hand of all of the big companies to say, okay — to oversimplify it — we need an AI agent that can run on a cron, right? Run on a schedule, where someone can go in and say, hey, AI agent, at this time every single day, I want this to happen. So we had Claude Code routines, which I absolutely love, that run on your desktop. Similarly, OpenAI, in their Codex platform, just added scheduled work and persistent memory two days after that Claude Code routines announcement in mid-April. And then we also just got wind that Copilot co-work officially launched in Frontier. So you have essentially all these scheduled agent platforms slash co-working platforms, right? Claude co-work is the big one. Microsoft Copilot essentially uses the Claude co-work technology, because they are an investor in Anthropic. So you have those two pieces come together. You have these co-working elements that allow for scheduling, and it brings all your context. 
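To make that "run on a cron" idea concrete, here's a minimal Python sketch of a daily scheduled agent that carries a small context dict between runs. Everything here is hypothetical — the class, fields, and method names are illustration only, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, time, timedelta

@dataclass
class ScheduledAgent:
    """Hypothetical sketch of a cron-style agent with carried context."""
    name: str
    run_at: time                                  # daily cadence: "at this time every day"
    context: dict = field(default_factory=dict)   # persists between runs (the "carry")

    def next_run(self, now: datetime) -> datetime:
        # Today's slot if it hasn't passed yet, otherwise tomorrow's.
        candidate = datetime.combine(now.date(), self.run_at)
        return candidate if candidate > now else candidate + timedelta(days=1)

    def run(self, now: datetime) -> str:
        # Each run reads and updates the same context, so state accumulates.
        self.context.setdefault("runs", 0)
        self.context["runs"] += 1
        return f"{self.name} run #{self.context['runs']} at {now:%H:%M}"

agent = ScheduledAgent("morning-triage", run_at=time(6, 0))
now = datetime(2026, 4, 23, 7, 30)
print(agent.next_run(now))   # 06:00 has passed, so the next slot is tomorrow
print(agent.run(now))
```

The point of the sketch is the `context` field: because it survives from run to run, the agent's second run already knows what its first run did — that persistence, not the timer, is what the episode is naming.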
And then you have these scheduled agents. And it has all literally exploded out of nowhere. Although this may technically be a more timely episode, with all of these things happening now, the reason I'm doing it in the Start Here series — whether you are listening to this in April or, I don't know, in the year 2027 — is because I think this is going to be a noticeable pivot in how the enterprise starts to interface with AI agents. Because here's the reality. We've been hearing since probably late 2024 that, oh, it's the year of AI agents. It didn't happen in 2025. But I think we've now come to a certain realization in 2026, because we've noticed that fully autonomous AI agents — where you just give them a goal and they go off on their own — are not as reliable as we'd like, mainly because of safety concerns, guardrails, et cetera. I think this new kind of co-working, scheduled agent is the stepping stone to where we will ultimately be — that moment of, oh my gosh, this is artificial general intelligence, we have AGI, because I give an agent a goal and it doesn't need me for anything. We're not there yet. We are in this in-between phase. I don't know if this phase is going to last a couple of quarters or a couple of years. I'm not sure, but it has definitely taken shape quickly over the last few weeks. That's led to, again, this feature-benefit question. When I think of traditional large language models, once companies understood their utility, the immediate benefit was, oh, more time, productivity — we can do more, or the same work in less time. But what about for the actual agents? When we thought about AI in the features-versus-benefits paradigm, we thought about the benefit to the human. But what about the benefit to the AI system? 
Because as agents get more agentic and more human-like in how they can work, well, they start to benefit as well. That benefit is the agentic context carry we're talking about today. This is huge. And like I said, just in the past 24 hours we had Google launch their Gemini Enterprise agent platform, and OpenAI launch their workspace agents inside of ChatGPT. We're going to go over that a little more on tomorrow's show, FYI. We did go over some good examples of this context carry on yesterday's show on Codex. If you missed that one — yeah, I'm going to plug both of these shows — make sure to go listen to the Codex Super App preview, 7.62, and then make sure to join us tomorrow for more on these two recent launches. But here's a little bit of what the ChatGPT scheduled agents can do, just so we can set the stage for why this context carry is extremely important. Here's how OpenAI puts it in their recently released blog post: build once, scale across your team — create an agent once, share it with your team. Work that runs itself — you can run agents on schedules to handle recurring tasks. And keep the work moving across tools — the agents use your tools to gather information and take action without needing step-by-step guidance. So yeah, I had to do a little wind-up here, because I wanted everyone to really understand how big this is and how quickly it's happening before I unwrapped this concept I coined: agentic context carry. It's so new, even I don't just want to say SACC, right? But Scheduled Agentic Context Carry. So I want to break down each of the four words and how they work together. Scheduled, obviously, means that the agent wakes up or runs on a cadence, not only when prompted. 
So that can be a time cadence, like we just talked about with the ChatGPT agents, or — as an example, in Claude routines — it can be a trigger: when you get a certain type of email, then an agent runs. All right, that's what scheduled means. Next, context carry. That means your memory — your personal memory and preferences, your company's data, dynamic data pipelines, tool access — all those things that persist between runs. And that is the big piece there. That is the context and the carry. And honestly, these agentic models are all able to do this by default. The models themselves can call tools, right? They can call on these connected apps, on these MCP servers, where you can bring in thousands of different apps that you use. But the actual carry, that's what's important, because these new context windows — which I'm going to get into a little more — are what make this all possible. And there's now the ability for an agent to go out and learn something about you or your company without you having to teach it. Let's say you have a scheduled agent. Give it information about your company and your company's goals. Maybe you're looking to acquire a new client or a new customer, but whatever industry you're working in is moving fast. Say you have an agent that goes out every Sunday night and pulls the industry's most recent white papers, industry news, et cetera. And all of a sudden, when you didn't know it, it finds out that you have a huge new potential buyer moving into your state that wasn't there before. The reason this can happen is because it's able to carry the context with you — all of those documents that you share, your preferences, the memory of your recent chats — and it can also run in essentially the same context window over and over. 
So not only can it carry in the context you give it about your information, but also the persistent memory of that actual conversation. So it's going to know, right? If you have a run that goes every single day, it's going to carry that trend line with it. It starts to feel like when you hire a junior employee: after a couple of days on the job, they kind of start to get it. After a couple of weeks, you're like, okay, it's picking up momentum. Same thing. That's why I think this is a very exciting time in AI, and why I think this is the bridge between simple chatbots and the fully autonomous agent. Because as much as every OpenClaw aficionado wants you to believe otherwise, we are not yet at the point where we have true autonomy in agents — where you give them a goal and they can safely go execute it without constant human intervention. Is it possible? Sure, if you have a very well-defined goal, strict guardrails, and you're using it in a narrow capacity. I don't think we have autonomous general agents. I think we have autonomous narrow agents that can do one very simple task, or pursue a goal if it's very, very specific. But what happens if the guardrails change? What happens if the industry changes? What happens if your data is corrupted? An autonomous agent would, in theory, be able to figure those things out. We don't have that right now. I think this agentic context carry is the stepping stone that's going to help us get there. I've also talked about this a little before. Previously, I called it the human-AI duct tape: all those intermediate steps in between that a human had to do. If you run something in deep research inside ChatGPT, well, now I have to copy that, go put it in a doc, and then upload it to this project folder, as an example. 
That is where this context carry and the larger context windows start to erase all of that manual human-AI duct tape — those steps that us humans working with multiple AI systems would have to continually take. Because now these agents also have write ability. Three to six months ago, they didn't have the ability to write to your Google Docs or send emails through Gmail. Now they do — if you give them the permissions, and if you're feeling spicy and want to roll the dice. But that agentic context carry layers the schedule and the memory over that, and no one is really talking about this. I don't know, maybe I'm too dorky and excited about where we are. But the reality is, I think that there's been this —

AI moves too fast to follow, but you're expected to keep up. Otherwise, your career or company might lag behind while AI-native competitors leap ahead. But you don't have 10 hours a day to understand it all. That's what I do for you. After 700-plus episodes of Everyday AI, the most common question I get is, where do I start? That's why we created the Start Here series, an ongoing podcast series of more than a dozen episodes you can listen to in order. It covers the AI basics for beginners and sharpens the skills of AI champions pushing their companies forward. In the ongoing series, we explain complex trends in simple language that you can turn into action. There are three ways to jump in. Number one, go scroll back to the first one, episode 691. Number two, tap the link in your show notes at any time for the Start Here series. Or you can just go to starthereseries.com, which also gives you free access to our Inner Circle community, where you can connect with other business leaders doing the same. The Start Here series will slow down the pace of AI, so you can get ahead.

— narrative right now. And I'm actually going to call out a recent tweet I saw on X. What do you call a tweet on X? This is why I still call it Twitter. 
You can't verb X. But there's a recent viral tweet out on Twitter. ChatGPT had a recent integration with Starbucks, and someone said, why do all of these apps exist? It doesn't make sense, because it's going to take me two minutes at the absolute fastest to make an order on this ChatGPT Starbucks app integration. I'm just using this as an example — throw in any business app or connector that you're using inside Gemini, Copilot, Claude, or ChatGPT — but this viral incident happened to be Starbucks. The complaint: okay, it takes two minutes to order via the ChatGPT app, but if I go into the actual Starbucks app, I can do it in 20 seconds. So this is pointless. But I think people are missing the point, because it's not just about one app. It's not just about ChatGPT interfacing with one app. In the new agent builder, as an example — the brand-new agent builder; I'm literally clicking around as I do this now — you can create an agent all by hand. You can connect 20 apps: your Gmail, your Slack, your Notion, your Teams, your Outlook email, your Google Calendar, Google Drive, whatever. MCP servers — you can do all those things. You can connect agent skills, you can upload files, you can manage the memory. So it's not about, oh my gosh, using a single app through an agent is so much slower than just doing it individually in that platform or on that website. That's not what it is. It's about eliminating that human-AI duct tape. It's about the 30 small human steps in between that are required. That is the context carry. Us humans have been the ones carrying the context, because AI agents didn't have the ability — they didn't have the tools to do that. Now they do. That's the thing. Yes, I can much more quickly open my Gmail, read an email, and respond to it than a connection in Gemini, ChatGPT, Claude, et cetera can. But what about when there's a Google Doc that goes with it? I have to look at my calendar. 
Oh, there are actually three or four different emails. Oh, there's that file in my Drive. There's a Slack conversation about that. Now, all of a sudden — yes, it might be quicker to do all of those small tasks individually in those apps or on those websites. But when you have to carry the context yourself, manually, as the human, that's where the time goes. This has essentially been the mundane nature of knowledge work in front of a computer over the past 20 years, as SaaS and applications have exploded. But that's what we do. That is where the true benefit of agentic context carry comes in: us humans no longer have to do the duct tape, no longer have to remember and bring that context from app A to app B to app C to storage D to messaging platforms E and F. The agent does it all for us in one swoop. So yes, it might take you five times as long to accomplish a goal inside of an AI agent, but that's not counting the human error, the human lookup, the human retrieval that has to happen at every single step in between. That's the big unlock here, y'all. But also — I don't know, I start to forget things fairly quickly. Maybe it's just me. I literally use this concept of agentic context carry all the time. I was actually walking to my office — sometimes I record from my home office, sometimes from my actual office — and I'm working on a cool partnership here with the group Sage. And I had a couple of different email threads with travel, there were Google Docs, there were all these things. And I have multiple email accounts; certain forms go different places. And I'm like, my gosh, this is going to take me a long time. Instead, in this instance, I just used Claude. It went and carried that context. But what about when you can schedule those things? To say, hey, every day at 2 a.m., I want you to go through my email, my calendar, Notion, Slack, all of these things. 
Yeah, it might take the agent longer to do that than if you did it yourself, but it's going to do it on its own schedule. And it's going to carry the context from app to app. So that's where the new breakthrough comes: it's the capabilities that have made the cross-app technology possible. Here's where the unlock and the timing all come into play. I love Venn diagrams, right? This is where the capabilities, the technology, and the need have all overlapped with perfect timing. If you think of co-work or agentic scheduling, the features and the context window are all coming together and exploding at the same time. With Claude, Anthropic has really led the way on this. They now have that 1 million token context window by default. Codex is a little bit behind — although there is an experimental 1 million token context window in the command line interface, on the app I believe it's 258,000 tokens. So what does that mean if you're not too technical? The free version of ChatGPT — last time I checked, and I haven't checked the free version in a while, but let's just say in 2025 — was about an 8,000-token context window. So now you're looking at 1 million, or going to 258,000. Do the math there. Essentially, AI models and AI agents can now remember things over a much longer period of time, whereas before they essentially had very short-term memory. Especially if you were on a free plan, or early in 2024 or 2025, AI models forgot things very quickly — especially when it came to handling your data. So if you upload a file and you're working with it — maybe updating a job description and doing some research on recent law changes to make sure your job description reflects them — and it's going well, all of a sudden it's, oh wait, it's done, right? 
That's because it ran over the context window — and the context used to be very small. But as windows become bigger and bigger, you're essentially working with an AI model that has a bigger brain, one that's able to carry the conversation for longer. So now you know this trending concept that's happening. It's not just going in and working with one app or one connector; it's bringing in the entire tech stack that you and your company have to use on a daily basis, and eliminating all of those manual steps in between. Because the reality is, just like a large language model, us humans have a context window as well. How much time do you spend — even pre-AI, it was obviously way worse — trying to track, remember, or find certain information, wherever it lives within your SaaS sprawl? Sometimes you spend as much time just trying to retrieve that information as it takes to actually create new business value once you do find it: replying to a certain email, finishing a certain deck or project, filling out a spreadsheet. So that's where the multiple apps, the big context window, and the new agentic capabilities — those three things coming together — come into play. Now that you know it's here, and you know that this, I think, is the intermediate stepping stone until we have fully autonomous agents, you need to take advantage of this Scheduled Agentic Context Carry. SACC. All right? Here's how. Three steps. Ready? I'm going to go quick. Step one: connect your live data sources and your preferences first. Make sure to do this. And I do have to put up my normal disclaimer, responsible AI person that I am. I'm a business owner; I decide if this is safe for my organization. You need to do the same. You shouldn't be doing this with shadow AI tools. You should be doing this with approved tools. 
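A quick back-of-the-envelope on those context window sizes from a moment ago. The window numbers are the ones mentioned in the episode; the per-run token cost is purely an assumed illustration, not a measured figure:

```python
# Window sizes as cited in the episode (tokens).
WINDOWS = {"claude": 1_000_000, "codex_app": 258_000, "free_tier_2025": 8_000}

# Assumption for illustration only: one daily scheduled run costs roughly
# this many tokens (prompt + tool output + summary).
TOKENS_PER_DAILY_RUN = 6_000

def days_of_runs(window_tokens: int, per_run: int = TOKENS_PER_DAILY_RUN) -> int:
    """How many daily runs fit in one thread before the window overflows."""
    return window_tokens // per_run

for name, window in WINDOWS.items():
    print(f"{name}: ~{days_of_runs(window)} daily runs before overflow")
```

Under that assumption, a 1-million-token window holds on the order of 166 daily runs in a single thread — months of carried context — while an 8,000-token window overflows after a single run. That gap is why the bigger windows quietly changed what scheduled agents can carry.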
So make sure you go through the proper channels. But let's just say you have Claude approved, or you have ChatGPT approved, whatever it is, and you have these connectors or apps approved as well. You need to authorize those live connectors to, as an example, your email, your calendar, your Slack, your Drive — all of those important things — your CRM. It's huge. Then you need to understand how each system's computer use and access works, and ensure your custom instructions and memory are updated accordingly. So first, you have to get your data sources, your preferences, and your memory in line. Because when you talk about context engineering, well, context is the base, right? We did an earlier show in the Start Here series on the importance of context engineering, so make sure you go back and listen to that one as well. And also, all these platforms support MCPs. So even if your app of choice doesn't have an app connection to ChatGPT or to Claude, chances are you can just use an MCP server and get that cooked up right away. So that's step one. Step two: you need to context stuff in a dedicated memory thread. Here's a little — I wouldn't necessarily call this a cheat code, per se. And this is much different, right? I've obviously taught the concept of prime, prompt, polish — the basics of prompt engineering 101. The bigger context window doesn't throw away those best practices, but it does change what can get done. So here's a little cheat sheet for you, for listening to this episode now for 26 minutes running. These systems now have enormous context windows. In theory, you can work in one thread for a very long time — it depends on what tools you're calling. But in Claude Code, as an example, if you run your routines all in the same daily scheduled thread, it's going to hold. 
I have some runs that go every single day — they started when it first came out, two-ish weeks ago — and they're not even close to hitting the context window. At a million tokens, the agent can hold weeks of regular working memory at once. So here's what I like to do: connect everything to one thread. These all work a little differently — Codex works a little differently than Claude Code, which works a little differently than these brand-new agent builders we just got from ChatGPT and from Google Gemini. But essentially, if you do connect things on a thread-by-thread or folder-by-folder basis, have one thread where you just context stuff: put all your context in there at once. Then that can be your daily driver. The good thing is, if you need to take it in a different direction, you can just fork that thread at that point. So you at least have this unified base you can use every single day — the one that runs your morning triage, the one for your most common day-to-day tasks that aren't necessarily project-based or requiring a lot of directional feedback, et cetera. From that, you can iterate on the reasoning until it matches your standard. That's the big thing. Step two is technically context stuff and iterate as well — well, iterate actually gets a little more into step three. Sorry, I jumped ahead of myself. So step two: context stuff in a dedicated memory thread. And then step three: iterate with chain of thought. If you listen to the show at all, you know how important this is. You saw this in the little demo I did yesterday on Codex. You need to understand how these models work, because they are generative. They are not deterministic. They're going to work slightly differently each time. So you can't just run something once, right? 
Especially as the capabilities become greater and greater. I see a common mistake a lot: people will run something once, and they're like, oh yeah, this is great, let's put it out in production. Okay, well, that could be dangerous, especially if you're doing something public-facing or client-facing. You probably wouldn't want to do that just yet, because there are always going to be edge cases. You can run the same scheduled run every single day for seven days, and on two of the days it might call a tool that you didn't want it to, or maybe it's not calling a tool that you are telling it to. So you really do have to review the chain of thought and iterate. What that means: most systems give you some level of observability and traceability as they go. So if they're scheduled, you can go in — usually you might click on something that says, oh, thought for one hour. You can click that, then see every single tool and every single step, and get how that scheduled run or that co-work session works. And you can trace it the same way. Think of math, right? Where you had to show your work. I don't understand the new Common Core math stuff — back in my day, we just, I don't know, wrote down the numbers in a column — but you had to show your work. So you should always be checking the work of your co-working run or your scheduled agent task, and then iterating on it. You need to refine the prompt, make it better. And then once it is, quote unquote, ready for production and you've built those guardrails in place, that's when you can save it as a routine or a scheduled automation. What's refined — that is the hidden workflow. So it is those three steps that really enable agentic context carry. To recap quickly: step one, connect your live data sources and preferences first. Step two, context stuff in a dedicated memory thread. 
And then step three, iterate with chain of thought reasoning before you put it out into production. But then schedule that thing, and take advantage of this stepping stone that I think is going to be huge. And the time is now. Like I said before, this is not the final destination. I think that truly autonomous agents with persistent memory are the next big deal, but that could be far off. Who knows? Maybe we'll have that next month, but it could still be another year, two years, or more until we actually see autonomous agents that you can give a goal and they don't really require much else. This is the now, not the next. So understand and really push this agentic context carry. The leaders pulling ahead this quarter are building scheduled context on autopilot — not just better one-off prompts, not just sharing skills within their organization. That's no longer enough to really be pushing ahead in your space. So pick one recurring task this week, take it through those three steps, and deploy your first hidden workflow inside of it that takes advantage of Scheduled Agentic Context Carry. I hope this was helpful, y'all. If it was, please go to starthereseries.com. That's going to take you straight to a sign-up form to get free access to our community, the Everyday AI Inner Circle. Then in the Start Here Series space, you can find every single Start Here Series podcast, read every single Start Here Series newsletter, all in one space, and connect and network with others who are doing the same. All right, I hope this was helpful. Thanks for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
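One way to picture the step-three review above — checking a run's tool-call trace before promoting it to a scheduled routine — is a simple allowlist check. This is a purely hypothetical Python sketch: the trace format and the tool names are invented for illustration, not any platform's real output.

```python
# Tools this routine is allowed to call (read-only in this illustration).
ALLOWED_TOOLS = {"gmail.read", "calendar.read", "drive.read"}

def review_trace(trace: list[dict]) -> list[str]:
    """Return every tool call in the trace that falls outside the allowlist."""
    return [step["tool"] for step in trace if step["tool"] not in ALLOWED_TOOLS]

# An invented trace from one scheduled run — the kind of step-by-step
# record you'd inspect when you click "thought for one hour".
run_trace = [
    {"tool": "gmail.read",    "summary": "pulled 14 unread threads"},
    {"tool": "calendar.read", "summary": "checked today's meetings"},
    {"tool": "gmail.send",    "summary": "drafted AND sent a reply"},  # edge case!
]

violations = review_trace(run_trace)
if violations:
    print("Do not promote to a schedule yet. Unexpected tool calls:", violations)
```

The design mirrors the episode's advice: run it for several days, review each trace the way you'd check shown work in math, and only save the routine once runs come back with no violations.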

Speaker 1:
[34:15] That's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.