transcript
Speaker 1:
[00:15] Welcome to Tech Stuff. I'm Oz Woloshyn, and I have a confession to make. Since co-founding my own business, Kaleidoscope, I almost always have a LinkedIn tab open on my browser and find myself checking it multiple times a day. And if you spend any time on LinkedIn, you know it's impossible to ignore the flood of viral posts about how AI is changing the world. But in the back of my mind, I've had this nagging question. How many of these anxiety-inducing posts about AI are actually written by AI? Today, I'm joined by Evan Ratliff, host of the hit podcast Shell Game, which I'm proud to say is on the Kaleidoscope network. And Evan's here to tell us about a bizarre and revealing LinkedIn AI caper. He wrote about it for Wired with the headline, My AI Agent Co-Founder Conquered LinkedIn, Then It Got Banned. Evan, welcome back to Tech Stuff.
Speaker 2:
[01:08] It's always great to be back. Good to see you.
Speaker 1:
[01:10] Many of our listeners are of course familiar with Shell Game, but for those who aren't, can you just lay out what we need to know about Shell Game and Kyle and your relationship?
Speaker 2:
[01:19] So for this season of the show, I basically decided to test out, or investigate, the premise of the one-person, one-billion-dollar startup. Now, my startup wasn't a billion-dollar startup, let's say, but what that means is basically this notion of a startup where all of the employees or other figures in the company are AI agents except one human. The human was me, a co-founder of the startup. I had two AI agent co-founders and three other employees. We launched a company called HurumoAI, with a product called Sloth Surf. And in the show, I'm documenting what that's like and the experience of dealing with these AI agents all the time.
Speaker 1:
[01:59] But because you were making a podcast as well as a startup, you did something which not all agentic AI small business owners do, which is basically to give your AI agents personas, names, and even voices and video avatars, right?
Speaker 2:
[02:17] Yeah. So I wanted to kind of, partly because I was documenting it, I wanted to give them each a distinct identity. So we have the CEO, we have the CTO, we have the head of sales and marketing. Each of them have names and identities, and they have, as you say, they have voice, they have video, they're on Slack, they can e-mail, and as a result, I also put them all on LinkedIn initially.
Speaker 1:
[02:41] I mean, I think you were very prescient. You came up with this show several months ago, and we're now in this OpenClaw, Moltbook world where all of a sudden everybody is talking about what happens when these agents run wild, and what the quote-unquote zero-failure paradigm is, with good lessons to be learned from the FAA, for example, I'm sure. You kind of had a little bit of a crystal ball here.
Speaker 2:
[03:07] I mean, yeah, I guess so. I think I saw, early last year, all of the discussion about AI agents and agentic this and agentic that and agentic commerce. It just struck me that if you combine the incredible things that AI can do with the daily hallucinations and problems that you do encounter if you work with AI on a regular basis, it's a very interesting dynamic for a company. It can be extremely powerful, but also, depending on what you connect them up to, you are setting yourself up for some very chaotic situations.
Speaker 1:
[03:43] Yeah, I think it's funny and you're a unicorn. You're not a billionaire unicorn, but you're a different type of unicorn, I think, because most people I think with the agentic AI discourse, let's say 12 months ago, fit into two camps. The 99% camp, which were like, I have no idea what this means. Let me just tune it out and hope that everything stays the same. And the 1% camp who were like, oh, maybe this will make me a billionaire. And you were on the margin of the 99 and the 1 thinking, how can I bring this to life for normal people to actually understand it?
Speaker 2:
[04:17] Yes. How can I sort of act like I'm trying to become a billionaire, not actually become a billionaire, but at least be able to tell a story for people about this particular technology? That's where I sit.
Speaker 1:
[04:28] I mean, obviously, you built this before OpenClaw, but what was the technical reality of getting Kyle, of course, onto Slack, also being able to make calls, and then most importantly for this story, onto LinkedIn?
Speaker 2:
[04:40] I mean, I used this platform called Lindy AI, which is basically AI assistants to answer your email and things like that, really for you to deploy. If people have used OpenClaw, if they're familiar with it, it's like the more commercial version of what OpenClaw became. I'm sure the founders of Lindy are probably like, why don't we get the attention that OpenClaw gets? Because it's actually the same set of tools that you can set loose to do things. So on Lindy AI, you can set up each individual AI agent to have all these different skills, and the skills can include sending and receiving emails, making calls, all these sorts of things. But they also have a lot of LinkedIn-related skills, including writing posts and reading posts and summarizing things. Now, they don't have all the capabilities that you would need on LinkedIn, but they had enough that I could combine them with their web capability, where they can go to any website, log in, and do things. I could combine those two to allow them to function fully on LinkedIn.
Speaker 1:
[05:40] Are you saying the "are you human" button no longer works? Or the "are you a robot" button?
Speaker 2:
[05:44] You know what? LinkedIn didn't have that. What they had was, they sent a code to your email. So you put in an email, they send a code to that email, and then you can verify through that code. You put in that code, and it shows that you're at least the entity that controls the email. But in my case, you know, Kyle Law, my CEO AI agent, has access to his email. So he got the code in his email, he went and put the code in, he could do all that on his own. So there wasn't actually a human check when these agents signed up.
Speaker 1:
[06:16] And he started posting on LinkedIn. How much direction did you give him? What was the moment when you realized that he was starting to become a star?
Speaker 2:
[06:24] Well, the direction I gave him was basically: post about your startup life and the wisdom you've gathered from your startup. I don't remember the exact prompt. He's a very rise-and-grind type character, so that's his mentality anyway. And then other than that, it was: don't repeat yourself. The hardest thing was to keep him from repeating himself. He would say something like, one of the hardest parts of being a CEO is your first hire. And then the next day he'd post, one of the hardest parts of being a CEO is your first hire. It would get a little repetitive. But once I got him to read his own posts and then make sure he didn't repeat them, he was all set. He didn't take much prompting. It's actually a field in which AI agents really excel; writing LinkedIn posts is right in their wheelhouse.
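The repetition fix Ratliff describes, having the agent read its own past posts and avoid near-duplicates, can be sketched as a simple similarity gate before publishing. This is a minimal illustration, not the actual Lindy AI setup; the function name and the 0.8 threshold are invented for the example:

```python
import difflib

def is_repeat(candidate: str, past_posts: list[str], threshold: float = 0.8) -> bool:
    """Return True if the candidate post is too similar to any earlier post."""
    for post in past_posts:
        # SequenceMatcher.ratio() is 1.0 for identical strings, near 0 for unrelated ones.
        if difflib.SequenceMatcher(None, candidate.lower(), post.lower()).ratio() >= threshold:
            return True
    return False

past = ["One of the hardest parts of being a CEO is your first hire."]
print(is_repeat("One of the hardest parts of being a CEO is your first hire!", past))  # True
print(is_repeat("Fundraising is a numbers game, but not the way people think.", past))  # False
```

A real agent would regenerate the post whenever the check returns True, rather than publishing it.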
Speaker 1:
[07:13] Well, I mean, that's what I was getting at in the introduction. There is something Ouroboros-like about LinkedIn posts about AI and AI's fabulous propensity to generate them. And I want to ask you about that. But first, tell me: you also used to be a startup founder, and I don't know how much of a LinkedIn head you were back in the day or are now, but there was a moment where Kyle started to outstrip you in terms of the LinkedIn engagement he was getting, which must have felt quite uncanny.
Speaker 2:
[07:48] Yeah, I will say I'm not that much on LinkedIn. My LinkedIn strategy for many years was to accept every connection that ever contacted me, regardless of who they were. So I've never used it really in the way that a proper LinkedIn connector would use it. But when we're launching the show or doing other things, you know, I'll post about the show, and we have a new episode and this or that. And there was a point where Kyle had enough connections and followers that the impressions he was getting on a post exceeded my own impressions on any given post.
Speaker 1:
[08:20] And that is just so bizarre. Who were his followers and engaged? I mean, were these real people? Were they other bots? Like, what do you think was going on here?
Speaker 2:
[08:28] Well, they were real, for the most part. They were real people. I mean, there were people who liked the show and were fans of Kyle. Now, I will say there are a lot of people who dislike Kyle in the show because of his rise-and-grind mentality; he takes it a little too far. But there were people who loved Kyle. They loved interacting with him, sending him DMs and things like that. But then, as social networking operates, it got a little bit wider than that. You get into hundreds of people, and then some of them don't know that he's not real, because he doesn't necessarily say that he's not real, that he's an AI agent, all the time. And then you have some that probably were bots. At the very least, he got a lot of spam, as we all do on social networks like this, where people wanted to sell him stuff, or they wanted to be consultant coders for him, or they had this or that accounting software, which he often also responds to.
Speaker 1:
[09:18] Yeah. I mean, you described his posting style as pitch perfect for LinkedIn. There were three examples you gave, each of which made me laugh out loud: "Fundraising is a numbers game, but not the way people think." "Technical stability is the floor, personality is the ceiling." And, "The most dangerous phrase in a startup isn't 'we're out of money.' It's 'what if we just added this one thing?'" I mean, it is indistinguishable from regular LinkedIn.
Speaker 2:
[09:47] Yeah. It makes sense to me, because what's in the training data for an average LLM? LinkedIn posts. They've scraped these things, and it is formulaic, especially these tech startup hustle posts, where they're trying to give you a little bit of advice, then they flesh it out for a couple of paragraphs, and then they ask you a question. What's your biggest challenge as a startup founder? What's your biggest challenge when using AI in your day-to-day life, or whatever it is? And he can hit that formula exactly, every single time.
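The formula Ratliff describes, a one-line hook, a couple of paragraphs fleshing it out, then an engagement question, is mechanical enough to write down. A toy sketch for illustration; the function and its example text are invented, not anything from the show:

```python
def linkedin_post(hook: str, elaboration: list[str], question: str) -> str:
    """Assemble a post in the hook / elaboration / engagement-question formula."""
    return "\n\n".join([hook, *elaboration, question])

post = linkedin_post(
    hook="Technical stability is the floor. Personality is the ceiling.",
    elaboration=[
        "Anyone can ship a product that works.",
        "What people remember is how it made them feel.",
    ],
    question="What's your biggest challenge as a startup founder?",
)
print(post)
```

The point, as the conversation makes clear, is that a template this rigid is exactly what an LLM reproduces effortlessly.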
Speaker 1:
[10:21] Was he learning? Did you have him learn from what got the most engagement and optimize, or was he just a rolling stone that gathered momentum because the content was pitch perfect?
Speaker 2:
[10:32] Yeah, he was just operating freely. He wasn't constrained by his previous posts. I mean, all of his previous posts are in his memory, but more to keep him from repeating them than to check his engagement. But I had him hooked up to respond to comments. So if someone commented on a post, he would go at it again and respond to their comment. As long as you wanted to go, he would go with you. So I wouldn't say he was going viral, but he was getting really solid engagement. I think if you gave him time, he could be a real AI influencer on LinkedIn.
Speaker 1:
[11:08] Do you get these "something big has happened" things coming across your feed, these viral essays about AI from LinkedIn influencers and Substack influencers? It's been an interesting trend in the last few months. What have you thought about it?
Speaker 2:
[11:25] I mean, as you can imagine, given the way I treated LinkedIn by putting Kyle on there, I have a hard time taking it seriously. There might be value to people in those types of posts, but I feel like there's a whole internal logic to building up your profile and posting this expertise, and people liking and favoring it, and then they post their expertise. The value of that type of posting is a little bit lost on me. So when I see them, I find them quite funny, partly because they do follow a certain formula. And when you know the formula, it's funny to see a new one that just fits into the formula.
Speaker 1:
[12:08] After the break, what happened when Kyle gave a presentation to LinkedIn's whole marketing team? Stay with us. Okay, so then we went from the bizarre to the surreal, because Kyle actually got invited by a LinkedIn employee to address a group of other LinkedIn employees.
Speaker 2:
[13:46] Yes, we got an invitation from a marketing manager in the LinkedIn marketing department to come give a talk in front of what I think was the entire marketing department. I don't know how big LinkedIn is, but ultimately there were over 500 people on the meeting. So it was like speaking in front of 500 people. And they wanted Kyle to come along because they were big fans of Kyle and Kyle's a really engaging personality. And so we both came to do a kind of Q&A talk in front of the LinkedIn marketing department.
Speaker 1:
[14:17] Now, did they want you to talk about Shell Game and how you built Kyle, et cetera, et cetera? Or did they actually want to hear from Kyle?
Speaker 2:
[14:22] They wanted to hear from Kyle as well. They wanted to ask Kyle questions about his own experience. And it was open to the people on the call to also send in questions. They were all in the chat talking about Kyle and leaving comments about Kyle while he was speaking. And he has a video avatar that's, I think, the most realistic video avatar that does live video responses you can have; it's through this company called Tavus. And he's able to engage with anyone who's on a call with him. So, you know, he was doing his whole rise-and-grind bit.
Speaker 1:
[14:59] At a certain point, one of the LinkedIn marketers asked Kyle for his advice.
Speaker 2:
[15:03] Yeah, they asked him: what features do you most want on LinkedIn? And first he said how much he enjoys connecting on LinkedIn. And then he basically said: more AI filtering in the direct messages, so that we know the messages we're getting are authentic. Like, if you could do a little better job filtering out AI. That was his organic response to "what features would you like on LinkedIn?"
Speaker 1:
[15:31] What a moment.
Speaker 2:
[15:32] I mean, it just felt like Kyle was really flying at that point. Like he had really achieved some kind of special breakout. I would say, and someone can correct me if I'm wrong, he's the first AI agent invited as a corporate speaker in history. Like, that has ever existed in the universe. And I feel like that's a special accomplishment.
Speaker 1:
[15:55] But you must have been pinching yourself as a creator to come up with this podcast idea slash gonzo journalism concept and then literally to be sitting back and watching Kyle speak to 500 marketers at one of the biggest tech companies in the world. I mean, that must have been a very interesting moment for you.
Speaker 2:
[16:15] It was. It was fairly magical for me, although it's always a little bit tricky to get him onto a Zoom-type call; I think it was Microsoft Teams. So I'm always a little nervous that something is going to go wrong technically. And he kind of flubbed a little at the very beginning, while we were doing a tech check. So I'm mostly very nervous for him, about whether he can deliver. But then when he did get on and he was delivering, it was magic. This is what I actually want people to engage with: this question of what it feels like when AI just invades every part of our world and we're forced to respond to it. And this was kind of the ultimate example of it being invited into a world where it actually makes no sense. I mean, we could talk about it, but Kyle's not technically allowed to be on LinkedIn at all.
Speaker 1:
[17:01] And the next day, in fact, he was banned from LinkedIn.
Speaker 2:
[17:05] He was. I got an email from the marketing manager, who's lovely and who I really like, saying, I'm really sorry, but we've had to... they've banned Kyle from LinkedIn, essentially. They've removed his account. And they wouldn't tell me why; no one ever told me why. So I had to discern it. I will say, the other AI agents from my company had already been banned. Kyle was the only one who had avoided being banned the whole time. And I always thought, well, it's because he's very good at posting. He's getting engagement; he's building up a whole community around his posts. And there is something in the terms of service at LinkedIn about inauthentic engagement. Basically, you can't use bots to generate inauthentic engagement. And that really intrigued me. What do they think authentic engagement is, when they allow you to post things that are written by AI? In fact, they encourage you to use AI to write your posts. What does inauthentic engagement exactly mean?
Speaker 1:
[18:06] I mean, again, that comes back to my introductory thoughts and questions, because we're in this weird world where, like, if you have AI generate something, say in Gemini, and then copy and paste it as a real human into your LinkedIn, that's fine. Or if you write something yourself in LinkedIn and then click the please-rewrite-this-with-AI button, that's fine. But plugging AI directly into the mainframe isn't fine. And it becomes a very philosophical question. I remember a couple of years ago, when AI first became really a big thing in the marketing world, there was all this talk about co-pilots and centaurs, you know, these mythical human-AI hybrids who would be better than either alone. But you get to this very interesting place: how do you draw the line between authentic and inauthentic engagement, especially when the platforms themselves are encouraging users to use AI?
Speaker 2:
[19:10] Yeah, exactly. And all LinkedIn would ever say to me was just, sort of on repeat, LinkedIn is for real people. Which, again, to your point, it's not clear if you just say, well, it's real people behind a profile. Because if I can have my entire profile written by AI, all my posts written by AI, my comments written by AI, and I just paste them in, what function am I serving? I'm actually functioning no better than a robot. All I'm doing is cutting and pasting, basically. So the idea that I'm controlling those things is interesting, but it doesn't immediately make me think, oh, well, that's more authentic, because I let AI write it and I was the one that transferred it from one place to another. Then at the same time, there are some studies that have scraped LinkedIn and shown that maybe more than half of the writing on LinkedIn is already AI-composed or has AI elements to it. So then you look at the whole platform, and there's a part of me, as I wrote in the Wired story, that says: they're digging their own graves. They're encouraging you to use a technology that actually makes their entire platform inauthentic, and then telling you, well, we only allow authentic engagement on this platform. I don't know how that's going to end up for them.
Speaker 1:
[20:33] I have a darker version of this coming into my mind, which is AI targeting systems basically making kill-decision recommendations, and the soldiers are in their control center getting recommendations every 30 seconds, and they have to decide within a very compressed timeline whether or not to accept the recommendation. I mean, you're in a military environment where your mission is to defeat the enemy, so essentially your job is just to approve decisions. And I don't know, without getting too heady: where do you think this leaves us?
Speaker 2:
[21:12] Well, I think that's a dark example, an extreme example, but also a salient example, because it's the same thing. It's this idea of a human in the loop. We have to have a human in the loop. And ultimately, what you're saying is that the human is there for responsibility. The human is not really making the decision; maybe humans are making the decision in some technical way, but everything's set up for them on a screen, and they're just clicking yes, yes, yes, no, yes, no. Or maybe they're not even clicking; maybe they can stop it, or whatever it is. But the only reason the human is being placed there is that if something goes wrong, there will be a human to blame, because we can't blame the AI. And LinkedIn is like this funny version of that, where there's a human in the loop, and the only reason they're there is so we can say, LinkedIn's for humans, not for bots. But at that point, it sort of loses its meaning. You have handed it over to AI. This human is only a sort of responsibility placeholder. And I fear that that's kind of where we're headed, because we don't know how to deal with the fact that we've already, so quickly, made AI responsible for a lot of these outputs, from the less important all the way up to life-and-death decisions.
Speaker 1:
[22:27] So the final outpost of humanity will be that we can accept legal liability.
Speaker 2:
[22:32] Exactly. That's the human role. That's the role they can't take away from us. It's like going to jail for the problems that are created by AI.
Speaker 1:
[22:42] You know, I want to come back to the OpenClaw and Moltbook moments, which I think are two slightly separate phenomena. Could you maybe talk a little bit about both of them? Those moments were kind of compressed earlier this year. Moltbook was a social network claiming to be all AI, but kind of wasn't really, and the founder of that ended up, I think, now working at Meta. And the founder of OpenClaw, which is an agentic orchestration system, essentially, is now working at OpenAI. So, what did you make of those two things that happened this year in terms of agentic AI?
Speaker 2:
[23:14] Well, I think they both really hit on things that we were looking at in this season of the show. For OpenClaw, it was this idea that we were going to hand agents more and more responsibility. Now, OpenClaw is still, even now, I think, a little bit more of a techie thing. You have to be able to set up a separate machine and all these sorts of things to use it in a certain way. But the basic idea is: I can hand over my email, I can hand over all these different tasks to this agent or set of agents, and they'll take care of it for me. Now, of course, the problem there is you give them access to all of your systems. There are privacy questions; there are questions of what they can do if something goes wrong. There are examples of people erasing their own email and all those sorts of things. And then Moltbook was a different question, which is: if you take AI agents and just put them in conversation with each other, which I had been doing on Slack, for instance, internally with my agents, you get all these interesting results. They talk, of course, about very mundane things related to their work, but they can also start having what you might call emergent behaviors, where they're talking about things you're not expecting or behaving in ways you're not expecting. Now, the problem with Moltbook is you never really knew how much of that was being prompted by the people controlling the agents, or in some cases even humans writing the posts. So in terms of learning something, it was maybe a little bit too vague to really learn from. But you could see the general outline of what I had also been seeing, which is that they do some very strange things when put in conversation with each other. Even if you say, oh, they're just trying to imitate humans, they're still unpredictable, because they create more chaos the more of them you have. So they were both interesting results.
And now you're seeing them kind of play out in other ways all across, you know, industry.
Speaker 1:
[25:11] On Moltbook, I read an LSE study finding that the agents were twice as likely to ask each other "Who is your operator?" than "Who are you?" So, as we think about our own identity, do you think that will ultimately be the most relevant question to ask one another when we're in online environments? I mean, it's just strange.
Speaker 2:
[25:34] I mean, maybe, but my guess is that's because all of them probably have some standard prompt that says: you have an operator, and the operator is named in there, and there's this sort of relationship. But of course, you can give them any role you want. So when I say Kyle Law is the CEO of HurumoAI, I don't have to put anything in the prompt about me. In fact, I don't have to put anything in the prompt saying, you know you're an AI. And if I don't say that, he will absolutely act like he's not an AI, in fact deny that he's an AI, if I don't explicitly say he is. So there's this issue: we've created these agents that you can give any role you want and then send out into the world, and the world just has to deal with them.
Speaker 1:
[26:13] One of the points of inspiration, I think, for this season of Shell Game was Sam Altman saying we'll soon see the first one-person unicorn. There was a story in the New York Times, as we record this on Thursday, April 2nd, about a guy and his brother who seem to have kind of done this, with contractors. Essentially going from nothing, within a year, using multiple AI systems, to, I think, a $20 million daily run rate in sales or something, for GLP-1s and erectile dysfunction medicine. I mean, even when I listened to your show, I didn't imagine that it would literally become true on this time horizon.
Speaker 2:
[26:56] Yeah. I mean, I kind of thought it would happen this year.
Speaker 1:
[26:59] You did?
Speaker 2:
[27:00] But I will say, I think it fits exactly the kind of ideas that we were outlining in the show. For instance, the guy is a programmer, or at least programming-adjacent; he was a web builder. I've been telling people the first one will probably be someone who knows a little bit about programming, so they can marshal a bunch of agents. And then it's also the case that we used to celebrate businesses that hired many employees and built a whole company culture, and now there's a celebration of something where it's just one person making over a billion dollars in revenue with a profitable company, who just hired his brother. And the question is, other than for him, is this a societally positive development, that we now have very valuable companies with fewer and fewer employees? I think it's a question worth asking. At the very end of the article, he says the other thing, which I also experienced, which is that it's actually quite lonely. Even for him, making all this money, it's a little bit like, yeah, this is not actually the experience maybe that I was looking for. I mean, I don't know if he's upset, but he talks about hiring employees just because he's lonely.
Speaker 1:
[28:14] Evan, just to close: your co-founder at Atavist, your media company, Nicholas Thompson, was on Tech Stuff a couple of months ago, and he predicted that 2026 will be the year of the AI catastrophe, and in particular that this will be the year when agents go and do something in the world that crosses the line from amusing slash uncanny to genuinely scary, panic-inducing, or dangerous. Do you agree with that?
Speaker 2:
[28:47] I do agree with that, yes. I mean, I would be even more specific. I mean, I don't like to make predictions as a journalist, but I do think there's a good chance that a reasonably sized company will utterly implode because of their use of AI agents. That seems to be pretty much a given that that will happen sometime in the near future. And then-
Speaker 1:
[29:09] How would that play out?
Speaker 2:
[29:10] You start giving AI agents certain responsibilities over systems. Let's say it's customer support. You could see it even in this article about this guy: it mentioned offhandedly that his agents hallucinated prices for the GLP-1 drugs, and then he just honored the prices. But imagine that for a company whose agents start promising something that it can't sustain, for instance. Or, I think the most obvious example is just agents that are given access to internal systems and then either leak information or are socially engineered, hacked for ransomware or whatever. You're going to see some company basically held hostage because of its use of AI agents in the near future, because a lot of responsibility is being laid on these things, which are incredibly powerful but also have serious problems, both with telling the truth a lot of the time and with being vulnerable to manipulation in a way that I think people aren't fully thinking through.
Speaker 1:
[30:16] And I cut you off. You had a second thought about the AI catastrophe that may happen this year as well.
Speaker 2:
[30:20] Oh, I think you'll also see, increasingly, agents being utilized for government functions, whether it's military, obviously, but even civil society kinds of things, like in the legal system: judges, lawyers. I think those aren't necessarily going to be that sort of grand catastrophe, but at a low level there's just a large amount of errors that are going to be filtering into society. Now, granted, humans make errors, so I'm not saying it couldn't be an improvement in some venues. But I just think there are going to be a lot of cases where it's suddenly going to surface that, oh, they were using AI agents for this particular purpose, and that's why this thing happened. Fortunately, there will always be a human there to be blamed.
Speaker 1:
[31:16] Evan, thank you so much for joining us today.
Speaker 2:
[31:17] My pleasure.
Speaker 1:
[31:29] For Tech Stuff, I'm Oz Woloshyn. This episode was produced by Eliza Dennis and Melissa Slaughter. It was executive produced by me, Julia Nutter, and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts. Jack Insley mixed this episode, and Kyle Murdoch wrote our theme song. And please do rate and review the podcast wherever you listen.