title How AI Impacts Our Lives These Days

description It's time for an AI check-in! Maddy, Kirk, and Jason catch up on how they're feeling about generative AI in 2026, what it's doing to creativity, how it's impacting the job market, and of course, how it's affecting video games. 

One More Thing:

Kirk: Dungeon Crawler Carl (Matt Dinniman)

Maddy: Slay the Princess

Jason: American Pastoral (Philip Roth)

LINKS:

Adam Neely’s video “Suno, AI Music, and the bad future”

“AI As Normal Technology” by Arvind Narayanan and Sayash Kapoor  

Astrophysicist Minas Karamanis’ blog post “The machines are fine. I’m worried about us”

“Sam Altman May Control Our Future—Can He Be Trusted?” by Ronan Farrow and Andrew Marantz for The New Yorker

“Fabienk” and “Sarniezz” by Angine de Poitrine from Vol. II, 2026

Watch Angine de Poitrine live on KEXP

The NYT reviews Dungeon Crawler Carl

Happy MaxFunDrive! Right now is the best time to start a membership to support your favorite shows. Learn more and join at https://maximumfun.org/jointripleclick
 
🚀 SUPPORT TRIPLE CLICK: Join Maximum Fun | Buy TC Merch
💬 JOIN THE TRIPLE CLICK DISCORD
🎮 Triple Click Ethics Policy
📱 SOCIALS | @tripleclickpod | Instagram | YouTube | TikTok | Twitch

pubDate Thu, 16 Apr 2026 04:00:00 GMT

author Kirk Hamilton, Jason Schreier, Maddy Myers

duration 5034000

transcript

Speaker 1:
[00:04] It often feels like AI raises more questions than it answers, and its answers still contain a lot of mistakes. But hey, let's talk about those questions. Welcome to Triple Click, where we bring the games to you. This week, we are talking about AI, its uses in game development, in art, and in society more broadly, and just how we're feeling about the technology these days. Let's get into it. I'm Kirk Hamilton.

Speaker 2:
[00:32] I'm Maddy Myers.

Speaker 3:
[00:33] And I'm Jason Schreier. Hello.

Speaker 2:
[00:35] Hello.

Speaker 1:
[00:36] Hello, my friends.

Speaker 2:
[00:37] We're here.

Speaker 1:
[00:38] So nice to see you both.

Speaker 3:
[00:39] How are you two doing on this fine spring day?

Speaker 2:
[00:43] Relieved it's spring.

Speaker 3:
[00:45] Did you guys pay your taxes?

Speaker 2:
[00:46] You know it.

Speaker 1:
[00:48] Oh, yeah. Oh, yeah. I paid my taxes a while ago. But there's always something to do right up until the last minute. It was like 99.9% done, and then I had to sign some form or something, some DocuSign or other.

Speaker 3:
[01:00] But yes, good.

Speaker 2:
[01:02] Are we all feeling good?

Speaker 1:
[01:03] Jason, have you done your taxes? You asked us, but you didn't actually say if you've done your taxes.

Speaker 3:
[01:07] I have. Well, I have. On April 14th, and I guess on the 15th also, I like watched the money disappear from my bank account. And for us, for all three of us, I believe, we have to pay our normal taxes for the year, and then also estimated taxes for this quarter. So it's just a double whammy.

Speaker 2:
[01:24] Yep, that's how it works.

Speaker 1:
[01:26] As I mentioned last week, actually, I got a bit of a refund because I overpaid my taxes.

Speaker 3:
[01:31] Good job, good job. Overpaying.

Speaker 2:
[01:33] I'm so jealous. I can't believe you did that.

Speaker 1:
[01:36] The thing is really.

Speaker 2:
[01:36] I've never estimated it properly. I can't figure it out.

Speaker 1:
[01:41] You both congratulated me last week because you said that I paid my taxes well, but actually I paid my taxes poorly. You should not get a refund. It means that I actually did a bad job.

Speaker 3:
[01:51] You have chosen poorly.

Speaker 2:
[01:53] One of these years, I'll hit it perfectly dead on. Although last year, I would say, employment-wise, didn't go the way I thought it was going to go. So I think I'm going to cut myself some slack on that one. My estimates were off.

Speaker 1:
[02:03] If any listeners out there have gotten a tax refund and are looking for something nice to do with that refund, you could consider joining our network and supporting our show. Triple Click is, of course, a member of the Maximum Fun Podcast Network, a network that we love to be a part of. We have been on this network since we launched, and Maximum Fun is truly a wonderful place on the Internet, one of the few remaining good places on an Internet that is increasingly degraded and bad. So we are very happy to be part of it, and if you would consider supporting it, that would be great. You can go to maximumfun.org/join. That's how you become a member. However, there is a new link in our show notes that they have engineered at Maximum Fun. It's a quick sign up link, so that'll be next to the Maximum Fun sign up. Because when you sign up for Maximum Fun, there's a bunch of stuff you can do. You can support a bunch of shows. You can also just click the quick sign up link, and it'll just take you straight to signing up to support Triple Click. You'll join at the $5 tier. You'll start getting our bonus episodes. It's super quick and easy. So if you just kind of want to knock it out and not deal with signing up for the network, you can do that. And it's an easier way of doing things. So look for that link down in the show notes. And you may have heard me mention bonus episodes. And that's because we make bonus episodes of this show for members. We make one every month, and we have, since we launched, we're coming up on 70 of them. The most recent one is about Resident Evil Requiem. I almost said Revelations, but that's an older one. Though I think that we mentioned that maybe a little bit.

Speaker 2:
[03:33] That is also a Resident Evil game.

Speaker 3:
[03:35] That was the one that was misspelled on the box art, and it was like Revelations or something.

Speaker 1:
[03:39] I remember that, yes. Not a great game, but not a bad game. But no, this was about Resident Evil Requiem, the most recent Resident Evil game, and just about Resident Evil in general. This is a super fun episode. We're actually going to drop a little teaser of that in the feed next week for everyone, if you want to stay tuned and see what you're missing. And this month, we are returning to The Sopranos. We're going to record a big, supersized Sopranos episode about seasons two and three. It's going to be great. We've already recorded our season two discussion, and we're going to add season three to it as we finish it. So that'll be our supersized bonus episode for this month. There is all kinds of stuff though, that you can get if you become a member. So once more, that's maximumfun.org/join. Become a member, support Triple Click, and thanks so much to everyone who is a member.

Speaker 3:
[04:27] Although if you're thinking about becoming a member, maybe wait a week until Max Fun Drive, and you'll get some cool stuff if you become a member next week instead.

Speaker 1:
[04:34] That's a good point. Maximum Fun Drive, we're already running promos on our show for that. So if you are paying close attention, you know that Maximum Fun Drive, our yearly pledge drive is kicking off next week, and there's always unique bonuses for that. So each year you can get some special thing. So yeah, maybe I guess, wait a week. I mean, you can sign up this week. We'll take your support.

Speaker 3:
[04:54] You can. We won't be mad if you do.

Speaker 1:
[04:57] But we also won't be mad if you don't.

Speaker 2:
[04:59] We won't be mad either way. It would be weird if we were mad.

Speaker 3:
[05:01] Yeah, we don't really get mad about how people choose to.

Speaker 4:
[05:04] Yeah.

Speaker 2:
[05:05] If you choose not to sign up, we're not mad, just disappointed.

Speaker 1:
[05:08] I would say Triple Click is just not really a source of anger in my life in any way. Which is a very nice thing about it.

Speaker 3:
[05:13] It's really, it's like not a source of stress, not a source of anger. It's just kind of a chill thing that we do. It's pretty nice.

Speaker 1:
[05:19] Speaking of things that are not a source of stress or anger, Maddy, what are we talking about this week?

Speaker 2:
[05:25] I don't know if I agree with that particular segue, but listeners, you can make up your own minds. Today, we are talking about AI and its impact on game development and also on human creativity. And part of why we're talking about this, in addition to, I think, all of us thinking about it all the time, is that we have a listener named Matt who wrote in, who is an AI researcher at a major tech corporation, and asked us to discuss it. And I don't think Matt's the only one who has questions about where our heads are at on the topic now, because it changes all the time. I think the last time we talked about AI was a while ago, and it's changing all the time. And we have all been reading a lot of articles about it and watching some cool videos about it, which I'm sure we're going to discuss today. One of them is a really cool video from the musician Adam Neely about Suno, which is an AI music-generation app, and it's about human creativity in general. And that's something that I think is relevant to all of us and also the game designers who listen. Even if they might not imagine so, they'll probably be able to really relate to some of the things that Adam Neely brings up in his video and that we might get into today, when it comes to how AI impacts human creativity and motivation. And in my case, I'm really worried about de-skilling and the idea of AI kind of making people complacent when it comes to creation. So what do we think about this in general? Whether it's a response to Adam's video or AI de-skilling in general and human motivation?

Speaker 3:
[07:15] Yeah, I have a lot of thoughts on AI, especially over the last couple of years, as it's really become a ubiquitous part of our lives. Most of those thoughts are pretty negative. I feel like it's made the Internet significantly worse. I can't go on LinkedIn without seeing a post that is just clearly written by AI, has all of the tells. You can't search on Google without getting the AI box on top, and that is almost, not almost always, but very often incorrect. It's spread misinformation about our show and about me and about many, many other things. And you just see it kind of shoved down your throat on everything from social media to shopping websites. You can't even do a normal search on a lot of clothes websites these days without getting the AI results. It has really, I think, made the Internet significantly worse. It's also becoming more and more ubiquitous, to the point where it's impossible to not use it. And I imagine, certainly all of us, but I imagine most people have tried to figure out, hey, is this a tool that is going to be useful for me in some way or another? And I played around with it. I found the occasional use case. I actually just used it for this: you know how you can Google discount codes when you're shopping for clothes online? And you'll get those sketchy websites that make you click to reveal the code and stuff like that. And like half of them don't work, or more than half of them don't work. And you're worried it's gonna get you malware or something if you click the wrong way. So I went to Claude, and I was like, hey, can you get me a discount code for this clothes website where I'm buying shirts? And it just pulled one up for me. It was like, this should probably work, and I plugged it in and it worked immediately. That's the sort of thing where I'm like, yeah, this could be cool.
But for the most part, I feel like it's been just kind of a negative presence, and I'm not sure what we can do about that.

Speaker 2:
[09:16] Well, we're going to solve it today, so don't worry.

Speaker 3:
[09:19] Is this Triple Click solves the problems that AI has brought to the table?

Speaker 2:
[09:25] Everyone's been waiting for us to do it.

Speaker 3:
[09:26] Yeah, right. That's what we're here for.

Speaker 2:
[09:28] Kirk, what do you think?

Speaker 1:
[09:29] Yeah, it feels like we're past a turning point, or we're in the middle of a turning point, with AI right now in a number of different ways. Like, the capabilities of AI have reached a sort of new level, I think. And I get the sense that a lot of people are playing around with it. I don't know how, but like everybody switched over to Claude recently. I started playing with Claude as well to see what it could do.

Speaker 3:
[09:51] It's a lot better than ChatGPT.

Speaker 1:
[09:52] It's just not as annoying and obsequious as ChatGPT, which is a huge improvement, at least. I don't know if it's more accurate.

Speaker 3:
[09:58] That's one of the reasons it's so much better.

Speaker 1:
[10:00] I know Claude Code is impressive, though I haven't used that.

Speaker 3:
[10:03] We'll get into that. We'll talk about that a little bit later.

Speaker 1:
[10:06] Right. But I know that basically having an agentic AI, which is the term for an AI that can do things on your computer, is the new standard or the new thing that people are exploring. I get the sense that Claude Code, being an AI that lives inside of the terminal on your computer, can just do stuff in the background, and some people are finding really good uses of that. I'm not, because I haven't experimented with it, but I just get the sense the capability of the AI has improved. At the same time, I also get the sense that the public sentiment around it has really hardened and crystallized into something that is pervasively negative. There are a lot of polls now showing that the majority of people don't like AI, and don't think that it's helping, and don't think it's going to make the world better. Then you're seeing stuff like Mythos, the new model from Anthropic, which Anthropic basically said, we're not going to release this because it is so capable at exposing software vulnerabilities and security vulnerabilities that it is not safe for it to be in the world. Now we know that exists, but it's not out, and that gives you this feeling of, okay, maybe we did just cross some new threshold where the AI is really dangerous, to the point where one of the companies that makes it didn't release it. We also had this conflict between Anthropic and the Defense Department, or the Department of War, if you want to call it that. The Defense Department. This is like calling Twitter X. I still want to call it Twitter. So we had that conflict, and then OpenAI turning around and taking this defense contract. We have this huge new profile in The New Yorker, which I actually started reading without realizing it was a huge profile and then read the whole thing. This is a very in-depth and thoroughly reported piece that we'll link in the show notes about Sam Altman, basically wondering, is this guy a sociopath? I don't think that's really what it's wondering, but it kind of is.

Speaker 2:
[12:02] Well, some of their sources openly wonder that. That could very well be a direct quote.

Speaker 1:
[12:06] This is a word that people have used to describe Sam Altman.

Speaker 3:
[12:09] It's by Ronan Farrow and Andrew Marantz. Ronan Farrow, of course, being of Harvey Weinstein fame.

Speaker 2:
[12:15] Also a gamer.

Speaker 3:
[12:17] We should highlight that. Also a gamer, yeah.

Speaker 1:
[12:18] True, also a gamer. That's right, we didn't know that. So yes, and known for his extremely in-depth reporting. This is a piece that seems as though it was reported over a very long period of time. A really interesting read that does paint Sam Altman as someone who tells everyone what they want to hear, and is very two-faced or five-faced, and a very manipulative person, at least according to some accounts of him, which is then very interesting given that he has engineered an AI that always tells you what you want to hear. So anyways, I could go on and on and on, but it just feels like a lot has changed and we're in kind of a new paradigm in a lot of different ways. And so I've just found myself sitting in that and looking at it and thinking about it.

Speaker 2:
[13:03] Yeah, me too. Something that I've ended up really getting into reading about is how complicated the term AI has become. So we're kind of colloquially using it today, which I think is fine, to sort of refer to LLMs and generative AI, for now anyway. But I think we're probably also just going to naturally, colloquially use it to refer to machine learning, on which some of these LLMs and other gen AI systems are built, but which has been around for longer and which is, at least to me, contributing to some of the things about AI that I think we're all more used to. So when I was learning about what machine learning is specifically, an example that a lot of people use is fraud detection from your bank, which I think we can all agree is a great use of AI and machine learning, right? It looks at all of your transactions and it tells you, oh, this one is aberrant in some way, and you should be aware of it. And some of these uses of AI are sort of built on that and turning it into something that, as you described it, Kirk, is obsequious and talks back to you. That's where the LLM piece comes in. It builds upon these machine learning models that are taking data and analyzing it. And then it's adding on to that by having the capacity to respond to you and have a conversation with you. And that's really interesting to me, because I'm really curious how this is going to turn out, in the sense that I wonder if some of the vestiges of AI as we see it now will prevail and will continue to be used in ways that I think are helpful. I mean, this is me being really optimistic, uncharacteristically optimistic perhaps. And whether some of the things that people kind of have a disgust impulse and reaction to, and certainly I do, will maybe fall away.
I would love to see that, and to instead see a future where we can have conversations about things like AI upscaling in games and productive uses of machine learning to help medical fields and other pattern recognition types of things. I wish we could get there with the conversation, but it feels like we can't, because so much of the conversation revolves around these sort of direct-to-consumer tools like ChatGPT and Claude.
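[Editor's note: the bank fraud-detection idea described above can be sketched in a few lines. This is a toy illustration with made-up numbers, not a real banking system; a simple z-score threshold stands in for the learned model a bank would actually use, and the function name is just illustrative.]

```python
# Toy sketch of the "aberrant transaction" check: flag a new charge
# when it sits far outside the account's usual spending pattern.
# Real fraud detection uses trained models over many features;
# a z-score on the amount alone shows the core idea.

def flag_aberrant(history, new_amount, threshold=3.0):
    """Return True if new_amount is more than `threshold` standard
    deviations away from the mean of past transaction amounts."""
    n = len(history)
    mean = sum(history) / n
    variance = sum((x - mean) ** 2 for x in history) / n
    std = variance ** 0.5 or 1.0  # guard against zero spread
    return abs(new_amount - mean) / std > threshold

usual = [12.50, 40.00, 23.75, 18.20, 31.00, 27.40]  # made-up history
print(flag_aberrant(usual, 25.00))   # a typical purchase: not flagged
print(flag_aberrant(usual, 950.00))  # way outside the pattern: flagged
```

The LLM layer Maddy describes would sit on top of a check like this, turning the raw flag into a conversational alert rather than doing the detection itself.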

Speaker 1:
[15:30] Yeah. I think about how, I was listening to Cory Doctorow talk about this a little while ago, and he was using the term AI as normal technology, which is the name of a paper that was published about this, and it's kind of a good way of thinking about a little bit of what you're talking about, Maddy, that AI could just be a tool. The example that he cited is, you're making a movie and there's a shot where all the extras are facing the wrong way, and the director realizes, oh, we wish they were facing left because the thing they're looking at is supposed to be to the left, and now someone in post-production can use an AI tool that just turns everybody's face to the left, and that's just a normal tool. That's just a tool that would be in Photoshop or whatever, in some image software, and that's kind of a normal use of it. Something that I use that has machine learning, for example, is I run a stem splitter in order to extract parts on recordings when I'm making Strong Songs. I just started using this a couple years ago. It's really cool. It can pull out the piano part and the bass part. This is a machine learning algorithm that can look at a WAV file and figure out where the different instruments are and then give you isolated stems. That's very useful. That's just a tool. And then there's this kind of next layer on top of it, the LLM part you're talking about, the part that talks to you, that acts as though it has a personality. And in some ways I almost feel like that part is like an engagement hack placed on top of a tool.
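[Editor's note: for a rough sense of the mechanics behind the stem splitter Kirk describes: typical systems use a neural network to estimate, for each time-frequency bin of a spectrogram, how much energy belongs to each instrument, then apply that estimate as a soft mask to the mixture. The sketch below is a toy, assuming hand-built "oracle" spectra in place of the network's predictions, just to show the masking step.]

```python
import numpy as np

def soft_mask_separate(mix_spec, source_specs):
    """Given a mixture magnitude spectrogram and per-source magnitude
    estimates, return separated spectrograms via Wiener-style soft masks:
    each source gets the fraction of the mixture it accounts for."""
    total = sum(source_specs) + 1e-10  # avoid divide-by-zero
    return [mix_spec * (s / total) for s in source_specs]

# Two fake "instruments" on an 8-bin x 5-frame spectrogram:
# one occupying the low-frequency bins, one the high-frequency bins.
freqs = np.arange(8)
bass  = np.where(freqs < 4, 1.0, 0.0)[:, None] * np.ones((8, 5))
piano = np.where(freqs >= 4, 1.0, 0.0)[:, None] * np.ones((8, 5))
mix = bass + piano

stems = soft_mask_separate(mix, [bass, piano])
# Each recovered stem matches its source, since these toy sources
# don't overlap in frequency; real instruments do, which is why a
# trained model is needed to estimate the masks.
print(np.allclose(stems[0], bass), np.allclose(stems[1], piano))
```

A real splitter would also invert the masked spectrograms back to audio; this sketch stops at the spectrogram stage.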

Speaker 2:
[16:59] Or a series of tools even. Like a series of different tools at times.

Speaker 1:
[17:03] I'm speaking broadly by just saying a tool, like something that is just like practically useful. And then suddenly it's talking to you and it's forming a relationship with you and it's kind of manipulating you in certain ways. And that's the part where I don't, like I don't know what to do with that. Like why do they all need to talk to me? You know, all of these different tools that I'm finding otherwise useful.

Speaker 3:
[17:23] It's interesting. I don't think that's the part that is controversial. The chat bot aspect of it all isn't what is driving people mad. It's the part where it can be used not just to turn everybody's head to the left, but also to create an entire five-minute video. It's the part where you can extrapolate that tool and use it to write an entire novel, to the point where Hachette has to pull it from stores because it turns out it was written by AI. I think that's what's a little scary about the tool, just how far you can take it. It's as if Photoshop had come out and you could just tell Photoshop, hey, make me a bunch of photos, instead of saying, I'm gonna use Photoshop to brush up the photos I've already taken. The chat bot, that to me seems inoffensive and just kind of a cursory part of this whole experience that we're going through.

Speaker 1:
[18:12] So here's a question to throw into that because I think that's right. So then is the issue the way that we're using the technology more than the technology itself?

Speaker 3:
[18:22] Well, I think there are a couple of issues that people generally have. One is one that maybe we won't really get into because it's so messy, but essentially the data set, and the fact that everything that it's creating is taken from the amalgamation of human art and creativity and novels. I'm part of a class action against Anthropic because they took two of my books and fed them to Claude. Maybe that's why Claude is so smart, because he used my books. And so I'm getting money because of that, and I have a lot of strong and complicated feelings about that fact. So that's one part of it. But the other part is that when it feels like a work of art, or even a fucking LinkedIn post, is being created by a machine and the human involved couldn't be bothered to do it, then as a human you feel cheated. You're like, if a human couldn't be bothered to make any of this, then why would I bother looking at it? Why would I bother reading it? Why would I bother engaging with it? That's, I think, the main thing. If you're playing a game and you know that a bunch of the art in the background was created by AI, even if it's not that big a deal in the grand scheme of things, you're kind of like, man, they couldn't be bothered to do this, why should I be bothered to engage with this? It feels off, it feels gross, you get the ick a little bit.

Speaker 1:
[19:43] Yeah, I think that's a really interesting part of this, like the reaction that we have to AI art or that people are realizing that they have, because it's been in the world long enough now that I think, like I have, I at least have a stronger sense of how I feel about it and how most people do. And yeah, the training data part is like a whole separate conversation, and the fact that it's all fruit from a poison tree, like that it was all done by this kind of this theft that then led to all of this, I guess, is an incredibly difficult thing for me, at least, to get my head around and deal with.

Speaker 3:
[20:16] Intractable.

Speaker 1:
[20:18] It's intractable and incredibly difficult, almost unsolvable. And we're kind of taking it on its own terms because it feels like we have no choice.

Speaker 2:
[20:25] And it's also part of the marketing of AI is this idea that we have no choice but to accept it.

Speaker 1:
[20:31] Yeah, yes. And I want to acknowledge that, because I feel a lot of resentment around that, around the fact that I just have to accept it. That same feeling, that feeling of resentment, is what so many people feel around AI. Every time it's crammed down your throat and people constantly talk about it: AI is being crammed down my throat. It's being forced into things that don't need it. It's that same feeling of these people telling you, you need this, you have this. Sorry, it already happened. We stole everything. Now you just got to talk about it. Now you just got to live with it. And that is a bad feeling. And I think it's worth recognizing that feeling as something that's just in the air surrounding this whole conversation.

Speaker 3:
[21:07] Well, can I just chime in on that feeling real quick? This is something that I have kind of, I don't know, I have some unique feelings about this, I think, because I started experiencing that feeling, I don't know, 15 years ago, when I realized that YouTubers would just make videos where they just read my articles out loud and get millions of views. And meanwhile, my articles are getting a fraction of that kind of attention. And these YouTubers are making God knows how much money just reading things. And then it started becoming a social media thing, where these hack accounts, like Dexerto, just copy-paste what I say and turn it into images and just spread them around the internet. And so to me, it feels like this is just another evolution in that same kind of pipeline of my work being stolen and just used by other people to make money without crediting me or giving me proper due, proper financial due, proper credit, whatever it is. So I guess I've become a little bit more numb to that. At least Anthropic is giving me some money. I haven't gotten a dime out of all those YouTube clickbaiters.

Speaker 2:
[22:11] And you probably never will.

Speaker 3:
[22:13] They've stolen my stuff for years. So I mean, that's just the internet ecosystem. And I wish there was a fraction of the outrage there is for the AI datasets, just a little bit of that outrage, for the way that the internet content machine has worked over the years.

Speaker 1:
[22:29] I think that's helpful. It's helpful for me in understanding what AI really is, because AI is an expression of the same incentives and forces that gave us the algorithms and YouTube and the social internet of the 2010s. AI is just taking that to an even further extreme, or maybe an even more logical conclusion, or something. The same incentives that made a YouTuber think, oh yeah, I can just steal Jason's article, record myself saying it, throw it on YouTube, and then that algorithm will reward me, those are the same incentives that told an AI company, oh, I'll just steal all these books, load it up, turn it into a product, and then generate a bunch of investment capital. It's the same way of thinking, because it's all Silicon Valley. It's all the same companies and the same people who think the same way.

Speaker 3:
[23:16] And the lack of disincentives. There's no punishment. It doesn't matter. Like in Anthropic's case, they had to pay a billion dollars in a class action lawsuit, which is nothing compared to how much they're able to raise every single year.

Speaker 1:
[23:30] So I have this thought about aesthetics, but Maddy, I'm curious if you have any thoughts on this, like about this sort of this part of it.

Speaker 2:
[23:36] Yeah, I mean, I'm inclined to agree with Jason, in the sense that I also have felt a lot of anger at Facebook and Google these past couple decades of my journalistic career, just seeing the way that they have also essentially stolen all of our work and reposted it in a variety of formats. There was actually a time period when some journalistic institutions tried to sue Google News because of the way that it abbreviates articles and shares them in a newsfeed that, I'll admit, I also read, and disincentivizes people from clicking through. And that lawsuit failed, and it's something that I think about a lot in all of this. I think that, and the fact that Facebook described itself, and Zuckerberg describes it, as not a journalistic institution, all of those things have kind of been chipping away at whatever possibility we had of a different outcome here. So like Jason, I also feel very jaded by this point about the idea of all of my work and all of our work being stolen, because I feel like it already has been. By even agreeing to the social contract of publishing on the Internet, it no longer belongs to us. Even though we talked last week, and so many times before, about how important it is to us that we own Triple Click, for example, we already also don't, in the sense that you could go ask ChatGPT or Claude or whoever tomorrow to make you a Triple Click episode. You could use tools to imitate our voices; there are hundreds of hours of us out there. Please don't do any of this, but you could in theory do these things and get, in my opinion, a pretty crappy Triple Click episode. But something that I think we have to talk about is the fact that it is getting better, and eventually it could conceivably create a pretty good Triple Click episode, a pretty good series of articles, and that also upsets me fundamentally, but it's something that I have had to really reckon with as all of these tools have improved over time.
How do I feel about that? Just the fact that it might eventually be so good that I can't tell the difference anymore.

Speaker 1:
[25:45] Yeah, I've been thinking about that as well. I'm not totally convinced. I think this relates to a broader feeling that I have around AI, which is that any guesses we have about where things are going to go, that the art could become good, for example, or that the AI could do something genuinely helpful like curing cancer or Alzheimer's or something. Like these promises that are constantly brought up whenever we talk about AI, the future looking part of the conversation. I'm trying to hold all of that separately from myself, I guess, and not spend any or too much time accepting it as true. Because that's possible, I guess, that AI will become better at making art that is beautiful or convincing in some way. But I want to go back, actually, it's helpful to go back to something Jason was talking about, which is not only do you, I think a lot of people feel that resentment around just being confronted with AI, because so much of it has happened in ways that a lot of people personally don't agree with, and it just feels like it's being shoved down your throat. Also, people disengage from AI art. Like I've definitely anecdotally heard stories of a lot of people I know whose kids are just like, the minute they know something is AI, they just totally disengage and they don't care, because they're like, well, why would I care? That's just like fake basically. I think I certainly feel that way when I hear an AI song. So many people will send me an AI song and say, Kirk, how do they do it? It sounds so real, and I listen and I'm just like two seconds in. I'm not into it. And even if you could fool me, which there are plenty of videos on YouTube of professional producer fooled by AI track, even then, I don't think that that actually means that the track has the same inherent artistic qualities as something that was made by real people. 
And I think there's like an aesthetic conversation to be had about what AI is used for, and what kind of art AI ultimately is even capable of, and ultimately what it even is, like what it means for an art-like thing to be made by an algorithm. In terms of things that are made by AI, AI art is degraded, like it is a fundamentally degraded kind of art. If you look at like the visual aesthetic of AI art, it is, I think, not a coincidence that that aesthetic is also the visual aesthetic of the Trump administration. It is fundamentally a troll aesthetic. It's the aesthetic of like an administration that seeks to degrade and remove beauty and like individual power from the world and impose its will on everyone. Like there was just this big controversy about Trump posting this AI image of himself as Jesus. But if you look at the picture, it's like quintessentially Trumpian, and it's also quintessentially AI. And I don't think that's a coincidence. Like I think that the way, like AI lends itself to this use as like a force of degradation. And that just makes me wonder if no matter how advanced it may get in the future, if it is actually something that can make beautiful art on its own. Like I'm not convinced that it's possible for that to happen. So when I think about the models getting better and making more convincing art, that even sounds like something that was real or looks or reads like real art. I'm also not sure that that's possible just because of what AI is.

Speaker 3:
[29:15] Yeah, I think we should draw a distinction here between that aesthetic being used for something whose purpose is to be a meme or a joke or a kind of provocative image, versus something that is meant to be engaged with in a more meaningful way. If I was like, I'm going to pull a prank on Kirk by recording a song that is just me making farting noises or something like that, it wouldn't really matter whether I recorded it myself or used AI to do it, because the purpose of that is not to do anything interesting. The purpose of that is to just play a prank or do something stupid. In the same way, with Trump posting a picture of himself as Jesus, would it really matter if that was just a guy on his staff using Photoshop for a couple of hours versus AI doing it? Either way, it would serve the same kind of purpose. Whereas if someone sends you a music track that was created entirely by AI, that seems to me to be a different bucket, to use our trusty old metaphor, than the intentionally provocative type. You could apply that aesthetic also to the writing style, which is like, it's not just a metaphor, it's a simile, those sorts of writing tics, right? And so I complained about LinkedIn posts before, but when I look at a LinkedIn post, that is not meant to be engaged with in the first place. That is meant to go viral and get you a bunch of new followers on LinkedIn. So at the end of the day, does it really matter?

Speaker 2:
[30:53] Which I guess is engagement of a kind, but you mean like true emotional engagement.

Speaker 3:
[30:56] Sure, but that is not meant to be engaged with artistically. And that to me, I see a difference between using that kind of terrible aesthetic for your social media nonsense versus for a book that you are trying to sell on store shelves and get people to engage with on a meaningful artistic level. And as time goes on, I think people might become inured to that aesthetic, whether it's the look of it, the AI look, or whether it's the writing. I think they'll become inured to it when it's done in that provocative, memey way, and that'll just become commonplace, whereas when it's used for things that we are actually meant to take seriously, I think that's just never going to be accepted. Just to give you an example: in my high school friend group chat, people I've known and fucked around with for 20, 25 years, we used to just make fun of each other. Sometimes someone would do a Photoshop or something. Now we use elaborate AI images, too obscene to mention on this show, to make fun of each other with. To me, who cares if we're using it for that purpose? I think that's the sort of thing that might become more acceptable, and it would be more of that aesthetic. And then, Kirk, maybe what is really getting at you is the fact that this is the president of the United States using this stuff. He is essentially acting like a meme poster on 4chan. But when it's such a lowbrow thing, to me it almost doesn't matter whether a White House intern spent two hours Photoshopping Trump as Jesus or whether they just typed it into Midjourney or whatever.

Speaker 2:
[32:41] I mean, maybe, but there is something else to what Kirk is getting at in terms of AI art looking a specific way that I think is also relevant to games, to bring it back to the beginning of this conversation, which is that anytime you're creating an AI image, you're kind of taking the average in a certain respect of a really large data set of other images. And the result has this certain uncanny smoothed out appearance to it that it's, I don't know why I'm defending artisanal Photoshop. I agree with you, Jason. It doesn't matter for the purposes of your group chat per se. But I do think there is something about the specific aesthetic of the way an AI enhanced or AI created image can look that is definitional of a specific time period that we're in now, whether we like it or not, even if only because it's associated with maybe the kinds of shirts people are wearing. It's a trend that is happening right now, and a visual medium that is happening right now, so we're associating it with the present time. It also happens to be a collective theft that is producing this type of art. There's that piece, but also just the fact that it always looks a certain way. Can that be changed? I think part of what comforts me, I guess, is the fact that I don't think it ever can be because it can never create something original. It can only ever be creating something based on a pre-existing data set.

Speaker 3:
[34:12] Yeah, it's useful. It's useful to be able to see that aesthetic and just immediately dismiss it as AI garbage, and you can just be like, this is irrelevant as a piece of work, and hopefully that will remain the case for writing and art. It sounds like in music, it can already be difficult to tell. But then when it comes to game development, which we should at least touch on a little bit, I think AI has become ubiquitous at game companies, just not in the way that people typically think. We're not seeing that AI aesthetic. Occasionally we'll see it as background art in games, like, I don't know, Crimson Desert was the most recent one, Clair Obscur had AI background art. Occasionally it's getting used. But most of the time, I think a lot of these companies, which includes pretty much every game company of a certain size, have played around with AI tools in some way or another: for backend work, for coding. Claude Code has been a very popular tool among programmers and coders of all stripes. It's being used for coming up with early documentation. It's being used for tools, for testing tools that let you look for patterns and say, hey, I'm looking at this massive script or something like that, and I want to see if there are any links that aren't connected, I want to see if there's anything that isn't quite right, and you can save time that way. It's being used for a lot of different things among game companies. I've heard a bunch of different use cases that I felt were interesting that are not the types of AI features you would wind up seeing in the final product. It's all just kind of back-end work, and I think it'll be interesting to see how many companies can actually come out and say, we used no gen AI in the making of this thing, because I think that number is dwindling by the day.

Speaker 1:
[36:17] Yeah, I'm remembering Brett Duville, a friend of the show, who wrote in to share his thoughts as a software engineer on just how AI coding works. This was a little while back, related to a question whose answer we amended in a later episode, about how AI code works: it's basically still an LLM, it's still generating the code based on other code. It isn't thinking of it in some different way. It was a helpful answer because he talked some about that kind of de-skilling, to use the term you talked about at the beginning, Maddy, and how a software engineer is a problem solver who uses code as one tool in their toolkit to solve problems, but that once you start to remove the ability to understand code from the engineer, the engineer becomes arguably less useful. And that question in the workplace, I almost don't understand it as a question, and I certainly have no answers about it. But it is definitely something that people are going to be wrestling with, because AI and some of these tools, specifically not the LLM part but the tool part, allow you to do things that you just couldn't do before, and also allow you to shortcut some processes that might have taken a long time. One thought I have related to workplaces incorporating AI: it seems to me that if you're starting something new and you understand these tools very well, it is probably easier to build a workforce that incorporates some of these tools in an effective way that allows you to be efficient and for work to be a slightly different thing, and that it's probably a lot harder to incorporate these tools into an existing workforce. And it also seems to me that the games industry has been so unstable and so unsustainable for so long. It arguably seems to be in the middle of a slow-motion crash at this point. Like so many people have been laid off.
It's this horrifying disaster that's just not happening all at once the way the first video game crash in the '80s happened. It still kind of feels like that's happening. There are so many different factors at play, and AI is just one of them that's causing this kind of house of cards to fall over. And it just seems like if you have a huge organization with a ton of workers, and you're introducing a number of different factors, economic, cultural, etc., and also this technological factor of AI, it's just one more destabilizing factor in an already destabilized industry.

Speaker 3:
[38:44] The thing that's really scary, to the point about it leaving people unskilled, is that it is having the greatest impact on entry-level jobs. I was talking to a friend of mine who's an engineer, and he was telling me that an AI tool is way more useful than a junior-level programmer, for a variety of reasons. And it's only after a few years that you can wind up getting enough skills and developing your coding ability to the point where you're better than the AI tool, or you know things that the AI tool wouldn't. But right now, if an AI tool is better than an entry-level programmer, and you're a corporation looking to cut costs as much as possible, why are you hiring entry-level programmers? Especially if you're incentivized in the short term to just juice your fiscal quarters, and it doesn't matter to you what your company looks like five years from now. So that's a little bit of a scary thought, just the elimination of entry-level jobs. And it's not just in coding, it's in a lot of fields. I think a lot of people out there right now are probably nodding along, because a lot of companies in a lot of different industries have been reckoning with this fact, the fact that AI can do the job of an entry-level worker better than the entry-level worker can. And you really have to be willing as a company to invest in your future and say, hey, it is worth it for me to take that productivity hit from hiring someone new, because in the long run it'll be good to have new people, and you'll eventually lose all your seniors and will need seniors around to supervise the AI agents, et cetera, et cetera. But yeah, that to me is the biggest disruption that this is going to cause. I don't think it's going to put people with 15 years of experience out of work. I think it's going to hit new job applicants. It's going to prevent them from even getting the positions in the first place.

Speaker 2:
[40:27] Yeah, and it also prevents the creation of those people in the first place, because in order to become the person with 15 years of experience coding, you need to have first been the entry-level person who's being replaced. We've talked broadly about de-skilling, but in a more direct way, I worry about the person who's learning a little bit about how to code right now and is like, well, why would I bother to learn all of these complicated aspects of it when I can just use AI to shortcut these various things? But then you'd probably hit a plateau, as you would with almost anything else to do with AI, where it's helping you, but you aren't actually learning practical skills to the degree that you could write your own code independently. To bring in another example, this astrophysicist, Minas Karamanis, wrote this blog post called "The Machines Are Fine. I'm Worried About Us," about how AI has impacted astrophysics research. He imagines these two characters in the blog post: one who uses AI to advance their entire career and eventually gets a tenure-track position doing that, and one who does it the hard way but becomes, in his mind, a better scientist as a result, because they're learning all of these practical skills along the way. They're capable of evaluating complex problems and solving them through research, evaluating scientific hypotheses, doing tests, and actually seeing what the results are, as opposed to relying on an AI to do those things for you. And then the end result is: is the person who used AI the entire way capable of actually doing that work? They're so out of practice that I worry they wouldn't be. And in reading that, I was like, this isn't about my field, but it also is, in the sense that I do worry about someone today in college or high school who maybe likes writing, but is also like, oh, but I don't really want to write this English paper on this book I don't care about.
I'm just going to take a shortcut and have that be something that AI writes for me, but I still like to write. And when I look back on it, I'm grateful for all of those stupid papers I had to write, even though at the time I didn't want to write a single freaking one of them. They all taught me something about deadlines and time management and also understanding media and analyzing it and reading the book a second time and putting in all my little sticky notes and really thinking about it. Like all the stuff AI couldn't do for me because I would be skipping it. And I just, I don't know. I don't know what the world looks like in 15 years when it's full of people who skipped all that stuff.

Speaker 1:
[43:03] And that is a really good question. I thought this article, "The Machines Are Fine. I'm Worried About Us," was very interesting, and I recommend people read it. I sent it to a couple of friends of mine and had some really interesting conversations. There is an argument, which I think we should at least acknowledge, that this is a tool. It's a really powerful tool, but it is a new tool, and you can, for the sake of discussion, think of it like a calculator: no longer having to do long division by writing it out, you just have something that can do it for you quickly. That hypothetical that you're imagining, Maddy, where in 15 years people are like that one student Karamanis imagines, who uses AI and does a lot more research more quickly, but doesn't develop those scientific skills. As a result, at least in his argument, that person also doesn't develop the instincts that a scientist relies on, and is arguably not a scientist at all. I think Karamanis is actually making a similar argument to what I was saying earlier about art: that science done in this way, with AI doing this much of the work, so elides the point of science that it almost ceases to be science at all. It becomes something else. At the same time, you can imagine, oh, maybe there's a world where it's just normal. Everyone uses this tool, and the new normal is just that we all expect everyone will use this, and as a result, science, or art, in the case of a lot of what we've been talking about, just looks different. For me, that's an interesting idea. I don't totally know how I feel about it. I sometimes resist the calculator thing only because this just feels different to me. And I have to acknowledge that when I say that, I think everyone always says that about a controversial new technology. It always just feels different.
I always want to say, yeah, but this isn't like those past things where this pattern repeated itself a bunch of times. This time, it's really different. It does feel different to me, because a calculator doesn't also do therapy for you and doesn't, like, whatever, run your household and befriend your kids and research cancer and do all these other things that AI is theoretically going to do. But I guess I actually have found AI, and especially questions about AI and art, to be clarifying, only because however people are using it and whatever they're coming up with, it's led me to have to really define what I think art is, and specifically music, because that's where I think about it the most. What music really is and how it works and how it's this human thing that we use to connect to one another. It has actually been helpful to be forced into the position of having to define that. I wonder if some of this is going to actually make us reverse some of the past 15 years of moving toward thinking of art as content, turning everything into a product to be sold, living so much of our lives according to algorithms that we, and even the algorithms' creators, don't fully understand, and to step back from that and say, okay, well, now we've built a thing that just does it for us. We've fully automated it. So wait, what was music to begin with? Why do we actually want to make things for one another? What is writing? Why would I tell you a story? What is any of that? And it's not to say, like, you gotta hand it to AI. It's just something that I've come out of this stronger on. I feel like I've been forced into having a stronger sense of it.

Speaker 3:
[46:31] I think I'm a little optimistic about at least the creative parts of this whole thing, in part because stuff like AI has been around, to a lesser degree, for a long time. So let me give you an example. Let's say I want to write a novel, let's say a murder mystery or something like that. I could go and read Save the Cat, or I could go and check out Dan Harmon's Story Circle, or Joseph Campbell's Hero's Journey. I could take that, and I would have to think of ideas along the way, but I would essentially be following a paint-by-numbers script for how a story is written, and could go about it that way. Would that be controversial? No, absolutely not. I think AI might let you take it one step further by actually creating some of that stuff for you. But I don't know, to me, it's going to feel just as formulaic as if I just took the paint-by-numbers stuff that already exists and has already led to lots of dreck that's out there on the Internet. I think that the good stuff is still going to be good, and I don't think that AI, just as a creative tool, is ever going to be capable of creating the good stuff. What might be different is that it'll be interesting to see this gradient of where AI settles as a tool in people's stable, because I do think it will be a tool for a lot of people in the creative field, in the same way that Photoshop is or whatever. It's just that, will it be a tool that is actually useful for creating things, or will it be a tool that is useful for the logistical parts of things, the research or the formatting elements of it? If you use AI to write an outline and then you turn that into a novel, are you capable of making that novel really good? I don't know, maybe. Would that be controversial? Yeah, probably.
What if you use AI to write a rough draft and then you edit it and make it something good? Is it capable of being good? I don't know. It'll be really interesting to see, but at the end of the day, I feel like it's not really going to be capable of creating something great, something that resonates with a lot of people and becomes extremely successful. I could be wrong, but my gut feeling is that it just doesn't feel that way. And the other part of that is what happens when anyone can do it. I don't remember if we brought this up yet, but the three of us were talking about Adam Neely's video "Suno, AI Music, and the bad future." And one of the points he makes that I think is really salient is that when people are using Suno to create their own music, they're just kind of listening to their own music and not listening to other people's AI music. Because at the end of the day, nobody is going online and being like, I'm going to go search for some new AI music that someone else created. No one is really interested. There isn't an appetite for that. People are looking for music created by other humans, writing created by other humans, art created by other humans. And I think that's always going to be the case. And maybe we have this world where people are just creating AI things for themselves and not really getting much traction when they share them, because other people are just creating their own AI things. And maybe the market is just flooded even more than it already is, and we have to rely on gatekeepers and discerning critics even more to tell us what's great and help us figure out how to sort the good stuff from the dreck. But I do think there will always be a clear distinction between what's great and what's not.

Speaker 1:
[50:04] I love that video of Adam Neely's. It was my One More Thing a little while ago, so I already talked about it at length. I highly recommend people listening to this watch it if you haven't yet. It's great and has a lot of very clarifying thoughts about art. At the end, he lays out some values that he is choosing to embrace as an artist and as a person that I think are really smart and offer a nice counterpoint, and just a way to hold in your own mind how you feel and how you want to move in this world as AI becomes more prevalent. Two closing thoughts that I'll share. One is, are either of you aware of Angine de Poitrine, this math rock duo? Have you seen this? It hasn't broken containment to that extent. Okay, so there's this Quebecois math rock duo called Angine de Poitrine. They're these two guys, I guess I'm assuming they're guys. They're two people wearing these wild outfits. One of them plays drums, and they have these huge heads on them, so they look like they're almost giant creatures. One is playing drums, and one plays this double-neck bass guitar, both of which are unique microtonal instruments. I will play a little bit of it right now so listeners can hear what this music sounds like. It's wild. I got like five emails about this on the same day, and so did everyone else who works in the music education space. Rick Beato, the YouTuber, put out a video that was like, stop emailing me about this. And part of it is you have to see them do it live, with the looping and the weird microtonal guitars and stuff. It's two guys, and this is what they sound like doing it live. So these guys went totally viral. They blew up in like one week, maybe a month ago or something. And everyone watched them, and every comment mentions AI. And I feel like part of the reason that they blew up is as a response to AI malaise.
Because you look at these two people, this guy with polka dots painted on his hands and feet, fully in disguise, playing microtonal music that sounds like it is from outer space. Someone's comment was like, 20 years ago, this is what I thought music in 2026 would sound like, so I'm glad that it exists. And I think it's that only human beings could do this. This is just pure human creativity, and people loved it because you just see it. They're not even at a live show. They're watching it online, and they're seeing them do this thing, and it just feels like, this is what human beings do. And I think that is an interesting moment in the midst of all this AI, and I do think that part of their viral popularity was driven by AI malaise. And the last thought I will share is just something I want to hear more of from these AI companies. When Jack Clark is on Ezra Klein talking about Anthropic or whatever, or when any of these AI bigwigs goes and gives an interview, I want to hear them talk about how AI is helping cancer researchers, how AI is helping Alzheimer's researchers, and not just as an aside. I want that to be the subject of the conversation. If you want to justify all of the theft and the harm, and we haven't even really mentioned the environmental costs, the energy and water costs of these data centers, how harmful these data centers are. If this is as world-changing as they say it is, they need to talk about that more and show me that more. That is my current feeling: whenever someone who works at one of those companies starts talking about this stuff, I'm like, tell me how you're going to cure cancer or I don't want to hear it. I just wanted to say that on the podcast for the record.

Speaker 3:
[54:04] Well, I mean, the shareholders don't want to hear that.

Speaker 1:
[54:08] I don't care, but I want to hear that. Kirk wants to hear that and they should keep that in mind.

Speaker 3:
[54:13] I know that it helps doctors diagnose things. I know it's helped them recognize things that they otherwise wouldn't, bone fractures and whatnot.

Speaker 1:
[54:22] It's definitely been a thing. It's a real area of potential. I just want to hear about it.

Speaker 2:
[54:26] Yeah, it does also result in de-skilling of doctors, for what it's worth. Over just a series of months, if doctors use it to spot certain things on scans or in X-rays, etc., then they lose the ability to spot those things themselves. And at first, that disturbed me. But then I was like, doctors are expected to contain so much information in their brains that maybe I'm okay with it if they get de-skilled at some things so that they have more room to become skilled at other things, like remembering all of the complex, interconnected diagnoses that they need to keep in their brains. Can you tell I've been watching The Pit lately? I'm worried about all these people. So really, anything we can do to help some of our most stressed-out members of society, I, like Kirk, am in favor of, and yet somehow that just never seems to be where we concentrate our efforts.

Speaker 3:
[55:15] Yeah. I mean, what I would like to see some of these AI companies come out and say is like, yes, this is going to eliminate jobs because it's going to eliminate the need for this, this, and this, and entry-level positions won't be available here anymore. And therefore, this is what we are going to do to help make up for that. So people can still live their lives.

Speaker 1:
[55:36] Right. Instead of some vague hand-waving, well, wealth generation, it'll be fine. You're like, okay, will it? Yeah. What are you going to do?

Speaker 3:
[55:43] What is the concrete plan here? Because, I don't know, I feel like there is a world that we could grow to accept where AI replaces certain things, and we don't need to think about certain things anymore because they're not relevant, in the same way that, I don't know, you don't think about hunting and gathering anymore, because it's not a relevant human skill anymore.

Speaker 2:
[56:02] Yeah, you know, like sewing machines or washing machines, any number of industrialization products.

Speaker 3:
[56:07] You don't need to worry about washing your own clothes anymore. Although, I mean, there's some merit to that. I'm sure some survivalists out there would say there's merit to that stuff too. But still, I'm willing to accept that that's a possible world. It's just that our technocrats or our plutocrats have not presented a world where there are actually good things that result from that, where people will therefore benefit in this, this, and this way. It's all just kind of, well, your job is going to be replaced. So, uh, shrug.

Speaker 2:
[56:38] Yeah.

Speaker 1:
[56:39] Yeah. It's very frustrating.

Speaker 2:
[56:41] I mean, I think their priorities are pretty different from what ours would be because perhaps so many of the people who are working on these AI projects have so much money that they're a little divorced from the reality of what actual people worry about.

Speaker 3:
[56:55] It's hard to imagine.

Speaker 2:
[56:55] Day to day. But hey, I liked that we're trying to end on an optimistic note. So let's, let's leave it there. Triple Click still 100% original. You can't replace us. We're still us. We're still human. And with that, let's take a break and come back with one more thing.

Speaker 5:
[57:19] Max Fun Drive starts next week. Max Fun shows like this one are creator-owned. The network is worker-owned and we're all supported by members just like you. Max Fun Drive is the best time to support the shows you love. You can get Drive exclusive gifts, a bunch of new bonus content, and join in on the fun as shows hit their milestones. Plus we've got dozens of meetups and counting. We got live streams and more. So stay tuned because you don't want to miss it. Max Fun Drive 2026 is starting Monday, April 20th.

Speaker 6:
[57:55] I'm Jordan Crucchiola, host of Feeling Seen, where every week I have a different actor, director, or writer as my co-host. And whoever that co-host may be, it is a sure bet that we are digging deep and having a great time doing it.

Speaker 5:
[58:09] I love that you just said that. Yeah, I mean, if I were going to join a cult, I think this might be it.

Speaker 6:
[58:16] A fresh look at your favorite film and a peek behind the curtain at how movies get made. No, okay, I'm going to tell you this full story.

Speaker 4:
[58:22] Okay, I almost got fired from that movie.

Speaker 6:
[58:24] You should be listening to Feeling Seen.

Speaker 4:
[58:27] I had so much fun. I love what you're doing.

Speaker 3:
[58:30] I hope I did okay.

Speaker 6:
[58:31] New episodes every week on Maximum Fun.

Speaker 2:
[58:36] We are back. It's time for One More Thing. Kirk, why don't you go first?

Speaker 1:
[58:41] All right. My One More Thing is a book that I experienced as an audiobook, which is how I'm going to recommend other people experience it. It's a book called Dungeon Crawler Carl by Matt Dinniman. Okay, let's see if I can do an abridged version of the narrative setup for this book. Basically, a guy and his ex-girlfriend's cat find themselves thrown into a post-apocalyptic dungeon video game run by an intergalactic corporation that has destroyed most life on Earth and made the survivors fight their way through an 18-level dungeon for the entertainment of viewers all around the universe. Carl is the protagonist, and his ex-girlfriend's cat becomes sentient and becomes his sort of friend and companion. Her name is Princess Donut. She is fantastic, a highlight of the book. Carl and Donut must make their way through a series of ever more dangerous levels while trying to please the crowd and gain sponsors, all without pissing off the Borant Corporation, which is the intergalactic corporation that runs this whole thing. It's sort of a combination of The Hunger Games with The Hitchhiker's Guide to the Galaxy in terms of the tone, and then maybe some Ready Player One, because it is very much a video game. There's a lot of stats and upgrades and dealing with magical items and your level, and kind of always trying to grind for experience points. They are very much living in a video game. So it's kind of like a narrativized reenactment of a fictional video game from the perspective of one of the players of that game. So that's the basic setup: a guy has to make his way through a video game, and you read about it. I am listening to the audiobook of this novel, and now of this series; I've actually moved on to the second one. The audiobook is read by an actor named Jeff Hays, who produced it for Soundbooth Theater. Jeff Hays is exceptional, and these books, of course, are very, very popular. The audiobooks, I believe, outsell the print edition.
The audiobook is the whole thing. It is an Audible exclusive, so you have to listen to it through Amazon's Audible, which is a bummer. So basically, man, where to even begin? As a radio listening experience, these books are really fun. I don't think I would like this book nearly as much if I had just read it. In fact, I read a little bit of the second book because I got it on Kindle. I guess I'll just say, the way to save money is: if you buy it on Kindle, you can get it for like four bucks, and then you can bundle the Audible version with the Kindle version, and the total is significantly less than if you bought it through Audible. So look into that if you're going to buy these and you want a discount. So anyways, I read a little bit of it and I was like, man, the writing in this is not that great, or the language is just very straightforward. There's even some grammar stuff that I'm not wild about. The jokes don't always hit. Sometimes it's very funny, but some of the humor is just kind of not my tempo. It's a little loosely written, let's say, or loosely edited, maybe. It's more just like, here's the story, here's what's going on.

Speaker 3:
[61:53] It's almost like a tabletop campaign.

Speaker 1:
[61:55] It is, and in reading it especially, it starts to become clear that this is really just a series of imagined MMO scenarios or tabletop scenarios. There is a lot of gear being described and achievements being described, inventory management and stats. You know, he's got to boost his strength, but Donut's charisma is low because of some special item that she got. And every item is very complicated. And I'm in the second book now, and it really feels like I'm listening to someone describe to me, in great detail, a video game that they played. And so in every fight, you're hearing about the stats and the items, and he's accounting for all these different buffs and debuffs and abilities. And it's very complicated, and just not really like anything I've ever read before. I believe the genre of this is called LitRPG, which is like a novel that is an RPG. So, okay. So it's very unusual, and it works a lot better having this wonderful voice actor perform it for you than it does to read it. So, all right. So the backstory of this is actually really fascinating and kind of explains everything.

Speaker 3:
[63:02] We're not up to the backstory yet. Okay.

Speaker 1:
[63:04] No, I mean the meta backstory, not the lore. Well, so the Borant Corporation... no, I mean the backstory of these books, of Matt Dinniman, the author, and the books. So Dinniman, apparently (this is according to Wikipedia), was an artist who would go around to cat shows and draw people's cats. That was his gig. And in his free time, he was writing and self-publishing this series, Dungeon Crawler Carl. Then when the pandemic hit, he couldn't go to cat shows anymore, so he started writing it full time, and he just self-published it to the internet, basically. And the books became really popular through the site he was publishing on. So the first few books were really kind of just written by him. I'm not actually sure if they've been re-edited or anything, but that kind of explains why they read the way they do: a guy was having a lot of fun and just wrote this story. And that makes it make a lot more sense. And actually, in that context, I appreciate and enjoy it a lot more. And then of course, they made the audiobooks, and the audiobooks became a phenomenon in their own right. Now, because the books are so popular, he found a publisher in 2024. I think there's a new book coming out really soon, in May. There is a TV series coming from Seth MacFarlane's production company. It's going to be on Peacock. It's like a live-action recreation of this that could be great. I mean, you start reading it and it's like, oh, well, this demands to be adapted. I would have assumed it would be animated, just because there are so many outlandish scenarios. It feels a little like Critical Role, like that Vox Machina show that I watched, and animation just makes it a lot easier to do huge dragon fights and whatever. But I'll watch a live-action version of this. So that explains the whole thing to me. Anyways, I listened to this. I enjoyed it. It ended. I was like, man, that was pretty silly.
It was a little like listening to someone describe playing a video game to me for like 14 hours. But also, I really want to know what happens next. So I guess I'm going to listen to the next one. And I mentioned audiobooks a little while ago because I listened to the Project Hail Mary audiobook. This was like a similar deal where I had a free month of Audible so I could just listen to the book. And Audible promotes this book heavily because it's such a popular audiobook. So it was like right there at the top of the app. And we had been talking about this series separately. And I was like, okay, I'll check this out. And I added it and it's just great as an audiobook. Like it fits into my life now. I kind of found this space for audiobooks. I listen on my run. I like play it off my watch while I'm running. And it is just kind of a fun extra thing. Like I'm reading a like actual book at the same time as listening to this. And they almost like exist in separate parts of my brain. Like it doesn't really feel like reading a book. It's more just a fun radio adventure.

Speaker 3:
[65:46] Well, Kirk, you didn't mention the craziest part. One of the reasons this became so popular (it has sold six million copies) is that he would post chapters online and have readers vote on what would happen next, like survey them on how the story would go. There's this great Times article that we'll link in the show notes, and it talks about how recently he polled his Patreon subscribers to help him pick a setting, giving them a choice between a suburban home with shrunken characters or Satan's Water Park. And a bunch of people voted, and Satan's Water Park won. And to your point, Kirk, even smaller stuff got people really invested, because they felt like they were contributing to the story, in the same way as a tabletop campaign or an Early Access video game.

Speaker 2:
[66:33] It's an Early Access book.

Speaker 3:
[66:33] Which is very salient to what we were talking about last week.

Speaker 1:
[66:36] It definitely fits in with the Early Access part of this. And I do think he's made one very smart choice in how he's structured this story, and that's this: the dungeon is 18 levels, and no one's ever made it to level 18, we learn pretty early on. Like, I think the farthest any crawler has ever made it, across the whole universe, is level 13 or something. So there are levels no one's ever even seen before, with god monsters on them, or who knows. But what's cool is that the first book is just the first two levels. And so he's paced himself, I think, very smartly. And as a result, every time you go to a new level, you never know what it's going to be like. And he has the characters who know, who work at the dungeon, and they'll always say, oh, well, I can't talk about what happens on level five or level six. So they kind of allude to something, but of course, he doesn't know yet. And so he gets to decide as he goes, and then he can have people vote on it. And then each setting is going to be totally different, so each book has a totally different flavor to it. And it really lends itself to this kind of episodic series approach, which is a really smart structural idea and makes you just curious, like, well, I wonder what the next book is going to be like. Cool.

Speaker 2:
[67:40] I want to check out this book because it's such a phenomenon that I am really curious about it.

Speaker 1:
[67:46] I'll be very curious what the two of you think. I know there are people who really don't like these books, because if you read them, I could definitely see someone just not getting into it, just being like, this is someone describing stats to me forever. There's too much inventory management in this book. But listening to it really does transform the experience, at least for me. So for anyone out there thinking about this series, I really highly recommend those audiobooks.

Speaker 3:
[68:12] It sounds like it would be more like listening to one of those D&D podcasts, which I think is probably much better.

Speaker 2:
[68:18] Which can be pretty fun. We have one of our own.

Speaker 3:
[68:20] Exactly.

Speaker 2:
[68:20] It's called Triple Quest.

Speaker 3:
[68:21] Exactly.

Speaker 1:
[68:22] And it kind of explains the popularity, right? I'm sure a lot of people who really like Critical Role probably also like this. It's that same feeling of, yeah, a really great storyteller just taking you on an adventure.

Speaker 2:
[68:32] Yeah, right. Jason, why don't you go next? You also read a book.

Speaker 3:
[68:35] Yeah. So I read a book called American Pastoral by Philip Roth. Have you guys read this book?

Speaker 2:
[68:41] Yeah. It's also based on a video game, right?

Speaker 3:
[68:43] Well, it's about a guy named Seymour Levov, aka the Swede, and he winds up in a dungeon and he has to fight his way down by gaining charisma. No, it's about as far from Dungeon Crawler Seymour as you can get.

Speaker 2:
[69:04] It's so funny to my mind to throw to you because I knew what you were reading.

Speaker 3:
[69:08] So it's a fascinating book. This book starts off by telling the story from the perspective of this guy, Nathan Zuckerman, who is Philip Roth's alter ego, a fictional novelist he's used in a few books to tell stories. And basically, there's this guy, the Swede, whose name is Seymour Levov, who was worshipped by Nathan Zuckerman in high school. He was this kind of all-American athlete, good at everything, tall, good-looking, super nice to everybody; it just seemed like there was nothing wrong with him. And for the first 100 pages or so, Zuckerman describes Levov and describes high school life, and then talks about going to his high school reunion, where he meets up with a bunch of people, including Levov's brother, who he had been friends with in high school. And the brother tells him that the Swede, Seymour Levov, has died. And they get to talking, and Nathan Zuckerman, our author character, finds out that the Swede's daughter was a bomber, a revolutionary during the Vietnam War, and blew up the general store in the town where they lived. And Nathan Zuckerman had no idea. And he starts thinking to himself, my god, this guy who I thought had this perfect life, who was this kind of idyllic fantasy person who I idolized, has been living with this inside of him for many years, that his daughter did this horrible thing, and he has to live with that. The two of them actually had dinner at one point, Nathan Zuckerman and the Swede, and Nathan Zuckerman left thinking, like, this guy is so boring, he just has the most perfect life. And then the rest of the book is Nathan Zuckerman's kind of recreation, or novelistic rendering, of the Swede's life, and what it must have been like for him to deal with the repercussions of what happened.
So it's very much a book about... I don't even know all the things it's about, because it's one of those very literary books you write papers about and discuss in book clubs and stuff. But my interpretation is that it's a book about the depths of people, and this one guy who seems like one thing but on the inside is very much another thing. And it's done in this really interesting way where you don't know how much of it is actually true, because it's all being told through Nathan Zuckerman and his interpretation of this guy. And don't expect any catharsis or resolution on that point, by the way. It's very much left ambiguous as to who this person actually is, what's true, what isn't. There's one character introduced later in the book who may or may not be a total fantasy of this guy as he's driven into kind of a manic state over the way his daughter has turned out. And it's really interesting. It's quite a read. There are parts I liked more than others, parts I found pretty boring and just superfluous, I suppose. But it's a fascinating look at the human condition, I would say. And just the way Philip Roth writes sentences, I think, is very provocative and very poignant, for lack of a better word. I really enjoy the way he writes. It's about as far away from how Kirk describes Dungeon Crawler Carl as one could be. This is the type of book where you'll read a sentence and be like, wow, I want to read that again. And you'll be rereading sentences just to kind of get the weight of them and try to figure out how they're constructed and how they're used to tell a story. Yeah, it's a cool book. It won the Pulitzer when it was released in 1997.
I can see why. It's definitely one of those books that you come away from thinking, man, the way we judge people and think about people at first blush... it's always worth giving them a second glance and thinking about them in a different way. Because yeah, this is very much Philip Roth just kind of unpacking this person and looking at his psyche in a way that I haven't really seen a book play around with in the same way before. Yeah, fantastic book, really enjoyed it. American Pastoral, Philip Roth. I recommend it if you're up for a dense read, if you've got Dungeon Crawler Carl in your ears and you want something else for your eyeballs that is a little bit different.

Speaker 1:
[73:46] Sounds like it would be a good pairing.

Speaker 2:
[73:49] Perfect pairing. They're constantly being paired together in bookstores around the world. They're side by side.

Speaker 1:
[73:54] It really is like you listen to Dungeon Crawler Carl and then read something that is exactly like you're describing, Jason. That is actually probably a very pleasing thing to do.

Speaker 2:
[74:00] Yeah, it actually is. Good for the brain.

Speaker 3:
[74:02] Yeah, instead of reading about him gaining charisma so he can get to the next level of the dungeon, you're reading about him melting down over this.

Speaker 2:
[74:14] Can you ever truly know a person?

Speaker 3:
[74:16] Yeah, or this would be like trying to figure out, how did my daughter turn out this way? Is it this one thing that I did when she was 11 and maybe shouldn't have done, and it's driving me mad, and I'm just going to keep harping on it and fixating on it, and I will never truly know the answer. Oh my God.

Speaker 2:
[74:34] Yeah, right on. I have a fairly literary pick for my one more thing as well, although it's actually a video game. It's called Slay the Princess. I got as many endings in this game as I could find. So I think I will say I have completed it, but it has many endings so it's hard to know. This game is from 2023 and I had been meaning to play it for a while because it gets a lot of buzz. I don't know if you two have seen this game.

Speaker 1:
[74:57] Oh yeah. This game has been very highly recommended to me, actually recently by a friend of mine.

Speaker 2:
[75:01] I can see why.

Speaker 1:
[75:02] I'm very aware of it as a well-regarded game.

Speaker 3:
[75:05] I just read a story about how the people behind this game are publishing the next game from the makers of 1000xRESIST, another critically acclaimed game.

Speaker 2:
[75:14] I know. That game also looks very, very cool. I would still like to play 1000xRESIST. It's on my to-do list as well. So anyway, Slay the Princess. I actually played The Pristine Cut, which is an updated version of the game that I think adds even more endings. So the premise of this game: it's very similar to The Stanley Parable, I would say, which is high praise, although I actually liked it more, which is even higher praise. So you start off... it's a visual novel, a lot of text, but there's also voice acting for everything in the game, so you can also just listen to the game if you prefer that. The voice acting is really wonderful. So you start it up and you're in a forest. It's all first person; well, you don't see yourself at first. And you just hear a narrator speaking in this posh British accent, and he tells you that you need to go to a cabin on a hilltop, grab a knife on the top floor of the cabin, and then walk into the basement, where there will be a princess chained to the wall, and you need to kill her. That's all you need to do in order to save the entire world. Your character is then presented with an extremely long series of questions that you can ask the narrator, who seems to be a voice in your head, strangely. Just about any question you can imagine as a response to that: why? Who is she? I don't want to do that. You name it. There are so, so many possibilities for things you can ask and rebut with, and it's very satisfying. And you can try, but eventually you'll find that there's just about nothing else you can do in this world other than these two things. And so inevitably you will end up finding yourself in the basement with this princess, whether you've picked up the knife or not, and she's the other voice in the game, and she will make her case to you as to whether you should kill her or not.
And regardless of what you choose, whether you choose to set her free or kill her, everything will reset back to the beginning of the game after that point, and you will then be invited to make a different choice. The narrator won't believe you, by the way, that time has been reset, and will insist that this is the first time you've ever been here doing this. And the more you play, the more you discover about the nature of the world. But like The Stanley Parable, the answers are not satisfying; they get increasingly cerebral and existential. And in my case, anyway, I sort of interpreted it as being a game about the idea of playing a game, again in the way The Stanley Parable is, in that you end up being like, well, why am I doing anything? Why am I following any of these instructions? And the princess ends up being this sort of ever-refracting archetype of however you perceive her per playthrough. The voice actress performing her does a variety of different imaginings of her; depending on how you approach her, she'll be almost a completely different person, which is fascinating. It's very well written. I really liked it. And I still feel like I ended up being like, I don't really know what that was about, other than a prolonged existential crisis about the nature of playing a video game. But that rules to me. I really enjoy that as a format for a video game, and I liked it a lot. If I could critique one thing: the game does this artsy thing with fonts that just really didn't work for me. I think it would have been fine if every single font in the game had stayed the same. Like, when they're transcribing what the characters are saying aloud to you, I don't need the fonts to change. It could have all just been Times New Roman as far as I'm concerned. But when characters are saying something threatening to you, for example, it changes the font to a goofy, threatening-looking font.
I don't know, it's just very silly to me. But truly, that's the only quibble I had with what I otherwise found to be a really cool, super strange game that just gets increasingly unhinged the more you play. It's also often described as a horror game. That's because it can be very gory. Obviously, if you are killing the princess with a knife, you can imagine some of the visuals that entails. And that's definitely part of the game. All of that is animated, and it's scary and upsetting to see your character doing that. So in that sense, I would say it's a horror game. There's some blood in it, but there aren't jump scares. And it's more... I don't want to call it a thriller. It's more like horror in the sense of being existential, in the sense that you're like, I don't know what I'm doing here or why any of this matters. And that can be horrifying, at least for a video game protagonist. So it's called Slay the Princess. I think it was a really cool game. And I also really enjoyed reading a bunch of essays people have written about it, after I got a bunch of endings and was like, what the fuck did I just play? Like, I need to read a lot of essays about this and see what people said. That was about as fun as playing the game.

Speaker 1:
[79:59] I've seen this compared to Doki Doki Literature Club.

Speaker 2:
[80:02] Oh, not even close to as scary as that game. That game is so scary.

Speaker 1:
[80:07] But in terms of the style, and the general visual novel that goes in weird directions and freaks you out.

Speaker 2:
[80:12] I think this is so much stronger than that. I feel like I've really soured on Doki Doki over the years. I feel like it doesn't go past its initial premise for me, where it does one twist and then after that, that's that. Whereas Slay the Princess, I think it's possible they were influenced by it. And I think they have specifically said they were influenced by The Stanley Parable, which is another meta game where the narrator speaks to you. And I think probably these developers were looking to games like that and being like, well, what if we could just continue to make it ever more bizarre the more time passes? What would that look like? And what if there was never a satisfying resolution and the world itself just became stranger and stranger? And I think they succeeded at that. It just keeps getting weirder.

Speaker 3:
[80:54] Maddy, you should check out their other game. It's called Scarlet Hollow.

Speaker 2:
[80:58] Yeah, I want to.

Speaker 3:
[80:59] So the people behind this game are called Black Tabby Games. It's this husband-and-wife team, Tony Howard-Arias and Abby Howard. And they made Slay the Princess kind of in between the episodes of Scarlet Hollow, which is also a horror game, like a visual-novel-ish horror game. And Slay the Princess sold a million copies and was a massive success. And so they told me (I spoke to them like two weeks ago) that they're using that money to publish and invest in a couple of games, including this new one from the team that made 1000xRESIST. It's called Prove Your Human, and it's about an AI that thinks it's a person. So, to put a pin on this whole episode.

Speaker 2:
[81:40] Yeah, it's so cool. The trailer for that game looks awesome. I'm really excited about it. And that's so cool that they're doing that and kind of helping that game come to life as well. It's nice to see game devs using their success in that way.

Speaker 1:
[81:52] Yeah, man. I mean, 1000xRESIST was one of my favorite games the year it came out. That's an amazing game. Very excited for what the makers of that do next.

Speaker 2:
[81:59] Yeah, same. I need to play that. It's like I said, I mean, I'm just now getting around to a 2023 game. Who knows? Anything could happen. So anyway, that game was called Slay the Princess. It was very cool. All right, super long episode, but hey, we had a really meaty topic and then some very meaty one more things, but now we've reached the end of our meaty meat filled sandwich.

Speaker 3:
[82:22] We've reached the 18th level of the dungeon.

Speaker 2:
[82:24] We have.

Speaker 1:
[82:25] We have. We made it all the way. No one had made it before. Turns out the 18th level, all you have to do is record a podcast.

Speaker 2:
[82:31] We're taking Triple Click back from the Borant Corporation, or whatever, wherever it was, and it's ours again. And as always, we'll be back next week with another episode. I'll see you both then.

Speaker 3:
[82:43] See you both next week.

Speaker 1:
[82:44] Yep, see you both next week.

Speaker 2:
[82:46] Bye.

Speaker 1:
[82:49] Triple Click is produced by Jason Schreier, Maddy Myers and me, Kirk Hamilton. I edit and mix the show and also wrote our theme music. Our show art is by Tom DJ. Some of the games and products we talked about this episode may have been sent to us for free for review consideration. You can find a link to our ethics policy in the show notes. Triple Click is a proud member of the Maximum Fun Podcast Network. If you like our show, we hope you'll consider supporting us by becoming a member at maximumfun.org/join. Email us at TripleClick at maximumfun.org and find links to our merch store and our Discord server in the show notes. Thanks for listening. See you next time.

Speaker 4:
[83:47] Maximum Fun, a worker-owned network of artist-owned shows supported directly by you.