transcript
Speaker 1:
[00:02] Welcome to the Practical AI Podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work and create. Our goal is to help make AI technology practical, productive and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X or Bluesky to stay up to date with episode drops, behind the scenes content, and AI insights. You can learn more at practicalai.fm. Now, on to the show.
Speaker 2:
[00:41] Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am CEO at Prediction Guard. I'm joined as always by my co-host Chris Benson, who is a principal AI and autonomy research engineer. How are you doing, Chris?
Speaker 3:
[00:57] Hey, I'm doing great today, Daniel. Looking forward to catching up on one of these fully connected episodes where we get to talk about kind of whatever we want to talk about.
Speaker 2:
[01:05] Whatever we want to talk about. I mean, I guess normally we just talk about what we want to talk about, but at least when we have a guest, we try to center the conversation on maybe a few things they want to talk about. I'm pretty excited, Chris, because I just got a brand new pair of shoes. I've been wearing my new shoes all week, and I didn't think that would be a relevant topic to bring up with you on the Practical AI podcast, because I thought shoes really didn't have any overlap with the AI world. Although, and this is not the topic we're going to talk about, I did see a company where you take a picture of your foot and the AI figures out the shape of your foot or whatever, and then could, I guess, advise on shoes or something. Anyway, but speaking of shoes, today, I didn't even see this, but folks in the office here at Prediction Guard were like, hey, did you hear about Allbirds? And I had not heard about Allbirds, but apparently, Allbirds is now an AI company, which is quite interesting. So Chris, do you have a pair of Allbirds? I guess they're AI Allbirds now.
Speaker 3:
[02:29] I actually don't, but I gotta say, that's a terribly interesting way of retreading your business model.
Speaker 2:
[02:39] Yes, yeah, they really kicked it to the curb, I guess.
Speaker 3:
[02:46] So from shoes to AI data centers?
Speaker 2:
[02:50] Yeah, well, I guess kind of background information for people here. I sort of only know this because my wife was really into Allbirds. She had a few pairs. But it seems like from around 2016 to just after COVID in 2021, there was this huge rise of the Allbirds brand, which was a favorite in terms of shoes that you would order online. I think eventually they did have retail locations, that sort of thing. But from 2022 to 2025, so through this kind of last year, they pretty consistently had a decline: growth stalled, margins kind of compressed, and their stock price declined, making the business distressed. And so, as we're recording this, we're in April of 2026. In March of 2026, Allbirds exited the actual footwear part of their business, selling off all of those assets to American Exchange Group. Don't know a whole lot about them, but basically that sort of ends the shoe operation portion of Allbirds. They still of course had the shell of a company, which had a name and an entity and a stock ticker, et cetera. And they had a bunch of cash, right? So what do you do with a bunch of cash? I guess they did also raise additional cash. What do you do with a bunch of cash but buy GPUs? Of course.
Speaker 3:
[04:41] What else is there?
Speaker 2:
[04:43] Apparently, what happened?
Speaker 3:
[04:45] Isn't that what you do with all your cash?
Speaker 2:
[04:47] Yes. Rebranding as AI compute infrastructure, which I'm wondering if they'll give me AI compute infrastructure for cheaper. That would be nice.
Speaker 3:
[05:01] It's amazing, we're joking about this being kind of the pivot you didn't see coming, and quite a pivot too, but their shares jumped at least 700 percent based on what I'm looking at here, which is quite a jump in terms of the market not only accepting, but endorsing that kind of a decision. You got to be wondering if there aren't many, many boards out there and CEOs that are going, we're in a tough spot in our business. Things have been struggling recently. Maybe we go buy GPUs and go into the AI business. I mean, it apparently is a perfectly legit business plan now.
Speaker 2:
[05:50] Yeah, I guess on the positive side, this could seem to be a rational allocation of capital. I have a bunch of capital. Well, I don't have that much capital. I wish I had that much capital, but a party has that much capital. If your core business is dying and you're able to sell that off, you have a company, a stock ticker, then what's the hot thing? Obviously, compute is a core part of the expansion of AI everywhere, the running of these models at scale. Many might not be self-hosting models, but they're certainly consuming models that are running on infrastructure somewhere, right? And yeah, so I guess from that perspective, it could be kind of seen as a very positive and useful kind of pivot. What's your thought?
Speaker 3:
[06:57] Well, apparently so. I mean, the market is endorsing it, and I think prior to this announcement, this is the kind of thing nobody would have bought into. Like, it would have been seen as a joke, you know? But the fact that, at least at this point, the market is doing that really does make such a pivot into an option that companies may be evaluating. And, you know, in these articles, they talk about Allbirds at one point in time being the next Nike, and I've seen that bantered about in some of the articles. And it got me thinking for just a second there, like, what if Nike were to do the same thing? What if Nike were to pivot from shoes? Yeah, just do it. But I'm wondering, would they brand themselves as AIr? Sorry, ba-doom-boom.
Speaker 2:
[07:50] Yeah, AI Jordans.
Speaker 3:
[07:52] There you go.
Speaker 2:
[07:53] Yeah. Speaking of terms, I was running across this term. You know, sometimes we try to clear up jargon on the show, keep it pretty practical, and sometimes jargon doesn't make any sense to me. But this term, neocloud, is this something that you've run across or is this new to you?
Speaker 3:
[08:14] This is new to me, so you'll have to take us into neoclouds.
Speaker 2:
[08:18] So apparently, and this is related to the Allbirds thing, a neocloud, or what's sometimes referred to as an AI native cloud, is a shift that we've seen recently: cloud infrastructure that's built specifically for AI workloads, not general computing. So in that way, Allbirds would be potentially putting together a neocloud. The old cloud model is you have your web app infrastructure, you have databases, managed databases, you have managed storage of some type, you have some IT or logging, monitoring services. It's general purpose, flexible, lots of different services. The neocloud or AI native cloud is different. Think of companies maybe like CoreWeave or Together AI or Lambda Labs, right? It's infrastructure that's built either for AI training, inference or both, massive GPU workloads, kind of GPU first, not CPU first. And this exists partly because GPUs are scarce, even in the hyperscalers and the general cloud platforms. The workloads are different, right? Because maybe you are running a large model across many nodes, with lots of movement of data often, and you're kind of supply chain constrained in terms of what you need to support. So that's the idea and kind of how it intersects here. This was a new one for me as I was looking a little bit at this story.
Speaker 3:
[10:18] I'm curious, do you have any insight into it? If you're looking at neocloud companies and you're comparing them against kind of the traditional cloud players, the Alphabets, the Apples, the Microsofts, the Metas, how is the business model changing and how much is neocloud eating into that? I mean, are we seeing it stay very specialized or is it making kind of general market traction?
Speaker 2:
[10:48] I would say, well, I don't have exact numbers right in front of me. If our listeners do have them, let us know, point us to those on social somewhere. But I do know that I'm seeing CoreWeave and other neocloud types of companies being talked about quite a bit. And I think that's partially because there can be this specialization towards AI workloads and the specific compute there. As you know, if I go into AWS or some of these other general platforms, you can do just about anything. There isn't that focus, which is good in one sense because you can support a lot of different types of things. But if you're an AI native, AI forward company, and maybe you're quickly spinning up no-code applications and not doing a lot of hosting and management in a traditional way, then maybe it makes sense for you to run a lot of that stuff serverless or otherwise, and have this pay-as-you-go in AI specialized clouds, which is kind of interesting. I guess that's one of the things about the Allbirds case that you could talk about on maybe the negative side. My hot take on this is that what Allbirds is really bringing here is a company shell and capital, right? They're not bringing any domain expertise that I'm aware of. Maybe there's some domain expertise around supply chain and maybe manufacturing or industrial settings that they're bringing, but they're not bringing AI specific expertise in terms of building this kind of neocloud. The other thing is that approximately $50 million, although that's much more money than I can imagine generally, is very much a drop in the bucket in terms of the AI data center market. So part of my question is, okay, create your little data center; that is a drop in the bucket whether you look at what China is spending on data centers or just companies in the US investing billions of dollars in AI data centers. That maybe is the cynical take on this: okay, you have a little bit of capital, you don't have the domain expertise in AI, and you're going to, what, spend $50 million on a little data center? How is that going to make a mark? And maybe part of it is that this is the foothold and more capital will be infused and they'll figure it out. And I don't wish them bad or anything. It's just more of a skeptical take.
Speaker 3:
[13:57] Yeah, I mean, in another industry, that money would seem like quite a starter, but in this industry, the availability of GPUs in the ecosystem is already quite strained based on the demand. And if you look at it kind of globally, there's basically half a dozen key players in the GPU ecosystem in terms of supply, which is really Nvidia, TSMC, AMD, Intel, Apple and Qualcomm for the most part. And each of those is cranking out the types of chips they make in this capacity for AI purposes. And so I can't help but wonder, if this becomes a trend where you see a lot of struggling companies pivoting into that, what does that chip supply chain start looking like? It gets even more strained going forward. So this will be a really interesting trend to watch, to see if it catches on and what happens with that.
Speaker 2:
[15:14] Yeah. And what do you think, Chris? There are kind of two elements happening here. One is the centralization and expansion of these very much centralized compute resources in data centers, which will grow. But there's also this push towards, and I don't know if this was one of the trends that we talked about at the beginning of the year for 2026, but it's certainly one of the trends that I'm thinking about in terms of the market in general, the shift towards kind of physical or embedded AI, where AI is kind of living everywhere in a bunch of environments, whether that's kiosks in a retail environment or actually on the manufacturing floor, not in a data center for a manufacturer. Of course, in phones, or we just had the conversation with AMA AI, who has AI in these devices that they're putting in cars to make them self-driving. So yeah, what is your take on this and how people could think about it? Are both increasing simultaneously, like we'll just see more data centers and we'll see more physical AI, or is there a shift more towards that embedded, edge-centric model versus kind of everything being centralized in data centers?
Speaker 3:
[16:34] I mean, I think it will be all of the above in my view, but I think the giant growth area is going to be in what you might call far edge, because people define edge differently. Some people would say kind of the edge of the data center, edge of the cloud is edge. But if you're talking about devices embedded in physical products out in the world that are not directly cloud connected, or that are, but you're not relying on the cloud for all of their functionality, then I mean, there's huge, huge, huge growth potential in that across so many different industries, and that's still in its infancy. But yes, I do think that your notion of neocloud, as you introduced it to us a few minutes ago, is an opportunity that many, many companies will go after. My gut is that in the long run, that is not as profitable, just because there are already huge players dominating that, and as others fill in the niche, there'll be many, many players there. So it'll be interesting to see if that continues to be an amazing strategic opportunity versus, you know, specializing out into various embedded devices.
Speaker 2:
[17:59] Yeah. Yeah. Well, I guess that is your set of beliefs or assumptions about something, or shall I say your mythos about that. Which brings us to the interesting topic of Mythos. What's the right way to say it?
Speaker 3:
[18:22] I'm actually not 100% sure. I've heard people say it both ways. So either one is fine for today.
Speaker 2:
[18:29] Okay.
Speaker 3:
[18:30] Unless folks have been under a rock for the last week, they hopefully have heard a bit about this already.
Speaker 2:
[18:37] Yeah. I'll maybe switch between the two. That way, at least for part of the time, I can seem smart. But the Mythos model from Anthropic has been in the news, or I guess the supposed Mythos model that Anthropic has somewhere and that hasn't been seen by people yet is in the news, I should say.
Speaker 3:
[19:03] Correct. So the short of it is, there is a new frontier model that I think, logically, you would say is kind of the next thing past the Opus model, which has been the powerhouse driving Claude Code. We've talked a lot about Opus and Claude Code on the show. And so the next generation being Mythos from Anthropic is a powerful model. But back before any of us had heard of it, what's been reported is that Anthropic had it in a sandbox environment. They discovered it was particularly adept at uncovering security vulnerabilities. In just about every meaningful software package or arena you could imagine, they claim they discovered many thousands of vulnerabilities, in every operating system and every browser. And they realized that it could have profound effects out there on its own. So instead of releasing it, as I think they had been planning, they kept it close hold and they started a new project called Project Glasswing, which is a security project that is kind of closed. And they brought in apparently 40 companies, but only about a dozen of those companies are public; a number of them are not. And those companies are being invited to use Mythos to make sure that their various systems are not exposed, or to give them time to fix those exposures. So there's not a lot of information, as you would expect, about the specifics of that process. And so that is ongoing right now, and we'll see what happens. We don't really know what the future of Mythos is. But I think I would finish by saying, if you just look at it in the same way that you and I are often talking among ourselves and with guests about these types of models, the fact that people know it's possible now means that it is probable that other people will be developing similar models, which we've seen in every case ever with frontier models. And so it might bespeak a very interesting future where we are seeing some tremendous capabilities from frontier models that are once again a significant step beyond the generation that's public. So who knows? But in the months ahead, I suspect this is a topic we will end up revisiting from time to time.
Speaker 2:
[21:58] Haven't we been here before, Chris? I mean, you and I have been doing this for a while, and it seems like this is the same conversation we've had with respect to some OpenAI model releases and gated releases of this and that because it's going to end the world or something.
Speaker 3:
[22:16] It was GPT-3, I believe. Am I remembering correctly?
Speaker 2:
[22:20] I believe it was, or it was even earlier. Yeah, we talked about the whole gated release thing, right?
Speaker 3:
[22:25] Yes. And they were holding it. And then finally, they let it out, and they didn't even try that on the GPT-4 side.
Speaker 2:
[22:33] And now, you talk about GPT-3, even GPT-4, and it feels like, oh man, that thing sucks.
Speaker 3:
[22:44] Yeah.
Speaker 2:
[22:45] So it's like at the time, it was going to end the world, but it kind of sucks.
Speaker 3:
[22:49] I was talking to somebody the other day, and they were using GPT-4 in their business. And I literally said, why are you using that dinosaur? Why would you do that to yourself? So, yes, this is definitely coming around to the same story here.
Speaker 2:
[23:10] I'm not saying it's not better. I think my point is just, hey, I don't think it's world ending. People can probably rest a little bit at night. I do think, and I guess they emphasize this, and this is I think from Reuters and a couple other places, that it's especially strong at discovering vulnerabilities and exploiting those vulnerabilities. And so this does expand things. I mean, even already, right, I can use whatever models and agentic coding techniques to create malware, just like I can use them to create great software, and I can exploit systems. So there is definitely this narrowing, which maybe has always been the case in the cybersecurity world, where the threat actors get better and there's better availability of tools and that sort of thing. This is certainly a different level of that. I'm not saying it's equivalent; I'm sure it's a different level of that, right?
Speaker 3:
[24:19] But, you know, when I look at this, regardless of what Mythos's capabilities really are, whether they're really high, low, whatever, I think that Anthropic has historically been a little bit lower key and kind of safety oriented, and not quite as flamboyant and over the top as the OpenAI folks have been. I mean, Sam Altman's known for the kind of statements he makes all the time, and people over time have kind of learned to take those with a grain of salt. But it's starting to look, you know, with some of the Claude Code stuff and getting into this, like maybe Anthropic has started taking a page from that playbook at OpenAI in terms of the marketing aspect of this. Because regardless of what Mythos's capabilities are, whether they are amazing or less, or just whatever, this is still generating an amazing amount of attention. I mean, you and I are sitting here talking about it, we're contributing to that. They were all over the news. And so it's a fantastic marketing strategy on their part, no matter what the reality is.
Speaker 2:
[25:36] And who knows how much is tied to recent problems in terms of interactions with the government. I do think, and I would be lying if I said I did not have a personal bias and hope for this, but I do think that this emphasizes kind of a tailwind for governance and control capabilities within the AI world, which is of course an area where I work. But also, letting people know that there is a risk is very different than controlling that risk. And so whether it's an AI SOC that's using AI within security operations, on the offensive side or defensive side, I think it ushers in a great tailwind for those companies. But also on the side of, hey, if companies actually want to use a model like this, there are bad things that can happen as well as good things, which emphasizes a push towards governance and control regardless of what model stack you're using. And, you know, shout out if you're out there, we'd love to have you on the podcast, but there are things like the AI underwriting company and others that have received funding recently where they are actually trying to establish some of those auditable certifications for companies in terms of how they institute governance and what evidence there is for that. That's a very different thing than saying there is a risk, right?
Speaker 3:
[27:10] It is. It is.
Speaker 2:
[27:11] Yeah. But interesting. I look forward to trying it out whenever I get my hands on it. I'm not part of the program, I don't have a golden ticket, so I'll have to wait with everyone else, I guess.
Speaker 3:
[27:26] Yeah, I'm right there with you.
Speaker 2:
[27:28] Until I go for token maxing my Mythos endpoint.
Speaker 3:
[27:36] Oh, you threw that out. Now, we got to talk about that. I mean, that's like, token maxing is the hottest new term over the last few weeks.
Speaker 2:
[27:44] First off, I feel really old, because I just don't like the whole whatever-maxing thing. I just feel old talking about anything with that sort of term. But yeah, I guess there is the token maxing thing.
Speaker 3:
[28:00] Oh, okay. So, for those who have not heard the term, again, it's been all over the place recently. Stepping back, because I think this really ties back, ironically, into the Anthropic conversation that we had, point back to us talking about Opus as the greatest thing since sliced bread, and the fact that, as we have discussed, Opus combined with Claude Code made a substantial difference or change in how people were approaching coding. I know me coding last year with AI assistance in various forms versus me coding this year, the workflow is quite different. And it really has accelerated in a lot of ways, aside from where the models are and stuff like that. So the tool set has been great, and we've talked about this on the show fairly recently as well. So, acknowledging this process, we've had a number of the big traditional AI companies, especially Meta, and you know, can't imagine the Meta culture embracing this, right? Meaning, of course, it has. If you look at who Meta's CEO is, they have gamified usage by developers, and they're trying to basically get them to spend as much as they possibly can on Claude Code and other competing development tools to try to accelerate what any given developer can do. To the point, like, at levels that the rest of us look at and go, that's insane, where it's like, you, a developer, go spend hundreds of thousands of dollars on tokens to accelerate your capability. I guess this is trying to kind of 10x, if you will, to use another buzzword, what any given developer is able to do in terms of producing work. And they're orchestrating teams around token maxing and stuff like that. And I know at Meta, I don't know if it's still up or not, but they had a scoreboard that was kind of in one of their main areas where everybody could see who was token maxing the most. And then people were gaming the token maxing system, so they would actually spend tokens on kind of trivial things just to make sure that they were showing up on the scoreboard. So, yeah, in those kinds of stories, it's absolutely being done to excess. But that's trickled down, and so there are many other organizations that may not have the budgets of some of these top AI companies, but they're trying to figure out, what can we afford for our developers to do in terms of spending money on tokens, and what will that get us in terms of production capability in our own businesses? So that's now another big thing that's out there in business right now.
Speaker 2:
[31:09] Yeah, I would say just anecdotally, I very much think that we, we meaning the company that I'm leading, are not spending enough. We're not token maxing, we have no leaderboard, but I also don't think we are spending enough on AI usage. That's certainly one of the things as a founder that I think about and that I'm pushing on. It kind of reminds me, well, there are all sorts of parallels you could draw with everything in moderation, right? But certainly people have to push the boundary for us to know where the boundary really is, right? So I think there are probably abuses within that, and there are inefficiencies within that and things that don't make sense. But also, it makes sense to me that there would be a push towards figuring out where the proper boundary is, because I don't know if we totally know that yet. I know even some folks, I think it was Jensen from NVIDIA, who obviously has a horse in the race in terms of token maxing, right?
Speaker 3:
[32:37] Maybe.
Speaker 2:
[32:38] Maybe. As we talk about neoclouds and GPUs. But I think he was saying he would be very alarmed if there was an engineer making 500k who wasn't spending 250k, half their salary, on tokens. I don't know if that's where I land. But like I say, I don't think we're spending enough on tokens. And this is where, I think in one conversation we were talking about, you always have an infinite engineering roadmap, right? So on the negative side, certainly you can just spend tokens on dumb things; I'm willing to say that. But also, I think it is some indicator of how effectively you're running an AI-driven engineering team in today's world.
Speaker 3:
[33:36] I think that is a very sensible approach, in the sense that I think what is unknown now is what the price to productivity translation really is. And if you look across many organizations, I suspect you'd find a very significant standard deviation in that. So I think, going forward, as this matures, we're going to see best practices, we're going to see books being written that you'll start seeing in all the developer areas about how to do it efficiently. I think we're just not there yet. There'll probably be some guidance developing over time about how to do it without just taking Jensen's spend-all-the-money-on-GPUs-you-possibly-can approach, which is what you would expect from him.
Speaker 2:
[34:34] Yeah, and I think maybe it's because we don't totally know the right metrics to optimize and max out right now, right? Tokens are probably a vanity metric in the same way that clicks to your website are a vanity metric in terms of your top of the funnel go-to-market activities, right? You can put a lot of money into, let's say, ads or something like that and get a lot of noise in your traffic, or maybe you've bought traffic, so you have a lot of traffic on your website. It doesn't mean that you are doing a great job in terms of your organic discoverability, SEO, et cetera. We have developed other metrics to judge that over time, right? And there have been certain metrics around velocity, et cetera, for engineering teams over time. And now all of those things are kind of, the rules are broken, to your point. So what metrics do we use? Sure, token usage is probably a vanity metric, but then there's correlation versus causation, all of that stuff, right? If you are doing well, you probably are using a lot of tokens, but it doesn't mean that if you're using a lot of tokens, you're doing well, sort of thing.
Speaker 3:
[35:54] And I think one other thing I'll just throw in as we close out on this one is the fact that you mentioned just a moment ago the kind of infinite roadmap that a company might have, but it is possible to potentially outrun what your organization can manage in its other capacities. So even if you are able to produce a lot more productive code that drives the capability of whatever your company does, if you're outpacing what the rest of the organization can absorb and manage and modify, then that's another form of potentially losing efficiency, where pure token maxing may not get you what you want. You can kind of pull it back a little so that your organization doesn't die under the weight of it. Just one last thought to keep in mind. It's a business. It's not just programming.
Speaker 2:
[36:48] Well, I would be curious, if someone was token maxing, to take a look at a few of their chat logs to see how they were doing that token maxing, which, it appears, could actually be discoverable information in a court of law. So transitioning a little bit, one interesting thing that I saw this last week was a ruling by a federal judge in a case where the judge actually forced a defendant to hand over chat outputs, I think in particular from Claude in this case, that they had used to prep some legal materials. The overall idea here being that AI systems like this are not lawyers, they are just what they are: tools. And so, even if you want to think of it as talking to your AI legal assistant, these are not lawyers, and so those conversations aren't protected by attorney-client privilege, even if you're talking about legal matters. Essentially, the court treated an AI system like this like a third party, meaning confidentiality was effectively waived. So that's kind of disturbing in some ways. Maybe, if you're listening out there, you're even thinking of that one conversation you had with ChatGPT or Anthropic or whatever, like, oh man, I hope I never go to court, because they're going to find out about that chat log.
Speaker 3:
[38:32] Yeah, I'm not at all surprised about this, and I think there's been a lot of foreshadowing that such things would happen. Most of these companies, probably all of them, have long since said this is not protected. I know with ChatGPT, once the voice capability came out way back and people were having live conversations, that evolved into people kind of treating it as a confidant or a friend or that special someone who understands me. And I think these are all kind of different flavors of the same thing. So I don't think it's strictly a legal field issue only. It's also a medical field thing, a psychiatric field thing. And I think they got bitten on that one when they went to court. But it's important to remember that no matter what it feels like to you personally and how you're interpreting these things, it doesn't have any special protection. The courts may ask to see that. And that's an easy thing to get.
Speaker 2:
[39:44] Well, and I'm just thinking through all the implications of this. And certainly, there is the general public, who might be dealing with whatever it is, divorce cases or whatever in their own life, or criminal sorts of things. But for me as a founder, with Prediction Guard, there's a lot of information that I process. And we have a lawyer, right? I think like most startups do or should, right? And we're passing back and forth agreements or contracts or updates to license agreements, right? It's so tempting to just say, well, I got this from my lawyer, let me pop it into X AI. Well, I guess I shouldn't use X as a general variable now, because X is not a general variable; it's a social media company run by Elon Musk. But if I pop that into a random AI system, then essentially I have moved something that was confidential into something that is explicitly not confidential. And so if those are in draft form, or if we're talking about things that are just contemplated within the company, or dealing with a problematic customer and that sort of thing, all of that essentially is then moved into what is discoverable. And I'm not, hopefully, getting sued for anything. But yeah, it does make you think about the ways that you're using these systems. And it does seem, in some of the response to this, but even before this, that some law firms are putting into the contracts that they're using, and maybe people should even think about this in their own license agreements and other things that they're doing for their products, language like: hey, if you're putting information into an AI system, you're essentially waiving the privilege of confidentiality, right, because that is explicitly discoverable, unless it is explicitly private or you're running a private model locally. But yeah, there are warnings that need to be sent out to people to not do this. There are implications, and maybe contracts that need to be updated, all of that stuff.
Speaker 3:
[42:10] Yeah, I'd like to wind up a little bit with a question, and maybe some of our folks in the audience can educate us a little bit on our social media channels. And that is, if you look at communication things like Signal and ProtonMail, which are catering to users who explicitly don't want there to be a record, so there's literally nothing there that a government, a court or whatever can access, I'm wondering if there will be those kinds of systems for AI chat, and what the legalities of those are in various jurisdictions. So that you can have your chatbot, but you're literally able to honestly tell the court, well, no such log exists. So if anyone has any insight into some of that, the juxtaposition of AI chat and the no-record systems that we're seeing pop up, I'd love to hear about that. So Daniel, any thoughts about that yourself?
Speaker 2:
[43:20] Yeah, I think it's really interesting. I also think it brings it back to something that companies have really been forced to deal with across multiple transitions of technology, where there were sort of rules about what you could send or not send confidentially in a physical letter, and then rules about what you could send and not send confidentially in an email. There are different intuitions that we've built up around that, and we just don't have the intuition here yet. I think that will develop, but to your point, it also presents maybe some market opportunity, or some process sorts of things that everyone needs to think about. It was fun to talk today, Chris. Go ahead and kick your shoes off and relax for the evening. Invest in AI data centers, I guess that's what we should do.
Speaker 3:
[44:20] There we go. That's my pastime going forward, I guess.
Speaker 2:
[44:23] Yeah. Awesome. See you Chris. Take care.
Speaker 1:
[44:31] All right, that's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show. Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.