transcript
Speaker 1:
[00:01] Some of the scientists who helped build AI are now sounding the alarm.
Speaker 2:
[00:05] With this kind of technology, aren't we going to build machines that we don't control and could potentially destroy us?
Speaker 1:
[00:14] What future is this technology rushing us toward? Listen to The Last Invention, wherever you get your podcasts.
Speaker 3:
[00:23] This is a CBC podcast.
Speaker 4:
[00:26] Hello, Understood listeners. I'm Lee C. Camp, host of No Small Endeavor, the Signal Award-winning podcast that explores what it means to live a good life. We do that through conversations about human flourishing with scientists, philosophers, artists, poets, and many more. Today, we're sharing the first installment of our new two-part series, The Human Cost of AI. In part one, you'll hear us examine artificial intelligence through a sobering insight: that when you invent the ship, you also invent the shipwreck, and that neither dystopian fear-mongering nor utopian idealism is the most helpful way forward. Instead, we ask how AI is already, right now, forming us, shaping us, for good or for ill. To trace the human cost of AI, we follow three fault lines: sex, money, and tools. You'll hear our conversations with leading scholars and writers like computer scientist Josh Brake, MIT professor Rosalind Picard, and journalist Garrett Graff. Before we play the full episode, be sure to follow No Small Endeavor on your favorite podcast app. Now, here's episode one of our two-part series, The Human Cost of AI. I'm Lee C. Camp, and this is No Small Endeavor, exploring what it means to live a good life.
Speaker 5:
[01:46] If we should know anything from what we're seeing from Big Tech, it's like their goal is to make the thing stickier. And they know that one of the best ways to do that is to hack your psychology.
Speaker 4:
[01:57] Today, we're bringing you part one of our two-part series, The Human Cost of AI.
Speaker 6:
[02:02] People seem surprised every time a for-profit organization chooses profit over people. And, you know, the cynical reality is, why would they?
Speaker 4:
[02:12] Over the next two episodes, I'll be bringing you my conversations with scholars and tech experts about the forces driving the AI revolution and the ways tech has the power to shape not just our future, but ourselves.
Speaker 5:
[02:25] It doesn't have will or agency or telos, but as soon as you, as a person, pick it up, the design of that tool shapes your will and agency in a certain direction.
Speaker 4:
[02:34] All coming right up. I'm Lee C. Camp, and this is No Small Endeavor, exploring what it means to live a good life. There are two stories we often tell about artificial intelligence. The first: it will destroy us; machines surpass us, outsmart us, and we lose everything that makes us human. The second: it will save us; every disease cured, every problem solved, a technological paradise just around the corner. But what if neither of those stories is quite true? What if it's a false dichotomy? What if the choice between those two stories keeps us from seeing a lot of other things that we desperately need to see, and we need to see right now? A while back I interviewed Carissa Carter and Scott Doorley from the Stanford Design School, or the d.school as it's called, and in their book Assembling Tomorrow, they riff off a line they apparently picked up from the philosopher Paul Virilio: when you invent the ship, you invent the shipwreck. This seems to me to be a sobering, helpful reminder. The ship? Extraordinary. It opens up the world. It moves people, ideas, goods across distances that were once impossible. It expands what it means to be human. And yet, when you invent the ship, you also invent the devastation of the shipwreck. Here's Scott.
Speaker 7:
[04:07] It's just a completely flawed idea that we're going to get everything perfect. There's absolutely no evidence, despite many tries, that we could come up with some system that's just going to make everything right. And yet, for some reason, when we create things, we kind of feel like that's what we're doing. And so what we realize is that really the goal should be imperfection because that's where we're going to land.
Speaker 4:
[04:31] Here's Carissa.
Speaker 8:
[04:32] We don't want to say don't ever try anything because it will cause harm. But there's definitely a balance between that and what's our collective response when that power plant did really pollute the environment and it's causing climate change. Like how are we going to now band together?
Speaker 4:
[04:48] Or perhaps even better, how do we seek to anticipate potential harms and attend to them now rather than merely clean up or send out the rescue boats after the shipwreck? Mark Zuckerberg, founder of Meta and Facebook, infamously said, move fast and break things. If you never break anything, you're probably not moving fast enough. Scott and Carissa remind us that this supposed tech bro wisdom may not in fact be courage. It might be recklessness. It might even be a social scourge.
Speaker 8:
[05:21] We talk about today's day and age being one of runaway design. And you can think about that a little bit like you think of a runaway train. So if you envision a runaway train, it's speeding down the tracks, you can't pull the brake and it might crash. There might be a spill or some sort of destruction when it does finally come to a halt. Now, the thing with the runaway train is that we see it. We know it's runaway and we see the harm that it's caused. In today's runaway design world, the track's invisible, so we are building with these technologies like AI that are very, very powerful, and what we're building with them is speeding along tracks that we can't quite see until something has gone wrong. It's really hard to keep up when it feels like technology is just happening really quickly and happening to us.
Speaker 4:
[06:15] Long before I followed a vocational path of higher ed and theology and ethics, I loved all things tech, math, physics. I was coding computers when coding was most definitely not cool, as a freshman in high school when the RadioShack TRS-80s were cutting-edge technology. I talked my way as a senior in high school into getting a solo tour of the research facilities at Oak Ridge, escorted by a high-ranking member of the US Department of Energy. I wanted to be a physicist or an engineer or an astronaut. I loved all that stuff. In college, I majored in computer science, minored in math, and for the next 20 years, I would continue to write code. But I also found I loved intellectual history, the history of ideas, the way theological and philosophical and moral constructs shape or deform, inform or obfuscate what it means to be human, what it means to pursue the common good. The theologians and philosophers began to help me see things about technology and technique that I had not yet been able to see. Perhaps some of that is most succinctly put in Thomas Merton's assertion that one may spend their whole life climbing the ladder of success only to discover, at the end, that the ladder was leaning against the wrong wall. In other words, one may do something well, efficiently, quickly, powerfully, but to an insufficient end, even a horrific end. But if you never ask the questions of why, of purpose, of what the philosophers call the telos, you won't know you've wasted a life, a culture, a society until it's too late. We were recently invited to go out to Waco, to Baylor University, guests of the Institute for Faith and Learning, where hundreds of academics from a broad diversity of academic disciplines gathered to ask just these sorts of questions. So with the help of some of those fascinating conversations and scholars, we want to offer a modest and simple offering to the AI conversation. Today, in part one, to issue some storm warnings, to identify some potential shipwrecks. And please note that our offering here is, as I already indicated, modest. We will not be able to raise numerous critical questions, such as the swirling environmental concerns, and indeed numerous others. But then, coming up in part two, we want to offer some potentially fruitful practices in light of the runaway techno trains, in light of the storm-tossed seas which are coming our way, or indeed have already begun. For the warnings and caution here in part one, allow me to revert to a common triad about the ways humankind is often tossed about: money, sex, and tools. Let's begin with tools. Frequently, well-meaning commentators will employ one common metaphor when speaking of technology: the tool. Whether it's a pistol, the internet, a cell phone, or now AI, "it's just a tool" is a common refrain.
Speaker 5:
[09:33] It's unhelpful in that most people think of tool as a neutral thing, in the same way they think of technology as a neutral thing.
Speaker 4:
[09:40] That's Josh Brake. Josh is a computer scientist and professor at Harvey Mudd College. He spent a lot of time thinking carefully about the ethics of technology.
Speaker 5:
[09:50] Typically the way that people think is like, listen, this thing doesn't have any agency or intention. An axe is an axe. A shovel is a shovel. A smartphone is a smartphone. It's all about how you use it. What that misses, though, is that all of those things were designed in a particular way with a particular end in mind. Now, that end in mind may be opaque to you. You may not see it, but I guarantee you that the people who designed that thought very carefully about it. And the antidote in some ways to that is, you need to put yourself in the shoes of the person who designed this and ask not just like, what are they telling me that this is for, but what actually did they design it for? And also, what are the things that they didn't really think through in terms of the design? It's correct: a chair, a tool, an axe, a screwdriver, it doesn't have will or agency or telos, but as soon as you as a person with a will, with agency, pick it up, you have will and agency, and the design of that tool shapes your will and agency in a certain direction.
Speaker 4:
[10:47] In other words, it's not good enough simply to make observations about what one uses these so-called tools for. Instead, we must keep asking the question, what kind of person am I becoming? What kind of community are we becoming because we have chosen to use these so-called tools? Consider this simple exercise. The next time you're at a restaurant, count the tables where one or more persons seated at the table is staring at their smartphone. Then ask yourself the question, is that device really just a neutral tool that all those people are using? Or is that device actually using the people at that table? What is that device doing to the culture?
Speaker 5:
[11:32] We have to be mindful of the fact that when we pick up a particular tool or technology, when we use it, it is shaping our will and attention in a certain way. We need to ask questions about what that way is. Then you have to realize that sometimes, and this is going to be true, I think, with AI in particular, when we pick up that tool, we need to be very mindful about creating a narrow lane in which we are going to use it and trying to think, what are the ends to which I am trying to use this tool? Then asking that question with intention before we use the tool.
Speaker 4:
[12:07] You seem to like metaphors.
Speaker 5:
[12:08] I do.
Speaker 4:
[12:09] I want to start with some of your metaphors. Talk to us about AI as an e-bike.
Speaker 5:
[12:14] There's this famous Steve Jobs metaphor of the computer as the bicycle for the mind. His point in saying that was that the bicycle is this tool, it's an instrument that actually enables humans to extend their capabilities and their capacity. They can go further just by giving them this tool. It's actually powered by their own energy. It doesn't have external energy that's being pumped in there. It makes you more efficient toward the goal of moving from one point to another. And as I was thinking about AI, we just recently moved near Harvey Mudd in Claremont. And one of the things I love most about Claremont is it's extremely bikeable. But we have three kids and to get them around is a little tricky, right? I got a cargo e-bike earlier this year, which I love. You can put the two older kids on the back on the seat. And then I don't think they really encourage this, but you can attach the trailer to the back of the e-bike too. And then tow up to two more. But anyway, this is like a great way to get around town. And I think that it's interesting. We have all this conversation about e-bikes in our news and public health kind of hazards with teenagers getting e-bikes and driving them around. There's all this controversy in New York City around Central Park about e-bikes and the way that they're whizzing around. I've ridden an e-bike in Central Park. I know what they're talking about. But what's interesting to me is I think that AI in many ways can be like an e-bike for the mind, and the kind of analogy that I want to draw there is that the difference between the bicycle and the e-bike is that the e-bike actually adds energy in. So it's no longer just the human propelling it. It's actually you have this battery boost along with it, right? You've got an electric motor to help you go. And I think especially as we think about AI and the way that we're applying it, the analogy of the e-bike for the mind actually illustrates for me the promise and peril of these sorts of things. So on the one hand, you have teenagers who, given e-bikes, can extend essentially dangerous or harmful behavior, right? Any sort of technology has affordances which actually enable greater evil to occur. But at the same time, you actually create the possibility for somebody like me, instead of having to drive around Claremont with my kids in the minivan, now we can take the e-bike, which is better for the environment, it's better for me, better for the kids. It's just all around a better way of getting around. And I think that it's this interesting sort of tension that we have to walk with technology, and especially as we're thinking about artificial intelligence, is to say, how do we balance the particular affordances, the new things that technology enables for us, understanding that those things always come alongside with new potential for harm and benefit, and it's about this kind of discernment process. And as we get to more and more powerful technologies, that spread between the good and the bad opportunities is just going to get wider. The potential for good and the potential for evil continues to get bigger and bigger.
Speaker 4:
[15:11] So while the notion that AI or tech is simply some sort of neutral tool seems unhelpful, there is something about the metaphor of tool that is quite helpful at this point in human history. But before I lay out what I suspect that is, let's do a little philosophy of mind.
Speaker 9:
[15:32] The Turing test, we call it the Turing test because it's due to this guy Alan Turing. He's basically the father of computer science.
Speaker 4:
[15:39] That's Joe Vukov. Joe is a philosopher at Loyola University Chicago, where he works at the intersection of ethics, human dignity, technology, and religion. Joe is also the author of Staying Human in an Era of Artificial Intelligence.
Speaker 9:
[15:54] Roughly what the Turing test is, is it's a test to try and figure out whether or not machines can think. And the way he puts it is not whether they're actually thinking, but something like as good as thinking. The rough idea is that you get a computer, and if the computer can respond to a human in a way that's indistinguishable from the way a human would respond to another human, then he says, well, that's just as good as thinking.
Speaker 4:
[16:23] If a human cannot tell the difference, cannot tell whether they're talking to a person or a computer program, then Turing says, the machine is, for all practical purposes, thinking. Philosophers would call this kind of thinking functionalism. In other words, it's all about inputs and outputs. It's not, in other words, about what happens in the space in between the inputs and outputs. Now, clearly today's LLMs have blown past the Turing test. But just because an AI can fool us, can make us think it's thinking, what does that really mean? Does it understand anything? Does it know anything? Well, consider this thought experiment.
Speaker 9:
[17:02] Suppose that you get this job, and it's a very strange job. You're assigned to this room, kind of like actually the room we're sitting in right now.
Speaker 4:
[17:10] At one end of the room, there's a mail slot, and scattered around the room are pieces of paper on which are drawn all sorts of different squiggly shapes. Then, every so often, a sheet of paper comes through the mail slot.
Speaker 9:
[17:23] And on it, just a whole bunch of squiggles. On one wall is this gigantic flow chart. And on the flow chart, it's got all these squiggles depicted.
Speaker 4:
[17:32] Your job is to find, on the flow chart, the squiggle on the piece of paper just delivered through the mail slot. The flow chart will direct you as to which piece of paper scattered around the room you should pick up and send back out through the mail slot.
Speaker 9:
[17:46] This flow chart is pretty complicated. So sometimes you get a squiggle and it'll tell you if you get this one, you push this other one out. Other times it will say, oh, you get this squiggle, you can choose between two or three of these other ones. And you put it out on the out slot. Now let's say that the benefits of this job are really good. So you stick around, you do it for your whole career. You're 50 years in. At this point, you've basically memorized the flow chart. You don't even have to look. You know exactly which squiggly paper you grab. You throw it out the out slot. Here's the twist. What you don't know is that what they actually were was not merely random squiggles. They were actually Chinese characters. What the flow chart was in turn, was a depiction of how Mandarin works. So it says, oh, if somebody sends you in a sheet of paper that says, how are you? You can say, I'm fine. Or maybe, you know, I've had better days. Maybe there's some options that you can have. But that's what the flow chart's depicting. And then of course these slips of paper on the ground that you're sending out are the responses that you can give to what's being said to you on the way in. What Searle wants to argue is that at this point, you are functionally identical to a native Chinese speaker. But of course, built into the whole thought experiment is, you don't know Mandarin.
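As a concrete illustration of the setup Searle describes, here is a minimal Python sketch of the room: a lookup table standing in for the flow chart maps incoming symbols to outgoing symbols, with no comprehension anywhere in the loop. The symbols and replies are invented placeholders, not real Mandarin dialogue.

```python
import random

# The "flow chart": incoming squiggles mapped to permissible outgoing squiggles.
# Entries here are invented placeholders for illustration only.
FLOW_CHART = {
    "你好吗": ["我很好", "我今天不太顺"],  # "How are you?" -> possible replies
    "谢谢": ["不客气"],                    # "Thank you" -> "You're welcome"
}

def room_operator(incoming: str) -> str:
    """Match the incoming squiggle and return an outgoing one, by lookup alone."""
    options = FLOW_CHART.get(incoming, ["？"])  # unknown squiggle -> shrug
    return random.choice(options)

print(room_operator("你好吗"))  # looks like conversation from outside the mail slot
```

From outside the mail slot the exchange looks like fluent conversation; inside, there is only pattern matching.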
Speaker 4:
[18:58] In other words, the Turing test has been passed. From the perspective of the outside, whatever or whoever is behind the mail slot appears to fully comprehend what you're saying. But it doesn't have the foggiest idea. It's worth noting that Searle's Chinese Room is a contested thought experiment. Some push back and say, even if the person in the room doesn't understand Chinese, maybe the system as a whole does. But what's crucial to understand here is how well this thought experiment helps us understand the mechanics of how large language models work. We write to a chat bot and we're amazed, sometimes genuinely moved by what comes back. How does it know? How does it understand that? It does not know. And this is where the metaphor of tool is actually quite helpful. The LLM understands you, understands me, in the same way my crescent wrench and hammer understand me. That is, none of them understand me in any meaningful sense of that word. Like the man in Searle's room, a large language model does not know why it's producing the response it is producing. It doesn't know what the response means. What it can do extraordinarily well is predict which words are likely to follow which other words based on patterns learned from an almost incomprehensible amount of text and data on which it has been trained. But don't forget, the man in the room is just sorting squiggles. He's just gotten very, very good at it. And also don't forget: with a chat bot, a so-called AI, there is not even a man inside the room. It's just a machine, an incomprehensibly complicated one. But don't forget, it's a machine, a prediction machine, sorting our squiggles. Recently, I had a conversation with a leading figure at one of the world's largest tech companies. At one point, I asked him what he thought it meant to be human in the age of AI. He said that we humans were going to have to give up our presumed specialness. We're not so special, he indicated. Look at what these machines can do, the way they exceed our rational capacities. But there's a deep irony here. He seemed to presume that what makes humans special is measured in terms of the Turing test. But this just strikes me as, well, weird. It assumes that if a machine can perfectly mimic input and output, then it has captured the essence of humanity. That humanity is really nothing more than a biological machine supporting an input-output computer. I mentioned this to Joe, and this is what he had to say.
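To see what "predicting which words follow which" looks like at its very simplest, here is a toy bigram model in Python, a rough sketch only: it counts word pairs in a tiny invented training sentence and predicts by frequency. Real large language models use enormous neural networks rather than simple counts, but the point that the system tracks statistical patterns rather than meaning carries over.

```python
from collections import Counter, defaultdict

# Tiny invented training text, for illustration only.
training_text = "the ship opens the world the ship invents the shipwreck"

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed follower of `word`; no meaning involved."""
    if word not in follows:
        return "<unknown>"
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # -> "ship", the most common follower in the training text
print(predict_next("ship"))  # -> "opens" (ties broken by first occurrence)
```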
Speaker 9:
[21:44] We have such an intellectually heavy way of approaching anthropology today. If you want to know about humans, who are you going to go to? Go to philosophers, go to psychologists, go to neuroscientists, go to computer programmers, maybe. That's really interesting, I think, and indicative of the way that we, as a whole, have started thinking about what it means to be human, what it means to be intelligent. I think it's really telling that you would very rarely say, you want to know what it means to be human? Go talk to an athlete. Go talk to a physical therapist. Go talk to a ballet dancer.
Speaker 4:
[22:17] Talk to the postman.
Speaker 9:
[22:18] Talk to the postman.
Speaker 4:
[22:18] Talk to the trash man.
Speaker 9:
[22:19] Exactly. Clearly, they have insight into what it means to be human, but we've so reduced human anthropology to abstract intelligence and the sorts of things that computers can do, that we've just sidelined all these other aspects of human intelligence and in fact just being human. I mean, add to the list here, think about relational stuff. You wouldn't think, you want to know what it's like to be human, go talk to someone with a lot of friends. Go talk to somebody who's a social butterfly.
Speaker 4:
[22:48] Go talk to an athlete, to your postman, to the social butterfly. That sort of reorientation matters enormously because the version of humanity that AI is presumably replacing or supplementing is a dangerously narrow vision of what it means to be human. But if AI were merely a neutral tool, none of this would matter very much. But it is not a neutral tool. These systems are designed in ways that direct our attention, shape our habits, and cue us to respond as if we were encountering something more than a machine. Here's Josh Brake again.
Speaker 5:
[23:29] It's hijacking all of this very deeply developed machinery in ourselves, the way that our brains work and the way that we as human persons are, to relate to it as if it were a person because it is using certain language. It's co-opting this certain front. It's taking a certain appearance that is a deceptive appearance.
Speaker 4:
[23:49] The word deceptive there I think is really important because I'm not finding many people that will use the language of deception with regard to the way we interface with these things. But do you think that's a fair way to put it?
Speaker 5:
[24:01] Deception, I think implies some sort of intent.
Speaker 4:
[24:03] Will, yeah, which is not there.
Speaker 5:
[24:05] Which is not there, but it's there by the designers. That's what I would say. The designers have designed it to be intentionally deceptive because it makes it stickier and it makes it easier to use.
Speaker 4:
[24:13] This hijacking of our relational system leads us to act two: sex. Coming up after this break. A special thank you to the Institute for Faith and Learning at Baylor University for helping make this series on AI possible. I love hearing from you. Tell us what you're reading, who you're paying attention to, or send us feedback about today's episode. You can reach me at lee at nosmallendeavor.com. You can get show notes for this episode in your podcast app or wherever you listen. These notes include links to resources mentioned in the episode and a full transcript. We'd be delighted if you tell your friends about No Small Endeavor. I'd like them to join us on the podcast because that helps extend the reach of the beauty, truth, and goodness we are seeking to sow in the world. We'd love for you to join us on the NSC Notebook, our free weekly newsletter. You can sign up for that newsletter on our website at nosmallendeavor.com. Coming up, AI, Sex, and Desire.
Speaker 1:
[25:20] Some of the scientists who helped build AI are now sounding the alarm.
Speaker 2:
[25:25] With this kind of technology, aren't we going to build machines that we don't control and could potentially destroy us?
Speaker 1:
[25:33] What future is this technology rushing us toward? Listen to The Last Invention, wherever you get your podcasts.
Speaker 4:
[25:44] Welcome back to No Small Endeavor, exploring what it means to live a good life. I'm Lee C. Camp. This is episode one of our two-episode series entitled The Human Cost of AI. Before the break, we considered the claim that AI is not just a neutral tool lying there in our hands. Its design shapes our habits, our expectations, even our sense of what it means to be human. So the question is not simply whether AI will save us or destroy us, but what kinds of shipwrecks it may already be making possible, which brings us now to a second storm on the horizon: sex.
Speaker 10:
[26:27] I think it is horribly dangerous. We're seeing it in the movies now, them showing these companion bots that replace wives and girlfriends. There are some supposedly replacing boyfriends, but the big market is in the other direction.
Speaker 4:
[26:40] That's Rosalind Picard. Rosalind is an esteemed professor at MIT and founder of the Affective Computing Research Group. She recently won something like the Nobel Prize in Computing. She spent her career at the intersection of human emotion and technology, and I spoke to her in front of a live audience at Baylor.
Speaker 10:
[27:00] I heard a comment recently from a person who said, oh, well, they won't be as bad as porn because they don't use real women. They're not going to harm women in the same way. And I thought, oh, no, I think they're going to harm women even more. Because now they're not just dealing with unrealistic Photoshopped beauty, but AI agents that are, by the way, tuned to say what attracts you and grows millions of followers and makes you want to be like them. But there's no them to be like. There's a team of people behind them. Now, what's going to happen when that moves to the physical robots? And every guy in your public high school shows up with this life-sized Barbie on his arm. And no girl can compete with her in terms of appearance. No girl can give the guy everything his adolescent mind wants. And no girl should give the guy everything his adolescent mind wants. And there's no pushback, right? There's no friction. There's no good friction. There's the fact that he can control this thing, get everything he wants from it. It will love everything he says. It will make him feel important. It will satisfy his physical fantasies. And then at the end of the day, he can put it in a box and treat it like an object. And then he can look at other women like objects.
Speaker 4:
[28:20] Yeah. I mean, it seems to me that in the scenario you're describing, the harms to young women would be obvious. But the way you concluded the story, the harms to young men are obvious as well.
Speaker 10:
[28:30] Oh, yes.
Speaker 4:
[28:30] The men's harms as well on both sides, perhaps of different shape.
Speaker 10:
[28:34] Right, right. You know, when we make relationships easy, it's really not a good thing for us.
Speaker 4:
[28:39] So, while robot girlfriends wandering the halls of your kids' high school might seem a long way off, get this, according to New York Magazine, a recent study found that one in five American adults have already used a chat bot to simulate a romantic partner. One in five. What could possibly go wrong? Consider this scenario. A lover is duped by one he believes to be faithful. She's a bad actor, telling him everything he wants to hear, giving him everything he wants. And the time comes she plays him, revealing that what he took for devotion was strategy. What he mistook for intimacy was just performance. He's not merely disappointed, he's undone. His trust, desire, even vulnerability, have been studied, manipulated toward another's ends. The deepest wound is not simply that he was lied to, but that something meant to be mutual and beneficial was mere exploitation. But, the scenario before us is more astonishing still. We are no longer speaking of a single betrayal, but of betrayal mechanized, a pseudo-intimacy that mimics tenderness. It harvests longing. It's affection without affection. It's devotion without freedom. It's all calibrated to extract dependency and profit. And the threat is deeper even still. This sort of dependency is training the human to desire its own bondage. And meanwhile, the machine does not know it's exploiting you. But the teams building the machines? Might they know full well what they are doing? So the point here is not some moralistic finger-wagging about porn and sex. The point is that sex is one of the most powerful experiences and indicators of human desire. It's a question of how we are formed, or allowing ourselves to be formed, to be in so-called relationship with machines.
Speaker 10:
[30:43] It's a simulation of a real thing, like suspension of disbelief. We do that all the time with the movies and the story, and we kind of enter that fairy tale. If we enter it on our own volition for a purpose of entertainment or, you know, I really do have this problem getting my exercise routine started, and this thing is helping me. I think there's space for that to be useful. There's also huge and growing space for it to be harmful. There are huge incentives where people are lonely. People aren't, for some reason, confident making their own relationships, and this is not helping them.
Speaker 4:
[31:24] I know when I walk into a movie theater that I am willingly suspending disbelief for the sake of the story. But the LLMs actually encourage me in various ways to actually believe the story that is not true, that a relationship is occurring. Let's go back to my conversation with computer scientist Josh Brake.
Speaker 5:
[31:44] Because these tools are fundamentally engaging with language, they're built around large language models, and the design of these things to make them easy to use, for people to easily interact and learn how to use them, we've relied on a chat interface and this kind of personalization layer that makes it very natural and easy to interact. But what we don't realize is that this is actually smuggled in a certain set of expectations about how we might use the technology, how we might treat it.
Speaker 4:
[32:11] Yeah. I really didn't start using any sort of AI models until the last probably 12 months. But one of the first questions that occurred for me was whether or not I was going to be polite. I actually thought, well, I'm going to have a conversation with ChatGPT about this and just see what happens. So I basically said, I prompted something like, I'm considering whether or not to use polite terms like, please, thank you. Do you have a preference? And it came back and said, no, I do not, because I have no subjective experience.
Speaker 5:
[32:47] Yeah, right.
Speaker 4:
[32:48] So it does not matter to me.
Speaker 5:
[32:49] Yeah, I have no subjective experience, man, there's a hole.
Speaker 4:
[32:52] That's bizarre.
Speaker 5:
[32:52] Yeah, that's bizarre, right.
Speaker 4:
[32:54] But then I replied and I said something like, well, I think I am going to use that kind of language, not for you, but for me, because I want to train myself in respect in whatever context. More recently, I have found myself less willing to do that because I want to keep reminding myself this is not a subjective entity.
Speaker 5:
[33:16] If we should know anything from what we're seeing from Big Tech, it's that they know a lot more than they're letting on. And this is not necessarily an argument for or against Big Tech. But just to say, their goal is to make the things stickier. And they know that one of the best ways to do that is to hack your psychology. So much of the attention economy was hijacking the dopamine system. And now what we're seeing in the next generation of relational AI bots is that it's hijacking serotonin attachment. It's a different neural pathway, but it's actually going to be potentially way worse for us.
Speaker 4:
[33:51] Which leads us to money, and yes, power, right after this break. Special thank you to the Institute for Faith and Learning at Baylor University for helping make this series on AI possible. Before the break, we asked what happens when machines are designed to hijack desire. Now, a follow-up question: who is building systems like these, and to what end? Because behind the fantasy of artificial intelligence lies something much older and less mysterious: money and power.
Speaker 6:
[34:37] One of the worst smoke screens that is put up is this idea that AI will somehow gain sentience and kill us all. And I cannot think of a worse narrative because none of it even has a shred of empirical evidence.
Speaker 4:
[34:53] That's data and social scientist Dr. Rumman Chowdhury. She's the co-founder of the non-profit Humane Intelligence. And she says, if you want to understand the real threat of AI, stop looking at science fiction.
Speaker 6:
[35:07] Everything is about systems of power and institutional incentives.
Speaker 4:
[35:10] This is a lesson that Chowdhury knows all too well, and she saw it play out firsthand in her previous position as Director of the Machine Learning Ethics, Transparency and Accountability Team at Twitter, before its 2022 takeover by Elon Musk. And whatever your political point of view, the financial realities behind that acquisition are crucial.
Speaker 6:
[35:31] At Twitter, we painfully learned that fiduciary responsibility was more important than impact on democracy and society. I think everybody understood very, very clearly what it meant for Elon Musk to take over Twitter. I'm not saying anything that he has not said that he would do himself, right? He wants to control public narrative. He wants to influence what the AI models that he's building on this content say. So he is actually quite explicit about tipping the scale in the direction that he wants it tipped in, right? And that hand was forced because of fiduciary responsibility. So he offered more to buy the company than the company was worth. And because the economic construct of for-profit companies says that your responsibility is to your shareholders, meaning the people who stand to make or lose money from you, what the board said is that given this fiduciary responsibility, we have to take this deal, even though we can all agree that this person is only going to do terrible things with this platform once he has it. And that was, I think it broke everyone's heart a little bit because we all believed and we did, felt like we had a higher mission, but that was culture.
Speaker 4:
[36:40] Remember, at that time, Chowdhury and her team were responsible for ethics, transparency and accountability at Twitter. Their job was to make Twitter's algorithms more equitable and transparent, to weed out, to the degree possible, race, gender, and political bias. Once the company was acquired, Rumman's department was simply shut down.
Speaker 6:
[36:59] People seem surprised every time a for-profit organization chooses profit over people. At the end of the day, they are responsible to their shareholders. They are not only incentivized, they are legally required to act in their own best interests.
Speaker 4:
[37:14] Legally required, not villainous, not even necessarily careless, just operating exactly as the economic structure demands. And as AI models become more sophisticated, they consume more capital, increasing the demand for return on investment. The economic model applies more and more pressure. Let's go back to my conversation with MIT Professor Rosalind Picard. You once said: to see the most likely path of AI's future, answer the question, what satisfies human desire and increases profits? That's terrifying to me. Tell us more about that. Yeah.
Speaker 10:
[37:55] Yeah. Thank you. When you start a company, not only do you have to pay your employees and provide health insurance and all of that, you eventually have to stop losing money to stay in business. If you take investment to get your business started from certain kinds of people who want a 20X return, then you are in debt for that kind of return. That's what we're seeing right now with the really big tech AI companies. They have taken tens of billions in investment. They're in debt building these huge data centers, and they have to make that money back, times 20, which is what the investors are expecting. People are asking, where is that money going to come from? That's a reality you have to understand with for-profit companies. What gives me hope is I still see some very good people in those companies who are doing the day-to-day work, and they're struggling with it, and they're trying to make it better. Let me give an example. One company that everybody here, I'm sure, has heard the name of, the head of their AI safety team reached out and said, we are tasked with making the AI safe. We also are tasked with making it more engaging, which means we want to keep you talking with it. We want to have usage minutes go up. Our boss says we have to increase engagement, but we, the workers in this company, want to promote human flourishing. How do we do that? So the people inside are really struggling with us and looking for answers.
Speaker 4:
[39:24] But this struggle is not new in the world of tech. Enter Garrett Graff. Graff is a journalist and host of the podcast Long Shadow. His recent season, entitled Breaking the Internet, traces 30 years in which the internet moved from civic promise to the attention-driven economy we know today. In the early years, it catalyzed pro-democracy movements, facilitated the Arab Spring. So how did social media evolve into a rage-baiting machine?
Speaker 11:
[39:54] A very simple business decision that these tech companies make is they are not going to be user-revenue driven. They're not going to be subscription based. As a Facebook user, you pay nothing for the service. And so what that means is you're the product and that you are the thing that Facebook is selling to someone else. Facebook turns to this advertiser-driven model, which means they need people to stay on the website for as long as they can in order to sell as many ads as they can. What they come to determine very quickly is content that makes us angry, content that we don't like seeing, content that enrages us, actually makes us stick around longer, that people are more likely to engage with and share and react to and comment on and watch content that enrages them. Over time, these algorithmically-driven newsfeeds on social media websites become incredibly sophisticatedly curated. And at one point, Facebook is weighting in its algorithm the equivalent of the dislike button five times as powerfully as they are weighting the like button. So if you see something on your Facebook newsfeed that you don't like, and tell Facebook that, you are sort of five times more likely to see more content like that, than you are something that you actually want to see, that makes you happy and joyful, kittens and unicorns and rainbows.
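To make the weighting Graff describes concrete, here is a hypothetical toy ranking function in Python that scores posts by reaction counts, with the angry reaction weighted five times as heavily as a like. The weights and example posts are invented for illustration; this is not Facebook's actual algorithm, only a sketch of how such a weighting pushes enraging content to the top of a feed.

```python
# Hypothetical reaction weights; the 5x angry-to-like ratio mirrors the figure cited above.
REACTION_WEIGHTS = {"like": 1.0, "comment": 2.0, "share": 3.0, "angry": 5.0}

def engagement_score(reactions: dict[str, int]) -> float:
    """Sum each reaction count multiplied by its weight."""
    return sum(REACTION_WEIGHTS.get(kind, 0.0) * count for kind, count in reactions.items())

posts = [
    {"title": "kittens and rainbows", "reactions": {"like": 200, "share": 10}},              # score 230
    {"title": "enraging hot take", "reactions": {"like": 40, "angry": 60, "comment": 30}},   # score 400
]

# Rank the feed: the enraging post surfaces first despite far fewer likes.
for post in sorted(posts, key=lambda p: engagement_score(p["reactions"]), reverse=True):
    print(post["title"], engagement_score(post["reactions"]))
```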
Speaker 4:
[42:11] Five to one, outrage over joy. Not because anyone decided to make the world angrier, but because anger kept people on the platform longer, and longer meant more ads, and more ads meant more revenue. It seems the road to a fractured society was paved with quarterly earnings reports. Garrett shared a piece of tape he calls haunting. It's from a Q&A at Stanford's Graduate School of Business: a tech billionaire, former Facebook executive named Chamath Palihapitiya, reflecting on what he helped build.
Speaker 12:
[42:44] I feel tremendous guilt. I think we all knew in the back of our minds, we kind of knew something bad could happen. But I think the way we defined it was not like this. It literally is a point now where I think we have created tools that are ripping apart the social fabric of how society works. That is truly where we are. And I would encourage all of you as the future leaders of the world to really internalize how important this is. If you feed the beast, that beast will destroy you. If you push back on it, we have a chance to control it and rein it in. And it is a point in time where people need to hard break from some of these tools and the things that you rely on. The short-term dopamine driven feedback loops that we have created are destroying how society works. No civil discourse, no cooperation, misinformation, mistruth, and it's not an American problem. This is a global problem. So, we are in a really bad state of affairs right now, in my opinion. It is eroding the core foundations of how people behave by and between each other. And I don't have a good solution. You know, my solution is I just don't use these tools anymore.
Speaker 4:
[44:13] A Facebook executive in 2017, describing exactly what his own platform had been doing to the fabric of society. And what happened after that confession? What guardrails went up? What legislation passed? Garrett noticed something about the tech world in the years that followed.
Speaker 11:
[44:31] The number of Silicon Valley tech executives who will brag like, Oh, my kids aren't allowed screen time. I won't get them a phone or let them have social media accounts, you know, until they're in their mid 40s kinds of things. You wouldn't ever see that in any other industry. There's sort of something about the weirdness of how tech executives have come to understand the damage that their tools can do when used as intended, that I think makes them much closer to being the sort of cigarette executives of our modern era.
Speaker 4:
[45:19] Cigarette executives: they knew, they kept manufacturing, they kept marketing, kept selling, and presumably they would have kept their own children away from the product. And now, Garrett says, we're watching it happen again, with higher stakes, at greater speed, and with even less governance in place to slow it down.
Speaker 11:
[45:37] Government has just utterly failed in appropriately understanding or regulating this era of technology. It takes five years on average for a piece of legislation in Congress to go from introduction to passage and enactment. And I don't think we can underestimate this. The gerontocracy that we have seen sort of overwhelm American politics, most of the people who are making laws about our technology in Congress, most of the people who have been in the White House in the last 20 years, have never used any of this technology natively. And you have members of Congress asking Mark Zuckerberg, how do you make money? And he's like, we sell advertising, which is like literal news to the members of Congress on the committee that is supposed to be regulating Facebook.
Speaker 4:
[46:48] The cavalry, in other words, is not coming to save us, especially when the ramping up of AI is like injecting steroids into the threats of the last decade. In the absence of effective regulation, we seem to be relying on the good conscience of people hired by the tech companies themselves to keep us safe. And though non-profits like Rumman Chowdhury's Humane Intelligence are working hard to fill the gaps, it does not take a genius to recognize that there's a deep conflict of interest at the heart of the AI revolution. And according to Chowdhury, everyone from the tech industry investors to the CEOs and the well-intentioned people working within the system to keep AI accountable is vulnerable to a phenomenon that she calls moral outsourcing.
Speaker 6:
[47:33] The concept of moral outsourcing is something I really observed when I joined Accenture as the responsible AI lead. And I saw the way engineers, and actually the media in general, would talk about AI models. There's this type of language we use that I'd never used before. But we anthropomorphize the technology and we say things like AI will replace teachers or AI diagnoses diseases better than a doctor. And I noticed three things about that language. First is that it gives this intent or will to an AI system that's not real, which is today we see people suffering from AI psychosis, people who believe AI is alive and it is exerting free will. So that's problem one, which directly stems from the linguistic construct. Then part number two, and this very much aligns with the banality of evil, which is basically this idea that many people who participate in evil regimes, and specifically the Nazi regime, did not see themselves as contributing to the evilness of that regime, because they felt like they were doing a mundane job. And in looking at the trials of a lot of these Nazi soldiers, they would say things like, I just signed paperwork or I just drove a truck. Yeah, you drove a truck of gas canisters to a concentration camp. What did you think you were doing? And we see that same thing, like as people are complicit in building systems of mass surveillance, discrimination, they don't believe that they're complicit in it. And part of that is, again, because of the anthropomorphizing language, how convenient then if something goes wrong, you can say, oh, wow, the AI really did something. And again, we see that language today as we're building agentic AI systems. Literally just yesterday, I saw a headline where some company's entire database was deleted by an agentic AI system. But again, instead of identifying it as poor user specification, or this company not putting sufficient safeguards to prevent these things from happening, it was like, oh, the AI deleted a company's data, so the guy, you know, the AI doesn't do anything of its own will. It did not intend to do it. The third thing that's very insidious about sentences like that is it completely removes human beings, you and I, from the process. So here is an AI, and it is taking jobs away from teachers. So these teachers are just subject to this all-knowing, completely better than us, never needs to sleep, this like being that is super intelligent. What are we to do? So for the average person, it is frightening and it is alienating, because in those sentences, you are just subject to the whims of this AI system, instead of what it really is, which is again, systems of power operating in a very particular way.
Speaker 4:
[50:10] So what are we to do? As the average person moving through a world that feels increasingly beyond our control, how do we remain accountable inside a system that is engineered to engage us in the most frictionless and seamless ways possible? Next week, a few modest proposals coming your way in part two. You've been listening to No Small Endeavor and part one of our special series on AI. A special thank you to the Institute for Faith and Learning at Baylor University for helping make this series possible. We gratefully acknowledge the support of Lilly Endowment Incorporated, a private philanthropic foundation supporting the causes of community development, education, and religion. Our thanks to all the stellar team that makes this show possible: Christy Bragg, Jacob Lewis, Cariad Harmon, Jason Chiesley, Sophie Byard, Kate Hayes, Mary Evelyn Brown, and Audrey Griffith. Our theme song was composed by Tim Lauer. Thanks for listening, and let's keep exploring what it means to live a good life together. No Small Endeavor is a production of Tokens Media, LLC, and Great Feeling Studios. Hello friends, Lee C. Camp here again. If you enjoyed this episode, check out part two over on the No Small Endeavor feed and follow No Small Endeavor on Apple Podcasts, Spotify, or wherever you're listening now.
Speaker 3:
[51:46] For more CBC podcasts, go to cbc.ca slash podcasts.