transcript
Speaker 1:
[00:01] America's BEST Network just got bigger. Switch to T-Mobile today and get built-in benefits the other guys leave out. Plus, our five-year price guarantee. And now, T-Mobile is available in US Cellular stores. Best Mobile Network based on analysis by Ookla Speedtest Intelligence data 2H 2025. Bigger network: the combination of T-Mobile's and US Cellular's network footprints will enhance the T-Mobile network's coverage.
Speaker 2:
[00:24] Price guarantee on talk, text, and data.
Speaker 1:
[00:26] Exclusions like taxes and fees apply. See tmobile.com for details.
Speaker 2:
[00:30] There's nothing like your first Mac. Here's what people online are sharing. At Dr. Rain says, Everything is just so smooth and fast, I still can't get over it. Syncing stuff between my phone and this is just chef's kiss. At Mr. Incredible 488 says, Apple silicon basically cures low battery trauma. That's how they felt with their first Mac. How will you? Introducing the all-new MacBook Neo, an amazing Mac at a surprising price. Find out more at apple.com/mac.
Speaker 3:
[01:02] This is the 99% Invisible Breakdown of the Constitution. I'm Roman Mars.
Speaker 4:
[01:07] And I'm Elizabeth Joh.
Speaker 3:
[01:08] Today, we're discussing articles six and seven.
Speaker 4:
[01:12] Roman, why don't we go through both articles and let's save the most important part for last.
Speaker 3:
[01:16] Okay, because there's a lot of unimportant parts. Six and seven, go for it.
Speaker 4:
[01:22] So why don't we start with article seven? That's the ratification clause. It may be the least important to talk about today, but actually crucially important for the Constitution itself.
Speaker 3:
[01:31] Yeah, sure.
Speaker 4:
[01:32] We needed these states to vote on it. And the ratification clause says that nine states would be enough to ratify or make the Constitution itself a legitimate document. So that in fact happened. It comes into effect on June 21st, 1788, when New Hampshire became the ninth state of the 13 to ratify the Constitution. So it kind of has its own clause to make sure the document is legit.
Speaker 3:
[01:57] Got it. Yeah, that makes sense. Okay.
Speaker 4:
[01:59] Yeah. So that's pretty clear. No real important Supreme Court cases on it. So let's turn back to Article 6.
Speaker 3:
[02:05] Okay.
Speaker 4:
[02:06] Article 6 is a mishmash of a bunch of different things. So Article 6, Clause 1, talks about debts that the United States is already obligated for and still has to pay.
Speaker 3:
[02:19] So why is that there?
Speaker 4:
[02:20] Well, you know, when the Constitution was drafted, the country still had debts and engagements that were left over from the Revolutionary War, and creditors were kind of nervous. What if you create a Constitution that wipes out all of the debt? That would be pretty convenient.
Speaker 3:
[02:36] Got it. Got it.
Speaker 4:
[02:37] So in order to assuage those nervous creditors, this Constitution, our Constitution, actually says, don't worry, we understand that we have these debts and we are going to pay these debts. So today it's really mostly of historical interest. It doesn't come up, because we did in fact pay our debts. I will say, what is interesting is that clause one, originally as it was drafted, said that the United States would both be obligated to pay the debts and would have the power to pay the debts. But that second part got taken out of Article 6 and put into Article 1 as part of Congress's spending authority. So that very, very important part today is actually in the larger chunk of the Constitution we cite all the time, which is about why Congress has the ability to pass laws, and very often it's because of spending authority.
Speaker 3:
[03:27] Wow. Well, that's really fascinating actually. Yeah.
Speaker 4:
[03:30] So one little switch-around is going to make a big difference.
Speaker 3:
[03:33] It's going to make a huge difference. Okay. Okay. So that's the first clause.
Speaker 4:
[03:36] Okay. Let's turn to clause three of Article six. Do you want to read it?
Speaker 3:
[03:39] Okay. The senators and representatives before mentioned, and the members of the several state legislatures, and all executive and judicial officers, both of the United States and of the several states, shall be bound by oath or affirmation to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the United States.
Speaker 4:
[04:02] So this is known as the no religious test clause.
Speaker 3:
[04:05] Great. I like this clause.
Speaker 4:
[04:07] Exactly. It's kind of a no religious test for anybody taking office. And in fact, it's the absence of religious tests that makes us understand that this is successful, right?
Speaker 3:
[04:17] Yeah.
Speaker 4:
[04:17] You don't require anybody to take a religious test.
Speaker 3:
[04:19] Yeah.
Speaker 4:
[04:20] But at least formally.
Speaker 3:
[04:22] I mean, there's no formal test, but you can kind of feel it in there. The fact that the representation of other religious faiths is not super common inside of our public institutions.
Speaker 4:
[04:32] True, true. But there is a big difference when you're formally required to do it. And in fact, this clause comes from traditions going back to England. So for instance, in England in the 17th century, all government officials had to take an oath that they would help establish the Church of England and also disclaim Catholicism and the Pope. And so the idea is we have this common law tradition that comes from England. By the time you have the Colonies and the Articles of Confederation, it was pretty common for government officials to be told that they had to take some kind of religious affirmation. Of course, not for the Church of England, but some kind of I-believe-in-God sort of test.
Speaker 3:
[05:14] It's notable that it's absent.
Speaker 4:
[05:15] It's notable for its absence. And this clause as well has very little in terms of Supreme Court interest or case law today, and that's for a totally different reason. You'll notice this is essentially about religious freedom, right? It shouldn't matter whether you are a practicing Catholic or Muslim or Jew to be able to take a public office. But the reason this clause doesn't get much attention is that free exercise cases today come up under the First Amendment rather than this clause. So not too much there as well.
Speaker 3:
[05:47] Yeah. So I noticed in a recap, you had Article 6 clause one and Article 6 clause three, but we have skipped Article 6 clause two. So what is that?
Speaker 4:
[05:57] Well, Article 6 clause two contains what's called the Supremacy Clause. Why don't you read it?
Speaker 3:
[06:04] Okay. This Constitution, and the laws of the United States which shall be made in pursuance thereof, and all treaties made, or which shall be made, under the authority of the United States, shall be the supreme law of the land.
Speaker 4:
[06:18] And that's referred to as the Supremacy Clause.
Speaker 3:
[06:21] So why is the Supremacy Clause so important?
Speaker 4:
[06:23] Well, historically, the Supremacy Clause responds to a very particular problem, and that is before the Federal Constitution, the Articles of Confederation, which was the predecessor document, had no similar provision saying that federal law is supreme. And you might wonder, well, what does that really mean? Well, think of it this way. If you have state laws on a topic and federal laws on the exact same topic, which one are you supposed to follow? If there's no clear instruction, well, maybe you just follow whichever one you want. And that's kind of what happened. Before the Constitution, state courts sometimes just didn't think that federal law was binding, so they didn't apply it. They applied state law. That's kind of a problem, right?
Speaker 3:
[07:04] Yeah.
Speaker 4:
[07:04] So the Supremacy Clause, in one fell swoop, with this particular clause, gets rid of that uncertainty or ambiguity. The Supremacy Clause simply says, look, federal law, whether we mean the Constitution, federal statutes, or federal treaties, is supreme over any conflicting state law. So the idea here is that you have this very important structural part of the Constitution: federal law is supreme.
Speaker 3:
[07:34] So what does that mean, practically speaking?
Speaker 4:
[07:36] Well, what that means is, you can think of supremacy as stating the simple fact that federal law is supreme. But arising out of supremacy is the idea that Congress now has the power, when it legislates, to preempt, which really means displace or override, any contrary state or local law. So you can think of preemption as being based in the constitutional power of supremacy. Congress doesn't have to exercise preemption, but when it does pass laws in this way, it's very clear that any directly conflicting state or local law has to give way. So that's kind of the genius or the simplicity of the Supremacy Clause. But that's the most simple part of the Supremacy Clause.
Speaker 3:
[08:22] And I take it there's lots of constitutional case law based on the supremacy clause.
Speaker 4:
[08:27] That's right, because things can never be simple, right?
Speaker 3:
[08:29] Yeah.
Speaker 4:
[08:30] So when you think about federal law, sometimes Congress can simply say, we're going to pass a law, and this law will, in the text of the law itself, displace or preempt any similar state law. That's pretty easy. And if that were the only issue, we'd never talk about preemption, right? But the problem is that Congress very often doesn't say. There may be a federal law on a topic and a state law on the same topic, and the federal law doesn't say anything. So in response, the Supreme Court has come up with a whole host of cases, doctrines, tests, ways of thinking about federal preemption, to try and answer the question: what happens when it seems like there are federal and state laws legislating on the same topic?
Speaker 3:
[09:18] So what exactly is supposed to happen when there's a conflict?
Speaker 4:
[09:21] Well, that also is a complicated answer. So it depends on what we're talking about. Sometimes courts will say something like, you know, there are some areas of federal law where the federal interest is so important, so extreme, that we don't want the states to get involved even a tiny bit, even if Congress hasn't specifically spoken to that area. An interest like this would be foreign policy. We don't want the states getting involved in foreign policy, negotiating their own treaties. That would be a bad idea.
Speaker 3:
[09:50] That would be a bad idea.
Speaker 4:
[09:51] Exactly. So those are the easy cases. But in the much more frequent and difficult cases, courts have to answer: there's a federal law on a topic and a state law on the topic. Is it possible to comply with both state and federal law? If it's possible, maybe there is no preemption. No preemption would mean that state law and federal law are both valid. But if the state law is an obstacle to the operation of the federal government's law, or if it's literally impossible, state law says black and federal law says white, you can't do both at the same time, then that's a case of federal preemption. So these are always case-by-case determinations. But preemption is actually really important, because if you think about all of the different areas in which the federal government regulates, everything from the environment, consumer protection, energy, you name it, the states also often legislate in the same areas. And what you will have are individuals or companies that say, well, I want to comply with one, I don't want to comply with both, or am I supposed to comply with both? And that gives rise to preemption. So of all of the areas of law that we've talked about with the Constitution, preemption is probably the most frequently used constitutional law in practice. You can think of constitutional law in the courts as being on a spectrum, right? Maybe we'd put impeachment at one end; we don't talk about it in the courts. And then preemption is all the way at the other. Preemption comes up all the time, because the idea of federal preemption is that it's a possible question anytime the federal government is regulating in a particular area.
Speaker 3:
[11:43] Right, right. Which could be infinite, almost.
Speaker 4:
[11:47] Almost infinite. That's right. Every single area of modern life where the states regulate, very often, though not always, of course, very often the federal government is also regulating.
Speaker 3:
[11:56] And this situation is exacerbated by the fact that modern life continues to go on. There's new laws coming up all the time because there's new technology all the time and there's new things all the time to consider.
Speaker 4:
[12:07] That's right. So whenever you have a new policy problem, a new change in society, there's a race to regulate it, or at least calls to regulate that new development in modern life. So the question is, are states going to do that job? Should the federal government do that job, or should they both do that job? So one way to think about the problem of preemption is for us to pick an emerging area where both the states and the federal government are trying to regulate at the same time. And I think there's no better topic than artificial intelligence.
Speaker 3:
[12:39] Totally, totally. I mean, that's like huge. I don't even know what I think of it.
Speaker 4:
[12:43] That's right.
Speaker 3:
[12:43] So I can't even imagine what states and the federal government are thinking about it at this point.
Speaker 4:
[12:48] That's right. Artificial intelligence is everywhere. It's at the doctor's, it's at the store, it's at school, it's at work. It's kind of a huge problem for government. And that's because AI has the potential to produce these really big benefits for society. But we've already seen that it can have all kinds of harmful effects; it can produce all kinds of major risks for society. You know, everyone's heard that AI makes up facts that don't exist, which people believe and sometimes act upon. Or it can make decisions about people that are really hard for us to explain, and sometimes those decisions are false or misleading.
Speaker 3:
[13:26] Yeah.
Speaker 4:
[13:26] So just like any other problem in society, the states and the federal government are trying to figure out, how do we regulate AI or AI systems? Then that means everything from how do you regulate a chatbot that teenagers use, or self-driving taxis, or how do you regulate autonomous weapons when it comes to wartime?
Speaker 3:
[13:46] Oh my God.
Speaker 4:
[13:47] And so what kind of level of government should be regulating AI? And should the states get out of the way altogether? Now, this seems like a very current topic, and it is, but the larger picture is an old one, and that's a question of federalism. So the narrower a view we have of preemption, the more we're allowing the states to engage in experimentation, for the states to say, hey, we want to try this approach. And California will always take an approach that probably Texas will not, right? And vice versa. Sure. But a very broad view of preemption really is saying, you know what? We want the states to just get the heck out of the way. We want the federal government to be the primary voice in this area. So those are choices that courts have to make. There's nothing obvious about going in one direction or another.
Speaker 3:
[14:40] Yeah. Yeah.
Speaker 4:
[14:42] Because this is a fast-moving and complex topic, our guest for this episode is Dr. Alondra Nelson. She's a scholar of technology and social science, and a leading expert on artificial intelligence. She currently holds the Harold F. Linder Chair at the Institute for Advanced Study in Princeton. She also served in the Biden administration as the acting director of the White House Office of Science and Technology Policy. It was in that role that Dr. Nelson spearheaded what's called the Blueprint for an AI Bill of Rights. We invited her to help us navigate why AI is a challenge to regulate and what to think of the tug of war between the states and the federal government on the topic, especially during the second Trump administration. But we start with Alondra's definition of what exactly AI is.
Speaker 5:
[15:30] I usually use a modified version of the OECD definition, which is a definition that 38 nation states have agreed upon. And it's basically that these are machine-based systems, like lots of statistics, lots of math, and that they make inferences from different inputs and generate outputs. And so the outputs are things like so-called predictions. They are things like recommendations, like your Spotify music recommendations or your Netflix recommendations. And I like to use those two examples because people have different feelings about how good or bad they think their Netflix and Spotify recommendations are. And I think that's kind of a level set for AI. You know, decisions, so there are machines that are helping, if we think about the theater of war, with decisions about targeting people and locations in the theater of war. And of course, with generative AI, AI tools and systems generate content, so text and images and sound. So that's kind of, you know, inferences made from different sets of inputs, almost always a sort of data, whether those are photographs or numeric data, or, you know, quote unquote, all of the internet that was taken into generative AI, and lots of different outputs. So you cross-cut that with the fact that AI systems have different levels of autonomy and adaptiveness after they're deployed. So some can be very static, like, you know, a decision-making or predictive algorithm that might be used in the criminal legal system, which is taking in data and, you know, has a sort of hardwired data set that it's making so-called predictions against. And obviously today we increasingly are being told about things like OpenClaw and AI agents. And so these are more autonomous kinds of AI systems that are, you know, making purchasing decisions for people, coding for them, and the like. So that's a broad definition on purpose, because AI is really broad. And I think we go back and forth between using generative AI as the default for what we mean by AI. But it's this whole suite of things. And if you talk to, you know, a computer scientist or an AI or machine learning engineer, they would say to you that actually, if you think about the world of AI as a sort of set of Russian nesting dolls, generative AI is actually the smallest, right? You've got deep learning, you've got machine learning and all of that. So because generative AI, with things like chatbots, has been made into consumer-facing tools, and that's really how AI came into the public sphere, it's kind of how we think about AI. But there's a lot of other use cases and types, autonomous and more brittle, et cetera, besides.
Speaker 4:
[18:23] Yeah. So, you know, when you hear this, it's a pretty technical set of definitions and products. But I suppose if you're listening to this conversation, someone might think, well, I'm sort of familiar with ChatGPT, which came out in 2022. I've used it a couple of times, but I really want to know, why should I care about this? So what for you are some of the most transformative or really concerning examples of AI that are happening in American society right now?
Speaker 5:
[18:53] So the why-should-I-care is, you know, I think people every day, particularly folks in companies, oversell AI. So that's certainly true. So what might be transformational? Some of the claims, you know, the AI-for-good claims, are true, and I think are either happening or on the horizon. So you can think about, in the medical space, an AI system reading chest X-rays, or being able to flag an early-stage cancer diagnosis, being able to see, you know, a tumor in its very early stages. So that's transformative. And indeed, you know, if we get that right, we're going to be able to do that in a way that's life-saving. It is the case that we still need radiologists and we don't have enough of them. So transformational, but transformational potentially in the intersection of humans working with the AI, right? So, you know, other cases certainly are in agriculture. So farmers, whether it's sub-Saharan Africa or Kansas in the United States, are using forms of computer vision on a phone app that can help them identify whether or not a crop is being blighted. We're already using AI in traffic flow to direct traffic and retime stoplights, so you can cut commutes or you can redirect traffic. These are all, to go back to my definition, systems that take an image or a data pattern or a question, make an inference, and generate an output that hopefully helps to augment what humans are doing, maybe improve what humans are doing, and maybe help humans make better decisions. So those are, I think, cool things. I mean, we've just been watching Artemis 2. That is full of AI computer simulations that helped them track how they were going to do this incredible 10-day journey. Also cool. Concerning, we are living with a lot of that right now. We have got this kind of great race happening in the world of looking for a job, right? So you can now more easily do your resume and your cover letter using AI, but now AI systems are being used to screen your resume out. So people are now sending dozens and dozens of resumes out on a given day, but they are getting screened out right away. So the downside of this is that it might filter people out of an applicant pool before anybody ever sees your name or anybody ever actually looks at your credentials, and nobody will tell you why, potentially. There's some research that suggests, because again, we're talking about input data and making inferences from it, that in things like employment, a lot of the input data is historical data. So in fields in which you've had historic racial discrimination or gender discrimination, if you're looking for the resume of an excellent computer scientist, a lot of algorithms have been shown to kick people out. So people are losing access to opportunities, with real implications for their liberties and their rights. There are so-called predictive policing tools, where the algorithm says that you should police somewhere more because it's been policed more historically, not because there's actually new information suggesting that that should be the case. And then in the generative AI space, because I live partly in New York City, the Adams administration spent nearly, I think, a million dollars on this government chatbot, this NYC bot or NYC chat that was supposed to... The idea of it was good. It was supposed to help small businesses navigate all sorts of city regulations, which in a place like New York City are voluminous. 
But it was telling them to violate the law. So it was giving advice like how to skim workers' tips, or how to discriminate against your tenant if you're a landlord. I mean, it was fairly outrageous, and I think well beyond the kind of whimsical term, hallucination, that we use, which often suggests that it's not a really big deal. And we shouldn't be surprised that the Mamdani administration, I think, canceled that contract and got rid of the chatbot. But the concerning aspects, I think, also just give you a sense of all of the places in our lives, all of the sites simultaneously, that are being shaped in some way by some form of algorithmic decision making or management.
Speaker 4:
[23:19] Yeah. And I guess one of the ways to approach that, right, is to say not just, oh, these are technical problems, but, since you're mentioning all of the different ways that individuals might feel powerless or just confused about what's going on, you can kind of use a civil rights approach. And, you know, of course, in the Biden administration, you led the OSTP, and you're credited with directing the White House Blueprint for an AI Bill of Rights. And I would love for you to talk more about that. You know, this is a policy paper, a white paper. So what was the process? How did you begin creating the blueprint? Who was behind it? Who did you talk to?
Speaker 5:
[23:58] Yeah, so, you know, we came into office in the middle of a pandemic, and we came into office as a country having a racial reckoning. We were having an economic crisis. And, you know, I think those of us who work in the science and technology policy space knew, both on the research side and also from what we saw brewing amidst all of these societal concerns, what was going to be happening in the algorithmic space. And we already had examples. So, for example, the YouTube videos about, you know, the so-called racist soap dispensers and faucets. Right. You know, if you have darker skin, you can't get the soap to come out, which is a kind of application of AI. And, you know, I had the idea to do this, in part, I think, borrowed from lots of other examples. I mean, the Obama administration accompanied its Affordable Care Act with something called the Patient's Bill of Rights. I think Ralph Nader had a Consumer Bill of Rights. So the Bill of Rights has been used, you know, variously, both by government and folks in civil society, as a way to think about a rights expansion in the face of a new technology or a new social dynamic, for example. So we got into office, and by October of 2021 we had published an op-ed in Wired. And we used the Bill of Rights framing, and we tried to draw a parallel to the country's founding, noting that there was this time, you know, in the 1780s and '90s, when Americans adopted the original Bill of Rights to guard against a power they had just created, this powerful government, right? We're about to celebrate 250 years of the Declaration of Independence and the Constitution. Like, we had created this kind of powerful government technology, and we needed to place a check on that. So how do you secure our rights and our liberties, our opportunities, in the context of a kind of large and powerful government? So we saw a parallel with powerful technologies and the powerful companies that were pushing these powerful technologies, and thought that there was a useful analogy, and we wanted to think with the public, with the American public, about what might be equivalent guardrails against these new powerful domains. And so we were trying to frame the Blueprint for an AI Bill of Rights project within a kind of continuous, you know, US or American tradition of aspiring to values, recognizing the shortcomings of the systems that we create, and, you know, thinking about what we might do to mitigate them.
Speaker 3:
[26:40] Can you tell us some of the five principles that are identified in the AI Bill of Rights?
Speaker 5:
[26:44] Sure, yeah. So the five principles were in the white paper that Elizabeth alluded to, which was released in October of 2022, so a year later. And what we did over the course of that year was a lot of public engagement. So that Wired op-ed ends with an email address, so that you could write directly to the White House.
Speaker 3:
[27:04] That's always a good plan.
Speaker 5:
[27:06] Yeah, so I think we wish more people had taken us up on it, but people certainly did. And we did kind of focus groups. We had what we called office hours. So everybody who worked on the team, which included policy generalists, AI scientists, computer scientists, folks who work on science and technology policy from academia, who had government experience, who had commercial experience. So it was a pretty broad team. And we would all block time on our calendars just to talk with people. And that included high school students and rabbis, in addition to, always, the technology companies and lobbyists, you know. But we really tried to have a broad conversation. So the five principles are really distilled from those conversations. We weren't trying to do anything novel. We were trying to take from this year of conversation: what is the best of what we think? What are the aspirations that we should have as we move as a society into a more algorithmically shaped, mediated world? So one was that AI systems should be safe and effective. I mean, that's a very basic and almost consumer-rights principle. Second, that people should have protections from algorithmic discrimination. Third, that there should be some modicum of data privacy. We are still fighting out what that might even look like. But again, these are aspirations. Fourth, that there should be notice and explanation, so that you should have a right to know when an AI system is being used to make consequential decisions, like some of those that I was talking about, Elizabeth, when you asked me what's concerning. Like, you know, do we care if you get a bad Netflix recommendation and you end up watching a movie you don't really like that the algorithm told you you were going to like? No. But when algorithms and more advanced AI systems are being used to make consequential decisions about people's lives, people should know about that. And if they want an explanation, they should be able to get one. And then lastly, the last principle is that there should be some sort of human alternative or fallback, so that you should ideally be able to opt out. We build a lot of algorithmic and social media systems as opt-in, as opposed to opting people out. So can you opt out of an automated system? Can you talk to a real person instead of being brought down into a circle of phone-tree hell where you keep trying to press zero to get to a person? Particularly when it's about something that affects your life. I mean, you know, health insurance, jobs, housing. So these are really critical things. So that's what we came up with. And it's been, you know, variously taken up by different kinds of constituencies. It's become a kind of civic infrastructure, a way, I think, that allows different kinds of communities, particularly non-expert communities, to talk about why AI is important and how they want it to sit in their lives, and not sit in their lives.
Speaker 4:
[30:05] So, from an ordinary person's perspective, what would it mean to have a safe AI system? Does it mean that it's not going to make mistakes? Or, you know, what do you envision as an AI system that would follow this idea of safety?
Speaker 5:
[30:19] Yeah, so, you know, my friend Damon, who leads the Lawyers' Committee for Civil Rights, you know, he will often say there are more laws around your toaster than around the chatbot that you might have used this morning, which is true. So we just basically don't have, certainly at a federal level, though there's some action happening at the state level, any kind of just basic consumer protection. So I think many people are actually shocked when they realize that when, you know, an AI company or tech company ships a new model or an update of a model, no one has looked at that. There's been no kind of third-party authority that has said, you know, it's met some threshold or standard of testing and we think it should be safe and effective. So, you know, there are affordances, there are things, particularly about generative AI. And we know increasingly from the research that you're never going to get rid of all of the mistakes in generative AI, certainly not in large language models. So safe and effective systems doesn't mean that. But it does mean that one would and should expect that there should be testing on what people think would be the most obvious use cases of these technologies, right? If it's a multifunction or multi-use technology, there are use cases, I think, that we haven't even imagined and people aren't doing yet. But anybody who has studied the history of technology in the United States, even just going back to the '90s, knows there's always going to be a problem with scams, scamming and fraud, always, with any kind of new technology. We know historically there's always going to be a problem with forms of pornography, sexual abuse. These things are often the first use cases for new technologies. And so when we have chatbots that are being used to nudify young people in high school, or on Grok or whatever, we can't act like these are harmful use cases that could not have been anticipated. And so it doesn't mean at all, Elizabeth, that there won't be unanticipated things or that a chatbot won't hallucinate. But it certainly should mean that a company, before releasing a product, has thought through even basic historical use cases and actually thought about how they might be mitigated, or has had a conversation with some independent stakeholder, a state government, or civil society about how they might be mitigated.
Speaker 4:
[32:52] So with all this, essentially, you're describing this experimentation that's happening, and you'd think the government should be regulating it a lot. And the answer at the federal level has been crickets, mostly. There has been some movement, though. The blueprint served as a springboard for President Biden's executive order on AI. So could you say a little bit about what the core concerns were in the EO?
Speaker 5:
[33:21] Yeah. So I think the philosophy, both for the AI Bill of Rights and, for the most part, for President Biden's executive order on AI, was that just because we have a new technology does not mean that we have to have a new social compact or a new social contract. You don't have to throw out every policy, regulation, and law because we have this new technology, as powerful as it may be. So if intentional discrimination or intentional violations of people's civil rights or liberties are illegal in any other fashion, if you do that with AI, it's also illegal, right? You might have to figure out the mechanism differently or make the case differently, but the illegality of the outcome is the same. One of the things the executive order did was ask the Department of Education to think about, you know, you've got guidelines for children's privacy and their protection in the use of educational technology. Do those need to be updated, or do we just need to double down on what we have, as you're introducing different forms of advanced AI potentially into the classroom, right? You know, the president's executive order had some, you know, directions to things like the Department of Labor. And I think, differently from what the current administration has been doing, it was not just what is AI going to do to work; it was how can government help put in speed bumps or friction, or help direct the direction of travel. So you're not just potentially casting people out of work; you are helping them find other work, you are re-skilling them. Could there be a conversation about tax incentives or other kinds of incentives to keep people in work, or to help people off-board or on-ramp to different work, for example? The executive order, of course, also weighed in on, you know, there was a lot of concern, and remains a lot of concern, in the national security space. So, you know, should there be export controls? Should we be controlling where various forms of technology go? So this is still a very live conversation. Controversially, the executive order proposed that we would use the Defense Production Act, from, I think, World War II originally, to require that companies give the government more input and information about new, more powerful AI systems and tools that had a certain threshold of capability. So it might have been historically the longest executive order ever. Really? Yeah, I think that's right. I think it was a hundred and some pages, a hundred and one or a hundred and two. You know, as a reformist, a reformer, I don't necessarily think that is a good thing, and to somebody it's a bad thing. But in this case, I think it was good in the sense that it tried to be comprehensive. The philosophy here was that this is a kind of new infrastructure, a sort of new operating system for a lot of the work that we do, and how might we think about the ways that government can both help to accelerate potential good use cases and mitigate potential harms, using the tools, the mechanisms, the levers that government agencies and the executive already have.
Speaker 3:
[36:26] We're going to take a break, but when we come back, we'll turn to how the federal government is and isn't regulating AI and how the states are filling in the gaps. So before the break, we talked to Dr. Alondra Nelson about how to think about artificial intelligence and why it poses a risk and should be regulated. And so how did her work lead to a conversation about preemption?
Speaker 4:
[36:55] Well, as she's already mentioned during her time in the Biden White House, she helped create the blueprint for an AI Bill of Rights. And that blueprint became the impetus for a part of President Biden's 2023 executive order on AI. And as she's already discussed, that order told the federal agencies to address the safe and ethical use of AI. Now, that's the limit of what President Biden can do, and that's because Congress has the power to legislate, not the president. So while Biden could tell the executive branch what to do about AI, he lacked the authority to actually preempt state law.
Speaker 3:
[37:29] Got it.
Speaker 4:
[37:29] And as soon as he began his second term, Trump rescinded or did away with Biden's executive order and replaced it with his own. Now, the Trump administration's approach to AI has been to turn away from a focus on safety and ethics in AI.
Speaker 3:
[37:45] Surprise, surprise. Okay.
Speaker 4:
[37:46] Right. And instead, to focus on what the federal government can do to accelerate AI development. Now, Trump's executive order has called upon Congress to use its power of preemption, based in the supremacy clause, to override state laws on AI. Congress so far has not responded.
Speaker 3:
[38:05] Okay. Which has left a lot of room for the states. And so we'll pick up our conversation there.
Speaker 4:
[38:12] So we have a lot of different states regulating AI. You know, California has been in the lead, as it often is in these areas. So, for example, California has just a lot of different laws on AI. For example, you know, you've got to disclose what kind of data you use if you're an AI developer, what you use to train your models. That seems very technical, very big picture. There are also some very specific California laws. We just passed a law: you know, if you're a police department, you've got to disclose if your officers use generative AI when they write their police reports. That's a good one. So you've got the whole range of different things. So what does that mean? You talk to a lot of people in the industry. If I'm an AI developer and I want to offer my product in California, or I want to offer my product in Colorado, which has an algorithmic discrimination law, what does that even mean? How does that work?
Speaker 5:
[39:06] Well, I think the first thing to say is that we have other industries where you have different kinds of regulations. So you've got insurance, which is regulated mostly by the states, for example. We've talked a little bit about consumer protection. So in the discourse that gets used in DC, which is its own language, there's a lot of pearl clutching around the fact that you would have different laws in different states, although those very same people in Washington, because they are the most adept people on what the regulatory space looks like more broadly, writ large, know that this is true all the time; it's basically true. And, you know, we use the phrase laboratories of democracy, and I think there's something real to that. I mean, you have a new technology that I think is fast moving. In some ideal demos, would you want just one rule, you know, one law to rule them all? Sure, right? But we don't live in that ideal demos. And we also know that the states are much closer to the harms. So you also have to imagine being a governor of a state, or a state legislator or senator in a state, and you have people writing to you about being worried about the future of their children. You know, we had a scandal, I'm sitting here in Princeton, New Jersey, about nudify apps. Like, you know, lots of, I think, concerns about, you know, reading in the news about and experiencing young people harming themselves. You know, I saw a case reported about a potential homicide. And I think if you are a state legislator and you're hearing from constituents who've been denied a mortgage or screened out of a job by an algorithm, you can't just sit blithely and not respond to that. So I think partly it's just that folks are hearing it. I think that we have a new technology. What are the best ways to think about this? I mean, even with the cases you mentioned: California and New York have done laws around trying to require some disclosure and transparency from companies around harms. Texas, you know, has weighed in actually on thinking about harms, including discrimination. But they've said it really has to be intent; it can't be unintentional harms. You know, they're trying to let the companies off the hook. You know, a place like Colorado has attempted the first of what we might think of as an omnibus AI bill that covers lots of things, you know, including harms to young people, deepfakes, discrimination. And I've just named three different approaches. It's not clear which one of those is the best one, or which one's going to be most efficacious. And I think it's worth actually letting states do this, you know, finish the work of implementing these laws and actually find out. I just think that the harms are more likely to be on the side of not doing anything at all, rather than trying a couple of, you know, different innovative strategies in different states to see. And then, you know, because there's been no federal law, there's obviously just this vacuum in the states and there's a lack of clarity. And, you know, the DC conversation, the Trump administration conversation, the discourse has really been, well, it's creating confusion. And I think what's actually creating confusion is the lack of any kind of federal guidance. 
It's actually the states that are trying to sort of bring clarity to chaos.
Speaker 3:
[42:41] I mean, if the states are the appropriate front line for figuring this stuff out, is the ideal form of that to eventually roll up into some kind of federal regulation that makes sense? Sure.
Speaker 5:
[42:52] I mean, I think what the state patchwork does is test things out; some things will work, some things will fail horribly. I think the so-called patchwork also creates some upward pressure, because, exactly to your point, Roman, when enough states act, federal policy or norms become, as the patchwork gets woven together, kind of implicit. And I think it puts more pressure on the federal government to actually do something explicitly. I would also like to widen the aperture just slightly, from the AI companies that we're talking about now to social media, the social media example, which gives us another 10 or 15 years more to think about. We've seen the utter failure, right, of the federal government to be able to legislate in that space. And to the extent that we've got anything that looks like regulation or law or governance in that space, it's coming out of these lawsuits, like the lawsuits that we saw decided a few weeks ago around, you know, Meta and YouTube. And so I think if you are a state-level executive, a governor or a state legislator, you're thinking back on that example and just thinking, we can't wait and do this again. You know, as I said, the states are close to the harms; they're hearing from constituencies. The way that we've been governing, if you think about the social media model, I mean, the young woman who was the plaintiff, I think in the Meta case, she's 20 years old. I mean, this happened eight years ago or something. She was, you know, a child when this happened. And so using liability and legal cases puts us quite far away from the harms. And I think the states can be much closer.
Speaker 4:
[44:38] Yeah, just to back up for a moment, by way of explanation, you're referring to the social media trials that are happening in California, where basically state attorneys general and private plaintiffs are suing, arguing that social media platforms are harmful products, which has a long, storied history of legal liability in the United States. And actually, they're using the legal playbook of big tobacco. We kind of shut down big tobacco because we argued that the companies knew that these were harmful products and sold them anyway. And that has proven so far to be successful in the social media space. So I guess we could think, perhaps, that some of these AI products are going to be dangerous and maybe we'll do that. Of course, I think you're right, Alondra, to say that this is a backup, right? We don't want to wait for the bad use case, for people to be harmed. I mean, the nice thing about regulation is you can be proactive and say, we think this is going to happen, or it is happening, and we want to affect as many people as we can within the state or within the country. My question is really more about the companies. It's not that I feel too bad for them. But if you're a company, it's pretty burdensome, I would think, that you've got to look at every state and see what every state is doing. So I would imagine that, you know, their first choice is no regulation. But their second choice must be federal regulation, no?
Speaker 5:
[45:59] Yeah, I mean, I would disagree with that a little bit. Let's have a friendly quibble about this. I think that the compliance burden argument is a bit overstated by companies, right? You know, that's just what companies and their lobbyists do in their own interest. And as I said, companies in other policy spaces are already navigating different consumer protection regimes for different states, different employment laws, different privacy frameworks. I mean, you know, the state of Illinois has this pretty strong biometric policy regime, and yet companies were still, Clearview was still selling its facial recognition technology dataset, for example. So I think that the language from companies and lobbyists that says state AI laws are uniquely burdensome or especially burdensome doesn't really hold up when you think about these other policy spaces. The other thing I would say is that what your question, which is a common question, an important one, presumes is that if the states don't have a law, there's no other governance or pressure being applied on the direction of AI governance, which, certainly in the Trump administration, is not true. So, okay, maybe you don't want to deal with California or Colorado, but you've got a Trump administration that's saying, we're changing tariffs every day. You know, we've gone from Liberation Day to not Liberation Day, back and forth. So companies are dealing with that, including AI companies. You've got a Trump administration that is saying, we don't like immigration, we're uncomfortable with science and tech immigration. If you want to bring new technology talent to your AI company, you're going to have to pay $100,000 per visa, if we allow you to have one, to bring a talented engineer from France or Korea or somewhere. And then they're also intervening in business. The US taxpayer is a shareholder in Nvidia; we're a shareholder in Intel. So the compliance burden question, I think, is much too narrow, given all of the different ways in which companies are being asked to respond to a kind of broad spectrum of AI governance.
Speaker 4:
[48:18] Yeah, and let's not forget, I should say, the federal government and all of the state governments are huge customers, right? Customers can demand changes if they want.
Speaker 5:
[48:27] Procurement is an excellent vehicle. I mean, Governor Newsom just signed this executive order that I think really leaned into that, including not only safety issues, but issues around discrimination and civil rights and liberties, which I thought was fantastic.
Speaker 4:
[48:42] So we've talked a lot about sort of granular harms that are potentially happening or are happening. But I do want to talk about your thoughts on what's on the horizon, the AI horizons. There seems to be this race to develop AGI, or artificial general intelligence. So the idea would be not, please find all the cats in this picture, or write my high school essay on Pride and Prejudice, but an all-purpose, sophisticated AI with autonomy. Now, you've spoken to a lot of people in tech. I've spoken to a few. It seems like some people in the AI policy world are extremely worried about this. Like, we could create something that gets totally out of control, develops a biological weapon, takes over our defense systems. How concerned are you about this as a subject and then an object of regulation?
Speaker 5:
[49:29] So, I'm concerned about it. I think some people are quite invested in the name and what the name means. So people are quite invested in whether it's superintelligence or AGI. I'm not at all invested in the name, and I don't really care. So it keeps me out of some fights, but probably also keeps me out of some parties. I don't know. But I do think, you know, I prefer to use the phrase advanced AI. There are significant concerns about advanced AI. So, for example, think about DOGE early last year in the Trump administration. Part of what the reporting in Wired and elsewhere was suggesting is that DOGE was breaking the Privacy Act of 1974, which said that a lot of inter-agency organizations could not share data, in part because you don't want the federal government to have administrative data about you from Health and Human Services, from Fannie Mae, from whatever, to be able to put into this kind of large surveillance panopticon. And I think what powerful AI systems do is allow the interoperability of that data and the discovery potential of associations that are dangerous, things that we could never possibly know about ourselves or about others. So that's not even AGI, right? That's just a powerful extreme. So imagine a system having access to data about everyone in the United States, everyone in the world, being able to constantly evaluate that data, run that data, and then make decisions, and, as I mentioned at the beginning with the various forms of autonomy of different AI systems, to do it autonomously. So imagine not just all of the OpenClaws, not all the little lobster claws of various agents, but a really big claw, a really powerful independent agent acting in the world. And there's been some reporting, and I've seen some people discussing on social media, things like, I used this agent and it wiped out my entire hard drive, or it deleted all of my emails. You know, that's happening. And we're not imagining an AI agent that was sentient and all-knowing and decided that it was going to wipe out all of your email because you work too hard, or because it doesn't want you to work, or whatever. Those are just powerful systems that we're learning to use. So then you can imagine potentially a system having a bit more intentionality, a bit more understanding of the stakes, and being more powerful. The question then becomes, and I think this is where we trip ourselves up: well, how do you regulate that? It's just so powerful. What are we going to do? And before you get there, you need to imagine that companies can actually be told not to build a thing. Or maybe you can't tell them not to build it, but they can be told that they can't ship it. You can't tell a company what to create, but you can certainly say, you can't ship this out into the world without certain controls. Like, someone needs to have a final decision on whether or not it ships, or be able to turn it off and on, or you can only run it for a few hours, or it can only have access to so much compute, or so much data, you know? And we're not having those kinds of system-wide conversations. And, you know, to go back to the subject of the broader conversation, that is where you would want a smart, prudent federal government to weigh in, right? 
That's where, you know, at that level of nuance, and at that level of both abstraction and power, you might want there to be some sort of federal, I think, law or legislation or guidance.
Speaker 3:
[53:25] When we come back, Dr. Nelson explains her vision for finding a consensus on AI regulation, and whether she's optimistic the government will figure this out. I mean, you developed this idea of a kind of thick alignment when it comes to AI governance. Could you talk more about what thick alignment is and how that translates to regulation?
Speaker 5:
[53:50] Yeah, so there's a wonderful writer, Brian Christian, who has a really important book that I would commend to people called The Alignment Problem, which is really about the early years of what some people call AI safety, which is basically just: how do we explain these systems? How do we interpret what they're doing? How do we demonstrate that they're safe, to the extent possible? And it was very much a technical sense of thinking about alignment. So the system says that it's supposed to identify, at 98%, with a margin of error of 2 or 3%, these people in a facial recognition technology system. And for all intents and purposes, you would say that system is aligned, right? But we know the system is misidentifying people. We know in the Detroit metropolitan area that there have been more than half a dozen people misidentified by facial recognition technologies where someone, somewhere in the development and deployment queue, said this is aligned, this product works, right? And so as we're thinking about AI systems and advanced AI systems, it's not just whether or not they work technically. What happens when you deploy them? What can we anticipate or not anticipate? And how do we create a process or an understanding that allows us to think about alignment as something that needs to happen fairly continuously over time, and also as something that needs to happen in conversation with the values of different communities and different societies? So by thick alignment, I am taking up the work of the philosopher Gilbert Ryle, but also the anthropologist Clifford Geertz, who was a professor here at the Institute for Advanced Study in Princeton, where I am, and who has this very famous essay and concept of thick description: that you don't really understand the world until you've sought to understand, contextually and deeply, what it means, how you describe it deeply. And so my provocation to AI safety researchers, and my collaboration actually, so it's not just a critical work, is this: alignment is important. Safety, explainability, interpretability, all the things that you might put in that bucket are really important, and taken together they are an important solution set for some of the harm mitigation that we might want to do in the space of AI. But what does it mean to do that in a way that takes seriously the different contexts in which these tools might be used, the different values? If you think about it, Anthropic has created a constitution for AI, for example. Who gets to weigh in on that? Are those the values that I want, or that others want? You see this values conversation coming up also in even the way the Trump administration frames quote-unquote ideological bias in AI. Who gets to decide what's biased in AI? There's a technical question about bias in AI, but who gets to decide what is a biased chatbot? So I think we just need to have a conversation we're not having about what it means to try to come to rough consensus values, to the extent that's even possible, to try to have high-level values to make decisions about these technologies. So I think the AI Bill of Rights was one of the ways we were trying to point to that. But certainly, I think state laws are another way. You have states saying what they care about, and you might think of those as examples of thick alignment. 
Like, this is what our constituency cares about, and this is where we're going to lean in on in the regulatory space with regard to AI. The other stuff, maybe we don't care about so much.
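[Editor's note: a minimal sketch of the arithmetic behind the facial recognition example above. All figures here, the search volume and the base rate, are hypothetical, chosen only to illustrate how a system can look "98% aligned" on paper and still flag mostly innocent people.]

```python
# Hypothetical illustration -- none of these numbers come from the episode
# except the rough 98% accuracy figure Dr. Nelson cites.

searches_per_year = 10_000   # assumed: face searches run by one police department
true_match_rate = 0.01       # assumed: share of searches where the person really is in the database
accuracy = 0.98              # the headline accuracy figure from the conversation

true_matches = searches_per_year * true_match_rate   # 100 genuine matches
non_matches = searches_per_year - true_matches       # 9,900 non-matches

# A 2% error rate applied to non-matching faces produces false positives:
# innocent people wrongly flagged as a match.
false_positives = non_matches * (1 - accuracy)       # 198
true_positives = true_matches * accuracy             # 98

print(f"Wrongly flagged per year:   {false_positives:.0f}")
print(f"Correctly flagged per year: {true_positives:.0f}")
# With a low base rate, roughly two of every three flags land on the
# wrong person -- "aligned" on the spec sheet, harmful in deployment.
```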
Speaker 3:
[57:35] I don't even know if I have thick alignment with President Trump as a human, you know what I mean? Like, it seems harder and harder to have it. When you're talking about all these hypothetical uses of this stuff, and if a program is supposed to have inherent human values, a lot of those values don't feel shared right now.
Speaker 5:
[57:54] No, I think that's right. I will say, one of the things I've been doing since I left, so the AI Bill of Rights comes out in October of 2022, is following its afterlives since that time. And some of its afterlives, Roman, to your point, have been in red states. So there's been an Oklahoma AI Bill of Rights introduced as a bill. It didn't succeed ultimately, but it contained all of the five principles that we discussed previously, plus a few more that were really good and actually quite a bit stronger than some of the things that we suggested. More recently, in November, in Florida, Governor DeSantis introduced a Florida AI Bill of Rights, which contains within it all five of our principles and lots of other things besides: deepfakes, child sexual abuse imagery, a really nice clause around algorithmic decision-making in health insurance. So I totally take your point, Roman, but it's also clear that there are a few things we agree are wrong, or that we don't want, that are suboptimal for society. So I think you're exactly right, but I also take some comfort in these Bill of Rights alignments that pop up here and there.
Speaker 4:
[59:15] You sound optimistic about the future of AI regulation. Is that right?
Speaker 5:
[59:20] Am I optimistic about regulation? I don't know. I mean, if we look at the history of technology policy at the federal level and in Congress, and Elizabeth, correct me if I'm wrong, I think it's maybe not been since the Communications Decency Act of 1996 that we've passed anything like a technology law. That's a long time. That's a generation. So I'm not optimistic in that sense. I think I'm optimistic about, some people are calling it the tech backlash, and I don't call it that, I don't like that framing, but there's a growing public empowerment to speak about what people want and don't want with regard to the way AI systems are being developed and deployed. So when I first started working in, sort of, big data, and then, as it became AI, in policy and research...
Speaker 4:
[60:14] That's how you date yourself.
Speaker 5:
[60:15] I know. I know. You say big data in a room and people kind of cringe. They're just like, oh, big data is so cringe. So you would sit in rooms and people would say, the public can't possibly understand. I mean, even now you hear people saying, this was in DC, actually, when I was working in Washington: if you're a staffer on the Hill and you don't have a PhD, a degree in machine learning or AI, how could you possibly even begin to offer guidance on how we should govern this technology? Of course, you don't want people who know nothing about AI to be governing AI, but I've been encouraged by the fact that the public has demonstrated that it is not true that you have to have a PhD in AI to be able to say something about the AI governance space. You see it in the space of data centers. That's a place where AI governance and policy is quite tactile, right? It is in communities, it is about their water, it is about their energy use, and it's where sort of AGI or superintelligence lands on the ground, and it is where communities really feel they have a sense of agency. So we're seeing, I think I just saw in the news that Maine has banned data centers for a time. There were a lot of big projects announced that have been stalled, that are being revisited. There's reporting now about how a lot of these data center agreements in various communities were done with local politicians under NDAs, so that local communities can't even know the terms of the agreement for some of these. And people are really pushing back against that. And they're pushing back against the harms to young people; they're very concerned about suicidal ideation and how chatbots encourage it. So, am I optimistic about law? Absolutely not. But am I optimistic about the fact that it's getting much more difficult for companies and other elites who really want to just drive technology, without thinking about the harms and the social implications, to do that? Yes, because you've got a growing chorus of people, bipartisan, Roman, maybe not aligned, but bipartisan, saying, we don't want this. So, optimism, no; encouragement, yes.
Speaker 3:
[62:46] My one point here is, it's funny, the biggest proponents of AI and its broad use are kind of the biggest fearmongers about it, too. I think they enjoy the sense of: this is super powerful, you should let us do what we want, and it's going to destroy humanity in five years. I think they like both of those things; both of them feed into their ego.
Speaker 5:
[63:09] They're both about power, yeah.
Speaker 3:
[63:10] Yeah. It's fascinating that the alarmists are the biggest proponents; it's a weird dynamic. This is not like tobacco regulation, where the people who wanted to regulate were on the side of saying there was harm, and the other people said no harm. It's an odd dynamic, and it's mixed up in all this stuff, like the Florida regulation versus the California regulation. The political valence here is much more complicated than with most other issues.
Speaker 5:
[63:38] Yes, it's very complicated and kind of heterogeneous. And that's fascinating. I think there are some very interesting essays, articles, papers to be written about the fact that, at a time of maybe the highest polarization in American society since we've been measuring it, you've got this growing negative sentiment about AI, that it's bipartisan, and that the set of issues on which people agree in their dissatisfaction is growing, right? You go from discrimination, to young people and CSAM, to fraud, to health care, to data centers; the space is just becoming much broader. People are obviously worried about their jobs, worried about employment and what they're being told, and that's grown into your point about powerful people saying, our powerful tool is going to be really great and destroy everything, including all of your jobs, right? So, yeah, it's a very interesting policy space. And it's a space, as I said, of political encouragement, if not optimism.
Speaker 3:
[64:45] Yeah. I mean, this seems like a new opportunity for a different kind of alignment, which is really kind of fascinating.
Speaker 5:
[64:51] Yeah.
Speaker 3:
[64:53] Dr. Nelson, I really appreciate you being here.
Speaker 4:
[64:56] Thank you so much.
Speaker 5:
[64:57] It's been great to talk to you.
Speaker 3:
[65:02] So that's the original seven articles of the Constitution. Thank you for joining us for all of that. Of course, there are amendments to be talked about, 27 of them, but we're going to take a pause on the breakdown of the Constitution. There's just so much going on with Trump and the Constitution that we're going to go back to releasing our What Trump Can Teach Us About Con Law episodes. There won't be an episode in May, but we'll be back in June for Supreme Court Decision Season, everyone's favorite season.
Speaker 4:
[65:32] The 99% Invisible Breakdown of the Constitution is produced by Isabel Angell, edited by committee, music by Swan Real, mixed by Martín Gonzalez.
Speaker 3:
[65:43] Kathy Tu is our executive producer, Kurt Kohlstedt is the digital director, Delaney Hall is our senior editor. The rest of the team includes Chris Berube, Jayson De Leon, Emmett Fitzgerald, Christopher Johnson, Vivian Le, Lasha Madan, Joe Rosenberg, Kelly Prime, Jacob Medina Gleason, Talon and Rain Stradley, and me, Roman Mars. The 99% Invisible logo was created by Stefan Lawrence. The art for this series was created by Aaron Nestor. We are part of the SiriusXM podcast family, now headquartered six blocks north in the Pandora building in beautiful uptown Oakland, California. You can find the show on all the usual social media sites, as well as our own Discord server, where we have fun discussions about constitutional law, architecture, movies, music, all kinds of good stuff. You can find a link to the Discord server, as well as every past episode of the ConLaw Book Club and every past episode of 99PI, at 99pi.org.
Speaker 6:
[66:44] Hi, I'm Angie Hicks, co-founder of Angi. One thing I've learned is that you buy a house, but you make it a home. For decades, Angi has helped millions of homeowners hire skilled pros for the projects that matter. Angi: the one you trust to find the ones you trust. Find a pro for your project at angi.com.
Speaker 7:
[67:00] Breathe in. Feel the sense of calm that comes from having up to $300 in overdraft protection with GO2bank. Now, did you say $300? Yes. Now, back to our breathing.
Speaker 5:
[67:11] So if I overspend my balance, GO2bank has my back, up to $300.
Speaker 7:
[67:15] Yes. Can we breathe out now? Less worries, more Zen, with up to $300 in overdraft protection. Tap to open an account today. Eligible direct deposits and opt-in required for overdraft protection. Fees, terms, and conditions apply.
Speaker 8:
[67:29] This is a monday.com ad. The same monday.com helping people worldwide get work done faster and better. The same monday.com designed for every team and every industry. The same monday.com with built-in AI, scaling your work from day one. The same monday.com that your team will actually love using. The same monday.com with an easy and intuitive setup. Go to monday.com and try it for free. Yes, the same monday.com.