title 351 | Peter Singer on Maximizing Good for All Sentient Creatures

description Peter Singer has been an influential philosopher for a number of decades. He was a significant early voice in animal rights, has been a leading thinker of utilitarianism, and helped inspire the effective altruism movement. In this podcast episode, we try our best to talk about all of those things -- working from metaethical questions of consequentialism vs. other approaches, to specific flavors of utilitarianism, the practical demands that ethics places on people, the rights of animals, and the decisions we make at the end of our lives.





Blog post with transcript: https://www.preposterousuniverse.com/podcast/2026/04/20/351-peter-singer-on-maximizing-good-for-all-sentient-creatures/







Support Mindscape on Patreon.
Peter Singer received his B.Phil. in philosophy from the University of Oxford. He retired from Princeton University in 2023, and now lives in Melbourne, Australia. He is the author of a number of influential books, including Animal Liberation (1975). He has been named a Companion of the Order of Australia, and is a winner of the Berggruen Prize. He is the founder of the charity The Life You Can Save. He and philosopher Kasia de Lazari-Radek are co-hosts of the Lives Well Lived podcast (YouTube, Spotify, Apple).
Web site | Princeton University Center for Human Values page | Google Scholar publications | Amazon author page | Wikipedia | Bluesky

pubDate Mon, 20 Apr 2026 11:43:00 GMT

author Sean Carroll

duration 4537000

transcript

Speaker 1:
[00:00] This episode is brought to you by Progressive Insurance. Do you ever find yourself playing the budgeting game? Well, with the Name Your Price tool from Progressive, you can find options that fit your budget and potentially lower your bills. Try it at progressive.com. Progressive Casualty Insurance Company and Affiliates. Price and coverage match limited by state law, not available in all states.

Speaker 2:
[00:22] When Mother's Day means celebrating your mom, your wife, maybe even your daughter as a new mom, trust 1-800-FLOWERS to help you celebrate every important woman in your life. With double blooms from 1-800-FLOWERS, order one dozen roses and get another dozen for free. It's a simple way to give beautifully with colorful blooms that make Mother's Day feel meaningful. For every mom you're celebrating, order with confidence and get double blooms at 1800flowers.com/podcast. That's 1800flowers.com/podcast.

Speaker 3:
[00:52] Hello everyone and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. We all struggle in life with the question of what is the right thing to do. How do you behave like a good person? What is the moral thing to do, the ethical thing to do? Once you get into the professional philosophy sphere, you start asking: what do you mean by the right thing to do? Is there something called the right thing to do? Is that an objective truth about the universe? Do people just make it up? And if it's out there, whether we make it up or not, how do we find it? How do we calculate? How do we decide what is the right thing to do under various conditions? Although there are many different approaches, probably the most popular current approach is utilitarianism: some version of saying that there's some quantity, the utility, the greatest good, the total happiness or pleasure or well-being of conscious creatures, human beings, however you want to slice it, that we should try to maximize. The more people are happy, the more their well-being is fulfilled, et cetera, the better off, the more moral we're going to be. And probably the most prominent proponent of utilitarianism in modern philosophy is today's guest, Peter Singer. Peter Singer has been a very active voice both in developing utilitarian philosophy and in implementing it. He has had an impact on the real world, more than most philosophers have had. He's been very, very careful in thinking about what it means to increase the total happiness of humanity, and he's famously willing to follow the implications of his reasoning wherever they may lead. He believes that we all have moral intuitions or feelings. Some of them are not right, and so sometimes, if you're careful about doing moral philosophy, you're going to be led to conclusions that are a little bit surprising or counterintuitive at first, but if you trust your logic, you should nevertheless accept them. 
Among the areas in which he's had influence are effective altruism, the idea that we shouldn't just be philanthropic and give money to good causes, but should target our giving to the most impactful causes; also animal rights and the fight against factory farming. He wrote a very influential early book called Animal Liberation, which basically makes the argument that if animals have feelings, if they have preferences or happiness or well-being, then they should count when we figure out how to increase the total happiness of conscious creatures here on earth. He continues to be super active in both philosophizing and fighting for these causes, especially against poverty worldwide. Nowadays, he has expanded his efforts into podcasting. He and Kasia de Lazari-Radek have started a podcast called Lives Well Lived. He has a wonderful microphone setup, which makes him a great podcast guest. We're very happy to have Peter Singer on today's podcast to think about all of these questions: how to be a good person, why this approach is better than the others, and what it means for the real world. So let's go. Peter Singer, welcome to the Mindscape Podcast.

Speaker 4:
[04:28] Thank you Sean, great to be with you.

Speaker 3:
[04:30] So obviously a lot of what we're going to talk about falls under the umbrella of ethics. But as you know, being a professional philosopher, there's also meta-ethics. There's this issue of, like, how do we even know how to justify a story of how to behave ethically? I mean, what are your general feelings here? We'll start very broad and work in. Do you think that morality is something objective and real, or are you of the school where we kind of make it up?

Speaker 4:
[05:02] I do think that there is something objective and real, although obviously a lot of our moral intuitions are evolved intuitions that have helped our ancestors to survive. So I don't really trust those moral intuitions. But I think that reason has something to say about ethics, and that's where the objectivity or reality comes in. This is not a view that I've always held. I started out as what philosophers call a non-cognitivist, which is a view that basically holds that there's nothing to be known, that there's no moral knowledge. Initially the view that I held, which was quite popular when I was an undergraduate in the 1960s, was that moral judgments are expressions of our emotions, our feelings. I moved a little bit when I got to Oxford and studied with R. M. Hare, who thought that moral judgments are prescriptions, something like commands, but also that, when you think ethically, they are governed by the need to make those prescriptions universalizable. Essentially, that's something like the golden rule. So if you say that this is the morally right thing to do, and perhaps on this occasion you benefit from it, you're telling somebody not to harm you. But if the roles were reversed, you would also have to stick to the same moral judgment. Your moral judgments can't just say, well, because I'm Peter Singer, I can do this to you, but because you're Sean Carroll, you can't do that to me. So that brought in an element of reason. But for Hare, it was just part of the grammar of moral language. And that wasn't really enough for me. So after struggling with that for a few years, I came to a view that I took from the late 19th-century philosopher Henry Sidgwick, really, that there are rational judgments, that we are capable of looking at things from a broader point of view than our own. 
He called it the point of view of the universe, although he wasn't saying that the universe is a being with a point of view. And that introduces an element of reason and, I think, of objectivity into moral judgments.

Speaker 3:
[07:24] I guess there is a distinction, maybe we should be careful about it, between moral realism and moral objectivity, right? Moral realism gives us the impression that there's something out there, some morality stuff that we can experiment on or something like that, whereas objectivity just means it's independent of our personal subjective point of view.

Speaker 4:
[07:48] Yes, I think that's right. The term moral realism suggests that in some way this is like part of the furniture of the universe, that we could discover it in some way as we can discover other galaxies, and clearly that's not the case. But I think that it's objective in the sense that any being capable of reasoning could understand the reasons that we give for acting in the ways that we say the way you ought to act. So that's what makes it larger than just ourselves or even our species.

Speaker 3:
[08:24] So what do you say to people who just say they disagree with you? Like maybe I do think that the morally right thing to do is whatever Sean Carroll wants, and there's nothing you can say about it.

Speaker 4:
[08:37] I do think that if you say that, you're denying the fact that others are like you, that I'm like you, and claiming that you are justified in ignoring that fact. So you're missing some fact about the situation, or you're not giving it attention or weight. If we are going to talk ethically, I think that's not justifiable. So what I'm saying is, yes, there's a selfish way of reasoning that you can engage in, but then you are just looking at things from that perspective, and that's a narrower perspective, one that, again, maybe our ancestors evolved and survived because they took it. But we can look at things from a broader perspective, this point of view of the universe, and say, well, is it a bad thing that somebody is suffering? And I think when we do that and we understand what suffering is, then we can say, yes, that is a bad thing. And that's something that you're ignoring when you only look at Sean Carroll's suffering and not at the suffering of others that you're causing.

Speaker 3:
[09:53] So I think it sounds like, and this will come up later, a crucially important role is being played here by a kind of equivalence that different agents have, right? That there's no higher moral standing that one agent has than another, and sort of given that, I don't want to say assumption, but at least starting point, you can go pretty far.

Speaker 4:
[10:17] Yes. There's some sense in which some moral agents may have a higher status. They may be able to suffer more, for example. So there are some non-human animals who may be, we may decide that they are also, well, they're certainly agents. We may even say that they're moral agents in some cases. But maybe they have a more limited perspective on the world, more limited cognitive capacities. And possibly they're not capable of suffering as much or enjoying life as much. And there's one sense then in which they matter less than we do. So I'm not saying that every moral agent is of equal worth in some deep sense. But I am saying that any being capable of having conscious experience matters and ought not to be ignored.

Speaker 3:
[11:13] And so that's very helpful. Thank you. I know what your answer to this is, but could you maybe explain to the audience: there is a traditional division of approaches to ethics into a consequentialist point of view; deontology, where you care about the rules that you're given; and maybe virtue ethics, where you're trying to be virtuous as a process in life, rather than following a set of rules. Which one of these are you?

Speaker 4:
[11:44] Okay. Well, I'm a consequentialist. I guessed. You guessed, yes. Not a secret. I have been for my entire career. A consequentialist is somebody who thinks that the right actions are those that have the best consequences. Whereas a deontologist essentially denies that, and says that sometimes an action is right even though it will have, all things considered and for everybody affected, worse consequences than some other action. And as for virtue ethics, I think it's actually compatible in some form with consequentialism, perhaps also with deontology. Because virtue ethics simply says you should be the kind of being who has a virtuous character. And then of course we have to define what virtuous means, and that's fairly open. But a consequentialist will say, well, the virtues one should have are those that in the long run will tend to produce the best consequences.

Speaker 3:
[12:49] Yeah. I do find virtue ethics... Sorry, go ahead.

Speaker 4:
[12:54] I was just going to say I don't really see that as a completely independent moral view.

Speaker 3:
I do think that virtue ethics is sort of frustratingly vague. And yet I do find myself growing in sympathy toward it over time. So I'm glad that I can talk to a dyed-in-the-wool consequentialist, which is what I used to be. But I'm sort of backsliding a little bit. So utilitarianism is a particular way of being consequentialist?

Speaker 4:
[13:20] That's right. You can think of consequentialism as the genus and utilitarianism as the species. So consequentialists might value a wide range of things. They might say, for example, that freedom or autonomy or knowledge or justice are independent goods, that they are things of intrinsic value, even if they don't produce more happiness and less suffering for sentient beings. Whereas utilitarians say, no, the only thing that is really of intrinsic value is the well-being of sentient beings, desirable conscious states, if you want to put it that way. And all of these other things are very important. I don't deny that they're important, but they're important instrumentally, as a means to producing in the long run a society that does lead to more happiness for all sentient beings. So that's the difference between non-utilitarian consequentialists, of whom there are some, who think that there are these other independent goods, and utilitarians like myself.

Speaker 1:
[14:37] This episode is brought to you by Progressive Insurance. Do you ever find yourself playing the budgeting game? Well, with the Name Your Price tool from Progressive, you can find options that fit your budget, and potentially lower your bills. Try it at progressive.com. Progressive Casualty Insurance Company and Affiliates. Price and coverage match limited by state law, not available in all states.

Speaker 5:
[14:59] Owning a home is full of surprises. Some wonderful, some not so much. And when something breaks, it can feel like the whole day unravels. That's why HomeServe exists. For as little as $4.99 a month, you'll always have someone to call. A trusted professional ready to help, bringing peace of mind to four and a half million homeowners nationwide. For plans starting at just $4.99 a month, go to homeserve.com. That's homeserve.com. Not available everywhere. Most plans range between $4.99 to $11.99 a month your first year. Terms apply on covered repairs.

Speaker 3:
[15:29] And of course, among utilitarians, there are subspecies and factions. Do you have a specific brand that you like to appeal to?

Speaker 4:
[15:39] Yes, I'm now a classical hedonistic utilitarian. That means I regard happiness and pleasure, again, desirable states of consciousness, as we call them, the kinds of states of consciousness that you like to have for their own sake, as good, and undesirable states, obviously pain, misery, suffering, as bad. For some years, though, I was a preference utilitarian.

Speaker 3:
[16:15] That's what I thought, so this is news to me, yeah, thanks.

Speaker 4:
[16:18] Right, yeah, okay. So I was a preference utilitarian, again somewhat under the influence of R. M. Hare, because Hare's view was, as I said before, that moral judgments have to be universalizable, and that means we have to put ourselves in the position of others. And when we put ourselves in the position of others, Hare said, we take on their preferences. So we take on what it is that they want and how much they want it. And that's what we should be maximizing: satisfying their preferences. But of course, there's a problem with that, and that problem is that they may be misinformed about a whole lot of things in their preferences. And they may want something and not realize that it's actually not going to be nearly as good as they imagined it was, might in fact be quite bad. So Hare then adjusted that to say, well, it's the preferences that you would have if you were fully informed, and also if you were thinking calmly, for example. Like, you might have a preference in a rage: somebody has insulted you and you want to strike them in the face, and that will have terrible consequences if you do it. But because you're in a rage, that's your preference. And again, Hare said, well, it has to be what you would want if you thought calmly and rationally about this. And actually, Sidgwick kind of anticipated all of this in the 19th century, and pointed out that if you're talking about the preferences you would have if you were rational, then you have to think, well, is there some reason for thinking that happiness is good and that some other preferences are not good? And Sidgwick argued that there are reasons for thinking that happiness is good, and that other things that people might prefer, even intrinsically, are actually not good, or not good in themselves. So that was one reason why I then shifted to a hedonistic viewpoint. 
And otherwise, I suppose that I moved away from that universal prescriptivism, as I said before, to the idea of a more objective ethic. So I made that change roughly in the early teens of this century. And the most definitive statement of it is a book that I co-authored with the Polish philosopher Katarzyna de Lazari-Radek, called The Point of View of the Universe. Again, that's that phrase from Sidgwick. And it is really kind of an account of Sidgwick's views, and a defense of Sidgwick's views in the context of contemporary philosophy.

Speaker 3:
[19:07] One of the things that I always worried about with utilitarianism was the apparent need to say that there was this thing called utility that, whether or not I could accurately measure it, existed, and that I could somehow add up the utility caused by someone being happy and compare it to the utility from someone else being happy or sad or whatever. That seemed like wishful thinking and not really reflective of true human experience. Am I right to think that that's what a utilitarian has to believe?

Speaker 4:
[19:41] A utilitarian doesn't have to believe that we can actually measure it in a precise way. But it exists, yes. That's right. A utilitarian would have to believe that there is some state of happiness that you're in, that I cannot directly access or measure, and that it might be greater or lesser than mine. But I think that's something that we all have to do anyway. I'm a little puzzled by this objection. Suppose that you and a few other people are going out for dinner, and you say, well, which restaurant should we go to? And you say, I really love Chinese food, especially that spicy Sichuan food, let's go to that. And two other people say, yeah, I enjoy Sichuan. And then someone else says, I'm sorry, I really hate spicy food, I just can't take it. So you have to decide now: is your preference, and that of the others who enjoy Sichuan food, going to outweigh that of the other person, who will have to order the plain boiled rice and maybe some vegetables with not much flavor in them, if they go along? And we do that kind of thing all the time in deciding those kinds of group choices. So I don't think the utilitarian has to say more than, well, yeah, we try to do that as accurately as we can. Maybe we're getting some techniques for assessing it a little bit better than we used to, but that's the situation we're in.

Speaker 3:
[21:13] I mean, I do think I hear that response very clearly. I would say that human beings act as if something is real and true all the time without it being real and true.

Speaker 4:
[21:26] Well, that is true. It's not really a definitive...

Speaker 3:
[21:31] And it gets us into sort of the classic worries about utilitarianism. I apologize as I'm sure that you've had these conversations many times over the decades, but if you're a straightforward utilitarian, there's all sorts of puzzles you run into of the form. Would it be okay to make one person much, much, much less happy if we could just make a lot of people a little bit happier and the totals outweighed?

Speaker 4:
[21:57] Yes, you're right. There are many such objections put to utilitarianism. I tend to think that the answer to those objections is: if you specify that this is a hypothetical example, and we know that the slight happiness of the many is going to outweigh the greater unhappiness of the few or the one, and we know there are no further bad consequences from this, then that could well be the thing that we ought to do. Now, let me just add that, in order to decide that the slight happiness of the many outweighs the greater unhappiness of the one, we need to have a scale on which we compare suffering or unhappiness with happiness. And it's not obvious that that scale is one that runs, thinking spatially, an equal distance from the neutral point in both directions. In other words, I've sometimes asked my audiences this question. Suppose that a fairy says, I can grant you an hour of the greatest pleasure that you have ever experienced, but the price you have to pay for that is that you will also have an hour of the greatest pain that you have ever experienced. Would you like me to grant you that wish? And most of the audience says no. And I conclude from the fact that most of the audience says no that we think that the worst pains we suffer actually are further from the neutral point than the greatest pleasures that we enjoy. So we have to be careful with these things about making one person miserable, because maybe if we make them really miserable, that's going to outweigh a very large amount of mild enjoyment that the many are going to experience.

Speaker 3:
[24:02] I'll confess I haven't read a lot of the technical literature on utilitarianism, but knowing philosophers as we both do, I'm sure that someone has tried to make this fearsomely mathematical. I mean, I'm wondering, is it necessarily the case that if there's a certain amount of utility garnered by increasing two people's happiness by a certain amount, does utilitarianism have to assume that you get twice as much utility if four people get that increase in happiness? Is there some linearity or addition kind of assumption being made?

Speaker 4:
[24:37] Yes, I think that is the standard view. There are some people who have toyed with the idea of, if you like, a sort of declining marginal utility of utility itself. They do this particularly on the puzzles about population ethics. So you're perhaps familiar with the examples that we owe mostly to Derek Parfit, about the idea that for any world which has, let's say, one billion people who are extremely happy, having wonderful lives, you could imagine a much larger world in which everybody is just barely above the level at which life is worth living. Derek Parfit called it a life of muzak and potatoes. So you only get to eat potatoes; they're okay, but you don't love them. And you listen to no Mozart, only muzak, the stuff that comes out of elevators at you. So some people have said, well, if you have very large populations, then the value of the additional utility falls away in some way. But it's not entirely clear why it falls away, or what the discount rate is for that. So I think it's better just to say, yes, if happiness is good and the happiness of two people is good, then the happiness of four people, at the same level of happiness, is twice as good.

Speaker 3:
[26:07] So one of the things that you're famous for is, to oversimplify, biting the bullet. Like, you really are, I get the impression, very willing to accept somewhat counterintuitive conclusions that arise out of your moral precepts, such as they are. Would you say that's accurate?

Speaker 4:
[26:25] Yes, that's accurate. And the underlying reason for that is that, as I briefly mentioned earlier, I think our intuitions have arisen in various ways that suggest that they are not likely to be tracking moral truth, or the best impartial objective views. Our intuitions arise from various sources. Some of them, of course, are cultural. It may be that we were brought up in a certain ethic. That ethic may have been influenced by the religion that was dominant in our society or in our group. And that is not a trustworthy source, in my view. I'm an atheist, so I don't think any religious teachings have any authority as such. Some of them might be wise, and some of them might not be wise, and very often they solidified centuries ago in circumstances that are quite different from our own. So I don't think they're something to be relied upon. And then some people say, oh, but there are these more or less universal judgments that everybody has. For example, that incest is wrong. And then you say, well, yes, but that may be an evolved intuition, and that's why it's universal, because incestuous relationships might lead to inferior children. And so maybe those groups that practiced incestuous relationships and reproduction didn't survive, and the ones that had an inhibition against that did better. So that also is not, at least today, a reason against incest, because we have reliable contraception, and if that fails, we have abortion. So, you know, Jonathan Haidt has an example where he talks about a case of adult sibling incest. An adult brother and sister who are alone on a holiday somewhere, in a remote place, decide it would be fun to have sex. They have sex, they enjoy it, but they say, okay, we don't need to do that again, we've had that experience. And they remain close as siblings. Was there something wrong with that? 
And when Haidt, who is a psychologist rather than a philosopher, asked his students and experimental subjects, they nearly all said, oh, yes, that's wrong. And then Haidt would say, well, why is it wrong? And they would say things like, oh, because they might have children who are abnormal. But actually, I didn't spell this out: in the example he used, although the sister was on the pill, the brother decided to use a condom as well. So there was really no chance that they were going to have offspring. And then they might say, well, it could hurt their sibling relationship, but they were told in the example that it didn't. So, you know, they can't really explain it, but they just have this deep instinct that incest is wrong. Okay, well, we can explain why they have that instinct, that intuition. But it doesn't give us any reasons for thinking that what this couple did was wrong.

Speaker 3:
[29:41] I guess this is a classic kind of tension that arises in moral philosophy. You try to be systematic about what your starting point is, whether you want to call them moral axioms or your ethical theory or whatever, and then you reach some conclusions. But your axioms or your starting points are certainly somehow informed by, let's say, moral judgments, whether they're intuitions or educated or whatever. Then when we reach conclusions, if we reach a conclusion that is at odds with our existing moral judgments, maybe, like you just said very nicely, it's because our existing moral judgments are just not really up to date. Maybe they're not as well thought out as we hoped they were. But maybe our starting point wasn't right either. And how do you know? How do you work with that dialectic?

Speaker 4:
[30:36] Right. So what you're talking about really sounds like the idea of reflective equilibrium that John Rawls put forward in A Theory of Justice. We develop a theory, or a basic starting point or foundation for moral theories, and then we test the theory against our intuitions. And we either tweak the theory to match the intuitions, or we tweak the intuitions to match the theory. Rawls originally suggested this was a little bit like a scientist trying to produce a theory to match the data. And if there are a couple of points of data that the theory couldn't match, well, then you perhaps rejected that data; you said, well, there must have been something wrong with that observation. But by and large, the right theory was the one that best matched the data. I think that's a mistaken view of what we're doing in morality, because I don't think there is the solid data there that there might be in science, in scientific observation. So I'm more of a foundationalist than a proponent of reflective equilibrium. I try to get the foundations as clear as I can, try to reflect on them, think about them, take the most rational point of view that I can, and that's, as I was saying at the beginning, trying to take the point of view of the universe rather than our own point of view, and trying to find what is intrinsically good and what is intrinsically bad, coming up with the idea of desirable states of consciousness as intrinsically good, and undesirable states of consciousness, ones that you would rather stopped, as intrinsically bad. And that seems to me to be a more solid foundation than relying on those moral intuitions.

Speaker 3:
[32:24] Good. This leads us to, I mean, maybe rolling up our sleeves a little bit and digging into some of the details of how you think through your version of utilitarianism. I mean, as I mentioned earlier, the impartiality of who gets the utility seems to be a big part of it. I mean, again, I know a little bit about what you said, so many of my questions are going to be of the form, I think it's like this, tell me if I'm wrong. So is it fair to say that every person should give equal care to the well-being of every other person, no matter how near or far they are?

Speaker 4:
[33:04] That's a kind of simplified overall perspective.

Speaker 3:
[33:07] Yeah.

Speaker 4:
[33:08] But it's not that everybody should do that in their daily life all the time. I simply don't think that would work. Again, we have evolved as beings who love and care for their children, and have close relationships with other kin as well, siblings, parents, maybe even cousins, and of course close ties with friends, unrelated friends with whom we have mutually beneficial, often cooperative relationships. I don't think we can simply set all that aside and say, well, I'm going to deal with strangers exactly in the same way that I deal with my children, or with my closest friends, or my partner. We have to show some partiality there. Although in one sense I think the well-being of every other sentient being counts as much as the similar well-being of myself and those close to me, in practice we have to take account of the fact that we will only do well, and others will only do well, in close personal relationships that necessarily involve some form of partiality.

Speaker 3:
[35:15] Okay, this is fascinating to me. This is clearing things up. I mean, maybe rather than asking it that way, let me give you the opportunity to make the positive case for why we should care about strangers, people far away, things like that.

Speaker 4:
[35:30] We should care about strangers, people far away, because they are capable of having the mental states that are intrinsically good or the mental states that are intrinsically bad. And it matters. It's a better universe if most of the people, and I should say sentient beings, not only humans, do not have the really bad states, the states that are intrinsically bad, like states of agony. And it's good if they have the positive states of happiness, well-being, fulfillment. So that's why we should care about them. If we don't, if we say, for example, well, they're far away and I'm only going to care about those who are close to me, we are using a distinction that may have some practical value. People often say, well, I know better what they want and I can help them better. And that, again, used to be the case very clearly before we really could know what was happening to people on the other side of the world. It's less so now, though it hasn't entirely disappeared that we know better what people want when they're close to us. But we would be invoking this difference of distance when it's not really relevant to what is intrinsically good and bad, which are the states of consciousness that sentient beings can have.

Speaker 3:
[36:59] I probably should have asked this earlier, but your discussion just reminded me of this question. As a utilitarian, how much should we intervene in people's lives for their own good? If we think that they're on a self-destructive path, does being a good utilitarian sort of give us a license to set them straight?

Speaker 4:
[37:19] Yes, it does if that's going to be positive in its consequences. If they're just going to get angry with us and, let's say, it'll end the friendship that we have, and they're not going to listen to us at all, then obviously that's not a good thing to do. There is this idea of somebody being a busybody and interfering in other people's business, and often people do that wrongly and don't really know what's good for another person. John Stuart Mill, a famous utilitarian, said something like that everybody is the best judge and guardian of their own interests. I don't actually think that that's right, and Mill would have opposed some of the laws that we now take for granted, like laws that require people to wear seat belts when they're in cars. So, I think that you can go too far in interfering with people, but some forms of intervention are clearly justified, as in the seat belts example, which tends to be done by legislation. But if you were in a car and you noticed that the passenger was not wearing a seat belt, nowadays, of course, you would just say, look, there's this annoying beep going on in the car. Would you mind putting on your seat belt? But there were times when cars did not beep if passengers didn't wear seat belts. And yes, you should certainly have said, please put your seat belt on. I know the chance of me having an accident is small, but it's worth it.

Speaker 3:
[38:46] The utilitarians have won the car seat belt design argument pretty strongly, I think. But there is a counter argument, right? I mean, there are those who would say, and maybe this is a fundamental, at least sort of personality type cleavage between deontologists and consequentialists, that there's a certain set of people who would say, I have the right to live my life as I would like to. That's what brings me happiness, even if I end up getting hurt in a car crash. Therefore, it doesn't matter what the utility is. I'm the custodian of my own being.

Speaker 4:
[39:23] So you use the language, I have the right. And that is a language that some people will take as ending the discussion completely. Because they think that rights are absolute and overriding and inviolable. But consequentialists will be prepared to talk about rights in various ways. But they may think, yes, we should establish systems of rights, things that people should virtually never do, create a high barrier against anybody violating somebody's rights. Those are important things. But they won't regard that as the end of the discussion. And I would certainly say, yes, you have the right to do that. Let's say that I do agree that you have the right to do that. Suppose you're not in my car, maybe you're riding a motorbike and you say, I like to have my long hair streaming out behind me in the wind. And I have a right to do that. And I know I'm taking a risk and I might be killed, but it's fine and I'm prepared to do it. Now, you could argue reasonably plausibly that, okay, it's better if we let people take responsibility for those kinds of choices that are not going to harm others, if we assume that's not going to harm others. But we can still discuss it with them. We can still say, look, have you been to hospitals where you've seen people who have been riding motorbikes and had brain injuries and they've turned into basically just lying there and making a few noises? And is it really worth it to take the risk of being like that in the end? There are other cases I could give you where I think it's clear that when somebody says I have a right, we might want to object. Suppose somebody is wealthy enough to have bought a great work of art, whatever it might be, a great painting, let's say. And I say, well, where are you going to hang that? And they say, oh, I'm not going to hang it, actually. I've decided to cut it into small pieces and use it to wallpaper my bedroom in different places. 
Well, we might say, okay, so you own this, perhaps you have the right to do it, but don't you think you're damaging something that millions of people have seen and wanted to see, and that perhaps will be seen again even after your period of ownership is over? Is that really the best thing to do in the long run? So I think saying I have a right is not the end of the discussion.

Speaker 3:
[42:04] It does seem, we've already admitted that human beings are imperfect creatures a little bit. So maybe there's an ideal utilitarianism out there that we can strive for, but we should admit that we fall short. Is there a worry that utilitarianism can act as a license for paternalism, for being a little bit overly interventionist in other people's lives?

Speaker 4:
[42:27] The question is where we draw the line of being overly interventionist, right? Certainly, utilitarianism does provide a license for paternalism, and the seat belt example, as you said, is one where utilitarians, or consequentialists, have won that battle. So that suggests that most societies really accept a degree of paternalism. And the question is, what is overly interventionist? And that's a discussion that utilitarians can have. And they may differ from people who are not utilitarians, who place intrinsic value on autonomy. That's true. Or it may be that you can have utilitarian arguments against being too paternalistic. As I said, it may mean that people just don't take responsibility for their own actions in the right way, and that the state has to become more powerful, if you like, or it has to become more omnipresent. And that might lead to abuses in the wrong hands, if the government is not a genuinely paternalistic one, but is seeking its own power. So there might be a variety of risks in being too paternalistic, and utilitarians can accept those as reasons for restraining paternalism in some circumstances.

Speaker 3:
[43:46] Well, maybe more dramatically, you certainly have suggested that taking this utilitarian perspective seriously, if we're trying to improve the overall utility of the world, one of the most direct things we can do is to fight global poverty. Maybe it's the most direct thing we could do. Talk about how strong that conclusion is, like how you would actually advise or change people's behavior to make things better by fighting poverty around the world.

Speaker 4:
[44:16] Right. Well, one of the ways I would like to change people's behavior is to enable them to see that if they are part of the affluent world, and they're not on the very bottom rung of the economic ladder in the affluent world, but they're, let's say, middle class or above, then the additional marginal utility that money has to them is very small compared to the additional marginal utility that money has to somebody who is living in extreme poverty. The World Bank defines extreme poverty as something close to three US dollars per day now, and says there are around 800 million people living at that level, which is actually historically a very low figure. It's about 10 percent of the world's population, and it's only in the last 20 years or so, I think, that probably for the first time ever, 10 percent or fewer of the world's population have been living in extreme poverty. That is, have not reliably been able to get enough food to feed themselves and their family. And we now add other things to being in poverty, like not getting any basic health care. Well, of course, most of the world didn't get any basic health care for most of our evolutionary past, not being able to educate our children, all those kinds of things. So, if you're living on $3 a day, then obviously, let's say you could get an extra $1,000, that's roughly your annual income, so you've now doubled it. And there's all sorts of things that you can do that you couldn't do before. It might be, for example, that you can buy a corrugated iron roof for your dwelling, which up to now has been thatched and leaks when it rains heavily, and which anyway is actually more expensive in the long run than a corrugated iron roof because it has to be replaced quite frequently. But you just could never accumulate the cost of a corrugated iron roof to keep yourself and your family dry. So that could be a big improvement. You could maybe afford to send your children to school when you couldn't previously.
You could get some basic health care which might be life-saving. You could sleep under malaria nets to prevent your children getting bitten by the mosquitoes that carry malaria, from which small children die. So there's a whole range of important things that you could do that would make a huge difference, that would perhaps transform your lives. And what's $1,000 to middle class people in affluent societies? Well, you know, maybe they have a less luxurious holiday than they would have because they're trying to save a little bit of money. It's not a big deal. So I think if people really think about that and think about what it is like to live in extreme poverty and how much better life can be if you can do some of the things that I just mentioned, then more people would be prepared to say, oh, well, that's something good that I can do. I like the idea that I'm transforming other people's lives for the better and I don't really mind what I'm having to give up to donate that sum. On the contrary, perhaps I find my life more meaningful and more fulfilling because I know that I am having a positive influence on others.

Speaker 3:
[47:56] So just doing the numbers in my head, if global poverty is roughly 10 percent, that's roughly a billion people, a little bit less actually, $1,000 each for a billion people is a trillion dollars, which is a lot of money, but it's not crazy compared to what Western countries spend on things. So should we think about this as something that individuals should do by giving to charity or something that is a more society-wide obligation and therefore governments should do?

Speaker 4:
[48:29] Yeah. There's an in-between alternative, which is something that companies and businesses might consider doing, because, and this is a very rough estimate, the profits of companies and businesses worldwide are roughly $10 trillion, and your calculation was pretty good, really. So if companies were to give 10% of their profits, if they were to tithe profits, that could be that trillion dollars that is going to be enough to bring everybody out of extreme poverty. So there are really those three options, I would say: individuals, companies, and governments. Now, in one way, it would be fairest if governments did it, because governments have progressive taxation, and that would mean that the wealthiest, who could most easily afford to donate significant sums, would be doing the most, and the poorest would be doing the least, and that would be the right outcome. But unfortunately, it's very difficult politically to get governments to do that, and no government is really giving much more than 1 percent of gross national product. In fact, I'm not even sure; there are maybe one or two that are close to 1 percent. But most of the others are giving much less. The United States is giving very little now, like a quarter of 1 percent or maybe a bit less than that. I'm speaking to you from Australia, which is also around that pretty deplorably low level. The United Kingdom is a little bit higher, and several of the European states are higher. But as I say, none of them are really above 1 percent. And then there's a further question about whether government aid is as effectively directed as non-government organizations. And there's some evidence that it may not be, that it's influenced by geopolitical sorts of policies. Certainly that's been true of USAID; it's gone to countries that the US had an interest in. For many years, Iraq was the number one recipient of USAID, although it was certainly not the poorest country.
So while it would be nice to increase government aid and make it more effective, those are long shots. Individuals, I think, relatively easily can give. And in a way, I direct a lot of my argument towards individuals because I think some of them do respond then. And the effective altruism movement, which has been going about 15 years now, has encouraged that and encouraged people to think about giving as effectively as possible. I wrote a book first published in 2009 called The Life You Can Save, which talked about the most effective organizations that you can give to and developed the arguments that I've just briefly summarized. And The Life You Can Save is an organization that encourages people to give effectively. And it's distributed so far about $120 million, which is significant, but not huge, to effective organizations. GiveWell is another organization that does research to locate the most effective organizations, and it's been supported by what used to be called Open Philanthropy, now called Coefficient Giving, which in turn has been supported by Dustin Moskovitz and Cari Tuna, who are sort of digital billionaires. And so that's, you know, some hundreds of millions anyway, perhaps a billion or so, has gone in that way. So I feel that this moral argument actually has had quite a sizable impact, not of the trillion dollar range that you mentioned is needed, but has affected very large numbers of people now for the better. And that's mostly what I've been focusing on. But let me just say one more thing. I have recently got involved with or co-founded an organization called Profit for Good, which is looking at the company aspect of it. And people can go to the website, profitforgood.com, and see what we're doing. And if any of your listeners are involved with companies that they have some influence over, and that might be interested in joining the Profit for Good Alliance, we'd be very happy to hear from them.

Speaker 3:
[53:50] Do you have any comment on... I mean, I love the idea of effective altruism. If you're going to be altruistic, why not be effective? We've had people like Will MacAskill and Joshua Greene on the podcast who've talked about these things. But the movement, such as it is, has fallen on a little bit of hard times of late. One way in which that's happened is that certain effective altruists think that the most effective thing you can do is to give money to them, because they're going to save the world from AI, or they're going to improve the chances of long-term evolution of the species or something. Again, is there a failure mode here that we should be especially alert to?

Speaker 4:
[54:34] I think it's a little unfair to say that effective altruists have said give money to them. They might have said give it to think tanks that they're working for. That's possible.

Speaker 3:
[54:43] Not them personally. I didn't mean it that way.

Speaker 4:
[54:46] Yeah. Right. Okay. Good. I thought you might be suggesting that they were siphoning off the money. Of course, that was suggested about Sam Bankman-Fried, which is one of the reasons why effective altruism took a bit of a hit, because he had been a pinup boy, the world's richest person under 30, and he had said that he was going to give it away to effective causes. Then it turned out that there was a fraud going on, that he was using funds that clients had entrusted to him. And he's currently sitting in prison. That's very unfortunate in a variety of ways. Obviously, the loss of large amounts of money that could have gone to good causes is the greatest tragedy. But it's not representative of the movement as a whole. As I said, there are other very wealthy people who are giving really effectively. And I think the movement has recovered from that burst of bad media publicity and is still going strong. Yes, quite a number of people in the effective altruism movement are interested in preventing our species becoming extinct, including trying to reduce the risk that AI will take us over in some way. I have mixed feelings about that. I do think that what we've been talking about, global poverty, is certainly a more immediate cause, where we can with greater certainty know what we're doing and know that we can have good consequences. I also think that animal welfare, and especially trying to combat factory farming of animals, which causes suffering on a vast scale, is another very practical and immediate concern where we can know we're having good consequences. Whereas with trying to reduce the risk of extinction, and especially the risk of extinction being caused by superintelligent AI, it's much more speculative whether you're going to actually succeed.
But of course, the counterargument goes, well, we're not just talking about eight billion people when we're talking about reducing the risk of extinction, we're talking about vast numbers of people who will exist and possibly, our species will colonize other planets and spread throughout the galaxy or even other galaxies eventually. So you can talk about not just trillions, but quintillions or whatever of people who might exist and won't exist if, in fact, we don't survive the next century. So I think there's a real point there. I mean, as I've been saying, a utilitarian has to value consciousness, pleasant consciousness, happiness. If you really think that there's a chance that there will be quintillions of happy beings, having solved all the technological problems that we face, living in future centuries or millennia, that's something that would be tragic if it were lost. So it's not obvious that they're wrong to think about extinction risk as a high priority. But yeah, I still think that it's very difficult to know how we are contributing to reducing that particular one. Other extinction risks, by the way, like reducing the risk of bioterrorism or nuclear war, I do take much more seriously as things that we can actually have an impact on.

Speaker 3:
[58:23] I guess one of the worries about being an overly principled utilitarian is it seems to imply that I should basically give away almost all of my wealth, at least until I reach the median wealth of the world. Maybe that's just too much to ask for a typical person. You already alluded to this a little bit, but what is your response to that?

Speaker 4:
[58:48] Yes. We can distinguish here between what we would do if we were ideal beings purely motivated by the utilitarian idea, from what we can reasonably ask human beings to do, given the way they are and that they have evolved from thousands of generations of beings who survived because they thought about their own interests and those of their close kin. So if we were all saints, then yes, we would give away down to that level of marginal utility by giving more, but we're not. So I think what we ought to be asking is what will actually bring about the best consequences. So if we say to somebody, look, if you're going to live an ethical life, you have to give away everything down to the point at which, if you gave more, you would be just as poor as the poorest person that you're giving to. If they then are just going to throw up their hands and say, if that's what ethics requires, I'm giving up ethics because I just can't live like that, then you're not going to get anything. Whereas perhaps if you have a more modest scale of giving, still significant but more modest, you're going to get a lot more. So that's what I've tried to do. In the book I mentioned before, The Life You Can Save, I draw up a scale of giving which is progressive like a tax scale. So it starts off with people on modest incomes giving 1 or 2 percent, and then by the time you get up to the people who are earning millions each year, it suggests giving a third, which is still, I think, modest enough in that, if you're earning millions every year and you give up a third, you're still very wealthy and probably still have a lot more than you can really spend in a way that adds to your enjoyment. So I think that that's a reasonable scale of giving that we can ask of people. In putting it forward, I'm hoping that it will actually maximize the amount that people do give if they think of it in those terms.

Speaker 3:
[61:00] Well, this is very helpful. So it sounds like being a kind of effective real world realistic utilitarian involves a kind of balance between knowing what the ideal utilitarian action would be and knowing a little bit about human nature. Like you say, it doesn't actually raise the total utility of the world if you ask people to do things that they're just not willing to do.

Speaker 4:
[61:22] Yes, exactly. And so we have to tailor that. Asking people to give is itself an action, and utilitarians will weigh actions in terms of their consequences. So you have to think about making the ask that will have the best consequences.

Speaker 3:
[61:38] I do want to give you a chance to talk about animal rights and animal liberation. You alluded to it a little bit already, but we had Jonathan Birch on the podcast talking about animal sentience. And I take it that that's part of the argument. If you're impartial about the utility gained through pleasure by different human beings, and almost everyone would agree that animals feel suffering or feel pleasure, then we have to calculate that somehow.

Speaker 4:
[62:08] Yes, absolutely. And I think Jonathan Birch is doing exactly that. And the Jeremy Coller Centre for Animal Sentience at LSE that he's directing is actually remarkably effective. I don't know if you're aware of the story, but it's an unusual example of philosophical, or joint philosophical and scientific, work on the sentience of cephalopods, octopuses and squid, and a certain group of crustaceans called decapod crustaceans, which is basically lobsters and crabs, having had an immediate legislative effect in the UK. That report came out at a time when the UK was bringing in its new law to say that animals are sentient beings, which had always been part of EU law, and so of UK law while the UK was part of the EU. But when the UK left the EU, the animal movement protested that this would mean that animals would no longer be recognized as sentient beings in the UK. And Boris Johnson said, don't worry, I'll bring in a law to say that they're sentient animals. And that's a promise that Boris Johnson actually kept. He did bring in such a law. And it was just going through Parliament when that report came out. And the House of Lords debated it and amended it, because it had covered just vertebrate animals, to include cephalopods and decapod crustaceans. It doesn't mean exactly that they're protected in the same way as other animals, but at least there's something there. It's a legal basis for courts or government policies to give them weight now.

Speaker 3:
[63:48] Well, it's a well-known fact, when you go to dinner with philosophers, they never eat octopus, and it's usually Peter Godfrey-Smith's fault, your countryman, who has convinced us all that they're smarter than we might think. And I guess that's the argument. But tell people your thoughts; your own angle is more about factory farming than about the octopuses.

Speaker 4:
[64:10] Yes, although they're not as separate as you might think, because there are moves to factory farm octopuses.

Speaker 3:
[64:15] Oh, I didn't know that.

Speaker 4:
[64:17] Yeah, there's a proposal in the Canary Islands, which is part of Spain, for a large sort of underwater factory farm for producing millions of octopuses, and I think there are other proposals around. So yeah, if you do eat octopus, don't assume that your octopus had a happy life living freely in the sea until it was caught. And I agree, we shouldn't. I think that factory farming is really one of the great moral atrocities that's going on right now, and the reason I think that is the scale is so mind-bogglingly enormous. If we're talking about land-based factory farms, it's probably something like 70 to 80 billion animals a year. If we're including factory farms for vertebrate marine animals, so for fish basically, it probably goes up to something like 200 billion animals a year. And if you were to include shrimp and other animals, you're getting into the trillions, but there's less certainty about whether they can suffer. So let's leave them out of it if you like. But just talking about 200 billion vertebrate animals crowded together in conditions that are solely directed to producing their flesh or their eggs or their milk in the cheapest possible manner, with no independent concern for their welfare. And often those conditions thwart a lot of basic instincts that they may have. Social instincts to be in a group of a size where they can identify other individuals, which, let's say, chickens and turkeys might do if we're talking about 20 or 30 or even 50 individuals. But when you've got 20,000 birds in a single shed, that's not possible. Instincts to play, which many animals have, we're talking about young animals, pigs, for example, playful, intelligent animals. Instincts to move around freely and to root around and have something to do rather than just stand and sit and lie all day and eat occasionally when the food is put in front of you. I just think there's a huge amount of suffering that is going on there.
And it's completely unnecessary, that's the other factor. It's not that this produces more food for human beings. It actually dramatically reduces the availability of food for human beings. Because when we grow crops, grain or soybeans, and feed them to animals, we get back only a small fraction of the food value that we feed to them. In some cases, perhaps less than 10 percent in the case of feeding grain to beef cattle. And in others, it's more, but it's always a minority. It's always less than half of the food value.

Speaker 3:
[67:23] Is there progress being made on factory farming? I might be in a bubble here. I have the impression that more food producers are trying to be nicer to their livestock.

Speaker 4:
[67:36] Well, I think that's unfortunately largely a false impression, but it depends a little bit on where you are. In the European Union and the UK, if you go back to when I first wrote, Animal Liberation came out in 1975, so just over 50 years ago, there has been some progress. Not that producers are really trying to be nicer, but the animal movement has succeeded in getting laws which reduce the amount of suffering, though only marginally, I would say. You can't keep laying hens in cages, small wire cages where they can't even stretch their wings, anymore. You can't keep veal calves in individual stalls where they can't turn around. Similarly, the mother pigs, which used to be kept in stalls that are too narrow for them to turn around or walk more than a single step, are no longer allowed in the EU and the UK, or in some states of the United States, particularly California. But unfortunately, all the things I mentioned that are banned in the EU and the UK are still standard practices in the states that produce most of the animals. In other words, Iowa and North Carolina and the south of the US and Nebraska, those states have no such laws. So this degree of confinement goes on. And then elsewhere in the world, China has dramatically increased its production of animal products, all through factory farming and with pretty much no animal welfare constraints. So I would not say globally that there's been an improvement. There have been some regional improvements.

Speaker 3:
[69:25] Okay. Well, that's, I mean, I guess it's good to know if it's true, but that's, that is too bad to hear. I do hope that there are advocacy efforts going on and having some impact outside the EU to make things better.

Speaker 4:
[69:41] There certainly are advocacy efforts going on, yes, in many countries, including the US and to some extent in India, though it's not a very large movement there, and much less in China. There's some animal welfare movement in China, but it's mostly focused on dogs and cats. So, it's difficult. And there are a lot of myths, like what you said, about things getting better. If you ask Americans, for example, whether they eat products from factory farms, the proportion of people who say, oh, I only eat sort of free-range or animal welfare-friendly animals, vastly outnumbers the extent to which there is such production. For example, in chicken meat, the number of chickens who are not in factory farms is two in every thousand. So 99.8 percent of chickens produced in the United States are in factory farms. And yet the number of people who eat chicken and say, oh, they only eat free-range, it's maybe 10 percent of the population or more.

Speaker 3:
[70:49] Is it easier to get free-range eggs than chicken meat?

Speaker 4:
[70:54] It's somewhat easier. I'm not sure how it is now in the EU, but I think you can get eggs properly labeled as free-range there, and in the UK as well. Here in Australia too, we have eggs labeled free-range. In the US, actually, it's quite easy to get eggs labeled cage-free. But cage-free doesn't mean free-range, because they might still be confined in a big shed, and you might still have thousands of birds in that shed. So genuinely free-range is hard to find. Here in Australia now, we're getting producers who actually put the number of hens per hectare on their box, and that's interesting information. At least here in Australia, you can't call eggs free-range if you have more than 10,000 birds per hectare. Some of them do have 10,000 birds per hectare, and some of them have just, you know, 1,200 or something. And obviously that's a significant difference in quality of life for the hens.

Speaker 3:
[71:56] I did want to get to one other topic very quickly. I'm not quite sure how it fits in. Maybe you can help us understand how it fits in. You've written about end-of-life decisions, which is something I'm super interested in. I think that our culture does a very bad job of taking care of people and guiding people and giving people autonomy as they approach the end of their lives. Is your stance on this something that grows out of the other things we've been talking about?

Speaker 4:
[72:26] Yes, absolutely. I mean, you could see the unifying thread of the things that I've actually worked on most intensively, where I've worked not only as a philosopher but also as an advocate, as trying to eliminate pointless or unnecessary suffering. So I think the suffering of factory farming is pointless and unnecessary, pointless in that it doesn't produce more food. I think that the suffering of people in extreme poverty is unnecessary because we have the means to end it. And on a much smaller scale, the suffering of people who are terminally ill, who understand that they are not going to survive very long and don't want to go through those last days or weeks or possibly months before they actually die from the disease they're suffering from, is completely unnecessary and completely pointless. And actually, in economic terms, wasteful, because hospital beds are expensive. But I'm not thinking of it from the point of view of health care economics, really. I'm thinking of it from the point of view of those people who want to die, who are making a rational choice to die, given that their prospects are very poor, and yet in many countries and jurisdictions are still not able to die, not able to have medical assistance in dying. And they don't want to go and kill themselves by jumping off a building, for good reason. So I think that that's completely pointless suffering. And I'm pleased to say that over the years that I've been working on this, it has now been legalized in more and more countries, starting off with the Netherlands in the 1980s, where it could begin to be openly practiced. We've now got a lot of different countries allowing this, and a growing number of states in the United States as well.

Speaker 3:
[74:20] OK, so I do see the connection between eliminating pointless suffering. But would you also advocate for letting people just make this decision, even if they weren't at the moment suffering?

Speaker 4:
[74:32] I would advocate that under certain conditions. I think they need to have good reasons. So I would not like to see just anybody able to do this, even if they're temporarily depressed and have a prospect of recovering, or lovesick teenagers whose partner has broken up with them and who think that they'll never be happy again. I think we need a certain amount of paternalism. In other words, I'm not saying absolute autonomy overrides everything here. But if they can give us good reasons why that might be so, then that's something that I would allow them to choose.

Speaker 3:
[75:14] And then I promise the final thing, as you mentioned, you've not just been an ivory tower philosopher, you've been active in trying to actually make the world a better place in these directions in which we've indicated. And one of your new tools is this wonderful idea called the podcast. You're a podcaster like all the great academics that I know. Tell us about that.

Speaker 4:
[75:40] Yes. So I retired from Princeton in 2023. And one of the things that gave me more time to do was a podcast. I'm doing that with the Polish philosopher I mentioned, with whom I co-authored The Point of View of the Universe. We also co-authored a book on utilitarianism for Oxford University Press's Very Short Introduction series. We're both interested in what it is to live well. We're both basically hedonistic utilitarians, so we're presenting that point of view. But we thought it would be interesting to get a range of different views from guests who either have something to say about that because it's one of their interests, perhaps some research they've been working on about what enables people to live well, or who are just people who've led interesting lives, sometimes quite long lives, sometimes not so long. So we've had a wide range of guests on the podcast, starting with the late psychologist Daniel Kahneman, who was a Nobel laureate, and many other guests from different fields. We recently had Paul Simon, the musician. And we've also had Judy Collins, the folk singer, a great singer. We've had Yuval Noah Harari, author of Sapiens, and a range of other people living more ordinary lives. Some of them have been activists; Ingrid Newkirk from People for the Ethical Treatment of Animals has been on. Some of them have been people who've lived altruistically in various ways. We've had Kakenya Ntaiya, a Maasai woman from Kenya who made a deal with her father that she would go through the genital mutilation ceremony that was typical for girls in her culture if he allowed her to go to high school, which girls usually didn't do; they usually got married at 13 or 14. She ended up going to a college in the United States after high school and starting a movement called Kakenya's Dream, to try to give other girls in countries where they did not have a future as independent people that kind of future.
So it's a wide range of people who've led rich and fulfilling lives in many ways.

Speaker 3:
[78:15] Okay. So Peter, I know you're new at this, but you have to tell us the name of your podcast.

Speaker 4:
[78:19] Oh, I'm sorry. You're right. It's called Lives Well Lived. You can find it on Apple Podcasts or Spotify or wherever you get your podcasts, including of course Mindscape.

Speaker 3:
[78:31] I think that's a great place to end on an optimistic note after we talked about a lot of things. So Peter Singer, thanks very much for being on the Mindscape podcast.

Speaker 4:
[78:38] Thanks a lot Sean. It's been great talking to you.