transcript
Speaker 1:
[00:44] Hi, I'm Erin Welsh, and this is This Podcast Will Kill You. You're tuning in to the latest episode of the Tpwky Book Club, where I chat with authors of popular science and medicine books about their latest work. Since starting this series a few years ago, I've gotten to cover some amazing books, and I appreciate so many of you reaching out with your suggestions for books to feature. Keep those recommendations coming, please. And if you'd like to take a look at the full list of books that we've covered in this series, as well as get a sneak peek at ones that are coming up in future episodes, head on over to our bookshop.org affiliate page, which you can find on our website, thispodcastwillkillyou.com, under the extras tab. On the bookshop page, you'll find several podcast-related lists, including one for this book club and one for the Tpwky Kids Book Club. And if you're not following us on social media, you absolutely should be, because Erin Updyke has been putting together videos reviewing children's books. It is such a great resource for sciencey kids' books for all ages. And if you want to share your thoughts on these episodes, make topic suggestions, or submit a firsthand account, you can get in touch with us using the Contact Us form on our website. Two last things before moving on to the book of the week. First, please rate, review, and subscribe. It really does help us out. And second, you can now find full video versions of most of our newest episodes on YouTube. Make sure you're subscribed to the Exactly Right Media YouTube channel so you never miss a new episode drop. Belief is a powerful force. It shapes every facet of our lives and transforms perception into reality. What we believe to be true is not always what is actually true, something I'm sure we can all relate to. Maybe you've debated with a friend over the answer to a trivia question, like you both know the right answer but your answers are somehow different. Or maybe you've had a heated exchange with a relative who firmly believes that the moon landing was faked. How do we decide what we believe? How can we know that what we believe is the truth? And how can we convince others of that? These are precisely the questions that Adam Kucharski, a professor at the London School of Hygiene and Tropical Medicine, asks in his latest book Proof, The Art and Science of Certainty. Kucharski, a mathematician who works on infectious disease outbreaks, explores how we are inundated with information, and increasingly misinformation, that we have to evaluate to determine whether or not we should incorporate it into our decision making. This extends beyond personal decisions like which route is best to take to work or what to make for dinner. Our world is built upon structures of proof with varying degrees of support. The car that you drive to work is manufactured under rigorous safety testing, meaning there are established guidelines for what is considered safe and how to test that. Same thing with the food we eat, the medicines we take, the buildings we spend time in. We don't question so many of our beliefs. To do so would leave you frozen, uncertain of which direction to move in or what to trust. You'd have no time to actually live your life. But when we do scrutinize our certainty, we might find a gulf between our beliefs and someone else's, and between those beliefs and the objective truth. Where does that incongruity originate? Why are we skeptical about some things and not others? What does it take to make up our mind?
And what does it take to change it? That answer might not be the same for everyone. An enlightening blend of philosophical musings, political commentary, statistical exploration and personal reflection, Proof is a fascinating read, particularly as this unceasing flood of information, both good and bad, shows no sign of stopping. Let's take a quick break and then get into things. Professor Kucharski, thank you so much for joining me today.
Speaker 2:
[05:18] Thanks for having me.
Speaker 1:
[05:19] I am thrilled to talk with you about your latest book, Proof, The Art and Science of Certainty. And before we dig into the various forms of proof and how we determine a threshold for proof or what different types of proof exist for certain situations, I want to start at the very beginning. What is proof? Is there a standard definition?
Speaker 2:
[05:40] Yes, I think that's a great question. My background's in math, so a lot of my training was around this idea that you can have definitive knowledge that something is true. And I think it's something that people have grappled with across fields. I mean, one of the stories that really struck me was Abraham Lincoln, when he was training to be a lawyer, came across this word demonstrate, and you have this kind of beyond reasonable doubt, this certainty, and he's like, I don't really understand what this is as a concept. And he actually went back to all of these ancient Greek mathematical texts to understand how we can take the knowledge we have, build on that, prove new theorems, and use those to prove subsequent knowledge. But one of the things that was really the motivation for the book, and something that anyone who works with information and decision making and evidence comes across very often, is that it can become quite a shifting concept. I mean, even in mathematics, things that people thought were proven turned out to have some hidden assumptions or human judgments lurking there, and that caused a lot of it to collapse. So I think it's a fascinating concept, because it's something that's so important in life, not just having knowledge that we gradually accrue, but for many of the things we care about, whether it's dealing with an emergency, whether it's a legal case, whether it's even just a minor business decision in our day, we have to work out where we set the bar and how we evaluate what we've got. And I think for me, that was really the launching off point to explore this. You know, how do we converge on certainty and what happens when it goes wrong?
Speaker 1:
[07:19] Thinking about the difference between proof and certainty and truth, like what is the relationship between those concepts?
Speaker 2:
[07:27] Yeah, I think that's a great question. And without going down the philosophical rabbit hole, there could have been a whole book on What is Reality? But the way that I approached it is just to look at how people have thought about this in different fields. And again, even going back to Lincoln and much earlier, there was this appeal of certainty, this idea that there could be this universal truth, and it's why a lot of fields ended up borrowing from mathematics. You see it in the US Declaration of Independence: we hold these truths to be self-evident. The original draft was, we hold these truths to be sacred and undeniable. But Benjamin Franklin didn't like that because it sounded like they were appealing to some divine authority, and self-evident is just borrowed directly from maths. It's just a given truth. And unfortunately, it turned out a lot of these things about equality weren't self-evident. But that story carries over into how other fields think about these things. Even in the legal world, a lot of it was originally derived from concepts around maths, around probability. If you talk about some of these thresholds, like preponderance of evidence, you're saying it's more likely than not, and you're borrowing a lot of these probability-based ideas. And even in the world of experimental design, as that developed, some of these early studies were almost trying to discount some of the influences of religion. You're wanting to understand cause and effect in the world, rather than just appealing to some other influence. And then for a lot of people, it became this question of how do you take the evidence you have and link that to a conclusion you want to make? And where do you set the bar for that? Do you try and get ever closer to certainty? There was actually a lot of statistical tension about a hundred years ago. I know statistical debates sound a bit boring, but this was real tension; people almost wouldn't talk to each other, because it was this disagreement between do you just try and get ever closer to the truth, or do you have a framework that allows you to make decisions? And a lot of times in life, we don't get to do the academic thing of, I'm just going to sit on the fence here, I don't know, and I'm not going to take any action. Often we have to decide: we do something or we don't do something, or we say someone's guilty or we let them go free. There are these decisions we have to make. And so that process of interacting with evidence is much more pressing. I think that was one of the real big tensions that never fully got resolved, actually. Even in how we teach statistics at school, we've kind of smushed together these two very different philosophies, one of this ever higher bar for evidence and one where we're outlining a framework to make a decision based on the knowledge we have.
Speaker 1:
[10:05] When it comes to public health and medicine, there's a much more pressing need to make decisions. And yet, decisions are often dragged out for long periods of time. And sometimes that is at the urging of someone who has an incentive to drag out a decision. So one of the examples that you talk about is Austin Bradford Hill, who was looking at this relationship between cigarettes and lung cancer and saying, well, we have some evidence, and there's still a lot of skepticism, but we have enough to make a decision. We cannot use uncertainty as an excuse for inaction. Do you feel like we've ever truly learned that as a society, or has it been players like the tobacco industry saying, oh no, this uncertainty, we need to push for more and more and more evidence?
Speaker 2:
[10:59] Yeah, I think that's a really good question. And that's a really good example of almost weaponized certainty: you can always set the bar higher, and in any aspect of life, you can set the bar higher and higher and higher to the point where you just won't do anything. And inaction, of course, is in itself a decision. And in his work, Bradford Hill was extremely thoughtful in how he approached this, because something like smoking, you can't really design a trial for it. You can't get people to randomly take up smoking and see if they get cancer. There are obviously ethical reasons. There are also just timeline reasons: if you look at the time scale of the intervention versus what happened, you might have to wait decades to have that clear signal. And so he did a lot of pioneering work with others, linking together the various non-random data sets you had available. Because one of the criticisms, of course, was: yes, smokers are more likely to get cancer, but maybe there's a genetic reason that makes them more likely both to smoke and to get it. And he outlined a lot of the ways we can think about cause and effect, and I think that's a very useful set of concepts. Some of them are the obvious ones, like the cause needs to come before the effect, or that you have this strength of association, more cigarettes makes you more likely to get cancer. Then there's whether you see that across multiple countries, or whether you can start to think about the biological plausibility; we see carcinogens in other situations as well. None of those things on their own is conclusive, but you can start to build this evidence base. And he made this really good point that any knowledge we have, even if it's very confident knowledge, is always subject to further refinement, but we still have that knowledge at that point in time. We can seek further information; there have been lots more studies of smoking since those early ones. But that's also information that we have to do something with. And this comes up particularly in situations with emerging threats or early concerns about things, whether it's a health intervention we think might be harmful. One of the examples I give in the book is the work at the FDA around thalidomide, which was this treatment for sickness in pregnancy. There was actually a lot of concern about safety for babies, and the FDA blocked it as a result. But on the other hand, you get things where there might be a lot of value, for example, in reducing smoking for health outcomes, even if there's uncertainty. And Bradford Hill made this nice point that the standard you should apply for taking action depends a bit on the situation you're dealing with: whether it's a fairly cheap action to take, whether it's not too disruptive for people. But in his argument, he said, smoking is something people really enjoy, so we need a kind of higher bar. And I think it's a reasonable point: if you're going to tell a lot of people to change how they live their lives, the evidence you need is perhaps different to something where you can take some action and then unwind it. So those kinds of trade-offs obviously need to play in as well.
Speaker 1:
[13:56] Let's take a short break and when we get back, there's still so much to discuss. Welcome back, everyone. I've been chatting with Dr. Adam Kucharski about his book, Proof, The Art and Science of Certainty. Let's get back into things. Right, the threshold for certainty can be different depending on the situation. And then there are also these personal thresholds for certainty or evidence. You know, how much information do we need? One of the things that you discuss in your book as well is what happens when evidence flies in the face of our personal beliefs, and how sometimes, even despite a mountain of evidence, we can just still feel like that's not possible, we can't accept it, it's not an intuitive truth. What does this show us about the personal nature of proof and certainty?
Speaker 2:
[15:04] Yeah, I think that's one of the things that really struck me in researching this. I mean, even in some of these mathematical puzzle examples, there are things that I'd come across as a kid and convinced myself, oh, that's just the answer to the puzzle. And it was only years later, when I was explaining it to someone else or someone else had asked me about it, and I went through the thing that convinced me, that I realized it just didn't convince them at all. And I think that's a really interesting gap. We focus a lot on how science works, how methods work, what convinces us. And you see this in a lot of studies around political beliefs, that people will often try to convince others with arguments that convince them. And then you get this gap, and it's almost like that just fails. And I think that's a really interesting step to explore: why does that fail? One of the things I find quite striking, even with some of the tools we have in the modern era, is this desire we have to explain things. A few years ago, I was talking to a bunch of people working on AI, and there's a lot of concern about things like self-driving cars: we don't understand why they make mistakes, we need that explainability, we can't have things we don't trust. And actually, in medicine, we have all sorts of things that we know work, and we know how often they work, but we don't fully understand the physics and biology. Something like anesthesia, for example: you can control the effect it's going to have, but on the underlying biology and physics mechanisms, there's still more work to be done. Things like defibrillation: if you give a heart a shock, you can kind of reset it. Again, some of that's understood, but there are still those gaps in knowledge. But we know that these are useful tools. And even if you run a clinical trial, you can assess how effective a treatment is, but that on its own will just tell you the effect. It won't necessarily tell you all the mechanisms that explain it. So there are these tools that we've got, where we've got the evidence to take action, and we're very happy to use them. And there are other areas of life where that inability to explain something really bothers us, even if self-driving cars were much safer than humans. And humans, as I found when I started looking into this for the book, are not good at driving. It's not a massively high bar. But I think it would still make people uncomfortable, even if they were, say, twice as safe. Even in cities where the driving environment is very well defined, I think it would still bother people if every now and again there was just an accident where we had no real idea of what was happening. And I think that's really important to bridge. Because when you get that gap in understanding, that's room for other explanations to creep in. And I think that's where we start to see the emergence of things like conspiracy theories, whether it's things with incorrect logic; often it is that gap between what we're seeing and the understanding of why that's happening. Humans have this, in many ways, very powerful desire to explain what they're seeing. But in some cases where the explanation is very hard to untangle, it can lead us astray.
Speaker 1:
[17:50] That's fascinating to think of the gap between understanding and what is happening. We don't understand how anesthesia works or how Tylenol or acetaminophen truly works. But we do understand how vaccines work, for instance. And yet there's so many conspiracy theories and misinformation surrounding this thing that we do know how it works. I guess what good does evidence do if we do not take it into account and are not open to it?
Speaker 2:
[18:17] Yeah, and I think for me a lot of it is just understanding at what point that breaks down. I mean, if you look at some of the COVID vaccines, for example, or some of the other debates around climate intervention, I think often it gets very into debating some element of the technology, when often it's actually just that people disliked some of the control that was exerted over them through mandates or other things. Actually, if you've got an intervention that you're unhappy with, you can disagree with the intervention and say, look, we know that intervention works, but I disagree with how you're implementing it. Or you can disagree that the intervention actually has an effect. Or you can go even one step down and say, actually, I think there are deeper problems, or maybe the disease isn't a threat in the first place. I think often those levels get tangled up. In a lot of conversations I've had with people, the thing they're deep down concerned about isn't that they've just out of nowhere decided that this isn't a threat or that the technology doesn't work. In some of these instances, things are a bit more marginal, and you can make an argument either way: even if the underlying intervention is effective, there's a moral question in there too, not just an epidemiological one. So I think it's about understanding where those drivers are, and also being clear in our own arguments. Sometimes I have a conversation with people and I think I'm arguing about the nuances of whether an intervention is a good idea or not, and they're actually arguing about whether it's a problem in the first place. We see it with vaccines, which I guess is an example that's more polarized, but even with something like climate, you can have a lot of people who agree on the nature of the threat of climate change, and agree on the different levers that we probably have available as a society, but who strongly disagree about how we prioritize those and all of the trade-offs. And I think it's just understanding what level we're on, and where the evidence might stop and other things might be filtering in on a personal level.
Speaker 1:
[20:25] This idea of proof and certainty and truth seems very intuitive in a lot of ways today, but maybe this wasn't always the case. When did the concepts of truth and the need for these self-evident truths or certainty or proof come to be, and in what fields or areas were they initially applied?
Speaker 2:
[20:48] Yeah, that's a great question. It's easy to think that the world of science and evidence just always was as it is. I mean, even in mathematics, this idea that we had a universal truth wasn't the same throughout history. If you go back to the ancient Egyptians, the ancient Babylonians, they were much more focused on problem solving. A lot of their texts are these kinds of puzzles, very much about practical everyday problems. And if you look at their formulas for the area of a circle, they're quite approximate. If you're building something that needs quite a large circle, you're probably going to be okay using those, but it's not going to give you a really precise truth no matter what problem you're working on. And that's where the ancient Greeks, mathematicians like Euclid, came in and tried to put things on a much more solid footing. So you've got these concepts like pi, so that if you want the area of a circle, that will just be universally true, and you won't have this issue where your approximation breaks down. And then, I think, as we came into the Enlightenment, it was very appealing for people that you could have these undeniable truths about the world, and that's where a lot of other fields started borrowing from mathematics as well. But even in medicine, if you look at the study of cause and effect, a lot of that started to emerge in the medieval Arabic world. In contrast to a lot of the superstition going around much of Europe at the time, this idea that disease or conditions just come out of nowhere and it's bad people or someone's a witch, there are early writings from around the 11th century saying, these aren't supernatural, there are natural causes, and we can study them and work out what the causes and effects are. There were a lot of early attempts to think about concepts that we would now call things like having a control group: how we would divide people, treat some and not treat others, and then compare the difference. The conclusions didn't always work out. I think one of the earlier studies was someone who'd correctly identified the symptoms of meningitis, but then concluded that bloodletting was really effective for it, so probably something in their study design had gone astray. But again, it's one of the things you look back on and you think it's pretty obvious that we should be doing it that way. Yet even coming into the 20th century, if you look at something like analyzing a medical treatment, a lot of the early studies used an alternation method. Because if you think about it, rather than randomize patients, you could just say, well, the first patient that comes in, I'm going to treat, the second, I won't, the third, I will, the fourth, I won't. On average, any other sources of variability should balance out, and the difference between those groups should, on average, be down to the treatment effect. But Bradford Hill, actually, who did a lot of the pioneering work in the early clinical trial space, noticed that the groups were often out of balance. Because what was happening was that as patients were coming in, the doctors were maybe subconsciously thinking, that person looks a bit ill, I'll enroll them, or maybe they don't meet the diagnosis. And actually, a lot of the early randomization wasn't statistical.
It was just to keep humans from themselves because we couldn't trust subconscious judgment. So a lot of the early randomization in medicine wasn't about the statistical properties of the trial design. It was just about making sure humans didn't muck things up, basically, with their internal biases.
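To make that failure mode concrete, here is a toy sketch in Python, with invented numbers rather than anything from the book, of how a foreseeable alternation scheme lets subconscious enrollment decisions unbalance the groups, while concealed randomization keeps them comparable:

```python
# Toy simulation of the enrollment bias Bradford Hill observed.
# All numbers are illustrative assumptions.
import random

random.seed(0)

def run(scheme, n=100_000):
    """Return mean illness severity in the treatment and control groups."""
    treat, control = [], []
    for i in range(n):
        severity = random.gauss(0, 1)  # how ill the arriving patient looks
        if scheme == "alternation":
            next_arm = "treat" if i % 2 == 0 else "control"
            # The clinician can foresee the next arm, and subconsciously
            # declines to enroll a very ill patient destined for control.
            if next_arm == "control" and severity > 1.0:
                continue
        else:
            # Concealed randomization: the arm can't be foreseen.
            next_arm = random.choice(["treat", "control"])
        (treat if next_arm == "treat" else control).append(severity)
    return sum(treat) / len(treat), sum(control) / len(control)

for scheme in ("alternation", "randomized"):
    t, c = run(scheme)
    print(f"{scheme:>11}: mean severity treat={t:+.3f} control={c:+.3f}")
# Under alternation the control arm ends up systematically healthier,
# so the apparent "treatment effect" is biased before the treatment does anything.
```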
Speaker 1:
[24:03] Well, I mean, we'll find a way, I'm sure, somehow, some way. It's interesting to think about this idea of self-evident truths, thinking back to, okay, yes, there's superstition, and this person is a witch based on these signs or whatever. Was that also viewed as proof?
Speaker 2:
[24:20] The story of those trials by ordeal is a fascinating one as well, because they were used for a long time. You could have trial by ordeal, by water or by fire or whatever, and then you could also choose trial by duel. So basically, the big criminals always picked that, and people started to notice, like, oh, if God is deciding which one's innocent, God tends to pick the bigger one pretty much every time. So I think some suspicion crept in there. But actually, one of the reasons they stopped using them is that a lot of religious scholars became concerned that, by running those trials, you're essentially trying to get God to do your work for you. And that felt a bit awkward to them, because you're sort of on demand saying, hey, can you come and make a decision for us, and they got quite uncomfortable with that. But even those early systems, I mean, early juries in England were kind of fascinating, because they weren't the structure that we have today. They did their own investigations. Often someone was accused, and then they went off and accused someone else and kind of did their own thing, and it was only over time that the system evolved into having that way of converging on a decision. We talk a lot about the problem of black boxes, but to some extent juries are one, and talking to legal scholars about this was interesting: it's not so much about getting to the truth. It's about having a system where you can reach a decision, with that finality or semi-finality to the decision, a system that works, rather than one where you're 100% convinced of the truth. And I think we see that across different fields, that emergence of truth and, as you said, of what's obvious and what's self-evident. One of the other things I found interesting was how many mathematicians were deeply influenced by religion. So Isaac Newton, for example, deriving all these equations and theories about planets and planetary motion, saw it as God keeping the universe in balance. He was essentially just observing divine influence. So for him, although he was doing a lot of this scientific work, there was this external influence keeping it all in place along the way. So even quite far through history, you had these other baseline explanations going on. And even in the modern era, the way we sometimes tell the story of science is, I think, almost a bit disingenuous. If you read a scientific paper, it's kind of, yeah, there's this problem and I decided to run this experiment and I got these results. But there's also just that element of, what was the hunch that made you think that might be an interesting thing to investigate? What was that spark of inspiration? Even in this era of AI, it's a really interesting question, because AI can process and mimic human decisions as we write them down. But there's often that spark or that idea that would lead you to do something that people just wouldn't have tried before, and that's much, much harder to articulate. There's not necessarily that obviousness that we might have had in another era, but I think there still are those things which are quite hard to explain about where that evidence might have initially sparked from.
Speaker 1:
[27:21] One of the things that you mentioned was the use of proof and evidence in the legal system. And I feel like this was a really fascinating discussion in your book as well where this is employed as proof beyond a reasonable doubt or innocent until proven guilty. What does this show us about the variable level of evidence needed to make a decision and I guess the different forms that proof can take in this setting?
Speaker 2:
[27:49] It's a really interesting question how different societies have set that balance. Essentially, in a legal case, there are two main errors you can make: someone can be guilty and you can let them go free, or they can be innocent and you can convict them. And William Blackstone, who was a legal scholar in the 1760s, came up with what's known as Blackstone's ratio. He said it's better for 10 guilty people to go free than one innocent to be convicted. And Benjamin Franklin actually was even more cautious: he said it's better for 100 guilty people to go free than for one innocent to be convicted. They saw that as the balance. Other cultures, particularly some communist regimes in the 20th century, set it the other way: it was better for 10 innocents to be in prison than one guilty person to go free, because of where they saw the worst error in that trade-off. And actually, some analysis looking at US legal cases, obviously they don't try to target these error rates, but you can sort of infer how people are valuing this, and a lot of them seem to land between that Blackstone and Franklin ratio of error. But then there's, of course, the different evidence and how it makes its way into the courtroom, particularly some of the historical examples of things like early probability. And one of the challenges here is what one scholar I talked to called the weak evidence problem. A lot of how we navigate life is around probabilities that are quite likely, and a lot of probability theory was originally developed around dice games and things you can study and quantify. But in legal cases, we often have this weak evidence problem, where someone ends up in some extremely bad-looking situation from a guilt point of view, and you think, well, it's extremely unlikely this is all just a coincidence. But then, if you think about it, this person might just be a normal, everyday person, so it's also extremely unlikely that they're guilty. So you have these two extremely unlikely events, and a lot of statistics just isn't equipped to handle that. And so there's this notion, it's called the prosecutor's fallacy, where people say, well, this is the probability that it would all be a coincidence, and therefore that's the probability that they're innocent. But of course, you've got to weigh it against the fact that it's extremely unlikely they're guilty as well. And we see this in other areas too. In the work we do dealing with emerging health threats, pre-COVID there were some studies, and actually we did a TV show, where you sort of say, oh, a pandemic could be just round the corner. Or there was another study, from the World Bank I think, that put it at 1%. And you're like, well, what is that? Was that a good prediction? Was it bad? These are very unlikely events. I think in legal cases, again, with that weak evidence problem, it's less about definitively working out with high probability which of those is true, and more that we have to converge on the best explanation for what we've seen, given those two possibilities. And in reality, we may never have certainty about where we are. It's something that struck me, both thinking about that and thinking about a lot of people who have to plan for emergencies and very unlikely events: a lot of the way we traditionally think about probability can very quickly lead us astray.
Because I think we're so used to having this idea, well, I can be 99% sure that this happened. But actually it's much more about that balancing act that we have to perform.
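As a concrete illustration of the prosecutor's fallacy he describes, here is a minimal sketch, with entirely invented numbers, of how Bayes' theorem weighs the two unlikely events against each other:

```python
# The prosecutor's fallacy, made concrete with Bayes' theorem.
# All numbers here are illustrative assumptions, not from the episode.

p_match_if_innocent = 1e-6         # chance an innocent person matches the evidence
p_match_if_guilty   = 1.0          # a guilty person matches for certain
prior_guilt         = 1 / 100_000  # one suspect drawn from a large population

# The fallacy: reading P(match | innocent) as P(innocent | match).
fallacious_innocence = p_match_if_innocent  # "one in a million, so surely guilty"

# Bayes' theorem weighs BOTH unlikely events against each other:
numerator   = p_match_if_guilty * prior_guilt
denominator = numerator + p_match_if_innocent * (1 - prior_guilt)
posterior_guilt = numerator / denominator

print(f"Fallacious reading: P(innocent) ~ {fallacious_innocence:.6f}")
print(f"Bayes' theorem:     P(guilty | match) = {posterior_guilt:.2f}")
# With these numbers the posterior is only ~0.91 -- far from the
# near-certainty the raw one-in-a-million figure suggests.
```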
Speaker 1:
[31:03] Let's take a quick break here. We'll be back before you know it. Welcome back, everyone. I'm here chatting with Dr. Adam Kucharski about his book, Proof. Let's get into some more questions. Thinking about this in the context of COVID, when the situation was evolving very rapidly, the general public and, of course, government officials wanted answers and wanted decisions. What is the best thing to do? Wear masks, not wear masks, sanitize groceries? All of these were constant questions, with people wanting hard answers, like just yes, period, end of. As someone who was on the informational front lines of the COVID pandemic, what was your relationship with uncertainty like at that time? Did you struggle with feeling like we don't have enough information yet? How did that feel, I guess, in your position?
Speaker 2:
[32:11] Yeah, I mean, those kinds of situations are enormously challenging, both in terms of evidence generation and communication, and then obviously the political decision making that comes off the back of it. In many of those situations, I found it useful to convert uncertainty around the exact estimate into just broadly what situation we're in. So for example, when the Delta variant emerged and we did a lot of the work identifying the early advantage it had, it really wasn't about, was it 30%, was it 40%, was it 60%? Essentially, all of those were a big problem. It's kind of like arguing, is this a disaster, or just a catastrophe, or just very, very bad? From a policy point of view, you don't necessarily need to communicate the exact number. You can just say, we're very confident that it's going to take off. A couple of things jumped out for me. One was the need to triangulate across data sources. Sometimes people have this idea of science that you go out and you run a study, and that study gives you the answer or it doesn't. And quite a lot of the early skepticism was saying, well, this study wasn't definitive, and that study wasn't definitive. But once you start to look at all of them, you start to look at the evacuation flights, the testing data and the contact tracing, the big testing efforts on some of the cruise ships, the clinical data, all of those signals start to pull you in the same direction. Each bit of that evidence on its own might have problems, but you can start to bring them together and draw that into a conclusion. I think we saw that across the pandemic: if you view it very much as, I'm going to get the perfect study and it's going to give me the answer, you'll struggle. But often you can find a lot of complementary data sources, whether for the variants or a lot of that early severity work, that were all pointing in the same direction. It's harder, obviously, when they're pointing in different directions, as we saw with some of the interventions, where it was less clear because in different countries and different economies, certain things affected behavior and other things in different ways. But the other challenge that jumped out: a lot of the health issues we deal with in the US and UK in the modern era are non-contagious, things like cancer, things like heart disease, so the focus is very much individual. You have someone who's ill: do you treat them, do you not treat them? If you don't treat them, that's one person who's going to get worse. But contagious health threats have this dependence, where a problem can get worse and then accelerate in very different ways. I think that was quite a challenge to communicate, because a lot of people had this notion of, you've got normal life, and then you could do something else that's not normal life, and we'd obviously just prefer it to be normal. But as we saw globally, you didn't get that status quo. That option was gone. Countries had varying levels of normality, but no country just pretended absolutely nothing had happened.
You either had, in varying degrees, depending on the structure of society and the advantages countries had in terms of demography and health care and other things, big changes in behavior or borders or whatever, or you saw a huge amount of death. I think that's something that can be, from an evidence point of view, much more challenging. In life, we're much more used to those linear problems, where, like with cancer, these are tragic events that happen, distributed across the population, rather than something where the worse it gets, the faster it gets worse.
Speaker 1:
[35:36] You mentioned how we have these different data sources, these different studies that are all leading us in a certain direction. We have, by this point in time, developed ways to measure both the quantity and quality of evidence. I really enjoyed your discussion on randomized controlled trials because this quote unquote gold standard of medical studies, that might not always be the gold standard. And I was hoping you could tell me a little bit about the times when the true gold standard might not be, for instance, an RCT, but it might be something else entirely, or it might be unethical to do a randomized controlled trial in that situation.
Speaker 2:
[36:16] I think we've seen quite a lot of examples where treating it as a cookie cutter, this is the only method we can use, can lead us into problems. Smoking and cancer is a very well-known one: we couldn't just have inaction because you can't get that level of perfection. Actually, even with the first randomized controlled trial in modern medicine, which was in 1947, streptomycin, a trial for TB, Austin Bradford Hill, who led it, made the point that streptomycin had some very promising-looking lab data and early signals, and he suggested it would have been unethical to withhold it from patients if it was available. But this was 1947 and there were currency controls; the UK, in its post-war state, couldn't get enough dollars to buy streptomycin. There wasn't enough to go around. So in that situation, they said it would be ethical to randomize: there's not enough of it, so you might as well randomize and learn something along the way. And we've seen that in other situations. In other examples, where things are very difficult to randomize, you can think about natural experiments. A well-known one is the Vietnam draft, where people were essentially randomly assigned to go to war based on their birthdays. Economists have done Nobel Prize-winning work using that to understand the effects of war on subsequent life outcomes, because it's not something where you can design that experiment, but you can make use of what you have available. So a lot of it comes down to this issue of wanting to understand cause and effect. The benefit of randomization is that all the other things that would influence whether someone gets a vaccine and whether they get the disease will, because you're randomizing on the vaccine, on average cancel out. So it gives you that quite neat benefit. But of course, you've also got the challenge that you might run a trial in one group or one population that doesn't generalize to somewhere else. You've also got the time issue. For diseases that evolve, you might run a trial now against flu or COVID or something; a year later, that's going to be a different variant. To what extent can you carry over those conclusions? We see a lot of examples in the literature where someone might run a trial in one population for one disease, for flu, for example, and then see a very different result when people look at population patterns elsewhere, because it's a different immune structure, it's a different strain, it's a different time period. So we can't just say, well, that study from a few years ago is the gold standard and we're only going to use that one. We have to think about how these things move along. That being said, though, I think in COVID there were missed opportunities to gather much stronger data. It's very hard to justify running those kinds of studies as a threat increases; when an epidemic's going up, taking your time to randomize is hard to defend, and essentially countries have to act on that threat as the evidence suggests. But particularly as countries lifted measures, that was often done in quite an ad hoc way. We could have done much more staging.
In the UK, there were some early studies, for example, of whether we could use rapid tests so people test themselves every day rather than quarantining for a week or two, and then, in practice, a lot of people just didn't bother. But beyond that, there are a lot of these debates we're still having, and we probably could have got better answers with some higher quality studies. Not necessarily even an RCT, just making use of what we had with more observation.
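The benefit of randomization he describes, other influences canceling out on average, can be sketched with a small simulation. All the parameters below are invented for illustration; the point is only the contrast between the two estimates:

```python
# A sketch of why randomization helps: a hidden confounder (age, say)
# drives both vaccine uptake and disease risk. All numbers are invented.
import random

random.seed(1)

def risk_ratio(randomized, n=200_000):
    cases = {"vax": 0, "novax": 0}
    totals = {"vax": 0, "novax": 0}
    for _ in range(n):
        old = random.random() < 0.5
        if randomized:
            vax = random.random() < 0.5                    # assignment ignores age
        else:
            vax = random.random() < (0.8 if old else 0.2)  # older people seek vaccines
        base = 0.10 if old else 0.02         # age drives baseline risk...
        risk = base * (0.5 if vax else 1.0)  # ...and the vaccine halves it
        arm = "vax" if vax else "novax"
        cases[arm] += random.random() < risk
        totals[arm] += 1
    return (cases["vax"] / totals["vax"]) / (cases["novax"] / totals["novax"])

print(f"observational risk ratio: {risk_ratio(randomized=False):.2f}")  # ~1.17: vaccine looks useless or harmful
print(f"randomized risk ratio:    {risk_ratio(randomized=True):.2f}")   # ~0.50: the true halving of risk
```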
Speaker 1:
[39:38] One thing that struck me during the COVID pandemic, especially the early months, was this desire from the public to have the answers, and I feel like there's a lot of variation in how willing someone is to say, I don't know. I'm wondering your feelings on this. Do you feel that scientists in particular have a difficult time saying that they don't know the answer to something? Do we need to embrace uncertainty more as scientists, or do you feel like we are embracing it but just not communicating it well?
Speaker 2:
[40:12] I think that's a really good question. It's how personality and politics and all these things play into it. There have been good reviews of evidence showing that overstated certainty just undermines trust and confidence, whether it's vaccines or other things. So if you say, yeah, this is 100% safe, there's absolutely no risk, and then there's even a tiny risk, you undermine that. One of the challenges with that overstated certainty, particularly once you make a public statement, is that it's very hard to back away from it. We saw that with the airborne question, where some health organizations even said it's not airborne, and it was very difficult to then walk that back. But it's a fine line, because you don't just want to say, we have no idea. You want to try and communicate the way the evidence is pointing. I think some countries did that better than others; Denmark and Singapore spring to mind on their reopening, where they said, this is the data we're looking at to do this, that might change, and this is how we have to work things through. But one of the difficulties with any emergency that goes on for that long is, you know, you have some people who very loudly said something's 100 times less severe than it is, and then they're kind of nailed on to having to keep promoting that. On one of the government advisory committees I sat on, where a lot of the early Alpha variant, early Delta variant, early severity work came out, there was a phrase that became used quite a lot, which was, tell me why I'm wrong. If you present stuff, especially to people more senior, and ask, is this correct?, it's very hard for people to come in and say, oh yeah, actually, I spotted a problem, especially if there are power dynamics or seniority and other things in play. So there was a lot of presenting work and saying, right, tell me why I'm wrong, tell me what I'm missing. And I think that's quite a healthy attitude in that kind of environment, to be much more actively looking for weaknesses and able to lay that out. And I remember, I think it was when the Gamma variant was emerging in Latin America, I gave a media interview, and when they wrote it up, it was basically, Dr. Kucharski doesn't really know. But in that situation, we didn't. And it is hard to do, especially when people are asking you questions around your area of expertise. In terms of how to balance that, it's not just saying, I don't know, but saying, well, we do know this, and we can make some judgment. There was this wonderful study in 1951 by a CIA analyst about the words we use when we're unsure, words about judgment. He basically realized that people use probable and possible to mean all sorts of things, and they all had different notions of them. Humans will go out of their way to avoid putting a number on a judgment, and the risk is you get uncertainty that stays very hazy, like, oh, it's a definite possibility. And actually, in some cases, if you've got an emerging threat and you've got experts, you do want them to put a number on it. Even if there's uncertainty, you want them to say, I am 60% sure that this is the case.
And there's been a lot of nice work, even around things like superforecasting, where people make those predictions and you can go back and look later. Because if people are well calibrated in their uncertainty, then when you say you're 50% sure about a list of things, about 50% of the time those things should happen; about half the things on that list should occur. So there are these situations where I think we can get better at thinking about our own uncertainty. One of the things that I've tried to do with a lot of emerging threats is just writing down what I think is going to happen. Because the human mind is great at rationalizing, oh yeah, maybe I did think that. So I did quite a lot of stating things like, I actually think the vaccine is going to be pretty good, or, I think this. And this is where social media, when it was maybe slightly less polarized, was quite helpful, because you could just put a post out. And I was always very careful; I didn't delete any of my tweets during COVID, because I wanted that record. The worst one I got wrong: I was in Singapore in February 2020, and their policy was, don't wear a mask unless you have symptoms. And I think I tweeted, yeah, that seems like a sensible policy, that seems quite evidence-based. And now, with some of the studies since, we'd probably not look back on that as being the best post. So as well as avoiding overstated certainty, I think it's also about holding ourselves to account, even if it's just privately, for how confident we were and what played out.
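The calibration idea he mentions, that events you call "50% likely" should happen about half the time, can be checked mechanically once predictions are written down. A minimal sketch, with placeholder forecasts invented for illustration:

```python
# A minimal calibration check of the kind superforecasting work uses:
# write down probabilities, then see how often those events happened.
# The (probability, outcome) pairs below are invented placeholders.
forecasts = [
    (0.9, True), (0.9, True), (0.9, False),  # "90% sure" claims
    (0.6, True), (0.6, False), (0.6, True),  # "60% sure" claims
    (0.5, True), (0.5, False),               # coin-flip claims
]

by_bin = {}
for p, happened in forecasts:
    cases, hits = by_bin.get(p, (0, 0))
    by_bin[p] = (cases + 1, hits + happened)

for p, (cases, hits) in sorted(by_bin.items()):
    print(f"said {p:.0%}: happened {hits}/{cases} = {hits/cases:.0%}")
# A well-calibrated forecaster's "60%" events happen about 60% of the
# time; big gaps between the two columns reveal over- or underconfidence.
```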
Speaker 1:
[44:46] I want to close out by asking you about the subtitle of your book, which is The Art and Science of Certainty. And I want to know about the art part of this. What is the art aspect?
Speaker 2:
[45:01] So for me, the more I dug into this, the more I saw these other elements beyond pure logic and pure observation coming in. I mean, even if you look at what was essentially a bit of a mathematical civil war in the late 19th century, a lot of these ancient Greek theorems, things about the properties of triangles, started to break down, because people started to draw shapes on spheres and other structures and come up with cases where these supposedly proven theorems didn't hold. And one of the reasons that was really controversial was this idea that there's a universal truth out there about the world, when actually, in this situation, it depended on what assumptions humans were making and what we were willing to define. So even in these supposedly pure subjects, there are still these debates where it kind of depends on which assumptions you pick, and that will change the answer. Even in science, there are a lot of these situations where we can accumulate the evidence, but then you have disagreement about where you set the threshold. This 5% cutoff has become very popular, this p-value, the chance that you'd get a result that extreme if there was nothing going on or your hypothesis was wrong. But that was kind of arbitrary; it was partly picked just for convenience. This was a hundred years ago, and the calculations were just a bit easier if you picked a value around 0.05. And others were more pragmatic, working in business on something and thinking, actually, the evidence is a bit weaker, but it's still useful to act on. So there's this human balancing act. And we saw, again, in things like legal cases, that how much you value different types of errors depends a lot on the individuals. One of the examples I found fascinating in the book was Einstein, who, when he moved to the US, got very angry about peer review, because he sent something to a journal and it came back with, oh, we've got another opinion on it. And he was like, whoa, whoa, whoa, why haven't you just accepted my work? And actually Max Planck, who published some of Einstein's amazing early papers, made the point, and this is me paraphrasing, that he would rather publish a few things that are a bit nonsense than miss a really important idea. So his threshold was, I want to set the bar low, admittedly mainly amongst physicists he knew, because I don't want to set it too high and miss a good idea. And that's where the art, I think, creeps in: that kind of subjectivity, not just in the evidence. For me, the real difference with something like proof is that it's not just about generating data; it's how that data interacts with the world and the decisions we make. And I think that's where things get really interesting. Where do we actually set the bar for evidence, both to convince ourselves and then to go out and convince others too?
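For the 5% cutoff he mentions, here is a minimal sketch, a standard coin-flip example rather than one from the book, of just how arbitrary the threshold is in practice: an exact binomial p-value sits just above 0.05 for 60 heads out of 100 and just below it for 61:

```python
# The 0.05 cutoff in action: an exact two-sided binomial p-value for a
# coin-bias test. A standard textbook example, not one from the book.
from math import comb

def two_sided_p(heads, n=100):
    """P(a result at least this extreme | the coin is fair)."""
    k = max(heads, n - heads)
    one_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
    return 2 * one_tail

for heads in (60, 61):
    p = two_sided_p(heads)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{heads}/100 heads: p = {p:.3f} -> {verdict} at the 0.05 bar")
# 60/100 heads: p = 0.057 -> not significant at the 0.05 bar
# 61/100 heads: p = 0.035 -> significant at the 0.05 bar
```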
Speaker 1:
[47:56] Well, Professor Kucharski, thank you so much for joining me today. This was such an enlightening conversation and I really did, I loved your book, Proof, so I appreciate you coming on to the show.
Speaker 2:
[48:07] Thanks. Great to talk.
Speaker 1:
[48:27] A big thank you again to Dr. Adam Kucharski for taking the time to chat with me. If you enjoyed today's episode and would like to learn more, check out our website, thispodcastwillkillyou.com, where I'll post a link to where you can find Proof, The Art and Science of Certainty, as well as a link to Dr. Kucharski's website, where you can also find his other book, The Rules of Contagion, Why Things Spread and Why They Stop. And don't forget, you can also check out our website for all sorts of other cool things, including but not limited to transcripts, quarantini recipes, show notes and references for all of our episodes, links to merch, our bookshop.org affiliate account, our Goodreads list, a firsthand account form, and music by Bloodmobile. Speaking of which, thank you to Bloodmobile for providing the music for this episode and all of our episodes. Thank you to Liana Squillace and Tom Breyfogle for our audio mixing. And thanks to you listeners for listening. I hope you liked this episode and are loving still being part of the Tpwky Book Club. A special thank you as always to our fantastic patrons. We appreciate your support so very much. Well, until next time, keep washing those hands.