Adam Kucharski: The Uncertain Science of Certainty
“To navigate proof, we must reach into a thicket of errors and biases. We must confront monsters and embrace uncertainty, balancing — and rebalancing — our beliefs. We must seek out every useful fragment of data, gather every relevant tool, searching wider and climbing further. Finding the good foundations among the bad. Dodging dogma and falsehoods. Questioning. Measuring. Triangulating. Convincing. Then perhaps, just perhaps, we'll reach the truth in time.”
—Adam Kucharski

My conversation with Professor Kucharski on what constitutes certainty and proof in science (and other domains), with emphasis on many of the learnings from Covid. Given the politicization of science and A.I.'s deepfakes and power to blur truth, it's hard to think of a topic more important right now.

Audio file (Ground Truths can also be downloaded on Apple Podcasts and Spotify)

Eric Topol (00:06):
Hello, it's Eric Topol from Ground Truths and I am really delighted to welcome Adam Kucharski, who is the author of a new book, Proof: The Art and Science of Certainty. He's a distinguished mathematician, by the way, the first mathematician we've had on Ground Truths, and a person who I had the real privilege of getting to know a bit through the Covid pandemic. So welcome, Adam.

Adam Kucharski (00:28):
Thanks for having me.

Eric Topol (00:30):
Yeah, I mean, I think just to let everybody know, you're a Professor at the London School of Hygiene and Tropical Medicine, and also noteworthy, you won the Adams Prize, which is one of the most impressive recognitions in the field of mathematics. This is the book, it's a winner, Proof, and there's so much to talk about. So Adam, maybe where I'd start off is the quote in the book that captivates in the beginning: “life is full of situations that can reveal remarkably large gaps in our understanding of what is true and why it's true. This is a book about those gaps.” So what was the motivation when you undertook this very big endeavor?

Adam Kucharski (01:17):
I think a lot of it comes from the work I do at my day job, where we have to deal with a lot of evidence under pressure, particularly if you work in outbreaks or emerging health concerns. And often it really pushes the limits of our methodology and how we converge on what's true, subject to potential revision in the future. I think particularly having a background in maths, you kind of grow up with this idea that you can get to these concrete, almost immovable truths, and then even just looking through the history, realizing that often isn't the case, that there are these kind of very human dynamics that play out around them. And it's something I think that everyone in science can reflect on, that sometimes what convinces us doesn't convince other people, and particularly when you have that kind of urgency of time pressure, working out how to navigate that.

Eric Topol (02:05):
Yeah. Well, I mean I think these times of course have really gotten us to appreciate, particularly during Covid, the importance of understanding uncertainty. And I think one of the ways that we can dispel what people assume they know is the famous Monty Hall problem, which you get into a bit in the book. So I think everybody here is familiar with that show, Let's Make a Deal, and maybe you can just take us through what happens when one of the doors is unveiled and how that changes the mathematics.

Adam Kucharski (02:50):
Yeah, sure. So I think it is a problem that's been around for a while and it's based on this game show. So you've got three doors that are closed.
Behind two of the doors there is a goat, and behind one of the doors is a luxury car. So obviously, you want to win the car. The host asks you to pick a door, so you point to one, maybe door number two. Then the host, who knows what's behind the doors, opens another door to reveal a goat and then asks you, do you want to change your mind? Do you want to switch doors? And a lot of the intuition people have, and certainly when I first came across this problem many years ago, is: well, you've got two doors left, right? You've picked one, there's another one, it's 50-50. And that was true even for some quite well-respected mathematicians.

Adam Kucharski (03:27):
People like Paul Erdős, who published more papers than almost anyone else, that was their initial gut reaction. But if you work through all of the combinations, if you pick this door and then the host does this, and you switch or don't switch, and work through all of those options, you actually double your chances if you switch versus sticking with the door. So it's something that's counterintuitive, but I think one of the things that really struck me, even over the years trying to explain it, is that convincing myself of the answer, which I did quite quickly when I first came across it as a teenager, is very different to convincing someone else. And even actually with Paul Erdős, one of his colleagues showed him what I call proof by exhaustion. So go through every combination, and that didn't really convince him. So then he started to simulate and said, well, let's do a computer simulation of the game a hundred thousand times. And again, switching was the optimal strategy, but Erdős still wasn't really convinced: I accept that this is the case, but I'm not really satisfied with it. And I think that encapsulates, for a lot of people, their experience of proof and evidence. It's a fact and you have to take it as given, but there's actually quite a big bridge often to really understanding why it's true and feeling convinced by it.

Eric Topol (04:41):
Yeah, I think it's a fabulous example because I think everyone would naturally assume it's 50-50 and it isn't. And I think that gets us to the topic at hand. What I love, there are many things I love about this book. One is that you don't just get into science and medicine, but you cut across all the domains: law, mathematics, AI. So it's a very comprehensive sweep of everything about proof and truth, and it couldn't come at a better time, as we'll get into. Maybe just starting off with math, the term I love: mathematical monsters. Can you tell us a little bit more about that?

Adam Kucharski (05:25):
Yeah, this was a fascinating situation that emerged in the late 19th century, where a lot of maths, certainly in Europe, had been derived from geometry, because of the ancient Greek influence on how we shaped things, and then Newton and his work on rates of change and calculus. It was really the natural world that provided a lot of inspiration, these kind of tangible objects, tangible movements. And as mathematicians started to build out the theory around rates of change and how we tackle these kinds of situations, they sometimes took that intuition a bit too seriously. And there were some theorems that they said were intuitively obvious, some of these French mathematicians. And so, one, for example, is this idea of how things change smoothly over time and how you do those calculations.
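To make the Monty Hall point above concrete, here is a minimal sketch, assuming Python, of the kind of hundred-thousand-game simulation Kucharski describes being shown to Erdős; the function and variable names are illustrative rather than anything from the book.

```python
import random

def play(switch: bool) -> bool:
    """Play one Monty Hall game; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host, who knows where the car is, opens a door that is
    # neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stick = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stick: {stick:.3f}   switch: {swap:.3f}")  # roughly 0.333 vs 0.667
```

Sticking wins about one game in three and switching about two in three, the same answer the exhaustive enumeration gives.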
But what happened was some mathematicians came along and showed that when you have things that can be infinitely small, that intuition didn't necessarily hold in the same way.

Adam Kucharski (06:26):
And they came up with these examples that broke a lot of these theorems, and a lot of the establishment at the time called these things monsters. They called them aberrations against common sense, with this idea that if Newton had known about them, he never would've done all of his discoveries, because they're just nuisances and we just need to get rid of them. And there's this real tension at the core of mathematics in the late 1800s, where some people just wanted to disregard this and say, look, it works most of the time, that's good enough. And then others really weren't happy with this quite vague logic. They wanted to put it on much sturdier ground. And what was remarkable actually is, if you trace this then into the 20th century, a lot of these monsters, particularly in some cases functions which could almost move constantly, this constant motion rather than our intuitive concept of movement as something that's smooth (if you drop an apple, it accelerates at a very smooth rate), would become foundational in our understanding of things like probability and Einstein's work on atomic theory. A lot of these concepts where geometry breaks down would be really important in relativity. So actually, these things that we thought were monsters were all around us all the time, and science couldn't advance without them. So I think it's just this remarkable example of this tension within a field that's supposedly concrete, and the things that were going to be shunned actually turned out to be quite important.

Eric Topol (07:53):
It's great how you convey how nature isn't so neat and tidy, and things like Brownian motion, understanding that, I mean, just so many things that I think fit into that general category. The legal domain we won't get into too much, because that's not so much the audience of Ground Truths, but the classic things about innocent until proven guilty and proof beyond reasonable doubt, I mean these are obviously really important parts of that overall sense of proof and truth. We're going to get into one thing I'm fascinated about related to that subsequently, and then in science. So before we get into the different types of proof, obviously the pandemic is still fresh in our minds, and we're in the endemic phase with Covid now, and there are so many things we got wrong along the way about uncertainty, and we didn't convey that science is an always-evolving search for what is the truth. There's no shortage of uncertainty at any moment. So can you recap some of it? You did so much work during the pandemic, and obviously some of it's in the book. What were some of the major things that you took out of proof and truth from the pandemic?

Adam Kucharski (09:14):
I think it was almost this story of two halves, because on the one hand, science was the thing that got us where we are today. The reason that so much normality could resume and so much risk was reduced was the development of vaccines and the understanding of treatments and the understanding of variants as they emerged and their characteristics. So it was kind of this amazing opportunity to see this happen faster than it ever has in history, and I think ever in science. It certainly shifted a lot of my thinking about what's possible and even how we should think about these kinds of problems.
But also on the other hand, I think where people might have been more familiar with seeing science progress a bit more slowly and reach consensus around some of these health issues, having that emerge very rapidly can present challenges, as we found with some of the work we did on the Alpha and then the Delta variants and the early quantification of these.

Adam Kucharski (10:08):
So really the big question is, is this thing more transmissible? Because at the time countries were thinking about control measures, thinking about relaxing things, and you've got this just enormous social, economic, and health decision-making based around essentially is it a lot more spreadable or is it not? And you only had these fragments of evidence. So I think for me, that was really an illustration of the sharp end. And I think what we ended up doing with some of those was, rather than arguing over a precise number for something like Delta, instead we kind of looked at, well, what's the range that matters? So arguing over whether it's 40% or 50% or 30% more transmissible is perhaps less important than knowing it's substantially more transmissible and it's going to start going up. Is it going to go up extremely fast or just very fast?

Adam Kucharski (10:59):
That's still a very useful conclusion. I think what often created more of the challenges, the things that on reflection people looking back pick up on, are where there was probably overstated certainty. We saw that around some of the airborne spread, for example, stated as a fact in some cases by some organizations. I think in some situations as well, governments had a constraint and presented it as scientific. So the UK, for example, would say testing isn't useful. And what was happening at the time was there weren't enough tests. So it was more a case of they can't test at that volume. But I think that blurred what the science was saying and what the decision-making was. And I think also one thing we found in the UK was we made a lot of the epidemiological evidence available. I think that was really something that was important.

Adam Kucharski (11:51):
I found it a lot easier to communicate when talking to the media to be able to say, look, this is the paper that's out, this is what it means, this is the evidence. I always found it quite uncomfortable having to communicate things where you knew there were reports behind the scenes, but you couldn't actually articulate them. But I think what that did is it created this impression that epidemiology in particular was driving the decision-making a lot more than it perhaps was in reality, because so much of that was being made public and a lot more of the evidence around education or economics was being done behind the scenes. I think that created this kind of asymmetry in public perception about how that was feeding in. And so, I think there was always that, and it happens, it is really hard as well as a scientist, when you've got journalists asking you how to run the country, to work out those steps of: am I describing the evidence behind what we're seeing? Am I describing the evidence about different interventions? Or am I imposing to some extent my value system on what we do? And I think all of that in very intense times can be very easy to get blurred together in public communication.
I think we saw a few examples of that with the 'follow the science' on policy type angle, where actually once you get into what you're prioritizing within a society, quite rightly, you've got other things beyond just the epidemiology driving that.

Eric Topol (13:09):
Yeah, I mean that term that you just used, follow the science, is such an important term because it tells us about the dynamic aspect. It isn't just a snapshot, it's constantly being revised. But during the pandemic we had things like the six-foot rule that was never supported by data, yet still today, if I walk around my hospital, there are still the footprints of the six-foot rule, and not paying attention to the fact that this was airborne, and it took years before some of these things were accepted. The flatten-the-curve stuff with lockdowns, which I was never supportive of, though perhaps at the worst point the idea that hospitals would get overrun was an issue, but it got carried away with school shutdowns for prolonged periods and, in some parts of the world, especially very stringent lockdowns. But anyway, we learned a lot.

Eric Topol (14:10):
But perhaps one of the greatest lessons is that people's expectation about science is that it's absolute, and somehow you have this truth, and that's not there. I mean, it's getting revised. It's kind of on-the-job training, in this case revision during the pandemic. But very interesting. And that gets us to, I think, the next topic, which I think is a fundamental part distributed throughout the book, which is the different types of proof in biomedicine and of course across all these domains. And so, you take us through things like randomized trials, p-values, 95 percent confidence intervals, counterfactuals, causation and correlation, peer review, the works, which is great because a lot of people have misconceptions of these things. So for example, randomized trials, the temple of proof for many, are not as great as a lot of people think. Yes, they can help us establish cause and effect, but they're skewed because of the people who come into the trial. So they may not at all be a representative sample. What are your thoughts about over-deference to randomized trials?

Adam Kucharski (15:31):
Yeah, I think that the story of how we rank evidence in medicine is a fascinating one. I mean, even just how long it took for people to think about these elements of randomization. Fundamentally, what we're trying to do when we have evidence here in medicine or science is prevent ourselves from confusing randomness for a signal. Fundamentally, we don't want to mistakenly think something's going on when it's not. And the challenge, particularly with any intervention, is you only get to see one version of reality. You can't give someone a drug, follow them, rewind history, not give them the drug and then follow them again. So one of the things that essentially randomization allows us to do is, if you have two randomized groups, one that's been given the treatment and one that hasn't, on average the difference in outcomes between those groups is going to be down to the treatment effect.

Adam Kucharski (16:20):
So it doesn't necessarily mean in reality that'd be the case, but on average that's the expectation that you'd have. And it's kind of interesting actually that the first modern randomized controlled trial (RCT) in medicine was in 1947, and this was for TB and streptomycin.
The randomization element, actually, wasn't so much statistical as behavioral: if you have people coming to hospital, you could to some extent just say, we'll just alternate. We're not going to randomize. The first patient we'll say is a control, the second patient a treatment. But what they found in a lot of previous studies was doctors have bias. Maybe that patient looks a little bit ill, or that one maybe is borderline for eligibility. And often you got these quite striking imbalances when you allowed for human judgment. So it was really about shielding against those behavioral elements. But I think there are a few situations to be aware of. It's a really powerful tool for a lot of these questions, but as you mentioned, one is this issue of the population you study and then how that translates elsewhere in reality.

Adam Kucharski (17:17):
And we see, I mean, things like flu vaccines are a good example, which are very dependent on immunity and evolution and what goes on in different populations. Sometimes you've had a result on a vaccine in one place and then the effectiveness doesn't translate in the same way to somewhere else. I think the other really important thing to bear in mind is, as I said, it's the averaging: you're getting an average effect between two different groups. And I think we see certainly a lot of development around things like personalized medicine, where actually you're much more interested in the outcome for the individual. And so, what a trial can give you evidence on is: on average across a group, this is the effect that I can expect this intervention to have. But we've now seen more of the emergence of things like N=1 studies, where you can actually look at those kinds of interventions in the same individual, particularly for chronic conditions.

Adam Kucharski (18:05):
And also there are just these extreme examples where you're ethically not going to run a trial. There's never been a trial of whether it's a good idea to have intensive care units in hospitals, or there are a lot of these kinds of historical treatments which are just so overwhelmingly effective that we're not going to run a trial. So almost this hierarchy over time, you can see it getting shifted, because actually you do have these situations where other forms of evidence can get you either closer to what you need, or just more feasibly an answer where it's just not ethical or practical to do an RCT.

Eric Topol (18:37):
And that brings us to the natural experiments I just wrote about recently, the one with shingles, where there are two big natural experiments suggesting the shingles vaccine might reduce the risk of Alzheimer's, an added benefit beyond the shingles that was not anticipated. Your thoughts about natural experiments? Because here you're getting a much different type of population assessment, again, not at the individual level, but not necessarily restricted by some potentially skewed enrollment criteria.

Adam Kucharski (19:14):
I think this has emerged as a really valuable tool. It's kind of interesting (in the book I talk to economists like Josh Angrist) that a lot of these ideas emerged in epidemiology, but I think were really then taken up by economists, particularly as they wanted to add more credibility to a lot of these policy questions.
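A toy illustration of the allocation problem Kucharski describes from the pre-randomization era: a minimal sketch, assuming Python, with an invented rule in which doctors steer sicker-looking patients away from the new treatment. None of the numbers come from the streptomycin trial or the book.

```python
import random

def estimate_effect(n: int = 10_000, true_effect: float = 1.0,
                    doctor_judgment: bool = False) -> float:
    """Estimate a treatment effect as the difference in mean outcomes."""
    treated, control = [], []
    for _ in range(n):
        severity = random.gauss(0, 1)  # sicker patients have worse outcomes
        if doctor_judgment:
            # Invented rule: doctors steer sicker-looking patients away
            # from the new treatment, so allocation depends on severity.
            gets_treatment = random.random() < 0.5 - 0.2 * severity
        else:
            # Concealed randomization: a fair coin flip, independent of severity.
            gets_treatment = random.random() < 0.5
        outcome = -severity + (true_effect if gets_treatment else 0.0) + random.gauss(0, 1)
        (treated if gets_treatment else control).append(outcome)
    return sum(treated) / len(treated) - sum(control) / len(control)

random.seed(0)
print("randomized allocation:", round(estimate_effect(doctor_judgment=False), 2))  # close to 1.0
print("judgment-based allocation:", round(estimate_effect(doctor_judgment=True), 2))  # inflated
```

With concealed randomization the difference in means lands near the true effect; with judgment-based allocation the treated group starts out healthier, so the apparent benefit is inflated.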
And ultimately, it comes down to this issue that for a lot of problems, we can't necessarily intervene and randomize, but there might be a situation that's done it to some extent for us. So the classic example is the Vietnam draft, where it was kind of random: birthdays were drawn out of a lottery. And so, there have been a lot of studies subsequently about the effect of serving in the military on different subsequent lifetime outcomes, because broadly those people have been randomized. It was for a different reason, but you've got that element of randomization driving that.

Adam Kucharski (20:02):
And so again, with some of the recent shingles data and other studies, you might have a situation, for example, where there's been an intervention that's somewhat arbitrary in terms of time. It's a cutoff on a birth date, for example. And under certain assumptions you could think, well, actually there's no real reason for the person on this day and this day to be fundamentally different. I mean, perhaps there might be effects of cohorts if it's school years or this sort of thing. But generally, this isn't the same as having people who are very, very different ages and have very different characteristics. It's just that nature, or in this case a policy intervention made for a different reason, has given you that randomization, or pseudo-randomization, which allows you to then look at the effect of an intervention in a way that you couldn't as reliably if you were just digging into the data on yes or no, who's received a vaccine.

Eric Topol (20:52):
Yeah, no, I think it's really valuable. And now I think it's increasingly given priority if you can find these natural experiments. They're not always so abundant to extrapolate from, but when they are, they're phenomenal. The causation and correlation issue is so big. I mean, Judea Pearl's The Book of Why, and you give so many great examples throughout Proof. I wonder if you could comment on that a bit more, because this is where associations are confused somehow or other with a direct effect. And we unfortunately make these jumps all too frequently. Perhaps it's the most common problem that's occurring in the way we interpret medical research data.

Adam Kucharski (21:52):
Yeah, I think it's an issue that a lot of people get drilled into them in their training: just because there's a correlation between things doesn't mean that one thing causes the other. But it really struck me, as I talked to people researching the book, that in practice, in research, there's actually a bit more to it in how it plays out. So first of all, if there's a correlation between things, it doesn't tell you much generally that's useful for intervention. If two things are correlated, it doesn't mean that changing one is going to have an effect on the other. There might be something that's influencing both of them. If you have more ice cream sales, you'll also see more heat stroke cases. It doesn't mean that changing ice cream sales is going to have that effect. But it does allow you to make predictions potentially, because if you can identify consistent patterns, you can say, okay, if this thing is going up, I'm going to make a prediction that this other thing's going up.

Adam Kucharski (22:37):
So one thing I found quite striking, actually talking to researchers in different fields, is how many fields choose to focus on prediction because it kind of avoids having to deal with this cause and effect problem.
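To make the ice cream and heat stroke example concrete, here is a minimal sketch, assuming Python, with invented numbers in which temperature is the shared driver of both series; it illustrates confounding generally rather than any analysis from the book.

```python
import random

def corr(xs, ys):
    """Pearson correlation, written out so the sketch has no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
days = 5_000
temperature = [random.gauss(20, 8) for _ in range(days)]           # the shared cause
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temperature]    # sales driven by heat
heat_stroke = [0.5 * t + random.gauss(0, 3) for t in temperature]  # cases driven by heat

print("corr(ice cream, heat stroke):", round(corr(ice_cream, heat_stroke), 2))
# Strongly positive, so sales predict cases. But the cases are generated from
# temperature alone, so an intervention that cuts ice cream sales in half
# would leave the number of heat stroke cases exactly as it is.
```

The correlation is real and useful for prediction, but because the cases come from temperature alone, changing ice cream sales would do nothing to heat stroke.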
And even in fields like psychology, it was kind of interesting that there's a lot of focus on predicting things like relationship outcomes, but actually, for people, you don't want a prediction about your relationship. You want to know, well, how can I do something about it? You don't just want someone to tell you your relationship's going to go downhill. So almost part of the challenge is people just got stuck on prediction because it's an easier field of work, whereas actually some of those problems will involve intervention. I think the other thing that really stood out for me is that in epidemiology and a lot of other fields, rightly, people are very cautious to not get that mixed up.

Adam Kucharski (23:24):
They don't want to mix up correlations or associations with causation, but you've kind of got this weird situation where a lot of papers go out of their way to not use causal language and say it's an association, it's just an association, it's just an association, you can't say anything about causality. And then at the end of the paper, they'll say, well, we should think about introducing more of this thing or restricting this thing. So really the whole paper and its purpose is framed around a causal intervention, but it's extremely careful throughout the paper not to frame it as a causal claim. So I think by skirting that too much, we actually avoid the problems that people sometimes care about. And I think a lot of the nice work that's been going on in causal inference is trying to get people to confront this more head on, rather than say, okay, you can just stay in this prediction world and that's fine, and then just later maybe make a policy suggestion off the back of it.

Eric Topol (24:20):
Yeah, I think cause and effect is a very alluring concept to support proof, as you so nicely go through in the book. But of course, one of the things that we use to help us is the biological mechanism. So here you have, let's say for example, you're trying to get a new drug approved by the Food and Drug Administration (FDA), and the request is, well, we want two trials, randomized trials, independent. We want to have p-values that are significant, and we want to know the biological mechanism, ideally with the dose response of the drug. But there are many drugs, as you review, that have no biological mechanism established. And even when the tobacco problems were mounting, the actual mechanism of how tobacco use caused cancer wasn't known. So how important is the biological mechanism, especially now that we're well into the AI world, where explainability is demanded and we often don't know the mechanism? But we also don't know the mechanism for lots of things in medicine too, like anesthetics and even things as simple as aspirin, how it works, and many others. So how do we deal with this quest for the biological mechanism?

Adam Kucharski (25:42):
I think that's a really good point. It shows a lot of the transition I think we're going through currently. I think particularly for things like smoking and cancer, where it's very hard to run a trial. You can't make people randomly take up smoking. Having those additional pieces of evidence, whether it's an analogy with a similar carcinogen, whether it's a biological mechanism, can help give you more support for that argument that there's a cause and effect going on.
But I think what I found quite striking, and I realized actually it's something that had kind of bothered me a bit (and I'd be interested to hear whether it bothers you), is that with the emergence of AI, there's almost a bit of a loss of scientific satisfaction. I think you grow up learning about how the world works and why this is doing what it's doing.

Adam Kucharski (26:26):
And I talked, for example, to some of the people involved with AlphaFold and some of the subsequent work building on those predictions about structures. And they'd almost made peace with it, which I found interesting, because I think they started off being a bit uncomfortable with, yeah, you've got these remarkable AI models making these predictions, but we still don't understand biologically what's happening here. But I think they'd just settled on saying, well, biology is really complex on some of these problems, and if we can have a tool that can give us this extremely valuable information, maybe that's okay. And it was just interesting that they'd really kind of gone through that process, which I think a lot of people are still grappling with, that almost discomfort of using AI, and what's going to convince you that it's a useful, reliable prediction, whether it's something like predicting protein folding or getting in a self-driving car. What's the evidence you need to convince you that's reliable?

Eric Topol (27:26):
Yeah, no, I'm so glad you brought that up, because when Demis Hassabis and John Jumper won the Nobel Prize, the point I made was maybe there should be an asterisk with AI because they don't know how it works. I mean, they had all the rich data from the protein data bank, and they got the transformer model to do it for 200 million protein structure predictions, but they still to this day don't fully understand how the model really was working. So it reinforces what you're just saying. And of course, it cuts across so many types of AI. It's just that we tend to hold different standards in medicine, not realizing that there's lots of lack of explainability for routine medical treatments today. Now, one of the things that I found fascinating in your book, because there are different levels of proof, different types of proof, is solid logical systems.

Eric Topol (28:26):
And on page 60 of the book, especially pertinent to the US right now, there is a bit about Kurt Gödel, and what he did there was, basically, there was a question about dictatorship in the US: could it ever occur? And Gödel says, “oh, yes, I can prove it.” And he's using the Constitution itself to prove it, which I found fascinating because of course we're seeing that emerge right now. Can you give us a little bit more about this, because this is fascinating about Article V, and I mean I never thought that the Constitution would allow for a dictatorship to emerge.

Adam Kucharski (29:23):
And this was a fascinating story. Kurt Gödel was one of the greatest logical minds of the 20th century, and did a lot of work, particularly in the early 20th century, around systems of rules, particularly things like mathematics and whether they can ever be really fully satisfying. So particularly in mathematics, he showed that there was this problem: it is very hard to have a set of rules for something like arithmetic that is both complete, covering every situation, and also has no contradictions.
And I think a lot of countries, if you go back, things like the Napoleonic code and these attempts to almost write down every possible legal situation imaginable, always just descended into either needing amendments or having contradictions. I think Gödel's work really summed it up. And there's a story, this is in the late forties, when he had his citizenship interview, and Einstein and Oskar Morgenstern went along as witnesses for him.

Adam Kucharski (30:17):
And it's always told as kind of a lighthearted story, with this logical mind, this academic, just saying something silly in front of the judge. And actually, to my own admission, I've in the past given talks and mentioned it in this slightly lighthearted way, but for the book I got talking to a few people who'd taken it more seriously. I realized actually he was this extremely logically focused mind at the time, and maybe there should have been something more to it. And people who have kind of dug more into the possibilities were saying, well, what could he have spotted that bothered him? And a lot of the work that he did about consistency in maths was particularly around self-referential statements. So if I say this sentence is false, it's self-referential, and if it is false, then it's true, but if it's true, then it's false, and you get these kinds of weird self-referential contradictions.

Adam Kucharski (31:13):
And so, one of the theories about Gödel was that in the Constitution, it wasn't that there was a kind of rule saying someone can become a dictator, but rather that people can use the mechanisms within the Constitution to make it easier to make further amendments. A kind of downward cycle of amendments that he had seen happening in Europe in the run-up to the war. And again, this is never fully documented, exactly what he thought, but it's one of the theories: that it wouldn't just be outright, that it would be this cyclical process of weakening and weakening and weakening and making it easier to amend. And actually, when I wrote that, it was among the earlier bits of the book that I drafted, and I did sort of debate whether to include it. I thought, is this actually just a bit in the weeds of American history? And here we are. Yeah, it's remarkable.

Eric Topol (32:00):
Yeah, yeah. No, I mean it struck me when I was reading this, because here back in 1947 there was somebody predicting that this could happen based on, if you want to call them loopholes, or the ability to change things, even though you would've thought otherwise, that there wasn't any possible capability for that to happen. Now, one of the things I thought was a bit contradictory is two parts here. One is from Angus Deaton, who wrote, “Gold standard thinking is magical thinking.” And then the other is what you basically are concluding in many respects: “To navigate proof, we must reach into a thicket of errors and biases. We must confront monsters and embrace uncertainty, balancing — and rebalancing — our beliefs. We must seek out every useful fragment of data, gather every relevant tool, searching wider and climbing further. Finding the good foundations among the bad. Dodging dogma and falsehoods. Questioning. Measuring. Triangulating. Convincing. Then perhaps, just perhaps, we'll reach the truth in time.” So here you have on the one hand your search for the truth, proof, which I think that little paragraph says it all.
In many respects, it sums up much of the work that you review here, and on the other hand you have this Nobel laureate saying you don't have to go to extremes here. The enemy of good is perfect, perhaps. I mean, how do you reconcile this sense that you shouldn't go so far, don't search for absolute perfection of proof?

Adam Kucharski (33:58):
Yeah, I think that encapsulates a lot of what the book is about: that search for certainty and how far you have to go. I think there's a lot of interesting discussion, some fascinating papers, around at what point do you use these studies, and what are their flaws? But I think one of the things that does stand out across fields, across science, medicine, even if you go into law or AI, is having these kinds of cookie-cutter rules: this is the definitive way of doing it, and if you just follow this simple rule, if you do your p-value, you'll get there and you'll be fine. And I think that's where a lot of the danger is. And I think that's what we've seen over time in certain sciences: people chasing certain targets and all the behaviors that come around that, or in certain situations disregarding valuable evidence because you've got this kind of gold standard and nothing else will do.

Adam Kucharski (34:56):
And I think particularly in a crisis, it's very dangerous to have that, because you might have a low level of evidence that demands a certain action, and you almost bias yourself towards inaction if you have these kinds of very simple thresholds. So I think for me, across all of these stories and across the whole book, I mean, William Gosset, who did a lot of pioneering work on statistical experiments at Guinness in the early 20th century, had this nice question he sort of framed: how much do we lose? And if we're thinking about the problems, there are always more studies we can do, there's always more confidence we can have, but whether it's a patient we want to treat or a crisis we need to deal with, we need to work out how to get a level of proof that's really appropriate for where we are currently.

Eric Topol (35:49):
I think it's exceptionally important that there's this kind of spectrum or continuum in following science and the search for truth, and that distinction, I think, really nails it. Now, one of the things that's unique in the book is you don't just go through all the different types of how you would get to proof, but you also talk about how the evidence is acted on. And for example, you quote, “they spent a lot of time misinforming themselves.” This is the whole idea of taking data and torturing it, or using it, dredging it however you want, to support either conspiracy theories or alternative facts. Basically manipulating, sometimes even emasculating, what evidence and data we have. And one of the sentences, or I guess this is from Sir Francis Bacon, is “truth is a daughter of time”, but the added part is not authority. So here we have our president, who repeats things that are fabricated or wrong, and he keeps repeating them to the point that people believe they're true. But on the other hand, you could say truth is a daughter of time because you like to not accept any truth immediately. You like to see it get replicated and further supported, backed up. So in that one sentence, truth is a daughter of time, not authority, there's the whole ball of wax here. Can you take us through that? Because I just think that people don't understand truth being tested over time, but also manipulated by its repetition.
This is a part of the big problem that we live in right now.

Adam Kucharski (37:51):
And I think it's something that writing the book, and actually just reflecting on it subsequently, has made me think a lot about just how people approach these kinds of problems. I think there's an idea that conspiracy theorists are just lazy and have maybe just fallen for a random thing, but talking to people in the field, you realize they think about these things a lot more. And actually, the more I've ended up engaging with people who believe things that are just outright unevidenced, around vaccines, around health issues, they often have this mountain of papers and data to hand, and a lot of it, often, will be peer-reviewed papers. It won't necessarily be supporting the point that they think it supports.

Adam Kucharski (38:35):
But it's not something where you can just say everything you're saying is false; there are actually often a lot of things that have been put together, and it's just that leap to the conclusion. I think you also see a lot of scientific language borrowed. So I gave a talk earlier this year and it got posted on YouTube. It touched on conspiracy theories, and there were a lot of conspiracy theory supporters who piled into the comments, and one of the points they made is that skepticism is good. It's the kind of Royal Society motto, take no one's word for it, you need this. We are the ones that are kind of doing science, and people who just assume that science is settled are in the wrong. And again, you also mentioned that repetition. There's this phenomenon, the illusory truth effect: if you repeatedly tell someone something that's false, it'll increase their belief in it, even if it's something quite outrageous.

Adam Kucharski (39:27):
And that mimics that scientific repetition, because people kind of say, okay, well if I've heard it again and again, it's almost like treating these as mini experiments: I'm just accumulating evidence that this thing is true. So it made me think a lot about how you've got essentially a lot of mimicry of the scientific method, the amount of data and how you present it, and this kind of skepticism being good. But I think a lot of it comes down to, as well as just looking at the logical flaws, also the ability to be wrong, actually seeking out things that don't just confirm what you believe. It's something that I've certainly tried to do a lot working on emergencies, and on one of the scientific advisory groups that I worked on it almost became a catchphrase: whenever someone presented something, they finished by saying, tell me why I'm wrong.

Adam Kucharski (40:14):
And if you've got a variant that's more transmissible, I don't want to be right about that, really. And it is something that is quite hard to do, I've found, particularly for something that's quite high pressure: trying to get a policymaker or someone to write, even just non-publicly, by themselves, what they think is going to happen, or what would convince them that they are wrong about something. I think particularly on contentious issues, where someone's got perhaps a lot of public persona wrapped up in something, that's really hard to do. But I think it's those kinds of elements that distinguish between getting sucked into a conspiracy theory, really seeking out evidence that supports it and trying to just get your theory stronger and stronger, and actually seeking out things that might overturn your belief about the world. And it's often those things that we don't want overturned.
I think those are the views that we all have, politically or in other ways, and that's often where the problems lie.

Eric Topol (41:11):
Yeah, I think this is perhaps one of, if not the most, essential parts here: trying to deal with the different views. We have biases, as you emphasized throughout, but if you can use these different types of proof to have a sound discussion, conversation, refutation, whereby you don't summarily dismiss another view, which may be skewed, maybe spurious or just absolutely wrong, maybe fabricated, whatever, but where you can engage and say, here's why these are my proof points, or this is why there's some extent of certainty you can have regarding this view of the data. I think this is so fundamental, because unfortunately, as we saw during the pandemic, the strident minority, which were the anti-science, anti-vaxxers, were summarily dismissed as being kooks and adopting conspiracy theories, without the right engagement and the right debates. And I think this might've helped along the way, not least the fact that a lot of scientists didn't really want to engage in the first place and adopt this methodical approach to proof that you've advocated in the book, so many different ways to support a hypothesis or an assertion. Now, we've covered a lot here, Adam. Have I missed some central parts of the book and the effort? Because it's really quite extraordinary. I know it's your third book, but it's certainly a standout, and not just among your books, but among books on this topic.

Adam Kucharski (43:13):
Thanks. And it's much appreciated. It was not an easy book to write. I think at times I kind of wondered if I should have taken on the topic, and I think your last point speaks to a core thing. I think a core thing is that gap, often, between what convinces us and what convinces someone else. I think it's often very tempting as a scientist to say the evidence is clear or the science has proved this. But even on something like the vaccines, you do get the loud minority who perhaps think they're putting microchips in people and have outlandish views, but you actually get a lot more people who might just have some skepticism of pharmaceutical companies, or they might have, well, my wife was pregnant actually at the time during Covid and we weighed it up because there wasn't much data on pregnancy and the vaccine. And I think it's just finding what is convincing. Is it having more studies from other countries? Is it understanding more about the biology? Is it understanding how you evaluate some of those safety signals? And I think it's just really important to not just think about what convinces us and assume it's going to be obvious to other people, but actually think about where they're coming from. Because ultimately, having proof isn't that good unless it leads to the action that can make lives better.

Eric Topol (44:24):
Yeah. Well, look, you've inculcated my mind with this book, Adam, called Proof. Anytime I think of the word proof, I'm going to be thinking about you. So thank you. Thanks for taking the time to have a conversation about your book, your work, and I know we're going to count on you for the astute mathematics and analysis of outbreaks in the future, which we will see, unfortunately. We are seeing it now, in fact, already in this country with measles and whatnot.
So thank you and we'll continue to follow your great work.

**************************************

Thanks for listening, watching or reading this Ground Truths podcast/post.

If you found this interesting please share it! That makes the work involved in putting these together especially worthwhile.

I'm also appreciative of your subscribing to Ground Truths. All content—its newsletters, analyses, and podcasts—is free, open-access. I'm fortunate to get help from my producer Jessica Nguyen and from Sinjun Balabanoff for audio/video tech support to pull these podcasts together for Scripps Research.

Paid subscriptions are voluntary and all proceeds from them go to support Scripps Research. They do allow for posting comments and questions, which I do my best to respond to. Please don't hesitate to post comments and give me feedback. Many thanks to those who have contributed—they have greatly helped fund our summer internship programs for the past two years.

A bit of an update on SUPER AGERS

My book has been selected as a Next Big Idea Club winner for Season 26 by Adam Grant, Malcolm Gladwell, Susan Cain, and Daniel Pink. This club has spotlighted the most groundbreaking nonfiction books for over a decade. As a winning title, my book will be shipped to thousands of thoughtful readers like you, featured alongside a reading guide, a "Book Bite," a Next Big Idea Podcast episode, as well as a live virtual Q&A with me in the club's vibrant online community. If you're interested in joining the club, here's a promo code SEASON26 for 20% off at the website.

SUPER AGERS reached #3 for all books on Amazon this week. This was in part related to the segment on the book on the TODAY SHOW, which you can see here. Also at Amazon there is a remarkable sale on the hardcover book for $10.10 at the moment, for up to 4 copies. Not sure how long it will last or what prompted it.

The journalist Paul von Zielbauer has a Substack, “Aging With Strength,” and did an extensive interview with me on the biology of aging and how we can prevent the major age-related diseases. Here's the link.

Get full access to Ground Truths at erictopol.substack.com/subscribe