Hidden Brain
[0] This is Hidden Brain.
[1] I'm Shankar Vedantam.
[2] We're going to start today with a little experiment.
[3] I'll be the guinea pig.
[4] I'm going to open the stopwatch app on my phone.
[5] I'll hit start and count off five seconds while looking at the phone.
[6] One, two, three, four, five.
[7] Okay, let me do that again.
[8] One, two, three, four, five.
[9] Okay, now I'm going to hit start and count off five seconds without looking at the phone.
[10] One, two, three, four, five.
[11] It was five point four three seconds.
[12] Let's do it again.
[13] One, two, three, four, five.
[14] Much better.
[15] Five point two seconds.
[16] Last time.
[17] 1, 2, 3, 4, 5 .5.
[18] The errors I made seem trivial, but it turns out they are not.
[19] Multiply the small mistakes I made in milliseconds over all the countless decisions I make every day, and you can end up with a serious problem.
[20] Multiply the errors I make as an individual by an entire society made up of other error-prone humans, and you can get disaster.
[21] What makes these mistakes insidious is that they are rarely the result of conscious decision-making.
[22] Human judgment is imprecise, and imprecise judgment produces unwanted variability, what the Nobel Prize-winning psychologist Daniel Kahneman calls noise.
[23] Wherever there is judgment, there is noise, and there is more of it than you think.
[24] This week on Hidden Brain, the gigantic effect of inadvertent mistakes in business, medicine, and the criminal justice system, and how we can save us from ourselves.
[25] Daniel Kahneman's insights into how we think have revolutionized many areas of the social sciences.
[26] He was my guest on Hidden Brain for our 100th episode.
[27] We talked about his early research and his first book, Thinking, Fast and Slow.
[28] As we close in on our 200th episode, we wanted to bring him back to talk about a set of ideas he's been working on for several years.
[29] They're described in his new book, Noise: A Flaw in Human Judgment.
[30] Daniel Kahneman, welcome to Hidden Brain.
[31] Glad to be here.
[32] I want to begin by exploring what you mean by the term noise.
[33] You spent some time studying an insurance company, and one of the things an insurance company needs to do is to tell prospective clients how much their premiums are going to cost.
[34] So an underwriter says, if you want us to cover you against this loss, here's this quote.
[35] From the insurance company's point of view, Danny, what is the risk of offering quotes that are too high and also quotes that are too low?
[36] Well, a quote that is too high, you're very likely to lose the business because there are competitors and they'll offer a better price.
[37] A quote that is too low, you're leaving money on the table, and you may not be covering your losses if you do that a great deal.
[38] So errors in both directions are costly.
[39] We define noise as unwanted variability in judgments or decisions.
[40] That is, if the same client would get different quotes from different underwriters in the same company, this is bad for the company.
[41] And variability is a basic component of error.
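In the book, Kahneman and his co-authors formalize this point with what they call the error equation: when judgments are scored by squared error, overall error decomposes exactly into bias squared plus noise squared. Here is a minimal sketch of that identity in Python, using made-up underwriter quotes purely for illustration:

```python
import numpy as np

# Hypothetical quotes from five underwriters for the same policy.
# The "true" fair premium is assumed known here only for illustration.
true_premium = 10_000
quotes = np.array([9_500, 12_000, 8_800, 11_500, 10_700], dtype=float)

bias = quotes.mean() - true_premium           # shared, systematic error
noise = quotes.std(ddof=0)                    # variability across underwriters
mse = ((quotes - true_premium) ** 2).mean()   # overall squared error

# The error equation: MSE = bias^2 + noise^2 (an exact identity).
assert np.isclose(mse, bias**2 + noise**2)
print(f"bias: {bias:.0f}, noise: {noise:.0f}, overall error: {mse:.0f}")
```

Because the identity is exact, shrinking noise reduces overall error even if you never learn what the bias is.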
[42] So I think of the insurance business as being driven by mathematics.
[43] That's my stereotype, that they're hard-nosed statisticians who work at these companies.
[44] So I would not expect a quote from one underwriter to be wildly different from the next.
[45] You asked executives at this insurance company how much variability they expected between underwriters.
[46] What was their estimate of this kind of subjective variability?
[47] I mean, it turns out that there is a very general answer to that question.
[48] And people have a very general idea about what that number will be, and it's around 10%.
[49] Now, when we actually measured that in an insurance company, the answer was 55%.
[50] And that was a number, that was an amount of variability, as we call it, an amount of noise that no one expected.
[51] And that really is what set me off on this journey that led to this book.
[52] Now, the difference between 10% and 55% might seem trivial.
[53] Who cares?
[54] Well, the consequences of this variability were anything but trivial.
[55] I mean, I asked people what actually would be the cost of setting up a premium that is too high or too low.
[56] And when they carried out that exercise, they thought that the overall cost of these mistakes was in the billions of dollars.
[57] Now, what was in some sense saving that company was that probably other companies were noisy as well.
[58] But if you have a company that is noisy while others are noise-free, the noisy company is going to lose a lot of money very quickly.
[60] So with the insurance company, it's not just that the insurance company is losing money.
[61] There is also a cost that's being paid by all the people who are trying to get insurance.
[62] It might be that if you happen to get a quote that's too high, you might end up being uninsured, or you might be spending more on insurance than you need to be spending.
[63] There is sort of a general human cost to these errors, not just in terms of the bottom line for the insurance company.
[64] Well, of course, when you have a noisy underwriting system, then the customer is facing a lottery that the customer has not signed up for.
[65] And that is true everywhere.
[66] That is, wherever people reach a judgment or a decision by using their mind rather than computing: wherever there is judgment, there is noise, and there is more of it than you think.
[67] I want to look at a few other places, because in some ways what's striking about your book is both the number of different domains where you see noise and the extent of noise in those different domains, including in places where you really feel this should be a setting where noise does not play a role.
[68] You cite a study done by Jaya Ramji-Nogales and her co-authors who found that in asylum cases, this was a courtroom in Miami, one judge would grant asylum to 88% of the applicants, and another granted asylum to only 5% of the applicants.
[69] So this is more than a lottery.
[70] This is like playing roulette.
[71] This is a scandal.
[72] Clearly, the system isn't operating well.
[73] In many situations, it's just that when people look at the same data, they see them differently.
[74] They see them more differently than they expect.
[75] They see them more differently than anyone would expect.
[76] That's the basic phenomenon of what we call system noise.
[77] That is, when you have a system that ought to be producing judgments or decisions that are predictable, they turn out not to be predictable, and that's noise.
[78] You also describe in some ways there are different kinds of noise.
[79] So if you're an asylum judge and I'm an asylum judge and we have very different subjective readings that can produce very different answers.
[80] But it could also be that if you are reviewing a case in the morning and you are reviewing a case in the afternoon, it's possible that just within yourself, your own judgments can be noisy?
[81] Can you talk about that idea as well?
[82] I mean, it's not only possible, it actually is the case, that when people are asked the same question or evaluate the same thing on multiple occasions, they do not reach the same answers.
[83] For example, radiologists who are shown the same image on two separate occasions and are not reminded that it's the same image, really with distressing frequency, reach different diagnoses on the two occasions, that we know.
[84] It's true even for fingerprint examiners, whom we really would not expect to be noisy at all, but actually they vary when you show them the same fingerprints twice.
[85] By the way, that's important.
[86] They do not vary in the sense that somebody would make a match on one occasion and would positively say it is not a match on the other.
[87] But fingerprint examiners are allowed to say, I'm not sure.
[88] And between I'm not sure and I am sure that it's a match or it's not a match, there is variability.
[89] One of the things that you point out is that you don't expect that the lottery of who is reviewing your file is going to make a huge difference or that extraneous factors would play a huge role.
[90] The researcher Uri Simonsohn found that college admissions officers pay more attention to the academic attributes of candidates on cloudy days and to non-academic attributes when the weather is sunny.
[91] He titled this paper, "Clouds Make Nerds Look Good."
[92] Talk about this idea that extraneous factors, whether someone's hungry, what the weather is like, that can affect people's judgment too.
[93] Indeed, it's been established in the justice system.
[94] If you're a defendant, you have to hope for good weather, because on very hot days, judges assign more severe sentences.
[95] And that is true although judges sit in air-conditioned courtrooms; it's the outside temperature that nevertheless seems to have an effect.
[96] It's been established in at least one study that, for judges who are keen on football, the result of their team's game on Sunday or Saturday, depending on whether it's professional or college, will affect the judgments they make on the Monday, and they will be more severe if their team lost.
[97] No!
[98] He missed the extra point wide right.
[99] That's a terrifying idea, isn't it, Danny?
[100] That you're sort of hoping that your judge's football team wins the Sunday before your case is heard.
[101] Yes, absolutely.
[102] And you are also hoping to find a judge who is in a good mood, to find a judge who is rested, has had a good night, who is not too tired.
[103] And your chances of being prescribed antibiotics or painkillers differ in the course of the day.
[104] So doctors tend to prescribe more antibiotics toward the end of the day, when they are tired, than earlier in the day, when they are fresh. And they are more likely to prescribe painkillers later in the day, simply because it takes an effort to resist the patient who wants painkillers, and when you're very tired and depleted, that effort becomes more difficult.
[106] So completely extraneous factors have a distressingly large effect.
[107] Noise in medicine often shows up under a different name.
[108] Medical mistakes. Stunning medical news tonight about how many Americans have something go wrong when they go to the hospital. The astronomical number: one in three patients will face a mistake during a hospital stay. And these are costly errors, with one study estimating medical mistakes cost the U.S. more than 17 billion dollars a year. The doctors had discovered that Sarah didn't have cancer in the first place. She'd been misdiagnosed, and all the pain and treatment that she went through was for absolutely nothing.
[109] So, Danny, can you talk about these two different dimensions of noise in the medical sphere, the ways in which it might cause us to get diagnosed with conditions we might not have, but also for doctors to miss conditions and problems that we actually do have?
[110] The contribution of noise is that which physician looks at the data makes a difference.
[111] And there is a lot of that.
[112] We know that physicians disagree on diagnosis, and they also disagree on treatment.
[113] And that is a little shocking that, you know, there is that element of lottery.
[114] So errors could happen for many reasons, including luck, which is not an error in judgment, but where information was missing.
[115] But in some cases, the errors cannot be described in any other way than noise, that is different doctors looking at the same case, reaching different conclusions.
[116] It might seem obvious from these examples that noise is a big problem and that combating noise makes a lot of sense.
[117] Who could argue against reducing arbitrary decisions and inconsistent rules?
[118] It turns out a lot of people have a problem with doing just that, and one of those people might be you.
[119] You're listening to Hidden Brain.
[120] I'm Shankar Vedantam.
[121] This is Hidden Brain.
[122] I'm Shankar Vedantam.
[123] We've seen how noise pervades many aspects of our personal and social lives.
[124] It can lead to wildly different estimates on our insurance premiums.
[125] It affects judgments doctors make about our health.
[126] It can determine whether we get a job or a promotion.
[127] In their new book, Noise: A Flaw in Human Judgment, Daniel Kahneman and his co-authors, Olivier Sibony and Cass Sunstein, show that noise also shapes what happens in the criminal justice system.
[128] It affects decisions that send people to prison or sentence them to execution.
[129] Danny, Judge Marvin Frankel worked as a United States district judge, and he made a name for himself by pointing out inconsistencies in the criminal justice system.
[130] He once wrote about a case of two men convicted of cashing counterfeit checks.
[131] Both amounts were for less than $60.
[132] One man got a sentence of 30 days in prison.
[133] The other got 15 years.
[134] What did Judge Frankel make of such disparities?
[136] I mean, he thought it's unjust.
[137] He thought it's extraordinarily unfair, I mean, which it seems to be on the face of it.
[138] So he really felt that the justice system should be reformed to avoid this role of completely unpredictable, unreasonable factors that determine the fate of defendants.
[139] You know, Danny, I feel like in the last year, I've seen dozens of stories that talk about disparities of all kinds, including disparities in the criminal justice system.
[140] And invariably, when I read these stories about disparities, they talk about the idea that it's about bias, that it's about racial bias or gender bias or some other kind of bias.
[141] So when Judge Frankel comes along and says, you know, defendants are being given vastly different sentences, the very first thing that pops into my head is that maybe these defendants were of different races, and what we're really seeing is racial bias at play rather than noise.
[143] How can we tell the difference between racial bias and noise?
[144] It's actually easy to do because when you want to measure noise, you can conduct a kind of study that we call a noise audit.
[145] And so you take professionals, for example, judges, and you show them a fictitious case.
[146] And you ask them to make judgments as they would normally.
[147] Now, you know that it's the same case.
[148] They've all been given the same information.
[149] They should give you the same judgment.
[150] The differences among them cannot be attributed to bias.
[151] And indeed, what Judge Frankel caused to happen, he caused many noise audits to be performed.
[152] He actually conducted some himself.
[153] And in the most famous one, 208 federal judges evaluated 16 cases and assigned sentences to them.
[154] And this gives you an idea of the lottery that the defendant would face: where the average sentence is seven years in jail, the likely difference between two judges' sentences is over three years.
[156] So that seems to be unacceptable.
[157] So based on the work of Judge Frankel and others, Congress eventually passed a law that basically limited the amount of discretion that judges had.
[158] Talk about the effects that this law had on reducing noise.
[159] Were there studies conducted to actually figure out if these were reducing noise?
[160] Yes.
[161] Studies were conducted, and actually you can look at the variability of judgments across many cases, and you find that the variability significantly diminished, which indicates that noise was, in fact, reduced.
[162] However, something else happened: the judges hated it.
[163] They hated this restriction on their ability to make free decisions, and they felt that justice was not being served.
[164] So even as the data showed that noise in sentencing was being reduced, in other words, that sentencing was becoming more consistent, many judges were upset that their discretion was being taken away, and Judge Jose Cabranes was one of those who spoke up.
[166] And I want to play you a clip of something he said in 1994.
[167] This was a discussion at Harvard University where they were talking about these guidelines that were aimed at reducing ethnic disparities in sentencing by limiting the amount of discretion that judges had.
[168] Here is Judge Cabranes.
[169] These arcane and mechanistic computations are intended to produce a form of scientific precision.
[170] But in practice, they generate a dense fog of confusion that undermines the legitimacy of the judge's sentencing decisions.
[172] Danny, I want to draw your attention to what Judge Cabranes is saying.
[173] When you limit the variability of sentencing, you're telling judges, for this offense you have to do X, for that offense you have to do Y. A lot of judges feel their hands are tied, and they feel the art of law is being reduced to a mechanistic science.
[174] Well, you know, if it takes a mechanistic science to produce justice, then I think we should seriously consider some mechanistic science.
[175] And what seems to be happening is that from the perspective of the judge, they feel that they're evaluating every detail of the case and that they are producing a just judgment because they are convinced that what they are doing is a just judgment.
[176] And somehow it's very difficult to convince judges that another judge whom they respect a great deal, presented with the same case, would actually pass a different sentence.
[177] That argument doesn't seem to have penetrated when Judge Cabranes made that assertion, that in fact there is a problem and there is a problem to be resolved.
[178] He was, in effect, as I hear him, he was denying the existence of a problem.
[179] Psychologists talk about a phenomenon called naive realism that in some ways explains why it is I am bewildered that you would not see the world exactly the way that I see the world.
[180] Can you explain what naive realism is and how it speaks to the question we just discussed about judges not just reaching different conclusions, but being bewildered that anyone would reach a different conclusion than them?
[181] Well, you know, we feel that we see the world as it is.
[182] It's the only way we see it, and what we see is real, what we see is true.
[183] And it makes it very difficult to believe and to imagine that someone else looking at the same reality is going to see it differently.
[184] But in fact, we are struck by how different they are in the context of criminal justice.
[185] The variability in sentences is shocking.
[186] But when you're looking at it from the perspective of a judge who looks at cases individually and feels that he or she is making correct judgments for every case individually, then it looks as if any attempt to restrict their freedom is going to cause injustice to be performed.
[187] But they are simply not accepting, I think, the statistics that tell them that another judge looking at the same case would actually pass a different sentence.
[188] So these debates about sentencing reform raged in the 1980s and 1990s, and eventually in the early 2000s, the Supreme Court struck down the guidelines that bound the way judges were operating, and sentencing reform essentially went away, giving discretion back to judges.
[189] Is what happened, what I fear happened?
[190] Did noise come back into the system?
[191] Oh, yes.
[192] I mean, there is evidence that noise came roaring back, and there is also evidence that judges were a lot happier without the guidelines than they had been earlier.
[193] One of the ironic things that you and others have found is that even though there is this distinction between noise and bias, when the noise came back after the Supreme Court ruling, black defendants were actually among those who were the most severely harmed by this.
[194] Is it possible that in some ways there can be intersections between noise and bias?
[195] In other words, that they can amplify one another?
[196] Certainly.
[197] I mean, when you are constraining people and reducing noise, you're reducing the opportunities for bias to take place.
[198] So attempts to reduce noise and attempts to control noise are going to, in general, not invariably, but very likely, control and reduce bias as well.
[199] If noise produces many of the adverse outcomes we see, if noise produces much of the unfairness we see, why is it that critiques of disparities invariably talk about bias?
[200] Turns out, that's because of the way our minds work.
[201] As we discussed in a recent series of episodes, the brain is a storytelling machine, and the story of bias caters to our hunger for simple explanations.
[202] I mean, clearly, bias in general is a better story.
[203] That is, you see something happening.
[204] It had the character of an event.
[205] It had the character of something that is caused by a psychological force of some kind.
[206] Variability, noise, is uncaused.
[207] Noise doesn't lend itself to a causal story.
[208] And really the mind is hungry for causes.
[209] And that leads us very naturally to think in terms of biases.
[210] That errors must be explainable.
[211] So if I get a misdiagnosis because a doctor doesn't like the color of my skin, that might not make me feel good, but at least I can make sense of what happened.
[212] Once I settle on an explanation of racism or sexism or homophobia, I tell myself I have every right to get angry.
[213] When I discuss what happened with others, they'll get angry too.
[214] By contrast, a misdiagnosis produced by noise is, by definition, no one's fault.
[215] The error may have harmed me, but I can't lay the blame on someone's evil intentions.
[216] Noise is the very opposite of a good story.
[217] It's meaningless, and that can make me feel even worse.
[218] Here's another problem.
[219] When I see a judge pass a really harsh sentence or a very light sentence, I can come up with a story of bias to explain this individual case.
[220] You cannot do that with noise.
[221] You cannot spot noise by looking at any individual case.
[222] You have to measure it in the aggregate.
[223] It shows up only when you look at the statistics, and many of us are uncomfortable turning to data as our guide to the truth.
[224] We prefer stories and anecdotes, and stories and anecdotes are better at illustrating the problem of bias.
[225] Stories and anecdotes are what the mind is prepared for.
[226] Statistical thinking is alien to us.
[227] And statistical thinking is the only way to detect noise, because it's variability.
[228] It's sort of absurd to say about any single case that it is noisy.
[229] You say that if you have no idea of how it came about.
[230] But noise is a phenomenon that you observe statistically and that you can analyze only statistically.
[231] And that is not appealing.
[232] So there's an even deeper problem than the fact that noise is detectable only through statistics, whereas bias, you know, you can tell a story about bias.
[233] For many people making decisions, the data is simply not even available.
[234] So at a statistical level, you can say an insurance company is demonstrating noise.
[235] But many of the decisions we are making are decisions we make as individual.
[236] So if I want to propose marriage and I feel like proposing marriage on a moonlit night in the springtime, I have no idea if my decision to propose marriage on that evening is being shaped by noise or not.
[237] I don't have a statistical set of how I would behave under different circumstances.
[238] You know, the truth of the matter is that no one can tell you that this decision was noisy.
[239] What you can tell is that when you look at the collection of decisions, of people deciding to get married, that collection is noisy.
[241] There is no reason to believe that these steps which improve judgments in the statistical case do not apply when somebody decides to get married.
[242] If noise is present in the decisions where you can observe it, it's also present when you cannot observe it.
[243] Some years ago, I interviewed the researcher Berkeley Dietvorst. He talked about how people respond when a mistake has been made by a human versus an algorithm.
[244] I want to play you a short excerpt of something he told me. People failed to use the algorithm after they'd seen the algorithm perform and make mistakes, even though they typically saw the algorithm outperform the human.
[245] In our studies, the algorithms outperform people by 25 to 90%.
[246] So he's basically saying the algorithms are significantly better than the humans, but when a mistake is made, and algorithms, of course, can make mistakes and humans can make mistakes.
[247] He's saying that you prefer the human to make the mistake, and I think intuitively that feels correct to me. If I'm going to get a misdiagnosis when I go to a doctor, I would feel better if it's the doctor who's made the mistake than an unfeeling, unthinking algorithm.
[248] I think that's absolutely true.
[249] And, you know, when we're looking at a road accident, we somehow feel less bad about it if it was driver error than if it was a self-driving car that caused the accident.
[251] Algorithms, they make errors.
[252] The errors they make, by the way, are different from the errors that people would make, and they look stupid to people.
[253] Algorithms make errors that people think are ridiculous.
[254] Now, we don't get to hear what algorithms think of the errors that people make.
[255] And we do know that algorithms just make far fewer of them in many cases.
[256] And you have to trade off the higher overall accuracy against the discomfort of abandoning human judgment and trusting an algorithm.
[258] Yeah.
[259] You know, this might actually be a subtext of much of your lifetime's work, Danny, but it seems to me that fighting noise requires a certain amount of humility, and it seems to me that humans are not humble.
[260] Well, they're not humble for a fairly straightforward reason.
[261] We do not go through life imagining different ways of seeing what we see.
[262] We see one thing at a time, and it feels right to us.
[263] And that is really the source of the problem of ignoring noise.
[264] This is why it is so difficult to imagine it.
[265] I want to talk just for a brief moment about places where noise can potentially be useful.
[266] So let's say, for example, you have a company that's trying to innovate and come up with new ideas or you're in a creative enterprise where you want to pitch different ideas for movies.
[267] In some ways, you might want to actually maximize the variability of the ideas you get.
[268] So noise is not always bad.
[269] Sometimes it can actually lead to good things.
[270] Yeah, we don't call it noise in those cases.
[271] So we reserve the term noise for undesirable variability.
[272] There are indeed many situations in life in which variability is a blessing, certainly in creative enterprises, also evolution.
[273] So anything that allows you to select the better one of multiple responses, wherever there is a selection mechanism, variability is a good thing.
[274] But variability in the absence of a selection mechanism is a sheer loss of accuracy.
[275] And those are the cases that we talk about.
[276] So if you had a way, when you have multiple underwriters, of finding out who is doing a better job than whom, and using that in order to improve their training, that would be a case where you could make positive use of variability.
[277] But in the absence of such a mechanism, that variability just is a sheer loss.
[278] When we come back, how to fight noise.
[279] You're listening to Hidden Brain.
[280] I'm Shankar Vedantam.
[281] This is Hidden Brain.
[282] I'm Shankar Vedantam.
[283] Noise is endemic.
[284] It's also very difficult to fight, in part because judges and doctors and police officers don't like to think of themselves as capricious.
[285] We don't think of our judgments as being arbitrary, certainly not when it comes to really important decisions.
[286] Even when we are told about how noise is affecting our judgments and decisions, we hate to be shackled by rules.
[287] Danny, in 1907, Charles Darwin's cousin, Francis Galton, asked 787 villagers at a county fair to estimate the weight of a prize ox.
[288] None of the villagers guessed the right answer, but then Galton did something with their answers that got him very close to the correct answer.
[289] What did he do, Danny?
[290] Well, he simply took the average, and the average, I think, was within two pounds of the correct weight.
[291] And that led to a lot of research that was summarized in a recent book by James Surowiecki on the wisdom of crowds and the fact that when you take multiple judgments, independent judgments, and average them, you eliminate noise.
[292] This, by the way, is guaranteed to eliminate noise.
[293] So if you take multiple judgments, there is no guarantee that it will reduce bias, because if the judges agree on the bias, then the bias will remain when you take the average.
[294] Indeed, it will be even more salient.
[295] But what is absolutely guaranteed is that when you average independent judgments, you are eliminating noise.
[296] When you average four judgments, you're reducing noise by one half.
[297] When you take 100, you're reducing it by 90%.
[298] So there is some mathematics of noise that lends itself to analysis that doesn't apply to bias.
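The mathematics Danny is alluding to is the standard-error rule: the noise in an average of n independent judgments shrinks by a factor of the square root of n. A small simulation sketch, where the weight is Galton's ox and the spread of individual guesses is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
true_weight = 1198   # Galton's ox, in pounds
guess_sd = 80        # assumed spread of an individual guess (illustrative)

def noise_of_average(n, trials=10_000):
    """Standard deviation of the average of n independent noisy guesses."""
    guesses = true_weight + rng.normal(0, guess_sd, size=(trials, n))
    return guesses.mean(axis=1).std()

print(noise_of_average(1))    # ~80: one guess carries the full noise
print(noise_of_average(4))    # ~40: averaging four cuts noise in half
print(noise_of_average(100))  # ~8:  averaging a hundred cuts it by 90%
```

Note that averaging does nothing to a shared bias: if every guess ran 50 pounds high, the average would too.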
[299] So it's really remarkable.
[300] The correct weight of that ox was 1,198 pounds, and as you said, the average was only one or two pounds off the correct weight.
[301] And I want to point out that the reason averaging the responses produces a better answer is that noise is random.
[302] You're taking advantage of the fact that various estimates will be randomly high or low, and that's why when you average them out, you're going to get closer and closer to the correct answer.
[303] And what happens when you have different people making the same judgment of the same objects, and then you average them, is that the errors may cancel each other out.
[304] But when people make judgments about different cases, errors don't cancel out.
[305] If you set too high a premium in one case and too low a premium in the other case, that doesn't make you right.
[306] That just makes things worse.
[307] So this idea that errors cancel out, you have to apply it quite precisely.
[308] They cancel out when you average judgments of the same thing.
[309] And also, the judgments have to come from people who are, in some ways, independent of one another.
[310] If I'm seeing the judgment you make and then I make my judgment afterwards, my judgment really is just a reflection of your judgment, not an independent one.
[311] That's right.
[312] And, you know, what happens, basically, is when you have witnesses who talk to each other, the value of their testimony is sharply reduced.
[313] Because in effect, in the extreme, if you have one witness who is very assertive, all the other witnesses fit their story to his.
[314] Then you have one witness, regardless of how many testify.
[315] One of the most remarkable aspects of the wisdom of the crowd that you describe in the book has to do with how you can elicit the wisdom of the crowd just from yourself.
[316] You cite research by Edward Vul and Harold Pashler that asks people to make judgments about the same thing separated by a certain amount of time.
[317] What do they find when you average out these different estimates?
[318] Well, for example, you know, if you ask people, what is the population of London, and you ask it once, and then you wait a couple of weeks, say, and you ask it again, the striking thing is that most people will not give you the same number on the two occasions.
[319] And the second striking thing is that the average of the two responses is more likely to be accurate than either of the responses.
[320] The first response is better than the second, but the average is better than both.
[321] In one of the studies they conducted, they actually asked people to make estimates that were different than their initial estimates, and then they averaged out the estimates, and they found that noise was reduced even further.
[322] Why would this be the case, Danny?
[323] Well, here, what you're trying to do, and you can do it within an individual, is you're leaning against yourself.
[324] You made one judgment and then you ask people to think, how could that judgment be wrong and then make another?
[325] And that turns out to be indeed better than merely asking the same question twice.
[326] In some ways, this provides a solution to the conundrum I pose to Danny.
[327] If noise is detectable only by studying statistical averages, how do I reduce noise in decisions I am making as an individual?
[328] The answer: try to make the same decision over and over under different conditions.
[329] One way to tell if noise is behind my decision to propose marriage is to ask myself whether I would make the same decision under different circumstances, not just on a moonlit night in the springtime, but in the heat of summer or in the dead of winter.
[330] If I reach the same answer in these different settings, it's possible I could still be making a mistake.
[331] But at least I can be somewhat reassured that my decision is not the result of random extraneous factors.
[332] Scientists are exploring lots of ways to reduce noise.
[333] The researcher Sendhil Mullainathan and his colleagues devised an algorithm to advise judges on whether to grant bail to suspects.
[334] These are people who have been arrested, but who have not yet been put on trial.
[335] Keeping them in jail can cause all kinds of hardship.
[336] People can lose jobs or lose custody of their children while they're incarcerated awaiting trial.
[337] It's costly for taxpayers to keep people in jail.
[338] But letting someone dangerous out of jail can cause harm.
[339] Maybe they go on to commit other crimes.
[340] The researchers had the algorithm offer advice to judges about whether to grant bail.
[341] They found that if judges incorporated the recommendations, this could reduce the number of people in jail by 42% without increasing the risk of crime.
[343] Research goes further than that, in that allowing the algorithm to inform the judge is actually not the best way of doing it.
[344] The research suggests quite strongly that when you have a judge and an algorithm that are looking at the same data, with some exceptions, it's better to have the algorithm have the last word, and this is very non-intuitive.
[345] Yeah.
Besides being actually superior in some ways in terms of judgment, one of the things that algorithms do better than people is that they're not noisy.
[347] They're actually much more consistent.
[348] Can you talk about this, that in some ways one of the advantages algorithms have is that even when their judgments might not be as good as humans', because they have less noise than humans, you're able to get better outcomes?
[349] Well, noise is a source of inaccuracy, and algorithms, by their nature, are noise-free.
[350] That is, when you present the same problem to two computers running the same software, they're going to give you the same answer, which is not true of different bail judges.
[351] So that advantage is, in many cases, sufficient to make algorithms superior to people.
[352] But I don't want to create the impression that our solution to the problem of noise is algorithms, because even if it were the solution, there's just too much opposition to algorithms.
[354] So ultimately, we're talking about improving judgments.
[355] In some domains, algorithms can be used, and I think where they can be used, they should be used, but this is a long process, a slow process, because human judgment is going to make the important decisions for quite a while.
[356] Isn't it interesting, though, Danny, that when you look at the news and you see the news coverage of algorithms, I feel like just in the last year, I've seen dozens of articles talking about algorithmic bias, about how algorithms in some ways can make judgments worse.
[357] And it is the case that you can have poorly designed algorithms.
[358] You can argue that, you know, the old sentencing rules that we had, three strikes and you're out, in some ways, that is an algorithm.
[359] But you could argue the algorithm in some ways was too crude to capture what actually needed to be done.
[360] But isn't it striking that there's so little attention that's paid by contrast to the potential good that algorithms can do?
[361] Because, again, we're so focused on the story of intent, of saying a bad outcome happened, an algorithm caused it, clearly algorithms need to be thrown out the window.
[362] I mean, we do not want to accept the errors that blind rules will make.
[363] You know, I was talking to someone who designed self-driving cars, and they realize that self-driving cars, it's not enough for them to be 100 times safer than regular drivers.
[364] They effectively have to be almost perfect before they will be admitted.
[365] And it's that kind of bias that is completely human and natural.
[366] We like the natural over the unnatural.
[367] We prefer human drivers and human doctors to make mistakes rather than self-driving cars and medical algorithms.
[368] And that's just the fact of psychology.
[369] You talk in the book about something that you call decision hygiene, and others have talked about this idea as well.
[370] What is decision hygiene and why the analogy to public health?
[371] When you're thinking of dealing with bias, it is like dealing with a specific disease.
[372] So you can think of a vaccine or you can think of medication, which is specific to that disease.
[373] But when you're washing your hands, you're doing something entirely different.
[374] You have no idea what germs you might be killing.
[375] And if you're good at it, you will never know because the germs are dead.
[376] And a similar distinction can be drawn between different ways of fighting errors.
[377] There is a difference between procedures that are specifically aimed at particular biases and procedures that are intended generally to improve the quality of the judgment and decisions.
[378] And the way that this feeds back on the individual is that if there are procedures that are good for organizations and for repeated decisions, they should be good for individuals and for singular decisions.
[379] So if I'm a CEO of a corporation or if I'm a policymaker and I'm hearing this conversation about noise, can you give me two or three really specific suggestions on ways that I can reduce noise in my decision making or in my company's decision making or in my organization or community?
[380] Well, I think the first step would be to ask whether you have a task in the organization that is carried out by interchangeable functionaries, like underwriters or emergency room physicians. They're carrying out the same tasks, making the same kinds of judgments, and you would like those judgments to be noise-free, to be uniform. So first of all, identify whether you have that case in your organization. If you do, we strongly recommend you measure noise.
[381] That is, you actually take those individuals, present them with similar cases, and observe the variability in the judgments.
[382] And possibly that may lead you to want to do something about it.
[383] But the first step is just to measure noise, because our intuitions about the magnitude of noise are systematically wrong.
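One simple way to put a number on that variability, in the spirit of the insurance noise audit described earlier (the metric and the quotes below are illustrative assumptions, not the authors' exact published procedure), is to compute the average relative difference between pairs of judgments of the same case:

```python
import itertools
import numpy as np

def noise_index(judgments):
    """Average relative difference between two randomly chosen judgments
    of the same case: |a - b| divided by the mean of a and b."""
    pairs = itertools.combinations(judgments, 2)
    diffs = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return float(np.mean(diffs))

# Hypothetical noise audit: five underwriters quote the same case.
quotes = [9_500, 12_000, 8_800, 11_500, 10_700]
print(f"noise index: {noise_index(quotes):.0%}")  # about 16% for these numbers
```

Recall that executives guessed this sort of figure would be around 10%, while the audit Danny describes measured 55%.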
[384] Danny thinks we should learn from the saga of the rise and fall of sentencing reform.
[385] Once you detect noise in an organization, it may be wiser to avoid trying to fix the problem by asking everyone to follow rigid rules.
[386] As we've seen, people hate to have their judgment questioned, they hate to have their discretion limited, and they detest anything that smacks of mechanistic rules.
[387] The main thing to do, if you're attempting to improve the judgment of people in an organization, is to convince those people that they want their judgments to be better.
[388] If you impose it as a set of rules that all of them will follow, they will resist it, they will feel they are being robotized, and they're likely to sabotage whatever you propose.
[389] I mean, this is well known in insurance companies that provide the underwriters, in many cases, with information or even with a technical price, a suggestion about what premium should be assigned.
[391] And underwriters are very likely to completely ignore those and to follow their judgment.
[392] And basically, I would think, you know, it's obvious advice.
[393] If you have a group of people who are noisy, have that group try to find the solution to the noise, have them develop procedures that will make them uniform.
[394] Do not impose procedures on them, but work with them to make them more uniform, because actually they will recognize that they would like to be in agreement with each other.
[395] Let them feel that what they are doing is what they want to do rather than what they're being forced to do.
[397] That is clearly a very important step if people really want to have organizations that improve their judgments.
[398] Daniel Kahneman, Olivier Sibony, and Cass Sunstein are the authors of Noise: A Flaw in Human Judgment.
[400] Danny, thank you for joining me today on Hidden Brain.
[401] It was really my pleasure.
[402] Hidden Brain is produced by Hidden Brain Media.
[403] Our production team includes Bridget McCarthy, Laura Querell, Kristen Wong, Ryan Katz, Autumn Barnes, and Andrew Chadwick.
[404] Tara Boyle is our executive producer.
[405] I'm Hidden Brain's executive editor.
[406] Our unsung hero today is Rosalind Tortisilius.
[407] She's a producer in New York City who helped us record this interview with Danny Kahneman.
[408] Rosalind got to Danny's place early to set up for the interview, and she was incredibly kind, conscientious, and patient.
[409] At various points in my conversation with Danny, sirens blared outside.
[410] At one point, a refrigerator in Danny's apartment woke up and started making noise.
[411] Through all of it, Rosalind figured out how to get a crystal clear recording.
[412] Thank you, Rosalind.
[413] You are a true unsung hero.
[414] If you like this episode and like our show, please consider supporting us.
[415] Go to support.hiddenbrain.org to learn how you can help.
[416] Every little bit makes a difference, and it means a lot to us to see you step forward to help.
[417] I'm Shankar Vedantam.
[418] See you next week.