Hidden Brain XX
[0] This is Hidden Brain.
[1] I'm Shankar Vedantam.
[2] It's one of the most iconic movie soundtracks of all time.
[3] In 1975, a young Steven Spielberg scared the living daylights out of millions of people with Jaws.
[4] A great white shark terrorizes a New England beach town. As one victim becomes two, and then three, and then four, people respond first with denial, then fear, and finally, outright hysteria.
[5] After watching the movie, I remember being scared to even stick my toe in the ocean.
[6] And even today, when I go to the beach, I can't help but peer out at the water and ask myself, is that a dorsal fin?
[7] This week on Hidden Brain, the disconnect between our fears and the real dangers we face in our daily lives.
[8] As the world grapples with a devastating pandemic, we consider how our minds assess risk.
[9] What makes us focus on some threats and not on others?
[10] And how can we use this knowledge to prepare for the future?
[11] Paul Slovic is a psychologist at the University of Oregon.
[12] For decades, he has studied how people think about risk and the mismatch between the intuitive feelings we have about risk and the way we analyze risk scientifically.
[13] Paul Slovic, welcome to Hidden Brain.
[14] Thank you, Shankar.
[15] Glad to be here.
[16] For years, Paul, the movie Jaws made people afraid of going to the beach.
[17] Did you ever think twice about swimming in the ocean after watching the movie?
[18] I laughed because I'm not a swimmer.
[19] I was a child in Chicago in the 1940s during the polio epidemic.
[20] We weren't allowed to go swimming because it was thought it would make us susceptible to polio, so I never learned to swim very well.
[21] So I stay away from water, so I don't worry about sharks.
[22] But it was clear that many people who lived near oceans were quite worried about it.
[23] So there's a serious mismatch, of course, between how afraid people are of sharks and how afraid we ought to be.
[24] Sharks kill maybe five or six people a year, and that's worldwide.
[26] If anything, it's the sharks who should be making horror movies about us.
[27] Right.
[28] The movie created vivid images in our mind and a sense of experience.
[29] And so that creates a sense of risk of shark attacks much more powerfully than the statistics do.
[30] Yeah.
[31] And of course, this is true, not just of shark attacks.
[32] It's true of all manner of things that Hollywood has told us about over the years.
[33] You know, everything from snakes to serial killers, the risks in our minds vastly exaggerate the actual risks of those things affecting us.
[35] Yes.
[36] What we find is that our sense of risk is influenced by the direct experiences we have and the indirect experiences we have through media, such as film or the news media, which are very powerful in influencing us.
[37] So when people think about risk, I think many people automatically assume that risk is something that you're analyzing.
[38] You analyze what the risks are in a situation.
[39] But you and many others argue that most of the time when people are thinking about risks, they're actually not using analysis to evaluate risks.
[40] You mentioned a second ago that people use their feelings.
[41] Can you talk about this idea that for many people, our emotions, our affect, is closely tied up in our perceptions of risk?
[42] We originally thought that people were analyzing risk, doing some form of calculation in their minds about what the probability of something bad happening would be and how serious that would be, and perhaps even multiplying the severity of the outcome by the probability to get some sort of expectation of harm.
[43] And as we started to study this, we found out that basically we can do those calculations, but it's certainly easier to rely on our feelings.
[44] It's easy to do, it feels natural, and it usually gets us where we want to go, except when it fails.
[45] And there are certain ways that our feelings deceive us.
[46] And that's what my colleagues and I study: when can we trust our feelings, and when should we stop and think more carefully and reflectively, and look to data and argument and science to make a decision?
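To make that contrast concrete, here is a minimal sketch of the analytic calculation Paul describes, expected harm as probability times severity. The hazards and numbers are hypothetical illustrations, not figures from his research.

```python
# A minimal sketch of the analytic risk calculation described above:
# expected harm = probability of the bad outcome * severity of the outcome.
# The hazards and numbers below are hypothetical illustrations.

def expected_harm(probability: float, severity: float) -> float:
    """Expectation of harm: how likely a bad event is times how bad it would be."""
    return probability * severity

hazards = {
    "rare but severe (shark attack)": (0.000001, 1000.0),
    "common but mild (fender bender)": (0.01, 50.0),
}

for name, (p, severity) in hazards.items():
    print(f"{name}: expected harm = {expected_harm(p, severity):.4f}")
# The analytic ranking often differs sharply from the ranking our feelings produce.
```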
[47] I was remembering one time I was in Costa Rica, I believe, and we were going ziplining, and you're attached to this wire that's about 200 feet above the earth.
[48] And I remember, at the moment at which I was about to step off this ledge,
[49] I was just gripped with this sense of lunacy, that what I was doing was absolutely insane.
[50] And at that point, of course, I was not calculating what are the risks that the rope will break, what are the odds that the harness will come loose.
[51] It was entirely what I felt in my stomach that essentially told me this is an extremely risky activity.
[52] Yes, that's the way it goes.
[53] After I had come to appreciate the concept of risk as feelings, I looked back on my own experience and recognized a very dramatic moment when my feelings were guiding me very powerfully.
[54] And that was a time when I was driving on a busy freeway near Chicago and ran out of gas.
[55] I pulled the car off to the side of the road.
[56] And then I thought, well, okay, I'd better go find a gas station, get a gas can, fill it, and come back.
[57] But to do that, I realized I have to cross this freeway.
[58] And so I started to cross the freeway, and I would take a step onto the pavement, looking at the cars approaching at 60 miles an hour and how far away they were. And as I put my foot down, I would be gripped by this fear, and I would retreat back and wait, in hopes that I would find a bigger gap where I could step out and wouldn't be afraid.
[59] Yeah, and in many ways this makes total sense. I mean, as you're telling the story, I'm gripped by a sense of fear thinking of you, Paul, stepping out across six lanes of traffic in Chicago.
[60] And at a certain level, it's worth saying, this system works very well much of the time.
[62] I mean, the fact that you didn't have to calculate the speed of the moving cars and how much time they would take to get to you and write all that down on a sheet of paper; you were just gripped by a sense that this is extremely unsafe, and you stepped back.
[63] That kind of fear, that kind of risk perception, holds us in good stead much of the time, does it not?
[64] Exactly.
[65] That's why we keep doing it: because most of the time, relying on our feelings works for us, if our feelings have been conditioned properly by experience.
[67] So it's very adaptive, except when it goes wrong as it sometimes does.
[68] So I want to talk about some of the times and ways in which this intuitive sense of risk that we have runs up against our analytical approach to thinking about risk.
[69] And there have been a number of experiments that have teased out this tension very beautifully.
[70] I want to start with an experiment that the researcher Christopher Hsee once ran.
[71] Volunteers were told they either had a low probability of losing $20 or a high probability of losing $20.
[72] They were then asked how much they would be willing to pay to avoid this risk.
[73] And the results were exactly what you'd expect.
[74] People were willing to pay about a dollar to avoid the low probability risk of losing $20, but were willing to pay about $18 if they faced a high probability risk of losing the $20.
[75] So very rational.
[76] But then the researchers tweaked the experiment in a rather cruel fashion.
[77] Do you remember what they did, Paul?
[78] Yes.
[79] They said that if the bad event happens, you're going to get a strong electric shock.
[80] Not one that is truly dangerous, but it's going to be very unpleasant.
[81] One group was told there was a 99% chance of the shock.
[82] The other group, a 1% chance of the shock.
[83] And what happened?
[84] How did people react?
[85] Did they react in the same rational way as when they confronted the low probability and high probability risks of losing 20 bucks?
[87] Well, the group that faced a 1% chance of shock was willing to pay almost as much as the 99% group to avoid that shock.
[88] The reaction was not sensitive to the probability of the shock.
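As a rough check on the "rational" benchmark in the money condition: willingness to pay should track the expected loss, probability times amount. A sketch using only the figures quoted above; in the shock condition, payments were reported as nearly flat across probabilities, so no such formula fits.

```python
# The "rational" benchmark for the $20 gamble: willingness to pay to avoid a
# loss should track the expected loss, probability * amount at stake.
for p in (0.01, 0.99):
    print(f"P(lose $20) = {p:.0%} -> expected loss = ${20 * p:.2f}")
# -> $0.20 vs. $19.80, roughly matching the ~$1 and ~$18 people actually paid.
# In the shock condition, payments were nearly identical at 1% and 99%:
# the image of the shock, not its probability, drove the response.
```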
[89] So what is actually going on here?
[90] Why is it that when people are facing a 1% risk of an electric shock, in their minds it feels as if you're talking about a 99% risk of getting an electric shock?
[91] Why is it different when it comes to the electric shock compared to when it comes to losing 20 bucks?
[92] Well, the loss of 20 bucks, we would say, is relatively affect-free.
[93] I mean, sure, we don't want to lose $20, but it doesn't provoke as strong an emotional reaction as the potential shock did.
[94] So when you think about getting a shock, that thought creates a feeling in you of anxiety.
[95] And that feeling of anxiety is the same feeling whether you're thinking about it with a 1% chance or a 99% chance.
[96] You're still thinking about the shock.
[97] And therefore, the mind does not modulate or multiply that feeling from the shock image by its probability.
[98] That is, our, quote, feeling system doesn't do multiplication.
[99] When I read the experiment, I sort of tried to put myself in the shoes of the volunteers.
[100] And of course, the moment I tried to do that, the thing that my mind went to was the last time I experienced an electric shock.
[101] And as you point out, at that point, asking me to multiply that feeling by either 1% or 99%, that's not really possible to do, because my brain now is in the realm of affect and emotion as opposed to calculation.
[102] That's right.
[103] There is one way we can do that multiplication, and that is if we sort of push that 1% down to the realm of it's not going to happen.
[104] It's zero.
[105] We can then turn off the feeling that way.
[106] But that's a rather crude kind of calculation there, sure.
[107] Once you get above zero to a probability that you think might actually happen, then it's very difficult to modulate the feeling by that probability.
[108] Of course, in the real world, most things are not zero probability or 100%.
[109] They're usually somewhere in between.
[110] When risks produce a feeling of fear or dread, our capacity to think analytically is impaired.
[111] Paul says this is why we worry about getting attacked by sharks rather than the far more likely prospect of getting in a car crash on the way to the beach.
[112] It's also the case that sometimes our brains get so overwhelmed with fear that they can't accurately process any additional fear.
[113] This idea builds on an area of psychology called psychophysics.
[114] Some of the very earliest experiments in psychology were looking at how we perceived brightness or the loudness of a sound.
[115] And what they found was that we're very sensitive at very low levels of brightness or very low levels of loudness.
[116] In a quiet room, you can hear a whisper.
[117] But then as the loudness of the sound or the brightness of the light increased, it took more of a difference to make us notice.
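This diminishing sensitivity is often summarized by the Weber-Fechner law, under which perceived intensity grows roughly with the logarithm of the stimulus. A small sketch of that assumption:

```python
import math

# Weber-Fechner-style response: perception grows with the log of the stimulus,
# so the same physical increment is felt less and less at higher baselines.
def perceived_intensity(stimulus: float, k: float = 1.0, threshold: float = 1.0) -> float:
    return k * math.log(stimulus / threshold)

for baseline in (1.0, 10.0, 100.0):
    felt_change = perceived_intensity(baseline + 1) - perceived_intensity(baseline)
    print(f"baseline {baseline:>5}: adding 1 unit feels like +{felt_change:.3f}")
# The whisper-in-a-quiet-room effect: +1 on a baseline of 1 is vivid,
# +1 on a baseline of 100 is barely noticeable.
```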
[118] As we've seen, our feelings about risk are rarely shaped by the data alone.
[119] Our feelings are shaped by stories, by images, and by the consensus of our groups.
[120] When we come back, we look at how our perceptions of risk shape how we think about homicide, climate change, and global pandemics.
[121] You're listening to Hidden Brain.
[122] I'm Shankar Vedantam.
[123] This is Hidden Brain.
[124] I'm Shankar Vedantam.
[125] Over several decades, psychologists have explored how people arrive at their conclusions that something is risky or that something is not risky.
[126] They have identified a number of factors that shape our perceptions of risk.
[127] These studies have found a significant gap between the way we analyze risks and the way we feel about risks.
[128] The two don't always match.
[129] Psychologist Paul Slovic has explored what happens in real life when these two ways of thinking produce different answers.
[130] Paul, if you ask Americans how many people are killed by homicide and by terrorism, they are likely to overestimate the risk.
[131] If you ask how many people die from heart disease and diabetes, we tend to underestimate the risk.
[132] Can you explain how a mental shortcut that sometimes called the availability heuristic might shape these perceptions, Paul?
[133] Yes, the availability heuristic refers to a mechanism whereby we judge the frequency or the probability of something by how easy it is to imagine it happening or to remember it happening in the past.
[135] So we use imaginability and memorability as a shortcut way of judging probability and frequency.
[136] And again, at a very everyday level, this system makes perfect sense.
[137] I mean, things that do come more readily to mind might actually be things that are more important to the context that we find ourselves in.
[138] And so this rough rule of thumb, this shortcut, this heuristic, it's not always a bad thing.
[140] In fact, for much of our lives, this might be very useful.
[141] Yes, because, as you say, imaginability and memorability are related to frequency, but not always.
[142] Something that's a very dramatic event that's easy to remember or imagine happening because we've seen it in a movie or something will lead us to have a sense that this thing is frequent or likely, when in fact, statistically, it's very unlikely.
[143] And particularly if the event is not only dramatic, so that it sticks in our memory, but if it carries affect or emotion, that feeling then amplifies the effects of memorability and makes it seem even more likely.
[144] Paul, the researcher Tali Sharot once conducted a study where she asked people a number of questions about potential negative life events.
[145] She asked them about the likelihood that someone they knew personally was going to die, or that they would suffer a serious illness, or that they would seriously embarrass themselves.
[147] And she found that people generally underestimated the likelihood of bad things happening to them compared to the likelihood of the bad thing happening to other people.
[148] Do optimism and the optimism bias shape our ability to look danger in the eye?
[149] Yes, it leads us to have more confidence in being able to cope with the situation than perhaps is warranted.
[150] And I think even Professor Sharot would say that, in many cases, the optimism bias is adaptive.
[151] It leads us to take action, where otherwise we might just be rather passive in situations.
[152] So it leads us to take chances that are often beneficial, so it's a good thing.
[153] But it can also lead us to be very overconfident in our ability to handle certain types of situations that are really quite dangerous and are beyond our capability.
[154] And there's the fact that we often feel that we're in control of some of these events. We also saw the optimism bias greatly at work with regard to cigarette smoking, where people recognized that smoking is, in general, not good for your health, but felt that they could smoke in ways that minimized those risks.
[155] That's what's so fascinating about this, because the optimism bias is not just that you're underestimating risks in general; you're underestimating the risks for yourself.
[157] You think that other people are just as vulnerable to getting killed in a highway crash or getting cancer from smoking.
[158] You just think that somehow you're special.
[159] Yes, that you're not going to smoke very long, or that you're going to smoke the cigarettes that are less harmful, or fewer cigarettes per day; all of these things will enable you to control the risk in ways that you don't think other people are doing.
[160] Talk for a moment about the idea of cumulative risks.
[161] You know, I might have a very low probability of getting killed in a car crash if I don't wear a seatbelt on one drive.
[162] But if I don't wear a seatbelt over many years, the cumulative risk might be quite large.
[163] How good are we in our minds at keeping track of these kinds of risks that gradually accumulate over time?
[164] I don't think we really do the cumulative assessment.
[165] This was, I think, very evident early on when seatbelts were first introduced.
[166] There were some very high-powered advertising campaigns to try to motivate people to use them.
[167] Only about 10 or 15% of drivers were wearing seatbelts, a very low percentage.
[168] And so they had these campaigns to get people to buckle up for safety.
[169] When you sit down, America, show the world you care.
[170] Click today with the belt.
[171] We'll welcome you.
[172] And they didn't work.
[173] They had very little impact.
[174] In thinking about that, say we take 50,000 trips in a lifetime, and these individual trips are really pretty safe, so that people were not rewarded for wearing a seatbelt, which was a little bit uncomfortable.
[176] And if they didn't put it on, they weren't punished either because they didn't need it.
[177] The problem is that over 50,000 trips, the likelihood that you'll need a seatbelt on one or more of those trips becomes significant.
[178] It might be that one in three people will actually be in a serious accident where they would benefit from a seatbelt.
[179] So the cumulative probability was high enough to warrant having people wear seatbelts.
[180] So I wrote an op-ed piece saying that only new laws will produce seatbelt use.
[181] And people started to wear seatbelts because it was the law, and then it became kind of a norm.
[182] So now we have relatively high seatbelt usage.
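Paul's 50,000-trip argument is easy to make concrete. If each trip independently carries a tiny probability of a serious accident, the chance of at least one over n trips is 1 - (1 - p)^n. In the sketch below, the per-trip probability is a hypothetical value, back-solved to match the one-in-three lifetime figure he mentions.

```python
# Cumulative risk over many independent trips:
# P(at least one serious accident in n trips) = 1 - (1 - p)**n
n_lifetime = 50_000
p_per_trip = 1 - (2 / 3) ** (1 / n_lifetime)   # hypothetical, ~0.0000081,
                                               # chosen so lifetime risk is 1/3

for n in (1, 100, 10_000, n_lifetime):
    cumulative = 1 - (1 - p_per_trip) ** n
    print(f"{n:>6} trips: cumulative risk = {cumulative:.4%}")
# One trip looks utterly safe (~0.0008%); a lifetime of trips does not (~33%).
```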
[184] You know, there's a deep philosophical insight here in what you're saying, because I think especially in the United States, we are a country that believes in individual liberty and autonomy and freedom, and people want to have the sense that they are making the choices that are best for them.
[185] But I think the many examples you've talked about here, and the seatbelt is perhaps the classic example, show how if you leave things up to individual choice, it is not irrational to say, on this particular drive, on this particular Tuesday that I'm driving, my risk is actually not very high.
[186] And it might actually be the rational thing to say, okay, I can forego the seatbelt and nothing much happens.
[187] And our minds simply are not equipped to deal with these kinds of risks gradually accumulating over many, many decades for that one event, you know, 25 years from now when the seatbelt actually is really useful.
[188] And in situations like this, this is part of what you were alluding to a second ago, you really do need the intervention of systems that protect people in some ways from themselves.
[189] Can you talk about this idea a moment?
[190] Because it seems to me that that's one of the philosophical implications of the work that you've done.
[191] Yes, and you get this broader perspective through science, through collecting data, and that can show how these risks accumulate and affect both individuals and populations.
[192] The same thing happens with cigarette smoking.
[194] That is, the risk is not that smoking this next cigarette is really going to harm you significantly.
[195] I mean, it doesn't work that way.
[196] The harm that comes is the cumulative harm of smoking, you know, thousands of cigarettes that leads to quite a significant increase in the risk of not just lung cancer, but many diseases.
[197] In view of the continuing and mounting evidence from many sources, it is the judgment of the committee that cigarette smoking contributes substantially to mortality from certain specific diseases and to the overall death rate.
[198] And it can be demonstrated through data, through statistics, both for the individual and for the population.
[199] But at the individual level, this very next cigarette or this very next drive is not likely to be a problem for you.
[200] Some mismatches between our analytical approach to risk and our emotional response to risk come about because of our inability to do certain kinds of math in our heads.
[201] I was looking for examples of this yesterday and came by an interesting puzzle.
[202] If you take a piece of paper and you fold it over, it now has double the thickness it had at first.
[203] And then if you fold it over again, it's now four times as thick as the original sheet.
[204] And the puzzle is, if you had an endless amount of paper, how many times would you have to fold it over to get a tower that stretches all the way to the moon?
[205] And when I first saw the puzzle, I guessed the answer must be, you know, like you have to fold it a billion times.
[206] And the correct answer is 45, just 45 folds.
[207] And you get a tower that basically stretches all the way from the earth to the moon, which is a quarter of a million miles away.
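The arithmetic is easy to check, since each fold doubles the thickness. Here is a sketch assuming a standard 0.1 mm sheet; the exact count depends on the thickness you assume, which is why estimates land in the low-to-mid forties.

```python
import math

# Each fold doubles the stack, so after n folds the height is thickness * 2**n.
thickness_m = 0.0001           # assumed sheet thickness: 0.1 mm
moon_distance_m = 384_400_000  # average Earth-moon distance, ~384,400 km

folds = math.ceil(math.log2(moon_distance_m / thickness_m))
print(f"folds needed: {folds}")  # 42 under this thickness assumption
print(f"stack height after {folds} folds: {thickness_m * 2**folds / 1000:,.0f} km")
```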
[208] Can you talk a moment, Paul, about how people often experience difficulty appreciating the nature of exponential growth, where two becomes four, four becomes eight, eight becomes 16, and so on, and how this shapes our perception of risk?
[210] Yes, this is a very interesting challenge for the human brain.
[211] And some experiments that were done in the 1970s in the Netherlands demonstrated this very clearly, where people were given a series of numerical measures of pollution increasing.
[212] And the pollution was doubling or tripling every year.
[213] And they were asked, you know, well, where will this be after 10 years?
[214] And what they found was that people projected in a straight line from this very low level at the beginning of this exponential growth and greatly underestimated where this was leading.
[215] So the hallmark of exponential growth, and what makes it so challenging and insidious in a way, is that it looks very benign at the beginning; even though it is changing exponentially, the numbers are still small.
[216] And what happens is it suddenly roars up, like a fire that erupts, and overwhelms us with very high numbers.
[217] So we don't anticipate how quickly it's going to explode.
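A miniature version of the pattern Paul describes, with illustrative numbers: a quantity that doubles every year, next to the straight-line projection people tend to draw from the first couple of observations.

```python
# Linear projection vs. exponential reality, with illustrative numbers.
start, years = 1.0, 10

# Straight-line projection: extend the year-0 -> year-1 change (+1 unit).
linear = [start + t * start for t in range(years + 1)]

# Actual process: doubling every year.
exponential = [start * 2 ** t for t in range(years + 1)]

for t in (1, 3, 5, 10):
    print(f"year {t:>2}: straight-line guess {linear[t]:>5.0f}, actual {exponential[t]:>5.0f}")
# By year 10 the straight line says 11; the doubling process says 1024.
```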
[218] I want to talk about exponential growth in the context of the COVID-19 pandemic.
[219] In early March, I believe it was on March 9th, 2020, New York City Mayor Bill de Blasio had this to say about the coming pandemic.
[220] Some places like Italy are doing mass school closures.
[221] That's not on the menu here.
[222] Is there a theoretical scenario where that could happen?
[223] Of course, but is it anywhere near to where we are now?
[224] No. So this was on March 9th, Paul.
[225] New York City closed its public schools one week later on March 16th.
[226] And I think that speaks in some ways to what you were just saying, which is that it's really hard, even if you're a public official with the best of intentions and you have a lot of data, to actually truly appreciate how staggeringly fast a pandemic can grow.
[227] Absolutely.
[228] Governments all over the world were slow to appreciate what was going to happen when their small numbers of cases began to grow exponentially.
[229] And so I think there was a general delay in responding.
[230] Not everywhere, fortunately, but the majority of nations were slow to react because the numbers were small, increasing slightly, and it didn't look all that bad.
[231] And by the time we really start to take it seriously, it's out of control.
[232] What do you think it is about the mind that makes it difficult for us to appreciate exponential growth?
[233] Well, probably because it is relatively rare compared to straight line growth.
[234] That is like counting.
[235] I mean, counting is a linear system, you know, and we're much more familiar with things that grow in a linear way than those that grow exponentially.
[236] In so many ways, our minds struggle with the mathematics of risk.
[237] Exponential growth is hard for us to conceptualize.
[238] And as we heard earlier, the difference between a 1% risk and a 99% risk becomes difficult to take into account when our emotions become part of the equation.
[239] Then there's the issue of control and whether we feel like we're the ones making choices about which risks to take.
[240] More than 50 years ago, the researcher Chauncey Starr discovered something curious.
[241] People were willing to accept far greater risks for activities that they chose over activities that did not involve personal choice.
[242] They accepted a higher level of risk for, say, skiing or bungee jumping, but found a similar level of risk unacceptable when it came to things like building safety or the use of preservatives in food.
[243] Paul and other researchers have refined this idea in subsequent studies.
[244] But this core finding about the importance of personal control may help us to understand why some people say they are not worried about becoming ill with the COVID-19 virus, but are worried about the safety of the COVID-19 vaccine.
[245] You feel you have control when you go to a restaurant.
[246] You convince yourself the risk of the virus is small.
[247] But you don't have control over how a vaccine is made.
[248] You have to trust the results of studies conducted by scientists whom you will never meet.
[249] Paul says our sense of control plays a significant role in our perceptions of risk.
[250] Someone once gave me an example, you know, supposing that you're slicing a loaf of bread and, you know, how close to the knife would you put your fingers?
[251] And supposing that someone else was slicing the bread, how close would you put your fingers? It probably wouldn't be as close.
[252] No. And I think that's a nice example of the sense of control.
[253] I think it's a very important element in driving, where the driver feels that they are controlling the risk because they're controlling the speed and other aspects.
[254] They don't realize all the elements of the situation that are not under their control, like the conditions, you know, road hazards, or what other drivers are going to do.
[255] Where do you think the origins of many of these biases are, this mismatch between the way our brains operate and the challenges that modern life places before us, Paul?
[257] When we were earlier in our phase of evolution and our brain was forming, we were shaped very much by the experience that we faced at that time, which had to do with things that were kind of up close and personal, things that were right in front of us, like an animal lurking in the bushes or a hostile tribe.
[258] We had that kind of sensitivity to things that were relatively small in number and that we could sense directly through our senses.
[260] In the modern world, the hazards are far more diverse.
[261] Many of them are invisible, like things that have to do with bacteria or viruses and things that we can't easily see, things that happen at a distance from us, but at some point will affect us.
[262] Things related to climate change, where the problem still seems fairly distant, something that's going to happen, that's happening perhaps elsewhere to other people.
[263] So the modern world has a whole different array of hazards from the ancient world, and a lot of these modes of thinking were shaped in the cave, so to speak.
[264] Yeah.
[265] You know, I remember going for a walk a few months ago in the woods, and at one point in the trail, you know, I found myself sort of leaping backwards.
[266] And, you know, I looked down and I saw that there was a snake, you know, five feet in front of me. And I was struck by the speed of my response to that threat; it was just, you know, almost instantaneous.
[267] It almost felt like a reflex.
[268] And through all of the months of the COVID pandemic, I have never experienced a moment of fear like I felt when I saw that snake.
[269] And of course, when you think about it, the risk of COVID is probably much greater than the risk that that snake posed me, because most snakes, of course, in the wild, in the woods in suburban communities, are probably going to be relatively safe.
[270] But there's something in some ways about the Stone Age brain that has remained in my brain, where I'm much more vigilant to the risk of a snake than I am to the risk of this invisible virus that could do me and others harm.
[271] Yes, again, a difference between the hazards that existed a long time ago, which shaped the way our brains formed, and the risks of today.
[272] Another variation on what you said was if instead of seeing the snake in front of you, you had heard an ominous sound in the bushes.
[273] And then the question is, do you stop and analyze the sound, and debate with yourself as to whether this is really something you should worry about, or do you just take it as something that sounds scary and kind of move away from it?
[274] And what likely will happen is that you accept the first reaction.
[275] And this is something that my colleague Daniel Västfjäll and I have discussed: with our feelings, there's no gatekeeper that leads us to analyze information that evokes feelings in us.
[276] You know, we just take it for what it is, and the brain lets these feelings in, and we react to them.
[277] We don't vet our feelings as a way we vet arguments.
[278] And I think this goes way back.
[279] It's something that was very adaptive a long time ago.
[280] And for good reason, because, again, as you point out, if you actually stopped and analyzed every threat and drew up a cost-benefit calculation every time you heard a growl in the bushes, the likelihood is you'd be dead.
[281] Yes.
[282] Not only would your calculations likely be wrong because it's so hard to do those calculations, but as you say, you may not survive.
[283] You have to just move fast.
[284] You know, there are many risks where if I take the risk, I'm the one who is bearing the cost of that risk.
[285] So if I decide to go, you know, mountain climbing or, you know, base jumping, I'm the one who's incurring the risk.
[286] Now, presumably others are affected as well if something happens to me. But to a large extent, you know, I bear the consequences of my actions.
[287] That logic breaks down when it comes to the risk of something like a pandemic, where my actions, in fact, might affect the well -being of other people.
[288] Economists sometimes call these phenomena negative externalities.
[289] So in other words, my actions that I'm undertaking freely might affect somebody else's ability to be free and their ability to do what it is that they want to do.
[290] If our minds are not very good at appreciating the things that are risky even to us, how effective can they be in appreciating the things that are risky to other people?
[291] It's particularly true that we can't sense that when the consequences are not direct.
[293] So some things that we do have collective consequences, for example, stopping at red lights. If you violate those, sure, you get to your destination a little quicker, but there will be a massive increase in collisions and deaths, which is a very obvious harm.
[294] There we get good feedback on the harm that your individualism is producing.
[295] The problem with COVID, what makes it so insidious and difficult, is the fact that when it comes to protecting yourself and others by wearing a mask, social distancing, staying home rather than going to school or to the workplace, you don't have the sense that you are creating harm when you violate those measures, because you don't see the harm immediately or directly.
[296] And we rely heavily on kind of what's right in front of our eyes in terms of sensing risk.
[297] We don't see the damages that are caused, but we feel the benefits of not doing these things.
[298] I mean, we get to hang out with our friends and go to restaurants and bars and work, which we need to do.
[299] So we feel the benefits of doing the wrong thing.
[300] We don't see the benefits of doing the right thing.
[301] And that's a recipe that leads even the most responsible people to ease off over time.
[302] Yeah.
[303] And, of course, when you think about this from the point of view of natural selection, the reason viruses have thrived among human populations for thousands of years is really because they've taken advantage of how our minds work.
[304] Yes, interesting.
[305] Well, they succeed when they have characteristics that take advantage of our cognitive limitations.
[306] And in fact, as we go to more and more remote parts of the earth and start to inhabit rainforests and other places, and as the climate changes, we are perhaps coming into contact with more of these viruses that heretofore we haven't had contact with,
[307] and that's kind of a worry for the future.
[308] But it's certainly the case that with COVID, COVID is adapted to thrive in ways that are very difficult for humans to combat.
[309] Yeah, I mean, I know the virus has not been designed by, you know, an evil psychologist, but sometimes it feels as if the virus has been designed by an evil psychologist when you see how it's taking advantage of the fallibilities in our cognitive architecture.
[310] And the very fact that the harm can be spread invisibly makes it ambiguous enough that politicians can then play on that to manipulate us, to say that it is really not a serious problem.
[312] When we come back, Paul explains why our brains have an easier time mourning one death rather than tens of thousands of deaths.
[313] You're listening to Hidden Brain.
[314] I'm Shankar Vedantam.
[315] This is Hidden Brain.
[316] I'm Shankar Vedantam.
[317] We've seen how many subtle biases shape what makes us afraid.
[318] We are more likely to fear things that do not involve activities of our own choosing.
[319] We are less likely to notice dangers when they grow exponentially or add up cumulatively.
[320] Paul, you've done a lot of work exploring how the same phenomenon might play out in the context of compassion, our ability to care about others.
[321] Can you talk about the phenomenon of psychic numbing, please?
[322] Yes.
[323] I started to look at that when I became concerned about the failure of the world to respond to the genocide that was happening in Rwanda, where 800,000 people were murdered in about 100 days, and the world knew it was happening and turned a blind eye to it, you know, refused to intervene in any way.
[324] And I started to study that.
[325] And one way to study that was to look at, you know, why we help some people who are in danger and not others.
[326] And we found that this was very much related to how many people there were at risk.
[327] And again, we had been sensitized to the notion that our sense of risk was driven by our feelings.
[328] So we looked at how feelings work in this context as well, and we realized that one life, we believe, is immensely important and valuable to protect; we make an emotional connection to an individual in need and will then do a lot to protect that person or to rescue them, but it doesn't scale up.
[329] Why, if we value individual lives so greatly, do we do so little to protect thousands or millions of lives at risk?
[330] That was the puzzle, and we started doing experiments to help us understand that.
[331] So some of the experiments that you've done have looked at what happens in people's minds as you expand the number of victims of a tragedy.
[332] And what's striking is that we're not talking about very large numbers here; even at fairly small numbers, our ability to empathize almost seems to shrink as the numbers grow.
[334] Talk about some of those experiments, Paul.
[335] Yes.
[336] We asked people to donate money on behalf of children who were facing starvation, and we found that as the number of children at risk increased, the propensity of an individual to donate money to help them did not grow. Rather, it was strongest for a small number of individuals, and then it either flattened out or in some cases even declined. And we had a phrase for that.
[337] We said, you know, the more who die, the less we care.
[338] We don't respond proportionally as the needs get greater, and there are several reasons that we discovered for that.
[339] One is this notion of psychic numbing that when things become large and become statistics, we don't get the same emotional connection to the people at risk that we do when there's only a few of them.
[340] And so as the numbers increase, we say that statistics are human beings with the tears dried off.
[341] You don't get the same emotional reaction to the numbers that you do to the individual or small numbers of people.
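One simple way to express psychic numbing is a felt-significance curve that grows far more slowly than the body count. The logarithmic form below is an illustrative assumption in the spirit of psychophysics, not a model fitted to Paul's data.

```python
import math

# Illustrative "psychic numbing" curve: felt significance grows with the log
# of the number of victims, while normative concern would grow in proportion.
def felt_significance(n_victims: int) -> float:
    return math.log(1 + n_victims)

for n in (1, 10, 100, 10_000, 1_000_000):
    print(f"{n:>9} victims -> felt significance {felt_significance(n):6.2f}")
# A millionfold increase in victims raises the felt response only ~20x;
# proportional concern would rise a millionfold.
```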
[342] And of course, all of us have experienced this in the course of the COVID -19 pandemic.
[343] Even those of us who want to exercise empathy and compassion, it just simply isn't possible to muster the same sense of tragedy when the death toll goes from, you know, 221,355 to 221,356. That extra death doesn't count in our minds, because our minds simply are not calibrated to deal with that level of tragedy.
[345] You could have added not one life, but 10,000 or 20,000 lives there.
[346] And again, you feel the same.
[347] It's just a big number.
[348] If we don't see the ill people around us or know them or if we don't feel personally vulnerable, these statistics don't move us as they should.
[349] So many years ago, the philosopher Peter Singer came up with a thought experiment that we've talked about before in Hidden Brain.
[350] And very simply, the thought experiment is imagine you're walking by the side of a pond and you see a child drowning in the pond.
[351] And you can jump in and save the child at no risk to your own life, but you have to act quickly.
[352] And if you act quickly and jump in the pond, you're going to ruin a very fine pair of shoes you're wearing.
[353] And to that question, you know, most people say, of course I would jump in the pond to save the child's life;
[354] a child's life is worth more than my pair of shoes.
[355] The question that Peter Singer asks is, well, in that case, why would you not donate $200 of your money to save the life of a child halfway around the world?
[356] Because it's the same trade-off in that case, you know, a child's life versus $200.
[357] You came up with a heartbreaking twist on that thought experiment that in some ways is very revealing about the conversation we're having about how empathy and compassion work when numbers start to get larger than one.
[358] Tell me about the refinement to the thought experiment, Paul.
[359] We ask people to imagine that they're walking by this pond and they see a child in the water drowning and they're about to jump in and risk their own lives.
[360] And then they see that off in the distance, there's a second child also in danger of drowning.
[361] The question is, well, would you not go into the water and rescue the child that is nearby because there's another child that you can't rescue?
[362] And you would say, of course not.
[363] I mean, now I can rescue this child.
[364] Let's do it.
[365] But what we find in experiments is actually that people do get demotivated from helping people they can help by the bad feelings they get when they realize that there are others that they cannot help.
[366] So what's going on here is that we help others not only because they need our help, but also we feel good about doing that.
[367] We're doing the right thing and we can do it.
[368] And when you're made aware of others whom you cannot help, this creates negative feelings that come in and mix with the good feeling, and dampen the good feeling you have about what you can do, so then you no longer do it.
[369] So obviously, in a real situation, when you're right there with a child, you're not going to be demotivated.
[370] But if this is a more subtle kind of thing, where you're asked to donate to a charity on behalf of starving children like this one. And then, by the way, you're told that this is a big problem with starvation in this region, that there are thousands or millions of children starving.
[371] You should be even more motivated to help this child.
[372] We found in an experiment that donations dropped in half when people were made aware of the fact that this child they could help was one of many.
[373] This is crazy.
[374] You know, you should not be demotivated from doing what you can do by the fact that you can't do it all.
[375] Do what you can do.
[376] You know, you can look at what's happened with the COVID pandemic, almost as a dress rehearsal for even more serious challenges that we might face collectively in the years to come.
[377] I'm wondering if you can connect our discussion about risk to the challenge of dealing with a problem like climate change.
[378] How do the workings of our minds predict what we have done and what we might be failing to do?
[379] Yes, that's a very interesting question.
[380] So first, COVID spreads exponentially, and the same thing may happen to us with climate change.
[381] That is, sure, it's on a different scale, but the processes that are contributing to climate change in terms of the build -up of certain types of pollutants and the changes in temperature are growing exponentially.
[382] And the hallmark of an exponential growth process is that the really severe, unacceptable, unlivable consequences will be here more quickly than we think, unless we pay attention to the scientists who are showing us with the data that this is happening.
[384] You know, just like scientists were showing us with the data that COVID was growing exponentially, but we didn't take that seriously.
[385] We have to do the same thing with regard to climate change.
[386] You know, we talked a little while earlier about the idea of externalities where, you know, my actions that I'm undertaking with autonomy might affect you.
[387] And it's clear that there are things I would not do if I thought that they would harm you.
[389] I wouldn't come up and punch you in the face, but I might say, you know, what's the harm in my going to a bar?
[390] How is that possibly going to affect Paul?
[391] Because I can't see the chain that causes you harm.
[392] And I'm wondering if in some ways climate change sort of puts that on steroids, because here the externalities are not just other people.
[393] They're not just people living in other countries; they're also people who are not yet born, who will be inhabiting the earth, you know, 50, 100, 200 years from now.
[394] If our minds are not well calibrated to think about the well -being of other people who are living next to us or in the next city or the next country, you know, surely it must be even harder for our minds to contemplate the well -being of people who haven't yet arrived on the planet.
[395] Yes, I think you're right, because we will devalue the lives of people who are currently living.
[396] But when they don't even exist yet, then it's even easier to devalue them.
[397] And it's not necessarily that we are deliberately saying that their lives don't matter, because if you ask people who are doing things that are harmful to the climate, are these lives of people in future generations, are they important?
[398] They would say, yes, of course they are important.
[399] And what we found is that there's often a disconnect between our values and our actions.
[400] And that comes from the fact that when we have to act, we've got a conflict between, you know, protecting unborn future generations versus getting the near-term conveniences and comforts that we get from doing the wrong thing to the environment.
[401] And so we act in ways that contradict our values.
[402] And we have to be aware of that.
[403] And that implies that we have to do more than just educate people about the importance of protecting future generations.
[404] We also have to have enforcement of behaviors and, you know, through regulations, we have to enforce safe practices.
[405] We also have to provide motivation and economic incentives for doing the right thing, you know, and creating jobs in industries that protect the environment.
[406] So we have to recognize the fact that we need these external incentives, often led by government and industry, the carrots and the sticks, to produce and maintain climate-friendly behavior; just creating a moral obligation by itself is not going to do it.
[407] Yeah.
[408] I mean, it's interesting.
[409] You've been studying these issues for, you know, some four decades now, Paul.
[410] And there must be a part of you that's a little disheartened when you see how little these insights have actually been used in the face of a global pandemic or climate change.
[411] What gives you hope as you look out at the landscape in terms of having these ideas actually applied to sort of turn things for the better?
[412] It's a challenge and I do get energized by facing challenges.
[413] Also, I think the information environment is different now, so that hopefully the awareness of the findings that we're coming up with, and their implications, can be spread far and wide very quickly.
[414] And I find it an exciting challenge to try to synthesize and communicate the knowledge of the judgment and decision -making community to address these problems that are global in scope and potentially catastrophic.
[415] Paul Slovic is a psychologist at the University of Oregon.
[416] To learn more about his work, go to arithmeticofcompassion.org.
[417] Paul, thank you for joining me today on Hidden Brain.
[418] You're very welcome, Shankar.
[419] It's my pleasure.
[420] Hidden Brain is produced by Hidden Brain Media.
[421] Midroll Media is our exclusive advertising sales partner.
[422] Our production team includes Bridget McCarthy, Kristen Wong, Laura Kwerel, Ryan Katz, Autumn Barnes, and Andrew Chadwick.
[423] Tara Boyle is our executive producer.
[424] I'm Hidden Brain's executive editor.
[425] Our unsung hero this week is Michael Costagliola.
[426] Michael is a composer whose music and sound design work has been featured in theater productions across the country.
[427] Since the pandemic has upended the theater world, Michael has expanded into making music for podcasts.
[428] Composing for podcasts is difficult because the music has to be understated, to allow the story and ideas to shine.
[429] Michael intuitively understands how to do this, and his work has the added benefit of being beautiful and distinctive.
[430] You heard some of it in today's episode.
[431] Thank you, Michael.
[432] For more Hidden Brain, you can follow us on Facebook and Twitter.
[433] If you like this episode, please be sure to share it with a friend.
[434] I'm Shankar Vedantam.
[435] See you next week.