Freakonomics Radio
[0] Hey there, podcast listeners.
[1] This is the third of three recent episodes we recorded in front of live audiences.
[2] The first two were in Los Angeles and San Francisco.
[3] This one's in Philadelphia.
[4] And this one, as you'll hear in a moment, is a bit different.
[5] The L.A. and San Francisco shows had a mishmash of guests from those cities.
[6] This one is devoted to a particular project that we've been following for a couple of years now.
[7] If you want to hear our earlier shows on this topic, check out episode number 282.
[8] It was called Could Solving This One Problem Solve All the Others?
[9] And episode number 306, which was called How to Launch a Behavior Change Revolution.
[10] And once the revolution was launched, we wanted to know how it's been going.
[11] So that's today's show.
[12] Hope you enjoy.
[13] Ladies and gentlemen, please welcome the host of Freakonomics Radio, Stephen Dubner.
[14] Thank you so much.
[15] This is a very special episode of Freakonomics Radio.
[16] It's about one of my favorite topics.
[17] And based on the feedback we've gotten, it's one of your favorites, too.
[18] It's about behavior change.
[19] So a couple years ago, we first interviewed two researchers from the University of Pennsylvania, Angela Duckworth and Katie Milkman.
[20] They had launched an audacious new project called Behavior Change for Good, gathering together a dream team of behavioral scientists from all over the world.
[21] It's their attempt to advance the science of behavior change and help more people make good decisions about personal finance, health, and education.
[22] Tonight, we are recording live at the Merriam Theater in Philadelphia, just down the street from the University of Pennsylvania.
[23] We'll be hearing brief presentations from four behavioral science researchers about their latest work.
[24] Later on, we'll hear from a Nobel laureate who helped create this field.
[25] But let's start at the beginning by getting caught up on the Behavior Change for Good project with its founders.
[26] Would you please join me in welcoming Angela Duckworth and Katie Milkman.
[27] Angela, Katie, so nice to have you here.
[28] Hi.
[29] Hi.
[30] So it's been a few years now since you started this project.
[31] At the time, Katie, here's what you told us.
[32] We both thought the biggest problem in the world that needed solving was figuring out how to make behavior change stick.
[33] So my first question is, have you solved that problem yet?
[34] Well, we learned a ton in the last three years, but we have so not solved this problem.
[35] Today we had a really fabulous gathering where we shared the results of some of our first ambitious studies to try to make a major dent in this.
[36] And I would say the hashtag from the day was, science is hard.
[37] We ran a massive randomized controlled trial, so a big old experiment: 63,000 members of 24 Hour Fitness gyms, which is one of the biggest gym chains in the U.S., signed up to be part of a really cool behavior change program that we offered them for free.
[38] and it was designed by a team of brilliant scientists who we'd brought together.
[39] Now, just to be clear, you are recruiting people who've already gone to the trouble and the commitment of joining a gym, yes?
[40] Exactly.
[41] So you're a member of 24 Hour Fitness, and you hear that all these cool scientists built a program that I can sign up for, for free.
[42] It'll help me exercise more.
[43] And what exactly are you trying to get them to do?
[44] We tell them it's a 28-day program and the goal is to get you to build a lasting exercise habit, you know, ideally forever, because, right, that was our goal.
[45] Let's make all these habits stick.
[46] So the idea is you get people to sign up.
[47] You give them encouragement and incentives.
[48] Were there some cash rewards?
[49] Yes, there was cash promised and delivered.
[50] So we were paying, order of magnitude, like a quarter for every gym visit.
[51] Better than nothing, but not a lot.
[52] Not really, but okay.
[53] And we also said, you know, we'll give you different kinds of messaging and reinforcement.
[54] Okay, so how amazingly, beautifully, perfectly well did it work?
[55] So you want the good news or the bad news first?
[57] Let's start with an overview.
[58] Would you call it a failure or an abysmal failure?
[59] I'm going with failure rather than abysmal failure.
[60] We learned a lot.
[61] The good news: 52 out of the 53 things that we tested were things we thought would improve gym attendance.
[62] One of our 53 experimental programs was supposed to be like nothing.
[63] People signed up and we're like, thanks for signing up, good luck with your life.
[64] That was sort of our comparison set.
[65] The other 52, everybody in those 52 conditions went to the gym more.
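To make the design concrete, here is a minimal sketch of the randomization just described: 63,000 members spread across 53 program variants, one of which is the bare-bones comparison condition. The uniform assignment, the seed, and every name in the sketch are illustrative assumptions, not the study's actual code.

```python
# A minimal sketch of the trial structure described above: 63,000 gym
# members randomized across 53 program variants, one of which is the
# "thanks for signing up, good luck" comparison condition. The uniform
# assignment and all names here are assumptions for illustration only.
import random
from collections import Counter

N_MEMBERS = 63_000
CONDITIONS = ["comparison"] + [f"variant_{i:02d}" for i in range(1, 53)]

rng = random.Random(2019)  # fixed seed so the sketch is reproducible
assignment = {member: rng.choice(CONDITIONS) for member in range(N_MEMBERS)}

# The analysis then compares mean gym visits in each of the 52 treatment
# arms against the comparison arm, during and after the 28-day program.
counts = Counter(assignment.values())
print(counts["comparison"], len(counts))  # comparison-arm size, number of arms
```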
[66] That sounds nothing like a failure to me. Right.
[67] Okay, here comes the failure.
[68] So we were actually trying to test new scientific insights.
[69] And all of the programs that we built were built on top of a baseline thing that we thought would work, which was reminding people to go to the gym, paying them a little bit to go to the gym, and having them make a plan for the dates and times when they wanted to go.
[70] And then the reminders come at those times.
[71] We were hoping to improve upon the performance of that, and nothing did.
[72] So basically what we found is that a set of ingredients we were already quite confident would work.
[73] They did.
[74] And then when we layered new stuff on that we thought, this is a sexy new idea, it's going to beat the best practice, we got nowhere.
[75] Okay, I seem to recall that part of this project was asking all your fellow researchers, when they design experiments, to make a prediction of how well their experiment would work.
[77] And these are some of the best and brightest minds in behavioral sciences, so presumably their predictions are not terrible.
[78] Were they terrible?
[79] So what we learned was that our scientists are quite optimistic about behavior change, and on average they thought, oh, 40% likelihood that my experiment worked, whereas, you know, when the data come in, it's close to zero.
[80] It strikes me, knowing nothing about anything, that what you were trying to do as your first big project, getting people to go to the gym more on a lasting basis, is like the opposite of low-hanging fruit.
[81] Oh, that's interesting.
[82] Okay, I do think it's worth mentioning, again, we actually didn't fail at getting people to go to the gym.
[83] During the 28-day program, most of the different versions of the program did create behavior change.
[84] So, like, 50 to 75% created significant boosts in exercise for 28 days.
[85] It's just that we didn't do very well at creating lasting change.
[86] So after our 28-day program, pretty much we saw nothing in terms of behavior change.
[87] Across all 53 versions of the program, pretty much nothing stuck.
[88] And that was the ultimate goal.
[89] So that was major failure.
[90] So I know both of you fairly well by now, and I know that neither of you are short on enthusiasm.
[91] So I don't see you packing up and quitting and disbanding the behavior change for good project.
[92] What are your next steps?
[93] Okay, a couple things.
[94] First of all, we're doing more with this gym data.
[95] You know, we're going to swing a bat at it instead of the feather approach next time.
[96] We're also going to do medication adherence work.
[97] We think we can make a dent there, given some of the science that's preceded us.
[98] We're going to do some work on childhood obesity in the UK, which we're really excited about.
[99] Here's something that you, Angela, said when this project was starting, quote, the one problem that really confronts humanity in the 21st century is humanity itself, being that, you know, we do a lot of things that are not so good for us: nutrition, smoking, not saving enough for retirement, etc. After that episode, one listener wrote in to say this, and I quote, this was the most depressing episode ever.
[100] People are a mess.
[101] That's what makes humanity beautiful.
[102] Taking away our spontaneity, our whimsy, our impulses, and replacing them with only logical thinking is truly a dastardly idea.
[103] Some of the greatest things mankind has ever done weren't for our overall well -being.
[104] They were done just because they were fun.
[105] You are killing the fun.
[106] I don't have a question.
[107] I just wanted to read that into the record.
[108] No, I do have a question.
[109] So for the sake of argument, what makes you think that behavioral scientists like yourselves should be nudging or even shoving people to change their behavior when they might like their behavior just fine?
[110] If you think of the most self-controlled person you know, you might think, wow, they have no fun.
[111] They never go out to, say, Freakonomics Radio Live.
[112] They only drink water.
[113] They, you know, work all day and they have no play, and that's no way to live.
[114] But in fact, there is research on the extremes of self-control, and there is no data that show that really, really, really, really self-controlled people are any less happy.
[115] Self-control is the ability to align your behavior with what you want.
[116] If what you want is a life of spontaneity and ice cream cones, then that's the behavior that you have to align to, right?
[117] That's the goal.
[118] But I think the kinds of problems that Behavior Change for Good is working on: exercise; for teenagers, studying; for those people who have had a heart attack, taking your medications.
[119] You know, these are things that most people actually value as goals, and they simply get interfered with by other things that we could do: not taking our medication, hanging out on Snapchat all day, not going to the gym and binge-watching Game of Thrones instead at home on the couch.
[120] These are all temptations that are just more pleasing in the moment, but we later regret.
[121] So you can write back to this cranky listener.
[122] I think they're misunderstanding what it really means to actually have a lot of self-control.
[123] Well, I will say this, despite your struggles so far.
[124] I know that you two are super gritty people and that you're going to keep at it.
[125] And I really look forward to hearing the results down the road.
[126] So can we say thank you so much to Angela Duckworth and Katie Milkman?
[127] Now it's time to hear from four members of the dream team of behavioral scientists that Katie and Angela have assembled.
[128] They're all doing work that somehow relates to decision-making or cognition or human fallibility.
[129] First up is a Ph.D. psychologist who teaches at the Harvard Business School.
[131] Would you please welcome Mike Norton.
[132] So, Mike, I understand you've been doing research on how people split the check when they go out for dinner and what that may say about our behaviors.
[133] Can you tell us about that, please?
[134] Can I ask you?
[135] So you're out with a bunch of friends: drinks, appetizers, salads, meals, dessert, check comes.
[136] What do you do?
[137] Do you say, let's just split it and all put in our credit cards?
[138] or are you the guy who takes the check and calculates everything and says, well, I only had six croutons, so let me, I'm just going to pay this much.
[139] You are asking me what I do, personally.
[140] If you're comfortable admitting it.
[141] Sure, yeah.
[142] So I'm definitely not a counter, so I wouldn't do that.
[143] But I will say this.
[144] If I'm going to a dinner where I think it's a split dinner where we're all contributing, I will not skimp, let me put it that way.
[145] Because I figure, if I'm getting an eighth of it, I want my steak and I want my ice cream sundae.
[146] I'm actually getting it at a little bit of a discount because I figure some other people aren't.
[147] So I'm like getting 20% off the steak.
[148] So what does that make me?
[149] It feels like it's working for you, but I think if we ask your friends and family, they might feel differently.
[150] So we actually find that there's sort of two kinds of people.
[151] A lot of people either say, when the check comes, you know, maybe you had more, maybe I had more, let's just split it.
[152] And then there's another group of people.
[153] It's typically, like, 30% of people, actually, who, no matter what, I mean, it could be an $8 meal, and they'll still take the check and figure out who had what and make sure that they split it exactly.
[154] Okay, so I want to know about this research, how you do it and who the people are.
[155] So we can do really, really simple experiments where we can say, look at this person's Venmo account and see the payments they made.
[156] And Venmo is a payment app, we should say, correct?
[157] Payment app, and what it does, which is brilliant, is it automatically splits things for you.
[158] So it's great.
[159] And so it means if we go out for dinner and it's $20.02, it actually will make each of us pay $10.01.
[160] And we can just show you, for example, one person made a payment of $10.01 to some friend.
[161] And another person made a payment of $9.99 to another friend.
[162] And then in the other version, you see someone who paid $10 to one friend and $10 to another friend.
[163] If I did that right, they both added up to $20.
[164] So it's not a different amount of money.
[165] Everything's the same.
[166] Your friend paid you back.
[167] It's $20.
[168] And we said, who do you like?
[169] How do you feel about this person?
[170] Sorry, how do you, a disinterested observer?
[171] Yep, here's two people.
[172] You don't even know them.
[173] It's not even a friend of yours.
[174] It's just these two people.
[175] How do you feel about them?
[176] And people are okay.
[177] The $10, $10 person, they say, yeah, it seems like a nice guy.
[178] And the $10.01 and the $9.99 person, they say, I don't like them.
[180] Either one of them, both of them.
[181] Yeah, I don't like them.
[182] Is there more dislike for the one that does the $9.99 or no?
[183] Only slightly.
[184] So actually, one thing that we tried to compare it to is generosity.
[185] So who do you like better?
[186] Someone who pays you back $10?
[187] Or someone who pays you back $10.03?
[188] So technically, the $10.03 person is more generous.
[189] But they're also really weird about money and really petty.
[190] And in fact, that's how much we dislike this behavior: we like the person more who paid us less,
[191] as long as they weren't petty about it.
[192] Also, pennies are a pain in the neck.
[193] Let's be honest, right?
[194] I've never seen them.
[195] I don't know.
[196] Okay, so what have you identified in the wild?
[197] Is it pettiness?
[198] Is that what you're studying?
[199] Yeah, it does seem to be.
[200] So pettiness is attention to trivial details.
[201] That's kind of the way to think about it.
[202] So it can happen with time.
[203] It can happen with all sorts of currencies where there are these people in our lives who really seem very interested in the little tiny minutiae of life,
[204] and they tend to drive us crazy.
[205] And again, they're not wrong.
[206] They're doing the math correctly.
[207] There's no problem with it on one level.
[208] But for many of us, they really, really drive us crazy.
[209] Do you know anything about how pettiness works in, let's say, a romantic relationship?
[210] Like personally, you mean?
[211] Or from the research?
[212] I didn't mean to imply it, but I see that I did.
[213] So we asked people in relationships about their partner to rate them on all kinds of things, how generous are they, all sorts of things, and also how petty are they.
[214] by asking them, is your partner the kind of person who splits things randomly, or do they really care about dollars and cents?
[215] The answers to that question really, really predict not only dissatisfaction in your relationship, but we asked, how upset would you be if your relationship ended?
[216] And people who are with a petty partner are less upset when they think about their relationship ending.
[217] I see that you've written about what you call two different kinds of relationships, exchange relationships and communal relationships.
[218] Is that the idea?
[219] Exactly, yeah.
[220] So classic exchange relationship is with our bank.
[221] So we're not offended at all if our bank gets things down to the cent.
[222] In fact, we're really upset if they don't.
[223] Because the whole point of a bank is they're supposed to be really good at dollars and cents.
[224] If your bank said, you know, we'll just round it up.
[225] You'd say, what are you talking about?
[226] It's my money.
[227] So you're not supposed to do it over there.
[228] And in fact, that's why we get so upset in communal relationships, because our friends are treating us like a bank.
[229] They're treating us like we're a merchant and we owe them money.
[230] All right.
[231] So let's say I find this pettiness effect interesting, and I do, though perhaps not all that surprising.
[232] Beyond the handful of people involved in one of these dollars and cents transactions, what are the larger ramifications here?
[233] What technology does, actually, is it's more efficient, it's better, it's an improvement, but it actually is starting to default all of us into the dollars and cents world.
[234] And there's nothing, again, wrong with that, but it does mean that it can be eroding social capital.
[235] It's actually good if I take you out for lunch and treat you, because then later you might take me out for lunch and treat me, and now we have an ongoing relationship.
[236] I understand, Mike, that you've also done research on humble bragging.
[237] Is that true?
[238] Yes.
[239] I mean, you may not want to admit it, but, um, can you tell us in a nutshell what a humble brag is and when it's good and when it's bad?
[240] Katie and Angela tend to study things that are making the world a better place.
[241] And I tend to study things that I find annoying.
[242] And in that way, I'm changing the world as well.
[243] There's two kinds, actually.
[244] There's complaint bragging and then there's humble bragging.
[245] So complaint bragging, whenever someone online says, ugh, right after that, it's going to be a complaint brag.
[246] Just wait for it.
[247] It's always a complaint brag.
[248] So they say, ugh, wearing sweatpants, and everyone's still hitting on me. One of my favorite ones ever was, my hand is so sore from signing so many autographs.
[249] So humble bragging, usually people recycle from Wayne's World, for some reason, not worthy.
[250] And whenever you see that, that means that here comes a humble brag.
[251] Not worthy, and then say, so honored to be on stage with Katie Milkman and Angela Duckworth.
[252] So what I'm really just doing is saying, I'm on stage with, like, really important people, but I'm acting all humble about it.
[253] So the reason that people do these things we can show in the research is they're feeling insecure.
[254] So I want to brag, always, because I want everyone to think I'm awesome.
[255] But I have the theory that if I brag, people won't like me, because nobody likes a bragger.
[256] So we think what we can do is if we're humble about it, then people will say, oh, what a nice guy.
[257] And also they learn that he knows celebrities.
[258] And instead what people think is, what a jerk.
[259] So, in fact, we like braggarts, just straight-up braggarts, which is just saying, I met a famous person.
[260] We like them more than people who do this little strategy where they try to humble brag.
[261] Interesting.
[262] Mike Norton, thank you so much for joining us tonight.
[263] Would you please welcome our next guest?
[264] She is a professor of psychology and head of Silliman College at Yale.
[265] She recently designed and taught the most popular course in Yale's history, called Psychology and the Good Life.
[266] Would you please welcome Laurie Santos.
[267] Laurie Santos, I understand that you, rather than wasting time working with humans, as all these other people have been doing, have been doing behavioral research with, and this makes my heart pitter-patter so hard, dogs.
[268] That's right.
[269] They're just more fun than people.
[270] So I know you used to do, or maybe still do, some research with capuchin monkeys as well, which makes me curious why, as a psychologist, you find it so compelling to work with animals?
[272] Yeah, it's kind of a niche field, the whole, like, dog cognition, monkey cognition thing.
[273] But I'm actually very interested in human behavior, which is why I get interested in animals.
[274] Like, humans are so weird.
[275] Like, there's no other species that has a live radio show talking about their own species behavior, like using technology like this and human language, right?
[276] And on the one hand, that's sort of goofy.
[277] But on the other hand, it raises this deep question, which is like, what is it that makes this so special?
[278] And when you ask, like, 20 scientists, let's say from across a broad range of sciences, you'll get 20 answers of what makes humans unique, yes?
[279] What do you believe is the thing?
[280] Yeah, I mean, the top 10 are things like language, things like the fact that we can perspective take, the fact that we can think about the future and so on.
[281] We took a different take, though, which is all those answers tend to be stuff that makes us so smart.
[282] You know, we're special because we're so smart.
[283] I actually worry a deeper thing might be that we have to worry not about the smart stuff, but we have to worry about some of the dumb stuff, right?
[284] We might be uniquely dumb in certain ways, or uniquely biased in certain ways, and we have to understand that if we really want to understand how human cognition works.
[285] Is it possible that we are, quote, wrong so often as humans because we are so smart, because we think too much and think our way out of an obvious solution?
[286] Yeah, I think that's one possibility is that some of the smart capacities we have might not be giving us the best answers all of the time, right?
[287] Take our future thinking, right?
[288] We get to think about all these other hypotheses and all these counterfactuals and so on, and that gets us out of the present moment, right?
[289] That means we're thinking about different kinds of things than we would be if we were just a monkey that was just taking it all in in the moment.
[290] And so I think it's sometimes our smarter capacities that end up making us look incredibly dumb.
[291] Okay, so I want you to start by telling us how you do the dog experiments.
[292] Yeah, so we started with dogs in part because we built them to be like us, right?
[293] We, over this process of domestication, took a wolf, this kind of wild canid, and said, let me make a creature that can hang out with me and that has cognitive abilities that can get along in human culture.
[294] And that means that we have a creature that's ready to soak up our culture in lots of different ways.
[295] So if there's anybody that's going to be like us, any species that is likely to show our biases, dogs might really be one of those.
[296] And so that's why we focus on them.
[297] Are they test dogs?
[298] Are they regular dogs that you recruit?
[299] Just like human subjects, we recruit them in the same way.
[300] So we put posters up and we say, do you want to bring your dog in for a study?
[301] What are you trying to get the dogs to think or do?
[302] And how does that compare to humans?
[303] In one study, we focused on a particular phenomenon that researchers call over-imitation, which, as you might guess, is imitating too much.
[304] And so here's the phenomenon in humans.
[305] Imagine I show you some crazy puzzle box.
[306] You don't know how it works.
[307] And I say, I'm going to explain to you how it works.
[308] I'm going to tap this thing on the top, and we do all these steps, and I open the puzzle box.
[309] And then I give it to you.
[310] If it was some hard to figure out puzzle box, you might just copy me. But imagine I give you a really easy puzzle box.
[311] It was just a completely transparent box, nothing on it.
[312] It just had a door that you could open to get food out.
[313] But you watch me do all these crazy steps.
[314] I top on the side, I spin it around a few times, I do all these things.
[315] You might hope that humans are smart enough to say, that was a really dumb way to open the box.
[316] Give it to me, I'm going to open the door.
[317] But it turns out that's not what humans do.
[318] Humans will follow slavishly all these dumb steps that they see someone else do just in case.
[319] And we thought the same dumb copying behaviors that we see humans do, we should probably see in dogs as well.
[320] And so here's how we set it up.
[321] We made a kind of dog-friendly puzzle box, easy enough for the dogs to understand, so it was a transparent box with a lid that was really obvious.
[322] And if you flip the lid up, you could get inside and get a piece of food.
[323] But we added this extraneous lever on the side of the box that we showed dogs, hey, here's how you open it.
[324] You have to move the lever back and forth.
[325] It takes a really long time.
[326] Lever, lever, lever, lever.
[327] And at that point, you can open the box.
[328] Now, in theory, if we did this with a human, they would say, I don't really understand.
[329] Lever, lever, lever, lever, lever, lever, lever, lever, lever, lever.
[330] Open the box.
[331] That's actually what human four-year-olds do; there are some wonderful videos online where you can see this.
[332] And what do the dogs do?
[333] They ran over, lifted the lid, and got the food.
[334] And so what this is telling us is that we've created this species that learns from us a ton.
[335] They follow our cues all the time.
[336] But they're actually smarter at learning from us than we are at learning from ourselves.
[337] You mentioned a four-year-old human.
[338] Are you comparing the dogs to children or to adult humans?
[339] Yeah, so the study we did was in direct comparison with a study that Frank Keil and Derek Lyons did at Yale University.
[340] They do this with four-year-old kids.
[341] And what they find is that four-year-old kids will slavishly imitate what they see, even when you make the box so simple that a four-year-old could figure it out.
[342] So you're not saying that dogs are, quote, smarter than humans.
[343] You're saying dogs are, quote, smarter than four-year-old humans.
[344] The cutest version of the study is a four-year-old study, but you can make the box slightly more complicated and find that adult humans over-imitate just as much.
[345] And if you don't believe me, have one of your pieces of technology on your TV go out and have someone come in and be like, well, you've got to move this wire to the HDMI thing and whatever, and you will have no causal understanding of it, but my guess is you will copy exactly what that person does.
[347] How would you then characterize dogs are more blank than humans in this regard?
[348] Is it more rational?
[349] Is it less susceptible to bad advice?
[350] I think it's that dogs are more careful about the social behavior they pay attention to.
[351] We just automatically soak up what other individuals are doing, often without realizing it.
[352] And dogs can learn from us if they need to, but they don't have to follow us.
[353] In some ways, they're more rational in terms of the social information that they pay attention to.
[354] Okay, so let's flip it a bit.
[356] Rather than critique ourselves, which may be a singularly human trait as well, for all I know, let's see, what can we take from your research insight and apply it to this general notion of making behavior change happen?
[357] Yeah, I think what we get from this is that we have to be really careful in domains where we're watching the behavior of other people.
[358] And this is something that we've known in behavior change for a long time.
[359] Behavior change researchers have a phenomenon known as social proof, right?
[360] When you see other people doing it, you kind of think it's a good idea. I think most of the time, when we think of social proof, we think of good things, but there are all these domains in which it seems to go awry.
[361] You know, classic work in the field of social psychology by Bob Cialdini found that if you hear that a bunch of other people are doing a dastardly thing, without realizing it you become more likely to do that dastardly thing, too.
[362] I think what we're realizing is that that's not necessarily that old a strategy.
[363] This might be something that's human-unique.
[364] And that begs us to ask the question, okay, why is our species using that kind of strategy?
[365] Maybe it's good for something in some contexts.
[366] Laurie Santos, thank you so much for being on the show.
[367] Great job.
[368] It is time now for a break.
[369] If you'd like to attend a future taping of Freakonomics Radio Live, visit Freakonomics.com slash live.
[370] We've got upcoming shows in London and Chicago, and we will be right back.
[371] Welcome back to Freakonomics Radio, recording live tonight in Philadelphia, where we're learning about the science of behavior change.
[372] Would you please welcome our next guest?
[373] He's an economist at the University of Wisconsin School of Business.
[374] His research specialties include risk and decision-making and insurance markets.
[376] Tell me that doesn't get you all giddy with excitement.
[377] Would you please welcome Justin Sydnor?
[378] Okay, Justin, I understand that you've done some interesting research on employers' health care plan options.
[379] Yes? Is that about right?
[380] Yes.
[381] We'll see whether the audience agrees it's interesting.
[382] So the backdrop here is that many of us now have choices to make about health insurance plans.
[383] And we're all used to these horrible terms like deductibles and co-pays and co-insurance.
[384] So I did a research project with a couple of co-authors, Saurabh Bhargava and George Loewenstein from Carnegie Mellon, and we got access to a company, a really big company, who decided to do something interesting.
[385] They embraced this idea that people should have control over their insurance.
[386] You should be able to decide, do you want a lot of insurance or not a lot of insurance?
[387] And there's going to be different premiums tied to that.
[388] So they gave people an opportunity to select one of four deductible levels.
[389] And what was the stated intention?
[390] Was the company saying to its employees, we want to give you more options because you're paying for the insurance whether you know it or not, right?
[391] It's coming out of payroll essentially, right?
[392] Or was it the company essentially trying to profit maximize?
[393] So I think in this case, they had a genuine belief that different employees would care more or less about how much insurance they had.
[394] And they were paying part of it through a premium share.
[395] And so they thought, why should we dictate?
[396] whether you have a high deductible plan with lower premium or a low deductible plan with higher premium.
[397] So they can choose between different deductible levels, different co-pays, co-insurance, and maximum out-of-pocket limits.
[398] So they can pull all these levers and they end up with 48 different possible combinations they could choose from.
[399] Okay.
[400] And then you have the real data so you can see what people really choose.
[401] And then can you see what they actually spend in the coming year?
[402] Yes.
[403] And we can calculate how much would they have spent with a different plan.
[404] Now, to be fair, it's a little bit of a gamble, right?
[405] When you buy insurance, you don't know how much of this you're going to need to consume, right?
[406] How do you factor that in?
[407] Well, this is the truly fascinating thing about this case.
[408] You're right.
[409] Most of the time, I couldn't tell you whether you made a good choice or not in your health insurance, because, you know, it's going to depend.
[410] You might get lucky, and it turns out you didn't really need much insurance.
[411] But if you bought insurance, I wouldn't have said that was a mistake.
[412] But in this case, it actually turned out that most of the plans were a deal that no economist should take.
[413] So most of the plans were such that you were going to pay more for sure for the year if you chose that plan.
[414] Doesn't matter if you turn out to be healthy or unhealthy.
[415] You're going to pay more.
[416] How so?
[417] It's just a higher premium and down the road the payments are worse?
[418] So it's really the higher premium part that matters.
[419] So what happens is, for the plans that had a lower deductible, say I wanted $500 instead of $1,000, to get that plan I had to pay more than $600 extra in premium for the year.
[420] So best case scenario, I might save $500.
[421] I get more insurance, but for sure, I already paid over $600 for them.
[422] And you're talking equivalent benefits in those two cases.
[423] Yep.
[424] You can go to the same doctors, everything's covered, all the prices are the same.
[425] So it's really an interesting laboratory where we can label something that, from at least our classic models, looks like a financial mistake.
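To see why the arithmetic makes these plans dominated, here is a small worked sketch using the two figures quoted above, a $500 versus $1,000 deductible and more than $600 in extra premium; the stripped-down cost model is an assumption for illustration, not the study's calculation.

```python
# A worked sketch of the dominance argument, assuming a stripped-down cost
# model: both plans cover everything above the deductible identically, so the
# only differences are the deductible ($500 vs. $1,000) and the extra premium
# (over $600 per year) quoted above. Everything else is illustrative.

def annual_cost(extra_premium: int, deductible: int, spending: int) -> int:
    """Extra premium plus what you pay out of pocket toward the deductible."""
    return extra_premium + min(spending, deductible)

for spending in [0, 250, 500, 1_000, 5_000]:
    low = annual_cost(600, 500, spending)   # low-deductible plan
    high = annual_cost(0, 1_000, spending)  # high-deductible plan
    print(f"spend ${spending:>5}: low-deductible ${low}, high-deductible ${high}")

# At every spending level the low-deductible plan costs at least $100 more:
# the most the lower deductible can ever save you is $500, but it costs more
# than $600 up front. No health outcome makes it the better buy.
```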
[427] Now, most people don't like insurance for a number of reasons, including the fact that it's a little confusing and intimidating.
[428] How much of this mistake, as you seem to be labeling it, is just a function of the fact that it's hard to figure out?
[429] So one possibility is that this is just choice overload.
[430] And if we gave them fewer options, they'd be able to select more rationally from that.
[431] Another possibility is that insurance is just really hard.
[432] And even if you're looking at just a couple of options, it's going to be very, very hard to tell the difference between them.
[434] And the third option is maybe there's something going on where people just really genuinely are willing to pay more to avoid having these shocks of high deductibles, even if they knew for sure.
[435] What about affordability?
[436] Because especially for a low-income employee, a smaller amount up front is attractive.
[437] Cash flow is an issue.
[438] Yep.
[439] So in many ways, they were sort of making the reverse choice.
[440] So what was happening is that they were opting into paying higher premiums for sure, every month.
[442] Now, they were potentially protecting themselves a little bit at the very beginning of the year, but over the course of the year, they were going to end up paying more money.
[443] And so the first thing we did is we wanted to figure out, okay, is this the choice overload?
[444] Is it the weird thing of 48 plans?
[445] And we ran some online choice experiments where we tried to replicate this sort of thing.
[446] And what we found very quickly there is you get exactly the same patterns if you just give people four plans or two plans.
[447] So it's really not about choice overload.
[448] It's fundamentally that when people look at insurance, they can't combine the premium and these out -of -pocket costs and make what looks like the rational math calculation.
[449] Do you think that long ago, some insurance company made the very sneaky, wise choice of calling the payment a premium, which sounds like a great thing?
[450] My general sense from studying insurance is that in the history of the insurance market, few people have made really wise choices, as evidenced by the fact that when you say you hate insurance, everyone in the room nods along.
[451] Okay, so here's what I've learned from you.
[452] We're bad at buying insurance.
[453] We're bad at buying insurance in part because the way it's described makes it easy for us to be bad at it.
[454] And so maybe some of the fault lies there.
[455] So the big question is, again, let's flip it.
[456] What's the good news here?
[457] How can you take this research insight and apply it to the notion of helping more people make better choices, whether it's on an individual level or societally?
[458] So the good news is there are ways of making it way easier, right?
[459] I can add it up.
[460] I can show people.
[461] And we've run some little experiments, and it looks like if you make it easier to compare the plans, you can really easily inform and improve these options.
[462] But I think maybe the bigger implication is that we should just stop giving people choices about this.
[463] And I think the reason we should stop giving people choices about this, is that the only really good reason to give people choices is that we think that they might want to sort into plans that are good for them and have some bearing on their risk aversion.
[464] But we're really like four-year-olds with a box that's really hard to open, and we should just bring in the dogs and let them choose our insurance.
[465] Exactly.
[466] Let the dog choose your insurance.
[467] Justin Sydnor, thank you so much.
[468] Great to have you here tonight.
[469] Let's welcome our next guest.
[470] He is one of the most revered and prolific scholars in modern psychology.
[472] He helped identify all sorts of cognitive biases and illusions.
[473] He's also the author of one of my favorite books ever in the world called How We Know What Isn't So.
[474] Would you please welcome, from Cornell University, Tom Gilovich?
[475] Tom Gilovich, I understand some of your latest research is on regret, which I'd love to hear more about, and really how it fits into your body of work.
[476] Sure.
[477] There's two types of mistakes we can make in life, mistakes of action and mistakes of inaction, and therefore two types of regrets.
[478] And the question is, what do people regret more, mistakes of action or mistakes of inaction?
[479] An example that I think everyone in the audience can relate to.
[480] If you go back to your days as a student, you're taking a multiple choice test, question number 20, you check B, you're going on, and then at question 24, you say, wait a minute, back up, go to question 20.
[481] I don't think it's B. I think it's C. Now you have a dilemma.
[482] Do you switch to C?
[483] You could make a mistake in doing that, or you could stay with B. You could make a mistake doing that.
[484] Which mistake hurts more?
[485] And I think we all recognize that if you switched from the right answer to the wrong answer, you're going to regret that more.
[486] So all the topics that we've been hearing about tonight, whether it's going to the gym, buying insurance, the way we behave with other people in a social setting, and so on, you can imagine scenarios by the billion where you make a choice and regret it.
[487] By looking at regret as you have, have you started to learn anything yet about how to just think about optimizing our decisions right here and now?
[488] Sure.
[489] The other side of the regret story, you regret action more than inaction sometimes.
[490] But if you ask people, what are your biggest regrets in life?
[491] They tend to report regrets of inaction.
[492] How do you reconcile those two?
[493] And the reconciliation is that you feel more immediate pain over the regret of action, but partly because it's so painful, you do things about it.
[494] You think of it differently, and you've taken an action, and one of the ways that you can come to grips with it is to say, well, it was a mistake, but I learned so much.
[495] It's hard to learn so much by not doing something new.
[496] And so over time, these painful regrets of action give way to more painful regrets of inaction.
[497] And what are the kinds of inactions that people have?
[498] And a great deal of them, when we interviewed people, and we've interviewed college students, prisoners in a state prison, a sample of geniuses, in group after group after group, a very frequent regret is one of not doing something because of a fear of social consequences.
[499] What will people think?
[500] And that calls to mind some of your earlier research about the spotlight effect, right?
[501] Which is we tend to think that people really care about us much more than they do, yeah?
[502] Yes.
[503] In fact, that research we did right on the heels of the research on regret.
[504] As David Foster Wallace put it, you won't mind so much how people judge you when you recognize how little they do.
[505] And people often don't do things that are in their interest because they're afraid it would be embarrassing.
[506] I don't want to go to the gym because I'd get on a treadmill next to someone who's going a mile a minute and I can't keep up with that, or I can't lift those weights.
[508] But let me ask you this.
[509] When you mentioned interviewing prisoners, what my mind jumps to is the obvious regret.
[510] I regret doing the thing that turned me into a prisoner.
[511] Yeah, they have slightly fewer regrets of inaction than the general population, but still the majority of theirs are regrets of inaction.
[512] Now, it's not that they don't regret the things that got them into prison, but the way they talk about them often focuses on an inaction.
[513] If only I'd done this, I wouldn't have gone down that path.
[514] If only I'd convinced the lookout person to be on his toes, et cetera.
[515] So even they tend to focus on things that they didn't do.
[516] So, Tom, I'm just curious around the subject.
[517] I really admire your work.
[518] I've admired it for years.
[519] I want to know your biggest regret.
[520] Okay, it's easy.
[521] It's a regret of inaction.
[522] I didn't think of this until five years after I got married, and I recognized at that time, I have a solution to the naming problem.
[523] What name do you take?
[524] We live in a world where it's sexist.
[525] The woman takes the man's name.
[526] Other cultures, they combine them, but that only works for one generation.
[527] You can't have a multiplying name.
[528] So what to do?
[529] And so my regret is I didn't think of it on the eve of my wedding. What I would have liked to have done, not told anyone about it: the ceremony goes, and then at the very end, I say, wait a minute, there's one more thing.
[530] We're going to flip a coin to decide what the last name is.
[531] A, because it's fair, but what I like even more about it is that we don't have anything, any cultural institutions that celebrate chance, and chance is a huge part of our life.
[532] I think it's my best idea, and unfortunately, it came five years too late.
[533] Tom Gilovich, thank you so much for being on today.
[534] So, what have we learned tonight?
[535] We have learned that humans are regretful, although not necessarily in the right direction.
[536] We're also not very good at buying insurance.
[537] We are dumber than dogs, and that's not a humble brag.
[538] That's an actual thing.
[539] And we are really petty.
[540] To make sense of all this, and maybe to give us a little hope, I'd like to introduce you to our final guest.
[541] He is a recent Nobel Prize recipient, not the Peace Prize, I'm afraid.
[542] Not even the literature prize.
[543] It's just the prize in economics.
[544] I'm sorry, it's the best we could do.
[545] So would you please welcome the University of Chicago economist Richard Thaler?
[546] Richard Thaler, any day I get to talk to you is a great day.
[547] Thanks for being on the show.
[548] My pleasure.
[549] So you are best known as a primary architect of what's come to be called behavioral economics, also as co-author of the wonderful book Nudge and the resultant nudge movement.
[550] So let's start with that.
[551] How would you describe a Nudge?
[552] So a nudge is some feature of the environment, possibly small, that influences our choices, but still allows us to do anything we want.
[553] Okay.
[554] So I would argue that the most successful nudge, and the great triumph to date of behavioral economics, has been your work done with Shlomo Benartzi a couple decades ago in the realm of retirement savings.
[555] You argued that rather than relying on people to opt in to their 401(k) and fill out the 8,000 pages of paperwork and choose from a million investment options that confuse and intimidate people, it's better to just automatically enroll them.
[556] So this has resulted in millions of people saving billions of dollars for their retirement.
[557] So congratulations and thank you.
[558] But what does it say about the field of behavioral economics and behavior change generally that this largest victory took place a couple decades ago?
[559] Where are all the other victories?
[560] So I think the retirement saving initiative has been a success because we've been able to convince firms that organize retirement plans to make them much simpler.
[561] So the choice architecture is simpler.
[562] As you mentioned, people are automatically enrolled so they don't have to fill out any forms.
[563] Then their rates are automatically escalated and they're given a default investment fund.
[564] So it's all easy.
[565] I think it's no accident that that was a success because the fix was easy.
[566] Give me a problem where I can arrange things so that, by doing nothing, people make the right choice.
[568] That's an easy problem.
[569] Well, one feature, in this case at least, is that it's a one -time fix, right?
[570] I mean, when Angela and Katie were talking earlier about their efforts to get people to go to the gym during the treatment period and then to keep going afterwards, that's having to win the battle every single day.
[571] Do you think that too many potential fixes are aimed at essentially unfixable behavior?
[572] No, because, you know, Katie and Angela have infinite energy, unlike me, and if they can solve the problem of getting people to go to the gym or eat less or take their medicines, I'm all for it.
[573] When they started this project, my reaction was, ooh, this is hard.
[574] And the simple things may not work.
[575] So this behavior change for good project includes a lot of psychologists.
[576] What do economists know or have to offer that psychologists don't?
[577] And if your economist ego allows you to say so, vice versa, as well?
[578] Oh, I think psychologists know a lot more about that than we do.
[579] Economists don't know much about how people form habits or when they stick and when they break.
[580] Let me give you an example that relates to what Mike Norton was talking about.
[581] I was in London.
[582] I was invited to some meeting that they were trying to reduce binge drinking.
[583] And they asked me if I had any nudge -like ideas.
[584] You're smiling because you think this is a matter of personal importance, but I suggest neither of us go there.
[585] In England, there's a tradition, a hallowed tradition, of buying rounds.
[586] The way it's done at the pub is you go with your mates.
[587] And I would buy the first round and you would buy the second round until we each bought a round.
[588] Now, this has obvious problems if the number of people in the group is more than, say, three.
[589] And so what I suggested was that pubs institute a new policy, which is for groups of more than three, they run a tab.
[590] Well, this was supposed to be a private meeting, but it leaked to the press, and I got hate mail.
[591] And people would say, I would never dream of leaving the pub without buying a round for my friends, and they come with a group of eight, and this is nothing but trouble.
[593] So, you know, I can think up that change in the choice architecture, but how you would get that to change, well, I made no progress.
[594] We're, you know, we're human.
[595] We have self -control problems.
[596] We're absent -minded.
[597] We get distracted.
[598] And those things aren't going to go away.
[599] And I think technology is likely the best answer.
[600] Self-driving cars will drive better than us very soon.
[601] I've heard you talk about the opposite of a nudge as sludge.
[602] Can you describe what sludge is and give an example?
[603] Nudges typically work by making something easy, like automatically signing you up for the retirement plan.
[604] The sludge, you know, is the gunk that comes out as a byproduct.
[605] And I'm using it for stuff that slows you down in ways that make you worse off.
[606] So, for example, suppose that there's a subscription and they automatically renew your subscription, but to unsubscribe, you have to call.
[607] And I had this experience: the first review of my book Misbehaving came out in the Times of London.
[608] My editor sent me an email excitedly telling me this and sending me the link, and I log on, and there's this paywall.
[609] And I said, oh, I can't read it, but there's a trial subscription for one pound for a month.
[610] And I said, oh, well, I'm willing to pay a pound to read the first review of my book.
[611] But then, you know, I start reading the fine print, and in order to quit, you have to call London during London business hours, not on a toll-free line, and you have to give them two weeks' notice.
[612] That is sludge.
[613] So you're still reading the Times of London, I assume.
[614] I called my editor and told him that he should buy the subscription and then send me a PDF.
[615] All right.
[616] I have a final question for you.
[617] You mentioned habit formation, which to me is at the root of just about everything we've been talking about tonight.
[618] And some habits get formed intentionally, others not.
[619] Some habits are good.
[620] Some are not.
[621] I'm really curious to know what's a habit that you never acquired that you really wish you had?
[622] Doing my homework.
[623] In school, you were not a homework doer?
[624] I was not a great student.
[625] Yes, so how does it happen that a guy who's admittedly not a very good student, who apparently didn't do homework, gets a Nobel Prize?
[626] Well, I think, listening to Tom, maybe it was that I was less fearful of embarrassment than my colleagues.
[627] I mean, much of my career was similar to the kid who points out that the emperor is naked.
[628] And few of my economist colleagues were willing to say that, and I was willing to be ridiculed.
[629] Where do you think that lack of embarrassment came from?
[630] Possibly stupidity.
[631] Coming up next time on Freakonomics Radio, an episode inspired by one of the questions we asked Richard Thaler in Philadelphia.
[632] Hey, let me ask you this.
[633] Given how hard it is, obviously, to get people to exercise when they don't want to exercise, shouldn't we put a lot of resources in coming up with, let's say, an exercise pill?
[634] You in favor of that?
[635] Sure.
[636] Give me the pill.
[637] As it turns out, there is an exercise pill in the works.
[638] When we gave it to sedentary mice, the drug progressively activated the genetic program that is normally activated by exercise.
[639] The hidden side of exercise. That's next time on Freakonomics Radio.
[640] Freakonomics Radio is produced by Stitcher and Dubner Productions.
[641] This episode was created in partnership with WHYY and was produced by Zack Lapinski, Alison Craiglow, Greg Rippin, Harry Huggins, and Corinne Wallace.
[642] Our staff also includes Matt Hickey, and our intern is Daphne Chen.
[643] Our theme song, Mr. Fortune, was originally recorded by The Hitchhikers.
[644] The live version you heard in this episode was performed by Luis Guerra and the Freakonomics Radio Orchestra.
[645] All the other music was composed by Luis.
[647] You can subscribe to Freakonomics Radio on Apple Podcasts or wherever you get your podcasts.
[648] The entire archive is available on the Stitcher app or at Freakonomics.com, where we also publish transcripts and show notes.
[649] If you want the entire archive ad-free, plus lots of bonus episodes, go to stitcherpremium.com slash Freakonomics.
[650] We also publish every week on Medium a short text version of our new episode; go to medium.com slash Freakonomics Radio.
[651] We can also be found on Twitter, Facebook, and LinkedIn or via email at Radio at Freakonomics .com.
[652] Freakonomics Radio also plays on many NPR stations, so check your local station for details.
[653] As always, thank you for listening.
[654] Stitcher.