Freakonomics Radio XX
[0] If you are an American of a certain age, you may remember when Kmart, the discount department store chain, was everywhere.
[1] Kmart is more than any store you have known before.
[2] Kmart means you get quality.
[3] At one point, Kmart had more than 2,300 locations in the U.S. It was a famous brand with one very famous in-store promotion.
[4] Sure, sure.
[5] Blue Light Special.
[6] Yeah, the old Blue Light Special just about cost me my marriage.
[7] John List is an economist at the University of Chicago.
[8] His Blue Light Special story goes back to when he was a graduate student at the University of Wyoming.
[9] I'm sitting in our house and it's like mid-October and 10 degrees and snowing.
[10] You can imagine a cattle town.
[11] You know, there goes some hay going down the road.
[12] And my wife is in a long rant about how much she hates Laramie, Wyoming, and then she looks out our front window, and down the street, there's a Kmart, which is having a blue light special, and she says, I can't even get away from the bleeping blue light special at Kmart.
[13] It's friendly and warm and service does not take long.
[14] That's how big a deal the blue light special was.
[15] It had been invented by an assistant store manager at a Kmart in Indiana.
[16] The blue light special is what Sam Walton, the famous entrepreneur who started Walmart, said is like the greatest innovation in the world.
[17] At least in retail, maybe, right?
[18] At least in retail, absolutely.
[19] People are milling around the store, and all of a sudden, this blue light, like on the top of a police car, right?
[20] The blue light is going off and sirens start.
[21] There was also an announcement over the public address system.
[22] Attention, Kmart shoppers.
[23] And what that means is, everyone should run like cattle toward the blue light because there's going to be a great deal.
[24] And I guess it did a couple different things, right?
[25] It helped them clear out the stuff they didn't want, and it gave shoppers, you know, this rush of adrenaline that kept them in the store longer and made them spend more money.
[26] That's right on.
[27] This works on both the supply and the demand sides, absolutely.
[28] The Blue Light Special became such a phenomenon that Kmart did what any right-minded company would do, probably what you or I would do too.
[29] They took their success and they tried to make it bigger.
[30] They tried to scale it up.
[31] What they essentially did is they centralized the blue light special.
[32] The headquarters just outside of Chicago would decide where and when and what product would be placed on the blue light special, and the blue light special in Laramie, Wyoming's Kmart would be the exact same as the one in Honolulu.
[33] This meant the manager of a given Kmart store no longer had the autonomy to sell off their particular surplus, nor could they cater to the local customers they knew well.
[34] You can see why this might be a problem.
[35] If you are a customer in the market for a cheap snow shovel, a blue light special on swimsuits may not be so attractive.
[36] Kmart headquarters also began setting the sale schedule months in advance.
[37] So, how did the Blue Light Special do once it was scaled up?
[38] Yeah, it was ruined.
[39] So, John, is that really a failure to scale or just a failure to have common sense?
[40] Well, both.
[41] So what my book is trying to make the claim about is that all of our failures to scale, and the world is replete with failure after failure after failure, I want that to be viewed as a failure of common sense.
[42] Look, my book is kind of the common sense checklist.
[43] The book that John List just published is called The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale.
[44] Here's a short passage read by List.
[45] Most of us think that scalable ideas have some silver bullet feature, some quality that bestows a can't-miss appeal.
[47] That kind of thinking is fundamentally wrong.
[48] There is no single quality that distinguishes ideas that have the potential to succeed at scale from those that don't.
[49] And that's why, as List argues, the world of scaling needs a common-sense checklist.
[50] Kmart today is nearly extinct.
[51] That's probably not because they ruined the blue light special, but it didn't help.
[52] In retrospect, the decision to centralize the blue light special looks like an obvious mistake, but with most scaling failures, it's more complicated.
[53] List wrote his book to help people identify potential problems before they scale up, and to look for solutions that will, if not guarantee success, at least improve their odds.
[54] The first secret is use incentives that can scale.
[55] This applies to making health care or education policy, trying to grow your startup, or just helping your community get along better.
[56] In this installment of the Freakonomics Radio Book Club, an economist who has already revolutionized how his profession does research is trying to repurpose that research for the rest of us, a scaling challenge if ever there was one.
[57] We all have lifelong ambitions, don't we?
[58] Today on Freakonomics Radio, how to know if an idea is scalable before you actually try to scale it, how to avoid false positives, how to know whether a given success is due to the chef or the ingredients, and maybe my favorite, knowing when to quit.
[60] All that is coming up right after this.
[61] This is Freakonomics Radio, the podcast that explores the hidden side of everything with your host, Stephen Dubner.
[62] There's one sentence in your book that I read over and over and over because it was so powerful, but honestly, I don't quite get it.
[63] You write, we need to move from a mentality of creating evidence-based policy to one of producing policy-based evidence.
[64] Can you explain and maybe give an example?
[65] Effectively, what I'm calling for here is a revolution.
[66] It's a small revolution, but it nevertheless is a revolution.
[67] It's a call to researchers and to policymakers and business people to imagine if I scale an idea, what are the constraints that I should take account of when I'm actually doing the original research?
[68] Let's talk about the Chicago Heights Early Childhood Program.
[69] The Chicago Heights Early Childhood Center, or CHECC, was a program that List set up with several other economists, including Steve Levitt, my Freakonomics friend and co-author. They created a preschool in a low-income Chicago neighborhood in order to explore the incentives for students, parents, and teachers that would produce the biggest educational gains.
[70] Essentially, in that program, which we started from scratch, we have to hire teachers.
[71] And my goal that entire time was, let's hire teachers in the exact same way that Chicago Heights would hire teachers.
[72] In other words, let's not go cream skimming and hire the best Teach for America type candidates I can find for this pilot program.
[73] Exactly.
[74] Let's do it exactly like Chicago Heights.
[75] Now, that sounds good, and I think it's the first step, but I should have taken one more step.
[76] Wait, let's back up.
[77] It sounds good because you're saying that a lot of research that looks great on paper, which policymakers then try to scale up, fails because of the way the original research was done on a small scale: you used the best of everything, the best leaders, the best methodology, right?
[78] But you can understand why researchers want to do that because they want the research to look successful.
[79] You are consciously saying, I'm not going to try to hire the best teachers I possibly can because even though that would make my initial research look better, it will make it less likely to scale up.
[80] That's 100%.
[81] Researchers tend to start with the best case scenario, best of everything.
[82] Now, that's great for an efficacy test.
[83] But the problem is, after researchers in the social sciences do an efficacy test, they forget to tell everyone else that it was an efficacy test. We should have actually taken one more step and oversampled bad teachers.
[84] Because if I want to scale that up around Chicago, I probably have to hire 30,000 teachers.
[85] That means I want to know, does the program work with well below average teachers?
[86] When you have to hire a bunch of people from the same city, you're going to have to have your program work with not-so-good teachers.
[87] That's if you want to keep the budget the same. Sure, you can bust the budget.
[88] But with the same budget, will I have the same voltage?
[89] And the fact is, I won't.
[90] What does List mean when he talks about voltage?
[91] After all, his book is called The Voltage Effect.
[92] Here's another passage from the book.
[93] A pharmaceutical company develops a promising new sleep medication in its lab, but the drug doesn't live up to its promise in randomized trials.
[94] A small company in the Pacific Northwest successfully launches a product, then expands its distribution only to find that it sells poorly on the East Coast.
[95] These cases are all examples of a voltage drop.
[96] Voltage drops are what happens when the great electric charge of potential that drives people and organizations dissipates, leaving behind dashed hopes, not to mention squandered money, hard work, and time.
[97] And they are shockingly common.
[98] And here again is List, in real life.
[99] So what I'm calling for here is to reverse the notion of evidence-based policy.
[100] When I talk about policy-based evidence, I'm saying, at scale, with all of the flaws that the program or the institutions or the implementers will have, does my program still work?
[101] What you're talking about now is what you call in the book the chef versus the ingredients.
[102] The chapter is called, Is It the Chef or the Ingredients?
[103] The example uses a restaurant, which I think everybody can identify with.
[104] There's a great restaurant, and it's got a great chef.
[105] And then that chef tries to replicate and open, let's say, 10 or 20 restaurants of the same kind.
[107] And if it was the chef's magic that was making that original restaurant fly, then that's going to be hard to do.
[108] But if it's more the style of the menu, that can be replicated, and so on.
[109] Now, the chef versus the ingredients issue would seem, however, to be at play in almost any scaling project, including your Chicago Heights Early Childhood Center.
[110] So how do you get around that?
[111] There are different actors within every idea.
[112] Now, there are some cases where, when you think about chef versus ingredients, in the CHECC case, I think of the chef as the teachers, right?
[113] And the ingredients might be the curriculum or the kids themselves.
[114] When I first started to think about scaling, I started to consume every idea and policy that scaled and didn't scale.
[115] So there are kind of these moments where it keeps coming back and back.
[116] One of them was fidelity, where you scale a program that was never tested to begin with.
[117] That's obviously a bad thing.
[118] But the other one was, any time there was a human at the center of the program, that would fail every time.
[119] And it got me thinking, you know, humans just don't scale.
[120] Think about smart technology.
[121] A lot of your listeners might have smart thermostats in their home.
[122] So you have a bunch of really smart engineers and they make a bunch of these smart thermostats because they're saying, look, we're going to change the world.
[123] We're going to conserve a bunch of energy.
[124] But what happens is I have one of these smart thermostats in my own home.
[125] What I do is I undo the presets.
[126] The technology is the technology, but people like me are so dumb that we undo the original setting so much that all of the good stuff is gone because the engineers assumed it was Spock, essentially, you know, this unswervingly rational being when really you're selling that product into a bunch of Homer Simpsons.
[127] John List's own vocation, academic economics, is heavily populated with Mr. Spocks, people who've spent their entire lives thinking hyper-rationally about every decision and assuming that everyone else thinks that way, even when plainly we don't.
[128] List himself is different.
[129] I'm not saying he's Homer Simpson, but he gets Homer.
[130] He is considered one of the most distinguished economists of his generation, a good bet for a future Nobel Prize, for having brought economics experiments out of the lab and into the real world by running what are called natural field experiments.
[131] List has worked with governments around the world, including a stint with the White House's Council of Economic Advisers, but unlike many of his peers, who attended top-tier schools for both their undergraduate and PhD programs, List went to the University of Wisconsin-Stevens Point and then the University of Wyoming.
[132] Many of his colleagues have parents who are professionals, even economists themselves.
[133] List is the son of a truck driver and a secretary.
[134] He and his older sister were the first two people in the family to go to college.
[135] When you look at my background, I think the lived experiences are exactly my secret sauce.
[137] When I was in high school and as an undergrad, I would set up every weekend at baseball card conventions.
[138] And I would buy, sell, and trade baseball cards to make extra money.
[139] And that dealing in that market led me to start to do field experiments, which in the early 90s were a new innovation in economics.
[141] And looking at behavioral economics in markets as a kid growing up, detasseling corn, helping my dad with his, you know, one-man trucking company, washing dishes, working in warehouses.
[142] These types of experiences allow me to look at people in markets in a very different way than most everyone else within the field of economics.
[143] List's combination of lived experiences, as he calls it, and his original research, have also made him a hot commodity for private firms, like Uber, where List once served as chief economist.
[144] I asked him why a firm like Uber would even want an academic in that role.
[145] They want to bring in an academic for two reasons.
[146] One, the academic is not afraid to instill economics around every part of the company.
[147] The other one is, firms are beginning to recognize that there are a lot of secrets hidden in academic journals and in academic minds that might actually be fruitful.
[148] So was that the case with Uber's then CEO, Travis Kalanick, who brought you in?
[149] Did he feel you knew stuff that Uber could seriously profit from?
[150] I think that's right.
[151] So when I first met Travis, it was sort of shocking to me how the initial interview went down.
[152] Meaning he was a little challenging.
[153] Yeah, he was quite aggressive.
[154] He didn't treat you with a lot of deference.
[155] No, no, he didn't really give a damn that I was a Chicago economist, which I appreciated, actually.
[156] On the other hand, it was normal for me. Yeah, for people who don't know, an economics seminar at the University of Chicago is about as cutthroat as any corporate boardroom, right?
[157] That's right.
[158] But the one thing that Travis had was confidence.
[159] In fact, I think he was probably the most confident person I've ever met.
[160] To take a startup, you know, which is basically zero and to take them to $66 billion in seven years or so, you have to have a relatively large amount of confidence, I think, in your ideas and above all kind of your instincts.
[161] So you ended up leaving Uber for Lyft, their smaller rival, and Kalanick, the CEO, was ultimately kicked out of the company by the board.
[162] In your book, John, you write, I don't believe Travis Kalanick is a bad person.
[163] He's a good person who made several bad calls at scale.
[164] And you write about Uber's problems under Kalanick, including a toxic work culture. Quote: as someone who isn't a woman, a queer person, or a person of color, I wasn't acutely aware of the power imbalances that many employees faced.
[165] So John, reading that, it struck me that when you're scaling up, the people in charge of the scaling are very often not, as you put it, aware of power imbalances because they are the power.
[166] And I'm wondering if there's any lesson you can draw from that for all of us.
[167] So what I did observe is a very aggressive environment.
[168] And it's an environment that's very similar to what the world of academia is like.
[169] And it's really hard if you've never gone through it to use theory of mind.
[170] So theory of mind is, you can put yourself in the shoes of someone else. When I'm sitting in a room right now with a woman or a person of color, it's very difficult for me to understand exactly what they're feeling and what's going through their mind when somebody is presenting at the front of the room and just getting killed.
[171] That's an environment that I've been raised in in the academy, but now in retrospect, you can see that that kind of environment just will not hold up at scale.
[172] Most people, they're just not built like that.
[173] Imagine that you are a small startup firm or institution.
[174] Maybe it's a nonprofit.
[175] And then you have some success and you want to get bigger.
[176] You want to scale up.
[177] Based on what you just said, what are a few things a firm can do as they're growing to be better at recruitment and hiring especially?
[178] I would say be as diverse as you possibly can, as early as you can.
[179] What are the types of diversity you're talking about, and what are the advantages of diversity?
[180] So the types of diversity are both observable characteristics like gender and race, for example, and unobservable characteristics like your socioeconomic status of how you were raised.
[181] When people look at me, what they see is a white man. They don't see that underneath, this is a person who has very different experiences than many other white men in the academy.
[182] And I think these types of diversity are useful from a very early phase in an organization, because it is not only a way to welcome future candidates from all walks of life, but you also have a much deeper and more diverse set of ideas and solutions to problems.
[183] And I believe organizations are much more productive because of that.
[184] John, give me an example from your own experience of what you would consider a serious voltage drop.
[185] Sure.
[186] A good example is, let's say at Lyft, where I'm the chief economist and say I'm trying to raise the wages of our drivers.
[187] In one case, in the Petri dish, let's say, I end up doing an experiment where I give 5% of the drivers the wage increase.
[188] Now, in the end, they're all happy.
[189] They work more.
[190] They end up taking in more money per hour.
[191] Everyone's happy.
[192] Now, if I roll that out to the entire group of drivers, you know what happens?
[193] That completely undoes the dynamics of the wage increase.
[194] Everyone works more, but guess what?
[195] They drive around with an empty car more often, and in the end, they make about the same.
[196] So that's a voltage drop, but it's a voltage drop for a specific reason.
[197] It's because the market comes to a new equilibrium.
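One stylized way to see that equilibrium logic is with a bit of arithmetic. The numbers below are invented purely for illustration; they are not Lyft figures or List's actual analysis, just a sketch of how per-hour earnings can snap back once everyone responds to the same incentive.

```python
# Stylized arithmetic with invented numbers (not Lyft data), illustrating the
# equilibrium logic: raise pay per ride for everyone, hours supplied rise,
# but rider demand does not, so hourly earnings end up roughly unchanged.

rides_per_week = 100_000
payout_per_ride = 10.0     # before the raise
driver_hours = 50_000

print(rides_per_week * payout_per_ride / driver_hours)   # $20.00 per driver-hour

# Roll out a 30% raise to all drivers. Demand stays at 100,000 rides, but the
# higher pay pulls drivers onto the road for more hours, more of them empty.
payout_per_ride = 13.0
driver_hours = 65_000

print(rides_per_week * payout_per_ride / driver_hours)   # $20.00 per driver-hour again
```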
[198] If I were to ask you for the single best example in the history of the world of the voltage effect, of an idea or a product that scaled up unbelievably well, what's your example?
[199] I would say Jonas Salk and the polio vaccination.
[200] He was like a lot of other scientists where he has an innovation.
[201] He then tests it out on his own kids.
[202] And then he ramps it up to test all kinds of different kids.
[203] So it works for all kids.
[204] He finds that it's not a false positive.
[205] Boom.
[206] We leverage the health care system and the delivery mechanism is taken care of.
[207] What's an idea or product that seemed so great?
[208] There was no way it was going to fail, and yet it did.
[209] In the public policy world, it's the old DARE program that Nancy Reagan was really pushing.
[210] You might remember that from our high school days.
[211] I do.
[212] Drug something.
[213] Exactly.
[214] I don't even remember what DARE means anymore.
[215] Here's what it stands for.
[216] I'm getting this from your book.
[217] It's the Drug Abuse Resistance Education program.
[218] And this is where police officers would go into school classrooms and teach kids about the dangers of drugs.
[219] Exactly.
[220] It's basically an information program.
[221] You know, there's kind of a cool study done in Honolulu.
[222] And Nancy Reagan said, you know what?
[223] I want to effectively stamp out drug use amongst teens.
[224] So she took the Dare program and blew it up.
[225] And essentially, she looked into every possible television set in America and told teens, just say no.
[226] Say yes to your life.
[227] And when it comes to drugs and alcohol, just say no. That was effectively her campaign.
[228] Now, in the end, her drug abuse resistance education program ended up being not very effective because it never had voltage to begin with.
[229] It was simply a false positive.
[230] False positives, List writes, are a common cause of scaling failures.
[231] False positives arise for all sorts of reasons.
[232] Bad measurement, wishful thinking, but also simply because the world is messy, and it can be hard to establish cause and effect.
[233] This is a particularly big issue in the social sciences, where you can't just hold all factors constant except for the one you're trying to measure.
[234] List says the original finding of the DARE program in Honolulu wasn't fraudulent and it wasn't meant to mislead.
[235] It just turned out to be wrong.
[236] Unfortunately, this didn't become clear until after Nancy Reagan had taken her message to the airwaves.
[237] The science, in effect, tricked her, and she wasted a lot of time and money.
[238] I remember reading a paper, it must have been 10 years ago, about these other mass-media anti-drug ads.
[239] I'm not sure which ones exactly, but do you remember that Partnership for a Drug-Free America TV ad campaign, this is your brain?
[240] And then they crack an egg into a sizzling pan.
[241] This is your brain on drugs.
[242] And this is your brain on drugs.
[243] Any questions?
[244] I do remember that.
[245] And the paper, as best as I recall, showed a similar result.
[246] And this was a nice experiment because if I recall, the ad would roll out in different markets at different times, which let researchers like you measure the effect.
[247] And it turns out that drug use or abuse or whatever was being measured didn't fall, and in some cases may have actually increased.
[249] Do you know that research?
[250] I don't, but it sounds fascinating.
[251] And I was trying to figure out, like, why on earth could it have, you know, backfired?
[252] And then I was thinking about it, like, if I'm a 16-year-old high schooler who sometimes goes to school and maybe a little bit more frequently smokes weed, and I'm getting up in the morning and I'm thinking about going to school, and then I see that TV ad, and really what they're showing me is a fried egg.
[253] And I think, ooh, that looks good.
[254] I think I'm going to smoke some weed and eat some eggs.
[255] But, I mean, it's hard to tell cause and effect, isn't it, in the real world?
[256] Look, your story might be right.
[257] I would bet on false positive.
[258] I think one of the reasons why we don't make as big of an impact in the policy world as we probably should is because of these dual circumstances of, one, you know, I'm not really sure that this is going to ever scale.
[259] And two, are we sure that this is a correct result?
[260] This makes me think of a different example, research from maybe 10 years ago that was recently challenged.
[261] This was work led by the Duke behavioral scientist Dan Ariely, and it showed that if people are asked to sign and verify a tax form before they fill it out, they're more likely to be truthful than if they fill it out first and sign at the end.
[262] But now it's come out through some detective work from other researchers that the underlying data were maybe manipulated, perhaps even faked.
[263] Now, Ariely has acknowledged the data weren't what they appeared to be, but he says it was essentially an accident or a mistake.
[264] What's your reckoning of that sort of situation, where, whether or not the deception was intentional, there's a finding that policymakers may have acted upon but is now called into question because of academic shenanigans?
[265] I would consider this episode a black eye for economics and more broadly for science.
[266] And to me, the solution is, first of all, quick replication.
[267] Journals are now starting to demand the data up front.
[268] And it does make sense that the people who are actually reviewing the paper itself for publication have access to the data.
[269] Had they not before?
[270] As far as I know, it is very, very rare for any referee to have access to data.
[271] So it's not just about this paper.
[272] I'm making a more general claim.
[273] I think one of the key unlocks for economics and for the social sciences more broadly is essentially a partnership or a group of partnerships with organizations to help us understand what's going on in the black box.
[275] Security checks and guardrails need to be put in place.
[276] For example, use Benford's Law to see whether the data have been fabricated.
[277] This is looking for some kind of pattern in a final digit or something, is that right?
[278] Exactly.
[279] So Benford's law is that humans tend to give numbers that are very different than, say, a machine.
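For readers curious what such a screen involves: Benford's law concerns the leading digit of naturally occurring numbers, which tends to follow the distribution P(d) = log10(1 + 1/d), so fabricated figures often stand out. Here is a minimal sketch of a Benford-style check; the helper names and sample values are hypothetical, not anything from an actual referee process.

```python
import math
from collections import Counter

def leading_digit(x):
    """Return the first significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi_square(values):
    """Chi-square distance between the leading digits of `values` and Benford's law."""
    digits = [leading_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    chi_sq = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)   # Benford probability of digit d
        observed = counts.get(d, 0)
        chi_sq += (observed - expected) ** 2 / expected
    return chi_sq  # compare to a chi-square critical value with 8 degrees of freedom

# Hypothetical example: clustered, human-looking figures vs. more natural ones.
suspicious = [55, 52, 58, 54, 51, 57, 53, 56, 59, 50]
natural = [120, 13, 1750, 240, 31, 9800, 11, 460, 27, 15000]
print(benford_chi_square(suspicious))   # large -> deviates from Benford
print(benford_chi_square(natural))      # smaller -> closer to Benford
```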
[280] There are different safeguards that you can set up, and I think we need to do those, and I think we will do those as a science.
[281] Because let's be clear, we're kind of in the first inning or two of data generation, especially in economics.
[282] You know, you can call it a crisis, you can call it a revolution, but it all points to the same thing that we need to do better, and the reason why we need to do better is because science can be so powerful.
[283] Look what just happened with COVID.
[284] The scientists stepped up and gave us great vaccinations.
[285] Merck is now stepping up to give us something that's going to look like a pill that's going to help us after we get COVID.
[286] I think we can do the same thing within the world of social sciences in particular economics, and we're moving in that direction, I think.
[287] Coming up after the break, several pilot programs that give out a universal basic income have looked promising, but will they scale up?
[288] Facebook has scaled up, but is that a good thing?
[289] And how do you achieve optimal quitting?
[290] I'm Stephen Dubner.
[291] This is Freakonomics Radio.
[292] We'll be right back.
[293] John List is an economist at the University of Chicago and the author of a new book called The Voltage Effect.
[294] He was inspired to think about scaling problems by the simple fact that a lot of research that he and other academics produce often fails to translate into the widespread gains they envision.
[295] We spent most of the first part of this episode talking about failures, so let's start getting into solutions.
[296] Absolutely.
[297] Since List is an economist, you won't be surprised to hear where he starts.
[298] The first secret is use incentives that can scale.
[299] Here's an example from The Voltage Effect.
[300] This one uses a pair of incentives that most of us would like to avoid.
[301] The Dominican Republic had a problem.
[302] Millions of citizens weren't paying the taxes they owed.
[303] In 2018, the country's equivalent of the IRS put into action a campaign to increase tax compliance.
[304] And when they came calling for help, several colleagues and I offered to join the fray, with the caveat, of course, that we could run a natural field experiment.
[305] The main thrust of the campaign was a series of messages the government sent to citizens and companies.
[306] Almost every person or entity that decides not to pay their taxes is essentially weighing the benefits versus the costs of that decision and concluding that the possible gains outweigh the possible losses.
[307] The goal of the messaging campaign was to tip the scales so that the potential costs became more salient in people's minds than the potential benefits.
[309] One of our messages sought to do this by informing or reminding people of the jail sentences for tax evasion.
[310] Another message informed or reminded people about a new law that made any punishments levied for tax evasion a part of the public record.
[311] In other words, the names of those who got caught not paying their taxes would be made available to anyone in the Dominican Republic.
[312] List and his colleagues sent out more than 80,000 messages to self-employed Dominicans as well as firms.
[313] Half received the message about jail time and half received the message about tax offenders' names being made publicly available.
[314] The benefit-cost analysis for those deciding whether or not to pay their taxes suddenly looked very different.
[315] Ka-ching.
[316] Our intervention worked.
[317] This simple messaging program brought in an extra $100 million in tax revenue.
[318] Which of the two messages do you think was more effective?
[319] The one threatening public exposure or jail time?
[320] Yes, jail time.
[321] Now, if the government had to actually put all those tax cheats in prison, you'd run into a scaling problem right there, finding enough prison capacity.
[322] Fortunately, the mere threat of imprisonment was effective.
[323] It's also pretty much free and, therefore, eminently scalable.
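To make the design concrete, here is a minimal sketch of the kind of random assignment such a natural field experiment relies on. The identifiers, seed, and outcome comparison are illustrative assumptions, not the actual code or data from the Dominican Republic study.

```python
import random

random.seed(2018)  # make the illustrative assignment reproducible

# Hypothetical stand-ins for the self-employed taxpayers and firms that were messaged.
taxpayer_ids = [f"taxpayer_{i}" for i in range(80_000)]
random.shuffle(taxpayer_ids)

# Split the sample evenly between the two message treatments.
half = len(taxpayer_ids) // 2
treatment = {tid: "jail_message" for tid in taxpayer_ids[:half]}
treatment.update({tid: "public_disclosure_message" for tid in taxpayer_ids[half:]})

# After the campaign, compare average taxes paid (or compliance rates) across the
# two arms to estimate each message's effect; randomization makes the groups comparable.
```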
[324] In addition to the precise targeting of incentives, there is another basic insight from economics that List says is a key to successful scaling.
[325] Make all of your decisions on the margin?
[326] Give me a half sentence explaining what it means to think on the margins for non -economists out there.
[327] What that means, basically, is don't be a prisoner to looking at averages in the business world, in the policy world.
[328] Whenever data are presented, they're presented in averages, and any time you can turn those averages into, you know, what happened with the last dollar that I spent, or what's going to happen with the next dollar that I'm going to spend.
[329] That's really what you want.
[330] So at Lyft, we try to bring in more drivers by advertising on Google and Facebook.
[331] So the marginal way to think is for the next million dollars I spend on Facebook or the next million dollars I spend on Google, how many drivers am I going to bring in?
[332] As opposed to, I've spent $50 million this quarter, on average, how many drivers did that bring in?
[333] Absolutely.
[334] Average is looking at the entire pool.
[335] So those can be very, very different.
[336] There might be some really low-hanging apples to pick right away, on average.
[337] And then as you scale, what you're scaling over are the marginal people.
[338] You're not scaling over the 2 million people who have already consumed it in the past.
[339] So that's why marginal thinking becomes super important.
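A tiny illustrative calculation, with made-up numbers rather than anything from Lyft or Facebook, shows how far apart the average and marginal returns to ad spend can be.

```python
# Hypothetical cumulative ad spend and drivers recruited (made-up numbers),
# showing why the average and the marginal return can diverge.
spend_millions = [10, 20, 30, 40, 50]                      # cumulative spend, $M
drivers = [9_000, 16_000, 21_000, 24_000, 25_000]          # cumulative drivers recruited

average_per_million = drivers[-1] / spend_millions[-1]
marginal_per_million = (drivers[-1] - drivers[-2]) / (spend_millions[-1] - spend_millions[-2])

print(f"average over all $50M spent: {average_per_million:.0f} drivers per $1M")       # 500
print(f"marginal return on the last $10M: {marginal_per_million:.0f} drivers per $1M") # 100
```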
[340] So Facebook, or as it's now calling itself, Meta Platforms, until a recent stock drop had a market cap of around a trillion dollars, and there are nearly 3 billion users on Facebook.
[341] So that's a big company.
[342] But as we all know, it started as a tiny little thing called The Facebook by Mark Zuckerberg and others when he was an undergrad.
[343] Do you see Facebook as a scaling success story or as a cautionary tale about scaling?
[344] I see it as both.
[345] Let's start with the success part first.
[346] Facebook is a great example of a good that has network externalities.
[347] If only two of my friends are on Facebook, it's actually not a very important service for me. But if all of my friends and all of their friends and all of my relatives are on Facebook, that becomes much more valuable to me. Now, the dark side, when a company grows this large, is Meta.
[348] Now Meta, of course, includes both Facebook and Instagram, and those consumers are getting information about elections or vaccinations or what have you.
[349] And when the good stuff is spreading, like the truth is spreading, this is great, right?
[350] There's a high voltage, you know.
[351] But when the bad stuff is spreading, like you might think about a wildfire, and there we need to take extra care because then you're undoing potentially some of the good stuff.
[352] On this program, we've discussed the potential benefits of a UBI, universal basic income, which is an idea that economists have been thinking about for decades, including Milton Friedman going way back.
[353] There are a number of pilot programs around the world, and generally the results that are reported, and we can't account for the results that aren't reported, tend to show success.
[354] To me, this is a really fascinating and pretty high -stakes case.
[355] where scaling really, really, really matters and where it could be very hard to make an accurate prediction based on small pilot programs.
[356] If I'm looking at a series of successful UBI pilot programs, what questions do I want to ask to ensure that it might be scalable?
[357] With UBI, my biggest concern scaling-wise would be, I want to make sure that there are not important general equilibrium effects.
[358] And what I mean by that is you can do a small-scale study and have a small group of people involved in UBI, and that's great.
[359] It will show great results.
[360] But what happens when 10, 20, 40, 50 percent of the local labor market is part of UBI?
[361] If we scale it up, what I'm talking about here now are spillover effects on the local labor market or spillover effects in the local community, and whether we understand those at scale.
[363] Okay, let me ask you about something we've discussed on this show several times, and that's the upside of quitting.
[364] Now, that very phrase would seem to be at cross purposes with the very American notion of never quitting anything.
[365] It also would seem to be at cross purposes with the notion of grit, which is the name of the book written by our mutual friend Angela Duckworth.
[366] But in your book, you've introduced the phrase optimal quitting.
[367] How do you know it's time to quit, whether it's a startup firm or some vocation or a passion of your own?
[368] Yeah, it's a great question.
[369] And the reason why it's a great question is because it doesn't have one answer.
[370] Each of our situations that we're talking about in terms of what should I quit, how should I quit, when should I quit, there is no silver bullet.
[371] But what I can give are a few features that a lot of times people don't think about when they're talking about quitting.
[372] So the first feature is, if I quit, what am I going to do?
[373] Usually, when we think about quitting, it happens because something in the workplace, in our life, has gone wrong.
[374] So then you say, I'm going to quit.
[375] But you should as often, or maybe even more often, think about what is the opportunity that I'm giving up if I stick with this job. If that opportunity changes, like it's gotten a lot better, I should quit more often.
[376] Just as often as you're pushed to quit, you should think about being pulled to quit.
[377] Maybe every three or four or six months, look and see what jobs are out there that suit your needs.
[378] Look at what apartments are available or what houses are available every six or so months.
[379] Should this go for girlfriends and boyfriends as well?
[380] Well, that's where I'm going to leave that alone, because I don't want hate mail from a million boyfriends who say, because of you, my girlfriend has just broken up with me. So, John, I really appreciate the wisdom of acknowledging that every quit opportunity is unique, essentially, right?
[381] Every person is different and the situation they're in is different and it's plainly hard.
[382] You write in the book that you've failed to optimally quit many times.
[383] Can you maybe give an example?
[384] The one example that I start out the chapter with is when I think I quit at the right time.
[385] I mean, that's my golf dream.
[386] It was a realization that I just wasn't good enough at something.
[387] I played with, you know, Steve Stricker and Jerry Kelly, two PGA pros. When I was in high school, we played in the same tournaments.
[388] But then when I went to college and played golf, there was that two- or three-year period where their games grew in very important ways.
[389] And mine just didn't.
[390] And you really hadn't realized that until you played with them, same course, same day, and could compare your performance.
[391] Yeah, yeah.
[392] I saw their scores.
[393] I saw mine.
[394] I thought I played reasonably well.
[395] So you might say, look, the guy's a quitter.
[396] He's a loser.
[397] I ended up realizing that my dream of becoming a golf professional was fleeting, and it was just a dream.
[398] And I can well imagine.
[399] I might be one of the most popular golf club professionals in Eau Claire, Wisconsin, but I would have missed out on a lot in life.
[400] So your experience suggests that one thing you can do, if you're thinking about quitting or maybe questioning your longevity in something, is to really force yourself to compare your performance with your peer group?
[402] I think that's right.
[403] Any time that you are in a competitive market or in a situation where only some people will win and many others will lose, I think you always have to be looking to your right, looking to your left and saying, do I really have a comparative advantage at this particular activity?
[404] And that's hard, because nobody wants to say that they're not good enough at something.
[406] But we need to do more comparison shopping if we want to be serious about changing the world.
[407] The secret to high-voltage scaling is understanding when to quit.
[408] And the general lesson there is that people don't quit soon enough.
[409] So, John, it strikes me that you might have a lot of enemies.
[410] Because a lot of accomplished academic researchers spend a lot of time thinking about creative and interesting solutions to big societal problems.
[411] But in your work, you point out so many cases where those ideas that look great on paper and are published in the best academic journals just aren't that practical in the real world.
[412] And I can imagine that if I'm on the other side of that critique, I'm a bit annoyed at John List.
[413] I think that's fair to say.
[414] On one hand, I'm probably one of the most hated, I think, amongst a group of most hated.
[415] And on the other, I'm probably the most unfortunately named economist in the world.
[416] Wait, you say you're unfortunately named because if you type in John List, Wikipedia pulls up the serial killer by that name?
[417] Is that what you mean?
[418] Yeah, my parents didn't have great foresight.
[419] They couldn't predict that there's going to be a guy who murders his entire family in New Jersey.
[420] And he was pretty famous for a while, yes?
[421] It was a big case.
[422] Oh, really famous.
[423] I've never been able to shed that.
[424] You type in Google John List.
[425] Unfortunately, you get this really, really bad person.
[426] I don't know.
[427] You know, the way SEO works theoretically, if you keep writing books and keep appearing on podcasts, then maybe someday John List the academic researcher will overtake John List the serial killer.
[428] We all have lifelong ambitions, don't we?
[429] That was the economist John List.
[430] His new book is called The Voltage Effect.
[431] If John sounds a bit familiar, that's because he's been one of our most frequent and fascinating guests since we started this show.
[433] If you want to hear about some of the other research he's done, I would recommend Episode 353, How to Optimize Your Apology, Episode 141, How to Raise Money Without Killing a Kitten, and Episode 405, Policymaking Is Not a Science (Yet).
[434] That last one is where we first heard about some of the scaling issues that are featured in List's new book.
[435] A lot of that research was conducted with his wife, the pediatric surgeon Dana Suskind, who is about to publish her own new book.
[436] It's called Parent Nation: Unlocking Every Child's Potential, Fulfilling Society's Promise.
[437] For the record, Suskind is not the wife who freaked out about the Blue Light Special in Laramie, Wyoming.
[438] Different person, in case that matters to you.
[439] Coming up next time on Freakonomics Radio, when a boss is a bad boss, have you ever wondered why?
[440] There's no reason to believe that a great salesperson will be a great manager.
[441] This relates to an old business theory known as the Peter Principle.
[442] The Peter Principle states very simply that in any hierarchy, an employee tends to rise to his level of incompetence.
[443] It's a funny idea, but it also rings true.
[444] There's new research showing that the Peter Principle is still alive and well.
[445] And how do most firms feel about this?
[446] It's a problem that they purposely choose to live with.
[447] Why bad bosses are bad and why that probably won't change.
[448] That's next time on Freakonomics Radio.
[449] Until then, take care of yourself.
[450] And if you can, someone else, too.
[451] Freakonomics Radio is produced by Stitcher and Renbud Radio.
[452] This episode was produced by Mary Diduch.
[453] We had help this week from Jeremy Johnston and Jared Holt.
[454] Our staff also includes Alison Craiglow, Greg Rippin, Zach Lipinski, Ryan Kelly, Rebecca Lee Douglas, Morgan Levy, Emma Terrell, Jasmine Klinger, Eleanor Osborne, Lyric Boutich, Jacob Clemente, and Alina Cullman.
[455] Our theme song is Mr. Fortune by The Hitchhikers.
[456] All the other music was composed by Luis Guerra.
[457] If you are so inclined, we would love you to rate or review the show on your podcast app.
[458] It is a great way to help new listeners find it.
[459] As always, thanks for listening.
[460] Let me clear my throat again, since that Laker game's killing me. LeBron!
[461] The Freakonomics Radio Network, the hidden side of everything.
[462] Stitcher.