Freakonomics Radio XX
[0] Let's say I flip a coin and it comes up heads.
[1] Now I flip it again.
[2] Hmm, heads again.
[3] One more time.
[4] And wow, it's three heads in a row.
[5] Okay, if I were to flip the coin one more time, what are you predicting?
[6] Here's what a lot of people would predict.
[7] Let's see, heads, heads, heads, it's got to come up tails this time.
[8] Even though you know a coin toss is a random event, that each flip is independent, and therefore the odds for any one coin toss are 50-50.
[9] But that doesn't sit well with people.
[10] Toby Moskowitz is an economist at Yale.
[11] We like to tell stories and find patterns that aren't really there.
[12] And if you flip a coin, say, ten times, most people think, and they're correct, that on average you should get five heads, five tails.
[13] The problem is they think that should happen in any 10 coin flips.
[15] And of course, it's entirely possible that you might get eight heads and two tails, or even 10 heads in a row.
[16] But people have this notion that randomness is alternating, and that's not true.
[17] This notion has come to be known as the gambler's fallacy.
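To see just how un-alternating real randomness is, here's a minimal simulation sketch in Python (the trial count and seed are arbitrary choices for illustration):

```python
import random

# Simulate many runs of 10 fair coin flips and count how often the
# outcome is lopsided. Each flip is independent and 50-50, yet the
# "expected" 5-5 split shows up only about a quarter of the time.
random.seed(0)
trials = 100_000
exactly_five = 0
lopsided = 0  # 8 or more of either heads or tails

for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(10))
    if heads == 5:
        exactly_five += 1
    if heads >= 8 or heads <= 2:
        lopsided += 1

print(f"Exactly 5 heads: {exactly_five / trials:.1%}")  # ~24.6%
print(f"8+ of one side:  {lopsided / trials:.1%}")      # ~10.9%
```

The exact figures follow from the binomial distribution: the chance of exactly 5 heads in 10 flips is C(10,5)/2^10, about 24.6 percent, so a lopsided-looking sequence is nothing unusual.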
[18] This is a common misconception in Vegas.
[19] You go to the slot machine.
[20] It hasn't paid out in a long time, and people think, well, it's due to be paid out.
[21] That's just simply not true if it's a truly independent event, which it is, the way it's programmed.
[23] So, Toby, you have co-authored a new working paper called "Decision-Making Under the Gambler's Fallacy."
[24] And if I understand correctly, the big question you're trying to answer is how the sequencing of decision-making affects the decisions we make.
[25] Is that about right?
[26] That's correct.
[27] In fact, the genesis of the paper was really to take this idea of the gambler's fallacy, which has been repeated many times in psychological experiments, typically with a bunch of undergrads playing for a free pizza, and apply it to real-world stakes, where the stakes are big, there's a great deal of uncertainty, and these decisions matter a lot.
[28] Some of these decisions matter so much, they can mean the difference between life and death.
[30] So these probably aren't the kind of decisions we should be making based on this.
[31] From WNYC Studios, this is Freakonomics Radio, the podcast that explores the hidden side of everything.
[32] Here's your host, Stephen Dubner.
[33] So Toby Moskowitz and his co-authors, Daniel Chen and Kelly Shue, have written this interesting research paper.
[34] It's called "Decision-Making Under the Gambler's Fallacy."
[35] It's the kind of paper that academics publish by the thousand.
[36] They publish in order to get their research out there, maybe to get tenure, etc. So it matters for them.
[37] Does it matter for you?
[38] Why should you care about something like the gambler's fallacy?
[39] Well, we often talk on this program about the growing science of decision-making.
[40] But it's funny.
[41] Most of the conversations focus on the outcome for the decision-maker.
[42] What about the people the decision is affecting?
[43] What if you are a political refugee hoping to gain asylum in the United States?
[44] There's a judge making that decision.
[45] What if you're trying to get your family out of poverty in India by starting a business and you need a bank loan?
[46] There's a loan officer making that decision.
[47] Or what if you're a baseball player waiting on a three-two pitch that's going to come at you 98 miles an hour from just 60 feet, six inches away?
[48] That's where the umpire comes in.
[49] We'll start with Major League Baseball.
[50] That was a simple one.
[51] Moskowitz and his co-authors analyze decision-making within three different professions: baseball umpires, loan officers, and asylum judges, to see whether they fall prey to the gambler's fallacy.
[52] Because there are all kinds of possible areas where the sequence of events shouldn't matter, but our brains think it should, and that causes us to make poor decisions.
[53] Decisions that are the result of what I would call decision heuristics.
[54] A heuristic being essentially a cognitive shortcut.
[55] Now, why choose baseball umpires?
[56] Because baseball has this tremendous data set called PITCHf/x, which records every pitch from every ball game.
[57] And what it records is, if you look at the home plate umpire, where the pitch landed, where it was located within or outside the strike zone, and also what the call was from the umpire.
[58] Moskowitz and his colleagues looked at data from over 12,000 baseball games, which included roughly 1.5 million called pitches.
[59] That is, the pitches where the batter doesn't swing, leaving the umpire to decide whether the pitch is a ball or a strike.
[60] As they write in the paper, "We test whether baseball umpires are more likely to call the current pitch a ball after calling the previous pitch a strike and vice versa."
[61] There were 127 different umpires in the data.
[62] The researchers did not focus on pitches that were obvious balls or strikes.
[63] If you take a pitch dead center of the strike zone, umpires get that right 99 percent of the time.
[65] Instead, they focused on the real judgment calls.
[66] So the thought experiment is as follows.
[67] Take two pitches that land in exactly the same spot.
[68] The umpire should be consistent and call that pitch the same way every time.
[69] Because the rules state that each pitch is independent in terms of calling it correctly.
[70] It's either in the strike zone or it's not.
[71] The first thing the PITCHf/x data shows is that umpires are generally quite fallible.
[72] On pitches that are just outside the strike zone, they're definitely balls, but they're close.
[74] On those pitches, umpires get it right only about 64% of the time.
[75] So that's a 36% error rate.
[76] It's big.
[77] Slightly better than flipping a coin, but not much.
[78] Not much.
[79] Yeah.
[80] Better than you and I could do, though, I would say.
[81] And how does the previous pitch influence the current pitch?
[82] Just as a simple example, if the previous pitch was a strike, the umpire was already about half a percent less likely to call the next pitch a strike.
[83] Half a percent doesn't seem like that big an error, but keep in mind, that's for the entire universe of next pitches, whether it's right down the middle or high and outside or in the dirt.
[84] What happens when the next pitch is a borderline call?
[85] So if you look at pitches on the corners, near the corners, that's where you get a much bigger effect.
[86] So as an example, if I see two pitches on the corners, one that happened to be preceded by a strike call and one that didn't, the one preceded by a strike call is about 3.5% less likely to be called a strike.
[87] Now, if I increase that further, if the last two pitches were called a strike, then that same pitch is about 5.5% less likely to be called a strike.
[88] So those are pretty big numbers.
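To make that comparison concrete, here's a minimal sketch, in Python with pandas, of the kind of conditional tabulation the paper describes. The DataFrame and its column names are hypothetical stand-ins, not the authors' actual data or code:

```python
import pandas as pd

# A sketch of the basic comparison: among borderline pitches, is a strike
# call less likely when the previous pitch was called a strike?
# Assumes a DataFrame of called pitches with hypothetical columns:
#   game_id, pitch_num, borderline (bool), called_strike (bool)
def gamblers_fallacy_gap(pitches: pd.DataFrame) -> float:
    pitches = pitches.sort_values(["game_id", "pitch_num"]).copy()
    # The call on the previous called pitch within the same game.
    pitches["prev_strike"] = pitches.groupby("game_id")["called_strike"].shift(1)

    # Keep borderline pitches that have a previous call to condition on.
    borderline = pitches[pitches["borderline"] & pitches["prev_strike"].notna()]

    after_strike = borderline.loc[borderline["prev_strike"] == True, "called_strike"].mean()
    after_ball = borderline.loc[borderline["prev_strike"] == False, "called_strike"].mean()

    # A negative gap means a strike call is less likely right after a
    # called strike, the signature of the gambler's fallacy.
    return after_strike - after_ball
```

The actual analysis goes much further, controlling for pitch location, speed, spin, pitcher, batter, and umpire, which is exactly where the conversation turns next.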
[89] And let me just ask you, other than the final location of the pitch, what other factors relating to pitch speed or spin or angle, etc., did you look at, and/or could you control for?
[90] And is that important?
[91] You always want to control for those things, because some people might argue, well, maybe they see it differently if it's a 98 mile an hour fastball versus an 80 mile an hour slider or a curve.
[92] Maybe that changes the optics for the umpire.
[93] So we try to control for all that.
[94] And the beautiful thing about baseball is they have an enormous amount of data.
[95] We threw in things like the horizontal spin and vertical distance of the pitch, the movement of it, the speed, the arc from when it leaves the pitcher's arm to when it crosses the plate. We also control for who the pitcher was, who the batter was, and even who the umpire was.
[96] Since you're controlling for the individual umpires, I assume you have a list of the best and worst umpires, yes?
[97] On this dimension, yes, and it turns out they're all pretty much about the same.
[98] So you can either view them as equally good or equally bad.
[99] There wasn't a single umpire that didn't exhibit this kind of behavior.
[100] They all fell prey to what we interpret as the gambler's fallacy in terms of calling pitches, which stands to reason because they're all human.
[101] One of the biggest things you have to do when you're an umpire is be honest with yourself.
[102] That's Hunter Wendelstedt.
[103] Well, now I'm a Major League Baseball umpire.
[104] I have been in the major leagues full time since 1999, so I've been able to travel this great country doing something I love, and that's umpiring baseball games.
[105] Wendelstedt's father, Harry, was also a Major League umpire, an extremely well-regarded one.
[106] Harry also ran the Wendelstedt Umpire School near Daytona Beach, Florida, which Hunter now runs during the off-season.
[107] They start with the fundamentals.
[108] You hold up a baseball.
[109] Here's a baseball.
[110] Here are the measurements of the baseball.
[111] Here's the weight of the baseball.
[112] Same thing with the bat.
[113] And you go step by step.
[114] There's a proper way for an umpire to put their mask on and take their mask off so as to not block their vision, different ways to ensure that you get the best look you can.
[115] And that's the first seven to 11 days.
[116] If you're fortunate enough to make it as an umpire all the way to the majors, you know you'll be subject to a great deal of scrutiny.
[117] Because now, on any given day at every major league stadium, you have cameras, most of them high-definition super slow motion, that are critiquing every pitch and every play.
[118] Wendelstedt is a fan of the PITCHf/x system that Toby Moskowitz used to analyze umpire decisions.
[119] Once these pitch systems got into place, it's been a great educational tool for us, because we get a score sheet after every game we work behind the plate, and it tries to see if you have any trends, and it really helps us become a better quality product for the game of baseball.
[120] We sent Hunter Wendelstedt the Moskowitz research paper, which argues that Major League umpires succumb to the gambler's fallacy.
[121] I was reading that.
[122] I got nervous.
[123] But that was really interesting.
[124] You know, that's just stuff I've never even thought about.
[125] It's kind of blowing my mind the last couple of days.
[126] It's pretty neat.
[127] But Wendelstedt wasn't quite ready to accept the magnitude of umpire error the researchers found.
[128] I think it's very interesting.
[129] And I really look forward to studying that some more because, you know, running the umpire school and all that, we've got to keep up on the trends and the way that the perception is going out there also.
[130] Wendelstedt did say that if an umpire makes a bad call, whether behind the plate or in the field, you don't want to try to compensate later.
[131] If you miss something, the worst thing to do is try to compensate. You can never make up a call.
[132] People say, oh, that's a makeup call.
[133] Well, no, it's not because if you try and make up a call, now you've missed two.
[134] And that's something that we would never, ever want to do.
[135] The Moskowitz research paper only analyzed data for the home plate umpire, the one calling balls and strikes.
[136] For those of you not familiar with baseball, there are four umpires working every game, one behind home plate and one at each of the three bases.
[138] The umps rotate positions from game to game, so a given ump will work the plate only every few games.
[139] Interestingly, baseball uses six umps during the postseason, adding two more down the outfield lines, which has always struck me as either a jobs program or a rare admission of umpiring fallibility.
[140] Because if you need those two extra umps to get the calls right during the postseason, doesn't that imply they ought to be there for every game?
[141] In a more overt admission of the fallibility of umpires, baseball has increasingly been using video replays to look at close calls.
[142] In such cases, the calls are overturned nearly half the time. Nearly half the time, calls by the best umpires in the world, which might make you question the fundamental decision-making ability of human beings generally, and whether we'd be better off getting robots to make more of the relatively simple judgment calls in our lives, like whether a baseball pitch is a ball or a strike.
[144] But human nature being what it is, and most of us having an undeservedly high opinion of ourselves as good decision makers, we probably won't be seeing wholesale automation of this kind of decision making anytime soon.
[145] Making decisions, after all, is a big part of what makes us human.
[146] So it's hardly surprising we'd be reluctant to give that up.
[147] But if the gambler's fallacy is as pronounced as Toby Moskowitz and his colleagues argue, you might wish otherwise, especially if you are, say, applying for a bank loan in India.
[148] And we got a little bit lucky here.
[149] Lucky, meaning some other researchers had already run an experiment with a bank in India and a bunch of loan officers on actual loans.
[151] And the data from that experiment allowed Moskowitz and his co-authors to look for evidence of the gambler's fallacy, because what they did was take those loan applications and reassign them to other loan officers, which allowed for a randomization of the sequence of loan applications.
[152] Suppose you and I looked at the same six loans.
[153] I happened to look at them in descending order.
[154] You happened to look at them in ascending order, let's say alphabetically, just some way to rank them.
[155] And then the question is, did we come to different decisions just purely based on the sequencing of those loans?
[156] Now, keep in mind, these were real loan applications that an earlier loan officer had already approved or denied.
[157] This let the researchers measure an approval or denial in the experiment against the correct answer, although the correct answer in this case isn't nearly as definitive as a correct ball or strike call in baseball.
[158] Why?
[159] Because if a real loan application had been denied, the bank had no follow -up data to prove whether that loan actually would have failed.
[160] But for the loans that were approved, we can look at the performance of that loan later on.
[161] You could see whether it was delinquent or didn't pay off as well.
[162] So unlike baseball, where we know for sure there's an error, here it's not quite clear.
[163] How much did loan officers in India fall prey to the gambler's fallacy?
[164] So you and I are looking at the same set of six loan applications.
[165] And the sequence with which I received them, suppose I had three very positive ones in a row, then I'm much more likely to deny the fourth one, even if it was as good as the other three.
[167] The analysis showed that the loan officers got it wrong roughly 8% of the time, simply because of the sequence in which they saw the applications.
[168] Talk for just a minute about why this kind of experiment, a field experiment, is inherently more valuable to people like you than a lab experiment with a bunch of undergrads trying to get some free pizza, for instance.
[169] Well, that's right.
[170] Well, in this particular case, this is their job, first of all.
[171] So you're dealing with experts in their own field making decisions that they should be experts on, as opposed to maybe very smart undergrads making a decision on something they haven't had a lot of experience doing and shouldn't be considered experts at.
[172] The second thing is incentives.
[173] Ah, incentives.
[174] One beauty of the original experiment was that it had the loan officers working under one of three different incentive schemes, which allows you to see if the gambler's fallacy can perhaps be overcome by offering a strong enough reward.
[175] Some loan officers operated under a weak incentive scheme.
[176] Which basically meant you just got paid for doing your job, whether you got it right or wrong, what we would call a flat incentive.
[177] Then there was a moderate incentive scheme.
[178] Which is, we'll pay you a little more if you get it right and then pay you a little bit less when you get it wrong.
[179] And finally, some loan officers were given a strong incentive scheme.
[180] Which was, we'll pay you a little bit more to get it right, but we'll punish you severely for getting it wrong. Meaning, if you approved it when it should have been denied, or you denied it when it should have been approved, it costs you money.
[181] So how was the gambler's fallacy affected under stronger incentives?
[182] Well, this was the most interesting part.
[183] With the strongest incentive at play, where loan officers were significantly rewarded or punished according to whether they messed up an application simply because of the order they read it in, we found that that 8% error rate, or I should say what we ascribe to the gambler's fallacy affecting decision-making, goes down to 1%.
[185] Wow.
[186] It doesn't get eliminated completely, but it comes down pretty nicely.
[187] We then looked at what the loan officers did in order to get that 8% down to 1%.
[188] It turns out they ended up spending a lot more time on the loan application.
[189] If they make a quick decision, they rely on these simple heuristics of, well, I just approved three loans in a row, I should probably deny this one.
[190] But if I'm forced to actually just use information and think about it slowly because I really want to get it right because I get punished if I don't, then I don't rely on those simple heuristics as much.
[191] I force myself to gather the information and I make a better decision.
[192] Or to put it in non -academic terminology, if you're paid a lot to not suck at something, you'll tend to not suck.
[193] If effort can help.
[194] That's right.
[195] Coming up next on Freakonomics Radio.
[196] Let's hope that federal asylum judges aren't deciding 50% of their cases based on sequencing.
[197] Also, how stock prices are affected by when a company reports earnings.
[198] It makes today's earnings announcement seem kind of less good in comparison.
[199] And if you like this show, why don't you give it a nice rating on whatever podcast app you use?
[200] Because your approval means everything to us.
[201] Even if you've never watched a baseball game in your life, even if you don't care at all whether someone in India gets a bank loan, you might care about how the United States runs its immigration courts and whether it decides to grant or deny asylum to a petitioner.
[202] This is clearly a big decision, certainly for the applicants, right?
[203] I mean, in some cases it could mean the difference between life and death, right, or imprisonment and not imprisonment, if they have to go back to the country they're fleeing for political reasons or something else.
[204] These cases are heard in immigration courts by federal judges.
[205] Each case is randomly assigned, which if you're an applicant is a hugely influential step.
[206] As Toby Moskowitz and his co -authors write, New York at one time had three immigration judges who granted asylum in better than eight of ten cases and two other judges who approved fewer than one in ten.
[207] So as the researchers compiled their data to look at whether the gambler's fallacy is a problem in federal asylum cases, they focused on judges with more moderate approval rates.
[209] The data went from 1985 to 2013.
[210] So we looked only at judges that decided at least 100 cases in a given court and only looked at courts or districts that had at least 1,000 cases.
[211] Among that set, across the country over those several decades, you're talking about 150,000 decisions, and I think it was 357 judges making those decisions.
[212] So quite a large sample size.
[213] The researchers controlled for a number of factors: the asylum seekers' country of origin, the success rate of the lawyer defending them, even the time of day, which, believe it or not, can be really important in court.
[214] A 2011 paper looked at parole hearings in Israeli prisons to see how the judges' decisions were affected by extraneous factors, hunger perhaps.
[215] This study found that judges were much more likely to grant parole early in the day, shortly after breakfast, presumably, and again, shortly after the lunch break.
[216] So Moskowitz and his colleagues tried to filter out all extraneous factors in order to zoom in on whether the sequencing of cases affected the judges' rulings.
[217] Keep in mind there's also no way to measure a correct ruling.
[218] When a judge denies a certain case, we don't know for sure if that was the right or the wrong decision.
[219] So I want to qualify that because what we can show is whether the sequencing of approval or denial decisions has any bearing on the likelihood that the next case is approved or denied.
[220] And that we show pretty strongly.
[221] So what does it look like for an asylum judge to be affected by the gambler's fallacy?
[222] So if the cases are truly randomly ordered, then what happened to the last case should have no bearing on this case, right?
[223] Not over large samples.
[224] And what we find is that's not true.
[225] If the previous case was approved by the judge, then the next case is less likely to be approved by almost 1%.
[226] Where it gets really interesting is if the previous two cases were approved, then that drops even further, to about 1.5 percent.
[228] And if these happen on the same day, that goes up even further, closer to three percent.
[229] And then obviously if it's, you know, two cases in the same day, it gets even bigger.
[230] It starts to approach about 5 percent.
[231] So those are pretty big numbers, especially for the applicants involved.
[232] Or to put it a little differently, just the dumb luck of where you get sequenced that day could affect your probability of staying in this country by 5 percent, versus going back to the country that you're fleeing.
[233] That's a remarkable number, in my opinion.
[234] And in a different arena, if I hear that a baseball umpire might be wrong 5 percent of the time, I think, well, but the stakes aren't very high.
[235] But in the case of an asylum seeker, this is a binary choice.
[236] This is not one ball or strike out of many.
[237] This is, I'm either in the country, I'm not in the country.
[238] And so what did that suggest to you about the severity of the damage the gambler's fallacy can wreak on different important decisions, whether it's for an individual or, I guess I'm thinking, at a governmental level?
[240] I've refused to declare war on a given dictator three times in the last five years, but the fourth time gets harder, I guess, yeah?
[241] Right.
[242] No, I think that's right.
[243] And you can imagine the poor family that happens to follow two positive cases: even if their case is just as viable, their chances of getting asylum go down by 5%.
[244] That doesn't sound like much, but compare that to what it would be if the reverse had been true.
[245] If the two cases preceding them were poor cases and were denied, then their chances of being approved go up by 5%.
[246] That becomes a 10% difference just based on the sequencing, just who happened to be in front of you that day, a totally random occurrence.
[247] So you wouldn't expect the magnitudes to be huge.
[248] Let's hope that federal asylum judges aren't deciding 50% of their cases based on sequencing.
[249] So the lesson, if I'm seeking asylum or any other ruling, what I really want to do is bribe someone to let me get to the judge right after he or she has rejected the previous few applicants, right?
[250] I mean, other than that, it would be worth it.
[251] Well, plainly, it would be really, really, really worth it unless you get caught bribing and then obviously get rejected for asylum because of just that.
[252] So you're telling us the data from the decision maker's side.
[253] What about the seeker's side?
[254] Is there anything that can be done to offset this bias?
[255] I'm not sure there's much you can do.
[256] You're at the mercy of the courts.
[257] I suppose if you have a particularly good lawyer, maybe there's a way to lobby.
[258] I mean, I'm told the cases are randomized.
[259] I assume that's true.
[260] But who knows, like you said, maybe bribery is a bit extreme, but maybe there's a way.
[261] Well, feigning illness, at least, to break the streak, right?
[262] Exactly.
[263] I mean, there's all kinds of things that perhaps a good lawyer could do.
[264] The evidence that Moskowitz and his colleagues present is, to me at least, fairly compelling: decision makers in these three realms, courts, banks, and baseball, occasionally make poor decisions based on nothing more substantial than the order in which they face the decisions.
[265] But what if these researchers are just wrong?
[266] What if there are other explanations?
[267] No, that's a fair question.
[268] There are certainly other possible things to consider, and we try to rule them out.
[269] The first thing, the most obvious thing, would be that the quality or merits of the cases has that same pattern.
[270] That seems hard to believe.
[271] We believe, you know, in the randomization of cases; certainly in the loan officer experiment, where we know it's randomized because these other economists randomized it themselves, we can rule that out.
[272] So I don't think that's an issue, but maybe just the quality of cases has this sort of alternating order to it, and these guys are actually making the right decision.
[274] We don't think that's true.
[275] And in baseball, we can actually prove it by showing that they're getting the wrong call.
[276] It's also interesting, to me at least, that what the Moskowitz research is pushing against is an instinct that a lot of people are trying to develop, which is pattern spotting.
[277] More and more, especially when we're dealing with lots of data, we look perhaps harder than we should for streaks or anomalies that aren't real.
[278] We may look for bias that isn't necessarily bias.
[279] Our umpire friend Hunter Wendelstedt brought this up when we asked whether, as most baseball fans believe, umpires treat certain pitchers with undue respect.
[280] Well, you know, here's the thing about it.
[281] You take Clayton Kershaw.
[282] The umpire's going to call more strikes when Clayton Kershaw's out there.
[283] Why?
[284] Is it because we like them better?
[285] No. It's because he throws more strikes, because he's a better pitcher than a rookie that's getting the call-up from the New Orleans Zephyrs.
[286] It's one of those things.
[287] Greg Maddux and John Smoltz, they're in the Hall of Fame for a reason.
[289] Toby Moskowitz points to one more barrier to unbiased decision-making, related to the gambler's fallacy but slightly different.
[290] It's another bias known as sequential contrast effects.
[291] That sounds like a very technical term, but it's a pretty simple idea.
[292] The idea is if I read a great book last week, then the next book I read, even if it's very, very good, I might be a little disappointed because my reference for what a really good book is just went up.
[293] And you could see how that phenomenon would really be important in, let's say, job applicants or any kind of applicant, yeah?
[294] Correct.
[295] We see this all the time, that the sequence of candidates that come through for a job, I think, matters, both from the gambler's fallacy as well as from sequential contrast effects.
[296] So I, along with a couple of other researchers, was interested in this idea of sequential decision errors.
[297] That's Kelly Shue.
[298] I'm an associate professor of finance at the University of Chicago Booth School of Business.
[300] She's also one of Toby Moskowitz's co-authors on the gambler's fallacy paper, and she's a co-author on another paper called "A Tough Act to Follow: Contrast Effects in Financial Markets."
[301] And I was talking to some asset managers in New York, and they said that when they consider earnings announcements by firms, their perception of how good the current earnings announcement was is very much skewed by what they've recently seen.
[303] So Shue and her colleagues collected data on firms' quarterly earnings announcements from 1984 to 2013 to see how the markets responded.
[304] We look at how that firm's share price moves on the day of the earnings announcement and in a short time window before and after that announcement.
[305] And what did they find?
[306] So what we find is that if yesterday an unrelated large firm announced a very good earnings announcement, it makes today's earnings announcement seem kind of less good in comparison.
[307] And on the other hand, suppose yesterday's earnings announcement was pretty disappointing, then today's news, all else equal, looks more impressive.
[308] Before you go thinking that stock market investors are particularly shallow, Shue notes that contrast effects like these have been widely observed in lab experiments.
[309] So what they've shown is that subjects will judge crimes to be less egregious if they've recently been exposed to narratives of more egregious crimes.
[310] College students will rate pictures of their female classmates to be less attractive if they've recently been exposed to videos of more attractive actresses.
[311] So we believe something fairly similar is happening in the context of earnings.
[312] In this research, as well as the gambler's fallacy research, the timing of the consecutive decisions really matters.
[313] Toby Moskowitz again.
[314] Meaning if the decisions that you're making occur very close in time, then you tend to fall prey to the sequencing effect.
[315] So take the judge's example, for instance.
[316] We find that if cases are approved on the same day, then the likelihood of the next case that same day being approved goes way down.
[317] If those cases were one day removed, the effect gets a lot weaker, or, in fact, if there's a weekend in between the decisions, then it's almost nonexistent.
[318] So if the judge approved a bunch of cases on Friday, that really doesn't have much bearing on what happens Monday.
[320] Moskowitz has tried to apply this insight to his own decision-making when it comes to grading students' papers.
[321] If I see a sequence of good exams, that may affect the poor students who happen to be later in the queue in my pile. But one of the things I try to do, mostly just because I don't want my head to explode, is take frequent breaks between grading these papers, and I think that breaks that sequencing.
[322] My mind sort of forgets about what I did in the past because I've done something else in between.
[323] What do you do during your breaks?
[324] Go for a walk, check email, get some coffee, maybe work on something else.
[325] Or, you know, my students don't want to hear this, but occasionally I'll grade an exam in front of a baseball game and, you know, I'll stop and watch a couple of innings.
[326] Obviously, every realm is different.
[327] A loan officer is different from a baseball umpire, is different from an asylum judge, is different from a professor grading papers, and so on.
[328] But what they all would seem to have in common is a standard, a standard of, you know, competence or excellence or whatnot.
[329] And so is there any way for all of us to try to avoid the bias of the gambler's fallacy, to try to, I guess, connect more with an absolute measure rather than a relative measure?
[330] Well, that's a very good question.
[331] I think it does depend on the field.
[332] Obviously, if you think about asylum judges, the absolute measure, you know, sort of your overall approval or denial rate, might be good from a judge's perspective, but it's certainly not great from the applicant's perspective if you make a lot of errors on either side, right?
[333] The errors may balance out, but to those applicants, there's huge consequences.
[334] Now that Moskowitz has seen empirical proof of the gambler's fallacy, he sees it just about everywhere he looks.
[335] My wife, who's a physician, claims that she thinks that happens.
[336] I would also argue test -taking.
[337] My son, who's actually studying a little bit for the SSATs, he'll say things like, well, you know what, I'm not sure what the answer to number four was but the last two answers were A, so it can't be A, right?
[338] And you just, you sort of caution, that may not be right.
[339] It sort of depends on whether the test makers have any biases either.
[340] Well, then it becomes game theory, which becomes harder and more fun, yeah.
[341] That's right.
[342] That would actually be a more interesting test, wouldn't it?
[343] If the students just figured that out, you let them in.
[344] Moskowitz plays tennis, where there's plenty of opportunity for a rethink on the sequencing of shots.
[345] If you're serving, for instance, one of the best strategies is a randomized strategy, like a pitcher should use in baseball.
[346] And I'm not very good at being random, just like most humans.
[347] I'll say to myself, well, I hit the last couple down the middle.
[348] Maybe I should go out wide on this one.
[349] But that's not really random.
[350] What I should do is what some of the best pitchers in baseball do.
[351] Rumor has it Greg Maddux used to do this: recognizing that he's not very good at being random, he would use a cue in the stadium that was totally random.
[353] For instance, are we in an even or an odd inning, and is the time on the clock even or odd, some cue that would just give him a rule, like, well, if the clock ends on an even number and the inning is even, I'll throw a fastball; otherwise, I'll throw a slider.
[354] For me, I should say to myself, you know, is the score even or odd, or whatever, did I count five blades of grass on the court as opposed to three, something that's totally random, that has nothing to do with it, and that allows me to apply that random strategy, which my brain is not very good at doing.
[355] Most people's brains aren't.
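In code terms, the Maddux trick amounts to deriving an unpredictable bit from incidental cues. Here's a minimal sketch in Python; the specific cues and the fastball/slider mapping are illustrative assumptions, not Maddux's actual rule:

```python
# Derive a hard-to-predict binary choice from incidental details of the
# situation, since humans are poor at generating randomness on their own.
# The cues used here are illustrative assumptions, not Maddux's actual rule.
def pick_pitch(inning: int, clock_seconds: int) -> str:
    # Combine the parities of two unrelated cues so that neither cue
    # alone gives the batter a readable pattern.
    parities_match = (inning % 2) == (clock_seconds % 2)
    return "fastball" if parities_match else "slider"

# Example: 3rd inning, 47 seconds showing on the stadium clock.
# Both are odd, so the parities match and the call is "fastball".
print(pick_pitch(inning=3, clock_seconds=47))
```

The same idea works for a tennis serve: swap in any pair of cues you can observe in the moment, as long as they have nothing to do with the point being played.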
[356] It's an interesting paradox that it takes a pretty smart person to recognize how not smart we are at doing something as seemingly simple as being random, because it wouldn't seem to be so difficult, right?
[357] I would say that's fairly true in general that the smartest people I know are so smart because they know all the things they don't know and aren't very good at, and that's a very tough thing to do.
[358] Interesting.
[359] The smartest people know all the things they aren't very good at.
[360] Me?
[361] I've never been very good at learning just when to end a podcast episode.
[362] I'm going to start working on that right now.
[363] Coming up next week on Freakonomics Radio, roughly 15 million Americans will eat their Thanksgiving meal in a restaurant.
[364] No cooking, no cleanup, and increasingly, no tipping.
[365] We just knew we had to go cold turkey on this whole tipping thing.
[366] Why tipping is a ridiculous way of doing business and what one man is doing to change it.
[367] That's next time on Freakonomics Radio.
[368] Freakonomics Radio is produced by WNYC Studios and Dubner Productions.
[369] This episode was produced by Harry Huggins.
[370] Our staff also includes Shelley Lewis, Christopher Werth, Jay Cowit, Merritt Jacob, Greg Rosalsky, Noah Kernis, Alison Hockenberry, Emma Morgenstern, and Brian Gutierrez.
[371] You can subscribe to this podcast on iTunes or wherever you get your podcasts, and come visit Freakonomics.com, where you'll find our entire podcast archive as well as transcripts of all our episodes, if reading is your thing.
[372] Thanks for listening.