Hidden Brain XX
This is Hidden Brain.
I'm Shankar Vedantam.
Democrats were not the only ones chagrined after the U.S. presidential vote.
Donald Trump's election has sent shockwaves through the world of pollsters and political analysts.
Pundits on both the left and right had predicted Hillary Clinton would walk away with the race.
Today, we're talking to one expert who got it right.
There are several surprising things about him.
For one thing, Allan Lichtman is a historian, not a data scientist.
For another, this wasn't a fluke.
This is the ninth time in a row he's called the election correctly.
Finally, and this is the craziest part, Allan often calls the winner months and sometimes years before an election.
He sometimes calls elections before he knows which candidates are running.
Allan says his secret is sitting in plain sight, but most of us are too consumed with the drama of campaigns and candidates to see it.
Allan Lichtman, welcome to Hidden Brain.
Thank you so much.
Allan, you're in the Middle East right now, in a hotel room in Qatar.
I want to take you back more than a quarter century.
You were at Caltech, and you sat down to dinner next to another academic.
You were an odd couple.
You're a historian with an interest in American politics, and he was a geophysicist.
Since our story begins at that dinner, can you tell me who you sat next to and what you talked about?
Yes, this was in 1981, and I was sitting next to Vladimir Keilis-Borok, a geophysicist, the world's leading authority on earthquake prediction.
And it was Keilis-Borok who suggested that we should collaborate.
And of course, being a prescient, foresightful scholar, my answer was absolutely not.
What does geophysics have to do with elections?
But then I began to think about it a little bit.
And everything we know about elections, we've derived from geophysics anyway: tremors of political change, volcanic elections, political earthquakes.
So why not explicitly steal from geophysics?
And what we did was we reinterpreted elections in geophysical terms.
That is, not as Republican versus Democrat, liberal versus conservative, or Carter versus Reagan, remember this is 1981, but in earthquake terms: as stability, the party holding the White House keeps the White House, and upheaval, the party holding the White House is turned out of office.
We'll talk in a moment about the specifics of the model you built, but to make things clear, what you're saying is that you said, let's look at every presidential election as a referendum on the party currently holding the White House.
That's exactly right.
Our thesis was that elections are primarily judgments on the strength and performance of the party holding the White House.
And all the twists and turns of the campaign, the ads, the speeches, the campaign tricks, the debates count for little or nothing on election day.
But of course, this was just a theory.
And to test our theory and to create a model, we examined every American presidential election from the election of Lincoln in 1860 to the election of Reagan in 1980.
And from that examination, we came up with, indeed, the 13 keys to the White House.
It's a very simple concept.
The 13 keys are true-false questions that can be answered prior to an upcoming election, sometimes years in advance if they fall into place.
And an answer of true always favors the re-election of the party in power.
An answer of false always predicts political upheaval: the party in power will lose.
And the decision rule is really simple.
The party in power will lose if six or more of the 13 keys are false.
That was the model we came up with in 1981.
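The decision rule Lichtman describes here can be written down in a few lines. The sketch below is illustrative only: the key names are paraphrased from how he summarizes the keys later in this conversation, not the official wording of the 13 keys, and the threshold is the one stated above (six or more false answers and the party in power loses).

```python
# Illustrative sketch of the 13-keys decision rule as described in the
# interview. Key names are paraphrased, not Lichtman's official wording.

KEY_NAMES = [
    "midterm_gains", "no_primary_contest", "incumbent_seeking_reelection",
    "no_third_party", "strong_short_term_economy", "strong_long_term_economy",
    "major_policy_change", "no_social_unrest", "no_scandal",
    "no_foreign_failure", "foreign_success",
    "charismatic_incumbent", "uncharismatic_challenger",
]

def predict(keys: dict[str, bool]) -> str:
    """Predict the fate of the party holding the White House.

    Each key is True (favors the party in power) or False (favors upheaval).
    The party in power is predicted to lose if six or more keys are False.
    """
    if set(keys) != set(KEY_NAMES):
        raise ValueError("exactly the 13 keys must be answered")
    false_count = sum(1 for answer in keys.values() if not answer)
    return "party in power loses" if false_count >= 6 else "party in power wins"

# Example: with only five keys false, the party in power is predicted to win.
example = {name: True for name in KEY_NAMES}
for name in KEY_NAMES[:5]:
    example[name] = False
print(predict(example))  # prints: party in power wins
```

Note that the rule never weighs the keys against each other; as Lichtman says, it is a simple count, which is what lets the call be made as soon as enough keys have fallen into place.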
So what you did, Allan, is you looked at every election starting in 1860, and you said, what are the factors that might have influenced this election, and you used that to come up with this model of 13 keys.
What are these keys?
Tell us a little bit about these keys and these 13 questions.
Yes.
These 13 questions primarily gauge the strength and performance of the party holding the White House.
There are four political keys that have to do with midterm elections, contests for the White House party's presidential nomination, whether the sitting president is running or not, and third parties.
Then there are a whole series of performance keys: the short- and long-term economy, scandal, social unrest, policy change, and foreign policy successes and failures.
Only two keys relate to the candidates at all, and they're very high-threshold keys.
They ask whether the candidate of the White House party is one of those once-in-a-generation charismatic candidates like a Ronald Reagan or a John F. Kennedy.
And then they ask whether the challenging party candidate is charismatic.
That is the only key, the 13th key, that has anything to do with the challengers.
So this is fascinating, because, of course, when we think about presidential elections and the way the media cover them, we think it all comes down to the candidates, and we pay enormous attention to them, and you would say that is actually inordinate attention.
That's right.
I think the media covers elections as though they were horse races, with candidates sprinting ahead and falling behind according to the twists and turns of the campaigns, with the pollsters keeping score.
But the whole point is that's not how elections really work.
The American people are fundamentally pragmatic, and they are asking whether or not the White House party merits four more years of office, and it's the record of the four years, the keys go term by term, that really counts, not all of these events of the campaign that the media spends hundreds of millions of dollars covering each election.
And as a result, almost all of the media coverage is not only irrelevant, it is largely misleading.
It sends listeners and viewers and readers down into blind alleys.
It's not quite right to say you are looking into a crystal ball.
What you've really done is you've said, looking back historically at dozens of elections, what are the different factors that played a role in each of these elections?
And you're saying history can be a guide when we look into the future.
Absolutely.
I'm not a psychic.
I don't look at crystal balls.
I don't have a pipeline to the almighty.
I'm a historian.
And my system is guided by a deep study of history that covers a very broad span of time, all the way back to 1860.
And it is thus an extremely robust system.
That is, it has survived enormous changes in our economy, in our society, in our politics, in our technology.
We go all the way back to the horse-and-buggy days of American politics.
And that's the beauty of looking at history and using this pattern recognition methodology.
When we come back, we're going to explore some of the extraordinary features of this model.
It allows Allan to sometimes call elections long before candidates announce they are running.
Stay with us.
This is Hidden Brain.
I'm Shankar Vedantam.
Allan, I believe that in 2010, you called the election for Barack Obama before his eventual opponent, Mitt Romney, had even declared his candidacy?
That's correct.
I was able to call the 2012 election, a very difficult election to call with the polls almost even on the eve of the vote, back in 2010.
And the reason I could do that was I was probing the underlying structure of elections.
And looking ahead to where the keys would likely fall, I was able to ascertain that it was extremely unlikely that six or more of the keys would fall against the White House party, the party of Obama, and therefore Obama was going to win.
I want to talk a little bit about the implications of this work a little later in this conversation, because I think it does raise really fascinating questions about how we come to know what we know, how we understand elections, and how we should think about politics.
But I want to stay a little bit with the model and your predictions.
I understand that in 1991, you got a call from the office of an obscure governor in Arkansas; Bill Clinton's people wanted to know if you thought that George H.W. Bush was vulnerable in the '92 election.
That's exactly right.
My first book on the keys, it's now in its sixth edition, came out in 1991.
And at that time, George H.W. Bush had successfully conducted the Gulf War.
His approval rating hit the highest of any president in history, around 90 percent.
And every big-name Democrat was tumbling over every other big-name Democrat to get out of the race.
Mario Cuomo, Al Gore, Richard Gephardt, Jesse Jackson.
None of them wanted any part of H.W. Bush.
But I wrote in my book that, based on my historical study, Bush is a Carter, not a Reagan, a one-term president.
None of the big shots, of course, listened.
But I get a call from a woman, and she says, Professor Lichtman, this is Kegas calling from Little Rock, Arkansas, special assistant to Governor Bill Clinton down here.
And then she asks me, Lichtman, are you serious that George Bush can be beaten in 1992?
I said, yes, I am, and the rest is history.
When did you call the 2016 election for Donald Trump?
This was, of course, one of the most difficult and puzzling elections in all of American history to call, because of the unprecedented nature of the Trump candidacy and other factors that were not entirely clear.
I made my call, I believe it was September 23rd, in an interview with the Washington Post.
Mind you, this was before the sex tape, before about a dozen women came out with allegations of sexual harassment by Donald Trump, before the Comey bombshell letter about the Clinton emails.
So this is an odd question to ask, given that you called the election more than a month before it happened, but why were you so late this time?
I was so late because we had in Donald Trump a history-smashing candidate, and I was wondering if Donald Trump was so far outside the patterns of history that perhaps he could change the historical odds, that he could disrupt a pattern that had existed all the way back to 1860.
We've never seen a candidate who had no record of public service whatsoever, who had enriched himself at the expense of others, who had demeaned women, Muslims, African-Americans, the disabled, who had all kinds of scandals on his record.
So after you made this call, the Access Hollywood tape leaked, where Donald Trump talks about how he groped women without their consent; many Republicans denounced the candidate, and some even said that he should drop out.
Did you ever think at that point, in early October, of changing your call?
No, I didn't.
I was nervous about my call, and I had a lot of pressure with respect to my call, but I doubled down on my call even after all of that dirt on Donald Trump was revealed, and before the Comey letter that kind of reopened the issue of Hillary Clinton's emails.
In other words, if I am to be true to my system, I am not going to let events of the campaign, however dramatic, sway me.
And regardless of what the pundits and the politicians might be saying, I'm not going to be influenced by that, because I think the polls are very badly misused, and I don't think the pundits actually understand how elections work.
And therefore, punditry is not a guide to understanding elections, predicting elections, or explaining their implications.
Now, to be fair, Allan, we have talked in the past multiple times after elections.
And you've told me in the past that the model correctly forecast the winner of the popular vote.
In 2000, for example, you called the election for Al Gore, even though George W. Bush won the electoral college in that disputed election.
This time, Hillary Clinton appears to have narrowly won the popular vote.
So I guess it's fair to ask you, would it be more accurate to say that this is actually your first mistake in nine elections, because Donald Trump didn't win the popular vote?
That is a very fair question to ask.
And what has happened, what has changed my insights into the relationship between the popular vote and the electoral college vote, is a fundamental change in our politics that is critically important not only for the keys, but for understanding all of American politics.
And it's something that really has not been broadly acknowledged.
When I developed my system in 1981, and really up to 2000, the popular vote drove the electoral college vote.
You had to go back all the way to 1888 to find a divergence between the popular vote and the electoral college vote.
But in recent years, that relationship has been severed.
And the reason is very simple.
You have two huge uncompetitive Democratic states, California and New York, where nobody campaigns for president.
And those states roll up many millions of extra popular votes for the Democrats, even though those millions of votes don't count for anything in terms of the electoral college.
And there is no comparable set of states that rolls up those votes for the Republicans.
The only example would be Texas, but Texas is no longer nearly as uncompetitive for the Republicans as New York and California are for the Democrats.
So there has been a fundamental change in the dynamics of our elections, which is why, when I made my call, I did not draw the distinction between the popular vote and the electoral college, but simply said, this is going to be a change election, and the party holding the White House is going to lose.
If you will, you might say, you know, I was wrong in 2000, although I don't think so, because there were some very special circumstances that gave those 537 votes in Florida to Bush, including the suppression and the discarding of many, many thousands of votes cast by African-Americans.
When we come back, we're going to talk more about the 2016 election, and we'll talk about some of the deeper implications of Allan's work.
If someone can tell you who's going to win months or years before an election, does it really make sense to follow the ups and downs of the race for 24 months before the big day?
Stay with us.
This is Hidden Brain.
I'm Shankar Vedantam.
Allan, I want to start by asking you about your call in September.
I understand that you're not a personal fan of Donald Trump, and you know many people who were appalled at your call.
And this call was presumably difficult for you to make, emotionally.
I couldn't sleep at night when I made this call, I have to tell you.
You know, personally, I think Donald Trump is a very, very dangerous leader who's brought out the worst elements in our society, the Ku Klux Klan, the neo-Nazis, the white supremacists.
I've written about these issues.
I wrote a book called FDR and the Jews.
I'm a Jewish American myself and very sensitive, of course, to religious and racial prejudices.
So I had many, many sleepless nights when I made this call.
I teach at American University in Washington, D.C., not exactly a hotbed of Republicanism.
And I had a lot of pressure from some very good friends of mine and some very smart scholars to change my call.
And I had that pressure from the time I made the call, and certainly after I doubled down on that call, right up to the election.
So emotionally and spiritually, this was the most gut-wrenching call I had to make, but I felt I had to be true to my system.
I had to be true to the verdict of history.
Has it given you any satisfaction to be right?
No, it really hasn't given me a whole lot of satisfaction.
It has in the sense that I think people may now start rethinking how they understand elections and understand how important governance is, that it's governance, not campaigns, that turns elections.
And I have been preaching this for decades now, and the politicians don't seem to get it.
They think elections are won or lost during the campaign.
I've argued they're won and lost during the four years of the term, of the administration, and it's the governing that really counts.
I have nothing against the pollsters.
A lot of them are good friends of mine.
They're very competent people.
But I am saying polls are not predictors.
Polls are misused and abused as predictors.
I also think that polls lead to lazy and misleading journalism.
You don't even have to get out of bed in the morning to write a story about the polls.
And it leads to this misconception that elections are horse races.
And it misses all the deeper tides that drive elections.
It misses the implications of elections for governance in this country.
Look how the pundits tied themselves into pretzels.
Right before the election, they said, oh, based on the polls, Donald Trump can't win.
A day later, they had to twist themselves around to explain why what they said couldn't happen actually did happen, and it was all after-the-fact and meaningless punditry.
I have to say that, as somebody who was not involved in covering this election but who was watching it very closely, there was something about the numbers I saw on a lot of media websites and political websites, assigning a probability to Hillary Clinton's win, that lent it an air of certitude.
So even though people said in the caveats and the footnotes, look, there's uncertainty, look, we don't quite know what's going to happen and unpredictable things can happen, when you said Hillary Clinton has a 75 percent chance of winning the White House, or an 85 percent chance of winning the White House, you really had a sense of whiplash on election night, because you said, how could you switch from saying you're 85 percent sure that Clinton's going to win to being 85 percent sure that Trump's going to win in a matter of 45 minutes?
This is a classic example of what I call the fallacy of false precision.
It looked like those are very precise and accurate numbers.
They are not.
They are simply based upon the underlying polling data.
And if the underlying polling data is flawed, then those numbers are not only meaningless, they are extremely misleading.
They do not really tell you an accurate probability of winning, because they're based on polls, which may or may not be right.
And none of these analysts go beyond the polls to really probe whether or not the polls are correct.
You and I talked in 2012, and you laid out this model for me, and I remember doing stories about it.
And of course, it made no difference to the way people thought about elections in the next four years.
And I want to talk a little bit about that, because I feel like there are many, many forces that really would not want to hear the message that you have.
Certainly, you raise serious questions about the efficacy and value of political campaigns.
You raise questions about political punditry.
You raise questions about media coverage.
You even raise, I think, disturbing questions for voters, because I think people who are voters at some level enjoy the ups and downs of the campaign.
They want to be gripped by the drama of who's up and who's down, who's suffering from a scandal and who's had a good day, who's won the news cycle and who's lost the news cycle.
And what you're saying is deeply dispiriting in some ways to all of these different constituencies, because you're saying it doesn't matter.
The keys are very disruptive.
And obviously, they're very destructive to the polling industry and to the media industry, because the media makes money by covering the elections as an exciting horse race, by saying who's had a good day, who's had a bad day.
They make money by covering the elections day by day.
And the pollsters make money by keeping score in the horse race, telling us who's ahead and who is behind.
And my model suggests all of that is misleading or worse.
And if you covered it according to the deep structure, it would put a lot of pollsters out of business and perhaps vastly diminish the revenue that flows into the media during an election campaign.
This is the time, of course, when the media cashes in.
There are also some very big lessons about campaigning from the keys that, of course, nobody has followed.
The keys suggest that conventional campaigning takes us in the wrong direction, that all of these negative ads, all of these attacks on one another are the wrong way to campaign, and that the right way to campaign, based on the keys, is to build a mandate for governing over the next four years.
That might not be as exciting as email or sex scandals, but that's the way you establish a basis for governing.
That's for the good of the country, but campaigners and the media don't follow it, because it doesn't have the same excitement and drama as negative campaigning and day-to-day horse race coverage.
Allan, I'm going to ask you to make a prediction, and I want to make a prediction as well.
There's much of what you say that I find persuasive, but I'm going to predict that in two years, we're going to start hearing about the ups and downs of the 2020 campaign, and the next election is going to be just as frenetic, just as poll-driven, and covered with just as much down-to-the-wire excitement as we've had in 2016.
What's your prediction?
Well, you just depressed me immensely, but I have to say, for over 30 years I have been arguing against exactly what you mentioned.
So far, I haven't made a dent in the way campaigns are covered or in the way candidates campaign.
But I have never had the kind of attention that I've had after this prediction of a Trump win, one contrary to everybody else's and one that certainly does not reflect my own political views.
It was hardly an endorsement.
It was a prediction.
So maybe, perhaps, the attention I've gotten from this prediction will make people rethink, even a little bit, the way they look at elections, the way they look at our politics, and maybe, maybe bring together politics, governing, and history the way the keys do.
Allan Lichtman, I want to thank you for talking with me today.
Shankar, it was a great conversation.
The Hidden Brain podcast is produced by Tara Boyle, Jenny Schmidt, Maggie Penman, and Renee Klahr.
For more Hidden Brain, you can follow us on Facebook and Twitter, and listen for my stories on your local public radio station.
If you like this episode, please do a couple of things.
Make sure you are subscribed to our podcast so you get new episodes.
And please tell one friend who doesn't know about us to listen to our show.
Our unsung hero this week is Tanya Blue.
Tanya works in NPR's development department, and she has played a vital role in getting this project launched.
She's also been a voice of wise counsel and someone who has always looked out for us.
Thank you, Tanya.
I'm Shankar Vedantam, and this is NPR.