Freakonomics Radio XX
[0] When you try to envision the future, what do you see?
[1] Do you see a grim picture?
[2] A world, perhaps, in which humans have become marginalized, where technologies created to help us have gained the upper hand?
[3] Open the pod bay doors, HAL.
[4] I'm sorry, Dave.
[5] I'm afraid I can't do that.
[6] The film industry takes a rather dim view of the future, doesn't it?
[7] Indeed, I can't think of a single Hollywood movie about the future on this planet that I want to live in.
[8] Kevin Kelly was a pioneer of internet culture, a founding editor of Wired magazine.
[9] His vision of the future wasn't so bleak.
[10] In the 50s and 60s when I was growing up, there was a hope of everything after the year 2000, and that's the future that I remembered.
[11] In a new book called The Inevitable, Kelly tries to see whether his youthful optimism squares with the technological realities of today and tomorrow.
[12] Short answer?
[13] Yes.
[14] I think that this is the best time in the world ever to make something and make something happen.
[15] All the necessary resources that you want to make something have never been easier to get to than right now.
[16] So from the view of the past, this is the best time ever.
[17] And it's getting better.
[18] Artificial intelligence will become a commodity like electricity, which will be delivered to you over the grid called the cloud.
[19] You can buy as much of it as you want, and most of its power will be invisible to you as well.
[20] But, of course, it could all go wrong.
[21] We've never invented a technology that could not be weaponized.
[22] And the more powerful a technology is, the more powerfully it will be abused.
[23] From WNYC Studios, this is Freakonomics Radio, the podcast that explores the hidden side of everything.
[24] Here's your host, Stephen Dubner.
[25] Kevin Kelly, a writer and thinker, has what he calls a bad case of optimism.
[26] It's rooted in the fact that on average for the past 100 years or so, things have improved.
[27] Incrementally, a few percent a year in growth.
[28] And while it's possible that next year that stops and goes away, the probabilistic view of it is that it will continue.
[29] Kelly envisions a world where ever more information is available at any time, summoned by a small hand gesture or voice command, a world where virtual reality augments our view of just about everything, where artificial intelligence is seamlessly stitched into our every move.
[30] Most of AI is going to be invisible to us.
[31] That's one of the signs of the success of a technology, that it has become invisible.
[32] So invisible that without our even knowing about it, AI will read our medical imaging and approve our mortgages.
[33] It'll drive our cars, of course, and perhaps become our confidant.
[34] So I think our awareness of it, for the most part, will be as a presence in our lives.
[35] And we take it for granted very, very quickly.
[36] In the same way that we take Google for granted. People don't realize it. I try to stress this to my son.
[37] You know, when I was growing up, you couldn't have your questions answered.
[38] You didn't ask questions because there was no way to get them answered.
[39] And now we just routinely ask dozens and dozens of questions a day that we would never have asked back then.
[40] And yet we just sort of take it for granted.
[41] And I think a lot of the AI will be involved in assisting us in our schedules, our days, answering questions as a partner in getting things done.
[42] So think of it as a GPS for your life, in the way that you kind of set your course in the GPS.
[43] And then it's going along and it's telling you how to go, but oftentimes you're overriding it, and it's not bothered by that. It's got another plan right away. And then you change your mind? No problem, I've got another one. I have another schedule here, I'll do this over here, or get this ready for you, I'll make this reservation, I'll buy this thing. No problem, you change your mind? Well, I'll send it back, no problem. It's kind of like having a presence that is anticipating and helping your life. I think that's what it looks like, I would say, even within 20 years.
[44] So we've done quite a few episodes of Freakonomics Radio that address the future, especially when it comes to the interface between technology and employment, the idea of whether there will be, quote, enough, end quote, jobs for people, whatever enough means.
[45] You write that, and I'll quote you, the robot takeover will be epic, which I'm sure will scare some people, and that even information-intensive jobs, doctor, lawyer, architect, programmer, probably writers and podcasters, too, can be automated.
[46] So even if this is what the technology can and wants to accomplish, it strikes me that the political class may well try to stymie it.
[47] I'm curious your views on that.
[48] Yeah, I mean, there was this really great survey poll that Pew did.
[49] Basically, they asked people how likely they thought it was that, you know, most of the jobs would be replaced by robots or AIs.
[50] And it was like 80% of people.
[51] And then they follow this up with how likely they thought that their job would be replaced.
[52] Like nobody, nobody believed that your job would be.
[53] And it was across the board.
[54] And I did the same exact survey actually in a crowd of people who came to my book party, 200 people.
[55] We had instant polling devices.
[56] And I asked the same thing.
[57] It was exactly the same pattern.
[58] Everyone believes that most of the jobs will be replaced, and no one believes that their own job will be replaced.
[59] And I think it's actually neither.
[60] I think most of our jobs are bundles of different tasks, and some of those tasks, or maybe many of them, will be automated.
[61] But they'll basically redefine the kinds of things that we do.
[62] So a lot of the jobs are going to be reinvented rather than displaced, particularly in the kinds of things we're talking about of the professional classes.
[63] I'm not saying that the AI can't be creative, it can be.
[64] In fact, we're going to be shocked.
[65] In some senses, we're going to realize that creativity isn't so creative.
[66] Creativity is actually fairly mechanical, that we will actually be able to figure out how to have AIs be creative.
[67] But the thing is, they're going to be creative in a different way than we are.
[68] I think probably the last job that AI robots will do will be comedian.
[69] I mean, they just have, they'll have a different sense of humor.
[70] They won't get us in the same way that we get us, even though they'll be incredibly creative, they can be brilliant, they'll be smart, they'll surprise us in many ways, they're still not going to do it exactly like we do it, and I think we will continue to value that.
[71] But that assumes, which is if you watch a certain kind of futurist movie or read a certain kind of futurist book, that assumes that the artificial intelligence doesn't essentially obliterate or marginalize us, yes?
[72] Right.
[73] The question is whether an artificial intelligence can only gain at our expense.
[74] And I think that while that's a possibility that we should not rule out, it's an unlikely possibility.
[75] You write, Kevin, that this is not a race against the machines.
[76] If we race against them, we lose.
[77] This is a race with the machines.
[78] Talk about how that begins to happen.
[79] Whether it's a shift in mindset, a shift in engineering, how does it happen that we come to view AI or robotization or automation or computerization as more of a continuing ally than a threat?
[80] So one of the first AIs we made, which was a kind of dedicated, standalone supercomputer, IBM's Deep Blue, beat the reigning chess champion at the time, Garry Kasparov.
[81] And this was kind of the first big challenge to sort of to human exceptionalism, basically.
[82] And when Kasparov lost, there were several things that went through people's mind.
[83] One is, well, that's kind of like the end of chess.
[84] It's like, well, who's going to play competitively because the computers are always going to win?
[85] And that didn't happen.
[86] In a funny kind of way, playing against computers actually increased the extent to which chess became popular, and on average the best players became better playing against the artificial minds. And then finally Kasparov, who lost, realized at the time, he said, you know, it's kind of unfair, because if I had access to the same database that Deep Blue had, of every single chess move ever, I could have won. And so he invented a new chess league, this kind of freestyle league, like mixed martial arts: you can play any way you want. So you could play as an AI, or you could play as a human, or you could play as a team of AIs and humans.
[87] And what's happened, in general, in the past couple of years, is that the best chess player on this planet is not an AI, and it's not a human.
[88] It's the team that Kasparov calls centaurs.
[89] It's the team of humans and AI, because they're complementary, because AIs think differently than humans.
[90] And the same goes for the world's best medical diagnostician: it's not Watson.
[91] It's not a human doctor.
[92] It's the team of Watson plus doctor.
[93] And that idea of teaming up is going to work because inherently AIs think differently; even though they're going to be creative, even though they'll make decisions, even though they'll eventually have a type of consciousness, it will be different from ours, because we're running on different substrates.
[94] It's not a zero-sum game.
[95] And how much AI was applied to the writing of the book?
[96] I mean, obviously, you spell check and things like that, but I'm curious if there's anything else.
[97] Not as much as I would like, because AI has been around for 50 years with very slow progress, and that is because it was very, very expensive to do.
[98] AI was expensive because good artificial intelligence requires a lot of data and a lot of what's called parallel processing power.
[99] But the cost has come down, an unexpected gift from the video game industry.
[100] The reason why we have this sudden surge in AI right now, in the last couple of years, is because it turned out that to do video gaming you really needed to have parallel-processing chips. And they had these video chips, graphics processing units, that were being produced to make video gaming fast. And they were being produced in such quantities that the price actually went down and they became a commodity. And the AI researchers discovered a few years ago that they could actually do AI not on these big, expensive, multi-million-dollar supercomputers, but on a big array of these really cheap GPUs.
[101] Add to that the fact that more and more objects are being equipped with tiny sensors and microchips, creating the so-called Internet of Things.
[102] As Kelly writes, in 2015 alone, five quintillion transistors, that's 10 to the power of 18, were embedded into objects other than computers, which means we will be adding artificial intelligence as quickly and easily as people in the industrial era added electricity and motors to manually operated tools.
[103] I believe people will look back at this time.
[104] We'll look back at the year 2016 and say, oh my gosh, if only I could have been alive then, there was so much I could have done so easily.
[105] And here we are.
[106] But one area of technology isn't keeping up.
[107] And that's batteries.
[108] I think a lot of this Internet of Things, the idea that we take all your shoes and your clothes and the chairs and the books and the light bulbs in your house, and all of them are connected.
[109] I think part of what's holding that back is not the sensors and the chips' intelligence, but the power.
[110] What I don't want to do is spend all my Saturdays replacing all the batteries in all the things in my house.
[111] Predicting the future anytime in any realm is fairly perilous.
[112] You admit in your book that you missed a lot about how the Internet would develop, for instance.
[113] So without meaning to sound like a total jerk, let me just ask you, why should we believe anything you're telling us today about the future?
[114] Yeah, I think every futurist, including myself, is basically trying to predict the present.
[115] And so you should believe me to the extent that it is useful in helping you understand what's going on now.
[116] As much as possible, I'm not really trying to make predictions as much as I am trying to illuminate the current trends that are working in the world.
[118] These are ongoing processes.
[119] These are directions rather than destinies.
[120] These are general movements that have been happening for 20 or 30 years.
[121] And so I'm saying these things are already happening.
[122] It looks like they're going in the same direction.
[123] And so it might be wrong, and I probably will be wrong on much of it.
[124] But I think if you see what I'm seeing, I think you will agree that it's happening right now and that that can be useful to anybody who's trying to make something happen or make their lives better.
[125] Let me just give you a quick little parallel about what I mean by Inevitable, which is the title of the book.
[126] I'm talking about long -term processes rather than particulars.
[127] And so imagine, you know, rain falling down into a valley.
[128] The path of a particular drop of rain as it hits the ground and goes down the valley is inherently unpredictable.
[129] It's stochastic.
[130] It's not at all something you can predict.
[131] But the direction is inevitable, which is downward.
[132] And so I'm talking about those kinds of large-scale downward forces that kind of pull things in a certain direction, and not the particulars.
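Kelly's rain analogy can be sketched as a toy simulation (a hypothetical illustration, not something from the book): each drop's sideways path is random, the unpredictable particular, while its altitude only ever decreases, the inevitable direction.

```python
import random

def drop_path(steps=1000, seed=None):
    """Simulate one raindrop: sideways motion is random (the unpredictable
    particulars), but altitude only ever decreases (the inevitable direction)."""
    rng = random.Random(seed)
    x, y = 0.0, 100.0                 # start at horizontal position 0, altitude 100
    for _ in range(steps):
        x += rng.uniform(-1.0, 1.0)   # stochastic sideways drift
        y -= rng.uniform(0.0, 1.0)    # "downward" is the only certainty
    return x, y

# Two drops land in very different places, yet both end up lower than they began.
x1, y1 = drop_path(seed=1)
x2, y2 = drop_path(seed=2)
print(x1 != x2, y1 < 100.0, y2 < 100.0)  # prints: True True True
```

The point of the sketch is that no amount of modeling recovers any one drop's landing spot, but the downhill trend is guaranteed by construction, which is exactly the distinction Kelly draws between particulars and directions.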
[133] And so I would say that in a certain sense, the arrival of telephones was inevitable.
[134] Basically, no matter what political regime or economic system you had, you would get telephones once you had electricity and wires.
[135] And while telephones were inevitable, the iPhone was not.
[136] The species, the particular product or company wasn't.
[137] So I'm not talking about the particulars of certain inventions.
[138] I'm talking about the general forms of things.
[139] Coming up on Freakonomics Radio, how good of a driver are you, really, compared to an autonomous car?
[140] Even though there's a few people who die from robot cars a year, humans kill one million people worldwide a year, and we're not banning humans from driving.
[141] And if you want to hear more conversations like this one, check out the Freakonomics Radio archive at Freakonomics.com, on iTunes, or wherever you get your podcasts.
[142] Not long ago, I got a text from a friend.
[143] It just said, have you read The Inevitable?
[144] I thought, The Inevitable, what's that?
[145] I had no idea, but it sure sounded scary.
[146] Did North Korea finally bomb someone?
[147] Did Donald Trump finally fire one of his own kids?
[148] But no, that's not what my friend meant.
[149] The Inevitable was the title of a new book by Kevin Kelly about the future.
[150] I do have to say the title of the book sounds to me at least a little scary.
[151] And I'm wondering, were you trying to scare us a little bit or no?
[152] No, I wasn't trying to scare people.
[153] I think people are scared enough as it is.
[154] I was trying to suggest basically what it meant literally, which is that we need to accept these things in the large form. And part of the message of the book, which is a little bit subtle, is that large forms are inevitable, but the specifics and particulars are not, and we have a lot of control over those.
[155] And we should accept the large forms in order to steer the particulars.
[156] One of the inevitable trends that Kelly points out is dematerialization, the fact that it takes so much less stuff to make the products we use.
[157] Yeah, that's a long, ongoing trend.
[158] The most common way is to use design to be able to make something that does at least the same with a smaller amount of matter.
[159] An example I would give is the beer can, which started off being made of steel; it's basically the same shape and size, but it has shed almost a third of its weight through better design.
[161] But you can see how this trend can snowball.
[162] Instead of 100 books on a shelf, I have one e-reader.
[163] Instead of 1,000 CDs, a cache of MP3 files, which I may own or, more likely, borrow from the cloud whenever I want them.
[164] The current example would be the way that people are reimagining the car, which is a very physical thing, as a ride service. You don't necessarily need to buy the car and keep it parked in your garage, and then parked at work not being used, when you could actually have access to the transportation service that a thing like Uber, or taxis, or buses, or public transportation gives.
[165] So you get the same benefits with less matter.
[166] But one of the things that's sitting in my recording booth is a hardcover copy of your book, The Inevitable, and it just strikes me as so weird, given the set of ideas we're talking about today, that it is still published in classic dead-tree format.
[167] And I'm curious whether you felt that there was a little bit of a paradox in that, or are you happy to exploit technologies from previous generations for as long as they're still useful, even if in small measure?
[168] So there's a couple things to say about it.
[169] Let me say the larger thing first, and then I'll get to the specifics about the book.
[170] Most of the things in our homes are old technology.
[171] Most of the stuff that surrounds us is concrete, steel, electrical lights.
[172] These are ancient technologies in many ways, and they form the bulk of it, and they will continue to form the bulk of it.
[173] So in 50 years from now, most of the technology in people's lives will be old stuff.
[174] We tend to think of technologies as anything that was invented after we were born.
[175] But in fact, it's all the old stuff, really.
[176] And so I take an additive view. I had this surprise in my previous book, and many people have challenged it, but never successfully.
[177] And that is that there has not been a globally extinct technology.
[178] Technologies don't go away, basically.
[179] They just become invisible into the infrastructure.
[180] And so, yes, there will be paper books forever.
[181] They will become much more expensive.
[182] And they may be kind of premium and luxury items, but they'll be around simply because we don't do away with the old things.
[183] I mean, there are, like, more blacksmiths alive today than ever before.
[184] There are more people making telescopes by hand than ever before.
[185] So lots of these things, they just don't go away.
[186] But they're no longer culturally dominant.
[187] And so, likewise, paper books will not be culturally dominant.
[188] And in fact, this book, The Inevitable, it has digital versions.
[189] I'd love you to talk for just a couple minutes about the ongoing need for maintenance, even when the technological infrastructure we're building and using every day wouldn't seem to be as inherently physical and in need of maintenance as the old infrastructure.
[190] Yeah, that was a surprise to me. I changed my mind about my early introduction to the digital world.
[191] There was the promise of the enduring nature of bits: when you made a copy of something, it was a perfectly exact copy.
[192] And there was a sense that there were no moving parts, you know, kind of like a flash drive.
[193] There are no moving parts, so it will never break.
[194] But it turns out in a kind of weird way that in more ways than we suspected, the intangible is kind of like living things.
[195] It's kind of like biology in the sense that it's so complicated and interrelated inside that things do break.
[196] And there are little tiny failures, whether it be a particular bit inside a chip, that can have cascading effects and that can actually make your thing sick or broken.
[198] And that was a surprise to me: that software would rot, that computer chips would break, and that, in general, the amount of time and energy you'd have to dedicate to digital, intangible things was almost equal to what the physical realm requires. I think that's a lesson for us heading into the future.
[199] Did it change how you think about running your own technological life?
[200] You write that you used to be one of the last guys to update anything, because, you know, I got used to things the way they are.
[201] I don't need the update or the upgrade.
[202] And how has that changed how you do it now?
[203] Yeah.
[204] Well, I learned by being burned, by experience, by waiting until the last minute to upgrade, that it was horrible, and that it was more traumatic in the sense that when I did eventually upgrade, I had to upgrade not just the current system but everything else that it touched, forming this sort of chain reaction where upgrading one thing required upgrading the other, which required upgrading the other. And when I did these calculations and changed my mode, and tried to upgrade pretty fast, as soon as, you know, maybe not the very first rev but the next one after that, it was actually kind of, it was like flossing.
[206] It was like hygiene.
[207] You just sort of wanted to keep up to date, because in the end you actually spent less time and energy.
[208] It was less traumatic.
[209] And you gained all the benefits of that upgrade.
[210] And so there is a sort of digital hygiene approach to things that I take now.
[211] And that's not the only way that I changed.
[212] I also realize that the purchase price is just one of the prices that you pay when you bring something into your life.
[213] That there is this other thing: you actually do have an ecosystem, even in your household, even in your workplace, whatever it is.
[214] And that in bringing something on, you're now committed to tending it.
[215] When you're talking about bringing something into your home: the one product where I've seen that number, they usually call it cost of ownership, I guess, is cars.
[216] And I don't think it's the car manufacturers themselves who calculate that for you.
[217] Maybe it is, I don't know.
[218] But you do see: this car, here's what it's going to actually cost you over its lifetime, in terms of how much fuel it uses versus another car, how much maintenance it will require versus another car, and, because of the high-end components that it may have, what the cost of replacement will be for those.
[219] And I like that, but I want that calculation attached to everything.
[220] I want that calculation attached to the people that come into my life, even.
[221] Yeah, no, actually, I think you're onto something.
[222] I think this idea of calculating the cost of ownership for digital devices, or software apps for that matter, would be very, very valuable, and it would not actually be that hard to derive, because, you know, everything is being logged in some capacity.
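As a rough illustration of what such a calculation might look like (a hypothetical sketch with made-up numbers, not a method described in the episode): sum the purchase price with recurring costs and the owner's tending time over the item's expected lifetime.

```python
# Hypothetical sketch of the "total cost of ownership" idea discussed above:
# the purchase price plus logged recurring costs plus the owner's time
# spent tending the device (Kelly's "digital hygiene").

def total_cost_of_ownership(purchase_price, yearly_costs, hours_tending_per_year,
                            hourly_rate, lifetime_years):
    """Estimate lifetime cost of a device, including the owner's time."""
    recurring = sum(yearly_costs.values()) * lifetime_years
    tending = hours_tending_per_year * hourly_rate * lifetime_years
    return purchase_price + recurring + tending

# Example with illustrative numbers: a $999 laptop kept for 4 years.
cost = total_cost_of_ownership(
    purchase_price=999,
    yearly_costs={"electricity": 15, "software_subscriptions": 120, "repairs": 40},
    hours_tending_per_year=10,   # updates, backups, battery swaps
    hourly_rate=20,
    lifetime_years=4,
)
print(cost)  # prints 2499
```

Even with these invented figures, the sticker price ends up well under half the lifetime total, which is the point of attaching such a calculation to everything.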
[223] By looking deeply into the present, Kelly sees a future where more and more of our moves are being tracked, whether because of data we voluntarily make public, as on Facebook or otherwise.
[224] Inevitably, we will be tracking more and more of our lives and will be tracked more and more, and that's inevitable.
[225] And what we have a choice about is the particulars of how we do that, whether we do that civilly or not.
[226] We have to engage this.
[227] I was maybe a little bit frustrated by the fact that there's often an initial reaction from many corners of trying to prohibit things before we know what they are.
[228] And that's called the precautionary principle.
[229] It says simply that there are things that we should not allow in our lives until they're proven harmless.
[230] And I think that doesn't work.
[231] Has that ever happened with a major invention, period?
[232] That we proved that it was harmless.
[233] Yeah.
[234] Before it being, you know, let's say, widely adopted.
[235] In general, no. I don't think that there has ever been that, and I think it's kind of unfair to request it, but it does seem to be a current movement, like, say, in the genetically modified crop area.
[236] So people are saying we can't have these because we can't prove that they're harmless.
[237] And so there are attempts to do that with AI driving, with robot cars, saying: no, no, you can't have robot cars on the road until we've proven that they're completely safe.
[238] And that's not going to happen.
[239] And that's unfair because even though there's a few people who die from robocars a year, humans kill one million people worldwide a year.
[240] And we're not banning humans from driving.
[241] In the future that you envision, who are the biggest winners and losers?
[242] I think it's all comparative.
[243] I think there will certainly be people who would gain more than others.
[244] And to those who only gain a little, it might seem that they lost.
[245] But I suspect that everybody will be gaining something.
[246] And perhaps the poorest in the world will continue to gain the most over time.
[247] But there will be people who won't gain as much as many others.
[248] I don't want to call them losers, but those people, I think, are going to, by and large, be those who will be unable to retrain or unwilling to retrain.
[249] And I think retraining or learning is going to be kind of like a fundamental survival skill because it's not just the poor who have to be retrained.
[250] I think even the professionals, people who have jobs who are in the middle class.
[251] I think this is going to be an ongoing thing for all of us: we are probably going to be changing our careers, changing our business cards, changing our titles, many times in our lives.
[253] And I think there will be the resources to retrain them; whether there's the political will, I don't know.
[254] I kind of take a Buckminster Fuller position, which is that if you look at the resources, they're all there, there's enough food for everybody.
[255] The reason why there's famine is not because there isn't enough food; it's because there isn't the political will to distribute it.
[256] And it only takes one bad actor to ruin the livelihood of a couple hundred thousand people.
[257] That's, you know, that's a leverage that exists, you know, even in humans, forget about machines.
[258] Exactly.
[259] So I think this technology is going to benefit or can benefit everybody.
[260] But whether it does or not, specifically whether it does, that is a choice that we have to make, and it will make a huge difference.
[261] So in the abstract sense, I think this technology does not necessarily make losers.
[262] But that doesn't mean that there won't be, because I think we do have choices about how we make things specifically.
[263] The Internet was inevitable, but the kind of Internet that we made was not, and that was a choice that we made: whether we made it transnational or international, whether it was commercial or nonprofit.
[264] Those choices are choices that we have.
[265] Those choices make a huge difference to us.
[266] And so I think inherently the technology has the power to benefit everybody and not make losers, but that's a political choice in terms of the particulars of how it's applied.
[267] And therefore, I think we do have to have those choices.
[268] It also seems just out of fairness to your argument, really, that just as you can't foresee all the benefits of what technology will give birth to, nor can you see the downsides, right?
[269] I mean, there's just no way for any one of us sitting here now to see what that's really going to be.
[270] I'm sorry, Dave.
[271] I'm afraid I can't do that.
[272] Yeah, right.
[273] We've never invented a technology that could not be weaponized.
[274] And the more powerful a technology is, the more powerfully it will be abused.
[275] And I think this technology that we're making is going to be some of the most powerful technology we've ever made.
[276] Therefore, it will be powerfully abused.
[277] And there's the scary part of the Kevin Kelly view of the future.
[278] Exactly, right.
[279] But here's the thing.
[280] Most of the problems we have in our life today have come from previous technologies.
[281] And most of the problems in the future will come from the technologies we're inventing today.
[282] But I believe that the solution to the problems that technology created is not less technology, but more and better technology.
[283] And so I think technology will be abused and that the proper response to those abuses is not less of it, to prohibit it, to try and stop it, to turn it off, to turn it down.
[284] It's actually to come up with something yet even better to try to remedy it, knowing that that itself will cause new problems, knowing that we then have to make up new technologies to deal with that.
[285] And so what do we get out of that race?
[286] We get increasing choices and possibilities.
[287] All right.
[288] Kevin Kelly, one last question.
[289] You argue that technology is prompting us to ask more and better questions, advancing our knowledge and revealing more about what we don't know.
[290] You write, it's a safe bet that we have not asked our biggest questions yet.
[291] Do you really think that we haven't asked, I guess, the essential human questions yet?
[292] What are they?
[293] And I ask that, of course, with the recognition that if you knew the answer to that question, we wouldn't be having this conversation.
[294] Well, what I meant was we're moving into this arena where answers are cheaper and cheaper.
[295] And I think as we head into the next 20 or 30 years, that if you want an answer, you're going to ask a machine, basically.
[296] And the way science moves forward is not just by getting answers to things, but by then having those answers provoke new questions, new explorations, new investigations, and a good question will provoke a probe into the unknown in a certain direction.
[297] And I'm saying that the kinds of questions that, say, Einstein had, like: what would it look like if you sat on the end of a beam of light and you were traveling through the universe at the front of a beam of light?
[298] Those kinds of questions were sort of how he got to his theory of relativity.
[299] There are many of those kinds of questions that we haven't asked ourselves.
[300] The kind of question you're suggesting about what is human is also part of that because I think each time we have an invention in AI that beats us at what we thought we were good at, each time we have a genetic engineering achievement that allows us to change our genes, we are having to go back and redefine ourselves and say, well, wait, wait, what does it mean to be human or what should we be as humans?
[301] And those questions are things that maybe philosophers have asked, but I think these are the kind of questions that almost every person is going to be asking themselves almost every day, as we have to make some decisions about, is it okay for us to let a robo soldier decide who to kill?
[302] Should that be something that only humans do?
[303] Is that our job?
[304] Do we want to do that?
[305] They're really going to come down to, like, dinner table conversation level: what are humans about, what do we want humans to become, what am I as a human, as a male, as an American?
[306] What does that even mean?
[307] So I think that we will have an ongoing identity crisis personally and as a species for the next, at least forever.
[308] So I have to say for all this talk of technology and the future of technology, you have weirdly made me feel a bit more human.
[309] And for that, I thank you.
[310] You know, you're not a robot because you ask such great questions.
[311] The true test will be how I do at comedy, though, correct?
[312] Exactly.
[313] And you laughed at my joke, so we know you're alive as a human.
[314] Next time on Freakonomics Radio, Steve Levitt, my Freakonomics friend and co-author, has long dreamed of solving an economic puzzle, and he's finally done it, with the help of an app.
[315] I love Pokemon Go.
[316] No, not that app.
[317] I have had a burning question about economics that I've wanted to answer for almost 15 years.
[318] And by using Uber data, I've finally been able to get to the bottom of it.
[319] What Uber can teach an economist and the rest of us about consumer surplus.
[320] Trust me, it's more interesting than it sounds.
[321] That's next time, Freakonomics Radio.
[322] Freakonomics Radio is produced by WNYC Studios and Dubner Productions.
[323] This episode was produced by Christopher Worth.
[324] Our staff also includes Arwa Gunja, Jay Cowit, Merritt Jacob, Greg Rosalsky, Caitlin Pierce, Alison Hockenberry, Emma Morgenstern, and Harry Huggins.
[325] Remember, you can subscribe to Freakonomics Radio on iTunes or wherever you get your podcasts.
[326] You can also visit Freakonomics.com, where you'll find our entire podcast archive, as well as a complete transcript of every episode ever made, along with music credits and lots more.
[327] Thanks for listening.