Armchair Expert with Dax Shepard XX
[0] Welcome, welcome, welcome to Armchair Expert.
[1] Experts on Expert.
[2] I'm Dax Shepard.
[3] I'm joined by Monica Mouse.
[4] Hi.
[5] How are you?
[6] Oh, we were saying that maybe my new animal is a, it's like a mouse.
[7] A hyrax.
[8] Hyrax.
[9] You're a hyrax.
[10] Monica Hyrax.
[11] Oh, yeah, Monica Hyrax Padman.
[12] Because you said your new nickname is the Boulder.
[13] Oh, shit, right.
[14] That was on another thing that we didn't air.
[15] Yeah.
[16] Yeah.
[17] That's important.
[18] People should recognize that I am now going by the Boulder.
[19] Yep.
[20] Because what's more formidable, the Boulder or The Rock? I think the viewer can decide.
[21] This guest, I really hope people listen to this interview because I'm going to put him into a handful of guests we've had where I just was literally overwhelmed with the scope and breadth of their knowledge of how all things work.
[22] Where I was like, oh, wait, does this person kind of understand how everything works in concert?
[23] It's amazing to be in front of someone like that.
[24] Yes, and that man is Eric Schmidt.
[25] Now, Eric Schmidt is a technologist, an entrepreneur, and a philanthropist.
[26] He is the founder of Schmidt Futures and the longtime former CEO of Google.
[27] Yeah.
[28] I started this interview wondering, how would Google's two founders pick someone they thought was more qualified than themselves to run their company?
[29] Yeah.
[30] Who would that person be?
[31] And we met him.
[32] He has a new book out called The Age of AI and Our Human Future.
[33] The Age of AI and Our Human Future explores what society must be focused on right now to make sure that we are prepared for a new reality where humans and AI must coexist.
[34] He wrote this book with Henry Kissinger and Daniel Huttenlocher.
[35] Fascinating interview.
[36] Please enjoy Eric Schmidt.
[37] Wondery Plus subscribers can listen to Armchair Expert early and ad-free right now.
[38] Join Wondery Plus in the Wondery app or on Apple Podcasts.
[39] Or you can listen for free wherever you get your
[40] podcasts.
[41] So first of all, nice to see you.
[42] Great to see you.
[43] Yeah.
[44] I guess a year ago, if someone had asked me, do you think you'll ever meet Eric Schmidt?
[45] I would have said no. And then here we are twice face to face within four months.
[46] Oh, because of the Illuminati.
[47] Okay, okay.
[48] We've had so many people on from that event.
[49] And I'm so jealous.
[50] And if Adam doesn't invite me next year, I'm going to be mad.
[51] Well, Jared Cohen's the guy you want, I think, on your side in this scenario.
[52] He's getting an email from me. He controls the world as far as I'm concerned.
[53] And he's grown into a real person.
[54] You know, he started off as sort of a kid in the State Department.
[55] Yeah.
[56] But he made a significant set of accomplishments.
[57] You may not know this, but his most significant achievement has been he worked
[58] tirelessly for the last few weeks to get all the Afghan refugees out.
[59] And what he did, which I didn't understand, is he simply called all the prime ministers and yelled at them.
[60] Oh, really?
[61] You have to do this.
[62] You have to do this.
[63] You have to do this.
[64] Because it was the only way to get places to put these people.
[65] And so a couple thousand people looked like they got out because of Jared's direct intervention.
[66] Oh, my God.
[67] And the help of a lot of other people.
[68] And that's a great day.
[69] Oh, my gosh, yes.
[70] I don't know that I'll ever be able to make a claim like that.
[71] No. That's so wonderful, yeah, that I actually got people to safety.
[72] Saving lives, I mean, this is why people go into being doctors.
[73] Imagine walking in and saving someone's life.
[74] It's really an extraordinary achievement.
[75] Yes.
[76] What's interesting to me is that we have problems in society that we don't know how to solve.
[77] And I view these as the hard problems.
[78] I'll give you an example of addiction.
[79] Another one is homelessness.
[80] And why isn't my industry trying to figure out how to solve these problems? Maybe because the science is too hard.
[81] But wouldn't it be amazing if there was some app that came along that could really address addiction, homelessness, or inequality of some form, you know, unfairness in some form?
[82] We need to start working on that stuff.
[83] Well, wouldn't you say conventionally the hurdle is that those aren't profitable endeavors, that there's not a ton of incentive for any industry to tackle those?
[84] But the interesting thing about my world is it's now gotten so large and you have so many young people entering it
[85] who really are principled.
[86] They really do care about climate change and society and ethics and so forth and so on, maybe more so than my generation.
[87] There are plenty of opportunities for those people to do this.
[88] There's plenty of nonprofits that need help addressing these issues.
[89] I think with the underlying problems, we just don't know how to solve them yet.
[90] Yeah.
[91] Well, let's get dangerous and dive into homelessness as I've experienced it here in Los Angeles.
[92] So first and foremost, I'd like to say I've been involved, be it fractionally compared to my wife and other friends of ours, but PATH is this great organization in Los Angeles, and it helps get people into permanent housing, and there are some hurdles, of course, but it's a great program.
[93] So I've been very in favor of adding services, helping, helping, helping.
[94] Something very peculiar happened within the first six weeks of quarantine here in L.A., which is the corner where we would normally have a couple hundred people in a three-block area had shrunk to like a few people.
[95] And so I was asking myself, well, where did these homeless folks go?
[96] And you're not going to like my conclusion and America's not going to like my conclusion, but my conclusion was there's nobody out to panhandle from.
[97] There's no source of money.
[98] And no human's going to sit on this sidewalk for another 10 weeks and evaporate.
[99] They went somewhere else.
[100] They went somewhere.
[101] They contacted a relative they hadn't talked to.
[102] They had done something.
[103] And at that moment, my whole lens kind of shifted.
[104] I was like, is throwing a bunch more services and money at this problem, in fact, making it worse?
[105] Well, one answer, which is the doctrinaire answer, is there is a real problem in housing in our country.
[106] And if you look, it's true globally: housing prices continue to accelerate, and they've been doing it for 20 years, to the point where they're almost bankrupting, shall we say, normal jobs, normal people. That's a problem.
[107] Yes.
[108] Now, some of that is due to the high cost of manufacturing,
[109] but some of that is also zoning.
[110] In California, for example, there's a bill about trying to stop communities from essentially limiting themselves to just single-family houses.
[111] You want to come up with multifamily solutions.
[112] In the Bay Area, where I spent many decades living, there's a chronic housing shortage.
[113] The way New York solves that problem is by building skyscrapers, which is precisely not what San Francisco seems to want to do.
[114] So we've got to come up with a shared agreement on how to get more housing and especially more affordable housing.
[115] The current path where the developer gives a percentage of the housing is fine, but the scale of the unmet housing need is so large.
[116] Yeah.
[117] And it's a combination of cost, but also regulation.
[118] And nobody wants the housing complex full of all the poor people next to their expensive office or home or what have you.
[119] We have to come up with some kind of compromise where people can coexist.
[120] Yeah.
[121] If you look at the great cities of the world.
[122] The great cities are where all the productivity growth is.
[123] It's where all of the companies are essentially going to get born from.
[124] They're not going to get born in rural places.
[125] It's just the nature of how knowledge workers work.
[126] We've got to find a way to get more people into cities.
[127] In China, they've solved this problem by building new cities.
[128] And they built so many new cities.
[129] They've in fact got a credit bubble and so forth.
[130] But at least they're trying to address the housing problem.
[131] Yeah.
[132] And so right away you bring into it the full scope of the problem, is it's so multifaceted.
[133] And when we're talking about homelessness, are we talking about people?
[134] There's some segment that are, yes, one check away from not making the rent.
[135] They're living in a very insecure way.
[136] That's one group of people.
[137] There's also, I did an ethnography in college on Skid Row.
[138] And the vast, vast majority of people I interviewed are addicts primarily or people with pretty severe mental health issues.
[139] Remember, there's many sources of this and you do them one by one.
[140] You need more housing.
[141] We need some new ways to treat mental illness, and we need some new ways to treat addiction, and we also need to stop criminalizing some addictive behaviors. If you look at the Portugal experiment, where they essentially liberalized the laws with respect to all hard drugs, what they did is they turned it from a criminal activity to a medical addiction problem, and all of a sudden people were willing to seek help. The interesting thing is that crime went down but addiction did not go down. So that tells you that addiction is a harder problem than it appears.
[142] And that's why I said if our industry could come up with some new ideas.
[143] And in fact, I would argue that the current industry focus, which is around revenue, is in fact playing into the addiction capabilities of every human.
[144] Yes.
[145] That what happens with social media is you essentially become addicted.
[146] And furthermore, what I'll say is that corporations, at least in social media land, are optimized around maximizing revenue.
[147] To maximize revenue, you maximize engagement.
[148] To maximize engagement, you maximize outrage.
[149] The same is true on television news networks, by the way.
[150] Okay, so I think this is a great bridge into what could possibly be the utopian aspect of AI.
[151] There's going to be a dystopian aspect and a utopian aspect.
[152] But we can imagine a world that Yuval Harari has suggested, or many other people have suggested, where machines will ultimately do all the tasks that laborers are currently doing. I know one of the fun thought experiments we could run is whether this job Monica and I do could eventually be done.
[153] But we're going to earmark that. In this world, this utopian world, we're not doing any of the menial tasks anymore and we are really just being creative.
[154] Obviously, there has to be an enormous paradigm shift in money, in earning, and expending and living.
[155] So I guess in that utopian future, if money itself is not the incentive, then these other things could be incentives.
[156] Right.
[157] So let's think about 200 years from now or 300 years from now.
[158] Yeah.
[159] Unfortunately, none of us will be around.
[160] But our children's descendants will be...
[161] Eric Lander thinks different.
[162] Let's see.
[163] So the important point here is let's think beyond our current lifetimes.
[164] It's clear, by the way, that the lifestyle of a millionaire today will be affordable to a normal person because that has been the pattern.
[165] The pattern is that the cars and the TVs and so forth initially come out for the rich people.
[166] And then as prices come down, volume increases, they become more affordable.
[167] Many people believe that over a long enough period of time, the basic goods of society, eating, sleeping, construction, Netflix, will all become affordable to everyone.
[168] Now, this is a utopian view, but let's imagine if that were true.
[169] And let's say it were true globally for the 11 billion people that our globe will have past 2100.
[170] What would life be like?
[171] Now, then people immediately conclude that everyone will be watching television and relaxing and lying by the pool, which is precisely not what's going to happen.
[172] What will happen is humans will compete over other things.
[173] Yeah, yeah.
[174] They'll compete for complexity.
[175] They'll compete for sports.
[176] They'll compete for power.
[177] Humans are not going to give up identity.
[178] What's the scarcest thing in a world full of everything?
[179] Fame.
[180] Right.
[181] Because it's the one thing that everyone doesn't have.
[182] Yeah, yeah.
[183] So you'll have huge competition over fame.
[184] One way to understand social networks is that every 10 years, there's a new set of 18-year-olds, if you will, who want to become famous.
[185] Yeah.
[186] And they look at the 28-year-olds in the previous social network, and they say, I want that.
[187] I just have that talent and so forth.
[188] but those positions are taken.
[189] So the new social network comes off with new players.
[190] We saw this with YouTube.
[191] We saw this with Snapchat.
[192] We've seen this with Instagram, and we're clearly seeing it with TikTok.
[193] And by the way, there'll be one after it, and they're roughly on a 10-year basis because of this fame scarcity.
[194] Yeah, I would just add, we could just generally call that status.
[195] And as very social primates, that's never leaving us.
[196] We are always going to be obsessed with our ranking in our group.
[197] There's a famous economist named Herb Simon who, in 1971,
[198] said that the abundance of information leads to a scarcity of attention.
[199] And economists think not in terms of abundance, but in terms of what is scarce, right?
[200] What do we fight over?
[201] Right.
[202] So if we had infinite money, we would fight over something else.
[203] Yeah.
[204] But the one thing we don't have a lot of, and we'll never have a lot of, is social hierarchy where everyone's the same.
[205] That's right.
[206] So we're going to compete for power and attention and whatever replaces money in this utopian scenario.
[207] A much more likely scenario, and it's always fun to talk about utopian and dystopian, but a much more likely scenario is coexistence with AI and with all the things that we'll talk about will be happening.
[208] And in this coexistence, we'll come up with new things to collaborate and new things to compete with.
[209] But we're not going to be sitting at the beach in a world full of resources.
[210] It's not how humans work.
[211] There's no pride in it, no honor in it, no status in it, no nothing in it.
[212] Well, and then, using a stereotype, you sit at the beach and you compete on the quality of your tan.
[213] Sure, sure.
[214] Where you got, what position you have?
[215] How close are you to the water?
[216] Do you get the right seat?
[217] We will compete over scarcity.
[218] I got to throw a non sequitur at you right now.
[219] So I met you, and you didn't talk a lot.
[220] You're like a very generous host.
[221] You were the host of this thing I attended.
[222] And you let everyone else shine.
[223] And now just within five minutes of talking to you, I'm like, this motherfucker's brilliant.
[224] That's the way to do it.
[225] That's what you do.
[226] Slow play.
[227] Yes.
[228] Yeah.
[229] I love just the overall,
[230] comprehensive knowledge you have.
[231] That's really kind of you, Dax.
[232] I'll give you my life advice.
[233] Yes.
[234] Which is ask a lot of questions and be curious.
[235] If you ask a lot of questions, you can be annoying, but people are always flattered if you ask them their opinion.
[236] Oh, yeah.
[237] Always works.
[238] And if you're actually listening to them and curious, you might learn something.
[239] And frankly, it's so much easier to let people who are experts explain things to you than do your own research.
[240] I learned a while ago not to
[241] trust marketing.
[242] So whenever people pitch me, I always wonder, where is that coming from?
[243] What are they trying to get out of me?
[244] But when I've got somebody who has a perspective on a problem and who can give me the scope of it, I want to learn from them.
[245] Yeah, I mean, we're doing it professionally, which is a real hack, I think.
[246] This is sort of what you do.
[247] Yes, yes, yes.
[248] And we're in on the joke.
[249] But if everyone modeled your behavior, we wouldn't have these essentially amplification scale things where everybody all of a sudden believes something without any kind of criticism.
[250] At Google, when I first started there, and this is now 20 years ago, we sort of adopted a policy that whenever anyone would say something, we would query it.
[251] The first CFO we had was making a presentation before we were public, and he made a statement which I thought might be false, and so he's talking and I checked it.
[252] It was a minor thing.
[253] But it's an example of how you get high performance teams, right?
[254] People operate on all sorts of beliefs, which are not based in facts.
[255] The Google principle was, check it.
[256] No, you're so right.
[257] I think it's often tempting to reduce people to their specialty and then ignore the vastness of their humanity, which is like everyone's got an ego, everyone feels embarrassed, everyone feels called out.
[258] These have to always be in the stew when we're making these decisions.
[259] But what's happening because of the amplification...
[260] It's easy to use social media as an example.
[261] Yeah.
[262] My book, our book is really about what happens when this thing gets much worse, which is the dystopian side.
[263] But I will tell you that the fact that everyone believes something does not necessarily mean it's true.
[264] The fact that your friend believes it does not necessarily mean it's true.
[265] And because of the overload of information and all the people purveying to you for whatever reason, whether it's the Russians or marketing or evil people that you know or what have you, check it out on your own.
[266] It's not a bad way to live your life.
[267] Be a little skeptical.
[268] Be skeptical about what I say.
[269] Check what I say.
[270] See if you agree with me. I'd welcome that.
[271] We interviewed a great professor down at Texas Tech.
[272] And he was saying, in general, when you're receiving information, if it tastes just like candy, that's a red flag.
[273] By the way, that is a great analogy.
[274] I love that.
[275] What's happening now is it's boom, boom, boom, boom, boom, boom.
[276] And we're trying to keep you going
[277] in the game.
[278] And so one of the tricks is turn off the television and turn off your news feed and live in a local world and see how you feel.
[279] Yeah.
[280] As a matter of philosophy, we should be thankful that we are here, we're alive, we're in a great country, we have all these great opportunities and so forth.
[281] And instead, all the surveys indicate that we're more unhappy than ever.
[282] Yeah.
[283] Yeah, I think he said critical thinking if you're doing it right should hurt.
[284] Yeah.
[285] And I think that's right.
[286] That's a very bad marketing message.
[287] But it's correct.
[288] Yeah.
[289] Okay, so let's walk through for a second.
[290] I do have a couple questions from your childhood just because they jumped out at me as potentially interesting.
[291] Well, first of all, your mother had a master's in psychology.
[292] Your father was an economics professor at both Virginia Tech and at Johns Hopkins.
[293] Yes.
[294] And at George Washington, where I was born.
[295] Uh-huh.
[296] I said, why was I born there?
[297] He said, it's really simple.
[298] Birth was free for professors.
[299] Yeah, done deal.
[300] And I said, now I've learned something about economists.
[301] Of course.
[302] But also worked for the Treasury.
[303] And at some point, you moved to Italy.
[304] What ages was that?
[305] I was eight.
[306] You were eight.
[307] I moved to Europe for two years as a boy.
[308] And I can tell you that it was an enormously transformative experience because at the time, Americans didn't travel to Europe the way they do
[309] now.
[310] And so this was post-war Europe, still very poor compared to the United States.
[311] We lived very well on a professor's salary because of what is unimaginable today.
[312] Europe was very inexpensive.
[313] Sure.
[314] But it caused me to appreciate the global nature of life.
[315] Italians are the warmest people you'll ever meet.
[316] It's a great history and a great culture.
[317] I was there recently to receive an award and give a speech where I grew up.
[318] And I was really struck that this is a country which had this horrific COVID thing in Lombardy in February and March of last year, but they've made the necessary changes, their identity is strong, and they're growing well.
[319] Who would have expected such a strong narrative in the EU from Italy of all countries?
[320] Yeah.
[321] So it shows you that human systems can respond to the challenge.
[322] Now, did you feel like an outsider there?
[323] They made me very welcome.
[324] I spoke Italian very quickly.
[325] Oh, okay.
[326] Yeah, so I was treated very well.
[327] If you have an opportunity to take your family and have your kids live in a European country, it's enormously broadening.
[328] When I came back to America, I realized how completely nativist, how completely self -obsessed the United States is.
[329] A good example is watch Fox or CNN and then watch BBC news.
[330] You'll see the difference.
[331] In that they are focusing much more outward.
[332] The majority of the people in the world do not live in the United States.
[333] The majority, therefore, of the drama is outside the United States.
[334] Yes, yes.
[335] The news is therefore slanted in the U.S. in the ways that we understand.
[336] And if you look today, the cable news is essentially obsessed about politics.
[337] And we've lost our local news leadership in America because of the economics of the Internet and so forth.
[338] The incentives.
[339] And so where are the local stories?
[340] Here we are in Los Angeles.
[341] There's 10 million people in the greater Los Angeles basin, actually more than that.
[342] There's a lot going on.
[343] Oh, yeah, yeah, yeah.
[344] Okay, now when you went to college, initially, you majored in architecture, and I just had a tiny curiosity if you had fantasies of being Howard Roark.
[345] No, but what happened was I liked architecture, and I applied as an architecture candidate.
[346] And I was just interested in it.
[347] And what I realized very quickly is I was a terrible architect, but I was quite a good engineer.
[348] Uh-huh.
[349] Because I was not creative enough.
[350] And I continued to believe that is true today.
[351] The other thing that was interesting about the architecture people is they worked harder than anyone else.
[352] Architecture as a major is rough.
[353] Really?
[354] So it was interesting.
[355] And all I could tell you is my interest in architecture has translated into the architecture of corporations.
[356] So people who are engineering-oriented architects, they think of scale.
[357] And in computer science, again, I was an early part of computer science.
[358] When I was at Princeton, there was no computer science program at all.
[359] Today, it is the number one major of the entire university.
[360] Wow.
[361] So in two generations, it's actually gone
[362] from zero to highest ranking.
[363] And that is representative of the transformation that our society is going to see.
[364] Independent of all the crazy stuff going on in our government, in our local environment, and here in Los Angeles, the fact of the matter is this huge cohort of early 20-something men and women, and I'm proud to say that it's almost half women now, which is an amazing achievement.
[365] They're going to come in and they're going to apply these tools to the problems around us.
[366] And they're going to make money and change the world and all of that in ways that are appropriate for their generation.
[367] But when I was there, there was only, you know, one computer.
[368] We stayed up all night because it was shared, and it was too slow during the day.
[369] Right.
[370] And even your doctorate degree, which you ended up getting at Berkeley, it's like, what's the degree?
[371] ECCU or something?
[372] EECS, double E, because computer science wasn't there.
[373] Right.
[374] So even your...
[375] So my point is that people don't appreciate how much has changed.
[376] I'll give you another example.
[377] The computer that I used in Princeton is 100 million times slower
[378] than an iPhone or an Android phone.
[379] I didn't say 100,000.
[380] I said 100 million times.
[381] That's crazy.
[382] And the number one import to China is semiconductors made elsewhere.
[383] It shows you how powerful the digital revolution is and what is at stake.
[384] Well, I like the parallels people have been making in regards to Taiwan, that Taiwan will most likely be seen as Kuwait was in 1990, where we cannot allow it to be taken over.
[385] It's too vital to the world economy.
[386] I think that's the consensus.
[387] And I'll give you the background, by the way.
[388] There's a fab, a foundry as it's technically known, called TSMC, the Taiwan Semiconductor Manufacturing Company.
[389] And it was founded by a fellow named Morris, who actually was a physicist at Berkeley and so forth in the 60s and 70s.
[390] And he was trained in America and he went back to Taiwan, which at that time was very Western compared to China.
[391] And so working in Taiwan, he managed to build this extraordinary company, which is worth many hundreds of billions of dollars, and it is roughly half of the world's foundries for the key chips that we all use.
[392] And it's a triumph of engineering, it's a triumph of science, and it's also a triumph of government policy because they received enormous financial support from Taiwan to do this.
[393] So again, just like we had with Operation Warp Speed, we had the government procuring the vaccines, whether they worked or not.
[394] You had huge private sector risks.
[395] Remember, most of the vaccines didn't work.
[396] And then we had the universities helping in a real national emergency.
[397] That's what it took.
[398] So Taiwan did this.
[399] So now we faced the question, and I did a report for the AI Commission for the Congress, which said that we have to stay two generations ahead of China, mainland China, in semiconductors.
[400] Well, we are well behind Taiwan.
[401] So it's Taiwan first, the U.S. second, and China third.
[402] The interesting thing geopolitically is that China has spent essentially infinite money to catch up on semiconductors.
[403] They literally have the top talent and so forth, but it's so hard that the choke points remain key hardware design things, which are called extreme ultraviolet.
[404] There's a company called ASML in the Netherlands, which is the only company that knows how to do this.
[405] You need this to go below what is called the 10 nanometer line.
[406] And the other thing that has occurred is that the cost of these fabs is so great that there'll be relatively few of them.
[407] The U.S. and China have both demanded that Taiwan build such fabs in the United States, and guess what?
[408] They're in the process of building such fabs, but they're not putting them at the state of the art. So if you use, for example, the Apple Mac M1 Pro or M1 Max chips, those are so-called 5-nanometer technology from Taiwan; they are the product of an enormous, enormous investment at Apple, but also in Taiwan.
[409] Mm-hmm.
[410] The other competitor, so you know who it is, is Samsung, and that's in South Korea.
[411] Mm-hmm.
[412] Stay tuned for more Armchair Expert, if you dare.
[413] What's up, guys?
[414] It's your girl Keke, and my podcast is back with a new season, and let me tell you, it's too good.
[415] And I'm diving into the brains of entertainment's best and brightest, okay?
[416] Hey, every episode, I bring on a friend and have a real conversation.
[417] And I don't mean just friends.
[418] I mean the likes of Amy Poehler, Kell Mitchell, Vivica Fox, the list goes on.
[419] So follow, watch, and listen to Baby,
[420] This Is Keke Palmer on the Wondery app or wherever you get your podcasts.
[421] We've all been there.
[422] Turning to the internet to self-diagnose our inexplicable pains, debilitating body aches, sudden fevers, and strange rashes.
[423] Though our minds tend to spiral to worst-case scenarios, it's usually nothing.
[424] but for an unlucky few, these unsuspecting symptoms can start the clock ticking on a terrifying medical mystery.
[425] Like the unexplainable death of a retired firefighter, whose body was found at home by his son, except it looked like he had been cremated, or the time when an entire town started jumping from buildings and seeing tigers on their ceilings.
[426] Hey listeners, it's Mr. Ballin here, and I'm here to tell you about my podcast.
[427] It's called Mr. Ballin's Medical Mysteries.
[428] Each terrifying true story will be sure to keep you up at night.
[429] Follow Mr. Ballin's Medical Mysteries wherever you get your podcasts.
[430] Prime members can listen early and ad-free on Amazon Music.
[431] Now, here's where it gets a little tricky, right, as I understand it.
[432] As you point out, the investment is so large.
[433] You need state cooperation.
[434] And I have a very cursory knowledge of it from watching a 60 Minutes segment on it.
[435] But Intel, of course, had been the leader.
[436] And they didn't really have any kind of U.S. support.
[437] It was not invested by us in the same way Taiwan invested in that company.
[438] And then now we get into a really tricky scenario because the headline will read like U.S. propping up a trillion-dollar company.
[439] So it looks like it's a corporate welfare program.
[440] And I think we lose sight of the national security aspect.
[441] And by the way, that's a political problem in our system that we're going to have to overcome.
[442] But let me tell you that China doesn't have this problem.
[443] Right.
[444] China is pouring money to achieve chip leadership, which will hurt America.
[445] American firms,
[446] every industry that we're on the precipice of competing over.
[447] Yeah.
[448] And in fact, to continue on China, China has defined the following markets as interesting to it.
[449] Artificial intelligence, software, AI, quantum computing, energy, financial services, and essentially tracking and face recognition, and a few others.
[450] Well, that's essentially the whole world I occupy.
[451] That's everything that I am working on in my history.
[452] It's the source of the primary profit
[453] for all of the globe's fastest-growing companies, all of the stock market winners that have essentially been built on that basis.
[454] So the Chinese are very clear on where their priorities are.
[455] What's the equivalent list in the United States?
[456] We don't have one.
[457] We can argue over steel and corn and wheat and so forth.
[458] Those are fine things to argue about.
[459] They are not strategic.
[460] No. Occupying the strategic platform for the globe is crucial.
[461] And let me give you an example of how we screwed up.
[462] Really quick.
[463] Bioengineering is in
[464] there as well.
[465] Actually, I'm sorry, and I omitted synthetic bio, that's a terrible omission on my part.
[466] That's going to be huge.
[467] It's going to be immense.
[468] China's building a biobank that is even larger than anybody else's.
[469] So again, it's coming.
[470] And we need to get organized around this.
[471] How do we do that?
[472] Well, first we need a national plan.
[473] And the national plan has to involve some amount of financial support from the government.
[474] The government always has supported basic research.
[475] Thank you very much.
[476] It's at the lowest level it's been since Sputnik.
[477] That's not good.
[478] But the other thing that happened was we got out of the semiconductor business and out of the 5G business in the United States, with the exception of Intel and a few other cases, because the profits weren't there.
[479] It was better to be linked to a supply chain to Asia primarily.
[480] Well, now with these national security concerns, we need to do some reshoring.
[481] Reshoring in an industry which we left is expensive.
[482] Somebody's got to pay for it.
[483] Furthermore, let me give you two examples to illustrate this point of what happens if we screw this up.
[484] Huawei is a state-subsidized manufacturer in China. They build really, really excellent 5G infrastructure. They're also full of national security concerns. The United States is trying to get people not to use Huawei, especially for mission-critical things. But we don't have an alternative because we got out of that business. That's a problem; we need to fix that. I'll give you another example: TikTok. Now, TikTok's a great Chinese company, people love it. It's the new big thing, and it's on its way to being a trillion-dollar company.
[485] It has huge growth everywhere.
[486] President Trump tried to fix the issue of China with TikTok by requiring local ownership.
[487] That deal ultimately was not consummated.
[488] So we need a solution to TikTok.
[489] Now, what's the issue with TikTok?
[490] I don't mind the Chinese knowing where our teenagers are.
[491] I'd like someone to know where they are, right?
[492] It's not a problem.
[493] We love our teenagers.
[494] They're safe.
[495] But I do mind if TikTok becomes censored or an agent for a foreign power of any kind.
[496] That would be a violation of our norms.
[497] So when you play with these systems that are global in nature and you don't have your own local alternative and there's innovation occurring elsewhere, you give up not just the economic opportunity, which is a trillion dollar economy, which we want because we want the jobs, but you also give up the opportunity for control in innovation.
[498] And in my world, broadly speaking, the tech world, the first mover really does have an advantage.
[499] And once they get a big scale, it's really hard for a new entry to come in for all sorts of reasons.
[500] So we are in a window, and the report that we published for the Congress said we're in a window of a few years where the decisions that we make will determine whether we ultimately control our digital future or whether someone else, likely China, does.
[501] And we have a bunch of recommendations which include more R&D funding, a whole bunch of infrastructure that would help startups, pro-competitive moves, working with our partners, and most important, reestablishing the principle of our ethics.
[502] What are the ethics we care about?
[503] We care about free speech.
[504] Are there limits of free speech?
[505] We care about access.
[506] Are there limits?
[507] We care a lot about prejudice.
[508] What is an okay prejudice and what is a not okay prejudice?
[509] We've got to get that right now.
[510] And I fear, given our political confusion, and these weird sort of nativist things, like we don't want immigrants.
[511] This is where the growth is, guys.
[512] This is what drives the economy.
[513] If you look, and it's great to have a recovery, the majority of the economic growth is occurring from six or seven states, and within those, it's a very small percentage of the counties in our country.
[514] Of 3,500 counties, it's something like 30 or 40 counties that drive the vast majority of the wealth creation.
[515] And I'm not talking about personal wealth, I'm talking about societal wealth.
[516] Yeah.
[517] Are young people here getting into these fields, though?
[518] Because I know in other countries they are, and I feel like that's part of the issue, is there's no incentive for younger people.
[519] They want to be influencers.
[520] So the good news is that this next generation of people in our universities is phenomenal.
[521] They are so much quicker than I was, partly because I think they actually are smarter, but also because they live in a world that's much quicker.
[522] You know, so when I teach, I watch the students and I go, boy, I wasn't that fast.
[523] I wasn't that crisp.
[524] And I've seen that over and over again.
[525] So I think there's a reason to be optimistic.
[526] To me, the question is not the people, because I think the people are there.
[527] The American educational system produces the really top people at the collegiate level.
[528] The real problem is what is the society that they're operating in?
[529] What kind of rules do they have?
[530] And I worry that we're losing the formula.
[531] So the formula is pretty simple.
[532] The tech industry invents these things, and assuming that they're legal, they get launched, and we develop global platforms.
[533] When we start thinking of ourselves as a regional power and not a global power, that's going to be a problem.
[534] I want us to innovate and I want us to build the future and I want to build it in America with our democratic partners, which by the way includes South Korea and Japan and so forth.
[535] The U.S. surpassed Britain as a world power in the 50s and 60s, and in the 70s Britain had terrible economic crises, and they've come out of those.
[536] But I don't want that for us.
[537] Yeah.
[538] Could you, just one sec before we move to AI, tell me what a science fiction nightmare would be of a TikTok application?
[539] Like what would be an actual example of what could happen?
[540] What tech people like me didn't understand, and a lot of people did, is that especially when you take a young mind and you put a young mind in these incentive-based systems, they really do change their behavior.
[541] Right, yeah.
[542] And I used to say 10 years ago, look, it's just the internet.
[543] Turn it off.
[544] Have dinner with your family.
[545] Go for a walk, relax.
[546] You can't do that anymore.
[547] Right.
[548] We went from being optional to being fundamental.
[549] People live on the internet.
[550] And therefore, the internet has to reflect the societal values that you want.
[551] So here's an example.
[552] China decides to insert censorship or promotion within the TikTok system that demotes or promotes stories that they like.
[553] Now, would we notice this?
[554] Correct.
[555] And just can I be very literal?
[556] So I'm a user.
[557] I do a certain kind of story.
[558] TikTok promotes that story because it's on message for what China wants.
[559] And they're now incentivizing you.
[560] And they use AI.
[561] So if I were the Chinese government, I'm not trying to give them advice.
[562] But what I would do is I would build an AI system that looked for desirable behaviors and that I would shift the presentation to be consistent with that desirable behavior.
[563] You would learn what the behaviors were.
[564] You wouldn't know.
[565] This is the way computer scientists think.
[566] You look at a system, you say, what do I have to learn?
[567] And then I want to optimize for more.
[568] So in a business, for your listeners, in a business, basically, you want to get more customers.
[569] So you could learn what customers are actually doing as opposed to what you think they're doing.
[570] Learn what they're doing.
[571] And then whatever they're doing, shift it to maximize revenue or whatever it is you're doing.
[572] And that's how the tech companies work.
[573] Yeah.
[574] There's no reason to think that governments couldn't require the same.
[575] Right.
[576] And by the way, that would be a huge values violation in America.
[577] Yeah.
[578] Okay.
[579] Your book, The Age of AI and Our Human Future.
[580] Now, first of all, you've written it with Henry Kissinger and Dr. Huttenlocher.
[581] And why those two as bedfellows in this book?
[582] Because if I'm right, Kissinger's 99? 98?
[583] 99 in May. Okay.
[584] And it's such a delight to work with someone so brilliant.
[585] I can only imagine what he would have been when he was 30.
[586] Right.
[587] So at 98, he's such an incredible mind and thinker about things in society.
[588] What happened was I invited him to Google 12 years ago or so, and he said he'd come, but he wanted to give a speech about how he thought Google was a threat to civilization.
[589] And I said, okay.
[590] And the Google employees love this.
[591] But what he basically said was that he did not want to have a single company having the kind of power that Google had and now has on citizens of the world.
[592] Sure.
[593] Now, of course, he would say that to other companies as well, with Facebook being an obvious candidate, but maybe others.
[594] But that built a friendship and a partnership.
[595] He came to a conference where Demis Hassabis was speaking, and Demis is the founder of DeepMind.
[596] And he was talking about the implications of general intelligence and how when computers have general intelligence.
[597] It raises many, many ethical questions.
[598] And Dr. Kissinger had written an undergraduate thesis, before we were all alive, entitled The Future of the World.
[599] And he contemplated the questions that are raised by Kant, which have to do with what is the nature and definition of an object.
[600] So he was very interested in the structure of how our mind perceives facts in a world where you have, in this case, a single company trying to shape it.
[601] Yeah.
[602] That then generalized to his view, which he's published two articles in the Atlantic on, which goes something like this.
[603] There was an age of faith before the age of reason.
[604] In faith, you basically had your own knowledge which you got from God or the king, but you did not have critical thinking.
[605] In the age of reason, it was agreed that we would do better as humans if we had reason.
[606] We would have a principle.
[607] We would have a debate, things we take for granted today.
[608] And that ushered in the age of enlightenment, where people began to have these philosophical questions and so forth.
[609] The age of enlightenment then leads to the industrial revolution and so forth and so on, and everything we have around us today.
[610] This is a really fundamental point.
[611] He argues that we're entering a new epoch, from the age of reason to the age of AI.
[612] And here's why: people will be changed by their interaction with these almost-human systems that are not human.
[613] We don't know how they'll be changed.
[614] We don't know how their perception will be changed.
[615] But in the book, we try to ask a bunch of questions.
[616] What will it be like to grow up as a child when your best friend is non-human?
[617] What will this do to war and conflict?
[618] What will this do to misinformation and people who are trying to violate sort of the conventions of our society?
[619] But the core question is, and which is why we wrote the book, is what does it mean to be human in this new world?
[620] Yeah, it's a good place to start.
[621] It's like, what are we trying to even protect?
[622] In varying degrees, you have real fatalists about what AI is going to be.
[623] You have very utopian thinkers and what it's going to be.
[624] But yeah, I think a great question first is like, what are the things about us we would want to safeguard?
[625] Well, before we even answer the us part, I'm not talking about killer robots.
[626] So when you talk to people at AI, they say, yeah, I've seen that movie.
[627] Yeah, yeah, yeah.
[628] Well, that's not what we're talking about.
[629] What we're talking about is the fact that AI will be around you, helping you, guiding you, and maybe constraining you in ways that you may or may not like.
[630] The obvious one is kids.
[631] Yeah.
[632] So you've got some toy, a bear.
[633] You give it to the kid at two.
[634] The bear gets upgraded every year.
[635] The kid grows up.
[636] And at 12, the bear is watching TV with the kid, his best friend.
[637] The bear says, I don't like this show.
[638] And the kid says, I don't like it either.
[639] How do you feel about that?
[640] Do you think that your child's tastes should be influenced by his toy, as opposed to his best male 12-year-old friend, in this case?
[641] I'll give you another example.
[642] There's this huge controversy over what people teach in schools.
[643] There always has been.
[644] There are textbooks, and those textbooks are approved by the State Board of Education.
[645] And we can debate whether they're right or not.
[646] But the important point is there's an approved process for textbooks in schools.
[647] Well, now we've got this bear, right?
[648] The bear, by the way, knows an awful lot of stuff.
[649] Okay.
[650] I just spit-took coffee.
[651] You really?
[652] Oh, my God, that's my first spit take on the show.
[653] Yes, the bear knows way more than the textbook.
[654] I'm sorry, that really cracked me up.
[655] Please continue.
[656] By the way, the bear does.
[657] Yes.
[658] And let me tell you how this happens.
[659] It's the total knowledge of human history.
[660] Right.
[661] The bear actually knows everything.
[662] Yeah.
[663] And the bear is busy idling, waiting for the kid to ask him a question.
[664] Like, tell me about Thucydides.
[665] And the 12-year-old is busy watching UFC.
[666] What are the rules about how the bear interacts with the kid?
[667] Right.
[668] Because the kid is going to learn more from the bear.
[669] By the way, we all learn more from our peer group than we do from any other kids.
[670] Certainly more than from sitting in class, listening to the teacher,
[671] looking at the other kids.
[672] So now, let's imagine the bear learns something, and it learns incorrectly that lollipops cure cancer in children.
[673] And the bear is programmed by an objective function to keep the kid happy.
[674] And happiness involves being healthy and alive.
[675] So the bear decides to start suggesting the kid consume lollipops.
[676] The ability to subtly manipulate a developing mind, it's a very big deal.
[677] Yeah.
[678] And the same thing is true for adults that you see today in the Facebook and Twitter examples where you have manipulation by people who know how to get them excited.
[679] They get them excited by outrage.
[680] And YouTube, I mean, like rabbit hole, that whole thing is that.
[681] It's just the slow.
[682] The tiniest nanometer nudging in their direction.
[683] And in YouTube's case, they actually made some changes that are important.
[684] Right.
[685] They stopped recommending the most emotional and crazy videos.
[686] They're still there, right?
[687] So they can say they didn't
[688] censor them, but they don't recommend them.
[689] Yeah.
[690] And we've seen situations over the years where people would start watching something in YouTube and then they would get radicalized.
[691] Exactly.
[692] Watching one more and more.
[693] This is a rabbit hole point.
[694] Yeah.
[695] So that kind of stuff.
[696] And I defy you to write a regulation, you know, sit down with a pen, try to write, how would the government regulate this kind of behavior?
[697] It's not obvious to me. We need a consensus about that.
[698] Well, so what I want to talk about in the best friend hypothetical is from the second I read that that was one of your points, I go to what does a child actually need.
[699] So I think that the product would obviously be designed to alleviate stress on the parents.
[700] They're the ones buying it, right?
[701] So ultimately this AI bear, the parents are going to decide to get that.
[702] And they're going to decide to get it because now the kid's more entertained, they have more free time to themselves.
[703] That would be probably the incentive for the parent.
[704] Unfortunately, everything great that a human learns is through compromise, is through struggle, is through disagreement, is through sharing when they don't want to share, is through fighting.
[705] So this device that will obviously be bought to help you as the parent have easier time in life is not going to present any of those things that a human should have to endure.
[706] So right out of the gates, I don't know how the product, if it was an ethical one, would even be sold.
[707] So I'm going to buy this bear that I'm going to hear them fighting upstairs.
[708] In theory, right, how would you regulate this?
[709] Right.
[710] But the most important thing is remember that this is technology, today, which is imprecise, it's dynamic, it's emergent in the sense that when you combine it with others, it does stuff that's unexpected, and it's learning while it's on the job.
[711] Right.
[712] So the problem here is, you're the parent and you buy the, again, I'm using the bear as a metaphor, you buy any form of digital toy trying to do the right thing for the kid, and the toy's learning.
[713] What if it learns the wrong thing?
[714] And sometimes it could learn the wrong thing because it made a mistake, but it could also be misprogrammed by a government, or a competitor or marketing or an incentive.
[715] So you have this crazy idea.
[716] You think we should give the equivalent of these bears to every child.
[717] How do we support that with advertising?
[718] Now the bear becomes advertising to the kid.
[719] And the kid basically then wants the toy and the parents have to buy the toy.
[720] That seems like a terrible idea.
[721] You're so right, because in the pursuit of everything being democratized, you can't offer a product that's only beneficial to the wealthy people who can afford this product.
[722] So then how else do you provide it to everyone while advertising?
[723] This is how the Internet works.
[724] I feel like before this would even be permitted, we would all have to collectively agree and social scientists would collectively agree on what percentage of the interaction between a child and its best friend should be conflict?
[725] Which percentage should be bonding in it?
[726] I can guarantee you there is no number that I can use.
[727] And if you can't tell me the number, then the tech industry is going to do whatever it wants.
[728] That's what I'm saying.
[729] It's like we can't even get consensus on our side where we might want to say, well, here would be the guidelines of what this thing should do.
[730] Again, I think the issue of developing young minds is going to get much more important, and we've got to get ahead of that.
[731] Let me give you another example.
[732] Does AI perceive things that are different from what humans perceive?
[733] And if so, what happens?
[734] So the example we use in the book is the game called Go and the game of chess, which we all know about, which have been played by humans for thousands of years. AI not only won those games against humans using these new techniques, which is a major achievement because people thought it was not possible, but more importantly, it invented new strategies.
[735] Now, how is it that in a game as mature as chess or Go, we could invent a completely new strategy for a game that's been played for thousands of years?
[736] So one possibility is that it was always there for humans to discover, they just couldn't see it.
[737] Another possibility is that the computer actually discovered a new truth that we as humans can't really perceive.
[738] Oh, my gosh.
[739] Now, this is a thought experiment, but let's imagine that these systems get smart enough that they begin to do things that are actually correct, but they can't explain what they're doing, and we don't understand what they're doing.
[740] Now, Dr. Kissinger says, if you look at history, there are two things that happen.
[741] Either people rebel, that is literally with guns against these overlords, or they invent a new religion, that there is a religious component about faith because they can't understand what's actually going on.
[742] Well, and yeah, the real historical example of course you'd give is like, Galileo's virtually that.
[743] He is the AI that believes in the Copernican way of thinking of our solar system.
[744] And then, yes, that person is, it's too threatening.
[745] We put them in jail.
[746] It also makes me think of, I watched this documentary on physics, and I learned, and I didn't know this, that all the theories that have been around for the last 60 years, those theories actually don't exist in language.
[747] They only exist in mathematics.
[748] The entire thought exists only in mathematics.
[749] And when they're telling you about it, they're using this very substandard method to try to explain to you what's happening in the mathematics.
[750] So that was like, for me, kind of mind -shattering.
[751] Like, wow, our language can't account for what they're thinking of in this math.
[752] Our language is not precise enough.
[753] Right.
[754] Stay tuned for more Armchair Expert, if you dare.
[755] So one of the ways to understand how this will happen, this is very relevant to this physics point, is that today there are things called large language models.
[756] And we talk about this in the book.
[757] We use something called GPT-3 as an example.
[758] And what they do is they suck all the text that they can find in and they figure out what the text is about.
[759] And then using all sorts of tricks that are very powerful, it's very expensive to compute.
[760] These things cost $100 million to build one of them, for example.
[761] They appear to have human-like knowledge with errors.
[762] You can ask it to compose something.
[763] You can ask it to define itself.
[764] So, for example, on the back of our book, we have a quote from GPT-3.
[765] It was asked, are you human?
[766] And it said, no, I'm not human.
[767] I am a language model.
[768] I don't have the reasoning capabilities that you as a reasoning machine have.
[769] Now, how did it learn that?
[770] It figured it out because of the descriptions that were about what it was versus what humans were.
[771] But it appears to have insight.
[772] So, many people believe that the path goes like this.
[773] So the first thing you do is you get the language model.
[774] Then you figure out what is called dialogue, where you can start asking questions.
[775] I was playing with one, which is not announced yet, and I said, tell me the product that's in 2001: A Space Odyssey that is available today here in the United States.
[776] And the correct answer is a tablet.
[777] Okay.
[778] And because it was the first demonstration of a tablet device.
[779] And so it comes back and it says there was a tablet, which is the origin of that.
[780] And it used a different algorithm from the kind of thing that Google search uses.
[781] That's an example of a breakthrough.
[782] And then I could have asked it more and more questions about this tablet and the origin and so forth, and it would have understood the context.
[783] That's the next step.
[784] All of these systems today have the problem that they don't have their own volition.
[785] They can't generate their own objective function.
[786] Right.
[787] So the sequence goes as follows.
[788] The first phase is we get really good at
[789] somebody else's objective function.
[790] The obvious one there for a physicist is they spend all day doing physics and then it's 5 o'clock and it's time to go home.
[791] And so they say to the computer, I want you to read every paper that's ever been written on eigenvalues of this thing.
[792] And tomorrow morning, when I show up to do physics, I want you to give me the best idea that I should pursue.
[793] Oh, wow.
[794] It's the same assistant that you would get because you would say, I'm preparing for a show.
[795] Tell me everything I should ask.
[796] Right, which is today what a good assistant does for you. But literally read everything. Read everything that Eric said, read all the books in this area, and read them all. Yeah. Because I don't have time. Yeah. And tell me what the key things are. But what is best? A computer doesn't know, or maybe they do, but what are we qualifying as the best idea? Well, again, it's gotten good enough that it can get an approximation of best. So that's the first, I think, stable point. That's five years from now.
[797] Can I go really quick back to the volition statement you made?
[798] I want to see if the way I'm understanding it is accurate.
[799] So I one time heard a debate with Stephen Pinker on AI.
[800] And he pointed out most of the doomsdayers with AI, they're mapping on this truth about organic beings, which is they have to survive, they have to reproduce, they have to feed themselves.
And the more they collect, the better their chances of survival are. That very most basic thing of any organic organism is being mapped onto this AI, and that's not accurate. We would have to make it understand that. So there's lots of evidence now that babies are born into the world with preconceived notions of what their objective function is, more than just food. So, for example, there's something called zero-shot learning. You basically show somebody something without explaining it, and when you show something similar in the future, it looks familiar to them.
[802] Okay.
[803] They can construct where it is and what it's up to, right?
[804] And that's a hard problem in my field.
[805] But these are all problems that are being worked on.
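Zero-shot learning is usually demonstrated with images, but the core idea, seen once and recognized as familiar later, can be sketched very loosely with text embeddings. The model name, the example sentences, and treating cosine similarity as "familiarity" are all illustrative assumptions here, not how infant cognition or production systems actually work.

```python
# Loose analogy for "shown something once, it looks familiar later": embed one
# description, then measure how familiar a new description feels via cosine
# similarity. Model choice and sentences are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

seen_once = model.encode("a small striped cat sitting on a rug", convert_to_tensor=True)
new_thing = model.encode("a tabby kitten curled up on a carpet", convert_to_tensor=True)
unrelated = model.encode("a cargo ship leaving the harbor at dawn", convert_to_tensor=True)

print("familiar:  ", util.cos_sim(seen_once, new_thing).item())  # relatively high
print("unfamiliar:", util.cos_sim(seen_once, unrelated).item())  # relatively low
```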
[806] So at some point, we're going to have these incredible systems which will do what somebody told them to do, but they still won't have volition.
The great debate is at what point can the computer begin to give itself objectives.
[808] And we're beginning to see this in the following.
[809] There are products that have just come out this year to help you write code.
And the idea is that the programmer is writing and the computer's watching it, and it kind of finishes what you're writing.
[811] It's like finishing your sentence as a programmer.
[812] Right, like autocorrect.
Like, you think of it as autocorrect.
[814] But this is a very, very powerful idea because as the programmers use it, it will learn more and more about how to finish.
So many people believe that, at some point with those systems, you'll be able to say, write me this, and it'll write the whole thing.
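The code-completion products being described here work roughly like this: a language model trained on source code is handed the beginning of a function and asked to continue it. Below is a minimal sketch using an openly available code model from Hugging Face; the specific checkpoint and the snippet are illustrative assumptions, not the product Schmidt has in mind.

```python
# Rough sketch of "the computer finishes what you're writing": give a code-trained
# language model the start of a function and let it propose the rest. The model
# name is just one openly available example, not the product discussed above.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

partial_code = 'def average(numbers):\n    """Return the arithmetic mean of a list of numbers."""\n'
inputs = tokenizer(partial_code, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```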
[816] Yeah.
[817] Okay.
Then the next question is when does it decide what it should write.
[819] Yeah.
[820] And that's a speculative question.
[821] This is called artificial general intelligence.
[822] There are skeptics and there are optimists.
[823] I'm somewhere in between.
[824] Right.
The optimists believe that general intelligence, which is not human intelligence but is creative in the way that humans are, which has volition and can set its own objectives, is 10 to 15 years away.
[829] I'm a bit more conservative.
[830] I think it's more like 20 to 30.
[831] And there are other people who say this will never happen.
[832] It's well beyond what computers will ever do.
[833] Those kinds of people are usually wrong.
[834] Okay.
[835] Usually we'll get there somehow.
[836] Yeah.
[837] Really quick.
[838] They don't have ego.
[839] Or could they ever have ego?
[840] Of course they could.
[841] They could.
[842] Look, the computer will say, I want to keep this human happy and these other people are a source of frustration to them, so I'll just kill them all.
[843] Right, right.
[844] That's clearly a bad objective function.
[845] Yeah.
[846] I'm not making light of it, but I'm trying to say that's a mistake.
[847] Yeah.
So you have to figure out a way to define, in human society and in the complexity we live in, what does better look like?
And what I'm wondering is, computers may eventually observe humans and say, we know how to make your lives better, and actually understand it in a way that humans don't understand.
[850] Yeah.
[851] And if that happens, that's the most utopian phase.
[852] The most dystopian phase is that we can't control these things.
And furthermore, because it's open, all the software is being leaked out to all sorts of evil people, the next generation of evil terrorists, and it could be really misused.
[854] And the obvious one is, I say to the computer, tell me how to kill a million people who have a different race than my own.
[855] Right?
[856] Now, for this reason in the book, and this is why I'm mentioning it, we talk at some length about the history of national security and deterrence.
[857] We're going to have weapons that are like what I just described that will be so dangerous that we'll need to have them be guarded and have special access and so forth.
[858] And the reason we know that is we already have such weapons in the form of nuclear weapons.
[859] Unlike nuclear where the enriched uranium is difficult to get, the thought experiment is imagine if nuclear stuff was free, well, we'd all be dead, trust me. Yeah, yeah.
[860] Because the evil people would have sort of taken this outcome.
0.001% of evil people to kill.
Yeah, it only takes a few for these mega-scale weapons.
[863] And so we have to be very careful as we build this technology to put in the necessary restrictions so that they're not misused at scale.
[864] Just saying the machine can never kill, it's not going to work because we're going to use it in military applications.
[865] Well, first place, who would you say it to?
[866] Right.
[867] So in programming the computer, you have to say rule one, don't kill anyone.
This is the Isaac Asimov three laws of robotics.
You know, don't harm anyone.
[870] But the fact of the matter is that in the military, they are in the business of killing combatants.
[871] Right.
And having worked for the military for five years, on what we call the Defense Innovation Board, they're very straightforward about it.
[873] Right.
[874] We are proud.
[875] This is our mission.
[876] We're defending our country.
[877] And if necessary, under the laws of war, we will attack enemy commands and kill them.
[878] Yeah.
[879] And they're not apologetic about it.
[880] Right.
So they're clear about their mission, and their principles are high.
[882] I say this with respect.
So they're going to build systems which will allow them to do that more accurately.
[885] Right.
I would argue that one of the things about current conflicts, and people don't want to hear this, is that the vast majority of the deaths are actually non-combatants.
[887] Right.
[888] And that's terrible.
[889] Yeah.
Meaning, like, when we drone strike, generally more civilians go down than the target itself.
[891] The target identification problem is a really hard problem.
[892] So let's imagine that a good outcome of AI is the weapons are highly targetable.
But you still have the problem of who gets to decide whether to launch them.
[894] So we're clearly going to have more accurate weapons.
[895] We say very clearly, both in this commissioner report I did as well as in the book, that the issue around computers deciding to do attacks is really problematic.
[896] And I'll give you an example.
[897] You're on a ship and there's a missile coming.
[898] The AI has figured it out, but you can't see it.
[899] It's not on your radar.
And it's coming fast; it's hypersonic. I think in the book you say you've got 24 seconds. Right, 24 seconds. Now, would you, Dax, would you press the button, or would you count down? In the movies, they count down. Me, I just press the button. Right, right. You know, I panic. Yeah, that's who I am. Yeah. So how do we deal with that? And one of the discussions that I think we should be having, and we say this in the book, is we need to have some agreements about launch on warning. That is, any kind of automatic launch, automatic attack, decisions faster than human time, is destabilizing, because it leads the other side to think it has to destroy it.
[901] It's too dangerous to allow this to exist.
[902] Yeah.
[903] If you look right now, there's this lengthy conversation about whether the enriched uranium in Iran should be destroyed ahead of its use.
[904] Yeah.
[905] Right?
[906] Yeah.
[907] And there's all sorts of issues.
[908] In Israel, they spend a fair amount of time.
[909] They don't admit it, but I think it's fairly agreed to, that whenever something that could threaten Israel shows up in Syria, they use special agents and they go and they destroy it and so forth.
And then we had the great Siemens software we wrote to mess with the centrifuges.
[911] Yes, the allegation that Israel and America did that.
[912] Yeah.
[913] We must start now talking specifically with China, but also with Russia, about appropriate restraints.
[914] Kind of like a Geneva Convention.
[915] Some agreement.
[916] Yeah.
So when a country launches a missile, whether it's a missile test or a civilian missile such as SpaceX, all the other countries are notified.
[918] And that's because no country wants the other one to think it's attacking him.
[919] Sure.
[920] And the other countries use that as an opportunity to train their surveillance systems to watch the launch.
So that's a good example where we have an agreement among untrusted parties that de-escalates some of the worst possible scenarios.
[922] Right.
[923] So we've got to have some rules.
[924] So, for example, when there is a cyber test in China and it goes awry, there better be a phone call from China to the United States saying, sorry, guys, not intended.
[925] Don't come back after us.
[926] Yeah, but in light of Wuhan, it's just like we just had an experience where they didn't do that with us.
[927] That is correct.
[928] They didn't give us a heads up.
[929] Imagine if we, the country, had gotten a phone call in late December saying, look, guys, this thing's going to get out of hand.
[930] We've been modeling it.
[931] Get yourself together.
[932] And collectively we can respond to this.
[933] That's an example of an emergent event.
[934] So you have AI which both speeds things up.
[935] It also creates emergent events that we don't expect.
[936] Uh -huh.
[937] Holy smokes, man. The stakes are just so high and I agree with your assessment that this will be another renaissance or enlightenment.
This will be some period that's looked back on, should we make it.
[939] And we'll make it in some form.
[940] What I would say is that the opportunities for AI and the platforms that we're building are so enormously exciting.
There is a drug called halicin, and we profiled this in the book.
It was designed by some synthetic biology people at MIT and some computer scientists.
[943] What they did is they trained the model on what they knew about antibiotics.
[944] That was straightforward.
[945] Then they asked the computer to look through 100 million chemical compounds to figure out what the compounds did, which nobody knew, and then figure out whether they would produce an antibiotic response.
[946] It learned all of that on its own, something humans could never do, and then it produced 10 candidates for drugs.
[947] Get out.
[948] Of antibiotics.
[949] Oh, my God.
[950] This is the best part.
[951] It then took another network and looked at the characteristics of the existing antibiotics, and it said you have to be as different as possible.
[952] And that produced one.
[953] Okay.
Now, this one appears to produce a broad-scale antibiotic reaction for people who have antibiotic resistance.
[955] Yeah.
[956] The number of lives that are saved by that is enormous.
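The halicin workflow just described, train on known actives, score an enormous compound library, then keep only hits that look unlike existing antibiotics, can be caricatured in a few lines. The real work used deep graph neural networks and on the order of a hundred million molecules; the toy molecules, labels, thresholds, and the random-forest model below are all illustrative assumptions.

```python
# Toy caricature of the three-stage screen described above: (1) learn from
# molecules with known antibacterial activity, (2) score a compound library,
# (3) keep high scorers that are dissimilar to existing antibiotics.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def vector(smiles):
    arr = np.zeros((1,))
    DataStructs.ConvertToNumpyArray(fingerprint(smiles), arr)
    return arr

# Stage 1: placeholder training set (SMILES strings with made-up activity labels).
train = {"CCO": 0, "CC(=O)O": 1, "c1ccccc1O": 1, "CCN": 0}
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([vector(s) for s in train], list(train.values()))

# Stage 2: score a (tiny stand-in for a 100-million-compound) library.
library = ["CCCl", "c1ccccc1N", "CC(C)O"]
scores = clf.predict_proba([vector(s) for s in library])[:, 1]

# Stage 3: keep hits structurally unlike a known antibiotic (placeholder molecule).
known = fingerprint("c1ccccc1O")
hits = [s for s, p in zip(library, scores)
        if p > 0.5 and DataStructs.TanimotoSimilarity(fingerprint(s), known) < 0.3]
print(hits)
```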
Over and over and over again. DeepMind and the Baker Lab at the University of Washington in Seattle just released the details of how proteins fold.
Proteins fold in a particular order, and the way they fold determines what they do.
[959] So predicting protein folding has been the Holy Grail.
This is a discovery worthy of a Nobel Prize, because based on knowing how proteins fold, all of a sudden a whole other set of drugs and chemicals and so forth can be made.
[961] It's similar to CRISPR in biology, which allowed us to do editing and so forth.
[962] So these are things that humans could never do.
[963] Yeah, you posed the question in the book, should a Nobel Prize come out of this?
[964] Does the AI receive it?
[965] Did the people who ask the AI the question receive it?
[966] Like, where are we at there?
When AlphaGo won, I was at the AlphaGo match in Shanghai when we won against the human.
[968] This was 2017.
And the Chinese were so upset that in the middle of the match, they shut down the national television broadcast of the match, because they were losing and they always thought that they were winning.
[970] What was interesting afterwards is I thought, what could I do to acknowledge this occasion?
[971] So I took the 18 top people who worked on this, and we all flew to the data center where this computer was, and we found it, and we put a plaque on it to honor it.
[972] And obviously it's a humorous story, but we also spent a whole day talking to each other.
[973] Yeah.
Right, about what had been achieved and so on.
[975] And I'm very proud of that visit.
[976] But who do I give the plaque to?
[977] It's a team of humans, and it's a computer.
[978] So we put the plaque on the computer.
[979] Right, right.
And we found it in the middle of all the racks of computers that were doing work.
[981] And by the way, it was three or four racks of computers in a huge data center.
[982] It shows you the power and also the peril of these things.
[983] That's the positive.
The most likely dystopian case is a case where we don't get this addiction problem under control, that the computer systems are built around addiction, and they drive us insane.
[986] Attention, attention, crisis, issue, so forth and so on.
Well, if I can interrupt you, what you said at the beginning of this, which is, in this Yuval Harari future, where we're all just pleasure seekers in a virtual reality, everyone can have status.
[988] Everyone can be the winner of it.
[989] It'll be bullshit.
[990] It'll be fake status.
But certainly the algorithm could make you feel like you're chipping away and gaining all this popularity, but all the popularity is a bunch of other bots.
[992] You can imagine all sorts of dystopian scenarios.
[993] So you're popular in the metaverse of this rack of computers and you're unpopular in the other metaverse and so forth.
[994] I don't think we know how this will play out.
But left to their own devices, and this is why we wrote the book, what we say is that the tech industry should not make these decisions, that these are decisions that should include philosophers, economists, behavioral scientists.
[996] Maybe you could answer this question for us.
The way our legal system works, and the litigious nature of class action suits and whatnot, to me, sets up a catch-22 that's standing in our way.
[998] So let's say we design the system with a team of people, and then we launch it, and then there's some disastrous outcome.
[999] And we know that the disastrous outcome is going to result in X amount of legal fees.
[1000] That puts the company in an existential position.
And so it really creates a scenario where for the company to acknowledge failure is also to commit suicide.
[1002] Yes.
[1003] This has to change.
If we're going to ask these companies, I think, to be real-time ethical, real-time changing, real-time transparent, real-time acknowledging what's fucked up and where it went wrong and how we're going to change it, they can't really do it in our current system.
[1005] Do you have any thoughts on that?
The effect of what you're describing is that the legal system is another participant-complainer customer within the corporate environment.
[1007] So the way it really works is you have the founders with great vision, you have the employees, you have the customers, you have the regulators, and you have public opinion in the press.
[1008] And as a CEO, you have to deal with all of them.
[1009] What's interesting now is this next generation of employees is extremely activist.
[1010] Yeah.
[1011] So using the example I was using earlier about the bear, I would hope that the future company that's programming the bear, the employees would say to the management, that crosses the line.
[1012] That's not okay.
Your profit-seeking motive is overwhelming my likelihood of being willing to work for you to do it, because you're going to do some damage to some kid.
[1014] So I think that the good news about this is these issues are going to get debated.
[1015] The bad news is we don't have good government answers and regulatory answers on how to handle this.
And the model we've used so far has been, you know, launch it, see what happens, and react to the worst cases.
[1017] If you look at the Facebook situation, they knew these problems and they continued to do them.
[1018] But do you think they were at a crossroads where they said, if we now change this, it in itself is an admission that it was wrong?
[1019] There's always that concern, because if you're a CEO and you say to the audience, the shareholders, we've decided to lower our profits because we want to change this.
You will get sued by shareholder derivative suits.
[1021] Right, both sides.
[1022] Even if it's the right thing to do.
[1023] So it takes an awful lot of Kevlar on your vest in order to do these things.
[1024] The great companies and the great leaders will do it.
[1025] A lot of people are not going to do it.
[1026] By the way, this is great.
[1027] Oh, thank you.
[1028] I'm loving this.
[1029] Okay, so that was one question.
[1030] And then I do think it would really be fun for us to hear what the potential life of this show is in a world of AI.
[1031] And the thought experiment I would do is I would get one of these language models, and I would have it be a panelist.
[1032] And I would ask it questions and see what it answers.
[1033] And I would do that every year so you can track the progress of your replacement.
[1034] Now, what you're going to discover is that if you ask it the right question, it gives an incredibly interesting answer, but it still requires your volition.
[1035] It won't drive the conversation.
[1036] And so you'll sit there and say, that is the most frustrating guest.
[1037] I have to ask it precisely the question.
[1038] And it gives me a great answer.
[1039] And otherwise, it just waits.
[1040] Right.
[1041] Well, I could use a little bit of that, by the way.
[1042] Well, and again, though, great answer, right?
[1043] That's all dependent on what we're deciding is great.
[1044] Is a precise answer great?
[1045] Is an emotional answer great?
[1046] A metaphorical answer?
[1047] But the systems will learn the answer to that question.
Because remember, its objective function is for you to ask it more questions.
[1049] Right.
[1050] So it will learn how to ask, it will learn how to give you answers that are just exciting enough to keep you there and just factual enough to be interesting and just responsive enough.
[1051] Yeah.
[1052] So it'll learn.
[1053] So the key thing about this technology that took me a long time to learn is that it's busy learning.
[1054] Yeah.
[1055] And it will learn, so we're talking about it as a guest, it's learning while it's a guest on your show.
[1056] Sure.
[1057] How to be a guest on our show.
How to be a guest on your show.
[1059] And if it gets enough other gigs, it will become a very good guest.
Now, this might be an uncouth parallel, but again, drawing from 60 Minutes, my only source of information, they did a follow-up interview with the man who Rain Man was based off of.
[1061] So they interviewed him at the time, and they took him, I believe he lives in Salt Lake City.
[1062] He could go to the library and you could pull any book off the shelf and you could turn to a page and he would tell you what's written on the page.
[1063] It's an insane...
A truly brilliant man. Then they did a follow-up, and Leslie Stahl went back and talked to him 20 years later, and he was making eye contact with her.
[1065] He touched her.
[1066] He interacted with her in a way that appeared to be a lot of growth.
[1067] And so she said to his father, wow, he's gotten so much more personable over the years.
[1068] It seems like he has grown in his autism.
And his dad said, well, I don't believe he has actually obtained the missing stuff, but he is a great learner, and over time he's watched people have success looking at each other in the eye, and they've had success in hugging. So it's weird, it's that little thing that could exist even within a human, that the machine itself will be there. So let's say the bear has movable eyes, and the programmers have not had time to work on the eyes, but there's a mechanism to do it. You could imagine that the bear, which is optimized to keep the kid happy, the first thing it learns is to look at the kid. Right. Because what happens when it doesn't look at the kid? The kid's screaming.
[1070] When it looks at the kid, the kid screams less.
[1071] These are babies.
[1072] We intuitively understand this, but it's not difficult for a bear to learn that because its objective function says, get attention, be in...
[1073] Minimize crying or whatever the thing is, yeah.
[1074] And we know from baby studies that babies are very, very optimized around both their mothers' sounds, but also face recognition, in particular eye contact.
That's why they look and you look at them. And if you look at the great moms, they first start by cooing and holding and looking at and singing to, and so forth.
[1076] And that imprinting, which is so deeply human, well, it'll work with computers too because they'll learn how to do it.
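To make the "objective function" idea concrete, here is a toy sketch of a bear that learns, by trial and error, which behavior keeps a simulated child calmest. Everything about it, the action list, the pretend crying sensor, and the epsilon-greedy rule, is an illustrative assumption; real systems would be far more sophisticated, but the logic of "try things, keep what reduces crying" is the same.

```python
# Toy epsilon-greedy sketch of the bear's "minimize crying" objective function.
# The environment is entirely made up: it simply assumes looking at the child
# reduces crying most, so the bear discovers eye contact on its own.
import random

ACTIONS = ["look_at_child", "look_away", "sing", "stay_still"]

def simulated_crying(action):
    """Pretend sensor reading: lower is better. Purely an assumption for illustration."""
    base = {"look_at_child": 1.0, "sing": 2.0, "stay_still": 4.0, "look_away": 6.0}[action]
    return base + random.uniform(-0.5, 0.5)

value = {a: 0.0 for a in ACTIONS}   # estimated crying level per action
counts = {a: 0 for a in ACTIONS}
epsilon = 0.1                       # how often the bear experiments

for step in range(2000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)        # explore
    else:
        action = min(value, key=value.get)     # exploit: least crying so far
    crying = simulated_crying(action)
    counts[action] += 1
    value[action] += (crying - value[action]) / counts[action]  # running average

print("learned best action:", min(value, key=value.get))  # expected: look_at_child
```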
[1077] I think maybe a place to start with some of this is like not programming it for happiness.
Because also, happiness is so philosophical in general.
[1079] What is it?
[1080] Is it temporary?
That's what, like, we need Paul Bloom to say.
Oh, they need 32% discomfort and suffering.
Well, okay, so you're going to have the hard-ass bear and the nice bear.
And depending on whether the kid has behaved, the little boy or girl gets the hard-ass bear or the nice bear.
[1085] Even worse, the bear's going to have to trip the child occasionally.
The kid's got to learn how to get up.
[1087] So, you know, we can have fun with the analogy.
[1088] Yeah.
[1089] The reason to have this conversation now is that the bear is a metaphor for their iPad today and their phone today and all the things that kids are playing with.
[1090] We're playing with fire.
[1091] We're playing with enormous impact.
[1092] We're entering a new phase when we don't fully understand the phase we're exiting right now.
[1093] We still don't really fully understand the impact of just the last 15 years.
[1094] And I'll tell you that the reason I want the bear is because the bear will learn how the kid learns and the bear will then become the greatest teacher the kid has ever had.
[1095] All we have to do is keep it from learning the wrong thing.
[1096] But imagine if you have a bear which is steeped in history, steeped in language, steeped in the local history, has all the textbooks and so forth and it prepares that kid in whatever way the kid learns.
[1097] So let's do some counting.
[1098] Oh, I'm bored with counting.
[1099] Okay, let's do some language and the language is counting.
[1100] You can imagine this enormously powerful tool for human development.
[1101] I believe that the future of the world is completely determined by whether we can get the best human potential out of everybody.
[1102] If you go back in American history, around 1900, they had this huge rural workforce.
All the evidence is that they were extremely uneducated.
[1104] And the government created universal high school and land grant universities in roughly 1910.
[1105] A hundred years ago, 110 years ago, they created these huge universities which have been at the source of the innovation of the industrial economy of America, our national security, our culture, and so forth and so on.
[1106] Everybody expects to go through high school and people hope to go to college.
[1107] That wasn't true 100 years ago.
[1108] Yeah, that's wild.
[1109] That's a great point.
[1110] So let's redouble our efforts to get people to their highest human potential using these tools.
[1111] And then they will take over our world.
[1112] And they'll do much better than we have because they're operating at their highest human potential.
[1113] Okay.
[1114] My last question for you is a philosophical one.
I was reading Notes from Underground and there's a whole chapter.
[1116] And I don't know what year that book was written, but in the 1800s or turn of the century.
And Dostoevsky spends, I don't know, a chapter theorizing that because mathematics has come so far, in the very near future, through the power of mathematics, mathematicians will be able to predict all human behavior.
[1118] And so when I read that, I was like, oh, my God.
[1119] So this fear minimally has been around since he wrote that.
[1120] And there's really no justification at that moment for him to have that fear over mathematics.
[1121] That certainly did not prove to be the case.
[1122] And I thought to myself, wow, we have this new thing.
[1123] We have AI.
[1124] And so I wonder, is the fear human, and we just keep planting it on each new thing?
[1125] When humans are creative and curious in all the great ways, we also are unpredictable.
[1126] And AI will produce a new human -like intelligence that is also unpredictable that we're going to have to coexist with.
And the sooner we get ourselves organized around what that coexistence looks like, the better.
This technology is coming much faster than our philosophical, regulatory, government, and public policy people are.
[1129] Yeah.
[1130] It's time to get ahead of it.
[1131] Wow.
[1132] I'm so glad you took the time to write this book.
You know, it is funny, when I thought about you being appointed the CEO of Google, I thought, like, who could be so impressive that those two, Larry and Sergey, would say, like, yeah, you run this company? That's almost unimaginable.
[1134] And now that I've chatted with you for an hour and a half, I'm like, fuck yeah, I would have hired you immediately, too.
What a comprehensive, like, all-encompassing knowledge.
[1136] You're just way too flattering.
[1137] No, no, it's so fucking true.
[1138] Congratulations on you guys' success in this.
[1139] I look forward to working with you.
[1140] Oh, thank you.
[1141] Thank you so much.
[1142] It was such a pleasure to have you.
And good luck with the book, and I think everyone would be interested in reading it.
And again, it's called The Age of AI and Our Human Future, written with Henry Kissinger and Daniel Huttenlocher.
[1145] So everyone check that out.
[1146] Again, thank you so much for coming.
[1147] It was such a blast.
[1148] Okay.
[1149] Thanks, guys.
[1150] And now my favorite part of the show, the fact check with my soulmate Monica Padman.
[1151] Hi.
[1152] Hi.
Okay, so fact check, cross-continental.
You're on one side of the divide, I'm on the other.
[1155] That's right.
I'm home for the post-holiday season.
[1157] I'm home with the fam.
[1158] They're recovered.
[1159] The airport was just bonkers.
[1160] On both sides?
No, only in L.A. I've never experienced anything like it.
[1162] It was like Disneyland.
[1163] Like, I was stuck standing, like waiting to move, not even to get to a line.
[1164] It was awful.
[1165] I almost missed my flight, which has never happened in my life.
[1166] And I was two hours early to the airport.
[1167] That is so stressful.
[1168] I'm going to have to leave.
[1169] I'm going to have to leave at 4 in the morning, I guess, tomorrow.
[1170] I know.
[1171] Leave now.
[1172] Oh, my God.
[1173] Maybe I should just go right now.
[1174] I'll do it.
[1175] Otherwise, how was your travel?
[1176] Do you feel like you contracted COVID in transit?
[1177] Well, I don't know because there were so many people.
[1178] Yeah.
[1179] That scared me.