Armchair Expert with Dax Shepard
[0] Hello, everybody.
[1] I'm Dax Shepard.
[2] This is Monica Padman.
[3] That's me. And this is an episode of Experts on Expert.
[4] And today's expert is a juicy one.
[5] Yuval Harari.
[6] You hear me talk all the time about both Sapiens and Homo Deus.
[7] And he has a new book, 21 Lessons for the 21st Century.
[8] And it's equally as stimulating as the previous two books.
[9] There's a lot of concepts that he has introduced me to that I have rattled off on here.
[10] So I think Monica and I both felt this way.
[11] We had X amount of time with Yuval.
[12] And that X amount of time went by in what felt like 16 minutes.
[13] I've never in my life had the experience where time flew so quickly.
[14] I agree.
[15] Because he's so dang smart and interesting.
[16] It was a good time for us, right?
[17] It was.
[18] It was candy, brain candy.
[19] It was brain candy.
[20] We left turbocharged.
[21] So please enjoy the genius we call Yuval Harari.
[22] Wondery Plus subscribers can listen to Armchair Expert early and ad-free right now.
[23] Join Wondery Plus in the Wondery app or on Apple Podcasts.
[24] Or you can listen for free wherever you get your podcasts.
[25] Yuval Harari, welcome to Armchair Expert.
[26] Are you out on a book tour?
[27] Yes.
[28] You are.
[29] Just touring North America.
[30] Will you go all over the world?
[31] We have another European leg of the tour, and then China, and then maybe Turkey and India.
[32] And eventually we hope to reach South America as well.
[33] Well, your first two books, Sapiens and Homo Deus, have sold 12 million copies or more at this point, and were translated into 45 languages, yeah?
[34] So I assume you have a very global appeal.
[35] It seems so.
[36] Is it mind-blowing for an academic to have such success, popular success?
[38] It wasn't in the beginning.
[39] You get used to almost everything.
[40] But it's also, you know, I get a lot of help from a lot of people.
[41] My husband, also my partner and manager, is kind of the PR genius behind this whole phenomenon.
[42] And I have a team of other people who are working with us.
[43] And, you know, if it was only me, I would have collapsed long ago.
[44] Yeah.
[45] It's impossible to deal with it just as a single individual.
[46] An arrangement I too have with my wife, yeah.
[47] Actually, historically, it's quite a common arrangement that, you know, the marriage partners or the family is also the basic economic unit.
[48] So once upon a time, you had together a field or a herd of goats, and now you manage books.
[49] Well, I want to kind of launch right into some of your ideas.
[50] I just want to first say that I've read both of your first books.
[51] I've just started 21 Lessons for the 21st Century.
[52] That's your new book.
[53] Yeah.
[54] I got the title correct.
[55] Yes.
[56] Okay, wonderful.
[57] I just passed the first test.
[58] But I read the first two books and I loved them.
[59] Sapiens and Homo Deus.
[60] I think maybe even Homo Deus for me was more mind-blowing, with some concepts I had never thought of.
[61] But I also heard you on Sam Harris over a year ago, I guess.
[62] And you said some things in there that were not in either book that really kind of blew my mind.
[64] So I kind of want to bring people slowly up to speed.
[65] So you first, you have a PhD in history from Oxford.
[66] Is that accurate?
[67] Originally, I was a specialist in medieval military history.
[68] I was writing about the Crusades and the Hundred Years' War and things like that.
[69] Yeah, totally off topic.
[70] But have you seen any of this stuff where they're starting to bring to market some of these potions that they used in medieval Europe? Because they were written down, and it turns out that some of them have been really effective in treating, like, SARS and stuff, or these superviruses.
[71] I would be very skeptical about it.
[72] Life expectancy in the Middle Ages was under 40; something like almost 50% of children died before they reached adulthood.
[73] Medieval medicine, you wouldn't like it; the safest thing is to stay away from it.
[74] Okay, but they did, they didn't realize they did, but they had antibiotics.
[75] The alchemists were coming up with, you know, they would let it rot in a yak belly and all this crazy stuff, but they were creating antibiotics.
[76] But then, as viruses evolve so quickly, they become irrelevant, and they just exist now in a book.
[77] It's just fascinating to me that you can discover something amazing, then you can lose it, then you can, it can come back.
[78] Yeah, that can certainly happen in history that we have lost a lot of stuff on the way and we don't even know what we've lost.
[79] Right.
[80] We don't know what we don't know.
[81] We don't know what we don't know.
[82] Yeah.
[83] And so you grew up in Israel?
[84] Yeah.
[85] And you still live there?
[86] Yeah, I still live in Israel.
[87] Okay.
[88] And you also, just to get into personal life for five seconds, I learned this on Sam Harris.
[89] You also disappear for like three months of the year and go to India and meditate.
[90] Am I accurate?
[91] Altogether, if you take all the retreats that I do during the year, I think it adds up to something like almost three months.
[92] Now, I just want to say I admire that, I mean, basically what you have is boundaries for your life, right?
[94] Because it would be, I assume, quite tempting for you to just stay on the hamster wheel sprinting.
[95] No, it's not tempting at all.
[96] Oh, it's not?
[97] It's exhausting.
[98] So really, it's not easy to just stop everything for a month or two.
[99] But it is tempting.
[100] I mean, it's...
[101] But I would imagine there are people that are disappointed to learn that they will not hear from you for another six weeks or something.
[102] So in your real life, you're having to tell people, I'm very sorry.
[103] I know you need me, but I will be gone.
[104] Yes.
[105] That's an admirable quality, I think.
[106] It's almost the opposite of codependence in a way.
[107] So you know how to take care of yourself, right?
[108] Yeah, but again, it also demands the cooperation of other people, like my husband: I disappear for 60 days and he has to stay there and keep answering the phones and the emails.
[109] And you know, 99% of what we do now is just say no. People come with all kinds of requests, from interviews to theater productions to whatever, and many of them are really wonderful ideas and good causes.
[110] And just, you know, the thing with a human being, so far, you can't copy them.
[111] So with a book, I can have 12 million copies of my book and can reach everywhere.
[112] But with myself, there is just one pair of legs and one stomach and one head, and it's just in one place, and you can't copy it. Well, on that topic, I think all of our minds are blown that you're here. I don't know how you're saying yes to us. Yeah. You're in our stupid little attic, and that's a little mind-blowing to me as a huge fan of yours. But I think what's kind of unique about what you do is, and this has always confused me why there isn't, and maybe there is now, a department at all universities that is aggregating everything and just noticing, oh, is there some overlap here?
[113] Like, are we missing, are they finding little bits of truth that someone should be assembling or aggregating to come up with some real thought breakthroughs?
[114] And what you seem to do is you have a very comprehensive interest in the world.
[115] It starts, I assume, with history, but then much of Sapiens is anthropological, which was what I studied, and then you get into technology and philosophy and Buddhism, all these things, it's so comprehensive.
[116] Did you at all feel like you were getting off the path by doing that?
[117] Yeah, it's a bit off the path, but the scientific kind of enterprise is, again, it's built on cooperation.
[118] Nobody can be an expert on everything.
[119] Most people become experts on more and more narrow subjects, and this is very important.
[121] But you do need also people who kind of look at the big picture and also are able to communicate this big picture to the general public, which is what I do.
[122] Now, if all scientists would do what I do, we would not have any science.
[123] Right, right, right.
[124] We would just have a bunch of guys trying to depict the big picture, but without any details.
[125] You'd have a lot of great dinner conversations.
[126] Yes.
[127] And no iPhone.
[128] Yeah.
[129] So you need the people who spend like five years on developing the new antibiotic and you need the people who spend 10 years on researching some medieval manuscript in order to better understand the relations between Christianity and Islam in the 12th century and things like that.
[130] And then, yes, you need a few people who would take all these bits and pieces together and build some picture out of it because people are now flooded with enormous amounts of information.
[131] And the thing they need most is not more information.
[132] It's the last thing they need is more information.
[133] They need to make sense of it.
[134] Yeah.
[135] And this is becoming more and more difficult.
[136] Well, this is, I don't know if you remember, but as soon as we had the Patriot Act, the government could now gather all this information from telephones and all this stuff.
[137] Gathering information is no problem.
[138] They have billions and billions of megabytes of all this information, but nobody's figured out a way to sift through it or make any use of it, really.
[140] So it's almost useless that they're even gathering it.
[141] Yeah, I mean, if you go in this direction, if you look, for example, at the recent wars in Iraq and in Afghanistan, nobody in history had such good intelligence-gathering abilities as the U.S. in Iraq and in Afghanistan.
[142] You couldn't make yourself a cup of coffee in some Baghdad suburb without the Americans knowing about it.
[143] And what did they do with all this amazing information?
[144] They still lost, or are losing, both wars.
[145] Yeah.
[146] So apparently just gathering lots and lots of information is not enough to make the right decisions.
[147] So I'm going to try to give the Reader's Digest version of Sapiens.
[148] So you correct me if I'm wrong, but the compelling kind of narrative of that book is we as animals, as humans, we have dominated this planet, right?
[149] And how did we come to do that?
[150] Your first thought is, oh, we're smart, but there's more to it than that.
[151] For us to dominate, right, we have to be able to gather in large, large groups.
[152] And how do we do that?
[153] And the answer is myths, right?
[154] Is that the thing that allows two strangers to meet each other on the plains?
[155] And that stranger goes, I believe in money and you do too, so we have some business we could do. Or, I believe Jesus is the son of God; me too.
[156] So these myths allow this in-group, out-group trick.
[157] It allows us to include a lot of people in our in-group.
[158] So do a better job than I just did.
[159] Yeah, it allows us above all else to trust and cooperate with strangers.
[160] All social animals, I mean, there are many social animals besides Homo sapiens.
[161] All social animals have tricks about how to cooperate with other animals they know.
[162] Chimpanzees can cooperate with 50 other chimpanzees, with a hundred other chimpanzees.
[163] Human species: there were once many human species. We are now the only humans around, but until not very long ago, a couple of tens of thousands of years, there were many different human species on the planet Earth.
[164] And they were also social animals.
[165] What is really remarkable about our species is not that we are smarter than everybody else, but that we can cooperate in far larger groups than anybody else.
[166] We can cooperate in thousands and tens of thousands and millions, and eventually today we have global networks of billions of people, for instance, trading and belonging to the same economic system.
[167] So the question of why Homo sapiens came to dominate this planet really boils down to: why are we the only animals, the only mammals, that can cooperate on a very large scale?
[168] And the answer to that is the imagination, the ability to create and spread fictions.
[169] Because if you look at any large -scale human cooperation, you will always find some fictional story at the basis.
[170] It's clearest in the case of religions that they are based on fictional stories.
[171] Now, even religious people will agree that all religions are fictional stories, except one, except my religion, of course.
[172] You ask a Jew, the Jew will tell you Judaism.
[173] That's the truth.
[174] But Christianity, you know, all these stories about Jesus rising from the dead and being the son of God.
[175] This is just a fictional story.
[176] Humans invented that.
[177] You go to the Christians.
[178] they will say, no, no, no, no, no, this is truth.
[179] But the Muslims, they believe in all these crazy mythologies that Muhammad received the Quran from the archangel, Gabriel, and so on and so forth.
[180] This is a fiction.
[181] You ask the Muslims, they will tell you about the Hindus.
[182] They believe in really silly myths.
[183] And so on.
[184] So it's very obvious that all religions, really, are based on fiction, which doesn't mean that they are bad.
[185] These fictions enable people not just to fight crusades and jihads, but also to come together to build beautiful cathedrals, or to build schools and hospitals and so forth.
[186] What is more important to realize is that the same principle also underlies nations, also underlies the modern economic system.
[187] Our modern economic system is also based on fictional stories. Corporations, which are the most important economic entities in the world, they are just stories invented by the powerful sorcerers that we call lawyers.
[188] A corporation like Google or Toyota or General Motors, it's not the factories, it's not the people, it's not the products, it's a story invented by lawyers.
[189] But as long as everybody believes in the story, it works.
[190] Yes.
[191] And just as, you know, a thousand years ago, almost all people served some imaginary god or other,
[192] so today, most of us serve some imaginary corporation or other.
[193] Now, when I was reading it, I had this very, I guess, cognitive dissonance, where I'm reading and you're kind of taking me wisely from the most obvious, which is religion, which, as an atheist, I'm like, yep, that's a myth.
[194] Everyone believed in it.
[195] I see why it allowed people to gather in groups of tens of thousands.
[196] Money, yes, money has no value.
[197] This piece of paper, we all agree that it has a value, but there's no intrinsic value in the piece of paper.
[198] We all just agree upon that.
[199] That makes sense.
[200] And I'm going down the list of the things you expose as being myths.
[201] The nation state, I agree.
[202] There is a line across the map, and one side's Canada, one side's America.
[203] And we really think I'm American.
[204] That's my identity.
[205] I go, oh, yeah, that's preposterous.
[206] But then you go, humanism's a myth.
[207] And I go, whoa, hold on there, Yuval.
[208] No, no, no, this is true.
[209] And then you say civil rights is a myth.
[210] And I go, whoa, hold on.
[211] You lost me. And then I had to really challenge myself.
[212] I'm like, well, what are the odds that I agree with every other example he gave?
[213] But he's wrong on the two I cherish, which is kind of the fun of reading the book, if you're open to that kind of challenging yourself.
[214] So just explain quickly, how on earth could human rights be a myth?
[215] What else?
[216] I mean, they are definitely not...
[217] They are definitely not a biological reality.
[218] I mean, people talk about natural rights and things like that, but on the biological level, just as chimpanzees don't have rights and jellyfish don't have rights, homo sapiens has no rights.
[219] They are not written in our DNA.
[220] You don't find the Declaration of Independence written in the DNA.
[221] You need shelter, water, food, and a right to vote.
[222] No, in order to survive, you need all kinds of stuff, but it doesn't mean you have a right to these things.
[223] Just as, I don't know, antelopes on the African savannah, they don't have a right to live. Try to convince the lions and the cheetahs that the antelopes have a right to live.
[224] So also for Homo sapiens: biologically speaking, the rights don't exist.
[225] You take a human being, you look anywhere you want.
[226] You cut the human open.
[227] You look in the heart, in the brain, in the DNA.
[228] You won't find any rights there.
[229] There's no organ.
[230] There is no organ that is the rights center of the brain.
[231] Rights are a story invented by humans not so long ago.
[232] It wasn't there throughout history.
[233] Just in the last couple of centuries it became a very popular and widespread story, this idea that humans have rights. I'm not saying that there is something bad in it. I mean, many people, when they hear that this is a story, or that it is a fictional story, they think that this is bad. It's not necessarily bad. I mean, you can't organize people to do almost anything unless they agree on some fictional laws or fictional stories. You can't play baseball or basketball or football unless you get a couple of people to agree on laws, which should be obvious to everybody.
[234] We invented them.
[235] They did not come from heaven or from physics.
[236] Yes.
[237] So that's a great point to make.
[238] Just because you're pointing out that it is a myth or a fiction or created by us is not an argument against it or suggesting it shouldn't be.
[239] It's just let's...
[240] Let's understand what it is.
[241] Be truthful about what it is first.
[242] Yeah, I mean, you know, Harry Potter is a fiction.
[243] It doesn't mean it's bad.
[244] Well, it is.
[245] Witchcraft is very bad.
[246] If people start killing each other because they believe in a different version of the story of Harry Potter, that's bad.
[247] But as long as they don't do that, I mean, it's a very nice book.
[248] Well, that's where I'm guilty.
[249] I kind of want to throw the baby out with the bathwater.
[250] So if I see that people are arguing about whether Hogwarts was set here or there, and that's causing wars, I'm like, let's get rid of this fucking book, it's causing all these problems.
[251] There was a very famous incident, I think in Britain, maybe two years ago. I think either in the play or in one of the movies, they wanted to cast a black person to play Hermione, and there was this huge uproar on the internet: how can a black person play Hermione? She's white. And they went over the entire seven books of Harry Potter until they found the one single case where there is a reference in the books to the skin color of Hermione, something about them sneaking in the woods at night and the moonlight shining on her white skin, something like that.
[252] It's the one place in the whole seven books that they found.
[253] And this was their proof that Hermione must be a white person and it's unthinkable for a black person to play her.
[254] And, you know, this was amazing.
[255] On the one hand, you know, this biblical exegesis: you find it here, in Jeremiah, part three, chapter one, it says so.
[256] And also that, you know, you accept people flying on broomsticks.
[257] That's fine, that's fine.
[258] But a black person playing her? No, no, no, no, no. You have to draw the line.
[259] Yeah.
[260] Yes.
[261] Again, off topic.
[262] But have you had the pleasure of seeing Hamilton?
[263] Yes.
[264] You have, right?
[265] I was amazed that anybody without a PhD in history can understand what's happening there.
[266] Apparently they can.
[267] Yeah, there were many impressive things about it.
[268] One is I read that book.
[269] And I thought, how on earth is this person going to put that entire book into a two-hour musical?
[270] And by God, he did it.
[271] Almost the whole book's in there.
[272] It's incredible.
[273] But I had this moment where I was like, well, this is going to be interesting.
[274] There's going to be a black Aaron Burr and a black Hamilton or whatever.
[275] You know, all these historical figures are going to be played by black or Hispanic actors.
[276] That's going to be weird.
[277] That was my thought going in.
[278] And within nine seconds of the play starting, I'd completely lost that.
[279] I'm like, oh, it doesn't fucking matter at all.
[280] But I almost, I needed to experience it to recognize, oh, it doesn't matter.
[281] We honestly should be casting black folks in movies as historical figures; it doesn't mean anything.
[283] Yeah, I mean, you think about, I don't know, Julius Caesar, the Shakespeare play.
[284] Now, we are perfectly fine with somebody speaking English as Julius Caesar, you know?
[285] Good point.
[286] We are perfectly fine with a Christian playing Julius Caesar, why not?
[287] Even though Jesus was not even born at the time that Julius Caesar was alive.
[288] So, but when it comes to something like race or skin color, no, no, no, no. That's impossible.
[289] Yes.
[290] And in Anthro, this was something that people got great joy out of: just how arbitrary and insignificant a thing to build a category on skin color is.
[291] Because it's one of the most simple things in our DNA, right?
[292] There's a couple of alleles that are going to determine your skin color, while there are, you know, billions of other alleles, and you probably have much more in common with someone who perhaps has black skin.
[293] It's just a terrible biological category.
[294] But it's very, very useful in a terrible way for political and cultural purposes.
[295] Yes, yeah.
[296] You don't need to map the person's genome to put them in the category.
[297] Exactly.
[298] If you want to build a hierarchical society, you need very simple categories.
[299] And you usually also need categories that are inherited in the family.
[300] I mean, if you're the king, you don't want your son to suddenly be in the wrong category.
[301] Yes.
[302] So if it's not inherited in the family, usually it will not be selected to be the basis of some hierarchical social system.
[303] Stay tuned for more Armchair Expert, if you dare.
[304] We've all been there.
[305] Turning to the internet to self-diagnose our inexplicable pains, debilitating body aches, sudden fevers, and strange rashes.
[306] Though our minds tend to spiral to worst-case scenarios, it's usually nothing, but for an unlucky few, these unsuspecting symptoms can start the clock ticking on a terrifying medical mystery.
[307] Like the unexplainable death of a retired firefighter, whose body was found at home by his son, except it looked like he had been cremated, or the time when an entire town started jumping from buildings and seeing tigers on their ceilings.
[308] Hey, listeners, it's Mr. Ballin here, and I'm here to tell you about my podcast.
[309] It's called Mr. Ballin's Medical Mysteries.
[310] Each terrifying true story will be sure to keep you up at night.
[311] Follow Mr. Ballin's Medical Mysteries wherever you get your podcasts.
[312] Prime members can listen early and ad-free on Amazon Music.
[313] What's up, guys?
[314] It's your girl Keke, and my podcast is back with a new season, and let me tell you, it's too good.
[315] And I'm diving into the brains of entertainment's best and brightest, okay?
[316] Every episode, I bring on a friend and have a real conversation.
[318] And I don't mean just friends.
[319] I mean the likes of Amy Poehler, Kel Mitchell, Vivica Fox.
[320] The list goes on.
[321] So follow, watch, and listen to Baby, This Is Keke Palmer on the Wondery app or wherever you get your podcasts.
[323] Another thing in Sapiens that I loved, you have a really unique ability.
[324] You kind of cracked two concepts for me in Sapiens.
[325] One was, well, just communism.
[326] I had never really heard why communism didn't work explained in such a simple, elegant way, which probably other people knew, and I just simply didn't because I'm a bozo.
[327] But the fact that it was centralized is the problem.
[328] That's why that system doesn't work: you just can't have centralized control of anything and have it immediately meet needs and correct for supply and everything else.
[329] Yeah.
[330] And then you kind of map that onto everything: how we trade ideas, and how important it is for everything to be decentralized.
[331] And I just thought, oh, okay, there is nothing even theoretical or moral or anything.
[332] It's just almost a mechanical principle.
[333] Yeah, it depends really on the technology you have at your disposal.
[334] You could look at the whole of history through the prism of methods to process information.
[335] And the two main contenders are centralized systems, where all the information goes into one place and is processed there and the decisions are made there, and distributed systems, in which the information flows freely between many different organizations and institutions and individuals.
[336] There is no center and there are lots of places where decisions, important decisions are being made.
[337] And in the 20th century political arena, you see the struggle between communism and democracy.
[338] It's really about that.
[339] Communism works by concentrating all the information and power in one place.
[340] You have some people in Moscow deciding how many cabbages will be grown in some farm in Kazakhstan and what will be the price of bread in every shop in the country.
[341] And then you have democracy and the free market system, which says, no, we'll just allow information to flow freely between people and institutions, and they can make their own decisions.
[342] And given the technological realities of the 20th century, it just worked much, much, much better.
[343] There was a famous story or anecdote I once heard, that in the waning days of the Soviet Union, when the system was collapsing,
[344] Gorbachev sent people to the West to understand these capitalists: how do they manage their societies?
[345] Yeah, he goes to London.
[346] And the British are very happy to, you know, Margaret Thatcher is in power.
[347] Everything is about the free market and capitalism.
[348] Very happy to explain to this Soviet official how the system works.
[349] So they take him to the banks and to the stock exchange and to the LSE to talk to economics professors.
[350] Until he finally says, wait a minute.
[351] There is something much more fundamental I can't understand about your system.
[352] Back home in Moscow, we have our best minds, the best minds in the Soviet Union, working on the problem of how to provide bread to Moscow.
[353] And nevertheless, in every grocery store and every supermarket, you have this long line, a queue for the bread.
[354] And here in London, it's a city of millions.
[355] We've been passing all day, all these bakeries and grocery stores.
[356] I haven't seen a single bread line.
[357] So, cancel all my other appointments, and just take me to meet the person.
[358] The minister of bread.
[359] Who is it?
[360] And then the British hosts, they're like, who is this guy?
[361] I mean, there is no minister of bread.
[362] Nobody is in charge of providing bread for London.
[363] And that's the real secret of the system.
[364] You just allow all these consumers and producers to exchange information and make their own decisions.
[365] Now, what people often forget when they hear this example or the analysis is that it all depends on the technology of the day.
[366] In some situations, given some technologies, centralized systems are much less efficient than distributed systems.
[367] This was the case in the late 20th century, and this is why the United States defeated the Soviet Union.
[368] But it's not always like that.
[369] Given a different technological reality, things look very different.
[370] And one of the dangers in the 21st century is that machine learning and artificial intelligence will make centralized systems much more efficient than distributed systems.
[371] And dictatorship might become more efficient than democracies.
[372] Oh, fascinating.
[373] That's juicy.
[374] So that, I assume, is in 21 Lessons.
[375] That's part of 21 Lessons for the 21st Century.
[376] Right, because there was no way for the Soviet Union to be gathering the appropriate data in real time, making real-time decisions.
[378] And technology is such now that that could conceivably happen with some mega-computer.
[379] Yeah, gathering the data is less, the problem is not so much gathering the data.
[380] It's processing it, it's analyzing it.
[381] So like in the 1970s Soviet Union, you would have all these massive amounts of information flowing all the time from the most distant provinces of Russia and Kazakhstan and Ukraine to the central nervous system of the whole organization in Moscow, and there you had the problem.
[382] Nobody had the ability to process these enormous amounts of information fast enough and efficiently enough and make the right decisions.
[383] So they were making worse and worse decisions.
[384] But what's changing now is that, again, given machine learning and artificial intelligence, we are developing the technology that can process enormous amounts of information, much better than any human being, in one place.
[386] And actually, there is now an advantage to trying to concentrate it all in one place, because you can thereby discover all kinds of patterns in the data that you would never be able to find if you had only partial data.
[387] To give an example, if you think about the advances now in genetics: to discover what a gene or a group of genes is doing, it's most of the time just based on statistics.
[388] You have statistics about the DNA and the medical situation and the life of a lot of people, and you just discover patterns: ah, people who have this gene, they tend to suffer from this disease.
[389] People who have this combination of genes, they tend to be very proficient in this kind of skill.
[390] Aha, I found something.
[391] So it's all statistics.
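To make the pattern-hunting Harari describes concrete, here is a minimal sketch, with entirely hypothetical counts, of the kind of gene-disease association test a pooled database enables: a chi-square test on a 2x2 table of variant carriers versus disease status.

```python
# Minimal sketch of the statistical gene-disease association Harari alludes to.
# All counts below are hypothetical illustration, not real data.
from scipy.stats import chi2_contingency

# 2x2 contingency table built from a (hypothetical) pooled database:
# rows = carries the variant / does not, columns = has disease / does not.
table = [
    [120, 880],   # carriers:     120 with the disease, 880 without
    [300, 8700],  # non-carriers: 300 with the disease, 8700 without
]

chi2, p_value, dof, expected = chi2_contingency(table)

# A tiny p-value is the "aha, people who have this gene tend to suffer
# from this disease" moment: variant and disease co-occur far more
# often than chance would predict.
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
print("associated" if p_value < 0.05 else "no evidence of association")
```

The larger the database, the smaller the effects such a test can detect, which is exactly the advantage Harari ascribes to concentrating the data in one place.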
[392] Well, a lot of great stuff's coming out of, like, Denmark and those Netherlands countries, where your medical information, although private, doesn't have your name attached to it.
[393] It does go into a database.
[394] Exactly.
[395] And they do all these great epidemiological studies there.
[396] And they have breakthroughs in three hours.
[397] Like here, people had made the spurious connection between one of our vaccines and autism.
[399] And they were just like, oh, let us tell you in three hours.
[400] You know, 50% of the people had that vaccine here, zero correlation.
[401] Oh, case solved.
[402] Exactly.
[403] And in these cases, concentrating all the data in one place actually makes the system much more efficient.
[404] So to take the extreme example, what happens if tomorrow morning the Chinese government issues an order that every citizen of the People's Republic of China should go within the next two weeks to the nearest clinic or police station and give a DNA sample, and also, of course, give free access to all their medical records and school records and whatever?
[405] And you build, at a stroke, the largest genetic database in the world.
[406] And with zero privacy concerns, and you can start making all these amazing discoveries.
[407] And then the next thing that happens is that people all over the world, they realize actually the Chinese are the most advanced in this field of genetics.
[408] So if I want to scan my DNA and discover what kind of, say, ailments I am more susceptible to, I will not go to some American company which has a database of just a million people.
[409] I will go to the Chinese with their database of 1.4 billion people.
[410] And when everybody goes there, it's a snowball effect.
[411] It becomes even more efficient.
[412] And very soon... and it's all based on concentrating the data in one place.
[413] Yeah, so for us Americans, hellbent on liberty and privacy and all these myths we have basically come together to celebrate.
[414] That's a big leap for us, right?
[415] We have a lot of distrust with whoever's in charge of that database.
[416] And for a very good reason, I mean, I'm not here kind of to recommend the Chinese system as the best system in the world.
[417] It obviously has a downside when somebody knows so much about you and they are hardly accountable to you.
[418] It's not like there is an election in four years and you can vote them out of office.
[419] Then you are extremely exposed to control and manipulation of a kind we have never seen before in history.
[421] Stalin and Hitler didn't have anything like that.
[422] Right.
[423] Now, you have some clairvoyant powers.
[424] You have some incredible ones in there in Homo Deus.
[425] They're amazing, your observations.
[426] This kind of thing, being able to potentially map 1.4 billion people's genomes and probably solve a laundry list of medical conditions,
[427] does that make you excited and optimistic, or fearful, or a combination of all of that?
[428] Both.
[429] I mean, these kinds of developments, they always have an upside and a downside.
[430] They can do amazing things for, for example, our health.
[431] And at the same time, they can have terrible political and social consequences, resulting in worse discrimination than ever before, in far more dictatorial regimes than ever before, in opening real biological gaps between, for example, classes and castes that we have never seen before.
[432] And it's true of every major technology in history.
[433] It always had an upside and a downside.
[434] But is it futile to even worry?
[435] I mean, like, the future you present is likely nothing could stop it.
[436] No, nothing is deterministic.
[437] It's a waste of your anxiety almost.
[438] No, no, I don't agree.
[439] I mean, are you better off just doing these things and, as bad consequences arise, solving those consequences, or should you be fighting passionately to prevent it from even happening, I guess?
[440] What's the...
[441] First of all, we need to realize that technology doesn't determine how people use it.
[442] I mean, if somebody now comes along and says, look, AI and genetics, this is scary stuff.
[443] We should just stop all research in AI and genetics.
[444] This will not happen.
[445] Some foreign actor will do it, as we say here.
[446] Somebody will do it.
[447] Even if you can get an entire country to stop.
[448] Even if the great USA bans all further research in genetics.
[449] So it will just continue in other places.
[450] And very soon the Americans will realize that they are being left behind.
[451] So they have to join.
[452] So you can't just stop it.
[453] It's like doing steroids in the Olympics.
[454] What the hell else are you going to do?
[455] No, I'm not sure.
[456] I don't know.
[457] I'm not sure this is the example that I would give.
[458] Because you can actually do something.
[459] Regulation can be effective, and there is a very good case to be made that we do need to regulate something like steroids in sports, and that we can do it.
[460] Technology can help us.
[461] I mean, technology works both ways.
[462] You can use the new technologies, for example, to give more and more treatments to athletes, but you can also use technology to monitor these kinds of usages in a more effective way and regulate against them, if this is what you want to do.
[464] It's the same with something like AI and surveillance: at present we see the development of more and more AI systems which work in the service of corporations and governments to monitor individuals.
[465] But there is nothing about the basic capabilities of AI, which says you cannot use it in the opposite way.
[466] You can build AI systems that monitor corporations and governments in the service of individuals.
[467] Instead of the government.
[468] Wouldn't the capital required to have that kind of system just naturally exclude the proletariat?
[469] Like, how would we have access to that?
[470] You won't develop it yourself, but there is a huge market out there.
[471] For example, to develop an AI system that monitors government officials in order to prevent corruption.
[473] The same way that a government can employ an AI system to spy on the citizens and locate incidents in which citizens express criticism of the government: it just goes over all your emails and all your phone calls and picks up the patterns that the government deems to be dangerous.
[474] It can do that.
[475] You can also do the reverse: build an AI system that constantly monitors the actions and the emails and the bank accounts and the lifestyle of government officials in order to discover patterns that are linked with corruption.
[477] And why not?
[478] And there is a huge market for that.
[479] Where will that come out of?
[480] The university?
[481] Who's fighting for us?
[482] At present, we don't see many such systems being developed.
[483] What I'm saying is that technically...
[484] It can happen.
[485] It can happen.
[486] There is nothing in the technology that says it must work in the service of one side, unidirectionally.
[488] Yeah, so it could come from a corporation, which realizes, hey, there is a huge business opportunity here.
[489] I can sell this monitoring system to countries all over the world in order to fight corruption.
[490] And, you know, corruption is a business worth trillions of dollars.
[491] Yeah.
[492] So.
[493] Black money, they call it.
[494] And there's so much of it moving all over the world.
[495] It can come from an NGO.
[496] You can get people to, you know... there are many NGOs fighting against corruption.
[498] So they need to get together and get a few good coders and start developing these kinds of tools.
[499] It can be sponsored by a government.
[500] Lots of governments are not very happy with their officials being corrupt.
[501] So there is a huge opportunity out there.
[502] Stay tuned for more Armchair Expert, if you dare.
[503] Okay.
[504] Now, of the many things I bring up at dinner parties that are your thoughts, the one I think I trade in most often is in Homo Deus, you talk about this very profound thing, which is the self.
[505] And we think of the self as being one thing: me, Dax Shepard, I'm a self.
[506] And you point out that minimally there's two Dax Shepards.
[507] There's the experiential Dax Shepard, the one who's, you know, scrolling through Instagram and so happy for two hours.
[509] The whole time I'm doing it, I'm in heaven.
[510] And then I go and lay down at night to go to bed.
[511] And then there's the narrative self who's writing Dax's life story who says, Jesus, dude, you fucking spent two hours staring at your hand.
[512] That's a terrible waste of your life.
[513] I'm disappointed in you, right?
[514] So you start by just introducing this concept that even we aren't unified as one thing.
[515] We have these facets.
[516] And that where we're heading with technology is that your smartphone, very soon in the future, will be measuring biometrics.
[517] It'll know your blood sugar, your heart rate, your cortisol levels, all these things.
[518] And the example I think you give is that you could set a goal on this smartphone to help you realize something.
[519] And that smartphone may vibrate as you're walking into a meeting and say, hey, Dax, don't talk in this meeting,
[520] because the last time your blood sugar was this way and you got this little amount of sleep, you pissed off your boss.
[521] So just shut the fuck up for the next hour and a half.
[522] And then you pose the most intoxicating question of all, which is: what is the device going to service?
[523] Is it going to service the narrative Dax or the experiential self?
[524] And will we give it permission to make that decision?
[525] That blew my head off my shoulders.
[526] What did I leave out of that?
[527] What could you elaborate on that?
[528] I mean, I just find that to be, we will be confronting this problem you laid out in my lifetime.
[529] Yes, yes.
[530] I mean, as our understanding of the human body and the human brain improves, and at the same time, as we have more and more sophisticated AI, these kinds of scenarios can go in all kinds of directions; you can have the government monitoring you 24 hours a day.
[531] So if you live in North Korea, what you just described will take a very different form: you have to wear a biometric bracelet all day, which constantly monitors just what you said, your cortisol level, your sugar level, your blood pressure, and so forth.
[532] And if you just happen to listen to a speech by Kim Jong -un, and the bracelet picks up the biometric signs of anger, that's the end of you.
[533] Oh, my God.
[534] So it can go in that direction in some country.
[535] Let's get even, like, even more inconsequential.
[536] Like, my wife's got it hooked to my wrist and I walk by a beautiful girl on the street.
[537] She goes, excuse me, sir.
[538] I see what's happening in your body.
[539] Yes.
[540] It could be the total end of all internal life.
[541] Yeah.
[542] One thought experiment, which maybe is even being done somewhere.
[543] I'm not sure.
[544] Imagine that you are wearing a shirt which reacts. The shirt is connected to biometric sensors, like your Fitbit or whatever, and the shirt can light up in all kinds of colors. If you're angry, it becomes red. If you are sexually attracted, it becomes red with flashing lights. If you're bored, it becomes, I don't know, blue. And just imagine what it means. And technically it's very simple. Maybe there is already a startup out there; if not, after this podcast I think we'll have a couple, and maybe I should actually register...
[545] I could definitely see parents wanting it for their kids.
[546] Like if you could just visually know what animal you're dealing with, it would be so helpful.
[547] But just imagine, let's leave aside, you know, the sexual issues.
[548] Boredom.
[549] Like what happens if you go to meet your boss, you're having a chat with your mother, whatever, and the shirt goes blue?
[550] And they know.
[551] And, you know, all the things that for millions of years, evolution adapted us to hide.
[552] Yeah.
[553] Suddenly they are out there.
[554] So the shirt thought experiment, this is like the most in-your-face.
[555] Yeah.
[556] But you have all kinds of subtle scenarios in which only the government knows or only the corporation knows or only you know.
[557] Like, you want to gather information about yourself during the day that will afterwards be useful for making decisions in life.
[558] I mean, because of what you described, this division between the experiencing self and the narrating self or the storytelling self.
[559] We experience life in one way, and then we imagine it and we tell ourselves stories which are often completely different.
[560] Oh, 100%.
[561] If you're married, you've experienced that daily.
[562] Yes.
[563] So if you think about making decisions about, I don't know, which friends you like to hang out with.
[564] And so you think you enjoy yourself with these friends, but actually the truth is very different.
[565] So the device can tell you what you actually enjoy.
[566] And, you know, people now experience it, for instance, with television, with VOD, video on demand, or with Netflix. There is a famous study, I think, that was done: when people record all kinds of movies on their VOD, they tend to record all these kinds of high-level dramas and, you know, things like that.
[567] Yeah, yeah.
[568] But then when the moment comes to actually watch a movie, you never, yeah, you never want to see it.
[569] You want to see some stupid Hollywood comedy.
[570] Watch your mouth, please.
[571] No, there are some very good Hollywood comedies.
[572] I'm totally teasing.
[573] But the thing is, with this device, now you see it. Like, you scroll through the list of movies and you say, yes, I recorded all these movies,
[574] but I don't really want to see them, because my narrating self has this image of me as a very sophisticated person who watches these French art nouveau dramas and whatever, but I don't really want to see them. Yeah, my comparison would be, not everyone, but lots of people in my circle subscribe to the New Yorker, and then if you go into their bathroom and there's a New Yorker and an Us Weekly, that Us Weekly, the pages are almost worn off, and they have not cracked the New Yorker. Like, they want to be the person who reads the New Yorker every time they can sit down, but they just can't resist the juicy pictures.
[575] But what's interesting about this world, this future of ours you're painting, is that these questions that are ultimately philosophical questions, which you would almost think would have been rendered obsolete as we kind of get more technologically advanced.
[576] They are almost, we're in a position where they're going to be vital, weirdly.
[577] Like, we actually are now going to have to find out what our philosophy is, because we're going to engage these machines to execute what we think we believe in.
[578] So we had better figure out what we believe in, right?
[579] It's more important now than ever.
[580] It's paramount that we know what we're trying to aim for as these things assist us to get there.
[581] Yeah, I think philosophy is now more important than ever before, because lots of what used to be philosophical problems are becoming very practical problems of engineering, and more and more engineers, I think, need to learn philosophy in order to solve them. The best example I know is with self-driving cars. As everybody is talking about now, in order to put a self-driving car on the road, you need to solve a few philosophical questions. Oh, right. The most common scenario is that the car is driving and suddenly two kids running after a ball jump in front of the car.
[582] And the only way to save the two kids is to swerve to the side and fall off a precipice and kill the owner of the car, who is asleep in the back seat.
[583] Now, what should we do?
[584] These kinds of questions philosophers have been arguing about for thousands and thousands of years with very little actual impact.
[585] It's the trolley.
[586] It's the trolley problem.
[587] Exactly.
[588] It's a trolley problem.
[589] And the interesting thing about the trolley problem, there is a very big difference between what people say in the philosophy seminar in university and how they actually behave.
[590] Yes.
[591] But with a self-driving car, you need to program the algorithm in a certain way.
[592] You can't just go on...
[593] Leave it undecided.
[594] Yeah, you can't leave it.
[595] You need an answer.
[596] You'll kill all three people probably.
[597] The engineers need an answer.
[598] So the answer can come in all kinds of ways.
[599] The government can mandate an answer, or you can just say, I believe in the free market: Tesla will come up with the Tesla Altruist and the Tesla Egoist, and you just choose. You know, the customer is always right. Let me answer that question for you right now: they don't even have to bring the philanthropist version to the market, everyone's buying the Egoist Tesla. Yeah, there was actually, again, a study about that, and the thing is, most people who were asked said they think that the car should sacrifice its owner.
[600] But then when they asked them, would you actually buy this car?
[601] They said, absolutely not. What, am I crazy?
[602] That's right.
[603] I'll buy a Mercedes.
[604] I bet they'll let me live.
[605] Yeah.
[606] Yeah.
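To make concrete what it means that the engineers "need an answer," here is a purely illustrative sketch of how such an ethical setting could be encoded as an explicit, user-selectable policy. The policy names and the simple casualty-count rule are hypothetical, not any manufacturer's actual system.

```python
# Purely illustrative: hard-coding an answer to the trolley problem as a
# configurable policy. Nothing here reflects any real vehicle's software.
from enum import Enum

class Policy(Enum):
    ALTRUIST = "altruist"  # the hypothetical "Tesla Altruist" setting
    EGOIST = "egoist"      # the hypothetical "Tesla Egoist" setting

def should_swerve(pedestrians_at_risk: int, occupants_at_risk: int,
                  policy: Policy) -> bool:
    """Swerve off the road (sacrificing occupants) or continue (hitting
    pedestrians)? The algorithm cannot leave this undecided."""
    if policy is Policy.EGOIST:
        return False  # never sacrifice the occupants
    # ALTRUIST: minimize total deaths; keep course on a tie.
    return pedestrians_at_risk > occupants_at_risk

# The scenario from the conversation: two kids vs. the sleeping owner.
print(should_swerve(2, 1, Policy.ALTRUIST))  # True: swerve, owner dies
print(should_swerve(2, 1, Policy.EGOIST))    # False: stay on course
```

The point is not this specific rule but that some rule must be written down, which is how a seminar-room question becomes an engineering requirement.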
[607] Now, this wasn't in either of your books, but this comes from Sam.
[608] And this again was another mind-blowing experience for me: you were laying out this future of AI, and you were talking about the high probability that in the future, 80% of the jobs that are currently being done now by humans will be done by machines, and that you're going to have a huge class of people, the useless class, that don't really do anything, but all their needs are met by these different robots, right?
[609] And Sam said, well, what are people going to do all day to give meaning to their life?
[610] And you said, well, they're going to participate in virtual reality.
[611] They're going to play virtual reality games, probably.
[613] This is ringing a bell.
[614] Am I getting it wrong?
[615] I'm familiar with the...
[616] Okay.
[617] So like Sam, I was listening to you and I thought, well, this is so dystopic.
[618] I don't want my kids to just put fucking goggles on and that's their life, right?
[619] And he said, well, I'm so discouraged that that's what people are going to do.
[620] And then you said, oh, Sam, we've been playing virtual reality for thousands of years.
[621] Religion is virtual reality.
[622] Explain to us how religion is virtual reality.
[623] Well, you have a couple of rules of the game, which were invented by humans, but those who play this particular game, they are sure that this is the reality.
[624] And you live your entire life trying to gain points and not to lose points.
[625] So if you play the Christian game, then you need to give to charity and you need to pray, and you shouldn't have sex before marriage, and absolutely no homosexual sex.
[626] You lose a hundred points if you do that.
[627] That's a big one.
[628] And that's a big one.
[629] And if at the end of life you have a high enough score, then you move on to the next level of the game in heaven.
[630] Yeah.
[631] And that's wonderful.
[632] And some people, especially I think further back in history, are playing that game for hours of the day.
[633] They're aware of that game.
[634] Like a good chunk of their consciousness is dedicated to that game.
[635] Yeah.
[636] And how they're interacting with people and everything.
[637] So what's another example you could give of this virtual reality we've all been playing?
[638] Well, I mean, first of all, I have to say this is absolutely not a prophecy.
[639] Like, nobody really knows how the job market or the world will look in 50 years.
[640] This was just exploring one of the possibilities that we are facing.
[641] We can still do many things to prevent this particular possibility from being realized.
[642] But if it does happen, and of course there are also much worse possibilities.
[643] There is the scenario that you have a lot of people who have no jobs, no economic value, no political power, and nobody cares about them.
[644] And they get no support.
[645] They don't get to play virtual reality games because they hardly have anything to eat.
[647] They have to struggle for survival.
[648] So there are definitely worse scenarios, even than that.
[649] Even when you were describing it and I was listening, I thought, sure, at the point where 80% of the people are unemployed and we've really figured out some production scale that feeds everyone and clothes them, great.
[650] But then I said, it's when it's 35% of the population.
[651] I don't even know how you get there.
[652] Like, yes, if you could turn on a light switch and 80% of us are unemployed, great.
[653] But I don't know how you get past the point when 35% of the country is unemployed; that's a revolution.
[654] Long before 35%.
[655] Yeah, I don't know how we get there.
[656] It's almost like you just have to keep building the infrastructure, building it and building it.
[657] We're not allowed to turn that light switch on until it's literally self-sufficient, which of course will not happen.
[658] Yeah, there are many, many pitfalls on the way.
[659] And again, people, it's not like one day suddenly 80% of jobs disappear and that's it.
[661] No, it's a gradual process.
[662] Some jobs disappear.
[663] Some jobs change.
[664] New jobs appear.
[665] There will be new jobs.
[666] Yeah.
[667] But one of the problems is that the pace of change is going to accelerate.
[668] And in order to find employment in the new jobs, you will have to retrain yourself.
[669] And not just once, but several times, because you got a new job, and 10 years later the new job, too, has been automated.
[670] So you again have to reinvent yourself.
[672] And it's going to be very difficult to retrain people, not just on the, I don't know, practical level of new skills, but you really need to reinvent yourself even psychologically.
[673] Well, my urologist told me that he doesn't think he would recommend his son go to medical school.
[674] He said, if you've seen Watson diagnose cancer at an 80% success rate while the oncologists are only at 50%,
[675] I can tell you pretty assuredly, that job's not going to exist for my kids.
[676] Diagnosing cancer, no. I mean, something that has to do only with information.
[677] Information comes in, gets processed, and information goes out.
[678] This is the easiest thing to automate.
[679] So a doctor whose almost sole duty is to suck information from you, process it, and come up with a diagnosis, this is not going to be a viable job in a couple of decades.
[680] Mind-blower.
[681] But a nurse, on the other hand, this is a much safer bet, because anything that involves both cognitive and manual skills is much, much more difficult to automate.
[683] So there will be nurses, human nurses, long after all the diagnosis of cancer is done by computers, because to give an injection or to replace a bandage, robots are so far away from that.
[684] If you see the development of robots today... I mean, we met, I think, two years ago, this expert on robotics.
[685] And she said that if the robots are coming for you in some apocalyptic robot science fiction movie, if the robots are coming for you, just close the door behind you.
[686] No robot is so far able to manage the simple task of turning a doorknob.
[687] No, I just watched a video and everyone was so excited that this robot did in fact open a door.
[688] And it took this fucking thing like 45 seconds.
[689] I was like, this is what we're celebrating?
[690] Like, that was as clumsy as it gets.
[691] I wanted you to talk about Buddhism a little bit because I also loved your breakdown of craving.
[692] I just think that's a fascinating thing to be aware of as a human being is that true suffering comes from craving.
[693] That's a breakthrough in thought for me. Yeah, that's, you know, Buddhism 101 for the last 2,500 years.
[694] But basically that, yes, I mean, suffering is, you want something.
[695] You don't get it.
[696] That's suffering.
[697] That's the whole deal.
[698] Yeah, I mean, I grew up thinking, you know, mourning the loss of somebody was suffering.
[699] But recognizing, no, the craving to not be feeling that mourning is the suffering.
[700] Yeah, when you miss something, it's not an abstract idea.
[701] It's very unpleasant sensations in the body.
[702] It's a very unpleasant experience.
[703] And then a part of the mind goes, I don't like this.
[705] Yeah.
[706] And this is generated by the mind.
[707] And this is basically what suffering is all about.
[708] And then we go about trying to change the entire world.
[709] We completely disrupt the ecological system.
[710] We fly to the moon.
[711] We split the atom.
[712] We wage world wars.
[713] And it's all because we can't handle these experiences within ourselves.
[714] Yeah.
[715] Okay.
[716] So my real question for you is: you have, more than any other person I've ever read, the most comprehensive view of the world, simply because you know its history so well, you know a lot of its science.
[718] You really know how this place is working.
[719] No. No, no, no, no, no, I don't.
[720] Well, you won't accept that, but I'm going to tell you, you know how it's working a lot more than most of us.
[721] And I want to know that as you've come to understand more and more why we do what we do.
Again, Sapiens is about how we got here.
Homo Deus is about where we're going.
21 Lessons is about where we're at now.
[725] Which I'm glad you're...
[726] Because I will say that my only complaint about the two books is that there's nothing really prescriptive.
[727] It's like, this is where it's going.
[728] This is where it was.
I need you, smart person, to tell me what the...
[730] What should we do?
But that aside, having this comprehensive understanding of how we got to this place and why we do the things we do biochemically, has that exacerbated the plight of the human condition or has it helped it?
[732] I want to know.
[733] I think it helps.
I think that to understand, to be realistic about ourselves as individuals, about ourselves as a species, is a very good thing.
[735] To align your expectations with reality, to be aware of our own biases, of our own weaknesses.
[736] This is an extremely important thing, especially now because of the really extraordinary powers that we are gaining as a species.
[737] We are really, I say it quite often, we are really in the process of upgrading ourselves into gods.
And in the most literal and banal sense: whatever abilities ancient mythology ascribed to the gods, to Zeus and to Vishnu and to Yahweh, we are now acquiring these abilities ourselves.
[739] For instance, the ability to engineer life and to manufacture life.
[740] And we need to have a realistic understanding both of our power and of our weaknesses and biases.
[741] Otherwise, we are going to be very irresponsible gods.
And if you had to choose between your two knowledge sets, one being whatever we would describe your meditating in India as. What would we call that? Are you Buddhist? I try to understand myself as much as possible. Okay, so that's something you've dedicated a ton of time to, as much or somewhat commensurate with your worldly knowledge. I'm gonna, I am God, and I'm gonna erase one of those two understandings. Which one are you going to give up? Well, they are very closely intertwined.
[743] Yeah, I think of them as opposed, but you probably don't.
No, I don't think that I could have done my scholarly work and written Sapiens and Homo Deus and 21 Lessons without both the insights, but also, on a deeper level, simply the mental training that I got from meditation.
[745] In order, for example, to try and condense the whole history of humankind to 450 pages, you need the ability to focus.
[746] And I got that from meditation.
So I think if you took away, like, my experience with meditation, most of my scholarly achievements would go away with it.
[748] So then I'm guessing you would not let go of the meditation above all.
[749] Oh, I think this is a fair scenario.
[750] Yeah, I think this is a fair assumption.
[751] Okay, and then you and your partner, let's say you guys have a daughter or a son.
[752] A dog is more like...
[753] But in this scenario, I don't think dogs go to college, even in Israel.
[754] No, not yet.
[755] Are you hoping if you have that child that they pursue one over the other?
I guess you'd hope that they would learn some internal peace, probably start there.
I think the most important thing is to learn this, to get to know yourself better.
Also because, unlike in previous centuries, this is also going to be the most important for things like the job market and for finding your way around the world.
Because when you're living in a situation in which you have all these systems, as we talked about in the beginning, that are really hacking you and getting to know you so well, you have to know yourself very, very well.
[760] Otherwise, you can be so easily manipulated and controlled by these external systems.
[761] Well, and as you're just saying this, it occurs to me, even if you think about it in terms of the market, if there is a technology now that's going to replace so many parts of what a human can do, you had better invest in the one thing that can't be duplicated.
[762] That's the one thing that'll still be scarce.
[763] Yeah, we still know so very little about the human potential.
[764] Most jobs that exist today utilize just a tiny, tiny part of the human potential.
[765] We don't know most of it.
So both as individuals and also as a species, I would say that it's a very urgent thing to explore ourselves and to get to know the full human potential before it's too late.
[767] I'm so mad you have to leave.
[768] I don't think I've ever been more upset by something.
[769] You're so special.
[770] I'm so flattered you came and saw us and I hope you have a great rest of your book tour.
[771] You're fucking awesome and I hope everyone buys your book because you're incredible.
[772] Thank you.
And now, my favorite part of the show, the fact check, with my soulmate, Monica Padman.
[774] Okay.
[775] I wrote one down.
[776] This is a request.
[777] Oh, wow.
[778] This is an arm cherry request.
[779] Feel free to make them.
[780] I will note what it was, and I wrote it down.
[781] Oh, yes.
The fact check is a little old place where we can get together.
[783] Oh, that was good.
[784] Fact check, baby.
I got me a Chrysler, it seats about 20.
So hurry up and bring your jukebox money.
The fact check is a little old place where we can get together. Fact check, baby.
Good request, right? Arm cherries, keep them coming, please. All right, Yuval. Good luck checking those facts. When you have someone like Yuval on, need you check facts? Correct, there's not very many to check. Right, because he is the facts. Yeah, he's who I would be looking up. Yeah. Yeah.
[789] So, no, not very many.
[790] Okay.
What if we decided to just take on Yuval, like put him on retainer to do our fact check?
[792] And we were spending like $3 million a year, just bankrupting ourselves to know that the facts are coming from the generator of most facts.
It might be worth it to know that they're true.
[794] Yeah.
[795] Okay.
[796] So you said that Sapiens and Homo Deus sold 12 million copies.
[797] Yeah.
I read that directly off of the jacket of the book that they're promoting, 21 Lessons for the 21st Century, so I'm inclined to think that's correct.
[800] That might be correct.
[801] I, I scoured the internet for those numbers and I could not find it.
[802] Yeah, they're right on the jacket of the book.
[803] All right.
[804] Not all right.
They are on the jacket of the book.
[806] Well, according to the internet, in 2017, each of those books hit one million.
In the U.S., maybe domestically.
[808] This is a worldwide book translated into 40 -some languages.
[809] I don't think it, anyway.
[810] Okay.
[811] So this is a little tricky.
[812] Now, you said that there's only a couple alleles that determine skin color.
[813] Thank you.
Are you talking about because people define race as skin color, and there are like eight genetic variants in, like, just even Africans?
[815] What I'm saying is the criteria by which they're sorting people into racial groups.
[816] Yeah, that's what I'm saying.
[817] Yeah, is solely skin color.
[818] Right, exactly, which is crazy because, but I think I'm saying something a little different.
Skin color is a super, it's an incredibly simple part of our genetic code, whether your skin is brown, white, black.
So even if you were attempting to group people with similar genetics, the last thing you would do is take this really simple thing of skin color and make that the criteria by which you're sorting these people out.
[821] So what I'm saying is that there are people in Africa that have the same skin color.
[822] So they have a couple of the alleles or loci for that.
[823] Yet a much larger part of their DNA will bear more resemblance to someone from Ireland than they do even from someone in another part of Africa.
[824] That's what I'm saying.
[825] Right, right.
I guess there was a New York Times article with this professor from the University of Pennsylvania that was saying researchers pinpointed eight genetic variants in four narrow regions of the human genome that strongly influence pigmentation in Africans, some making skin darker and others making it lighter.
So that is saying the same thing, that we are attributing race to color and that's wrong.
But it's kind of saying...
[831] Well, from the anthropological thing that I'm talking about, it's just simply that, how about this?
If you were trying to separate food into nuts, plants, meat, dairy, and you said because cheese is yellow and so is squash, those two things are the same.
[833] Right.
[834] That would be a terrible way to group food.
[835] Right.
[836] Yeah, yeah, yeah.
[837] Can I say a fun thing about skin color that I don't know everyone knows?
[838] Sure.
The whole reason that I am white is because my ancestors left Africa and they went to a northern climate with far less sunlight, and you synthesize vitamin D from sunlight.
[841] And if you had dark skin and took on less sunlight, you would die of a vitamin D deficiency and you couldn't pass on your genes.
[842] That's what skin color is all about.
[843] Well, also your capacity to make melanin is a big part of it.
Well, sure, melanin is the thing that makes your skin a different color, darker. Yeah, yeah. So if you have less of it genetically, odds of passing your genes on were much greater in a northern climate, because you'd be sucking in that vitamin D. Also male pattern baldness, that's to suck up more of that yummy vitamin D. This, according to the internet: human skin color is a polygenic trait, meaning multiple gene loci are involved in its expression. At last count, the International Federation of Pigment Cell Societies has determined that there are a total of 378 genetic loci.
[845] Oh, what does that mean?
[846] Loci is like, let's say there's 7 billion markers on a DNA strand.
Each one is a locus.
[848] Oh, like location.
[849] Yep.
[850] Yeah, but that's why I gave it a hard C. Right.
[851] It's deceptive, yeah.
[852] All right.
So 378 involved in determining skin color in humans and mice.
[855] Glad they threw mice in there for us.
[856] Yeah, in case we were wondering about mice.
Okay, so we talked about the trolley problem.
[858] We just mentioned the trolley problem real quick in case people don't know what the trolley problem is.
If they don't watch The Good Place.
[860] Exactly.
The Good Place is the trolley problem.
[862] It's a thought experiment in ethics and philosophy where you see a runaway trolley moving toward five tied up people lying on the tracks.
[863] You're standing next to a lever that controls a switch.
[864] If you pull the lever, the trolley will be redirected onto a side track and the five people on the main track will be saved.
[865] However, there is a single person lying on the side track.
[866] So you have two options.
[867] One, do nothing and allow the trolley to kill the five people on the main track.
[868] Or two, pull the lever, diverting the trolley onto the side track where it will kill one person.
[869] What is the most ethical option?
It's such an interesting...
[871] Well, what's interesting about that in itself, I don't find very interesting because everyone's going to say I pull that lever and save four people.
[872] But the way that they then take the exact same math and fuck you up worse is then they go, okay, so now there are five sick people in a hospital bed, one healthy person lying in another bed, and you have the option of killing that person, harvesting the five organs that are required to save the five other people.
[873] Would you do that?
[874] And no one thinks that we should do that.
Yeah, because then you're actually murdering someone. It's by your hand.
[877] Well, I think it, well, it's the same thing.
[878] Yeah, so that's how you feel about it.
[879] I think it's because we have come to accept that people get sick.
[880] We haven't come to accept that innocent people get hit by trolleys.
[881] We've come to accept that people get sick and die.
[882] So we're very used to that outcome and somewhat comfortable with it.
[883] It's a natural trajectory for people.
[884] So these five people that are dying of an illness, you're just, you're used to that.
[885] Yeah, that's what happens.
But this healthy person shouldn't be sacrificed to prevent something that we know is inevitable.
[887] But getting hit by a trolley is not inevitable.
I think that's where it gets hiccupy.
[889] I guess.
[890] Yeah, I mean, that does make sense.
I still think part of the dilemma is that there's still a part that's removed.
Well, let's say that, like, robotics has gotten to a place in AI that you just pull a lever and then the robot comes in and euthanizes the person and harvests their organs. Then is it good?
[894] I mean, I think more people would say yes to that.
[895] I still think it would be a very low percentage of people that would say we should kill a healthy person to save five ill people.
[896] Yeah, I guess.
[897] Which is interesting because it's the exact same math as the trolley equation.
[898] Although I guess it is, but I guess you're right that it isn't because one is by natural forces and the other is someone tied these people to a track.
How about the five people are just all standing on the track looking at the CN Tower?
[900] Okay.
[901] So they haven't been like captured and being held hostage on the track.
Like there's one guy on the track on the left, he's looking at the CN Tower as well.
And then on the right side, there's five people looking at the CN Tower.
[904] So just either five people are going to get killed or one person's going to get killed.
[905] And then the other one, five people are going to die or one person's going to die.
[906] Yeah, yeah, yes.
[907] I mean, it is all mathy, but if you do start, but I think, yeah, I don't know.
[908] Yep.
All it does is point out the frailty of the human mind and how we have all these really kind of abstract rationales for why things are moral or immoral.
Like a robot, if the robot would pick to kill just one person on the trolley track, that same robot would definitely pick to kill...
[911] Yeah.
[912] Yeah.
[913] But, okay, yeah.
So the modern form of the problem was first introduced in 1967.
However, an earlier version, in which the one person to be sacrificed on the track was the switchman's child, was part of a moral questionnaire given to undergraduates at the University of Wisconsin in 1905.
[917] So that's where it started.
[918] Okay, you mentioned Watson.
You said it diagnoses cancer at an 80% success rate and the oncologists are at 50%.
[920] Oh, sorry.
In 2016, human experts at the University of North Carolina School of Medicine tested Watson by having it analyze a thousand cancer diagnoses.
In 99% of the cases, Watson was able to recommend treatment plans that matched actual suggestions from oncologists.
Not only that, but because it can read and digest thousands of documents in minutes, Watson found treatment options human doctors missed in 30% of the cases.
The AI's processing power allowed it to take into account all of the research papers and clinical trials that the human oncologists might not have read at the time of diagnosis.
The thing I was talking about I saw on 60 Minutes, it was pretty cool.
[926] And it was just about diagnosing the patient with cancer to begin with.
[927] It wasn't like recommending treatment.
[928] Just the thing I saw.
[929] But they can diagnose cancer.
[930] They're not very great at it.
[931] They're right about half the time is what this thing said.
[932] Okay.
[933] That's it.
[934] That was that?
Mm-hmm.
Did you like Yuval?
[937] Yeah, I did.
Did you immensely? Can you walk me through what was going on in your head while it was happening? Um, I mean, it was just like being in school, being in the best lecture. Yeah. Okay, lastly, can I just say one fun fact about Yuval? Yes, please. Right, so a fun comical fact was when Yuval arrived, we learned that he had just met with somebody, and that person who he had met with lives like 150 feet from us.
[939] Yes.
[940] But you can exit the neighborhood on either the north or south side and they had exited on the wrong side.
[941] And then it took them 15 minutes to get here through the back door.
[942] Yeah.
[943] And when he got here, it was like, oh, so sorry.
[944] It took us 15 minutes, you know, to get from this person's house.
[945] And I said, you know, that person's house is 150 feet that way.
And we had a good little chuckle.
[947] It was a nice laugh.
[948] It was a great little icebreaker.
[949] It was.
[950] All right.
[951] Okay.
[952] All right.
[953] Love you.
[954] Love you.
Follow Armchair Expert on the Wondry app, Amazon Music, or wherever you get your podcasts.
[956] You can listen to every episode of Armchair Expert early and ad free right now by joining Wondry Plus in the Wondry app or on Apple podcasts.
Before you go, tell us about yourself by completing a short survey at Wondry.com slash survey.