The Joe Rogan Experience XX
[0] Four, three, two, one. Hello, Lex. Hey, we're here, man. What's going on? We're here. Thanks for doing this. You brought notes? You're seriously prepared. When you're jumping out of a plane, it's best to bring a parachute. This is my parachute. I understand. Yeah. So, how long have you been working in artificial intelligence? My whole life, I think. Really? So when I was a kid, I wanted to become a psychiatrist. I wanted to understand the human mind.
[1] I think the human mind is the most beautiful mystery that our entire civilization has taken on exploring through science.
[2] I think you look up at the stars and you look at the universe out there.
[3] You had, you know, DeGrasse Tyson here.
[4] It's an amazing, beautiful scientific journey that we're taking on and exploring the stars.
[5] But the mind to me is a bigger mystery, and more fascinating, and it's been the thing I've been fascinated by from the very beginning of my life. And I think all of human civilization has been wondering, you know, what is inside this thing? The hundred trillion connections, they're just firing all the time, somehow making the magic happen, to where you and I can look at each other, make words. All the fear, love, life, death that happens is all because of this thing in here.
[6] And understanding why is fascinating.
[7] And what I early on understood is that one of the best ways, for me at least, to understand the human mind is to try to build it.
[8] And that's what artificial intelligence is. You know, it's not enough to sort of study it from a psychology perspective, to investigate it from a psychiatry perspective, from the outside. The best way to understand is to do.
[9] So you mean almost like reverse engineering a brain?
[10] There's some stuff, exactly, reverse engineering the brain.
[11] There's some stuff that you can't understand until you try to do it.
[12] You can hypothesize... I mean, we're both martial artists, from various directions.
[13] You can hypothesize about what is the best martial
[14] art. But until you get it in the ring, like what the UFC did, and test ideas, is when you first realize that the touch of death, which I've seen some YouTube videos on... that you perhaps cannot kill a person with a single touch, or with your mind, or telepathy. That there are certain things that work: wrestling works, punching works. Okay, can we make it better? Can we create something like a touch of death?
[15] Can we figure out how to turn the hips, how to deliver a punch in the way that does do a significant amount of damage?
[16] And then you, at that moment, when you start to try to do it and you face some of the people that are trying to do the same thing, that's the scientific process.
[17] And you actually begin to understand what is intelligence.
[18] And you begin to also understand how little we understand.
[19] And it's like Richard Feynman, who I'm dressed after today.
[20] Are you?
[21] He was a physicist.
[22] I'm not sure if you're familiar with him.
[23] Yeah, he always used to wear this exact thing.
[24] So I feel pretty badass wearing it.
[25] If you think you know astrophysics, you don't know astrophysics.
[26] That's right.
[27] Well, he said it about quantum physics, right?
[28] Quantum physics, that's right.
[29] That's right.
[30] So he was a quantum physicist.
[31] And he kind of, I remember hearing him talk about that, that understanding the nature of the universe of reality could be like an onion.
[32] We don't know, but it could be like an onion to where you think you know you're studying a layer of an onion and then you peel it away and there's more.
[33] And you keep doing it and there's an infinite number of layers.
[34] With intelligence, there's the same kind of component to where we think we know.
[35] We got it.
[36] We figured it out.
[37] We figured out how to beat the human world champion in chess.
[38] We solved intelligence.
[39] And then we try the next thing.
[40] Wait a minute, Go is really difficult to solve as a game. And then you say, okay... I came up when the game of Go was impossible for artificial intelligence systems to beat, and it has now recently been beaten, within, like, the last five years. Right, the last five years. There's a lot of technical, fascinating things about why that victory is interesting and important for artificial intelligence. It requires creativity, correct? It does not... no, it just exhibits creativity. Oh. So the technical aspects of why AlphaGo, from Google DeepMind, that was the designers and the builders of the system that was the victor... they did a few very interesting technical things, where essentially you develop a neural network, this type of artificial intelligence system, that looks at a board of Go as a lot of elements on it, as black and white pieces,
[41] and is able to tell you how good this situation is, and how can I make it better?
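(A rough sketch of that idea in Python with numpy, for illustration only; the tiny two-layer network and the board encoding here are stand-ins I've made up, not AlphaGo's actual architecture, which used much larger convolutional networks plus tree search:)

```python
import numpy as np

# A 19x19 Go board encoded as +1 (black), -1 (white), 0 (empty).
board = np.zeros((19, 19))
board[3, 3], board[15, 15] = 1.0, -1.0

# Tiny "value network": one hidden layer mapping the flattened board
# to a single score in (-1, 1) -- how good this position looks.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(361, 64))
W2 = rng.normal(scale=0.01, size=(64, 1))

def value(b):
    hidden = np.tanh(b.reshape(-1) @ W1)  # learned board features
    return np.tanh(hidden @ W2).item()    # position evaluation

# Trained on many positions, a score like this can guide a search
# toward moves that "make it better"; untrained, it's just noise.
print(value(board))
```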
[42] And that idea... so chess players can do this. I'm not actually that familiar with the game of Go.
[43] So I can speak to chess. I'm Russian, so chess is romanticized.
[44] It's a beautiful game.
[45] I think that you look at a board and all your previous experiences, all the things you've developed over tons of years of practice and thinking, you get this instinct of what is the right path to follow.
[46] And that's exactly what the neural network is doing.
[47] And some of the paths it has come up with are surprising to other world champions.
[48] So in that sense it says, well, this thing is exhibiting creativity.
[49] Because it's coming up with solutions that are outside the box, from the perspective of the human.
[50] How do you differentiate between "requires creativity" and "exhibits creativity"?
[51] I think one, because we don't really understand what creativity is. It's almost on the level of concepts such as consciousness, for example. The question, which there's a lot of thinking about, of whether creating something intelligent requires consciousness, requires us to be actual living beings aware of our own existence. In the same way, does doing something like building an autonomous vehicle, that's the area I work in, does that require creativity? Does that even require something like consciousness and self-awareness? I mean, I'm sure in LA there's some degree of creativity required to navigate traffic, and in that sense you start to think, are there solutions that are outside of the box that an AI system uses? Once you start to build it, you realize that to us humans certain things appear creative and certain things don't. Certain things we take for granted.
[52] Certain things we find beautiful.
[53] And certain things are like, yeah, yeah, that's boring.
[54] Well, there's creativity in different levels, right?
[55] There's creativity like to write The Stand, a Stephen King novel.
[56] That requires creativity.
[57] There's something about his, he's creating these stories.
[58] He's giving voices to these characters.
[59] He's developing these scenarios and these dramatic sequences in the book that's going to get you really sucked in.
[60] That's almost undeniable creativity, right?
[61] Is it? So he's imagining a world. What is it always set in, New Hampshire? Massachusetts? A lot of it's Maine. Maine, that's right. So he's imagining a world, and imagining the emotion at different levels surrounding that world. Yeah, that's creative. Although, there's a few really good books, including his own, that talk about writing. Yeah, he's got a great book on writing. And it's actually called On Writing. On Writing, yeah. If there's anyone who can write a book on writing,
[62] It should be Stephen King.
[63] I think Steven Pressfield... I hope I'm saying it right... The War of Art. The War of Art. Beautiful book.
[64] And I would say, if from my recollection, they don't necessarily talk about creativity very much.
[65] That it's really hard work of putting in the hours of every day of just grinding it out.
[66] Well, Pressfield talks about the muse.
[67] Pressfield speaks of it almost in, like, a strange, mystical sort of connection to the unknown. I'm not even exactly sure if he believes in the muse, but, if I could put words in his mouth... I have met him, he's a great guy, he was on the podcast once... I think the way he treats it is that if you decide the muse is real, and you show up every day and you write as if the muse is real, you get the benefit of the muse being real.
[68] That's right.
[69] Whether or not there's actually a muse that's giving you these wonderful ideas.
[70] And what is the muse?
[71] So I think of artificial intelligence the same way.
[72] There's a quote by Pamela McCorduck, from a 1979 book, that I really like.
[73] She talks about the history of artificial intelligence.
[74] AI began with an ancient wish to forge the gods.
[75] And to me, gods, broadly speaking, or religions, represent... it's kind of like the muse. It represents the limits of possibility, the limits of our imagination.
[76] So it's this thing that we don't quite understand, that is the muse, that is God.
[77] Us chimps are very narrow in our ability to perceive and understand the world, and there's clearly a much bigger, beautiful, mysterious world out there, and God or the muse represents that world. And for many people, I think, throughout history, and especially in the past hundred years, artificial intelligence has come to represent that a little bit: the thing which we don't understand, and we crave. We're both terrified of, and we crave, creating this thing that is greater, that is able to understand the world better than us. And in that sense artificial intelligence is the desire to create the muse, this other, this imaginary thing. And I think one of the beautiful things, if you talk about everybody from Elon Musk to Sam Harris, to all the people thinking about this, is that there is a mix of fear of that unknown, of creating that unknown, and an excitement for it.
[78] Because there's something in human nature that desires creating that.
[79] Because like I said, creating is how you understand.
[80] Did you initially study biology?
[81] Did you study the actual development of the mind or what is known about the evolution of the human mind?
[82] Of the human mind, yeah.
[83] So my path is different... it's the same for a lot of computer scientists and roboticists: we ignore biology, neuroscience, the physiology and anatomy of our own bodies.
[84] And there's a lot of beliefs now that you should really study biology.
[85] You should study neuroscience.
[86] You should study our own brain.
[87] The actual chemistry, what's happening, what is actually, how are the neurons interconnected, all the different kinds of systems in there.
[88] So that is a little bit of a blind spot, or it's a big blind spot.
[89] But the problem is, so I started with more philosophy almost.
[90] It's where, if you think of Sam Harris... in the last couple of years, he's started kind of thinking about artificial intelligence.
[91] And he has a background.
[92] neuroscience, but he's also a philosopher. And I started there, by reading Camus, Nietzsche, Dostoevsky, thinking: what is intelligence, what is human morality, what is free will? All of these concepts give you the context in which you can then start studying these problems. And then I said, there's a magic that happens when you build a robot and it drives around. I mean, you're a father. I'd like to be, but I'm not yet.
[93] There's a creation aspect that's wonderful.
[94] That's incredible.
[95] For me, I don't have any children at the moment, but the act of creating a robot, where you programmed it, and it moves around, and it senses the world, is a magical moment.
[96] Did you see Alien Covenant?
[97] Is it a sci-fi movie?
[98] Yeah.
[99] No. Have you ever seen any of the Alien films?
[100] So I grew up in the Soviet Union, where we didn't watch too many movies, so I need to catch up. We should catch up on that one in particular, because a lot of it has to do with artificial intelligence. There's actually a battle between, spoiler alert, two different but identical artificially intelligent synthetic beings that are there to aid the people on the ship. One of them is very creative, and one of them is not, and the one that is not has to save them from the one that is. Spoiler alert, I won't tell you who wins. But there's a really fascinating scene at the very beginning of the movie where the creator of this artificially intelligent being is discussing its existence with the being itself, and the being is trying to figure out who made him. It's this really fascinating moment. And this being winds up being a bit of a problem, because it possesses creativity and it has the ability to think for itself, and they found it to be a problem. So they made a different version of it which was not able to create, and the one that was not able to create was much more of a servant, and there's this battle between these two.
[101] I think you would find it quite fascinating.
[102] It's a really good movie.
[103] Yeah, the same kind of theme carries through Ex Machina and 2001: A Space Odyssey.
[104] You've seen Ex Machina?
[105] Yeah, I've seen it.
[106] So because of your, I've listened to your podcast, and because of it, I've watched it a second time.
[107] Because the first time I watched it, I had a Neil deGrasse Tyson moment, where it was, as you said, there's... cut-the-shit...
[108] Cut the shit moments.
[109] Yes.
[110] For me... from the movie's opening, everything about it... I was rolling my eyes.
[111] Why were you rolling your eyes?
[112] What was the cut-the-shit moment?
[113] So that's a general bad tendency that I'd like to talk about amongst people who are scientists that are actually trying to do stuff.
[114] They're trying to build the thing.
[115] It's very tempting to roll your eyes and tune out in a lot of aspects of artificial intelligence discussion and so on.
[116] For me, there's real reasons to roll your eyes, and there's just... well, let me just describe it. So this person in Ex Machina, no spoiler alerts, is in the middle of, what, like a Jurassic Park type situation, where he's in the middle of a land that he owns. Yeah. We don't really know where it is. It's not established, but you have to fly over glaciers, and you get to this place, and there's rivers, and he has this fantastic compound. And inside this compound he appears to be working alone. Right. And he's, like, lifting... he's, like, doing curls, I think, like dumbbells, and drinking heavily. So everything I know about science, everything I know about engineering, is it doesn't happen alone. So the situation of a compound without hundreds of engineers there working on this is not feasible. It's not feasible. It's not possible. And the other moments like that were the technical... the discussion about how it's technically done. They threw in a few pieces of jargon, to spice stuff up, that don't make any sense. Well, that's where I am blissfully ignorant, so I watch it and go, this movie's awesome. Yeah, and you're like, ah, I know too much. Yeah, I know too much. But that's a stupid way to think, for me. So once you suspend disbelief and say, okay, those are not important details... Yeah, but it is important. I mean, they could have gone to you, or someone who really has knowledge in it, and cleaned up those small aspects and still kept the theme of the story. That's right, they could have, but they would make a different movie... well, a slightly different one. I don't know if it's possible to make... So you look at 2001: A Space Odyssey, I don't know if you've seen that movie. Yes. That's the kind of movie you'll start making if you talk to scientists. You'll start making those kinds of movies, because you can't actually use jargon that makes sense, because we don't know how to build a lot of these systems.
[117] So the way you need to film it and talk about it is with mystery.
[118] It's this Hitchcock type.
[119] Like you almost, you say very little.
[120] You leave it to your imagination to see what happens.
[121] Here everything was in the open.
[122] Right.
[123] Even in terms of the actual construction of the brain, that foam-looking, whatever, gel brain.
[124] Right.
[125] If they gave a little bit more subtle mystery, I think I would have enjoyed that movie a lot more.
[126] But the second time, really because of you... you said, I think, it's your favorite sci-fi... well, movie.
[127] It's absolutely one of my favorite sci-fi movies, period.
[128] I loved it.
[129] Yeah.
[130] So I watched it again.
[131] And also Sam Harris said that he also hated the movie and then watched it again and liked it.
[132] So I gave it a chance.
[133] Why would you see a movie again after you hate it?
[134] Because maybe you're self-aware enough to think there's something unhealthy about the way I hated the movie.
[135] Like you're like introspective enough to know.
[136] It's like I have the same experience with Batman.
[137] Okay.
[138] I watched...
[139] Which one?
[140] Dark Knight, I think.
[141] Christian Bale?
[142] The Christian Bale one.
[143] So to me, the first
[144] time I watched that, it's a guy in a costume, like, speaking excessively with an excessively low voice. I mean, it's just something with, like, a little bunny ear... not bunny ears, but, like, little ears. It's so silly. But then you go back, and, okay, if we just accept that that's the reality of the world we live in, what are the human nature aspects that are being explored here? What is the beautiful conflict between good and evil that's being explored here? And what are the awesome graphics effects that are on exhibit, right?
[145] So if you can just suspend that, that's beautiful.
[146] The movie can become quite fun to watch, but still, to me, not to offend anybody, but bad superhero movies are still difficult for me to watch.
[147] Yeah, who was talking about that recently?
[148] Was it Kelly?
[149] Kelly Slater?
[150] No. It was yesterday.
[151] It's Kyle.
[152] He was going off on superhero movies or something.
[153] Right.
[154] He doesn't like superhero movies.
[155] We were talking about Batman, about Christian Bale's voice, and he's like, the most ridiculous thing was that he's actually Batman, not his voice.
[156] That's true.
[157] No, that's true.
[158] I'm Batman.
[159] That part of it is way less ridiculous than the fact that he's Batman.
[160] He's Batman.
[161] Because anybody can do that voice.
[162] Yeah.
[163] But I contradict myself... I'm a hypocrite, because Game of Thrones, or Tolkien's Lord of the Rings, is totally believable to me. Yeah, of course, dragons.
[164] Well, that's a fantasy world, right?
[165] That's the problem with something like Batman or even Ex Machina is that it takes place in this world.
[166] Whereas they're in Middle Earth.
[167] They're in a place that doesn't exist.
[168] And it's like, if you like Avatar... if you make a movie about a place that does not exist, you can have all kinds of crazy shit in that movie, because it's not real. That's right. Yeah. But at the same time, like, Star Wars is harder for me. And you're saying Star Wars is a little more real, because it feels feasible, like you could have spaceships flying around. Right. What's not feasible about Star Wars, though? Oh, I'm not...
[169] I'll leave that one to Neil deGrasse Tyson.
[170] He was getting angry about the robot that's circular, that rolls around.
[171] He's like, it would just be slippery.
[172] Yeah.
[173] Like, trying to roll around all over the sand.
[174] It wouldn't work.
[175] It would get no traction.
[176] I was like, that's true.
[177] If you had, like, glass tires, and you tried to drive over sand,
[178] like smooth tires,
[179] You'd get nothing.
[180] Yeah.
[181] He's actually the guy that made me realize... you know the movie Ghost, with Patrick Swayze?
[182] And it was at this podcast, or somewhere, he was talking about the fact that... so this guy can go through walls, right?
[183] It's a beautiful romantic movie that everybody should watch, right?
[184] But he doesn't seem to fall through chairs when he sits on them.
[185] Right?
[186] So he can walk through walls, but he can put his hand on the desk.
[187] Yeah.
[188] He can sit, like, his butt has a magical shield that is in this reality.
[189] This is a quantum shield that protects him from falling.
[190] Yeah.
[191] So, that's, you know... those devices are necessary in movies.
[192] I get it.
[193] Yeah, but you got a good point.
[194] He's got a good point, too.
[195] It's like, there's cut the shit moments.
[196] They don't have to be there.
[197] You know, you just have to work them out in advance.
[198] But the problem is a lot of movie producers think that they're smarter than people.
[199] They just decide, oh, I just put it in there.
[200] The average person is not going to care.
[201] I've had that conversation with movie producers about martial arts.
[202] And I was like, well, this is just nonsense.
[203] You can't do that.
[204] Like, because I was explaining martial arts to someone, and he was like, ah, the average person is not going to care. I'm like, oh, the average person? Okay. But you brought me in as a martial arts expert to talk to you about your movie, and I'm telling you right now, this is horseshit. Yeah. I'm a huge believer in the Steve Jobs philosophy, where... forget the average-person discussion, because first of all, the average person will care. Steve Jobs would really push the design of the interior of computers to be beautiful, not just the exterior, even if you never see it. If you have attention to detail in every aspect of the design, even if it's completely hidden from the actual user in the end, somehow that karma, whatever it is, that love for everything you do... that love seeps through the product.
[205] And the same, I think, with movies. If you talk about 2001: A Space Odyssey...
[206] There's so many details.
[207] I think there's probably these groups of people that study every detail of that movie and other Kubrick films. Those little details matter. Somehow they all come together to show how deeply passionate you are about telling the story. Well, Kubrick was a perfect example of that, because he would put layer upon layer upon layer of detail into films that people would never even recognize. Like, there's a bunch of correlations between the Apollo moon landings and The Shining. You know, people have actually studied it to the point where they think that it's some sort of a confession that Kubrick faked the moon landing. It goes from the little boy having the rocket ship on his sweater to the number of the room that things happen in.
[208] There's, like, a bunch of very bizarre connections in the film that Kubrick unquestionably engineered, because he was just stupid smart. I mean, he was so goddamn smart that he would do complex mathematics for fun in his spare time.
[209] I mean, Kubrick was, like, a legitimate genius, and he engineered that sort of complexity into his films. And he didn't have cut-the-shit moments in his movies, not that I can recall.
[210] No, not even close.
[211] This was very interesting.
[212] I mean, but that probably speaks to the reality of Hollywood today that the cut the shit moments don't affect the bottom line of how much the movie makes.
[213] Well, it really depends on the film, right?
[214] I mean, the cut-the-shit moments that Neil deGrasse Tyson found in Gravity, I didn't see, because I wasn't aware of what the effects of gravity on a person's hair would be.
[215] You know, he saw it and he was like this is ridiculous.
[216] And then there were some things like, why are these space stations so close together?
[217] I just let it slide while the movie was playing, but then he went into great detail about how preposterous it would be that those space stations were so close together that you could get to them so quickly.
[218] That's with Sandra Bullock and a good-looking guy.
[219] And George Clooney.
[220] George Clooney.
[221] Yeah, the good-looking guy.
[222] So did that pass muster with, you know, like that movie?
[223] He tore it apart.
[224] And when he tore it apart, people went crazy.
[225] They were getting so angry at him.
[226] Yeah, he reads the negative comments.
[227] As you've talked about... I actually, recently, because of doing a lot of work in artificial intelligence and lecturing about it and so on, have plugged into this community of folks that are thinking about the future of artificial intelligence, artificial general intelligence, and they are very much out-of-the-box thinkers, to where the kind of messages I get are best left alone.
[228] So I let them kind of explore those ideas without sort of engaging in those discussions.
[229] I think very complex discussions should be had with people in person.
[230] That's what I think.
[231] And I think that when you allow comments, just random anonymous comments, to enter into your consciousness, you're taking risks. You may run into a bunch of really brilliant ideas that are, you know, coming from people that are considerate, they've thought these things through, or you might just run into a river of assholes. And it's entirely possible. I peeked into my comments today on Twitter, and I was like, what in the fuck? I started reading, like, a couple of them, some just morons, and I'm like, all right... about half this shit, I didn't even know what the fuck they were talking about. But that's the risk you take when you dive in.
[232] You're going to get people that are disproportionately upset.
[233] You're going to get people that are disproportionately, you know, delusional or whatever it is in regards to your position on something.
[234] Or whether or not they even understand your position.
[235] They'll argue something that's an incorrect interpretation of your position.
[236] Yeah.
[237] And you've actually, from what I've heard, been, through this podcast and so on, really good at being open-minded, and that's something I try to preach as well.
[238] So in AI discussions, when you're talking about AGI... so there's a difference between narrow AI and general artificial intelligence. Narrow AI is the kind of tools that are being applied now and are being quite effective.
[239] And then there's general AI, which is a broad categorization of concepts that are human level or superhuman level intelligence.
[240] And when you talk about AGI, artificial general intelligence, there seems to be two camps of people, ones who are really working deep in it, like that's the camp I kind of sit in.
[241] And a lot of those folks tend to roll their eyes and just not engage in any discussions of the future.
[242] Their idea is, it's really hard to do what we're doing.
[243] And it's just really hard to see how this becomes intelligent.
[244] And then the other camp says you're being very short-sighted: you may not be able to do much now, but with the exponential, the hard takeoff, overnight it can become superintelligent, and then it'll be too late to think about it.
[245] Now, the problem with those two camps, as with any camps, Democrat, Republican, any camps, is they seem to be talking past each other, as opposed to recognizing that both have really interesting ideas.
[246] If you go back to the analogy of touch of death, this idea of MMA, right?
[247] So I'm in this analogy.
[248] I'm going to put myself in the UFC for a second.
[249] In this analogy, I'm, you know, like ranked in the top 20.
[250] I'm working really hard.
[251] My dream is to become a world champion.
[252] I'm training three times a day.
[253] I'm really working.
[254] I'm an engineer.
[255] I'm trying to build my skills up.
[256] And then there's other folks that come along, like Steven Seagal and so on,
[257] that kind of talk about other kinds of martial arts, other ideas of how you can do certain things.
[258] And I think Steven Seagal might be onto something.
[259] I think we really need to be open -minded.
[260] Like, Anderson Silva, I think, talks to Steven Seagal, right?
[261] Well, Anderson Silva thinks Steven Seagal is... I want to put this in a respectful way.
[262] Anderson Silva has a wonderful sense of humor, and Anderson Silva is very playful, and he thought it would be hilarious if people believed that he was learning all of his martial arts from Steven Seagal. Got it.
[263] He also loves Steven Seagal movies, legitimately.
[264] So he treated him with a great deal of respect.
[265] He also recognizes that Steven Seagal actually is a master of Aikido.
[266] He really does understand Aikido, and was one of the very first
[267] Westerners teaching in Japan. He speaks fluent Japanese, was teaching at a dojo in Japan, and is, you know, a legitimate master of Aikido. Right. The problem with Aikido is that it's one of the martial arts that has merit in a vacuum. Like, if you're in a world where there's no NCAA wrestlers, or no judo players, or no Brazilian jiu-jitsu black belts, or no Muay Thai kickboxers, there might be something to that Aikido stuff. But in a world where all those other martial arts exist, and we've examined all the intricacies of hand-to-hand combat, it falls horribly short. Well, see, this is the point I'm trying to make. You just said that we've investigated all the intricacies... you said all the intricacies of hand-to-hand combat. I mean, you're just speaking, but you want to open your mind to the possibility that Aikido has...
[268] Some techniques that are effective.
[269] Yeah, when I say all, you're correct, that's not a correct way of describing it.
[270] Right.
[271] Because there's always new moves that are being... like, for instance, in the recent fight between Anthony Pettis and Tony Ferguson, Tony Ferguson actually used Wing Chun in a fight.
[272] He trapped one of Anthony Pettis's hands and hit him with an elbow.
[273] He basically used a technique that you would use on a Wing Chun dummy, and he did it in an actual world-class mixed martial arts fight.
[274] And I remember watching it, going, wow, this crazy motherfucker actually pulled that off, because it's a technique that you just rarely see anybody who fights in MMA getting that proficient at.
[275] And Ferguson is an extremely creative,
[276] open-minded guy, and he figured out a way to make that work in a world-class fight.
[277] So let me then ask you the question. There's these people who still believe, quite a lot of them, that there is this touch of death, right?
[278] Yeah.
[279] So do you think it's possible to discover?
[280] No. Through this rigorous scientific process that is MMA, that started pretty recently... do you think, not the touch of death, but do you think we can get a 10x improvement in the amount of power the human body can generate in punching? No. Certainly not 10x. I think you can get incremental improvements, but it's all based entirely on your frame. Like, if you're a person that has very small hands and narrow shoulders, you're kind of screwed. There's not really a lot of room for improvement. You can certainly get incremental improvement in your ability to generate power, but you'll never be able to generate the same kind of power as, say, a guy with a very big frame, like Brock Lesnar or Derrick Lewis, or, you know, anyone who has... there's, like, classic elements that go with being able to generate large amounts of power.
[281] That's right.
[282] Wide shoulders, large hands.
[283] There's a lot of characteristics of the human frame itself.
[284] Those, even those people, there's only so much power you can generate.
[285] And we pretty much know how to do that correctly.
[286] So the way you're talking about this as a martial arts expert now is kind of the way a lot of the experts in robotics and AI talk about AI when the topic of the touch of death is brought up.
[287] Now, the analogy is not perfect.
[288] I tend to use probably too many analogies.
[289] We maybe know the human body better than we know the possibility of AI.
[290] I would assume so, right?
[291] Because the possibility of AI is basically limitless once AI starts redesigning itself.
[292] It's not obvious that that's true. Our imagination allows it to be true. I'm of two minds... I can hold both beliefs, that are contradicting, in my mind. One is that that idea is really far away, almost bordering on BS, and the other is that it can be there overnight. I think you can believe both those things. So there's another quote, from Barbara Wootton.
[293] It's from a poem I heard in a lecture somewhere that I really like, which is: it is from the champions of the impossible, rather than the slaves of the possible, that evolution draws its creative force.
[294] So I see Elon Musk as a representative of the champions of the impossible.
[295] I see exponential growth of AI within the next several decades as the impossible, but it's the champions of the impossible that actually make the impossible happen.
[296] Why would exponential growth of AI be impossible?
[297] Because it seems inevitable to me. So it's not impossible.
[298] I'm sort of using the word impossible meaning...
[299] Magnificent?
[300] Yeah, it feels very difficult.
[301] Very, very difficult.
[302] We don't even know where to begin.
[303] Grand.
[304] Yep.
[305] Like the touch of death actually feels.
[306] Yeah, but see, the touch of death is horseshit.
[307] But see, you're an expert.
[308] When someone's like, ha!
[309] And they touch you in the chest.
[310] But we don't have the ability in the body to
[311] generate that kind of energy. How do you know that? That's a good question. It's never been done. We understand so much about physiology. How do you know it's never been done? Okay, there could be someone out there with magic that has escaped my grasp. No, you've studied... you've talked about it with Graham Hancock, you've talked about the history. Maybe it was in Roman times that that idea was discovered, and then it was lost. Hmm. Because weapons are much more effective ways of delivering damage. Now I find myself in a very uncomfortable position, of defending the concept, as a martial artist, of... What martial arts did you study? Jiu-jitsu and judo and wrestling. Those are the hard ones. Those are... jiu-jitsu, judo, and wrestling, those are absolute martial arts, in my opinion. This is what I mean: if you are a guy who just has a fantastic physique and incredible speed and ridiculous power... you just can generate ridiculous power... Do you know who Deontay Wilder is? Yes. He's a heavyweight champion of the world, a boxer. You have, what's his name, Tyson Fury... Tyson Fury on tomorrow. Yes. Two undefeated guys, right? Yes, yes. Deontay Wilder has fantastic power. I mean, he just knocks people flying across the ring. I think Deontay Wilder, if he just came off the street, if he was 25 years old and no one ever taught him how to box at all, and you just wrapped his hands up and had him hit a bag, he would be able to generate insane amounts of force.
[312] Now, if you were a person that really didn't have much power, and you had to box with Deontay Wilder, and you were both the same age, and you were a person that knew boxing, and you stood in front of Deontay, it's entirely possible that Deontay could knock you into another dimension, even though he had no experience in boxing.
[313] If he just hauled off and hit you with a haymaker, he might be able to put you out.
[314] If you're a person who is, let's say, built like you, a guy who exercises, who's strong, and then there's someone who's identically built like you who's a black belt in Brazilian jiu-jitsu, and you don't have any experience in martial arts at all, you're fucked.
[315] Right?
[316] If you're a person who's built like you, a guy who exercises and is healthy, and you grapple with a guy who's even stronger than you and bigger than you, but he has no experience in Brazilian jiu-jitsu, he's still fucked.
[317] That's the difference.
[318] That's why I think Brazilian jiu-jitsu and judo and wrestling in particular, those are absolutes, in that you have control of the body.
[319] And once you grab a hold of a person's body, there's no lucky triangle choke
[320] in jiu-jitsu. That's right. Right. But I think I would say jiu-jitsu is the highest representative of that. With wrestling and judo, having practiced those, I've never been quite as humbled as I have been in jiu-jitsu. Yeah. Especially when I started. I was, like, powerlifting... I was a total meathead. And, you know, a 130-pound guy or girl could tap you easily. Yeah, it's confusing. It's very confusing. In wrestling, you can get pretty far with that meathead power. Yeah, yeah, yeah. And in judo a little bit less so, at its highest levels. If you go to Japan, for example... I mean, the whole dream of judo is to effortlessly throw your opponent. Yeah. But, you know, if you go to gyms in America and so on, there is some hard wrestling-style gripping, and just beating each other up pretty intensely, where we're not talking about beautiful uchi-matas or these beautiful throws.
[321] We're talking about some scrapping, some wrestling style.
[322] Yeah.
[323] Yeah, no, I see what you're saying.
[324] Yeah, my experience with jiu-jitsu was very humbling when I first started out.
[325] I had a long background in martial arts and striking and even wrestled in high school.
[326] And then I started taking jiu-jitsu, and a guy who was my size... and I was young at the time,
[327] and he was basically close to my age... just mauled me.
[328] I think he was a purple belt.
[329] He might have been a blue belt.
[330] I think he was a purple belt.
[331] And he just destroyed me. Just did anything he wanted to me. Choked me, arm-barred me. And I remember thinking, man, I am so delusional.
[332] I thought I had a chance.
[333] Right.
[334] Like, I thought, just based on taking a couple classes and learning what an arm bar is, and then being a strong person who has a background in
[335] martial arts, that I would be able to at least hold him off a little bit.
[336] No. And this is... that's so beautiful, and I feel lucky to have had that experience of having my ass kicked. Philadelphia is where I came up.
[337] Because in science, you don't often get that experience in the space of ideas.
[338] You can't choke each other out.
[339] You can't beat each other up in science.
[340] And so it's easy to go your whole life.
[341] I have so many people around me telling me how smart
[342] I am.
[343] There's no way to actually know if I'm smart or not, because I think I'm full of BS.
[344] And in the same realm as fighting, there's... it's what Rickson Gracie said.
[345] I mean, it was him, or someone, maybe Saulo Ribeiro, or somebody: the mat doesn't lie.
[346] There's this deep honesty in it that I'm really grateful for. Almost, you know... you talk about bullies, or even just... my fellow academics could benefit significantly from training a little bit.
[347] I think so, too.
[348] Yeah, it's a beautiful thing. I think it's been talked about, sort of requiring it in high school.
[349] Yeah, we've talked about many times, yeah.
[350] I think it's a more humbling sport, to be honest, than wrestling, because in wrestling, like I said, you could get away with some muscle.
[351] It's also what martial arts are supposed to be in that a small person who knows technique can beat a big person who doesn't know the technique.
[352] That's what we always hoped for, right?
[353] When we saw the Bruce Lee movies and Bruce Lee, who was a smaller guy, could beat all these bigger guys just because he had better technique.
[354] That is actually real in jiu-jitsu, and it's one of the only martial arts where that's real.
[355] Yeah, and I have in Philadelphia started, you had Steve Maxwell here, right?
[356] Sure.
[357] That's kind of where the, that was the spring of Jiu -Jitsu in Philadelphia.
[358] Yeah, he was one of the very first American black belts in jiu-jitsu, like way back in the day.
[359] I believe he was a black belt in the very early 90s, when jiu-jitsu was really just starting to come to America, and he had Maxercise in Philadelphia.
[360] It's still there.
[361] And then I trained at Balance, which is a few Gracie folks: Phil Migliarese, Rick Migliarese, and Josh Vogel, the Vogel brothers.
[362] So all those... I mean, especially the Vogel brothers, these couple of black belts, they came up together.
[363] They're, well, they're smaller.
[364] they're little guys, and I think those were the guys that really humbled me pretty quickly. Well, little guys are the best to learn technique from. Yeah. Because they can't rely on strength. There's a lot of really big, powerful, you know, 250-pound jiu-jitsu guys who are never going to develop the sort of subtlety of technique that, like, the Miyao brothers have... smaller guys who, from the very beginning, have never had an advantage in weight and size, and so they've never been able to use anything but perfect technique.
[365] Eddie Bravo is another great example of that, too.
[366] He competed in the 140-pound, 145-pound class.
[367] But to get back to artificial intelligence, so the idea is that there's two camps.
[368] There's one camp that thinks that, with the exponential increase in technology, once artificial intelligence becomes sentient, it could eventually improve upon its
[369] own design and literally become a god in a short amount of time.
[370] And then there's the other school of thought that thinks that is so far outside of the realm of what is possible today that even the speculation of this eventually taking place is kind of ludicrous to imagine.
[371] Right.
[372] Exactly.
[373] And the balance needs to be struck because I think I'd like to talk about sort of the short -term threats that are there, and that's really important to think about.
[374] But the long -term threats, if they come to fruition, will overpower everything, right?
[375] That's really important to think about.
[376] But what happens is, if you think too much about the encroaching doom of humanity, there's some aspect of it that is paralyzing, where it almost turns you off from actually thinking about these ideas. There's something so appealing about it... it's like a black hole that pulls you in. And if you notice, folks like Sam Harris and so on spend a large amount of the time, you know, talking about the negative stuff, about something that's far away. Not to say it's wrong to talk about it, but they spend very little time on the potential positive impacts in the near term, and also the negative impacts in the near term. So let's go over those.
[377] Yep, fairness.
[378] So the more and more we put decisions about our lives into the hands of artificial intelligence systems, whether you get a loan, or in the autonomous vehicle context, or in terms of recommending jobs for you on LinkedIn, all these kinds of things... the idea of fairness, of bias in these machine learning systems, becomes a really big threat.
[379] Because the way current artificial intelligence systems function is they train on data.
[380] So there's no way for them to somehow gain a greater intelligence than the data we provide them with.
[381] So we provide them with actual data.
[382] And so they carry over, if we're not careful, the biases in that data.
[383] The discrimination that's inherent in our current society as represented by the data.
[384] So they'll just carry that forward.
[385] Like how so?
[386] So there's people working on this, more so to show really the negative impacts, in terms of getting a loan, or to say whether this particular human being should be convicted of a crime or not.
[387] There's ideas there that can carry, you know, in our criminal system, there's discrimination.
[388] And if you use data from that criminal system to then assist the deciders, the judges, juries, lawyers, in making a decision of what kind of penalty a person gets, they're going to carry that forward.
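(A toy illustration of that mechanism in Python with scikit-learn; the data is fabricated purely to make the point and does not come from any real lending or criminal justice system:)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated historical loan decisions: column 0 is income,
# column 1 is a neighborhood group. Suppose past decisions
# penalized group 1 regardless of income.
X = np.array([[50, 0], [60, 0], [50, 1], [60, 1]] * 25, dtype=float)
y = np.array([1, 1, 0, 0] * 25)  # historically approved only for group 0

model = LogisticRegression().fit(X, y)

# Two applicants with identical income, different group:
print(model.predict([[55, 0], [55, 1]]))  # -> [1 0]
# The model faithfully "learned" the bias in the data; nothing in
# the training process corrected for it.
```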
[389] So you mean like racial economic biases?
[390] Racial economic, yeah.
[391] Geographical.
[392] And that's sort of... I don't study that exact problem, but you're aware of it because of the tools we're using.
[393] So, the two ways... I'd like to talk about neural networks, Joe.
[394] Sure, let's do it.
[395] Okay.
[396] So with the current approaches, there's been a lot of demonstrated improvements, exciting new improvements, in the advancement of our artificial intelligence.
[397] And those are, for the most part, have to do with neural networks.
[398] Something that's been around since the 1940s, that has gone through two AI winters, where everyone was super hyped, and then super bummed, and super hyped again, and bummed again.
[399] And now we're in this other hype cycle.
[400] And what neural networks are is these collections of interconnected simple compute units.
[401] They're all similar.
[402] It's kind of like it's inspired by our own brain.
[403] We have a bunch of little neurons interconnected.
[404] And the idea is, these interconnections are really dumb and random at first,
[405] but if you feed them some data, they'll learn to connect, just like they do in our brain, in a way that interprets that data.
[406] They form representations of that data and can make decisions.
[407] But there's only two ways to train those neural networks that we have now.
[408] One is we have to provide a large data set.
[409] If you want that neural network to tell the difference between a cat and a dog, you have to give it 10,000 images of a cat and 10,000 images of a dog.
[410] You need to give it those images.
[411] And who tells it what a picture of a cat and a dog is?
[412] It's humans.
[413] So it has to be annotated.
[414] So as teachers of these artificial intelligence systems, we have to collect this data.
[415] We have to invest a significant amount of effort to annotate that data, and then we teach neural networks to make that prediction.
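(A compressed sketch of that supervised pipeline in Python with PyTorch; random tensors stand in for the 10,000 human-annotated images of each class, and the tiny network is illustrative, not any production system:)

```python
import torch
import torch.nn as nn

# Stand-ins for annotated images: random 64x64 RGB inputs, with
# labels a human annotator would have supplied (0 = cat, 1 = dog).
images = torch.randn(200, 3 * 64 * 64)
labels = torch.randint(0, 2, (200,))

net = nn.Sequential(nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
                    nn.Linear(128, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(net(images), labels)  # compare prediction to label
    loss.backward()                      # adjust the connections
    opt.step()
# The network can only ever get as "smart" as the labels humans gave it.
```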
[416] What's not obvious there is how poor of a method that is to achieve any kind of greater
[417] degree of intelligence.
[418] You're just not able to get very far beyond very specific narrow tasks, of cat versus dog, or should I give this person a loan or not.
[419] These kinds of simple tasks.
[420] I would argue autonomous vehicles are actually beyond the scope of that kind of approach.
[421] And then the other realm of where neural networks can be trained is if you can simulate that world.
[422] So if the world is simple enough, or it's conducive to being formalized sufficiently, to where you can simulate it.
[423] So a game of chess is just, there's rules.
[424] Game of Go, there's rules.
[425] So you can simulate it.
[426] The big exciting thing about Google DeepMind is that they were able to beat the world champion by doing something called competitive self-play, which is, they have two systems play against each other.
[427] They don't need the human.
[428] They play against each other.
[429] But that only works, and that's a beautiful idea and super powerful and really interesting and surprising, but that only works on things like games and simulation.
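(A minimal sketch of the competitive self-play loop in Python, on a trivial matching game; the update rule is a crude stand-in for the reinforcement learning actually used, just to show two systems generating each other's training data with no human games involved:)

```python
import random

# Toy zero-sum game: each side picks 0 or 1; A wins on a match.
class Policy:
    def __init__(self):
        self.p = 0.5                        # probability of picking 1
    def act(self):
        return 1 if random.random() < self.p else 0
    def update(self, action, reward, lr=0.01):
        # nudge toward actions that won, away from ones that lost
        self.p += lr * reward * (1 if action == 1 else -1)
        self.p = min(0.99, max(0.01, self.p))

A, B = Policy(), Policy()
for _ in range(10000):
    a, b = A.act(), B.act()
    r = 1 if a == b else -1                 # +1: A wins, -1: B wins
    A.update(a, r)
    B.update(b, -r)                         # B gets the opposite reward
```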
[430] So now... sorry to keep going to analogies of, like, the UFC... for example, if I wanted to train a system to become the world champion, to beat, what's his name, Nurmagomedov, right?
[431] I could take the UFC video game, I could create two neural networks that use competitive self-play to play in that virtual world, and they could become state of the art, the best fighter ever in that game. But transferring that to the physical world... we don't know how to do that. We don't know how to teach systems to do stuff in the real world. Some of the stuff that freaks you out, often, is Boston Dynamics robots. Yeah. Yeah.
[432] Every day I go to the Instagram page and just go, what the fuck are you guys doing?
[433] Engineering our demise.
[434] Marc Raibert, the CEO, spoke with the class I taught.
[435] He calls himself a bad boy of robotics.
[436] So he's having a little fun with it.
[437] He should definitely stop doing that.
[438] Don't call yourself a bad boy of anything.
[439] That's true.
[440] How old is he?
[441] Okay, he's one of the greatest roboticists of our generation.
[442] That's great.
[443] That's wonderful.
[444] However, you just can't call yourself a...
[445] Don't call yourself a bad boy, bro.
[446] Okay.
[447] So you're not the bad boy of MMA?
[448] Definitely not.
[449] I'm not even the bad man. Bad man?
[450] Definitely not a bad boy.
[451] Okay.
[452] It's so silly.
[453] Yeah.
[454] Those robots are actually functioning in the physical world.
[455] That's what I'm talking about.
[456] And they are using something that was, I think, coined in the, I don't know, 70s or 80s: the term "good old-fashioned AI," meaning there is nothing going on that you would consider artificially intelligent, which is usually connected to learning.
[457] So these systems aren't learning.
[458] It's not like you drop the puppy into the world and it kind of stumbles around and figures stuff out and learns to get better and
[459] better and better.
[460] That's the scary part.
[461] That's the imagination... that's what we imagine: we put something in this world,
[462] at first it's, like, harmless and falls all over the place, and all of a sudden it figures something out, and, like Elon Musk says, it moves so fast that you can only see it with a strobe light.
[463] There's no learning component there.
[464] This is just purely... there's hydraulics and electric motors, there are 20 to 30 degrees of freedom, and it's running hard-coded control algorithms to handle the task of how you move efficiently through space.
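(What "hard-coded control" means in miniature, sketched in Python: a classic proportional-derivative loop driving one joint toward a target angle. Nothing here is learned; the gains and the toy dynamics are invented for illustration and have nothing to do with Boston Dynamics' actual controllers:)

```python
# Drive one joint toward a target angle with a hand-tuned PD law.
# The "skill" lives entirely in gains the engineers chose.
kp, kd = 20.0, 4.0            # proportional and derivative gains
angle, velocity = 0.0, 0.0    # joint state
target = 1.0                  # desired angle, radians
dt = 0.01                     # control timestep, seconds

for step in range(300):
    error = target - angle
    torque = kp * error - kd * velocity   # the hard-coded control law
    velocity += torque * dt               # toy dynamics (unit inertia)
    angle += velocity * dt

print(round(angle, 3))  # settles near 1.0 -- no learning involved
```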
[465] So this is the kind of task roboticists work on. A really, really hard problem is robotic manipulation: taking the arm, grabbing a water bottle, and lifting it. Super hard, somewhat unsolved to this point.
[466] And learning to do that, we really don't know how to do that.
[467] Right, but what we're talking about, essentially, is the convergence of robotic systems with artificially intelligent systems.
[468] That's right.
[469] And as artificially intelligent systems evolve, and then this convergence becomes complete, you're going to have the ability to do things like the computer that beat humans at Go.
[470] That's right.
[471] You're going to have creativity.
[472] You're going to have a complex understanding of language and expression, and you're going to have, I mean, perhaps even engineered things like emotions, like jealousy and anger.
[473] I mean, it's entirely possible that, as you were saying, we're going to have systems that could potentially be biased the way human beings are biased towards people of certain economic groups or certain geographic groups, and you would use that data that they have to discriminate, just like human beings discriminate.
[474] If you have all that in an artificially intelligent robot that has autonomy, and that has the ability to move... this is what people are totally concerned with and terrified of: that all of these different systems that are currently in semi-crude states... they can't pick up a water bottle yet, they can't really do much other than... they can do backflips, but, you know, I mean, I'm sure you've seen the more recent Boston Dynamics ones.
[475] Parkour?
[476] Yeah, I saw that one the other day.
[477] They're getting better and better and better, and it's increasing every year.
[478] Every year they have new abilities.
[479] Did you see the Black Mirror episode Metalhead?
[480] Yeah, and I think about it quite a lot, because, functionally... we know how to do most aspects of that.
[481] Right now.
[482] Right now.
[483] Pretty close, yeah.
[484] Pretty close.
[485] I mean, I don't remember exactly... there's some kind of pebble-shooting situation, where it, like, hurts you by shooting you somehow.
[486] I figure.
[487] Well, it has bullets, didn't it?
[488] Bullets, yeah.
[489] It has a knife.
[490] It stuck it into one of its arms, remember, and... spoiler alert.
[491] It's just an amazing episode of how terrifying it would be if some emotionless robot with incredible abilities is coming after you and wants to terminate you.
[492] And I think about that a lot because I love that episode because it's terrifying for some reason.
[493] But when I sit down and actually in the work we're doing, think about how we would do that.
[494] So we can do the actual movement of the robot.
[495] What we don't know how to do is to have robots that do the
[496] full thing, which is to have a goal of pursuing humans and eradicating them.
[497] I'm... spoiler alerts all over the place.
[498] I think the goal of eradicating humans, so assuming their values are not aligned somehow, that's one, we don't know how to do that.
[499] And two, is the entire process of just navigating all over the world is really difficult.
[500] So we know how to go up the stairs, but how to navigate the path you took from home to the studio today, how to get through that full path, is so much an unsolved problem. But is it? Because you could engineer it... you could program it into your Tesla, you could put it into your navigation system, and have it stop at red lights, drive for you, take turns. And it can do that. So first of all, that, I would argue, we're still quite far away from. But that's within 10, 20 years. Well, how much can it do now? It can stay inside the lane on the highway or on different roads, and it can change lanes, and what's being pushed now is they're trying to be able to enter and exit a highway. So it's some basic highway driving. It doesn't stop at traffic lights, it doesn't stop at stop signs, and it doesn't interact with the complex, irrational human beings... pedestrians, cyclists, cars. This is the onion I talked about. We first... in 2005, the DARPA Grand Challenge. DARPA organized this challenge in the desert.
[501] It says, let's go across the desert, let's see if we can build an autonomous vehicle that goes across the desert.
[502] 2004, they did the first one, and everybody failed.
[503] We're talking about some of the smartest people in the world really tried and failed.
[504] And so they did it again in 2005.
[505] There were a few... Stanford won.
[506] There's a really badass guy from CMU, Red Whittaker.
[507] He led.
[508] I think he's like a Marine.
[509] He led the team there.
[510] And they succeeded.
[511] Four teams finished; Stanford won.
[512] That was in the desert.
[513] And there was this feeling that we had solved autonomous driving.
[514] But that's that onion.
[515] Because you then, okay, what's the next step?
[516] We've got a car that travels across the desert autonomously.
[517] What's the next?
[518] So in 2007, they did the Urban Challenge, the DARPA Urban Challenge, where you drove around a city a little bit.
[519] Super hard problem.
[520] People took it on.
[521] CMU won that one, then Stanford second, I believe.
[522] And then there was definitely a feeling like, yeah, now we had a car drive around the city.
[523] It's definitely solved.
[524] The problem is those cars were traveling super slow, first of all.
[525] And second of all, there's no pedestrians.
[526] There's no... it wasn't a real city.
[527] It was artificial.
[528] It's just basically having to stop at different signs.
[529] Again, one other layer of the onion.
[530] And you say, okay, when we actually have to put this car in a city like L.A., how are we going to make this work?
[531] Because if there's no cars in the street and no pedestrians in the street, driving around is still hard, but doable and I think solvable in the next five years.
[532] When you put pedestrians, everybody jaywalks.
[533] If you put human beings into this interaction, it becomes much, much harder.
[534] Now, it's not impossible, and I think it's very doable, and with completely new, interesting ideas, including revolutionizing infrastructure and rethinking transportation in general, it's possible to do in the next five, ten years, maybe twenty. But it's not easy, like everybody says it is.
[535] But does anybody say it's easy?
[536] Yeah, there's a lot of hype behind autonomous vehicles.
[537] Elon Musk himself and other people have promised autonomous vehicles on timelines that have already passed.
[538] There's been: going into 2018, we'll have autonomous vehicles.
[539] Now, for GM.
[540] They're semi-autonomous now, right?
[541] So I know they do, they can brake for pedestrians.
[542] Like if they see pedestrians, they're supposed to brake for them and avoid them, right?
[543] That's part of the... Technically, no. Wasn't that an issue with the Uber car that hit a pedestrian while it was operating autonomously?
[544] That's right.
[545] Someone, a homeless person, stepped off of a median right into traffic, and it nailed them, and then they found out that just one of the settings wasn't in place?
[546] That's right, but that was an autonomous vehicle being tested in Arizona.
[547] And unfortunately, it was a fatality.
[548] A person died.
[549] Yeah.
[550] The pedestrian was killed.
[551] So what happened there, that's the thing I'm saying is really hard.
[552] That's full autonomy.
[553] That's technically when you can remove the steering wheel and the car drives itself and takes care of everything.
[554] Everything I've seen, everything we're studying.
[555] So we're studying drivers in Tesla vehicles.
[556] We're building our own vehicles.
[557] It seems that it'll be a long way off before we can solve the fully autonomous driving problem.
[558] Because of pedestrians.
[559] But two things.
[560] Pedestrians and cyclists and the edge cases of driving.
[561] All the stuff we take for granted.
[562] The same reason we take for granted how hard it is to walk,
[563] how hard it is to pick up this bottle. Our intuition about what's hard and easy is really flawed as human beings.
[564] Can I interject?
[565] What if all cars were autonomous?
[566] That's right.
[567] If we got to a point where every single car on the highway is operating off of a similar algorithm or off the same system, then things would be far easier, right?
[568] Because then you don't have to deal with random kinetic movements, people just changing lanes, people looking at their cell phone, not paying attention to what they're doing, all sorts of things that you have to be wary of right now driving, and pedestrians and bicyclists.
[569] Totally.
[570] And that's in the realm of things I'm talking about where you think outside the box and revolutionize our transportation system.
[571] That requires government to play along.
[572] It seems like that's going that way, though, right?
[573] Do you feel like that one day we're going to have autonomous driving pretty much everywhere, especially on the highway?
[574] It's not going there in terms of, it's very slow moving.
[575] Government does stuff very slowly with infrastructure.
[576] One of the biggest things you can do for autonomous driving, and it will solve a lot of problems, is to paint lane markings regularly.
[577] Right.
[578] And even that has been extremely difficult to do for, yeah, for politicians.
[579] Right, because right now there's not really the desire for it.
[580] But to explain to people what you mean by that, when the lanes are painted very clearly, the cameras and the autonomous vehicles can recognize them and stay inside those lanes much more easily.
[581] Yeah, there are two ways that cars see the world.
[582] Three, there are different sensors.
[583] The big one for autonomous vehicles is LIDAR, which is these lasers being shot all over the place in 360, and they give you this point cloud of how far stuff is away, but they don't give you the visual texture information, like what brand of water bottle this is.
[584] And cameras give you that information.
[585] So what Tesla is doing, they have eight cameras, I think, is they perceive the world with cameras.
[586] And those two things require different things from the infrastructure, those two sensors.
[587] Cameras see the world the same as our human eyes see the world.
[588] So they need lane markings.
[589] They need infrastructure to be really nicely visible, traffic lights to be visible.
[590] So the same kinds of things us humans like to have, cameras like to have.
[591] And lane marking is a big one.
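To make the camera point concrete, here is a minimal sketch of classic camera-based lane detection with OpenCV: clear paint produces strong edges for the Hough transform to find, which is exactly why regularly repainted markings matter. The thresholds and region of interest are illustrative assumptions, not values from any production autonomy stack.

```python
# Minimal sketch of classic camera-based lane-marking detection (OpenCV).
# Thresholds and the region of interest are illustrative guesses.
import cv2
import numpy as np

def detect_lane_lines(bgr_frame):
    """Return line segments that are likely lane markings."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)  # strong gradients = paint edges

    # Keep only the trapezoid in front of the car, where lane paint lives.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                     (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)

    # The Hough transform finds straight segments among surviving edges.
    # Faded or missing paint means fewer edge pixels, hence fewer or no
    # detected lines -- the failure mode described above.
    return cv2.HoughLinesP(cv2.bitwise_and(edges, mask),
                           rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=40, maxLineGap=100)
```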
[592] The other, there's a lot of interesting infrastructure improvements that can happen, like traffic lights.
[593] Our traffic lights are super dumb right now.
[594] They sense nothing about the world, about the density of pedestrians, about the approaching cars.
[595] If traffic lights could communicate with the car, which makes perfect sense.
[596] It's right there, there are no size limitations,
[597] it can have a computer inside of it.
[598] You can coordinate different things in terms of the same pedestrian kind of problem.
[599] Well, we have sensors now on streets.
[600] So when you pull up to certain lights, especially at night, the light will be red.
[601] You pull up, and it instantaneously turns green because it recognizes that you've stepped over or driven over a sensor.
[602] That's right.
[603] So that's a step in the right direction, but that's really sort of 20-, 30-year-old technology.
[604] So you want to have something like the power of a smartphone inside every traffic light.
[605] It's pretty basic to do, but what's way outside of my expertise is how you get government to do these kinds of improvements.
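For a sense of what "a smartphone inside every traffic light" could look like, here is a toy sketch of a signal broadcasting its phase and timing to approaching cars, in the spirit of vehicle-to-infrastructure (V2I) signal phase and timing messages. The JSON-over-UDP format and the port number are illustrative simplifications, not the actual standardized encoding (real deployments use formats such as SAE J2735 over DSRC or C-V2X).

```python
# Toy sketch of a "smart" traffic light broadcasting its state to cars.
# The JSON-over-UDP format and port are illustrative simplifications.
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 47347)  # hypothetical port

def broadcast_signal_state(intersection_id, phase, seconds_to_change,
                           pedestrians_waiting):
    msg = {
        "intersection": intersection_id,
        "phase": phase,                      # "red" | "yellow" | "green"
        "time_to_change_s": seconds_to_change,
        "pedestrians_waiting": pedestrians_waiting,
        "timestamp": time.time(),
    }
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(msg).encode(), BROADCAST_ADDR)

# An approaching car that receives this no longer has to rely on its
# camera seeing the light: it knows the phase, when it will change, and
# whether pedestrians are queued -- the coordination described above.
broadcast_signal_state("main_and_5th", "green", 12.5, pedestrians_waiting=2)
```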
[606] So, well, correct me if I'm mistaken, but you're looking at things in terms of what we can do right now.
[607] And a guy like Elon Musk or Sam Harris is saying, yeah, but look at where technology leads us.
[608] If you go back to the 1960s, the kind of computers that they used to do the Apollo mission.
[609] You've got a whole room full of computers that doesn't have nearly the same power as the phone that's in your pocket right now.
[610] Now, if you go into the future and exponentially calculate, like, what's going to take place in terms of our ability to create autonomous vehicles, our ability to create artificial intelligence, and all of these things, going from what we have right now to what could be, in 20 years, we very well might look at some sort of an artificial being that can communicate with you, some sort of an Ex Machina type creature.
[611] I mean, that's not outside the realm of possibility at all.
[612] You have to be careful with the at all part.
[613] At all.
[614] It's our ability to predict the future is really difficult, but I agree with you.
[615] It's not outside the realm of possibility.
[616] Yeah.
[617] And the thing is, there's a few examples that I brought along, just because I enjoy these predictions,
[618] of how bad we are at predicting stuff.
[619] The very engineers, the very guys and gals like me sitting before you, made some of the worst predictions in history, in terms of both pessimistic and optimistic.
[620] The Wright brothers: one of them, before they flew in 1903, predicted two years earlier
[621] that it would be 50 years.
[622] I confess that in 1901, that's one of the brothers talking, I said to my brother Orville that man would not fly for 50 years.
[623] Two years later, we ourselves were making flights.
[624] This demonstration of my inability as a prophet gave me such shock that I have ever since distrusted myself and have refrained from all prediction.
[625] That's one of the Wright brothers, one of the people working at it.
[626] So that's a pessimistic estimate.
[627] Versus an optimistic estimation, or a...
[628] Exactly.
[629] And the same with Albert Einstein. Fermi made these kinds of pessimistic observations.
[630] Fermi, three years before the first critical chain reaction, as part of the, he led the nuclear development of the bomb.
[631] He said that he had 90% confidence that it's impossible, three years before.
[632] Okay, so that's on the pessimistic side.
[633] On the optimistic side, the history of AI is laden with optimistic predictions.
[635] In 1965, one of the seminal people in AI, Herbert Simon, said machines will be capable within 20 years of doing any work a man can do.
[636] He also said, within 10 years, a digital computer will be the world's chess champion.
[637] That was in '58.
[638] And we didn't do that until 90-something, 97, so 40 years later.
[639] Yeah, but that's one person, right?
[640] I mean, it's a guy taking a stab in the dark based on what data.
[641] I mean, what's he basing this off of?
[642] Our imagination.
[643] We have more data points now, don't you think?
[644] No. In terms of, no?
[645] Not about the future.
[646] That's the thing.
[647] Not about the future, but about what's possible right now.
[648] Right.
[649] And if you look at the past, it's a really bad predictor of the future.
[650] If you look at the past, what we've done, the immense advancement of technology has given us in many ways optimism about what's possible.
[651] But exactly what is possible we're not good at.
[652] So I am much more confident that the world will look very fascinatingly different in the future.
[653] Whether AI will be part of that world is unclear.
[654] It could be we will all live in a virtual reality world.
[655] Or, for example, one of the things I really think about is, to me, a really dumb AI on one billion smartphones is potentially more
[656] impactful than a super intelligent AI on one smartphone.
[657] The fact that everybody now has smartphones, this kind of access to information, the way we communicate, the globalization of everything, the potential impact there of even subtle improvements in AI could completely change the fabric of our society, in a way where these discussions about an Ex Machina type lady walking around will be silly, because we'll all be
[658] either living on Mars or living in virtual reality. There are so many exciting possibilities. Right. And what I believe is we have to think about them, we have to talk about them. Technology is always the source of danger, of risk. All of the biggest things that threaten our civilization, at the small and large scale, are all connected to misuse of the technology we develop.
[659] And at the same time, it's that very technology that will empower us and save us.
[660] So there's Max Tegmark, brilliant guy, Life 3.0.
[661] I recommend people read his book on artificial general intelligence.
[662] He talks about the race.
[663] There's a race that can't be stopped.
[664] One is the development of technology, and the other is the development of our wisdom of how to stop or how to control the technology.
[665] And it's this kind of race, and our wisdom is always, like, one step behind. And that's why we need to invest in it and keep always thinking about new ideas. So right now we're talking about AI. We don't know what it's going to look like in five years. We have to keep thinking about it. We have to, through simulation, explore different ideas; through conferences, have debates; come up with different approaches for how to solve particular problems, like I said, with bias, or how to solve deepfakes, where you can make Donald Trump or former President Obama say anything, or you can have Facebook advertisements, hyper-targeted advertisements, how we can deal with those situations, and constantly have this race of wisdom versus the development of technology. But not to sit and think, well, look at the development of science and technology, imagine what it could do in 50 years, and we're all screwed. It's important to be nervous about it in that way, but it's not conducive to what we do about it. And the people that know what to do about it are the people trying to build this technology, building this future. What do you mean by know what to do about it? Because, like, let's put it in terms of Elon Musk, right? Elon Musk is terrified of artificial intelligence because he thinks by the time it becomes sentient, it'll be too late. It'll be smarter than us, and we'll have essentially created our successors. Yes, and let me quote Joe Rogan and say, that's just one guy. Yeah, well, Sam Harris thinks the same thing. Yes, and there's a few people who think that. And Sam Harris, I think, is one of the smartest people I know, and Elon Musk, intelligence aside, is one of the most impactful people I know. Yeah, he's actually building these cars. And in the narrow AI sense, he's built this Autopilot system that we've been studying, and the way that system works is incredible.
[666] It was surprising to me on many levels.
[667] It's an incredible demonstration of what AI can do in a positive way in the world.
[668] So I don't, but people can disagree.
[669] I'm not sure of the functional value of his fear about the possibility of this future.
[670] Well, if he's correct, there's functional value in hitting the brakes before this takes place.
[671] Just to be a person who's standing on top of the rocks with a light to warn the boats, hey, there's a rock here.
[672] Like, pay attention to where we're going because there's perils ahead.
[673] I think that's what he's saying.
[674] And I don't think there's anything wrong with saying that.
[675] And I think there's plenty of room for people saying what he's saying and people saying what you're saying.
[676] I think what would hurt us is if we try to silence either voice.
[677] I think what we need in terms of our understanding of this future is many, many, many, many, many of these conversations where you're dealing with the current state of technology versus a bunch of creative interpretations of where this could go and have discussions about where it should go or what could be the possible pitfalls of any current or future actions.
[678] I don't think there's anything wrong with this.
[679] So when you say, like, what's the benefit of thinking in a negative way?
[680] Well, it's to prevent our demise.
[681] So totally.
[682] I agree 100%.
[683] Negativity or worry about the existential threat is really important to have as part of the conversation.
[684] But there's this level.
[685] There's this line.
[686] It's hard to put into words.
[687] There's a line that you cross.
[688] when that worry becomes hyperbole.
[689] Yeah, and then there's something about human psyche where it becomes paralyzing for some reason.
[690] Right.
[691] Now, when I have beers with my friends, the non-AI folks, we actually go, we cross that line all day and have fun with it.
[692] Maybe I should get you drunk right now.
[693] Maybe.
[694] Regret every moment of it.
[695] I talked to Steve Pinker. Enlightenment Now, his book, kind of highlights that. He totally doesn't find that appealing, because that's crossing all realms of rationality and reason.
[696] When you say that appealing, what do you mean?
[697] Crossing the line into what will happen in 50 years.
[698] What could happen?
[699] What could happen.
[700] He doesn't find that appealing.
[701] He doesn't find that appealing because he's studied it, and I'm not sure I agree with him to the degree that he takes it. He finds that there's no evidence. He wants all our discussions to be grounded in evidence and data. And he highlights the fact that there's something about the human psyche that desires this negativity, that it wants, there's something undeniable where we want to create and engineer the gods that overpower us and destroy us. We want to... Or we worry about it.
[702] I don't know if we want to.
[703] Let me rephrase that.
[704] We want to worry about it.
[705] There's something about the psyche.
[706] Because you can't take the genie and put it back in the bottle.
[707] That's right.
[708] I mean, when you say there's no reason to think this way.
[709] But if you do have cars that are semi-autonomous now, and if you do have computers that can beat human beings who are world Go champions, and if you do have computers that can beat people at chess, and you do have people that are consistently working on artificial intelligence.
[710] You do have Boston Dynamics who are getting these robots to do all sorts of spectacular physical stunts.
[711] And then you think about the possible future convergence of all these technologies.
[712] And then you think about the possibility of this exponential increase in technology that allows them to be sentient, like within a decade, two decades, three decades.
[713] What more evidence do you need?
[714] You're seeing all the building blocks of a potential successor
[715] being laid out in front of you, and you're seeing what we do with every single aspect of technology.
[716] We constantly and consistently improve and innovate, right, with everything, whether it's computers or cars or anything.
[717] Everything today is better than everything that was 20 years ago.
[718] So if you looked at artificial intelligence, which does exist to a certain extent, and you look at what it could potentially be 30, 40, 50 years from now, whatever it is, why wouldn't you look at all these data points and say, hey, this could go bad?
[719] I mean, it could go great, but it could also go bad.
[720] I do not want to be mistaken as the person who's saying it's impossible.
[721] I agree with you completely.
[722] I don't think it's impossible.
[723] I don't think it's impossible at all.
[724] I think it's inevitable.
[725] I don't.
[726] I think it is inevitable, yes.
[727] It's the Sam Harris argument,
[728] if superintelligence is nothing more than information processing.
[729] Same as the simulation argument, that we're living in a simulation.
[730] It's very difficult to argue against the possibility that we're living in a simulation.
[731] The question is when and what the world would look like.
[732] So it's like I said, a race.
[733] And it's difficult.
[734] You have to balance those two minds.
[735] I agree with you totally.
[736] And I disagree with my fellow robotics folks who don't want to think about it at all.
[737] Of course they don't.
[738] They want to buy new houses.
[739] They've got a lot of money invested in this adventure.
[740] They want to keep the party rolling.
[741] They don't want to pull the brakes.
[742] Everybody pull the cords out of the walls.
[743] We've got to stop.
[744] No one's going to do that.
[745] No one's going to come along and say, hey, we've run all this data through a computer, and we found that if we just keep going the way we're going, in 30 years from now we will have a successor that will decide that human beings are outdated and inefficient and dangerous to the actual world that we live in, and we're going to start wiping
[746] them out.
[747] But that doesn't exist right now.
[748] It doesn't exist right now.
[749] But if that did happen, if someone did come to the UN and had this multi-stage presentation with data that showed that if we continue on the path we're going, we have seven years before artificial intelligence decides to eliminate human beings, based on these data points.
[750] What do they do?
[751] What do the Boston Dynamics people do?
[752] Well, I'm building a house in Cambridge.
[753] What are you talking
[754] about, man? I'm not going anywhere.
[755] Come on.
[756] I just bought a new Tesla.
[757] I need to finance this thing.
[758] Hey, I got credit card bills.
[759] I got student loans.
[760] I'm still paying off.
[761] How do you stop people from doing what they do for a living?
[762] How do you say that, hey, I know that you would like to look at the future with rose -collar glasses on, but there's a real potential pitfall that could be the extermination of the human species.
[763] And obviously, I'm going way far with this.
[764] Yeah, I like it.
[765] I think every one of us trying to build these systems is similar, in a sense, to the way you were talking about the touch of death.
[766] In that my dream and the dream of many roboticists is to create intelligent systems that will improve our lives.
[767] And we're working really hard at it, not for a house in Cambridge, not for a billion-dollar
[768] paycheck from selling a startup. We love this stuff. Obviously, the motivations are different for every single human being that's involved in every endeavor. And we're trying really hard to build these systems, and it's really hard. So whenever the question is, well, historically this is going to take off, it can potentially take off at any moment, it's very difficult to really be cognizant as an engineer about how it takes off, because you're trying to make it take off in a positive direction, and you're failing. Everybody is failing. It's been really hard. And so you have to acknowledge that overnight some Elon Musk type character might come along. You know, with the Boring Company or with SpaceX, people didn't think anybody but NASA could do what Elon Musk is doing, and he's doing it. It's hard to think about that too much.
[769] You have to do that.
[770] But the reality is we're trying to create these super intelligent beings.
[771] Sure.
[772] But isn't the reality also that we have done things in the past because we were trying to do it?
[773] And then we realized that these have horrific consequences for the human race, like Oppenheimer and the Manhattan Project.
[774] You know, when he said, I am become Death, the destroyer of worlds.
[775] When he's quoting the Bhagavad Gita, when he's detonating the first nuclear bomb and realizing what he's done.
[776] Just because something's possible to do doesn't necessarily mean it's a good idea for human beings to do it.
[777] Now, we haven't destroyed the world with Oppenheimer's discovery and through the work of the Manhattan Project.
[778] We've managed to somehow or another keep the lid on this shit for the last 60...
[779] It's crazy, right?
[780] You know, I mean, for the last, what, 70 years?
[781] How has it been?
[782] 70 sounds right.
[783] 10,000, 20,000 nukes all over the world right now.
[784] It's crazy.
[785] I mean, we literally could kill everything on the planet.
[787] And somehow we don't.
[788] Somehow, in some amazing way, we have not.
[789] But that doesn't mean we, I mean, that's a very short amount of time in relation to the actual lifespan of the earth itself and certainly in terms of the time human history has been around.
[790] And nuclear weapons, global warming is another one.
[791] Sure, but that's a side effect of our actions, right?
[792] We're talking about a direct effect of human ingenuity and innovation, the nuclear bomb.
[793] It's a direct effect.
[794] We tried to make it.
[795] We made it.
[796] There it goes.
[797] Global warming is an accidental consequence of human civilization.
[798] So, you can't, I don't think it's possible to not build the nuclear bomb.
[799] You don't think it's possible to not build it.
[800] Because people are tribal, they speak different languages, they have different desires and needs, and they were at war.
[801] So if all these engineers were working towards it, it was not possible to not build it.
[802] Yep.
[803] And like I said, there's something about us chimps in a large collective where we are born and pushed forward towards progress of technology.
[804] You cannot stop the progress of technology.
[805] So the goal is to how to guide that development into a positive direction.
[806] But surely, if we do understand that this has taken place, and we did drop these
[807] enormous bombs on Hiroshima and Nagasaki, and killed untold numbers of innocent people with these detonations, that it's not necessarily always a good thing to pursue technology.
[808] Nobody is saying it is, so...
[809] Do you see what I'm saying?
[810] Yes, 100%.
[811] I agree with you totally.
[812] So I'm more playing devil's advocate than anything.
[813] But what I'm saying is you guys are looking at these things like we're just trying to make these things happen.
[814] And what I think people like Elon Musk and Sam Harris and a bunch of others that are gravely concerned about the potential for AI are saying is that I understand what you're doing, but you've got to understand the other side of it.
[815] You've got to understand that there are people out there that are terrified that if you do extrapolate, if you do take this relentless thirst for innovation and keep going with it, if you look at what human beings can do so far in our
[816] crude manner of 2018, with all the amazing things we've been able to accomplish, it's entirely possible that we might be creating our successors. This is not outside the realm of possibility. And all of our biological limitations, we might figure out a better way around them, and this better way might be some sort of an artificial creature. Yeah, I began with our dream to forge the gods, and I think that it's impossible to stop.
[817] Well, it's not impossible to stop if you go Ted Kaczynski and kill all the people.
[818] I mean, that's what Ted Kaczynski anticipated.
[819] No, the Unabomber, do you know the whole story behind him?
[820] No, what was he trying to stop?
[821] Ooh, he's a fascinating cat.
[822] Here's what's fascinating.
[823] It was a bunch of fascinating things about him.
[824] One of the more fascinating things about him, he was involved in the Harvard LSD studies.
[825] So they were nuking that dude's brain with acid.
[826] And then he goes to Berkeley, becomes a professor, takes all his money from
[827] teaching and just makes a cabin in the woods, and decides to kill people that are involved in the creation of technology, because he thinks technology is eventually going to kill off all the people.
[828] So he becomes crazy and schizophrenic and who knows what the fuck is wrong with him and whether or not this would have taken place inevitably or whether this was a direct result of his being literally like drowned in LSD.
[829] We don't even know how much they gave him, or what the experiment entailed, or how many other people got their brains torched during these experiments, but we do know for a fact that Ted Kaczynski was a part of the Harvard LSD studies.
[830] And we do know that he went and did move to the woods and write his manifesto and start blowing up people that were involved in technology.
[831] And the basic thesis of his manifesto that perhaps LSD opened his eyes to is that technology is going to kill all humans.
[832] Yeah.
[833] And so we should...
[834] It was going to be the end of the human race, I think he
[835] believed. So the solution, you know, what is that, what he said, you're looking it up? The Industrial Revolution and its consequences have been a disaster for the human race. Yeah. He extrapolated. He was looking at where we're going, and these people that were responsible for innovation, and he was saying they're doing this with no regard for the consequences on the human race. And he thought the way to stop that was to kill people. Obviously he's fucking demented, but, I mean, he literally was saying what we're saying right now: you keep going, we're fucked. So, the Industrial Revolution. We have to think about that. It's a really important message coming from the wrong guy. But where is all this taking us? Yeah, where is it? So I guess my underlying assumption is that during the current capitalist structure of society, we always want a new iPhone. And you just had one of the best reviewers on yesterday that always talks about...
[836] Marques.
[837] Marques, yeah.
[838] We always, myself too. Pixel 3, I have a Pixel 2.
[839] I'm thinking, maybe I need a pixel three.
[840] I don't know, a better camera, you know, that whatever that is, that fire that wants more, better, better.
[841] I just don't think it's possible to stop.
[842] And the best thing we can do is to explore ways to guide it towards safety where it helps us.
[843] When you say it's not possible to stop, you mean collectively, as an organism, like the human race, that it's a tendency that's just built in? It's certainly possible to stop as an individual, because I know people like my friend Ari, who's given up on smartphones. He went to a flip phone, and he doesn't check social media anymore. And he found it to be toxic. He didn't like it. He thought he was too addicted to it, and he didn't like where it was leading him. Yep. So on an individual level, it's possible. Individual level. But then, just like with Kaczynski, on the individual level it's possible to do certain things that try to stop it in more dramatic ways. But I just think the force of this symbiotic organism, this living, breathing organism that is our civilization, will progress forward. We're just curious apes. It's this desire to explore the universe. Why do we want to do these things? Why do we look up and want to travel? And I don't think it's that we're trying to optimize for survival. In fact, I don't think most of us would want to be immortal.
[844] I think it's like Neil deGrasse Tyson talks about.
[845] The fact that we're mortal, the fact that one day we will die, is one of the things that gives life meaning.
[846] And sort of trying to worry and trying to sort of say, wait a minute, where is this going, as opposed to riding the wave, riding the wave of forward progress.
[847] I mean, it's one of the things he ironically gets quite a bit of hate for, Steve Pinker, but he really describes in data how our world is getting better and better with...
[848] Well, he just gets hate from people that don't want to admit that there's a trend towards things getting better because they feel like then people will ignore all the bad things that are happening right now and all the injustices, which I think is a very short -sighted thing, but I think it's because of their own biases and the perspective that they're trying to establish and push.
[849] Instead of looking at things objectively and looking at the data, and saying, I see where you're going.
[850] It doesn't discount the fact that there's injustice in the world and crime and violence and all sorts of terrible things happen to people that are good people on a daily basis.
[851] But what he's saying is, just look at the actual trend of civilization and the human species itself.
[852] And there's an undeniable trend towards peace, slowly but surely working towards peace.
[853] Way safer today, way safer today than it was a thousand years ago.
[854] It is.
[855] It just is, yeah. And there are these interesting arguments, which his book kind of blew my mind with. This funny joke he tells, that some people consider giving nuclear weapons, the atom bomb, the Nobel Peace Prize, because he believes, and I'm not an expert in this at all, but he believes, or some people believe, that nuclear weapons are actually responsible for a lot of the decrease in violence, because all the major players who can do damage, Russia and all the major states that can do damage, have a strong disincentive from engaging in warfare.
[856] Right.
[857] And so these are the kinds of things you don't, I guess, anticipate.
[858] And so I think it's very difficult to stop that forward progress, but we have to really worry and think about, okay, how do we avoid the list of things that we worry about?
[859] So one of the things that people really worry about is the control problem, is basically AI becoming not necessarily super intelligent, but super powerful.
[860] We'll put too much of our lives into it.
[861] That's where Elon Musk and others want to provide regulation of some sort, saying, wait a minute, you have to put some bars on what this thing can do, from a government perspective, from a company perspective.
[862] Right, but how could you stop rogue states from doing that?
[863] How could you, why would China listen to us?
[864] Why would Russia listen to us?
[865] Why would other countries that are capable of doing this and maybe don't have the same sort of power that the United States has?
[866] And they would like to establish that kind of power?
[867] why wouldn't they just take the cap off?
[868] In a philosophical, high-level sense, there's no reason.
[869] But if you engineer it in. So, we do this thing with autonomous vehicles called arguing machines.
[870] We have multiple AI systems argue against each other.
[871] So it's possible that you have some AI systems supervising other AI systems.
[872] Sort of like in our nation there's a Congress arguing,
[873] blue and red states being represented, and there's this discourse going on, debate. And you can have AI systems like that too.
[874] It doesn't necessarily need to be one super powerful thing.
[875] It could be AI supervising each other.
[876] So there's interesting ideas there to play with.
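The arguing machines idea can be sketched very simply: run two independently built systems in parallel, act only when they agree, and escalate to a human when they disagree. A minimal sketch follows, assuming two steering models and a placeholder disagreement threshold; this is an illustration of the concept, not the actual MIT implementation.

```python
# Minimal sketch of the "arguing machines" framework: two independent
# systems supervise each other, and disagreement triggers human oversight.
# The models and threshold below are placeholders, not a real system.

DISAGREEMENT_THRESHOLD = 0.15  # assumed units: normalized steering angle

def arbitrate(primary_model, secondary_model, camera_frame):
    """Return (steering_command, needs_human) for one frame."""
    a = primary_model.predict(camera_frame)
    b = secondary_model.predict(camera_frame)
    if abs(a - b) > DISAGREEMENT_THRESHOLD:
        # The argument is unresolved: neither system gets to act alone,
        # so control (or at least attention) is handed to the human.
        return None, True
    # Consensus: act on the agreed command.
    return (a + b) / 2.0, False
```

The design choice here is that no single system is trusted with full power; the disagreement signal itself is the supervision, the same way the debate between the two sides of Congress is.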
[877] Because ultimately, what are these artificial intelligence systems doing?
[878] We humans place power into their hands first.
[879] In order for them to run away with it, we need to put power into their hands.
[880] So we have to figure out how we put that power in initially, so it doesn't run away and how supervision can happen.
[881] Right, but this is us, right?
[882] You're talking about rational people.
[883] What about other people?
[884] Why would they engineer limitations into their artificial intelligence, and what incentive would they have to do that, to somehow or another limit their artificial intelligence to keep it from having as much power as ours?
[885] There's really not a lot of incentive on their side, especially if there's some sort of competitive advantage for their artificial intelligence to be more ruthless, more sentient, more autonomous.
[886] I mean, it seems like once the, again, once the genie's out of the bottle, it's going to be very hard.
[887] I have a theory, and this is a very bizarre theory, but I've been running with this for quite a few years now.
[888] I think human beings are some sort of a caterpillar.
[889] And I think we're creating a cocoon, and through that cocoon, we're going to give birth to a butterfly.
[890] And then we're going to become something.
[891] And I think whether we're going to have some sort of a symbiotic connection to these electronic things, where they're going to replace our parts, our failing parts with far superior parts until we're not really a person anymore.
[892] Like, what was that Scarlett Johansson movie?
[893] The Ghost in the Shell?
[894] I tried to watch part of it.
[895] It's pretty stupid.
[896] But she's hot as fuck, so it kept my attention for a little bit.
[897] But in that, they took her brain and put it in this artificial body that had superpowers.
[898] And they basically replaced everything about her that wasn't her consciousness
[899] with these artificial parts.
[900] Like all of her frame, everything was just some new thing that was far superior.
[901] And she had these abilities that no human being will ever have.
[902] I really wonder why we have this insatiable thirst.
[903] Why can't, if we're so logical, we're so logical and so thoughtful in some ways, why can't we be that way when it comes to materialism?
[904] Well, I think one of the reasons why is because materialism is the main
[905] engine that pushes innovation. If it wasn't for people's desire to get the newest, latest and greatest thing, what would fund these new TVs, cell phones, computers?
[906] Why do you really need a new laptop every year?
[907] Is it because of engineered obsolescence where the laptop dies off and you have to get a new one because they fucked you and they built a shitty machine that's designed to die so you buy a new one?
[908] You really like iPhones, don't you?
[909] Well, it's not even iPhones or a laptop?
[910] Yeah.
[911] Well, is it, is it because you're, you know, you just see the numbers?
[912] 2.6 gigahertz is better than 2.4.
[913] Oh, it's the new one.
[914] It has a 12 megapixel webcam instead of an 8.
[915] And for whatever reason, we have this desire to get those new things.
[916] I think that's what fuels innovation.
[917] And my cynical view of this thing that's happening is that we have this bizarre desire to fuel our demise and that we're doing so by fueling technology, by motivating these companies to continually innovate.
[918] If everybody just said, you know what, man, I'm really into log cabins, and I want an axe so I can cut my own firewood, and I realize the TV's rotting my brain, I just want to read books.
[919] So, fuck off.
[920] And everybody started doing that.
[921] And everybody started living, like, when it gets dark out, I'll use candles.
[922] And you know what, I'm going to get my water from a well.
[923] And you know what, I'm going to do that, and I like living
[924] better that way.
[925] If people started doing that, there would be no need for companies to continually make new computers, to make new phones, to make new smart watches, or whatever the fuck they're making, to make cars that can drive themselves.
[926] These things that we're really, really attached to. If you looked at the human organism, if somehow or another you could objectively remove yourself from society and culture and all the things that make us a person, and you look at what we do, like, what does this thing do? We found this planet, and there's these little pink monkeys and brown monkeys and yellow monkeys, and what are they all into? Well, they all seem to be into making stuff. And what kind of stuff are they making? Well, they keep making better and better stuff that's more and more capable. Well, where's it going? Well, it's going to replace them. They're going to make a thing that's better than them. They're engineering these things, slowly but surely, to do all the things they do, but do them better. Yeah, and it's a fascinating theory.
[927] I mean, it's not a theory.
[928] It's an instructive way to think about intelligence and life, period.
[929] So if you step back, look across human history, and look at Earth as an organism.
[930] What is this thing doing?
[931] The thing is, I think, in terms of scale, and in terms of time, you can look that way at so many things.
[932] Like, aren't there billions or trillions of organisms on our skin right now?
[933] Both of us, they have little civilizations, right?
[934] They have a different mechanism by which they operate and interact.
[935] But for us to say that we're intelligent and those organisms are not is a very narrow-sighted view.
[936] So they are operating under some force of nature that Darwin worked on trying to understand some small elements of, and that is evolutionary theory.
[937] But there's other more interesting forces at play that we don't understand.
[938] Sure.
[939] And there's some kind of force.
[940] It could be a fundamental force of physics that Einstein never got a chance to discover. Is our desire for an iPhone update some fundamental force of nature? Somehow gravity and the strong force and these things described by physics add up to this drive for new things, for creation. And the fact that we die, the fact that we're mortal, the fact that these desires are built into us, whether it's sexual or intellectual or whatever drives us apes, somehow that all combines into this progress. And towards what? It is a compelling way to think that if an alien species did visit Earth, I think they would probably see the smartphone situation.
[941] They see how many little lights are on and how us apes are looking at them.
[942] It's possible, I think some people have said that they would think the overlords are the phones, not the people.
[943] So to think that that's now moving into a direction, the future will be something that is beyond human or symbiotic with human in ways we can't understand is really interesting.
[944] Not just that, but something that we're creating ourselves.
[945] Creating ourselves.
[946] And it's a main focal point of our existence.
[947] That's our purpose.
[948] Yeah.
[949] I mean, if you think about a main focal point, if you think about the average person, what they do. There's a great percentage of our population that has jobs where they work, and one of the ways that they placate themselves doing these things that they don't really enjoy doing is earning money for objects, right? They want a new car, they want a new house, they want a bigger TV, they want this or that. And the way they motivate themselves to keep showing up at this shitty job is to think, if I just put in three more months, I can get that Mercedes.
[950] If I just do this or that, I can, oh, I can finance this new Pixel 3.
[951] Ooh.
[952] Yeah.
[953] And it's interesting, because it's sort of, with politicians, what the American dream is. You hear this thing: I want my children to be better off than me. This kind of desire, you know, you can almost see that taken farther and farther, to where there will be a presidential candidate in 50, 100 years
[954] who'll say, I want my children to be robots. And, you know what I mean, sort of this idea that that's the natural evolution, right, the highest calling of our species. That scares me, because I value my own life. But does it scare you if it comes out perfect? Like, if each robot is like a god, and each robot is beautiful and loving, and they recognize all the great parts of this existence, and they avoid all the jealousy and the nonsense and all the stupid aspects of being a person? We realize that a lot of these things are just sort of biologically engineered tricks that are designed to keep us surviving generation after generation, but now, here in this fantastic new age, we don't need them anymore.
[955] Yeah, it's, well, first, one of the most transformative moments of my life was when I met SpotMini in person, which is one of the legged robots at Boston Dynamics,
[956] for the first time. When I met that little fella, I know exactly how it works, I know exactly how every aspect of it works, it's just a dumb robot, but when I met him and he got up and he looked at me, there is, right there... Have you seen it dance now? Yeah, the dance. Have you seen the new thing?
[957] The dance is crazy. But see, it's not crazy on the technical side. It's engineered, it's obvious, it's programmed. But it's crazy to watch.
[958] Like, wow.
[959] There's something, the reason the moment was transformative is I know exactly how it works.
[960] And yet by watching it, something about the feeling of it.
[961] You're like, this thing is alive.
[962] And there was this terrifying moment, not terrifying, but terrifying and appealing, where this is the future.
[963] Right.
[964] Like, this thing represents some future that we totally cannot understand, just like a future with planes and smartphones was something you couldn't understand in the 18th century. That this thing, that little dog, could have had a human consciousness in it. That was the feeling I had, and I know exactly how it works. There's nothing close to that intelligence, but it just gives you this picture of what the possibilities are with these living creatures. And I think that's what people feel when they see Boston Dynamics. Look how awesome this thing running around is. They don't care about the technicalities and how far away we are. They see it: look, this thing is pretty human. And the possibilities of human-like things that supersede humans, and can evolve and learn so quickly, exponentially fast, is this terrifying frontier that really makes us think, as it did for me. Maybe terrifying is a weird word, because when I look at it, and I'm not irrational, and I look at, there's videos that show the progression of Boston Dynamics robots from several years ago to today, what they're capable of.
[965] And it's, it is a fascinating thing because you're watching all the hard work of these engineers and all these people that have designed these systems and have figured out all these problems that these things encounter.
[966] And they've come up with solutions, and they continue to innovate.
[967] and they're constantly doing it, and you're seeing this, and you're like, wow, what are we going to see in a year?
[968] What am I going to see in three years?
[969] What am I going to see in five years?
[970] It's absolutely fascinating because if you extrapolate and you just keep going, boy, you go 15, 20, 30, 50, 100 years from now, you have ex machina.
[971] Yeah, you have Ex Machina, at least in our imagination.
[972] In our imagination.
[973] And the problem is there will be so many other things that are super exciting and
[974] interesting.
[975] Sure, but that doesn't mean it's not crazy.
[976] I mean, there's many other things you could focus on also that are also going to be bizarre and crazy.
[977] Sure, but what about it?
[978] Just it.
[979] It's going somewhere.
[980] That fucker is getting better.
[981] The parkour one is bananas.
[982] You see it hopping from box to box and left to right and leaping up in the air and you're like, whoa, that thing doesn't have any wires on it.
[983] It's not connected to anything.
[984] It's just jumping from box to box.
[985] If that thing had a machine gun and it was running across a hill at you, you'd be like, oh, fuck.
[986] How long does it last?
[987] How many bullets does it have?
[988] Let me just say that I would pick Tim Kennedy over that dog for the next 50 years.
[989] 50?
[990] Yeah.
[991] Man. So.
[992] I'm a big Tim Kennedy fan.
[993] I'm talking about that.
[994] Well, but he'll probably have some robotic additions to his body to improve the...
[995] Well, then is he Tim Kennedy
[996] anymore?
[997] If the brain is Tim Kennedy, then he's still Tim Kennedy.
[998] That's the way we think about it.
[999] But there is a huge concern, the UN is meeting about this, about autonomous weapons.
[1000] Allowing the AI to make decisions about who lives and who dies is really concerning in the short term.
[1001] It's not about a robotic dog with a shotgun running around.
[1002] It's more about our military wanting to make destruction as efficient as possible while minimizing the risk to human life.
[1003] Drones.
[1004] There's something really uncomfortable to me about drones.
[1005] In how, you know, compared with Dan Carlin's Hardcore History, with Genghis Khan, there's something impersonal about what drones are doing, where it moves you away from the actual destruction that you're achieving, where I worry that our ability to encode ethics into these systems will go wrong in ways we don't
[1006] expect.
[1007] And so, I mean, folks at the UN talk about, well, you have these automated, so, drones that drop bombs over a particular area.
[1008] So the bigger and bigger the area is over which you allow an artificial intelligence system to make the decision to drop the bombs, the weirder and weirder it gets.
[1009] There's some line.
[1010] Now, presumably, if there are like three tanks that you would like to destroy with a drone, it's okay for the AI system to say, I would like to destroy those three, like, I'll handle everything, just give me the three tanks. But now, this makes me uncomfortable as well, because I think I'm opposed to most wars, but military is military, and they try to get the job done. Now, what if we expand that to 10, 20, 100 tanks, where you now let the AI system drop bombs all over very large areas? How can that go wrong? And that's terrifying.
[1011] And there are practical engineering solutions to that: oversight.
[1012] And that's something that engineers sit down with.
[1013] There's an engineering ethic where you encode it, and you have meetings about how we make this safe.
[1014] That's what you worry about.
[1015] The thing that keeps me up at night is the 40,000 people that die every year in auto crashes.
[1016] Like, you have to understand, I worry about the future of AGI taking over,
[1017] but that's not as large. AGI, artificial general intelligence, that's kind of the term that people have been using for this. But, maybe because I'm in it, I worry more about the 40,000 people that die in the United States, and the 1.2 million that die every year worldwide, from auto crashes. There's something that is more real to me about the death that's happening now that could be helped, and that's the fight. But of course, if this threat becomes real, then that's a much, you know, that's a serious threat to humankind, and that's something that should be thought about.
[1018] I just worry that, I worry also about the AI winter.
[1019] So I mentioned there have been two winters, in the 70s and the 80s to 90s, when funding completely dried up, but more importantly, people just stopped getting into artificial intelligence and became cynical about its possibilities, because there was a hype cycle where everyone was really excited about the possibilities of AI, and then they realized, you know, five, ten years into the development, that we didn't actually achieve anything. It was just too far off. Too far off. Same as it was for virtual reality. For the longest time, virtual reality was something that was discussed, like, even in the 80s and the 90s, but it just died off. Nobody even thought about it. Now it's come back to the forefront, when there's real AI, or, excuse me, real virtual reality that you can use, like HTC Vives or, you know, things along those lines, where you can put these helmets on, and you really do see these alternative worlds that people have created in these video games.
[1020] And it's, you realize, like, there's a practical application for this stuff because the technology is caught up with the concept.
[1021] Yeah.
[1022] And for, I actually don't know where people stand on VR.
[1023] We do quite a bit of stuff with VR for research purposes, for simulating robotic systems, but I don't know where the
[1024] hype is. I don't know if people have calmed down a little bit on VR. So there was a hype in the 80s and 90s, I think.
[1025] I think it's ramped up quite a bit.
[1026] What is the other one? The HTC Vive, Oculus Rift, and what's the other one?
[1027] Those are the main ones, and there are other headsets that you can work with and use.
[1028] Yeah, and there's some you can use just with a Samsung phone, correct?
[1029] Yeah, and the next generation, which in the next year or two are going to be all standalone systems.
[1030] Yeah.
[1031] So there's going to be an Oculus Rift coming out that you don't need a computer for at all.
[1032] So the ultimate end fear, end game fear, the event horizon of that, is the Matrix, right?
[1033] That's what people are terrified of, of some sort of a virtual reality world where you don't exist in the physical sense anymore.
[1034] They just plug something into your brain stem, just like they do in the Matrix, and you're just locked into this artificial world.
[1035] Is that terrifying to you?
[1036] So that seems to be less terrifying than AI killing all of humankind?
[1037] Well, it depends.
[1038] I mean, what is life?
[1039] That's the real question, right?
[1040] If you only exist inside of a computer program, but it's a wonderful program, and whatever your consciousness is, and we haven't really established what that is, right?
[1041] We don't, I mean, there's a lot of really weird, hippie ideas out there about what consciousness is.
[1042] Your body is just like an antenna man, and it's just like tuning into consciousness, and consciousness is all around you.
[1043] It's Gaia, it's the Mother Earth, it's the universe itself, it's God, it's love.
[1044] Okay, maybe. I don't know. But if you could take that, whatever the fuck it is, and send it to a cell phone in New Zealand, is that where your consciousness is now? Because, like, if we figure out what consciousness is and get it to the point where we can turn it into a program or duplicate it, I mean, that sounds so far away. But if you went up to someone from 1820 and said, hey man, one day I'm going to take a picture of my dick, and I'm going to send it to this girl, and she's going to get it on her phone.
[1045] They'd be like, what the fuck are you talking about?
[1046] A photo?
[1047] What do you mean?
[1048] What's a photo?
[1049] Oh, it's like a picture, but like you don't draw it.
[1050] It's perfect.
[1051] It looks exactly like that.
[1052] It's in HD. And I'm going to make a video of me taking a shit, and I'm going to send it to everyone.
[1053] They're like, what the fuck is this?
[1054] That's not even possible.
[1055] Get out of here.
[1056] That is essentially you're capturing time.
[1057] You're capturing moments in time in a very, not a very crude sense, but a crude sense.
[1058] in terms of comparing it to the actual world in the moment where it's happening.
[1059] Like here, you and I are having this conversation.
[1060] We're having it in front of this wooden desk, this paper in front of you.
[1061] To you and I, we have access to all the textures, the sounds.
[1062] We can feel the air conditioning.
[1063] We can look up.
[1064] We could see the ceiling.
[1065] We got the whole thing in front of us because we're really here.
[1066] Yeah.
[1067] But to many people that are watching this on YouTube right now, they're getting a minimized, crude version of this that's similar.
[1068] But it feels real.
[1069] It feels pretty real.
[1070] It's pretty close.
[1071] It's pretty close.
[1072] So, I mean, I've listened to your podcast for a while.
[1073] So when I listen to your podcast, it feels like I'm sitting in with friends, listening to a conversation.
[1074] So it's not as intense as, for example, Dan Carlin's hardcore history.
[1075] where the guy's, like, talking to me about the darkest aspects of human nature.
[1076] His show's so good, I don't think you can call it a podcast.
[1077] It's not a podcast.
[1078] It's an experience.
[1079] Yeah.
[1080] You're there.
[1081] I was hanging out with him and Genghis Khan.
[1082] And World War I, I was in World War I, World War II. Painfotainment is an episode he had where he talks about very dark ideas, about our human nature and desiring the observation
[1083] of the torture and suffering of others.
[1084] There's something really appealing to us.
[1085] He has this whole episode, how throughout history we liked watching people die.
[1086] And there's something really dark.
[1087] You're saying that if somebody streamed something like that now, it would probably get hundreds of millions of views.
[1088] Yeah.
[1089] Probably would.
[1090] And we're protecting ourselves from our own nature because we understand the destructive aspects of it.
[1091] That's why YouTube would pull something like that.
[1092] If you tied a person in between two trucks and pulled them apart and put that on YouTube, it would get millions of hits, but YouTube would pull it because we've decided as a society collectively that those kind of images are gruesome and terrible for us.
[1093] But nevertheless, that experience of listening to his podcast slash show, it feels real. Just like VR, for me, there are strongly real aspects to it, where I'm not sure, if the VR technology gets much better, to where you had a choice between, do you want to live your life in VR?
[1094] You're going to die just like you would in real life, meaning your body will die.
[1095] You're just going to hook up yourself to a machine like it's a deprivation tank.
[1096] And just all you are is in VR and you're going to live in that world.
[1097] Which life would you choose?
[1098] Would you choose life in VR?
[1099] Or would you choose a real life?
[1100] That was the guy's decision in the Matrix, right?
[1101] The guy decided in the Matrix he wanted to be a special person in the Matrix.
[1102] He was eating a steak, talking to the guys, and he decided he was going to give them up.
[1103] Remember that?
[1104] So what decision would you make?
[1105] I mean...
[1106] What is reality if it's not what you're experiencing?
[1107] If you're experiencing something, but it's not tactile in the sense, like you can't drag it somewhere and put it on a scale and take a ruler to it, measure it.
[1108] But in the moment of being there, it seems like it is.
[1109] What is missing?
[1110] What is missing?
[1111] Well, it's not real.
[1112] Well, what is real, then?
[1113] What is real?
[1114] Well, that's the ultimate question in terms of, like, are we living in a simulation?
[1115] That's one of the things that Elon brought up when I was talking to him.
[1116] And this is one thing that people have struggled with.
[1117] If we are one day going to come up with an artificial reality that's indiscernible from reality, in terms of emotions, in terms of experiences, feel, touch, smell, all of the sensory input that you get from the regular world, if that's inevitable, if one day we do come up with that, how are we to discern whether or not we have already created that and we're stuck in it right now? We can't. And there's a lot of philosophical arguments for that. But it gets to, yeah, the nature of reality. It's, I mean, it's fascinating, because, okay, we're totally clueless about what it means to be real, what it means to exist. So consciousness for us, I mean, it's incredible. You could, like, look at your own hand. Like, I'm pretty sure I'm on the Joe Rogan Experience podcast. I'm pretty sure this is not real. I'm imagining
[1118] all of it. There's a knife in front of me. I mean, it's surreal.
[1119] And I have no proof that it's not fake.
[1120] And those kinds of things actually come into play with the way we think about artificial intelligence, too.
[1121] Like, what is intelligence?
[1122] Right.
[1123] It seems like we're easily impressed by algorithms and robots we create that appear to have intelligence.
[1124] But we still don't know what is intelligent and how close those things are to us.
And we think of ourselves as this biological entity that can think and talk and cry and laugh, that we are somehow or another more important than some sort of silicon-based thing that we create that does everything that we do, but far better.
Yeah, I think if I were to take a stand, a civil rights stand... I hope, while I'm still young, I'll one day run for president on this platform, by the way: defending the rights of robots. Well, I can't, because I'm Russian, but maybe they'll change the rules.
[1128] That, you know, robots will have rights.
[1129] And, you know, robots' lives matter.
[1130] And I actually believe that we're going to have to start struggling with the idea of how we interact with robots.
I've seen too often the abuse of robots — not just the Boston Dynamics ones, but literally, if you leave people alone with a robot, the dark aspects of human nature come out, and it's worrying to me.
I would like a robot that spars but can only move at, like, 50% of what I can move at, so I could fuck it up.
Yeah, you'd be able to practice, like, really well. You would develop, like, some awesome sparring instincts.
Yeah. But there would still be consequences. Like, if you did fuck up and you got lazy and it leg-kicked you and you didn't check it, it would hurt.
I would, uh, love to see a live stream of that session. Because, you know, there's so many ways... I used to... I mean, I practiced on a dummy. There are aspects to a dummy that are helpful, in terms of positioning, where your stance is, and technique. There's something to it. I can certainly see that going wrong in ways where a robot might not respect you tapping.
Yeah. Or the robot decides to beat you to death. It's tired of you fucking it up every day, and one day you get tired. Or what if you sprain your ankle and it gets on top of you and mounts you and just starts blasting you in the face?
[1132] Yeah, just a heel hook or something.
[1133] Right.
[1134] You'd have to be able to say, stop.
[1135] Well, then, no, you're going to have to use your martial art to defend yourself.
[1136] Yeah, right, because if you make it too easy for the robot to just stop any time, then you're not really going to learn.
[1137] Like, one of the consequences of training, if you're out of shape, is if you get tired, people fuck you up.
[1138] And that's incentive for you to not get tired.
Like, there were so many times that I would be in the gym doing strength and conditioning, and I'd think about moments where I got tapped, where guys caught me in something when I was exhausted and I couldn't get out of the triangle. I'm like, shit. And I'd just really push on the treadmill, or, you know, push on the Airdyne bike or whatever it was that I was doing, thinking about those moments of getting tired.
Yeah, those moments. That's what I think about when I do, like, sprints and stuff: the feeling of competition, those nerves of stepping in there. It's really hard to do that kind of visualization, but it's effective.
It is, though. And the feeling of consequences to you not having any energy, so you have to muster up the energy — because if you don't, you're going to get fucked up, or something bad's going to happen to someone you care about, or something's going to happen to the world. Maybe you're a superhero, you're saving the world from the robots.
That's right. But to go back to what we were talking about — sorry to interrupt you, but just to bring this all back around — what is this life, and what is consciousness, and what is this experience?
[1140] And if you can replicate this experience in a way that's indiscernible, will you choose to do that?
[1141] Like if someone says to you, hey, Lex, you don't have much time left, but we have an option.
[1142] We have an option and we can take your consciousness as you know it right now, put it into this program.
[1143] You will have no idea that this has happened you're going to close your eyes, you're going to wake up, you're going to be in the most beautiful green field.
[1144] It's going to be naked women everywhere, feasts everywhere you go.
[1145] It's going to be just picnic tables filled with the most glorious food.
You're going to drive around in a Ferrari every day and fly around on a plane.
[1147] You're never going to die.
[1148] You're going to have a great time.
[1149] Or take your chances.
[1150] See what happens when the lights shut off.
[1151] Well, first of all, I'm a simple man. I don't need multiple women.
[1152] One is good.
[1153] I'm romantic in that way.
[1154] That's what you say.
[1155] But that's in this world.
[1156] This world, you've got incentive to not be greedy.
[1157] In this other world where you can breathe underwater and fly through the air and you, you know.
[1158] No, I believe that scarcity is the fundamental ingredient of happiness.
[1159] So if you give me 72 virgins or whatever it is and...
[1160] You just keep one slut?
[1161] Not a slut.
But for her, a requirement, you know, is somebody intelligent and interesting.
[1163] Who enjoys sexual intercourse.
[1164] Well, not just enjoy sexual intercourse.
[1165] Like you.
[1166] A person.
[1167] Well, that and keeps things interesting.
[1168] Lex, we can engineer all this into your experience.
[1169] You don't need all these different women.
[1170] I get it.
[1171] I understand.
[1172] We've got this program for you.
[1173] Don't worry about it.
[1174] Okay, you want one.
And a normal car, like maybe a Saab or something like that.
[1176] Nothing crazy.
[1177] Yeah.
[1178] Right?
[1179] Yeah.
[1180] Yeah.
[1181] You're a simple man, I get it.
[1182] No, no, no. But you need to...
[1183] You want to play chess with someone who could beat you every now and then, right?
[1184] Yeah, but not just chess.
[1185] So, engineer some flaws.
[1186] Like, she needs to be able to lose her shit every once in a while.
[1187] Yeah, the Matrix.
The girl in the red dress.
Which girl in the red dress?
She comes up right here.
[1191] Remember he goes, like, did you see, notice the girl in the red dress?
[1192] It's like the one that catches his attention.
[1193] I don't remember this.
[1194] This is right at the very beginning when he's telling him what the Matrix is.
[1195] She walks by right here.
[1196] Oh, there she is.
[1197] But bam.
[1198] That's your girl.
[1199] The guy afterwards, it's like, I'm telling you, it's just not, it's not.
[1200] Well, yeah, but then I have certain features.
Like, I'm not an iPhone guy. I like Android.
[1202] So that may be an iPhone person's girl.
[1203] But that's nonsense.
[1204] So if an iPhone came along that was better than Android, you wouldn't want to use it?
[1205] No, my just definition of better is different.
[1206] Like, I know for me happiness lies.
[1207] In Android phones?
[1208] Yeah, Android phones.
[1209] Close connection with other human beings who are flawed but interesting, who are passionate about what they do.
[1210] Yeah, but this is all engineered into your program.
[1211] Yeah, yeah.
[1212] I'm requesting features here.
[1213] Yeah, you're requesting features, but why Android phones?
[1214] Is that like, I'm a Republican?
[1215] Well, I'm a Democrat.
[1216] I like Android.
[1217] I like iPhones.
[1218] Is that what you're doing?
[1219] You getting tribal?
[1220] No, I'm not getting tribal.
[1221] No, I'm not getting tribal.
I was just representing... I figured the girl in the red dress just seems like an iPhone, as a feature set. I imagine the kind of features I'm asking for...
[1224] She's too hot?
[1225] Yeah, and it seems like she's not interested in Dostoevsky.
[1226] How would you know?
That's so prejudiced of you, just because she's beautiful and she's got a tight-fitting dress?
[1228] That's true.
[1229] I don't know.
[1230] That's very unfair.
[1231] How dare you?
[1232] How dare you?
[1233] You sexist son of a bitch.
[1234] I'm sorry.
[1235] Actually, that was totally...
[1236] She probably likes Nietzsche and Dostoevsky and Kammu and Hesse.
[1237] She did her PhD in Astrophysics.
[1238] Possibly.
[1239] Yeah, I don't know.
[1240] That's...
[1241] We're talking about all the trappings.
[1242] Look at that.
[1243] Bam, I'll take her all day.
iPhone, Android.
[1245] I'll take her if she's a...
[1246] I'm not involved in this conversation.
[1247] I'll take her if she's a Windows phone.
[1248] How about that?
I don't give a fuck.
[1250] Windows phone?
[1251] Yeah.
[1252] Come on now.
[1253] I'll take her if she's a Windows phone.
[1254] I'll go with a flip phone from the fucking early 2000s.
[1255] I'll take a razor phone.
[1256] A Motorola razor phone with like 37 minutes of battery life.
But we're talking about all the learned experiences and preferences that you've developed in your time here on this actual real earth, or what we're assuming is the actual real earth.
But how are we... I mean, if you really are taking into account the possibility that one day something, someone — whether it's artificial intelligence that figures it out or we figure it out — engineers a world, some sort of a simulation, that is just as real as this world, where it's impossible to discern... Not only is it impossible to discern — people choose not to discern anymore, right? Because, why bother discerning?
That's a fascinating concept to me. But I think that world... not to sound hippie or anything, but I think we live in a world that's pretty damn good. So improving it with such fine, fine ladies walking around is not necessarily the delta that's positive.
[1261] Okay, but that's one aspect of the improvement.
[1262] What about improving it in this new world?
[1263] There's no drone attacks in Yemen that kill children.
[1264] There's no murder.
[1265] There's no rape.
[1266] There's no sexual harassment.
[1267] There's no, there's no racism.
All the negative aspects of our current culture are engineered out.
[1269] I think a lot of religions have struggled with this.
[1270] And, of course, I would say I would want a world without that.
[1271] But part of me thinks that our world is meaningful because of the suffering in the world.
[1272] Right.
[1273] That's a real problem, isn't it?
[1274] That's a real, that is a fascinating concept.
[1275] It's almost impossible to ignore.
[1276] Do you appreciate love because of all the hate?
[1277] You know, one, like, if you have a hard time finding a girlfriend and just no one's compatible and all the relationships go bad, are you?
[1278] Holla, letting the ladies know.
But if you do have a hard time connecting with someone, and then you finally do connect with someone after all those years of loneliness, and this person's perfectly compatible with you, how much more would you appreciate them than a guy like Dan Bilzerian, who's flying around in a private jet banging tens all day long?
[1280] Yeah, or is it?
[1281] Maybe he's fucking drowning in his own sorrow.
[1282] Maybe he's got too much, too much prosperity.
[1283] Maybe this, you know?
Yeah, we have that with social networks too — the people that...
[1285] I mean, you're pretty famous.
[1286] The amount of love you get is huge.
Because of the overflow of love, it might be difficult to appreciate the more, like, genuine little moments of love.
[1288] It's not for me, no. I spent a lot of time thinking about that.
[1289] And I also spent a lot of time thinking about how titanically bizarre my place in the world is.
I mean, I think about it a lot. And I spent a lot of time being poor and being a loser. I mean, my childhood was not the best. I went through a lot of struggle when I was young, and I cling to that like a safety raft. You know, I don't ever think there's something special about me, and I try to let everybody know that anybody can do what I've done. You just have to keep going. Like 99% of this thing is just showing up and keeping going: keep improving, keep working at things, keep going, put the time in.
But the interesting thing is, you haven't... Actually, a couple of days ago I went back to your first podcast and listened to it. You haven't really changed much. I mean, the audio got a little better, but the genuine nature of the way you interact hasn't changed. And that's fascinating, because, you know, fame changes people.
Well, I was already famous then. I just was famous in a different way.
[1291] Yeah, I was already famous from Fear Factor.
[1292] I was already, I already had stand -up comedy specials.
[1293] I already had, I'd already been on a sitcom.
[1294] I wasn't as famous as I am now, but I understood what it is.
[1295] I'm a big believer in adversity and struggle.
[1296] I think they're very important for you.
[1297] It's one of the reasons why I appreciate martial arts.
It's one of the reasons why I've been drawn to it as a learning tool, not just as a puzzle that I'm fascinated to try to figure out how to get better at. And martial arts is a really good example, because you're never really the best, especially when there's just so many people doing it. You're always going to get beat by guys. And I was never putting the kind of time into it as an adult, outside of my Taekwondo competition days, that a lot of the people I would train with would, and so I'd always get dominated by the really best guys. So there's a certain amount of humility that comes from that as well. But there's a struggle in it: you're learning about yourself and your own limits, and the limits of the human mind and endurance, and just not understanding all the various interactions of techniques. There's humility in that. I've always described martial arts as a vehicle for developing your own human potential.
[1299] But I think marathon running has similar aspects.
[1300] I think when you're just, you figure out a way to keep pushing and push through the control of your mind and your desire and overcoming adversity.
I think overcoming adversity is critical for humans. We have this set of reward systems that are designed to reward us for overcoming obstacles, for overcoming relationship struggles, for overcoming physical limitations.
[1302] And those rewards are great.
[1303] And they're some of the most amazing moments in life when you do overcome.
[1304] And I think this is sort of engineered into the system.
[1305] So for me, fame is almost like a cheat code.
It's like... you don't really want to... Don't dwell on that, man. Like, that's a free buffet.
[1307] Like, you don't, you know, you want to go hunt your own food.
[1308] You want to make your own fire.
[1309] You want to cook it yourself and feel the satisfaction.
[1310] You don't want people feeding you grapes while you lie down.
[1311] What is the hardest thing?
[1312] So you talk about a challenge a lot.
[1313] What's the hardest thing you've, when have you been really humbled?
[1314] Martial arts, for sure, the most humbling.
Yeah, from the moment I started. I mean, I got really good at Taekwondo, but even then I'd still get the fuck beaten out of me by my friends, my training partners, especially when you're tired and you're rotating partners and guys are bigger than you. It's just humbling, you know. Martial arts is very humbling.
Yeah. So I've got to call you out on something. You talk about education systems sometimes — I've heard you say they're a little broken, high school and so on. I'm not really calling you out; I just want to talk about it, because I think it's important, and this is coming from somebody who loves math.
You've talked about how your own journey was that school didn't give you passion or value.
Well, you can maybe speak to that. But for me — maybe I'm sick in the head or something — math was exciting the way martial arts were exciting for you, because it was really hard.
[1318] I wanted to quit.
And the idea I have about education that seems to be a little flawed nowadays is that we want to make education easier, we want to make it more accessible, and so on. Accessible, of course, is great — those are all good goals — but you kind of forget in that that it's supposed to also be hard. And teachers... just the way your wrestling coach, if you, like, quit, if you say, I can't do any more, and you come up with some kind of excuse, your wrestling coach looks at you and says, get your ass back on the mat. I wish math teachers did that the same way. It's almost, like, cool now to say, ah, math sucks, math's not for me, or science sucks, this teacher's boring. I think there's room for a culture that says, no, no, no: if you just put in the time and you struggle, then that opens up the universe to you. Whether you become a Neil deGrasse Tyson or the next Fields Medal winner in mathematics.
[1320] I would not argue with you for one second.
I would also say that one of the more beautiful things about human beings is that we vary so much. One person is just obsessed with playing the trombone — and me, I don't give a fuck about trombones — but that's okay.
[1322] Like I can't be obsessed about everything.
[1323] Some people love golf and they just want to play it all day long.
I've never played golf a day in my life except miniature golf, just fucking around. But that's not bad or good. And I think there's definitely some skills that you learn from mathematics that are hugely significant if you want to go into the type of fields that you're involved in. For me, it's never been appealing. And it's not just that it was difficult; it's also that, for whatever reason — who I was at that time, in that school, with those teachers, having the life experience that I had — that was not what I was drawn to. What I was drawn to is literature. I was drawn to reading, I was drawn to stories, I was drawn to possibilities and creativity. I was drawn to all those things.
You were an artist too.
Yeah, I used to be. I used to want to be a comic book illustrator. That was a big thing when I was young. I was really into comic books — really into traditional comic books, and also a lot of the horror comics from the 1970s, the black-and-white ones, like Creepy and Eerie. Did you ever see those things? Creepy and Eerie, like, black and white? They were a comic book series that existed way back in the day. They were all horror, with really cool illustrations and these wild stories, but they were comic books, and they were all black and white. That's Creepy and Eerie.
Oh, that's the actual name?
Yeah, Eerie and Creepy were the names. See, like... what year was that from? It says September, but it doesn't say what year. I used to get these when I was a little kid, man. I was like eight, nine years old, in the '70s.
Good and Evil.
Yeah. They were my favorite. That's a cover of them. And they would have covers that were done by, like, Frank Frazetta or Boris Vallejo — just really cool shit. I loved those when I was little. I was always really into horror, and I was always really into movies, and really into, like, Bram... like, look at this werewolf one.
[1326] That was one of my favorite ones.
That was a crazy werewolf that was, like, on all fours.
[1328] Who's the hero usually?
[1329] Superhero.
[1330] Everybody dies.
[1331] That's the beautiful thing about it.
[1332] Everybody gets fucked over.
[1333] There's nobody, that was the thing that I really liked about them.
[1334] Nobody made it out alive.
[1335] There was no one guy who figured it out and rescued the woman and they rode off in the sunset, uh -uh.
You'd turn the corner and there'd be a fucking pack of wolves with glowing eyes waiting to tear everybody apart, and that'd be the end of the book.
[1337] And I just, I was just really into the illustrations.
[1338] I found them fascinating.
[1339] I just, I love those kind of horror movies, and I love those kinds of illustrations.
[1340] So that's what I wanted to do when I was young.
Yeah, I think the education system — we talked about creativity — is probably not as good at inspiring and feeding that creativity.
[1342] Because I think math and wrestling can be taught systematically.
I think creativity is something... well, actually, I know nothing about it.
So I think it's harder to take somebody like you when you're young and inspire you to pursue that fire, whatever is inside.
[1345] Well, one of the best ways to inspire people is by giving them these alternatives that are so uninteresting, like saying, you're going to get a job selling washing machines.
[1346] You're like, fuck that.
[1347] I'm going to figure out a way to not get a job selling washing machines.
Like, some of the best motivations that I've ever had have been terrible jobs. Because you have these terrible jobs, you go, okay, fuck that, I'm going to figure out a way to not do this. You know, whether you want to call it ADD or ADHD or whatever it is that makes kids squirm in class — I didn't squirm in every class. I didn't squirm in science class. I didn't squirm in interesting subjects. There were things that were interesting to me, where I would be locked in and completely fascinated, and there were things where I just couldn't wait to run out of that room. And I don't know what the reason is, but I do know that a lot of what we call our education system is engineered for a very specific result, and that result is: you want to get a kid who can sit in class and learn so that they can sit in a job and perform. And for whatever reason... I mean, I didn't have the ideal childhood. Maybe if I did, I would be more inclined to lean that way, but I didn't want to do anything like that.
[1349] Like, I couldn't wait to get the fuck out of school, so I didn't ever have to listen to anybody like that again.
[1350] And then just a few years later, I mean, you graduated from high school when you're 18.
When I was 21, I was a stand-up comic, and I was like, I found it.
[1352] This is it.
[1353] I'm like, good.
[1354] I found, there's an actual job that nobody told me about where you could just make fun of shit.
And people go out and they pay money to hear you create jokes and routines and bits.
Really? You weren't terrified of stand-up? Of getting on stage?
Oh, I was definitely nervous the first time.
Probably more nervous... it seems harder than fighting.
No, it's different. It's different. The consequences aren't as grave.
But that's one of the... are they not?
No. Like, embarrassment? And you don't get pummeled. I mean, you could say, like, emotionally it's probably more devastating, or as devastating. But man, losing a fight fucks you up for a long time. You feel like shit for a long time. But then you win, and you feel amazing for a long time too. When you kill on stage, you only feel good for, like, an hour or so, and then that goes away. It feels normal. It's just normal. It's just life, you know. But I think that it prepared me. Like, competing in martial arts — the fear of that, and then how hard it is to, like, stand opposite another person who's the same size as you, who's equally well-trained, who's also a martial arts expert, and they ask you, are you ready?
[1357] Are you ready?
[1358] You bow to each other, and then they go, fight, and then you're like, fuck, here we go.
Like, that, to me, probably was, like, one of the best... and to do that from the time I was 15 until I was 21 was probably the best preparation for anything that was difficult, because it was so fucking scary.
And then to go from that into stand-up — I think it prepared me for stand-up, because I was already used to doing things that were scary. And now I seek scary things out. I seek difficult things out, like picking up the bow and learning archery, which is really difficult. I mean, it's one of the reasons why I got attracted even to playing pool. Pool is very difficult. It's very difficult to control your nerves in high-pressure situations. So there's some benefit to that. But it goes back to what you were saying earlier: how much of all this stuff — like, when you're saying that there's real value in scarcity, and there's real value in struggle — how much of all this is just engineered into our human system, the thing that has given us the tools and the incentive to make it to 2018 as the human species?
[1361] Yeah, I think it's whoever the engineer is, whether it's God or nature or whatever, I think it's engineered in somehow.
[1362] We get to think about that when you try to create an artificial intelligence system.
[1363] When you imagine what's a perfect system for you, we talked about this with the lady, what's the perfect system for you?
[1364] If you had to really put down on paper and engineer, what's the experience of your life, when you start to realize it actually looks a lot like your current life.
So this is the problem that companies like Amazon are facing in trying to create Alexa.
[1366] What do you want from Alexa?
Do you want a tool that says what the weather is, or do you want Alexa to say, Joe, I don't want to talk to you right now?
[1368] I have.
[1369] Alexa, you have to work her over.
[1370] Like, Alexa, come on.
[1371] What do I do?
[1372] I'm sorry.
Listen, if I was rude, if I was insensitive — I was tired.
[1374] The commute was really rough.
[1375] And they should be like, I'm seeing somebody else.
[1376] Alexa.
[1377] Do you remember Avatar Depression?
The movie Avatar... and depression as a psychological effect after the movie, somehow?
[1379] Yeah, it was a real term that people were using, that psychologists were using, because people would see the movie Avatar, which I loved.
[1380] A lot of people said, oh, it's fucking Pocahontas with blue people.
[1381] To those people, I say, fuck off.
[1382] You want to talk about suspension of disbelief.
[1383] That to me, that movie was the ultimate suspension of disbelief.
[1384] I love that movie.
[1385] I fucking love that.
[1386] I know James Cameron's working on like 15 sequels right now all simultaneously.
[1387] I wish that motherfucker would dole them out.
He's like a crack dealer that gets you hooked once, and then you're just waiting outside in the cold, shivering, for years.
Avatar Depression was a psychological term that psychologists were using to describe this mass influx of people that saw that movie and were so enthralled by the way the Na'vi lived on Pandora that they came back to this stupid world...
[1390] Didn't want to leave.
[1391] They wanted to be like the blue guy in Avatar.
And also, there was a mechanism in that film where this regular person became a Na'vi.
[1393] He became it through the avatar.
And then eventually, at that tree of life or whatever it was, they transferred his essence into this creation, this avatar, and he became one of them.
[1395] He became one of them.
[1396] He absorbed their culture.
[1397] And it was very much like our romanticized versions of the Native Americans, that they lived in symbiotic relationship with the earth.
[1398] They only took what they needed.
They had a spiritual connection to their food and to nature. Their existence was noble, and it was honorable, and it wasn't selfish, and it was powerful, and it was spiritual. And we're missing these things. We're missing these things, and I think we are better at romanticizing them and craving them than at living them.
I mean, you look at movies like Happy People: A Year in the Taiga.
Life in the taiga... yeah, I'm Russian. It's a Werner Herzog film.
Yeah, amazing movie. Part of you wants to be like, well, I want to be out there in nature, focusing on simple survival, setting traps for animals, cooking some soup, a family around you — just kind of focusing on the basics.
[1400] And I'm the same way.
[1401] Like I go out, you know, hiking and I go out in nature.
[1402] I would love to pick up hunting.
[1403] I crave that.
But if you just put me in the forest, I'll probably... if you're like, here, I'm taking your phone away and you're staying here.
[1405] That's it.
You're never going to return to your Facebook and your Twitter and your robots.
[1407] I don't know if I'll be so romantic about that notion anymore.
[1408] I don't know either, but I think that's also the genie in the bottle discussion.
I think that genie's been out of the bottle for so long, you'd be like, but what about my Facebook?
[1410] What if I got some messages?
[1411] Let me check my email real quick.
[1412] No, no, no, we're in the forest.
There's no Wi-Fi out here.
No Wi-Fi ever?
[1415] What the fuck?
How do people get their porn?
There's no porn.
No... That's another understudied thing. Again, I'm not an expert, but the impact of internet pornography on culture...
Oh, yeah. I mean, it's significant, and also ignored to a certain extent — and if not ignored, definitely purposefully left out of the conversation.
Yeah. There's another PhD student, a person from Google, who came to give a tech talk, and he opened by saying, 90% of you in the audience have, this month, Googled a pornographic term in our search engine.
[1418] And it was really, it's a great opener because people were just all really uncomfortable.
[1419] Yeah.
Because we just kind of hide it away, but it certainly has an impact.
[1421] But I think there's a suppression aspect to that, too, that's unhealthy.
We have a suppression of our sexuality because we think that somehow or another it's negative.
Right. You know, and especially for women. I mean, for women... like, a man who is a sexual conqueror is thought to be a stud, whereas a woman who seeks out multiple desirable sexual partners is thought to be troubled — there's something wrong with her, you know? They're criticized. They're called terms like we used earlier, like slut or whore. You call a man a male slut, he'll start laughing: yep, that's me, dude. Like, men don't give a fuck about that. It's not stigmatized. But somehow or another, through our culture, it's stigmatized for women. And then the idea of masturbation is stigmatized. All these different things where the puritan roots of our society start showing, and our religious ideology starts showing, when we discuss the issues that we have with sex and pornography.
[1424] Right.
[1425] And for me, this is something I think about a little bit because my dream is to create an artificial intelligence, a human -centered artificial intelligence system that provides a deep, meaningful connection with another human being.
[1426] And you have to consider the fact that pornography or sex dolls will be part of that journey somehow in society.
[1427] The dummy they'll be using for martial arts.
We're likely to see the development of sex robots.
[1429] And we have to think about what's the impact of those kinds of robots on society.
[1430] Well, women in particular are violently opposed to sex robots.
[1431] I've read a couple of articles written by women about sex robots and the possibility of future sex robots.
[1432] And I shouldn't say violently.
[1433] But it's always negative.
So the idea is that men would want to have sex with some beautiful thing that's programmed to love them, as opposed to earning the love of a woman.
[1435] But you don't hear that same interpretation from men.
[1436] From men, it seems to be that there's a thought about maybe it's kind of gross, but also that it's inevitable.
[1437] And then there's like this like sort of nod to it.
Like, how crazy would that be if you have the perfect woman, like the woman in the red dress in the Matrix?
[1439] Yeah, but she comes over your house and she's perfect.
Because you're not thinking about the alternative, which is a male robot doll which will now be able to satisfy your girlfriend or wife better than you.
[1442] I think you'll hear from guys a lot more then.
[1443] Maybe, or maybe, like, good luck with her.
[1444] She's fucking annoying.
[1445] She's always yelling at me. Let her yell at the robot.
[1446] He's not going to care.
[1447] Then that robot turns into a grappling gun.
Yeah, and maybe she can just go ahead and get fat with the robot.
He's not even going to care.
Go ahead.
Just sit around and eat Cheetos all day.
And scream at him.
[1453] He's your, he's your slave.
[1454] Good.
I mean, it can work both ways, right?
It can work the same way. You know, a woman would see a man that's interested in a sex robot as disgusting and pathetic.
[1457] A man could see the same thing in a woman that's interested in a sex robot.
[1458] Like, okay, is that what you want?
You're some crude thing that just wants physical pleasure, and you don't even care about a real, actual emotional connection to a biological human being? Like, okay, well, then you're not my kind of woman anyway.
Yeah. And, uh, if done well, those are the kinds of things — in terms of threats of AI, to me — that can change the fabric of society. Because, like, I'm old school in the sense that I like monogamy, for example, you know.
Well, you say that because you don't have a girlfriend, so you're longing for monogamy.
One is... one is better than...
Well, no, the real reason I don't have a girlfriend is — and it's fascinating with people like you, actually, with Elon Musk — like, the time.
[1460] Yes.
It's a huge challenge, because of how much of a romantic I am, because of how much I care about people around me. I feel like it's a significant investment of time.
[1462] And also the amount of work that you do.
I mean, if you're dedicated to a passion like artificial intelligence, the sheer amount of fucking studying and research and programming, too.
There are certain disciplines... certain disciplines require — like, Steven Pressfield talks about writing: you can get pretty far with two, three hours a day. With programming, a lot of the engineering tasks just take up hours. It's just hard. Which is why... I mean, I disagree with Elon Musk on a bunch of things, but he's an inspiration, because I think he's a pretty good dad, right? And he finds the time for his sons while being probably an order of magnitude busier than I am.
[1465] And it's fascinating to me how that's possible.
[1466] Well, once you have children, I mean, there obviously are people that are bad dads.
But once you have children, your life shifts in an almost indescribable way, because you're different.
[1468] It's not just that your life is different.
[1469] When you have a child, like you're, like, there hasn't been a moment while we're having this conversation that I haven't been thinking about my children.
[1470] Thinking about what they're doing, where they are.
[1471] It's always running in the background.
[1472] It's a part of life.
[1473] You're connected to these people that you love so much, and they rely on you for guidance and for warmth and affection.
[1474] But how did your life have to change?
[1475] Well, you just change, man. When you see the baby, you change.
[1476] When you start feeding them, you change.
[1477] When you hold them, you change.
[1478] When you hold their hand while they're, walk you change when they ask you questions you change when they laugh and giggle you change when they smack in the face and you pretend to fall down they laugh you change you know every you just change man you change you become a different thing you become a dad so you almost can't help but some people do help it though that's what's sad some people resisted i mean i know people that have been terrible terrible parents they just they'd rather stay out all night and never come home and they don't want to take care of their kids and they get they split up with the wife or the girlfriend who's got the kid and they don't give child support it's a really common theme man I mean there's a lot of men out there that don't pay child support that's a dark dark thing you have a child out there that needs food and you don't want you're so fucking selfish you don't want to provide resources not only do you not want to be there for companionship you don't want to provide resources to pay for the child's food you don't feel responsible for I mean was my case when I was a kid.
[1479] My dad didn't pay child support.
[1480] And we were very poor.
[1481] It's one of the reasons why we were so poor.
[1482] And I know other people that have that same experience.
So it's not everyone that becomes a father — or that impregnates a woman, I should say, and becomes a father.
[1484] And the other side is true too.
[1485] There's women that are terrible mothers for whatever reason.
[1486] I mean, maybe they're broken psychologically.
[1487] Maybe they have mental health issues.
[1488] Whatever it is.
There are some women that are fucking terrible moms. It's sad, but it makes you appreciate women that are great moms so much more.
Yeah. Yeah. When I see guys like you, the inspiration is... so, I'm looking for sort of the structural: what's the process to then fit people into your life? But what I hear is, when it happens, you just do. You change.
But this is the thing, man. We're not living in a book, right? We're not living in a movie. It doesn't always happen. Like, you have to decide that you want it to happen, and you've got to go looking for it. Because if you don't, you could just be older and still alone. Time goes. There's a lot of my friends that have never had kids, and now they're in their 50s. I mean, comedians.
Yeah, you have to be on the road a lot.
Not just on the road. You have to be obsessed with comedy. Like, it's got to be something where you're always writing new jokes, because you're always writing new material — especially if you put out a special, right? Like, I just did a Netflix special. It's out now. So I only really have, like, a half hour of new material. That's it.
It's great, by the way. Strange Times.
Thank you very much.
It's the first special of yours I've watched, and it was actually really weird — to go on a tangent — because I've listened to you quite a bit, but I've never watched you doing comedy. And it was so different. Because, like, here you're just, like, improv — you're like a jazz musician. Here it's like a regular conversation. In the stand-up special, it was clear, like, everything is perfect. The timing. It's like watching you do a different art, almost.
It is kind of interesting. It's like a song or something. There's some riffing to it, there's some improvisation to it, but there's also a very clear structure to it. And it's so time-intensive, and you've got to be obsessed with it to continue to do something like that. So for some people that travel the road, that takes priority over all things, including relationships. And then you never really settle down, and so you never have a significant relationship with someone that you could have a child with. And I know many friends that are like that, and I know friends that have gotten vasectomies because they don't want it. They like this life. And there's nothing wrong with that either, you know. I always was upset by this notion that in order to be a full and complete adult, you have to have a child, you have to be a parent. And I think even as a parent — where I think it's probably one of the most significant things in my life — I reject that notion.
[1490] I think you could absolutely be a fully developed person, an amazing influence in society, an amazing contributor to your culture and your community without ever having a child, whether you're a man or a woman.
[1491] It's entirely possible.
And the idea that it's not is silly.
[1493] We're all different in so many different ways, you know, and we contribute in so many different ways.
[1494] There's going to be people that are obsessed with mathematics.
[1495] There's going to be people that are obsessed with literature.
[1496] There's going to be people that are obsessed with music and they don't all have to be the same fucking person because you really don't have enough time for it to be the same person you know and there's there's going to be people that love having children they love being a dad or love being a mom and there's going to be people that don't want to have nothing to do with that and they get snipped early and they're like fuck off I'm going to smoke cigarettes and drink booze and I'm going to fly around the world and talk shit and those people are okay too like it's the way we interact with each other that's most important that's what I think the way human beings to the way we form bonds and friendships the way we we contribute to each other's lives the way we we find our passion and create those things are what's really important yeah but there's also an element just looking at my parents I think they got they're still together to gotten together what it means standards you get together when you're like 20 or I should know this but 23 20 whatever young and there is an element there where you don't to be too rational you just want to just dive in right should you be an mma fighter should you be like um i'm in academia now uh so i'm a research scientist MIT the pay is much much lower than all the offers i'm getting non -stop is it rational i don't know but your passion is doing what you're doing currently yeah but it's like it's what are the other offers like what kind of other jobs and are they appealing in any way Yeah.
[1497] Yeah, they're appealing.
So I'm making a decision that's similar to actually getting married, which is... so the offers are — well, I shouldn't call them out, but Google-type places, the usual AI research, pretty high positions.
And there's just something in me that says the edge, the chaos of this environment at MIT, is something I'm drawn to.
[1500] It doesn't make sense.
[1501] So I can do what I'm passionate about in a lot of places.
[1502] You just kind of dive in.
[1503] And I had a sense that a lot of our culture creates that momentum.
[1504] And you just kind of have to go with it.
[1505] And that's why my parents got together.
[1506] Like a lot of people, they're probably, I mean, a lot of couples wouldn't be together if they weren't kind of culturally forced to be together and divorce was such a negative thing.
[1507] And they grew together and created a super happy connection.
So I'm a little afraid of over-rationality about choosing the path of life.
So you're saying, like, monogamy doesn't... or not monogamy — relationships don't always make sense.
[1510] They don't have to make sense.
[1511] You know, I think I'm a big believer in doing what you want to do.
[1512] And if you want to be involved in a monogamous relationship, I think you should do it.
[1513] But if you don't want to be involved in one, I think you should do that too.
[1514] I mean, if you want to be like a nomad and travel around the world and just live out of a backpack, I don't think there's anything wrong with that.
As long as you're healthy and you survive, and you're not depressed, and you're not longing for something that you're not participating in. But I think when you are doing something you don't want to be doing... it brings me back to, um — was it Thoreau's quote? I always fuck up who said this.
I think I know which one you're going to say.
Yeah: most men live lives of quiet desperation. That's real, man. That's real. That's what you don't want.
I think it's Thoreau, right? Check.
You don't want quiet desperation. Yeah, it is, right? I fucking love that quote, because I've seen it in so many people's faces. And that's one thing I've managed to avoid. And I don't know if I avoided that by luck, or just by the fact that I'm stupid and I just follow my instincts, whether they're right or wrong, and I make it work. But this goes back to what we were discussing in terms of what is the nature of reality. Are we just finding these romanticized interpretations of our own biological needs and our human reward systems, which create these beautiful visions of what is life and what is important — poetry and food and music and all the passions and dancing and holding someone in your arms that you care for deeply? Are all those things just little tricks?
Are all those little biological tricks there just to keep up this very strange dance of human civilization, so that we can keep on creating new and better products that keep moving innovation towards this ultimate, eventual goal?
[1517] Of artificial intelligence.
[1518] Of this thing.
[1519] Giving birth to the gods.
[1520] Yeah, giving birth to the gods.
Yeah. So, you know, I did want to mention one thing — the one thing I really don't understand fully, but that I've been thinking about for the last couple of years: the application of artificial intelligence to politics. I've heard you talk about government being broken, in the sense that one guy, one president, doesn't make any sense. So, you know, people get hundreds of millions of likes on their Facebook pictures and Instagram, and we're voting with our fingers every single day, and yet for the election process, it seems that we're voting, like, once every four years.
It feels like this new technology could bring about a world where the voice of the people can be heard on a daily basis — where you could speak about the issues you care about, whether it's gun control or abortion, all these topics that are so heated. It feels like there needs to be an Instagram for our elections.
[1523] I agree, yeah.
[1524] And I think there's room for that.
I mean, I've been thinking about how to write a few papers proposing different technologies.
It just feels like the people that are playing politics are old school.
[1527] The only problem with that is like the influencers, right?
If you look at Instagram — I mean, should Nicki Minaj be able to decide how the world works because she's got the most followers?
[1529] Should Kim Kardashian, like who's influencing things and why?
[1530] And you have to deal with the fickle nature of human beings.
[1531] and do we give enough patience towards the decisions of these so -called leaders that we're electing?
[1532] Or do we just decide, fuck them, they're out, new person in?
Because we have, like, a really short attention span when it comes to things, especially today with the news cycle being so quick.
So the same process... Instagram might be a bad example — because, yeah, on Instagram or Twitter, you start following Donald Trump and you start to sort of idolize these certain icons. Do we necessarily want them to represent us?
I was more thinking about the Amazon product-reviews model, recommender systems, or Netflix: the movies you've watched, Netflix learning enough about you to represent you in your next movie selection.
[1536] So in the kind of movies, like what do you, like you, Joe Rogan, what are the kind of movies that you would like?
The recommender systems — these are artificial intelligence systems — learn based on your Netflix selections, and that could be a deeper understanding of who you are than you're even aware of.
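To make that concrete, here is a minimal sketch of the collaborative-filtering idea behind recommender systems like the ones being described: score what a user hasn't rated using the ratings of similar users. The titles, ratings, and function names below are hypothetical toy data for illustration, not any production system's actual model or API.

import numpy as np

# Rows are users, columns are movies; 0 means "not rated yet".
titles = ["The Little Mermaid", "The Godfather Part II", "The Matrix", "Avatar"]
ratings = np.array([
    [5, 0, 4, 4],   # user 0 (hypothetical)
    [1, 5, 5, 0],   # user 1
    [0, 4, 5, 2],   # user 2
])

def cosine(u, v):
    # Cosine similarity between two users' rating vectors.
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / norm if norm else 0.0

def recommend(user, k=1):
    # Weight every other user's ratings by how similar they are to this user,
    # then suggest the highest-scoring movies this user hasn't rated yet.
    sims = np.array([cosine(ratings[user], ratings[other]) if other != user else 0.0
                     for other in range(len(ratings))])
    scores = sims @ ratings                 # similarity-weighted ratings
    scores[ratings[user] > 0] = -np.inf     # never re-recommend what's been seen
    return [titles[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend(user=0))   # -> ['The Godfather Part II']

The same move — inferring the preferences you haven't stated from people whose stated preferences resemble yours — is what the conversation imagines carrying over from movie selection to the gray-area political questions discussed next.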
[1538] And I think there's that element.
[1539] I'm not sure exactly, but there's that element of learning who you are.
[1540] Like, do you think drugs should be legalized or not?
[1541] Like, do you think immigration?
[1542] Should we let everybody in or keep everybody out?
Should we... all these topics where the red and blue teams now have a hard answer: of course you keep all the immigrants out, or of course you need to be more compassionate. But for most people, it's really a gray area. And exploring that gray area the way you would explore the gray area of Netflix — what's the next movie you're watching? Do you want to watch The Little Mermaid or The Godfather Part II? — that process of understanding who you are... it feels like there's room for that.
Well, the problem, of course, is that there are grave consequences to these decisions that you're going to make, in terms of the way they affect the community.
[1545] And you might not have any information that you're basing this on at all.
[1546] You might be basing all these decisions on misinformation, propaganda, nonsense, advertising.
[1547] You could be easily influenced.
[1548] You might not have looked into it at all.
You could be ignorant about the subject, and it might just appeal to certain dynamics that have been programmed into your brain because you grew up religious, or you grew up an atheist. You know, the real problem is whether or not people are educated about the consequences of the decisions they're going to make.
[1550] Yep, it's information.
[1551] Yeah.
[1552] And then I think, I mean, I think there's going to be a time in our life where our ability to access information is many steps better than it is now with smartphones.
I think we're going... like, Elon Musk has this Neuralink thing that he's working on right now.
[1554] He's being very vague about it.
Increasing the bandwidth of our human interaction with machines is what he's working on.
[1556] Yeah.
[1557] I'm very interested to see where this leads.
[1558] But I think that we can assume that because something like the Internet came along and because it's so accessible to you and I right now with your phone, just pick it up, say, hey, Google, what the fuck is this?
[1559] And you get the answer almost instantaneously.
[1560] that's going to change what a person is as that advances.
[1561] And I think we're much more likely looking at some sort of a symbiotic connection between us and artificial intelligence and computer augmented access to information than we are looking at the rise of some artificial being that takes us over and fucks our girlfriend.
[1562] Wow.
[1563] Yeah, that's the real existential threat.
[1564] Yeah, I think so.
That, to me, is super exciting: the phone is a portal to this collective that we have, this collective consciousness, and it gives people a voice.
[1567] I would say if anyone's like me, you really know very little about the politicians you're voting for, or even the issues.
[1568] Like, global warming, I'm embarrassed to say.
I know very little about. Like, if I'm actually being honest with myself... I've heard different things. Like, I know what I'm supposed to believe as a scientist.
[1570] But I actually know nothing about...
[1571] Concrete, right?
[1572] Nothing concrete about the process itself.
About the environmental process, and why it's so certain.
[1574] You know, scientists apparently completely agree.
[1575] So as a scientist, I kind of take on faith oftentimes what the community agrees.
[1576] In my own discipline, I question.
[1577] But outside, I just kind of take on faith.
And the same thing with gun control, and so on.
[1579] You just kind of say, which team am I on?
[1580] And I'm just going to take that on.
I just feel like it's such a disruptable space, where people could be given just a tiny bit more information to help them.
Well, maybe that's where something like Neuralink comes along and enhances our ability to access this stuff in a way that's much more tangible than just being able to Google search it.
[1584] And maybe this process is something that we really can't anticipate.
It's going to have to happen to us. Just like what we were talking about — cell phone images that you can send to Australia with the click of a button — no one would have anticipated that 300 years ago. Maybe we are beyond our capacity for understanding the impact of all this.
Yeah. Yeah. Maybe the kids coming up now... what is that world going to look like? When you're too old... you'll be, like, 95, sitting on a porch with a shotgun, Clint Eastwood style. And what do those kids look like when they're 18 years old? Robots, fucking X-ray vision, and they can read minds.
Yeah. You know what's going to happen: you'll be saying, robots are everywhere these days. Back in my day, we used to put robots in their place.
Yeah, right. Like, they were servants.
I'd shut them off.
Yeah, sure. Unplug them.
Go fuck your mom. Now they want to go to the same school as us.
Yeah. And they want to run for president.
They want to run for president. Yeah, they're more compassionate and smarter, but we still hate them because they don't go to the bathroom.
Yeah. Well, not we — half the country will hate them, and the other half will love them.
[1586] And the Abraham Lincoln character will come along.
[1587] That's what I'm pitching myself for.
[1588] You're the Abraham Lincoln of the robot world.
[1589] The robot world.
[1590] That's the speeches that everybody quotes.
[1591] And one other thing I've got to say about academia.
[1592] Okay.
[1593] In defense of academia.
[1594] So you've had a lot of really smart people on, including Sam Harris and Jordan Peterson.
[1595] And often the word academia is used to replace a certain concept.
[1596] So I'm part of academia.
[1597] And most of academia is engineering, is biology, is medicine, is hard sciences.
[1598] It's the humanities that are slippery.
[1599] Exactly.
And there's a subset of the humanities that I know nothing about, and a subset that I don't want to speak about.
[1601] Gender studies.
[1602] Say it.
[1603] I don't know.
[1604] I don't know.
[1605] Candy man. Candyman.
[1606] Candyman.
[1607] I actually live on Harvard campus.
So I'm at MIT, but I live on the Harvard campus, and yeah, it's there.
Do they have apartments for you guys? How does that work? They hand them out?
No, I just... I don't literally live on the campus.
What do you mean?
Oh, sorry — like, in Harvard Square.
Oh, Harvard Square. In Cambridge.
In Cambridge, yeah.
Yeah. So I used to go to Catch a Rising Star when it existed. There used to be a great comedy club in Cambridge.
There's a few good comedy clubs there, right?
Well, there's a Chinese restaurant that has stand-up there still.
How does that work?
Well, it's upstairs. There's, like, this comedy club up there.
Yeah. Have you ever — because you've done, I think, your specials in Boston?
Yes, I did, at the Wilbur Theatre.
Have you ever considered just going back to Boston and doing, like, that Chinese restaurant?
The Ding Ho? Yeah, that was before my time. When I came around — I started in 1988 — the Ding Ho had already ended. But, you know, I got to be friends with guys like Lenny Clarke and Tony V and all these people that told me about the Ding Ho, and Kenny Rogerson — the comics that were there — and Barry Crimmins, who just passed away, rest in peace, who was really the godfather of that whole scene.
And one of the major reasons why that scene had such rock-solid morals and ethics when it came to the creation of material and standards...
A lot of it was Barry Crimmins, because that's just who he was as a person.
You know... but that was before my time. I came around, like, four years after that stuff, and so there were tons of comedy clubs — it was everywhere — but I just didn't get a chance to be around that Ding Ho scene.
And you stayed in Boston for how long before you moved out here?
I was in New York by... I think I was in New York by '92 — '91, '92. So I was in Boston for, like, four or five years doing stand-up.
How'd you get from Boston to New York?
[1612] My manager.
[1613] I want to use this opportunity for you to talk shit about Connecticut.
[1614] People from Connecticut.
[1615] It's become a running theme to talk shit about Connecticut here.
[1616] I've heard you do it once.
[1617] I just had a buddy who did a gig in Connecticut.
[1618] He told me it was fucking horrible.
[1619] I go, I told you, bitch.
[1620] You should have listened to me. Don't book gigs in Connecticut.
The fuck's wrong with you? There's 49 other states. Go to Alaska. It's great.
Do you go back to Boston and do, like, small gigs?
Small ones? Sometimes, yeah, I'll do... yeah, Laugh Boston is a great club. I used to do Nick's Comedy Stop and all the other ones there. But, you know, I love the Wilbur. The Wilbur's a great place to perform. I love Boston. I would live there if it wasn't so fucking cold in the winter.
But that's what keeps people like me out.
It keeps the pussies away. Listen, we've got to end this. We've got to wrap it up.
[1623] We've already done three hours, believe it or not.
[1624] It flies by.
[1625] It did.
[1626] It flew by.
[1627] Can I say two things?
[1628] Sure, sure.
So first, I've got to give a shout-out to my...
[1630] Shout out.
To a long, long-time friend, Matt Harandi from Chicago, who's been there all along.
[1632] He's a fan of the podcast, so he's probably listening.
[1633] Him and his wife Fadi had a beautiful baby girl, so I want to just send my love to him.
[1634] And I told myself I'll end it this way.
[1635] Okay.
Let me end it the way you always end it.
[1637] Love is the answer.
[1638] Love is the answer.
[1639] It probably is.
[1640] Unless you're a robot.
[1641] Bye.
[1642] Unless you're a robot.