Lex Fridman Podcast XX
[0] The following is a conversation with Michael Stevens, the creator of Vsauce, one of the most popular educational YouTube channels in the world, with over 15 million subscribers and over 1.7 billion views.
[1] His videos often ask and answer questions that are both profound and entertaining, spanning topics from physics to psychology.
[2] Popular questions include: what if everyone jumped at once, what if the sun disappeared, why are things creepy, or
[3] what if the earth stopped spinning?
[4] As part of his channel, he created three seasons of Mind Field, a series that explored human behavior.
[5] His curiosity and passion are contagious and inspiring to millions of people.
[6] And so as an educator, his impact and contribution to the world is truly immeasurable.
[7] This is the Artificial Intelligence Podcast.
[8] If you enjoy it, subscribe on YouTube, give five stars on Apple Podcast, support on Patreon, or simply connect with me on Twitter.
[9] at Lex Fridman, spelled F-R-I-D-M-A-N.
[10] I recently started doing ads at the end of the introduction.
[11] I'll do one or two minutes after introducing the episode and never any ads in the middle that break the flow of the conversation.
[12] I hope that works for you and doesn't hurt the listening experience.
[13] This show is presented by Cash App, the number one finance app in the App Store.
[14] I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds.
[15] Cash App also has a new investing feature.
[16] You can buy fractions of a stock, say $1 worth, no matter what the stock price is.
[17] Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC.
[18] I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and Lego competitions.
[19] They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator,
[20] which means that donated money is used to maximum effectiveness.
[21] When you get Cash App from the App Store or Google Play and use code LexPodcast, you'll get $10, and Cash App will also donate $10 to FIRST, which, again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world.
[22] And now, here's my conversation with Michael Stevens.
[23] One of your deeper interests is psychology, understanding human behavior.
[24] You've pointed out how messy studying human behavior is and that it's far from the scientific rigor of something like physics, for example.
[25] How do you think we can take psychology from where it's been in the 20th century to something more like what the physicists, the theoretical physicists, are doing, something precise, something rigorous?
[26] Well, we could do it by finding the physical foundations of psychology,
[27] right?
[28] If all of our emotions and moods and feelings and behaviors are the result of mechanical behaviors of atoms and molecules in our brains, then can we find correlations?
[29] Perhaps chaos makes that really difficult and the uncertainty principle and all these things.
[30] We can't know the position and velocity of every single quantum state in a brain, probably.
[31] But I think that if we can get to that point with psychology, then we can start to think about consciousness in a physical and mathematical way.
[32] When we ask questions like, well, what is self-reference?
[33] How can you think about yourself thinking?
[34] What are some mathematical structures that could bring that about?
[35] There are ideas, in terms of consciousness and breaking it down into physics, there are ideas of panpsychism, where people believe that whatever consciousness is, it is a fundamental part of reality, almost like a physics law. What are your views on consciousness? Do you think it's this deep part of reality, or is it something that's deeply human and constructed by us humans? Starting nice and light. Yeah, nice and easy. Nothing I ask you today has an actually proven answer. Right, it's all just hypothesized. So yeah, I mean, I should clarify, this is all speculation, and I'm not an expert in any of these topics, and I'm not God.
[36] But I think that consciousness is probably something that can be fully explained within the laws of physics.
[37] I think that our bodies and brains and the universe at the quantum level are so rich and complex, I'd be surprised if we couldn't find room for consciousness
[38] there.
[39] And why should we be conscious?
[40] Why are we aware of ourselves?
[41] That is a very strange and interesting and important question.
[42] And I think for the next few thousand years, we're going to have to believe in answers purely on faith.
[43] But my guess is that we will find that, you know, within the configuration space of possible arrangements of the universe, there are some that contain memories of others.
[44] Literally, Julian Barbour calls them time capsule states, where you're like, yeah, not only do I have a scratch on my arm, but this state of the universe also contains a memory in my head of being scratched by my cat three days ago.
[45] And for some reason, those kinds of states of the universe are more plentiful or more likely.
[46] When you say those states, do you mean the ones that contain memories of their past, or ones that contain memories of their past and have degrees of consciousness?
[47] Just the first part, because I think the consciousness then emerges from the fact that a state of the universe that contains fragments or memories of other states is one where you're going to feel like there's time.
[48] You're going to feel like, yeah, things
[49] happened in the past.
[50] And I don't know what will happen in the future because these states don't contain information about the future.
[51] For some reason, those kind of states are either more common, more plentiful, or you could use the anthropic principle and just say, well, they're extremely rare.
[52] But until you are in one, or if you are in one, then you can ask questions like you're asking me on this podcast.
[53] Why questions?
[54] Yeah, it's like, why are we conscious?
[55] Well, because if we weren't, we wouldn't be asking why we were. You've kind of implied that you have a sense, again, hypothesizing, theorizing, that the universe is deterministic. What are your thoughts about free will? Do you think of the universe as deterministic, in a sense that it's unrolling, like it's operating under a specific set of physical laws, and when you set the initial conditions, it will unroll in the exact same way in our particular line of the universe every time?
[56] That is a very useful way to think about the universe.
[57] It's done us well.
[58] It's brought us to the moon.
[59] It's brought us to where we are today, right?
[60] I would not say that I believe in determinism in that kind of an absolute form.
[61] Or actually, I just don't care.
[62] Maybe it's true, but I'm not going to live my life like it is.
[63] Because you've studied kind of how we humans think of the world,
[64] what in your view is the difference between our perception, like how we think the world is, and reality?
[65] Do you think there's a huge gap there?
[66] Like, do we delude ourselves? Is the whole thing an illusion, just everything about human psychology, the way we see things, versus how things actually are?
[67] In all the things you've studied, what's your sense?
[68] How big is the gap between our perception and reality?
[69] Well, again, purely speculative, I think that we will never know the answer.
[70] We cannot know the answer.
[71] There is no experiment to find an answer to that question.
[72] Everything we experience is an event in our brain.
[73] When I look at a cat, I can't even prove that there's a cat there.
[74] All I am experiencing is the perception of a cat inside my own brain.
[75] I am only a witness to the events of my mind.
[76] I think it is very useful to infer that if I witness the event of cat in my head, it's because I'm looking at a cat that is literally there and has its own feelings and motivations and should be pet and given food and water and love.
[77] I think that's the way you should live your life.
[78] But whether or not we live in a simulation, whether I'm a brain in a vat, I don't know.
[79] Do you care?
[80] I don't really. Well, I care because it's a fascinating question, and it's a fantastic way to get people excited about all kinds of topics: physics, psychology, consciousness, philosophy. But at the end of the day, what would the difference be?
[81] The cat needs to be fed at the end of the day, otherwise it'll be a dead cat. Right, but if it's not even a real cat, then it's just like a video game cat. Right. So what's the difference between killing a digital cat in a video game because of neglect versus a real cat?
[82] It seems very different to us psychologically.
[83] Like, I don't really feel bad about, oh my gosh, I forgot to feed my Tamagotchi, right?
[84] But I would feel terrible if I forgot to feed my actual cats.
[85] So can you just touch on the topic of simulation?
[86] Do you find this thought experiment that we're living in a simulation useful, inspiring, or constructive in any kind of way?
[87] Do you think it's ridiculous?
[88] Do you think it could be true, or is it just a useful thought experiment?
[89] I think it is extremely useful as a thought experiment because it makes sense to everyone, especially as we see virtual reality and computer games getting more and more complex.
[90] You're not talking to an audience in, like, Newton's time, where you're like, imagine a clock that has mechanics in it so complex that it can create love.
[91] And everyone's like, no. But today, you really start to feel, you know, man, at what point is this little robot friend of mine going to be like someone I don't want to cancel plans with?
[92] And so it's a great thought experiment: do we live in a simulation? Am I a brain in a vat that is just being given electrical impulses from some nefarious other beings so that I believe that I live on Earth and that I have a body and all of this?
[93] And the fact that you can't prove it either way is a fantastic way to introduce people to some of the deepest questions.
[94] So you mentioned a little buddy that you would want to cancel an appointment with.
[95] So that's a lot of our conversations.
[96] That's what my research is, artificial intelligence.
[97] And I apologize, but you're such a fun person to ask these big questions with...
[98] Well, I hope I can give some answers that are interesting.
[99] Well, because you've sharpened your brain's ability to explore some of the questions that many scientists are actually afraid of even touching, which is fascinating.
[100] I think you're in that sense, ultimately a great scientist through this process of sharpening your brain.
[101] Well, I don't know if I am a scientist.
[102] I think science is a way of knowing, and there are a lot of questions I investigate that are not scientific questions.
[103] Like on Mind Field, we have definitely done scientific experiments and studies that had hypotheses and all of that.
[104] But, you know, not to be too precious about what does the word science mean.
[105] But I think I would just describe myself as curious.
[106] And I hope that that curiosity is contagious.
[107] So to you, the scientific method is deeply connected to science. Your curiosity took you to asking questions. To me, asking a good question, even if you feel society thinks it's not a question within the reach of science currently, to me asking the question is the biggest step of the scientific process. The scientific method is the second part, and that may be what is traditionally called science. But to me, asking the questions, being brave enough to ask the questions, being curious and not constrained by what you're supposed to think is true, that's what it means to be a scientist to me.
[108] It's certainly a huge part of what it means to be a human.
[109] If I were to say, you know what, I don't believe in forces.
[110] I think that when I push on a massive object, a ghost leaves my body and enters the object I'm pushing, and these ghosts happen to just get really lazy when they're around massive things.
[111] And that's why F equals MA.
[112] Oh, and by the way, the laziness of the ghost is in proportion to the mass of the object.
[113] So boom, prove me wrong.
[114] Every experiment, well, you can never find the ghost.
[115] And so none of that theory is scientific.
[116] But once I start saying, can I see the ghost?
[117] Why should there be a ghost?
[118] And if there aren't ghosts, what might I expect?
[119] And I start to do different tests to see, is this falsifiable?
[120] Are there things that should happen if there are ghosts, or things that shouldn't happen?
[121] And do they, you know, what do I observe?
[122] Now I'm thinking scientifically.
[123] I don't think of science as, wow, a picture of a black hole.
[124] That's just a photograph.
[125] That's an image.
[126] That's data.
[127] That's a sensory and perception experience.
[128] Science is how we got that and how we understand it and how we believe in it and how we reduce our uncertainty around what it means.
[129] But I would say I'm deeply within the scientific community and I'm sometimes disheartened by the elitism of the thinking, sort of not allowing yourself to think outside of the box.
[130] So allowing the possibility of going against the conventions of science, I think is a beautiful part of some of the greatest scientists in history.
[131] I don't know.
[132] I'm impressed by scientists every day.
[133] And revolutions in our knowledge of the world occur only under very special circumstances.
[134] It is very scary to challenge conventional thinking and risky because let's go back to elitism and ego, right?
[135] If you just say, you know what, I believe in the spirits of my body and all forces are actually created by invisible creatures that transfer themselves between objects.
[136] If you ridicule every other theory and say that you're correct, then ego gets involved and you just don't go anywhere.
[137] But fundamentally, the question of, well, what is a force, is incredibly important.
[138] We need to have that conversation, but it needs to be done in this very polite way of, like, let's be respectful of everyone, and let's realize that we're all learning together, and not shut out other people.
[139] And so when you look at a lot of revolutionary ideas, they were not accepted right away.
[140] And Galileo had a couple of problems with the authorities.
[141] And later thinkers, Descartes, was like, all right, look, I kind of agree with Galileo, but I'm going to have to not say that.
[142] I'll have to create and invent and write different things that keep me from being in trouble.
[143] But we still slowly made progress.
[144] Revolutions are difficult in all forms, and certainly in science. Before we get to AI, on the topic of revolutionary ideas, let me ask: in a Reddit AMA, you said that "Is the earth flat?" is one of your favorite questions you've ever answered. Speaking of revolutionary ideas, your video on that, people should definitely watch it, is really fascinating. Can you elaborate on why you enjoyed answering this question so much? Yeah, well, it's a long story. I remember, a long time ago, I was living in New York at the time, so it had to have been like 2009 or something, I visited the Flat Earth forums.
[145] And this was before Flat Earth theories became as sort of mainstream as they are.
[146] Sorry to ask the dumb question, forums, as in online forums?
[147] Yeah, the Flat Earth Society.
[148] I don't know if it's dot com or dot org, but I went there and I was reading, you know, their ideas and how they responded to typical criticisms of, well, the earth isn't flat because what about this?
[149] And I could not tell, and I mentioned this in my video, I couldn't tell how many of these community members actually believed the earth was flat or were just trolling.
[150] And I realized that the fascinating thing is how do we know anything and what makes for a good belief versus a maybe not so tenable or good belief.
[151] And so that's really what my video about Earth being flat is about.
[152] It's about, look, there are a lot of reasons.
[153] The Earth is probably not flat.
[154] But a flat earth believer can respond to every single one of them.
[155] But it's all in an ad hoc way.
[156] And all of their rebuttals aren't necessarily going to form a cohesive, non-contradictory whole.
[157] And I believe that's the episode where I talk about Occam's razor and Newton's flaming laser sword.
[158] And then I say, well, you know what?
[159] Wait a second.
[160] We know that space contracts as you move.
[161] And so to a particle moving near the speed of light towards Earth, Earth would be flattened in the direction of that particle's travel.
[162] So to them, Earth is flat.
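As an aside, the length-contraction point above can be made quantitative with the standard Lorentz factor. This is a rough illustrative sketch; the particle speed of 99.99% of light speed is a number I chose, not one from the episode:

```python
import math

EARTH_DIAMETER_KM = 12_742  # mean diameter of Earth

def lorentz_gamma(v_fraction_of_c: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2), with v given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

# In the rest frame of a particle moving at 99.99% of light speed towards
# Earth, lengths along the direction of travel contract by a factor of gamma.
gamma = lorentz_gamma(0.9999)
contracted_km = EARTH_DIAMETER_KM / gamma
print(f"gamma = {gamma:.1f}, Earth contracted to ~{contracted_km:.0f} km thick")
# → gamma ≈ 70.7, so the ~12,742 km diameter shrinks to roughly 180 km
```

So "flat" is an exaggeration, but a fast-enough particle really does see a dramatically squashed Earth along its line of flight.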
[163] Like, we need to be, you know, really generous to even wild ideas because they're all thinking.
[164] They're all the communication of ideas, and what else can it mean to be a human?
[165] Yeah, and I think I'm a huge fan of the flat earth "theory," quote unquote, in the sense that it feels harmless to me to explore some of the questions of what it means to believe something, what it means to explore the edge of science, and so on.
[166] Because, to me, nobody gets hurt whether the earth is flat or round, not literally, but I mean intellectually, when we're just having a conversation.
[167] That said, again on elitism, I find that scientists roll their eyes way too fast on flat earth. The kind of dismissal that I see of even this notion, they haven't, like, sat down and said, what are the arguments being proposed, and this is why these arguments are incorrect. That's something scientists should always do, even for ideas that seem the most ridiculous.
[168] So I like this.
[169] This is almost my test when I ask people what they think about flat earth theory to see how quickly they roll their eyes.
[170] Well, yeah, I mean, let me go on record and say that the earth is not flat.
[171] It is a three-dimensional spheroid.
[172] However, I don't know that, and it has not been proven.
[173] Science doesn't prove anything.
[174] It just reduces uncertainty.
[175] Yes.
[176] Could the earth actually be flat?
[177] Extremely unlikely.
[178] Yes.
[179] Extremely unlikely.
[180] And so it is a ridiculous notion if we care about how probable and certain our ideas might be.
[181] But I think it's incredibly important to talk about science in that way and to not resort to, well, it's true.
[182] It's true in the same way that a mathematical theorem is true.
[183] And I think we're kind of like being pretty pedantic about defining this stuff.
[184] But like, sure, I could take a rocket ship out and I could orbit Earth and look at it and it would look like a ball, right?
[185] But I still can't prove that I'm not living in a simulation, that I'm not a brain in a vat, that this isn't all an elaborate ruse created by some technologically advanced extraterrestrial civilization.
[186] Right.
[187] So there's always some doubt.
[188] and that's fine.
[189] That's exciting.
[190] And I think that kind of doubt, practically speaking, is useful when you start talking about quantum mechanics or string theory.
[191] Sort of, it helps.
[192] To me, that kind of doubt adds a little spice to the thinking process of scientists.
[193] So, I mean, just as a thought experiment, your video kind of says, okay, say the Earth is flat, what would the forces feel like to a human walking about this flat earth?
[194] That's a really nice thought experiment to think about.
[195] Right, because what's really nice about it is that it's a funny thought experiment, but you actually wind up accidentally learning a whole lot about gravity and about relativity and geometry.
[196] And I think that's really the goal of what I'm doing.
[197] I'm not trying to convince people that the Earth is round.
[198] I feel like you either believe that it is or you don't.
[199] And like, how can I change that?
[200] What I can do is change how you think.
[201] and how you are introduced to important concepts like, well, how does gravity operate?
[202] Oh, it's all about the center of mass of an object.
[203] So right, on a sphere, we're all pulled towards the middle, essentially the centroid, geometrically.
[204] But on a disk, ooh, you're going to be pulled at a weird angle if you're out near the edge.
[205] And that stuff's fascinating.
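That pulled-at-a-weird-angle claim is easy to check numerically. The sketch below is my own toy model, not anything from the video: it sums Newtonian point-mass contributions over a uniform thin disk (unit radius, unit density, G = 1) and compares the direction of the net pull for an observer standing near the center versus near the rim.

```python
import numpy as np

R = 1.0  # disk radius; units are arbitrary, only the pull *direction* matters

# Polar grid of mass elements on a thin uniform disk (area element r*dr*dtheta)
r = np.linspace(0.0, R, 600)[1:]                 # skip the r = 0 point
theta = np.linspace(0.0, 2 * np.pi, 600, endpoint=False)
rr, tt = np.meshgrid(r, theta)
dm = rr * (r[1] - r[0]) * (theta[1] - theta[0])  # mass of each element
src = np.stack([rr * np.cos(tt), rr * np.sin(tt), np.zeros_like(rr)], axis=-1)

def accel(obs):
    """Net Newtonian acceleration at point `obs` from all disk elements (G = 1)."""
    d = src - obs
    dist = np.linalg.norm(d, axis=-1)
    return np.sum(dm[..., None] * d / dist[..., None] ** 3, axis=(0, 1))

def tilt_deg(a):
    """Angle between the net pull and straight down (the -z axis)."""
    return np.degrees(np.arctan2(np.hypot(a[0], a[1]), abs(a[2])))

# Observers standing slightly above the disk surface
a_center = accel(np.array([0.05, 0.0, 0.02]))  # near the middle
a_rim = accel(np.array([0.90, 0.0, 0.02]))     # near the edge

print(f"tilt near center: {tilt_deg(a_center):.1f} deg, near rim: {tilt_deg(a_rim):.1f} deg")
```

Near the center the pull is essentially straight down; near the rim it tilts noticeably back toward the middle of the disk, which is why walking outward on a flat Earth would feel like climbing an ever-steeper hill.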
[206] Yeah, and to me, that particular video opened my eyes even more to what gravity is.
[207] It's just a really nice visualization tool, because you always imagine gravity with spheres, with masses that are spheres.
[208] Yeah.
[209] And imagining gravity on masses that are not spherical, some other shape, here a plate, a flat object, is really interesting.
[210] It makes you really visualize, in a new way, the force of gravity.
[211] Yeah, even if a disk the size of Earth would be impossible.
[212] I think anything larger than like the moon basically needs to be a sphere because gravity will round it out.
[213] So you can't have a teacup the size of Jupiter, right?
[214] There's a great book about a teacup in the universe that I highly recommend.
[215] I don't remember the author.
[216] I forget her name, but it's a wonderful book.
[217] So look it up.
[218] I think it's called Teacup in the Universe.
[219] Just to linger on this point briefly, your videos are generally super popular, people love them, right?
[220] If you look at the sort of number of likes versus dislikes, this measure of YouTube, it's incredible, and I love them as well. But this particular flat earth video has more dislikes than usual.
[221] On that topic in general, what's your sense of how big the community is, not just of people who believe in flat earth, but sort of the anti-scientific community that naturally distrusts scientists in a way that's not open-minded, like really just distrusts scientists, as if they're bought by some kind of mechanism of some kind of bigger system that's trying to manipulate human beings?
[222] What's your sense of the size of that community?
[223] You're one of the sort of great educators in the world, that educates people on the exciting power of science.
[224] So you're kind of up against this community.
[225] What's your sense of it?
[226] I really have no idea.
[227] I haven't looked at the likes and dislikes on the Flat Earth video.
[228] And so I would wonder if it has a greater percentage of dislikes than usual, is that because of people disliking it because they, you know, think that it's a video about earth being flat and they find that ridiculous and they dislike it without even really watching much.
[229] Do they wish that I was more dismissive of flat earth theories?
[230] I know there are a lot of response videos that kind of go through the episode and are pro flat earth, but I don't know if there's a larger community of unorthodox thinkers today than there has been in the past.
[231] Okay.
[232] And I just want to not lose them.
[233] I want them to keep listening and thinking.
[234] And by calling them all, you know, idiots or something, like, that does no good.
[235] Because how idiotic are they really?
[236] I mean, the earth isn't a sphere at all.
[237] Like, we know that it's an oblate spheroid.
[238] And that in and of itself is really interesting.
[239] And I investigated that in Which Way Is Down, where I'm like, really, down does not point towards the center of the earth; it points in different directions depending on what's underneath you, what's above you, and what's around you.
[240] The whole universe is tugging on me. And then you also show that gravity is non-uniform across the globe.
[241] Like, there's this, I guess, thought experiment: if you built a bridge all the way across the earth and then just knocked out its pillars, what would happen?
[242] Yeah.
[243] And you describe how it would be a very chaotic, unstable thing that's happening, because gravity is non-uniform throughout the Earth.
[244] Yeah.
[245] In small spaces, like the ones we work in, we can essentially assume that gravity is uniform.
[246] Right.
[247] But it's not.
[248] It is weaker the further you are from the Earth.
[249] And it's also radially pointed towards the middle of the Earth.
[250] So a really large object will feel tidal forces because of that non-uniformness.
[251] And we can take advantage of that with satellites, right?
[252] Gravitationally induced torque.
[253] It's a great way to align your satellite without having to use fuel or any kind of, you know, engine.
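As a back-of-the-envelope sketch of that effect (my illustrative numbers, not anything from the conversation): Earth's pull falls off as 1/r², so the near end of a satellite is tugged slightly harder than the far end, and that tiny differential is what gravity-gradient stabilization exploits.

```python
MU = 3.986e14   # Earth's standard gravitational parameter, m^3/s^2
r = 6.771e6     # orbit radius for ~400 km altitude, m
L = 10.0        # hypothetical satellite length, m

# Pull on the Earth-facing end vs. the outward-facing end
g_near = MU / (r - L / 2) ** 2
g_far = MU / (r + L / 2) ** 2
tidal = g_near - g_far  # differential (tidal) acceleration across the body

# Linearizing g(r) = MU/r^2 gives dg ≈ 2*MU*L/r^3, a handy approximation
tidal_approx = 2 * MU * L / r ** 3
print(f"tidal difference: {tidal:.2e} m/s^2 (approx {tidal_approx:.2e})")
```

About 2.6e-5 m/s² across 10 m: tiny, but acting constantly and for free, it torques an elongated satellite until its long axis settles pointing at Earth.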
[254] So let's jump back to it.
[255] Artificial Intelligence.
[256] What's your thought of the state of where we are at currently with artificial intelligence?
[257] And what do you think it takes to build human level or superhuman level intelligence?
[258] I don't know what intelligence means.
[259] That's my biggest question at the moment.
[260] And I think it's because my instinct is always to go, well, what are the foundations here of our discussion?
[261] What does it mean to be intelligent?
[262] How do we measure the intelligence of an artificial machine or a program or something?
[263] Can we say that humans are intelligent?
[264] Because there's also a fascinating field of how do you measure human intelligence.
[265] Of course.
[266] But if we just take that for granted, saying that whatever this fuzzy intelligence, thing we're talking about, humans kind of have it.
[267] What would be a good test for you?
[268] So Turing developed a test that's natural language conversation.
[269] Would that impress you, a chat bot that you'd want to hang out and have a beer with, you know, for a bunch of hours or have dinner plans with?
[270] Is that a good test, natural language conversation?
[271] Is that something else that would impress you?
[272] Or is that also too difficult to think about?
[273] Oh, yeah.
[274] I'm pretty much impressed by everything.
[275] I think that if...
[276] A Roomba?
[277] If there was a chatbot that was, like, incredibly, I don't know, really had a personality, and if it beat the Turing test, right, like, if I'm unable to tell that it's not another person, but then I was shown a bunch of wires and mechanical components and told, that's actually what you're talking to, I don't know if I would feel that guilty destroying it. I would feel guilty because clearly it's well made and it's a really cool thing.
[278] It's like destroying a really cool car or something.
[279] But I would not feel like I was a murderer.
[280] So yeah, at what point would I start to feel that way?
[281] And this is such a subjective psychological question.
[282] If you give it movement, or if you have it act as though, or perhaps really feel, pain as I destroy it, and scream and resist, then I'd feel that way.
[283] Yeah, it's beautifully put.
[284] And let's just say act like it's in pain.
[285] So if you just have a robot that doesn't scream, just, like, moans in pain if you kick it.
[286] Yeah.
[287] That immediately just puts it in a class where we humans anthropomorphize it.
[288] It almost immediately becomes human.
[289] Yeah.
[290] But that's a psychology question as opposed to sort of a physics question.
[291] Right.
[292] I think that's a really good instinct to have.
[293] You know, if the robot
[294] screams and moans, even if you don't believe that it has the mental experience, the qualia, of pain and suffering, I think it's still a good instinct to say, you know what, I'd rather not hurt it.
[295] The problem is that instinct can get us in trouble because then robots can manipulate that.
[296] And, you know, there's different kinds of robots.
[297] There are robots like the Facebook and YouTube algorithms that recommend videos, and they can manipulate us in the same kind of way.
[298] Well, let me ask you just to stick on artificial intelligence for a second.
[299] Do you have worries about existential threats from AI or existential threats from other technologies like nuclear weapons that could potentially destroy life on Earth or damage it to a very significant degree?
[300] Yeah, of course I do, especially the weapons that we create.
[301] There's all kinds of famous ways to think about this.
[302] And one is that, wow, what if we don't see advanced alien civilizations, because of the danger of technology.
[303] What if we reach a point?
[304] And I think there's a channel, Thoughty2.
[305] Geez, I wish I remembered the name of the channel.
[306] But he delves into this kind of limit of maybe once you discover radioactivity and its power, you've reached this important hurdle.
[307] And the reason the skies are so empty is that no one's ever, like, managed to survive as a civilization.
[308] once they have that destructive power.
[309] And when it comes to AI, I'm not really very worried because I think that there are plenty of other people that are already worried enough.
[310] And oftentimes these worries are just, they just get in the way of progress.
[311] And there are questions that we should address later.
[312] And, you know, I think I talk about this in my interview with the self-driving autonomous vehicle guy, I think it was a bonus scene from the trolley problem episode. And I'm like, wow, what should a car do if this really weird, contrived scenario happens where it has to, like, swerve and save the driver but kill a kid? And he's like, well, what would a human do? And if we resist technological progress because we're worried about all of these little issues, then it gets in the way. And we shouldn't avoid those problems, but we shouldn't allow them to be stumbling blocks to advancement.
[313] So, you know, folks like Sam Harris or Elon Musk are saying that we're not worried enough.
[314] So worry should not paralyze technological progress, but we're sort of marching, technology is marching forward, without the key scientists, the ones developing the technology, worrying about it overnight having some effects that would be very detrimental to society.
[315] So to push back on your thought of the idea that there's enough people worrying about it, Elon Musk says there's not enough people worrying about it.
[316] That's the kind of balance: it's like folks who are really focused on nuclear deterrence saying there's not enough people worried about nuclear deterrence, right?
[317] So it's an interesting question of what is a good threshold of people to worry about these things.
[318] And if it's too many people that are worried, you're right.
[319] It'll be like the press will overreport on it, and there'll be a halt in technological progress.
[320] If not enough, then we can march straight ahead into that abyss that human beings might be destined for with the progress of technology.
[321] Yeah, I don't know what the right balance is of how many people should be worried and how, worried should they be, but we're always worried about new technology.
[322] We know that Plato was worried about the written word.
[323] It's like, we shouldn't teach people to write because then they won't use their minds to remember things.
[324] There have been concerns over technology and its advancement since the beginning of recorded history.
[325] And so, you know, I think, however, these conversations are really important to have, because again, we learn a lot about ourselves.
[326] If we're really scared of some kind of AI, like coming into being that is conscious or whatever and can self -replicate.
[327] We already do that every day.
[328] It's called humans being born.
[329] They're not artificial.
[330] They're humans, but they're intelligent.
[331] And I don't want to live in a world where we're worried about babies being born because what if they become evil?
[332] Right.
[333] What if they become mean people?
[334] What if they're thieves?
[335] Maybe we should just like, what, not have babies born?
[336] Like maybe we shouldn't create AI?
[337] It's like, you know, we will want to have safeguards in place in the same way that we know, look, a kid could be born that becomes some kind of evil person, but we have laws, right?
And it's possible, with advances in genetics in general, to be able to, you know, it's a scary thought to say, this child of mine, if born, would have an 83 percent chance of being a psychopath. Right, like being able to tell if it's something genetic, if there's some sort of... And what to do with that information is a difficult ethical question. Yeah, I'd like to find an answer that isn't, well, let's not have them live. You know, I'd like to find an answer that is, well, all human life is worthy, and if you have an 83% chance of becoming a psychopath, well, you still deserve dignity.
[340] Yeah.
[341] And you still deserve to be treated well.
[342] You still have rights.
[343] At least at this part of the world, at least in America, there's a respect for individual life in that way.
[344] That's, well, to me, but again, I'm in this bubble, is a beautiful thing.
But there's other cultures where individual human life is not that important, where the society... So I was born in the Soviet Union, where the strength of the nation and the society together is more important than any one particular individual.
So it's also an interesting notion, the stories we tell ourselves.
I like the one where individuals matter, but it's unclear that that's what the future holds.
[348] Well, yeah, and I mean, let me even throw this out.
[349] Like, what is artificial intelligence?
[350] How can it be artificial?
[351] I really think that we get pretty obsessed and stuck on the idea that there is something that is a wild human, a pure human organism without technology.
[352] But I don't think that's a real thing.
[353] I think that humans and human technology are one organism.
[354] Look at my glasses.
[355] Okay, if an alien came down and saw me, would they necessarily know that this is an invention that I don't grow these organically from my body?
[356] They wouldn't know that right away.
And the written word and spoons and cups, these are all pieces of technology.
[358] We are not alone as an organism.
[359] And so the technology we create, whether it be video games or artificial intelligence that can self -replicate and hate us, it's actually all the same organism.
[360] When you're in a car, where do you end and the car begin?
[361] It seems like a really easy question to answer.
[362] But the more you think about it, the more you realize, wow, we are in this symbiotic relationship with our inventions.
[363] And there are plenty of people who are worried about it, and there should be.
[364] But it's inevitable.
And I think that even just us thinking of ourselves as individual intelligences may be a silly notion, because, you know, it's much better to think of the entirety of human civilization,
all living organisms on Earth, as a single living organism.
[367] Right.
[368] As a single intelligent creature because you're right, everything's intertwined.
[369] Everything is deeply connected.
[370] So we mentioned Elon Musk, so you're a curious lover of science.
What do you think of the efforts that Elon Musk is making with space exploration, with electric vehicles, with Autopilot, sort of getting into the space of autonomous vehicles, with boring under LA, and with Neuralink, trying to build brain-machine interfaces to communicate between machines and human brains?
[372] Well, it's really inspiring.
[373] I mean, look at the fandom that he's amassed.
[374] It's not common for someone like that to have such a following.
[375] Engineering nerd.
[376] Yeah, so it's really exciting.
[377] But I also think that a lot of responsibility comes with that kind of power.
[378] So, like, if I met him, I would love to hear how he feels about the responsibility he has.
[379] when there are people who are such a fan of your ideas and your dreams and share them so closely with you, you have a lot of power.
[380] And he didn't always have that, you know?
[381] He wasn't born as Elon Musk.
[382] Well, he was.
[383] But, well, he was named that later.
[384] But the point is that I want to know the psychology of becoming a figure like him.
Well, I don't even know how to phrase the question right, but it's a question about what you do when your following, your fans, become so, you know, large that it's almost bigger than you.
[386] And how do you responsibly manage that?
[387] And maybe it doesn't worry him at all.
[388] And that's fine, too.
[389] But I'd be really curious.
And I think there are a lot of people that go through this when they realize, whoa, there are a lot of eyes on me. There are a lot of people who really take what I say very earnestly and take it to heart and will defend me, and that can be dangerous. And you have to be responsible with it, both in terms of impact on society and psychologically for the individual, just the burden psychologically on Elon. Yeah. How does he think about that part of his persona?
Well, let me throw that right back at you, because in some ways, you're just a funny guy with a curiosity who's gotten a humongous following.
[392] How do you psychologically deal with the responsibility?
You have a reach in many ways bigger than Elon Musk.
What is the burden that you feel in educating,
being one of the biggest educators in the world, where everybody's listening to you, and actually most of the world that uses YouTube for educational material trusts you as a source of good, strong scientific thinking? It's a burden, and I try to approach it with a lot of humility and sharing. Like, I'm not out there doing a lot of scientific experiments; I am sharing the work of real scientists, and I'm celebrating their work and the way that they think and the power of curiosity.
[396] But I want to make it clear at all times that, like, look, you know, we don't know all the answers, and I don't think we're ever going to reach a point where we're like, wow, and there you go.
[397] That's the universe.
[398] It's this equation.
[399] You plug in some conditions or whatever, and you do the math, and you know what's going to happen tomorrow.
[400] I don't think we're ever going to reach that point, but I think that there is a tendency to sometimes believe in science and become elitist and become, I don't know, hard, when in reality it should humble you and make you feel smaller.
[401] I think there's something very beautiful about feeling very, very small and very weak and to feel that you need other people.
So I try to keep that in mind and say, look, thanks for watching, but Vsauce is not me. I'm not Vsauce, you are.
[403] When I start the episodes, I say, hey, Vsauce, Michael here.
Vsauce and Michael are actually different things in my mind.
[405] I don't know if that's always clear, but yeah, I have to approach it that way because it's not about me. Yeah, so it's not even, you're not feeling responsibility.
[406] You're just sort of plugging into this big thing that is scientific exploration of our reality.
[407] And you're a voice that represents a bunch, but you're just plugging into this big Vsauce ball that others, millions of others are plugged into.
[408] Yeah, and I'm just hoping to encourage curiosity and, you know, responsible thinking and an embracement of doubt and being okay with that.
So next week I'm talking to Christos Goodrow. I'm not sure if you're familiar with who he is, but he's
the VP of Engineering, head of the quote-unquote YouTube algorithm, or search and discovery.
[411] So let me ask, first, high level, do you have a question for him that if you can get an honest answer that you would ask?
But more generally, how do you think about the YouTube algorithm that drives some of the design decisions you make as you ask and answer some of the questions you do? How would you improve this algorithm, in your mind, in general?
[414] So what would you ask him?
[415] And outside of that, how would you like to see the algorithm improve?
[416] Well, I think of the algorithm as a mirror.
[417] It reflects what people put in.
[418] And we don't always like what we see in that mirror.
A mirror to the individual, or a mirror to the society?
[420] Both.
[421] In the aggregate, it's reflecting back what people on average want to watch.
[422] And when you see things being recommended to you, it's reflecting back what it thinks you want to see.
[423] And specifically, I would guess that it's not just what you want to see, but what you will click on and what you will watch some of and stay on YouTube because of.
[424] I don't think that, this is all me guessing, but I don't think that YouTube cares if you only watch like a second of a video, as long as the next thing you do is open another video.
[425] If you close the app or close the site, that's a problem for them because they're not a subscription platform.
[426] They're not like, look, you're giving us 20 bucks a month no matter what, so who cares?
[427] They need you to watch and spend time there and see ads.
So one of the things I'm curious about is whether they consider your longer-term development as a human being, which I think ultimately will make you feel better about using YouTube in the long term and allow you to stick with it for longer.
[429] Because even if you feed the dopamine rush in the short term and you keep clicking on cat videos, eventually you sort of wake up like from a drug and say, I need to quit this.
[430] So I wonder how much they're trying to optimize for the long term.
Because when I look at, you know, your videos, sort of, no offense, but they're not exactly the most clickable.
And yet I watch the entire thing and I feel like a better human after I've watched it, right?
So they're not just optimizing for clickability.
At least that's what I hope. So my thought is, how do you think of it?
[435] And does it affect your own content?
[436] Like how deep you go, how profound you explore the directions and so on.
I've been really lucky in that I don't worry too much about the algorithm.
[438] I mean, look at my thumbnails.
[439] I don't really go too wild with them.
[440] And with Mindfield, where I'm in partnership with YouTube on the thumbnails, I'm often like, let's pull this back.
[441] Let's be mysterious.
[442] Usually I'm just trying to do what everyone else is not doing.
So if everyone's doing crazy Photoshopped kinds of thumbnails,
I'm like, what if the thumbnail is just a line?
[445] And what if the title is just a word?
[446] And I kind of feel like all of the Vsauce channels have cultivated an audience that expects that.
[447] And so they would rather Jake make a video that's just called stains than one called, I explored stains, shocking.
[448] But there are other audiences out there that want that.
[449] And I think most people kind of want what you see the algorithm favoring, which is mainstream traditional celebrity and news kind of information.
[450] I mean, that's what makes YouTube really different than other streaming platforms.
[451] No one's like, what's going on in the world?
[452] I'll open up Netflix to find out.
[453] But you do open up Twitter to find that out.
[454] You open up Facebook.
[455] You can open up YouTube because you'll see that the trending videos are like what happened amongst the traditional mainstream people in different industries.
[456] That's what's being shown.
[457] And it's not necessarily YouTube saying, we want that to be what you see.
[458] It's that that's what people click on.
When they see Ariana Grande, you know, reads a love letter from, like, her high school sweetheart, they're like, I want to see that.
And when they see a video from me that's got some lines and math and it's called Laws and Causes, they're like, well, I mean, I'm just on the bus.
[461] Like, I don't have time to dive into a whole lesson.
[462] So, you know, before you get super mad at YouTube, you should say, really, they're just reflecting back human behavior.
Is there something you would improve about the algorithm, knowing, of course, that as far as we're concerned, it's a black box; we don't know how it works?
[464] Right.
[465] And I don't think that even anyone at YouTube really knows what it's doing.
[466] They know what they've tweaked, but then it learns.
[467] I think that it learns and it decides how to behave.
[468] And sometimes the YouTube employees are left going, I don't know.
[469] Maybe we should change the value of how much it, you know, worries about watch time.
[470] And maybe it should worry more about something.
[471] I don't know.
[472] But, I mean, I would like to see, I don't know what they're doing and not doing.
[473] Well, is there a conversation that you think they should be having, just internally?
[474] whether they're having it or not, is there something, should they be thinking about the long -term future?
Should they be thinking about educational content, whether that's educating about what just happened in the world today, news, or educational content like what you're providing, which is asking big, sort of timeless questions about the way the world works?
[476] Well, it's interesting.
[477] What should they think about?
[478] Because it's called YouTube, not our tube.
[479] And that's why I think they have so many phenomenal educational creators.
You don't have shows like 3Blue1Brown or Physics Girl or Looking Glass Universe or Up and Atom or Brain Scoop or, I mean, I could go on and on.
[481] They aren't on Amazon Prime and Netflix and they don't have commissioned shows from those platforms.
It's all organically happening because there are people out there that want to share their passion for learning, that want to share
their curiosity. And YouTube could, you know, promote those kinds of shows more, but, like, first of all, they probably wouldn't get as many clicks, and YouTube needs to make sure that the average user is always clicking and staying on the site.
[484] They could still promote it more for the good of society, but then we're making some really weird claims about what's good for society because I think that cat videos are also an incredibly important part of what it means to be a human.
I mentioned this quote before from Unamuno about, look, I've seen a cat, like, estimate distances and calculate a jump, you know, more often than I've seen a cat cry.
[486] And so things that play with our emotions and make us feel things can be cheesy and can feel cheap, but like, man, that's very human.
[487] And so even the dumbest vlog is still so important that I don't think I have a better claim to take its spot than it has to have that spot.
[488] It puts a mirror to us.
[489] The beautiful parts, the ugly parts, the shallow parts, the deep parts.
[490] You're right.
[491] What I would like to see is, you know, I miss the days when engaging with content on YouTube helped push it into my subscribers' timelines.
[492] It used to be that when I liked a video, say, from Veritasium, it would show up in the feed on the front page of the app or the website of my subscribers.
and I knew that if I liked a video, I could send it 100,000 views or more.
[494] That no longer is true.
[495] But I think that was a good user experience.
[496] When I subscribe to someone, when I'm following them, I want to see more of what they like.
I want them to also curate the feed for me. And I think that Twitter and Facebook are doing that, in some ways that are also kind of annoying, but I would like that to happen more.
[498] And I think we would see communities being stronger on YouTube if it was that way instead of YouTube going, Well, technically, Michael liked this Veritasium video, but people are way more likely to click on Carpool Karaoke.
[499] So I don't even care who they are.
[500] Just give them that.
[501] Not saying anything against Carpool Karaoke.
[502] That is an extremely important part of our society, what it means to be a human on earth.
You know, you might say it sucks, but a lot of people would disagree with you, and they should be able to see as much of that as they want.
[504] And I think even people who don't think they like it should still be really aware of it.
[505] It's such an important thing and such an influential thing.
But yeah, I just wish that, like, when I discover new channels and subscribe to them,
my subscribers would find out about that, because especially in the education community, a rising tide floats all boats.
If you watch a video from Numberphile, you're just more likely to want to watch an episode from me, whether it be on Vsauce1 or Ding.
[509] It's not competitive in the way that traditional TV was, where it's like, well, if you tune into that show, it means you're not watching mine because they both air at the same time.
[510] So helping each other out through collaborations takes a lot of work.
[511] But just through engaging, commenting on their videos, liking their videos, subscribing to them, whatever, that I would love to see become easier and more powerful.
[512] So a quick and impossibly deep question, last question, about mortality.
[513] You've spoken about death as an interesting topic.
[514] Do you think about your own mortality?
[515] Yeah, every day.
[516] It's really scary.
[517] So what do you think is the meaning of life that mortality makes very explicit?
[518] So why are you here on earth, Michael?
[519] What's the point of this whole thing?
[520] What, you know, what does mortality in the context of the whole universe make you realize about yourself?
[521] Just you, Michael Stevens.
[522] Well, it makes me realize that I am destined to become a notion.
[523] I'm destined to become a memory.
[524] And we can extend life.
[525] I think there's really exciting things being done to extend life.
[526] But we still don't know how to, like, you know, protect you from some accident that could happen, you know, some unforeseen thing.
[527] Maybe we could, like, save my connectome and, like, recreate my consciousness digitally.
[528] but even that could be lost if it's stored on a physical medium or something.
[529] So basically, I just think that embracing and realizing how cool it is that someday I will just be an idea.
[530] And there won't be a Michael anymore that can be like, no, that's not what I meant.
[531] It'll just be what people, like, they have to guess what I meant.
And they'll remember me, and how I live on as that memory will maybe not even be who I wanted to be. But there's something powerful about that, and there's something powerful about letting future people run the show themselves. I think I'm glad to get out of their way at some point and say, all right, it's your world now. So you, the physical entity Michael, will have ripple effects in the space of ideas that far outlive you. Yeah. In ways that you can't control, but it's nevertheless fascinating to think. I mean, especially with you, you can imagine an alien species, when they finally arrive and destroy all of us, would watch your videos to try to figure out what were the questions. But even if they didn't, you know, I still think that there will be ripples. Like, when I say memory, I don't specifically mean people remember my name and my birth date, and, like, there's a photo of me on Wikipedia. Like, all that can be lost. But I still would hope that people ask questions and teach concepts in some of the ways that I have found useful and satisfying, even if they don't know that I was the one who tried to popularize them.
[533] That's fine.
[534] But if Earth was completely destroyed, like burnt to a crisp, everything on it today, what would, the universe wouldn't care.
[535] Like, Jupiter's not going to go, oh, no. And that could happen.
[536] So we do, however, have the power to, you know, launch things into space, try to extend how long our memory exists.
[537] And what I mean by that is, you know, we are recording things about the world and we're learning things and writing stories and all of this.
[538] And preserving that is truly what I think is the essence of being a human.
[539] We are autobiographers of the universe and we're really good at it.
[540] We're better than fossils.
[541] We're better than light spectrum.
[542] We're better than any of that.
[543] We collect much more detailed memories of what's happening, much better data.
[544] And so that should be our legacy.
[545] And I hope that that's kind of mine, too, in terms of people remembering something or me having some kind of effect.
[546] But even if I don't, you can't not have an effect.
[547] This is not me feeling like I hope that I have this powerful legacy.
[548] It's like no matter who you are, you will.
[549] But you also have to embrace the fact that that impact might look really small and that's okay.
[550] One of my favorite quotes is from Tessa the Durbervilles.
[551] And it's along the lines of the measure of your life depends on not your external displacement, but your subjective experience.
[552] If I am happy and those that I love are happy, can that be enough?
[553] Because if so, excellent.
[554] I think there's no better place to end it, Michael.
[555] Thank you so much.
[556] It was an honor to meet you.
Thanks for talking today.
[558] Thank you.
[559] It was a pleasure.
[560] Thanks for listening to this conversation with Michael Stevens.
[561] And thank you to our presenting sponsor, Cash App.
[562] Download it, use code Lex Podcast.
You'll get $10, and $10 will go to First, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future.
[564] If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or connect with me on Twitter.
[565] And now, let me leave you with some words of wisdom from Albert Einstein.
[566] The important thing is not to stop questioning.
[567] Curiosity has its own reason for existence.
One cannot help but be in awe when he contemplates the mysteries of eternity, of life, of the marvelous structure of reality.
[569] It is enough if one tries merely to comprehend a little of this mystery every day.
[570] Thank you for listening and hope to see you next time.