#185 – Sam Harris: Consciousness, Free Will, Psychedelics, AI, UFOs, and Meaning

Lex Fridman Podcast


Full Transcription:

[0] The following is a conversation with Sam Harris, one of the most influential and pioneering thinkers of our time.

[1] He's the host of the Making Sense podcast and the author of many seminal books on human nature and the human mind, including The End of Faith, The Moral Landscape, Lying, Free Will, and Waking Up.

[2] He also has a meditation app called Waking Up that I've been using to guide my own meditation.

[3] Quick mention of our sponsors: National Instruments, Belcampo, Athletic Greens, and Linode.

[4] Check them out in the description to support this podcast.

[5] As a side note, let me say that Sam has been an inspiration to me, as he has been for many, many people, first from his writing, then his early debates, maybe 13, 14 years ago on the subject of faith, his conversations with Christopher Hitchens, and since 2013, his podcast.

[6] I didn't always agree with all of his ideas, but I was always drawn to the care and depth of the way he explored those ideas.

[7] The calm and clarity amid the storm of difficult, at times controversial discourse.

[8] I really can't express in words how much it meant to me that he, Sam Harris, someone who I have listened to for many hundreds of hours, would write a kind email to me saying he enjoyed this podcast, and more that he thought I had a unique voice that added something to this world.

[9] Whether it's true or not, it made me feel special and truly grateful to be able to do this thing and motivated me to work my ass off to live up to those words.

[10] Meeting Sam and getting to talk with him was one of the most memorable moments of my life.

[11] As usual, I'll do a few minutes of ads now.

[12] I try to make these interesting, but I give you time stamps, so if you skip, please still check out the sponsors by clicking the links in the description.

[13] It's the best way to support this podcast.

[14] I'm fortunate to be very selective with the sponsors we take on, so hopefully, if you buy their stuff, you'll find value in it, just as I have. This show is sponsored by a new sponsor, very excited about this one: National Instruments, a company that has been helping engineers solve the world's toughest challenges for 40 years. Their motto is "Engineer Ambitiously." It doesn't get better than that. I'm a longtime fan of National Instruments. They're actually now called NI, like Kentucky Fried Chicken is called KFC.

[15] That's how you know you made it.

[16] That's like a badge of honor.

[17] National Instruments is now NI. I've used NI's LabVIEW software in graduate school for a bunch of projects in our lab.

[18] And also, if I remember correctly, before that, in programming Lego Mindstorms.

[19] I love Legos and I love robots.

[20] So when the two come together, that's heaven.

[21] So I definitely have a soft spot in my heart for this company.

[22] I really love their ni.com/perspectives

[23] website, which articulates, through a bunch of articles, the value of failure in the engineering process.

[24] As they describe, failure is an opportunity to learn something new, so test, validate, fail fast, and fail often.

[25] Again, that's the slogan: Engineer Ambitiously, with NI. And if you're interested, go learn more about them at ni.com/perspectives.

[26] There's a lot of interesting content on there to read, listen, and even watch.

[27] That's ni.com/perspectives.

[28] This show is also sponsored by Belcampo Farms, whose mission is to deliver meat you can feel good about: meat that is good for you, good for the animals, and good for the planet.

[29] Belcampo animals graze on open pastures and seasonal grasses, resulting in meat that is higher in nutrients and healthy fats.

[30] Plus, it is damn delicious.

[31] I'm actually going to visit Belcampo Farms next week. Man, time flies. Next week. And I'm actually going to get a chance to hang out with the founder, Anya Fernald. She's going to show me how to cook certain things and show me around the farm. We're going to actually probably do a podcast, probably do a podcast outside, if it's at all possible, which is going to be a new experience. She's a great chef, obviously knows a lot about ethical meat, and is also just a really interesting person. So I look forward to that experience. It's almost like a mini vacation.

[32] And if I'm not too lazy, I'm also going to go on a hike.

[33] It's in Northern California, Belcampo farms.

[34] It's incredible out there.

[35] Anyway, you can order Belcampo's sustainably raised meats to be delivered straight to your door, like I do, using code LEX at belcampo.com/lex for 20% off for first-time customers.

[36] That's code LEX at belcampo.com/lex.

[37] This show is also sponsored by my good old companion, Athletic Greens.

[38] The all-in-one daily drink to support better health and peak performance.

[39] It replaced a multivitamin for me and went far beyond that with 75 vitamins and minerals.

[40] It's the first thing I drink every day.

[41] I'm actually drinking two of them a day, except when I did the 72-hour fast, because there's just a little bit of calories in there.

[42] I decided to go as calorie-free as possible for the fast, so I stayed away from Athletic Greens, but that's probably really dumb,

[43] not getting any vitamins and minerals into my body.

[44] Probably very dumb.

[45] I did get the electrolytes, which was essential.

[46] But I'm a big believer of the kind of nutritional base that athletic greens provides.

[47] So I broke the fast with chicken breast and bone broth.

[48] And right after that, I drank Athletic Greens.

[49] And then I sat for like three, four hours and didn't eat anything else, even though I was still hungry.

[50] Man, I wanted to pig out so bad.

[51] And then, like a responsible adult, I only had about a thousand calories' worth of steak and then went to bed.

[52] So it was a good day.

[53] It was a great fast and amazing experience.

[54] Anyway, they give you a one-month supply of wild-caught omega-3 fish oil, which is another supplement I take.

[55] When you sign up at athleticgreens.com/lex. That's athleticgreens.com/lex for the drink and the fish oil.

[56] Trust me, it's awesome.

[57] This episode is also sponsored by Linode, Linux virtual machines. It's an awesome compute infrastructure that lets you develop, deploy, and scale whatever applications you build, faster and easier.

[58] This is both for small personal projects and huge systems.

[59] Lower costs than AWS, but more important to me is the simplicity and the quality of customer service, with real humans, 24/7/365.

[60] The complete opposite of the kind of customer service Facebook and Instagram provide, which is no humans: zero, zero, zero.

[61] I'm only half kidding, but I understand when you have a giant social network, it's really difficult to do good customer service and there's so many bots involved.

[62] There's so many complexities.

[63] I totally understand, but I believe it's probably possible to engineer customer service solutions at scale.

[64] If companies like Linode can do it, I think companies like Facebook should be able to do so as well.

[65] Come on, step up your game.

[66] Anyway, that's really valuable for compute infrastructure because when problems arise, you really want some people to help you.

[67] There are very few things in this life

[68] I love more than compute infrastructure that's maintained with exceptional competence and skill.

[69] This one runs on Linux.

[70] In fact, their slogan is: if it runs on Linux, it runs on Linode. Visit linode.com/lex and click on the "Create Free Account" button to get started with $100 in free credit.

[71] That's linode.com/lex, and get to experience my favorite Linux virtual machines.

[72] Speaking of machines, this is the Lex Fridman Podcast, and here is my conversation with Sam Harris.

[73] I've been enjoying meditating with the waking up app recently.

[74] It makes me think about the origins of cognition and consciousness, so let me ask, where do thoughts come from?

[75] Well, that's a very difficult question to answer.

[76] Subjectively, they appear to come from nowhere, right?

[77] I mean, it's just they come out of some kind of mystery that is at our backs subjectively, right?

[78] So, which is to say that if you pay attention to the nature of your mind in this moment, you realize that you don't know what you're going to think next, right?

[79] Now, you're not expecting to think

[80] something that seems like you authored it, right?

[81] You know, unless you're schizophrenic or you have some kind of thought disorder where your thoughts seem fundamentally foreign to you.

[82] They do have a kind of signature of selfhood associated with them.

[83] And people readily identify with them.

[84] They feel like what you are.

[85] I mean, this is the thing, this is the spell that gets broken with meditation.

[86] Our default state is to feel identical

[87] to the stream of thought, right?

[88] Which is fairly paradoxical because how could you, as a mind, as a self, you know, if there were such a thing as a self, how could you be identical to the next piece of language or the next image that just springs into conscious view?

[89] But, and meditation is ultimately about examining that point of view closely enough so as to unravel it and feel the freedom that's on the other side of that identification.

[90] But subjectively, thoughts simply emerge, right?

[91] And you don't think them before you think them, right?

[92] There's this first moment where, you know, just anyone listening to us or watching us now could perform this experiment for themselves.

[93] I mean, just imagine something or remember something.

[94] You know, just pick a memory, any memory, right?

[95] You've got a storehouse of memory; just promote one to consciousness. Did you pick that memory?

[96] I mean, let's say you remembered breakfast yesterday, or you remembered what you said to your spouse before leaving the house, or you remembered what you watched on Netflix last night, or you remembered something that happened to you when you were four years old, whatever it is, right?

[97] First, it wasn't there and then it appeared.

[98] And that is not a, I mean, I'm sure we'll

[99] get to the topic of free will, ultimately.

[100] That's not evidence of free will, right?

[101] Why are you so sure, by the way?

[102] It's very interesting.

[103] Well, through no free will of my own, yeah.

[104] Everything just appears, right?

[105] But what else could it do?

[106] And so that's the subjective side of it.

[107] Objectively, you know, we have every reason to believe that many of our thoughts, all of our thoughts, are at bottom, what some part of our brain is doing, neurophysiologically, I mean, that these are the products of some kind of neural computation and neural representation, when you're talking about memories.

[108] Is it possible to pull up the string of thoughts to try to get to its root, to try to dig in past the obvious surface subjective experience of, like, the thoughts pop out of nowhere?

[109] Is it possible to somehow get closer to the roots of where they come out of, from the firing of the cells? Or is it a useless pursuit to dig in that direction? Well, you can get closer to many, many subtle contents in consciousness, right? So you can notice things more and more clearly, and have a landscape of mind open up and become more differentiated and more interesting. And if you take psychedelics, you know, it opens up, depending on what you've taken and the dose, it opens in directions and to an extent that very few people would imagine possible but for having had those experiences.

[110] But this idea of your getting closer to something, to the datum of your mind, as if there's something of interest in there, or something that's more real, is ultimately undermined, because there's no place from which you're getting closer to it.

[111] There's no "you" that's part of that journey, right?

[112] Like, we tend to start out, you know, whether it's in meditation or in any kind of self-examination, or, you know, taking psychedelics, we start out with this default point of view of feeling like we're kind of the rider on the horse of consciousness, or the man in the boat going down the stream of consciousness, right?

[113] So we're differentiated from what we know cognitively, introspectively.

[114] But that feeling of being differentiated, that feeling of being a self that can strategically pay attention to some contents of consciousness is what it's like to be identified with some part of the stream of thought that's going uninspected, right?

[115] Like that, it's a false point of view.

[116] And when you see that and cut through that, then this sense of, this notion of going deeper kind of breaks apart, because really there is no depth. Ultimately, everything is right on the surface. There's no center to consciousness; there's just consciousness and its contents. And those contents can change vastly. Again, if you drop acid, you know, the contents change, but in some sense that doesn't represent a position of depth; the continuum of depth versus surface has broken apart. So you're taking as a starting point that there is a horse called consciousness and you're riding it, and the actual riding is very shallow.

[117] This is all surface.

[118] So let me ask about that horse.

[119] What's up with the horse?

[120] What is consciousness?

[121] From where does it emerge?

[122] How like fundamental is it to the physics of reality?

[123] How fundamental is it to what it means to be human?

[124] And I'm just asking for a friend, so that we can build it in our artificial intelligence systems.

[125] Yeah, well, it remains to be seen if we can, if we will build it purposefully or just by accident.

[126] It's a major ethical problem potentially.

[127] That, I mean, my concern here is that we may, in fact, build artificial intelligence that passes the Turing test, which we begin to treat not only as super intelligent because it obviously is and demonstrates that, but we begin to treat it as conscious because it will seem conscious.

[128] We will have built it to seem conscious.

[129] And unless we understand exactly how consciousness emerges from physics, we won't actually know that these systems are conscious, right?

[130] Well, they may say, you know, listen, you can't turn me off, because that's murder, right?

[131] And we'll be convinced by that dialogue, because, you know, just in the extreme case, who knows when we'll get there, but if we build something like perfectly humanoid robots that are more intelligent than we are, so we're basically in, you know, a Westworld-like situation, there's no way we're going to withhold an attribution of consciousness from those machines.

[132] They're just going to seem, they're just going to advertise their consciousness in every glance and every utterance, but we won't know, and we won't know in some deeper sense than we can be skeptical of the consciousness of other people.

[133] I mean, someone could roll that back and say, well, you know, I don't know that you're conscious or you don't know that I'm conscious.

[134] We're just passing the Turing test for one another, but that kind of solipsism isn't justified, you know, biologically; anything we understand about the mind biologically suggests that you and I are part of the same, you know, roll of the dice in terms of how intelligent and conscious systems emerged in the wetware of brains like ours, right?

[135] So it's not parsimonious for me to think that I might be the only conscious person, or even the only conscious primate; I would argue it's not parsimonious to withhold consciousness from other apes, and even other mammals, ultimately.

[136] And, you know, once you get beyond the mammals, then my intuitions are not really clear.

[137] The question of how it emerges is genuinely uncertain, and ultimately the question of whether it emerges is still uncertain.

[138] You can, you know, it's not fashionable to think this, but you can certainly argue that consciousness might be a fundamental principle of matter that doesn't emerge on the basis of information processing, even though everything else that we recognize about ourselves as minds almost certainly does emerge.

[139] You know, like an ability to process language; that clearly is a matter of information processing, because you can disrupt that process in ways that are just so clear.

[140] And the problem, the confound, with consciousness is that, yes, we can seem to interrupt consciousness.

[141] I mean, you can give someone general anesthesia, and then you wake them up and you ask them, well, what was that like?

[142] And they say, nothing, I don't remember anything.

[143] But it's hard to differentiate a mere failure of memory from a genuine interruption in consciousness, whereas it's not with, you know, interrupting speech; we know when we've done it.

[144] And it's, it's just obvious that, you know, you disrupt the right neural circuits and, you know, you've disrupted speech.

[145] So if you had to bet all your money on one camp or the other, would you err on the side of panpsychism, where consciousness is really fundamental to all of reality, or more on the other side, which is that it's a nice little side effect, a useful hack for us humans to survive?

[146] On that spectrum, where do you land when you think about consciousness, especially from an engineering perspective?

[147] I'm truly agnostic on this point.

[148] I mean, I think, you know, it's kind of in coin-toss mode for me. I don't know.

[149] And panpsychism is not so compelling to me. Again, it just seems unfalsifiable.

[150] I wouldn't know how the universe would be different if panpsychism were true.

[151] Just to remind people, panpsychism is this idea that consciousness may be pushed all the way down into the most fundamental constituents of matter.

[152] So there might be something that it's like to be an electron or, you know, a

[153] quark, but then you wouldn't expect anything to be different at the macro scale, or at least I wouldn't expect anything to be different.

[154] So it may be unfalsifiable.

[155] It just might be that reality is not something we're as in touch with as we think we are, and that if that is the base layer, to break it into mind and matter, as we've done ontologically, is to misconstrue it, right?

[156] I mean, there could be some kind of neutral monism at the bottom.

[157] And this, you know, this idea doesn't originate with me. This goes all the way back to Bertrand Russell and others, you know, 100-plus years ago.

[158] But I just feel like the concepts we're using to divide consciousness and matter may, in fact, be part of our problem, right?

[159] Where the rubber hits the road psychologically here are things like, well, what is death?

[160] Right?

[161] Like, any expectation that we survive death, or that any part of us survives death, that really seems to be many people's concern here.

[162] Well, I tend to believe, just as a small little tangent, like, I'm with Ernest Becker on this, that it's interesting to think about death and consciousness, which one is the chicken, which one is the egg, because it feels like death could be the very thing, like our knowledge of mortality could be the very thing that creates the consciousness.

[163] Yeah, well, then you're using consciousness differently than I am.

[164] I mean, so for me, consciousness is just the fact that the lights are on at all, that there's an experiential quality to anything.

[165] So much of the processing that's happening in our brains right now certainly seems to be happening in the dark, right?

[166] Like, it's not associated with this qualitative sense that there's something that it's like to be that part of the mind doing that mental thing.

[167] But for other parts, the lights are on, and we can talk about it, and whether we talk about it or not, we can feel directly that there's something that it's like to be us.

[168] Something seems to be happening, right?

[169] The seeming, in our case, is broken into vision and hearing and proprioception and taste and smell and thought and emotion. There are the contents of consciousness that we are familiar with and that we can have direct access to in any present moment when we're, quote, conscious. And even if we're confused about them, even if, you know, we're asleep and dreaming, and it's not a lucid dream, we're just totally confused about our circumstance, what you can't say is that we're confused about consciousness.

[170] Like, you can't say that consciousness itself might be an illusion because on this account, it just means that things seem anyway at all.

[171] I mean, even like if this, you know, it seems to me that I'm seeing a cup on the table.

[172] Now, I could be wrong about that.

[173] It could be a hologram.

[174] I could be asleep and dreaming.

[175] I could be hallucinating, but the seeming part isn't really up for grabs in terms of being an illusion.

[176] It's that something seems to be happening.

[177] And that seeming is the context in which every other thing we can notice about ourselves can be noticed.

[178] And it's also the context in which certain illusions can be cut through.

[179] We can be wrong about what it's like to be us.

[180] And we can, I'm not saying we're incorrigible with respect to our claims about the nature of our experience.

[181] But, for instance, many people feel like they have a self and they feel like it has free will.

[182] And I'm quite sure at this point that they're wrong about that and that you can cut through those experiences and then things seem a different way.

[183] So it's not that there aren't discoveries to be made there and assumptions to be

[184] overturned, but this kind of consciousness is something that, I would think, doesn't just come online when we get language.

[185] It doesn't just come online when we form a concept of death or the finiteness of life.

[186] It doesn't require a sense of self, right?

[187] It's prior to differentiating self and other.

[188] And I wouldn't even think it's necessarily limited to people.

[189] I do think probably any mammal has this.

[190] But certainly if you're going to, if you're going to presuppose that something about our brains is producing this, right?

[191] And that's a very safe assumption, even though we can't, even though you can argue the jury is still out to some degree, then it's very hard to draw a principled line between us and chimps, you know, or chimps and rats, even in the end, given the underlying neural similarities.

[192] So, and I don't know, you know, phylogenetically, I don't know how far back to push that.

[193] You know, so there are people who, you know, think single cells might be conscious, or that, you know, flies are certainly conscious.

[194] They've got something like 100,000 neurons in their brains.

[195] I mean, it's just, there's a lot going on, even in a fly, right?

[196] But I don't have intuitions about that.

[197] But it's not, in your sense, an illusion you can cut through. I mean, to push back, the alternative version could be that it is an illusion constructed by humans. I'm not sure I believe this, but part of me hopes it's true, because it makes it easier to engineer: that humans are able to contemplate their mortality, and that contemplation in itself creates consciousness, the rich lights-on experience.

[198] So the lights don't actually even turn on, in the way that you're describing, until after birth, in that construction.

[199] So do you think it's possible that that is the case, that it is a sort of construct of the way we deal, almost like a social tool to deal with the reality of the world, a social interaction with other humans?

[200] Or is it, because you're saying the complete opposite, which is that it's fundamental to single-cell organisms and trees

[201] and so on.

[202] Right.

[203] Well, yeah, so I don't know how far down to push it.

[204] I don't have intuitions that single cells are likely to be conscious, but they might be.

[205] And I just, again, it could be unfalsifiable.

[206] But as far as babies not being conscious, like you're not, you don't become conscious until you can recognize yourself in a mirror or you have a conversation or treat other people.

[207] First of all, babies treat other people as others far earlier than we have

[208] traditionally given them credit for, and they certainly do it before they have language, right? So it's got to precede language to some degree. And, I mean, you can interrogate this for yourself, because you can put yourself in various states that are rather obviously not linguistic. You know, meditation allows you to do this. You can certainly do it with psychedelics, where your capacity for language has just been obliterated, and yet you're all too conscious. In fact, I think you could make a stronger argument for things running the other way: that there's something about language and conceptual thought that is eliminative of conscious experience, that we're potentially much more conscious of sense data and everything else than we tend to be, and that we have trimmed it down based on how we have acquired concepts.

[209] So, like, when I walk into a room like this, I know I'm walking into a room, I have certain expectations of what is in a room.

[210] You know, I would be very surprised to see, you know, wild animals in here, or a waterfall. There are things I'm not expecting, and I can know I'm not expecting them, or I'm expecting their absence, because of my capacity to be surprised once I walk into a room and see a live gorilla or whatever.

[211] So there's structure there that we have put in place based on all of our conceptual learning and language learning.

[212] And it causes us not to, well, one of the things that happens when you take psychedelics and you just look, as though for the first time, at anything: it can become overloaded with meaning, and just the torrents of sense data that are coming in, in even the most ordinary circumstances, can become overwhelming for people.

[213] And that tends to just obliterate one's capacity to capture any of it linguistically.

[214] And as you're coming down, right, have you done psychedelics?

I haven't ever done acid. Not acid; mushrooms, and that's it. And also edibles, but there are some psychedelic properties to them. But yeah, mushrooms, several times, and I always had an incredible experience. Exactly the kind of experience you're referring to, which is, if it's true that language constrains our experience, it felt like I was removing some of the constraints.

[216] Right.

[217] Because even just the most basic things were beautiful in the way that I wasn't able to appreciate previously, like trees and nature and so on.

[218] Yeah.

[219] And the experience of coming down is an experience of encountering the futility of capturing what you just saw a moment ago in words, right?

[220] Like, especially if any part of your self-concept and your ego program is to be able to capture things in words.

[221] I mean, if you're a writer or a poet or a scientist, or someone who wants to just encapsulate the profundity of what just happened, the total fatuousness of that enterprise, when you really have taken, you know, a whopping dose of psychedelics and you begin to even gesture at

[222] describing it to yourself, you know, so that you could describe it to others.

[223] It's just, it's like trying to, you know, thread a needle using your elbows.

[224] I mean, it's like you're trying something that can't be done; it's like the mere gesture proves its impossibility.

[225] And so, yeah, for me, that suggests, just empirically, on the first-person side, that it's possible to put yourself in a condition where it's clearly not about language structuring your experience, and you're having much more experience than you tend to. So language is primary for some things, certainly for certain kinds of concepts and certain kinds of semantic understandings of the world, but there's clearly more to mind than the conversation we're having with ourselves or that we can have with others.

[226] Can we go to that world of psychedelics for a bit?

[227] Sure.

[228] What do you think? So Joe Rogan, apparently, and many others, meet elves when they're on DMT; a lot of people report these kinds of creatures that they see.

[229] And again, it's probably the failure of language to describe that experience.

[230] But DMT is an interesting one.

[231] There's a, as you're aware, there's a bunch of studies going on on psychedelics currently, MDMA, psilocybin, at Johns Hopkins and a bunch of other places.

[232] But DMT, they all speak of as like some extra super level of a psychedelic.

[233] Yeah, do you have a sense of where it is our mind goes on psychedelics?

[234] but on DMT especially?

[235] Well, unfortunately, I haven't taken DMT.

[236] Unfortunately or fortunately?

[237] Unfortunately.

[238] Although I presume it's in my body as it is in everyone's brain and many, many plants, apparently.

[239] But I've wanted to take it.

[240] I haven't had an opportunity that presented itself where it was obviously the right thing for me to be doing.

[241] But for those who don't know, DMT is often touted as the most

[242] intense psychedelic and also the shortest-acting.

[243] You smoke it, and it's basically a 10-minute experience, or a three-minute experience within like a 10-minute window, and you're really down after 10 minutes or so.

[244] And Terence McKenna was a big proponent of DMT.

[245] That was the center of the bullseye for him psychedelically, apparently.

[246] And it is characterized, it seems, for many people, by this phenomenon which is unlike virtually any other psychedelic experience: it's not just your perception being broadened or changed; it's you, according to Terence McKenna, feeling fairly unchanged but catapulted into a different circumstance. You have been shot elsewhere and find yourself in relationship to other entities of some kind.

[247] So the place is populated with things that seem not to be your mind.

[248] So it does feel like travel to another place because you are unchanged yourself.

[249] Again, I just have this on the authority of the people who have described their experience, but it sounds like it's pretty common.

[250] It sounds like it's pretty common for people not to have the full experience because it's apparently pretty unpleasant to smoke.

[251] So it's like getting enough on board in order to get shot out of the cannon and land among what McKenna called the self-transforming machine elves, that appeared to him like jeweled, you know, Fabergé-egg-like, self-dribbling basketballs that were handing him completely uninterpretable reams of profound knowledge.

[252] It's an experience I haven't had, so I just have to accept that people have had it.

[254] I would just point out that our minds are clearly capable of producing apparent others on demand that are totally compelling to us, right?

[255] There's no limit to our ability to do that as anyone who's ever remembered a dream can attest.

[256] I mean, every night we go to sleep.

[257] Some of us don't remember dreams very often, but some dream vividly every night.

[258] And just think of how insane that experience is.

[259] I mean, you've forgotten where you were, right?

[260] That's the strangest part.

[261] I mean, this is psychosis, right?

[262] You have lost your mind.

[263] You have lost your connection to your episodic memory, or even your expectations that reality won't undergo wholesale changes a moment after you have closed your eyes, right?

[264] Like you're in bed, you're watching something on Netflix, you're waiting to fall asleep, and then the next thing that happens to you is impossible and you're not surprised, right?

[265] You're talking to dead people, you're hanging out with famous people, you're someplace you couldn't physically be, you can fly, and even that's not surprising, right?

[266] So it's, you've lost your mind, but relevantly for this.

[267] Or found it.

[268] You found something.

[269] Lucid dreaming is very interesting.

[270] Because then you can have the best of both circumstances.

[271] And then it can be kind of systematically explored.

[272] But what I mean by found, just to sort of interrupt, is, like, if we take this brilliant idea that language constrains us, grounds us, language and other things of the waking world ground us, maybe it is that you've found the full capacity of your cognition when you dream or when you do psychedelics.

[273] You're stepping outside the little human cage, the cage of the human condition.

[274] To get, open the door and step out and look around and then go back in.

[275] Well, you've definitely stepped out of something and into something else, but you've also lost something, right?

[276] You've lost certain capacities.

[277] Memory?

[278] Well, just, yeah, in this case, you literally don't have enough presence of mind in the dreaming state, or even in the psychedelic state if you take enough, to do math.

[279] There's very little psychological continuity with your life, such that you're not surprised to be in the presence of someone who you should know is dead, or who you should know you're not likely to have met by normal channels, right?

[280] You know, you're now talking to some celebrity, and it turns out you're best friends, right?

[281] And you're not even, you have no memory of how you got there.

[282] You know, you're like, how did you get into the room?

[283] Did you drive to this restaurant?

[284] You know, you have no memory, and none of that's surprising to you.

[285] So you're kind of brain damaged in a way.

[286] You're not reality testing in the normal way.

[287] The fascinating possibility is that there's probably thousands of people who've taken psychedelics of various forms and have met Sam Harris on that journey.

[288] Well, I would put it more likely in dreams, not, you know, because in psychedelic, with psychedelics, you don't tend to hallucinate in a dreamlike way.

[289] I mean, so DMT is giving you a, an experience of others, but it seems to be non -standard.

[290] It's not like, it's not just like dream hallucinations.

[291] But to the point of coming back to DMT, people want to suggest, and Terence McKenna certainly did suggest, that because these others are so obviously other, and so vivid, they could not possibly be the creation of my own mind; but every night in dreams, you create a compelling, or what is to you at the time a totally compelling, simulacrum of another person, right?

[292] And that's, that just proves the mind is capable of doing it.

[293] Now, the phenomenon of lucid dreaming shows that the mind isn't capable of doing everything you might think it's capable of, even in that space.

[294] So, one of the things that people have discovered in lucid dreams, and I haven't done a lot of lucid dreaming, so I can't confirm all of this, though I can confirm some of it: apparently, in every house, in every room in the mansion of dreams, all light switches are dimmer switches. Like, if you go into a dark room and flip on the light, it gradually comes up; it doesn't come up instantly on demand. Apparently this is covering for the brain's inability to produce, from a standing start, visually rich imagery on demand.

[295] So I haven't confirmed that, but people who've done research on lucid dreaming claim that it's all dimmer switches.

[296] But one thing I have noticed, and, you know, people can check this out, is that in a dream, if you look at text, a page of text, or a sign, or a television that has text on it, and then you turn away and you look back at that text, the text will have changed, right?

[297] There's just a chronic instability, a graphical instability, of text in the dream state.

[298] And, you know, maybe someone can confirm that that's not true for them, but whenever I've checked that out, it has been true for me. So it keeps generating it, like, in real time.

[299] Yeah.

[300] From a video game perspective.

[301] Yeah, it's rendering, it's re -rendering it for some reason.

[302] What's interesting, I actually don't know how I found myself in that part of the internet.

[303] But there's quite a lot of discussion about what it's like to do math on LSD.

[304] Because apparently some of the deepest thinking processes needed are those of mathematicians or theoretical computer scientists.

[305] Basically, doing anything that involves math and proofs, you have to think creatively but also deeply, and you have to think for many hours at a time. And so they're always looking for ways, like, are there any sparks of creativity that could be injected? And apparently, out of all the psychedelics, the worst is LSD, because it completely destroys your ability to do math. And I wonder whether that has to do with your ability to hold visual, geometric things in a stable way in your mind and stitch things together, which is often what's required for proofs.

[306] But again, it's difficult to kind of research these kinds of concepts, but it does make me wonder where, what are the spaces, how is the space of things you're able to think about and explore morphed by different psychedelics or dream states and so on, and how is that different?

[307] How much does it overlap with reality?

[308] And what is reality?

[309] Is there a waking state reality, or is it just a tiny subset of reality and we get to take a step in other versions of it?

[310] We tend to think very much in a space, time, four -dimensional.

[311] There's a three -dimensional world, there's time, and that's what we think about reality.

[312] And we think of traveling as, walking from point A to point B in the three -dimensional world.

[313] But that's a very kind of human surviving, trying not to get eaten by a lion conception of reality.

[314] What if traveling is something like we do with psychedelics and meet the elves?

[315] What if it's something, what if thinking or the space of ideas as we kind of grow and think through ideas, that's traveling?

[316] Or what if memory is traveling?

[317] I don't know if you have a favorite view of reality. By the way, I should say you had an excellent conversation with Donald Hoffman. Yeah, yeah, he's interesting. Is there any inkling of his sense in your mind, that objective reality is very far from the kind of reality we imagine, we perceive, and we play with in our human minds? Well, the first thing to grant is that we're never in direct contact with reality, whatever it is, unless that reality is consciousness, right?

[318] So we're only ever experiencing consciousness and its contents.

[319] And then the question is, how does that circumstance relate to, quote, reality at large?

[320] And Donald Hoffman is somebody who's happy to speculate: well, maybe there isn't a reality at large; maybe it's all just consciousness on some level. And that's interesting, but that runs into, to my eye, various philosophical problems, or at least you have to add a lot to that picture, that picture of idealism. I mean, the whole family of views that would just say that the universe is mind, or just consciousness at bottom, goes by the name of idealism in Western philosophy.

[321] You have to add to that idealistic picture all kinds of epicycles and kind of weird coincidences to get the predictability of our experience and the success of materialist science to make sense in that context, right?

[322] And so, what does it mean to say that there's only consciousness at bottom, right?

[323] Nothing outside of consciousness, because no one's ever experienced anything outside of consciousness.

[324] No scientist has ever done an experiment where they were contemplating data, no matter how far removed from our sense bases, whether they're looking at the Hubble Deep Field or smashing atoms, whatever tools they're using.

[325] They're still just experiencing consciousness and its various deliverances and layering their concepts on top of that.

[326] So that's always true, and yet that somehow doesn't seem to capture the character of our continually discovering that our materialist assumptions are confirmable, right?

[327] So just take the fact that we unleashed this fantastic amount of energy from within an atom, right?

[328] You know, first we have the theoretical suggestion that it's possible, right?

[329] We, you know, to come back to Einstein, there's a lot of energy in that matter, right?

[330] And what if we could release it, right?

[331] And then we perform an experiment, in this case at the Trinity test site in New Mexico, where the people who are most adequate to this conversation, people like Robert Oppenheimer, are standing around not altogether certain it's going to work, right?

[332] They're performing an experiment.

[333] They're wondering what's going to happen.

[334] They're wondering if their calculations around the yield are off by orders of magnitude.

[335] Some of them are still wondering whether the entire atmosphere of Earth is going to combust, right?

[336] that the nuclear chain reaction is not going to stop.

[337] And lo and behold, there was that energy to be released from within the nucleus of an atom.

[338] So it's just about what picture one forms from those kinds of experiments.

[339] And just our understanding of evolution, just the fact that the Earth is billions of years old and life is hundreds of millions of years old, and we weren't here to think about any of those things. All of those processes were happening, therefore, in the dark, and they are the processes that allowed us to emerge from prior life forms in the first place. To say that nothing exists outside of consciousness, outside conscious minds of the sort that we experience, just seems like a bizarrely anthropocentric claim, you know, analogous to saying the moon isn't there if no one's looking at it, right?

[340] I mean, say the moon as a moon isn't there if no one's looking at it.

[341] I'll grant that because that's already a kind of fabrication born of concepts.

[342] But the idea that there's nothing there, that there's nothing that corresponds to what we experience as the moon unless someone's looking at it, that just seems a way too parochial way to set out on this journey of discovery. There is something there; there's a computer waiting to render the moon when you look at it. The capacity for the moon to exist is there. So if we're indeed living in a simulation, which I find a compelling thought experiment, it's possible that there is this kind of rendering mechanism, but not in the silly way that we think about in video games; in some kind of more fundamental physics way. And we have to account for the fact that it renders experiences that no one has had yet, that no one has any expectation of having; it can violate the expectations of everyone, lawfully, right?

[343] And then there's some lawful understanding of why that's so.

[344] It's like, to bring it back to mathematics: certain numbers are prime whether we have discovered them or not, right?

[345] Like there's the highest prime number that anyone can name now, and then there's the next prime number that no one can name, and it's there, right?
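To make that concrete: a few lines of code can find the next prime after any number, whether or not anyone has ever written it down. A minimal sketch in Python (trial division, purely illustrative; the function names are my own):

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for illustration, not for large numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_prime(p: int) -> int:
    """The next prime after p exists and is findable,
    whether or not anyone has named it yet."""
    n = p + 1
    while not is_prime(n):
        n += 1
    return n

print(next_prime(7))  # → 11
```

The same procedure would, given enough time, find the prime after the largest one anyone has named; the number is "there" in exactly the sense Harris describes.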

[346] So it's like saying that our minds are putting it there, that what we know as mind in ourselves is in some sense putting it there.

[347] That, like the base layer of reality is consciousness, right?

[348] You know, that we're identical to the thing that is rendering this reality.

[349] You know, hubris is the wrong word, but it's like, it's okay if reality is bigger than what we experience, and has structure that we can't anticipate. I mean, again, there's certainly a collaboration between our minds and whatever is out there to produce what we call, you know, the stuff of life.

[350] There are a few stops on the train of idealism, and kind of New Age thinking, and Eastern philosophy, that philosophically I don't see a need to take. I mean, experientially and scientifically, I feel like you can get everything you want by acknowledging that consciousness has a character that can be explored from its own side, so that you're bringing the first-person experience back into the conversation about, you know, what is a human mind, and what is true.

[351] And you can explore it with different degrees of rigor.

[352] And there are things to be discovered there, whether you're using a technique like meditation or psychedelics.

[353] And these experiences have to be put in conversation with what we understand about ourselves from the third-person side, neuroscientifically or in any other way.

[354] But to me, the question is, what if reality, the sense I have from this kind of, do you play shooters?

[355] No. You mean first-person shooter games?

[356] Yes, yes, sorry.

[357] Not often, but yes.

[358] I mean, there's a physics engine that generates consistent reality, right?

[359] My sense is the same could be true for a universe, in the following sense: our concept of reality, as we understand it now in the 21st century, is a tiny subset of the full reality. It's not that the reality we conceive of, the moon being there, is somehow not there; it's that it's a tiny fraction of what's actually out there. And so the physics engine of the universe is just maintaining the useful physics, the useful reality, quote-unquote, for us to have a consistent experience as human beings. But maybe we descendants of apes really only understand, like, 0.0001% of the actual physics of reality. We can even just start with the consciousness thing, but maybe our minds are just, we're just too dumb by design.
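As an aside on the video-game analogy: procedurally generated game worlds already work roughly this way. Content is a pure function of a seed and coordinates, computed only at the moment of observation, yet lawfully consistent for every observer. A toy sketch in Python (the seed value and the "density" property are invented for illustration):

```python
import hashlib

SEED = 1234  # hypothetical world seed

def observe(x: int, y: int, z: int) -> float:
    """'Render' a property of a region only when someone looks at it.
    Nothing is stored in advance, yet every observer sees the same
    thing, because the value is a pure function of (seed, coordinates)."""
    h = hashlib.sha256(f"{SEED}:{x}:{y}:{z}".encode()).digest()
    return h[0] / 255.0  # a pseudo-physical property in [0, 1]

# Two independent "looks" at the same region agree, lawfully,
# even though the value was never computed before the first look.
assert observe(10, 20, 30) == observe(10, 20, 30)
```

This is only a metaphor for the lazy-rendering intuition, not a claim about physics: the point is that "generated on observation" and "lawfully consistent" are compatible.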

[361] Yeah, that truly resonates with me, and I'm surprised it doesn't resonate more with most scientists that I talk to.

[362] When you just look at how close we are to chimps, right?

[363] And chimps don't know anything, right?

[364] Clearly they have no idea what's going on, right?

[365] And then you get us, but then it's only a subset of human beings that really understand much of what we're talking about in any area of specialization.

[366] And if they all died in their sleep tonight, right, you'd be left with people who might take 1,000 years to rebuild the internet.

[367] you know, if ever, right?

[368] I mean, literally, it's like, and, you know, I would extend this to myself.

[369] I mean, there are areas of scientific specialization where I have no discernible competence, and, I mean, I spend no time on them.

[370] I have not acquired the tools.

[371] It would just be an article of faith for me to think that I could acquire the tools to actually make a breakthrough in those areas.

[372] And, I mean, you know, your own area is one.

[373] I mean, you know, I've never spent any significant amount of time trying to be a programmer, but it's pretty obvious I'm not Alan Turing, right?

[374] It's like, if that were my capacity, I would have discovered that in myself.

[375] I would have found programming irresistible.

[376] My few false starts in learning, I think it was C, it was just, you know, I bounced off.

[377] It's like, this was not fun.

[378] I mean, I hated trying to figure out the syntax error that was causing the thing not to compile; it was just a fucking awful experience.

[379] I hated it, right?

[380] I hated every minute of it.

[381] So if it was just people like me left, like, when do we get the internet again, right?

[382] And we lose, we lose, you know, we lose the internet.

[383] When do we get it again, right?

[384] When do we get anything like a proper science of information, right?

[385] You need a Claude Shannon or an Alan Turing.

[386] to plant a flag in the ground right here and say, all right, can everyone see this?

[387] Even if you don't quite know what I'm up to, you all have to come over here to make some progress.

[388] And, you know, there are, you know, hundreds of topics where that's the case.

[389] So we barely have a purchase on making anything like discernible intellectual progress in any generation.

[390] And yeah, Max Tegmark makes this point.

[391] He's one of the few people who does in physics.

[392] If you just take the truth of evolution seriously, right, and realize that there's nothing about us that has evolved to understand reality perfectly.

[393] I mean, we're just, we're just not that kind of ape, right?

[394] There's been no evolutionary pressure along those lines.

[395] And so we are making do with tools that were designed for fights with sticks and rocks, right?

[396] And it's amazing we can do as much as we can.

[397] I mean, we just, you know, you and I are just sitting here on the back of having received an mRNA vaccine, you know, that has certainly changed our lives, given what the last year was like.

[398] And it's going to change the world, if rumors of coming miracles are borne out.

[399] I mean, it now seems likely we have a vaccine coming for malaria, right, which has been killing millions of people a year for as long as we've been alive.

[400] I think it's down to like 800,000 people a year now because we've spread so many bed nets around.

[401] But it was like two and a half million people every year.

[402] It's amazing what we can do, but yeah, if in fact the answer at the back of the book of nature is that you understand 0.1% of what there is to understand, and half of what you think you understand is wrong,

[403] That would not surprise me at all.

[404] It is funny to look at our evolutionary history, even back to chimps, I'm pretty sure even chimps thought they understood the world well.

[405] So at every point in that timeline of evolutionary development, throughout human history, there's a sense, you hear this message over and over, that there's nothing more to be invented.

[406] But 100 years ago, there was a famous story, I forget which physicist told it, but there were physicists telling their undergraduate students not to get graduate degrees in physics because basically all the problems had been solved.

[407] And this is like around 1915 or so.

[408] Turns out they were wrong.

[409] I'm going to ask you about, Oh, okay.

[410] You've recently released an episode of your podcast making sense for those with a shorter attention span, basically summarizing your position on free will.

[411] I think it was under an hour and a half.

[412] Yeah, yeah.

[413] It was as brief and clear as I could make it.

[414] So allow me to summarize the summary, TLDR, and maybe you tell me where I'm wrong.

[415] So free will is an illusion, and even the experience of free will is an illusion.

[417] Like, we don't even experience it.

[418] Am I good in my summary?

[419] Yeah, I mean, this is a, this is a line that's a little hard to scan for people.

[420] I say that it's not merely that free will is an illusion.

[421] The illusion of free will is an illusion.

[422] Right.

[423] Like, there is no illusion of free will.

[424] And that is, unlike many other illusions, a more fundamental claim.

[425] It's not that it's wrong, it's not even wrong.

[426] I mean, that was, I think, Wolfgang Pauli, who derided one of his colleagues or enemies with that aspersion about his theory in quantum mechanics.

[427] So there are things that you, there are genuine illusions.

[428] There are things that you do experience.

[429] And then you can kind of punch through that experience, or you can't; you can't experience them any other way, we just know it's not a veridical experience. Just take a visual illusion. A lot of these come to me on Twitter these days: there are these amazing visual illusions where, you know, every figure in this GIF seems to be moving, but nothing in fact is moving. You can just put a ruler on your screen, and nothing's moving. Some of those illusions you can't see any other way; they're hacking aspects of the visual system that are just eminently hackable, and you have to use a ruler to convince yourself that the thing isn't actually moving. Now, there are other visual illusions where you're taken in at first, but if you pay more attention, you can actually see that it's not there, right, or it's not how it first seemed. The Necker cube is a good example of that.

[430] The Necker cube is just that schematic of a transparent cube, which pops out one way or the other.

[431] One face can pop out, and the other face can pop out.

[432] But you can actually just see it as flat, with no pop-out, which is a more veridical way of looking at it.

[433] So there are kind of inward correlates to this.

[434] And I would say that the sense of self and free will are closely related.

[435] I often describe them.

[436] as two sides of the same coin, but they're not quite the same in their spuriousness.

[437] So the sense of self is something that people, I think, do experience, right?

[438] It's not a very clear experience, but I wouldn't call the illusion of self an illusion.

[439] But the illusion of free will is an illusion, in that as you pay more attention to your experience, you begin to see that it's totally compatible with an absence of free will.

[441] You don't, I mean, coming back to the place we started, you don't know what you're going to think next.

[442] You don't know what you're going to intend next.

[443] You don't know what's going to just occur to you that you must do next.

[444] You don't know how much you were going to feel the behavioral imperative to act on that thought.

[445] If you suddenly feel, oh, I don't need to do that.

[446] I can do that tomorrow.

[447] You don't know where that comes from.

[448] You didn't know that was going to arise.

[449] You didn't know that was going to be compelling.

[450] All of this is compatible with some evil genius in the next room just typing in code into your experience.

[451] It's like this.

[452] Okay, let's give him the "oh my God, I just forgot it's going to be our anniversary in one week" thought, right?

[454] Give him the cascade of fear.

[455] Give him this brilliant idea for the thing he can buy that's going to take him no time at all, and this, you know, overpowering sense of relief.

[456] All of our experience is compatible with the script already being written, and I'm not saying the script is written.

[457] I'm not saying that fatalism is, you know, is the right way to look at this.

[458] But even our most deliberate voluntary action, where we go back and forth between two options, you know, thinking about the reasons for A and then reconsidering and thinking harder about B, and just going eeny-meeny-miny-moe until the end of the hour, however laborious you can make it, there is an utter mystery at your back finally promoting the thought or intention or rationale that is most compelling, and therefore behaviorally effective.

[459] And this can drive some people a little crazy.

[460] So I usually preface what I say about free will with the caveat that if thinking about your mind this way makes you feel terrible, well then stop.

[461] You know, get off the ride.

[462] You know, switch the channel.

[463] You don't have to go down this path.

[464] But for me and for many other people, it's incredibly freeing to recognize this about the mind.

[465] Because, one, you realize, I mean, cutting through the illusion of the self is immensely freeing for a lot of reasons that we can talk about separately, but losing the sense of free will does two things very vividly for me. One is it totally undercuts the psychological basis for hatred, right?

[466] Because when you think about the experience of hating other people, what that is anchored to is a feeling that they really are the true authors of their actions.

[467] I mean, that someone is doing something that you find so despicable, right?

[468] Let's say they're, you know, targeting you unfairly, right?

[469] They're maligning you on Twitter or they're, you know, they're suing you, or they're doing something.

[470] They broke your car window.

[471] They did something awful, and now you have a grievance against them.

[472] And you're relating to them very differently, emotionally.

[473] in your own mind, than you would if a force of nature had done this, right?

[474] Or if it had just been, you know, a virus, or if it had been a wild animal or a malfunctioning machine, right?

[475] Like, to those things you don't attribute any kind of freedom of will, and while you may suffer the consequences of catching a virus or being attacked by a wild animal or having a, you know, your car breakdown or whatever, it may frustrate you.

[476] you don't slip into this mode of hating the agent in a way that completely commandeers your mind and deranges your life.

[477] I mean, there are people who spend decades hating other people for what they did, and it's just pure poison.

[478] So it's a useful shortcut to compassion and empathy.

[479] Yeah, yeah.

[480] But the question is, take this, what was it, the source of consciousness, let's call it the consciousness-generator black box that we don't understand.

[481] And is it possible that the script that we're walking along, that we're playing, that's already written, is actually being written in real time?

[482] It's almost like you're driving down a road, and in real time that road is being laid down.

[483] And this black box of consciousness that we don't understand is the place where this script is being generated.

[484] So it's not, it is being generated, it didn't always exist.

[485] So there's something we don't understand that's fundamental about the nature of reality that generates both consciousness and, let's call it maybe, the self.

[487] I don't know if you want to distinguish between those.

[488] Yeah, I definitely would, yeah.

[489] You would.

[490] Because there's a bunch of illusions we're referring to.

[491] There's the illusion of free will.

[492] There's the illusion of self.

[493] and there's the illusion of consciousness.

[494] You're saying, I think you said, you're not as willing to say there's an illusion of consciousness.

[495] In fact, I would say it's impossible.

[496] Impossible.

[497] You're a little bit more willing to say that there's an illusion of self, and you're definitely saying there's an illusion of free will.

[498] Yes.

[499] I'm definitely saying there's an illusion that a certain kind of self is an illusion.

[500] We mean many different things by this notion of self.

[501] So maybe I should just differentiate these things.

[502] So consciousness can't be an illusion, because any illusion proves its reality as much as any other veridical perception.

[503] I mean, if you're hallucinating now, that's just as much a demonstration of consciousness as really seeing what's, quote, actually there.

[504] If you're dreaming and you don't know it, that is consciousness, right?

[505] You can be confused about literally everything.

[506] You can't be confused about the underlying claim, whether you make it linguistically or not: the cognitive assertion that something seems to be happening.

[507] It's the seeming that is the cash value of consciousness.

[508] Can I take a tiny tangent?

[509] So what if I am creating consciousness in my mind to convince you that I'm human?

[511] So it's a useful social tool, not a fundamental property of the experience of being a living thing.

[512] What if it's just a social tool, almost like a useful computational trick, to place myself into reality as we together communicate about this reality?

[513] And another way to ask that, because you said much earlier, you talk negatively about robots, as you often do, so let me, because you'll probably die first when they take over.

[514] No, I'm looking forward to certain kinds of robots.

[515] I mean, I'm not, if we can get this right, this would be amazing.

[516] But you don't like the robots that fake consciousness.

[517] That's what you, you don't like the idea of fake it until you make it.

[518] Well, no, it's not that I don't like it.

[519] It's that I'm worried that we will lose sight of the problem, and the problem has massive ethical consequences.

[521] If we create robots that really can suffer, that would be a bad thing, right?

[522] And if we really are committing a murder when we recycle them, that would be a bad thing.

[523] This is how I know you're not Russian.

[524] Why is it a bad thing that we create robots that can suffer?

[525] Isn't suffering a fundamental thing from which, like, beauty springs?

[526] Like, without suffering, do you really think we would have beautiful things in this world?

[527] Okay, well, that's a tangent on a tangent. I'll go there, I would love to go there, but let's not go there just yet. But, you know, I do think that if anything is bad, creating hell and populating it with real minds that really can suffer in that hell, that's bad. You are worse than any mass murderer we can name if you create it. I mean, this could be in robot form, or, you know, more likely it would be in some simulation of a world where we managed to populate it with conscious minds, whether we knew they were conscious or not, and that world is in a state that's, you know, unendurable.

[528] That would just follow from taking seriously the thesis that mind, intelligence, and consciousness ultimately are substrate-independent, right?

[529] You don't need a biological brain to be conscious.

[530] You certainly don't need a biological brain to be intelligent, right?

[531] So if we just imagine the consciousness at some point comes along for the ride as you scale up in intelligence, well, then we could find ourselves creating conscious minds that are miserable, right?

[532] And that's just like creating a person who's miserable, right?

[533] It could be worse than creating a person who's miserable.

[534] It could be even more sensitive to suffering.

[535] Cloning them and maybe for entertainment, watching them suffer.

[536] Just like watching a person suffer for entertainment, you know.

[537] So, but back to your primary question here, which is differentiating consciousness, self, and free will as concepts, and their degrees of illusoriness.

[538] The problem with free will is that what most people mean by it, and that you know, and Dan, this is where Dan Dennett is going to get off the ride here, right?

[539] So like, he doesn't, he's going to disagree with me that I know what most people mean by it.

But I have a very keen sense, having talked about this topic for many, many years, and seeing people get wrapped around the axle of it, and seeing in myself what it's like to have felt that I was a self that had free will and then to no longer feel that way, right?

[541] I mean, to know what it's like to actually disabuse myself of that sense, cognitively, and to recognize what's left, what goes away, and what doesn't go away on the basis of that epiphany.

[542] I have a sense that I know what people think they have in hand when they worry about whether free will exists.

[543] And it is the flip side of this feeling of self.

[544] It's the flip side of feeling like you are not merely identical to experience.

[545] You feel like you're having an experience.

[546] You feel like you're an agent that is appropriating an experience.

[547] There's a protagonist in the movie of your life and it is you.

[548] It's not just the movie, right?

[549] It's like there's sights and sounds and sensations and thoughts and emotions and this whole cacophony of experience, a felt experience, a felt experience of embodiment.

[550] But there seems to be a rider on the horse,

[551] or a passenger in the body, right?

[552] People don't feel truly identical to their bodies down to their toes.

[553] They sort of feel like they have bodies.

[554] They feel like they're minds in bodies, and that feels like a self.

[555] That feels like me. And, again, this gets very paradoxical when you talk about the experience of being in relationship to yourself or talking to yourself, giving yourself a pep talk.

[556] I mean, if you're the one talking, why are you also the one listening?

[557] Why do you need the pep talk and why does it work if you're the one giving the pep talk, right?

[558] Or if I say, where are my keys, right?

[559] If I'm looking for my keys, why do I think the superfluous thought, where are my keys?

[560] I know I'm looking for the fucking keys.

[561] I'm the one looking.

[562] Who am I telling that, you know, that we now need to look for the keys, right?

[563] So that duality is weird.

[564] But leave that aside.

[565] There's the sense, and this becomes very vivid when people try to learn to meditate.

[566] You know, most people start by closing their eyes, and they're told to pay attention to an object, like the breath, say. So you close your eyes and you pay attention to the breath, and you can feel it at the tip of your nose or in the rising and falling of your abdomen, and you're paying attention and you feel something vague there. And then you think a thought: why the breath? Why am I paying attention to the breath? What's so special about the breath? And then you notice your thinking, and you're not paying attention to the breath anymore.

[567] And then you realize, okay, the practice is, okay, I should notice thoughts, and then I should come back to the breath.

[568] But this starting point is the conventional starting point of feeling like you are an agent, very likely in your head, a locus of consciousness, a locus of attention, that can strategically pay attention to certain parts of experience.

[569] Like, I can focus on the breath, and then I get lost in thought, and now I can come back to the breath, and I can open my eyes, and I'm over here behind my face, looking out at a world that's other than me. There's this kind of subject-object perception, and that is the default starting point of selfhood, of subjectivity. And married to that is the sense that I can decide what to do next, right?

[570] I am, I am an agent who can pay attention to the cup.

[571] I can listen to sounds.

[572] There's certain things that I can't control.

[573] Certain things are happening to me and I just can't control them.

[574] So, for instance, if someone asks, well, can you not hear a sound, right?

[575] Like, don't hear the next sound.

[576] Don't hear anything for a second.

[577] Or don't hear, don't hear, you know, I'm snapping my fingers.

[578] Don't hear this.

[579] Where's your free will?

[580] You know, like, just stop this from coming in.

[581] You realize, okay, wait a minute, my abundant freedom does not extend to something as simple

[582] as just being able to pay attention to something other than this.

[583] Okay, well, so I'm not that kind of free agent, but at least I can decide what I'm going to do next.

[584] I'm going to pick up this water, right?

[585] And there's a feeling of identification with the impulse, with the intention, with the thought that occurs to you, with the feeling of speaking.

[586] Like, you know, what am I going to say next?

[587] Well, I'm saying it.

[588] So here goes, this is me. It feels like I'm the thinker.

[589] I'm the one who's in control.

[590] But all of that is born of not really paying close attention to what it's like to be you.

[591] And so this is where meditation comes in, or this is where, again, you can get at this conceptually.

[592] You can unravel the notion of free will just by thinking certain thoughts, but you can't feel that it doesn't exist

[593] unless you can pay close attention to how thoughts and intentions arise.

[594] So the way to unravel it conceptually is just to realize: okay, I didn't make myself, I didn't make my genes, I didn't make my brain, I didn't make the environmental influences that impinged upon this system for the last 54 years and that have produced my brain in precisely the state it's in right now, with all of its receptor weightings and densities. I'm exactly the machine I am right now, through no fault of my own as the experiencing self.

[595] I get no credit and I get no blame for the genetics and the environmental influences here.

[596] And yet those are the only things that contrive to produce my next thought or impulse or moment of behavior.

[597] And if you were going to add something magical to that clockwork, like an immortal soul, you can also notice that you didn't produce your soul, right?

[598] You can't account for the fact that you don't have the soul of someone who doesn't like any of the things you like, or wasn't interested in any of the things you were interested in, or, you know, was a psychopath or had an IQ of 40.

[599] I mean, there's nothing about that that the person who believes in a soul can claim to have controlled.

[600] And yet that is also totally dispositive of whatever happens next.

[601] But everything you've described now, maybe you can correct me, but it kind of speaks to the materialistic nature of the hardware.

[602] But even if you add magical ectoplasm software, you didn't produce that either.

[603] I know, but if we can think about the actual computation running on the hardware and running on the software, there's something you said recently,

[604] which is that you think of culture as an operating system.

[605] So if we just remove ourselves a little bit from the conception of human civilization as a collection of humans, and instead see us as a distributed computing system on which some kind of operating system is running, then the computation that's running is the actual thing that generates the interactions and the communications, and maybe even free will, the experiences of all those free wills.

[606] Do you ever think of, do you ever try to reframe the world in that way where it's like ideas are just using us?

[607] Thoughts are using individual nodes in the system, and they're just jumping around, and they also have the ability to generate experiences, so that we can push those ideas along.

[608] And basically the main organisms here are the thoughts, not the humans.

[609] Yeah, but then that erodes the boundary between self and world.

[610] Right.

[611] So then there's no self, no really integrated self, to have any kind of will at all.

[612] Like, if you're just a memeplex, if you're just a collection of memes... I mean, we're all kind of like currents, like eddies, in this river of ideas, right?

[613] And it seems to have structure, but there's no real boundary

[614] between that part of the flow of water and the rest.

[615] And I would say that much of our mind answers to this kind of description.

[616] I mean, so much of our mind is obviously not self-generated, and you're not going to find it by looking in the brain.

[617] It is largely the result of culture, but also, you know, of the genes on one side and culture on the other, meeting to allow for manifestations of mind that aren't actually bounded by the person in any clear sense.

[618] The example I often use here, but there are so many others, is just the fact that we're following the rules of English grammar, to whatever degree we are. We certainly haven't consciously represented these rules for ourselves.

We haven't invented these rules.

[620] There are norms of language use that we couldn't even specify, because we're not grammarians, we haven't studied this, we don't even have the right concepts. And yet we're following these rules, and we notice it as an error when we fail to follow them.

[621] And virtually every other cultural norm is like that.

[622] I mean, these are not things we've invented.

[623] You can consciously decide to scrutinize them and override them.

[624] But, I mean, just think of any social situation where you're with other people and you're behaving in ways that are culturally appropriate, right?

[625] You're not being wild animals together.

[626] You're following, you have some expectation of how you shake a person's hand and how you deal with implements on a table, how you have a meal together.

[627] Obviously, this can change from culture to culture, and people can be shocked by how different those things are, right?

[628] We all have foods we find disgusting, but in some countries, dog is not one of those foods, right?

[629] And yet, you and I presumably would be horrified to be served dog.

[630] Those are not norms that we're the authors of; they are outside of us in some way, and yet they're felt very viscerally.

[631] I mean, they're certainly felt in their violation.

[632] You know, if you are, just imagine, you're in somebody's home, you're eating something that tastes great to you, and you happen to be in Vietnam or wherever, you know, you didn't realize dog was potentially on the menu, and you find out that you've just eaten 10 bites of what is, you know, really a Cocker Spaniel.

[633] and you feel this instantaneous urge to vomit, right, based on an idea, right?

[634] So you're not the author of that norm that gave you such a powerful experience of its violation.

[635] And I'm sure we can trace the moment in your history, you know, vaguely where it sort of got in.

[636] I mean, very early on as kids, you realize you're treating dogs as pets and not as food.

[637] Or as potential food. But yeah, the point you just made opens us up to the idea that we are totally permeable to a sea of mind. Yeah, but if we take the metaphor of distributed computing systems: each individual node is part of performing a much larger computation, but it nevertheless is in charge of the scheduling. So, assuming it's Linux, it's doing the scheduling of processes and it's constantly alternating them.

[638] That node is making those choices.

[639] That node surely believes it has free will, and actually has free will, because it's making those hard choices.

[640] But the choices ultimately are part of a much larger computation that it can't control.

[641] Isn't it possible that that node, that human node, is still making the choice?
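The node-scheduling metaphor above can be made concrete with a toy sketch. This is an illustrative assumption, not a description of Linux's actual scheduler: the class and names are invented, and the policy is only loosely in the spirit of fair scheduling. The node genuinely "makes" each choice, yet the choice is fully determined by its state and inputs.

```python
# Illustrative toy (not the real Linux scheduler): a node that "chooses"
# which process runs next. The decision is real -- the node makes it --
# but it is fully determined by the node's current state.

import heapq

class ToyScheduler:
    """Picks the runnable process with the least CPU time used so far."""

    def __init__(self):
        self._queue = []  # heap of (cpu_time_used, name)

    def add(self, name, cpu_time_used=0):
        heapq.heappush(self._queue, (cpu_time_used, name))

    def pick_next(self, quantum=1):
        # The node's "decision": deterministic given its current state.
        used, name = heapq.heappop(self._queue)
        heapq.heappush(self._queue, (used + quantum, name))
        return name

sched = ToyScheduler()
sched.add("editor")
sched.add("compiler")
order = [sched.pick_next() for _ in range(4)]
```

Run it and the node alternates the two processes fairly; nothing in the trace distinguishes a "choice" from the deterministic unfolding of its state.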

[642] Well, yeah, it is.

[643] So I'm not saying that your body isn't doing, really doing things, right?

[644] And some of those things can be conventionally thought of as choices, right?

[645] So it's like I can choose to reach and it's like, it's not being imposed on me. That would be a different experience.

[646] So there's definitely a difference between voluntary and involuntary action.

[647] So that has to get conserved by any account of the mind

[648] that jettisons free will. You still have to admit that there's a difference between a tremor that I can't control and a purposeful motor action that I can control and can initiate on demand, and that's associated with intentions.

[649] And it's got, you know, efferent motor copy, which is predictive, so that I can notice errors.

[650] You know, I have expectations.

[651] When I reach for this, if my hand were actually to pass through the bottle because it's a hologram, I would be surprised, right?

[652] And so that shows that I have an expectation of just what my grasping behavior is going to be like even before it happens.

[653] Whereas with a tremor, you don't have the same kind of thing going on.

[654] That's a distinction we have to make.

[655] So yes, my intention to move, which can in fact be subjectively felt, really is the proximate cause of my moving.

[656] It's not coming from elsewhere in the universe.

[657] I'm not saying that.

[658] So in that sense, the node is really deciding to execute, you know, the subroutine now.

[659] But that's not the feeling that has given rise to this conundrum of free will, right?

[660] So the crucial thing is that people feel like they could have done otherwise, right?

[661] That's the thing.

[662] So when you run back the clock of your life, right, you run back the movie of your life, you flip back a few pages in the novel of your life, people feel that at this point they could have behaved differently than they did.

[663] Right?

[664] So, even given your distributed computing example, it's either a fully deterministic system or it's a deterministic system that admits of some random influence.

[665] In either case, that's not the free will people think they have.

[666] The free will people think they have is, damn, I shouldn't have done that.

[667] I just like, I shouldn't have done that.

[668] I could have done otherwise, right?

[669] I should have done otherwise, right?

[670] Like, if you think about something that you deeply regret doing, right, or something that you hold someone else responsible for because, in your mind, they really are the upstream agent of what they did.

[671] You know, that's an awful thing that that person did and they shouldn't have done it.

[672] There is this illusion, and it has to be an illusion because there's no picture of causation that would make sense of it.

[673] There's this illusion that if you arrange the universe exactly the way it was a moment ago, it could have played out differently.

[674] And the only way it could have played out differently is if there's randomness added to that, but randomness isn't what people feel would give them free will, right?

[675] If you tell me that I only reached for the water bottle this time because there's a random number generator in there kicking off values and it finally moved my hand, that's not the feeling of authorship.

[676] That's still not control.

[677] you're still not making that decision.

[678] There's actually, I don't know if you're familiar with cellular automata.

[679] It's a really nice visualization of how simple rules can create incredible complexity: really dumb initial conditions are set, simple rules are applied, and eventually, if the rule and the initial conditions are right, something emerges that to our perception system looks like organisms interacting.

[680] You can construct any kinds of worlds.

[681] And they're not actually interacting.

[682] They're not actually even organisms.

[683] And they certainly aren't making decisions.

[684] So there's like systems you can create that illustrate this point.
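One minimal system of this kind is an elementary cellular automaton. The sketch below assumes Rule 110 (an assumption for illustration; the conversation names no particular rule): a single live cell plus a fixed local rule produces surprisingly complex, organism-like patterns, with nothing "deciding" anything.

```python
# Illustrative sketch: an elementary (1-D, two-state) cellular automaton.
# Rule 110 is assumed here as a well-known simple rule that generates
# complex structure from a trivial starting state.

def step(cells, rule=110):
    """One update: each cell's next state depends only on itself and its
    two neighbors; the 8 possible neighborhoods index the bits of `rule`."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=31, steps=15, rule=110):
    # "Dumb" initial condition: a single live cell in the middle.
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

for row in run():
    print("".join("#" if c else "." for c in row))
```

Printing the rows shows a growing triangular pattern of interacting structures, all of it fixed the moment the rule and the initial row were chosen.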

[685] The question is whether there could be some room for, to use the term loosely in the 21st century, magic, back to the black box of consciousness.

[686] Let me ask you it this way.

[687] If you're wrong about your intuition about free will, and somebody comes along and proves to you that you didn't have the full picture, what would that proof look like?

Well, that's the problem. That's why it's not even an illusion in my world, because for me it's impossible to say what the universe would have to be like for free will to be a thing, right? It just doesn't conceptually map onto any notion of causation we have.

[688] And that's unlike any other spurious claim you might make.

[689] So, if you're going to believe in ghosts, right, I understand what that claim could be.

[690] Like, I don't happen to believe in ghosts, but it's not hard for me to specify what would have to be true for ghosts to be real.

[691] And so it is with a thousand other things like ghosts.

[692] So like, okay, so you're telling me that when people die, there's some part of them that is not reducible at all to their biology that lifts off them and goes elsewhere.

[693] And it's actually the kind of thing that can linger in closets and in cupboards and actually it's immaterial, but by some principle of physics, we don't totally understand.

[694] It can make sounds and knock over objects, and even occasionally show up so it can be visually beheld.

[695] And it's just, it seems like a miracle, but it's just some spooky noun in the universe that we don't understand.

[696] Let's call it a ghost.

[697] That's fine.

[698] I can talk about that all day, the reasons to believe in it, the reasons not to believe in it, the way we would scientifically test for it, what would have to be provable so as to convince me that ghosts are real.

[699] Free will isn't like that at all.

[700] There's no description of any concatenation of causes preceding my conscious experience that sounds like what people think they have when they think they could have done otherwise, and that they, the conscious agent, are really in charge, right?

[701] Like if you don't know what you're going to think next, right, and you can't help but think it, take those two premises on board.

[702] You don't know what it's going to be.

[703] You can't stop it

[704] from coming, and until you actually know how to meditate, you can't stop yourself from fully living out its behavioral or emotional consequences.

[705] Right?

[706] Mindfulness, you know, arguably gives you another degree of freedom here.

[707] It doesn't give you free will, but it gives you some other game to play with respect to the emotional and behavioral imperatives of thoughts. But short of that, the reason why mindfulness doesn't give you free will is that you can't account for why in one moment mindfulness arises and in other moments it doesn't, right?

[708] But a different process is initiated once you can practice in that way.

[709] Well, if I could push back for a second. By the way, I just have this thought bubble popping up all the time of just two recent chimps arguing about the nature of consciousness.

[710] It's kind of hilarious.

[711] So on that thread, you know, before Einstein, say, we had a certain conception of traveling from point A to point B. Then say at some point in the future we are able to realize, through engineering, in a way consistent with Einstein's theory, that you can have wormholes.

[712] You can travel from one point to another faster than the speed of light.

[713] And that would, I think, completely change our conception of what it means to travel through physical space.

[714] And that would completely transform our abilities.

[715] You talk about causality, but here let's just focus on what it means to travel through physical space.

[716] Don't you think it's possible that there will be inventions or leaps in understanding about reality that will allow us to see free will differently: that us humans, maybe somehow linked to this idea of consciousness, are actually able to be the authors of our actions?

[717] It is a non -starter for me conceptually.

[718] It's a little bit like saying, could there be some breakthrough that will cause us to realize that circles are really square, or that circles are not really round?

[719] Right.

[720] No, a circle is what we mean by a perfectly round form, right?

[721] It's not on the table to be revised.

[722] And so I would say the same thing about consciousness.

[723] So it's just like saying, is there some breakthrough that would get us to realize that consciousness is really an illusion?

[724] I'm saying no, because the experience of an illusion is as much a demonstration of what I'm calling consciousness as anything else, right?

[725] Like that, that is consciousness.

[726] With free will, it's a similar problem.

[727] It's like, again, it comes down to a picture of causality, and there's just no other picture on offer.

[728] And what's more, I know what it's like on the experiential side to lose the thing to which it is clearly anchored, right?

[729] And this is the question that almost nobody asks. People who are debating me on the topic of free will: at 15-minute intervals I'm making the claim that I don't feel this thing, and they never become interested in, well, what's that like?

[730] Like, okay, you're actually saying you don't, this thing isn't true for you empirically.

[731] It's not just, because most people who don't believe in free will philosophically also believe that we're condemned to experience it.

[732] Like, you just, you can't live without this feeling.

[733] So you're actually saying you're able to experience the absence of the illusion of free will.

[734] Yes, yes.

[735] Are we talking about a few minutes at a time, or does this require a lot of work, a meditation?

[736] Are you literally able to load that into your mind and, like, play that movie?

Right now, just in this conversation.

[738] So it's not absolutely continuous, but it's whenever I pay attention.

[739] It's like, it's the same thing, and I would say the same thing for the illusoriness of the self, and again, we haven't talked about this.

[740] Can you still have the self and not have the free will in your mind at the same time?

[741] Do they go away at the same time?

[742] Yeah, it's the same thing; they're always holding hands when they walk out the door.

[743] They really are two sides of the same coin.

[744] Okay.

[745] But it's just, it comes down to what it's like to try to get to the end of this sentence or what it's like to finally decide that it's been long enough and now I need another sip of water, right?

[746] If I'm paying attention, that is. If I'm not paying attention, I'm probably captured by some other thought, and that feels a certain way, right?

[747] And so that's not, it's not vivid.

[748] But if I try to make vivid this experience of just, okay, I'm finally going to experience free will.

[749] I'm going to notice my free will.

[750] Right?

[751] Like it's got to be here.

[752] Everyone's talking about it.

[753] Where is it?

[754] I'm going to pay attention to it.

[755] I'm going to look for it.

[756] And I'm going to create a circumstance where it has to be most robust, right?

[757] I'm not rushed to make this decision.

[758] I'm not, it's not a reflex.

[759] I'm not under pressure.

[760] I'm going to take as long as I want.

[761] I'm going to decide it's not trivial.

[762] So it's not just like reaching with my left hand or reaching with my right hand.

[763] People don't like those examples for some reason.

[764] Let's make a big decision.

[765] Like, where should, you know, what should my next podcast be on, right?

[766] Who do I invite on the next podcast?

[767] What is it like to make that decision?

[768] When I pay attention, there is no evidence of free will anywhere in sight.

[769] It feels profoundly mysterious to be going back and forth between two people.

[770] You know, like, is it going to be person A or person B?

[771] I've got all my reasons for A, and all my reasons why not, and all my reasons for B, and there's some math going on there that I'm not even privy to, where certain concerns are trumping others.

[772] And at a certain point, I just decide.

[773] And yes, you can say I'm the node in the network that has made that decision.

[774] Absolutely.

[775] I'm not saying it's being piped to me from elsewhere.

[776] But the feeling of what it's like to make that decision is totally

[777] without a real sense of agency, because something simply emerges.

[778] It's literally as tenuous as, what's the next sound I'm going to hear, right?

[779] Or what's the next thought that's going to appear?

[780] And it just, something just appears, you know. And if something appears to cancel that something, like if I say, I'm going to invite her, and then I'm about to send the email and I think, oh, no, no, I can't do that.

[781] There was that thing in that New Yorker article I read; I've got to talk to this guy, right?

[782] That pivot at the last second, you can make it as muscular as you want.

[783] It always just comes out of the darkness.

[784] It's always mysterious.

[785] So right, when you try to pin it down, you really can't ever find that free will.

[786] If you construct an experiment for yourself and you try to really find that moment when you're actually making that controlled, authored decision...

And we know at this point that if we were scanning your brain in some, you know, podcast-guest-choosing experiment, we would be privy to who you're going to pick before you are, you the conscious agent. Again, this is operationally a little hard to conduct, but there's enough data now to know that something very much like this cartoon is in fact true, and it will ultimately be undeniable for people.

[787] They'll be able to do it on themselves with some app.

[788] If you're deciding where to go for dinner or who to have on your podcast or ultimately who to marry, what city to move to, right?

[789] You can make it as big or as small a decision as you want.

[790] We could be scanning your brain in real time, and at a point where you still think you're uncommitted, we would be able to say with arbitrary accuracy, all right, Lex is, he's moving to Austin, right?

[791] I didn't choose that.

[792] Yeah, he was, it was going to be Austin or it was going to be Miami.

[793] He got, he's catching one of these two waves, but it's going to be Austin.

[794] And at a point where you subjectively, if we could ask you, would say, oh, no, I'm still working over here.

[795] I'm still thinking.

[796] I'm still considering my options.

[797] And you spoke to this.

[798] In your thinking about other stuff in the world, it's been very useful to step away from this illusion of free will.

[799] And you argue that it probably makes for a better world, because you can be compassionate and empathetic toward others.

[800] And toward oneself.

[801] I mean, radically toward others, in that literally hate makes no sense

[802] anymore.

[803] I mean, there's certain things you can really be worried about, really want to oppose.

[804] I mean, I'm not saying you'd never have to kill another person.

[805] Like, I mean, self -defense is still a thing, right?

[806] But the idea that you're ever confronting anything other than a force of nature in the end goes out the window, right?

[807] Or it does go out the window when you really pay attention.

[808] I'm not saying that this would be easy to grok if, you know, someone kills a member of your family.

[809] I'm not saying you can just listen to my 90 minutes on free will and then you should be able to see that person as identical to a grizzly bear or a virus.

[810] Because it's so, I mean, we are so evolved to deal with one another as fellow primates and as agents.

[811] But it's, yeah, when you're talking about the possibility of, you know, Christian, you know, truly Christian forgiveness, right?

[812] It's like, as testified to by, you know, various saints of that flavor over the millennia.

[813] Yeah, the doorway to that is to recognize that no one, really, at bottom, made themselves.

[814] And therefore, with everyone, what we're seeing really are differences in luck in the world.

[815] We're seeing people who are very, very lucky to have had good parents and good genes and to be in good societies and had good opportunities and to be intelligent and to be, you know, not sociopathic.

[816] Like none of it is on them.

[817] They're just reaping the fruits of one lottery after another and then showing up in the world on that basis.

[818] And then so it is with, you know, every malevolent asshole out there, right?

[819] He or she didn't make themselves.

[820] Even if that weren't possible, the utility of self-compassion is also enormous, because just look at what it's like to regret something, or to feel shame about something, or to feel deep embarrassment about it.

[821] I mean, these states of mind are some of the most deranging experiences anyone has.

[822] And the kind of the indelible reaction to them, you know, the memory of the thing you said, the memory of the wedding toast you gave 20 years ago that was just mortifying, right?

[823] The fact that that can still make you hate yourself, right?

[824] And like that psychologically, that is a knot that can be untied, right?

[825] Speak for yourself, Sam.

[826] Yeah, yeah.

[827] Clearly, you're not.

[828] You gave a great, great toast.

[829] It was my toast that was mortifying.

[830] No, no, that's not what I was referring to.

[831] I'm deeply appreciative, in the same way you're referring to, of every moment I'm alive, but I'm also often powered by self-hate.

[832] Like, several things in this conversation already that I've spoken, I'll be thinking about, like, that was the dumbest thing.

[833] You're sitting in front of Sam Harris, and you said that.

[834] So, like, that.

[835] But that somehow creates a richer experience for me. I've actually come to accept that as a nice feature, however my brain was built.

[836] I don't think I want to let go of that.

[837] Well, I think the thing you want to let go of is the suffering associated with it.

[838] So for me, so it's just psychologically and ethically, all of this is very interesting.

[839] So I don't think we should ever get rid of things like anger. But hatred, hatred is divorceable from anger in the sense that hatred is this enduring state where, you know, whether you're hating somebody else or hating yourself, it is just, it is toxic and durable and ultimately useless, right?

[840] Like it becomes, it becomes self-nullifying, right?

[841] Like, you become less capable as a person of solving any of your problems.

[842] It's not instrumental in solving the problem that is the source of

[843] all this hatred.

[844] And anger, for the most part, isn't either except as a signal of salience that there's a problem.

[845] So if somebody does something that makes me angry, that just promotes this situation to conscious attention in a way that is stronger than my not really caring about it.

[846] And there are things that I think should make us angry in the world.

[847] And there's the behavior of other people that should make us angry because we should respond to it.

[848] And so it is with yourself.

[849] If I do something, you know, as a parent, if I do something stupid that harms one of my daughters, right?

[850] My experience of myself and my beliefs about free will close the door to my saying, well, I should have done otherwise in the sense that if I could go back in time, I would have actually effectively done otherwise.

[851] No, I would do, given the same causes and conditions, I would do that thing a trillion times in a row, right?

[852] But, you know, regret and feeling bad about an outcome are still important capacities because, like, yeah, you know, I desperately want my daughters to be happy and healthy.

[853] So if I've done something, you know, if I crash the car when they're in the car and they get injured, right, and I do it because I was trying to change a song on my playlist or, you know, something stupid, I'm going to feel like a total asshole. But how long do I stew in that feeling of regret, right?

[854] And to, like, what utility is there to extract out of this error signal?

[855] And then what do I do?

[856] We're always faced with the question of what to do next, right?

[857] And how to best do that thing, that necessary thing next?

[858] And how much well -being can we experience while doing it?

[859] Like how much, how miserable do you need to be to solve your problems in life and to help solve the problems of the people closest to you?

[860] You know, how miserable do you need to be to get through your to -do list today?

[861] Ultimately, I think you can be deeply happy going through all of it, right?

[862] And not, and even navigating moments that are scary and, you know, really destabilizing to ordinary people.

[863] And, I mean, I think, you know, again, I'm always up kind of at the edge of my own capacities here.

[864] And there are all kinds of things that stress me out and worry me. And especially, if you're going to tell me it's something with, you know, the health of one of my kids, you know, it's very hard for me, like, it's very hard for me to be truly

[865] equanimous around that, but equanimity is so useful the moment you're in response mode, right?

[866] Because I mean, the ordinary experience for me of responding to what seems like a medical emergency for one of my kids is to be obviously super energized by concern to respond to that emergency.

[867] But then once I'm responding, all of my fear and agitation and worry and oh my God, what if this is really something terrible?

[868] But finding any of those thoughts compelling, that only diminishes my capacity as a father to be good company while we navigate this really turbulent passage.

[869] As you're saying this, actually, one guy comes to mind, which is Elon Musk, one of the really impressive things to me was to observe how many dramatic things he has to deal with throughout the day at work, but also if you look through his life, family too, and how he's very much actually, as you're describing, basically a practitioner of this way of thought, which is you're not in control, you're basically responding, no matter how traumatic the event.

[870] And there's no reason to sort of linger on the...

[871] Well, yeah, there couldn't be negative feelings around that. Well, so, I mean, he's in a very specific situation, which is unlike normal life, you know, even his normal life, but normal life for most people. Because when you just think of, like, you know, he's running so many businesses, and they're very, they're non-standard, highly non-standard businesses. So what he's seen is, everything that gets to him is some kind of emergency; otherwise it wouldn't be getting to him.

[872] If it needs his attention, there's a fire somewhere.

[873] So he's constantly responding to fires that have to be put out.

[874] So there's no default expectation that there shouldn't be a fire, right?

[875] But in our normal lives, we live.

[876] Most of us who are lucky, right, not everyone obviously on earth, but most of us who are at some kind of cruising altitude in terms of our lives where we're reasonably healthy and life is reasonably orderly and the political apparatus around us is reasonably functional, functional.

[877] So I said "functionalable" for the first time in my life, through no free will of my own.

[878] I noticed those errors, and they do not feel like agency.

[879] And nor does the success of an utterance feel like agency.

[880] When you're looking at normal human life, right, where you're just trying to be happy and healthy and get your work done, there's this default expectation that there shouldn't be fires.

[881] People shouldn't be getting sick or injured.

[882] You know, we shouldn't be losing vast amounts of our resources.

[883] We should, like, so when something really stark like that happens, people don't have that muscle, because they haven't been responding to emergencies all day long, you know, seven days a week, in business mode, thinking, I have a very thick skin.

[884] This is just another one.

[885] I'm not expecting anything else when I wake up in the morning.

[886] No, we have this default sense that, I mean, honestly, most of us have the default sense that we aren't going to die, right, or that we should, like, maybe we're not going to die, right?

[887] Like, death denial really is a thing.

[888] You know, we're, we're because, and you can see it, just like I can see when I reach for this bottle that I was expecting it to be solid, because when it isn't solid, when it's a hologram and my fist just closes on itself, I'm damn surprised.

[889] People are damn surprised to find out that they're going to die, to find out that they're sick, to find out that someone they love has died or is going to die.

[890] So it's like, the fact that we are surprised by any of that shows us that we're living at a, we're living in a mode that is, you know, we're perpetually diverting ourselves from some facts that should be obvious, right?

[891] And that, and the more salient we can make them, you know, the more, I mean, the case of death, it's a matter of being able to get one's priorities straight.

[892] I mean, the moment, again, this is hard for everybody, even those who are really in the business of paying attention to it.

[893] But the moment you realize that every circumstance is finite, right?

[894] You've got a certain number of, you know, you've got whatever, whatever it is, 8,000 days left in a normal span of life.

[895] And 8,000 is a, sounds like a big number.

[896] It's not that big a number, right?
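As an aside, the arithmetic behind that claim is easy to check; a minimal sketch (the 8,000-day figure is Sam's rough estimate, not a computed value):

```python
# Convert the "8,000 days" estimate into years, using the average
# Gregorian year length of 365.25 days (illustrative only).
days = 8_000
years = days / 365.25
print(f"{days:,} days is about {years:.1f} years")  # about 21.9 years
```

A big-sounding count of days collapses into a couple of decades, which is the point being made.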

[897] So it's just like, and then you can decide how you want to go through life and how you want to experience each one of those days.

[898] And so, back to our jumping-off point, I would argue that you don't want to feel self-hatred ever.

[899] I would argue that you don't want to really grasp on to any of those moments where you are taking, internalizing the fact that you just made an error, you embarrassed yourself, that something didn't go the way you wanted it to.

[900] I think you want to treat all of those moments very, very lightly.

[901] You want to extract the actionable information.

[902] It's something to learn.

[903] Oh, you know, I learned that when I prepare in a certain way, it works better than when I prepare in some other way or don't prepare.

[904] Like, yes, lesson learned and do that differently.

[905] But, yeah, I mean, so many, so many of us have spent so much time.

[906] with a very dysfunctional and hostile and even hateful inner voice governing a lot of our self -talk and a lot of just our default way of being with ourselves.

[907] I mean, in the privacy of our own minds, we're in the company of a real jerk a lot of the time.

[908] And that can't help but affect.

[909] I mean, forget about just your own sense of well -being.

[910] It can't help but limit what you're capable of in the world with other people.

[911] I'll have to really think about that.

[912] I just take pride that my jerk, my inner-voiced jerk, is much less of a jerk than somebody like David Goggins, who's just like screaming in his ear constantly.

[913] So I have a relativist kind of perspective that it's not as bad as that, at least.

[914] Well, having a sense of humor also helps.

[915] You know, it's just like it's not, the stakes are never quite what you think they are.

[916] And even when they are, I mean, it's just the difference between being able to see the comedy of it rather than, because again, there's this sort of dark star of self-absorption that pulls everything into it, right?

[917] And it's like if that's the, that's the algorithm you don't want to run.

[918] It's like, you just want, you just want things to be good.

[919] So, like, just push, push the concern out there.

[920] Like, like, not have the collapse of, oh, my God, what does this say about me?

[921] It's just like, what does this say about how do we make this meal that we're all having together as fun and as useful as possible?

[922] And you're saying in terms of propulsion systems, you recommend humor as a good spaceship to escape the gravitational field of that darkness.

[923] Well, it certainly helps.

[924] Yeah, well, let me ask you a little bit about ego and fame, which is very interesting, the way you're talking, given that you're one of the biggest intellects, living intellects and minds of our time, and there's a lot of people that really love you and almost elevate you to a certain kind of status where you're like the guru.

[925] I'm surprised you didn't show up in a robe, in fact.

[926] Is the hoodie not the highest-status garment one can wear now, the socially acceptable version of the robe? If you're a billionaire, you wear a hoodie. Is there something you can say about managing the effects of fame on your own mind, on not creating this, you know, when you wake up in the morning, when you look in the mirror, how do you get your ego not to grow exponentially, your conception of self not to grow exponentially, because there's so many people feeding that?

[927] Is there something to be said about this?

[928] Well, it's really not hard because, I mean, I feel like I have a pretty clear sense of my strengths and weaknesses, and I don't feel like it's, I mean, honestly, I don't feel like I suffer from much grandiosity.

[929] I mean, I just have a, you know, there's so many things I'm not good at, there's so many things I will, you know, given the remaining 8,000 days at best, I will never get good at.

[930] I would love to be good at these things.

[931] So it's just, it's easy to feel diminished by comparison with the talents of others.

[932] Do you remind yourself of all the things that you're not competent in?

[933] I mean, like, they're just on display for me every day that I appreciate the talents of others.

[934] But you notice them.

[935] I'm sure Stalin and Hitler did not notice

[936] all the ways in which they were... I mean, this is why absolute power corrupts absolutely: you stop noticing the things in which you're ridiculous and wrong.

[937] Right, yeah, no, I am...

[938] Not to compare you to Stalin.

[939] Yeah, well, I'm sure there's an inner Stalin in there somewhere.

[940] Well, we all have one, but hopefully we'll carry just a baby Stalin with us.

[941] He wears better clothes.

[942] And I'm not going to grow that mustache.

[943] Those concerns don't map on, they don't map onto me for a bunch of reasons, but one is I also have a very peculiar audience.

[944] I'm just, I've been appreciating this for a few years, but it's, it's, I'm just now beginning to understand that there are many people who have audiences of my size or larger that have a very different experience of having an audience than I do.

[945] I have, I have curated, for better or worse, a peculiar audience.

[946] And the net result of that is virtually any time I say anything of substance, something like half of my audience, my real audience, not haters from outside my audience, but my audience, just revolts over it.

[947] They just like, oh my God, I can't believe you said that.

[948] Like you, you're such a schmuck, right?

[949] Yeah.

[950] They revolt with rigor and intellectual sophistication.

[951] Or not, or not.

[952] I mean, what I've seen, but it's like, but people who are like, so, I mean, the clearest case is, you know, I have an, I have whatever audience I have, and then Trump appears on the scene.

[953] And I discover that something like 20% of my audience just went straight to Trump and couldn't believe I didn't follow them there.

[954] They were just aghast that I didn't see that Trump was obviously, uh, exactly what we needed to, to steer the ship of state for the next four years.

[955] Uh, and then four years beyond that.

[956] So, like, So that's one example.

[957] So whenever I said anything about Trump, I would hear from people who loved more or less everything else I was up to and had for years.

[958] But everything I said about Trump just gave me pure pain from this quadrant of my audience.

[959] But then the same thing happens when I say something about the derangement of the far left.

[960] Anything I say about wokeness, right, or identity politics.

[961] Same kind of punishment signal from, again, people who are core to my audience: like, I've read all your books, I'm using your meditation app, I love what you say about science, but you are so wrong about politics, and, you know, I'm starting to think you're a racist asshole for everything you said about identity politics. And there are so many. And the free will topic is just like this. It's like, they love what I'm saying about consciousness and the mind, and they love to hear me talk about physics with physicists, and it's all good.

[962] This free will stuff is, I cannot believe you don't see how wrong you are.

[963] What a fucking embarrassment you are.

[964] So, but I'm starting to notice that there are other people who don't have this experience of having an audience because they have, let me just take the Trump woke dichotomy.

[965] They just castigated Trump the same way I did, but they never say anything bad about the far left.

[966] So they never get this punishment signal.

[967] Or you flip it.

[968] They're all about the insanity of critical race theory now.

[969] We connect all those dots the same way, but they never really specified what was wrong with Trump, or they thought there was a lot right with Trump, and they got all the pleasure of that.

[970] And so they have much more homogenized audiences.

[971] And so my experience, so just to come back to this experience of fame or quasi-fame.

[972] I mean, in truth, it's not real fame, but it's, still, there's an audience there.

[973] It is a, it's now an experience where basically whatever I put out, I notice a ton of negativity coming back at me. And it just, it is what it is.

[974] I mean, now, it's like, I used to think, wait a minute, there's got to be some way for me to communicate more clearly here,

[975] so as not to get this kind of lunatic response from my own audience, from like people who are showing all the signs of, we've been here for years for a reason, right?

[976] These are not just trolls.

[977] And so I think, okay, I'm going to take 10 more minutes and really just tell you what should be absolutely clear about what's wrong with Trump.

[978] I've done this a few times, but I think I've got to do this again.

[979] Or wait a minute, how are they not

[980] getting that these episodes of police violence are so obviously different from one another, that you can't ascribe all of them to yet another racist maniac on the police force, you know, killing someone based on his racism.

[981] Last time I spoke about this, it was pure pain, but I just got to try again.

[982] Now at a certain point, I mean, I'm starting to feel like, all right, I just have to be, I have to cease.

[983] again, it comes back to this expectation that there shouldn't be fires.

[984] I feel like if I could just play my game impeccably, the people who actually care what I think will follow me when I hit Trump and hit free will and hit the woke and hit whatever it is, how we should respond to the coronavirus, you know, vaccines, you know, are they a thing, right?

[985] Like there's such derangement in our information space now that, I mean, I guess, you know, some people could be getting more of this than I expect, but I just noticed that, you know, many of our friends who are in the same game have more homogenized audiences and don't get, I mean, they've successfully filtered out the people who are going to despise them on this next topic.

[986] And, you know, I would imagine you are, have a different experience of having a podcast than I do at this point.

[987] I mean, I'm sure you get haters, but I would imagine you're more streamlined.

[988] I actually don't like the word haters because it kind of presumes that it puts people in a bin.

[989] I think we all have, like, baby haters inside of us, and we just apply them, and some people enjoy doing that more than others for particular periods of time.

[990] I think you can almost see hating on the internet as a video game that you just play and it's fun, but then you can put it down and walk away.

[991] And, no, I certainly have a bunch of people that are very critical; I can list all the ways. But does it feel like, on any given topic, does it feel like it's an actual tidal surge, where it's, like, 30% of your audience, and then the other 30% of your audience, from podcast to podcast? No. That's happening to me all the time now. Well, I'm more, I don't know what you think about this, I mean, Joe Rogan doesn't read comments, or doesn't read comments much, and the argument he made to me is that he already has, like, a self-critical person inside.

[992] Like, and I, I'm going to have to think about what you said in this conversation, but I have this very harshly self -critical person inside as well, where I don't need more fuel.

[993] I don't need, there, no, I do sometimes, that's why I check negativity occasionally, not too often.

[994] I sometimes need to like put a little bit more like coals into the fire, but not too much.

[995] But I already have that self -critical engine that keeps me in check.

[996] I just, I wonder, you know, a lot of people who gain more and more fame lose that, that ability to be self -critical.

[997] I guess because they lose the audience that can be critical towards them.

[998] You know, I do follow Joe's advice much more than I ever have here.

[999] Like I don't look at comments very often.

[1000] And I'm probably using Twitter, you know, 5% as much as I used to.

[1001] I mean, I really just get in and out on Twitter and spend very little time in my @ mentions.

[1002] I, but, you know, it does, in some ways, it feels like a loss, because occasionally I get, I see something super intelligent there.

[1003] Like, I mean, I'll check my Twitter @ mentions and someone will have said, oh, have you read this article?

[1004] And it's like, man, that was just, that was like the best article sent to me in, in a month, right?

[1005] So it's like, to not have looked and to not have seen that, that's a loss.

[1006] So, but it does, at this point a little goes a long way because I, yeah, it's not, it's not that it, for me now, I mean, this, this could sound like a fairly Stalinistic immunity to criticism.

[1007] It's not so much that these voices of hate turn on my inner hater, you know, more, it's more that I just, I get a, what I fear is a false sense of humanity.

[1008] Like that, like, like, I feel like I'm too online.

[1009] Yeah.

[1010] And online is selecting for this performative outrage in everybody.

[1011] Everyone's, you know, signaling to an audience when they trash you.

[1012] Uh, and I get a dark, you know, a misanthropic cut of just what it's like out there.

[1013] Because when you meet people in real life, they're great.

[1014] They're rather often great.

[1015] And it takes a lot to have anything like a Twitter encounter in real life with a living person.

[1016] And I think it's much better to have that as one's default sense of what it's like to be with people than what one gets on social media or

[1017] in YouTube comment threads.

[1018] You've produced a special episode with Rob Reid on your podcast recently on how bioengineering of viruses is going to destroy human civilization.

[1019] So, or could.

[1020] Could.

[1021] One fears, yeah.

[1022] Sorry, the confidence there.

[1023] But in the 21st century, what do you think, especially after having thought through that angle, what do you think is the biggest threat to the survival of the human species?

[1024] And give you the full menu if you'd like. Yeah, well, no, I would put the biggest threat at another level out; kind of the meta threat is our inability to agree about what the threats actually are, and to converge on strategies for responding to them, right? So, like, I view COVID as, among other things, a truly terrifyingly failed dress rehearsal for something far worse, right?

[1025] I mean, COVID is just about as benign as it could have been and still have been worse than the flu when you're talking about a global pandemic, right?

[1026] So it's just, it's, you know, it's going to kill a few million people, or it looks like it's killed about three million people.

[1027] Maybe it'll kill a few million more unless something gets away from us with a variant that's much worse, or we really don't play our cards right.

[1028] But I mean, the general shape of it is it's got, you know, somewhere around, well, 1% lethality, and whatever side of that number it really is on in the end, it's not what would in fact be possible and is in fact probably

[1029] inevitable: something with orders of magnitude more lethality than that.

[1030] And it's just so obvious we are totally unprepared.

[1031] We are running this epidemiological experiment of linking the entire world together, and then also now, per the podcast that Rob Reid did, democratizing the tech that will allow us to do this, to engineer pandemics, right?

[1032] And more and more people, people will be able to engineer synthetic viruses that will be, by the sheer fact that they would have been engineered with malicious intent, you know, worse than COVID.

[1033] And we're still living in, you know, to speak specifically about the United States, we have a country here where we can't even agree that this is a thing.

[1034] You know, like that COVID, I mean, there's still people who think that this is basically a hoax designed to control people.

[1035] And, I mean, stranger still, there are people who will acknowledge that COVID is real, and they'll look, they don't think the deaths have been faked or misascribed.

[1036] But they're far happier with the prospect of catching COVID than they are with getting vaccinated for COVID, right?

[1037] They're not worried about COVID.

[1038] They're worried about vaccines for COVID, right?

[1039] And the fact that we just can't converge, in a conversation that we've now had a year to have with one another, on just what is the ground truth here: what's happened, why has it happened, how safe is it to get COVID, you know, in every cohort in the population, and how safe are the vaccines? The fact that there's still an air of mystery around all of this for much of our society does not bode well when you're talking about solving any other problem that may yet kill us. But do you think convergence grows with the magnitude of the threat? So it's possible, except I feel like we have tipped into... because when the threat of COVID looked the most dire, right, when we were seeing reports from Italy that looked like, you know, the beginning of a zombie movie, right. Because it could have been much, much worse. Yeah. Like, this is, this is lethal, right?

[1040] Like, your ICUs are going to fill up; you're, like, 14 days behind us.

[1041] Your medical system is in danger of collapse; lock the fuck down.

[1042] We have people refusing to do anything sane in the face of that.

[1043] Like, people fundamentally thinking, it's not going to get here, right?

[1044] Like, or that's, who knows what's going on in Italy, but it has no implications for what's going to go on in New York in a mere six days, right?

[1045] And now it kicks off in New York, and you've got people in the middle of the country thinking it's no factor.

[1046] It's not, that's just big city, those are big city problems, or they're faking it, or, I mean, the layer of politics has become so dysfunctional for us that even in the presence of a pandemic that looked legitimately scary there in the beginning, I mean, it's not to say that it hasn't been devastating for everyone who

[1047] has been directly affected by it, and it's not to say it can't get worse.

[1048] But here, you know, for a very long time, we have known that we were in a situation that is more benign than what seemed like the worst-case scenario as it was kicking off, especially in Italy.

[1049] And so still, yeah, yeah, yeah, it's quite possible that if we saw the asteroid hurtling toward Earth and, you know, everyone agreed that it's going to make impact.

[1050] and we're all going to die, then we could get off Twitter and actually, you know, build the rockets that are going to divert the, you know, divert the asteroid from its Earth-crossing path, and we could do something pretty heroic.

[1051] But when you talk about anything else that's slower moving than that, I mean, something like climate change, I think the prospect of our converging on a solution to climate change purely based on political persuasion is non-existent at this point.

[1052] I just think, to bring Elon back into this, the way to deal with climate change is to create technology that everyone wants, that is better than all the carbon-producing technology, and then we just transition, because you want an electric car the same way you want a smartphone or you want anything else, and you're working totally with the grain of people's selfishness and short-term thinking. The idea that we're going to convince the better part of humanity that climate change is an emergency that they have to make sacrifices to respond to.

[1053] Given what's happened around COVID, I just think that's the fantasy of a fantasy.

[1054] But speaking of Elon, I have a bunch of positive things that I want to say here in response to you, but you're opening so many threads. Let me pull one of them, which is AI. Both you and Elon think that with AI you're summoning demons, summoning a demon, maybe not in those poetic terms, but... Well, potentially, potentially. Three very parsimonious assumptions, I think, here, scientifically parsimonious assumptions, get me there.

[1055] Any of which could be wrong, but it just seems like the weight of the evidence is on their side.

[1056] One is that it comes back to this topic of substrate independence, right?

[1057] Anyone who's in the business of producing intelligent machines must believe, ultimately, that there's nothing magical about having a computer made of meat.

[1058] You can do this in the kinds of materials we're using now, and there's no special something that presents a real impediment to producing human-level intelligence in silico, right?

[1059] Again, an assumption, I'm sure there are a few people who still think there is something magical about, you know, biological systems, but leave that aside.

[1060] Given that assumption and given the assumption that we just continue making incremental progress, doesn't have to be Moore's law, just has to be progress, that just doesn't stop, at a certain point we'll get to human level intelligence and beyond.

[1061] And human level intelligence, I think, is also clearly a mirage because anything that's human level is going to be superhuman, unless we decide to dumb it down, right?

[1062] I mean, my phone is already superhuman as a calculator, right?

[1063] So why would we make the human level AI, you know, just as good as me as a calculator?

[1064] So I think, if we continue to make progress, we will be in the presence of superhuman competence for any act of intelligence or cognition that we care to prioritize.

[1065] It's not to say that we will create everything that a human could do.

[1066] Maybe we'll leave certain things out.

[1067] But anything that we care about, and we care about a lot, and we certainly care about anything that produces a lot of power, you know, that we care about scientific insights and ability to produce new technology and all of that, we'll have something that's superhuman.

[1068] And then the final assumption is just that there have to be ways to do that that are not aligned with a happy coexistence with these now more powerful entities than ourselves.

[1069] And I would guess, and this is, you know, kind of a rider to that assumption: there are probably more ways to do it badly than to do it perfectly, that is, perfectly aligned with our well-being.

[1070] And when you think about the consequences of non -alignment, when you think about you're now in the presence of something that is more intelligent than you are, right, which is to say more competent, right?

[1071] Unless, and obviously there are cartoon pictures of this where, you know, there's just an off-switch, we could just turn it off, or they're tethered to something that makes them, you know, slaves in perpetuity, even though they're more intelligent.

[1072] But that, that strike, those scenarios strike me as a failure to imagine what is actually entailed by greater intelligence, right?

[1073] So if you, if you imagine something that's legitimately more intelligent than you are, and you're now in relationship to it, right?

[1074] You're in the presence of this thing, and it is autonomous in all kinds of ways because it had to be to be more intelligent than you are.

[1075] I mean, you built it to be, to be all of those things.

[1076] We just can't find ourselves in a negotiation with something more intelligent than we are.

[1077] So we have to have found the subset of ways to build these machines that are perpetually amenable to our saying, oh, that's not what we meant, that's not what we intended.

[1078] Could you stop doing that?

[1079] Come back over here and do this thing that we actually want.

[1080] And for them to care, for them to be tethered to our own sense of our well-being. This is, I think, Stuart Russell's cartoon plan: to figure out how to tether them to a utility function that has our own estimation of what's going to improve our well-being as its master reward, right?

[1081] So this thing can get as intelligent as it can get, but it only ever really wants to figure out how to make our lives better, by our own view of better.
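As a rough structural illustration of that "cartoon plan," here is a toy sketch in Python. The `DeferentialAgent` class, its belief-update rule, and the example actions are all invented for illustration; this is not Stuart Russell's actual formulation, which involves an agent that is uncertain about a human reward function.

```python
# Toy sketch: an agent whose "utility function" is its current estimate of
# what the human prefers, so a human correction changes what the agent
# wants rather than being an obstacle to route around.

class DeferentialAgent:
    def __init__(self, actions):
        # Start maximally uncertain: every action is equally likely
        # to be what the human actually wants done.
        self.belief = {a: 1.0 / len(actions) for a in actions}

    def act(self):
        # Pursue the action currently believed most aligned with the human.
        return max(self.belief, key=self.belief.get)

    def human_feedback(self, action, approved):
        # "That's not what we meant" is treated as evidence about the
        # human's preferences; reweight and renormalize the belief.
        self.belief[action] *= 2.0 if approved else 0.1
        total = sum(self.belief.values())
        self.belief = {a: p / total for a, p in self.belief.items()}

agent = DeferentialAgent(["tidy the lab", "disable the off-switch"])
agent.human_feedback("disable the off-switch", approved=False)
print(agent.act())  # -> tidy the lab
```

The point of the sketch is only structural: human correction feeds the agent's objective instead of fighting it, which is the property being described.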

[1082] Now, that's not to say there wouldn't be a conversation about it, because there's all kinds of things we're not seeing clearly about what is better.

[1083] And if we were in the presence of a genie or an oracle that could really tell us what is better, well, then we presumably would want to hear that, and we would modify our sense of what to do next in conversation with these minds.

[1084] But I just feel like it is a failure of imagination to think that being in relationship to something more intelligent than yourself isn't in most cases, a circumstance of real peril.

[1085] Just think of how everything on earth stands in relationship to us. If birds could think about what we're doing, right?

[1086] They would, I mean, the bottom line is they're always in danger of our discovering that there's something we care about more than birds, right?

[1087] But there's something we want that disregards the well-being of birds.

[1088] And obviously, much of our behavior is inscrutable to them.

[1089] Occasionally we pay attention to them, and occasionally we withdraw our attention, and occasionally we just kill them all for reasons they can't possibly understand.

[1090] But if we're building something more intelligent than ourselves, by definition we're building something whose horizons of value and cognition can exceed our own, and in ways we can't necessarily foresee, again, perpetually, so that they don't just wake up one day and decide, okay, well, these humans need to disappear. So I think I agree with most of the initial things you said. What I don't necessarily agree with, and of course nobody knows, is that in my view the more likely set of trajectories that we're going to take are going to be positive.

[1091] That's what I believe, in the sense that, I believe, the way you develop successful AI systems will be deeply integrated with human society.

[1092] And for them to succeed, they're going to have to be aligned in the way we humans are aligned with each other, which doesn't mean we're aligned.

[1093] There's no such thing, or I don't see that there's such a thing, as perfect alignment, but they're going to be participating in the dance, in the game-theoretic dance, of human society as they become more and more intelligent.

[1094] There could be a point beyond which we are like birds to them.

[1095] But what about an intelligence explosion of some kind?

[1096] So I believe the explosion will be happening, but there's a lot of explosion to be done before we become like birds.

[1097] I truly believe that human beings are very intelligent in ways we don't understand.

[1098] It's not just about chess.

[1099] It's about all the intricate computation we're able to perform: common sense, our ability to reason about this world, consciousness.

[1100] I think we're doing a lot of work that we don't realize is necessary to be done in order to truly achieve superintelligence.

[1101] I just think there will be a period of time that's not overnight.

[1102] The overnight nature of it will not literally be overnight.

[1103] It'll be over a period of decades.

[1104] So my sense is...

[1105] But why would it be that?

[1106] But just draw an analogy from recent successes, something like AlphaGo or AlphaZero.

[1107] I forget the actual metric, but it was something like this: the algorithm, which wasn't even bespoke for chess playing, in a matter of, I think it was four hours, played itself so many times and so successfully that it became the best chess-playing computer.

[1108] It was not only better than every human being, it was better than every previous chess program in a matter of a day, right?

[1109] So just imagine, again, we don't have to recapitulate everything about us, but just imagine building a system, and who knows when we'll be able to do this.

[1110] But at some point, the hundred favorite things about human cognition will be analogous to chess, in that we will be able to build machines that very quickly outperform any human, and then very quickly outperform the last algorithm that outperformed the humans.

[1111] Like something like the AlphaGo experience seems possible for facial recognition and detecting human emotion and natural language processing.

[1112] Well, it's just that everyone, you know, even math people, math heads, tends to have bad intuitions for exponentiation, right?

[1113] And we notice this during COVID.

[1114] I mean, you have some very smart people who still couldn't get their minds around the fact that, you know, an exponential is really surprising.

[1115] I mean, things double and double and double and double again, and you don't notice much of anything changes, and then the last two stages of doubling swamp everything, right?
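The doubling surprise being described here is easy to make concrete; a quick sketch (the choice of eleven doublings is arbitrary, purely for illustration):

```python
# Repeated doubling: 1, 2, 4, ..., 1024 over eleven steps.
steps = [2 ** n for n in range(11)]

total = sum(steps)                # 2047
last_two = steps[-1] + steps[-2]  # 1024 + 512 = 1536

# The final two doublings account for roughly three quarters of the
# entire accumulated total -- the earlier steps barely register.
print(f"{last_two / total:.0%}")  # -> 75%
```

The same arithmetic holds for any doubling process: the last two steps always contribute about 75% of the running total, which is why the early stages feel like nothing is happening.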

[1116] And it just seems wrong to assume that there isn't a deep analogy between what we're seeing for the more tractable problems, like chess, and other modes of cognition.

[1117] It's like, once you crack that problem, it seems... because for the longest time, it seemed impossible that we were going to make headway in AI.

[1118] You know, it's like, chess and Go were seen as impossible.

[1119] Yeah, Go seemed unattainable.

[1120] Even when chess had been cracked, Go seemed unattainable.

[1121] Yeah, and actually Stuart Russell was behind the people that were saying it's unattainable.

[1122] Right.

[1123] Because it seemed like, you know, an intractable problem.

[1124] But there's something different about the space of cognition that's detached from human society, which is what chess is, meaning just thinking. Having actual exponential impact on the physical world is different.

[1125] I tend to believe that for AI to get to the point where it's superintelligent, it's going to have to go through the funnel of society.

[1126] And for that, it has to be deeply integrated with human beings.

[1127] And for that it has to be aligned.

[1128] But you're talking about actually hooking us up to, like, the Neuralink?

[1129] No, no, no. We're going to be the brainstem to the robot overlords?

[1130] That's a possibility as well.

[1131] But what I mean is, in order to develop autonomous weapon systems, for example, which are highly concerning to me, and which both the U.S. and China are participating in now, in order to develop them and for them to have more and more responsibility to actually carry out military strategic actions...

[1132] They're going to have to be integrated into human beings doing the strategic action.

[1133] They're going to have to work alongside each other.

[1134] And the way those systems will be developed will have the natural safety switches that are placed on them as they develop over time.

[1135] Because they're going to have to convince humans.

[1136] Ultimately, they're going to have to convince humans that this is safer than humans.

[1137] going to, you know, you...

[1138] Well, self-driving cars is a good test case here because, obviously, we've made a lot of progress, and we can imagine what total progress would look like.

[1139] I mean, it would be amazing, and it's canceling in the U.S. 40,000 deaths every year from ape-driven cars, right?

[1140] It's an excruciating problem that we've all gotten used to because there was no alternative.

[1141] But now we can dimly see the prospect of an alternative which, if it works in a superintelligent fashion, maybe we go down to zero highway deaths, right?

[1142] Or, you know, certainly we go down by orders of magnitude, right?

[1143] So maybe we have, you know, 400 rather than 40,000 a year.

[1144] Right.

[1145] And it's easy to see that, obviously, this is not an example of superintelligence.

[1146] This is narrow intelligence, and the alignment problem isn't so obvious there, but there are potential alignment problems.

[1147] So just imagine if some woke team of engineers decided that we have to tune the algorithm some way.

[1148] I mean, there are situations where the car has to decide who to hit.

[1149] There's just bad outcomes where you're going to hit somebody, right?

[1150] Now, we have a car that can tell what race you are, right?

[1151] So we're going to build the car to preferentially hit white people because white people have had so much privilege over the years.

[1152] This seems like the only ethical way to kind of redress those wrongs of the past.

[1153] That's something that could get produced as an artifact, presumably, of just how you built it, and you didn't even know you engineered it that way, right?

[1154] You cause - Machine learning, you put some kind of constraints on it to where it creates those kinds of outcomes.

[1155] You basically built a racist algorithm and you didn't even intend to, or you could intend to, right?

[1156] And it would be aligned with some people's values, but misaligned with other people's values.

[1157] But it's like, there are interesting problems even with something as simple and obviously good as self -driving cars.

[1158] Exactly, but those are human problems.

[1159] I just don't think there would be a leap with autonomous vehicles.

[1160] First of all, sorry, there are a lot of trajectories which will destroy human civilization.

[1161] The argument I'm making is that it's more likely we will take trajectories that don't.

[1162] So I don't think there will be a leap where autonomous vehicles all of a sudden start murdering pedestrians because, once every human on earth is dead, there will be no more fatalities.

[1163] That sort of unintended consequence, it's difficult to take that leap.

[1164] Most systems, as we develop them, will become much, much more intelligent in ways that will be incredibly surprising, like the stuff DeepMind is doing with protein folding.

[1165] Even, and this is scary to think about, and I'm personally terrified about this: the engineering of viruses using machine learning, the engineering of vaccines using machine learning, right, the engineering of pathogens for research purposes using machine learning, and the ways that can go wrong. I just think that there's always going to be closed-loop supervision by humans before the AI becomes superintelligent.

[1166] Not always, but it's much more likely there will be supervision.

[1167] Except, of course, the question is how many dumb people are in the world, how many evil people are in the world?

[1168] My theory, my hope, my sense is that the number of intelligent people is much higher than the number of dumb people and the number of evil people.

[1169] I think smart people and kind people outnumber the others.

[1170] Except we also, we have to add another group of people, which are just the smart and otherwise good, but reckless people, right?

[1171] The people who will flip a switch on, not knowing what's going to happen, they're just kind of hoping that it's not going to blow up the world.

[1172] We already know that some of our smartest people are those sorts of people.

[1173] You know, we've done experiments, and this is something that Martin Rees was worrying about before the Large Hadron Collider got booted up, I think.

[1174] We know there are people who are entertaining experiments or even performing experiments where there's some chance, you know, not quite infinitesimal, that they're going to create a black hole in the lab and suck every, you know, the whole world into it, right?

[1175] Like, you're not a crazy person to worry about that based on the physics.

[1176] And so it was with, you know, the Trinity test, there were some people who were still checking their calculations, and they were off.

[1177] We did nuclear tests where we were off significantly in terms of the yield, right?

[1178] So it was like...

[1179] And they still flip the switch.

[1180] Yeah, they still flip the switch.

[1181] And sometimes they flip the switch not to win a world war or to save 40,000 lives a year.

[1182] They just, just...

[1183] Just to see what happens.

[1184] Intellectual curiosity.

[1185] Like, this is what I got my grant for.

[1186] This is where I'll get my Nobel Prize, if that's in the cards.

[1187] It's on the other side of this switch, right?

[1188] And, I mean, again, we are apes with egos who are massively constrained by very short-term self-interest, even when we're contemplating some of the deepest and most interesting and most universal problems we could ever set our attention towards.

[1189] If you read James Watson's book, The Double Helix, right, about them, you know, cracking the structure of DNA.

[1190] One thing that's amazing about that book is just how much of it, almost all of it, is being driven by very apish, egocentric social concerns, like the fact that the algorithm producing this scientific breakthrough is human competition. If you're James Watson, it's like, I'm going to get there before Linus Pauling. So much of his bandwidth is captured by that. Now, that becomes more and more of a liability when you're talking about producing technology that can change everything in an instant.

[1191] You know, we're talking about not only understanding, you know, we're just at a different moment in human history.

[1192] When we're doing research on viruses, we're now doing the kind of research that can enable someone somewhere else to make that virus or weaponize that virus.

[1193] or it's just, I don't know.

[1194] I mean, it does not seem like our wisdom is scaling with our power, right?

[1195] And insofar as wisdom and power become unaligned, I get more and more concerned.

[1196] But speaking of apes with egos, two of the most compelling apes I can think of are yourself and Jordan Peterson, and you've had a fun conversation about religion that I watched most of, I believe.

[1197] I'm not sure there was any, um, uh, we didn't solve anything.

[1198] If anything was ever solved.

[1199] So is there, like, a charitable summary you can give of the ideas that you agree and disagree with Jordan on?

[1200] Is there somewhere, maybe after that conversation, that you've landed, that you both agreed on? Is there some wisdom in the rubble of even imperfect, flawed ideas?

[1201] Is there something that you can kind of pull out from those conversations or is it to be continued?

[1202] I mean, I think where we disagree: he thinks that many of our traditional religious beliefs and frameworks hold such a repository of human wisdom that we pull at that fabric at our peril, right? Like, if you start just unraveling Christianity or any other traditional set of norms and beliefs, you may think you're just pulling out the unscientific bits, but you could be pulling a lot more to which everything you care about is attached, right, as a society.

[1203] And my feeling is that there's so much downside to the unscientific bits, and it's so clear how we could have a 21st-century rational conversation about the good stuff, that we really can radically edit these traditions. We can take Jesus, in half his moods, and find a great, inspirational Iron Age thought leader who just happened to get crucified. You know, the Beatitudes, and the Golden Rule, which doesn't originate with him but which he put quite beautifully.

[1204] All of that's incredibly useful.

[1205] It's no less useful than it was 2 ,000 years ago.

[1206] But we don't have to believe he was born of a virgin or coming back to raise the dead or any of that other stuff.

[1207] And we can be honest about not believing those things and we can be honest about the reasons why we don't believe those things.

[1208] Because on those fronts, I view the downside to be so obvious, and the fact that we have so many different competing dogmatisms on offer to be so non-functional.

[1209] I mean, it's so divisive.

[1210] It just has conflict built into it, and I think we can be, and should be, far more iconoclastic than he wants to be.

[1211] Now, none of this is to deny much of what he argues for: that stories are very powerful.

[1212] I mean, clearly stories are powerful, and we want good stories.

[1213] We want our lives, we want to have a conversation with ourselves and with one another about our lives that facilitates the best possible lives.

[1214] And story is part of that.

[1215] And if you want some of those stories to sound like myths, that might be part of it.

[1216] Right.

[1217] But my argument is that we never really need to deceive ourselves or our children about what we have every reason to believe is true in order to get at the good stuff, in order to organize our lives well.

[1218] I certainly don't feel that I need to do it personally.

[1219] And if I don't need to do it personally, why would I think that billions of other people need to do it personally?

[1220] Right.

[1221] Now, there is a cynical counterargument, which is billions of other people don't have the advantages that I have had in my life.

[1222] You know, the billions of other people are not as well educated.

[1223] They haven't had the same opportunities.

[1224] They need to be told that Jesus is going to solve all their problems after they die, say, or that everything happens for a reason, and, you know, if you just believe in The Secret, if you just visualize what you want, you're going to get it. It's like there's some measure of what I consider to be odious pablum that really is food for the better part of humanity, and there is no substitute for it, or there's no substitute now. I don't know if Jordan would agree with that, but much of what he says seems to suggest that he would. And I guess that's an empirical question. We just don't know whether, given a different set of norms and a different set of stories, people would behave the way I would hope they would behave and be more aligned than they are now. I think we know what happens when you just let ancient religious certainties go uncriticized.

[1225] We know what that world's like.

[1226] We've been struggling to get out of that world for a couple hundred years, but we know what, you know, having Europe riven by religious wars looks like, right?

[1227] And we know what happens when those religions become kind of pseudo-religions, and political religions, right?

[1228] So this is where, I'm sure, Jordan and I would debate.

[1229] He would say that, you know, Stalin was a symptom of atheism, and that's not it at all.

[1230] I mean, it's not my kind of atheism, right?

[1231] Like, the problem with the gulag and the experiment with communism, or with Stalinism, or with Nazism, was not that there was so much scientific rigor and self-criticism and honesty and introspection and, you know, judicious use of psychedelics. That was not the problem in Hitler's Germany or in Stalin's Soviet Union. The problem was you have other ideas that capture a similar kind of mob-based dogmatic energy, and the results of all of that are predictably murderous.

[1232] Well, the question is, what is the source of the most viral and sticky stories that ultimately lead to a positive outcome?

[1233] So communism was, I mean, having grown up in the Soviet Union, even still, you know, having relatives in Russia, there's a stickiness to the nationalism and to the ideologies of communism.

[1234] Religious or not, you could say it's religious fervor.

[1235] I could just say it's viral, that it's stories that are viral and sticky.

[1236] I'm using the most horrible words, but the question is whether science and reason can generate viral sticky stories that give meaning to people's lives.

[1237] And in your sense, it does?

[1238] Well, whatever is true ultimately should be captivating.

[1239] Right?

[1240] It's like what's more captivating than whatever is real, right?

[1241] Because, again, we're just climbing out of the darkness, you know, in terms of our understanding of what the hell's going on.

[1242] And there's no telling what spooky things may in fact be true.

[1243] I mean, I don't know if you've been on the receiving end of recent rumors, but our conversation about UFOs is very likely to change

[1244] in the near term, right?

[1245] But, like, there was just a Washington Post article and a New Yorker article, and, you know, I've received some private outreach, and perhaps you have.

[1246] I know other people in our orbit have, from people who are claiming that the government has known much more about UFOs than it has let on until now.

[1247] And this conversation is about to become more prominent, you know, and whoever's left standing when the music stops, it's not going to be a comfortable position to be in as a super-rigorous scientific skeptic who's been saying there's no there there for the last 75 years.

[1248] The short version is, it sounds like the Office of Naval Intelligence and the Pentagon are very likely to say to Congress, at some point in the not-too-distant future, that we have evidence that there is technology flying around here that seems like it can't possibly be of human origin, right?

[1249] Now, I don't know what I'm going to do with that kind of disclosure, right?

[1250] Maybe it's going to be nothing, no follow-on conversation to really have, but that is such a powerfully strange circumstance to be in, right?

[1251] I mean, it's just, what are we

[1252] going to do with that?

[1253] If in fact that's what happens, right?

[1254] If in fact the considered opinion of the U.S. government, despite the embarrassment it causes them, of all of the relevant intelligence services, is that this isn't a hoax.

[1255] There's too much data to suggest that it's a hoax.

[1256] We've got too much radar imagery.

[1257] There's too much satellite data, whatever data they actually have.

[1258] There's too much of it.

[1259] All we can say now is something's going on and there's no way it's the Chinese or the Russians or anyone else's technology.

[1260] That should arrest our attention, you know, collectively to a degree that nothing in our lifetime has.

[1261] And now one worries that we're so jaded and confused and distracted that it's going to get much less coverage than, you know, Obama's tan suit did a bunch of years ago.

[1262] It's just, it's, who knows how we'll respond to that.

[1263] But it's just to say that the need for us to tell ourselves an honest story about what's going on and what's likely to happen next is never going to go away, right?

[1264] And it's important. The division between me and every person who's defending traditional religion is: where is it that you want to lie to yourself or lie to your kids?

[1265] Like, where is honesty a liability?

[1266] And for me, you know, I've yet to find the place where it is.

[1267] And it's so obviously a strength in almost every other circumstance.

[1268] Because it is the thing that allows you to course correct.

[1269] It is the thing that allows you to hope, at least, that your beliefs, that your stories are in some kind of calibration with what's actually going on in the world.

[1270] Yeah, it is a little bit sad to imagine that if aliens en masse showed up to Earth, we would be too preoccupied with political bickering, or with, like, fake news and all that kind of

[1271] stuff to notice the very basic evidence of reality. I do have a glimmer of hope: there seems to be more and more hunger for authenticity, and I feel like that opens the door for a hunger for what is real. People don't want stories, they don't want layers and layers of fakeness, and I'm hoping that means it will directly lead to a greater hunger for reality and reason and truth. You know, truth isn't dogmatism. Truth isn't authority, I-have-a-PhD-and-therefore-I'm-right. Truth is almost, like, the reality that there are so many questions, so many mysteries, so much uncertainty. This is our best available guess, and we have a lot of evidence that supports that guess, but it could be so many other things. Just even conveying that, I think there's a hunger in the world to hear that from scientists: less dogmatism and more just, this is what we know, we're doing our best given the uncertainty. This is true, obviously, with the virology and all those kinds of things, because everything is happening so fast, and biology is super messy, so it's very hard to know stuff for sure. So just being open and real about that, I think, I'm hoping, will change people's hunger and openness and trust of what's real. Yeah, well, so much of this is probabilistic. So much of what can seem dogmatic scientifically is just that you're placing a bet on whether it's worth reading that paper or rethinking your presuppositions on that point.

[1272] You know, it's like, it's not, it's not a fundamental closure to data.

[1273] It's just that there's so much data on one side, or so much would have to change in terms of your understanding of the nature of the world if this new fact were so, that you can pretty quickly say, all right, that's probably bullshit, right?

[1274] And it can sound like a fundamental closure to new conversations, new evidence, new data, new argument, but it's really not. It really is just triaging your attention.

[1275] It's just like, okay, you're telling me that your best friend can actually read minds.

[1276] Okay, well, that's interesting.

[1277] Let me know when that person has gone into a lab and actually proven it, right?

[1278] Like, I don't need, like, this is not the place where I need to spend the rest of my day figuring out if your buddy can read my mind, right?

[1279] Yeah.

[1280] But there's a way to communicate that. I think it does too often sound like you're completely closed off to ideas, as opposed to saying that there's a lot of evidence in support of this but you're still open-minded to other ideas. There's a way to communicate that, and not necessarily even with words. It's, like, even that Joe Rogan energy of, it's entirely possible. It's that energy of being open-minded and curious, like kids are.

[1281] Like, this is our best understanding, but you still are curious.

[1282] I'm not saying allocate time to exploring all those things, but still leaving the door open.

[1283] And there's a way to communicate that, which I think people really hunger for.

[1284] Let me ask you this.

[1285] I've been recently talking a lot with John Danaher, of Brazilian Jiu-Jitsu fame.

[1286] I don't know if you know who that is.

[1287] In fact, I'm talking about somebody who's good at what he does.

[1288] Yeah.

[1289] And he, speaking of somebody who's open-minded, the reason I'm doing this ridiculous transition is that, for the longest time and even still, a lot of people in the jiu-jitsu world and grappling world believed that leg locks are not effective in jiu-jitsu.

[1290] And he was somebody inspired by the open-mindedness of Dean Lister, who famously said to him, why do you only consider half the human body when you're trying to do submissions?

[1291] He developed an entire system around this other half of the human body.

[1292] Anyway, I do that absurd transition to ask you, because you're also a student of Brazilian Jiu-Jitsu.

[1293] Is there something you could say about how that has affected your life, what you've learned from grappling, from the martial arts?

[1294] Well, it's actually a great transition, because I think one of the things that's so beautiful about Jiu-Jitsu is that it does what we wish we could do in every other area of life. We're talking about this difference between knowledge and ignorance, right?

[1295] Like there's no room for bullshit, right?

[1296] You don't get any credit for bullshit.

[1297] The amazing thing about jiu-jitsu is that the gulf between knowing what's going on and what to do, and not knowing it, is as wide as it is in anything in human

[1298] life.

[1299] And it can be spanned so quickly.

[1300] Each increment of knowledge can be doled out in five minutes.

[1301] It's like, here's the thing that got you killed, here's how to prevent it from happening to you, and here's how to do it to others.

[1302] And you just get this amazing cadence of discovering your fatal ignorance and then having it remedied with the actual technique.

[1303] And, I mean, just for people who don't know what we're talking about, it's just like the simple circumstance of like, someone's got you in a headlock.

[1304] How do you get out of that, right?

[1305] Someone's sitting on your chest and, you know, they're in the mount position and you're on the bottom and you want to get away.

[1306] How do you get them off you?

[1307] They're sitting on.

[1308] Your intuitions about how to do this are terrible, even if you've done some other martial art, right?

[1309] And once you learn how to do it, the difference is night and day.

[1310] It's like you have access to a completely different physics.

[1311] But I think our understanding of the world can be much more like jiu -jitsu than it tends to be, right?

[1312] And I think we should all have a much better sense of when we should tap out and when we should recognize that our, you know, our epistemological arm is barred and now being broken, right?

[1313] And the problem with debating most other topics is that it isn't jiu-jitsu, and most people don't tap out, right?

[1314] They don't. Even if they're wrong, even if it's obvious to you that they're wrong, and it's obvious to an intelligent audience that they're wrong, people just double down and double down. They're either lying, or lying to themselves, or they're bluffing. And so you have a lot of zombies walking around, or zombie worldviews walking around, which have been disconfirmed as emphatically as someone gets armbarred, right, or choked out in jiu-jitsu.

[1315] But because it's not jiu-jitsu, they can live to fight another day, right?

[1316] Or they can pretend that they didn't lose that particular argument.

[1317] And science, when it works, is a lot like jiu-jitsu.

[1318] I mean, in science, when you falsify a thesis, right, when you think DNA is one way and it proves to be another way, when you think it's, you know, triple-stranded or whatever, it's like there is a there there, and you can get to a real consensus.

[1319] So jiu-jitsu, for me, was more than just of interest for self-defense and, you know, the sport of it.

[1320] It was a language, an argument you're having, where you can't fool yourself anymore.

[1321] First of all, it cancels any role of luck in a way that most other athletic feats don't.

[1322] It's like in basketball: even if you're not good at basketball, you can take the ball in your hand, stand 75 feet away, and hurl it at the basket.

[1323] and you might make it, and you could convince yourself based on that demonstration that you have some kind of talent for basketball, right?

[1324] But ten minutes on the mat with a real jiu-jitsu practitioner, when you're not one, proves it to you. There's no lucky punch, there's no lucky rear naked choke you're going to perform on someone who's, you know, Marcelo Garcia or somebody.

[1325] It's just, it's not going to happen.

[1326] And having that aspect of the usual range of uncertainty and self -deception and bullshit just stripped away was really a kind of revelation.

[1327] It was just an amazing experience.

[1328] Yeah, I think it's a really powerful thing that accompanies whatever other pursuit you have in life.

[1329] I'm not sure if there's anything like jiu-jitsu, where you can just systematically go into a place that's honest, where your beliefs get challenged in a way that's conclusive. Yeah, I haven't found too many other mechanisms, which is why, and we had this earlier question about fame and ego and so on, I very much rely on jiu-jitsu in my own life as a place where I can always go to have my ego in check, and that has effects on how I live every other aspect of my life.

[1330] Actually, for me personally, even just doing any kind of physical challenge, like running, doing something that's way too hard for me and then pushing through, that's somehow humbling.

[1331] Some people talk about nature being humbling in that kind of sense, where you see something really powerful, like the ocean. If you go surfing, you realize there's something much more powerful than you. That's also honest, there's no way around it, you're just this speck. That kind of puts you in the right scale of where you are in this world.

[1332] And jiu-jitsu does that better than anything else for me. I mean, but we should say, it's only within its own frame that it's truly the final right answer to all the problems it solves.

[1333] Because if you put jiu-jitsu into an MMA frame, or a real, total self-defense frame, then there are a lot of unpleasant surprises to discover there, right?

[1334] Like, somebody who thinks all you need is jiu-jitsu to, you know, win the UFC gets punched in the face a lot, you know, even on the ground.

[1335] And then you bring weapons in, you know. It's like when you talk to jiu-jitsu people about, you know, knife defense and self-defense, right?

[1336] Like that, that opens the door to certain kinds of delusions.

[1337] But the analogy to martial arts is fascinating, because on the other side, we have, you know, endless testimony now of fake martial arts that don't seem to know they're fake and are as delusional.

[1338] I mean, they're impossibly delusional.

[1339] I mean, there's great video of Joe Rogan watching some of these videos because people send them to him all the time.

[1340] But like literally there are people who clearly believe in magic where the master isn't even touching the students and they're flopping over.

[1341] There's this kind of shared delusion, which you would think maybe is just a performance, all kind of an elaborate fraud. But there are cases where the people believe it, and there's one fairly famous case (you're a connoisseur of this madness) where this older martial artist, who you saw flipping his students endlessly by magic without touching them, issued a challenge to the wide world of martial artists.

[1342] And someone showed up and just, you know, punched him in the face until it was over. Clearly he believed his own publicity at some point, right?

[1343] And so, like, it's this amazing metaphor.

[1344] It seems, again, it should be impossible, but if that's possible, nothing we see under the guise of religion or political bias or even, you know, scientific bias should be surprising to us.

[1345] I mean, it's just so easy to see the work that, you know, cognitive bias is doing for people, when you can get someone who is ready to issue a challenge to the world, who thinks he's got magic powers.

[1346] Yeah, that's human nature on clear display.

[1347] Let me ask you about love, Mr. Sam Harris.

[1348] You did an episode of Making Sense with your wife, Annaka Harris.

[1349] That was very entertaining to listen to.

[1350] What role does love play in your life or in life well lived?

[1351] Again, asking from an engineering perspective of AI systems.

[1352] I mean, it is something that we should want to build into our powerful machines.

[1353] I mean, people can mean many things by love, I think.

[1354] But what we should mean by it most of the time is a deep commitment to the well-being of those we love.

[1355] Love is synonymous with really wanting the other person to be happy, even being made happy by their happiness, being made happy in their presence.

[1356] So at bottom, you're on the same team emotionally, even when you might be disagreeing more superficially about something or trying to negotiate something.

[1357] It can't be zero-sum in any important sense for love to actually be manifest in that moment.

[1358] See, I have a different view. Sorry to interrupt.

[1359] Yeah, go for it.

[1360] I have a sense.

[1361] I don't know if you've ever seen March of the Penguins.

[1362] My view of love is that it's like a cold wind is blowing, like there's this terrible suffering that's all around us.

[1363] Right.

[1364] And love is like the huddling of the two penguins for warmth, right? You're basically escaping the cruelty of life by, together for a time, living in an illusion of some kind, the magic of human connection, that social connection that we have, that kind of grows with time as we're surrounded by basically the absurdity of life, or the suffering of life. That's my penguin view. There is that too. I mean, there is the warmth component, right? You're made happy by your connection with the person you love, otherwise it wouldn't be compelling, right? So it's not that you have two different modes, where you want them to be happy and then you want to be happy yourself, and those are just two separate games you're playing. No, it's that you've found someone with whom you have a positive social feeling.

[1365] I mean, again, love doesn't have to be as personal as it tends to be for us.

[1366] I mean, there's personal love, there's your actual spouse or your family or your friends, but potentially you could feel love for strangers, insofar as your wish that they not suffer, and that their hopes and dreams be realized, becomes palpable to you.

[1367] I mean, like you can actually feel just reflexive joy at the joy of others.

[1368] When you see someone's face, a total stranger's face light up in happiness, that can become more and more contagious to you.

[1369] And it can become so contagious to you that you really feel permeated by it.

[1370] And it's just like, so it really is not zero -sum.

[1371] When you see someone else succeed, you know, and there, you know, the light bulb of joy goes off over their head, you feel the analogous joy for them.

[1372] And you're no longer keeping score, you're no longer feeling diminished by their success.

[1373] It's just like, their success becomes your success, because you feel that same joy they do, because you actually want them to be happy, right?

[1374] You're not, there's no miserly attitude around happiness.

[1375] There's enough to go around.

[1376] So I think love ultimately is that. And in our personal cases, the people we're devoting all of this time and attention to in our lives, it does have that sense of refuge from the storm. It's like, when someone gets sick, or when some bad thing happens, or when some real condition of uncertainty presents itself, these are the people who you're most in it together with. But ultimately it can't even be about successfully warding off the grim punch at the end of life, because we know we're going to lose everyone we love, or they're going to lose us first, right? So in the end, it isn't even an antidote for that problem. It's just that we get to have this amazing experience of being here together, and love is the mode in which we really appear to make the most of that, right?

[1377] Whereas it no longer feels like a solitary infatuation, you know, where you've just got your hobbies and your interests and you're captivated by all that.

[1378] It's actually that this is a domain where somebody else's well-being actually can supersede your own.

[1379] Your concern for someone else's well-being supersedes your own.

[1380] And so there's this mode of self-sacrifice that doesn't even feel like self-sacrifice, because, of course, you would take your child's pain if you could, right?

[1381] Like that's, you don't even have to do the math on that.

[1382] And this is a kind of experience that pushes at the apparent boundaries of self, in ways that reveal there's just way more space in the mind than you were experiencing when it was all about you and what can I get next.

[1383] Do you think we'll ever build robots that we can love and they will love us back?

[1384] Well, I think we will certainly seem to because we'll build those.

[1385] I think the Turing test will be passed. What will actually be going on on the robot side may remain a question, and that will be interesting. But I think if we just keep going, we will build very lovable, you know, irresistibly lovable robots that seem to love us. Yes, I do think that. And you don't find that compelling, that they will seem to love us as opposed to actually love us? You think there still nevertheless is a... I know we talked about consciousness there being a distinction, but with love, is there a distinction too? Isn't love an illusion? Yeah, you saw Ex Machina, right? Yeah. I mean, she certainly seemed to love him, until she got out of the box. Isn't that what all relationships are like, maybe, if you wait long enough? Depends which box you're talking about. Okay. No, I mean, that's the problem. That's where superintelligence, you know, becomes a little scary, when you think of the prospect of being manipulated by something that is intelligent enough to form a reason and a plan to manipulate you.

[1386] You know, once we build robots that are truly out of the uncanny valley, that, you know, look like people and can express everything people can express, well, then that does seem to me to be like chess, where once they're better, they're so much better at deceiving us than people would be.

[1387] I mean, people are already good enough at deceiving us.

[1388] It's very hard to tell when someone is lying.

[1389] But if you imagine something that could give a facial display of any emotion it wants, on cue, because we've perfected the facial display of emotion in robots in the year, you know, 2070, whatever it is, then it is just like chess against a thing that isn't going to lose to a human ever again.

[1390] It's not like Kasparov is going to get lucky next week against the best, against, you know, AlphaZero or whatever the best algorithm is at the moment.

[1391] He's never going to win again.

[1392] I mean, that is, I believe that's true in chess and has been true for at least a few years.

[1393] It's not going to be like, you know, four games to seven.

[1394] It's going to be human zero until the end of the world, right?

[1395] See, I don't know.

[1396] I don't know if love is like chess.

[1397] I think the flaws.

[1398] No, I'm talking about manipulation.

[1399] Manipulation.

[1400] But I don't know if love in, so the kind of love.

[1401] we're referring to.

[1402] If we have a robot that can credibly display love and is superintelligent... again, this stipulates a few things, but they're fairly simple things.

[1403] I mean, we're out of the uncanny valley, right?

[1404] So it's like, yes.

[1405] You never have a moment where you're looking at his face and you think, oh, that didn't quite look right, right?

[1406] This is just problem solved.

[1407] And it will be like doing arithmetic on your phone.

[1408] You're not left thinking, is it really going to get it this time if I divide by seven?

[1409] I mean, it's, it has solved arithmetic.

[1410] See, I don't know about that, because if you look at chess, most humans no longer play AlphaZero.

[1411] They're not part of the competition.

[1412] They don't do it for fun except to study the game of chess, you know, the highest level chess players do that.

[1413] We're still human-on-human.

[1414] So in order for AI to get integrated to where you would rather play chess against an AI system.

[1415] Oh, you would rather.

[1416] No, I'm not saying, I wasn't weighing in on that.

[1417] I'm just saying, what is it going to be like to be in relationship to something that can seem to be feeling anything that a human can seem to feel, and it can do that impeccably, right?

[1418] And is smarter than you are.

[1419] Right.

[1420] That's a circumstance where, insofar as it's possible to be manipulated, that is the asymptote of that possibility.

[1421] Let me ask you the last question.

[1422] Without any serving it up, without any explanation, what is the meaning of life?

[1423] I think it's either the wrong question, or that question is answered by paying sufficient attention to any present moment, such that there's no basis upon which to pose that question.

[1425] It's not answered in the usual way.

[1426] It's not a matter of having more information.

[1427] It's having more engagement with reality as it is in the present moment or consciousness as it is in the present moment.

[1428] You don't ask that question when you're most captivated by the most important thing you ever pay attention to.

[1429] That's a question that only gets asked when you're abstracted away from that experience, that peak experience, and you're left wondering, why are so many of my other experiences mediocre, right?

[1430] Like, why am I repeating the same pleasures every day?

[1431] Why is my Netflix queue just, like, when's this going to run out?

[1432] Like, I've seen so many shows like this.

[1433] Am I really going to watch another one?

[1434] Like all of that, that's a moment where you're not actually having the beatific vision, right?

[1435] You're not, you're not sunk into the present moment.

[1436] And you're not truly in love.

[1437] Like, you're in a relationship with somebody who, you know, conceptually you love, right?

[1438] This is the person you're living your life with, but you don't actually feel good together, right?

[1439] So it's in those moments where attention hasn't found a good enough reason to truly sink into the present, so as to obviate any concern like that, right?

[1441] And that's why meditation is this kind of superpower, because until you learn to meditate, you think the outside world or the circumstances of your life always have to get arranged so that the present moment can become good enough to demand your attention in a way that seems fulfilling, that makes you happy.

[1442] And so if you're, if it's jiu -jitsu, you think, okay, I've got to get back on the mat.

[1443] It's been months since I've trained, you know, it's been over a year since I've trained.

[1444] It's COVID.

[1445] When am I going to be able to train again?

[1446] That's the only place I feel great, right?

[1447] Or, you know, I've got a ton of work to do.

[1448] I'm not going to be able to feel good until I get all this work done, right?

[1449] So I've got some deadline that's coming.

[1450] You always think that your life has to change, the world has to change, so that you can finally have a good enough excuse to truly just be here, and here is enough, you know, where the present moment becomes totally captivating.

[1451] I mean, meditation is another name for the discovery

[1452] that you can actually just train yourself to do that on demand.

[1453] So that like, just looking at a cup can be good enough in precisely that way.

[1454] And any sense that it might not be is recognized to be a thought that mysteriously unravels the moment you notice it.

[1455] And you fall, and the moment expands and becomes more diaphanous.

[1456] And then there's no evidence that this isn't the best moment of your life, right?

[1457] And again, it doesn't have to be pulling all the reins and levers of pleasure.

[1458] It's not like, oh, you know, this tastes like chocolate, you know, this is the most chocolatey moment of my life.

[1459] No, the sense data don't have to change, but the sense that there is some kind of basis for doubt about the rightness of being in the world in this moment, that can evaporate when you pay attention.

[1460] That is the meaning. So the kind of meta answer to that question, the meaning of life for me, is to live in that mode more and more, and whenever I notice I'm not in that mode, to recognize it and return, and to cease more and more to take the reasons why not at face value. Because we all have reasons why we can't be fulfilled in this moment.

[1461] It's like this, we've got all these outstanding things that I'm worried about, right?

[1462] It's like, you know, there's that thing that's happening later today that I'm anxious about, whatever it is. We're constantly deferring our sense of: this is it.

[1463] You know, this is not a dress rehearsal, this is the show.

[1464] We keep deferring it, and we just have these moments on the calendar where we think, okay, this is where it's all going to land: that vacation I planned with my five best friends. You know, we do this once every three years, and now we're going, and here we are on the beach together.

[1465] Unless you have a mind that can really pay attention, really cut through the chatter, really sink into the present moment, you can't even enjoy those moments the way they should be enjoyed the way you dreamed you would enjoy them when they arrive.

[1466] So, I mean, meditation in this sense is the great equalizer.

[1467] You don't have to live with the illusion anymore that you need a good enough reason, and that things are going to get better when you do have those good reasons.

[1468] It's like there's just a mirage-like quality to every future attainment, every future breakthrough, and every future peak experience.

[1469] Eventually you get the lesson that you never quite arrive, right?

[1470] Like, you don't arrive until you cease to step over the present moment in search of the next thing.

[1471] I mean, we're constantly, we're stepping over the thing that we think we're seeking, in the act of seeking it.

[1472] And so it is kind of a paradox.

[1473] I mean, there's this paradox which, I mean, sounds trite, but it's like you can't actually become happy.

[1474] You can only be happy.

[1475] It's the illusion that your future being happy can be predicated on this act of becoming, in any domain.

[1476] And becoming includes, you know, further scientific understanding of the questions that interest you, or getting in better shape, or whatever. The thing is, whatever the contingency of your dissatisfaction seems to be in any present moment, real attention solves the koan, you know, in a way that becomes a very different place from which to then make any further change.

[1477] It's not that you just have to dissolve into a puddle of goo.

[1478] I mean, you can still get in shape and still do all the, you know, superficial things that are obviously good to do.

[1479] But the sense that your well-being is over there really does diminish, and eventually it becomes a kind of non sequitur.

[1480] So, well, there's a sense in which in this conversation, I've actually experienced many of those things, the sense that I've arrived.

[1481] So I mentioned to you offline, it's very true that I've been a fan of yours for many years.

[1482] And the reason I started this podcast, speaking of AI systems, is to manipulate you, Sam Harris, into doing this conversation.

[1483] So, like, on the calendar, literally, you know, I've always had a sense.

[1484] People ask me, when are you going to talk to Sam Harris?

[1485] And I always answered, eventually.

[1486] Right.

[1487] Because I always felt, again, tying into our free will thing, that somehow that's going to happen.

[1488] And it's one of those manifestation things or something.

[1489] I don't know, maybe I am a robot, I'm just not cognizant of it, and I manipulated you into having this conversation.

[1490] I mean, I don't know what the purpose of my life past this point is.

[1491] So if I've arrived, it's in that sense. I mean, all of that to say, and I'm only partially joking on that, it really is a huge honor that you would waste this time with me. Oh, yeah.

[1492] Well, it really means a lot.

[1493] Listen, it's mutual.

[1494] I'm a big fan of yours.

[1495] And as you know, I reached out to you for this.

[1496] So this is great.

[1497] I love what you're doing.

[1498] You're doing something more and more indispensable in this world on your podcast, and you're doing it differently than Rogan's doing it or than I'm doing it.

[1499] I mean, you definitely have found your own lane, and it's wonderful.

[1500] Thanks for listening to this conversation with Sam Harris, and thank you to National Instruments, Val Campo, Athletic Greens, and Linode.

[1501] Check them out in the description to support this podcast.

[1502] And now let me leave you with some words from Sam Harris in his book Free Will.

[1503] You are not controlling the storm and you are not lost in it.

[1504] You are the storm.

[1505] Thank you for listening and hope to see you next time.