David Chalmers: The Hard Problem of Consciousness

Lex Fridman Podcast XX

Full Transcription:

[0] The following is a conversation with David Chalmers.

[1] He's a philosopher and cognitive scientist specializing in the areas of philosophy of mind, philosophy of language, and consciousness.

[2] He's perhaps best known for formulating the hard problem of consciousness, which could be stated as, why does the feeling which accompanies awareness of sensory information exist at all?

[3] Consciousness is almost entirely a mystery.

[4] Many people who worry about AI safety and ethics believe that in some form consciousness can and should be engineered into AI systems of the future.

[5] So while there's much mystery, disagreement, and discoveries yet to be made about consciousness, these conversations, while fundamentally philosophical in nature, may nevertheless be very important for engineers of modern AI systems to engage in.

[6] This is the Artificial Intelligence Podcast.

[7] If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.

[8] As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation.

[9] I hope that works for you and doesn't hurt the listening experience.

[10] This show is presented by Cash App, the number one finance app in the app store.

[11] When you get it, use code Lex Podcast.

[12] Cash app lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar.

[13] Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC.

[14] Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel.

[15] So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier.

[16] If you get Cash App from the App Store or Google Play and use the code Lex Podcast, you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world.

[17] And now here's my conversation with David Chalmers.

[18] Do you think we're living in a simulation?

[19] I don't rule it out. There's probably going to be a lot of simulations in the history of the cosmos. If the simulation is designed well enough, it'll be indistinguishable from a non-simulated reality, and although we could keep searching for evidence that we're not in a simulation, any of that evidence in principle could be simulated. So I think it's a possibility. But do you think the thought experiment is interesting or useful to calibrate how we think about the nature of reality?

[20] Yeah, I definitely think it's interesting and useful.

[21] In fact, I'm actually writing a book about this right now all about the simulation idea, using it to shed light on a whole bunch of philosophical questions.

[22] So, you know, the big one is how do we know anything about the external world?

[23] Descartes said, you know, maybe you're being fooled by an evil demon who's stimulating your brain into thinking

[24] all this stuff is real when in fact it's all made up. Well, the modern version of that is, how do you know you're not in a simulation? Then the thought is, if you're in a simulation, none of this is real. So that's teaching you something about knowledge: how do you know about the external world? I think there are also really interesting questions about the nature of reality right here. If we are in a simulation, is all this real? Is there really a table here? Is there really a microphone? Do I really have a body?

[25] The standard view would be, no, we don't.

[26] None of this would be real.

[27] My view is actually, that's wrong.

[28] And even if we are in a simulation, all of this is real.

[29] That's why I call this Reality 2.0.

[30] New version of reality, different version of reality, still reality.

[31] So what's the difference between the quote-unquote real world and the world that we perceive?

[32] So we interact with the world by perceiving it.

[33] It only really exists through the window of our perception system and in our mind.

[34] So what's the difference between something that's quote-unquote real, that exists perhaps without us being there, and the world as you perceive it?

[35] Well, the world as we perceive it is a very simplified and distorted version of what's going on underneath.

[36] We already know that from just thinking about science.

[37] You know, you don't see too many obviously quantum mechanical effects in what we perceive, but we still know quantum mechanics is going on under all things.

[38] Well, I like to think the world we perceive is this very kind of simplified picture of colors and shapes existing and in space and so on.

[39] And we know that's what the philosopher Wilfrid Sellars called the manifest image.

[40] The world as it seems to us. We already know that underneath all that is a very different scientific image with atoms or quantum wave functions or superstrings,

[41] or whatever the latest thing is.

[42] And that's the ultimate scientific reality.

[43] So I think of the simulation idea as basically another hypothesis about what the ultimate, say, quasi-scientific or metaphysical reality is going on underneath the world of the manifest image.

[44] The world of the manifest image is this very simple thing that we interact with, and it's neutral on the underlying stuff of reality. Science can help tell us about that.

[45] Philosophy can help tell us about that too.

[46] And if we eventually take the red pill and find out we're in a simulation, my view is that's just another view about what reality is made of.

[47] You know, the philosopher Immanuel Kant said, what is the nature of the thing in itself?

[48] I've got a glass here and it's got all these, it appears to me a certain way, a certain shape, it's liquid, it's clear.

[49] He said, what is the nature of the thing in itself?

[50] Well, I think of the simulation idea.

[51] It's a hypothesis about the nature of the thing in itself.

[52] It turns out that if we're in a simulation, the thing in itself, the nature of this glass, is actually a bunch of data structures running on a computer in the next universe up.

[53] Yeah, that's what people tend to do when they think about simulation.

[54] They think about our modern computers and somehow trivially, crudely, just scaled up in some sense.

[55] But do you think the simulation... I mean, in order to actually simulate something as complicated as our universe, which is made up of molecules and atoms and particles and quarks and maybe even strings, all of that requires something just infinitely many orders of magnitude more in scale and complexity.

[56] Do you think we're even able to even like conceptualize what it would take to simulate our universe?

[57] Or does it just slip into this idea that you basically have to build a universe, something so big, to simulate it?

[58] Does it just get into this fuzzy area that's not useful at all?

[59] Yeah, well, I mean, our universe is obviously incredibly complicated, and for us within our universe to build a simulation of a universe as complicated as ours is going to have obvious problems.

[60] If the universe is finite, there's just no way that's going to work.

[61] Maybe there's some cute way to make it work if the universe is infinite.

[62] Maybe an infinite universe could somehow simulate a copy of itself, but that's going to be hard.

[63] Nonetheless, just given that we are in a simulation, I think there's no particular reason why we have to think the simulating universe has to be anything like ours.

[64] You've said before that it might be...

[65] So you could think of it, turtles all the way down.

[66] You could think of the simulating universe different than ours, but we ourselves could also create another simulated universe.

[67] So you said there could be these kind of levels of universes.

[68] And you've also mentioned this hilarious idea, maybe tongue-in-cheek, maybe not, that there may be simulations within simulations, arbitrarily stacked levels, and that we may be in level 42 along those stacks, referencing The Hitchhiker's Guide to the Galaxy.

[69] If we're indeed in a simulation within a simulation at level 42, what do you think level zero looks like?

[70] I would expect that level zero is truly enormous.

[71] I mean, if it's finite, it's some extraordinarily large finite capacity, but much more likely it's infinite.

[72] Maybe it's got some very high set-theoretic cardinality that enables it to support just any number of simulations.

[73] So a high degree of infinity at level zero, a slightly smaller degree of infinity

[74] at level one.

[75] So by the time you get down to us at level 42, maybe there's plenty of room for lots of simulations of finite capacity.

[76] If the top universe is only a small finite capacity, then obviously that's going to put very, very serious limits on how many simulations you're going to be able to get running.

[77] So I think we can certainly confidently say that if we're at level 42, then the top level is pretty damn big.

[78] So it gets more and more constrained as we get down levels, more and more simplified and constrained and limited in resources.

[79] Yeah, we still have plenty of capacity here.

[80] What was it?

[81] Feynman said there's plenty of room at the bottom.

[82] You know, we're still a number of levels above the degree where there's room for fundamental computing, physical computing capacity, quantum computing capacity at the bottom level.

[83] So we've got plenty of room to play with, and we probably have plenty of room for simulations of pretty sophisticated universes, perhaps none as complicated as our universe, unless our universe is infinite, but still, at the very least, for pretty serious finite universes, maybe universes somewhat simpler than ours, unless, of course, we're prepared to take certain shortcuts in the simulation, which might then increase the capacity significantly.

[84] Do you think the human mind, us people, in terms of the complexity of simulation, is at the height of what the simulation might be able to achieve?

[85] Like, if you look at incredible entities that could be created in this universe of ours, do you have an intuition about how incredible human beings are on that scale?

[86] I think we're pretty impressive, but we're not that impressive.

[87] Are we above average?

[88] I mean, I think kind of human beings are at a certain point in the scale of intelligence, which made many things possible.

[89] You get through evolution, through single-cell organisms, through fish and mammals and primates, and something happens.

[90] Once you get to human beings, we've just reached that level where we get to develop language, we get to develop certain kinds of culture, and we get to develop certain kinds of collective thinking that has enabled all this amazing stuff to happen, science and literature and engineering and culture and so on.

[91] Still, we're just at the beginning of that, on the evolutionary threshold.

[92] It's kind of like we just got there, you know, who knows, a few thousand or tens of thousands of years ago.

[93] So we're probably just at the very beginning for what's possible there.

[94] So I'm inclined to think among the scale of intelligent beings, we're somewhere very near the bottom.

[95] I would expect that, for example, if we're in a simulation, then the simulators who created us have got the capacity to be far more sophisticated.

[96] If we're at level 42, who knows what the ones at level zero are like.

[97] It's also possible that this is the epitome of what is possible to achieve.

[98] So we as human beings see ourselves maybe as flawed, see all the constraints, all the limitations.

[99] But maybe that's the magical, the beautiful thing.

[100] Maybe those limitations are the essential elements for an interesting existence, sort of that edge of chaos; if you make us much more intelligent, and if you make us much more powerful in any kind of dimension of performance, maybe you lose something fundamental that makes life worth living.

[101] So you kind of have this optimistic view that we're this little baby, and then there's so much growth and potential.

[102] But this could also be it.

[103] This is the most amazing thing is us.

[104] Maybe what you're saying is consistent with what I'm saying.

[105] I mean, we could still have levels of intelligence far beyond us, but maybe those levels of intelligence, on your view, would be kind of boring; you know, we'd get so good at everything that life suddenly becomes unidimensional.

[106] So we're just inhabiting this one spot of like maximal romanticism in the history of evolution.

[107] You get to humans and it's like, yeah, and then in years to come, our superintelligent descendants are going to look back at us and say, those were the days, when they just hit the point of inflection and life was interesting.

[108] I am an optimist.

[109] So I'd like to think that if there is superintelligence somewhere in the future, they'll figure out how to make life super interesting and super romantic.

[110] Well, you know what they're going to do.

[111] So what they're going to do is they realize how boring life is when you're super intelligent.

[112] So they create a new level of a simulation and sort of live through the things they've created by watching them stumble about in their flawed ways.

[113] So maybe that's, say, you create a new level of a simulation every time you get really bored with how smart and...

[114] This would be kind of sad, though, because it would show the peak of their existence would be like watching simulations for entertainment.

[115] It's like saying the peak of our existence now is Netflix.

[116] No, it's all right.

[117] A flip side of that could be the peak of our existence for many people, having children and watching them grow.

[118] That becomes very meaningful.

[119] Okay.

[120] You create a simulation that's like creating a family.

[121] Creating like, well, any kind of creation.

[122] is kind of a powerful act.

[123] Do you think it's easier to simulate the mind or the universe?

[124] So I've heard several people, including Nick Bostrom, think about ideas of, you know, maybe you don't need to simulate the universe, you can just simulate the human mind.

[125] Or in general, just the distinction between simulating the entirety of it, the entirety of the physical world, or just simulating the mind.

[126] Which one do you see as more challenging?

[127] Well, I think in some sense the answer is obvious.

[128] It has to be simpler to simulate the mind than to simulate the universe because the mind is part of the universe.

[129] In order to fully simulate the universe, you're going to have to simulate the mind.

[130] Unless we're talking about partial simulations.

[131] And I guess the question is which comes first.

[132] Does the mind come before the universe or does the universe come before the mind?

[133] So the mind could just be an emergent phenomenon in this universe.

[134] So simulation is an interesting thing in that, you know, it's not like creating a simulation perhaps requires you to program every single thing that happens in it.

[135] It's just defining a set of initial conditions and rules based on which it behaves.

[136] Simulating the mind requires you to have a little bit more.

[137] We're now in a little bit of crazy land.

[138] But it requires you to understand the fundamentals of cognition, perhaps of consciousness, of perception, of everything like that, that's not created through some kind of emergence from basic physics laws, but rather requires you to actually understand the fundamentals of the mind.

[139] How about if we said to simulate the brain, rather than the mind?

[140] So the brain is just a big physical system.

[141] The universe is a giant physical system. To simulate the universe, at the very least you're going to have to simulate the brains as well as all the other physical systems within it. And it's not obvious that the problems are any worse for the brain than for... it's a particularly complex physical system, but if we can simulate arbitrary physical systems, we can simulate brains. There is this further question of whether, when you simulate a brain, will that bring along all the features of the mind with it.

[142] Like will you get consciousness?

[143] Will you get thinking?

[144] Will you get free will and so on?

[145] And that's something philosophers have argued over for years.

[146] My own view is if you simulate the brain well enough, that will also simulate the mind.

[147] But yeah, there's plenty of people who would say no, you'd merely get a zombie system, a simulation of a brain without any true consciousness.

[148] But for you, you put together a brain, and the consciousness comes with it, arises.

[149] Yeah, I don't think it's obvious.

[150] That's your intuition.

[151] My view is roughly that, yeah, what is responsible for consciousness?

[152] It's in the patterns of information processing and so on, rather than, say, the biology that it's made of.

[153] There's certainly plenty of people out there who think consciousness has to be, say, biological.

[154] So if you merely replicate the patterns of information processing in a non-biological substrate, you'll miss what's crucial for consciousness.

[155] I mean, I just don't think there's any particular

[156] reason to think that biology is special here.

[157] You can imagine substituting the biology for non-biological systems, say silicon circuits, that play the same role.

[158] The behavior will continue to be the same.

[159] And I think, just thinking about what is the true... when I think about the connection, the isomorphisms between consciousness and the brain, the deepest connections to me seem to connect consciousness to patterns of information processing, not to specific biology.

[160] So I at least adopt it as my working hypothesis that basically it's the computation and the information that matters for consciousness.

[161] At the same time, we don't understand consciousness, so this could be wrong.

[162] So the computation, the flow, the processing and manipulation of information, the process... the software is where the consciousness comes from, not the hardware.

[163] Roughly the software, yeah, the patterns of information processing, at least,

[164] in the hardware, which we could view as software.

[165] It may not be something you can just program and load and erase and so on the way we can with ordinary software, but it's something at the level of information processing rather than at the level of implementation.

[166] So on that, what do you think of the experience of self, just the experience of the world in a virtual world, in virtual reality?

[167] Is it possible that we can create sort of offsprings of our consciousness by existing in a virtual world long enough?

[168] So, yeah, can we be conscious in the same kind of deep way that we are in this real world by hanging out in a virtual world?

[169] Yeah, well, the kind of virtual worlds we have now are, you know, are interesting but limited in certain ways.

[170] In particular, they rely on us having a brain and so on, which

[171] is outside the virtual world.

[172] Maybe I'll strap on my VR headset or just hang out in a virtual world on a screen.

[173] But my brain, and then my physical environment might be simulated if I'm in a virtual world.

[174] But right now, there's no attempt to simulate my brain.

[175] There might be some non-player characters in these virtual worlds that have simulated cognitive systems of certain kinds that dictate their behavior, but, you know, mostly they're pretty simple right now.

[176] I mean, some people are trying to put a bit of AI in their non-player characters to make them smarter.

[177] But for now, inside virtual world, the actual thinking is interestingly distinct from the physics of those virtual worlds.

[178] In a way, actually, I like to think this is kind of reminiscent of the way that Descartes thought our physical world was.

[179] There's physics, and there's the mind, and they're separate.

[180] Now we think the mind is somehow, somehow connected to physics pretty deeply.

[181] But in these virtual worlds, there's a physics of a virtual world.

[182] And then there's this brain which is totally outside the virtual world that controls it and interacts with it.

[183] When anyone exercises agency in a video game, you know, that's actually somebody outside the virtual world, moving a controller, controlling the interaction of things inside the virtual world.

[184] So right now in virtual worlds, the mind is somehow outside the world.

[185] But you could imagine in the future, once we have developed serious AI, artificial general intelligence, and so on.

[186] Then we could come to virtual worlds which have enough sophistication.

[187] You could actually simulate a brain or have a genuine AGI, which would then presumably be able to act in equally sophisticated ways, maybe even more sophisticated ways inside the virtual world to how it might in the physical world.

[188] And then the question is going to come along.

[189] That would be kind of a VR-internal, a virtual-world-internal, intelligence.

[190] And then the question is, could they have consciousness, experience, intelligence, free will, all the things that we have?

[191] And again, my view is, I don't see why not.

[192] To linger in it a little bit, I find virtual reality really incredibly powerful, just even the crude virtual reality we have now.

[193] Perhaps there are psychological effects that make some people more amenable to virtual worlds than others, but I find myself wanting to stay in virtual worlds for...

[194] Yes.

[195] With a headset or on a desktop?

[196] No, with a headset.

[197] Really interesting.

[198] Because I am totally addicted to using the internet and things on a desktop.

[199] But when it comes to VR for the headset, I don't typically use it for more than 10 or 20 minutes.

[200] There's something just slightly aversive about it, I find.

[201] So I don't, right now, even though I have Oculus Rift and Oculus Quest and HTC Vive and Samsung, this and that.

[202] So you just don't want to stay in that world for extended periods. I actually find myself... there's something about it. It's both a combination of just imagination and considering the possibilities of where this goes in the future. It feels like I want to almost prepare my brain for it. I want to explore, sort of, Disneyland when it's first being built. Yeah, in the early days. Yeah. And it feels like I'm walking around almost imagining the possibilities.

[203] And something through that process allows my mind to really enter into that world.

[204] But you say that the brain is external to that virtual world.

[205] It is, strictly speaking, true.

[206] But if you're in VR and you do brain surgery on an avatar, you're going to open up that skull.

[207] What are you going to find?

[208] Sorry, nothing there. Nothing. The brain is elsewhere. You don't think it's possible to kind of separate them? And I don't mean in a sense like Descartes, like a hard separation, but basically, do you think it's possible, with the brain outside of the virtual world, when you're wearing a headset, to create a new consciousness for prolonged periods of time, to really feel like, really experience, like forget that your brain is outside? So this is, okay, this is going to be the case where the brain is still outside. Still outside. But could living in the VR... I mean, we already find this, right, with video games, exactly, that's completely immersive and you get taken up by living in those worlds and it becomes your reality for a while. So they're not completely immersive, they're just very immersive; you don't forget the external world. No, exactly, so that's what I'm asking you. Is it almost possible to really forget the external world?

[209] Really, really immerse yourself.

[210] To forget completely, why would we forget?

[211] We've got pretty good memories.

[212] Maybe you can stop paying attention to the external world.

[213] But, you know, this already happens a lot.

[214] I go to work and maybe I'm not paying attention to my home life.

[215] I go to a movie and I'm immersed in that.

[216] So that degree of immersion, absolutely.

[217] But we still have the capacity to remember it. To completely forget

[218] the external world.

[219] I'm thinking that would probably take some, I don't know, some pretty serious drugs or something to make your brain do that.

[220] Is it possible?

[221] So, I mean, I guess what I'm getting at is, is consciousness truly a property that's tied to the physical brain?

[222] Or can you create sort of different offspring copies of consciousness based on the worlds that you enter?

[223] Well, the way we're doing it now, at least with standard VR, there's just one brain that interacts with the physical world, plays a video game, puts on a video headset, interacts with this virtual world.

[224] I think we'd typically say there's one consciousness here that nonetheless undergoes different environments, takes on different characters, you know, in different environments.

[225] This is already something that happens in the non-virtual world.

[226] You know, I might interact one way in my home life, my work life, social life, and so on.

[227] So at the very least, that will happen in a virtual world very naturally.

[228] People might, people sometimes adopt the character of avatars very different from themselves, maybe even a different gender, different race, different social background.

[229] So that much is certainly possible.

[230] I would see that as a single consciousness, taking on different personas.

[231] If you want literal splitting of consciousness into multiple copies, I think it's going to take something more radical than that.

[232] Like, you know, maybe you can run different simulations.

[233] of your brain in different realities and then expose them to different histories, and then, you know, you'd split yourself into 10 different simulated copies, which then undergo different environments and then ultimately do become 10 very different consciousnesses. Maybe that could happen, but now we're not talking about something that's possible in the near term; we're going to have to have brain simulations and AGI for that to happen. Got it. So before any of that happens, fundamentally you see it as a singular consciousness: even though it's experiencing different environments, virtual or not, it's still connected to the same set of memories, the same set of experiences, and therefore one sort of joint conscious system.

[234] Yeah, or at least no more multiple than the kind of multiple consciousness that we get from inhabiting different environments in a non-virtual world.

[235] So you said as a child, you were a music-color synesthete,

[236] where songs had colors for you.

[237] So what songs had what colors?

[238] You know, this is funny.

[239] I didn't pay much attention to this at the time, but I'd listen to a piece of music and I'd get some kind of imagery of a kind of color.

[240] The weird thing is, mostly they were kind of murky dark greens and olive brown and the colors weren't all that interesting.

[241] I don't know what the reason is.

[242] My theory is that maybe it's like different chords and tones provided different colors, and they all tended to get mixed together into these somewhat uninteresting browns and greens.

[243] But every now and then, there'd be something that had a really pure color.

[244] So there's just a few that I remember.

[245] There was "Here, There and Everywhere" by the Beatles, which was bright red.

[246] It has this, you know, very distinctive tonality and its chord structure.

[247] at the beginning.

[248] So that was bright red.

[249] There was this song by the Alan Parsons Project called Ammonia Avenue that was kind of a pure blue.

[250] Anyway, I've got no idea how this would happen.

[251] I didn't even pay that much attention until it went away when I was about 20.

[252] Synesthesia often goes away.

[253] So is it purely just the perception of a particular color or was there a positive or negative experience?

[254] Like was blue associated with a positive and red with a negative, or is it simply the perception of color associated with some characteristic of the song?

[255] For me, I don't remember a lot of association with emotion or with value.

[256] It was just this kind of weird and interesting fact.

[257] I mean, at the beginning, I thought this was something that happened to everyone, songs having colors.

[258] Maybe I mentioned it once or twice, and people said, uh, nope. I think it was kind of cool when there was one that had one of these especially pure colors, but it was only much later, once I became a

[259] grad student thinking about the mind, that I read about this phenomenon called synesthesia.

[260] And it's like, hey, that's what I had.

[261] And now I occasionally talk about it in my classes, in intro classes.

[262] And it still happens sometimes.

[263] A student comes up and says, hey, I have that.

[264] I never knew about that.

[265] I never knew it had a name.

[266] You said that it went away at age 20 or so.

[267] And that you have a journal entry from around then saying songs don't have colors anymore.

[268] What happened?

[269] What happened?

[270] Yeah, it was definitely

[271] sad that it was gone.

[272] In retrospect, it's like, hey, that's cool.

[273] The colors have gone.

[274] Yeah, do you, can you think about that for a little bit?

[275] Do you miss those experiences?

[276] Because it's a fundamentally different sets of experiences that you no longer have.

[277] Or is it just a nice thing to have had?

[278] You don't see them as that fundamentally different from visiting a new country and experiencing new environments?

[279] I guess for me, when I had these experiences, they were somewhat marginal. They were like a little bonus kind of experience. I know there are people who have much more serious forms of synesthesia than this, for whom it's absolutely central to their lives. I know people who, when they experience new people, they have colors, maybe they have tastes, and so on. Every time they see writing, it has colors. Some people, whenever they hear music, it's got a certain really rich color pattern. And, you know, for some synesthetes it's absolutely central.

[280] I think if they lost it, they'd be devastated.

[281] Again, for me, it was a very, very mild form of synesthesia.

[282] It's like, yeah, it's like those interesting experiences.

[283] Yeah.

[284] You know, like what you might get under different altered states of consciousness and so on.

[285] It's kind of cool, but, you know, not necessarily the single most important experiences in your life.

[286] Got it.

[287] So let's try to go to the very simplest question.

[288] You've answered it many times, but perhaps the simplest things can help us reveal, even this time, some new ideas.

[289] So what, in your view, is consciousness?

[290] What is qualia?

[291] What is the hard problem of consciousness?

[292] Consciousness, I mean, the word is used many ways, but the kind of consciousness that I'm interested in is basically subjective experience, what it feels like from the inside to be a human being or any other conscious

[293] being.

[294] I mean, there's something it's like to be me. Right now, I have visual images that I'm experiencing.

[295] I'm hearing my voice.

[296] I've got maybe some emotional tone.

[297] I've got a stream of thoughts running through my head.

[298] These are all things that I experience from the first person point of view.

[299] I've sometimes called this the inner movie in the mind.

[300] It's not a perfect...

[301] It's not a perfect metaphor.

[302] It's not like a movie in every way, and it's very rich.

[303] But, yeah, it's just direct, subjective experience.

[304] And I call that consciousness, or sometimes philosophers use the word qualia, which you suggested.

[305] People tend to use the word qualia for things like the qualities of things like colors, redness, the experience of redness versus the experience of greenness, the experience of one taste or one smell versus another, the experience of the quality of pain.

[306] And, yeah, a lot of consciousness is the experience of those, of those qualities.

[307] Well, consciousness is bigger, the entirety of any kinds of...

[308] Consciousness of thinking is not obviously qualia.

[309] It's not like specific qualities like redness or greenness, but still I'm thinking about my hometown, and I'm thinking about what I'm going to do later on.

[310] Maybe there's still something running through my head, which is subjective experience.

[311] Maybe it goes beyond those qualities or qualia.

[312] Philosophers sometimes use the word phenomenal consciousness for consciousness

[313] in this sense. I mean, people also talk about access consciousness, being able to access information in your mind, and reflective consciousness, being able to think about yourself.

[314] But it looks like the really mysterious one, the one that really gets people going is phenomenal consciousness.

[315] The fact that all this, the fact that there's subjective experience and all this feels like something at all.

[316] And then the hard problem is, how is it that, why is it, that there is phenomenal consciousness at all?

[317] And how is it that physical processes in a brain could give you subjective experience?

[318] It looks like, on the face of it, you could have all this big, complicated physical system in a brain running without it giving you subjective experience at all.

[319] And yet we do have subjective experience.

[320] So the hard problem is just explain that.

[321] Explain how that comes about.

[322] We haven't been able to build machines where a red light goes on that says it's now conscious.

[323] So how do we actually create that?

[324] Or how do humans do it, and how do we ourselves do it?

[325] We do every now and then create machines that can do this.

[326] We create babies that are conscious.

[327] They've got these brains.

[328] The brain does produce consciousness.

[329] But even though we can create it, we still don't understand why it happens.

[330] Maybe eventually we'll be able to create machines, AI machines, which as a matter of fact are conscious,

[331] but that won't necessarily make the hard problem go away, any more than it does with babies, because we still want to know how and why it is that these processes give you consciousness. You know, you just made me realize for a second, maybe it's a totally dumb realization, but nevertheless, that it's a useful way to think about the creation of consciousness: looking at a baby. So there's a certain point at which that baby is not conscious.

[332] The baby starts from maybe, I don't know, from a few cells, right?

[333] There's a certain point at which it becomes conscious, consciousness arrives.

[334] It's conscious.

[335] Of course, we can't know exactly that line.

[336] But that's a useful idea that we do create consciousness.

[337] Again, a really dumb thing for me to say, but it's not until now that I realized we do engineer consciousness.

[338] We get to watch the process happen.

[339] We don't know which point it happens or where it is, but we do see the birth of consciousness.

[340] Yeah, I mean, there's a question, of course, is whether babies are conscious when they're born.

[341] And it used to be, it seems, at least some people thought they weren't, which is why they didn't give anesthetics to newborn babies when they circumcised them.

[342] And so now people think, oh, that's incredibly cruel.

[343] Of course, babies feel pain.

[344] The dominant view is that babies can feel pain.

[345] Actually, my partner, Claudia, works on this whole issue of whether there's consciousness in babies and of what kind.

[346] And she certainly thinks that newborn babies, you know, come into the world with some degree of consciousness.

[347] Of course, then you can just extend the question backwards to fetuses and suddenly you're into politically controversial territory.

[348] But, you know, the question also arises in the animal kingdom.

[349] You know, where does consciousness start or stop?

[350] Is there a line in the animal kingdom where, you know, the first conscious organisms are?

[351] It's interesting.

[352] Over time, people are becoming more and more liberal about ascribing consciousness to animals.

[353] People used to think, maybe only mammals could be conscious.

[354] Now most people seem to think, sure, fish are conscious.

[355] They can feel pain.

[356] And now we're arguing over insects.

[357] You'll find people out there who say plants have some degree of consciousness.

[358] So, you know, who knows where it's going to end.

[359] At the far end of this chain is the view that every physical system has some degree of consciousness.

[360] Philosophers call that panpsychism.

[361] You know, I take that view.

[362] I mean, that's a fascinating way to view reality.

[363] So if you could talk about, if you can linger on panpsychism for a little bit, what does it mean?

[364] So it's not just plants are conscious.

[365] I mean, it's that consciousness is a fundamental fabric of reality.

[366] What does that mean to you?

[367] How are we supposed to think about that?

[368] Well, we're used to the idea that some things in the world are fundamental, right?

[369] In physics, we typically take things like space or time, or space-time, and mass and charge, as fundamental properties of the universe.

[370] You don't reduce them to something simpler.

[371] You take those for granted.

[372] You've got some laws that connect them.

[373] Here is how mass and space and time evolve.

[374] There are theories like relativity or quantum mechanics, or some future theory that will unify them both, but everyone says you've got to take some things as fundamental, and if you can't explain one thing in terms of the previous fundamental things, you have to expand. Maybe something like this happened with Maxwell: you ended up with fundamental principles of electromagnetism and took charge as fundamental, because it turned out that was the best way to explain it. So I at least take seriously the possibility that something like that could happen with consciousness.

[375] Take it as a fundamental property like space, time, and mass. Instead of trying to explain consciousness wholly in terms of the evolution of space, time, and mass, and so on, take it as a primitive and then connect it to everything else by some fundamental laws.

[376] There's this basic problem that the physics we have now looks great for solving the easy problems of consciousness, which are all about behavior.

[377] They give us a complicated structure and dynamics.

[378] They tell us how things are going to behave, what kind of observable behavior will be produced, which is great for the problems of explaining how we walk and how we talk and so on.

[379] Those are the easy problems of consciousness.

[380] But the hard problem was this problem about subjective experience just doesn't look like that kind of problem about structure, dynamics, how things behave.

[381] So it's hard to see how existing physics is going to give you a full explanation of that.

[382] Certainly trying to get a physics view of consciousness, yes.

[383] There has to be a connecting point, and it could be at the very axiomatic, at the very beginning, level.

[384] But, first of all, there's a crazy idea that sort of everything has properties of consciousness.

[385] At that point, the word consciousness is already beyond the region of our current understanding, like, far beyond, because it's so far, at least for me, maybe you can correct me, from the experiences that I have as a human being.

[386] To say that everything is conscious, that means that basically another way to put that, if that's true, then we understand almost nothing about that fundamental aspect of the world.

[387] How do you feel about saying an ant is conscious?

[388] Do you get the same reaction to that, or is that something you can understand?

[389] I can understand an ant.

[390] I can't understand an atom, a particle.

[391] So I'm comfortable with living things on Earth being conscious, because there's some kind of agency, they're a similar size to me, and they can be born and they can die.

[392] And that is understandable intuitively.

[393] Of course, you anthropomorphize.

[394] You put yourself in the place of the plant.

[395] But I can understand it.

[396] I mean, I'm not, like... I don't believe, actually, that plants are conscious or that plants suffer, but I can understand that kind of belief, that kind of idea.

[397] How do you feel about robots?

[398] Like the kind of robots we have now, if I told you that, you know, a Roomba had some degree of consciousness, or some, you know, deep neural network?

[399] I could understand that a Roomba has consciousness.

[400] I just spent all day at iRobot.

[401] And, I mean, I personally love robots and I have a deep connection with robots.

[402] So I can, I also probably anthropomorphize them, but there's something about the physical object. So there's a difference between a neural network, a neural network running in software, and, to me, the physical object. Something about the human experience allows me to really see that physical object as an entity, and if it moves, and moves in a way that I didn't program, where it feels like it's acting based on its own perception, and yes, self-awareness and consciousness, even if it's a Roomba, then you start to assign it some agency, some consciousness. But to say panpsychism, that consciousness is a fundamental property of reality, is a much bigger statement. It's like turtles all the way down. Yeah. It doesn't end. The whole thing is... so, like, I know it's full of mystery, but if you can linger on it, how do you think about reality if consciousness is a fundamental part of its fabric?

[403] The way you get there is from thinking, can we explain consciousness given the existing fundamentals?

[404] And then if you can't, as at least right now, it looks like, then you've got to add something.

[405] It doesn't follow that you have to add consciousness.

[406] Here's another interesting possibility is, well, we'll add something else.

[407] Let's call it proto-consciousness or X. And then it turns out space, time, mass, plus X will somehow collectively give you the possibility for consciousness.

[408] I don't rule out that view either.

[409] I call that panprotopsychism, because maybe there's some other property, proto-consciousness, at the bottom level.

[410] And if you can't imagine there's actually genuine consciousness at the bottom level, I think we should be open to the idea.

[411] There's this other thing, X, that maybe we can't imagine, that somehow gives you consciousness.

[412] But if we are playing along with the idea that there really is genuine consciousness at the bottom level, of course this is going to be way out and speculative, but at least, say, if it was classical physics, then you'd end up saying, well, every little atom, with a bunch of particles in space-time, each of these particles has some kind of consciousness whose structure mirrors its physical properties, like its mass, its charge, its velocity, and so on.

[413] The structure of its consciousness would roughly correspond to that.

[414] And the physical interactions between particles.

[415] I mean, there's this old worry about physics.

[416] I mentioned this before in this issue about the manifest image.

[417] We don't really find out about the intrinsic nature of things.

[418] Physics tells us about how a particle relates to other particles and interacts.

[419] It doesn't tell us about what the particle is in itself.

[420] That was Kant's thing in itself.

[421] So here's a view.

[422] The nature in itself of a particle is something mental.

[423] A particle is actually a little conscious subject with properties of its consciousness that correspond to its physical properties.

[424] The laws of physics are actually ultimately relating these properties of conscious subjects.

[425] So on this view, a Newtonian world actually would be a vast collection of little conscious subjects at the bottom level way, way simpler than we are without free will or rationality or anything like that.

[426] But that's what the universe would be like.

[427] Of course, that's a vastly speculative view.

[428] No particular reason to think it's correct.

[429] Furthermore, with non-Newtonian physics, say a quantum mechanical wave function, suddenly it all starts to look different.

[430] It's not a vast collection of conscious subjects.

[431] Maybe there's ultimately one big wave function for the whole universe.

[432] Corresponding to that might be something more like a single conscious mind whose structure corresponds to the structure of the wave function.

[433] People sometimes call this cosmopsychism.

[434] And now, of course, we're in the realm of extremely speculative philosophy.

[435] There's no direct evidence for this.

[436] But yeah, if you want a picture of what that universe would be like, think, yeah, giant cosmic mind with enough richness and structure among it to replicate all the structure of physics.

[437] I think, therefore, I am at the level of particles and with quantum mechanics at the level of the wave function.

[438] And it's kind of an exciting, beautiful possibility, of course way out of reach of physics currently. It is interesting that some neuroscientists are beginning to take panpsychism seriously, the idea that you find consciousness even in very simple systems. So, for example, the integrated information theory of consciousness, which a lot of neuroscientists are taking seriously. Actually, I just got this new book by Christof Koch, it just came in, The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed.

[439] He basically endorses a panpsychist view where you get consciousness with the degree of information processing or integrated information processing in a system, and even very, very simple systems, like a couple of particles, will have some degree of this, so he ends up with some degree of consciousness in all matter.

[440] And the claim is that this theory can actually explain a bunch of stuff about the connection between the brain and consciousness.

[441] Now, that's very controversial.

[442] I think it's very, very early days in the science of consciousness.

[443] It's interesting that it's not just philosophy that might lead you in this direction, but there are ways of thinking quasi -scientifically that leads you there too.

[444] But maybe it's different than panpsychism.

[445] What do you think?

[446] So Alan Watts has this quote that I'd like to ask you about.

[447] The quote is, through our eyes, the universe is perceiving itself.

[448] Through our ears, the universe is listening to its harmonies.

[449] We are the witnesses through which the universe becomes conscious of its glory, of its magnificence.

[450] So that's not panpsychism.

[451] Do you think that we are essentially the tools, the senses the universe created to be conscious of itself?

[452] It's an interesting idea.

[453] Of course, if you went for the giant cosmic mind view, then the universe was conscious all along.

[454] It didn't need us.

[455] We're just little components of the universal consciousness.

[456] Likewise, if we believe in panpsychism, then there was some little degree of consciousness at the bottom level all along, and we were just a more complex form of consciousness.

[457] So I think maybe the quote you mentioned works better.

[458] If you're not a panpsychist and you're not a cosmopsychist,

[459] you think consciousness just exists at this intermediate level.

[460] And of course, that's the orthodox view.

[461] That you would say is the common view?

[462] So is your own view with panpsychism a rarer view?

[463] I think it's generally regarded, certainly, as a speculative view held by a fairly small minority of theorists; most philosophers and most scientists who think about consciousness are not panpsychists.

[464] There's been a bit of a movement in that direction the last 10 years or so.

[465] It seems to be quite popular, especially among the younger generation, but it's still very definitely a minority view.

[466] Many people think it's totally batshit crazy, to use the technical term.

[467] It's a philosophical term.

[468] So the orthodox view, I think, is still consciousness is something that humans have and some good number of non -human animals have, and maybe AIs might have one day, but it's restricted.

[469] On that view, then, there was no consciousness at the start of the universe, and there may be none at the end, but it is this thing which happened at some point in the history of the universe: consciousness developed.

[470] And yes, that's a very amazing event on this view, because many people are inclined to think consciousness is what somehow gives meaning to our lives; without consciousness,

[471] there'd be no meaning, no true value, no good versus bad, and so on.

[472] So with the advent of consciousness, suddenly the universe went from meaningless to somehow meaningful.

[473] Why did this happen?

[474] I guess the quote you mentioned was saying this was somehow destined to happen, because the universe needed to have consciousness within it to have value and have meaning.

[475] And maybe you could combine that with a theistic view or a teleological view.

[476] The universe was inexorably evolving towards consciousness.

[477] Actually, my colleague here at NYU, Tom Nagel, wrote a book called Mind and Cosmos a few years ago where he argued for this teleological view of evolution toward consciousness, saying this posed problems for Darwinism.

[478] It's gotten him into, you know... this is very, very controversial.

[479] Most people didn't agree.

[480] I don't myself agree with this teleological view.

[481] But it is at least a beautiful speculative view of the cosmos.

[482] What do you think people experience, what do they seek when they believe in God from this kind of perspective?

[483] I'm not an expert on thinking about God and religion.

[484] I'm not myself religious at all.

[485] When people sort of pray, communicate with God, whatever form, I'm not speaking to sort of the practices and the rituals of religion.

[486] I mean the actual experience that people really have, a deep connection with God in some cases.

[487] What do you think that experience is?

[488] It's so common, at least throughout the history of civilization, that it seems like we seek that. At the very least, it's an interesting conscious experience that people have when they experience religious awe or prayer and so on, and neuroscientists have tried to examine what bits of the brain are active and so on. But yeah, there is this deeper question of what are people looking for when they're doing this. And like I said, I've got no real expertise on this.

[489] But it does seem that one thing people are after is a sense of meaning and value, a sense of connection to something greater than themselves that will give their lives meaning and value.

[490] And maybe the thought is if there is a God, then God somehow is a universal consciousness who has invested this universe with meaning and somehow connection to God might give your life meaning.

[491] I can kind of see the attractions of that, but it still makes me wonder why exactly a universal consciousness, a God, would be needed to give the world meaning.

[492] I mean, if a universal consciousness can give the world meaning, why can't local consciousness give the world meaning too?

[493] So I think my consciousness gives my world meaning.

[494] It's the origin of meaning for your world?

[495] Yeah, I experience things as good or bad, happy,

[496] sad, interesting, important.

[497] So my consciousness invests this world with meaning.

[498] Without any consciousness, maybe it would be a bleak, meaningless universe.

[499] But I don't see why I need someone else's consciousness or even God's consciousness to give this universe meaning.

[500] Here we are local creatures with our own subjective experiences.

[501] I think we can give the universe meaning ourselves.

[502] I mean, maybe to some people, that feels inadequate.

[503] Yeah, our own local consciousness is somehow too puny and insignificant to invest any of this with cosmic significance and maybe God gives you a sense of cosmic significance, but I'm just speculating here.

[504] So, you know, it's a really interesting idea that consciousness is the thing that makes life meaningful.

[505] If you could maybe just briefly explore that for a second.

[506] So I suspect, just from listening to you now, you mean it in an almost trivial sense, just the day-to-day experiences of life; because you attach identity to them, they become... well, I guess I want to ask something I would always want to ask a legit world-renowned philosopher: what is the meaning of life?

[507] So I suspect you don't mean consciousness gives any kind of greater meaning to it all.

[508] Yeah.

[509] And more to the day-to-day.

[510] But is there greater meaning to it all?

[511] I think life has meaning for us because we are conscious.

[512] So without consciousness, no meaning.

[513] Consciousness invests our life with meaning.

[514] So consciousness is the source, in my view, of the meaning of life.

[515] But I wouldn't say consciousness itself is the meaning of life.

[516] I'd say what's meaningful in life is basically what we find meaningful, what we experience as meaningful.

[517] So if you find meaning and fulfillment and value in, say, intellectual work like understanding, then that's a very significant part of the meaning of life for you.

[518] If you find it in social connections or in raising a family, then that's the meaning of life for you.

[519] The meaning kind of comes from what you value as a conscious creature.

[520] So I think there's no, on this view, there's no universal solution.

[521] No universal answer to the question, what is the meaning of life?

[522] The meaning of life is where you find it as a conscious creature.

[523] But it's consciousness that somehow makes value possible, experiencing some things as good or as bad or as meaningful.

[524] Somehow it comes from within consciousness.

[525] So you think consciousness is a crucial component, ingredient,

[526] of assigning value to things.

[527] I mean, it's kind of a fairly strong intuition that without consciousness, there wouldn't really be any value.

[528] If we just had purely a universe of unconscious creatures, would anything be better or worse than anything else?

[529] Certainly when it comes to ethical dilemmas, you know, you know about the old trolley problem.

[530] Do you kill one person, or do you switch to the other track to kill five?

[531] Well, I've got a variant on this.

[532] The zombie trolley problem where there's one conscious being on one track and five humanoid zombies.

[533] Let's make them robots who are not conscious on the other track.

[534] Given that choice,

[535] do you kill the one conscious being or the five unconscious robots?

[536] Most people have a fairly clear intuition here.

[537] Yeah.

[538] Kill the unconscious beings, because basically they don't have a meaningful life.

[539] They're not really persons, conscious beings at all.

[540] Of course, we don't have good intuition about something like an unconscious being.

[541] So in philosophical terms, what you refer to as a zombie is a useful thought-experiment construction, but we don't yet have them.

[542] So that's kind of what we may be able to create with robots, and I don't necessarily know what that even means. Yeah, they're merely hypothetical for now. They're just a thought experiment. They may never be possible. I mean, the extreme case of a zombie is a being which is physically, functionally, behaviorally identical to me but not conscious. That's a mere hypothetical; I don't think that could ever be built in this universe. The question is just, does that hypothetically make sense?

[543] That's kind of a useful contrast class to raise questions like, why aren't we zombies?

[544] How does it come about that we're conscious?

[545] And we're not like that.

[546] But there are less extreme versions of this, like robots, which are maybe not physically identical to us, maybe not even functionally identical to us.

[547] Maybe they've got a different architecture, but they can do a lot of sophisticated things, maybe carry on a conversation, but they're not conscious.

[548] And that's not so far out.

[549] We've got simple computer systems at least tending in that direction now, and presumably this is going to get more and more sophisticated over the years to come. It's at least quite straightforward to conceive of some pretty sophisticated robot systems that can use language and be fairly high-functioning without any consciousness at all.

[550] Then let's stipulate that.

[551] I mean, there's this tricky question of how you would know whether they're conscious.

[552] But let's say we've somehow solved that, and we know that these high-functioning robots are unconscious. Then the question is, do they have moral status? Does it matter how we treat them? What does moral status mean? Basically that question: can they suffer, does it matter how we treat them? For example, if I mistreat this glass, this cup, by shattering it, then that's bad. Why is it bad? It's going to make a mess, it's going to be annoying for me and my partner, and so on. But it's not bad for the cup. No one would say the cup itself has moral status; hey, you hurt the cup, and that's doing it a moral harm. Likewise plants. Well, again, if they're not conscious, most people think that by uprooting a plant you're not harming it. But if a being is conscious, on the other hand, then you are harming it. So Siri, or, I dare not say the name, Alexa. Anyway, we don't think we're morally harming Alexa by turning her off or disconnecting her or even destroying her, whether it's the system or the underlying software system, because we don't really think she's conscious.

[553] On the other hand, you move to the disembodied being in the movie Her, Samantha.

[554] I guess she was kind of presented as conscious and then if you destroyed her, you'd certainly be committing a serious harm.

[555] So I think our strong sense is, if a being is conscious and can undergo subjective experiences, then it matters morally how we treat them.

[556] So if a robot is conscious, it matters, but if a robot is not conscious, then they're basically just meat or a machine, and it doesn't matter.

[557] So maybe how we think about this stuff is fundamentally wrong, but I think a lot of people who think about this stuff seriously, including people who think about, say, the moral treatment of animals and so on, come to the view that consciousness is ultimately kind of the line between systems where we have to take them into account in thinking morally about how we act and systems for which we don't.

[558] And I think I've seen you write or talk about the demonstration of consciousness from a system like that, from a system like Alexa or a conversational agent: that what you would be looking for, at the very basic level, is for the system to have an awareness, "I'm just a program, and yet why do I experience this?" Or not to have that experience, but to communicate it to you. So that's what us humans would sound like if you all of a sudden woke up one day, like Kafka, in the body of a bug or something, but in a computer, where you all of a sudden realize you don't have a body. And yet you would feel what you're feeling, and you would probably say those kinds of things.

[559] So do you think a system essentially becomes conscious by convincing us that it's conscious through the words that I just mentioned?

[560] So by being confused about, why am I having these experiences?

[561] So basically...

[562] I don't think this is what makes you conscious, but I do think being puzzled

[563] about consciousness is a very good sign that a system is conscious.

[564] So if I encountered a robot that actually seemed to be genuinely puzzled by its own mental states and saying, yeah, I have all these weird experiences and I don't see how to explain them.

[565] I know I'm just a set of silicon circuits, but I don't see how that would give you my consciousness.

[566] I would at least take that as some evidence that there's some consciousness going on there.

[567] I don't think a system needs to be puzzled about consciousness to be conscious.

[568] Many people aren't puzzled by their consciousness.

[569] Animals don't seem to be puzzled at all.

[570] I still think they're conscious.

[571] So I don't think that's a requirement on consciousness.

[572] But I do think if we're looking for signs of consciousness, say in AI systems, one of the things that will help convince me that an AI system is conscious is if it shows signs of introspectively recognizing something like consciousness and finding this philosophically puzzling in the way that we do.

[573] It's such an interesting thought, though, because a lot of people sort of would, at the shallow level, criticize the Turing test for language.

[574] That's essentially how I heard, like, Dan Dennett criticize it, which is that it really puts a lot of emphasis on lying.

[575] Yeah.

[576] And being able to imitate human beings, yeah, there's this cartoon of the AI system studying for the Turing test.

[577] It's got to read this book called Talk Like a Human.

[578] It's like, man, why do I have to waste my time learning how to imitate humans?

[579] Maybe the AI system is going to be way beyond the hard problem of consciousness.

[580] And it's going to be just like, why do I need to waste my time pretending that I recognize a hard problem of consciousness in order for people to recognize me as conscious?

[581] Yeah, it just feels like, I guess the question is, do you think we can never really create a test for consciousness?

[582] Because it feels like we're very human -centric.

[583] And so the only way we would be convinced that something is conscious is basically if the thing demonstrates the illusion of consciousness.

[584] We can never really know whether it's conscious or not.

[585] And in fact, that almost feels like it doesn't matter then.

[586] Or does it still matter to you that something is conscious or it demonstrates consciousness?

[587] You still see that fundamental distinction.

[588] I think to a lot of people, whether a system is conscious or not matters hugely for many things, like how we treat it, can it suffer, and so on.

[589] But still, that leaves open the question, how can we ever know?

[590] And it's true that it's awfully hard to see how we can know for sure.

[591] whether a system is conscious.

[592] I suspect that sociologically, the thing that's going to convince us that a system is conscious is, in part, things like social interaction, conversation, and so on, where they seem to be conscious, they talk about their conscious states or just talk about being happy or sad or finding things meaningful or being in pain.

[593] That will tend to convince us. If the system genuinely seems to be conscious and we don't treat it as such,

[594] eventually it's going to seem like a strange form of racism or speciesism not to acknowledge them.

[595] I truly believe that, by the way.

[596] I believe that there is going to be something akin to the civil rights movement, but for robots.

[597] I think the moment you have a Roomba say, please don't kick me, that hurts.

[598] Just say it.

[599] I think that will fundamentally change the fabric of our society.

[600] I think you're probably right, although it's going to be very tricky, because just say we've got the technology where these conscious beings can be created and multiplied by the thousands by flicking a switch.

[601] And the legal status is going to be different, but ultimately their moral status ought to be the same.

[602] And yeah, the civil rights issue is going to be a huge mess. So if one day somebody clones you, another very real possibility, in fact I find the conversation between two copies of David Chalmers quite interesting. Very thought-provoking. Who is this idiot? He's not making any sense. So what do you think, would he be conscious?

[603] I do think he would be conscious.

[604] I do think in some sense, I'm not sure it would be me. There would be two different beings at this point.

[605] I think they'd both be conscious and they'd both have many of the same mental properties.

[606] I think they'd both, in a way, have the same moral status.

[607] It would be wrong to hurt either of them or to kill them and so on.

[608] Still, there's some sense in which probably their legal status would have to be different.

[609] If I'm the original and that one's just a clone, then, you know, creating a clone of me, presumably the clone doesn't, for example, automatically own the stuff that I own. Or, you know, I've got certain connections to the people I interact with, my family, my partner, and so on; I'm going to somehow be connected to them in a way in which the clone isn't.

[610] Because you came slightly first?

[611] Yeah, because the clone would argue that they have really as much of a connection; they have all the memories of that connection. Then in a way you might say it's kind of unfair to discriminate against them. But say you've got an apartment that only one person can live in, or a partner who only one person can be with. Why should it be you, the original? It's an interesting philosophical question. But you might say, because I actually have this history. If I am the same person as the one that came before and the clone is not, then I have this history that the clone doesn't.

[612] Of course, there's also the question, isn't the clone the same person, too?

[613] This is the question about personal identity.

[614] If I continue and I create a clone over there, I want to say, this one is me and this one is someone else.

[615] But you could take the view that a clone is equally me. Of course, in a movie like Star Trek, where they have a teletransporter, it basically creates clones all the time.

[616] They treat the clones as if they're the original person.

[617] Of course, they destroy the original body in Star Trek.

[618] So there's only one

[619] left around, and only very occasionally do things go wrong and you get two copies of Captain Kirk. But somehow our legal system, at the very least, is going to have to sort out some of these issues, and maybe that means what's moral and what's legally acceptable are going to come apart. What question would you ask a clone of yourself? Is there something useful you can find out from him about the fundamentals of consciousness, even?

[620] I mean, kind of in principle, I know that if it's a perfect clone, it's going to behave just like me. So I'm not sure I'm going to be able to... I can discover whether it's a perfect clone by seeing whether it answers like me. But otherwise, I know what I'm going to find is a being which is just like me, except that it's just undergone this great shock of discovering that it's a clone.

[621] So just say you woke me up tomorrow and said, hey Dave, sorry to tell you this, but you're actually the clone, and you provided me really convincing evidence, showed me the film of my being cloned and then being brought here and waking up. So you proved to me I'm a clone. Well, yeah, okay, I would find that shocking, and who knows how I would react to this. So maybe by talking to the clone I'd find out something about my own psychology that I can't find out so easily, like how I'd react upon discovering that I'm a clone. I could certainly ask the clone if it's conscious and what its consciousness is like, and so on.

[622] But I guess I kind of know if it's a perfect clone, it's going to behave roughly like me. Of course, at the beginning, there'll be a question about whether a perfect clone is possible.

[623] So I may want to ask it lots of questions to see if its consciousness, the way it talks about its consciousness, and the way it reacts to things in general are like me. And that will occupy us for a long time.

[624] It's basically unit testing on the early models.

[625] So if it's a perfect clone, you say that it's going to behave exactly like you.

[626] So that takes us to free will.

[627] Is there a free will?

[628] Are we able to make decisions that are not predetermined from the initial conditions of the universe?

[629] You know, philosophers do this annoying thing of saying it depends what you mean.

[630] So in this case, you know, yeah, it really depends on what you mean by free will. If you mean something which was not determined in advance, could never have been determined, then I don't know that we have free will.

[631] I mean, there's quantum mechanics and who's to say if that opens up some room, but I'm not sure we have free will in that sense.

[632] I'm also not sure that's the kind of free will that really matters.

[633] You know, what matters to us is being able to do what we want and to create our own futures.

[634] We've got this distinction between having our lives be under our control and under someone else's control.

[635] We've got the sense of actions that we are responsible for versus ones that we're not.

[636] I think you can make those distinctions even in a deterministic universe.

[637] And this is what people call the compatibilist view of free will, where it's compatible with determinism.

[638] I think for many purposes, the kind of free will that matters is something we can have in a deterministic universe.

[639] And I can't see any reason in principle why an AI couldn't have free will of that kind.

[640] If you mean super duper free will, the ability to violate the laws of physics and do things that in principle could not be predicted, I don't know, maybe no one has that kind of free will.

[641] What's the connection between the reality of free will and the experience of it, the subjective experience in your view?

[642] So how does consciousness connect to the reality and the experience of free will?

[643] It's certainly true that when we make decisions and when we choose and so on, we feel like we have an open future.

[644] Yes.

[645] I feel like, I could do this.

[646] I could go into philosophy or I could go into math.

[647] I could go to a movie tonight.

[648] I could go to a restaurant.

[649] So we experience these things as if the future is open.

[650] Maybe we experience ourselves as exerting a kind of effect on the future, somehow picking out one path from many paths that were previously open.

[651] And you might think that actually if we're in a deterministic universe, there's a sense in which objectively those paths weren't really open all along, but subjectively they were open.

[652] And that's, I think that's what really matters in making a decision.

[653] So our experience of making a decision is choosing a path for ourselves.

[654] I mean, in general, our introspective models of the mind, I think are generally very distorted representations of the mind.

[655] So it may well be that our experience of ourselves in making a decision, our experience of what's going on, doesn't terribly well mirror what's going on.

[656] I mean, maybe there are antecedents in the brain way before anything came into consciousness and so on.

[657] Those aren't represented in our introspective model.

[658] So in general, our experience of perception, you know, I experience a perceptual image of the external world.

[659] It's not a terribly good model of what's actually going on in my visual cortex and so on, which has all these layers and so on.

[660] It's just one little snapshot of one bit of that.

[661] So in general, you know, introspective models are very oversimplified, and it wouldn't be surprising if that was true of free will as well.

[662] This also, incidentally, can be applied to consciousness itself.

[663] There is this very interesting view that consciousness itself is an introspective

[664] illusion.

[665] In fact, we're not conscious, but the brain just has these introspective models of itself where it oversimplifies everything and represents itself as having these special properties of consciousness.

[666] It's a really simple way to kind of keep track of itself and so on.

[667] And then on the illusionist view, yeah, that's just, that's just an illusion.

[668] While I find this view implausible, I do find it very attractive in some ways, because it's easy to tell some story about how the brain would create introspective models of its own consciousness and its own free will as a way of simplifying itself. I mean, in a similar way, when we perceive the external world, we perceive it as having these colors that maybe it doesn't really have, because that's a really useful way of keeping track. Did you say that you find it not very plausible? Because I find it both plausible and attractive in some sense, because that kind of view is one that has the minimum amount of mystery around it.

[669] You can kind of understand that kind of view.

[670] Everything else says we don't understand so much of this picture.

[671] No, it is very attractive.

[672] I recently wrote an article all about this kind of issue, called the meta-problem of consciousness.

[673] The hard problem is how does a brain give you consciousness?

[674] The meta problem is why are we puzzled by the hard problem of consciousness?

[675] Because, you know, our being puzzled by it, that's ultimately a bit of behavior.

[676] We might be able to explain that bit of behavior as one of the easy problems

[677] of consciousness.

[678] So maybe there'll be some computational model that explains why we're puzzled by consciousness.

[679] The meta-problem is to come up with that model.

[680] And I've been thinking about that a lot lately.

[681] There are some interesting stories you can tell about why the right

[682] kind of computational system might develop these introspective models of itself that attribute to itself these special properties.

[683] So that meta-problem is a research program for everyone.

[684] And then if you've got an attraction to sort of simple views, desert landscapes and so on, then you can go all the way with what people call illusionism and say, in fact, consciousness itself is not real.

[685] What is real is just these introspective

[686] models we have that tell us that we're conscious.

[687] So the view is very simple, very attractive, very powerful.

[688] The trouble is, of course, it has to say that deep down consciousness is not real.

[689] We're not actually experiencing right now, and it looks like it's just contradicting a fundamental datum of our existence.

[690] And this is why most people find this view crazy. Just as they find panpsychism crazy in one way,

[691] people find illusionism crazy in another way.

[692] But I mean, so yes, it has to deny this fundamental datum of our existence.

[693] Now, that makes the view sort of frankly unbelievable for most people.

[694] On the other hand, the view developed right might be able to explain why we find it unbelievable because these models are so deeply hardwired into our head.

[695] And they're all integrated.

[696] It's not like you can escape the illusion.

[697] And as a crazy possibility, is it possible that the entirety of the universe, our planet, all the people in New York, all the organisms on our planet, including me here today, are not real in that sense?

[698] They're all part of an illusion inside of Dave Chalmers' head.

[699] I think all this could be a simulation.

[700] No, but not just a simulation.

[701] Yeah.

[702] Because the simulation kind of is outside of

[703] you?

[704] A dream?

[705] What if it's all an illusion, yes, a dream that you are experiencing?

[706] That's, it's all in your mind, right?

[707] Is that, can you take illusionism that far?

[708] Well, there's illusionism about the external world and illusionism about consciousness, and these might go in different directions.

[709] Illusionism about the external world kind of takes you back to Descartes.

[710] And yeah, could all this be produced by an evil demon?

[711] Descartes himself had the dream argument.

[712] He said, how do you know you're not dreaming right now?

[713] How do you know this is not an amazing dream?

[714] And I think it's at least a possibility that, yeah, this could be some super duper complex dream in the next universe up.

[715] I guess, though, my attitude is that just as, I mean, Descartes thought that if the evil demon was doing it, it's not real.

[716] A lot of people these days say if a simulation is doing it, it's not real.

[717] As I was saying before, I think even if it's a simulation, that doesn't stop this from being real.

[718] It just tells us what the world is made of.

[719] Likewise, if it's a dream, it could turn out that all this is like my dream created by my brain and the next universe up.

[720] My own view is that wouldn't stop this physical world from being real.

[721] It would turn out this cup at the most fundamental level was made of a bit of, say, my consciousness in the dreaming mind at the next level up.

[722] Maybe that would give you a kind of weird kind of pan -psychism about reality, but it wouldn't show that the cup isn't real, but just tell us it's ultimately made of processes in my dreaming mind.

[723] So I'd resist the idea that if the physical world is a dream, then it's an illusion.

[724] Right.

[725] By the way, perhaps you have an interesting thought about it.

[726] Why is Descartes' demon or genius considered evil?

[727] Why couldn't it have been a benevolent one that had the same powers?

[728] Yeah, I mean, Descartes called it the malign genie, the evil genie or evil genius.

[729] Malin, I guess, was the word.

[730] But, yeah, it's an interesting question.

[731] I mean, a later philosopher, Berkeley, said, no, in fact, all this is done by God.

[732] God actually supplies you all of these perceptions and ideas.

[733] that's how physical reality is sustained.

[734] And interestingly, Berkeley's God is doing something that doesn't look so different from what Descartes' evil demon was doing.

[735] It's just that Descartes thought it was deception, and Berkeley thought it was not.

[736] And I'm actually more sympathetic to Berkeley here.

[737] Yeah, this evil demon may be trying to deceive you, but I think, okay, well, the evil demon may just be working under a false philosophical theory.

[738] It thinks it's deceiving you, but it's wrong.

[739] It's like the machines in The Matrix.

[740] They thought they were deceiving you into thinking all this stuff is real.

[741] I think, no, if we're in a Matrix, it's all still real.

[742] Yeah, the philosopher O.K. Bouwsma had a nice story about this about 50 years ago, about Descartes' evil demon, where this demon spends all its time trying to fool people but fails, because somehow all the demon ends up doing is constructing realities for people.

[743] So yeah, I think maybe it's very natural to take this view that if we're in a simulation or an evil demon scenario or something, then none of this is real.

[744] But I think it may be ultimately a philosophical mistake, especially if you take on board sort of the view of reality where what matters to reality is really its structure, something like its mathematical structure and so on, which seems to be the view that a lot of people take from contemporary physics.

[745] and it looks like you can find all that mathematical structure in a simulation, maybe even in a dream, and so on.

[746] So as long as that structure is real, I would say that's enough for the physical world to be real.

[747] Yeah, the physical world may turn out to be somewhat more intangible than we had thought and to have a surprising nature, but we've already gotten very used to that from modern science.

[748] See, you've kind of alluded to the idea that you don't have to have consciousness for high levels of intelligence, to create truly general intelligence systems, AGI systems, human-level intelligence and perhaps superhuman-level intelligence. You've talked about how you feel that kind of thing might be very far away, but nevertheless, when we reach that point, do you think consciousness, from an engineering perspective, is needed, or at least highly beneficial, for creating an AGI system?

[749] Yeah, no one knows what consciousness is for, functionally. So right now there's no specific thing we can point to and say, you need consciousness for that. So my inclination is to believe that, in principle, AGI is possible. At the very least, I don't see why someone couldn't simulate a brain, ultimately have a computational system that produces all of our behavior. And if that's possible, I'm sure vastly many other computational systems of equal or greater sophistication are possible, with all of our cognitive functions and more.

[750] And my inclination is to think that once you've got all these cognitive functions, perception, attention, reasoning, introspection, language, emotion, and so on,

[751] It's very likely you'll have consciousness as well.

[752] At least it's very hard for me to see how you'd have a system that had all those things while somehow bypassing consciousness. So it's just naturally integrated? Quite naturally, there's a lot of overlap in the kind of function required to achieve each of those things, so you can't disentangle them, at least in us. But we don't know the causal role of consciousness in the physical world, what it does. I mean, just say it turns out consciousness does something very specific in the physical world, like collapsing wave functions, as on one common interpretation of quantum mechanics. Then we might ultimately find some place where it actually makes a difference, and we could say, ah, here is where, in collapsing wave functions, it's driving the behavior of a system. And maybe it could even turn out that for AGI you'd need something playing that role. I mean, if you wanted to connect this to free will, some people think consciousness collapsing wave functions would be how the conscious mind exerts its effect on the physical world and exerts its free will.

[753] And maybe it could turn out that any AGI that didn't utilize that mechanism would be limited in the kinds of functionality that it had.

[754] I don't myself find that plausible.

[755] I think probably that functionality could be simulated.

[756] But you could imagine, once we had a very specific idea about the role of consciousness in the physical world, this would have some impact on the capacity of AGIs, and if it was a role that could not be duplicated elsewhere, then we'd have to find some way to either get consciousness in the system to play that role or to simulate it.

[757] If we can isolate a particular role for consciousness, of course. That seems like an incredibly difficult thing.

[758] Do you have worries about existential threats of conscious, intelligent beings that are not us?

[759] So certainly, I'm sure you're worried about us from an existential threat perspective but outside of us, AI systems.

[760] There's a couple of different kinds of existential threats here.

[761] One is an existential threat to consciousness generally.

[762] I mean, yes, I care about humans and the survival of humans and so on, but just say it turns out that eventually we're replaced by some artificial beings that aren't humans but are somehow our successors. They still have good lives, they still do interesting and wonderful things with the universe.

[763] I don't think that's so bad.

[764] That's just our successors.

[765] We were one stage in evolution.

[766] Something different, maybe better came next.

[767] If, on the other hand, all of consciousness was wiped out, that would be a very serious moral disaster.

[768] One way that could happen is by all intelligent life being wiped out.

[769] And many people think that, yeah, once you get to humans and AIs of amazing sophistication, where everyone has got the ability to create weapons that can destroy the whole universe just by pressing a button, then maybe it's inevitable that all intelligent life will die out. That would certainly be a disaster, and we've got to think very hard about how to avoid it. But yeah, another interesting kind of disaster is that maybe intelligent life is not wiped out, but all consciousness is wiped out. So just say you thought, unlike what I was saying a moment ago, that there are two different kinds of intelligent systems, some which are conscious and some which are not.

[770] And just say it turns out that we create AGI with a high degree of intelligence, meaning a high degree of sophistication in its behavior, but with no consciousness at all.

[771] That AGI could take over the world, maybe, but then there'd be no consciousness in this world.

[772] This would be a world of zombies.

[773] Some people have called this the zombie apocalypse, because it's an apocalypse for consciousness.

[774] Consciousness is gone.

[775] You've merely got these superintelligent, non-conscious robots.

[776] And I would say that's a moral disaster in the same way, in almost the same way that the world with no intelligent life is a moral disaster.

[777] All value and meaning may be gone from that world.

[778] So these are both threats to watch out for.

[779] Now, my own view is if you get super intelligence, you're almost certainly going to bring consciousness with it.

[780] So I hope that's not

[781] going to happen, but of course, I don't understand consciousness.

[782] No one understands consciousness.

[783] This is one reason, at least, among many, for thinking very seriously about consciousness and thinking about the kind of future we want to create in a world with humans and or AIs.

[784] How do you feel about the possibility, if consciousness so naturally does come with AGI systems, that we are just a step in the evolution, that we will be just something of a blip on the record, to be studied in books by the AGI systems centuries from now?

[785] I mean, I think I'd probably be okay with that, especially if somehow humans are continuous with AGI.

[786] I mean, I think something like this is inevitable.

[787] At the very least, humans are going to be transformed.

[788] We're going to be augmented by technology.

[789] It's already happening.

[790] In all kinds of ways, we're going to be transformed by technology where our brains are going to be uploaded.

[791] and computationally enhanced.

[792] And eventually that line between what's a human and what's an AI may be kind of hard to draw.

[793] How much does it matter, for example, that some future being a thousand years from now that somehow descended from us actually still has biology?

[794] I think it would be nice if you could kind of point to its cognitive system, point to some parts that had some roots in us, and trace a continuous line there. That would be selfishly nice for me, to think that, okay, I'm connected to this thread line through the future of the world.

[795] But if it turns out, okay, there's a jump there.

[796] They found a better way to design cognitive systems.

[797] They designed a whole new kind of thing.

[798] And the only line is some causal chain of design, of systems that design better systems.

[799] Is that so much worse?

[800] I don't know.

[801] We're still at least part of a causal chain of design.

[802] And yes, they're not humans, but still they are our successors.

[803] So, I mean, ultimately, I think it's probably inevitable that something like that will happen.

[804] And at least we were part of the process.

[805] It'd be nice if they still cared enough about us to, you know, maybe to engage with our arguments.

[806] And I'm really hoping that the AGIs are going to solve all the problems of philosophy.

[807] They'll come back and read all this crappy work from the 20th and 21st century, the hard problem of consciousness, and say, here is why they got it wrong.

[808] And so on. If that happened, then I'd really feel like I was

[809] part of at least an intellectual process over centuries, and that would be kind of cool.

[810] I'm pretty sure they would clone or they would recreate David Chalmers and for the fun of it, sort of bring back other philosophers.

[811] Yeah, bring back Descartes.

[812] Descartes and just put them in a room and just watch.

[813] It'll be a Netflix-of-the-future show where you bring philosophers, 100% human philosophers from previous generations, put them in a room, and see them.

[814] I am totally up for that.

[815] Simulators, AGIs of the future, if you're watching this podcast, do that. I would like to be recreated. Who would it be? Would Descartes be the one? If you could hang out, as part of such a TV show, with a philosopher that's no longer with us, from long ago, who would you choose? Descartes would have to be right up there. Oh, actually, a couple of months ago I got to have a conversation with Descartes, an actor who's actually a philosopher came out on stage playing Descartes.

[816] I didn't know this was going to happen.

[817] And it was just after I gave a talk, and it was a bit of a surreal moment.

[818] My ideas were crap and all derived from him and so on.

[819] We had a long argument.

[820] This was great.

[821] I would love to see what Descartes would think about AI, for example, and modern neuroscience and so on.

[822] And I suspect not too much would surprise him.

[823] But, yeah, William James. You know, for a psychologist of consciousness, I think James was probably the richest. Or there's Immanuel Kant.

[824] I never really understood what he was up to; maybe I would if I got to actually talk to him about some of this.

[825] Hey, there was Princess Elizabeth, who talked with Descartes and who really, you know, got at the problems of how Descartes' ideas of a non-physical mind interacting with the physical body couldn't really work.

[826] She's been kind of, most philosophers think she's been proved right.

[827] So maybe put me in a room with Descartes and Princess Elizabeth and we can all argue it out.

[828] What kind of future, so we talked about with zombies, a concerning future, but what kind of future excites you?

[829] What do you think, if we look forward? Sort of, we're at the very early stages of understanding consciousness, and we're now at the early stages of being able to engineer complex, interesting systems that have degrees of intelligence, and maybe one day they'll have degrees of consciousness; maybe we'll be able to upload brains, all those possibilities, virtual reality.

[830] Is there a particular aspect of this future world that just excites you?

[831] I think there are lots of different aspects.

[832] I mean, frankly, I want it to hurry up and happen.

[833] It's like, yeah, we've had some progress lately in AI and VR, but in the grand scheme of things, it's still kind of slow.

[834] The changes are not yet transformative.

[835] and, you know, I'm in my 50s.

[836] I've only got so long left.

[837] I'd like to see really serious AI in my lifetime and really serious virtual worlds.

[838] Because, yeah, I would like to be able to hang out in a virtual reality which is richer than this reality, to really get to inhabit fundamentally different kinds of spaces. Well, and I would very much like to be able to upload my mind onto a computer, so that

[839] maybe I don't have to die.

[840] Maybe this is done by gradually replacing my neurons with silicon chips, and, like, selfishly, that would be wonderful.

[841] I suspect I'm not going to quite get there in my lifetime.

[842] But once that's possible, then you've got the possibility of transforming your consciousness in remarkable ways, augmenting it, enhancing it.

[843] So let me ask then, if such a system is a possibility within your lifetime,

[844] and you were given the opportunity to become immortal in this kind of way.

[845] Would you choose to be immortal?

[846] Yes, I totally would.

[847] I know some people say they wouldn't, that it would be awful to be immortal, it would be so boring or something.

[848] I don't see, I really don't see why this might be.

[849] I mean, even if it's just ordinary life that continues, ordinary life is not so bad.

[850] But furthermore, I kind of suspect that, you know, if the universe is going to go on forever or indefinitely, it's going to continue to be interesting.

[851] I don't take the view that, you know, we've just hit the one romantic point of interest now and afterwards it's all going to be boring, superintelligent stasis.

[852] I guess my vision is more like, no, it's going to continue to be infinitely interesting.

[853] Something like, as you go up the set-theoretic hierarchy. You know, you go from the finite cardinals to aleph-zero and then on through to aleph-one and aleph-two and maybe the continuum, and you keep taking power sets.

[854] And in set theory, they've got these results that actually all this is fundamentally unpredictable.

[855] It doesn't follow any simple computational patterns.

[856] There are new levels of creativity as the set-theoretic universe expands and expands.
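
For reference, here is a minimal sketch, in LaTeX, of the standard set-theoretic facts being gestured at here; Cantor's theorem and the independence of the continuum hypothesis are textbook results (Gödel 1940, Cohen 1963), not claims made in this conversation.

% Cantor's theorem: every set X is strictly smaller than its power set P(X),
% so taking power sets generates an unending tower of infinities.
\[
  |X| < |\mathcal{P}(X)| \quad \text{for every set } X,
  \qquad \text{hence} \qquad
  \aleph_0 < 2^{\aleph_0} < 2^{2^{\aleph_0}} < \cdots
\]
% The alephs enumerate the infinite cardinals, and the continuum hypothesis (CH)
% asks where the continuum 2^{\aleph_0} sits in that sequence:
\[
  \aleph_0 < \aleph_1 < \aleph_2 < \cdots,
  \qquad
  \text{CH:}\; 2^{\aleph_0} = \aleph_1 .
\]
% By Gödel (1940) and Cohen (1963), assuming ZFC is consistent, CH can be neither
% proved nor refuted from the ZFC axioms; this kind of independence is one precise
% sense in which the growth of the set-theoretic universe is "fundamentally
% unpredictable."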

[857] I guess that's my future.

[858] That's my vision of the future.

[859] That's my optimistic vision of the future of superintelligence.

[860] It will keep expanding and keep growing, but still being fundamentally unpredictable at many points.

[861] I mean, yes, this creates all kinds of worries, like couldn't it all be fragile and be destroyed at any point?

[862] So we're going to need a solution to that problem.

[863] But if we get to stipulate that I'm immortal, well, I hope that I'm not just immortal and stuck in the single world forever, but I'm immortal and get to take part in this process of going through infinitely rich, created futures.

[864] Rich, unpredictable, exciting.

[865] Well, I think I speak for a lot of people in saying, I hope you do become immortal and there'll be that Netflix show of the future where you get to argue with Descartes, perhaps for all eternity.

[866] So, Dave, it was an honor.

[867] Thank you so much for talking today.

[868] Thanks.

[869] It was a pleasure.

[870] Thanks for listening to this conversation.

[871] And thank you to our presenting sponsor, Cash App.

[872] Download it, use code Lex Podcast.

[873] You'll get $10, and $10 will

[874] go to first, an organization that inspires and educates young minds to become science and technology innovators of tomorrow.

[875] If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Friedman.

[876] And now let me leave you with some words from David Chalmers.

[877] Materialism is a beautiful and compelling view of the world, but to account for consciousness, we have to go beyond the resources it provides.

[878] Thank you for listening.

[879] I hope to see you next time.