#83 – Nick Bostrom: Simulation and Superintelligence

Lex Fridman Podcast

Full Transcription:

[0] The following is a conversation with Nick Bostrom, a philosopher at the University of Oxford and the director of the Future of Humanity Institute.

[1] He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence.

[2] I can see talking to Nick multiple times on this podcast, many hours each time, because he has done some incredible work

[3] in artificial intelligence, in the technology space, in science, and really in philosophy in general.

[4] But we have to start somewhere.

[5] This conversation was recorded before the outbreak of the coronavirus pandemic, which I'm sure both Nick and I will have a lot to say about next time we speak.

[6] And perhaps that is for the best, because the deepest lessons can be learned only in retrospect when the storm has passed.

[7] I do recommend you read many of his papers on the topic of existential

[8] risk, including the technical report titled Global Catastrophic Risks Survey that he co-authored with Anders Sandberg.

[9] For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way.

[10] Stay strong.

[11] We're in this together.

[12] We'll beat this thing.

[13] This is the Artificial Intelligence Podcast.

[14] If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.

[15] As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation.

[16] I hope that works for you and doesn't hurt the listening experience.

[17] This show is presented by Cash App, the number one finance app in the App Store.

[18] When you get it, use code Lex Podcast.

[19] Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar.

[20] Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel.

[21] So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier.

[22] So again, if you get Cash App from the App Store or Google Play and use the code Lex Podcast, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world.

[23] And now, here's my conversation with Nick Bostrom.

[24] At the risk of asking the Beatles to play Yesterday or the Rolling Stones to play Satisfaction, let me ask you the basics.

[25] What is the simulation hypothesis, that we are living in a computer simulation? What is the computer simulation? How are we supposed to even think about that? Well, so the hypothesis is meant to be understood in a literal sense, not that we can kind of metaphorically view the universe as an information-processing physical system, but that there is some advanced civilization who built a lot of computers, and that what we experience is an effect of what's going on inside one of those computers, so that the world around us, our own brains, everything we see and perceive and think and feel would exist because this computer is running certain programs.

[26] So do you think of this computer as something similar to the computers of today, these deterministic sort of Turing machine type things?

[27] Is that what we're supposed to imagine, or we're supposed to think of something more like a quantum mechanical system, something much bigger, something much more complicated, something much more mysterious from our current perspective?

[28] The ones we have today would do fine, I mean, bigger, certainly.

[29] You'd need more memory and more processing power.

[30] I don't think anything else would be required.

[31] Now, it might well be that they do have additional capabilities; maybe they have quantum computers and other things that would give them even more oomph.

[32] It seems kind of plausible, but I don't think it's a necessary assumption in order to get to the conclusion that a technologically mature civilization would be able to create these kinds of computer simulations with conscious beings inside them.

[33] So do you think the simulation hypothesis is an idea that's most useful in philosophy, computer science, physics? Sort of, where do you see it having value as a kind of starting point, in terms of the thought experiment of it?

[34] Is it useful?

[35] I guess it's more informative and interesting and maybe important, but it's not designed to be useful for something else.

[36] Okay, interesting, sure, but is it philosophically interesting, or are there some kinds of implications for computer science and physics?

[37] I think not so much for computer science or physics per se.

[38] Certainly it would be of interest in philosophy, I think, also to say cosmology or physics in as much as you're interested in the fundamental building blocks of the world and the rules that govern it.

[39] If we are in a simulation, there is then the possibility that, say, physics at the level of the computer running the simulation could be different from the physics governing phenomena in the simulation.

[40] So I think it might be interesting from the point of view of religion, or just for kind of trying to figure out what the heck is going on.

[41] So we mentioned the simulation hypothesis so far.

[42] There is also the simulation argument, which I tend to distinguish from it.

[43] So simulation hypothesis: we are living in a computer

[44] simulation.

[45] The simulation argument is the argument that tries to show that one of three propositions is true, one of which is the simulation hypothesis, but there are two alternatives in the original simulation argument, which we can get to.

[46] Yeah, let's go there.

[47] By the way, confusing terms, because people will, I think, probably naturally think simulation argument equals simulation hypothesis, just terminology-wise, but let's go there.

[48] So the simulation hypothesis means that we are living in a simulation, the hypothesis that we're living in a simulation, and the simulation argument has these three possibilities that together cover all possibilities.

[49] So what are they?

[50] Yeah, so it's like a disjunction.

[51] It says at least one of these three is true.

[52] Although it doesn't on its own tell us which one.

[53] So the first one is that almost all civilizations at our current stage of technological development go extinct before they reach technological maturity.

[54] So there is some great filter that makes it so that basically none of the civilizations throughout the, you know, maybe vast cosmos will ever get to realize the full potential of technological development.

[55] And this could be, theoretically speaking, because most civilizations kill themselves too eagerly or destroy themselves too eagerly, or it might be super difficult to build a simulation.

[56] So the span of time...

[57] Theoretically, it could be both.

[58] Now, I think it looks like we would technologically be able to get there in a time span that is short compared to, say, the lifetime of planets and other sort of astronomical processes.

[59] So your intuition is that to build a simulation is not... Well, so there's this interesting concept of technological maturity.

[60] It's kind of an interesting concept to have for other purposes as well.

[61] We can see, even based on our current limited understanding, what some lower bound would be on the capabilities that you could realize by just developing technologies that we already see are possible.

[62] So, for example, one of my research fellows here, Eric Drexler, back in the 80s, studied molecular manufacturing.

[63] That is, you could analyze, using theoretical tools and computer modeling, the performance of various molecularly precise structures that we didn't then and still don't today have the ability to actually fabricate.

[64] But you could say that, well, if we could put these atoms together in this way, then the system would be stable and it would rotate at this speed and have these computational characteristics.

[65] And he also outlined some pathways that would enable us to get to this kind of molecular manufacturing in the fullness of time.

[66] You could do other studies; we've done some.

[67] You could look at the speed at which, say, it would be possible to colonize the galaxy if you had mature technology.

[68] We have an upper limit, which is the speed of light.

[69] We have sort of a lower current limit, which is how fast current rockets go.

[70] We know we can go faster than that by just, you know, making them bigger and having more fuel and stuff.

[71] And you can then start to describe the technological affordances that would exist once a civilization has had enough time to develop, at least those technologies we already know are possible.

[72] Then maybe they would discover other new physical phenomena as well that we haven't realized that would enable them to do even more.

[73] But at least there is this kind of basic set of capabilities.

[74] Can you just linger on that?

[75] How do we jump from molecular manufacturing to deep space exploration to mature technology?

[76] Like, what's the connection?

[77] Well, so these would be two examples of technological capability sets that we can have a high degree of confidence are physically possible in our universe, and that a civilization that was allowed to continue

[78] to develop its science and technology would eventually attain.

[79] You can intuit, like, we can kind of see the set of breakthroughs that are likely to happen, so you can see, like, what did you call it, a technological set?

[80] With computers, maybe it's easiest.

[81] One is we could just imagine bigger computers using exactly the same parts that we have, so you can kind of scale things that way, right?

[82] But you could also make processors a bit faster if you had this molecular nanotechnology that Eric Drexler described.

[83] He characterized a kind of crude computer built with these parts that would perform at a million times the human brain while being significantly smaller, the size of a sugar cube.

[84] And he made no claim that that's the optimum computing structure.

[85] Like for all you know, you could build faster computers that would be more efficient, but at least you could do that if you had the ability to do things that were atomically precise.

[86] Yes.

[87] I mean, so you can then combine.

[88] these two.

[89] You could have this kind of nanomolecular ability to build things atom by atom, and then, say, at the spatial scale that would be attainable through space-colonizing technology.

[90] You could then start, for example, to characterize a lower bound on the amount of computing power that a technologically mature civilization would have.

[91] If it could grab resources, you know, planets and so forth, and then use this molecular nanotechnology to optimize for computing, you'd get a very, very high lower bound on the amount of compute.

[92] So, sorry, just to define some terms.

[93] So technologically mature civilization is one that took that piece of technology to its lower bound.

[94] What is technological maturity?

[95] Well, okay, so that means it's a stronger concept than we really need for the simulation hypothesis.

[96] I just think it's interesting in its own right.

[97] So it would be the idea that there is some stage of technological development where you've maxed out, that you've developed all those general-purpose, widely useful technologies that could be developed, or at least kind of come very close to, you know, 99.9% there or something.

[98] So that's an independent question.

[99] You can think either that there is such a ceiling or you might think it just goes, the technology tree just goes on forever.

[100] Where does your sense fall?

[101] I would guess that there is a maximum that you would start to asymptote to. So new things won't keep springing up, new ceilings?

[102] In terms of basic technological capabilities, I think there is like a finite set of those that can exist in this universe.

[103] Moreover, I mean, I wouldn't be that surprised if we actually reached close to that level fairly shortly after we have, say, machine superintelligence.

[104] So I don't think it would take millions of years for a human-originating

[105] civilization to begin to do this. I think it's more likely to happen on historical timescales.

[106] But that's an independent speculation from the simulation argument.

[107] I mean, for the purpose of the simulation argument, it doesn't really matter whether it goes indefinitely far up or whether there's a ceiling, as long as we know we can at least get to a certain level.

[108] And it also doesn't matter whether that's going to happen in a hundred years or five thousand years or 50 million years. Like, the timescales really don't make any difference for this. Isn't this a little bit, like, there's a big difference between 100 years and 10 million years? Yeah. So does it really not matter? Because you just said, does it matter if we jump scales to beyond historical scales? So we described that. So for the simulation argument, sort of, doesn't it matter that, if it takes 10 million years, it gives us a lot more opportunity to destroy civilization in the meantime?

[109] Yeah, well, so it would shift around the probabilities between these three alternatives.

[110] That is, if we are very, very far away from being able to create these simulations, if it's like, say, billions of years into the future, then it's more likely that we will fail ever to get there.

[111] There's more time for us to kind of, you know, go extinct along the way.

[112] And so similarly for other civilizations.

[113] So it is important to think

[114] about how hard it is to build the simulation in terms of figuring out which of the disjuncts.

[115] But the simulation argument itself is agnostic as to which of these three alternatives is true.

[116] You don't have to, like the simulation argument would be true whether or not we thought this could be done in 500 years or it would take 500 million years.

[117] For sure, the simulation argument stands. I mean, I'm sure there might be some people who oppose it, but it doesn't matter. I mean, it's very nice that those three cases cover it. But the fun part is, at least, not saying what the probabilities are, but kind of thinking about, kind of intuitive reasoning about, like, what's more likely, what are the kinds of things that would make some of the arguments less and more likely. So, like, but let's actually, I don't think we went through them. So number one is we destroy ourselves before we ever create the simulator. Right. So that's kind of sad, but we have to think not just what might destroy us.

[118] I mean, so there could be some, whatever, disasters or meteorites slamming into the Earth a few years from now that could destroy us, right?

[119] But you'd have to postulate in order for this first disjunct to be true that almost all civilizations throughout the cosmos also failed to reach technological maturity.

[120] And the underlying assumption there is that there is likely a very large number of other intelligent civilizations.

[121] Well, if there are, yeah, then they would virtually all have to succumb in the same way.

[122] I mean, then that leads off into another...

[123] I guess there are a lot of little digressions that are interesting.

[124] Let's go there.

[125] Let's go there.

[126] I'll keep dragging us back.

[127] Well, there are these, there is a set of basic questions that always come up in conversations with interesting people.

[128] Like the Fermi paradox. Like, you could almost define whether a person is interesting by whether at some point the question of the Fermi paradox comes up. Well, so, for what it's worth, it looks to me that the universe is very big. I mean, in fact, according to the most popular current cosmological theory, it is infinitely big. And so then it would follow pretty trivially that it would contain a lot of other civilizations, in fact, infinitely many.

[129] If you have some local stochasticity and infinitely many, it's like, you know, infinitely many lumps of matter, one next to another, and there's kind of random stuff in each one, then you're going to get all possible outcomes with probability one, infinitely repeated.

[130] So then certainly there would be a lot of extraterrestrials out there.

[131] Even short of that, if the universe is very big, there might be a finite but large number.

[132] If we were literally the only one, yeah, then of course, if we went extinct, then all civilizations at our current stage would have gone extinct before becoming technologically mature.

[133] So then it kind of becomes trivially true that a very high fraction of those went extinct.

[134] But if we think there are many, I mean, it's interesting.

[135] Because there are certain things that plausibly could kill us, like if you look at existential risks.

[136] And it might be different. Like, the best answer to what would be most likely to kill us might be a different answer than the best answer to the question, if there is something that kills almost everyone, what would that be?

[137] Because that would have to be some risk factor that was kind of uniform over all possible civilizations.

[138] Yeah.

[139] So, for the sake of this argument, you have to think about not just us but, like, every civilization dies out before they create the simulation. Yeah, or something very close to everybody. Okay, so what's number two? Well, so number two is the convergence hypothesis, that is, that maybe, like, a lot of, some of these civilizations do make it through to technological maturity, but out of those who do get there, they all lose interest in creating these simulations.

[140] So they just, they have the capability of doing it, but they choose not to.

[141] Yeah.

[142] Not just a few of them decide not to, but, you know, out of a million, you know, maybe not even a single one of them would do it.

[143] And I think when you say lose interest, that sounds unlikely, because it's like they get bored or whatever, but there could be so many possibilities within that.

[144] I mean, losing interest could be, it could be anything from it being exceptionally difficult to do to fundamentally changing the sort of the fabric of reality if you do it, ethical concerns, all those kinds of things could be exceptionally strong pressures.

[145] Well, certainly, I mean, yeah, ethical concerns.

[146] I mean, not really too difficult to do.

[147] I mean, in a sense, that's the first assumption.

[148] That you get to technological maturity, where you would have the ability, using only a tiny fraction of your resources, to create many simulations.

[149] So it wouldn't be the case that they would need to spend half of their GDP forever in order to create one simulation and have this difficult debate about whether they should invest half of their GDP for this.

[150] It would more be like, well, if any little fraction of the civilization feels like doing this at any point during maybe their, you know, millions of years of existence, then there would be millions of simulations.

[151] But certainly, there could be many conceivable reasons for why there would be this convergence, many possible reasons for not running ancestor simulations or other computer simulations, even if you could do so cheaply.

[152] By the way, what's an ancestor simulation?

[153] Well, that would be the type of computer simulation that would contain people, like those we think have lived on our planet in the past and like ourselves in terms of the types of experiences they have and where those simulated people are conscious.

[154] So, like, not just simulated in the same sense that a non-player character would be simulated in a current computer game, where it kind of has, like, an avatar body and then a very simple mechanism that moves it forward or backwards, but something where the simulated being has a brain, let's say, that's simulated at a sufficient level of granularity that it would have the same subjective experiences as we have.

[155] So where does consciousness fit into this?

[156] Do you think simulation, are there different ways to think about how this can be simulated, just like you're talking about now?

[157] Do we have to simulate each brain within the larger simulation?

[158] Or is it enough to simulate just the brain, just the minds, and not the simulation, not the big universe itself?

[159] Are there different ways to think about this?

[160] Yeah, I guess there is a kind of premise in the simulation argument rolled in from philosophy of mind.

[161] That is that it would be possible to create a conscious mind in a computer and that what determines whether some system is conscious or not is not like whether it's built from organic biological neurons, but maybe something like what the structure of the computation is that it implements.

[162] So we can discuss that if we want, but I think it would be, it might be that it would be sufficient, say, if you had a computation that was identical to the computation in the human brain down to the level of neurons.

[163] So if you had a simulation with 100 billion neurons connected in the same way as the human brain, and you then roll that forward with the same kind of synaptic weights and so forth.

[164] So you actually had the same behavior coming out of this as a human with that brain would have.

[165] Then I think that would be conscious.

[166] Now, it's possible you could also generate consciousness without having that detailed a simulation.

[167] There, I'm getting more uncertain exactly how much you could simplify or abstract away.

[168] Can you linger on that?

[169] What do you mean?

[170] I missed where you're placing consciousness in the second.

[171] Well, so if you are a computationalist, do you think that what creates consciousness is the implementation of a computation?

[172] So some property, emergent property of the computation itself.

[173] Yeah.

[174] Yeah, you could say that.

[175] But then the question is, what's the class of computations such that when they are run, consciousness emerges?

[176] So if you just have like something that adds one plus one plus one plus one, like a simple computation, you think maybe that's not going to have any consciousness.

[177] If, on the other hand, the computation is one like our human brains are performing, where as part of the computation there is, like, you know, a global workspace, a sophisticated attention mechanism, there are, like, self-representations of other cognitive processes

[178] and a whole lot of other things, that possibly would be conscious.

[179] And in fact, if it's exactly like ours, I think definitely it would.

[180] But exactly how much less than the full computation that the human brain is performing would be required is a little bit, I think, of an open question.

[181] You asked another interesting question as well, which is, would it be sufficient to just have, say, the brain, or would you need the environment?

[182] Right.

[183] That's a nice way.

[184] In order to generate the same kind of experiences that we have.

[185] And there is a bunch of stuff we don't know.

[186] I mean, if you look at, say, current virtual reality environments, one thing that's clear is that we don't have to simulate all details of them all the time in order for, say, the human player to have the perception that there is a full reality in there.

[187] You can have, say, procedurally generated worlds, where the system might only render a scene when it's actually within the view of the player character.

[188] And so similarly, if this environment that we perceive is simulated, it might be that all of the parts that come into our view are rendered at any given time.

[189] And a lot of aspects that never come into view, say the details of this microphone I'm talking into, exactly what each atom is doing at any given point in time, might not be part of the simulation, only a more coarse-grained representation.

[190] So that to me is actually, from the engineering perspective, why the simulation hypothesis is really interesting to think about: how much, how difficult is it to fake, sort of in a virtual reality context, I don't know if fake is the right word, but to construct a reality that is sufficiently real to us to be immersive in the way that the physical world is? I think that's actually probably an answerable question of psychology, of computer science: where's the line where it becomes so immersive that you don't want to leave that world? Yeah, or that you don't realize, while you're in it, that it is a virtual world. Yeah, those are actually two questions. Yours is the more sort of, the good question about the realism.

[191] But from my perspective, what's interesting is it doesn't have to be real, but how can we construct a world that we wouldn't want to leave?

[192] Yeah.

[193] I mean, I think that might be too low a bar.

[194] I mean, if you think, say, when people first had Pong or something like that, I'm sure there were people who wanted to keep playing it for a long time because it was fun, and they wanted to be in this little world.

[195] I'm not sure we would say it's immersive. I mean, I guess in some sense it is, but, like, an absorbing activity doesn't even have to be... But they left that world, though. That's the, so, like, I think that bar is deceivingly high. So they eventually leave. So you can play Pong or StarCraft or whatever more sophisticated games for hours, for months, you know. WoW, World of Warcraft, could be a big addiction, but eventually they escape that.

[196] So you mean when it's absorbing enough that you would spend your entire, you would choose to spend your entire life in there?

[197] And then thereby changing the concept of what reality is.

[198] Because your reality becomes the game, not because you're fooled, but because you've made that choice.

[199] Yeah.

[200] And it may be that different people might have different preferences regarding that.

[201] Some might, even if you had a perfect virtual reality, still prefer not to spend the rest of their lives there.

[202] I mean, in philosophy, there's this experience machine thought experiment.

[203] Have you come across this?

[204] So Robert Nozick had this thought experiment where you imagine some crazy super-duper neuroscientists of the future have created a machine that could give you any experience you want if you step in there.

[205] And for the rest of your life, you can kind of pre-program it in

[206] different ways.

[207] So your fondest dreams could come true.

[208] You could, whatever you dream, you want to be a great artist, a great lover, like have a wonderful life, all of these things.

[209] If you step into the experience machine, your experiences will be constantly happy.

[210] But you would kind of disconnect from the rest of reality, and you would float there in a tank.

[211] And so Nozick thought that most people would choose not to enter the experience machine.

[212] I mean, many might want to go there for a holiday, but they wouldn't want to sort of check out of existence permanently.

[213] And so he thought that was an argument against certain views of value, according to which what we value is a function of what we experience.

[214] Because in the experience machine, you could have any experience you want, and yet many people would think that would not be of much value.

[215] So therefore, what we value depends on other things than what we experience. So, okay, can you take that argument further for me? What about the fact that maybe what we value is the up and down of life? So you could have ups and downs in the experience machine, right? But what can't you have in the experience machine? Well, I mean, that then becomes an interesting question to explore, but, for example, real connection with other people, if the experience machine is a solo machine where it's only you.

[216] Like, that's something you wouldn't have there.

[217] You would have this subjective experience that would be like fake people.

[218] But if you gave somebody flowers, there wouldn't be anybody who actually got happy.

[219] It would just be a little simulation of somebody smiling.

[220] But the simulation would not be the kind of simulation I'm talking about in the simulation argument where the simulated creature is conscious.

[221] It would just be a kind of smiley face that would look perfectly real to you.

[222] So we're now drawing a distinction between

[223] appearing to be perfectly real and actually being real.

[224] Yeah.

[225] So that could be one thing.

[226] I mean, like a big impact on history.

[227] Maybe it's also something you won't have if you check into this experience machine.

[228] So some people might actually feel the life I want to have for me is one where I have a big positive impact on how history unfolds.

[229] So you could kind of explore these different possible explanations

[230] for why it is you wouldn't want to go into the experience machine, if that's what you feel. And one interesting observation regarding this Nozick thought experiment and the conclusions he wanted to draw from it is how much of it is a kind of status quo effect. So a lot of people might not want to jettison current reality to plug into this dream machine. But if they instead were told, well, what you've experienced up to this point was a dream.

[231] Now, do you want to disconnect from this and enter the real world when you have no idea, maybe, what the real world is?

[232] Or maybe you could say, well, you're actually a farmer in Peru growing peanuts and you could live for the rest of your life in this.

[233] Or would you want to continue your dream life

[234] as Lex Fridman, going around the world, making podcasts and doing research?

[235] So if the status quo was that they were actually in the experience machine, I think a lot of people might then prefer to live the life that they are familiar with rather than sort of bail out into.

[236] It's interesting, the change itself, the leap, whatever.

[237] It might not be so much the reality itself that we are after, but it's more that we are maybe involved in certain projects and

[238] relationships, and we have self-identity, and these things that our values are kind of connected with, carrying that forward.

[239] And then whether it's inside a tank or outside a tank in Peru or whether inside a computer or outside a computer, that's kind of less important to what we ultimately care about.

[240] Yeah.

[241] So just to linger on it, it is interesting.

[242] I find maybe people are different, but I find myself quite willing to take the leap to the farmer in Peru, especially as the virtual reality systems become more realistic.

[243] I find that possibility, and I think more people would take that leap.

[244] But so in this thought experiment, just to make sure we are understanding.

[245] So in this case, the farmer in Peru would not be a virtual reality.

[246] That would be the real.

[247] The real.

[248] Your life, like before this whole experience machine started.

[249] Well, I kind of assumed from that description... You're being very specific, but that kind of idea just, like, washes away the concept of what's real.

[250] I mean, I'm still a little hesitant about your kind of distinction between real and illusion, because when you can have an illusion that feels, I mean, that looks real, I don't know how you can definitively say something is real or not.

[251] Like, what's a good way to prove that something is real in that context?

[252] Well, so I guess in this case, it's more a stipulation.

[253] In one case, you're floating in a tank with these wires by the super-duper neuroscientist plugging into your head, giving you Lex Fridman experiences.

[254] In the other, you're actually tilling the soil in Peru growing peanuts, and then those peanuts are being eaten by other people all around the world who buy the exports.

[255] So these are two different possible situations in the one and the same

[256] real world that you could choose to occupy. But just to be clear, when you're in a vat with wires and the neuroscientists, you can still go farming in Peru, right? No. Well, you could, if you wanted to, you could have the experience of farming in Peru, but there wouldn't actually be any peanuts grown. Well, but what makes a peanut... So a peanut could be grown, and you could feed things with that peanut.

[257] And why can't all of that be done in a simulation?

[258] I hope, first of all, that they actually have peanut farms in Peru.

[259] I guess we'll get a lot of comments otherwise from angry...

[260] I was with you up until the point when you started talking about.

[261] You should know, you can't grow peanuts in that climate.

[262] No, I mean, I think, I mean, I, in the simulation, I think there is a sense, the important sense, in which it would all be real.

[263] Nevertheless, there is a distinction between inside a simulation and outside a simulation, or, in the case of Nozick's thought experiment, whether you're in the vat or outside the vat, and some of those differences may or may not be important.

[264] I mean, that comes down to your values and preferences.

[265] So if the experience machine only gives you the experience of growing peanuts, but you're the only one in the experience machine.

[266] There's others, you can, within the experience machine, others can plug in?

[267] Well, there are versions of the experience machine.

[268] So, in fact, you might want to distinguish different thought experiments, different versions of it.

[269] So in, like, in the original thought experiment, maybe it's only you, right?

[270] Just you.

[271] And you think, I wouldn't want to go in there.

[272] Well, that tells you something interesting about what you value and what you care about.

[273] Then you could say, well, what if you add the fact that there would be other people in there and you would interact with them?

[274] Well, it starts to make it more attractive.

[275] Right. Then you could add in, well, what if you could also have important long-term effects on human history and the world, and you could actually do something useful even though you were in there? That makes it maybe even more attractive, like you could actually have a life that had a purpose and consequences. And so as you sort of add more into it, it becomes more similar to the baseline reality that you were comparing it to. Yeah, but I just think, inside the experience machine and without taking those steps you just mentioned, you still have an impact on the long-term history of the creatures that live inside that, of the quote-unquote fake creatures that live inside that experience machine.

[276] And that, like, at a certain point, you know, if there's a person waiting for you inside that experience machine, maybe your newly found wife, and she dies, she has fears, she has hopes, and she exists in that machine; when you plug out, when you unplug yourself and plug back in, she's still there, going on about her life.

[277] Well, in that case, yeah, she starts to have more of an independent existence.

[278] Independent existence.

[279] But it depends, I think, on how she's implemented in the experience machine.

[280] Take one limit case where all she is is a static picture on the wall, a photograph.

[281] Right.

[282] So you think, well, I can look at her, right?

[283] But that's it.

[284] There's no. Then you think, well, it doesn't really matter much what happens to that.

[285] Any more than with a normal photograph, if you tear it up, right?

[286] It means you can't see it anymore, but you haven't harmed the person whose picture you tore up.

[287] But if she's actually implemented, say, at a neural level of detail, so that she's a fully realized digital mind with the same behavioral repertoire as you have, then very plausibly she would be a conscious person like you are.

[288] And then you would, what you do in this experience machine would have real consequences for how this other mind felt.

[289] So you have to specify which of these experience machines you're talking about.

[290] I think it's not entirely obvious that it would be possible to have an experience machine that gave you a normal set of human experiences,

[291] which include experiences of interacting with other people without that also generating consciousnesses corresponding to those other people.

[292] That is, if you create another entity that you perceive and interact with, that to you looks entirely realistic, not just when you say hello, they say hello back, but you have a rich interaction over many days, deep conversations.

[293] Like, it might be that the only plausible way of implementing that would be one that also, as a side effect, instantiated this other person in enough detail that you would have a second consciousness there.

[294] I think that's to some extent an open question.

[295] So you don't think it's possible to fake consciousness and fake intelligence?

[296] Well, it might be.

[297] I mean, I think you can certainly fake, if you have a very limited interaction with somebody, you could certainly fake that.

[298] That is, if all you have to go on is somebody said hello to you, that's not enough for you to tell whether that was a real person there or a pre-recorded message

[299] or, you know, like a very superficial simulation that has no consciousness.

[300] Because that's something easy to fake.

[301] We could already fake it now.

[302] You can record a voice recording and, you know.

[303] But if you have a richer set of interactions, where you're allowed to ask open-ended questions and probe from different angles, where you couldn't sort of, you couldn't give canned answers to all of the possible ways that you could probe it, then it starts to become more plausible that the only way to realize this thing

[304] in such a way that you would get the right answer from any which angle you probed it would be a way of instantiating it where you also instantiated a conscious mind.

[305] Yeah, I'm with you on the intelligence part, but there's something about me that says consciousness is easier to fake.

[306] Like, I've recently gotten my hands on a lot of Roombas.

[307] Don't ask me why or how.

[308] But, and I've made them... It's just a nice robotic mobile platform for experiments.

[309] And I made them scream and/or moan in pain and so on, just to see how I'm responding to them. And it's just a sort of psychological experiment on myself.

[310] And I think they appear conscious to me pretty quickly.

[311] Like I, to me, at least my brain can be tricked quite easily.

[312] Right.

[313] So if I introspect, it's harder for me to be tricked that something is intelligent.

[314] So I just have this feeling that inside this experience machine, just saying that you're conscious and having certain qualities of the interaction, like being able to suffer, like being able to hurt, like being able to wonder about the essence of your own existence, not actually, I mean, you know, creating the illusion that you're wondering about it, is enough to create the feeling of consciousness and the illusion of consciousness, and because of that, create a really immersive experience to where you feel like that is the real world.

[315] So you think there's a big gap between appearing conscious and being conscious?

[316] Or is it that you think it's very easy to be conscious?

[317] I'm not actually sure what it means to be conscious.

[318] All I'm saying is the illusion of consciousness is enough for this to create a social interaction that's as good as if the thing was conscious.

[319] Meaning, I'm making it about myself.

[320] Right.

[321] Yeah.

[322] I mean, I guess there are a few different things.

[323] One is how good the interaction is, which might, I mean, if you don't really care about, like, probing hard for whether the thing is conscious, maybe it would be a satisfactory interaction, whether or not you really thought it was conscious.

[324] Now, if you really care about it being conscious, like, inside this experience machine, how easy would it be to fake it?

[325] And you say, it sounds fairly easy.

[326] But then the question is, would that also mean it's very easy to instantiate consciousness?

[327] Like, it's much more widely spread in the world than we have thought; it doesn't require a big human brain with 100 billion neurons.

[328] All you need is some system that exhibits basic intentionality and can respond and you already have consciousness.

[329] Like in that case, I guess you still have a close coupling.

[330] I guess another case would be where they can come apart, where you could create the appearance of there being a conscious mind without there actually being another conscious mind.

[331] I'm somewhat agnostic exactly where these lines go.

[332] I think one observation that makes it plausible that you could have very realistic appearances relatively simply, which also is relevant for the simulation argument, in terms of thinking about how realistic a virtual reality model would have to be in order for the simulated creature not to notice that anything was awry.

[333] Well, just think of our own humble brains during the wee hours of the night when we are dreaming.

[334] Many times, well, dreams are very immersive, but often you also don't realize that you're in a dream.

[335] And that's produced by simple, primitive three-pound lumps of neural matter effortlessly.

[336] So if a simple brain like this can create a virtual reality that seems pretty real to us, then how much easier would it be for a superintelligent civilization with planetary-sized computers optimized over the eons to create a realistic environment for you to interact with?

[337] Yeah, by the way, behind that intuition is that our brain is not that impressive relative to the possibilities of what technology could bring.

[338] It's also possible that the brain is the epitome, is the ceiling.

[339] Like, just the ceiling. How is that possible? Meaning, like, this is the smartest possible thing that the universe could create? So that seems unlikely, unlikely to me. Yeah, I mean, for some of these reasons we alluded to earlier, in terms of designs we already have for computers that would be faster by many orders of magnitude than the human brain. Yeah, but it could be that the constraints, the cognitive constraints in themselves, are what enables the intelligence.

[340] So the more powerful you make the computer, the less likely it is to become superintelligent.

[341] This is where I say dumb things to push back on that statement.

[342] Yeah, I'm not sure I thought that we might.

[343] No, I mean, so there are different dimensions of intelligence.

[344] A simple one is just speed.

[345] Like if you can solve the same challenge faster in some sense, you're like smarter.

[346] There, I think we have very strong evidence for thinking that you could have a computer in this universe that would be much faster than the human brain and therefore have speed superintelligence, like be completely superior, maybe a million times faster.

[347] Then maybe there are other ways in which you could be smarter as well, more qualitative ways, right?

[348] And there, the concepts are a little bit less clear-cut, so it's harder to make a very crisp, neat, firmly logical argument for why there could be qualitative superintelligence as opposed to just things that were faster, although I still think it's very plausible, for various reasons that are less than watertight arguments.

[349] But you can sort of, for example, if you look at animals and even within humans, like there seems to be like Einstein versus random person, like, it's not just that Einstein was a little bit faster, but like how long would it take a normal person to invent general relativity?

[350] It's like, it's not 20% longer than it took Einstein or something like that.

[351] It's like, I don't know whether they would do it at all, or it would take millions of years or some totally bizarre.

[352] But your intuition is that the compute size will get you, increasing the size of the computer and the speed of the computer might create some much more powerful levels of intelligence that would enable some of the things we've been talking about, with, like, the simulation, being able to simulate an ultra-realistic environment, ultra-realistic perception of reality.

[353] Strictly speaking, it would not be necessary to have superintelligence in order to have, say, the technology to make these simulations, ancestor simulations or other kinds of simulations.

[354] As a matter of fact, I think if we are in a simulation, it would most likely be one built by a civilization that had superintelligence.

[355] It certainly would help a lot.

[356] I mean, you could build more efficient, larger-scale structures if you had superintelligence.

[357] I also think that if you had the technology to build these simulations, that's like a very advanced technology.

[358] It seems kind of easier to get technology to superintelligence.

[359] So I'd expect, by the time they could make these fully realistic simulations of human history with human brains in there, like, before they got to that stage,

[360] they would have figured out how to create machine superintelligence or maybe biological enhancements of their own brains, if they were biological creatures to start with.

[361] So we talked about the three parts of the simulation argument.

[362] One, we destroy ourselves before we ever create the simulation.

[363] Two, we somehow, everybody somehow loses interest in creating a simulation.

[364] Three, we're living in a simulation.
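
A compact way to see why these three are meant to exhaust the possibilities, following (approximately) the notation of Bostrom's 2003 paper "Are You Living in a Computer Simulation?"; the symbols below are the paper's, not the speakers': let $f_p$ be the fraction of civilizations at our stage that reach technological maturity, $f_I$ the fraction of mature civilizations interested in running ancestor simulations, and $\bar{N}_I$ the average number of such simulations an interested civilization runs. The fraction of all observers with human-type experiences who are simulated is then roughly

\[ f_{\mathrm{sim}} \approx \frac{f_p \, f_I \, \bar{N}_I}{f_p \, f_I \, \bar{N}_I + 1} \]

Since a technologically mature civilization could run an enormous number of simulations cheaply, $\bar{N}_I$ would be huge, so unless $f_p$ is close to zero (alternative one) or $f_I$ is close to zero (alternative two), $f_{\mathrm{sim}}$ is close to one (alternative three).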

[365] So you've kind of, I don't know if your thinking has evolved on this point, but you kind of said that we know so little that these three cases might as well be equally probable.

[366] So probabilistically speaking, where do you stand on this?

[367] Yeah, I mean, I don't think equal necessarily would be the most supported probability assignment.

[368] So how would you, without assigning actual numbers, what's more or less likely in your view?

[369] Well, I mean, I've historically tended to punt on the question of, like, as between these three.

[370] So maybe, to ask you another way, which kinds of things would make each of these more or less likely?

[371] What kind of, yeah, intuition?

[372] Certainly, in general terms, if you think of anything that increases or reduces the probability of one of these, it tends to slosh probability around onto the others.

[373] So if one becomes less probable, like, the others would have to become more probable, because it's got to add up to one.
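
Treating the three alternatives as exclusive and jointly exhaustive, as the speakers are doing here, the constraint is just normalization; a minimal sketch, where the labels $P_1$, $P_2$, $P_3$ are shorthand introduced for this note, not terms from the conversation:

\[ P_1 + P_2 + P_3 = 1 \]

with $P_1$ the credence that almost all civilizations go extinct before maturity, $P_2$ that almost all mature civilizations lose interest in running such simulations, and $P_3$ that we are living in a simulation. Any evidence that lowers one of them must raise at least one of the others.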

[374] Yes.

[375] So if we consider the first hypothesis, the first alternative that there's this filter that makes it so that virtually no civilization reaches technological maturity.

[376] In particular, our own civilization.

[377] If that's true, then it's like very unlikely that we would reach technological maturity, because if almost no civilization at our stage does it, then it's unlikely that we do it.

[378] So I'm sorry, can you linger on that for a second?

[379] Well, so if it's the case that almost all civilizations at our current stage of technological maturity, at our current stage of technological development fail to reach maturity, that would give us very strong reason for thinking we will fail to reach technological maturity.

[380] And also sort of the flip side of that is the fact that we've reached it means that many other civilizations have reached this point.

[381] So that means, if we get closer and closer to actually reaching technological maturity, there's less and less distance left where we could go extinct before we are there, and therefore the probability that we will reach it increases as we get closer. And that would make it less likely to be true that almost all civilizations at our current stage failed to get there. Like, we would have this one case, ourselves, that would be very close to getting there; that would be strong evidence that it's not so hard to get to technological maturity. So to the extent that we, you know, feel we are moving nearer to technological maturity, that would tend to reduce the probability of the first alternative and increase the probability of the other two.

[382] It doesn't need to be a monotonic change.

[383] Like if every once in a while, some new threat comes into view, some bad new thing you could do with some novel technology, for example, you know, that could change our probabilities in the other direction.

[384] But that technology, again, you have to think about it as a technology that has to be able to equally, in an even way, affect every civilization out there.

[385] Yeah, pretty much.

[386] I mean, strictly speaking, it's not true.

[387] I mean, there could be two different existential risks, and every civilization, you know, dies from one or the other, like, but none of them kills more than 50 percent.

[388] Like, yeah. But incidentally, in some of my work, I mean, on machine superintelligence, like, I pointed to some existential risks related to sort of superintelligent AI and how we must make sure, you know, to handle that wisely and carefully.

[389] It's not the right kind of existential catastrophe to make the first alternative true, though.

[390] Like, it might be bad for us if the future lost a lot of value as a result of it being shaped by some process that optimized for some completely non-human value.

[391] But even if we got killed by a machine superintelligence, that machine superintelligence might still attain technological maturity.

[392] Oh, I see.

[393] So you're not very, you're not human exclusive.

[394] This could be any intelligent species that achieves, like, it's all about the technological maturity.

[395] It's not that the humans have to attain it.

[396] Right.

[397] So, like, if the superintelligence replaced us, that's just as well for the simulation argument.

[398] Yeah, yeah.

[399] I mean, it could interact with a second hypothesis.

[400] Like if the thing that replaced us was either more likely or less likely than we would be to have an interest in creating ancestor simulations.

[401] You know, that could affect probabilities.

[402] But yeah, to a first order, like if we all just die, then, yeah, we won't produce any simulations because we are dead.

[403] But if we all die and get replaced by some other intelligent thing that then gets to technological maturity,

[404] the question remains, of course, whether that thing would then use some of its resources to do this stuff.

[405] So can you reason about this stuff?

[406] So given how little we know about the universe, is it reasonable to reason about these probabilities?

[407] So like how little, well, maybe you can disagree, but to me it's not trivial to figure out how difficult it is to build a simulation.

[408] We kind of talked about it a little bit.

[409] We also don't know, like as we try to start building it, like start creating virtual worlds and so on, how that changes the fabric of society.

[410] Like there's all these things along the way that can fundamentally change just so many aspects of our society about our existence that we don't know anything about.

[411] Like the kind of things we might discover when we understand, to a greater degree, the fundamental physics, like the theory, if we have a breakthrough, a theory of everything, how that changes stuff, how that changes deep space exploration and so on.

[412] So is it still possible to reason about probabilities given how little we know?

[413] Yes, I think there will be a large residual of uncertainty that we'll just have to acknowledge.

[414] And I think that's true for most of these big picture questions that we might wonder about.

[415] It's just we are small, short-lived, small-brained, cognitively very limited humans with little evidence.

[416] And it's amazing we can figure out as much as we can, really, about the cosmos.

[417] But, okay, so there's this cognitive trick that seems to happen where I look at the simulation argument, which, for me, it seems like cases one and two feel unlikely.

[418] I want to say feel unlikely, as opposed to, sort of, like, it's not like I have too much scientific evidence to say that either one or two are not true.

[419] It just seems unlikely that every single civilization destroys itself.

[420] And it, like, feels unlikely that the civilizations lose interest.

[421] So naturally, without necessarily explicitly doing it, the simulation argument basically says it's very likely we're living in a simulation.

[422] To me, my mind naturally goes there.

[423] I think the mind goes there for a lot of people.

[424] Is that the incorrect place for it to go?

[425] Well, not necessarily.

[426] I think the second alternative, which has to do with the motivations and interests of technologically mature civilizations, I think there is much we don't understand about that. Yeah, can you talk about that a little bit? What do you think? I mean, this is a question that pops up when you build an AGI system, or build a general intelligence. How does that change our motivations? Do you think it'll fundamentally transform our motivations? Well, it doesn't seem that implausible that, once you take this leap to technological maturity, and I mean, I think, like, it involves creating machine superintelligence, possibly.

[427] That would be sort of on the path for basically all civilizations, maybe, before they are able to create large numbers of ancestor simulations.

[428] That possibly could be one of these things that quite radically changes the orientation of what a civilization is, in fact, optimizing for.

[429] There are other things as well.

[430] So at the moment, we don't have perfect control over our own being; our own mental states, our own experiences are not under our direct control.

[431] So, for example, if you want to experience pleasure and happiness, you might have to do a whole host of things in the external world to try to

[432] get into the state, into the mental state, where you experience pleasure.

[433] You know, like some people get some pleasure from eating great food.

[434] Well, they can't just turn that on.

[435] They have to kind of actually go to a nice restaurant and then they have to make money.

[436] So there's like all this kind of activity that maybe arises from the fact that we are trying to ultimately produce mental states, but the only way to do that is by a whole host of complicated activities in the external world.

[437] Now, at some level of technological development, I think we'll become autopotent, in the sense of gaining direct ability to choose our own internal configuration, and enough knowledge and insight to be able to actually do that in a meaningful way.

[438] So then it could turn out that there are a lot of instrumental goals that would drop out of the picture and be replaced by other instrumental goals, because we could now serve some of these final goals in more direct ways.

[439] And who knows how all of that shakes out after civilizations reflect on that and converge on different attractors, and so on and so forth.

[440] And there could be new instrumental considerations that come into view as well that we are just oblivious to.

[441] That would maybe have a strong shaping effect on actions, like very strong reasons to do something or not to do something.

[442] And we just don't realize they are there because we are so dumb, bumbling through the universe.

[443] But if, almost inevitably en route to attaining the ability to create many ancestor simulations, you do have this cognitive enhancement, or advice from superintelligence, or you yourself, then maybe there's like this additional set of considerations coming into view.

[444] And then it's obvious that the thing that makes sense is to do X, whereas right now it seems, oh, you could do X, Y, or Z, and different people will do different things,

[445] and we are kind of random in that sense.

[446] Yeah, because at this time, with our limited technology, the impact of our decisions is minor.

[447] I mean, that's starting to change in some ways.

[448] Well, I'm not sure how it follows that the impact of our decisions is minor.

[449] Well, it's starting to change.

[450] I mean, I suppose 100 years ago it was minor.

[451] It's starting to...

[452] Well, it depends on how you view it.

[453] But things people did 100 years ago

[454] still have effects on the world today.

[455] Oh, I see, as a civilization taken together.

[456] Yeah, so it might be that the greatest impact of individuals is not at technological maturity or very far down the road.

[457] It might be earlier on when there are different tracks civilization could go down.

[458] Maybe the population is smaller.

[459] Things still haven't settled out.

[460] If you count indirect effects, those could be bigger than the direct effects that people have later on. So part three of the argument says that... so that leads us to a place where eventually somebody creates a simulation. I think you had a conversation with Joe Rogan, and I think there's some aspect here where you got stuck a little bit. How does that lead to: we're likely living in a simulation?

[461] So this kind of probability argument, if somebody eventually creates a simulation, why does that mean that we're now in a simulation?

[462] What you get to if you accept alternative three, first, is that there would be more simulated people with our kinds of experiences than non-simulated ones.

[463] Like if, kind of, you look at the world as a whole, by the end of time, as it were, you just count it up.

[464] There would be more simulated ones than non-simulated ones.

[465] Then there is an extra step to get from that.

[466] If you assume that, suppose for the sake of the argument that that's true, how do you get from that to the statement, we are probably in a simulation?

[467] So here you're introducing an indexical statement.

[468] Like, it's that this person right now is in a simulation.

[469] There are all these other people, you know; some of them are in simulations and some of them are not in a simulation.

[470] But what probability should you have that you yourself are one of the simulated ones, right, given that setup?

[471] So yeah, so I call it the bland principle of indifference, which is that in cases like this, when you have two, I guess, sets of observers, one of which is much larger than the other.

[472] And you can't, from any internal evidence you have, tell which set you belong to.

[473] You should assign a probability that's proportional to the size of these sets.

[474] So that if there are 10 times more simulated people with your kinds of experiences, you would be 10 times more likely to be one of those.
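To make that arithmetic concrete, here is a minimal Python sketch of the bland principle of indifference as just described: credence assigned in proportion to the size of each observer class. The class names and counts below are purely illustrative assumptions, not part of the original argument.

```python
# Minimal sketch of the bland principle of indifference described above:
# if you cannot tell which class of observers you belong to, assign credence
# to each class in proportion to its size. Counts here are illustrative only.

def indifference_credence(class_sizes):
    """Return a credence for each observer class, proportional to its size."""
    total = sum(class_sizes.values())
    return {name: size / total for name, size in class_sizes.items()}

# Ten times more simulated observers than non-simulated ones, as in the example:
credences = indifference_credence({"simulated": 10, "non_simulated": 1})
print(credences)  # simulated ~0.909, non_simulated ~0.091 -- a 10:1 ratio
```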

[475] Is that as intuitive as it sounds? I mean, that seems kind of... if you don't have enough information, you should rationally just assign probability in proportion to the size of the set. It seems pretty plausible to me. Where are the holes in this? Is it at the very beginning, the assumption that everything stretches, sort of, you have infinite time, essentially? You don't need infinite time. How much time should you need? How long does the time...

[476] But however long it takes, I guess, for a universe to produce an intelligent civilization that then attains the technology to run some ancestor simulations.

[477] Gotcha.

[478] At some point, when the first simulation is created, that stretch of time, just a little longer, and they'll all start creating simulations.

[479] Kind of like order of...

[480] Yeah, well, I mean, there might be different...

[481] It might, if you think of there being a lot of different planets and some subset of them have life.

[482] and then some subset of those get to intelligent life, and some of those maybe eventually start creating simulations, they might get started at quite different times.

[483] Like maybe on some planet it takes a billion years longer before you get monkeys or before you get even bacteria than on another planet.

[484] So this might happen kind of at different cosmological epochs.

[485] Is there a connection here to the doomsday argument and that sampling there?

[486] There is a connection in that they both involve an application of anthropic reasoning that is reasoning about these kind of indexical propositions.

[487] But the assumption you need in the case of the simulation argument is much weaker than the assumption you need to make the doomsday argument go through.

[488] What is the doomsday argument, and maybe you can speak to the anthropic reasoning more generally?

[489] Yeah, that's a big and interesting topic in its own right, Anthropics.

[490] But the doomsday argument was really first discovered by Brandon Carter, who was a theoretical physicist, and then developed by the philosopher John Leslie.

[491] I think it might have been discovered initially in the 70s or 80s, and Leslie wrote this book, I think, in 96.

[492] And there are some other versions as well by Richard Gott, who's a physicist.

[493] But let's focus on the Carter-Leslie version, where it's an argument that we have systematically underestimated the probability that humanity will go extinct soon.

[494] Now, I should say most people probably think at the end of the day there is something wrong with this doomsday argument that it doesn't really hold.

[495] It's like there's something wrong with it, but it's proved hard to say exactly what is wrong with it.

[496] And different people have different accounts.

[497] My own view is, it seems inconclusive.

[498] And I can say what the argument is.

[499] Yeah, that would be good.

[500] Yeah, so maybe it's easy to explain via an analogy to sampling from urns.

[501] So imagine you have two urns in front of you, and they have balls in them that have numbers. The two urns look the same, but inside one there are 10 balls: ball number one, two, three, up to ball number 10.

[502] And then in the other urn, you have a million balls numbered one to a million.

[503] And now somebody puts one of these urns in front of you and asks you to guess: what's the chance it's the 10-ball urn?

[504] And you say, well, 50-50, you know, I can't tell which urn it is.

[505] But then you're allowed to reach in and pick a ball at random from the urn, and let's suppose you find that it's ball number seven. So that's strong evidence for the 10-ball hypothesis. It's a lot more likely that you would get such a low-numbered ball if there are only 10 balls in the urn, in fact a 10% chance, right, than if there are a million balls; it would be very unlikely you would get number seven. So you perform a Bayesian update, and if your prior was 50-50 that it was the 10-ball urn, you become virtually certain after finding the random sample was 7 that it only has 10 balls in it.
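As a quick check on that update, here is a short Python sketch of the two-urn calculation just described, with the 50-50 prior and ball number seven drawn at random; the numbers simply follow the example in the conversation.

```python
# Bayesian update for the two-urn example: a 10-ball urn versus a million-ball
# urn, a 50-50 prior, and a randomly drawn ball that turns out to be number 7.

prior_ten = 0.5
prior_million = 0.5

# Likelihood of drawing ball number 7 under each hypothesis.
likelihood_ten = 1 / 10             # one of 10 equally likely balls
likelihood_million = 1 / 1_000_000  # one of a million equally likely balls

evidence = prior_ten * likelihood_ten + prior_million * likelihood_million
posterior_ten = prior_ten * likelihood_ten / evidence

print(f"P(10-ball urn | drew ball 7) = {posterior_ten:.6f}")  # ~0.999990
```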

[506] So in the case of the urns, this is uncontroversial, just elementary probability theory.

[507] The doomsday argument says that you should reason in a similar way with respect to different hypotheses about how many balls there will be in the urn of humanity, how many humans there will ever have been

[508] by the time we go extinct.

[509] So to simplify, let's suppose we only consider two hypotheses, either maybe 200 billion humans in total or 200 trillion humans in total.

[510] You could fill in more hypotheses, but it doesn't change the principle here.

[511] So it's easiest to see if we just consider these two.

[512] So you start with some prior based on ordinary empirical ideas about threats to civilization and so forth, and maybe you say it's a 5% chance that we will go

[513] extinct by the time there will have been 200 billion only.

[514] You're kind of optimistic, let's say.

[515] You think probably we will make it through, colonize the universe.

[516] But then, according to this doomsday argument, you should think of your own birth rank as a random sample.

[517] So your birth rank is your position in the sequence of all humans that have ever existed.

[518] And it turns out you're about human number 100 billion.

[519] You know, give or take, that's like roughly how many people have been born before you. That's fascinating, because we each have a number, we would each have a number in this. I mean, obviously the exact number would depend on where you started counting, like which ancestor was human enough to count as human, but those are not really important; there are relatively few of those. So, um, yeah, so you're roughly 100 billion. Now, if there are only going to be 200 billion in total, that's a perfectly unremarkable number. You're somewhere in the middle, right?

[520] Run-of-the-mill human.

[521] Completely unsurprising.

[522] Now, if there are going to be 200 trillion, you would be remarkably early.

[523] Like, what are the chances, out of these 200 trillion humans, that you should be human number 100 billion?

[524] That seems like it would have a much lower conditional probability.

[525] And so, analogously to how in the urn case, after finding this low-numbered random sample, you updated in favor of the urn having few balls.

[526] Similarly, in this case, you should update in favor of the human species having a lower total number of members, that is, doom soon.
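For illustration only, here is the same Bayesian machinery applied to the birth-rank version just described, with the 5% prior on "doom soon" and the 200 billion versus 200 trillion hypotheses from the conversation; this is a sketch of the arithmetic the argument relies on, not an endorsement of its self-sampling premise.

```python
# Doomsday-argument arithmetic as sketched above: treat your birth rank
# (roughly 100 billion) as a random sample from all humans who will ever live,
# with a 5% prior on "doom soon" (200 billion total) and 95% on 200 trillion.

prior_doom_soon = 0.05
prior_doom_late = 0.95

total_soon = 200e9    # "doom soon" hypothesis: 200 billion humans in total
total_late = 200e12   # "colonize the universe" hypothesis: 200 trillion in total

# Likelihood of having a birth rank of ~100 billion under each hypothesis,
# assuming a uniform draw over all humans who will ever exist.
likelihood_soon = 1 / total_soon
likelihood_late = 1 / total_late

evidence = prior_doom_soon * likelihood_soon + prior_doom_late * likelihood_late
posterior_doom_soon = prior_doom_soon * likelihood_soon / evidence

print(f"P(doom soon | birth rank ~100 billion) = {posterior_doom_soon:.3f}")  # ~0.981
```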

[527] You said doom soon?

[528] That's the...

[529] Well, that would be the hypothesis in this case that it will end.

[530] I just like that term for that hypothesis.

[531] So what the doomsday argument kind of crucially relies on is the idea that you should reason as if you were a random sample from the set of all humans that will ever have existed.

[532] If you have that assumption, then I think the rest kind of follows.

[533] The question then is, why should you make that assumption?

[534] In fact, you know you're number 100 billion, so where do you get this prior?

[535] And then there is like a literature on that with different ways of supporting that assumption.

[536] That's just one example of anthropic reasoning, right?

[537] Yeah.

[538] That seems to be kind of convenient when you think about

[539] humanity.

[540] When you think about, sort of, even like existential threats and so on, it seems quite natural that you should assume that you're just an average case.

[541] Yeah, that you're kind of a typical or random sample.

[542] Now, in the case of the doomsday argument, it seems to lead to what intuitively we think is the wrong conclusion, or at least many people have this reaction, that there's got to be something fishy about this argument, because from very, very weak premises, it gets this very striking implication that we have almost no chance of reaching size 200 trillion humans in the future.

[543] And how could we possibly get there just by reflecting on when we were born?

[544] It seems you would need sophisticated arguments about the impossibility of space colonization, blah, blah.

[545] So one might be tempted to reject this key assumption.

[546] I call it the self-sampling assumption, the idea that you should reason as if you were a random sample from all observers, or observers in some reference class.

[547] However, it turns out that in other domains, it looks like we need something like this self-sampling assumption to make sense of bona fide scientific inferences.

[548] In contemporary cosmology, for example, you have these multiverse theories.

[549] And according to a lot of those, all possible human observations are made.

[550] I mean, if you have a sufficiently large universe, you will have a lot of people observing all kinds of different things.

[551] So if you have two competing theories, say, about the value of some constant, it could be true according to both of these theories that there will be some observers observing the value that corresponds to the other theory because there will be some observers that have hallucinations, or there's a local fluctuation or a statistically anomalous measurement, these things will happen.

[552] And if enough observers make enough different observations, there will be some that sort of by chance make these different ones.

[553] And so what we would want to say is, well, many more observers, a larger proportion of the observers, will observe, as it were, the true value.

[554] And a few will observe the wrong value.

[555] If we think of ourselves as a random sample, we should expect, with high probability, to observe the true value, and that will then allow us to conclude that the evidence we actually have is evidence for the theories we think are supported.

[556] It kind of then is a way of making sense of these inferences that clearly seem correct, that we can, you know, make various observations and infer what the temperature of the cosmic background is, and the fine structure constant, and all of this.

[557] But it seems that without rolling in some assumption similar to the self-sampling assumption, this inference just doesn't go through.

[558] And there are other examples.

[559] So there are these scientific contexts where it looks like this kind of anthropic reasoning is needed and makes perfect sense.

[560] And yet, in the case of the doomsday argument, it has this weird consequence, and people might think there's something wrong with it there.

[561] So there's then this project

[562] that would consist in trying to figure out what are the legitimate ways of reasoning about these indexical facts when observer selection effects are in play.

[563] In other words, developing a theory of anthropics.

[564] And there are different ways of looking at that.

[565] And it's a difficult methodological area.

[566] But to tie it back to the simulation argument, the key assumption there, this bland principle of indifference,

[567] it's much weaker than the self-sampling assumption.

[568] So if you think about it, in the case of the doomsday argument, it says you should reason as if you are a random sample from all humans that will have lived, even though in fact you know that you are about the 100 billionth human and you're alive in the year 2020, whereas in the case of the simulation argument, it says that, well, if you actually have no way of telling which one you are, then you should assign this kind of uniform probability.

[569] Yeah, yeah, your role as the observer in the simulation argument is different, it seems like.

[570] Who's the observer?

[571] I mean, I keep assigning it to the individual consciousness.

[572] Yeah, I mean, when you say you, when there are a lot of observers in the simulation, in the context of the simulation argument, but they're all observing the same.

[573] The relevant observers would be, A, the people in original histories, and B, the people in simulations.

[574] So this would be the class of observers that we need, I mean, there are also maybe the simulators, but we can set those aside.

[575] for this.

[576] So the question is, given that class of observers, a small set of original history observers and the large class of simulated observers, which one should you think is you?

[577] Where are you amongst this set of observers?

[578] I'm maybe having a little bit of trouble wrapping my head around the intricacies of what it means to be an observer in this, in the different instantiations of the anthropic reasoning cases that we mentioned.

[579] I mean, does it have to be...

[580] No, I mean, there may be an easier way of putting it.

[581] It's just like, are you simulated or are you not simulated?

[582] Given this assumption that these two groups of people exist.

[583] Yeah, in the simulation case, it seems pretty straightforward.

[584] Yeah, so the key point is the methodological assumption you need to make to get the simulation argument to where it wants to go is much weaker,

[585] less problematic than the methodological assumption you need to make to get the doomsday argument to its conclusion.

[586] Maybe the doomsday argument is sound or unsound, but you need to make a much stronger and more controversial assumption to make it go through.

[587] In the case of the doomsday argument, sorry, simulation argument, I guess one way maybe to intuition-pump support for this bland principle of indifference is to consider a sequence of different cases

[588] where the fraction of people who are simulated, as opposed to non-simulated, approaches one.

[589] So in the limiting case, where everybody is simulated, obviously you can deduce with certainty that you're simulated.

[590] If everybody with your experiences is simulated and you know you've got to be one of those, you don't need the probability at all.

[591] You just kind of logically conclude it.

[592] Right.

[593] So then, as we move from a case where, say, 90% of everybody is simulated, to 99%, to 99.9%...

[594] It should seem plausible that the probability you assign should sort of approach one, certainty, as the fraction approaches the case where everybody is in a simulation.
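A tiny numerical illustration of that continuity point, under the same indifference assumption: the credence that you are simulated is just the simulated share of observers, so it climbs smoothly toward certainty rather than jumping. The specific counts below are arbitrary.

```python
# Under the indifference assumption, your credence that you are simulated is
# (number of simulated observers) / (total observers), so it approaches
# certainty smoothly as the simulated fraction approaches one.

non_simulated = 1  # hold the non-simulated population fixed, for illustration
for simulated in [1, 9, 99, 999, 9999]:
    credence = simulated / (simulated + non_simulated)
    print(f"{simulated} simulated vs {non_simulated} non-simulated "
          f"-> credence you are simulated = {credence:.4f}")
```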

[595] Yeah, that's exactly.

[596] Like, you wouldn't expect it to be a discrete jump.

[597] Well, if there's one non-simulated person, then it's 50-50.

[598] But if we remove that one, then it's 100%.

[599] Like, it should kind of...

[600] There are other arguments as well that one can use to support this bland principle of indifference.

[601] But that might be enough to...

[602] But in general, when you start from time equals zero and go into the future, the fraction of simulated... if it's possible to create simulated worlds, the fraction of simulated worlds will go to one.

[603] Well, it won't...

[604] Is that an obvious kind of thing?