The Jordan B. Peterson Podcast XX
[0] Hello everyone watching and listening.
[1] Today I'm speaking with author and cognitive neuroscientist, Dr. Donald Hoffman.
[2] We discussed Dr. Hoffman's research on what we can know of reality, why space-time itself is now considered by many a doomed framework of interpretation, and how consciousness might be best understood as a vast probability
[3] space within which we orient ourselves.
[4] Hello, Dr. Hoffman.
[5] It's very good to see you.
[6] I've been interested in your theory for a long time, partly because I'm quite attracted by the doctrine of pragmatism, which was really part of what I tried to discuss with Sam Harris many, many times.
[7] And it seems that your work bears, well, it's of broad general interest, but it also bears on specific interests of mine, because I've always been curious about the relationship between Darwinian concepts of truth and, let's say, the concepts of truth put forward by the more Newtonian, say, objective materialists.
[8] They don't seem commensurate to me. And so would you start by explaining your theory, your broad theory of perception?
[9] I know that'll take a while, but it's a tricky, it's a tricky theory.
[10] So do you want to lay it out for us to begin with?
[11] Most Darwinian scholars would agree that evolution shapes sensory systems to guide adaptive behavior.
[12] That is, to keep organisms alive long enough to reproduce.
[13] But many also believe that in addition, evolution shapes us to see reality as it is, at least some aspects of reality that we need for survival.
[14] That's a common view among my colleagues who study evolution by natural selection.
[15] They'll say, yeah, seeing the truth will make you more fit in many cases.
[16] And so even though Darwin says evolution shapes sensory systems just to keep you alive long enough to reproduce, many people think that seeing aspects of reality as it is will also make you more fit and make you more likely to reproduce.
[17] So I decided with my graduate students a few years ago to look into this.
[18] There are tools.
[19] Darwin's theory is now a mathematical theory.
[20] We have the tools of evolutionary game theory that John Maynard Smith and others invented in the 1970s.
[21] And so it's a wonderful theory, so Darwin's ideas can now be tested with mathematical precision.
[22] And I thought that maybe what we would find is that evolution tries to do
[23] things on the cheap.
[24] You know, if you have to spend more calories, then you have to go out and kill something to get those calories.
[25] And so there are selection pressures to do things cheaply and quickly, heuristics.
[26] And so I went into it thinking that maybe that would make it so that many sensory systems didn't see all of the truth.
[27] But I just wanted to check and see what would happen.
[28] To my surprise, when we actually started studying this, there came up principles that made me realize that the chance that we see reality as it is on Darwinian principles is essentially zero.
[29] And that was a stunning result for me. Zero is a very low number.
[30] So why zero?
[31] That's right.
[32] So, it's a bit technical, but in the evolutionary game presentation of evolutionary theory,
[33] you think of evolution as like a game, and in a game, you're competing with other players and you're trying to get points.
[34] Now, in the game of evolution, the way it's modeled is there are these fitness payoff functions, and those are sort of the points that you can get for being in certain states and taking certain actions.
[35] And so these fitness payoffs are what guides the selection.
[36] They guide the evolution.
[37] And so we began to analyze those fitness payoffs, right?
[38] To be very concrete about a fitness payoff, suppose that you're a lion and you want to mate. Well, a steak won't be very useful for you for that process, right?
[39] You'll have very little fitness payoff for a steak if you're a lion looking to mate.
[40] If you're a lion that's looking to eat and you're hungry, then, of course, the steak will have high fitness payoffs for you.
[41] So a fitness payoff depends on the organism, like a lion versus, say, a cow.
[42] The steak is of no fitness payoff for any cow, for any purposes.
[43] Quite the contrary.
[44] Quite the contrary.
[45] That's right.
[46] So the fitness payoff depends on the organism, its state, I mean hungry versus sated, for example, and the action, feeding, fighting, fleeing, and mating, for example.
[47] So these fitness payoffs are functions of the world.
[48] They depend on the state of the world and its structure.
[49] And the organism, its state, and its actions.
[50] So they're complicated functions.
[51] And in some sense, you could think that there's just effectively one fitness payoff function.
[52] There's this one big fitness payoff function which handles the world and all possible organisms and all their possible states and actions.
[53] So there's a big fitness payoff.
[54] But we can think about it as many fitness payoffs if we want to as well.
[55] The question is: this fitness payoff function takes as its starting point the state of the world, right?
[56] That's the domain of the function.
[57] And the range of the function might be the fitness payoff value, say from zero to 100.
[58] Zero means you lose.
[59] A hundred means you did as good as you could possibly do.
[60] So zero to 100, say.
[61] So it's a function from the state of the world, cross organism, state, and action, into this number, say zero to 100, or zero to 1,000, whatever you want to use.
[62] So the question then is: does this function preserve information about the structure of the world?
[63] But this is the function that's guiding the evolution of our sensory systems.
[64] So is this function what mathematicians call a homomorphism, a structure-preserving map?
[65] So for example, the world might have an order relationship, like one is less than two is less than three, like a distance or a distance metric or something like that.
[66] Then to be a homomorphism would mean that if things were in a certain order in the world, the function would take them into that same order, or some homomorphism of that order, onto the states of the payoffs.
[67] So that's a technical question.
[68] What is the probability that a generically chosen payoff function will be a homomorphism of a metric, or a total order, or a partial order, or a topology, or a measurable structure?
[69] Any structure that you can imagine the world might have, you can ask what is the probability that a generically chosen payoff function will preserve it.
[70] If it doesn't preserve it, there's no information in the payoff function to shape sensory systems to see that truth, to see that structure of the world.
[71] So what's remarkable is that evolutionary theory is indifferent about the payoff functions.
[72] They don't say they have to be a certain shape.
[73] In other words, every fitness payoff function that you could imagine is on equal footing, in current evolutionary theory, with every other one.
[74] There's nothing in Darwin's theory that says these are the fitness payoff functions, and this is their structure.
[75] So what we had to do then is to say: okay, we have to just look at all possible fitness payoff functions and ask what fraction of these payoff functions would preserve a total order, or a metric, or a measurable structure, whatever it might be. And here's the remarkable, in retrospect obvious, thing: for a payoff function to preserve a structure like a metric or a total order, it must satisfy certain equations. So you have to write down the equations that the fitness payoff function must satisfy to be a homomorphism.
[76] Well, once you write down an equation, most payoff functions simply aren't going to satisfy it.
[77] I mean, the equations are quite restrictive.
[78] And in fact, in the limit, as you look at, you know, a world that has an infinite number of states and payoff values that go from zero to infinity, the fraction of payoff functions that actually are homomorphic goes to zero precisely.
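The order-preservation claim can be made concrete with a toy simulation (my own sketch, not Hoffman's actual formalism): sample random payoff functions over a totally ordered set of world states and count how many happen to be monotone, i.e. order-preserving. The fraction collapses toward zero as the number of world states grows.

```python
import random

def monotone_fraction(n_states, n_payoffs=101, trials=100_000, seed=0):
    """Estimate the probability that a randomly chosen payoff function
    on an ordered set of world states preserves that order."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # a random payoff function: each world state gets a payoff in 0..100
        payoff = [rng.randrange(n_payoffs) for _ in range(n_states)]
        # order-preserving here means the payoffs are nondecreasing
        if all(a <= b for a, b in zip(payoff, payoff[1:])):
            hits += 1
    return hits / trials

for n in (2, 4, 8):
    print(n, monotone_fraction(n))  # the fraction shrinks rapidly as n grows
```

With only two world states, about half of all payoff functions respect the order; with eight states, almost none do, which mirrors the limiting argument in the text.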
[79] All right.
[80] So this is going to be a somewhat meandering question because it's a very complicated thing to get right.
[81] So people who think that the world is made out of self-evident facts underestimate the complexity of perception.
[82] And so here's how I'll make that case.
[83] And you can tell me what you think.
[84] You can imagine you could ask an engineer a simple question.
[85] Can you build a bridge?
[86] And you might think, the fact of the bridge will be a fact, and the answer to the question, which would be yes or no, will be a fact.
[87] And that's that.
[88] It's all self -evident.
[89] It's sort of like the behaviorists assuming that the stimulus was self -evident.
[90] It's very much analogous to that.
[91] Okay, but here's the problem.
[92] There's a whole set of assumptions built into that question that people don't even notice.
[93] And so let me walk through some of the assumptions.
[94] It's like, well, I can't build a bridge if you want it to last 50 million years.
[95] So I could build a bridge that would last a century or two centuries.
[96] I can't build a bridge for no money with no labor, with materials that are just at hand.
[97] So the thing you define as a bridge is already subject to all sorts of constraints.
[98] Now, you and I mutually understand those constraints without
[99] even having to speak about them.
[100] So I'm also going to assume that if you say, if I ask you, can you build a bridge and you say yes, you're also saying, I'm willing to work with you.
[101] I'm willing to work honestly.
[102] I'm willing to hire the right number of people.
[103] I'm not going to screw you during the construction.
[104] The bridge that we build, we both understand that human beings will be able to walk across it and as many as will fit on the bridge without the bridge falling down.
[105] And also cars, and that means it'll have to be about the same width as a car or a truck or four lanes of cars or trucks and it'll have to abide by all the building codes and so forth.
[106] There are so many constraints in that question that it would take you an unlimited amount of time to list them all. And you don't, because you're talking to an engineer, and he's a human being like you, inculturated like you, and so he understands the world like you do. And so there's a hundred million things you don't have to talk about.
[107] But they're there.
[108] They're constraining the set of facts that's relevant to the issue, and they're constraining them seriously.
[109] Okay, so now those constraints are nested in an even higher-order set of constraints, which are Darwinian, right?
[110] It's like, well, the axiomatic agreements that you and I come to as a consequence of our shared perceptions, our shared embodiment and our shared inculturation, are a consequence of a broader process, which is essentially Darwinian.
[111] Now, that Darwinian set of constraints is instantiated in motivational systems, in part.
[112] So we might say, well, anything that you and I do together will have to be done while taking into account hunger and anger and fear and pain, the whole emotional potentiality of people, plus our fundamental motivational systems; the manner in which we lay out this particular task will have to satisfy all that.
[113] Now, that's also unspoken.
[114] Now, when you talk about evolutionary game theory and pragmatic constraints, let's say you talked about the lion who wants to mate and not eat, you're referring to one motivational system or another, one governing sex, say, and the other governing hunger.
[115] And then the manner in which the lion is going to perceive the world, or the manner in which we're going to perceive the world, is going to be bounded by the operation of that motivational system.
[116] And the perception is going to be deemed sufficient if when we enact it, the motivational system is satiated.
[117] Fair enough?
[118] Okay.
[119] Okay.
[120] Now, but then there's a more interesting issue that pertains to the big fitness payoff.
[121] So, if you look at how the nervous system is structured, you have these underlying motivational systems, which are goal-setting machines that define the parameters within which a perception is valid.
[122] But all those systems have to interact together, and they cause conflict, right?
[123] So if you're hungry and tired, you don't know whether you should get up and make a peanut butter sandwich or if you should just go to sleep and leave it till the morning.
[124] Like, there's inbuilt conflict.
[125] And part of the reason that the cortex evolved was to mediate subcortical conflicts.
[126] And then even at the cortical level, the manner in which you integrate your fundamental motivations and the manner in which I integrate mine have to be integrated, or we'll fight.
[127] And so I would say, and I don't know if evolutionary theorists have dealt with this, and it's relevant to your theory that perception doesn't map the real world.
[128] Is there a higher order set of integrated constraints that serves reproduction over the long run that all the lower order fitness payoffs are necessarily subordinate to?
[129] And I know this is a terribly complicated question.
[130] Is that the reality that perception serves?
[131] You know, you made the case that perceptions will not map one-to-one onto reality.
[132] And I suppose that's partly because reality is, it's infinitely complex, right?
[133] I mean, you can fragment it infinitely and you can contextualize it infinitely.
[134] So it's very hard to calibrate.
[135] All right.
[136] So we've got to put that aside.
[137] But then I would say, well, maybe there's another transcendent fundamental reality that's Darwinian in nature that integrates everything with regards to optimized long -term survival, and perceptions are optimized to suit that.
[138] So I know that's a terribly complicated question, but this is a terribly complicated subject.
[139] Well, so I think we have to think a little out of the box on this question, because when we conclude that evolution shapes us not to see reality as it is, then the question is, well, what is it shaping our sensory systems to give us?
[140] As well as what is reality, right?
[141] That question also comes up, yeah.
[142] Absolutely.
[143] And so the way I like to think about it is that evolution shapes sensory systems to serve as a user interface.
[144] So like the desktop on your computer, for example.
[145] So when you're actually working on a computer, in this metaphor, what you're literally doing is toggling millions of voltages in circuits in the computer, and you're having to toggle them in very specific patterns, millions of them in exactly the right pattern.
[146] Well, if you had to do that by hand, if you had to deal with that reality and interface with that reality one voltage at a time, well, it would take you forever, and you probably wouldn't get it right, and you wouldn't be able to write your email or edit your picture, or whatever you're doing on your computer.
[147] So we spend good money, and people spend a lot of time building interfaces that allow you to be ignorant, completely ignorant.
[148] Most of us have no idea what's under the hood in our laptops.
[149] We have no idea.
[150] We know that there's circuits and software, but most of us have never studied it.
[151] And yet we're able to very swiftly and expertly edit our images and send texts and emails and so forth without having any clue, literally no clue, what's under the hood.
[152] What's the reality that we're actually toggling?
[153] And so it seems that that's what evolution has done for us.
[154] It's given us an incredibly dumbed-down interface.
[155] We call it space and time and physical objects.
[156] So we think of space and time as the fundamental reality, and physical objects as truly existing in that objective reality.
[157] But it's really just, in this metaphor, a virtual reality headset.
[158] We've evolved a virtual reality headset that utterly hides the very nature of reality.
[159] And on purpose, quote unquote, on purpose, so to speak, because it would be...
[160] We'd drown in the complexity.
[161] Right.
[162] You're drowning in the complexity.
[163] Okay, so some evidence for that, as far as I'm concerned, is the following.
[164] I mean, first of all, if you look at a desktop, it consists, let's say, in part of folders.
[165] Now, folders are actually something in the real world that you can pick up, and we understand them.
[166] You can manipulate them.
[167] You can see how they operate
[168] as a consequence of your embodiment.
[169] And so that embodiment gives you a deep understanding of the function of a folder, and then you can represent it abstractly, and you can put it on a desktop, and everyone understands what it means.
[170] And that understanding is something like: being able to map a certain set of functions for a certain set of purposes.
[171] That's what understanding is, and it's a constrained set of purposes.
[172] This is what really struck me about reading the pragmatists. Peirce and James studied Darwin deeply, and they were the first philosophers to realize exactly what implications Darwinian theory had for both ontology and epistemology (ontology being the study of reality, for everyone listening).
[173] That was a real surprise.
[174] You could understand that Darwin's theory might have epistemological implications, implications for the theory of knowledge, but the fact that it had implications for what reality is per se is something that very few scientists have yet grappled with. And the pragmatists always said: look, when you accept something as a fact, one of the things you don't notice is that you set up conditions for that to be factual.
[175] And the fact is something like this definition will do during this time span for this very constrained set of operations.
[176] Fact.
[177] Okay, but the problem with that is that's not a dead objective fact just lying on the ground.
[178] That's a fact by necessity nested inside a motivational system.
[179] So facts now all of a sudden become motivated facts.
[180] And that just wreaks havoc with the notion of a distant objective materialism.
[181] Because the facts are supposed to be separate from motivation.
[182] And the pragmatists, as far as I'm concerned, following Darwin, demonstrated incontrovertibly, and I think it's analogous to what you pointed to,
[183] that that's actually impossible.
[184] Now, because you have to constrain reality in order to perceive it, because it's too complex.
[185] You drown in the details otherwise.
[186] You drown in the complexity.
[187] Now, you made the claim, and I want to interrogate this a bit, that there's really no direct relationship, let's say, between the desktop icon that you think is an object when you look at the world and the actual world.
[188] But let me offer you an alternative and tell me what you think about this.
[189] So there's this idea.
[190] This is a weird way of approaching this, but I'm going to do it anyways.
[191] There's a very strange stream of primarily Catholic thought, I believe, that tried to wrestle with the idea of how God could become man. So because God, of course, is infinite and everywhere, and man is finite and bounded.
[192] And so the question is, well, how do you establish a relationship between the infinite and the bounded?
[193] And that's analogous to the same problem that we're trying to solve.
[194] And they came up with this hypothesis of kenosis, which means emptying.
[195] And their notion was, well, Christ was God, but in some ways like a low-resolution representation of God, an image of God.
[196] So there was a correspondence, but not a totality, at least not at any one instance.
[197] Now, the reason I'm bringing that up is because it seems to me that when we perceive an object, it isn't completely without what you call homomorphism with the underlying world.
[198] It's just extremely low resolution.
[199] Like, it's a low resolution functional tool.
[200] That's what an object is.
[201] And in support of that I would advance, for example, the icons that we have on a computer screen: we can use them, and we treat them like they're real, and clearly they're low resolution.
[202] But also when we watch an animated show, for example, like The Simpsons, we're looking at cartoon -like icons, right?
[203] They're emptied even further. Like, if I saw a Simpsons cartoon of you, it would be a very low-resolution representation of the you I see, which is a very low-resolution representation of whatever the hell you are in actuality.
[204] But I think there's an element of that perception that's an unbiased sampling of the underlying reality, although it's bent to pragmatic ends, pragmatic motivational ends.
[205] Now, I don't know what you think about that.
[206] I've thought about it for a long time.
[207] I can't find a hole in it, but I'm wondering what you think.
[208] Well, I think here's an analogy that might help explain the way I see it.
[209] And suppose you're playing a VR version of Grand Theft Auto.
[210] So you have a headset and body suit on, and you're playing a multiplayer Grand Theft Auto.
[211] You're playing with someone in China and England and so forth.
[212] And I'm sitting there in my ride.
[213] I've got a steering wheel and gas pedal and dashboard.
[214] And I'm looking out, and I see, to my right, I can see a red Ferrari.
[215] And to my left, I see a green Mustang.
[216] Well, now, of course, what I'm really interacting with in this analogy is some supercomputer somewhere.
[217] Right.
[218] And if I looked inside that supercomputer and looked for a red Ferrari, I would find no red Ferraris anywhere inside that supercomputer.
[219] I would find voltages.
[220] So in that sense, the red Ferrari is a symbol in my headset, in the game, and there's nothing in the objective reality in this metaphor that it's a low -resolution version of.
[221] It's just literally a completely different kind of beast.
[222] There are no red Ferraris.
[223] Okay, so let me ask you about that.
[224] So I get your point, especially germane with regards to the online game.
[225] But is it not the case that in that supercomputer architecture, there's a pattern that is analogous to the red Ferrari pattern that's the externalized representation of the pattern, let's say, on your retina and then that propagates into your brain?
[226] Like, there is a conservation of pattern.
[227] Now, that Ferrari pattern in the supercomputer would be a very tiny element of an infinite landscape of patterns in the computer.
[228] Now, it's definitely not a pattern of a car per se, right?
[229] It's a pattern of a representation of a car.
[230] But it's still got some correspondence with a pattern of voltages, let's say, that does have some existence within the supercomputer architecture?
[231] Well, so in that case, I would say that there's a causal connection, that what's going on inside the supercomputer has a causal connection with the sequence of pixels that are being illuminated in my headset so that I see a red Ferrari.
[232] So there's a causal connection.
[233] But if I asked, is there some sense in which there's a homomorphism of structure between what's going on inside the computer and what I'm seeing on the screen as a red Ferrari, I would say there's probably no homomorphism at all.
[234] And in that sense, we can't think about it as a low-resolution version of something.
[235] So to be specific, the electrons in the computer have no color.
[236] My Ferrari is red.
[237] The shape of the Ferrari and the shapes of the electrons, or even the pattern of motion of the electrons, are independent.
[238] And what's going on in part is that the pattern of electrons in the supercomputer, they're programmed to operate in a certain way to cause certain other things to happen in my headset to trigger voltages that trigger pixels to have certain colors.
[239] And so there's a whole sequence, a whole cascade of events that are going on there.
[240] And so to say that there's a homomorphism, I think, is, I think it's just barking up the wrong tree.
[241] Okay, so I want to push on this a bit more because I want to understand it.
[242] All right, so I'm going to do that from two angles.
[243] The first is that in the supercomputer architecture, let's say, there are levels of potential patterning, ranging from quantum subatomic, atomic, molecular, et cetera, all the way up to the apprehensible phenomenological world.
[244] Multiple layers of potential patterning.
[245] So I would say in response to your objection that if you looked at the electrons, for example, they have no color, that color is only a pattern that can even be replicated analogously at certain levels of that multi -level patterning.
[246] So you won't detect it in the quantum realm.
[247] You won't detect it at the subatomic realm, maybe not even at the atomic realm.
[248] You'd detect it at the level of patterns of molecules at one level, and then not above that.
[249] It'd be a very specific level.
[250] So it could still be there, even though it wasn't propagating through the entire system.
[251] And then I want to add another twist to that that I think is relevant.
[252] So I was talking to a biologist last week about how the immune system functions.
[253] And basically, the way that it functions, you imagine there's a foreign molecule in your bloodstream, and it's got a shape.
[254] Well, it has an endless number of very complex shapes that make up its surface.
[255] And the complexity of that shape would be dependent on the resolution of analysis, right?
[256] Because the subatomic contours would be different than the atomic contours and different than the molecular contours.
[257] Okay.
[258] Now, what the immune system wants to do is get a grip on that molecule.
[259] And it just has to get enough of a grip so that it can register the pattern, replicate the pattern, and get rid of the molecule.
[260] So that's its goal.
[261] You could say that's its motivational frame.
[262] Now, the way it does that is sort of the way your arm works.
[263] Imagine you were trying to figure out how to pick up a basketball.
[264] Now, a baby will do that in the crib.
[265] The first thing a baby will do when it's trying to figure out
[266] how to use its arms is it uses them very non-specifically.
[267] It'll flail them out, maybe it'll hit the ball.
[268] Now, hitting the ball isn't throwing the ball, but it's more like throwing the ball than not hitting the ball, right?
[269] And then the baby does this, and then it, that works, and then it gets a little bit more sophisticated and it does this, and then it gets a little more sophisticated and it does this, and then finally it can manipulate its fingers, so it's specifying the grip.
[270] At some point, the baby can grab the ball and throw it.
[271] And that's kind of what the immune system does.
[272] It makes molecules that kind of stick to the surface, and then those modify so they stick even better, and then the sticky molecules modify so they stick even better.
[273] But the point I'm making is that the immune system appears to generate a sufficient homologue of the molecule to grab it and get it out.
[274] Now, you could say that that homologue that it generates, there's many levels of reality that the foreign body participates in that aren't being modeled by the immune system homologue.
[275] But I would say, yeah, but there's enough of a homology so that the immune system can get a grip and get rid of the molecule.
[276] Now, and we're running around the world, this is a very good analogy, because we're running around the world trying to get a
[277] grip all the time.
[278] And we presume that the map that we've made of the world is sufficiently real if we get a good enough grip to perform the operation that we're intending to perform.
[279] But that still, to me, that still implies that there's some level of representation that has at least the echo of a genuine homology.
[280] So I'm wondering, you know, if you have objections to that or what you think about that.
[281] I think that we can't count on any kind of homology or homomorphism.
[282] I think that, for example, the way I think about it now is that spacetime itself and all the particles that we see at the subatomic level and the whole bit, that's all just a headset.
[283] And physicists actually agree; they say space-time is doomed.
[284] So Nima Arkani-Hamed, David Gross, and many others are saying that we need a new framework for physics that's utterly outside of space-time and quantum theory.
[285] And they're finding structures like decorated permutations and so forth.
[286] These are structures not sort of curled up inside of space-time, but utterly outside of space-time.
[287] And so I think science is telling us,
[288] and Darwin's theory, I think, is agreeing,
[289] that space-time is not fundamental,
[290] and that it's just a headset.
[291] Okay, okay, so if I said there's no ultimate homology, but there are proximal local homologies, would that do the trick?
[292] I have a reason for torturing you about this, and I'll leave it soon.
[293] But I press on it because the issue of grip really makes a difference as far as I'm concerned. Getting a grip is sort of the basis of understanding: all of our cognitive enterprises, you could think, in some real sense, are extensions of our ability to manipulate the world with our hands.
[294] I mean, the fact that our left hemisphere is linguistically specialized looks like it's a consequence of its specialization for articulation at the level of the hand.
[295] And so getting a grip is crucial here.
[296] And the homology seems to me to be demonstrated in the fact that like if you pick up a hammer, it actually comes off the ground.
[297] Now, I think you could reasonably object that that homology is tremendously limited.
[298] But it's hard for me to accede to the notion that it's absent.
[299] Now, having said that, I don't want to push that point to stop you, let's say, from questioning something as fundamental as the objective reality of space and time.
[300] I think you can have your cake and eat it too in that regard.
[301] And I want to turn to those more radical claims right away.
[302] But if I said, well, if I pick up a hammer and it does in fact raise off the floor, how is that not an indication of a homology?
[303] Would you just, you would reduce that again to mere function?
[304] Like, it's merely the case that it worked and that's not demonstration of anything beyond.
[305] The thing is, it worked.
[306] That's the thing.
[307] That's why I can't shake the notion of some homology.
[308] Well, I would again say that there's a causal connection.
[309] You could talk about a causal connection between the reality behind your headset and what you're seeing in the headset.
[310] But I think it would be a stretch to talk about some kind of homology of structure.
[311] It's actually not necessary.
[312] To be successful, it's not necessary.
[313] Well, and as you pointed out very early in this discussion,
[314] it also might be hyper-expensive, right?
[315] You actually don't want to know more about something than you need to know in order to perform the requisite action.
[316] That's part of efficiency.
[317] Right.
[318] So, okay, so all right, so let's leave that aside.
[319] Let me grind away on that in the back of my mind.
[320] I'll just say one little thing: if you have a desktop icon on your laptop for a file, and it's blue and rectangular in the middle of your screen, well, the file is not blue.
[321] It's not rectangular, and it's not in the middle of the computer.
[322] There's literally no homology for anything that you can see in the symbol on the screen and the file itself.
[323] It's just a useful symbol without homology; there is a causal connection through the voltages, but no homology.
[324] So then, okay, maybe we can go down that route.
[325] Sure.
[326] I guess I'm then unclear about what you mean, what exactly do you mean by causal then?
[327] Right, so that's already sort of smuggling in a space-time kind of analogy. Right, right, right, exactly, exactly. So I'll just say that there is a mathematical connection, maybe not causal, but there's some kind of mathematical connection. But the mathematics need not be a kind of mathematics that preserves, you know, structure, for example. Right. So there's a mathematical connection. Okay, I'm going to have to grind away on that for a bit, because, you know, you are stating that there is a relationship, at least a function, and I'm unable to, on the fly, thoroughly discriminate between some grip of structure and some function, because grip is a function. So I'll just put that aside. Now let's go on to consciousness itself. Sure. You said a variety of very radical things, including criticizing the entire notion of space and time, and so we'll delve into that.
[328] But I want to tell you something that I learned from reading mythology, and I want you to tell me how that relates, if at all, to the way that you're conceptualizing consciousness, which is obviously not the way that people generally conceptualize it.
[329] Okay, so I've read a lot of different mythological accounts, and I've studied a lot of analyses of mythological accounts, and I think I've been able to extract out commonalities and regularities across the methods of assessment, and I think I've been able to triangulate them against findings from neuroscience, let's say, the neuroscience of perception.
[330] Now, the mythological stories that represent the structure of reality proclaim, you could say, that there are three interacting fundamental causal agents, or structures.
[331] Causal agents is probably a better way of thinking about it.
[332] There's a realm of potential from which order can be extracted.
[333] That's often given feminine symbolism, the realm of potentiality.
[334] And I think that's because feminine creatures are the creatures out of which new creatures emerge.
[335] So there's a deep analogy there.
[336] So there's a realm of potentiality.
[337] Then there's a realm of a priori order.
[338] That's often given patriarchal or paternal symbolism.
[339] That's the Great Father.
[340] And so if you read a book, let's say, the book offers you a realm of potentiality, which is the multitude of potential interpretations that the book consists of.
[341] But then you impose an order on that that's a consequence of every book you've ever read and every experience you've ever had.
[342] And the book itself is a phenomenon that emerges as a consequence of the interplay between the interpreter and the realm of potentiality.
[343] Then there's one additional factor, which I think is identical to consciousness itself.
[344] It's associated in mythology with the sun, with the sun that sets and then rises, triumphant in the morning.
[345] It's associated with the conquering hero.
[346] And it's the thing, it's the active agent that transforms this infinite potentiality into concretized reality.
[347] It literally makes order out of chaos.
[348] That's the right way to think about it, I think, and we, as conscious beings, partake in that process.
[349] In fact, that process is our essence, and that's what makes us made in the image of God, let's say, but also instantiated with something like intrinsic value.
[350] Now, you have a very strange concept of consciousness, and so partly because you're attempting to make the case that what we think of as objective reality, so that's just the facts, ma'am, objective reality, is actually an emergent property.
[351] Tell me if I've got this wrong.
[352] It's actually an emergent property of consciousness itself, and so that in your scheme of things, consciousness is more fundamental than objective reality?
[353] It isn't even obvious in your scheme that objective reality, so to speak, exists.
[354] So tell me how you've grappled with the relationship between consciousness and the world as such.
[355] What have you concluded?
[356] Darwin and physics, high-energy theoretical physics, agree that space-time is doomed.
[357] It's not fundamental reality.
[358] And the search is on in the last 10 years among physicists to find structures entirely beyond space-time, not curled up inside space-time, beyond space-time.
[359] And they found structures I mentioned, like decorated permutations, the amplituhedron, and so forth.
[360] And so I'm also thinking about consciousness utterly outside of space-time.
[361] So it's a fundamental reality.
[362] And space-time, which we have thought of for most of human history as the fundamental reality that we're embedded in, is a trivial headset.
[363] That's all it is.
[364] We've mistaken a headset for the truth because, yeah, it's easy.
[365] If that's all you've seen, all your life, is a headset, then it's hard to imagine something outside of it.
[366] But science is good enough to recognize that spacetime is just a headset.
[367] So now we're free, using mathematics, to ask what
[368] kind of structures could we posit beyond space-time?
[369] And in my case, I'm trying to also deal with the mind -body problem.
[370] How is consciousness related to what we call the physical world?
[371] So I've decided to try to get a mathematical model of consciousness.
[372] Now, of course, spiritual traditions and humanity for thousands of years have thought about consciousness and so forth.
[373] But as a scientist, what I want to do, of course, is listen to their insights, but I need to write down as minimal a mathematical structure as I can to boot up a completely rigorous theory.
[374] And so what we've done in our theory, which we call the theory of conscious agents, is posit a very minimal structure.
[375] A conscious agent has a probability space that it's defined on.
[376] So it's a probability space.
[377] Is that probability space equivalent to, let's say, a realm of potential?
[378] My students and I tried to model anxiety as a response to entropy.
[379] So imagine that what you have in front of you is a set of branching possibilities, some of which can be realized with comparatively less effort, so they're more probable, let's say, given your current state, some of which are virtually impossibly distal, but in principle could be managed if you were smart enough and could gather the resources.
[380] But so you have a probability space in front of you, some of which is sort of at hand.
[381] Like it's pretty easy for me to pick up this pen, right?
[382] So that's a high-probability pathway laid out in front of me. So, I mean, the mythological motifs that I referred to insist that what people face is something akin to the pre-cosmogonic chaos that God himself faced when the cosmos first sprang into being, right?
[383] And so that the way to construe the world isn't as a place of clockwork, automaton machines, self -evident objects, but as a realm of possibility that differs in probability.
[384] And then the issue becomes, how do you best orient yourself so that you contend, you can contend properly with that probability landscape?
[385] Now, is that, am I walking on parallel ground here?
[386] We're in broad agreement, in the sense that our theory of conscious agents, by writing down a probability space, posits a space of potentiality. For example, to be very, very concrete, suppose my experiment is just to flip a coin twice, heads and tails. Well, what's my probability space? Well, I could get heads-heads, heads-tails, tails-tails, or tails-heads, right? So there's four possibilities. Or you'd land on the edge. Yeah, right, yeah. Well, then I'd have to enlarge my probability space if I wanted to include that.
[387] But now notice, I write down the probability space first, but I haven't flipped my coin yet.
[388] So it's the space of potential outcomes of things that I can do.
[389] And that's what probability spaces are.
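The two-flip example can be made concrete in a few lines of code. This is just an illustrative sketch of a finite probability space, not part of the formal theory of conscious agents:

```python
from itertools import product

# Sample space for flipping a coin twice: all ordered pairs of outcomes.
# It is written down before any coin is flipped, so it is a space of
# potential outcomes, exactly as described in the conversation.
outcomes = ["H", "T"]
sample_space = list(product(outcomes, repeat=2))
# -> [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')], four possibilities

# A probability measure assigns each outcome a weight; the weights sum to 1.
# For a fair coin, each of the four outcomes gets probability 1/4.
measure = {omega: 0.25 for omega in sample_space}

# Including a "lands on the edge" outcome would mean enlarging the sample
# space and re-normalizing the measure, as noted above.
```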
[390] And so when I write down a probability space for consciousness, in the first instance I'm thinking of it as being about questions like: what is the probability that I'll experience green?
[391] or mint, or the sound of a trumpet.
[392] So all these different conscious experiences.
[393] So the probability space is a space of all possible kinds of conscious experiences that this particular agent might have.
[394] And you can imagine that for some agents, maybe they're simple.
[395] They only have the experience of red, period.
[396] That's it.
[397] That's all this agent has: red.
[398] The other one can experience red and green.
[399] And the other one can have 10 trillion experiences.
[400] You could imagine agents with, and then they can be related.
[401] Well, maybe the red agent can be thought of as a subspace of the one that says red and 10 million other things.
[402] So we can now get a...
[403] Depends on how articulated the organism is, right?
[404] So, yeah, the simpler organisms, exactly.
[405] The probability space around them collapses.
[406] That's right.
[407] And so, right, right.
[408] And so all the infinite number of potential possibilities that we see in front of us just collapse into maybe five choices, something like that.
[409] And sometimes...
[410] Yeah.
[411] Okay, so, you know, Karl Friston, so this is quite interesting.
[412] So I talked to Karl Friston about emotion, about hope, positive emotion, let's say, incentive-reward positive emotion.
[413] So positive emotion in that sense is a reward that signals advancement towards a goal.
[414] Now, I'd already been conceptualizing with my students, as had Friston, anxiety as a marker for the emergence of entropy.
[415] But Friston pointed out, now, and I want to make a connection between his thinking and yours here, Friston pointed out that you can map positive emotion with respect to entropy too, because if you're looking for a desired outcome, so imagine you're trying to get a grip on the world to bring about a certain reality, if you see yourself making a step towards that end such that the number of potential pathways to that end decreases somewhat, that produces a dopamine kick.
[416] And that's a signal of reduced entropy in relationship to the goal.
[417] And it seems to me that entropy is always calculated in relationship to a goal, right?
[418] You're saying, well, how entropic is the current space?
[419] And you can't answer that.
[420] You have to say, how entropic is the current space in relationship to the ordered state that I'm trying to bring about as a consequence of my actions?
[421] And then now and then you'll stumble across something that blows up in your face, let's say.
[422] Like, I've always thought about this.
[423] Like, imagine you're driving your car to work.
[424] Okay, and you might say, well, what is your car?
[426] And the objective materialist would say, well, it's an enclosed shell with four tires.
[427] It would give you a materialist description.
[428] But I would say, no, no, no. That's not how your nervous system is responding at all.
[429] For your nervous system, the car is a conveyance from point A to point B. So it's a tool.
[430] And it's a tool that signifies zero entropy, essentially, as long as it performs its function.
[431] And then let's say your car breaks down.
[432] And now, you're on the side of the road.
[433] Now what happens to you is the probability space around you, I would say it becomes more distal.
[434] Any of your desired goals become more expensive and harder to compute, right?
[435] What's wrong with my car?
[436] Was I an idiot for buying that car?
[437] Am I generally an idiot?
[438] Am I going to get in trouble with my boss?
[439] What's going to happen to the rest of the day?
[440] You know, what's going to happen when I go see the mechanic?
[441] Right.
[442] The landscape blows up into a broader range of unconstrained potentialities, and that seems to be signaled by anxiety, and anxiety then prepares your body for a multitude of potential actions, and the problem with that is that it's very physiologically costly.
[443] Right, so that's stress, and that'll wear you to a frazzle.
[444] So, okay, so is any of that not in accord with the manner in which you are modeling your theory of conscious agents?
[445] Right.
[446] So in the theory of conscious agents, I should say that in addition to the probability space and the conscious experiences that it allows, there is the dynamics.
[447] It's a Markov chain, a Markovian dynamics, where you have matrices that describe the probabilities: if I'm experiencing red now, what's the probability I'll experience green the next time I have an experience?
[448] So there is a dynamical structure, and when we do the analysis, it turns out that our Markovian dynamics need not have an entropic arrow of time.
[449] It can be a stationary dynamics in which the entropy does not increase.
[450] So entropy in this realm of consciousness...
[451] That's kind of what you hope.
[452] Right.
[453] You know, that's one of the things that makes things constant, right?
[454] Is that you assume that the entropic transformation is negligible.
[455] That's why you can ignore things, right?
[456] When you ignore things and you ignore almost everything, you're assuming that the entropic transformation is negligible.
[457] Well, what I'm saying is that it's possible to model a reality in which entropy doesn't increase, period.
[458] It's not ignoring anything.
[459] That's the nature of this deeper reality outside of space time.
[460] But then it turns out to be a theorem that if you take a projection of that non-entropic dynamics, where there's no arrow of time in the sense of increasing entropy in this Markovian dynamics, if you take a projection of it by conditional probability, any projection of it, it's a theorem that you will, as an artifact of projection, have the illusion of an arrow of time.
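The stationary Markovian dynamics being described can be illustrated with a toy example. This is my own minimal sketch, not Hoffman's published model: a two-state chain over the experiences "red" and "green" with a doubly stochastic transition matrix, for which the uniform distribution is stationary and Shannon entropy never increases or decreases:

```python
import math

# Doubly stochastic transition matrix (rows and columns each sum to 1),
# so the uniform distribution is stationary and its entropy stays constant.
# State 0 = "red", state 1 = "green".
P = [[0.75, 0.25],
     [0.25, 0.75]]

def step(dist):
    """One tick of the Markovian dynamics: dist'_j = sum_i dist_i * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(P[0]))]

def shannon_entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

dist = [0.5, 0.5]  # start at the stationary (uniform) distribution
entropies = []
for _ in range(10):
    dist = step(dist)
    entropies.append(shannon_entropy(dist))

# Every entry is 1.0 bit: no entropic arrow of time in this dynamics.
print(entropies)
```

Projecting such a dynamics down, for instance by conditioning on a subset of states and thereby losing information, is the kind of operation that the theorem mentioned above says produces an apparent arrow of time.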
[461] You will get an...
[462] Right, well, is that because, well, look, if you're pursuing a pragmatic goal, things can fall apart and go wrong, and that is an increase in entropy within the universe defined by that goal.
[463] That may say nothing about entropy per se as a characteristic of the broader reality.
[464] See, I've always had this issue with entropy, because entropy always seemed to me to be by necessity subjectively defined.
[465] It has to be disorder in relationship to some positive state of order, and then you get back into the Darwinian problem at that point.
[466] Like if it's well, if it's bounded by motivation, then it's encapsulated within a Darwinian space.
[467] So, okay, so in terms of your conception of objects, let me try this out.
[468] So I'm looking at this teleprompter here and you're sitting in the middle of it.
[469] Now, I'm treating that like a set of conditional probabilities, right?
[470] I'm presuming that what this machine is doing right now is very much predictive of what it's going to do in a second.
[471] And I'm predicating my perception itself on that reality.
[472] Now, you know, it could burst into flames.
[473] Now, I feel that the probability of that is very low.
[474] So I'm not going to perceive the machine that way.
[475] Now, you know, there are disorders, obsessive-compulsive disorder is a good example, where people stop being able to reduce that probability landscape to predictable safety, and they start reacting to almost everything as if it's unpredictably dangerous.
[476] And, you know, so I had clients, for example, who would go into a building.
[477] And the first thing they would do is look for all the fire escapes.
[478] And what they asked me was, well, why don't you do that?
[479] Because the building could burn down and people do get trapped in buildings, and that's a horrible way to die.
[480] So the mystery isn't why they did that.
[481] The mystery for them was why everyone didn't do that all the time.
[482] And I actually do believe that the great mystery is why people aren't scared out of their skulls all the time, not why they're sometimes calm.
[483] So can you imagine an object now?
[484] The object is surrounded by a probability distribution, I would say.
[485] And in that probability distribution is all the things that object might turn into in some period of time, let's say.
[486] And I would say to some degree, when you look at the object, you actually also perceive that probability space.
[487] Because, you know, although I see that this teleprompter is stable, it's unstable enough and dynamic enough to provide me with a representation of you.
[488] And so, by seeing the object and interacting with it, I'm playing with the probability space around it.
[489] So is it the case that you see the damn probability space when you look at the object?
[490] Well, I don't know if we see the space itself.
[491] We certainly, we're estimating what we think are the probabilities for various good things and bad things to happen.
[492] But I would say that this whole business about entropy increasing and so forth.
[493] First, I should point out that Shannon entropy, which is what we're talking about here, turns out not to be the most general notion of entropy.
[494] Mathematicians and physicists are looking at broader definitions of entropy.
[495] There's something called Tsallis entropy, and others.
[496] So there are technical reasons for why, I mean, Shannon entropy is great and it's very, very useful.
[497] And when I was talking about the entropy of our dynamical systems and not having, you know, increasing entropy, I was talking about Shannon entropy.
[498] But there are more general notions of entropy that are important.
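For reference, Shannon entropy and the Tsallis generalization can be computed side by side. This is standard textbook material, not specific to the theory being discussed; the Tsallis form recovers Shannon entropy in the limit q → 1:

```python
import math

def shannon_entropy(dist):
    # H = -sum(p * ln p), in nats
    return -sum(p * math.log(p) for p in dist if p > 0)

def tsallis_entropy(dist, q):
    # S_q = (1 - sum(p^q)) / (q - 1); reduces to Shannon entropy as q -> 1
    if q == 1.0:
        return shannon_entropy(dist)
    return (1.0 - sum(p ** q for p in dist)) / (q - 1.0)

dist = [0.5, 0.25, 0.25]
h = shannon_entropy(dist)                 # about 1.04 nats
s_near_1 = tsallis_entropy(dist, 1.0001)  # very close to the Shannon value
s_2 = tsallis_entropy(dist, 2.0)          # 1 - sum(p^2) = 0.625
```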
[499] So I would say that the whole structure of needing to estimate probabilities and worrying about outcomes and, you know, rewards and so forth, from the point of view of our dynamics of conscious agents, all of that, in fact all of Darwinian theory, is an artifact of projection.
[500] So here's a dynamic of conscious agents outside of space -time.
[501] There need not be any competition, no limited resources, no arrow of time.
[502] And yet, when I take any projection of that dynamics to get a new Markovian dynamics that has lost just a little bit of information, I will have an arrow of time.
[503] And it can look like separate organisms competing for resources.
[504] and so forth.
[505] In other words, I mean, I love Darwin's theory of evolution by natural selection; it's very powerful.
[506] But I think the entire theory is not a deep insight into reality.
[507] I think it's an artifact of projection.
[508] The very arrow of time, think about the arrow of time.
[509] It is the fundamental limited resource in evolutionary theory.
[510] Time is the fundamental limited resource.
[511] If I don't get food in time, I die.
[512] If I don't mate in time, I don't reproduce.
[513] If I don't breathe air in time.
[514] So time is the fundamental limited resource, and the arrow of time itself need not be fundamental.
[515] It could be entirely an artifact of projection.
[516] So what that means is, and this gets again to the whole...
[517] Okay, well, then I'd like to know, and this is back to the most fundamental possible question we could be discussing: well, what's the nature of reality itself?
[518] I mean, when I was debating with Sam Harris, we got hung up on this consistently because I wasn't willing to use the same definition of truth that he was.
[519] He uses an objective materialist definition, and I think that, you know, truth flies like an arrow, let's say.
[520] It's got a functional element to it that you cannot eradicate.
[521] There's no way out of that with an objective materialism, as far as I can tell.
[522] Now, you said the Darwinian race and the arrow of time is just an artifact, but if I said, well, hold on a second, I don't exactly know what you mean by artifact then, because if I don't act like there's an arrow of time and restricted resources in that regard, then I'm going to die.
[523] And that's real enough for me. You might even say, well, my death has little to do with the fundamental structure of reality, but I would say, well, it has enough to do with it, so it happens to concern me. And so, you know, we start to get into a discussion about what constitutes reality itself.
[524] If this is just a projection, what in principle would be real?
[525] Right.
[526] So on this theory, then, consciousness is the fundamental reality; the conscious experiences
[527] that observers have are the fundamental reality.
[528] And the experience that we have of space and time is a projection of a much deeper reality.
[529] And that projection, because it loses information, is necessarily going to have artifacts in it.
[530] And among the artifacts are things like separate objects in space and time.
[531] Space and time itself is an artifact.
[532] So one reason I'm not a materialist
[533] is because our best materialist theories, namely evolution by natural selection and also quantum field theory and Einstein's theory of gravity, tell us that space-time has no operational meaning at 10 to the minus 33 centimeters or 10 to the minus 43 seconds.
[534] In other words, our theories, our scientific theories that are the foundation of our materialist ideas tell us precisely the scope and the limits of materialism.
[535] Materialism, that kind of materialism, is fine down to the Planck scale, 10 to the minus 33 centimeters.
[536] And after that, it completely falls apart.
[537] It's utterly...
[538] Irrelevant.
[539] That's right.
[540] The space-time, physicalist, matter kind of materialism falls apart, and it's not because of religious ideas that I'm saying this.
[541] I'm just listening to the science.
[542] Science tells us space-time has no meaning beyond the Planck scale.
[543] And that's why the avant-garde high-energy theoretical physicists are now looking for structures entirely outside of space-time,
[544] not curled up inside space-time, entirely beyond.
[545] So it's in that sense that, yeah, materialism, and by the way, this is, I should say this about all scientific theories.
[546] My view about all scientific theories is that each scientific theory starts with certain assumptions, the premises of the theory, and it says if you grant me those assumptions, then I can explain all this wonderful stuff.
[547] Okay, okay, so how did you come to that conclusion?
[548] Because that's, see, this is, hmm, I've been trying to wrestle with this with regards to, say, the potential relationship between the integrity of the scientific process and an underlying transcendent ethic.
[549] So I think, for example, I talked to Richard Dawkins about this a little bit, although we didn't get that far for a variety of reasons.
[550] But, like, I think that to be a scientist, there's certain things that you have to accept on faith.
[551] These would be equivalent to those axioms.
[552] And I'm not talking about it necessarily a scientific theory here, as you were, but the practice of science itself.
[553] So, for example, you have to act as if there is truth.
[554] You have to act as if the truth is discoverable.
[555] You have to act as if you can discover it.
[556] Then you have to act as if you discovering the truth and communicating it is good.
[557] And none of that is provable scientifically.
[558] You have to start with those axioms before you can even make a move.
[559] And it could be wrong, you know.
[560] I mean, we think that delving into the structure of the world with integrity is redemptive.
[561] We think that knowledge is useful pragmatically.
[562] But, you know, we've invented all sorts of things that could easily wipe us out, the hydrogen bomb perhaps being foremost among those.
[563] And so the evidence that that set of claims is true is sorely lacking.
[564] Or you could say it's 50-50.
[565] That's another way of thinking about it.
[566] But I'm very curious about how you came to the conclusion that scientific theories themselves have to be axiomatically predicated.
[567] How did you walk down that road because that's not a road that very many people walk down?
[568] Well, if you just look at any scientific theory, say Einstein's theory is special relativity.
[569] He says, let's start with two assumptions that the speed of light is universal for all observers and that the laws of physics are the same in all inertial frames.
[570] He says, if you grant me those two miracles, then the whole...
[571] And away we go.
[572] And Euclid does the same thing.
[573] And so does Riemann.
[574] Darwin starts off and says, grant me that there are organisms in space and time and resources, and these organisms are competing for resources.
[575] Now I'll give you a theory.
[576] So if you just look at any scientific theory, a good theory will make explicit the assumptions.
[577] But if it's not, you can find what the assumptions are.
[578] So there's no theory.
[579] Okay.
[580] There's no theory of everything.
[581] Do you think there's any difference? Technically, I'm thinking philosophically, I don't see any difference between the claim that a given theory has to have axioms that aren't provable from within the frame of that theory.
[582] That's Gödel's theorem, as far as I can tell, applied much more broadly.
[583] I don't see any difference between that and the proposition that, to get the game started, there has to be something akin to a miracle.
[584] I mean, because these axioms, imagine that a miracle inside a system is defined as any occurrence that isn't governed by the rules that apply within that system.
[585] That's a good working definition.
[586] Now, your proposition is, well, I don't care what theory you're coming up with, there's going to be a set of axiomatic presuppositions that are a launching point.
[587] See, I also think those axiomatic presuppositions are where you put all the entropy.
[588] You say, grant me this, it's like, well, that takes care of 95% of the mystery, so we'll just shelve that invisibly, right?
[589] Because it's hidden inside the axioms, and then you can go about manipulating the small remnant of trouble that you have left over.
[590] I also think this is why people don't like to have their axioms challenged, because if you say, well, I'm not going to accept that, then you let loose all the demons that are encapsulated within those axioms, and they start roaming about again, and people don't like that at all.
[591] Well, yeah, a good scientist will want to have their assumptions made absolutely mathematically precise and explicit.
[592] So they're just laid out there and they say these are the assumptions of the theory.
[593] And given these assumptions, I can now prove this.
[594] And this is the glory of science where we put down precisely what our assumptions are.
[595] And then we look at it mathematically, and we can get both the scope of those assumptions.
[596] How much can we do with those assumptions?
[597] and the limits, like in the case of space-time, the limits are 10 to the minus 33 centimeters.
[598] Game over.
[599] By the way, it's not that deep, in my view.
[600] It's not 10 to the minus 33 trillion centimeters.
[601] It's just 10 to the minus 33, and the game is over for space-time.
[602] So that's a good antidote for dogmatism, because your own theory, a mathematically precise theory, will tell you the limits of your assumptions and then say, okay, now you need to look for a broader framework with deeper assumptions.
[603] But they will be new assumptions.
[604] And so I view this as infinite job security for scientists.
[605] Because we will never, ever get a theory of everything.
[606] We'll always have a theory of everything except our current assumptions.
[607] And I agree with you that those assumptions will essentially be the whole bailiwick of what we're doing.
[608] So there's a reality, whatever it is, now this is for me something of an interesting mystery.
[609] Our theories, in some sense, don't even scratch the surface of the truth.
[610] And yet, because this process will go on forever, we will still essentially have measure zero of the truth.
[611] And yet, Einstein's theory and quantum theory gave us the technologies that are allowing you and me to talk across the country.
[612] Well, so you could say that partly what's happening there is that the more sophisticated the theory, the broader the range of probable states of any given object or system of objects that can be predicted.
[613] It's something like that.
[614] But Piaget pointed that out when he was talking about developmental improvement in children's cognitive theories.
[615] And so, you know, if you look at someone like Thomas Kuhn, Kuhn presumed that we undertook multiple scientific revolutions, but there was no necessary progress.
[616] There were just different sets of axioms.
[617] And Piaget knew about Kuhn's theory, by the way.
[618] But Piaget's point was, no, you've got it slightly wrong because there is a progression of theory in that a better theory allows you to predict everything the previous theory allowed you to predict, plus some additional things.
[619] Now, your point would be, well, we can just continue that movement upward forever, right?
[620] Because the landscape of potentiality is inexhaustible.
[621] And so, again, you can have your cake and eat it too.
[622] We can learn more. Einstein got us farther than Newton, which doesn't mean that Einstein's axiomatic set is the final say.
[623] Okay, so let me put a twist in this.
[624] I've been thinking about this recently.
[625] I'm writing a new book, and one of the things I'm doing in that book is doing an analysis of the story of Abraham.
[626] Abraham's a very interesting story.
[627] Okay, so Abraham is called out into the world, even though he sort of hung around his father's tent till he's like 70.
[628] So he had utopia at hand.
[629] He didn't have to do any work to get everything he needed.
[630] But that wasn't good enough.
[631] So a voice comes to him.
[632] It's the voice of conscience, I would say, and says, look, you've got all this security, but that isn't what you're built for.
[633] Get the hell out there in the world.
[634] And so he does that, and then all hell breaks loose.
[635] It's one bloody catastrophe after another.
[636] Starvation and tyranny and warfare and the necessity of sacrificing his son, it's just like one bloody thing after another.
[637] Okay, but during that process, Abraham continues to aim up and he makes the proper sacrifices.
[638] And the consequence of that is that God promises him that his descendants will be more numerous than the stars.
[639] So I was reading that from an evolutionary perspective, and I thought, okay, what's happening here is that the narrative is trying to map out a pathway that maximizes reproductive fitness all things considered.
[640] Now, the problem I have with theories like Dawkins's, let's say, and you tell me if you think this is wrong, is that Dawkins implicitly reduces sex to lust, then he reduces reproduction to sex.
[641] And the problem with that is that reproduction is not exhausted by lust or sex, quite the contrary, especially in human beings, because not only do we have to chase women, let's say, but then when we have children, we have to invest in them for like 18 years before they're good for continual reproduction.
[642] And we have to interact with them in a manner that's predicated on an ethos that improves the probability of their reproductive fitness.
And so reproduction, see, this is something that casual Darwinists do very incautiously, as far as I'm concerned, because they identify the drive to reproduction with sex.
[644] And that's a big mistake, because sex might ensure your reproduction proximally for one generation.
But the pattern of behavior that you establish and instantiate in your offspring, which would be an ethos, might ensure your reproduction multi-generationally.
[646] You see, and that appears to be what's being played out in this story of Abraham is that the unconscious mind, let's say, trying to map the fitness landscape is attempting to determine what pattern of behavior is most appropriate if the goal is maximal reproductive fitness calculated across multiple generations or maybe across infinitely iterating generations.
[647] And so that points to something again, like you said earlier, you called it a general fitness, what was it?
[648] I got to get it here.
[649] Big fitness payoff, right?
[650] And that could be the ethos to which all these subsidiary ethoses are integrated.
[651] See?
[652] Okay, okay.
So I'm wondering what you think about that: first of all, what you think about the proposition that evolutionary biologists and psychologists, Dawkins is a good case in point, have erred when they've too closely identified reproduction with, like, short-term sex.
[654] It's like that isn't a guarantee of reproduction.
[655] We wouldn't invest in our children if that was the case.
[656] We would just leave them.
The sex is done; we've reproduced.
[658] You need an ethos to guarantee reproductive fitness across time.
[659] Well, there's several levels here.
First, Dawkins, of course, understands that most reproduction is asexual, right?
[661] So, sexual reproduction is a relatively recent thing.
[662] Most reproduction has been asexual.
[663] So Dawkins is very famous for talking about the selfish gene.
[664] And it's really, when he talks about reproduction, it's about genes reproducing themselves.
It's really not so much about sex; sex is one way of having that happen, but bacteria do it without sex.
And so, as for the emphasis on sex, I would say, Dawkins, of course, understands that sex isn't fundamental.
[667] Now, when it comes to human motivations and, you know, mammal motivations, perhaps in that specific context, you might then be talking about it.
[668] But even there, when you start talking about sexual reproduction, there are many, many strategies that organisms use.
So, for example, some spiders will have just hundreds of babies and eat some of them, you know, and let the others go.
[670] Having the babies is their only job.
[671] And after that, the babies are on their own.
[672] And so there are different strategies.
[673] So this is where, you know, Dawkins is quite famous, justifiably for his work on the selfish gene idea, that is, there are different strategies, but the only thing that matters in this framework is what is the probability that the particular genes, you know, spread through the population in later generations?
[674] Sex came along, apparently, to deal with...
[675] Okay, as one of the pathways to that, right?
[676] One of the path, that's right.
[677] But there's another framework in thinking about all this as well.
[678] So, again, I love evolutionary theory.
[679] I think in terms of models of evolution and so forth, of creatures and their behaviors.
[680] It's an incredibly powerful theory.
[681] I've used it a lot.
[682] My book Case Against Reality talks about it in great detail.
[683] It's a wonderful theory.
But I think that from this deeper framework that science is now moving into beyond space time, all of evolutionary theory, all of it, is an artifact of projection.
[686] It's not, in other words, if you're looking, like, from a spiritual point of view, for some deep principles, deep spiritual principles, evolution, I don't think is deep enough.
[687] I think that it's, all of it is an artifact of space time projection.
[688] And if you're going to be thinking, looking for deep principles about, you know, that spiritual tradition is talking about Abraham and really thinking big, I think that thinking inside space time is not big enough.
[689] You've got to step entirely outside of space time.
[690] Space time has all these artifacts.
[691] And we're so used to being stuck in the headset.
Well, there is an insistence upon that in the Judeo-Christian tradition because God is conceptualized, what would you say, traditionally as being entirely outside of time and space.
And so whatever works for humans, like the human landscape and the divine landscape, they're not the same.
[694] There's a relationship between them, however.
[695] they're not the same.
[696] Okay, so now, okay, so let me, let me ask you about that.
[697] Now, you have made the case, not least in this interview, that consciousness is primary.
[698] Now, consciousness uses these projections.
[699] So how do you reconcile the notion that consciousness is primary?
[700] And I want to make sure I'm not misreading what you're saying, that consciousness is primary, but consciousness operates in the world with these projections.
See, because this is the thing I grapple with: if survival itself is dependent on the utilization of a scheme of pragmatic projections, in what sense can we say that reality is something other than that?
Because, see, this is something that Peirce and William James wrestled with, too.
It's like, well, why make the claim that there is a reality outside of the human concern with survival and reproduction?
[704] And if consciousness is the primary reality, and it's using projections to orient itself so that it can survive and reproduce in the biological sense, how can you even begin to put forward a claim that there is a reality that transcends that?
[705] On what grounds does it transcend it?
[706] In relationship to what?
[707] Right.
[708] So these are deep waters.
[709] And the idea that I'm playing with right now is that this consciousness is, there's one ultimate infinite consciousness.
And what is it up to? Knowing itself.
[711] But how do you know yourself?
[712] Well, there are certain theorems that say that no system can actually completely know itself.
[713] Right, right, right.
So if this one infinite consciousness wants to know itself, all it can do is start looking at itself through different perspectives.
[715] So putting on different headsets.
[716] So space time is one headset.
And from that perspective, this is a projection of the one infinite consciousness.
[718] And in that perspective, it looks like evolution by natural selection.
[719] It looks like quantum field theory and so forth.
[720] And it looks like I need to play the game this way.
[721] But this is a trivial headset.
[722] This is actually, I think, one of the cheaper headsets.
[723] Okay.
[724] That's very interesting.
[725] Okay, so one of the things, while writing the book that I'm writing now, I've been walking through all these biblical narratives.
[726] And one of the things they do, every single narrative provides a different characterization of the infinite.
[727] There's no real replication.
[728] It's like, well, here's a picture of the divine, and here's another one, and here's another one, and here's another one.
[729] Now, there's an insistence that runs through the text, this unites the text, that those are all manifestations of the same underlying reality.
[730] But it is definitely the case that what's happening is that these are movies, so to speak, shot from the perspective of different directors.
[731] And it does seem to me akin to something coming to know itself.
[732] There's this ancient Jewish idea.
This is a great one; it's like a Zen koan.
[734] It's a great little mystery.
[735] It says, so here's the proposition.
So God is traditionally imbued with the following characteristics: omniscience, omnipresence, and omnipotence.
[738] What does that lack?
[739] And you think, well, that's a ridiculous question, because by definition, that lacks nothing.
[740] But the answer is limitation.
[741] That lacks limitation.
And that's actually the classical explanation for God's creation of man: the unlimited needs the limited as a viewpoint. It has something to do with, as you pointed out, I believe, the possibility of coming to something like conscious awareness.
[743] You see this in T .S. Eliot, too.
[744] I don't remember which poem where he talks about coming back to the point of origin, which is like the return to childhood, you know, that heavenly notion that to enter the kingdom of heaven you have to become as a little child.
[745] It's like, but there's a transformation there so that that return to the point of origin is accompanied by an expansion of consciousness.
[746] It's not a collapse back into childish unconsciousness.
[747] It's the reattainment of a, what would you say?
[748] It's the reattainment of the state of play.
[749] That's a good way of thinking about it that obtained when you were a child, but with conscious differentiated knowledge.
[750] So there is this tremendous narrative drive in the Western tradition towards differentiated, comprehensive understanding as a positive good.
[751] And that seems tied up with the continual drama between God and man. So, and I do think the scientific enterprise is an offshoot of that.
[752] That's what it looks like to me historically.
[753] So, okay, so how in the world do you survive in psychology departments given what you're thinking about?
[754] Well, I've got the mathematics.
[755] So as long as, if I was just talking this stuff without any mathematical underpinnings to it, it would be dismissed, of course.
But, you know, in the case of the evolutionary stuff, we've published papers in the Journal of Theoretical Biology, for example, and elsewhere, where we actually put the mathematics out there.
So it's peer-reviewed, and I think that's a bit surprising. And, you know, I'm in a minority, a small minority, but that's the way science progresses.
[758] It proceeds one funeral at a time.
[759] Yeah, it progresses by minorities of one.
[760] Exactly right.
[761] And scientists understand that you want to have independent ideas, think out of the box, make it mathematically precise.
[762] Most of our ideas will be nonsense, including mine, but you've got to put them out there and push them and see what happens.
[763] I have, I'll say in terms of, I've gotten some stiff pushback.
For example, some philosophers have published papers recently where they give the following argument against my Darwinian theory.
They'll say, look, Hoffman uses evolutionary game theory to show that space and time and physical objects and organisms don't exist.
Well, he's got himself into what they call an unenviable dialectical situation.
[767] Either evolutionary game theory faithfully represents Darwin's ideas, or it doesn't.
[768] So if it doesn't, then he can't use it to say that organisms and resources are not fundamental in space time.
And if it does faithfully represent Darwin's ideas, well, Darwin's ideas are that space time is fundamental, and in it there are organisms and resources.
[770] So it couldn't possibly contradict that.
[771] So either way, Hoffman is screwed.
[772] There's nothing he can do.
[773] So, and that's been published, actually, in high value philosophy journals.
[774] And my response is, it's quite simple.
[775] It misunderstands science completely.
[776] Every scientific theory has, when you write it down mathematically, it has a scope and its limits.
[777] And the mathematics tells you both the scope and the limits.
[778] So, for example, just to be very concrete, Einstein's theory of gravity, right?
[779] And I think 1907 or so, he had this big idea.
[780] If I was standing on a weighing machine in an elevator and all of a sudden the cord was cut and I was in free fall, I would all of a sudden be weightless.
[781] That was his big idea for his theory of gravity.
[782] It took them years, seven or eight years to actually make the mathematics, but he wrote down his field equation.
[783] So those field equations are Einstein's mathematics to capture his idea that space time is fundamental and has certain properties.
[784] Well, a year after he published it, Schwarzschild, a German scientist, discovered that they entailed black holes.
[785] And we've eventually found out that this theory entails that spacetime itself has no operational meaning beyond 10 to the minus 33 centimeters.
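For reference, the 10 to the minus 33 centimeters Hoffman cites corresponds to the Planck length, the scale at which space-time is generally expected to lose operational meaning; assuming the standard definition in terms of ħ, G, and c:

```latex
\ell_P \;=\; \sqrt{\frac{\hbar G}{c^{3}}} \;\approx\; 1.616 \times 10^{-35}\,\mathrm{m} \;\approx\; 1.6 \times 10^{-33}\,\mathrm{cm}
```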
[786] So we could use the same argument that's been used against me against Einstein.
Now, look: Einstein's field equations, either they're faithfully representing his ideas or they're not.
[790] If they don't, then we couldn't use them to show that space time isn't fundamental.
[791] And if they do, they couldn't possibly show that space time isn't fundamental.
[792] That last step is the wrong one.
[793] The equations are there to show you the limits of your concepts.
They give you precision; that's what these philosophers have missed: the equations that we write down tell us not just the scope, but the limits of our theories.
[795] And that's why science is so valuable, because it tells us your theory, your assumptions go this far and no further.
So that's all I've done with Darwin's theory of evolution: to say this theory goes this far and no further.
Well, okay, that also sounds to me very much like a vindication of the fundamental claim of the pragmatists, which is that we accept something as true without noticing that what we mean is true in a time frame, with certain implications for instantiation, something like that.
[797] And so true is a lot more like, does the bridge stand up when 100 cars go across it?
It's not some final, comprehensive, all-encompassing definition of the truth for all time.
And you've already made the case that it can't be, because that truth is an ever-receding goal.
[800] It's always bounded.
[801] Okay, so when I came across that, I thought, okay, well, bounded by what?
[802] And it's, well, it's bounded by our aim.
[803] And then that's bounded by our motivation.
And then that's nested inside a Darwinian world.
[805] Okay, now, let's go after the game theory.
[806] Well, let me just say one thing about that.
[807] Sorry, go ahead.
[808] Go ahead.
[809] Yeah, I would just say that the very deep, deepest spiritual traditions really say that up front.
Like the Tao Te Ching starts off saying the Tao that can be spoken of is not the true Tao.
[811] Once you understand that, then go ahead and read the rest of it.
[812] That's a good example, because that's a great book.
[813] Yeah, that's a great book.
[814] And I think that that's also the way we should think about our science.
[815] The science that can be spoken of is not the final reality.
[816] But given that, it's a wonderful thing to do science.
[817] And we should do science, and we should do it very, very rigorously.
[818] But we should always understand that if we're talking about a theory of everything, it should be with a wink and a nod, because there is no theory of everything that we can write down.
[819] Right.
[820] It's the theory of everything that we've discovered so far, maybe, but it will never be the final theory of everything.
[821] Right, and it might have a broader, broader range of potential applications as well.
[822] But that doesn't mean that we've exhausted the landscape of comprehensive theories.
[823] Right.
[824] Okay, so now the philosophers that you described as objecting to your theory said that if evolutionary game theory is correct, and it models Darwin's propositions appropriately, then.
Well, so game theory is extremely interesting to me; although I wouldn't say I'm an expert in its comprehension, I understand its gist, I believe, and it seems to me to be something like this: if you iterate interactions, an ethos of one form or another emerges.
So, for example, if you run tit-for-tat simulations, you find out that the best trading strategy is to cooperate, but slap back when necessary, and then forgive, something like that.
[827] And so what it points to, very interestingly, is something like a concordance between objective reality insofar as objective reality is an emergent pattern coming out of iterative interactions and something like an ethos.
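The tit-for-tat result described here comes from iterated prisoner's dilemma tournaments. A minimal sketch of that dynamic, assuming the standard payoff values from Axelrod-style tournaments (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for defecting against a cooperator):

```python
# Minimal iterated prisoner's dilemma: tit-for-tat vs. a fixed defector.
# Payoff values (3, 0, 5, 1) are the standard tournament assumptions.

PAYOFF = {  # (my move, their move) -> my payoff; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Return total payoffs for two strategies over repeated rounds."""
    hist_a, hist_b = [], []          # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)             # A remembers what B just did
        hist_b.append(a)
    return score_a, score_b

# Tit-for-tat "slaps back" after the first betrayal, so a pure defector
# gains only a one-round advantage over it.
print(play(tit_for_tat, always_defect))   # → (9, 14)
```

Against a pure defector, tit-for-tat concedes only the first round and then punishes every betrayal; across many iterated matchups, that mix of cooperation, retaliation, and forgiveness is what wins Axelrod-style tournaments, which is the emergent "ethos" being described.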
[828] So the first question I have is, like, why are you interested in evolutionary game theory and why do you think that it is a valid representative, a more differentiated representative, if I've got the language right, of Darwinian theory?
[829] Oh, well, I'm interested in it because that's within the field of evolutionary theory itself.
Evolutionary game theory is taken as, you know, the prized mathematical tool for really understanding things.
[831] So that's just the framework of the science itself.
[832] Okay, so that's accepted as far as you're concerned.
Yeah, I mean, of course, there's always debate, but by the vast majority it's the received opinion.
So if I wanted to, as a scientist, analyze Darwin's theory for this issue about truth, and I wanted to do it rigorously, the tool was evolutionary game theory.
[835] That was the tool to use.
[836] And it's not because I think it's the final word or the truth.
[837] It's just our current state of play in the field.
[838] Right now, that's the best we have.
[839] And I wanted to use the best tool we have.
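The mathematical core of evolutionary game theory that Hoffman alludes to is the replicator dynamic: a strategy's share of the population grows when its payoff exceeds the population average. A minimal sketch, using a hawk-dove payoff matrix as an illustrative assumption (the specific numbers are not from Hoffman's papers):

```python
# A sketch of the replicator dynamic, the workhorse of evolutionary game
# theory. The hawk-dove payoff values below are illustrative assumptions.

def replicator_step(freqs, payoff, dt=0.01):
    """One Euler step of x_i' = x_i * (f_i - mean_fitness)."""
    # fitness of each strategy against the current population mix
    fitness = [sum(payoff[i][j] * freqs[j] for j in range(len(freqs)))
               for i in range(len(freqs))]
    mean_fit = sum(x * f for x, f in zip(freqs, fitness))
    return [x + dt * x * (f - mean_fit) for x, f in zip(freqs, fitness)]

# Hawk-dove payoffs: value of resource V = 2, cost of fighting C = 4.
# Rows/cols: 0 = hawk, 1 = dove.
V, C = 2.0, 4.0
payoff = [[(V - C) / 2, V],
          [0.0,         V / 2]]

freqs = [0.1, 0.9]                 # start with 10% hawks
for _ in range(20000):
    freqs = replicator_step(freqs, payoff)

# The mix settles near the mixed equilibrium V/C = 0.5 hawks.
print(round(freqs[0], 2))          # prints 0.5
```

With resource value V and fighting cost C > V, the dynamic settles at a hawk fraction of V/C, a mixed equilibrium no pure strategy can invade; stable population-level patterns like this are what make the framework the field's standard analytical tool.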
[840] And that's the way we're always pulling ourselves up by the bootstraps in science, right?
[841] We always say, these are the best theories we have and the best tools we have so far.
[842] Of course, our goal is not to prove that we're right.
[843] Our goal is to find the limits of our current theories and transcend them.
So what we're looking for are the best tools that will say, aha, Darwin goes this far and no further.
Space time goes this far, you know, in high-energy theoretical physics.
Einstein's wonderful theories, they're an incredible gift.
They go down to 10 to the minus 33 centimeters and they stop.
[848] That gift stops right there.
[849] And now we have to go entirely outside.
[850] And that will be the never -ending pattern of science, is that whatever the scientists are finding outside of space time, that will just be our next baby step.
[851] And we'll analyze that and then say, okay, what's beyond that and beyond that?
[852] And science will continue to go.
[853] So as long as you recognize that that's the game, you'll realize that there's no theory of everything in science.
[854] And then the question is, who am I?
[855] Who are we that are able to do this game?
[856] And that's a very interesting question.
[857] Well, you know, there's lots of things I'd like to ask you about, but that's a pretty good place to stop.
[858] And we're damn near at an hour and 30.
So I hope I have the privilege of furthering my discussion with you at some point in the not-too-distant future.
I would like to ask: is there anything in closing that you would like to bring to the attention of the listening audience, the watching audience, that you think we needed to cover to make what we have covered comprehensible?
[861] Or is that also, in your estimation, a good place to stop?
[862] I'll just say one little thing, I guess.
[863] And that is, some people might think, well, he's got this theory of consciousness outside of space time.
[864] So what?
[865] Who cares?
[866] And I would agree with that unless I did something more.
So what we're trying to do now, as scientists, is to say: we have this mathematical model of consciousness outside of space time.
[868] We just published a proposal for how to actually test it.
So we're going to have a projection into space-time; we're working on that projection.
[870] We'd like to model the inner structure of the proton.
We would like to have a dynamics of conscious agents that projects down and gives us what's called the momentum distributions of quarks and gluons inside a proton, at all the Bjorken x and Q-squared values, the different spatial and temporal resolutions that particle physicists have studied.
[872] And the reason we're going there is not because I think that's the most important application of a theory of consciousness.
[873] it's the most accessible one.
[874] That's the simplest part of our science right now.
[875] Ultimately, of course, the brain has the nice neural correlates of consciousness.
[876] We want to understand that.
[877] But that's really complicated.
[878] So we're going to go after, if we can model the proton and get it exactly right, get the momentum distributions to several decimal places, it doesn't mean our theory is right, but it does mean it can't be dismissed out of hand.
And so our goal is to take a theory of consciousness, not just airy-fairy hand-waving, but to actually get in there and predict the inner structure of the proton in great detail.
[880] If we can do that, then I would say we then can start to move up to molecules and then ultimately to neural systems and the brain and try to understand the neural correlates of consciousness.
[881] But not the neural correlates, the brain does not cause consciousness on this model.
[882] The brain is merely a symbol inside the headset, right?
[883] Right.
[884] So, and in fact, I would say this, neurons do not even exist when they're not perceived.
[885] Neurons cause none of our behavior.
[886] And yet, I'm a cognitive neuroscientist.
And I think that we should study it; neuroscience is wonderful, and we need more funding for it, because it's more complicated than we thought.
We thought: we look inside the brain, we see neurons, and that's because that's the reality.
[890] There are neurons.
[891] No, that's the interface description of something that's much more complicated.
We have to reverse engineer neurons to this network of conscious agents outside of space time, so we need more funding for neuroscience; it's much more complicated.
So I'll just be a little brief; of course, as you can imagine, I'm talking about something that could take hours to go into in detail, but just to put those out there and say these are objections people might have, and where we're headed.
Okay, well, I do have one other question then; I guess I do have to throw it out.
So you have a very radical conception of consciousness; what has that done for you existentially, do you think?
[893] I mean, you're obviously thinking about the place of consciousness while you're thinking about it existentially.
[894] You're thinking about the place of consciousness in the cosmos and you regard it as a fundamental reality.
[895] So what has that done to the manner in which you contemplate your own, say, mortality or the purpose of your life?
And what has that done for you on that side of things?
[897] Quite a bit.
It's really hit me in the face, because I'm intuitively as much a physicalist and materialist as anybody else.
[899] I'm wired up to believe all that.
[900] And so it's, it's come as a terrible shock to me. My whole self -image has had to change.
And in what direction?
In what direction has your self-image changed?
[903] What changed?
[904] Well, I thought of myself as a little object in space time.
[905] Right, right, right.
And the death of the body is ultimately the death of me. And now it's, well, our best science says that, you know, my body is just an icon in a headset.
[907] So in some sense, it's just an avatar.
[908] This body is just an avatar.
[909] It's not.
[910] And so death is more like taking off a headset.
[911] So, but my emotions don't agree with that.
[912] So I've got this really interesting.
[913] Well, that's probably just as well.
[914] Right.
[915] Yeah, exactly.
So I do spend a lot of time in meditation.
[917] And my father was a Protestant minister, a fundamentalist Protestant minister.
So I was raised in the Christian church.
[919] And so I look at those points of view.
[920] I look at the Eastern mystical stuff.
[921] I meditate myself.
[922] And my ultimate thinking about this is, as I said, we can never have a theory of everything.
[923] And that includes of who I am.
So on the question of who I am, my best guess right now is: at the deepest level, I and you are, in fact, the one consciousness just looking at itself through different avatars.
[925] So it's really the one using a Jordan avatar to talk to the one, you know, Hoffman avatar.
And that's what's going on here.
And in that sense... Mm-hmm.
[928] So are you responsible for being the best possible avatar you can be, so to speak?
[929] Well, in some sense, within this projection, within this headset, morals of a certain kind are the rules of the road.
[930] But my guess is that when we take the headset off, we'll just laugh.
[931] That was what we had to do in this headset, but that was, I am not this avatar.
I am the consciousness that transcends space and time.
[933] Well, you know, the next time we talk, maybe that's a road we should wander down because we didn't get into the metaphysics of ethics, let's say, during this conversation.
[934] And there's plenty of that.
[935] That's obviously a whole other area.
[936] Okay, okay.
[937] Well, that would be good.
[938] All right.
[939] Well, so to everyone watching and listening, thank you very much for tuning into this podcast.
As most of you know, I'm going to talk to Dr. Hoffman for another half an hour behind the Daily Wire Plus platform.
[941] And I'm going to see if I can find out where in the world his interests stemmed and how they initially manifested themselves and developed across time.
[942] We'll do that as much as we can in half an hour.
[943] Thank you to the crew here up in Northern Ontario for journeying up here to do this podcast.
[944] Thank you, Dr. Hoffman, very much for your time today.
[945] To the Daily Wire Plus people for making this possible, that's also much appreciated.
[946] And we'll see all of you watching and listening, hopefully, on another podcast.
[947] Thank you very much, sir.
[948] Thank you, Jordan.