#325 – Michael Levin: Biology, Life, Aliens, Evolution, Embryogenesis & Xenobots

Lex Fridman Podcast #325

Full Transcription:

[0] The following is a conversation with Michael Levin, one of the most fascinating and brilliant biologists I've ever talked to.

[1] He and his lab at Tufts University work on novel ways to understand and control complex pattern formation in biological systems.

[2] Andrej Karpathy, a world-class AI researcher, is the person who first introduced me to Michael Levin's work.

[3] I bring this up because these two people make me realize that biology has a lot to teach us about AI,

[4] and AI might have a lot to teach us about biology.

[5] And now, a quick two-second mention of each sponsor.

[6] Check them out in the description.

[7] It's the best way to support this podcast.

[8] We've got Henson Shaving for a great razor and shave, Eight Sleep for naps, Element for on-the-go electrolytes, and InsideTracker for biological tracking.

[9] Choose wisely, my friends.

[10] And now, on to the full ad reads. Never ads in the middle; those suck.

[11] I try to make these ads here interesting so they're worth listening to, perhaps mostly because a lot of them don't have anything to do with the sponsor.

[12] They're almost like inspired by the sponsor.

[13] But if you still must skip, please check out the sponsors.

[14] They're the reason I'm able to do this podcast, so please support them in any way you can.

[15] I enjoy their stuff.

[16] Maybe you will too.

[17] This show is brought to you by Henson Shaving, a family-owned aerospace manufacturer bringing precision engineering to your shaving experience.

[18] You had me at engineering.

[19] If there's any word a person can state to me that, you know, I was going to say turns me on a little bit, but that would be very inappropriate to do in an ad read, so I would never say a thing like that.

[20] But anyway, I find engineering just in all of its forms, materials engineering, mechanical engineering, civil engineering, all of it.

[21] I love it.

[22] They're using aerospace-grade CNC machines, and because of that they're able to make metal razors where the blade extends just 0.013 inches, which is less than the thickness of a human hair.

[23] And the precision of it is the reason you can have a safe, clean, smooth shave.

[24] Check them out at hensonshaving.com/lex to pick your razor, and use code LEX; you'll get 100 free blades with your razor.

[25] You must add both the 100 blade pack and the razor for the discount to apply.

[26] This episode is also brought to you by Eight Sleep and its new Pod 3 mattress.

[27] It is a source of happiness for me. Naps bring joy to my heart.

[28] They put to rest all the anxiety, the uncertainty, the sadness, the melancholy that I have in my heart; they let that fade, and the phoenix rises from the ashes of the nap.

[29] And a cool surface of the bed with a warm blanket, I just love it.

[30] It's usually going to be about a 20 to 30 minute nap.

[31] Cures all ills.

[32] I can have so much uncertainty, so much fear, so much anxiety about the world, and just a little nap.

[33] or insecurity, all of it.

[34] A nap can cure it, and maybe that's a gift, maybe that's a chemical gift for me. I'm so fortunate not to suffer from sort of chemical depression that can hold you down for many days, weeks, or months.

[35] For me, a nap can cure so much, and it's beautiful.

[36] I mean, sleep is so, so important.

[37] So I highly recommend Eight Sleep.

[38] It's been, like I said, a source of a lot of happiness for me. Check it out and get special savings when you go.

[39] Go to eightsleep.com/lex.

[40] This episode is also brought to you by Element, an electrolyte drink mix spelled LMNT.

[41] I like their watermelon salt one, but lately I have also been consuming some of the chocolate mint.

[42] I have a question mark on that.

[43] I don't remember if that's what it is, but that's what it tastes like, and it's delicious. But it's kind of a different thing.

[44] Nutritionally it's the same thing, but spiritually it's a different thing.

[46] So watermelon salt is more when I'm just trying to add a little kick to my water, right, and making sure I'm getting all the sodium, potassium, and magnesium right for the low-carb stuff I'm doing.

[47] But the chocolate one has a little, I don't want to say sweetness, but a little more complexity in the way chocolate does, right?

[48] And so I feel like it's the thing that relaxes me a bit more.

[49] It's like a dessert at the end of the day.

[50] Anyway, get a sample pack for free with any purchase.

[51] Try it at drinkLMNT.com/lex.

[52] This show is also brought to you by InsideTracker, a service I use to track biological data from my body.

[53] They have a bunch of plans.

[54] Most include a blood test that gives you a bunch of information, which is then used by machine learning algorithms, together with your fitness tracker data, to make recommendations for positive diet and lifestyle changes.

[55] Your body, as John Mayer once said, is a wonderland.

[56] It's full of data.

[57] It's full of signals that it is sending to you.

[58] And the incredible aspect of our bodies is that they are able to integrate those signals to then maintain health, right?

[59] Now, we're creating a lot of medical tools and diet tools and all that kind of stuff that's able to extend the capabilities of our body, but it needs to know about those signals.

[60] It needs to know about the signals the body sends, so it's obvious to me that this is the future of health and medicine and all that kind of stuff.

[61] Get special savings for a limited time when you go to insidetracker.com/lex.

[62] This is the Lex Fridman Podcast.

[63] To support it, please check out our sponsors in the description.

[64] And now, dear friends, here's Michael Levin.

[65] Embryogenesis is the process of building the human body from a single cell. I think it's one of the most incredible things that exists on Earth, from a single embryo. So how does this process work? Yeah, it is an incredible process. I think it's maybe the most magical process there is. And I think one of the most fundamentally interesting things about it is that it shows that each of us takes the journey from so-called "just physics" to mind, right? Because we all start life as a single unfertilized oocyte, and it's basically a bag of chemicals, and you look at that, and you say, okay, this is chemistry and physics.

[66] And then nine months and some years later, you have an organism with high-level cognition and preferences and an inner life and so on.

[67] And what embryogenesis tells us is that that transformation from physics to mind is gradual.

[68] It's smooth.

[69] There is no special place where, you know, a lightning bolt says, boom, now you've gone from physics to true cognition.

[70] That doesn't happen.

[71] And so we can see in this process the whole mystery, you know, the biggest mystery of the universe, basically: how you get mind from matter.

[72] From just physics, in quotes.

[73] Yeah.

[74] So where does the magic come into the thing?

[75] How do we get from information encoded in DNA and make physical reality out of that information?

[76] So one of the things that I think is really important if we're going to bring in DNA into this picture is to think about the fact that what DNA encodes is the hardware of life.

[77] DNA contains the instructions for the kind of micro-level hardware that every cell gets to play with.

[78] So all the proteins, all the signaling factors, the ion channels, all the cool little pieces of hardware that cells have.

[79] That's what's in the DNA.

[80] The rest of it is in so-called generic laws.

[81] And these are laws of mathematics.

[82] These are laws of computation.

[83] These are laws of physics.

[84] And all kinds of interesting things that are not directly in the DNA.

[85] And that process, you know, I think the reason I always put "just physics" in quotes is because I don't think there is such a thing as just physics.

[86] I think that thinking about these things in binary categories, like this is physics, this is true cognition, this is as-if cognition that's only faking these kinds of things,

[87] I think that's what gets us in trouble.

[88] I think that we really have to understand that it's a continuum, and we have to work up the scaling, the laws of scaling, and we can certainly talk about that.

[89] There's a lot of really interesting thoughts to be had there.

[90] So the physics is deeply integrated with information.

[91] So the DNA doesn't exist on its own.

[92] The DNA is integrated as in some sense in response to the laws of physics at every scale.

[93] The laws of the environment it exists in.

[94] Yeah, the environment and also the laws of the universe.

[95] I mean, the thing about the DNA is that once evolution discovers a certain kind of machine, if the physical implementation is appropriate, it's sort of, and this is hard to talk about because we don't have a good vocabulary for this yet, but it's a very kind of platonic notion: if the machine is there, it pulls down

[96] interesting things that you do not have to evolve from scratch because the laws of physics give it to you for free.

[97] So just as a really stupid example, if you're trying to evolve a particular triangle, you can evolve the first angle and you evolve the second angle.

[98] But you don't need to evolve the third.

[99] You know what it is already.

[100] Now, why do you know?

[101] That's a gift for free from geometry in a particular space.

[102] You know what that angle has to be.
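
The triangle example can be made concrete in a couple of lines; this is an illustrative sketch (an editorial aside, not part of the conversation), but it shows what geometry hands you for free:

```python
def third_angle(a_deg: float, b_deg: float) -> float:
    """In Euclidean geometry the three interior angles of a triangle
    sum to 180 degrees, so once two angles are fixed, the third is
    fully determined: it comes 'for free' from the space itself."""
    return 180.0 - a_deg - b_deg

print(third_angle(60, 60))  # -> 60.0
```

Evolution only has to pay for two of the three constraints; the geometry of the space supplies the rest.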

[103] And if you evolve an ion channel, and ion channels are basically transistors, right?

[104] They're voltage-gated current conductances.

[105] If you evolve that ion channel, you immediately get to use things like truth tables.

[106] You get logic functions.

[107] You don't have to evolve the logic function.

[108] You don't have to evolve a truth table.

[109] It doesn't have to be in the DNA.

[110] You get it for free, right?

[111] And the fact that if you have NAND gates, you can build anything you want, you get that for free.
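
As an illustrative sketch (not from the conversation), the "NAND gives you everything for free" point looks like this in code: define the one gate, and every other truth table falls out of compositions of it:

```python
def nand(a: int, b: int) -> int:
    """The one primitive: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Everything else comes for free, built purely out of NAND:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Print the XOR truth table, realized entirely with NAND gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_(a, b))
```

Once the single gate exists, the full space of logic functions does not have to be encoded anywhere; composition supplies it.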

[112] All you have to evolve is that first step, that first little machine that enables you to couple to those laws.

[113] And there's laws of adhesion and many other things.

[114] And this is all that interplay between the hardware that's set up by the genetics and the software, right?

[115] The physiological software that basically does all the computation and the cognition and everything else is a real interplay between the information in the DNA and the laws of physics and computation and so on.

[116] So is it fair to say, just like this idea that the laws of mathematics are discovered, they're latent within the fabric of the universe, in that same way the laws of biology are kind of discovered?

[117] Yeah, I think that's absolutely right.

[118] And it's probably not a popular view, but I think that's right on the money.

[119] Yeah.

[120] Well, I think that's a really deep idea.

[121] Then embryogenesis is the process of revealing, of embodying, of manifesting these laws.

[122] You're not building the laws.

[123] You're just creating the capacity to reveal them.

[124] Yes.

[125] I think, again, not the standard view of molecular biology by any means, but I think that's right on the money.

[126] I'll give you a simple example, you know, some of our latest work with these xenobots, right?

[127] So what we've done is to take some skin cells off of an early frog embryo and basically ask about their plasticity.

[128] If we give you a chance to sort of reboot your multicellularity in a different context, what would you do?

[129] Because what you might assume by the thing about embryogenesis is that it's super reliable, right?

[130] It's very robust.

[131] And that really obscures some of its most interesting features.

[132] We get used to it.

[133] We get used to the fact that acorns make oak trees and frog eggs make frogs.

[134] And we say, well, what else is it going to make?

That's what it makes.

[136] That's a standard story.

[137] But the reality is, and so you look at these skin cells and you say, well, what do they know how to do?

[138] Well, they know how to be a passive, boring, two-dimensional outer layer, keeping the bacteria from getting into the embryo.

[139] That's what they know how to do.

[140] Well, it turns out that if you take these skin cells and you remove the rest of the embryo, so you remove all of the rest of the cells and you say, well, you're by yourself now.

[141] What do you want to do?

[142] So what they do is they form this little, this motile little creature that runs around the dish.

[143] They have all kinds of incredible capacities.

[144] They navigate through mazes.

[145] They have various behaviors that they do both independently and together.

[146] They basically implement von Neumann's dream of self-replication, because if you sprinkle a bunch of loose cells into the dish, what they do is they run around and collect those cells into little piles.

[147] They sort of mush them together until those little piles become the next generation of xenobots.

[148] So you've got this machine that builds copies of itself from loose material in its environment.

[149] None of this is anything you would have expected from the frog genome.

[150] In fact, the genome is wild type.

[151] There's nothing wrong with their genetics.

[152] Nothing has been added, no nanomaterials, no genomic editing, nothing.

[153] And so what we have done there is engineer by subtraction.

[154] What you've done is you removed the other cells that normally basically bully these cells into being skin cells.

[155] And you find out that what they really want to do, what they want their default behavior to be, is a xenobot,

[156] but in vivo, in the embryo, they get told to be skin by these other cell types. And so now here comes this really interesting question that you just posed. When you ask where the form of the tadpole and the frog comes from, the standard answer is, well, it's selection. Over millions of years, right, it's been shaped to produce the specific body that's fit for froggy environments. Where does the shape of the xenobot come from? There's never been any xenobots. There's never been selection to be a good xenobot. These cells find themselves in the new environment.

[157] In 48 hours, they figure out how to be an entirely different proto-organism with new capacities, like kinematic self-replication.

[158] That's not how frogs or tadpoles replicate.

[159] We've made it impossible for them to replicate their normal way.

[160] Within a couple days, these guys find a new way of doing it.

[161] That's not done anywhere else in the biosphere.

[162] Well, actually, let's step back and define: what are xenobots?

[163] So a xenobot is a self-assembling little proto-organism.

[164] It's also a biological robot.

[165] Those things are not distinct.

[166] It's a member of both classes.

[167] How much is it biology?

[168] How much is a robot?

[169] At this point, most of it is biology, because what we're doing is we're discovering natural behaviors of the cells and also of the cell collectives.

[170] Now, one of the really important parts of this was that we're working together with Josh Bongard's group at the University of Vermont.

[171] They're computer scientists who do AI, and they've basically been able to use an evolutionary, simulated-evolution approach to ask: how can we manipulate these cells, give them signals, not rewire their DNA, so not hardware, but experiences, signals?

[172] So can we remove some cells?

[173] Can we add some cells?

[174] Can we poke them in different ways to get them to do other things?

[175] So in the future, there's going to be, you know, we're now, and this is future unpublished work, but we're doing all sorts of interesting ways to reprogram them to new behaviors.

[176] But before you can start to reprogram these things, you have to understand what their innate capacities are.

[177] Okay, so that means engineering, programming, you're engineering them in the future.

[178] And in some sense, the definition of a robot is something you in part engineer and in part evolve.

[179] I mean, it's such a fuzzy definition anyway, in some sense, many of the organisms within our body are kinds of robots.

[180] Yes, yes.

[181] And I think robots is a weird line because we tend to see robots as the other.

[182] I think there will be a time in the future when there's going to be something akin to the civil rights movements for robots.

[183] But we'll talk about that later perhaps.

[184] Anyway, so how do you, can we just linger on it?

[185] How do you build a xenobot?

[186] What are we talking about here?

[187] From when does it start, and how does it become the glorious xenobot?

[189] Yeah.

[190] So just to take one step back, one of the things that a lot of people get stuck on is they say, well, you know, engineering requires new DNA circuits or it requires new nanomaterials.

[191] You know, the thing is, we are now moving from old-school engineering, which used passive materials, right, things like wood, metal, things like this, where basically the only thing you could depend on is that they were going to keep their shape.

[192] That's it.

[193] They don't do anything else.

[194] It's on you as an engineer to make them do everything they're going to do.

[195] And then there were active materials, and now computational materials.

[196] This is a whole new era.

[197] These are agential materials.

[198] This is, you're now collaborating with your substrate, because your material has an agenda.

[199] These cells have, you know, billions of years of evolution.

[200] They have goals.

[201] They have preferences.

[202] They're not just going to sit where you put them.

[203] That's hilarious, that you have to talk your material into keeping its shape.

[204] That's it.

[205] That is exactly right.

[206] That is exactly right.

[207] Stay there.

[208] It's like getting a bunch of cats or something and trying to organize the shape out of them.

[209] It's funny.

[210] We're on the same page here, because of a paper; this has currently just been accepted in Nature Biomedical Engineering.

[211] One of the figures I have is building a tower out of Legos versus dogs, right?

[212] So think about the difference, right?

[213] If you build out of Legos, you have full control over where it's going to go.

[214] But if somebody knocks it over, it's game over.

[215] With the dogs, you cannot just come and stack them.

[216] They're not going to stay that way.

[217] But the good news is that if you train them, then when somebody knocks it over, they'll get right back up.

[218] So it's all, right?

[219] So as an engineer, what you really want to know is, what can you depend on this thing to do, right?

[220] That's really, you know, a lot of people have definitions of robots as far as what they're made of or how they got here, you know, designed versus evolved, whatever.

[221] I don't think any of that is useful.

[222] I think, as an engineer, what you want to know is: how much can I depend on this thing to do when I'm not around to micromanage it?

[223] What level of dependency can I give this thing?

[224] How much agency does it have?

[225] Which then tells you what techniques do you use?

[226] So do you use micromanagement?

[227] Like you put everything where it goes?

[228] Do you train it?

[229] Do you give it signals?

[230] Do you try to convince it to do things, right?

[231] How much, you know, how intelligent is your substrate?

[232] And so now we're moving into this area where you're working with agential materials.

[233] That's a collaboration.

[234] That's not old-style engineering.

[235] What's the word you're using agential?

[236] Agential.

[237] What's that mean?

[238] Agency.

[239] It comes from the word agency.

[240] So basically the material has agency, meaning that it has some level of, obviously not human-level, but some level of preferences, goals, memories, the ability to remember things, to compute into the future, meaning anticipate.

[241] You know, when you're working with cells, they have all of that, to various degrees. Is that empowering or limiting, having a material that has a mind of its own, literally? I think it's both, right? So it raises difficulties, because it means that if you're using the old mindset, which is a linear kind of extrapolation of what's going to happen, you're going to be surprised and shocked all the time, because biology does not do what we linearly expect materials to do. On the other hand, it's massively liberating.

[242] And so, in the following way: I've argued that advances in regenerative medicine require us to take advantage of this, because what it means is that you can get the material to do things that you don't know how to micromanage.

[243] So just as a simple example, right, if you had a rat and you wanted this rat to do a circus trick, put a ball in the little hoop, you can do it the micromanagement way, which is try to control every neuron and try to play the thing like a puppet, right?

[244] And maybe someday that'll be possible, maybe.

[245] Or you can train the rat.

[246] And this is why humanity for thousands of years before we knew any neuroscience, we had no idea what's between the ears of any animal.

[247] We were able to train these animals because once you recognize the level of agency of a certain system, you can use appropriate techniques.

[248] If you know the currency of motivation, reward and punishment, you know how smart it is, you know what kinds of things it likes to do.

[249] You are searching a much smoother, much nicer problem space than if you try to micromanage the thing.

[250] And in regenerative medicine, when you're trying to get, let's say, an arm to grow back, or an eye to repair, or to solve a birth defect or something.

[251] Do you really want to be controlling tens of thousands of genes at each point to try to micromanage it?

[252] Or do you want to find the high-level modular controls that say, build an arm here?

[253] You already know how to build an arm.

[254] You did it before.

[255] Do it again.

[256] So that's, I think it's both.

[257] It's both difficult and it challenges us to develop new ways of engineering.

[258] And it's hugely empowering.

[259] Okay.

[260] So how do you, I mean, maybe sticking with the metaphor of dogs and cats, I presume you have to figure out, find the dogs and dispose of the cats.

[261] Because, you know, it's like the old herding cats issue.

[262] So you may be able to train dogs.

[263] I suspect you will not be able to train cats.

[264] Or if you do, you're never going to be able to trust them.

[265] So is there a way to figure out which material is amenable?

to herding?

[267] Is it in the lab work or is it in simulation?

[268] Right now it's largely in the lab, because our simulations do not yet capture the most interesting and powerful things about biology.

[269] So what we're pretty good at simulating are feed-forward emergent types of things, right?

[270] So cellular automata, if you have simple rules and you sort of roll those forward for every agent or every cell in the simulation, then complex things happen, you know, ant colony algorithms,

[271] things like that.

[272] We're good at that, and that's fine.
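
A minimal sketch of that kind of feed-forward emergence (an illustrative aside, not from the conversation): an elementary one-dimensional cellular automaton, where each cell's next state depends only on its immediate neighborhood, yet complex global patterns unfold when the simple local rule is rolled forward:

```python
def step(cells, rule=110):
    """One feed-forward update of an elementary cellular automaton:
    each cell looks only at its left neighbor, itself, and its right
    neighbor; the rule number encodes the 8-entry lookup table."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Roll the simple local rule forward and watch structure emerge.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The forward direction is this easy; the trouble discussed next is going the other way.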

[273] The difficulty with all of that is that it's incredibly hard to reverse.

[274] So this is a really hard inverse problem, right?

[275] If you look at a bunch of termites and they make a thing with a single chimney and you say, well, I like it, but I'd like two chimneys.

[276] How do you change the rules of behavior of the termites

[277] So they make two chimneys, right?

[278] Or if you say, here are a bunch of cells that are creating this kind of organism, I don't think that's optimal.

[279] I'd like to repair that birth defect.

[280] How do you control all the individual low-level rules, right?

[281] All the protein interactions and everything else.

[282] Rolling it back from the anatomy that you want to the low-level hardware rules is, in general, intractable.

[283] It's an inverse problem.

[284] It's generally not solvable.
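
An illustrative sketch of why the inverse direction is so hard (an editorial aside, not from the conversation): running local rules forward is trivial, but going from a desired outcome back to rules is, in general, a search, and the search space explodes as the rules get richer. Even in the toy world of elementary cellular automata, the honest general method is brute force:

```python
def step(cells, rule):
    """Forward direction: easy. One local, feed-forward update."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def find_rule(start, target, steps):
    """Inverse direction: hard. There is no general way to 'roll back'
    from the pattern you want to the low-level rule, so we search all
    256 elementary rules; richer rule spaces blow up exponentially."""
    for rule in range(256):
        cells = list(start)
        for _ in range(steps):
            cells = step(cells, rule)
        if cells == list(target):
            return rule
    return None

print(find_rule([0, 0, 1, 0, 0], [1, 1, 1, 1, 1], 1))  # -> 23
```

With only 256 rules this search is instant; with the rule space of real protein interaction networks, it is intractable, which is the point being made.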

[285] So right now it's mostly in the lab, because what we need to do is understand how biology uses top-down controls.

[286] So the idea is not bottom-up emergence, but the idea of things like goal-directed, test-operate-test-exit kinds of loops, where it's basically an error minimization function over a new space.

[287] It's not a space of gene expression, but for example, a space of anatomy.

[288] So just as a simple example, if you have a salamander and it's got an arm, you can amputate that arm anywhere along the length.

[289] It will grow exactly what's needed and then it stops.

[290] That's the most amazing thing about regeneration is that it stops.

[291] It knows when to stop.

[292] When does it stop?

[293] It stops when a correct salamander arm has been completed.

[294] So that tells you that's a, right?

[295] That's a means-ends kind of analysis, where it has to know what the correct limb is supposed to look like, right?

[296] So it has a way to ascertain the current shape.

It has a way to measure that delta from what shape it's supposed to be, and then it will keep taking actions, meaning remodeling and growing and everything else, until that's complete.

[298] So once you know that, and we've taken advantage of this in the lab to do some really wild things with both planaria and frog embryos and so on, once you know that, you can start playing with that homeostatic cycle.

[299] You can ask, for example, well, how does it remember what the correct shape is and can we mess with that memory?

[300] Can we give it a false memory of what the shape should be and let the cells build something else?

[301] Or can we mess with the measurement apparatus, right?

[302] It gives you those kinds.
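
As an illustrative sketch (not from the conversation), that homeostatic cycle can be written as a toy test-operate-test-exit loop; here the hypothetical `target_length` stands in for the stored setpoint, the "memory" of the correct limb, and handing the loop a different target is the code analogue of giving it a false memory:

```python
def regenerate(current_length: float, target_length: float, growth: float = 1.0):
    """Toy homeostatic loop: test the current anatomy against the
    stored target, operate (grow) to reduce the error, and exit when
    the error reaches zero -- which is why regeneration knows when to stop."""
    history = [current_length]
    while target_length - current_length > 1e-9:                         # test
        current_length = min(current_length + growth, target_length)     # operate
        history.append(current_length)
    return history                                                       # exit

# Amputate anywhere along the length; the loop grows back exactly
# what is missing and then stops.
print(regenerate(3.0, 10.0))  # -> [3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
```

Note that nothing in the loop cares where the amputation happened; the error signal alone drives it, which mirrors the salamander observation above.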

[303] So the idea is to basically appropriate a lot of the approaches and concepts from cognitive neuroscience and behavioral science into things that previously were taken to be dumb materials.

[304] And you'd get yelled at in class for being anthropomorphic, if you said, well, my cells want to do this and my cells want to do that.

[305] And I think that's a major mistake that leaves a ton of capabilities on the table.

[306] So thinking about biological systems as things that have memory,

[307] have almost something like cognitive ability.

[308] But, I mean, how incredible is it, you know, that the salamander arm is being rebuilt, not with a dictator.

[309] It's kind of like the cellular automata system.

[310] All the individual workers are doing their own thing.

[311] So where's that top -down signal that does the control coming from?

[312] Like, how can you find it?

[313] Yeah.

[314] Like, why does it stop growing?

[315] How does it know the shape?

[316] How does it have memory of the shape?

[317] And how does it tell everybody to be like, whoa, whoa, whoa, slow down, we're done?

[318] So the first thing to think about, I think, is that there are no examples anywhere of a central dictator in this kind of science, because everything is made of parts.

[319] And so even though we feel like a unified, central sort of intelligence, a kind of point of cognition, we are a bag of neurons, right?

[320] All intelligence is collective intelligence.

[321] This is important to kind of think about, because a lot of people think, okay, there's real intelligence, like me, and then there's collective intelligence, which is ants and flocks of birds and, you know, termites and things like that.

[322] And, you know, maybe it's appropriate to think of them as an individual and maybe it's not; a lot of people are skeptical about that and so on.

[323] But you've got to realize that we are not, there's no such thing as this indivisible diamond of intelligence, this one central thing that's not

[324] made of parts.

[325] We are all made of parts.

[326] And so if you believe, which I think is hard to get around, that we in fact have a centralized set of goals and preferences and we plan and we do things and so on, you are already committed to the fact that a collection of cells is able to do this because we are a collection of cells.

[327] There's no getting around that.

[328] In our case, what we do is we navigate the three -dimensional world and we have behavior.

[329] This is blowing my mind right now, because we are just a collection of cells.

[330] Oh, yeah.

[331] Yeah.

[332] So when I'm moving this arm, I think, I feel like I'm the central dictator of that action.

[333] But there's a lot of stuff going on.

[334] Like all the cells here are collaborating in some interesting way.

[335] They're getting signals from the central nervous system.

[336] Well, even the central nervous system is misleadingly named because it isn't really central.

[337] Again, it's just a bunch of cells.

[338] It's just a bunch of cells.

[339] I mean, all of it, right? There are no singular, indivisible intelligences anywhere.

[340] We are all, every example that we've ever seen is a collective of something. It's just that we're used to it. You know, we're used to, okay, this thing is kind of a single thing, but it's really not. You zoom in, and you know what you see? You see a bunch of cells running around. And so is there some unifying, and you were just jumping around, but is that something that you look at as the biological signal versus the biochemical, the chemistry, the electricity? Maybe the life isn't that versus the cell. It's, there's an orchestra playing, and the resulting music is the dictator. That's not bad. That's Denis Noble's kind of view of things. He has two really good books where he talks about this musical analogy, right? So I think that's, I like it. I like it. Is it wrong? I don't think it's wrong. I don't think it's wrong. I think the important thing about it is that we have to come to grips with the fact that a true, proper cognitive intelligence can still be made of parts.

[341] Those things are, and in fact, it has to be.

[342] And I think it's a real shame, but I see this all the time.

[343] When you have a collective like this, whether it be a group of robots or, you know, a collection of cells or neurons or whatever, as soon as we gain some insight into how it works, right?

[344] Meaning that, oh, I see.

[345] In order to take this action, here's the information that got processed via this chemical mechanism or whatever.

[346] Immediately people say, oh, well, then that's not real cognition.

[347] That's just physics.

[348] And I think this is fundamentally flawed because if you zoom into anything, what are you going to see?

[349] Of course you're just going to see physics.

[350] What else could be underneath, right?

[351] It's not going to be fairy dust.

[352] It's going to be physics and chemistry.

[353] But that doesn't take away from the magic of the fact that there are certain ways to arrange that physics and chemistry, and in particular the bioelectricity, which I like a lot, to give you an emergent collective with goals and preferences and memories and anticipations that do not belong to any of the subunits.

[354] So I think what we're getting into here, and we can talk about how this happens during embryogenesis and so on, what we're getting into is the origin of a self, with a capital S. So we ourselves, there are many other kinds of selves, and we can tell some really interesting stories about where selves come from and how they become unified.

[355] Yeah, is this the first, or at least humans tend to think that this is the level at which the self, with a capital S, is first born?

[356] But, and we really don't want to see human civilization or earth itself as one living organism.

[357] Yeah.

[358] That's very uncomfortable to us.

[359] It is, yeah.

[360] But, yeah, where is the self born?

[361] We have to grow up past that.

[362] So what I like to do is, I'll tell you two quick stories about that.

[363] I like to roll backwards.

[364] So if you start and you say, okay, here's a paramecium and you see it, you know, it's a single -cell organism, you see it doing various things.

[365] And people will say, okay, I'm sure there's some chemical story to be told about how it's doing it.

[366] So that's not true cognition, right?

[367] And people will argue about that.

[368] I like to work it backwards.

[369] I say, let's agree that you and I, as we sit here, are examples of true cognition. If anything is true cognition, we are examples of it.

[370] Now, let's just roll back slowly, right?

[371] So you roll back to the time when you were a small child, and whatever you used to do.

[373] And then just sort of day by day, you roll back.

[374] And eventually you become more or less that paramecium, and then you go even below that, right, to an unfertilized oocyte.

[375] So, to my knowledge, no one has come up with any convincing discrete step at which my cognitive powers disappear, right?

[376] It just doesn't, the biology doesn't offer any specific step.

[377] It's incredibly smooth and slow and continuous.

[378] And so I think this idea that it just sort of magically shows up at one point and then, you know, humans have true selves that don't exist elsewhere.

[379] I think it runs against everything we know about evolution, everything we know about developmental biology.

[380] These are all slow continua.

[381] And the other really important story I want to tell is where embryos come from.

[382] So think about this for a second.

[383] Amniote embryos, so this is humans, birds and so on, mammals and birds and so on.

[384] Imagine a flat disk of cells.

[385] So there's maybe 50 ,000 cells.

[386] And in that, so when you get an egg from a fertilization, let's say you buy a fertilized egg from a farm, right?

[387] That egg will have about 50 ,000 cells in a flat disk.

[388] It looks like a little tiny little frisbee.

[389] And in that flat disk, what will happen is that one set of cells will become special, and it will tell all the other cells, I'm going to be the head.

[390] You guys don't be the head.

[391] And so there's amplification of that symmetry breaking.

[392] You get one embryo.

[393] There's a, you know, there's some neural tissue and some other stuff forms.

[394] Now you say, okay, I had one egg and one embryo, and there you go. What else could it be? Well, the reality is, and I did all of this as a grad student, if you take a little needle and you make a scratch in that blastoderm, in that disk, such that the cells can't talk to each other for a while, it heals up, but for a while they can't talk to each other. What will happen is that both regions will decide that they can be the embryo, and there will be two. And then when they heal up, they become conjoined twins. You can make two, you can make three, you can make lots.

[395] So the question of how many selves are in there cannot be answered until it's actually played all the way through.

[396] It isn't necessarily that there's just one.

[397] There can be many.
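The twinning experiment lends itself to a toy simulation. A minimal sketch, with an invented lateral-inhibition rule: the strongest cell in each communicating region becomes the organizer and suppresses the rest, so one embryo forms per region, and a scratch that blocks signaling yields twins. All names and numbers here are hypothetical.

```python
import random

def count_embryos(n_cells, barriers=()):
    """One organizer (embryo) forms per contiguous region of communicating cells."""
    random.seed(0)
    # Each cell has a random tendency to become the head organizer.
    tendencies = [random.random() for _ in range(n_cells)]
    # A scratch is a barrier: cells on opposite sides can't signal for a while.
    regions, start = [], 0
    for b in sorted(barriers):
        regions.append(tendencies[start:b])
        start = b
    regions.append(tendencies[start:])
    # Within each region, the strongest cell wins and laterally inhibits
    # the others ("I'm going to be the head; you guys don't be the head").
    winners = [max(region) for region in regions if region]
    return len(winners)

print(count_embryos(50_000))                     # intact disk: one embryo
print(count_embryos(50_000, barriers=[25_000]))  # one scratch: conjoined twins
```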

[398] So what you have is this medium, this undifferentiated, I'm sure there's a psychological version of this somewhere that I don't know the proper terminology for, but you have this ocean of potentiality.

[399] You have these thousands of cells and some number of individuals are going to be formed out of the, usually one, sometimes zero, sometimes several, and they form out of these cells because a region of these cells organizes into a collective that will have goals, goals that individual cells don't have.

[400] For example, make a limb, make an eye, how many eyes?

[401] Well, exactly two.

[402] So individual cells don't know what an eye is.

[403] They don't know how many eyes you're supposed to have, but the collective does.

[404] The collective has goals and memories and anticipations that the individual cells don't.

[405] And that the establishment of that boundary with its own ability to pursue certain goals, that's the origin of selfhood.

[406] But is that goal in there somewhere? Were they always destined?

[407] Like, are they discovering that goal?

[408] Like, where the hell did evolution discover this?

[409] When you went from the prokaryotes to eukaryotic cells, and then they started making groups, and when you make a certain group, it's such a tricky thing to try to understand.

[410] You make it sound like the cells didn't get together and came up with a goal, but the very act of them getting together revealed the goal that was always there, there was always that potential for that goal.

[411] So the first thing to say is that there are way more questions here than certainties.

[412] Okay, so everything I'm telling you is cutting edge, developing stuff.

[413] So it's not as if any of us know the answer to this.

[414] But here's my opinion on this.

[415] I don't think that evolution produces solutions to specific problems, in other words, to specific environments.

[416] Like, here's a frog that can live well in a froggy environment.

[417] I think what evolution produces is problem solving machines that will solve problems in different spaces.

[418] So not just three -dimensional space.

[419] This goes back to what we were talking about before.

[420] The brain is, evolutionarily, a late development.

[421] It's a system that is able to pursue goals in three -dimensional space by giving commands to muscles.

[422] Where did that system come from?

[423] That system evolved from a much more ancient, evolutionarily much more ancient, system where collections of cells gave instructions for cell behaviors, meaning cells to move, to divide, to die, to change into different cell types, to navigate morphospace, the space of anatomies, the space of all possible anatomies.

[424] And before that, cells were navigating transcriptional space, which is a space of all possible gene expressions and before that, metabolic space.

[425] So what evolution has done, I think, is produced hardware that is very good at navigating different spaces using a bag of tricks, right, which I'm sure many of them we can steal for autonomous vehicles and robotics and various things.

[426] And what happens is that they navigate these spaces without a whole lot of commitment to what the space is.

[427] In fact, they don't know what the space is, right?

[428] We are all brains in a vat, so to speak.

[429] Every cell does not know, right?

[430] Every cell is some other cell's external environment, right?

[431] So where does that border between you and the outside world, you don't really know where that is, right?

[432] Every collection of cells has to figure that out from scratch.

[433] And the fact that evolution requires all of these things to figure out what they are, what effectors they have, what sensors they have, where does it make sense to draw a boundary between me and the outside world?

[434] The fact that you have to build all that from scratch, this autopoiesis, is what defines the border of a self.

[435] Now, biology uses like a multi -scale competency architecture, meaning that every level has goals.

[436] So molecular networks have goals, cells have goals, tissues, organs, colonies, and it's the interplay of all of those that enable biology to solve problems in new ways.

[437] For example, in Xenobots and various other things.

[438] This is, you know, it's exactly, as you said, in many ways the cells are discovering new ways of being, but it's, at the same time, evolution certainly shapes all this.

[439] So evolution is very good at this agential bioengineering, right?

[440] When evolution is discovering a new way of being an animal, an animal or a plant or something, sometimes it's by changing the hardware, you know, protein, changing protein structure and so on.

[441] But much of the time, it's not by changing the hardware, it's by changing the signals that the cells give to each other.

[442] It's doing what we as engineers do, which is try to convince the cells to do various things by using signals, experiences, stimuli.

[443] That's what biology does.

[444] It has to, because it's not dealing with a blank slate.

[445] Every time, if you're evolution and you're trying to make an organism, you're not dealing with a passive, fresh material that you have to fully specify.

[446] It already wants to do certain things.

[447] So the easiest way to do that search, to find whatever is going to be adaptive, is to find the signals that are going to convince the cells to do various things, right?

[448] Your sense is that evolution operates both in the software and the hardware, and it's just easier and more efficient operating the software.

[449] Yes, and I should also say, I don't think the distinction is sharp.

[450] In other words, I think it's a continuum, but I think it's a meaningful distinction where you can make changes to a particular protein and now the enzymatic function is different and it metabolizes differently and whatever, and that will have implications for fitness.

[451] Or you can change the huge amount of information in the genome that isn't structural at all.

[452] It's signaling.

[453] It's when and how do cells say certain things to each other.

[454] And that can have massive changes as far as how it's going to solve problems.

[455] I mean, this idea of multi -hierarchical competence architecture, which is incredible to think about.

[456] So this hierarchy that evolution builds, I don't know who's responsible for this.

[457] I also see the incompetence of bureaucracies of humans when they get together.

[458] So how the hell does evolution?

[459] built this, where at every level only the best get to stick around? They somehow figure out how to do their job without knowing the bigger picture, and then there's like the bosses that do the bigger thing somehow, or you can now abstract away the small group of cells as an organ or something, and then that organ does something bigger in the context of the full body, or something like this.

[460] How is that built?

[461] Is there some intuition you can kind of provide of how that's constructed that hierarchical competence architecture?

[462] I love that.

[463] Competence.

[464] Just the word competence is pretty cool in this context because everybody's good at their job somehow.

[465] Yeah.

[466] No, it's really key.

[467] And the other nice thing about competency is that so my central belief in all of this is that engineering is the right perspective on all of this stuff because it gets you away from subjective terms, you know, people talk about sentience and this and that.

[468] Those things are very hard to define.

[469] People argue about them philosophically.

[470] I think that engineering terms like competency, like, you know, pursuit of goals, right, all of these things are empirically incredibly useful because you know it when you see it.

[471] And if it helps you build, right?

[472] If I can pick the right level, I say, I believe this has X level of competency: I think it's like a thermostat, or I think it's like a better thermostat, or, you know, various other kinds; there are many, many different kinds of complex systems.

[473] If that helps me to control and predict and build such systems, then that's all there is to say.

[474] There's no more philosophy to argue about.

[475] So I like competency in that way because you can quantify it. In fact, you have to make a claim: competent at what?

[476] And then, or if I say, if I tell you it has a goal, the question is, what's the goal?

[477] And this particular state, that's what it spends energy to get back to.

[478] That's the goal and we can quantify it and we can be objective about it.
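This operational definition, a goal as the state the system spends energy to return to, is easy to make concrete. A minimal sketch of a thermostat-style negative feedback loop; the setpoint, gain, and step count are made-up parameters:

```python
def settle(state, setpoint, gain=0.5, steps=60):
    """Negative feedback: each step spends effort proportional to the error."""
    for _ in range(steps):
        error = setpoint - state
        state += gain * error  # push back toward the setpoint
    return state

# Perturb the system from either direction; it returns to 37.0 regardless.
# That attractor state is what we can objectively, quantifiably call its goal.
print(round(settle(5.0, setpoint=37.0), 3))   # -> 37.0
print(round(settle(90.0, setpoint=37.0), 3))  # -> 37.0
```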

[479] So we're not used to thinking about this.

[480] I give a talk sometimes called, why don't robots get cancer?

[481] And the reason robots don't get cancer is because, generally speaking, with a few exceptions, our architectures have been: you've got a bunch of dumb parts, and you hope that if you put them together, the overlying machine will have some intelligence and do something or other, right?

[482] But the individual parts don't care.

[483] They don't have an agenda.

[484] Biology isn't like that.

[485] Every level has an agenda, and the final outcome is the result of cooperation and competition, both within and across levels.

[486] So, for example, during embryogenesis, your tissues and organs are competing with each other, and it's actually a really important part of development.

[487] There's a reason they compete with each other.

[488] They're not all just, you know, sort of helping each other.

[489] They're also competing for information, for limited metabolic resources.

[490] But to get back to your other point, which is that this seems really efficient and good and so on compared to some of our human efforts, we also have to keep in mind that what happens here is that each level bends the option space for the level beneath, so that your parts basically don't see the geometry.

[491] So I'm using, and I take this seriously, terminology from relativity, right, where the space is literally bent.

[492] So the option space is deformed by the higher level so that the lower levels, all they really have to do is go down their concentration gradient.

[493] They don't have to, and in fact they can't, know what the big picture is.

[494] But if you bend the space just right, if they do what locally seems right, they end up doing your bidding.

[495] They end up doing things that are optimal in the higher space.
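The bending-the-option-space idea maps neatly onto gradient descent: the higher level shapes a potential, and the parts only ever follow the locally sensed slope. A hypothetical one-dimensional sketch; the target, learning rate, and landscape are all invented:

```python
def bend_landscape(target):
    """Higher level: deform the space so its goal sits at the minimum."""
    return lambda x: (x - target) ** 2

def local_agent(x, potential, lr=0.1, steps=200, eps=1e-6):
    """Lower level: no big picture, just step down the local gradient."""
    for _ in range(steps):
        slope = (potential(x + eps) - potential(x - eps)) / (2 * eps)
        x -= lr * slope  # a purely local move that serves the global goal
    return x

potential = bend_landscape(target=3.0)          # the collective's goal
print(round(local_agent(-10.0, potential), 2))  # the dumb part lands at 3.0
```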

[496] Conversely, because the components are good at getting their job done, you as the higher level don't need to try to compute all the low -level controls.

[497] All you're doing is bending the space.

[498] You don't know or care how they're going to do it.

[499] Give you a super simple example.

[500] In the tadpole, we found this.

[501] Okay, so tadpoles need to become frogs.

[502] And to go from a tadpole head to a frog head, you have to rearrange the face.

[503] So the eyes have to move forward, the jaws have to come out, the nostrils move, everything moves.

[504] It used to be thought that, because all tadpoles look the same and all frogs look the same, if every piece just moves in the right direction, the right amount, then you get your frog, right?

[505] So we decided to test.

[506] I had this hypothesis that I thought, I thought actually the system is probably more intelligent than that.

[507] So what did we do?

[508] We made what we call Picasso tadpoles.

[509] So everything is scrambled.

[510] So the eyes are on the back of the head.

[511] Their jaws are off to the side.

[512] Everything is scrambled.

[513] Well, guess what they make?

[514] They make pretty normal frogs, because all the different things move around in novel paths and configurations until they get to the correct froggy, sort of frog-face configuration, and then they stop.
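The Picasso-tadpole result reads naturally as error minimization in anatomical space: each organ reduces its own positional error and the remodeling stops once the target configuration is reached. A sketch with invented coordinates, not the lab's actual model:

```python
# Target "froggy" configuration in a toy 2-D anatomical space.
TARGET = {"left_eye": (-1.0, 2.0), "right_eye": (1.0, 2.0), "jaw": (0.0, 0.0)}

def remodel(face, step=0.2, tolerance=1e-3):
    """Move every organ toward its target; stop when the face has settled."""
    face = dict(face)
    while True:
        total_error = 0.0
        for organ, (tx, ty) in TARGET.items():
            x, y = face[organ]
            total_error += abs(tx - x) + abs(ty - y)
            face[organ] = (x + step * (tx - x), y + step * (ty - y))
        if total_error < tolerance:  # correct configuration reached: stop
            return face

# Scrambled start: eyes and jaw in novel positions, as in the experiment.
scrambled = {"left_eye": (3.0, -4.0), "right_eye": (-2.0, 0.5), "jaw": (5.0, 5.0)}
normal = remodel(scrambled)
print({k: (round(x, 2), round(y, 2)) for k, (x, y) in normal.items()})
```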

[515] So the thing about that is now imagine evolution, right?

[516] So you make some sort of mutation and it does, like every mutation, it does many things.

[517] So something good comes of it, but also it moves your mouth off to the side, right?

[518] Now, if there wasn't this multi -scale competency, you can see where this is going.

[519] If there wasn't this multi -scale competency, the organism would be dead.

[520] Your fitness is zero because you can't eat.

[521] And you would never get to explore the other beneficial consequences of that mutation.

[522] You'd have to wait until you find some other way of doing it without moving the mouth.

[523] That's really hard.

[524] So the fitness landscape would be incredibly rugged.

[525] Evolution would take forever.

[526] The reason it works, one of the reasons it works so well is because you do that.

[527] No worries.

[528] The mouth will find its way where it belongs, right?

[529] So now you get to explore.

[530] So what that means is that all of these mutations that otherwise would be deleterious are now neutral, because the competency of the parts makes up for all kinds of things.

[531] So all the noise of development, all the variability in the environment, all these things, the competency of the parts makes up for it.
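One way to see the deleterious-becomes-neutral claim is a toy fitness count: the same batch of random mutations is almost entirely lethal for a body plan that cannot self-correct, and entirely neutral for one whose parts navigate back to the working configuration. The numbers and threshold are made up:

```python
import random

def fitness(mouth_position, target=0.0):
    """You can only eat if the mouth ends up near where it belongs."""
    return 1.0 if abs(mouth_position - target) < 0.1 else 0.0

def develop(mouth_position, competent):
    # Competent tissue migrates back to the correct configuration, as in
    # the Picasso tadpoles; incompetent tissue stays where the mutation
    # happened to put it.
    return 0.0 if competent else mouth_position

random.seed(1)
# Each mutation displaces the mouth by some random amount.
mutations = [random.uniform(-5, 5) for _ in range(1000)]
dumb = sum(fitness(develop(m, competent=False)) for m in mutations)
smart = sum(fitness(develop(m, competent=True)) for m in mutations)
print(dumb, smart)  # almost all mutations lethal vs. all of them neutral
```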

[532] So that's all fantastic, right?

[533] That's all, that's all great.

[534] The only other thing to remember when we compare this to human efforts is this.

[535] Every component has its own goals in various spaces, usually with very little regard for the welfare of the other levels.

[536] So as a simple example, you know, you as a complex system, you will go out and you will do, you know, Jiu-Jitsu or whatever, you'll have some goal, you go rock climbing, you scrape a bunch of cells off your hands, and then you're happy as a system, right?

[537] You come back and you've accomplished some goals and you're really happy.

[538] Those cells are dead.

[539] They're gone, right?

[540] Did you think about those cells?

[541] Not really, right?

[542] You had some bruising.

[543] Selfish SOB.

[544] That's it.

[545] And so that's the thing to remember is that, you know, and we know this from history, is that just being a collective isn't enough because what the goals of that collective will be relative to the welfare of the individual parts is a massively open question.

[546] The ends justify the means.

[547] I'm telling you, Stalin was onto something.

[548] No. That's the danger.

[549] But we can see, exactly.

[550] That's the danger. For us humans, we have to construct ethical systems under which we don't take the full mechanism of biology seriously and apply it to the way the world functions, which is an interesting line we've drawn.

[551] The world that built us is the one we reject, in some sense, when we construct human societies. The idea that this country was founded on, that all men are created equal, that's such a fascinating idea. It's like you're fighting against nature and saying, well, there's something bigger here than a hierarchical competency architecture.

[552] Yeah.

[553] But there's so many interesting things you said.

[554] So from an algorithmic perspective, the act of bending the option space, that's really profound.

[555] Because if you look at the way AI systems are built today, there's a big system, like I said, with robots, and it has a goal.

[556] And it gets better and better at optimizing that goal, at accomplishing that goal.

[557] But biology built a hierarchical system where everything is doing computation and everything is accomplishing the goal. Not only that, it's kind of dumb, you know; with the limited, with the bent option space, each thing is just doing the thing that's easiest for it, in some sense.

[558] And somehow that allows you to have turtles on top of turtles, literally.

[559] Like dumb systems on top of dumb systems that as a whole create something incredibly smart. Yeah, I mean, every system has some degree of intelligence in its own problem domain. So cells will have problems they're trying to solve in physiological space and transcriptional space, and I can give you some cool examples of that. But the collective is trying to solve problems in anatomical space, right, forming a creature and growing your blood vessels and so on. And then the whole body is solving yet other problems.

[560] They may be in social space and linguistic space and three -dimensional space.

[561] And who knows, you know, the group might be solving problems in, you know, I don't know, some sort of financial space or something.

[562] So one of the major differences with most AIs today is, A, the kind of flatness of the architecture, but also the fact that they are constructed from outside their borders. To a large extent, and of course there are counterexamples now, but to a large extent, our technology has been such that you create a machine or a robot, it knows what its sensors are, it knows what its effectors are, it knows the boundary between it and the outside.

[563] All this is given from the outside.

[564] Biology constructs this from scratch.

[565] Now, the best example of this, originally in robotics, was actually Josh Bongard's work in 2006, where he made these robots that did not know their shape to start with.

[567] So like a baby, they sort of floundered around.

[568] They made some hypotheses.

[569] Well, I did this and I moved in this way.

[570] Well, maybe I'm a whatever.

[571] Maybe I have wheels or maybe I have six legs or whatever, right?

[572] And they would make a model and eventually would crawl around.
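The self-modeling loop described here, act, observe, keep whichever body hypothesis best predicts what happened, can be sketched crudely. The candidate models and their motion rules below are invented for illustration and are not Bongard's actual algorithm:

```python
# Hypothetical body models: how far each body type moves per unit of command.
CANDIDATE_MODELS = {
    "wheels":   lambda action: 2.0 * action,
    "six_legs": lambda action: 0.5 * action,
    "snake":    lambda action: 0.1 * action,
}

def infer_body(true_body, actions):
    """Flail around, then keep the model whose predictions match observation."""
    observations = [CANDIDATE_MODELS[true_body](a) for a in actions]
    def prediction_error(name):
        model = CANDIDATE_MODELS[name]
        return sum(abs(model(a) - o) for a, o in zip(actions, observations))
    return min(CANDIDATE_MODELS, key=prediction_error)

print(infer_body("six_legs", actions=[1.0, -0.5, 2.0]))  # -> six_legs
```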

[573] So that's, I mean, that's really good.

[574] That's part of the autopoiesis.

[575] But we can go a step further, and some people are doing this, and we're sort of working on some of this too: this idea that, let's go back even further.

[576] You don't even know what sensors you have.

[577] You don't know where you end and the outside world begins.

[578] All you have is certain things like active inference, meaning you're trying to minimize surprise, right?

[579] You have some metabolic constraints.

[580] You don't have all the energy you need.

[581] You don't have all the time in the world to think about everything you want to think about.

[582] So that means that you can't afford to be a microreductionist, you know, all this data coming in.

[583] You have to coarse grain it and say, I'm going to take all this stuff.

[584] I'm going to call that a cat.

[585] I'm going to take all this.

[586] I'm going to call that the edge of the table that I don't want to fall off of.

[588] And I don't want to know anything about the microstates.

[589] What I want to know is what is the optimal way to cut up my world.

[590] And by the way, this thing over here, that's me. And the reason that's me is because I have more control over this than I have over any of this other stuff.
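That criterion, "this is me because I have more control over it than over anything else," can be caricatured as a controllability test: label as self whatever sensor channels reliably co-vary with your motor commands. The sensor names and noise levels are invented:

```python
import random

def follows_commands(commands, readings):
    """Crude controllability score: fraction of readings that moved with the command."""
    agree = sum(1 for c, r in zip(commands, readings) if (c > 0) == (r > 0))
    return agree / len(commands)

random.seed(0)
commands = [random.choice([-1.0, 1.0]) for _ in range(200)]
sensors = {
    "my_arm":  [c + random.gauss(0, 0.2) for c in commands],  # tracks commands
    "the_cat": [random.gauss(0, 1.0) for _ in commands],      # does its own thing
}
boundary = {name: "self" if follows_commands(commands, r) > 0.9 else "world"
            for name, r in sensors.items()}
print(boundary)  # the arm gets labeled "self", the cat "world"
```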

[591] And so now you can begin to, right?

[592] So that's self -construction, that figuring out making models of the outside world and then turning that inwards and starting to make a model of yourself, right?

[593] Which immediately starts to get into issues of agency and control, because if you are under metabolic constraints, meaning you don't have all the energy in the world, you have to be efficient.

[594] That immediately forces you to start telling stories about coarse -grained agents that do things, right?

[595] You don't have the energy to, like, Laplace's demon, you know, calculate every possible state that's going to happen.

[596] You have to, you have to coarse -grain, and you have to say that is the kind of creature that does things, either things that I avoid or things that I will go towards.

[597] That's a mate or food or whatever it's going to be.

[598] And so right at the base of a simple, very simple organism, starting to make models of...

[599] agents doing things, that is the origin of models of free will, basically, right?

[600] Because you see the world around you as having agency, and then you turn that on yourself and you say, wait, I have agency too.

[601] I can, I do things, right?

[602] And then you make decisions about what you're going to do.

[603] So one model is to view all of those kinds of things as being driven by that early need to determine what you are, and to then take actions in the most energetically efficient way possible.

[604] So free will emerges when you try to simplify, tell a nice narrative about your environment.

[605] I think that's very plausible, yeah.

[606] Do you think free will is an illusion?

[607] So you're kind of implying that it's a useful hack?

[608] Well, I'll say two things.

[609] The first thing is, I think it's very plausible to say that any agent, whether it's biological or not, that self-constructs under energy constraints is going to believe in free will.

[610] We'll get to whether it has free will momentarily.

[611] But I think, but I think what it definitely drives is a view of yourself and the outside world as an agential view.

[612] I think that's inescapable.

[613] So that's true for even primitive organisms.

[614] I think so.

[615] Now, they don't have, now, obviously you have to scale down, right?

[616] So they don't have the kinds of complex metacognition that we have, where we can do long-term planning and thinking about free will and so on.

[617] But the sense of agency is really useful to accomplish its tasks, simple or complicated.

[618] That's right.

[619] In all kinds of spaces, not just in obvious three -dimensional space.

[620] I mean, we're very good.

[621] The thing is humans are very good at detecting agency of like medium -sized objects moving at medium speeds in the three -dimensional world, right?

[622] We see a bowling ball and we see a mouse and we immediately know what the difference is, right, and how we're going to...

[623] Mostly things you can eat or get eaten by.

[624] Yeah, yeah.

[625] That's our training set, right?

[626] From the time you're little, your training set is visual data on this, this like little chunk of your experience.

[627] But imagine if, from the time that we were born, we had an innate sense of our blood chemistry.

[628] If you could feel your blood chemistry the way you can see, right, if you had a high-bandwidth connection, and you could sense all the things that your organs were doing, your pancreas, your liver, all the things.

[629] If we had that, we would be very good at detecting intelligence in physiological space.

[630] We would know the level of intelligence that our various organs were deploying to deal with things that were coming, to anticipate the stimuli, you know, but we're just terrible at that.

[631] In fact, you talk about intelligence in these other spaces, and a lot of people think that's just crazy, because all we know is motion.

[632] We do have access to that information.

[633] So it's actually possible, then, that evolution could, if it wanted to, construct an organism that's able to perceive, most certainly.

[634] the flow of blood through your body, the way you see an old friend and say, yo, what's up, how's the wife and the kids?

[635] In that same way, you would feel like a connection to the liver.

[636] Yeah, yeah.

[637] I think, you know.

[638] Maybe other people's liver, no, just your own.

[639] Because you don't have access to other people's livers.

[640] Not yet, but you could imagine some really interesting connection, right?

[641] Like sexual selection, like, ooh, that girl's got a nice liver.

[642] Like the way her blood flows, the dynamics of the blood is very interesting.

[643] It's novel.

[644] I've never seen one of those.

[645] But you know, that's exactly what we're trying to half-ass when we make judgments of beauty by facial symmetry and so on.

[646] That's a half -assed assessment of exactly that.

[647] Because if your cells could not cooperate enough to keep your organism symmetrical, you know, you can make some inferences about what else is wrong, right?

[648] Like that's a very, you know, that's a very basic.

[649] Interesting.

[650] Yeah, so that in some deep sense, actually, that is what we're doing.

[651] We're trying to infer, we use the word healthy, but basically how functional is this biological system I'm looking at, so I can hook up with that one and make offspring.

[652] Yeah, yeah.

[653] Well, what kind of hardware might their genomics give me that might be useful in the future?

[654] I wonder why evolution didn't give us a higher resolution signal.

[655] Like, why the whole peacock thing with the feathers, it doesn't seem, it's a very low bandwidth signal for sexual selection.

[656] I'm going to, and I'm not an expert on this stuff, but.

[657] On peacocks?

[658] Well, no, you know, but I'll take a stab at the reason.

[659] I think that it's because it's an arms race.

[660] You see, you don't want everybody to know everything about you.

[661] So I think that, as much as that's true, there's another interesting

[662] part of this arms race, which is, if you think about this, the most adaptive, evolvable system is one that has the most level of top -down control, right?

[663] If it's really easy to say to a bunch of cells, make another finger, versus, okay, here's 10,000 gene expression changes that you need to make to change your finger, right?

[664] The system with good top -down control that has memory, and we need to get back to that, by the way, that's a question I've neglected to answer about where the memory is and so on.

[665] a system that uses all of that is really highly evolvable, and that's fantastic.

[666] But guess what?

[667] It's also highly subject to hijacking by parasites, by cheaters of various kinds, by conspecifics.

[668] Like we found that, and then that goes back to the story of the pattern memory in these planaria, there's a bacterium that lives on these planaria.

[669] That bacterium has an input into how many heads the worm is going to have, because it hijacks that control system. It's able to make a chemical that basically interfaces with the system that calculates how many heads you're supposed to have, and they can make them have two heads.

[670] And so you can imagine that you want to be understandable, for your own parts to understand each other, but you don't want to be too understandable, because you'll be too easily controllable.

[671] And so I think, my guess is, that that opposing pressure keeps this from being a super-high-bandwidth kind of thing where we can just look at somebody and know everything about them.

[672] So it's a kind of biological game of Texas Hold 'em.

[673] Yeah.

[674] You're showing some cards and you're hiding other cards.

[675] And that's part of it, and there's bluffing and all that, and then there's probably whole species that do way too much bluffing.

[676] That's probably where peacocks fall.

[677] There's a book, I don't remember if I read it or if I read summaries of it, but it's about the evolution of beauty in birds.

[678] Where's that from?

[679] Is that a book or does Richard Dawkins talk about it?

[680] But basically, some species start to, like, over-select for beauty.

[681] Not over-select.

[682] They just, for some reason, select for beauty.

[683] There is a case to be made.

[684] Actually, now I'm starting to remember.

[685] I think Darwin himself made a case that you can select based on beauty alone.

[686] So there's a point where beauty doesn't represent some underlying biological truth.

[687] You start to select for beauty itself.

[688] And I think the deep question is there is some evolutionary value to beauty.

[689] But it's an interesting kind of thought: can we deviate completely from the deep biological truth and come to appreciate the representation in itself?

[690] Let me get back to memory, because this is a really interesting idea.

[691] How do a collection of cells remember anything?

[692] How do biological systems remember anything?

[693] How is that akin to the kind of memory we think of humans as having?

[694] within our big cognitive engine.

[695] Yeah.

[696] One of the ways to start thinking about bioelectricity is to ask ourselves about neurons, and all these cool tricks that the brain uses to run these amazing problem-solving abilities on what is basically an electrical network.

[697] Where did that come from?

[698] They didn't just, you know, appear out of nowhere.

[699] It must have evolved from something.

[700] And what it evolved from was a much more ancient ability of cells to form networks to solve other kinds of problems.

[701] For example, to navigate morphospace, to control the body's shape.

[702] And so all of the components of neurons, so ion channels, neurotransmitter machinery, electrical synapses, all this stuff is way older than brains, way older than neurons, in fact, older than multicellularity.

[703] And so it was already there; even bacterial biofilms, there's some beautiful work from UCSD on brain-like dynamics in bacterial biofilms.

[704] So evolution figured out very early on that electrical networks are amazing at having memories, at integrating information across distance, at different kinds of optimization tasks, you know, image recognition and so on, long before there were brains.

[705] Can you actually just step back? We'll return to this.

[706] What is bioelectricity?

[707] What is biochemistry?

[708] What is, what are electrical networks?

[709] I think a lot of the biology community focuses on the chemicals as the signaling mechanisms that make the whole thing work.

[710] You have, I think, to a large degree uniquely, and maybe you can correct me on that, focused on bioelectricity, which is using electricity for signaling.

[711] There's also probably mechanical signaling, like knocking on the door.

[712] So what's the difference and what's an electrical network?

[713] Yeah.

[714] So I want to make sure and kind of give credit where credit is due.

[715] So as far back as 1903 and probably late 1800s already, people were thinking about the importance of electrical phenomena in life.

[716] So I'm for sure not the first person to stress the importance of electricity.

[717] There were waves of research in the 30s and 40s, and then again in the 70s, 80s, and 90s, from the pioneers of bioelectricity, who did some amazing work on all this.

[718] I think what we've done that's new, and I'll describe what the bioelectricity is, is to step away from the idea that this is just another piece of physics that you need to keep track of to understand physiology and development, and to really start looking at it and saying: no, this is a privileged computational layer that gives you access to the actual cognition of the tissue, the basal cognition.

[719] So merging that developmental biophysics with ideas from cognition and computation, I think that's what we've done.

[720] That's new.

[721] But people have been talking about bioelectricity for a really long time.

[722] And so I'll define that.

[723] What happens is that if you have a single cell, the cell has a membrane. In that membrane are proteins called ion channels, and those proteins allow charged molecules (potassium, sodium, chloride) to go in and out under certain circumstances.

[724] And when there's an imbalance of those ions, a voltage gradient appears across that membrane.

[725] And so all cells, all living cells try to hold a particular kind of voltage difference across the membrane, and they spend a lot of energy to do so.
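
To make the resting-potential idea concrete, here is a small sketch using the standard Goldman-Hodgkin-Katz equation, which estimates the membrane voltage from each ion's permeability and its inside/outside concentrations. The permeabilities and concentrations below are generic textbook-style values chosen for illustration, not numbers from the conversation:

```python
import math

def ghk_voltage_mV(pK, pNa, pCl,
                   K_out, K_in, Na_out, Na_in, Cl_out, Cl_in,
                   T=310.0):
    """Goldman-Hodgkin-Katz equation: membrane voltage (mV) set by
    relative ion permeabilities (p*) and concentrations (mM).
    Chloride is an anion, so its inside/outside terms are swapped."""
    R, F = 8.314, 96485.0  # gas constant (J/mol/K), Faraday constant (C/mol)
    num = pK * K_out + pNa * Na_out + pCl * Cl_in
    den = pK * K_in + pNa * Na_in + pCl * Cl_out
    return 1000.0 * (R * T / F) * math.log(num / den)

# Generic resting-cell values: high K+ inside, high Na+ outside,
# and a membrane mostly permeable to K+. The result lands near -65 mV.
v = ghk_voltage_mV(pK=1.0, pNa=0.05, pCl=0.45,
                   K_out=5, K_in=140, Na_out=145, Na_in=10,
                   Cl_out=110, Cl_in=10)
print(round(v, 1))  # roughly -65
```

Holding that gradient is exactly the energy expenditure mentioned here: pumps maintain the concentration imbalance that the equation turns into a voltage.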

[726] So that's a single cell.

[727] When you have multiple cells, cells sitting next to each other, they can communicate their voltage state to each other in a number of different ways, but one of them is this thing called a gap junction, which is basically like a little submarine hatch that just kind of docks, right?

[728] And the ions from one side can flow to the other side and vice versa.

[729] Isn't it incredible that this evolved?

[730] Isn't it wild?

[731] Because that didn't exist.

[732] Correct.

[733] This had to be evolved.

[734] It had to be invented.

[735] That's right.

[736] Somebody invented electricity in the ocean.

[737] All of this had to get invented.

[738] Yeah.

[739] So, I mean, it is incredible.

[740] The guy who discovered gap junctions, Werner Loewenstein, I visited him.

[741] He was really old.

[742] A human being?

[743] He discovered them.

[744] Because whatever really discovered them lived probably four billion years ago.

[745] Good point.

[746] So, give credit where credit is due, I'm just saying.

[747] He rediscovered gap junctions.

[748] But when I visited him in Woods Hole, maybe 20 years ago now, he told me that he was writing.

[749] And unfortunately, he passed away, and I think this book never got written.

[750] He was writing a book on gap junctions and consciousness.

[751] And I think it would have been an incredible book because gap junctions are magic.

[752] I'll explain why in a minute.

[753] What happens is, just imagine: the thing about both these ion channels and these gap junctions is that many of them are themselves voltage-sensitive.

[754] So that's a voltage-sensitive current conductance; that's a transistor. And as soon as you've invented one, you immediately get access, from this platonic space of mathematical truths, to all of the cool things that transistors do. So now, when you have a network of cells, not only do they talk to each other, but they can send messages to each other, and the differences of voltage can propagate. Now, to neuroscientists this is old hat, because you see this in the brain, right? There are action potentials. They have these awesome movies where you can take a transparent animal, like a zebrafish, literally look down, and see all the firings as the fish is making decisions about what to eat and things like this, right? It's amazing. Well, your whole body is doing that all the time, just much slower. There are very few things that neurons do that all the other cells in your body don't do; they all do very similar things, just on a much slower time scale. And whereas your brain is thinking about how to solve problems in three-dimensional space, the cells in an embryo are thinking about how to solve problems in anatomical space.
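
A minimal way to see voltage differences propagating through a non-neural cell sheet: model a chain of cells coupled by gap junctions, where each step moves a cell's voltage toward its neighbours'. This is a toy relaxation model for illustration; the coupling strength, cell count, and voltages are arbitrary assumptions, not Levin's model:

```python
def step(v, g=0.2):
    """One relaxation step of a 1-D chain of gap-junction-coupled cells:
    current flows through each junction in proportion to the voltage
    difference, so every cell drifts toward its neighbours."""
    new = v[:]
    for i in range(len(v)):
        left = v[i - 1] if i > 0 else v[i]            # reflecting boundary
        right = v[i + 1] if i < len(v) - 1 else v[i]  # reflecting boundary
        new[i] = v[i] + g * ((left - v[i]) + (right - v[i]))
    return new

# Ten cells resting at -70 mV; "poke" cell 0 so it depolarizes to 0 mV.
cells = [-70.0] * 10
cells[0] = 0.0
for _ in range(1000):
    cells = step(cells)

# The perturbation has spread through the whole chain: every cell sits
# at the shared average (-63 mV), and none of them can tell which
# cell was originally poked.
print([round(c, 1) for c in cells])
```

As the chain equilibrates, the signal's point of origin is erased, which is the same property that comes up later in the conversation when gap junctions are said to wipe ownership information.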

[755] They're trying to have memories like, hey, how many fingers are we supposed to have?

[756] Well, how many do we have now?

[757] What do we do to get from here to there?

[758] That's the kind of problems they're thinking about.

[759] And the reason that gap junctions are magic is, imagine, right?

[760] From the earliest time, here are two cells.

[761] This cell, how can they communicate?

[762] Well, the simple version is this cell could send a chemical signal.

[763] It floats over and it hits a receptor on this cell.

[764] Because it comes from outside, this cell can very easily tell that it came from outside. Whatever information is coming, that's not my information; that information is coming from the outside. So I can trust it, I can ignore it, I can do various things with it, whatever, but I know it comes from the outside. Now imagine instead that you have two cells with a gap junction between them. Something happens; let's say this cell gets poked, and there's a calcium spike. The calcium spike, or whatever small-molecule signal, propagates through the gap junction to this cell. There's no ownership metadata on that signal. This cell does not know that it came from outside, because it looks exactly like its own memories of whatever had happened would have looked. So gap junctions, to some extent, wipe ownership information on data. Which means that if you and I are sharing memories and we can't quite tell who the memories belong to, that's the beginning of a mind meld. That's the beginning of a scale-up of cognition, from here's me and here's you, to no, now there's just us. So they enforce a collective intelligence? That's right, that's right. It helps. It's the beginning.

[765] It's not the whole story by any means, but it's the start.

[766] Where's the state of the system stored?

[767] Is it in part in the gap junctions themselves?

[768] Is it in the cells?

[769] There are many, many layers to this, as always in biology.

[770] So there are chemical networks.

[771] So, for example, gene regulatory networks, right, or basically any kind of chemical pathway where different chemicals activate and repress each other; they can store memories.

[772] So in a dynamical system sense, they can store memories.

[773] they can get into stable states that are hard to pull them out of, right?

[774] Once they get in, that's a memory, a permanent, or semi-permanent, memory of something that's happened.
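
The kind of dynamical-system memory described here can be sketched with the classic two-gene toggle switch: two genes repress each other, the network settles into one of two stable states, and a transient input can flip which state it occupies; the "memory" persists after the input is gone. The equations and parameter values below are a generic textbook motif chosen for illustration, not a model of any specific pathway:

```python
def simulate(pulse_to_b=0.0, steps=5000, dt=0.01):
    """Euler-integrate a toy genetic toggle switch:
        da/dt = k / (1 + b**n) - a
        db/dt = k / (1 + a**n) - b + pulse
    Gene A starts dominant; pulse_to_b is a transient input applied
    only during the first 1000 steps, then removed."""
    k, n = 4.0, 2.0
    a, b = 1.0, 0.0
    for t in range(steps):
        drive = pulse_to_b if t < 1000 else 0.0
        da = k / (1 + b**n) - a
        db = k / (1 + a**n) - b + drive
        a, b = a + dt * da, b + dt * db
    return a, b

a1, b1 = simulate(pulse_to_b=0.0)    # undisturbed: A stays high
a2, b2 = simulate(pulse_to_b=10.0)   # transient push: B takes over
print(a1 > b1, b2 > a2)  # True True: the flipped state persists after the pulse
```

Long after the pulse ends, the network is still in the B-high state: a semi-permanent record of something that happened, stored purely in which attractor the system fell into.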

[775] There are cytoskeletal structures, right, that store memories in their physical configuration.

[776] There are electrical memories, like flip-flops, where there is no physical change, right?

[777] I show my students this example of a flip-flop, and the reason that it stores a zero or a one is not because some piece of the hardware moved; it's because there's a cycling of the current on one side of the thing.

[778] If I come over and hold the other side at a high voltage for a brief period of time, it flips over, and now it's here.

[779] But none of the hardware moved.

[780] The information is held in a stable dynamical state.

[781] And if you were to x -ray the thing, you couldn't tell me if it was zero or one, because all you would see is where the hardware is.

[782] You wouldn't see the energetic state of the system.

[783] So there are also bioelectrical states that are held in that exact way, like volatile RAM, basically, held in the electrical state.
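
The flip-flop point can be made runnable with a cross-coupled NOR SR latch, the standard textbook circuit (my sketch, not anything from the episode): the stored bit lives in the feedback loop's activity, so nothing in the hardware layout distinguishes a stored 0 from a stored 1.

```python
def nor(x, y):
    """NOR gate: output 1 only when both inputs are 0."""
    return int(not (x or y))

def settle(q, qbar, s, r, steps=4):
    """Propagate a cross-coupled NOR SR latch until it settles.
    Each gate reads the other's output; the bit is held by the loop."""
    for _ in range(steps):
        q, qbar = nor(r, qbar), nor(s, q)
    return q, qbar

# Power up holding a 0, pulse Set high, then release both inputs:
q, qbar = settle(0, 1, s=1, r=0)     # while S is held, Q flips to 1
q, qbar = settle(q, qbar, s=0, r=0)  # inputs released: the 1 persists
print(q, qbar)  # 1 0: the state survives with no input and no part moved
```

An "x-ray" of the gates would look identical either way; only the live signal levels differ, which is the analogy to bioelectric state as volatile RAM.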

[784] It's very akin to the different ways memory is stored in a computer.

[785] So there's RAM, there's hard drives.

[786] You can make that mapping, right?

[787] So I think the interesting thing is that, based on the biology, I think we can revise some of our computer engineering methods, because there are some interesting things that biology does that we haven't done yet.

[788] but that mapping is not bad.

[789] I mean, I think it works in many ways.

[790] Yeah, I wonder, because at the root of computer science, the way we build computers, is the idea of proof of correctness.

[791] We program things to be perfect, reliable.

[792] You know, this idea of resilience and robustness to unknown conditions is not as important.

[793] So that's what biology is really good at.

[794] So I don't know what kind of systems.

[795] I don't know how we go from a computer to a biological system in the future.

[796] Yeah.

[797] I think that, you know, the thing about biology is that it's all about making really important decisions really quickly on very limited information.

[798] I mean, that's what biology is all about.

[799] You have to act.

[800] You have to act now.

[801] The stakes are very high.

[802] And you don't know most of what you need to know to be perfect.

[803] And so there's not even an attempt to be perfect or to get it right in any sense.

[804] There are just things like active inference, minimize surprise, optimize some efficiency, and things like this that guide the whole business. I mentioned offline that somebody who's a fan of your work is Andrej Karpathy. Amongst many things, he also occasionally writes a great blog. He came up with this idea, I don't know if he coined the term, of Software 2.0, where the programming is done in the space of configuring these artificial neural networks.

[805] Is there some sense in which that would be the future of programming for us humans, where we're doing less Python-like programming? And what would that look like?

[806] But basically setting the hyperparameters of something akin to a biological system, watching it go, and continually adjusting it, creating some kind of feedback loop within the system.

[807] So it corrects itself.

[808] And then we watch it over time accomplish the goals we wanted to accomplish.

[809] Is that kind of the dream that you describe in the Nature paper?

[810] Yeah.

[811] Yeah.

[812] I mean, what you just painted is a very good description of our efforts at regenerative medicine as a kind of somatic psychiatry.

[813] So the idea is that you're not, you know, you're not trying to micromanage.

[814] I mean, think about the limitations of a lot of the medicines today.

[815] We try to interact down at the level of pathways, right?

[816] So we're trying to micromanage it.

[817] What's the problem?

[818] Well, one problem is that for almost every medicine other than antibiotics, once you stop it, the problem comes right back.

[819] You haven't fixed anything.

[820] You were addressing symptoms.

[821] You weren't actually curing anything, again, except for antibiotics.

[822] That's one problem.

[823] The other problem is you have massive amount of side effects because you were trying to interact at the lowest level.

[824] It's like saying, I'm going to try to program this computer by changing the melting point of copper.

[825] Like, maybe you can do things that way, but my God, it's hard to program at the hardware level.

[826] So what I think we're starting to understand, and by the way, this goes back to what you were saying before, about whether we could have access to our internal states.

[827] Right.

[828] So the people who practice that kind of stuff, right?

[829] Yoga and biofeedback; those are all the people who will uniformly say things like, well, the body has an intelligence.

[830] Those two sets overlap perfectly, because that's exactly right.

[831] Because once you start thinking about it that way, you realize that the better locus of control is not always at the lowest level.

[832] This is why we don't all program with a soldering iron, right?

[833] We take advantage of the high-level intelligences that are there, which means trying to figure out: okay, which of your tissues can learn what? Why is it that certain drugs stop working after you take them for a while, this habituation, right?

[834] And so can we understand habituation, sensitization, associative learning, and these kinds of things in chemical pathways?

[835] We're going to have a completely different way, I think.

[836] We're going to have a completely different way of using drugs and of medicine in general when we start focusing on the goal states and on the intelligence of our subsystems as opposed to treating everything as if the only path was micromanagement from chemistry upwards.

[837] Well, can you speak to this idea of somatic psychiatry?

[838] What are somatic cells?

[839] How do they form networks that use bioelectricity to have memory and all those kinds of things?

[840] Yeah.

[841] What are somatic cells, like basics here?

[842] Somatic cells just means the cells of your body.

[843] Soma just means body, right?

[844] So somatic cells are just the, I'm not even specifically making a distinction between somatic cells and stem cells or anything like that.

[845] I mean, basically all the cells in your body, not just neurons, but all the cells in your body.

[846] They form electrical networks.

[847] During embryogenesis and during regeneration, what those networks are doing, in part, is processing information about what our current shape is and what the goal shape is.

[848] Now, how do I know this?

[849] Because I can give you a couple of examples.

[850] One example is, when we started studying this, we said, okay, here's a planarian.

[851] A planarian is a flatworm.

[852] It has one head and one tail normally.

[853] There are several amazing things about planaria, but basically, I think planaria hold the answer to pretty much every deep question of life.

[854] For one thing, they're similar to our ancestor.

[855] So they have true symmetry.

[856] They have a true brain.

[857] They're not like earthworms.

[858] They're, you know, a much more advanced life form.

[859] They have lots of different internal organs, but they're little, about one to two centimeters in size.

[860] They have a head and a tail.

[861] And the first thing is, planaria are immortal.

[862] So they do not age.

[863] There's no such thing as an old planarian.

[864] So that right there tells you that these theories of thermodynamic limitations on lifespan are wrong.

[865] It's not that, well, over time everything degrades.

[866] No, planaria can keep it going for, well, how long have they been around, 400 million years, right?

[867] So the planaria in our lab are actually in physical continuity with planaria that were here 400 million years ago.

[868] So there are planaria that have lived that long, essentially.

[869] What does physical continuity mean?

[870] Because what they do is they split in half.

[871] The way they reproduce is they split in half.

[872] So the planaria, the back end grabs the Petri dish, the front end takes off.

[873] and they rip themselves in half.

[874] But isn't it, in some sense, like you are a physical continuation too?

[875] Yes, except that we go through a bottleneck of one cell, which is the egg; they do not.

[876] I mean, they can.

[877] There are certain planaria that can.

[878] Got it.

[879] So we go through a very ruthless compression process and they don't.

[880] Yes, like an autoencoder, you know, sort of squashed down to one cell and then back out.

[881] These guys just tear themselves in half.

[882] And so the other amazing thing about them is they regenerate.

[883] So you can cut them into pieces.

[884] The record is, I think, 276 or something like that by Thomas Hunt Morgan.

[885] And each piece regrows a perfect little worm.

[886] They know exactly, every piece knows exactly what's missing, what needs to happen.

[887] In fact, if you chop it in half, as it grows the other half, the original tissue shrinks so that when the new tiny head shows up, they're proportional.

[888] So it keeps perfect proportion.

[889] If you starve them, they shrink.

[890] If you feed them again, they expand.

[891] Their control, their anatomical control is just insane.

[892] Somebody cut them into over 200 pieces?

[893] Yeah, yeah, Thomas Hunt Morgan did.

[894] Hashtag science.

[895] Yeah, amazing.

[896] Yeah, and maybe more.

[897] I mean, they didn't have antibiotics back then.

[898] I bet he lost some due to infection.

[899] I bet it's actually more than that.

[900] I bet you could do more than that.

[901] Humans can't do that.

[902] Well, yes, I mean, again, true, except that.

[903] Maybe you can at the embryonic level.

[904] Well, that's the thing, right?

[905] So when I talk about this, I say: just remember that, as amazing as it is to grow a whole planarian from a tiny fragment, half of the human population can grow a full body from one cell, right?

[906] So you can look at development as just an example of regeneration.

[907] Yeah, to think, we'll talk about regenerative medicine, but in some sense we could be like that worm in like 500 years.

[908] I think so.

[909] I can just go regrow a hand.

[910] Yep, given time; it takes time to grow large things.

[911] For now.

[912] Yeah, I think so.

[913] You can probably, why not accelerate?

[914] Oh, biology takes this time?

[915] I'm not going to say anything is impossible, but I don't know of a way to accelerate these processes.

[916] I think it's possible.

[917] I think we are going to be regenerative, but I don't know of a way to make it fast.

[918] I can just think people from a few centuries from now would be like, well, they used to have to wait a week for the hand to grow.

[919] It's like when the microwave was invented.

[920] You can toast your... what's that called, when you put cheese on toast?

[921] But it's delicious is all I know.

[922] I'm blanking.

[923] Anyhow.

[924] All right, so planaria: why were we talking about the magical planaria that hold the mystery of life?

[925] Yeah, so the reason we're talking about planaria: not only are they immortal, okay?

[926] Not only do they regenerate every part of the body.

[927] They generally don't get cancer, right?

[928] So, which we can talk about why that's important.

[929] They're smart.

[930] They can learn things, so you can train them.

[931] And it turns out that if you train a planarian and then cut their heads off, the tail will regenerate a brand new brain that still remembers the original information.

[932] Do they have a biological network going on or no?

[933] Yes, yes.

[934] So their somatic cells are forming a network, and that's what you mean by a true brain.

[935] What's the requirement for a true brain?

[936] Like everything else, it's a continuum, but a true brain has certain characteristics as far as the density, like a localized density of neurons that guides behavior.

[937] In the head.

[938] Exactly.

[939] Connected to the head.

[940] Exactly.

[941] If you cut their head off, the tail doesn't do anything.

[942] It sits there until a new brain regenerates. They have all the same neurotransmitters that you and I have. But here's why we're talking about them in this context. So here's your planarian: you cut off the head, you cut off the tail, and you have a middle fragment. That middle fragment has to make one head and one tail. How does it know how many of each to make, and where they go? How come it doesn't switch? So we did a very simple thing, and we said, okay, let's make the hypothesis that there's a somatic electrical network that remembers the correct pattern, and that what it's doing is recalling that memory and building to that pattern.

[943] So what we did was we used a way to visualize electrical activity in these cells, right?

[944] It's a variant of what people use to look for electricity in the brain.

[945] And we saw that that fragment has a very particular electrical pattern.

[946] You can literally see it once we developed a technique.

[947] It has a very particular electrical pattern that shows you where the head and the tail goes, right?

[948] You can just see it.

[949] And then we said, okay, well, now let's test the idea that that's a memory that actually controls where the head and the tail goes.

[950] Let's change that pattern.

[951] So, basically, incept a false memory.

[952] And so what you can do is you can do that in many different ways.

[953] One way is with drugs that target ion channels. You pick these drugs and you say, okay, I'm going to do it so that instead of this one-head, one-tail electrical pattern, you have a two-headed pattern, right?

[954] You're just editing the electrical information in the network.

[955] When you do that, guess what the cells build?

[956] They build a two-headed worm.

[957] And the coolest thing about it: no genetic changes, so we haven't touched the genome.

[958] The genome is totally wild type.

[959] But the amazing thing about it is that when you take these two-headed animals and you cut them into pieces again, some of those pieces will continue to make two-headed animals.

[960] So that information, that memory, that electrical circuit, not only does it hold the information for how many heads, not only does it use that information to tell the cells what to do to regenerate, but it stores it.

[961] Once you've reset it, it keeps it.

[962] And we can go back.

[963] We can take a two-headed animal and put it back to one head.

[964] So there are a couple of interesting things here that have implications for our understanding of genomes and things like that.

[965] Imagine I take this two -headed animal.

[966] Oh, and by the way, when they reproduce, when they tear themselves in half, you still get two -headed animals.

[967] So imagine I take them and throw them in the Charles River over here.

[968] So 100 years later, some scientists come along and they scoop up some samples and they go, oh, there's a single-headed form and a two-headed form.

[969] Wow, a speciation event.

[970] Cool.

[971] Let's sequence the genome and see what happened.

[972] The genomes are identical.

[973] It's nothing wrong with the genome.

[974] So this goes back to your very first question: where do body plans come from, right?

[975] How does the planarian know how many heads it's supposed to have?

[976] Now, it's interesting, because you could say DNA; but as it turns out, the DNA produces a piece of hardware that by default says one head.

[977] The way that when you turn on a calculator, by default, it's a zero every single time, right?

[978] When you turn it on, it's just a zero.

[979] But it's a programmable calculator as it turns out.

[980] So once you've changed that, next time it won't say zero.

[981] It'll say something else.

[982] And the same thing here.

[983] So you can make one-headed worms, two-headed worms, and you can make no-headed worms.

[984] We've done some other things along these lines, some other really weird constructs.

[985] So this question, right, so again, it's really important.

[986] The hardware-software distinction is really important, because the hardware is essential: without the proper hardware, you're never going to get to the right physiology of having that memory.

[987] But once you have it, it doesn't fully determine what the information is going to be.

[988] You can have other information in there, and it's reprogrammable by us, by bacteria,

[989] by various parasites, probably, things like that.

[990] The other amazing thing about these planaria, think about this: most animals, when we get a mutation in our bodies, our children don't inherit it, right?

[991] So you can go on, you could run around for 50, 60 years getting mutations, your children don't have those mutations because we go through the egg stage.

[992] Planaria tear themselves in half, and that's how they reproduce.

[993] So for 400 million years, they keep every mutation that they've had that doesn't kill the cell that it's in.

[994] So when you look at these planaria, their bodies are what's called mixoploid, meaning that every cell might have a different number of chromosomes.

[995] They look like a tumor.

[996] If you look at the genome, it's an incredible mess, because they accumulate all this stuff.

[997] And yet, they are the best regenerators on the planet.

[998] Their anatomy is rock solid, even though their genome is all kinds of crap.

[999] So this is kind of a scandal, right?

[1000] That, you know, when we learn that, well, you know, what do genomes do?

[1001] Do what?

[1002] Genomes determine your body.

[1003] Okay, why does the animal with the worst genome have the best anatomical control, and why is it the most cancer-resistant, the most regenerative, right?

[1004] Really, we're just beginning to understand this relationship between the genomically determined hardware and the software that runs on it.

[1005] And by the way, just as of a couple of months ago, I think I now somewhat understand why this is, but it's really a major, you know, a major puzzle.

[1006] I mean, that really throws a wrench into the whole nature-versus-nurture thing, because you usually associate electricity with nurture and the hardware with nature, and there's just this weird, integrated mess that propagates across generations. Yeah, it's much more fluid, it's much more complex. You can imagine what's happening here. Just imagine the evolution of an animal like this. This goes back to this multi-scale competency, right? Imagine that you have an animal whose tissues have some degree of multi-scale competency. So, for example, like we saw with the tadpole: if you put an eye on its tail, it can still see out of that eye, right?

[1007] That, you know, there's an incredible plasticity.

[1008] So if you have an animal and it comes up for selection and the fitness is quite good, evolution doesn't know whether the fitness is good because the genome was awesome or because the genome was kind of junky, but the competency made up for it, right?

[1009] And things kind of ended up good.

[1010] So what that means is that the more competency you have, the harder it is for selection to pick the best genomes.

[1011] It hides information, right?

[1012] And so what happens is that all the hard work goes into increasing the competency, because it's harder and harder for evolution to see the genomes.

[1013] And so I think in planaria, what happened is that there's this runaway phenomenon where all the effort went into the algorithm: we know you've got a crappy genome.

[1014] We can't clean up the genome.

[1015] We can't keep track of it.

[1016] So what's going to happen is what survives are the algorithms that can create a great worm no matter what the genome is.

[1017] Everything went into the algorithm, which, of course, then reduces the pressure on keeping a clean genome.

[1018] So this idea, right, different animals have this to different degrees.

[1019] But this idea of putting energy into an algorithm that does not overtrain on priors, right?

[1020] It can't assume, I mean, I think biology is this way in general.

[1021] Evolution doesn't take the past too seriously, because it makes these basically problem-solving machines, as opposed to machines built to deal with exactly what happened last time.

[1022] Yeah, problem solving versus memory recall.

[1023] So a little memory, but a lot of problem solving.

[1024] I think so, yeah, in many cases.

[1025] Problem solving.

[1026] I mean, it's incredible that those kinds of systems are able to be constructed, especially how much they contrast with the way we build problem solving systems in the AI world.

[1027] Back to Xenobots.

[1028] I'm not sure if we ever described how xenobots are built, but you have a paper

[1029] titled "Biological Robots: Perspectives on an Emerging Interdisciplinary Field."

[1030] In the beginning, you mentioned that the word xenobots is like controversial.

[1031] Do you guys get in trouble for using xenobots or what?

[1032] Do people not like the word xenobots?

[1033] Are you trying to be provocative with the word xenobots versus biological robots?

[1034] I don't know.

[1035] Is there some drama that we should be aware of?

[1036] There's a little bit of drama.

[1037] I think the drama is basically related to people having very fixed ideas about what terms mean.

[1038] And I think in many cases, these ideas are completely out of date with where science is now.

[1039] And for sure, they're out of date with what's going to be.

[1040] I mean, these concepts are not going to survive the next couple of decades.

[1041] So if you ask a person, and that includes a lot of people in biology, they kind of want to keep a sharp distinction between biologicals and robots, right?

[1042] See, what's a robot?

[1043] Well, a robot comes out of a factory, it's made by humans, it is boring, meaning that you can predict everything it's going to do.

[1044] It's made of metal and certain other inorganic materials.

[1045] Living organisms are magical.

[1046] They arise, right, and so on.

[1047] So there's these distinctions.

[1048] These distinctions, I think, were never good, but they're going to be completely useless going forward.

[1049] And so part of, there's a couple of papers, that that's one paper, and there's another one that Josh Bongard and I wrote, where we really attack the terminology.

[1050] And we say these binary categories are based on very non-essential kind of surface limitations of technology and imagination that were true before, but they've got to go.

[1051] And so we call them xenobots.

[1052] So Xenopus laevis, it's the frog that these guys are made of.

[1053] But we think it's an example of a biobot technology, because ultimately, if we, once we understand how to communicate and manipulate the inputs to these cells, we will be able to get them to build whatever we want them to build.

[1054] And that's robotics, right?

[1055] It's the rational construction of machines that have useful purposes.

[1056] I absolutely think that this is a robotics platform, whereas some biologists don't.

[1057] But it's built in a way that all the different components are doing their own computation.

[1058] So in a way that we've been talking about.

[1059] So you're trying to do top-down control in that biological system.

[1060] And in the future, all of this will merge together because, of course, at some point, we're going to throw in synthetic biology circuits, right?

[1061] New transcriptional circuits to get them to do new things.

[1062] Of course, we'll throw some of that in.

[1063] But we specifically stayed away from all of that because in the first few papers, and there's some more coming down the pike that are, I think, going to be pretty dynamite, that we want to show what the native cells are made of, because what happens is, you know, if you engineer the heck out of them, right?

[1064] If we were to put in new, you know, new transcription factors and some new metabolic machinery and whatever, people will say, well, okay, you engineered this and you made it do whatever and fine.

[1065] I wanted to show, and the whole team wanted to show the plasticity and the intelligence in the biology, what does it do that's surprising before you even start manipulating the hardware in that way?

[1066] Yeah, don't try to over control the thing.

[1067] let it flourish, the full beauty of the biological system.

[1068] Why Xenopus laevis, how do you pronounce it?

[1069] The frog.

[1070] Xenopus laevis, yeah.

[1071] Yeah, it's very popular.

[1072] Why this frog?

[1073] It's been used since I think the 50s.

[1074] It's just very convenient because, you know, we keep the adults in this very fine frog habitat.

[1075] They lay eggs.

[1076] They lay tens of thousands of eggs at a time.

[1077] The eggs develop right in front of your eyes.

[1078] It's the most magical thing you can see, because normally, you know, if you were to deal with mice or rabbits or whatever, you don't see the early stages, right, because everything's inside the mother. Everything's in a petri dish at room temperature, so you have an egg, it's fertilized, and you can just watch it divide and divide and divide, and all the organs form. You can just see it. And at that point, the community has developed lots of different tools for understanding what's going on, and also for manipulating it, right? So people use it for, you know, for understanding birth defects and neurobiology and cancer, immunology. Also, you get the whole embryogenesis in the petri dish. That's so cool to watch. Is there videos of this? Oh yeah, there's amazing videos online. I mean, mammalian embryos are super cool too. For example, monozygotic twins are what happens when you cut a mammalian embryo in half. You don't get two half bodies, you get two perfectly normal bodies, because it's a regeneration event, right? Development is just a kind of regeneration, really. And why this particular frog? It's just because they were using it in the 50s, and it breeds well, you know, it's easy to raise in the laboratory, and it's very prolific.

[1079] And all the tools, basically for decades, people have been developing tools.

[1080] There's other people, some people use other frogs.

[1081] But I have to say, this is, this is important.

[1082] Xenobots are fundamentally not anything about frogs.

[1083] So I can't say too much about this because it's not published and peer reviewed yet, but we've made Xenobots out of other things that have nothing to do with frogs.

[1084] This is not a frog phenomenon.

[1085] The thing is, we started with frog because it's so convenient, but this plasticity is not a frog thing, you know; it's not related to the fact that they're frogs. What happens when you kiss it, does it turn into a prince? No. Or princess? Which way? Uh, prince. Yeah, prince. Yeah, that's an experiment that I don't believe we've done, and if we have, I don't want to know. I can collaborate, I can take on the lead on that effort. Okay, cool. How do the cells coordinate? Let's focus in on just the embryogenesis. So there's one cell, and it divides. Doesn't it have to be very careful about what each cell starts doing once they divide? Yes. And, like, when there's three of them, is it like the co-founders or whatever, like, slow down, you're responsible for this? When do they become specialized, and how do they coordinate that specialization? So this is the basic science of developmental biology. There's a lot known about all of that. But I'll tell you what I think is kind of the most important part, which is, yes, it's very important who does what.

[1086] However, because going back to this issue of, I made this claim that biology doesn't take the past too seriously.

[1087] And what I mean by that is it doesn't assume that everything is the way it's expected to be, right?

[1088] And here's an example of that this was done.

[1089] This was an old experiment going back to the 40s.

[1090] But basically, imagine a newt, a salamander.

[1091] And it's got these little tubules that go to the kidneys, right?

[1092] This little tube.

[1093] Take a cross -section of that tube, you see eight to ten cells that have cooperated to make this little tube in cross -section, right?

[1094] So one amazing thing you can do is you can mess with a very early cell division to make the cells gigantic.

[1095] You can make them different sizes.

[1096] You can force them to be different sizes.

[1097] So if you make the cells different sizes, the whole newt is still the same size.

[1098] So if you take a cross -section through that tubule, instead of eight to ten cells, you might have four or five, or you might have three, until you make the cell so enormous that one single cell wraps around itself and gives you that same large -scale structure by a completely different molecular mechanism.

[1099] So now instead of cell-to-cell communication to make a tubule, instead of that, it's one cell using the cytoskeleton to bend itself around.

[1100] So think about what that means.

[1101] In the service of a large scale, talk about top-down control, right?

[1102] In the service of a large-scale anatomical feature, different molecular mechanisms get called up.

[1103] So now, think about this.

[1104] You're a newt cell, and you're trying to make an embryo.

[1105] If you had a fixed idea of who was supposed to do what, you'd be screwed because now your cells are gigantic.

[1106] Nothing would work.

[1107] There's an incredible tolerance for changes in the size of the parts, in the amount of DNA in those parts, all sorts of stuff.

[1108] Life is highly interoperable.

[1109] You can put electrodes in there.

[1110] You can put weird nanomaterials.

[1111] It still works.

[1112] It's, it's, uh, this is that problem solving action, right?

[1113] It's able to do what it needs to do, even when circumstances change.

[1114] That is, you know, the hallmark of intelligence, right?

[1115] William James defined intelligence as the ability to get to the same goal by different means.

[1116] That's this.

[1117] You get to the same goal by completely different means.

[1118] And so, so, so why am I bringing this up is just to say that, yeah, it's important for the cells to do the right stuff, but they have incredible tolerances for things not being what you expect and to still get their job done.

[1119] So if you're, you know, all of these things are not hardwired.

[1120] There are organisms that might be hardwired.

[1121] For example, the nematode C. elegans. In that organism, every cell is numbered, meaning that every C. elegans has exactly the same number of cells as every other C. elegans.

[1122] They're all in the same place.

[1123] They all divide.

[1124] There's literally a map of how it works.

[1125] So in that sort of system, it's much more cookie-cutter.

[1126] But most organisms are incredibly plastic in that way.

[1127] Is there something particularly magical to you about the whole developmental biology process?

[1128] Is there something you could say?

[1129] Because you just said it, they're very good at accomplishing the goal, the job they need to do, the competency thing.

[1130] But you get a freaking organism from one cell.

[1131] It's like, I mean, it's very hard to intuit that whole process, to even think about reverse engineering that process.

[1132] Right.

[1133] Very hard.

[1134] To the point where I often, just imagine, I sometimes ask my students to do this thought experiment.

[1135] Imagine you were shrunk down to the scale of a single cell, and you were in the middle of an embryo, looking around at what's going on.

[1136] And the cells running around, some cells are dying.

[1137] Every time you look, it's kind of a different number of cells, for most organisms. And so I think that if you didn't know what embryonic development was, you would have no clue that what you're seeing is always going to make the same thing, never mind knowing what that is, never mind being able to say, even with full genomic information, what the hell they are building. We have no way to do that. But just even to guess that, wow, the outcome of all this activity is that it's always going to build the same thing. The imperative to create the final you, as you are now, is there already. So if you start from the same embryo, you would create a very similar organism? Yeah, except for cases like the xenobots: when you give them a different environment, they come up with a different way to be adaptive in that environment.

[1138] But overall, I mean, so I think, so I think to, you know, kind of summarize it, I think what evolution is really good at is creating hardware that has a very stable baseline mode, meaning that left to its own devices, it's very good at doing the same thing.

[1139] But it has a bunch of problem-solving capacity, such that if any assumptions don't hold, if your cells are a weird size, or you get the wrong number of cells, or somebody stuck an electrode halfway through the body, whatever, it will still get most of what it needs to do done.

[1140] You've talked about the magic and the power of biology here.

[1141] If we look at the human brain, how special is the brain in this context?

[1142] You're kind of minimizing the importance of the brain, or lessening it. We think of all the special computation happening in the brain, and everything else is like the help.

[1143] You're kind of saying that the whole thing is doing computation.

[1144] But nevertheless, how special is the human brain in this full context of biology?

[1145] Yeah, I mean, look, there's no getting away from the fact that the human brain allows us to do things that we could not do without it.

[1146] You can say the same thing about the liver.

[1147] Yeah, no, this is true.

[1148] And so, you know, my goal is not, no, you're right, my goal is not.

[1149] You're just being polite to the brain right now.

[1150] Well, being a politician.

[1151] Like, listen, everybody.

[1152] has a use.

[1153] Everybody has a role.

[1154] Yeah.

[1155] It's a very important role.

[1156] That's right.

[1157] We have to acknowledge the importance of the brain, you know.

[1158] There are more than enough people who are cheerleading the brain, right?

[1159] So, so I don't feel like nothing I say is going to reduce people's excitement about the human brain.

[1160] And so, so I emphasize other things.

[1161] I don't think it gets too much credit.

[1162] I think other things don't get enough credit.

[1163] I think the brain is, is the human brain is incredible and special and all that.

[1164] I think other things need more credit.

[1165] And I also think that this, and I'm sort of this way about everything.

[1166] I don't like binary categories about almost anything.

[1167] I like a continuum.

[1168] And the thing about the human brain is that by accepting that as some kind of an important category or essential thing, we end up with all kinds of weird pseudo-problems and conundrums.

[1169] So for example, when we talk about it, you know, if you do want to talk about ethics and other things like that and what, you know, this idea that surely if we look out into the universe, surely we don't believe that this human brain is the only way to be sentient, right?

[1170] Surely we don't, you know, and to have high level cognition.

[1171] I just can't even wrap my mind around this, this idea that that is the only way to do it.

[1172] No doubt there are other architectures made made of completely different principles that achieve the same thing.

[1173] And once we believe that, then that tells us something important.

[1174] It tells us that things that are not quite human brains, or chimeras of human brains and other tissues, or other kinds of brains in novel configurations, or things that are sort of brains but not really, or plants or embryos or whatever, might also have important cognitive status.

[1175] So that's the only thing.

[1176] I think we have to be really careful about treating the human brain as if it was some kind of like sharp binary category, you know, you are or you aren't.

[1177] I don't believe that exists.

[1178] So when we look out at all the beautiful variety of semi -biological architectures out there in the universe, how many intelligent alien civilizations do you think are out there?

[1179] Boy, I have, you know, no expertise in that whatsoever.

[1180] You haven't met any?

[1181] I have met the ones we've made.

[1182] I think that.

[1183] I mean, exactly.

[1184] In some sense, with synthetic biology.

[1185] Are you not creating aliens?

[1186] I absolutely think so, because look, all of life, all standard model systems, are an N of 1 course of evolution on Earth, right?

[1187] And trying to make conclusions about biology from looking at life on Earth is like testing your theory on the same data that generated it.

[1188] It's all kind of like locked in.

[1189] So we absolutely have to create novel.

[1190] examples that have no history on Earth. You know, xenobots have no history of selection to be a good xenobot.

[1191] The cells have selection for various things, but the xenobot itself never existed before.

[1192] And so we can make chimeras, you know, we make frogolotls that are sort of half frog, half axolotl.

[1193] You can make all sorts of hybrots, right, constructions of living tissue with robots and whatever.

[1194] We need to be making these things until we find actual aliens, because otherwise we're just looking at an N of 1 set of examples, all kinds of frozen accidents of evolution and so on. We need to go beyond that to really understand biology. But we're still, even when you're doing synthetic biology, you're locked in to the basic components of the way biology is done on this Earth. Yeah, right. And also the basic constraints of the environment; even artificial environments that you construct in the lab are tied up to the environment. I mean, okay, let's say there is, I mean, what I think is, there's a nearly infinite number of intelligent civilizations, living or dead, out there.

[1195] If you pick one out of the box, what do you think it would look like?

[1196] So when you think about synthetic biology or creating synthetic organisms, how hard is it to create something that's very different?

[1197] Yeah.

[1198] I think it's very hard to create.

[1199] something that's very different, right?

[1200] It's, um, uh, we are just locked in both, both, uh, experimentally and in terms of our imagination, right?

[1201] It's very hard.

[1202] And you also emphasize several times the idea of shape.

[1203] Yeah.

[1204] The individual cells get together with other cells, and they're going to build a shape.

[1205] So it's shape and function, but shape is a critical thing.

[1206] Yeah.

[1207] So here I'll take a stab.

[1208] I mean, I agree with you.

[1209] To whatever extent, though, that we can say anything.

[1210] I do think that there's probably an infinite number of different architectures with interesting cognitive properties out there.

[1211] What can we say about them?

[1212] I don't think we can rely on any of the typical stuff, you know, carbon-based.

[1213] Like, I think all of that is just us having a lack of imagination.

[1214] But I think the things that are going to be universal, if anything,

[1215] are things, for example, driven by resource limitation, the fact that you are fighting a hostile world and you have to draw a boundary between yourself and the world somewhere.

[1216] The fact that that boundary is not given to you by anybody, you have to assume it, you know, estimate it yourself.

[1217] And the fact that you have to coarse grain your experience and the fact that you're going to try to minimize surprise.

[1218] And the fact that, like, these are the things that I think are fundamental about biology.

[1219] None of the, you know, the facts about the genetic code or even the fact that we have genes or the biochemistry of it.

[1220] I don't think any of those things are fundamental, but it's going to be a lot more about the information and about the creation of the self, the fact that, so in my framework, selves are demarcated by the scale of the goals that they can pursue.

[1221] So from little tiny local goals to, like, massive, you know, planetary-scale goals for certain humans, and everything in between.

[1222] So you can draw this, like, cognitive light cone that determines the scale of the goals you could possibly pursue.

[1223] I think those kinds of frameworks like that, like active inference and so on, are going to be universally applicable, but none of the other things that are typically discussed.

[1224] Quick pause.

[1225] Bathroom break?

[1226] We were just talking about, you know, aliens and all that.

[1227] That's a funny thing, which is, I don't know if you've seen them.

[1228] There's a kind of debate that goes on about cognition in plants, and what can you say about different kinds of computation and cognition in plants?

[1229] And I always say something like this.

[1230] If you're weirded out by cognition in plants, you're

[1231] not ready for exobiology, right? If, you know, something that similar here on Earth is already freaking you out, then I think there's going to be all kinds of cognitive life out there that we're going to have a really hard time recognizing. I think robots will help us, yeah, like expand our mind about cognition. Either that or things like xenobots, and maybe they become the same thing. Really, when the human engineers the thing, at least in part, and then it's able to achieve some kind of cognition that's different than what you're used to, then you start to understand, like, oh, you know, every living organism is capable of cognition; I need to kind of broaden my understanding of what cognition is. But do you think plants, like, when you eat them, are they screaming? I don't know what screaming would be. I think you have to...

[1232] That's what I think when I eat a salad.

[1233] Yeah.

[1234] Yeah.

[1235] I think you have to scale down the expectations in terms of, right?

[1236] So probably they're not screaming in the way that we would be screaming.

[1237] However, there's plenty of data on plants being able to do anticipation and certain kinds of memory and so on.

[1238] I think, you know, what you just said about robots, I hope you're right, and I hope that's, but there's two ways that people can take that, right?

[1239] So one way is exactly what you just said to try to kind of expand their, expand their, their notions for that category.

[1240] The other way people often go is they just sort of define the term.

[1241] If it's not a natural product, it's just faking, right?

[1242] It's not really intelligence if it was made by somebody else because it's that same.

[1243] It's the same thing.

[1244] They can see how it's done.

[1245] And it's like a magic trick: once you see how it's done, it's not as fun anymore.

[1246] And I think people have a real tendency for that.

[1247] And they sort of, which I find really strange, in the sense that if somebody said to me, we have this sort of blind, like a hill-climbing search, and then we have a really smart team of engineers, which one do you think is going to produce a system that has good intelligence? I think it's really weird to say that it only comes from the blind search, right? It can't be done by people, who, by the way, can also use evolutionary techniques if they want to, but also rational design. I think it's really weird to say that real intelligence only comes from natural evolution. So I hope you're right. I hope people take it the other way. But there's a nice shortcut. So I work with legged robots a lot now, for my own personal pleasure. Not in that way, internet. So, four legs. And one of the things that changes my experience of the robots a lot is when I can't understand why it did a certain thing. And there's a lot of ways to engineer that. Me, the person that created the software that runs it, there's a lot of ways for me to build that software in such a way that I don't exactly know why it made a certain basic decision. Of course, as an engineer, you can go in and start to look at logs. You can log all kinds of data: sensory data, the decisions it made, you know, all the outputs of the neural networks, and so on. But I also try to really experience that surprise, to really experience it as another person would who totally doesn't know how it's built. And I think the magic is there, in not knowing how it works. I think biology does that for you through the layers of abstraction, because nobody really knows what's going on inside the biologicals; like, each one component is clueless about the big picture. I think there's actually really cheap systems that can illustrate that kind of thing, which is even, like, you know, fractals, right? Like, you have a very small, short formula in z, and you see it, and there's no magic. You're just going to crank through, you know, z squared plus c, whatever, you're just going to crank through it. But the result of it is this incredibly rich, beautiful image, right, that's just like, wow, all of that was in this, like, 10-character-long string. Like, amazing. So the fact is that you can know everything there is to know about the details and the process and all the parts; like, there's literally no magic of any kind there.

[1248] And yet the outcome is something that you would never have expected.

[1249] And it's just, it just, you know, is incredibly rich and complex and beautiful.

[1250] So there's a lot of that.
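The fractal point can be made concrete in a few lines. This is an illustrative sketch: it iterates the z squared plus c formula mentioned above over a small grid and renders the Mandelbrot set as ASCII art; the grid bounds, resolution, and characters are arbitrary choices.

```python
def mandelbrot_ascii(width=60, height=24, max_iter=30):
    """Iterate z = z*z + c at each grid point and mark the points whose
    orbit stays bounded; the whole 'program' behind the famously rich
    image is the single line inside the while loop."""
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # map the pixel to a point c in the complex plane
            c = complex(-2.2 + 3.0 * i / width, -1.2 + 2.4 * j / height)
            z = 0j
            n = 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c   # z squared plus c, as described above
                n += 1
            row += "#" if n == max_iter else " "
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot_ascii())
```

Even at this crude resolution, the familiar cardioid-and-bulb shape emerges from a formula roughly ten characters long.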

[1251] You write that you work on developing conceptual frameworks for understanding unconventional cognition.

[1252] So the kind of thing we've been talking about, I just like the term unconventional cognition.

[1253] And you want to figure out how to detect, study, and communicate with these things.

[1254] You've already mentioned a few examples, but what is unconventional cognition?

[1255] Is it as simply as everything outside of what we define usually as cognition, cognitive science, the stuff going on between our ears?

[1256] Or is there some deeper way to get at the fundamentals of what is cognition?

[1257] Yeah, I mean, I'm certainly not the only person who works in unconventional cognition.

[1258] So that's the term used?

[1259] Yeah, that's one that, so I've, so I've coined a number of weird terms, but that's not one of mine.

[1260] Like that, that's an existing thing.

[1261] So, for example, somebody like Andy Adamatsky, who I don't know if you've had him on; if you haven't, you should. He's a, you know, very interesting guy.

[1262] He's a computer scientist, and he does unconventional cognition in slime molds and all kinds of weird things.

[1263] He's a real weird, weird cat, really interesting.

[1264] Anyway, so that's, you know, there's a bunch of terms that I've come up with, but that's not one of mine.

[1265] So I think, like many terms, that one is really defined by the times, meaning that things that are unconventional cognition today are not going to be considered unconventional cognition at some point.

[1266] It's one of those, it's one of those things.

[1267] And so it's, you know, it's, it's this, it's this really deep question of how do you recognize, communicate with, classify cognition when you cannot rely on the typical milestones, right?

[1268] So typically, you know, again, if you stick with the, uh, the history of life on Earth, like these exact model systems, you would say, ah, here's a particular structure of the brain, and this one has fewer of those, and this one has a bigger frontal cortex, and this one, right? So these are landmarks that we're used to, and it allows us to make very kind of rapid judgments about things. But if you can't rely on that, either because you're looking at a synthetic thing or an engineered thing or an alien thing, then what do you do, right? And so that's what I'm really interested in: mind in all of its possible implementations, not just the obvious ones that we know from looking at brains here on Earth.

[1269] Whenever I think about something like unconventional cognition, I think about cellular automata.

[1270] I'm just captivated by the beauty of the thing.

[1271] The fact that from simple little objects you can create such beautiful complexity that very quickly you forget about the individual objects and you see the things that it creates as its own organisms.

[1272] That blows my mind every time.

[1273] Like, honestly, I could full-time just eat mushrooms and watch cellular automata.

[1274] Don't even have to do mushrooms.

[1275] Just cellular automata, it feels like, I mean, from the engineering perspective, I love when a very simple system captures something really powerful because then you can study that system to understand.

[1276] something fundamental about complexity, about life on Earth.
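A one-dimensional cellular automaton is about as small as such a system gets. This sketch uses Wolfram's rule numbering; the grid size, step count, wraparound edges, and rendering characters are arbitrary choices made for illustration.

```python
def run_rule(rule, width=31, steps=15):
    """Evolve a 1-D binary cellular automaton (Wolfram rule numbering)
    from a single live center cell, with wraparound edges."""
    # the rule number's bits give the next state for each 3-cell neighborhood
    table = {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    cells = [0] * width
    cells[width // 2] = 1
    rows = []
    for _ in range(steps):
        rows.append("".join("#" if x else "." for x in cells))
        cells = [table[(cells[i - 1], cells[i], cells[(i + 1) % width])]
                 for i in range(width)]
    return rows

# Rule 30: each cell follows one trivial local lookup, yet the global
# pattern that unfolds is famously complex
for row in run_rule(30):
    print(row)
```

Watching the rows scroll by, it is easy to forget the individual cells and see only the larger structures they create, which is the shift in perspective being described.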

[1277] Anyway, how do I communicate with a thing?

[1278] If a cellular automaton can do cognition, if a plant can do cognition, if a xenobot can do cognition, how do I, like, whisper in its ear and get an answer back? How do I have a conversation?

[1279] Yeah.

[1280] Well, how do I have a xenobot on a podcast?

[1281] It's a really interesting line of investigation that opens up. I mean, we've thought about this. So you need a few things. You need to understand the space in which they live. So, not just the physical modality, like, can they see, can they feel vibration? I mean, that's important, of course, because that's how you deliver your message. But not just the ideas for a communication medium, not just the physical medium, but what is saliency, right? So what's important to this system? And systems have all kinds of different levels of sophistication of what you could expect to get back.

[1282] And I think what's really important, I call this the spectrum of persuadability, which is this idea that when you're looking at a system, you can't assume where on the spectrum it is, you have to do experiments.

[1283] And so for example, if you look at a gene regulatory network, which is just a bunch of nodes that turn each other on and off at various rates, you might look at that.

[1284] say, well, there's no magic here.

[1285] I mean, clearly this thing is as deterministic as it gets.

[1286] It's a piece of hardware.

[1287] The only way we're going to be able to control it is by rewiring it, which is the way molecular biology works, right?

[1288] We can add nodes, remove nodes, or whatever.

[1289] Well, so we've done simulations, and now we're doing this in the lab, and shown that biological networks like that have associative memory.

[1290] So they can actually learn, they can learn from experience.

[1291] They have habituation.

[1292] They have sensitization.

[1293] They have the left side of that spectrum.
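Habituation, the simplest of these memories, can be sketched in a few lines. This is a toy illustration, not a model of any actual gene regulatory network: the single-node update rule, the `decay` and `recovery` constants, and the function name are all invented for the example.

```python
def habituating_node(stimuli, decay=0.7, recovery=0.05):
    """Toy node whose responsiveness drops under repeated stimulation and
    drifts back toward baseline when left alone: habituation in miniature."""
    sensitivity = 1.0
    responses = []
    for s in stimuli:
        responses.append(sensitivity * s)   # output scales with current sensitivity
        if s > 0:
            sensitivity *= decay            # repeated input dulls the node
        # slow recovery toward the baseline sensitivity of 1.0
        sensitivity = min(1.0, sensitivity + recovery)
    return responses

# the same stimulus, delivered repeatedly, draws a shrinking response
out = habituating_node([1, 1, 1, 1])
```

The point of the behaviorist framing above is that even a system this deterministic-looking can carry a simple experience-dependent memory.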

[1294] So when you're going to communicate with something, well, Charles Abramson and I have even written a paper on behaviorist approaches to synthetic organisms, meaning that if you're given something, and you have no idea what it is or what it can do, how do you figure out what its psychology is, what its level is, what it does?

[1295] And so we literally lay out a set of protocols starting with the simplest things and then moving up to more complex things where you can make no assumptions about what this thing can do, right?

[1296] You just have to start, and you'll find out.

[1297] So here's a simple, I mean, here's one way to communicate with something.

[1298] If you can train it, that's a way of communicating.

[1299] So if you can figure out what the currency of reward, of positive and negative reinforcement, is, right?

[1300] And you can get it to do something it wasn't doing before based on experiences you've given it.

[1301] You have taught it one thing.

[1302] You have communicated one thing that such and such an action is good.

[1303] Some other action is not good.

[1304] That's like a basic, primitive atom of communication.
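The "training as communication" idea above can be sketched as a toy reinforcement learner. This is illustrative only; the agent, its two actions, and the learning rate are hypothetical, not anything from the conversation. We "tell" the system that one action is good purely through reward, and we read the answer back as a shift in its behavior.

```python
# A toy sketch of "training as communication": an agent picks between two
# actions; we "communicate" that action 1 is good by rewarding only it.
# If the agent's behavior shifts toward action 1, the message got through.
import random

random.seed(1)

def train(trials=500, lr=0.1):
    value = [0.0, 0.0]                       # learned value of each action
    for _ in range(trials):
        if random.random() < 0.1:            # occasional exploration
            action = random.randrange(2)
        else:                                # otherwise act greedily
            action = max((0, 1), key=lambda a: value[a])
        reward = 1.0 if action == 1 else 0.0  # our "message": action 1 is good
        value[action] += lr * (reward - value[action])
    return value

v = train()
# v[1] ends up well above v[0]: the agent now prefers the rewarded action
```

The point of the sketch is the direction of information flow: the only channel is the reward signal, yet the system ends up doing something it wasn't doing before, which is exactly the "atom of communication" described above.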

[1305] What about, in some sense, if it gets you to do something you haven't done before, is it answering back?

[1306] Yeah, most certainly.

[1307] And I've seen cartoons.

[1308] I think maybe Gary Larson or somebody had a cartoon of these rats in a maze, and the one rat, you know, says to the other,

[1309] hey, look at this.

[1310] Every time I walk over here, he starts scribbling, you know, on the clipboard that he has. It's awesome.

[1311] If we step outside ourselves and really measure, like, if I actually measure how much I've changed because of my interaction with certain cellular automata, I mean, you really have to take that into consideration, like, well, these things are changing you, too.

[1312] Yes.

[1313] I know you know how it works and so on, but you're being changed by the thing.

[1314] Absolutely.

[1315] I think I read, I don't know any details, but I think I read something about how wheat and other crops have, in a sense, domesticated humans, their properties changing human behavior and societal structures.

[1316] In that sense, cats are running the world.

[1317] Because, first of all, while not giving a shit about humans, clearly, with every ounce of their being, they've somehow gotten millions and millions of humans to take them home and feed them.

[1318] And then, not only the physical space that they take over, they took over the digital space.

[1319] They dominate the internet in terms of cuteness, in terms of memeability.

[1320] And so they're like, they got themselves literally inside the memes that become viral and spread on the internet.

[1321] And they're the ones that are probably controlling humans.

[1322] That's my theory.

[1323] Ah, that's a follow-up paper after the frog kissing.

[1324] Okay.

[1325] I mean, you mentioned sentience and consciousness.

[1326] You have a paper titled Generalizing Frameworks for Sentience Beyond Natural Species.

[1327] So beyond normal cognition, if we look at sentience and consciousness, and I wonder if you draw an interesting distinction between those two, elsewhere, outside of humans and maybe outside of Earth, do you think aliens have sentience?

[1328] And if they do, how do we think about it?

[1329] So when you have this framework, what is this paper?

[1330] What is the way you propose to think about it, essentially?

[1331] Yeah, that particular paper was a very short commentary on another paper that was written about crabs.

[1332] It was a really good paper where they had a rubric of different types of behaviors that could be applied to different creatures, and they were trying to apply it to crabs and so on.

[1333] And consciousness, we can talk about it, but it's a whole separate kettle of fish.

[1334] I almost never talk about consciousness.

[1335] In this case, yes.

[1336] I almost never talk about consciousness per se.

[1337] I've said very little about it, but we can talk about it if you want.

[1338] Mostly what I talk about is cognition, because I think that that's much easier to deal with in a kind of rigorous experimental way.

[1339] I think that all of these terms,

[1340] you know, sentience and so on, have different definitions.

[1341] And fundamentally, I think that as long as people specify what they mean ahead of time, they can define them in various ways.

[1342] The only thing that I really kind of insist on is that the right way to think about all this stuff is from an engineering perspective: does it help me control, predict, and do my next experiment?

[1343] So that's not a universal perspective.

[1344] So some people have philosophical kind of underpinnings, and those are primary, and if anything runs against that, then it must automatically be wrong.

[1345] So some people will say, I don't care what else, if your theory says to me that thermostats have little tiny goals, I'm out.

[1346] So that's it.

[1347] I just, like, that's my philosophical, you know, preconception.

[1348] It's like, thermostats do not have goals, and that's it.

[1349] So that's one way of doing it.

[1350] And some people do it that way.

[1351] I do not do it that way.

[1352] And I don't think we can know much of anything from a philosophical armchair.

[1353] I think that all of these theories and ways of doing things stand or fall based on basically one criterion: does it help you run a rich research program?

[1354] That's it.

[1355] I agree with you totally.

[1356] But so forget philosophy.

What about the poetry of ambiguity?

[1358] What about at the limits of the things you can engineer using terms that can be defined in multiple ways and living within that uncertainty in order to play with words until something lands that you can engineer?

[1359] I mean, that's to me where consciousness sits currently.

[1360] Nobody really understands the hard problem of consciousness, the subjective what-it-feels-like, because it really feels like something to be this biological system.

[1361] This conglomerate of a bunch of cells and this hierarchy of competencies feels like something.

[1362] And yeah, I feel like one thing.

[1363] And is that just a side effect of a complex system?

[1364] Or is there something more that humans have?

[1365] Or is there something more that any biological system has, some kind of magic, not just a sense of agency, but a real sense, with a capital S, of agency?

[1366] Boy, yeah, yeah, that's a deep question.

[1367] Is there a room for poetry and engineering or no?

[1368] No, there definitely is.

[1369] And a lot of the poetry comes in when we realize that none of the categories we deal with are as sharp as we think they are, right?

[1370] And so, you know, the in-between areas of all these spectra are where a lot of the poetry sits.

[1371] I have many new theories about things, but I, in fact, do not have a good theory about consciousness that I plan to trot out.

[1372] And you almost don't see it as useful for your current work to think about consciousness.

[1373] I think it will come.

[1374] I have some thoughts about it, but I don't feel like they're going to move the needle yet on that.

[1375] And you want to ground it in engineering always.

[1376] So, well, I mean, if we really tackle consciousness per se, in terms of the hard problem, that isn't necessarily going to be groundable in engineering, right?

[1377] That aspect of cognition is, but actual consciousness per se, you know, the first-person perspective, I'm not sure that that's groundable in engineering.

[1378] And I think specifically what's different about it is there's a couple of things.

[1379] So let's, you know, here we go.

[1380] I'll say, I'll say a couple things about consciousness.

[1381] One thing is that what makes it different is that for every other aspect of science, when we think about having a correct or a good theory of it, we have some idea of what format that theory makes predictions in.

[1382] So whether those be numbers or whatever, we have some idea.

[1383] We may not know the answer.

[1384] We may not have the theory, but we know that when we get the theory, here's what it's going to output, and then we'll know if it's right or wrong.

[1385] For actual consciousness, not behavior, not neural correlates, but actual first-person consciousness, if we had a correct theory of consciousness, or even a good one, what format would it make predictions in, right?

[1386] Because all the things that we know about basically boil down to observable behaviors.

[1387] So the only thing I can think of when I think about that is, it'll be poetry. If I ask you, okay, you've got a great theory of consciousness, and here's this creature, maybe it's a natural one, maybe it's an engineered one, whatever,

[1388] and I want you to tell me what your theory says about this being, what it's like to be this being, the only thing I can imagine you giving me is some piece of art, a poem or something, that once I've taken it in, I now have a similar state.

[1389] That's about as good as I can come up with.

[1390] Well, it's possible that once you have a good understanding of consciousness, it would be mapped to some things that are more measurable.

[1391] So, for example, it's possible that a conscious being is one that's able to suffer.

[1392] So you start to look at pain and suffering.

[1393] You can start to connect it closer to things that you can measure, in terms of how they reflect themselves in behavior

[1394] and problem solving, and creation and attainment of goals, for example. I think suffering, life is suffering, is one of the big aspects of the human condition.

[1395] And so if consciousness is somehow a, maybe at least a catalyst for suffering, you could start to get echoes of it.

[1396] You start to see the actual effects of consciousness on behavior.

[1397] That it's not just about subjective experience.

[1398] It's like it's really deeply integrated in the problem solving decision making of a system, something like this.

[1399] But also it's possible that we realize this is not a philosophical statement.

[1400] Philosophers can write their books.

[1401] I welcome it.

[1402] You know, I take the Turing test really seriously.

[1403] I don't know why people really don't like it when a robot convinces you that it's intelligent.

[1404] I think that's a really incredible accomplishment.

[1405] And there's some deep sense in which that is intelligence.

[1406] If it looks like it's intelligent, it is intelligent.

[1407] And I think there's some deep aspect of a system that appears to be conscious.

[1408] In some deep sense, it is conscious.

[1409] for me, we have to consider that possibility.

[1410] And a system that appears to be conscious is an engineering challenge.

[1411] Yeah, I don't disagree with any of that.

[1412] I mean, especially intelligence, I think, is a publicly observable thing.

[1413] And I mean, you know, science fiction has dealt with this for a century or much more, maybe, this idea that when you are confronted with something that just doesn't meet any of your typical assumptions, you can't look in the skull and say, oh, well, there's that frontal cortex, so then I guess we're good, right?

[1414] So this thing lands on your front lawn, and, you know, the little door opens, and something trundles out, and it's sort of, you know, kind of shiny and aluminum-looking, and it hands you this poem that it wrote while it was, you know, flying over, and how happy it is to meet you.

[1415] Like, what's going to be your criteria, right, for whether you get to take it apart and see what makes it tick, or whether you have to, you know, be nice to it and whatever, right?

Like, all the criteria that we have now, and, you know, that people are using, and as you said, a lot of people are down on the Turing test and things like this, but what else have we got? You know, because measuring the cortex size isn't going to cut it, right, in the broader scheme of things. So I think it's a wide-open problem. Our solution to the problem of other minds is very simplistic, right? We give each other credit for having minds just because, you know, on an anatomical level, we're pretty similar, and so that's good enough. But how far is that going to go? So I think that's really primitive. So yeah, I think it's a major unsolved problem. It's a really challenging direction of thought for the human race, what you talked about, embodied minds. If you start to think that other things, other than humans, have minds, that's really challenging. Yeah. Because all men are created equal starts being like, all right, well, we should probably treat not just cows with respect, but plants, and not just plants, but some kind of organized conglomerates of cells in a petri dish. In fact, with some of the work we're doing, like you're doing, and the whole community of science is doing with biology, people might be like, we were really mean to viruses.

[1417] Yeah.

[1418] I mean, yeah, the thing is, you're right.

[1419] And I get, I certainly get phone calls about people complaining about frog skin and so on.

[1420] But I think we have to separate the sort of deep philosophical aspects from what actually happens.

[1421] So what actually happens on Earth is that people with exactly the same anatomical structure kill each other, you know, on a daily basis, right?

[1422] So I think it's clear that simply knowing that something else is equal,

[1423] or maybe more cognitive or conscious than you are, is not a guarantee of kind behavior, that much we know.

[1424] So then we look at commercial farming of mammals and various other things.

[1425] And so I think, on a practical basis, long before we get to worrying about things like frog skin, we have to ask ourselves, what can we do about the way that we've been behaving towards creatures which we know for a fact, because of our similarities,

[1426] are basically just like us.

[1427] You know, that's kind of a whole other social thing.

[1428] But fundamentally, you know, of course, you're absolutely right in that we are also, think about this, we are on this planet in some way incredibly lucky.

[1429] It's just dumb luck that we really only have one dominant species.

[1430] It didn't have to work out that way.

[1431] So you could easily imagine that there could be a planet somewhere with more than one equally, or maybe nearly equally, intelligent species. But they may not look anything like each other, right? So there may be multiple ecosystems where there are things of similar, human-like intelligence, and then you'd have all kinds of issues about, you know, how do you relate to them when they're physically not like you at all? But yet, you know, in terms of behavior and culture and whatever, it's pretty obvious that they've got as much on the ball as you have. Or imagine that there was another group of beings that was, like, on average, you know, 40 IQ points lower, right?

[1432] Like, we're just, we're pretty lucky in many ways.

[1433] We don't really have that, even though, you know, we still act badly in many ways.

[1434] But the fact is, you know, all humans are more or less in that same range, but it didn't have to work out that way.

[1435] Well, but I think that's part of the way life works on Earth, maybe human civilization works, is it seems like we want ourselves to be quite similar.

[1436] And then within that, you know, where everybody's about the same, relatively, in IQ, intelligence, problem-solving capabilities, even physical characteristics, we'll find some aspect of that that's different.

[1437] And that seems to be like, I mean, it's really dark to say, but that seems to be the, not even a bug, but like a feature of the early development.

of human civilization. You pick the other, your tribe versus the other tribe, and you war. It's a kind of evolution in the space of memes, the space of ideas, I think, and you war with each other. So we're very good at finding the other, even when the characteristics are really the same. And, I mean, I'm sure so many of these things echo in the biological world in some way. Yeah.

[1439] There's a fun experiment that I did.

[1440] My son actually came up with this.

[1441] We did a biology unit together.

[1442] We homeschool him.

[1443] And so we did this a couple of years ago.

[1444] We did this thing where, imagine you've got this slime mold, right, Physarum polycephalum.

[1445] And it grows on a petri dish of agar and it sort of spreads out.

[1446] And it's a single cell, you know, but it's like this giant thing.

[1447] And so you put down a piece of oat and it wants to go get the oat and it sort of grows towards the oat.

[1448] So what you do is you take a razor blade and

[1449] you just separate the piece of the whole culture that's growing towards the oat. You just kind of separate it.

[1450] And so now think about, think about the interesting decision -making calculus for that little piece.

[1451] I can go get the oat and therefore I won't have to share those nutrients with this giant mass over there.

[1452] So the nutrients per unit volume is going to be amazing.

[1453] So I should go eat the oat.

[1454] But if I first rejoin, because Physarum, once you cut it, has the ability to join back up, if I first rejoin, then that whole calculus becomes impossible, because there is no more me anymore.

[1455] There's just we, and then we will go eat this thing, right?

[1456] So you can imagine a kind of game theory where the number of agents isn't fixed, and it's not just cooperate or defect, but it's actually merge and whatever, right?

[1457] So that kind of computation, how does it do that decision-making?

[1458] Yeah.

[1459] Right, so it's really interesting.

[1460] And so empirically, what we found is that it tends to merge first.

[1461] It tends to merge first, and then the whole thing goes.

[1462] But it's really interesting, that calculus. Like, what do we even have? I mean, I'm not an expert in economic game theory and all that, but maybe there's some sort of hyperbolic discounting or something. But, you know, this idea that the actions you take not only change your payoff, but they change who or what you are. You could take an action after which you don't exist anymore, or you are radically changed, or you are merged with somebody else. As far as I know, we're still missing a formalism for even knowing how to model any of that.
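As a rough illustration of why standard game theory struggles here, consider a toy payoff calculation for the severed slime-mold fragment. All the numbers and function names below are made up for illustration: the "go it alone" payoff is well defined, but the "merge" action dissolves the very agent whose payoff we were computing.

```python
# A toy sketch of the slime-mold decision described above (parameters are
# invented). A severed fragment weighs "eat the oat alone" against "merge
# back first, then share": alone, nutrients per unit volume are high; merged,
# the payoff is diluted across the whole mass, and the fragment's separate
# payoff ceases to exist as a quantity at all.

def alone_payoff(oat=10.0, fragment_volume=1.0):
    # Fragment keeps everything: high nutrient density for the fragment.
    return oat / fragment_volume

def merged_payoff(oat=10.0, fragment_volume=1.0, mass_volume=20.0):
    # After merging there is no separate "fragment" agent anymore;
    # the only payoff left to compute is the collective's nutrient density.
    return oat / (fragment_volume + mass_volume)

alone = alone_payoff()     # 10.0 per unit volume, for the fragment
merged = merged_payoff()   # ~0.48 per unit volume, for the merged whole
# Classical game theory compares payoffs for a fixed set of agents; once
# "merge" is an available action, the agent doing the comparing may not
# persist to collect the payoff, which is the missing-formalism point above.
```

The sketch makes the observed behavior (merge first, then eat) look irrational under the naive per-volume comparison, which is exactly why a formalism with a variable number of agents would be needed.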

[1463] Do you see evolution, by the way, as a process that applies here on Earth, or is it some, where did evolution come from?

[1464] Yeah.

[1465] Yeah.

[1466] So this thing that, from the very origin of life, took us to today, what the heck is that?

[1467] I think evolution is inevitable in the sense that, and basically, I think one of the most useful things that was done in early computing, I guess it started in the 60s, was evolutionary computation, just showing how simple it is: if you have imperfect heredity and competition together, those three things, heredity, imperfection, and competition or selection, that's it.

[1468] Now you're off to the races, right?

[1469] And so it's not just on Earth, because it can be done in the computer, it can be done in chemical systems. You know, Lee Smolin says it works on cosmic scales.
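The point that imperfect heredity plus selection is all you need can be sketched in a few lines of evolutionary computation. This is a minimal toy with arbitrary parameters, not any specific historical system: bitstring genomes, fitness defined as the number of ones, and survivors copied with occasional bit-flip errors.

```python
# Minimal evolutionary computation: heredity (copying), imperfection
# (mutation), and competition (rank selection) are the only ingredients,
# yet fitness climbs generation after generation.
import random

random.seed(0)

def evolve(generations=60, pop_size=40, genome_len=32, mut_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)          # competition: rank by fitness
        survivors = pop[: pop_size // 2]         # fitter half survives
        children = []
        for parent in survivors:                 # heredity: copy survivors...
            child = [1 - g if random.random() < mut_rate else g
                     for g in parent]            # ...imperfectly (mutation)
            children.append(child)
        pop = survivors + children
    return max(sum(g) for g in pop)

best = evolve()
# With selection acting on imperfect copies, the best fitness reliably climbs
# from the ~16-ones random baseline toward the 32-ones maximum.
```

Nothing in the loop knows what a "good" genome looks like; the climb falls out of the three ingredients alone, which is why the same scheme runs in computers, chemistry, or cells.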

[1470] So I think that that kind of thing is incredibly pervasive and general.

[1471] It's a general feature of life.

[1472] It's interesting to think about, you know, the standard thought about this is that it's blind, right, meaning that the intelligence of the process is zero.

[1473] It's stumbling around.

[1474] And I think that back in the day, when the options were it's dumb like machines or it's smart like humans, then of course the scientists went in this direction, because nobody wanted creationism, and said, okay, it's got to be completely blind.

[1475] I'm not actually sure, right?

[1476] Because I think that everything is a continuum.

[1477] And I think that it doesn't have to be smart with foresight like us, but it doesn't have to be completely blind either.

[1478] I think there may be aspects of it.

[1479] And in particular, this kind of multi-scale competency might give it a little bit of lookahead, maybe, or a little bit of problem-solving sort of baked in, but that's going to be completely different in different systems.

[1480] I do think it's general.

[1481] I don't think it's just on Earth.

[1482] I think it's a very fundamental thing.

[1483] It does seem to have a kind of direction that it's taking us that somehow perhaps is defined by the environment itself.

[1484] It feels like we're headed towards something.

[1485] Like we're playing out a script, the way a single cell defines the entire organism.

Yeah, it feels like, from the origin of Earth itself, it's playing out a kind of script, like it can't really go any other way. I mean, this is very controversial, and I don't know the answer, but people have argued about this. It's called, you know, rewinding the tape of life. And some people have argued, I think Conway Morris maybe, that there's a deep attractor, for example, to the human kind of structure, and that if you were to rewind it again, you'd basically get more or less the same thing.

[1487] And then other people have argued that, no, it's incredibly sensitive to frozen accidents.

[1488] And then once certain stochastic decisions are made downstream, everything is going to be different.

[1489] I don't know.

[1490] I don't know.

[1491] You know, we're very bad at predicting attractors in the space of complex systems, generally speaking, right?

[1492] We don't know.

[1493] So maybe evolution on Earth has these deep attractors that, no matter what has happened, it would pretty much be likely to end up at.

[1494] maybe not.

[1495] I don't know.

[1496] It's a really difficult idea to imagine that if you ran Earth a million times, 500,000 times you would get Hitler.

[1497] Yeah.

[1498] We don't like to think like that.

[1499] We think like, because at least maybe in America, you like to think that individual decisions can change the world.

[1500] And if individual decisions can change the world, then surely any perturbation results in a totally different trajectory.

[1501] But maybe, in this competency hierarchy, it's a self-correcting system: there's a bunch of chaos that ultimately is leading towards something like a superintelligent artificial intelligence system that answers 42.

[1502] I mean, there might be a kind of imperative for life that it's headed to.

[1503] And we're too focused on our day -to -day life of getting coffee and snacks and having sex and getting a promotion at work not to see the big imperative of life on earth that it's headed towards something.

[1504] Yeah, maybe, maybe.

[1505] It's difficult.

[1506] I think one of the things that's important about chimeric, bioengineered technologies, all of those things, is that we have to start developing a better science of

[1507] predicting the cognitive goals of composite systems.

[1508] So we're just not very good at it, right?

[1509] We don't know if I create a composite system, and this could be Internet of Things or swarm robotics or a cellular swarm or whatever.

[1510] What is the emergent intelligence of this thing?

[1511] First of all, what level is it going to be at?

[1512] And if it has goal -directed capacity, what are the goals going to be?

[1513] Like, we are just not very good at predicting that yet.

[1514] And I think that it's an existential-level need for us to be able to, because we're building these things all the time, right?

[1515] We're building both physical structures like swarm robotics, and we're building social, financial structures and so on, with very little ability to predict what sort of autonomous goals that system is going to have, of which we are now cogs.

[1516] And so, right, learning to predict and control those things is going to be critical.

[1517] So if, in fact, you're right and there is some kind of attractor to evolution, it would be nice to know what that is, and then to make a rational decision of whether we're going to go along, or we're going to pop out of it, or try to pop out of it, because there's no guarantee.

[1518] I mean, that's that's the other kind of important thing.

[1519] A lot of people, I get a lot of complaints from people who email me and say, you know, what you're doing, it isn't natural. And I'll say, look, natural, it'd be nice if somebody was making sure that natural matched up to our values, but no one's doing that.

[1520] You know, evolution optimizes for biomass.

[1521] That's it.

[1522] Nobody's optimizing.

[1523] It's not optimizing for your happiness.

[1524] I don't think, necessarily,

[1525] it's optimizing for intelligence or fairness or any of that stuff.

[1526] I'm going to find that person that emailed you, beat them up, take their place, steal everything they own, and say, now, this is natural.

[1527] This is natural.

[1528] Yeah, exactly.

Because it comes from an old worldview where you could assume that whatever is natural, that's probably for the best. And I think we're long out of that Garden of Eden kind of view. So I think we can do better, and I think we have to, right? Natural just isn't great for a lot of life forms. What are some cool synthetic organisms that you think about, you dream about? Anything about embodied minds, what do you imagine? What do you hope to build? Yeah, on a practical level, what I really hope to do is to gain enough of an understanding of the embodied intelligence of organs and tissues such that we can achieve a radically different regenerative medicine. And I think about it in terms of, okay, what's the goal, the kind of endgame for this whole thing? To me, the endgame is something that you would call an anatomical compiler. So the idea is you would sit down in front of the computer and you would draw the body or the organ that you wanted.

[1530] Not molecular details, but like, this is what I want.

[1531] I want a six -legged, you know, frog with a propeller on top, or I want a heart that looks like this, or I want a leg that looks like this.

[1532] And what it would do, if we knew what we were doing, is convert that anatomical description into a set of stimuli that would have to be given to cells to convince them to build exactly that thing, right?

[1533] I probably won't live to see it, but I think it's achievable.

[1534] And I think if we can have that, then that is basically the solution to all of medicine except for infectious disease.

[1535] So birth defects, right?

[1536] Traumatic injury, cancer, aging, degenerative disease.

[1537] If we knew how to tell cells what to build, all of those things go away.

[1538] So those things go away, along with the positive feedback spiral of economic costs, where all of the advances are increasingly more heroic and expensive interventions on a sinking ship when you're, like, 90, and so on, right?

All of that goes away, because basically, instead of trying to fix you up as you degrade, you progressively regenerate. You know, you apply the regenerative medicine early, before things degrade. So I think that'll have massive economic impacts over what we're trying to do now, which is not at all sustainable. And that's what I hope we get. So to me, yes, the xenobots will be doing useful things, cleaning up the environment, cleaning out, you know, your joints and all that kind of stuff. But more important than that, I think we can use these synthetic systems to try to develop a science of detecting and manipulating the goals of collective intelligences of cells, specifically for regenerative medicine.

[1540] And then beyond that, if we, you know, think further beyond that, what I hope is that, kind of like what you said, all of this drives a reconsideration of how we formulate ethical norms. Because in the olden days, what you could do is, as you were confronted with something, you could tap on it, right?

[1541] And if you heard a metallic clanging sound, you'd said, ah, fine, right?

[1542] So you could conclude it was made in a factory.

[1543] I can take it apart.

[1544] I can do whatever, right?

[1545] If you did that and you got sort of a squishy kind of warm sensation, you'd say, ah, I need to be, you know, more or less nice to it and whatever.

[1546] That's not going to be feasible.

[1547] It was never really feasible, but it was good enough because we didn't know any better.

[1548] That needs to go.

[1549] And I think that by breaking down those artificial barriers, someday we can try to build a system of ethical norms that does not rely on these completely contingent facts of our earthly history, but on something much deeper that really takes agency and the capacity to suffer and all that takes that seriously.

[1550] The capacity to suffer. And the deep questions I would ask of a system are, can I eat it and can I have sex with it, which are the two fundamental tests of, again, the human condition.

[1551] So I can basically do what DALL-E does in the physical space.

[1552] So print out, like a 3D print, Pepe the Frog with a propeller hat, is the dream.

[1553] Well, yes and no, I mean, I want to get away from the 3D printing thing because that will be available for some things much earlier.

[1554] I mean, we can already do bladders and ears and things like that because it's micro level control, right?

[1555] When you 3D print, you are in charge of where every cell goes.

[1556] And for some things that, you know, for like this thing, they had that, I think 20 years ago or maybe earlier than that, you could do that.

[1557] So yeah, I would like to emphasize the DALL-E part, where you provide a few words and it generates a painting.

[1558] So here you say, I want a frog with these features and then it would go direct a complex biological system to construct something like that.

[1559] Yeah.

[1560] The main magic would be, I mean, I think from looking at DALL-E and so on, it looks like the first part is kind of solved now, where you go from the words to the image. That seems more or less solved.

[1561] The next step is really hard.

[1562] This is what limits things like CRISPR and genomic editing and so on.

[1563] That's what limits all the impacts for regenerative medicine.

Because going back to, okay, this is the knee joint that I want, or this is the eye that I want, now what genes do I edit to make that happen, right?

[1565] Going back in that direction is really hard.

[1566] So instead of that, it's going to be, okay, I understand how to motivate cells to build particular structures.

[1567] Can I rewrite the memory of what they think they're supposed to be building such that then I can, you know, take my hands off the wheel and let them do their thing?

[1568] So some of that is experiment, but some of that maybe AI can help too.

[1569] Just like with protein folding, this is exactly the problem that...

[1570] ...protein folding, in the most simple medium, tried and has solved with AlphaFold, which is: how does the sequence of letters result in this three-dimensional shape? And I guess it didn't solve it, because if you say, I want this shape, how do I then have a sequence of letters? Yeah, the reverse engineering step is really tricky. It is. I think where we're going, and we're doing some of this now, is to use AI to try and build actionable models of the intelligence of the cellular collectives.

[1571] So trying to help us gain models, and we've had some success in this.

[1572] So we did something like this for repairing birth defects of the brain in frogs.

[1573] We've done some of this for normalizing melanoma, where you can really start to use AI to make models of how would I impact this thing if I wanted to, given all the complexities, right, and given all the controls that it knows how to do.

[1574] So when you say regenerative medicine, so we talked about creating biological organisms, but if you regrow a hand, that information is already there, right?

[1575] The biological system has that information.

[1576] So how does regenerative medicine work today?

[1577] How do you hope it works?

[1578] What's the hope there?

[1579] Yeah.

[1580] How do you make it happen?

[1581] Well, today there's a set of popular approaches.

[1582] So one is 3D printing.

[1583] So the idea is I'm going to make a scaffold of the thing that I want.

[1584] I'm going to seed it with cells and then there it is.

[1585] And then that works for certain things.

[1586] You can make a bladder that way or an ear, something like that.

[1587] The other idea is some sort of stem cell transplant.

[1588] The idea is if we put in stem cells with appropriate factors, we can get them to generate certain kinds of neurons for certain diseases and so on.

[1589] All of those things are good for relatively simple structures, but when you want an eye or a hand or something else, I think, and this is maybe an unpopular opinion, I think the only hope we have in any reasonable kind of time frame is to understand how the thing was motivated to get made in the first place.

[1590] So what is it that made those cells in the beginning create a particular arm with a particular set of sizes and shapes and number of fingers and all that?

[1591] And why is it that a salamander can keep losing theirs and keep regrowing theirs?

[1592] And a planarian can do the same, even more so.

[1593] To me, the kind of ultimate regenerative medicine is when you can tell the cells to build whatever it is you need them to build, right?

[1594] And so that we can all be like planaria, basically.

[1595] Do you have to start at the very beginning, or can you do a shortcut?

[1596] Because if you're growing a hand, you already got the whole organism.

[1597] Yeah.

[1598] So here's what we've done, right?

[1599] So we've more or less solved that in frogs.

[1600] So frogs, unlike salamanders, do not regenerate their legs as adults.

[1601] And so we've shown that with a very kind of simple intervention.

[1602] So what we do is there's two things.

[1603] You need to have a signal that tells the cells what to do.

[1604] And then you need some way of delivering it.

[1605] And so this is work done together with David Kaplan,

[1606] and I should do a disclosure here.

[1607] We have a company called Morphoceuticals, a spin-off where we're trying to address, you know, limb regeneration.

[1608] So we've solved it in the frog, and we're now in trials in mice.

[1609] So now we're going to, we're in mammals now.

[1610] I can't say anything about how it's going, but the frog thing is solved.

[1611] So what you do is, after...

[1612] You have a little frog Luke Skywalker with an ever-growing hand.

[1613] Yeah, basically.

[1614] Basically, yeah.

[1615] Yeah.

[1616] So we did it with legs instead of forelimbs, and what you do is, after amputation, normally they don't regenerate.

[1617] you put on a wearable bioreactor, so it's this thing that goes on, and Dave Kaplan's lab makes these things.

[1618] And inside, it's a very controlled environment.

[1619] It is a silk gel that carries some drugs, for example, ion channel drugs.

[1620] And what you're doing is you're saying to these cells, you should regrow what normally goes here.

[1621] So that whole thing is on for 24 hours.

[1622] Then you take it off, and you don't touch the leg again.

[1623] This is really important, because what we're not looking for is micromanagement, you know, printing or controlling the cells.

[1624] We want to trigger.

[1625] We want to interact with it early on and then not touch it again, because we don't know how to make a frog leg, but the frog knows how to make a frog leg.

[1626] So 24 hours, 18 months of leg growth after that without us touching it again.

[1627] And after 18 months, you get a pretty good leg.

[1628] That kind of shows this proof of concept: that early on, right after injury, when the cells are first making a decision about what they're going to do, you can impact them.

[1629] And once they've decided to make a leg, they don't need you after that, they can, you know, do their own thing.

[1630] So that's an approach that we're now taking.

[1631] What about cancer suppression?

[1632] That's something you mentioned earlier.

[1633] How can all of these ideas help with cancer suppression?

[1634] So let's go back to the beginning and ask what what cancer is.

[1635] So I think, you know, asking why there's cancer is the wrong question.

[1636] I think the right question is why is there ever anything but cancer?

[1637] So, so in the normal state, you have a bunch of cells that are all cooperating towards a large scale goal.

[1638] If that process of cooperation breaks down, and you've got a cell that is isolated from that electrical network that lets you remember what the big goal is, you revert back to your unicellular lifestyle. Now think about that border between self and world, right?

[1639] Normally, when all these cells are connected by gap junctions into an electrical network, they are all one self, right?

[1640] Meaning that their goals, they have these large tissue level goals and so on.

[1641] As soon as a cell is disconnected from that, the self is tiny, right?

[1642] And so at that point, a lot of people model cancer cells as being more selfish and all that.

[1643] They're not more selfish.

[1644] They're equally selfish.

[1645] It's just that their self is smaller.

[1646] Normally the self is huge.

[1647] Now they've got tiny little selves.

[1648] Now, what are the goals of tiny little selves?

[1649] Well, proliferate and migrate to wherever life is good.

[1650] And that's metastasis.

[1651] That's proliferation and metastasis.

[1652] So one thing we found, and people have noticed years ago, is that when cells convert to cancer, the first thing you see is they close the gap junctions.

[1653] And it's a lot like, I think, that experiment with the slime mold, where until you close that gap junction, you can't even entertain the idea of leaving the collective, because there is no you at that point, right?

[1654] You're mind-melded with this whole other network.

[1655] But as soon as the gap junction is closed, there's now a boundary between you and the rest of the body; the rest of the body is just outside environment to you.

[1656] You're just a unicellular organism, and the rest of the body is your environment.

[1657] So we studied this process and we worked out a way to artificially control the bioelectric state of these cells to physically force them to remain in that network.

[1658] And so then what that means is that nasty mutations like KRAS and things like that, these really tough oncogenic mutations that cause tumors.

[1659] If you do them, but then artificially control the bioelectrics, you greatly reduce tumorigenesis, or you normalize cells that had already begun to convert; basically, they go back to being normal cells.

[1660] And so this is another, much like with the planaria, this is another way in which the bioelectric state kind of dominates what the genetic state is.

[1661] So if you sequence the, you know, if you sequence the nucleic acid, you'll see the KRAS mutation.

[1662] You'll say, well, that's going to be a tumor.

[1663] But there isn't a tumor, because bioelectrically you've kept the cells connected, and they're just working on making nice skin and kidneys and whatever else.

[1664] So we've started moving that to human glioblastoma cells.

[1665] and we're hoping for, you know, interaction with patients in the future.

[1666] So is this one of the possible ways in which we may, quote, cure cancer?

[1667] I think so.

[1668] Yeah, I think so.

[1669] I think the actual cure, I mean, there are other technologies, you know, immunotherapy, I think it's a great technology.

[1670] Chemotherapy, I don't think, is a good technology.

[1671] I think we've got to get off of that.

[1672] So chemotherapy just kills cells.

[1673] Yeah, well, chemotherapy hopes to kill more of the tumor cells than of your cells.

[1675] That's it.

[1676] It's a fine balance.

[1677] The problem is the cells are very similar because they are your cells.

[1678] And so if you don't have a very tight way of distinguishing between them, then the toll that chemo takes on the rest of the body is just unbelievable.

[1679] And immunotherapy tries to get the immune system to do some of the work.

[1680] Exactly.

[1681] Yeah.

[1682] I think that's potentially a very good, very good approach.

[1683] If the immune system can be taught to recognize enough of the cancer cells, that's a pretty good approach.

[1684] But I think our approach is, in a way, more fundamental.

[1685] Because if you can keep the cells harnessed towards organ-level goals as opposed to individual cell goals, then nobody will be making a tumor or metastasizing and so on.

[1686] So we've been living through a pandemic.

[1687] What do you think about viruses in this full beautiful biological context we've been talking about?

[1688] Are they beautiful to you?

[1689] Are they terrifying?

[1690] Also, maybe, let's say, are they, since we've been discriminating this whole conversation, are they living?

[1691] Are they embodied minds?

[1692] Embodied minds that are assholes?

[1693] As far as I know, and I haven't been able to find this paper again, but somewhere in the last couple of months I saw a paper showing an example of a virus that actually had physiology, so there was something going on, I think proton flux or something,

[1694] on the virus itself.

[1695] But barring that, generally speaking, viruses are very passive.

[1696] They don't do anything by themselves.

[1697] And so I don't see any particular reason to attribute much of a mind to them.

[1698] I think, you know, they represent a way to hijack other minds, for sure, like cells and other things.

[1699] But that's an interesting interplay, though.

[1700] If they're hijacking other minds, you know, the way we were talking about living organisms, they can interact with each other and alter each other's trajectory by having interacted. I mean, that's a deep, meaningful connection between a virus and a cell, and I think both are transformed by the experience. And so in that sense, both are living. Yeah, yeah. You know, the whole category, this question of what's living and what's not living, I really, I'm not sure, and I know there's people that work on this and I don't want to piss anybody off, but I have not found it particularly useful to try and make that a binary kind of distinction.

[1701] I think level of cognition is very interesting as a continuum, but living and non-living, you know, I really don't know what to do with that.

[1702] I don't know what you do next after making that distinction.

[1703] That's why I make the very binary distinctions: can I have sex with it or not, can I eat it or not, because those are actionable, right? Yeah, well, I think that's a critical point that you brought up, because how you relate to something is really what this is all about, right? As an engineer, how do I control it? But maybe I shouldn't be controlling it. Maybe I should be, you know, can I have a relationship with it? Should I be listening to its advice? Like, all the way from, you know, I need to take it apart, all the way to, I better do what it says because it seems to be pretty smart, and everything in between. Right, that's really what we're asking about.

[1704] Yeah, we need to understand our relationship to it.

[1705] We're searching for that relationship, even in the most trivial senses.

[1706] You came up with a lot of interesting terms.

[1707] We've mentioned some of them: agential material.

[1708] That's a really interesting one.

[1709] That's a really interesting one for the future of computation and artificial intelligence and computer science and all of that.

[1710] There's also, let me go through some of them, if they spark some interesting thought for you, there's teleophobia, the unwarranted fear of erring on the side of too much agency when considering a new system.

[1711] Yeah, I mean.

[1712] That's the opposite.

[1713] I mean, being afraid of maybe anthropomorphizing the thing.

[1714] This will get some people ticked off, I think.

[1715] But I think the whole notion of anthropomorphizing is a holdover from a pre-scientific age where humans were magic, everything else wasn't magic, and you were anthropomorphizing when you dared suggest that something else has some features of humans.

[1716] And I think we need to be way beyond that.

[1717] And this issue of anthropomorphizing, I think, is a cheap charge.

[1718] I don't think it holds any water at all other than when somebody makes a cognitive claim.

[1719] I think all cognitive claims are engineering claims, really.

[1720] So when somebody says this thing knows or this thing hopes or this thing wants or this thing predicts.

[1721] All you can say is fabulous.

[1722] Give me the engineering protocol that you've derived using that hypothesis and we will see if this thing helps us or not and then we can make a rational decision.

[1723] I also like anatomical compiler, a future system representing the long term end game of the science of morphogenesis that reminds us how far away from true understanding we are.

[1724] Someday you will be able to sit in front of an anatomical compiler, specify the shape of the animal or plant that you want, and it will convert that shape specification to a set of stimuli that will have to be given to cells to build exactly that shape.

[1725] No matter how weird it ends up being, you have total control.

[1726] Just imagine the possibility for memes in the physical space.

[1727] One of the glorious accomplishments of human civilizations is memes in digital space.

[1728] Now this could create memes in physical space. I am both excited and terrified by that possibility. Cognitive light cone, I think we also talked about: the outer boundary in space and time of the largest goal a given system can work towards. Is this kind of like shaping the set of options? It's a little different than options. It's really focused on, so back in, I first came up with this back in 2018, I want to say. There was a conference, a Templeton conference, where they challenged us to come up with frameworks.

[1729] I think actually it's the, here, it's the diverse intelligence community.

[1730] Summer Institute.

[1731] Yeah, they had a summer institute.

[1732] That's the logo, the bee with some circuits.

[1733] Yeah, it's got different life forms.

[1734] And, you know, so the whole program is called diverse intelligence.

[1735] And they sort of, they challenged us to come up with a framework that was suitable for analyzing different kinds of intelligence together, right?

[1736] Because the kinds of things you do to a human are not good with an octopus, not good with a plant and so on.

[1737] So I started thinking about this and I asked myself what do all cognitive agents, no matter what their provenance, no matter what their architecture is, what do cognitive agents have in common?

[1738] And it seems to me that what they have in common is some degree of competency to pursue a goal.

[1739] And so what you can do then is you can draw.

[1740] And so what I ended up drawing was this thing that's kind of like a backwards Minkowski cone diagram, where all of space is collapsed into one axis, and then time is this axis.

[1741] And then what you can do is, for any creature, you can semi-quantitatively estimate the spatial and temporal goals that it's capable of pursuing.

[1742] So, for example, if you are a tick or a bacterium, and all you're really able to pursue is maximizing the level of some chemical in your vicinity, right?

[1743] That's all you've got.

[1744] It's a tiny little light cone.

[1745] Then you're a simple system, like a tick or a bacterium.

[1746] If you are something like a dog, well, you've got some ability to care about some spatial regions, some temporal, you know, you can remember a little bit backwards.

[1747] You can predict a little bit forwards.

[1748] but you're never, ever going to care about what happens in the next town over four weeks from now.

[1749] It's just, as far as we know, it's just impossible for that kind of architecture.

[1750] If you're human, you might be working towards world peace long after you're dead, right?

[1751] So you might have a planetary scale goal that's enormous, right?

[1752] And then there may be other, greater intelligences somewhere that can care in the linear range about numbers of creatures, you know, some sort of Buddha-like character that can, like, care about everybody's welfare, like really care, the way that we can't.

[1753] And so it's not a mapping of what you can sense, how far you can sense, right?

[1754] It's not a mapping of where, how far you can act.

[1755] It's a mapping of how big are the goals you are capable of envisioning and working towards.

[1756] And I think that enables you to put synthetic kinds of constructs, aliens, swarms, whatever, on the same diagram, because, because we're not talking about what you're made of or how you got here.

[1757] We're talking about the size and complexity of the goals towards which you can work. Are there any other terms that pop into mind that are interesting? Trying to remember. There's, I have a list of them somewhere on my website. Target morphology. Yeah, yeah, definitely check it out. Morphoceutical, I like that one. Ionoceutical. Yeah, yeah. I mean, those refer to different types of interventions in the regenerative medicine space. So a morphoceutical is a kind of intervention that really targets the cells' decision-making process about what they're going to build.

[1758] And ionoceuticals are like that, but more focused specifically on the bioelectrics.

[1759] I mean, there's also, of course, biochemical, biomechanical, who knows what else, you know, maybe optical kinds of signaling systems there as well.

[1760] Target morphology is interesting.

[1761] It's really designed to capture this idea that it's not just feed-forward emergence. And oftentimes in biology, I mean, of course, that happens too.

[1762] But in many cases in biology, the system is specifically working towards a target in anatomical morphospace, right?

[1763] It's a navigation task, really.

[1764] These kinds of problem-solving can be, you know, formalized as navigation tasks, and they're really going towards a particular region.

[1765] How do you know?

[1766] Because you deviate them, and then they go back.

[1767] Let me ask you, because you've really challenged a lot of ideas in the work you do, probably because some of your rebelliousness comes from the fact that you came from a different field, of computer engineering.

[1768] Can you give advice to young people today, in high school or college, that are trying to pave their life story, whether it's in science or elsewhere, on how they can have a career they can be proud of, or a life they can be proud of?

[1769] Boy, it's dangerous to give advice because things change so fast, but one central thing I can say.

[1770] Moving up and through academia and whatnot, you will be surrounded by really smart people.

[1771] And what you need to do is be very careful at distinguishing specific critique versus kind of meta -advice.

[1772] And what I mean by that is if somebody really smart and successful and obviously competent is giving you specific critiques on what you've done.

[1773] That's gold.

[1774] That's an opportunity to hone your craft to get better at what you're doing, to learn, to find your mistakes.

[1775] Like, that's great.

[1776] If they are telling you what you ought to be studying, how you ought to approach things, what is the right way to think about things, you should probably ignore most of that.

[1777] And the reason I make that distinction is that a lot of really, really successful people are very well calibrated on their own ideas, in their own field and their own, you know, sort of area, and they know exactly what works and what doesn't, and what's good and what's bad. But they're not calibrated on your ideas. And so the things they will say, oh, you know, this is a dumb idea, don't do this, and you shouldn't do that, that stuff is generally worse than useless. It can be very, very demoralizing and really limiting.

[1778] And so what I say to people is read very broadly, work really hard, know what you're talking about, take all specific criticism as an opportunity to improve what you're doing and then completely ignore everything else.

[1779] Because I just tell you from my own experience, most of what I consider to be interesting and useful things that we've done, very smart people have said, this is a terrible idea.

[1780] You don't, don't do that.

[1781] Don't, you know, just, yeah, I think, I think we just don't know.

[1782] We have no idea beyond our own, like, at best, we know what we ought to be doing.

[1783] We very rarely know what anybody else should be doing.

[1784] Yeah, and their ideas, their perspective, has been calibrated not just on their field in a specific situation, but also on the state of that field at a particular time in the past.

[1785] Yeah.

[1786] So there's not many people in this world that are able to achieve revolutionary success multiple times in their life.

[1789] So whenever you say somebody very smart, usually what that means is somebody smart who achieved success at a certain point in their life, and people often get stuck in that place where they found success.

[1790] To be constantly challenging your world view is a very difficult thing.

[1791] Yeah, yeah, yeah.

[1792] So yeah, and also at the same time, probably, if a lot of people tell you... that's the weird thing about life.

[1793] If a lot of people tell you that something is stupid or is not going to work, that either means it's stupid and it's not going to work, or it's actually a great opportunity to do something new, and you don't know which one it is. And it's probably equally likely to be either. Well, I don't know the probabilities. It depends how lucky you are, depends how brilliant you are. But you don't know, and so you can't take that advice as actual data. Yeah, you have to, and this is kind of hard and fuzzy.

[1794] It's like hard to describe and fuzzy, but I'm a firm believer that you have to build up your own intuition.

[1795] So over time, right, you have to take your own risks that seem like they make sense to you, and then learn from that and build up, so that you can trust your own gut about what's a good idea, even when...

[1796] And then sometimes you'll make mistakes and it'll turn out to be a dead end.

[1797] And that's fine.

[1798] That's science.

[1799] But, you know, what I tell my students is: life is hard and science is hard, and you're going to sweat and bleed and everything, and you should be doing that for ideas that really fire you up inside. And really don't let the kind of common denominator of standardized approaches to things slow you down. So you mentioned planaria being in some sense immortal. What's the role of death in life? What's the role of death in this whole process? When you look at biological systems, is death an important feature, especially as you climb up the hierarchy of competency? Boy, that's an interesting question. I think that it's certainly a factor that promotes change and turnover, and an opportunity to do something different the next time for a larger-scale system.

[1800] So apoptosis, you know, it's really interesting.

[1801] I mean, death is really interesting in a number of ways.

[1802] One is, like, you could think about, like, what was the first thing to die?

[1803] You know, that's an interesting question.

[1804] What was the first creature that you could say actually died?

[1805] It's a tough, it's a tough thing because we don't have a great definition for it.

[1806] So if you bring a cabbage home and you put it in your fridge, at what point are you going to say it's died, right?

[1807] And so that's kind of hard to know.

[1808] There's also one paper in which I talk about this idea that, I mean, think about this: imagine that you have a creature that's aquatic, let's say it's a frog or something, or a tadpole, and the animal dies in the pond, for whatever reason. Most of the cells are still alive.

[1809] So you could imagine that when it died, there was some sort of breakdown of the connectivity between the cells, and a bunch of cells crawled off.

[1810] They could have a life as amoebas.

[1811] Some of them could join together and become a xenobot and toodle around, right?

[1812] So we know from planaria that there are cells that don't obey the Hayflick limit and just sort of live forever.

[1813] So you could imagine an organism that when the organism dies, it doesn't disappear.

[1814] Rather, the individual cells that are still alive crawl off and have a completely different kind of lifestyle and maybe come back together as something else or maybe they don't.

[1815] So all of this, I'm sure is happening somewhere on some, on some planet.

[1816] So death, in any case, I mean, we already kind of knew this, because we know that when something dies, the molecules go through the ecosystem.

[1817] But even the cells don't necessarily die at that point.

[1818] They might have another life in a different way.

[1819] And you can think about something like HeLa, right, the HeLa cell line, you know, that has had this incredible life.

[1820] There are way more HeLa cells now than there were when she was alive.

[1821] It seems like as the organisms become more and more complex, like if you look at the mammals, their relationship with death becomes more and more complex.

[1822] So the survival imperative starts becoming interesting.

[1823] And humans are arguably the first species that have invented the fear of death, the understanding that you're going to die.

[1824] Let's put it this way.

[1825] So not like instinctual, like, I need to run away from the thing that's going to eat me, but starting to contemplate the finiteness of life.

[1827] Yeah.

[1828] I mean, one thing, so one thing about the human cognitive light cone is that for the first, as far as we know, for the first time, you might have goals that are longer than your life span, that are not achievable, right?

[1829] So if you're, let's say, and I don't know if this is true, but if you're a goldfish and you have a 10 -minute attention span, I'm not sure if that's true, but let's say there's some organism with a short, you know, kind of cognitive light cone that way, all of your goals are potentially achievable because you're probably going to live the next 10 minutes.

[1830] So whatever goals you have, they are totally achievable.

[1831] If you're a human, you could have all kinds of goals that are guaranteed not achievable because they just take too long.

[1832] Like guaranteed you're not going to achieve them.

[1833] So I wonder, you know, is that a perennial, you know, sort of thorn in our psychology that drives some psychoses or whatever?

[1834] I have no idea.

[1835] Another interesting thing about that, actually, and I've been thinking about this a lot in the last couple of weeks, is this notion of giving up.

[1836] So you would think that evolutionarily, the most adaptive way of being is that you go, you fight as long as you physically can.

[1837] And then when you can't, you can't.

[1838] And there's this photograph, there's videos you can find, of insects crawling around where, like, most of the insect is already gone and it's still sort of crawling, you know, Terminator style, right?

[1839] Like, as long as you physically can, you keep going.

[1840] Mammals don't do that.

[1841] So a lot of mammals, including rats, have this thing where when they think it's a hopeless situation, they literally give up and die when physically they could have kept going.

[1842] I mean, humans certainly do this.

[1843] And there's some really unpleasant experiments that this guy, I forget his name, did with drowning rats where rats normally drown after a couple of minutes.

[1844] But if you teach them that if you just tread water for a couple of minutes, you'll get rescued, they can tread water for like an hour.

[1845] And so, right?

[1846] And so they literally just give up and die.

[1847] And so evolutionarily, that doesn't seem like a good strategy at all.

[1848] Evolutionarily, it seems, why would you, like, what's the benefit ever of giving up?

[1849] You just do what you can and, you know, one time out of a thousand, you'll actually get rescued, right?

[1850] But this issue of actually giving up suggests some very interesting metacognitive controls where you've now gotten to the point where survival actually isn't the top drive and that for whatever, you know, there are other considerations that have like taken over.

[1851] And I think that's uniquely a mammalian thing, but I don't know.

[1852] Yeah, the Camus, the existentialist question of why live; just the fact that humans commit suicide is a really fascinating question from an evolutionary perspective.

[1853] And what was the first, and that's the other thing, like what is the simplest system, whether evolved or natural or whatever, that is able to do that, right?

[1854] Like you can think, you know, what other animals are actually able to do that?

[1855] I'm not sure.

[1856] Maybe you could see animals over time, for some reason, lowering the value of survive-at-all-costs gradually, until other objectives might become more important. Maybe. I don't know, evolutionarily, how that gets off the ground. That just seems like it would have such a strong pressure against it. You know, just imagine a population: if you were a mutant in a population that had less of a survival imperative, would your genes outperform the others?

[1857] It seems not.

[1858] Is there such a thing as population selection because maybe suicide is a way for organisms to decide themselves that they're not fit for the environment somehow?

[1859] Yeah, that's a really... population-level selection is a kind of deep, controversial area. But it's tough, because on the face of it, if that was your genome, it wouldn't get propagated, because you would die, and then your neighbor who didn't have that would have all the kids.

[1860] It feels like there could be some deep truth there that we're not understanding.

[1861] What about you yourself as one biological system?

[1862] Are you afraid of death?

[1863] To be honest, I'm more concerned with, especially now getting older and having helped a couple of people pass.

[1864] I think about what's a good way to go, basically.

[1865] Like nowadays, I don't know what that is.

[1866] You know, sitting in a, you know, a facility that sort of tries to stretch you out as long as it can.

[1867] That doesn't seem good.

[1868] And there's not a lot of opportunities to sort of, I don't know, sacrifice yourself for something useful, right?

[1869] There's not terribly many opportunities for that in modern society.

[1870] So I don't know.

[1871] That's more of my concern. I'm not particularly worried about death itself, but I've seen it happen.

[1872] and it's not pretty.

[1873] And I don't know what a better alternative is.

[1874] So the existential aspect of it does not worry you deeply, the fact that this ride ends?

[1875] No, it began.

[1876] I mean, the ride began, right?

[1877] So there was, I don't know how many billions of years before that I wasn't around.

[1878] So that's okay.

[1879] But doesn't the experience of life almost feel like you're immortal?

[1880] because the way you make plans, the way you think about the future.

[1881] I mean, if you look at your own personal, rich experience, yes, you can understand, okay, eventually I die.

[1882] There's people I love that have died, so surely I will die and it hurts and so on.

[1883] But still, it's so easy to get lost in feeling like this is going to go on forever.

[1884] Yeah.

[1885] It's a little bit like the people who say they don't believe in free will, right?

[1886] I mean, you can say that, but when you go to a restaurant, you still have to pick a soup and stuff.

[1887] So, right?

[1888] So, I don't know, I've actually seen that happen at lunch with a well-known philosopher.

[1889] And he didn't believe in free will.

[1890] And, you know, the waitress came around.

[1891] And he was like, well, let me see.

[1892] I was like, what are you doing?

[1893] You're going to choose a sandwich, right?

[1894] So I think it's one of those things.

[1895] I think you can know that, you know, you're not going to live forever.

[1896] But you can't, it's not practical to live that way. You know, so you buy insurance, and then you do some stuff like that.

[1897] But mostly, you know, I think you just live as if you can make plans.

[1898] We talked about all kinds of life.

[1899] We talked about all kinds of embodied minds.

[1900] What do you think is the meaning of it all?

[1901] What's the meaning of all the biological life we've been talking about here on Earth?

[1902] Why are we here?

[1903] I don't know that that's a well-posed question,

[1904] other than the existential question you posed before.

[1905] Is that question hanging out with the question of what is consciousness, and they're at a retreat somewhere?

[1906] Not sure because...

[1907] Sipping piña coladas. And because they're ambiguously defined.

[1908] Maybe.

[1909] I'm not sure that any of these things really ride on the correctness of our scientific understanding of it.

[1910] I mean, just for an example, right?

[1911] I've always found it weird that people get really worked up to find out realities about their bodies.

[1912] For example, right, you've seen Ex Machina, right?

[1913] And so there's this great scene where he's cutting his hand to find out if he's full of cogs.

[1914] Now, to me, right, if I open it up and I find a bunch of cogs, my conclusion is not, oh, crap, I must not have true cognition.

[1915] That sucks.

[1916] My conclusion is, wow, cogs can have true cognition.

[1917] Great.

[1918] So, right?

[1919] So it seems to me, I guess I'm with Descartes on this one, that whatever the truth ends up being of what consciousness is, how it can be conscious, none of that is going to alter my primary experience, which is: this is what it is.

[1920] And if a bunch of molecular networks can do it, fantastic.

[1921] If it turns out that there's something non-corporeal, you know, so great.

[1922] We can, you know, we'll study that, whatever. But the fundamental existential aspect of it is, you know, if somebody told me today that, yeah, you were created yesterday and all your memories are, you know, sort of fake, kind of like, like Boltzmann brains, right, and Hume, you know, Hume's skepticism and all that. Yeah, okay, well, but here I am now. So, the experience is primal, so that's the thing that matters, so the backstory doesn't matter.

[1923] I think so.

[1924] I think so from the first person perspective.

[1925] Now, from a third-person... like, scientifically, it's all very interesting.

[1926] From a third person perspective, I could say, wow, that's, that's amazing that this happens and how does it happen and whatever.

[1927] But from a first-person perspective, I could care less.

[1928] Like, what I've learned from any of these scientific facts is, okay, well, then I guess that's what is sufficient to give me my, you know, amazing first-person perspective.

[1929] Well, I think if you dig deeper and deeper and get surprising answers to why the hell we're here, it might give you some guidance on how to live. Maybe, maybe, I don't know. That would be nice. On the one hand, you might be right, because I don't know what else could possibly give you that guidance, right? So you would think that it would have to be that, or it would have to be science, because there isn't anything else. So maybe. On the other hand, I am really not sure how you go from any, you know, what they call, from an is to an ought, right, from any factual description of what's going on.

[1930] This goes back to the natural, right? Just because somebody says, oh man, that's completely not natural.

[1931] It's never happened on Earth before; I'm not, you know, impressed by that whatsoever.

[1932] I think, I think whatever has or hasn't happened, we are now in a position to do better if we can, right?

[1933] Well, that's also good because you said there's science and there's nothing else.

[1934] It's really tricky to know how to intellectually deal with a thing that science doesn't currently understand.

[1935] Right?

[1936] So, like, the thing is, if you believe that science solves everything, you can too easily, in your mind, think our current understanding, like, we've solved everything. Right, right, right. Like, it jumps really quickly to not science as a mechanism, as a process, but more like the science of today. Like, you could just look at human history; throughout human history, physicists and everybody would claim we've solved everything. Sure, sure. Like, there's a few small things to figure out, and we've basically solved everything. Whereas in reality, I think asking, like, what is the meaning of life, is resetting the palette of, like, we might be tiny and confused and not have anything figured out. It's almost going to be hilarious a few centuries from now, when they look back, how dumb we were. Yeah, 100% agree. So when I say science and nothing else, I certainly don't mean the science of today, because I think overall, I think we know very little.

[1937] I think most of the things that we're sure of now are going to be, as you said, are going to look hilarious down the line.

[1938] So I think we're just at the beginning of a lot of really important things.

[1939] When I say nothing but science, I also include the kind of first person, what I call science that you do.

[1940] So the interesting thing, I think, about consciousness, and studying consciousness and things like that in the first person, is that it's unlike doing science in the third person, where you as the scientist are minimally changed by it, maybe not at all.

[1941] So when I do an experiment, I'm still me. There's the experiment.

[1942] Whatever I've done, I've learned something.

[1943] So that's a small change.

[1944] But overall, that's it.

[1945] In order to really study consciousness, you are part of the experiment.

[1946] You will be altered by that experiment, right?

[1947] Whatever it is that you're doing, whether it's some sort of contemplative practice or some sort of, you know, psychoactive, you are now your own experiment, right? So I can see, I fold that in. I think that's part of it. I think that exploring our own mind and our own consciousness is very important. I think much of it is not captured by what currently is third-person science, for sure. But ultimately, I include all of that in science with a capital S, in terms of, like, a rational investigation of both first- and third-person aspects of our world. We are our own experiment, as beautifully put.

[1948] And when two systems get to interact with each other, that's a kind of experiment.

[1949] So I'm deeply honored that you would do this experiment with me today.

[1950] Oh, thanks so much.

[1951] I'm a huge fan of your work.

[1952] Likewise.

[1953] Thank you for doing everything you're doing.

[1954] I can't wait to see the kind of incredible things you build.

[1955] So thank you for talking.

[1956] Really appreciate being here.

[1957] Thank you.

[1958] Thank you for listening to this conversation with Michael Levin.

[1959] To support this podcast, please check out our sponsors in the description.

[1960] And now, let me leave you with some words from Charles Darwin in the origin of species.

[1961] From the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows.

[1962] There's grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one, and that, whilst this planet has gone cycling on according to the fixed laws of gravity, from so simple a beginning, endless forms, most beautiful and most wonderful, have been and are being evolved.

[1963] Thank you for listening.

[1964] I hope to see you next time.