Lex Fridman Podcast XX
[0] The following is a conversation with Lee Cronin, his third time on this podcast.
[1] He is a chemist from the University of Glasgow, who is one of the most fascinating, brilliant, and fun-to-talk-to scientists I have ever had the pleasure of getting to know.
[2] And now, a quick few second mention of each sponsor.
[3] Check them out in the description.
[4] It's the best way to support this podcast.
[5] We got NetSuite for business management software, BetterHelp for mental health, Shopify for e-commerce, Eight Sleep for naps, and AG1 for delicious, delicious health.
[6] Choose wisely, my friends.
[7] Also, if you want to work with our amazing team, we're always hiring, go to lexfridman.com slash hiring.
[8] You can also get in touch with me if you go to lexfridman.com slash contact. There's so many more things I could say.
[9] Let me just keep going.
[10] Now on to the full ad reads.
[11] As always, no ads in the middle.
[12] I try to make these interesting, but if you must skip them, friends, please still check out our sponsors.
[13] I enjoy their stuff.
[14] Maybe you will too.
[15] This show is brought to you by NetSuite, an all-in-one cloud business management system.
[16] I usually do these ad reads and say whatever the heck I want, but sometimes the sponsors ask politely, never required, but always politely, to mention a few things.
[17] Two things they asked me to mention.
[18] One is that NetSuite turned 25 years old this year.
[19] Congratulations.
[20] Happy birthday, NetSuite.
[21] And also, they want me to mention that 37,000 companies have upgraded to NetSuite by Oracle.
[22] 37,000 companies.
[23] I wonder how many companies are out there.
[24] Isn't that amazing?
[25] Just companies are amazing.
[26] A small, a medium, a large collection of humans get together, much as we did in the caveman days around the fire, but here around the office, and tied together with a mission to do something, to build something, but do so under the immense pressures of the capitalist system.
[27] Like you have to succeed.
[28] It's not zero-sum, but it is a kind of game where there's competitors and there's always a tension, but also a little bit of a collaboration.
[29] It's a dance and it's just a beautiful thing.
[30] A dance of humans inside the company, a dance of companies in the big capitalist system that are also interacting with the full human civilization society.
[31] So it's a dance of humans and companies selling stuff, buying stuff, creating stuff.
[32] It's just all beautiful.
[33] Anyway, if you're one of those companies, you should use good tools to manage all of the stuff.
[34] And NetSuite is one such good tool.
[35] You can download NetSuite's popular KPI checklist for free at netsuite.com slash Lex.
[36] That's netsuite.com slash Lex for your own KPI checklist.
[37] This episode is also brought to you by BetterHelp, spelled H-E-L-P, help.
[38] I think whenever I mentioned BetterHelp, I have a lot of thoughts in my head.
[39] One of them is, I believe, a BetterHelp ad read that Tim Dillon has done.
[40] I think it goes on, if I remember correctly, for a very long period of time.
[41] And Tim Dillon is hilarious, so what can you say?
[42] But also there's a meta-ironic, absurd, hilarious
[43] aspect to, of all people, Tim Dillon, with the beautiful complexity of his mind and the beautiful complexities of his upbringing and family life, the dynamics of that, that he is doing an ad read for BetterHelp.
[44] I love it.
[45] I love it.
[46] I mean, there's an absurdity and an irony to me doing the same.
[47] But all of us need a bit of mental health
[48] assistance.
[49] And BetterHelp is really good for that because it's accessible, affordable, all that kind of stuff.
[50] It's a good first step to take.
[51] And sometimes all you need is the first step.
[52] Check them out at betterhelp.com slash Lex and save on your first month.
[53] That's betterhelp.com slash Lex.
[54] This show is also brought to you by Shopify, a platform designed for anyone to sell anywhere with a great-looking online store that brings your ideas to life and tools to manage day-to-day operations once the ideas are brought to life.
[55] Ideas brought to life.
[56] That's a funny thing given this conversation with Lee Cronin.
[57] Ideas brought to life.
[58] So we talk about the origin of life and the universe defined more generally: complexity, the emergence of complexity that forms life, the origin of life on Earth, and the evolution of life as being part of the same system that integrates physics and chemistry and biology, all that kind of stuff.
[59] But ideas.
[60] Ideas as organisms, brought to life.
[61] It's interesting to think of ideas as organisms in the same way that all the other emergent complex organisms come to be.
[62] It's interesting.
[63] And Shopify is a company, which is a complex organism of its own, that allows individual creators of an idea to bring their idea to life and manifest it into the physical world.
[64] So the imagination is a creative engine that starts from some kind of ethereal thing that exists just inside our mind and projects out into the physical world and creates a thing, a store, that can then interact with thousands, millions of people.
[65] It's fascinating.
[66] It's really fascinating to think of ideas as living organisms.
[67] Anyway, back to reality: you can sign up for a $1 per month trial period at shopify.com slash Lex,
[68] all lowercase.
[69] Go to shopify.com slash Lex to take your business to the next level today.
[70] This episode is also brought to you by a source of a lot of happiness for me, Eight Sleep, and the Pod 3 mattress.
[71] It cools the two sides of the bed separately.
[72] You can also heat them up.
[73] I don't know who does that.
[74] I do know people like that exist, but I judge them harshly.
[75] No, I like a really cold bed surface with a warm blanket.
[76] For a power nap, you're talking about 15, 20 minutes, or a full night's sleep.
[77] It's just heaven.
[78] It's the thing that makes me look forward to coming back home when I'm traveling.
[79] I should also mention that they currently ship to America, Canada, the UK, Australia.
[80] I need to go to Australia.
[81] I need to go to Australia.
[82] And select countries in the European Union.
[83] I don't know why I just mentioned that.
[84] Again, I don't have to say anything that the sponsors ask me to say.
[85] But there was this list of countries I'm looking at and continents.
[86] And it just filled my mind with a kind of inspired energy to travel.
[87] You know, Paul Rosalie has been on my case to travel with him in the Amazon.
[88] And I want to go.
[89] I want to go.
[90] I want to go.
[91] I want to turn off the devices and go with him.
[92] He's such an incredible human.
[93] Such an incredible human.
[94] I'm really glad he exists.
[95] Paul is just a beautiful human being.
[96] the humor, the stories, the deep, deep gratitude and appreciation of nature, the fearlessness, but also the ability to feel fear and embrace it, and just this childlike sense of wonder.
[97] I mean, he's just such an incredible human.
[98] I'm glad he exists.
[99] He's one of the people who, when I think about him, just makes me happy to be alive on this earth together with folks like him.
[100] Anyway, check it out and get special savings.
[101] Well, we were talking about Eight Sleep.
[102] Check it out, get special savings when you go to eightsleep.com slash Lex.
[103] This episode is also brought to you by the thing I'm drinking right now, AG1.
[104] It's a drink with a bunch of vitamins and minerals.
[105] It's basically like a delicious multivitamin, but it's green and delicious.
[106] And I think it has a lot more than any kind of multivitamin.
[107] I don't know.
[108] I don't know much in this world, friends, but I do know that a kind of peaceful feeling comes over me when I drink AG1, knowing that all the crazy stuff I'm going to do mentally or physically, I'm going to be okay.
[109] When I have a nice cold bed with Eight Sleep and a delicious AG1, everything's going to be okay.
[110] So you should definitely try it, see if it's
[111] going to give you the same kind of feeling.
[112] It is, when I don't bring the travel packs, one of the things I miss when I'm traveling.
[113] To have a nice cold AG1 in the afternoon, especially after a long run.
[114] I love it.
[115] Life is beautiful, isn't it?
[116] Anyway, they'll give you a one-month supply of fish oil when you sign up at drinkag1.com slash Lex.
[117] This is the Lex Fridman Podcast.
[118] To support it, please check out our sponsors in the description.
[119] And now, dear friends, here's Lee Cronin.
[120] So your big assembly theory paper was published in Nature.
[121] Congratulations.
[122] It created, I think it's fair to say, a lot of controversy, but also a lot of interesting discussion.
[123] So maybe I can try to summarize assembly theory and you tell me if I'm wrong.
[124] Okay, go for it.
[125] So assembly theory says that if we look at any object in the universe, any object, that we can quantify how complex it is by trying to find the number of steps it took to create it.
[126] And also, we can determine if it was built by a process akin to evolution by looking at how many copies of the object there are.
[127] Yeah, that's spot on.
[128] Yeah, spot on.
[129] I was not expecting that.
[130] Okay.
[131] So let's go through definitions.
[132] So there's a central equation I'd love to talk about, but definition-wise, what is an object? Yeah, an object. So if I'm going to try to be as meticulous as possible, objects need to be finite and they need to be decomposable into subunits. All human-made artifacts are objects. Is a planet an object? Probably yes, if you scale out. So an object is finite and countable and decomposable, I suppose, mathematically.
[133] But yeah, I still wake up some days and go, think to myself, what is an object?
[134] Because it's a non-trivial question.
[135] Persists over time.
[136] I'm quoting from the paper here.
[137] An object that's finite is distinguishable.
[138] Sure, that's a weird adjective.
[139] Distinguishable.
[140] We've had so many people offering to rewrite the paper after it came out, you wouldn't believe. It's so funny. Persists over time and is breakable, such that the set of constraints to construct it from elementary building blocks is quantifiable. Such that the set of constraints to construct it from elementary building blocks is quantifiable. The history is in the objects. It's kind of cool, right? So, okay, so what defines the object is its history or memory, whichever is the sexier word.
[141] I'm happy with both, depending on the day.
[142] Okay.
[143] So the set of steps it took to create the object, so there's a sense in which every object in the universe has a history.
[144] Yeah.
[145] And that is part of the thing that is used to describe its complexity, how complicated it is.
[146] Okay.
[147] What is an assembly index?
[148] So the assembly index: if you were to take the object apart and be super lazy about it, or minimal, say, it's like you've got a really short-term memory. So what you do is you lay all the parts on the path, and you find the minimum number of steps you take on the path to add the parts together to reproduce the object. And that minimum number is the assembly index. It's a minimum bound, and it was always my intuition that the minimum bound in assembly theory was really important.
[149] And I only worked out why a few weeks ago, which is kind of funny.
[150] Because I was just like, no, this is sacrosanct.
[151] I don't know why.
[152] It will come to me one day.
[153] And then when I was pushed by a bunch of mathematicians, we came up with the correct physical explanation, which I can get to.
[154] But it's the minimum.
[155] And it's really important.
[156] It's the minimum.
[157] And the reason I knew the minimum was right is because we could measure it.
[158] So almost before this paper came out, we'd published papers explaining how you can measure the assembly index of molecules.
[159] Okay, so that's not so trivial to figure out.
[160] So when you look at an object, we can say a molecule, we can say object more generally, to figure out the minimum number of steps to take to create that object.
[161] That doesn't seem like a trivial thing to do.
[162] So with molecules, it's not trivial, but it is possible, because, since I'm a chemist, I kind of see the world through the lens of chemistry.
[163] I break the molecule apart and break bonds.
[164] And if you take a molecule and you break it all apart, you have a bunch of atoms and then you can say, okay, I'm going to then take the atoms and form bonds and go up the chain of events to make the molecule.
[165] And that's what made me realize, take a toy example, literally a toy example, take a Lego object, which is broken up of Lego blocks.
[166] So you could do exactly the same thing.
[167] In this case, the Lego blocks are naturally the smallest unit; they're the atoms in the actual composite Lego architecture.
[168] But then if you maybe take a couple of blocks and put them together in a certain way, maybe they're offset in some way.
[169] That offset is now in the memory.
[170] You can use that offset again with only a penalty of one.
[171] And you can then make a square triangle and keep going.
[172] And you remember those motifs on the chain.
[173] So you can then leap from the start with all the Lego blocks or atoms, just laid out in front of you and say, right, I'll take you, you, you, connect, and do the least amount of work.
[174] So it's really like the smallest steps you can take on the graph to make the object.
[175] And so for molecules, it came relatively intuitively.
[176] And then we started to apply it to language.
[177] We've even started to apply it to mathematical theorems.
[178] I'm so well out of my depth.
[179] But it looks like you can take minimum set of axioms and then start to build up kind of mathematical architectures in the same way.
[180] And the shortest path to get there is something interesting that I don't yet understand.
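To make that concrete with a toy you can run, here is a minimal Python sketch of an assembly index for strings, standing in for molecules: start from single characters, join any two available fragments per step, and reuse anything already built for free. The join-and-reuse rule and the substring pruning are illustrative assumptions, not the paper's algorithm.

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of joining steps to build `target` when any
    fragment built earlier can be reused at no extra cost.
    Brute-force breadth-first search over pools of fragments;
    exponential, so only suitable for short strings."""
    pool0 = frozenset(target)            # elementary building blocks
    if target in pool0:                  # single-character target
        return 0
    frontier, seen, steps = [pool0], {pool0}, 0
    while frontier:
        steps += 1
        nxt = []
        for pool in frontier:
            for a, b in product(pool, repeat=2):
                joined = a + b
                if joined not in target: # prune: must stay a substring
                    continue
                if joined == target:     # reproduced the object
                    return steps
                grown = pool | {joined}
                if grown not in seen:
                    seen.add(grown)
                    nxt.append(grown)
        frontier = nxt
    raise ValueError("target not reachable")

print(assembly_index("banana"))  # 4 joins: ba, na, na+na=nana, ba+nana
```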
[181] So what's the computational complexity of figuring out the shortest path with molecules, with language, with mathematical theorems?
[182] It seems like once you have the fully constructed Lego castle, or whatever your favorite Lego world is, figuring out how to get there from the basic building blocks, is that an NP-hard problem?
[183] It's a hard problem.
[184] It's a hard problem, but actually, if you look at it, so the best way I look at it, let's take a molecule.
[185] So if the molecule has 13 bonds, first of all, take 13 copies of the molecule and just cut all the bonds, so take 12 bonds, and then you just put them in order.
[186] And then that's how it works.
[187] So, and you keep looking for symmetry or copies, so you can then shorten it as you go down, and that becomes combinatorially quite hard.
[188] For some natural product molecules it becomes very hard. It's not impossible, but we're looking at the bounds on that at the moment. But as the object gets bigger, it becomes really hard. That's the bad news. But the good news is there are shortcuts, and we might even be able to physically measure the complexity without computationally calculating it, which is kind of insane. How would you do that? Well, in the case of molecules, if you shine light on a molecule,
[189] let's take infrared.
[190] The molecule has, each of the bonds absorbs the infrared differently in what we call the fingerprint region.
[191] And so it's a bit like, and because it's quantized as well, you have all these discrete kind of absorbances.
[192] And my intuition after we realized we could cut molecules up in mass spec, that was the first go at this.
[193] Then we did it using infrared.
[194] And the infrared gave us an even better correlation with assembly index, and we used another technique as well in addition to infrared, called NMR, nuclear magnetic resonance, which tells you about the number of different magnetic environments in a molecule, and that also worked out.
[195] So we have three techniques, each of which independently gives us the same, or tending towards the same, assembly index for a molecule that we can calculate mathematically.
[196] Okay, so these are all methods of mass spectrometry, mass spec, you scan a molecule, it gives you data in the form of a mass spectrum, and you're saying that the data correlates to the assembly index.
[197] Yeah.
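A sketch of how such a shortcut could be used in practice, under the assumption described here that the number of fragment peaks correlates roughly linearly with assembly index: calibrate on molecules whose index you can compute exactly, then estimate unknowns from their peak counts alone. The peak counts and indices below are invented placeholders, not the published fit.

```python
import numpy as np

# Hypothetical calibration set: (fragment peak count, computed index).
known_peaks = np.array([5.0, 9.0, 14.0, 22.0, 31.0])
known_index = np.array([6.0, 9.0, 12.0, 17.0, 24.0])
slope, intercept = np.polyfit(known_peaks, known_index, 1)

def estimate_assembly_index(n_ms2_peaks: float) -> float:
    """Estimate assembly index from a fragment-peak count alone,
    skipping the (hard) exact computation."""
    return slope * n_ms2_peaks + intercept

print(estimate_assembly_index(18))  # rough estimate for an unknown
```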
[198] How generalizable is that shortcut?
[199] First of all, to chemistry.
[200] And second of all, beyond that.
[201] Because that seems like a nice hack and you're extremely knowledgeable about various aspects of chemistry so you can say, okay, it kind of correlates.
[202] But, you know, the whole idea behind assembly theory paper and perhaps why it's so controversial is that it reaches bigger.
[203] It reaches for the bigger general theory of objects in the universe.
[204] Yeah, I'd say so.
[205] I'd agree.
[206] So I've started assembly theory of emoticons with my lab, believe it or not.
[207] So we take emojis, pixelate them, work out the assembly index of an emoji, and then work out how many emojis you can make on the path of emojis.
[208] So there's the Uber emoji from which all other emojis emerge.
[209] Yeah.
[210] And then you can, so you can then take a photograph,
[211] and by looking at the shortest path, by reproducing the pixels to make the image you want, you can measure that.
[212] So then you start to be able to take spatial data.
[213] Now there's some problems there.
[214] What is then the definition of the object?
[215] How many pixels?
[216] How do you break it down?
[217] And so we're just learning all this right now.
[218] So how do you begin to compute the assembly index of a graphical thing, like a set of pixels on a 2D plane that form a thing?
[219] So you would, first of all, determine the resolution.
[220] So then what is your X, Y, well, the number on the X and Y plane?
[221] And then look at the surface area.
[222] And then you take all your emojis and make sure they're all looked at at the same resolution.
[223] Yes.
[224] And then we would basically do exactly the same thing we would do for cutting the bonds.
[225] You'd cut bits out of the emoji and look at the, you'd have a bag of pixels,
[226] and you would then add those pixels together to make the overall emoji.
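A sketch of just the normalization step being described, assuming grayscale images larger than the target resolution; the resulting flattened string could, in principle, be fed to the same kind of shortest-path search as the string sketch earlier (though the brute-force version would only cope with tiny grids).

```python
import numpy as np

def pixelate(image: np.ndarray, res: int = 8) -> str:
    """Downsample a 2D grayscale array to res x res by block averaging,
    binarize against the mean, and flatten, so every emoji is compared
    at the same resolution -- the assumed fundamental scale, playing
    the role the bond plays in chemistry."""
    h, w = image.shape
    ys = np.linspace(0, h, res + 1, dtype=int)
    xs = np.linspace(0, w, res + 1, dtype=int)
    grid = np.array([[image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                      for j in range(res)] for i in range(res)])
    bits = grid > grid.mean()
    return "".join("#" if b else "." for b in bits.ravel())
```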
[227] Wait, wait a minute, but, like, first of all, not every pixel is, I mean, this is at the core of sort of machine learning and computer vision.
[228] Not every pixel is that important, and there's like macro features, there's micro features and all that kind of stuff.
[229] Exactly.
[230] Like, you know, the eyes appear in a lot of them.
[231] The smile appears in a lot of them.
[232] So in the same way in chemistry, we assume the bond is
[233] fundamental.
[234] What we do here is we assume the resolution, the scale at which we do it, is fundamental.
[235] And we're just working that out.
[236] And you're right, that will change, right?
[237] Because as you take your lens out a bit, it will change dramatically.
[238] But it, but it's just a new way of looking at not just compression, what we do right now in computer science and data.
[239] One big kind of, kind of misunderstanding is assembly theory is telling you about how compressed the object is.
[240] That's not right.
[241] It's how much information is required on a chain of events.
[242] Because the nice thing is, when you do compression in computer science, we're wandering a bit here, but it's kind of worth wondering, I think.
[243] You assume you have instantaneous access to all the information in the memory.
[244] In assembly theory, you say, no, you don't get access to that memory until you've done the work.
[245] And then once you've done the work, you can have access to that memory, but not to the next one.
[246] And this is how in assembly theory we talk about the four universes, the assembly universe, the assembly possible, and the assembly contingent, and then the assembly observed.
[247] And they're all scales in this combinatorial universe.
[248] Can you explain each one of them?
[249] Yep.
[250] So the assembly universe is like anything goes.
[251] It's just a combinatorial kind of explosion and everything.
[252] So that's the biggest one?
[253] That's the biggest one.
[254] It's massive.
[255] Assembly universe, assembly possible, assembly contingent, assembly observed.
[256] And on the y-axis is assembly steps.
[257] Time.
[258] Yeah.
[259] And, you know, on the x-axis, as the thing expands through time, more and more unique objects appear.
[260] So, yeah, so assembly universe, everything goes.
[261] Yep.
[262] Um, assembly possible: laws of physics come in, in this case, in chemistry, bonds.
[263] In assembly, so that means...
[264] Those are actually constraints, I guess.
[265] Yes, and they're the only constraints.
[266] They're the constraints at the base.
[267] So the way to look at it is you've got all your atoms, they're quantized, you can just bung them together.
[268] So then, in computer science speak, I suppose the assembly universe is just like no laws of physics.
[269] Things can fly through mountains, beyond the speed of light.
[270] In the assembly possible, you have to apply the laws of physics, but you can get access to all the motifs instantaneously with no effort.
[271] So that means you could make anything.
[272] Then the assembly contingent says, no, you can't have access to the highly assembled object in the future until you've done the work in the past on the causal chain.
[273] And that's really the interesting shift, where you go from assembly possible to assembly contingent. That is really the key thing in assembly theory that says you cannot just have instantaneous access to all those memories. You have to have done the work. Somehow the universe has to have built a system that allows you to select that path rather than other paths.
[274] And then the final thing, the assembly observed, is basically us saying, oh, these are the things we actually see.
[275] We can go backwards now and understand that they have been created by this causal process.
[276] Wait a minute.
[277] So when you say the universe has to construct the system that does the work, is that like the environment that allows for like selection?
[278] Yeah.
[279] Yeah.
[280] Yeah.
[281] So that's the thing that does the selection.
[282] You could think about it in terms of a von Neumann constructor versus a selection, a ribosome,
[283] a Tesla plant assembling Teslas. You know, the difference between the assembly universe in Tesla land and the Tesla factory is, everyone says, no, Teslas are just easy, they just spring out, you know how to make them all. In the Tesla factory, you have to put things in sequence, and out comes a Tesla.
[284] So you're talking about the factory.
[285] Yes, this is really nice.
[286] Super important point is that when I talk about the universe having a memory, or there's some magic, it's not that; it's that it tells you that there must be a process
[287] encoded somewhere in physical reality, be it a cell, a Tesla factory, or something else that is making that object.
[288] I'm not saying there's some kind of woo -woo memory in the universe, you know, morphic resonance or something.
[289] I'm saying that there is an actual causal process that is being directed, constrained in some way.
[290] So it's not kind of just making everything.
[291] Yeah, but Lee, what's the factory that made the factory?
[292] So what is the, so first of all, you assume the laws of physics have just sprung into existence at the beginning.
[293] Those are constraints, but what makes the factory the environment that does the selection?
[294] This is the question of, well, it's the first interesting question that I want to answer out of four.
[295] I think the factory emerges in the environment, the interplay between the environment and the objects that are being built.
[296] And here, let me, I'll have a go at explaining to you the shortest path.
[297] So why is the shortest path important?
[298] Imagine you've got, I'm going to have to go chemistry for a moment and then abstract it.
[299] So imagine you've got an environment, a given environment, where you have a budget of atoms you're just flinging together.
[300] Yep.
[301] And the atoms that have been flung together into, say, molecule A, they decompose.
[302] So molecules decompose over time.
[303] So the molecules in this environment, in this magic environment, have to not die, but they do die.
[304] They have a half -life.
[305] So the only way the molecules can get through that environment out the other side, let's pretend the environment is a box, you can go in and out without dying, and there's just an infinite supply of atoms coming, or a large supply.
[306] The molecule gets built, but the molecule that is able to template itself being built
[307] and survives in the environment will basically reign supreme.
[308] Now, let's say that that molecule takes 10 steps.
[309] And it's using a finite set of atoms, right?
[310] Now let's say another molecule, a smart-ass molecule we'll call it, comes in and can survive in that environment and can copy itself, but it only needs five steps.
[311] The molecule that only needs five steps, because both molecules are being destroyed, but they're creating themselves faster than they can be destroyed.
[312] You can see that the shortest path reigns supreme.
[313] So the shortest path tells us something super interesting about the minimal amount of information required to propagate that motif in time and space, and it seems to be like some kind of conservation law.
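Here is a toy simulation of that argument, with invented rates: two self-templating molecules decay at the same rate, but the one with the shorter assembly path copies itself faster, so it ends up dominating.

```python
# Two replicators in the same decaying environment. Replication rate is
# taken to be inversely proportional to the number of assembly steps
# (a longer causal chain means a slower copy). All rates are made up.
dt, decay = 0.1, 0.15
steps_long, steps_short = 10, 5
rep_long, rep_short = 1.0 / steps_long, 1.0 / steps_short  # 0.1 vs 0.2

pop_long, pop_short = 1.0, 1.0
for _ in range(1000):                       # integrate to t = 100
    pop_long  += pop_long  * (rep_long  - decay) * dt   # net decay
    pop_short += pop_short * (rep_short - decay) * dt   # net growth

print(pop_long, pop_short)  # the five-step molecule reigns supreme
```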
[314] So one of the intuitions you have is that the propagation of motifs in time will be done by the things that can construct
[315] themselves in the shortest path. So, like, you can assume that most of the objects in the universe are built in the shortest, in the most efficient way. Big loop I just took there. Yeah. Yes and no, because there are other things. So in the limit, yes, because you want to tell the difference between things that have required a factory to build them and just random processes. Um, but you can find instances where the shortest path isn't taken for an individual object, an individual function.
[316] And people go, ah, that means the shortest path isn't right.
[317] And then I say, well, I don't know.
[318] I think it's right still.
[319] Because, so of course, because there are other driving forces.
[320] It's not just one molecule.
[321] Now when you start to, now you start to consider two objects, you have a joint assembly space.
[322] And now it's a compromise between not just making A in the shortest path; you want to make A and B in the shortest path, which might mean that A is slightly longer. You have a compromise.
[323] So when you see slightly more nesting in the construction, when you take a given object, that can look longer, but that's because the overall function, the object, is still trying to be efficient.
[324] Yeah.
[325] And this is still very hand-wavy, and maybe we have no leg to stand on, but we think we're getting somewhere with that.
[326] And there's probably some parallelization there.
[327] Yeah.
[328] So this is all, this is not sequential.
[329] The building is, I guess, when you're talking about complex objects, you don't have to work sequentially.
[330] You can work in parallel.
[331] You can get your friends together and they can.
[332] Yeah.
[333] And the thing we're working on right now is how to understand these parallel processes.
[334] Now there's a new thing we've introduced called assembly depth.
[335] And assembly depth can be lower than the assembly index
[336] for a molecule when they're cooperating together, because exactly this parallel processing is going on.
[337] And my team have been working this out in the last few weeks, because we're looking at what compromises nature needs to make when it's making molecules in a cell.
[338] And I wonder, you know, I'm always leaping out of my competence, but in economics, I'm just wondering if you could apply this to economic processes.
[339] It seems like capitalism is very good at finding the shortest path, you know, every time.
[340] But there are ludicrous things that happen because actually the cost function has been minimized.
[341] And so I keep seeing parallels everywhere where there are complex nested systems where if you give it enough time and you introduce a bit of heterogeneity, the system readjusts and finds a new shortest path.
[342] But the shortest path isn't fixed on just one molecule now.
[343] It's in the actual existence of the object over time.
[344] And that object could be a city.
[345] It could be a cell.
[346] It could be a factory, but I think we're going way beyond molecules and my competence; I probably should go back to molecules.
[347] But hey.
[348] All right, before we get too far, let's talk about the assembly equation.
[349] Okay, how should we do this?
[350] Now, let me just even read that part of the paper.
[351] We define assembly as the total amount of selection necessary to produce an ensemble of observed objects, quantified using equation one.
[352] The equation basically has A on one side, which is the assembly of the ensemble.
[353] And then a sum from one to N, where N is the total number of unique objects.
[354] And then there are a few variables in there that include the assembly index and the copy number, which we'll talk about.
[355] That's an interesting,
[356] I don't remember you talking about that.
[357] That's an interesting addition, and I think a powerful one.
[358] It has to do with the fact that you can create pretty complex objects randomly, and in order to know that they're not random, that there's a factory involved, you need to see a bunch of them. Yeah, that's the intuition there. It's an interesting intuition. And then some normalization. What else is in there? The N minus one, just to make sure that, um, more than one object, because one object could be a one-off and random. Yep. And then you have more than one identical object. That's interesting, when there's two of a thing.
[359] Two of a thing is super important, especially if the assembly index is high.
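For reference, equation one as it appears in the paper, with $a_i$ the assembly index of the $i$-th unique object, $n_i$ its copy number, $N$ the number of unique objects, and $N_T$ the total number of objects in the ensemble:

$$
A = \sum_{i=1}^{N} e^{a_i} \left( \frac{n_i - 1}{N_T} \right)
$$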
[360] So we could ask several questions here.
[361] One, let's talk about selection.
[362] What is this term selection?
[363] What is this term evolution that we're referring to?
[364] Which aspect of Darwinian evolution are we referring to that's interesting here?
[365] So, yeah, so this is probably, you know, the paper, we should talk about the paper for a second. The paper, what it did is it kind of annoyed people; we didn't know it would.
[366] I mean, it got attention, and obviously the angry people were annoyed.
[367] There's angry people in the world.
[368] That's good.
[369] So what happened is the evolutionary biologists got angry.
[370] We were not expecting that, because we thought evolutionary biology would be cool.
[371] I knew that some, not many, computational complexity people would get angry, because I've kind of been poking them, and maybe I deserved it.
[372] But I was trying to poke them in a productive way.
[373] And then the physicists kind of got grumpy, because the initial conditions tell everything.
[374] The prebiotic chemists got slightly grumpy because there's not enough chemistry in there.
[375] Then finally, when the creationists said it wasn't creationist enough, I was like, I've done my job.
[376] You're saying the physicists got upset because you're basically saying that physics is not enough to tell the story of how biology emerges.
[377] I think so.
[378] And then they say, well, physics is the beginning and the end of the story.
[379] Yeah.
[380] So what happened is, the reason why people put the phone down on the call of the paper,
[381] if you view reading the paper like a phone call, is they got to the abstract.
[382] And in the abstract, it's...
[383] First sentence is pretty strong.
[384] The first two sentences caused everybody...
[385] Scientists have grappled with reconciling biological evolution with the immutable laws of the universe defined by physics.
[386] True, right?
[387] There's nothing wrong with that statement.
[388] Totally true.
[389] Yeah.
[390] These laws underpin life's
[391] origin, evolution, and the development of human culture and technology, yet they do not predict the emergence of these phenomena.
[392] Wow.
[393] First of all, we should say, the title of the paper, this paper was accepted and published in Nature.
[394] The title is Assembly Theory Explains and Quantifies Selection and Evolution. Very humble title.
[395] And the entirety of the paper, I think, presents interesting ideas, but reaches high.
[396] I am not.
[397] I would do it all again.
[398] This paper was actually on the preprint server for over a year.
[399] You regret nothing.
[400] Yeah, I think, yeah, I don't regret anything.
[401] You and Frank Sinatra did it your way.
[402] What I love about being a scientist is kind of sometimes, because I'm a bit dim, I'm like, and I don't understand what people are telling me, I want to get to the point.
[403] This paper says, hey, laws of physics are really cool.
[404] The universe is great, but they don't really, it's not intuitive that you just run the standard model and get life out.
[405] I think most physicists might go, yeah, there's, you know, it's not just, we can't just go back and say that's what happened, because physics can't explain the origin of life yet.
[406] It doesn't mean it won't or can't, okay?
[407] Just to be clear, sorry, intelligent designers, we are going to get there.
[408] Second point, we say that evolution works, but we don't know how evolution got going.
[409] So biological evolution and biological selection.
[410] So for me, this seems like a simple continuum.
[411] So when I mentioned selection and evolution in the title, I think, and in the abstract, we should have maybe prefaced that and said non-biological selection and non-biological evolution.
[412] And then that might have made it even more crystal clear, but I didn't think that biology, evolutionary biology, should be so bold to claim ownership of selection and evolution.
[413] And secondly, a lot of evolutionary biologists seem to dismiss the origin of life question,
[414] just say it's obvious.
[415] And that causes a real problem scientifically.
[416] Because when the physicists are like, we own the universe, the universe is good, we explain all of it, look at us.
[417] And even biologists say we can explain biology.
[418] And the poor chemist in the middle going, but hang on.
[419] And this paper kind of says, hey, there is an interesting disconnect between physics and biology.
[420] And that's at the point at which memories get made in chemistry, through bonds. And, hey, let's look at this closely and see if we can quantify it. So yeah, I mean, I never expected the paper to kind of get that much interest, and still, I mean, it's only been published just over a month ago now. So just to linger on selection: what is the broader sense of what selection means? Yeah, that's a really good question. For selection, so I think for selection you need, so this is where, for me, the concept of an object is something that can persist in time and not die, but basically can be broken up. So if I was going to kind of bolster the definition of an object: if something can form and persist for a long period of time under an existing environment that could destroy other, and I'm going to use anthropomorphic terms, I apologize, weaker or less robust objects, then the environment could have selected that.
[421] So, a good chemistry example: if you took some carbon and you made a chain of carbon atoms, whereas if you took some, I don't know, some carbon, nitrogen, and oxygen and made chains from those, you'd start to get different reactions and rearrangements.
[422] So a chain of carbon atoms might be more resistant
[423] to falling apart under acidic or basic conditions versus another set of molecules.
[424] So it survives in that environment.
[425] So the acid pond, the molecule, the resistant molecule can get through.
[426] And then that molecule goes into another environment.
[427] So that environment, now maybe rather than being an acid pond, is a basic pond, or maybe it's an oxidizing pond.
[428] And so if you've got carbon and it goes in an oxidizing pond, maybe the carbon starts to oxidize and break apart.
[429] So you go through all these kind of obstacle courses, if you like, given by reality.
[430] So selection happens when an object survives in an environment for some time.
[431] But, and this is the thing that's super subtle, the object has to be continually being destroyed and made by a process.
[432] So it's not just about the object now.
[433] It's about the process and time that makes it.
[434] Because a rock could just stand on the mountainside
[435] for four billion years and nothing happens to it.
[436] And that's not necessarily really advanced selection.
[437] So for selection to get really interesting, you need to have a turnover in time.
[438] You need to be continually creating objects, producing them, what we call discovery time.
[439] So there's a discovery time for an object.
[440] When that object is discovered, if it's, say, a molecule that can then act on itself or the chain of events that caused itself to bolster its formation, then you go from discovery time to production time, and suddenly you have more of it in the universe.
[441] So it could be a self-replicating molecule.
[442] And the interaction of the molecule in the environment, in the warm little pond or in the sea or wherever, in the bubble, could then start to build a proto-factory, the environment.
[443] So really, to answer your question, what the factory is, the factory is the environment, but it's not very autonomous.
[444] It's not very redundant.
[445] There's lots of things that could go wrong.
[446] So once you get high enough up the hierarchy of networks of interactions, something needs to happen.
[447] That needs to be compressed into a smaller volume and made resistant and robust.
[448] Because in biology, selection and evolution is robust.
[449] You have error correction built in.
[450] You have really, you know, there's good ways of basically making sure propagation goes on.
[451] So really the difference between inorganic, abiotic selection and evolution, and selection and evolution in biology, is robustness.
[452] Um, the ability to kind of propagate, the ability to survive in lots of different environments, whereas our poor little inorganic molecule, whatever, just dies in lots of different environments. So there's something super special that happens from the inorganic molecule in an environment that kills it, to where you've got evolution and cells can survive everywhere. How special is that?
[453] How do you know those kinds of evolution factories are everywhere in the universe?
[454] I don't, and I'm excited because I think selection isn't special at all.
[455] I think what is special is the history of the environments on Earth that gave rise to the first cell that now has, you know, has taken all those environments and is now more autonomous.
[456] and I would like to think that, you know, this paper could be very wrong, but I don't think it's very wrong.
[457] It's most certainly wrong in some ways, but it's less wrong than some other ideas, I hope, right?
[458] And if this allows us to go and look for selection in the universe, because we now have an equation where we can say, we can look for selection going on and say, oh, that's interesting.
[459] We seem to have a process that's giving us high copy number objects that are also highly complex, but that doesn't look like life as we know it.
[460] And we use that and say, oh, there's a hydrothermal vent.
[461] Oh, there's a process going on.
[462] There's molecular networks, because the assembly equation is not only meant to identify, at the higher end, advanced selection; what you get in biology I would call super advanced selection.
[463] And even, I mean, you could use the assembly equation to look for technology, and God forbid we could talk about consciousness and abstraction, but let's keep it primitive: molecules and biology.
[464] So I think the real power of the assembly equation is to say how much selection is going on in this space.
[465] And there's a really simple thought experiment I could do.
[466] If you have a little petri dish, and on that petri dish, you put some simple food.
[467] So the assembly index of all the sugars and everything is quite low.
[468] And you put in a single E. coli cell.
[469] And then you say, I'm going to measure the amount of assembly in the box.
[470] So it's quite low, but the rate of change of assembly, dA/dt, will go voom, sigmoidal, as it eats all the food, and the number of E. coli cells will replicate, because they take all the food, they can copy themselves, and the assembly index of all the molecules goes up, up, up, and up until the food is exhausted in the box.
[471] So now the E. coli stop, I mean, die is probably a strong word, they stop respiring because all the food has gone, but suddenly the amount of assembly in the box has gone up gigantically, because that one E. coli factory has just eaten through the food, milled lots of other E. coli factories, run out of food, and stopped.
[472] And so, looking at that: in the initial box, although the amount of assembly was really small, it was able to replicate, use all the food, and go up.
[473] And that's what we're trying to do in the lab, actually, is kind of make those kinds of experiments and see if we can spot the emergence of molecular networks
[474] that are producing complexity as we feed in raw materials and we feed in a challenge, an environment.
[475] You know, we try and kill the molecules.
[476] And really, that's the main kind of idea for the entire paper.
[477] Yeah, and see if you can measure the changes in the assembly index throughout the whole system.
[478] Yeah.
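A toy version of that petri-dish curve, with every number invented: cell count follows a logistic (sigmoidal) curve as the food is consumed, total assembly tracks the cells, and dA/dt spikes during growth and then falls to roughly zero when the food runs out.

```python
import numpy as np

t = np.linspace(0, 24, 500)                       # hours
K, r, n0 = 1e9, 1.5, 1.0                          # capacity, rate, seed
cells = K / (1 + (K / n0 - 1) * np.exp(-r * t))   # logistic growth
A = cells * 1e3                      # assembly ~ cells * molecules/cell
dAdt = np.gradient(A, t)             # the "voom", then back to ~0
print(dAdt.max(), dAdt[-1])          # big spike mid-curve, then flat
```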
[479] Okay, what about if I show up to a new planet, we'll go to Mars, or some other planet from a different solar system?
[480] How do we use assembly index there to discover alien life?
[481] Very simply, actually, let's say we'll go to Mars with a mass spectrometer with a sufficiently high resolution.
[482] So what you have to be able to do, so a good thing about mass spec is that you can select the molecule from the mass, and then if it's high enough resolution, you can be more and more sure that you're just seeing identical copies.
[483] You can count them, and then you fragment them, and you count the number of fragments.
[484] And look at the molecular weight: the higher the molecular weight and the higher the number of fragments, the higher the assembly index. So if you go to Mars and you take a mass spec of a high enough resolution and you can find molecules, and I'll give a guide: on Earth, if you could find molecules, say, greater than 350 molecular weight or with more than 15 fragments, you have found artifacts that can only be produced, at least on Earth, by life. Now you would say, oh, well, maybe a geological process could do it. I would argue very vehemently that that is not the case.
[485] But we can say, look, if you don't like the cut off on Earth, go up higher, 30, 100, right?
[486] Because there's going to be a point where you can find a molecule with so many different parts that the chances of you getting a molecule that has a hundred different parts, and finding a million identical copies, you know, that's just impossible; that could never happen in an infinite set of universes.
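A rough back-of-envelope for that claim, under the crude assumption that a random process picks one of $k$ options at each of $a$ joining steps: one specific object of assembly index $a$ turns up with probability about $k^{-a}$, so $n$ independent identical copies turn up with probability about

$$
P \approx \left(k^{-a}\right)^{n} = k^{-an},
$$

which for $k = 10$, $a = 100$, and $n = 10^{6}$ is about $10^{-10^{8}}$.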
[487] Can you just linger on this copy number thing?
[488] A million identical copies.
[489] What do you mean by copies and why is the number of copies important?
[490] Yeah, that was so interesting.
[491] And I always understood the copy number is really important, but I never explained it properly for ages.
[492] And I kept having this, it goes back to this:
[493] if I give you a, I don't know, a really complicated molecule, and I say it's complicated, you could say, hey, that's really complicated, but is it just really random?
[494] And so I realized that ultimate randomness and ultimate complexity are indistinguishable until you can see a structure in the randomness, so you can see copies.
[495] So copies implies structure.
[496] Yeah.
[497] The factory. I mean, there's a deep, profound thing in there, because if you just have a random process, you're going to get a lot of complex, beautiful, sophisticated things. What makes them complex in the way we think life is complex is something like a factory that's operating under a selection process, so there should be copies. Is there some looseness about copies? Like, what does it mean for two objects to be equal?
[498] It's all to do with the telescope or the microscope you're using.
[499] And so at the maximum resolution, so the nice thing about chemists is they have this concept of the molecule, and they're all familiar with a molecule.
[500] And molecules you can hold, you know, on your hand, lots of them, identical copies.
[501] A molecule is actually a super important thing in chemistry, to say, look, you can have a mole of molecules, an Avogadro's number of molecules,
[502] and they're identical.
[503] What does that mean?
[504] That means that the molecular composition, the bonding and so on, the configuration is all, is indistinguishable.
[505] You can hold them together.
[506] You can overlay them.
[507] So the way to do it is, if I say, here's a bag of 10 identical molecules.
[508] Let's prove they're identical.
[509] You pick one out of the bag and you basically observe it using some technique and then you put it, you take it away and then you take another one out.
[510] If you observe it using the technique, you can see no differences.
[511] They're identical.
[512] It's really interesting to get right because if you take, say, two molecules, molecules can be in different vibrational and rotational states.
[513] They're moving all the time.
[514] So in this respect, identical molecules have identical bonding.
[515] In this case, we don't even talk about chirality, because we don't have a chirality detector.
[516] So two, I mean, identical molecules: in one conception, assembly theory basically considers both hands as being the same.
[517] But of course, they're not.
[518] They're different.
[519] As soon as you have a chiral distinguisher to detect the left and the right hand, they become different. And so it's to do with the detection system that you have and the resolution. So I wonder if there's an art and a science to which detection system is used when you show up to a new planet. Yeah, yeah, yeah. So, like, you're talking about chemistry; today we have kind of standardized detection systems, right, of how to compare molecules. So, you know, when you start to talk about emojis and language and mathematical theorems and, I don't know, more sophisticated things, at a smaller scale than molecules, at a larger scale than molecules, what detection system do we use? If we look at the difference between you and me, Lex and Lee, are we the same?
[520] Are we different?
[521] Sure.
[522] I mean, of course we're different, close up, but if you zoom out a little bit, we'll morphologically look the same.
[523] Yeah.
[524] You know, height, characteristics, hair length, stuff like that.
[525] Well, also, like the species and...
[526] Yeah, yeah, yeah.
[527] And also, there's a sense in which we're both from Earth.
[528] Yeah, I agree.
[529] I mean, this is the power of assembly theory in that regard.
[530] If you...
[531] So, if everything...
[532] So, the way to look at it, if you have a box of objects, if they're all...
[533] If they're all indistinguishable, then using your technique, what you then do is you then look at the assembly index.
[534] Now, if the assembly index of them is really low, right, and they're all indistinguishable, then it's telling you that you have to go to another resolution.
[535] So that would be, you know, it's kind of a sliding scale.
[536] It's kind of nice.
[537] So those two are kind of in tension with each other, the number of copies and the assembly index.
[538] Yeah.
[539] That's really, really interesting.
[540] So, okay, so you show up to a new planet. You'll be doing what, I would do, mass spec? I would bring a sample of, what, like, first of all, how big of a scoop do you take? Do you just take a scoop? So we're looking for primitive life. Yeah, so if you're just going to Mars or Titan or Enceladus or somewhere, there are a number of ways of doing it. So you could take a large scoop, or you'd go for the atmosphere and detect stuff. So you could make a life meter, right?
[541] So one of Sarah's colleagues at ASU, Paul Davies, keeps calling it a life meter, which is quite a nice idea, because if you think about it, if you've got a living system that's producing these highly complex molecules and they drift away and they're in a highly kind of demanding environment, they could be burnt, right?
[542] So they could just be falling apart.
[543] So you want to sniff a little bit of complexity and say warmer, warmer, warmer, oh, we've found life.
[544] We found the alien Elon Musk smoking a joint in the bottom of the cave on Mars, or Elon himself, whatever.
[545] You say, okay, found it.
[546] So what you can do is the mass spectrometer, you could just look for things in the gas phase.
[547] Or you go on the surface, drill down because you want to find molecules that are, you've either got to find the source living system because the problem with just looking for complexity is it gets burnt away.
[548] So in a harsh environment on, say, on the surface of Mars, there's a very low probability that you're going to find really complex molecules because of all the radiation and so on.
[549] If you drill down a little bit, you could drill down a bit into soil that's billions of years old.
[550] Then I would put in some solvent, water, alcohol, or something, or take a scoop, put it in, make it volatile, put it into the mass spectrometer, and just try and detect high-complexity, high-abundance molecules.
[551] And if you get them, hey, presto, you can have evidence of life.
[552] Wouldn't that then be great if you could say, okay, we've found evidence of life.
[553] Now we want to keep the life meter, keep searching for more and more complexity until you actually find living cells.
[554] You can get those new living cells and then you could bring them back to Earth or you could try and sequence them.
[555] You could see that they have different DNA and proteins.
[556] Go along the gradient of the life meter.
[557] How would you build a life meter?
[558] Let's say we're together starting a new company launching a life meter.
[559] Mass spectrometer would be the first way of doing it.
[560] No, no, no, but that's one of the major components of it.
[561] But I'm talking about, if it's a device, and branding, logo, we've got to talk about that.
[562] That's later.
[563] But what's the input?
[564] How do you get to a metered output?
[565] So I would take a life, so my life meter, our life meter.
[566] There you go.
[567] Thank you.
[568] Yeah, you're welcome.
[569] It would have both infrared and mass spec.
[570] So it would have two ports so it could shine a light.
[571] And so what it would do is you would have a vacuum chamber, and you would have an electrostatic analyzer, and you'd have a monochromator producing infrared.
[572] You'd add the sample.
[573] So you'd take a scoop of the sample, put it in the life meter.
[574] It would then add a solvent or heat up the sample,
[575] so some volatiles come off.
[576] The volatiles would then be put into the mass spectrometer, into electrostatic trap, and you'd weigh the molecules and fragment them.
[577] Alternatively, you'd shine infrared light on them, you'd count a number of bands.
[578] But you'd have to, in that case, do some separation because you want to separate.
[579] And so in mass spec, it's really nice and convenient because you can separate electrostatically, but you need to have that.
[580] Can you do it in real time?
[581] Yeah, pretty much.
[582] Pretty much, yeah.
[583] So let's go all the way back.
[584] So this, okay, we're really going to get this.
[585] Lex's life meter. Lex and Lee's.
[586] No, no, Lex and Lee.
[587] It's got a good ring to it.
[588] All right.
[589] So you have a vacuum chamber.
[590] You have a little nose.
[591] The nose would have a packing material.
[592] So you would take your sample, add it onto the nose, add a solvent or a gas.
[593] It would then be sucked up the nose.
[594] And that would be separated using what we call chromatography.
[595] And then as each band comes off the nose, you would then do mass spec and infrared. In the case of the infrared, count the number of bands; in the case of mass spec, count the number of fragments and weigh it. And then the further up you go in molecular weight range for the mass spec, and in the number of bands, you go up and up and up, from, you know, dead, interesting, interesting, over the threshold, oh my gosh, Earth life, and then right up to batshit crazy, this is definitely, you know, alien intelligence that's made this. Right, you could almost go all the way there. Same in the infrared. And it's pretty simple. The thing that is really problematical is that for many years, decades, what people have done, and I can't blame them, is they've rather been obsessing about small biomarkers that we find on Earth, amino acids, like single amino acids, or evidence of small molecules and these things, and looking for those, rather than looking for complexity.
[596] The beautiful thing about this is you can look for complexity without Earth
[597] chemistry bias or Earth biology bias.
[598] So assembly theory is just a way of saying, hey, complexity and abundance is evidence of selection.
[599] That's how our universal life meter will work.
[600] Complexity in abundance is evidence of selection.
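A sketch of the decision logic such a meter might apply to each separated band, using the illustrative Earth-calibrated thresholds from the conversation (roughly 15 fragments and 350 molecular weight); the dataclass, its field names, and the technosignature cut-off at twice the threshold are my inventions, not a specified design.

```python
from dataclasses import dataclass

@dataclass
class Band:                 # one chromatography band off the "nose"
    mol_weight: float       # Da, from the mass spectrum
    n_fragments: int        # fragment count from the mass spec
    copies: int             # identical copies detected (abundance)

def life_meter_reading(bands: list[Band], frag_cut: int = 15,
                       mw_cut: float = 350, min_copies: int = 2) -> str:
    """Complexity in abundance is evidence of selection: score by the
    most fragment-rich, heavy, repeated molecule in the sample."""
    score = max((b.n_fragments for b in bands
                 if b.mol_weight > mw_cut and b.copies >= min_copies),
                default=0)
    if score > 2 * frag_cut:
        return "off the dial: candidate technosignature"
    if score > frag_cut:
        return "over threshold: evidence of selection, likely life"
    return "below threshold: consistent with abiotic chemistry"
```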
[601] Okay.
[602] So let's apply our life meter to Earth.
[603] So, you know, if we were just to apply assembly index measurements to Earth,
[604] what kind of stuff are we going to get?
[605] What's impressive about some of the complexity on Earth?
[606] So we did this a few years ago, when I was trying to convince NASA and colleagues that this technique could work.
[607] And honestly, it's so funny, because everyone's like, no, it ain't going to work.
[608] And it was just like, because the chemists were saying, of course there are complicated molecules out there you can detect that just form randomly.
[609] I was like, really?
[610] That was like, you know, it's a bit like, I don't know, someone saying, of course Darwin's textbook was just written randomly by some monkeys and a typewriter.
[611] It was just for me, it was like, really?
[612] And I've pushed a lot on the chemists now, and I think most of them are on board, but not totally.
[613] We really had some big arguments, but the copy number helped there, because I think I confused the chemists by saying one-off.
[614] And then when I made clear about the copy number, I think that made it a little bit easier.
[615] Just to clarify, a chemist might say that, of course, out there outside of Earth, there's complex molecules.
[616] Yes.
[617] Okay.
[618] And then you're saying, wait a minute, that's like saying, of course, there's aliens out there.
[619] Yeah, exactly that.
[620] Okay.
[621] But you're saying, you clarify that that's actually a very interesting question, and we should be looking for complex molecules of which the copy number is two or greater.
[622] Yeah, exactly.
[623] So, coming back to Earth, what we did is we took a whole bunch of samples, and we were running prebiotic chemistry experiments in the lab.
[624] We took various inorganic minerals and extracted them.
[625] Looked at the volatiles, because there's a special way of treating minerals and polymers in assembly theory, where in this, in our life machine, we're looking at molecules.
[626] We don't care about polymers because they don't, they're not volatile;
[627] you can't hold them.
[628] If you can't assert that they're identical, then it's very difficult for you to work out if they've undergone selection or they're just a random mess.
[629] Same with some minerals, but we can come back to that.
[630] So basically, what we did is we got a whole lot of samples, inorganic ones.
[631] We got a load of Scotch whiskey; we took an Ardbeg, which is one of my favorite whiskeys, which is very peaty.
[632] So the way it works in Scotland, on Islay, which is a little island, the whiskey is left to mature in barrels.
[633] And it's said that the peat, the complex molecules in the peat, might find their way through into the whiskey.
[634] And that's what gives it this intense brown colour and really complex flavour.
[635] It's literally molecular complexity that does that.
[636] And so, you know, vodka is the complete opposite.
[637] It's just pure, right?
[638] So the better the whiskey, the higher the assembly index, the higher the assembly index, the better the whiskey.
[639] That's why I really love deep, peaty Scottish whiskeys.
[640] Near my house, there is one of the lowland distilleries, called Glengoyne.
[641] It's still beautiful whiskey, but not as complex.
[642] So for fun, I took some Glengoyne whiskey and Ardbeg, put them into the mass spec, and measured the assembly index.
[643] I also got E. coli.
[644] So the way we do it: take the E. coli, break the cells apart, take it all apart; and we also got some beer.
[645] And people were ridiculing us, saying, oh, so beer is evidence of complexity?
[646] One of the computational complexity people, who was very vigorous in his disagreement with assembly theory, was just saying, you know, you don't know what you're doing.
[647] Even beer is more complicated than a human.
[648] What they didn't realize is that it's not beer per se; it is taking the yeast extract, breaking the cells, extracting the molecules, and just looking at the profile of the molecules to see if there's anything over the threshold.
[649] And we also put in a really complex molecule, Taxol.
[650] So we took all of these, but also NASA gave us, I think, five samples.
[651] And they wouldn't tell us what they are.
[652] They said, no, we don't believe you're going to get this to work.
[653] And they really, you know, they gave us some super complex samples.
[654] And they gave us two fossils, one that was a million years old and one that was 10,000 years old, something from the Antarctic seabed.
[655] They gave us a Murchison meteorite and a few others.
[656] Put them through the system.
[657] So we took all the samples, treated them all identically, put them into the mass spec, fragmented them; and in this case, implicit in the measurement was that in mass spec you only detect peaks when you've got more than, say, 10,000 identical molecules.
[658] So the copy number is already baked in, even though it wasn't quantified, which is super important there.
[659] This was in the first paper because I was like, it's abundant, of course.
[660] And when we then took it all out, we found that the biological samples gave you molecules that had an assembly index greater than 15, and all the abiotic samples were less than 15. And then we took the NASA samples, looked at which ones were more than 15 and which were less than 15, and we gave them back to NASA, and they were like, oh, gosh. Yep.
[661] Dead, living, dead, living.
[662] You got it.
[663] And that's what we found on earth.
[664] That's a success.
[665] Yeah.
[666] Oh, yeah, resounding success.
[667] Can you just go back to the beer and the E. coli?
[668] So what's the assembly index on those?
[669] So what we were able to do is: we found high assembly index molecules originating from the beer sample and the E. coli sample.
[671] So, I mean, I didn't know which one was higher.
[672] We didn't really go into any detail there, but now we are doing that, because one of the things we've done, it's a secret, but I can tell you.
[673] Nobody's listening.
[674] Well, it's that we've just mapped the tree of life using assembly theory, because everyone said, oh, that's something you can't do in biology.
[675] And what we were able to do is, so I think there are two ways of doing the tree of life. Well, three ways, actually.
[678] What's the tree of life?
[679] So the tree of life is basically tracing back the history of life on earth, all the different species, going back, who evolved from what, and it all goes all the way back to the first kind of life forms, and they branch off.
[680] And like you have the plant kingdom, the animal kingdom, the fungi kingdom, and different branches all the way up.
[681] And the way this was classically done, and I'm no evolutionary biologist, evolutionary biologists tell me that every day, at least 10 times.
[682] I want to be one, though.
[683] I kind of like biology.
[684] It's kind of cool.
[685] Yeah, it's very cool.
[686] But basically, what Darwin and Mendeleev and all these people did is they drew pictures, right?
[687] And they did taxonomy.
[688] They were able to draw pictures and say, oh, these look like common classes.
[689] Yeah.
[690] Then they're artists, really.
[691] They're just, you know.
[692] But they were able to find out a lot, right, in looking at vertebrates, invertebrates, the Cambrian explosion, all this stuff.
[694] And then came the genomic revolution, and suddenly everyone used gene sequencing.
[695] And Craig Venter is a good example.
[696] I think he's gone around the world in his yacht, just taking up samples, looking for new species, where he's just found new species of life, just from sequencing.
[697] It's amazing.
[698] So you have taxonomy, you have sequencing, and then you can also do a little bit of kind of molecular archaeology, like, you know, measure the samples and form some inference.
[700] What we did is we were able to fingerprint, so we took a load of random samples from all of biology, and we used mass spectrometry, and what we did now is not just look for individual molecules, but we looked for coexisting molecules where they had to look at their joint assembly space, and where we were able to cut them apart and undergo recursion in the mass spec and infer some relationships, and we're able to recapitulate the tree of life using mass spectroscopy, no sequencing and no drawing.
[701] All right, can you try to say that again with a little more detail?
[702] So recreating, what does it take to recreate the tree of life?
[703] What does the reverse engineering process look like here?
[704] So what you do is you take an unknown sample, you bung it into the mass spec, because this comes from what you were asking, like, what do you see in E. coli?
[705] And so in E. coli, you don't just see... it's not that the most sophisticated cells on Earth make the most sophisticated molecules.
[706] It is the coexistence of lots of complex molecules above a threshold.
[707] And so what we realize is you could fingerprint different life forms.
[708] So fungi make really complicated molecules.
[709] Why?
[710] Because they can't move.
[711] They have to make everything on site.
[712] Whereas, you know, some animals are like lazy.
[713] They can just go eat the fungi.
[714] They don't need to make very much.
[715] And so what you do is you take the fingerprint, maybe the top number of high molecular weight molecules you find in the sample, you fragment them to get their assembly indices, and then what you can do is infer common origins of molecules.
[716] You can do a kind of molecular reverse engineering of the assembly space: you can infer common roots and look at what's called the joint assembly space.
[717] But let's translate that into experiment.
[718] Take a sample, bung it in the mass spec, take the top, say, 10 molecules, fragment them, and that gives you one fingerprint.
[719] Then you do it for another sample, you get another fingerprint.
[720] Now the question is you say, hey, are these samples the same or different?
[721] And that's what we've been able to do.
[722] And basically by looking at the assembly space these molecules create. Without any knowledge of assembly theory, you are unable to do it.
[723] With a knowledge of assembly theory, you can reconstruct the tree.
[724] How does knowing if they're the same or different give you the tree?
[725] Let's go to two leaves on different branches on the tree, right?
[726] What you can do, by counting the number of differences, you can estimate how far away their origin was.
[727] And that's all we do.
[728] And it just works.
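A toy version of what that fingerprint comparison could look like, under loud assumptions: each sample is reduced to a set of high-assembly molecules (plain strings here), the distance between two samples is the number of molecules in one fingerprint but not the other, and the closest pair is merged first. The data is invented; this is a sketch of the counting-differences logic, not the actual unpublished method.

```python
# Toy fingerprint comparison: invented data, not the real method.

from itertools import combinations

fingerprints = {
    "fungus_A": {"m1", "m2", "m3", "m7"},
    "fungus_B": {"m1", "m2", "m3", "m8"},
    "animal_A": {"m1", "m4", "m5"},
    "animal_B": {"m1", "m4", "m6"},
}

def distance(a, b):
    # symmetric difference: molecules not shared by the two samples
    return len(fingerprints[a] ^ fingerprints[b])

for a, b in combinations(fingerprints, 2):
    print(a, b, distance(a, b))

# Crude first tree-building step: merge the closest pair of samples.
closest = min(combinations(fingerprints, 2), key=lambda pair: distance(*pair))
print("first merge:", closest)  # the two fungi (or the two animals) pair up
```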
[729] But when we realized you could even use assembly theory to recapitulate the tree of life with no gene sequencing, we were like, huh?
[730] So this is looking at samples that exist today in the world.
[731] What about things that no longer exist?
[733] I mean, the tree contains information about the past.
[734] I would love...
[735] Some of it is gone.
[736] Yeah, yeah, absolutely.
[737] I would love to get old fossil samples and apply assembly theory and mass spec, and see if we can find new forms of life that are no longer amenable to gene sequencing because the DNA is all gone.
[738] DNA and RNA are quite unstable, but some of the complex molecules might be there and might give you a hint of something new. Or wouldn't it be great if you find a sample that's worth really persevering with, doing the proper extraction, you know, PCR and so on, and then sequencing it and putting it together.
[739] So when a thing dies, you can still get some information about its complexity.
[740] Yeah.
[741] And we can, and it appears that you can do some dating.
[742] Now, there are really good techniques.
[743] There's radiocarbon dating.
[744] There is longer-range dating, going and looking at radioactive minerals and so on.
[745] And you can also, in bone, look at what happens after something dies: you get what's called racemization, where the chirality in the polymers basically changes and you get decomposition.
[746] And the rate of deviation from the pure enantiomer to the mixture gives you a time scale, a half-life.
[747] So you can date when it died.
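For reference, the standard reversible first-order form of that racemization clock can be sketched as below. The rate constant is purely illustrative, since real values depend strongly on the amino acid, temperature, and burial conditions.

```python
# A sketch of the racemization clock: after death, the pure L-enantiomer
# drifts toward a 50/50 D/L mixture at a roughly first-order rate, so a
# measured D/L ratio can be inverted to an age. The rate constant here
# is an assumption for illustration only.

import math

def racemization_age(d_over_l, k_per_year=1e-5, d_over_l_initial=0.0):
    """Years since death, from the standard reversible first-order form:
    ln((1 + D/L) / (1 - D/L)) - (same term at t=0) = 2 * k * t
    """
    now = math.log((1 + d_over_l) / (1 - d_over_l))
    then = math.log((1 + d_over_l_initial) / (1 - d_over_l_initial))
    return (now - then) / (2 * k_per_year)

print(f"{racemization_age(0.3):,.0f} years")  # about 31,000 with this k
```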
[748] I want to use assembly theory to see if I can date death and things, and trace the tree of life, and also the decomposition of molecules.
[749] You think it's possible?
[750] Oh yeah, without a doubt.
[751] It may not be better than what exists, because I was just at a conference where there are some brilliant people looking at isotope enrichment, looking at how life enriches isotopes, and it's really sophisticated stuff they're doing.
[752] But I think there's some fun to be had there, because it gives you another dimension of dating: how old is this molecule, or more importantly, how long ago was this molecule produced by life? The more complex a molecule, the more prospects for decomposition: oxidation, reorganisation, loss of chirality, and all that jazz. But what life also does is enrich. As you get older, the amount of carbon-13 in you goes up, because of the way the bonding is in carbon-13: it has a slightly different bond strength, what's called a kinetic isotope effect. So you can literally date how old you are, you know, or when you stopped metabolizing, so you could date someone's death, how old they are. I think I'm making this up, and this might be right, but I think it's roughly right: from the amount of carbon-13 you have in you, you can kind of estimate how old you are. How old living organisms, humans, are? Yeah, yeah, like you could say, oh, this person is 10 years old and this person is 30 years old, because they'll be metabolizing more carbon and they've accumulated it. That's the basic idea. It's probably completely wrong. Timescale signatures of chemistry are fascinating. So you've been giving a lot of chemistry examples for assembly theory. What if we zoom out and look at a bigger scale of an object, you know, like really complex objects like humans, or living organisms that are made up of, you know, millions or billions of other organisms? How do you try to apply assembly theory to that? At the moment, we should be able to do this with morphology in cells, so we're looking at cell surfaces. And really, I'm trying to extend it further; it's just that, you know, we worked so hard to get this paper out and get people to start discussing the ideas.
[753] But it's kind of funny because I think the penny is falling on this.
[754] So, yeah.
[755] What's it mean for the penny to drop?
[756] I mean, no, the penny's dropped, right?
[757] Because a lot of people are like, it's rubbish, it's rubbish, you've insulted me, it's wrong.
[758] And I'm, and, you know, I mean, the paper got published on the 4th of October.
[759] It had 2 .3 million engagements on Twitter, right?
[760] And it's been downloaded over a few hundred thousand times.
[761] And someone actually wrote to me and said, this is the example of really bad writing and what not to do.
[762] And I was like, if all of my papers got read this much... because that's the objective: if I publish a paper, I want people to read it. I want to write that badly again.
[763] Yeah.
[764] I don't know what's the deep insight here about the negativity in the space.
[765] I think it's probably the immune system of the scientific community making sure that there's no bullshit that gets published.
[766] And it can overfire, it can do a lot of damage.
[767] It can shut down conversations in a way that's not productive.
[768] I'll go back and answer your question about the hierarchy in assembly, but let's go back to the reception.
[769] People saying the paper was badly written: I mean, of course we could improve it, we could always improve the clarity. Let's go there before we go to the hierarchy. You know, the paper has been criticized quite a bit. What has been some criticism that you found most powerful, that you can understand, and can you explain it? Yes, the most exciting criticism came from the evolutionary biologists telling me that they thought the origin of life was a solved problem.
[770] And I was like, whoa, we're really onto something because it's clearly not.
[771] And when you poked them on that, they just said, no, you don't understand evolution.
[772] And I said, no, no, I don't think you understand that evolution had to occur before biology.
[773] And there's a gap.
[774] That was really, for me, that misunderstanding, and that did cause an immune response, which was really interesting.
[775] The second thing was that the physicists were actually really polite, right?
[776] Really nice about it.
[777] But they just said, we're not really sure about the initial conditions thing.
[778] But this is a really big debate that we should certainly get into because, you know, the emergence of life was not encoded in the initial conditions of the universe.
[779] And it can't, and I think assembly theory shows why it can't be.
[780] I'll say that.
[781] Sure.
[782] Can you say that again?
[783] The origin, the emergence of life, was not and cannot in principle be encoded in the initial conditions of the universe.
[784] Just to clarify, what we mean by life here is high assembly index objects?
[785] Yeah.
[786] And this goes back to your favorite subject.
[787] What's that?
[788] Time.
[789] Right.
[790] So why?
[792] What does time have to do with it?
[793] I mean, probably we can come back to it later if we have time. But I think I now understand how to explain why, you know, lots of people got angry with the assembly paper, but also the ramifications of this: how time is fundamental in the universe, and this notion of combinatorial spaces. And there are so many layers on this, but I think you have to become an intuitionist mathematician and you have to abandon Platonic mathematics; Platonic mathematics has also led physics astray. But there's a lot to unpack there, so we can go to the...
[794] Platonic mathematics, okay. Okay, so the evolutionary biologists' criticism: the origin of life is understood, and it doesn't require an explanation that involves physics.
[795] Yeah.
[796] That's their statement.
[797] Well, I mean, they said lots of confusing statements.
[798] Basically, I realized the evolutionary biology community that was vocal, some of them were really rude, really spiteful, and needlessly so, right?
[799] Because, look, you know, people misunderstand publication as well.
[800] Some of the people said, how dare this be published in Nature, this is, you know, what a terrible journal. And I want to say to people: look, this is a brand new idea that's not only potentially going to change the way we look at biology, it's going to change the way we look at the universe. And everyone's saying, how dare you, how dare you be so grandiose. I'm like, no, no, no, this is not hype. We're not saying we've, I don't know, discovered an alien in a closet somewhere just for hype.
[801] We genuinely mean this to have the impact, or to ask the question.
[802] And the way people jumped on it sets a really bad precedent for young people who want to actually do something new, because this makes a bold claim.
[803] And the chances are that it's not correct.
[804] But what I wanted to do is a couple of things. I wanted to make a bold claim that was precise and testable and correctable, not another woolly information-in-biology argument, information, Turing machine, blah, blah, blah, but a concrete series of statements that can be falsified and explored, so either the theory could be destroyed or built upon.
[806] Well, what about the criticism of, you're just putting a bunch of sexy names on something that's already obvious?
[807] Yeah, that's really good.
[808] So the assembly index of a molecule is not obvious.
[809] No one would measure it before.
[810] And no one has thought to quantify selection, complexity, and copy number before in such a primitive quantifiable way.
[811] I think the nice thing about this paper is that it's a tribute to all the people that understand that biology does something very interesting.
[812] Some people call it negentropy; some people think about, you know, organizational principles. Lots of people were not shocked by the paper, because they'd done related things before. A lot of the arguments we got, some people said, oh, it's rubbish; oh, by the way, I had this idea 20 years before. I was like, which one is it, the rubbish part or the really revolutionary part? So this kind of plucked two strings at once. It plucked the string that there is something interesting in biology we can all see but haven't quantified yet.
[813] And this is the first stab at quantifying that.
[814] So to the people who said this is obvious: if it's obvious, why have you not done it?
[815] Sure, but there's a few things to say there.
[816] One is, you know, this is in part a philosophical framework, because it's not like you can apply this generally to any object in the universe.
[817] It's very chemistry focused.
[818] Yeah, well, I think you will be able to.
[819] We just haven't got there robustly.
[820] So let's go up a level.
[821] Let's go up from molecules to cells, because you jumped to people and I jumped to emoticons, and both are good, and they will be assemblies.
[822] Let's stick with cells, yeah.
[823] So if we go from molecules to assemblies, and let's take a cell assembly: a nice thing about a cell is you can tell the difference between a eukaryote and a prokaryote, right?
[824] The organelles are specialised differently.
[825] When you look at the cell surface, the cell surface has different glycosylation patterns, and these cells will stick together.
[826] Now, let's go up a level in multicellular creatures.
[827] You have cellular differentiation.
[828] Now, if you think about how embryos develop, if you go all the way back, those cells undergo differentiation in a causal way that's biomechanical, a feedback between the genetics and the biomechanics.
[829] I think we can use assembly theory to apply to tissue types.
[830] We can even apply it to different cell disease types.
[831] So that's what we're doing next.
[832] But we're trying to walk.
[833] You know, the thing is, I'm trying to leap ahead.
[834] I want to leap ahead to go, whoa, we apply it to culture.
[835] Clearly, you can apply it to memes and culture.
[836] And we've also applied assembly theory to CAs.
[837] Mm -hmm.
[838] Cellular automata, but not just as you think.
[840] different CA rules were invented by different people at different times.
[841] One of my co-workers, a very talented chap, basically realized that different people had different ideas for different rules, and they copied each other and made slightly different cellular automata rules, and he looked at them online.
[842] And so he was able to infer an assembly index and copy number for rule whatever doing this thing.
[843] But I digress.
[844] But it does show you can apply it at a higher scale.
[846] So what do we need to do to apply assembly theory to things?
[847] We need to agree there's a common set of building blocks.
[848] So in a multicellular creature, you need to look back in time.
[849] So there is the initial cell, from which the creature is fertilized and then starts to grow.
[850] And then there is cell differentiation.
[851] And you have to then make that causal chain based on those.
[852] So it requires development of the organism in time.
[853] Or if you look at the cell surfaces and the cell types, they've got different features on the cell walls and inside the cell.
[854] So we're building up.
[855] But obviously I want a leap to things like emoticons, language, mathematical theorems.
[856] That's a very large number of steps to get from a molecule to the human brain.
[857] Yeah.
[858] And I think they are related, but in hierarchies of emergence, right?
[859] So you shouldn't compare them.
[860] I mean, the assembly index of a human brain, what does that even mean?
[861] Well, maybe we can look at the morphology of the human brain and say all human brains have this number of features in common.
[862] And then let's look at a brain in a whale or a dolphin or a chimpanzee or a bird.
[863] And say, okay, let's look at the assembly indices, a number of features in these.
[864] And now the copy number is just a number of how many birds are there, how many chimpanzees are there, how many humans are there?
[865] But then you have to discover for that the features that you would be looking for.
[866] Yeah.
[867] And that means you need to have some idea of the anatomy.
[868] Is there an automated way to discover features?
[869] I guess so.
[870] I mean, and I think this is a good way to apply machine learning and image recognition to basically characterize things.
[871] So apply compression to it to see what emerges, and then use the features used as part of the compression as the thing that is searched for when you're measuring assembly index and copy number.
[872] And the compression has to respect, remember, the assembly universe, which is that you have to go from assembly possible to assembly contingent.
[873] Because assembly possible is all possible brains, all possible features, all the time.
[874] But we know that on the tree of life, and also on the lineage of life going back to LUCA, the human brain just didn't spring into existence yesterday.
[875] It is a long lineage of brains going all the way back.
[876] And so if we could do assembly theory to understand the development, not just an evolutionary history, but in biological development as you grow, we're going to learn something more.
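One crude, well-known stand-in for "apply compression and use what emerges as the features" is the normalized compression distance. This is not assembly theory itself, just a related feature-free similarity trick, sketched here with toy byte strings.

```python
# Normalized compression distance (NCD): a feature-free similarity
# measure between raw objects, used here as a crude illustration of
# the "compress and see what emerges" idea. Toy data only.

import zlib

def ncd(x: bytes, y: bytes) -> float:
    """NCD: near 0 means very similar, near 1 means unrelated."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

motif = b"ABABABABAB" * 40
variant = b"ABABABABAB" * 38 + b"ABXBABABAB" * 2
unrelated = bytes(range(256)) * 2

print(ncd(motif, variant))    # low: variant reuses motif's structure
print(ncd(motif, unrelated))  # higher: nothing shared to exploit
```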
[877] What would be amazing is if you could use assembly theory, this framework, to show the increase in the assembly index associated with, I don't know, cultures, or pieces of text like language, or images and so on.
[878] And illustrate, without knowing the data ahead of time, just like you did with NASA, that it applies in those other contexts.
[879] I mean, that probably wouldn't work at first, and you have to evolve the theory somehow.
[880] You have to change it, you have to expand it, you know.
[881] I think so.
[882] But I guess this paper is a first step in saying, okay, can we create a general framework for measuring the complexity of objects, for measuring life, the complexity of living organisms?
[884] Yeah.
[885] That's what this is reaching for.
[886] That is the first step.
[887] And also to say, look, we have a way of quantifying selection and evolution in a fairly, not mundane, but fairly mechanical way.
[888] Because before now, the ground truth for it was very subjective.
[889] Whereas here we're talking about clean observables.
[890] And there are going to be layers on this.
[892] I mean, with collaborators right now, we already think we can do assembly theory on language.
[893] And not only that, wouldn't it be great if we can figure out how, under pressure, language is going to evolve and be more efficient, because you're going to want to transmit things.
[894] And again, it's not just about compression.
[895] It is about understanding how you can make the most of the architecture you've already built.
[896] And I think this is something beautiful that evolution does.
[897] We're reusing those architectures.
[898] We can't just abandon our evolutionary history.
[899] And if you don't want to abandon your evolutionary history and you know that evolution has been happening, then assembly theory works.
[900] And I think that's a key comment I want to make is that assembly theory is great for understanding when evolution has been used.
[901] The next jump is when we go to technology.
[902] Because of course, if you take the M3 processor, which I want to buy, I haven't bought one yet.
[903] I can't justify it, but I want it at some point.
[904] The M3 processor arguably has quite a lot of features, a quite large number.
[905] The M2 came before it, then the M1, all the way back.
[906] You can apply assembly theory to microprocessor architecture.
[907] It doesn't take a huge leap to see that.
[908] I'm a Linux guy, by the way.
[909] So your examples go way over there.
[910] Yeah, well, whatever.
[911] Is that like a fruit company of some sort?
[912] I don't even know.
[913] Yeah, there's a lot of interesting stuff to ask about language.
[914] Like, you could look at, how would that work?
[915] Go like at GPT-1, GPT-2, GPT-3, 3.5, 4, and try to analyze the kind of language it produces.
[916] I mean, that's almost trying to look at assembly index of intelligence systems.
[917] Yeah, I mean, I think the thing about large language models, and this is a whole hobby horse I have at the moment, is that the evidence of evolution in the large language model comes from all the people that produced all the language.
[918] And that's really interesting, and all the corrections in the Mechanical Turk.
[919] Sure.
[920] And so...
[921] But that's part of the history, part of the memory of the system.
[922] Exactly.
[923] So can you...
[924] So it would be really interesting to basically use an assembly -based approach to making language in a hierarchy.
[925] Right?
[926] My guess is that we might be able to build a new type of large language model that uses assembly theory, that has more understanding of the past and how things were created.
[927] Basically, the thing with LLMs is they're like everything, everywhere, all at once: splat, and make the user happy.
[928] So there's not much intelligence in the model.
[929] The model is how the human interacts with the model.
[930] But wouldn't it be great if we could understand how to embed more intelligence in the system?
[931] What do you mean by intelligence there?
[932] You seem to associate intelligence with history.
[933] Yeah, well, I think selection produces intelligence.
[934] You're almost implying that selection is intelligence, no? Kind of.
[935] I would go out on a limb and say that, but I think it's a little bit more.
[936] Human beings have the ability to abstract, and they can break beyond selection.
[937] Beyond, like, Darwinian selection, because the human being doesn't have to basically do trial and error; they can think about it and say, oh, that's a bad idea, won't do that, and then technologies and so on. So we escaped Darwinian evolution, and now we're on to some other kind of evolution, I guess a higher level of evolution, and assembly theory will measure that as well, right, because it's all a lineage. Okay, another piece of criticism, or by way of question: how is assembly theory, or maybe assembly index, different from Kolmogorov complexity?
[938] So for people who don't know, the Kolmogorov complexity of an object is the length of the shortest computer program that produces the object as output.
[939] Yeah, there seems to be a disconnect with the computational approach.
[940] So a Kolmogorov measure requires a Turing machine, requires a computer.
[941] That's one thing. And the other thing is, assembly theory is supposed to trace the process by which life, evolution, emerged, right? That's the main thing there; there are lots of other layers. So, Kolmogorov complexity: you can approximate Kolmogorov complexity, but it's not really telling you very much about the actual object; it's really telling you about the compression of your data set. And that doesn't really help you identify... the turtle, in this case, is the computer. And so what assembly theory does, and I'm going to say trigger warning for anyone listening who loves complexity theory: I think we're going to show that AIT is a very important subset of assembly theory, because here's what happens. I think that assembly theory allows us to understand when selection was occurring. Selection produces factories and things; factories in the end produce computers, and then algorithmic information theory comes out of that. The frustration I've had with looking at life through this kind of information theory is it doesn't take into account causation. So the main difference between assembly theory and all these complexity measures is that they have no causal chain.
[942] Yeah.
[943] And I think that's the main.
[944] So the causal chain is at the core of assembly theory.
[945] Exactly.
[946] And if you've got all your data in a computer memory, all the data is the same.
[947] You can access it in the same way.
[948] You don't care.
[949] You just compress it.
[950] And you either look at the program runtime or the shortest program.
[951] And that, for me, is absolutely not capturing what selection does.
[952] But assembly theory looks at objects that don't come with information about their history. It's going to try to infer that history by looking for the shortest history, right? The object doesn't, like, have a Wikipedia page that goes with it. Oh, I would say it does, in a way, and it is fascinating to look at. So you've just got the object, and you have no other information about the object.
[953] What assembly theory allows you to do with just the object is, and the word infer is correct, I agree with infer.
[954] You might like to say, well, that's not the actual history, but something really interesting comes from this.
[955] The shortest path is inferred from the object.
[956] That is the worst case scenario if you have no machine to make it.
[957] So that tells you about the depth of that object in time.
[958] And so what assembly theory allows you to do is, without considering any other circumstances, to say from this object alone: how deep is this object in time, if we just treat the object as itself without any other constraints? And that's super powerful, because the shortest path then allows you to say, oh, this object wasn't just created randomly; there was a process. And so assembly theory is not meant to, you know, one-up AIT or to ignore the factory.
[959] How big was that factory and how deep in time is it?
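As a toy contrast between the two measures, using strings as stand-ins for molecules: compressed size approximates the Kolmogorov side, while counting joins in a construction path that can reuse anything already built approximates the assembly side. The greedy builder below is only an upper bound on the minimal number of joins, and the whole thing is an illustration, not the real molecular algorithm.

```python
# Kolmogorov-style vs assembly-style measures on strings. Toy only:
# real assembly theory counts bond-forming joins on molecules.

import zlib

def assembly_steps_upper_bound(target: str) -> int:
    """Greedy left-to-right construction; an upper bound on the minimal
    number of joins (the assembly index analogue for strings)."""
    library = set(target)  # basic building blocks come for free
    current = target[0]
    steps = 0
    while current != target:
        remainder = target[len(current):]
        # longest available fragment (including what we've built so far)
        # that extends the construction
        pieces = library | {current}
        piece = max((p for p in pieces if remainder.startswith(p)), key=len)
        current += piece
        library.add(current)  # reuse: every intermediate becomes a part
        steps += 1
    return steps

repetitive = "AB" * 8          # doubling: 4 joins for 16 characters
no_reuse = "QWERTYUIOPASDFGH"  # no reuse: 15 joins for 16 characters

for s in (repetitive, no_reuse):
    print(s, assembly_steps_upper_bound(s), len(zlib.compress(s.encode())))
```

The point of the contrast: compression only scores the final data, while the join count is a statement about a causal construction path, which is the distinction being drawn in this part of the conversation.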
[960] But it's still computationally very difficult to compute that history, right, for complex objects.
[961] It is, and it becomes harder.
[962] One of the things that's super nice is that it constrains your initial conditions, right?
[963] It constrains where you're going to be.
[964] So, imagine, one of the things we're doing right now is applying assembly theory to drug discovery.
[965] Now, what everyone's doing right now is taking all the proteins and looking at the proteins and looking at molecules dock with proteins.
[966] Why not instead look at the molecules that are involved in interacting with the receptors over time, and use the molecules that evolved over time as a proxy for how the proteins evolved over time, and then use that to constrain your drug discovery process?
[967] You flip the problem 180 degrees and focus on the molecule evolution rather than the protein, and so you can guess what might happen in the future. So rather than having to consider all possible molecules, you know where to focus. And it's the same thing if you're looking at assembly spaces for an object where you don't know the entire history, but you know that in the history of this object it's not going to have some motif that doesn't appear in its past. But just on the drug discovery point you made: don't you have to simulate all of chemistry to figure out how to come up with the constraints? No. I mean, I don't know enough about proteins, but this is another thing that causes confusion, because this paper goes across so many boundaries. So chemists have looked at this and said, this is not a correct reaction. It's like, no, it's a graph.
[968] Sure, there's assembly index and shortest path examples here on chemistry.
[969] Yeah.
[970] And so what you do is you look at the minimal constraints on that graph.
[971] Of course, it has some mapping to the synthesis, but actually, you don't have to know all of chemistry; you can build up the constraint space rather nicely.
[972] But this is just at the beginning, right?
[973] Like, there are so many directions this could go in.
[974] And as I said, it could all be wrong, but hopefully it's less wrong.
[975] What about the little criticism I saw of, do you, by way of question, do you consider the different probabilities of each reaction in the chain?
[976] So, like, that there could be different.
[977] When you look at a chain of events that led up to the creation of an object, doesn't it matter that some parts in the chain are less likely than others?
[978] No, it doesn't matter.
[979] No, no. Well, let's go back.
[980] So, no, not less likely. Let's go back to what we're looking at here.
[981] So the assembly index is the minimal path that could have created that object probabilistically.
[982] So imagine you have all your atoms in a plasma, you've got enough energy, you've got enough, there's collisions.
[983] What is the quickest way you could zip out that molecule with no reaction constraints?
[984] How do you define quickest there then?
[985] It's just basically a walk on a random graph.
[986] So we make an assumption that basically the time scale for forming the bonds...
[987] No, I don't want to say that, because then it's going to have people obsessing about this point.
[988] And your criticism is a really good one.
[989] What we're trying to say is like this puts a lower bound on something.
[990] Of course, some reactions are less possible than others.
[991] But actually, I don't think chemical reactions exist.
[992] Oh, boy.
[993] What does that mean?
[994] Why don't chemical reactions exist?
[995] I'm writing a paper right now that I keep being told I have to finish, and it's called The Origin of Chemical Reactions. And it merely says that reactivity exists, as controlled by the laws of quantum mechanics, and chemists put names on reactions. So you could have, I don't know, the Wittig reaction, which is by Wittig; you could have the Suzuki reaction, which is by Suzuki. Now, what are these reactions? These reactions are constrained by the following: they're constrained by the fact that we're on planet Earth, 1 g, 298 Kelvin, 1 bar.
[996] So these are constraints.
[997] They're also constrained by the chemical composition of Earth, oxygen availability, all this stuff.
[998] And that then allows us to focus in our chemistry.
[999] So when a chemist does a reaction, that's a really nice compressed shorthand for constraint application.
[1000] Glass flask, pure reagents, temperature, pressure, boom, boom, boom: control, control, control, control, control.
[1001] So, of course, we have bond energies.
[1002] So the bond energies are kind of intrinsic in a vacuum.
[1003] So the bond energy, you have to have a bond.
[1004] And so for assembly theory to work, you have to have a bond, which means that bond has to give the molecule a certain lifetime, a half-life.
[1005] So you're probably going to find later on that some bonds are weaker, and you are going to miss them in the mass spectrometer.
[1006] When you count, when you look at the assembly of some molecules, you're going to miscount the assembly of the molecule because it falls apart too quickly, because the bonds barely form.
[1007] But you can solve that by looking at infrared.
[1008] So when people think about the probability, they're kind of misunderstanding.
[1009] Assembly theory says nothing about the chemistry, because chemistry is chemistry, and the constraints are put in by biology.
[1010] There was no chemist at the origin of life, unless you believe in the chemist in the sky.
[1011] And they were, you know, it's like Santa Claus; they would have had a lot of work to do.
[1012] But chemical reactions do not exist; the constraints that allow chemical transformations to occur do exist.
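A playful sketch of that view: a named reaction modeled as shorthand for a bundle of environmental constraints, so a transformation is "accessible" only where those constraints are supplied. The feasibility windows and tolerances below are invented for illustration, not real chemistry data.

```python
# "Reactions don't exist; constraint application does": a named reaction
# as shorthand for the constraints under which a transformation occurs.
# All windows and tolerances here are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Constraints:
    temperature_k: float
    pressure_bar: float
    solvent: str

# Hypothetical windows standing in for two named reactions.
NAMED_REACTIONS = {
    "Wittig": Constraints(298.0, 1.0, "THF"),
    "Suzuki": Constraints(353.0, 1.0, "water/dioxane"),
}

def accessible(name: str, env: Constraints, tol_k: float = 30.0) -> bool:
    """The 'reaction' exists only where the environment supplies roughly
    the constraints its name compresses."""
    ref = NAMED_REACTIONS[name]
    return (abs(env.temperature_k - ref.temperature_k) <= tol_k
            and abs(env.pressure_bar - ref.pressure_bar) <= 0.5
            and env.solvent == ref.solvent)

earth_lab = Constraints(298.0, 1.0, "THF")
venus_surface = Constraints(737.0, 93.0, "none")  # rough literature values

print(accessible("Wittig", earth_lab))      # True: constraints applied
print(accessible("Wittig", venus_surface))  # False: no Wittig on Venus
```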
[1013] Okay, okay.
[1014] So it's constraint application, so there's no chemical reactions, it's all constraint application.
[1015] Yep.
[1016] Which enables the emergence of react, of what's a different word for chemical reaction?
[1017] Transformation.
[1018] Transformation.
[1019] Yeah, like a function.
[1020] It's a function.
[1021] But no, but I love chemical reactions as a shorthand.
[1022] And so the chemists don't all go mad.
[1023] I mean, of course chemical reactions exist on Earth.
[1024] It's a shorthand.
[1025] It's a shorthand for these constraints.
[1026] Right.
[1027] So assuming all these constraints that we've been using for so long, we just assume that that's always the case in natural language conversation.
[1028] Exactly.
[1029] The grammar of chemistry, of course, emerges in reactions.
[1030] And we can use them reliably.
[1031] But I do not think the Wittig reaction is accessible on Venus.
[1032] Right.
[1033] And this is useful to remember, you know, to frame it as constraint application is useful for when you zoom out to the bigger picture of the universe and looking at the chemistry of the universe and then starting to apply assembly theory.
[1034] That's interesting.
[1035] That's really interesting.
[1036] But we've also pissed off the chemists now.
[1037] Oh, they're pretty happy, but well, most of them.
[1038] No, everybody deep down is happy, I think.
[1039] They're just sometimes feisty.
[1040] That's how they show.
[1041] That's how they have fun.
[1042] Everyone is grumpy on some days when you challenge it.
[1043] The problem with this paper is, it's almost like I went to a party.
[1044] It's like what I used to do occasionally when I was young: go to a meeting and just find a way to offend everyone at the meeting simultaneously.
[1045] Even the factions that don't like each other, they're all unified in their hatred of you just offending them.
[1046] This paper, it feels like the person that went to the party and offended everyone simultaneously.
[1047] So they stop fighting with each other and just focus on this paper.
[1048] Maybe just a little insider information: what were the editors of Nature like, what were the reviews and so on, how difficult was that process? This is a pretty big paper. Yeah, I mean, this was quite a long process. When we originally sent the paper, the editor gave us some feedback and said, you know, I don't think it's that interesting.
[1049] Or, you know, it's hard.
[1050] It's a hard concept.
[1052] And Sarah and I took a year to rewrite the paper.
[1053] Was the nature of the feedback very specific, like on this part or that part?
[1054] Or was it like, what are you guys smoking?
[1055] Yeah, it was kind of the latter.
[1056] But polite, and there's promise.
[1057] Yeah, well, the thing is, the editor was really critical, but in a really professional way.
[1058] And, I mean, for me, this was the way science should happen.
[1059] So when it came back, you know, we had too many equations in the paper.
[1060] If you look at the preprint, they're just equations everywhere, like 23 equations.
[1061] And when I said to Abhishek, who was the first author, we've got to remove all the equations, but my assembly equation is staying in, Abhishek was like, you know, no, we can't.
[1063] I said, well, look, if we want to explain this to people, there's a real challenge.
[1064] And so Sarah and I went through, I think it was actually 160 versions of the paper, but we basically got to version 40 or something.
[1065] We said, right, zero it, start again.
[1066] So we wrote the whole paper again.
[1067] We redid the entire thing.
[1068] Amazing.
[1069] And we just went bit by bit by bit and said, what is it we want to say?
[1070] And then we sent the paper in.
[1071] We expected it to be rejected and not even go to review.
[1072] And then we got notification back.
[1073] It had gone to review.
[1074] We were like, oh, my God, it's so going to get rejected.
[1075] Why did we think it was going to get rejected?
[1076] Because the first assembly paper on the mass spec that we sent to Nature went through six rounds of review and was rejected, right?
[1077] And by a chemist who just said, I don't believe you, you must be committing fraud.
[1078] Long story, probably a boring story.
[1079] But in this case, it went out to review, the comments came back, and they were very deep comments
[1080] from all the reviewers.
[1081] But the nice thing was, the reviewers were very critical, but not dismissive.
[1082] They were like, oh, really?
[1083] Explain this, explain this, explain this, explain this.
[1084] Are you sure it's not Kolmogorov?
[1085] Are you sure it's not this?
[1086] And we went through, I think, three rounds of review, pretty quick.
[1087] And the editor went, yeah, it's in.
[1088] Maybe you could just comment on the whole process. You've published some pretty huge papers on all kinds of topics within chemistry and beyond. Some of them have a little spice in them, a little spice of crazy; like Tom Waits says, I like my town with a little drop of poison. You know, these are not mundane papers. So what's it like psychologically to go through all this process, to keep getting rejected, to get reviews from people that don't get the paper, all that kind of stuff? Just as a question about being a scientist, what is that like? I think, I mean, this paper for me... because this wasn't the first time we tried to publish assembly theory at the highest level. The Nature Communications paper, on the mass spec, on the idea, went to Nature, went through six rounds of review, and got rejected.
[1089] And I just was so confused when the chemists said, this can't be possible.
[1090] I do not believe you can measure complexity using mass spec.
[1091] And also, by the way, complex molecules can randomly form.
[1092] And we're like, but look at the data.
[1093] The data says, and they said, no, no, we don't believe you.
[1094] And I just wouldn't give up.
[1095] And the editor in the end, well, it was different editors actually.
[1096] Right.
[1097] What's behind that never giving up?
[1098] When you're sitting there 10 o 'clock in the evening, there's a melancholy feeling that comes over you.
[1099] And you're like, okay, this is rejection number five.
[1100] Or it's not a rejection, but maybe it feels like a rejection because, you know, the comments show they totally don't get it?
[1101] Like, what gives you strength to keep going there?
[1102] I don't know.
[1103] I don't normally get emotional about papers, but it's not about giving up because we want to get it published because we want the glory or anything.
[1104] It's just like, why don't you understand?
[1105] And so what I did is just try to be as rational as possible and say, yeah, you didn't like it, tell me why. And then, sorry, it's silly, I never get emotional about papers normally. But I think what we did, you just decompressed like five years of angst from this. So it's been rough. It's not just rough, it's like, you know, I came up with the assembly equation remote from Sarah in Arizona and the people at SFI. I felt like I was a mad person, like the guy depicted in A Beautiful Mind, not the actual genius part, but just the gibberish, because I kept writing expansions and I have no mathematical ability.
[1107] I was making these mathematical expansions where I kept seeing the same motif again.
[1108] I was like, oh, I think this is a copy number.
[1109] The same string is coming again and again and again.
[1110] I couldn't do the math.
[1111] And then I realized the copy number fell out of the equation and everything collapsed down.
[1112] I was like, oh, that works, kind of.
[1113] So we submitted the paper.
[1114] And then when it was almost accepted, right, the mass spec one, the astrobiologists said, great.
[1115] You know, the mass spectroscopists said, great.
[1116] And the chemists went, nonsense, biggest pile of nonsense ever, fraud, you know.
[1117] And I was like, but why fraud?
[1118] And they just said, just because.
[1119] And I could not convince the editor.
[1120] In this case, the editor was just so pissed off, because they see it as a kind of, you're wasting my time.
[1121] And I would not give up.
[1122] I wrote, I went and dissected all the parts.
[1123] And I think, although, I mean, I got upset about it, you know, which is kind of embarrassing actually.
[1124] Beautiful.
[1125] But it was just trying to understand why they didn't like it.
[1126] So part of me was like really devastated.
[1127] And a part of me was super excited because I'm like, huh, they can't tell me why I'm wrong.
[1128] And this kind of goes back to, you know, when I was at school, I was in a kind of learning difficulties class, and I kept going to the teacher and saying, what do I do today to prove I'm smart?
[1129] And they were like, nothing, you can't.
[1130] I was like, give me a job, you know, give me something to do.
[1131] Give me a job to do, something to do. And I kind of felt like that a bit when I was arguing with, well, not arguing, there was no ad hominem; I wasn't telling the editor or the reviewers that they were idiots or anything like this.
[1132] I kept it strictly like factual.
[1133] And all I did is I just kept knocking it down bit by bit by bit by bit by bit.
[1134] It was ultimately rejected and it got published elsewhere.
[1135] And then the actual experimental data.
[1136] So for this paper, the experimental justification was already published.
[1137] So when we did this one and we went through the versions and then we sent it in and in the end it just got accepted, we were like, well, that's kind of cool, right?
[1138] This is kind of like, you know, some days you have... the student, sorry, the first author, was like, I can't believe it got accepted.
[1139] Me neither. But it's great.
[1140] It's like, it's good.
[1141] And then when the paper was published, I was not expecting the backlash.
[1142] I was expecting it from the computational people.
[1143] Well, no, actually, I was just expecting one person who'd been trolling me for a while about it to just carry on trolling.
[1144] But I didn't expect the backlash.
[1145] And then I wrote to the editor.
[1146] and apologized.
[1147] And the editor was like, what are you apologizing for?
[1148] It was a great paper.
[1149] Of course it's going to get backlash.
[1150] You said some controversial stuff.
[1151] But it's awesome.
[1152] I think it's a beautiful story of perseverance.
[1153] And the backlash is just a negative word for discourse, which I think is beautiful.
[1154] As I said, you know, when it got accepted, people were kind of hacking on it.
[1155] And I was like, papers are not gold medals.
[1156] The reason I wanted to publish that paper in nature is because it says, hey, there's something before biological evolution.
[1157] You have to have that if you're not a creationist, by the way.
[1158] This is an approach.
[1159] It's the first time someone has put forward a concrete mechanism, or sorry, a concrete quantification, and what comes next, what you're pushing on, is a mechanism.
[1160] And that's what we need to get to: autocatalytic sets, self-replicating molecules, some other features that come in.
[1161] And the fact that this paper has been so discussed, for me, is a dream come true.
[1162] Like, it doesn't get better than that.
[1163] If you can't accept a few people hating it, and the nice thing is, the thing that really makes me happy is that no one has attacked the actual physical content.
[1164] Like, you can measure the assembly index, you can measure selection now.
[1165] So either that's right or wrong; well, either that's helpful or unhelpful. If it's unhelpful, this paper will sink down and no one will use it again. If it's helpful, it'll help people build a scaffold on it, and we'll start to converge to a new paradigm. So I think that's the thing that I wanted to see. You know, my colleagues, authors, collaborators, and people were like, you've just published this paper, you're a chemist, why have you done this? Like, who are you to be doing evolutionary theory?
[1166] Like, well, I don't know.
[1167] I mean, sorry, did I need to get permission?
[1168] Does anyone need permission to do anything?
[1169] Well, I'm glad you did.
[1170] Let me just, before coming back to Origin of Life and these kinds of questions, you mentioned learning difficulties.
[1171] I didn't know about this.
[1172] So what was it like?
[1173] I wasn't very good at school, right?
[1174] This is when you were very young?
[1175] Yeah, yeah.
[1176] But in primary school, my handwriting was really poor and apparently I couldn't read and my mathematics was very poor.
[1177] So they just said this is a problem.
[1178] They identified it.
[1179] My parents kind of at the time were confused because I was busy taking things apart, buying electronic junk from the shop, trying to build computers and things.
[1180] And then, I think about the major transition in my stupidity.
[1181] Like, you know, everyone thought I wasn't that stupid.
[1182] Basically, everyone thought I was faking.
[1183] Faking liking stuff, and faking wanting to be a scientist.
[1184] So I always wanted to be a scientist.
[1185] So five, six, seven years old, I'll be a scientist, take things apart.
[1186] And everyone's like, yeah, this guy wants to be a scientist, but he's an idiot.
[1187] And so everyone was really confused, I think, at first that I wasn't smarter than I, you know, was claiming to be.
[1188] And then I just basically didn't do well in the test.
[1189] And I went down and down and down and down.
[1190] And then I was kind of like, huh, this is really embarrassing.
[1191] I really like maths, and everyone says, I can't do it.
[1192] I really like kind of, you know, physics and chemistry and all that in science, and people say, you know, you can't, you can't read and write.
[1193] And so I found myself in a learning difficulties class at the end of primary school and the beginning of secondary school in the UK, secondary school is like 11, 12 years old.
[1194] And I remember being put in the remedial class.
[1195] And the remedial class was basically full of, well, two types, three types of people.
[1196] There were people that were quite violent, right?
[1197] You know, and there were people who couldn't speak English.
[1198] And there were people that really had learning difficulties.
[1199] So the one thing I can objectively remember was, I mean, I could read.
[1200] I like reading.
[1201] I read a lot.
[1202] But something in me, I'm a bit of a rebel.
[1203] I refused to read what I was told to read.
[1204] And I found it difficult to read individual words in the way we were told.
[1205] But anyway, I got caught one day teaching someone else to read.
[1206] And they said, okay, we don't understand this.
[1207] I always knew I wanted to be a scientist but didn't really know what that meant, and I realized you had to go to university. And I thought, I can just go to university, it's for curious people. And they were like, no, no, no, you need to enter these exams to get this grade point average, and the fact is, the exams you've been entered into, you're just going to get C, D, or E; you can't even get A, B, or C. Right, these are the UK GCSEs. I was like, oh shit. And I said, can you just put me into the higher exam? And they said, no, no, you're going to fail, there's no chance. So my father kind of intervened and said, you know, just let him go in for the exams. And they said, he's definitely going to fail, it's a waste of time, a waste of money. And he said, well, what if we paid? So they said, well, okay; so you didn't actually have to pay, you had to pay only if I failed. So I took the exams and passed them. Fortunately, well, I didn't get the top grades, but I got into A-levels. But then that also kind of limited what I could do at A-levels: I wasn't allowed to do A-level maths.
[1208] What do you mean you weren't allowed to?
[1209] Because I had such a bad math grade from my GCSE.
[1210] I only had a C. But they wouldn't let me go into the A-to-C exam for maths because of some kind of coursework requirement back then.
[1211] So the top grade I could have got was a C, out of C-D-E. So I got a C.
[1212] And then they let me do kind of AS-level maths, which is this half-intermediate thing, and I got to go to university.
[1213] But then, I liked chemistry.
[1214] I had a good chemistry teacher.
[1215] So in the end, I got to university to do chemistry.
[1216] So through that kind of process, I think, for kids in that situation, it's easy to start believing that you're, well, how do I put it, that you're stupid, and basically give up; that you're just not good at math, you're not good at school.
[1217] So this is by way of advice for people, for interesting young kids right now experiencing the same thing: what was the source of you not giving up there?
I have no idea, other than I really liked not understanding stuff.
For me, when I didn't understand something, I felt like I didn't understand anything.
But back then, I remember, I don't know, I tried to build a laser when I was like eight.
[1221] And I thought, how hard could it be?
And I basically, I was going to build a CO2 laser.
And I was like, right, I think I need some partially coated mirrors and some carbon dioxide.
And I need a high, high voltage.
So I kind of, I was like, I didn't have a... and I was so stupid, right?
[1226] I was kind of so embarrassed.
To make enough CO2, I actually set a fire and tried to filter the flame.
[1228] Oh, nice.
To trap off the CO2.
And I completely failed, and I burnt half the garage down.
[1231] So my parents were not very happy about that.
[1232] So that was one thing.
I was like, I really like first-principles thinking.
[1234] And so, you know, so I remember being super curious and being determined to find answers.
And so, when people do ask me for advice about this,
[1236] I don't really have that much advice other than don't give up.
And one of the things I try to do as a chemistry professor in my group is, when I hire people, I think, you know, who am I?
If they're persistent enough, who am I to deny them the chance?
[1239] Because, you know, people gave me a chance and I was able to do stuff.
[1240] Do you believe in yourself, essentially?
[1241] I like, so I love being around smart people, and I love confusing smart people.
[1242] And when I'm confusing smart people, you know, not by stealing their wallets and hiding it somewhere, but if I can confuse smart people, that is the one piece of hope that I might be doing something interesting.
[1243] Wow, that's quite brilliant.
[1244] Like, it's a gradient to optimize.
[1245] Yeah.
[1246] Hang out with smart people and confuse them.
[1247] Yeah.
[1248] And the more confusing it is, the more there's something there.
And as long as they're not telling you you're just a complete idiot, and they give you different reasons.
[1250] Yeah.
And I mean, you know, it's like with assembly theory, when people said, oh, it's wrong.
[1252] And I was like, why?
[1253] And they're like, and no one could give me a consistent reason.
They said, oh, because it's been done before, or it's just Kolmogorov complexity, or it's just this, that, and the other.
[1255] So I think the thing that I like to do is, and in academia, it's hard, right?
[1256] Because people are critical.
But, I mean, you know, the criticism, although I got kind of upset about it earlier, which is kind of silly, but not silly, because obviously it's hard work being on your own, or with a team spatially separated like during lockdown, trying to keep everyone on board and have some faith, and I always wanted to have a new idea.
[1258] And so, you know, I like a new idea and I want to, I want to nurture it as long as possible.
And if someone can give me actionable criticism, that's what I think I was trying to say earlier when I was kind of stuck for words: give me actionable criticism. You know, it's wrong?
[1260] Okay, why is it wrong?
You say, oh, your equation's incorrect for this, or your method is wrong.
[1262] And so what I try and do is get enough criticism from people to then triangulate and go back.
And I've been very fortunate in my life that I've got great colleagues, great collaborators, funders, mentors, and people that will take the time to say, you're wrong because of this. And then what I have to do is integrate the wrongness and go, oh, cool.
[1264] Maybe I can fix that.
[1265] And I think criticism is really good.
[1266] People have a go at me because I'm really critical.
[1267] But I'm not criticizing, you know, you as a person.
[1268] I'm just criticizing the idea and trying to make it better and say, well, what about this?
And, you know, sometimes my filters are kind of truncated in some ways.
[1270] I'm just like, that's wrong, that's wrong, that's wrong.
[1271] I want to do this.
[1272] And people are like, oh, my God, you just told me. You destroyed my life's work.
[1273] I'm like, relax, no. I'm just like, let's make it better.
And I think that we don't do that enough, because, you know, we're either personally critical, which isn't helpful, or we don't give any criticism at all because we're too scared.
Yeah, yeah. I've seen you be pretty aggressively critical, but every time I've seen it, it's the idea, not the person.
[1276] I'm sure I make mistakes on that.
I mean, you know, I argue lots with Sarah, and she's kind of shocked.
I've argued with Joscha Bach in the past, and he's like, you're just making that up.
I'm like, no, not quite, but kind of.
[1280] Yeah.
[1281] You know, I've had a big argument with Sarah about time.
[1282] She's like, no, time doesn't exist.
[1283] I'm like, no, no, time does exist.
And as we realized that her conception of assembly theory and my conception of assembly theory were the same thing, it necessitated us to abandon the idea that time is eternal, to actually really fundamentally question how the universe produces combinatorial novelty.
[1285] So time is fundamental for assembly theory.
[1286] I'm just trying to figure out where you and Sarah converge.
[1287] So I think assembly theory is fine in this time right now, but I think it helps us understand that something interesting is going on.
And I've been really inspired by a guy called Nicolas Gisin.
[1289] I'm going to butcher his argument, but I love his argument a lot.
[1290] So I hope he forgives me if he hears about it.
[1291] But basically, if you want free will, time has to be fundamental.
[1292] And we can go, and if you want time to be fundamental, you have to give up on platonic mathematics.
and you have to use intuitionist mathematics.
[1294] By the way, and again, I'm going to butcher this.
But basically, Hilbert said that, you know, infinite numbers are allowed.
And I think it was Brouwer who said, no, you can't, all numbers are finite.
So let's go back a step, because people are going to say, assembly theory seems to explain that a large combinatorial space allows you to produce things like life and technology.
And that large combinatorial space is so big it's not even accessible to a Sean Carroll or David Deutsch multiverse.
The physicists saying that all of the universe already exists in time are probably, provably, and that's a strong word,
not correct. We are going to know that the universe as it stands, the present, the way the present builds the future, is so big that the universe can't ever contain the future.
[1301] And this is a really interesting thing.
I think Max Tegmark has this mathematical universe.
He says, you know, the universe is kind of like a block universe.
I apologize to Max if I'm getting it wrong, but people think you can just move.
You have the initial conditions, and you can run the universe right to the end and go backwards and forwards in that universe. That is not correct.
Let me load that in. The universe is not big enough to contain the future.
Yeah, that's it. That's a beautiful way of saying that time is fundamental.
Yes. And that's why the law of the excluded middle, something is true or false, only works in the past. Is it going to snow in New York next week, or in Austin? In Austin, you might say, probably not.
[1306] In New York, you might say, yeah.
[1307] If you go forward to next week and say, did it snow in New York last week, true or false?
[1308] You can answer that question.
[1309] The fact that the law of the excluded middle cannot apply to the future explains why time is fundamental.
[1310] Well, I mean, that's a good example, intuitive example, but it's possible that we might be able to predict, you know, whether it's going to snow if we had perfect information.
I think we're saying it's not possible.
[1312] Impossible.
[1313] Impossible.
[1314] So here's why.
[1315] I'll make a really quick argument.
[1316] And this argument isn't mine.
[1317] It's Nick's and a few other people.
[1318] Can you explain his view on fundamental, on time being fundamental?
[1319] Yeah, so I'll give my view, which kind of resonates with his.
[1320] But basically, it's very simple, actually.
[1321] He would say that free will, your ability to design and do an experiment is exercising free will.
[1322] So he used that thought process.
I never really thought about it that way, and that you actively make decisions.
I used to think that free will was a kind of consequence of just selection, but I'm kind of understanding that human free will is something really interesting.
And he very much inspired me. But I think what Sarah Walker said inspired me as well, that these will converge, is that the universe is very big, huge.
Actually, the largest place in the universe right now is Earth.
[1328] Yeah.
[1329] I've seen you say that.
And boy, is that an interesting one to process.
[1331] What do you mean by that?
[1332] Earth is the biggest place in the universe?
Because we have this combinatorial scaffolding going all the way back to LUCA, the last universal common ancestor.
[1334] So you've got cells that can self -replicate.
[1335] And then you go all the way to terraforming the Earth.
[1336] You've got all these architectures.
[1337] The amount of selection that's going on, biological selection, just to be clear, biological evolution.
[1338] And then you have multicellularity, then animals, and abstraction.
[1339] And with abstraction, there was another kick because you can then build architectures and computers and cultures and language.
[1340] And these things are the biggest things that exist in the universe, because we can just build architectures that couldn't naturally arise anywhere.
And the further that distance goes in time, it's just gigantic.
[1342] And from a complexity perspective.
[1343] Okay, wait a minute.
But, I mean, I know you're being poetic, but how do you know there aren't other Earth-like places?
Like, how do you know? You're basically saying Earth is really special.
It's awesome stuff, as far as we look out.
There's nothing like it going on.
But how do you know there's not a nearly infinite number of places where cool stuff like this is going on?
[1349] I agree.
And I would say, I'll say it again: Earth is the most gigantic thing we know of in the universe, combinatorially.
[1352] We know.
[1353] We know.
[1354] Now, I guess, this is just purely a guess.
[1355] I have no data, but other than hope.
[1356] Well, maybe not hope.
[1357] Maybe, no, I have some data that every star in the sky probably has planets.
[1358] Yep.
[1359] And life is probably emerging on these planets.
But the amount of contingency that is associated with life means, I think, that the combinatorial space associated with these planets is so different.
[1361] Our causal cones are never going to overlap or not easily.
And this is the thing that makes me sad about alien life, and it's why we have to create alien life in the lab as quickly as possible.
[1363] Because I don't know if we are going to be able to build architectures that will intersect with alien intelligence and architectures.
[1364] And intersect, you don't mean in time or space?
[1365] Time and the ability to communicate.
[1366] The ability to communicate.
[1367] Yeah.
My biggest fear, in a way, is that life is everywhere, but we become infinitely more lonely because of our scaffolding in that combinatorial space.
[1369] Because it's so big.
So you're saying the constraints created by the environment that led to the factory of Darwinian evolution are just like a little tiny cone in a nearly infinite combinatorial space.
[1372] And so there's other cones like it.
[1373] And why can't we communicate with other, like, just because we can't create it doesn't mean we can't appreciate the creation, right?
[1374] Sorry, detect the creation.
[1375] I truly don't know, but it's an excuse for me to ask for people to give me money to make a planet simulator.
[1376] Yeah, right.
If I can make...
With a different kind of planet.
I'm just, like, another shameless plug: it's like, give me money.
This was all a long plug for a planet simulator. It's like, you know, hey, I'll be the first in line. My Rick garage has run out of room, you know. And this planet simulator, you mean like a different kind of planet? Yeah, well, different sets of environments and pressures. Exactly. If we could basically recreate the selection before biology as we know it, that gives rise to a different biology, we should be able to put constraints on where we look in the universe.
[1381] So here's a thing.
[1382] Here's my dream.
[1383] My dream is that by creating life in the lab, based upon constraints we understand.
[1384] So let's go for Venus -type life or Earth -type life or something.
Again, do Earth 2.0.
Screw it.
Let's do Earth 2.0.
And Earth 2.0 has a different genetic alphabet.
[1389] Fine.
[1390] That's fine.
[1391] Different protein alphabet.
[1392] Fine.
Have cells and evolution and all that stuff. We will then be able to say, okay, life is a more general phenomenon, selection is more general than what we think are the chemical constraints on life.
And we can point James Webb and other telescopes at other planets in the zone we are most likely to combinatorially overlap with, right?
So, because, you know, there are chemistries...
[1396] You're looking for some overlap.
And then we can basically shine light on them, literally, and by looking at the light coming back and applying advanced assembly theory, and a general theory of language that we will get, say, huh, that signal looks random, but there's a copy number.
Oh, this random set of things that shouldn't be there, that looks like a true random number generator, has structure, not a Kolmogorov, AIT-type structure, but evolutionary structure given by assembly theory. And we start to, um, but I would say that because I'm a shameless assembly theorist.
Yeah. It just feels like the cone, and I might be misusing the word cone here, but the width of the cone is growing really fast, to where eventually all the cones overlap, even in a very, very, very large combinatorial space. But then again, you're saying the universe is also growing very quickly in terms of possibilities.
[1400] I hope that as we build abstractions, the main, I mean, one idea is that as we go to intelligence, intelligence allows us to look at the regularities around us in the universe, and that gives some common grounding to discuss with aliens.
And you might be right that we will overlap there, even though we have completely different chemistry, literally completely different chemistry, and that we will be able to pass information to one another.
[1402] But it's not a given.
[1403] And, you know, I have to kind of try and divorce hope and emotion, you know, away from what I can logically justify.
But it's just hard to intuit a world, a universe, where there are nearly infinitely complex objects, and they somehow can't detect each other.
But the universe is expanding. The nice thing is, I would say, you see, I think Carl Sagan did the wrong thing.
[1407] Well, not the wrong thing.
He flipped the Voyager probe around and took the pale blue dot.
He said, look at how big the universe is.
I would have done it the other way around and said, look at the Voyager probe that came from the planet Earth, that came from LUCA.
Look at how big Earth is.
[1412] Then it produced that.
[1413] It produced that.
[1414] Yeah.
[1415] And that I think is like completely amazing.
And then that should allow people to think about, well, probably we should try and get causal chains off Earth, onto Mars, onto the Moon, wherever.
Whether it's human life or Martian life that we create.
[1418] It doesn't matter.
But I think this combinatorial space tells us something very important about the universe.
[1420] And I realized in assembly theory that the universe is too big to contain itself.
And I think this is, and now I'm coming back, and I want to kind of change your mind about time, because I'm guessing that for you, time is just a coordinate.
[1422] Yeah.
[1423] So I'm going to change your, I'm guessing you're one of those.
[1424] I'm going to change your mind in real time, or at least attempt.
[1425] Oh, in real time.
[1426] There you go.
[1427] I already got the tattoo, so this is going to be embarrassing if you change my mind.
But you can just add an arrow of time onto it, right?
[1429] Yeah, true.
[1430] Or erase it a bit.
So the argument that I think is really most interesting is, people say the initial conditions specify the future of the universe.
[1432] Okay, fine.
[1433] Let's say that's the case for a moment.
[1434] Now, let's go back to Newtonian mechanics.
[1435] Now, the uncertainty in Newtonian mechanics is this.
If I give you the coordinates of an object moving in space and the coordinates of another object, and they collide in space, and you know those initial conditions, you should know exactly what's going to happen.
[1437] However, you cannot specify these coordinates to infinite precision.
[1438] Now, everyone said, you know, oh, this is kind of like, you know, the chaos theory argument.
[1439] No, no, it's deeper than that.
[1440] Here's a problem with numbers.
This is where Hilbert and Brouwer fell out.
To have the coordinates of this object, a given object, colliding, you have to have them to infinite precision.
That's what Hilbert says.
He says, no problem.
[1445] Infinite precision is fine.
[1446] Let's just take that for granted.
[1447] But when the object is finite and it can't store its own coordinates, what do you do?
So if a finite object cannot be specified to infinite precision, then in principle, the initial conditions don't apply.
[1449] Well, how do you know it can't store its...
Well, how do you store an infinitely long number in a finite size?
[1451] Well, we're using infinity very loosely here.
[1452] No, no. We're using...
[1453] Infinite precision.
[1454] I mean, not loosely, but...
[1455] Very precisely.
[1456] So you think infinite precision is required?
[1457] Well, let's take the object.
[1458] Let's say the object is a golf ball.
[1459] Golf ball is a few centimeters in diameter.
[1460] We can work out how many atoms are on the golf ball.
[1461] And let's say we can store numbers down to atomic dislocations.
[1462] So we can work out how many atoms there are in the golf ball, and we can store the coordinates in that golf ball down to that number.
[1463] But beyond that, we can't.
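[As a rough back-of-the-envelope version of this point, with illustrative numbers that are not from the conversation: a golf ball of radius about 2.1 cm, with atoms roughly 2 angstroms across, contains on the order of]

```latex
% Storage capacity of a golf ball, order of magnitude only
% (radius r ~ 2.1 cm, atomic spacing ~ 2 angstroms, both assumed)
N_{\text{atoms}} \;\approx\; \frac{\tfrac{4}{3}\pi r^{3}}{(2\times10^{-10}\,\mathrm{m})^{3}}
\;\approx\; \frac{3.9\times10^{-5}\,\mathrm{m}^{3}}{8\times10^{-30}\,\mathrm{m}^{3}}
\;\approx\; 5\times10^{24}
```

[So even at one digit per atom, the ball can hold at most around 10^24 digits, while a generic real-number coordinate requires infinitely many; whatever the ball stores is a truncation.]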
[1464] Let's make the golf ball smaller.
And this is where I think we think we get randomness in quantum mechanics.
And some people say you can't get randomness, quantum mechanics is deterministic.
[1467] But aha, this is where we realize that classical mechanics and quantum mechanics suffer from the same uncertainty principle.
And that is the inability to specify the initial conditions to a precise enough degree to give you determinism.
[1469] The universe is intrinsically too big and that's why time exists.
[1470] It's non -deterministic.
[1471] Looking back into the past, you can use logical arguments because you can say, was it true or false?
[1472] You really know.
But the fact that we are unable to predict the future with precision is not evidence of a lack of knowledge.
It's evidence that the universe is genuinely generating new things.
Okay, so to you, first of all, in quantum mechanics, you can just say statistically what's going to happen when two golf balls hit each other.
Statistically, sure, I can say statistically what's going to happen. But then, when they do happen, and you keep nesting it together, you can't. I mean, let's think about entropy in the universe.
So how do we understand entropy change? Well, we can use the ergodic hypothesis.
We can also have the counterfactuals, where we have all the different states, and we can even put that in the multiverse, right?
But both of those are kind of non-physical.
The multiverse kind of collapses back to the same problem about the precision.
So if you accept that you don't have to have true and false going forward into the future, the real numbers, they're just observables.
We're trying to see exactly where time being fundamental sneaks in, in this difference between, the golf ball can't contain its own position perfectly precisely, and how that leads to time needing to be fundamental.
Let me ask a quick question.
[1488] Do you believe or do you accept you have free will?
[1489] Yeah, I think at this moment in time, I believe that I have free will.
[1490] So then you have to believe that time is fundamental.
[1491] I understand that's the statement you've made.
Well, no, it logically follows, because if you're in a universe that has no time, the universe is deterministic.
If it's deterministic, then you have no free will.
[1494] I think the space of how much we don't know is so vast that saying the universe is deterministic and from that jumping there's no free will is just too difficult of a leap.
No, it logically follows.
[1496] No, no, I don't disagree.
[1497] I'm not saying any, I mean, it's deep and it's important.
All I'm saying, and it's actually different to what I've said before, is that if you don't require platonistic mathematics, and accept that non-determinism is how the universe looks, and that gives us our creativity and the way the universe is generating novelty, it's really deeply important in assembly theory, because assembly theory starts to actually give you a mechanism for how you go from boring time, which is basically initial conditions specify everything, to a mismatch in creative time.
[1499] And I hope we'll do experiments.
[1500] I think it's really important to, I would love to do an experiment that prove that time is fundamental and the universe is generating novelty.
I don't know all the features of that experiment yet, but by, you know, having these conversations openly and getting people to think about the problems in a new way, people more intelligent than me, with good mathematical backgrounds, can say, oh, hey, I've got an idea.
I would love to do an experiment that shows that the universe is too big for itself going forward in time.
And I really, you know, this is why I really hate the idea of the Boltzmann brain.
The Boltzmann brain makes me super kind of, like, you know, everyone's having a free lunch.
It's like saying, let's break all the laws of physics.
So a Boltzmann brain is this idea that in a long enough universe, a brain will just emerge in the universe, conscious.
And that neglects the causal chain of evolution required to produce that brain.
And this is where the computational argument really falls down, because the computationalist says, I can calculate the probability of a Boltzmann brain.
And they'll give you a probability, but I calculate the probability of a Boltzmann brain as zero.
[1511] Just because the space of possibility is so large?
Yeah. It's like, when we start fooling ourselves with numbers that we can't actually measure and can't ever conceive of, I think it doesn't give us a good explanation.
And I've become, I want to explain why life is in the universe.
I think life is actually a novelty miner.
I mean, life basically mines novelty almost from the future and actualizes it in the present.
[1516] Okay.
Life is a novelty miner from the future that is actualized in the present.
[1518] Yeah, I think so.
Novelty miner.
First of all, novelty.
What's the origin of novelty when you go from boring time to creative time? Where's that? Is it as simple as randomness, like you're referring to?
I'm really struggling with randomness, because I had a really good argument with Joscha Bach about randomness, and he says randomness doesn't give you free will, that's insane, because you'd just be random. And I think he's right at that level. But I don't think he is right on another level, and it's not about randomness. It's about, I'm going to sound like, constrained opportunity. I'm making this up as I go along. Constrained opportunity. So what I mean is, you have to have, so that the novelty, what is novelty? You know, this is why, and it's a funny thing if you ever want to discuss AI, why I think everyone's kind of gone AI mad, is that they're misunderstanding novelty.
[1522] But let's think about novelty.
Yes, what is novelty?
[1524] So I think novelty is a genuinely new configuration that is not predicted by the past, right, and that you discover in the present, right?
[1525] And that is truly different, right?
Now, some people say that novelty doesn't exist.
It's always with precedent.
[1528] I want to do experiments that show that that is not the case.
[1529] And it goes back to a question you asked me a few moments ago, which is, where is the factory?
[1530] Right?
[1531] Because I think the same mechanism that gives us a factory gives us novelty.
[1532] And I think that that is why I'm so deeply hung up on time.
I mean, of course I'm wrong, but how wrong?
And I think that life opens up that combinatorial space in a way that our current laws of physics, as contrived in a deterministic initial-conditions universe, can't, even with the get-out of the multiverse, David Deutsch style, which I love, by the way, and I don't think is correct, but it's really beautiful.
[1535] Multiverse.
[1536] David Deutsch's conception of the multiverse is kind of like given.
[1537] But I think that the problem with wave particle duality in quantum mechanics is not about the multiverse.
[1538] It's about understanding how determined the past is.
Well, I don't just think that. Actually, this is a discussion I was having with Sarah, right?
She was like, oh, I think we've been debating this for a long time now, about how do we reconcile novelty, determinism, indeterminism.
Okay, just to clarify: both you and Sarah think the universe is not deterministic?
I won't speak for Sarah, but roughly, I think the universe is deterministic looking back in the past, but undetermined going forward in the future.
So I'm kind of having my cake and eating it.
[1546] This is because I fundamentally don't understand randomness, right?
As Joscha or other people told me. But if I adopt a new view now, where the new view is that the universe is just non-deterministic, I'd like to refine that and say: the universe appears deterministic going back in the past, but it's undetermined going forward in the future.
So how can we have a universe that has deterministic-looking rules but is non-determined going into the future?
It's this breakdown in precision in the initial conditions. We have to just stop using initial conditions and start looking at trajectories, and how the combinatorial space behaves in an expanding universe in time and space.
[1550] And assembly theory helps us quantify the transition to biology, and biology appears to be novelty mining because it's making crazy stuff.
You know, stuff that is unique to Earth, right?
There are objects on Earth that are unique to Earth that will not be found anywhere else, because you can do the combinatorial math.
[1553] What was that statement you made about life is novelty mining from the future?
[1554] Yeah.
[1555] What's the little element of time that you're introducing?
[1556] So what I'm kind of meaning is because the future is bigger than the present.
In a deterministic universe, how do the states go from one to another?
[1558] I mean, there's a mismatch, right?
[1559] Yeah.
[1560] So that must mean that you have a little bit of indeterminism, whether that's randomness or something else.
[1561] I don't understand.
[1562] I want to do experiments to formulate a theory to refine that as we go forward.
[1563] That might help us explain that.
And I think that's why I'm so determined to try and crack the non-life-to-life transition, looking at networks and molecules, and that might help us think about the mechanism.
[1565] But certainly the future is bigger than the past in my conception of the universe and some conception of the universe.
[1566] By the way, that's not obvious, right?
That's what we were just kind of getting at, the future being bigger than the past.
[1568] Well, that's one statement, and the statement that the universe is not big enough to contain the future is another statement.
[1569] Yeah.
[1570] Yeah, yeah, yeah.
[1571] That one is a big one.
[1572] That one's a really big one.
[1573] I think so.
[1574] But I think it's entirely, because look, we have the second law.
[1575] And right now, I mean, we don't need the second law if the future is bigger than the past.
[1576] It follows naturally.
[1577] Right.
[1578] So why are we retrofitting all these sticking plasters onto our reality to hold on to a timeless universe?
[1579] Yeah, but that's because it's kind of difficult to imagine the universe that can't contain the future.
[1580] But isn't that really exciting?
[1581] It's very exciting, but it's hard.
I mean, we're humans on Earth, and we have a very kind of four-dimensional conception of the world, of 3D plus time, and it's just hard to intuit a world where... what does it even mean?
[1584] A universe that can't contain the future.
[1585] Yeah, it's kind of crazy, but obvious.
[1586] I mean, I suppose it sounds obvious, yeah, if it's true.
But the nice thing is, the reason why assembly theory turned me onto that was: let's just start in the present, look at all the complex molecules, and go backwards in time to understand how evolutionary processes gave rise to them. It is not at all obvious that Taxol, which is one of the most complex natural products produced by biology, was going to be invented by biology. It's an accident. You know, Taxol is unique to Earth; there's no Taxol elsewhere in the universe. And Taxol was not decided by the initial conditions. It was decided by this kind of interplay. So the past simply is embedded in the present.
[1588] It gives some features, but why the past doesn't map to the future one to one is because the universe is too big to contain itself.
That gives space for creativity, novelty, and some things which are unpredictable.
[1590] Well, okay, so given that you're disrespecting the power of the initial conditions.
[1591] Let me ask you about, so how do you explain that cellular automata are able to produce such incredible complexity, given just basic rules and basic initial conditions?
I think that this falls into the Brouwer-Hilbert trap.
So how do you get a cellular automaton to produce complexity?
[1594] You have a computer, you generate a display, and you map the change of that in time.
There are some CAs that repeat, like functions. It's fascinating to me that for pi there is a formula where you can go to the millionth digit of pi and read out the number without having to go there. But there are some numbers where you can't do that; you have to just crank through it, whether it's Wolframian computational irreducibility or some other thing, well, it doesn't matter. But these CAs, that complexity, is that just complexity, or a number that you're basically mining in time?
[1596] You know, is that just a display screen for that number, that function?
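[The pi formula alluded to here is presumably the Bailey-Borwein-Plouffe formula; strictly, it extracts hexadecimal rather than decimal digits, letting you compute the digit at any position without computing the ones before it:]

```latex
\pi \;=\; \sum_{k=0}^{\infty} \frac{1}{16^{k}}
\left( \frac{4}{8k+1} \;-\; \frac{2}{8k+4} \;-\; \frac{1}{8k+5} \;-\; \frac{1}{8k+6} \right)
```

[For most constants no such digit-extraction formula is known, and for computationally irreducible CAs the only known route to step n is to run all n steps, which is the contrast being drawn.]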
[1597] Well, can you say the same thing about the complexity on Earth then?
[1598] No, because the complexity on Earth has a copy number and an assembly index associated with that.
[1599] That CA is just a number running.
[1600] You don't think it has a copy number?
[1601] Wait, wait a minute.
Well, it does in the human case, where we're looking at humans producing different rules, but then it's nested on selection.
So those CAs are produced by selection.
[1604] I mean, the CA is such a fascinating pseudo -complexity generator.
What I would love to do is understand, quantify, the degree of surprise in the CA, and run that long enough.
But I guess what that means is we have to instantiate, we have to have a number of experiments where we're generating different rules and running them over time steps.
[1607] But, oh, got it.
[1608] CAs are mining novelty in the future, you know, in the future by iteration, right?
[1609] And you're like, oh, that's great, that's great.
[1610] You didn't predict it.
[1611] Some rules you can predict what's going to happen.
[1612] Other rules you can't.
[1613] So for me, if anything, CAs are evidence that the universe is too big to contain itself.
Because otherwise, you'd know what the rules are going to do forevermore.
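[As a concrete version of the point about rules, initial conditions, and iteration: Wolfram's Rule 30, started from a single live cell, is the standard example of a CA whose center column looks random and, as far as anyone knows, can only be obtained by actually running the steps. A minimal sketch; Rule 30 is chosen here as the usual illustration, not one named in the conversation:]

```python
# Rule 30 cellular automaton from a single live cell. Three ingredients:
# an initial condition, a fixed local rule, and time (iteration). No known
# shortcut predicts the center column without running every step.

RULE = 30  # the rule number encodes the next state for each 3-cell neighborhood

def step(cells: list[int]) -> list[int]:
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit number
        out.append((RULE >> pattern) & 1)              # look up that bit of the rule
    return out

width, steps = 63, 30
cells = [0] * width
cells[width // 2] = 1  # single live cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

[Changing RULE to 90 or 110 shows how some rules stay visibly predictable while others do not.]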
[1615] Right.
I guess you were saying that the physicists saying all you need is the initial conditions and the rules of physics are somehow missing the bigger picture.
[1617] Yeah.
And, you know, if you look at CAs, all you need is the initial condition and the rules, and then you run the thing. You need three things: you need the initial conditions, you need the rules, and you need time, iteration, to mine it out. Without that, you can't get it out.
Sure. And that, to you, is fundamental, and you can't predict it from the initial conditions.
Yeah. If you could, then it'd be fine.
And time is a resource, the foundation of... this is the history, the memory: each of the things that created it has to have that memory of all the things that led up to it.
[1620] I think it's, yeah, you have to have the resource.
[1621] Yeah.
[1622] Because time is a fundamental resource and, yeah, I'm becoming, I think I had a major epiphany about randomness, but I keep doing that every two days and then it goes away again, it's random.
[1623] You're a time fundamentalist.
[1624] You should be as well.
If you believe in free will, the only conclusion is that time is fundamental; otherwise you cannot have free will. It logically follows.
Well, the foundation of my belief in free will is just, uh, observation-driven. But I think if you use logic, it logically seems like the universe is deterministic looking backwards in time.
And that's correct, the universe is.
And then everything else is a kind of leap. It requires a leap.
I mean, I think machine learning is going to provide a big chunk of that, right?
[1628] Because it helps us explain this.
So the way I'd say it is, if you take...
That's interesting.
[1630] Why?
Well, let's just, my favorite one is, because the AI doomers are driving me mad.
[1632] And the fact that we don't have any intelligence yet.
[1633] I call AI autonomous informatics just to make people grumpy.
[1634] Yeah.
[1635] You're saying we're quite far away from AGI.
[1636] I think that we have no conception of intelligence, and I think that we don't understand how the human brain does what it does.
I think neuroscience is making great advances, but I think that we have no idea about AGI.
[1638] So I am a technological, I guess, optimist.
I believe we should do everything; the whole regulation of AI is nonsensical.
I mean, why would you regulate Excel, other than the fact that Clippy should come back? And I love Excel 97 because, you know, we can do the flight simulator.
[1642] I'm sorry, in Excel?
Yeah, have you not played the flight simulator?
In Excel 97?
[1645] Yeah.
[1646] What does that look like?
[1647] It's like wireframe, very, very basic.
But basically, I think it's X, 0, Y, 0, shift, and it opens up and you can play the flight simulator.
[1649] Yeah.
[1650] Oh, wow.
[1651] Wait, wait.
[1652] Is it using Excel?
[1653] Excel 97.
[1654] Okay.
I resurrected it the other day and saw Clippy again for the first time in a long time.
[1656] Well, Clippy is definitely coming back.
[1657] But you're saying we don't have a great understanding of what is intelligence.
What is the intelligence underpinning the human mind?
I am very frustrated.
[1661] I'm very frustrated by the way that we're AI dooming right now.
And people are bestowing some kind of magic on it.
[1663] Now, let's go back a bit.
[1664] So you said about AGI, are we far away from AGI?
[1665] Yes, I do not think we're going to get to AGI anytime soon.
[1666] I've seen no evidence of it.
[1667] And the AI doom scenario is nonsensical in the extreme.
And the reason why I think it's nonsensical, well, it's not that there aren't things we should do and be very worried about, right?
[1670] I mean, there are things we need to worry about right now, what AI are doing, whether it's fake data, fake users, right?
[1671] I want authentic people or authentic data.
[1672] I don't want everything to be faked, and I think it's a really big problem, and I absolutely want to go on the record to say, I really worry about that.
What I'm not worried about is that some fictitious entity is going to turn us into paperclips or detonate nuclear bombs, or, I don't know, anything you can't think of.
Why is this? I'll take a very simple series of logical arguments. The AI doomers have not had, they do not have the correct epistemology. They do not understand what knowledge is, and until we understand what knowledge is, they're not going to get anywhere, because they're applying things falsely. So let me give you a very simple argument. People talk about the probability, p(doom), for AI. We can work out the probability of an asteroid hitting the planet.
[1675] Why?
[1676] Because it's happened before.
[1677] We know the mechanism.
We know that there's a gravity well, that, you know, space-time is bent and stuff falls in.
[1679] We don't know the probability of AGI because we have no mechanism.
[1680] So let me give you another one, which is like, I'm really worried about AG.
[1681] What's AG?
[1682] AG, AG is anti -gravity.
One day we could wake up and anti-gravity, you know, has been discovered. We're all going to die, the atmosphere is going to float away, we're going to float away, we're all doomed.
[1684] What is the probability of AG?
[1685] We don't know because there's no mechanism for AG.
[1686] Do we worry about it?
[1687] No. And I don't understand the current reason for certain people in certain areas to be generating this nonsense.
[1688] I think they're not doing it maliciously.
I think we're observing the emergence of new religions, how religions come about, because religions are kind of about control.
You've got the optimists saying AI is going to cure us all, and the doomers saying AI is going to kill us all.
[1691] What's the reality?
[1692] Well, we don't have AI.
We have really powerful machine learning tools, and they will allow us to do interesting things, and we need to be careful about how we use those tools in terms of manipulating human beings and faking stuff, right?
[1694] Right.
Well, let me try to sort of steel man the AI doomers' argument.
Actually, I don't know.
Are AI doomers in the Yudkowsky camp saying it's definitely going to kill us? Because there's a spectrum.
95% I think is the limit, yeah.
95% plus.
No, not plus.
I don't know.
I was seeing various things on Twitter today, but I think Yudkowsky is at 95%.
But to belong to the AI Doomer Club, is there a threshold?
I don't know what the membership is.
[1705] Maybe.
[1706] And what are the fees?
I think, well, I think Scott Aaronson, and I was quite surprised, I saw this online, so it could be wrong, so sorry if it's wrong, says 2%.
But the thing is, if someone said there's a 2% chance you're going to die going into the lift, would you go into the lift?
[1710] In the elevator for the American English -speaking audience.
[1711] Well, no, not for the elevator.
So I would say anyone higher than 2%. I mean, I think there's a 0% chance of AI doom.
Just to push back on the argument, where you're at the end of zero on the AGI: we can see on Earth that there are increasing levels of intelligence in organisms.
We can see what humans, with extra intelligence, were able to do to the other species.
So that is a lot of samples of data on what a delta in intelligence gives you, how, when you have an increase in intelligence, you're able to dominate a species on Earth.
And so the idea there is that if you have a being that's 10x smarter than humans, we're not going to be able to predict what that being is going to do, especially if it has the power to hurt humans.
You can imagine a lot of trajectories in which, the more benefit AI systems give, the more control we give to those AI systems, over our power grid, over our nuclear weapons or weapons of any sort, and then it's hard to know what an ultra-intelligent system would be able to do in that case.
[1719] You don't find that convincing.
I think, I would fail that argument 100%.
Here are a number of reasons to fail it on.
[1723] First of all, we don't know where the intention comes from.
The problem is that people, you know, keep watching all the hucksters online with the prompt engineering and all this stuff.
When I talk to a typical AI computer scientist, they keep talking about the AIs as having some kind of decision-making ability.
[1726] That is a category error.
[1727] The decision -making ability comes from human beings.
We have no understanding of how humans make decisions.
[1729] We've just been discussing free will for the last half an hour, right?
[1730] We don't even know what that is.
[1731] So the intention, I totally agree with you, people who intend to do bad things can do bad things and we should not let that risk go.
[1732] That's totally here and now.
I do not want that to happen, and I'm happy to be regulated to make sure that systems I generate, whether they're computer systems or, you know, I'm working on a new project called ChemMachina.
Nice. Well done.
Yeah, yeah. Which is basically, for people who don't understand, Ex Machina is a great film about, I guess, AGI embodied, and ChemMachina is the chemistry version of that. And I only know one way to embody intelligence, and that's in chemistry, in human brains. So category error number one is calling them agents, saying they have agency. Category error number two is assuming that anything we make is going to be more intelligent. Now, you didn't say super intelligent, I'll put the words into our mouths here: super intelligent.
[1735] I think that there is no reason to expect that we are going to make systems that are more intelligent, more capable.
You know, when people play chess computers, they don't expect to win now, right?
The chess computer, it's very good at chess.
[1738] That doesn't mean it's super intelligent.
[1739] So I think that super intelligence, I mean, I think even Nick Bostrom is pulling back on this now because he invented this.
So I see this a lot.
Where did we see this first happen?
[1742] Eric Drexler, nanotechnology, atomically precise machines.
He came up with a world where we had these atomic cogs everywhere.
[1744] They were going to make self -replicating nanobots.
[1745] Not possible.
[1746] Why?
[1747] Because there's no resources to build these self -replicating nanobots.
[1748] You can't get the precision.
[1749] It doesn't work.
[1750] It was a major category error in taking engineering principles down to the molecular level.
The only functioning molecular technology we know, sorry, the only functioning nanomolecular technology we know, is produced by evolution.
[1752] There.
[1753] So now let's go forward to AGI.
[1754] What is AGI?
[1755] We don't know.
It's super, it can do things humans can't.
I would argue that the only AGIs that exist in the universe are produced by evolution.
And sure, we may be able to make our working memory better.
[1760] We might be able to do more things.
[1761] The human brain is the most compact computing unit in the universe.
It uses 20 watts and a really limited volume.
It's not like a ChatGPT cluster, which has to have thousands of watts for the model that's generated, and it has to be corrected by human beings.
[1764] You are autonomous and embodied intelligence.
[1765] So I think that there are so many levels that we're missing out.
We've just kind of gone, oh, we've discovered fire.
[1767] Oh gosh, the planet's just going to burn one day randomly.
[1768] I mean, I just don't understand that leap.
[1769] There are bigger problems we need to worry about.
[1770] So what is the motivation?
Why do these people, let's assume they're earnest, have this conviction?
Well, I think it's kind of, they're making leaps because they're trapped in a virtual reality that isn't reality.
Well, I mean, I can continue a set of arguments here, but it is also true that ideologies that fearmonger are dangerous, because you can then use them to control, to regulate in a way that halts progress, to control people and to cancel people, all that kind of stuff.
So you have to be careful, because reason ultimately wins, right?
But there are a lot of concerns with superintelligent systems, with very capable systems.
[1777] I think when you hear the word superintelligence, you're hearing like it's smarter than humans in every way that humans are smart.
But the paperclip-manufacturing system doesn't need to be smart in every way.
It just needs to be smart in a set of specific ways.
And the more capable the AI systems become, the more you could see us giving them control over, like I said, our power grid, a lot of aspects of human life.
[1781] And that means they will be able to do more and more damage when there's unintended consequences that come to life.
[1782] I think that that's right, the unintended consequences we have to think about, and that I fully agree with.
[1783] But let's go back a bit.
Sentience, I mean, again, I'm far away from my comfort zone in all this stuff, but hey, let's talk about it, because I give myself a qualification.
Yeah, we're both qualified in sentience, I think, as much as anyone else.
[1786] I think the paperclip scenario is just such a poor one, because let's think about how that would happen.
And also, let's think about how unrealistic we are being about how much of the Earth's surface we have commandeered.
You know, for paperclip manufacturing to really happen, I mean, do the math.
[1789] It's like, it's not going to happen.
[1790] There's not enough energy.
[1791] There's not enough resource.
[1792] Where are they all going to come from?
Think about what happens in evolution. Why has a killer virus not killed all life on Earth?
[1794] What happens is, sure, super killer viruses that kill the ribosome have emerged, but you know what happens?
[1795] They nuke a small space because they can't propagate.
[1796] They all die.
[1797] So there's this interplay between evolution and propagation, right, and death.
[1798] And so...
[1799] In evolution.
[1800] You don't think it's possible to engineer, for example, sorry to interrupt, but like a perfect virus?
[1801] No. That's deadly enough?
[1802] No. Nonsensical.
[1803] Okay.
I think that, again, it just wouldn't work.
If it was too deadly,
it would just kill everything in a radius and not replicate.
[1807] Yeah.
[1808] I mean, you don't think it's possible to get a...
[1809] I mean, if you were super...
[1810] I mean, I, if you were...
[1811] Not kill all of life on Earth, but kill all humans.
[1812] There's not many of us.
There's only like 8 billion of us.
There are so many more ants.
I mean, I don't...
So many more ants.
[1817] And they're pretty smart.
[1818] I think we...
The nice thing about where we are is, I would love the AI crowd to take a leaf out of the book of the bio-warfare, chemical-warfare crowd.
I mean, not love, because actually people have been killed with chemical weapons in the First and Second World Wars, and bio-weapons have been made, and, you know, we can argue about COVID-19 and all this stuff.
[1821] Let's not go there just now.
[1822] But I think there is a consensus that some certain things are bad and we shouldn't do them, right?
[1823] And sure, it would be possible for a bad actor to engineer something bad.
[1824] But the damage would be, we would see it coming and we would be able to do something about it.
[1825] Now, I guess what I'm trying to say is when people talk about doom and they just, when you ask them for the mechanism, they just say, you know, they just make something up.
I mean, in this case, I'm with Yann LeCun.
[1827] I think he put out a very good point about trying to regulate jet engines before we've even invented them.
[1828] And I think that's what I'm saying.
[1829] I'm not saying we should, I just don't understand why these guys are going around, literally making stuff up about us all dying.
[1830] Yeah.
When basically there are things we need to actually really focus on.
Now, let's say some actors are earnest.
Let's say Yudkowsky is being earnest, right?
[1834] And he really cares.
But he loves it.
He goes, and then you're all going to die.
It's like, you know, why don't we try and do the same thing and say, you could do this, and then you're going to be happy ever after.
[1839] Well, I think there's several things to say there.
[1840] One, I think there is a role in society for people that say we're all going to die.
[1841] Because I think it filters through as a message, it's a viral message that gives us the proper amount of concern.
[1842] Okay, all right.
Meaning it's not 95%.
But when you say 95% and it filters through society, you'll get an average of, like, 0.03%.
[1845] So it's nice to have people that are like, we're all going to die, then we'll have a proper concern.
[1846] Like, for example, I do believe we're not properly concerned about the threat of nuclear weapons currently.
[1847] Like, it just seems like people have forgotten that that's a thing.
[1848] And, you know, there's a war in Ukraine with a nuclear power involved.
There are nuclear powers throughout the world, and it just feels like we're on the brink of a potential world war, to a degree that I don't think people are properly calibrating in their heads.
We're all treating it like a Twitter battle as opposed to, like, an actual threat.
[1851] So like it's nice to have that kind of level of concern.
But to me, when I hear AI doomers, what I'm imagining is a potential situation where, let's say, 5% of the world suffers deeply because of a mistake made, of unintended consequences.
[1854] I don't imagine the entirety of human civilization dying, but there could be a lot of suffering if this is done.
[1855] I understand that.
And I guess I'm involved in the whole hype cycle.
So what's happening right now is, let's say there are some people saying AI doom is a worry. Fine, let's give them that.
[1858] Fine, let's give them that.
But what seems to be happening is there seem to be people who don't think AI is doom, trying to use that to control regulation and to push people to regulate, which stops humans generating knowledge.
[1860] And I am an advocate for generating as much knowledge as possible.
When it comes to nuclear weapons, I grew up in the 70s and 80s, when, with nuclear doom, a lot of adults really felt the existential threat, almost as bad as now with AI doom.
[1862] They were really worried, right?
There were some great, well, not great.
There were some horrific documentaries.
I think there's one called Threads that was made in the UK, which was, like, terrible.
[1866] It was like so scary.
[1867] And I think that the correct thing to do is obviously get rid of nuclear weapons, but let's think about unintended consequences.
[1868] We've got rid of all the sulfur particles in the atmosphere, right?
Or all the soot.
And what's happened in the last couple of years is global warming has accelerated because we've cleaned up the atmosphere too much.
[1871] Sure.
I mean, the same thing: if you get rid of nuclear weapons, you're going to see...
[1873] Exactly.
[1874] That's my point.
So what we could do is, if we actually started to put the AI in charge, which is, I'd really like an AI to be in charge of all world politics.
[1876] And this sounds ridiculous for a second.
[1877] Hang on.
But if we could all agree on the...
The AI doomers just woke up.
[1879] Yeah, yeah, yeah.
[1880] In that statement.
[1881] But I really don't like politicians who are basically just looking at local sampling.
[1882] But if you could say globally, look, here's some game theory here.
[1883] What is the minimum number of nuclear weapons we need to distribute around the world to everybody to basically reduce war to zero?
[1884] I mean, just this thought experiment of the United States and China and Russia and major nuclear powers get together and say, all right, we're going to distribute nuclear weapons to every single nation on Earth.
[1885] Yeah.
[1886] Oh, boy.
[1887] I mean, that has a probably greater than 50 % chance of eliminating major military conflict.
[1888] Yeah.
[1889] Yeah, but it's not 100%.
But I don't think anyone will use them, because, look, what you've got to try and do is, like, qualify for these nuclear weapons. This is a great idea. The game theorists could do this, right?
[1891] The game theorists could do this, right?
[1892] I think the question is this.
[1893] I really buy your question.
[1894] We have too many nukes.
Just from a feeling point of view, we've got too many of them. So let's reduce the number, but not get rid of them, because we'll have too much conventional warfare. So then, what is the minimum number of nuclear weapons we can distribute around to remove humans hurting each other? It's something we should stop doing; it's not outwith our conceptual capability. But right now, what about certain nations that are being exploited for their natural resources for a short-term gain, because we don't want to generate knowledge?
[1896] And so if everybody had an equal doomsday switch, I predict the quality of life of the average human will go up faster.
[1897] I am an optimist, and I believe that humanity is going to get better and better and better, that we're going to eliminate more problems.
[1898] But I think, yeah, let's...
[1899] But the probability of a bad actor, of one of the nations setting off a nuclear weapon...
[1900] I mean, you have to, you have to integrate that into the...
[1901] But we distribute the nukes by population, right?
[1902] We give, what we do is we...
[1903] But anyway, let's just go there.
[1904] So if a small nation with a couple of nukes uses one because they're a bit bored or annoyed, the likelihood that they are going to be pummeled out of existence immediately is 100%.
[1905] And yet they've only, they've only nuked one other city.
[1906] I know this is crazy.
[1907] and I apologize for...
[1908] Well, no, no, just to be clear, we're having a thought experiment that's interesting, but, you know, there are terrorist organizations that would take that trade.
[1909] Yeah, I mean, look, I'm...
[1910] And we have to ask ourselves the question of what percentage of humans would essentially be suicide bombers, where they would sacrifice their own life because they hate another group of people.
[1911] And that, I believe, is a very small fraction, but is it large enough, if you give out nuclear weapons?
I can predict a future where we take all nuclear material and we burn it for energy, right? Because we're getting there. And the other thing you could do is say, look, there's a gap. So we get all the countries to sign up to the virtual nuclear agreement, where we have a simulation in which we can nuke each other, and the economic consequences in the simulation are catastrophic.
Sure, in the simulation. I love it.
It's not going to kill all humans. It's just going to have economic consequences.
Yeah.
I don't know, I just made it up.
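To put rough numbers on the bad-actor worry raised here, a toy back-of-the-envelope calculation; every figure below is an invented assumption for illustration, not something from the conversation:

    # Toy arithmetic for the "give every nation a nuke" thought experiment.
    # Every number below is an invented assumption, purely for illustration.
    p_use = 1e-4     # assumed chance, per nation per year, of a launch
    nations = 195    # roughly the number of nations on Earth
    years = 50       # time horizon

    # Probability that at least one nation launches at least once:
    p_any_use = 1 - (1 - p_use) ** (nations * years)
    print(f"P(at least one launch in {years} years) = {p_any_use:.0%}")
    # With these made-up inputs it comes out around 62%: even a tiny
    # per-nation risk compounds quickly across many actors and years.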
[1912] It seems like, it's interesting.
[1913] I mean, it's interesting whether that would have as much power over human psychology as actual physical nuclear weapons.
[1914] It's possible, but people don't take economic consequences as seriously, I think, as actual nuclear weapons.
[1915] I think they do in Argentina, and they do in Somalia, and in a lot of these places. No, I think this is a great idea.
[1916] I'm a strong advocate now for, so what have we come up with? Burning all the nuclear material for energy.
[1917] And before we do that, because MAD is good.
[1918] Mutually assured destruction is very powerful.
[1919] Let's take it into the metaverse and then get people to kind of subscribe to that.
[1920] And if they actually nuke each other, even for fun in the metaverse, there are dire consequences.
[1921] Yeah.
[1922] Yeah.
[1923] So it's like a video game.
[1924] We all have to join this metaverse video game.
[1925] Yeah.
[1926] I can't believe it. Dire economic consequences.
[1927] I don't know how.
[1928] And it's all run by AI, as you mentioned.
[1929] So the AI doomers are really terrified at this point.
[1930] They're happy they have a job for another 20 years, right?
[1931] Oh, fearmongering.
[1932] Yeah, yeah, yeah.
[1933] I'm a believer in equal employment.
[1934] You've mentioned, what do you call it, Chem Machina?
[1935] Yeah.
[1936] Yeah.
[1937] So you've mentioned that a chemical brain is something you're interested in creating.
[1938] And that's the way to get conscious AI soon.
[1939] Can you explain what a chemical brain is?
[1940] I want to understand the mechanism by which intelligence arose through evolution, right?
[1941] Because the way that intelligence was produced by evolution appears to be the following.
[1942] Origin of life, multicellularity, locomotion, senses.
[1943] Once you can start to see things coming toward you, and you can remember the past, interrogate the present, and imagine the future, you can do something amazing, right? And I think only in recent years did humans become Turing complete.
Right. Yeah.
And so that Turing completeness kind of gave us another kick up. But our ability to process that information is produced in a wet brain, and I think that we do not have the correct hardware architectures to have the domain flexibility and the ability to integrate information.
[1944] I think intelligence also comes at a massive compromise of data.
[1945] Right now, we're obsessing about getting more and more data, more and more processing, more and more tricks to get dopamine hits.
[1946] So when we look back on this, we'll go, oh yeah, that was really cool.
[1947] Because when I use ChatGPT, it makes me feel really happy.
[1948] I got a hit from it, but actually it just exposed how little intelligence I use in every moment because I'm easily fooled.
[1949] So what I would like to do is to say, well, hey, hang on, what is it about the brain?
[1950] So the brain has this incredible connectivity and it has the ability to, you know, as I said earlier, about my nephew, you know, I went from Bill to Billy and he went, all right, Leroy.
[1951] Like, how did he make that leap? He was able to, basically without any training... I extended his name, and he doesn't like that, he wants to be called Bill, so he went back and said, you like to be called Lee? I'm going to call you Leroy. So human beings, or intelligent beings, appear to have a brilliant ability to integrate across all domains all at once, and to synthesize something which allows us to generate knowledge, and to become Turing complete on our own. Although AIs are built on Turing-complete things, their thinking is not Turing complete, in that they are not able to build universal explanations.
[1952] And that lack of universal explanation means that they're just inductivists.
[1953] Inductivism doesn't get you anywhere.
[1954] It's just basically a party trick.
[1955] It's like, you know, I think it's in The Fabric of Reality by David Deutsch, where basically the farmer is feeding the chicken every day, and the chicken's getting fat and happy, and the chicken's like, I'm really happy, every time the farmer comes in he feeds me.
[1956] And then one day the farmer comes in and, instead of feeding the chicken, just wrings its neck.
[1957] And that's kind of... had the chicken had an alternative understanding of why the farmer was feeding it...
[1958] It's interesting, though, because we don't know what's special about the human mind that's able to come up with these kinds of generalities, these universal theories of things, and to come up with novelty.
[1960] I can imagine, because you gave an example about William and Leroy, I feel like an example like that we'll be able to see in future versions of large language models.
[1961] We'll be really, really, really impressed by the humor, the insights, all of it, because it's fundamentally trained on all the incredible humor and insights that are available out there on the internet, right?
[1963] So we'll be impressed.
[1964] I think we'll be impressed.
[1965] Oh, I'm impressed.
[1966] Increasingly so.
[1967] But we're mining the past.
[1968] Yes.
[1969] And what the human brain appears to be able to do is mine the future.
[1970] Yes.
[1971] So novelty.
[1972] It is interesting whether these large language models will ever be able to come up with something truly novel.
[1973] I can show on the back of a piece of paper why that's impossible.
[1974] And the problem is that, again, there are domain experts kind of bullshitting each other.
[1975] The term generative.
[1976] Yes.
[1977] Right.
[1978] The average person thinks it's generating something new. No, no, no. Look, if I take the numbers between zero and 1,000, and I train a model to pick out the prime numbers by giving it all the prime numbers between zero and 1,000, it doesn't know what a prime number is.
[1979] Occasionally, if I cheat a bit, it will start to guess, but it will never produce anything outwith the data set, because you're mining the past.
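Cronin's prime-number example is easy to reproduce. Here's a minimal sketch, assuming scikit-learn and SymPy are available; it illustrates the claim, it is not an experiment from the episode:

    # Minimal sketch of the prime-number point (illustrative only).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sympy import isprime

    X_train = np.arange(2, 1000).reshape(-1, 1)                     # numbers 2..999
    y_train = np.array([isprime(int(n)) for n in X_train.ravel()])  # prime labels

    model = DecisionTreeClassifier().fit(X_train, y_train)
    print("in-range accuracy:", model.score(X_train, y_train))     # 1.0: pure memorization

    # Outwith the training range the model has no concept of primality:
    # every number above 999 lands in the same leaf and gets one constant
    # label, so any apparent accuracy is just the base rate of non-primes.
    X_test = np.arange(1000, 2000).reshape(-1, 1)
    y_test = np.array([isprime(int(n)) for n in X_test.ravel()])
    print("out-of-range accuracy:", model.score(X_test, y_test))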
[1981] The thing that I'm getting to is I think that actually current machine learning technologies might actually help reveal why time is fundamental.
[1982] It's like kind of insane because they tell you about what's happened in the past, but they can never help you understand what's happening in the future without training examples.
[1983] Sure, if that thing happens again... So let's think about what large language models are doing.
[1984] We have the language, we have all the language of the internet as we know it, but they're also doing something else.
[1985] We're having human beings correcting it all the time.
[1986] Those models are being corrected.
[1987] Steered.
[1988] Corrected.
[1989] Modified, tweaked.
[1990] It's cool.
[1991] Yeah, but cheating.
[1992] Cheating.
[1993] Well, you could say the training on human data in the first place is cheating.
[1994] Well, let me, human is in the loop.
[1995] Sorry to interrupt.
[1996] Yes, so human is definitely in the loop.
[1997] But it's not just a human in the loop.
[1998] A very large collection of humans is in the loop.
[1999] And that could be...
[2000] I mean, to me, it's not intuitive, when you say prime numbers, that the system can't generate an algorithm, right?
[2001] That...
[2002] The algorithm that can generate prime numbers or the algorithm that can tell you if a number is prime and so on.
[2003] And generate algorithms that generate algorithms that generate algorithms that start to look a lot like human reasoning, you know?
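The kind of compact, general rule Lex is pointing at is tiny to write down; a model that could reliably produce and apply something like this, rather than replay a memorized list, would be doing more than interpolation. A standard trial-division check, shown purely for illustration:

    # Standard trial-division primality check: a general rule,
    # as opposed to a memorized list of primes.
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        d = 2
        while d * d <= n:       # only need divisors up to sqrt(n)
            if n % d == 0:
                return False
            d += 1
        return True

    print([n for n in range(2, 30) if is_prime(n)])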
[2004] I don't think so. I think, again, we can show that on a piece of paper.
[2005] In short, you have to have... this is the failure in epistemology.
[2006] Like, I'm glad I can even say that word, let alone know what it means.
[2007] You said it multiple times.
[2008] I know, it's like three times now.
[2009] Without failure.
[2010] Quit while you're ahead.
[2011] Just don't say it again, because you did really well.
[2012] Thanks.
[2013] But I think, so, what is reasoning? Coming back to the chemical brain: I'm never going to make an intelligence in Chem Machina, because we don't have brain cells, we don't have glial cells, we don't have neurons. But if I can take a gel and engineer the gel to be a hybrid hardware for reprogramming, which I think I know how to do, I will be able to process a lot more information, and train models billions of times more cheaply, and use cross-domain knowledge.
[2014] And there's certain techniques I think we can do.
[2015] But they're still missing the ability human beings have had to become Turing complete.
[2016] And so I guess the question to throw back at you is: how do you tell the difference between trial and error and the generation of new knowledge?
[2017] I think the way you can do it is this: you come up with a theory, an explanation, the inspiration comes from outside, and then you test it, and then you see whether it's going towards a truth. Human beings are very good at doing that in the transition between philosophy, mathematics, physics, and the natural sciences. And I think that we can see that.
[2018] Where I get confused is why people misappropriate the term artificial intelligence to say, hey, there's something else going on here because I think you and I both agree, machine learning is really good.
[2019] It's only going to get better, and we're going to get happier with the outcomes.
[2020] But why would you ever think the model was thinking or reasoning?
[2021] Reasoning requires intention.
[2022] And if the model isn't reasoning, the intention comes from the prompter, and the intention comes from the person who programmed it to do it.
[2023] But don't you think you can prompt it to have intention? Basically, start with the initial conditions and get it going. You know, currently large language models, ChatGPT, only talk to you when you talk to them.
[2024] There's no reason why you can't just start it talking.
[2025] But those initial conditions came from someone starting it, and that causal chain is in there.
[2027] So that intention comes from the outside.
[2028] I think that there is something in that causal chain of intention that's super important.
[2029] I don't disagree we're going to get to AGI.
[2030] It's a matter of when and what hardware.
[2031] I think we're not going to do it in this hardware, and I think we're unnecessarily fetishizing really cool outputs and dopamine hits, because obviously that's what people want to sell us.
[2032] Well, AGI is a loaded term, but there could be incredibly super-impressive intelligent systems on the way to AGI.
[2033] So these large language models, I mean, if it appears conscious, if it appears super -intelligent, who are we to say it's not?
[2034] I agree, but the superintelligence I want, I want to be able to have a discussion with it about coming up with fundamental new ideas that generate knowledge.
[2036] And if the superintelligence we generate can mine novelty from the future, that I didn't see in its training set from the past, I would agree that something really interesting is going on.
[2037] I'll say that again: if the intelligent system, be it a human being, a chatbot, something else, is able to produce something truly novel that I could not predict even with a full audit trail of the past, then I'll be sold.
[2038] Well, so we should be clear that it can currently produce things that are in a shallow sense novel that are not in the training set, but you're saying truly novel.
[2039] I think they are in the training set.
[2040] I think everything it produces comes from a training set.
[2041] There's a difference between novelty and interpolation.
[2042] We do not understand where these leaps come from yet.
[2043] That is what intelligence is, I would argue.
[2044] Those leaps, and some people say, no, it's actually just what will happen if you just do cross -domain training and all that stuff.
[2045] And that may be true.
[2046] And I may be completely wrong.
[2047] But right now, the human mind is able to mind novelty in a way that artificial intelligence systems cannot.
[2048] And this is why we still have a job and we're still doing stuff.
[2049] And, you know, I used ChatGPT for a few weeks.
[2050] Oh, this is cool.
[2051] And then, well, what happened is it took me too much time to correct it.
[2052] Then it got really good.
[2053] And now they've done something to it.
[2054] It's not actually that good.
[2055] Yeah.
[2056] Right.
[2057] I don't know what's going on.
[2058] Censorship, yeah.
[2059] I mean, that's interesting, but it will push us humans to characterize novelty better.
[2060] Like, characterize the novel.
[2061] Like, what is novel?
[2062] What is truly novel?
[2063] What's the difference between novelty and interpolation?
[2064] I think the thing that makes me most excited about these technologies is they're going to help me demonstrate to you that time is fundamental and the future is bigger than the present, which is why human beings are quite good at generating novelty: we have to expand our data set to cope with unexpected things in our environment.
[2065] Our environment throws them all at us.
[2066] Again, we have to survive in that environment.
[2067] And I mean, I never say never.
[2068] I would be very interested in how we can get cross-domain training cheaply in chemical systems, because I'm a chemist, and the only such thing I know of is the human brain, but maybe that's just me being boring and predictable and not novel.
[2069] Yeah, you mentioned GPT for electron density.
[2070] So a GPT-like system for generating molecules that bind to a host automatically.
[2071] I mean, that's interesting.
[2072] That's really interesting, applying the same kind of transformer mechanism to it.
I mean, with my team I try and do things that are non-obvious, but non-obvious in certain areas. And one of the things I was always asking about in chemistry: people like to represent molecules as graphs, and that's quite difficult, it's really hard, if you're doing AI in chemistry. You really want to have good representations, so you can generate new molecules that are interesting. And I was thinking, well, molecules aren't really graphs, and graphs are not continuously differentiable.
[2073] Could I do something that was continuously differentiable?
[2074] I was like, well, molecules are actually made up of electron density.
[2075] So then I got thinking and say, well, okay, could there be a way where we could just basically take a database of readily solved electron densities for millions of molecules?
[2076] So we took the electron density for millions of molecules and just trained the model to learn what electron density is.
[2077] And so what we built was a system that you literally could give it a, let's say you could take a protein that has a particular active site or, you know, a cup with a certain hole in it.
[2078] You pour noise into it.
[2079] And with a GPT, you turn the noise into electron density.
[2080] And then, in this case, it hallucinates like all of them do, but the hallucinations are good, because it means I don't have to train on such a huge data set, because these data sets are very expensive.
[2081] because how do you produce it?
[2082] So go back a step.
[2083] You've got all these molecules in this data set, but what you've literally done is a quantum mechanical calculation to produce the electron density for each molecule.
[2084] So you say, oh, this representation of this molecule has this electron density associated with it.
[2085] So you know what the representation is and you train the neural network.
[2086] You know what electron density is.
[2087] So then you give it an unknown pocket.
[2088] You pour in noise and you say, right, produce me electron density.
[2089] It produces electron density that doesn't look ridiculous.
[2090] And what we did in this case is we produced electron density that maximizes the electrostatic potential, so the stickiness, but minimizes what we call steric hindrance.
[2091] So the overlap is repulsive.
[2092] So, you know, make the perfect fit.
[2093] And then we use a kind of ChatGPT-type thing to turn that electron density into what's called a SMILES.
[2094] A SMILES string is a way of representing a molecule in letters.
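As a concrete illustration of what a SMILES string looks like, here's a quick sketch with RDKit, a common open-source cheminformatics toolkit; it's chosen for illustration, the episode doesn't say which tools Cronin's group uses:

    # SMILES encodes a molecular graph as a line of text (illustrative only).
    from rdkit import Chem

    aspirin = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as SMILES
    print(Chem.MolToSmiles(aspirin))   # canonical SMILES string
    print(aspirin.GetNumAtoms())       # heavy-atom count: 13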
[2095] And then we can then.
[2096] It just generates them.
[2097] Just generates them.
[2098] And then the other thing is, we bung that into the computer, and then it just makes it.
[2099] Yeah.
[2100] The computer being, right, the robot that we've got that can basically just do chemistry.
[2101] So we've kind of got this end -to -end drug discovery machine where you can say, oh, you want to bind to this active site, here you go.
[2102] I mean, it's a bit leaky and things kind of break, but it's a proof of principle.
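Pulling those steps together, the loop Cronin describes might look roughly like the toy sketch below; every function here is an invented stand-in for illustration, not the group's actual system:

    # Toy, self-contained sketch of the pour-noise -> denoise -> score
    # pipeline described above. All functions are invented stand-ins.
    import numpy as np

    def denoise(density, pocket, step=0.1):
        # Stand-in for the learned generative model: nudge the noise toward
        # the empty space of the pocket, where a guest molecule should sit.
        target = 1.0 - pocket
        return density + step * (target - density)

    def electrostatic_fit(density, pocket):
        # Stand-in "stickiness": reward density filling the pocket's hole.
        return float(np.sum(density * (1.0 - pocket)))

    def steric_clash(density, pocket):
        # Stand-in repulsion: penalize density overlapping the pocket walls.
        return float(np.sum(density * pocket))

    pocket = np.array([1, 1, 0, 0, 0, 1, 1], dtype=float)  # crude 1-D "pocket"

    rng = np.random.default_rng(0)
    density = rng.random(pocket.shape)   # "pour noise into it"
    for _ in range(20):                  # iterative generation
        density = denoise(density, pocket)

    score = electrostatic_fit(density, pocket) - steric_clash(density, pocket)
    print("fit score:", round(score, 2))
    # A real system would now decode the density into a SMILES string and
    # hand it to the robotic chemistry platform for synthesis.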
[2103] Well, the hallucinations, are those still accurate?
[2104] Well, the hallucinations are really great in this case.
[2106] Because in the case of a large language model, the hallucinations, it doesn't just make everything up, it gives you an output that you're plausibly comfortable with, that it thinks is right probabilistically.
[2107] The problem with these electron density models is it's very expensive to solve the Schrödinger equation as you go up to many heavy atoms and large molecules.
[2108] And so we wondered, if we trained the system on up to nine heavy atoms, whether it would go beyond nine.
[2110] And it did.
[2111] It started to generate molecules with 12, no problem.
[2112] No problem.
[2113] They looked pretty good.
[2114] And I was like, well, this hallucination I will take for free.
[2115] Thank you very much.
[2116] Because this is basically a case where interpolation, extrapolation, worked relatively well.
[2117] And we were able to generate really good molecules.
[2118] And then, and this is a really good point, what I was trying to say earlier: we were able to generate new molecules, beyond the known data set, that would bind to the host.
[2120] So a new guest would bind.
[2121] Were these truly novel?
[2122] Not really because they were constrained by the host.
[2123] Were they new to us?
[2124] Yes.
[2125] So I do understand, I can concede, that machine learning systems, artificial intelligence systems, can generate new entities, but how novel they are remains to be seen.
[2126] Yeah, and how novel the things that humans generate is also difficult to quantify.
[2127] They seem novel.
[2128] That's what a lot of people say.
[2129] So the way to really get to genuine novelty, and the assembly theory shows you the way, is to have different causal chains overlap.
[2130] And it really resonates with the time is fundamental argument.
[2131] And if you're bringing together a couple of objects with different initial conditions coming together, when they interact, the more different their histories, the more novelty they generate in time going forward.
[2132] And so it could be that genuine novelty is basically about mix it up a little, and the human brain is able to mix it up a little and all that stimulus comes from the environment.
[2133] But all I think I'm saying is the universe is deterministic going back in time, non-deterministic going forward in time, because the universe is too big in the future to contain in the present.
[2135] Therefore, these collisions of known things generate unknown things that then become part of your data set and don't appear weird.
[2136] That's how we give ourselves comfort.
[2137] The past looks consistent with this initial-conditions hypothesis, but actually we're generating more and more novelty.
[2138] And that's how it works.
[2139] Simple.
[2140] So it's hard to quantify novelty looking backwards.
[2141] I mean, the present and the future are the novelty generators.
[2142] But I like this whole idea of mining novelty.
[2143] I think it's going to reveal the limitations of current AI. It's a bit like the printing press, right?
[2144] Everyone thought that when the printing press came, writing books was going to be terrible, that they had evil spirits and all this.
[2145] They were just books.
[2146] And same would be with AI.
[2147] Yeah.
[2148] But I think the scale you can achieve in terms of impact with AI systems is pretty nerve-wracking.
[2150] But that's what the big companies want you to think.
[2151] Not like in terms of destroying all humans, but it can have major consequences.
[2152] In the way social media has had major consequences, both positive and negative.
[2153] And so you have to kind of think about it and worry about it.
[2154] But yeah, people that fearmonger, you know.
[2155] My pet theory for this, if you want to know, is, and maybe I'm being unfair here, because I really do respect a lot of the people out there who are trying to have a discourse about the positive future.
[2156] So the OpenAI guys, the Meta guys, and all this.
[2157] And I wonder if they're trying to cover up for the fact that social media has had a pretty disastrous effect at some level.
[2158] And they're just trying to say, oh, yeah, we should do this.
[2159] Covering up for the fact that we have got some problems with, you know, teenagers and Instagram and Snapchat and all this stuff, and maybe they're just overreacting now.
[2161] Yeah.
[2162] It's like, oh yeah, sorry, we made the bubonic plague and gave it to you all, and you're all dying, and oh yeah, but look at this over here.
[2163] It's even worse.
[2164] Yeah, there's a little bit of that, but there's also not enough celebration of the positive impact that all these technologies have had.
[2165] We tend to focus on the negative and tend to forget that, in part because it's hard to measure.
[2166] Like, it's very hard to measure the positive impact social media had on the world.
[2167] Yeah, I agree.
[2168] What I worry about right now... I really do care about the ethics of what we're doing. One of the reasons why I'm so open about the things we're trying to do in the lab, make life, look at intelligence, all this, is so people ask, what are the consequences of this? And you say, well, what are the consequences of not doing it? And I think what worries me right now, in the present, is the lack of authenticated users and authenticated data.
And human users.
Yeah, human.
I still think that there will be AI agents that appear to be conscious, but they would have to be authenticated and labeled as such.
[2169] There's too much value in that, you know, like friendships with AI systems.
[2170] There are too many meaningful human experiences to have with AI systems that I just...
[2171] But that's like a tool, right?
[2172] It's a bit like a meditation tool, right?
[2173] Some people have a meditation tool.
[2174] It makes them feel better.
[2175] But I'm not sure you can ascribe sentience and some legal rights to a chatbot that makes you feel less lonely.
[2176] Sentience, yes, I think legal rights, no. I think it's the same.
[2177] You can have a really deep meaningful relationship with a dog.
[2178] But the dog's sentient?
[2179] Yes.
[2180] The chatbots right now, using the technology we use, it's not going to be sentient.
[2181] This is going to be a fun continued conversation on Twitter that I look forward to.
[2182] Since you've had also, from another place, some debates that were inspired by the assembly theory paper.
[2183] Let me ask you about God.
[2184] Is there any room for notions of God in assembly theory?
[2185] Um, God.
[2186] Yeah, I don't know what God is. I mean, God exists in our minds, created by selection.
[2187] So the human beings have created the concept of God in the same way that human beings have created the concept of superintelligence.
[2188] Sure.
[2189] But does it not still mean that that could be a projection from the real world, that we're just assigning words and concepts to a thing that is fundamental to the real world, that there is something out there that is a creative force underlying the universe?
[2190] I think there is a creative force in the universe, but I don't think it's sentient. I mean, I do not understand the universe.
[2191] So who am I to say, you know, that God doesn't exist?
[2192] I am an atheist, but I'm not an angry atheist, right?
[2193] There are some people I know that are angry atheists, who say that religious people are stupid.
[2194] I don't think that's the case.
[2195] I have faith in some things because, I mean, when I was a kid, I was like, I need to know what the charge of an electron is.
[2196] I can't measure the charge of an electron.
[2197] I just gave up and had faith.
[2198] Okay, you know, resistors worked.
[2199] So when it comes to, I want to know why the universe is growing in the future and what humanity is going to become.
[2200] And I've seen that the acquisition of knowledge via the generation of novelty to produce technology has uniformly made humans' lives better.
[2201] I would love to continue that tradition.
[2202] And you said that there's that creative force. Do you think, just to linger on that point, do you think there is a creative force? Like, is there a thing, a driver, that's creating stuff?
Yeah, I think so.
What is it? Can you describe it, like, mathematically?
Well, I think selection. Selection is the force in the universe that creates novelty.
[2203] So, is selection somehow fundamental?
[2204] Like, what...
[2205] Yeah, I think it's the persistence of objects that could decay into nothing, through operations that maintain that structure.
[2206] I mean, think about it.
[2207] It's amazing that things exist at all, that we're just not a big combinatorial mess.
[2208] Yes.
[2209] So the fact...
[2210] And the things that exist persist in time.
[2211] Yeah.
[2212] I mean, let's think: maybe the universe is actually such that, in the present, everything that can exist does exist.
[2214] Well, that would mean it's deterministic, right?
[2215] No, I think... so the universe started super small, the past was deterministic, there wasn't much going on, and it was able to mine, mine, mine, mine, mine.
[2216] And so the process is somehow generating the universe, basically. I'm trying to put this into words.
[2217] Did you just say there's no free will, though?
[2218] No, I didn't say that.
[2219] As in, everything that can exist does exist?
[2220] I said there is free will.
[2221] I think I'm saying that free will occurs at the boundary between the past and the future.
[2222] The past and the future.
[2223] Yeah, I got you.
[2224] But everything that can exist does exist?
[2225] Everything that's possible to exist at this moment?
[2226] So, no, I'm really struggling to put this into words.
[2227] There's a lot of loaded words there.
[2228] There's a time element loaded into that.
[2229] I think that the universe is able to do what it can in the present, right?
[2230] Yeah.
[2231] And then I think in the future, there are the things that could be possible.
[2232] We can imagine lots of things, but they don't all happen.
[2233] Sure.
[2234] So that's where I guess I'm getting to.
[2235] That's where you sneak in free will right there.
[2236] Yeah.
[2237] So I guess what I'm saying is what exists is a convolution of the past with the present and the free will going into the future.
[2238] Well, we can still imagine stuff, right?
[2239] We can imagine stuff that'll never happen.
[2240] And imagination is an amazing force.
[2241] This is the most important thing that we don't understand: our imaginations can actually change the future in a tangible way, which the initial conditions and physics cannot predict.
[2242] Like, your imagination has a causal consequence in the future.
[2243] Isn't that weird to you?
[2244] Yeah.
[2245] It breaks the laws of physics as we know them right now.
[2247] Yeah, so you think the imagination has a causal effect on the future.
[2248] Yeah.
[2249] But it does exist in there, in the head.
[2250] It does.
[2251] There must be a lot of power in whatever's going on.
[2252] There could be a lot of power in whatever's going on in there.
[2253] If we then go back to the initial conditions, that is simply not possible, that can't happen.
[2254] But if we go to a universe where we accept that there is a finite ability to represent numbers, and you have rounding, then what happens is your ability to make decisions, imagine, and do stuff is at that interface between the certain and the uncertain.
[2255] It's not, as Joscha was saying to me, that randomness comes in and you just randomly do random stuff.
[2256] It is that you are set free a little on your trajectory.
[2257] Free will is about being able to explore on this narrow trajectory that allows you to build, that you have a choice about what you build, and that choice is you interacting with the future in the present.
[2258] What to you is most beautiful about this whole thing, the universe?
[2259] The fact that it seems to be very undecided, very open, and the fact that every time I think I'm getting towards an answer to a question, there are so many more questions that keep the chase going, you know.
[2260] Do you hate that it's going to be over at some point?
[2262] Well, for me, I think if you think about it, is it over for Newton now?
[2263] Newton has had causal consequences in the future.
[2264] We discuss him all the time.
[2265] His ideas, but not the person.
[2266] The person just had a lot of causal power when he was alive, but, oh my God, one of the things I want to do is leave as many Easter eggs as I can for the future when I'm gone, so people go, oh, that's cool.
[2267] Would you be very upset if somebody made, like, a good large language model that's fine-tuned to Lee Cronin?
[2268] It would be quite boring because, I mean, I'm...
[2269] No novelty generation?
[2270] I mean, if it's a faithful representation of what I've done in my life, that's great.
[2271] That's an interesting artifact.
[2272] But I think the most interesting thing about knowing each other is we don't know what we're going to do next.
[2273] Sure.
[2274] Sure.
[2275] I mean, within some constraints, I can predict some things about you, and you can predict some things about me, but we can't predict everything. And it's because we can't predict everything that it's exciting to come back and discuss and see.
So yeah, I'm happy that some of the things that I've done can be captured, but I'm pretty sure that my angle on mining novelty from the future will not be captured.
Yeah. So that's what life is, just some novelty generation, and then you're done.
[2277] Each one of us just generates a little bit, or has the capacity to, at least.
[2278] I think selection produces life, and life affects the universe.
[2279] Universes with life in them are materially and physically fundamentally different from universes without life, and that's super interesting, and I have only the beginnings of an understanding of it.
[2280] I think maybe in a thousand years this will be a new discipline, and humans will go...
[2281] Yeah, of course.
[2282] This is how it all works, right?
[2283] And in retrospect, it will all be obvious, I think.
[2284] I think assembly theory is obvious.
[2285] That's why a lot of people got angry, right?
[2286] They were like, oh, my God, this is such nonsense.
[2287] You know, and then, oh, actually, it's not quite...
[2288] But the writing's really bad.
[2289] Well, I can't wait to see where it evolves.
[2290] Lee, and I'm glad I get to exist in this universe with you.
[2291] You're a fascinating human.
[2292] This is always a pleasure.
[2293] I hope to talk to you many more times and I'm a huge fan of just watching you create stuff in this world and thank you for talking today.
[2294] It's a pleasure as always Lex.
[2295] Thanks for having me on.
[2296] Thanks for listening to this conversation with Lee Cronin.
[2297] To support this podcast, please check out our sponsors in the description.
[2298] And now let me leave you with some words from Carl Sagan.
[2299] We can judge our progress by the courage of our questions and the depth of our answers, our willingness to embrace what is true rather than what feels good.
[2300] Thank you for listening and hope to see you next time.