#321 – Ray Kurzweil: Singularity, Superintelligence, and Immortality

Lex Fridman Podcast


Full Transcription:

[0] The following is a conversation with Ray Kurzweil, author, inventor, and futurist, who has an optimistic view of our future as a human civilization, predicting that exponentially improving technologies will take us to a point of a singularity, beyond which superintelligent artificial intelligence will transform our world in nearly unimaginable ways.

[1] 18 years ago, in the book The Singularity Is Near, he predicted that the onset of the singularity will happen in the year 2045.

[2] He still holds to this prediction and estimate.

[3] In fact, he's working on a new book on this topic that will hopefully be out next year.

[4] And now, a quick few-second mention of each sponsor.

[5] Check them out in the description.

[6] It's the best way to support this podcast.

[7] We got Shopify for e-commerce, NetSuite for business management software, Linode for Linux systems, Masterclass for online learning, and Indeed for hiring.

[8] Choose wisely, my friends.

[9] And now, onto the full ad reads, as always, no ads in the middle.

[10] I try to make this interesting, but if you skip them, please still check out our sponsors.

[11] I enjoy their stuff.

[12] Maybe you will too.

[13] This show is brought to you by Shopify, a platform designed for anyone to sell anywhere, with a great-looking online store that brings your ideas to life and tools to manage day-to-day operations.

[14] I am so long overdue on merch.

[15] As a fan of a lot of people and podcasts and shows and stuff like that, I love getting their merch.

[16] It's a cool way to just celebrate the thing you love.

[17] Shopify is an exemplary company in this whole space.

[18] It seems like a fun thing to do.

[19] Merch is fun.

[20] And so anyway, Shopify is a platform that enables that kind of thing, super easy, very easy to set up, very easy to buy from.

[21] That's why, what is it, 1.7 million entrepreneurs use it.

[22] That's wild.

[23] Get a free trial and full access to Shopify's entire suite of features when you sign up at shopify.com slash Lex.

[24] That's all lowercase, shopify.com slash Lex.

[25] This show is also brought to you by NetSuite, an all-in-one cloud business management system.

[26] It manages financials, human resources, inventory, e-commerce, and many business-related details.

[27] I talk to a lot of people that run large, medium-sized, and small companies.

[28] A lot of them are becoming good friends of mine.

[29] It's just clear to me that running a company is hard at every stage of it, from a startup when it's just you, one founder or co-founder, to when it's many thousands of people.

[30] It's just so complicated.

[31] There's so many things involved, so many more things than just the kind of engineering or the brainstorming or the idea development or product development, that kind of stuff.

[32] And that's the stuff I love, of course, but you have to get everything else right too.

[33] From human resources to the hiring process, the hiring process alone is just so fascinating.

[34] So whatever sort of machinery of running a business you need to get right, you should use the best tool for the job.

[35] That's what NetSuite is.

[36] You can go to netsuite.com slash Lex to access their one-of-a-kind financing program.

[37] This episode is also brought to you by Linode, Linux virtual machines.

[38] I mean, the best operating system.

[39] The best interface in terms of cloud compute I've ever used is Linode.

[40] The best operating system is Linux.

[41] What else do you want from life, friends?

[42] The best operating system and the best cloud service, and it makes it super easy. You can create as many machines as you want.

[43] You can get your computer infrastructure for a small project, for a huge project.

[44] You can develop.

[45] You can deploy any scale.

[46] It works.

[47] There's also really good customer service 24 -7 all days of the year.

[48] It's lower cost than AWS.

[49] But, I mean, so many things I just love, including the interface.

[50] Just the little things.

[51] The little things about the online interface, managing the computer instances, creating new ones.

[52] monitoring them, just everything.

[53] The whole thing, super intuitive.

[54] I love it.

[55] Visit linode.com slash Lex for free credit.

[56] This show is brought to you by Masterclass.

[57] $180 a year gets you an all -access pass to watch courses from the best people in the world in their respective disciplines.

[58] The list here, friends, is ridiculous.

[59] I can mention a few.

[60] A few of the ones I've listened to, and there's actually a few I'm not going to mention that I've listened to that are amazing as well: Chris Hadfield, Neil deGrasse Tyson, Will Wright on game design, Carlos Santana, probably my favorite guitar player, Garry Kasparov on chess, Daniel Negreanu on poker, and Daniel Negreanu is soon going to be a guest. Martin Scorsese, if there's a person that's a dream guest on this podcast, it's probably Martin Scorsese, he's definitely up there. If you want to learn about a thing, learn it from the best people in the world, period.

[61] Not the best educators of a thing, but the best doers of a thing.

[62] Anyway, I highly recommend checking it out. Get unlimited access to every Masterclass and get 15% off an annual membership at masterclass.com slash Lex.

[63] This show is brought to you by Indeed, a hiring website.

[64] I've used them many times for many hiring efforts I've done in the past.

[65] By the way, I'm currently using them for a large amount of hiring I'm doing, and you should visit lexfridman.com slash hiring.

[66] I have a bunch of job posts there.

[67] Or you can go through Indeed.

[68] I have the post up there as well.

[69] Anyway, it doesn't matter.

[70] The point is, this is the best tool for the job of hiring.

[71] It does the first step really well, which is getting you the first batch of quality candidates.

[72] I think they call it Indeed Instant Match.

[73] They sort of basically match the resumes to the job description, and give you a really nice initial pool and then help you filter that pool down.

[74] I mean, the whole process.

[75] From the beginning pool to the end: the perfect candidate that will make your life a beautiful, flourishing experience.

[76] They help you out with that.

[77] Indeed has a special offer only available for a limited time.

[78] Check it out at Indeed .com slash Lex.

[79] This is the Lex Fridman Podcast.

[80] To support it, please check out our sponsors in the description.

[81] And now, dear friends, here's Ray Kurzweil.

[82] In your 2005 book titled The Singularity Is Near, you predicted that the singularity will happen in 2045.

[83] So now, 18 years later, do you still estimate that the singularity will happen in 2045?

[84] And maybe first, what is the singularity?

[85] The technological singularity, and when will it happen?

[86] Singularity is where computers really change our view of what's important and change who we are.

[88] But we're getting close to some salient things that will change who we are.

[89] A key thing is 2029, when computers will pass the Turing test.

[90] And there's also some controversy whether the Turing test is valid.

[91] I believe it is.

[92] Most people do believe that, but there's some controversy about that.

[93] But Stanford got very alarmed at my prediction about 2029.

[94] I made this in 1999 in my book.

[95] The Age of Spiritual Machines.

[96] Right.

[97] And then you repeated the prediction in 2005.

[98] In 2005.

[99] So they held an international conference, you might have been aware of it, of AI experts in 1999 to assess this view.

[100] So people gave different predictions.

[101] And they took a poll.

[102] It was really the first time that AI experts worldwide were polled on this prediction.

[103] And the average poll was 100 years.

[104] 20 % believed it would never happen.

[105] And that was the view in 1999.

[106] 80 % believed it would happen, but not within their lifetimes.

[107] There's been so many advances in AI that the poll of AI experts has come down over the years.

[108] So a year ago, something called Metaculus, which you may be aware of, which assesses predictions from different types of experts on the future, again assessed what AI experts then felt.

[109] And they were saying 2042.

[110] For the Turing test.

[111] For the Turing test.

[112] So it's coming down.

[113] And I was still saying 2029.

[114] A few weeks ago, they again did another poll.

[115] and it was 2030.

[116] So AI experts now basically agree with me. I haven't changed at all.

[117] I've stayed with 2029.

[118] And AI experts now agree with me, but they didn't agree at first.

[119] So Alan Turing formulated the Turing test.

[120] Right.

[121] Now, what he said was very little about it.

[122] I mean, in the 1950 paper where he articulated the Turing test, there are only a few lines that talk about the Turing test, and it really wasn't very clear how to administer it. And he said if they did it in like 15 minutes, that would be sufficient, which I don't really think is the case. These large language models now, some people are convinced by them already. I mean, you can talk to it and have a conversation with it.

[123] You can actually talk to it for hours.

[124] So it requires a little more depth.

[125] There's some problems with large language models, which we can talk about.

[126] But some people are convinced by the Turing test.

[127] Now, if somebody passes the Turing test, what are the implications of that?

[128] Does that mean that they're sentient, that they're conscious or not?

[129] It's not necessarily clear what the implications are.

[130] Anyway, I believe 2029, that's six, seven years from now.

[131] We'll have something that passes the Turing test and a valid Turing test, meaning it goes for hours, not just a few minutes.

[132] Can you speak to that a little bit?

[133] What is your formulation of the Turing test?

[134] You've proposed a very difficult version of the Turing Test.

[135] So what does that look like?

[136] Basically, it's just to assess it over several hours and also have a human judge that's fairly sophisticated on what computers can do and can't do.

[137] If you take somebody who's not that sophisticated or even an average engineer, they may not really assess various aspects of it.

[138] So you really want the human to challenge the system?

[139] Exactly, exactly.

[140] On its ability to do things like common sense reasoning, perhaps.

[141] That's actually a key problem with large language models.

[142] They don't do these kinds of tests that would involve assessing chains of reasoning.

[143] But you can lose track of that.

[144] If you talk to them, they actually can talk to you pretty well, and you can be convinced by it.

[145] But the question is whether it would really convince you that it's a human.

[146] Whatever that takes, maybe it would take days or weeks, but it would really convince you that it's human.

[147] Large language models can appear that way.

[148] You can read conversations, and they appear pretty good.

[149] There are some problems with it.

[150] It doesn't do math very well.

[151] You can ask how many legs do 10 elephants have, and they'll tell you, well, okay, each elephant has four legs and it's 10 elephants, so it's 40 legs.

[152] And you go, okay, that's pretty good.

[153] How many legs do 11 elephants have?

[154] And they don't seem to understand the question.

[155] Do all humans understand that question?

[156] No. That's a key thing.

[157] I mean, how advanced a human do you want it to be?

[158] But we do expect a human to be able to do multi-chain reasoning, to be able to take a few facts and put them together, not perfectly.

[159] And we see that in a lot of polls that people don't do that perfectly at all.

[160] So it's not very well -defined, but it's something where it really would convince you that it's a human.

[161] Is your intuition that large language models will not be solely the kind of system that passes the Turing Test in 2029?

[162] Do we need something else?

[163] No, I think it will be a large language model, but they have to go beyond what they're doing now.

[164] I think we're getting there.

[165] And another key issue is if somebody actually passes the Turing test validly, I would believe they're conscious.

[166] And not everybody would say that.

[167] So, okay, we can pass the Turing test, but we don't really believe that it's conscious.

[168] That's a whole other issue.

[169] But if it really passes the Turing test, I would believe that it's conscious.

[170] But I don't believe that of large language models today.

[171] If it appears to be conscious, that's as good as being conscious, at least for you, in some sense.

[172] I mean, consciousness is not something that's scientific.

[173] I mean, I believe you're conscious, but it's really just a belief, and we believe that about other humans that at least appear to be conscious.

[174] When you go outside of shared human assumptions, like, is an animal conscious?

[175] Some people believe they're not conscious.

[176] Some people believe they are conscious.

[177] And would a machine that acts just like a human be conscious?

[178] I mean, I believe it would be.

[179] But that's really a philosophical belief, not a scientific one.

[180] You can't prove it.

[181] I can't take an entity and prove that it's conscious.

[182] There's nothing that you can do that would indicate that.

[183] It's like saying a piece of art is beautiful.

[184] You can say it.

[185] Multiple people can experience a piece of art as beautiful, but you can't prove it.

[186] But it's also an extremely important issue.

[187] Yes.

[188] I mean, imagine if nobody's conscious; the world may as well not exist.

[189] And so some people, like, say, Marvin Minsky, said, well, consciousness is not logical, it's not scientific, and therefore we should dismiss it, and any talk about consciousness is just not to be believed.

[190] But when he actually engaged with somebody who was conscious, he actually acted as if they were conscious.

[191] He didn't ignore that.

[192] He acted as if consciousness does matter.

[193] Exactly.

[194] Whereas he said it didn't matter.

[195] Well, that's Marvin Minsky.

[196] He's full of contradictions.

[197] But that's true of a lot of people as well.

[198] But to you, consciousness matters.

[199] But to me, it's very important.

[200] But I would say it's not a scientific issue.

[201] It's a philosophical issue.

[202] And people have different views.

[203] Some people believe that anything that makes a decision is conscious.

[204] So your light switch is conscious.

[205] Its level of consciousness is low.

[206] It's not very interesting, but that's a consciousness.

[207] And anything, so a computer that makes a more interesting decision, still not at human levels, but it's also conscious and at a higher level than your light switch.

[208] So that's one view.

[209] there's many different views of what consciousness is.

[210] So if a system passes the Turing test, it's not scientific, but in issues of philosophy, things like ethics start to enter the picture.

[211] Do you think we would start contending, as a human species, with the ethics of turning off such a machine?

[212] Yeah, I mean, that's definitely come up.

It hasn't come up in reality yet, but I'm talking about 2029.

[214] It's not that many years from now.

[215] And so what are our obligations to it?

[216] It has a different, I mean, a computer that's conscious has a little bit different connotations than a human.

[217] We have a continuous consciousness.

We're in an entity that does not last forever.

[219] Now, actually, a significant portion of humans still exist and are therefore still conscious, but anybody who is over a certain age doesn't exist anymore. That wouldn't be true of a computer program.

[220] You could completely turn it off, and a copy of it could be stored, and you could recreate it.

[221] And so it has a different type of validity.

[222] You could actually take it back in time.

[223] You could eliminate its memory and have it go over again.

[224] I mean, it has a different kind of connotation than humans do.

[225] Well, perhaps you can do the same thing with humans.

[226] It's just that we don't know how to do that yet.

[227] Yeah.

[228] It's possible that we figure out all of these things on the machine first.

[229] But that doesn't mean the machine isn't.

[230] I mean, if you look at the way people react to, say, C-3PO or other machines that are conscious in movies, they don't actually present how they're conscious, but we see that they are a machine, and people will believe that they are conscious, and they'll actually worry about them if they get in trouble and so on.

[231] So, 2029 is going to be the first year when a major thing happens.

[232] Right.

[233] And that will shake our civilization to start to consider the role of AI in this one.

[234] Yes and no. I mean, there was one guy at Google who claimed that the machine was conscious.

[235] But that's just one person.

[236] Right.

[237] When it starts to happen at scale.

[238] Well, that's exactly right.

[239] Because most people have not taken that position.

[240] I don't take that position.

[241] I mean, I've used different things.

[242] like this and they don't appear to me to be conscious.

[243] As we eliminate various problems of these large language models, more and more people will accept that they're conscious.

[244] So when we get to 2029, I think a large fraction of people will believe that they're conscious.

[245] So it's not going to happen all at once.

[246] I believe it will actually happen gradually, and it's already started to happen.

[247] And so that takes us one step closer to the singularity.

[248] Another step then is in the 2030s, when we can actually connect our neocortex, which is where we do our thinking, to computers.

[249] And, I mean, just as this actually gains a lot from being connected to computers that amplify its abilities.

[250] I mean, if this did not have any connection, it would be pretty stupid.

[251] It could not answer any of your questions.

[252] If you're just listening to this, by the way, Ray's holding up the all-powerful smartphone.

[253] So we're going to do that directly from our brains.

[254] I mean, these are pretty good.

[255] These already have amplified our intelligence.

[256] I'm already much smarter than I would otherwise be if I didn't have this.

[257] Because I remember my first book, The Age of Intelligent Machines, there was no way to get information from computers.

[258] I actually would go to a library, find a book, find the page that had the information I wanted, and I'd go to the copier, and my most significant information tool was a roll of quarters where I could feed the copier.

[259] So we're already greatly advanced that we have these things.

[260] There's a few problems with it.

[261] First of all, I constantly put it down and I don't remember where I put it.

[262] I've actually never lost it, but you have to find it, and then you have to turn it on.

[263] So there's a certain amount of steps.

[264] It would actually be quite useful if someone would just listen to your conversation and say, oh, that's so-and-so actress, and tell you what you're talking about.

[265] So going from active to passive where it just permeates your whole life.

[266] Yeah, exactly.

[267] The way your brain does when you're awake.

[268] Your brain is always there.

[269] Right.

[270] That's something that could actually just about be done today, where we listen to your conversation, understand what you're saying, understand what you're missing, and give you that information.

[271] But another step is to actually go inside your brain.

[272] And there are some prototypes where you can connect your brain.

[273] They actually don't have the amount of bandwidth that we need.

[274] They can work, but they work fairly slowly.

[275] So if it actually would connect to your neocortex, and the neocortex, which I describe in How to Create a Mind, actually has different levels.

[276] And as you go up the levels, it's kind of like a pyramid.

[277] The top level is fairly small.

[278] And that's the level where you want to connect these brain extenders.

[279] So I believe that will happen in the 2030s.

[280] So just the way this is greatly amplified by being connected to the cloud,

[281] we can connect our own brain to the cloud and just do what we can do by using this machine.

[282] Do you think it would look like the brain-computer interface of, like, Neuralink?

[283] So would it be...

[284] Well, Neuralink is an attempt to do that.

[285] It doesn't have the bandwidth that we need.

[286] Yet, right?

[287] Right.

[288] But I think, I mean, they're going to get permission for this because there are a lot of people who absolutely need it because they can't communicate.

[289] I know a couple of people like that who have ideas and they cannot move their muscles and so on, they can't communicate.

[290] So for them, this would be very valuable.

[291] But we could all use it.

[292] Basically, it would turn us into something that would be like we have a phone, but it would be in our minds.

[293] It would be kind of instantaneous.

[294] And maybe communication between two people would not require this low bandwidth mechanism of language.

[295] Yes.

[296] Exactly.

[297] We don't know what that would be.

[298] Although we do know the computers can share information like language instantly.

[299] They can share many, many books in a second.

[300] So we could do that as well.

[301] If you look at what our brain does, it actually can manipulate different parameters.

[302] So we talk about these large language models.

[303] I mean, I had written that it requires a certain amount of information in order to be effective and that we would not see AI really being effective until it got to that level.

[304] And we had large language models that were like 10 billion bytes.

[305] It didn't work very well.

[306] They finally got to 100 billion bytes, and now they work fairly well, and now we're going to a trillion bytes.

[307] If you say LaMDA has 100 billion bytes, what does that mean?

[308] Well, what if you had something that had one byte, one parameter?

[309] Maybe you want to tell whether or not something's an elephant or not.

[310] And so you put in something that would detect its trunk.

[311] If it has a trunk, it's an elephant.

[312] If it doesn't have a trunk, it's not an elephant.

[313] That will work fairly well.

[314] There's a few problems with it.

[315] And it really wouldn't be able to tell what a trunk is.

[316] But anyway...

[317] And maybe other things other than elephants have trunks, you might get really confused.

[318] Yeah, exactly.

[319] I'm not sure which animals have trunks, but, you know, how do you define a trunk?

[320] But yeah, that's one parameter.

[321] You can do okay.
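The one-parameter detector Kurzweil describes can be sketched in a few lines. This is a toy illustration only; the animal list and feature values are invented for the example, not anything from an actual model.

```python
# Toy version of the one-parameter elephant detector described above:
# a single feature (has a trunk) decides elephant vs. not-elephant.
# The animals and feature values are invented for illustration.
def is_elephant(has_trunk: bool) -> bool:
    return has_trunk  # the model's single "parameter"

animals = {"elephant": True, "horse": False, "tapir": True}
predictions = {name: is_elephant(trunk) for name, trunk in animals.items()}
# The tapir exposes the failure mode discussed above: other animals
# have trunks too, so the one-parameter model calls it an elephant.
```

With one parameter you can do okay on easy cases, which is exactly why the jump to models with 100 billion parameters matters: they can make far finer distinctions.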

[322] So these things have 100 billion parameters, so they're able to deal with very complex issues.

[323] All kinds of trunks.

[324] Human beings actually have a little bit more than that, but they're getting to the point where they can emulate humans.

[325] If we were able to connect this to our neocortex, we would basically add more of these abilities to make distinctions, and it could ultimately be much smarter and also be attached to information that we feel is reliable.

[326] So that's where we're headed.

[327] So you think that there will be a merger in the 2030s, an increasing amount of merging between the human brain and the AI brain.

[328] Exactly.

[329] And the AI brain is really an emulation of human beings.

[330] I mean, that's why we're creating them.

[331] Because human beings act the same way, and this is basically to amplify them.

[332] I mean, this amplifies our brain.

[333] It's a little bit clumsy to interact with, but it definitely is way beyond what we had 15 years ago.

[334] But the implementation becomes different, just like a bird versus the airplane.

[335] Even though the AI brain is an emulation, it starts adding features we might not otherwise have, like ability to consume a huge amount of information quickly.

[336] Like look up thousands of Wikipedia articles in one take.

[337] Exactly.

[338] And we can get, for example, to issues like simulated biology where it can simulate many different things at once.

[339] We already had one example of simulated biology, which is the Moderna vaccine.

[340] And that's going to be now the way in which we create medications.

[341] But they were able to simulate what each example of an mRNA sequence would do to a human being.

[342] They were able to simulate that quite reliably.

[343] And they actually simulated billions of different mRNA sequences.

[344] And they found the ones that were the best, and they created the vaccine.

[345] And talk about doing it quickly: they did that in two days.

[346] How long would a human being take to simulate billions of different mRNA sequences?

[347] I don't know that we could do it at all, but it would take many years.

[348] They did it in two days.

[349] And one of the reasons that people didn't like vaccines is because it was done too quickly.

[350] It was done too fast.

[351] And they actually included the time it took to test it out, which was 10 months.

[352] So they figured, okay, it took 10 months to create this.

[353] Actually, it took us two days.

[354] And we also will be able to ultimately do the tests in a few days as well.

[355] Oh, because we can simulate how the body will respond to it.

[356] Yeah.

[357] Now that's a little bit more complicated because the body has a lot of different elements and we have to simulate all of that, but that's coming as well.

[358] So ultimately we could create it in a few days and then test it in a few days and it would be done.

[359] And we can do that with every type of medical insufficiency that we have.

[360] So curing all diseases, improving certain functions of the body, supplements, drugs for recreation, for health, for performance, for productivity, all that kind of stuff.

[361] Well, that's where we're headed.

[362] Because, I mean, right now we have a very inefficient way of creating these new medications.

[363] But we've already shown it.

[364] And the Moderna vaccine is actually the best of the vaccines we've had.

[365] And it literally took two days to create.

[366] And we'll get to the point where we can test it out also quickly.

[367] Are you impressed by AlphaFold and the solution to protein folding, which essentially is simulating, modeling this primitive building block of life, which is a protein, and its 3D shape?

[368] It's pretty remarkable that they can actually predict what the 3D shape of these things are.

[369] But they did it with the same type of neural net that won, for example, at Go.

[370] So it's all the same.

[371] It's all the same.

[372] They took that same thing and just changed the rules to chess.

[373] And within a couple of days, it played chess at a master level greater than any human being.

[374] And the same thing then worked for AlphaFold,

[375] which no human had done.

[376] I mean, the best humans could maybe figure out 15, 20% of what the shape would be.

[377] And after a few takes, it ultimately did just about 100%.

[378] Do you still think the singularity will happen in 2045?

[379] And what does that look like?

[380] You know, once we can amplify our brain with computers directly, which will happen in the 2030s, that's going to keep growing. That's another whole theme, which is the exponential growth of computing power. Yeah, so looking at price-performance of computation from 1939 to 2021. Right. So that starts with the very first computer, actually created by a German during World War II. And you might have thought that that might be significant, but actually the Germans didn't think computers were significant, and they completely rejected it.

[381] The second one is also from Zuse, the Z2.

[382] And by the way, we're looking at a plot with the x-axis being the year, from 1935 to 2025, and on the y-axis, on a log scale, is computations per second per constant dollar.

[383] So dollars normalized for inflation.

[384] And it's growing linearly on the log scale, which means it's grown exponentially.
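The point about straight lines on a log-scale plot can be made concrete with a small sketch. The data points below are invented placeholders, not values from Kurzweil's actual chart; the arithmetic only shows why a constant slope in log space corresponds to a fixed doubling time.

```python
import math

# Invented (year, computations-per-second-per-constant-dollar) points
# that lie on a straight line in log space; not the chart's real data.
data = [(1939, 1e-2), (2019, 1e10)]

(x0, y0), (x1, y1) = data
# A straight line on the log plot means a constant slope of
# log10(price-performance) per year: here 12 decades over 80 years.
slope = (math.log10(y1) - math.log10(y0)) / (x1 - x0)

# A constant log slope implies a fixed doubling time: log10(2) / slope.
doubling_time_years = math.log10(2) / slope
print(round(doubling_time_years, 2))  # → 2.01 for these made-up numbers
```

Any data series with a fixed doubling time plots as a straight line on such axes, which is what the 80-year graph shows.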

[386] The third one was the British computer, which the Allies did take very seriously, and it cracked the German code and enabled the British to win the Battle of Britain, which otherwise absolutely would not have happened if they hadn't cracked the code using that computer.

[387] But that's an exponential graph, so a straight line on that graph is exponential growth.

[388] And you see 80 years of exponential growth. And, I would say about every five years, and this happened shortly before the pandemic, people say, well, they call it Moore's Law, which is not correct, because it's not all Intel. In fact, this started decades before Intel was even created, and it wasn't with transistors formed into a grid. It's not just transistor count or transistor size.

[389] It's a bunch of different components.

[390] It started with relays, then went to vacuum tubes, then went to individual transistors, and then to integrated circuits.

[391] And integrated circuits actually starts like in the middle of this graph.

[392] And it has nothing to do with Intel.

[393] Intel actually was a key part of this, but a few years ago they stopped making the fastest chips. But if you take the fastest chip of any technology in that year, you get this kind of graph, and it's definitely continued for 80 years.

[394] So you don't think Moore's Law broadly defined is dead.

[395] It's been declared dead multiple times throughout this process.

[396] I don't like the term Moore's Law because it has nothing to do with Moore or with Intel, but yes, the exponential growth of computing is continuing and has never stopped.

[397] From various sources.

[398] I mean, it went through World War II.

[399] It went through global recessions.

[400] It's just continuing.

[401] And if you continue that out, along with software gains, which is another issue, and they really multiply.

[402] Whatever you get from software gains, you multiply by the computer gains, and you get faster and faster speed.

[404] This is actually the compute for the largest computer models that have been created, and that actually expands roughly twice a year.

[405] Like every six months it expands by two.

[406] So we're looking at a plot from 2010 to 2022.

[407] On the x-axis is the publication date of the model, and perhaps sometimes the actual paper associated with it, and on the y-axis is training compute in FLOPs.

[408] And so basically this is looking at the increase in the, not transistors, but the computational power of neural networks.

[409] Yeah, it's the computational power that created these models.

[410] And that's doubled every six months.

[411] Which is even faster than the transistor doubling.

[412] Yeah.

[413] And actually, since it grows faster than costs come down, it has actually become a greater investment to create these.
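The compounding behind these figures is easy to sketch. This assumes the steady doubling rates stated above and ignores everything that would complicate a real forecast; it is arithmetic, not a prediction of its own.

```python
# Total multiplication after `years` of steady doubling every
# `doubling_period_years`: simply 2 ** (years / period).
def growth_factor(years: float, doubling_period_years: float) -> float:
    return 2.0 ** (years / doubling_period_years)

# Training compute doubling every six months, as stated above:
per_year = growth_factor(1, 0.5)     # 4x in a single year
per_decade = growth_factor(10, 0.5)  # 2**20, about a million-fold
```

Sustained over a couple of decades, doubling rates like these reach the millions-fold multiples Kurzweil mentions for 2045.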

[414] But at any rate, by the time you get to 2045, we'll be able to multiply our intelligence many millions fold, and it's just very hard to imagine what that will be like.

[415] And that's the singularity.

[416] Well, we can't even imagine.

[417] Right.

[418] That's why we call it the singularity.

[419] A singularity in physics, something gets sucked into it, and you can't tell what's going on in there because no information can get out of it.

[421] There's various problems with that, but that's the idea.

[422] It's too much beyond what we can imagine.

[423] Do you think it's possible we don't notice what the singularity actually feels like, that we just live through it with exponentially increasing cognitive capabilities?

[424] And because everything is moving so quickly, we aren't really able to introspect that our life has changed.

[425] Yeah, but I mean, we will have that much greater capacity to understand things, so we should be able to look back.

[426] Looking at history, understand history.

[427] But we will need people basically like you and me to actually think about these things.

[428] But we might be distracted by all the other sources of entertainment and fun.

[429] Because the exponential power of intellect is growing, but also the...

[430] That'll be a lot of fun.

[431] The amount of ways you can have, you know...

[432] I mean, we already have a lot of fun with computer games and so on.

[433] They're really quite remarkable.

[434] What do you think about the digital world, the metaverse, virtual reality?

[435] Will that have a component in this, or will most of our advancement be in physical?

[436] Well, that's a little bit like Second Life,

[437] although Second Life actually didn't work very well, because it couldn't actually handle too many people.

[438] And I don't think the Metaverse has come into being.

[439] I think there will be something like that.

[440] It won't necessarily be from that one company.

[441] I mean, there's going to be competitors.

[442] But yes, we're going to live increasingly online, particularly if our brains are online.

[443] I mean, how could we not be online?

[444] Do you think it's possible that, given this merger with AI, most of our meaningful interactions will be in this virtual world?

[445] Most of our life, we fall in love, we make friends, we come up with ideas, we do collaborations, we have fun.

[446] I actually know somebody who's marrying somebody that they never met.

[447] I think they just met her briefly before the wedding, but she actually fell in love with this other person, never having met them.

[448] And I think the love is real.

[449] That's a beautiful story, but do you think that story is one that might be experienced not just by hundreds of thousands of people, but by hundreds of millions of people?

[450] I mean, it really gives you appreciation for these virtual ways of communicating.

[451] And if anybody can do it, then it's really not such a freak story.

[452] So I think more and more people will do that.

[453] But that's turning our back on our entire history of evolution.

[454] In the old days, we used to fall in love by holding hands and sitting by the fire, that kind of stuff.

[455] Actually, I have five patents on ways you can hold hands, even if you're separated.

[456] Great.

[457] So the touch, the sense, it's all just senses.

[458] It can all just be replicated.

[459] It's not just that you're touching someone or not.

[460] There's a whole way of doing it, and it's very subtle.

[461] But ultimately, we can emulate all of that.

[462] Are you excited by that future?

[463] Do you worry about that future?

[464] I have certain worries about the future, but not virtual touch.

[465] Well, I agree with you.

[466] You described six stages in the evolution of information processing in the universe, as you started to describe.

[467] Can you maybe talk through some of those stages from the physics and chemistry to DNA and brains and then to the very end, to the very beautiful end of this process?

[468] Well, it actually gets more rapid.

[469] So physics and chemistry, that's how we started.

[470] So at the very beginning of the universe.

[471] We had lots of electrons and various things traveling around, and that took actually many billions of years. Kind of jumping ahead here to some of the last stages, where we have things like love and creativity.

[472] It's really quite remarkable that that happens.

[473] But finally, physics and chemistry created biology and DNA.

[474] And now you actually had one type of molecule that described the cutting edge of this process.

[475] And we go from physics and chemistry to biology.

[476] And finally, biology created brains.

[477] I mean, not everything that's created by biology has a brain, but eventually brains came along.

[478] And all of this is happening faster and faster.

[479] Yeah.

[480] It created increasingly complex organisms.

[481] Another key thing is actually not just brains, but our thumb.

[482] Because there's a lot of animals with brains even bigger than humans.

[483] I mean, elephants have a bigger brain, whales have a bigger brain, but they've not created technology because they don't have a thumb.

[484] So that's one of the really key elements in the evolution of humans.

[485] This physical manipulator device that's useful for puzzle solving in the physical reality.

[486] So I could think, I could look at a tree and go, oh, I could actually trim that branch down and eliminate the leaves and carve a tip on it, and that would create technology.

[487] And you can't do that if you don't have a thumb.

[488] Yeah.

[489] So thumbs can create technology.

[490] And technology also had a memory, and now those memories are competing with the scale and scope of human beings, and ultimately we'll go beyond it. And then we're going to merge human technology with human intelligence, and understand how human intelligence works, which I think we already do, and we're putting that into our human technology.

[491] So create the technology inspired by our own intelligence and then that technology supersedes us in terms of its capabilities.

[492] And we ride along, or do you ultimately see it as - And we ride along, but a lot of people don't see that.

[493] They say, well, you've got humans and you've got machines, and there's no way we can ultimately compete with machines.

[494] And you can already see that.

[495] Lee Sedol, who's like the best Go player in the world, says he's not going to play Go anymore.

[496] Yeah.

[497] Because playing Go, for a human, that was like the ultimate in intelligence, because no one else could do that.

[498] But now a machine can actually go way beyond him.

[499] And so he says, well, there's no point playing it anymore.

[500] That may be more true for games than it is for life.

[501] I think there's a lot of benefit to working together with AI in regular life.

[502] So if you were to put a probability on it, is it more likely that we merge with AI or AI replaces us?

[503] A lot of people just think computers come along and they compete with them.

[504] We can't really compete and that's the end of it, as opposed to them increasing our abilities.

[505] And if you look at most technology, it increases our abilities.

[506] I mean, look at the history of work.

[507] Look at what people did 100 years ago.

[508] Does any of that exist anymore?

[509] People, I mean, if you were to predict that all of these jobs would go away and would be done by machines, people would say, well, no one's going to have jobs and it's going to be massive unemployment.

[510] But I show in this book that's coming out, the number of people that are working, even as a percentage of the population, has gone way up.

[511] We're looking at the x-axis, year, from 1774 to 2024, and on the y-axis, personal income per capita in constant dollars, and it's growing superlinearly.

[512] I mean, it's...

[513] Yeah, 2021 constant dollars, and it's gone way up.

[514] That's not what you would predict, given that we would predict that all these jobs would go away.

[516] But the reason it's gone up is because we've basically enhanced our own capabilities by using these machines, as opposed to them just competing with us.

[517] That's a key way in which we're going to be able to become far smarter than we are now by increasing the number of different parameters we can consider in making a decision.

[518] I was very fortunate.

[519] I am very fortunate to be able to get a preview of your upcoming book, The Singularity Is Nearer.

[520] And one of the themes outside of just discussing the increasing exponential growth of technology, one of the themes is that things are getting better in all aspects of life.

[521] And you talk just about this.

[522] So one of the things you write about is jobs.

[523] So let me just ask about that.

[524] There is a big concern that automation, especially powerful AI will get rid of jobs.

[525] People will lose jobs.

[526] And as you were saying, the sense is, throughout the history of the 20th century, automation did not do that ultimately.

[527] And so the question is, will this time be different?

[528] Right.

[529] That is the question.

[530] Will this time be different?

[531] And it really has to do with how quickly we can merge with this type of intelligence.

[532] whether LaMDA or GPT-3 is out there, and maybe it's overcome some of its key problems, and we don't really have an enhanced human intelligence, that might be a negative scenario.

[533] But I mean, that's why we create technologies to enhance ourselves, and I believe we will be enhanced.

[534] We're not just going to sit here with 300 million modules in our neocortex. We're going to be able to go beyond that, because that's useful, but we can multiply that by 10, 100,000, a million.

[535] And you might think, well, what's the point of doing that?

[536] It's like asking somebody that's never heard music, well, what's the value of music?

[537] I mean, you can't appreciate it until you've created it.

[538] There's some worry that there will be a wealth disparity, you know, class or wealth disparity.

[539] Basically, the rich people will first have access to this kind of thing, and then, because of the ability to merge, will get richer exponentially faster.

[540] And I say that's just like cell phones.

[541] I mean, there's like four billion cell phones in the world today.

[542] In fact, when cell phones first came out, you had to be fairly wealthy.

[543] They weren't inexpensive.

[544] So you had to have some wealth in order to afford them.

[545] Yeah, there were these big, sexy phones.

[546] And they didn't work very well.

[547] They did almost nothing.

[548] So you can only afford these things if you're wealthy, at a point where they really don't work very well.

[549] So achieving scale and making it inexpensive is part of making the thing work well.

[550] Exactly.

[551] So these are not totally cheap, but they're pretty cheap.

[552] I mean, you can get them for a few hundred dollars.

[553] Especially given the kind of things it provides for you, there's a lot of people in the third world that have very little, but they have a smartphone.

[554] Yeah, absolutely.

[555] And the same will be true with AI.

[556] I mean, I see homeless people have their own cell phones.

[557] Yeah, so your sense is any kind of advanced technology will take the same trajectory.

[558] Right, ultimately becomes cheap and will be affordable.

[559] I probably would not be the first person to put something in my brain to connect to computers, because I think it will have limitations. But once it's really perfected, and at that point it'll be pretty inexpensive, I think it'll be pretty affordable.

[560] So in which other ways, as you outline in your book, is life getting better?

[561] Because I think - Well, I have 50 charts in there where everything is getting better.

[562] I think there's a kind of cynicism about, like even if you look at extreme poverty, for example.

[563] For example, this is actually a poll taken on extreme poverty.

[565] And the people were asked, has poverty gotten better or worse?

[566] And the options are increased by 50%, increased by 25%, remain the same, decreased by 25%, decreased by 50%.

[567] If you're watching this or listening to this, try to vote for yourself.

[568] 70% thought it had gotten worse, and that's the general impression.

[569] 88% thought it had gotten worse or remained the same.

[570] Only 1% thought it decreased by 50%.

[571] And that is the answer.

[572] It actually decreased by 50%.

[573] So only 1% of people got the right, optimistic estimate of how poverty is...

[574] Right.

[575] And this is the reality.

[576] And it's true of almost everything you look at.

[577] You don't want to go back 100 years or 50 years.

[578] Things were quite miserable then, but we tend not to remember that.

[580] So literacy rate has been increasing over the past few centuries, nearly to 100% across many of the nations in the world.

[581] It's gone way up.

[582] Average years of education have gone way up.

[583] Life expectancy is also increasing.

[584] Life expectancy was 48 in 1900.

[585] And it's over 80 now.

[586] And it's going to continue to go up, particularly as we get into more advanced stages of simulated biology.

[587] For life expectancy, these trends are the same at birth, age one, age five, age ten, so it's not just the infant mortality.

[588] And I have 50 more graphs in the book about all kinds of things.

[589] Even spread of democracy, which might bring up some sort of controversial issues, it still has gone way up.

[590] Well, that one has gone way up, but it's a bumpy road, right?

[591] Exactly.

[592] And some democracies might go backwards, but we basically had no democracies before the creation of the United States, which was a little over two centuries ago, which on the scale of human history isn't that long.

[593] Do you think super -intelligence systems will help with democracy?

[594] So what is democracy?

[595] Democracy is giving a voice to the populace and having their ideas, having their beliefs, having their views represented?

[596] Well, I hope so.

[597] I mean, we've seen social networks can spread conspiracy theories, which have been quite negative, for example being against any kind of stuff that would help your health. So those kinds of ideas, on social media, what you notice is they increase engagement. So dramatic division increases engagement. Do you worry about AI systems that will learn to maximize that division? I mean, I do have some concerns about this, and I have a chapter in the book about the perils of advanced AI.

[598] Spreading misinformation on social networks is one of them, but there are many others.

[599] What's the one that worries you the most, that we should think about to try to avoid?

[600] Well, it's hard to choose.

[601] We do have the nuclear threat that evolved when I was a child.

[602] I remember, and we would actually do these drills against a nuclear war.

[603] We'd get under our desk and put our hands behind our heads to protect us from a nuclear war.

[604] It seemed to work.

[605] We're still around, so.

[606] You're protected.

[607] But that's still a concern.

[608] And there are key dangerous situations that can take place in biology.

[609] Someone could create a virus that's very, I mean, we have viruses that are hard to spread, and they can be very dangerous, and we have viruses that are easy to spread, but they're not so dangerous.

[610] Somebody could create something that would be very easy to spread and very dangerous, and be very hard to stop. And it could be something that would spread without people noticing, because people could get it that have no symptoms, and then everybody would get it, and then symptoms would occur maybe a month later. And that actually doesn't occur normally, because if we were to have a problem with that, we wouldn't exist.

[611] So the fact that humans exist means that we don't have viruses that can spread easily and kill us because otherwise we wouldn't exist.

[612] Yeah, viruses don't want to do that.

[613] They want to spread and keep the host alive somewhat.

[614] So you can describe various dangers with biology.

[615] Also nanotechnology, which we actually haven't experienced yet, but there are people that are creating nanotechnology, and I describe that in the book.

[616] Now you're excited by the possibilities of nanotechnology, of nanobots, of being able to do things inside our body, inside our mind that's going to help.

[617] What's exciting, what's terrifying about nanobots?

[618] What's exciting is that that's a way to communicate with our neocortex, because each part of the neocortex is pretty small, and you need a small entity that can actually get in there and establish a communication channel.

[619] And that's going to really be necessary to connect our brains to AI within ourselves because otherwise it would be hard for us to compete with it.

[620] In a high bandwidth way.

[621] Yeah, yeah.

[622] And that's key, actually, because a lot of the things like Neuralink are really not high bandwidth yet.

[623] So nanobots are the way you achieve high bandwidth.

[624] How much intelligence would those nanobots have?

[625] Yeah, they don't need a lot, just enough to basically establish a communication channel to one nanobot.

[626] So it's primarily about communication.

[627] Yeah.

[628] Between external computing devices and our biological thinking machine.

[629] What worries you about nanobots?

[630] Is it similar to the viruses?

[631] Well, I mean, this is the gray goo challenge.

[632] Yes.

[633] If you had a nanobot that wanted to create any kind of entity and repeat itself and was able to operate in a natural environment, it could turn everything into that entity, and basically destroy all biological life.

[634] So you mentioned nuclear weapons.

[635] Yeah.

[636] I'd love to hear your opinion about the 21st century and whether you think we might destroy ourselves.

[637] And maybe your opinion, if it has changed, by looking at what's going on in Ukraine, that we could have a hot war with nuclear powers involved, and the tensions building, and the seeming forgetting of how terrifying and destructive nuclear weapons are. Do you think humans might destroy ourselves in the 21st century, and if we do, how?

[638] And how do we avoid it?

[639] I don't think that's going to happen, despite the terrors of that war.

[640] It is a possibility, but I mean, I don't.

[641] It's unlikely in your mind.

[642] Yeah.

[643] Even with the tensions we've had, with this one nuclear power plant that's been taken over, it's very tense, but I don't actually see a lot of people worrying that that's going to happen.

[644] I think we'll avoid that.

[645] We had two nuclear bombs go off in '45.

[647] So now we're 77 years later.

[648] Yeah, we're doing pretty good.

[649] We've never had another one go off in anger.

[650] People forget the lessons of history.

[651] Well, yeah.

[652] I am worried about it.

[653] I mean, that's definitely a challenge.

[654] But you believe that we'll make it out and ultimately super intelligent AI will help us make it out, as opposed to destroy us?

[655] I think so.

[656] But we do have to be mindful of these dangers.

[657] And there are other dangers besides nuclear weapons.

[658] So to get back to merging with AI, we'd be able to upload our mind into a computer, in a way where we might even transcend the constraints of our bodies.

[659] So copy our mind into a computer and leave the body behind?

[661] Let me describe one thing I've already done with my father.

[662] That's a great story.

[663] So we created a technology.

[664] This is public.

[665] Came out, I think, six years ago, where you could ask any question.

[666] And the released product, which I think is still on the market, would read 200,000 books.

[667] And then find the one sentence in 200,000 books that best answered your question.

[668] And it's actually quite interesting.

[669] You can ask all kinds of questions, and you get the best answer in 200,000 books.
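Finding the single sentence in a corpus that best answers a question, as described here, can be sketched with a simple TF-IDF cosine-similarity search. This is a hypothetical illustration of the general retrieval technique, not the actual method behind the product Kurzweil describes; the corpus sentences and query are invented for the example.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    # Lowercase word tokens; apostrophes kept for contractions.
    return re.findall(r"[a-z']+", text.lower())

def build_index(sentences: list[str]):
    """Precompute L2-normalized TF-IDF vectors for each sentence."""
    docs = [Counter(tokenize(s)) for s in sentences]
    n = len(docs)
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(d.keys())
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for d in docs:
        v = {t: c * idf[t] for t, c in d.items()}
        norm = math.sqrt(sum(w * w for w in v.values())) or 1.0
        vecs.append({t: w / norm for t, w in v.items()})
    return vecs, idf

def best_sentence(query: str, sentences: list[str]) -> str:
    """Return the sentence with the highest cosine similarity to the query."""
    vecs, idf = build_index(sentences)
    q = Counter(tokenize(query))
    qv = {t: c * idf.get(t, 0.0) for t, c in q.items()}
    qnorm = math.sqrt(sum(w * w for w in qv.values())) or 1.0
    scores = [sum(qv.get(t, 0.0) * w for t, w in v.items()) / qnorm
              for v in vecs]
    return sentences[max(range(len(sentences)), key=scores.__getitem__)]

# Invented mini-corpus standing in for the "200,000 books" of sentences.
corpus = [
    "Brahms is surely the most interesting composer of the Romantic era.",
    "Choral groups should rehearse on Tuesday evenings.",
    "Music education shapes young minds.",
]
print(best_sentence("Who is the most interesting composer?", corpus))
```

A production system over 200,000 books would need an inverted index and likely semantic embeddings rather than brute-force scoring, but the ranking idea, score every candidate sentence against the question and return the top one, is the same.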

[670] But I was also able to take it and not go through 200,000 books, but go through a book that I put together, which is basically everything my father had written.

[671] So everything he had written, I had gathered, and we created a book, everything that Frederick Kurzweil had written.

[672] Now, I didn't think this actually would work that well, because stuff he had written was stuff about how to lay things out.

[673] I mean, he directed choral groups and music groups, and he would be laying out how the people should, where they should sit, and how to fund this, and all kinds of things that really didn't seem that interesting.

[674] And yet, when you ask a question, it would go through it and it would actually give you a very good answer.

[675] So I said, well, who's the most interesting composer?

[676] And he said, well, definitely Brahms.

[677] And he would go on about how Brahms was fabulous.

[678] talk about the importance of music education and...

[679] So you could have essentially a question-and-answer conversation with him.

[680] And I can have a conversation with him, which was actually more interesting than talking to him, because if you talked to him, he'd be concerned about how they're going to lay out this property for a choral group.

[681] He'd be concerned about the day -to -day versus the big questions.

[682] Exactly, yeah.

[683] And you did ask about the meaning of life, and he answered love.

[684] Yeah.

[685] Do you miss him?

[686] Yes, I do.

[687] You know, you get used to missing somebody after 52 years, and I didn't really have intelligent conversations with him until later in life.

[688] In the last few years, he was sick, which meant he was home a lot, and I was actually able to talk to him about different things.

[689] like music and other things.

[690] So I missed that very much.

[691] What did you learn about life from your father?

[692] What part of him is with you now?

[693] He was devoted to music, and when he was creating music, it put him in a different world.

[694] Otherwise, he was very shy.

[695] And if people got together, he tended not to interact with people.

[696] just because of his shyness.

[697] But when he created music, he was like a different person.

[698] Do you have that in you?

[699] Yeah.

[700] That kind of light that shines?

[701] I mean, I got involved with technology at like age five.

[702] And you fell in love with it in the same way he did with music?

[703] Yeah.

[704] I remember this actually happened with my grandmother.

[705] She had a manual typewriter, and she wrote a book, One Life Is Not Enough, which is actually a good title for a book I might write. And it was about a school she had created.

[706] Well, actually, her mother created it.

[707] So my mother's mother's mother created the school in 1868, and it was the first school in Europe that provided higher education for girls.

[708] It went through 14th grade.

[709] If you were a girl and you were lucky enough to get an education at all, it would go through like ninth grade.

[710] And many people didn't have any education as a girl.

[711] This went through 14th grade.

[712] Her mother created it.

[713] She took it over.

[714] And the book was about the history of the school and her involvement with it.

[716] When she presented it to me, I was not so interested in the story of the school, but I was totally amazed by this manual typewriter.

[717] I mean, here was something you could put a blank piece of paper into and turn into something that looked like it came from a book.

[718] And you could actually type on it, and it looked like it came from a book.

[719] It was just amazing to me, and I could see actually how it worked. And I was also interested in magic, but in magic, if somebody actually knows how it works, the magic goes away. The magic doesn't stay there if you actually understand how it works. But this was technology. I didn't have that word when I was five or six. And the magic was still there for you. The magic was still there, even if you knew how it worked. So I became totally interested in this, and then went around and collected little pieces of mechanical objects from bicycles, from broken radios. I would go through the neighborhood.

[720] This was an era where you would allow five or six-year-olds to, like, run through the neighborhood and do this.

[721] We don't do that anymore, but I didn't know how to put them together.

[722] I said, if I could just figure out how to put these things together, I could solve any problem.

[723] And I actually remember talking to these older girls, I think they were 10, and telling them, if I could just figure this out, we could fly, we could do anything. And they said, well, you have quite an imagination.

[724] And then, when I was in third grade, so I was like eight, I created like a virtual reality theater, where people could come on stage and they could move their arms, and all of it was controlled through one control box.

[725] It was all done with mechanical technology.

[726] And it was a big hit in my third grade class.

[727] And then I went on to do things in junior high school science fairs and high school science fairs where I won the Westinghouse Science Talent Search.

[728] So, I mean, I became committed to technology when I was five or six years old.

[729] You've talked about how you use lucid dreaming to think, to come up with ideas as a source of creativity.

[730] Can you maybe talk through that, the process of how you think? You've invented a lot of things.

[731] You've come up with and thought through some very interesting ideas.

[732] What advice would you give, or can you speak to the process of thinking, of how to think creatively?

[733] Well, I mean, sometimes I will think through in a dream and try to interpret that, but I think the key issue that I would tell younger people is to put yourself in the position that what you're trying to create already exists.

[734] And then you're explaining, like, how it works. Exactly. That's really interesting. You paint a world that you would like to exist, you think it exists, and reverse engineer that. You actually imagine you're giving a speech about how you created this. Well, you'd have to then work backwards as to how you would create it in order to make it work. That's brilliant. And that requires some imagination too, some first-principles thinking.

[735] You have to visualize that world.

[736] That's really interesting.

[737] And generally when I talk about things we're trying to invent, I would use the present tense as if it already exists.

[738] Not just to give myself that confidence, but everybody else who's working on it.

[739] We just have to kind of do all the steps in order to make it actual.

[740] How much of a good idea is about timing?

[741] How much is it about your genius versus that its time has come?

[742] Timing's very important.

[743] I mean, that's really why I got into futurism.

[744] I wasn't inherently a futurist; that's not really my goal.

[745] It's really to figure out when things are feasible.

[746] We see that now with large-scale models.

[747] The very large-scale models like GPT-3, it started two years ago.

[748] Four years ago, it wasn't feasible.

[749] In fact, they did create GPT-2, which didn't work.

[750] So it required a certain amount of timing having to do with this exponential growth of computing power.

[751] So futurism in some sense is a study of timing, trying to understand how the world will evolve.

[752] Yeah, yeah.

[753] And when will the capacity for certain ideas emerge?

[754] And that's become a thing in itself, to try to time things in the future.

[755] But really, its original purpose was to time my products.

[756] I mean, I did OCR in the 1970s because OCR has...

[757] It doesn't require a lot of computation.

[758] Optical character recognition.

[759] Yeah, so we were able to do that in the 70s, and I waited until the 80s to address speech recognition since that requires more computation.

[760] So you were thinking through timing when you're developing those things.

[761] Yeah.

[762] Has its time come?

[763] Yeah.

[764] And that's how you've developed that brain power, to start to think in a futurist sense: how will the world look in 2045, and then work backwards, how does it get there?

[765] But that has become a thing in itself because looking at what things will be like in the future reflects such dramatic changes in how humans will live.

[766] That was worth communicating also.

[767] So you developed that muscle of predicting the future and then apply it broadly and start to discuss how it changes the world of technology, how it changes the world of human life on Earth.

[768] In Danielle, one of your books, you write about someone who has the courage to question assumptions that limit human imagination to solve problems.

[769] And you also give advice on how each of us can have this kind of courage.

[770] Well, it's good that you pick that quote because I think that does symbolize what Danielle is about.

[771] Courage.

[772] So how can each of us have that courage to question assumptions?

[773] I mean, we see that when people can go beyond the current realm and create something that's new.

[774] I mean, take Uber, for example.

[775] Before that existed, you never thought that that would be feasible.

[776] And it did require changes in the way people work.

[777] Is there practical advice you give in the book about what each of us can do to be a Danielle?

[778] Well, she looks at the situation and tries to imagine how she can overcome various obstacles, and then she goes for it, and she's a very good communicator, so she can communicate these ideas to other people.

[779] And there's practical advice of learning to program and recording your life and things of this nature, becoming a physicist.

[780] So you list a bunch of different suggestions of how to throw yourself into this world.

[781] Yeah, I mean, it's kind of an idea of how young people can actually change the world by learning all of these different skills.

[782] And at the core of that is the belief that you can change the world.

[783] that your mind, your body can change the world.

[784] Yeah, that's right.

[785] And not letting anyone else tell you otherwise.

[786] That's very good, exactly.

[787] When we upload, the story you told about your dad and having a conversation with him, we're talking about uploading your mind to the computer.

[788] Do you think we'll have a future with something you call the afterlife?

[789] We'll have avatars that mimic, increasingly better and better, our behavior, our appearance, all that kind of stuff,

[790] even when those people are perhaps no longer with us.

[791] Yes, I mean, we need some information about them.

[792] I mean, I think about my father.

[793] I have what he wrote.

[794] Now, he didn't have a word processor, so he didn't actually write that much.

[795] And our memories of him aren't perfect.

[796] So how do you even know if you've created something that's satisfactory?

[797] Now you could do a Frederick Kurzweil Turing test.

[798] It seems like Frederick Kurzweil to me. But the people who remember him, like me, don't have a perfect memory.

[799] Is there such a thing as a perfect memory?

[800] Maybe the whole point is for him to make you feel a certain way.

[801] Yeah.

[802] Well, I think that would be the goal.

[803] That's the connection we have with loved ones.

[804] It's not really based on a very strict definition of truth.

[805] It's more about the experiences we share.

[806] Yeah.

[807] And they get more through memory.

[808] But ultimately, they make you smile.

[809] I think we definitely can do that.

[810] And that would be very worthwhile.

[811] So do you think we'll have a world of replicants, of copies?

[812] There'll be a bunch of Ray Kurzweils.

[813] like I could hang out with one.

[814] I can download it for five bucks and have a best friend, Ray.

[815] And you, the original copy, wouldn't even know about it.

[816] Is that, do you think that world is, first of all, do you think that world is feasible and do you think there's ethical challenges there?

[817] Like, how would you feel about me hanging out with a Ray Kurzweil and you not knowing about it?

[818] It doesn't strike me as a problem.

[819] With you, the original?

[820] Would that cause a problem for you?

[821] No, I enjoy, I would really very much enjoy it.

[822] No, not just hanging out with me, but if somebody hung out with you, a replicant of you.

[823] Well, at first it sounds exciting, but then what if they start doing better

[824] than me and take over my friend group?

[825] Because they may be an imperfect copy, or they may be more social, these kinds of things.

[826] And then I become like the old version that's not nearly as exciting.

[827] Maybe they're a copy of the best version of me on a good day.

[828] But if you hang out with a replicant of me and that turned out to be successful, I'd feel proud of that person because it's based on me. But it is a kind of death of this version of you.

[829] Well, not necessarily.

[830] I mean, you can still be alive, right?

[831] But, and you would be proud.

[832] Okay, so it's like having kids and you're proud that they've done even more than you were able to do.

[833] Yeah, exactly.

[834] It does bring up new issues.

[835] but it seems like an opportunity.

[836] Well, that replicant should probably have the same rights as you do.

[837] Well, that gets into a whole issue because when a replicant occurs, they're not necessarily going to have your rights.

[838] And if a replicant is created of somebody who's already dead, do they have all the obligations that the original person had? Do they have all the agreements that they had?

[839] So.

[840] I think you're going to have to have laws that say, yes.

[841] There has to be, if you want to create a replicant, they have to have all the same rights as human rights.

[842] Well, you don't know.

[843] Someone can create a replicant and say, well, it's a replicant, but I didn't bother getting their rights.

[844] But that would be illegal, I mean.

[845] Like, if you do that, you have to do that in the black market.

[846] You have to, if you want to get an official replicant. Okay, it's not so easy. Suppose you create multiple replicants. The original rights maybe go to one person and not to a whole group of people. Sure, so there has to be at least one, and then all the other ones kind of share the rights.

[847] Yeah, I just don't think that's very difficult for us humans to conceive of, this idea.

[848] You create a replicant that has certain, I mean, I've talked to people about this, including my wife, who would like to get back her father.

[849] And she doesn't worry about who has rights to what.

[850] She would have somebody that she could visit with and might give her some satisfaction.

[851] And she wouldn't care about any of these other rights.

[852] What does your wife think about multiple Ray Kurzweils?

[853] Have you had that discussion?

[854] I haven't addressed that with her.

[855] I think ultimately that's an important question.

[856] Loved ones, how they feel about it, there's something about love.

[857] Well, that's the key thing, right?

[858] If the loved ones reject it, it's not gonna work very well.

[859] So the loved ones really are the key determinant whether or not this works or not.

[860] But there's also ethical rules.

[861] We have to contend with the idea, and we have to contend with that idea with AI.

[862] But what's going to motivate it is, I mean, I talk to people who really miss people who are gone, and they would love to get something back, even if it isn't perfect.

[863] and that's what's going to motivate this.

[864] And that person lives on in some form.

[865] And the more data we have, the more we're able to reconstruct that person and allow them to live on.

[866] And eventually as we go forward, we're going to have more and more of this data because we're going to have nanobots that are inside our neocortex, and we're going to collect a lot of data.

[867] In fact, anything that's data is always collected. There is something a little bit sad, or maybe it's hopeful, which is becoming more and more common these days: when a person passes away, you'll have their Twitter account, you know, and you have the last tweet they tweeted. And you can recreate them now with large language models and so on. I mean, you can create somebody that's just like them and can actually continue to communicate.

[868] I think that's really exciting because I think in some sense, like if I were to die today, in some sense I would continue on if I continued tweeting.

[869] I tweet, therefore I am.

[870] Yeah.

[871] Well, I mean, that's one of the advantages of a replicant that they can recreate the communications of that person.

[872] Do you think, do you hope humans will become a multi-planetary species?

[873] You've talked about the phases, the six epochs, and one of them is, in part, reaching out into the stars.

[874] Yes, but the kind of attempts we're making now to go to other planetary objects doesn't excite me that much, because it's not really advancing anything.

[875] It's not efficient enough?

[876] Yeah, and we're also sending out human beings, which is a very inefficient way to explore these other objects.

[877] What I'm really talking about in the sixth epoch, the universe wakes up, is where we can spread our superintelligence throughout the universe.

[878] And that doesn't mean sending very soft, squishy creatures like humans.

[879] Yeah.

[880] The universe wakes up.

[881] I mean, we would send intelligent masses of nanobots, which can then go out and colonize these other parts of the universe.

[882] Do you think there's intelligent alien civilizations out there that our bots might meet?

[883] My hunch is no. Most people say yes, absolutely. I mean, the universe is too big, and they'll cite the Drake equation. And I think in The Singularity Is Near, I have two analyses of the Drake equation, both with very reasonable assumptions. One gives you thousands of advanced civilizations in each galaxy, and another one gives you one civilization, and we know of one.

[884] A lot of the analyses are forgetting the exponential growth of computation, because we've gone from where the fastest way I could send a message to somebody was with a pony, which was what?

[885] Like a century and a half ago?

[886] Yeah.

[887] to the advanced civilization we have today. And if you accept what I've said, go forward a few decades, you can have an absolutely fantastic amount of civilization compared to a pony, and that's in a couple hundred years.

[888] Yeah, the speed and the scale of information transfer is growing exponentially, in a blink of an eye.

[889] Now, think about these other civilizations.

[890] They're going to be spread out at cosmic times, So if something is like ahead of us or behind us, it could be ahead of us or behind us by maybe millions of years, which isn't that much.

[891] I mean, the universe is billions of years old, 14 billion or something.

[892] So even 1,000 years. If 200 or 300 years is enough to go from a pony to a fantastic amount of civilization, we would see that.

[893] So of other civilizations that have occurred, some might be behind us, but some might be ahead of us.

[894] If they're ahead of us, they're ahead of us by thousands, millions of years, and they would be so far beyond us, they would be doing galaxy -wide engineering.

[895] But we don't see anything doing galaxy -wide engineering.

[896] So either they don't exist or this very universe is a construction of an alien species.

[897] We're living inside a video game.

[898] Well, that's another explanation that, yes.

[899] You've got some teenage kids in another civilization.

[900] Do you find compelling the simulation hypothesis as a thought experiment that we're living in a simulation?

[901] The universe is computational.

[902] So we are an example in a computational world.

[903] Therefore, it is a simulation.

[904] It doesn't necessarily mean an experiment by some high school kid in another world, but it nonetheless is taking place in a computational world, and everything that's going on is basically a form of computation.

[905] So you really have to define what you mean by this whole world being a simulation. Well, then it's the teenager that makes the video game. You know, us humans, with our current limited cognitive capability, have striven to understand ourselves, and we have created religions. We think of God, whatever that is. Do you think God exists? And if so, who is God? I alluded to this before. We started out with lots of particles going around, and there's nothing that represents love and creativity.

[906] And somehow we've gotten into a world where love actually exists, and that has to do actually with consciousness, because you can't have love without consciousness.

[907] So to me, that's God, the fact that we have something where love, where you can be devoted to someone else and really feel that love, that's God.

[908] And if you look at the Old Testament, it was actually created by several different writers.

[909] And I think they've identified three of them.

[910] One of them dealt with God as a person that you can make deals with, and he gets angry, and he wreaks vengeance on various people.

[911] But two of them actually talk about God as a symbol of love and peace and harmony and so forth.

[912] That's how they describe God.

[913] So that's my view of God.

[914] not as a person in the sky that you can make deals with.

[915] It's whatever the magic is that goes from basic elements to things like consciousness and love.

[916] One of the things I find extremely beautiful and powerful is cellular automata, which you also touch on.

[917] Do you think whatever the heck happens in cellular automata where interesting, complicated objects emerge, God is in there too?

[918] the emergence of love in this seemingly primitive...

[919] Well, that's the goal of creating a replicant, is that they would love you and you would love them.

[920] There wouldn't be much point of doing it if that didn't happen.

[921] But all of it, I guess what I'm saying about cellular automata is that they're primitive building blocks, and they somehow create beautiful things.

[922] Is there some deep truth to that about how our universe works? That from simple rules, beautiful, complex objects can emerge? Is that the thing that made us as we went through all the six phases of reality? That's a good way to look at it. It gives some point to the whole value of having a universe. Do you think about your own mortality?

[923] Are you afraid of it?

[924] Yes, but I keep going back to my idea of being able to expand human life quickly enough in advance of our getting there, longevity escape velocity, which we're not quite at yet, but I think we're actually pretty close, particularly with,

[925] for example, simulated biology. I think we can probably get there by, say, the end of this decade, and that's my goal.

[926] Do you hope to achieve longevity escape velocity?

[927] Do you hope to achieve immortality?

[928] Well, immortality is hard to say.

[929] I can't really come on your program saying, I've done it.

[930] I've achieved immortality.

[931] Because it's never forever.

[932] A long time of living well.

[934] But we'd like to actually advance human life expectancy, advance my life expectancy more than a year every year.

[935] And I think we can get there by the end of this decade.

[936] How do you think we do it?

[937] So there's practical things in Transcend: Nine Steps to Living Well Forever, your book.

[938] You describe just that.

[939] There's practical things like health, exercise, all those things.

[940] Yeah, I mean, we live in a body

[941] that doesn't last forever.

[942] There's no reason why it can't, though.

[943] And we're discovering things, I think, that will extend it.

[944] But you do have to deal with, I mean, I've got various issues.

[945] I went to Mexico 40 years ago and developed salmonella.

[946] That created pancreatitis, which gave me a strange form

[947] of diabetes, it's not type 1 diabetes because that's an autoimmune disorder that destroys your pancreas.

[948] I don't have that.

[949] But it's also not type 2 diabetes because type 2 diabetes is your pancreas works fine, but your cells don't absorb the insulin well.

[950] I don't have that. The pancreatitis was a one-time thing.

[951] It didn't continue.

[952] And I've learned now how to control it.

[953] So that's just something I had to do in order to continue to exist.

[954] Given your particular biological system, you had to figure out a few hacks, and the idea is that science will be able to do that much better.

[955] Yeah.

[956] So, I mean, I do spend a lot of time just tinkering with my own body to keep it going.

[957] So I do think I'll last until the end of this decade, and I think we'll achieve longevity escape velocity.

[958] I think that will start with people who are very diligent about this.

[959] Eventually, it will become sort of routine that people will be able to do it.

[960] So if you're talking about kids today, or even people in their 20s or 30s, that's really not a very serious problem. I have had some discussions with relatives who are almost a hundred, saying, well, we're working on it as quickly as possible, but I don't know if that's going to work.

[961] Is there a case, this is a difficult question, but is there a case to be made against living forever? That a finite life, that mortality is a feature, not a bug? That dying makes ice cream taste delicious, makes life intensely beautiful? Most people believe that way, except if you present the death of anybody they care about or love, they find that extremely depressing.

[962] And I know people who feel that way, 20, 30, 40 years later, they still want them back.

[963] So, I mean, death is not something to celebrate.

[964] But we've lived in a world where people just accept this.

[965] Life is short.

[966] You see it all the time on TV.

[967] Oh, life's short.

[968] You have to take advantage of it.

[969] And nobody accepts the fact that you could actually go beyond normal lifetimes.

[970] But anytime we talk about death or a death of a person, even one death is a terrible tragedy.

[971] If you have somebody that lives to 100 years old, we still love them in return.

[972] And there's no limitation to that.

[973] In fact, these kinds of trends are going to provide greater and greater opportunity for everybody, even if we have more people.

[974] So let me ask about an alien species or a super intelligent AI 500 years from now that will look back.

[975] And remember Ray Kurzweil version zero, before the replicants spread.

[976] How do you hope they remember you in a Hitchhiker's Guide to the Galaxy summary of Ray Kurzweil?

[977] What do you hope your legacy is?

[978] Well, I mean, I do hope to be around.

[979] So that's...

[980] Some version of you, yes.

[981] So...

[982] Do you think you'll be the same person around?

[983] I mean, am I the same person I was when I was 20 or 10?

[984] You would be the same person in that same way, but yes, we're different.

[985] We're different.

[986] All you have of that person is your memories, which are probably distorted in some way.

[987] Maybe you just remember the good parts.

[988] Depending on your psyche, you might focus on the bad parts, might focus on the good parts.

[989] Right, but, I mean, I'd still have a relationship to the way I was when I was younger.

[990] How will you and the other super-intelligent AIs remember the you of today, from 500 years ago?

[991] What do you hope to be remembered by this version of you before the singularity?

[992] I think it's expressed well in my books, trying to create some new realities that people will accept.

[993] I mean, that's something that gives me great pleasure and greater insight into what makes humans valuable.

[994] I'm not the only person who's tempted to comment on that.

[995] And the optimism that permeates your work, optimism about the future. Ultimately, that optimism paves the way for building a better future. Yeah, I agree with that. So you asked your dad about the meaning of life, and he said love. Let me ask you the same question. What's the meaning of life? Why are we here, this beautiful journey we're on, in phase four, reaching for phase five of this evolution in information processing? Why?

[996] Well, I think I'd give the same answers as my father.

[997] Because if there were no love, and we didn't care about anybody, there'd be no point existing.

[998] Love is the meaning of life.

[999] The AI version of your dad had a good point.

[1000] Well, I think that's a beautiful way to end it.

[1001] Ray, thank you for your work.

[1002] Thank you for being who you are.

[1003] Thank you for dreaming about a beautiful future and creating it along the way.

[1004] And thank you so much for spending your really valuable time with me today.

[1005] This was awesome.

[1006] Well, this is my pleasure, and you have some great insights, both into me and into humanity as well.

[1007] So I appreciate that.

[1008] Thanks for listening to this conversation with Ray Kurzweil.

[1009] To support this podcast, please check out our sponsors in the description.

[1010] And now, let me leave you with some words from Isaac Asimov.

[1011] It is change, continuous change, inevitable change, that is the dominant factor in society today.

[1012] No sensible decision could be made any longer without taking into account not only the world as it is, but the world as it will be.

[1013] This, in turn, means that our statesmen, our businessmen, our everyman must take on a science fictional way of thinking. Thank you for listening, and hope to see you next time.