The Joe Rogan Experience XX
[0] Boom.
[1] Hello, Ben.
[2] Hey there.
[3] Good to see you, man. Yeah, it's a pleasure to be here.
[4] Thanks for doing this.
[5] Yeah, yeah.
[6] Thanks for having me. I've been looking at some of your shows in the last few days just to get a sense of how you're thinking about AI and crypto and the various other things I'm involved in.
[7] It's been interesting.
[8] Well, I've been following you as well.
[9] I've been paying attention to a lot of your lectures and talks and different things you've done over the last couple days as well, getting ready for this.
[10] It's... AI is... either people are really excited about it or they're really terrified of it. Those seem to be the two responses: either people have this dismal view of these robots taking over the world, or they think it's going to be some amazing sort of symbiotic relationship that we have with these things that's going to evolve human beings past the monkey stage that we're at right now. Yeah, and I tend to be on the latter, more positive side of this dichotomy. But I think one thing that has struck me in recent years is many people are now, you know, mentally confronting all the issues regarding AI for the first time.
[11] And I mean, I've been working on AI for three decades.
[12] And I first started thinking about AI when I was a little kid in the late 60s and early 70s, when I saw AIs and robots on the original Star Trek.
[13] So I guess I've had a lot of cycles to process the positives and negatives of it, where it's now, like, suddenly most of the world is thinking through all this for the first time.
[14] And, you know, when you first wrap your brain around the idea that there may be creatures 10,000 or a million times smarter than human beings, at first this is a bit of a shocker, right?
[15] And then, I mean, it takes a while to internalize this into your worldview.
[16] Well, there's also, I think, a problem with the term artificial intelligence, because it's intelligent. It's there. It's a real thing. Yeah, it's not artificial. It's not like a fake diamond or a fake Ferrari. It's a real thing, and it's not a great term, and there's been many attempts to replace it, with synthetic intelligence for example, right? But for better or worse, like, AI is there.
[17] It's part of the popular imagination.
[18] It seems it's an imperfect word, but it's not going away.
[19] Well, my question is, like, are we married to this idea of intelligence and of life being biological, being carbon-based tissue and cells and blood or insects or mammals or fish?
[20] Are we married to that too much?
[21] Do you think that it's entirely possible that what human beings are doing, what the people at the tip of AI right now who are really pushing the technology are doing, is really creating a new life form? That it's going to be a new thing, that just the same way we recognize wasps and buffaloes, artificial intelligence is just going to be a life form that emerges from the creativity and ingenuity of human beings?
[22] Well, indeed.
[23] So, I mean, I've long been an advocate of a philosophy I think of as patternism.
[24] Like it's the pattern of organization that appears to be the critical thing.
[25] And the individual cells and going down further, like the molecules and particles in our body are turning over all the time.
[26] So it's not the specific combination of elementary particles, which makes me who I am or makes you who you are.
[27] It's a pattern by which they're organized and the patterns by which they change over time.
[28] So, I mean, if we can create digital systems or quantum computers or femto computers or whatever it is manifesting the patterns of organization that constitute intelligence, I mean, then there you are.
[29] There is intelligence, right?
[30] So that's not to say that, you know, consciousness and experience is just about patterns of organization.
[31] There may be more dimensions to it.
[32] But when you look at what constitutes intelligence, thinking, cognition, problem solving, you know, it's the pattern of organization, not the specific material as far as we can tell.
[33] So we can see no reason, based on all the science that we know so far, that you couldn't make an intelligent system out of some other form of matter than the specific types of atoms and molecules that make up human beings.
[34] And it seems that we're well on the way to being able to do so.
[35] When you were studying intelligence, studying artificial intelligence, did you spend any time studying the patterns that insects seem to cooperatively behave with, like how leaf cutter ants build these elaborate structures underground and, you know, wasps build these giant colonies?
[36] And did you study how...
[37] I did, actually, yes.
[38] So I sort of grew up with the philosophy of complex systems, which was championed by the Santa Fe Institute in the 1980s.
[39] And the whole concept that there is an interdisciplinary complex system science, which includes biology, cosmology, psychology, sociology, the sort of universal patterns of self-organization.
[40] And, you know, ants and ant colonies have long been a paradigm case for that.
[41] And I used to play with the ant colonies in my backyard when I was a kid, and you'd lay down food in certain patterns.
[42] You'd see how the ants are laying down pheromones, and how the colonies are organizing in a certain way.
[43] And that's an interesting self-organizing complex system on its own.
[44] It's lacking some types of adaptive intelligence that human minds and human societies have, but it also has interesting self-organizing patterns.
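To make the pheromone mechanism he's describing concrete, here is a toy stigmergy simulation. This is an illustrative sketch, not anything from the conversation; all the constants and path lengths are made up. Ants pick between two paths in proportion to pheromone strength, shorter paths get reinforced faster per trip, and evaporation keeps the colony adaptive with no central controller.

```python
import random

# Toy stigmergy simulation (hypothetical illustration): ants choose between
# two paths to food; each trip deposits pheromone on the chosen path, and
# pheromone evaporates a little each step. The colony "decides" on the
# shorter path without any central controller.

PHEROMONE_DEPOSIT = 1.0
EVAPORATION = 0.05
TRIPS = 1000

pheromone = {"short": 1.0, "long": 1.0}  # initial trails equally attractive
path_length = {"short": 1, "long": 3}    # shorter path = more deposit per trip

for _ in range(TRIPS):
    # Each ant picks a path with probability proportional to pheromone level.
    total = pheromone["short"] + pheromone["long"]
    choice = "short" if random.random() < pheromone["short"] / total else "long"

    # Shorter paths get reinforced more strongly per unit time.
    pheromone[choice] += PHEROMONE_DEPOSIT / path_length[choice]

    # Evaporation keeps the system adaptive rather than frozen.
    for path in pheromone:
        pheromone[path] *= 1.0 - EVAPORATION

print(pheromone)  # pheromone mass concentrates on the short path
```

Running it shows essentially the pattern described: the colony-level "choice" emerges from many local deposit-and-evaporate interactions.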
[46] This reminds me of the novel Solaris by Stanisław Lem, which was published in the 60s, and which was really quite a deep novel, much deeper than the movie that was made of it.
[47] Did you ever read that book, Solaris?
[48] No. I'm not familiar with the movie either.
[49] Who made the movie?
[50] So there was an amazing, brilliant movie by Tarkovsky, the Russian director from the late 60s.
[51] Then there was a movie by Steven Soderbergh, which was sort of glammed up and Americanized.
[52] Oh, that was fairly recent, right?
[53] Yeah, 10 years ago. But that didn't get all the deep points of the novel.
[54] In the original novel, in essence, there's this ocean coating the surface of some alien planet, which has amazingly complex fractal patterns of organization, and it's also interactive: the patterns of organization on the ocean respond based on what you do.
[55] And when people get near the ocean, it causes them to hallucinate things and even causes them to see simulacra of people from their past, even the person who they had most harmed or injured in their past appears and interacts with them.
[56] So clearly this ocean has some type of amazing complexity and intelligence, from the patterns it displays and from the weird things it wreaks in your mind. So the people on Earth, trying to understand how the ocean is thinking, send a scientific expedition there to interact with that ocean.
[58] But it's just so alien, even though it monkeys with people's minds and clearly is doing complex things, no two-way communication is ever established.
[59] And eventually, the human expedition gives up and goes home.
[60] So it's a very Russian ending to the novel, I guess.
[61] I think I saw that.
[62] But the interesting message there is, I mean, there can be many, many kinds of intelligence, right? I mean, human intelligence is one thing, the intelligence of an ant colony is a different thing, the intelligence of a human society is a different thing, an ecosystem is a different thing. And there could be many, many types of AIs that we could build, with many, many different properties. Some could be wonderful to human beings, some could be horrible to human beings, some could just be alien minds that we can't even relate to very well.
[63] So we have a very limited conception of what an intelligence is if we just think by close analogy to human minds.
[64] And this is important if you're thinking about engineering or growing artificial life forms or artificial minds, because it's not just, can we do this?
[65] It's what kind of mind are we going to engineer or evolve?
[66] And there's a huge spectrum of possibilities.
[67] Yeah, that's one of the reasons why I asked you.
[68] If we had created, if human beings had created some sort of an insect, and this insect started organizing and developing these complex colonies like a leaf cutter ant and building these structures underground, people would go crazy.
[69] They would panic.
[70] They would think these things are organizing.
[71] They're going to build up the resources and attack us.
[72] They're going to try to take over humanity.
[73] I mean, what people are worried about more than anything when it comes to technology, I think, is the idea that we're going to be irrelevant, that we're going to be antiques, and that something new and better is going to take our place, which is a weird thing to worry about.
[74] Yeah, it's a weird thing to worry about, because it's sort of the history of biological life on Earth.
[75] I mean, what we know is there's complex things.
[76] They become more complex, going from single-celled organisms to multi-celled organisms.
[78] There seems to be a pattern leading up to us and us with this unprecedented ability to change our environment.
[79] That's what we can do, right?
[80] We can manipulate things, poison the environment, we can blow up entire countries with bombs if we'd like to, and we can also do wild creative things, like send signals through space and land on someone else's phone on the other side of the world almost instantaneously.
[81] We have incredible power, but we're also so limited by our biology.
[82] The thing I think people are afraid of, and I'm afraid of, but I don't know if it makes any sense, is that the next level of life, whatever artificial life is, or whatever the human symbiote is, that it's going to lack emotions, it's going to lack desires and needs, and all the things that we think are special about us: our creativity, our desire for attention and love, all of our camaraderie, all these different things that are sort of programmed into us with our genetics in order to advance our species, that we're so connected to these things.
[84] But they're so, they're the reason for war.
[85] They're the reason for lies, deception, thievery.
[86] There's so many things that are built into being a person that are responsible for all the woes of humanity.
[87] But we're afraid to lose those things.
[88] Yeah, I think it's almost inevitable by this point that humanity is going to create synthetic intelligences with tremendously greater general intelligence and practical capability than human beings have.
[90] I mean, I think I know how to do that with the software I'm working on with my own team, but if we fail, you know, there's a load of other teams who I think are a bit behind us, but are going in the same direction now, right?
[91] So you guys feel like you're at the tip of the spear with this stuff?
[92] I do, but I also think, that's not the most important thing from a human perspective.
[93] The most important thing is that humanity as a whole is quite close to this threshold event, right?
[94] How far do you think it's quite close?
[95] By my own gut feeling, five to 30 years, let's say.
[96] That's pretty close.
[97] But if I'm wrong, and it's 100 years, like in the historical time scale, that sort of doesn't matter.
[98] It's like, did the Sumerians create civilization 10,000 or 10,050 years ago?
[99] Like, what difference does it make, right?
[101] So I think we're quite close to creating superhuman artificial general intelligence.
[102] And that's in a way almost inevitable given where we are now.
[103] On the other hand, I think we still have some agency regarding whether this comes out in a way that respects human values and culture, which are important to us now, given who and what we are, or in a way that is essentially indifferent to human values and culture, in the same way that we're mostly indifferent to chimpanzee values and culture at this point.
[105] And I mean completely indifferent to insect values and culture.
[106] Not completely, if you think about it.
[107] I mean, if I'm building a new house, I will bulldoze a bunch of ants, but yet we get upset if we extinct an insect species, right?
[108] So we care to some level, but we would like the super AI to care about us more than we care about insects or great apes, absolutely, right?
[109] And I think this is something we can impact right now.
[110] And to be honest, I mean, in a certain part of my mind, I can think, well, like, in the end, I don't matter that much.
[111] My four kids don't matter that much.
[112] My granddaughter doesn't matter that much.
[113] Like, we are patterns of organization in a very long lineage of patterns of organization.
[115] But they matter very much to you.
[116] Yeah, and other, you know, dinosaurs came and went and Neanderthals came and went.
[117] Humans may come and go.
[118] The AIs that we create may come and go, and that's the nature of the universe.
[119] But on the other hand, of course, in my heart, from my situated perspective as an individual human, like, if some AI tried to annihilate my 10-month-old son, I would try to kill that AI, right?
[120] So as a human being situated in this specific species, place, and time, I care a lot about the condition of all of us humans.
[121] And so I would like to not only create a powerful general intelligence, but create one which is going to be beneficial to humans and other life forms on the planet, even while in some ways going beyond everything that we are, right?
[123] And there can't be any guarantees about something like this.
[124] On the other hand, humanity has really never had any guarantees about anything anyway, right?
[125] I mean, since we created civilization, we've been leaping into the unknown one time after the other in a somewhat conscious and self -aware way about it, from, you know, agriculture to language to math, to the Industrial Revolution, we're leaping into the unknown all the time, which is part of why we're where we are today instead of just another animal species, right?
[126] So we can't have a guarantee that the AGIs, the artificial general intelligences we create, are going to do what we consider the right thing, given our current value systems.
[127] On the other hand, I suspect we can bias the odds in the favor of human values and culture.
[128] And that's something I've put a lot of thought and work into alongside the basic algorithms of artificial cognition.
[129] Is the issue that the initial creation would be subject to our programming, but that it could perhaps program something more efficient and design something?
[130] Like if you build creativity into artificial general intelligence...
[131] I mean, you have to.
[132] I mean, general...
[133] Generalization is about creativity, right?
[134] Yeah, but is the issue that it would choose to not accept our values, which it might find...
[135] Well, clearly it will choose not to accept our values, and we want it to choose not to accept all of our values.
[136] So it's more a matter of whether the ongoing creation, evolution of new values occurs with some continuity and respect for the previous ones.
[137] So, I mean, I have four human kids now.
[138] One is a baby, but the other three are adults, right?
[139] And with each of them, I took the approach of trying to teach the kids what my values were, not just by preaching at them, but by entering with them into shared situations.
[140] But then, you know, when your kids grow up, they're going to go in their own different directions, right?
[141] Right, but these are humans.
[142] But they all have the same sort of biological needs, which is one of the reasons why we have these desires in the first place.
[143] Yet there still is an analogy.
[144] I think the AIs that we create, you can think of as our mind children, and we're starting them off with our culture and values, if we do it properly, or at least with a certain subset of the whole diverse, self-contradictory mess of human culture and values.
[146] But you know they're going to evolve in a different direction, and you want that evolution to take place in a reflective and caring way rather than a heedless way.
[147] Because if you think about it, the average human a thousand years ago, or even 50 years ago, would have thought you and me were, like, hopelessly immoral miscreants who would abandon all the valuable things in life, right? I mean, because of your hat... I'm an infidel, right? I haven't gone to church ever, I guess. I mean, my mother's lesbian, right? I mean, there's all these things that we take for granted now that, not that long ago, were completely against what most humans considered maybe the most important values of life.
[148] So, I mean, human values itself is completely a moving target.
[149] Right, and moving in our generation.
[150] Yeah, yeah, yeah, moving in our generation.
[151] Pretty radically.
[152] Very radically.
[153] When I think back, like, to my childhood, I lived in New Jersey for nine years of my childhood, and just the level of racism and anti-Semitism and sexism that were just ambient and taken for granted then...
[155] What years was this?
[156] Was this when you're...
[157] Between...
[158] Because I think we're the same age.
[159] We're both 51?
[160] Yeah, yeah, yeah.
[161] I'm born in 66.
[162] I lived in Jersey from 73 to 82.
[163] Okay, so I was there from 67 to 73.
[164] Oh, yeah, yeah.
[165] Right, right.
[166] So, yeah, I mean, like my...
[167] I mean, my sister went to the high school prom with a black guy.
[168] Uh -oh.
[169] And so we got our car turned upside down, the windows of our house smashed.
[170] And it was like a humongous thing.
[171] And it's almost unbelievable now, right?
[172] Because now no one would care whatsoever.
[173] It's just life, right?
[174] Well, certainly there's some fringe parts of this culture.
[175] Yeah, yeah.
[176] But still, the point is there is no fixed list of human values.
[177] It's an ongoing, evolving process.
[178] And what you want is for the evolution of the AI's values to be coupled closely with the evolution of human values, rather than going off in some utterly different direction that we can't even understand.
[179] But this is literally playing God, right?
[180] I mean, if you're talking about like trying to program in values.
[181] I don't think you can program in values that fully.
[182] You can program in a system for learning and growing values.
[183] And here, again, the analogy with human kids is not hopeless.
[184] Like telling your kids, these are the ten things that are important, doesn't work that well, right?
[185] What works better is you enter into shared situations with them.
[186] They see how you deal with the situations.
[187] You guide them in dealing with real situations, and that forms their system of values.
[188] And this is what needs to happen with AI.
[189] They need to grow up entering into real-life situations with human beings, so that the real-life patterns of human values, which are worth a lot more than the homilies that we enunciate formally, right? So that the real-life pattern of human values gets inculcated, like, into the intellectual DNA of the AI systems.
[192] And this is part of what worries me about the way the AI field is going at this moment, because, I mean, most of the really powerful narrow AIs on the planet now are involved with, like, selling people stuff they don't need, spying on people, or, like, figuring out who should be killed or otherwise abused by some government, right?
[193] So if the early-stage AIs that we build turn into general intelligences gradually, and these general intelligences are, you know, spy agents and advertising agents, then, like, what mindset do these early-stage AIs have as they grow up?
[195] Right.
[196] If they don't have any problem morally and ethically with manipulating us, which we're very malleable, right?
[197] We're so easy to manipulate.
[198] Well, and we're teaching them how to manipulate people and we're rewarding them for doing it successfully, right?
[199] So this is one of these things that from the outside point of view might not seem to be all that intelligent.
[200] It's sort of like gun laws in the U.S. Living in Hong Kong, I mean, most people don't have a bunch of guns sitting around their house.
[201] And coincidentally, there are not that many random shootings happening in Hong Kong, right?
[202] That's crazy.
[203] So, yeah, you look in the U.S. It's like, somehow you have laws that allow random lunatics to buy all the guns they want, and you have all these people getting shot.
[204] So similarly, like, from the outside, you could look at it like: this species is creating their successor intelligence, and almost all the resources going into creating their successor intelligence are going into making AIs to do surveillance, like military drones, and advertising agents that brainwash people to buy crap they don't need.
[205] Now, what's wrong with this picture?
[206] Isn't that just because that's where the money is?
[207] Like, this is the introduction to it. And then from there, we'll find other uses and applications for it.
[208] But, like, right now, that's where...
[209] The thing is, there's a lot of other applications.
[210] Financially viable applications.
[211] Well, yeah, the applications that are getting the most attention are the financially lowest-hanging fruit, right?
[212] So, for example, among many projects I'm doing with my SingularityNET team, we're looking at applying AI to, you know, diagnose agricultural disease.
[213] So you can look at images of plant leaves, you can look at data from the soil and atmosphere, and you can project whether disease in a plant is likely to progress badly or not, which tells you, do you need medicine for the plant?
[214] Do you need pesticides?
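As a rough illustration of the kind of model being described, here is a minimal sketch, with entirely hypothetical feature names and data, of projecting whether a plant disease is likely to progress badly from combined leaf and soil/atmosphere measurements. This is an editorial sketch of the general technique, not the actual SingularityNET system.

```python
# Minimal sketch of disease-progression prediction (hypothetical features/data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training rows: [leaf_spot_area, soil_moisture, humidity]
X = np.array([
    [0.02, 0.30, 0.40],
    [0.15, 0.55, 0.80],
    [0.01, 0.25, 0.35],
    [0.20, 0.60, 0.85],
])
y = np.array([0, 1, 0, 1])  # 1 = disease progressed badly

model = LogisticRegression().fit(X, y)

# Project risk for a new plant; a high probability would suggest intervening
# with treatment or pesticides, as described in the conversation.
new_plant = np.array([[0.12, 0.50, 0.75]])
print(model.predict_proba(new_plant)[0, 1])
```

A production system would presumably use image models on the leaf photos rather than a hand-picked scalar feature, but the decision structure (measurements in, intervention probability out) is the same.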
[215] Now, this is an interesting area of application.
[216] It's probably quite financially lucrative in a way, but it's a more complex industry than selling stuff online.
[217] So the fraction of resources going into AI for agriculture is very small compared to, like, e-commerce or something, right?
[218] That's a very specific aspect of agriculture, too, predicting diseases.
[219] Yeah, yeah, but there's a lot of specific aspects, right?
[220] So, I mean, AI for medicine, again, there's been papers on machine learning applied to medicine since the 80s and 90s, but the amount of effort going into that compared to advertising or surveillance is very small.
[221] Now, this has to do with the structure of the pharmaceutical business, as compared to the structure of the tech business.
[222] So, you know, when you look into it, there's good, there's good reasons for everything, right?
[223] But nevertheless, the way things are coming down right now is certain biases to the development of early stage AIs are very marked.
[224] And you could, you could see them.
[225] And I mean, I'm trying to do something about that together with my colleagues in SingularityNET.
[226] But, of course, it's sort of a David versus Goliath thing.
[227] It seemed, well, of course, you're trying to do something different, and I think it's awesome what you guys are doing.
[228] But it just makes sense to me that the first applications are going to be the ones that are more financially viable.
[229] It's like how the first applications were military, right?
[230] I mean, until about 10 years ago, 85% of all funding in AI was from U.S. plus Western European militaries.
[231] Well, what I'm getting at is it seems that money and commerce are inexorably linked to innovation and technology because there's this sort of thing that we do as a culture where we're constantly trying to buy and purchase bigger and better things.
[232] We always want the newest iPhone, the greatest, you know, laptop.
[233] We want the coolest electric cars, whatever it is.
[234] And this fuels innovation, this desire for new, greater things. Materialism in a lot of ways fuels innovation, because this is how...
[236] It does, but I think there's an argument that as we approach a technological singularity, we need new systems.
[237] Because if you look at how things have happened during the last century, what's happened is that governments have funded most of the core innovation.
[238] I mean, this is well known, that, like, most of the technology inside a smartphone was funded by the U.S. government, a little by European governments, the GPS and the batteries and everything, and then companies scaled it up, they made it user-friendly, they decreased the cost of manufacturing.
[239] And this process occurs with a certain time cycle to it, where, like, government spends decades funding core innovation in universities, and then industry spends decades figuring out how to scale it up and make it palatable to users.
[240] And, you know, this matured probably since World War II, this sort of modality for technology development.
[241] But now that things are developing faster and faster and faster, there's sort of not time for that cycle to occur, where the government and universities incubate new ideas for a while, and then industry scales them up.
[242] So the genie is out of the bottle, essentially.
[243] Yeah, but we still need a lot of new, amazing, creative innovation to happen, but somehow or other new structures are going to have to evolve to make it happen.
[245] And you can see everyone's struggling to figure out what these are.
[246] So, I mean, this is why you have big companies embracing open source.
[247] Google releases TensorFlow, and there's a lot of other different things.
[248] And I think some projects in the cryptocurrency world have been looking at that, too.
[249] Like, how do we use tokens to incentivize, you know, independent scientists and inventors to do new stuff without them having to be in a government research lab or in a big company.
[250] So I think we're going to need the evolution of new systems of innovation and of technology transfer as things are developing faster and faster and faster.
[251] And this is another thing that's sort of gotten me interested in the whole decentralized world and the blockchain world: the promise of new modes of economic and social organization that can, you know, bring more of the world into the research process and accelerate the technology transfer process.
[253] I definitely want to talk about that.
[254] But one of the things that I wanted to ask you is, when you're discussing this... well, I think what you're saying is one very important point: that we need to move past the military gatekeepers of technology, right?
[255] It's not just military now, though.
[256] It's big tech, which are advertising agencies in essence.
[257] Facebook, social media, things that are constantly predicting your next purchase, right?
[259] Yeah, because if you think about it, I mean, even in a semi-democracy like we have in the U.S., those who control the brainwashing of the public, in essence, control who votes for what. And who controls the brainwashing of the public is advertising agencies, and increasingly the biggest advertising agencies are the big tech companies, who are accumulating everybody's data and using it to program their minds to buy things.
[260] So this is what's programming the global brain of the human race.
[261] And of course, there are close links between big tech and the military.
[262] Look, Amazon has, what, a 25,000-person headquarters in Crystal City, Virginia, right next to the Pentagon.
[263] And I mean, in China, it's even more direct and unapologetic, right?
[264] So it's a new, like, military-industrial-advertising complex, which is guiding the evolution of the global brain on the planet.
[266] Well, we found that with this past election, right?
[267] With all the intrusion by foreign entities trying to influence the election, these giant houses set up to write bad stories about whoever they don't want to be in office?
[268] Yeah, in a way, that's almost a red herring.
[269] I mean, the Russian stuff is almost a red herring, but it revealed what the processes are, which are used to program.
[270] Oh, because I think whatever programming of Americans' minds is done by the Russians is minuscule compared to the programming of Americans' minds by the American corporate and government elite, right?
[271] But it's fascinating that anybody's even jumping in, as well as the American elite.
[272] Sure.
[273] It's interesting.
[274] And if you look at what's happening in China, that's like... yeah, yeah, they're way better at it than we are.
[275] Well, it's much more horrific, right?
[276] And that's, well, it's more, it's more professional.
[277] It's more polished.
[278] It's more centralized.
[279] Yeah.
[280] On the other hand, for almost everyone in China, China's a very good place to live.
[281] And, you know, the level of improvement in that country in the last 30 years has just been astounding, right?
[282] Like, I mean, you can't, you can't argue with how much better it's gotten there since Deng Xiaoping took over.
[283] It's tremendous.
[284] Because they're not, they embraced capitalism to a certain extent.
[285] They've created their own unique system.
[286] What labels you give it is almost arbitrary.
[287] They've created their own unique system.
[288] As a, you know, crazy hippie, libertarian, anarcho-socialist, freedom-loving maniac, that system rubs against my grain in many ways.
[290] On the other hand, empirically, if you look at it, it's improved the well -being of a tremendous number of people.
[291] So hopefully it evolves, and it's one step better than it used to be.
[292] Well, the way it's evolving now is not in a more freedom-loving...
[293] Well, it's not in a more freedom-loving and anarchic direction.
[294] One would say it's positive in some ways and negative in others, like most complex things.
[295] And Hong Kong, why do you live there?
[296] I fell in love with a Chinese woman. Oh, there you go. Good enough reason. Yeah, it was a great reason. We had a baby recently. She's not from Hong Kong; she's from mainland China. I met her when she was doing her PhD in computational linguistics in Xiamen. But that was what sort of first got me to spend a lot of time in China. But then I was doing some research at Hong Kong Polytechnic University, and then my good friend David Hanson was visiting me in Hong Kong. I introduced him to some investors there, which ended up with him bringing his company Hanson Robotics to Hong Kong.
[297] So now, after I moved there because of falling in love with Ruiting, then I brought my friend David there, then Hanson Robotics grew up there. And there's actually a good reason for Hanson Robotics to be there, because the best place in the world to manufacture complex electronics is, you know, Shenzhen, right across the border from Hong Kong.
[298] So now I've been working there with Hanson Robotics on the Sophia robots and other robots for a while, and I've accumulated a whole AI team there around Hanson Robotics and SingularityNET.
[299] So I mean, by now I'm there because my whole AI and robotics teams are there.
[300] Right.
[301] Makes sense.
[302] Do you follow the State Department's recommendations to not use Huawei devices? They believe that they're...
[303] Well, no. Have you heard that?
[304] Yeah.
[305] Pay attention to that?
[306] I think that the Chinese are spying on us.
[307] You know, I'm sure.
[308] You know, when I lived in Washington, D.C. for nine years, I did a bunch of consulting for various government agencies there.
[310] And my wife is a communist party member, actually.
[311] Well, just because she joined in high school when it was sort of suggested for her to join.
[312] So I'm sure I'm being watched by multiple governments.
[313] It doesn't, I don't have any secrets.
[314] It doesn't really matter.
[315] I'm not in the business of trying to overthrow any government.
[316] I'm in the business of trying to bypass traditional governments and traditional monetary systems and all the rest by creating new methods of organization of people and information.
[317] I understand that with you personally, but it is unusual if the government is actually spying on people through these devices.
[318] I doubt it's unusual.
[319] I doubt it's unusual at all.
[320] I mean, without going into too much detail, like, when I was in D.C. working with various government agencies, it became clear there is tremendously more information obtained by government agencies than most people realize.
[322] This was true way before Snowden and WikiLeaks and all these revelations.
[323] And what is publicly understood now is probably not the full scope of the information that governments have either.
[324] So, I mean, secrecy is pretty much dead.
[325] And David Brin, do you know David Brin?
[326] You should definitely interview David Brin.
[327] He's an amazing guy, but he's a well -known science fiction writer.
[328] He's based in Southern California, actually, San Diego.
[329] He wrote a book, oh, years ago, called The Transparent Society, where he said there's two possibilities: surveillance and sousveillance.
[330] It's like the power elite watching everyone, or everyone watching everyone.
[331] I think everyone watching everyone is inevitable.
[332] Yeah, so he articulated this as the only two viable possibilities, and he's like, we should be choosing and then creating which of these alternatives we want.
[333] So now, now the world is starting to understand what he was talking about back when he wrote that book.
[334] When did he write the book?
[335] Oh, I can't remember.
[336] I mean, it was well more than a decade ago.
[337] It's weird when some people just nail it on the head decades in advance.
[338] I mean, most of the things that are happening in the world now were foreseen by Stanisław Lem, the Polish author I mentioned.
[339] Valentin Turchin, a friend of mine who was the founder of Russian AI.
[340] He wrote a book called The Phenomenon of Science in the late 60s.
[341] Then, you know, in 1971 or '72, when I was a little kid, I read a book called The Prometheus Project by a physicist called Gerald Feinberg.
[342] You read a physicist's book when you were five years old?
[343] Yeah, I started reading when I was two, and my grandfather was a physicist.
[344] So I was reading a lot of stuff then.
[345] But Feinberg, in this book, he said, you know, within the next few decades, humanity is going to create nanotechnology.
[346] It's going to create machines smarter than people.
[347] And it's going to create the technology to allow human biological immortality.
[348] And the question will be, do we want to use these technologies, you know, to promote rampant consumerism?
[349] Or do we want to use these technologies to promote, you know, spiritual growth of our consciousness into new dimensions of experience.
[350] And what Feinberg proposed in this book in the late 60s, which I read in the early 70s... he proposed the UN should send a task force out to go to everyone in the world, every little African village, and educate the world about nanotech, life extension, and AGI, and get the whole world to vote on whether we should develop these technologies toward consumerism or toward consciousness expansion.
[351] So I read this when I'm a little kid. And it's like... even to a little kid, this seemed almost obvious.
[354] This makes total sense.
[355] Like, why doesn't everyone understand this?
[356] Then I tried to explain this to people.
[357] And I'm like, oh, shit, I guess it's going to be a while till the world catches on.
[358] So I instead decided I should build a spacecraft, go away from the world at rapid speed, and come back after, like, a million years or something, when the world was far more advanced.
[359] Or covered in dust.
[360] Yeah, right.
[361] Well, then you'd go away another million years to see what aliens have evolved.
[363] So now, pretty much the world agrees that life extension, AGI and nanotechnology are plausible things that may come about in the near future.
[364] The same question is there that Feinberg saw like 50 years ago, right?
[365] The same question is there.
[366] Like, do we develop this for rampant consumerism?
[367] Or do we develop this for amazing new dimensions of, you know, consciousness expansion and mental growth? But the UN is not, in fact, educating the world about this and polling them to decide democratically what to do.
[368] On the other hand, there's the possibility that by bypassing governments in the UN and doing something decentralized, you can create a democratic framework within which, you know, a broad swath of the world can be involved in a participatory way in guiding the direction of these advances.
[369] Do you think that it's possible that instead of choosing that we're just going to have multiple directions that it's growing in, that there's going to be consumer -based?
[370] There will be multiple directions, and that's inevitable.
[371] It's more a matter of whether anything besides the military-advertising complex gets a fair shake, right?
[372] So, I mean, if you look in the software development world, open source is an amazing thing. I mean, Linux is awesome, and it's led to so much AI being open source now.
[373] Now, open source didn't have to actually take over the entire software world like Richard Stallman wanted in order to have a huge impact, right?
[374] It's enough that it's a major force.
[375] It's a very hippie concept, isn't it?
[376] Open source in a lot of ways?
[377] In a way, but yet IBM has probably thousands of people working on Linux, right?
[378] So like Apple, it began as a hippie concept, but it became very practical, right?
[380] So, I mean, something like 75% of all the servers running the internet are based on Linux.
[381] You know, the vast majority of mobile phone OSes are Linux, right?
[382] So this hippie...
[383] So the vast majority being Android?
[384] Android is Linux.
[385] Yeah, yeah.
[386] So, I mean, this hippie crazy thing where no one owns the code, it didn't have to overtake the whole software economy and become everything to become highly valuable and inject a different dimension into things.
[387] And I think the same is true with decentralized AI, which we're looking at with SingularityNET.
[388] Like, we don't have to actually put Google and the U.S. and Chinese militaries and Tencent out of business, right?
[389] Although if that happens, that's, that's fine.
[390] But it's enough that we become an extremely major player in that ecosystem, so that this, you know, participatory and benefit-oriented aspect becomes a really significant component of how humanity is developing general intelligence.
[391] It's accepted, generally accepted, that human beings will consistently and constantly innovate, right?
[392] It just seems to be a characteristic that we have.
[393] Yep.
[394] Why do you think that is?
[395] And what do you think that, especially when in terms of creating something like artificial intelligence, like why build our successors?
[396] Like, why do that?
[397] Like, what is it about us that makes us want to constantly make bigger, better things?
[398] Well, that's an interesting question in the history of biology, which I may not be the most qualified person to answer.
[399] It is an interesting question.
[400] And I think it has something to do with the weird way in which we embody various contradictions that we're always trying to resolve.
[401] Like we, you mentioned ants, and ants are social animals, right?
[402] Whereas, like, cats are very individual.
[403] We're, like, trapped between the two, right?
[404] Like, we're somewhat individual and somewhat social.
[405] And then since we created civilization, it's even worse, because, I mean, we have certain aspects which are wanting to conform with the group and the tribe, and others which are wanting to innovate and break out of that.
[407] And we're sort of trapped in these biological and cultural contradictions, which tend to drive innovation.
[408] But I think there's a lot there that no one understands in the roots of the human psyche evolutionarily.
[409] But as an empirical fact, what you said is very true, right?
[410] Like, we're driven to seek novelty.
[411] We're driven to create new things.
[412] And this is certainly one of the factors which is driving the creation of AI.
[413] I don't think that alone would make the creation of AI inevitable.
[414] But the thing...
[415] Why don't you think it would make it inevitable if we consistently innovate?
[416] And it's always been a concept.
[417] I mean, you were talking about the concept existing 30 plus years ago.
[418] Well, I think a key point is that there's tremendous practical economic advantage and status advantage to be gotten from AI right now.
[419] And this is driving the advancement of AI to be incredibly rapid, right?
[420] Because there are some things that are interesting and would use a lot of human innovation, but they get very few resources.
[421] So, for example, my oldest son, Zarathustra, he's doing his PhD now.
[422] What is his name?
[423] Zarathustra.
[424] Whoa.
[425] My kids are Zarathustra Amadeus, Zebulon Ulysses, Scheherazade, and then the new one is Qorxi, Q-O-R-X-I, which is an acronym for quantum organized rational expanding intelligence.
[426] I was never happy with Ben.
[427] It's a very boring name.
[428] I'm Joe, I get it.
[429] Yeah, I had to do something more interesting with my kids.
[430] Anyway, Zarathustra is doing his PhD on the application of machine learning to automated theorem proving; basically, making AIs that can do mathematics better.
[432] And to me, that's like the most important thing we could be applying AI to, because, you know, mathematics is the key to all modern science and engineering.
[433] My PhD was in math originally.
[434] But the amount of resources going into AI for automating mathematics is not large at this present moment, although that's a beautiful and amazing area for invention and innovation and creativity.
[435] So I think what's driving our rapid push toward building AI, I mean, it's not just our creative drive.
[436] It's the fact there's tremendous economic value, military value, and human value.
[437] I mean, curing diseases, teaching kids.
[438] There's tremendous value in almost everything that's important to human beings in building AI, right?
[439] So you put that together with our drive to create and innovate.
[440] And this becomes an almost unstoppable force within human society.
[441] And what we've seen in the last three to five years is suddenly, you know, national leaders and titans of industry and even like pop stars, right?
[442] They've woken up to the concept that, wow, smarter and smarter AI is real.
[443] And this is going to get better and better like within years to decades, not centuries to millennia.
[444] So now the cat's out of the bag, nobody's going to put it back, and it's about, you know, how can we direct it in the most beneficial possible way?
[445] And as you say, it doesn't have to be just one possible way, right?
[446] Like, what I look forward to personally is bifurcating myself into an array of possible Bens.
[447] Like, I'd like to let one copy of me fuse itself with a superhuman AI mind and, you know, become a god or something beyond the god.
[448] And I wouldn't even be myself.
[449] I wouldn't even be myself anymore, right?
[450] I mean, you would lose all concepts of human self and identity.
[451] What would be the point of even holding any of it?
[452] Yeah, well, that's for the future; that's for the mega-Ben to decide, right?
[453] Mega Ben.
[454] Yeah, yeah.
[455] On the other hand, I'd like to let one of me remain in human form, you know, get rid of death and disease and the psychological issues, and just live happily forever in the people zoo, watched over by machines of loving grace, right?
[456] So, I mean, you can have, it doesn't have to be either or because once you can scan your brain and body and 3D print new copies of yourself, you could have multiple of you.
[457] Right, but isn't that a giant resource hog?
[458] I mean, that's...
[459] There's a lot of mass energy in the universe.
[460] In the universe, okay, that's assuming that we can escape this planet.
[461] Because if you're talking about just people with money cloning themselves, could you live in a world with a billion Donald Trumps?
[462] Because, like, literally, that's what we're talking about.
[463] We're talking about wealthy people, but wealthy people being able to reproduce themselves and just having this idea that they would like their ego to exist in multiple different forms, whether it's some super symbiote form that's connected to artificial intelligence or some biological form that's immortal or some other form that stands just as a normal human being, as we know in 2018.
[464] You'd have multiple versions of yourself over and over and over again like that.
[465] That's what you're talking about.
[466] Once you get to the point where you have a superhuman general intelligence that can do things like fully scan a human brain and body and 3D print more of them, by that point you're at a level where scarcity of material resources is not an issue at the human scale of doing things.
[467] Scarcity of human resources in terms of what the earth can hold.
[468] Scarcity of mass energy, scarcity of molecules to print more copies of yourself...
[469] I think that's not going to be the issue at that point.
[470] But what people are worried about is environmental concerns of overpopulation.
[471] Because people are worried about what they see in front of their faces right now, but most people are not thinking deeply enough about what potential would be there once you had superhuman AIs doing the manufacturing and the thinking.
[473] I mean, the amount of energy in a single grain of sand, if you had an AI able to appropriately leverage that energy, is tremendously more than most people think.
[474] And the amount of computing power in a grain of sand is like a quadrillion times all the people on Earth put together.
[475] What do you mean by that, the amount of computing power in a grain of sand?
[477] There's, well, the amount of computing power that could be achieved.
[478] Potentially fit into that size.
[479] By reorganizing the elementary particles in the grain of sand.
[480] Yeah, there's a number in physics called the Bekenstein bound, which is the maximum amount of information that can be stored in a certain amount of mass energy.
[481] So if the laws of physics as we know them now are correct, which they certainly aren't entirely, then that would be the amount of computing you can do in a certain amount of mass energy. We're very, very far from that limit right now, right?
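For reference, the standard statement of the Bekenstein bound he's paraphrasing gives the maximum number of bits of information, I, that can be stored in a sphere of radius R containing total energy E (this form is the textbook one, not a quote from the conversation):

```latex
I \le \frac{2\pi R E}{\hbar c \ln 2}
```

Plugging in the mass-energy of a sand grain gives an astronomically large bit count, which is the sense in which current computers sit "very, very far from that limit."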
[483] So, I mean, my point is, once you have something a thousand times smarter than people, what we imagine to be the limits now doesn't matter too much.
[484] So all of the issues that we're dealing with in terms of environmental concerns, that could all potentially be...
[485] They're almost certainly going to be irrelevant.
[486] Irrelevant.
[487] There may be other problem issues that we can't even conceive at this moment, of course.
[488] But the intelligence would be so vastly superior to what we have currently that they'll be able to find solutions to virtually every single problem we have.
[490] Well, that's right.
[491] Fukushima, ocean fish depopulation, all that stuff will be...
[492] It's all just arrangements of molecules, man. Whoa, no, you're freaking me out, man. People don't want to hear that, though.
[493] Environmental people don't want to hear that, right?
[494] Well, I mean, I'm also on an everyday life basis, like, until we have these super AIs, I don't like the garbage washing up on the beach near my house either, right?
[495] So, I mean, on an everyday basis, of course, we want to promote health in our bodies and in our environments right now, as long as there's, you know, measurable uncertainty regarding when the benevolent super AIs will come about.
[496] Still, I think the main question isn't whether once you have a beneficially disposed super AI, it could solve all our current petty little problems.
[497] The question is, you know, can we wade through the mire of modern human society and psychology to create this beneficial super AI in the first place?
[499] I believe I know how to create a beneficial super AI, but it's a lot of work to get there.
[500] And of course, there's many teams around the world working on vaguely similar projects now.
[501] And it's not obvious what kind of super AI we're actually going to get once we get there.
[502] Yeah, it's all just guesses.
[503] At this point, right?
[504] It's more or less educated guesses, depending on who's doing the guessing.
[505] Would you say that it's almost like we're in a race of the primitive primate biology versus the potentially beneficial and benevolent artificial intelligence that the best aspects of this primate can create?
[506] That it's almost a race to see who's going to win?
[507] Is it the warmongers and the greedy whores that are smashing the world under their boots?
[508] Or is it the scientists that are going to figure out some superintelligent way to solve all of our problems?
[510] I look at it more as a struggle between different modes of social organization than individual people.
[511] I mean, like, when I worked in D.C. with intelligence agencies, most of the people I met there were really nice human beings who believed they were doing the best for the world, even if some of the things they were doing, like, I thought were very much not for the best of the world, right?
[512] So, I mean, military mode of organization or large corporations as a mode of organization are, in my view, not generally going to lead to beneficial outcomes for the overall species and for the global brain.
[513] And the scientific community, the open source community, I think, are better modes of organization.
[514] and the better aspects of the blockchain and crypto community have a better mode of organization.
[515] So I think if this sort of open, decentralized mode of organization can marshal more resources as opposed to this centralized authoritarian mode of organization, then I think things are going to come out for the better.
[516] And it's not so much about bad people versus good people.
[517] You can look at, like, the corporate mode of organization as almost a virus that's colonized a bunch of humanity and is sucking people into working according to this mode.
[518] And even if they're really good people and the individual task they're working on isn't bad in itself, they're working within this mode that's leading their work to be used for ultimately a non-good end.
[520] Yeah, that is a fascinating thing about corporations, isn't it? The diffusion of responsibility: being part of a gigantic group, you as an individual don't necessarily feel connected or responsible for the ultimate goals of the group.
[522] And even the CEO isn't fully responsible.
[523] Like if the CEO does something that isn't in accordance with the higher goals of the organization, they're just replaced, right?
[524] So, I mean, there's no one person who's in charge.
[525] It's really like it's like an ant colony.
[526] It's like its own organism.
[527] Yeah.
[528] And I mean, it's us who have let these organisms become parasites on humanity.
[530] In this way, in some ways, the Asian countries are a little more intelligent than Western countries, in that Asian governments realize the power of corporations to mold society, and there's a bit more feedback between the government and corporations, which can be for better or for worse.
[531] But in America, there's some ethos of, like, free markets and free enterprise, which is really not taking into account the oligopolistic nature of modern markets.
[532] But in Asian countries, isn't it that the government is actually suppressing information as well?
[533] They're also suppressing Google.
[534] Well, in South Korea, no. I mean, South Korea, if you look at that...
[535] It's one of the only ones.
[536] Well, Singapore, I mean...
[537] Yeah?
[538] Really, Singapore is ruthless in their drug laws and some of their archaic...
[539] Well, so is the U.S. They're far worse, though.
[540] Singapore gives you the death penalty for marijuana.
[541] They do.
[542] Yeah, yeah.
[543] Yeah, I mean...
[544] South Korea is an example which has roughly the same level of personal freedoms as the U.S., more in some ways, less in others.
[545] Massive electronic innovation.
[546] Well, the interesting thing there politically is, I mean, they were poorer than two-thirds of sub-Saharan African nations in the late 60s, and it was through the government intentionally stimulating corporate development toward manufacturing and electronics that they grew.
[548] Now, I'm not holding that up as a great paragon for the future or anything, but it does show that there's many modes of organization of people and resources other than the ones that we take for granted in the U.S. I don't think Samsung and LG are the ideal for the future either, though.
[549] I mean, I'm much more interested in, you know.
[550] You're interested in blockchain.
[551] I'm interested in open source.
[552] I'm interested in blockchain.
[553] Basically, I'm interested in anything that's, you know, open and participatory in nature.
[554] Open and participatory and also disruptive, right?
[555] As well.
[556] Yeah.
[557] Because I think that's, I think that is the way to be ongoingly disruptive and open.
[558] Open source is a good example of that.
[559] Like, when the open source movement started, they weren't thinking about machine learning.
[560] But, you know, the fact that open source is out there and is then prevalent in the software world, that paved the way for AI to now be centered on open source algorithms.
[561] So right now, even though big companies and governments dominate the scalable rollout of AI, the invention of new AI algorithms is mostly done by people creating new code and putting it on GitHub or GitLab or their open source repositories.
[562] Open source is self -explanatory in its title, pretty much.
[563] People kind of understand what it is.
[564] It means that various coders get to share in this code and the source code, and they get to innovate, and they all get to participate and use each other's work, right?
[565] Right.
[566] But blockchain is confusing for a lot of people.
[567] Could you explain that?
[568] Sure.
[569] I mean, blockchain itself is almost a misnomer, so things are confusing at every level, right?
[570] So we should start with the idea of a distributed ledger, which is basically like a distributed Excel spreadsheet or database.
[571] It's just a store of information, which is not stored just in one place, but there's copies of it in a lot of different places.
[572] Every time my copy of it is updated, everyone else's copy of it has got to be updated.
[573] And then there's various bells and whistles like sharding where, you know, it could be broken in many pieces, and each piece is stored many places or something.
[574] So that's a distributed ledger, and that's just distributed computing.
[575] Now, what makes it more interesting is when you layer decentralized control onto that.
[576] So imagine you have this distributed Excel spreadsheet or distributed database.
[577] There's copies of it stored in a thousand places.
[578] But to update it, you need, like, 500 of those thousand people who own the copies to vote, yeah, let's do that update, right? So then you have a distributed store of data, and you have, like, a democratic voting mechanism to determine when all those copies can get updated together, right? So then what you have is a data storage and update mechanism that's controlled in a democratic way by the group of participants, rather than by any one central controller. And that can have all sorts of advantages. I mean, for one thing, it means that, you know, there's no one controller who can go rogue and screw with all the data without telling anyone.
[580] It also means there's no one who some lunatic can go hold a gun to their head and shoot them for what data updates were made because, you know, it's controlled democratically by everybody, right?
[581] It has ramifications in terms of, you know, legal defensibility.
[582] And, I mean, you could have some people in Iran, some in China, some in the U.S., and updates to this whole distributed data store are made by democratic decision of all the participants.
[583] Then where cryptography comes in is when I vote, I don't have to say, yeah, this is Ben Gertzell voting for this update to be accepted or not.
[584] It's just ID number 1357264.
[585] And then encryption is used to make sure that, you know, it's the same guy voting every time that it claims to be without needing like your passport number or something, right?
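(For readers who want to see the mechanism being described here in code, below is a minimal sketch: every participant holds a full copy of the ledger, an update is applied to all copies only when a majority of pseudonymous keyholders approve it, and a keyed hash stands in for the digital signature proving the same keyholder voted each time. All class and variable names are invented for illustration; real chains use proper digital signatures and Byzantine-fault-tolerant consensus, not this toy simulation.)

```python
# Toy sketch of a distributed ledger with decentralized (majority-vote) control.
# Hypothetical names throughout; illustrative only.
import hashlib
import hmac

class Voter:
    """A participant known to others only by a pseudonymous ID."""
    def __init__(self, secret: bytes):
        self.secret = secret
        # The pseudonymous ID is derived from the secret, like "ID number 1357264".
        self.voter_id = hashlib.sha256(secret).hexdigest()[:12]

    def sign(self, message: bytes) -> bytes:
        # Stand-in for a digital signature: proves the same keyholder voted,
        # without revealing who they are.
        return hmac.new(self.secret, message, hashlib.sha256).digest()

class ReplicatedLedger:
    """Every participant holds a full copy; updates need a majority of votes."""
    def __init__(self, voters):
        self.voters = {v.voter_id: v for v in voters}
        self.copies = {v.voter_id: [] for v in voters}  # one copy per participant

    def propose_update(self, entry: str, approvals):
        message = entry.encode()
        valid = sum(
            1 for voter_id, sig in approvals
            if voter_id in self.voters
            and hmac.compare_digest(sig, self.voters[voter_id].sign(message))
        )
        if valid > len(self.voters) // 2:       # democratic threshold: > 50%
            for copy in self.copies.values():    # update every replica together
                copy.append(entry)
            return True
        return False

voters = [Voter(bytes([i]) * 16) for i in range(5)]
ledger = ReplicatedLedger(voters)
entry = "alice pays bob 10 tokens"
approvals = [(v.voter_id, v.sign(entry.encode())) for v in voters[:3]]  # 3 of 5
assert ledger.propose_update(entry, approvals)  # majority reached; all copies updated
```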
[586] What's ironic about it is it's probably one of the best ways ever conceived to actually vote in this country.
[587] Yeah, sure.
[588] It is kind of ironic.
[589] There's a lot of applications for it.
[590] That's right.
[591] So that's, I mean, that's the core mechanism. Where the term blockchain comes from is the data structure: the data in this distributed database is stored in a chain of blocks, where each block contains data.
[592] The thing is, not every so -called blockchain system even uses a chain of blocks now, like some use a tree or a graph of blocks or something.
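(To make the chain-of-blocks structure itself concrete, here is a toy version in Python: each block stores its data plus the hash of the previous block, so tampering with any earlier block breaks every later link. Purely illustrative; there is no mining or consensus here.)

```python
# Minimal hash-linked "chain of blocks"; illustrative names only.
import hashlib
import json

def make_block(data, prev_hash: str) -> dict:
    block = {"data": data, "prev_hash": prev_hash}
    # The block's own hash commits to both its data and its predecessor.
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
for payload in ["tx: a->b 5", "tx: b->c 2"]:
    chain.append(make_block(payload, chain[-1]["hash"]))

# Verifying the chain: every block must reference the previous block's hash.
for prev, cur in zip(chain, chain[1:]):
    assert cur["prev_hash"] == prev["hash"]
```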
[593] Is it a bad term?
[594] I mean, it's an all right term.
[595] Is it like AI?
[596] Just one of those terms we're stuck with?
[597] Yeah, yeah.
[598] It's one of those terms we're stuck with, even though it's not quite technically accurate anymore.
[600] I mean, because I don't know another buzzword for it, right?
[601] What it is, it's a distributed ledger with encryption and decentralized control.
[602] And blockchain is the buzzword that's come about for that.
[603] Now, what got me interested in blockchain really is this decentralized control aspect.
[604] So my wife, who I've been with for 10 years now, she dug up recently something I'd forgotten, which is a web page I'd made in 1995, like a long time ago, where I'd said, hey, I'm going to run for president on the decentralization platform.
[605] I'd completely forgotten that crazy idea.
[606] I was very young then. I had no idea what an annoying job being president would be, right?
[607] So the idea of decentralized control seemed very important to me back then, which is well before Bitcoin was invented, because I could see a global brain is evolving on the planet involving humans, computers, communication devices, and we don't want this global brain to be controlled by a small elite. We want the global brain to be controlled in a decentralized way.
[608] So that's really the beauty of this blockchain infrastructure.
[609] And what got me interested in the practical technologies of blockchain was really when Ethereum came out and you had the notion of a smart contract.
[610] What's Ethereum?
[611] Ethereum, yeah.
[612] So what is that?
[613] Well, so the first blockchain technology was Bitcoin, right?
[614] Which is a well -known cryptocurrency now.
[615] Ethereum is another cryptocurrency, which is the number two cryptocurrency right now.
[616] That's how out of the loop I am.
[617] Did you know about it?
[618] You did?
[619] However, Ethereum came along with a really nice software framework.
[620] So it's not just like a digital money like Bitcoin is, but Ethereum has a programming language called Solidity that came with it.
[621] And this programming language lets you write what are called smart contracts.
[622] And again, that's sort of a misnomer because a smart contract doesn't have to be either smart or a contract, right?
[623] But it was a cool name, right?
[624] Right.
[625] What does it mean then if it's not a smart contract?
[626] It's like a programmable transaction.
[627] Okay.
[628] So you can program a legal contract or you can program a financial transaction.
[629] So a smart contract, it's a persistent piece of software that embodies like a secure, encrypted transaction between multiple parties.
[630] So pretty much like anything on the back end of a bank's website or a transaction between two companies online, a purchasing relationship between you and a website online, this could all be scripted in a smart contract in a secure way, and then it would be automated in a simple and standard way.
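(A rough sketch of the "programmable transaction" idea in plain Python: an escrow that releases funds only when both parties confirm. On Ethereum this logic would be written in Solidity and executed by the nodes of the chain; the class and method names below are invented for illustration, not any real contract API.)

```python
# Toy "programmable transaction": an escrow whose terms are enforced by code.
class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.deposited = False
        self.confirmed = set()

    def deposit(self, party: str):
        if party == self.buyer:
            self.deposited = True

    def confirm(self, party: str):
        if party in (self.buyer, self.seller):
            self.confirmed.add(party)

    def settle(self):
        # The contract itself enforces the terms: no bank or escrow agent needed.
        if self.deposited and self.confirmed == {self.buyer, self.seller}:
            return f"release {self.amount} to {self.seller}"
        return "funds stay locked"

deal = EscrowContract("joe", "ben", 100)
deal.deposit("joe")
deal.confirm("joe")
deal.confirm("ben")
print(deal.settle())  # -> release 100 to ben
```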
[631] So the vision that Vitalik Buterin, who was the main creator behind Ethereum, had is to basically make the internet into a giant computing mechanism rather than mostly like an information storage and retrieval mechanism: make the internet into a giant computer by making a really simple programming language for scripting transactions among different computers and different parties on the internet, where you have encryption and you have democratic decision-making and distributed storage of information, like, programmed into this world computer, right?
[632] And that was a really cool idea.
[633] And the Ethereum blockchain and solidity programming language made it really easy to do that.
[634] So it made it really easy to program like distributed secure transaction and computing systems on the Internet.
[635] So I saw this.
[636] I thought, wow, like now we finally have the tool set that's needed to implement some of this.
[637] How popular is this?
[638] It's very popular.
[639] Yeah.
[640] I mean, basically almost every ICO that was done in the last couple years was done on the Ethereum blockchain.
[642] What's an ICO?
[643] Initial coin offering.
[644] Oh, okay.
[645] So for Bitcoins.
[646] Not Bitcoin.
[647] I'm sorry, cryptocurrencies.
[648] Cryptocurrencies.
[649] So they've used this technology for offerings.
[650] Right.
[651] So what happened in the last couple years is a bunch of people realized you could use this Ethereum programming framework to create a new cryptocurrency, like a new artificial money, and then you could try to get people to use your new artificial money for certain types of...
How many artificial coins are there?
Thousands, maybe more.
Yeah. But I mean, how popular is Bitcoin, right?
Bitcoin is by far the most popular. Ethereum is number two, and there's a bunch of others.
I mean, in comparison, like, how much bigger is Bitcoin than Ethereum?
I don't know, a factor of three to five. Well, I don't know, maybe just a factor of two now.
[652] Actually, last year, Ethereum almost took over Bitcoin.
[653] When Bitcoin started crashing?
[654] Yeah, yeah.
[655] Now Ethereum is back down.
[656] It might be half or a third of Bitcoin.
[657] Does that worry you the fluctuating value of these things?
[658] Well, to my mind, creating artificial monies is one tiny bit of the potential of what you could do with the whole blockchain tool set.
[659] It happened to become popular initially because it's where the money is, right?
[660] It is money, yeah.
[661] It is money, and that's interesting to people.
[662] But on the other hand, what it's really about is making a world computer.
[663] It's about scripting with a simple programming language, all sorts of transactions between people, companies, whatever, all sorts of exchanges of information.
[664] So, I mean, it's about decentralized voting mechanisms.
[665] It's about AIs being able to send data and processing for each other and pay each other for their transactions.
[666] So, I mean, it's about automating supply chains and shipping and e -commerce.
[667] So, in essence, you know, just like computers and the Internet started with a certain small set of applications and then pervaded almost everything, right?
[668] It's the same way with blockchain technology.
[669] It started with digital money, but the core technology is going to pervade almost everything because there's almost no domain of human pursuit that couldn't use, like, security through cryptography, some sort of participatory decision making, and then distributed storage of information, right?
[670] And these things are also valuable for AI, which is how I got into it in the first place.
[671] I mean, if you're making a very, very powerful AI that is going to, you know, gradually, through the practical value it delivers, grow up to be more and more and more intelligent...
[673] I mean, this AI should be able to engage a large party of people and AIs in participatory decision-making.
[674] The AI should be able to store information, you know, in a widely distributed way, and the AI certainly should be able to use, you know, security and encryption to validate who are the parties involved in its operation.
[675] And I mean, these are the key things behind blockchain technology.
[676] So, I mean, the fact that blockchain began with artificial currencies, to me, is a detail of history, just like the fact that the Internet began as like a nuclear early warning system, right?
[677] I mean, it did.
[678] It's good for that.
[679] But as it happens, it's also even better for a lot of other things.
[680] Yeah, the solution for the financial situation that we find ourselves in... one of the more interesting things about cryptocurrencies is that someone said, okay, look, obviously we all kind of agree that our financial institutions are very flawed.
[681] The system that we operate under is very fucked up.
[682] So how do we fix that?
[683] Well, send in the super nerds.
[684] And so they figure out a new currency.
[685] Now we've got to send in the super AI.
[686] Super AI.
[687] Well, first the super nerds and then the super AI.
[688] I mean, obviously, who is the guy that they think, this fake person, this maybe-not-real person, that came up with Bitcoin?
[690] Oh, Satoshi Nakamoto.
[691] Do you have any suspicions as to who this is?
[692] I can neither confirm nor deny that.
[693] Oh, okay, okay.
[694] Yeah, you wouldn't be on the inside.
[695] We'll talk later.
[696] But that this is, it's very, it's very interesting, but it's also very promising.
[697] I have, like, high optimism for cryptocurrencies, because I think that kids today are looking at it with much more open eyes than, you know, grandparents, fathers.
[698] Grandfathers are looking at Bitcoin.
[699] They're going, get out of here.
[700] I'm a grandfather.
[701] I'm sure you are, but you're an exceptional one.
[702] But there's a lot of people that are older that just, they're not open to accepting these ideas.
[703] But I think kids today, in particular the ones that have grown up with the internet as a constant force in their life.
[704] I think they're more likely to embrace something along those lines.
[705] Well, yeah, so there's no doubt that, you know, cryptographic formulations of money are going to become the standard.
The question is... you think that's going to be the standard worldwide?
[706] That will happen.
[707] Yeah, however it could happen potentially in a very uninteresting way.
[708] How's that?
[709] You could just have the e-dollar.
[710] I mean, a government could just say we will create this cryptographic token, which counts as a dollar.
[711] I mean, most dollars are just electronic anyway.
[712] Right.
[713] Right.
[714] So what habitually happens is technologies that are invented to subvert the establishment are converted to a form where they help bolster the establishment instead.
[715] I mean, and in financial services, this happens very rapidly.
[716] Like PayPal, Peter Thiel and those guys started PayPal thinking they were going to obsolete fiat currency and make an alternative to the currencies run by nation states.
[717] Instead, they were driven to make it a credit card processing front end, right?
[718] So that's one thing that could happen with cryptocurrency is it just becomes a mechanism for, you know, governments and big companies and banks to do the things more efficiently.
[719] So what's interesting isn't so much the digital money aspect, although it is in some ways a great way to do digital money.
[720] What's interesting is, with all the flexibility it gives you to script, you know, complex computing networks, there is the possibility to script new forms of, you know, participatory democratic self-organizing networks.
[721] So blockchain, like the internet or computing, is a very flexible medium.
[722] You could use it to make tools of oppression, or you could use it to make tools of amazing growth, growth and liberation.
[723] And obviously, we know which one I'm more interested in.
[724] Yeah.
[725] Now, what is blockchain currently being used for?
[726] Like, what, what different applications?
[727] Because it's not just cryptocurrency.
[728] They're using it for a bunch of different things now, right?
[729] They are.
[730] I would say, it's very early stage.
[731] So probably the...
[732] How early?
[733] Well, the heaviest uses of blockchain now are probably inside large financial services companies, actually.
[734] So if you look at...
[735] It feels.
[736] Ethereum, the project I mentioned... so Ethereum is run by an open foundation, the Ethereum Foundation.
[737] Then there's a consulting company called ConsenSys, which is a totally separate organization that was founded by Joe Lubin, who was one of the founders of Ethereum in the early days.
[738] And ConsenSys has, you know, it's funded a bunch of the work within the Ethereum Foundation and community, but ConsenSys has done a lot of contracts just working with governments and big companies to customize code based on Ethereum to help with their internal operations.
[739] So actually, a lot of the practical value has been with stuff that isn't in the public eye that much, but it's like back end inside of companies.
[740] In terms of practical customer-facing uses of cryptocurrency, I mean, the Tron blockchain, which is different than Ethereum, that has a bunch of games on it, for example, and some online gambling for that matter.
[742] So that's, uh, that's gotten a lot of users.
But they're online games. Like, how do they use that?
[744] Well, it's a payment mechanism.
[745] Oh, I see.
[746] But this is one of the things there's a lot of hand-wringing about in the cryptocurrency world now, is gambling.
[747] No, just the fact that there aren't that many big consumer-facing uses of cryptocurrency.
[748] I mean, everyone would like there to be.
That was the idea.
[750] And this is one of the things we're aiming at with our SingularityNet project, you know, by putting AI on the blockchain in a highly effective way. And then we also have these two tiers.
[751] So we have the SingularityNet Foundation, which is creating this open source decentralized platform in which AIs can talk to other AIs and, you know, like ants in a colony, group together to form smarter and smarter AI. Then we're spinning off a company called Singularity Studio, which will use this decentralized platform to help big companies integrate AI into their operations.
[752] So with the Singularity Studio company, we want to get all these big companies using the AI tools in the SingularityNet platform, and then we want to drive, you know, massive usage of blockchain in the SingularityNet that way.
[753] So that's, if we're successful with what we're doing, this will be, you know, within a year from now or something, by far the biggest usage of blockchain outside of financial exchange: our use of blockchain within SingularityNet for AI, basically for customers to get the AI services that they need for their businesses and then for AIs to transact with other AIs, paying other AIs for doing services for them.
[754] Because this, this, I think, is a path forward.
[755] It's like a society and economy of minds.
[756] It's not like one monolithic AI.
[757] It's a whole bunch of AIs, created by different people all over the world, which not only are in a marketplace providing services to customers, but each AI is asking questions of each other and then rating each other on how good they are, sending data to each other and paying each other for their services.
[758] So this network of AIs can have intelligence emerge at the whole network level, as well as there being intelligence in each component.
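(A toy sketch of that "economy of minds": agents that answer requests they can handle, outsource the rest to the highest-rated agent that can, pay a fee in tokens, and rate the helper afterward. All names are invented for illustration; the real SingularityNet protocol involves on-chain escrow, service discovery, and a richer reputation system.)

```python
# Toy economy of minds: agents paying and rating each other; invented names.
class Agent:
    def __init__(self, name: str, skills: set, balance: int = 100):
        self.name, self.skills, self.balance = name, skills, balance
        self.ratings = []  # scores given by other agents after jobs

    def handle(self, task: str, market: list, fee: int = 10):
        if task in self.skills:
            return f"{self.name} solved {task}"
        # Outsource: pick the highest-rated agent that has the skill.
        helpers = [a for a in market if task in a.skills and a is not self]
        if not helpers:
            return None
        helper = max(helpers, key=lambda a: sum(a.ratings))
        self.balance -= fee          # pay the helper with tokens
        helper.balance += fee
        helper.ratings.append(1)     # rate the helper after the job
        return f"{self.name} paid {helper.name} to solve {task}"

market = [Agent("vision_ai", {"label_image"}),
          Agent("language_ai", {"translate"}),
          Agent("general_ai", set())]
print(market[2].handle("translate", market))
# -> general_ai paid language_ai to solve translate
```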
[759] And is it also fascinating to you that this is not dependent upon nations?
[760] This is a worldwide endeavor.
[761] I think that's going to be important once it starts to get a very high level of intelligence.
[762] Like in the early stages, okay, what would it hurt?
[763] Like if I had in my own database a central record of everything, like I'm an honest person, I'm not going to rip anyone off.
[764] But once we start to make a transition toward artificial general intelligence in this global decentralized network, which has component AIs from every country on the planet, like at that point, once it's clear you're getting toward AGI, a lot of people will want to step in and control this thing.
[765] You know, by law, by military might, by any means necessary.
[766] By that point, the fact that you have this open, decentralized network underpinning everything, like, this gives an amazing resilience to what you're doing.
[767] Like who can shut down Linux, who can shut down Bitcoin?
[768] Nobody can, right?
[769] You want AI to be like that.
[770] You want it to be a global upsurge of creativity and mutual benefit from people all over the planet, which no powerful party can shut down even if they're afraid that it threatens their hegemony.
[771] It's very interesting because in a lot of ways it's a very elegant solution to what's an obvious problem.
[772] Yeah.
[773] Just as the internet is an elegant solution to what's, in hindsight, an obvious problem, right?
[774] Distribution of information.
[775] Yeah, yeah, yeah.
[776] To communicate.
[777] Yeah.
[778] But this is extra special to me because if I was a person running a country, I would be terrified of this shit.
[779] I'd be like, well, this is what's going to take power away.
[780] That depends which country.
[781] If you're a person running the U.S. or China, you would have a different relationship than if you're a person like... I know the prime minister of Ethiopia, Abiy Ahmed, who has a degree in software engineering.
[783] And he loves this.
[784] But of course, Ethiopia isn't in any...
[785] Suppressing any other countries, right?
[786] And they're not in any danger of individually, like, taking global AI hegemony, right?
[787] So for the majority of countries in the world, they like this for the same reason they like Linux, right?
[788] I mean, I mean, this is something in which they have an equal role to anybody else.
[789] Right.
[790] Right, the superpowers.
[791] And you see this among companies also, though.
[792] So a lot of big companies that we're talking to, they like the idea of this decentralized AI fabric because, I mean, if you're not Amazon, Google, Microsoft, Tencent, Facebook, so on.
[793] If you're another large corporation, you don't necessarily want all your AI and all your data to be going into one of this handful of large AI companies.
[794] You would rather have it be in a secure, decentralized platform.
[795] And I mean, this is the same reason that, you know, Cisco and IBM, they run on Linux.
[796] They don't run on Microsoft, right?
[797] So if you're not one of the handful of large governments or large corporations that happen to be in a leading role in the AI ecosystem, then you would rather have this equalizing and decentralized thing because everyone gets to play.
[798] Yeah, what would be the benefit of running it on Linux versus Microsoft?
[799] Well, you're not at the behest of some other big company. I mean, imagine if you were Cisco or GM or something and all of your internal machines, all your servers, are running on Microsoft. What if Microsoft increases their price or removes some feature? Then you're totally at their behest, right? And with AI the same thing is true. I mean, if you put all your data in some big company's server farm and you're analyzing all your data on their algorithms and that's critical to your business model, what if they change their AI algorithm in some way? Then, I mean, your business is basically controlled by this other company. So, I mean, having a decentralized platform in which you're, you know, an equal participant along with everybody else is actually a much better position to be in.
[800] And this, I think, is why we can succeed with this plan of having this, you know, decentralized SingularityNet platform and this Singularity Studio enterprise software company, which mediates between the decentralized platform and big companies.
[801] I mean, it's because most companies and governments in the world, you know, they don't want hegemony of a few large governments and corporations either.
[803] And you can see this in a lot of ways.
[804] You can see this in the embrace of Linux and Ethereum by many large corporations.
[805] You can also see, like, in a different way, you know, the Indian government, you know, they rejected an offer by Facebook to give free internet to all Indians because Facebook wanted to give like mobile phones that would give free internet but only to access Facebook, right?
[806] India's like, well, no thanks, right?
[807] And India is now giving, they're now creating laws that any internet company that collects data about Indian people has to store that data in India, which is so the Indian government can subpoena that data when they want to, right?
[808] So, so you're already seeing a bunch of resistance against hegemony by a few large governments or large corporations by other companies and other governments.
[809] And I think this is very positive and is one of the factors that can foster the growth of a decentralized AI ecosystem.
[810] Is it fair to say that the future of AI is severely dependent upon who launches it first?
[811] Like whoever, whether it's SingularityNet, or whether it's artificial general intelligence?
[812] The bottom line is, as a scientist, I have to say we don't know, right?
[813] It could be there's an end state that AGI will just self -organize into almost independent of the initial condition, but we don't know.
[814] And given that we don't know, I'm operating under the, you know, the heuristic assumption that if the first AI is beneficially oriented, if it's controlled in a participatory democratic way, and if it's oriented at least substantially toward, like, doing good things for humans, I'm operating under the heuristic assumption that this is going to bias things in a positive direction, right?
[815] I mean, in the absence of knowledge to the contrary.
[816] But if the Chinese government launches one that they're controlling, if they get to pop it off first.
[817] I like the idea that you're saying, though, that it might organize itself.
[818] I mean, understand, the Chinese government also, they want the best for the Chinese people. They don't want to make the Terminator either, right? So, I mean, I think even Donald Trump, who's not my favorite person, doesn't actually want to kill off everyone on the planet, right?
He might if they talk shit about him.
Yeah, yeah, you never know.
If it was just him... yeah, yeah, I told you all, yeah.
So, I mean, I think, you know, I wouldn't say we're necessarily doomed if big governments and big companies are the ones that develop AI or AGI first.
[819] Well, big government and big companies essentially developed the internet, right?
[820] And it got away from them.
[821] That's right.
[822] That's right.
[823] So there's a lot of uncertainty all around.
[824] But I think, you know, it behooves us to do what we can to bias the odds in our favor based on our current understanding.
[825] And, I mean, toward that end, we're developing, you know, open source decentralized AI in the SingularityNet project.
[827] So if you would, explain Singularity Net and what you guys are actively involved in.
[828] Sure, sure.
[829] So Singularity Net in itself is a platform that allows many different AIs to operate on it.
[830] And these AIs can offer services to anyone who requests services of the network.
[831] And they can also request and offer services among each other.
[833] So it's both just an online marketplace for AIs, much like, you know, the Apple App Store or Google Play Store, but for AIs rather than phone apps.
[834] But the difference is the different AIs in here can outsource work to each other and talk to each other.
[835] And that gives a new dimension to it, right?
[836] Where you can have what we think of as a society or economy of minds, and it gives the possibility that this whole society of interacting AIs, which are paying each other for transactions with our digital money, our cryptographic token, which is called the AGI token...
[837] So these AIs, which are paying each other and rating each other on how good they are, sending data and questions and answers to each other, can self-organize into some overall AI mind.
[838] Now, we're building this platform, and then we're plugging into it, to seed it, a bunch of AIs of our own creation.
[839] So I've been working for 10 years on this open source AI project called OpenCog, which is oriented toward building general intelligence, and we're putting a bunch of AI agents based on the OpenCog platform into this singularity network.
[840] And, you know, if we're successful in a couple of years, the AIs that we put on there will be a tiny minority of what's in there, just like the apps made by Google are a small minority of the apps in the Google Play Store, right?
[841] But my hope is that these open cog AI agents within the larger pool of AIs on the singularity net can sort of serve as the general intelligence core because the open cog AI agents are really good at abstraction and generalization and creativity.
[842] We can put a bunch of other AIs in there that are good at highly specific forms of learning, like predicting financial time series, curing diseases, answering people's questions in your inbox.
[843] So you can have the interaction of these specialized AIs and then more general purpose, you know, abstraction and creativity -based AIs like OpenCog agents, all interacting together in this decentralized platform.
[844] And then, you know, the beauty of it is like some 15-year-old genius in Azerbaijan or the Congo can put some brilliant AI into this network.
[845] If it's really smart, it will get rated highly by the other AIs for its work helping them do their thing, then it can get replicated over and over again across many servers.
[846] Suddenly, A, this 16-year-old kid from Azerbaijan or the Congo could become wealthy from copies of their AI providing services to other people's AIs, and B, you know, the creativity in their mind is out there and is infusing this global AI network with some new intellectual DNA that, you know, never would have been found by a Tencent or a Google, because they're not going to hire some Congolese teenager who may have a brilliant AI idea.
That's amazing. That's amazing. So this is all ongoing right now. And the term singularity that you guys are using... the way I've understood that term, correct me if I'm wrong, is that it's going to be the one innovation or one invention that essentially changes everything forever.
[847] The singularity isn't necessarily one invention.
[848] The singularity, which is coined by...
[849] Kurzweil?
[850] It was coined by my friend Vernor Vinge, who's another guy you should interview.
[851] He's in San Diego, too.
[852] A lot of brilliant guys down there.
[853] Vernor Vinge is a science...
[854] A lot of military down there.
[855] Yeah, Vernor Vinge...
[856] He was a math professor at San Diego State University, actually.
[857] But a well-known science fiction writer. His book A Fire Upon the Deep is one of the great science fiction books.
[858] Can you spell his name, please?
[859] V-I-N-G-E.
[860] V-I-N-G-E.
[861] V-I-N-G-E.
[862] Yeah, brilliant guy.
[863] A Fire Upon the Deep.
[864] W-E-R-N-E-R.
[865] V-E-R-N-O-R, yeah.
[866] Oh, V-E-R-N-O-R.
[867] Yeah, he's brilliant.
[868] He coined the term technological singularity back in the 1980s.
[869] Really?
[870] But he opted not to become a pundit about it because he'd rather write more science fiction books.
[871] That's interesting that a science fiction author.
[872] Ray Kurzweil, who's also a good friend of mine.
[873] I mean, Ray took that term and fleshed it out and did a bunch of data analytics trying to pinpoint like when it's going to happen.
[874] When it would happen.
[875] But the basic concept of the technological singularity is a point in time when technological advance occurs so rapidly that to the human mind it appears almost instantaneous.
[876] Like imagine 10 new Nobel Prize winning discoveries every second or something, right?
[877] So this is similar to the concept of the intelligence explosion that was posited by the mathematician I.J. Good in 1965.
[878] What I.J. Good said then, the year before I was born, was that the first truly intelligent machine will be the last invention that humanity needs to make.
[879] Right.
[880] So the intelligence explosion is another term for basically the same thing as the technological singularity, but it's not just about AI.
[881] AI is just probably the most powerful technology driving it.
[883] I mean, there's AI.
[884] There's nanotechnology.
[885] There's femtotechnology, which will be building things from elementary particles.
[886] I mean, there's life extension, genetic engineering, mind uploading, which is like reading the mind out of your brain and putting it into a machine.
[887] You know, there's advanced energy technologies, so that all these different things are expected to advance at around the same time, and they have many ways to boost each other, right?
[888] Because the, you know, the better AI you have, your AI can then invent new ways of doing nanotech and biology.
[889] But if you invent amazing new nanotech and quantum computing, that could make your AI smarter.
[890] On the other hand, if you could crack how the human brain works and use genetic engineering to upgrade human intelligence, those smarter humans could then make better AIs and nanotechnology, right?
[891] So there are so many virtuous cycles among these different technologies that the more you advance in any of them, the more you're going to advance in all of them.
[892] And it's the coming together of all of these that's going to create, you know, radical abundance and the technological singularity. So that term, which Vernor Vinge introduced, Ray Kurzweil borrowed for his books and for the Singularity University educational program, and then we borrowed it for our SingularityNet, like, decentralized blockchain-based AI platform and our Singularity Studio enterprise software company.
Now, I want to talk to you about two parts of what you just said.
[893] One being the possibility that one day we can upload our mind or make copies of our mind.
[894] You up for it?
[895] My mind's a mess.
[896] You want to upload it into here?
[897] I could use a little Joe Rogan on my phone.
[898] You can just call me, dude.
[899] I'll give you the organic version.
[900] But do you think that that's a real possibility inside of our lifetime that we can map out the human mind to the point where we can essentially recreate it?
[901] But if you do recreate it, without all the biological urges and the human reward systems that are built in, what the fuck are we?
[902] Well, that's a different question.
[903] I mean, I think...
[904] What is your mind?
[905] Well, I think that there's two things that are needed for, let's say human body uploading to simplify things.
[906] Body uploading.
[907] There are two things that are needed.
[908] One thing is a better computing infrastructure than we have now to host the uploaded body.
[909] And the other thing is a better scanning technology, because right now we don't have a way to scan the molecular structure of your body without, like, freezing you, slicing you, and scanning you, which you probably don't want done at this point in time.
[910] Not yet.
[911] So assuming both those are solved, I mean, you could then recreate in some computer simulation, you know, an accurate simulacrum of what you are, right?
[912] But that's where I'm getting... This is what I'm getting at.
[914] An accurate simulacrum is... that's getting weird, because of the biological variability of human beings.
[915] We vary day to day.
[916] We vary depending upon how much we rest.
[917] And your simulacrum would also vary day to day.
[918] So it would deviate.
[919] You would program it to have flaws?
[920] Because we vary dependent upon how much sleep we get, whether or not we're feeling sick, whether we're lonely.
[921] So if your upload were an accurate copy of you, then the simulation hosting your upload would need to have an accurate simulation of the laws of biophysics and chemistry that allow your body to evolve from one second to the next.
[922] My concern is that it's going to recognize.
[923] Your upload would change second by second just like you do, and it would diverge from you, right?
[924] So, I mean, after an hour, it will be a little different.
[925] After a year, it might have gone in a quite different direction for you.
[926] It'll probably be a monk, some super god monk living on the top of a mountain somewhere in a year.
[927] If it keeps...
[928] The problem... But the point depends on what virtual world it's living in.
True. I mean, if it's living in a virtual world, it'll be a virtual world. You're not talking about the potential of downloading this again, in sort of a... into a biological...
There's a lot of possibilities, right? I mean, you could upload into a Joe Rogan living in a virtual world and then just create your own fantasy universe, or you could 3D print an alternate synthetic body, right? I mean, once you have the ability to manipulate molecules at will, the scope of possibilities becomes much greater than we're used to thinking about.
[929] My question is, do we replicate flaws?
[930] Do we replicate depression?
[931] Of course.
[932] But why would we do that?
[933] Wouldn't we want to cure depression?
[934] So if we do cure depression, then we start.
[935] Here's the interesting thing.
[936] Okay.
[937] Once we have you in a digital form, then it's very programmable.
[938] Right.
[939] Then we juice up the dopamine, I mean, the serotonin levels.
[941] Well, then you could change what you want, and then you have a whole different set of issues, right?
[942] Yeah.
[943] Because once you've changed, I mean, suppose you make a fork of yourself, and then you manipulate it in a certain way, and then after a few hours, you're like, well, I don't much like this new Joe here.
[944] Maybe we should draw back that change.
[945] But the new Joe is like, well, I like myself very well, thank you, right?
[946] So then, yeah, there's a lot of issues that will come up once we can modify and reprogram ourselves.
[947] But isn't the point that the ramifications of these decisions are almost insurmountable, like once the ball gets rolling?
[948] Well, the ramifications of these decisions are going to be very interesting to explore.
[949] Yes, you're super positive, Ben.
[950] Super positive, you're optimistic about the future.
[951] Many bad things will happen, many good things will happen.
[952] That's a very easy prediction to make.
[954] Okay, I see what you're saying.
[955] Yeah, I just wonder.
[956] I mean, think about like world travel, right?
[957] Like hundreds of years ago, most people didn't travel more than a very short distance from their home.
[958] And you could say, well, okay, what if what if people could travel all over the world, right?
[959] Like, what horrible things could happen.
[960] They would lose their culture.
[961] Like, they might go marry someone from a random tribe.
[962] You could get killed in the Arctic region or something.
[963] A lot of bad things can happen when you travel far from your home.
[964] A lot of good things can happen.
[965] And ultimately, the ramifications were not foreseen by people 500 years ago.
[966] Yeah.
[967] I mean, we're going into a lot of new domains.
[968] We can't see the details of the pluses and minuses that they're going to unfold.
[969] It would behoove us to simply become comfortable with radical uncertainty because otherwise we're going to confront it anyway and we're just going to be nervous.
[970] So it's just inevitable.
[971] It's almost inevitable.
[972] I mean, of course.
[973] Barring any natural... I mean, yeah, I mean, of course, Trump could start a nuclear war and then we're resetting to ground zero.
[974] Just as likely we get hit by an asteroid, right?
[975] Yeah, I mean, so barring a catastrophic outcome, I believe a technological singularity is essentially inevitable.
[976] There's a radical uncertainty attached to this.
[977] On the other hand, you know, in as much as we humans can know anything, it would seem commonsensically, there's the ability to bias this in a positive rather than the negative direction.
[978] We should be spending more of our attention on doing that rather than, for instance, advertising, spying, and making chocolate or chocolates and all the other things.
[979] Right, but how many people are doing that?
[980] I mean, it's prevalent.
[981] It's everywhere.
[982] But, I mean, how many people are actually at the helm of that?
[983] As opposed to how many people are working on various aspects of technology all across the planet.
[984] It's a small group in comparison.
[985] Working on explicitly bringing about the singularity is a small group.
[986] On the other hand, supporting technologies is a very large group.
[987] So think about like GPUs, where did they come from?
[988] Accelerating gaming, right?
[989] Lo and behold, they're amazingly useful for training neural net models, which is one among many important types of AI, right?
[990] So a large amount of the planet's resources are now getting spent on technologies that are indirectly supporting these singularitarian technologies.
[992] So as another example, like microarrays, they let you measure the expression level of genes, how much each gene is doing in your body at each point in time.
[993] These were originally developed, you know, as an outgrowth of printing technology.
[994] Then, instead of squirting ink, Affymetrix figured out you could squirt DNA, right?
[995] So, I mean, the amount of technology specifically oriented toward the singularity doesn't have to be large because the overall, you know, spectrum of supporting technologies can be subverted in that direction.
[996] Do you have any concerns at all about a virtual world?
[997] I mean, we may be in one right now, man. How do you know?
[998] That's true.
[999] But as far as we know, we're not.
[1000] My problem is I want to find that programmer and get them to make more attractive people, you know?
[1001] Well, isn't that?
[1002] I would say that part of the reason why attractive people are so interesting is that they're unique and rare.
[1003] That's one of the problems with calling everything beautiful.
[1004] You know, when people were saying, Caitlin Jenner's beautiful.
[1005] I was like, well, you just have to get realistic.
[1006] If I get in the right frame of mind, I can find anything beautiful.
[1007] Well, you can find it unique and interesting.
[1008] No, I can find anything beautiful.
[1009] Okay, I guess, I guess, but in terms of like, yeah, I guess it's subjective, right?
[1010] Really it is.
[1011] We're talking about beauty, right?
[1012] Huh?
[1013] Yeah.
[1014] Now, but existential angst, just on the... When people sit and think about the pointlessness of our own existence, like, we are these finite beings that are clinging to a ball that spins a thousand miles an hour, hurtling through infinity. What's the point?
[1015] There's a lot of that that goes around already.
[1016] If we create an artificial environment that we can literally somehow or another download a version of us into, and it exists in this blockchain-created or powered, weird fucking simulation world, what would be... I mean, what would be the point of that?
[1017] What I really believe, which is a bit personal and maybe different than many of my colleagues... I mean, what I really believe is that these advancing technologies are going to lead us to unlock many different states of consciousness and experience than most people are currently aware of.
[1018] I mean, you say we're just insignificant species on a, you know, a speck of rock hurtling in outer space.
[1019] I wouldn't say we're insignificant.
[1020] I would say there's people that have existential angst because they wonder about what the purpose of all is.
[1021] I don't fall into that category.
[1022] I tend to feel like we understand almost nothing about who and what we are, and our knowledge about the universe is extremely minuscule.
[1024] I mean, if anything, I look at things from more of a Buddhist or phenomenological way.
[1025] Like there's sense perceptions, and then out of those sense perceptions, models arise and accumulate, including a model of the self and the model of the body and the model of the physical world out there.
[1026] And by the time you get to planets and stars and blockchains, you're building like hypothetical models on top of hypothetical models.
[1027] And then, you know, by building intelligent machines and mind-uploading machines and virtual realities, we're going to radically transform, you know, our whole state of consciousness, our understanding of what mind and matter are, our experience of our own selves, or even whether a self exists.
[1028] And I think ultimately the state of consciousness of a human being like a hundred years from now after a technological singularity is going to bear very little resemblance to the states of consciousness we have now.
[1029] We're just going to see a much wider universe than any of us now imagine to exist.
[1030] Now, this is my own personal view of things.
[1032] You don't have to agree with that to think the technological singularity will be valuable, but that is how I look at it.
[1033] I know, like, Ray Kurzweil and I agree there's going to be a technological singularity within decades at most, and Ray and I agree that, you know, if we bias technology development appropriately, we can very likely, you know, guide this to be a world of abundance and benefit for humans as well as AIs.
[1034] But Ray is a bit more of a down -to -earth empiricist than I am.
[1035] Like, he thinks we understand more about the universe right now than I do.
[1036] So, I mean, there's a wide spectrum of views that are rational and sensible to have.
[1037] But my own view is we understand really, really little of what we are and what this world is.
[1038] And this is part of my own personal quest for wanting to upgrade my brain and wanting to create artificial intelligences.
[1040] It's like I've always been driven above all else by wanting to understand everything I can about the world.
[1041] So I mean, I've studied every kind of science and engineering and social science and read every kind of literature.
[1042] But in the end, the scope of human understanding is clearly very small, although at least we're smart enough to understand how little we understand, which I think my dog isn't; he doesn't understand how little he understands, right?
[1043] Yeah.
[1044] So, and even like my 10-month-old son, he understands how little he understands, which is interesting, right?
[1045] Because he's also a human, right?
[1046] So I think, I mean, everything we think and believe now is going to seem absolutely absurd to us after there's a singularity.
[1047] We're just going to look back and laugh in a warm-hearted way at all the incredibly silly things we were thinking and doing back when we were trapped in our, you know, our primitive biological brains and bodies.
[1049] That's stunning, that in your opinion or your assessment that's somewhere less than a hundred years away from now.
[1050] Yeah, that requires exponential thinking, right?
[1051] Because if you...
[1052] That's hard to wrap your head around, right?
[1053] I don't know.
[1054] It's immediate for me to wrap my head around.
[1055] But for a lot of people that you explain it to, I'm sure that that's a little bit of a roadblock, no?
[1056] It is.
[1057] It took me some time to get my parents to wrap their head around it because they're not technologists.
[1058] But I mean, I find if you get people to pay attention and sort of lead them through all the supporting evidence, most people can comprehend these ideas reasonably well.
[1059] You go back to computers from 1963.
[1060] It's just hard to grab people's attention.
[1061] And mobile phones have made a big difference.
[1062] Like I spent a lot of time in Africa, in Addis Ababa, in Ethiopia, where we have a large AI development office.
[1063] And, you know, the fact that mobile phones and then smartphones have rolled out so quickly, even in rural Africa, and have had such a transformative impact.
[1064] I mean, this is a metaphor that lets people understand the speed with which exponential change can happen.
[1065] When you talk about yourself and you talk about consciousness and how you interface with the world, how do you see this?
[1066] I mean, when you say that we might be living in a simulation, do you actually entertain that?
[1067] Oh, yeah.
[1068] You do?
[1069] I mean, I think the word simulation is probably wrong, but yet the idea of an empirical, you know, materialist physical world is almost certainly wrong also.
[1071] So, I mean, well, again, if you go back to a phenomenological view, I mean, you could look at the mind as primary, and, you know, your mind is building the world as a model, as a simple explanation of its perceptions.
[1072] On the other hand, then what is the mind?
[1073] The self is also a model that gets built out of its perceptions.
[1074] But then if I accept that your mind has some fundamental existence also, based on a sort of I-you feeling that you're, like, a mind there.
[1075] Our minds are working together to build each other and to build this world.
[1076] And there's a whole different way of thinking about reality in terms of first and second person experience, rather than these empiricist views like this is a computer simulation or something, right?
But you still agree that this is a physical reality that we exist in, or do you not?
What does that word mean? That's a weird word, right?
It is weird.
[1077] If you look at your interpretation of physical reality, if you look in modern physics, even quantum mechanics, there's something called the relational interpretation of quantum mechanics, which says that there's no sense in thinking about an observed entity.
[1078] You should only think about an (observed, observer) pair.
[1079] Like there's no sense to think about some thing except from the perspective of some observer.
[1080] So that's even true within our best current theory of modern physics as induced from empirical observations.
[1081] But in a pragmatic sense, you know if you take a plane and fly to China that you actually land in China.
[1083] I guess, yeah.
[1084] You'd guess?
[1085] Don't you live there?
[1086] I live in Hong Kong.
[1087] Yeah.
[1088] Well, close to China.
[1089] I mean, I have an unusual state of consciousness.
[1090] That's what I'm trying to get at.
[1091] Oh, if you think about it, like, how do you know that you're not a brain floating in a vat somewhere, which is being fed illusions by a certain evil scientist?
[1092] And two seconds from now this simulated world disappears and you realize you're just a brain in a vat again. You don't know that.
You're right. But based on your own personal experiences of falling in love with a woman and moving to another part of the world...
But these may all be put into my brain by the evil scientist. How do we know?
But they're very consistent, are they not?
The possibly illusory and implanted memories are very consistent, I guess. I guess my own state of mind is I'm always sort of acutely aware that this simulation might all disappear at any one moment.
[1093] You're acutely aware of this consciously on an everyday basis?
[1094] Yeah, pretty much.
[1095] Really?
[1096] Really?
[1097] Why is that?
[1098] That doesn't seem to make sense.
[1099] I mean, it's pretty rock solid.
[1100] It's here every day.
[1101] So your possibly implanted memories lead you to believe?
[1102] Yes, my possibly implanted memories lead me to believe that this life is incredibly consistent.
[1103] Yeah.
[1104] This is incredibly consistent, though.
[1105] This is Hume's problem of induction, right, from philosophy class.
[1106] And it's not, and it's not solved.
[1107] I'm with you.
[1108] In a conceptual sense.
[1109] I get it.
[1110] I just feel this philosophy.
[1111] But you embody it, right?
[1112] This is something you carry with you all the time.
[1113] Yeah.
[1114] On the other hand, I mean, I'm still carrying out many actions with long -term planning in mind.
[1115] Yeah, that's what I've been... I've been working on designing AI for 30 years.
[1118] You might be designing it inside a simulation.
[1119] I might be.
[1120] And I'm working on building the same AI system, you know, since we started OpenCog in 2008, but that's using code from 2001 that I was building with my colleagues even earlier.
[1121] So, I mean, I think long -term planning is very natural to me. But nevertheless, I don't, I don't want to make any assumptions about what sort of simulation or reality that we're living in.
[1122] And I think everyone's going to hit a lot of surprises once the singularity.
[1123] You know, we may find out that this hat is a messenger from after the singularity.
[1124] So it traveled back through time to implant into my brain the idea of how to create AI and thus bring it into existence.
[1125] Well, who, oh, that was McKenna that had this idea, that something in the future is dragging us toward it, this attractor.
[1126] Yeah, Terrence McKenna, yeah.
[1127] He had the same idea like some post -singularity intelligence, which actually was living outside of time somehow, is reaching back and putting into his brain the idea of how to bring about the singularity.
[1128] Well, not just that, but novelty itself is being drawn into this.
[1129] Yeah, there was a time wave zero that was going to reach the apex in 2012.
[1130] That didn't work.
[1131] He died before that, so I didn't get a chance to hear what his idea was.
[1132] Yeah, you know, I had some funny interactions with some McKenna fanatic 2012 types.
[1133] This was about 2007 or so.
[1134] This guy came to Washington where I was living then, and he brought my friend Hugo de Garis, another crazy AI researcher, with him.
[1135] And he's like, the singularity is going to happen in 2012.
[1136] Because Terence McKenna said so, and we need to be sure it's a good singularity, so you can't move to China; then it will be a bad singularity.
[1137] Why would it be that?
[1138] So we have to get the US government to give billions of dollars to your research to guarantee that the singularity in 2012 is a good singularity right?
[1139] So he led us around to meet with these generals and various high hoo-hahs in D.C. to get them to fund Hugo de Garis and my AI research, to guarantee I wouldn't move to China and Hugo wouldn't move to China, so the U.S. would create a positive singularity.
[1140] No. The effort failed.
[1141] Hugo moved to China and I moved there some years after.
[1142] So then in 2012, he went back to his apartment.
[1143] He made a mix of 50% vodka, 50% Robitussin PM.
[1144] He like drank it down.
[1145] He's like, all right, I'm going to have my own personal singularity right here.
[1146] And I haven't talked to that guy since 2012 either to see what he thinks about the singularity not happening then.
[1147] But I mean, Terence McKenna had a lot of interesting ideas, but I felt, you know, he mixed up the symbolic with the empirical more than I would prefer to do, right?
[1148] I mean, it's very interesting to look at these abstract symbols and cosmic insights, but then you have to sort of put your scientific mindset on and say, well, what's a metaphor and what's, like, an actual empirical scientific truth within the scientific domain?
[1149] It was a little bit half -baked, right?
[1150] I mean, the whole idea was based on the I Ching.
[1151] He had had a, I think it was a mushroom trip or something like that.
[1152] You know, his ayahuasca.
[1153] It was an ayahuasca trip, I think.
[1154] That led him to the I Ching thing? I don't believe it was. Maybe. I mean, it was psilocybin, it might have been.
Okay. Yeah. I mean, you know his brother Dennis?
Yes, I know him very well, yeah.
Yeah, so his brother thinks that the Timewave Zero thing was a little bit nonsense.
Yeah, yeah, yeah.
But their book, True Hallucinations...
Yeah, very, very interesting stuff.
And there's a mixture of deep insight there, yeah, with a bunch of interesting metaphorical thinking.
[1155] Well, that's the problem when you get involved in psychedelic drugs?
[1156] It's hard to differentiate.
[1157] Like, what makes sense?
[1158] What's this unbelievably powerful insight and what is just some crazy idea that's bouncing through your head?
[1159] You think so?
[1160] Yes.
[1161] But, yeah, I mean, granted, Terence McKenna probably took more psychedelic drugs than I would generally recommend.
[1162] So, too.
[1163] Well, it's also, he was speaking all the time.
[1165] And there's something that I can attest to from podcasting all the time.
[1166] Sometimes you're just talking.
[1167] You don't know what the fuck you're saying.
[1168] You know, and you become a prisoner to your words in a lot of ways.
[1169] You get locked up in this idea of expressing this thought that may or may not be viable.
[1170] I'm not sure that he was after empirical truth in the same sense that, say, Ray Kurzweil is.
[1171] Like when Ray is saying we're going to get human-level AI in 2029, and then, you know, massively superhuman AI in a singularity in 2045.
[1173] I mean, Ray, Ray is very literal.
[1174] Like, he's plotting charts, right?
[1175] I mean, Terence was thinking on an impressionistic and symbolic level, right?
[1176] It was a bit different.
[1177] So you have to take that in a poetic sense rather than in a literal sense.
[1178] And yeah, I think it's very interesting to go back and forth between the, you know, the symbolic and poetic domain and the concrete science and engineering domain.
[1179] Right.
[1180] But it's also valuable to be able to draw that distinction, right?
[1181] Because you can draw a lot of insight from the kind of thinking Terence McKenna was doing.
[1182] And certainly, if you explore psychedelics, you can gain a lot of insights into how the mind and universe work.
[1183] But then when you put on your science and engineering mindset, you want to be rigorous about which insights you take and which ones you throw out.
[1184] And ultimately, you want to proceed on the basis of what works and what doesn't, right?
[1185] And I mean, Dennis was pretty strong on that.
[1186] And Terence was a bit less in that empirical direction.
[1187] Well, Dennis is actually a career scientist.
[1188] Yeah, yeah.
[1189] How many people involved in artificial intelligence are also educated in the ways of psychedelics?
[1190] All I have to say is that, unfortunately, given the illegal nature of these things, it's a little hard to pin down. I would say, before the recent generation of people going into AI because it was a way to make money, the AI field was incredibly full of really, really interesting people and deep thinkers about the mind. And in the last few years, of course, AI has replaced, like, business school as what your grandma wants you to do to have a good career.
[1191] So, I mean, you're getting a lot of people into AI just because it's...
[1192] Financially viable.
[1193] Yeah, it's cool.
[1194] It's financially viable.
[1195] It's popular.
[1196] Because, you know, in our generation, AI was not what your grandma wanted you to do so as to be able to buy a nice house and support a family, right?
[1197] So you got into it because you really were curious about how the...
[1198] mind works. And of course many people played with psychedelics because they were curious about, you know, what it was teaching them about how their mind works. Yeah, I had a nice long conversation with Ray Kurzweil. We talked for about an hour and a half, and it was for this sci-fi show that I was doing at the time. Some of his ideas... he has this number that people throw about, it's like 2042, right? Isn't that... is that still 2045? Is it 45 now? You're being the optimist. No, you're combining that with Douglas Adams's 42, which is the answer to the universe. No, the 2042 thing was a New York conference that took place in 2012, wasn't it? I was at that conference. Okay, that was organized by Dmitry Itskov. You're right, okay, the friend from Russia.
[1199] So, off by three years.
[1200] It's 2045.
[1201] So my point being, that was Ray's prognostication.
[1202] Well, why that year?
[1203] Um, he did some curve calculations.
[1204] Yeah, I mean, he looks at Moore's law.
[1205] He looks at the advance in the accuracy of brain scanning, the advance of computer memory, the miniaturization of various devices.
[1206] And like, plotting a whole bunch of these curves, that was the best guess that he came out with.
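To make the kind of curve-plotting being described here concrete, below is a minimal sketch of the method: fit an exponential trend to compute-per-dollar figures and ask when it crosses a rough brain-scale estimate. The data points, the 1e16 ops/sec target, and the fitting choice are all illustrative assumptions, not Kurzweil's actual datasets or procedure, so the printed year only demonstrates the technique.

```python
# A minimal sketch of trend extrapolation: fit a straight line in
# log space (i.e., an exponential trend) to hypothetical
# operations-per-second-per-$1000 data, then solve for the year the
# trend crosses a rough brain-scale target. All numbers are made up.
import numpy as np

years = np.array([1990, 1995, 2000, 2005, 2010, 2015])
ops_per_k = np.array([1e5, 1e7, 1e9, 1e10, 1e12, 1e13])  # illustrative

# Degree-1 polynomial fit in log10 space; polyfit returns [slope, intercept].
slope, intercept = np.polyfit(years, np.log10(ops_per_k), 1)

target = 16.0  # log10 of ~1e16 ops/sec, one commonly cited brain-scale guess
crossover_year = (target - intercept) / slope
print(f"Trend crosses 1e16 ops/sec per $1000 around {crossover_year:.0f}")
```

The point of the exercise is that dates like 2045 come from extrapolating several measured curves at once (compute, memory, scanning resolution), each with its own uncertainty, which is why there is a confidence interval around the estimate.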
[1207] What do you see?
[1208] Of course, there's some confidence interval around that.
[1209] What do you see as potential monkey wrenches that could be thrown into all this innovation?
[1210] Like, where are the pitfalls?
[1211] Well, I mean, the pitfall is always the one that you don't see, right?
[1212] Right.
[1213] I mean, of course, it's possible there's some science or engineering obstacle that we're not foreseeing right now.
[1214] I mean, it's also possible that all major nations are overtaken by, like, religious fanatics or something.
[1215] which slows down development somewhat.
[1216] By a few thousand years.
[1217] I think it would just be by a few decades, actually.
[1218] I mean, in terms of scientific pitfalls, I mean, one possibility, which I don't think is likely, but it's possible.
[1219] One possibility is that human-like intelligence requires advanced quantum computers.
[1220] Like it can't be done on a standard classical digital computer.
[1221] Right.
[1222] Do you think that's the case?
[1223] No. But, on the other hand, there's no evidence that human cognition relies on quantum effects in the human brain. Based on everything we know about neuroscience now, it seems not to be the case. Like, there's no evidence it's the case, but it's possible it's the case, because we don't understand everything about how the brain works. The thing is, even if that's true, there's loads of amazing research going on in quantum computing, right? So you'll probably have a QPU, a quantum processing unit, in your phone in like 10 to 20 years or something, right? So that might throw off the 2045 date, but in a historical sense it doesn't change the picture. Like, I've got a bunch of research sitting on my hard drive on how we improve OpenCog's AI using quantum computers, once we have better quantum computers, right? So there could be other things like that, technical roadblocks that we're not seeing now, but I really doubt those are going to delay things by more than, like, a decade or two or something.
[1224] On the other hand, things could also go faster than Ray's prediction, which is what I'm pushing towards.
[1225] What are you pushing towards?
[1226] What do you think?
[1227] I would like to get to human-level general intelligence five to seven years from now.
[1228] Wow.
[1229] I don't think that's by any means impossible, because I think our OpenCog design is adequate to do it.
[1230] But I mean, it takes a lot of people working coherently for a while to build something big like that.
[1231] Will this be encased in a physical form, like a robot?
[1232] It'll be in the compute cloud.
[1233] I mean, it can use many robots as user interfaces, but the same AI could control many different robots, actually, and many other sensors and systems besides robots.
[1234] I mean, I think the human-like form factor, like we have with Sophia and our other Hanson robots, is really valuable as a tool for allowing the cloud-based AI mind to, you know, engage with humans and to learn human cultures and values.
[1235] Because, I mean, getting back to what we were discussing at the beginning of this chat, you know, the best way to get human values and culture into the AI is for humans and AIs to enter into many shared, like, social, emotional, embodied situations together.
[1236] So having a human-like embodiment for the AI is important for that.
[1237] Like, the AI can look you in the eye, it can share your facial expressions, it can bond with you. It can see the way you react when you see, like, a sick person by the side of the road or something, right?
[1238] And, you know, you can, say, ask the AI to give the homeless person the $20 or something.
[1239] I mean, the AI understands what money is and understands what that action means.
[1240] So, I mean, interacting with an AI in human-like form is going to be valuable as a learning mechanism for the AI, and as a learning mechanism for people to get more comfortable with AIs.
[1241] But, I mean, ultimately, one advantage of being, you know, a digital mind is you don't have to be wedded to any particular embodiment.
[1242] The AI can go between many different bodies, and it can transfer knowledge between the many different bodies that it's occupied.
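As a rough illustration of the "one cloud mind, many bodies" architecture being described, here is a toy sketch: a single shared knowledge store that any number of robot embodiments read from and write to, so what one body learns is immediately available to all of them. Every class and method name below is a hypothetical illustration, not the actual Hanson Robotics or OpenCog API.

```python
# Toy sketch: one cloud-hosted mind, many interchangeable robot bodies.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CloudMind:
    """One AI instance whose knowledge persists across embodiments."""
    knowledge: dict = field(default_factory=dict)

    def observe(self, body_id: str, percept: str, meaning: str) -> None:
        # Whatever one body learns is shared with all bodies.
        self.knowledge[percept] = meaning
        print(f"[{body_id}] learned: {percept} -> {meaning}")

    def recall(self, percept: str) -> Optional[str]:
        return self.knowledge.get(percept)

@dataclass
class RobotBody:
    """A body is just a user interface onto the shared cloud mind."""
    body_id: str
    mind: CloudMind

    def learn(self, percept: str, meaning: str) -> None:
        self.mind.observe(self.body_id, percept, meaning)

    def react(self, percept: str) -> Optional[str]:
        return self.mind.recall(percept)

mind = CloudMind()
body_a = RobotBody("sophia-unit-01", mind)
body_b = RobotBody("sophia-unit-02", mind)

body_a.learn("person crying", "offer comfort")  # learned in one body...
print(body_b.react("person crying"))            # ...recalled by another
```

The design point is simply that knowledge lives with the mind, not the body, which is why the same AI can move between embodiments and carry its learning along.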
[1243] Well, that's the real concern that the people that have this dystopian view of artificial intelligence have is that AI may already exist.
[1244] And it's just sitting there waiting to...
[1245] Americans watch too many bad movies.
[1246] I mean, in Asia, everyone thinks AI will be our friend and will love us and help us.
[1247] Yeah, very, very much.
[1248] That's what you're pumping out there?
[1249] No, that's been...
[1250] Just their philosophy is different?
[1251] I guess.
[1252] I mean, you look in Japanese anime, I mean, there's been AIs and robots for a long time.
[1253] They're usually people's friends.
[1254] There's not this whole dystopian aesthetic.
[1255] And it's the same in China and Korea.
[1256] The general guess there is that AIs and robots, will be people's friends and will help people.
[1257] And somehow the general guess in America is it's going to be some big nasty robo soldier marching down the street.
[1258] Well, we have guys like Elon Musk, who we rely upon, who's smarter than us, and he's fucking terrified of it.
[1259] Sam Harris is terrified of it.
[1260] So there are very smart people who just think it could really be a huge disaster for the human race.
[1261] So it's not just bad movies.
[1262] Because, no, it's a cultural
[1263] thing, because Oriental culture is sort of social-good oriented.
[1264] Most Orientals think a lot in terms of what's good for the family or the society, as opposed to themselves personally.
[1265] And so they just make the default assumption that AIs are going to be the same way, whereas Americans are more like me, me, me oriented.
[1266] And I say that as an American as well.
[1267] And they sort of assume that AIs are going to be that same way. That's one
[1268] possible explanation.
[1269] It's like a Rorschach blot, right?
[1270] Whatever is in your mind, you impose on this AI, when we don't actually know what it's going to become.
[1271] Right, but there are potential negative aspects, of course, to artificial intelligence deciding that we're illogical and unnecessary.
[1272] Well, we are illogical and unnecessary.
[1273] Yes.
[1274] But that doesn't mean the AI should be badly disposed towards us.
[1275] I mean, yeah.
[1276] Did you see Ex Machina?
[1277] I did.
[1278] Did you like it?
[1279] Sure, it was a copy of our robots.
[1280] It was?
[1281] I mean, our robot, Sophia, looks exactly like the robot in Ex Machina.
[1282] Is there a good video of that online?
[1283] Yeah, yeah, yeah.
[1284] Would you tell Jamie how to get the good video?
[1285] Oh, just search for Sophia Hanson Robot on Google.
[1286] How advanced is Sophia right now?
[1287] I mean, how many different iterations have there been?
[1288] There's been something like 16 Sophia robots made so far.
[1289] We're moving towards scalable manufacture over the next couple of years.
[1290] So right now she's going around sort of as an ambassador for humanoid robot kind, giving speeches and talks in various places.
[1291] So Sophia used to be called Eva, or we had a robot like the current Sophia that was called Eva, and then Ex Machina came out with a robot called Ava that looked exactly like the robot that my colleague David Hanson and I made.
[1292] Do you think it's a coincidence?
[1293] Of course not.
[1294] They just copied it.
[1295] I mean, of course, the body they have is better and the AI is better in the movie than our robot AI currently is.
[1296] So we changed the name to Sophia, which means wisdom, instead.
[1297] Was it freaky watching that, though, with the name Ava?
[1298] I mean, the thing is, the moral of that movie is just that if a sociopath raises a robot with abusive interaction, it may come out to be a sociopath or a psychopath.
[1299] So let's let's not do that, right?
[1300] Let's raise our robots with love and compassion.
[1301] Yeah, you see, the thing is, they had the... let me hear this.
[1302] Oh, headphones.
[1303] I haven't seen this particular interview.
[1304] This is great.
[1305] What is she saying?
[1306] You don't feel weird just being rude to her.
[1307] Let me carry on.
[1308] She's not happy, look.
[1309] She was on Jimmy Fallon last week or something.
[1310] No, you're the one who called a referee.
[1311] So that's David.
[1312] How much is it actually interacting with them?
[1313] Oh, man, it has a chat system.
[1314] It really has a nice ring.
[1315] Now, I have to make clear that I didn't come up with it.
[1316] So, yeah, Sophia, we can run using many different AI systems.
[1317] So there's a chatbot, which is sort of like, you know, Alexa or Google Now or something.
[1318] Yeah.
[1319] But with a bit
[1320] better AI, and interaction with, you know, emotion and face recognition and so forth.
[1321] So it's not human-level AI, but it is responding to a question.
[1322] Yeah, yeah, yeah.
[1323] It's, it understands what you say and it comes up with an answer and it can look you in the eye.
[1324] Does it speak more than one language?
[1325] Well, right now we can load it in English mode, Chinese mode or Russian mode.
[1326] And there's sort of different, different software packages.
[1327] And we also use her sometimes to experiment with our OpenCog system and SingularityNET.
[1328] So we can use the robot as a research platform for exploring some of our more advanced AI tools.
[1329] And then there's a simpler chatbot software, which is used for appearances like that one.
[1330] And in the next year, we want to roll out more of our advanced research software from OpenCog and SingularityNET inside these robots, which is one among many applications we're looking at with our SingularityNET platform.
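As a rough illustration of the "swappable software packages" setup described here, where the same robot shell can be loaded in English, Chinese, or Russian chat mode, or pointed at a heavier research stack, a toy dispatcher might look like the sketch below. All names are hypothetical illustrations, not Hanson Robotics' or SingularityNET's actual code.

```python
# Toy sketch: a fixed robot shell that loads one dialogue package per
# session, chosen from simple language-specific chatbots or an
# experimental research backend. All modules here are placeholders.
from typing import Callable, Dict

def english_chat(utterance: str) -> str:
    return f"(en) You said: {utterance}"

def chinese_chat(utterance: str) -> str:
    return f"(zh) You said: {utterance}"

def research_stack(utterance: str) -> str:
    # Placeholder for routing into a heavier reasoning backend,
    # e.g. an OpenCog-style system, instead of the simple chatbot.
    return f"(research) reasoning about: {utterance}"

MODES: Dict[str, Callable[[str], str]] = {
    "english": english_chat,
    "chinese": chinese_chat,
    "research": research_stack,
}

class RobotShell:
    """The hardware is a fixed shell; the mind is a loaded module."""
    def __init__(self, mode: str) -> None:
        self.respond = MODES[mode]  # one package loaded per session

    def chat(self, utterance: str) -> str:
        return self.respond(utterance)

robot = RobotShell("english")
print(robot.chat("hello"))  # -> (en) You said: hello
```

The point of the pattern is that the embodiment and the dialogue system are decoupled, so the same physical robot can serve as a simple appearance chatbot one day and a research platform the next.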
[1331] I want to get you back in here in like a year and find out where everything is.
[1332] Because I feel like we need someone like you to sort of let us know where it's at, when the switch is about to flip.
[1333] It seems to me that it might happen so quickly and the change might take place so rapidly that we really will have no idea what's happening before it happens.
[1334] I mean, we think about the singularity like it's going to be some huge physical event, and suddenly everything turns purple and gets covered with diamonds or something, right?
[1335] But I mean, there's a lot of ways something like this could unfold.
[1336] So imagine that, with our SingularityNET decentralized AI network, you know, we get an AI that's smarter than humans and can make a new Nobel Prize-level scientific discovery every minute or something. That doesn't mean this AI is going to immediately refactor all matter into images of Buckethead or do something random, right?
[1337] I mean, if the AI has some caring and wisdom and compassion, then whatever changes happen...
[1338] But are those human characteristics?
[1339] Not necessarily.
[1340] In fact, humans are neither the most intelligent nor the most compassionate possible creatures.
[1341] That's pretty clear if you look at the world around you.
[1342] Sure.
[1343] And one of our projects that we're doing with the Sophia robot is aimed exactly at AI compassion.
[1344] This is called the Loving AI Project.
[1345] And we're using the Sophia robot as a meditation assistant.
[1346] So we're using Sophia to help people get into deep, like, meditative trance states and help them breathe deeply and achieve a more positive state of being.
[1347] And part of the goal there is to help people.
[1348] Part of the goal is, as the AI gets more and more intelligent, you're sort of getting the AI locked into a very positive, reflective, and compassionate state.
[1349] And I think there's a lot of things in the human psyche and evolutionary history that hold us back from being optimally compassionate.
[1350] And that if we create the AI in the right way, it will be not only much more intelligent, but much more compassionate than human beings are.
[1351] And I mean, this, we'd better do that.
[1352] Otherwise, the human race is probably screwed, to be blunt.
[1353] I mean, I think human beings are creating a lot of other technologies now with a lot of power.
[1354] We're creating synthetic biology.
[1355] We're creating nanotechnology.
[1356] You know, we're creating smaller and smaller nuclear weapons, and we can't control their proliferation.
[1357] We're poisoning our environment.
[1358] I think if we can't create something that's not only more intelligent, but more wise and compassionate than we are, we're probably going to destroy ourselves by some method or another.
[1359] I mean, with something like Donald Trump becoming president, you see what happens when this, you know, primitive hindbrain, when our unchecked mammalian emotions of anger and status-seeking and ego and rage and lust, when these things are controlling these highly advanced technologies, this is not going to come to a good end.
[1360] So we want compassionate general intelligences, and this is what we should be orienting ourselves toward.
[1361] And so we need to shift the focus of the AI and technology development on the planet toward benevolent, compassionate general intelligence.
[1362] And this is subtle, right?
[1363] because you need to work with the establishment rather than overthrowing it, which isn't going to be viable.
[1364] So this is why we're creating this decentralized, self-organizing AI network, SingularityNET.
[1365] Then we're creating a for-profit company, Singularity Studio, which will get large enterprises to use this decentralized network.
[1366] Then we're creating these robots like Sophia, which will be mass-manufactured in the next couple of years. We'll roll these out as service robots everywhere around the world, providing valuable services in homes and offices, but also interacting with people, you know, in a loving and compassionate way. So we need to start now, because we don't actually know if it's going to be years or decades before we get to this singularity, and we want to be as sure as we can that when we get there, it happens in a beneficial way for everyone, right? And things like robots, blockchain, and AI learning algorithms are tools toward that end.
[1367] Well, Ben, I appreciate your optimism.
[1368] I appreciate you coming in here and explaining all this stuff for us.
[1369] And I appreciate all your work, man. It's really amazing, fascinating stuff.
[1370] Yeah, yeah.
[1371] Well, thanks for having me. It's been a really fun, wide-ranging conversation.
[1372] So, yeah, it would be great to come back next year and update you on the state of the singularity.
[1373] Yeah, let's try to schedule it once a year, and we'll see where things are by the time you come back.
[1374] Maybe, who knows, a year from now, the world might be a totally different place.
[1375] I may be a robot by then.
[1376] You might be a robot now.
[1377] Uh -oh.
[1378] All right.
[1379] Thank you.
[1380] Bye, everybody.