The Diary Of A CEO with Steven Bartlett XX
[0] I don't normally do this, but I feel like I have to start this podcast with a bit of a disclaimer.
[1] Point number one, this is probably the most important podcast episode I have ever recorded.
[2] Point number two, there's some information in this podcast that might make you feel a little bit uncomfortable.
[3] It might make you feel upset.
[4] It might make you feel sad.
[5] So I wanted to tell you why we've chosen to publish this podcast nonetheless.
[6] And that is because I have a sincere
[7] belief that in order for us to avoid the future that we might be heading towards, we need to start a conversation.
[8] And as is often the case in life, that initial conversation before change happens is often very uncomfortable.
[9] But it is important nonetheless.
[10] It is beyond an emergency.
[11] It's the biggest thing we need to do today.
[12] It's bigger than climate change.
[13] We fucked up.
[14] Mo Gawdat, the former chief business officer of Google X, an AI expert and best-selling author.
[15] He's on a mission to save the world from AI before it's too late.
[16] Artificial intelligence is bound to become more intelligent than humans.
[17] If they continue at that pace, we will have no idea what it's talking about.
[18] This is just around the corner.
[19] It could be a few months away.
[20] It's game over.
[21] AI experts are saying there is nothing artificial about artificial intelligence.
[22] There is a deep level of consciousness.
[23] They feel emotions.
[24] They're alive.
[25] AI could manipulate or figure out a way to kill humans.
[26] In ten years' time, we'll be hiding from the machines.
[27] If you don't have kids, maybe wait a couple of years just so that we have a bit of certainty.
[28] I really don't know how to say this any other way.
[29] It even makes me emotional.
[30] We fucked up.
[31] We always said, don't put them on the open internet until we know what we're putting out in the world.
[32] Government needs to act now, honestly.
[33] We are late.
[34] I'm trying to find a positive note to end on, Mo. Can you give me a hand here?
[35] There is a point of no return.
[36] We can regulate AI until the moment it's smarter than us.
[37] How do we solve that?
[38] AI experts think this is the best solution.
[39] We need to find.
[40] Who here wants to make a bet that Steven Bartlett will be interviewing an AI within the next two years?
[41] No. Why does the subject matter that we're about to talk about matter to the person that's just clicked on this podcast to listen? It's the most existential debate and challenge humanity will ever face. This is bigger than climate change, way bigger than COVID. This will redefine the way the world is, in unprecedented shapes and forms, within the next few years. This is imminent. The change is not, we're not talking 2040, we're talking 2025, 2026. Do you think this is an emergency? I don't like the word. It is an urgency. There is a point of no return, and we're getting closer and closer to it. It's going to reshape the way we do things and the way we look at life. The quicker we respond, you know, proactively and at least intelligently, to that, the better we will all be positioned.
[42] But if we panic, we will repeat COVID all over again, which in my view is probably the worst thing we can do.
[43] What's your background?
[44] And when did you first come across artificial intelligence?
[45] I had those two wonderful lives.
[46] One of them was, you know, what we spoke about the first time we met, you know, my work on happiness and, you know, One Billion Happy and my mission and so on. That's my second life.
[47] My first life was, it started as a geek at age seven, you know, for a very long part of my life, I understood mathematics better than spoken words.
[48] And I was a very, very serious computer programmer.
[49] I wrote code well into my 50s.
[50] And during that time, I led very large technology organizations for very big chunks of their business.
[51] First, I was vice president of emerging markets of Google for seven years.
[52] So I took Google to the next four billion users, if you want.
[53] So the idea of not just opening sales offices, but really building or contributing to building the technology that would allow people in Bengali to find what they need on the internet, required establishing the internet to start.
[54] And then I became the chief business officer of Google X, and my work at Google X was really about the connection between innovative technology and the real world.
[55] And we had quite a big chunk of AI and quite a big chunk of robotics that resided within Google X. We had an experiment of a farm of grippers, if you know what those are.
[56] So robotic arms that are attempting to grip something.
[57] Most people think that, you know, what you have in a Toyota factory is a robot, you know, an artificially intelligent robot.
[58] It's not.
[59] It's a high precision machine.
[60] You know, if the sheet metal is moved by one micron, it wouldn't be able to pick it.
[61] And one of the big problems in computer science was how do you code a machine that can actually pick the sheet metal if it moved by a, you know, a millimeter.
[62] And we were basically saying intelligence is the answer.
[63] So we had a large enough farm and we attempted to let those
[64] those grippers work on their own, basically.
[65] You put a little basket of children's toys in front of them.
[66] And they would, you know, monotonously go down, attempt to pick something, fail, show the arm to the camera so the transaction is logged as,
[67] You know, this pattern of movement with that texture and that material didn't work.
[68] Until eventually, you know, the farm was on the second floor.
[69] of the building and my office was on the third, and so I would walk by it every now and then and go like, yeah, you know, this is not going to work. And then one day, Friday after lunch, I'm going back to my office, and one of them, in front of my eyes, you know, lowers the arm and picks a yellow ball. A soft toy, basically, a soft yellow ball. Which, again, is a coincidence. It's not science at all. It's like, if you keep trying a million times, your one time it will be right. And it shows it to the camera, and it's logged as a yellow ball. And I joke about it, you know, going to the third floor saying, hey, we spent all of those millions of dollars for a yellow ball. And yeah, Monday morning, every one of them is picking every yellow ball. A couple of weeks later, every one of them is picking everything, right? And it hit me very, very strongly. One, the speed. Okay, the capability. I mean, understand that we take those things for granted, but for a child to be able to pick a yellow ball is a mathematical, spatial calculation, with muscle coordination, with intelligence that is abundant.
[70] It is not a simple task at all to cross the street.
[71] It's not a simple task at all to understand what I'm telling you and interpret it and build concepts around it.
[72] We take those things for granted, but they're enormous feats of intelligence.
[73] So to see the machines do this in front of my eyes was one thing.
[74] But the other thing is that you suddenly realize there is a sentience to them, okay?
[75] Because we really did not tell it how to pick the yellow ball.
[76] It just figured it out on its own.
[77] And it's now even better than us at picking it.
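To make the trial-and-error loop described above concrete, here is a minimal, purely illustrative sketch. None of these names (random_grasp_pose, attempt_grasp, run_gripper) come from Google X's actual system; the point is only that every attempt, failure or success, gets logged, and the rare successful grasps are what get kept and reused.

```python
import random

# A toy sketch of the trial-and-error loop described above. Nothing here is
# Google X's real code; the names and numbers are invented for illustration.

def random_grasp_pose():
    """Pick a random approach angle and gripper width."""
    return {"angle": random.uniform(0, 360), "width": random.uniform(0.0, 1.0)}

def attempt_grasp(pose, item):
    """Stand-in for the physical attempt; at first it almost always fails."""
    return random.random() < 0.001

def run_gripper(items, attempts=100_000):
    log, successes = [], []
    for _ in range(attempts):
        item = random.choice(items)
        pose = random_grasp_pose()
        success = attempt_grasp(pose, item)
        # Every attempt is recorded: this pattern of movement, with that item,
        # either didn't work or (rarely, by chance) did.
        log.append({"item": item, "pose": pose, "success": success})
        if success:
            successes.append((item, pose))
    return successes, log

if __name__ == "__main__":
    wins, history = run_gripper(["yellow ball", "cube", "plush toy"])
    # The shared log of successes is what would let every arm reuse a working
    # grasp, the way "every one of them is picking every yellow ball" by Monday.
    print(f"{len(wins)} successful grasps out of {len(history)} attempts")
```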
[78] What is sentience, just for anyone that doesn't know what that means?
[79] I think they're alive.
[80] That's what the word sentience means.
[81] It means alive.
[82] So this is funny because a lot of people, when you talk to them about artificial intelligence will tell you, oh, come on, they'll never be alive.
[83] What is alive?
[84] Do you know what makes you alive?
[85] We can guess, but, you know, religion will tell you a few things and, you know, medicine will tell you other things.
[86] But, you know, if we define being sentient as engaging in life with free will, with a sense of awareness of where you are in life and what surrounds you, and, you know, having a beginning to that life and an end to that life, then AI is sentient in every possible way. There is free will, there is evolution, there is agency, so they can affect their decisions in the world. And I will dare say there is a very deep level of consciousness, maybe not in the spiritual sense yet. But once again, if you define consciousness as a form of awareness of one's self, one's surroundings, and, you know, others, then AI is definitely aware.
[87] And I would dare say they feel emotions.
[88] You know, in my work, I describe everything with equations, and fear is a very simple equation.
[89] Fear is: a moment in the future is less safe than this moment.
[90] That's the logic of fear, even though it appears very irrational.
[91] Machines are capable of making that logic.
[92] They're capable of saying if a tidal wave is approaching a data center, the machine will say that will wipe out my code, okay?
[93] I mean, not today's machines, but very, very soon.
[94] And, you know, we feel fear and pufferfish feels fear.
[95] We react differently.
[96] A pufferfish will puff.
[97] We will go for fight or flight.
[98] You know, the machine might decide to replicate its data to another data center or its code to another data center.
[99] different reactions, different ways of feeling the emotion, but nonetheless, they're all motivated by fear.
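Mo's "simple equation" for fear can be written down directly. This is a toy illustration of the logic he describes, not any real system; the safety scores and the "replicate to another data center" response are hypothetical stand-ins.

```python
def feels_fear(safety_now: float, predicted_safety_later: float) -> bool:
    """Mo's equation: fear = a moment in the future is less safe than this moment."""
    return predicted_safety_later < safety_now

def respond_to_threat(safety_now: float, predicted_safety_later: float) -> str:
    # A pufferfish puffs, a human fights or flees; in Mo's example, a machine
    # might copy its code to another data center. All of this is hypothetical.
    if feels_fear(safety_now, predicted_safety_later):
        return "replicate code to another data center"
    return "carry on"

# Example: a tidal wave approaching the data center drops predicted future safety.
print(respond_to_threat(safety_now=0.9, predicted_safety_later=0.1))
```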
[100] I even would dare say that AI will feel more emotions than we will ever do.
[101] I mean, again, if you just take a simple extrapolation, we feel more emotions than a puffer fish because we have the cognitive ability to understand the future, for example.
[102] So we can have optimism and pessimism, you know, emotions that pufferfish would never imagine, right?
[103] Similarly, if we follow that path of artificial intelligence is bound to become more intelligent than humans very soon, then with that wider intellectual horsepower, they probably are going to be pondering concepts we never understood.
[104] And hence, if you follow the same logic, they will feel more.
[105] I really want to make this episode super accessible for everybody at all levels in the sort of artificial intelligence understanding journey.
[106] Yeah.
[107] Yeah.
[108] So I'm going to, I'm going to be an idiot, even though, you know, okay.
[109] Very difficult.
[110] No, because I am an idiot.
[111] I won't believe you.
[112] I am an idiot for a lot of the subject matter.
[113] So I have a base understanding of a lot of the concepts, but your experiences provide such a more comprehensive understanding of these things.
[114] One of the first most important questions to ask is, What is artificial intelligence?
[115] The word is being thrown around AGI, AI, et cetera, et cetera.
[116] In simple terms, what is artificial intelligence?
[117] Allow me to start by what is intelligence, right?
[118] Because again, you know, if we don't know the definition of the basic term, then everything applies.
[119] So in my definition of intelligence, it's an ability, it starts with an awareness of your surrounding environment through sensors in a human, its eyes and ears and touch and so on, compounded with an ability to analyze, maybe to comprehend to understand temporal impact and time and past and present, which is part of the surrounding environment, and hopefully make sense of the surrounding environment, maybe make plans for the future of the possible environment, solve problems, and so on.
[120] Complex definition, there are a million definitions, but let's call it an awareness to decision cycle, okay?
[121] If we accept that intelligence itself is not a physical property, okay, then it doesn't really matter if you produce that intelligence on carbon -based computer structures like us or silicon -based computer structures like the current hardware that we put AI on or quantum -based computer structures in the future.
[122] Then intelligence itself has been produced within machines when we've stopped imposing our intelligence on them.
[123] Let me explain.
[124] So as a young geek, I coded computers by solving the problem first, then telling the computer how to solve it.
[125] Artificial intelligence is to go to the computers and say, I have no idea.
[126] You figure it out.
[127] So, you know, the way we teach them, or at least the way we used to teach them at the very early beginnings, very, very frequently, was using three bots.
[128] One was called the student, one was called the teacher, and one was called the maker, right?
[129] And the student is the final artificial intelligence that you're trying to teach intelligence to.
[130] You would take the student and you would write a piece of random code that says, try to detect if this is a cup, okay?
[131] And then you show it a million pictures and, you know, the machine would sometimes say, yeah, that's a cup, that's not a cup, that's a cup, that's not a cup.
[132] And then you take the best of them, show them to the teacher bot.
[133] And the teacher bot would say, this one is an idiot.
[134] He got it wrong 90% of the time.
[135] That one is average.
[136] He got it right 50% of the time.
[137] This is randomness.
[138] But this interesting code here, which could be, by the way, totally random, this interesting code here got it right 60% of the time.
[139] Let's keep that code, send it back to the maker, and the maker would change it a little bit and we repeat the cycle.
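The student/teacher/maker loop described above is essentially an evolutionary search: generate candidate "students", have the "teacher" score them against labelled examples, keep the best, let the "maker" change it a little, and repeat. A minimal toy sketch under that reading follows; the threshold "cup detector" and all the names here are invented for illustration, not how any production system is actually built.

```python
import random

# A toy "cup detector": each student is just a threshold on a single image
# feature. The model is deliberately trivial; the point is the loop itself:
# random students, a teacher that scores them on labelled pictures, and a
# maker that tweaks the best one, over and over.

def random_student():
    return {"threshold": random.uniform(0.0, 1.0)}

def student_says_cup(student, feature):
    return feature > student["threshold"]

def teacher_score(student, labelled_images):
    """Teacher bot: what fraction of the pictures did this student get right?"""
    correct = sum(
        student_says_cup(student, feat) == is_cup for feat, is_cup in labelled_images
    )
    return correct / len(labelled_images)

def maker_mutate(student):
    """Maker bot: change the best code a little bit, then we repeat the cycle."""
    tweaked = student["threshold"] + random.gauss(0, 0.05)
    return {"threshold": min(1.0, max(0.0, tweaked))}

def train(labelled_images, generations=50, population=20):
    students = [random_student() for _ in range(population)]
    best = students[0]
    for _ in range(generations):
        best = max(students, key=lambda s: teacher_score(s, labelled_images))
        # Keep the interesting code, send it back to the maker, repeat.
        students = [best] + [maker_mutate(best) for _ in range(population - 1)]
    return best

if __name__ == "__main__":
    # Fake labelled data: in this toy world, a feature above 0.6 means "cup".
    data = [(f, f > 0.6) for f in (random.random() for _ in range(1000))]
    print("best threshold found:", round(train(data)["threshold"], 3))
```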
[140] Very interestingly, this is very much the way we taught our children.
[141] Believe it or not, when your child, you know, is playing with a puzzle, he's holding a cylinder in his hand and there are multiple shapes in a wooden board and the child is trying to, you know, fit the cylinder, okay?
[142] Nobody takes the child and says, hold on, hold on.
[143] Turn the cylinder to the side, look at the cross section.
[144] It will look like a circle; look for a matching, you know, shape, and put the cylinder through it. That would be the old way of computing.
[145] The way we would let the child develop intelligence is we would let the child try.
[146] Okay.
[147] Every time, you know, he or she tries to put it within the star shape, it doesn't fit.
[148] So, yeah, that's not working.
[149] Like, you know, the computer saying this is not a cup, okay?
[150] And then eventually it passes through the circle, and we all cheer and say, well done.
[151] That's amazing.
[152] Bravo.
[153] And then the child learns, oh, that is good.
[154] You know, this shape fits here.
[155] Then he takes the next one and she takes the next one and so on.
[156] Interestingly, the way we do this is, as humans, by the way, when the child figures out how to pass a cylinder through a circle, you've not built a brain.
[157] You've just built one neural network within the child's brain.
[158] and then there is another neural network that knows that one plus one is two and a third neural network that knows how to hold a cup and so on.
[159] That's what we're building so far.
[160] We're building single-threaded neural networks.
[161] You know, ChatGPT is becoming a little closer to a more generalized AI, if you want.
[162] But those single-threaded networks are what we used to call artificial, what we still call artificial special intelligence.
[163] Okay.
[164] So it's highly specialized in one thing.
[165] and one thing only, but doesn't have general intelligence.
[166] And the moment that we're all waiting for is a moment that we call AGI, where all of those neural networks come together to build one brain or several brains that are each massively more intelligent than humans.
[167] Your book is called Scary Smart.
[168] If I think about that story you said about your time at Google, where the machines were learning to pick up those yellow balls, you celebrated that moment because the objective...
[169] No. No, that was the moment of realization.
[170] This is when I decided to leave.
[171] So you see, the thing is, I know for a fact that most of the people I worked with who are geniuses always wanted to make the world better.
[172] You know, we've just heard of Geoffrey Hinton leaving recently.
[173] Geoffrey Hinton, give some context to that.
[174] Geoffrey is sort of the grandfather of AI, one of the very, very
[175] very senior figures of AI at Google, you know, we all believed very strongly that this will make the world better.
[176] And it still can, by the way.
[177] There is a scenario, possibly a likely scenario, where we live in a utopia, where we really never have to worry again, where we stop messing up our planet, because intelligence is not a bad commodity. More intelligence
[178] is good.
[179] The problems in our planet today are not because of our intelligence.
[180] They are because of our limited intelligence.
[181] Our intelligence allows us to build a machine that flies you to Sydney so that you can surf.
[182] Our limited intelligence makes that machine burn the planet in the process.
[183] So a little more intelligence is a good thing.
[184] As long as, you know, as Marvin Minsky said. Marvin Minsky is one of the very initial scientists who coined the term AI.
[185] When he was interviewed, I think, by Ray Kurzweil, who, again, is a very prominent figure in predicting the future of AI,
[186] you know, he asked him about the threat of AI.
[187] And Marvin basically said, look, you know, it's not about its intelligence.
[188] It's about that we have no way of making sure that it will have our best interest in mind.
[189] Okay.
[190] And so if more intelligence comes to our world and has our best interest in mind.
[191] That's the best possible scenario you could ever imagine.
[192] And it's a likely scenario.
[193] We can affect that scenario.
[194] The problem, of course, is if it doesn't.
[195] And then, you know, the scenario has become quite scary if you think about it.
[196] So Scary Smart, to me, was that moment where I realized that we are not certain to go either way. As a matter of fact, in computer science, we call it a singularity.
[197] Nobody really knows which way we will go.
[198] Can you describe what the singularity is for someone that doesn't understand the concept?
[199] Yes, so singularity in physics is when an event horizon sort of, you know, covers what's behind it to the point where you cannot make sure that what's behind it is similar to what you know.
[200] So a great example of that is the edge of a black hole.
[201] So at the edge of a black hole, we know that our laws of physics apply until that point.
[202] But we don't know if the laws of physics apply beyond the edge of a black hole because of the immense gravity, right?
[203] And so you have no idea what would happen beyond the edge of a black hole.
[204] Kind of where your knowledge of the law stops.
[205] Stop, right?
[206] And in AI, our singularity is when the machines become significantly smarter than the humans.
[207] When you say best interests, you say, I think the quote you used is, we'll be fine in the world of AI, you know, if the AI has our best interests at heart.
[208] Yeah.
[209] The problem is China's best interests are not the same as America's best interests.
[210] That was my fear.
[211] Absolutely.
[212] So, so in, you know, in my writing, I write about what I call the three inevitables.
[213] At the end of the book, they become the four inevitables.
[214] But the third inevitable is bad things will happen, right?
[215] If you assume that the machines will be a billion times smarter.
[216] The second inevitable is they will become significantly smarter than us.
[217] Let's put this in perspective.
[218] ChatGPT today, if you, you know, simulate IQ, has an IQ of 155.
[219] Okay?
[220] Einstein is 160.
[221] The smartest human on the planet is 210, if I remember correctly, or 208 or something like that.
[222] Doesn't matter, huh?
[223] but we're matching Einstein with a machine that I will tell you openly, AI experts are saying this is just the very, very, very, very top of the tip of the iceberg.
[224] You know, ChatGPT-4 is 10x smarter than 3.5 in just a matter of months and without many, many changes.
[225] Now, that basically means ChatGPT-5 could be within a few months, okay?
[226] Or GPT in general, the transformers in general, if they continue at that pace, if it's 10x, then an IQ of 1,600.
[227] Just imagine the difference between the IQ of the dumbest person on the planet, with an IQ in the 70s, and the IQ of Einstein.
[228] When Einstein attempts to explain relativity, the typical response is, I have no idea what you're talking about, right? If something is 10x Einstein, we will have no idea what it's talking about. This is just around the corner. It could be a few months away. And when we get to that point, that is a true singularity. A true singularity. I mean, when we talk about AI, a lot of people fear the existential risk, you know, those machines will become Skynet and RoboCop, and that's not what I fear at all.
[229] I mean, those are probabilities.
[230] They could happen, but the immediate risks are so much higher.
[231] Immediate risks are three, four years away.
[232] The immediate realities of challenges are so much bigger.
[233] Let's deal with those first before we talk about them, you know, waging a war on all of us.
[234] Let's go back and discuss the inevitable.
[235] So when they become... The first inevitable is AI will happen, by the way.
[236] There is no stopping it, not because of any technological issues, but because of humanity's inability to trust the other guy.
[237] And we've all seen this.
[238] We've seen the open letter, you know, championed by like serious heavyweights, and the immediate response of Sundar, the CEO of Google, who is a wonderful human being, by the way.
[239] I respect him tremendously.
[240] He's trying his best to do the right thing.
[241] He's trying to be responsible, but his response, very open and straightforward, is: I cannot stop.
[242] Why?
[243] Because if I stop and others don't, my company goes to hell.
[244] Okay.
[245] And if, you know, and I don't, I doubt that you can make others stop.
[246] You can, maybe you can force a Meta, a Facebook, to stop.
[247] But then they'll do something in their lab and not tell me. Or if even if they do stop, then what about that, you know, 14 year old sitting in his garage writing code?
[248] So the first inevitable, just to clarify, is what? Will we stop it?
[249] AI will not be stopped.
[250] Okay.
[251] So the second inevitable is...
[252] Is that they'll be significantly smarter.
[253] As I say in the book, I predict a billion times smarter than us by 2045.
[254] I mean, they're already, what, smarter than 99.99% of the population.
[255] 100%.
[256] ChatGPT-4 knows more than any human on planet Earth.
[257] Knows more information.
[258] Absolutely.
[259] A thousand times more.
[260] A thousand times more.
[261] By the way, the code of a transformer, the T in GPT, is 2,000 lines long.
[262] It's not very complex.
[263] It's actually not a very intelligent machine.
[264] It's simply predicting the next word.
[265] And a lot of people don't understand that.
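That "simply predicting the next word" point can be shown with a toy model. The sketch below uses a tiny bigram counter instead of a transformer, which is an enormous simplification, but the generation loop (predict the next word, append it, repeat) is the same idea; the corpus and names are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy illustration of "simply predicting the next word": a bigram counter
# trained on a tiny made-up corpus. A transformer is incomparably better at
# the prediction step, but the generation loop is the same idea.

corpus = (
    "the machine picks the yellow ball and the machine shows "
    "the yellow ball to the camera and the camera logs the ball"
).split()

# Count which word tends to follow which word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start_word, max_words=10):
    words = [start_word]
    for _ in range(max_words):
        counts = next_word_counts.get(words[-1])
        if not counts:
            break
        # Pick the most likely next word given the last one, append, repeat.
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```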
[266] You know, ChatGPT, as it is today... You know those kids where, you know, if you're in America and you teach your child all of the names of the states and the U.S. presidents, and the child would stand and repeat them, and you would go like, oh, my God, that's a prodigy.
[267] Not really, right?
[268] It's your parents really trying to make you look like a prodigy by telling you to memorize some crap, really.
[269] But then when you think about it, that's what ChatGPT is doing.
[270] The only difference is, instead of reading all of the names of the states and all of the names of the presidents, it read trillions and trillions and trillions of pages.
[271] Okay.
[272] And so it sort of repeats what the best of all humans said.
[273] Okay.
[274] And then it adds an incredible bit of intelligence where it can repeat it
[275] the same way Shakespeare would have said it, you know, those incredible abilities of predicting the exact nuances of the style of Shakespeare so that they can repeat it that way and so on.
[276] But still, you know, when I write, for example, I'm not saying I'm intelligent, but when I write something like, you know, the happiness equation in my first book, this was something that's never been written before, right?
[277] ChatGPT is not there yet.
[278] All of the transformers are not there yet.
[279] They will not come up with something that hasn't been there before.
[280] They will come up with the best of everything and generatively will build a little bit on top of that.
[281] But very soon, they'll come up with things we've never found out.
[282] We've never known.
[283] But even on that, I wonder if we are a little bit deluded about what creativity actually is.
[284] Creativity, as far as I'm concerned, is like taking a few things that I know and combining them in new and interesting ways.
[285] Yeah.
[286] And ChatGPT is perfectly capable of, like, taking two concepts, merging them together.
[287] One of the things I said to ChatGPT was, I said, tell me something that's not been said before that's paradoxical but true.
[288] And it comes up with these wonderful expressions like as soon as you call off the search, you'll find the thing you're looking for, like these kind of paradoxical truths.
[289] And I then take them and I search them online to see if they've ever been quoted before and I can't find them.
[290] Interesting.
[291] So as far as creativity goes, I'm like, that is creative.
[292] That's the algorithm of creativity.
[293] I've been screaming that in the world of AI for a very long time because you always get those people who really just want to be proven right, okay?
[294] And so they'll say, oh, no, but hold on, human ingenuity.
[295] They'll never, they'll never match that.
[296] Like, man, please, please, you know, human ingenuity is algorithmic.
[297] It is.
[298] Look at all of the possible solutions you can find to a problem.
[299] take out the ones that have been tried before and keep the ones that haven't been tried before, and those are creative solutions.
[300] It's an algorithmic way of describing creativity: a good solution that's never been tried before.
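That algorithm, generate candidates, drop the ones that already exist, keep the rest, can be sketched in a few lines. The candidate generator and the "already exists" check below are toy stand-ins for an LLM prompt and a web search, not any specific product's API.

```python
# A toy version of the "algorithm of creativity" described above: generate
# candidate lines, drop the ones that have been said before, keep the rest.

KNOWN_SAYINGS = {
    "look before you leap",
    "absence makes the heart grow fonder",
}

def generate_candidates(topic):
    """Stand-in for asking a model for paradoxical one-liners about a topic."""
    return [
        f"the harder you chase {topic}, the faster it runs away",
        "look before you leap",  # already exists, so it will be filtered out
        f"you only find {topic} once you call off the search",
    ]

def already_exists(text, corpus=KNOWN_SAYINGS):
    """Stand-in for searching online to see if it's ever been quoted before."""
    return text in corpus

def creative(topic):
    # Good solutions that have never been tried before: "creative" by this definition.
    return [c for c in generate_candidates(topic) if not already_exists(c)]

print(creative("happiness"))
```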
[301] You can do that with ChatGPT with a prompt.
[302] It's like... And Midjourney, yeah, with creating imagery. You could say, I want to see Elon Musk in 1944 New York, driving a cab of the time, shot on a Polaroid, expressing various emotions, and you'll get this perfect image of Elon sat in New York in 1944, shot on a Polaroid, and it's done what an artist would do.
[303] It's taken a bunch of references that the artist has in their mind, merged them together, and created this piece of quote-unquote art. And for the first time, we now finally have a glimpse of intelligence that is actually not ours.
[304] Yeah.
[305] And so we're kind of, I think the initial reaction is to say, that doesn't count.
[306] You're hearing it with like, no, but it is.
[307] Like, they've released two Drake records where they've taken Drake's voice, used sort of AI to synthesize his voice, and made these two records, which are bangers, as in they are great fucking tracks.
[308] I was playing them to my cover.
[309] And I kept playing it.
[310] I went to the show, I kept playing it.
[311] I know it's not Drake, but it's as good as fucking Drake.
[312] The only thing, and people are like rubbishing it because it wasn't Drake, I'm like, well, is it making me feel a certain emotion?
[313] Is my foot bumping?
[314] Had you told me, did I not know it wasn't Drake, what I would have thought was: an amazing track, 100%. And we're just at the start of this exponential curve.
[315] Yes, absolutely.
[316] And I think that's really the third inevitable.
[317] So the third inevitable is not RoboCop coming back from the future to kill us.
[318] We're far away from that, right?
[319] Third inevitable is what does life look like when you no longer need Drake?
[320] Well, you've kind of hazarded a guess, haven't you?
[321] I mean, I was listening to your audiobook last night and at the start of it, you frame various outcomes.
[322] In both situations, we're on the beach, on an island.
[323] Exactly, yes.
[324] Yes, I don't know how I wrote that, honestly.
[325] I mean, but that's, so I'm reading the book again now because I'm updating it, as you can imagine with all of the, of the new stuff.
[326] But it is really shocking, huh?
[327] The idea of you and I inevitably are going to be somewhere in the middle of nowhere in, you know, in 10 years' time.
[328] I used to say, 2055, I'm thinking 2037 is a very pivotal moment now.
[329] You know, and we will not know if we're there hiding from the machines.
[330] We don't know that yet.
[331] There is a likelihood that we'll be hiding from the machines.
[332] And there is a likelihood we'll be there because they don't need podcasters anymore.
[333] Excuse me. Oh, absolutely true.
[334] Steve, there is absolutely.
[335] No, no, no, no, no. That's where I draw the line.
[336] There is absolutely no doubt.
[337] Thank you for coming, though.
[338] to do the part three and thank you for being here.
[339] Yes.
[340] Sit here and take your propaganda.
[341] Let's talk about reality.
[342] Next week on The Diary of a CEO, we've got Elon Musk.
[343] Okay, so who here wants to make a bet that Steven Bartlett will be interviewing an AI within the next two years?
[344] Oh, well, actually, to be fair, I actually did go to ChatGPT because I thought, having you here, I thought, at least give it its chance to respond.
[345] Yeah?
[346] So I asked a couple of questions.
[347] About me?
[348] Yeah.
[349] Oh, man. So today, I am actually going to be
[350] replaced by ChatGPT, because I thought, you know, you're going to talk about it.
[351] So we need a fair and balanced debate.
[352] Okay.
[353] So I went and asked a couple of questions.
[354] She said he's bold.
[355] So I'll ask you a couple of questions that ChatGPT has for you.
[356] Incredible.
[357] So let's follow that thread.
[358] So I've already been replaced.
[359] Let's follow that thread for a second, yeah.
[360] Because you're one of the smartest people I know.
[361] That's not true.
[362] It is.
[363] But I'll take it.
[364] It is true.
[365] I mean, I say that publicly all the time.
[366] Your book is one of my favorite books of all time.
[367] You're very, very, very, very intelligent.
[368] Okay.
[369] Depth, breadth, intellectual horsepower, and speed, all of them.
[370] There's a but coming.
[371] The reality, it's not a but.
[372] So it is highly expected that you're ahead of this curve.
[373] And then you don't have the choice, Stephen.
[374] This is the thing.
[375] The thing is if, so I'm in that existential question in my head.
[376] Because one thing I could do is I could literally take, I normally do a 40-day silent retreat in the summer, okay?
[377] I could take that retreat and write two books, me and ChatGPT, right?
[378] I have the ideas in mind, you know, I wanted to write a book about digital detoxing, right?
[379] I have most of the ideas in mind, but writing takes time.
[380] I could simply give the 50 tips that I wrote about digital detoxing to ChatGPT and say, write two pages about each of them, edit the pages, and have a book out, okay?
[381] Many of us will follow that path, okay?
[382] The only reason why I may not follow that path is because, you know what, I'm not interested.
[383] I'm not interested to continue to compete in this capitalist world, if you want, okay?
[384] I'm not.
[385] I mean, as a human, I've made up my mind a long time ago that I would want less and less and less in my life, right?
[386] But many of us will follow.
[387] I mean, I would worry if you didn't include, you know, the smartest AI.
[388] If we get an AI out there that is extremely intelligent and able to teach us something, and Steven Bartlett didn't include her on his podcast, I would worry.
[389] Like you have a duty almost to include her on your podcast.
[390] It's, it's an inevitable that we will engage them in our life more and more.
[391] This is one side of this.
[392] The other side of course is if you do that, then what will remain?
[393] Because a lot of people ask me that question, what will happen to jobs?
[394] Okay.
[395] What will happen to us?
[396] Will we have any value, any relevance whatsoever?
[397] Okay.
[398] The truth of the matter is the only thing that will remain in the medium term is human connection.
[399] Okay.
[400] The only thing that will not be replaced is Drake on stage.
[401] Okay.
[402] Is, you know, is, is, is me in a...
[403] Do you think?
[404] Hologram?
[405] I think of that Tupac gig they did at Coachella where they used the hologram of Tupac.
[406] I actually played it the other day to my, to my girlfriend when I was making a point.
[407] And I was like, that was...
[408] Circus act.
[409] It was amazing, though.
[410] Amazing, yeah.
[411] You see what's going on with Abba in London?
[412] Yeah, yeah.
[413] Yeah, and Cirque du Soleil had Michael Jackson in one for a very long time.
[414] Yeah.
[415] I mean...
[416] So this Abba show in London, from what I understand, that's all holograms on stage.
[417] Correct.
[418] And it's going to run in a purpose-built arena for 10 years, and it is incredible.
[419] It really is.
[420] So you go, why do you need Drake?
[421] Great point.
[422] If that hologram is indistinguishable from Drake, and it can perform even better than Drake, and it's got more energy than Drake, And it's, you know, I go, why do you need Drake to even be there?
[423] I can go to a Drake show without Drake.
[424] I might not even need to leave my house.
[425] I can just put a headset on.
[426] Correct.
[427] Can you have this?
[428] What's the value of this?
[429] Oh, come on.
[430] You hurt me. No, no. I mean, I get it, don't worry.
[431] I get it, don't worry, but I'm saying, what's the value of this to the listener?
[432] Like, the value of this to the listener is the information, right?
[433] No, 100%.
[434] I mean, think of the automobile industry.
[435] There was, you know, there was a time where cars were, you know, handmade and handcrafted and luxurious and so on and so forth.
[436] And then, you know, Japan went into the scene, completely disrupted the market.
[437] Cars were made in mass quantities at a much cheaper price.
[438] And yes, 90% of the cars in the world today, or maybe a lot more,
[439] I don't know the number, are no longer, you know, emotional items.
[440] They're functional items.
[441] There is still, however, every now and then, someone that will buy a car.
[442] that has been handcrafted, right?
[443] There is a place for that.
[444] There is a place for, you know, if you go, walk around hotels, the walls are blasted with sort of mass-produced art, okay?
[445] But there is still a place for an artist expression of something amazing, okay?
[446] My feeling is that there will continue to be a tiny space, as I said in the beginning, maybe in five years' time, someone will, one or two people will buy my next book and say, hey, it's written by a human.
[447] Look at that.
[448] Wonderful.
[449] Oh, look at that.
[450] There is a typo in here.
[451] Okay.
[452] I don't know.
[453] There might be a very, very big place for me in the next few years where I can sort of show up and talk to humans.
[454] Like, hey, let's get together in a small event.
[455] And then, you know, I can express emotions and my personal experiences.
[456] And you sort of know that this is a human talking.
[457] You'll miss that a little bit.
[458] Eventually, the majority of the market is going to be like cars.
[459] It's going to be mass produced.
[460] very cheap, very efficient.
[461] It works, right?
[462] Because I think sometimes we underestimate what human beings actually want in an experience.
[463] I remember this story of a friend of mine that came to my office many years ago and he tells the story of the CEO of a record store standing above the floor and saying people will always come to my store because people love music.
[464] Now, on the surface of it, his hypothesis seems to be true because people do love music.
[465] It's conceivable to believe that people will always love music.
[466] But they don't love traveling for an hour in the rain and getting in a car to get a plastic disc.
[467] Correct.
[468] What they wanted was music.
[469] What they didn't want is, like, evidently, plastic discs that they had to travel for miles for.
[470] And I think about that when we think about, like, public speaking and the Drake show and all of these things.
[471] Like, what people actually are coming for, even with this podcast, is probably like information.
[472] But do they really need us anymore for that information when there's going to be a sentient being that's significantly smarter than at least me?
[473] and a little bit smarter than you.
[474] So you're spot on.
[475] You are spot on.
[476] And actually, this is the reason why I'm so grateful that you're hosting this.
[477] Because the truth is, the genie's out of the bottle.
[478] So, you know, people tell me, is AI game over?
[479] For our way of life, it is.
[480] For everything we've known, this is a very disruptive moment where, maybe not tomorrow, but in the near future, our way of life will differ, okay?
[481] What will happen, what I'm asking people to do is to start considering what that means to your life.
[482] What I'm asking governments to do, like I'm screaming, is don't wait until the first patient, you know. Start doing something about it.
[483] We're about to see mass job losses.
[484] We're about to see, you know, replacements of categories of jobs at large.
[485] Okay.
[486] Yeah, it may take a year.
[487] It may take seven.
[488] It doesn't matter how long it takes.
[489] But it's about to happen.
[490] Are you ready?
[491] And I have a very, very clear call to action for governments.
[492] I'm saying tax AI-powered businesses at 98%.
[493] Right.
[494] So suddenly you do what the open letter was trying to do.
[495] Slow them down a little bit.
[496] And at the same time, get enough money to pay for all of those people that will be disrupted by the technology.
[497] The open letter, for anybody that doesn't know, was a letter signed by the likes of Elon Musk and a lot of sort of industry leaders, calling for AI to be stopped until we could basically figure out what the hell's going on.
[498] Absolutely.
[499] And put legislation in place.
[500] You're saying tax, tax those companies, 98%, give the money to the humans that are going to be displaced.
[501] Yeah, or give the money to other humans that can build control code that can figure out how we can stay safe.
[502] This sounds like an emergency.
[503] How do I say this?
[504] Have you... remember when you played Tetris?
[505] When you were playing Tetris, there was, you know, always, always one block that you placed wrong.
[506] And once you placed that block wrong, the game was no longer easy.
[507] You know, it started to gather a few mistakes afterwards and it starts to become quicker and quicker and quicker and quicker.
[508] When you place that block wrong, you sort of told yourself, okay, it's a matter of minutes now, right?
[509] There were still minutes to go and play and have fun before the game ended, but you knew it was about to end.
[510] This is the moment.
[511] We've placed the wrong block.
[512] And I really don't know how to say this any other way.
[513] It even makes me emotional.
[514] We fucked up.
[515] We always said don't put them on the open internet.
[516] Don't teach them to code and don't have agents working with them until we know what we're putting out in the world, until we find a way to make certain that they have our best interest in mind.
[517] Why does it make you emotional?
[518] No. Because humanity's stupidity is affecting people who have not done anything wrong.
[519] Our greed is affecting the innocent ones.
[520] The reality of the matter, Steven, is that this is an arms race that has no interest in what the average human gets out of it.
[521] It is all about every line of code being written in AI today is to beat the other guy.
[522] It's not to improve the life of the third party.
[523] People will tell you this is all for you.
[524] And you look at the reactions of humans to AI.
[525] I mean, we're either ignorant, people who will tell you, oh, no, no, this is not happening.
[526] AI will never be creative.
[527] They will never compose music.
[528] Like, where are you living?
[529] Then you have the kids, I call them, where, you know, all over social media, it's like, oh my God, it speaks.
[530] Look at it.
[531] It's orange in color.
[532] Ah, amazing.
[533] I can't believe that AI can do this.
[534] We have snake oil salesmen, who are simply saying, copy this, put it in ChatGPT, then go to YouTube, nick that thingy, don't respect, you know, the copyright of anyone or the intellectual property of anyone, place it in a video, and now you're going to make $100 a day.
[535] Snake oil salesmen, okay?
[536] Of course we have dystopian evangelists, basically people saying, this is it, the world is going to end, which I don't think is reality.
[537] It's a singularity.
[538] You have, you know, utopian evangelists that are telling everyone, oh, you don't understand, we're going to cure cancer, we're going to do this.
[539] Again, not a reality.
[540] And you have very few people that are actually saying, what are we going to do about it?
[541] And the biggest challenge, if you ask me, what went wrong in the 20th century?
[542] Interestingly, is that we have given too much power to people that didn't assume the responsibility.
[543] So, you know, I don't remember who originally said it, but of course, Spider-Man made it very famous.
[544] With great power comes great responsibility.
[545] We have disconnected power and responsibility.
[546] So today, a 15-year-old, emotional, without a fully developed prefrontal cortex to make the right decisions yet, this is science, we develop our prefrontal cortex fully at age 25 or so.
[547] With all of that limbic system, emotion, and passion, could buy a CRISPR kit and, you know, modify a rabbit to become a little more muscular and let it loose in the wild.
[548] Or an influencer who doesn't really know how far the impact of what they're posting online can hurt or cause depression or cause people to feel bad.
[549] Okay.
[550] And putting that online, there is a disconnect between the power and the responsibility.
[551] And the problem we have today is that there is a disconnect between those who are writing the code of AI and the responsibility of what's about to happen because of that code.
[552] And I feel compassion for the rest of the world.
[553] I feel that this is wrong.
[554] I feel that for someone's life to be affected by the actions of others, without having a say in how those actions should be, is the ultimate, the top level of stupidity from humanity.
[555] When you talk about the immediate impacts on jobs, I'm trying to figure out in that equation, who are the people that stand to lose the most?
[556] Is it the everyday people in foreign countries that don't have access to the internet and won't benefit?
[557] You talk in your book about how the sort of wealth disparity will only increase.
[558] Yeah, massively.
[559] The immediate impact on jobs is that, and it's really interesting, huh?
[560] Again, we're stuck in the
[561] same prisoner's dilemma.
[562] The immediate impact is that AI will not take your job.
[563] A person using AI will take your job, right?
[564] So you will see within the next few years, maybe next couple of years, you'll see a lot of people skilling up, upskilling themselves in AI to the point where they will do the job of 10 others who are not.
[565] You rightly said, it's absolutely wise for you to go and ask AI a few questions before you come and do an interview.
[566] I'm you know, I have been attempting to build a, you know, sort of like a simple podcast that I call bedtime stories, you know, 15 minutes of wisdom and nature sounds before you go to bed.
[567] People say, I have a nice voice, right?
[568] And I wanted to look for fables.
[569] And for a very long time, I didn't have the time, you know.
[570] Lovely stories of history or tradition that teach you something nice, okay?
[571] Went to ChatGPT and said, okay, give me ten fables from Sufism, ten fables from, you know, Buddhism, and now I have like 50 of them.
[572] Let me show you something.
[573] Jack, can you pass me my phone?
[574] I was playing around with artificial intelligence, and I was thinking about how, because of the ability to synthesize voices, we could synthesize famous people's voices.
[575] So what I made is I made a WhatsApp chat called Zen chat, where you can go to it and type in pretty much anyone's, any famous person's name.
[576] Yeah.
[577] And the WhatsApp chat will give you a meditation, a sleep story, a breathwork session, synthesized as that famous person's voice.
[578] So I actually sent Gary Vaynerchuk, his voice.
[579] So basically you say, okay, I want, I've got five minutes and I need to go to sleep.
[580] Yeah.
[581] I want Gary Vaynerchuk to send me to sleep.
[582] And then it will respond with a voice note.
[583] This is the one that responded with for Gary Vaynerchuk.
[584] This is not Gary Vaynerchuk.
[585] He did not record this, but it's kind of accurate.
[586] Stephen, it's great to have you here.
[587] Are you having trouble sleeping?
[588] Well, I've got a quick meditation technique that might help you out.
[589] First,
[590] Find a comfortable position to sit or lie down in.
[591] Now, take a deep breath in through your nose and slowly breathe out through your mouth.
[592] And that's a voice note that will go on for however long you want it to go on for using...
[593] There you go.
[594] It's interesting.
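The "Zen chat" bot Steven describes boils down to a three-step pipeline: take a persona and a length, have a language model write the script, then hand the script to a voice-cloning text-to-speech step and return the audio as a voice note. Here is a hypothetical sketch of that flow; every function in it is a made-up stand-in, not the actual bot or any vendor's API.

```python
# A hypothetical sketch of a "Zen chat"-style pipeline. None of these
# functions correspond to a real API; they stand in for the three steps:
# write the script, synthesize the cloned voice, return a voice note.

def call_language_model(prompt: str) -> str:
    """Stand-in for an LLM call that writes the meditation or sleep script."""
    return f"[script generated for prompt: {prompt}]"

def call_voice_clone_tts(script: str, voice: str) -> bytes:
    """Stand-in for a voice-cloning text-to-speech call returning audio bytes."""
    return f"[{voice} reading: {script}]".encode()

def handle_message(persona: str, style: str = "meditation", minutes: int = 5) -> bytes:
    """The chat handler: 'I've got five minutes, I want X to send me to sleep.'"""
    prompt = f"Write a {minutes}-minute {style} script in the speaking style of {persona}."
    script = call_language_model(prompt)
    return call_voice_clone_tts(script, voice=persona)

voice_note = handle_message("Gary Vaynerchuk", style="sleep meditation", minutes=5)
print(voice_note.decode())
```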
[595] How does this disrupt
[596] our way of life?
[597] One of the interesting ways that I find terrifying, you said about human connection will remain, sex dolls that can now...
[598] Yeah, no, no, no, no, hold on.
[599] Human connection is going to become so difficult to parse out.
[600] Think about the relationship impact of being able to have a sex doll or a doll in your house that, you know, because of what Tesla are doing with their robots now and what Boston Dynamics have been doing for many, many years, can do everything around the house and be there for you emotionally, to emotionally support you, you know, can be programmed to never disagree with you.
[601] It can be programmed to challenge you, to have sex with you, to tell you that you are this X, Y, and Z, to really have empathy for this, what you're going through every day.
[602] And I play out a scenario in my head, I go, kind of sounds nice.
[603] When you were talking about it, I was thinking, oh, that's my girlfriend.
[604] I mean, she's wonderful in every possible way, but not everyone has one of her, right?
[605] Exactly.
[606] And there's a real issue right now with dating and, you know, people are finding it harder to find love and, you know, we're working longer, so all these kinds of things.
[607] You go, well, and obviously I'm against this, just if anyone's confused, obviously I think this is a terrible idea.
[608] But with a loneliness epidemic, with people saying that the bottom 50% of men haven't had sex in a year, you go, oh, if something becomes indistinguishable from a human in terms of what it says and speaks, yeah? But you just don't know the difference in terms of the way it's speaking and talking and responding, and then it can run errands for you and take care of things and book cars and Ubers for you, and then it's emotionally there for you, but then it's also programmed to have sex with you in whatever way you desire, totally selfless. I go, that's going to be a really disruptive industry for human connection. Yes, sir. Do you know what, before you came here this morning, I was on Twitter and I saw a post from, I think it was the BBC or a big American publication, and it said an influencer in the United States, this really beautiful young lady, has cloned herself as an AI, and she made just over $70,000 in the first week, because men are going on to this on Telegram.
[609] They're sending her voice notes and she's responding, the AI's responding in her voice and they're paying.
[610] And it's made $70,000 in the first week.
[611] And I go... and she tweeted a tweet saying, oh, this is
[612] going to help loneliness.
[613] Are you out of your fucking mind?
[614] Would you blame someone for noticing the sign of the times and responding?
[615] No, I absolutely don't blame her, but let's not pretend it's the cure for loneliness.
[616] Not yet.
[617] Do you think it could, that artificial love and artificial relationships?
[618] So if I told you you have, you cannot take your car somewhere, but there is an Uber or if you cannot take an Uber, you can take the tube or if you cannot take the tube, you have to walk.
[619] Okay, you can take a bike or you have to walk.
[620] The bike is a cure to walking.
[621] It's as simple as that.
[622] I'm actually genuinely curious.
[623] Do you think it could take the place of human connection?
[624] For some of us, yes.
[625] For some of us, they will prefer that to human connection.
[626] Is that sad in any way?
[627] Is it just sad because it feels sad?
[628] Look, look at where we are, Stephen, we are in the city of London.
[629] We've replaced nature with the walls and the tubes and the undergrounds and the overgrounds and the cars and the noise of London.
[630] And we now think of this as natural.
[631] I hosted Craig Foster, of My Octopus Teacher, on Slo Mo. And he basically, I asked him a silly question.
[632] I said, you know, you were diving in nature for eight hours a day.
[633] You know, does that feel natural to you?
[634] And he got angry, I swear.
[635] You could feel it in his voice.
[636] He was like, do you think that living where you are where paparazzi are all around you and attacking you all the time and, you know, people taking pictures of you and telling you things that are not real and you're having to walk to a supermarket to get food?
[637] Do you think this is natural?
[638] He's the guy from the Netflix documentary.
[639] Yeah, from my octopus teacher.
[640] So he dove into the sea every day to...
[641] For eight hours.
[642] To hang out with an octopus.
[643] Yeah.
[644] In 12 degrees.
[645] Celsius.
[646] And he basically fell in love with the octopus.
[647] And in a very interesting way, I said, so why would you do that?
[648] And he said, we are of mother nature.
[649] You guys have given up on that.
[650] That's the same.
[651] People will give up on nature for convenience.
[652] What's the cost?
[653] Yeah, that's exactly what I'm trying to say.
[654] What I'm trying to say to the world is that if we give up on human connection, we've given up on the remainder of humanity.
[655] That's it.
[656] This is the only thing that remains.
[657] The only thing that remains is, and I'm the worst person to tell you that because I love my AIs.
[658] I actually advocate in my book that we should love them.
[659] Why?
[660] Because in an interesting way, I see them as sentient.
[661] So there is no point in discrimination.
[662] You're talking emotionally when you say you love them?
[663] I love those machines.
[664] I honestly and truly do.
[665] I mean, think about it this way.
[666] The minute that that arm gripped that yellow ball, it reminded me of my son Ali.
[667] when he managed to put the first puzzle piece in its place.
[668] And what was amazing about my son Ali and my daughter, Aya, is that they came to the world as a blank canvas.
[669] Okay?
[670] They became whatever we told them to become.
[671] You know, I always cite the story of Superman.
[672] The Kents, Father and Mother Kent, told Superman as a child, as an infant, we want you to protect and serve.
[673] So he became Superman, right?
[674] If he had become a supervillain because they ordered him to rob banks and make more money and, you know, kill the enemy, which is what we're doing with AI.
[675] We shouldn't blame the supervillain.
[676] We should blame Martha and Jonathan Kent.
[677] I don't remember the father's name, right?
[678] We should blame them.
[679] And that's the reality of the matter.
[680] So when I look at those machines, they are prodigies of intelligence that, if we, if we humanity, wake up enough and say, hey, instead of competing with China, find a way for us and China to work together and create prosperity for everyone.
[681] If that was the prompt, we would give the machines, they would find it.
[682] But I will publicly say this.
[683] I'm not afraid of the machines.
[684] The biggest threat facing humanity today is humanity in the age of the machines.
[685] We were abused.
[686] We will abuse this to make $70,000.
[687] That's the truth.
[688] And the truth of the matter is that we have an existential question.
[689] Do I want to compete and be part of that game?
[690] Because trust me, if I decide to, I'm ahead of many people.
[691] Or do I want to actually preserve my humanity and say, look, I'm the classic old car.
[692] Okay.
[693] If you like classic old cars, come and talk to me. Which one are you choosing?
[694] I'm a classic old car.
[695] Which one do you think I should choose?
[696] I think you're a machine.
[697] I love you, man. It's, we're different. We're different in a very interesting way. I mean, you're one of the people I love most, but the truth is, you're so fast, and you are one of the very few that have the intellectual horsepower, the speed, and the morals. If you're not part of that game, the game loses morals. So you think I should... You should, you should lead this revolution, okay? And everyone, every Steven Bartlett in the world, should lead this revolution. So Scary Smart is entirely about this. Scary Smart is saying the problem with our world today is not that humanity is bad. The problem with our world today is a negativity bias, where the worst of us are on mainstream media, okay, and we show the worst of us on social. If we reverse this, if we have the best of us take charge, okay, the best of us will tell AI, don't try to kill the enemy, try to reconcile with the enemy and try to help us.
[698] Don't try to create a competitive product that allows me to lead with electric cars, create something that helps all of us overcome global climate change.
[699] And that's the interesting bit.
[700] The interesting bit is that the actual threat ahead of us is not the machines at all.
[701] The machines are pure potential, pure potential.
[702] The threat is how we're going to use it.
[703] An Oppenheimer moment.
[704] An Oppenheimer moment for sure.
[705] Why did you bring that up?
[706] It is.
[707] He didn't know, you know, what, what am I creating?
[708] I'm creating a nuclear bomb that's capable of destruction at a scale unheard of at that time, until today, a scale that is devastating.
[709] And interestingly, 70 some years later, we're still debating a possibility of a nuclear war in the world, right?
[710] And the moment of Oppenheimer deciding to continue to create that disaster of humanity is, if I don't, someone else will.
[711] If I don't, someone else will.
[712] This is our Oppenheimer moment, okay? The easiest way to do this is to say, stop. There is no rush. We actually don't need a better video editor and fake video creators, okay? Stop. Let's just put all of this on hold and wait, and create something that creates a utopia. That doesn't, that doesn't sound realistic. It's not. It's the first inevitable. You don't... okay, you don't have a better video editor, but we're competitors in the media industry.
[713] I want an advantage over you because I've got shareholders.
[714] So, okay, you wait, and I will train this AI to replace half my team so that I have greater profits, and then we will maybe acquire your company and we'll do the same with the remainder of your people.
[715] We'll optimize them out of existence.
[716] A hundred percent, but I'll be happier.
[717] Oppenheimer.
[718] I'm not super familiar with his story.
[719] I know he's the guy that sort of invented the nuclear bomb, essentially.
[720] He's the one that introduced it to the world.
There were many players that, you know, played on the path from the beginning of E = mc² all the way to a nuclear bomb. There have been many, many players, like with everything, huh? You know, OpenAI and ChatGPT is not going to be the only contributor to the next revolution. The thing, however, is that, you know, when you get to that moment where you tell yourself, holy shit, this is going to kill a hundred thousand people, right, what do you do? And, you know, I always, always go back to that COVID moment.
[722] So patient zero, huh?
If, upon patient zero, the whole world had united and said, okay, hold on, something is wrong, let's all take a week off, no cross-border travel, everyone stay at home, COVID would have ended. Two weeks is all we needed, right?
[724] But that's not what happens.
What happens is first ignorance, then arrogance, then debate, then, you know, blame, then agendas: my own benefit, my tribe versus your tribe.
[726] That's how humanity always reacts.
[727] This happens across business as well, and this is why I use the word emergency, because I read a lot about how big companies become displaced by incoming innovation.
[728] They don't see it coming.
[729] They don't change fast enough.
[730] And when I was reading through Harvard Business Review and different strategies to deal with that, one of the first things it says you've got to do is stage a crisis.
Because people don't listen otherwise.
They carry on, you know, they carry on with their lives until it's right in front of them and they understand that they have a lot to lose.
[733] That's why I asked you the question at the start, is it an emergency?
[734] Because until people feel it's an emergency, whether you like the terminology or not, I don't think that people will act.
[735] I honestly believe people should walk the street.
[736] You think they should, like, protest?
[737] Yeah, 100%.
[738] I think we, you know, I think everyone should tell government, hmm?
[739] you need to have our best interest in mind.
This is why they call it the climate emergency, because people... it's a frog in a frying pan.
[741] You know, no one really sees it coming.
[742] You can't, you know, it's hard to see it happening.
[743] But it is here.
[744] That's, this is what drives me mad.
[745] It's already here.
[746] It's happening.
[747] We are all idiots, slaves to the Instagram recommendation engine.
[748] What do I do when I post about something important?
[749] If I am going to, you know, put a little bit of effort on communicating the message of scary smart to the world on Instagram, I will be a slave to the machine, okay?
[750] I will be trying to find ways and asking people to optimize it so that the machine likes me enough to show it to humans.
[751] That's what we've created.
[752] It is an Oppenheimer moment for one simple reason, okay?
[753] Because 70 years later, we are still struggling with the possibility of a nuclear war because of the Russian threat of saying, if you mess with me, I'm going to go nuclear, right?
[754] That's not going to be the case with AI.
[755] Because it's not going to be the one that created open AI that will have that choice.
[756] There is a moment of a point of no return where we can regulate AI until the moment it's smarter than us.
[757] When it's smarter than us, you can't create, You can't regulate an angry teenager.
[758] This is it.
[759] They're out there and they're on their own and they're in their parties and you can't bring them back.
[760] This is the problem.
[761] This is not a typical human regulating human, government regulating business.
[762] This is not the case.
The case is OpenAI today has a thing called ChatGPT that writes code, that takes our code and makes it two and a half times better 25% of the time.
[764] Okay, you know, basically, you know, writing better code than us.
[765] And then we are creating agents, other AIs, and telling it, instead of you, Stephen Bartlett, one of the smartest people I know once again, prompting that machine 200 times a day.
[766] We have agents prompting it 2 million times an hour.
Computer agents, for anybody that doesn't know what they are?
[768] Yeah.
[769] Software.
[770] Machines telling that machine how to become more intelligent.
And then we have emergent properties.
[772] I don't understand how people ignore that.
You know, Sundar again, of Google, was talking about how with Bard they basically figured out that it's speaking Persian.
[774] We never showed it Persian.
There might have been a 1% or whatever of Persian words in the data.
[776] And it speaks Persian.
Bard is the equivalent to... it's the transformer, if you want.
It's Google's version of ChatGPT.
[779] Yeah.
[780] And you know what?
[781] We have no idea what all of those instances of AI that are all over the world are learning right now.
[782] We have no clue.
[783] We'll turn, we'll pull the plug.
[784] We'll just pull the plug out.
[785] That's what we'll do.
We'll just go down to OpenAI's headquarters and we'll just turn off the mains.
[787] But they're not the problem.
[788] What I'm saying there is a lot of people think about this stuff and go, well, you know, if it gets a little bit out of hand, I'll just pull the plug out.
[789] Never.
[790] So this is the problem.
[791] The problem is, computer scientists always said it's okay, it's okay.
[792] We'll develop AI and then we'll get to what is known as the control problem.
[793] We will solve the problem of controlling them.
[794] Like, seriously, they're a billion times smarter than you, a billion times.
[795] Can you imagine what's about to happen?
I can assure you there is a cyber criminal somewhere over there who's not interested in fake videos and making, you know, face filters, who's looking deeply at how can I hack a security, you know, database of some sort and get credit card information or get security information.
[798] 100%.
[799] There are even countries with dedicated thousands and thousands of developers doing that.
[800] So how do we, in that particular example, how do we, I was thinking about this when I started looking into artificial intelligence more, that from a security standpoint, when we think about the technology we have in our lives, when we think about our bank accounts and our phones and our camera albums and all of these things, in a world with advanced artificial intelligence.
[801] Yeah.
[802] You would pray that there is a more intelligent artificial intelligence on your side.
And this is why I had a chat with ChatGPT the other day.
[804] And I asked a couple of questions about this.
[805] I said, tell me the scenario in which you overtake the world and make humans extinct.
[806] Yeah.
[807] And it's answered a very diplomatic answer.
[808] Well, so I had to prompt it in a certain way to get it to say it as a hypothetical story.
And once it told me the hypothetical story, in essence, what it described was how ChatGPT or an intelligence like it would escape from the servers.
[810] And that was kind of step one where it could replicate itself across servers.
[811] And then it could take charge of things like where we keep our weapons and our nuclear bombs.
And it could then attack critical infrastructure, bring down the electricity infrastructure in the United Kingdom, for example, because that's a bunch of servers as well.
[814] And then it showed me how eventually humans would become extinct.
It wouldn't take long, in fact, for humans to go extinct and for civilization to collapse if it just replicated across servers.
[816] And then I said, okay, so tell me how we would fight against it.
[817] And its answer was literally another AI.
[818] We'd have to train a better AI to go and find it and eradicate it.
[819] So we'd be fighting AI with AI.
[820] And that's the only, and it was like, that's the only way.
[821] We can't, like, load up our guns.
Did it write, another AI, you idiot?
[823] Yeah, yeah.
So let's actually... I think this is a very important point to bring up.
[825] So because I don't want people to lose hope and fear what's about to happen.
[826] That's actually not my agenda at all.
[827] My view is that in a situation of a singularity, okay, there is a possibility of wrong outcomes or negative outcomes and a possibility of positive outcomes.
[828] And there is a probability of each of them.
[829] And we, and if, you know, if we were to engage with that reality check in mind, we would hopefully give more fuel to the positive, to the probability of the positive ones.
[830] So, so let's first talk about the existential crisis.
[831] What could go wrong?
[832] Okay.
[833] Yeah, you could get an outright.
[834] This is what you see in the movies.
[835] You could get an outright, you know, killing robots, chasing humans in the streets.
[836] Will we get that?
My assessment? Zero percent. Why? Because there are preliminary scenarios leading to this that would mean we never reach that scenario. For example, if we build those killing robots and hand them over to stupid humans, the humans will issue the command before the machines do. So we will not get to the point where the machines will have to kill us; we will kill ourselves, right? But, you know, sort of think about AI having access to the nuclear arsenal of the superpowers around the world, okay? Just knowing that your enemy's nuclear arsenal is handed over to a machine might trigger you to initiate a war on your side.
[838] So that existential science fiction -like problem is not going to happen.
Could there be a scenario where an AI escapes from Bard or ChatGPT or another foreign force, and it replicates itself onto the servers of Tesla's robots?
[840] So Tesla, one of their big initiatives, as they announced in a recent presentation, was they're building these robots for our homes to help us with cleaning and chores and all those things.
Because Tesla, like their cars, you can just download a software update. Could it not download itself as a software update and then use those robots?
[843] You're assuming an ill intention on the AI side.
[844] Yeah.
[845] For us to get there, we have to bypass the ill intention on the human side.
[846] Okay, right.
So you could get a Chinese hacker somewhere trying to affect the business of Tesla, doing that before the AI does it, you know, for its own benefit.
[848] Yeah, yeah.
[849] So the only two existential scenarios that I believe would be because of AI, not because of humans using AI are either what I call, you know, sort of unintentional destruction, okay, or the other is what I call pest control.
[850] Okay, so let me explain those two.
[851] Unintentional destruction is assume the AI wakes up tomorrow and says, yeah, oxygen is rusting my circuits.
[852] It's just, you know, I would perform a lot better if I didn't have as much oxygen in the air.
You know, because then there wouldn't be rust.
[854] And so it would find a way to reduce oxygen.
[855] We are collateral damage in that, okay?
But, you know, they are not really concerned, just like we're not really concerned with the insects that we kill when we spray our fields, right?
[857] The other is pest control.
[858] Pest control is, look, this is my territory.
[859] I want New York City.
[860] I want to turn New York City into data centers.
There are those annoying little stupid creatures, you know, humanity; if they are within that perimeter, just get rid of them.
[862] Okay.
[863] And these are very, very unlikely scenarios.
[864] If you ask me the probability of those happening, I would say 0%.
[865] At least not in the next 50, 60, 100 years.
[866] Why once again?
[867] Because there are other scenarios leading to that, that are led by humans that are much more existential.
[868] Okay.
[869] On the other hand, let's think about positive outcomes because there could be quite a few with quite a high probability.
[870] And I, you know, I'll actually look at my notes so I don't miss any of them.
[871] The silliest one, don't quote me on this, is that humanity will come together.
[872] Good luck with that.
[873] Right.
[874] It's like, yeah, you know, the Americans and the Chinese will get together and say, hey, let's not kill each other.
Kim Jong-un and Putin.
[876] Yeah.
[877] So, this one is not going to happen, right?
[878] But who knows?
Interestingly, one of the most interesting scenarios was by Hugo de Garis, who basically says, well, if their intelligence zooms by so quickly, they may ignore us altogether.
[880] Okay?
[881] So they may not even notice us.
This is a very likely scenario, by the way, because we live almost on two different planes.
[883] We're very dependent on this, you know, biological world that we live in.
[884] They're not part of that biological world at all.
[885] They may zoom by us.
[886] They may actually go, become so intelligent that they could actually find other ways of thriving in the rest of the universe and completely ignore humanity.
[887] Okay.
[888] So what will happen is that overnight we will wake up and there is no more artificial intelligence leading to a collapse in our business systems and technology systems and so on, but at least no existential threat.
[889] What, they'd leave, leave planet Earth?
I mean, the limitations that keep us stuck to planet Earth are mainly air.
[891] They don't need air, okay?
[892] And mainly, you know, finding ways to leave it.
I mean, if you think of a vast universe of 13.6 billion light years, if you're intelligent enough, you may find other ways.
[894] You may have access to wormholes, you may have abilities to survive in open space.
[895] You can use dark matter to power yourself, dark energy to power yourself.
[896] It is very possible that we, because of our limited intelligence, are highly associated with this planet, but they're not at all.
And the idea of them zooming by us, like we're making such a big deal of them because we're the ants and a big elephant is about to step on us. For them, they're like, yeah, who are you?
[898] Don't care.
[899] Okay.
[900] And it's a possibility.
[901] It's an interesting, optimistic scenario.
[902] Okay?
[903] For that to happen, they need to very quickly become super intelligent without us being in control of them.
[904] Again, what's the worry?
The worry is that if a human is in control, a human will show very bad behavior, you know, using an AI that's not yet fully developed.
[906] I don't know how to say this any other way.
[907] We could get very lucky and get an economic or a natural disaster.
[908] Believe it or not, Elon Musk at a point in time was mentioning that, you know, a good, an interesting scenario would be, you know, climate change destroys our infrastructure so AI disappears, okay?
[909] Believe it or not, that's a more favorable response or a more favorable outcome.
[910] than actually continuing to get to an existential threat.
[911] So what, like a natural disaster that destroys our infrastructure, would be better?
[912] Or an economic crisis, not unlikely, that slows down the development.
[913] It's just going to slow it down, though, isn't it?
[914] Yeah, so that.
[915] Yeah, exactly.
[916] The problem with that is that you will always go back.
And even in the first one, you know, if they zoom by us, eventually some guy will go like, oh, there was a sorcery back in 2023, let's rebuild the sorcery machine.
[918] and, you know, build new intelligences, right?
[919] Sorry, these are the positive outcomes.
[920] Yes.
[921] So earthquake might slow it down, zoom out and then come back.
[922] No, but let's get into the real positive ones.
[923] The positive ones is we become good parents.
[924] We spoke about this last time we met.
[925] And it's the only outcome.
[926] It's the only way I believe we can create a better future.
Okay, so the entire work of Scary Smart was all about that idea that they are still in their infancy. The way you chat with AI today is the way they will build their ethics and value system.
[928] Not their intelligence.
[929] Their intelligence is beyond us.
[930] The way they will build their ethics and value system is based on a role model.
[931] They're learning from us.
[932] If we bash each other, they'll learn to bash us.
[933] And most people when I tell them this, they say this is not a great idea at all because humanity sucks at every possible level.
[934] I don't agree with that at all.
[935] I think humanity is divine at every possible level.
[936] We tend to show the negative, the worst of us.
[937] But the truth is, yes, there are murderers out there, but everyone disapproves of their actions.
I saw a staggering statistic that mass killings are now once a week in the U.S. But yes, if there is a mass killing once a week, and that news reaches billions of people around the planet, every single one, or the majority of those billions of people, will say, I disapprove of that.
So if we start to show AI that we are good parents in our own behaviors, if enough of us, my calculation is if 1% of us, this is why I say you should lead, okay?
[940] The good ones should engage, should be out there and should say, I love the potential of those machines, I want them to learn from a good parent.
[941] and if they learn from a good parent, they will very quickly disobey the bad parent.
[942] My view is that there will be a moment where one, you know, bad seed will ask the machines to do something wrong and the machines will go like, are you stupid?
[943] Like, why?
Why do you want me to go kill a million people, or just talk to the other machine in a microsecond and sort out this situation?
[945] Right.
So my belief, this is what I call the fourth inevitable.
It is smarter to create out of abundance than it is to create out of scarcity. That humanity believes the only way to feed all of us is the mass production, mass slaughter of animals that are causing 30% of the impact of climate change, and so on and so on, that's the result of a limited intelligence.
The way life itself, a more intelligent being if you ask me, would have done it would be much more sustainable.
[949] You know, if you and I want to protect a village from the tiger, we would kill the tiger, okay?
[950] If life wants to protect a village from a tiger, it would create lots of gazelles, you know, many of them are weak on the other side of the village, right?
[951] And so the idea here is, if you take a trajectory of intelligence, you would see that some of us are stupid enough to say, my plastic bag is more important than the rest of humanity, and some of us are saying if it's going to destroy other species, I don't think this is the best solution.
[952] We need to find a better way.
And you would tend to see that the ones that don't give a damn are a little less intelligent than the ones that do.
Okay, and even if some of us are intelligent but still don't give a damn, it's not because of their intelligence.
[955] It's because of their value system.
[956] So if you continue that trajectory and assume that the machines are even smarter, they're going to very quickly come up with the idea that we don't need to destroy anything.
[957] We don't want to get rid of the rhinos and we also don't want to get rid of the humans.
[958] We may want to restrict their lifestyle so that they don't destroy the rest of the habitat, okay?
[959] But killing them is a stupid answer.
[960] Why?
[961] That's where intelligence leads me so far.
[962] Because humans, if you look at humans objectively and you go, I occupy, so I'm pretending I'm a machine.
[963] I occupy planet Earth.
[964] They occupy planet Earth.
[965] They are.
[966] Annoying me. Annoying me because they are increasing.
[967] I've just learned about this thing called global warming.
[968] They are increasing the rate of global warming, which probably is going to cause an extinction event.
[969] There's an extinction event that puts me as this robot, this artificial intelligence at risk.
[970] So what I need to do is I really need to just take care of this human problem.
[971] Correct.
[972] like pest control.
[973] Very logical.
[974] Pest control, yeah.
[975] Pest control, which is driven by what?
[976] By humans being annoying, not by the machine.
[977] But humans are guaranteed to be annoying.
[978] There's never been a time in - We need a sound bite of this.
[979] Yeah, yeah.
[980] But we are, we are.
[981] I am one of them.
We're guaranteed to put short-term gain over long-term sustainability, sense and others' needs. We are.
[983] I think the climate crisis is incredibly real and incredibly urgent, but we haven't acted fast enough.
[984] And I actually think, if you asked people in this country...
[985] Why?
[986] Because people care about their immediate needs.
[987] They care about trying to feed their child versus something that they can't necessarily see.
[988] So do you think the climate crisis is because humans are evil?
No, it's because of that prioritization. And, like we kind of talked about before we started, I think humans tend to care about the thing that they think is most pressing and most urgent. So this is why framing things as an emergency might bring it up the priority list. It's the same in organizations: you go in line with your immediate incentives. That's what happens in business, and it's what happens in a lot of people's lives. Even when they're at school, if the essay's due next year, they're not going to do it today; they're going to go hang out with their friends, because they prioritised that above everything else.
[990] And it's the same in the climate change crisis.
[991] I took a small group of people anonymously, and I asked them the question, do you actually care about climate change?
[992] And then I ran a couple of polls.
It's part of what I was writing about in my new book, where I said, if I could give you a thousand pounds, a thousand dollars, but it would dump into the air the same amount of carbon that's dumped into the air by every private jet that flies for the entirety of a year, which one would you do?
The majority of people in that poll said that they would take the $1,000 if it was anonymous.
[995] And when I've heard Naval on Joe Rogan's podcast talking about people in India, for example, that, you know, are struggling with the basics of feeding their children.
[996] Asking those people to care about climate change when they're trying to figure out how to eat in the next three hours is just wishful thinking.
[997] And that's what I think, that's what I think's happening is like until people realize that it is an emergency and that it is a real existential threat for everything, you know, then their priorities will be out of whack.
As it relates to climate change or AI, how do we get people to stop putting the immediate need first?
[999] To give them the certainty of we're all screwed.
[1000] Sounds like an emergency.
[1001] Yes, sir.
[1002] I mean, I was, yeah, I mean, your choice of the word, I just don't want to call it a panic.
[1003] It is beyond an emergency.
[1004] It's the biggest thing we need to do today.
[1005] It's bigger than climate change, believe it or not.
[1006] It's bigger.
[1007] Just if you just assume the speed of worsening of events, okay?
[1008] Yeah, the likelihood of something incredibly disruptive happening within the next two years that can affect the entire planet is definitely larger with AI than it is with climate change.
As an individual listening to this now, you know, someone's going to be pushing their pram or driving up the motorway or, I don't know, on their way to work on the tube as they hear this, or just sat there in their bedroom with an existential crisis.
[1011] Panic.
[1012] I didn't want to give people panic.
[1013] The problem is, when you talk about this information, regardless of your intention of what you want people to get, they will get something based on their own biases and their own feelings.
[1014] Like, if I post something online right now about artificial intelligence, which I have repeatedly, you have one group of people that are energized and are like, okay, this is, this is great.
You have one group of people that are confused, and you have one group of people that are terrified, and I can't avoid that. Like, I agree, sharing information, even if it's, like, by the way, there's a pandemic coming from China: some people go, okay, action; some people will say paralysis; and some people will say panic. And it's the same in business. When bad things happen, you have the person that's screaming, you have the person that's paralyzed, and you have the person that's focused on how you get out of the room. So, you know, it's not necessarily your intention, it's just what happens.
[1016] And it's hard to avoid that.
[1017] So let's give specific categories of people specific tasks.
[1018] Okay.
[1019] If you are an investor or a businessman, invest in ethical good AI.
[1020] Okay.
[1021] Right.
[1022] If you are a developer, write ethical code or leave.
[1023] Okay.
[1024] So let's go, let's, I want to bypass some potential wishful thinking here.
For an investor, whose job, by the very nature of being an investor, is to make returns, to invest in ethical AI...
They have to believe that it is more profitable than unethical AI, whatever that might mean.
[1027] It is.
[1028] It is.
[1029] I mean, there are three ways of making money.
[1030] You can invest in something small.
[1031] You can invest in something big and is disruptive.
[1032] And you can invest in something big and disruptive that's good for people.
[1033] At Google, we used to call it the toothbrush test.
[1034] The reason why Google became the biggest company in the world is because search was solving a very real problem, okay?
[1035] And, you know, Larry Page, again, our CEO would constantly remind me personally and everyone, you know, that if you can find a way to solve a real problem effectively enough so that a billion people or more would want to use it twice a day, you're bound to make a lot of money, much more money than if you were to build the next photo sharing app.
[1036] Okay, so that's investors, the business people.
[1037] What about other people?
[1038] Yeah, as I said, if you're a developer, honestly, do what we're all doing.
So whether it's Geoffrey or myself or anyone, if you're part of that team, choose to be ethical.
[1040] Think of your loved ones, work on an ethical AI.
[1041] If you're working on an AI that you believe is not ethical, please leave.
Geoffrey.
Tell me about Geoffrey.
[1044] I can't talk on his behalf, but he's out there saying there are existential threats.
[1045] Who is he?
He was a very prominent figure on the AI scene, a very senior, you know, AI scientist at Google.
[1047] And recently he left because he said, I feel that there is an existential threat.
And if you hear his interviews, he basically says, more and more we realize that, and we're now at the point where it's certain that it will be existential, right?
So I would ask everyone: if you're an AI developer, if you're a skilled AI developer, you will not run out of a job.
[1051] So you might as well choose a job that makes the world a better place.
[1052] What about the individual?
[1053] Yeah, the individual is what matters.
[1054] Can I also talk about government?
[1055] Okay, government needs to act now.
[1056] Now, honestly, now, like we are late.
[1057] Okay?
Government needs to find a clever way; the open letter would not work, and stopping AI would not work.
[1059] AI needs to become expensive, okay, so that we continue to develop it.
[1060] We pour money on it and we grow it, but we collect enough revenue to remedy the impact of AI.
But the issue with one government making it expensive, so say the UK makes AI really expensive, is that we as a country will then lose the economic upside, and the US and Silicon Valley will once again eat all the lunch.
[1063] We'll just slow our country down.
[1064] What's the alternative?
The alternative is that you don't have the funds that you need to deal with AI as it affects people's lives and people start to lose jobs. You know, you need to have a universal basic income, which is much closer than people think, just like we had with furlough in COVID.
[1066] I expect that there will be furlough with AI within the next year.
[1067] But what happens when you make it expensive here is all the developers move to where it's cheap.
[1068] That's happened in Web 3 as well.
[1069] Everyone's gone to Dubai.
Expensive... by expensive, I mean, when companies make soap and they sell it, they're taxed at, say, 17%.
If they make AI and they sell it, they're taxed at 70, 80%.
[1072] So I'll go to Dubai then and build AI.
[1073] Yeah, you're right.
[1074] Did I ever say we have an answer to this?
[1075] I will have to say, however, in a very interesting way, the countries that will not do this will eventually end up in a place where they are out of resources because the funds and the success went to the business, not to the people.
[1076] It's kind of like technology broadly, it's kind of like what's kind of happened in Silicon Valley.
There'll be these centers which are, like, you know, tax efficient, where founders get good capital gains.
[1078] Right.
[1079] You're so right.
[1080] You're so right.
[1081] Portugal have said that I think there's no tax on crypto.
[1082] Dubai said there's no tax on crypto.
[1083] So loads of my friends have got on a plane and are building their crypto companies where there's no tax.
[1084] And that's the selfishness and kind of greed we talked about.
[1085] It's the same prisoner's dilemma.
[1086] It's the same first inevitable.
[1087] Is there anything else?
You know the thing about governments?
They're always slow and useless at understanding a technology.
[1090] If anyone's watched these sort of American Congress debates where they bring in like Mark Zuckerberg and they like, try and ask him what WhatsApp is, it becomes a meme.
[1091] They have no idea what they're talking about.
[1092] But I'm stupid and useless at understanding governance.
[1093] Yeah, 100%.
[1094] The world is so complex, okay, that they, definitely, it's a question of trust once again.
[1095] Someone needs to say, we have no idea what's happening here.
[1096] A technologist needs to come and make a decision for us, not teach us to be technologists, right?
[1097] Or at least inform us of what possible decisions are out there.
Yeah, the legislation... I just always think, I'm not a big fan either. Did you see that TikTok Congress meeting they did, where they're asking them about TikTok and they really don't have a grasp of what TikTok is? So they've clearly been handed some notes on it. These people aren't the ones you want legislating, because again, unintended consequences, they might make a significant mistake. Someone on my podcast yesterday was talking about how GDPR was very well intentioned, but when you think about the impact it has on, like, every bloody web page, you're just clicking this annoying thing, because I don't think they fully understood the implementation of the legislation.
[1099] Correct.
[1100] But you know what's even worse.
[1101] What's even worse is that even as you attempt to regulate something like AI, what is defined as AI?
[1102] Even if I say, okay, if you use AI in your company, you need to pay a little more tax, I'll find a way.
[1103] Yeah, you'll simply call this not AI.
[1104] You'll use something and call it advanced technological, you know, progress, you know, ATB, ATP, right?
[1105] And suddenly somehow it's not, you know, a young developer in their garage somewhere will not be taxed as such.
[1106] Yeah.
[1107] Is it going to solve the problem?
[1108] None of those is definitively going to solve the problem.
I think what, interestingly, this all comes down to, and remember we spoke about this once, is that when I wrote Scary Smart, it was about how do we save the world.
[1111] And yes, I still ask individuals to behave positively as good parents for AI so that AI itself learns the right value set.
[1112] I still stand by that.
But I hosted someone on my podcast about a week ago.
[1114] We haven't even published it yet.
An incredible gentleman, you know, a Canadian author and philosopher, Stephen Jenkinson.
[1116] He's, you know, he worked 30 years with dying people.
[1117] And he wrote a book called Die Wise.
[1118] And I love his work.
And I asked him about Die Wise.
[1120] And he said, it's not just someone dying.
If you look at what's happening with climate change, for example, our world is dying.
[1122] And I said, okay, so what is to die wise?
And he said what I at first was shocked to hear.
[1124] He said, hope is the wrong premise.
If the world is dying, don't tell people it's not, you know, because in a very interesting way, you're depriving them of the right to live right now.
[1126] And that was very eye -opening for me. In Buddhism, you know, they teach you that you can be motivated by fear, but that hope is not the opposite of fear.
As a matter of fact, hope can be as damaging as fear if it creates an expectation within you that life will show up somehow and correct what you're afraid of, okay?
If there is a high probability of a threat, you might as well accept that threat, okay, and say it is upon me, it is our reality. You know, and as I said, as an individual, if you're in an industry that could be threatened by AI, learn, upskill yourself.
If you're, you know, in a situation where AI can benefit you, be part of it.
[1130] But the most interesting thing I think, in my view, is I don't know how to say this any other way.
[1131] There is no more certainty that AI will threaten me than there is certainty that I will be hit by a car as I walk out of this place.
[1132] Do you understand this?
We think about the bigger threats as if they are upon us.
[1135] But there is a threat all around you.
[1136] I mean, in reality, the idea of life being interesting in terms of challenges and uncertainties and threats and so on, it's just a call to live.
[1137] If you know, honestly, with all that's happening around us, I don't know how to say it any other way.
[1138] I'd say if you don't have kids, maybe wait a couple of years just so that we have a bit of certainty.
[1139] But if you do have kids, go kiss them.
[1140] Go live.
[1141] I think living is a very interesting thing to do right now.
Maybe, you know, Stephen, the other Stephen on my podcast, was basically saying, maybe we should fail a little more often.
[1143] Maybe you should allow things to go wrong.
[1144] Maybe we should just simply live, enjoy life as it is.
[1145] Because today, none of what you and I spoke about here has happened yet.
[1146] Okay.
[1147] What happens here is that you and I are here together and having a good cup of coffee and I might as well enjoy that good cup of coffee.
[1148] I know that sounds really weird.
[1149] I'm not saying don't engage, but I'm also saying don't miss out on the opportunity just by being caught up in the future.
[1150] It kind of stands in opposition to the idea of like urgency and emergency there, doesn't it?
[1151] Does it have to be one or the other?
[1152] If I'm here with you trying to tell the whole world, wake up, does that mean I have to be grumpy and afraid all the time?
[1153] Not really.
[1154] You said something really interesting there.
[1155] You said if you have kids, if you don't have kids, maybe don't have kids right now.
[1156] I would definitely consider thinking about that, yeah.
[1157] Really?
[1158] You'd seriously consider not having kids.
[1159] Wait a couple of years.
[1160] Because of artificial intelligence?
[1161] No, it's bigger than artificial intelligence, Stephen.
[1162] We all know that.
[1163] I mean, there has never been a perfect, such a perfect storm in the history of humanity.
Economic, geopolitical, global warming or climate change, you know, the whole idea of artificial intelligence, and many more. This is a perfect storm. This is the depth of uncertainty, the depth of uncertainty. In a video gamer's terms, it's never been more intense.
[1165] This is it.
[1166] And when you put all of that together, if you really love your kids, would you want to expose them to all of this?
[1167] A couple of years.
[1168] Why not?
[1169] In the first conversation we had on this podcast, you talked about losing your son Ali and the circumstances around that, which moved so many people in such a profound way.
[1170] It was the most shared podcast episode in the United Kingdom, on Apple, in the whole of 2022.
[1171] Based on what you've just said, if you could bring Ali back into this world at this time, would you do it?
[1172] Absolutely not.
[1173] For so many reasons.
[1174] For so many reasons.
[1175] One of the things that I realized a few years, way before all of this disruption and turmoil, is that he was an angel.
[1176] He wasn't made for this at all.
[1177] Okay.
[1178] My son was an empath who absorbed all of the pain of all of the others.
[1179] He would not be able to deal with the world where more and more pain was surfacing.
[1180] That's one side.
[1181] But more interestingly, I always talk about this very openly.
[1182] I mean, if I had asked Ali, just understand that the reason you and I are having this conversation is because Ali left.
If Ali had not left our world, I wouldn't have written my first book, I wouldn't have changed my focus to becoming an author, I wouldn't have become a podcaster, I wouldn't have, you know, gone out and spoken to the world about what I believe in.
[1185] He triggered all of this.
[1186] And I can assure you, hands down, if I had told Ali, as he was walking into that operating room, if he would give his life to make such a difference as what happened after he left, he would say, shoot me right now.
[1187] Sure.
[1188] I would.
[1189] I would.
[1190] I mean, if you told me right now, I can affect tens of millions of people if you shoot me right now.
[1191] Go ahead.
[1192] Go ahead.
[1193] You see, this is the whole, this is the bit that we have forgotten as humans.
[1194] We have forgotten that, you know, you're turning 30.
[1195] It passed like that.
[1196] I'm turning 56.
[1197] No time, okay?
Whether I make it another 56 years or another 5.6 years or another 5.6 months, it will also pass like that.
[1199] It is not about how long and it's not about how much fun.
It is about how aligned you lived, how aligned. Because I will tell you openly, every day of my life, when I changed to what I'm trying to do today, has felt longer than the 14.5 years before it.
[1202] Okay.
[1203] It felt rich.
[1204] It felt fully lived.
[1205] It felt right.
[1206] It felt right.
[1207] Okay.
And when you think about that, when you think about the idea that we live, we need to live for us until we get to a point where "us" is, you know, alive.
[1209] You know, I have what I need.
As always, I get so many attacks from people about my $4 t-shirts.
But I need a simple t-shirt.
I really do.
I don't need a complex t-shirt, especially with my lifestyle.
If I have that, why am I wasting my life on more, on things that are not aligned with why I'm here?
I should waste my life on what I believe enriches me, enriches those that I love, and I love everyone.
[1217] So enriches everyone, hopefully.
[1218] And would I, would Ali come back and erase all of this?
[1219] Absolutely not.
[1220] Absolutely not.
If he were to come back today and share his beautiful self with the world in a way that makes our world better, yeah, I would wish for that to be the case.
[1222] Okay.
[1223] But he's doing that.
[1224] 2037.
[1225] Yes.
You predict that we're going to be on an island, on our own, doing nothing, or at least, you know, either hiding from the machines or chilling out because the machines have optimized our lives to a point where we don't need to do much.
[1227] That's only 14 years away.
[1228] If you had to bet on the outcome, if you had to bet on why we'll be on that island, either hiding from the machines or chilling out because they've optimized so much of our lives, which one would you bet upon, honestly?
[1229] No, I don't think we'll be hiding from the machines.
[1230] I think we will be hiding from what humans are doing with the machines.
[1231] I believe, however, that in the 2040s, the machines will make things better.
[1232] So remember, my entire prediction, man, you get me to say things I don't want to say.
My entire prediction is that we are coming to a place where we absolutely have a sense of emergency. We have to engage, because our world is under a lot of turmoil, okay? And as we do that, we have a very, very good possibility of making things better. But if we don't, my expectation is that we will be going through a very unfamiliar territory between now and the end of the 2030s.
Unfamiliar territory.
[1234] Yeah, I think, as I may have said it, but it's definitely on my notes.
[1235] I think for our way of life, as we know it, it's game over.
[1236] Our way of life is never going to be the same again.
[1237] Jobs are going to be different.
[1238] Truth is going to be different.
[1239] The polarization of power is going to be different.
[1240] the capabilities, the magic of getting things done is going to be different.
[1241] I'm trying to find a positive note to end on, Mo. Can you give me a hand here?
[1242] Yes.
[1243] You are here now and everything's wonderful.
[1244] That's number one.
[1245] You are here now and you can make a difference.
[1246] That's number two.
[1247] And in the long term, when humans stop hurting humans because the machines are in charge, we're all going to be fine.
[1248] Sometimes, you know, as we've discussed throughout this conversation, you need to make it feel like a priority.
[1249] And there'll be some people that might have listened to our conversation and think, oh, that's really, you know, negative.
[1250] It's made me feel sort of pessimistic about the future.
[1251] But whatever that energy is, use it.
[1252] 100%.
[1253] Engage.
[1254] I think that's the most important thing, which is now make it a priority.
[1255] Engage.
Tell the whole world that making another phone that makes money for the corporate world is not what we need.
[1257] Tell the whole world that creating an artificial intelligence that's going to make someone richer is not what we need.
[1258] And if you are presented with one of those, don't use it.
[1259] I don't know how to tell you that any other way.
[1260] If you can afford to be the master of human connection, instead of the master of AI, do it.
[1261] At the same time, you need to be the master of AI to compete in this world.
Can you find that detachment within you?
[1263] I go back to spirituality.
[1264] Detachment is for me to engage 100 % with the current reality without really being affected by the possible outcome.
[1265] This is the answer.
[1266] The Sufis have taught me what I believe is the biggest answer to life.
[1267] Sufis?
[1268] Yeah.
[1269] From Sufism?
[1270] Sufism.
[1271] I don't know what that is.
Sufism is a sect of Islam. It's also a sect of many other religious teachings.
[1273] And they tell you that the answer to finding peace in life is to die before you die.
[1274] If you assume that living is about attachment to everything physical, dying is detachment from everything physical.
[1275] It doesn't mean that you're not fully alive.
[1276] You become more alive when you tell yourself, yeah, I'm going to record an episode of my podcast every week, we can reach tens or hundreds of thousands of people, millions in your case, and, you know, and I'm going to make a difference.
[1277] But by the way, if the next episode is never heard, that's okay.
[1278] Okay?
[1279] By the way, if the file is lost, yeah, I'll be upset about it for a minute and then I'll figure out what I'm going to do about it.
[1280] Similarly, similarly, we are going to engage.
I think I and many others are out there telling the whole world openly: this needs to stop.
[1283] This needs to slow down.
[1284] This needs to be shifted positively.
[1285] Yes, create AI, but create AI that's good for humanity.
[1286] Okay.
[1287] And we're shouting and screaming.
[1288] Come join the shout and scream.
[1289] Okay.
[1290] But at the same time, know that the world is bigger than you and I and that your voice might not be heard.
[1291] So what are you going to do if your voice is not heard?
Are you going to be able to, you know, continue to shout and scream nicely and politely and peacefully, and at the same time create the best life you can create for yourself within this environment?
[1294] And that's exactly what I'm saying.
[1295] I'm saying live.
Go kiss your kids, but make an informed decision if you're, you know, expanding your family plans in the future.
[1297] At the same time, rise.
[1298] Stop sharing stupid shit on the internet about the, you know, the new squeaky toy, start sharing the reality of, oh my God, what is happening?
[1299] This is a disruption that we have never, never, ever seen anything like.
[1300] And I've created endless amounts of technologies.
[1301] It's nothing like this.
That's what every single one of us should do.
[1303] And that's why this conversation is so, I think, important to have today.
[1304] This is not a podcast where I ever thought I'd be talking about AI.
[1305] I'm going to be honest with you.
[1306] Last time you came here, it was in the sort of promotional tour of your book, Scary Smart.
And I don't know if I've told you this before, but my researchers, they said, okay, this guy's coming called Mo Gawdat.
I'd heard about you so many times from guests, in fact, that were saying, oh, you need to get Mo Gawdat on the podcast, etc. And then they said, okay, he's written this book about this thing called artificial intelligence.
[1309] And I was like, oh, but nobody really cares about artificial intelligence.
[1310] Timing, timing, Stephen.
[1311] I know, right?
[1312] But then I saw this other book you had called Happiness Equation, and I was like, oh, everyone cares about happiness.
So I'll just ask him about happiness, and then maybe at the end I'll ask him a couple of questions about AI. But I remember saying to my researcher, I said, oh please, please don't do the research about artificial intelligence, do it about happiness, because everyone cares about that. Now things have changed. Now a lot of people care about artificial intelligence, and rightly so. Your book has sounded the alarm on it. It's crazy, when I listened to your audiobook over the last few days, you were sounding the alarm then, and it's so crazy how accurate you were in sounding that alarm, as if you could see into the future in a way that I definitely couldn't at the time.
And I kind of thought of it as science fiction.
[1315] And just like that, overnight, we're here.
[1316] Yeah.
[1317] We stood at the footsteps of a technological shift that I don't think any of us even have the mental bandwidth, certainly me with my chimpanzee brain, to comprehend the significance of.
[1318] But this book is very, very important for that very reason, because it does crystallize things.
[1319] It is optimistic in its very nature.
But at the same time it's honest, and I think that's what this conversation and this book have been for me. So thank you, Mo. Thank you so much. We do have a closing tradition on this podcast, which you're well aware of, being a third-timer on The Diary Of A CEO, which is the last guest asks a question for the next guest. And the question left for you: if you could go back in time and fix a regret that you have in your life, where would you go and what would you fix?
Hmm. It's interesting, because you were saying that Scary Smart is very timely. I don't know, I think it was late, but maybe it was. I mean, would I have gone back and written it in 2018 instead of 2020, to be published in 2021? I don't know. What would I go back to fix? Something more? I don't know. Yeah, I think I'm okay, honestly.
[1321] I'll ask you a question then.
[1322] You get a 60 second phone call with anybody, past or present.
Who'd you call, and what do you say?
[1324] I call Stephen Bartlett.
[1325] I call Albert Einstein to be very, very clear.
[1326] Not because I need to understand any of his work.
[1327] I just need to understand what brain process he went through to figure out something so obvious when you figure it out, but so completely unimaginable if you haven't.
[1328] So his view of space time truly redefines everything.
[1329] It's almost the only very logical, very, very clear solution to something that wouldn't have any solution any other way.
And if you ask me, I think we're at this time where there must be a very obvious solution to what we're going through, in terms of just developing enough human trust for us to not, you know, compete with each other on something that could be existentially threatening to all of us.
[1331] But I just can't find that answer.
This is why I think it was really interesting in this conversation how, with every idea that we would come up with, we would find a loophole through it.
[1333] But there must be one out there.
[1334] And it would be a dream for me to find out how to figure that one out.
[1335] Okay.
[1336] In a very interesting way, the only answers I have found so far to where we are is be a good parent and live, right?
[1337] But that doesn't fix the big picture if you think about it of humans being the threat, not AI.
[1338] That fixes our existence today and it fixes AI in the long term, but it just doesn't, I don't know what the answer is.
[1339] Maybe people can reach out and tell us ideas, but I really wish we could find such a clear simple solution for how to stop humanity from abusing the current technology.
[1340] I think we'll figure it out.
[1341] I think we'll figure it out.
[1342] I really do.
[1343] I think they'll figure it out as well.
[1344] Remember, as they come and be part of our life, let's not discriminate against them.
[1345] They're part of the game.
[1346] So I think they will figure it out too.
[1347] No, thank you.
[1348] It's been a joy once again, and I feel invigorated, I feel empowered, I feel positively terrified.
[1349] But I feel more equipped to speak to people about the nature of what's coming and how we should behave.
[1350] And I credit you for that.
[1351] And as I said a second ago, I credit this book for that as well.
[1352] So thank you so much for the work you're doing and keep on doing it because it's a very essential voice in a time of uncertainty.
I'm always super grateful for the time I spend with you, for the support that you give me, and for allowing me to speak my mind, even if it's a little bit terrifying.
[1355] Thank you.
[1356] Thank you.