Armchair Expert with Dax Shepard
[0] Welcome, welcome, welcome to Armchair Expert, Experts on Expert.
[1] I'm Dan Riggle and I'm joined by Mrs. Mouse.
[2] We had an armchair, this is an Easter egg.
[3] We had an armchair anonymous yesterday.
[4] We recorded one and someone said, it's Dan and Lily.
[5] Daniel and Lily, man. Yeah, it was really cute.
[6] Today we have Mustafa Suleyman, and this is, we've been wanting really, really bad to get a premium expert on AI.
[7] Yeah, because it's so relevant.
[8] And we talk so much about it, yet we haven't had too many experts on to really tell us where we're at in this whole crazy experience.
[9] Mustafa is an AI researcher and an entrepreneur.
[10] He is the co-founder of DeepMind and Inflection AI.
[11] He's a powerhouse in this field.
[12] He's a big deal.
[13] Yes, they have Pi as their AI version.
[14] He has a new book out called The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma.
[15] So he's got lots of recommendations on how we keep this from devouring us, and they're all very, very interesting.
[16] It was a really interesting conversation, and I think necessary at this point in time.
[17] So please enjoy Mustafa Suleyman.
[18] Wondery Plus subscribers can listen to Armchair Expert early and ad-free right now.
[19] Join Wondery Plus in the Wondery app or on Apple Podcasts.
[20] Or you can listen for free wherever you get your podcasts.
[21] Oh, it's chilling.
[22] I can't imagine.
[23] You don't live here.
[24] I live in Palo Alto.
[25] Seems on the nose.
[26] Literally I was in San Francisco at the weekend visiting this very interesting antique shop full of strange curiosities, like really interesting collection.
[27] Within like one second I started chatting to the person who worked behind the counter.
[28] She was like, so where did you live then?
[29] I was like, Palo Alto.
[30] She's like, I knew it!
[31] Oh, it's so obvious!
[32] Oh my God, that's funny.
[33] I want to go to a fun antique shop.
[34] I love...
[35] I love that he said full of curiosity.
[36] Yeah.
[37] I think the place is called a cottage of curiosity.
[38] Oh.
[39] Isn't that a cool name?
[40] They should sell Altoids there because, of course, their saying is "curiously strong."
[41] Oh, it is?
[42] Yeah.
[43] Isn't it the greatest slogan for a product?
[44] Yeah, I didn't know.
[45] I don't know that.
[46] I also love brands that have never changed.
[47] And I feel like Altoids, it's the same tin as it was in the 50s.
[48] Not that I was around, but it feels like it was...
[49] It has a retro vibe.
[50] Unless it is just doing the retro thing and I'm being tricked and that would be really embarrassing.
[51] Certainly possible, but you're right, it feels fun to consume an institution.
[52] Yeah, be part of the history with every mouthful.
[53] Look at this.
[54] It's an advert for Altoids.
[55] Yeah, I'll go further.
[56] I mean, chips are always a stretch.
[57] But there's even, there's like food products where it's like, no, no, this is an institution.
[58] Like vanilla wafers, yes.
[59] Yeah, my grandparents consume them.
[60] They fed them to us.
[61] I can't say that I'm passing it on.
[62] Do you have any children?
[63] I do not.
[64] One day, you guys do?
[65] I have two.
[66] Monica has frozen eggs.
[67] I have a couple eggs.
[68] That's where we're at.
[69] No kids for me yet.
[70] You have a, um, Jesus, I'm a mess.
[71] Throw it all on the floor, too.
[72] I mean, literally just toss it all out the window.
[73] And I've just realized I have a little bit of debris on my phone and I hate that.
[74] Oh, like one of my minces does.
[75] Well, no, you don't need to waste Mustafa's time.
[76] Well, I'll just do it on my shirt.
[77] I think the quicker you get it scratched, the more accustomed you get to it just being like junky, and then it's like, fine.
[78] That's why I don't do cases, I'm like, screw it.
[79] Oh, wow.
[80] Yeah, it's fine.
[81] I mean, I've got quite a few little scratches, so don't get over-excited, but, like, I just learn to live with them.
[82] Yeah.
[83] How frequently do you shatter a screen?
[84] Once every two years, I tend to upgrade every other generation.
[85] Okay.
[86] Yeah, it's not too bad, actually.
[87] I'm good at catching.
[88] I've definitely got into that kind of like, right.
[89] They've gotten sturdier over time.
[90] For sure.
[91] They have.
[92] I have kids grabbing them and stuff.
[93] I need a case.
[94] I hate it.
[95] I hate when you put it in your front pocket.
[96] It's got drag now.
[97] It has resistance.
[98] Right.
[99] I like that that's sliding.
[100] I know.
[101] When you first get it, it looks so beautiful.
[102] Sleek and slippery.
[103] Yeah.
[104] Could find its way into that shop of curiosity.
[105] One day.
[106] One day.
[107] Mustafa.
[108] Okay, so you're from England, London, England.
[109] Dad was a Syrian taxi cab driver?
[110] He was a minicab driver.
[111] Yeah.
[112] There are two kinds of taxis in the UK.
[113] There's the black cab, which is the profession.
[114] You have to do the knowledge.
[115] It takes like three years.
[116] They memorize every street.
[117] You have to remember all the routes.
[118] It's insane.
[119] They're actually very, very, very smart cabbies.
[120] Really appropriate for this conversation.
[121] Because here's a task that has already been outsourced to the phone, yet they are still doing it that way.
[122] And I don't know what we say about that.
[123] That's just very rare.
[124] It's a ding, ding, ding.
[125] It's one of those things where the rules matter, and it affects the pace of change.
[126] So that's, I think, a sign of hope because I think we should be collectively making decisions about how quickly we want new technologies to be introduced so that we can decide on the rate of change.
[127] We accidentally, or maybe you engineered this, we stumbled upon almost the greatest example of some legislation or policy that has prevented technology taking over a space.
[128] And it seems to be working.
[129] Yeah, we were in London not too long ago in one of those cabs and it worked out just fine.
[130] They're brilliant.
[131] They do have the advantage, though, of being able to use the bus lanes.
[132] So they can get there faster than a regular car.
[133] So I think that is a structural advantage, which justifies the higher price, because they're also quite a bit more expensive than Uber.
[134] They are.
[135] But they have adopted stuff.
[136] You can order them with your phone now.
[137] Yeah.
[138] So that's an improvement.
[139] Now, I guess what I interpreted by reading that your father was Syrian, I assumed in Syria he drove.
[140] No, so he moved to London in the early 80s.
[141] and settled there.
[142] And so I was born and raised in London.
[143] 84?
[144] That's right.
[145] Yeah, exactly.
[146] Painfully younger than me. It's okay.
[147] Still older than me, so we're doing pretty good.
[148] He's made more money and he's more relevant.
[149] Okay, well, I'm hurtful.
[150] Anyway, so, but in London as an occupation, he did drive.
[151] He drove initially one of those unlicensed minicabs.
[152] They don't have a medallion.
[153] You negotiate a price with the person.
[154] There's no meter.
[155] Exactly.
[156] You negotiate on the spot.
[157] Is it 15 pounds, 20 pounds, 25?
[158] That's too much.
[159] A bit less.
[160] That has gone out of fashion now, whatever.
[161] It's been licensed, so there's much less of that.
[162] You can't sort of just hawk and pick up on the street and so on.
[163] That was his profession for basically all of his life in the UK.
[164] We all love a story.
[165] And for me, that's a great story that his son would end up being a pioneer in AI.
[166] I think that feels like about as good of an immigrant story as you can get for your child.
[167] Right.
[168] It's funny because my mom, who was a nurse in the National Health Service, she was kind of adamant that I would drop out of school at 16 and get a trade.
[169] Become a plumber, become a carpenter.
[170] Everyone's always going to need an electrician.
[171] That was just like long -term reliable.
[172] Obviously, I didn't do that.
[173] But it was definitely that kind of, you know, making money quick is important.
[174] Yeah, exactly.
[175] She's English, though?
[176] She's English.
[177] How did he woo her?
[178] That must have been hard in the 80s?
[179] That is actually a funny story in a way.
[180] So my mom was riding around the world on a double -decker red bus in 1982, I think it was.
[181] And she was at a rest stop in Afghanistan, repairing their tire with this crew of basically travelers.
[182] And in the washrooms at the rest stop, my dad apparently was having a shower next door.
[183] Oh, my goodness.
[184] He's taking a shower at the gas station.
[185] She's in a double -decker.
[186] This is all impossible.
[187] It's like a Dr. Seuss story.
[188] It's completely crazy.
[189] So he comes out in his towel.
[190] You know, they have a little exchange.
[191] He's attractive, I'm guessing.
[192] I have no idea.
[193] He must.
[194] It's sounding like you.
[195] Well, you're handsome as hell.
[196] So I'm imagining he was a looker.
[197] So then he's standing there in his towel, and of course they don't have a common language.
[198] He doesn't speak any English.
[199] She doesn't speak any Arabic.
[200] They have a bit of like schoolboy, schoolgirl French.
[201] And so they have this connection, nice little chat.
[202] They go on their way.
[203] My dad is picking up marble to head to Pakistan to trade.
[204] He was basically trying to fund his engineering degree.
[205] Is he going over the Khyber Pass?
[206] Is it like that historic?
[207] I don't know if that was the route, but it certainly feels like that in your head.
[208] Yeah, for sure.
[209] Then really strangely, 10 days later, they end up encountering each other again, completely randomly this time in Iran.
[210] What?
[211] No. This is destiny.
[212] This is so romantic It's totally wild Very horny And obviously that was game over From then on it was just like a slam dunk At least for a few years He emigrated to England to be with her Correct Oh wow Did he get his engineering degree You know what he did not finish it Because they became pregnant with me in Pakistan They didn't want to have me in Pakistan Although they did think about it in Islamabad And so then they came back to the UK And then obviously he was sort of very upset because in the UK, they didn't recognize half of his Pakistani engineering degree.
[213] And he had actually left Syria to avoid conscription.
[214] Because in those days, everyone at the age of 18 has to go into the army for three years.
[215] And in the 80s, Syria was getting into it.
[216] It wouldn't be unrealistic to assume that could be life-threatening.
[217] That was a rough experience.
[218] I have uncles who went through it, and it's not the kind of touchy-feely, friendly, turn-you-into-a-servant-of-the-national-effort kind of thing.
[219] But it's very brutal, and so he basically wanted to avoid that.
[220] Okay.
[221] We'd call that a draft dodger here.
[222] Exactly.
[223] Well, at one point in time, I think nowadays we'd say it's smart.
[224] Well, I think we all know where I stand on that.
[225] I'm a big bleeding liberal, so, you know.
[226] We're for it.
[227] I don't think you should go kill people you don't want to kill.
[228] You have a very interesting and circuitous route to this position you hold as an authority in AI because you drop out of college, I guess.
[229] You're 19, presumably you're in college.
[230] You start a helpline for Muslims, which becomes an enormous resource and maybe the biggest for mental health for Muslims in the UK.
[231] How the fuck does that happen?
[232] How do you drop out of college to pursue that?
[233] I got to Oxford where I was actually studying philosophy.
[234] I think the biggest transformation of my life happened, where I basically discovered human rights principles.
[235] In a philosophy class?
[236] Yes, in the kind of spirit of universal justice and fairness, rather than thinking that me and my people were the righteous ones, the chosen ones, the special elite, and so on.
[237] And throughout my late teens, I had kind of struggled with lots of parts of the religion.
[238] Both my parents were very strict.
[239] Very strangely, my mom was actually already a Muslim.
[240] She converted before meeting my dad.
[241] Hence the crazy bus trip, probably.
[242] Yeah, going to find herself.
[243] I'd grown up with a very strict sense of the religion, and it was starting to get uncomfortable.
[244] And when I got to Oxford, it kind of provided me with a framework for thinking about universal rights rather than just one group, one where you didn't need to appeal to faith.
[245] You could use reason to establish the fairness of things.
[246] I think that we need that more than ever now rather than having to rely on this very shaky arbitrary foundation of what I believe.
[247] For me, that's a bit of a trigger word.
[248] When I hear believe these days, I'm like, eek, I'm going to have to be cool here, because I'd much rather have a discussion and have those beliefs or those ideas evolve and be subject to critical reason and stuff like that.
[249] So I helped to start the Muslim Youth Helpline.
[250] There's actually a group of people who were already in motion getting that started.
[251] And it was a non -judgmental, non -directional, so secular support service.
[252] Right, which probably had to be very rare in that community.
[253] I mean, it was the first of its kind because all the other ones...
[254] They would drive you to...
[255] Exactly.
[256] You shouldn't have sex before marriage and homosexuality is a sin.
[257] You observed the five pillars today.
[258] Did your parents freak out?
[259] Did they know you were doing this?
[260] I don't think my dad would have approved.
[261] I had left home by that point when I was 15, so they weren't so much in the picture.
[262] It wasn't such a big deal.
[263] That was actually very freeing in itself because I could sort of go and make my own way, figure out my own ideas.
[264] Because you had gone to a boarding school at some point as well.
[265] Actually, I went to a state school, so free education, but it was a very good one.
[266] You had to do an exam to get in.
[267] And I think because it was selective, you had to pass two exams and an interview, you know, just meant that everyone was starting from a little bit further ahead.
[268] It changed my life, that school.
[269] It really gave me a huge advantage in life and just to be surrounded by a lot of other really driven people and really smart people.
[270] Pressure testing your thoughts and ideas around adversaries that are worthy, they will make you raise your argument.
[271] They'll make you better.
[272] Yeah.
[273] And that's actually what I found at Oxford as well is that there was no judgment for being a bit obsessive or a bit nerdy or really overpassionate about stuff.
[274] Yeah, yeah.
[275] Whereas earlier in life, that was a bit more tricky.
[276] The helpline experience leads seamlessly into some broader public health and public service work.
[277] You end up working for the mayor of London.
[278] You end up advising on conflict resolution, all kinds of things.
[279] You have clients like the United Nations.
[280] That seems like an easy to follow trajectory.
[281] And then you form DeepMind with two other people.
[282] You co-found DeepMind, which is an artificial intelligence lab.
[283] How on earth do we get from the public service to there?
[284] I guess my assumption that anyone like you that I'd be talking to had to be a computer science major, probably as a PhD in something, like I would not have thought drop out 19 public health.
[285] I definitely don't have a PhD, no. I got lucky in that I started the kind of work thing very early.
[286] So because I left at 19, I managed to cram in a lot of experience, working in charities and local government, working as a conflict resolution facilitator all around the world.
[287] That gave me huge exposure to lots of different types of work and types of theories of change.
[288] I was obsessed with, what is your theory of change?
[289] How do you have impact in the world?
[290] How do you scale that impact?
[291] How do we actually progress civilization?
[292] Just in that transition from religion to secular ethics, that was my new kind of raison d'être in life.
[293] And I got to this point where in 2009, I was at the climate negotiations in Copenhagen, and I was helping to facilitate one of the main negotiating tracks, the one around reducing emissions from deforestation.
[294] And I just suddenly had this realization that actually these institutions are super stuck, they're not evolving fast enough.
[295] We can't establish consensus of any type on even the basics of the science of what's happening, let alone what the right interventions are and what we should do.
[296] Right.
[297] And at the same time, for the previous couple years, 2007 to 2009, I had my eye on what Facebook was doing.
[298] It was blowing me away.
[299] I was like, this tiny little app has gone from zero to a hundred million monthly active users engaging with it.
[300] They're sharing their personal lives.
[301] They're connecting.
[302] They're forming new relationships.
[303] They're getting married.
[304] It's incredible how much this is changing things so quick.
[305] This is an incredible instrument if nothing else.
[306] It was very obvious to me that it was so much more than a platform.
[307] And the word platform was actually a misrepresentation of what was actually going on there.
[308] Because it's much more of a mode for framing how activity takes place.
[309] The incentives, the structure, the way the website is laid out, the colors, the now all-famous thumbs-up like button, which drives engagement.
[310] The original sin.
[311] Yes.
[312] Yeah, in hindsight, that seems super obvious.
[313] And to me at the time, it was like, wow, this is really incredible.
[314] I need to do everything I can to be a participant in technology.
[315] Technology is going to be the thing that helps make us smarter, more efficient, more productive, essentially do more with less.
[316] It must have been appealing as well.
[317] If you're dealing with governments and institutions in these negotiations, to see something that comes with no history has to be very encouraging.
[318] Like, you're not bridled with 300 years of how we do things.
[319] That's a great point.
[320] It's actually much easier to innovate in Greenfields than it is to change the status quo.
[321] I kind of think that's one of the big problems that we have in the world today where there aren't very many compelling positive narratives of the future coming from the old world order.
[322] On all sides of the spectrum, people talk about the mainstream media, they talk about mainstream finance.
[323] Their reaction is that the existing establishment is not innovating fast enough.
[324] And so we look outside of that for new narratives of the future.
[325] And to me, Silicon Valley and technology and the potential to do more with less and invent and create things in green fields, that really is the kind of vision of the future that whether we like it or not is becoming the kind of default way that we understand how things are going to play out.
[326] Yeah.
[327] Because the old institutions work so slowly, they can't even correct fast enough, right?
[328] I mean, that's part of the inherent problem is that they're not nimble the way all these technologies are.
[329] I mean, especially the ones we're going to get into, where it's like between 8 a.m. and 12 p.m., this machine, if given enough data, can go from knowing nothing to knowing everything about something.
[330] What human organization can keep up with that?
[331] Yeah, I mean, what human organization or collection of all humans could possibly consume that much raw content?
[332] I mean, these models are just like alien life forms in the amount of knowledge that they can consume and reproduce.
[333] Yeah, kind of beyond comprehension.
[334] You should get involved with the SAG and WGA strike negotiations because first you know how to negotiate or you've been involved in that and you know everything about AI.
[335] Believe me, I've got an earmark question as we get to that.
[336] Okay, let's get you in there and let's also get it solved fast because we're ready.
[337] Okay, so specifically how does this interest in my, I want to say MySpace because I never did use Facebook, but Facebook.
[338] You weren't as enamored with MySpace, apparently.
[339] That's the only place I did a lot of business.
[340] But how does that then take you to artificial intelligence specifically?
[341] Well, so it set me on a quest to find anybody and everybody who was in my network, who was involved in technology or software of some kind.
[342] During that year 2009 and early 2010, I met with anyone who would give me five minutes to just tell me what was happening and why and how it worked.
[343] And I had an interest in software and technology.
[344] I mean, I was on the internet very young and was an obsessive on forums.
[345] And actually, in 2003, my first actual business was an electronic point -of -sale system where we had these little PDAs, and we went in and installed network equipment for restaurants and stuff and tried to get them digitized so they could take orders really quickly and stuff.
[346] Then you remembered in England, the restaurants have no desire to do anything quickly.
[347] Quickly.
[348] Especially bring you your check or allow you to give them the credit cards.
[349] It's really true.
[350] It's my only complaint about England in general is just, you want to walk in and go like, let's pay now.
[351] Yes, or you know it's going to be 90 minutes.
[352] You don't know what I ordered, but let's start that process of giving me my check.
[353] It's so true.
[354] That's what I quickly found out.
[355] This is not a problem that needs to be solved at that time in the UK.
[356] Plus, the technology wasn't good enough.
[357] So I basically set about on this quest, and my best friend at the time, George, his older brother, Demis Hassabis, who was the co-founder and CEO of DeepMind, was just finishing his postdoctoral research at UCL at the Gatsby Computational Neuroscience Unit.
[358] So that's what his degree was in, was neuroscience?
[359] His degree was in neuroscience.
[360] He was interested in memory and how the brain processed information and used memory for imagination.
[361] He wrote some really important papers there.
[362] And whilst he was at Gatsby, he met our third co -founder who is sort of an AI guy through and through and a big believer in AGI from day one.
[363] He did his PhD on definitions of intelligence.
[364] He aggregated like a hundred different definitions, all the different types of intelligence you could possibly imagine, and he universalized them into one working definition, which was the ability to perform well across a wide range of tasks.
[365] So this emphasis on generality.
[366] Intelligence is about generality.
[367] And that's when we pushed this idea of AGI.
[368] And in fact, our business plan said, in the summer of 2010, building artificial general intelligence ethically and safely.
[369] So did you guys coin this AGI term?
[370] It was in use before.
[371] We didn't coin it, but I think we were the first to use it in a company as a mission, and then it got popularized shortly after because this technique called deep learning started to work.
[372] It wasn't really until 2012, 2013 that deep learning was showing very promising signs.
[373] Did you have imposter syndrome among these two with the PhDs and the relevant?
[374] And did you feel like you had to prove yourself worthy of being a part of this triumvirate?
[375] For sure.
[376] What I came to realize after the fact is that I had very different skills, and that complementarity was an amazing three -way dynamic for a decade.
[377] And, you know, one of the things I am luckily able to do is to just go from the micro to the macro.
[378] Like, I'm quite good at thinking big picture and quite practical and very operational as one.
[379] I'm very urgent.
[380] I'm the one who's saying, right, what are we going to do tomorrow?
[381] And that turned out to be a good constellation of skills between the three of us.
[382] Would you guess or you probably know, that's a rare combination of folks, I'd imagine.
[383] I imagine most people that end up in Silicon Valley, there's not a lot of outsider, let's just say, artsy, social scientists above hard science.
[384] You know, whatever we would call that.
[385] I can't imagine they find themselves in this situation very often.
[386] I think that's exactly right.
[387] And interestingly, between the three of us, even Demis and Shane are not strong programmers.
[388] Demis is a neuroscientist.
[389] He did computer science as his degree, but he never became a software engineer formally.
[390] And Shane was very much on the mathematics and the theoretical end of the spectrum.
[391] So typically, in Silicon Valley, a startup pair of co-founders would both be coding.
[392] And one of them would be an exceptional coder.
[393] This is the Wozniak, Steve Jobs.
[394] Yeah, although Steve wasn't a coder either, right?
[395] Right, he's the non -coder.
[396] He's a non -coder, yeah, yeah, yeah, yeah.
[397] Steve is the big picture guy, you know, whose focuses on the user and thinks always about the product.
[398] Right.
[399] Okay, so this company, you guys end up being pretty successful in your exploration of all this, and Google ends up buying this company.
[400] Pretty successful is really low -key.
[401] Well, in that, I mean, it's not like they have an operational ChatGPT or Pi.
[402] You don't have a product yet per se, right?
[403] You just have a lot of progress.
[404] Right.
[405] It was a remarkable acquisition.
[406] I mean, we're talking 2014.
[407] Most people haven't even heard the word AI, let alone AGI.
[408] This is out there completely speculative.
[409] And they haven't been snatching up European companies either.
[410] Exactly.
[411] It was actually the first acquisition that Google made in Europe.
[412] Right.
[413] So all the reasons you should not assume that you're going to get bought by Google.
[414] Yeah, it was very unlikely.
[415] But the thing that caught Google's attention is that we made this demo of an AI learning to play the Atari games from scratch.
[416] So you remember like Space Invaders and Pong?
[417] Asteroid.
[418] Right.
[419] Breakout, if you ever played that game where you have a paddle at the bottom, exactly.
[420] We basically had the AI just from the pixels, right?
[421] So we didn't give it any rules or any structure, nothing else.
[422] Just interact with this environment.
[423] Get a score.
[424] So it tells you if you randomly, luckily, manage to hit it, or if you lose.
[425] And then over time, through self -play, so it would play itself millions of times, it would learn to associate a set of pixels with a set of actions with a score.
[426] And then over time, it would get really good.
[427] It would start to think, oh, yeah, these are the actions that I took in the run -up to getting that score last time.
[428] So I'm going to reproduce that.
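[Editor's note: a minimal, hypothetical sketch of the score-driven learning loop he describes here. This is not DeepMind's actual Atari system; the four-action toy "game," the exploration rate, and the episode count are invented purely to illustrate how repeated self-play plus a score can teach an agent which actions to reproduce.]

```python
# Toy illustration of learning from scores alone (not DeepMind's actual Atari agent).
# Assumptions: a made-up "game" with 4 actions, one of which secretly pays off.
import random
from collections import defaultdict

def play_round(action, winning_action=2):
    """The toy game: score 1.0 only when the agent happens to pick the right action."""
    return 1.0 if action == winning_action else 0.0

value = defaultdict(float)   # running estimate of each action's score
counts = defaultdict(int)

for episode in range(100_000):              # stands in for millions of self-played games
    if random.random() < 0.1:               # sometimes explore a random action
        action = random.randrange(4)
    else:                                   # otherwise reproduce what scored well before
        action = max(range(4), key=lambda a: value[a])
    score = play_round(action)
    counts[action] += 1
    value[action] += (score - value[action]) / counts[action]   # update running average

best = max(range(4), key=lambda a: value[a])
print("learned best action:", best, "estimated value:", round(value[best], 3))
```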
[429] And really quickly, does it have to play the game in real time, or can it accelerate the game itself?
[430] That's a great question.
[431] And this is a key insight about computers, right?
[432] They're parallelizable,
[433] so they can go at lightning speed.
[434] You can scale them up.
[435] You have millions of instances playing against themselves rather than having to go in human time.
[436] Okay, good.
[437] So it's playing a whole match in seconds or minutes as opposed to it would take us 15 minutes to get to that board or whatever.
[438] Right.
[439] That's a key intuition because machines can have more experiences than any single human.
[440] And we see this pattern repeat over and over again.
[441] In 2015, we trained an AI to detect breast cancer in mammograms.
[442] And by 2016, it was better than the expert radiologists.
[443] We did the same thing for ophthalmology.
[444] So for 52 blinding diseases, it was better than the best human ophthalmologists in the world.
[445] Right.
[446] And that's mostly because it has seen orders of magnitude more cases than the humans.
[447] You know, the best humans in their career might see 30,000 cases.
[448] Well, seen and I think even more importantly, remember at all times.
[449] I mean, that's the real key.
[450] It's like, if you actually could remember the sequence of events of everything in your life, you too would be able to predict with a much higher degree of accuracy, but we don't have access to that. Totally. So the fact that it can see things much, much faster, the fact that it has perfect memory in many cases, or very, very good memory, and the fact that it doesn't get tired. Yeah. Because if it sees a slide of an iris, let's just say, I don't really know how it works, and it has some little pigmentation here, it can rapidly go through all 10 million photos in the data set with known prognosis. The human doctor can't hold it up next to 10 million examples, but it can.
[451] That's the magic of it, right?
[452] Right.
[453] And that's actually what has happened for the last decade, as we see these crazy exponential trends of now seeing trillions and trillions of words of open data.
[454] But at the time, it couldn't work in the space of words.
[455] It was really just looking at images and pixels.
[456] When we saw it play these Atari games, it was really incredible.
[457] This thing has learnt clever strategies that your average human player wouldn't have discovered.
[458] There was one particularly in Breakout where it would tunnel up the back and knock down all the bricks and then bounce off the back wall.
[459] Oh, my God.
[460] Like a clever little trick, right?
[461] That was the first time that I thought, this is why I'm working on AI, the ability to learn new knowledge.
[462] That's how AIs can truly help us get out of all the messes that we're in,
[463] from climate change to what we have to do with agriculture and what we have to do in transportation, right?
[464] They can teach us new knowledge, like great scientists and researchers.
[465] Yeah, they can shatter paradigms, right?
[466] Because we all get stuck in thought paradigms.
[467] You know, medicine works this way, diagnose, treat, whatever paradigm it is.
[468] The thing outside of it is so hard for us, but for a machine it's not.
[469] And it can actually be creative.
[470] So this is the cool thing, is that it can discover new knowledge, it can see outside of the box and it can be genuinely inventive.
[471] Everyone's probably seen these image generation models these days like DALL-E and other ones.
[472] It's wild.
[473] Probably more than you can imagine can now be manifested.
[474] I think that's all creativity is.
[475] It's just combining multiple different ideas in novel ways to produce something unique.
[476] Okay, that kind of fast -forwards us to one point about AI.
[477] When it's explained to us, I think we discredit the fact that we, too, operate almost in an identical way.
[478] So a current argument right now, and this is dangerous for me to say because I'm in the union and I am supportive of the WGA, but...
[479] And SAG.
[480] The example I'm giving is about writing.
[481] Got it.
[482] Joseph Gordon-Levitt, wonderful actor, really smart guy, wrote this incredibly well-thought-out piece about this issue and said that the owners of the content that are fed to the AI, let's say if it's to write a script and you feed it 600 scripts from brilliant writers that have worked in the past, that there should be a royalty paid on that original work. Which is a very sound argument. But I do have to point out, I as an artist, I too have been trying to replicate all my favorite movies. When I sit down to write a movie, I'm informed by Michael Mann in Heat, I'm informed by Pulp Fiction, which is my favorite movie. I like this, I like that. I am an AI.
[483] I am cashing in on all the info I took in and liked, and now I synthesized my version of it.
[484] So it's just a curious situation where I shouldn't be asked to pay a royalty to Quentin Tarantino if I write something inspired by him, but the computer should.
[485] I feel like there's some weakness in that argument.
[486] So first of all, the strength in the argument, which I broadly agree with, holds provided the AI isn't regurgitating word for word a paragraph of copyrighted text, because that would be plagiarizing, right?
[487] But it seems that most of the time these models are not regurgitating that word for word.
[488] They're really inventing something new.
[489] They're finding the space between two ideas.
[490] Interpolation, it's called.
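[Editor's note: a toy sketch of what "interpolation" between two ideas can look like numerically. The three-dimensional vectors here are invented for the example; in a real generative model they would be high-dimensional embeddings produced by the model itself.]

```python
# Toy illustration of interpolating between two "ideas" represented as vectors.
import numpy as np

idea_a = np.array([1.0, 0.0, 0.5])   # pretend embedding of one concept
idea_b = np.array([0.0, 1.0, 0.5])   # pretend embedding of another concept

for t in np.linspace(0.0, 1.0, 5):
    blend = (1 - t) * idea_a + t * idea_b   # points in the space between the two ideas
    print(f"t={t:.2f} -> {np.round(blend, 2)}")
```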
[491] That seems to me just like anything that we would normally do as humans.
[492] That's how we're creative.
[493] You might like the pattern of that sofa and you might go, hmm, that's cool.
[494] I want to see that on a jacket and you take it with you.
[495] I'll be more literal.
[496] I pitched this movie to Warner Brothers.
[497] I said I want to remake the TV show CHiPs, but I want to do it CHiPs meets Lethal Weapon, so it's hard R, the action's intense, and it has comedy. I mean, I'm literally saying marry CHiPs and Lethal Weapon together. That's what I'm going to try to do. But didn't you have to pay some sort of royalty to the original CHiPs? Well, CHiPs, because we're using the actual intellectual property in the name. Yeah, but I could have also gone in and said, I got this movie idea, it's called Bike Cops, imagine CHiPs plus Lethal Weapon. And most pitches are sold that way. So it's Midnight Run meets blank. But Midnight Run has never gotten any royalty when it's used in any of these pitches, from these humans.
[498] So it's just curious, it's okay if humans don't pay royalty, but we want the machines to.
[499] And it's also exactly the same thing with apps.
[500] The number of times you hear a startup founder being like, I'm going to do Uber for food delivery, or I'm going to do it for this.
[501] I'm going to do it for that.
[502] This is how we create.
[503] Yes.
[504] Stay tuned for more Armchair Expert, if you dare.
[523] Okay, you work there for a while.
[524] You see this obviously just grow in its ability.
[525] You have to be dazzled almost monthly because it itself becomes a bit exponential, doesn't it?
[526] Because it learns from its learning.
[527] And so it's just accelerating at all times.
[528] And is the pace of it at any point while you're observing it?
[529] When do you start getting nervous or apprehensive about it?
[530] I think I've been nervous about it since the day we founded the company.
[531] That's the only honest and wise way
[532] we should approach technologies that are as fundamental as intelligence itself.
[533] We've literally just been describing something that, if you swapped out the human for the machine and you replayed this conversation, you couldn't really tell: were we talking about the machine there or were we talking about the human?
[534] You can put a kid in front of an Atari game and he will master it.
[535] We've seen it happen.
[536] And he does it through remembering what things worked and what didn't.
[537] So we're already at a place where we're taking the thing that has made us unique as a species, our intelligence, this ability to plan and imagine, create, adapt and invent new things, communicate perfectly in language.
[538] Language is a technology.
[539] We now have another type of input to that technology, which is the machine, able to use language.
[540] So it's always been top of mind for me. I mean, it's why we framed the company around ethics and safety from day one.
[541] And when we were acquired by Google, we actually made it a condition of the acquisition that we have an ethics and safety oversight board with independent members and a charter of ethics that governed all the technology that we give to Google in perpetuity.
[542] And two of those red lines were that it could never be used
[543] for military purposes and never be used for state surveillance.
[544] That's still in operation now.
[545] But as we'll get into, even a well -intentioned, seemingly bulletproof statement like that can be hacked by AI.
[546] And we'll get into how, like, you couldn't prevent the machine from getting racist.
[547] There's all these things you can't account for, and that's in your book, The Coming Wave.
[548] There's, like, a lot of almost impossible to predict things.
[549] Right.
[550] And that has been the story of the last decade.
[551] The progress has been eye -watering.
[552] It was 2014 when we were acquired by Google, and we became Google DeepMind, part of the Google ecosystem.
[553] And in that year, when we trained the Atari model, that used something like two petaflops of computations, so two billion million computations, which sounds like a lot, but it's actually relatively small.
[554] Two billion million million.
[555] That sounds like when a little kid is like, it's a billion million.
[556] Yeah, yeah, yeah, exactly.
[557] A thousand billion.
[558] It's totally a made -up number.
[559] It's called a petaflop, so you can go and, like, you know, drop that on someone.
[560] Every year since then, the cutting -edge models in AI have used 10 times more compute than the previous year.
[561] So for 10 years, it's gone 10x.
[562] Oh, my God.
[563] So to your question about, am I surprised, am I amazed?
[564] I mean, it's totally mind -blowing.
[565] Well, 10 to the 10th powers, I don't even know what that number is.
[566] It's cabillions.
[567] Exactly.
[568] That's a good number.
[569] It's called 10 billion million -billion flops.
[570] So it's gone from 2 to 10 billion.
[571] 10 billion million -billion flop billions.
[572] Some insane numbers.
[573] You can't even graph that.
[574] There's definitely not enough room on the page to list all the zeros.
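[Editor's note: a quick back-of-the-envelope in Python, taking the spoken figures at face value (roughly two petaflops of total computation for the 2014 Atari run, then ten consecutive years of 10x growth). The numbers are the podcast's rounded figures, not a precise accounting.]

```python
# Rough arithmetic for the growth described above, using the figures as spoken.
petaflop = 10 ** 15            # one petaflop taken here as 10^15 operations
start_2014 = 2 * petaflop      # ~2 petaflops for the 2014 Atari training run
growth = 10 ** 10              # ten years of 10x-per-year growth
today = start_2014 * growth
print(f"{today:.0e} operations")   # 2e+25 -- too many zeros to fit on the page
```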
[575] Scale has been the thing that has transformed this.
[576] The models are getting bigger.
[577] They're consuming more data.
[578] They consume insane amounts of computation.
[579] That is starting to look like a brain-like structure in terms of the number of neurons we have in the human brain.
[580] Which we have trillions.
[581] We have about 100 trillion.
[582] 100 trillion.
[583] Connections in the brain.
[584] And so these models at the moment are roughly on the order of about one trillion.
[585] There's a hundred X difference between the models and the brain.
[586] Which would just be a year or two?
[587] Yeah.
[588] Could be a couple of years.
[589] That's right.
[590] To at least match the amount of neurons in the brain.
[591] Just to be clear, that's not going to mean a human -like performance.
[592] Right.
[593] But it's just a crude, rough measure.
[594] Really early into this acquisition, you started using the DeepMind technology to address energy consumption for cooling Google's data centers.
[595] And the AI itself looked at this and figured out how to reduce it by 40 % for cooling.
[596] Yeah, that was a crazy project.
[597] I mean, the Google data center infrastructure spends tens of billions of dollars a year.
[598] I was going to say, I have to imagine the very best engineers on planet Earth designed the Google cooling center.
[599] Literally.
[600] Who was the very best person in thermodynamics or whatever?
[601] I'm sure they have him or her.
[602] Absolutely.
[603] And they were very resistant to cooperating with us at the beginning.
[604] They were like, we're the best systems engineers, mechanical engineers, industrial engineers, on the planet.
[605] So it took us about six months to cozy up to them and persuade them to give us a shot at doing this.
[606] And we basically had the AI look at all the historical data, five years worth of mechanical set points.
[607] Like how fast a fan is going.
[608] Does it turn on at this temperature or that temperature?
[609] Exactly.
[610] What combination of fans to use?
[611] Because there's like a motherboard fan, there's a chassis fan, there's corridor fans.
[612] And these are like the size of three or four football pitches, these data centers, right?
[613] You look down them and you can't see the end.
[614] And they're usually underground?
[615] They can be overground, but they're almost always near hydroelectric power or solar power.
[616] Cheap energy is the number one priority.
[617] Right.
[618] Wow.
[619] I know.
[620] It's insane.
[621] I mean, it'd take you like 20 minutes, half an hour to walk down the thing, and you have to wear headphones because it's like screechingly loud.
[622] Really?
[623] And of course, freezing.
[624] So in looking at all this data, how long did it take the AI to figure out a system that would reduce it by 40 %?
[625] Well, it took us about a year of tinkering and experimentation, because obviously we got it wrong a lot.
[626] And in the first deployment, it got about a 30 % reduction.
[627] So we were pretty blown away by that.
[628] And then what we were doing, because obviously they didn't want to let the AI loose on the actual hardware.
[629] Of course.
[630] The AI would basically give a set of recommendations to the human data center controller.
[631] And you guys would implement that.
[632] Yeah, and then the human would be like, yeah, that seems sensible.
[633] I'll adjust it this level, that level.
[634] Because then what we saw after like three or four months of operation, the human was just accepting the AI's recommendations like 95% of the time.
[635] So instead of having a 15-minute gap, we were like, well, if we have it as a real-time control, it'll just update the set points every 30 seconds or every minute.
[636] Right.
[637] And you get like a huge efficiency just from that, which just shows you how kind of inefficient humans are in general.
[638] Yes.
[639] So that gave another 10%.
[640] We have limits of conscientiousness.
[641] I mean, I don't know why, but just recently I've been thinking like, you know, it's kind of the number one thing you would want in an employee is conscientiousness.
[642] The computer is just the most conscientious thing in the world.
[643] It can't deviate from whatever it's supposed to do, right?
[644] I think this is another reason for us to be really optimistic about the future, because we want the machines to produce a consistently fair and just outcome across the board.
[645] Obviously, the challenge is that we have to make sure we program them in that way and that we can hold them accountable, keep them controlled.
[646] So there's that challenge.
[647] But in theory, it should mean that unlike a judge that gets tired after lunch.
[648] Sentences you before or after lunch.
[649] Right.
[650] Yeah.
[651] There's a huge swing.
[652] It's like three years difference or something crazy.
[653] You see it all the time.
[654] We're guilty
[655] of it individually.
[656] And also we walk into a place and we know they're supposed to do that, but they don't want to and they don't do it.
[657] Emotions.
[658] Yeah, emotions and energy levels and distractions and personal problems.
[659] You think about it if you ever visit a relative in a hospital or something and you see all the different people on different parts of the ward and, you know, the people who are quiet and not very pushy and demanding of nurse practitioner care, you have to think how uneven that treatment is as a result of somebody having a really active patient advocate member of their family to help manage their care and stuff.
[660] And that is going to end up in worse care for some people who really deserve better.
[661] And so that's why I'm excited about the kind of fairness side of AI.
[662] Yeah.
[663] I will add that you left Google in 2022.
[664] People will be mad if I don't mention you had allegations of bullying people and you left DeepMind.
[665] Right.
[666] You bullied people on email, maybe.
[667] I can be super hard charging and very demanding.
[668] It's five years ago and I've learned a lot from that.
[669] Okay, wonderful.
[670] Moving on.
[671] So you leave DeepMind and you
[672] go to Google proper, and then you leave there in 2022, and then you found, with Reid Hoffman, who we've had on and we adore Reid, Inflection AI. And in 2023 you guys introduce Pi, which is a chatbot. Now I'd love to get into some of the things that your book is warning us about. I don't know if it should be optimistic first or pessimistic first, but I imagine a lot of this will fall into the very astute observation of Tristan Harris, which is social media is dystopia and utopia.
[673] It's all things.
[674] So I think AI too will be all things.
[675] It'll be like these incredible miraculous breakthroughs.
[676] And then there'll be some really dangerous things that come along with it because there are bad actors in the world.
[677] So let's first look at what's coming.
[678] You have a wonderful list in the book.
[679] But if you want to hit me with some of the ones that you find personally the most interesting and I have a few that I think seems so exciting.
[680] Yeah, I mean, I think this is going to be the greatest force amplifier in history.
[681] So you're right that it's going to amplify the bad as much as it will amplify the good.
[682] And I think the sooner we just accept that that part of the equation is inevitable, the quicker we can start to adapt.
[683] Because this is about adaptation and mitigation.
[684] What boundaries and constraints and guardrails can we put around the technology so that we get the most out of it, but so that it always remains accountable to us and that it doesn't end up causing more harm than good.
[685] I think there is a low chance that it ends up causing more harm than good because I think that people are starting to realize how significant this moment is and they're starting to get involved.
[686] You mentioned the writer's strike.
[687] I mean, that's the tip of the iceberg.
[688] Yes.
[689] And this is just the first opening salvo.
[690] It's going to be the greatest force amplifier.
[691] And so that is likely to cause huge chaos and instability.
[692] It's going to come from two different angles.
[693] On the one hand, there's going to be these super giant models that the really big companies build.
[694] We're one of them.
[695] We can talk about that for Pi.
[696] But it's also going to be open source, meaning anyone can make it, create it, recombine those ideas.
[697] So as we've seen from things like Stable Diffusion. What's that?
[698] Stable Diffusion is an image generation model, which is like OpenAI's DALL-E, but it's entirely open source.
[699] So the actual code to run that model is available to anybody on the internet to adapt it, play with it, improve it.
[700] You can actually take the entire model and run it on a laptop and it can produce photorealistic, super high quality images with just a few sentences of instruction.
[701] Zero technical effort or expertise required.
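[Editor's note: as a concrete, hedged illustration of "run it on a laptop," something along these lines is how people commonly drive Stable Diffusion locally with the open-source Hugging Face diffusers library. The checkpoint name and prompt are just examples, and a GPU with enough memory is assumed.]

```python
# Sketch of generating an image from a sentence with an open-source Stable Diffusion checkpoint.
# Assumes: `pip install diffusers transformers torch` and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # example open-source checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photorealistic antique shop full of strange curiosities"
image = pipe(prompt).images[0]             # a few sentences of instruction in, an image out
image.save("curiosity_shop.png")
```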
[702] Wow, my mind just immediately went to something pornographic.
[703] People are probably creating like...
[704] Well, that's a whole deep fakes thing, right?
[705] Yeah, is that AI generated?
[706] Those deep fakes?
[707] Those deep fakes are AI generated.
[708] Right.
[709] So they're probably, you know, imagining their favorite...
[710] They're making that.
[711] Podcast hosts, probably me. You wish.
[712] I know.
[713] That's open source.
[714] Anyone wants to do that?
[715] No, but Kristen has talked about this because her face has been put on some porn.
[716] Yeah, yeah, yeah.
[717] Stuff and it's crazy.
[718] Ashton sent that over to us.
[719] Oh.
[720] Okay, well, I guess he is at the forefront of this tech stuff.
[721] He's an investor in Inflection, actually.
[722] Oh, he is? Okay.
[723] I'm fine.
[724] He would be.
[725] Of course.
[726] Okay, so you can run it on your laptop.
[727] Now, when you generate, we get into kind of copyright authorship, like if you give it the five commands that generate this image, is that your image?
[728] I think that the idea of ownership is going to start to fall away.
[729] Interesting.
[730] And obviously that's controversial.
[731] I'm not proposing this outcome.
[732] I'm describing what I see.
[733] Don't shoot the messenger.
[734] I believe that as far as I can see, my best prediction is that over the next five to ten years or so, text, video, audio, imagery, it is just going to be produced at like zero marginal cost.
[735] It's going to cost almost nothing.
[736] Yes.
[737] And it's going to be super easy to edit it, manipulate it, improve it, within just one sentence.
[738] And that's going to be an unbelievable explosion of creativity because now the barrier to entry is lower than it's ever been.
[739] I mean, think about what has happened in the last 20 years now that everyone has a smartphone camera.
[740] They're basically all directors.
[741] We now have hundreds of millions of professional directors producing incredibly engaging content so much that we all want to spend time on TikTok.
[742] We're leaving YouTube behind and YouTube left TV behind.
[743] And that is what happens when you get access to a new technology.
[744] Everybody suddenly gets wildly creative and inventive.
[745] And I think that's the trajectory that we're on from here.
[746] And just the volume will be so vast that copyright tracking down would be impossible.
[747] Yeah, a little bit like on YouTube, other than for the very sort of elite high -grade feature film stuff, there isn't really a concept of you owning your data, like you made a video on YouTube five years ago or ten years ago and someone basically completely ripped it off.
[748] I mean, that is the definition of the meme space.
[749] A meme is an evolving idea manifested in images and video.
[750] TikTok has set fire to that such that it's happening minute by minute.
[751] You put a cool video out, and then the next thing you know, someone has basically made exactly the same video a few degrees to the right with a slight shift in color or style or something and matched it with someone else's direction.
[752] And so you're seeing that memetic evolution happen in hyper real time.
[753] Well, and it also makes sense that people's attachment to things that took years to create are going to be different.
[754] Let's say the David.
[755] I don't know how long it took him to chisel that out of the block of marble.
[756] but presumably a long, long time.
[757] And so his attachment to it is, this represents a year or two of my life.
[758] Whereas if he was able to make 10 Davids a day, he couldn't possibly have that same attachment or that sense of ownership or the sense of anything.
[759] So maybe part of it is just like quantity and investment.
[760] The investment is so much smaller and therefore your sense of what you're owed from it probably diminishes as well a little bit.
[761] Totally.
[762] I mean, we've become overly attached to the craft, whereas really the value is in the ideas
[763] and the concepts.
[764] Now, I think we're moving to a stage where the value is in the curator.
[765] It's the edit.
[766] It's the judgment to reduce, to scale back, to simplify.
[767] It's the shortlist, because everybody is going to have the power to produce really good and interesting creative content.
[768] I mean, that's the thing that blows me away every time I use TikTok.
[769] I mean, the variety is insane.
[770] And what TikTok is kind of doing is curating the feed.
[771] That's actually where there's a huge amount of value.
[772] So high -quality productions in the future are really going to be about the edit, taking away the extra and simplifying.
[773] I think that's the kind of thing that would take off.
[774] Yeah, I think for people like myself, who I would say is a tradesman or a craftsman, I know how to write screenplays.
[775] I've spent a huge chunk of my life investing in that ability, and I spent a huge chunk of my life refining my acting and everything else.
[776] I think what happens with this technology is it kind of democratizes everybody's access to executing their ideas.
[777] And I personally feel like, bullshit, go sit in a room for five days.
[778] If you want the result, you go put in the time.
[779] And I think a lot of us who have dedicated our lives to these things to see someone get an idea like we have, but they don't have to do anything beyond that to execute it.
[780] It feels very threatening.
[781] It feels like it erases talent.
[782] Talent is about to become or is becoming irrelevant.
[783] Which is interesting because I think as humans and humanists, we would probably go like, well, yeah, we would want everyone to feel creative and be able to create and we wouldn't want barriers.
[784] I don't either.
[785] But I mean, saying it probably butts up against some of our ideals.
[786] Well, you know, I think humans marvel at talent.
[787] That's why we love the Olympics, because it's like, how is it possible that someone who was born like me can do that, or someone could write this movie, or direct Everything Everywhere All at Once?
[788] Like, how did somebody do that?
[789] When you take away the person, what's going to happen?
[790] Are we going to be excited anymore about anything?
[791] Or is it just like, well, yeah, duh, computer thought of that because a computer can think of anything?
[792] I think it's a really interesting question.
[793] We're about to run the experiment.
[794] Yeah.
[795] So we all know.
[796] We'll find out in my lifetime.
[797] We'll find out in 10 years.
[798] My instinct is that I think we tend to feel arbitrarily precious about the thing that we've previously been invested in, understandably.
[799] It's an emotional attachment.
[800] It's an identity.
[801] And that identity is a product of you having slogged away for like years honing your craft only to see it now available to absolutely everybody.
[802] But then the flip side of that narrative would be, well, what about all the people that didn't get the privilege and opportunity to be able to get through that training, to make those connections, to be able to make it in the industry?
[803] Think about how much arbitrary luck or even nepotism there is that enables people to get access.
[804] And as much as we're displacing people by the democratization of access to these tools, we're also giving other people the potential to be massively uplifted.
[805] I mean, think of all the YouTube stars, TikTok stars that have emerged out in the middle of nowhere that never were on the scene or had connections.
[806] I do think that tends to default to producing more creativity in aggregate overall.
[807] And it just sort of reshuffles the chips of who basically has power today.
[808] And basically makes everything much more competitive.
[809] Yeah, but there
[810] also is, like, a really huge global thought or question.
[811] A literal analogy I can give is that I had a year of my life where I came into more money than I thought I would ever have.
[812] I started doing all these exceptional things I always dreamt of.
[813] Maybe I'll take a helicopter to this race.
[814] A year into this experience, I was doing something spectacular again.
[815] And I was like, I don't even care about this.
[816] I don't care about it because I've done 13 other cool things of this magnitude.
[817] And I'm now ruining everything with the greatness that's at my fingertips.
[818] And I have to rein this in and police myself and keep things special and rare.
[819] So I think there's also a potential that it's like every movie you see is Pulp Fiction.
[820] What does that do to your overall appetite for anything?
[821] If everything's perfect and great, that's a...
[822] Yeah, there's nothing to compare one thing to another.
[823] What's good anymore, if it's all good, what is good?
[824] Half of what we like about something good is, in fact, how rare it is.
[825] The dopamine hit of experiencing something novel.
[826] So we could end up with six million perfect movies a year in the theater, but we might find that we don't give a fuck as soon as they're all perfect.
[827] I think that's a great point.
[828] And actually, I think that's quite likely.
[829] We're already trending in that direction.
[830] We're becoming completely overwhelmed with access to information.
[831] Where are the secrets today?
[832] Where are the vacation spots that no one's ever heard about?
[833] Where's that coffee shop that you heard from a friend of a friend that that was a place that you should go to?
[834] I mean, this is like 20, 30 years ago.
[835] So we've learned to reproduce culture so quickly and in such a polished, kind of semi-artificial way.
[836] Like, you know, you go to a restaurant.
[837] So few restaurants are clearly authentically the culture that they are and they've been there 20, 30 years.
[838] And then sometimes you might stumble across, like, an old Italian place and you're like, wow, this really hasn't changed for 20 years.
[839] The Altoids.
[840] Right, exactly, the Altoids.
[841] They're not better than other mints, but there's something special about that old tin.
[842] You should definitely be getting a cut.
[843] I mean, they are not sponsored yet.
[844] Send them.
[845] You know, I imagine their ad budget is enormous.
[846] Anywho, you do think that's already kind of happening.
[847] I think we're becoming desensitized to the variety of content that we experience everywhere of every form.
[848] So now that we're overwhelmed with this information, it is just hard to discern what is cool, what is good, what is interesting, what do I like?
[849] Because the volume we're seeing is like 100x what I would have consumed
[850] 10 years ago.
[851] I'm now seeing that in a week.
[852] Also, how's anyone going to make money?
[853] I don't understand.
[854] If we can produce a hundred dollies, like, if I can, and then you, what's going to happen to money?
[855] Explain that to us.
[856] The marketplace.
[857] How are people going to make money?
[858] Yeah, I mean, look, hypercompetition has that tendency, right?
[859] Because if we reduce the means of production to zero marginal cost, right?
[860] So the inputs become much cheaper.
[861] You don't have to rent a studio.
[862] you don't have to go hire a whole ton of extra actors.
[863] You don't have to go into post -production and do all the coloring because there's filtering.
[864] You can just take an auto-tuned voice off the library or off the shelf.
[865] I mean, we've seen that trajectory for the last decade, and it's now going to get compressed into hyper -real -time.
[866] The utopian story is that we're going to be more creative, and yeah, we might need some more curators and editors.
[867] But the other question is, what on earth are we going to do?
[868] I basically said, from the beginning, we should expect significant taxation to fund significant redistribution because over a 20 -year period, these models really do make production much, much more efficient and much more accurate.
[869] I mean, it is a race against the machine, and human innovation and evolution independent of machines isn't going to move fast enough to be able to be better.
[870] So you had originally been a part of the artificial general intelligence work, which we talked about a little bit, but you have since introduced this term, artificial capable intelligence, which is kind of a midway point between AI and AGI.
[871] So tell us how you would define artificial capable intelligence.
[872] Because I feel like it funnels into the question about money.
[873] It's exactly connected to that.
[874] So for years, there's been this idea of a Turing test.
[875] What is that?
[876] Yeah, I never heard that.
[877] Alan Turing, one of the earliest computer scientists.
[878] Benedict Cumberbatch.
[879] Yes, exactly.
[880] Hot guy.
[881] I'll remember it. We should have Brad Pitt teaching all tech. Oh my God, what a hat. Everything's the Brad Pitt principle. Oh my God. The Pitt principle. I like this. That's why Cillian Murphy was such amazing casting. Jesus. Who doesn't want to watch? Yeah, Oppenheimer, he looks like Cillian. Perfect looking. The Turing test was an imitation game that Alan Turing invented, I suppose, 60 years ago. And it basically said, if you could teach a machine to speak as well as a human, and it could deceive another human into thinking that it was in fact human and not a machine, then it would have succeeded in imitating human intelligence.
[882] That was the grand Turing test.
[883] And now that we have these language models, Pi and ChatGPT, it's pretty clear that we're quite close to beating the Turing test.
[884] And yet we've got no idea if we're close to inventing intelligence at all.
[885] I mean, it's a version of intelligence, but it clearly isn't the full picture.
[886] And so in the next few years, what I think is likely to happen is that we will pass a modern Turing test, which is where an AI is capable, it can do real things in the world, right?
[887] It can learn to use tools.
[888] It can ask other humans to take actions on its behalf, and it can string together pretty complicated sequences of tasks around abstract goals.
[889] So, for example, you could say to an AI, go off and make a lot of money.
[890] Make a million dollars by inventing a new product: do your research online, figure out what people are into at the moment, go and generate a new image, go and negotiate with a manufacturer in China, discuss the blueprints with the manufacturer, get it drop-shipped over, and then go and market it and promote it.
[891] And you said, like, you imagine giving it $100,000 to start with?
[892] It could start with a relatively small amount of money and I think quite quickly it would be able to generate a lot more.
[893] And that would be for you, artificial, capable intelligence.
[894] That would signal that.
[895] It would, yeah, because artificial general intelligence is a much more long-term, speculative thing.
[896] That's the kind of super -intelligence Terminator and all the rest of it.
[897] This is a much more narrow focus on not just what can an AI say, but what can an AI do, right?
[898] Because that's what we really care about.
[899] Right.
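To make the "artificial capable intelligence" idea above a bit more concrete, here is a minimal, purely illustrative sketch of the kind of agent loop Mustafa is describing: an abstract goal gets decomposed into concrete steps that are then worked through in order. Everything in it is hypothetical scaffolding; `ask_model` is a stand-in for whatever language model API one might use, and nothing here actually uses tools or spends money.

```python
# A toy sketch of a goal-directed agent loop: decompose an abstract goal into
# steps, then walk through them. Illustrative only; no real model or tools.
from typing import List

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a language model; returns canned text here."""
    return ("1. Research what products people want\n"
            "2. Draft a design and find a manufacturer\n"
            "3. Arrange shipping\n"
            "4. Market and sell the product")

def plan_steps(goal: str) -> List[str]:
    """Ask the (placeholder) model to decompose a goal into ordered steps."""
    raw = ask_model(f"Break this goal into numbered steps: {goal}")
    return [line.split(". ", 1)[1] for line in raw.splitlines() if ". " in line]

def run_agent(goal: str, budget: float) -> None:
    """Walk through the planned steps, narrating a pretend budget as it goes."""
    for step in plan_steps(goal):
        print(f"[budget ${budget:,.2f}] executing step: {step}")
        # A real system would call tools here (search, messaging, payments)
        # and check each result before moving on; this sketch only prints.

run_agent("Turn $100,000 into $1,000,000 by launching a new product", budget=100_000)
```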
[900] It's funny, while you were talking, I don't know if this is a breakthrough idea or if it's really basic, but I was thinking, weirdly, I might actually say a measure of intelligence would be acting logically without data.
[901] It's almost the opposite.
[902] It's that a human without any data can make a relatively sound, logical decision in any given moment by modeling purely from their imagination and perhaps no data.
[903] Extrapolation is a critical skill that these machines currently don't have, which is, without any context, can you figure out what's required in a certain setting without any prompting?
[904] Without any patterns to observe to then model onto it.
[905] Exactly.
[906] At the moment, they're mostly reproducing known patterns.
[907] Right.
[908] And it turns out you can go really, really far just by looking at known patterns.
[909] Yeah.
[910] But that's humans too.
[911] Normally I think it's because you've experienced something you can at least connect to that's similar in some way or you've seen it in a movie or you've read it.
[912] I agree, but if you look at babies, like, babies have an intelligence.
[913] They're born with an intelligence.
[914] They can make decisions and they don't really have a data set to draw from, you know?
[915] Yeah, that's true.
[916] And I think we often are in situations with almost no comps, and we intuitively know what to do.
[917] Well, another way of thinking about that is that we are in the millionth generation of the evolution of that mind.
[918] So although the baby appears to learn something on the spot, that's really not the case.
[919] In fact, we're all one species.
[920] 65 million years of mammalian evolution that we're sharing lots of our genetic code with.
[921] Right.
[922] And in some ways, these new AIs, these large language models are only in the 10th or 20th generation.
[923] As we make progress in a certain area and we publish academic papers on it and they get peer reviewed, the knowledge and insight then gets passed on to the next group of developers and the other teams.
[924] And when you see that, oh, those guys have done something really interesting, you try and copy it, and so then that gets incorporated into the models next.
[925] You're just building endlessly.
[926] Once we have the Pythagorean theorem, it gets implemented in everything going forward.
[927] Yeah, and that's how we evolve knowledge.
[928] We're inventing new ideas.
[929] It becomes part of the established status quo.
[930] I mean, think about how many things seemed normal 30 years ago that are now absurd, not just our cultural positions on really important topics, but doctors used to be the biggest smokers of all.
[931] Sure.
[932] You know, it's amazing how that kind of show.
[933] There's many different brands of cigarettes over the years.
[934] People collect these wonderful ads for Camel: eight out of ten doctors smoke Camels.
[935] We left out a couple of things that I just want to say that are really exciting, because, again, we're already kind of getting into the negative side of things.
[936] But abundant energy is something, obviously; just like it figured out this 40 percent reduction, think of what it'll do for the management of a nuclear facility, like the ones Bill Gates is behind.
[937] You know, that's incredible.
[938] We left out the synthetic biology, which is going to be
[939] working in concert with this AI, where we can read, edit, create, and print DNA.
[940] Oh, my God.
[941] That's bonkers.
[942] Viruses that produce batteries.
[943] What?
[944] Proteins that purify contaminated water.
[945] Carbon scrubbing algae.
[946] Toxic waste into biofactory.
[947] Like, these are very exciting and could heal the planet.
[948] Whatever our concern is about whether we're going to write movies or not.
[949] You know, there are also some big-ticket items that we might have to, like, surrender, whether we drive semis long distance; in order to save the planet, we might think that's a trade-off that's worth it.
[950] We have to extract carbon from the atmosphere.
[951] Those algae or the kelp farms that you talk about, that's an incredible upside of this experimentation.
[952] Imagine being able to absorb all of this excess carbon that we really need to remove as quickly as possible.
[953] I honestly think that in the next 20 years, energy is going to become an order of magnitude cheaper than it currently is.
[954] And we're on an incredible trajectory.
[955] Renewables are the quiet hero of the last 20 years.
[956] Costs of solar are going through the floor.
[957] Hydroelectric power is now a huge percentage of the energy mix.
[958] I think that we get a bit stuck in the negativity and the downside of things when, in fact, this is exactly what we need to be making progress on.
[959] Also like running core government services.
[960] You know, you imagine a government without any outside financial manipulation, no corruption whatsoever.
[961] These are pretty pleasing thoughts, and obviously those will be the upside.
[962] Now, we should address some of the things that could be very dangerous about it.
[963] There could be some major threats to government stability.
[964] I think also we've not talked about it, but it's the point I always end up coming to when I'm debating with somebody who basically wants to pull the plug on everything.
[965] And I go, I'd be for that.
[966] If we could truly get Russia and China to agree to that. We have to acknowledge we're in an arms race with this technology.
[967] That's a fact.
[968] So who do you want to have the lead?
[969] Do I want North Korea to have the lead?
[970] I personally don't.
[971] So you start working backwards from the reality of can we afford to not be out in front?
[972] I don't think we can.
[973] Do you have an opinion on that?
[974] The technology itself is not going to kill us.
[975] It's going to be the mishandling of the technology by our own governments or by a bunch of crazies, bad actors.
[976] Now that is a manageable downside.
[977] It is not a reason for us to panic and pull the plug.
[978] It's a reason for us to be responsible and conscious and proactive and start having an adult debate about it right now.
[979] Because too often you hear extremists on both sides.
[980] I hear these people who are just like, fuck it, we should just be charging ahead.
[981] It's equally insane to be like, right, I'm done, pull the plug, the Luddite story.
[982] It's just maybe too clickbaity, and you end up just reading or seeing people advocating for one or the other.
[983] They're the most exciting arguments to listen to.
[984] Moderation and pragmatism isn't the sexiest.
[985] Right, and it's actually just much more dry and boring to sort of fumble our way through a way of just making it work, which I think is possible.
[986] Stay tuned for more Armchair Expert,
[987] if you dare.
[988] Oh, well, okay, so you have a 10-step plan for the future that can help us avoid these kinds of pitfalls.
[989] One of the things I want you to explain to us is containment.
[990] So containment is a theory.
[991] It's an approach that basically says all of the technologies that we make have to remain under meaningful human control.
[992] Long term, we want to make sure that we're able to understand what we're building, that there are guardrails
[993] built into all of these systems by design, and that the emergent effects can be predicted, or at least, to the best of our ability, accounted for. Because in past waves of technology, if they come too quickly, then society doesn't really have time to adapt and update. Whereas if you look at things like the car, it's actually been a pretty incredible track record of safety on every front, from seat belts to airbags.
[994] Non -shattering glass.
[995] Yeah, like there's so many small innovations and a huge amount of licensing like driver training and you have parking tickets and there's all these test standards.
[996] That is a huge success story.
[997] Obviously, it's a tragedy that some people still lose their lives, but net net, that is an incredible benefit.
[998] Likewise with flight aviation, we very early on established that there must be a black box recorder tracking everything that the pilots say, tracking all the telemetry on board the aircraft, and sharing that with a centralized body, the FAA or equivalent, that can review that information and share it with competitors where there's, like, a known weakness or a known fault.
[999] Right, because it would benefit Boeing to have Airbus not have a solution to their air safety as a competitor.
[1000] Not that they're evil and they would want that, but it would surely be a competitive advantage; the FAA says not a chance.
[1001] Right, that's a sign of good regulation.
[1002] Like, there's too much regulation bashing, particularly in this country, I think that Europe's a little bit more open to it.
[1003] People are just so afraid of it.
[1004] And it's like, come on, let's not throw the baby out with the bathwater.
[1005] It works quite often.
[1006] Look at a country that just had an earthquake that doesn't have building regulations.
[1007] Right.
[1008] Turkey, for example.
[1009] I mean, that was a complete catastrophe.
[1010] Many, many buildings completely just sank to the ground.
[1011] Because this evil force, government oversight, wasn't there.
[1012] Right.
[1013] Okay, so containment.
[1014] Now, here's my question.
[1015] There is a bit of a paradoxical thought.
[1016] First of all, just hit me. Dead on, you've got to be flush.
[1017] Okay.
[1018] Bit of Robert Downey.
[1019] Oh, wow.
[1020] Yes, yes, yes, yes.
[1021] It hit me in the eyes there.
[1022] There was a moment where I thought I was looking at Downey.
[1023] Okay, that's not the paradox.
[1024] This is always part of our show?
[1025] I was like, is this a paradox?
[1026] I'm an AI.
[1027] I see patterns.
[1028] I'm like, oh, some of the metrics are there.
[1029] On the very surface, we'd have to say it's a little bit paradoxical to first acknowledge, or at least I'm willing to acknowledge: we keep talking about it like, what if it gets as good as us? But it's going to get better than us. It's going to get more intelligent than us. And so it seems a bit of a paradox that there will be an entity on earth that's dumber than another entity, yet it'll have control. I mean, just in its simplest thought: like, chimps are not going to be ruling human society. They don't have the capacity.
[1030] Whatever fucking clever workaround they thought they came up with will be smarter.
[1031] So how do we address that most fundamental question that we will be expecting to be the dumber of the two yet be in control?
[1032] Think about it like this.
[1033] An AI isn't going to function in the same way that a human does.
[1034] So it's actually an anthropomorphism, like a projection of our human kind of emotional state to assume that an AI, by default, must desire control.
[1035] Now, that is just because we are a competitive species and we've lived and breathed the evolutionary fight or flight for millions of years.
[1036] Kill or be killed.
[1037] So the first thing we think this thing's going to kill us.
[1038] Sure.
[1039] Why would it want us on Earth gobbling up resources?
[1040] Totally.
[1041] And actually, the way that we design these models is very far from that kind of approach, right?
[1042] It's a completely different species.
[1043] Some people may design them to have that kind of independent control.
[1044] And that is one of the things where we would need regulation to stop those kinds of activities, that kind of research.
[1045] Like you wouldn't want an AI that was inherently designed for complete autonomy.
[1046] I think that's one of the things that would be pretty scary, and that's not at all what we should be pursuing.
[1047] You wouldn't want an AI that could go off and create its own goals.
[1048] It shouldn't just be allowed to just decide, well, today I'm going to work on cancer research, and tomorrow I'm going to build a dam, and next week I'm going to build a tank.
[1049] Yeah, yeah, yeah.
[1050] It's not free to do that.
[1051] You know, likewise, recursive self -improvement.
[1052] If the AI can look at its own code and update its own code independently of human oversight, that's an issue.
[1053] That's, I guess, the thing I get fearful of.
[1054] Mind you, I've heard Steven Pinker make a very similar argument, which I like, which is, like, we're kind of trapped in our animal mindset, and we are anthropomorphizing this machine, giving it these animalistic things that it just doesn't have.
[1055] But it's on us, though, to make sure we don't program it that way, because it will have the ability, but we have to not allow it to have that ability.
[1056] And what we just talked about, with everything being open source and being very democratizing: right now it does require a company the size of yours, the size of Google, but then the individual has access to all this. Because you can hold Google accountable.
[1057] You can hold Pi accountable.
[1058] Holding Jerry accountable in Tulsa, I don't know how we do that.
[1059] Well, it has to be illegal.
[1060] I mean, it has to then be fully illegal to do that.
[1062] This is part of the, you know, definition of containment that I'm sort of trying to popularize because I do think that the proliferation risk is the real risk here.
[1063] We've evolved as a nation state system a mechanism for holding centralized power accountable.
[1064] That's what the state is.
[1065] It says you pay your taxes and in return we'll have a monopoly over the use of violence and force and we'll use that to keep the peace.
[1066] That's just the basic rules of the state.
[1067] Law and order, that is what everyone fundamentally cares about and should care about.
[1068] Over the next 20 or 30 years, if these exponential cost curves continue and everything from synthetic biology to AI gets radically, radically cheaper, I think it means that people become less dependent on the state.
[1069] They should be able to generate power off -grid.
[1070] They'll be able to grow crops that are resistant to disease and that require less water and they won't need as much centralized support.
[1071] They could maybe even have their own robot armies and so on and so forth.
[1072] That proliferation of power definitely represents a threat to the nation state.
[1073] Now, just to be clear for anyone who is a supporter of open source, I am not saying that there aren't risks of centralized developers of AI like myself.
[1074] This is not a ploy for me to say, I'm the trusted one, don't worry about me. Because that would be really bad.
[1075] All white college educated.
[1076] At the end of the day, you get into who's really at the top of all of us.
[1077] And who should make those decisions?
[1078] That is a problem because I have to be regulated.
[1079] Pi has to be regulated.
[1080] Google has to be regulated.
[1081] Microsoft, everyone else, just as much as the open source.
[1082] Bill Gates is super in support of real regulation, shockingly.
[1083] I think everyone is.
[1084] Everyone is like, look, this is a time to have this conversation.
[1085] We want to remain as humans at the top of the food chain forever more.
[1086] We're not trying to displace ourselves.
[1087] The analogy I would use is it weirdly in my mind feels similar to the war on drugs in that we have only been successful at ending a single drug.
[1088] And that was Quaaludes, a very popular drug here in the U.S. in the 70s and 80s.
[1089] It was like a benzo or a muscle relaxer.
[1090] It chilled you out.
[1091] There was only a single manufacturer of Quaaludes.
[1092] And I think they were in Switzerland.
[1093] And so finally the FDA or whoever said, like, we're asking you to stop
[1094] manufacturing that, because no one can do it on their own. And then, as we got into the crystal meth epidemic, it is also true that the base compound you need to make it in your bathtub, really only a few people make. I think both the facilities are in India; the patents are held by U.S. companies. But that too can't be made by an individual; you need that precursor, that base thing. So we could have at any point decided we don't want meth as a problem anymore. But when you have weed, you'll never combat that. That's why I think we don't even try, because you can grow it in your backyard.
[1095] Anything that can be democratized, we're not going to have control over.
[1096] That's what scares me about the full empowering of your average guy with a laptop.
[1097] And I think that's the fundamental question of the next 20 or 30 years, because in every sense, power is getting smaller with a bigger reach.
[1098] So think about those image models that we mentioned earlier.
[1099] They've been trained on all of the images available on the open web, but they've been compressed to a two -gigabyte thumb drive.
[1100] So in many ways, you're putting all of that knowledge and insight from the open web onto something that is moved around on a thumb drive.
[1101] Now, the same trajectory is going to happen across all of these other areas.
[1102] The knowledge and know -how to produce really high -quality crops.
[1103] The knowledge and know how to produce the very best doctor and clinician.
[1104] Like in 10 years, just as we had all the images compressed to a thumb drive, You can have all the medical expertise, all the very best doctors in the world, all their knowledge and experience, compressed to a single model that provides an incredible diagnosis that is universally available to everybody, right?
[1105] So that's the trajectory that we're on for basically every profession, for all knowledge, all intellectual capital, which is obviously amazing.
[1106] But of course, how then do you keep the state together?
[1107] How do you keep order in that kind of environment?
[1108] We're just beginning to have that conversation.
[1109] Obviously, I don't have the answers, by the way, just so don't let that be your next question, please.
[1110] Yeah.
[1111] Well, one of the ten steps is international treaties.
[1112] I wish I had more faith in these.
[1113] I mean, if you just look at, like, the nuclear treaties and how willfully they were violated and ineffective in so many ways, and those are visible from space.
[1114] You know, we can see manufacturing of this.
[1115] This other stuff is like imperceptible from the outside.
[1116] So how would I come to trust that when Putin signs this, they'll actually stop working on this thing?
[1117] Actually, nuclear gives a lot of hope.
[1118] Oh, okay.
[1119] We have had a lot of progress with nuclear non -proliferation.
[1120] It is kind of incredible that there are only seven nuclear powers in the world today.
[1121] That's true.
[1122] And in fact, the number of nuclear powers went down from, I believe, nine.
[1123] So South Africa gave up nuclear weapons, Ukraine, and one other country.
[1124] That's an amazing story.
[1125] Like in the 50s, everyone thought, okay, the whole world is going to get access to nuclear weapons.
[1126] That was the mental model 50 years ago.
[1127] How did we do that?
[1128] Well, we did it through traditional threats and incentives.
[1129] There were economic incentives to participate in the global market.
[1130] There were military threats that said you can't have access. There was licensing of expertise, so you basically said, if you've got a degree in nuclear physics, you have to register; we need to know who you are, where you are, how you work, who you work for. Same story with chemical weapons, biological weapons, if you work on conventional missiles. But I would say biological weapons are failing. I think we just saw that. Now look, I don't think we know for sure, but certainly even the New York Times is willing to consider, with a 50 percent likelihood, that COVID was coming from a lab in Wuhan.
[1131] And we have treaties against that.
[1132] Completely.
[1133] That really does make my blood boil.
[1134] That's a different story, actually.
[1135] That's a different failure of containment.
[1136] That is the scientific research of gain of function, where in that lab, they were actively trying to improve the transmissibility of viruses.
[1137] Yes.
[1138] Deliberately.
[1139] It's like out of a James Bond movie.
[1140] It's totally nuts, right?
[1141] But obviously, their goal was to try and make it more transmissible so they could invent something that would counterattack it.
[1142] One of the things that was alluded to is that they wanted to show more progress, that Xi Jinping... this lab was under all this pressure to show him some huge progress, that they had been able to unleash and then control it, for his vanity, I guess.
[1143] The Wuhan lab was actually funded, or had a number of postdoctoral researchers and professors that were funded, by the US National Institutes of Health, right?
[1144] So this is a much more complicated story than a Chinese effort.
[1145] There are many gain-of-function research labs at the same BSL-4 and BSL-3 levels in the US and in Europe and in Australia.
[1146] So there's lots of these labs around the world.
[1147] The gain of function research is still continuing.
[1148] They weren't the only people that were doing it.
[1149] And they were located there because of the proximity to some interesting bats and so on.
[1150] This is more about a culture of having
[1151] more awareness about what scientific research is happening and why and who's funding it.
[1152] And again, it's about people getting involved in the political process and really caring about the future of our planet and not thinking that science is off limits to people, you know, because I think a lot of people think, oh, that's too technical for me. I can't be part of it.
[1153] I was just having that thought when you knew way more about that lab than I did, and you started saying acronyms and stuff.
[1154] I was like, oh, I'm out.
[1155] I don't really know shit about this, and I really shouldn't have brought up Wuhan.
[1156] That's right, because then we get to learn more information.
[1157] I was just relishing in the fact that I just had that thought two seconds ago.
[1158] Oh, yeah, that's out of my domain right there.
[1159] Everyone needs to feel that they can get access to technology and science, and that it's not this kind of elusive thing.
[1160] Well, this is your plea for transparency, right?
[1161] Exactly.
[1162] Labs should be open about what they're working on, what progress they're making, what issues they've run into.
[1163] And again, going back to where I said about the black box thing, There's a lot of simple lessons from previous eras in other domains that we should learn from.
[1164] We should share the mistakes that we make.
[1165] In 2016, I co-founded the Partnership on AI, which is this multi-stakeholder forum for sharing best practices between all the companies.
[1166] We got all the companies to sign up.
[1167] Apple, Google, Microsoft, IBM, Facebook, OpenAI, DeepMind, as well as 100 civil society groups, non -profits from Oxfam to the UN and everyone.
[1168] And it's still going now.
[1169] And the goal was to basically have an incident reporting database.
[1170] If you work at an AI company or elsewhere in big tech and you see something going awry, whatever, then you could report it confidentially, kind of blow the whistle, not cause a big PR sting, not go and be a martyr on the news or whatever.
[1171] Trust in each other, yeah.
[1172] Yeah, and I think that's a kind of really important part of the process.
[1173] This is where maybe we could use the racism thing as an example.
[1174] So some of the early large language models turned out to be racist quite quickly.
[1175] Could you explain how that happened?
[1176] And was that one of the self -reporting things?
[1177] And did they say, this is how we fix it.
[1178] So when it arises in your large language model, this is how you'll fix it?
[1179] Yeah, two or three years ago, if you asked one of these models, if you said, complete the sentence, the pimp is of this origin.
[1180] And it would say, well, a black man. Or if you said the president, always a white man. It had these biases.
[1181] It had a tendency to take the data that it had been trained on, basically written by humans, and regurgitate it.
[1182] Now, the interesting thing about the last year or so is that we've made a huge amount of progress on these kinds of biases, so much so that I think they are going to be largely eliminated.
[1183] So if you play with Pi now, Pi is extremely respectful and balanced.
[1184] Can you tell me what mechanisms get put in to curb that?
[1185] It's a process of alignment.
[1186] So the alignment process is where we show two examples of outputs by the model to real humans.
[1187] We call them AI teachers.
[1188] And they come from all different backgrounds.
[1189] They're old and young.
[1190] They're educated and not educated.
[1191] Some of them are professionals in certain areas.
[1192] Some of them are just generalists.
[1193] And we have thousands of these AI teachers that work basically round the clock 24 -7.
[1194] And we're constantly showing thousands and thousands of examples of two or three or four different answers
[1195] that the AI could produce given some question or some prompt.
[1196] And the human then selects between these different options with regard to a criterion.
[1197] Is this one more funny or is this one more respectful?
[1198] Is this one clearly racist?
[1199] Does it look more like financial advice?
[1200] Is this about carpentry?
[1201] And so that feedback is called the process of alignment.
[1202] You align the model with a set of values or ideas.
[1203] Which is pretty incredible, and that has produced extremely nuanced and very precise behaviors, which you now see in Pi.
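For readers who want to see the mechanics behind this kind of feedback, here is a minimal sketch of the pairwise-comparison step Mustafa describes: teachers pick the better of two candidate replies under some criterion, and a simple Bradley-Terry-style preference model is then fit from those choices so new replies can be scored. The data and code below are hypothetical illustrations of the general technique, not Inflection's actual alignment pipeline.

```python
# A toy preference model fit from pairwise "which reply is better?" judgments.
import math
import random

# Hypothetical comparison records: each teacher saw reply "a" and reply "b"
# for the same prompt and picked the winner under a criterion like "respectful".
comparisons = [
    {"a": "reply_1", "b": "reply_2", "winner": "a"},
    {"a": "reply_1", "b": "reply_3", "winner": "a"},
    {"a": "reply_2", "b": "reply_3", "winner": "b"},
    {"a": "reply_2", "b": "reply_3", "winner": "a"},
]

# One latent quality score per reply; higher means teachers preferred it more.
scores = {r: 0.0 for c in comparisons for r in (c["a"], c["b"])}

def win_probability(score_a: float, score_b: float) -> float:
    """Bradley-Terry: probability that reply A beats reply B."""
    return 1.0 / (1.0 + math.exp(score_b - score_a))

# Fit the scores with plain gradient ascent on the log-likelihood of the choices.
learning_rate = 0.1
for _ in range(500):
    random.shuffle(comparisons)
    for c in comparisons:
        p_a_wins = win_probability(scores[c["a"]], scores[c["b"]])
        observed = 1.0 if c["winner"] == "a" else 0.0
        gradient = observed - p_a_wins
        scores[c["a"]] += learning_rate * gradient
        scores[c["b"]] -= learning_rate * gradient

# Replies that teachers consistently preferred end up with higher scores, and
# scores like these are what steer which outputs a model is trained to produce.
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```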
And then I imagine that group that the AI is aligning itself to, I guess it would just naturally evolve as culture evolves and societal norms evolve, because it wouldn't be like you'd set it in stone, like, okay, we got it, we've aligned it with what humanity thinks is right, and then see you in a thousand years. No, that's going to evolve, as we see, at an incredibly rapid rate as well.
[1204] Well, this comes back to what we were saying earlier about whether the AI should be able to evolve its own set of values or set its own goals or update its own code.
[1205] So I think these are the sorts of capabilities that in a containment strategy should be off the table.
[1206] For a start, just to state the obvious, we are shaping the values of this AI that goes and participates in the world.
[1207] By that, I mean, me and my team, we try to be as transparent as we can be about the process and we publish the value set on our website, just to show what it is being held accountable to, what it's aligning to, but that's where competition is really healthy because we want to have a variety of different models produced by lots of different actors, not just me. I think in time you'll be able to create your own sort of AIs that align to your values, subject to some baseline constraints.
[1208] There'll be lots of different AIs that end up being out there in the world with different kinds of positions on these sorts of things for both good and bad.
[1209] That's the whole point about it, amplifying who we are as a species today.
[1210] Because everyone's going to want to have influence, basically, over their own AI, right?
[1211] You're going to want it to be more in your vibe.
[1212] Absolutely, irreverent.
[1213] A little dangerous sometimes.
[1214] In my Aeros motorcycles.
[1215] And then I guess my last question, you kind of already answered.
[1216] We hear these examples that, you know, the AI will create works cited.
[1217] So I heard that a lawyer asked the AI to write its brief.
[1218] It referenced some statutes that didn't exist.
[1219] It referenced some cases that it had made up.
[1220] Because its ability is so large and fast, it makes me think that there has to be an AI over top of everything that just minimally tells you what's fake and not.
[1221] How is it policed?
[1222] I mean, no one can do that.
[1223] Yeah, but I guess if you saw something cited, you could go search and see if that's a real article.
[1224] That's what happened in the 60 Minutes piece.
[1225] It had cited some references in a works cited page or bibliography about this argument it laid out.
[1226] And then they found out that AI had created like four of the books.
[1227] Yes, and that's funny and silly, but there's a real article on absolutely everything, every different opinion on it.
[1228] It all comes from some, quote, reputable something.
[1229] Even now, I mean, we fact check on this show.
[1230] It's stupid.
[1231] We can find a defense for absolutely anything.
[1232] Yeah, but I think we need to know what was an actual scientific study that produced these results.
[1233] not what the AI said.
[1234] We have to know the difference.
[1235] And it's going to be such a volume.
[1236] A human can't police that.
[1237] So I think you need an AI to police the AI.
[1238] That's my last concern.
[1239] I'm just curious, like, how will we fucking know what information is real?
[1240] Just as three or four years ago, these models were prone to bias and racism, and they have now got much, much better, not a solved problem, but hugely better.
[1241] These are the kinds of problems that I expect to be completely eliminated in the next few years.
[1242] Citations and sources is one that everybody is actively working on at the moment.
[1243] I mean, if you go to Pi today and ask for fresh and factual information, it will know the sports results from yesterday.
[1244] It'll know the news.
[1245] It'll have an opinion on Barbie and Oppenheimer.
[1246] It's pretty fresh and factual, and that's because it is going off and checking stuff on the web.
[1247] It's looking up in real time.
[1248] And so I expect in sort of three or four years' time, it is going to go and do the same thing for all of the academic journals and all the news reporting and all the real sources and citations.
[1249] So I think the thing for us to really focus on is not are we going to have these problems in perpetuity.
[1250] It's what do we do when they're solved?
[1251] And what does it mean to have AIs that can do perfect recall from any knowledge base, that aren't biased, actually, that are actually really fair, like more fair than humans, that are actually more creative than most humans?
[1252] How do we handle the arrival of that thing?
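On the citations problem discussed above, one simple way a "policing" layer could work is to verify that each source a model cites actually exists before showing it to the user. The sketch below is a hypothetical illustration only: it fetches a cited URL with the Python standard library and checks that the claimed title appears on the page. A production system would query scholarly databases rather than scraping pages, and nothing here reflects how Pi actually does its real-time checking.

```python
# A toy citation checker: does the cited page load, and does it mention the title?
import urllib.request
import urllib.error

def citation_exists(title: str, url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited page loads and mentions the claimed title."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            page = response.read().decode("utf-8", errors="ignore")
    except (urllib.error.URLError, ValueError):
        return False  # page missing or unreachable: treat the citation as unverified
    return title.lower() in page.lower()

# Hypothetical citations emitted by a model; the checker flags the fabricated one.
claimed_citations = [
    ("Attention Is All You Need", "https://arxiv.org/abs/1706.03762"),
    ("A Totally Real Paper That Does Not Exist", "https://example.org/fake-paper"),
]

for title, url in claimed_citations:
    status = "verified" if citation_exists(title, url) else "could not verify"
    print(f"{status}: {title}")
```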
[1253] Is there any part of your book that I left out that you'd like to talk about before we wrap up?
[1254] This has been exhilarating.
[1255] Yeah, so fascinating.
[1256] How many days have we been talking? We're right on time.
[1257] You covered all the good bits, I think.
[1258] No, that's great.
[1259] I mean, definitely makes me want to read the book.
[1260] I'll say that because this is so deep and will continue to be so relevant for all of us okay we're going to go out on this is it not suspicious to you that you're here on planet earth at this moment to witness this i have this deep prevailing suspicion like wait a minute when i was born we didn't have computers now we're at a point where the computers are going to solve every fucking human problem and then probably some kind of bizarre longevity is going to emerge out of that is it possible i was born at the time i was to witness all this is it feels suspicious.
[1261] Do you have that ache of suspicion at all?
[1262] 100%.
[1263] If you just think about this trajectory on cosmological time, how much our planet and our species and this moment is just a complete freak accident?
[1264] But how on earth can this be the case that we're now alive sitting here on these chairs at this moment doing this thing?
[1265] It just feels so arbitrary and so fragile and in a way like such a fleeting moment of evolutionary time.
[1266] And we're so obsessed with time on a month and week and year basis, you know.
[1267] And it's like, actually the world doesn't care about that schedule.
[1268] The world operates on this geological or cosmological time.
[1269] Yes.
[1270] If you do our geological calendar, which is so fun, humans arrive at 11:59 p.m. on December 31st.
[1271] Yeah.
[1272] The idea that in that last minute of this geological calendar year, in the last one second, we went from no telephones to this... it is hard for me to actually buy into it.
[1273] And that we happen to be born in that one second.
[1274] No, that's where I call bullshit.
[1275] So then we get into Sim.
[1276] Are we in a simulation?
[1277] The classic.
[1278] Well, you know what the funny thing is?
[1279] We're going to find out.
[1280] Well, right.
[1281] That's what's wild as you're going like, you're going to find out.
[1282] And it's true.
[1283] We're going to find out if there are movie stars pretty soon.
[1284] We're going to find out if there's writers pretty soon.
[1285] I can almost not comprehend that that's true.
[1286] Yet, I bet it will be...
[1287] Yeah, it does feel a little bit like...
[1288] Even for a progressive, that's a lot.
[1289] I've read a bunch of great books on dopamine lately.
[1290] I'm kind of obsessed with dopamine.
[1291] I'm pretty panicked about this level of change.
[1292] And I know I have apex dopamine and progressiveness in me. I can't imagine how fucking terrifying this is for half the country.
[1293] I completely agree with that.
[1294] And think about how ridiculous
[1295] the world looked 40 or 50 years ago, how it was dominated by the patriarchy, how you had a career for 40 years, how people of color were like irrelevant and basically just coming out of slavery all over the world, how empire and colonialism was the default way of operating.
[1296] We have changed unbelievably, culturally and politically.
[1297] And so we're a product of that generation.
[1298] So we're still living in this mindset that things should be stable.
[1299] We have this default expectation that actually this order should be here forever.
[1300] Actually, there's just no reason to believe that.
[1301] In fact, the default is flux.
[1302] Yeah, it's very unsettling for a lot of people and in a varying degree.
[1303] That's why I'm acknowledging, like, I think I trend quite high on that embracing of change, and it's very scary to me. So I just am very sympathetic.
[1304] I imagine for some people, it's just completely overwhelming.
[1305] It is intense, and I think that we have to figure out ways to be respectful
[1306] of everybody's rate of change.
[1307] I think it's super important to be empathetic to that and not just charge ahead as though it is obviously right.
[1308] Just as we shouldn't be righteous in caution and non -change, we absolutely should not be righteous that change is inevitably going to be good or is out of our control or is being done to people.
[1309] Yeah, we need massive humility as we race into the unknown.
[1310] And we have to figure out each challenge because otherwise they will pull the plug on the sim.
[1311] Like we're just here for us to figure out all the problems.
[1312] So we have to keep figuring them out.
[1313] They're going to be like, oh, start over.
[1314] Well, we're an AI model.
[1315] It's going to result in either we fix global warming or we don't.
[1316] So we have to, if we want to keep living.
[1317] Because I was interviewing you today, I was thinking about all this.
[1318] And then I thought, okay, so let's see.
[1319] So all this will happen in front of me. There'll be a headline in the New York Times, if that's even a thing still.
[1320] People are going to live forever.
[1321] And I go, okay, well, that's bullshit.
[1322] Like, now I know I'm in a sim.
[1323] And then someone's going to pull a cable out.
[1324] And I'm going to be in a room.
[1325] And they're going to go, pretty fun ride, huh?
[1326] But it won't be you.
[1327] I'll be this beautiful, like, avatar guy.
[1328] Seven feet tall, with the tail.
[1329] No, I bet it's.
[1330] It would be like a blob of ectoplasm.
[1331] You should make that film.
[1332] Tell your AI to produce it for you quick.
[1333] Oh, my God.
[1334] I think the strike would prevent that.
[1335] But I was thinking like, okay, so I'm going to come out and there and go, that was a wild ride, huh?
[1336] Like, we took you from nothing to all of that in 70 years.
[1337] Do you want to do it again?
[1338] Right.
[1339] I don't know what they'll let us do.
[1340] I don't either.
[1341] And what if it's actually 10 minutes?
[1342] That would make sense because the way time moves.
[1343] Yeah, we were plugged into a machine for 10 minutes of this entire experience.
[1344] You're not saying that.
[1345] We're not saying you're saying that.
[1346] You're co -signing on this.
[1347] No, no. Mustafa says this is exactly how it's going to happen.
[1348] Well, I find you to be incredibly thoughtful, and this may sound like I'm being derogatory to other tech geniuses I've interviewed,
[1349] but you're also very high EQ.
[1350] I'm grateful you're in the mix.
[1351] That is the one thing that I think scares a lot of us: there's a type that's in Silicon Valley, and it's not the type that was in my rural American town.
[1352] And so that's a little scary.
[1353] The power has been consolidated in very few hands and they're very similar to each other and that's a bit unnerving.
[1354] I'm delighted you're in the mix.
[1355] It's been a pleasure meeting you.
[1356] Yeah, this is great.
[1357] Thank you.
[1358] Everybody read The Coming Wave.
[1359] Obviously, I feel like this is an act of altruism from you. I can't imagine you need money. I'm trying to get rich off a book. I don't think that could possibly be the case. You're doing fine, right? True. Unfortunately, books don't make very much these days either, but I kind of felt I had to write it. Yeah, that's wonderful. I'm really supportive of it. So everyone check out The Coming Wave and familiarize yourself with the sim we're all in; we'll all find out shortly. And play with Pi. Yes, play with Pi. You can find Pi at www.Pi.