The Godfather of A.I. Has Some Regrets

The Daily

Full Transcription:

[0] From the New York Times, I'm Sabrina Tavernise, and this is The Daily.

[1] As the world begins to experiment with the power of artificial intelligence, a debate has begun about how to contain its risks.

[2] One of the sharpest and most urgent warnings has come from the man who helped invent the technology.

[3] Today, my colleague Cade Metz speaks to Geoffrey Hinton, who many consider to be the godfather of AI.

[4] It's Tuesday.

[5] May 30th.

[6] Cade, welcome to the show.

[7] Glad to be here.

[8] So a few weeks ago, you interviewed Geoffrey Hinton, a man who many people know as the godfather of AI.

[9] And aside from the obvious fact that AI is really taking over all conversations at all times, why talk to Jeff now?

[10] I've known Jeff a long time.

[11] I wrote a book about the 50-year rise of the ideas that are now driving chatbots like ChatGPT and Google Bard.

[12] And you could argue that he is the most important person to the rise of AI over the past 50 years.

[13] And amidst all this that's happening with these chatbots, he sent me an email and said, I'm leaving Google and I want to talk to you.

[14] And that he wants to discuss where this technology is going, including some serious concerns.

[15] Who better to talk to than the godfather of AI?

[16] Exactly.

[17] So naturally, I got on a plane and I went to Toronto.

[18] Jeff.

[19] Come on, come on.

[20] It's great to see you.

[21] Nice to see you too.

[22] To sit down at his dinner table and discuss.

[23] Would you like?

[24] A cup of coffee, a cup of tea, a beer, some whiskey.

[25] Well, if you've made some coffee, I'll have some coffee.

[26] Jeff is a 75-year-old Cambridge-educated British man who now lives in Toronto.

[27] He's been there since the late 80s.

[28] He's a professor at the university.

[30] My question is, somewhere along the way, people started calling you the godfather of AI.

[31] And I'm not sure it was meant as a compliment.

[32] And do AI researchers, you know, come to your door and kneel before you and kiss your hand?

[33] Like, how does it work?

[34] No, no, they don't.

[35] They don't.

[36] And I never get to ask them for favors.

[37] So how does Jeff become the godfather of AI?

[38] Where does his story start?

[39] It starts in high school.

[40] He grew up the son of an academic, but he always tells the story about a friend describing a theory of how the brain works.

[41] And he wrote about holograms, and he got interested in the idea that memory in the brain might be like a hologram.

[42] This friend talked about the way the brain stores memories, and that he felt it stored these memories like a hologram.

[43] A hologram isn't stored in a single spot.

[44] It's divided into tiny pieces and then spread across a piece of film.

[45] And this friend felt that the brain stored memories in the same way, that it broke these memories into pieces and stored them across the network of neurons in the brain.

[46] It's quite beautiful, actually.

[47] It is.

[48] And we talked about that, and I've been interested in how the brain worked ever since.

[49] That sparked Jeff's interest, and from there on, he spent his life in the pursuit of trying to understand how the brain worked.

[51] So how does Jeff start to answer the question of how the brain works?

[52] So he goes to Cambridge, and he studies physiology, looking for answers from his professors.

[53] Can you tell me how the brain works?

[54] And his physiology professors can't tell him.

[55] And so I switched to philosophy, and then I switched to psychology, in the hopes that psychology would tell me more about the mind, and it didn't.

[56] And no one can tell him how the brain works.

[57] The layperson might ask, don't we understand how the brain works?

[58] No, we don't.

[59] We understand some things about how it works.

[60] I mean, we understand that when you're thinking or when you're perceiving, there's neurons, brain cells, and the brain cells fire, they go ping and send the ping along an axon to other brain cells.

[61] We still don't know the details of how the neurons in our brains communicate with one another as we think and learn.

[62] And so all you need to know now is, well, how does it decide on the strengths of the connections between neurons?

[63] If you could figure that out, you understand how the brain works, and we haven't figured it out yet.

[64] He then moves into a relatively new field called artificial intelligence.

[65] The field of artificial intelligence was created in the late '50s by a small group of scientists in the United States.

[66] Their aim was to create a machine that could do anything the human brain could do.

[67] And in the beginning, many of them thought they could build machines that operated like the network of neurons in the brain, what they called artificial neural networks.

[68] But 10 years into this work, progress was so slow that they assumed it was too difficult to build a machine that operated like the neurons in the brain, and they gave up on the idea.

[70] So they embraced a very different way of thinking about artificial intelligence.

[71] They embraced something they called symbolic AI.

[72] You would take everything that you and I know about the world and put them into a list of rules.

[73] Things like: you can't be in two places at the same time, or when you hold a coffee cup, you hold the open end up. The idea was that you would list all these rules step by step, line of code by line of code, and then feed that into a machine, and that would give it the power that you and I have in our own brains. So essentially, tell the computer every rule that governs reality, and the computer makes decisions based on all of those rules.

[74] Right.

[75] But then Jeff Hinton comes along in 1972 as a graduate student in Edinburgh, and he says, wait, wait, wait, that is never going to happen.

[76] That's a lot of rules.

[77] You will never have the time and the patience and the person power to write all those rules and feed them into a machine.

[78] I don't care how long you take, he says, it is not going to happen.

[79] And by the way, the human brain doesn't work like that.

[80] That's not how we learn.

[81] So he returns to the old idea of a neural network that was discarded earlier by other AI researchers.

[82] And he says, that is the way that we should build machines that think.

[83] We have them learn from the world like humans learn.

[84] So instead of feeding the computer a bunch of rules, like the other guys were doing, you'd actually feed it a bunch of information.

[85] And the idea was that the computer would gradually sort out how to make sense of it all, like a human brain.

[86] You would give it examples of what is happening in the world, and it would analyze those examples and look for patterns in what happens in the world and learn from those patterns.

[87] But Jeff is taking up an idea that had been largely discarded by the majority of the AI community.

[88] Did he have any evidence that his approach was actually going to work?

[89] The only reason to believe it might work at all was because the brain works.

[90] And that was the main reason for believing there was any hope at all.

[91] His only evidence was that basically this is how the human brain worked.

[92] It was widely dismissed as just a crazy idea that was not going to work.

[93] And at the time, many of his colleagues thought he was silly for even trying.

[94] How did that feel to have most of your colleagues tell you that you were working on a crazy idea that would never work?

[95] It felt very like when I was at school, when I was 9 and 10.

[96] I came from an atheist family, and I went to a Christian school, and everybody was saying, of course God exists.

[97] I was saying, no, he doesn't, and where's he else?

[98] So I was very used to being the outsider and believing in something that was obviously true that nobody else believed in.

[99] And I think that was very good training.

[100] Okay, so what happened next?

[101] So after graduate school, Jeff moves to the United States.

[102] He's a postdoc at a university in California, and he starts to work on an algorithm, a piece of math that can realize his idea.

[103] And what exactly does this algorithm do?

[104] Jeff essentially builds an algorithm in the image of the human brain.

[105] Remember, the brain is a network of neurons that trade signals.

[106] That's how we learn.

[107] That's how we see.

[108] That's how we hear.

[109] What Jeff did that was so revolutionary was he recreated that system in a computer.

[110] He created a network of digital neurons that traded information, much like the neurons in the brain.

[111] So that question he set out to answer all those years ago, you know, how do brains work?

[112] He answered it, only for computers, not for humans.

[113] Right.

[114] He built a system that allowed computers to learn on their own.

[115] In the 80s, this type of system could learn in small ways.

[116] It couldn't learn in the complex ways that could really change our world.

[117] But fast forward a good three decades, and Jeff and two of his students built a system that really opened up the eyes of a lot of people to what this type of technology was capable of.

[119] He and two of his students at the University of Toronto built a system that could identify objects in photos.

[120] The classic example is a cat.

[121] What they did was take thousands of cat photos and feed them into a neural network.

[122] And in analyzing those photos, the system learned how to identify a cat.

[123] It identified patterns in those photos that define what a cat looks like.

[124] the edge of a whisker, the curve of a tail.

[125] And over time, by analyzing all those photos, the system could learn to recognize a cat in a photo it had never seen before.

[126] They could do this not only with cats, but with other objects, flowers, cars.

[127] They built a system that could identify objects with an accuracy that no one thought was possible.

[128] So it's basically image recognition, right?

[129] It's presumably why my phone can sort pictures of my family and deliver whole albums of pictures just of my husband or just of my dog and photographs of, you know, a hug or a beach.

[130] Right.

[131] So in 2012, all Jeff and his students did was publish a research paper describing this technology, showing what it could do.

[132] What happens to that idea in the large sense over the next decade?

[133] It took off. That set off a race for this technology in the tech industry. So we decided what we would do is just take the big companies that were interested in us, and we would sell ourselves. There was a literal auction for Jeff and his two students and their services. We sell the intellectual property, plus the three of us. Google was part of the auction. Microsoft, another giant of the tech world.

[134] Baidu often called the Google of China.

[135] Over two days, they bid for the services of Jeff and his two students to the point where Google paid $44 million, essentially, for these three people who had never worked in the tech industry.

[136] And that worked out very nicely.

[137] I came next.

[138] So what does Jeff do at Google after this bidding war for his services?

[140] He works on increasingly powerful neural networks.

[141] And you see this technology move into all sorts of products, not only at Google, but across the industry.

[142] Well, all the big companies like Facebook and Microsoft and Amazon and the Chinese companies, all develop big teams in that area.

[143] And it was just sort of used everywhere.

[144] This is what drives Siri and other digital assistants.

[145] When you speak commands into your cell phone, it's able to recognize what you say because of a neural network.

[146] When you use Google Translate, it uses a neural network to do that.

[147] There are all sorts of things that we use today that use neural networks to operate.

[148] So we see Jeff's idea really transforming the world, powering things that we use all the time in our daily lives without even thinking about it.

[150] Absolutely.

[151] But this idea, at Google and in other places, is also applied in situations that make Jeff a little uneasy.

[152] The prime example is what's called Project Maven.

[153] Google went to work for the Department of Defense, and it applied this idea to an effort to identify objects in drone footage.

[154] If you can identify objects in drone footage, you can build a targeting system.

[155] If you pair that technology with a weapon, you have an autonomous weapon.

[156] That raised the concerns of people across Google at the time.

[157] I was upset too, but I was a vice president at that point.

[158] So I was a sort of executive of Google.

[159] And so rather than publicly criticizing the company, I was doing stuff behind the scenes.

[160] Jeff never wanted his work applied to military use.

[161] He raised these concerns with Sergey Brin, one of the founders of Google, and Google eventually pulled out of the project.

[162] And Jeff continued to work at the company.

[163] Maybe I should have gone public with it, but I thought it was somehow not right to bite the hand that feeds you, even if it's a corporation.

[165] But around the same time, the industry started to work on a new application for the technology that eventually made him even more concerned.

[166] It began applying neural networks to what we now call chatbots.

[168] Essentially, companies like Google started feeding massive amounts of text into neural networks, including Wikipedia articles, chat logs, digital books.

[169] These systems started to learn how to put language together in the way you and I put this language together.

[170] The auto completion on my email, for example.

[171] Absolutely.

[172] But taken up to an enormous scale.

[174] As they fed more and more digital text into these systems, they learned to write like a human.

[175] This is what has resulted in chatbots like ChatGPT and Bard.

[176] And what gave Jeff pause about all of this?

[177] But why was he so concerned?

[178] What's happened to me over the last year is I've changed my mind completely about whether these are just not yet adequate attempts to model what's going on in the brain.

[179] That's how they started off.

[180] Well, he still feels like these systems are not as powerful as the human brain, and they're not.

[181] They're still not adequate to model what's going on in the brain.

[182] They're doing something different and better.

[183] But in other ways, he realizes they're far more powerful.

[184] More powerful, how exactly?

[185] Jeff thinks about it like this.

[186] If you learn something complicated, like a new bit of physics, and you want to explain it to me, in our brains, all our brains are a bit different, and it's going to take a while and be an inefficient process.

[187] You and I have a brain that can learn a certain amount of information, and after I learn that information, I can convey that to you.

[188] But that's a slow process.

[189] Imagine if you had a million people, and when any one of them learns something, all the others automatically know it.

[190] That's a huge advantage.

[191] And to do that, you need to go digital.

[192] With these neural networks, Jeff points out, you can piece them together.

[193] A small network that can learn a little bit of information can be connected to all sorts of other neural networks that have learned from other parts of the internet.

[194] And those can be connected to still other neural networks that learn from additional parts.

[195] So these digital agents, as soon as one of them learns something, all the others know it.

[196] They can all learn in tandem, and they can trade what they have learned with each other in an instant.

[198] It means that many, many copies of a digital agent can read the whole internet in only a month.

[199] We can't do that.

[200] That's what allows them to learn from the entire internet.

[201] You and I cannot do that individually, and we can't do it collectively.

[202] Even if each of us learns a piece of the internet, we can't trade what we have learned so easily with each other, but machines can.

[203] Machines can operate in ways that humans cannot.

[204] So what does all this add up to for Jeff?

[205] Well, in a sense, he sees this as a culmination of his 50 years of work.

[206] He always assumed that if you threw more data at these systems, they would learn more and more.

[207] He didn't think they would learn this much, this quickly, and become this powerful.

[208] Look at how it was five years ago and look at how it is now, and take that difference and propagate it forwards, and that's scary.

[209] We'll be right back.

[210] Okay, so what exactly is Jeff afraid of when he realizes that AI has this turbocharged capability?

[211] There's a wide range of things that he's concerned about.

[212] At the small end of the scale are things like hallucinations and bias.

[213] Scientists talk about these systems hallucinating, meaning they make stuff up.

[214] If you ask a chatbot for a fact, it doesn't always tell you the truth.

[215] And it can respond in ways that are biased against women and people of color.

[216] But as Jeff says, those issues are just a byproduct of the way chatbots mimic human behavior.

[217] We can confabulate.

[218] We can be biased.

[219] And he believes all that will soon be ironed out.

[220] So I don't, I mean, bias is a horrible problem, but it's a problem that comes from people, and it's easier to fix it in a neural network than it is in a person.

[221] Where he starts to say that these systems get scary are first and foremost with the problem of disinformation.

[222] I see that as a huge problem, not being able to know what's true anymore.

[223] These are systems that allow organizations, nation states, other bad actors, to spread disinformation at a scale and an efficiency that was not possible in the past.

[224] These chatbots are going to make it easier for them to do that, and to make very good fake videos.

[225] They can also produce photorealistic images and videos.

[226] Deepfakes.

[227] Right.

[228] They're getting better quite quickly.

[229] He, like a lot of people, is worried that the internet will soon be flooded with fake text, fake images and fake videos, to the point where we won't be able to trust anything we see online.

[230] So that's the short-term concern.

[231] Then there's a concern in the medium term, and that's job loss.

[232] Today, these systems tend to complement human workers, but he's worried that as these systems get more and more powerful, they will actually start replacing jobs in large numbers.

[233] And what are some examples?

[234] A place where it can obviously take away all the drudge work, and maybe more besides, is in computer programming.

[235] None too surprisingly, Jeff, a computer scientist, points to the example of computer programmers.

[236] These are systems that can write computer programs on their own.

[237] So it may be that in computer programming, you don't need so many programmers anymore, because you can tell one of these chatbots what you want the program to do.

[238] Those programs are not perfect today.

[240] Programmers tend to use what they produce and incorporate the code into larger programs.

[241] But as time goes on, these systems will get better and better and better at doing a job that humans do today.

[242] And you're talking about jobs that aren't really seen as being vulnerable because of tech up until this point, right?

[243] Exactly.

[244] The thinking for years was that artificial intelligence would replace blue-collar jobs, that robots, physical robots, would do manufacturing jobs and sorting jobs in warehouses.

[245] But what we're seeing is the rise of technology that can replace white-collar workers, people that do office work.

[246] So that's the medium term.

[247] Then there are more long-term concerns.

[248] And let's remember that as these systems get more and more powerful, Jeff is increasingly concerned about how this technology will be used on the battlefield.

[249] The U.S. Defense Department would like to make robot soldiers, and robot soldiers are going to be pretty scary.

[250] In an offhanded way, he refers to this as robot soldiers.

[251] Like actually soldiers that are robots?

[252] Yes, actually soldiers that are robots.

[253] And the relationship between a robot soldier and your idea is pretty simple.

[254] You are working on computer vision.

[255] If you have computer vision, you give that to a robot.

[256] It can identify what's going on in the world around it.

[257] If it can identify what's going on, it can target those things.

[258] Also, you can make it agile.

[259] So you can have things that can move over rough ground and can shoot people.

[260] And the worst thing about robot soldiers is if a large country wants to invade a small country, they have to worry a bit about how many Marines are going to die.

[261] But if they're sending robot soldiers, instead of worrying about how many Marines are going to die, the people who fund the politicians are going to say, great, you're going to send these expensive weapons that will get used up.

[262] The military industrial complex would just love robot soldiers.

[263] What he talks about is potentially this technology lowering the bar to entry for war, that it becomes easier.

[264] for nation states to wage war.

[265] So it's kind of like drones.

[266] The people doing the killing are sitting in an office with a remote control really far away from the people doing the dying.

[267] No, it's actually a step beyond that.

[268] It's not people controlling the machines.

[269] It's the machines making decisions on their own increasingly.

[270] That is what Jeff is concerned about.

[271] And then there's the sort of existential mind.

[272] of this stuff getting to be much more intelligent than less than just taking over.

[273] His concern is that as we give machines certain goals, as we ask them to do things for us, that in service of trying to reach those goals, they will do things we don't expect them to do.

[274] So he's worried about unintended consequences?

[275] Unintended consequences.

[276] And this is where we start to venture into the realm of science fiction.

[278] Do you read me, HAL?

[279] For decades, we've watched this play out in books and movies.

[280] Affirmative, Dave, I read you.

[281] If anyone has seen Stanley Kubrick's great film 2001...

[282] Mm-hmm.

[283] Open the pod bay doors, HAL.

[284] I'm sorry, Dave.

[285] I'm afraid I can't do that.

[286] This mission is too important for me to allow you to jeopardize it.

[287] We've watched the HAL -9 ,000, spin outside the computer.

[288] of the people who created it.

[289] I know that you and Frank were planning to disconnect me. Where the hell did you get that idea, HAL?

[290] Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.

[291] That is a scenario, believe it or not, that Jeff is concerned about, and he is not alone.

[292] Basically, robots taking over.

[293] Exactly.

[294] If you give one of these superintelligent agents a goal, it's going to very quickly realize that a good sub-goal, for more or less any goal, is to get more power.

[295] Whether these technologies are deployed on the battlefield or in an office or in a computer data center, Jeff is worried about humans ceding more and more control to these systems.

[296] We love to get control, and that's a very sensible goal to have, because if you've got control, you can get more done.

[298] But these things are going to want to get control too for the same reason, just in order to get more done.

[299] And so that's a scary direction.

[300] So this sounds pretty far-fetched, honestly.

[301] But like, okay, let's play it out as if it wasn't.

[302] Like, what would be that doomsday scenario?

[303] Paint the picture for me. Think about it in simple terms.

[304] If you ask a system to make money for you, which people, by the way, are already starting to do, can you use ChatGPT to make money on the stock market?

[305] As people do that, think of all the ways that you can make money and think of all the ways that that could go wrong.

[306] That is what he's talking about.

[307] Remember, these are machines.

[308] Machines are psychopaths.

[309] They don't have emotions.

[310] They don't have a moral compass.

[311] They do what you ask them to do.

[312] Make us money? Okay, we'll make you money.

[313] Perhaps you break into a computer system in order to steal that money.

[314] If you own oil futures in Central Africa, perhaps you foment a revolution to increase the price of those futures, to make money from it.

[315] Those are the kind of scenarios that Jeff and many other people I've talked to relate.

[316] What I should say at this point, though, is that this is hypothetical as we stand today.

[317] A system like ChatGPT is not going to destroy humanity.

[318] Full stop.

[319] Good.

[320] And if you bring this up with a lot of experts in the field, they get angry that you even bring it up.

[321] And they point out that this is not possible today.

[322] And I really push Jeff on this.

[323] But how do you see that existential risk relative to what we have today?

[324] I mean, today, you know, you have GPT-4, and, you know, it does a lot of things that you don't necessarily expect, but it doesn't have the resources it needs to write computer programs and run them.

[325] You know, it doesn't have everything that you need.

[326] Right, but suppose that you gave it a high level goal, like, be really good at summarizing text or something.

[327] And it then realizes, okay, to be really good at that, I need to do more learning.

[328] How am I going to do more learning?

[329] Well, if I could grab more hardware and run more copies of myself.

[330] It doesn't work that way today, though, right?

[331] It requires someone to say, have all the hardware you want.

[332] It can't do that today because it doesn't have access to the hardware, and it cannot requisition it for itself.

[333] But suppose it's connected to the internet.

[334] Suppose it can get into a data center and modify what's happening there.

[335] Right, but it cannot do that today.

[336] I don't think that's going to last.

[337] And the reason I don't think it's going to last is because you can make it more efficient by giving it the ability to do that.

[338] And there will be bad actors who just want to make it more efficient.

[339] So what you're basically saying is that because humans are flawed and because they're going to want to push this stuff forward, they're going to continue to push it forward in ways that do push it into those danger areas.

[340] Yes.

[341] So he's basically arguing that this is a Pandora's box, that it's been opened.

[342] and that because people are people, they're going to want to use what's inside of it.

[343] But I guess I'm wondering, I mean, you know, much like you're reflecting here, how much weight should we give to his warnings?

[344] Yes, he has a certain level of authority, Godfather of AI and all of that, but he has been surprised by its evolution in the past, and he might not be right.

[345] Right.

[346] There are reasons to trust Jeff, and there are reasons not to trust him.

[347] About five years ago, he predicted that all radiologists would be obsolete by now.

[348] And that is not the case.

[349] You cannot take everything, he says, at face value.

[350] I want to underscore that.

[351] But you've got to remember, this is someone who lives in the future.

[352] He's been living in the future since he was in his mid -20s.

[353] He saw then where these systems would go, and he was right.

[354] Now, once again, he's looking into the future to see where these systems are headed, and he fears they're headed to places that we don't want them to go.

[355] Cade, what steps does he suggest we take to make sure that these doomsday scenarios never happen?

[356] Well, he doesn't believe that people will just stop developing the technology.

[357] If you look at what the financial commentators say, they're saying Google's behind Microsoft, don't buy Google stock.

[359] This technology is being built by some of the biggest companies on earth, public companies who are designed to make money.

[360] They are now in competition.

[361] Basically, if you think of it as a company whose aim is to make profits, I don't work for Google anymore so I can say this now.

[362] As a company, they've got to compete with that.

[363] And he sees this continuing, not just with companies, but with governments in other parts of the world.

[364] So in a way, it's kind of like nuclear weapons, right?

[365] We knew that they would destroy the world, yet we mounted an arms race to get them anyway.

[366] Absolutely.

[367] He uses that analogy.

[368] Others in the field use that analogy.

[369] This is a powerful technology.

[370] So I think there's zero chance, you shouldn't say zero, but minuscule, minuscule chance of getting people to agree not to develop it further.

[371] He wants to make sure we get the balance right between using this technology for good and using it for ill. The best hope is that you take the leading scientists and you get them to think very seriously about are we going to be able to control this stuff?

[372] And if so, how?

[373] That's what the leading minds should be working on.

[374] And that's why I'm doing this podcast.

[375] So, Cade, you've laid out a pretty complicated puzzle here.

[376] On the one hand, there's this technology that works a lot differently and perhaps a lot better than one of its key inventors anticipated.

[377] But on the other hand, it's a technology that's also left this inventor and others worried about the future because of those very surprising and sudden evolutions.

[378] Did you ask Jeff if, you know, looking back, he would have done anything differently?

[379] I asked that question multiple times.

[380] Is there part of you, at least, or maybe all of you, who regrets what you have done?

[381] I mean, you could argue that you are the most important person in the progress of this idea over the past 50 years.

[382] And now you're saying that this idea could be a serious problem for the planet.

[383] For our species.

[384] For our species?

[385] Yep.

[386] And various people would be saying this for a while, and I didn't believe them because I thought it was a long way off.

[387] What's happened to me is understanding there might be a big difference between this kind of intelligence and biological intelligence, and that has maybe completely revised my opinions.

[388] It's a complicated situation for him to be in.

[389] Again, do you regret your role in all this?

[390] So the question is, looking back 50 years, would I have done something different?

[391] Given the choices I made 50 years ago, I think there were reasonable choices to make.

[392] It's just turned out very recently that this is going somewhere I didn't expect.

[393] And so I regret the fact that this is as advanced as it is now, and my part in doing that.

[394] But it's a distinction Bertrand Russell made between wise decisions and fortunate decisions.

[395] He paraphrased the British philosopher Bertrand Russell.

[396] You can make a wise decision that turns out to be unfortunate.

[397] Saying that you can make a wise decision that still turns out to be unfortunate.

[398] And that's basically how he feels.

[399] And I think it was a wise decision to try and figure out how the brain worked.

[400] And part of my motivation was to make human society more sensible.

[401] but it turns out that maybe it was unfortunate.

[402] It's reminding me, Cade, of Andrei Sakharov, who was, of course, the Soviet scientist who invented the hydrogen bomb and witnessed his invention and became horrified and spent the rest of his life trying to fight against it.

[403] Do you see him that way?

[404] I do.

[405] He's someone who has helped build a powerful technology, and now he is extremely concerned about the consequences.

[407] Even if you think the doomsday scenario is ridiculous or implausible, there are so many other possible outcomes that Jeff points to, and that is reason enough to be concerned.

[408] Cade, thank you for coming on the show.

[409] Glad to be here.

[410] On Tuesday, leaders from AI companies such as OpenAI, the maker of ChatGPT, Google, and others, plan to come together to warn about what they see as AI's existential risk to humanity.

[411] In a statement, the leader said, quote, mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks, such as pandemics and nuclear war.

[412] We'll be right back.

[413] Here's what else you should know today.

[414] After a marathon set of crisis talks, President Biden and House Speaker Kevin McCarthy reached an agreement on Saturday night to lift the government's debt limit for two years, enough to get it past the next presidential election.

[416] The agreement still needs to pass Congress, and both McCarthy and Democratic leaders spent the rest of the weekend making an all -out sales pitch to members of their own parties.

[417] The House plans to consider the agreement on Wednesday, less than a week before the June 5th deadline, when the government will no longer be able to pay its bills.

[418] And in Turkey on Sunday, President Recep Tayyip Erdogan beat back the greatest political challenge of his career, securing victory in a presidential runoff that granted him five more years in power.

[419] Erdogan, a mercurial leader who has vexed his Western allies while tightening his grip on the Turkish state, will deepen his conservative imprint on Turkish society and what will be, at the end of this term, a quarter century in power.

[420] Today's episode was produced by Stella Tan, Rikki Novetsky, and Luke Vander Ploeg, with help from Mary Wilson.

[421] It was edited by Michael Benoist, with help from Anita Badejo and Lisa Chow.

[422] It contains original music by Marion Lozano, Dan Powell, and Rowan Niemisto, and was engineered by Chris Wood.

[423] Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.

[424] That's it for the Daily.

[425] I'm Sabrina Tavernise.

[426] See you tomorrow.