The Diary Of A CEO with Steven Bartlett XX
[0] Artificial intelligence is superhuman.
[1] It is smarter than you are.
[2] And there's something inherently dangerous for the dumber party in that relationship.
[3] You just can't put the genie back in the bottle.
[4] Sam Harris.
[5] Neuroscientist philosopher.
[6] He goes into intellectual territory where few others dare tread.
[7] Six years ago, you did a TED talk.
[8] The gains we make in artificial intelligence could ultimately destroy us.
[9] If your objective is to make humanity happy and there was a button placed in front of you, and it would end artificial intelligence...
[10] What would you do?
[11] Well, I would definitely pause it.
[12] The idea that we've lost the moment to decide whether to hook our most powerful AI to everything is just, oh, it's already connected to the internet, got millions of people using it.
[13] And the idea that these things will stay aligned with us because we have built them.
[14] We gave them a capacity to rewrite their code.
[15] There's just no reason to believe that.
[16] And I worry about the near-term problem of what humans do
[17] with increasingly powerful AI, how it amplifies misinformation.
[18] Most of what's online could soon be fake.
[19] Can we hold a presidential election 18 months from now that we recognize as valid?
[20] Like, is it safe?
[21] And it just gets scarier and scarier.
[22] I worry we're just going to have to declare bankruptcy with respect to the Internet.
[23] The Internet.
[24] If your intuition is correct, are you optimistic about our chances of survival?
[25] Six years ago you did a TED Talk. I watched that TED Talk a few times over the last week, and it was called "Can we build AI without losing control over it?" In that TED Talk you really discussed the idea of whether AI, when it gets to a certain point of sentience and intelligence, will wreak havoc on humanity. Six years later, where do you stand on it today? Are you optimistic about our chances of survival?
[26] Yeah, I mean, I can't say I'm optimistic.
[27] I'm worried about two species of problem here that are related.
[28] I mean, there's sort of the near-term problem of just what humans do with increasingly powerful AI and how it amplifies the problem of misinformation
[29] and disinformation, and it just makes it harder and harder to make sense of reality together.
[30] And then there's just the longer-term concern about what's called alignment with artificial general intelligence, where we build AI that is truly general and, you know, by definition, superhuman in its competence and power.
[31] And then the question is, have we built it in such a way that it is aligned in a durable way
[32] with our interests? I mean, there are some people who just don't see this problem; they're kind of blind to it. When I'm in the presence of someone who doesn't share this intuition, who doesn't resonate to it, I just don't understand what they're doing or not doing with their minds in that moment. Am I wrong about that? Well, then, you know, perhaps the other person's right; we just have fundamentally different intuitions about this particular point. And the point is this: if you're imagining building true artificial general intelligence that is superhuman, and that is what everyone, whatever their intuitions, purports to be imagining here, I mean, there are people on both sides of the alignment debate, people who think alignment's a real problem and people who think it's a total fiction, but everyone who is party to this conversation agrees that we will ultimately build artificial general intelligence that will be superhuman in its capacities.
[33] And there's very little you have to assume to be confident that we're going to do that.
[34] There's really just two assumptions.
[35] One is that intelligence is substrate independent, right?
[36] It doesn't have to be made of meat.
[37] It can be made in silico, right?
[38] And we've already proven that with narrow AI.
[39] I mean, we obviously have intelligent machines.
[40] And, you know, the calculator in your phone is better than you are at arithmetic, and that's just some very narrow band of intelligence.
[41] So as we keep building intelligent machines on the assumption that there's nothing magical about having a computer made of meat, the only other thing you have to assume is that we will keep doing this, we will keep making progress, and eventually we will be in the presence of something more intelligent than we are.
[42] And that's not assuming Moore's law, it's not assuming exponential progress.
[43] We just have to keep going, right?
[44] And when you look at the reasons why we wouldn't keep going, those are all just terrifying, right?
[45] Because intelligence is so valuable and we're so incentivized to have more of it.
[46] And every increment of it is valuable.
[47] It's not like it only gets valuable when you get, you know, when you double it or 10x it.
[48] No, no. If you just get three more percent, right, that's, that pays for itself.
[49] So we're going to keep doing this.
[50] Our failure to do it suggests that something terrible has happened in the meantime, right?
[51] We've had a world war.
[52] We've had a global pandemic far worse than COVID.
[53] We got hit by an asteroid.
[54] Something happened that prevented us as a species from continuing to make progress in building intelligent machines, right?
[55] So absent that, we're going to keep going.
[56] We will eventually be in the presence of something smarter than we are.
[58] And this is where intuitions divide.
[59] My intuition, and it's shared by many people, I'm sure, and I know at least one who you've spoken to, my intuition is that there is something inherently dangerous for the dumber party in that relationship.
[60] There's something inherently dangerous for the dumber species to be in the presence of the smarter species.
[61] And we have seen this, you know, based on our entanglement with all other species, dumber than we are, right, or certainly less competent than we are.
[62] And so by reasoning by analogy, it would be true of something smarter than we are.
[63] People imagine that because we have built these machines, that is no longer true, right?
[64] But here's where my intuition goes from there: that imagination is born of not taking intelligence seriously, right?
[65] Because what intelligence is, and what a mismatch in intelligence is in particular, is a fundamental lack of insight, on the part of the dumber party, into what the smarter party is doing and why it's doing it and what it will do next, right?
[66] So, I mean, you could just imagine, by analogy, just imagine that dogs had invented us as their super-intelligent AIs, right?
[67] For the purpose of making their lives better, you know, just securing resources for them, securing comfort for them, getting them medical attention.
[68] It's been working out pretty well for the dogs, for about 10,000 years, right?
[69] I mean, there's some exceptions.
[70] We mistreat certain dogs, but generally speaking, for most dogs, most of the time, humans have been a great invention, right?
[71] Now, it's true that the mismatch in our intelligence dictates a fundamental blindness with respect to what we've become in the meantime, right?
[72] So, like, we have all these instrumental goals and things we care about, that they cannot possibly conceive, right?
[73] They know that when we go get the leash and say, it's time for a walk, they understand that particular part of the language game.
[74] But everything else we do when we're talking to each other and when we're on our computers or on our phones, they don't have the dimmest idea of what we're up to.
[75] And if we ever, if something happened, if we, I mean, the truth is we love our dogs, we make just irrational sacrifices for our dogs.
[76] We prioritize their health over all kinds of things that is just amazing to consider.
[77] And yet, if we learned, if there was a new, you know, global pandemic kicking off, and some xenovirus was jumping from dogs to humans, and it was just kind of super Ebola, right?
[78] It was just, it was 90% lethal.
[79] And this was just a forced choice between, I mean, what do you value more?
[80] The lives of your dogs or the lives of your kids, right?
[81] If that's the situation we were in, and it's totally conceivable, I mean, by no means impossible...
[82] We would just kill all the dogs, right?
[83] And they would never know why, right?
[84] We would just, and it's because we have this layer of mind and culture, just the noosphere, right?
[85] There's this realm of mind that requires a requisite level of intelligence to even be party to, to even know exists, and they have no idea it exists, right?
[86] And it's, so this is a fanciful analogy because the dogs did not invent us, but evolution invented us, right?
[87] Evolution has coded us, you know, as I said, to survive and spawn, and that's it, right?
[88] So evolution can't see everything else we've done with our time and attention and all the values we've formed in the meantime and all the ways in which we have explicitly disavowed the program we've been given.
[89] So evolution gave us a program, but if we were really going to live by the lights of that program, what would we be doing?
[90] I mean, we would be having as many kids as possible, right?
[91] You know, guys would be going to sperm banks and donating their sperm and finding that, like, to be the best use of their time and attention.
[92] I mean, it's like the idea that you could have hundreds of kids for which you have no financial responsibility, that should be the most rewarding thing that you could possibly do with your time as a man. And yet, that's obviously not what we do.
[93] And there are people who decide not to have kids.
[94] And there are people who... and yet, everything else we do, from having podcast conversations like this, to curing diseases, to literally everything we're doing with science, with culture: yes, there are points of contact between those products and our evolved capacities, right?
[95] It's not magic, right?
[96] We are social primates that have leveraged certain ancient hardware to do new things, but evolution, the code that we've been given, doesn't see any of that, right?
[97] And we've not been optimized to build democracies, right?
[98] Evolution knows nothing; it can know nothing.
[99] If evolution were a coder, there's just no democracy maximization in that code, right?
[100] It's just, it's not there.
[101] So the idea that these things will stay aligned with us because we have built them, because if we have this origin story, that we gave them their initial code, and yet we gave them a capacity to rewrite their code and build future generations of themselves, right?
[102] There's just no reason to believe that.
[103] I see none. And the mismatch in intelligence is intrinsically dangerous.
[104] And you could see this by, I mean, Stuart Russell.
[105] I don't know if you had him on the podcast.
[106] He's a great professor of computer science at Berkeley, and he literally co-wrote one of the most popular textbooks on AI.
[107] I mean, he has some arresting analogies, which I think are good intuition pumps here.
[109] And one is, just think of how you would feel if you knew, like, let's say we got a communication from elsewhere in the galaxy.
[110] And it was a message that we decoded and it said, people of Earth, we will arrive on your lowly planet in 50 years.
[111] Get ready.
[112] Right.
[113] Anyone who thinks that we're going to get superintelligent AI in, let's say, 50 years thinks we're essentially in that situation, and yet we're not responding emotionally to it in the same way.
[114] If we received a communication from a species that we knew just by the sheer fact that they were communicating with us in this way, we knew they're more competent and more powerful and more intelligent than we are.
[115] And they're going to arrive.
[116] We would feel that we were on the threshold of the most momentous change in the history of our species.
[117] And we would feel, but most importantly, we would feel that it's because this is a relationship, an unavoidable relationship that's being foisted upon us.
[118] It's like a new creature is coming into the room, right, with its own capacities, and now you're in relationship.
[119] And one thing is absolutely certain, it is smarter than you are, right?
[120] By what factor? I mean, ultimately we're talking about factors of, you know, just so many orders of magnitude that our intuitions completely fail.
[121] I mean, even if it was just a difference in the time of processing, even if, let's say, there was no difference in the actual, you know, native intelligence, but it's just processing speed.
[122] A million-fold difference in processing speed is just a phantasmagorical difference in capacity.
[123] Just imagine we had 10 smart guys in a room over there, and they were working and thinking and talking a million times faster than we are.
[124] Well, so they're no smarter than we are, but they're just faster.
[125] And we talk to them once every two weeks just to catch up on what they're up to and what they want to do and whether they still want to collaborate with us.
[126] Well, two weeks for us is 20,000 years of analogous progress for them.
[127] How could we possibly hope to constrain the opinions of, collaborate with, and negotiate with people no smarter than ourselves who are making 20,000 years of progress every time we make two weeks of progress?
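(As a rough check on the arithmetic in that analogy, taking the round numbers as stated; the exact figure is loose, but the order of magnitude is the point:)

$$
2\ \text{weeks} \times 10^{6} \;=\; 2\times 10^{6}\ \text{weeks} \;\approx\; \frac{2\times 10^{6}}{52}\ \text{years} \;\approx\; 3.8\times 10^{4}\ \text{years},
$$

i.e., tens of thousands of years of subjective working time for every two weeks of ours, which is the scale the 20,000-year figure is gesturing at.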
[128] It's just unimaginable, and yet there are many people who don't... they just think this is fiction, where all the noises I've made in the last five minutes are just like a new religion of fear, right, and there's just no reason to think that alignment is even a potential problem.
If your intuition is correct, and that analogy of us getting a signal from outer space that someone is coming in 30 years, which, by the way, a lot of people that speak on this subject matter don't believe it's even going to be 30 years until we reach that sort of singularity moment...
[129] I think they speak of artificial general intelligence.
[130] I've heard people like Elon say, you know, many fewer decades, 10 years, 15 years, 20 years, etc. If that is correct, then surely this is the most pressing challenge, conversation, issue of our time.
[131] And there's no logical reason that I can see to refute your intuition there.
[132] I can't see a logical reason.
[133] The rate of progress will continue.
[134] I don't necessarily see anything that will wipe out or pause our rate of progress.
[135] I mean, let me just be charitable to the other side here.
[136] There are other assumptions that they smuggle in that they, some people, I mean, some do it without being aware of it, but some actually believe these assumptions, and this spells the difference on this particular intuition.
[137] So it's possible to assume that the more intelligent you get, the more ethical you become, by definition, right?
[138] Now, we might draw a somewhat more equivocal picture from just the human case, where we see that, oh, there are some very smart people who aren't that ethical; but there are people, I mean, I've talked to at least a few people, who believe this.
[139] There are people who assume that kind of in the limit, as you push out into just far beyond human levels of intelligence, there's every reason to believe that all of the provincial, creaturely failures of human ethics will be left behind as well.
[140] It's like the selfishness and the basis for conflict,
[141] the apish urges of, you know, status-seeking monkeys, these are just not going to be in the code.
[142] And as you push out into just kind of the omnibus genius of the coming AI, there's a kind of sainthood that's going to come along with it, right?
[143] And a wisdom that will come along with it.
[144] Now, I just think that's quite a gamble.
[145] I think I would take the other side of that bet, and I would frame it this way.
[146] There have to be ways, in the space of all possible intelligences that are beyond the human, right?
[147] There's got to be more than one possible.
[148] It's just like there's many different ways to have a chess engine that's better than I am at chess.
[149] They're different from each other, but they're all better than me, right?
[150] There's got to be more than one way to have a superhuman artificial intelligence, and I would imagine there are, not an infinite number of ways, but just a vast number. In the space of all possible minds, there are many locations in that space beyond the human that are not aligned with human well-being. There have got to be more ways to build this unaligned than aligned, right?
[151] And what other people are smuggling into this conversation is the intuition that, no, no, once you get beyond the human, you're just going to be in the presence of, you know, the Buddha who understands quantum mechanics and oncology and everything else, right?
[152] I just see no reason to think that that's so.
[153] And we could build something that is, again, taking intelligence seriously,
[154] we're going to build something that we're in relationship to, and it's really intelligent in all the ways that we're intelligent.
[155] It's just better at all of those things than we are.
[156] It's by definition superhuman, because the only way it wouldn't be superhuman, the only way it would be human level, even for 15 minutes, is if we didn't let it improve itself, if we wanted to just keep it stuck at, you know, the level of a college undergraduate we had built and just keep it stuck there.
[157] But we would have to dumb down all of the specific capacities we've already built, right?
[158] Just like every AI we have, narrow AI is superhuman for the thing it does.
[159] You know, it has access to all the information on the internet, right?
[160] It's just like it's got perfect memory.
[161] It can perfectly copy itself.
[162] When one part of the system learns something, the rest of the system learns it, because it can just swap files, right?
[163] Again, your phone is a superhuman calculator.
[164] There's no reason to make it a calculator that is human level.
[165] And so we're never going to do that.
[166] We're never going to be in the presence of human AGI.
[167] We will be immediately in the presence of superhuman AGI.
[168] And then the question is how quickly it improves and how far, how much headroom there is to improve into.
[169] On the assumption that you can get quite a bit more intelligent than we are, right,
[170] that we're nowhere near the summit of possible intelligence.
[171] You have to imagine that you're going to be in the presence of something that is, again, it could be completely unconscious, right?
[172] I'm not saying that there's something that's like to be this thing, although there might be, and that's a totally different problem that's worth worrying about.
[173] But conscious or not, it is solving problems, detecting problems, improving its capacity, to do all of that in ways that we can't possibly understand.
[174] And the products of its increasing competence are always being surfaced, right?
[175] So it's like it's, we've been using it to change the world.
[176] We've become reliant upon it.
[177] We built this thing for a reason.
[178] I mean, one thing that's been amazing about developments in recent months is that those of us who have been at all cognizant of the AI safety space for, you know, now going on a decade or more for some people, always assumed that as we got closer to the end zone, the labs would become more circumspect, that we'd be building this stuff air-gapped from the internet. You know, we have this phrase, "air-gapped from the internet"; like, we thought this was a thing, like this thing would be in a box, and then the question would be, well, do we let it out of the box and let it do something, right?
[179] Like, is it safe?
[180] and how do we know if it's safe, right?
[181] And we thought we would have that moment.
[182] We thought it would happen in a lab at Google or at Facebook or somewhere.
[183] We thought we would hear, okay, we've got something really impressive and now we just want it to touch the stock market or we want it to touch our medical data or we just want to see if we can use it.
[184] We're way past that, right?
[185] We've built this stuff already in the wild.
[186] It's already connected to the Internet.
[187] It's already got millions of people using it.
[188] It already has APIs.
[189] It's already, I mean, it's already doing work.
[190] So from an AI safety point of view, that's, it's amazing.
[191] Like, we didn't even have the moment, the choice point we thought was going to be so fraught.
[192] Of course we didn't.
[193] Because there were such pressing incentives for people to press forward regardless of that conversation.
But yeah, everyone thought... I mean, I don't believe I was ever in conversation with someone like Eliezer Yudkowsky or Nick Bostrom or Stuart Russell who assumed we would be in this spot.
[194] Like, I just, everyone... yeah, I'd have to go back and look at those conversations, but there was so much time spent, it seems quite unnecessarily, on this idea that we'd make a certain amount of progress and circumspection would kick in. Like, even the people who were doubters would become worried, and there would be, in the final yards, you know, as we cross into the end zone, some mode where we could sort of slow down and figure it out and try to deal with the arms-race dynamics. Like, let's place a phone call to China and let's talk about this, we've got something interesting. But the stuff is already being built in connection to everything, and there are already just endless businesses being devised on the back of this thing, and all the improvements are going to get plowed into it. And so just imagine what this looks like even in success, right? Like, let's say it just starts working wonders for us and we just get these great productivity gains, and okay, so then we cross into whatever the singularity is, at whatever speed, and we find ourselves in the presence of something that is truly general.
[195] And by then, all of this narrow stuff, albeit superhuman narrow stuff, is something that we totally depend on.
[196] Like every hospital requires it and every airplane requires it and all of our missile systems require it.
[197] And it's, we're just, this is the way we do business.
[198] There is no, there's nothing to turn off at that point.
[199] I mean, I just don't, you know... I guess, I mean, I put this to Marc Andreessen on my podcast, and he said, yeah, you can turn off the internet.
[200] I mean, I don't, I can't believe he was quite serious.
[201] I mean, yes, if you're North Korea, I guess you can turn off the internet for North Korea.
[202] And that's why North Korea is like North Korea.
[203] But the idea that we could... I mean, just the cost of turning off the internet now would be, I think it would be unimaginable; the economic cost alone would be. So anyway, just the idea that we've lost the moment to decide whether to hook our most powerful AI to everything, because it's already being built more or less in contact with, if not everything, so many things that you just can't put the genie back in the bottle, that is genuinely surprising to me.
Yeah, I mean, incentives. Is this not the most pressing problem, though? Because I was going to start this conversation by asking you the question about the thing that occupies your mind the most and the most important thing we should be talking about, and I in part assumed the answer would be artificial intelligence, because of the way that you talk about your intuition on this subject matter. You've got children. Yeah. You think about the future a lot. If you can see this species coming to Earth in the next, even if it's in the next hundred years...
[204] It strikes me to be the most pressing problem for humanity.
[205] Well, I do. As interesting as I think that problem is, and as consequential as it is, I'm worried that life could become unlivable in the near term before we even get there.
[206] Like, I'm just worried about the misuses of narrow AI in the meantime.
[207] Just, I'm worried about, just take the current level of AI we have.
[208] You know, we have GPT-4.
[209] I think within the next 12 months or two years, let's say, whatever GPT-5 is, we're going to be in the presence of something where most of what's online that purports to be information could soon be fake, right?
[210] Where, like, just most of the text you find on any topic is just fake, right?
[211] Like, someone has just decided, write me a thousand journal articles on why mRNA vaccines cause cancer and give me 150 citations, write them in the style of Nature and Nature Genetics and The Lancet and JAMA, and publish them.
[212] And just put them out there, right?
[213] One teenager could do that in five minutes with the right AI, right?
[214] It's just like, GPT-4 is not quite that, but GPT-5, you know, possibly will be that.
[215] I mean, it's like that, that is such a near-term advance.
[216] Right? Or, you know, just when you imagine knitting together the visual stuff, like Midjourney and DALL-E and Stable Diffusion, with a large language model, just imagine the tool. Again, maybe this is 18 months away, maybe it's three years away, but it's not 30 years away: the tool where you can just say, give me a 45-minute documentary on how the Holocaust never happened, filled with archival imagery, give me, you know, Hitler speaking in German with the appropriate translations, and give it in the style of Alex Gibney or Ken Burns, and give me 10,000 of those, right?
[217] Like, that's... all the friction for misinformation has been taken out of the system, and yeah, I worry we're just going to have to declare bankruptcy with respect to the internet.
[218] Like, we just are not going to be able to figure out what's real.
[219] And when you look at how hard that is now with social media in the aftermath of COVID and Trump, and how, just the challenge of holding an election that most of the population agrees was valid, right?
[220] That challenge already is on the verge of being insurmountable in the U.S., right?
[221] I mean, it's just like it's easy to see us failing at that, AI aside.
[222] Now, when you add large language models to that, and the more competent future versions of them, where the most compelling deepfakes are indistinguishable from, you know, real data.
[223] And everyone is siloed into their tribes where they're stigmatizing the information that comes from any other tribes.
[224] And the internet is now so big a place that there really aren't the ordinary selection pressures where bad information gets successfully debunked so that it goes away.
[225] It's just you can live in a conspiracy cult for the rest of your life if you want to.
[226] You know, you can be QAnoning all day long if you want to.
[227] And now we've got deepfakes shoring all that up, and just spurious, you know, scientific articles shoring all that up.
[229] All of this just becomes a more compelling form of psychosis, you know, culturally speaking.
[230] And so I'm just worried that it's going to get harder and harder for us to cooperate with one another and collaborate and that our politics will just completely break.
[231] And that'll, you know, offer an opportunity for lots of, you know, bad actors.
[232] And, I mean, even that aside, you know, there's cyberterrorism and there's synthetic biology; the moment you turn AI loose on the prospect of engineering viruses and, you know, all of that, it potentiates... I mean, the asymmetry here is that it seems like it's always easier to break things than to fix them, or to categorically prevent people from breaking them.
[233] And what we have with increasingly powerful technology is the ability for one person to create more and more damage or one small group of people, right?
[234] So it just turns out it's hard enough to build a nuclear bomb that, like, one person can't really do it, you know, no matter how smart.
[235] You need a team; it traditionally needed state actors; you need access to resources, and you have to get the physical material.
[236] And it's hard enough, but this isn't, this is being fully democratized, this tech.
[237] And so, yeah, I worry about the near-term chaos.
[238] I've never found the near-term consequences of artificial intelligence to be that interesting until now, until what you said.
[239] That image of like the internet becoming unusable.
[240] So that was a real eureka moment for me, because I've not been thinking about that.
[241] Yeah, me too.
[242] I was just concerned about the AGI risk.
[243] And now, really in the aftermath of Trump and COVID, I've just, I see the risk of,
[244] you know, if not losing everything, losing a lot that matters, just based on our interacting with these very simple tools that are reliably misleading us.
[245] I mean, I'm just amazed at what social media, forget that, I'm amazed at what Twitter did to me. I mean, you know, even with all of my training and, you know, with my head screwed on reasonably straight.
[246] I mean, it's amazing to say it, but almost all of the truly bad things that have happened to me in the last decade, that just really, like, destabilized relationships and priorities and kind of got plowed back into me, that became a kind of professional emergency, you know, stuff I had to respond to in writing or on a podcast,
[247] it was all Twitter.
[248] My engagement with Twitter was the thing that produced the chaos and it was completely unnecessary.
[249] And it was amplifying a kind of signal for me that I felt compelled to pay attention to, because I was on it, and I was trying to communicate with people on it, and I was getting certain communication back, and it was giving me a picture of the rest of humanity which I now think was fundamentally misleading, but it was still consequential. Like, even believing that it was misleading, at a certain point, wasn't enough to inoculate me against the delusion, the kind of opinion change that was being forced upon me. And I was feeling like, okay, these people are becoming unrecognizable.
[250] Like I know some of these people.
[251] I've had dinner with some of these people.
[252] And their behavior on Twitter is appearing so deranged to me, and in such bad faith, that people who I know to be non-psychopaths are starting to behave like psychopaths, at least on Twitter.
[253] And I'm becoming similarly unrecognizable to them.
[254] It just, again, all felt like a psychological experiment to which I hadn't consented but in which I had enrolled myself somehow, because it was what everyone was doing in 2009.
[255] And I spent, you know, 12 years there, getting some signal and responding to it, and it's not to say that it was all bad.
[256] I mean, I read a bunch of good articles that got linked there, and I discovered some interesting people, but the change in my life after I deleted my Twitter account was so enormous.
[257] I mean, it's embarrassing to admit it.
[258] I mean, it's just, it's like getting out of a bad relationship.
[259] I mean, it was just a fundamental freedom from this chaos monster that was always there, ready to disrupt something based on its own dynamics.
And when did you delete it?
Um, yeah, like December. I think it was December.
And I'm not someone that really takes sides on things. I like to try and remain in the middle, I think, politically.
So you must have a very different Twitter experience than I was having.
No, no, no, no. So I don't tweet anything other than this podcast trailer.
[260] Don't tweet anything else.
[261] Right.
[262] Okay.
[263] So I just, the only thing you'll see on my Twitter is the podcast trailer, that's it.
[264] Yeah.
[265] And for all the reasons you've described. And more interestingly, I wanted to say, in the last eight months, as someone that tries not to get caught up too much in the media, oh, Elon bought this...
[266] It's 100% gone in that direction.
[267] As in my timeline now is, I say it to my friends all the time.
[268] And some of my friends who, again, I think are nuanced and balanced, have said to me, there's something that's been turned up in the algorithm to increase engagement that has planted me in an unpleasant echo chamber that I didn't desire to be in.
[269] And if I wasn't somewhat conscious, I would 100 % be in there.
[270] My timeline... my friend Cahle tweeted the other day that he's never seen more people die on his Twitter timeline than he has in the last six months.
[271] They're prioritizing videos.
[272] So you're seeing a lot of like death and CCTV footage that I've never seen before.
[273] And then the debate around gender, um, politics, right-leaning subject matter has never been more right down your throat.
[274] Yeah.
[275] Because it's almost like something in the algorithm has been switched, where it's now like people have been let out of the asylum.
[276] That's the only way I can describe it.
[277] And it's made me retract even more.
[278] So when Zuckerberg announced Threads a couple of weeks ago, it was kind of like a life raft out of the Titanic.
[279] And I really, really mean that.
[280] And I'm not someone to get easily caught up in narrative, you know, as it relates to social media platforms, it's been my industry for a decade.
[281] But what I've seen on Twitter has actually made me believe this hypothesis I had five years ago, where I thought the journey of social networking would be that we'd have way more social networks and they'd be more siloed.
[282] I thought we'd have one for our neighborhood, our football club, and now I believe that even more than ever.
[283] Yeah, that seems right.
[284] And I think, I mean, whether it's possible to have a truly healthy social network that people want to be in, where there's a good reason to be there... I don't know if it's possible. I'd like to think it is, but I think there are certain things you have to clean up at the outset to make it possible. I think anonymity is a bad thing. I think probably being free is a bad thing, because, you know, you sort of get what you pay for online. I just think there might be ways to set it up where we'd be better off, but I don't think it'd be popular.
What was that?
I think the thing that makes it popular makes it toxic, right?
Right. And even the anonymity piece, I've played this out a couple of times in my mind, and the rebuttal I always get is, well, there are people in Syria who have news to break, important news to break, and they'd be hung if they did, so we need an anonymous version of the social internet, right?
Yeah, well, I guess there could be some exception there, but I don't know, it just doesn't... it actually doesn't interest me, because I just feel such a different sense of my being in the world as a result of not paying attention to my online simulacrum of myself.
[285] Twitter was the only one I used.
[286] I've been on Facebook this whole time.
[287] I've been on, I guess I'm on Instagram too, but my team just uses those as marketing channels.
[288] You know, it's just like you, it sounds like that's the way you use Twitter now.
[289] But Twitter was the one that I decided, okay, this is going to be me. I'm going to be posting here.
[290] I'm going to, you know, if I've made a mistake, I want to hear about it.
[291] You know, it's like, I just wanted to use it as an actual basis for communication.
[292] And for the longest time, it actually felt like a valid tool in that respect.
[293] You know, it reached a crisis point.
[294] I decided this is just pure toxicity.
[295] There's just no reason.
[296] Even the good stuff can't possibly make a dent in the bad stuff.
[297] So I just deleted it.
[298] And then I was returned to the real world, right, where I actually live.
[299] To books and to... I mean, I'm online all the time anyway, but it's the time course of reactivity when you don't have social media, when you don't have a place to put this instantaneous hot take that you're tempted to put out into the world, because there's literally no place to put it.
[300] Like, for me, if I have some reaction to something in the news, I have to decide whether it's worth talking about it in my next podcast that I might be recording, you know, four days from now.
[301] And rather often, people have been just bloviating about this thing for four solid days before I ever get to the microphone.
[302] And then I get to think, well, is it still worth talking about?
[303] And most, almost nothing survives that test anymore, right?
[304] It's like the conversation has moved on.
[305] So there's actually no place for me to just type this thing that takes me 10 seconds and then rolls out there to get, to detonate in the minds of, you know, my friends and enemies to opposite effect.
[306] And then I see the result of all that, you know, on a, again, on this sort of reinforcement loop of every 15 minutes.
[307] Not having that is such a relief that I just don't even know why I would.
[308] So, like, when Threads was announced, I wasn't... I think I'm on Threads too, but it's not me. It's just, you know, again, another marketing channel.
[309] But yeah, I feel such relief not exercising that muscle anymore. You know, I don't know how often I was checking Twitter, but I was not checking it just to see what was happening to me, or the response to the last thing I tweeted; I was checking it a lot because it was, you know, my newsfeed.
[310] It's like I'm following, you know, 200 smart people.
[311] They're telling me what they're paying attention to.
[312] And so I'm fascinated.
[313] So yeah, well, yeah, I want to see that next article or that next video.
[314] Just that engagement and the endless opportunity to comment and to put my foot in my mouth or put my foot in someone else's mouth or have someone put their foot.
[315] Just not having that has been such a relief that I would be, I mean, it's not impossible, but I would be very cautious in reactivating that, because it was so much noise. And again, it created... there was so much... it became an opportunity cost, but it became just this endless opportunity for misunderstanding, especially misunderstanding of me and, you know, everything I've been putting out into the world, and then my sense that I had to react to it, and then you just kind of plow that back in, and that becomes the basis for further misunderstanding. And it just constantly was giving me the sense that there's something I need to react to, on my podcast, in an article, on Twitter, that this is a valid signal. Like, you've got to stop everything; like, you're by the pool on the one vacation you're taking with your family that summer, and this thing just happened on your phone, and it can't wait, right? Like, you actually have to pay attention, because the conversation is happening right now. And so it was a kind of addiction to information and, you know, on some level, reputation management. And just to be free of it is such a relief. Apart from, like, you know, health issues with certain family members, virtually the only bad things that have happened to me have been a result of my engagement with Twitter over the last 10 years.
[316] So it's just, it's just, you know, I guess if I'm a masochist, I would be back on Twitter, but that would be the only reason to do it.
[317] Narrow AI.
[318] I asked you a question a second ago which I really wanted to get a solution to, because I'm mildly terrified.
[319] I completely believe the logic underneath your opinion that narrow AI will cause this destabilization and unusability of the internet.
[320] So just focusing on narrow AI, what would you consider to be a solution to prevent us getting to that world where misinformation is rife to the point that it can destabilize society, politics and culture?
[321] Well, I think it's something I've been asking people about on my podcast, because it's not actually my wheelhouse,
[322] and I would just need to hear from experts about what's possible technically here.
[323] But I'm imagining that paradoxically or ironically, this could usher in a new kind of gatekeeping that we're going to rely on because the provenance of information is going to be so important.
[324] I mean, the assurance that a video has not been manipulated, or is not just a pure confection of deep fakery, right?
[325] So it could be that we're meandering into a new period where you're not going to trust a photo unless it's coming from, you know, Getty Images, or, you know, the New York Times has some story about how they have verified every photo that they put in their newspaper.
[326] They have a process.
[327] And, you know, so if you see a video flash up of, you know, Vladimir Putin seeming to say that he's declaring war on the U.S., right?
[328] I think most people are going to assume that's fake until proven otherwise.
[329] It's like it's just going to be too much fake stuff.
[330] And it's all going to look so good that the New York Times and every other, you know, organ of media that we have relied upon, as imperfect as they've been of late, are going to have to figure out what the tools are whereby they can say, okay, this is actually a video of Putin, right? I mean, I'm not going to be able to figure that out on my own, right, if the New York Times doesn't have a process, or CNN doesn't have a process, that they go through before they say, okay, Putin really said this, and so we have to now react to this because this is real. Whatever that process is, you know, whether there's some kind of digital watermark that's connected to the blockchain, some tech implementation of it that can be fully democratized, where you, by just being on the latest version of the Chrome browser, can know that you can differentiate real and fake video, say.
[331] I don't know what the implementation will be, but I just know we're going to get to some spot where it's going to be, all right, we have to declare epistemological bankruptcy: we don't know what's real.
[332] We have to assume anything especially lurid or agitating is fake until proven otherwise.
[333] So prove otherwise.
[334] And that's, you know, that'll be a resetting of something.
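To make the provenance idea concrete, here is a minimal sketch of the kind of mechanism being gestured at, in Python, using the third-party cryptography package: a publisher signs the bytes of a photo or video with a private key, and anyone holding the publisher's public key can later check that the file has not been altered since it was signed. This is an illustration only, not a description of any deployed system (real provenance standards such as C2PA carry much richer, chained metadata), and the function names are invented for the example.

```python
# Minimal sketch of media provenance via digital signatures (illustrative only).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair once and distributes the public key.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

def sign_media(media_bytes: bytes) -> bytes:
    """Signature the publisher attaches alongside the media file."""
    return publisher_key.sign(media_bytes)

def is_authentic(media_bytes: bytes, signature: bytes) -> bool:
    """Check that the media matches what the publisher originally signed."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False

original = b"...raw bytes of a verified video..."
sig = sign_media(original)
print(is_authentic(original, sig))         # True: untouched file
print(is_authentic(original + b"x", sig))  # False: any alteration breaks the signature
```

The "fully democratized" version described above would amount to something like a browser performing this check automatically against keys that publishers make public.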
[335] I don't know what we do with that in a world where we really don't have that much time to react to certain things. You know, a video of Putin saying he's launched his big missiles is something where, you know, 30 minutes from now we would understand
[336] whether it's real or not.
[337] I mean, forget about, again, forget about everything we just said about AI.
[338] Look at all of our legacy risks.
[339] Look at the risk of nuclear war, the risk of stumbling into a nuclear war by accident has been hanging over our head for 70 years.
[340] I mean, we've got this old tech.
[341] We've got these wonky radar systems that throw up errors.
[342] We have moments in history where, you know, one Soviet sub-commander decided, based on just his gut feeling, his common sense, that the data was almost certainly in error, and he decided not to pass the obvious evidence of an American ICBM launch up the chain of command, knowing that the chain of command would say, okay, you have to fire, right?
[343] And he reasoned that if the U.S. was going to attack the Soviet Union, they would launch more than, I think in this case it looked like there were four missiles, that was the radar signature; if the U.S. is going to launch a first strike against the Soviet Union, when it's like the mid-80s, they're going to launch more than four missiles, right?
[344] This has to be bad data, right? So, you know, if we automate all this, will we automate it to systems that have that kind of common sense, right? But we've been perched on the edge of the abyss based on this possibility: forget about malevolent actors, you know, who might decide to have a nuclear war on purpose; we have the possibility of accidental nuclear war. You add this cacophony of misinformation and deepfakes to all of that, and it just gets scarier and scarier. And this is not even AGI.
[345] This is just, you know, narrow-AI-amplified misinformation.
[346] How do you feel about it?
[347] Well, I mean, this is the thing that worries me. I mean, I worry about the next election.
[348] I, you know, I think the next president, if we can run the 2024 election in a way that most of America acknowledges was valid, that will be an amazing victory, you know, whatever the outcome.
[349] I mean, obviously, I would, you know, not be looking forward to a Trump presidency, but I think even more fundamental than that is, can we hold a presidential election 18 months from now that we recognize as valid?
[350] Like that, I don't know, I don't know what kind of resources are being spent on that particular performance, but that is hugely important.
[351] And I don't think our near-term experiments with AI are going to make that easier.
[352] Why is it so important?
[353] Well, it's just, I mean, if you think the maintenance of a valid democracy in the world's lone superpower is of minor importance, I'd like to drink the tea you're drinking.
[354] Are you optimistic?
[355] I mean, I can't say I'm optimistic.
[356] I'm, you know, in a paradoxical state, I mean, because I definitely, I tend to focus on what's wrong or might be wrong.
[358] I tend to I think have a pessimistic bias, right?
[359] Like I tend to notice what's wrong as opposed to what's right.
[360] You know, I mean, that's my bias.
[361] But I'm actually very happy, right?
[362] Like I have a very good life.
[363] I'm just like everything is, I just I'm incredibly lucky.
[364] I'm surrounded by great people it's like it's just it's all great and yet i see all of these risks on the horizon so i'm like i'm not um i just i have a very high degree of well -being at this moment in my life and yet i like what's on the television is scary and so it's it's a it's a very interesting juxtaposition you know i'll be i'll be very relieved if we have a uh maybe i just i feel like we're in a very I mean, like the, I haven't seen a, a full post -mortem on the COVID pandemic that has fully encapsulated what I think we, what I think happened to us there.
[365] But my, my vague sense is that we didn't learn a whole hell of a lot.
[366] I mean, basically what we learned is we're really bad at responding to this kind of thing.
[367] This was a challenge that, that just fragmented us as a society.
[368] It could have brought us together.
[369] it didn't and it amplified all of the divisions in our society politically and economically and tribally and all kinds of ways the role of misinformation and disinformation and all of that was all too clear and I think just getting worse so I think as a dress rehearsal for some future pandemic that's that is inevitably going to come and is you know could well be worse, I think we failed this dress rehearsal.
[370] And, you know, I have to hope that at some point our institutions will reconstitute themselves so as to be obviously trustworthy and engender the kind of trust we actually need to have on our institutions.
[371] Like, we need a CDC that not only that we trust, but that is trustworthy, that we, that we, they were right to trust, right?
[372] And so it is with an FDA and every other, you know, institution that, that is relevant here.
[373] And we don't quite have that.
[374] And half of our society thinks we don't have that at all, right?
[375] And so it's, we have to rebuild trust in institutions somehow.
[376] And I just think, you know, we have a lot of work to do to even figure out how to make an increment of progress on that score, because, again, the siloing of large constituencies into alternate information universes is just not functional.
[377] And that's so much of what social media has done to us and alternative media.
[378] I mean, like, you know, you and I are podcasters, but I call it Podcastistan, right?
[379] I mean, we have this landscape of... I mean, there are now, whatever, a million-plus podcasts, and there are, you know, email newsletters, and everyone has now
[380] just decided to curate their information diet in a way that's bespoke to them, and you can stay there forever, and you're getting one slice of, and it could be a completely fictional slice of, reality, and we're losing the ability to converge on a common picture of what's going on.
[381] And does that sound optimistic?
[382] I didn't hear the optimism there.
[383] You tell me. I don't.
[384] But again, I can't refute anything you've said on a logical basis.
[385] It all sounds like that is the direction of travel that we're going in, unfortunately.
[386] I have faith that there'll be surprising positives.
[387] There always tends to be surprising positives that we also didn't factor in.
[388] It's easy to see.
[389] I mean, if there's any significant low -hanging fruit technologically or scientifically that could be AI enabled for us.
[390] I mean, just take, like, you know, a cure for cancer, a cure for Alzheimer's, right?
[391] I mean, just having one thing like that, right?
[392] That would be such an enormous good.
[393] And that is, that's why we can't get off this ride.
[394] And that's why there is no brake to pull.
[395] I mean, because the value of intelligence is so enormous.
[396] I mean, it is just, it's not everything.
[397] I mean, there's other things we care about and are right to care about beyond intelligence.
[398] I mean, love is not the same thing as intelligence, right?
[399] But intelligence is the thing that can safeguard everything you love.
[400] Even if you think the whole point in life is to just get on a beach with your friends and your family and just hang out and enjoy the sunset, okay: you don't have to augment, you don't need superhuman intelligence to do any of that, right?
[401] You're fit to do it exactly as you are.
[402] You could have done that in the 70s, and it would just be just as good a beach, and they'd be just as good friends.
[403] But every gain we make in intelligence is the thing that safeguards that opportunity for you and everyone else.
[404] How would you define it? I feel like we've not defined the term artificial general intelligence.
[405] From my understanding of it, it's when the intelligence can think and make decisions almost like a human.
[406] Yeah, I mean, loosely, this is kind of just a semantic problem, but intelligence can mean many things, but loosely speaking, it is the ability to solve problems and meet goals, make decisions in response to a changing environment, in response to data.
[407] The general aspect of that is an ability to do that across many different situations, all the sorts of situations we encounter as people, and to have one's capacity in one area not... you know, as I get better at deciding whether or not this is a cup, I don't magically get worse at deciding whether, you know, you just said a word, right? It's like I can do multiple things in multiple channels. That's not something we had in our artificial systems for the longest time, because everything was bespoke to the task.
[408] We'd build a chess engine and it couldn't even play tic -tac -toe.
[409] All it could do was play chess.
[410] And we just would get better and better in these piecemeal narrow ways.
[411] And then things began to change a few years ago, where, you know, DeepMind would have its algorithms where the same algorithm with slightly different
[412] tuning could play Go, right, or it could, you know, solve a protein-folding problem, as opposed to just playing chess, right?
[413] And it became the best in the world at chess, and it became the best in the world at Go.
[414] And amazingly, I mean, take, you know, what AlphaZero did: before AlphaZero, all the chess algorithms just had all of our chess knowledge plowed into them.
[415] They had studied every human game of chess, and it was just, you know, it was a bespoke chess engine.
[416] AlphaZero just played itself, I think, for like four hours, right?
[417] It just had the rules of chess, and then it played itself, and it became better not merely than every person who's ever played the game.
[418] It became better than all the chess engines that had all of our chess knowledge
[419] plowed into them.
[420] So it's a fundamentally new moment in how you build an intelligent system, and it promises this possibility.
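To illustrate the self-play idea in miniature, here is a toy Python sketch: a program that is given only the rules of tic-tac-toe and improves purely by playing against itself. It is not AlphaZero's actual method (which pairs a deep neural network with Monte Carlo tree search); the names and numbers below are invented for the example.

```python
# Toy self-play learner: only the rules of tic-tac-toe are built in; skill comes
# from the games the program generates against itself (illustrative only).
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    """Indices of the empty squares."""
    return [i for i, cell in enumerate(board) if cell == " "]

# values[pos] is the learned expected result for the player who just moved into pos.
values = defaultdict(float)

def choose(board, player, epsilon):
    """Mostly pick the move whose resulting position has the highest learned value;
    sometimes pick at random so self-play keeps exploring."""
    legal = moves(board)
    if random.random() < epsilon:
        return random.choice(legal)
    def after(m):
        nxt = list(board)
        nxt[m] = player
        return tuple(nxt)
    return max(legal, key=lambda m: values[after(m)])

def self_play_episode(epsilon=0.2, lr=0.5):
    """Play one game against itself and nudge the value table toward the outcome."""
    board = [" "] * 9
    player = "X"
    history = []  # (position reached, player who just moved)
    while True:
        board[choose(board, player, epsilon)] = player
        history.append((tuple(board), player))
        result = winner(board)
        if result or not moves(board):
            break
        player = "O" if player == "X" else "X"
    for pos, mover in history:
        target = 0.0 if result is None else (1.0 if mover == result else -1.0)
        values[pos] += lr * (target - values[pos])

for _ in range(20000):
    self_play_episode()
print(f"Learned values for {len(values)} positions from self-play alone.")
```

The point is the one made above: nothing about the game beyond its rules is plowed into the system; whatever skill it ends up with comes entirely from the games it plays against itself.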
[421] Again, this inevitability: the moment you admit that we will eventually get there, the moment you admit that it can be done in silico, and the moment that you admit that we will just keep going unless a catastrophe happens.
[422] And those two things are so easy to admit that I just don't, at this point, I don't see any place to stand where you're not forced to admit them, right?
[423] I don't see any neuroscientific or cognitive scientific argument for substrate dependence for intelligence, given what we've already built.
[424] And again, we're going to keep going until something stops us.
[425] We'd have to hit some immovable object that prevents us from releasing the next iPhone; otherwise we're going to keep going.
[426] And then, yeah, so then, whatever "general" will mean in that first case, there'll be a case where we've built a system that is so good at everything we care about that it is functionally general.
[427] Now, maybe it's missing something.
[428] Maybe it's not, you know, maybe it's missing something that we don't even have a name for.
[429] You know, we're missing all kinds of, there are possible intelligences that we haven't even thought about because we just haven't thought about them.
[430] There are ways to section the universe, undoubtedly, that we can't even conceive of, because we just have the minds we have.
[431] Elon was asked a question on this by a journalist.
[432] The journalist said to him, in a world where you believe that to be true, that artificial general intelligence is around the corner, when your kids come to you and say, Daddy, what should I do with my life to find purpose and meaning?
[433] What advice do you now give them?
[434] If you hold that intuition to be true, that it's around the corner, what do you say to your children when they say, what should I do with my life to create purpose and meaning?
[435] Did you say that Elon answered this question?
[436] Yeah, what did he say?
[437] It's one of the most chilling moments in an interview I think I've seen in recent times because he stutters.
[438] He goes silent for about 15 seconds, which is very un -Elon.
[439] He stutters, he stutters, he stutters a bit more.
Like, he can't, and then he says he thinks he's living in suspended disbelief.
[441] Because if he really thought about it too much, what's the point?
[442] He says, what's the point of me building all these cars?
[443] He was in his Tesla factory.
[444] What's the point of me building all these cars?
[445] And what's the point?
[446] I do think that sometimes.
[447] So I think I have to live in, as his words were, suspended disbelief.
[448] Right.
[449] Well, I would encourage him to ask, what's the point of spending so much time on Twitter?
[450] Because he could clearly benefit from rethinking that.
But that aside, I mean, my answer to that is, and I think other people have echoed this of late, it's sort of surprising to me: my answer is that this begins to privilege a return to the humanities as a kind of core, like the center of mass intellectually for us.
[453] Because when you look at what we're really good at, it's among the last things that can be plausibly automated.
[454] And if we automate it, we may cease to care about it.
So it's like learning to write good code is something that is being automated now.
You know, I'm not a programmer, but I have it on good authority that already these large language models are improving code, and something like half the time they're writing better code than people.
[457] That's all going to become like chess, right?
[458] It's just it's going to be better than people ultimately.
So being a software engineer, or being a radiologist, things like those, it's easy to see how AI just cancels those professions, or at least makes one person so effective at using AI tools that one person can do the work of 100 people, so that you've got 99 people who don't have to be doing that job.
But creating art and, you know, writing novels and being a philosopher and talking about what it means to live a good life and how to do it, like, that's something where we have to look at where we're going to care that we're actually in relationship to and in dialogue with another person who we know to be conscious.
[462] Where we don't care about that, we're not going to care.
[463] We're going to want just the best version of it.
[464] Like, I don't care.
If the cure for cancer comes from an insentient AI, I do not give a shit.
[466] I just want the cure for cancer, right?
[467] There's no added value where I find out, okay, the person who gave me this cure really felt good about it.
[468] And he's, you know, he had tears in his eyes when he figured out the cure.
Every engineering problem is like that: we want safer planes, we just want things to work, and we're not sentimental about the artistry that went into all of that. And when the gulf between the best and the mediocre gets big and consequential, we're just going to want the best, all the way down the line. But what is the best novel, right?
[470] What is the best podcast conversation?
[471] What is the, and can you subtract out the conscious person from that and still think it's the best?
And so, like, someone sent me what purported to be, I didn't even listen to it, so I'm not even sure what it was, but it looked like it was an AI-generated conversation between Alan Watts and Terence McKenna.
[473] Right.
Both guys who I love. I mean, I didn't know either of them, but I'm a fan of both and have listened to hundreds of hours of both of them talk.
[475] As far as I know, they never met each other.
[476] It would have been a fascinating conversation.
When I looked at this YouTube video, I realized I simply don't care how good this is, because I only care if it was actually Alan Watts and Terence McKenna talking.
I got a simulacrum of Alan Watts and Terence McKenna, and in this context I don't care about it, right? So another use case I stumbled upon: I was playing with ChatGPT, and I asked it, you know, the causes of World War II, give me 500 words on the causes of World War II. It gives you this perfect little bullet-pointed essay on the causes of World War II. That's exactly what I want from it, that's fine. I don't care that there was no person behind that typing.
[479] But when I think, well, do I want to read Churchill's history of World War II?
[480] It's on my shelf to read.
[481] It's kind of one of these aspirational sets of books.
[482] Haven't read it yet.
[483] I actually want to read it because Churchill wrote it, right?
And if you could give me an AI version of Churchill, saying this is in the style of Churchill, and even Churchill scholars say this sounds like Churchill, I actually don't care about it. Like, that's not the use. I'll take the generic use of, you know, give me the causes of World War II, but the fake Churchill is profoundly uninteresting to me. The real Churchill, even though he's dead, is interesting to me. So the rebuttal I give here, and this is what my mind is doing, is saying this: the distinction you're presenting, the difference I see, is that in the case of the conversation between two people you respect that has been generated by AI, someone has signaled to you that it is fake.
If you remove that, because say Churchill thought, oh, why would I write a book when I could just click a button and this thing will write it in my voice, in my tone of voice, with the entire back catalogue of things I've written before, and it will produce my account and it will save me time.
So I'll just click a button, or maybe my publisher will do it for me, and then I'll sell that to Sam on the basis that it is my thoughts. I can imagine a very near future where, if we just do it by percentage, more and more books are going to be written by artificial intelligence, to the point that when you look at a shelf, I imagine at some point in the future, if intelligence does increase by any measure, most of it would be words strung together by artificial intelligence, and it will potentially be selling better than the words written by humans.
So again, when we go back to the conversation with your children, there might not be a career there either, because artificial intelligence is faster, can produce more, can test and iterate on whether it sells better, gets more clicks; it can write the headline, create the picture, write the content, and then I can just take the check because I put my name to it.
[488] Yeah.
[489] So I go, even in that regard, what remains?
Well, so in the limit, what I think we're imagining is a world where none of the terrifyingly bad things have happened, so it's just all working. We're just producing a ton of great stuff that is better than the human stuff, and people are losing their jobs, so we've got a labor disruption, but we're not talking about any other kind of political catastrophe or, you know, cyber apocalypse, much less AGI destroying everything. Then I think we just need a different economic assumption and ethical intuition around the value of work.
[491] Our default norm now in a capitalist society is you have to figure out something to do with most of your time that other people are willing to pay you for.
You have to figure out how to add value to other people's lives such that you reliably get paid, otherwise you might die. We've got a social safety net, but it's pretty meager; there are cracks you can fall through, you could wind up homeless, and we're not going to figure out what to do about that all too well. So your claim upon your existence among us is you finding something to do with your time that other people will pay you for, right?
[493] And now we've got artificial intelligence removing some of those opportunities, creating others, but in the limit, and I do think it is different from, I think analogies to other moments in technological history are fundamentally flawed.
[494] I think this is a technology, which in the limit will replace jobs and not create better new jobs in their wake, right?
It just cancels the need for human labor ultimately, and strangely, it replaces some of the highest-status, most cognitively intensive jobs first, right? It replaces Elon Musk before it replaces your electrician or your plumber or your masseuse, way before, right? So we have to internalize the reality of that. And again, this is in success; this is all good things happening, right?
[496] And we have to have a new ethic.
[497] We have to have a new economics based on that ethic, which is, you know, UBI is one solution to this.
[498] Like, you shouldn't have to work to survive, right?
[499] Universal basic income.
[500] Yeah, there's so much abundance now being created.
[501] We have to figure out how to spread this wealth around, right?
[502] We've got a cure for cancer over here.
We've got perfect, you know, photovoltaic-driven economies over here, where it's like we've solved the climate change issue; we're just pulling wealth out of the ether, essentially. We've got nanotechnology that is just birthing whole new industries. But it's all being driven by AI; whenever you put a person in the decision chain, you're just adding noise.
[504] This is the best thing.
[505] This should be the best thing that's ever happened to us.
This is just like God handing us the perfect labor-saving device, right?
[507] The machine that can build every other machine that can do anything you could possibly want, we should figure out how to spread the wealth around in that case, right?
[508] This is just powered by sunlight, no more wars over resource extraction.
[509] It can build anything.
[510] We can all be on the beach, just hang out with our friends and family.
Do you believe we should do universal basic income, where everybody's given like a monthly check?
[512] We have to break this connection.
[513] Again, this is what will have to happen in the presence of this kind of labor force dislocation enabled by all of this going perfectly well.
[514] Like this again, just as pure success.
[515] Just AI is just producing good things.
[516] And the only bad thing is putting all these people out of work.
[517] It's coming for your job eventually.
This, and my issue with it, and my rebuttal when I talk to my friends about this idea of universal basic income, where we hand out enough cash or resources to people so that they're stable, which I'm not necessarily against, but I just want to play with it a little bit, is that humans seem to have an innate desire for purpose and meaning, and we seem to be designed and built psychologically for labor and for discomfort. But it doesn't have to be labor that's tied to money, right? Like, we will get our status in other ways and we'll get our meaning in other ways.
[519] And again, these are all just stories we tell ourselves.
[520] I mean, like, you know, you're talking to a person who knows it's possible to be happy actually doing nothing, right?
[521] Like, like just sitting in a room for a month, right, and just staring at the wall.
[522] Like that's possible, right?
And yet that's most people's worst nightmare.
You know, I mean, solitary confinement in a prison is considered torture, right?
[525] And I know people who spent 20 years in a cave, right?
So it's like there are capacities here that are worth talking about.
But more commonly, I think we just want to be entertained, we want to have fun.
[528] We want to be with the people we love.
[529] We want to be useful in relationship.
[530] And insofar as that gets uncoupled from the necessity of working to survive.
It doesn't all just go away; we just need new norms and new ethics and new conversations around what we do on vacation, right? It's like, what you're imagining is that if you put everyone on vacation, the best vacation, and you make the vacation as good as possible, a majority of people will eventually be miserable because they're not back at work, right? And yet most of these people are working so that they have enough money so they could finally take that vacation, right?
[532] We will figure out a new way to be happy on the beach, right?
[533] I mean, like, if you can't, if you get bored with Frisbee, we will figure something else out that is fun.
You know, I'll be able to read the Churchill history of World War II on the beach and not be rushed by any other imperative, because I'm happily retired, right?
[535] because my AI is creating the thing that is solving all my economic problems, right?
[536] You know, we should be so lucky as to have that be our problem.
Like how to be happy in conditions of no economic imperative, no basis for political strife over scarce resources, and where the question of survival is off the table with respect to what one does with one's time and attention, right?
[538] You can be as lazy as you want and you'll still survive.
[539] You can be as unlucky as you want and you'll still survive.
[540] I mean, the awful situation we're in now is that differences in luck mean everything, right?
[541] You know, someone is born without any of the advantages that we have.
[542] We don't have a system.
[543] We don't have an economic system that reliably gives them every advantage and opportunity that they could have, right?
It's like we either don't have the resources, or we've convinced ourselves we don't have the resources, or we don't have the incentives such that we access the resources, so as to actually come to the help of people we could help, right?
I mean, the idea that people starve to death is just unimaginable, and yet it still happens.
You know, that's not a scarcity problem, it's a political problem wherever it happens. And yet all of this is tied to a system where everyone has convinced themselves that it's normal to really have one's survival be in question if one doesn't work, right? By choice or by accident. Like, I think it's still true that, at least in the U.S., this is almost certainly not true in the UK, but in the U.S., the most common reason for a personal bankruptcy is, you know, overwhelming medical expense that just comes upon you for whatever reason.
[547] You know, your wife gets cancer.
[548] You guys go bankrupt solving the cancer problem or failing to solve the cancer problem, and now everything else unravels, right?
And we have a society which thinks, yeah, well, unlucky you; if you wind up homeless, just don't sleep in front of my store, because you're going to hurt my business.
Like, you know, successful AI that cancels lots of jobs would only be canceling those jobs by virtue of producing so many good things, so much value for everybody, that we would have to figure out how to spread that wealth around.
Otherwise, yeah, otherwise we would have an amazingly dystopian bottleneck for a few short years, and then we would just have a revolution, right? Then the guys in their gated communities, making trillions of dollars based on them having gotten close enough to the GPUs that some of it rubbed off on them, yeah, they'd be dragged out of their houses and off their Gulfstreams, and we would have a fundamental reset, a hard reset of the political system. If I had to put you in a yes or no situation and ask your intuition the question now: if your objective was, which I'm sure it is, to encourage the betterment of humanity and to increase our odds of happiness and well-being a hundred years from now, and there was a button placed in front of you, and it would either end the development of artificial intelligence as we've seen it over the last decade, so we'd never proceed with developing intelligent machines, or not, so you could press a button and stop it right now, what would you do? And stop it, stop it permanently, such that we never then do that thing, we just never figure out how to build intelligent machines? Pause it indefinitely? Well, I would definitely pause it to a point where we would get our heads around the alignment problem.
[552] Permanently.
[553] If the button was a permanent pause that you couldn't undo.
[554] Well, the question is how deep does that go?
[555] So like we have everything we have now, but we just never get better than...
[556] Yeah, we never make progress from here.
[557] Right.
[558] And your objective is to make humanity happy and prosperous.
I mean, it's hard, because when you begin imagining all of the good stuff that we could get with aligned superhuman AI, well, then it's just cornucopia upon cornucopia; everything is potentially within reach.
Yeah, I mean, I take the existential risk scenario seriously enough that I would pause it. I mean, I think if curing cancer is a biomedical engineering problem that admits of a solution, and I think there's every reason to believe it ultimately would be, we will eventually get there based on our own muddling along with our current level of tech, our current information tech. I'm reasonably confident of that, because our intelligence shows every sign of being general; it's just not as fast as we would want it to be.
The thing that AI is going to give us is speed. I mean, there's speed and there's access, there's memory, right?
It's like, no person or team of people can integrate all of the data we already have.
[563] Right?
[564] So the real promise here is that these systems will be able to find patterns that we wouldn't even know how to look for and then do something on the basis of those patterns.
[565] You know, I think an intelligent search within the data space, you know, by apes like ourselves will eventually do most of the great things we want done.
And, you know, the problems we need to solve so as to safeguard the career of our species and to make civilization durable and sane, and to remove this sword of Damocles that is over our heads at every moment, that at any moment we could just decide to have a nuclear war that ruins everything, or create an engineered pandemic that ruins everything.
[568] We don't need superhuman intelligence to solve all those problems.
[569] And we need an appropriate emotional response to the untenability of the status quo.
And we need a political dialogue that eventually transcends our tribalism.
You, and I'd say a few others, maybe two or three others, helped change my mind about one of the most profound
[572] things I think anyone could believe, which was when I was 18, I believed in Christianity.
[573] And then there was a couple of moments that shook my belief.
[574] Nothing on a personal level, just a couple of ideas that managed to sort of infect my operating system that led my curiosity towards your work.
[575] And I changed my mind profoundly.
[576] It's such a profound change that I had.
[577] How do we change our minds?
[578] And I really want to, I really want to focus that question on the individual's mind.
[579] Like I want to change my mind.
I want better beliefs, better ideas in my head that are going to allow me to get out of my own way, because I am not achieving.
[581] I'm miserable.
[582] I'm not living the life that I, I would say I know I can live, but some people don't even know they can live, live a better life.
[583] I'm not happy.
[584] That's the signal.
[585] And I want to rectify this in some way.
[586] Yeah, well, there are a few bright lines for me. I mean, take our ethical lives and our relationships to other people, right?
So there's the problem of individual well-being that is still real, even if you're in a moral solitude.
[588] If you're on a desert island by yourself, you really don't have ethical questions that are emerging because you're not in relationship to anybody else, but you still have the problem of how to be happy.
[589] But so much of our unhappiness is in collaboration with others, right?
[590] We're unhappy in our relationships.
[591] We're unhappy professionally.
[592] And it's worth looking at how we're behaving with other people.
For me, the highest-leverage change I ever made, and again, it's very easy to spell out, it's very clear, and ultimately it's pretty easy, is just to decide that you're not going to lie about anything, really.
I mean, there might be some situations in extremis where you'll feel forced to lie, but those, in my view, are analogous to acts of violence that you may be forced to use in self-defense, right?
[596] So, like, lying is sort of the first stage on the continuum of violence for me, right?
[597] So, like, I'm not going to lie to someone unless I recognize that this is not a rational actor who I can possibly collaborate with.
This is someone I have to avoid or defeat or otherwise, you know, contain their propensity to do me harm.
[599] So, yes, if the Nazis come to the door and ask if you've got Anne Frank in the attic, yes, you can lie or you can shoot them or you can, these are not normal circumstances.
[600] But that aside, every other moment in life where people are tempted to lie is one that I think you can categorically rule out as being unethical and beyond unethical.
It's just that it's creating a life that, when you examine it, you don't want to live.
Right. I mean, the moment you know that you're not going to lie to people, and they know that about you, it's like all the social dials get recalibrated on both sides, and then you find yourself in the presence of people who don't ask you for your opinion unless they really want it, right? And then when you're honest, it's a night and day difference when you're giving people critical feedback and they know you're honest, right? Their bullshit detector is not going off, because they just know that even when it's not convenient, even when it's not comfortable, you're being honest. One, that's incredibly valuable, because basically you're giving them the information that you would want if you were in their shoes, right? Because we have this sort of delusion that takes over us whenever we're tempted to tell a white lie: we imagine it'd be much better to just tell them the kind fiction than the uncomfortable truth, right? But we don't even do that golden-rule calculation most of the time. If you just took a moment, you'd realize, well, wait a minute, does someone who is actually doing a bad job want me to tell them that they're doing a good job, and then just send them out into the world to bounce around other people who are going to be recognizing, as I just did, that the thing they're doing isn't so great, right?
[603] You're just not doing them a favor.
[604] This is part of the nature of belief change, isn't it?
That when we believe that someone is on our side, or we believe from, like, a political standpoint that they represent 99% of the views that we represent, we're much more likely to change our beliefs. I spoke to Tali Sharot about this, the neuroscientist, and I wrote about this in a chapter in my upcoming book about how you change people's minds. And they showed that if, like, a flat earther says something to a flat earther about the nature of the earth, they'll believe it, but if NASA says something to a flat earther, they will just dismiss it on sight, because the source of that information is not one that they believe or trust or like or believe is well-intentioned.
[606] I mean, this is a bug, not a feature.
[607] I mean, it's understandable, but this is something we have to grow beyond because the truth is the truth, right?
[608] So you can't, I mean, and it goes in both directions.
The person on your team who you love and respect is capable, in their very next sentence, of speaking a falsehood, right?
[610] And you need to be able to detect that.
[611] And conversely, you know, the person you least respect is capable of saying something that's quite incisive and worth taking on board.
And so we have to have this sort of meta-cognitive layer where we're noticing how we're getting played by our social alliances, and recognize that the truth, and rather often important truths, are evaluated by different principles.
[613] I mean, it's not a matter of the messenger.
[614] You know, you shouldn't shoot the messenger and you shouldn't worship him.
You mentioned removing lying and being more honest as a significant step change in your own happiness.
[616] Is that accurate?
[617] In my happiness.
[618] In your own happiness, yeah, yeah.
[619] Yeah, immensely so.
[620] Because it's, how, practically and specifically how?
So when you look at how people ruin their reputations and their relationships and their businesses and their careers, the gateway to all of the misbehavior that accomplishes that is lying.
[622] It's, I mean, look at somebody like Lance Armstrong, right?
[623] I mean, just, or Tiger Woods, right?
[624] These guys are the absolute apogee of sport.
[625] Everyone loves them.
Everyone's just amazed at what they've accomplished.
[627] and yet, you know, the dysfunction in their lives just gets vomited up for all to see at a certain point.
And it was just enabled at every stage along the way by lying.
[629] So if either of them had early in their career, before they became famous, before they became rich, before they became tempted to do anything that was going to derail their lives later on, if they had decided they weren't going to lie, right?
[630] They would have found everything else they did to screw up their success impossible.
[631] So when I decided, and this was in the book, this was a course I took at Stanford.
It was a seminar with this brilliant professor, Ron Howard; I think some people in Silicon Valley have taken this course as well.
[633] I mean, this course was just like a machine, you know, undergraduates and graduate students would come in on one side and then 12 weeks later would come out convinced that basically lying was no longer on the menu, right?
The whole seminar was an analysis of the question: is it ever right to lie?
And really we focused on white lies and truly tempting lies, as opposed to the obvious lies that screw up people's lives and relationships. It's just so corrosive, and it's corrosive of relationships in ways that, unless you're a student of this kind of thing, you don't necessarily notice. I mean, one example, which I believe is in that book, is that I remember my wife was with a friend, and the two of them were out, and the friend had something she had to do with another friend later that night, but she didn't really feel like doing it.
[636] And she got a call from that friend in the presence of my wife, and she just lied to the friend to get out of the plan, right?
[637] She said, oh, you know, I'm so sorry, but my, you know, my daughter's got this thing.
And it's just an utterly facile use of dishonesty to get out of it, when she could have just been honest, right?
[639] But she just, it was just too awkward to be honest.
[640] So she just got out of it with a lie.
[641] But now it's in the presence of my wife.
And for my wife, now the immediate question is, how many times have I been on the other side of that conversation?
[643] How many times has she lied to me in an equally compelling way about something so trivial, right?
And so it just eroded trust in that relationship in a way that the liar would never have known about, would never have detected, because she just went right back to having a good time; they were just out to lunch, and they continued having their lunch, and they're still having a good time, and it's all smiles, but my wife has just logged something about the ethical limitations of this person, and the person doesn't know it, right? And so once you pull on this thread, your entire life becomes, at least for the transition period until this just becomes a habit you no longer have to consider, a kind of mirror thrown up to your mind, and you meet yourself in all these situations where you were avoiding yourself before. So like someone will say, you know, do you want to make plans, or do you want to collaborate with me on this project? And if previously you always had recourse to some kind of white lie that just got you out of the awkward truth, which is that the answer is no, and there are actually reasons why not, right, you never have to confront the awkwardness of that: that you're this kind of person who has these kinds of commitments. I mean, the most awkward one would be, you know, someone declares a romantic interest in you, and it's no for a totally superficial reason, right?
Like, this person is not attractive enough for you, right?
You know, or they're overweight, or whatever. I mean, it's like, you have your reason why not, and this is something you feel you cannot say, right?
Now, I'm not saying that you should always go out of your way, like someone with Tourette's who just helplessly blurts out the truth.
[648] Like, there's a scope for kindness and compassion and tact.
[649] But if someone is going to really drill down on the reasons why not, if the person says, no, I want to know exactly why you don't want to go out with me, there's something to discover on either side of that true disclosure.
[650] Right?
[651] Like either you are cast back on yourself and you have to realize, okay, I'm such a superficial person that it doesn't matter who anyone is.
[652] If they're 10 pounds overweight, I'm not interested.
That's the mirror held up to your mind.
[654] Like, okay, all right, so you're that kind of person.
[655] Do you want to still be that kind of person?
[656] Do you really want to just decide that everyone, no matter what their virtues, right?
And no matter what has been going on, no matter what chaos is going on in their life.
[659] I mean, this person might actually lose those 10 pounds next month and you would have a very different situation.
[660] But are you really not available?
[661] Are you really filtering by weight in this way?
[662] And are you really comfortable with that?
[663] And are you comfortable saying that?
[664] Like if somebody forces you to actually be honest.
[665] We have a closing tradition on this podcast where the last guest leaves a question for the next guest, not knowing who they're going to leave it for.
The question that's been left for you, in impeccable handwriting.
[667] Where do you want to be when you die?
[668] Describe the place, time, people smell, and feeling.
Well, it actually connects with an idea I've had.
I mean, I think what we need, we haven't talked about psychedelics here, but there's just been this renaissance in research on psychedelics, and it's hard to know.
[671] I'm worried that we could recapitulate some of the errors of the 60s and roll this all out in a way that's less than wise.
But the wise version would be, I think we need to recapitulate something like the mysteries of Eleusis, where we have rites of passage that are enabled by, in many people's case, psychedelics and the practice of meditation.
I just think these are fundamental tools of insight, and, I mean, for most people, it's hard to see how they would get them any other way, right?
[674] I just think, you know, there's a longer conversation about which molecule and how and all that, but another component of this is, I mean, a hospice situation where the experience of dying is as wisely embraced and facilitated as possible.
[675] And I think psychedelics could certainly play a role for many people there.
[676] So I imagine something like we need places that are truly beautiful where people have gone to die and their families can visit them there.
And it is just a final rite of passage that is embraced with all the wisdom we can muster. And yeah, so in my case, you know, currently I'd be happy to be home, but wherever home is at that point, I would want a view of the sky. It could be an ocean beneath the sky, that would be ideal, right? There's basically nothing that makes me happier than just looking at a blue sky, just watching cumulus clouds move across a blue sky.
[678] I mean, it's just like I can extract so much mental pleasure just looking at that, right?
It's just, I mean, so yeah, if I'm going to spend my last hours of life looking at anything, if my eyes are going to be open, it's looking at the sky.
The stars, or the sky in the daytime?
The sky, the daytime, yeah.
I mean, light pollution is enough of a thing in my world that I feel like I go for years without seeing a good night sky.
[683] So I've kind of given up hope there, but I do love that.
But yeah, just a view of the sky, and with the people I love who are still alive at that point.
Yeah, I mean, I'm not worried about death in that sense.
I really think the death part is not a problem.
I mean, I can't say I'm looking forward to it; I can imagine there could be sort of medical chaos and uncertainty and all of the weirdness that happens around the dying process, and there are all kinds of ways to die that I wouldn't choose. But having a nice place to do that, with a view of the sky, would be the only solution I think I would require.
[688] The question asks, the smell.
[689] Give me the smell, give me an ocean breeze.
[690] I put an ocean there.
[691] So, yeah, an ocean breeze would be perfect.
[692] Sam, thank you so much.
[693] Thank you for not just this conversation.
[694] As I said to you before you sat down.
You were pivotal in really helping me to unpack some problems I had when I was younger, some conflicts I should describe them as, with my view on religious belief and the nature of the world. But I think more importantly, you didn't rob me of my religious beliefs and leave me with nothing; you left me with something else, which was really important to me, which was the idea that there can still be great meaning, and there can be what you describe as spirituality, in the absence or in the place of that religious belief. Religion gives people, you know, a lot of things, and it's funny, because when I was religious and I went on the journey to becoming agnostic, let's say, I was in conflict with people, as in I would want to have a debate with everybody, and I spent those two years watching everything that you and Richard Dawkins and Hitchens had all done, and then I came out of the other side and it was peaceful.
And you believe what you want, I'll believe what I want; as long as we're not causing any conflicts with each other and you're not doing any harm, it's okay.
And then I discovered what I would call my own spirituality, which is my meaning, the meaning that I see in the world around me and the self, and things like psychedelics. And it's a better place to be.
[698] And it removed my fear of death, which I had as a religious person.
[699] Oh, nice.
[700] That's good.
[701] So thank you.
[702] Thank you for that.
[703] And all your subsequent work, but, you know, incredible books.
[704] You've written so many of them that are absolutely incredible.
You've got an unbelievable podcast, which I was gorging on before you came here, as well as an app, which, I mean, if you could speak just a few sentences about the meaning of the app and what you do, I know it's much more than meditation now.
[706] But I think people listening to this might be compelled to check it out and download it.
Yeah, well, so I had that book, which you're holding, Waking Up, which is where I talk about my experience in meditation and just how I fit it into a scientific, secular worldview.
[708] And it just turns out that an app is a much better delivery system for that kind of information.
[709] I mean, it's just hearing audio.
[710] You don't even need video.
[711] I think audio is the perfect medium for it.
[712] So when that technology came about, or when I discovered it, I just felt incredibly lucky to be able to build it.
[713] And so it's kind of outgrown me now.
There are many, many teachers on it and many other topics beyond meditation that are touched on.
But it really subverts all of the problems, some of which we touched upon here, with the smartphone.
[716] I mean, like the smartphone has become this tool of fragmentation for us.
[717] It fragments our attention.
[718] It continually interrupts our experience.
It depends on how you use it.
[720] But most of what we do with it, you know, you're checking Slack, you're checking your email, you're checking your social media, you're just, it's punctuating your life with all these, you know, at this point seemingly necessary interruptions.
But this app, or really any app like it that's delivering this kind of content, subverts all that, because it's just a platform where you're getting audio that is guiding you in a very specific use of attention and a sort of reordering of your priorities, and getting you to recognize things about your experience that you wouldn't otherwise see.
And yeah, an app is just sheer good luck; it turns out it's just the perfect delivery system for that information.
[723] So, yeah, I just felt very lucky to have stumbled upon it.
[724] Because, again, 10 years ago, there were no apps.
It's just, all I could do was write a book.
[726] Sam, thank you.
[727] Yeah, thank you so much for your generosity.
[728] Yeah, a pleasure to meet you as well.
[729] And congratulations with everything.
[730] It's really, I was catching up on your podcast in anticipation of this.
[731] And it's amazing, the reach you've got now.
[732] So it's wonderful.
[733] We're all still trying to catch up with it, but it's a credit to all of the team.
[734] And I really want to say from the bottom of my heart, thank you.
[735] Because the work you do is really, really important.
[736] It's been important in my life, as I've said.
[737] But it's just really important.
I feel like we're living in a world where nuance and all the things you've talked about, openness to debate and honest dialogue, we're getting further and further away from there.
[739] So if there's anyone left in this world, that's still willing to engage on that level, I feel like they must be protected at all costs.
[740] And I see you as one of those people.
[741] So thank you.
[742] Nice.
[743] All right.
[744] Well, to be continued.