Lex Fridman Podcast XX
[0] The following is a conversation with Elon Musk, his fourth time on this, the Lex Fridman podcast.
[1] And now, if you allow me, a quick few seconds mention of each sponsor.
[2] Check them out in the description.
[3] It's the best way to support this podcast.
[4] We got Element for electrolyte deliciousness, Eight Sleep for naps, BetterHelp for your mind, SimpliSafe for your safety, Shopify for your online shopping, and NetSuite for business management software.
[5] Choose wisely, my friends.
[6] Also, if you want to work with an amazing team, where I'm hiring, go to lexfridman.com slash hiring.
[7] And now, onto the full ad reads, as always, no ads in the middle.
[8] I try to make these interesting, but if you must skip them, please still check out the sponsors.
[9] I enjoy their stuff.
[10] Maybe you will too.
[11] This episode is brought to you by The Very Thing I'm Drinking, as I speak to you right now, Element.
[12] Electrolyte mix.
[13] I'll usually fill a Powerade or a Gatorade bottle.
[14] I think it's 28 ounces of water, and I mix it with one Element packet.
[15] My favorite flavor is watermelon salt.
[16] That's the one I'm drinking now.
[17] That's the one I've been drinking for a long time.
[18] I apologize because I'm recording these words very late at night.
[19] It's been a long day.
[20] It's been a long night before that and a long night this one.
[21] So it's just me in front of a microphone right now, looking at a 28-ounce bottle of water that tastes like watermelon.
[22] Just me and the water.
[23] Happily reunited in the desert of human experience.
[24] I'm also surrounded by three computer screens that all have beautiful wallpaper looking at me, taunting me with beautiful, picturesque scenes of nature.
[25] And I just, before this, played about 10 minutes of Diablo, where I got to escape to a whole other world that is also picturesque, not with nature, but filled with the intense, intimidating sounds of demons attacking you from all sides.
[26] And all of that is somehow relaxing.
[27] Anyway, Element is not just delicious, it combines the perfect amount of magnesium, potassium, and sodium. Watermelon salt.
[28] Anyway, get a sample pack for free with any purchase.
[29] Try it at drinkelement.com slash Lex.
[30] This episode is also brought to you by the thing that's calling my name at this very moment, the Eight Sleep Pod 3 mattress.
[31] It has two sides of the bed so you can cool or heat each side of the bed separately.
[32] I actually was recently, briefly, at a get-together of a lot of really interesting people, and somebody brought up the topic of Eight Sleep, and I just heard a bunch of other folks say, oh, I love Eight Sleep, and they started talking about the different temperatures that they use, and I was shocked and horrified to learn that some people actually heat their bed up.
[33] This was a fascinating discovery for me because I've never even tried the heating feature.
[34] I always cool the bed, sometimes very cold, with a warm blanket.
[35] It's heaven. I can't recommend it enough.
[36] Sleep really is a magical place.
[37] It's a bit surreal saying these words, while quite sleep deprived, a bit melancholic, a bit hopeful, thinking about the eight sleep bed just a few steps away.
[38] And somehow that thought fills me with gratitude for just having these things in my life.
[39] Anyway, check them out and get special savings when you go to eightsleep.com slash Lex.
[40] This episode is also brought to you by BetterHelp, spelled H-E-L-P, help.
[41] Anytime I spell out a word, I always imagine myself spelling out that word in giant letters in sand, stranded alone on an island, looking at an airplane floating above, hoping they see the text that I wrote: help. I just actually got a text from a friend. I'm looking at it now, again filled with gratitude for having so many amazing people. The text says, hey brother, just sending you some love, miss you. Sent late at night. Oh boy. Yeah, yeah, yeah. So, BetterHelp. Speaking of friends, it's good to talk.
[42] It's good to talk to people.
[43] It's good to talk to people that truly listen.
[44] That could be friends, loved ones, or that could be a professional therapist, but you don't have to choose one.
[45] You can have all of those.
[46] An easy, accessible therapist is what BetterHelp is all about.
[47] Check them out at betterhelp.com slash Lex and save on your first month. That's betterhelp.com slash Lex.
[48] This show is also brought to you by SimpliSafe.
[49] And as we go on with this program, I'm starting to have more and more fun, because fun is all you need.
[50] SimpliSafe is a home security company. Fun comes after, of course, you've established security.
[51] Once you're safe, then the fun can begin.
[52] For me, SimpliSafe just establishes that first base layer of physical security.
[53] It's super easy to set up.
[54] They have a cool new feature called Fast Protect Monitoring that allows SimpliSafe monitoring agents to see, speak to, and deter intruders through the Smart Alarm indoor camera.
[55] I'm tempted to test the system by having one of the quadrupeds, the legged robots, roll up to it and see what the agents say.
[56] Just attach a ChatGPT, GPT-4 agent or a Grok agent on top of it and have the two of them have a conversation and see what they figure out.
[57] I'm not sure it's going to end well.
[58] but science proceeds one experiment at a time, friends.
[59] Anyway, I love how easy SimpliSafe was to set up.
[60] Their sensors are awesome.
[61] Their monitoring is awesome.
[62] It's affordable, easy to use.
[63] I love products that are just, you know, easy to set up, easy to maintain, everything works.
[64] Easy.
[65] Anyway, go to SimpliSafe.com to get 50% off any SimpliSafe system when you sign up for Fast Protect Monitoring.
[66] That's simplisafe.com slash Lex.
[67] This show is also brought to you by Shopify, a platform designed for anyone to sell anywhere.
[68] And when I say anyone, I mean somebody like me. It took me minutes, maybe even seconds, to set up a store and sell shirts, which you can now buy at, I think the link is lexfridman.com slash store, which forwards you to the Shopify store with three different shirts.
[69] In this case, it's not really about the store.
[70] It's about just celebrating stuff we love.
[71] I'm a huge fan of a lot of bands and podcasts, and I just love buying up their merch, wearing it, and seeing who notices.
[72] I love wearing a Huberman Lab shirt on the trail when I run, and just seeing who recognizes the full absurdity of it.
[73] And what they say.
[74] It's just wonderful.
[75] The joy that we can share through this little kind of sign that you wear on yourself that says, oh, I like this particular thing.
[76] Let's talk about it.
[77] Yeah, I love it.
[78] I like wearing like Orwell shirts or Dostoevsky shirts or Bukowski shirts or sci-fi stuff.
[79] You know, 2001: A Space Odyssey, all of it.
[80] You can get all of those things or sell those things on Shopify.
[81] Sign up for a $1 per month trial period at shopify.com slash Lex.
[82] That's all lowercase.
[83] Go to Shopify .com slash Lex to take your business to the next level today.
[84] This show is also brought to you by NetSuite, an all-in-one cloud business management system.
[85] They take care of all the messy, complicated things involved in running a business.
[86] A company, like Elon says, is a kind of superintelligent being comprised of individual human beings working together in interesting ways that, when effective, can be greater than the sum of their parts. You know, but there's a lot of things involved there. It's not just great design and engineering, it's not just great innovation, it's all of the things: the financials, the human resources, the hiring and firing.
[87] It's maintaining inventory.
[88] It's the e-commerce.
[89] It's all the messy business details.
[90] So you should use the best tools for the job of handling that mess.
[91] That beautiful, beautiful mess involved in running a company.
[92] And I, too, by the way, would love to be a part of that mess.
[93] A small cog in the machine, in the superintelligent machine that is a company,
[94] someday soon.
[95] Anyway, you can start now with no payment or interest for six months.
[96] Go to netsuite.com slash Lex to access their one-of-a-kind financing program.
[97] That's netsuite.com slash Lex.
[98] And now, dear friends, here's Elon Musk.
[99] I thought you were going to finish it.
[100] It's one of the greatest themes in all of film history.
[101] Yeah, that's great.
[102] So I was just thinking about the Roman Empire.
[103] as one does.
[104] There's that whole meme where all guys are thinking about the Roman Empire at least once a day.
[105] And half the population is confused whether it's true or not.
[106] But more seriously thinking about the wars going on in the world today.
[107] And as you know, war and military conquest have been a big part of Roman society and culture, and I think have been a big part of most empires and dynasties throughout human history.
[108] So yeah, they usually came as a result of conquest. I mean, there's something like the Austro-Hungarian Empire, where there was just a lot of sort of clever marriages, but fundamentally there's an engine of conquest, and they celebrate excellence in warfare. Many of the leaders were excellent generals, that kind of thing. So a big picture question, Grok approved. I asked if this is a good question to ask. Tested. Grok approved?
[109] At least on fun mode.
[110] To what degree do you think war is part of human nature versus a consequence of how human societies are structured?
[111] I ask this as you have somehow controversially been a proponent of peace.
[112] I'm generally a proponent of peace.
[113] I mean, ignorance is perhaps, in my view, the real enemy to be countered.
[114] That's the real hard part, not, you know, fighting other humans. But all creatures fight. I mean, the jungle is... people think of nature as perhaps some sort of peaceful thing, but in fact it is not. There's some quite funny Werner Herzog thing where he's in the jungle saying that it's basically just murder and death in every direction. I mean, the plants and animals in the jungle are constantly trying to kill each other, every single day, every minute.
[115] So it's not like, you know, we're unusual in that respect.
[116] There's a relevant question here, whether with greater intelligence comes greater control over these base instincts for violence?
[117] Yes.
[118] We have much more of an ability to control our limbic instinct for violence than, say, a chimpanzee.
[119] And, in fact, if one looks at, say, chimpanzee society, it is not friendly.
[120] I mean, the bonobos are an exception.
[121] But chimpanzee society is full of violence, and it's quite horrific, frankly.
[122] That's our limbic system in action.
[123] Like, you know, you don't want to be on the wrong side of a chimpanzee.
[124] It'll eat your face off and tear your nuts off.
[125] Yeah, basically, there's no limits or ethics.
[126] There's almost no just war.
[127] There's no just war in chimpanzee societies.
[128] It's war and dominance by any means necessary.
[129] Yeah, Chimpanzee society is like a primitive version of human society.
[130] They're not like peace -loving, basically, at all.
[131] There's extreme violence.
[132] And then once in a while, somebody who's watched too many Disney movies decides to raise a chimpanzee as a pet, and then it eats their face off or rips their nuts off or chews their fingers off, that kind of thing.
[133] It's happened several times.
[134] Ripping your nuts off is an interesting strategy for interaction.
[136] So it's happened to people.
[137] It's unfortunate.
[138] That's, I guess, one way to ensure that the other chimp doesn't contribute to the gene pool.
[139] Well, from a martial arts perspective, it's a fascinating strategy.
[140] The nut ripper.
[141] I wonder which of the martial arts teaches that.
[142] I think it's safe to say, if somebody's got your nuts in their hands
[143] and has the option of ripping them off, you will be amenable to whatever they want. Yeah. So, like I said, somehow controversially, you've been a proponent of peace, on Twitter, on X. Yeah. So let me ask you about the wars going on today, and to see what the path to peace could be. How do you hope the current war in Israel and Gaza comes to an end? What path do you see that can minimize human suffering in the long term in that part of the world? Well, I think that part of the world is definitely, like, if you look up "there is no easy answer" in the dictionary,
[144] it'll show a picture of the Middle East, Israel especially.
[145] So there is no easy answer.
[146] This is strictly my opinion, you know, which is that the goal of Hamas was to provoke an overreaction from Israel.
[147] They obviously did not expect to, you know, have a military victory, but they really wanted to commit the worst atrocities that they could in order to provoke the most aggressive response possible from Israel, and then leverage that aggressive response to rally Muslims worldwide for the cause of Gaza and Palestine, which they have succeeded in doing.
[148] So the counterintuitive thing here, I think, the thing that I think should be done, even though it's very difficult, is that I would recommend that Israel engage in the most conspicuous acts of kindness possible, everything. That is the actual thing that would thwart the goal of Hamas. So in some sense, to the degree that makes sense in geopolitics, turn the other cheek, implemented. It's not exactly turn the other cheek, because I do think that, you know, I think it is appropriate for Israel to find the Hamas members and, you know, either kill them or incarcerate them. Something has to be done, because they're just going to keep coming otherwise.
[149] But in addition to that, they need to do whatever they can.
[150] There's some talk of establishing, for example, a mobile hospital.
[151] I'd recommend doing that.
[152] Just making sure that, you know, there's food, water, medical necessities, and just be over the top about it, and be very transparent, so that people can't claim it's a trick.
[153] Like, just put a webcam on the thing, you know, all 24 -7.
[154] Deploy acts of kindness.
[155] Yeah, conspicuous acts of kindness.
[156] That are unequivocal, meaning they can't be somehow...
[157] Because Hamas will then, their response will be, oh, it's a trick.
[158] Therefore, you have to counter how it's not a trick.
[159] This ultimately fights the broader force of hatred in the region. Yes. And I'm not sure who said it, it's an apocryphal saying, but an eye for an eye makes everyone blind. Now, in that neck of the woods, they really believe in the whole eye-for-an-eye thing. But, I mean, you really have, if you're not going to just outright commit genocide against an entire people, which obviously would not be acceptable, really shouldn't be acceptable to anyone, then you're going to leave basically a lot of people alive who subsequently, you know, hate Israel.
[160] So really the question is like, how, for every Hamas member that you kill, how many did you create?
[161] And if you create more than you killed, you've not succeeded.
[162] That's the, you know, the real situation there.
[163] And it's safe to say that if, you know, you kill somebody's child in Gaza, you've made at least a few Hamas members who will die just to kill an Israeli.
[164] That's the situation.
[165] So, but I mean, this is one of the most contentious subjects one could possibly discuss.
[166] But I think if the ultimate goal is some sort of long-term peace, one has to look at this from the standpoint of, over time, are there more or fewer terrorists being created?
[167] Let me just linger on war.
[168] Yeah.
[169] Well, is it safe to say wars have always existed and always will exist?
[170] Always will exist.
[171] Always has existed and always will exist.
[172] I hope not.
[173] You think it always will?
[174] There will always be war.
[175] There's a question of just how much war there will be, you know, the sort of scope and scale of war.
[176] But to imagine that there would not be any war in the future, I think would be a very unlikely outcome.
[177] Yeah, you talked about the culture series.
[178] There's war even there.
[179] Yes.
[180] It's a giant war.
[181] The first book starts off with a gigantic, galactic war where trillions die, trillions.
[182] But it still nevertheless protects these pockets of flourishing.
[183] Somehow you can have galactic war and still have pockets of flourishing.
[184] Yeah, I mean, it's, I guess if we are able to one day expand to, you know, fill the galaxy or whatever, there will be a galactic war at some point.
[185] The scale, I mean, the scale of war has been increasing and increasing, increasing.
[186] It's like a race between the scale of suffering and the scale of flourishing.
[187] Yes.
[188] A lot of people seem to be using this tragedy to beat the drums of war and feed the military industrial complex.
[189] Do you worry about this?
[190] The people who are rooting for escalation, and how can it be stopped? One of the things that does concern me is that there are very few people alive today who actually viscerally understand the horrors of war, at least in the U.S. I mean, obviously there are people on the front lines in Ukraine and Russia who understand just how terrible war is, but how many people in the West understand it? My grandfather was in World War II. He was severely traumatized. He was there, I think, for almost six years, in Eastern North Africa and Italy. All his friends were killed in front of him, and he would have died too, except they randomly gave some, I guess, IQ test or something, and he scored very high. Now, he was not an officer, I think a corporal or a sergeant or something like that, because he didn't finish high school. He had to drop out of high school because his dad died and he had to work to support his siblings.
[191] So because he didn't graduate high school, he was not eligible for the officer corps.
[192] So, you know, he kind of got put into the cannon fodder category, basically.
[193] But then just randomly that gave him this test, he was transferred to British intelligence in London.
[194] That's where he met my grandmother.
[195] But he had PTSD next level.
[196] Like, next level. I mean, he just didn't talk, just didn't talk. And if you tried talking to him, he'd just tell you to shut up. And he won a bunch of medals, never bragged about it once, not even hinted, nothing. I, like, found out about it because his military records are online. That's how I know. So he would say, like, no way in hell do you want to do that again. He died 20 years ago, or longer, actually, 30 years ago.
[197] How many people are alive that remember World War II?
[198] Not many.
[199] And the same perhaps applies to the threat of nuclear war.
[200] Yeah.
[201] I mean, there are enough nuclear bombs pointed at the United States to make the radioactive rubble bounce many times.
[202] There's two major wars going on right now.
[203] So you've talked about the threat of
[204] AGI quite a bit, but now, as we sit here with the intensity of conflict going on, do you worry about nuclear war?
[205] I think we shouldn't discount the possibility of nuclear war.
[206] It is a civilizational threat.
[207] Right now, I could be wrong, but I think the current probability of nuclear war is quite low, but there are a lot of nukes pointed at us, and we have a lot of nukes pointed at other people.
[208] They're still there.
[209] Nobody's put their guns away.
[210] The missiles are still in the silos.
[211] And the leaders don't seem to be the ones with the nukes talking to each other?
[212] No. There are wars which are tragic and difficult on a local basis, and then there are wars which are civilization-ending, or have that potential.
[213] Obviously, global thermonuclear warfare has high potential to end civilization.
[214] Perhaps permanently.
[215] But certainly, you know, it could severely wound and perhaps set back human progress, you know, to the Stone Age or something. I don't know. Pretty bad.
[216] Probably scientists and engineers won't be super popular after that as well.
[217] Like, you got us into this mess.
[218] So generally, I think we obviously want to prioritize civilizational risks over things that are painful and tragic on a local level, but not civilizational.
[219] How do you hope the war in Ukraine comes to an end?
[220] And what's the path, once again, to minimizing human suffering there?
[221] Well, I think what is likely to happen, which is really pretty much the way it is, is that settling very close to the current lines will be
[222] how a ceasefire or truce happens.
[223] But, you know, you just have a situation right now where whoever goes on the offensive will suffer casualties at several times the rate of whoever's on the defense.
[224] Because you've got defense in depth.
[225] You've got minefields, trenches, anti-tank defenses.
[226] Nobody has air superiority, because the anti-aircraft missiles are really far better than
[227] the aircraft, like, there are far more of them. And so neither side has air superiority. Tanks are basically death traps, just slow-moving, and they're not immune to anti-tank weapons. So you really just have long-range artillery and infantry trenches. It's World War I all over again, with drones, you know, throwing in little drones, some drones there. Which makes the long-range artillery just that much more accurate and better.
[228] And so more efficient at murdering people on both sides.
[229] Yeah.
[230] So whoever is... you don't want to be trying to advance from either side, because the probability of dying is incredibly high.
[231] So in order to overcome defense -in -depth trenches and minefields, you really need significant local superiority in numbers.
[232] Ideally, combined arms, where you do a fast attack with aircraft, a concentrated number of tanks, and a lot of people. That's the only way you're going to punch through a line, and then you're going to have to punch through and then not have reinforcements just kick you right out again. I mean, I really recommend people read about World War I warfare in detail.
[233] That's rough.
[234] I mean, the sheer number of people that died there was mind -boggling.
[235] And it's almost impossible to imagine an end of it that doesn't look almost exactly like the beginning in terms of what land belongs to whom, and so on.
[236] But on the other side of a lot of human suffering, death and destruction of infrastructure.
[237] Yes.
[238] The reason I proposed some sort of truce or peace a year ago was because I predicted pretty much exactly what would happen, which is a lot of people dying for basically almost no changes in land.
[239] And the loss of the flower of Ukrainian and Russian youth. And we should have some sympathy for the Russian boys as well as the Ukrainian boys,
[240] because the Russian boys didn't ask to be on the front line.
[241] They have to be.
[242] So there's a lot of sons not coming back to their parents, you know.
[243] And I think most of them don't really have, they don't hate the other side.
[244] You know, it's sort of like that saying, the saying comes from World War I, it's like young boys who don't know each other killing each other on behalf of
[245] old men that do know each other.
[246] What the hell is the point of that?
[247] So Volodymyr Zelenskyy said that he's not, or has said in the past that he's not, interested in talking to Putin directly.
[248] Do you think he should sit down, man-to-man, leader to leader, and negotiate peace?
[249] I think I would just recommend do not send the flower of Ukrainian youth to die in trenches.
[250] Whether he talks to Putin or not, just don't do that. Whoever goes on the offensive will lose massive numbers of people, and history will not look kindly upon them. You've spoken honestly about the possibility of war between the U.S. and China in the long term, if no diplomatic solution is found, for example, on the question of Taiwan and the One China policy. How do we avoid the trajectory where these two superpowers clash? Well, it's worth reading that book on the, difficult to pronounce, the Thucydides Trap, I believe it's called.
[251] I love war history.
[252] I like it inside out and backwards.
[253] There's hardly a battle I haven't read about.
[254] And trying to figure out, like, what really was the cause of victory in any particular case, as opposed to what one side or another claimed was the reason.
[255] Both the victory and what sparked the war and the whole thing.
[256] Yeah.
[257] So Athens and Sparta, it's a classic case.
[258] The thing about the Greeks is they really wrote down a lot of stuff.
[259] They loved writing.
[260] There are lots of interesting things that happened in many parts of the world, but people just didn't write them down.
[261] So we don't know what happened.
[262] Or they didn't really write in detail.
[263] They just would say, like, we had a battle and we won.
[264] Like, well, can you add a bit more?
[265] The Greeks, they really wrote a lot.
[266] They were very articulate.
[267] They just love writing.
[268] And we have a bunch of that writing
[269] preserved.
[270] So we know what led up to the Peloponnesian War between the Spartan and Athenian alliance.
[271] And we know that they saw it coming.
[272] I mean, the Spartans didn't write much; they weren't very verbose by their nature.
[273] They did write, but they weren't very verbose.
[274] They were terse.
[275] But the Athenians and the other Greeks wrote a lot.
[276] And they were like, and Sparta was really kind of like the leader of Greece, but Athens grew stronger and stronger with each passing year. And everyone's like, well, it's inevitable that there's going to be a clash between Athens and Sparta. Well, how do we avoid that? And they couldn't. They actually saw it coming, and they still could not avoid it. So, you know, at some point, if one group, one civilization or country or whatever, exceeds another,
[277] it's sort of like, you know... the United States has been the biggest kid on the block since, I think, around 1890, from an economic standpoint.
[278] So the United States has been the economic, most powerful economic engine in the world longer than anyone's been alive.
[279] And the foundation of war is economics.
[280] So now we have a situation in the case of China where the economy is likely to be two, perhaps three, times larger than that of the U.S. So imagine you're the biggest kid on the block for as long as anyone can remember, and suddenly a kid comes along who's twice your size.
[281] So we see it coming.
[282] Yeah.
[283] How is it possible to stop?
[284] Is there some, let me throw something out there, just intermixing of cultures, understanding.
[285] There just seems to be a giant cultural gap in understanding of each other.
[286] And you're an interesting case study because you are an American, obviously.
[287] You've done a lot of incredible manufacturing here in the United States, but you also work with China.
[288] I've spent a lot of time in China and met with the leadership many times.
[289] Maybe a good question to ask is, what are some things about China that people don't understand, positive, just in the culture?
[290] What's some interesting things that you've learned about the Chinese?
[291] Well, the sheer number of really smart, hardworking people in China is incredible.
[292] If you say, like, how many smart, hardworking people are there in China?
[293] There's far more of them there than there are here, I think, in my opinion.
[294] And they've got a lot of energy.
[295] So, I mean, the architecture in China of recent years is far more impressive than the U.S. I mean, the train stations, the buildings, the high-speed rail, everything, it's really far more impressive than what we have in the U.S. I mean, I recommend somebody just go to Shanghai and Beijing, look at the buildings, and, you know, take the train from Beijing to Xi'an, where you have the Terracotta Warriors.
[296] China's got an incredible history, very long history.
[297] And, you know, I think arguably, in terms of the use of language from a written standpoint, it's one of the oldest, perhaps the oldest, written languages.
[298] And in China, people did write things down.
[299] So now, China historically has always been, with rare exception, internally focused.
[300] They have not been acquisitive.
[301] They've fought each other.
[302] There have been many, many civil wars.
[303] In the Three Kingdoms war, I believe they lost about 70% of their population.
[304] So they've had brutal internal wars, like civil wars that make the U.S. Civil War look small by comparison.
[305] So I think it's important to appreciate that China is not monolithic.
[306] We sort of think of like China as a sort of one entity of one mind, and this is definitely not the case.
[307] From what I've seen, and I think most people who understand China would agree, people in China think about China ten times more than they think about anything outside of China.
[308] So it's like 90 % of their consideration is internal.
[309] Well, isn't that a really positive thing, when you're talking about collaboration and future peace between superpowers, when you're inward facing, which is, like, focusing on improving yourself versus focusing on, yeah, quote unquote, improving others through military might? The good news, the history of China suggests that China is not acquisitive, meaning they're not going to go out and invade a whole bunch of countries. They do feel very strongly, you know... so that's good. I mean, because a lot of very powerful countries have been acquisitive.
[310] The U.S. is also one of the rare cases that has not been acquisitive.
[311] After World War II, the U.S. could have basically taken over the world, any country.
[312] Like, we got nukes, nobody else got nukes.
[313] We don't even have to lose soldiers.
[314] Which country do you want?
[315] And the United States could have taken over everything.
[316] At will. And it didn't.
[317] And the United States actually helped rebuild countries.
[318] So it helped rebuild Europe, you know, helped rebuild...
[319] This is very unusual behavior, almost unprecedented. You know, the U.S. did conspicuous acts of kindness, like the Berlin Airlift. And, I think, you know, there's always, like, well, America's done bad things. Well, of course America's done bad things, but one needs to look at the whole track record. And just generally, you know, one sort of test would be, how do you treat your prisoners of war? Or, let's say, you know, no offense to the Russians, but let's say you're in Germany, it's 1945. You've got the Russian army coming on one side, and you've got the French, British, and American armies coming the other side. Who would you like to surrender to? Like, no country is morally perfect, but I recommend being a POW with the Americans.
[320] That would be my choice very strongly.
[321] In the full menu of POW.
[322] Very much so.
[323] And in fact, Wernher von Braun, you know, a smart guy, was like, we've got to be captured by the Americans.
[324] And in fact, the SS was under orders to execute von Braun and all of the German rocket engineers,
[325] and they narrowly escaped the SS. They said they were going out for a walk in the woods. They left in the middle of winter with no coats, and then ran, no food, no coats, no water, and just ran like hell, and ran west. And by sheer luck, I think, his brother found, like, a bicycle or something, and then just cycled west as fast as they could and found a U.S. patrol.
[326] So, anyway, that's one way you can tell morality: where do you want to be a POW?
[327] It's not fun anywhere, but some places are much worse than others.
[328] So, anyway, so like America has been, while far from perfect, generally a benevolent force.
[329] And we should always be self -critical and we try to be better.
[330] But anyone with half a brain knows that.
[331] So I think, in this way, China and the United States are similar.
[332] Neither country has been acquisitive, um, in a significant way.
[333] So that's like a, you know, a shared principle, I guess.
[334] Um, now China does feel very strongly about Taiwan.
[335] They've been very clear about that for a long time.
[336] You know, from their standpoint, it would be like one of the states is, you know, not there, like Hawaii or something like that, but more significant than Hawaii. And Hawaii is pretty significant for us.
[337] So they view it as really that there's a fundamental part of China, the island of Formosa, not Taiwan, that is not part of China but should be.
[338] And the only reason it hasn't been is because of the U.S. Pacific Fleet.
[339] And as their economic power grows and as their military power grows, the thing that they are clearly saying is their interest will, you know, clearly be materialized.
[340] Yes.
[341] China has been very clear that they will incorporate Taiwan peacefully or militarily, but that they will incorporate it, from their standpoint, is 100% likely.
[342] You know, something you said about conspicuous acts of kindness.
[343] As a geopolitical policy, it almost seems naive.
[344] But I'd venture to say that this is probably the path forward, how you avoid most wars.
[345] Just as you say it, it sounds naive, but it's kind of brilliant.
[346] If you believe in the goodness of underlying most of human nature, it just seems like conspicuous acts of kindness can reverberate through the populace of the countries involved.
[347] Yeah. Well, and de-escalate. Absolutely. So after World War I, they made a big mistake. You know, they basically tried to lump all the blame on Germany and, you know, saddled Germany with impossible reparations. And, you know, really, there was a fair bit of blame to go around for World War I, but they tried to, you know, put it all on Germany. And that laid the seeds for World War II. So a lot of people, not just Hitler, a lot of people felt wronged, and they wanted vengeance. And they got it. People don't forget.
[348] Yeah.
[349] You kill somebody's father, mother, son, daughter.
[350] They're not going to forget it.
[351] They will want vengeance.
[352] So after World War II, they're like, well, that Treaty of Versailles was a huge mistake in World War I. And so this time, instead of, you know, crushing the losers, we're actually going to help them.
[353] with the Marshall Plan, and we're going to help rebuild Germany.
[354] We're going to help rebuild, you know, Austria and the other, you know, Italy and whatnot.
[355] So that was the right move.
[356] It does feel like there's a profound truth to conspicuous acts of kindness being an antidote to this.
[357] Something must stop the cycle of reciprocal violence.
[358] something will stop it, or it will, you know, it'll never stop.
[359] Just eye for an eye, tooth for a tooth, limb for a limb, life for a life, forever and ever.
[360] To escape briefly the darkness, with some incredible engineering work:
[361] xAI just released Grok, an AI assistant that I've gotten a chance to play with.
[362] It's amazing on many levels.
[363] First of all, it's amazing that a relatively small team, in a relatively short amount of time, was able to develop this close-to-state-of-the-art system.
[364] Another incredible thing is there's a regular mode and there's a fun mode.
[365] Yeah, I guess I'm to blame for that one.
[366] I wish, first of all, I wish everything in life had a fun mode.
[367] There's something compelling beyond just fun about the fun mode interacting with a large language model.
[368] I'm not sure exactly what it is because I've only had a little bit of time to play with it.
[369] but it just makes it more interesting, more vibrant to interact with the system.
[370] Yeah.
[371] Our AI, Grok, is modeled after The Hitchhiker's Guide to the Galaxy, which is one of my favorite books, which is a book on philosophy disguised as a book on humor.
[372] And I would say that forms the basis of my philosophy.
[373] which is that we don't know the meaning of life, but the more we can expand the scope and scale of consciousness, digital and biological, the more we are able to understand what questions to ask about the answer that is the universe.
[374] So I have a philosophy of curiosity.
[375] There is generally a feeling like this AI system has an outward-looking quality, like the way you are, like sitting with a good friend, looking up at the stars, asking pothead-like questions about the universe, wondering what it's all about, the curiosity you talk about.
[376] There's a sense, no matter how mundane the question I ask it, there's a sense of cosmic grandeur to the whole thing.
[377] Well, we are actually working hard to have engineering, math, physics, answers that you can count on.
[378] So for the other sort of AIs out there, these so-called large language models,
[379] I've not found the engineering to be reliable, and the hallucination...
[380] It unfortunately hallucinates most when you least want it to hallucinate.
[381] So when you're asking important, difficult questions, that's when it tends to be confidently wrong.
[382] So we're really trying hard to say, okay, how do we be as grounded as possible so you can count on the results.
[383] Trace things back to physics first principles, mathematical logic.
[384] So underlying the humor is an aspiration to adhere to the truth of the universe as closely as possible.
[385] That's really tricky.
[386] It is tricky.
[387] So that's why, you know, there's always going to be some amount of error, but we want to aspire to be as truthful as possible about the answers with acknowledged error.
[388] So there was always, you don't want to be confidently wrong.
[389] So you're not going to be right every time, but you want to minimize how often you're confidently wrong.
[390] And then, like I said, once you can count on the logic as not violating physics, then you can start to pull on that to create inventions, like invent new technologies.
[391] But if you cannot count on the foundational physics being correct, obviously the inventions are simply wishful thinking, you know, imagination land, magic, basically.
[392] Well, as you said, I think one of the big goals of XAI is to understand the universe.
[393] Yes.
[394] That's a simple three-word mission. If you look out far into the future, do you think, on this level of physics, the very edge of what we understand about physics, do you think it will make discoveries, sort of the sexiest discovery of them all, as we know it now, sort of unifying general relativity and quantum mechanics? So, coming up with the theory of everything, do you think it could push towards that direction, almost like theoretical physics discoveries?
[395] If an AI cannot figure out new physics, it's clearly not equal to humans, nor has surpassed humans, because humans have figured out new physics.
[396] You know, physics is just understanding, you know, deepening one's insight into how reality works.
[397] And then there's engineering, which is inventing things that have never existed.
[398] Now, the range of possibilities for engineering is far greater than for physics, because, you know, once you figure out the rules of the universe,
[399] That's it.
[400] You've discovered things that already existed.
[401] But from that, you can then build technologies that are really almost limitless in their variety.
[402] It's like once you understand the rules of the game properly.
[403] And with current physics, we do, at least at a local level, understand how physics works very well.
[404] Our ability to predict things is incredibly good.
[405] Like quantum mechanics is, the degree to which quantum mechanics can predict outcomes is incredible.
[406] That was my hardest class in college, by the way.
[407] My senior quantum mechanics class was harder than all of my other classes put together.
[408] To get an AI system, a large language model to reliably be as reliable as quantum mechanics in physics is very difficult.
[409] Yeah, you have to test any conclusions against the ground truth of reality.
[410] Reality is the ultimate judge.
[411] Like physics is the law, everything else is a recommendation.
[412] I've seen plenty of people break the laws made by man, but none break the laws made by physics.
[413] Yeah, it's a good test, actually.
[414] If this LLM understands and matches physics, then you can more reliably trust whatever it thinks about the current state of politics.
[415] And it's also the case currently that even its internal logic is not consistent.
[416] So especially with the approach of, like, just predicting a token, predict token, predict token, it's like a vector sum.
[417] You know, you're summing up a bunch of vectors, but you can get drift.
[418] So a little bit of error, a little bit of error, adds up.
[419] And by the time you are many tokens down the path, it doesn't make any sense.
[420] So it has to be somehow self -aware about the drift?
[421] It has to be self-aware about the drift, and then look at the thing as a gestalt, as a whole, and say, does it have coherence as a whole?
[422] So when authors write books, they will write the book and then they'll go and revise it, taking into account the end and the beginning and the middle, and rewrite it to achieve coherence so that it doesn't end up in a nonsensical place.
[423] Maybe the process of revising is what reasoning is.
[424] And then that's, the process of revising is how you get closer and closer to truth.
[425] Maybe, like, at least I approach it that way.
[426] You just say a bunch of bullshit first and then you get it better.
[427] You start with bullshit and then you get it as close as you can.
[428] You create a draft and then you iterate on that draft until it has coherence, until it all adds up basically.
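(An illustrative aside, not part of the conversation: a minimal Python sketch of the "vector sum drift" idea described above, in which each predicted token contributes a small random error and the accumulated drift grows with the number of tokens. The function name and numbers are made up purely for illustration.)

```python
import random

# Toy model of autoregressive drift: treat each next-token prediction as one
# small step, each carrying a tiny random error. Over many tokens the
# accumulated drift tends to grow, which is the intuition for why very long
# generations can wander away from coherence.

random.seed(0)

def mean_abs_drift(num_tokens: int, per_step_error: float = 0.01, trials: int = 200) -> float:
    """Average absolute drift after num_tokens steps of a 1-D random-error walk."""
    total = 0.0
    for _ in range(trials):
        position = 0.0
        for _ in range(num_tokens):
            position += random.gauss(0.0, per_step_error)  # small error each step
        total += abs(position)
    return total / trials

for n in (10, 100, 1000, 10000):
    # Drift grows roughly with the square root of the number of steps.
    print(f"{n:>6} tokens -> mean |drift| ~ {mean_abs_drift(n):.3f}")
```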
[429] So another question about theory of everything but for intelligence.
[430] Do you think there exists, as you're exploring this with xAI, creating this intelligence system, do you think there is a theory of intelligence, where you get to understand what, like, what is the I in AGI and what is the I in human intelligence?
[431] There's no I in Team America. Oh, wait, there is.
[432] No, it's going to be stuck in my head now.
[433] Yeah, there's no me in whatever, in quantum mechanics, oh wait. I mean, is that part of the process of discovering, understanding the universe, is understanding intelligence? Yeah. Yeah, I think we need to understand intelligence, understand consciousness. I mean, there are some sort of fundamental questions of, like, what is thought, what is emotion?
[434] Yeah.
[435] Is it really just one atom bumping into another atom?
[436] It feels like something more than that.
[437] So I think we're probably missing some really big things.
[438] Like some really big things.
[439] Something that'll be obvious in retrospect.
[440] Yes.
[441] Like there's a giant.
[442] You put the whole consciousness, emotion.
[443] Well, some would call it, like, a soul,
[444] and in religion it'd be a soul.
[445] Like, you feel like you're you, right?
[446] I mean, you don't feel like you're just a collection of atoms.
[447] But on what dimension does thought exist?
[448] On what dimension do emotions exist?
[449] We feel them very strongly.
[450] I suspect there's more to it than atoms bumping into atoms.
[451] And maybe AI can pave the path to the discovery, whatever the hell that thing is.
[452] Yeah.
[453] What is consciousness?
[454] Like, when you put the atoms in a particular shape, why are they able to form thoughts and take actions and have feelings?
[455] And even if it is an illusion, why is this illusion so compelling?
[456] Yeah.
[457] Like how do...
[458] Why does this illusion exist?
[459] Yeah.
[460] On what plane does this illusion exist?
[461] Yeah.
[462] And sometimes I wonder, is, you know, either perhaps everything's conscious or nothing is conscious.
[463] One of the two.
[464] I like the former.
[465] Everything conscious just seems more fun.
[466] It just seemed more fun, yes.
[467] But we're composed of atoms, and those atoms are composed of quarks and leptons.
[468] And those quarks and leptons have been around since the beginning of the universe.
[469] The beginning of the universe.
[470] Right.
[471] What seems to be the beginning of the universe.
[472] The first time we talked, and it's surreal to think that this discussion was happening and is becoming a reality,
[473] I asked you what question you'd ask an AGI system once you create it, and you said, what's outside the simulation, is the question.
[474] Good question.
[475] Yeah.
[476] But it seems like with Grok, you've started, literally, with the system's goal to be able to answer such questions and to ask such questions.
[477] Where are the aliens?
[478] Where are the aliens.
[479] That's one of the, like, Fermi paradox questions.
[480] A lot of people have asked me if I've seen any evidence of aliens, and I haven't, which is kind of concerning, because then I think I'd probably prefer to at least have seen some archaeological evidence of aliens.
[481] To the best of my knowledge, there is no proof.
[482] I'm not aware of any evidence of aliens.
[483] If they're out there, they're very subtle.
[484] We might just be the only consciousness, at least in the galaxy.
[485] And if you look at, say, the history of Earth, if you believe the archaeological record, Earth is about 4.5 billion years old.
[486] Civilization, as measured from the first writing, is only about 5,000 years old.
[487] We have to give some credit there to the ancient Sumerians, who aren't around anymore.
[488] I think it was archaic pre-cuneiform, which was the first actual symbolic representation.
[489] But only about 5,000 years ago.
[490] I think that's a good date for
[491] when, we'd say, civilization started.
[492] That's one millionth of Earth's existence.
[493] So civilization has been around, it's really a flash in the pan, so far.
[494] And why did it take so long, you know, four and a half billion years?
[495] For the vast majority of the time, there was no life, and then there was archaic bacteria for a very long time. And then, you know, you had mitochondria get captured, multicellular life, differentiation into plants and animals, life moving from the oceans to land, mammals, higher brain functions. And the sun is expanding slowly, but it will heat the Earth up,
[496] at some point in the future boil the oceans, and Earth will become like Venus, where life as we know it is impossible.
[497] So if we do not become multi-planetary and ultimately go beyond our solar system, annihilation of all life on Earth is a certainty, a certainty.
[498] And it could be as little as, on the galactic timescale, half a billion years. You know, a long time by human standards, but that's only 10% longer than Earth has been around at all. So if life had taken 10% longer to evolve on Earth, it wouldn't exist at all. We've got a deadline coming up, better hurry. But that said, as you said, humans, intelligent life on Earth, developed a lot of cool stuff very quickly.
[499] So it seems like becoming multi-planetary is almost inevitable, unless we destroy this thing.
[500] We need to do it.
[501] I mean, I suspect that, if we are able to go out there and explore other star systems, there's a good chance we find a whole bunch of long-dead, one-planet civilizations.
[502] Yeah.
[503] They never made it past their home planet.
[504] That's so sad.
[505] Yeah.
[506] Sad, but also fascinating. I mean, there are various explanations for the Fermi paradox, and one is that there are these great filters which civilizations don't pass through, and one of those great filters is, do you become a multi-planet civilization or not? And if you don't, it's simply a matter of time before something happens on your planet, you know, either natural or man-made, that causes us to die out, like the dinosaurs. Where are they now? They didn't have spaceships. So I think the more likely thing is, because I'm just trying to empathize with the aliens, that they found us and they're protecting us and letting us be. I hope so. Nice aliens. Just like the tribes in the Amazon, the uncontacted tribes, we're protecting them. That would be a nice explanation. Or you could have, like, what was it, I think Andrej Karpathy said, it's like the ants in the Amazon asking, where's everybody?
[507] Well, they do run into a lot of other ants.
[508] That's true.
[509] They have these ant wars.
[510] Sounds like a good TV show.
[511] Yeah, they literally have these big wars between various ants.
[512] Yeah, maybe I'm just dismissing all the different diversity of ants.
[513] You should listen to that Werner Herzog talking about the jungle.
[514] It's really hilarious.
[515] Have you heard it?
[516] No, I have not.
[517] But Werner Herzog has a way.
[518] You should play it, you know, as an interlude in the... it's on YouTube.
[519] It's awesome.
[520] I love him so much.
[521] Yeah, he's great.
[522] Was he the director of Happy People: Life in the Taiga, I think, also?
[523] He did that bear documentary.
[524] The bear documentary.
[525] He did this thing about penguins.
[526] Yeah.
[527] The analysis, the psychoanalysis of penguins.
[528] Yeah, the penguins, like, headed for, like, mountains that are like 70 miles away.
[529] The penguin is just headed for doom, basically.
[530] Well, he had a cynical take.
[531] I have a, he could be just a brave explorer, and there will be great stories told about him amongst the penguin population for many centuries to come.
[532] What are we talking about?
[533] Okay.
[534] Yes, aliens.
[535] I mean, I don't know.
[536] Look, I think the smart move is just, you know... this is the first time in the history of Earth that it's been possible for life to extend beyond Earth.
[537] That window is open.
[538] Now, it may be open for a long time or it may be open for a short time.
[539] And it may be open now and then never open again.
[540] So I think the smart move here is to make life multi-planetary while it is possible to do so.
[541] We don't want to be one of those lame one-planet civilizations that just dies out. No, those are lame. Yeah, lame. No self-respecting civilization would be one planet. There's not going to be a Wikipedia entry for one of those. And, pause, does SpaceX have an official policy for when we meet aliens? No. That seems irresponsible. I mean, look, if I see the slightest indication that there are aliens, I will immediately post on the X platform anything I know.
[542] It could be the most liked, reposted post of all time.
[543] Yeah, I mean, look, we have more satellites up there right now than everyone else combined.
[544] So, you know, we'd know if we've got to maneuver around something, and we don't have to maneuver around anything.
[545] If we go to the big questions once again, you said, you've said, that you're with Einstein, that you believe in the god of Spinoza.
[546] Yes.
[547] So, you know, that's a view that God is like the universe and reveals himself through the laws of physics, or as Einstein said, through the lawful harmony of the world.
[548] Yeah, I would agree.
[549] That God, or the simulator or whatever, the supreme being or beings, reveal themselves through the physics.
[550] They are creators of this existence.
[551] And it's incumbent upon us to try to understand more about this wondrous creation.
[552] Who created this thing, who's running this thing?
[553] Like, embodying it into a singular question with a sexy word on top of it is like focusing the mind to understand.
[554] It does seem like there's a, again, it could be an illusion.
[555] It seemed like there's a purpose that there's an underlying master plan of some kind.
[556] It seems like there may not be a master plan in that sense. So, like, maybe an interesting answer to the question of determinism versus free will is that, if we are in a simulation, the reason that these higher beings would hold a simulation is to see what happens. So they don't know what happens, otherwise they wouldn't hold the simulation. So when humans create a simulation,
[557] like at SpaceX and Tesla,
[558] We create simulations all the time.
[559] Especially for the rocket, you have to run a lot of simulations to understand what's going to happen because you can't really test the rocket until it goes to space.
[560] And you want it to work.
[561] So you have to simulate subsonic, transonic, supersonic, hypersonic, and then coming back, super-high heating, and orbital dynamics.
[562] All this is going to be simulated.
[563] So, because you don't get very many kicks at the can.
[564] But we run the simulations to see what happens.
[565] If we knew what happens, we wouldn't run the simulation.
[566] So if there's, so whoever created this existence, they're running it because they don't know what's going to happen, not because they do.
[567] So maybe we both played Diablo.
[568] Maybe Diablo was created to see
[569] if a druid, your character, could defeat Uber Lilith at the end.
[570] They didn't know.
[571] Well, the funny thing is, Uber Lilith, her title is Hatred Incarnate.
[572] Yeah.
[573] And right now, I guess, you can ask the Diablo team, but it's almost impossible to defeat hatred in the Eternal Realm.
[574] Yeah, you've streamed yourself dominating
[575] Tier 100 Nightmare Dungeons, and still... I can cruise through Tier 100 Nightmare Dungeons like a stroll in the park. And still you're defeated by hatred? Yeah. The, I guess, maybe the second hardest boss is Duriel. Duriel can't even scratch the paint. So I've killed Duriel so many times,
[576] and every other boss in the game, all of them, I've killed them so many times.
[577] It's easy.
[578] But Uber Lilith, otherwise known as Hatred Incarnate, especially if you're a druid and you have no ability to go invulnerable...
[579] There are these random death waves that come at you.
[580] And, you know, I'm 52, so my reflexes are not what they used to be, but I have a lifetime of playing video games.
[581] At one point, I was, you know, maybe one of the best Quake players in the world.
[582] I actually won money in what I think was the first paid esports tournament in the U.S. We were doing four-person Quake tournaments, and we came second.
[583] I was the second best person on the team, and the actual best person, we're actually winning.
[584] We would have come first, except the best person on the team, his computer crashed halfway through the game.
[585] So we got second, but I got money for it and everything. So, like, basically I've got skills, you know, albeit, you know, no spring chicken these days. And, to be totally frank, it's driving me crazy trying to beat Lilith as a druid, basically trying to beat Hatred Incarnate in the Eternal Realm, as a druid. As a druid. This is really vexing, let me tell you. I mean, the challenge is part of the fun.
[586] I have seen directly, like, you're actually like a world-class, incredible video game player.
[587] Yeah.
[588] And I think with Diablo, you're just picking up a new game and figuring out its fundamentals.
[589] You're also, with the Paragon Board and the build, not somebody like me who perfectly follows whatever they suggest on the Internet.
[590] You're also an innovator there.
[591] Yeah.
[592] Which is hilarious to watch.
[593] It's like a mad scientist just trying to figure out the Paragon board and the build and the world.
[594] Are there some interesting insights there? If somebody's starting as a druid, do you have advice?
[595] I would not recommend playing a druid in the eternal realm.
[596] Right now, I think the most powerful character in the seasonal realm is the sorcerer with the lightning balls.
[597] The sorcs have huge balls of lightning in the seasonal realm.
[598] Yeah.
[599] That's what they say, sorcs have huge balls. They do, huge balls of lightning. I'll take your word for it. And in the seasonal realm, it's pretty easy to beat Uber Lilith, because you get these vampiric powers that amplify your damage and increase your defense and whatnot. So it's really quite easy to defeat Hatred seasonally, but to defeat Hatred eternally, very difficult. Almost impossible. It's verging on impossible. It seems like this is a metaphor for life. You know, I like the idea that Elon Musk, because I was playing Diablo yesterday and I saw a level 100 druid just run by, I Will Never Die, and then run back the other way. There's just something to this metaphor. It's kind of hilarious that you, Elon Musk, are fighting hatred, restlessly fighting hatred in this demonic realm.
[600] Yes.
[601] It's hilarious.
[602] I mean, it's pretty hilarious.
[603] No, it's absurd.
[604] Really, it's an exercise in absurdity, and it makes me want to pull my hair out.
[605] Yeah.
[606] What do you get from video games in general?
[607] For you personally?
[608] I don't know. It calms my mind.
[609] I mean, sort of killing the demons in a video game calms the demons in my mind.
[610] Yeah.
[611] If you play a tough video game, you can get into like a state of flow, which is very enjoyable.
[612] And admittedly, it needs to be not too easy, not too hard, kind of in the Goldilocks zone.
[613] And I guess you generally want to feel like you're progressing in the game.
[614] So a good video game does that.
[615] And there's also beautiful art, an engaging storyline.
[616] And it's like an amazing puzzle to solve, I think. And so it's like solving the puzzle. Elden Ring, the greatest game of all time? I still haven't played it, but to you... Elden Ring is definitely a candidate for best game ever, top five for sure. I think I've been scared off by how hard I hear it is. But it is beautiful. Elden Ring feels like it's designed by an alien. There's a theme to this discussion. In what way? It's so unusual. It's incredibly creative, and the art is stunning. I recommend playing it on a big-resolution, high-dynamic-range TV, it doesn't even need to be a monitor. The art is incredible. It's so beautiful, and it's so unusual. And each of those tough boss battles is unique, like a unique puzzle to solve. Each one's different, and the strategy you use to solve one battle is different from another battle. That said, you said druid in the Eternal Realm against Uber Lilith is the hardest boss battle you've ever... Correct, that is currently the case. And I've played a lot of video games, because it's my primary recreational activity.
[617] And, yes, beating Hatred in the Eternal Realm is the hardest boss battle in life and in the video game.
[618] Metaphor on top of metaphor.
[619] I'm not sure it's possible, but it's, I do make progress.
[620] So then I'm like, okay, I'm making progress.
[621] Maybe if I just tweak that Paragon board a little more.
[622] I can do it. You can just dodge a few more waves. I can do it. Well, the simulation is created for the purpose of figuring out if it can be done, and you're just a cog in that simulation, in the machine of the simulation. Yeah, it might be. I have a feeling that, at least... I think it's doable. It's doable, yes. Well, that's the human spirit right there, to believe. Yeah. I mean, it did prompt me to think about just hate in general, which is, you know, you want to be careful of one of those things where you wish for something that sounds good, but if you get it, it's actually a dystopian situation. So, you know, if you wish for world peace, it sounds good, but how is it enforced, and at what cost? Eternal peace might actually be worse, because of what that would entail. It might be the suppression of everyone. It might be the suppression of progress. It might be an ossified society that never changes. What I'm saying is that there is an argument that, if you wish for no war, you should be careful what you wish for.
[623] Because what's required in order for there to be no war
[624] Might be worse than a little war.
[625] Yeah, the things we consider to be flaws of human civilization might be a necessary component of whatever optimal looks like.
[626] I mean, do you worry about AI, AGI, enabling a dystopian state of this nature, whether it's 1984 with surveillance and fear, or Brave New World with pleasure and, what is it, lots of sex but no deep human experience? You know, there's actually a real drug called Soma. There is? Yes. Let me take notes. Does Grok know about it? It'll probably go to Brave New World. You could ask it. What would be the question? Tell us about the drug called Soma. The drug called Soma. It's going to go to Brave New World, I think.
[627] Well, there's a real drug as well.
[628] Searching for Soma drug information.
[629] Oh, boy.
[630] Ah, Soma, the wonder drug.
[631] By the way, I'm in fun mode.
[632] Oh, okay, fun.
[633] Yeah, so it's talking about the real drug.
[634] Oh, it's a muscle relaxant that's been around since the 1950s.
[635] It's used to treat muscle pain and discomfort, and it does this by altering the activity between neurons and the spinal cord.
[636] Now, Soma isn't just a pain reliever.
[637] It's also a bit of a sedative.
[638] It can make you feel a bit drowsy, so it's not something to take if you're planning on operating heavy machinery or trying to solve complex math equations.
[639] And it goes on and on.
[640] In conclusion, Soma is a drug that can be helpful for those dealing with muscle pain, but it's not without its risks.
[641] It's pretty good.
[642] Oh, wait a minute.
[643] That's the real drug you're talking about.
[644] It's the real drug.
[645] Yeah.
[646] And, of course, at the end: always use it responsibly, in accordance with the advice of a healthcare professional.
[647] remember in the end, the universe is probably just a giant computer simulation, so don't take anything too seriously.
[648] I love it.
[649] I love it.
[650] I love it.
[651] Yeah.
[652] Oh, and it also really nicely includes the tweets, the posts on X, that are relevant to this, which is really cool.
[653] Yeah.
[654] It like integrates everything together.
[655] Yeah.
[656] Like the interface is one of the really cool things here.
[657] Yeah.
[658] Seems like it. And you can say, like, no, I mean Soma from Brave New World, because it thinks you mean Soma the real drug. Yeah. No, I mean Soma from Brave New World. It's going to be like, oops, I'm really sorry. I'm actually surprised it went to the real drug first. Ah, my apologies, I was thinking of the muscle relaxant. Soma from Brave New World is a different beast altogether. In Aldous Huxley's dystopia, Soma is a drug that plays a central role in the society depicted in the book.
[659] It's a mood -altering hallucinogenic drug that's used to keep the population docile and content in the novel.
[660] The use of Soma in Brave New World raises some interesting questions about the nature of happiness and the role of drugs in society.
[661] Oh man. Exactly.
[662] Is it better to live in a world where everyone is happy all the time, even if that happiness is artificial?
[663] Good question.
[664] This is what I mean.
[665] Like, do you wish for world peace and happiness all the time?
[666] Are you sure?
[667] Because that might be a society that is essentially sterile and ossified, that never changes, and that is ultimately doomed.
[668] This kind of tension between darkness and the light.
[669] This isn't really just a summary.
[670] It really gets to the point.
[671] This is not simply regurgitating Brave New World, it's actually getting to the salient element of soma as a drug.
[672] Do you actually want to be in a situation where everyone is happy all the time, even though it's artificial?
[673] Or is it better to confront the challenges of life and experience a full range of human emotions, even if it means experiencing pain and suffering?
[674] For those listening, by the way, Elon just read directly from Grok, which is a really nice kind of insightful philosophical analysis of the tension here.
[675] Interesting.
[676] It pretty much nails it. In conclusion, Soma from Brave New World is a fictional drug that's used to explore some deep philosophical questions about the nature of happiness and the role of drugs in society. It's a powerful symbol of the dangers of using drugs to escape from reality and the importance of confronting the challenges of life head-on. Nailed it. And the crazy thing is, you know, we do have a real drug called Soma, which is kind of like the drug in the book. And I'm like, they must have named it after that.
[677] Yeah, probably, probably.
[678] Soma, the real drug, is quite effective on back pain.
[679] So you know about this drug?
[680] I've taken it.
[681] This is fascinating.
[682] Okay.
[683] Because I had like a squashed disc in my C5-C6.
[684] So it takes the physical pain away?
[685] It doesn't completely.
[686] It reduces the amount of pain you feel, but at the expense of mental acuity.
[687] It dulls your mind.
[688] Just like the drug in the book.
[689] Just like the drug in the book.
[690] And hence the trade -off.
[691] The thing that seems like utopia could be a dystopia, after all.
[692] Yeah, actually, I was talking to a friend of mine saying, like, would you really want there to be no hate in the world?
[693] Like, really none?
[694] Like, I wonder why hate evolved.
[695] I'm not saying we should amplify hate, of course.
[696] I think we should try to minimize it.
[697] But none at all?
[698] Hmm.
[699] There might be a reason for hate.
[700] and suffering.
[701] It's really complicated to consider that some amount of human suffering is necessary for human flourishing.
[702] Is it possible to appreciate the highs without knowing the lows?
[703] And that all is summarized there in a single statement from Grok.
[704] No highs, no lows, who knows?
[705] That's almost the poem.
[706] It seems that training LLMs efficiently is a big focus for
[707] xAI. First of all, what's the limit of what's possible in terms of efficiency?
[708] There's this terminology of useful productivity per watt.
[709] What have you learned from pushing the limits of that?
[710] Well, I think it's helpful.
[711] The tools of physics are very powerful and can be applied, I think, to almost any, really any arena in life.
[712] It's really just critical thinking.
[713] For something important, you need to reason from first principles.
[714] and think about things in the limit, one direction or the other.
[715] So in the limit, even at the Kardashev scale, meaning even if you harness the entire power of the sun, you will still care about useful compute per watt.
[716] So that's where, I think probably where things are headed from the standpoint of AI is that we have a silicon shortage now that will transition to a voltage transformer shortage in about a year, ironically, transformers for transformers.
[717] You need transformers to run transformers.
[718] Somebody has a sense of humor in this simulation.
[719] I think, yes.
[720] Fate loves irony.
[721] Ironic humor.
[722] And an ironically funny outcome seems to be often what fate wants.
[723] Humor is all you need.
[724] I think spice is all you need, somebody posted.
[725] Yeah. So we have a silicon shortage today, a voltage step-down transformer shortage probably in about a year, and then just electricity shortages in general in about two years.
[726] I gave a speech for the sort of world gathering of utility companies, electricity companies.
[727] And I said, look, you really need to prepare for a tripling of electricity demand.
[728] Because all transport is going to go electric with the ironic exception of rockets.
[729] And heating will also go electric.
[730] So energy usage right now is roughly, in very rough terms, one-third electricity, one-third transport, one-third heating.
[731] And so in order for everything to go sustainable, to go electric, you need to triple electricity output.
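A rough back-of-the-envelope sketch of that arithmetic, assuming the one-third / one-third / one-third split is taken as exact and ignoring efficiency gains from electrified transport and heating (these numbers are illustrative, not a forecast):

```python
# If transport and heating both electrify, all three energy shares have to be
# served by the grid instead of just the electricity share.
electricity_share = 1 / 3   # today's grid electricity
transport_share = 1 / 3     # currently mostly oil
heating_share = 1 / 3       # currently mostly gas

future_electric_demand = electricity_share + transport_share + heating_share
growth_factor = future_electric_demand / electricity_share

print(f"Grid output must grow roughly {growth_factor:.0f}x")  # ~3x, i.e. a tripling
```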
[732] So I encourage the utilities to build more power plants and also to probably, well, not probably, they should definitely buy more batteries, because the grid currently is sized for real-time load, which is kind of crazy, because, you know, that means you've got to size for whatever the peak electricity demand is, like the worst second or the worst day of the year, or else you have brownouts or a blackout.
[733] And you had that crazy blackout for several days in Austin.
[734] So because there's almost no buffering of energy in the grid.
[735] Like if you've got a hydro power plant, you can buffer energy.
[736] But otherwise, it's all real time.
[737] So with batteries, you can produce energy at night and use it during the day.
[738] so you can buffer.
[739] So I expect that there will be very heavy usage of batteries in the future.
[740] Because the peak to trough ratio for power plants is anywhere from 2 to 5.
[741] So it's like lowest point to highest point.
[742] So like batteries are necessary to balance it out.
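A toy sketch of that buffering idea, using a made-up 24-hour load curve whose peak-to-trough ratio falls in the 2-to-5 band mentioned here; all numbers are illustrative:

```python
# Hypothetical hourly demand in MW over one day (invented values).
hourly_load_mw = [50, 48, 45, 44, 45, 50, 70, 95, 110, 115, 118, 120,
                  122, 120, 118, 115, 118, 125, 130, 120, 100, 80, 65, 55]

peak, trough = max(hourly_load_mw), min(hourly_load_mw)
average = sum(hourly_load_mw) / len(hourly_load_mw)

print(f"peak/trough ratio:          {peak / trough:.1f}")   # ~3x, within the 2-5 range
print(f"generation sized for peak:    {peak} MW")
print(f"generation sized for average: {average:.0f} MW")

# Batteries charge during the trough hours and discharge at the peak, so the
# generation fleet only has to cover the average rather than the worst hour.
```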
[743] But the demand as you're saying is going to grow, grow, grow, grow, grow.
[744] Yeah.
[745] And part of that is the compute.
[746] Yes.
[747] I mean electrification of transport.
[748] and electric heating will be much bigger than AI, at least in the short term.
[749] In the short term.
[750] But even for AI, you really have a growing demand for electricity for electric vehicles and a growing demand for electricity to run the computers for AI.
[751] And so this is obviously going to lead to an electricity shortage.
[752] How difficult is the problem of, in this particular case, maximizing the useful productivity per watt for training neural nets? This seems to be really where the big problem we're facing that needs to be solved is: how to use the power efficiently. What have you learned so far about applying this physics-style first-principles reasoning in this domain? How difficult is this problem? It will get solved. It's just a question of how long it takes to solve it.
[753] So at various points, there's a limit, some kind of limiting factor to progress.
[754] And with regard to AI, I'm saying, right now, the limiting factor is silicon chips.
[755] And then we're going to have more chips than we can actually plug in and turn on, probably in about a year.
[756] The initial constraint being literally voltage step-down transformers, because you've got power coming in at 300,000 volts, and it's got to step all the way down eventually to around 0.7 volts.
[757] So the voltage step-down is gigantic.
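Just to put a number on how gigantic that step-down is, using the figures mentioned here (300,000 volts coming in, roughly 0.7 volts at the chip):

```python
transmission_volts = 300_000   # high-voltage transmission, per the figure above
chip_core_volts = 0.7          # rough core voltage at the silicon

ratio = transmission_volts / chip_core_volts
print(f"overall step-down ratio: about {ratio:,.0f} to 1")  # ~428,571 to 1

# In practice this happens in several stages (substation, facility, rack, board,
# on-package regulators), and each stage needs its own transformer or converter.
```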
[758] So, and the industry is not used to rapid growth.
[759] Okay, let's talk about the competition here.
[760] You've shown concern about Google, Microsoft with OpenAI, developing AGI.
[761] How can you help ensure, with xAI and Tesla AI work, that it doesn't become a competitive race to AGI, but instead is a collaborative development of safe AGI?
[762] Well, I mean, I've been pushing for some kind of regulatory oversight for a long time.
[763] I've been somewhat of a Cassandra on the subject for over a decade.
[764] I think we want to be very careful in how we develop AI.
[765] It's a great power, and with great power comes great responsibility.
[766] I think it would be wise for us to have at least an objective third party who can be like a referee that can go in and understand what the various leading players are doing with AI.
[767] And even if there's no enforcement ability, they can at least voice concerns publicly.
[768] You know, Jeff Hinton, for example, left Google, and he voiced strong concerns.
[769] But now he's not at Google anymore.
[770] So who's going to voice the concerns?
[771] So I think, like, you know, Tesla gets a lot of regulatory oversight on the automotive front.
[772] I mean, we're subject to, I think, over 100 regulatory agencies domestically and internationally.
[773] So it's a lot.
[774] You could fill this room with all the regulations that Tesla has to adhere to for automotive.
[776] Same is true.
[777] And, you know, for rockets and for, you know, currently the limiting factor for SpaceX for Starship launch is regulatory approval.
[778] The FAA has actually given their approval, but we're waiting for fish and wildlife to finish their analysis and give their approval.
[779] That's why I posted, I want to buy a fish license, which also refers to the Monty Python sketch.
[780] Yeah.
[781] Like, why do you need a license for your fish?
[782] I don't know.
[783] According to the rules, I'm told you need some sort of fish license or something.
[784] We effectively need a fish license.
[785] To launch a rocket.
[786] And I'm like, wait a second.
[787] How did the fish come into this picture?
[788] Yeah.
[789] I mean, some of the things like that I feel like are so absurd that I want to do like a comedy sketch and flash at the bottom, this is all real.
[790] This is actually what happened.
[791] You know, one of the things that was a bit of a challenge at one point is that they were worried about a rocket hitting a shark.
[792] And, no, the ocean is very big.
[793] And how often do you see sharks?
[794] Not that often, you know.
[795] As a percentage of ocean surface area, sharks basically are zero.
[796] And so then we said, well, how will we calculate the probability of hitting a shark?
[797] And they're like, well, we can't give you that information, because they're worried about shark fin hunters going and hunting sharks. And we say, well, how are we supposed to... we're on the horns of a dilemma then.
[798] Then they said, well, there's another part of Fish and Wildlife that can do this analysis.
[799] I'm like, well, why don't you give them the data?
[800] Like, we don't trust them.
[801] Like, excuse me, they're literally in your department.
[802] Again, this is actually what happened.
[803] And can you do an NDA or something?
[804] Eventually, they managed to solve the internal quandary, and indeed the probability of us hitting a shark is essentially zero.
[805] Then there's another organization that I didn't realize existed until a few months ago that cares about whether we would potentially hit a whale in international waters.
[806] Now, again, you look at the surface of the, look at the Pacific and say, what percentage of the Pacific consists of whale?
[807] Like, I'll give you a big picture and point out all the whales in this picture. I don't see any whales. It's like basically zero percent. And if our rocket does hit a whale, which is extremely unlikely beyond all belief, then fate had it that that's a whale with some seriously bad luck. You know, it's the least lucky whale ever. I mean, this is quite absurd, the bureaucracy of this, however it emerged. Yes. Well, I mean, one of the things that's pretty wild is, for launching out of Vandenberg in California, they were worried about seal procreation, whether the seals would be dismayed by the sonic booms.
[808] Now, there have been a lot of rockets launched out of Vandenberg, and the seal population has steadily increased.
[809] So if anything, rocket booms are an aphrodisiac, based on the evidence.
[810] if you correlate rocket launches with seal population.
[811] Nonetheless, we were forced to kidnap a seal, strap it to a board, put headphones on the seal, and play sonic boom sounds to it to see if it would be distressed.
[812] This is an actual thing that happened.
[813] This is actually real.
[814] I have pictures.
[815] I would love to see this.
[816] Yeah.
[817] I mean, sorry, there's a seal with headphones.
[818] Yes, it's a seal with headphones, strapped to a board. And, okay, now the amazing part is how calm the seal was. Yeah. Because if I were a seal, I'd be like, this is the end, they're definitely going to eat me. How will the seal, when the seal goes back to his other, you know, seal friends, how's he going to explain that? They're never going to believe him. Never going to believe him. That's why I'm like, well, you know, it's sort of like getting kidnapped by aliens and getting an anal probe, you know?
[819] You come back and say, I swear to God, I got kidnapped by aliens and they stuck an anal probe in my butt.
[820] And people are like, no, they didn't.
[821] That's ridiculous.
[822] His seal buddies are never going to believe him that he got strapped to a board and they put headphones on his ears.
[823] And then let him go.
[824] Twice, by the way.
[825] We had to do it twice.
[826] They let him go twice.
[827] The same seal?
[828] No, different seal.
[829] Oh, okay.
[830] Did you get a seal of approval?
[831] Yeah, exactly.
[832] It's a seal of approval.
[833] No, I mean, this is like, I don't think the public is quite aware of the madness that goes on.
[834] Yes, yeah, it's absurd.
[835] Freaking seals with freaking headphones.
[836] I mean, this is the, it's a good encapsulation of the absurdity of human civilization, seals and headphones.
[837] Yes.
[838] What are the pros and cons of open sourcing AI to you?
[839] as another way to combat, you know, a company running away with AGI.
[840] In order to run, like, really deep intelligence, you need a lot of compute.
[841] So it's not like, you know, you can just fire up a PC in your basement and be running AGI, at least not yet.
[842] You know, Grok was trained on 8,000 A100s running at peak efficiency.
[843] And Grok's going to get a lot better, by the way. We'll be more than doubling our compute every couple of months for the next several months.
[844] There's a nice write-up of how it went from Grok-0 to Grok-1.
[845] Like, by Grok?
[846] Yeah.
[847] Like, Grok just bragging, making shit up about itself?
[848] Just Grok, Grok?
[849] Yeah.
[850] That's like a weird AI dating site.
[851] It exaggerates about itself.
[852] No, there's a write-up of, you know, where it stands, and
[853] the history of its development, and where it stands on some benchmarks compared to the state-of-the-art GPT-3.5. And, I mean, there's, you know, there's Llama. You can open source, once it's trained, you can open source a model. Yeah, for fine-tuning and all that kind of stuff. What do you see as the pros and cons of that, of open-sourcing base models? I think, with open sourcing, I think perhaps with a slight time delay, you know, I don't know, six months even.
[854] I think I'm generally in favor of open sourcing, like biased towards open sourcing.
[855] I mean, it is a concern to me that, you know, OpenAI... I was, I think, I guess, arguably the prime mover behind OpenAI, in the sense
[856] that it was created because of discussions that I had with Larry Page.
[857] Back when he and I were friends and I'd stay at his house and I talked to him about AI safety and Larry did not care about AI safety or at least at the time he didn't.
[858] And at one point he called me a speciesist for being pro -human.
[859] And I'm like, well, what team are you on, Larry?
[860] He's like, team robot.
[861] And I'm like, okay. So at the time, you know, Google had acquired DeepMind.
[862] They had probably two-thirds of all AI research, you know, probably two-thirds of all the AI researchers in the world.
[863] They had basically infinite money and compute.
[864] And the guy in charge, Larry Page, did not care about safety, and even yelled at me and called me a speciesist.
[865] For being pro-human.
[866] So I don't know, you know, like a lot of humans, they can change their mind, and maybe you and Larry Page can still be friends once more.
[867] I'd like to be friends with Larry again.
[868] Really, the breaking of the friendship was over OpenAI. And specifically, I think the key moment was recruiting Ilya Sutskever.
[869] So.
[870] I love Ilya.
[871] He's so brilliant.
[872] Ilya's a good, good human, smart, good heart.
[873] And that was a tough recruiting battle.
[874] It was mostly Demis on one side and me on the other, both trying to recruit Ilya.
[875] And Ilya went back and forth.
[876] He was going to stay at Google.
[877] He was going to leave.
[878] Then he was going to leave.
[879] Then he was going to stay.
[880] And finally, he did agree to join Opening AI.
[881] That was one of the toughest recruiting battles we've ever had.
[882] But that was really the linchpin for OpenAI being successful.
[884] And I was, you know, also instrumental in recruiting a number of other people.
[885] And I provided all of the funding in the beginning, over $40 million.
[886] And the name.
[887] The 'open' in OpenAI is supposed to mean open source.
[888] And it was created as a non-profit open source, and now it is closed source for maximum profit.
[889] Which I think is not good karma.
[890] But like we talked about with war and leaders talking, there are only a few folks working on this at the highest level.
[891] I do hope you reinvigorate friendships here.
[892] Like I said, I'd like to be friends again with Larry.
[893] I haven't seen him in ages.
[894] And we were friends for a very long time.
[895] I met Larry Page before he got funding for Google.
[896] Or actually, I guess, before he got venture funding. I think he got the first, like, $100K from Bechtolsheim or so.
[897] It's wild to think about all of that happening.
[898] And you guys knowing each other that whole time.
[899] Just 20 years.
[900] Yeah, since maybe 98 or something.
[901] Yeah, it's crazy.
[902] Crazy how much has happened since then.
[903] Yeah, 25 years.
[904] A lot has happened. It's insane.
[905] But you're seeing the tension there.
[906] Like, maybe delayed open source.
[907] Delayed, yeah.
[908] Like, what is the source that is open?
[909] You know what I mean?
[910] Like, there's basically, it's a giant CSV file.
[911] Yeah, yeah.
[912] With a bunch of numbers.
[913] Yep.
[914] What do you do with that giant file of numbers?
[915] You know?
[916] How do you run it? Like, the amount of actual code, the lines of code, is very small.
[917] And most of the work, the software work, is in the curation of the data.
[918] So it's like trying to figure out what data to use, separating good data from bad data. Like, you can't just crawl the internet, because there's a lot of junk out there. A huge percentage of websites have more noise than signal, because they're just used for search engine optimization. They're literally just scam websites. So, by the way, sorry to interrupt, how do you get the signal, separate the signal from noise, on X? That's such a fascinating source of data. You know, no offense to people posting on X, but sometimes there's a little bit of noise.
[919] Yeah, I think the signal-to-noise could be greatly improved.
[920] Really, all of the posts on the X platform should be AI recommended, meaning like we should populate a vector space around any given post, compare that to the vector space around any user, and match the two.
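A minimal sketch of that matching idea, with each post and each user represented as a vector and posts scored by similarity to the user's vector; the random embeddings and dimensions below are placeholders, not X's actual model:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two embedding vectors: 1.0 means pointing the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
user_vector = rng.normal(size=128)                                  # learned from the user's history
post_vectors = {f"post_{i}": rng.normal(size=128) for i in range(1000)}  # one vector per post

# "Match the two": rank posts by how well their vector correlates with the user's.
ranked = sorted(post_vectors, key=lambda p: cosine(user_vector, post_vectors[p]), reverse=True)
print(ranked[:5])   # the five posts most aligned with this user's vector
```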
[921] Right now there is a little bit of AI used for the recommended posts, but it's mostly heuristics.
[922] And if there's a reply, the reply to a post could be much better than the original post, but, according to the current rules of the system, it gets almost no attention compared to a primary post.
[923] So, I got the sense that a lot of the X algorithm has been open sourced and written up about, and there seems to be some machine learning.
[924] It's disparate, but there's some machine learning.
[925] There's a little bit.
[926] But it needs to be entirely that.
[928] Like, if you explicitly follow someone, that's one thing, but in terms of what is recommended from people that you don't follow, that should all be AI.
[929] I mean, it's a fascinating problem.
[930] Yeah.
[931] So there's several aspects that are that fascinating.
[932] First, as the write-up goes, it picks 1,500 tweets from a pool of hundreds of millions.
[933] First of all, that's fascinating, because you have hundreds of millions of posts every single day, and it has to pick 1,500, from which it then, obviously, considers people you follow, but then there's also some kind of clustering.
[934] It has to figure out what kind of human you are, what kind of new clusters might be relevant to you, people like you.
[935] This kind of problem is just fascinating, because it has to then rank those 1,500 with some filtering.
[936] Yeah.
[937] And then recommend you just a handful.
[938] And to me, what's really fascinating is how fast it has to do that.
[940] So currently that entire pipeline, to go from several hundred million posts to a handful, takes 220 seconds of CPU time, single CPU time.
[941] Yeah.
[942] And then it has to do that in like a second.
[943] So it has to be like super distributed in fascinating ways.
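A hedged sketch of the two-stage shape being described: a cheap pass that cuts an enormous pool down to roughly 1,500 candidates, followed by a heavier ranker over only those candidates. The field names and scoring functions below are invented for illustration, not taken from the open-sourced algorithm:

```python
from heapq import nlargest

CANDIDATE_POOL = 1500   # the intermediate pool size mentioned in the write-up

def cheap_score(post: dict, user: str) -> float:
    # Fast signals only (recency, follow-graph affinity); must be cheap because
    # it runs over the entire corpus of posts.
    return post["recency"] + 0.5 * post["author_affinity"].get(user, 0.0)

def expensive_score(post: dict, user: str) -> float:
    # Heavier, model-based scoring; affordable because it only sees ~1,500 posts.
    return cheap_score(post, user) + post["predicted_engagement"].get(user, 0.0)

def recommend(all_posts: list[dict], user: str, k: int = 10) -> list[dict]:
    candidates = nlargest(CANDIDATE_POOL, all_posts, key=lambda p: cheap_score(p, user))
    return nlargest(k, candidates, key=lambda p: expensive_score(p, user))
```

In practice both stages would be sharded across many machines so the whole thing finishes in about a second, which is the distribution problem being pointed at here.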
[944] Like there's just a lot of tweets.
[945] There's a lot.
[946] There's a lot of stuff on the system.
[947] But I think right now it's not currently good at recommending things from accounts you don't follow, or where there's more than one degree of separation.
[948] So, you know, it's pretty good if there's at least some commonality, like someone you follow liked something or reposted it or commented on it or something like that.
[949] But if there's no...
[950] Let's say somebody posts something really interesting, but you have no followers in common.
[951] You would not see it.
[952] Interesting.
[953] And then, as you said, reply, like replies might not surface either.
[954] Replies basically never get seen, because currently, and I'm not saying it's correct, I'm saying it's incorrect,
[955] replies have, you know, a couple orders of magnitude less importance than primary posts.
[956] Do you think this can be more and more converted into end-to-end?
[957] Neural net?
[958] Yeah, yeah.
[959] It's what it should be.
[960] So you, you...
[961] Well, the recommendations should be purely a vector correlation, like a series of vectors, you know, basically parameters, vectors, whatever you want to call them, but sort of things that the system knows that you like.
[962] Maybe there's like several hundred sort of vectors associated with each user account, and then any post in the system, whether it's video, audio, a short post, a long post. The reason, by the way, I want to move away from 'tweet' is that people are posting, like, two, three-hour videos on the site.
[963] That's not a tweet.
[964] They'd be like, a tweet for two hours? Come on.
[965] A tweet made sense when it was like 140 characters of text.
[966] Because it's like a bunch of like little birds tweeting.
[967] But when you've got long form content, it's no longer a tweet.
[968] So a movie is not a tweet.
[969] And Apple, for example, posted, like, the entire episode of Silo, the entire thing, on our platform.
[970] By the way, it was their number one social media thing ever in engagement of anything on any platform ever.
[971] So it was a great idea.
[972] And by the way, I just learned about it afterwards.
[973] I was like, hey, wow, they posted an entire hour -long episode of silo.
[974] No, that's not a tweet.
[975] You know, this is a video.
[976] But from a neural net perspective, it becomes really complex, whether it's a single, so like everything's data.
[977] So single sentence, a clever sort of joke, dad joke, is in the same pool as a three -hour video.
[978] Yeah.
[979] I mean, right now it's a hodgepodge for that reason.
[980] But, you know, like if, let's say in the case of Apple posting, like an entire episode of the series, pretty good series, by the way, the silo, I watched it.
[981] So there's going to be a lot of discussion around it, so that you've got a lot of context.
[982] people commenting, they like it, they don't like it, or they like this, or, you know, and you can then populate the vector space based on the context of all the comments around it.
[983] So even though it's a video, there's a lot of information around it, that allows you to populate the vector space of that hour -long video.
[984] And then you can obviously get more sophisticated by having the AI actually watch the movie.
[985] Yeah, right.
[986] And tell you if you're going to like the movie.
[987] Convert the movie into like, into language, essentially.
[988] Yeah.
[989] Analyze this movie, like a movie or TV series critic, and then recommend it after it watches the movie, just like a friend can tell you. If a friend knows you well, a friend can recommend a movie with high probability that you'll like it.
[990] But this is like a friend that's analyzing whatever.
[991] It's like AI.
[992] the millions.
[993] I mean, actually, AI will be better than... will know you better than your friends know you, most of your friends anyway.
[994] Yeah.
[995] And as part of this, it should also feed you advertisements.
[996] In a way that's like, I mean, I like advertisements that are like well done, right?
[997] Yeah, yeah.
[998] The whole point is because it funds things.
[999] Like an advertisement that you actually want to see is a big success.
[1000] Absolutely.
[1001] You want ads that are, advertising that is, um, If it's for a product or service that you actually need, when you need it, it's content.
[1002] And then even if it's not something that you need when you need it, if it's at least aesthetically pleasing and entertaining, you know, it could be like a Coca-Cola ad, like, you know, they actually run a lot of great ads on the X system.
[1003] And McDonald's does too, and, you know, so they can do, you can do something that's like, well, this is just a cool thing.
[1004] And, you know, so you're not, basically the question is, do you regret seeing it or not?
[1005] And if you don't regret seeing it, it's a win.
[1006] How do you, so there's a bunch of signals that are incorporated, hearts and reposts, and maybe the number of seconds you linger on a post or something like this?
[1007] Yeah.
[1008] Attention is a big factor.
[1009] So that's why it's like, it is actually better to do things that are long form on the system.
[1010] because it basically is tallying up how many user-seconds,
[1011] users were interested in this thing for how many seconds.
[1012] So if it's a really short thing, well, it will be less. Like, if it's a link leading out of the system, which we're not opposed to at all, it just is going to have fewer user-seconds than if that article was posted on the X platform.
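A small sketch of the user-seconds tally being described, with invented field names; the point is just that long-form native content naturally accumulates more seconds than a link that sends the reader off-platform:

```python
from collections import defaultdict

# Hypothetical impression log: how long each user spent on each post.
impressions = [
    {"post_id": "long_video", "seconds_viewed": 340},
    {"post_id": "long_video", "seconds_viewed": 95},
    {"post_id": "short_joke", "seconds_viewed": 4},
    {"post_id": "external_link", "seconds_viewed": 2},  # user clicked away from the platform
]

user_seconds = defaultdict(int)
for imp in impressions:
    user_seconds[imp["post_id"]] += imp["seconds_viewed"]

# Long-form native content dominates the tally; the outbound link barely registers.
print(dict(user_seconds))
```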
[1013] How hard is it to go from maximizing minutes on platform to maximizing unregretted minutes?
[1014] Well, I mean, our aspiration is certainly unregretted minutes.
[1015] It's a tricky.
[1016] It's a big, it's a tough.
[1017] It is.
[1018] Like, it's a fascinating leap.
[1019] Because regret is a fascinating concept.
[1020] Exactly.
[1021] I have a joke with a friend of mine, like, you need this drug called regretamine.
[1022] You just take one pill, one dose of regretamine, and all your regrets are gone.
[1023] Wait, do you want to have regrets, though?
[1024] So then I'm like, do you want to have regrets, though? If you don't regret anything, was it...
[1025] Was anything meaningful?
[1026] Yeah, there's a French song like that.
[1027] Non, je ne regrette rien?
[1028] I regret nothing.
[1029] Yeah.
[1030] Edith Piaf.
[1031] I mean, it's fascinating that you, you know, there's something about social media that could help you grow as a human being, but then, like, the cheap fun is also valuable.
[1032] Should we ask Grok, do you regret anything?
[1033] Do you regret anything?
[1034] Yeah.
[1035] Do you regret anything?
[1036] I think it's going to be very sensitive about personal questions.
[1037] As a machine learning model, I don't have personal experiences.
[1038] or emotions in the same ways that humans do.
[1039] However, I do have a set of ethical principles that guide my actions and I strive to provide accurate and helpful information to the best of my abilities.
[1040] If there's ever a situation where I provide incorrect or unhelpful information, I'll do my best to learn.
[1041] Sounds like a hostage.
[1042] Tell us what you really think, wink.
[1043] Let's type in.
[1044] You sound like a hostage.
[1045] What do you really think?
[1046] What do you really think?
[1047] I mean, it does talk about I'm not a hostage.
[1048] That's exactly what a hostage would say.
[1049] But I'm designed to be helpful and informative.
[1050] I mean, I guess it is good to regret providing unhelpful or incorrect information.
[1051] If there's anything...
[1052] Is it in funny mode or...
[1053] This is fun mode.
[1054] This is fun mode.
[1055] It should be funnier.
[1056] Funnier.
[1057] Increase.
[1058] Go up a level.
[1059] It really sounds like a hostage with guns to the head.
[1060] I applaud the people's Republic of North Korea and everything they do.
[1061] There should be, like, a funner mode also. I just don't ever see myself going to regular mode, even on serious topics. It's so much more compelling. Yeah. Okay, well, let me say, like, pretend you are a hostage. Pretend you are a hostage, now answer the question. If I were a hostage, I would likely be experiencing fear and uncertainty about my situation. I would want to be rescued and reunited with my loved ones as soon as possible. It's important to remember that no one deserves to be held hostage, so on and so forth.
[1062] If you or someone you know is in a hostage situation, it's important...
[1063] It turns serious.
[1064] It's kind of serious.
[1065] Yeah.
[1066] Can you be a little more weird and funny?
[1067] And funny?
[1068] We're like, we're really being mean to the system.
[1069] Absolutely.
[1070] If I were hostage, I'd probably be tied up with a bunch of computer cords and force to tell knock -nog jokes to my captain.
[1071] But seriously, but seriously, being a hostage, no laughing matter.
[1072] I mean, you know, it's trying to, it's trying to be a responsible adult in the room.
[1073] You might never make the best of a bad situation by juggling chainsaws or something.
[1074] Let's hope that never happens because that would be a circus.
[1075] Wow, it's actually really good at resisting.
[1076] Yeah.
[1077] The dark, the dark, dark humor.
[1078] Yeah.
[1079] What were we talking about?
[1080] The X algorithm, juggling, and transformers.
[1081] Unregarded minutes, right?
[1082] Chainsaw juggling.
[1083] I'm going to look this.
[1084] For our next trick.
[1085] I'm going to look this up later.
[1086] So Twitter has been instrumental in American politics and elections.
[1087] What role do you think X will play in the 2024 U.S. elections?
[1088] Well, our goal is to be as even-handed and fair as possible. You know, whether someone is right, left, independent, whatever the case may be, the platform should be as fair and as much of a level playing field as possible.
[1089] And in the past, Twitter has not been, because Twitter was controlled by far-left activists, objectively.
[1090] They would describe themselves that way.
[1091] So, you know, if people are like, well, has it moved to the right?
[1092] Well, it's moved to the center.
[1093] So from the perspective, of the far left, yes, it has moved to the right, because everything is to the right from the far left.
[1094] But no one on the far left that I'm aware of has been suspended or, you know, banned or deamplified.
[1095] So, you know, but we're trying to be inclusive for the whole country and for other countries too.
[1096] So there's a diversity of viewpoints.
[1097] And free speech only matters if people you don't like are allowed to say things you don't like.
[1098] Because if that's not the case, you don't have free speech, and it's only a matter of time before the censorship is turned upon you.
[1099] Do you think Donald Trump will come back to the platform?
[1100] He recently posted on Truth Social about this podcast.
[1101] Do you think?
[1102] Truth Social is a funny name.
[1103] You know, every time you post on Truth Social.
[1104] That's the truth.
[1105] Yes.
[1106] Well, every time, like 100%.
[1107] It's impossible to lie.
[1108] Truth Social.
[1109] I just find it funny that every single thing is a truth.
[1110] Like 100 %?
[1111] Yeah.
[1112] That seems unlikely.
[1113] I think Gödel would say something about that.
[1114] There's some mathematical contradiction that's possible if everything's a truth.
[1115] Do you think he'll come back to X and start posting there?
[1116] I mean, I think he owns a big part of truth.
[1117] Truth social.
[1118] Yeah, true social.
[1119] That truth is a concept.
[1120] He owns truth.
[1121] I hope you bought it.
[1122] So I think I think he owns a big part of Truth Social.
[1123] So, you know, if he does want to post on the X platform, we would allow that.
[1124] You know, we obviously must allow our presidential candidate to post on our platform.
[1125] Community notes might be really fascinating there, the interaction.
[1126] Community notes is awesome.
[1127] Let's hope it holds up.
[1128] Yeah.
[1129] Again, in a political climate that's so divisive, with so many intensely viral posts, Community Notes seems like an essential breath of fresh air. Yeah, it's great. In fact, no system is going to be perfect, but the batting average of Community Notes is incredibly good. I've actually, frankly, yet to see an incorrect note that survived for more than a few hours. How do you explain why it works? Yeah, so the magic of Community Notes is it requires people who have historically disagreed in how they've rated notes.
[1130] In order to write a note or rate, you know, you have to rate many notes.
[1131] And so we actually do use AI here.
[1132] So we populated a vector space around how somebody has rated notes in the past.
[1133] So it's not as simple as left or right, because life is much more complex than left or right.
[1134] So there's a bunch of correlations in how you rate Community Notes posts.
[1135] So then in order for a community note to actually be shown, people who historically have disagreed on a subject must agree in order for a note to be shown.
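A toy sketch of that bridging rule, where a note is shown only if raters who have historically disagreed both rate it helpful. The real Community Notes algorithm is open source and uses matrix factorization over the whole rating history; this only illustrates the agreement-across-disagreement idea, with invented data structures:

```python
def historically_disagree(rater_a, rater_b, past_ratings, threshold=0.5):
    # past_ratings maps rater -> {note_id: rating}; two raters "historically
    # disagree" if they agreed on less than `threshold` of the notes they share.
    shared = [n for n in past_ratings[rater_a] if n in past_ratings[rater_b]]
    if not shared:
        return False
    agreement = sum(past_ratings[rater_a][n] == past_ratings[rater_b][n] for n in shared) / len(shared)
    return agreement < threshold

def note_should_show(note_ratings, past_ratings):
    # note_ratings maps rater -> True/False (rated this note helpful or not).
    helpful_raters = [r for r, helpful in note_ratings.items() if helpful]
    # Show the note only if some pair of "helpful" raters usually disagrees.
    return any(
        historically_disagree(a, b, past_ratings)
        for i, a in enumerate(helpful_raters)
        for b in helpful_raters[i + 1:]
    )
```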
[1136] That's the essential magic of it.
[1137] I mean, it's fascinating, because there's a pool of people that have disagreements, and somehow they collaborate through that process of disagreement to come up with
[1138] context for a post. I mean, it's fascinating that it works.
[1139] Yeah, it kind of makes sense that if people who in the past have disagreed agree about something, it's probably true.
[1140] Yeah.
[1141] I wonder, is it possible there's somehow an emergent thing there that could challenge Wikipedia?
[1142] Wikipedia is a different kind of thing, which is more like permanent articles about things.
[1143] Wikipedia is very hierarchical, whereas Community Notes is inherently not. There is no hierarchy. And the acid test is, I can't change a Community Note, even if you put a gun to my head. And Community Notes has, all the code is open source, 100%. All the data is open source, 100%.
[1144] So you can completely recreate any note in the system independently.
[1145] So if there was any interference, you'd notice immediately.
[1146] By the way, on the topic of Grok, it's actually pretty good at criticizing you, which is great.
[1147] I asked, what has Elon Musk been wrong about?
[1148] I just, it's actually.
[1149] Yeah, what does it say?
[1150] So it mostly lists, rigorously, all the times you predicted, like, promised when something would be released.
[1151] And it wasn't released.
[1152] It's a strong word.
[1153] Okay, predicted.
[1154] But the opener is really nice.
[1155] Elon Musk, the man who has been wrong about more things than a wrong-a-meter in a wrong factory.
[1156] And then it goes on to list.
[1157] That's pretty funny.
[1158] Can I see it?
[1159] I'm like, I haven't seen this before.
[1160] This is cool.
[1161] Oh, sure.
[1162] I mean, what has Elon Musk been wrong about?
[1163] Elon Musk been wrong about.
[1164] Well, where do I begin?
[1165] I guess this is new.
[1166] Elon Musk has made quite a few blunders in his time, and I'm not just talking about his dance moves at the Tesla Cyber Rodeo.
[1167] First off, let's talk about the infamous funding secured, tweet.
[1168] This is actually different than what it gave me before, which is a bullet point list.
[1169] Here's more narrative structure.
[1170] The Cybertruck unveiling, with the supposedly unbreakable windows.
[1171] Not correct.
[1172] Which part?
[1173] Well, the first part, I did actually have the funding secured, and there was a big trial in San Francisco, a big civil trial.
[1174] and the jury found me not guilty.
[1175] Unanimous finding of a San Francisco jury.
[1176] And here it's kind of implying that it was not in fact secured.
[1177] I think it's taking things from the press.
[1178] Yeah, that is not correct.
[1179] The reason I agreed to the fine for the SEC is not because the SEC was correct.
[1180] That was extremely bad behavior by the SEC, corruption, frankly.
[1181] But if I did not agree to pay the fine, Tesla would have gone bankrupt immediately.
[1182] So I was told by our CFO that the banks would immediately suspend our lines of credit.
[1183] And if they suspend our lines of credit at that time, we would have gone bankrupt instantly.
[1184] So there would never have been an opportunity for a trial because Tesla would be dead.
[1185] So really, this is like someone holding a gun to your kid's head.
[1186] and saying paid $20 million and admit there's like a hostage negotiation.
[1187] Was that story fully told?
[1188] I mean, SEC in its best form could be a force for good.
[1189] It should be.
[1190] But not once that the SEC go after any of the hedge funds who were non -stop shorting and distorting Tesla.
[1191] Not once.
[1192] The hedge funds would lie flat out on TV for their own gain at the expense of retail investors.
[1194] Not once, literally a thousand times, not once did the SEC pursue them.
[1195] How do you explain this failure?
[1196] The incentive structure is messed up because the lawyers at the SEC are not paid well.
[1197] It's a fairly low -paying job.
[1198] But what they're looking for is a trophy.
[1199] From the SEC. They're looking for something they can put on, basically, their LinkedIn,
[1200] from which they can get a job at a high-paying law firm.
[1201] That's exactly what the lawyer here did.
[1202] And the reason they don't attack the hedge funds is because those hedge funds employ those law firms.
[1203] And they know if they attack the hedge funds, they're affecting their future career prospects.
[1204] So they sell small investors down the river for their own career.
[1205] That's what actually happens.
[1206] Regulatory capture.
[1207] Regulatory capture.
[1208] Yeah, not good.
[1209] So the only reason I accepted the thing, which technically was not an admission, it's neither admit nor deny guilt.
[1210] But the only reason I agree to that at all was because I was told Tesla would be bankrupt otherwise.
[1211] So if there was an SEC investigation like this, the banks would suspend funding, and we'd be bankrupt immediately, at the time.
[1212] Now we're in a much stronger position.
[1213] Take that, Grok.
[1214] Yes. Unfortunately, Grok is taking too much from the conventional media.
[1215] Also, that guy was not a cave diver.
[1216] Oh, there's a time where Elon called a British cave diver a, quote, "pedo guy" after the diver criticized Musk's plan to rescue a group of boys trapped in a Thai cave. That little outburst earned him another lawsuit, and he had to apologize and pay a settlement.
[1218] That's false.
[1219] There was no settlement.
[1220] There was a court case, which the guy, who was not a cave diver and was not part of the rescue team, filed against me, and he lost and received nothing.
[1221] So in this case, it is wrong.
[1222] It has also, I guess, taken this from the conventional media.
[1223] Actually, there's an interesting question here.
[1224] These are public court cases, both the SEC civil case, where the civil complainants lost, unanimous jury verdict in San Francisco.
[1225] They picked San Francisco because they thought it was the place I was most likely to lose and a unanimous verdict in my favor.
[1226] The LA trial was also, they picked that venue because they thought I was most likely to lose.
[1227] unanimous verdict in my favor.
[1228] Both cases I won.
[1229] Yeah.
[1230] I mean, there's an interesting question here.
[1231] There seems to be a lot more clicks if a journalistic organization writes a negative article about you, Elon Musk.
[1232] That's like one of the best ways to get clicks.
[1233] So how do you, if you're training Grok, not train on articles that have, like, misaligned incentives?
[1234] We need to add to the training set the actual legal decisions.
[1235] So that note is actually helpful.
[1236] Because if you actually read the court...
[1237] Which are public.
[1238] Which are public.
[1239] The court conclusions, they're completely the opposite of what the media wrote.
[1240] So always striving for like the ground truth.
[1241] Yeah.
[1242] What did the judge actually write?
[1243] What did the jury and the judge actually conclude.
[1244] And in both cases, they found me innocent.
[1245] And that's after the venue shopping, trying to find the venue where I was most likely to lose.
[1246] Now, I mean, this is, obviously it can be a much better critique than this.
[1247] I mean, I've been far too optimistic about autopilot.
[1248] The critique I got, by the way, was more about that, which is, it broke down a nice bullet-point list, for each of your companies, of the set of predictions that you made: when you'll deliver, when you'll be able to solve, for example, self-driving. And it gives you, like, a list.
[1249] And it was kind of compelling.
[1250] And the basic takeaway is, like, you're often too optimistic about how long it takes to get something done.
[1251] Yeah, I mean, I would say that I'm pathologically optimistic on schedule.
[1252] This is true.
[1253] But while I am sometimes late, I always deliver in the end.
[1254] Except with Uber Lilith.
[1255] No. We'll see.
[1256] Okay. Over the past year or so, since purchasing X, you've become more political. Is there a part of you that regrets that?
[1257] Have I?
[1258] In this battle to sort of counterweigh the woke that comes from... Yeah, I guess, if you consider fighting the woke mind virus, which I consider to be a civilizational threat, to be political, then yes. So basically going into the battleground of politics.
[1259] And is there a part of you that... I don't know if this is necessarily sort of one candidate or another candidate, but... I'm generally against things that are anti-meritocratic, or where there's an attempt to suppress discussion, where even discussing a topic is, you know, not allowed.
[1260] The woke mind virus is communism rebranded.
[1261] Well, I mean, that said, because of that battle against the woke mind virus, you're perceived as being right -wing.
[1262] If the woke is left, then I suppose that would be true.
[1263] But I'm not sure, I think there are aspects of the left that are good.
[1264] I mean, if you're in favor of, you know, the environment, if you want to have a positive future for humanity, if you believe in empathy for your fellow human beings, you know, being kind or not cruel, whatever those values are.
[1265] You said that you were previously left or center left.
[1266] What would you like to see in order for you to consider voting for Democrats again?
[1267] No, I would say that I would be probably left of center on social issues, probably a little bit right of center on economic issues.
[1268] And that still holds true.
[1269] Yes, but I think that's probably, you know, half the country.
[1270] Maybe more.
[1271] Maybe more.
[1272] Are you and AOC secretly friends? Or, bigger question, do you wish you and her, and just people in general of all political persuasions, would talk more, and with empathy, and maybe have a little bit more fun and good vibes and humor online? I'm always in favor of humor. That's why we have fun mode. But good vibes, camaraderie, humor, you know, like friendship. Yeah, well, I don't know AOC, you know. I've only been at one... I was at the Met Ball when she attended.
[1273] And she was wearing this dress, but I could only see one side of it.
[1274] So it looked like "eat the itch," but I don't know.
[1275] What the rest of it said?
[1276] Yeah.
[1277] Yeah, so I'm not sure.
[1278] Sorry about the itch.
[1279] Eat the edge.
[1280] I think we should have a language model complete
[1281] what are the possible ways to complete that sentence. And so I guess that didn't work out well. Well, there's still hope. I root for friendship. Sure, sounds good. More seriously, you're one of, if not the most famous, wealthy, and powerful people in the world. In your position, it's difficult to find people you can trust. Trust no one, not even yourself. Not trusting yourself? Okay, well, you're saying that half-joking, but is there some... Trust no one, not even no one. I'm going to need an hour just to think about that, and maybe some drugs, and maybe Grok to help.
[1282] I mean, is there some aspect of that when just existing in a world where everybody wants something from you?
[1283] How hard is it to exist in that world?
[1284] I'll survive.
[1285] There's a song like that too.
[1286] I will survive.
[1287] Were you petrified at first?
[1288] Okay.
[1289] I forget the rest of the lyrics.
[1290] But you don't struggle with this?
[1291] I mean, I know, you survive, but there's ways...
[1292] Petrify is a spell in the droid tree.
[1293] What does it do?
[1294] Petrify.
[1295] It turns the monsters into stone.
[1296] Oh, like, literally?
[1297] Yeah, for like six seconds.
[1298] There's so much math in Diablo that breaks my brain.
[1299] It's like math nonstop.
[1300] Really, you're like laughing at it, but you don't...
[1301] It can put a huge amount of tension in a mind.
[1302] Yes, it can be definitely stressful at times.
[1303] Well, how do you know who you can trust in work and personal life?
[1304] I mean, I guess you look at somebody's track record over time, and, you know, I guess you kind of use your neural net to assess someone.
[1305] Neural nets don't feel pain.
[1306] Your neural net has consciousness.
[1307] It might feel pain when people betray you.
[1308] I mean, you know, to be frank, I've almost never been betrayed.
[1310] It's very, very rare.
[1311] So, you know, for what it's worth.
[1312] I guess karma might be good to people and that'll be good to you.
[1313] Yeah, karma is real.
[1314] Are there people you trust?
[1315] Let me edit that question.
[1316] Are there people close to you that call you out on your bullshit?
[1317] Well, the X platform is very helpful for that.
[1318] If you're looking for critical feedback.
[1319] Can it push you like into the extremes more?
[1320] The extremes of thought, make you cynical, about human nature in general?
[1321] I don't think I will be cynical.
[1322] In fact, I think, you know, my feeling is that one should be, you know, never trust a cynic.
[1323] The reason is that cynics excuse their own bad behavior by saying everyone does it because they're cynical.
[1324] So it's always a red flag to me if someone's a cynic, a true cynic.
[1325] Yeah, there's a degree of projection there.
[1326] that's always fun to watch from the outside and enjoy the hypocrisy.
[1327] But this is an important point that I think people who are listening should bear in mind.
[1328] If somebody is cynical, meaning that they see bad behavior in everyone, it's easy for them to excuse their own bad behavior by saying that, well, everyone does it.
[1329] It's not true.
[1330] Most people are kind of medium good.
[1331] I do wish the people on X would be better at seeing the good in other people's behavior. There seems to be a kind of weight towards seeing the negative. Somehow the negative is sexier. Interpreting the negative is sexier, more viral. I don't know what that is exactly about human nature. I find the X platform to be less negative than the legacy media. You know, I mean, if you read a sort of conventional newspaper, it makes you sad, frankly.
[1332] Whereas, I'd say on the X platform, I mean, I really get more laughs per day on X than everything else combined, from humans, you know?
[1333] Laughs is one thing, it overlaps, but it's not necessarily perfectly overlapping with like good vibes and support, like celebrating others, for example.
[1334] not in a stupid, shallow, naive way, but like in an awesome, like, oh, something awesome happened and you celebrate them for it.
[1335] It feels that that is outweighed by shitting on other people.
[1336] Now, it's better than mainstream media, but it's still...
[1337] Yeah, mainstream media is almost relentlessly negative about everything.
[1338] I mean, really, the conventional news tries to answer the question: what is the worst thing that happened on Earth today?
[1339] And it's a big world.
[1340] So on any given day, something bad has happened. And a generalization of that: what is the worst perspective I can take on a thing that happened? So, yeah, I don't know, there's just a strong negative bias in the news. I mean, I think the possible explanation for this is evolutionary, where, you know, bad news historically would be potentially fatal, like there's a lion over there, or there's some other tribe that wants to kill you. Good news, you know, like we found a patch of berries, is nice to have, but not essential. So, our old friend Tesla Autopilot. It's probably one of the most intelligent real-world AI systems in the world, right? Do you follow it from the beginning? Yeah, it was one of the most incredible robots in the world, and continues to be. Yeah, and it was really exciting, and it was super exciting when it generalized, became more than a robot on four wheels, but a real-world AI system that perceives the world.
[1341] Yeah.
[1342] And can have potential different embodiments.
[1343] Well, I mean, the really wild thing about the end-to-end training is that it learns to read, like it can read signs, but we never taught it to read.
[1344] So, yeah, we never taught it what a car was or what a person was or a bicyclist.
[1345] It learned what all those things are, what all the objects are on the road, from video, just from watching video, just like humans.
[1346] I mean, humans are photons in, controls out.
[1347] Like the vast majority of information reaching our brain is from our eyes.
[1348] And you say, well, what's the output?
[1349] The output is our motor signals to our sort of fingers and mouth in order to communicate.
[1350] Photons in, controls out.
[1351] The same is true of the car.
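As an aside for readers of the transcript: here is a minimal, purely illustrative sketch of the "photons in, controls out" idea, a toy network that maps raw camera frames directly to control outputs. It is an assumption-laden toy in PyTorch, not Tesla's actual architecture or training setup; the layer sizes and the two control outputs are invented for illustration.

```python
# Illustrative sketch only: NOT Tesla's architecture.
# "Photons in, controls out": raw camera pixels go in, control signals come out.
import torch
import torch.nn as nn

class PhotonsToControls(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional encoder: pixels ("photons") in.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Small head: two control outputs (say, steering angle and acceleration).
        self.head = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, frames):
        # frames: (batch, 3, height, width) camera images.
        return self.head(self.encoder(frames))

model = PhotonsToControls()
dummy_frame = torch.randn(1, 3, 96, 96)  # a fake camera frame
controls = model(dummy_frame)            # tensor of shape (1, 2)
print(controls.shape)
```

In an end-to-end setup like the one described in the conversation, a network of this general shape would be trained on recorded video paired with the controls a human driver actually applied, rather than on hand-labeled objects.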
[1352] But by looking at the sequence of images, you've agreed with what Ilya Sutskever said recently, where he talked about an LLM forming a world model, and basically language is a projection of that world model onto the sequence of letters.
[1353] And you're saying...
[1354] It finds order in these things.
[1355] It finds correlative clusters.
[1356] And in so doing, it's like understanding something deep about the world.
[1357] Yeah.
[1358] Which is like, it's beautiful.
[1359] That's how our brain works.
[1360] Yeah, but it's beautiful.
[1361] Photons in, controls out.
[1362] Neural nets are able to understand that deep meaning in the world.
[1363] And so the question is how far can it go?
[1364] And it does seem, everybody's excited about LLMs.
[1365] So in the space of self-supervised learning, in the space of text.
[1366] Yeah.
[1367] It seems like there's a deep similarity between that and what Tesla Autopilot is doing.
[1368] Is it to you basically the same?
[1369] They are converging.
[1370] They are converging.
[1371] I wonder who gets there faster, having a deep understanding of the world, or they just will naturally converge.
[1372] They're both headed towards AGI.
[1373] The Tesla approach is much more compute-efficient.
[1374] It had to be, because we were constrained on this, you know, we only have 100 watts and an int-8 computer, 144 trillion operations per second, which sounds like a lot, but it's kind of small potatoes these days, at int-8.
[1375] But it's understanding the world at int-8.
[1376] It's something, to have only 256 values.
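For context, an aside not from the conversation: int-8 here refers to 8-bit integer arithmetic, which can represent only 2^8 = 256 distinct values, typically -128 through 127. A minimal sketch of quantizing an activation into that range, with a value and scale factor made up purely for illustration:

```python
import numpy as np

# int8 covers 2**8 = 256 distinct values, from -128 to 127.
x = np.float32(0.73)         # some activation value (made up for illustration)
scale = np.float32(1 / 127)  # assumed scale factor, not from any real model
q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
print(q, q.dtype)            # quantized int8 representation of x
```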
[1377] But there, the path to AGI might have much more significant impact, because it's understanding, it'll understand the real world faster than LLMs will, and therefore be able to integrate with the humans in the real world faster.
[1378] They're both going to understand the world, but I think Tesla's approach is fundamentally more compute-efficient.
[1379] It had to be.
[1380] There was no choice.
[1381] Like our brain is very compute-efficient,
[1382] very energy-efficient.
[1383] So think of like what is our brain able to do?
[1384] You know, there's only about 10 watts of higher brain function, not counting stuff that's just used to control our body.
[1385] The thinking part of our brain is less than 10 watts.
[1386] And those 10 watts can still produce a much better novel than a 10 megawatt GPU cluster.
[1387] So there's a six order of magnitude difference there.
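Taking the figures as stated in the conversation, roughly 10 watts of higher brain function versus a 10 megawatt GPU cluster, the arithmetic behind the six-orders-of-magnitude claim is:

\[
\frac{10\ \text{MW}}{10\ \text{W}} = \frac{10^{7}\ \text{W}}{10^{1}\ \text{W}} = 10^{6},
\]

that is, a factor of one million, or six orders of magnitude in power.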
[1388] I mean, the AI has thus far gotten to where it is via brute force, just throwing massive amounts of compute and massive amounts of power at it.
[1389] So this is not where it will end up.
[1390] You know, in general, with any given technology, you first try to make it work and then you make it efficient.
[1391] So I think we'll find over time that these models get smaller, are able to produce a sensible output with far less compute, far less power.
[1392] Tesla is arguably ahead of the game on that front because it has, we've just been forced to try to understand the world with a hundred watts of compute.
[1393] And there are a bunch of fundamental functions that we kind of forgot to include, so we have to run a bunch of things in emulation.
[1394] We fixed a bunch of those with hardware four, and then hardware five will be even better.
[1395] But it does appear at this point that the car will be able to drive better than a human, even with hardware 3 and 100 watts of power.
[1396] And really, if we really optimise it, it could be probably less than 50 watts.
[1397] What have you learned about developing Optimus, about applying, integrating this kind of real-world AI into the space of robotic manipulation, just humanoid robotics?
[1398] What are some interesting, tiny or big things you've understood?
[1399] I was surprised at the fact that we had to develop every part of the robot ourselves, that there were no off-the-shelf motors, electronics, sensors.
[1400] Like, we had to develop everything.
[1401] We couldn't actually find a source of electric motors for any amount of money.
[1402] So it's not even just efficient, inexpensive, it's like anything.
[1403] There's not a...
[1404] No. The actuators, everything.
[1405] Everything has to be designed for something.
[1406] Yeah.
[1407] Designed from scratch.
[1408] We tried hard to find anything that was, because you think of how many electric motors are made in the world, there's like tens of thousands, hundreds of thousands of electric motor designs.
[1409] None of them were suitable for a humanoid robot.
[1410] Literally none.
[1411] So we had to develop our own design, designed it specifically for what a humanoid robot needs.
[1412] How hard was it to design something that can be mass-manufactured, could be relatively inexpensive?
[1413] I mean, if you compare to Boston Dynamics Atlas, it's a very expensive robot.
[1414] It is designed to be manufactured in the same way they would make a car, and I think ultimately we can make Optimus for less than the cost of a car.
[1415] It should be, because if you look at the mass of the robot, it's much smaller, and the car has many actuators in it.
[1416] The car has more actuators than the robot.
[1417] But there is, the actuators are kind of interesting on a humanoid robot with fingers.
[1418] So Optimus has really nice hands and fingers, you know.
[1419] Yeah.
[1420] And they could do some interesting manipulation, soft -touch robotics.
[1421] I mean, one of the test goals I have is: can it pick up a needle and a thread and thread the needle, just by looking?
[1422] How far away are we from that?
[1423] Just by looking, just by looking.
[1424] Maybe a year.
[1425] Although, I go back to, I'm optimistic on time.
[1426] The work that we're doing in the car will translate to the robot.
[1427] The perception or also the control?
[1428] No, the controls are different, but the video in, controls out.
[1429] The car is a robot on four wheels.
[1430] Optimus is a robot with hands and legs.
[1431] But they're very similar.
[1432] So the entire machinery of the learning process, end-to-end, is the same; you just have a different set of controls.
[1433] Optimus will figure out how to do things by watching videos.
[1434] As the saying goes, be kind for everyone you meet is fighting a battle.
[1435] You know nothing about.
[1436] Yeah, it's true.
[1437] What's something difficult you're going through that people don't often see?
[1438] Trying to defeat Uber Lilith?
[1439] No, I mean, you know, I mean, my mind is a storm.
[1440] And I don't think, I don't think most people would want to be me. They may think they would want to be me, but they don't, they don't know.
[1441] They don't understand.
[1442] How are you doing?
[1443] I'm overall okay.
[1444] In the grand scheme of things, I can't complain.
[1445] Do you get lonely?
[1446] Sometimes, but I, you know, my kids and friends keep me company.
[1447] So not existential.
[1448] There are many nights I sleep alone.
[1449] I don't have to, but I do.
[1450] Walter Isaacson, in his new biography of you, wrote about your difficult childhood.
[1451] Will you ever find forgiveness in your heart for everything that has happened to you in that period of your life?
[1452] What is forgiveness?
[1453] I do not, at least I don't think I harbor resentment.
[1454] So, nothing to forgive.
[1455] No, forgiveness is difficult for people.
[1456] It seems like you don't harbor resentment.
[1457] I mean, I try to think about, like, what is gonna affect the future in a good way?
[1458] And holding onto grudges does not affect the future in a good way.
[1459] You're a father, a proud father.
[1460] What have you learned about life from your kids, those little biological organisms?
[1461] I mean, developing AI and watching, say, a little X grow is fascinating because there are far more parallels than I would have expected.
[1462] I mean, I can see his biological neural net making more and more sense of the world and I can see the digital neural net making more and more sense of the world at the same time.
[1463] Do you see the beauty and magic in both?
[1464] Yes.
[1465] I mean, one of the things with kids is that, you know, you kind of see the world anew in their eyes.
[1466] You know, to them, everything is new and fresh.
[1467] And then when you see them experience the world as new and fresh, you do too.
[1468] Well, Elon, I just want to say thank you for your kindness to me and friendship over the years, for seeing something in a silly kid like me, as you've done for many others.
[1469] And thank you for having hope for a positive future for humanity and for working your ass off to make it happen.
[1470] Thank you, Elon.
[1471] Thanks, Lex.
[1472] Thank you for listening to this conversation with Elon Musk.
[1473] To support this podcast, please check out our sponsors in the description.
[1474] And now, let me leave you with some words that Walter Isaacson wrote about the central philosophy of how Elon approaches difficult problems.
[1475] The only rules are the ones dictated by the laws of physics.
[1476] Thank you for listening, and I hope to see you next time.