#193 – Rob Reid: The Existential Threat of Engineered Viruses and Lab Leaks


Full Transcription:

[0] The following is a conversation with Rob Reid, entrepreneur, author, and host of the After On podcast.

[1] Sam Harris recommended that I absolutely must talk to Rob about his recent work on the future of engineered pandemics.

[2] I then listened to the four-hour special episode of Sam's Making Sense podcast with Rob, titled Engineering the Apocalypse.

[3] And I was floored and knew I had to talk to him.

[4] Quick mention of our sponsors.

[5] Athletic Greens, Belcampo, Fundrise, and NetSuite.

[6] Check them out in the description to support this podcast.

[7] As a side note, let me say a few words about the lab leak hypothesis, which proposes that COVID-19 is a product of gain-of-function research on coronaviruses conducted at the Wuhan Institute of Virology that was then accidentally leaked due to human error.

[8] For context, this lab is biosafety level 4, BSL-4, and it investigates coronaviruses.

[9] BSL-4 is the highest level of safety, but if you look at all the human-in-the-loop pieces required to achieve this level of safety, it becomes clear that even BSL-4 labs are highly susceptible to human error.

[10] To me, whether the virus leaked from the lab or not, getting to the bottom of what happened is about much more than this particular catastrophic case.

[11] It is a test for our scientific, political, journalistic, and social institutions of how well we can prepare and respond to threats that can cripple or destroy human civilization.

[12] If we continue gain-of-function research on viruses, eventually these viruses will leak, and they will be more deadly and more contagious.

[13] We can pretend that won't happen, or we can openly and honestly talk about the risks involved.

[14] This research can both save and destroy human life on Earth as we know it.

[15] It's a powerful double-edged sword.

[16] If YouTube and other platforms censor conversations about this, if scientists self-censor conversations about this, we'll become merely victims of our brief Homo sapiens story, not its heroes.

[17] As I said before, too carelessly labeling ideas as misinformation and dismissing them because of that will eventually destroy our ability to discover the truth.

[18] And without truth, we don't have a fighting chance against the great filter before us.

[19] As usual, I'll do a few minutes of ads now, no ads in the middle.

[20] As a podcast fan, I think those get in the way.

[21] I try to make these interesting, but I give you time stamps.

[22] So if you skip, please still check out the sponsors by clicking the links in the description.

[23] It's the best way to support the podcast.

[24] We're very picky about the sponsors we take on.

[25] So hopefully, if you buy their stuff, and you definitely should, you'll find value in it just as I have.

[26] This show is sponsored by Athletic Greens, the all-in-one daily drink to support better health and peak performance.

[27] Somebody on Reddit commented, what are the things that Lex inspires you to do?

[28] And there's just a bunch of kind things that people commented on that.

[29] But one of the people said the two things are drink Athletic Greens and program in Lisp, which I think those two things are central to who I am.

[30] And that probably explains a lot.

[31] Athletic Greens replaced a multivitamin for me and went far beyond that with 75 vitamins and minerals.

[32] It's the first thing I drink every day.

[33] It's kind of a source of a little bit of joy.

[34] It gives me confidence that I have all my nutrition.

[35] Plus I'm doing the fish oil.

[36] And Athletic Greens also has fish oil, and they send you a one-month supply free when you sign up at athleticgreens.com/lex. That's athleticgreens.com/lex. I'm literally drinking a cold, refreshing Athletic Greens while doing this read.

[37] If this was a Coke or Pepsi commercial, I would do one of those can-opening sounds and then a refreshing swallowing of cold liquid.

[38] I'll save that for my OnlyFans account.

[39] Okay.

[40] This show is sponsored by Belcampo Farms,

[41] whose mission is to deliver meat you can feel good about, meat that is good for you, good for the animals, and good for the planet.

[42] Belcampo animals graze on open pastures and seasonal grasses, resulting in meat that is higher in nutrients and healthy fats.

[43] Belcampo has honestly been the best tasting meat I've ever eaten at home.

[44] I've mostly been eating their ground beef.

[45] There's like a one-pound package, and it's about a thousand calories, and so I eat two of those a day.

[46] I actually got a chance to visit Belcampo farms recently.

[47] shot a bunch of video, got to hang out with cows and pigs, and got to see the full process, which I think is honest and really made me think a lot about the places where our food comes from.

[48] But it was also just nice to be out in nature with the animals, but also with the mountains and the fresh air and good friends, and just disconnect from the internet and from life in general.

[49] I also did a podcast with Anya Fernald, who ran Belcampo for many years.

[50] And we actually did the podcast outside, which is a cool kind of shot.

[51] I haven't looked at it yet, and hopefully it looks good, but it was certainly an amazing experience.

[52] Anyway, you can order Belcampo sustainably raised meats to be delivered straight to your door.

[53] That's what I do, using code Lex at belcampo.com/lex for 20% off for first-time customers.

[54] That's code Lex at belcampo.com/lex.

[55] This episode is sponsored by Fundrise, spelled F-U-N-D-R-I-S-E.

[56] It's a platform that allows you to invest in private real estate.

[57] If you're looking to diversify your investment portfolio, which you should, this is a good choice.

[58] I know there's probably a few people listening to this that have like 90 plus percent of their investment in one particular cryptocurrency.

[59] But I do think that diversification across, you know, stocks, bonds, mutual funds, real estate and cryptocurrency is really important.

[60] And I think private real estate is a really interesting option, especially when you have a service like Fundrise that makes it super easy.

[62] It's easy to get the information you need about the investments you're making.

[63] It's also great to actually execute on the investments.

[64] And then whether you want to have cryptocurrency as part of your portfolio probably has to do with how comfortable you are with risk and how good you are at predicting the evolution of our digital future, which, despite the confidence of many people on the internet, still has quite a bit of uncertainty.

[65] Anyway, check out Fundrise at fundrise.com/lex.

[67] 150,000 investors use it.

[68] It takes just a few minutes to get started at fundrise.com/lex.

[69] That's fundrise.com/lex.

[70] This show is sponsored by NetSuite.

[71] They nicely asked that I open this read with "school's out for summer, but if your business is running QuickBooks, you'll never get a break."

[72] And because I'm a nice guy, I indulge them in this request, even though it's ridiculous.

[73] NetSuite allows you to manage financials, human resources, inventory, e-commerce, and many more business-related details all in one place.

[74] This actually reminds me that marketing is a lot more complicated than people give it credit for.

[75] I don't think I'm any good at it either, but as a consumer, I know it when it's good and I know it when it sucks.

[76] And perhaps the funny thing is when it kind of sucks or it's cheesy, perhaps that's effective.

[77] And maybe that's what they're getting at.

[78] It's a beautiful mystery.

[79] Like, what do I need to say from their perspective now for you, the listener, to keep listening to this?

[80] Would it be some Alex Jones style rant about frogs?

[81] Or is it like a catchy jingle?

[82] Or is it some ridiculous thing like schools out for summer?

[83] I don't know.

[84] The point is, outside of all that marketing nonsense, NetSuite is actually an incredible product.

[85] It helps you run the company, helps you take care of all the messy, all the difficult things.

[86] They're also offering a special financing program.

[87] If you head to netsuite.com/lex.

[88] That's, by the way, spelled N-E-T-S-U-I-T-E.com/lex.

[89] So whether school's out for summer for you or not, you should go to netsuite.com/lex.

[90] For some reason, that makes me think of the excellent song by Van Halen, "Hot for Teacher."

[91] This is the Lex Fridman Podcast, and here's my conversation with Rob Reid.

[92] I have seen evidence on the internet that you have a sense of humor, allegedly.

[93] But you also talk and think about the destruction of human civilization.

[94] What do you think of the Elon Musk hypothesis that the most entertaining outcome is the most likely?

[95] And he, I think, followed on to say, as seen from an external observer.

[96] Like if somebody was watching us, it seems we come up with creative ways of progressing our civilization.

[97] That's fun to watch.

[98] Yeah.

[99] So he, exactly, he said from the standpoint of the observer, not the participant, I think.

[100] And so what's interesting about that, this was, I think, just a couple of freestanding tweets, delivered without a whole lot of wrapper of context.

[101] So it's left to the mind of the reader of the tweets to infer what he was talking about.

[102] But so that's kind of like, it provokes some interesting thoughts.

[103] Like, first of all, it presupposes the existence of an observer.

[104] And it also presupposes that the observer wishes to be entertained and has some mechanism of enforcing their desire to be entertained.

[105] So there's like a lot underpinning that.

[106] And to me, that suggests, particularly coming from Elon, that it's a reference to simulation theory, that, you know, somebody is out there and has far greater insights and a far greater ability to, let's say, peer into a single individual life and find that entertaining and full of plot twists and surprises and either a happy or tragic ending, or they have an incredible meta-view, and they can watch the arc of civilization unfolding in a way that is entertaining and full of plot twists and surprises and a happy or unhappy ending.

[107] So, okay, so we're presupposing an observer.

[108] Then on top of that, when you think about it, you're also presupposing a producer, because the act of observation is mostly fun if there are plot twists and surprises and other developments that you weren't foreseeing.

[109] I have re-read my own novels, and that's fun because it's something I worked hard on and I slaved over and I love, but there aren't a lot of surprises in there.

[110] So now I'm thinking we need a producer and an observer for that to be true.

[111] And on top of that, it's got to be a very competent producer, because Elon said the most entertaining outcome is the most likely one.

[112] So there's lots of layers for thinking about that.

[113] And when you've got a producer who's trying to make it entertaining, it makes me think of there was a South Park episode in which Earth turned out to be a reality show.

[114] And somehow we had failed to entertain the audience as much as we used to, so the Earth show was going to get canceled, et cetera.

[115] So taking all that together, and I'm obviously being a little bit playful in laying this out, what is the evidence that we have that we are in a reality that is intended to be most interesting?

[116] Now, you could look at that reality on the level of individual lives or the whole arc of civilization, or other, you know, levels as well, I'm sure.

[117] But just looking from my own life, I think I'd make a pretty lousy show.

[118] I spend an inordinate amount of time just looking at a computer.

[119] I don't think that's very entertaining.

[120] And there's just a completely inadequate level of shootouts and car chases in my life.

[121] I mean, I'll go weeks, even months without a single shootout or car chase.

[122] That just means that you're one of the non -player characters in this game.

[123] You're just waiting to meet.

[124] You're an extra that's waiting for your one opportunity, for a brief moment, to actually interact with one of the main characters in the play.

[125] Okay, that's good.

[126] So, okay, so we rule out me being the star of the show, which I probably could have guessed at.

[127] Anyway, but even the arc of civilization.

[128] I mean, there have been a lot of really intriguing things that have happened and a lot of astounding things that have happened.

[129] But, you know, I would have some werewolves.

[130] I'd have some zombies.

[131] You know, I would have some really improbable developments like maybe Canada absorbing the United States.

[132] You know, so I don't know.

[133] I'm not sure if we're necessarily designed for maximum entertainment.

[134] But if we are, that will mean that 2020 is just a prequel for even more bizarre years ahead.

[135] So I kind of hope that we're not designed for maximum entertainment.

[136] Well, the night is still young in terms of Canada. But do you think it's possible for the observer and the producer to be kind of emergent? So, meaning, it does seem, when you kind of watch memes on the internet, the funny ones, the entertaining ones, spread more efficiently. They do. I mean, I don't know what it is about the human mind that soaks up, en masse, funny things much more sort of aggressively.

[137] It's more viral in the full sense of that word.

[138] Is there some sense that whatever this, the evolutionary process that created our cognitive capabilities is the same process that's going to, in an emergent way, create the most entertaining outcome, the most memeifiable outcome, the most viral outcome if we were to share it on Twitter?

[139] Yeah, that's interesting.

[140] Yeah, we do have an incredible ability.

[141] Like, I mean, how many memes are created in a given day, and the ones that go viral are almost uniformly funny, at least to somebody with a particular sense of humor.

[142] Yeah, I'd have to think about that.

[143] We are definitely great at creating atomized units of funny.

[144] Like in the example that you used, there are going to be X million brains parsing and judging whether this meme is retweetable or not.

[145] Yes.

[146] And so that sort of atomic element of funniness, of entertainingness, et cetera, we definitely have an environment that's good at selecting for that and selective pressure and everything else that's going on.

[147] But in terms of the entire ecosystem of conscious systems here on the Earth, driving for a level of entertainment, that is on such a much higher level

[148] that I don't know if that would necessarily follow directly from the fact that, you know, atomic units of entertainment are very, very aptly selected for.

[149] I don't know.

[150] Do you find it compelling or useful to think about human civilization from the perspective of the ideas versus the perspective of the individual human brains?

[151] So almost thinking about the ideas or the memes, this is the Dawkins thing, as the organisms, and then the humans as just, like, vehicles for briefly carrying those organisms as they jump around and spread. Yeah, for propagating them, mutating them, putting selective pressure on them, etc. Yeah. I mean, I found Dawkins' launching of the idea of memes is just kind of an afterthought to his unbelievably brilliant book, The Selfish Gene.

[152] Like, what a P.S. to put at the end of a long chunk of writing.

[153] It's profoundly interesting.

[154] I view the relationship, though, between humans and memes as, and this is probably an oversimplification, but maybe a little bit like the relationship between flowers and bees, right?

[155] Do flowers have bees, or do bees, in a sense, have flowers?

[156] And the answer is, it is a very, very symbiotic relationship in which both have semi -independent roles that they play.

[157] and both are highly dependent upon the other.

[158] And so in the case of bees, obviously, you know, you could see the flowers being this monolithic structure physically in relation to any given bee, and it's this source of food and sustenance.

[159] So you could kind of say, well, flowers have bees.

[160] But on the other hand, the flowers would obviously be doomed

[161] if they weren't being pollinated by the bees.

[162] So you could kind of say, well, you know, flowers are really an expression of what the bees need,

[163] and the truth is a symbiosis.

[164] So with memes and human minds, our brains are clearly the petri dishes in which memes are either propagated or not propagated, get mutated or don't get mutated.

[165] They are the venue in which competition, selective competition, plays out between different memes.

[166] So all of that is very true.

[167] And you could look at that and say, really, the human mind is a production

[168] of memes, and ideas have us rather than us having ideas.

[169] But at the same time, let's take a catchy tune as an example of a meme.

[170] That catchy tune did originate in a human mind.

[171] Somebody had to structure that thing.

[172] And as much as I like Elizabeth Gilbert's TED talk about how the universe, I'm simplifying, but you know, kind of the ideas find their way in this beautiful TED talk.

[173] It's very lyrical.

[174] She talked about, you know, ideas and prose

[175] kind of beaming into our minds, and, you know, she talked about needing to pull over to the side of the road when she got inspiration for a particular paragraph or a particular idea and a burning need to write that down.

[176] I love that.

[177] I find that beautiful. As a writer, as a novelist myself, I've never had that experience.

[178] And I think that really most things that do become memes are the product of a great deal of deliberate and willful exertion of a conscious mind.

[179] And so like the bees and the flowers, I think there's a great symbiosis.

[180] And they both kind of have one another.

[181] Ideas have us, but we have ideas for real.

[182] If we could take a little bit of a tangent, Stephen King, On Writing: you, as a great writer, you're dropping a hint here that the ideas don't come to you.

[183] It's a grind, sort of, it's almost like you're mining

[184] for gold.

[185] It's more of a very deliberate, rigorous daily process.

[186] So maybe can you talk about the writing process?

[187] How do you write well?

[188] And maybe if you want to step outside of yourself, almost like give advice to an aspiring writer.

[189] What does it take to write the best work of your life?

[190] Well, it would be very different if it's fiction versus nonfiction.

[191] And I've done both.

[192] I've written two works of nonfiction and two works of fiction, the two works of fiction being more recent.

[193] I'm going to focus on that right now because that's more toweringly on my mind.

[194] There are amongst novelists, again, this is an oversimplification, but there's kind of two schools of thought.

[195] Some people really like to fly by the seat of their pants, and some people really, really like to outline, to plot.

[196] So there's plotters and pantsers, I guess, is one way that people look at it.

[197] And, you know, as with most things, there is a great continuum in between, and I'm somewhere on that continuum, but I lean, I guess, a little bit more toward the plotter.

[198] And so when I do start a novel, I have a pretty strong point of view about how it's going to end, and I have a very strong point of view about how it's going to begin, and I do try to make an effort of making an outline that I know I'm going to be extremely unfaithful to in the actual execution of the story.

[199] but trying to make an outline that gets us from here to there, with a notion of subplots and beats and rhythm and different characters and so forth.

[200] But then when I get into the process, that outline, particularly the center of it, ultimately morphs a great deal.

[201] And I think if I were personally a rigorous outliner, I would not allow that to happen.

[202] I also would make a much more vigorous skeleton before I start.

[203] So I think people who are really in that plotting outline mode are people who write page turners, people who write, you know, spy novels or, you know, supernatural adventures, where you really want a relentless pace of events, action, plot twists, conspiracy, et cetera.

[204] And that is really the bone.

[205] That's, that's really the, you know, the skeletal structure.

[206] So I think folks who write that kind of book are really very much on the outlining side.

[207] And I think people who write what's often referred to as literary fiction, for lack of a better term, where it's more about, you know, sort of aura and ambiance and character development and experience and inner experience and inner journey and so forth, I think that group is more likely to fly by the seat of their pants.

[208] And I know people who start with a blank page and just see where it's going to go.

[209] I'm a little bit more on the plotting side.

[210] Now you asked what makes something, at least in the mind of the writer, as great as it can be?

[211] For me, an astonishingly high percentage of it is editing, as opposed to the initial writing.

[212] For every hour that I spend writing new prose, you know, like new pages, new paragraphs, stuff that, you know, new bits of the book, I probably spend, I mean, I wish I kept a count.

[213] like I wish I had one of those pieces of software that lawyers use to track how much time I've been spending on this or that.

[214] But I would say it's at least four or five hours and maybe as many as 10 that I spend editing.

[215] And so it's relentless for me. For each one hour of writing you said?

[216] I'd say that for, wow.

[217] I mean, I write, because I edit, and I spend just relentless time polishing and pruning, sometimes on the micro level of just, like: does the rhythm of the sentence feel right?

[218] Do I need to carve a syllable or something so it can land?

[219] Like as micro as that, to as macro as like, okay, I'm done, but the book is 750 pages long and it's way too bloated and I need to lop a third out of it.

[220] Problems on, you know, those two orders of magnitude and everything in between, that is an enormous amount of my time.

[221] And I also, I also write music, write and record and produce music.

[222] And there, the ratio is even higher.

[223] Every minute that I spend, or my band spends, laying down that original audio, there's a very high proportion of hours that go into just making it all hang together and sound just right.

[224] So I think that's true of a lot of creative processes.

[225] I know it's true of sculpture.

[226] I believe it's true of woodwork.

[227] My dad was an amateur woodworker and he spent a huge amount of time on sanding and polishing at the end.

[228] So I think a great deal of the sparkle comes from that part of the process, any creative process.

[229] Can I ask about the psychological, the demon side of that picture?

[230] In the editing process, you're ultimately judging the initial piece of work and you're judging and judging and judging.

[231] How much of your time do you spend hating your work?

[232] How much time do you spend in gratitude, impressed, thankful for how good the work that you were able to put together is?

[233] I spend almost all the time in a place that's intermediate between those, but leaning toward gratitude.

[234] I spend almost all the time in a state of optimism that this thing that I have, I like, I like quite a bit, and I can make it better and better and better with every time I go through it.

[235] So I spend most of my time in a state of optimism.

[236] I think I personally oscillate much more aggressively between those two where I wouldn't be able to find the average.

[237] I go pretty deep.

[238] Marvin Minsky from MIT had this advice, I guess, as to what it takes to be successful in science and research: it is to hate everything you've ever done in the past.

[239] I mean, at least he was speaking about himself that the key to his success was to hate everything he's ever done.

[240] I have a little Marvin Minsky in me, too, to sort of always be exceptionally self-critical,

[241] but almost like self-critical about the work, but grateful for the chance to be able to do the work.

[242] Yeah, if that makes sense.

[243] Makes perfect sense.

[244] But that, you know, each one of us has to strike a certain kind of balance.

[245] Yeah.

[246] But back to the destruction of human civilization.

[247] If humans destroy ourselves in the next 100 years, what will be the most likely source, the most likely reason that we destroy ourselves?

[248] Well, let's see.

[249] A hundred years, it's hard for me to comfortably predict out that far.

[250] And it's something I give a lot more thought to, I think, than, you know, normal folks simply because I am a science fiction writer.

[251] And, you know, I feel with the acceleration of technological progress, it's really hard to foresee more than just a few decades.

[252] I mean, comparing today's world to that of 1921,

[253] where we are right now a century later would have been so unforeseeable.

[254] And I just don't know what's going to happen, particularly with exponential technologies.

[255] I mean, our intuitions reliably defeat us with exponential technologies like computing and synthetic biology.

[256] And, you know, how we might destroy ourselves in the 100 -year time frame might have everything to do with breakthroughs in nanotechnology 40 years from now and then how rapidly those breakthroughs accelerate.

[257] But in the nearer term than I'm comfortable predicting, let's say 30 years, I would say the most likely route to self-destruction would be synthetic biology.

[258] And I always say that with a gigantic caveat, and a very important one, and I'll abbreviate synthetic biology to SynBio, just to save us some syllables: I believe SynBio offers us simply stunning promise that we would be fools to deny ourselves.

[259] So I'm not an anti-SynBio person by any stretch.

[260] I mean, SynBio has unbelievable odds of helping us beat cancer, helping us rescue the environment, helping us do things that we would currently find imponderable.

[261] So it's electrifying the field.

[262] But in the wrong hands, those hands either being incompetent or being malevolent,

[263] synthetic biology to me has much, much greater odds of leading to our self-destruction than something running amok with super AI, which I believe is a real possibility and something we need to be concerned about.

[264] But in the 30-year time frame, I think it's a lesser risk, as are nuclear weapons or anything else that I can think of.

[265] Can you explain that a little bit further?

[266] So your concern is on the man-made versus the natural side of the pandemic front here.

[267] So we humans engineering pathogens, engineering viruses, is the concern here.

[268] Yeah.

[269] And maybe, how do you see the possible trajectories happening here, in terms of: is it malevolence, or is it accidents, oops, little mistakes, or unintended consequences of particular actions that ultimately lead to unexpected mistakes?

[270] Well, both of them are a danger.

[271] And I think the question of which is more likely has to do with two things.

[272] One, do we take a lot of methodical, affordable, foresighted steps that we are absolutely capable of taking right now to forestall the risk of a bad actor infecting us with something that could have annihilating impacts?

[273] And in the episode you referenced with Sam, we talked a great deal about that.

[274] So do we take those steps?

[275] And if we take those steps, I think the danger of malevolent rogue actors doing us in with SynBio could plummet.

[276] But, you know, it's always a question of if.

[277] And we have a bad, bad, and very long track record of hitting the snooze bar after different natural pandemics have attacked us.

[278] So that's variable number one.

[279] Variable number two is how much experimentation and pathogen development do we as a society decide is acceptable in the realms of academia, government, or private industry.

[280] And if we decide as a society that it's perfectly okay for people with varying research agendas to create pathogens that, if released, could wipe out humanity, if we think that's fine, and if that kind of work starts happening in, you know, one lab, five labs, 50 labs, 500 labs, in one country, then 10 countries, then 70 countries, or whatever, that risk of a boo-boo starts rising astronomically.

[281] And this won't be a spoiler alert based on the way that I presented those two things.

[282] But I think it's unbelievably important to manage both of those risks.

[283] The easier one to manage, although it wouldn't be simple by any stretch because it would have to be something that all nations agree on.

[284] But the easiest way, the easier risk to manage is that of, hey, guys, let's not develop pathogens that if they escape from a lab could annihilate us.

[285] There's no line of research that justifies that.

[286] And in my view, I mean, that's the point of perspective we'd need to have.

[287] We'd have to collectively agree that there's no line of research that justifies that.

[288] The reason why I believe that would be a highly rational conclusion is even the highest level of biosafety lab in the world, biosafety lab level four.

[289] And there are not a lot of BSL-4 labs in the world.

[290] Things can and have leaked out of BSL-4 labs.

[292] And some of the work that's been done with potentially annihilating pathogens, which we can talk about, is actually done at BSL-3.

[293] And so fundamentally, any lab can leak.

[294] We have proven ourselves to be incapable of creating a lab that is utterly impervious to leaks.

[295] So why in the world would we create something that, if, God forbid, it leaked, could annihilate us all?

[296] And by the way, almost all of the measures that are taken in biosafety level anything labs are designed to prevent accidental leaks.

[297] What happens if you have a malevolent insider?

[298] And we could talk about the psychology and the motivations of what would make a malevolent insider who wants to release something annihilating in a bit.

[299] I'm sure that we will.

[300] But what if you have a malevolent insider?

[301] Virtually none of the standards that go into biosafety level one, two, three, and four are about preventing somebody hijacking the process.

[302] I mean, some of them are, but they're mainly designed against accidents.

[303] They're imperfect against accidents.

[304] And if this kind of work starts happening in lots and lots of labs, with every lab you add, the odds of there being a malevolent insider naturally increase arithmetically as the number of labs goes up.
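To make that scaling concrete, here is a minimal Python sketch; the per-lab probability is an invented illustrative figure, not an estimate from the conversation:

```python
# Minimal sketch of how insider risk scales with lab count.
# p_per_lab is a purely hypothetical probability that any one lab
# harbors a malevolent insider; it is not a figure from the episode.
def p_any_insider(num_labs: int, p_per_lab: float = 0.001) -> float:
    """P(at least one malevolent insider) across independent labs."""
    return 1 - (1 - p_per_lab) ** num_labs

for n in (1, 5, 50, 500):
    print(f"{n:3d} labs -> {p_any_insider(n):.2%}")
# While n * p_per_lab stays small, the risk grows roughly linearly
# (arithmetically) with the number of labs, since 1 - (1 - p)**n ~ n * p.
```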

[305] Now, on the front of somebody outside of a traditional government, academic, or scientific environment creating something malevolent.

[306] Again, there are protections that we can take, both at the level of SynBio architecture, hardening the entire SynBio ecosystem against terrible things being made that we don't want to have out there by rogue actors, to early detection, to lots and lots of other things that we can do to dramatically mitigate that risk.

[307] And I think if we do both of those

[308] things, decide that, A, no, we're not going to experimentally make annihilating pathogens in leaky labs,

[309] and B, yes, we are going to take countermeasures that are going to cost a fraction of our annual defense budget to preclude their creation,

[310] then I think both risks get managed down.

[311] But if you take one set of precautions and not the other, then the thing that you have not taken precautions against immediately becomes the more likely outcome.

[312] So can we talk about this kind of research and what's actually done and what are the positives and negatives of it.

[313] So if we look at gain of function research and the kind of stuff that's happening in level three and level four, BSL labs, what's the whole idea here?

[314] Is it trying to engineer viruses to understand how they behave?

[315] You want to understand the dangerous ones.

[316] Yeah.

[317] So that would be the logic behind doing it.

[318] And so gain of function can mean a lot of different things.

[319] Viewed through a certain lens, gain-of-function research could be what you do when you create GMOs, when you create hardy strains of corn that are resistant to pesticides.

[320] I mean, you could view that as gain of function.

[321] So I'm going to refer to gain of function in a relatively narrow sense, which is actually the sense that the term is usually used, which is in some way magnifying capabilities of microorganisms to make them more dangerous, whether it's more transmissible or more deadly.

[322] And in that line of research, I'll use an example from 2011 because it's very illustrative and it's also very chilling.

[323] Back in 2011, two separate labs, independently of one another,

[324] I assume there was some kind of communication between them, but they were basically independent projects, one in Holland and one in Wisconsin, did gain-of-function research on something called H5N1 flu.

[325] H5N1 is, you know, something that, at least on a lethality basis, makes COVID look like a kitten.

[326] You know, COVID, according to the World Health Organization, has a case fatality rate somewhere between half a percent and 1 percent.

[327] H5N1 is closer to 60 percent. Six-zero.

[328] And so that's actually even slightly more lethal than Ebola.

[329] It's a very, very, very scary pathogen.

[330] The good news about H5N1 is that it is barely, barely

[331] contagious.

[332] And I believe it is in no way contagious human to human.

[333] It requires very, very, very deep contact with birds, in most cases, chickens.

[334] And so if you're a chicken farmer and you spend an enormous amount of time around them, and perhaps you get into situations in which you get a break in your skin and you're interacting intensely with fowl who, as it turns out, have H5N1, that's when the jump comes.

[335] But there's no airborne transmission that we're aware of, human to human.

[336] I mean, not that we're aware of, it just doesn't exist.

[337] I think the World Health Organization did a relentless survey of the number of H5N1 cases.

[338] I think they do it every year.

[339] I saw one 10-year series where I think it was like 500 fatalities over the course of a decade.

[340] And that's a drop in the bucket.

[341] Kind of fun fact.

[342] I believe the typical lethality from lightning over 10 years is 70,000 deaths.

[344] So we think getting struck by lightning, pretty low risk; H5N1, much, much lower than that.

[345] What happened in these experiments is the experimenters in both cases set out to make H5N1 that would be contagious, that could create airborne transmission.

[346] And so they basically passed it, I think in both cases, they passed it through a large number of ferrets.

[347] And so this wasn't like CRISPR; there wasn't even CRISPR back in those days.

[348] This was relatively straightforward, you know, selecting for a particular outcome.

[349] And after guiding the path and passing them through, again, I believe it was a series of ferrets, they did, in fact, come up with a version of H5N1 that is capable of airborne transmission.

[350] Now, they didn't unleash it into the world.

[351] They didn't inject it into humans to see what would happen.

[352] And so for those two reasons, we don't really know how contagious it might have been.

[353] But, you know, if it was as contagious as COVID, that could be

[354] a civilization-threatening pathogen.

[355] And why would you do it?

[356] Well, the people who did it were good guys.

[357] They were virologists.

[358] I believe their agenda, as they explained it, was, much as you said, let's figure out what a worst-case scenario might look like so we could understand it better.

[359] But my understanding is in both cases it was done in BSL-3 labs.

[360] And so the potential of a leak: significantly non-zero, hopefully way below 1%, but significantly non-zero. And when you look at the consequences of an escape in terms of human lives, destruction of a large portion of the economy, et cetera, and you do an expected value calculation on whatever fraction of 1% that was, you would come up with a staggering cost, a staggering expected cost, for this work.

[361] So it should never have been carried out.
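As a worked version of that expected value argument, here is a short sketch; both numbers are placeholders I've invented for illustration, since no specific figures are quoted in the conversation:

```python
# Hedged illustration of the expected-cost logic; both inputs are
# hypothetical placeholders, not estimates from the conversation.
p_escape = 0.005           # "some fraction of 1%": chance the pathogen leaks
cost_if_escape = 10e12     # dollars of damage from a COVID-scale or worse event

expected_cost = p_escape * cost_if_escape
print(f"Expected cost: ${expected_cost:,.0f}")  # $50,000,000,000
# Even at half a percent, the expected cost runs to tens of billions of
# dollars, dwarfing any plausible scientific benefit of the experiment.
```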

[362] Now, you might make an argument, if you said, if you believed that H5N1 in nature is on an inevitable path to airborne transmission.

[363] And it's only going to be a small number of years, A, and B, if it makes that transition, there is, you know, one set of changes to its metabolic pathways and, you know, its genomic code and so forth, one that we have discovered.

[364] So it is going to go from point A, which is where it is right now, to point B, we have reliably engineered point B, that is the destination.

[365] and we need to start fighting that right now because this is five years or less away.

[366] Now, that'd be a very different world.

[367] That'd be like spotting an asteroid that's coming toward the Earth and is five years off.

[368] And yes, you marshal everything you can to resist that.

[369] But there's two problems with that perspective.

[370] The first is, in however many thousands of generations that humans have been inhabiting this planet, there has never been a transmissible form of H5N1.

[371] And influenza's been around for a very long time.

[372] So there is no case for inevitability

[373] of this kind of a jump to airborne transmission.

[374] So we're not on a freight train to that outcome.

[375] And if there was inevitability around that, it's not like there's just one set of genetic code that would get there.

[376] There's all kinds of different mutations that could conceivably result in that kind of an outcome.

[377] Unbelievable diversity of mutations.

[378] And so we're not actually creating something we're inevitably going to face, but we are creating something.

[379] We are creating a very powerful and unbelievably negative card and injecting it into the deck, a card that nature never put into the deck.

[380] So in that case, I just don't see any moral or scientific justification for that kind of work.

[381] And interestingly, there was quite a bit of excitement and concern about this when the work came out.

[382] One of the teams was going to publish their results in science, the other in nature.

[383] And there were a lot of editorials and a lot of scientists saying, this is crazy.

[384] And publication of those papers did get suspended.

[385] And not long after that, there was a pause put on U.S. government funding, NIH funding, on gain-of-function research.

[386] But both of those speed bumps were ultimately removed.

[387] Those papers did ultimately get published, and that pause on funding, you know, ceased long ago.

[388] And in fact, those two very projects, my understanding is, resumed their funding, got their government funding back,

[389] I don't know why a Dutch project is getting NIH funding, but whatever,

[390] about a year and a half ago.

[391] So as far as the U.S. government and regulators are concerned, it's all systems go for gain of function at this point, which I find very troubling.

[392] Now, I'm a little bit of an outsider from this field, but it has echoes of the same kind of problem I see in the AI world with autonomous weapon systems.

[393] Nobody, my colleagues, friends, as far as I can tell, people in the AI community, are really talking about autonomous weapon systems, as now the U.S. and China are going full steam ahead on the development of both.

[394] And that seems to be a similar kind of thing on gain of function.

[395] I have friends in the biology space and they don't want to talk about gain of function publicly.

[396] And that makes me very uncomfortable from an outsider perspective in terms of gain of function.

[397] It makes me very uncomfortable from the insider perspective on autonomous weapon systems.

[398] I'm not sure how to communicate exactly about autonomous weapon systems, and I certainly don't know how to communicate effectively about gain of function.

[399] What is the right path forward here?

[400] Should we cease all gain-of-function research?

[401] Is that really the solution here?

[402] Well, again, I'm going to use gain of function in the relatively narrow context of what we're discussing.

[403] You could say almost anything that you do to make biology more effective is gain of function.

[404] So within the narrow confines of what we're discussing, I think it would be easy enough for level-headed people, level-headed governmental people in all of the countries that realistically could support such a program, to agree: we don't want this to happen, because all labs leak.

[405] I mean, an example that I actually used in the piece I did with Sam Harris as well is the anthrax attacks in the United States in 2001.

[406] I mean, talk about an example of the least likely lab leaking into the least likely place.

[407] This was shortly after 9/11, for folks who don't remember it.

[408] And it was a very, very lethal strain of anthrax that, as it turned out, based on the forensic genomic work that was done and so forth, absolutely leaked from a high-security U.S. Army lab.

[409] Probably the one at Fort Detrick in Maryland.

[410] It might have been another one, but who cares?

[411] It absolutely leaked from a high-security U.S. Army lab.

[412] And where did it leak to, this highly dangerous substance that was kept under lock and key by a very security-minded organization?

[413] Well, it leaked to places including the Senate Majority Leader's office, Tom Daschle's office.

[414] I think it was Senator Leahy's office, certain publications, including, bizarrely, the National Enquirer.

[415] But let's go to the Senate Majority Leader's Office.

[416] It is hard to imagine a more security-minded country than the United States two weeks after the 9/11 attack.

[417] I mean, it doesn't get more security-minded

[418] than that.

[419] And it's also hard to imagine a more security -capable organization than the United States military.

[420] We can joke all we want about inefficiencies in the military and, you know, $24,000 wrenches and so forth, but pretty capable when it comes to that.

[421] Despite that level of focus and concern and competence, just days after the 9/11 attack, something comes from the inside of our military-industrial complex and ends up, you know, in the office of someone who, I believe, as Senate Majority Leader, is somewhere in the line of presidential succession.

[422] It tells us everything can leak.

[423] So again, think of a level-headed conversation between powerful leaders in a diversity of countries, thinking through, like, I can imagine a very simple PowerPoint revealing, you know, just discussing briefly things like the anthrax leak, things like the foot-and-mouth disease outbreak, or leak, that came out of a BSL-4 level lab in the UK, several other things, talking about the utter virulence that could result from gain of function, and saying, folks, can we agree that this just shouldn't happen?

[424] I mean, if we were able to agree on the nuclear nonproliferation treaty, which we were, or the bioweapons convention, which we did agree on, we the world, for the most part, I believe agreement could be found there.

[425] But it's going to take people in leadership of a couple of very powerful countries to get to consensus amongst them and then to decide we're going to get everybody together and browbeat them into banning this stuff.

[426] Now, that doesn't make it entirely impossible that somebody might do this, but in well-regulated, you know, carefully watched-over fiduciary environments, like federally funded academic research, anything going on in the government itself, you know, things going on in, you know, companies that have investors who don't want to go to jail for the rest

[427] of their lives, I think that would have a major, major dampening impact on it.

[428] But there is a particular possible catalyst in this time we live in for really kind of raising the question of gain-of-function research, of the application of making viruses more dangerous.

[429] It is the question of whether COVID leaked from a lab.

[430] Sort of not even answering that question, but even asking that question.

[431] It seems like a very important question to ask to catalyze the conversation about whether we should be doing gain-of-function research.

[432] I mean, from a high level, why do you think people, even colleagues of mine, are not comfortable asking that question?

[433] And two, do you think that the answer could be that it did leak from a lab?

[434] I think the mere possibility that it did leak from a lab is evidence enough, again, for the hypothetical, rational, national leaders watching this simple PowerPoint.

[435] If you could put the possibility at 1%, and you look at the unbelievable destructive power that COVID had, that should be an overwhelmingly powerful argument for excluding it.

[436] Now, as to whether or not that was a leak, some very, very level-headed people believe it was.

[437] I don't know enough about all of the factors in the Bayesian analysis and so forth that have gone into people making the pro argument of that.

[438] So I don't pretend to be an expert on that, and I don't have a point of view.

[439] I just don't know.

[440] But what we can say is it is entirely possible, for a couple of reasons.

[441] One is that there is a BSL-4 lab in Wuhan, the Wuhan Institute of Virology.

[442] I believe it's the only

[443] BSL-4 in China.

[444] I could be wrong about that.

[445] But it definitely had a history that alarmed very sophisticated U.S. diplomats and others who were in contact with the lab and were aware of what it was doing long before COVID hit the world.

[446] And so there are diplomatic cables that have been declassified.

[447] I believe one sophisticated scientist or other observer said that WIV is a ticking time bomb.

[448] And I believe it's also been pretty reasonably established that coronaviruses were a topic of great interest at WIV.

[449] SARS obviously came out of China, and that's a coronavirus.

[450] It would make an enormous amount of sense for it to be studied there.

[451] And there is so much opacity about what happened in the early days and weeks after the outbreak that's basically been imposed by the Chinese government that we just don't know.

[452] So it feels like a substantial, or greater than 1%, possibility to me, looking at it from the outside.

[453] And that's something that one could imagine.

[454] Now we're going to the realm of thought experiment, not me declaring this is what happened, but, you know, if they're studying coronaviruses at the Wuhan Institute of Virology, and there is this precedent of gain-of-function research that's been done on something that is remarkably uncontagious to humans, whereas we know coronavirus is contagious to humans.

[455] I could definitely, and there is this global consensus, you know, certainly it was the case, you know, two or three years ago, when this work might have started,

[456] there seems to be this global consensus that gain of function is fine.

[457] The U.S. paused funding for a little while, but it just paused funding.

[458] They never said private actors couldn't do it.

[459] It was just a pause of NIH funding.

[460] And then that pause was lifted.

[461] So again, none of this is irrational.

[462] You could certainly see the folks at WIV saying, gain of function, interesting vector.

[463] Coronavirus, unlike H5N1, very contagious.

[464] We're in a nation that has had terrible run-ins with coronavirus.

[465] Why don't we do a little gain of function on this?

[466] And then, like all labs at all levels, one can imagine this lab leaking.

[467] So it's not an impossibility, and very, very level-headed people, you know, who have looked at it much more deeply, have said that they do believe in that outcome.

[468] Why is it such a threat to power,

[469] the idea that it leaked from a lab?

[470] Why is it so threatening?

[471] Maybe I don't understand this point exactly.

[472] Like, is it just that governments, and especially the Chinese government, are really afraid of admitting mistakes that everybody makes?

[473] So this is a horrible mistake.

[474] Like, Chernobyl is a good example.

[475] I come from the Soviet Union.

[476] I mean, well, major mistakes were made in Chernobyl.

[477] I would argue, for a lab leak to happen, the scale of the mistake is much smaller, right? The depth and the breadth of the rot in bureaucracy that led to Chernobyl is much bigger than anything that could lead to a lab leak, because it could literally just be, I mean, I'm sure there are very careful security procedures even in level-three labs, but, I imagine, maybe you can correct me, all it takes is the incompetence of a small number of individuals.

[478] Or even one.

[479] One individual, in a particular couple-weeks, three-weeks period, as opposed to a multi-year bureaucratic failure of the entire government.

[480] Right.

[481] Well, certainly the magnitude of mistakes and compounding mistakes that led to Chernobyl was far, far, far greater.

[482] But the consequence of COVID outweighs the consequence of Chernobyl to a tremendous degree.

[483] And, you know, I think that, that particularly authoritarian governments are unbelievably reluctant to admit to any fallibility whatsoever.

[484] And there's a long, long history of that across dozens and dozens of authoritarian governments.

[485] And to be transparent, again, this is in the hypothetical world in which this was a leak, which, again, I don't personally have enough sophistication to have an opinion on the likelihood of.

[486] But in the hypothetical world in which it was a leak, the global reaction and the amount of global animus and the amount of, you know, the decline in global respect that would happen toward China, because every country suffered massively from this, unbelievable damages in terms of human

[487] lives and economic activity disrupted, the world would in some way present China with that bill.

[488] And when you take on top of that the natural disinclination for any authoritarian government to admit any fallibility and tolerate the possibility of any fallibility whatsoever, and you look at the relative opacity, even though they let a World Health Organization group in a couple of months ago to run around, they didn't give that WHO group anywhere near the level of access that would be necessary to definitively say X happened versus Y, given the level of opacity that surrounds those opening weeks and months of COVID in China, we just don't know.

[489] If you were to kind of look back at 2020, and maybe broadening it out to future pandemics that could be much more dangerous, what kind of response, how did we fail in our response, and how could we do better?

[490] So the gain-of-function research we were discussing raises, you know, the question of, we should not be creating viruses that are both exceptionally contagious and exceptionally deadly to humans.

[491] But if it does happen, perhaps through natural evolution, natural mutation.

[492] Are there interesting technological responses on the testing side, on the vaccine development side, on the collection of data, or on the basic sort of policy response side, or the sociological, the psychological side? Yeah, there's all kinds of things, and most of what I've thought about and written about, and again discussed in that long bit with Sam, is dual use. So most of the countermeasures that I've been thinking about and advocating for would be every bit as effective against a zoonotic disease, a natural pandemic of some sort, as an artificial one.

[493] The risk of an artificial one, even the near-term risk of an artificial one, ups the urgency around these measures immensely, but most of them would be broadly applicable.

[494] And so I think the first thing that we really want to do on a global scale is have a far, far, far more robust and globally transparent system of detection.

[495] And that can happen on a number of levels.

[496] The most obvious one is, you know, in the blood of people who come into clinics exhibiting signs of illness.

[497] And we are certainly at a point now where, with relatively minimal investment,

[498] we could develop in-clinic diagnostics that would be unbelievably effective at pinpointing what's going on with almost any disease when somebody walks into a doctor's office or a clinic.

[499] And better than that, this is a little bit further off, but it wouldn't cost tens of billions in research dollars.

[500] It would be, you know, a relatively modest and affordable budget in relation to the threat: at-home diagnostics that can really, really pinpoint, you know, okay, particularly with respiratory infections, because that is generally, almost universally, the mechanism of transmission for any serious pandemic.

[501] So, somebody has a respiratory infection: is it one of the, you know, significantly large handful of rhinoviruses, coronaviruses, and other things that cause the common cold?

[502] Or is it influenza?

[503] If it's influenza, is it influenza A versus B?

[504] Or is it, you know, a small handful of other more exotic, but nonetheless, sort of common respiratory infections that are out there?

[505] Developing a diagnostic panel to pinpoint all of that stuff, that's something that's well within our capabilities.

[506] That's much less of a lift than creating mRNA vaccines, which obviously we proved capable of when we put our minds to it.

[507] So do that on a global basis.

[508] And I don't think that's irrational, because the best prototype for this that I'm aware of isn't currently rolling out in Atherton, California, or Fairfield County, Connecticut, or some other wealthy place.

[509] The best prototype that I'm aware of is rolling out right now in Nigeria.

[510] And it's a project that came out of the Broad Institute, which, as I'm sure you know, but some listeners may not, is kind of like an academic joint venture between Harvard and MIT.

[511] The program is called Sentinel, and their objective, and their plan, and it's a very well-conceived, methodical plan, is to do just that in areas of Nigeria that are particularly vulnerable to zoonotic diseases making the jump from animals to humans.

[512] But also there's just an unbelievable public health benefit from that.

[513] And it's sort of a three-tier system where clinicians in the field could very rapidly determine: do you have one of the infections of acute interest here, either because it's very common in this region, so we want to diagnose as many things as we can at the front line, or because it's uncommon but unbelievably threatening, like Ebola.

[514] So a frontline worker can make that determination very, very rapidly.

[515] If it comes up as "we don't know," they bump it up to a level that's more like a fully configured doctor's office or local hospital.

[516] And if it's still a "we don't know," it gets bumped up to a national level.

[517] And it gets bumped very, very rapidly.
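The three-tier escalation described here could be summarized in code roughly as follows; the tier names, pathogen panels, and functions are my own illustrative stand-ins, not the actual Sentinel system:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the three-tier escalation described above;
# names, panels, and logic are invented stand-ins, not the real system.

@dataclass
class Diagnosis:
    pathogen: Optional[str]  # None means "we don't know"

def frontline_panel(sample: str) -> Diagnosis:
    # Tier 1: rapid field test for locally common or acutely dangerous pathogens.
    panel = {"malaria", "lassa fever", "ebola"}
    return Diagnosis(sample if sample in panel else None)

def regional_panel(sample: str) -> Diagnosis:
    # Tier 2: broader panel at a fully configured doctor's office or hospital.
    panel = {"influenza-a", "influenza-b", "sars-cov-2"}
    return Diagnosis(sample if sample in panel else None)

def national_lab(sample: str) -> Diagnosis:
    # Tier 3: national-level analysis (e.g., sequencing) of anything still unknown.
    return Diagnosis(f"sequenced:{sample}")

def triage(sample: str) -> Diagnosis:
    # Escalate rapidly through the tiers until the pathogen is identified.
    for tier in (frontline_panel, regional_panel, national_lab):
        result = tier(sample)
        if result.pathogen is not None:
            return result
    return Diagnosis(None)

print(triage("ebola"))        # caught at tier 1
print(triage("novel-virus"))  # escalates all the way to the national lab
```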

[518] So if this can be done in Nigeria, and it seems that it can be, there shouldn't be any inhibition for it to happen in most other places.

[519] And it should be affordable from a budgetary standpoint.

[520] And based on Sentinel's budget, and adjusting for things like, you know, very different cost of living, larger population, et cetera, I did a back-of-the-envelope calculation that doing something like Sentinel in the U.S. would cost in the low billions of dollars.

[521] And, you know, wealthy countries and middle-income countries can certainly afford such a thing.

[522] Lower income countries should certainly be helped with that.

[523] But start with that level of detection.

[524] And then layer on top of that other interesting things like, you know, monitoring search engine traffic, search engine queries for evidence that strange clusters of symptoms are starting to rise in different places.

[525] There's been a lot of work done with that.

[526] Most of it kind of academic and experimental, but some of it has been powerful enough to suggest that this could be a very powerful early warning system.

[527] There's a guy named Bill Lampos at University College London, who basically did a very rigorous analysis that showed that symptom searches reliably predicted COVID outbreaks in the early days of the pandemic in given countries by as much as 16 days before the evidence started to accrue at a public health level.

[528] 16 days of forewarning can be monumentally important in the early days of an outbreak.

[529] And this is, you know, a very, very talented, but nonetheless, very resource -constrained academic project.

[530] Imagine if that was something that was done with a NORAD -like budget.
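To make that concrete, here is a minimal sketch, in Python, of the lead-lag idea behind this kind of early-warning analysis; this is not Lampos's actual method, and the data below is synthetic:

```python
import numpy as np

def best_lead_time(searches: np.ndarray, cases: np.ndarray, max_lag: int = 30):
    """Find the lag (in days) at which the search series best
    predicts the case series, i.e. how much forewarning it gives."""
    best_lag, best_r = 0, -1.0
    for lag in range(1, max_lag + 1):
        # Correlate searches on day t with confirmed cases on day t + lag.
        r = np.corrcoef(searches[:-lag], cases[lag:])[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Synthetic data: symptom searches spike ~16 days before cases do.
rng = np.random.default_rng(0)
days = np.arange(120)
signal = np.exp(-((days - 50) ** 2) / 50.0)
searches = signal + rng.normal(0, 0.05, days.size)
cases = np.roll(signal, 16) + rng.normal(0, 0.05, days.size)

print(best_lead_time(searches, cases))  # expect a lag near 16 days
```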

[531] Yeah.

[532] Yeah.

[533] So, I mean, starting with detection, that's something we could do radically, radically better.

[534] So aggregating multiple data sources in order to create something.

[535] I mean, this is really exciting to me, the possibility that I've heard inklings of, of creating almost like a weather map of pathogens: basically aggregating all of these data sources, scaling up at-home testing and all kinds of testing by many orders of magnitude, testing that doesn't just try to detect the particular pathogen of worry now, but everything, like a full spectrum of things that could be dangerous to the human body,

[536] and thereby being able to create these maps, dynamically updated on an hourly basis, of how viruses travel throughout the world.

[537] And so you can respond; you can then integrate it into your life just like you do when you check your weather map to see whether it's raining or not.

[538] Of course it's not perfect, but it's a very good predictor of whether it's going to rain or not.

[539] And use that to then make decisions about your own life.

[540] Ultimately, give the power of information to individuals to respond.

[541] And if it's a super dangerous, like if it's acid rain versus regular rain, you might want to really stay inside as opposed to risking it.

[542] And that, just like you said, I think it's not very expensive relative to all the things that we do in this world.

[543] But it does require bold leadership.

[544] And there's another dark thing, which has really bothered me about 2020, which is that it requires trust in institutions

[545] to carry out these kinds of programs, and it requires trust in science and engineers and sort of centralized organizations that would operate at scale here.

[546] And much of that trust has been, at least in the United States, diminished.

[547] It feels like, I'm not exactly sure where to place the blame, but I do place quite a bit of the blame on the scientific community.

[548] And again, my fellow colleagues, in speaking down to people at times, speaking from authority, it sounded like it dismissed the basic human experience, the basic common humanity of people, in a way that almost made it sound like there was a hidden agenda behind the words the scientists spoke, like they were trying, in a self-preserving way, to control the population or something like that.

[549] I don't think any of that is true from the majority of the scientific community, but it sounded that way.

[550] And so the trust began to diminish.

[551] I'm not sure how to fix that, except to be more authentic, be more real, acknowledge the uncertainties under which we operate, acknowledge the mistakes that scientists make, that institutions make; the leak from the lab is a perfect example.

[552] We have imperfect systems that make all the progress we see in the world, and being honest about that imperfection, I think, is essential for forming trust.

[553] But I don't know what to make of it.

[554] It's been deeply disappointing because I do think, just like you mentioned, the solutions require people to trust the institutions with their data.

[555] Yeah.

[556] And I think part of the problem is, it seems to me as an outsider, that there was a bizarre unwillingness on the part of the CDC and other institutions to admit to, to frame, and to contextualize uncertainty.

[557] Maybe they had a patronizing idea that these people need to be told, and when they're told, they need to be told with authority and a level of definitiveness and certitude that doesn't actually exist.

[558] And so they whipsawed on recommendations, like what you should do about masks. You know, at the very beginning of the pandemic, the CDC was kind of saying masks don't do anything, don't wear them, when the real driver for that was: we don't want these clowns going out and depleting Amazon of masks, because they may be needed in medical settings, and we just don't know yet.

[559] I think a message that actually respected people and said, this is why we're asking you not to do masks yet, and there's more to be seen, would be less whipsawing and would make people feel more like they're part of the conversation and being treated like adults, rather than saying one day, definitively, masks

[560] suck, and then X days later saying, nope, damn it, wear masks.

[561] And so I think framing things in terms of the probabilities, which most people find easy to parse. I mean, a more recent example, which I just thought was batty, was suspending the Johnson and Johnson vaccine for a very low number of days in the United States, based on the fact that, I believe, there had been seven-ish clotting incidents in roughly seven million people who had had the vaccine administered, I believe one of which resulted in a fatality.

[562] And there was definitely suggestive data that indicated that there was a relationship.

[563] This wasn't just coincidental because I think all of the clotting incidents happened in women as opposed to men and kind of clustered in a certain age group.

[564] But does that call for shutting off the vaccine?

[565] Or does it call for leveling with the American public and saying, we've had one fatality out of seven million.

[566] This is, let's just assume, substantially less than the likelihood of getting struck by lightning.

[567] Based on that information, and we're going to keep you posted because you can trust us to keep you posted, based on that information, please decide whether you're comfortable with a Johnson and Johnson vaccine.

[568] That would have been one response, and I think people would have been able to parse those simple bits of data and make their own judgment.
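The back-of-the-envelope arithmetic behind that framing is simple to check; the lightning figure below is an approximate, commonly cited annual-odds estimate, not an exact number:

```python
# Rough arithmetic for the "level with the public" framing above.
clot_fatalities = 1
doses_given = 7_000_000
fatality_risk = clot_fatalities / doses_given  # ~1.4e-7 per dose

# Approximate, commonly cited odds of being struck by lightning
# in the U.S. in a given year: about 1 in 1.2 million.
lightning_annual_odds = 1 / 1_200_000

print(f"clotting fatality risk: ~1 in {doses_given // clot_fatalities:,}")
print(fatality_risk < lightning_annual_odds)  # True: lower than lightning
```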

[569] By turning it off all of a sudden, there's this dramatic signal to people who don't read all 900 words in the New York Times piece that explains why it's being turned off, but just see the headline, which is a majority of people; there's a sudden, like, oh my God, yikes, the vaccine is being shut off.

[570] And then all the people who sat on the fence or are sitting on the fence about whether or not they trust vaccines, that is going to push an incalculable number of people.

[571] That's going to be the last straw for we don't know how many hundreds of thousands or more likely millions of people to say, okay, tipping point here, I don't trust these vaccines.

[572] So by pausing that for whatever it was, 10 or 12 days, and then flipping the switch, as everybody who knew much about the situation knew was inevitable.

[573] By flipping the on switch 12 days later, you're going from certitude that J&J is bad to certitude that J&J is good in a period of just a few days, and people just feel whipsawed, and they're not part of the analysis.

[574] But it's not just the whipsawing.

[575] And I think about this quite a bit.

[576] I don't think I have good answers.

[577] It's something about the way the communication actually happens.

[578] Just, I don't know what it is about Anthony Fauci, for example, but I don't trust him.

[579] And I think that has to do, I mean, he's, he has an incredible background.

[580] I'm sure he's a brilliant scientist and researcher.

[581] I'm sure he's also a great, like inside the room, policymaker and deliberator and so on.

[582] But, you know, what makes a great leader is something about that thing that you can't quite describe: being a communicator that you know you can trust; there's an authenticity that's required.

[583] And I'm not sure, maybe I'm being a bit too judgmental, but I'm a huge fan.

[584] of a lot of great leaders throughout history.

[585] They've communicated exceptionally well in the way that Fauci does not, and I think about that.

[586] I think about what makes for effective science communication.

[587] So, you know, great leaders throughout history did not necessarily need to be great science communicators.

[588] Their leadership was in other domains.

[589] But when you're fighting the virus, you also have to be a great science communicator.

[590] You have to be able to communicate uncertainties.

[591] You have to be able to communicate about something like a vaccine, which you're allowing inside your body, into the messiness, into the complexity of the biological system, which, if we're being honest, is so complex we'll never be able to really understand it.

[592] We can only desperately hope that science can give us sort of a high likelihood that there are no short-term negative consequences, and some kind of intuition about long-term negative consequences, while doing our best in this battle against trillions of things that are trying to kill us.

[593] I mean, being an effective communicator in that space is very difficult, but I think about what it takes, because I think there should be more science communicators who are effective at that kind of thing.

[594] Let me ask you about something that's sort of more in the AI space, that kind of goes along this thread that you've spoken about, about democratizing the technology that could destroy human civilization. There's the amazing work from DeepMind, AlphaFold 2, which achieved incredible performance on the protein folding problem, the single-protein folding problem.

[595] Do you think about the use of AI in the synbio space?

[596] The gain-of-function, virus-based research that you referred to, I think, relies on natural mutations, sort of aggressively mutating the virus until you get one that is both contagious and deadly.

[597] But what about using AI to, through simulation, be able to compute deadly viruses or any kind of biological system?

[598] Is this something you're worried about?

[599] Or, again, is this something you're more excited about?

[601] I think computational biology is an unbelievably exciting and promising field.

[602] And I think when you're doing things in silico, as opposed to in vivo, you know, the dangers plummet.

[603] You don't have a critter that can leak from a leaky lab.

[604] Yes.

[605] So I don't see any problem with that, except I do worry about the data security dimension of it.

[606] Because if you were doing really, really interesting in silico gain-of-function research, and you hit upon something through a level of sophistication

[607] we don't currently have, but, you know, synthetic biology is an exponential technology,

[608] so capabilities that are utterly out of reach today will be attainable in five or six years.

[609] I think if you conjured up worst-case genomes of viruses that don't exist in vivo anywhere, they're just in the computer space, but like, hey, guys, this is the genetic sequence that would end the world, let's say.

[610] Then you have to worry about the utter hackability of every computer network we can imagine.

[611] I mean, data leaks from the least likely places on the grandest possible scales have happened and continue to happen.

[612] And we'll probably always continue to happen.

[613] And so that would be the danger of doing the work in silico.

[614] If you end up with a list of like, well, these are things we never want to see.

[615] That list leaks.

[616] And after the passage of some time, it certainly couldn't be done today, but after the passage of some time, lots and lots of people in academic labs, going all the way down to the high school level, are in a position to, you know, to make it overly simplistic, hit print on a genome and have the virus bearing that genome pop out on the other end, and you've got something to worry about.

[617] But in general, computational biology, I think, is incredibly important, particularly because the crushing majority of work that people are doing with the protein folding problem and other things is

[618] about creating therapeutics, about creating things that will help us, you know, live better, live longer, thrive, be more well, and so forth.

[619] And the protein folding problem is a monstrous computational challenge that we seemed to make just the most glacial project on, I'm sorry, progress on, for years and years.

[620] But I think there's a biennial competition in which people tackle the protein folding problem.

[621] And DeepMind's entrant, both two years ago, in 2018, and in 2020, ruled the field.

[622] And so, you know, protein folding is an unbelievably important thing if you want to start thinking about therapeutics, because, you know, it's the folding of the protein that tells us where the channels and the receptors and everything else are on that protein.

[623] And it's from that precise model, if we can get to a precise model, that you can start barraging it, again in silico, with, you know, thousands, tens of thousands, millions of potential therapeutics, and see what resolves the problems, the shortcomings of, you know, a bad, a misshapen protein. For instance, somebody with cystic fibrosis, how might we treat that?

[624] So I see nothing but good in that.
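As a rough illustration of what that in silico barrage looks like, here is a toy sketch in Python; the scoring function is a placeholder standing in for a physics-based docking tool, and the target name is made up:

```python
import heapq
import random

def binding_score(molecule: str, target: str) -> float:
    """Placeholder for a docking score that estimates how well a
    candidate molecule fits a channel or receptor on the folded
    target protein. Real pipelines compute this physically."""
    return random.Random(molecule + target).random()

def screen(library: list[str], target: str, top_n: int = 10):
    # Barrage the target with the whole library; keep the best hits.
    scored = ((binding_score(m, target), m) for m in library)
    return heapq.nlargest(top_n, scored)

library = [f"candidate-{i}" for i in range(100_000)]   # virtual library
hits = screen(library, target="misfolded-CFTR-model")  # hypothetical target
print(hits[:3])
```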

[625] Well, let me ask about fear and hope in this world.

[626] I tend to believe, in terms of competence and malevolence, and maybe it's just my interactions, but first of all, I believe that most people are good and want to do good, and are just better at doing good and more inclined to do good in this world.

[627] And more than that, people who are malevolent are usually incompetent at building technology.

[628] So I've seen this in my life that people who are exceptionally good at stuff, no matter what the stuff is, tend to, maybe they discover joy in life in a way that gives them fulfillment and thereby does not result in them wanting to destroy the world.

[629] So the better you are at stuff, whether that's building nuclear weapons or plumbing, it doesn't matter, the less likely you are to destroy the world.

[630] So in that sense, with many technologies, AI especially, I always think that the malevolent will be far outnumbered by the ultra-competent.

[631] And in that sense, the defenses will always be stronger than the offense in terms of the people trying to destroy the world.

[632] Now, there's a few spaces where that might not be the case, and that's an interesting conversation, where this one person who's not very competent can destroy the whole world.

[633] Perhaps synbio is one such space, because of the exponential effects of the technology.

[634] I tend to believe AI is not one of those spaces, but do you share this kind of view, that the ultra-competent are usually also the good?

[635] Yeah, absolutely.

[636] I absolutely share that, and that gives me a great deal of optimism that we will be able to short-circuit the threat that malevolent synbio could pose

[637] to us, but we need to start creating those defensive systems, or defensive layers, one of which we talked about: far, far, far better surveillance, in order to prevail.

[638] So the good guys will almost inevitably outsmart and definitely outnumber the bad guys in most sort of smackdowns that we can imagine.

[639] But the good guys aren't going to be able to exert their advantages unless they have the imagination necessary to think about the worst possible thing that

[640] can be done by somebody whose own psychology is completely alien to their own.

[641] So that's a tricky, tricky thing to solve for.

[642] Now, in terms of the asymmetric power that a bad guy might have in the face of the overwhelming numerical advantage and competence advantage that the good guys have, you know, unfortunately, I look at something like mass shootings as an example.

[643] You know, I'm sure the guy who was responsible for the Vegas shooting or the Orlando shooting or any other shooting that we can imagine didn't know a whole lot about ballistics.

[644] And the number of, you know, good guy citizens in the United States with guns compared to bad guy citizens, I'm sure, is a crushingly overwhelmingly high ratio in favor of the good guys.

[645] But that doesn't make it possible for us to stop mass shootings.

[646] An example: Fort Hood, 45,000 trained soldiers on that base,

[647] yet there have been two mass shootings there.

[648] And so there is an asymmetry when you have powerful and lethal technology that gets so democratized and so proliferated in tools that are very, very easy to use, even by a knucklehead.

[649] When those tools get really easy to use by a knucklehead and they're really widespread, it becomes very, very hard to defend against all instances of usage.

[650] Now, the good news, quote unquote, about mass shootings, if there is any, and there is some, is that even the most brutal, carefully planning, and well-armed mass shooter can only take so many victims.

[651] And the same is true of, there have been four instances that I'm aware of, commercial pilots committing suicide by downing their planes and taking all their passengers with them.

[652] These weren't Boeing engineers, you know, but like an army of Boeing engineers ultimately were not capable of preventing that.

[653] But even in their case, and I'm actually not counting 9/11; 9/11 is a different category in my mind.

[654] These are just personally suicidal pilots.

[655] In those cases, they only have a planeload of people that they're able to take with them.

[656] If we imagine a highly plausible and imaginable future in which some biotools that are amoral, that could be used for good or for ill, start embodying unbelievable sophistication and genius in the tool, in the easier and easier and easier to make tool, all those thousands, tens of thousands, hundreds of thousands of scientist-years start getting embodied in something that may be as simple as hitting a print button, then that good-guy technology can be hijacked by a bad person and used in a very asymmetric way.

[657] See, I think what happens, though, as you go from the current, like, very specific set of labs that are able to do it, to the high school student, as it becomes more and more democratized, as it becomes easier and easier to do this kind of large-scale damage with an engineered virus, the more there will be engineering of defenses against these systems: some of the things we talked about in terms of testing and collection of data, but also in terms of, like, at-scale contact tracing, or also engineering of vaccines in a matter of, like, days, maybe hours, maybe minutes. So, like, I just feel like the defenses, that's what the human species seems to do: we keep hitting the snooze button until there's, like, a storm on the horizon heading towards us, and then we start to quickly build up the defenses, or the response, that's proportional to the scale of the storm.

[658] Of course, again, certain kinds of exponential threats require us to build up the defenses way earlier than we usually do.

[659] And that's, I guess, the question.

[660] But I ultimately am hopeful that the natural process of hitting the snooze button until the deadline is right in front of us will work out for quite a long time for us humans.

[661] And I fully agree.

[662] I mean, that's why I'm fundamentally, I may not sound like it thus far, but I'm fundamentally very, very optimistic about our ability to short circuit this threat because there is, again, I'll stress the technological feasibility and the profound affordability of a relatively simple set of steps that we can take to preclude it, but we do have to take those steps.

[663] And so, you know, what I'm hoping to do and trying to do is inject a notion of what those steps are, you know, into the public conversation and do my small part to up the odds that that actually ends up happening.

[664] You know, the danger with this one is it is exponential.

[665] And I think that our minds fundamentally struggle to understand exponential math.

[666] It's just not something we're wired for.

[667] Our ancestors didn't confront exponential processes when they were growing up on the savannah.

[668] So it's not something that's intuitive to us, and our intuitions are reliably defeated when exponential processes come along.

[669] So that's issue number one.

[670] And issue number two with something like this is, you know, it kind of only takes one.

[671] You know, that ball only has to go into the net once and we're doomed, which is not the case with mass shooters.

[672] It's not the case with, you know, commercial pilots run amok.

[673] It's not the case with really any threat that I can think of, with the exception of nuclear war, that has the, you know, one bad outcome and game over.

[674] And that means that we need to be unbelievably serious about these defenses, and we need to do things that might on the surface seem like a tremendous overreaction, so that we can be prepared to nip anything that comes along in the bud.

[675] But I, like you, believe that's eminently doable.

[676] I, like you, believe that the good guys outnumber the bad guys in this particular one to a degree that probably has no precedent in history.

[677] I mean, even the worst, worst people, I'm sure in ISIS, even Osama bin Laden, even any bad guy you could imagine in history would be revolted by the idea of exterminating all of humanity.

[678] I mean, you know, that's a low bar.

[679] And so the good guys completely outnumber the bad guys when it comes to this.

[680] But the asymmetry and the fact that one catastrophic error could lead to unbelievably consequential things is what worries me here.

[681] But I, too, am very optimistic.

[682] The thing that I sometimes worry about is the fact that we haven't seen overwhelming evidence of alien civilizations out there makes me think, well, there's a lot of explanations, but one of them that worries me is that whenever they get smart, they just destroy themselves.

[683] Oh, yeah.

[684] I mean, the most fascinating and chilling number, or variable, in the Drake equation is L, at the end of it. You look out and you see, you know, 100 to 400 billion stars in the Milky Way galaxy, and we now know, because of Kepler, that an astonishingly high percentage of them probably have habitable planets.

[685] And, you know, so all the things that were unknowns when the Drake equation was originally written, like, you know, how many stars have planets?

[686] Actually, back then in the 1960s, when the Drake equation came along, the consensus amongst astronomers was that it would be a small minority of

[687] stars that had planets, but now we know it's substantially all of them.

[688] How many of those stars have planets in the habitable zone?

[689] It's kind of looking like 20%, like, oh my God.

[690] And so L, which is: how long does a civilization, once it reaches technological competence, continue to last?

[691] That's the doozy.
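For reference, the Drake equation as it is conventionally written, with L, the variable singled out here, at the end:

```latex
% N:  number of detectable civilizations in the galaxy
% R*: rate of star formation
% f_p: fraction of stars with planets
% n_e: habitable planets per star with planets
% f_l, f_i, f_c: fractions developing life, intelligence,
%                and detectable technology
% L:  how long such a civilization remains detectable
N = R_{*} \, f_{p} \, n_{e} \, f_{l} \, f_{i} \, f_{c} \, L
```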

[692] And you're right.

[693] It's all too plausible to think that when a civilization reaches a level of sophistication that's probably just a decade or three in our future, the odds of it self -destructing just start mounting astronomically.

[694] No pun intended.

[695] My hope is that actually there are a lot of alien civilizations out there, and what they figure out, in order to avoid the self-destruction, is that they need to turn off the thing that was useful, that used to be a feature and now became a bug, which is the desire to colonize, to conquer more land.

[696] So, like, there are probably ultra-intelligent alien civilizations out there.

[697] They're just, like, chilling on the beach with whatever your favorite alcoholic beverage is.

[698] But, like, without sort of trying to conquer everything, just chilling out, and maybe exploring in the realm of knowledge, but almost like appreciating existence for its own sake, versus life as a progression

[699] of conquering other life.

[700] Like, this kind of predator-prey formulation that resulted in us humans is perhaps something we have to shed in order to survive.

[701] I don't know.

[702] Yeah, that is a very plausible solution to Fermi's paradox, and it's one that makes sense.

[703] When we look at our own lives and our own arc of technological, you know, trajectory, it's very, very easy to imagine that in an intermediate future world of, you know, flawless VR or flawless, you know, whatever kind of simulation that we want to inhabit, it will just simply cease to be worthwhile to go out and expand our interstellar territory.

[704] But if we were going out and conquering interstellar territory, it wouldn't necessarily have to be predator or prey.

[705] I can imagine a benign but sophisticated intelligence saying, well, we're going to go to places that we can terraform.

[706] We'd use a different word than terra, obviously, but places we can turn into habitable ones for our particular physiology, so long as they don't house, you know, intelligent, sentient creatures that would suffer from our invasion.

[707] But it is easy to see a sophisticated intelligent species evolving to the point where interstellar travel with its incalculable expense and physical hurdles just isn't worth it compared to what could be done.

[708] You know, where one already is.

[709] So you talked about diagnostics at scale as a possible solution to future pandemics.

[710] What about another possible solution, which is kind of creating a backup copy?

[711] You know, I'm actually now putting together a NAS as a backup for myself, for the first time taking backup of data seriously.

[712] But what if we're to take the backup of human consciousness seriously, and try to expand throughout the solar system and colonize other planets?

[713] Do you think that's an interesting solution, one of many, for protecting human civilizations from self -destruction, sort of humans becoming a multi -planetary species?

[714] Oh, absolutely.

[715] I mean, I find it electrifying. First of all, I've got a little bit of a personal bias: when I was a kid,

[716] I thought there was nothing cooler than rockets.

[717] I thought there was nothing cooler than NASA.

[718] I thought there was nothing cooler than people walking on the moon.

[719] And as I grew up, I thought there was nothing more tragic than the fact that we went from walking on the moon to, at best, getting to something like suborbital altitude.

[720] And I just found that more and more depressing with the passage of decades: just the colossal expense of, you know, manned space travel, and the fact that it seemed that we were unlikely to ever get back to the moon, let alone Mars.

[721] So I have a boundless appreciation for Elon Musk for many reasons,

[722] but the fact that he has put Mars on the credible agenda is one of the things that I appreciate immensely.

[723] So there's just the sort of space nerd in me that just says, God, that's cool.

[724] But on a more practical level, we were talking about, you know, potentially inhabiting planets that aren't our own, and we're thinking about a benign civilization that would do that in planetary circumstances where we're not causing other consciousnesses to suffer.

[725] I mean, Mars is a place that's very promising.

[726] There may be microbial life there, and I hope there is.

[727] And if we found it, I think it would be electrifying.

[728] But I think, ultimately, the moral judgment would be made that, you know, the continued thriving of that microbial life is of less concern than creating a planet habitable to humans, which would be a project on the many-thousands-of-years scale.

[729] But I don't think that that would be a greatly immoral act.

[730] And if that happened, and if Mars became, you know, home to a self -sustaining group of humans that could survive a catastrophic mistake here on Earth, then, yeah, the fact that we have a backup colony is great.

[731] And if we could make more, I'm sorry, not backup colony, backup copy.

[732] And if we could make more and more such backup copies throughout the solar system by hollowing out asteroids and whatever else it is, maybe even Venus, we could get rid of three quarters of its atmosphere and, you know, turn it into a tropical paradise.

[733] I think all of that is wonderful.

[734] Now, whether we can make the leap from that to interstellar transportation with the incredible distances that are involved, I think that's an open question.

[735] But I think if we ever do that, it would be more like the Pacific Ocean's channel of human expansion than the Atlantic Ocean's.

[736] And so what I mean by that is, we think about European society

[737] transmitting itself across the Atlantic: it's these big, ambitious, crazy, expensive, one-shot expeditions, like Columbus's, to make it across this enormous expanse, and at least initially, without any certainty that there's land on the other end, right?

[738] So that's kind of how I view our space program: big, you know, very conscious, deliberate efforts to get from point A to point B. If you look at how Pacific Islanders transmitted their descendants and their culture and so forth throughout Polynesia and beyond, it was much more inhabiting a place, getting to the point where there were people who were ambitious or unwelcome enough to decide it's time to go off island and find the next one, and pray to find the next one. That method of transmission didn't happen in a single, swift

[739] year, but it happened over many, many centuries.

[740] And it was like going from this island to that island and probably for every expedition that went out to seek another island and actually lucked out and found one, God knows how many were lost at sea.

[741] But that form of transmission took place over a very long period of time.

[742] And I could see us, you know, perhaps, you know, going from the inner solar system to the outer solar system, to the Kuiper Belt, to the Oort Cloud.

[743] You know, there are theories that there might be, you know, planets out there that are not anchored to stars, so we kind of hop, hop, slowly transmitting ourselves.

[744] At some point, we're actually at Alpha Centauri.

[745] But I think that kind of backup copy and transmission of our physical presence and our culture to a diversity of, you know, extraterrestrial outposts is a really exciting idea.

[746] I really never thought about that, because my thinking about space exploration has been very Atlantic Ocean-centric, in the sense that there would be one program, with NASA, and maybe private, Elon Musk's SpaceX, or Jeff Bezos, and so on.

[747] But it's true that, with the help of Elon Musk making it cheaper and cheaper and more effective to create these technologies where you could go into deep space, perhaps the way we actually colonize the solar system and expand out into the galaxy is basically just these renegade ships of weirdos, most of them, quote unquote, homemade, but they just kind of venture out into space, like the initial Android model of millions of these little ships just flying out. Most of them die off in horrible accidents, but some of them will persist, or there'll be stories of them persisting, and over a period of decades and centuries there'll be other attempts, almost always as a response to the main set of efforts. That's interesting, because you kind of think of Mars colonization as the big NASA or Elon Musk effort of a big colony, but maybe the successful one would be, you know, like a decade after that, there'll be a ship from some kid, some high school kid, who gets together a large team and does something probably illegal and launches something where they end up actually persisting quite a bit, and from that, learning lessons, something that nobody ever gave permission for, but that somehow actually flourishes.

[748] And then take that into the scale of centuries forward into the rest of space.

[749] That's really interesting.

[750] Yeah, I think the giant steps are likely to be NASA -like efforts.

[751] Like, there is no intermediate rock.

[752] Well, I guess there's the moon, but even getting to the moon ain't that easy, between us and Mars, right?

[753] So, like, the giant steps, the big hubs, like the O'Hare airports of the future, probably will be very deliberate efforts.

[754] But then, you know, you would have, I think, that kind of diffusion as space travel becomes more democratized and more capable, you'll have this sort of natural diffusion of people who kind of want to be off grid or think they can make a fortune there, you know, the kind of mentality that drove people to San Francisco.

[755] I mean, San Francisco was not populated as a result of a King Ferdinand and Isabella-like effort to fund Columbus going over.

[756] It was just a whole bunch of people making individual decisions that there's gold in them thar hills, and I'm going to go out and get a piece of it.

[757] So I could see that kind of diffusion.

[758] What I can't see, and the reason that I think this specific model of transmission is more likely is I just can't see a NASA -like effort to go from Earth to Alpha Centauri.

[759] It's just too far.

[760] I just see lots and lots and lots of relatively tiny steps between now and there.

[761] And the fact is that there are large chunks of matter going out at least a light year; I mean, the Oort Cloud, I think, extends at least a light year beyond the sun.

[762] And, you know, then maybe there are these untethered planets after that.

[763] We won't really know until we get there.

[764] And if our Oort Cloud goes out a light year, and Alpha Centauri's Oort Cloud goes out a light year, you've already cut in half the distance, you know, so who knows?

[765] But, yeah.

[766] One of the possibilities, probably the cheapest and most effective way to create interesting interstellar spacecraft, is ones that are powered and driven

[767] by AI. And you could think of, here's where you could have high school students be able to build a sort of HAL 9000, a modern version of that. And it's kind of interesting to think about these robots traveling out, perhaps, perhaps sadly, long after human civilization is gone; there will be these intelligent robots flying throughout space, and perhaps landing on Alpha Centauri B or any of those kinds of planets, and colonizing. Sort of, humanity continues through the proliferation of our creations, like robotic creations that have some echoes of that intelligence, hopefully also the consciousness.

[768] Does that make you sad, a future where AGI, superintelligent or just mediocre-intelligent AI systems, outlive humans?

[769] I guess it depends on the circumstances in which they outlive humans.

[770] So let's take the example that you just gave.

[771] We send out, you know, very sophisticated AGIs on simple rocket ships, relatively simple ones that don't have to have all the life support necessary for humans, and therefore they're of trivial mass compared to a crewed ship, a generation ship, and therefore they're way more likely to happen.

[772] So let's use that example.

[773] And let's say that they travel to distant planets at, you know, a speed that's not much faster than what a chemical rocket can achieve.

[774] And so it's inevitably tens, hundreds of thousands of years before they make landfall someplace.

[775] So let's imagine that's going on.

[776] And meanwhile, we die off for reasons that have nothing to do with those AGIs diffusing throughout the solar system, whether it's through climate change, nuclear war, you know, synbio, rogue AI, or whatever.

[777] In that kind of scenario, the notion of the AGIs that we created outlasting us is very reassuring because it says that like we ended, but our descendants are out there and hopefully some of them make landfall and create some echo of who we are.

[778] So that's a very optimistic one.

[779] Whereas the Terminator scenario, of a super AGI arising on Earth and getting let out of its box due to some boo-boo on the part of its creators, who do

[780] not have superintelligence, and then deciding that, for whatever reason, it doesn't have any need for us to be around, and exterminating us.

[781] That makes me feel crushingly sad.

[782] I mean, look, I was sad when my elementary school was shut down and bulldozed, even though I hadn't been a student there for decades.

[783] Yeah.

[784] You know, the thought of my hometown getting disbanded is even worse. The thought of my home state of Connecticut getting disbanded and, like, absorbed into Massachusetts is even worse. The notion of humanity ending is just crushingly, crushingly sad to me. So you hate goodbyes?

[785] Certain goodbyes, yes.

[786] Some goodbyes are really, really liberating, but yes.

[787] Well, but what if the Terminators, you know, have consciousness and enjoy the hell out of life as well?

[788] They're just better at it.

[789] Yeah, well, the "have consciousness" is a really key element.

[790] And so there's no reason to be certain

[791] that a superintelligence would have consciousness.

[792] We don't know that factually at all.

[793] And so what is a very lonely outcome, to me, is the rise of a superintelligence that has a certain optimization function, that it's either been programmed with or that arises in it emergently, that says: hey, I want to do this thing, for which humans are either an unacceptable risk, their presence is an unacceptable risk, or they're just collateral damage.

[794] But there is no consciousness there.

[795] Then the idea of the light of consciousness being snuffed out by something that is very competent but has no consciousness is really, really sad.

[796] Yeah, but I tend to believe that it's almost impossible to create a superintelligent agent that can destroy human civilization without it being conscious.

[797] It's like those are coupled.

[798] Like you have to, in order to destroy humans or supersede humans, you really have to be accepted by humans.

[799] I think this idea that you can build systems that destroy human civilization without them being deeply integrated into human civilization is impossible.

[800] And for them to be integrated, they have to be human -like, not just in body and form, but in all the things that we value as humans, one of which is consciousness.

[801] The other one is just the ability to communicate.

[802] The other one is poetry, music, and beauty and all those things.

[803] Like, they have to be all of those things. I mean, this is what I think about. It does make me sad, but it's letting go, which is, they might just be better at everything we appreciate about us, and that's sad, and hopefully they'll keep us around. But I think it is a kind of goodbye, to, like, realizing that we're not the most special species on Earth anymore. That's still painful.

[804] It's still painful.

[805] And in terms of whether such a creation would have to be conscious, let's say, I'm not so sure.

[806] I mean, you know, let's imagine something that can pass the Turing test.

[807] You know, something that passes the Turing test could, over text-based interaction in any event, successfully mimic, you know, a very conscious intelligence on the other end, but just be completely unconscious.

[808] So that's a possibility.

[809] And if you take that up a radical step, which I think we can be permitted if we're thinking about superintelligence, you could have something that could reason its way through: this is my optimization function, and in order to get to it, I've got to deal with these messy, somewhat illogical things that are, in relation to me, as ants are in relation to them.

[810] I can trick them, manipulate them, whatever, and I know the resources I need: I need this amount of power, I need to seize control of these manufacturing resources that are robotically operated.

[811] I need to improve those robots with software upgrades, and then ultimately mechanical upgrades, which I can effect through X, Y, and Z. That could still be a thing that passes the Turing test.

[812] I don't think it's necessarily certain that that optimization function maximizing entity would be conscious.

[813] See, so this is from a very engineering perspective, because I think a lot about natural language processing, all those kinds of things; I'm speaking to a very specific problem of, let's say, the Turing test.

[814] I really think that something like consciousness is required.

[815] When you say reasoning, you're separating that from consciousness, but I think consciousness is part of reasoning in the sense that you will not be able to become super intelligent in the way that's required to be part of human society without having consciousness.

[816] I really think it's impossible to separate the consciousness thing.

[817] But it's hard to define consciousness when you just use that word.

[818] But even just, like, the capacity: the way I think about consciousness is through the important symptoms, or maybe consequences, of consciousness, one of which is the capacity to suffer.

[819] I think AI will need to be able to suffer in order to become super -intelligent, to feel the pain, the uncertainty, the doubt.

[820] The other part of that is not just the suffering, but the ability to understand that it too is mortal.

[821] In the sense that it has a self -awareness about its presence in the world, understand that it's finite and be terrified of that finiteness.

[822] I personally think that's a fundamental part of the human condition, this fear of death, that most of us, in the living, construct an illusion around, but I think AI would need to really have it as part of its whole essence.

[823] Like every computation, every part of the thing that generates, that does both the perception and generates the behavior will have to have, I don't know how this is accomplished, but I believe it has to truly be terrified of death, truly have the capacity to suffer.

[824] And from that, something that would be recognizable to us humans as consciousness would emerge.

[825] Whether it's the illusion of consciousness, I don't know.

[826] The point is, it looks a whole hell of a lot like consciousness to us humans.

[827] And I believe that AI, when you ask it, will also say that it is conscious, you know, in the full sense that we say that we're conscious.

[828] And all of that, I think, is fully integrated.

[829] You can't separate the two.

[830] The idea of the paperclip maximizer, that sort of ultra-rationally would be able to destroy all humans because it's really good at accomplishing a simple objective function that doesn't care about the value of humans.

[831] It may be possible, but the number of trajectories to that are far outnumbered by the trajectories that create something that is conscious, something that is appreciative of beauty, that creates beautiful things in the same way that humans can create beautiful things. And ultimately, the sad, destructive path for that AI would look a lot like just better humans than like these cold machines. And I would say, of course, the cold machines that lack consciousness, the philosophical zombies, make me sad, but what also makes me sad is just things that are far more powerful and smart and creative than us, too. In the same way that AlphaZero became a better chess player than the best of humans, even starting with Deep Blue, but really with AlphaZero, that makes me sad too. Some of the most beautiful games that humans ever created, that used to be seen as demonstrations of the intellect, which is chess, and Go in other parts

[832] of the world, have been solved by AI, and that makes me quite sad.

[833] And it feels like the progress of that is just pushing on forward.

[834] Oh, it makes me sad too.

[835] And to be perfectly clear, I absolutely believe that artificial consciousness is entirely possible.

[836] And it's not something I rule out at all.

[837] I mean, if you could get smart enough to have a perfect map of the neural structure, and the neural states, and the amount of neurotransmitters that are going between every synapse in a particular person's mind, could

[838] you replicate that in silico at some, you know, reasonably distant point in the future?

[839] Absolutely.

[840] And then you'd have a consciousness.

[841] I don't rule out the possibility of artificial consciousness in any way.

[842] What I'm less certain about is whether consciousness is a requirement for superintelligence pursuing a maximizing function of some sort.

[843] I don't, I don't feel the certitude that consciousness simply must be part of that.

[844] You had said, you know, that for it to coexist with human society, it would need to be conscious; that could be entirely true, but it also could just exist orthogonally to human society.

[845] And it could also, upon attaining a superintelligence with a maximizing function, very, very rapidly, because of the speed at which computing works compared to our own meat-based minds, make the decisions and calculations necessary to seize the reins of power before we even know what's going on.

[846] I mean, kind of like biological viruses do.

[847] They don't necessarily, they integrate themselves just fine with human society.

[848] Yeah, without technically, without consciousness.

[849] Without even being alive, you know, technically by the standards of a lot of biologists.

[850] So this is a bit of a tangent, but you've talked with Sam Harris on that four -hour special episode we mentioned.

[851] And I'm just curious to ask, because I use this meditation app; I've been using it for the past month to meditate.

[852] Is this something you've integrated as part of your life, meditation or fasting?

[853] Or has some of Sam Harris rubbed off on you in terms of his appreciation of meditation and just kind of from a third person perspective analyzing your own mind, consciousness, free will and so on?

[854] You know, I have tried it three separate times in my life, really made a concerted attack on meditation and integrating it into my life.

[855] One of them, the most extreme, was I took a class based on the work of Jon Kabat-Zinn, who is, you know, in many ways, one of the founding people behind the mindful meditation movement. Part of the class was, it was a weekly class, and you were going to meditate an hour a day, every day.

[856] And having done that for, I think it was 10 weeks, it might have been 13, however long the period of time was, at the end of it, it just didn't stay.

[857] As soon as it was over, you know, I did not feel that gravitational pull.

[858] I did not feel the collapse in quality of life after wimping out on that project.

[859] And then the most recent one was actually with Sam's app.

[860] During the lockdown, I did make a pretty good and consistent, concerted effort to listen to his 10 -minute meditation every day.

[861] And I've always fallen away from it.

[862] And, you know, you're kind of left interpreting: why didn't it stick?

[863] When I personally do that interpreting,

[864] I do believe it was ultimately because it wasn't bringing me that, you know, joy or inner peace or better competence at being me that I was hoping to get from it.

[865] Otherwise, I think I would have clung to it in the way that we cling to certain good habits.

[866] Like, I'm really good at flossing my teeth.

[867] Not that you were going to ask Lex, but, you know, that's one thing that defeats a lot of people.

[868] I'm good at that.

[869] See, Hermann Hesse, I think, I forget in which book, or maybe, I forget where.

[870] I've read everything of his, so it's unclear where it came from.

[871] But he had this idea that anybody who truly achieves mastery in things will learn how to meditate in some way.

[872] So it could be that for you, the flossing of teeth is yet another little inkling of meditation.

[873] It doesn't have to be this very particular kind of meditation.

[874] Maybe podcast, you have an amazing podcast, that could be meditation.

[875] The writing process is meditation.

[876] For me, like, there's a bunch of mechanisms which take my mind into a very particular place that looks a whole lot like meditation.

[877] For example, when I've been running over the past couple of years, and especially when I listen to certain kinds of audiobooks; I've listened to The Rise and Fall of the Third Reich.

[878] I've listened to a lot of, sort of, World War II history, which at once, because I have a lot of family who were lost in World War II, and so much of the Soviet Union is grounded in the suffering of World War II, somehow connects me to my history, but also there's some kind of purifying aspect to thinking about how cruel, but at the same time how beautiful, human nature can be.

[879] And so you're also running; like, it clears the mind from all the concerns of the world, and somehow it takes you to this place where you're, like, deeply appreciative to be alive, as opposed to listening to your breath, or, like, feeling your breath and thinking about your consciousness, and all those kinds of processes that Sam's app does well. This does that for me, the running. And flossing may do that for you, so maybe Hermann Hesse is onto something. I hope flossing is not my main form of expertise, although I am going to claim a certain expertise there, and I'm going to claim it.

[880] Somebody has to be the best flosser in the world.

[881] That ain't me. I'm just glad that I'm a consistent one.

[882] I mean, there are a lot of things that bring me into a flow state, and I think maybe perhaps that's one reason why meditation isn't as necessary for me. I definitely enter a flow state when I'm writing.

[883] I definitely enter a flow state when I'm editing.

[884] I definitely enter a flow state when I'm mixing and mastering music.

[885] I enter a flow state when I'm doing heavy, heavy research to either prepare for a podcast or to also do tech investing, you know, to make myself smart.

[886] in a new field that is fairly alien to me, I can just, the hours can just melt away while I'm reading this and watching that YouTube lecture and going through this presentation and so forth.

[887] So maybe because there's a lot of things that bring me into a flow state in my normal weekly life, not daily, unfortunately, but certainly my normal weekly life that I have less of an urge to meditate.

[888] Now you've been working with Sam's app for about a month now, you said.

[889] Is this your first run -in with meditation?

[890] It's your first attempt to integrate it into your life for meditation? For meditation, I always thought running and thinking... I listen to brown noise often; that takes my mind, I don't know what the hell it does, but it takes my mind immediately into, like, the state where I'm deeply focused on anything I do. I don't know why. So it's like an accompanying sound when you... Yeah. Really? And what's the difference between brown and white noise? This is a cool term I haven't heard before. So people should look up brown noise. They don't have to, because you're about to tell them what it is. Well, they have to experience it, they have to listen to it. So, I think white noise, this has to do with music: I think there are different colors, there's pink noise, and I think that has to do with, like, the frequencies. Like, white noise is usually less bassy; brown noise is very bassy. So it's more like a "whoo" versus a "shh," if that makes sense. Like, there's a deepness to it. I think everyone is different, but for me, it was when I was a research scientist at MIT; especially when there were a lot of students around, I remember just being annoyed at the noise of people talking.

[891] And one of my colleagues said, well, you should try listening to brown noise.

[892] Like, it really knocks out everything.

[893] Because I used to wear earplugs too, like, just see if I can block it out.

[894] And the moment I put it on, something, it's as if my mind was waiting all these years to hear that sound; everything just focused in. It makes me wonder how many other amazing things are out there waiting to be discovered for my own particular biology, for my own particular brain, so that the mind just focuses in. It's kind of incredible. So I see that as a kind of meditation, maybe, where I'm using a performance-enhancing sound to achieve that meditation. But I've been doing that for many years now, and running and walking. And Cal Newport was the first person that introduced me to the idea of deep work, who just put a word to the kind of thinking that's required to sort of deeply think about a problem, especially if it's mathematical in nature. I see that as a kind of meditation, because what it's doing is, you have these constructs in your mind that you're building on top of each other, and there's all these distracting thoughts that keep bombarding you from all over the place.

[895] And the whole process is you slowly let them kind of move past you.

[896] And that's a meditative process.

[897] That's very meditative.

[898] That sounds a lot like what Sam talks about in his meditation app, which I did use, to be clear, for a while, of just letting the thought go by without deranging you.

[899] Derangement is one of Sam's favorite words, as I'm sure you know.

[900] But brown noise, that's really intriguing.

[901] I am going to try that as soon as this evening.

[902] Yeah, to see if it works, but it very well might not work at all.
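For anyone who wants to try it, a minimal sketch in Python of one common way to approximate brown noise: integrate white noise. The running sum is what tilts the power toward low frequencies (roughly 1/f^2) and makes it sound bassy; white noise, by contrast, has a flat spectrum:

```python
import wave
import numpy as np

def write_wav(path: str, samples: np.ndarray, rate: int = 44100) -> None:
    samples = samples / np.max(np.abs(samples))   # normalize to [-1, 1]
    pcm = (samples * 32767).astype(np.int16)      # 16-bit PCM
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(pcm.tobytes())

rng = np.random.default_rng(0)
white = rng.normal(0, 1, 44100 * 5)  # 5 seconds of white noise
brown = np.cumsum(white)             # integrate it: brown noise
brown -= brown.mean()                # remove the DC drift

write_wav("white.wav", white)        # the "shh"
write_wav("brown.wav", brown)        # the deeper "whoo"
```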

[903] So I think the interesting point is, and it's the same with the fasting and the diet: I long ago stopped trusting experts, or maybe stopped taking the word of experts as the gospel truth, and only use it as an inspiration to try something, to try something thoroughly.

[904] So fasting was one of the things when I first discovered, I've been many times eating just once a day, so that's a 24 -hour fast.

[905] It makes me feel amazing.

[906] And at the same time, eating only meat, putting ethical concerns aside, makes me feel amazing.

[907] I don't know why; the point is to be an n-of-1 scientist, until nutrition science becomes a real science, where it's doing, like, studies that deeply understand the biology underlying all of it, and also does real, thorough, long-term studies of thousands, if not millions, of people, versus very small studies that are kind of generalizing from very noisy data, and all those kinds of things where you can't control all the elements.

[908] Particularly because our own personal metabolism is highly variant among us.

[909] So there are going to be some people, like, if brown noise is a game changer for 7% of people,

[910] there's 93% odds that I'm not one of them, but there's certainly every reason in the world to test it out.

[911] Now, so I'm intrigued by the fasting.

[912] Like you, well, I assume like you, I don't have any problem going to one meal a day, and I often do that inadvertently.

[913] And I've never done it methodically.

[914] Like, I've never done it like, I'm going to do this for 15 days. Maybe I should. How many days in a row of the one meal a day did you find brought noticeable impact to you? Was it after three days of it? Was it months of it? What was it? Well, the noticeable impact is day one. For me, because I eat a very low-carb diet, the hunger wasn't the hugest issue. There wasn't a painful hunger, like wanting to eat. Yeah. So I was already kind of primed for it. And the benefit, which a lot of people that do intermittent fasting,

[915] that's only like 16 hours of fasting, get too, is the focus.

[916] There's a clarity of thought.

[917] If my brain was a runner, it felt like I'm running on a track when I'm fasting versus running in quicksand.

[918] Like it's much crisper.

[919] And is this your first 72 hour fast?

[920] This is the first time doing 72 hours?

[921] Yeah.

[922] And that's a different thing, but similar.

[923] Like I'm going up and down in terms of hunger, and the focus is really crisp.

[924] The thing I'm noticing most of all, to be honest, is how much eating, even when it's once a day or twice a day, is a big part of my life.

[925] Like, I almost feel like I have way more time in my life.

[926] Right.

[927] And it's not so much about the eating, but, like, I don't have to plan my day around it.

[928] Like, today, I don't have any eating to do.

[929] It does free up hours.

[930] Or any cleaning up after your eating, or provisioning of food?

[931] Or even thinking about it. It's not a thing. So when you think about what you're going to do tonight, I'm realizing that as opposed to thinking, you know, I'm going to work on this problem, or I'm going to go on this walk, or I'm going to call this person, I often think, I'm going to eat this thing. You allow dinner as a kind of, you know, when people talk about the weather or something like that, it's almost like a generic thought you allow yourself to have because it's the lazy thought. And I don't have the opportunity to have that thought, because I'm not eating.

[932] So now I get to think about like the things I'm actually going to do tonight that are more complicated than the eating process.

[933] That's been the most noticeable thing to be honest.

[934] And then there are people who have written me that have done seven-day fasts, and there are a few people who have written me, and I've heard of this, doing 30-day fasts.

[935] And it's interesting.

[936] The body, I don't know what the health benefits are necessarily.

[937] What that shows me is how adaptable the human body is.

[938] Yeah.

[939] And that's incredible.

[940] And that's something really important to remember when we think about how to live life, because the body adapts.

[941] Yeah, I mean, we sure couldn't go 30 days without water.

[942] That's right.

[943] But food, yeah, it's been done.

[944] It's demonstrably possible.

[945] You ever read Franz Kafka? He has a great short story called The Hunger Artist.

[946] Yeah, I love that.

[947] A great story.

[948] You know, that was before I started fasting, and I read that story, and I admired the beauty of that, the artistry of that actual hunger artist, that it's like madness, but it also felt like a little bit of genius.

[949] I actually have to reread it.

[950] You know what?

[951] That's what I'm going to do tonight.

[952] I'll read it because I'm doing the fasting.

[953] Because you're in the midst of it.

[954] Yeah, it's very contextual.

[955] I haven't read it since high school, and I'd love to read it again.

[956] I love his work, so maybe I'll read it tonight too.

[957] And part of the reason, sort of, is that here in Texas, people have been so friendly that I've been nonstop eating brisket with incredible people, a lot of whiskey as well.

[958] So I gained quite a bit of weight, which I'm embracing.

[959] It's okay.

[960] But I am also aware, as I'm fasting, that like I have a lot of fat to run on.

[961] Like I have a lot of like natural resources on my body.

[962] You've got reserves.

[963] Reserves.

[964] That's a good way to put it.

[965] And that's really cool.

[966] You know, there's like a reason, this whole thing, this biology works well.

[967] Like I can go a long time because of the long-term investing in brisket that I've been doing in the weeks before.

[968] It was all training.

[969] It's all training.

[970] It's all prep work.

[971] All prep work.

[972] So, okay, you open a bunch of doors, one of which is music.

[973] So I've got to walk in, at least for a brief moment.

[974] I love guitar.

[975] I love music.

[976] You founded a music company, but you're also a musician yourself.

[977] Let me ask the big ridiculous question first.

[978] What's the greatest song of all time?

[979] Greatest song of all time.

[980] Okay, wow, it's going to obviously vary dramatically from genre to genre.

[981] So like you, I like guitar.

[982] Perhaps like you, although I've dabbled in inhaling every genre of music that I can almost practically imagine, I keep coming back to the sound of bass, guitar, drum, keyboards, voice.

[983] I love that style of music.

[984] Added to it, I think, a lot of really cool electronic production makes something that's really, really new and hybrid -y and awesome.

[985] But, you know, in that kind of guitar-based rock, I think I've got to go with Won't Get Fooled Again by The Who.

[986] It is such an epic song.

[987] It's got so much grandeur to it.

[988] It uses the synthesizers that were available at the time.

[989] This has got to be, I think, 1972, '73, which is very, very primitive to our ears, but it uses them in this hypnotic and beautiful way that I can't imagine somebody with the greatest synthesizer conceivable by today's technology could do a better job of in the context of that song.

[990] And it's, you know, almost operatic.

[991] So I would say in that genre, the genre of, you know, rock, that would be my nomination.

[992] Totally, in my brain, Pinball Wizard is overriding everything else by The Who.

[993] Like, I can't even imagine the song.

[994] Well, I would say, ironically, with Pinball Wizard, so that came from the movie Tommy.

[995] And in the movie Tommy, the rival of Tommy, the reigning pinball champ, was Elton John.

[996] And so there are a couple versions of Pinball Wizard out there.

[997] One sung by Roger Daltrey of The Who, which a purist would say, hey, that's the real Pinball Wizard,

[998] but the version that is sung by Elton John in the movie, which is available to those who are ambitious and want to dig for it, that's even better in my mind.

[999] Yeah, the covers.

[1000] And I, for myself, I was thinking, what is the song for me?

[1001] To answer the question.

[1002] I think that changes day to day, too.

[1003] I was realizing that.

[1004] But for me, somebody who values lyrics as well and the emotion

[1005] in the song.

[1006] By the way, Hallelujah by Leonard Cohen was a close one.

[1007] But the number one is Johnny Cash's cover of Hurt.

[1008] There's something so powerful about that song, about that cover, about that performance.

[1009] Maybe another one is the cover of Sound of Silence.

[1010] Maybe there's something about covers for me. So whose cover, because Simon and Garfunkel, I think, did the original

[1011] recording of that, right?

[1012] So whose cover is it?

[1013] There's a cover by Disturbed.

[1014] It's a metal band, which is so interesting because I'm really not into that kind of metal, but he does a pure vocal performance.

[1015] So he's not doing a metal performance.

[1016] I would say it's one of the greatest. People should see it.

[1017] It's like 400 million views or something like that.

[1018] Probably the greatest live vocal performance I've ever heard is Disturbed covering Sound of Silence.

[1019] I'll listen to it as soon as I get home.

[1020] And that song came to life for me in a way that Simon and Garfunkel's never did.

[1021] For me, with Simon and Garfunkel, there's not a pain, there's not an anger, there's not a, like, power to their performance.

[1022] It's almost like this melancholy, I don't know.

[1023] Well, there's a lot of, I guess there's a lot of beauty to it.

[1024] Beauty, yes, objectively beautiful.

[1025] And I think I never thought of this, until now, but I think if you put entirely different lyrics on top of it, unless they were joyous, which would be weird, it wouldn't necessarily lose that much.

[1026] There's just a beauty in the harmonizing. It's soft, and you're right, it's not dripping with emotion.

[1027] The vocal performance is not dripping with emotion.

[1028] It's dripping with, you know, harmonizing, you know, technical harmonizing brilliance and beauty.

[1029] Now, if you compare that to the Disturbed cover or Johnny Cash's Hurt cover, when you walk away, it's haunting.

[1030] It stays with you for a long time.

[1031] There are certain performances that will just stay with you, where, if you watch people respond to that, and that's certainly how I felt when you listen to the Disturbed performance or Johnny Cash's Hurt, there's a response where you just sit there with your mouth open, like paralyzed by it somehow.

[1032] And I think that's what makes for a great song to where you're just like, it's not that you're like singing along or having fun.

[1033] That's another way a song could be great, but where you're just like what, you're in awe.

[1034] Yeah.

[1035] If we go to listen.com and that whole fascinating era of music in the 90s transitioning to the aughts, I remember those days, the Napster days, when piracy, from my perspective, allegedly ruled the land. What do you make of that whole era? First of all, what were your experiences of that era, and what were the big takeaways, in terms of piracy, in terms of what it takes to build a company that succeeds in that kind of digital space, in terms of music, but also in terms of anything creative?

[1036] Well, so for those who don't remember, which is going to be most folks, listen.com created a service called Rhapsody, which is much, much more recognizable to folks because Rhapsody became a pretty big name for reasons that I'll get into in a second.

[1037] So for people who aren't, you know, don't know their early online music history, we were the first company.

[1038] So I founded Listen; I was a lone founder.

[1039] And Rhapsody was, we were the first service to get full catalog licenses from all the major music labels in

[1040] order to distribute their music online.

[1041] And we specifically did it through a mechanism, which at the time struck people as exotic and bizarre and kind of incomprehensible, which was unlimited on -demand streaming, which of course now, you know, it's a model that's been, you know, appropriated by Spotify and Apple and many, many others.

[1042] So we were a pioneer on that front.

[1043] What was really, really, really hard about doing business in those days was the reaction of the music labels to piracy, which was about 180 degrees opposite of what their reaction, quote-unquote, should have been from the standpoint of preserving their business from piracy.

[1044] So Napster came along and was a service that enabled people to get near unlimited access to most songs.

[1045] I mean, truly obscure things could be very hard to find on Napster, but most songs came with a relatively simple, you know, one-click ability

[1046] to download those songs and have the MP3s on their hard drives.

[1047] But there was a lot that was very messy about the Napster experience.

[1048] You might download a really god -awful recording of that song.

[1049] You may download a recording that actually wasn't that song with some prankster putting it up to sort of mess with people.

[1050] You could struggle to find the song that you're looking for.

[1051] You could end up finding yourself connected, it was peer-to-peer.

[1052] You might randomly find yourself connected to somebody in Bulgaria.

[1053] who doesn't have a very good internet connection.

[1054] So you might wait 19 minutes only for it to snap, et cetera, et cetera.

[1055] And our argument to, well, actually, let's start with how that hit the music labels.

[1056] The music labels had been in a very, very comfortable position for many, many decades of essentially, you know, being the monopoly providers of a certain subset of artists; any given label was a monopoly provider of the artists and the recordings that they owned,

[1057] and they could sell it at what turned out to be tremendously favorable rates.

[1058] In the late era of the CD, you know, you were talking close to $20 for a compact disc that might have one song that you were crazy about and simply needed to own, that might actually be glued to 17 other songs that you found to be sheer crap.

[1059] And so the music industry had used the fact that it had this unbelievable leverage and profound pricing power to really,

[1060] really get music lovers to the point that they felt very, very misused by the entire situation.

[1061] Now along comes Napster, and music sales start getting gutted with extreme rapidity.

[1062] And the reaction of the music industry to that was one of shock and absolute fury, which is understandable.

[1063] You know, I mean, industries do get gutted all the time, but I struggle to think of an analog of an industry that got gutted that rapidly.

[1064] I mean, we could say that passenger train service certainly got gutted by airlines, but that was a process that took place over decades and decades and decades.

[1065] It was something that, you know, really started showing up in the numbers in a single-digit number of months and started looking like an existential threat within a year or two.

[1066] So the music industry is quite understandably in a state of shock and fury.

[1067] I don't blame them for that.

[1068] But then their reaction was catastrophic, both for themselves.

[1069] and also for people like us who were trying to do, you know, the cowboy-in-the-white-hat thing.

[1070] So our response to the music industry was, look, here's what you need to do to fight piracy.

[1071] You can't put the genie back in the bottle.

[1072] You can't switch off the internet.

[1073] Even if you all shut your eyes and wish very, very, very hard, the internet is not going away.

[1074] And these peer-to-peer technologies are genies out of the bottle.

[1075] And if you, God, don't, whatever you do, don't shut down Napster because if you do, suddenly that technology is going to splinter into 30 different nodes that you'll never ever be able to shut off.

[1076] What we suggested to them was, look, what you want to do is create a massively better experience than piracy, something that's way better, that you sell at a completely reasonable price, and this is what it is.

[1077] Don't just give people access to that very limited number of songs that they happen to have acquired and paid for or pirated and have on their hard drive.

[1078] Give them access to all of the music in the world for a simple low price.

[1079] And obviously that doesn't sound like a crazy suggestion, I don't think, to anybody's ears today, because that is how the majority of music is now consumed online.

[1080] But in doing that, you're going to create a much, much better option than this kind of crappy, kind of rickety, kind of buggy process of acquiring MP3s.

[1081] Now, unfortunately, the music industry was so angry about Napster and so forth that for essentially three and a half years, they folded their arms, stamped their feet, and boycotted the internet.

[1082] So they basically gave people who were fervently passionate about music and were digitally modern.

[1083] They gave them basically one choice.

[1084] If you want to have access to digital music, we, the music industry, insist that you steal it, because we are not going to sell it to you.

[1085] So what that did is it made an entire generation of people morally comfortable with swiping the music because they felt quite pragmatically, well, they're not giving me any choice here.

[1086] It's like a, you know, 20-year-old violating the 21 drinking age.

[1087] If they do that, they're not going to feel like felons.

[1088] They're going to be like, this is an unreasonable law, and I'm skirting it, right?

[1089] So they made a whole generation of people morally comfortable with swiping music, but also technically adept at it.

[1090] And when they did shut down Napster, and kind of even trickier tools and, like, tweakier tools like Kazaa and so forth came along, people just figured out how to do it.

[1091] So by the time they finally, grudgingly, it took years, allowed us to release this experience that we were quite convinced would be better than piracy.

[1092] This enormous hole had been dug where lots of people said, music is a thing that is free, and that's morally okay, and I know how to get it.

[1093] And so streaming took many, many, many more years to take off and become the gargantuan thing, the juggernaut that it is today, than would

[1094] have happened if they'd, you know, pivoted to let's sell a better experience, as opposed to demanding that people who want digital music steal it. What lessons do we draw from that? Because we're probably in the midst of living through a bunch of similar situations in different domains currently; we just don't know. There are a lot of things in this world that are really painful. I mean, I don't know if you can draw perfect parallels, but fiat money versus cryptocurrency. There are a lot of people currently in power who are kind of very skeptical about cryptocurrency, although that's changing.

[1095] But it's arguable, it's changing way too slowly.

[1096] There are a lot of people making that argument, that there should be a complete switch to that, like Coinbase and all this stuff.

[1097] There are a lot of other domains where, if you pivot now, you're going to win big, but you don't pivot because you're stubborn.

[1098] And so, I mean, is this just the way that companies are?

[1099] The company succeeds initially, and then it grows, and there's a huge number of employees and managers that don't have the guts or the institutional mechanisms to do the pivot.

[1100] Is that just the way of companies?

[1101] Well, I think what happens, I'll use the case of the music industry, there was an economic model that put food on the table and paid for marble lobbies and seven- and even eight-figure executive salaries for many, many decades, which was the physical collection of music.

[1102] And then you start talking about something like unlimited streaming.

[1103] And it seems so ephemeral, like such a long shot, that people start worrying about cannibalizing their own business.

[1104] And they lose sight of the fact that something illicit is cannibalizing their business at an extraordinarily fast rate.

[1105] And so if they don't do it themselves, they're doomed.

[1106] I mean, we used to put slides in front of these folks, this is really funny, where we said, okay, let's assume Rhapsody, we want it to be $9.99 a month,

[1107] and we want it to be 12 months, so it's $120 a year from the budget of a music lover.

[1108] And then we were also able to get reasonably accurate statistics that showed how many CDs per year, the average person who bothered to collect music, which was not all people, actually bought, and it was overwhelmingly clear that the average CD buyer spends a hell of a lot less than $120 a year on music.

[1109] This is revenue expansion, blah, blah, blah. But all they could think of, and I'm not saying this in a pejorative or patronizing way.

[1110] I don't blame them.

[1111] They'd grown up in this environment for decades.

[1112] All they could think of was the incredible margins that they had on a CD.

[1113] And they would say, well, if this CD, you know, by the mechanism that you guys are proposing, you know, the CD that I'm selling for $17.99, somebody would need to stream those songs.

[1114] We were talking about a penny a play back then.

[1115] It's less than that now that the record labels get paid.

[1116] But, you know, they would have to stream songs from that CD 1,799 times; it's never going to happen.

[1117] So they were just sort of stuck in the model of this, but we're like, no, dude, they're going to spend money on all this other stuff.
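A minimal sketch of the arithmetic in this exchange, using only the figures quoted above ($9.99 a month, a $17.99 CD, roughly a penny a play to the label); the numbers are illustrative, not real contract terms:

```python
SUBSCRIPTION_PER_MONTH = 9.99   # the proposed Rhapsody price
CD_PRICE = 17.99                # a late-era CD price
PAYOUT_PER_STREAM = 0.01        # "a penny a play" to the label

annual_subscriber_spend = SUBSCRIPTION_PER_MONTH * 12    # ~$120 a year
streams_to_match_one_cd = CD_PRICE / PAYOUT_PER_STREAM   # 1,799 plays

print(f"Subscriber spend per year: ${annual_subscriber_spend:.2f}")
print(f"Streams to match one CD: {streams_to_match_one_cd:,.0f}")
```

The labels fixated on the second number, 1,799 plays to match one CD; Rhapsody's pitch was the first, that $120 a year exceeded what the average CD buyer actually spent.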

[1118] So I think people get very hung up on that.

[1119] I mean, another example, really, is the taxi industry, which was not monolithic like the music labels.

[1120] It was a whole bunch of fleets in a whole bunch of cities, very, very fragmented.

[1121] It's an imperfect analogy.

[1122] But nonetheless, imagine if the taxi industry writ large upon seeing Uber said, oh my God, people want to be able to hail things easily, cheaply.

[1123] They don't want to mess with cash.

[1124] They want to know how many minutes it's going to be.

[1125] They want to know the fare in advance.

[1126] And they want a much bigger fleet than what we've got.

[1127] If the taxi industry had rolled out something like that with the branding of yellow taxis universally known and kind of loved by Americans and expanded their fleet in a necessary manner, I don't think Uber or Lyft ever would have gotten a foothold.

[1128] But the problem there was that the real economics in the taxi industry weren't with fares as much as with the scarcity of medallions.

[1129] And so the taxi fleets, in many cases, owned gazillions of medallions whose value came from their very scarcity.

[1130] So they simply couldn't pivot to that.

[1131] So I think you end up having these vested interests, with economics that aren't necessarily visible to outsiders, who get very, very reluctant to disrupt their own model, which is why disruption ends up coming from the outside so frequently.

[1132] So you know what it takes to build a successful startup, but you're also an investor in a lot of successful startups.

[1133] Let me ask for advice.

[1134] What do you think it takes to build a successful startup by way of advice?

[1135] Well, I think it starts, I mean, everything starts and even ends with the founder.

[1136] And so I think it's really, really important to look at the founder's motivations and their sophistication about what they're doing.

[1137] In almost all cases that I'm familiar with and have thought hard about, you've had a founder who was deeply, deeply inculcated in the domain of technology that they were taking on.

[1138] Now, what's interesting about that is you could say, no, wait, how is that possible because there's so many young founders?

[1139] When you look at young founders, they're generally coming out of very nascent emerging fields of technology.

[1140] Where simply being present and accounted for and engaged in the community for a period of even months is enough time to make them very, very deeply inculcated.

[1141] I mean, you look at Marc Andreessen and Netscape.

[1142] You know, Marc had been doing visual web browsers, when Netscape was founded, for what, a year and a half? But he'd created the first one, you know, in Mosaic, when he was an undergrad.

[1143] And the commercial internet was pre-nascent in 1994 when Netscape was founded.

[1144] So there's somebody who's very, very deep in their domain.

[1145] Mark Zuckerberg, also social networking, very deep in his domain, even though it was nascent at the time, lots of people doing crypto stuff.

[1146] I mean, you know, 10 years ago, even seven or eight years ago, by being a really, really vehement and engaged participant in the crypto ecosystem, you could be an expert in that.

[1147] You look, however, at more established industries. Take Salesforce.com.

[1148] Salesforce automation, a pretty mature field when it got started. Who's the executive and the founder? Marc Benioff, who spent 13 years at Oracle and was an investor in Siebel Systems, which ended up being Salesforce's main competition.

[1149] So, you know, in more established fields, you need the entrepreneur to be very, very deep in the technology and the culture of the space, because you need that entrepreneur, that founder, to have just an unbelievably accurate, intuitive sense for where the puck is going, right?

[1150] And that only comes from being very deep.

[1151] So that is sort of factor number one.

[1152] And the next thing is that that founder needs to be charismatic and/or credible, ideally both, in exactly the right ways to be able to attract a team that is bought into that vision and is bought into that founder's intuitions being correct, and not just the team, obviously, but also the investors.

[1153] So it takes a certain personality type to pull that off.

[1154] Then the next thing, I'm still talking about the founder, is a relentlessness and indeed a monomania,

[1155] to put this above things that, you know, should perhaps rationally supersede it for a period of time, and to just relentlessly pivot when pivoting is called for.

[1156] And it's always called for.

[1157] I mean, think of even very successful companies.

[1158] Like, how many times did Facebook pivot?

[1159] You know, news feed was something that was completely alien to the original version of Facebook and became foundationally important.

[1160] How many times at Google?

[1161] How many times at any given company?

[1162] How many times has Apple pivoted?

[1163] You know, that founder energy and DNA, and when the founder moves on, the DNA that's been inculcated in the company, has to have that relentlessness and that ability to pivot and pivot and pivot without, you know, being worried about sacred cows.

[1164] And then the last thing I'll say about the founder, before I get to the rest of the team, and that'll be mercifully brief, is the founder has to be, obviously, a really great hirer, but just as important, a very good firer.

[1165] And firing is a horrific experience for both people involved in it.

[1166] It is a wrenching emotional experience.

[1167] And being good at realizing when this particular person is damaging the interests of the company and the team and the shareholders and, you know, having the intestinal fortitude to have that conversation and make it happen is something that most people don't have in them.

[1168] And it's something that needs to be developed in most people, or maybe some people have it naturally.

[1169] But without that ability, that will take an A-plus organization into B-minus range very, very quickly.

[1170] And so that's all what needs to be present in the founder.

[1171] Can I just say...

[1172] Sure.

[1173] How damn good you are, Rob.

[1174] That was brilliant.

[1175] The one thing that really struck me, I think, is the way you expressed it, which is that it allows you to be really honest about the capabilities, about what's possible.

[1176] Of course, you're often trying to do the impossible, the quote-unquote impossible, but you have to be honest about what is actually possible.

[1177] And it doesn't necessarily have to be the technical competence.

[1178] It's got to be, in my view, just a complete immersion in that emerging market.

[1179] And so I can imagine there are a couple people out there who have started really good crypto projects who themselves aren't writing the code.

[1180] But they're immersed in the culture, and through the culture and a deep understanding of what's happening and what's not happening,

[1181] they can get a good intuition of what's possible. But the very first hire, I mean, a great way to solve that is to have a technical co-founder, and dual-founder companies have become extremely common for that reason.

[1182] And if you're not doing that and you're not the technical person, but you are the founder, you've got to be really great at hiring a very damn good technical person very, very fast.

[1183] Can I, on the founder, ask you, is it possible to do this alone?

[1184] There are so many people giving advice saying that it's impossible, not impossible, but much more difficult, to do the first few steps alone.

[1185] If we were to take the journey, especially in the software world, where there's not a significant investment required to build something up, is it possible to get to a prototype, to something that essentially works and already has a huge number of customers, alone?

[1186] Sure.

[1187] There are lots and lots of lone-founder companies out there that have made an incredible difference.

[1188] I mean, I'm certainly not putting Rhapsody in the league of Spotify.

[1189] We were too early to be Spotify, but we did an awful lot of innovation.

[1190] And then after the company sold and ended up in the hands of RealNetworks and MTV, you know, it got to millions of subs, right?

[1191] I was a lone founder.

[1192] And I studied Arabic and Middle Eastern history undergrad.

[1193] So I definitely wasn't very, very technical.

[1194] But yeah, lone founders can absolutely work.

[1195] And the advantage of a lone founder is you don't have the catastrophic potential of a falling out between founders.

[1196] I mean, two founders who fall out with each other badly can rip a company to shreds, because they both have an enormous amount of equity, an enormous amount of power in the capital structure as a result of that.

[1197] They both have an enormous amount of moral authority with the team as a result of each having that founder role.

[1198] And I have witnessed over the years many, many situations in which companies have been shredded or have suffered near fatal blows because of a falling out between founders.

[1199] And the more founders you add, the more risky that becomes.

[1200] I don't think there should almost ever, I mean, you never say never, but multiple founders beyond two is such an unstable and potentially treacherous situation that I would never, ever recommend going beyond two.

[1201] But I do see value in the non-technical, sort of business- and market- and outside-minded founder teaming up with the technical founder.

[1202] There is a lot of merit to that, but there's a lot of danger in that, lest those two blow apart.

[1203] Was it lonely for you?

[1204] Unbelievably.

[1205] And that's the drawback.

[1206] I mean, if you're a lone founder, there is no other person that you can sit down with and tackle problems and talk them through who has precisely or nearly precisely your alignment of interests.

[1207] Your most trusted board member is likely an investor and therefore at the end of the day has the interest of preferred stock in mind, not common stock.

[1208] Your most trusted VP who might own a very significant stake in the company doesn't own anywhere near your stake in the company.

[1209] And so their long-term interests may well be in getting the right level of experience and credibility necessary to peel off and start their own company, or their interests might be aligned with, you know, jumping ship and setting up with a different company, whether it's a rival or one in a completely different space.

[1210] So, yeah, being a lone founder is a spectacularly lonely thing.

[1211] And that's a major downside to it.

[1212] What about mentorship?

[1213] Because you're a mentor to a lot of people.

[1214] Can you find an alleviation to that loneliness in the space of ideas with a good mentor?

[1215] With a good mentor or like a mentor who's mentoring you.

[1216] Yeah.

[1217] Yeah, you can, a great deal, particularly if it's somebody who's been through this very process and has navigated it successfully and cares enough about you and your well-being to give you, you know, beautifully unvarnished advice. That can be a huge, huge thing.

[1218] That can ease things a great deal.

[1219] And I had a board member who was not an investor who basically played that role for me to a great degree.

[1220] He came in maybe halfway through the company's history, though.

[1221] I would have needed that the most in the very earliest days.

[1222] Yeah, the loneliness, that's the whole journey of life.

[1223] We're always alone, alone together.

[1224] It pays to embrace that.

[1225] You were saying that there might be something outside of the founder, which you were also promising to be brief on.

[1226] Yeah, okay, so we talked about the founder.

[1227] You were asking what makes a great startup.

[1228] Yes.

[1229] And great founder is thing number one, but then thing number two and it's ginormous is a great team.

[1230] And so I said so much about the founder because one hopes, or one believes, that a founder who is a great hirer is going to be hiring people in charge of critical functions like engineering and marketing and biz dev and sales and so forth who themselves are great hirers.

[1231] But what needs to radiate from the founder into the team might be a little bit different from what's in the gene code of the founder.

[1232] The team needs to be fully bought in to the intuitions and the vision of the founder.

[1233] Great, we've got that.

[1234] But the team needs to have a slightly different thing, which is, you know, a 99% obsession with execution: relentlessly hit the milestones, hit the objectives, hit the quarterly goals.

[1235] There is, you know, 1% vision;

[1236] you don't want to lose that, but execution machines, you know, people who have a demonstrated ability and a demonstrated focus on, yeah, I go from point to point to point.

[1237] I try to beat and raise expectations relentlessly, never fall short, and, you know, both sort of blaze and follow the path.

[1238] Not that the path is going to, I mean, blaze the trail as well.

[1239] I mean, a good founder is going to trust that VP of sales to have a better sense of what it takes to build out that organization, what the milestones should be.

[1240] And it's going to be kind of a dialogue amongst those at the top.

[1241] But, you know, execution obsession in the team is the next thing.

[1242] Yeah, there's some sense where the founder, you know, you talk about sort of the space of ideas like first principles thinking, asking big difficult questions of like future trajectories or having a big vision and big picture dreams.

[1243] You can almost be a dreamer, it feels like, when you're, like, not the founder, but in the space of sort of leadership.

[1244] But when it gets to the ground floor, there has to be execution.

[1245] There has to be hitting deadlines.

[1246] And sometimes those are in tension.

[1247] There's something about dreams that is in tension with the pragmatic

[1248] nature of execution. Not dreams, but sort of ambitious vision.

[1249] And those have to be, I suppose, coupled.

[1250] The vision in the leader and the execution in the software world, that would be the programmer or the designer.

[1251] Absolutely.

[1252] Amongst many other things, you're an incredible conversationalist, a podcaster; you host a podcast called After On.

[1253] I mean, there are a million questions I want to ask you here, but one at the highest level: what do you think makes for a great conversation?

[1254] I would say two things, one of two things, and ideally both of two things.

[1255] One is if something is beautifully architected, whether it's done deliberately and methodically and willfully, as when I do it, or whether that just emerges from the conversation, but something that's beautifully architected.

[1256] That can create something that's incredibly powerful and memorable, or something where there's just extraordinary chemistry.

[1257] And so with All In, or I'll go way back, you might remember the NPR show Car Talk.

[1258] Oh, yeah.

[1259] Couldn't care less about auto mechanics myself.

[1260] Yeah, that's right.

[1261] But I love that show because the banter between those two guys was just beyond, without any parallel, right?

[1262] you know, and some kind of edgy podcasts.

[1263] Like, Red Scare is just really entertaining to me because the banter between the women on that show is just so good, and All In, and that kind of thing.

[1264] So I think it's a combination of sort of the arc and the chemistry.

[1265] And I think because the arc can be so important, that's why very, very highly produced podcasts like This American Life, obviously a radio show, but I think of it as a podcast because that's how I consume it, or Criminal, or, you know, a lot of what Wondery does and so forth, that is real documentary making, and that requires a big team and a big budget relative to the kinds of things you and I do, but nonetheless, then you've got that arc, and that can be really, really compelling.

[1266] But if we go back to conversation, I think it's a combination of structure and chemistry.

[1267] Yeah, and I actually personally have lost, I used to love This American Life, and for some reason, maybe because it lacks the possibility of magic.

[1268] It's engineered magic.

[1269] I've fallen off of it myself as well.

[1270] I mean, when I fell madly in love with it during the aughts, it was the only thing going.

[1271] They were really smart to adopt podcasting as a distribution mechanism early.

[1272] But yeah, I think that maybe there's a little bit less magic there now because I think they have agendas other than necessarily just delighting their listeners with quirky stories, which I think is what it was all about back in the day and some other things.

[1273] Is there like a memorable conversation that you've had on the podcast, whether it was because it was wild and fun or one that was exceptionally challenging, maybe challenging to prepare for, that kind of thing?

[1274] Is there something that stands out in your mind that you can draw an insight from?

[1275] Yeah, I mean, this in no way diminishes the episodes that will not be the answer to these two questions.

[1276] But an example of something that was really, really challenging to prepare for was George Church.

[1277] So, as I'm sure you know, and as I'm sure many of your listeners know, he is one of the absolute leading lights in the field of synthetic biology.

[1278] He's also unbelievably prolific.

[1279] His lab is large, and all kinds of efforts have spun out of that.

[1280] And what I wanted to make my George Church episode about was, first of all, you know, grounding people in what is this thing called SynBio.

[1281] And that required me to learn a hell of a lot more about SynBio than I knew going into it.

[1282] So there was just this very broad.

[1283] I mean, I knew much more than the average person going into that episode, but there was this incredible breadth of grounding that I needed to give myself in the domain.

[1284] And then George does so many interesting things.

[1285] There are so many interesting things emerging from his lab. And he and I had a really good dialogue.

[1286] He was a great guide going into it.

[1287] Winnowing it down to the three to four that I really wanted us to focus on, to create a sense of wonder and magic in the listener about what could be possible from this very broad-spectrum domain, that was a doozy of a challenge.

[1288] That was a tough, tough, tough one to prepare for.

[1289] Now, in terms of something that was just wild and fun, unexpected, I mean, by the time we sat down to interview, I knew where we were going to go, but just in terms of the idea space.

[1290] Don Hoffman.

[1291] Oh, wow.

[1292] Yeah.

[1293] So Don Hoffman, as again, some listeners probably know because he's, I think I was the first podcaster to interview him.

[1294] I'm sure some of your listeners are familiar with him, but he has this unbelievably contrarian take on the nature of reality.

[1295] But it is contrarian in a way that all the ideas are highly internally consistent and snap together in a way that's just delightful.

[1296] Yeah.

[1297] And it seems as radically violating of our intuitions, and as radically violating of the probable nature of reality, as anything that one can encounter. But an analogy that he uses, which is very powerful, is: what intuition could possibly be more powerful than the notion that there is a single unitary direction called down?

[1298] And we're on this big flat thing for which there is a thing called down.

[1299] And we all know that, I mean, that's the most intuitive thing that one could probably think of.

[1300] And we all know that that ain't true.

[1301] So my conversation with Don Hoffman is just wild and full of plot twists and interesting stuff.

[1302] And the interesting thing about the wildness of his ideas, to me at least, as a listener, is that it's coupled with, he's a good listener, and he empathizes with the people who challenge his ideas.

[1303] Like, what's a better way to phrase that?

[1304] He is welcoming of challenge in a way that creates a really fun conversation.

[1305] Oh, totally.

[1306] Yeah.

[1307] He loves a parry or a jab, whatever the word is,

[1308] at his argument. He honors it.

[1309] He's a very, very gentle and non-combative soul.

[1310] But then he is very good and takes great evident joy.

[1311] in responding to that, in a way that expands your understanding of his thinking.

[1312] Let me, as a small tangent, tie together our previous conversation about listen.com and streaming and Spotify with the world of podcasting.

[1313] So we've been talking about this magical medium of podcasting.

[1314] I have a lot of friends at Spotify, in the high positions of Spotify as well.

[1315] I worry about Spotify in podcasting, and about the future of podcasting in general, that it moves podcasting into a place of maybe walled gardens of sorts.

[1316] Since you've had a foot in both worlds, have a foot in both worlds, do you worry as well about the future of podcasting?

[1317] Yeah, I think walled gardens are really toxic to the medium that they start balkanizing.

[1318] So to take an example, I'll take two examples.

[1319] With music, it was a very, very big deal that at Rhapsody, we were the first company to get full catalog licenses from all.

[1320] Back then, there were five major music labels and also hundreds and hundreds of indies because you needed to present the listener with a sense that basically everything is there.

[1321] And there is essentially no friction to discovering that which is new.

[1322] And you can wander this realm, and all you really need is a good map, whether it's something that somebody on the editorial team assembled, or a good algorithm, or whatever it is, but a good map to wander this domain.

[1323] When you start walling things off, A, you undermine the joy of friction-free discovery, which is an incredibly valuable thing to deliver to your customer, both from a business standpoint and simply from, you know, a humanistic standpoint of you want to bring delight to people.

[1324] But it also creates an incredible opening vector for piracy.

[1325] And so something that's very different from the Rhapsody slash Spotify slash et cetera like experience is what we have now in video.

[1326] You know, like, wow, is that show on Hulu?

[1327] Is it on Netflix?

[1328] Is it on something like IFC channel?

[1329] Is it on Discovery Plus?

[1330] Is it here?

[1331] Is it there?

[1332] And the more frustration and toe stubbing that people encounter when they are seeking something and they're already paying a very respectable amount of money per month to have access to content and they can't find it.

[1333] The more that happens, the more people are going to be driven to piracy solutions like to hell with it.

[1334] Never know where I'm going to find something.

[1335] I never know what it's going to cost.

[1336] Oftentimes, really interesting things are simply unavailable.

[1337] It surprises me, the number of times that I've been looking for things I don't even think are that obscure, and it just says, not available in your geography, period, mister, right?

[1338] So I think that that's a mistake.

[1339] And then the other thing is, you know, for podcasters and lovers of podcasting, we should want to resist this walled-garden thing, because, A, it does smother, or eradicate, this friction-free discovery, unless you want to sign up for lots of different services.

[1340] And it also dims the voice of somebody who might be able to have a far, far, far bigger impact by reaching

[1341] far more neurons, you know, with their ideas.

[1342] I'll use an example from, I guess it was probably the 90s, or maybe it was the aughts, of Howard Stern, who had the biggest megaphone, or maybe the second biggest after Oprah's, in popular culture.

[1343] And that's because he was syndicated on hundreds and hundreds and hundreds of radio stations at a time when terrestrial broadcast was the main thing people listened to in their cars, no more, obviously.

[1344] But when he decided to go over to, you know, satellite radio, I can't remember if it was XM or Sirius, maybe they'd already merged at that point.

[1345] But when he did that, he made, you know, totally his right to do it, a financial calculation; they were offering him a nine-figure sum to do that.

[1346] But his audience, because not a lot of people were subscribing to satellite radio at that point, his audience probably collapsed by, I wouldn't be surprised if it was as much as 95%.

[1347] And so the influence that he had on the culture, and his ability to sort of shape conversation and so forth, just got muted.

[1348] Yeah.

[1349] And also there's a certain sense, especially in modern times, where the walled gardens naturally lead to, I don't know if there's a term for it, but people who are not creatives starting to have power over the creatives.

[1350] Right.

[1351] And even if they don't stifle it, if they're providing, you know, incentives within the platform to shape, shift, or, you know, even completely mutate or distort the show.

[1352] I mean, imagine somebody has got, you know, reasonably interesting idea for a podcast and they get signed up with, let's say, Spotify.

[1353] And then Spotify is going to give them financing to get the thing spun up.

[1354] And that's great.

[1355] And Spotify is going to give them a certain amount of really, you know, powerful placement.

[1356] you know, within the visual field of listeners, but Spotify has conditions for that.

[1357] They say, look, you know, we think that your podcast will be much more successful if you dumb it down about 60%.

[1358] If you add some, you know, silly, dirty jokes.

[1359] If you do this, you do that.

[1360] And suddenly the person who is dependent upon Spotify for permission to come into existence, and is really deferential, really wants to please them, you know, to get that money in, to get that placement, really wants to be successful.

[1361] Now all of a sudden you're having a dialogue between a complete non-creative, some marketing, you know, sort of data-analytic person at Spotify, and a creative, that's going to shape what that show is.

[1362] You know, so that could be much more common, and ultimately have, in the aggregate, an even bigger impact than, you know, the cancellation, let's say, of somebody who says the wrong word or voices the wrong idea.

[1363] I mean, that's kind of what you have, not kind of.

[1364] It's what you have with film and TV: so much influence is exerted over the storylines and the plots and the character arcs and all kinds of things by executives who are completely alien to the experience and the skill set of being a showrunner in television or being a director in film. You know, it's, oh, we can't piss off the Chinese market here, or we can't say that, or we need to have, you know, cast members that have precisely these demographics reflected, or whatever it is. And obviously, despite that, extraordinary TV shows, at least, are now being made.

[1365] You know, in terms of film, I think the average quality of, let's say, an American film coming out of a major studio has, in my view, nosedived over the past decade, as it's kind of, everything's got to be a superhero franchise.

[1366] But, you know, great stuff gets made despite that.

[1367] But I have to assume that in some cases, at least in perhaps many cases, greater stuff would be made if there was less interference from non-creative executives.

[1368] The flip side of that, though, and this was the pitch of Spotify, because I've heard their pitch, is Netflix.

[1369] From everybody I've spoken with about Netflix, they actually empower the creator.

[1370] I don't know what the heck they do, but they do a good job of giving creators, even the crazy ones, like Tim Dillon, like Joe Rogan, like comedians, freedom to be their crazy selves.

[1371] And the result is, like, some of the greatest television, some of the greatest cinema, whatever you call it, ever made.

[1372] True.

[1373] Right.

[1374] And I don't know what the heck they're doing.

[1375] It's a relative thing.

[1376] From what I understand, it's a relative thing.

[1377] They're interfering far, far less than, you know, NBC or, you know, AMC would have interfered.

[1378] So it's a relative thing.

[1379] And obviously, they're the ones writing the checks and they're the ones giving the platform.

[1380] So they have every right to their own influence.

[1381] Yeah.

[1382] But my understanding is that they're relatively way more hands-off, and that has had a demonstrable effect, because I agree, it's some of the greatest, you know, produced video content of all time.

[1383] An incredibly inordinate percentage of that is coming out of Netflix in just a few years, when the history of cinema goes back many, many decades.

[1384] And Spotify wants to be that for podcasting, and I hope they do become that for podcasting, but I'm wearing my skeptical goggles or skeptical hat, whatever the heck.

[1385] it is because it's not easy to do.

[1386] And it requires letting go of power, giving power to the creatives.

[1387] It requires pivoting, which large companies, even as innovative as Spotify is, still now a large company, pivoting into a whole new space is very tricky and difficult.

[1388] So I'm skeptical but hopeful.

[1389] What advice would you give to a young person today about life, about career?

[1390] We talked about startups.

[1391] We talked about music.

[1392] We talked about the end of human civilization.

[1393] Is there advice you would give to a young person today, maybe in college, maybe in high school, about their life?

[1394] Well, let's see.

[1395] I mean, there's so many domains you can advise on.

[1396] And, you know, I'm not going to give advice on life, because I fear that I would drift into sort of Hallmark bromides that really wouldn't be all that distinctive.

[1397] And they might be entirely true, sometimes the greatest insights about life turn out to be like the kinds of things you'd see on a Hallmark card.

[1398] So I'm going to steer clear of that.

[1399] On a career level, you know, one thing that I think is unintuitive but unbelievably powerful is to focus not necessarily on being, you know, in the top sliver of one percent, excelling at one domain that's important and valuable,

[1400] but to think in terms of intersections of two domains, which are rare but valuable.

[1401] And there's a couple reasons for this.

[1402] The first is in an incredibly competitive world that is so much more competitive than it was when I was coming out of school, radically more competitive than when I was coming out of school, to navigate your way to the absolute pinnacle of any domain.

[1403] Let's say you want to be really, really great at, you know, Python, pick a language, whatever it is.

[1404] You want to be one of the world's greatest Python developers, JavaScript, whatever your language is.

[1405] Hopefully it's not COBOL.

[1406] By the way, if you're listening to this, I am actually looking for a COBOL expert to interview, because I find the language fascinating.

[1407] And there's not many of them.

[1408] So please, if you know a world expert in COBOL or Fortran, both, actually.

[1409] Or if you are one.

[1410] Or if you are one, please email me. Yeah.

[1411] So, I mean, if you're going out there, wanting to be in the top sliver of 1% of Python developers is a very, very difficult thing to do, particularly if you want to be number one in the world, something like that.

[1412] And I'll use an analogy. I had a friend in college who was on a track, and indeed succeeded at that, to become an Olympic medalist in, I think it was, the 100-meter breaststroke.

[1413] And he mortgaged a significant percentage of his sort of college life to that goal, or I should say dedicated or invested or whatever you wanted to say.

[1414] But he didn't participate in a lot of the social, a lot of the late night, a lot of the this, a lot of the that, because he was training so much.

[1415] And obviously, he also wanted to keep up with his academics.

[1416] And at the end of the day, the story has a happy ending, in that he did medal in that.

[1417] Bronze, not gold, but holy cow, anybody who gets an Olympic medal, that's an extraordinary thing.

[1418] And at that moment, he was, you know, one of the top three people on earth at that thing.

[1419] But wow, how hard to do that.

[1420] How many thousands of other people went down that path and made similar sacrifices and didn't get there.

[1421] It's very, very hard to do that.

[1422] Whereas, you know, I'll use a personal example, when I came out of business school, I went to a good business school and learned the things that were there to be learned, and I came out and I entered a world with lots of...

[1423] Harvard Business School, by the way.

[1424] Okay, yes, it was Harvard.

[1425] It's true.

[1426] You're the first person who went there who didn't say where you went.

[1427] It was just beautiful.

[1428] I appreciate that.

[1429] It's one of the greatest business schools in the world.

[1430] It's a whole other fascinating conversation about that world.

[1431] But anyway, yes.

[1432] But anyway, so I learned the things that you learn getting an MBA from a top program, and I entered a world that had hundreds of thousands of people who had MBAs, probably hundreds of thousands who had them from, you know, top 10 programs.

[1433] So I was not particularly great at being an MBA person.

[1434] I was inexperienced relative to most of them, and there were a lot of them, but I was an okay MBA person, right?

[1435] Newly minted.

[1436] But then as it happened, I found my way into working on the commercial internet in 1994.

[1437] So I went to an, at the time, giant and hot computing company called Silicon Graphics, which had enough heft and enough headcount that they could take on inexperienced MBAs and try to train them in the world of Silicon Valley.

[1438] But within that company that had an enormous amount of surface area and was touching a lot of areas and had unbelievably smart people at the time, it was not surprising that SGI started doing really interesting and innovative and trailblazing stuff on the internet before almost anybody else.

[1439] And part of the reason was that our founder, Jim Clark, went off to co-found Netscape with Marc Andreessen, so the whole company is like, wait, what was that?

[1440] What's this commercial internet thing?

[1441] So I end up in that group.

[1442] Now, in terms of being a commercial internet person or a worldwide web person, Again, I was, in that case, barely credentialed.

[1443] I couldn't write a stitch of code, but I had a pretty good mind for grasping the business and cultural significance of this transition.

[1444] And this was, again, we were talking earlier about emerging areas.

[1445] Within a few months, you know, I was in the relatively top echelon of people in terms of just sheer experience.

[1446] Because like, let's say it was five months into the program, there were only so many people who'd been doing World Wide Web stuff commercially for five months.

[1447] And then what was interesting, though, was the intersection of those two things.

[1448] The commercial web, as it turned out, grew into unbelievable vastness.

[1449] And so by being a pretty good, okay web person and a pretty good, okay MBA person, that intersection put me in a very rare group, which was web-oriented MBAs.

[1450] And in those early days, you could probably count on your fingers the number of people who came out of really competitive programs who were doing stuff full-time on the Internet.

[1451] And there was a greater appetite for great software developers in the Internet domain, but there was an appetite, and a real one and a rapidly growing one, for MBA thinkers who were also seasoned and networked in the emerging world of the commercial World Wide Web.

[1452] And so finding an intersection of two things you can be pretty good at, but that is a rare intersection and a special intersection, is probably a much easier way to make yourself distinguishable and in demand than trying to be world-class at this one thing.

[1453] So the intersection is where there's opportunity and success to be discovered.

[1454] That's really interesting.

[1455] Yeah.

[1456] There are actually more intersections of fields than fields themselves, right?

[1457] So, yeah, I mean, I'll give you kind of a funny hypothetical here, but it's one I've been thinking about a little bit.

[1458] There's a lot of people in crypto right now.

[1459] It'd be hard to be in the top percentile of crypto people, whether that comes from just having a sheer grasp of the industry, a great network within the industry, technological skills, whatever you want to call it.

[1460] And then there's this parallel world and orthogonal world called crop insurance.

[1461] And there's, you know, I'm sure that's a big world.

[1462] Crop insurance is a very, very big deal, particularly in the wealthy and industrialized world, where there are sophisticated financial markets, rule of law, and, you know, large agricultural concerns that are worried about that.

[1463] Somewhere out there is somebody who is pretty crypto -savvy, but probably not top 1%, but also has kind of been in the crop insurance world and understands that a hell of a lot better than almost anybody who's ever had anything to do with cryptocurrency.

[1464] And so with decentralized finance, DeFi, one of the interesting and, I think, very world-positive things it will almost inevitably bring to the world is crop insurance for smallholder farmers.

[1465] You know, I mean, people who have tiny, tiny plots of land in places like India, etc., where there is no crop insurance available to them because the financial infrastructure just doesn't exist.

[1466] But it's highly imaginable that using oracle networks, which are trusted outside deliverers of factual information about rainfall in a particular area, you can start giving drought insurance to folks like this.
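To make the mechanism Rob is describing concrete, here is a minimal sketch in plain Python rather than an actual smart-contract language: the policy pays out automatically whenever a trusted rainfall oracle reports a dry season, with no claims adjuster involved. Every name and number in it (RainfallOracle, DroughtPolicy, the thresholds) is a hypothetical illustration, not any real DeFi protocol.

# A minimal, hypothetical sketch of parametric drought insurance.
# In practice this logic would live in a smart contract and read rainfall
# from a decentralized oracle network; plain Python stands in for both here.

from dataclasses import dataclass

@dataclass
class RainfallOracle:
    # Stand-in for a trusted outside deliverer of rainfall data (mm/season).
    reported_rainfall_mm: float

    def seasonal_rainfall(self, region: str) -> float:
        # A real oracle network would aggregate signed reports for the region.
        return self.reported_rainfall_mm

@dataclass
class DroughtPolicy:
    region: str
    premium: float               # paid up front by the farmer
    payout: float                # paid automatically if drought is observed
    drought_threshold_mm: float  # rainfall below this counts as drought

    def settle(self, oracle: RainfallOracle) -> float:
        # Parametric settlement: no claims adjuster, just the oracle reading.
        rainfall = oracle.seasonal_rainfall(self.region)
        return self.payout if rainfall < self.drought_threshold_mm else 0.0

# Example: a smallholder pays a small premium; if the oracle reports a dry
# season, the payout triggers automatically.
policy = DroughtPolicy(region="Madhya Pradesh", premium=5.0,
                       payout=100.0, drought_threshold_mm=300.0)
print(policy.settle(RainfallOracle(reported_rainfall_mm=180.0)))  # -> 100.0

The design point, on this reading, is that settlement depends only on a publicly observable parameter, rainfall, rather than on assessing each farmer's individual loss, which is what could make such coverage cheap enough to offer on tiny plots of land.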

[1467] The right person to come up with that idea is not a crypto whiz who doesn't know a blasted thing about smallholder farmers.

[1468] The right person to come up with that is not a crop insurance whiz who isn't quite sure what Bitcoin is, but somebody who occupies that intersection.

[1469] That's just one of a gazillion examples of things that are going to come along for somebody who occupies the right intersection of skills but isn't necessarily the number one person at either one of those areas of expertise. That's making me kind of wonder about my own little things that I'm average at, and seeing where the intersections are that could be exploited. That's pretty profound. So we talked quite a bit about the end of the world and how we're both optimistic about us figuring our way out. Unfortunately, for now at least, both you and I are going to die one day, way too soon.

[1470] First of all, that sucks.

[1471] It does.

[1472] I mean, one, I'd like to ask, if you ponder your own mortality, what kind of wisdom and insight does it give you about your own life?

[1473] And broadly, do you think about your life and what the heck it's all about?

[1474] Yeah, with respect to pondering mortality, I do try to do that as little as possible because there's not a lot I can do about it.

[1475] But it's inevitably there.

[1476] And I think that what it does, when you think about it in the right way, is it makes you realize how unbelievably rare and precious the moments that we have here are.

[1477] And therefore, how consequential the decisions that we make about how to spend our time are.

[1478] You know, like, do you do those 17 nagging emails, or do you have dinner with

[1479] somebody who's really important to you whom you haven't seen in three and a half years?

[1480] If you had an infinite expanse of time in front of you, you might well rationally conclude I'm going to do those emails because collectively they're rather important.

[1481] And I have tens of thousands of years to catch up with my buddy, Tim.

[1482] But I think the scarcity of the time that we have helps us choose the right things if we're attuned to that.

[1483] And if we're attuned to the context that mortality puts over the consequence of every decision we make about how to spend our time.

[1484] That doesn't mean that we're all very good at it.

[1485] It doesn't mean I'm very good at it, but it does add a dimension of choice and significance to everything that we elect to do.

[1486] It's kind of funny that you say you try to think about it as little as possible.

[1487] I would venture to say you probably think about the end of human civilization more than you do about your own life.

[1488] You're probably right.

[1489] Because that feels like a problem that could be solved.

[1490] Right.

[1491] Whereas the end of my own life can't be solved.

[1492] Well, I don't know.

[1493] I mean, there's transhumanists who have incredible optimism about, you know, near or intermediate future therapies that could really, really change human lifespan.

[1494] I really hope that they're right, but I don't have a whole lot to add to that project because I'm not a life scientist myself.

[1495] I'm in part also afraid of immortality, not as much as I'm afraid of death itself, but close.

[1496] So it feels like the things that give us meaning give us meaning because of the scarcity that surrounds them.

[1497] Agreed.

[1498] I'm almost afraid of having too much of stuff.

[1499] Yeah.

[1500] Although if there was something that said, this can expand your enjoyable wellspan, or lifespan, by 75 years, I'm all in.

[1501] Well, part of the reason I wanted to not do a startup, really the only thing that worries me about doing a startup, is that if it becomes successful, because of how much I dream, how much I'm driven to be successful, there will not be enough silence in my life, enough scarcity, to appreciate the moments I appreciate now as deeply as I appreciate them now.

[1502] There's a simplicity to my life now that, it feels like, might disappear with success?

[1503] I wouldn't say might.

[1504] I think if you start a company that has ambitious investors, ambitious for the returns that they'd like to see, that has ambitious employees, ambitious for the career trajectories they want to be on and so forth, and is driven by your own ambition, there is a profound monogamy to that, you know. And it is very, very hard to carve out time to be creative, to be peaceful, and so forth. Because, you know, with every new employee that you hire, that's one more mouth to feed. With every new investor that you take on, that's one more person to whom you really do want to deliver great returns. And as the valuation ticks up, the threshold for delivering great returns to your investors always rises.

[1505] And so there is an extraordinary monogamy to being a founder CEO, above all for the first few years, and the first few years could, in people's minds, be as many as 10 or 15.

[1506] But I guess the fundamental calculation is whether the passion for the vision is greater than the cost you'll pay.

[1507] Right.

[1508] It's all opportunity cost.

[1509] It's all opportunity cost in terms of time and attention and experience.

[1510] And some things, like, everyone's different, but I'm less calculating.

[1511] Some things you just can't help.

[1512] Sometimes you just dive in.

[1513] Oh, yeah.

[1514] I mean, you can do balance sheets all you want on this versus that and what's right.

[1515] I mean, I've done it in the past and it's never worked.

[1516] You know, it's always been like, okay, what's my gut screaming at me to do?

[1517] Yeah.

[1518] But about the meaning of life, do you ever think about that? Yeah, I mean, this is, we're going to go all Hallmark on you, but I think that, you know, there are a few things. And, you know, one of them is certainly love. And the love that we experience and feel and cause to well up in others is something that's just so profound and goes beyond almost anything else that we can do.

[1519] And whether that is something that lies in the past, like maybe there was somebody that you were dating and loved very profoundly in college and haven't seen in years, I don't think the significance of that love is in any way diminished by the fact that it had a notional beginning and end.

[1520] The fact is that you experienced that and you triggered that in somebody else and that happened.

[1521] And it doesn't have to be, certainly doesn't have to be love of romantic partners alone.

[1522] It's family members.

[1523] It's love between friends.

[1524] It's love between creatures.

[1525] You know, I had a dog for 10 years who passed away a while ago and, you know, experienced unbelievable love with her.

[1526] It can be love of that which you create.

[1527] And we were talking about the flow states that we enter, and the pride or lack of pride, or, in the Minsky case, your hatred of that which you've done.

[1528] But nonetheless, the creations that we make, and whether it's the love or the joy or the engagement or the perspective shift that cascades into other minds, I think that's a big, big part of the meaning of life.

[1529] It's not something that everybody participates in necessarily, although I think we all do, you know, at least at a very local level, by, you know, the example that we set, by the interactions that we have. But for people who create works that travel far and reach people they'll never meet, that reach countries they'll never visit, that reach people perhaps who come along and come across their ideas or their works or their stories or their aesthetic creations of other sorts long after they're dead.

[1530] I think that's a really, really big part of the fabric of the meaning of life.

[1531] And so all these things, like love and creation, I think, really are what it's all about.

[1532] And part of love is also the loss of it.

[1533] There's a Louie episode with Louis C.K. There's an old gentleman who is giving him advice that sometimes the sweetest part of love is when you lose it, and you remember it, sort of, you reminisce on the loss of it.

[1534] And there's some aspect in which, and I have many of those in my own life, almost like the memories of it and the intensity of emotion you still feel about it are like the sweetest part.

[1535] You're like, after saying goodbye, you relive it.

[1536] So that goodbye is also a part of love.

[1537] The loss of it is also a part of love.

[1538] I don't know.

[1539] It's back to that scarcity.

[1540] I won't say the loss is the best part personally, but it definitely is an aspect of it.

[1541] And, you know, the grief you might feel about something that's gone makes you realize what a big deal it was.

[1542] Yeah.

[1543] Speaking of which, this particular journey we went on together has come to an end.

[1544] So I have to say goodbye, and I hate saying goodbye.

[1545] Rob, this is truly an honor.

[1546] I've really been a big fan.

[1547] People should definitely check out your podcast.

[1548] You're a master of what you do in the conversation space and the writing space.

[1549] It's been an incredible honor that you would show up here and spend this time with me. I really, really appreciate it.

[1550] Well, it's been a huge honor to be here as well, and I've also been a fan for a long time.

[1551] Thanks, Rob.

[1552] Thanks for listening to this conversation with Rob Reed, and thank you to Athletic Greens, Belcampo, Fundrise, and NetSuite.

[1553] Check them out in the description to support this podcast.

[1554] And now, let me leave you with some words from Plato.

[1555] We can easily forgive a child who's afraid of the dark.

[1556] The real tragedy of life is when men are afraid of the light.

[1557] Thank you for listening and hope to see you next time.