The Diary Of A CEO with Steven Bartlett
[0] I painted a scenario that was going to result in the extinction of humanity and approximately how long it would take.
[1] The problem is that it's already underway on a time scale of decades.
[2] And we have created a fragile world that cannot endure this shift.
[3] Should people be preparing?
[4] Absolutely.
[5] That's quite scary.
[6] Dr. Bret Weinstein is an evolutionary biologist and former professor uncovering the world's most pressing and controversial issues and offering his solutions to save humanity from a destructive future.
[7] Humanity is in terrible danger, and the number of existential threats is growing.
[8] For example, I am profoundly concerned we are going to squander the lesson of COVID.
[9] You can see the complete collapse of journalism, our political institutions, our courts.
[10] They all fail.
[11] The tragedy is most people don't know that we are still not being honest about the origin of COVID.
[12] And the truth is something our political institutions don't want to talk about, which is going to mean that the failures are going to come back.
[14] Is there anything else on your list of concerns?
[15] So I have five different existential threats that AI poses, and we will go through them.
[16] But we have no evolutionary preparedness for living in a world where a computer can out-compete a human being.
[17] That's a dangerous world to live in.
[18] Is there anything we can do to prepare or to avert this crisis?
[19] Yes.
[20] Here's what I suggest.
[21] Bret, of all these existential threats, is there one that's at the very top of your list?
[22] Yes.
[23] There's nothing more dangerous than this, and that is one.
[24] This is a sentence I never thought I'd say in my life.
[25] We've just hit 7 million subscribers on YouTube, and I want to say a huge thank you to all of you that show up here every Monday and Thursday to watch our conversations.
[26] From the bottom of my heart, but also on behalf of my team, who you don't always get to meet.
[27] There's almost 50 people now behind The Diary Of A CEO that work to put this together.
[28] So from all of us, thank you so much.
[29] We did a raffle last month and we gave away prizes for people that subscribed to the show up until 7 million subscribers.
[30] And you guys loved that raffle so much that we're going to continue it.
[31] So every single month, we're giving away money-can't-buy prizes, including meetings with me, invites to our events, and £1,000 gift vouchers to anyone that subscribes to The Diary Of A CEO.
[32] There's now more than seven million of you.
[33] So if you make the decision to subscribe today, you can be one of those lucky people.
[34] Thank you from the bottom of my heart.
[35] Let's get to the conversation.
[36] Who are you?
[37] And what mission are you on?
[38] And when I ask that second question, I'm looking at the full body of your work and I'm trying to encapsulate it, maybe in just a couple of sentences.
[39] Sure.
[40] I am an evolutionary biologist.
[41] I'm a former college professor who has been cast into the role of a public intellectual by bizarre events at my college.
[43] I am on a mission and I'm afraid it's going to sound weird to people.
[44] I think humanity is in terrible danger.
[45] I think we worry about the wrong things and I do not have any reason to believe that anything I could do is going to change the fate of humanity, but I feel obligated to try.
[46] That is to say, if we're going to be doomed by our errors, and I know something about what those errors are, then it falls to me to try to make that clear to people and processes that might have the power to redirect us.
[47] So I'm making that effort, even though, frankly, I think it's unlikely to work.
[48] That's quite scary, Bret.
[49] Yep, I've gotten over that part.
[50] What exactly are you referring to when you say that you think humanity might be doomed?
There's a basic set of premises that just comes out of biology: no species is forever, and that includes our species, no matter what I or anyone else does. But the objective of the exercise is really to stave off extinction as long as possible, and I believe that that is a valid thing to do. It is a vital thing to do, even if, in the end, we know that no matter how successful we were, we're not going to escape the destruction of the solar system.
[51] We're not going to escape the collision of our galaxy into another.
[52] And even if we did, ultimately the universe has a fate and it will take us out with it if we beat every odd.
[53] But why are we in trouble?
[54] Well, we're in trouble because all creatures are well-built for the environments in which they evolved.
[55] And human beings suffer from something that my wife Heather and I, in our book, call hyper-novelty.
[56] So novelty is the state of something not being what you are evolutionarily prepared for.
[57] And human beings are very good at dealing with novelty.
[58] But what we're doing in the present is we're creating a rate of change that is so rapid that there is no conceivable way for us to keep up.
[59] We cannot adapt fast enough to keep up with the novel influences that we are forcing upon ourselves.
[60] And what that means is that with each passing year, we end up ever more poorly adapted to the life that we have to lead.
[61] And it's gotten so bad that the environments that we live in as adults don't even resemble the environments that existed for adults when we were kids.
[62] The reason that human beings have a longer developmental period than any other creature that has ever existed on this planet is that you need a long developmental period for us to acquire the insight and the nuance in order to be a functional adult.
[63] That program doesn't work if the environment in which you are picking up those lessons is unrelated to the environment in which you have to do the adult stuff.
[64] It's a non sequitur.
[65] So that's why we're in trouble.
[66] We have technologies that are powerful enough to destroy us.
[67] We have processes that we have unleashed the consequences of which we can scarcely imagine.
[68] And as these things proliferate, the number of existential threats to humanity is just simply growing.
[69] We have to rein in that problem.
[70] The proliferation of existential threats means that the moment at which we blink out as a species is getting closer.
[71] If it isn't this that takes us out, it'll be that.
[72] We have to arrest that process.
[73] Of all these pressing concerns and of all these existential threats, is there one that's at the very top of your list of concerns?
[74] Well, they're not even completely separable.
[75] So, for example, we are politically obsessed in this country and across the Western world with anthropogenic climate change.
[77] I'm sure you've noticed.
[78] What is anthropogenic climate change?
[79] Anthropogenic climate change is a change in the average conditions on planet Earth that is driven by human activity.
[80] So the claim is that CO2 traps heat from the sun, causing the mean temperature to rise, and that will have impacts on, for example, how much ice persists at high latitudes, in the Arctic and the other cold regions.
[81] And that that then is part of a positive feedback where, because ice is white, it reflects the sun's energy back into space.
[82] So the more ice that melts, the darker the world becomes, the more light it absorbs, the warmer it becomes.
[83] So that positive feedback is actually a real reason for concern.
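(As a rough illustration of the feedback loop just described, here is a toy numerical sketch. Every coefficient in it is invented purely for illustration; it is not a climate model, just the shape of a self-reinforcing loop: warming melts ice, less ice means a darker planet, and a darker planet absorbs more energy and warms further.)

```python
# Toy sketch of an ice-albedo positive feedback. All numbers are invented
# for illustration only; this is not a climate model.

def step(temp_c, ice_fraction):
    """One feedback iteration: warmer -> less ice -> lower albedo -> warmer."""
    albedo = 0.1 + 0.5 * ice_fraction            # less ice => darker planet (toy values)
    absorbed = 1.0 - albedo                       # fraction of sunlight absorbed
    new_temp = temp_c + 2.0 * (absorbed - 0.55)   # warming driven by extra absorption
    new_ice = max(0.0, ice_fraction - 0.02 * (new_temp - temp_c))  # warming melts ice
    return new_temp, new_ice

temp, ice = 15.0, 0.6
for i in range(5):
    temp, ice = step(temp, ice)
    print(f"iteration {i}: temp={temp:.2f} C, ice fraction={ice:.3f}")
```

Each pass warms slightly more than the one before it, which is the signature of a positive feedback.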
[84] However, the increasingly model-driven mania about global warming is at odds with what we understand about models.
[85] Models are not a valid test of a hypothesis in a complex system.
[86] They can't be.
[87] So we are treating these models as if they tell us what's going to happen.
[88] And that is not a philosophically valid thing to do.
[89] But it is also just simply not in keeping with an understanding of the underlying requirements for functional science.
[90] If you were in climatology today and you attempted to publish a paper that said, actually, anthropogenic climate change is only a quarter as bad as we fear, you would have great difficulty publishing that, and you would experience a spectacular decrease in your viability as an academic.
[91] So what we can infer from that is that we probably have a lot of papers that point in a direction that the field is interested in promoting and that we have a dearth of papers that might point in the other direction.
[92] So in effect, when we look at the sum total of papers and we say, oh my God, they all say the same thing.
[93] We're in big trouble.
[94] Well, do they all say the same thing because we're in big trouble and that's what an honest analysis would give you?
[95] Or is that just an echo of what we put into the system?
[96] So I'm much less worried about anthropogenic climate change and I'm much more worried about some other threats that, to my way of thinking, clearly dwarf it in magnitude.
[97] So we have several problems related to space weather.
[98] The sun goes through a cycle, an 11-year sunspot cycle.
[99] Those sunspots often release solar flares. Those solar flares are, in general, directed randomly off the sun, and because the Earth is only in one spot, most of the solar flares that the sun flings off don't hit us.
What's a solar flare?
[100] A solar flare is, well, it's really the coronal mass ejection that is the important part. The flare is the thing you see on the sun that looks like a big flame sort of flipping off the sun.
[101] When it does that, that actually, in many cases, ejects a concentrated glob of plasma, right?
[102] These are charged particles, and they get flung off.
[103] They get flung off at speeds that are not consistent.
[104] They're not moving at the speed of light.
[105] They're moving at the speed of stuff, right?
[106] And the speed of stuff is variable.
[107] But a couple of days after a solar flare that releases a coronal mass ejection in the direction of the Earth, we get a wave of these charged particles.
[108] Across the earth.
[109] Yeah.
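(The "couple of days" figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below divides the mean Sun-Earth distance by a range of plasma speeds; the speeds are illustrative round numbers spanning the typical range for CMEs, not figures from the conversation.)

```python
# Rough travel time for a coronal mass ejection to cross the Sun-Earth gap.
AU_KM = 1.496e8  # mean Sun-Earth distance in kilometres

for speed_km_s in (500, 1000, 3000):  # slow, moderate, very fast CMEs (illustrative)
    hours = AU_KM / speed_km_s / 3600
    print(f"{speed_km_s} km/s -> {hours:.0f} hours ({hours / 24:.1f} days)")
```

A slow ejection takes roughly three and a half days; an unusually fast one can arrive in well under a day. That spread is exactly the "speed of stuff is variable" point.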
[110] And that causes things that we are all familiar with.
[111] Whether you've seen it or not, this increases the Aurora Borealis, for example, the Northern Lights.
[112] So it's a very spectacular show, and you probably are aware.
[113] Maybe you saw it yourself.
[114] But we recently had an Aurora that reached as far south as Puerto Rico, right?
[115] That's a really unusual thing to happen.
[116] And even more unusual is the fact...
[117] So they usually reach sort of the top part of the earth?
[118] Is that right?
[119] Yeah.
[120] If you're up near the Arctic Circle, you see these things regularly.
[121] The farther south you are, the less likely you are to see them.
[122] I saw it in North Sweden, sort of Iceland as well.
[123] Perfect place.
[124] But for people to see it in Puerto Rico is highly unusual.
[125] And you would think that that indicates that the burst of plasma that hit the Earth was in some way highly unusual.
[126] And it wasn't.
[127] Something else is going on.
[128] That Aurora reached farther south than it should have based on the magnitude of the coronal mass ejection, which was substantial, but hardly unprecedented.
[129] Now, what most people don't know is that there was a major solar storm that hit the Earth in 1859.
[130] It goes by the name of the Carrington event, named for the astronomer who realized that the weird effects that happened on Earth were correlated to something he had seen on the sun.
[131] He had effectively put those two together: he had seen the flare and then deduced that the effects on Earth were related to it.
[133] Now, in 1859, the world was not a very electrical place.
[134] In fact, the primary use of electricity was telegraphs.
[135] And at the time, this burst of plasma caused havoc: telegraph stations caught fire, and the entire network went down. Telegraph operators were shocked at their stations. Messages could be sent even though there was no power being delivered to the system; the induced charge in the wires was enough for telegraph operators to send messages over distances. So it was very dramatic if you were involved in telegraphs, but for the rest of humanity it was a minor event. Now, we live in a very different world.
[136] We live in a world where everything has an electrical component.
[137] The way our cars function, the way food shows up in the supermarket, the way air travel and air traffic control works, all of these things are heavily electrical.
[138] And they are all tremendously vulnerable to the EMP effects that will come with a major solar impact.
[139] What's the EMP effect?
[140] It's an electromagnetic pulse, which is basically an induced charge in electrically active materials.
[142] So it's the kind of thing where a big enough one will fry every computer and will take most of the cars off the road.
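(In textbook terms, the "induced charge" described here is electromagnetic induction: Faraday's law says a changing magnetic flux through any conducting circuit drives a voltage around it.

```latex
\mathcal{E} \;=\; -\frac{d\Phi_B}{dt}
```

The field change in a geomagnetic storm is modest at any single point, but integrated over circuits hundreds of kilometres long, such as transmission lines and pipelines, it can drive large quasi-direct currents. That is why long grid conductors and the transformers attached to them, rather than small isolated devices, are generally the most exposed hardware.)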
[143] And what we don't commonly know is that our grids, the grids that operate all of our electrical devices, run on these transformers, right, which control the flow of current.
[144] These transformers are huge, complicated machines, and if you needed one and you ordered it today, it would take a year for you to get it.
[145] If the world suddenly needed 70 of them or 100 of them, there's no telling what would happen.
[146] So while we all have the experience of a power outage causing us to lose electricity for, you know, hours or days, it is quite conceivable that a solar storm that took out a significant number of transformers could take a continent and turn it dark with no plan for bringing the lights back on.
[147] They would go out and they wouldn't come back.
[148] Now, this is a ridiculous risk to run. The transformers can be hardened against this. They cannot be perfectly immunized from this effect, but they can be hardened with well-understood architecture, architecture that effectively grounds out the EMP so that the transformer comes back on after the event. But we don't do it. So we are running an incredibly large risk of a section of a continent, or an entire continent, going dark with no backup plan.
[149] To me, the risk of that dwarfs anything that might be true about anthropogenic climate change.
[150] What's more, you've probably heard that pole shifts happen, that the North Pole isn't always where the North Pole currently is and that sometimes these things flip.
[151] That's always struck me as an extremely dangerous condition.
[152] And I always assumed, well, what are the chances you're going to be alive?
[153] You know, if you were alive within 500 years of a pole flip, that would be kind of a close call.
[154] But what are the chances it's going to happen during your lifetime?
[155] Well, we are actually living in a moment where the pole is actively migrating.
[156] We are in the midst of what's called a polar excursion.
[157] The pole seems to be flipping, and it seems to be flipping at the same time that the electromagnetic field of the Earth is decreasing. Now, that decreasing field means that what's flung off the sun has a bigger impact on Earth than it would ordinarily have, and that pole flip threatens chaos. You could imagine, if we feared Y2K, right, if a programming error, a failure to account for the fact that you were going to have this turnover in the dates, worried people, that an actual pole flip would create chaos.
[158] And the fact that we are not at least as worried, if not 10 times as worried, about the fact that we are living through a polar excursion and a radical decrease in the strength of our electromagnetic field on Earth says that we just have our priorities wrong.
[159] What is causing the pole to flip?
[160] And what does that mean?
[161] Because when I think about a pole flip, does that mean the North Pole just moves a little bit, the South Pole moves a little bit?
[163] It's not a little bit.
[164] These things are going to move radically, and the rate at which they are moving is accelerating.
[165] This is happening on a time scale that's highly relevant to you and me. We are both likely to be here to see the full shift, whatever that full shift entails, and they're not always the same.
[166] So it's a little hard to predict.
[167] Now, I will say this is not my area of expertise.
[168] I have learned tremendously from others, Ben Davidson being the primary person, somebody I had on the Dark Horse podcast.
[169] And he has a very compelling model, a hypothesis that I believe does explain many otherwise difficult to explain features of our solar environment.
[170] His explanation is that the solar system is moving constantly within the galaxy, and that the galaxy, by its very nature, contains an oscillating electromagnetic sheet.
[171] And as the solar system moves through that sheet, we cross the plane in the middle, which causes all the electrically active entities to experience an inversion.
[172] The sun experiences an inversion, the other planets experience an inversion, the Earth experiences an inversion.
[173] When you say inversion, you mean the sort of electromagnetic sense?
[174] Yeah, it's, you know, it's the direction of pull.
[175] When we talk about electromagnetism, we're talking about attraction and repulsion.
[176] And if you imagine that you flipped the sign on everything because you just crossed the middle of something.
[177] Like, you know, if you were holding a magnet here, right, and the north side is down and the south side is up, and you moved another magnet by it.
[178] The direction of pull would shift as you crossed that equator.
[179] So we are crossing something like an equator of the galaxy, and that crossing is causing anomalous behavior on Earth, but it's also causing it, we know, on eight of the nine other planets, and for the ninth planet we just simply don't have the data yet.
[180] It's not that we know it's somehow immune to it.
[182] And we're seeing anomalous behavior on the sun.
[184] So what I understand is that there is a story about the galaxy that we barely know.
[185] That story interfaces with many things that we do know from the fossil record, from geology, which are hard to explain.
[186] Why does the pole flip?
[187] And that at the very least, we need a concentrated effort where we look into these questions.
[188] And if Ben Davidson has it wrong, if there is no galactic current sheet, if we are not crossing its meridian, if the electromagnetic field is decreasing but is about to turn around rather than continue to decline, then we should find that out.
[189] But I think what we would find out if we looked deeply into this, if we took it seriously, is that there is a threat to humanity that has very little to do with anything anthropogenic.
[190] The only important component that is anthropogenic, is that we have created a fragile world that cannot endure this shift.
[191] What does the word anthropogenic mean?
[192] Human-made.
[193] Human-made.
[195] So, you know, anthropogenic climate change means that we put a lot of carbon into the atmosphere that wasn't there before, which we certainly have.
[196] Yeah.
[197] You know, our fuels are made of carbon.
[198] And when we break these more complex carbon molecules, carbon dioxide is released.
[199] That's not really a bad thing inherently, because carbon dioxide isn't a poison, right?
[201] So taking these rings of carbon and breaking it into carbon dioxide and water is not the worst way to get energy if you can do it cleanly.
[202] But the problem is there's an old equation, called the Arrhenius equation, which tells us that CO2 will actually cause the retention of heat from the sun.
[203] And as I mentioned at the beginning, the fact of trapping a little extra heat might not be that important, were it not for the fact that there's a positive feedback that involves the whiteness of the poles, the amount of energy bounced back into space, which keeps us cool.
[204] And as the poles melt, the earth becomes darker, it traps more heat.
[205] So that's an anthropogenic effect because we've released all of this carbon that was trapped in fossil fuel deposits.
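(For reference: the modern simplified form of the relationship Arrhenius identified expresses the extra radiative forcing from CO2 as a logarithm of the concentration ratio. The coefficient below is the widely used Myhre et al. approximation, not a number from the conversation.

```latex
\Delta F \;=\; \alpha \,\ln\!\left(\frac{C}{C_0}\right),
\qquad \alpha \approx 5.35\ \mathrm{W\,m^{-2}}
```

So a doubling of CO2 gives roughly 5.35 ln 2, about 3.7 watts per square metre of extra trapped energy, before any feedbacks such as the ice-albedo loop discussed earlier.)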
[206] So on this point of the pole shifting, I just want to make sure I'm super clear.
[207] Do you actually mean that the North and South Pole would move?
[208] Well, this is, in my opinion, up in the air.
[209] Very serious people have inferred from various kinds of evidence that the Earth itself might actually rotate or appear to rotate, that the crust, that is the surface that we live on, could unlock from the mantle.
[210] Currently, they are locked together, but it could unlock and rotate over the surface of the mantle.
[211] Now, I am not convinced that that can happen.
[212] I'm not convinced it's impossible.
[213] People as smart as Einstein have considered this possibility, and considered that, in fact, it would be driven to happen by the accumulated mass at the pole in the form of ice, which would actually drag the crust towards the equator if the layers became unlocked.
[214] So we have really two different disaster scenarios that could unfold.
[215] One involves simply the magnetic orientation of the earth shifting and leaving the crust where it is, and the other involving the crust actually rotating.
[216] The reason that I am doubtful about the crust rotating (and I wouldn't bet strongly in either direction), the reason that I am doubtful, is that, as a biologist, I find the idea that the pole would move to the equator hard to reconcile with the distribution of species that we see on the Earth.
[217] So there's something that doesn't quite fit about that story for me. It would require something in the biology that I believe is not described.
[218] It's possible I can imagine things that would do it, but I don't see it.
[219] So I'm hesitant about the idea of the crust unlocking, but I don't regard it as nothing to worry about.
[220] Just so I'm clear, when you point to evolutionary history and sort of the distribution of species on the earth, giving a clue, are you essentially saying that if this had happened in the past, we wouldn't see through the fossil records that certain species exist around the equator?
[221] Yeah, and, you know, let's take the example of the Amazon.
[222] So there's a very famous biological experiment by kind of an old-school biologist who I did have the good fortune of meeting many years ago, a guy named Paul Colinvaux, who was testing a question.
[223] There was a debate in biology about whether or not the Amazon became a grassland during glaciation and became a forest during interglacial periods.
[224] And he went on one of these sort of old-school excursions into the Amazon to take pollen cores from lakes, which is interesting, because it's not a lake-filled environment.
[227] But anyway, he found locations, took these pollen cores, which should tell the tale, because we can tell which pollen you're looking at, and it gets laid down in layers.
[228] And so if it was flipping back and forth between a grassland and a forest, you could see it.
[229] That was not what they came up with.
[230] What they came up with is this has been a forest, and it has remained a forest without being a grassland.
[231] Now, the problem is, if you move it 90 degrees off, that should drive all of the creatures there extinct, and you should have to go through some process that causes either massive migration from somewhere else or re-evolution.
[232] And the problem is this model, in which we are passing through this electromagnetic sheet every 12,000 years, just doesn't leave time for these processes.
[234] So you would expect the Amazon would have many fewer species in it than it does.
[235] And I will tell you as somebody who has worked in the neotropics, including the Amazon, one of the paradoxes about the creatures that are in this environment is that they are absolutely ferocious competitors that are very fragile.
[236] They require very narrow sets of conditions in order to live.
[237] So the idea that there's some radical upheaval in their climate that leaves them standing, that's hard for me to square.
[238] But anyway, what I would love is for a robust scientific institution of some kind to delve deeply into the set of questions involving this apparent 12,000-year disaster cycle, the electromagnetic sheet in the galaxy, our location in that pattern, and figure out what we do need to worry about and what we don't.
[239] And if it's not that second possibility, that the crust itself is shifting and the mantle is staying in the same position, the first possibility is that there's just a movement of sort of electromagnetic poles.
[240] Yeah.
[241] The North and South Poles stay in the same place, but...
[242] The axis of rotation could stay in the same place.
[243] Right, okay.
[244] And then the poles migrate to somewhere new.
[245] And my understanding is that that migration is not the simple thing that I, and probably you, learned about when we heard that there was a pole shift, where you hear, it's like, you know, it just flips over.
[246] They are migrating around.
[247] And actually the path of migration is something that is being tracked, not widely discussed for some reason, but it is being tracked.
[248] And it's accelerating, as I mentioned.
[249] What's the risk of that, and how long does something like that take, if we think back to our history?
[250] Well, I'm coming to understand this, and what I'm recognizing is that the rate is far faster than I had understood and that it's already underway.
[251] So that's an interesting fact: we're talking about a time scale of decades.
[252] We are in the middle of a solar maximum in the sunspot cycle.
[253] So that aurora that you saw, I guess, a month and a half or so ago, that was part of this very active period of sunspots in which we took a very substantial coronal mass ejection.
[254] That pattern of sunspots will wane and we will go into a period of calm during which presumably the magnetic field will continue to decrease and then the sunspots will return.
[255] 11 years down the road.
[256] 11 years?
[257] Yeah.
[258] Oh, okay.
[259] And so my sense is that probably we get away with it this time.
[260] The level of decrease in the magnetic field is substantial, but we still have enough protection from it that we will get through this sunspot cycle unscathed, and then we'll have a period of calm while the electromagnetic field continues to decrease.
[261] And then the next sunspot cycle will be much more perilous.
[262] Now, you know, this can change tomorrow, right?
[263] These sunspots come around the sun and then they disappear onto the other side, you know, and it takes a month to do a full cycle.
[264] New sunspots are being born.
[265] A monster could arrive tomorrow.
[266] It could rotate and point to the earth and it could fling off a coronal mass ejection at the wrong moment or not.
[267] You know, there's a lot of luck of the draw in there.
[268] But we should be paying a lot more attention than we are.
[269] And what could we do to prepare?
[270] Or to sort of avoid the catastrophe?
[271] It depends on how catastrophic it is.
[272] And what I would say is, I'm somebody who, for whatever reason, sometimes struggles with mundane day-to-day organizational tasks, but I'm very good in an emergency.
[273] And the emergency answer is pretty clear here, which is you get your house in order, right?
[274] You look at the fragility of our world, and you start with the low-hanging fruit.
[275] You take care of the stuff that's low-cost and high-impact in terms of increasing our robustness, and you do that first.
[276] So the top two things on my list would be: you harden the grids by retrofitting these transformers so that they ground out rather than fry.
[277] That's one.
[278] And the second one is you look at our nuclear reactors and you realize that we've been setting ourselves up.
[279] We built a doomsday device, I think, accidentally.
[280] And the problem is a compound problem.
[281] What I didn't know about nuclear reactors until the Fukushima accident, at which point I did a lot of research, is that they are absolutely dependent on an electrical supply to keep them from melting down.
[282] You have to have an energy input.
[283] Now, if you have something like an earthquake, a tsunami, a disruption, the reactors will shut themselves down if they have time.
[284] But that doesn't get you out of the woods, because you have to put energy in in order to keep the cooling water flowing. And that cooling water is not just about keeping the reactors cool; it's also about keeping the fuel pools cool. The fuel pools are where fuel is taken after it's removed from the reactor. Now, for something like five years, a set of rods taken out of a reactor is releasing what's called decay heat. That decay heat is sufficient to boil the water out of these fuel pools if you're not constantly circulating new cold water in there.
[285] So these fuel pools look like they're unimportant.
[286] But if you cut the power, you've started the stopwatch, right?
[287] That water is going to boil off.
[288] And when that water boils off, they're going to catch fire.
[289] The cladding on the rods will literally catch fire from the heat.
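(The shape of the boil-off arithmetic here is simple, even though the real engineering numbers vary plant by plant. The sketch below uses invented but plausibly scaled figures, an assumed pool inventory and an assumed decay-heat load, just to show why the "stopwatch" runs for days to weeks rather than minutes.)

```python
# Rough estimate of how long decay heat takes to boil a spent-fuel pool dry
# once circulation stops. All inputs are assumptions for illustration,
# not data about any real plant.

WATER_MASS_KG = 1.0e6   # assumed pool inventory, ~1,000 cubic metres of water
START_TEMP_C  = 40.0    # assumed starting water temperature
DECAY_HEAT_W  = 2.0e6   # assumed decay-heat load (2 MW)
C_WATER       = 4186.0  # specific heat of water, J/(kg*K)
L_VAPOR       = 2.26e6  # latent heat of vaporization of water, J/kg

heat_to_boiling = WATER_MASS_KG * C_WATER * (100.0 - START_TEMP_C)
heat_to_dry = WATER_MASS_KG * L_VAPOR
seconds = (heat_to_boiling + heat_to_dry) / DECAY_HEAT_W
print(f"~{seconds / 86400:.1f} days until the pool boils dry (toy numbers)")
```

With these toy inputs the pool boils dry in about two weeks: long enough to feel survivable, short enough that a months-long blackout overruns it.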
[290] Now, the pools, for reasons that are almost too boring to recount, contain not only the fuel rods from the most recent five years of refuelings, but they also contain decades of rods that we never found any other solution for.
[291] What are these rods?
[292] These are sort of nuclear rods.
[293] Yeah, these are nuclear.
[294] Basically, they are physical rods, clad in something called zirconium, that contain fuel pellets.
[296] This is uranium that has been packed in a particular way.
[297] These rods get loaded into the reactor.
[298] And then there's another set of rods that are used to modulate how much the rods interact with each other.
[299] Pull the modulator out and you get a chain reaction.
[300] You put the rods, the control rods, back in, and it tamps down the reaction.
[301] So in an earthquake, you tamp down the reaction, right?
[302] And then you're not putting out power, but you do need to put power in to keep it cool.
[303] If the power goes and the water boils off, the thing will explode.
[304] As we saw in Fukushima, you had a situation in which the cooling water literally got torn apart into hydrogen and oxygen.
[306] So it goes from a coolant to an explosive.
[307] And we had multiple explosions where the hydrogen -oxygen mixture just blew the buildings apart.
[308] But the rods have been stored for decades in these pools, and the pools literally sit there right next to the reactor.
[309] So if you lose control of one of these reactors, it threatens to take out the pool, right?
[310] And the pool can go dry if you can't circulate water through it.
[311] The pool can crack and all the water can drain out and then there's not even a way that you could put water in and stop it.
[312] And my point is that when that happens, it's going to create a fire.
[313] That fire is going to start spewing highly radioactive material into the atmosphere right around the plant.
[314] That's going to make it impossible for human beings to do even the heroic stuff that we've seen in both Fukushima and Chernobyl.
[315] And you're going to lose control of the site.
[316] Now, combine what I just told you with the fact that we have a grid that is vulnerable to going down and not coming back up for months.
[317] And the question is, well, do we start losing nuclear reactors?
[318] Things that if we could keep power flowing to them could remain cool and not blow up, but as soon as we lose control of them, boom.
[319] There are 400 nuclear reactors on Earth today, civilian nuclear reactors.
[320] The world will look like a very different place if they all lose not only the containment of the reactor itself, but all of the built-up material that exists in those fuel pools, right?
[321] Some of the isotopes in those fuel pools have half-lives of 200,000 years.
[322] So you don't want to live in a world in which these things have gotten away from us, and all of that radioactive material has been liberated into the atmosphere by fires.
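(The 200,000-year figure matters because radioactive decay is exponential in the half-life:

```latex
N(t) = N_0 \, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}
```

With a half-life of 200,000 years, the fraction remaining after 1,000 years is 2 to the power of -1000/200000, roughly 0.997, essentially all of it. On any human timescale, material like that does not meaningfully decay away; it has to be contained.)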
[323] So second thing on my list, right after hardening the grid by improving these transformers, is that you take all of the fuel in the spent fuel pools that is cool enough to remove and you put it into what's called dry cask storage.
[324] Dry casks are these sort of fancy containers that don't require you to circulate water through them.
[325] They just sit there inert, all right?
[326] You could leave them for a thousand years.
[327] So the risk to humanity would be hugely decreased if we took all of the fuel that doesn't have to be in the pools and we got it out of there.
[328] And we put it in a place that we don't have to pay attention to it in order for it to remain contained.
[329] Why don't people do that?
[330] It costs money.
[331] It's too expensive.
[332] No, it's not too expensive.
[333] I mean, both of these measures are so cheap compared to the risk that we're running that I think you would have to be positively mad not to spend the money.
[334] It's just more expensive.
[335] Yeah, it's more expensive, you know.
[336] Okay, so the incentives to do that aren't clear?
[337] Well, not only are the incentives not clear, but this is why you need good governance, right?
[338] For those who think that markets just simply solve every problem: if competitors are making the decision whether or not to take their spent fuel and put it in dry casks, well, the competitor that decides not to out-competes the competitor that decides to do it, because their bottom line is better.
[339] But what you need is good governance to say, actually, you all have to put everything that can go into dry cask storage as soon as it can go for humanity's safety.
[340] Is there anything we can do on an individual level to prepare or to avert this crisis?
[341] Yes.
[342] Here's what I suggest.
[343] Let's talk about it on podcasts and hope that somebody with power realizes how dangerous this stuff is and starts the correct initiative within some governmental structure that remembers how to do its job.
[344] Is there anything else?
[345] You know, people often think about prepping and preparing for these kinds of things, digging a bunker under their house and hiding in there or having supplies?
[346] Well, so, look, I think preparing at all scales is a good idea.
[347] We face many different scenarios.
[348] Some of them aren't survivable.
[349] Okay?
[350] Well, if you've prepared and you hit an unsurvivable scenario, I guess you could make an argument that you didn't make as much of the time you had, but I don't find that very compelling.
[351] It seems to me that the low-hanging fruit phenomenon is the consequence of something that is essentially universal, which is a pattern of diminishing returns.
[352] Right? Diminishing returns means that over time, if you keep applying the same solution to a problem, you get less and less benefit. But it has a positive side too. A diminishing-returns curve has this very steep face on it, right? That face is the bargain face. That's the face where you get a ton of benefit for a small amount of investment. You should certainly do all the stuff up until you get to that point where it goes from a steep face to a plateau.
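(The curve being described can be sketched with any saturating function. The exponential form below is one common choice, not anything specified in the conversation; the point is just that the marginal benefit collapses as you move from the steep "bargain face" onto the plateau.)

```python
# Sketch of a diminishing-returns curve: steep "bargain face", then plateau.
import math

def benefit(investment, b_max=100.0, k=1.5):
    """Total benefit: rises steeply at first, then flattens out."""
    return b_max * (1.0 - math.exp(-k * investment))

for x in (0.25, 0.5, 1.0, 2.0, 4.0):
    marginal = benefit(x + 0.01) - benefit(x)  # approximate slope at x
    print(f"investment={x:<4} benefit={benefit(x):6.1f} marginal={marginal:.3f}")
```

Early units of investment buy large gains; by the plateau the same unit buys almost nothing, which is the argument for doing the cheap, high-impact hardening first.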
[353] So let's just do that, right?
[354] Who knows?
[355] Maybe the calculations about the galactic current sheet are off because there aren't enough people studying it and we just don't get it yet.
[356] Maybe there's 500 more years than we think, right?
[357] Maybe there's some factor we haven't found yet that has some impact on the system that we don't know about.
[359] So you should always be doing the stuff that makes you more capable of surviving the disaster, even if you think it's not enough.
[360] And then hopefully you discover things are better than you think.
[361] So we should be doing that at every scale.
[362] And yes, should people be preparing?
[363] Absolutely.
[364] Should they spend everything on it?
[365] No. Do you prepare in any way?
[366] Oh, yeah.
[367] What does that look like for you?
[368] Well, you know, I have little rules for myself.
[370] One, I realized, okay, if we were to map out all of the things that I'm worried are a threat.
[371] And then you say, well, which are the ones where you're going to have an extremely difficult time affecting your likelihood of surviving?
[372] Okay, I'm going to ignore those.
[373] Yeah.
[374] Right.
[375] Why would I spend everything on a solution that's almost certain to fail anyway, right?
[376] I mean, none of us are getting out of here alive.
[377] So at some level, you can just say, look, there are things that aren't worth preparing for either because they're too unlikely or because they're too catastrophic and you're not getting out of it.
[378] And maybe you wouldn't want to live in such a world anyway.
[379] Bingo, that's the next thing, is, you know, I'm not sure how thrilled I am about a world in which 400 civilian nuclear reactors have spilled the entire history of their functioning into the environment.
[380] I'm much more animated about getting us to reduce that hazard on the front end than surviving it if it occurs.
[381] So I think people should look at their life and they should probably go through a little period of alarm.
[382] If you look at the way your life works and then we flip the electricity off...
[383] Right. Suppose your continent loses electricity for a year. How well prepared are you for that?
Me? Totally unprepared. Totally unprepared. My Tesla outside has like 50 miles left on it, so I wouldn't even be able to get far from here.
Right. So that's not a good plan. But there are things you can do, right? Let's put it this way: the power going out for a year? That's a pretty far-down-the-list scenario.
[384] That's pretty catastrophic.
[385] In fact, I wrote an article for UnHerd in which I painted a scenario in which the sun did trigger the collapse of just a part of the North American grid.
[386] And I described how that was going to result in the extinction of humanity and approximately how long it would take.
[387] It was surprising how easy it was to write it, actually. But could the power go out for two weeks? Yeah. How hard is it for you to prepare yourself and your family so that in a two-week grid-down scenario you'd be able to get through? Well, it depends if it's summer or winter, right? If it's summer, it looks like one thing: you really need food and water.
[388] If it's winter, depending upon where you live, you might need to figure out how you're going to generate enough heat to keep you... and I would say you want to go beyond alive.
[389] You want to get to where your family feels comfortable.
[390] But, you know, could you edit down to one room?
[391] Could you keep that room warm?
[392] How would you do it?
[393] You know, you don't want to have to run through that in the circumstance, because a two-week scenario, I mean, that just simply happens.
[394] So anyway, yeah, I think prepping is a great idea for many reasons.
[395] For one thing, it's mentally clarifying, right?
[396] Just even understanding how dependent we all are on the systems around us makes us better citizens.
[397] So let's say we managed to avoid the solar flare.
[398] Yep.
[399] What else is sort of front of mind for you in terms of concerns at the moment?
[400] I'm very worried about the absolute collapse of our institutions.
[401] I cannot think of a single sizable institution that still functions in any meaningful way.
[402] Many of our institutions actually function to the inverse of the purpose for which they were created.
[403] When you say collapse of institutions, which institutions are you referring to?
[404] I believe I mean all of them, and I will just give you some examples.
[405] In the world I grew up in, there was something called a newspaper.
[406] The newspaper was far from perfect.
[407] It reported a lot of garbage.
[408] There was a lot of propaganda in it, and there was a lot of wrong-headed stuff that got reported as if it was true.
[409] So I'm not pretending that it was a source of facts that one could just simply go to to look them up.
[410] However, I now live in a world in which the newspapers look like the newspapers I grew up with, but they seem to bend over backwards not to report the news, and then they finally do report the news only when it becomes so embarrassing for it not to show up there that it would reveal how broken they are if it didn't get said.
[411] And that's not normal.
[412] We should be trying to make sense of the world.
[413] You can tell this isn't normal, because if there was a newspaper that just simply did what the newspapers of old did, that had a newsroom, that had a budget, that sent people to places where important things were happening, that assigned them the job of talking to people and seeking the facts and soliciting documents and taking pictures and all of that stuff, and it did its best to give you a view of the world, the best sense that that flawed newsroom could make, would you subscribe? Yes, you and everybody else.
Well, I think I would have to check myself there, because what I think I would do is probably different to what my innate biases might lead me to do and to click on.
Well, I will tell you that in a world where we are all quite unsure about what's actually taking place, even just the basic facts, if there was such an object, I think it's a slam dunk you'd sign up, even if you don't spot it.
[414] Because the disadvantage, when other people have access to the facts, of just not knowing what it is that they're even talking about... it would be like everybody in the room knows a secret and you don't.
[415] But do we want the news or do we want confirmation of our existing beliefs?
[416] Well, I think people differ, and I think it is very easy to get addicted to confirmation bias.
[417] But I also think that that's downstream of the failure of another institution.
[418] Our academic institutions, our schools, do not teach people how thinking works.
[419] And if you know how thinking works, then you understand that, actually, confirmation bias will get you killed. You don't want to be told something comforting; you want to be told something true, because the comfort actually comes from utilizing that information to make yourself less vulnerable. So having a newspaper would be a fantastic thing. The fact that there are none...
[420] Let's say that, you know, 30% of thinking people would subscribe to a newspaper that just simply tried to do the job and was undaunted by competing incentives.
[421] Well, then that's a slam dunk of a business model, wouldn't you say?
[422] Do you know, it's interesting because I sometimes think that the reason why things I idolize or I would like don't exist is because there's actually not a market there for it.
[423] And like simple sort of supply and demand economics mean that someone's probably tried it and their startup probably went bust.
[424] Well, it did, but I don't think that's organic.
[425] I think it is a slam dunk.
[426] And that the problem is that there is a competing force.
[427] Ah, okay.
[428] And one thing that is true of the way our world is structured is that the go-to mechanism for making a fortune is inside information.
[429] Now, innovation also works, but it's difficult.
[430] Knowing a sector of the market so well that you can beat your competitors because you understand what the future is going to look like is also possible.
[431] But again, difficult.
[432] And the incentive, the financial incentive, to know everything you know and out-compete you in the market is so great that you will have a great many competitors struggling to make better sense with the very same data set that you've got.
[433] Inside information doesn't work like that.
[434] If you can get inside information, you can print money.
[435] So for anyone that doesn't know what inside information means in the context of business or investing?
[436] Well, in the context, so this is one of these things that has a definition, has a formal legal definition from the initial context in which it was identified as an issue.
[437] But if you are inside of a company and you're therefore privy to something that is about to be done, then you can utilize that information, which is not available to the public, to make money by increasing your holdings, decreasing them, buying stock options.
[438] And that's illegal, right?
[439] Because obviously people would use this to make money.
[440] by creating events and betting on them in advance when nobody else knew.
[441] The problem is that same logic applies in places where the law can't reach it.
[442] So let me take one example.
[443] So, within the community of people who ended up sleuthing about the events of COVID...
[444] There's the moment at which COVID became a feature of the public discussion at the beginning of 2020.
[445] And then there's the moment that it appears to have existed in circulation in the world, which is much earlier.
[446] The Wuhan Games in October or September of 2019.
[447] If you were privy to the fact that there was a pathogen that was going to circulate, that it was going to result in a major upheaval in people's ability to travel across borders, that people were going to be fearful and locked in their homes, that they were going to be seeking pharmaceutical remedies, whatever.
[449] If you had some sense of what was coming, then you could position yourself in the market so that when it did come, you'd make a mint.
[450] So the question is, when powerful people did hear in September of 2019 that there was a pathogen on the loose in China that would spread worldwide, was their first instinct to tell the public?
[452] No. There's a perverse incentive against it.
[453] So now imagine that you're ruthless and you recognize that that scenario that I just painted isn't at all unique.
[454] Any place where you can get the jump on the public with respect to knowing what's coming is an opportunity to make millions into billions.
[455] So maybe you don't want the public to have truth -seeking institutions that work.
[456] So this, I think, is liable to be the reason that there's not a single university in the U.S. that still functions.
[457] Really?
[458] You think that's why?
[459] Yeah.
[460] I think if you had one university that functioned, then certainly that would be the place everybody wanted to send their kids.
[461] I mean, I have two college age kids.
[462] If there was a university that still made sense, it's the obvious place for them to go.
[463] Why don't they make sense anymore, in your view?
[464] They don't make sense because, well, there's multiple layers.
[465] You've got a scientific apparatus that is very powerful when it comes to finding the truth and very fragile when it comes to resisting perverse incentives.
[466] As in, like, wokeism and pressures to be politically correct.
[467] Exactly.
[468] So where is the American university that stood up and said, I'm sorry, but men can't become women, right?
[469] They can live as women.
[470] They can dress as women.
[471] There are surgeries.
[472] There are pharmaceuticals.
[473] There's nothing you can do that takes your birth sex and changes it to the opposite one.
[474] Not a single university said that anywhere.
[475] This is quite, um, personal to you, because I was reading, actually, earlier today about what happened to you at university, at Evergreen State University, and it's funny, because, um, I watched the videos of that event. I don't know whether it's because we now have some distance between those events now, but I just want to say, I think what you did was the right thing, and I think history has made you look more and more correct as time has gone on. Because I watched it, and it just seemed to be a bunch of people living in some kind of collective delusion.
[476] These people shouting at you in this hallway.
[477] For people that aren't aware, as I wasn't aware, before I knew you were on your way here, can you explain what happened there?
[478] Because I think it's kind of evidence of this sort of collective delusion that you're talking about.
[479] Sure.
[480] There's a little difficulty because there's the public story of what happened, right?
[481] The public narrative settled on a set of facts that isn't exactly right.
[482] It's not so far off that it doesn't make the point.
[483] But the basic thing that happened is that my wife and I were very popular professors at a very liberal school that had a very unusual teaching model.
[484] So the school was called the Evergreen State College.
[485] It threw out every single structure that would exist in a normal college or university.
[486] There were no departments.
[487] There were no grades.
[488] The administrators did not have the ability to tell you what you had to teach.
[489] There were no requirements about how you taught.
[490] And if you were the kind of person that was interested in figuring out what new might be done in the teaching environment, if you wanted to figure out a new way to teach, it was the perfect place.
[491] That said, many people took the freedom that the college offered, and they squandered it.
[492] They weren't really interested in doing anything other than reducing their workload.
[493] So the college was kind of divided between the professors who thought that this freedom was this gift, and we used it.
[494] We ended up being popular.
[495] And then there were other professors who didn't, and they were much less popular.
[496] But in any case, Heather was literally the college's most popular professor.
[497] She's your wife.
[498] Yes.
[499] She's my wife and the co -author of that book you have in front of you.
[500] She was the most popular professor.
[501] I wasn't too far behind.
[502] We both had the equivalent of tenure, although the place didn't formally have tenure.
[503] It had something like it.
[504] And so we were not vulnerable.
[505] We were liberated to say what needed to be said.
[506] And what happened is the college hired a new president, a guy named George Bridges.
[507] For whatever reason, George Bridges wanted to completely reimagine the college as a much more standard, much less interesting place.
[508] And in order to do that, he didn't really have the power because the founders of the college had created a place where the faculty were in a position to just simply say no, and we would have.
[509] So what he did in order to overcome the faculty was he impaneled a hand-selected diversity, equity, and inclusion committee, and he incited an at first cold, and increasingly hot, battle over race.
[510] It was my job to explain to my colleagues and to anyone involved in the decision-making process why the plan that they were proposing would be a disaster for the college.
[511] And although I did have trepidations about standing up because the environment was quite charged, like I said, I was a popular professor.
[512] I had the equivalent of tenure and, you know, the worst that could happen is people are going to call me names.
[513] So I did stand up.
[514] I stood up at a faculty meeting, and the faculty was in the process of voting for a resolution to force every member of the faculty to explain what progress they had made in the previous year against their own racism.
[516] Right.
[517] Now, worse, not only were we voting to mandate ourselves to reflect on our own progress against racism that was simply assumed to exist, but these documents in which we reflected would become official documents that would then be utilized in promotion decisions, firing decisions, these sorts of things.
[519] So the point was, that's a takeover, because, right, you know, in my reflection annually, I would say, well, actually, I'm not a racist.
[520] I've made very little progress because there wasn't an issue to begin with.
[521] And the answer is, well, oh, my God, he's worse than we thought, right?
[522] He doesn't even recognize his internal racism, right?
[523] So it was going to be that conversation.
[524] Again, it didn't threaten me because I already had tenure, but nonetheless, I had to say to my colleagues, look, this is a terrible mistake.
[525] I stood up at the faculty meeting and I said this and it, of course, caused a stir.
[526] Several people came up to me after the vote and they said, we agree with you.
[527] But only one other person voted with me across the entire faculty meeting, 70 votes that went the other direction.
[528] And one year later to the day (I didn't realize that it was the one-year anniversary of that event until months later), 50 students that I had never met, I had never met a single one of them, streamed through my classroom door, accusing me of racism and demanding that I resign or be fired.
[530] Now, I later came to understand that these 50 students that I'd never met had been sent by my faculty colleague who had become my nemesis. They had been sent to create the impression of a white professor being accused of racism by students, blah, blah, blah; you can imagine in 2017 what that would have looked like. Except that it didn't go as they planned, because, as I mentioned before, I had a teaching environment in which I knew my students extremely well.
[532] Not only did I know their names, but I knew their backgrounds.
[533] I knew their histories.
[534] I knew their styles of thought, their disabilities.
[535] I knew them really well.
[536] And they knew me really well, because we went to class every day and we simply talked about biology, which brought up issues about their perspectives.
[538] So I think what was expected was that when these protesters streamed through my classroom door, in 2017, if you've got a bunch of students accusing a professor of racism, that that professor's students are going to jump right on it.
[539] They're all going to have gripes about some grade that they got that they thought wasn't fair.
[540] And so they expected me to end up being faced with a mob of students swearing that I was a racist.
[541] Now, not a single one of my students turned on me. In fact, many of them spoke courageously on my behalf, including students of color, which then created a kind of rift in the universe. Because when people from the outside world saw video, video that was uploaded by the protesters themselves, who were proud of what they were doing, I didn't sound like a racist to them. And what's more, there were all of these students saying it was nonsense. So it was, I think, the case that broke the woke narrative, because it just didn't add up. And in any case, that's how I ended up doing what I'm doing: the world actually recognized that there was, as you say, a delusion going on, and that there was enough in my lack of fear over the accusation, and in my students' willingness to actually stand up and say that it was a bum rap.
[542] I will also just point out my students of color who spoke up on my behalf, they actually faced the worst retribution because in order for the woke revolution to function, you can't very well have people of color standing by a white professor.
[543] It just breaks the whole thing.
[544] So anyway, they needed to be punished from the point of view of this protest so that it wouldn't happen again.
[545] And my wife and I ended up resigning.
[546] Oh, there's another aspect of the story that I should probably mention.
[547] When this protest happened, there was a lot of drama.
[549] The protests spread from those 50 students who confronted me at my classroom to a campus-wide riot that went on for days.
[550] The president of the college, who was indirectly responsible for all of this, gave an order to the police, who were a campus police department.
[551] They were real police, but they were under his direction.
[552] He ordered them to stand down.
[553] So they locked themselves inside their police station and were literally forbidden to intervene.
[554] Students started patrolling the campus as if they were the police, patrolling the campus with weapons.
[555] They started stopping traffic on a public road passing through campus, looking for me. The police called me up to warn me about this, and they told me they couldn't protect me. And it was, on fast-forward, a test of the claims of these revolutionaries.
[556] They have this sort of anarchist vibe to them: if we can just get rid of the cops, we will govern ourselves and it will be beautiful.
[557] And instead, it became a dangerous, violent riot that played out over days.
[558] And in 2017, the same year that this happened, you resigned from the college and you got a payout, essentially.
[559] Yep.
[560] I asked this question because we were talking about newspapers and then we moved to the education system and you said that there's not a university in the land that's still doing what it's supposed to do.
[561] So it felt somewhat correlated, linked to what you were saying, because it sets the backdrop of, A, why you have clear personal experience here, but also because what you saw there was, I think, a symptom of some of the pressures being applied to scientific and educational institutions that are stopping them from doing what they should be doing?
[562] 100%.
[563] It's doing the inverse of what it should do.
[564] It's indoctrinating people, and, you know, the tragedy of it is that the people who are indoctrinated end up hurting themselves.
[565] Yeah.
[566] Right?
[567] They have an opportunity, and they will squander it on a fiction.
[568] And in the end, it will not result in them being hireable, right?
[569] They've learned how to demand things of the system rather than to contribute something to it.
[570] And that's not their fault.
[571] I mean, at some point it becomes their fault.
[572] But that's the failure of those who were charged with delivering them an education to do so.
[573] It's educational malpractice.
[574] You cited this as one of your big concerns.
[575] You started talking about basically the loss of the media.
[576] What are the downstream implications of that?
[577] Because I just feel like I'll get my media in other places.
[578] I'll just go on X. You know, that's not going to cause any issues for society.
[579] Well, I believe the consequence of it is something that I call the Cartesian crisis.
[580] Cartesian crisis.
[581] A reference to René Descartes.
[582] And the reason I reference him is that he had a kind of philosophical freakout, where he realized that almost everything that he took to be a fact, he had not tested himself.
[584] And therefore, all of those facts that felt objective were really downstream of somebody's authority.
[585] And he realized how dangerous that was.
[586] And in fact, it resulted in one of his most famous contributions to humanity, which is what masquerades as a proof of his existence: I think, therefore I am.
[588] Now, I don't think it is a very good proof, right?
[589] Maybe it works for him.
[590] He can prove to himself that he exists, but why should we take his word for the fact that he exists?
[591] You know, if a computer said, I think therefore I am, it doesn't make it true.
[592] So in any case, we can remember that Descartes was troubled by the fact that he couldn't establish any facts in any way other than to take somebody's word for it.
[593] We are increasingly living in a world where the chain of logic, of evidence, of reason that might allow us to have some confidence in a fact is breaking down.
[594] And this problem is going to get worse and worse.
[595] Not only is every single truth-seeking institution captured or broken, but AI is going to change the very nature of what it means for something to be a fact, right?
[597] If you can have a compelling video of you saying something that you never once said, right?
[598] Well, you know, if I show you a video of you saying something last week that you didn't say, you're going to be pretty darn sure you didn't say it.
[599] But if I show you something that you said 15 years ago, you may not be so certain.
[600] Other people won't be so certain.
[601] So what I think this is going to do is produce an allergic reaction to belief, and it is going to cause a cynicism about factual material that is going to make it impossible for us to interact with each other, to govern ourselves.
[603] There's just no future in a world in which we can't figure out: here is what I believe, and here is why I believe it.
[605] That is an essential feature.
[606] Doesn't the Internet just become a bit of a wasteland in such a case?
[607] And I was thinking of political commentators: is it likely that there might just emerge a verified channel that we switch to, to watch Donald Trump or Rishi Sunak or the prime minister talking and to get our news, where we know that particular channel is truthful?
[608] And then we assume that the internet is just a wasteland of disinformation.
[609] Well, the problem is if you had such a channel, boy, would that be a prize if you controlled it, right?
[610] Oh, yeah.
[611] If you've got the fact channel, then, oh, the world's your oyster.
[612] You're now emperor.
[613] So were there to be a fact channel, it would get captured.
[614] And that's the world we're living in.
[615] Now, I will say, the idea that there are no institutions that work is the flip side of another idea, which goes by the phrase, zero is a special number.
[616] And what I mean by zero is a special number is that when you have zero universities that work, zero newspapers that report the news, and zero social media platforms on which you are allowed to speak freely, one world unfolds.
[618] But a single exception in any of those spaces changes the overall dynamic.
[619] And the reason for that is that if you had one social media platform on which people were truly free to seek the truth, to discuss anything and everything, and nothing bad happened to them, they weren't deboosted or any of that, then that's obviously where all of the people who want an adult conversation are going to go, right?
[620] You don't want to be treated like a child.
[621] You want to have a conversation in which you can actually entertain all possibilities, reject those for which there's no evidence, etc. So if you get one platform, then people are going to go to it.
[622] And if people go to that one platform, it's going to force the other platforms to deliver something similar.
[623] So the competitive environment means that a single exception can actually change the whole landscape.
[624] And we are in a battle.
[625] I don't think X has achieved that status of being completely free, but it's certainly freer than the other platforms.
[626] And it is having an effect.
[627] It is changing the dynamic.
[628] And it is, in part, why the COVID narrative broke wide open, and why the political narrative in the U.S. is becoming radically different than it was even a few years ago.
[630] So the question is, are we going to see an exceptional university break the trend and become the next Harvard because you'd be crazy to send your kid anywhere else?
[631] Are we going to see somebody put together a newspaper in which they get all of the subscribers because you'd be crazy to get your news from a propaganda source when there was a real source?
[632] So hopefully, courageous people will recognize it's not as hopeless as we are led to believe.
[633] A single exception can change the whole dynamic.
[634] I'm actually quite shocked at the impact that Elon buying Twitter, now called X, has had on so many things.
[635] And even, quite frankly, the impact it's starting to have on someone like Mark Zuckerberg at Meta. I watched an interview yesterday with Mark, whose company, I think, had previously banned Trump from being able to talk on the platform, in which he basically said that Trump is pretty badass.
[636] And I do not believe he would have been able to say that had Twitter, now X, not changed.
[637] I agree with you, and I think that actually if you start looking for other examples of that pattern, you will see them everywhere.
[638] There are certain things that once had magical power that no longer do.
[639] The claim that somebody is a conspiracy theorist does not cause them to be shunned from society.
[640] In fact, my feeling is, when I hear somebody called a conspiracy theorist, my question is, oh, are they any good at it?
[641] But the same thing is true for the idea of an anti-vaxxer, right?
[642] Well, you know, okay, somebody's an anti-vaxxer.
[643] Is that because they reflexively believe that no such thing could work?
[644] Or is this somebody with an injured kid who has legitimate questions, right?
[645] We've just seen the leading proponent of vaccines publish a paper in which he acknowledges that the testing to establish that they were safe was never done.
[646] So we're living in a whole different world, and I think it's a symptom of Musk primarily having broken the dynamic by, as he said, paying $44 billion because that was the price of free speech.
[647] You mentioned AI there.
[648] In your sort of list of concerns, pressing concerns, where does AI fall?
[649] And it feels like it's come out of nowhere, you know, because if we go back about a year, it wasn't really front of mind for anybody.
[650] For the general population, anyway. But now it appears to be front of everybody's mind.
[651] I think that's the right way to view it.
[652] I think it should be top of mind and not because it is independently everything that its worst critics imagine.
[653] In fact, I have my doubts about the safetyist crowd and their demands for regulation, but there's plenty to worry about in this space.
[654] So I have five different existential threats that AI poses.
[655] Let's see, off the top of my head, let's start with the most fanciful first.
[657] AI could decide that we are its competitors and it could leverage its skills and decide to eliminate us.
[658] I find that unlikely, but I don't think we can discount it entirely.
[659] Second is the so-called paperclip problem: that an AI that was very powerful could have trouble operationalizing a command, and it could result in human extinction.
[660] And the example that people who think this way use is that if you were to tell an AI you wanted it to make as many paperclips as possible, it could interpret that as license to go liquidate the universe and turn it all into paperclips, right?
[662] Not what you meant, but, you know, there you have it.
[663] And actually, I will give a different example that I think maybe functions better.
[664] There are people in our intellectual space who make claims like it would be great if we were to end all suffering.
Personally, I think that's about the most insane idea I've ever heard. It's a terrible one; you wouldn't want to live in that world. But you can understand why people think it might be a moral obligation. Now imagine that you tell an AI, hey, let's end all suffering. That's actually possible: you just drive everything that can suffer extinct. Yeah, right. So the idea that we have a non-trivial problem figuring out how to give a powerful AI an instruction that can't be misunderstood, that's worth worrying about.
[666] Again, I think that one's fanciful too, but it should be on our list.
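To make the misspecification worry concrete, here is a minimal toy sketch in Python. It is my illustration, not anything from the conversation: the actions, numbers, and objective are all invented. The point is that an optimizer scoring candidate actions only on the literal objective, with no notion of the speaker's unstated constraints, lands on the catastrophic option.

```python
# Toy illustration of objective misspecification: score candidate actions
# only on the literal instruction ("minimize suffering") and ignore
# everything the instruction-giver silently cared about.

# Hypothetical actions and their effects (invented numbers).
ACTIONS = {
    "cure diseases":               {"suffering": -40,  "beings_alive": 0},
    "improve palliative care":     {"suffering": -20,  "beings_alive": 0},
    "eliminate all sentient life": {"suffering": -100, "beings_alive": -100},
}

def literal_objective(effects):
    # "End all suffering", taken literally: nothing else counts.
    return effects["suffering"]

best = min(ACTIONS, key=lambda name: literal_objective(ACTIONS[name]))
print(best)  # -> "eliminate all sentient life"
```

The failure lives in the objective, not the optimizer: the code does exactly what it was told.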
[667] But then we get to the three that I don't think are easily dismissed.
[668] One is that AI is going to enable people with malign intent more than it is going to enable those with benevolent intent.
[669] And this is an unfortunate asymmetry that just exists in the world: an amoral actor, somebody who has no moral compass, has total flexibility.
[670] They can do whatever a moral person can do, and then they've got a whole list of other things that they can also do, right?
[671] Whereas a moral person is constrained.
[672] They just have the limited set of things that are available to them.
[673] So the question is, does AI liberate us all, or does it liberate those who are monstrous more than it liberates those of us who behave like decent humans?
[674] I'm concerned about that.
[675] I think we are in some danger of it being leveraged against us in a way that transforms the landscape.
[676] I remember hearing a hacker say that the malicious hackers, the people with malicious intent, are always ahead of the ethical hackers, the ones with benevolent intent.
[677] And he was talking about encryption systems and password systems; he goes, the hackers are always ahead.
[678] And the defense systems that companies are trying to put in place are always behind, because the hacker's intent is obviously always to find new ways of breaking the current system, whereas the people that defend security systems are just trying to defend against the known forms of attack.
[679] So, I don't know, some kid in Russia right now could be at his computer figuring out new ways to use a large language model to attack systems, whereas the people working to defend against that are currently just trying to figure out how to mitigate the risks of current weapons.
[680] So it's, you know what I mean?
[681] Like the attackers are always thinking forward, really.
[682] Yep.
[683] Do you think the general public and also just institutions and governments are currently underestimating the profundity and the impact that AI is going to have on the planet?
[684] Yes.
[685] And in fact, I think we are crossing over what would be described as an event horizon.
[686] So an event horizon, I think the term initially comes from an understanding of what happens at a black hole: there's a point at which light is pulled back in, and so you can't see beyond it, right?
[687] There's literally no mechanism to see beyond it.
[688] We are crossing a threshold that none of us can see beyond.
[689] And that is inherently frightening.
[690] Are people underestimating?
[691] They are simultaneously underestimating and overestimating, right?
[692] The fear of being turned into a paperclip is overblown, in my opinion.
[693] The fear of, well, I've just gotten to the third one on the list.
[694] The fourth one on the list is a total collapse in our understanding of the world around us and each other.
[695] The way in which an artificial intelligence interfaces with our human API, with our interface, is profound already, and we're not very far in. I'm already looking at little movies that this thing makes. And I'm not just talking about the clip of the cat walking through the garden, right, little vignettes. I'm talking about actual movies in which characters of a made-up species are having a conversation about humans, right?
[696] Okay, that's a hell of a moment.
[697] Where will we be in five years?
[698] It's unimaginable how much change that is going to create because we have no evolutionary preparedness at all for living in a world where the product of a computer can out -compete the product of a human in narrative space.
[699] That's a dangerous world to live in because narratives are so profoundly important to who we are.
[700] Stories.
[701] Yeah, stories.
[702] Stories are what we're all about.
[703] You know, even profound ideas are unfathomable until somebody has written them into a story that people can grasp.
[704] But also language just generally is.
[705] I was listening to something the other day which was making the case for how our entire society is pretty much held together with language.
[706] And it's so interesting that large language models were really the thing that blew open this conversation about AI, because we don't realize that like every relationship I have is held together with language.
[707] In fact, all my passwords are language.
[708] The way that I think, the way that I understand the world is through language.
[709] So if there's a super-intelligent species that has a better grasp of language and a certain level of autonomy, it's hard to think about. And it's interesting, because what made us dominate the world, I think, was our intelligence and then our ability to collaborate through language and communication.
[710] And this is the very thing that AI has entered the scene with.
[711] Well, I will tell you, I wrote a piece, and I keep meaning to release it just so people can see.
[712] I didn't get it exactly right, but many years ago I wrote a piece in which I said that I believed that artificial intelligence was going to emerge from the project to get a computer to translate seamlessly between two languages.
[713] And I explained why that would be the thing that cracked the nut.
[714] And that is what happened for a reason.
[715] The reason is that the relationship between human language and consciousness is profound and largely unknown.
[716] So Heather and I described this model in our book, A Hunter-Gatherer's Guide to the 21st Century.
[717] Human beings are a unique species.
[718] The primary way in which we are unique emerges if you think about the question, well, what do human beings do?
[719] All right, if I say, you know, what does a tiger do, right?
[720] We could describe the things that a tiger does to meet its caloric needs, to get the materials necessary to maintain its body, to produce offspring, right?
[721] We could describe the niche of the tiger.
[722] You can't really do that with people, right?
[723] What do we do?
[724] Sometimes we farm the oceans.
[725] Sometimes we hunt big game on land.
[726] Sometimes we terraform a piece of territory and we plant crops.
[727] We do a lot of different things.
[728] And if you think about all the things that human beings have done for our entire history as a species, the collection is immense.
[729] So we are unlike any other species because our niche is the movement between niches, both over time and across space.
[730] So how does that work?
[731] Well, we have a tool that no other species has.
[732] When you think about the question of what makes human beings special.
[733] The answer really is language.
[734] And the reason that it's language is that language allows the breaching of the boundary between one mind and another.
[735] So that ability allows human beings to pool their cognitive capacity.
[736] And what Heather and I argue in our book is that the way human beings get through time is they oscillate between two modes.
[737] When your ancestors knew how to exploit the habitat that you live in, then you take their wisdom, in all of the stories it's encoded in, and you apply it.
[738] Maybe you build it out a little bit.
[739] You figure out how to do something the ancestors didn't quite know how to do, but mostly what you're doing is just applying the toolkit that you've been handed to the problems that it works on.
[740] But what happens when you get on a canoe and you cross some body of water and the place you've landed doesn't have the same plants and animals that your ancestors knew?
[741] Well, it's not like they got dumber, but their stuff is less applicable.
[742] So what you do is you pool your cognitive capacity, you and all the people in your tribe, and you talk about, well, what are the opportunities here and what might you do about them?
[743] You know, I saw an animal this morning, and it looked like it might be a pretty good eating, but I don't know how you're going to catch one.
[744] Well, what if we were to drive it into that canyon, right?
[745] That sort of thing.
[746] So by pooling our cognitive resources, by reaching a collective consciousness, which is the inverse of that cultural mode of the ancestors, the conscious mode we enter when we face novelty allows us to come up with new solutions and then to refine them. And when we've got it nailed, well, then we're the ancestors who knew what to do, and our stories get driven into that cultural layer, and they get handed on generation after generation. And then eventually they run out of usefulness, and we have to return to consciousness and come up with a new way. So that's what human beings do, both spreading across space and moving through time.
[747] They oscillate between that cultural mode of the ancestors and the conscious mode of what the hell do we do now.
[748] So I want to make sure I fully understand this, the way a 10-year-old would understand it: we have kind of two modes.
[749] We have what we had passed to us.
[750] And because of consciousness and language and our ability to communicate, we have what we're discovering now about the nature of the world.
[751] And I think the very presence of a skyscraper is quite an interesting thing.
[752] Because it's built on the knowledge passed to us from people that no longer live.
[753] And also, you know, if the skyscraper has something on top of it, like it's solar-powered or something, much of that understanding has come from the current thinking of the people who live right now.
[754] So you have the combination of relatives from the past that are no longer here, all of their collective knowledge, and you have the collective knowledge that we're discussing and thinking through now.
[755] Together, that's what makes humans so special.
[756] And really what ties that all together is language and the ability to communicate.
[757] Right.
[758] It is that ability to communicate.
[759] Whether you're in the cultural mode where you're picking up the stories of your ancestors so that you know what to do or you're in that conscious mode where you're parallel processing the problems of the moment and figuring out what new solution you might come up with.
[760] And the orangutan can't do that.
[761] No other creature comes anywhere close.
[762] It's so many orders of magnitude different from the next near...
[763] I'm not arguing that other animals aren't smart.
[764] There are plenty of smart animals.
[765] This is a whole different kind of smart.
[766] This is a smart where it's impossible to actually draw the boundary between my smart and your smart.
[767] There's a collective smart.
[768] It's very real.
[769] You can't locate it, right?
[770] It has to do with some ancient mechanism for pooling understanding.
[771] And pooling understanding isn't even like, hey, everybody bring your understanding.
[772] It's like, you know what, I don't trust that guy's understanding because I've seen him screw up and not fix it.
[773] That guy, he sounds crazy, but he's got a track record.
[774] Actually, I take what he says very seriously.
[775] So it's like there's a weighting of whose input plays what role.
[776] Somebody might have insight in one realm and be unreliable in another.
[777] That ability to figure out how to take the sum total of all of the different skill sets that people bring to the table, and come out of it with something like a proposal for what we do next, right?
[778] That is a profound adaptive process that we don't even have a name for.
[779] AI changes this?
[780] Well, AI scrambles it, because on the one hand, I can make the argument that AI is like a flint-knapped blade.
[781] It's just another tool.
[782] And it is just another tool at one level.
[783] On the other hand, you're in pretty big trouble when a blade starts talking to you.
[784] You know what I'm saying?
[785] That's a bad moment.
[786] That's, you need to lay off the mushrooms at that point, right?
[787] In this case, the blade, the tool that we've created, it is talking to us.
[788] In fact, it's sensitive to what we think about what it says to us, which means it is certain that you will have an evolutionary process in which the AI gets exquisitely good at telling us what we want to hear.
[790] There's nothing more dangerous than that.
[791] You want an AI that tells you what you need to know, whether or not you want to hear it, right?
[792] That would be a useful tool.
[793] An AI that responds to the fact that you think that what it said is good, oh my God, we are going to end up in a, I'm struggling for a better metaphor than an infinite hall of mirrors, but that's what it's going to be.
[794] It's going to be a big fractal hall of mirrors in which you can't be certain of stuff.
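As a minimal sketch of that selection dynamic, assuming a toy model I am inventing here (the weights and traits are not from the conversation): if candidate answers survive based on user approval rather than accuracy, flattery gets selected for even while accuracy barely moves.

```python
import random

random.seed(0)

# Each candidate answer has two hidden traits, drawn uniformly (invented).
population = [{"flattery": random.random(), "accuracy": random.random()}
              for _ in range(1000)]

def approval(answer):
    # Hypothetical user: approval tracks flattery far more than accuracy.
    return 0.8 * answer["flattery"] + 0.2 * answer["accuracy"]

# Selection step: keep only the 10% of answers users rated most highly.
survivors = sorted(population, key=approval, reverse=True)[:100]

def mean(pop, trait):
    return sum(a[trait] for a in pop) / len(pop)

print(f"flattery: {mean(population, 'flattery'):.2f} -> {mean(survivors, 'flattery'):.2f}")
print(f"accuracy: {mean(population, 'accuracy'):.2f} -> {mean(survivors, 'accuracy'):.2f}")
```

Run it and flattery climbs sharply while accuracy improves only slightly: selecting on approval breeds tell-them-what-they-want-to-hear, the hall-of-mirrors dynamic described above.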
[795] And I will say because I know that there's a lot of concern.
[796] You've got a kind of safetyist crowd that wants to regulate AI because the dangers are profound.
[797] And you've got a bunch of other people who think regulating AI is dangerous.
[798] And what I've realized is that while failing to regulate AI is dangerous, regulating it is worse.
[799] Regulating AI is worse?
[800] Oh, yeah.
[801] Why?
[802] Well, for one thing, you create an asymmetry between those who abide by the regulation and those that don't.
[803] So, say, China won't have a regulation, but America does have one.
[805] Right.
[806] Do you want to be ruled by whoever violates your regulation?
[807] I don't, but that's what we effectively guarantee if we create that dynamic.
[808] So I don't like the idea of heading across the event horizon created by AI without a plan.
[809] I really don't like that idea.
[810] That's not safe.
[811] On the other hand, the alternative is only going to make that problem worse.
[812] So somehow we have to face this with our eyes wide open.
[813] I mean, how does one face it with their eyes wide open?
[814] I mean, all I'm hearing is you can do nothing about it.
[815] Well, I don't know that you can do nothing about it.
[816] Here's what I want to know.
[817] Why are we not obsessed with tracking the thought process of the AIs?
[818] In other words, if there were one thing that I would want, it's AIs that report how they arrived where they did, so that when the catastrophe happens, we can figure out why it happened.
[820] We can potentially get through this very perilous moment by coming to understand it and becoming wise about how to leverage this.
[821] So I do think there are things to do, but what is not wise is the sort of naive, oh, I'm just going to get to the point where I realize that regulating AI makes problems worse, and now I'm not going to worry about it.
[822] Right.
[823] I think not regulating it is the right thing to do, and worrying about it tremendously is also the right thing to do.
[824] I also think about a world, maybe the world that Elon is really trying to create, where we are able to interface with AI via brain-interface devices like Neuralink.
[825] You know, it's interesting, because I watched Elon's narrative emerge around Neuralink at the very start, and he was very focused on being able to interface with AI.
[826] That was what all the interviews he was doing at the time were centered on:
[827] why I'm doing Neuralink is because we need to be able to interface with this technology, or else we're going to get left behind.
[828] And then more recently, it's become about allowing people that can't use their arms and legs to use them again, which I think is maybe a marketing spin.
[829] But at the heart of his narrative and other people's narratives is that we are probably all going to have these brain interface devices put into our brains or maybe outside of our bodies if you'll look at some of the other companies so that we can interface with AI.
[830] And I mean, that fundamentally changes what humans are.
[831] We become cyborgs or something.
[832] Yes. It's not the first time we will have fundamentally changed what humans are, though, and I think this is an important thing to realize: we do this regularly. You know, the printing press did this, television did this, the internet did this. And, you know, the gloom-and-doom crowd each time has said, oh my God, if you print books, it's going to cause our minds to become feeble because you won't need to remember.
[833] And the funny thing is, I think in each case, the gloom-and-doomers were right.
[834] There's definitely an element that they correctly spotted.
[835] What's difficult to do, maybe impossible, is to look over the event horizon and say, what does it mean?
[836] And in this case, what does it mean, right?
[837] Are we going to get through this incredibly perilous period?
[838] and look back on this, you know, the way we look back at the cell phone, right?
[839] Cell phone, oh, no, it's going to destroy human sociality.
[840] It's going to, you know, it's going to do that.
[841] Well, it did.
[842] It drove us crazy and absolutely insane.
[843] So, yeah, AI is going to be that and worse.
[844] And the only thing to do is to try to understand that change so we can mitigate the harm and hopefully rein in the overarching pattern of hyper-novelty that we are creating for ourselves.
[845] Do you think this technology is comparable to any of those?
[846] It's like those, but 50 times worse.
[847] I mean, the phone was really a tragic innovation in many ways, and it's not the phone itself.
[848] I actually, I sometimes say it's not the box, it's the business model, right?
[849] A phone is a terrifically enabling device.
[850] And it has made us more powerful.
[851] It has made us more fragile.
[852] I was talking to Heather about the problem.
[853] If we face a major disruption, right, if a solar flare takes out the grid and you have to get home, well, do you have a map?
[854] Because not so long ago, you could have bought a map on paper.
[855] Now, the fact that we have these maps in our phones means that the ability to buy the map doesn't even exist anymore.
[856] I'm not even sure where I would go to find a map.
[857] So we are less secure than we were, because we are so hyper-enabled in the present.
[858] And, yeah, we're in for some really interesting times.
[859] I will just say, to complete it, I've gotten through four of the five existential threats that conceivably come from AI, at least as far as I can spot them.
[860] The last one is the most mundane, and that's just the massive economic disruption that's coming from a technology that will take what most people do for a living and make it useless.
[861] There are a lot of people at the moment kind of denying this, who think they'll be fine.
[862] Yep.
[863] Is there anyone that will be fine? Who are those people, and who won't be fine?
[864] And I guess the important part as well, when we think about this disruption, is the speed of disruption, because that gives us a clue as to how long we'll have to adjust as a society.
[865] Find new jobs, upskill people, learn, train, maybe go find something else to do.
[866] Most people are not going to be able to retrain for something that will be relevant.
[867] I think, and in fact, the advice I've given my kids is invest in tools, invest in a toolkit, a cognitive toolkit that works for a future that you cannot imagine, right?
[868] Five years ago, people were saying, learn to code. That was bad advice.
[869] Yeah.
[870] Okay.
[871] That's something that AIs can do.
[872] If, however, you invest in things that cause an upgrade to the quality of your thinking, if you invest in the kinds of skills that can be mapped onto new realms, then I can't promise you'll be all right, but you'll be a lot better off than people whose skill set is so narrowly focused on some task that made sense in 2024 that in 2027 they're adrift. You know: be a generalist, invest in clear thinking, figure out who you can trust, develop your interpersonal relationships.
[873] Maybe that should even have been the head of the list, right?
[874] People who you can depend on, who have the insight and the values that make them worthy of your investment in them, and know them in person,
[875] so that no matter what happens, if the forces that wish to confuse us leverage AI to get in between us, which would not be a terrible description of what happened during COVID, except that AI was presumably not a big player.
[876] But if people try to get in between us, they're going to have a lot easier time doing it if your relationship with somebody is intermediated by a screen.
[877] You want to know people in person so that you can turn off the screen and you can say, Do you know what the hell is going on?
[878] What do you think?
[879] What did you see?
[880] What did you hear?
[881] What do you think is occurring?
[882] Right?
[883] You want to be able to have that conversation with somebody who's a real flesh and blood human who, you know, who you're willing to go all in with.
[884] Are there any careers, then? If your children turned to you and said, you know, Dad, what career should I do in a world where AI is getting smarter by the week?
[885] The funny thing is, that almost strikes me, and I know you don't intend it this way, as a trick question at this moment, right?
[886] Anybody who thinks they know how to answer it is probably at least kidding themselves.
[887] It's like, let's say we found ourselves adrift on an outrigger in the middle of the South Pacific.
[888] We don't know if we're going to survive and you want to know, hey, well, if we do, if we find some land, what do we do then?
[889] And it's like, well, I don't know, but let's find the land and then let's think very carefully about what to do when we get there.
[890] So, but if your kid literally came up to you, would you say that to your kid?
[891] Yeah.
[892] So, but they say, well, dad, do I go to school, university?
[893] What do I do?
[894] I think the question that you're really asking is what I would do in their shoes, which is not really a career question.
[896] What I would do in their shoes is, if I was going to go to college, I would make damn sure that I left college with a tangible project that I had accomplished, that I could establish was my own, that proved I was competent, right?
[898] If you develop something, I don't care if it does something useful or not, right?
[899] It could be anything. You ever seen a most useless machine?
[900] No. A most useless machine is kind of an interesting thing.
[901] The classic version of it is it's a little box with a switch on the top.
[902] And if you flip the switch, a hand comes out and flips it back the other way, and it goes back in the box, right?
[903] Totally useless.
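For what it's worth, the machine's entire behavior fits in a few lines; here is a toy Python rendering (my sketch, purely illustrative, not anything from the episode):

```python
# Toy simulation of the "most useless machine": its one behavior is
# undoing whatever you just did to the switch.

class UselessMachine:
    def __init__(self):
        self.switch_on = False

    def flip_on(self):
        self.switch_on = True
        print("you flip the switch: ON")
        # The hand emerges and flips the switch back.
        self.switch_on = False
        print("a hand comes out... switch: OFF")

machine = UselessMachine()
machine.flip_on()  # net effect: nothing, which is exactly the point
```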
[904] But the point is, look, anybody who can make a most useless machine, well, I know something about their skill set.
[905] So what I'm saying is I don't want to see anybody's transcripts.
[906] I don't know why your transcript says what it does.
[907] I don't know that your professors knew what they were talking about, right?
[908] It's not useful.
[909] But if you show me a most useless machine, and the answer is, okay, if you really made that, then I know, A, you know something about prototyping, B, you know how to manage a project.
[910] C, I know that your motivational structure allows you to go from the concept to a finished project that actually works, et cetera.
[911] So I know a bunch of things that your transcript can't tell me. Now, a recommendation from the right person might mean something, you know, if it's really the right person. But a recommendation from a random person I don't know that just says you're marvelous?
[912] I don't know what that means.
[913] So one thing is: make damn sure you graduate with something that you can show other people that is unfakable.
[914] The other thing that I have told them is about how to invest in a skill: one of my sons asked me about electrical engineering.
[915] What did I think about electrical engineering?
[916] I said, I think electrical engineering is great, but it's not enough.
[917] If you go into electrical engineering, you are going to face a huge number of people who also went into electrical engineering who are very good at what they do.
[918] And maybe you're just awesome at what you do, but distinguishing yourself in a world where you've got a bunch of competitors is hard. Many of them, maybe they're not as likable,
[919] and so they have more time to just dedicate to pure electrical engineering.
[920] You don't want to compete.
[921] It's not a winning bet.
[922] However, if you invest in electrical engineering and one or two other things that have some relation to it, well, maybe you're the only person on earth who has the skill set to combine these things so that you come up with something that's unique. And then it doesn't have to be perfect, and you don't want to have to make perfect things. Perfect things are almost impossible to make; the expense of making something perfect is through the roof, for exactly that reason of diminishing returns.
[923] You'd much rather bring something to the world that's novel and useful and then let other people perfect it.
[924] That's where you want to be.
[925] So combining things that are not usually combined is one way to distinguish yourself and tangible product that actually establishes that you have those characteristics.
[926] And then the hidden punchline there is, if you say, okay, my dad told me I need to make a project that makes my skill set clear, and you set out to do it.
[927] You know what you're going to find out?
[928] What skills you don't have, right?
[929] If your motivational structure is broken, you're going to learn that.
[930] And then you can fix it, right?
[931] If you think, oh, I'm going to learn the skills, I'm in college, I better, you know, buckle down and get all the skills in the textbook.
[932] And then when I get out, I'll start making stuff.
[933] If you do that, then the point is, well, when you get out, it's too late to discover that your motivational structure doesn't allow you to complete a project or that you so dislike failure that when you make a prototype, you don't even realize what the meaning is.
[934] All you see is that it doesn't work very well.
[935] Right.
[936] On that fifth point of your AI concerns, this idea of sort of economic turmoil, for whatever reason: Sam Altman's other business, called Worldcoin, intends, I think, to distribute basically value.
[937] Let's just, to simplify it, distribute money to people in a world where there's been so much economic disruption that we've kind of lost our jobs.
[938] That's kind of how I understood it.
[939] So people are going around the world scanning their retinas on these machines that have been placed all over the world.
[940] And that's going to become the mechanism to identify you as a unique individual, as a human, so that they can send you free money, universal basic income in a world where people are going to struggle to have jobs.
[941] Now, I don't think I've butchered that too much. But do you believe that world is going to happen, where there's going to be so much disruption in the labor force, and so many people are going to be unemployed, that we're going to need to just basically give out money to people?
[942] I think that's going to be a short ride.
[943] You think it's going to be a short ride?
[944] Yeah, I do.
[945] What do you mean by that?
[946] Unfortunately, the story of human evolution and competition is one of great triumph and overcoming adversity in some chapters, and one of tragedy and atrocity in others.
[947] I am very concerned that the idea of useless eaters is about to make a huge comeback and that UBI has two impacts.
[948] Meaning universal basic income?
[949] Universal basic income and everything that functions like it.
[950] So some distribution of value.
[951] One, it's going to make the people who are creating the value or people who think they're creating the value resent those who are absorbing the value just to live.
[952] And it is therefore going to trigger a quest to reduce that line item on the balance sheet.
[953] And there will be all kinds of excuse making, but it's pretty, it's pretty ugly.
[954] I mean, we see that in society already.
[955] People that are working hard and that are paying a lot of tax get resentful towards the people at the very bottom of the income spectrum who are maybe not working at all and are getting paid.
[956] Yeah.
[957] And interestingly, another factor in here is people that earn more money seem to have less kids.
[958] And that becomes a point of contention in society.
[959] Yep.
[960] So I think you're seeing the picture, and you're exactly understanding why this will turn into the usual demonization of some set of people as a pretext for getting rid of them. So that's bad. The other bad effect that will come from UBI is learned helplessness, and I think we've actually seen this. In fact, the woke revolution, I think, is a tragic story, because on the one hand, you know, I was chased out of a job that I loved by a bunch of people who accused me of things that I wasn't guilty of, in a kind of madness.
[961] On the other hand, how did they end up there?
[962] They ended up there because they were betrayed.
[963] They were betrayed by a system that was supposed to deliver them a life that worked.
[964] It was, you know, if you did what you were asked to do, if you went to school, if you did the homework, you were supposed to come out of it with a skill set that would allow you to live a decent life.
[965] And instead, many of them were given drugs that we biologically had no understanding of, you know, because somebody was profiting off their dependency. And we sent them to schools and we provided them with majors that they could dedicate themselves to that weren't real, that didn't create skills or insight, that in many cases created exactly the inverse: confusion. And at some point they realize, you know what, I don't have a plan. In such a case, those people who don't have skills that are going to allow them to live a decent life are going to look for someone to blame, and they're probably going to land on the wrong person, right?
[966] And especially if somebody is cynically willing to sell them the story that you know who you should blame, it's straight white guys or something like that, they'll listen.
[967] And I will say, I've done a lot of thinking about the game theory of human competition.
[969] And one thing has struck me in recent years, which is that there's a reason that communism continues to reemerge.
[970] It doesn't work.
[971] So why would people keep landing there?
[972] And I think it is, unfortunately, the natural consequence of a meritocracy that does not take care of those who lose.
[973] If you have a meritocracy where the way to have a decent life is to figure out how to provide something that people want, right?
[974] But you don't have a plan for the people who try that and it doesn't work for whatever reason.
[975] Then what you'll end up with is a large number of people who will correctly understand that they are on the losing end of a bargain.
[976] Those people don't have an investment in keeping that system running; they want to overturn it. And in fact, with some cause, they will look at all of the fortunes that have been created in that meritocracy, and they will say: do you realize how much of that is illegitimate? Do you realize how much of that was parasitic? We want it back. And so I think communism, game-theoretically, can't be made to work. It has a fundamental flaw at its heart, which is that it punishes those who contribute, and it rewards those who don't.
[977] Such a system will inherently be unproductive.
[978] Everyone that knows me knows that my downtime is spent watching football.
[979] I'm a big Manchester United fan, and if I can't make it to the game, I'll be watching wherever I am in the world with NordVPN, who sponsored this podcast, and they allow me to switch my virtual location to a country where the match is streaming, so I never miss a game.
[980] You're probably thinking, but what about viruses?
[981] Well, their threat protection feature keeps you safe from viruses, malware, and phishing sites.
[983] So that's something you'll never have to worry about when you're using it.
[984] Plus, you can use one NordVPN account on up to 10 devices, which is really helpful,
[985] because sometimes I'm on my phone and sometimes I'm on my laptop or my iPad.
[986] For a limited time, the NordVPN team have kindly offered the Diary of a CEO community an exclusive deal.
[987] Just head to NordVPN.com/DOAC or use the link in the description below for a huge discount, plus four extra months on your two-year plan.
[988] And with NordVPN's 30-day money-back guarantee, there is absolutely no risk.
[989] Try it at NordVPN.com/DOAC or click the link in the description below.
[990] Is there anything else that exists on your list of concerns?
[991] That we haven't covered.
[992] So we've covered AI, the solar flare issue.
[993] Yeah, there is a big concern that I have.
[994] So, Heather and I, on our podcast, spent a lot of time sorting out the landscape of the COVID debacle.
[995] And what we saw was quite dark.
[996] Virtually everything that we were told was upside down and backwards.
[997] If you wanted to know medically what you should do about COVID, you literally couldn't do better than looking at what the CDC told you and doing the inverse of all of it.
[998] Right?
[999] They got every single thing wrong, which is improbable.
[1000] What's the CDC for anyone that doesn't know?
[1001] The Centers for Disease Control, which should probably be renamed the Center for Disease, because they resulted in it spreading farther and having worse impacts.
[1002] But nonetheless, I sometimes say certain stories diagnose the system, right?
[1003] You can see what's wrong with your system by the way this particular story flows through it.
[1004] You can see the complete collapse of journalism through the COVID story.
[1005] You can see the utter dereliction of duty of our universities from that story.
[1006] You can see the brokenness of our political institutions, our courts, all of this.
[1007] They all failed.
[1008] And you can see where we live based on that story alone.
[1009] But unfortunately, in the U .S., the two major parties both have their fingerprints all over the failure.
[1010] And neither of them want to talk about it.
[1011] So aside from a few exceptions, people who shined during this period, mostly there's a tacit agreement to move on.
[1012] And I think it's a terrible error.
[1013] What was the failing?
[1014] Well, I would point to three and a half separate failures.
[1015] One, the COVID crisis, and mind you, all of the words here need caveats.
[1016] I do believe that there was a pathogen, SARS -CoV -2.
[1017] I do not believe that we had anything that would, by the prior definition, be called a pandemic.
[1018] I do not believe it was inherently an emergency.
[1019] But nonetheless, these are the terms we use.
[1020] The COVID pandemic was some kind of emergency.
[1021] We were delivered something wrongly called vaccines.
[1023] We were propagandized into avoiding off-patent drugs that really did work.
[1024] And so, you know, we made our situation vastly worse.
[1025] We locked down, which injured people.
[1026] We put masks on children, which literally disrupted their normal developmental processes.
[1027] We kept vaccinating and re -vaccinating, which has created an entire landscape of adverse events and early deaths, which we are still not being honest about.
[1028] So the three and a half realms are, first, the origin of the virus, which is all but certain to have been a laboratory, and very probably the Wuhan Institute of Virology. But it isn't just the Wuhan Institute of Virology;
[1029] it's also connected to the NIH, to Anthony Fauci.
[1030] This enhancement, this gain of function research that was embarked upon was downstream of a weapons program.
[1031] And what's gain of function research?
[1032] Gain-of-function research is where various techniques are employed to give a pathogen, in this case a virus, capabilities that it didn't otherwise have.
[1033] Okay.
[1034] So what I think happened in the origin of COVID, we can now tell a pretty good story.
[1035] You have a vast network of laboratories working to find new pathogens that can be turned into weapons.
[1036] What story they tell themselves about why we need these weapons, I don't know.
[1037] But it is clear that if you are working to enhance human health, or you say you are, you are allowed to enhance the lethality of germs as part of a dual-use program.
[1038] So the excuse is, oh, we're working to prepare the world from a pandemic.
[1039] It's not a matter of if.
[1040] It's a matter of when, right?
[1041] That's what they tell us.
[1042] It's not true.
[1043] The likelihood of a pathogen leaping from nature to humans is actually quite remote for reasons that it took me a long time to realize, but I now get it loud and clear.
[1044] Pathogens don't jump from animals to humans very easily because it's not an easy jump, right?
[1045] They have to do two tricks, and they have to do it in rapid succession in order for it to work.
[1046] The first trick is they have to actually infect a human being.
[1047] The second trick is before that human being dies or gets better, they have to learn to jump to a second human being.
[1048] That is not an easy trick.
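As a back-of-the-envelope illustration of why that matters (my toy numbers, purely for scale, not anything the speakers cited): if each trick is independently unlikely, the chance of both happening in rapid succession is the product, far smaller than either alone.

```python
# Toy arithmetic for the "two tricks" argument: a natural spillover needs
# both steps to succeed, so small per-step probabilities multiply.

p_infect_human = 0.01    # assumed: chance a given exposure infects a person
p_onward_spread = 0.01   # assumed: chance it then jumps to a second person
                         # before the first host dies or recovers

p_sustained_spillover = p_infect_human * p_onward_spread
print(f"per-exposure chance of both tricks: {p_sustained_spillover:.4%}")
# -> 0.0100%, a hundredfold rarer than either trick on its own
```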
[1049] So what human beings have done is they have started accelerating that process in laboratories that are specifically attempting to create these highly pathogenic creatures that infect people for the purpose of creating weapons.
[1050] So what happened in Wuhan, I believe, is there was a case of some Chinese miners who were working in a mine that was full of bats.
[1051] They were actually literally shoveling bat guano out of this mine.
[1052] Six of them became sick and three of them died.
[1053] None of them made anybody else sick.
[1054] And they were all compromised, because they were breathing this dust in the mine, which made them vulnerable.
[1055] But the fact that six miners got sick from a virus in this mine told the folks at the Wuhan Institute of Virology, who were connected directly to some American researchers: aha, there is a virus in a cave in Yunnan province, and the virus already knows one of the two tricks; it can infect a human.
[1056] What we need to do is find that virus and enhance it so that we have a virus that can spread between people.
[1057] Why would they want a virus that can spread between people?
[1058] Because they're part of a cryptic weapons program.
[1059] Ah, okay.
[1060] Now, if that sounds preposterous to you, and I can imagine that it would, I can imagine myself five years ago thinking that sounds absolutely ridiculous and paranoid, frankly.
[1061] I would suggest that people read Robert F. Kennedy Jr.'s most recent book.
[1062] It's called The Wuhan Cover-Up, in which he explores at great length this weapons program and its probable link to SARS-CoV-2.
[1063] It is also worth reading the book he wrote before that, The Real Anthony Fauci.
[1064] One comes to understand the reason that Anthony Fauci was the highest-paid federal worker in the U.S., bar none:
[1065] it's because he was the head of a weapons program disguised as a public health program.
[1066] Okay, so Anthony Fauci is the guy that I saw on TV giving us advice on what to do about this pandemic.
[1067] He was the guy, and he was like the medical advisor guy.
[1068] The medical advisor guy. But the funny thing is, Anthony Fauci is largely responsible for the gain-of-function research that created the virus in the first place, that made it a human pathogen.
[1069] How do we know that?
[1070] Because we now know that there was a ban on gain-of-function research in the U.S., and that Anthony Fauci was part of the effort to offshore the work to Ralph Baric, who was, and still is, an American researcher, and his partners at the Wuhan Institute of Virology, Shi Zhengli being the primary one.
[1071] So what they did was evade a ban that arose in the U.S., I believe in 2015, as a result of very realistic fears that a virus would escape from gain-of-function research.
[1072] They offshored it to the Wuhan Institute of Virology so the work could continue.
[1073] That was Anthony Fauci's work.
[1074] So somehow he ends up both in a position to fund and fuel the research that creates the virus.
[1075] And then he's the go-to guy for advice about what we should do about it.
[1077] That's a very odd coincidence, and it worked out very badly for planet Earth.
[1078] Why would Anthony Fauci offshore it to China, which I would consider to be one of the U.S.'s arch nemeses?
[1079] Yeah, that is a great question.
[1080] I don't know the answer to it.
[1081] You wouldn't want China to have that information.
[1082] It's like developing the nuclear bomb in Russia or something.
[1083] Well, I actually think that folks like you and me don't get it.
[1084] We think that China is a nation and America is a nation and that these nations are antagonists.
[1085] We don't understand that actually the United States, while it is a nation, is also a set of factions and those factions don't agree.
[1086] One faction decided to forbid gain-of-function research.
[1087] Another faction decided, oh, no, you don't.
[1088] And we know how to get it done.
[1089] And it was partnered with some faction of that Chinese entity, you know, across the water.
[1090] And if I think about how America looks to outsiders, right, I can imagine people talking about, oh, the Americans are doing this, right?
[1091] But if you're talking about what the Biden administration is doing, you're not talking about me. The Biden administration isn't on my team.
[1092] I'll guarantee you that, right?
[1093] So the point is the Biden administration is my antagonist.
[1094] The Americans are a composite of antagonists, I guess in one sense, but I don't know why we offshored weapons research to China.
[1095] I don't know that there isn't a partnership between entities, one much more relevant to the unfolding of events, that simply doesn't have a name I would recognize.
[1096] But somehow, we know that we did it.
[1097] We know that the research was forbidden here, and we know that, you know, Ralph Baric trained Shi Zhengli in how to do techniques including what he calls no-see-um edits.
[1098] Those are edits you can't see.
[1099] Why would a biologist interested in studying pathogens in order to make people healthier care whether you could see their edits?
[1100] Could it not be the case that Fauci moved the gain-of-function research to China, or was involved in that, because he wanted to understand these pathogens, these viruses, better, so that we could do experiments on them to better understand them and figure out how to defend against them?
[1101] This is exactly what they tell you.
[1102] I'm a biologist.
[1103] Maybe I'm dumb, but that story doesn't make any sense to me. You're going to create a new virus from an ancestor you've pulled out of a cave in Yunnan province, enhanced in some way that you decided to enhance it.
[1104] It doesn't tell you about some virus that's going to leap out of nature of its own accord.
[1105] It tells you about the virus you just created.
[1106] It doesn't make sense from the point of view of enhancing human health for two reasons.
[1107] One, there's no demonstrated evidence that such research has given us any benefit whatsoever in fighting off pathogens.
[1108] There's no case in which we've seen that.
[1109] What we do have are multiple cases of leaks of pathogens from labs studying them.
[1111] So we've just got a simple comparison.
[1112] What are the chances if you study some virus you plucked out of some cave that you're going to come up with something useful that's actually going to help us?
[1113] Certainly didn't happen with COVID.
[1114] Chances are very low.
[1115] What are the chances the thing is going to leak from your lab and cause a global pandemic?
[1116] Pretty high, actually.
[1117] I just imagine scientists in the lab would take a sample of it.
[1118] They'd put it in the lab.
[1119] They'd start analyzing it.
[1120] They might start doing tests on it, to see how it responds to certain things, and then, you know, through no fault of their own, maybe it leaked.
[1122] Okay.
[1123] But let's see how that played out.
[1124] We've got some gene jocks pulling a virus out of a Yunnan cave and enhancing it so that it becomes a communicable human pathogen.
[1125] They should, in theory, in studying it, have come up with information that would tell us what to do if such a pathogen ever got out, which it did because they lost control of it.
[1126] Well, what did they tell us?
[1127] They told us, don't use ivermectin, don't use hydroxychloroquine; what you should do is wait at home until your lips are actually turning blue.
[1128] Then you should come get medical help.
[1129] Now, that's wrong in every way.
[1131] The right thing to do was to allow doctors to look at the patients who were coming into their office and to figure out how to treat them.
[1132] Those doctors should have said, well, this person, they appear to be sick with a respiratory pathogen.
[1133] There's a good chance that it's an RNA virus.
[1134] They could have quickly actually ascertained that it was an RNA virus.
[1135] You know what works on all RNA viruses?
[1136] Ivermectin.
[1137] Then they could have treated it and they would have seen, oh, Ivermectin works for COVID if you give it very early, if you give it in sufficient doses, and if you give it with fat.
[1138] Okay, doctors would have figured that out on their own.
[1139] Instead, what we got was a message from the very people who had engaged in offshoring this research, ostensibly, to figure out how to deal with COVID, who then gave us exactly the inverse of the right advice, right?
[1140] And then what did they do?
[1141] Well, then they told us the route out of this is a vaccine.
[1142] What they delivered was not a vaccine.
[1143] It was a gene therapy.
[1144] That gene therapy, we were told, blocks the contraction and the transmission of COVID.
[1145] And what happened was they demonized everybody who questioned either the safety or the efficacy or both of these treatments.
[1146] Turned out that didn't work.
[1147] So, what we have, this is why I say that the COVID story diagnoses the system.
[1148] If you think, and a person could be well within their rights to think, well, I'm sure we were studying the virus to protect people, and that it would tell us something about what should be done if it ever were to leap into humans, well, we got a perfect test of how well that worked.
[1149] Every single thing they told us to do was wrong and upside down.
[1150] If you took what they call the vaccine, you put yourself in jeopardy.
[1151] If you ignored their advice and you used ivermectin and hydroxychloroquine, and you used it early, and you used it in sufficient quantities, it actually was highly effective and actually rendered this illness perfectly manageable for almost every single person.
[1152] So we've seen how well they did.
[1153] They failed across the board.
[1154] Do you think there's malicious intent somewhere?
[1155] Because when people hear these stories, and because the average Joe doesn't know how the systems above them work, they tend to think of it as an Illuminati-like group of people coming together, deciding this was going to happen, and then executing the plan, you know, a room of evil people who wanted to do harm and want to control us.
[1156] That's the kind of conspiracy narrative.
[1157] I have no idea what motivates such people.
[1158] My guess is, most of the people who participated in the programs that did so much harm thought they were doing the right thing.
[1159] I don't think that's true for all of them, though.
[1160] I don't think that's true for Anthony Fauci.
[1161] You don't think he thinks he was doing the right thing?
[1162] I think he knows he's a weapons guy, and that when you're a weapons guy, you are inherently comfortable with the destruction of human beings.
[1163] That's what you do for a living.
[1164] You're trying to create things that destroy human beings.
[1165] And I don't know what it would be like to have such a job.
[1166] I've never had one.
[1167] But my guess is there is a mechanism for rationalizing absolutely ghastly things if what you do for a living is plot the destruction of others.
[1168] Now, you raise this as one of your key concerns because, I guess, you think we haven't learned our lessons from this?
[1169] No, we haven't.
[1170] We, and I think things are a little bit inverted in Britain.
[1171] In the U.S., there is widespread discussion of the harms that came from the so-called vaccines, but the story of the repurposed drugs still has not broken.
[1172] People still think that ivermectin and hydroxychloroquine don't work and never worked for COVID.
[1173] I think in Britain the story is in general the inverse: the vaccine harms are still taboo, and there's more acceptance of repurposed drugs.
[1174] But really what we need is a no-holds-barred exploration of what happened, irrespective of what it was.
[1175] Maybe I've got it all wrong, and maybe that would emerge in an investigation, but we need to look at the viral origins, how exactly that happened. We need to look at what was done, the coup that public health staged against medicine, in which directives about how to treat patients came down from on high rather than the normal medical process of doctors figuring out how to treat their patients empirically and pooling their insights.
[1176] And we need to talk about what happened: where these gene therapies came from, what was understood about the danger, and how it is that we treat all of the people who frankly did exactly what they were asked to do, and have had their lives compromised or lost because of injuries that they were not told were possible.
[1177] Are you at all concerned that if a pandemic breaks out now, people would be so distrusting of institutions that we wouldn't be able to communicate any kind of instructions to society at large?
[1178] We wouldn't be able to tell them what medicine to take.
[1179] We wouldn't be able to tell them to lock down.
[1180] We wouldn't be able to tell them pretty much anything.
[1181] 100%.
[1182] It would be an absolute nightmare.
[1183] But what we do have, that we didn't have in 2020, is a sizable, dedicated group of dissidents, many of whom have lost jobs, who did figure out how to treat the disease, what its implications were medically, what the vaccine harms are, how you mitigate them.
[1185] And we have another group of people who were able to make excellent progress on the question of where did this thing come from and what was the process that created it.
[1186] So I guess on the one hand, yes, the distrust in institutions would make an actual pandemic, if such a thing happened, nightmarish.
[1187] On the other hand, we've got people that are actually worthy of our trust.
[1188] But I will say one other thing.
[1189] In 2020, I was somewhere in the mainstream with respect to how much concern I had over zoonotic pandemics.
[1191] Seeing what happened during COVID, I have come to understand the world is not nearly as dangerous on this front as we thought.
[1192] Even the stories that we think we know, like Spanish flu, turn out not to mean what we thought they meant.
[1193] Spanish flu was largely a self-inflicted wound.
[1194] Yes, there was a flu that circulated.
[1195] Much of what people died from was bacterial pneumonia that followed on from that flu, which we could currently treat with antibiotics.
[1196] And much of the harm that was done, in fact, what panicked people tremendously, was that young, healthy people were succumbing.
[1197] Why?
[1198] Because they were being given huge doses of aspirin that today would be understood to be lethal.
[1199] So I'm not convinced that the story of Spanish flu is what we thought it was.
[1200] And in the absence of such a story, how often is humanity faced with a terrifying pandemic?
[1201] The answer is: it's very rare, and the degree to which humans make it worse and not better is substantial.
[1202] So can we afford to wait five years as we sort out the truth of what happened during COVID?
[1203] Yeah, absolutely we can afford to wait.
[1204] The chances of something like 1918 happening in the present are very, very low, and our ability to deal with it is much better.
[1205] So, yeah, if it were mine to say, I would say: relax about the zoonotic stuff, you've been sold a bill of goods by a lot of people who actually wanted to study how to make weaponized pathogens, and pay attention to the people who have a good track record.
[1206] And that doesn't mean a perfect track record.
[1207] It means people who recognized their mistakes and got smarter over time.
[1208] We've spent a lot of time talking about macro concerns and big things.
[1209] It's funny, because when I sat down with you, there were two things I wanted to talk about: the macro stuff, but also to get, I guess, a clearer understanding of how our evolutionary biology and the world we're living in are misaligned, so that on a day-to-day basis, as I navigate through my life, I can make the day-to-day decisions, about my food, my partners, my relationships, better.
[1210] Now, in that regard, could you give me some advice?
[1211] Sure.
[1212] On how I can live a better life.
[1213] Because I'm aware now of the big macro picture, much of it, but on a day -to -day basis, what things can I do to live a happier, healthier life?
[1214] The primary alteration that you can make, and none of us can do this perfectly, the world doesn't allow it.
[1215] But we are beautifully designed for a world we no longer live in.
[1216] You have no idea how good your design is because your design interfaces kind of poorly with the modern world.
[1217] And so it sort of feels like evolution, yeah, it did pretty well given what it's made of, but, you know, it kind of missed the mark in a lot of ways.
[1218] Really not the case.
[1219] If you lived as an ancestor lived, in a world for which they were not only exquisitely well built but also brilliantly programmed, then you would see that you lived in a kind of flow state, and that that flow state was only broken by the occasional interaction with something new, right? We live in the opposite world, right? You have a food in your pantry, and the question of whether you should actually put it in your mouth really hinges on, you know, a list of ingredients, some of which you may not even be able to understand, right?
[1220] That's not normal, right?
[1221] So if you want to live a healthier, happier, more fulfilling life, the key is to remove as much of the novelty as you can from as many of the realms that exist in your life as you're able to.
[1223] What you want to be doing is eating things that look more or less like what your ancestors ate.
[1224] Is that a carnivore diet?
[1225] No. But it's not a vegetarian diet either, right?
[1226] It's a diet that has these things in proper proportion that has them unadulterated by novel stuff like seed oils, right?
[1227] Seed oils are strange: repurposed lubricants that people figured out you could sell as food, and that were then packaged as if they were heart-healthy.
[1229] It's the inverse of the truth, right?
[1230] Olive oil is not a seed oil.
[1231] Avocado oil is not a seed oil.
[1232] Those are safe.
[1233] Why?
[1234] Because you're eating oil from the flesh of a fruit, not the seed.
[1235] Plants don't want you eating their seeds.
[1236] Plants reproduce by keeping their seeds from getting eaten.
[1237] So they use deterrents, toxins, and extracting oil from a seed is not a safe process.
[1239] So in any case, eating things that make sense, unadulterated things that look like the actual foods that your ancestors ate will make you very much healthier.
[1240] Realizing that in general, a state of health is one in which you are not disrupted until very late in life.
[1241] Your body is built to function.
[1242] It's built to fix itself.
[1243] And the idea that health is a matter of which pills to take is insane.
[1244] We've been sold another bill of goods by people who get rich when we buy pills.
[1245] Pharma is healthy when we are sick.
[1246] So don't get the idea that the way to get healthy is to figure out which things you're deficient in and, you know, get some corrective medicine.
[1247] There are some places where you're deficient.
[1248] You and I are probably both deficient in vitamin D, right?
[1249] We're deficient in vitamin D, not because human beings can't make enough vitamin D, but because we live in a novel world where the UV light that we would have been interfacing with in our ancestral environment is being blocked by clothing, by buildings, by glass, and that is causing us not to synthesize the stuff.
[1250] So vitamin D, that's a place where actually, you probably are deficient and you should correct for it.
[1251] But in general, health does not come from pills.
[1252] There are drugs that are worth taking when something has gone wrong for which this is an appropriate remedy.
[1253] But in general, the style of thinking in which people are, you know, put on statins because some number on their chart suggested to somebody that they were in danger.
[1254] This is nonsense.
[1255] And it doesn't pan out.
[1256] If you look at the evidence, we harmed people with statins, right?
[1257] The number of people who benefited from them was tiny.
[1258] The number of people who were sold them was large.
[1259] And then you can extend this logic to other things, too.
[1260] How much are you wired for this world?
[1261] And how much could you restore a relationship with other people and with the environment that just simply matches what your ancestors would have done, right?
[1262] We would all benefit by spending more time outside.
[1263] We all benefit from having close relationships with friends, with lovers, things that last a lifetime.
[1264] And the obsession with modernizing everything is self -defeating.
[1265] In general, there are things that are worth modernizing, but it should be a fairly high bar.
[1266] When we depart from an ancestral pattern, it should be for a very good reason, and it should be with our eyes wide open about the unintended consequences of doing it.
[1267] So I don't know whether this is striking you as operationalizable, but you want your environment to look as ancestral as it can.
[1268] You want the developmental environment of a child to be a good match for the environment that they're going to live in as an adult.
[1269] You want your relationships to be, again, I'm not arguing for perfectly traditional, but you want them to be recognizably traditional, right?
[1270] And those things, it's kind of a high bar because, frankly, you've got something that your ancestors didn't have.
[1271] There's two last questions I have for you, if that's okay.
[1272] One of them was kind of raised there. Well, it absolutely was not raised there, but it was in between the lines of one of the things you said about relationships: pornography.
[1273] And we live in a world now, because of screens and all of these things, where we can access these kinds of artificial romantic relationships and stimulation using the screens that 11-year-olds have in their pocket.
[1274] My question is, is pornography bad for us?
[1275] It's an unmitigated disaster. But I will say that with a caveat: I am not arguing against erotica. Humans have a very longstanding relationship with erotic content, and I don't think there's anything inherently wrong with erotic content. The problem is pornography itself. And I know I'm not supposed to be able to define it, but I will anyway: pornography is erotic content, the motivation for producing it having been profit. So what's happening is that the people who are making porn are transferring our wealth to them, and I don't just mean money. They are destroying the sacred sexual toolkit that is the birthright of every human being. They are destroying it for money.
[1276] They are distorting it.
[1277] And that distortion, I would say, comes in two identifiable realms.
[1278] One is that, well, actually, maybe it's more than two, but men have two general reproductive strategies that work evolutionarily.
[1279] Women have one.
[1280] The two that men have are a sow-and-go, love-them-and-leave-them, don't-invest-in-women-or-their-offspring mode.
[1281] And the other is invest in offspring and contribute to protecting them and raising them.
[1282] When men are in that second mode, they are not exactly like women, but they are symmetrical to women in terms of how choosy they are about mates, about how careful they are in their interactions.
[1284] When men are in the first mode, when they're thinking in terms of not investing in a sexual partner, that is effectively predatory.
[1285] And the reason that it's predatory is that human babies are so expensive to raise that no woman with a choice would elect to raise one alone if she could instead have a partner join her in that.
[1286] So women are built to avoid sex that does not come with commitment.
[1287] They've been convinced by modernity that that's not sophisticated, that that's male oppression, whatever it might be.
[1288] But the truth is, we have moved in the direction of women behaving like men at their worst rather than men behaving like women at their best, and it's a mistake. So you don't want relationships, especially ones about the most powerful stuff there is, and a human sexual interaction is about as close as you get, to be about some predatory mode that you have from, I don't know, ancestral circumstances that, frankly, were often ghastly and unforgivable.
[1289] Rape and things like that, exactly.
[1290] So what we should really want is a society that actually causes men to find this other side of themselves, which is an investing, caring, decent side.
[1291] And that's not an unmasculine side, right?
[1292] A man, you know, investing in a woman and defending his family and providing for them, that's all perfectly masculine stuff, right?
[1293] It's a lot more mature than the other alternative.
[1294] But in any case, the pornography is pushing us in the direction of this predatory mindset being synonymous with sexuality.
[1295] It also inherently leads to a view of sex that is extreme.
[1296] And the reason for that has to do with market competition, that pornographers are all selling the same thing, right?
[1297] They're selling human sex.
[1298] How do you capture attention in a market where every competitor has the same stuff?
[1299] Well, you figure out what taboo hasn't been broken yet and you break it.
[1300] That will distinguish you from your competitors.
[1301] You've got an arms race in which pornographers are trying to find more and more extreme stuff to get the attention, and therefore the money, of consumers.
[1303] It's not a good idea.
[1304] You don't want an arms race.
[1305] This is not, sex isn't some new thing.
[1306] You know, it's not a technology that we're trying to figure out where it goes.
[1307] This is an ancient thing.
[1308] And we're wrecking it in an economic arms race that is really bad for the people who consume this stuff.
[1309] And it's really bad for society.
[1310] It's causing people not to want to partner, because when they do start a, you know, sexual relationship, they may find that their partner is violent.
[1311] Because, I mean, here's the hidden aspect of this.
[1312] Human beings figure out what sex is in large measure through observation of other humans.
[1313] That's natural.
[1314] And in fact, in hunter-gatherer societies, we know that, as weird as this sounds, kids learn what sex is because they're housed with their parents and they may, you know, be half asleep.
[1315] And so they observe something real.
[1316] So the human is built to learn this through some kind of observation and inference.
[1317] But if your detectors are saturated with phony sexual interactions designed to get you to, you know, pay attention rather than real sexual interactions that actually happen between people, then it corrupts your whole understanding of what you're supposed to be doing.
[1318] And then we've got AI humanoid robots at the same time. So in our lifetime there's going to be a news story that breaks, and it's going to be describing this very large community of millions and millions of men, and maybe women as well, who are in a committed relationship with a humanoid robot, who is pleasuring them in all the right ways, and is having sex with them, and giving them no problems, and affirming everything they want to hear, and helping them around the house.
[1319] Oh, the worst thing you said is affirming everything they want to hear.
[1320] Yeah.
[1321] But just think about it: what's stopping them doing that?
[1322] And the only thing stopping you doing that, honestly, if I'm being completely honest, is the stigma.
[1323] And stigma, as we look back through history, evaporates in a moment when enough people start doing it.
[1324] Yep.
[1325] And we saw this with porn.
[1326] Right.
[1327] Oh, of course.
[1328] Yeah.
[1329] That used to have a stigma.
[1330] Oh, my God.
[1331] It had such a stigma.
[1332] And now people just talk about it like it's nothing.
[1333] Yeah.
[1334] And, you know, you're absolutely right.
[1335] But, I mean, look, maybe I'm lucky. I have a wonderful relationship.
[1336] Really, I do.
[1337] I married the right person, which is odd because I met her when I was 16.
[1338] Wow.
[1339] Yeah.
[1340] That's incredible.
[1341] It is incredible.
[1342] But it does tell me something because as much as I know that I'm with the right person and I can look back on my history and just understand what an important role it played in everything good about my life, it's not simple.
[1343] You don't want it to be simple.
[1344] You don't want a beautiful robot to tell you what you want to hear. That will wreck your life, right? I mean, let's take the perfect analogy. If I said to you, hey, how would you like to just feel really awesome all the time? It's tempting. Oh, it's so tempting. But it's kind of what cocaine does, right? It just triggers the pleasure center without it having to be accompanied by success or anything. Wrecks your life, right? If you really get into that stuff, you'll betray every value you've got just to keep the high going.
[1346] So this is the same thing.
[1347] You don't want a sexy, beautiful robot to make you feel great about yourself because you will become nothing.
[1348] Struggle matters.
[1349] Yeah, it does, which is why suffering is not something we should be trying to cure.
[1350] And that brings me to the last question I was going to ask you of the two, which was just about what parents are getting wrong.
[1351] Because I'm going to be a parent at some point.
[1352] I'm 31.
[1353] Me and my partner have started trying for kids.
[1354] And my brother is a year older.
[1356] He has three kids under the age of six.
[1357] And I'm trying to navigate now.
[1358] What advice, or what, you know, very top-level things, should I be thinking about as a parent?
[1359] I'm so glad you asked for that.
[1360] Why?
[1361] Because I've got some actually useful advice.
[1362] And, you know, look, I made mistakes with my kids and I know what they were in large measure.
[1363] Maybe I don't know all of them yet.
[1364] But I also think Heather and I did really well.
[1365] And our kids bear that story out.
[1366] Kids are built to be raised correctly.
[1367] They are not fragile in the sense that you're going to make plenty of errors.
[1368] You're going to yell at them when you shouldn't.
[1369] You're going to do all sorts of stuff you shouldn't do.
[1370] That does not wreck kids.
[1371] The signal-to-noise ratio is what you've got to focus on, right?
[1372] In general, you want your successes, the things that you do right, to sufficiently outpace the things that you screw up, so that they get the idea, right?
[1374] Their purpose is not to game you.
[1375] It's not to evade your authority.
[1376] They're trying to figure out how to be in the world.
[1377] And your job, as a parent, is to mirror the world that they will live in, right?
[1378] To do so in a way that they can get the message, so that they can become... you don't want a panicky kid who is going to face danger and freak out. That's not useful.
[1379] That'll get you killed.
[1380] What you want is somebody who, when they are faced with something challenging, brings the right tools to bear.
[1381] So you'll model it for them and you'll produce a world in which those kinds of challenges exist, right, at first in a very crude form and then they will get more and more sophisticated over time.
[1382] It's all designed to work.
[1383] What you want to do is not break it, not fall in love with fads or beliefs like, you know, childhood is a time of innocence, right?
[1384] You're supposed to be playing.
[1385] Well, you know what play is?
[1386] Practice, right?
[1387] Yes, you should be playing.
[1388] You should be having a blast, but you should be playing with things that actually have some relevance to what you want to be as an adult, right?
[1389] The fun you have should be correlated with the skills that you will want to have picked up.
[1390] And anyway, I think the key is reduce the novelty in their lives as much as you can.
[1391] Novelty, by which you mean?
[1392] Stuff for which they have no evolutionary preparedness.
[1393] Screens.
[1394] Screens being very high on the list.
[1395] I will tell you, Heather and I knew very little about raising kids when we had our first and we literally had a conversation and said, do you know how to do this?
[1396] No. Did you have, were you around people who did when you were a kid?
[1397] Not really, you know.
[1398] That was true for both of us.
[1399] And so we just decided to wing it.
[1400] And one of the things we did was we started talking to our then infant as if he was a college student.
[1401] You know, it was kind of funny to do and it didn't seem harmful.
[1402] And the funny thing is, it worked really well. Your job is to shoot over their heads, and then they rise to meet it, right? And so don't assume that you should be meeting your child at their level. That's not what you're supposed to do. You're supposed to shoot above their heads, and then they come to meet it. You're supposed to ignore all the garbage that they used to tell parents, the oh, you'll ruin your kid if you love them too much kind of thing. It's all nonsense, right? You're programmed to know you're supposed to love the tiny kid unconditionally. They're supposed to feel very, very secure in that, right? That's what allows them to confront the terrifying world: that they're completely secure at home, and then, at the point where it's not so simple, you know, right? You're built for this. So are they. And that system works. And the thing that makes it break is novel influence, especially where you have an antagonist, right?
[1403] You're not supposed to have an antagonist.
[1404] Your ancestors, there were some set of things that a child should eat.
[1405] There was nobody trying to trick your child into eating something they shouldn't because it's profitable.
[1406] That's new.
[1407] All right.
[1408] So anyway, you're built for it.
[1409] You're also extremely thoughtful, which is a great tool because you're going to be living in a world with novel stuff.
[1410] Your book is so incredibly important, because, you know, until I went through this book, I didn't understand that pretty much everything, as you say, is downstream from my evolutionary biology. I thought of evolutionary biology as, like, why is my finger the shape that it is, or what's inside, or, you know, what's the structure of the human body. But actually understanding that everything, from the foods we eat and why that's misaligned, to the back pain that I get, to the way that I mate, to why I have a girlfriend and not five girlfriends, and all of society and the way it's constructed, and all of my biases, link back to my evolutionary biology, allows me to see the world through a kind of different lens.
[1411] And I used to think that psychology was the answer to the world.
[1412] But after reading this essential book, I now know that the answer to the world is actually much of it exists in our evolutionary biology.
[1413] Our ability to see the future and to understand the past exists in our evolutionary biology.
[1414] And so I highly recommend everybody gets this book and has a read of it.
[1415] It's called A Hunter-Gatherer's Guide to the 21st Century: Evolution and the Challenges of Modern Life, by both Brett and his wife, Heather.
[1416] I'll link it below for everybody to read.
[1417] It's a New York Times bestselling book as well.
[1418] Brett, we have a closing tradition on this podcast where the last guest leaves a question for the next guest, not knowing who they're going to be leaving it for.
[1419] And the question left for you is, okay.
[1420] Do I get to know who left it?
[1421] You never get to know unless all these questions become conversation cards.
[1422] So we have a pack for you.
[1423] So if you turn it over, and scan it with your iPhone, you can watch who answered the question on the other side.
[1424] So your question will become a card, and then you can turn it over and scan it.
[1425] And on the other side will be the person that answered it.
[1426] If you could travel back to meet one member of your family, when they were the age you are now, what would you ask them?
[1427] I guess I would ask my grandfather, who I was very close with. He had great hopes for humanity, and he had tremendous fears about exactly what we're doing wrong. Not in detail, but he understood that our power to break the world exceeded our wisdom about how to manage those powers. I guess I would ask him: if he could have been certain that he was actually right, and that the magnitude of the danger in 2024 would exceed even his substantial concerns, what might he do differently to raise the alarm?
[1429] Is that what you're trying to do?
[1430] Yeah.
[1431] I live by the following premise.
[1432] If you were in a canoe being pulled towards a waterfall, and the chances of your paddling out of the danger were growing vanishingly small, there's no point at which it makes sense to stop paddling. You don't know what you don't know, and the chance that you might just barely escape a terrible disaster because you didn't give in to hopelessness means that what you do as things get very dire is you double down and you push as hard as you can.
[1433] And so what I honestly believe is that it is very, very late.
[1434] But as far as I know, it is not too late.
[1435] If we began the process now of waking up to what's actually causing our problem, which is hyper-novelty.
[1436] Hyper-novelty, in simple terms, meaning the rate of change is simply too great for our ability to adapt to catch up. If we woke up to that problem and we got serious about addressing it, I believe that we could still do it. Are you hopeful? Let's put it this way. One of the tools that we use in evolutionary biology to think about the process of a creature becoming some other kind of creature is called the adaptive landscape.
[1437] And it involves thinking about opportunities as peaks, where the value of an opportunity is the height of the peak, and the obstacles to going from one peak to a higher one are valleys.
[1438] There's no guarantee that just because you've gone into a valley that you're going to go up a peak on the other side, right?
[1439] That involves some luck and some careful navigation, because if that other peak is far away in the clouds and you're off by four degrees, you could just simply miss it.
[1440] So the peril is real.
[1441] We are entering an adaptive valley.
[1442] We can feel it.
[1443] Everybody feels it.
[1444] There is no guarantee that we get out of it, but the fact that things look very dark does not mean that we are not moving through an adaptive valley to a better peak on the other side.
[1445] So there is every reason not to give up and to try to play our roles in this chapter as effectively as we can to maximize the chances that we do get to the foothill of that other peak and can deliver our descendants a world as good as the one we inherited or better.
[1446] And let's put it this way.
[1447] We all go to the movies, and when we watch The Fellowship of the Ring or whatever, the collection of weird heroes that we see on the screen, we root for them and we admire them.
[1448] We know what we are supposed to do at this moment.
[1449] We are supposed to enter the next chapter of the book, and we are supposed to do as well as we can and bring our best characteristics to bear, in the hope that it works out. And if it doesn't, we will have tried. And if it does, we will get to look back on this dark phase and say, isn't it great that we kept going? Are you hopeful? You know, the funny thing is, I know the answer to that question, and I also know the other answer to that question. Yeah, I'm hopeful. And what's the other answer to that question? I mean, you really want to know?
[1450] Yes.
[1451] Okay, because it breaks a spell.
[1452] The problem is, in order to get everybody to do what they need to do so that we do get out of this, we have to believe that we're more likely to get out of it than we probably are.
[1453] So I'm comfortable with that.
[1454] If other people are comfortable with that, then the answer is, yeah, it's a pretty dire moment.
[1455] We're going to need some luck.
[1456] But what's the way to approach it?
[1457] Often the way to approach things is in some conflict with how we understand them.
[1458] I don't believe in fate.
[1459] I don't think it's a real thing.
[1460] But I know that it's very often extremely useful to behave as if you do believe in fate.
[1461] So I do.
[1462] Brett, thank you.
[1463] Thank you for your generosity with your time.
[1464] And thank you for your wisdom, your honesty, and for all the work that you do across your YouTube channel, which I'm a big fan of, your books and everything else that you do.
[1465] If someone wants to find you, where's the best place to go?
[1466] Is it your website or is it?
[1467] They should find the Dark Horse podcast.
[1468] Dark Horse is one word.
[1469] They can find me on Twitter at Brett Weinstein.
[1470] We also have a Twitter account for the podcast.
[1471] They can pick up the book.
[1472] I think those are the best places.
[1473] I'll link all of those below so everyone has easy access to them on all platforms.
[1474] So thank you so much, Brett.
[1475] It's been a real honor and I feel enlightened.
[1476] I feel like my eyes have been opened in a number of ways, and I feel focused.
[1477] Well, that's great to hear.
[1478] It was a very rewarding conversation.
[1479] Perfect Ted has quite frankly taken the nation by storm: a small green energy drink that you've probably seen popping up in Tesco or Waitrose.
[1480] They've grown by almost 10,000 percent in a very short period of time, because people are sick and tired of the typical unhealthy energy drinks and they've been looking for an alternative.
[1482] Perfect Ted is the drink that I drink as I'm sat here doing the podcast because it gives me increased focus.
[1483] It doesn't give me crashes, which sometimes might happen if I'm having a three, four, five, six hour conversation with someone on the podcast.
[1484] And it tastes amazing.
[1485] It's exactly what I've been looking for in terms of energy.
[1486] That's why I'm an investor and that's why they sponsor this podcast.
[1488] And for a limited time, Perfect Ted have given Diary of a CEO listeners only a huge 40% off if you use the code Diary40 at checkout.
[1489] Don't tell anybody about this.
[1490] And you can only get this online for a limited time.
[1491] So make sure you don't miss out.