#186 – Bryan Johnson: Kernel Brain-Computer Interfaces


Lex Fridman Podcast


Full Transcription:

[0] The following is a conversation with Bryan Johnson, founder of Kernel, a company that has developed devices that can monitor and record brain activity.

[1] And previously, he was the founder of Braintree, a mobile payment company that acquired Venmo and then was acquired by PayPal and eBay.

[2] Quick mention of our sponsors: Four Sigmatic, NetSuite, Grammarly, and ExpressVPN.

[3] Check them out in the description to support this podcast.

[4] As a side note, let me say that this was a fun

[5] and memorable experience, wearing the Kernel Flow brain interface at the beginning of this conversation, as you can see if you watch the video version of this episode.

[6] And there was a Ubuntu Linux machine sitting next to me collecting the data from my brain.

[7] The whole thing gave me hope that the mystery of the human mind will be unlocked in the coming decades, as we begin to measure signals from the brain in a high bandwidth way.

[8] To understand the mind, we either have to build it or to measure it.

[9] Both are worth a try.

[10] Thanks to Bryan and the rest of the Kernel team for making this little demo happen.

[11] And now it's time for the advertisement part of the program.

[12] I try to make these things interesting.

[13] I give you time stamps.

[14] So if you want to be cheeky, you can skip, but please still check out the sponsors by clicking the links in the description.

[15] We are really picky about the sponsors we take on.

[16] So hopefully if you buy their stuff, it's actually going to be useful.

[17] You're going to live better, longer, happier lives.

[18] That said, we all die in the end, so it really doesn't matter, does it now?

[19] This show is sponsored by Four Sigmatic, the maker of delicious mushroom coffee and plant-based protein.

[20] Does the coffee taste like mushrooms, you ask?

[21] No, it does not.

[22] But it does taste delicious.

[23] I did a three-day fast a few days ago, and I actually ran out of coffee, so I was drinking green mint tea, didn't have any caffeine.

[24] And I honestly think it's not the caffeine.

[25] It's the entire experience of coffee that I missed.

[26] So you acquire usually a taste for a particular kind of coffee, like Four Sigmatic, and then that, almost in a Pavlovian way, starts getting associated with comfort.

[27] And for me, it's actually associated with focus.

[28] So I don't know if it's placebo, if there's actual effects from the caffeine or the coffee, but when I take a sip of coffee, I'm ready to get work done.

[29] I'm ready to focus.

[30] We are all Pavlov's dog, after all.

[31] Anyway, get up to 40% off and free shipping on mushroom coffee bundles if you go to 4Sigmatic.com slash Lex.

[32] That's 4Sigmatic.com slash Lex.

[33] And also, they have protein.

[34] If you want protein, they got protein.

[35] But the coffee is what I really love.

[36] This show is also sponsored by NetSuite.

[37] This one's for the business owners.

[38] Running a business is hard.

[39] If you own a business, tools like QuickBooks and spreadsheets can make it even harder than it needs to be.

[40] You should consider upgrading to NetSuite.

[41] It allows you to manage financials, HR, that's human resources, inventory, e-commerce, and many more business-related details all in one place.

[42] I've been thinking a lot about what it takes to run a large company.

[43] Whether that's something that, when done well, can be a source of happiness, or whether it's always a kind of sequential experience of putting out fires, dealing with emergencies, drama, taking big risks where failure is always a possibility, a real possibility, dealing with failure, dealing with the possibility of losing the entire company, dealing with the immense uncertainty of all the possible trajectories that your product development, that the innovation, should take.

[44] So all of that just seems like hell.

[45] Anyway, you should at least have good tools for this thing.

[46] Get a free product tour at NetSuite.com slash Lex.

[47] If you own a business, try them out.

[48] Schedule your free product tour at NetSuite.com slash Lex.

[49] That's N-E-T-S-U-I-T-E.com slash Lex.

[50] This show is sponsored by Grammarly, a writing assistant tool.

[51] It checks spelling, grammar, sentence structure, and readability.

[52] Grammarly Premium, the version you pay for, offers a bunch of extra features.

[53] My favorite is the clarity check, which helps detect the rambling, overcomplicated chaos that many of us can descend into.

[54] Like this very thing that I wrote for myself.

[55] Most of these reads I do off the top of my head, but some of them I write down.

[56] The parts I write down are even worse than the parts that go off the top of my head.

[57] I wish I had something like real-time Grammarly for spoken word.

[58] But in the medium of the written word, I truly value simplicity.

[59] I find simplicity to be just beautiful.

[60] And so Grammarly is a kind of nudge towards simplicity and clarity with the tools that their premium service provides.

[61] I recommend it highly.

[62] Even if you don't agree with a suggestion it makes, ultimately I think the debate with a suggested change will create a better written piece.

[63] Anyway, Grammarly is available on basically any platform and in major sites and apps like Gmail and Twitter and so on.

[64] Do more than just spellcheck.

[65] Get to what you really mean with Grammarly Premium. Get 20% off Grammarly Premium by signing up at Grammarly.com slash Lex.

[66] That's 20% off at Grammarly.com slash Lex.

[67] This show is also sponsored by ExpressVPN.

[68] They put a layer of protection between you and powerful technology companies that control much of what we do online.

[69] The suggested talking points that ExpressVPN keeps sending me really make these like big tech companies into the enemy.

[70] I think we should definitely be skeptical, especially skeptical of the surveillance state.

[71] But I think these companies bring a lot of good too.

[72] But yes, it is good to have a layer of protection in terms of your data.

[73] That said, I do think there's a lot of money to be made by companies to actually be transparent and actually give you control over your data.

[74] I think it's a win -win.

[75] And I have no idea why Facebook is doing what it's doing. I have no idea why Twitter is doing what it's doing.

[76] I think it was just the momentum of the past, and hopefully we're going to see a pivot towards companies giving individual users more control over their data.

[77] Anyway, go to expressvpn.com slash LexPod to get an extra three months free. That's ExpressVPN.com slash LexPod.

[78] This is the Lex Fridman Podcast, and here is my conversation with Bryan Johnson.

[79] You ready, Lex?

[80] Yes, I'm ready.

[81] Do you just want to come in and put the interfaces on our heads?

[82] And then I will proceed to tell you a few jokes.

[83] So we have two incredible pieces of technology and a machine running Ubuntu 20.04 in front of us.

[84] What are we doing?

[85] Are these going on our head?

[86] They're going on our heads, yeah.

[87] And they will place it on our heads for proper alignment.

[88] Does this support giant heads?

[89] Because I kind of have a giant head.

[90] A giant head?

[91] As in, like, an ego, or are you saying physically? Both.

[92] It's a nice massage.

[93] Is it okay to move around?

[94] Yeah.

[95] It feels... oh, yeah.

[96] This feels awesome.

[97] Thank you.

[98] That feels good.

[99] That feels good.

[100] So this is big head friendly.

[101] It suits you well, Lex.

[102] Thank you.

[103] I feel like I need to, uh, I feel like when I wear this, I need to sound like Sam Harris, calm, collected, eloquent.

[104] I feel smarter, actually.

[105] I don't think I've ever felt quite as much like I'm part of the future as now.

[106] Have you ever worn a brain interface or had your brain imaged?

[107] Oh, never had my brain imaged.

[108] The only way I analyze my brain is by talking to myself and thinking.

[109] No direct data.

[110] Yeah.

[111] Yeah, that is definitely a brain interface that has a lot of blind spots.

[112] It has some blind spots.

[113] Yeah.

[114] Psychotherapy.

[115] That's right.

[116] All right.

[117] Are we recording?

[118] Yeah, we're good.

[119] All right.

[120] So, Lex, the objective of this, I'm going to tell you some jokes.

[121] And your objective is to not smile, at which, as a Russian, you should have an edge.

[122] Make the motherland proud.

[123] I got you.

[124] Okay.

[125] Let's hear the jokes.

[126] Lex, and this is from the Kernel crew.

[127] We've been working on a device that can read your mind, and we would love to see your thoughts.

[128] Is that the joke?

[129] That's the opening.

[130] Okay.

[131] If I'm seeing the muscle activation correctly on your lips, you're not going to do well on this.

[132] Let's see.

[133] All right, here comes the first.

[134] I'm screwed.

[135] Here comes the first one.

[136] Is this going to break the device?

[137] Is it resilient to laughter?

[138] Lex, what goes through a potato's brain?

[139] I already failed.

[140] That's the hilarious opener.

[141] Tater thoughts.

[142] What kind of fish performs brain surgery?

[143] I don't know.

[144] A neural surgeon.

[145] And so we're getting data.

[146] of everything that's happening in my brain right now?

[147] Lifetime, yeah.

[148] We're getting activation patterns of your entire cortex.

[149] I'm going to try to do better.

[150] I'll edit out all the parts where I laughed.

[151] Photoshop a serious face over me. You can recover.

[152] Yeah, all right.

[153] Lex, what do scholars eat when they're hungry?

[154] I don't know what.

[155] Academia nuts.

[156] That was a pretty good one.

[157] So what we'll do is, so you're wearing Kernel Flow,

[158] which is an interface built using technology called spectroscopy.

[159] So it's similar to the wearables on the wrist, using light.

[160] So using LiDAR, as you know.

[161] And we're using that to image, it's a functional imaging of brain activity.

[162] And so as your neurons fire electrically and chemically, it changes blood oxygenation levels.

[163] We're measuring that.

[164] And so you'll see, in the reconstructions we do for you, you'll see the activation patterns in your brain throughout this entire time we were wearing it.

[165] So, in the reaction to the jokes, and as we were sitting here talking. And so we're moving towards a real-time feed of your cortical brain activity.

[166] So there's a bunch of things that are in contact with my skull right now.

[167] How many of them are there?

[168] And what are they?

[169] What are the actual sensors?

[170] There's 52 modules, and each module has one laser

[171] and six sensors. And the lasers fire in about 100 picoseconds, and then the photons scatter and absorb in your brain, and then a few go in, a bunch go in, then a few come back out, and we sense those photons, and then we do the reconstruction for the activity.

[172] Overall, there's about a thousand plus channels that are sampling your activity.
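To make the arithmetic behind those numbers concrete, here is a minimal sketch, in Python, of how a channel count in the "thousand plus" range could fall out of 52 modules with one laser and six sensors each. The pairing rules and the neighbor count below are assumptions for illustration, not Kernel's actual optode design.

```python
# Toy sketch of how a "thousand-plus channels" figure could arise from
# 52 modules with 1 laser and 6 detectors each. The cross-module pairing
# rule is a hypothetical assumption, not Kernel's published design.

N_MODULES = 52
SOURCES_PER_MODULE = 1
DETECTORS_PER_MODULE = 6

# Within a module: every detector sees its own module's laser.
within = N_MODULES * SOURCES_PER_MODULE * DETECTORS_PER_MODULE  # 312

# Between modules: assume each laser also reaches the detectors of a few
# neighboring modules (assumed average; real coupling depends on optode
# geometry and scalp optics).
assumed_neighbors = 2.5
between = int(N_MODULES * assumed_neighbors * DETECTORS_PER_MODULE)  # ~780

print(f"within-module channels:  {within}")
print(f"cross-module channels:  ~{between}")
print(f"total:                  ~{within + between}")  # lands in the 1000+ range
```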

[173] How difficult is it to make it as comfortable as it is?

[174] Because it's surprisingly comfortable.

[175] I would not think it would be comfortable.

[176] Something that's measuring brain activity, I would not think it would be comfortable, but it is.

[177] I agree.

[178] In fact, I want to take this home.

[179] Yeah, yeah, that's right.

[180] So people are accustomed to being in big systems like fMRI, where there's 120-decibel sounds and you're in a claustrophobic encasement, or EEG, which is just painful, or surgery.

[181] And so, yes, I agree that this is a convenient option to be able to just put on your head.

[182] It measures your brain activity in the contextual environment you choose.

[183] So if we want to have it during a podcast, if we want to be at home, in a business setting, it's the freedom to record your brain activity in the setting that you choose.

[184] Yeah, but sort of from an engineering perspective, what is it? There's a bunch of different modular parts, and there's like a rubber band thing where they mold to the shape of your head.

[185] That's right.

[186] So we built this version of the mechanical design to accommodate most adult heads.

[187] But I have a giant head, and it fits fine.

[188] It fits well, actually.

[189] So I don't think I have an average head.

[190] Okay, maybe I feel much better about my head now.

[191] Maybe I'm more average than I thought.

[192] Okay, so what else is there interesting

[193] you can say while it's on our heads?

[194] I'm keeping this on the whole time. This is kind of awesome. And it's amazing for me as a fan of Ubuntu. I use Ubuntu, you guys use that too. But it's amazing to have code running to the side, measuring stuff and collecting data. I mean, I just feel, like, much more important now that my data is being recorded. Like somebody cares. Like, you know, when you have a good friend that listens to you, that actually, like, listens, like actually is listening to you? This is what I feel like. It's like a much better friend, because it's accurately listening to me. Ubuntu.

[195] What a cool perspective.

[196] I hadn't thought about that of feeling understood.

[197] Yeah.

[198] Heard.

[199] Yeah, heard.

[200] Deeply by the mechanical system that is recording your brain activity versus the human that you're engaging with.

[201] That your mind immediately goes to that there's this dimensionality and depth of understanding.

[202] Yeah.

[203] of this software system, which you're intimately familiar with, and now you're able to communicate with this system in ways that you couldn't before.

[204] Yeah, I feel heard.

[205] Yeah, I mean, I guess what's interesting about this is your intuitions are spot on.

[206] Most people have intuitions about brain interfaces that they've grown up with this idea of people moving cursors on the screen or typing or changing the channel or skipping a song.

[207] It's primarily been anchored on control.

[208] And I think the more relevant understanding of brain interfaces or neural imaging is that it's a measurement system.

[209] And once you have numbers for a given thing, a seemingly endless number of possibilities emerge around that of what to do with those numbers.

[210] So before you tell me about the possibilities, this was an incredible experience.

[211] I can keep this on for another two hours, but I'm being told that, for a bunch of reasons, just because we probably want to keep the data

[212] small and visualize it nicely for the final product.

[213] We want to cut this off and take this amazing helmet away from me. So Bryan, thank you so much for this experience.

[214] And let's continue on, helmetless.

[215] All right.

[216] So that was an incredible experience.

[217] Can you maybe speak to what kind of opportunities that opens up, that stream of data, that rich stream of data from the brain?

[218] First, I'm curious, what is your reaction?

[219] What comes to mind when you put it on your head?

[220] What does it mean to you and what possibilities emerge and what significance might it have?

[221] I'm curious where your orientation is at.

[222] Well, for me, I'm really excited by the possibility of various information about my body, about my mind being converted into data such that data can be used to create products that make my life better.

[223] So that, to me, is an exciting possibility. Even just, like, a Fitbit that takes, I don't know, some very basic measurements of your body is really cool.

[224] But the bandwidth of information, the resolution of that information, is very crude, so it's not very interesting.

[225] The possibility of recording, of just building a data set, coming in a clean way and a high-bandwidth way from my brain, opens up all kinds of possibilities. I was kind of joking when we were talking, but not really: it's like I feel heard, in the sense that it feels like the full richness of the information coming from my mind is actually being recorded by the machine.

[226] I mean, there's a, I can't quite put it into words, but there is genuinely for me, this is not some kind of joke about me being a robot.

[227] This genuinely feels like I'm being heard in a way that's going to improve my life.

[228] As long as the thing that's on the other end can do something useful with that data.

[229] But even the recording itself is like once you record, it's like taking a picture.

[230] That moment is forever saved in time.

[231] Now, a picture cannot allow you to step

[232] back into that world.

[233] But perhaps recording your brain is a much higher resolution thing, much more personal recording of that information than a picture that would allow you to step back into where you were in that particular moment in history and then map out a certain trajectory to tell you certain things about yourself that could open up all kinds of applications.

[234] Of course, there's the health applications that I consider.

[235] But honestly, to me, the exciting thing is just being heard.

[236] My state of mind, the level of focus, all those kinds of things, being heard.

[237] What I heard you say is you have an entirety of lived experience, some of which you can communicate in words and in body language, some of which you feel internally, which cannot be captured in those communication modalities.

[238] And that this measurement system captures both the things you can try to articulate in words, a lower dimensional space using one word, for example, to communicate focus, when it really may be represented in a 20 -dimensional space of this particular kind of focus and that this information is being captured.

[239] So it's a closer representation to the entirety of your experience captured in a dynamic fashion that is not just a static image of your conscious experience.

[240] Yeah.

[241] That's the promise.

[242] That was the feeling.

[243] And it felt like the future.

[244] So it's a pretty cool experience.

[245] And from the sort of mechanical perspective, it was cool to have an actual device that feels pretty good.

[246] That doesn't require me to go into the lab.

[247] And also, the other thing I was feeling, there's a guy named Andrew Huberman.

[248] He's a friend of mine.

[249] Amazing podcast.

[250] People should listen to the Huberman Lab podcast.

[251] We're working on a paper together about eye movement and so on.

[252] And we're kind of, he's a neuroscientist and I'm a data person, machine learning person.

[253] And we're both excited by how much the data measurements of the human mind, the brain, and all the different metrics that come from that can be used to understand human beings in a rigorous scientific way.

[254] So the other thing I was thinking about is how this could be turned into a tool for science, sort of not just personal science, not just like Fitbit style.

[255] like how am I doing on my personal metrics of health, but doing large-scale studies of human behavior and so on.

[256] So like data, not at the scale of an individual, but data at a scale of many individuals or a large number of individuals.

[257] So it's personal being heard was exciting and also just for science is exciting.

[258] It's very easy.

[259] Like there's a very powerful thing to it being so easy to just put on that you could scale much easier.

[260] If you think about that second thing you said, about the science of the brain: we've done a pretty good job, like we, the human race, has done a pretty good job figuring out how to quantify the things around us, from distant stars to calories and steps and our genome.

[261] So we can measure and quantify pretty much everything in the known universe except for our minds.

[262] And we can do these one-offs if we're going to get an fMRI scan or do something with a low-res EEG system, but we haven't done this at population scale.

[263] And so if you think about it, human thought or human cognition is probably the single largest raw input material into society at any given moment. It's our conversations with ourselves and with other people. And we have this raw input that we haven't been able to measure yet. And when I think about it through that frame, it's remarkable. It's almost like we live in this wild, wild west of unquantified communications within ourselves and between each other, when everything else has been grounded. For example, I know if I buy an appliance at the store or on a website, I don't need to look at the measurements on the appliance and make sure it can fit through my door.

[264] That's an engineered system of appliance manufacturing and construction.

[265] Everyone's agreed upon engineering standards.

[266] And we don't have engineering standards around cognition.

[267] It has not entered as a formal engineering discipline that enables us to scaffold in society with everything else we're doing, including consuming news, our relationships, politics, economics, education, all the above.

[268] And so to me, the most significant contribution that Kernel's technology has to offer would be the formal introduction of the formal engineering of cognition as it relates to everything else in society.

[269] I love that idea, that you kind of think that there is just this ocean of data that's coming from people's brains that's being, in a crude way, reduced down to, like, tweets and text and so on.

[270] So it's a very hardcore, massive-scale compression of the actual raw data.

[271] But maybe you can comment, because you're using the word cognition: I think the first step is to get the brain data, but is there a leap to be taken to sort of interpret that data in terms of cognition?

[272] So your idea is basically that you need to start collecting data at scale from the brain, and then we start to really be able to take little steps along the path to actually measuring some deep sense of cognition?

[273] Because as I'm sure you know, we don't, we understand a few things, but we don't understand most of what makes up cognition.

[274] This has been one of the most significant challenges of building Kernel. And Kernel wouldn't exist if I hadn't been able to fund it initially by myself.

[275] Because when I engage in conversations with investors, the immediate thought is what is the killer app?

[276] And of course, I understand that heuristic.

[277] What they're looking at is, they're looking to de-risk.

[278] Is the product solved?

[279] Is there a customer base?

[280] Are people willing to pay for it?

[281] How does it compare to competing options?

[282] And in the case with brain interfaces, when I started the company, there was no known path to even build the technology that could potentially become mainstream.

[283] And then once we figured out the technology, we could commence having conversations with investors, and it became, what is the killer app?

[284] And so, what has been... so, I funded the first $53 million of the company.

[285] And to raise the round of funding, the first one we did, I spoke to 228 investors.

[286] One said yes.

[287] It was remarkable.

[288] And it was mostly around this concept around what is a killer app.

[289] And so internally, the way we think about it is, we think of the go-to-market strategy much more like the Drake equation, where if we can build technology that has the right characteristics, the data quality is high enough, it meets certain thresholds of cost, accessibility, comfort, it can be worn in contextual environments.

[290] If it meets the criteria of being a mass-market device, then the responsibility that we have is to figure out how to create the algorithm that enables humans to then find value with it.

[291] So the analogy is like brain interfaces are like early 90s of the internet, is you want to populate an ecosystem with a certain number of devices.

[292] You want a certain number of people who play around with them, who do experiments of certain data collection parameters.

[293] You want to encourage certain mistakes from experts and non -experts.

[294] These are all critical elements that ignite discovery.

[295] And so we believe we've accomplished the first objective of building technology that reaches those thresholds.

[296] And now it's the Drake equation component of: how do we try to generate 20 years of value discovery in a two- or three-year time period?

[297] How do we compress that?
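As a rough illustration of that Drake-equation framing, here is a hedged sketch in Python: the expected discovery rate modeled as a product of independent factors, so compressing twenty years of discovery into two or three means multiplying one or more factors up. Every name and number below is a hypothetical placeholder, not Kernel's actual model.

```python
# A minimal sketch of the "Drake equation" framing: instead of betting on
# one killer app, treat discovery as a product of factors. All factor names
# and values here are invented for illustration.

factors = {
    "devices_in_field": 10_000,          # units in experimenters' hands
    "active_user_fraction": 0.5,         # fraction who actually collect data
    "experiments_per_user_per_year": 4,  # playful and rigorous studies alike
    "hit_rate": 0.001,                   # chance an experiment finds real value
}

expected_discoveries_per_year = 1.0
for name, value in factors.items():
    expected_discoveries_per_year *= value

print(f"expected discoveries/year: {expected_discoveries_per_year:.1f}")
# As with the original Drake equation, the point is less the output number
# than seeing which factor (device count, comfort, cost, ...) to improve.
```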

[298] So just to clarify, when you mean the Drake equation, which, for people who don't know... if you listen to this podcast, I bring up aliens in every single conversation,

[299] so I don't know how you wouldn't know what the Drake equation is. But you mean, like, the killer app would be one alien civilization in that equation.

[300] So meaning, like, this is in search of an application that's impactful, transformative.

[301] By the way, we need to come up with a better term than killer app. It's also violent, right? You could go, like, viral app? That's horrible too, right? It's some very, uh, inspiringly impactful application. How about that? No? Yeah, okay, so let's stick with killer app, that's fine. But I concur with you, I dislike the chosen words in capturing the concept. You know, it's one of those sticky things that is effective to use in the tech world, but when you now become a communicator outside of the tech world, especially when you're talking about software and hardware and artificial intelligence applications, it sounds horrible.

[302] Yeah, no, it's interesting.

[303] I actually regret, now having called attention, because I regret having used that word in this conversation because it's something I would not normally do.

[304] I used it in order to create a bridge of shared understanding of what terminology others would use.

[305] Yeah.

[306] But, yeah, I concur.

[307] Let's go with impactful application.

[308] Or...

[309] Just value creation.

[310] Value creation, something people love using.

[311] There we go, that's it.

[312] Love app.

[313] Okay, so do you have any ideas?

[314] So you're basically creating a framework where there's the possibility of a discovery of an application that people love using.

[315] Do you have ideas?

[316] We've begun to play a fun game internally, where when we have these discussions, we begin circling around this concept of: does anybody have an idea?

[317] Yeah.

[318] Does anyone have intuitions?

[319] And if we see the conversation starting to veer in that direction, we flag it and say, human intuition alert, stop it.

[320] And so we really want to focus on the algorithm. There's a natural process of human discovery: when you populate a system with devices and you give people the opportunity to play around with it in expected and unexpected ways, we are thinking that is a much better system of discovery

[321] than us exercising intuitions.

[322] And it's interesting, we're also seeing a few neuroscientists who have been talking to us, where I was speaking to this one young associate professor, and I approached the conversation and said: hey, we have these five data streams that we're pulling off.

[323] When you hear that, what weighted value do you add to each data source?

[324] Which one do you think is going to be valuable for your objectives and which one's not?

[325] And he said, I don't care.

[326] Just give me the data.

[327] All I care about is my machine learning model.

[328] But importantly, he did not have a theory of mind.

[329] He did not come to the table and say, I think the brain operates in this way and these reasons or have these functions.

[330] He didn't care.

[331] He just wanted the data.

[332] And we're seeing that more and more that certain people are devaluing human intuitions for good reasons, as we've seen in machine learning over the past couple years.

[333] And we're doing the same in our value creation market strategy.

[334] So collect more data, clean data, make the product such that the collection of data is easy and fun and then the rest will just spring to life.

[335] That's right.

[336] Through humans playing around with it.

[337] Our objective is to create the most valuable data collection system of the brain ever.

[338] And with that, then applying all the best tools of machine learning and other techniques to extract out, you know, to try to find insight.

[339] But yes, our objective is really to systematize the discovery process because we can't put definite timeframes on discovery.

[340] The brain is complicated and science is not a business strategy.

[341] And so we really need to figure out how to, this is the difficulty of bringing, you know, technology like this to market.

[342] It's why most of the time it just languishes in academia for quite some time.

[343] But we hope that we will cross over, you know, cross over and make this mainstream in the coming years.

[344] The thing was cool to wear, but are you chasing a good reason for millions of people to put this on their head and keep it on their head regularly?

[345] Is there, like, who is going to discover that reason?

[346] Is it going to be people just kind of organically discovering it, or is there going to be an Angry Birds-style application?

[347] That's just too exciting to not use.

[348] If I think through the things that have changed my life most significantly over the past few years: when I started wearing a wearable on my wrist that would give me data about my heart rate, heart rate variability, respiration rate, metabolic approximations, et cetera, for the first time in my life, I had access to information, sleep patterns, that were highly

[349] impactful.

[350] They told me, for example, if I eat close to bedtime, I'm not going to get deep sleep.

[351] And not getting deep sleep means you have all these follow -on consequences in life.

[352] And so it opened up this window of understanding of myself that I cannot self-introspect and deduce these things.

[353] This is information that was available to be acquired, but it just wasn't.

[354] I would have to get an expensive sleep study.

[355] Then it's an n of one, like one night.

[356] And that's not good enough to run all my trials.

[357] And so if you look just at the information that one can acquire on their wrist, and now you're applying it to the entire cortex of the brain, and you say what kind of information could we acquire, it opens up a whole new universe of possibilities.

[358] For example, we did this internal study at Kernel where I wore a prototype device, and we were measuring the cognitive effects of sleep.

[359] So I had a device measuring my sleep.

[360] With 13 of my coworkers, we performed four cognitive tasks over 13 sessions.

[361] And we focused on reaction time, impulse control, short -term memory, and then a resting state task.

[362] And with mine, we found, for example, that my impulse control was independently correlated with my sleep, outside of behavioral measures of my ability to play the game.

[363] The point of the study was, the brain study I did at Kernel confirmed my life experience: my deep sleep determined whether or not I would be able to resist temptation the following day.

[364] And my brain data showed that, as one example.
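For readers curious what the analysis behind a finding like that might look like, here is a minimal sketch: correlating a deep-sleep metric with next-day impulse-control scores across sessions. The numbers are invented, and the real study additionally controlled for behavioral measures of task performance.

```python
# A hedged sketch of the kind of analysis behind the study described above:
# correlating deep sleep with next-day impulse-control scores across sessions.
# The data is made up; the real study used 13 sessions of four cognitive
# tasks plus brain recordings.
import numpy as np
from scipy import stats

deep_sleep_min = np.array([95, 60, 110, 45, 80, 70, 120, 55, 90, 65, 100, 50, 85])
impulse_score = np.array([0.9, 0.6, 0.95, 0.4, 0.7, 0.65, 1.0, 0.5,
                          0.8, 0.6, 0.9, 0.45, 0.75])

r, p = stats.pearsonr(deep_sleep_min, impulse_score)
print(f"r = {r:.2f}, p = {p:.4f}")
# "Independently correlated... outside of behavioral measures" would further
# mean regressing out task performance before testing the brain measure.
```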

[365] And so if you start thinking if you actually have data on yourself on your entire cortex and you can control the settings, I think there's probably a large number of things that we could discover about ourselves, very, very small and very, very big.

[366] Just, for example, like, when you read news, what's going on?

[367] Like, when you use social media, when you use news, like, all the ways we allocate attention.

[368] That's right.

[369] With the computer.

[370] I mean, that seems like a compelling place where you would want to put on a Kernel.

[371] By the way, what is it called?

[372] Kernel Flux?

[373] Kernel, like what?

[374] Flow.

[375] We have two technologies.

[376] You wore Flow.

[377] Flow.

[378] Okay.

[379] Okay.

[380] So when you put on the Kernel Flow, it seems like a compelling time and place to do it is when you're behind a desk, behind a computer.

[381] Because you could probably wear it for prolonged periods of time as you're taking in content.

[382] There could be a lot of... because so much of our lives happens in the digital world now.

[383] Coupling the information about the human mind with the consumption and the behaviors in the digital world might give us a lot of information about the effects of the way we behave and navigate the digital world on the actual physical meatspace effects on our body.

[384] It's interesting to think, so, in terms of work: I'm a big fan of Cal Newport and his ideas of deep work. With few exceptions, I try to spend the first two hours of every day in deep work.

[385] Usually, if I'm, like, at home and have nothing on my schedule, it's going to be up to eight hours of deep work, of focus, zero distraction.

[386] And for me to analyze, I mean, I'm very aware of the waning of that, the ups and downs of that.

[387] And it's almost like you're surfing the ups and downs of that as you're doing programming, as you're thinking about

[388] particular problems.

[389] You're trying to visualize things in your mind.

[390] You start trying to stitch them together.

[391] You're trying to, when there's a dead end about an idea, you have to kind of calmly like walk back and start again.

[392] All those kinds of processes.

[393] It'd be interesting to get data on what my mind is actually doing.

[394] And also recently started doing, I just talked to Sam Harris a few days ago and been building up to that.

[395] I started meditating using his app, Waking Up. I very much recommend it. And it'd be interesting to get data on that, because it's like you're removing all the noise from your head. It's an active process of active noise removal, active noise canceling, like the headphones. And it'd be interesting to see what is going on in the mind before the meditation, during it, and after, all those kinds of things. And in all of your examples, it's interesting that everyone who's designed an experience for you, so whether it be the meditation app or the deep work or all the things you've mentioned, they constructed this product with a certain number of knowns.

[396] Now, what if we expand the number of knowns by 10x or 20x or 30x?

[397] They would reconstruct their product to incorporate those knowns.

[398] And so this is the dimensionality that I think is the promising aspect: that people will be able to use this quantification, use this information, to build more effective products.

[399] And this is, I'm not talking about better products to advertise to you or manipulate you.

[400] I'm talking about our focus is helping people, individuals, have this contextual awareness and the quantification, and then to engage with others who are seeking to improve people's lives.

[401] that the objective is betterment across ourselves individually and also with each other.

[402] Yeah, so it's a nice data stream to have if you're building an app, like if you're building a podcast listening app, it would be nice to know data about the listener so that like if you're bored or you fell asleep, maybe pause the podcast.

[403] It's like really dumb, just very simple applications that could just improve the quality of the experience of using the app.

[404] I'm imagining, if you have your Neurome...

[405] Lex, and there's a statistical representation of you, and you engage with the app, and it says: Lex, you're best served to engage with this meditation exercise in the following settings, at this time of day, after eating this kind of food or not eating, fasting, with this level of blood glucose and this kind of night's sleep. All these data combine to give you this contextually relevant experience, just like we do with our sleep.

[406] You've optimized your entire life based upon what information you can acquire and know about yourself.
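A toy version of the contextual recommendation described here might look like the following. The feature names, thresholds, and suggestions are invented for illustration; a real system would presumably learn them from the user's own history rather than hard-code them.

```python
# A hypothetical sketch of a contextual recommendation: combine recent
# measurements into a suggestion about when to meditate. All features and
# thresholds below are invented, not from any real product.

def best_meditation_window(sleep_score: float, glucose_mg_dl: float,
                           hours_since_meal: float) -> str:
    # "Good" conditions: assumed to match the user's historically best sessions.
    if sleep_score > 0.8 and 80 <= glucose_mg_dl <= 110 and hours_since_meal >= 3:
        return "now: conditions match your historically best sessions"
    # Poor sleep: assumed to predict low-rated sessions.
    if sleep_score < 0.5:
        return "later: prioritize rest; sessions after poor sleep rated poorly"
    return "mid-morning: your average-condition default"

print(best_meditation_window(sleep_score=0.85, glucose_mg_dl=95,
                             hours_since_meal=4))
```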

[407] And so the question is, how much do we really know of the things going around us?

[408] And I would venture to guess, in my own life experience, my self-awareness captures an extremely small percent of the things that actually influence my conscious and unconscious experience.

[409] Well, in some sense, the data would help encourage you to be more self -aware, not just because you trust everything the data is saying, but it'll give you a prod to start investigating.

[410] I'd love to get a rating, like a ranking of all the things I do and what are the things, this is probably important to do without the data, but the data will certainly help.

[411] It's like rank all the things you do in life and which ones make you feel shitty, which ones make you feel good.

[412] Like you were talking about evenings, Bryan.

[413] Like, this is a good example.

[414] Somebody, like, I do pig out at night as well.

[415] And it never makes you feel good.

[416] Like you're in a safe space.

[417] This is a safe space.

[418] Let's hear it.

[419] No, I definitely have much less self -control at night.

[420] And it's interesting.

[421] And the same, you know, people might criticize this, but I know my own body.

[422] I know when I eat carnivore, just eat meat, I feel much better.

[423] than if I eat more carbs.

[424] The more carbs I eat, the worse I feel.

[425] I don't know why that is.

[426] There is science supporting it, but I'm not leaning on science. I'm leaning on personal experience.

[427] I'm leading on personal experience.

[428] And that's really important.

[429] I don't need to read... I'm not going to go on a whole rant about nutrition science, but many of those studies are very flawed.

[430] They're doing their best, but nutrition science is a very difficult field of study, because humans are so different.

[431] And the mind has so much impact on the way your body behaves.

[432] And it's so difficult, from a scientific perspective, to conduct really strong studies, that you have to be almost like a scientist of one. You have to do these studies on yourself.

[433] That's the best way to understand what works for you or not.

[434] And I don't understand why, because it sounds unhealthy, but eating only meat always makes me feel good.

[435] Just eat meat.

[436] That's it.

[437] And I don't have any allergies or that kind of stuff.

[438] I'm not full-on like Jordan Peterson, where if he deviates a little bit from the carnivore diet, he goes off a cliff.

[439] No, I can have, like, chocolate.

[440] I can go off the diet.

[441] I feel fine.

[442] It's a gradual, it's a gradual worsening of how I feel.

[443] But when I eat only meat, I feel great.

[444] And it would be nice to be reminded of that.

[445] Like, it's a very simple fact

[446] that I feel good when I eat carnivore.

[447] And I think that repeats itself in all kinds of experiences.

[448] Like, I feel really good when I exercise.

[449] Now, I hate exercise, okay.

[450] But in the rest of the day, the impact it has on my mind, on the clarity of mind, and the experiences and the happiness, all those kinds of things I feel really good.

[451] And to be able to concretely express that through data would be nice.

[452] It would be a nice reminder, almost like a statement, like remember what feels good and whatnot.

[453] And there could be things like that, maybe many things,

[454] like you're suggesting, that I could not be aware of.

[455] They might be sitting right in front of me, things that make me feel really good and things that make me feel not good.

[456] And the data would show that.

[457] I agree with you.

[458] I've actually employed the same strategy.

[459] I fired my mind entirely from being responsible for constructing my diet.

[460] And so I started doing a program where I now track over 200 biomarkers every 90 days.

[461] And it captures, of course, the things you would expect like cholesterol, but also DNA methylation and all kinds of things about my body.

[462] All the processes that make up me. And then I let that data generate the shopping list.

[463] And so I never actually ask my mind what it wants.

[464] It's entirely what my body is reporting that it wants.

[465] And so I call this goal alignment within Bryan.

[466] And there's 200 plus actors that I'm currently asking their opinion of.

[467] And so I'm asking my liver, how are you doing?

[468] And it's expressing via the biomarkers.

[469] And so, from that, I construct the diet.

[470] And I only eat those foods until my next testing round.
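A minimal sketch of "letting the data generate the shopping list" could look like this, assuming a simple rule table mapping out-of-range markers to foods. The markers, ranges, and food mappings are illustrative placeholders only, not Bryan's actual protocol and not medical advice.

```python
# Toy sketch: map each out-of-range biomarker to candidate foods.
# Thresholds, markers, and food mappings are invented for illustration.

RULES = {
    # marker: ((healthy_lo, healthy_hi), foods_to_add_if_low)
    "ferritin_ng_ml": ((30, 200), ["lentils", "spinach"]),
    "omega3_index_pct": ((8, 12), ["sardines", "flaxseed"]),
    "vitamin_d_ng_ml": ((40, 60), ["salmon", "egg yolks"]),
}

def shopping_list(panel: dict) -> list[str]:
    items = []
    for marker, ((lo, hi), foods) in RULES.items():
        value = panel.get(marker)
        if value is not None and value < lo:  # only act on deficits here
            items.extend(foods)
    return sorted(set(items))

print(shopping_list({"ferritin_ng_ml": 22,
                     "omega3_index_pct": 9.1,
                     "vitamin_d_ng_ml": 31}))
# -> ['egg yolks', 'lentils', 'salmon', 'spinach']
```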

[471] And that has changed my life more than I think anything else.

[472] Because in the demotion of my conscious mind, which I gave primacy to my entire life, it led me astray. Because, like you're saying, the mind then goes out into the world and it navigates the dozens of different dietary regimens people put together in books, and they all have their supporting science in certain contextual settings, but it's not an n of one. And like you're saying, this dietary thing really is an n of one. What people have published scientifically, of course, can be used for nice groundings, but it changes when you get to an n-of-one level. And so that's what gets me excited about brain interfaces: if I could do the same thing for my brain, where I can stop asking my conscious mind for its advice or for its decision-making, which is flawed, and I'd rather just look at this data. And I've never had better health markers in my life than when I stopped actually asking myself to be in charge of it.

[473] And I'd rather just look at this data that, and I've never had better health markers in my life than when I stopped, actually asking myself to be in charge of it.

[474] The idea of the demotion of the conscious mind is such a sort of engineering way of phrasing what, like, meditation is doing, right? Yeah, that's beautiful, really beautifully put. By the way, a testing round, what does that look like? What's in that?

[475] Well, you mentioned, yeah, the tests I do. Yes, so it includes a complete blood panel. I do a microbiome test. I do a food inflammation test, a diet-induced inflammation test, so I look for cytokine expressions, so foods that produce inflammatory reactions. I look at my neuroendocrine systems.

[476] I look at all my neurotransmitters.

[477] I do, yeah, there are several micronutrient tests to see how I'm doing on the various nutrients.

[478] What about, like, self-report of how you feel? You know, you can't fully demote it; you still exist within your conscious mind, right?

[479] So that lived experience is of a lot of value.

[480] So how do you measure that?

[481] I do a temporal sampling

[482] over some duration of time.

[483] So I'll think through how I feel over a week, over a month, or over three months.

[484] I don't do a temporal sampling of, if I'm at the grocery store in front of a cereal box and be like, you know what, Captain Crunch is probably the right thing for me today because I'm feeling like I need a little fun in my life.

[485] And so it's a temporal sampling.

[486] If the data set's large enough, then I smooth out the function of my natural oscillations of how I feel about life, where some days I may feel upset or depressed or down or whatever.

[487] And I don't want those moments to then rule my decision-making.

[488] That's why the demotion happens.

[489] And it says, really, if you're looking at health over a 90-day period of time, all my 200 voices speak up on that interval.

[490] And they're all given a voice to say: this is how I'm doing, and this is what I want.

[491] And so it really is an accounting system for everybody.
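The temporal-sampling idea lends itself to a small sketch: smooth noisy daily self-reports over a window so that a single bad day doesn't drive decisions. The mood series below is synthetic, and the two-week window is an assumed choice, not something specified in the conversation.

```python
# A minimal sketch of "temporal sampling": smooth daily self-reports over a
# window so single-day oscillations don't rule decision-making. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
days = 90
mood = 6 + rng.normal(0, 1.5, size=days)  # noisy daily 0-10 self-reports
mood = np.clip(mood, 0, 10)

window = 14                               # assumed two-week smoothing window
kernel = np.ones(window) / window
smoothed = np.convolve(mood, kernel, mode="valid")

print(f"raw daily range: {mood.min():.1f} .. {mood.max():.1f}")
print(f"smoothed range:  {smoothed.min():.1f} .. {smoothed.max():.1f}")
# Decisions keyed to the smoothed series ignore in-the-moment swings, which
# is the "demotion" of momentary conscious readings described above.
```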

[492] So that's why I think that if you think about the future of being human, there's two things I think that are really going on.

[493] One is that the design, manufacturing, and distribution of intelligence is heading towards zero on a cost curve, over a certain time frame. Evolution produced us, an intelligent form of intelligence. We are now designing our own intelligence systems, and the design, manufacturing, and distribution of that intelligence, over a certain time frame, is going to go to a cost of zero. Design, manufacturing, distribution of intelligence: the cost is going to zero. Just give me a second. Okay, that's brilliant. Okay. So evolution was doing the design, manufacturing, and distribution of intelligence, and now we are doing the design, manufacturing, and distribution of intelligence, and the cost of that is going to zero. That's a very nice way of looking at life on Earth. So if that's going on, then in parallel to that, you say: okay, what then happens when that cost curve is heading to zero? Our existence becomes a goal alignment problem, a goal alignment function. It's the same thing I'm doing where I'm doing goal alignment within myself with these 200 biomarkers: when Brian exists on a daily basis, and this entity is deciding what to eat and what to do, et cetera, it's not just my conscious mind which is opining, it's 200 biological processes, and there's a whole bunch more voices involved.
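The cost-curve claim can be made concrete with a toy calculation, assuming a fixed fractional decline per year; the 30% rate and starting cost below are purely illustrative, not figures from the conversation.

```python
# Toy sketch of the claimed cost curve: if the cost of producing a unit of
# intelligence falls by a fixed fraction per year (rate assumed), it heads
# toward zero asymptotically. The 30%/year figure is purely illustrative.

cost = 100.0           # arbitrary starting cost per "unit of intelligence"
annual_decline = 0.30  # assumed fractional drop per year

for year in range(0, 21, 5):
    print(f"year {year:2d}: cost {cost * (1 - annual_decline) ** year:8.3f}")
```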

[494] So in that equation, we're going to increasingly automate the things that we spend high energy on today, because it's easier, and now we're going to then negotiate the terms and conditions of intelligent life.

[495] Now, we say conscious existence because we're biased, because that's what we have. But it will be the largest computational exercise in history, because you're now doing goal alignment with planet Earth, within yourself, with each other, with all the intelligent agents we're building, bots and other voice assistants. You basically have trillions and trillions of agents working on the negotiation of goal alignment. Yeah, this is in fact true. And what was the second thing? That was it. So the cost, the design, manufacturing, and distribution of intelligence going to zero, which then means: what's really going on?

[496] What are we really doing?

[497] We're negotiating the terms and conditions of existence.

[498] Do you worry about the survival of this process, that life as we know it on Earth comes to an end, or at least intelligent life? That as the cost goes to zero, something happens where all of that intelligence is thrown in the trash by something like nuclear war, or the development of AGI systems that are very dumb, not AGI, I guess, but AI systems?

[499] It's the paperclip thing: en masse, it's dumb, but it has unintended consequences to where it destroys human civilization.

[500] Do you worry about those kinds of things?

[501] I mean, it's unsurprising that a new thing comes into the sphere of human consciousness.

[502] Humans identify the foreign object, in this case artificial intelligence. Our amygdala

[503] fires up and says, scary, foreign, we should be apprehensive about this.

[504] And so it makes sense from a biological perspective, the knee -jerk reaction is fear.

[505] What I don't think has been properly weighted with that is that we are the first generation of intelligent beings on this earth that has been able to look out over their expected lifetime and see there is a real possibility of evolving into entirely novel forms of consciousness.

[506] Yeah.

[507] So different that it would be totally unrecognizable to us today.

[508] We don't have words for it.

[509] We can't hint at it.

[510] We can't point at it.

[511] We can't look in the sky and see that thing that is shining.

[512] We're going to go up there.

[513] You cannot even create an aspirational

[514] statement about it.

[515] And instead, we've had this knee -jerk reaction of fear about everything that could go wrong.

[516] But in my estimation, this should be the defining aspiration of all intelligent life on Earth: that basically every generation surveys the landscape of possibilities that are afforded, given the technological, cultural, and other contextual situation that they're in. We're in this context.

[517] We haven't yet identified this and said, this is unbelievable.

[518] We should carefully think this thing through, not just of mitigating the things that wipe us out.

[519] We have this potential, and so we just haven't given voice to it, even though it's within this realm of possibilities.

[520] So you're excited about the possibility of superintelligence systems and the opportunities that brings.

[521] I mean, there's parallels to this.

[522] You think about people before the internet, as the internet was coming to life. I mean, there's kind of a fog through which you can't see. What does the future look like? Predicting collective intelligence, which I don't think we understand that we're living through now: we've, in some sense, stopped being individual intelligences and become much more like collective intelligences, because ideas travel much, much faster now.

[523] And they can, in a viral way, like, sweep across the population.

[524] And so it's almost, I mean, it almost feels like a thought is had by many people now, thousands or millions of people as opposed to an individual person.

[525] And that's changed everything.

[526] But to me, I don't think we're realizing how much that actually changed people or societies.

[527] But, like, to predict that before the internet would have been very difficult.

[528] And in that same way, we're sitting here with the fog before us, thinking: what are superintelligence systems? How are they going to change the world? What does increasing the bandwidth, like plugging our brains into this whole thing, how is that going to change the world? And it seems like it's a fog. You don't know. And whatever comes to be could destroy the world; we could be the last generation. But it also could transform things in ways that create an incredibly fulfilling life experience that's unlike anything we've ever experienced. It might involve the dissolution of ego and consciousness and so on. You're no longer one individual. It might be, you know, that might be a certain kind of death, an ego death, but the experience might be really exciting and enriching.

[529] Maybe we'll live in a virtual world. Like, it's funny to think about a bunch of sort of hypothetical questions: would it be more fulfilling to live in a virtual world?

[530] Like, if you were able to plug your brain in a very dense way into a video game, like, which world would you want to live in?

[531] in the video game or in the physical world.

[532] For most of us, we kind of toy with the idea of the video game, but we still want to live in the physical world, have friendships and relationships in the physical world.

[533] But we don't know that.

[534] Again, it's a fog.

[535] And maybe in 100 years, we're all living inside a video game.

[536] Hopefully not Call of Duty.

[537] Hopefully more like Sims 5.

[538] Which version is it on?

[539] For you individually, though, does it make you sad that your brain ends, that you die one day, very soon? That the whole thing, that that data source, just goes offline sooner than you would like? That's a complicated question. I would have answered it differently at different times in my life. I had chronic depression for 10 years.

[540] And so in that 10 -year time period, I desperately wanted lights to be off.

[541] And the thing that made it even worse is I was in a religious, I was born into a religion.

[542] It was the only reality I ever understood.

[543] And it's difficult to articulate to people: when you're born into that kind of reality and it's the only reality you're exposed to, you are literally blinded to the existence of other realities, because it's so much the in-group, out-group thing.

[544] And so in that situation, it was not only that I desperately wanted lights out forever, it was that I couldn't have lights out forever.

[545] It was that there was an afterlife.

[546] And this afterlife had this system that would either penalize or reward you for your behaviors.

[547] And so it's almost like this, this indescribable hopelessness of not only being in hopeless despair of not wanting to exist, but then also being forced to exist.

[548] And so there was a duration of my time, of a duration of life where I'd say, like, yes, I have no remorse for lights being out and actually want it more than anything in the entire world.

[549] There are other times where I'm looking out at the future and I say, this is an opportunity for a future evolving human conscious experience that is beyond my ability to understand and I jump out of bed and I race to work and I can't think about anything else.

[550] But I think the reality for me is, I don't know what it's like to be in your head, but in my head, when I wake up in the morning, I don't say, good morning, Bryan, I'm so happy to see you.

[551] Like, I'm sure you're just going to be beautiful to me today.

[552] You're not going to make a huge long list of everything you should be anxious about.

[553] You're not going to repeat that list to me 400 times.

[554] You're not going to have me relive all the regrets I've made in life.

[555] I'm sure you're not doing any of that.

[556] You're just going to just help me along all day long.

[557] It's a brutal environment in my brain.

[558] And we've just become normalized to this environment, that we just accept that this is what it means to be human.

[559] But if we look at it, if we try to muster as much soberness as we can about the realities of being human, it's brutal, if it is for me. And so, am I sad that the brain may be off one day? You know, it depends on the contextual setting. How am I feeling? At what moment are you asking me that? My mind is so fickle, and this is why, again, I don't trust my conscious mind. I have been given realities. I was given a religious reality that was a video game, and then I figured out it was not a real reality. And then I lived in a depressive reality, which delivered this terrible hopelessness.

[560] That wasn't a real reality.

[561] Then I discovered behavioral psychology, and I figured out how biased I am, the 188 chronicled biases, and how my brain is distorting reality. Each time, I have gone from one reality to another.

[562] I don't trust reality.

[563] I don't trust realities that are given to me. And so, to try to make a decision on whether I value or don't value that future state, I don't trust my response.

[564] So, not fully

[565] listening to the conscious mind at any one moment as the ultimate truth, but allowing it to go up and down as it does, and just kind of observing it.

[566] Yes, I assume that whatever my conscious mind delivers up to my awareness is wrong upon landing.

[567] And I just need to figure out where it's wrong, how it's wrong, how wrong it is, and then try to correct for it as best I can.

[568] But I assume that on impact, it's mistaken in some critical ways.

[569] Is there something you can say by way of advice, when the mind is depressive, when the conscious mind serves up, you know, dark thoughts? How do you deal with that? Like, how, in your own life, have you overcome that, and how can others who are experiencing that overcome it? Two things. One, those depressive states are biochemical states. It's not you. And the suggestions that this state delivers to you, the suggestion of the hopelessness of life, or the meaninglessness of it, or that you should hit the eject button, that's a false reality. Yeah. And I completely understand the rational decision to commit suicide; it is not lost on me at all. And that is an irrational situation, but the key is, when you're in that situation and those thoughts are landing, to be able to say: thank you, you're not real.

[570] I know you're not real.

[571] And so I'm in a situation where for whatever reason I'm having this, this neurochemical state, but that state can be altered.

[572] And so, again, it goes back to the realities of the difficulties of being human.

[573] And when I was trying to solve my depression, I tried literally, you name it, I tried it, systematically, and nothing would fix it. And so this is what gives me hope with brain interfaces, for example. Could I have numbers on my brain? Can I see what's going on? Because I go to the doctor, and it's, how do you feel? I don't know, terrible. On a scale of one to ten, how badly do you want to commit suicide? Ten. Okay, here's this bottle. How much do I take? Well, I don't know. It's very, very crude. And the data opens up the possibility of really helping in those dark moments, to first understand the ups and downs of those dark moments. On the complete flip side of that, I am very conscious in my own brain, and deeply, deeply grateful, that, it's almost like a chemistry thing, a biochemistry thing, that many times throughout the day I'll look at this cup and I'll be overcome with joy at how amazing it is to be alive.

[574] Like, I actually think my biochemistry is such that this is not as common.

[575] Like I've talked to people and I don't think that's that common.

[576] And it's not a rational thing at all.

[577] It's like, I feel like I'm on drugs.

[578] And I'll just be like, whoa.

[579] A lot of people talk about how the meditative experience will allow you to look at some basic things, like the movement of your hand, as deeply joyful, because that's life. But I get that from just looking at a cup. I'm waiting for the coffee to brew, and I'll just be like, fuck, life is awesome. And I'll sometimes tweet that, but then I'll regret it later, like, goddammit, you're so ridiculous. But that is purely chemistry. There's no rational basis. It doesn't fit with the rest of my life. I have all this shit, I'm always late to stuff, there's all this stuff, you know. I'm super self-critical, really self-critical about everything I do, to the point I almost hate everything I do. But there's this engine of joy for life outside of all that, and that has to be chemistry. And the flip side of that is probably what depression is: the opposite of that feeling. Because I bet that feeling of the cup being amazing would save anybody in a state of depression. It would be like you're in a desert and it's a drink of water. Shit, man, the brain. It would be nice to understand where that's coming from, to be able to understand how you hit those lows and those highs that have nothing to do with the actual reality.

[580] It has to do with some very specific aspects of how you maybe see the world.

[581] Maybe it could be just like basic habits you engage in and then how to walk along the line to find those experiences of joy.

[582] And this goes back to the discussion we were having: human cognition is, in volume, the largest input of raw material into society.

[583] Yeah.

[584] And it's not quantified.

[585] We have no bearings on it.

[586] And so, you wonder. We both articulated some of the challenges we have in our own minds.

[587] And it's likely that others would say, I have something similar.

[588] And you wonder, when you look at society, how does that contribute to all the other compounding problems that we're experiencing?

[589] How does that blind us to the opportunities we could be looking at?

[591] And so it really has this potential distortion effect on reality that just makes everything worse.

[592] And I hope we can assign some numbers to these things, just to get our bearings, so we're aware of what's going on.

[593] If we could find greater stabilization in how we conduct our lives and how we build society, it might be the thing that enables us to scaffold.

[594] Because, again, humans have done a fantastic job of systematically scaffolding technology and science and institutions.

[595] It's humans.

[596] It's our own selves, which we have not been able to scaffold.

[597] We are the one part of this intelligence infrastructure that remains unchanged.

[598] Is there something you could say about coupling

[599] this brain data with not just the basic human experience, you mentioned sleep, but with the wildest experience, which is psychedelics? There have been quite a few studies now that are being approved and run on psychedelics, which is exciting from a scientific perspective.

[600] What do you think happens to the brain on psychedelics?

[601] And how can data about this help us understand it?

[602] And when you're on DMT, do you see elves, and can we convert that into data?

[603] Can you add aliens in there?

[604] Yeah, aliens, definitely.

[605] Do you actually meet aliens?

[606] And the elves are the aliens?

[607] I'm asking for a few Austin friends. They're convinced that they've actually met the elves.

[608] What are elves like?

[609] Are they friendly?

[610] I haven't met them personally.

[611] Are they like the Smurfs, like, they're industrious and they have different skill sets? And, yeah, I think they're very critical as friends. That's what they are, they're trolls. The elves are trolls? No, but they care about you. So there are a bunch of different versions of trolls. There are loving trolls that are harsh on you, but they want you to be better, and there are trolls that just enjoy your destruction. And I think they're the ones that care for you.

[612] Like, I think their criticism is for my...

[613] See, I haven't met them directly, so I'm like a friend of a friend.

[614] Yeah, it's a game of telephone.

[615] Yeah, a bit of a...

[616] And the whole point is, on psychedelics, and certainly with DMT, this is where word data fails versus brain data, which is, you know, words can't convey the experience.

[617] You can be poetic and so on, but it really does not convey what it actually means to meet the elves.

[618] I mean, to me, what baselines this conversation is, imagine if we were interested in the health of your heart.

[619] And we started by saying, okay, Lex, self-introspect: tell me, how's the health of your heart?

[620] You sit there and you close your eyes and you think, it feels all right, things seem okay.

[621] And then you go to the cardiologist, and the cardiologist is like, hey, Lex, you know, tell me how you feel.

[622] You're like, well, actually, what I'd really like you to do is do an EKG and a blood panel and look at arterial plaques and let's look at my cholesterol and there's like five to ten studies you would do.

[623] They would then give you this report and say, here's the quantified health of your heart.

[624] Now, with this data, I'm going to prescribe the following regime of exercise and maybe I put you on a statin, like, et cetera.

[625] But the protocol is based upon this data.

[626] You would think the cardiologist is out of their mind if they just gave you a bottle of statins based upon, you're like, well, I think something's kind of wrong.

[627] And they just kind of experiment and see what happens.

[628] But that's what we do with our mental health today.

[629] So it's kind of absurd.

[630] And so if you look at psychedelics: again, to be able to measure the brain and get a baseline state, and then to measure during a psychedelic experience and post the psychedelic experience, and then do it longitudinally, you now have a quantification

[631] of what's going on.

[632] And so you could then pose questions: what molecule is appropriate, at what dosages, at what frequency, in what contextual environment? What happens when I have this diet with this molecule, with this experience? All the experimentation you do when you have good sleep data or HRV.

[633] And so that's what I think happens.

[634] What we could potentially do with psychedelics is we could add this level of sophistication that is not in the industry currently.

[635] And it may improve the outcomes people experience; it may improve the safety and efficacy.

[636] And so that's what I hope we are able to achieve.

[637] And it would transform mental health, because we would finally have numbers to work with, to baseline ourselves.
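
An editor's aside: the quantification he describes (baseline, during, post, done longitudinally) has a simple statistical skeleton. A minimal sketch with invented numbers and a single hypothetical metric; nothing here is Kernel's actual pipeline:

```python
import statistics

def baseline_stats(sessions: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of a brain metric across resting baseline sessions."""
    return statistics.mean(sessions), statistics.stdev(sessions)

def z_score(value: float, mean: float, stdev: float) -> float:
    """How far one session deviates from the personal baseline, in standard deviations."""
    return (value - mean) / stdev

# Invented values for one illustrative metric, recorded longitudinally.
resting_baseline = [0.42, 0.39, 0.44, 0.41, 0.40]  # repeated baseline sessions
during_session = 0.61                              # during the experience
post_session = 0.45                                # after the experience

mu, sigma = baseline_stats(resting_baseline)
print(f"during: {z_score(during_session, mu, sigma):+.1f} sd from baseline")
print(f"post:   {z_score(post_session, mu, sigma):+.1f} sd from baseline")
```

Repeating this across dosages, contexts, and time is what would let questions like "which molecule, at what dose, in what setting" be posed against numbers rather than self-report.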

[638] And then if you think about it, when we talk about things related to the mind, we talk about the modality.

[639] We use words like meditation or psychedelics or something else because we can't talk about a marker in the brain.

[640] We can't use a word for a marker, the way we can talk about cholesterol for the heart.

[641] We don't talk about plaque in the arteries.

[642] We don't talk about HRV.

[643] And so if we have numbers, then the solutions get mapped to numbers instead of the modalities being the thing we talk about.

[644] Meditation just does good things in a crude fashion.

[645] So in your blog post, Zeroth Principle Thinking, good title, you ponder how people come up with truly original ideas.

[646] What are your thoughts on this, as a human and as a person who's measuring brain data?

[647] Zeroth principles are building blocks; first principles are an understanding of system laws. So if you take, for example, Sherlock Holmes, he's a first-principles thinker. He says, once you've eliminated the impossible, whatever remains, however improbable, must be the truth. Whereas Dirk Gently, the holistic detective by Douglas Adams, says, I don't like eliminating the impossible.

[648] So when someone says, from a first-principles perspective, they're trying to assume the fewest number of things within a given time frame.

[649] And so after Braintree and Venmo, I set my mind to the question: what single thing can I do that would maximally increase the probability that the human race thrives beyond what we can even imagine?

[650] And I found that in my conversations with others, in the books I read, in my own deliberations, I had a missing piece of the puzzle, because I didn't feel like the future could be deduced from first-principles thinking.

[651] And that's when I read the book Zero: The Biography of a Dangerous Idea.

[652] It's a really good book, by the way. I think it's my favorite book I've ever read. It's also a really interesting number, zero. And I wasn't aware that the number zero had to be discovered. I didn't realize that it caused a revolution in philosophy, and it just tore up math. I mean, it built modern society, but it wrecked everything in its way. It was an unbelievable disruptor, and it was so difficult for society to get their heads around it. And so zero is, of course, a representation of zeroth-principle thinking, which is about the caliber and consequential nature of an idea.

[653] And so when you talk about what kinds of ideas have civilization-transforming properties, oftentimes they fall into the zeroth category.

[654] And so in thinking this through, I wanted to find a quantitative structure for how to think about these zeroth principles.

[655] And so I came up with that as a coupler to first-principles thinking.

[656] And so now it's a staple of how I think about the world and the future.

[657] So it emphasizes, it lands on that word, impossible: essentially trying to identify what is impossible and what is possible.

[658] I mean, this is the thing: most of society tells you that the range of things that are impossible is very wide.

[660] So you need to be shrinking that.

[661] I mean, that's the whole process of this kind of thinking: you need to be very rigorous in trying to draw the lines of what is actually impossible, because very few things are actually impossible.

[662] I don't know what is actually impossible.

[663] Like the Joe Rogan line: it's entirely possible.

[664] I like that approach to science, to engineering, to entrepreneurship.

[665] It's entirely possible.

[666] Basically shrink the impossible to zero, to a very small set.

[667] Life constraints favor first-principles thinking, because it enables faster action with a higher probability of success.

[668] Pursuing zeroth-principle optionality is expensive and uncertain.

[670] And so a society constrained by resources, time and money, and a desire for social status, accomplishment, et cetera, minimizes zeroth-principle thinking.

[671] But the reason I think zeroth-principle thinking should be a staple of our shared cognitive infrastructure is, if you look through the history of the past couple thousand years, let's say we subjectively try to assess what is a zeroth-level idea.

[672] And we ask how many have occurred, on what time scales, and what were the contextual settings for them.

[673] I would argue that if you look at AlphaGo, when the human Go players saw AlphaGo's moves, they attributed it to playing with an alien, to AlphaGo being from another dimension.

[674] And so if you say computational intelligence has an attribute of introducing zero-like insights, then you ask: what is going to be the occurrence of zeros in society going forward?

[675] And you could reasonably say: probably a lot more than have occurred before, and probably at a faster pace.

[676] So then if you say, what happens when you have this computational intelligence throughout society, where the manufacturing, design, and distribution of intelligence is heading towards zero cost, you have an increased number of zeros being produced, with a tight connection between humans and computers.

[677] That's when I got to the point of saying, we cannot predict the future with first-principles thinking.

[678] That cannot be our imagination set.

[679] It can't be our sole anchor in this situation, because the future of our conscious existence, 20, 30, 40, 50 years out, is probably a zero.

[680] So just to clarify, when you say zero, you're referring to basically a truly revolutionary idea.

[681] Yeah, something that is currently not a building block of our shared conscious existence, either in the form of knowledge or otherwise.

[682] Yeah, it's currently not manifest in what we acknowledge.

[684] So zeroth-principle thinking is playing with ideas so revolutionary that we can't even clearly reason about the consequences once those ideas come to be.

[685] Yeah. Or, for example, Einstein; I would categorize that as a zeroth-principle insight.

[686] You mean general relativity, space -time?

[687] Yeah, space -time.

[688] Yep, yeah.

[689] That basically built upon what Newton had done, and said, yes, and also... and it just changed the fabric of our understanding of reality.

[691] And so that was unexpected.

[692] It existed.

[693] It just became part of our awareness.

[694] And the moves AlphaGo made existed.

[695] It just came into our awareness.

[696] And so, to your point, there's this question of what we know and what we don't know.

[697] Do we think we know 99% of all things, or do we think we know 0.001% of all things?

[698] And that goes back to known knowns, known unknowns, and unknown unknowns.

[699] And first-principles and zeroth-principle thinking give us a framework to say: there's no way for us to mathematically create probabilities for these things.

[700] Therefore, it would be helpful if they were just part of our standard thought processes, because it may encourage different behaviors

[701] in what we do individually, collectively as a society, what we aspire to, what we talk about, the possibility sets we imagine.

[702] Yeah, I've been engaged in that kind of thinking quite a bit.

[703] And thinking about engineering of consciousness, I think it's feasible.

[704] I think it's possible in the language that we're using here.

[705] And it's very difficult to reason about a world when inklings of consciousness can be engineered into artificial systems.

[706] Not from a philosophical perspective, but from an engineering perspective, I believe a good step towards engineering consciousness is engineering the illusion of consciousness.

[707] I'm captivated by our natural predisposition to anthropomorphize things.

[708] And, I don't want to hear from the philosophers, but I think that's what we kind of do to each other: that consciousness is created socially, that much of the power of consciousness is in the social interaction. I create your consciousness. No, I create my consciousness by having interaction with you.

[709] And that's the display of consciousness.

[710] It's the same as like the display of emotion.

[711] Emotion is created through communication.

[712] Language is created through its use.

[713] And then we humans, especially philosophers, you know, with the hard problem of consciousness, really want to believe that we possess this thing, like there's an elf sitting there with a hat, or a name tag that says consciousness, feeding this subjective experience to us, as opposed to it actually being an illusion that we construct to make social communication more effective.

[714] And so I think if you focus on creating the illusion of consciousness, you can create some very fulfilling experiences in software.

[715] And so that to me is a compelling space of ideas to explore.

[716] I agree with you.

[717] And I think, going back to our experience together with brain interfaces, you could imagine if we get to a certain level of maturity.

[718] So first, let's take the inverse of this.

[719] So you and I text back and forth and we're sending each other emojis.

[720] That has a certain amount of information transfer rate as we're communicating with each other.

[721] And so in our communication with people via email and text and whatnot, we've taken the bandwidth of human interaction, the information transfer rate, and we've reduced it.

[722] We have fewer social cues, we have less information to work with, and there's a lot more opportunity for misunderstanding.

[723] So that is altering the conscious experience between two individuals.

[724] And if we add brain interfaces to the equation, let's imagine now we amplify the dimensionality of our communications.

[725] That to me is what you're talking about, which is consciousness engineering.

[726] Perhaps I understand you in more dimensions.

[727] So maybe I understand your happiness.

[728] When you look at the cup and you experience that happiness, you can tell me you're happy.

[729] And I then do theory of mind and say, I can imagine what it might be like to be Lex and feel happy about seeing this cup.

[730] But if the interface could then quantify it and give me a 50-dimensional vector-space model and say, this is the version of happiness that Lex is experiencing as he looks at this cup, then it would potentially allow me to have much greater empathy for you and understand you as a human: this is how you experience joy, which is entirely unique from how I experience joy, even though we assumed ahead of time that we were having some kind of similar experience. But I agree with

[731] you that we do consciousness engineering today, in everything we do when we talk to each other and when we're building products, and that we're entering a stage where it will be much more methodical, quantitative, and computational in how we go about doing it. That, I find encouraging, because I think it creates better guardrails for building ethical systems, versus right now, where I feel like it's really a wild, wild west in how these interactions are happening.
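
A hedged illustration of the "vector space model" idea from the exchange above: represent an emotional state as a feature vector and measure overlap between two people's states. The axes and values below are invented for readability (the conversation imagines around 50 dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two affect vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two hypothetical "happiness" vectors, reduced to five invented feature axes.
lex_joy = [0.9, 0.2, 0.7, 0.1, 0.5]
bryan_joy = [0.4, 0.8, 0.6, 0.3, 0.2]

print(f"overlap between the two experiences: {cosine_similarity(lex_joy, bryan_joy):.2f}")
```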

[732] Yeah, and it's funny you focus on human-to-human, but this kind of data enables human-to-machine interaction, which is what we're kind of talking about when we say engineering consciousness.

[733] And that will happen, of course, let's flip that on its head.

[734] Right now we're putting humans as the central node.

[735] What if we gave GPT-3 a bunch of human brains?

GPT-3, learn some manners when you speak. Yeah. And run your algorithms on human brains and see how they respond, so that you can be polite, so that you can be friendly, so that you can be conversationally appropriate. To inverse it: give our machines a training set in real time, with closed-loop feedback, so that our machines are better equipped to find their way through our society in polite and kind and appropriate ways.

[737] I love that idea.
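
Purely as a thought experiment, the closed loop just described could be sketched as: generate candidate responses, score each by a measured human reaction, keep the one that lands best. The affect signal below is a stub standing in for a measurement that does not exist off the shelf:

```python
def affect_score(response: str) -> float:
    """Stub for a measured human reaction to a response.
    A real closed loop would read this from people, not from a word list."""
    politeness_words = {"please", "thanks", "sorry"}
    return sum(word in response.lower() for word in politeness_words)

def pick_response(candidates: list[str]) -> str:
    """Closed-loop selection: prefer the candidate that elicits the best reaction."""
    return max(candidates, key=affect_score)

candidates = [
    "No. Figure it out yourself.",
    "Sure, thanks for asking. Please tell me more.",
]
print(pick_response(candidates))
```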

[738] Or better yet, have it read the founding documents and have it visit Austin, Texas.

[739] So that when you tell it, why don't you learn some manners, GPT-3 learns to say no. It learns what it means to be free and a sovereign individual.

[740] So it depends.

[741] So it depends what kind of version of GPT-3 you want.

[742] One that's free, one that behaves well with the social revolution.

[743] You want a socialist GPT-3, you want an anarchist GPT-3, you want a polite GPT-3 that you take home to visit mom and dad, and you want a partying-in-Vegas, strip-club GPT-3.

[744] You want all flavors.

[745] And then you've got to have goal alignment between all those.

[746] Yeah.

[747] You don't want them to manipulate each other, for sure.

[748] So that's, I mean, you kind of spoke to ethics.

[749] One of the concerns that people have in this modern world of digital data is that of privacy and security.

[750] But privacy... you know, people are concerned that when they share data, it's the same thing we feel when we trust other human beings, being fragile and revealing something we're vulnerable about. There's a leap of faith, a leap of trust, that it's going to be just between us. There's a privacy to it.

[751] And then the challenge is, when you're in the digital space, sharing your data with companies that use that data for advertisement and all those kinds of things, there's a hesitancy to share that much data, to share a lot of deep personal data.

[752] And if you look at brain data, that feels a whole lot like that. It's richly, deeply personal data.

[753] So how do you think about privacy with this kind of ocean of data?

[754] I think we got off to a wrong start with the internet, where the basic rule of play for the companies that be was: if you're a company, you can go out and get as much information on a person as you can find, without their approval.

And you can also do things to induce them to give you as much information as possible.

[756] And you don't need to tell them what you're doing with it.

[757] You can do anything on the backside.

[758] You can make money on it.

[759] But the game is who can acquire the most information and devise the most clever schemes to do it.

[760] That was a bad starting place.

[761] And so we are in this period where we need to correct for that.

[762] And we need to say, first of all, the individual always has control over their data.

[764] It's not a free-for-all.

[765] It's not like a game of Hungry Hippos,

[766] where they can just go out and grab as much as they want.

[767] So, for example, when your brain data was recorded today, the first thing we did in the Kernel app was give you control over your data.

[768] And so it's individual consent, it's individual control, and then you can build up on top of that.

[769] But it has to be based upon some clear rules of play: everyone knows what's being collected, they know what's being done with it, and the person has control over it.

[770] So, transparency and control.

[771] So everybody knows. What does control look like? My ability to delete the data if I want?

[772] Yeah, to delete it, and to know who it's being shared with, and under what terms and conditions.

[773] We haven't reached that level of sophistication with our products, where you could say, for example, hey, Spotify, please give me a customized playlist according to my neurome.

[774] You know, you could say, you can have access to this vector-space model, but only for this duration of time, and then you've got to delete it.

[776] We haven't gotten to that level of sophistication, but these are the ideas we need to start talking about: how would you actually structure permissions?

[777] Yeah.
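
One hedged guess at what "structuring permissions" could look like in code: a scoped, time-limited, revocable grant. Every field name here is illustrative, not any real Kernel or Spotify API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataGrant:
    """A consent record: who gets which slice of data, and for how long."""
    grantee: str            # e.g., "spotify" (hypothetical)
    scope: str              # e.g., "neurome-vector-v1" (hypothetical)
    expires_at: datetime
    revoked: bool = False

    def is_valid(self, now: datetime | None = None) -> bool:
        now = now or datetime.utcnow()
        return not self.revoked and now < self.expires_at

# Grant access to a vector-space model for 24 hours, after which it must be deleted.
grant = DataGrant(
    grantee="spotify",
    scope="neurome-vector-v1",
    expires_at=datetime.utcnow() + timedelta(hours=24),
)
print(grant.is_valid())  # True now; False after expiry
grant.revoked = True     # the individual stays in control
print(grant.is_valid())  # False
```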

[778] And I think it creates a much more stable foundation for society to build on, where we understand the rules of play and people aren't vulnerable to being taken advantage of.

[779] It's not fair for an individual to be taken advantage of, without their awareness, by some practice a company is doing for its sole benefit.

[780] And so hopefully we are going through a process now where we're correcting for these things, and it can be an economy-wide shift, because these really are fundamentals we need to have in place.

[781] It's kind of fun to think about. Like in Chrome, when you install an extension or an app, it asks you what permissions you're willing to give. It would be cool if, in the future, it's like: you can have access to my brain data.

[782] I mean, it's not unimaginable in the future.

[783] The big technology companies have built a business based upon acquiring data about you, so that they can create a model of you and sell that predictability.

[784] And so it's not unimaginable that you will create, with a Kernel device, for example, a more reliable predictor of you than they could, and that they will be asking you for permission to complete their objectives, and you're the one that gets to negotiate that with them and say, sure.

[785] So it's not unimaginable that might be the case.

[786] So there's a guy named Elon Musk, and he has a company, one of his many companies, called Neuralink, that's also excited about the brain.

[787] So it would be interesting to hear your opinions about a very different approach, one that's invasive, that requires surgery, that implants a data-collection device in the brain.

[788] How do you think about the difference between Kernel and Neuralink in their approaches to getting that stream of brain data?

[789] Elon and I spoke about this a lot early on.

[790] We met up.

[791] I had started Kernel, and he had an interest in brain interfaces as well.

[792] And we explored doing something together, him joining Kernel.

[793] And ultimately, it wasn't the right move.

[794] And so he started Neuralink, and I continued building Kernel.

[795] But it was interesting, because we were both at this very early time, when it wasn't certain if there was a path to pursue, if now was the right time to do something, and what the technological choice for doing it should be. Our starting point was looking at invasive technologies; I was building invasive technology at the time, and that's ultimately where he's gone. A little less than a year after Elon and I were engaged, I shifted Kernel to do non-invasive.

[797] And we had this neuroscientist come to Kernel.

[798] He had been doing neurosurgery for 30 years, one of the most respected neuroscientists in the U.S. And we brought him to Kernel to figure out the ins and outs of his profession.

[799] And at the very end of our three-hour conversation, he said, you know, every 15 or so years, a new technology comes along that changes everything.

[800] He said, it's probably already here.

[801] You just can't see it yet.

[802] and my jaw dropped.

[803] I thought... because I had spoken to Bob Greenberg, who had built Second Sight, first on the optic nerve, and then he did an array on the visual cortex.

[804] And then I also became friendly with NeuroPace, who does the implants for seizure detection and remediation.

[805] And I saw in their eyes what it was like to take an implantable device through a 15-year run.

[806] They initially thought it would be seven years, and it ended up being 15 years. And they thought it'd be $100 million, and it was, you know, $300 or $400 million.

[807] And I really didn't want to build invasive technology.

[808] It was the only thing that appeared to be possible.

[809] But then once I spun up an internal effort to start looking at non-invasive options, we said, is there something here?

[810] Is there anything here that, again, has the right characteristics: high-quality data, it could be low cost, it could be accessible?

[811] Could it make brain interfaces mainstream?

[812] And so I made a bet-the-company move.

[813] We shifted from invasive to non-invasive.

[814] So the answer is yes to that.

[815] There is something there.

[816] It's possible.

[817] The answer is we'll see.

[818] We've now built both technologies.

[819] And you experienced one of them today.

[820] We're now deploying it.

[821] So we're trying to figure out what value is really there.

[822] But I'd say it's really too early to express confidence. I think it's too early to assess which technological choice is the right one, and on what time scales. Yeah, time scales are really important here. Very important, because if you look at the invasive side, there's so much activity going on right now in less invasive techniques to get at the neuron firings, which is what Neuralink is building. It's possible that in 10 or 15 years, when they're scaling that technology, other things will have come along that you'd much rather do, and the clock starts again on that thing.

[824] It may not be the case.

[825] It may be the case that Neuralink has properly chosen the right technology, and that that's exactly where they want to be. Totally possible.

[826] And it's also possible that the path we chose, non-invasive, falls short for a variety of reasons.

[827] It's just, it's unknown.

[828] And so, for the two technologies we chose, the analogy I'd give you to create a baseline of understanding is the internet in the 90s: the internet became useful when people could do a dial-up connection.

[829] And then as bandwidth increased, so did the utility of that connection, and so the ecosystem improved.

[830] And so, Kernel Flow is going to give you the full screen of information; you're going to be watching the movie, but the image is going to be blurred and the audio is going to be muffled.

[831] So it has a lower resolution of coverage.

[832] Kernel Flux, our MEG technology, is going to give you the full movie in 1080p.

[833] And Neuralink is going to give you a circle on the screen in 4K.

[834] And so each one has their pros and cons, and it's give and take.

[835] And so the decision I made at Kernel was that these two technologies, Flux and Flow, were basically the answer for the next seven years.

[836] And they would give rise to the ecosystem, which would become much more valuable than the hardware itself, and that we would just continue to improve on the hardware over time.

[837] And, you know, it's early days.

[838] It's kind of fascinating to think about. It's very true that you don't know.

[839] Both paths are very promising.

[840] And it's like 50 years from now, we will look back and maybe not even remember one of them.

[841] And the other one might change the world.

[842] It's so cool how technology is.

[843] I mean, that's what entrepreneurship is.

[844] It's the zeroth principle.

[845] It's like you're marching ahead into the darkness, into the fog, not knowing.

[846] It's wonderful to have someone else out there with us doing this.

[847] Because if you look at brain interfaces, anything that's off the shelf right now is inadequate.

[848] It's had its run for a couple decades.

[849] It's still in hacker communities.

[850] It hasn't gone to the mainstream.

[851] The room -sized machines are on their own path.

[852] But there is no answer right now of bringing brain interfaces mainstream.

[853] And so both they and us, we've both spent over $100 million.

[854] And that's kind of what it takes to have a go at this.

[855] Because you need to build full stack.

At Kernel, we go from the photon and the atom through the machine learning.

[857] We have just under 100 people, I think something like 36 or 37 PhDs, in specialties where there are only a few people in the world with these abilities.

[858] And that's what it takes to build next generation, to make an attempt at breaking into brain interfaces.

[859] And so we'll see over the next couple of years whether it's the right time, or whether we were both too early, or whether something else comes along in seven to ten years that is the right thing to bring it mainstream.

[860] So you see Elon as a kind of competitor or a fellow traveler along the path of uncertainty or both?

[861] It's a fellow traveler.

[862] It's like at the beginning of the internet: how many companies are going to be invited into this new ecosystem? An endless number.

Because the hardware just starts the process.

[864] And so, okay, back to your initial example. If you take the Fitbit, you say, okay, now I can get measurements on the body. And what do we think the ultimate value of this device is going to be?

[865] What is the information transfer rate?

[866] And they were in the market for a certain duration of time, and Google bought them for $2.5 billion.

[867] They didn't have ancillary value.

[868] There weren't people building on top of the Fitbit device.

They also didn't have increased insight with additional data streams.

[870] So it's really just the device.

[871] If you look, for example, at Apple and the device they sell, you have value in the device that someone buys, but also you have everyone who's building on top of it.

[872] So you have this additional ecosystem value.

[873] And then you have additional data streams that come in, which increase the value of the product.

[874] And so if you look at the hardware as the instigator of value creation, you know, over time, what we've built may constitute five or ten percent of the value of the overall ecosystem.

[876] And that's what we really, really care about.

[877] What we're trying to do is kickstart the mainstream adoption of quantifying the brain.

[878] And the hardware just opens the door to say what kind of ecosystem could exist.

[879] And that's why the examples are so relevant of the things you've outlined in your life.

[880] I hope those things, the books people write, the experiences people build, the conversations you have, your relationship with your AI systems... I hope those are all feeding on the insights built upon this ecosystem we've created, to better your life.

[881] And so that's the thinking behind it.

[882] Again, with the Drake equation being the underlying driver of value, the people at Kernel have joined not because we have certainty of success, but because we find it to be the most exhilarating opportunity we could ever pursue in this time to be alive.

[883] You founded the payment company Braintree in 2007. It acquired Venmo in 2012, and in that same year it was acquired by PayPal, which was then part of eBay.

[884] Can you tell me the story of the vision and the challenge of building an online payment system and just building a large successful business in general?

[885] I discovered payments by accident.

[886] When I was 21, I had just returned from Ecuador, where I had been living among extreme poverty for two years.

[887] And I came back to the U .S. and I was shocked by the opulence of the United States.

[888] And I just couldn't believe it.

[889] And I decided I wanted to try to spend my life helping others.

[890] That was the life objective that I thought was worthwhile to pursue, versus making money for its own sake, or whatever the case may be.

[891] And so I decided in that moment that I was going to try to make enough money by the age of 30 to never have to work again.

[893] And then with some abundance of money, I could then choose to do things that might be beneficial to others, but may not meet the criteria of being a standalone business.

[894] And so in that process, I started a few companies, had some small successes, had some failures.

[895] In one of the endeavors, I was up to my eyeballs in debt.

[896] Things were not going well.

[897] And I needed a part-time job to pay my bills.

[899] And so one day, in Utah, where I was living, I saw in the paper a list of the 50 richest people in Utah.

[900] And I emailed each one of their assistants and said, you know, I'm young, I'm resourceful.

[901] I'll do anything.

[902] I just want to, I'm entrepreneurial.

[903] I was trying to get a job that would be flexible. And no one responded.

[904] And then I interviewed at a few dozen places.

[905] Nobody would even give me the time of day.

[906] Like, no one would take me seriously.

[907] And so finally, on Monster.com, I saw this job posting for credit card sales, door to door.

[908] I did not know this story.

[909] This is great.

[910] I love the head drop.

[911] That's exactly right.

[912] So it was...

[913] The low points to which we go in life.

[914] So I responded, and, you know, the person made an attempt at suggesting that they had some kind of standards by which they would consider hiring me. But it's kind of like, if you can fog a mirror, come and do this, because it's 100% commission.

[915] And so I started walking up and down the street in my community selling credit card processing.

[916] And so what you learn immediately in doing that is if you walk into a business, first of all, the business owner is typically there.

[917] And you walk in the door, and they can tell by how you're dressed or how you walk, whatever their pattern recognition is.

[918] And they just hate you immediately.

[919] It's like, stop wasting my time.

[920] I really am trying to get stuff done.

[921] I don't want to listen to a sales pitch.

[922] And so you have to overcome the initial "get out."

[923] And then once you engage, when you say the word credit card processing, the person's like, I already hate you because I have been taken advantage of dozens of times because you all are weasels.

[924] And so I had to figure out an algorithm to get past all those different conditions because I was still working on my other startup for the majority of my time.

[925] I was doing this part-time.

[926] And so I figured out that the industry really was built on deceit, basically on people promising things that were not reality.

[928] And so I'd walk into a business, I'd put down a $100 bill, and I'd say, I'll give you $100 for three minutes of your time.

[929] If you don't say yes to what I'm saying, I'll give you $100.

[930] And then they'd usually crack a smile and say, okay, what have you got for me, son?

[931] And so I'd sit down and I just open my book and I'd say, here's the credit card industry, here's how it works, here are the players, here's what they do, here's how they deceive you.

[932] Here's what I am.

[933] I'm no different than anyone else.

[934] It's like you're going to process your credit card.

[935] You're going to get the money, account.

[936] You're just going to get a clean statement.

[937] You'll have someone who answers the call when you ask something. You know, just the basics, like, you're okay.

[938] And people started saying yes.

[939] And then, of course, I'd go to the next business and be like, you know, Joe and Susie and whoever said yes, too.

[940] And so I built the social proof structure.

[941] And I became the number one salesperson out of 400 people nationwide doing this.

[942] And I worked, you know, half time still doing this other startup.

[943] And that's a brilliant strategy, by the way.

[944] It's very well, very well strategized and executed.

[945] I did it for nine months.

[946] And at the time, my customer base was generating, if I remember correctly, $62,504 a month in overall revenues.

[947] I thought, wow, that's amazing.

[948] If I built that as my own company, I would just make $62,000 a month of income, passively, with these merchants processing credit cards.

[949] So I thought, hmm. And so that's when I thought, I'm going to create a company. And so then I started Braintree. And the idea was that the online world was broken, because PayPal had been acquired by eBay around, I think, 2002, and eBay had not innovated much with PayPal, so it basically sat still for seven years as the software world moved along. And then Authorize.net was also a company that was relatively stagnant. So you basically had software engineers who wanted modern payment tools, but there were none available for them.

[950] And so they just dealt with software they didn't like.

[951] And so with Braintree, I thought the entry point is to build software that engineers will love.

[952] And if we could find the entry point via software, make it easy and beautiful, just a magical experience, and then provide customer service on top of that...

[953] That would be great.

[954] What I was really going after, though, was PayPal.

[955] They were the only company in payments making money.

[956] Because they had the relationship with eBay early on, people created a PayPal account, and they'd fund their account with their checking account rather than their credit cards.

[957] And then when they'd use PayPal to pay a merchant, PayPal had a cost of payment of zero, versus a payment coming from a credit card, where you have to pay the bank the fees.

[958] So PayPal's margins were 3% on a transaction, versus a typical payments company, which might make a nickel or a penny or a dime, something like that.
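
The economics can be made concrete with rough numbers. The 3% figure is from the conversation; the interchange rate below is a hypothetical ballpark, not an actual fee schedule:

```python
transaction = 100.00  # a $100 payment

# Funded from a PayPal balance or checking account: near-zero cost of payment,
# so the ~3% merchant fee is nearly all margin.
paypal_margin = transaction * 0.03

# Funded from a credit card: most of the fee passes to the bank as interchange,
# leaving the processor a small spread (rates here are illustrative).
fee_rate, interchange_rate = 0.029, 0.025
processor_margin = transaction * (fee_rate - interchange_rate)

print(f"PayPal-style margin:     ${paypal_margin:.2f}")    # about $3.00
print(f"typical processor keeps: ${processor_margin:.2f}")  # a few dimes
```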

[959] And so a new PayPal really was the model to replicate, but a bunch of companies had tried to do that.

[961] They tried to come in and build a two -sided marketplace.

[962] So get consumers to fund the checking account and the merchants to accept it.

[963] But they'd all failed, because building both sides of a two-sided marketplace at the same time is very hard.

[964] So my plan was, I'm going to build a company and get the best merchants in the whole world to use our service.

[965] Then in year five, I'm going to acquire a consumer payments company and I'm going to bring the two together.

[966] Wow.

[967] And so focus on the merchant side.

[968] Exactly.

[969] And then.

[970] get the payments company that does the consumer side, the whatever.

[971] So the other side of it.

[972] Yeah.

[973] This is the plan I presented when I was at the University of Chicago.

[974] And weirdly, it happened exactly like that.

[975] So four years in, our customer base included Uber, Airbnb, GitHub, and 37signals, now Basecamp.

[976] We had a fantastic collection of companies that represented the fastest growing, some of the fastest growing tech companies in the world.

[977] And then we met up with Venmo, and they had done a remarkable job in building product.

[978] They had done something very counterintuitive, which is make public your private financial transactions, which people previously thought should be hidden from others.

[979] And we acquired Venmo.

[980] And at that point, we had replicated the model, because now people could fund their Venmo account with their checking account, keep money in the account, and then you could just plug Venmo in as a form of payment.

[981] And so I think PayPal saw that we were getting the best merchants in the world.

[982] We had people using Venmo.

[983] They were the up-and-coming millennials at the time, who had so much influence online.

[984] And so they came in and offered us an attractive number.

[985] And my goal was not to build the biggest payments company in the world.

[986] It wasn't to try to climb the Forbes billionaire list.

[987] The objective was, I want to earn enough money

[988] so that I can basically dedicate my attention to doing something that could potentially be useful on a society-wide scale, and more importantly, that could be considered to be valuable from the vantage point of 2050, 2100, and 2500.

[989] So thinking about it on a few hundred -year time scale.

[990] And there was a certain amount of money I needed to do that, so that I didn't require the permission of anybody to do it.

[992] And what PayPal offered was sufficient for me to get that amount of money, to basically have a go.

[993] And that's when I set off to survey everything I could identify in existence to say of anything in the entire world I could do, what one thing could I do that would actually have the highest value potential for the species?

[994] And so it took me a little while to arrive at brain interfaces. But, you know, payments in themselves are revolutionary technologies that can change the world.

[995] Like, let's not sort of, let's not forget that too easily.

[996] I mean, obviously, you know this, but there's quite a few lovely folks who are now fascinated with the space of cryptocurrency.

[997] And payments are very much connected to this, but in general, just money.

[998] And many of the folks I've spoken with, they also kind of connect that to not just purely financial discussions, but philosophical and political discussions.

[999] And they see Bitcoin almost as activism, as a way to resist the corruption of centralized centers of power.

[1000] And basically, in the 21st century, decentralizing control, whether through Bitcoin or other cryptocurrencies, they see as one possible way to give power to those who live in regimes that are corrupt or not respectful of human rights, and all those kinds of things.

[1001] With all your expertise in payments, and having seen how that changed the world, what's your sense of the lay of the land for the future of Bitcoin or other cryptocurrencies, and the positive impact they may have on the world?

[1002] Yeah.

[1003] And to be clear, my communication wasn't meant to minimize payments or to denigrate them in any way.

[1004] What I was trying to communicate is that when I was surveying the world, it was an algorithm of what I could individually do.

[1005] So there are things that exist that have a lot of potential that can be done.

[1006] And then there's a filtering of how many people are qualified to do this given thing.

[1007] And then there's a further characterization that could be done of, okay, given the number of qualified people, will somebody be a unique outperformer of that group, to make something get done that otherwise couldn't get done?

[1008] So there's a process of assessing where can you add unique value in the world.

[1009] And some of that has to do with you being very, very formal and calculating here.

[1010] But some of that is just like what you sense, like part of that equation is how much passion you sense within yourself to be able to drive that through to discover the impossibilities and make them possible.

[1011] That's right.

[1012] And so at Braintree, I think we were the first company to integrate Coinbase; I think we were the first payments company to formally incorporate crypto, if I'm not mistaken.

[1013] For people who are not familiar, Coinbase is a place where you can trade cryptocurrencies.

[1014] Yeah, which was one of the only places you could.

[1015] So we were early in doing that.

[1016] And, of course, this was in the year 2013, so an eternity ago in cryptocurrency land.

[1017] I concur with the statement you made of the potential of the principles underlying cryptocurrencies.

[1018] And many of the things that they're building in the name of money and of moving value are equally applicable to the brain, and equally applicable to how the brain interacts with the rest of the world and how we would imagine doing goal alignment with people.

[1019] So it's, to me, it's a continuous spectrum of possibility.

[1020] And your question is isolated on the money.

[1021] And I think it just is basically a scaffolding layer for all of society.

[1022] So you don't see this money as particularly distinct from...

[1023] I don't.

[1024] I think we at Kernel will benefit greatly from the progress being made in cryptocurrency, because it will be a similar technology stack that we will want to use for many things we want to accomplish. And so I'm bullish on what's going on, and I think it could greatly enhance brain interfaces and the value of the brain interface ecosystem. I mean, is there something you could say about, first of all, being bullish on cryptocurrency versus fiat money? Do you have a sense that in the 21st century cryptocurrency will be embraced by governments, and change the face of governments, the structure of governments?

[1025] It's the same way I think about my diet, where previously it was conscious Brian looking at foods in certain biochemical states.

[1026] Am I hungry?

[1027] Am I irritated?

[1028] Am I depressed?

[1029] And then I choose based upon those momentary windows.

[1030] Do I eat at night when I'm fatigued and I have low willpower?

Am I going to pig out on something?

[1032] And the current monetary system is based upon human conscious decision making and politics and power and this whole mess of things.

[1033] And what I like about the building blocks of cryptocurrency is it's methodical, it's structured, it is accountable, it's transparent.

[1034] And so it introduces this scaffolding, which I think, again, is the right starting point for how we think about building next-generation institutions for society.

[1036] And that's why I think it's much broader than money.

[1037] So I guess what you're saying is Bitcoin is the demotion of the conscious mind as well.

[1038] In the same way you were talking about diet, it's like giving less priority to the ups and downs of any one particular human mind, in this case your own, and giving more power to the sort of data-driven...

[1039] Yes.

[1040] Yeah.

[1041] I think that is accurate, that cryptocurrency.

[1042] is a version of what I would call my autonomous self that I'm trying to build.

[1043] It is an introduction of an autonomous system of value exchange and of the process of value creation in society, yes.

[1044] I said there's similarities.

[1045] So I guess what you're saying is Bitcoin will somehow help me not pig out at night, or the equivalent of that. Speaking of diet, if we could just linger on that topic a little bit.

[1046] We already talked about your blog post of I fired myself.

[1047] I fired evening Brian, who is too willing to not make good decisions for the long-term well-being and happiness of the entirety of the organism.

[1048] Basically, you were pigging out at night.

[1049] But it's interesting, because I do the same.

[1050] In fact, I often eat one meal a day, like I have been this week, actually, especially when I travel. And it's funny that it never occurred to me to just look at the fact that I'm able to be much smarter about my eating decisions in the morning and the afternoon than I am at night.

[1051] So if I eat one meal a day, why not eat that one meal a day in the morning?

[1052] Like, it never occurred to me, this revolutionary idea, until you outlined it.

[1053] So maybe you can give some details on what you do. This is just you.

[1054] This is one person, Brian, who arrived at a particular thing that he does.

[1055] But it's fascinating to kind of look at this one particular case study.

[1056] So, what works for you, diet-wise?

[1057] What's your actual diet?

[1058] What do you eat?

[1059] How often do you eat?

[1060] My current protocol is basically the result of thousands of experiments and decision -making.

[1061] So I do this every 90 days.

[1062] I do the tests.

[1063] I do the cycle -throughs.

[1064] Then I measure again, and then I'm measuring it all the time.

[1065] And so, of course, I'm optimizing for my biomarkers.

[1066] I want perfect cholesterol, and I want perfect blood glucose levels and perfect DNA methylation processes, you know. I also want perfect sleep.

[1068] And so, for example, recently, in the past two weeks, my resting heart rate has been at 42 when I sleep.

[1069] And when my resting heart rate is at 42, my HRV is at its highest.

[1070] And I wake up in the morning, feeling more energized than any other configuration.

[1071] And so I know from all these processes that eating at roughly 8:30 in the morning, right after I work out, on an empty stomach, creates enough distance between that completed eating and bedtime that I have almost no digestion processes going on in my body, and therefore my resting heart rate goes very low.

[1072] And when my resting heart rate is very low, I sleep with high quality.

[1073] And so basically I've been trying to optimize the entirety of what I eat to my sleep quality.

[1074] My sleep quality then, of course, feeds into my willpower, so it creates this virtuous cycle.

[1075] And so at 8:30, what I do is I eat what I call super veggie, which is a pudding of 250 grams of broccoli, 150 grams of cauliflower, and a whole bunch of other vegetables. And then I eat what I call nutty pudding, which is...

[1076] You make the pudding yourself? Like a veggie mix, whatever thing?

[1077] Is that like a blender?

[1078] Yeah, high speed blender.

[1079] It can be made in a high-speed blender.

[1080] But basically I eat the same thing every day.

[1081] A veggie bowl, in the form of a pudding.

[1082] And then a, um, bowl in the form of nuts. And then I have... Vegan? Vegan, yes, vegan. So that's fat, and that's, like, fat and carbs, and it has some protein and so on. Does it taste good? I love it. I love it so much I dream about it. Yeah, that's awesome. And then I have a third dish, which changes every day. Today it was kale and spinach and sweet potato. And then I take about 20 supplements that hopefully constitute a perfect nutritional profile.

[1083] So what I'm trying to do is create the perfect diet for my body every single day.

[1084] And sleep is part of the optimization.

[1085] That's right.

[1086] It's like one of the things you're really tracking.

[1087] I mean, well, I have a million questions, but 20 supplements, like, what kind would you say are essential?

[1088] Because I only take Athletic Greens, athleticgreens.com slash lex.

[1089] That's like the multivitamin essentially.

That's like the lazy man's approach, you know. Like, if you don't actually want to think about shit, that's what you take. And then fish oil, and that's it. That's all I take. Yeah. You know, Alfred North Whitehead said civilization advances as it extends the number of important operations it can do without thinking about them. Yes. And so my objective in this is, I want an algorithm for perfect health that I never have to think about, and then I want that system to be scalable to anybody, so that they don't have to think about it. And right now it's expensive for me to do it, it's time-consuming for me to do it, and I have the infrastructure to do it. But the future of being human is not going to the grocery store and deciding what to eat. It's also not reading scientific papers trying to decide this thing or that thing. It's all N of 1. So it's devices on the outside and inside your body assessing in real time what your body needs, and then creating closed-loop systems for that to happen.
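
Stripped to its skeleton, a closed-loop system is just sense, compare to a target, act. A deliberately trivial sketch of that control shape, with invented thresholds; this illustrates the idea only, and is not medical guidance:

```python
def adjust_intake(glucose_mg_dl: float, target: float = 90.0, band: float = 15.0) -> str:
    """Trivial control rule: compare a sensed value to a target and suggest an action."""
    if glucose_mg_dl > target + band:
        return "skip the fast carbs this meal"
    if glucose_mg_dl < target - band:
        return "add slow carbs"
    return "no change"

# Simulated sensor readings over a day (mg/dL).
for reading in [85, 112, 70, 95]:
    print(reading, "->", adjust_intake(reading))
```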

[1091] Yeah, so right now you're doing the data collection and you're being the scientist. It'd be much better if the data collection were essentially done for you, and you could outsource the science to another scientist who's doing the N-of-1 study of you.

[1092] That's right, because every time I spend time thinking about this, or executing on it, I'm spending less time thinking about building Kernel or the future of being human.

[1093] And so it's, we just all have this, the budget of our capacity on an everyday basis, and we will scaffold our way up out of this.

[1094] And so, yeah, hopefully what I'm doing is really, it serves as a model that others can also build on.

[1095] That's why I wrote about it is hopefully people can then take it and prove upon it.

[1096] I hold nothing sacred.

[1097] I change my diet almost every day based upon some new test results or science or something like that.

[1098] Can you maybe elaborate on the sleep thing?

[1099] Why is sleep so important?

[1100] And why, presumably, like, what does good sleep mean to you?

[1101] I think sleep is a contender for being the most powerful health intervention in existence.

[1102] It's a contender.

[1103] I mean, it's magical what it does if you're well rested and what your body can do.

[1104] And, I mean, for example, I know what happens when I eat close to my bedtime.

[1105] And I've done a systematic study for years looking at like 15 minute increments on time of day on where I eat my last meal.

[1106] My willpower is directly correlated to the amount of deep sleep I get.

[1107] So my ability to not binge eat at night when Rascal Bryan's out and about is based upon how much deep sleep I got the night before.
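
The systematic study he describes, last meal logged in 15-minute increments against that night's measured deep sleep, boils down to a correlation over a personal dataset. A minimal sketch, assuming a hypothetical log format and made-up values:

```python
# Sketch of the self-study described above: log the last meal of the day in
# 15-minute (0.25 h) increments alongside that night's measured deep sleep,
# then check how strongly the two move together. All numbers are made up.
import statistics

# (last_meal_hour, deep_sleep_minutes) -- hypothetical nightly log
log = [(8.5, 110), (13.0, 95), (17.25, 70), (19.5, 55), (20.75, 40)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hours, deep = zip(*log)
print(pearson(hours, deep))  # close to -1: later last meals, less deep sleep
```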

[1108] Yeah.

[1109] And so there's some, there's a lot to that.

[1110] Yeah.

[1111] And so I just, I've seen it manifest itself.

[1113] And so I think the way I'd summarize this is, in society, we've had this myth where we tell stories, for example, of entrepreneurship, where this person was so amazing.

[1114] They stayed at the office for three days and slept under their desk.

[1115] And we say, wow, that's amazing.

[1116] That's amazing.

[1117] And now I think we're headed towards a state where we'd say, that's primitive and really not a good idea on every level, and so the new mythology is going to be the exact opposite. Yeah. By the way, just to sort of maybe push back a little bit on that idea, uh, did you sleep under your desk, Lex? Well, yeah, a lot. I'm a big believer in that, actually. I'm a big believer in chaos and in giving in to your passion, and sometimes doing things that are out of the ordinary. Like, not trying to optimize health for certain periods of time, in lieu of your passions, is a signal to yourself that you're willing to throw everything away for it.

[1118] So I think what you're referring to is how to have good performance for prolonged periods of time.

[1119] I think there's moments in life you need to throw all of that away, all the plans away, all the structure away.

[1120] So the, I don't, this, I'm not sure I have an eloquent way describing exactly what I'm talking about, but it all depends on different people, on people, people are different, but there's a danger of over optimization to where you don't just give in to the madness of the way your brain flows.

[1121] I mean, to push back on my pushback: it's nice to have, like, you know, a setup where the foundations of your brain are not messed with.

[1123] So you have a fixed foundation where the diet is fixed, where the sleep is fixed, and all of that is optimal.

[1124] And the chaos happens in the space of ideas as opposed to the space of biology.

[1125] But, you know, that requires real discipline and forming habits.

[1126] There's some aspect to which some of the best days, and weeks of my life have been yeah sleeping under a desk kind of thing and i don't i'm not too willing to let go of things that empirically worked for things that work in theory and so i'm i'm again i'm absolutely with you on sleep also i'm with you on sleep conceptually but i'm also very humbled to understand that for different people, good sleep means different things.

[1127] I'm very hesitant to trust science on sleep.

[1128] I think you should also be a scholar of your own body.

[1129] Again, an experiment of N of 1.

[1130] I'm not so sure that a full night sleep is great for me. There is something about that power nap that I just have not fully studied yet.

[1131] But that nap is something special.

[1132] I'm not sure I found the optimal thing.

[1133] So, like, there's a lot to be explored to what is exactly optimal amount of sleep, optimal kind of sleep, combined with diet and all those kinds of.

[1134] I mean, that all maps to the sort of data-leads-to-truth idea, exactly what you're referring to.

[1135] Here's a data point for your consideration.

[1136] Yes.

[1137] The progress in biology over the past, say, decade has been stunning.

[1138] Yes.

[1139] And it now appears as if we will be able to replace our organs via xenotransplantation.

[1140] And so we probably have a path to replace and regenerate every organ of your body except for your brain.

[1141] You can lose your hand and your arm and a leg.

[1142] You can have an artificial heart.

[1143] You can't operate without your brain.

[1144] And so when you make that trade-off decision of whether you're going to sleep under the desk or not and go all out for a four-day marathon, right?

[1145] There's a cost -benefit trade -off of what's going, what's happening to your brain in that situation.

[1146] We don't know the consequences of modern day life on our brain.

[1147] We don't, it's the most valuable organ in our existence.

[1148] And we don't know what's going on if we, and how we're treating it today, with stress and with sleep and with dietary.

[1149] And to me, then, if you say that you're trying to optimize life for whatever things you're trying to do, with the progress in anti-aging and biology, the game is very soon going to become different than what it is right now, with organ rejuvenation and organ replacement.

[1150] And I would conjecture that we will value the health status of our brain above all things.

[1151] Yeah, no, absolutely.

[1152] Everything you're saying is true, but we die.

[1153] We die pretty quickly.

[1154] Life is short.

[1155] And I'm one of those people that I would rather die in battle than stay safe at home.

[1156] It's like, yeah, you look at kind of, there's a lot of things that you can read.

[1157] reasonably say this is the smart thing to do that can prevent you that becomes conservative that can prevent you from fully embracing life I think ultimately you can be very intelligent and data driven and also embrace life but I err on the side of embracing life it's very it takes a very skillful person to not sort of that hovering parent that says no you know what there's a 3 % chance that if you go out if you go out by yourself and play you're going to die get run over by a car come to a slow or a sudden end and I am more a supporter of just go out there if you die you die and that's a balance you have to strike I think there's a balance of strike in long term optimization and short term freedom for me for a programmer for a programming mind I tend to over -optimization and I'm very cautious and afraid of that, to not over -optimized and thereby be overly cautious, sub -optimally cautious about everything I do.

[1158] And the ultimate thing I'm trying to optimize for, it's funny you said, like sleep and all those kinds of things.

[1159] I tend to think, you're being more precise than I am, but I think I tend to want to minimize stress.

[1160] which everything feeds into, from sleep and all those kinds of things. But I worry that whenever I'm trying to be too strict with myself, then the stress goes up when I don't follow the strictness. And so, it's weird, there are so many variables in the objective function that it's hard to get right. And sort of not giving a damn about sleep, and not giving a damn about diet, is a good thing to inject in there every once in a while, for somebody who's trying to optimize everything.

[1161] But that's me just trying to, like, it's exactly like you said.

[1162] You're just a scientist.

[1163] I'm a scientist of myself.

[1164] You're a scientist of yourself.

[1165] It'd be nice if somebody else was doing it and had much better data than, because I don't trust my conscious mind.

[1166] And I pigged out last night on some brisket in L.A. that I regret deeply.

[1167] So, there's no point to anything I just said.

[1168] What is the nature of your regret on the brisket?

[1170] Is it, do you wish you hadn't eaten it entirely?

[1171] Is it that you wish you hadn't eaten as much as you did?

[1172] Is it that?

[1173] I think, well, the most regret, I mean, if we want to be specific, I drank way too much, like, diet soda.

[1174] My biggest regret is, like, having drunk so much diet soda. That's the thing that really was the problem.

[1175] I had trouble sleeping because of that, because I was like programming and then I was editing.

[1176] So I was up late at night, and then I had to get up to go pee a few times, and it was just a mess.

[1177] A mess of a night.

[1178] Well, it's not really a mess, but like it's so many, it's like the little things.

[1179] I know, if I just eat, drink a little bit of water, and that's it.

[1180] And there's a certain, all of us have perfect days that we know diet wise and so on that's good to follow you, feel good.

[1181] I know what it takes for me to do that.

[1182] didn't fully do that, and thereby, because there's an avalanche effect where the other sources of stress, all the other to -do items I have, pile on my failure to execute on some basic things that I know make me feel good, and all of that combines to, you know, to create, to create a mess of a day.

[1183] But some of that chaos, you know, you have to be okay with it, but some of it, I wish was a little bit more optimal.

[1184] And your ideas about eating in the morning are quite interesting as an experiment to try.

[1186] Can you elaborate, are you eating once a day?

[1187] Yes.

[1188] In the morning, and that's it.

[1189] Can you maybe speak to how that, it's funny, you spoke about the metrics of sleep, but you also, you know, run a business, you're incredibly intelligent, and most of your happiness and success relies on you thinking clearly.

[1190] So how does that affect your mind and your body in terms of performance?

[1191] So not just sleep, but actual like mental performance.

[1192] As you were explaining your objective function, for example, in the criteria you are including that you like certain neurochemical states.

[1193] Like, you like feeling like you're living life, that life has enjoyment, that sometimes you want to disregard certain rules to have a moment of passion, of focus.

[1194] There's this architecture of of the way Lex is, which makes you happy as a story you tell, as something you kind of experience.

[1195] Maybe the experience is a bit more complicated, but it's in this idea you have, this is a version of you.

[1196] And the reason why I maintain the schedule I do is I've chosen a game to say, I would like to live a life where I care more about what intelligent people who live far in the future, hundreds of years from now, think of me than I do today.

[1197] That's the game I'm trying to play.

[1198] And so, therefore, the only thing I really care about on this optimization is trying to see past myself, past my limitations, using zeroth-principles thinking, pulling myself out of this contextual mesh we're in right now and saying, what will matter 100 years from now and 200 years from now?

[1199] What are the big things really going on that are defining reality?

[1200] And I find that if I were to hang out with Diet Soda Lex, and Diet Soda Brian were to play along with that, and my deep sleep were to get crushed as a result, my mind would not be on what matters in 100 years or 200 years or 300 years.

[1201] I would be irritable.

[1202] I would be in a different state.

[1203] and so it's just gameplay selection it's what you and i have chosen to think about it's what we've chosen to uh work on and this is why i'm saying that no generation of humans have ever been afforded the opportunity to look at their lifespan and contemplate that they will have the possibility of experiencing an evolved form of consciousness that is undoneifiable that would fall in a zeroth category of potential, that to me is the most exciting thing in existence.

[1204] And I would not trade any momentary neurochemical state right now in exchange for that.

[1205] I would, I'd be willing to deprive myself of all momentary joy in the pursuit of that goal, because that's what makes me happy.

[1206] That's brilliant.

[1207] But I'm a bit, I just looked it up.

[1208] I just looked up the Braveheart speech by William Wallace.

[1209] I don't know if you've seen a fight and you may die, run and you'll live at least a while, and dying in your beds many years from now, would you be willing to trade all the days from this day to that for one chance, just one chance, picture of Mel Gibson saying this, to come back here and tell our enemies that they may take our lives with growing excitement, but they'll never take our freedom.

[1210] I get excited every time I see that in the movie.

[1211] But that's kind of how I approach life.

[1212] Do you think they were tracking their sleep?

[1213] They were not tracking their sleep, and they ate way too much brisket, and they were fat, unhealthy, died early, and were primitive, but there's something in my ape brain that's attracted to that, even though most of my life is fully aligned with the way you see yours.

[1214] Part of it is for comedy, of course, but part of it is, like, I'm almost afraid of over-optimization.

[1215] Really what you're saying, though, if we're looking at this, let's say from a first principle's perspective, when you read those words, they conjure up certain life experiences, but you're basically saying, I experience a certain neurotransmitter state when these things are in action.

[1216] Yeah.

[1217] That's all you're saying.

[1218] So whether it's that or something else, you're just saying you have a selection for how your state for your body.

[1219] And so if you're an engineer of consciousness, that should just be engineerable.

[1220] Yeah.

[1221] And that's just triggering certain chemical reactions.

[1222] Yeah.

[1223] So it doesn't mean they have to be mutually exclusive.

[1224] You can have that and experience that and also not sacrifice long-term health.

[1225] And I think that's the potential of where we're going is we don't have, we don't have to assume they are tradeoffs that must be had.

[1226] Absolutely.

[1227] And so I guess, from my particular brain, it's useful to have the outlier experiences that also come along with the illusion of free will, where I chose those experiences, that make me feel like it's freedom.

[1229] Listen, going to Texas made me realize, so I was, I still am, but I lived in Cambridge, at MIT, and I never felt like home there.

[1230] I felt like home in the space of ideas with the colleagues, like when I was actually discussing ideas. But there is something about the constraints, how cautious people are, how much they valued kind of material success, career success. When I showed up to Texas, it felt like I belonged. That was very interesting. But that's my neurochemistry, whatever the hell that is. It's probably rooted in the fact that I grew up in the Soviet Union. It was such a constrained system that you really deeply value freedom, and you always want to escape the man and the control of centralized systems. I don't know what it is.

[1231] But that's, but at the same time, I love strictness.

[1232] I love the dogmatic authoritarianism of diet, of like the same habit, exactly the habit you have.

[1233] I think that's actually when bodies perform optimally, my body performs ultimately.

[1234] So, balancing those two, I think, if I have the data: every once in a while, party with some wild people, but most of the time, eat once a day, perhaps in the morning. I'm going to try that.

[1235] It might be very interesting.

[1236] But I'd rather, I'd rather not try it.

[1237] I'd rather have the data that tells me to do it.

[1238] But in general, eating once a day, you're able to think deeply about stuff?

[1239] Like, that's a concern that people have is like, you know, does your energy wane, all those kinds of things?

[1240] Do you find that it's, especially because it's unique, it's vegan as well?

[1241] So you find that you're able to have a clear mind and focus, physically and mentally, throughout?

[1242] Yeah. And I find, like, my personal experience in thinking about hard things is, oftentimes I feel like I'm looking through a telescope, like I'm aligning two or three telescopes, and you kind of have to close one eye and move it back and forth a little bit and find just the right alignment.

[1243] Then you find just a sneak peek at the thing you're trying to find, but it's fleeting.

[1244] If you move just one little bit, it's gone.

[1245] And oftentimes, the ideas I value the most are like that.

[1246] They're so fragile and fleeting and slippery and elusive.

[1247] And it requires a sensitivity to thinking and a sensitivity to maneuver through these things.

[1248] If I concede to a world where I'm on my phone texting, I'm also on social media, I'm also doing 15 things at the same time because I'm running a company, and I'm also feeling terrible from the last night, it all just comes crashing down, and the quality of my thoughts goes to zero. I'm just a functional person who responds to basic-level things, but I don't feel like I'm doing anything interesting. I think that's a good word, sensitivity, because that's what, uh, thinking deeply feels like: you're sensitive to the fragile thoughts, and you're right, all those other distractions kind of dull your ability to be sensitive to the fragile thoughts. It's a really good word. Out of all the things you've done, you've also climbed Mount Kilimanjaro. Is this true? It's true. Uh, why, and how, and what do you take from that experience? I guess the backstory is relevant, because, uh, in that moment it was the darkest time in my life.

[1249] I was ending a 13-year marriage.

[1250] I was leaving my religion.

[1251] I sold Braintree, and I was battling depression where I was just at the end.

[1252] And I got invited to go to Tanzania as part of a group that was raising money to build clean water wells.

[1253] And I had made some money from Braintree, and so I was able to donate $25,000.

[1254] And it was the first time I had ever had money to donate outside of paying tithing in my religion.

[1255] And it was such a phenomenal experience to contribute something meaningful to someone else in that form.

[1256] And as part of this process, we were going to climb the mountain.

[1257] And so we went there and we saw the clean water wells we were building.

[1258] We spoke to the people there.

[1259] And it was very energizing.

[1260] And then we climbed Kilimanjaro.

[1261] And I came down with a stomach flu on day three.

[1262] And I also had altitude sickness.

[1263] But I became so sick that on day four, or maybe it was day five, I came into base camp at 15,000 feet, just, you know, going to the bathroom on myself and, like, falling all over.

[1264] It was just, I was just a disaster.

[1265] I was so sick.

[1266] So stomach flu and altitude sickness.

[1267] Yeah, and I just was destroyed from the situation.

[1268] And plus, psychologically, one of the lowest points. Yeah. And I think that was probably a big contributor. I was just smoked as a human, just absolutely done. And I had three young children, and so I was trying to reconcile, like, whether I live or not is not my decision by itself. I'm now intertwined with these three little people, and I have an obligation: whether I like it or not, I need to be there. And so it felt like I was just stuck in a straitjacket. And I had to decide whether I was going to summit the next day with the team, and it was a difficult decision, because once you start hiking, there's no way to get off the mountain. And midnight came, and our guide came in, and he said, where are you at? And I said, I think I'm okay, I think I can try. And so we went. And from midnight until I made it to the summit at 5 a.m., it was one of the most transformational moments of my existence.

[1269] And the mountain became my problem.

[1270] It became everything that I was struggling with.

[1271] And when I started hiking, the pain got so ferocious that it was kind of like this.

[1272] It became so ferocious that I turned my music to Eminem.

[1273] And it was, you know, Eminem was, he was the only person in existence that spoke to my soul.

[1274] And it was something about, you know, his anger and his vibrancy and his multidimensional way.

[1275] He's the only person who I could turn on and I could say, ah, like, I feel some relief.

[1276] I turned, I turned on Eminem and I made it to the summit after five hours.

[1277] But just, just a hundred yards from the top, I was with my guide, Ike, and I started getting very dizzy, and I felt like I was going to fall backwards off this cliff area we were on.

[1280] I was like, this is dangerous.

[1281] And he said, look, Brian, I know where you're at.

[1282] I know what you're at.

[1283] And I can tell you you've got it in you.

[1284] So I want you to look up, take a step, take a breath, and look up, take a breath, and take a step.

[1285] And I did.

[1286] And I made it.

[1287] And so I got there and I just sat down with him at the top. I just cried like a baby, broke down. Yeah, I just lost it. And so, you know, he let me do my thing, and then we pulled out the pulse oximeter and he measured my blood oxygen levels, and it was like 50-something percent, and it was danger zone. So he looked at it, and I think he was really alarmed that I was in this situation. And so he said, we can't get a helicopter here and we can't get you emergency evacuated, you've got to go down, you've got to hike down to 15,000 feet to get to base camp. And so we went down the mountain, I got back down to base camp, and again, that was pretty difficult. And then they put me on a stretcher, this metal stretcher with this one wheel, and a team of six people wheeled me down the mountain, and it was pretty torturous.

[1289] I'm very appreciative they did also the trail is very bumpy so they'd go over these big rocks and so my head would just slam against this metal thing for hours and so I just felt awful plus I'd get my head slammed every couple seconds.

[1290] So the whole experience was really a life -changing moment.

[1291] that's it that was the demarcation of me basically building your life of basically i said i'm going to reconstruct brian my my understanding of reality my existential realities what i want to go after and i try i mean as much as that's possible as a human but that's when i set out to rebuild everything was it the struggle of that i mean there's also just like the romantic poetic, it's a freaking mountain.

[1292] He's a man in pain, psychological and physical, struggling up a mountain.

[1293] But it's just struggle, just in the face of, just pushing through in the face of hardship or nature too, something much bigger than you.

[1294] Was that the thing that just clicked?

[1295] For me, it felt like I was just locked in with reality.

[1296] It was a death match.

[1297] It was, in that moment, one of us is going to die.

[1298] So you were pondering death, like not surviving.

[1299] Yeah.

[1300] And that was the moment.

[1301] And it was, the summit to me was, I'm going to come out on top and I can do this.

[1302] And giving in was, it's like, I'm just done.

[1303] And so I did, I locked in.

[1304] And that's why, yeah, mountains are magical to me. I didn't expect that.

[1305] I didn't design that.

[1306] I didn't know that was going to be the case.

[1307] It would not have been something I would have anticipated.

[1308] But you are not the same man afterwards.

[1309] Is there advice you can give to young people today who look at your story, which is successful in many dimensions?

[1310] Advice you can give to them about how to be successful in their career, successful in life, whatever path they choose.

[1311] Yes. I would say, listen to advice and see it for what it is, a mirror of that person, and then map it, knowing that your future is going to be in a zeroth-principles land.

[1312] And so what you're hearing today is a representation of what may have been the right principles to build upon previously, but they're likely depreciating very fast.

[1313] And so I am a strong proponent that people ask for advice, but they don't take advice.

[1314] So how do you take advice properly?

[1315] It's in the careful examination of the advice.

[1316] It's actually the person makes a statement about a given thing somebody should follow.

[1317] The value is not doing that.

[1318] The value is understanding the assumption stack they built, the assumption and knowledge stack they built around that body of knowledge.

[1319] That's the value.

[1320] It's not doing what they say.

[1321] Considering the advice, but digging deeper to understand the assumption stack, like the full person.

[1322] It's almost, I mean, this is deep empathy, essentially, to understand the journey of the person that arrived at the advice.

[1323] And the advice is just the tip of the iceberg.

[1324] That's right.

[1325] That ultimately is not the thing that gives you the value.

[1326] That's right.

[1327] It could be the right thing to do.

[1328] It could be the complete wrong thing to do depending on the assumption stack.

So you need to investigate the whole thing.

[1330] Have there been people in your startup and business journey that have served that role of advice giver, that have been helpful?

[1331] Or do you feel like your journey felt like a lonely path or was it one that was, of course, we're all, we're all born and die alone?

[1332] But do you fundamentally remember the experience as one where you leaned on people at a particular moment of a time that changed everything?

[1333] Yeah, the most significant moments of my memory, for example, like on Kilimanjaro, when Ike, some person I had never met in Tanzania, was able to, in that moment, apparently see my soul when I was in this death match with reality.

[1334] And he gave me the instructions: look up, step. And so there are magical people in my life that have done things like that, and I suspect they probably don't know it. I probably should be better at identifying those things. But yeah, I suppose the wisdom I would aspire to is to have the awareness and the empathy to be that for other people.

[1335] Not a retail advertiser of advice, of tricks for life, but deeply meaningful and empathetic in a one-on-one context with people, where it really can make a difference.

[1336] Yeah, I actually kind of experience that. I think about that sometimes.

[1337] You know, you have, like, an 18-year-old kid who could come up to you, and it's not always obvious, it's not always easy to really listen to them.

[1339] Like not the facts, but like see who that person is.

[1340] I think people say that about being a parent: you know, you don't want to be the authority figure, in the sense that you really want to consider that there's a special, unique human being there, with a unique brain that may be, um, brilliant in ways that you are not understanding, that you'll never be, and really try to hear that.

[1341] So when giving advice, there's something to that.

[1342] So both sides should be deeply empathetic about the assumptions stack.

[1343] I love that terminology.

[1344] What do you think is the meaning of this whole thing?

[1345] Of life.

[1346] Why the hell are we here, Bryan Johnson?

[1347] We've been talking about brains and studying brains, and you had this very eloquent way of describing life on earth as an optimization problem of the cost of intelligence going to zero at first through the evolutionary process and then eventually through building, through our technology, building more and more intelligent systems.

[1348] Do you ever ask yourself why it's doing that?

[1349] Yeah, I think the answer to this question, again, the information value is more in the mirror it provides of that person, which is a representation of the technological, social, political context of the time.

[1350] So if you ask this question 100 years ago, you would get a certain answer that reflects that time period.

[1351] Same thing would be true of a thousand years ago.

[1352] It's rare, it's difficult for a person to pull themselves out of their contextual awareness and offer truly original response.

[1353] And so knowing that I am contextually influenced by the situation, that I am a mirror for our reality, I would say that in this moment, I think the real game going on is that evolution built a system of scaffolding intelligence that produced us.

[1354] We are now building intelligent systems that are scaffolding higher dimensional intelligence.

[1355] It's developing more robust systems of intelligence in doing in that process with the cost going to zero then the meaning of life becomes goal alignment which is the negotiation of our conscious and unconscious existence and then i'd say the third thing is if we're thinking that we want to be explorers is our technological progress is getting to a point where we could aspirationally say, we want to figure out what is really going on, really going on.

[1356] Because does any of this really make sense?

[1357] Now, we may be 100, 200, 500,000 years away from being able to poke our way out of whatever is going on.

[1358] But it's interesting that we could even state an aspiration to say, we want to poke at this question.

[1359] but I'd say in this moment of time, the meaning of life is that we can build a future state of existence that is more fantastic than anything we could ever imagine.

[1360] The striving for something more amazing.

[1361] And that defies expectations, that we would consider bewildering, and all the things that... And I guess the last thing: if there are multiple meanings of life, it would be infinite games. You know, James Carse wrote the book Finite and Infinite Games. The only game to play right now is to keep playing the game. And so this goes back to the Lex algorithm of diet soda and brisket and pursuing the passion. What I'm suggesting is, there's a moment here where we can contemplate playing infinite games, therefore it may make sense to err on the side of making sure one is in a situation to be playing infinite games, if that opportunity arises.

[1362] So it's just the landscape of possibilities changing very, very fast, and therefore our old algorithms of how we might assess risk assessment and what things we might pursue and why those assumptions may fall away very quickly.

[1363] Well, I think I speak for a lot of people when I say that the game you, Mr. Brian Johnson, have been playing is quite incredible.

[1364] Thank you so much for talking today.

[1365] Thanks, Lex.

[1366] Thanks for listening to this conversation with Brian Johnson, and thank you to Four Sigmatic, NetSuite, Grammarly, and ExpressVPN.

[1367] Check them out in the description to support this podcast.

[1368] And now let me leave you with some words from Diane Ackerman.

[1369] Our brain is a crowded chemistry lab, bustling with non-stop neural conversations.

[1370] Thank you for listening, and hope to see you next time.