Acquired XX
[0] I will say, David, I would love to have NVIDIA's full production team every episode.
[1] It was nice not having to worry about turning the cameras on and off and making sure that nothing bad happened myself while we were recording this.
[2] Yeah, just the gear.
[3] I mean, the drives that came out of the camera.
[4] All right, red cameras for the home studio starting next episode.
[5] Yeah, great.
[6] All right, let's do it.
[7] Who got the truth?
[8] Is it you?
[9] Is it you?
[10] Is it you?
[11] Who got the truth now Is it you, is it you, is it you?
[12] Sit me down, say it straight, another story.
[13] Welcome to this episode of Acquired, the podcast about great technology companies and the stories and playbooks behind them.
[14] I'm Ben Gilbert.
[15] I'm David Rosenthal.
[16] And we are your hosts.
[17] Listeners, just so we don't bury the lede, this episode was insanely cool for David and me. Yeah.
[18] After researching Nvidia for something like 500 hours over the last two years, we flew down to NVIDIA headquarters to sit down with Jensen himself.
[19] And Jensen, of course, is the founder and CEO of Nvidia, the company powering this whole AI explosion.
[20] At the time of recording, NVIDIA is worth $1.1 trillion and is the sixth most valuable company in the entire world.
[21] And right now is a crucible moment for the company.
[22] Expectations are set high.
[23] I mean, sky high.
[24] They have about the most important strategic position and lead against their competitors of any company that we've ever studied.
[26] But here's the question that everyone is wondering.
[27] Will Nvidia's insane prosperity continue for years to come?
[28] Is AI going to be the next trillion dollar technology wave?
[29] How sure are we of that?
[30] And if so, can Nvidia actually maintain their ridiculous dominance as this market comes to take shape?
[31] So Jensen takes us down memory lane with stories of how they went from graphics to the data center to AI, and how they survived multiple near-death experiences.
[32] He also has plenty of advice for founders, and he shared an emotional side to the founder journey toward the end of the episode.
[33] Yeah, I got new perspective on the company and on him as a founder and a leader just from doing this, despite, you know, thinking we knew everything before we came in, and it turned out we didn't.
[34] Turns out the protagonist actually knows more.
[35] Yes.
[36] All right, well, listeners, join the Slack.
[37] There is incredible discussion of everything about this company, AI, the whole ecosystem, and a bunch of other episodes that we've done recently going on in there right now.
[39] So that is acquired.fm slash slack.
[40] We would love to see you.
[41] And without further ado, this show is not investment advice.
[42] David and I may have investments in the companies we discuss.
[43] And this show is for informational and entertainment purposes only.
[44] On to Jensen.
[45] So Jensen, this is Acquired.
[46] So we want to start with story time.
[47] So we want to wind the clock all the way back to, I believe it was 1997. You're getting ready to ship the RIVA 128, which is one of the largest graphics chips ever created in the history of computing.
[48] It is the first fully 3D accelerated graphics pipeline for a computer.
[49] And you guys have about six months of cash left.
[50] And so you decide to do the entire testing in simulation rather than ever receiving a physical prototype.
[51] You commission the production run sight unseen with the rest of the company's money.
[52] So you're betting it all right here on the RIVA 128.
[53] It comes back, and of the 32 DirectX blend modes, it supports eight of them.
[54] And you have to convince the market to buy it, and you've got to convince developers not to use anything but those eight blend modes.
[55] Walk us through what that felt like.
[56] The other 24 weren't that important.
[57] Okay, so wait, wait, first question.
[58] Was that the plan all along?
[59] Like, when did you realize that was the only way it was going to work?
[60] I realized, I didn't learn about it until it was too late.
[61] We should have implemented all 32, yeah.
[62] But we built what we built, and so we had to make the best of it.
[63] That was really an extraordinary time.
[64] Remember, RIVA 128 was NV3.
[65] NV1 and NV2 were based on forward texture mapping, no triangles but curves, and it tessellated the curves.
[66] And because we were rendering higher-level objects, we essentially avoided using Z-buffers, and we thought that that was going to be a good rendering approach, and it turns out to have been completely the wrong answer.
[67] And so what RIVA 128 was, was a reset of our company.
[68] Now, remember, at the time that we started the company in 1993, we were the only consumer 3D graphics company ever created, and we were focused on transforming the PC into an accelerated PC, because at the time, Windows was really a software-rendered system.
[69] And so anyways, RIVA 128 was a reset of our company, because by the time that we realized we had gone down the wrong road, Microsoft had already rolled out DirectX.
[70] It was fundamentally incompatible with NVIDIA's architecture.
[71] 30 competitors had already shown up, even though we were the first company at the time that we were founded.
[72] So the world was a completely different place.
[73] The question about what to do as a company strategy, at that point, I would have said that we had made a whole bunch of wrong decisions, but on that day that mattered, we made a sequence of extraordinarily good decisions.
[74] And that time, 1997, was probably NVIDIA's best moment.
[75] And the reason for that was our backs were up against the wall.
[76] We were running out of time, we were running out of money, and a lot of employees were running out of hope.
[77] And the question is, what do we do?
[78] Well, the first thing that we did was we decided that, look, DirectX is now here, we're not going to fight it.
[79] Let's go figure out a way to build the best thing in the world for it.
[80] And RIVA 128 is the world's first fully accelerated, hardware-accelerated pipeline for rendering 3D.
[81] And so the transform, the projection, every single element all the way down to the frame buffer was completely hardware-accelerated.
[82] We implemented a texture cache.
[83] We took the bus limit, the frame buffer limit, to as big as physics could afford at the time.
[84] We made the biggest chip that anybody had ever imagined building.
[85] We used the fastest memories.
[86] Basically, if we built that chip, there could be nothing that could be faster.
[87] And we also chose a cost point that was substantially higher than the highest price that we thought any of our competitors would be willing to go to.
[88] If we built it right, we accelerated everything, we implemented everything in DirectX that we knew of, and we built it as large as we possibly could, then obviously nobody could build something faster than that.
[89] Today, in a way, you kind of do that here at NVIDIA, too.
[90] You were a consumer products company back then, right?
[91] It was end consumers who were going to have to pay the money to buy them.
[92] That's right.
[93] But we observed that there was a segment of the market where, at the time, the PC industry was still coming up, and it wasn't good enough.
[94] Everybody was clamoring for the next fastest thing.
[95] And so if your performance was 10 times higher this year than what was available, there was a whole large market of enthusiasts who we believed would have gone after it.
[96] And we were absolutely right, that the PC industry had a substantially large enthusiast market that would buy the best of everything.
[97] To this day, it kind of remains true.
[98] And for certain segments of the market, the technology is never good enough, like 3D graphics. We chose the right technology: 3D graphics is never good enough.
[99] And what we said back then was that 3D gives us a sustainable technology opportunity, because it's never good enough.
[100] And so your technology can keep getting better.
[101] We chose that.
[102] We also made the decision to use this technology called emulation.
[103] There was a company called IKOS, and on the day that I called them, they were just shutting the company down because they had no customers.
[104] And I said, hey, look, I'll buy whatever you have in inventory, and, you know, no promises necessary.
[105] And the reason why we needed that emulator is because, if you figure out how much money we had: if we taped out a chip and got it back from the fab and then started working on our software, by the time we found all the bugs through the software and taped out the chip again,
[106] well, we would have been out of business already.
[107] And so I knew...
[108] And your competitors would have caught up.
[109] Well, not to mention we would have been out of business.
[110] Who cares?
[111] Exactly.
[112] So if you're going to be out of business anyways, that plan obviously wasn't the plan.
[113] The plan that companies normally go through, which is build the chip, write the software, fix the bugs, tape out the new chip, so on and so forth, that method wasn't going to work.
[114] And so the question is, if we only had six months and you get to tape out just one time, then obviously you're going to tape out a perfect chip.
[115] So I remember having a conversation with our leaders, and they said, but Jensen, how do you know it's going to be perfect?
[116] And I said, I know it's going to be perfect, because if it's not, we'll be out of business.
[117] And so let's make it perfect.
[118] We get one shot.
[119] We essentially virtually prototyped the chip by buying this emulator.
[120] And Dwight and the software team wrote our entire software stack and ran it on this emulator, and just sat in the lab waiting for Windows to paint, you know, and it was like 60 seconds for a frame or something like that.
[121] I actually think that it was an hour per frame, something like that.
[122] And so we would just sit there and watch it paint.
[123] And so on the day that we decided to tape out, I assumed that the chip was perfect.
[124] And everything that we could have tested, we tested in advance and told everybody, this is it, we're going to tape out the chip, it's going to be perfect.
[125] Well, if you're going to tape out a chip and you know it's perfect, then what else would you do?
[126] That's actually the good question.
[127] If you knew that you hit enter, you taped out a chip and you knew it was going to be perfect, then what else would you do?
[128] Well, the answer, obviously, go to production.
[129] And marketing blitz.
[130] Yeah, yeah.
[131] And developer relations.
[132] Everything off.
[133] Kick everything off.
[134] Because you got a perfect chip.
[135] And so we got in our head that we have a perfect chip.
[136] How much of this was you and how much of this was like your co-founders, the rest of the company, the board?
[137] Was everybody telling you you were crazy?
[138] No, everybody was clear we had no shot.
[139] Not doing it would be crazy.
[140] Because otherwise you want to go home.
[141] You're going to be out of business anyways.
[142] So anything aside from that is crazy.
[143] So it seemed like a fairly logical thing.
[144] And quite frankly, right now, as I'm describing it,
[145] you were probably thinking, yeah, it's pretty sensible.
[146] Well, it worked.
[147] Yeah.
[148] And so we taped that out and went directly to production.
[149] So is the lesson for founders out there, when you have conviction on something, like the RIVA 128 or CUDA, go bet the company on it?
[150] And this keeps working for you.
[151] So it seems like your lesson learned from this is, yes, keep pushing all the chips in because so far it's worked every time.
[152] No. How do you think about that?
[153] No, no. When you push your chips in, I know it's going to work.
[154] Notice, we assume that we taped out a perfect chip.
[155] The reason why we taped out a perfect chip is because we emulated the whole chip before we taped it out.
[156] We developed the entire software stack.
[157] We ran QA on all the drivers and all the software.
[158] We ran all the games we had.
[159] We ran every VGA application we had.
[160] And so when you push your chips in, what you're really doing is when you bet the farm, you're saying, I'm going to take everything in the future, all the risky things, and I'm going to pull it in advance.
[161] And that is probably the lesson.
[162] And to this day, everything that we can pre-fetch, everything in the future that we can simulate today, we pre-fetch it.
[163] We talk about this a lot.
[164] We were just talking about this on our Costco episode.
[165] You want to push your chips in when you know it's going to work.
[166] So every time we see you make a bet-the-company move, you've already simulated it.
[167] Yeah, yeah, yeah.
[168] Do you feel like that was the case with CUDA?
[169] Yeah.
[170] In fact, before there was CUDA, there was Cg.
[171] Right.
[172] And so we were already playing with the concept of how do we create an abstraction layer above our chip that is expressible in a higher-level language, a higher-level expression.
[173] And how can we use our GPU for things like CT reconstruction, image processing?
[174] We were already down that path.
[175] And so there was some positive feedback, and some intuitive positive feedback,
[176] that we thought general-purpose computing could be possible.
[177] And if you just looked at the pipeline of a programmable shader, it is a processor and is highly parallel, and it is massively threaded, and it is the only processor in the world that does that.
[178] And so there were a lot of characteristics about programmable shading that would suggest that CUDA has a great opportunity to succeed.
[179] And that is true if there was a large market of machine learning practitioners who would eventually show up and want to do all this great scientific computing and accelerated computing.
[181] But at the time, when you were starting to invest what is now something like 10,000 person-years in building that platform, did you ever feel like, oh, man, we might have invested ahead of the demand for machine learning, since we're like a decade before the whole world is realizing it?
[182] I guess yes and no. When we saw deep learning, when we saw AlexNet and realized its incredible effectiveness in computer vision, we had the good sense, if you will, to go back to first principles and ask, you know, what is it about this thing that made it so successful?
[183] When a new software technology, a new algorithm, comes along and somehow leapfrogs 30 years of computer vision work, you have to take a step back and ask yourself, but why?
[184] And fundamentally, is it scalable?
[185] And if it's scalable, what other problems can it solve?
[186] And there were several observations that we made.
[187] The first observation, of course, is that if you have a whole lot of example data, you could teach this function to make predictions.
[188] Well, what we've basically done is discovered a universal function approximator, because the dimensionality could be as high as you want it to be.
[189] Because each layer is trained one layer at a time, there's no reason why you can't make very, very deep neural networks.
[190] Okay, so now you just reasoned your way through, right?
[191] Okay, so now I go back to 12 years ago.
[192] You could just imagine the reasoning I'm going through in my head: that we've discovered a universal function approximator.
[193] In fact, we might have discovered, with a couple more technologies, a universal computer.
[194] And you're paying attention to the ImageNet competition every year leading up to this?
[195] Yeah, yeah.
[196] And the reason for that is because we were already working on computer vision at the time, and we were trying to get CUDA to be a good computer vision system.
[197] But most of the algorithms that were created for computer vision aren't a good fit for CUDA.
[198] And so while we're sitting there trying to figure it out, all of a sudden, AlexNet shows up.
[199] And so that was incredibly intriguing.
[200] It's so effective that it makes you take a step back and ask yourself, why is that happening?
[201] So by the time that you reason your way through this, you go, well, what are the kinds of problems in the world that a universal function approximator can solve?
[202] Well, we know that most of our algorithms start from principled sciences.
[203] You want to understand the causality, and from the causality, you create a simulation algorithm that allows us to scale.
[204] Well, for a lot of problems, we kind of don't care about the causality.
[205] We just care about the predictability of it.
[206] Like, do I really care for what reason you prefer this toothpaste over that?
[207] I don't really care the causality.
[208] I just want to know that this is the one you would have predicted.
[209] Do I really care about the fundamental cause of why somebody who buys a hot dog also buys ketchup and mustard?
[210] It doesn't really matter.
[211] It only matters that I can predict it.
[212] It applies to predicting movies, predicting music.
[213] It applies to predicting, quite frankly, weather.
[214] We understand thermal dynamics.
[215] We understand radiation from the sun.
[216] We understand cloud effects.
[217] We understand oceanic effects.
[218] We understand all these different things.
[219] We just want to know whether we should wear a sweater or not.
[220] Isn't that right?
[221] Yep.
[222] And so causality for a lot of problems in the world doesn't matter.
[223] We just want to emulate the system and predict the outcome.
[224] And it can be an incredibly lucrative market.
[225] If you can predict, what the next best performing feed item to serve into a social media feed turns out that's a hugely valuable market I love the examples you pulled it you know toothpaste catch up music movies when you realize those you realize hey hang on a second a universal functional approximator a machine learning system you know something that learns from examples could have tremendous opportunities because it's just the number of applications is quite enormous and everything from obviously we just that's talking about commerce all the way to science.
[226] And so you realize that maybe this could affect a very large part of the world's industries.
[227] Almost every piece of software in the world would eventually be programmed this way.
[228] And if that's the case, then how you build a computer and how you build a chip, in fact, can be completely changed.
[229] And realizing that, the rest of it just comes with, do you have the courage to put your chips behind it?
[230] So that's where we are today, and that's where Nvidia is today.
[231] But I'm curious about, you know, the couple of years after AlexNet.
[232] And this is when Ben and I were getting into the technology industry and the venture industry ourselves.
[233] I started at Microsoft in 2012.
[234] So right after AlexNet, but before anyone was talking about machine learning, even in the mainstream engineering community.
[235] There were those couple of years there where to a lot of the rest of the world, these looked like science projects.
[236] Yeah.
[237] The technology companies here in Silicon Valley, particularly the social media companies, they were just realizing huge economic value out of this, the Googles, the Facebooks, the Netflixes, etc. And obviously that led to lots of things, including OpenAI a couple of years later.
[238] But during those couple years, when you saw just that huge economic value unlock here in Silicon Valley, how are you feeling during those times?
[239] The first thought was, of course, reasoning about how we should change our computing stack.
[240] The second thought is where can we find earliest possibilities of use?
[241] If we were to go build this computer, what would people use it to do?
[242] And we were fortunate that working with the world's universities and researchers was innate in our company because we were already working on CUDA, and CUDA's early adopters were researchers, because we democratized supercomputing.
[243] You know, CUDA is not just used, as you know, for AI.
[244] CUDA is used for almost all fields of science.
[245] Everything from molecular dynamics to imaging, CT reconstruction, to seismic processing, to weather simulations, quantum chemistry, the list goes on, right?
[246] And so the number of applications of CUDA in research was very high.
[247] And so when the time came and we realized that deep learning could be really interesting, it was natural for us to go back to the researchers and find every single AI researcher on the planet and say, how can we help you advance your work?
[248] And that included Yann LeCun and Andrew Ng and Geoff Hinton, and that's how I met all these people.
[249] And I used to go to all the AI conferences, and that's where I met Ilya Sutskever for the first time.
[250] And so it was really, at that point, about what are the systems we can build and the software stacks we can build to help you be more successful, to advance the research, because at the time it looked like a toy.
[251] But we had confidence that even GANs, the first time I met Goodfellow, the GAN was like 32 by 32.
[252] And it was just a blurry image of a cat, you know.
[253] But how far can it go?
[254] And so we believed in it.
[255] We believed that, one, you could scale deep learning because obviously it's trained layer by layer and you could make the data sets larger and you could make the models larger.
[256] And we believe that if you made that larger and larger, it would get better and better, kind of sensible.
[257] And I think the discussions and the engagements with the researchers were the exact positive feedback system that we needed.
[258] I would go back to research.
[259] That's where it all happened.
[260] When OpenAI was founded in 2015, yeah.
[261] I mean, that was such an important moment that's obvious today now.
[262] But at the time, I think most people, even people in tech, were like, what is this?
[263] Were you involved in it at all?
[264] Because you were so connected to the researchers, to Ilya. Taking that talent out of Google and Facebook, to be blunt,
[265] re-seeding the research community and opening it up was such an important moment.
[266] Were you involved in it at all?
[267] I wasn't involved in the founding of it, but I knew a lot of the people there, and Elon, of course, I knew, and Pieter Abbeel was there, and Ilya was there, and we have some great employees today that were there in the beginning, and I knew that they needed this amazing computer that we were building, and we were building the first version of the DGX, which, you know, today, when you see a Hopper, it's 70 pounds, 35,000 parts, 10,000 amps.
[268] But DGX, the first version that we built, was used internally, and I delivered the first one to OpenAI.
[269] And that was a fun day.
[270] But most of our work in the beginning was aligned around just helping the researchers get to the next level.
[271] I knew it wasn't very useful in its current state.
[272] But I also believed that in a few years, it could be really remarkable.
[273] And that belief system came from the interactions with all these amazing researchers, and it came from just seeing the incremental progress.
[274] At first, the papers were coming out every three months, and then papers today are coming out every day, right?
[275] So you could just monitor the arXiv papers, and I took an interest in learning about the progress of deep learning and, to the best of my ability, read these papers, and you could just see the progress happening in real time, exponentially in real time.
[276] Even within the industry, from some researchers we spoke with, it seemed like no one predicted how useful language models would become when you just increased the size of the models.
[277] They thought, oh, there has to be some algorithmic change that needs to happen.
[278] But once you cross that 10 billion parameter mark, and certainly once you cross the 100 billion, they just magically got much more accurate, much more useful, much more lifelike.
[279] Were you shocked by that the first time you saw a truly large language model?
[280] And do you remember that feeling?
[281] Well, my first feeling about the language model was how clever it was to just mask out words and make it predict the next word.
[282] It's self -supervised learning at its best.
[283] We have all this text.
[284] You know, I know what the answer is.
[285] I'll just make you guess it.
[286] And so my first impression of BERT was really how clever it was, and now the question is, how can you scale that?
[287] You know, the first observation, almost always, is that it's interesting, and then you try to understand intuitively why it works, and then the next step, of course, is, from first principles, how would you extrapolate that?
[288] Yep.
[289] And so obviously, we knew that BERT was going to be a lot larger.
[290] Now, one of the things about these language models is it's encoding information, isn't that right?
[291] It's compressing information.
[292] And so within the world's languages and text, there's a fair amount of reasoning that's encoded in it.
[293] And we describe a lot of reasoning things.
[294] And so if you were to say that a few -step reasoning is somehow learnable from just reading things, I wouldn't be surprised.
[295] For a lot of us, we get our common sense and we get our reasoning ability by reading.
[296] And so why wouldn't a machine learning model also learn some of the reasoning capabilities from that?
[297] And from reasoning capabilities, you could have emergent capabilities, right?
[298] Emergent abilities are consistent, intuitively, with reasoning.
[299] And so some of it could be predictable, but still, it's still amazing.
[300] The fact that it's sensible doesn't make it any less amazing.
[301] Right.
[302] I could visualize literally the entire computer and all the modules in a self-driving car.
[303] And the fact that it's still keeping lanes makes me insanely happy.
[304] And so...
[305] I even remember that from my first operating systems class in college, when I finally figured out everything all the way from the programming language to the electrical engineering classes, bridged in the middle by that OS class.
[306] I'm like, oh, I think I understand how the von Neumann computer works soup to nuts.
[307] And it's still a miracle.
[308] Yeah.
[309] Yeah, yeah, yeah.
[310] Exactly.
[311] Yeah, yeah.
[312] When you put it all together, it's still a miracle.
[313] Yeah.
[314] Now is a great time to talk about one of our favorite companies, Statsig, and we have some tech history for you.
[315] Yes.
[316] So in our NVIDIA Part 3 episode, we talked about how the AI research teams at Google and Facebook drove incredible business outcomes with cutting edge ML models.
[317] And these models powered features like the Facebook news feed, Google ads, and the YouTube next-video recommendation, in the process transforming Google and Facebook into the juggernauts that we know today.
[319] And while we talked all about the research, we didn't touch on how these models were actually deployed.
[320] Yeah, the most common way to deploy new models was through experimentation, A/B testing.
[321] When the research team created a new model, product engineers would deploy the model to a subset of users and measure the impact of the model on core product metrics.
[322] Great experimentation tools transformed the machine learning development process.
[323] They de-risked releases, since each model could be released to a small set of users; they sped up release cycles, since researchers could suddenly get quick feedback from real user data; and most importantly, they created a pragmatic, data-driven culture, since researchers were rewarded for driving actual product improvements.
[324] And over time, these experimentation tools gave Facebook and Google a huge edge, because they really became a requirement for leading ML teams.
[325] Yep, so now you're probably thinking, well, that's great for Facebook and Google, but my team can't build out our own internal experimentation platform.
[326] Well, you don't have to, thanks to Statsig.
[327] So Statsig was literally founded by ex-Facebook engineers who did all this.
[328] They've built a best-in-class experimentation, feature flagging, and product analytics platform that's available to anyone.
[329] And surprise, surprise, a ton of AI companies are now using Statsig to improve and deploy their models, including Anthropic.
[330] Yep.
[331] So whether you're building with AI or not, Statsig can help your team ship faster and make better data-driven product decisions.
[333] They have a very generous free tier and a special program for venture-backed companies, simple pricing for enterprises, and no seat-based fees.
[334] If you're in the Acquired community, there's a special offer.
[335] You get five million free events a month and white glove onboarding support.
[336] So visit statsig.com slash acquired and get started on your data-driven journey.
[337] We have some questions we want to ask you.
[338] Some are cultural about Nvidia, but others are generalizable to company building broadly.
[339] And the first one that we wanted to ask is we've heard that you have 40 plus direct reports and that this org chart works a lot differently than a traditional company org chart.
[340] Do you think there's something special about NVIDIA that makes you able to have so many direct reports, not worry about coddling or focusing on the career growth of your executives,
[341] and be like, no, you're just here to do your freaking best work on the most important thing in the world, now go?
[342] A, is that correct?
[343] And B, is there something special about NVIDIA that enables that?
[344] I don't think it's something special with NVIDIA.
[345] I think that we had the courage to build a system like this.
[346] NVIDIA is not built like a military, it's not built like the armed forces, where you have generals and colonels.
[347] We're not set up like that.
[348] We're not set up in a command and control and information distribution system from the top down.
[349] We're really built much more like a computing stack.
[350] And in a computing stack, the lowest layer is our architecture, and then there's our chip, and then there's our software.
[351] And on top of it, there are all these different modules, and each one of these layers of modules are people.
[352] And so the architecture of the company, to me, is a computer with a computing stack, with people managing different parts of the system.
[353] And who reports to whom, your title, is not related to where you are in the stack.
[354] It just happens to be that whoever is the best at running that module, on that function, on that layer,
[355] is in charge, and that person is the pilot in command.
[356] And so that's one characteristic.
[357] Have you always thought about the company this way?
[358] Even from the earliest days.
[359] Yeah, pretty much, yeah.
[360] And the reason for that is because your organization should be the architecture of the machinery of building the product.
[361] Right, that's what a company is. Yep. And yet everybody's company looks exactly the same, but they all build different things. How does that make any sense? Do you see what I'm saying? Yeah. You know, how you make fried chicken versus how you flip burgers versus how you make, you know, Chinese fried rice, it's different. And so why would the machinery, why would the process, be exactly the same? And so it's not sensible to me that if you look at the org charts of most companies, it all kind of looks like this: you have one group that's for one business, and you have another for another business, and another for another business, and they're all kind of supposedly autonomous.
[362] And so none of that stuff makes any sense to me. It just depends on what it is that we're trying to build, and what the architecture of the company is that is best suited to go build it.
[363] So that's number one.
[364] In terms of the information system and how you enable collaboration, we're kind of wired up like a neural network.
[365] And the way that we say it is, there's a phrase in the company: mission is the boss.
[366] And so we figure out what the mission is, and we go wire up the best skills and the best teams and the best resources to achieve that mission.
[367] And it cuts across the entire organization in a way that doesn't make any sense, but it looks a little bit like a neural network.
[368] And when you say mission, do you mean mission like NVIDIA's mission is?
[369] Build Hopper.
[370] Yeah, okay.
[371] So it's not like further accelerated computing.
[372] It's like, we're shipping DGX Cloud.
[373] Build Hopper, or somebody else's job is to build a system for Hopper.
[374] Somebody's job is to build CUDA for Hopper.
[375] Somebody's job is to build cuDNN for CUDA for Hopper.
[376] Somebody's job is the mission, right?
[377] So, you know, your mission is to do something.
[378] What are the tradeoffs associated with that versus the traditional structure?
[379] The downside is the pressure on the leaders is fairly high.
[380] And the reason for that is because in a command and control system, the person who you report to has more power than you.
[381] And the reason why they have more power than you is because they're closer to the source of information than you are.
[382] In our company, the information is disseminated fairly quickly to a lot of different people, and it's usually at a team level.
[383] So, for example, just now, I was in our robotics meeting, and we're talking about certain things, and we're making some decisions, and there are new college grads in the room, there are three vice presidents in the room, there are two e-staff in the room, and at the moment that we decided together, we reasoned through some stuff, we made a decision.
[384] Everybody heard it at exactly the same time.
[385] So nobody has more power than anybody else.
[386] Does that make sense?
[387] The new college grad learned at exactly the same time as the e-staff.
[388] And so the executive staff and the leaders that work for me, and myself, you earn the right to have your job based on your ability to reason through problems and to help other people succeed.
[389] And it's not because you have some privileged information, that the answer was 3.7 and only I knew it, you know.
[390] Everybody knew.
[391] When we did our most recent episode, NVIDIA Part 3, that we just released.
[392] We sort of did this thought exercise. Especially over the last couple of years, your product shipping cycle has been very impressive, especially given the level of technology that you're working with and the difficulty of it all.
[393] We sort of said, like, could you imagine Apple shipping two iPhones a year?
[394] And we said that for illustrative purposes.
[395] For illustrative purposes, not to pick on Apple or whatnot.
[396] A large tech company shipping two flagship products, or their flagship product twice per year.
[397] Yeah, or, you know, two WWDCs a year.
[398] Yeah.
[399] There seems to be something unique.
[400] You can't really imagine that, whereas that happens here.
[401] Are there other companies, either current or historically, that you look up to, admire, maybe took some of this inspiration from?
[402] In the last 30 years, I've read my fair share of business books, and as with everything you read, you're supposed to, first of all, enjoy it.
[403] Right, enjoy it, be inspired by it, but not to adopt it.
[404] That's not the whole point of these books.
[405] The whole point of these books is to share their experiences.
[406] And you're supposed to ask, you know, what does it mean to me in my world?
[407] And what does it mean to me in the context of what I'm going through?
[408] What does this mean to me in the environment that I'm in?
[409] What does this mean to me in what I'm trying to achieve?
[410] And what does this mean to NVIDIA in the age of our company and the capability of our company?
[411] And so you're supposed to ask yourself, what does it mean to you?
[412] And then from that point, being informed by all these different things that we're learning, we're supposed to come up with our own strategies.
[413] You know, what I just described is kind of how I go about everything.
[414] You're supposed to be inspired and learn from everybody else.
[415] And the education's free, you know.
[416] When somebody talks about a new product, you're supposed to go listen to it.
[417] You're not supposed to ignore it.
[418] You're supposed to go learn from it.
[419] And it could be a competitor.
[420] It could be adjacent industry.
[421] It could be nothing to do with us.
[422] The more we learn from what's happening in the world, the better.
[424] But then you're supposed to come back and ask yourself, what does this mean to us?
[425] Yeah, you don't just want to imitate them.
[426] That's right.
[427] I love this tee-up of learning but not imitating, and learning from a wide array of sources.
[428] There's this sort of unbelievable third element, I think, to what NVIDIA has become today, and that's the data center.
[429] It's certainly not obvious.
[430] I can't reason from AlexNet and your engagement with the research community and social media feeds to you deciding, and the company deciding, we're going to go on a five-year, all-in journey on the data center.
[431] How did that happen?
[432] Yeah.
[433] Our journey to the data center happened, I would say, almost 17 years ago.
[434] I'm always being asked, what are the challenges that the company could see someday?
[435] And I've always felt that the fact that NVIDIA's technology is plugged into a computer, and that computer has to sit next to you because it has to be connected to a monitor, that will limit our opportunity someday, because there are only so many desktop PCs you could plug a GPU into, and there are only so many CRTs, and at the time LCDs, that we could possibly drive.
[436] So the question is, wouldn't it be amazing if our computer doesn't have to be connected to the viewing device,
[437] if the separation of it made it possible for us to compute somewhere else?
[438] And one of our engineers came and showed it to me one day, and it was really capturing the frame buffer, encoding it into video, and streaming it to a receiver device, separating computing from the viewing.
[439] In many ways, that's cloud gaming.
[440] In fact, that was when we started GFN.
[441] We knew that GFN was going to be a journey that would take a long time, because you're fighting all kinds of problems, including the speed of light, and latency everywhere you look. That's right. For listeners, GFN, GFN is GeForce Now. Yeah, yeah, GeForce Now. And we've been working on GeForce Now. It all makes sense, your first cloud product. That's right. And look, GeForce Now was NVIDIA's first data center product, and our second data center product was remote graphics, putting our GPUs in the world's enterprise data centers, which then led us to our third product, which combined CUDA plus our GPU, which became a supercomputer, which then worked towards, you know, more and more and more.
[442] And the reason why it's so important is because the disconnection between where NVIDIA's computing is done versus where it's enjoyed, if you can separate that, your market opportunity explodes.
[443] Yeah, yeah.
[444] And it was completely true.
[445] And so we're no longer limited by the physical constraints of the desktop PC sitting by your desk.
[446] And we're not limited by one GPU per person.
[447] And so it doesn't matter where it is anymore.
[448] And so that was really the great observation.
[449] It's a good reminder.
[450] You know, the data center segment of NVIDIA's business to me has become synonymous with how is AI going?
[451] And that's a false equivalence.
[452] And it's interesting that you were only this ready to sort of explode in AI in the data center because you had three plus previous products where you learned how to build data center computers.
[453] Even though those markets weren't these, like, gigantic, world-changing technology shifts the way that AI is, that's how you learned.
[454] Yeah, that's right.
[455] You want to pave the way to future opportunities.
[456] You can't wait until the opportunity is sitting in front of you for you to reach out for it.
[457] And so you had to anticipate, you know, our job as CEOs is to look around corners and to anticipate where will opportunities be someday.
[458] And even if I'm not exactly sure what and when, how do I position the company to be near it, to be just standing kind of near, under the tree, so we can do a diving catch when the apple falls.
[459] You guys know what I'm saying?
[460] Yeah.
[461] But you've got to be close enough to do the diving catch.
[462] Rewind to 2015 and OpenAI, if you hadn't been laying this groundwork in the data center, you wouldn't be powering OpenAI right now.
[463] Yeah, but the idea that computing would be mostly done away from the viewing device, that the vast majority of computing would be done away from the computer itself.
[464] That insight was good.
[465] In fact, cloud computing, everything about today's computing is about separation of that.
[466] And by putting it in a data center, we can overcome this latency problem, meaning you're not going to overcome speed of light.
[467] Speed of light end to end is only 120 milliseconds or something like that.
[468] It's not that long.
[469] From a data center to an internet user.
[470] Anywhere on the planet.
[471] Yeah.
[472] And so we could...
[473] Oh, I see.
[474] And literally across the planet.
[475] Yeah, right.
[476] So if you could solve that problem, approximately something like that, I forget the number, but it's 70 milliseconds, 100 milliseconds.
[477] But it's not that long.
[478] And so my point is, if you could remove the obstacles everywhere else, then speed of light should be, you know, perfectly fine.
[479] And you could build data centers as large as you like, and you can do amazing things.
[480] And this little tiny device that we use as a computer or, you know, your TV as a computer, whatever computer, they can all instantly become amazing.
[481] And so that insight, you know, 15 years ago, was a good one.
[482] So speaking of the speed of light, InfiniBand. David's, like, begging me to go here.
[483] I can feel it.
[484] You totally saw that InfiniBand would be way more useful, way sooner, than anyone else realized.
[485] Acquiring Mellanox, I think you uniquely saw that this was required to train large language models, and you were super aggressive in acquiring that company.
[486] Why did you see that?
[487] Well, no one else saw that.
[488] Well, there were several reasons for that.
[489] First, if you want to be a data center company, building the processing chip isn't the way to do it.
[490] A data center is distinguished from a desktop computer or a cell phone not by the processor in it.
[491] A desktop computer and a data center use the same CPUs, the same GPUs, apparently, right?
[492] Very close.
[493] And so it's not the chip.
[494] It's not the processing chip that describes it, but it's the networking of it, it's the infrastructure of it.
[495] It's the, you know, how the computing is distributed, how security is provided, how networking is done, you know, so on and so forth.
[496] And so those characteristics are associated with Mellanox, not NVIDIA.
[497] And so the day that I concluded that, really, NVIDIA wants to build computers of the future, and computers of the future are going to be embodied in data centers, and we want to be a data center-oriented company, then we really need to get into networking.
[498] And so that was one.
[499] The second thing is the observation that, whereas cloud computing started in hyperscale, which is about taking commodity components, a lot of users, and virtualizing many users on top of one computer, AI is really about distributed computing, where one job, one training job, is orchestrated across millions of processors.
[500] And so it's the inverse of hyperscale almost.
[501] And the way that you design a hyperscale computer, with off-the-shelf commodity Ethernet, is just fine for Hadoop, it's just fine for all of those things.
[502] But not when you're sharding a model across multiple racks.
[503] Not when you're sharding a model across, right?
[504] And so that observation says that the type of networking you want to do is not exactly Ethernet, and the way that we do networking for supercomputing is really quite ideal.
[505] And so the combination of those two ideas convinced me that Mellanox was absolutely the right company, because they're the world's leading high-performance networking company, and, you know, we had worked with them in so many different areas in high-performance computing already. Plus, I really like the people. The Israel team is world class; we have some 3,200 people there now. And it was one of the best strategic decisions I ever made. When we were researching, particularly, Part 3 of our NVIDIA series, we talked to a lot of people, and many people told us the Mellanox acquisition is one of, if not the, best of all time by any technology company.
[506] I think so too, yeah.
[507] And it's so disconnected from the work that we normally do.
[508] It was surprising to everybody.
[509] But framed this way, you were standing near where the action was.
[510] So you could figure out, as soon as that apple sort of becomes available to purchase, like, oh, LLMs are about to blow up, I'm going to need that, everyone's going to need that.
[511] I think I know that before anyone else does.
[512] Yeah.
[513] You want to position yourself near opportunities.
[514] You don't have to be that perfect.
[515] You know, you want to position yourself near the tree.
[516] And even if you don't catch the apple before it hits the ground, so long as you're the first one to pick it up.
[517] You want to position yourself close to the opportunities.
[518] And so that's kind of a lot of my work: positioning the company near opportunities, and having the company have the skills to monetize each one of the steps along the way so that we can be sustainable.
[519] What you just said reminds me of a great aphorism from Buffett and Munger, which is it's better to be approximately right than exactly wrong.
[520] Yeah, there you go.
[521] Yeah, that's a good one.
[522] That's a good one.
[523] Yeah.
[524] All right, listeners, we are here to tell you about a company that literally couldn't be more perfect for this episode, Crusoe.
[525] Yes, Crusoe, as you know by now, is a cloud provider built specifically for AI workloads and powered by clean energy.
[526] And Nvidia is a major partner of Crusoe.
[527] Their data centers are filled with A100s and H100s.
[528] And as you probably know, with the rising demand for AI, there's been a huge surge in the need for high-performing GPUs, leading to a noticeable scarcity of Nvidia GPUs in the market.
[529] Crusoe has been ahead of the curve and is among the first cloud providers to offer NVIDIA's H100s at scale.
[530] They have a very straightforward strategy: create the best AI cloud solution for customers using the very best GPU hardware on the market that customers ask for, like NVIDIA, and invest heavily in an optimized cloud software stack.
[531] Yep.
[532] To illustrate, they already have several customers running large-scale generative AI workloads on clusters of NVIDIA H100 GPUs, which are interconnected with 3,200 gigabit InfiniBand and leveraging Crusoe's network-attached block storage solution.
[533] And because their cloud is run on wasted, stranded, or clean energy, they can provide significantly better performance per dollar than traditional cloud providers.
[534] Yep.
[535] Ultimately, this results in a huge win -win.
[536] They take what is otherwise a huge amount of energy waste that causes environmental harm and use it to power massive AI workloads.
[537] And it's worth noting that through their operations, Crusoe is actually reducing more emissions than they would generate.
[538] In fact, in 2022, Crusoe captured over 4 billion cubic feet of gas, which led to the avoidance of approximately 500,000 metric tons of CO2 emissions.
[539] That's equivalent to taking about 160,000 cars off the road.
[540] Amazing.
[541] If you, your company, or your portfolio companies could use lower cost and more performant infrastructure for your AI workloads, go to crusoecloud.com slash acquired.
[542] That's C-R-U-S-O-E cloud.com slash acquired, or click the link in the show notes.
[543] I want to move away from NVIDIA, if you're okay with it, and ask you some questions.
[544] We have a lot of founders that listen to this show, so sort of advice for company building.
[545] The first one is, when you're starting a startup in the earliest days, your biggest competitor is that you don't make anything people want.
[546] Like, your company is likely to die just because people don't actually care as much as you do about what you're building.
[547] In the later days, you actually have to be very thoughtful about competitive strategy.
[548] And I'm curious, what would be your advice to companies that have product market fit, that are starting to grow, they're in interesting, growing markets.
[549] Where should they look for competition and how should they handle it?
[550] Well, there are all kinds of ways to think about competition.
[551] We prefer to position ourselves in a way that serves a need that usually hasn't emerged.
[552] I've heard you, or others at NVIDIA, I think, use the phrase "zero-billion-dollar markets."
[553] Yeah, that's exactly right.
[554] It's our way of saying there's no market yet, but we believe there will be one.
[555] And usually when you're positioned there, everybody's trying to figure out, why are you here?
[556] Right?
[557] Because when we first got into automotive, it was because we believed that in the future, the car is going to be largely software.
[558] And if it's going to be largely software, a really incredible computer is necessary.
[559] And so when we positioned ourselves there, most people, I still remember one of the CTOs told me, you know what, cars cannot tolerate the blue screen of death.
[560] I don't think anybody can tolerate that, but it doesn't change the fact that someday every car will be a software-defined car.
[561] And I think, you know, 15 years later, we're largely right.
[562] And so oftentimes there's non -consumption, and we like to navigate our company there.
[563] And by doing that, by the time that the market emerges, it's very likely there aren't that many competitors shaped that way.
[564] And so we were early in PC gaming, and today, Nvidia is very large in PC gaming.
[565] We reimagined what a design workstation would be like, and today, just about every workstation on the planet uses Nvidia's technology.
[566] We reimagined how supercomputing ought to be done and who should benefit from supercomputing, and that we would democratize it.
[567] And look today, NVIDIA's accelerated computing is quite large.
[568] And we reimagined how software would be done, and today it's called machine learning, and how computing would be done, and we call it AI.
[571] And so we reimagine these kinds of things and try to do that about a decade in advance.
[572] And so we spend about a decade in zero billion dollar markets.
[573] And today I spent a lot of time on Omniverse.
[574] And Omniverse is a classic example of a zero billion dollar business.
[575] There's like 40 customers now, something like that.
[576] Amazon, BMW.
[577] Yeah, no, it's cool.
[578] It's cool.
[579] So let's say you do get this great, 10-year lead.
[580] But then other people figure it out, and you've got people nipping at your heels, what are some structural things that someone who's building a business can do to sort of stay ahead?
[581] And you can just keep your pedal to the metal and say, we're going to outwork them, and we're going to be smarter.
[582] And, like, that works to some extent.
[583] But those are tactics.
[584] What strategically can you do to sort of make sure that you can maintain that lead?
[585] Oftentimes, if you created the market, you ended up having, you know, what people describe as moats, because if you build your product right and it's enabled an entire ecosystem around you to help serve that end market, you've essentially created a platform.
[586] Sometimes it's a product -based platform, sometimes it's a service -based platform, sometimes it's a technology -based platform.
[587] But if you were early there and you were mindful about helping the ecosystem succeed with you, you ended up having this network of networks and all these developers and all these customers who are built around you.
[588] And that network is essentially your moat.
[589] And so, you know, I don't love thinking about it in the context of a moat.
[590] And the reason for that is because you're now focused on building stuff around your castle.
[591] I tend to like thinking about things in the context of building a network.
[592] And that network is about enabling other people to enjoy the success of the final market, you know, that you're not the only company that enjoys it, but you're enjoying it with a whole bunch of other people, including, you know.
[594] I'm so glad you brought this up because I wanted to ask you, in my mind at least, and it sounds like in yours too.
[595] NVIDIA is absolutely a platform company, of which there are very few meaningful platform companies in the world.
[596] I think it's also fair to say that when you started, for the first few years, you were a technology company and not a platform company.
[597] Every example I can think of of a company that tried to start as a platform company fails.
[598] You've got to start as a technology first.
[599] When did you think about making that transition to being a platform?
[600] Like your first graphics cards were technology.
[601] There was no CUDA.
[602] There was no platform.
[603] Yeah.
[604] What you observed is not wrong.
[605] However, inside our company, we were always a platform company.
[606] And the reason for that is because from the very first day of our company, we had this architecture called UDA.
[607] It's the UDA of CUDA.
[608] CUDA: Compute Unified Device Architecture?
[609] That's right.
[610] And the reason for that is because what we've done, what we essentially did in the beginning, even though RIVA 128 only had computer graphics, the architecture described accelerators of all kinds.
[611] And we would take that architecture and developers would program to it.
[612] In fact, Nvidia's first strategy, business strategy, was we were going to be a game console inside the PC, and a game console needs developers, which is the reason why NVIDIA, a long time ago, one of our first employees was a developer relations person.
[613] And so it's the reason why we knew all the game developers and all the 3D developers, and we knew everybody.
[614] So was the original business plan to like...
[615] Sort of like to build DirectX. Yeah, compete with Nintendo and Sega, but with PCs?
[616] The original NVIDIA architecture was called Direct NV.
[617] Direct NVIDIA, yeah.
[618] And DirectX was an API that made it possible for the operating system to directly connect to hardware.
[619] But DirectX didn't exist when you started Nvidia, right?
[620] And that's what made your strategy wrong for the first couple years.
[621] In 1993, we had Direct NV.
[622] And which in 1995 became, you know, well, DirectX came out.
[623] So this is an important lesson.
[624] We were always a developer-oriented company.
[625] The initial attempt was, we will get the developers to build on Direct NV, and then they'll build for our chips, and then we'll have a platform.
[627] And what played out is Microsoft already had all these developer relationships.
[628] So you learn the lesson, the hard way of like, yeah, yikes, we just got to slide into that.
[629] I mean, that's what Microsoft did back in the day.
[630] They're like, oh, that could be a developer platform.
[631] We'll take that.
[632] Thank you.
[633] No, but they had a lot.
[634] They did it very differently, and they did a lot of things right.
[635] We did a lot of things wrong.
[636] But having said that...
[637] You were competing against Microsoft in the 90s.
[638] I mean, that's...
[639] It's like trying to compete against Nvidia today.
[640] No, it's a lot different, but I appreciate that, but we were nowhere near competing with them.
If you look now, when CUDA came along, there was OpenGL, there was DirectX, but there's still another extension, if you will, and that extension is CUDA.
And that CUDA extension allows a chip that got paid for running DirectX and OpenGL to create an install base for CUDA.
And so that's the NVIDIA strategy.
You are so militant, and I think from our research it really was you being militant, that every NVIDIA chip will run CUDA.
[645] Yeah, if you're a computing platform, everything's got to be compatible.
[646] We are the only accelerator on the planet where every single accelerator is architecturally compatible with the others.
[647] None that has ever existed.
[648] There are literally a couple of hundred million, right?
250 million, 300 million installed base of active CUDA GPUs being used in the world today, and they're all architecturally compatible.
How would you have a computing platform if, you know, NV30 and NV35 and NV39 and NV40, they're all different, right?
Across 30 years, it's all completely compatible.
And so that's the only non-negotiable rule in our company.
[654] Everything else is negotiable.
I mean, and I guess CUDA was a rebirth of UDA, but understanding this now, UDA going all the way back.
[656] It really is all the way back to all the chips you've ever made.
[657] Yeah, yeah, yeah.
[658] In fact, UDA goes all the way back to all of our chips today.
[659] Wow.
[660] For the record, I didn't help any of the founding CEOs that are listening.
[661] I got to tell you, while you were asking that question, what lessons would I impart?
[662] I don't know.
[663] I mean, the characteristics of successful companies and successful CEOs, I think are fairly well described.
[664] There are a whole bunch of them.
I just think starting successful companies is insanely hard.
[666] It's just insanely hard.
[667] And when I see these amazing companies getting built, I have nothing but admiration and respect because I just know that it's insanely hard.
[668] And I think that everybody did many similar things.
[669] There are some good, smart things that people do.
[670] There are some dumb things that you can do.
[671] But you could do all the right smart things and still fail.
[672] You could do a whole bunch of dumb things, and I did many of them, and still succeed.
So obviously, that's not exact, right.
I think skills are the things that you can learn along the way, but at important moments, certain circumstances have to come together.
[675] And I do think that, that the market has to, you know, be one of the agents to help you succeed.
[676] It's not enough, obviously, because a lot of people still fail.
[677] Do you remember any moments in NVIDIA's history where you're like, we made a bunch of wrong decisions, but somehow we got saved?
[678] Because, you know, it takes the sum of all the luck and all the skill in order to succeed.
[679] Do you remember any moments where you're like...
I actually thought that where you started, with the Riva 128, was spot on.
Riva 128, as I mentioned, the number of smart decisions we made, which are smart to this day, how we design chips is exactly the same to this day.
[682] Because, gosh, you know, nobody's ever done it back then.
[683] And we pulled every trick in the book in a desperation because we had no other choice.
[684] Well, guess what?
[685] That's the way things ought to be done.
[686] And now everybody does it that way.
[687] Right.
[688] Everybody does it because why should you do things twice if you can do it once?
[689] Why tape out a chip seven times if you could tape it out one time, right?
[690] And so the most efficient, the most cost effective, the most competitive, speed is technology, right?
[691] Speed is performance.
[692] Time to market is performance.
[693] All of those things apply.
[694] So why do things twice if you could do it once?
[695] Yeah.
And so Riva 128, we made a lot of great decisions: how we spec products, how we think about market needs and the lack of, and how we judge markets. In all of those, man, we made some amazingly good decisions.
[698] Yeah, we were, you know, back against the wall.
[699] We only had one more shot to do it.
[700] But once you pull out the stops and you see what you're capable of, why would you put stops in next time?
[701] Exactly.
[702] You're like, let's keep the stops out all the time.
[703] That's right.
[704] Every time.
Is it fair to say, though, maybe on the luck side of the equation, and thinking back to 1997, that that was the moment where consumers tipped to really, really valuing 3D graphical performance in games?
[706] Oh, yeah.
[707] So, for example, luck.
[708] Let's talk about luck.
If Carmack hadn't decided to use acceleration, because remember, Doom was completely software rendered.
And the Nvidia philosophy was that, although general purpose computing is a fabulous thing and is going to enable software and IT and everything, we felt that there were applications that wouldn't be possible, or would be costly, if they weren't accelerated.
[711] It should be accelerated.
[712] And 3D graphics was one of them, but it wasn't the only one.
[713] And it just happens to be the first one and a really great one.
And I still remember the first time we met John, he was quite emphatic about using CPUs, and the software renderer was really good.
[715] I mean, quite frankly, if you look at Doom, the performance of Doom was really hard to achieve, even with accelerators at the time.
You know, if you didn't filter, if you didn't have to do bilinear filtering, it did a pretty good job.
[717] The problem with Doom, though, was you needed Carmack to program it.
[718] Yeah, you needed Carmack to program it.
[719] Exactly.
[720] It was a genius piece of code.
But nonetheless, software renderers did a really good job.
[722] And if he hadn't decided to go to OpenGL and accelerate for Quake, frankly, what would be the killer app that put us here?
[723] Right.
[724] And so Carmack and Sweeney, both between Unreal and Quake, created the first two killer applications for Consumer 3D.
[725] And so I owe them a great deal.
[726] I want to come back real quick, too.
[727] You told these stories, and you're like, well, I don't know what founders can take from that.
I actually do think, you know, if you look at all the big tech companies today, perhaps with the exception of Google, they did all start, and I understand this now about you, by addressing developers, planning to build a platform and tools for developers, you know, all of them.
[729] Apple.
[730] Not Amazon.
[731] Well, I guess with AWS, that's how AWS started.
[732] So I think that actually is a lesson to your point of, like, that won't guarantee success by any means.
But that'll get you hanging around the tree if the apple falls.
[734] Yeah.
As many good ideas as we have, we don't have all the world's good ideas.
[736] And the benefit of having developers is you get to see a lot of good ideas.
[737] Yep.
[738] Yeah.
[739] Well, as we start to drift toward the end here, we spent a lot of time on the past.
[740] And I want to think about the future a little bit.
[741] I'm sure you spend a lot of time on this being on the cutting edge of AI.
[742] You know, we're moving into an era where the productivity that software can accomplish when a person is using software can massively amplify the impact and the value that they're creating, which has to be amazing for humanity in the long run.
[743] In the short term, it's going to be inevitably bumpy as we sort of figure out what that means.
[744] What do you think some of the solutions are as AI gets more and more powerful and better at accelerating productivity for all the displaced jobs that are going to come from it?
[745] Well, first of all, we have to keep AI safe.
[746] And there's a couple of different areas of AI safety.
[747] That's really important.
Obviously, in robotics and self-driving cars, there's a whole field of AI safety, and we've dedicated ourselves to functional safety and active safety and all kinds of different areas of safety: when to apply human in the loop, when is it okay for a human not to be in the loop, how do you get to a point where increasingly the human doesn't have to be in the loop, but the human is largely in the loop.
[749] In the case of information safety, obviously bias, false information and appreciating the rights of artists and creators, that whole area deserves a lot of attention.
And you've seen some of the work that we've done: instead of scraping the internet, we partnered with Getty and Shutterstock to create a commercially fair standard for applying artificial intelligence.
In the area of large language models and the future of increasingly greater agency AI, clearly the answer, for as long as it's sensible, and I think it's going to be sensible for a long time, is human in the loop.
[752] The ability for an AI to self -learn and improve and change out in the wild in a digital form, it should be avoided.
And we should collect the data, we should curate the data, we should train the model, we should test the model, validate the model, before we release it into the wild again.
[754] So human is in the loop.
There are a lot of different industries that have already demonstrated how to build systems that are safe and good for humanity: obviously the way autopilot works for a plane, the two-pilot system, and then air traffic control and redundancy and diversity.
[757] And all of the basic philosophies of designing safe systems apply as well in self -driving cars and so on and so forth.
[758] And so I think there's a lot of models of creating safe AI, and I think we need to apply them.
With respect to automation, my feeling is that, and we'll see, but it is more likely that AI is going to create more jobs in the near term.
The question is, what's the definition of near term?
And the reason for that is, the first thing that happens with productivity is prosperity.
[763] And prosperity, when the companies get more successful, they hire more people because they want to expand into more areas.
[764] And so the question is, if you think about a company and say, okay, if we improve the productivity, they need fewer people.
Well, that's because the company has no more ideas, but that's not true of most companies.
If you become more productive and the company becomes more profitable, usually they hire more people to expand into new areas. And so long as we believe that there are more areas to expand into, that there are more ideas in drug discovery, there are more ideas in transportation, there are more ideas in retail, there are more ideas in entertainment, that there are more ideas in technology.
So long as we believe that there are more ideas, the prosperity of the industry, which comes from improved productivity, results in hiring more people, more ideas.
Now, go back in history, we can fairly say that today's industry is larger than the world's industry was 1,000 years ago.
[769] And the reason for that is because, obviously, humans have a lot of ideas.
[770] And I think that there's plenty of ideas yet for prosperity and plenty of ideas that can be begat from productivity improvements, but my sense is that it's likely to generate jobs.
[771] Now, obviously, net generation of jobs doesn't guarantee that any one human doesn't get fired.
[772] Okay, I mean, that's obviously true.
[773] And it's more likely that someone will lose a job to someone else, some other human that uses an AI, you know, and not likely to an AI, but to some other human that uses an AI.
And so I think the first thing that everybody should do is learn how to use AI so that they can augment their own productivity, and every company should augment their own productivity to be more productive, so that they can have more prosperity and hire more people.
[775] And so I think jobs will change.
My guess is that we'll actually have higher employment, we'll create more jobs.
[777] I think industries will be more productive.
And many of the industries that are currently suffering from lack of labor, lack of workforce, are likely to use AI to get themselves back on their feet and get back to growth and prosperity.
[780] So I see it a little bit differently, but I do think that jobs will be affected, and I'd encourage everybody just to learn AI.
[781] This is appropriate.
[782] There's a version of something we talk about a lot on Acquired.
[783] We call it the Moritz corollary to Moore's Law after Mike Moritz from Sequoia.
[784] Sequoia was the first investor in our company.
[785] Yeah, of course, yeah.
The great story behind it is that when Mike was taking over for Don Valentine with Doug, he was sitting and looking at Sequoia's returns, and he was looking at fund three or four, I think it was four maybe, that had Cisco, and he was like, how are we ever going to top that?
[788] I can't, I can't, you know, Don's going to have us beat.
[789] We're never going to beat that.
They thought about it and realized that, well, as compute gets cheaper and it can access more areas of the economy, because it gets cheaper and can get adopted more widely, well, then the markets that we can address should get bigger.
[791] Yeah.
[792] And AI, your argument is basically, AI will do the same thing.
[793] Exactly.
[794] I just gave you exactly the same example that, in fact, productivity doesn't result in us doing less.
[795] Productivity usually results in us doing more.
[796] Everything we do will be easier, but we'll end up doing more.
[797] Because we have infinite ambition.
[798] The world has infinite ambition.
[799] So if a company is more profitable, they tend to hire more people to do more.
[800] Yeah.
[801] That's true.
Technology is a lever, and the place where the idea kind of falls down is that, like, we would be satisfied.
Yeah, like humans have never-ending ambition.
No, humans will always expand and consume more energy and attempt to pursue more ideas. That has always been true of every version of our species.
Yeah, over time.
Now is a great time to share something new from our friends at Blinkist and Go One that is very appropriate to this episode.
Yes. So, personal story time. I, a few weeks ago, was scouring the web to find Jensen's favorite business books, which was proving to be difficult.
[803] I really wanted Blinkist to make blinks of each of those books so you could all access them.
[804] And I think I found one or two in random articles, but that just wasn't enough.
[805] So finally, before I gave up, as a last resort, I asked an AI chatbot, specifically Bard, to provide me a list and cite the sources of Jensen's favorite business books.
[806] And miraculously, it worked.
[807] Bard found books that Jensen had called out in public forums over the past several decades.
[808] So if you click the link in the show notes or go to blinkist .com slash Jensen, you can get the blinks of all five of those books, plus a few more that Jensen specifically told us about later in the episode.
[809] Yes.
[810] And we also have an offer from Blinkist and Go One that goes beyond personal learning.
[811] Blinkist has handpicked a collection of books related to the themes of this episode.
[812] So tech innovation, leadership, the dynamics of acquisitions.
[813] These books offer the mental models to adapt to a rapidly changing technology environment.
[814] And just like all other episodes, Blinkist is giving acquired listeners an exclusive 50 % discount on all premium content.
[815] This gives you key insights from thousands of books at your fingertips all condensed into easy -to -digest summaries.
[816] And if you're a founder, a team lead, or an L &D manager, Blinkist also includes curated reading lists and progress tracking features all overseen by a dedicated customer success manager to help your team flourish as you grow.
[817] Yes.
[818] So to claim the whole free collection, unlock the 50 % discount, and explore Blinkist's enterprise solution, simply visit Blinkist .com slash Jensen and use the promo code, Jensen.
[819] Blinkist and their parent company, Go One, are truly awesome resources for your company and your teams as they develop from small startup to enterprise.
[820] Our thanks to them, and seriously, this offer is pretty awesome.
[821] Go take them up on it.
[822] We have a few lightning round questions we want to ask you.
[823] And then we have a very fun.
[824] I can't think that fast.
[825] We'll open with an easy one based on all these conference rooms we see named around here.
[826] Favorite sci -fi book?
[827] I've never read a sci -fi book before.
[828] No. Oh, come on.
[829] Yeah, yeah.
[830] What's with like the obsession with Star Trek and?
[831] Oh, just, you know, I watched the TV show.
[832] Yeah.
[833] Okay, favorite sci -fi TV series.
[834] Uh, Star Trek's my favorite.
[835] Yeah, Star Trek's my favorite.
I saw V'Ger out there on the way in.
[837] It's a good, it's a good conference room name.
V'Ger is an excellent one, yeah.
[839] Yeah.
[840] What car is your daily driver these days and related questions?
[841] Do you still have the Supra?
[842] Oh, it's one of my favorite cars and also favorite memories.
[843] You guys might not know this, but Lori and I got engaged Christmas one year, and we drove back in my brand -new Supra, and we totaled it.
[844] We were this close to the end.
[845] Thank God you didn't.
[846] But nonetheless, it wasn't my fault.
[847] It wasn't the Supra's fault.
[848] But it's a remark.
[849] The one time when it wasn't the Supra's fault.
[850] Yeah, I love that car.
[851] I'm driven these days for security reasons and others, but I'm driven in the Mercedes EQS.
[852] It's a great car.
[853] Ah, nice.
[854] Yeah, great car.
[855] Thanks.
[856] Using Nvidia technology?
[857] Yeah, we're in the central computer.
[858] Sweet.
[859] I know we already talked a little bit about business books, but one or two favorites that you've taken something from?
[860] Clay Christensen, I think, the series is the best.
[861] I mean, there's just no two ways about it.
[862] And the reason for that is because it's so intuitive and so sensible, it's approachable.
[863] But I read a whole bunch of them, and I read just about all of them.
[864] I really enjoyed Andy Grove's books.
[865] They're all really good.
[866] Awesome.
[867] Favorite characteristic of Don Valentine?
[868] Grumpy, but endearing.
[869] And what he said to me, the last time as he decided to invest in our company, he says, if you lose my money, I'll kill you.
[870] Of course I did.
And then, over the course of the decades, the years that followed, when something nice was written about us in the Mercury News, it seems like he wrote on it in a crayon.
You know, he'll say, good job, Don, you know, just written right over the newspaper,
and just good job, Don, and he mails it to me. And I hope I've kept them.
[874] But anyways, you could tell he's a real sweetheart, but he cares about the companies.
[875] He's a special character.
[876] Yeah, he's incredible.
[877] What is something that you believe today that 40 -year -old Jensen would have pushed back on and said, no, I disagree?
[878] There's plenty of time.
[879] Hmm.
[880] Yeah, there's plenty of time.
If you prioritize yourself properly and you make sure that you don't let Outlook be the controller of your time, there's plenty of time. Plenty of time in the day, plenty of time to achieve this thing, like, to do anything. Just don't do everything. Prioritize your life. Make sacrifices. Don't let Outlook control what you do every day. Notice I was late to our meeting, and the reason for that is, by the time I looked up, I, oh my gosh, you know, Ben and Dave are waiting, you know.
[882] It's already.
[883] We have time.
[884] Yeah, exactly.
[885] Didn't stop this from being a great job.
[886] No, but you have to prioritize your time really carefully.
[887] And don't let Outlook determine that.
[888] Love that.
[889] What are you afraid of, if anything?
[890] I'm afraid of the same things today that I was in the very beginning of this company, which is letting the employees down.
You know, you have a lot of people who joined your company because they believe in your hopes and dreams, and they've adopted it as their hopes and dreams, and you want to be right for them. You want to be successful for them. You want them to be able to build a great life as well as help you build a great company and be able to build a great career. You want them to enjoy all of that. And these days, I want them to be able to enjoy the things I've had the benefit of enjoying, and all the great success I've enjoyed, I want them to be able to enjoy all of that.
[892] And so I think the greatest fear is that you let them down.
[893] What point did you realize that you weren't going to have another job?
[894] That, like, this was it.
[895] I just don't change jobs.
[896] You know, if it wasn't because of Chris and Curtis convincing me to do NVIDIA, I would still be at LSI logic today.
[897] I'm certain of it.
[898] Wow.
[899] Yeah, really.
[900] Yeah, yeah.
[901] I'm certain of it.
I would keep doing what I'm doing, and at the time that I was there, I was completely dedicated and focused on helping LSI Logic be the best company it could be.
[903] And I was LSI Logic's best ambassador.
[904] I've got great friends to this day that I've known from LSI Logic.
[905] It's a company I loved then.
I love dearly today.
[907] I know exactly why I went.
[908] The revolutionary impact it had on chip design and system design and computer design.
[909] In my estimation, one of the most important companies that ever came to Silicon Valley and changed everything about how computers were made.
[910] It put me in the epicenter of some of the most important events in computer industry.
[911] It led me to meeting Chris and Curtis and Andy Bechtolstein and John Rubinstein and some of the most important people in the world.
[912] And Frank that I was with the other day and just, I mean, the list goes on.
[913] And so LSI Logic was really important to me, and I would still be there.
[914] I would, you know, who knows what LSI Logic would have become if I were still there, right?
[915] And so that's kind of how my mind works.
[916] Powering the AI of the world.
[917] Yeah, exactly.
[918] I mean, I might be doing the same thing I'm doing today.
[919] I got the sense from remembering back to part one of our series on NVIDIA.
[920] But until I'm fired, this is my last job.
[921] I love it.
[922] Yeah, yeah.
[923] I got the sense that LSI Logic might have also changed your perspective and philosophy about computing, too.
The sense we got from the research was that right out of school, when you went to AMD first, right?
[925] Yeah.
[926] You believed that, like, kind of a version of that, was it the Jerry Sanders, real men have fabs?
[927] Like, you need to do the whole stack.
[928] Like, you got to do everything.
And that LSI Logic changed you.
What LSI Logic did was realize that you can express transistors and logic gates and chip functionality in high-level languages.
That by raising the level of abstraction, in what is now called high-level design, it was coined by Harvey Jones, who's on NVIDIA's board, and I met him way back in the early days of Synopsys.
But during that time, there was this belief that you can express chip design in high-level languages.
[934] And by doing so, you could take advantage of optimizing compilers and optimization logic and tools and be a lot more productive.
[935] That logic was so sensible to me, and I was 21 years old at the time, and I wanted to pursue that vision.
[936] Frankly, that idea happened in machine learning.
[937] It happened in software programming.
I want to see it happen in digital biology, so that we can think about biology in a much higher-level language.
[940] Probably a large language model would be the way to make it representable.
[941] That transition was so revolutionary.
[942] I thought that was the best thing ever happened to the industry, and I was really happy to be part of it, and I was at ground zero.
And so I saw one industry change revolutionize another industry, and if not for LSI Logic doing the work that it did, and Synopsys shortly after, then would the computer industry be where it is today?
[944] Yeah.
[945] It's really, really terrific.
[946] I was at the right place at the right time to see all that.
[947] That's super cool.
[948] Yeah.
[949] And it sounded like the CEO of LSI Logic put a good word in for you with Don Valentine, too.
[950] I didn't know how to write a business plan.
[951] Which it turns out is not actually important.
No, no, no. It turns out that making a financial forecast that nobody knows is going to be right or wrong...
[953] Turns out not to be that important.
[954] But the important things that a business plan probably could have teased out, I think that the art of writing a business plan ought to be much, much shorter, and it forces you to condense.
[955] You know, what is the true problem you're trying to solve?
[956] What is the unmet need that you believe will emerge?
[957] And what is it that you're going to do that is sufficiently hard that when everybody else finds out is a good idea, they're not going to swarm it and, you know, make you obsolete.
And so it has to be sufficiently hard to do.
[959] There are a whole bunch of those skills that are involved in just, you know, product and positioning and pricing and go to market and, you know, all that kind of stuff.
[960] But those are skills, and you can learn those things easily.
[961] The stuff that is really, really hard is the essence of what I described.
[962] And I did that, okay, but I had no idea how to write the business plan.
And I was fortunate that Wilf Corrigan was so pleased with me and the work that I did when I was at LSI Logic.
[964] He called up Don Valentine and told Don, you know, invest in this kid.
[965] And he's going to come your way.
And so I was, you know, I was set up for success from that moment, and that got us off the ground.
[967] As long as he didn't lose the money.
[968] No, I think Sequoia did okay.
[969] That's good.
[970] Yeah, I think we probably are one of the best investments they've ever made.
[971] Have they held through today?
[972] The VC partner is still on the board, Mark Stevens.
[973] Yeah, Mark.
[974] Yeah, yeah, all these years.
[975] The two founding VCs are still on the board.
[976] Sutter Hill and Sequoia?
[977] Yeah, Tenshawks and Mark Stevens.
[978] I don't think that ever happens.
[979] We are singular in that circumstance, I believe.
They've added value this whole time, been inspiring this whole time, gave great wisdom and great support.
[981] But they also were so entertained.
[982] No, not yet.
But they've been entertained, you know, by the company, inspired by the company, enriched by the company, and stayed with it.
[985] And I'm really grateful.
Well, that being our final question for you: it's 2023, the 30-year anniversary of the founding of Nvidia.
[987] If you were magically 30 years old again today in 2023 and you were going to Denny's with your two best friends who are the two smartest people you know, and you're talking about starting a company, what are you talking about starting?
[988] I wouldn't do it.
[989] I know.
[990] And the reason for that is, really quite simple, ignoring the company that we would start.
[991] First of all, I'm not exactly sure.
The reason why I wouldn't do it, and it goes back to why it's so hard, is building a company and building NVIDIA turned out to have been a million times harder than I expected it to be.
Than any of us expected it to be.
And at that time, if we realized the pain and suffering and just how vulnerable you're going to feel, and the challenges that you're going to endure, the embarrassment and the shame and the list of all the things that go wrong, I don't think anybody would start a company. Nobody in their right mind would do it. And I think that that's kind of the superpower of an entrepreneur. They don't know how hard it is, and they only ask themselves, how hard can it be? And to this day, I trick my brain into thinking, how hard can it be? Because you have to, still, when you wake up in the morning.
[995] How hard can it be?
[996] Everything that we're doing, how hard can it be?
[997] Omniverse, how hard can it be?
I don't get the sense, though, that you're planning to retire anytime soon.
[999] No, no, I'm still going to.
[1000] You couldn't choose to say, like, whoa, this is too hard.
[1001] The trick is still working.
[1002] You're still, the trick is still working.
[1003] I'm still enjoying myself immensely, and I'm adding a little bit of value, but that's really the trick of an entrepreneur.
[1004] You have to get yourself to believe that it's not that hard.
[1005] because it's way harder than you think.
[1006] And so if I go taking all of my knowledge now and I go back and I said, I'm going to endure that whole journey again, I think it's too much.
[1007] It is just too much.
[1008] Do you have any suggestions on any kind of support system or a way to get through the emotional trauma that comes with building something like this?
[1009] Family and friends and all the colleagues we have here.
[1010] I'm surrounded by people who've been here for 30 years.
[1011] Right, Chris has been here for 30 years, and Jeff Fisher has been here 30 years, Dwight's been here 30 years, and Jonah and Brian have been here, you know, 25 -some years, and probably longer than that, and, you know, Joe Greco's been here 30 years.
I'm surrounded by these people that never one time gave up, and they never one time gave up on me, and that's the entire ball of wax, you know, and to be able to go home and have your family be fully committed to everything that you're trying to do, thick or thin. They're proud of you and proud of the company.
[1014] And you kind of need that.
[1015] You need the unwavering support of people around you.
You know, the Jim Gaithers and the Tench Coxes, the Mark Stevens, and Harvey Jones and all the early people of our company, the Bill Millers, they not one time gave up on the company and us.
[1017] And you kind of, you need that.
[1018] Not kind of need that, you need that.
[1019] And I'm pretty sure that almost every successful company, and entrepreneurs that have gone through some difficult challenges, they had that support system around them.
I can only imagine how meaningful that is.
[1021] I mean, I know how meaningful that is in any company, but for you, given that, I feel like the Nvidia journey is particularly amplified on these dimensions, right?
[1022] And, like, you know, you went through two, two if not three, like 80 % plus drawdowns in the public markets.
[1023] Yeah.
[1024] To have investors who've stuck with you.
[1025] From day one through that, must be just, like, so much support.
[1026] Yeah, yeah, it is incredible.
[1027] And you hate that any of that stuff happened, and most of it is out of your control.
[1028] But, you know, 80 % fall, it's an extraordinary thing, no matter how you look at it.
And I forget exactly, but, I mean, we traded down at about a couple, two, three billion dollars in market value for a while because of the decision we made in going into CUDA and all that work.
[1031] And your belief system has to be really, really strong.
[1032] You know, you have to really, really believe it and really, really want it.
[1033] Otherwise, it's just too much to endure.
I mean, because, you know, everybody's questioning you, and employees aren't questioning you, but employees have questions.
[1035] Right.
[1036] People outside are questioning you.
[1037] And it's a little embarrassing.
[1038] It's like, you know, when your stock price gets hit, It's embarrassing, no matter how you think about it, and it's hard to explain, you know.
[1039] And so there's no good answers to any of that stuff.
[1040] You know, CEOs are human and companies are built of humans, and these challenges are hard to endure.
And you had an appropriate comment on our most recent episode on you all, where we were talking about, you know, the current situation at NVIDIA.
[1042] I think you said for any other company, this would be a precarious spot to be in, but for NVIDIA.
[1043] This is kind of old hat.
[1044] Yeah, you guys are familiar with these large swings and amplitude.
[1045] Yeah.
[1046] The thing that to keep in mind is at all times, what is the market opportunity that you're engaging?
[1047] And that helps, that informs your size.
You know, I was told a long time ago that NVIDIA can never be larger than a billion dollars.
[1049] Obviously, it's an underestimination of the size of the opportunity.
[1050] It is the case that no chip company can ever be so big.
And so, but if you're not a chip company, then why does that apply to you?
[1052] And this is the extraordinary thing about technology right now is technology is a tool and it's only so large.
What's unique about our current circumstance today is that we're in the manufacturing-of-intelligence world, we're in the manufacturing-of-work world.
[1054] That's AI.
And the world of tasks, doing work: productive work, generative AI work, generative intelligent work.
[1056] That market size is enormous.
[1057] It's measured in trillions.
[1058] One way to think about that is if you build a chip for a car, how many cars are there and how many chips would they consume?
[1059] That's one way to think about that.
However, if you build a system that, whenever needed, assists in the driving of the car, you know, what's the value of an autonomous chauffeur every now and then?
[1061] And so now the market, obviously, the problem becomes much larger, the opportunity becomes larger.
[1062] You know, what would it be like if we were to magically conjure up a chauffeur for everybody who has a car?
[1063] And, you know, how big is that market?
[1064] And obviously, that's a much, much larger market.
And so the technology industry is, you know, what we've discovered, what NVIDIA has discovered and what some others have discovered, is that by separating ourselves from being a chip company, but building on top of the chip, you're now an AI company, and the market opportunity has grown by probably a thousand times.
[1066] Don't be surprised if technology companies become much larger in the future, because what you produce is something very different.
And that's kind of the way to think about, you know, how large can your opportunity be?
[1068] How large can you be?
[1069] It has everything to do with the size of the opportunity.
[1070] Yep.
[1071] Well, Jensen, thank you so much.
[1072] Thank you.
[1073] Ooh, David.
[1074] That was awesome.
[1075] It's so fun.
[1076] Well, listeners, we want to tell you that you should totally sign up for our email list.
Of course, it is notifications when we drop a new episode, but we've added something new.
[1078] We're including little tidbits that we learn after releasing the episode, including listener corrections.
[1079] And we also have been sort of teasing what the next episode will be.
[1080] So if you want to play the little guessing game along with the rest of the Acquired community, sign up at Acquired .fm slash email.
[1081] Our huge thank you to Blinkist, Statsig, and Crusoe.
[1082] All the links in the show notes are available to learn more and get the exclusive offers for the Acquired community from each of them.
[1083] You should check out ACQ2, which is available at any podcast player.
As these main Acquired episodes get longer and come out once a month instead of once every couple weeks, it's a little bit more of a rarity these days.
[1086] We've been up -leveling our production process, and that takes time.
[1087] Yes.
[1088] ACQ2 has become the place to get more from David and I, and we've just got some awesome episodes coming up that we are excited about.
[1089] If you want to come deeper into the Acquired Kitchen, become an LP, Acquired .fm slash LP.
[1090] Once every couple months or so, we'll be doing a call with all of you on Zoom just for LPs to get the inside scoop of what's going on in Acquired Land and get to know David and I a little bit better, and once a season, you'll get to help us pick a future episode.
[1091] So that's Acquired .fm slash LP.
[1092] Anyone should join the Slack.
Acquired .fm slash Slack.
[1094] God, we've got a lot of things now, David.
[1095] I know.
[1096] The hamburger bar on our website is expanding.
[1097] Expanding.
[1098] I know.
[1099] That's how you know we're becoming Enterprise.
[1100] Wait till we have a mega menu, a menu of menus, if you will.
[1101] What is the acquired solution that we can sell?
[1102] That's true.
[1103] We've got to find that.
[1104] All right.
[1105] With that, listeners, Acquired .fm slash Slack to join the Slack and discuss this episode.
[1106] Acquired .fm slash store to get some of that sweet merch that everyone is talking about.
[1107] And with that, listeners, we will see you next time.
[1108] We'll see you next time.