Judea Pearl: Causal Reasoning, Counterfactuals, Bayesian Networks, and the Path to AGI


Lex Fridman Podcast XX


Full Transcription:

[0] The following is a conversation with Judea Pearl, a professor at UCLA and a winner of the Turing Award, which is generally recognized as the Nobel Prize of computing.

[1] He's one of the seminal figures in the field of artificial intelligence, computer science, and statistics.

[2] He has developed and championed probabilistic approaches to AI, including Bayesian networks, and profound ideas in causality in general.

[3] These ideas are important not just to AI, but to our understanding and practice of science.

[4] But in the field of AI, the idea of causality, cause and effect, to many, lie at the core of what is currently missing and what must be developed in order to build truly intelligent systems.

[5] For this reason, and many others, his work is worth returning to often.

[6] I recommend his most recent book, called The Book of Why, that presents key ideas from a lifetime of work in a way that is accessible to the general public.

[7] This is the Artificial Intelligence Podcast.

[8] If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter, Lex Fridman, spelled F-R-I-D-M-A-N.

[9] If you leave a review on Apple Podcasts especially, but also Castbox, or comment on YouTube, consider mentioning topics, people, ideas, questions, quotes, and science, tech, and philosophy that you find interesting, and I'll read them on this podcast.

[10] I won't call out names, but I love comments with kindness and thoughtfulness in them, so I thought I'd share them with you.

[11] Someone on YouTube highlighted a quote from the conversation with Noam Chomsky, where he said that the significance of your life is something you create.

[12] I like this line as well.

[13] On most days, the existentialist approach to life is one I find liberating and fulfilling.

[14] I recently started doing ads at the end of the introduction.

[15] I'll do one or two minutes after introducing the episode and never any ads in the middle that break the flow of the conversation.

[16] I hope that works for you and doesn't hurt the listening experience.

[17] This show is presented by Cash App, the number one finance app in the App Store.

[18] I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds.

[19] Cash App also has a new investing feature.

[20] You can buy fractions of a stock, say $1 worth, no matter what the stock price is.

[21] Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC.

[22] I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and LEGO competitions.

[23] They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means that donated money is used to the maximum effectiveness.

[24] When you get Cash App from the App Store or Google Play and use code Lex Podcast, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world.

[25] And now, here's my conversation with Judea Pearl.

[26] You mentioned in an interview that science is not a collection of facts, but a constant human struggle with the mysteries of nature.

[27] What was the first mystery that you can recall that hooked you, that captivated you?

[28] Oh, the first mystery, that's a good one.

[29] Yeah, I remember that.

[30] I had a fever for three days.

[31] When I learned about Descartes' analytic geometry, and I found out that you can do all the construction in geometry using algebra.

[32] And I couldn't get over it.

[33] I simply couldn't get out of bed.

[34] So what kind of world does analytic geometry unlock?

[35] Well, it connects algebra with geometry.

[36] Okay, so Descartes had the idea that geometrical construction and geometrical theorems and assumptions can be articulated in the language of algebra, which means that all.

[37] all the proofs that we did in high school, trying to prove that the three bisectors meet at one point and that, okay, all this can be proven by just shuffling around notation.

[38] Yeah, that was a traumatic experience.

[39] The traumatic experience.

[40] For me, it was, I'm telling you, right?

[41] So it's the connection between the different mathematical disciplines that they all.

[42] No, two different languages.

[43] Languages.

[44] Yeah.

[45] So which mathematic discipline is most beautiful?

[46] Is geometry it for you?

[47] Both are beautiful.

[48] They have almost the same power.

[49] But there's a visual element to geometry being of...

[50] A visual, it's more transparent.

[51] But once you get over to algebra, then a linear equation is a straight line.

[52] This translation is easily absorbed.

[53] And to pass a tangent to a circle, you know, you have the basic theorems and you can do it with algebra.

[54] So, but the transition from one to another was really, I thought that Descartes was the greatest mathematician of all times.

[55] So you have been at the, if you think of engineering and mathematics as a spectrum.

[56] Yes.

[57] You have been, you have walked, casually along this spectrum throughout your life.

[58] You know, a little bit of engineering and then, you know, done a little bit of mathematics here and there.

[59] Not a little bit.

[60] I mean, we got a very solid background in mathematics because our teachers were geniuses.

[61] Our teachers came from Germany in the 1930s, running away from Hitler.

[62] They left their careers in Heidelberg and Berlin and came to teach high school in Israel.

[63] And we were the beneficiary of that experiment.

[64] And they taught us math a good way.

[65] What's the good way to teach math?

[66] Chronologically.

[67] The people.

[68] The people behind the theorems, yeah.

[69] Their cousins and their nieces and their faces.

[70] And how they jumped from the bathtub when they scream, Eureka!

[71] And ran naked in town.

[72] So you're almost educated as a historian of math?

[73] No, we just got a glimpse of that history together with a theorem.

[74] So every exercise in math was connected with the person and the time of the person.

[75] The period.

[76] The period also mathematically speaking.

[77] Mathematically speaking, yes, not the politics.

[78] So, and then in university, you have gone on to do engineering.

[79] Yeah.

[80] I got a BS in engineering at the Technion, right?

[81] And then I moved here for graduate work, and I did engineering in addition to physics at Rutgers.

[82] And it would combine very nicely with my thesis, which I did at RCA Laboratories, in superconductivity.

[83] And then somehow thought, to switch to almost computer science, software, even, not switch, but long to become, to get into software engineering a little bit, almost in programming, if you can call it that in the 70s.

[84] So there's all these disciplines.

[85] Yeah.

[86] If you were to pick a favorite, in terms of engineering and mathematics, which path do you think has more beauty, which path has more power?

[87] It's hard to choose, no? I enjoy doing physics, and I even have a vortex named after me.

[88] So I have investment in immortality.

[89] So what is a vortex?

[90] Vortex is in superconductivity.

[91] In the superconductivity.

[92] If you have a permanent current swirling around, one way or the other, you can store a one or a zero for a computer.

[93] That was what we worked on in the 1960s at RCA.

[94] And I discovered a few nice phenomena.

[95] with the vortices.

[96] So that's a pearl vortex?

[97] Pearl vortex, right, you can Google it.

[98] Right?

[99] I didn't know about it, but the physicists picked up on my thesis, on my PhD, and it became popular.

[100] I mean, thin film superconductors became important for high temperature superconductors.

[101] So they called it pearl vortex without my knowledge.

[102] I discovered it about 15 years ago.

[103] You have footprints in all of the sciences.

[104] So let's talk about the universe a little bit.

[105] Is the universe at the lowest level deterministic or stochastic in your amateur philosophy view?

[106] Put another way, does God play dice?

[107] Well, we know it is stochastic, right?

[108] Today, today we think it is stochastic.

[109] Yes.

[110] We think so because we have Heisenberg's uncertainty principle, and we have some experiments to confirm that.

[111] All we have is experiments to confirm it.

[112] We don't understand why.

[113] Why is already...

[114] You wrote a book about why.

[115] Yeah, it's a puzzle.

[116] It's a puzzle that you have the dice-flipping machine, or God, and the results of the flipping propagate with a speed faster than the speed of light.

[117] We can't explain it, okay?

[118] But it only governs microscopic phenomena.

[119] So you don't think of quantum mechanics as useful for understanding the nature of reality?

[120] No. Diversionary.

[121] So in your thinking, the world might as well be deterministic.

[122] The world is deterministic.

[123] And as far as the neuron firing is concerned, it is deterministic to first approximation.

[124] What about free will?

[125] Free will is also a nice exercise.

[126] Free will is an illusion that we AI people are going to solve.

[127] So what do you think once we solve it, that solution will look like, once we put it in a page?

[128] The solution will look like, first of all it will look like a machine.

[129] a machine that acts as though it has free will.

[130] It communicates with other machines as though they have free will, and you wouldn't be able to tell the difference between a machine that does and machines that doesn't have free will.

[131] So the illusion, it propagates the illusion of free will amongst the other machines.

[132] And faking it is having it.

[133] Okay, that's what Turing tests are about.

[134] Faking intelligence is intelligence, because it's not easy to fake.

[135] It's very hard to fake.

[136] And you can only fake if you have it.

[137] That's such a beautiful statement.

[138] Yeah, you could, yeah.

[139] You can't fake it if you don't have it.

[140] So let's begin at the beginning with probability, both philosophically and mathematically, what does it mean to say the probability of something happening is 50%.

[141] What is probability?

[142] It's a degree of uncertainty that an agent has about the world.

[143] You're still expressing some knowledge in that statement.

[144] Of course.

[145] If the probability is 90%, it's absolutely a different kind of knowledge than if it is 10%.

[146] But it's still not solid knowledge, it's...

[147] It is solid knowledge, but hey, if you tell me with 90% assurance that smoking will give you lung cancer in five years, versus 10%, it's a piece of useful knowledge.

[148] So the statistical view of the universe, why is it useful?

[149] So we're swimming in complete uncertainty, most of everything around us.

[150] It allows you to predict things with a certain probability, and computing those probabilities is very useful.

[151] That's the whole idea of prediction.

[152] And you need prediction to be able to survive.

[153] If you cannot predict the future, then just crossing the street would be extremely fearful.

[154] And so you've done a lot of work in causation, and so let's think about correlation.

[155] I started with the probability.

[156] You started with probability.

[157] You've invented the Bayesian networks.

[158] Yeah.

[159] And so, you know, we'll dance back and forth between these levels of uncertainty.

[160] But what is correlation?

[161] What is it?

[162] So probability is something happening, is something, but then there's a bunch of things happening.

[163] And sometimes they happen together, sometimes not, they're independent or not.

[164] So how do you think about correlation of things?

[165] Correlation occurs when two things vary together over a very long time, that's one way of measuring it, or when you have a bunch of variables that all vary cohesively, then we say we have a correlation here.

[166] And usually when we think about correlation, we really think causally.

[167] Things cannot be correlated unless there is a reason for them to vary together, Why should they vary together?

[168] If they don't see each other, why should they vary together?

[169] So underlying it somewhere is causation.

[170] Yes.

[171] Hidden in our intuition, there is a notion of causation because we cannot grasp any other logic except causation.

[172] And how does conditional probability differ from causation?

[173] So what is conditional probability?

[174] Conditional probability, how things vary, when one of them stays the same.

[175] Now, staying the same means that I have chosen to look only at those incidents where the guy has the same value as a previous one.

[176] It's my choice as an experimenter.
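
For reference, the textbook definition being gestured at here can be written compactly (standard notation, not something stated in the conversation):

$$
P(A \mid B) \;=\; \frac{P(A, B)}{P(B)},
$$

that is, the probability of A computed only over the incidents in which B took the chosen value.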

[177] So things that are not correlated before could become correlated.

[178] Like, for instance, if I have two coins which are uncorrelated, and I choose only those flippings experiments in which a bell rings and a bell rings when at least one of them is a tail, then suddenly I see correlation between the two coins because I only look at the cases where the bell rang.

[179] You see, it's my design, with my ignorance, essentially, with my audacity to...

[180] ignore certain incidents, I suddenly create a correlation where it doesn't exist physically.
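
To make the two-coin example concrete, here is a minimal simulation sketch (my own illustration, not from the conversation): two fair, independent coins, and a bell that rings whenever at least one of them lands tails. Restricting attention to the trials where the bell rang creates a strong negative correlation between physically independent coins.

```python
# Two independent fair coins; a bell rings when at least one lands tails.
# Conditioning on "bell rang" induces correlation where none exists physically.
import random

random.seed(0)
all_flips, bell_flips = [], []
for _ in range(100_000):
    a = random.randint(0, 1)  # 1 = heads, 0 = tails
    b = random.randint(0, 1)
    all_flips.append((a, b))
    if a == 0 or b == 0:      # the bell rings
        bell_flips.append((a, b))

def correlation(pairs):
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
    var_a = sum((a - ma) ** 2 for a, _ in pairs) / n
    var_b = sum((b - mb) ** 2 for _, b in pairs) / n
    return cov / (var_a * var_b) ** 0.5

print(correlation(all_flips))   # ~0: the coins really are independent
print(correlation(bell_flips))  # ~-0.5: selecting on the bell creates correlation
```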

[181] Right.

[182] So that's, you just outlined one of the flaws of observing the world and trying to infer something fundamental about the world from looking at the correlation.

[183] I don't look at it as a flaw.

[184] The world works like that.

[185] But the flaw comes in if we try to impose causal logic on correlation; it doesn't work too well.

[186] I mean, but that's exactly what we do.

[187] That's what, that has been the majority of science.

[188] The majority of naive science.

[189] The statisticians know it.

[190] The statisticians know that if you condition on a third variable, then you can destroy or create correlations among two other variables.

[191] They know it.

[192] It's in a data.

[193] There's nothing surprising.

[194] That's why they all dismiss Simpson's paradox.

[195] Ah, we know it.

[196] They don't know anything about it.
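
A hypothetical numeric sketch of the kind of reversal being referred to, using figures in the style of the classic kidney-stone example (my illustration, not data from the conversation): conditioning on severity flips which treatment looks better.

```python
# Simpson's paradox: the treatment looks better within each severity group,
# yet worse in the pooled data, because severity confounds treatment choice.
groups = {
    #           treated: (recovered, total),  untreated: (recovered, total)
    "mild":   {"treated": (81, 87),   "untreated": (234, 270)},
    "severe": {"treated": (192, 263), "untreated": (55, 80)},
}

for name, g in groups.items():
    t = g["treated"][0] / g["treated"][1]
    u = g["untreated"][0] / g["untreated"][1]
    print(name, f"treated {t:.0%} vs untreated {u:.0%}")  # treated wins in each group

pooled_t = sum(g["treated"][0] for g in groups.values()) / sum(g["treated"][1] for g in groups.values())
pooled_u = sum(g["untreated"][0] for g in groups.values()) / sum(g["untreated"][1] for g in groups.values())
print(f"pooled: treated {pooled_t:.0%} vs untreated {pooled_u:.0%}")  # untreated looks better
```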

[197] Well, there's disciplines like psychology where all the variables are hard to account for.

[198] And so oftentimes there's a leap between correlation to causation.

[199] You're imposing.

[200] What do you mean?

[201] A leap.

[202] Who is to get causation from correlation.

[203] No, you're not proving causation, but you're sort of discussing it, implying it, sort of hypothesizing without the ability to prove.

[204] Which discipline you have in mind?

[205] I'll tell you if they are obsolete, or if they are outdated, or they're about to get outdated. Tell me which one you have in mind.

[206] Oh, psychology, you know.

[207] Okay, what, is it SEM?

[208] Structural equation modeling?

[209] No, no, I was thinking of applied psychologists studying, for example, we work with human behavior in semi-autonomous vehicles, how people behave.

[210] And you have to conduct these studies of people driving cars.

[211] Everything starts with the question.

[212] What is the research question?

[213] What is the research question?

[214] The research question, do people fall asleep when the car is driving itself?

[215] Do they fall asleep, or do they tend to fall asleep more frequently, more frequently than when the car is not driving itself? That's a good question, okay. And so you measure, you put people in the car, because it's the real world, you can't conduct an experiment where you control everything. Why can't you? You can, you could turn the automatic module on and off. Because it's on-road, public.

[216] I mean, there's aspects to it that's unethical because it's testing on public roads.

[217] So you can only use vehicle.

[218] They have to, the people, the drivers themselves have to make that choice themselves.

[219] And so they regulate that.

[220] And so you just observe when they drive it autonomously and when they don't.

[221] But maybe they turn it off when they were very tired.

[222] Yeah, that's kind of.

[223] thing, but you don't know those variables.

[224] Okay, so that you have now uncontrolled experiment.

[225] Uncontrolled experiment.

[226] We call it observational study.

[227] Yeah.

[228] And from the correlation detected, we have to infer causal relationships.

[229] Whether it was the automatic piece has caused them to fall asleep or, okay.

[230] So that is an issue that was about 120 years old.

[231] I should only go 100 years old, okay? And, oh, maybe, no, actually I should say it's 2,000 years old, because we have this experiment by Daniel, about the Babylonian king who wanted the exiles, the people from Israel who were taken in exile to Babylon, to serve the king. He wanted to serve them the king's food, which was meat, and Daniel, as a good Jew, couldn't eat non-kosher food, so he asked them to eat vegetarian food. But the king's overseer said, I'm sorry, but if the king sees that your performance falls below that of the other kids, he's going to kill me. Daniel said, let's make an experiment.

[232] Let's take four of us from Jerusalem, okay? Give us vegetarian food, let's take the other guys to eat the king's food, and in about a week's time we'll test our performance. And you know the answer. Of course, he did the experiment, and they were so much better than the others, and the king nominated them to superior positions in his kingdom. So it was the first experiment. Yes.

[233] So there was a very simple, also the same research questions.

[234] We want to know whether vegetarian food assists or obstructs your mental ability.

[235] And, okay, so the question is very old one.

[236] Even Democritus said, I would rather discover one cause of things than be a king of Persia.

[237] The task of discovering causes was in the mind of ancient people from many, many years ago.

[238] But the mathematics of doing that was only developed in the 1920s.

[239] So science has left us orphans.

[240] Science has not provided us with the mathematics to capture the idea.

[241] of X causes Y and Y does not cause X, because all the equations of physics are symmetrical, algebraic; the equality sign goes both ways. Okay, let's look at machine learning. Machine learning today, if you look at deep neural networks, you can think of it as a kind of conditional probability estimator. Correct? Beautiful. So, where did you say that? What, conditional probability estimators?

[242] None of the machine learning people clobbered you, attacked you?

[243] Listen, most people, and this is why today's conversation I think is interesting, most people would agree with you.

[244] There are certain aspects that are just effective today, but we're going to hit a wall, and there's a lot of ideas.

[245] I think you're very right that we're going to have to return to about causality.

[246] And it would be, let's try to explore it.

[247] Let's even take a step back.

[248] You've invented Bayesian networks that look awfully a lot like they express something like causation, but they don't, not necessarily.

[249] So how do we turn Bayesian networks into expressing causation?

[250] How do we build causal networks?

[251] This, A, causes B, B causes C, how do we start to infer?

[252] that kind of thing.

[253] We start asking ourselves a question.

[254] What are the factors that would determine the value of X?

[255] X could be blood pressure, death, hungry, hunger.

[256] But these are hypotheses that we propose.

[257] Hypothesis, everything which has to do with causality comes from a theory.

[258] The difference is only how you interrogate the theory that you have in your mind.

[259] So it still needs the human expert to propose.

[260] Right, you need the human expert to specify the initial model.

[261] Initial model could be very qualitative.

[262] Just who listens to whom?

[263] By who listens to whom, I mean one variable listens to the other.

[264] So I say, okay, the tide is listening to the moon.

[265] and not to the rooster's crow, and so forth. This is our understanding of the world in which we live, a scientific understanding of reality. We have to start there, because if we don't know how to handle cause-and-effect relationships when we do have a model, then we certainly do not know how to handle them when we don't have a model.
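
As a minimal sketch of what such a qualitative "who listens to whom" specification might look like (my own illustration, with made-up variable names), the initial model is nothing more than a directed graph from each variable to the variables it listens to:

```python
# Hypothetical qualitative causal model, written as "who listens to whom":
# each variable maps to the list of variables it listens to (its direct causes).
listens_to = {
    "tide":    ["moon"],  # the tide listens to the moon,
    "rooster": [],        # and not to the rooster's crow
    "moon":    [],
}
```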

[266] So let's start first.

[267] In AI, slogan is representation first, discovery second.

[268] But if I give you all the information that you need, can you do anything useful with it?

[269] That is the first representation.

[270] How do you represent it?

[271] I give you all the knowledge in the world.

[272] How do you represent it?

[273] When you represent it, I ask you, can you infer X or Y or Z?

[274] can you answer certain queries?

[275] Is it complex?

[276] Is it polynomial?

[277] All the computer science exercises we do once you give me a representation for my knowledge.

[278] Then you can ask me, now I understand how to represent things.

[279] How do I discover them?

[280] It's a secondary thing.

[281] First of all, I should echo the statement that mathematics, and currently much of the machine learning world, has not considered causation, that A causes B, in anything.

[282] So that seems like a, that seems like a non-obvious thing that you would think we would have really acknowledged, but we haven't.

[283] So we have to put that on the table.

[284] So knowledge: how hard is it to create a knowledge base from which to work?

[285] In certain areas it's easy, because we have only four or five major variables, and an epidemiologist or an economist can put them down: minimum wage, unemployment, policy X, Y, Z, and start collecting data and quantify the parameters that were left unquantified with the initial knowledge.

[286] That's the routine work that you find in experimental psychology, in economics, everywhere, in the health science.

[287] That's a routine thing.

[288] But I should emphasize, you should start with a research question, what you want to estimate.

[289] Once you have that, you have a language of expressing what you want to estimate.

[290] You think it's easy?

[291] No. So we can talk about two things.

[292] I think one is how the science of causation is very useful for answering certain questions.

[293] And then the other is how do we create intelligence systems that need to reason with causation?

[294] So if my research question is how do I pick up this water bottle from the table?

[295] all the knowledge that is required to be able to do that.

[296] How do we construct that knowledge base?

[297] Do we return back to the problem that we didn't solve in the 80s with expert systems?

[298] Do we have to solve that problem of automated construction of knowledge?

[299] You're talking about the task of eliciting knowledge from an expert?

[300] The task of eliciting knowledge from an expert, or just the self-discovery of knowledge,

[301] of more knowledge, more and more knowledge.

[302] So automating the building of knowledge as much as possible.

[303] It's a different game in the causal domain because it essentially is the same thing.

[304] You have to start with some knowledge and you're trying to enrich it.

[305] But you don't enrich it by asking for more rules.

[306] You enrich it by asking for the data, to look at the data and quantifying and ask queries that you couldn't answer when you started.

[307] You couldn't because the question is quite complex and it's not within the capability of ordinary cognition, of ordinary person, ordinary expert even, to answer.

[308] So what kind of questions do you think we can start to answer?

[309] Even simple.

[310] Suppose, yeah, I start with an easy one.

[311] Let's do it.

[312] Okay, what's the effect of a drug on recovery?

[313] Was it the aspirin that caused my headache to be cured?

[314] Or was it the television program?

[315] Or the good news I received?

[316] This is already, you see, a difficult question, because it's finding the cause from the effect.

[317] The easy one is finding the effect from the cause.

[318] That's right.

[319] So first you construct a model saying that this is an important research question.

[320] This is an important question.

[321] Then you...

[322] No, I didn't construct a model yet.

[323] I just said it's an important question.

[324] It's an important question.

[325] And the first exercise is express it mathematically.

[326] What do you want to?

[327] Like, if I tell you, what will be the effect of taking this drug?

[328] You have to say that in mathematics.

[329] How do you say that?

[330] Yes.

[331] Can you write down the question?

[332] Not the answer.

[333] I want to find the effect of the drug on my headache.

[334] Right.

[335] Write down.

[336] Write it down.

[337] That's where the do calculus comes in.

[338] Yes.

[339] A do operator.

[340] What's the do-operator?

[341] Do operator, yeah.

[342] Which is nice.

[343] It's the difference between association and intervention.

[344] Very beautifully sort of constructed.

[345] Yeah.

[346] So we have a do operator.

[347] So the do-calculus, connected to the do-operator itself, connects the operation of doing to something that we can see.

[348] Right.

[349] So as opposed to the purely observing, you're making the choice to change a variable.

[350] That's what it expresses.

[351] And then the way that we interpret it, the mechanism by which we take your query and translate it into something that we can work with, is by giving it semantics: saying that you have a model of the world, and you cut off all the incoming arrows into X. Now, in the modified, mutilated model, you ask for the probability of Y. That is the interpretation of doing X, because by doing things, you liberate them from all the influences that acted upon them earlier, and you subject them to the tyranny of your muscles.
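
A minimal simulation sketch of that "arrow-cutting" semantics (my own illustration with made-up numbers, not Pearl's code): a hidden common cause Z drives both X and Y, so conditioning on X = 1 and intervening to set X = 1 give different answers for Y.

```python
# Toy model: Z -> X, Z -> Y, X -> Y. Compare observing X=1 with doing X=1,
# where doing means cutting the arrow from Z into X (graph surgery).
import random

random.seed(1)

def simulate(do_x=None):
    z = random.random() < 0.5                            # hidden common cause
    if do_x is None:
        x = random.random() < (0.8 if z else 0.2)        # X listens to Z
    else:
        x = do_x                                         # surgery: X no longer listens to Z
    y = random.random() < (0.1 + 0.3 * x + 0.5 * z)      # Y listens to X and Z
    return z, x, y

obs = [simulate() for _ in range(200_000)]
p_y_given_x1 = sum(y for _, x, y in obs if x) / sum(1 for _, x, _ in obs if x)

intervened = [simulate(do_x=True) for _ in range(200_000)]
p_y_do_x1 = sum(y for _, _, y in intervened) / len(intervened)

print(round(p_y_given_x1, 2))  # ~0.80: seeing X=1 also tells us Z is probably 1
print(round(p_y_do_x1, 2))     # ~0.65: forcing X=1 leaves Z at its natural 50/50
```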

[352] So you remove all the questions about causality by doing them.

[353] There's one level of questions.

[354] Answer questions about what will happen if you do things.

[355] If you do, if you drink the coffee, if you take the aspirin.

[356] So how do we get the doing data?

[357] Now the question is, if we cannot run experiments, then we have to rely on observational study.

[358] So first we could decide to interrupt.

[359] We could run an experiment.

[360] where we do something, where we drink the coffee and do, and the do operator allows you to sort of be systematic about expressing.

[361] To imagine how the experiment will look like, even though we cannot physically and technologically conduct it.

[362] I'll give you an example.

[363] What is the effect of blood pressure on mortality?

[364] I cannot go down into your vein and change your blood pressure, but I can ask the question.

[365] Which means if I have a model of your body, I can imagine the effect of your, how the blood pressure change will affect your mortality.

[366] How? I go into the model and I conduct this surgery on the blood pressure, even though physically I cannot do it.

[367] Let me ask the quantum mechanics question.

[368] Does the doing change the observation?

[369] meaning the surgery of changing the blood pressure is, I mean...

[370] No, the surgery is, I call the very delicate.

[371] It's very delicate, infinitely delicate.

[372] Incisive and delicate.

[373] Which means, do means, do X means I'm going to touch only X. Only X. Directly into X. So that means that I change only things which depends on X by virtue of X changing.

[374] But I don't change things which do not depend on X. Like, I wouldn't change your sex or your age.

[375] I just change your blood pressure.

[376] So in the case of blood pressure, it may be difficult or impossible to construct such an experiment.

[377] No, physically, yes.

[378] But hypothetically, no. If we have a model, that is what the model is for.

[379] So you conduct surgeries on a model, you take it apart, put it back, that's the idea of a model.

[380] The idea of thinking, counterfactually imagining, and that's the idea of creativity.

[381] So by constructing that model, you can start to infer if the blood pressure leads to mortality, which increases or decreases.

[382] By...

[383] I construct a model, but I still cannot answer it.

[384] I have to see if I have enough information in the model that would allow me to find out the effects of an intervention from a non-interventional study, from an observational, hands-off study.

[385] So what's needed to mean that?

[386] You need to have assumptions about who affects whom.

[387] If the graph had a certain property, the answer is yes, you can get it from observational study.

[388] If the graph is too meshy, bushy, bushy, the answer is no, you cannot.

[389] Then you need to find either different kind of observation that you haven't considered or one experiment.

[390] So basically, that puts a lot of pressure on you to encode wisdom into that graph.

[391] Correct.

[392] But you don't have to encode more than what you know.

[393] God forbid you put in more than what you know; it's like what economists are doing.

[394] They're identifying assumptions.

[395] They put in assumptions, even if they don't prevail in the world.

[396] They put assumptions so they can identify things.

[397] But the problem is, yes, beautifully put.

[398] But the problem is you don't know what you don't know.

[399] So.

[400] You know what you don't know.

[401] Because if you don't know, you say it's possible.

[402] It's possible that X affect the traffic tomorrow.

[403] It's possible.

[404] You put down an arrow which says it's possible.

[405] Every arrow in the graph says it's possible.

[406] So there's not a significant cost to adding arrows that...

[407] The more arrow you add, the less likely you are to identify things from purely observational data.

[408] So if the whole world is bushy, and everybody affects everybody else, the answer is, you can answer it ahead of time: I cannot answer my query from observational data, I have to go to experiments. So you talk about machine learning as essentially learning by association, or reasoning by association, and this do-calculus is allowing for intervention. I like that word, action. So you also talk about counterfactuals. Yeah. And I'm trying to sort of understand the difference in counterfactuals and intervention.

[409] What's the, first of all, what is counterfactuals and why are they useful?

[410] Why are they especially useful as opposed to just reasoning what effect actions have?

[411] Counterfactuals contain what we normally call explanations.

[412] Can you give an example?

[413] If I tell you that acting one way, affect something else.

[414] I didn't explain anything yet.

[415] But if I ask you, was it the aspirin that cured my headache?

[416] I'm asking for an explanation: what cured my headache?

[417] And putting a finger on the aspirin provides an explanation.

[418] It was aspirin.

[419] It was responsible for your headache going away.

[420] if you didn't take the aspirin, you would still have a headache.

[421] So by saying if I didn't take aspirin, I would have a headache, you're thereby saying that aspirin is the thing that removed the headache.

[422] Yes, but you have to have another important information.

[423] I took the aspirin, and my headache is gone.

[424] It's very important information.

[425] Now I'm reasoning backward, and I ask, was it the aspirin?

[426] Yeah, by considering what would have happened if everything else is the same, but I didn't take aspirin.

[427] That's right.

[428] So, you know, that things took place, you know.

[429] Joe killed Schmoe, and Schmoe would be alive had Joe not used his gun.

[430] Okay, so that is the counterfactual.

[431] You have a conflict here, a clash, between the observed fact, that he did shoot, okay?

[432] And the hypothetical predicate, which says, had he not shot. You have a clash, a logical clash; they cannot exist together.

[433] That's a counterfactual.

[434] And that is the source of our explanation, of the idea of responsibility, regret, and free will.

[435] Yes, it certainly seems, that's the highest level of reasoning, right?

[436] Yes, and physicists do it all the time.

[437] Who does it all the time?

[438] Physicists.

[439] Physicist.

[440] In every equation of physics, let's say you have Hooke's law, and you put one kilogram on the spring, and the spring is one meter long, and you say, had this weight been two kilograms, the spring would have been twice as long.

[441] It's no problem for physicists to say that.

[442] Except that the mathematics is only in the form of an equation, equating the weight, a proportionality constant, and the length of the spring.

[443] So you don't have the asymmetry in the equation of physics, although every physicist thinks counterfactually.

[444] Ask high school kids, had the weight been three kilograms, what would be the length of the spring?

[445] They can answer it immediately, because they do the counterfactual processing in their mind and then they put it into equation, algebraic equation, and they solve it.
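
A minimal worked version of that counterfactual, taking the law as simple proportionality between weight and length as the conversation implies (ignoring the spring's natural length), might look like this:

$$
L = c\,W; \quad \text{observed: } W = 1\,\text{kg},\; L = 1\,\text{m} \;\Rightarrow\; c = 1\,\text{m/kg}; \quad \text{counterfactual: } W \leftarrow 3\,\text{kg} \;\Rightarrow\; L = 3\,\text{m}.
$$

The observation pins down the background constant $c$, and the counterfactual re-evaluates the same equation with the weight forced to a new value.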

[446] But a robot cannot do that.

[447] How do you make a robot learn these relationships?

[448] Why you would learn?

[449] Suppose you tell him, can you do it?

[450] Before you go learning, you have to ask yourself, suppose I give him all the information.

[451] Can the robot perform a task that I ask him to perform?

[452] Can he reason and say, no, it wasn't the aspirin.

[453] It was the good news you received on the phone.

[454] Right, because, well, unless the robot had a model, a causal model of the world.

[455] Right, right.

[456] I'm sorry I have to linger on this.

[457] But now we have to linger and we have to say, how do we do it?

[458] How do we build it?

[459] Yes.

[460] How do we build a causal model without a team of human experts running around?

[461] Why don't you go to learning right away?

[462] You're too much involved with learning.

[463] Because I like babies.

[464] Babies learn fast.

[465] I'm trying to figure out how do they do it.

[466] Good.

[467] That's another question.

[468] How do the babies come out with the counterfactual model of the world?

[469] And babies do that.

[470] They know how to play in the crib.

[471] They know which balls hits another one.

[472] And so they learn it by playful manipulation of the world.

[473] Yes.

[474] The simple world involve only toys and balls and chimes.

[475] But if you think about it, it's a complex world.

[476] We take for granted.

[477] Yes.

[478] How complicated.

[479] And the kids do it by playful manipulation plus parents' guidance, peer wisdom, and hearsay.

[480] They meet each other.

[481] And they say, you shouldn't have taken

[482] my toy.

[483] Right.

[484] And these multiple sources of information, they're able to integrate.

[485] Yeah.

[486] So the challenge is about how to integrate, how to form these causal relationship from different sources of data.

[487] Correct.

[488] So how much information is it to play, how much causal information is required to be able to play in the crib with different objects?

[489] I don't know.

[490] I haven't experimented with the crib.

[491] Okay, not a crib.

[492] I don't know.

[493] It's a very interesting.

[494] Manipulating physical objects on this very, opening the pages of a book, all the tasks, the physical manipulation task.

[495] Do you have a sense?

[496] Because my sense is the world is extremely complicated.

[497] It's extremely complicated.

[498] I agree, and I don't know how to organize it because I've been spoiled by easy problems, such as cancer and death.

[499] First, we have to start trying to.

[500] No, but it's easy.

[501] There is in the sense that you have only 20 variables.

[502] And they are just variables, are not mechanics.

[503] It's easy.

[504] You just put them on the graph, and they speak to you.

[505] And you're providing a methodology for letting them speak.

[506] I'm working only in the abstract.

[507] The abstract of knowledge in, knowledge out, data in between.

[508] Now, can we take a leap to trying to learn in this very, when it's not 20 variables, but 20 million variables, trying to learn causation in this world.

[509] Not learn, but somehow construct models.

[510] I mean, it seems like you would only have to be able to learn because constructing it manually would be too difficult.

[511] Do you have ideas of, I think it's a matter of combining simple models for many, many sources, for many, many disciplines, and many metaphors.

[512] Metaphors are the basis of human intelligence.

[513] How do you think of about a metaphor in terms of its use in human intelligence?

[514] Metaphor is an expert system.

[515] An expert... it's mapping a problem with which you are not familiar to a problem with which you are familiar.

[516] Like, I give you a good example.

[517] The Greek believed that the sky is an opaque shell.

[518] It's not really infinite space.

[519] It's an opaque shell and the stars are holes poked in the shells through which you see the eternal light.

[520] That was a metaphor.

[521] Why?

[522] Because they understand how you poke holes in the shells.

[523] They were not familiar with infinite space.

[524] And we are walking on a shell of a turtle.

[525] And if you get too close to the edge, you're going to fall down to Hades or whatever.

[526] That's a metaphor.

[527] It's not true.

[528] But these kinds of metaphors enabled Eratosthenes to measure the radius of the Earth.

[529] Because he said, come on, if we are walking on a turtle shell, then a ray of light coming to this place will come at a different angle than it comes to that place.

[530] I know the distance, I'll measure the two angles, and then I have the radius of the shell of the turtle.

[531] And he did, and he found his measurements were very close to the measurements we have today, the, what, 6,700 kilometers radius of the Earth.
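
The usual textbook reconstruction of that measurement goes like this (the specific numbers are the commonly quoted ones, not figures from the conversation): the Sun's rays hit Syene vertically and Alexandria at about 7.2 degrees off vertical, with roughly 800 km between the two cities.

$$
\frac{\Delta\theta}{360^\circ} = \frac{d}{C} \;\Rightarrow\; C \approx 800\,\text{km}\times\frac{360}{7.2} = 40{,}000\,\text{km}, \qquad R = \frac{C}{2\pi} \approx 6{,}370\,\text{km}.
$$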

[532] That's something that would not occur to a Babylonian astronomer, even though the Babylonians were the machine learning people of the time.

[533] They fit curves, and they could predict the eclipse of the moon much more accurately than the Greeks, because they fit curves.

[534] That's a different metaphor.

[535] Something that you're familiar with.

[536] A game, a turtle shield.

[537] What does it mean?

[538] You are familiar.

[539] Familiar means that answers to certain questions are explicit.

[540] You don't have to derive them.

[541] And they were made explicit because somewhere in the past, you've constructed a model of that.

[542] You're familiar with, so the child is familiar with billiard balls.

[543] Yes.

[544] So the child could predict that if you let loose of one ball, the other one will bounce off.

[545] you obtain that by familiarity.

[546] Familiarity is answering questions and you store the answer explicitly you don't have to derive them.

[547] So this is the idea of a metaphor.

[548] All our life, all our intelligence is built around metaphors, mapping from the unfamiliar to the familiar, but the marriage between the two is a tough thing which we haven't yet being able to algorithmize.

[549] So you think of that process of using metaphor to leap from one place to another, we can call it reasoning?

[550] Is it a kind of reasoning?

[551] It is reasoning by metaphor, metaphorical reason.

[552] Do you think of that as learning?

[553] So learning is a popular terminology today in a narrow sense.

[554] It is, it is definitely a form.

[555] So you may not.

[556] Okay, all right.

[557] It's one of the most important forms of learning.

[558] Taking something which theoretically is derivable and store it in accessible format.

[559] I'll give you an example, chess.

[560] Okay?

[561] Finding winning starting move in chess is hard.

[562] But there is an answer.

[563] Either there is a winning move for a white or there isn't, or there is a draw.

[564] So the answer to that is available through the rule of the games.

[565] But we don't know the answer.

[566] So what does the chess master have that we don't have?

[567] He has taught explicitly an evaluation of certain complex pattern of the board.

[568] We don't have it, ordinary people like me, I don't know about you, I'm not a chess master.

[569] So for me, I have to derive things that for him is explicit.

[570] He has seen it before, or he has seen the pattern before, or similar pattern, you see, metaphor.

[571] And he generalized and said, don't move, it's a dangerous move.

[572] It's just that not in the game of chess, but in the game of billiard balls, where humans are able to initially derive very effectively and then reasoned by metaphor very effectively.

[573] And we make it look so easy that it makes one wonder how hard is it to build it in a machine?

[574] So in your sense, how far away are we to be able to construct?

[575] I don't know.

[576] I'm not a futurist.

[577] All I can tell you is that we are making tremendous progress in the causal reasoning domain.

[578] something that I even dare to call a revolution, the causal revolution, because what we have achieved in the past three decades is something that dwarfs everything that was derived in the entire history.

[579] So there's an excitement about current machine learning methodologies.

[580] and there's really important, good work you're doing in causal inference. Where does the future... where do these worlds collide, and what does that look like? First, they're going to work without collision. It's going to work in harmony. Harmony. The human is going to jump-start the exercise by providing qualitative, non-committing models of how the universe works, how, in reality, the domain of discourse works.

[581] The machine is going to take over from that point and derive whatever the calculus says can be derived, namely quantitative answers to our questions.

[582] These are complex questions.

[583] I'll give you some examples of complex questions that will boggle your mind if you think about them.

[584] You take results of studies in diverse populations under diverse conditions, and you infer the cause-effect relationship in a new population which doesn't even resemble any of the ones studied.

[585] And you do that by the do-calculus. You do that by generalizing from one study to another.

[586] See, what's common between them?

[587] What is different?

[588] Let's ignore the differences and pull out the commonality.

[589] And you do it over maybe 100 hospitals around the world.

[590] From that, you can get really mileage from big data.

[591] It's not only you have many samples, you have many sources of data.

[592] So that's a really powerful thing, I think, especially for medical applications, I mean, cure cancer, right?

[593] That's how from data you can cure cancer.

[594] So we're talking about causation, which is the temporal, temporal relationships between things.

[595] Not only temporal.

[596] It was both structural and temporal.

[597] Temporal precedence by itself cannot replace causation.

[598] Is temporal precedence the arrow of time in physics?

[599] It's important, necessary.

[600] It's important.

[601] But it's not sufficient, yes.

[602] Is it?

[603] Yes.

[604] I never seen cause propagate backward.

[605] But if we use the word cause, there's relationships that are timeless, I suppose; that's still forward in the arrow of time.

[606] But are there relationships, logical relationships that fit into the structure?

[607] Sure, the whole do calculus, logical relationship.

[608] That doesn't require a temporal.

[609] It has just a condition that you're not traveling back in time.

[610] Yes, correct.

[611] So it's really a generalization of a powerful generalization of what?

[612] Of Boolean logic.

[613] Yeah, Boolean logic.

[614] Yes.

[615] that is sort of simply put and allows us to, you know, reason about the order of events, the source, the...

[616] Not about between, we're not deriving the order of events.

[617] We are given cause -effects relationship, okay?

[618] They ought to be obeying the temporal precedence relationship.

[619] We are given it.

[620] And now that we ask questions about other causal relationship, that could be derived from the initial ones, but were not given to us explicitly.

[621] Like in the case of the firing squad I gave you in the first chapter, and I ask, what if Rifleman A declined to shoot?

[622] Would the prisoners still be dead?

[623] Declined to shoot, it means that he disobey order.

[624] and the rules of the game were that he is obedient and a marksman.

[625] That's how you start, that's the initial order.

[626] But now you ask question about breaking the rules.

[627] What if he decided not to pull the trigger?

[628] He just became a pacifist.

[629] And you and I can answer that.

[630] The other rifleman would have killed him, okay?

[631] I want a machine to do that.

[632] Is it so hard to ask a machine to do that?

[633] It's just a simple task.

[634] But we have to have a calculus for that.
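
A minimal sketch of how such a counterfactual can be computed with a structural causal model (my own toy code, following the firing-squad story Pearl describes from the first chapter of The Book of Why):

```python
# Structural causal model: court order -> captain's signal -> riflemen A and B
# fire -> prisoner dies. Counterfactual: had A declined, would the prisoner live?
def model(court_order, a_shoots_override=None):
    captain_signals = court_order
    a_shoots = captain_signals if a_shoots_override is None else a_shoots_override
    b_shoots = captain_signals
    dead = a_shoots or b_shoots
    return dead

# Abduction: we observed that the prisoner is dead, which in this simple model
# implies the court gave the order.
court_order = True

# Action + prediction: replay the same world, but force rifleman A to decline.
print(model(court_order))                           # True: what actually happened
print(model(court_order, a_shoots_override=False))  # True: B still fires, prisoner still dead
```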

[635] Yes.

[636] But the curiosity, the natural curiosity for me is that, yes, you're absolutely correct and important.

[637] And it's hard to believe that we haven't done this seriously, extensively already a long time ago.

[638] So this is really important work.

[639] But I also want to know, you know, maybe you can philosophize

[640] about how hard it is to learn.

[641] Okay, let's assume in learning.

[642] We want to learn it, okay?

[643] We want to learn.

[644] So what do we do?

[645] We put a learning machine that watches execution trials in many countries and many locations, okay?

[646] All the machine can learn is to see shot or not shot.

[647] Dead, not dead.

[648] Court issued an order or didn't, okay?

[649] That's the fact.

[650] From the fact, you don't know who listens to whom.

[651] You don't know that the condemned person listens to the bullet, that the bullets are listening to the captain, okay?

[652] All we hear is one command, two shots dead, okay?

[653] A triple of variable.

[654] Yes, no, yes, no. Okay, within that you can learn who listens to whom, and you can answer the question?

[655] No. Definitely no, but don't you think you can start proposing ideas for humans to review?

[656] You want machine to run it, right?

[657] You want a robot.

[658] So, robot is watching trials like that, 200 trials.

[659] And then he has to answer the question, what if Rifleman A refrain from shooting?

[660] Yeah.

[661] So how do that?

[662] That's exactly my point.

[663] That's it; looking at the facts doesn't give you the strings behind the facts.

[664] Absolutely.

[665] But do you think of machine, learning, as is currently defined, as only something that looks at the facts and tries to do.

[666] Right now, they only look at the facts, yeah.

[667] So is there a way to modify in your sense?

[668] Playful manipulation.

[669] Playful manipulation.

[670] Doing the interventionist kind of thing.

[671] Intervention.

[672] But it could be at random.

[673] For instance, the rifleman is sick at that day, or he just vomits or whatever.

[674] So he can observe this unexpected event.

[675] which introduces noise; the noise still has to be random to be able to relate it to a randomized experiment, and then you have observational studies from which to infer the strings behind the facts.

[676] It's doable to a certain extent.

[677] But now that we are expert in what you can do once you have a model, we can reason back and say, what kind of data you need, To build a model.

[678] Got it.

[679] So I know you're not a futurist, but are you excited?

[680] Have you, when you look back at your life, longed for the idea of creating a human-level intelligence system?

[681] Yeah, I'm driven by that.

[682] All my life, I'm driven just by one thing.

[683] But I go slowly.

[684] I go from what I know to the next step, incrementally.

[685] So without imagining what the end goal looks like, do you imagine what the end goal is going to be? A machine that can answer sophisticated questions: counterfactuals, regret, compassion, responsibility, and free will. So what is a good test? Is a Turing test a reasonable test? Free will doesn't exist yet, there's no... How would you test free will?

[686] So far, we know only one thing.

[687] If robots can communicate with reward and punishment among themselves, hitting each other on the wrist and say you shouldn't have done that.

[688] Playing better soccer because they can do that.

[689] What do you mean?

[690] Because they can do that.

[691] Because they can communicate among themselves.

[692] Because of the communication they can do the soccer.

[693] Because they communicate, like us, reward and punishment, yes, you didn't pass the ball the right time, and so forth, therefore you're going to sit on the bench for the next two.

[694] If they start communicating like that, the question is, will they play a better soccer?

[695] As opposed to what?

[696] As opposed to what they do now?

[697] Without this ability to reason about reward and punishment, responsibility.

[698] And?

[699] So far, I can only think about communication.

[700] Communication is not necessarily natural language, but just communication.

[701] Just communication.

[702] And that's important to have a quick and effective means of communicating knowledge.

[703] If the coach tells you you should have passed a ball, pink, he conveys so much knowledge to you as opposed to what?

[704] Go down and change your software.

[705] That's the alternative.

[706] but the coach doesn't know your software.

[707] So how can the coach tell you you should have passed a ball?

[708] But our language is very effective.

[709] You should have passed the ball.

[710] You know your software, you tweak the right module, and next time you don't do it.

[711] Now that's for playing soccer, where the rules are well-defined.

[712] No, no, no, not well-defined.

[713] When you should pass the ball...

[714] is not well-defined.

[715] No, it's very soft, very noisy.

[716] Yes, you have to do it under pressure.

[717] It's art. But in terms of aligning values between computers and humans, do you think this cause-and-effect type of thinking is important to align the values, morals, and ethics under which the machines make decisions? Is cause and effect where the two can come together?

[718] Cause effect is a necessary component.

[719] to build an ethical machine, because the machine has to empathize, to understand what's good for you, to build a model of you as a recipient, which should be very much... what is compassion? You imagine that you suffer pain as much as me. As much as me. I do have already a model of myself, right? So it's very easy for me to map you to mine. I don't have to rebuild a model. It's much easier to say, oh, you're like me. Okay, therefore I would not hate you.

[720] And the machine has to try to fake to be human, essentially so you can imagine that you're like me, right?

[721] Moreover, who is me?

[722] That's the fact that's consciousness.

[723] You have a model of yourself.

[724] Where do you get this model?

[725] You look at yourself as if you are a part of the environment.

[726] If you build a model of yourself versus the environment, then you can say, I need to have a model of myself.

[727] I have abilities, I have desires, and so forth.

[728] I have a blueprint of my software.

[729] Not in full detail, because I cannot get around the halting problem.

[730] But I have a blueprint.

[731] So on that level of a blueprint, I can modify things.

[732] I can look at myself in the mirror and say, hmm, if I change this, tweak this model, I'm going to perform differently.

[733] That is what we mean by free will.

[734] And consciousness.

[735] What do you think is consciousness?

[736] Is it simply self -awareness, including yourself into the model of the world?

[737] That's right.

[738] Some people tell me, no, this is only part of consciousness, and then they start telling me what they really mean by consciousness, and I lose them.

[739] For me, consciousness is having a blueprint of your software.

[740] Do you have concerns about the future of AI, all the different trajectories of all of our research?

[741] Yes.

[742] Where's your hope for where the movement heads, and where are your concerns?

[743] I'm concerned because I know we are building a new species that has a capability of exceeding us, exceeding our capabilities, and can breed itself and take over the world, absolutely.

[744] It's a new species that is uncontrolled.

[745] We don't know the degree to which we control it.

[746] We don't even understand what it means to be able to control this new species.

[747] So I'm concerned.

[748] I don't have anything to add to that because it's such a gray area, that's unknown.

[749] It's never happened in history.

[750] The only time it happened in history was evolution with a human being.

[751] It wasn't very successful, was it?

[752] Some people said it was a great success.

[753] For us, it was, but a few people along the way, or a few creatures along the way would not agree.

[754] So it's just because it's such a gray area, there's nothing else to say.

[755] We have a sample of one.

[756] Sample of one.

[757] That's us.

[758] But some people would look at you and say, yeah, but we were looking to you to help us make sure that the sample two works out okay.

[759] Actually, we have more than a sample of one.

[760] We have theories, and that's good.

[761] We don't need to be statisticians.

[762] So a sample of one doesn't mean poverty of knowledge.

[763] It's not.

[764] sample of one plus theory, conjectural theory, of what could happen.

[765] That we do have.

[766] But I really feel helpless in contributing to this argument because I know so little and my imagination is limited and I know how much I don't know and I, but I'm concerned.

[767] You were born and raised in Israel.

[768] Born and raised in Israel, yes.

[769] And later served in Israel military, defense forces.

[770] In the Israel Defense Force.

[771] Yeah.

[772] What did you learn from that experience?

[773] For this experience?

[774] There's a kibbutz in there as well.

[775] Yes, because I was in the Nahal, which is a combination

[776] of agricultural work and military service.

[777] I was really idealist.

[778] I wanted to be a member of the Kibbutz throughout my life and to live a communal life.

[779] And so I prepared myself for that.

[780] Slowly, slowly I wanted a greater challenge.

[781] So that's a far world.

[782] away, both.

[783] But I learned from that, what I can't either.

[784] It was a miracle.

[785] It was a miracle that I served in the 1950s.

[786] I don't know how we survived.

[787] The country was under austerity.

[788] It tripled its population from 600,000 to 1.8 million by the time I finished college.

[789] No one went hungry.

[790] Austerity, yes. When you wanted to make an omelette in a restaurant, you had to bring your own egg. And they imprisoned people for bringing food from the farming areas, from the villages, to the city. But no one went hungry. And I always add to it that higher education did not suffer any budget cut. They still invested in me, in my wife, in our generation, to get the best education that we could.

[791] So I'm really grateful for the opportunity.

[792] And I'm trying to pay back now.

[793] It's a miracle that we survived the war of 1948.

[794] They were so close to a second genocide.

[795] It was all planned.

[796] But we survive it by a miracle, and then the second miracle that not many people talk about, the next phase, how no one went hungry and the country managed to triple its population.

[797] You know what it means to triple?

[798] Imagine the United States going from, what, 350 million to a billion?

[799] Yeah, yeah.

[800] Unbelievable.

[801] This is a really tense part of the world.

[802] It's a complicated part of the world.

[803] Israel and all around.

[804] Yes.

[805] Religion is at the core of that complexity.

[806] One of the components.

[807] Religion is a strong motivating force for many, many people in the Middle East.

[808] In your view, looking back, is religion good for society?

[809] That's a good question for robotics, you know.

[810] There's echoes of that question.

[811] Should we equip a robot with religious beliefs?

[812] Suppose we find out, or we agree, that religion is good for you, to keep you in line.

[813] Should we give the robot the metaphor of God?

[814] As a matter of fact, the robot will get it without us also.

[815] Why?

[816] A robot will reason by metaphor.

[817] And what is the most primitive metaphor a child grows with?

[818] A mother's smile, a father's teaching, the father image and the mother image. That's God.

[819] So whether you want it or not, the robot will, well, assuming that the robot is gonna have a mother and father, it may only have a programmer which doesn't supply warmth and discipline.

[820] Well, discipline it does.

[821] So the robot will have this, a model of the trainer, and everything that happens in the world, cosmology and so on, is going to be mapped onto the programmer.

[822] It's God.

[823] Yeah.

[824] The thing that represents the origin of everything for that robot.

[825] That's the most primitive relationship.

[826] So it's going to arrive there by metaphor.

[827] And so the question is whether, overall, that metaphor has served us well as humans.

[828] I really don't know.

[829] I think it did.

[830] But as long as you keep in mind, it's only a metaphor.

[831] So if you think we can, can we talk about your son?

[832] Yes, yes.

[833] Can you tell his story?

[834] A story?

[835] Daniel?

[836] The story is known.

[837] He was abducted in Pakistan by an al-Qaeda-driven sect,

[838] and under various pretences.

[839] I don't even pay attention to what the pretences were.

[840] Originally, they wanted to have the United States deliver some promised airplanes.

[841] It was all made up, and all these demands were bogus.

[842] I don't know really, but eventually.

[843] Eventually, he was executed in front of a camera.

[844] At the core of that is hate and intolerance.

[845] At the core, yes, absolutely, yes.

[846] We don't really appreciate the depth of the hate in which billions of people are educated.

[847] We don't understand it.

[848] I listened recently to what they teach you in Mogadishu.

[849] When the water stopped in the tap, we knew exactly who did it, the Jews.

[850] The Jews.

[851] We didn't know how, but we knew who did it.

[852] We don't appreciate what it means to us.

[853] The depth is unbelievable.

[854] Do you think all of us are capable of evil and the education, the indoctrination is really what creates evil?

[855] Absolutely, we are capable of evil.

[856] If you are indoctrinated sufficiently long and in depth, we are capable of ISIS, capable of Nazism.

[857] Yes, we are.

[858] But the question is whether we, after we have gone through some Western education, and we learn that everything is really relative.

[859] That there is no absolute God.

[860] It's only a belief in God.

[861] Whether we are capable now of being transformed under certain circumstances to become brutal.

[862] Yeah.

[863] That is a question. I'm worried about it, because some people say, yes, given the right circumstances, given a bad economic crisis, you are capable of doing it too. And that's what... I want to believe that I'm not capable.

Seven years after Daniel's death, you wrote an article in the Wall Street Journal titled Daniel Pearl and the Normalization of Evil. What was your message back then, and how did it change today, over the years?

I lost... What was the message?

[864] The message was that we are not treating terrorism as a taboo.

[865] We are treating it as a bargaining device that is accepted.

[866] People have grievance and they go and bomb restaurants.

[867] It's normal.

[868] Look, you're not even surprised when I tell you that.

[869] 20 years ago, you say, what?

[870] For a grievance, you go and blow up a restaurant?

[871] Today it's becoming normalized.

[872] The banalization of evil.

[873] And we have created that to ourselves by normalizing, by making it part of political life.

[874] It's a political debate.

[875] Every terrorist yesterday becomes a freedom fighter today, and tomorrow he becomes a terrorist again. It's switchable.

Right. And so we should call out evil when there's evil.

If we don't want to be part of it, to become it. Yeah, if we want to separate good from evil, that's one of the first things. What was it in the Garden of Eden? You remember, the first thing that God tells him was, hey, you want some knowledge?

[876] Here's the tree of good and evil.

[877] So this evil touched your life personally.

[878] Does your heart have anger, sadness, or is it hope?

[879] I see some beautiful people coming from Pakistan.

[880] I see beautiful people everywhere.

[881] But I see horrible propagation of evil in this country too. It shows you how populistic slogans can catch the mind of the best intellectuals.

Today is Father's Day.

I didn't know that.

Yeah. What's a fond memory you have of Daniel?

Oh, many good memories. Immense. He was my mentor. He had a sense of balance that I didn't have.

Yeah.

He saw the beauty in every person. He was not as emotional as I am, more looking at things in perspective. He really liked every person. He really grew up with the idea that a foreigner is a reason for curiosity, not for fear.

[882] There was one time we went to Berkeley, and a homeless man came out from some dark alley and said, hey man, can you spare a dime?

[883] I retreated back, you know, two feet back.

[884] And then he just hugged him and said, here's a dime.

[885] Enjoy yourself.

[886] Maybe you want some...

money to take a bus or whatever. Where did he get it?

[888] Not from me.

Do you have advice for young minds today dreaming about creating, as you have dreamt, creating intelligent systems?

[889] What is the best way to arrive at new breakthrough ideas and carry them through the fire of criticism and past conventional ideas?

[890] Ask your questions.

[891] Really, your questions are never dumb.

[892] Your questions are never dumb. Solve them your own way, and don't take no for an answer.

[893] If they are really dumb, you will find out quickly, by trial and error, that they're not leading anyplace. But follow them and try to understand things your way.

[894] That is my advice.

[895] I don't know if it's going to help anyone.

[896] No, that's brilliant.

[897] But there is a lot of inertia in science, in academia.

[898] It is slowing down science.

[899] Yeah, those two words, your way.

[900] That's a powerful thing.

[901] It's against inertia, potentially.

[902] Against the flow.

[903] Against your professor.

[904] Against your professor.

[905] I wrote The Book of Why in order to democratize

[906] common sense, in order to instill a rebellious spirit in students, so they wouldn't wait until the professor gets things right.

So you wrote the manifesto of the rebellion against the professor.

Against the professor, yes.

So, looking back at your life of research, what ideas do you hope ripple through the next many decades? What do you hope your legacy will be?

[907] I already have a tombstone carved.

[908] Oh, boy.

[909] The fundamental law of counterfactuals.

[910] That's what...

[911] It's a simple equation.

[912] It's a counterfactual defined in terms of a model surgery.
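A minimal statement of that law, in the notation Pearl uses in his technical writing (spelled out here for reference, not quoted from the conversation):

    % The fundamental ("first") law of counterfactuals:
    % Y_x(u) is the value Y would have taken in situation u had X been x.
    % It is computed by a model surgery: in the model M, replace the equation
    % that determines X with the constant X = x, obtaining the submodel M_x,
    % and solve for Y.
    Y_x(u) = Y_{M_x}(u)

In words: a counterfactual question is answered by querying the surgically modified model, which is why, as he says next, the rest can be derived from it by mathematical means.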

[913] That's it, because everything follows from that. If you get that, all the rest... I can die in peace, and my students can derive all my knowledge by mathematical means. The rest follows.

Yeah. Thank you so much for talking today. I really appreciate it.

Thank you for being so attentive and instigating.

We did it.

We did it. The coffee helped.

[914] Thanks for listening to this conversation with Judea Pearl.

[915] And thank you to our presenting sponsor, Cash App.

[916] Download it, use code Lex Podcast.

[917] You'll get $10, and $10 will go to first.

[918] A STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future.

[919] If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter.

[920] And now, let me leave you with some words of wisdom from Judea Pearl.

[921] You cannot answer a question that you cannot ask, and you cannot ask a question that you have no words for.

[922] Thank you for listening, and hope to see you next time.