Lex Fridman Podcast XX
[0] The following is a conversation with Ayanna Howard.
[1] She's a roboticist, professor at Georgia Tech, and director of the Human Automation Systems Lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments.
[2] Like me, in her work, she cares a lot about both robots and human beings, and so I really enjoyed this conversation.
[3] This is the Artificial Intelligence Podcast.
[4] If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.
[5] I recently started doing ads at the end of the introduction.
[6] I'll do one or two minutes after introducing the episode and never any ads in the middle that can break the flow of the conversation.
[7] I hope that works for you and doesn't hurt the listening experience.
[9] This show is presented by Cash App, the number one finance app in the app store.
[10] I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds.
[11] Cash App also has a new investing feature.
[12] You can buy fractions of a stock, say $1 worth, no matter what the stock price is.
[13] Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC.
[14] I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and Lego competitions.
[15] They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating at Charity Navigator, which means that donated money is used to maximum effectiveness.
[16] When you get Cash App from the App Store or Google Play and use code Lex Podcast, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world.
[17] And now, here's my conversation with Ayanna Howard.
[18] What or who is the most amazing robot you've ever met, or perhaps had the biggest impact on your career?
[19] I haven't met her, but I grew up with her, but of course, Rosie.
[20] So, and I think it's because also...
[21] Who's Rosie?
[22] Rosie from the Jetsons.
[23] She is all things to all people, right?
[24] Think about it.
[25] Like anything you wanted.
[26] It was like magic.
[27] It happened.
[28] So people not only anthropomorphize, but project whatever they wish for the robot to be onto.
[29] Onto Rosie.
[30] But also, I mean, think about it.
[31] She was socially engaging.
[32] She every so often had an attitude, right?
[33] She kept us honest.
[34] She would push back sometimes when, you know, George was doing some weird stuff.
[36] But she cared about people, especially the kids.
[37] She was like the perfect robot.
[38] And you've said that people don't want their robots to be perfect.
[39] Can you elaborate on that?
[40] What do you think that is?
[41] Just like you said, Rosie pushed back a little bit every once in a while.
[42] Yeah.
[43] So I think it's that, so if you think about robotics in general, we want them because they enhance our quality of life.
[44] And usually that's linked to something that's functional, right?
[46] Even if you think of self-driving cars.
[47] Why is there a fascination?
[48] Because people really do hate to drive.
[49] Like there's the, like, Saturday driving where I can just speed, but then there's the, I have to go to work every day and I'm in traffic for an hour.
[50] I mean, people really hate that.
[51] And so robots are designed to basically enhance our ability to increase our quality of life.
[52] And so the perfection comes from this aspect of interaction.
[53] If I think about how we drive, if we drove perfectly, we would never get anywhere, right?
[54] So think about how many times you had to run past the light because you see the car behind you is about to crash into you or that little kid kind of runs into the street and so you have to cross on the other side because there's no cars, right?
[55] Like if you think about it, we are not perfect drivers.
[56] Some of it is because it's our world.
[57] And so if you have a robot that is perfect in that sense of the word, they wouldn't really be able to function with us.
[58] Can you linger a little bit on the word perfection?
[59] So from the robotics perspective, what does that word mean and how is sort of the optimal behavior as you're describing different than what we think is perfection?
[60] Yeah, so perfection, if you think about it from the more theoretical point of view, is really tied to accuracy, right?
[61] So if I have a function, can I complete it at 100% accuracy with zero errors?
[62] And so that's kind of, if you think about perfection in the sense of the word.
[63] And in a self-driving car realm, do you think from a robotics perspective, we kind of think that perfection means following the rules perfectly, sort of defining, staying in the lane, changing lanes, when there's a green light you go, when there's a red light you stop, and be able to perfectly see all the entities in the scene.
[64] That's the limit of what we think of as perfection.
[65] And I think that's where the problem comes, is that when people think about perfection for robotics, the ones that are the most successful are the ones that are, quote, unquote, perfect, like I said, Rosie is perfect.
[66] But she actually wasn't perfect in terms of accuracy, but she was perfect in terms of how she interacted and how she adapted.
[67] And I think that's some of the disconnect, is that we really want perfection with respect to its ability to adapt to us.
[68] We don't really want perfection with respect to 100% accuracy with respect to the rules that we just made up anyway, right?
[69] And so I think there's this disconnect sometimes between what we really want and what happens.
[70] And we see this all the time, like in my research, right?
[71] Like the optimal, quote unquote, optimal interactions are when the robot is adapting based on the person, not 100% following what's optimal based on the rules.
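To make that distinction concrete, here is a minimal sketch, entirely hypothetical and not from the conversation, of a robot setting that adapts toward an individual's observed preference instead of staying fixed at the rule-book optimum; the names, numbers, and feedback convention are invented:

```python
# Hypothetical illustration: a robot speed setting that adapts to a
# person's observed feedback rather than staying at the spec-"perfect"
# value. All numbers are made up for this sketch.

RULE_OPTIMAL_SPEED = 1.0  # m/s, the fixed "100% accurate" setting (assumed)


def adapt_speed(current: float, feedback: float, rate: float = 0.2) -> float:
    """Nudge the speed toward what this person signals they prefer.

    feedback: +1 if the user speeds the robot up, -1 if they slow it
    down (an assumed convention for this sketch).
    """
    return current + rate * feedback


speed = RULE_OPTIMAL_SPEED
for fb in [-1, -1, 0, -1]:  # this particular user keeps slowing the robot down
    speed = adapt_speed(speed, fb)
print(f"adapted: {speed:.2f} m/s vs rule-optimal: {RULE_OPTIMAL_SPEED:.2f} m/s")
```

The point of the sketch is just the design choice: the "optimal" value is per-person and learned from interaction, not a constant read off the rules.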
[72] Just to linger on autonomous vehicles for a second, just your thoughts, maybe off the top of your head: how hard is that problem, do you think, based on what we just talked about?
[73] You know, there's a lot of folks in the automotive industry, they're very confident, from Elon Musk to Waymo to all these companies. How hard is it to solve that last piece, the gap between the perfection and the human definition of how you actually function in this world?

Yeah, so this is a moving target. So I remember when all the big companies started to heavily invest in this, and there was a number of even roboticists, as well as, you know, folks who were putting in the VCs and corporations, Elon Musk being one of them, that said, you know, self-driving cars on the road with people, you know, within five years.
[74] That was a little while ago.
[75] And now people are saying five years, ten years, twenty years.
[76] Some are saying never, right?
[77] I think if you look at some of the things that are being successful, it's these basically fixed environments where you still have some anomalies, right?
[78] You still have people walking.
[79] You still have stores, but you don't have other drivers, right?
[80] Like other human drivers are, it's a dedicated space for the cars.
[81] Because if you think about robotics in general, where has it always been successful?
[82] I mean, you can say manufacturing, like way back in the day, right?
[83] It was a fixed environment.
[84] Humans were not part of the equation.
[85] We're a lot better than that.
[86] But like when we can carve out scenarios that are closer to that space, then I think that's where we are.
[87] So a closed campus where you don't have self-driving cars and maybe some protection so that the students don't jet in front just because they want to see what happens.
[88] Like having a little bit, I think that's where we're going to see the most success in the near future.
[89] And be slow moving.
[90] Right.
[91] Right, not, you know, 55, 60, 70 miles an hour, but the speed of a golf cart, right?
[92] So that said, the most successful robots in the automotive industry operating today in the hands of real people are ones that are traveling over 55 miles an hour in unconstrained environments, which is Tesla vehicles, so the Tesla Autopilot.
[93] So I just, I would love to hear sort of your just thoughts of, uh, two things. So one, I don't know if you've gotten to see, you've heard about something called Smart Summon, where Tesla's system, the Autopilot system, where the car drives, zero occupancy, no driver, in the parking lot, slowly sort of tries to navigate the parking lot to find itself to you. And there's some incredible amounts of videos and just hilarity that happens as it awkwardly tries to navigate this environment. But it's a beautiful nonverbal communication between machine and human that I think is, it's like, it's some of the work that you do in this kind of interesting human-robot interaction space.
[94] So what are your thoughts in general about it?
[95] So I do have that feature.
[96] Do you drive a Tesla?
[97] I do.
[98] Mainly because I'm a gadget freak, right?
[99] So I say it's a gadget that happens to have some wheels.
[100] And yeah, I've seen some of the videos.
[101] But what's your experience?
[102] Like, I mean, you're a human robot interaction roboticist.
[103] You're a legit sort of expert in the field.
[104] So what does it feel like for a machine to come to you?
[105] It's one of these very fascinating things, but also I am hyper, hyper alert, right?
[106] Like, I'm hyper alert.
[107] Like my butt, my thumb is like, oh, okay, I'm ready to take over.
[108] Even when I'm in my car, I'm doing things like automated backing into, so there's like a feature where you can do this automated backing into a parking space, or bring the car out of your garage, or even, you know, pseudo autopilot on the freeway, right?
[109] I am hypersensitive.
[110] I can feel like as I'm navigating, I'm like, yeah, that's an error right there.
[111] Like I'm very aware of it, but I'm also fascinated by it.
[112] And it does get better.
[113] Like I look and see it's learning from all of these people who are cutting it on.
[115] Like, every time I cut it on, it's getting better, right?
[116] And so I think that's what's amazing about it, is that...
[117] This nice dance of you're still hypervigilant.
[118] So you're still not trusting it at all.
[119] And yet you're using it.
[120] On the highway, if I were to, like, what, as a roboticist, we'll talk about trust a little bit.
[121] How do you explain that?
[122] You still use it.
[123] Is it the gadget freak part?
[124] like where you just enjoy exploring technology?
[125] Or is that the right actually balance between robotics and humans is where you use it but don't trust it?
[126] And somehow there's this dance that ultimately is a positive.
[127] Yeah.
[128] So I think I'm, I just don't necessarily trust technology, but I'm an early adopter.
[129] Right.
[130] So when it first comes out, I will use everything, but I will be very, very cautious of how I use it.
[131] Do you read about it, or do you explore it, or just try it?
[132] Do you, to put it crudely, do you read the manual or do you learn through exploration?
[133] I'm an explorer.
[134] If I have to read the manual, then, you know, I do design.
[135] Then it's a bad user interface.
[136] It's a failure.
[137] Elon Musk is very confident that you can kind of take it from where it is now to full autonomy.
[138] So from this human-robot interaction where you don't really trust.
[139] And then you try it, and then you catch it when it fails, to, it's going to incrementally improve itself into full autonomy, where you don't need to participate.
[140] What's your sense of that trajectory?
[141] Is it feasible?
[142] So the promise there is by the end of next year, by the end of 2020, is the current promise.
[143] What's your sense about that journey that Tesla's on?
[144] So there's kind of three things going on, though, I think, in terms of will people go, like as a user, as an adopter, will you trust going to that point?
[145] I think so, right?
[146] Like, there are some users.
[147] And it's because what happens is when you're hypersensitive at the beginning and then the technology tends to work, your apprehension slowly goes away.
[148] And as people, we tend to swing to the other extreme, right?
[149] Because it's like, oh, I was like hyper, hyper fearful or hypersensitive, and it was awesome.
[150] And we just tend to swing.
[151] That's just human nature.
[152] And so you will have, I mean, I... That's a scary notion, because most people are now extremely untrusting of Autopilot.
[153] They use it, but they don't trust it.
[154] And it's a scary notion that there's a certain point where you allow yourself to look at the smartphone for like 20 seconds.
[155] And then there'll be this phase shift where it'll be like 20 seconds, 30 seconds, one minute, two minutes.
[156] It's a scary proposition.
[157] But that's people, right?
[158] That's just, that's humans.
[159] I mean, I think of even our use of, I mean, just everything on the Internet, right?
[160] Like, think about how reliant we are on certain apps and certain engines, right?
[161] 20 years ago, people would have been like, oh, yeah, that's stupid.
[162] Like, that makes no sense.
[163] Like, of course, that's false.
[164] Like, now it's just like, oh, of course, I've been using it.
[165] It's been correct all this time.
[166] Of course, aliens, I didn't think they existed, but now it says they do.
[167] Obviously.
[168] 100%.
[169] Earth is flat.
[170] So, okay, but you said three things.
[171] So one is the human.
[172] Okay, so one is the human.
[173] And I think there will be a group of individuals that will swing.
[174] Right?
[175] Teenagers.
[176] Teenagers.
[177] I mean, it'll be teenagers.
[178] It'll be adults.
[179] There's actually an age demographic that's optimal for technology adoption.
[180] And you can actually find them and they're actually pretty easy to find.
[181] Just based on their habits, based on, so someone like me who, if I wasn't a roboticist, would probably be the optimal kind of person, right?
[182] Early adopter, okay with technology, very comfortable, and not hypersensitive, right?
[184] I'm just hypersensitive because I designed this stuff.
[185] So there is a target demographic that will swing.
[186] The other one, though, is you still have these humans that are on the road.
[187] That one is a harder, harder thing to do.
[188] And as long as we have people that are on the same streets, that's going to be the big issue.
[189] And it's just because you can't possibly map some of the, some of the behaviors of human drivers, right?
[190] Like, as an example, when you're next to that car that has that big sticker called student driver, right?
[191] Like, you are like, oh, either I am going to, like, go around.
[192] Like, we are, we know that that person is just going to make mistakes that make no sense, right?
[193] How do you map that information?
[194] Or if I'm in a car and I look over and I see, you know, two fairly young -looking individuals and there's no student driver bumper and I see them chit -chatting to each other.
[195] I'm like, oh, that's an issue, right?
[196] So how do you get that kind of information and that experience into basically an autopilot?
[197] Yeah.
[198] And there's millions of cases like that where we take little hints to establish context.
[199] I mean, you said kind of beautifully poetic human things, but there's probably subtle things about the environment, about it being maybe time for commuters to start going home from work, and therefore you can make some kind of judgment about the group behavior of pedestrians, blah, blah, blah, so on, so on.
[200] Or even cities, right?
[201] Like, if you're in Boston, how people cross the street, like lights are not an issue, versus other places where people will actually wait for the crosswalk.
[202] Seattle or somewhere peaceful.
[203] But what I've also seen, just even in Boston, that intersection to intersection is different.
[204] So every intersection has a personality of its own.
[205] So certain neighborhoods of Boston are different.
[206] So we kind of, and based on different timing of day, at night, it's all, there's a dynamic to human behavior that we kind of figure out ourselves.
[207] We're not able to introspect and figure it out, but somehow our brain learns it.
[208] We do.
[209] And so you're saying, is there a shortcut?
[210] Is there a shortcut, though, for a robot?
[211] Is there something that could be done, you think, that, you know, that's what we humans do.
[212] It's just like bird flight, right?
[213] That's the example they give for flight.
[214] Do you necessarily need to build a bird that flies or can you do an airplane?
[215] Is there a shortcut to make it easy?
[216] So I think the shortcut is, and I kind of, I talk about it as a fixed space, where, so imagine that there's a neighborhood that's a new smart city or a new neighborhood that says, you know what, we are going to design this new city based on supporting self -driving cars.
[217] And then doing things, knowing that there's anomalies, knowing that people are like this.
[218] And designing it based on that assumption that, like, we're going to have this, that would be an example of a shortcut.
[219] So you still have people, but you do very specific things to try to minimize the noise a little bit as an example.
[220] And the people themselves become accepting of the notion that there's autonomous cars, right?
[221] Right.
[222] Like they move into, so right now, you will have a self-selection bias, right?
[223] Like individuals will move into this neighborhood knowing like this is part of like the real estate pitch.
[224] Right.
[225] And so I think that's a way to do a shortcut.
[226] One, it allows you to deploy.
[227] It allows you to collect, then, data with these variances and anomalies, because people are still people.
[228] But it's a safer space and is more of an accepting space.
[229] I.e., when something in that space might happen, because things do, because you already have the self-selection, like, people would be, I think, a little more forgiving
[230] than other places.
[231] And you said three things.
[232] Do we cover all of them?
[233] The third is legal law.
[234] Oh, no. Liability, which I don't really want to touch, but it's still, it's still of concern.
[235] And the mishmash with, like, with policy as well, sort of government, all that, that whole.
[236] That big ball of mess.
[237] Yeah.
[238] Got you.
[239] So that's, so we're out of time now.
[240] Do you think from a robotics perspective, you know, if you're kind of honest about what cars do, they kind of threaten each other's life all the time.
[241] So cars are, I mean, in order to navigate intersections, there's an assertiveness, there's a risk-taking, and if you were to reduce it to an objective function, there's a probability of murder in that function, meaning you killing another human being, and you're using that. First of all, it has to be low enough to be acceptable to you on an ethical level as an individual human being, but it has to be high enough for people to respect you, to not sort of take advantage of you completely and jaywalk and so on.
[242] I mean, I don't think there's a right answer here, but how do we solve that?
[243] How do we solve that from a robotics perspective when danger and human life is at stake?
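As a rough, hypothetical sketch of the objective function being gestured at here, not anyone's deployed system: a planner that trades trip time against estimated collision risk, with a hard ceiling on acceptable risk so assertiveness can never buy time at an unacceptable probability of harm. All names, weights, and probabilities are invented for illustration:

```python
# Hypothetical sketch of the tradeoff described above: score candidate
# maneuvers by time cost vs. estimated collision risk, with a hard cap
# on acceptable risk. Every number here is made up.

MAX_ACCEPTABLE_RISK = 1e-6  # per-maneuver collision probability ceiling (assumed)
W_TIME, W_RISK = 1.0, 1e7   # assumed weights in the cost function


def maneuver_cost(expected_delay_s: float, collision_prob: float) -> float:
    """Lower is better; a maneuver over the ethical ceiling is infeasible."""
    if collision_prob > MAX_ACCEPTABLE_RISK:
        return float("inf")  # never chosen, no matter how much time it saves
    return W_TIME * expected_delay_s + W_RISK * collision_prob


candidates = {
    "wait for a full gap": (12.0, 1e-9),
    "assert into traffic": (3.0, 5e-7),
    "aggressive cut-in":   (1.0, 1e-4),  # over the ceiling: rejected outright
}
best = min(candidates, key=lambda k: maneuver_cost(*candidates[k]))
print(best)  # -> "assert into traffic": some assertiveness, bounded risk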
[244] Yeah, as they say, cars don't kill people, people kill people.
[245] Right.
[246] So I think...
[247] And now robotic algorithms would be killing people.
[248] Right.
[249] So it will be robotic algorithms that are, no, it will be, robotic algorithms don't kill people.
[250] Developers of robotic algorithms kill people, right?
[251] I mean, one of the things is people are still in the loop.
[252] And at least in the near and midterm, I think people will still be in the loop.
[253] At some point, even if it's a developer.
[254] Like we're not necessarily at the stage where, you know, robots are programming autonomous robots with different behaviors quite yet.
[255] It's a scary notion, sorry to interrupt, that a developer has some responsibility in the death of a human being.
[256] That's a heavy burden.
[257] I mean, I think that's why the whole aspect of ethics in our community is so, so important, right?
[258] Like, because it's true.
[259] If you think about it, you can basically say, I'm not going to work on weaponized AI, right?
[260] Like, people can say, that's not what I'm going to do.
[262] But yet, you are programming algorithms that might be used in healthcare, algorithms that might decide whether this person should get this medication or not, and they don't, and they die.
[263] Okay, so that is your responsibility, right?
[264] And if you're not conscious and aware that you do have that power when you're coding and things like that, I think that's, that's just not a good thing.
[265] Like, we need to think about this responsibility as we program robots and computing devices much more than we are.
[266] Yeah, so it's not an option to not think about ethics.
[267] I think it's a majority, I would say, of computer science.
[268] It's kind of a hot topic now.
[269] Thinking about bias and so on, but it's, and we'll talk about it, but usually it's kind of, it's like a very particular group of people that work on that.
[270] And then people who do like robotics are like, well, I don't have to think about that.
[271] There's other smart people thinking about it. It seems that everybody has to think about it. It's not, you can't escape the ethics, whether there's bias or just every aspect of ethics that has to do with human beings. Everyone.

So think about, I'm going to age myself, but I remember, uh, when we didn't have, like, testers, right? And so what did you do as a developer? You had to test your own code, right? Like, you had to go through all the cases and figure it out, and, you know, and then they realized that, you know, like, we probably need to have testing, because we're not getting all the things.
[272] And so from there, what happens is like most developers, they do, you know, a little bit of testing, but it's usually like, okay, did my compiler bug out?
[273] Let me look at the warnings.
[274] Okay, is that acceptable or not, right?
[275] Like, that's how you typically think about it as a developer, and you just assume that it's going to go to another process and they're going to test it out.
[276] But I think we need to go back to those early days when, you know, you're a developer, you're developing.
[277] There should be like this, you know, okay, let me look at the ethical outcomes of this, because there isn't a second, like, testing process with ethical testers, right?
[278] It's you.
[279] We did it back in the early coding days.
[280] I think that's where we are with respect to ethics.
[281] Like, let's go back to what was good practices only because we were just developing the field.
[282] Yeah, and it's a really heavy burden.
[283] I've had to feel it recently in the last few months.
[284] But I think it's a good one to feel. Like, I've gotten a message, more than one, from people.
[285] You know, I've unfortunately gotten some attention recently.
[286] And I've gotten messages that say that I have blood on my hands because of working on semi-autonomous vehicles.
[287] So the idea that you have semi-autonomy means people would become, would lose vigilance and so on.
[288] That's actually us being humans, as we described.
[289] And because of that, because of this idea that we're creating automation, there will be people be hurt because of it.
[290] And I think that's a beautiful thing.
[291] I mean, it's, you know, there's many nights where I wasn't able to sleep because of this notion.
[292] You know, you really do think about people that might die because of this technology.
[293] Of course, you can then start rationalizing and saying, well, you know what, 40,000 people die in the United States every year, and we're ultimately trying to save lives.
[294] But the reality is your code you've written might kill somebody.
[295] And that's an important burden to carry with you as you design the code.
[296] I don't even think of it as a burden if we train this concept correctly from the beginning.
[297] And I use, and not to say that coding is like being a medical doctor, but think about it.
[298] Medical doctors, if they've been in situations where their patient didn't survive, right?
[299] Do they give up and go away?
[300] No. Every time they come in, they know that there might be a possibility that this patient might not survive.
[301] And so when they approach every decision, like, that's in the back of their head.
[302] And so why is it that we aren't teaching this? And those are tools, though, right?
[303] They are given some of the tools to address that so that they don't go crazy.
[304] But we don't give those tools, so that it does feel like a burden, versus something of, I have a great gift and I can do great, awesome good.
[305] but with it comes great responsibility.
[306] I mean, that's what we teach in terms of, if you think about the medical schools, right?
[307] Great gift, great responsibility.
[308] I think if we just change the messaging a little, great gift, being a developer, great responsibility, and this is how you combine those.
[309] But do you think, I mean, this is really interesting.
[310] It's outside.
[311] I actually have no friends who are sort of surgeons or doctors.
[312] I mean, what does it feel like to make a mistake in a surgery and somebody dies because of that?
[314] Like, is that something you could be taught in medical school, sort of how to be accepting of that risk?
[315] So, because I do a lot of work with healthcare robotics, I have not lost a patient, for example.
[316] The first one's always the hardest, right?
[317] But they really teach the value, right?
[318] So they teach responsibility, but they also teach the value.
[319] Like, you're saving 40,000.
[320] But in order to really feel good about that, when you come to a decision, you have to be able to say at the end, I did all that I could possibly do, right?
[321] Versus a, well, I just picked the first widget and, right?
[322] Like, so every decision is actually thought through.
[323] It's not a habit.
[324] It's not a, let me just take the best algorithm that my friend gave me, right?
[325] It's a, is this it?
[326] This is the best?
[327] Have I done my best to do good, right?
[328] And so...
[329] You're right, and I think burden is the wrong word.
[330] It's a gift, but you have to treat it extremely seriously.
[331] Correct.
[332] On a slightly related note, in a recent paper, The Ugly Truth About Ourselves and Our Robot Creations, you discuss, you highlight some biases that may affect the functioning of various robotic systems.
[333] Can you talk through, if you remember, examples of some?
[334] There's a lot of examples.
[335] What is bias, first of all?
[336] Yeah, so bias is this, and so bias, which is different than prejudice.
[337] So bias is that we all have these preconceived notions about particular everything from particular groups to habits to identity, right?
[338] So we have these predispositions.
[339] And so when we address a problem, we look at a problem and make a decision, those preconceived notions might affect our outputs or outcomes.
[340] So there the bias could be positive and negative.
[341] And then is prejudice the negative kind of bias?
[342] Prejudice is the negative, right?
[343] So prejudice is that not only are you aware of your bias, but you then take it and have a negative outcome, even though you are aware.
[344] And there could be gray areas too.
[345] There's always gray areas.
[346] That's the challenging aspect of all ethical questions.
[347] So I always like, so there's a funny one.
[348] And in fact, I think it might be in the paper, because I think I talk about self-driving cars.
[349] But think about this.
[350] For teenagers, right, typically, insurance companies charge quite a bit of money if you have a teenage driver.
[351] So you could say that's an age bias, right?
[352] But no one will claim, I mean, parents will be grumpy, but no one really says that that's not fair.
[353] That's interesting.
[354] We don't, that's right.
[355] That's right.
[356] It's everybody in human factors and safety research, almost, I mean, it's quite ruthlessly critical of teenagers.
[357] And we don't question, is that okay?
[358] Is that okay to be agist in this kind of way?
[359] It is, and it is age, right?
[360] It's definitely age.
[361] There's no question about it.
[362] And so this is a gray area, right?
[363] Because you know that, you know, teenagers are more likely to be in accidents.
[364] And so there's actually some data to it.
[365] But then if you take that same example and you say, well, I'm going to make the insurance higher for an area of Boston because there's a lot of accidents.
[366] And then they find out that that's correlated with socioeconomics.
[367] Well, then it becomes a problem, right?
[368] Like that is not acceptable.
[369] But yet the teenager one, which is against age, is okay, right?
[370] And the way we figure that out as a society by having conversations, by having discourse.
[371] I mean, throughout history, the definition of what is ethical and not has changed and hopefully always for the better.
[372] Correct, correct.
[373] So in terms of bias or prejudice in robotics, in algorithms, what examples do you sometimes think about?
[374] So I think about quite a bit the medical domain, just because historically, right, the health care domain has had these biases, typically based on gender and ethnicity, primarily, a little on age, but not so much.
[375] You know, historically, if you think about the FDA and drug trials, it's, you know, harder to find women that, you know, aren't childbearing,
[376] and so you may not test drugs on them at the same level.
[377] Right.
[378] So there's these things.
[379] And so if you think about robotics, right, something as simple as, I'd like to design an exoskeleton, right?
[380] What should the material be?
[381] What should the weight be?
[382] What should the form factor be?
[383] Are you, who are you going to design it around?
[384] I will say that in the U.S., you know, women's average height and weight is slightly different than guys'.
[385] So who are you going to choose?
[386] Like if you're not thinking about it from the beginning as, you know, okay, when I design this and I look at the algorithms and I design the control system and the forces and the torques, if you're not thinking about, well, you have different types of body structure, you're going to design to, you know, what you're used to.
[387] Oh, this fits in my, all the folks in my lab, right?
[388] So thinking about it from the very beginning is important.
[389] What about sort of algorithms that train on data kind of thing?
[390] Sadly, our society already has a lot of negative bias.
[391] And so if we collect a lot of data, even if it's in a balanced way, it's going to contain the same bias that a society contains.
[392] And so, yeah, are there things there that bother you?
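A toy illustration of the point in this question, with entirely fabricated numbers: even a dataset balanced by group size still carries whatever bias is baked into the historical labels, and anything fit to it inherits that bias:

```python
# Hypothetical toy example: the dataset is "balanced" (equal counts per
# group), yet the historical labels are not, so a model fit to it
# reproduces the inherited bias. Data is fabricated for illustration.

# (group, historically_approved) pairs: 4 samples per group, but group B
# was approved far less often in the past.
data = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]


def approval_rate(group: str) -> float:
    """Historical positive-label rate for one group."""
    labels = [label for g, label in data if g == group]
    return sum(labels) / len(labels)


# A per-group majority-vote "model" just memorizes the historical rates:
print(approval_rate("A"))  # 0.75
print(approval_rate("B"))  # 0.25 -- balanced counts, inherited bias
```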
[393] Yeah, so you actually said something.
[394] You had said how we have biases, but hopefully we learn from and we become better, right?
[395] And so that's where we are now, right?
[396] So the data that we're collecting is historic.
[397] It's, so it's based on these things when we knew it was bad to discriminate, but that's the data we have.
[398] And we're trying to fix it now, but we're fixing it based on the data that was used in the first place.
[399] Fix it in post.
[400] Right.
[401] And so the decisions, and you can look at everything from the whole aspect of predictive policing, criminal recidivism, there was a recent paper on healthcare algorithms, which had a kind of sensational title.
[402] I'm not pro sensationalism in titles.
[403] But again, you read it, right?
[404] So it makes you read it, but I'm like, really?
[405] Like, ugh, you could have.
[406] What's the topic of the sensationalism?
[407] I mean, what's underneath it, what's, if you could sort of educate me on what kind of bias creeps into the healthcare space?
[408] Yeah, so.
[409] I mean, you already kind of mentioned.
[410] Yeah, so this one was, the headline was racist AI algorithms.
[411] Okay, like, okay, that's totally a clickbait title.
[412] And so you looked at it, and so there was data that these researchers had collected.
[413] I believe, I want to say it was either Science or Nature, it had just been published.
[414] But they didn't have a sensational title.
[415] It was like the media.
[416] And so they had looked at demographics, I believe, between black and white women, right?
[417] And they showed that there was a discrepancy in the outcomes, right?
[418] And so, and it was tied to ethnicity, tied to race.
[419] The piece that the researchers did actually went through the whole analysis, but of course.
[420] I mean, the journalists with AI are problematic across the board, I would say.
[421] Right.
[422] And so this is a problem, right?
[423] And so there's this thing about, oh, AI, it has all these problems.
[424] We're doing it on historical data.
[425] And the outcomes aren't even based on gender or ethnicity or age.
[426] But I'm always saying it's like, yes, we need to do better.
[427] Right.
[428] We need to do better.
[429] It is our duty to do better.
[430] But the worst AI is still better than us.
[431] Like you take the best of us and we're still worse than the worst AI, at least in terms of these things.
[433] And that's actually not discussed, right?
[434] And so I think, and that's why the sensational title, right?
[435] And so it's like, so then you can have individuals go like, oh, we don't need to use this AI.
[436] I'm like, oh, no, no, no, no. I want the AI instead of the doctors that provided that data because it's still better than that.
[437] Yes.
[438] Right?
[439] I think that's really important to linger on.
[440] The idea that this AI is racist, it's like, well, compared to what?
[442] I think we set, unfortunately, way too high of a bar for AI algorithms.
[443] And in the ethical space, where perfect is, I would argue, probably impossible.
[444] Then if we set the bar of perfection, essentially, it has to be perfectly fair, whatever that means, it means we're setting it up for failure.
[445] But that's really important to say what you just said, it's just, well, it's still better.
[446] It is.
[447] And one of the things I think that we don't get enough credit for, just in terms of as developers, is that you can now poke at it, right?
[448] So it's harder to say, you know, is this hospital, is this city doing something, right?
[449] Until someone brings in a civil case, right?
[450] Well, with AI, it can process through all this data and say, hey, yes, there's an issue here.
[451] But here it is.
[452] We've identified it, and then the next step is to fix it.
[455] I mean, that's a nice feedback loop versus like waiting for someone to sue someone else before it's fixed, right?
[456] And so I think that power we need to capitalize on a little bit more, right?
[457] Instead of having the sensational titles, have the, okay, this is a problem and this is how we're fixing it.
[458] And people are putting money to fix it because we can make it better.
[459] I look at, like, facial recognition, how Joy [Buolamwini], she basically called out a couple of companies and said, hey.
[460] And most of them were like, oh, embarrassment.
[461] And the next time it had been fixed, right?
[462] It had been fixed better.
[463] Right.
[464] And then it was like, oh, here's some more issues.
[465] And I think that conversation then moves that needle to having much more fair and unbiased and ethical aspects.
[466] As long as both sides, the developers are willing to say, okay, I hear you.
[467] Yes, we are going to improve.
[468] and you have other developers are like, you know, hey, AI, it's wrong, but I love it, right?
[469] Yes.
[470] So speaking of this really nice notion that AI is maybe flawed but better than humans, so it just made me think of it.
[471] One example of flawed humans is our political system.
[472] Do you think, or you said judicial as well, do you have a hope for AI sort of being elected president, or running our Congress, or being able to be a powerful representative of the people?
[473] So I mentioned and I truly believe that this whole world of AI is in partnerships with people.
[474] And so what does that mean?
[475] I don't believe or maybe I just don't, I don't believe that we should have an AI for president.
[476] But I do believe that a president should use AI as an advisor, right?
[477] Like, if you think about it, every president has a cabinet of individuals that have different expertise that they should listen to, right?
[478] Like, that's kind of what we do.
[479] And you put smart people with smart expertise around certain issues, and you listen.
[480] I don't see why AI can't function as one of those smart individuals giving input.
[481] So maybe there's an AI on healthcare, maybe there's an AI on education, and, right, like all these things that a human is processing, right?
[482] Because at the end of the day, there's people that are human that are going to be at the end of the decision.
[483] And I don't think as a world, as a culture, as a society, that we would totally believe, and this is us, like this is some fallacy about us, but we need to see that leader, that person as human.
[484] And most people don't realize that, like, leaders have a whole lot of advice, right?
[485] Like, when they say something, it's not that they woke up, well, usually.
[486] They don't wake up in the morning and be like, I have a brilliant idea, right?
[487] It's usually a, okay, let me listen.
[488] I have a brilliant idea, but let me get a little bit of feedback on this, like, okay.
[489] And then it's a, yeah, that was an awesome idea, or it's like, yeah, let me go back.
[490] We already talked through a bunch of them, but are there some possible solutions to the bias that's present in our algorithms, beyond what we just talked about?
[491] So I think there's two paths.
[492] One is to figure out how to systematically do the feedback and corrections.
[493] So right now it's ad hoc, right?
[494] A researcher identifies some outcomes that are not, that don't seem to be fair, right?
[495] They publish it, they write about it, and either the developer or the companies that have adopted the algorithms may try to fix it, right?
[496] And so it's really ad hoc and it's not systematic.
[497] There's just, it's kind of like, I'm a researcher, that seems like an interesting problem.
[498] Which means that there's a whole lot out there that's not being looked at, right?
[499] Because it's kind of researcher driven.
[500] And I don't necessarily have a solution, but that process, I think, could be done a little bit better.
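One hedged reading of "systematic," sketched as a hypothetical release gate rather than any company's actual process: recompute a simple disparity metric on every release and block automatically, instead of waiting for a researcher to happen upon the problem. The metric choice and threshold here are assumptions:

```python
# Hypothetical sketch of a systematic (vs. ad hoc) fairness check: compute
# a simple demographic-parity gap on each model release and fail loudly if
# it exceeds a threshold. Metric and threshold are assumed for this sketch.

from typing import List, Tuple

DISPARITY_THRESHOLD = 0.10  # assumed maximum acceptable gap


def parity_gap(outcomes: List[Tuple[str, int]]) -> float:
    """Max difference in positive-outcome rate between any two groups."""
    rates: dict = {}
    for group, outcome in outcomes:
        rates.setdefault(group, []).append(outcome)
    per_group = [sum(v) / len(v) for v in rates.values()]
    return max(per_group) - min(per_group)


# Fabricated release outcomes: group A approved 2/3, group B approved 1/3.
release_outcomes = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = parity_gap(release_outcomes)
if gap > DISPARITY_THRESHOLD:
    raise SystemExit(f"release blocked: parity gap {gap:.2f} exceeds threshold")
```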
[501] One way is I'm going to poke a little bit at some of the corporations, right?
[502] Like maybe the corporations, when they think about a product, they should, instead of, in addition to hiring these, you know, bug... they give these...
[503] Oh, yeah, yeah, yeah.
[504] Like awards when you find a bug.
[505] Yeah.
[506] Yeah.
[507] Security bug.
[508] Yeah.
[509] You know, let's put it like, we will give the, whatever the award is that we give for the people who find these security holes, find an ethics hole, right?
[510] Like, find an unfairness hole.
[511] And we will pay you X for each one you find.
[512] I mean, why can't they do that?
[513] Yeah.
[514] One, it's a win-win.
[515] They show that they're concerned about it, that this is important, and they don't have to necessarily dedicate their own, like, internal resources.
[516] And it also means that everyone who has, like, their own bias lens, like, I'm interested in age.
[517] And so I'll find the ones based on age, and I'm interested in gender.
[518] Right?
[519] Which means that you get, like, all of these different perspectives.
[520] But you think of it in a data -driven way.
[521] So, like, sort of, if we look at a company like Twitter, it gets, it's under a lot of fire for discriminating against certain political beliefs.
[522] Correct.
[523] And sort of, there's a lot of people, this is the sad thing, because I know how hard the problem is, and I know the Twitter folks are working really hard at it, even Facebook that everyone seems to hate are working really hard at this.
[524] You know, the kind of evidence that people bring is basically anecdotal evidence.
[525] Well, me or my friend, all we said is X, and for that we got banned.
[526] And that's kind of a discussion of saying, well, look, that's usually, first of all, the whole thing is taken out of context.
[527] So they present sort of anecdotal evidence.
[528] And how are you supposed to, as a company, in a healthy way, have a discourse about what is and isn't ethical, how do we make algorithms ethical, when people are just blowing everything up, like, they're outraged about a particular anecdotal piece of evidence that's very difficult to sort of contextualize in the big, data-driven way? Do you have a hope for companies like Twitter and Facebook?

So I think there's a couple of things going on, right? First off, remember this whole aspect of we are becoming reliant on technology.
[529] We're also becoming reliant on a lot of these, the apps and the resources that are provided.
[530] So some of it is kind of anger.
[531] Like, I need you, right?
[532] And you're not working for me, right?
[533] But I think, and so some of it, and I wish that there was a little bit of change of rethinking.
[534] So some of it is like, oh, we'll fix it in house.
[535] No. That's like, okay, I'm a fox and I'm going to watch these hens because I think it's a problem that foxes eat hens.
[536] No, right?
[537] Like, be good citizens and say, look, we have a problem.
[538] And we are willing to open ourselves up for others to come in and look at it and not try to fix it in house.
[539] Because if you fix it in house, there's conflict of interest.
[540] If I find something, I'm probably going to want to fix it.
[541] And hopefully, the media won't pick it up, right?
[542] And that then causes distrust, because someone inside is going to be mad at you and go out and talk about how, yeah, they canned the resume survey because of it, right?
[543] Like, be best people.
[544] Like, just say, look, we have this issue.
[545] Community, help us fix it.
[546] And we will give you, like, you know, the bug finder fee if you do.
[547] Do you have a hope that the community, us as a human civilization on the whole, is good
[548] and can be trusted to guide the future of our civilization in a positive direction?
[549] I think so.
[550] So I'm an optimist, right?
[551] And, you know, there were some dark times in history, always.
[552] I think now we're in one of those dark times.
[553] I truly do.
[554] In which aspect?
[555] The polarization.
[556] And it's not just the U.S., right?
[557] So if it was just the U.S., I'd be like, yes, it's a U.S. thing, but we're seeing it, like, worldwide, this polarization.
[558] And so I worry about that, but I do fundamentally believe that at the end of the day, people are good, right?
[559] And why do I say that?
[560] Because anytime there's a scenario where people are in danger, and I will use, so in Atlanta, we had Snowmageddon, and people can laugh about that, people at the time, so the city closed for, you know, a little snow, but it was ice, and the city closed down.
[561] But you had people opening up their homes and saying, hey, you have nowhere to go, come to my house, right? Hotels were just saying, like, sleep on the floor. Like, places, like, you know, the grocery stores were like, hey, here's food. There was no, like, oh, how much are you going to pay me? It was like this, such a community. And like, people who didn't know each other, strangers, were just like, can I give you a ride home? And that was a point, I was like, you know what, like, that reveals that the deeper thing is, there's a compassionate love that we all have within us.
[562] It's just that when all of that is taken care of and we get bored, we love drama.
[563] And that's, I think, almost like the division is a sign of the times being good, is that it's just entertaining on some unpleasant mammalian level to watch, to disagree with others.
[564] And Twitter and Facebook are actually taking advantage of that in a sense, because it brings you back to the platform, and they're advertiser -driven, so they make a lot of money.
[565] So you go back and you click.
[566] Love doesn't sell quite as well in terms of advertisement.
[567] It doesn't.
[568] So you started your career at NASA Jet Propulsion Laboratory.
[569] But before I ask you a few questions there, have you happened to have ever seen Space Odyssey, 2001: A Space Odyssey?
[570] Yes.
[571] Okay.
[572] Do you think HAL 9000...
[573] So we're talking about ethics.
[574] Do you think HAL did the right thing by taking the priority of the mission over the lives of the astronauts?
[575] Do you think HAL is good or evil?
[576] Easy questions.
[577] Yeah.
[578] HAL was misguided.
[579] You're one of the people that would be in charge of an algorithm like HAL.
[580] So how would you do better?
[581] If you think about what happened, there was no fail-safe, right?
[582] So perfection, right?
[583] Like, what is that?
[584] I'm going to make something that I think is perfect, but if my assumptions are wrong, it'll be perfect based on the wrong assumptions, right?
[585] That's something that you don't know until you deploy, and then you're like, oh, yeah, I messed up.
[586] But what that means is that when we design software, such as in Space Odyssey, when we put things out, that there has to be a fail-safe.
[587] There has to be the ability that once it's out there, you know, we can grade it as an F and it fails and it doesn't continue, right?
[588] There's some way that it can be brought in and removed, and that's that aspect.
[589] Because that's what happened with Hal.
[590] It was like assumptions were wrong.
[591] It was perfectly correct based on those assumptions.
[592] And there was no way to change it, change the assumptions at all.
[593] And the fallback would be to a human.
[594] So you ultimately think, like, human should be, you know, it's not turtles or AI all the way down.
[595] At some point, there's a human that actually makes a change.
[596] I still think that, and again, because I do human -robot interaction, I still think the human needs to be part of the equation at some point.
[597] So, just looking back, what are some fascinating things in the robotics space that NASA was working on at the time? Or just in general, what have you gotten to play with, and what are your memories from working at NASA?
[598] Yeah, so one of my first memories was they were working on a surgical robot system that could do eye surgery, right?
[599] And this was back in, oh, my gosh, it must have been, oh, maybe 92, 93, 94.
[600] So it's almost like a remote operation of...
[601] Yeah, it was.
[602] It was remote operation, and in fact, you can even find some old tech reports on it.
[603] So think of it, you know, like now we have Da Vinci, right?
[604] Like think of it, but these were like the late 90s, right?
[605] And I remember going into the lab one day and I was like, what's that, right?
[606] And of course, it wasn't pretty, right?
[607] Because of the technology, but it was, like, functional, and you had this individual that could use a version of haptics to actually do this surgery, and they had this mockup of a human face and, like, the eyeballs, and you can see this little drill.
[608] And I was like, oh, that is so cool.
[609] That one I vividly remember because it was so outside of my like possible thoughts of what could be done.
[610] It's the kind of precision.
[611] And I mean, what's the most amazing part of a thing like that?
[612] I think it was the precision.
[613] It was kind of the first time that I was physically seeing this robot machine, human interface, right, versus, because in manufacturing, you saw those kind of big robots, right?
[615] But this was like, oh, this is in a person.
[616] There's a person and a robot, like in the same space.
[617] So meeting them in person.
[618] Like, for me, it was a magical moment that I can't, it was life-transforming, that I recently met Spot Mini from Boston Dynamics.
[619] Oh, see.
[620] I don't know why, but on the human-robot interaction, for some reason, I realized how easy it is to anthropomorphize. And it was, I don't know, it was almost like falling in love, this feeling of meeting. And I've obviously seen these robots a lot in video and so on, but meeting in person, just having that one-on-one time, it's different.

So have you had a robot like that in your life that made you maybe fall in love with robotics, sort of like meeting in person?

I mean, I loved robotics.
[621] From the beginning.
[622] Yeah, so I was a 12-year-old.
[623] Like, I'm going to be a roboticist.
[624] Actually, I called it cybernetics.
[625] But so my motivation was Bionic Woman.
[626] I don't know if you know that.
[627] And so, I mean, that was like a seminal moment.
[628] But I didn't meet, like, that was TV, right?
[629] Like, it wasn't like I was in the same space and I met.
[630] I was like, oh, my gosh, you're like real.
[631] Just lingering on Bionic Woman, which, by the way, because I read that about you, I watched bits of it and it's just so, no offense, terrible.
[632] It's cheesy.
[633] It's cheesy.
[634] I've seen a couple of reruns lately.
[635] But of course, at the time, it probably captured the imagination.
[636] Especially when you're younger, it just captured you.
[637] But which aspect?
[638] Did you think of it?
[639] You mentioned cybernetics.
[640] Did you think of it as robotics?
[641] Or did you think of it as almost constructing artificial beings?
[643] Like, is it the intelligent part that, that captured your fascination, or was it the whole thing, like even just the limbs and just the...
[644] So for me, it would have, in another world, I probably would have been more of a biomedical engineer because what fascinated me was the bionic, was the parts, like the bionic parts, the limbs, those aspects of it.
[645] Are you especially drawn to humanoid or human-like robots?
[646] I would say human-like, not humanoid, right?
[647] And when I say human-like, I think it's this aspect of that interaction, whether it's social and it's like a dog, right?
[648] Like, that's human-like because it understands us, it interacts with us at that very social level, to, you know, humanoids are part of that, but only if they interact with us as if we are human.
[649] But just to linger on NASA for a little bit, what do you think maybe if you have other memories, but also what do you think is the future of robots in space?
[650] We mentioned HAL, but there's incredible robots that NASA's working on in general, thinking about as we venture out, as human civilization ventures out into space.
[651] What do you think the future of robots is there?
[652] Yeah, so I mean, there's a near term.
[653] For example, they just announced the, you know, rover that's going to the moon, which, you know, that's kind of exciting, but that's, like, near-term.
[654] You know, my favorite, favorite, favorite series is Star Trek, right?
[655] You know, I really hope, and even Star Trek, like, if I calculate the years, I wouldn't be alive.
[656] But I would really, really love to be in that world.
[657] Like, even if it's just at the beginning, like, you know, like voyage, like adventure one.
[658] So basically living in space.
[659] Yeah.
[660] With what robots, what do robots...?
[661] With Data.
[662] What role?
[663] Data would have to be, even though that wasn't, you know, that was like later.
[664] So Data is a robot that has human-like qualities.
[665] Right.
[666] Without the emotion.
[667] Yeah.
[668] You don't like emotion.
[669] Well, so Data with the emotion chip was kind of a mess.
[670] Right, it took a while for him to adapt. But, and so why was that an issue? The issue is that emotions make us irrational agents. That's the problem. And yet he could think through things, even if it was based on an emotional scenario, right, based on pros and cons. But as soon as you made him emotional, one of the metrics he used for evaluation was his own emotions, not people around him, right?
[671] And so we do that as children, right?
[672] So we're very egocentric when we're young.
[673] We are very egocentric.
[674] And so isn't that just an early version of the emotion chip then?
[675] I haven't watched much Star Trek.
[676] Except I have also met adults.
[677] Right?
[678] And so that is a developmental process and I'm sure there's a bunch of psychologists that can go through like you can have a 60 year old adult who has the emotional maturity of a 10 year old right and so there's various phases that people should go through in order to evolve and sometimes you don't so how much psychology do you think a topic that's rarely mentioned in robotics but how much the psychology come to play when you're talking about HRI human robot interaction when you have to have robots that actually interact with humans.
[679] Tons.
[680] So we, like, my group, as well as I, read a lot in the cognitive science literature, as well as the psychology literature, because they understand a lot about human-human relations and developmental milestones and things like that.
[681] And so we tend to look to see what's been done out there.
[682] Sometimes what we'll do is we'll try to match that to see, is that human-human relationship the same as human-robot? Sometimes it is, and sometimes it's different. And then when it's different, we have to, we try to figure out, okay, why is it different in this scenario but it's the same in the other scenario, right? And so we try to do that quite a bit.

Would you say, if we're looking at the future of human-robot interaction, would you say the psychology piece is the hardest? Like, it's a funny notion for you, as, I don't know if you consider, yeah, I mean, one way to ask it, do you consider yourself a roboticist or a psychologist?

Oh, I consider myself a roboticist that plays the act of a psychologist.

But if you were to look at yourself sort of, you know, 20, 30 years from now, do you see yourself more and more wearing the psychology hat? Sort of another way to put it: are the hard problems in human-robot interaction fundamentally psychology, or is it still robotics, the perception, manipulation, planning, and all that kind of stuff?
[683] It's actually neither.
[684] The hardest part is the adaptation and the interaction.
[685] So it's the interface, it's the learning.
[686] And so if I think of, like, I've become much more of a roboticist slash AI person than when I, like originally, again, I was about the bionics.
[687] I was an electrical engineer.
[688] I was control theory, right?
[689] Like, and then I started realizing that my algorithms needed like human data, right?
[690] And so that I was like, okay, what is this human thing?
[691] Right?
[692] How do I incorporate human data?
[693] And then I realized that human perception had, like, there was a lot in terms of how we perceived the world.
[694] And so trying to figure out how do I model human perception for my, and so I became an HRI person, a human-robot interaction person, from being a control theory person, and realizing that humans actually offered quite a bit.
[695] And then when you do that, you become more of an artificial intelligence, AI.
[696] And so I see myself evolving more in this AI world under the lens of robotics, having hardware, interacting with people.
[697] So you're a world -class expert researcher in robotics.
And yet, others, you know, there's a few, it's a small but fierce community of people, but most of them don't take the journey into the H of HRI, into the human.
So why did you brave the interaction with humans?
[700] It seems like a really hard problem.
[701] It's a hard problem and it's very risky as an academic.
[702] Yes.
[703] And I knew that when I started down that journey, that it was very risky.
[704] as an academic in this world that was nuanced.
[705] It was just developing.
[706] We didn't even have a conference, right, at the time.
[707] Because it was the interesting problems.
[708] That was what drove me. It was the fact that I looked at what interests me in terms of the application space and the problems.
[709] And that pushed me into trying to figure out what people were and what humans were and how to adapt to them.
[710] If those problems weren't so interesting, I'd probably still be sending rovers to glaciers, right?
[711] But the problems were interesting.
[712] And the other thing was that they were hard, right?
[713] So I like having to go into a room and being like, I don't know what to do.
And then going back and saying, okay, I'm going to figure this out. I'm not driven when I go in and I'm like, oh, there are no surprises.
[715] Like, I don't find that satisfying.
[716] If that was the case, I'd go someplace and make a lot more money, right?
I think I stay in academia and choose to do this because I can go into a room and be like, that's hard.
Yeah, I think just from my perspective, maybe you can correct me on it, but if I just look at the field of AI broadly, it seems that human-robot interaction has one of the largest numbers of open problems, especially relative to how many people are willing to acknowledge that there are.
[720] Because most people are just afraid of the humans, so they don't even acknowledge how many open problems there are.
[721] But in terms of difficult problems to solve, exciting spaces, it seems to be incredible for that.
[722] It is.
[723] And it's exciting.
[724] You've mentioned trust before.
What role does trust play, from interacting with autopilot to the medical context?
What role does trust play in the human-robot interaction space?
[727] So some of the things I study in this domain is not just trust, but it really is over -trust.
[728] How do you think about over -trust?
Like, what is, first of all, what is trust and what is over-trust?
[730] Basically, the way I look at it is, trust is not what you click on a survey.
[731] Trust is about your behavior.
If you interact with the technology based on the decisions or the actions of the technology, as if you trust that decision, then you're trusting, right?
And even in my group, we've done surveys that, you know, ask: do you trust robots?
[735] Of course not.
Would you follow this robot in a burning building?
[737] Of course not.
[738] Right?
And then you look at their actions and you're like, clearly your behavior does not match what you think, right? Or what you would like to think, right?
[741] And so I'm really concerned about the behavior because that's really at the end of the day when you're in the world, that's what will impact others around you.
[742] It's not whether before you went on to the street, you clicked on like, I don't trust self -driving cars.
You know, from an outsider perspective, it's always frustrating to me. Well, I read a lot, so I'm an insider in a certain philosophical sense.
It's frustrating to me how often trust is measured with surveys, and how people make claims out of any kind of finding while somebody is just clicking on an answer.
Because trust is, yeah, behavior. You said it beautifully.
[747] I mean, action, your own behavior is what trust is.
I mean, everything else is not even close.
[749] It's almost like an absurd comedic poetry that you weave around your actual behavior.
So some people can say they trust, you know, I trust my wife, husband, or not, whatever, but the actions are what speak volumes.
[751] Right.
[752] You bug their car.
[753] Yeah.
[754] You probably don't trust them.
[755] I trust them and I'm just making sure.
[756] No, no, that's, yeah.
[757] Like even if you think about cars, I think it's a beautiful case.
[758] I came here at some point, I'm sure, on either Uber or Lyft, right?
[759] I remember when it first came out, right?
I bet if they had had a survey: would you get in the car with a stranger and pay them?
[762] Yes.
[763] How many people do you think would have said, like, really?
[764] You know, wait, even worse, would you get in the car with a stranger at 1 a .m. in the morning to have them drop you home as a single female?
[765] Yeah.
[766] Like, how many people would say, uh, that's stupid?
[767] Yeah.
[768] And now look at where we are.
I mean, people put kids in them, right?
[770] Like, oh, yeah, my child has to go to school, and I, yeah, I'm going to put my kid in this car with a stranger.
[771] I mean, it's just fascinating how, like, what we think we think is not necessarily matching our behavior.
Yeah, and certainly with robots, with autonomous vehicles and all the kinds of robots you work with, that's, yeah, it's the way you answer it, especially if you've never interacted with that robot before.
[773] If you haven't had the experience, you being able to respond correctly on a survey is impossible.
[774] But what role does trust play in the interaction do you think?
[775] Like, is it good to trust a robot?
[776] What does over -trust mean?
Or is it good to be kind of how you feel about autopilot currently, which, from a roboticist's perspective, is very cautious?
Yeah, so this is still an open area of research.
[780] But basically what I would like in a perfect world is that people trust the technology when it's working 100%, and people will be hypersensitive and identify when it's not.
[781] But of course, we're not there.
[782] That's the ideal world.
But what we find is that people swing, right?
They tend to swing, which means that, and like we have some papers on first impressions and everything, right? If my first instance with technology, with robotics, is positive, it mitigates any risk, it correlates with, like, best outcomes. It means that I'm more likely to either not see it when it makes mistakes or faults, or I'm more likely to forgive it.
[786] And so this is a problem because technology is not 100 % accurate, right?
It's not 100% accurate, although it may be close to perfect.
[788] How do you get that first moment right, do you think?
[789] There's also an education about the capabilities and limitations of the system.
[790] Do you have a sense of how you educate people correctly in that first interaction?
[791] Again, this is an open -ended problem.
So one of the studies that actually has given me some hope, that I'm trying to figure out how to bring into robotics. There was a research study that showed, for medical AI systems giving information to radiologists: you know, here, you need to look at these areas on the X-ray.
[795] What they found was that when the system provided one choice, there was this aspect of either no trust or over -trust, right?
Like, I don't believe it at all, or, yes, yes, yes, yes, and they would miss things, right?
[797] Instead, when the system gave them multiple choices, like here are the three, even if it knew, like, you know, it had estimated that the top area you need to look at was, you know, some place on the x -ray.
If it gave, like, one plus others, the trust was maintained and the accuracy of the entire population increased.
[800] Right?
[801] So basically it was a, you're still trusting the system, but you're also putting in a little bit of like your human expertise, like your human decision processing into the equation.
[802] So it helps to mitigate that over -trust risk.
[803] Yeah, so there's a fascinating balance to have to strike.
[804] Yeah.
We haven't figured it out for robots. It's still open research.
This is an exciting open area of research.
[808] Exactly.
So what are some exciting applications of human-robot interaction?
[810] You started a company, maybe you can talk about the exciting efforts there, but in general, also what other space can robots interact with humans and help?
Yeah, so besides health care, because, you know, that's my biased lens.
My other biased lens is education.
[813] I think that, well, one, we definitely, we, in the U .S., you know, we're doing okay with teachers, but there's a lot of school districts that don't have enough teachers.
If you think about the teacher-student ratio, for at least public education in some districts, it's crazy.
[815] It's like, how can you have learning in that classroom, right?
[816] Because you just don't have the human capital.
[817] And so if you think about robotics, bringing that in to classrooms as well as the after -school space, where they offset some of this lack of resources in certain communities, I think that's a good place.
And then, turning to the other end, there's using these systems for workforce retraining, and dealing with some of the things that are going to come later on from job loss: thinking about robots and AI systems for retraining and workforce development.
I think those are exciting areas that can be pushed even more, and they would have a huge, huge impact.
What would you say are some of the open problems in education? It's exciting: young kids and the older folks, or just folks of all ages who need to be retrained, who need to sort of open themselves up to a whole other area of work. What are the problems to be solved there? How do you think robots can help? We have the engagement aspect, right? So we can figure out the engagement.
[821] That's not a...
[822] What do you mean by engagement?
So identifying whether a person is focused, that we can figure out.
What we haven't fully figured out, though there are some positive results on this, is personalized adaptation across concepts, right?
[825] So imagine I think about I have an agent and I'm working with a kid learning, I don't know, algebra two, can that same agent then switch and teach some type of new coding skill to a displaced mechanic?
[826] Like what does that actually look like, right?
Like, hardware might be the same, content is different, two different target demographics of engagement. How do you do that?
How important do you think personalization is in human-robot interaction, not just to a mechanic or a student, but, like, literally to the individual human being?
I think personalization is really important, but a caveat is that I think we'd be okay if we can personalize along certain dimensions; then, even though it may not be you specifically, I can put you in this group.
So for this group: this is how they best learn, this is how they best engage.
[833] Even at that level, it's really important.
[834] And it's because, I mean, it's one of the reasons why educating in large classrooms is so hard, right?
[835] You teach to, you know, the median.
[836] But there's these, you know, individuals that are, you know, struggling.
And then you have highly intelligent individuals, and those are the ones that are usually, you know, kind of left out.
[839] So highly intelligent individuals may be disruptive, and those who are struggling might be disruptive because they're both bored.
Yeah, and if you narrow the definition of the group, or the size of the group, enough, you'll be able to address their... not individual needs, but really the most important group needs.
[842] Right.
[843] Right.
[844] And that's kind of what a lot of successful recommender systems do, Spotify and so on.
[845] It's sad to believe, but I'm, as a music listener, probably in some sort of large group.
[846] It's very sadly predictable.
[847] You have been labeled.
[848] Yeah, I've been labeled and successfully so because they're able to recommend stuff.
[849] Yeah, but applying that to education, right?
[850] There's no reason why it can't be done.
[851] Do you have a hope for our education system?
[852] I have more hope for workforce development.
[853] And that's because I'm seeing investments, even if you look at VC and investments in education, the majority of it has lately been going to workforce retraining, right?
And so I think that government investment is increasing.
[855] There's like a claim.
[856] And some of it's based on fear, right?
[857] Like AI's going to come and take over all these jobs.
What are we going to do with all these taxes that aren't coming to us from our citizens?
[859] And so I think I'm more hopeful for that.
[860] Not so hopeful for early education.
[861] because it's this, it's still a who's going to pay for it, and you won't see the results for like 16 to 18 years.
[862] It's hard for people to wrap their heads around that.
[863] But on the retraining part, what are your thoughts?
There's a candidate, Andrew Yang, running for president, talking about, sort of, AI, automation, robots.
Universal basic income.
Universal basic income, in order to support us as automation takes people's jobs, and to allow you to explore and find other means.
Do you have a concern about the society-transforming effects of automation and robots and so on?
[868] I do.
I do know that AI and robotics will displace workers.
[870] Like, we do know that.
But there'll be other workers, and there will be new jobs defined.
That's not what I worry about, like, will all the jobs go away?
[874] What I worry about is the type of jobs that will come out, right?
[875] Like people who graduate from Georgia Tech will be okay, right?
[876] We give them the skills.
[877] They will adapt even if their current job goes away.
[878] I do worry about those that don't have that quality of an education, right?
Will they have the ability, the background, to adapt to those new jobs?
[881] That I don't know.
[882] That I worry about, which will create even more polarization in our society, internationally, and everywhere.
[883] I worry about that.
[884] I also worry about not having equal access to all these wonderful things that AI can do and robotics can do.
[885] I worry about that.
People like me, from Georgia Tech or, say, MIT, will be okay, right?
But that's such a small part of the population that we need to think much more globally about having access to the beautiful things, whether it's AI in healthcare, AI in education, AI in politics, right?
[888] I worry about that.
[889] And that's part of the thing that you were talking about is people that build the technology have to be thinking about ethics, have to be thinking about access and all those things, and not just a small subset.
[890] Let me ask some philosophical, slightly romantic questions.
[891] All right.
People that listen to this will be like, here he goes again.
[894] Okay.
[895] Do you think one day we'll build an AI system that a person can fall in love with and it would love them back?
Like in the movie Her, for example.
[897] Oh, yeah.
[898] Although she kind of didn't fall in love with him.
[899] Or she fell in love with like a million other people, something like that.
[900] You're the jealous type, I see.
[901] We humans are the jealous type.
[902] Yes.
[903] So I do believe that we can design systems where people would fall in love with their robot, with their AI partner.
[904] That I do believe.
[905] Because it's actually, and I don't like to use the word manipulate, but as we see, there are certain individuals that can be manipulated if you understand the cognitive science about it, right?
[906] Right.
So, I mean, you could think of all close relationships, and love in general, as a kind of mutual manipulation, that dance, the human dance. I mean, manipulation has a negative connotation.
[908] And that's why I don't like to use that word particularly.
I guess another way of phrasing what you're getting at is that it could be algorithmized or something.
[910] It could be.
[911] The relationship building part can be.
[912] I mean, just think about it.
We have, and I don't use dating sites, but from what I've heard, there are some individuals that have been dating that have never seen each other, right?
[914] In fact, there's a show, I think, that tries to, like, weed out fake people.
[915] Like, there's a show that comes out, right?
[916] Because, like, people start faking.
[917] Like, what's the difference of that person on the other end being an AI agent, right?
[918] And having a communication, and you building a relationship remotely, like, there's no reason why that can't happen.
In terms of human-robot interaction, so, you've kind of mentioned, with Data, that emotion can be problematic if not implemented well, I suppose.
[920] What role does emotion and some other human -like things, the imperfect things come into play here for good human -robot interaction and something like love?
[921] Yeah, so in this case, and you had asked, can an AI agent love a human back?
[922] I think they can emulate love back, right?
[923] And so what does that actually mean?
[924] It just means that if you think about their programming, they might put the other person's needs in front of theirs in certain situations, right?
[925] You look at, think about it as return on investment.
Like, what's my return on investment? As part of that equation, that person's happiness, you know, has some type of, you know, algorithmic weighting to it.
[928] And the reason why is because I care about them, right?
[929] That's the only reason, right?
[930] But if I care about them, and I show that, then my final objective function is length of time of the engagement, right?
[931] So you can think of how to do this, actually quite easily.
[932] But that's not love?
[933] Well, so that's the thing.
[934] I think it emulates love because we don't have a classical definition of love.
[935] Right.
[936] Right, and we don't have the ability to look into each other's minds to see the algorithm.
[937] And I guess what I'm getting at is, is it possible that, especially if that's learned, especially if there's some mystery and black box nature to the system, how is that, you know.
[938] How is it any different?
[939] How is it any different?
And in terms of, sort of, if the system says, I'm conscious, I'm afraid of death, and it does indicate that it loves you.
Another way to sort of phrase it, and I'd be curious to see what you think.
[943] Do you think there'll be a time when robots should have rights?
You've kind of described the robot in a very roboticist way, and just a really good way, but saying, okay, well, there's an objective function, and I could see how you can create a compelling human-robot interaction experience that makes you believe that the robot cares for your needs, and even something like loves you.
[945] But what if the robot says, please don't turn me off?
[946] What if the robot starts making you feel like there's an entity, a being, a soul there, right?
[947] Do you think there'll be a future?
[948] Hopefully you won't laugh too much at this, but where they do ask for rights?
[949] So I can see a future if we don't address it in the near term where these agents, as they adapt and learn, could say, hey, this should be something that's fundamental.
I hope that we would address it before it gets to that point.
[951] So you think that's a bad future?
Is that a negative thing, where they're asking for rights, or being discriminated against?
I guess it depends on what role they have attained at that point, right?
[954] And so if I think about now.
[955] Careful what you say because the robots 50 years from now will be listening to this and you'll be on TV saying this is what roboticists used to believe.
[956] Well, right?
[957] And so this is my, and as I said, I have a biased lens and my robot friends will understand that.
But so if you think about it, and I actually put this in kind of the... as a roboticist, you don't necessarily think of robots as humans with human rights, but you could think of them either in the category of property, or you can think of them in the category of animals, right?
[959] And so both of those have different types of rights.
[960] So animals have their own rights as a living being, but they can't vote, they can't, right?
[961] They can be euthanized.
[962] But as humans, if we abuse them, we go to jail.
[963] Right?
So they do have some rights that protect them, but don't give them the rights of, like, citizenship.
And then if you think about property, the rights are associated with the person, right?
[966] So if someone vandalizes your property or steals your property, like there are some rights, but it's associated with the person who owns that.
[967] If you think about it back in the day, and if you remember, we talked about, you know, how society has changed.
[968] Women were property, right?
[969] They were not thought of as having rights.
[970] They were thought of as property of, like their...
Yeah, assaulting a woman meant assaulting the property in somebody else's possession.
[972] Exactly.
[973] And so what I envision is that we will establish some type of norm at some point, but that it might evolve, right?
Like if you look at women's rights now, there are still some countries that don't have them.
And the rest of the world is like, why? That makes no sense, right?
[977] And so I do see a world where we do establish some type of grounding.
[978] It might be based on property rights.
[979] It might be based on animal rights.
[980] And if it evolves that way, I think we will have this conversation at that time, because that's the way our society traditionally has evolved.
Beautifully put. Just out of curiosity: Anki, Jibo, Mayfield Robotics with its robot Kuri, sci-fi works, Rethink Robotics.
They're all these amazing robotics companies led, created by incredible roboticists, and they've all gone out of business recently.
[983] Why do you think they didn't last longer?
Why is it so hard to run a robotics company, especially one like these, which are fundamentally HRI, human-robot interaction, robots?
[985] Yeah.
[986] Each one has a story.
[987] Only one of them I don't understand.
[988] And that was Anki.
[989] That's actually the only one I don't understand.
[990] I don't understand it either.
[991] No, no. I mean, I look at, like, from the outside, you know, I've looked at their sheets.
[992] I've looked at, like, the data that's...
[993] Oh, you mean, like, business -wise?
[994] Gotcha.
[995] Yeah.
[996] And, like, I look at all.
[997] I look at that data, and I'm like, they seem to have, like, product market fit.
[998] Like, so that's the only one I don't understand.
[999] The rest of it was product market fit.
What's product market fit, just, like, how do you think about it?
Yeah, so although Rethink Robotics was getting there, right?
[1002] But I think it's just the timing, it just, their clock just timed out.
[1003] I think if they had been given a couple more years, they would have been okay.
But the other ones were still fairly early by the time they got into the market.
[1005] And so product market fit is, I have a product that I want to sell at a certain price.
[1006] Are there enough people out there, the market, that are willing to buy the product at that market price for me to be a functional, viable, profit -bearing company, right?
[1007] So product -market fit.
If it costs you $1,000 to make and everyone wants it but is only willing to pay a dollar, you have no product market fit, even if you could sell, you know, plenty of them at a dollar, because you can't survive on that.
So how hard is it for robots? Sort of, maybe if you look at iRobot, the company that makes the Roomba vacuum cleaners, can you comment: did they find the right product market fit?
Like, are people willing to pay for robots, is also another kind of question.
So if you think about iRobot and their story, right?
[1013] Like when they first, they had enough of a runway, right?
[1014] When they first started, they weren't doing vacuum cleaners, right?
They were military, they were primarily contracts, government contracts, designing robots.
[1016] Yeah, I mean, that's what they were.
[1017] That's how they started, right?
[1018] And they still do a lot of incredible work there.
But yeah, that was the initial thing that gave them enough funding.
To then try to... the vacuum cleaner, which I've been told was not, like, their first rendezvous in terms of designing a product, right?
And so they were able to survive until they got to the point where they found a product-price-market fit, right?
And even, if you look at the Roomba, the price point now is different than when it was first released, right?
[1024] It was an early adopter price, but they found enough people who were willing to fund it.
And I mean, you know, I forget what their loss profile was for the first couple of years.
[1026] But they became profitable in sufficient time that they didn't have to close their doors.
So they found the right fit.
There's still people willing to pay a large amount of money.
[1029] So over $1 ,000 for a vacuum cleaner.
[1030] Unfortunately, for them, now that they've proved everything out, figured it all out, now there's competitors.
[1031] Yeah.
[1032] And so that's the next thing, right?
[1033] The competition, and they have quite a number, even internationally.
Like, there's some products out there, you can go to, you know, Europe and be like, oh, I didn't even know this one existed. So this is the thing, though, like with any market. This is not a bad time, although, you know, as a roboticist it's kind of depressing. But I actually think about it like this: I would say that all of the companies that are now in the top five or six, they weren't the first to the stage, right?
Like, Google was not the first search engine, sorry, AltaVista, right?
[1036] Facebook was not the first, sorry, MySpace, right?
[1037] Like, think about it.
[1038] They were not the first players.
[1039] Those first players, like, they're not in the top 5, 10 of Fortune 500 companies, right?
[1040] They proved, they started to prove out the market.
[1041] They started to get people interested.
[1042] They started the buzz, but they didn't make it to that next level.
[1043] But the second batch, right?
[1044] The second batch, I think, might make it to the next level.
When do you think the Facebook of robotics...?
Sorry, I take that phrase back, because people deeply, for some reason, well, I know why, but I think it's exaggerated, distrust Facebook because of the privacy concerns and so on.
And with robotics, one of the things you have to make sure of, with all the things we've talked about, is to be transparent and have people deeply trust you, to let a robot into their lives, into their home.
But when do you think the second batch of robots will arrive? Is it five, 10, 20 years until we have robots in our homes and robots in our hearts?
[1049] So if I think about, because I try to follow the VC kind of space in terms of robotic investments.
And right now, and I don't know if they're going to be successful.
[1052] I don't know if this is a second batch, but there's only one batch that's focused on like the first batch, right?
[1053] And then there's all these self -driving Xs, right?
[1054] And so I don't know if they're a first batch of something or if, like, I don't know quite where they fit in.
[1055] But there's a number of companies, the co -robot, I call them co -robots, that are still getting VC investments.
Some of them have some of the flavor of, like, Rethink Robotics; some of them have some of the flavor of, like, Kuri. What's a co-robot? So, basically a robot and human working in the same space. So some of the companies are focused on manufacturing, having a robot and human working together in a factory. Some of these co-robots are robots and humans working in the home, working in clinics.
Like, there's different versions of these companies in terms of their products, but they're all... so Rethink Robotics would be, like, one of the first, at least well-known, companies focused on this space.
[1058] So I don't know if this is a second batch or if this is still part of the first batch.
[1059] That I don't know.
[1060] And then you have all these other companies in this self -driving space.
[1061] And I don't know if that's a first batch or again a second batch.
[1062] Yeah.
[1063] So there's a lot of mystery about this now.
[1064] Of course, it's hard to say that this is the second batch until it proves out, right?
[1065] Correct.
[1066] Yeah, exactly.
[1067] Yeah, we need a unicorn.
[1068] Yeah, exactly.
Why do you think people are so afraid, at least in popular culture, of legged robots like those worked on at Boston Dynamics, or just robotics in general?
[1070] If you were to psychoanalyze that fear, what do you make of it?
[1071] And should they be afraid?
[1072] Sorry.
[1073] So should people be afraid?
[1074] I don't think people should be afraid, but with a caveat.
[1075] I don't think people should be afraid, given that most of us in this world understand that we need to change something, right?
[1076] So given that.
[1077] Now, if things don't change, be very afraid.
What is the dimension of change that's needed?
So changing: thinking about the ramifications, thinking about the ethics, thinking about, like, the conversation that's going on right now. It's no longer, we're going to deploy it and forget that, you know, this is a car that can kill pedestrians that are walking across the street, right? We're not in that stage. We're putting these out on roads; there are people out there; yes, a car could be a weapon. The solutions aren't there yet, but people are thinking about this: we need to be ethically responsible as we send these systems out, robotics, medical, self-driving.
[1080] And military, too.
[1081] And military.
[1082] Which is not as often talked about, but it's really where probably these robots will have a significant impact as well.
[1083] Correct, correct, right, making sure that they can think rationally, even having the conversations, who should pull the trigger, right?
[1084] But overall, you're saying if we start to think more and more as a community about these ethical issues, people should not be afraid.
[1085] Yeah, I don't think people should be afraid.
[1086] I think that the return on investment, the impact, positive impact will outweigh any of the potentially negative impacts.
[1087] Do you have worries of existential threats of robots or AI that some people kind of talk about and romanticize about?
And in the next decade, the next few decades?
[1089] No, I don't.
[1090] Singularity would be an example.
So my concept is that, so remember, robots, AI, is designed by people.
[1093] It has our values.
[1094] And I always correlate this with a parent and a child.
[1095] So think about it.
[1096] As a parent, what do we want?
[1097] We want our kids to have a better life than us.
[1098] We want them to expand.
[1099] We want them to experience the world.
[1100] And then as we grow older, our kids think and know they're smarter and better and more intelligent and have better opportunities and they may even stop listening to us.
[1101] They don't go out and then kill us, right?
[1102] Like, think about it.
[1103] It's because it's instilled in them values.
[1104] We instilled in them this whole aspect of community.
[1105] And yes, even though you're maybe smarter and more, have more money and da -da -da, it's still about this love, caring relationship.
[1106] And so that's what I believe.
So even if, like, you know, we've created the singularity in some archaic system back in, like, 1980 that suddenly evolves, the fact is it might say, I am smarter.
[1109] I am sentient.
[1110] These humans are really stupid.
[1111] But I think it'll be like, yeah, but I just can't destroy them.
[1112] Yeah.
For sentimental value.
It'll still just come back for Thanksgiving dinner every once in a while.
[1115] Exactly.
That's just so beautifully put.
You've also said that The Matrix may be one of your favorite AI-related movies.
[1118] Can you elaborate why?
[1119] Yeah.
[1120] It is one of my favorite movies.
[1121] And it's because it represents kind of all the things I think about.
[1122] So there's a symbiotic relationship between robots and humans, right?
[1123] That symbiotic relationship is that they don't destroy us.
[1124] They enslave us.
[1125] Right.
[1126] But think about it.
[1127] Even though they enslaved us, they needed us to be happy.
[1128] Right.
And in order to be happy, they had to create this cruddy world that they then had to live in, right?
[1131] And that's the whole premise.
[1132] But then there were humans that had a choice, right?
[1133] Like you had a choice to stay in this horrific, horrific world where it was your fantasy life with all of the anomalies, perfection but not accurate.
[1134] Or you can choose to be on your own and have maybe no food for a couple of days, but you were totally autonomous.
[1135] And so I think of that as, and that's why.
[1136] So it's not necessarily us being enslaved, but I think about us having the symbiotic relationship.
[1137] Robots and AI, even if they become sentient, they're still part of our society, and they will suffer just as much as we.
[1138] And there will be some kind of equilibrium that will have to find some symbiotic relationship.
[1139] Right.
[1140] And then you have the ethicist, the robotics folks, that are like, no, this has got to stop.
[1141] I will take the other pill.
[1142] Yeah.
[1143] in order to make a difference.
[1144] So if you could hang out for a day with a robot, real or from science fiction, movies, books, safely, and get to pick his or her, their brain, who would you pick?
I've got to say it's Data.
[1146] Data.
[1147] I was going to say Rosie, but I don't, I'm not really interested in her brain.
[1148] I'm interested in data's brain.
[1149] Data pre or post -emotion chip?
[1150] Pre.
But don't you think it would be a more interesting conversation post-emotion chip?
[1152] Yeah, it would be drama.
[1153] And I, you know, I'm human.
[1154] I deal with drama all the time.
[1155] Yeah.
[1156] But the reason why I want to pick Data's brain is because I could have a conversation with him and ask, for example, how can we fix this ethics problem?
[1157] Right.
And he could go through, like, the rational thinking, and through that he could also help me think through it as well.
[1159] And so that's, there's like these questions, fundamental questions.
[1160] I think I can ask him that he would help me also learn from.
[1161] And that fascinates me. I don't think there's a better place to end it.
Thank you so much for talking today.
[1163] It was an honor.
[1164] Thank you.
[1165] Thank you.
[1166] This was fun.
[1167] Thanks for listening to this conversation.
And thank you to our presenting sponsor, Cash App.
[1170] Download it, use code Lex Podcast.
[1171] You'll get $10 and $10 will go to first, a STEM education nonprofit that inspires hundreds of thousands of young minds to become future leaders and innovators.
[1172] If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple podcast, follow on Spotify, support it on Patreon, or simply connect with me on Twitter.
And now, let me leave you with some words of wisdom from Arthur C. Clarke: whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.
Thank you for listening, and hope to see you next time.