Hidden Brain XX
[0] Quick note before we get started.
[1] This episode includes a racial epithet and discussions about pornography.
[2] This is Hidden Brain.
[3] I'm Shankar Vedantam.
[4] Maps are good representations not just of the world we live in, but of how we think about the world we live in.
[5] Over the centuries, our maps have emphasized the places we find important.
[6] They show the limits of our knowledge and the scope of our ambitions.
[7] 700 years ago, Europeans were completely unaware of the existence of North America.
[8] Fast forward to the 1970s, and scientists could tell you in detail what the surface of the moon was like.
[9] Today, we're charting out maps of a different sort.
[10] Maps of our minds.
[11] Maps of our minds.
[12] Maps of our minds.
[13] There isn't one cartographer designing these modern maps.
[14] We all are, and the maps are constantly changing.
[15] We start today's show with a personal question.
[16] Have you ever Googled something that you would never dream of saying out loud to another human being?
[17] When we have a question about something embarrassing or deeply personal, many of us today don't turn to a parent or to a friend, but to our computers.
[18] Because there's just some things you just can't ask a real person in real life and you need to ask Google.
[19] Because it's completely anonymous and there are no judgments attached.
[20] Google knows everything.
[21] I agree to that.
[22] Every time we type into a search box, we reveal something about ourselves.
[23] As millions of us look for answers to questions or things to buy or places to meet friends, our searches produce a map of our collective hopes, fears, and desires.
[24] My guest today is Seth Stephens-Davidowitz.
[25] He used to be a data scientist at Google, and he's the author of the book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.
[27] Seth, welcome to Hidden Brain.
[28] Oh, thanks so much for having me, Shankar.
[29] So, Seth, we all know that Google handles billions of searches every day, but one of the insights you've had is that the reason Google knows a lot about us is not just because of the volume of search terms, but because people turn to Google as they might turn to a friend or a confidant.
[30] That's exactly right.
[31] I think there's something very comforting about that little white box that people feel very comfortable telling things that they may not tell anybody else about their sexual interests, their health problems, their insecurities.
[32] And using this anonymous aggregate data, we can learn a lot more about people than we've really ever known.
[33] And one of the ways we can learn a lot more about people is through these very strange correlations.
[34] You find, for example, there's a relationship between the unemployment rate and the kinds of searches people make online.
[35] Yeah, I was looking at what searches correlate most with the unemployment rate.
[36] And one was unemployment benefits, but during the time period I looked at, the single search that was most highly correlated with the unemployment rate was a pornography site.
[37] And you can imagine that if a lot of people are out of work, they have nothing to do during the day, they may be more likely to look at porn sites.
[38] Another search that was high on the list was solitaire.
[39] So again, when people are out of work, they're kind of bored.
[40] They do leisure activities and potentially this measure of how much leisure there is on the internet may help us know how many people are out of work on a given day.
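To make the idea concrete, here is a minimal sketch (my illustration, not Seth's actual analysis) of how you might check whether a search term like "solitaire" tracks the unemployment rate. The file names and column names are hypothetical placeholders.

```python
# A sketch of correlating a search term's weekly volume with the unemployment rate.
# The CSV files and column names below are hypothetical placeholders.
import pandas as pd

searches = pd.read_csv("search_volume.csv", parse_dates=["week"])     # columns: week, term, volume
unemployment = pd.read_csv("unemployment.csv", parse_dates=["week"])  # columns: week, rate

solitaire = searches[searches["term"] == "solitaire"]
merged = solitaire.merge(unemployment, on="week")

# Pearson correlation between leisure-related search volume and the unemployment rate
print(merged["volume"].corr(merged["rate"]))
```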
[41] And of course, this sort of helps us reconsider what we think of as data.
[42] So when we think about the unemployment rate, as you say, our normal approach is to say how many people are still in jobs.
[43] Let's track down all the jobs.
[44] This is coming at the question entirely differently.
[45] Yeah, I think the traditional way to collect data was to send a survey out to people and have them answer questions, check boxes.
[46] There are lots of problems with this approach.
[47] Many people don't answer surveys and many people lie to surveys.
[48] So the new era of data is kind of looking through all the clues that we leave.
[49] Many of them, not as part of questions or as part of surveys, but just clues we leave as we go through our lives.
[50] One of the important differences between mining this kind of data and the responses we get on surveys has to do with how people report their sexual orientation.
[51] I understand that the kind of queries that you see on Google might reveal something quite different than if you ask people if they're gay.
[52] That's right.
[53] If you ask people in surveys today in the United States, about two and a half or three percent of men say that they're primarily attracted to men.
[54] And this number is far higher in certain states where tolerance of homosexuality is greater.
[55] So there are a lot more gay men according to surveys in California than in Mississippi.
[56] But if you look at search data for gay male pornography, it's a tiny bit higher in California, but not that much higher.
[57] And overall, about 5% of male pornography searches are for gay porn.
[58] So almost twice as high as the numbers you get in surveys.
[59] Your research has important implications for a topic that we've looked at a lot on Hidden Brain: the topic of implicit bias.
[60] People aren't always aware of the biases they hold.
[61] and so scientists have had to find clever ways to unearth these biases.
[62] You think that Google searches can reveal some forms of implicit bias?
[63] That's right.
[64] So one I look at is the questions that parents have about their children.
[65] If you ask many parents today, they would say that they treat their sons and daughters equally, that they're equally excited about their intellectual potential, equally concerned about maybe their weight problems.
[66] But if you aggregate everybody's Google searches, you see large differences by gender.
[67] When parents in the United States ask questions starting "Is my son," they're much more likely to use words such as "gifted" or "genius" than they would in a search starting "Is my daughter."
[68] When parents in the United States search "Is my daughter," they're much more likely to complete it with "Is my daughter overweight?" or "Is my daughter ugly?"
[69] So parents are much more excited about the intellectual potential of their sons and much more concerned about the physical appearance of their daughters.
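As an illustration of the kind of aggregation Seth is describing, here is a toy sketch that groups query completions by the prefixes "is my son" and "is my daughter." The query log is invented; real analyses run over aggregated, anonymized search data.

```python
# Count which words follow each parental query prefix in a (made-up) query log.
from collections import Counter

queries = [
    "is my son gifted",
    "is my son a genius",
    "is my daughter overweight",
    "is my daughter gifted",
    "is my daughter ugly",
]

completions = {"is my son": Counter(), "is my daughter": Counter()}
for q in queries:
    for prefix in completions:
        if q.startswith(prefix):
            completions[prefix][q[len(prefix):].strip()] += 1

for prefix, counts in completions.items():
    print(prefix, counts.most_common(3))
```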
[70] Before I get to the next question, Seth, I just want to give a warning to our listeners.
[71] This next section is going to involve a discussion regarding the N -word.
[72] Seth, you report that in some states, after Barack Obama was first elected president, there were more Google searches for a certain racist term than searches for "first black president."
[73] I think there is a disturbing element to some of this search data where in the United States today, many people, and maybe this is a good thing, don't feel comfortable sharing that they have racist thoughts or racist feelings.
[74] But on Google, they do make these searches in strikingly high frequency.
[75] I don't need to use the sordid language itself to describe this.
[76] The measure is the percent of Google searches that include the word, and these searches are predominantly searches looking for jokes mocking African Americans.
[77] I should clarify this is not searches for rap lyrics, which tend to use the word ending in A. But if you look at the racist search volumes: based on everything I had read about racism in the United States,
[78] I would have thought that racism in the United States was predominantly concentrated in the South, that really the big divide in the United States when it comes to racism is South versus North.
[79] But the Google data reveals that's not really the case, that racism is actually very, very high in many places in the North: places like western Pennsylvania, eastern Ohio, industrial Michigan, rural Illinois, or upstate New York.
[80] The real divide these days when it comes to racism is not North versus South.
[81] It's East versus West.
[83] There's much higher racism east of the Mississippi than west of the Mississippi.
[84] So besides just saying, you know, we know that there are these patterns of racist searches in different parts of the country, you're actually saying you can do more than that.
[85] You can actually predict how different parts of the country might vote in a presidential election based on the kind of Google searches you see in different parts of the country.
[86] Yeah, well, the first thing I found is that there was a large correlation between racist search volume and parts of the country where Obama did worse than other Democratic candidates had done.
[87] So Barack Obama was the first major party general election nominee who was African American.
[88] And you see a clear relationship that Obama lost large numbers of votes in parts of the country where there are high racist search volumes.
[89] And other researchers, such as Nate Silver at 538 and Nate Cohn at the New York Times, have found that there was a large correlation between racist search volumes and support for Donald Trump and the Republican Party: parts of the country that made racist searches in high numbers were much more likely to support Donald Trump.
[90] And this relationship was much stronger than really any other variable that they tested.
[91] I'm wondering how you try and understand that kind of information.
[92] It's hard not to listen to what you're saying and draw sort of what seems to be a superficial conclusion, which is that racist people vote for Donald Trump.
[93] I'm not sure.
[94] Is that what you're saying?
[95] That's one of those things where it sounds so offensive to say it that I think everyone tiptoes around the line.
[96] I will say that the data does show a strong correlation between racist searches and support for Donald Trump that is hard to explain with any other explanation.
[97] You know, it's, yeah, I mean, yeah, that kind of is what I'm saying.
[98] I'm not saying that everybody who supported Donald Trump is racist by any stretch of the imagination.
[99] There are plenty of people who support Donald Trump without this racist tendency.
[100] But a significant fraction of his supporters, I think, were motivated by racial animus.
[101] Seth Stephens-Davidowitz is a former Google data scientist and the author of Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.
[102] You spend a lot of time in the book talking about sex.
[103] It turns out to be an area where marketers and companies know that what we say about ourselves is nowhere close to the truth.
[104] Most people report being not interested in pornography, but the website Pornhub reports that in 2015 alone, viewers watched two and a half billion hours of porn, which is apparently longer than the entire amount of time that humans have been on Earth.
[105] What does this say about us?
[106] The fact that we either have very little insight about ourselves or we're actually lying through our teeth.
[107] Yeah, I'd say we're probably lying through our teeth.
[108] Yeah, I'd say that I do talk a lot about sex in this book.
[109] One thing I like to say is that big data is so powerful it turned me into a sex expert.
[110] because it wasn't a natural area of expertise for me. But I do talk a lot about sexuality.
[111] And I think you do learn a lot about people that's very, very different from what they say, and kind of the weirdness at the heart of the human psyche that doesn't really reveal itself in everyday life or at lunch tables, but it does reveal itself at 2 a.m. on Pornhub.
[112] Pornography sites aren't the only ones gathering information about our sexual and romantic preferences.
[113] We now have apps like Tinder and sites like OKCupid that gather tons of data about us.
[114] As a result, these apps and sites know a lot about our romantic preferences.
[115] But for a long time, we've had a human version of big data for romance.
[116] Grandma.
[117] Seth has some personal experience with this big data source.
[118] A couple of years ago, he was having Thanksgiving dinner with his family.
[119] He was 33, didn't have a date with him, and his family was trying to figure out the qualities Seth needed in a romantic partner.
[120] My family was going back and forth.
[121] My sister was saying that I need a crazy girl because I'm crazy.
[122] My brother was saying that my sister was crazy, that I need a normal girl to balance me out.
[123] And my mom was screaming at my brother and sister that I'm not crazy.
[124] And my dad was then screaming at my mom, that of course Seth is crazy.
[125] So it's kind of a classic Stephens-Davidowitz family Thanksgiving where everyone's just yelling at each other for being crazy.
[126] And we're not really getting any progress in learning about what I need in my love life.
[127] And then my soft-spoken 88-year-old grandma started to speak, and everyone went quiet.
[128] And she explained to me that I need a nice girl, not too pretty, very smart, good with people, social so you will do things, sense of humor because you have a good sense of humor.
[129] And I describe why her advice was so much better than everybody else's.
[130] I think one of the reasons is that she's big data, right?
[131] So grandmas and grandpas throughout history have had access to more data points than anybody else.
[132] And they've been able to correlate larger patterns than anybody else has because they've been around longer.
[133] And that's why they've been such an important source of wisdom historically.
[134] The problem, of course, as you also point out, is that it's very hard to disentangle your personal experiences from what actually happens in the world.
[135] And in your grandmother's case, she actually had a very specific piece of relationship advice about the kind of person you should want.
[136] And some of that might not actually be backed up by the empirical evidence.
[137] Yeah.
[138] Well, my grandma has told me on multiple occasions that it's important to have a common set of friends with a partner.
[139] So she lived in a small apartment in Queens, New York with my grandfather.
[140] And every evening, they'd go outside and gossip with their neighbors.
[141] And she thought that was a big part in why their relationship worked.
[142] But actually recently, computer scientists have analyzed data from Facebook.
[143] And they can actually look at when people are in relationships and when they're out of relationships and try to predict what factors of a relationship make it more likely to last.
[144] One of the things they tested was having a common group of friends.
[145] Some partners on Facebook share pretty much the same friend group and some people have totally isolated friend groups.
[146] And they found, contrary to my grandmother's advice, that having a separate social circle is actually a positive predictor of a relationship lasting.
[147] And so, of course, the risk of trusting the individual is that the individual's intuition about what worked for his or her life might not work for everyone else.
[148] That's right.
[149] I think we tend to get biased by our own situation.
[150] Data scientists have a phrase called weighting data.
[151] Some data points get extra weight in our models.
[152] And our intuition gives too much weight to our own experience.
[153] And we tend to assume that what worked for us will work for others as well.
[154] And that's frequently not the case.
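A toy illustration of that "weighting" idea, written for this transcript rather than taken from the book: intuition effectively assigns a huge weight to our single personal data point, while an even-handed estimate weights all observations equally. The numbers are invented.

```python
# Compare an evenly weighted estimate with one that overweights personal experience.
import numpy as np

outcomes = np.array([0.62, 0.55, 0.70, 0.58, 0.65])  # hypothetical outcomes observed across many couples
my_experience = 1.0                                   # "it worked perfectly for me"

all_points = np.append(outcomes, my_experience)
even_estimate = np.mean(all_points)

weights = np.append(np.ones(len(outcomes)), 20.0)     # intuition: my own case counts 20x
biased_estimate = np.average(all_points, weights=weights)

print(even_estimate, biased_estimate)  # the biased estimate is pulled toward personal experience
```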
[155] Many companies know that we don't really understand ourselves.
[156] When we come back, we look at how companies are using big data to predict what we're going to do before we know it ourselves.
[158] We'll also ask: if sites like Google can use data to forecast whether you're going to get a serious illness, should they give you that information?
[159] Stay with us.
[160] This is Hidden Brain.
[161] I'm Shankar Vedantam.
[162] We're speaking today with former Google data scientist Seth Stephens-Davidowitz about the research in his book, Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.
[163] Netflix used to ask users what kind of movies they wanted to watch.
[164] Seth says eventually the company realized that asking this kind of question was a complete waste of time.
[165] Yeah, initially Netflix would ask people what they wanted to view in the future so they could queue up the movies that they'd said they wanted.
[166] And if you ask people, what are you going to want to watch tomorrow or this weekend?
[167] People are very aspirational.
[168] They want to watch documentaries about World War II or avant-garde French films.
[169] But then when Saturday or Sunday comes around, they want to watch the same lowbrow comedies that they've always watched.
[170] So Netflix realized they had to just ignore what people told them and use their algorithms to figure out what they'd actually want to watch.
[171] So one of the things that's intriguing about what you just said is I don't think it's actually the case that people were lying to Netflix when they said they wanted to watch the avant-garde film.
[172] They actually genuinely probably aspired to do that.
[173] It might actually be that big data understands people better than they understand themselves.
[174] Yeah, probably even more common than lying to other people is lying to ourselves, particularly when we're trying to predict what we're going to do in two or three days.
[175] We tend to assume that we're going to go to the gym more than we actually go, or eat better than we actually will eat, or watch more intellectual stuff than we actually will watch, so the algorithms can correct for this over-optimism that we all tend to share.
[176] When you look at a company like Facebook, which has access to these huge amounts of data about us and what we like and whom we like in our relationships, you have to wonder how the company is using this data in all kinds of different ways.
[177] I remember Facebook got into some hot water a couple of years ago because they ran an experiment that seemed to be manipulating how people feel.
[178] And of course, there was a huge outcry about the experiment at the time.
[179] And since then, there hasn't been very much reported about what Facebook is doing, but I suspect that it might just be because Facebook is no longer telling us what it's doing, but it's still doing it anyway.
[180] Every major tech company now runs lots and lots of what are called A/B tests, which are little experiments where you put people into two different groups, a treatment and a control group, and you show one group one version of your site and the other group another version of the site, and you see which version gets the most clicks or the most views.
[181] This is really exploded in the tech industry.
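For readers who want to see the mechanics, here is a bare-bones sketch of an A/B test as described above: users are consistently assigned to one of two versions, and click-through rates are compared. The user IDs and click data are made up; real systems also apply significance tests before declaring a winner.

```python
# Minimal A/B test sketch: stable assignment plus click-through comparison.
import hashlib

def assign_variant(user_id: str) -> str:
    # Stable 50/50 split so the same user always sees the same version
    return "A" if int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2 == 0 else "B"

clicks = {"A": 0, "B": 0}
views = {"A": 0, "B": 0}

# Hypothetical log of (user_id, clicked) events
for user_id, clicked in [("u1", True), ("u2", False), ("u3", True), ("u4", False)]:
    variant = assign_variant(user_id)
    views[variant] += 1
    clicks[variant] += int(clicked)

for v in ("A", "B"):
    rate = clicks[v] / views[v] if views[v] else 0.0
    print(f"version {v}: {rate:.2%} click-through")
```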
[182] There are many, many instances where companies are now using big data against us.
[183] Banks and other financial institutions are using clues from big data to decide who shouldn't get a loan.
[184] I think it's an area of big concern.
[185] So I talk about a study in the book where researchers studied a peer-to-peer lending site and the text that people used in their requests for loans.
[186] And you can figure out, just from what people say in their loan requests, how likely they are to pay back.
[187] And there are some strange correlations.
[188] For example, if you mention the word God, you're much less likely to pay back: 2.2 times more likely to default.
[189] And this does get eerie.
[190] Are you really supposed to be penalized if you mention God in a loan application?
[191] That would seem to be really wrong, even evil, right, to penalize somebody for a religious preference.
[192] Basically, everything's correlated with everything, right?
[193] So just about anything anybody does is going to have some predictive power for other things they do.
[194] And the legal system is really not set up for a world in which companies potentially can mine correlations over just about everything anybody does in their life.
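Here is a hedged sketch of the kind of text-mining Seth describes for loan requests: fit a simple model on the words people use and see which words are most associated with default. The tiny dataset and labels are invented; the actual study used far more data and careful controls.

```python
# Toy bag-of-words model relating loan-request text to default (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

requests = [
    "I will pay this back, I promise, God bless",
    "Consolidating debt, stable job for 10 years",
    "Need help, I swear I will repay you, thank you",
    "Refinancing at a lower interest rate",
]
defaulted = [1, 0, 1, 0]  # 1 = defaulted, 0 = repaid (hypothetical labels)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(requests)

model = LogisticRegression().fit(X, defaulted)

# Words with the largest positive coefficients are most associated with default
coefs = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]), key=lambda p: -p[1])
print(coefs[:5])
```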
[195] I was thinking about an ethical issue.
[196] I'm not sure if necessarily this is a legal issue, but you mentioned in the book that, you know, if someone is Googling, I've been diagnosed with pancreatic cancer, what should I do?
[197] It's reasonable to assume that this person has been diagnosed with pancreatic cancer.
[198] But if you collect all of the people who are Googling what to do about their diagnosis with pancreatic cancer and then work backwards to see what they've been searching for in the weeks and months prior to their diagnosis, you can discover some pretty amazing things.
[199] Yeah, this is a study where researchers used Microsoft Bing data.
[200] They looked at people who searched for "just diagnosed with pancreatic cancer" and then similar people who never made such a search.
[201] And then they looked at all the health symptom searches they had made in the lead-up to either a diagnosis or no diagnosis, and they found that there were very, very clear patterns of symptoms that were far more likely to suggest a future diagnosis of pancreatic cancer.
[202] For example, they found that searching for indigestion and then abdominal pain was evidence of pancreatic cancer, while searching for just indigestion without abdominal pain meant a person was much less likely to have pancreatic cancer.
[203] And that's a really, really subtle pattern in symptoms, right?
[204] Like a time series of one symptom followed by another symptom is evidence of a potential disease.
[205] It really shows, I think, the power of this data, where you can really tease out very subtle patterns in symptoms and figure out which ones are potentially threatening and which ones are benign.
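A simple sketch of that "one symptom followed by another" idea: scan a user's time-ordered search history for an indigestion search that is later followed by an abdominal-pain search. The search log and keywords below are invented for illustration; the real study worked with aggregated Bing logs.

```python
# Detect a "first symptom, then second symptom" sequence in a (made-up) search log.
from datetime import datetime

searches = [
    (datetime(2024, 1, 3), "indigestion remedies"),
    (datetime(2024, 1, 20), "abdominal pain after eating"),
    (datetime(2024, 2, 1), "back pain stretches"),
]

def has_symptom_sequence(log, first_kw="indigestion", second_kw="abdominal pain"):
    first_time = None
    for when, query in sorted(log):          # process searches in time order
        if first_kw in query and first_time is None:
            first_time = when
        elif second_kw in query and first_time is not None and when > first_time:
            return True
    return False

print(has_symptom_sequence(searches))  # True: indigestion followed later by abdominal pain
```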
[206] So here's the ethical question.
[207] Once you establish that there is this correlation, that you sort of say, I have a universe of people who clearly have pancreatic cancer, and I work backwards through their search history, and I detect these patterns that no one had thought to look at before, that say these particular kinds of search terms seem to be correlated with people who go on to have the diagnosis versus these search terms that do not go on to predict a diagnosis.
[208] So does a company like Microsoft now have an obligation to tell people who are Googling for these combinations of search terms, look, you might actually need to get checked out.
[209] You might actually need to go see a doctor because, of course, if you can be diagnosed with pancreatic cancer four weeks earlier, you have a much better chance of survival than if you have to wait for a month.
[210] I lean in the direction of yes, some people would not lean that direction.
[211] It could be a little creepy if Google, right below the "I'm Feeling Lucky" button, told you, you know, you may have pancreatic cancer.
[212] It's not exactly the most friendly thing to see on a website.
[213] But personally, if I had some sort of symptom pattern that suggested I may have a disease and there was a chance of curing it if I was told, I'd want to know that.
[214] It's just another example that really the ethical and legal framework that we've set up is not necessarily prepared for big data.
[215] Seth Stephens-Davidowitz is a former data scientist at Google and the author of the book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.
[217] Seth, thank you for joining me today on Hidden Brain.
[218] Thanks so much for having me, Shankar.
[219] Have you ever talked to your computer, cursed it for making a mistake?
[220] PC load letter?
[221] What does that mean?
[222] Have you ever argued with the traffic directions you get from Google Maps or Waze?
[223] Starting route to Grover's Mill Road.
[224] Have you ever looked at a Roomba cleaning the floor on the other side of the room and told it, please come over to this side?
[226] Turn left.
[227] Left!
[228] It just ran itself right over the edge.
[229] Robots and artificial intelligence are playing an ever larger role in all of our lives.
[230] Of course, this is not the role that science fiction once imagined.
[231] It doesn't feel pity, or remorse, or fear.
[232] Robots bent on our destruction remain the stuff of movies like Terminator, and robot sentience is still an idea that's far off in the future.
[233] But there's a lot we're learning about smart machines, and there's a lot that smart machines are teaching us about how we connect with the world around us and with each other.
[234] My guest today has spent a lot of time thinking about how we interact with smart machines and how those interactions might change the way we relate to one another.
[235] Kate Darling is a research specialist at the MIT Media Lab.
[236] She joined us recently in front of a live audience at the Hotel Jerome in Aspen, Colorado as part of the Aspen Ideas Festival.
[237] Also on stage was a robot, a green robot dinosaur about the size of a small dog, known as a Pleo.
[238] It's going to be part of this conversation, but before we get to that, here's Kate.
[239] Kate, welcome to Hidden Brain.
[240] Thank you for having me. You found that there is an interesting point in the relationship between humans and machines, and that point comes when we give a machine a name.
[242] I understand that you have three of these Pleo dinosaurs at your home.
[243] Can you tell me some of the names that you have given to your robots?
[244] Yes.
[245] So the very first one I bought, I named Yochai after Yochai Benkler, who's a Harvard professor, who's done some work in intellectual property and other areas that I've always admired.
[246] And the second one I adopted after I filmed a Canadian documentary where the show host had to name the robot and he gave the robot the same name he had, which was Peter.
[247] So the second one has a boring name.
[248] And then the third one is named Mr. Spaghetti.
[249] I don't know if people outside of Boston are familiar with this, but the Boston public transportation system, they wanted to crowdsource a name for their mascot dog.
[250] And the internet decided that the dog should be named Mr. Spaghetti.
[251] And of course, they refused to do that and named the dog Hunter.
[252] So Mr. Spaghetti became a big thing in Boston for a while.
[253] People were very outraged about this, and so I named my PLEO, my third one, Mr. Spaghetti.
[254] I understand that companies actually have found that if you sell a robot with the name of the robot on the box, it changes the way people will interact with that robot, compared to if you just said, this is a dinosaur.
[255] So this is not, I don't have any data on this, but yes, I have talked to companies who feel that it helps with adoption and trust of the technology. Even for very, very simple robots, like boxes on wheels that deliver medicine in hospitals, if you give them a little nameplate that says Betsy, their understanding is that people are a little bit more forgiving of the robot, so instead of "this stupid machine doesn't work," they'll say, "oh, Betsy made a mistake."
[256] And I'm wondering if you've spent time thinking about why this happens.
[257] At some level, if I came up to you at home, and I said, Kate, is Mr. Spaghetti alive?
[258] You would almost certainly tell me, no, Mr. Spaghetti is not alive.
[259] I assume you don't think Mr. Spaghetti is alive, right?
[260] No. So given that you know that Mr. Spaghetti is not alive, why do you think giving him a name changes your relationship to him?
[261] With robots in particular, it's combined with just our general tendency to anthropomorphize these things.
[262] And we're also primed by science fiction and pop culture to give robots names and view them as entities with personalities.
[263] And it's more than just the name, right?
[264] I mean, robots move around in a way that seems autonomous to us.
[265] We respond to that type of physical movement.
[266] Our brains will project intent onto it.
[267] So I think robots are in the perfect mixture of something that we will very willingly treat with human qualities or lifelike qualities.
[268] All right.
[269] So we have this wonderful little prop in front of us.
[270] It's a Pleo dinosaur.
[271] I want you to tell me a little bit about the Pleo dinosaur, how it works and how you came to own three of them, Kate.
[272] What is the dinosaur?
[273] What does it do?
[274] It's basically an expensive toy.
[275] I bought the first one, I think, in 2007.
[276] There we go.
[277] It's awake.
[278] They have a lot of motors and touch sensors, and they have an infrared camera and microphones.
[279] So they're pretty cool pieces of technology for a toy, and that's initially why I bought one, because I was fascinated by everything that it can do.
[280] Like, if it starts walking around, it can walk to the edge of the table, it can look down, measure the distance to the floor, it knows that there's a drop, and it'll get scared and walk backwards.
[281] And then they go through different life phases, adolescent and fully grown, and, you know, it'll have moods.
[282] So I think what we should do, we bought the robot at Hidden Brain a couple of weeks ago.
[283] We haven't had a chance to give it a name yet.
[284] And I thought we should actually reserve the honors for this evening where we're talking to Kate and see if Kate wants to try and name this dinosaur, you know, since she cares about dinosaurs so much.
[285] I was looking up Kate's Twitter feed this morning.
[286] I understand that you're going to have a baby soon.
[287] Congratulations.
[288] Yes, I don't have a name for that either.
[289] Okay.
[290] Just FYI.
[291] She sometimes refers to the baby as baby bot, so just for whatever that's worth.
[292] And one retweet that you have on your Twitter feed cracked me up.
[293] It said, you don't really know how many people you don't like until you start trying to pick baby names.
[294] Yeah, that's a, that's a quote from my husband.
[295] So, tell me: you apparently haven't yet picked your baby's name.
[296] So do you have any top choices?
[297] Is there a name, a spare name that you might care to give the dinosaur?
[298] Well, the problem is we've had a girl's name picked out for years, and now we're having a boy, and we just can't, we don't even have any contenders.
[299] No contenders.
[300] What would have been your favorite girl's name if you had a girl?
[301] Well, so when I first started dating my now husband, he at some point said, if I ever had a daughter, I already know what I would name her.
[302] And I was like, oh, really?
[303] We're going to fight about this one.
[304] And he said, yeah, I would name her Samantha and Sam for short because Sam is kind of gender neutral.
[305] And I was like, oh, I really love that.
[306] That one was picked out very easily.
[307] All right.
[308] Since you're not having a girl, you're going to have a boy, would you mind if you considered naming the dinosaur Samantha?
[309] How would you feel about that?
[310] Oh, that would be awesome.
[311] We should name the dinosaur Samantha.
[312] All right.
[313] So henceforth, this dinosaur will be called Samantha, or Sam for short.
[314] Now, some time ago, Kate conducted a very interesting experiment with the Pleo dinosaurs.
[315] And to sort of show how this works, I have a second prop here, which is under the table.
[316] It's a hammer, a large hammer, which we borrowed from the hotel.
[317] Now, as you all know, the dinosaur is obviously not alive.
[318] It's just cloth and plastic and a battery and wires.
[319] It has a name, of course, Samantha, but it isn't alive in any sense of the term.
[320] And so, Kate, I'm going to actually give you the hammer.
[321] Oh, no. And I think we might have a little board underneath the table here.
[323] We're going to place the dinosaur on a board.
[324] Kate, would you consider destroying Samantha?
[325] No. It's just a machine.
[326] I only make other people do that.
[327] I don't do it myself.
[328] You wouldn't even consider harming the dinosaur?
[329] Well, so my problem is that I already know the results of our research and that would say something about me as a person.
[330] So I'm going to say, no, I'm not willing to do it.
[331] Kate Darling is a research specialist at the MIT Media Lab.
[332] When we come back, I'll ask her about that research, in which she asked volunteers to smash a robot dinosaur.
[333] Welcome back to Hidden Brain.
[334] I'm Shankar Vedantam.
[335] We're discussing our relationships with technology, specifically robots, with Kate Darling, a researcher from MIT.
[336] She joined us before a live audience at the Aspen Ideas Festival.
[337] A couple of years ago, Kate conducted an experiment that says a lot about how humans tend to respond to certain kinds of robots.
[339] Tell me about the experiment.
[340] So you had volunteers come up and you basically introduced them to these lovable dinosaurs and then you gave them a hammer like this and you told them to do what?
[341] Well, so, okay, so this was the workshop part that we used the dinosaurs for.
[342] They're a little too expensive to do an experiment with 100 participants.
[343] So this was a workshop that we did in a non-scientific setting.
[344] We had five of these robot dinosaurs.
[345] We gave them to groups of people and had them name them, interact with them, play with them.
[346] We had them personify them a little bit by doing a little fashion show with a fashion contest.
[347] And then after about an hour, we asked them to torture and kill them.
[348] And we had a variety of instruments.
[349] We had a hammer, a hatchet, and I forget what else.
[350] But even though we tried to make it dramatic, it turned out to be a little bit more dramatic than we expected it to be, and they really refused to even hit the things.
[351] And so we had to kind of start playing mind games with them.
[352] And we said, OK, you can save your group's dinosaur if you hit another group's dinosaur with the hammer.
[353] And they tried, and they couldn't do that either.
[354] This one woman was standing over the thing trying, and she just couldn't.
[355] She ended up petting it instead.
[356] And then finally we said, OK, well, we're going to destroy all of the robots unless someone takes a hatchet to one of them.
[358] And finally, someone did.
[359] Wait, so you said unless one of you kills one of them, we are going to kill all of them?
[360] Yeah.
[361] I think this might have been my partner's idea.
[362] So I did this with a friend named Hannes Gassert.
[363] We did this at a conference called LIFT in Geneva.
[364] And we had to improvise because people really didn't want to do it.
[365] So we threatened them.
[366] And finally someone did.
[367] She clearly doesn't want you to harm her.
[368] Yeah, clearly, clearly.
[369] So what do you think is going on?
[370] At a rational level, the dinosaur obviously is not alive.
[371] Why do you think we have such reluctance to harming the dinosaur?
[372] In fact, I might have the battery removed so the dinosaur stops making noise.
[373] Well, I mean, it behaves in a really lifelike way.
[374] I mean, we have over a century of animation expertise in creating compelling characters that are very lifelike that people will automatically project life onto.
[375] I mean, look at Pixar movies, for example.
[376] It's incredible.
[377] And I know that a lot of social roboticists actually work with animators to create these compelling characters.
[378] And so, you know, it's very hard to not see this as some sort of living entity, even though you know perfectly well that it's just a machine, because it's moving in this way that we automatically subconsciously associate with states of mind.
[379] And so I just think it's really uncomfortable for people, particularly for robots like this that can display a simulation of pain or discomfort, to have to watch that.
[381] I mean, it's just not comfortable.
[382] What did you find in terms of who was willing to do it and who wasn't?
[383] I mean, when you looked at the people who were willing to destroy a dinosaur, a dinosaur like the pleo, you found that there were certain characteristics that were attached to people who were more or less likely to do the deed.
[384] So the follow -up study that we did, not with the dinosaurs, we did with hex bugs, which are a very simple toy that moves around like an insect.
[385] And there we were looking at people's hesitation to hit the hex bug and whether they would hesitate more if we gave it a name and whether they would hesitate more if they had natural tendencies for empathy, for empathic concern.
[386] And we found that people with low empathic concern for other people didn't much care about the hex bug and would hit it much more quickly, and people with high empathic concern would hesitate more, and some even refused to hit the hex bugs.
[387] So in many ways, what you're saying is that potentially the way we relate to these inanimate objects might actually say something about us at a deeper level than just our relationship to the machine.
[388] Yes, possibly.
[389] I mean, we know now, or we have some indication that we can measure people's empathy using robots, which is pretty interesting.
[390] You know, my colleagues and I were discussing ahead of this interview whether you would actually destroy the dinosaur.
[391] And we were torn, because we said, on the one hand, you of all people should know that these are just machines and that it's an irrational belief to project lifelike qualities onto them.
[392] But on the other hand, I said, you know, it's really unlikely she's going to do it because she's going to look like a really bad person if she smashes the dinosaur in front of 200 people.
[393] I mean, I don't know if you've been watching Westworld at all, but the people who don't hesitate to shoot the robots, they seem pretty callous to us.
[394] And I think maybe there is something to it.
[395] Of course we can rationalize it.
[396] Of course, you know, if I had to, I could take the hammer and smash the robot and, you know, I wouldn't have nightmares about it.
[397] But I think that perhaps turning off that basic instinct to hesitate to do that might be more harmful than over, you know, I think overriding it might be more harmful than just going with it.
[398] I want to talk about the most important line we draw between machines and humans, and it's not intelligence, but it's consciousness.
[399] I want to play your little clip from Star Trek.
[400] Now tell me, Commander, what is data?
[401] I don't understand.
[402] What is he?
[403] A machine.
[404] Is he?
[405] Are you sure?
[406] Yes.
[407] You see, he's met two of your three criteria for sentience, so what if he meets the third, consciousness, in even the smallest degree?
[408] What is he then?
[409] I don't know.
[410] Do you?
[411] Do you?
[412] So this has been a perennial concern in science fiction, which is the idea that at some point machines will become conscious and sentient.
[413] And very often it's in the context of the machines will rise up and harm the humans and destroy us.
[414] But as I read your research, I actually found myself thinking: is our desire to believe that machines can become conscious actually just an extension of what we've been talking about for the last 20 minutes, which is that we project sentience onto machines all the time?
[415] And so when we imagine what they're going to be like in the future, the first thing that pops in our head is they're going to become conscious.
[416] Yeah, I think there's a lot of projection happening there.
[417] I also think that before we get to the question of robot rights and consciousness, you know, we have to ask ourselves, how do robots fit into our lives when we perceive them as conscious?
[418] Because I think that's when it starts to get morally messy and not when they actually inherently have some sort of consciousness.
[419] If humans have a tendency to anthropomorphize machines to see them as human, it isn't surprising that we're also willing to bring all the biases we have toward our fellow human beings into the machine world.
[420] Many of the intelligent assistants being built by major companies, Siri or Alexa, are being given women's names.
[421] Many of the genius machines are often given men's names, HAL or Watson.
[422] Now, you can say Siri and Alexa aren't people.
[423] Why should we care?
[424] Why should we care if people sexually harass their virtual assistants, as has been shown to sometimes happen?
[425] MIT's Kate Darling says we should care, because the way we treat robots may have implications for the way we treat other human beings.
[426] It might.
[427] We don't know, but it might.
[428] And one example with the virtual assistants you just mentioned is children.
[429] So parents have started observing, and this is anecdotal.
[430] They've started observing that their kids adopt behavioral patterns based on how they're interacting with these devices and how they're conversing with them.
[431] And there are some cool stories.
[432] Like there was a story in the New York Times a few years ago where a mother was talking about how her autistic son had developed a relationship with Siri, the voice assistant.
[433] And she said this was awesome because Siri is very patient.
[434] She will answer questions repeatedly and consistently.
[435] And apparently this is really important for autistic kids.
[436] But also because her voice recognition is so bad, he learned to articulate his words really clearly and it improved his communication with others.
[437] Now, it's great, but these things aren't designed with autistic kids in mind, right?
[438] That's kind of more of a coincidence than anything.
[439] And so there are also perhaps some unintended effects that are more negative.
[440] And so one guy wrote a blog post a while back where he said, Amazon's Echo is magical, but it's turning my child into an [expletive], because Alexa doesn't require please or thank you or any of the standard politeness that you want your kids to learn when they're conversing and when they're, you know, demanding things of you.
[441] So, you know, it starts there, but I think that as this technology improves and gets better at mimicking real conversations or life -like behavior, you have to wonder to what extent that gets muddled in our subconscious, and not just in children's subconscious, but maybe even in our own.
[442] Do you think it's a coincidence that most of the virtual assistants are given female names and female identities?
[443] I think it's a combination of whatever market research, but also just people not thinking.
[444] I mean, I visited IBM Watson in Austin, and there's a room that you can go into and you can talk to Watson, and he has this deep booming male voice, and you can ask him questions.
[446] And at the time I went there, there was a second AI in the room that turned on the lights and greeted the visitors, and that one had a female voice.
[447] And I pointed that out, and it seemed like they hadn't really considered that.
[448] So it's, you know, it's a mixture of people thinking, oh, this is going to sell better, and people just not thinking at all, because the teams that are building this technology are predominantly young, white, and male, and they have these blind spots where they don't even consider what biases they might perpetuate through the design of these systems.
[449] So you're sometimes called a robot ethicist, and you've sometimes said we might need to establish a limited legal status for robots.
[450] What do you mean by that?
[451] So, yeah, it's a little bit of a provocation, but my sense is that, you know, if we have evidence that behaving violently towards very lifelike objects not only tells us something about us as people, but can also change people and desensitize them to that behavior in other contexts.
[452] So, you know, if you're used to kicking a robot dog, are you more likely to kick a real dog? Then that might actually be an argument, if that's the case, to give robots certain legal protections, the same way that we give animals protections, but for a different reason.
[453] We like to tell ourselves that we give animals protection from abuse because they actually experience pain and suffering.
[454] I actually don't think that's the only reason we do it.
[455] But for robots, the idea would be not that they experience anything, but rather that it's desensitizing to us.
[456] And it has a negative effect on our behavior to be abusive towards the robots.
[457] So here's a thing that's worth pondering for a moment.
[458] If you hear, for example, that someone owns a bunch of chickens in their farm, right?
[459] So it's their farm, their chickens, they own the chickens.
[460] And they're really mistreating the chickens, torturing them, harming them.
[461] You could sort of make a property rights argument and say they can do whatever they want with their property.
[462] But I think many of us would say, even though the chicken belongs to you, there are certain things you can and cannot do with the chicken.
[463] And I'm not sure it's just about our concern that if you mistreat the chicken, that means you will turn into the kind of person who might mistreat other people.
[464] There's sort of a level, there's a certain moral level, at which I think the idea of abusing animals is offensive to us.
[465] And I'm wondering whether the same thing is true with machines as well, which is it's not just the case that it might be that people who harm machines are also willing to harm humans, but just the act of harming things that look and feel and sound sentient is morally offensive in some way.
[466] Yeah, so I think that's absolutely how we've approached most animal protections, because it's also, it's very clear that we care more about certain animals than others and not based on any biological criteria.
[467] So I think that we just find it morally offensive, for example, to torture cats or, you know, in the United States, we don't like the idea of eating horses, but in Europe, they're like, what's the difference between a horse and a cow?
[468] They're both delicious.
[469] So that's definitely how we tend to operate and how we tend to pass these laws.
[470] And I don't see why that couldn't also apply to machines once they get to a more advanced level where we really do perceive them as lifelike and it is really offensive to us to see them be abused.
[472] The devil's advocate side of that argument, of course, is: would people then say that pressing a switch and turning off a machine is unethical, because you're essentially killing the robot?
[473] But we don't protect animals from being killed.
[474] We just protect them from being treated unnecessarily cruelly.
[475] So I actually think animal abuse laws are a pretty good parallel here.
[476] You mentioned Westworld some moments ago, and I want to play you a clip from Westworld.
[477] For those of you who haven't seen Westworld, humans interact with robots that are extremely lifelike, so lifelike that it's sometimes difficult to tell whether you're talking to a robot or you're talking to a human.
[478] In the scene that I'm about to play you, a man named William interacts with a woman who may or may not be a robot.
[479] You want to ask.
[480] So ask.
[481] Are you real?
[482] Well, if you can't tell, does it matter?
[483] So as I watched the scene, and as I read your work, I actually had a thought, and I wanted to sort of run this thought experiment by you, which is that, you know, on one end of the spectrum, we have these machines that are increasingly becoming lifelike, human -like, you know, they respond in very intelligent ways, they seem as if they're alive.
[484] And on the other hand, we're learning all kinds of things about human beings that show us that even the most complex aspects of our minds are governed by a set of rules and laws, and in some ways, our minds function a little bit like machines.
[485] And I'm wondering, is there really a huge distinction?
[486] Is it possible, is the real question not so much can machines become more human-like, but is it actually possible that humans are actually just highly evolved machines?
[487] I have no doubt that we are highly evolved machines.
[488] I don't think we understand how we work yet, and I don't think we're going to get to that understanding anytime soon, but yeah, I do think that we follow a set of rules and that we are essentially programmed.
[489] So I don't tend to distinguish between entities with souls and entities without souls.
[490] And so it's much easier for me to say, yeah, it's probably all the same.
[491] But I can see that other people would find that distinction difficult.
[492] Do you ever talk about this?
[493] Do you ever run this by other people and sort of say, do you tell your husband, for example, I like you very much, but I think you're a really intelligent machine that I love dearly?
[494] I haven't explicitly said that to him, but...
[495] When you go home from this trip.
[496] Yeah, yeah, we'll see how that goes.
[497] Kate Darling is a research specialist at the MIT Media Lab.
[498] Our conversation today was taped before a live audience at the Hotel Jerome in Aspen, Colorado as part of the Aspen Ideas Festival.
[499] Kate, thank you for joining me today on Hidden Brain.
[500] Thank you so much.
[501] This week's show was produced by Rhaina Cohen, Tara Boyle, Renee Klahr, and Parth Shah.
[502] Our team includes Jennifer Schmidt and Maggie Penman.
[503] NPR's vice president for programming is Anya Grundmann.
[504] You can find photos and a video of Samantha, our Pleo dinosaur, on our Instagram page.
[505] We're also on Facebook and Twitter.
[506] If you enjoyed this week's show, please share the episode with friends on social media.
[507] I'm Shankar Vedantam.
[508] See you next week.