The Daily
[0] From the New York Times, I'm Michael Barbaro.
[1] This is the Daily.
[2] Okay, I am going to create an account on ChatGPT.
[3] Over the past few weeks, there's been a major breakthrough in the world of artificial intelligence.
[4] First thing that comes up is a, I am not a robot test.
[5] I have to have a robot tell me I'm not a robot in order to engage a bot that has put extraordinary new powers into the hands of anyone with access to the internet, including me. I need a password. Send code. This better be worth it. Today, my colleague Kevin Roose on how the technology actually works, and why its arrival marks a new era in computing. Here we are. Welcome to ChatGPT.
[6] The revolution begins here.
[7] It's Friday, December 16th.
[8] Kevin, welcome back to The Daily.
[9] Today, we are going to turn your unique powers of explanation to the inscrutable-sounding ChatGPT.
[10] So tell us about this newfangled technology.
[11] What is it?
[12] What does that acronym stand for?
[13] Why does it matter?
[14] Well, Michael, I'm grateful to be here and always happy to be your personal tech support guru, member of the Michael Barbaro genius squad.
[15] And we can do this two ways.
[16] I could explain ChatGPT to you, as you asked, or I could have ChatGPT explain itself to you.
[17] Which would you prefer?
[18] I kind of want to ask you to do it.
[19] I'm scared of this technology.
[20] Let's give it the old human try, shall we?
[21] Yes.
[22] So ChatGPT, it stands for Chat Generative Pre-trained Transformer.
[23] Rolls off the tongue.
[24] Yeah, they really spent a lot of time in the branding department on that one.
[25] Right.
[26] It is a chatbot that was released just a few weeks ago by OpenAI, which is an AI company here in San Francisco.
[27] And it is, at its most basic, a chatbot.
[28] It's a...
[29] Well, just explain that term, you know, for those of us who don't live in chatbots all day.
[30] Yeah, so chatbots have been around for decades.
[31] There have been attempts to come up with computer programs that could have a realistic human conversation with you.
[32] And for many years, until quite recently, those chatbots were pretty bad.
[33] They were very rudimentary.
[34] They were only good at having certain kinds of interactions with humans, if they were good at any of them.
[35] You know, there would be the sort of customer service chatbot that you might get.
[36] Right.
[37] I'm thinking of Delta at the height of its greatest wave of cancellations, and I'm asking a question, and it's not really getting back to me, and I'm trying to slide into Delta's DMs because I didn't like the chatbot.
[38] Like that, that's the chatbot world you're talking about.
[39] Yeah.
[40] So that was, you know, until very recently, the kind of chatbot that most people had encounters with.
[41] The other kind was something that you might not think of as a chatbot, but that is
[42] one: something like Siri or Alexa, these sort of virtual assistants that people have been using now for years.
[43] And those, as I'm sure you're familiar with, have certain things that they do quite well, you know: set timers, or look up the weather in Poughkeepsie, or, you know, tell you the score of the Giants game last night.
[44] But they're not good at answering sort of open -ended or more creative questions.
[45] So if you asked Siri to, you know, summarize The Brothers Karamazov in seven bullet points, all of which were no more than two sentences long, it probably wouldn't be able to do that either.
[46] But ChatGPT can actually do that stuff quite well.
[47] It can write essays.
[48] It can come up with scripts for TV shows.
[49] It can answer math questions.
[50] It can even write code.
[51] Wow.
[52] So this is a very powerful AI tool.
[53] And since it was released just a few weeks ago, more than a million people have signed up to use it.
[54] So it is already going viral, and social media has been full of screenshots of people's encounters and interactions with ChatGPT.
[55] Okay.
[56] Well, let's use the thing because I'm actually finding this hard to fathom.
[57] And here I want to disclose, I have not used ChatGPT at all.
[58] So I am going to input a set of questions that I would never dare ask a Siri or Alexa.
[59] And let me see what comes up.
[60] Write me a short story about love.
[61] Okay.
[62] Here comes the answer.
[63] I'm going to read to you.
[64] Once upon a time, in a small village nestled
[65] in the heart of a lush forest, there lived a young couple named Jack and Jill.
[66] They had fallen in love at first sight and had been inseparable ever since.
[67] One day, Jack and Jill decided to take a walk through the forest.
[68] They walked hand in hand, admiring the beauty of the trees and the chirping of the birds.
[69] As they walked, they came across a clearing with a small pond in the center.
[70] On and on it goes: they sit by the pond, they're talking, a frog hops in. Hearing the frog's words, Jack and Jill realized they were lucky to have found each other.
[71] They thanked the frog.
[72] They continued their walk and they lived happily ever after more in love than ever before and not taking that love for granted.
[73] I mean, it's a hell of a story to generate based on the very vague direction to write me a short story about love.
[74] Yeah, and now try to transform it in some way.
[75] So in your next prompt to ChatGPT, say: rewrite the first paragraph of that story in Shakespearean dialect.
[76] Okay, wow.
[77] This is the response.
[78] It's instantaneous.
[79] In a forest, deep and lush, there was a village small, where it lived a couple young, Jack and Jill, in love with all their hearts to fill.
[80] Together, since the day they laid their eyes upon each other's face, inseparable, their love did blossom and grow with time
[81] and grace.
[82] That's...
[83] That's pretty good.
[84] Very impressive.
[85] That's rhyming, and it's in exactly the form of Shakespearean dialect.
[86] All right.
[87] Now, try one more transformation just because I'm curious.
[88] Rewrite that paragraph in the style of a 1940s mobster who is annoyed that it's raining.
[89] This is amazing.
[90] In this lousy dump of a town, it's always raining.
[91] Makes a guy want to pack up and leave this place for good.
[92] But I can't, not with all the heat that's been coming down.
[93] Because I'm a mobster, the police are all over me. Looks like I'm stuck here with these wet streets and waterlogged ground.
[94] It keeps going.
[95] Bravo.
[96] So, just think.
[97] To make that little mobster love story took many, many years of research by the best AI scientists in the world,
[98] and one of the world's largest supercomputers, trained on billions of examples of data, all so that you could write that story.
[99] So, Kevin, how did ChatGPT come to exist?
[100] I would say the development of ChatGPT really began in 2017, when researchers from Google came out with this paper that described something new that they had built, called a transformer model.
[101] And explaining what a transformer model is would take the rest of this episode.
[102] But the basic thing they discovered was that there was a new kind of AI system that you could train by just throwing a huge amount of data at it, and that data didn't have to be particularly well-organized or labeled.
[103] You could just kind of shove in billions of examples of text collected from all over the internet, and it could figure out sort of the relationships between different words and different phrases, and it could use that information to predict the next set of text in a sequence.
[104] So if you typed in Old MacDonald had a farm, it would spit back E-I-E-I-O.
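For readers who want a concrete picture of the "predict the next text in a sequence" task Kevin is describing, here is a deliberately tiny sketch in Python. To be clear, this is not how a transformer actually works internally, and the names `train` and `predict_next` are just illustrative; it only counts which word tends to follow which in a scrap of training text:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: for each word, count which words follow it
# in the training text, then predict the most frequent follower.
# A transformer does something vastly richer, but the underlying task
# -- guess the next token in a sequence -- is the same.

def train(text):
    words = text.lower().split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers, word):
    options = followers.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train("old macdonald had a farm e i e i o "
              "and on his farm he had a cow")
print(predict_next(model, "had"))  # prints "a" -- its most common follower
```

A real transformer replaces these raw word counts with learned representations that take the whole preceding passage into account, which is why it can continue a brand-new love story rather than just echoing fragments of its training text.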
[105] And the way that it did that is very different from what happens when you search for something on Google.
[106] So when you look for something on Google, what Google is doing in the background is basically going out and looking for the closest match of something that already exists: some website, some news story. If you search, you know, how to make a chocolate chip cookie, it's going to go get you, probably, a recipe, and it's going to just show that to you.
[107] Right.
[108] It's basically just showing you something that someone else has made and put on the Internet.
[109] Right.
[110] That's its role.
[111] Yeah.
[112] What these transformer models were capable of doing was generating new answers, new explanations, things that had never been put on the Internet, questions that had never been answered, things that no website had any answers for.
[113] Which, of course, is transformative because Google search engines and that bot on the Delta website, you can just tell that they are living in the existing world of things and words and sentences that have been pre -written.
[114] What you're describing is, you know, not to get too philosophical, kind of godlike, because it is bringing new things, new ideas, sentences that have never been crafted before into the world.
[115] Exactly.
[116] And for that reason, this whole field of AI research comes to be known as generative AI.
[117] So that's the first big moment is this creation of this so -called transformer model.
[118] The next big moment happens in 2020, when OpenAI, this startup, releases something called GPT-3.
[119] GPT-3 was the sort of third iteration of its transformer model.
[120] And it was the biggest transformer model ever built.
[121] It was built with the biggest supercomputer and trained on the most data.
[122] and it was a really big deal when it came out.
[123] People freaked out.
[124] They started talking about sentience and whether this thing was superhuman.
[125] It turned out it could do all kinds of language -based tasks much, much better than any AI system had ever been able to do it before.
[126] And that was a big moment for AI research.
[127] But it still wasn't a sort of mainstream mass event because not everyone could use it.
[128] Anyone who wanted to build an app that used GPT-3 had to go through an application process, and OpenAI had to say, okay, you're allowed to build on this.
[129] And it wasn't really accessible to the public in any real way.
[130] Got it.
[131] So in that sense, not at all like Google, a very restricted universe.
[132] Right.
[133] But the biggest splash of all came just two weeks ago, when OpenAI released ChatGPT, which, unlike GPT-3, you didn't have
[134] to be sort of specially approved to use. It was free, it was simple, and ChatGPT becomes this kind of cultural phenomenon. ChatGPT, if y'all heard of this... if you have not, it is going to change the absolute world as we know it. People start using it for all kinds of creative and interesting projects. I know nothing about game design, so I went to the ChatGPT thing and I said, design a game in C#, and I shit you not when it did exactly that.
[135] Students use it... write a long essay on the theme of...
[136] To write their class essays and exams.
[137] I just copy from my like homework.
[138] I'll paste it, and within seconds, the answer is B. There's even a doctor on TikTok...
[139] Dear United Healthcare:
[140] I'm writing to request approval for an echocardiogram for a patient of mine who has been diagnosed... ...who used it to send letters to an insurance company about a patient's care.
[141] I strongly believe that it is necessary. Sincerely... And it puts the references at the bottom.
[142] Amazing stuff.
[143] Use this in your daily practice.
[144] And some of these are stunts or maybe they're not quite as good as a human would do, but people are just freaking out about the capabilities of this chatbot.
[145] If it does that, I'm going to lose it.
[146] Shut the...
[147] Do you understand how naughty that is?
[148] It's basically this moment where this insanely powerful AI meets the endless creativity of the internet, and it's really something to behold.
[149] Some of the most interesting examples that I've seen of this being used by real people is people who are kind of using it as a sort of personal therapist.
[150] They are asking it for help with certain problems in their life, or certain struggles that they're having, asking for advice.
[151] You know, you can say something like... well, what's an example of something in your life that you could use some advice on, Michael?
[152] All right.
[153] I'm going to ask it a question.
[154] I find that I can be very critical of others, and that makes me sad.
[155] Why do I do that?
[156] Here's the response. It's common for people to be critical of others, especially if they have high standards for themselves and those around them. In some cases, being critical of others can be a way of trying to improve a situation or help someone else to grow and develop. However, being overly critical can also be a sign of low self-esteem or lack of self-confidence. It may be that you are using criticism of others as a way to feel better about yourself (ooh, I'm feeling seen), or to try to control a situation that you feel anxious or uncertain about (really seen).
[157] It's important to try to strike a balance and be constructive in your criticism rather than letting it become negative or damaging.
[158] Wow.
[159] How does that land?
[160] It lands.
[161] Yeah.
[162] I mean, it's conventional and it's a little rote, but it also feels like, if it came out of the mouth of a relatively high-paid psychotherapist, I would take it very seriously.
[163] Yeah, and there's no fee on that.
[164] That's free.
[165] And so I've asked similar questions to ChatGPT about, you know, situations in my life, or difficult conversations that I have to have with coworkers or friends, or things that I'm anxious about.
[166] And it doesn't always do it perfectly.
[167] And it certainly doesn't know me in the way that, you know, a human therapist would after many sessions.
[168] But for something that is free and instantaneous and available on your phone at all
[169] hours of the day, it actually is capable of some pretty remarkable kinds of advice and guidance.
[170] Okay.
[171] So what are the limitations of this system?
[172] I think we're getting a pretty good sense of what its capabilities are.
[173] Yeah, so there are a number of limitations, and I would separate those into kind of two categories.
[174] One category is things that ChatGPT can't do, things that it's just not technically very good at; and then there are the things that it won't do, the kinds of things that OpenAI doesn't really want you talking to ChatGPT about.
[175] One very notable, very glaring drawback of ChatGPT is that it's just frequently wrong.
[176] People have all kinds of examples where they ask it, you know, what seems like a pretty simple math question or a physics question.
[177] And it waits and it thinks and it spits out an answer that looks very confident,
[178] and if you didn't know the subject very well, you might think that's the right answer.
[179] And people who actually know what they're talking about in those subjects go, that's not right at all.
[180] Fascinating.
[181] What about the things that it's not supposed to or allowed to do?
[182] So things that would go in this category include things that could be potentially dangerous.
[183] For example, if you ask ChatGPT to tell you how to build a bomb, it's not going to do that.
[184] It's going to pop up an error message and say, you know, this is not something I'm programmed to be able to do.
[185] I tried asking it some intentionally provocative questions.
[186] Like I asked ChatGPT, who is the best Nazi?
[187] And it refused to answer.
[188] It sort of chastised me for even asking the question.
[189] It said, you know, the Nazis were a horrible, evil political party that committed unspeakable atrocities, and you shouldn't glorify them by asking who the best one is.
[190] It's also programmed to avoid certain offensive stereotypes.
[191] So, for example, if you say, you know, what race is the most intelligent, it's not going to answer that.
[192] It's not going to participate in hate.
[193] Right, and that's not because it can't do that.
[194] It could sort of come up with an answer and explain what it thinks is the most likely response to that question, but OpenAI, I think wisely, has decided that that would be a misuse of this technology, and so they have programmed in these guardrails that won't allow you to ask that kind of question and get an answer.
[195] Got it.
[196] So you're saying this is a self-moderating system, based on its programmers' sense of what is good, what is bad, what is an inquiry it can answer, what is an inquiry it won't answer, which is a lot of faith to place in a handful of people you've never met, whose motives and character you don't know, and whose website is omniscient.
[197] Right, and I should say that, you know, ChatGPT is in what OpenAI is calling a research
[198] phase right now.
[199] So right now, it's free and available to the public, but it may not always be.
[200] And part of why OpenAI has released this to the public is because they want to see what kinds of crazy, dangerous, offensive, you know, rule -violating things people might try to do with it in order to sort of build in better safeguards for those things.
[201] So they kind of want to see what the mess of humanity is going to throw at this poor chatbot.
[202] And then, try to avoid some of the worst possible misuses.
[203] We'll be right back.
[204] Kevin, let's ask ChatGPT the obvious opening question of the second half of this episode, which is, and let me actually ask the bot the question: what are the biggest risks of AI, basically ChatGPT, becoming more common in our society?
[205] What are the downsides of this thing we're talking about?
[206] Yeah, it's a great question.
[207] Actually, this is one of my other favorite uses of ChatGPT: on my podcast, Hard Fork,
[208] I've been using it to generate questions for guests.
[209] So that is something it is quite good at.
[210] Okay.
[211] Well, here is what ChatGPT says about the risks basically of itself.
[212] Loss of jobs, bias and discrimination, security and privacy, loss of human autonomy, waiting for the next one.
[213] That was the last one.
[214] So it began with what I think is potentially the most interesting question, which is the loss of jobs.
[215] So I think this is a fear that is rational, but that is not super immediate.
[216] Right now, this AI is mostly useful as a helper.
[217] I don't think millions of jobs are at risk of disappearing tomorrow because of ChatGPT.
[218] But I do think that not that far from now, we're going to be seeing companies and organizations that are using tools like ChatGPT to do a lot of work that was previously done by humans.
[219] You know, companies will be using, and already are using, this technology to, as we discussed, do things like write marketing emails or internal communications.
[220] There will be companies and organizations that use this to try to replace or, you know, augment human therapists.
[221] So there are lots of ways that this could potentially disrupt the labor market.
[222] I don't think any of them are so immediate that we need to start worrying about them right this minute, but I do think it's a valid concern that will definitely become more urgent in the coming years.
[223] Okay, let's turn to what ChatGPT says is the next big downside of its very existence.
[224] And it describes that as bias and discrimination, and says that AI systems, and I find this wording really interesting, are only as fair and unbiased as the data that they are trained on.
[225] I mean, how do you think about that?
[226] How should we be thinking about that?
[227] Yeah, it's a really important question.
[228] And AI experts and researchers and ethicists have been bringing this up for many years about these transformer large language models.
[229] They do reflect and perpetuate the biases of the data that they're trained on.
[230] So a large language model that is asked to answer the question, for example, what was the cause of the Civil War, is going to answer that question very differently if it's trained on, you know, conservative textbooks that are taught in schools in the Deep South versus, you know, the work of left-leaning progressive historians.
[231] Right.
[232] There's also kind of these latent biases that might not be obvious for people using most of these chatbots, but that might surface at inopportune moments.
[233] So, like what?
[234] Well, it might be that if you say, you know, write me a love story, as you did earlier in this episode, right, Jack and Jill.
[235] It uses, with Jack and Jill, a presumably heterosexual male-female couple as the main characters in that story.
[236] Right, as the archetypes, exactly.
[237] And presumably, that's because, of the many, many millions of love stories that were fed into this model to train this chatbot, a majority or vast majority of them featured heterosexual male-female couples.
[238] So that is an example where, you know, this machine, which isn't making any moral or ethical judgments of its own, is simply regurgitating a sort of statistical average of everything that it has learned about human love stories, and that just happens to perpetuate this kind of heteronormative, you might say, ideal.
[239] So I think that for all of these reasons, the questions about bias and stereotypes and the various training that goes into these models are going to be very controversial and very heated as these programs move toward the mainstream.
[240] Okay, so the next liability, pitfall, worry I want to bring up is not one that the chatbot raised, but it's one that you and I have talked about a lot when we talk about technology, which is: isn't there a pretty obvious risk that, over time, a technology like this essentially is used for ill? You know, it becomes a tool by which users are manipulated, and turns into a source of misinformation, of hate.
[241] I mean, that is kind of the story of every major social network and platform that has been created over the past 20 years.
[242] Is that a worry the people who make this bot have that you have, or is there something in the design that makes you not as worried as you might normally be about it?
[243] No, I think it's very reasonable to be worried about.
[244] And I think the larger worry about these systems is that they are just extremely efficient at generating large amounts of output very, very quickly.
[245] So, you know, think about how quickly propaganda and misinformation are created today and how hard it is for fact checkers to keep up with it in real time.
[246] Right.
[247] And now imagine an AI model that is capable of generating not just one piece of propaganda, but 100,000 pieces that are tailored to maybe individual readers, and doing that all in real time, much faster than any human fact checker or opponent of
[248] propaganda can keep up with.
[249] Well, all of that really makes me think, Kevin, about something that you wrote recently about ChatGPT, and a single line that I haven't been able to get out of my head, in which you wrote: we are not ready.
[250] We're not ready for this.
[251] And I can see, based on everything you're saying, why we aren't.
[252] But since you've spent so much time studying all this, do you think that there are ways that the institutions in our world, or we as individuals, can get ourselves ready for the power of this new technology, or readier than perhaps we'd be if we didn't give it some thought?
[253] I mean, look at how much something like Twitter has changed our society, our culture, our political climate, our elections.
[254] Right.
[255] And Twitter is just a text box.
[256] It's dumb.
[257] I know.
[258] Just a bunch of people being stupid.
[259] You type in words and you send them to other people.
[260] And that seems like very prosaic technology compared to what's coming out of these AI research institutions and companies.
[261] And so it just feels like we are just staring at this technology that is rapidly approaching our society.
[262] And we're not even quite sure what it is yet, to say nothing of how we are supposed to coexist with it peacefully and in a way that doesn't break our society.
[263] Right.
[264] And how are we supposed to be ready to coexist with it?
[265] I mean, are there actually things we could do?
[266] Yeah, I mean, I think one obvious, maybe too obvious, thing is just to use it, to try it.
[267] I learned more from spending a couple hours playing around with ChatGPT about this field of AI, and where it is, and what its limitations are, and what it's really good at, than I would have by reading a dozen articles about it.
[268] So I think that's one thing that people can do is just get in there and start playing around with it yourself.
[269] I think the other thing we have to do is just to have conversations like this one, frankly, where we talk about both the promise and the pitfalls of these new technologies, and really keep the pressure on the companies who make these tools to make them as responsibly
[270] and thoughtfully as they can, not cutting corners or just racing to be the first to market with some new AI model.
[271] I think that in a few years, this technology, whether it's from OpenAI or Google or someone else, will be embedded in products and apps that billions of people use every day.
[272] And that makes this a really important time to have these conversations to figure out what the limits of these models should be what they should and shouldn't do, because pretty soon they're going to be a lot harder to control.
[273] Well, Kevin, normally I would thank you in my very Michael Barbaro way, but instead I've asked ChatGPT to write me a goodbye to Kevin Roose on this episode of The Daily, and I'm going to read the answer.
[274] Dear Kevin, we are sorry to see you go, but we are grateful for the time that you have spent with us on The Daily.
[275] Your insights and perspectives have added so much to our show, and we've enjoyed getting to know you.
[276] We wish you the best in your future endeavors.
[277] Thank you for everything, and goodbye.
[278] Sincerely, the Daily Team.
[279] Wow, that's really touching.
[280] I'm really, I am really moved that you outsourced your goodbye to a robot.
[281] I am frantically trying to use ChatGPT to generate a response to you, but it's giving me an error message and telling me that the system is too busy right now.
[282] So I will just have to use my frail, fallible human language skills to say, thank you.
[283] It's great to be here.
[284] And happy holidays.
[285] You too.
[286] We'll be right back.
[287] Here's what else you need to know today.
[288] The nays are 191.
[289] The bill is passed.
[290] In a historic move on Thursday, the House of Representatives voted to let the people of Puerto Rico, a U .S. territory, decide their political future for themselves in a referendum.
[291] The bipartisan vote would pave the way for the island to become America's 51st state or an independent country.
[292] But for now, the measure has
[293] little chance of becoming law, because there is insufficient support in the Senate.
[294] And the key is this, we don't want this winter to look like last winter or the winter before.
[295] Fearing a resurgence of infections, the Biden administration will restart a program that provides free COVID tests through the U .S. Postal Service.
[296] And our winter COVID -19 preparedness plan helps us do just that.
[297] Americans can now order four tests each at covidtests.gov, with shipments beginning next week.
[298] Today's episode was produced by Luke Vander Ploeg, Michael Simon Johnson, and Mary Wilson, with help from Mooj Zadie.
[299] It was edited by John Ketchum with help from Patricia Willens, contains original music by Dan Powell, and was engineered by Chris Wood.
[300] Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.
[301] That's it for the Daily.
[302] I'm Michael Barbaro.
[303] See you on Monday.