Morning Wire XX
[0] Following decades of development and billions of dollars in research, artificial intelligence or AI has fully burst into the mainstream, captivating people around the world and setting off a global race to implement the revolutionary technology in every facet of society.
[1] For this episode of Morning Wire, we talk to a number of experts about the future of artificial intelligence and the economic and ethical implications that come along with its development.
[2] I'm Daily Wire editor-in-chief John Bickley with Georgia Howe.
[3] It's February 12th, and this is your second Sunday edition of Morning Wire.
[5] Hey guys, Regan here, and I have good news for you.
[6] GenuCel is extending its blowout Valentine's Day sale.
[7] Every order includes an exclusive beauty box with two luxury gifts, just for you.
[8] GenuCel sent their probiotic moisturizer to the Morning Wire team, and we're loving the results.
[9] Skin redness, fine lines, and patchy blotches clear up in days.
[10] Visit GenuCel.com/wire and use code WIRE at checkout to claim this special offer.
[11] That's GenuCel.com/wire.
[12] GenuCel.com/wire.
[13] Joining us to discuss the rise of AI is DailyWire Senior Editor Cabot Phillips.
[14] So, Cabot, artificial intelligence conjures up a lot of different images for people, Skynet, for instance.
[15] So first, can we get a simple definition of the technology?
[16] Absolutely.
[17] So AI is essentially just creating computer programs that can look at huge amounts of information and learn from them, predicting outcomes, noticing patterns, and generating answers to questions among many, many other things.
[18] It's pretty complicated stuff.
[19] So I talked to Dr. Luis Amaral.
[20] He's a professor of chemical and biological engineering who's currently researching AI at Northwestern University.
[21] Here's how he describes artificial intelligence.
[22] So there are many versions of artificial intelligence, right?
[23] But the basic idea is that you want to use computers to be able to accomplish certain tasks.
[24] And the task can be, for instance, looking at an image and identifying what is in that image.
[25] It could be to write a sentence in a given language.
[26] It could be translating text from one language to another.
[27] And there are a multitude of approaches and methods to do it.
[28] The one that is currently very popular and dominating the field is called deep learning.
[29] And it tries to imitate the neurons in the brain to kind of figure out how taking an input will generate some answer.
[31] Now, recently, AI has exploded in popularity following the release of ChatGPT.
[32] Tell us about that.
[33] Yeah, back in November, the tech company OpenAI released ChatGPT, which was a groundbreaking generative AI program, otherwise known as a chatbot.
[34] Anyone can go online and create a profile and ask the program virtually anything, and within seconds, it'll spit out an answer that's often indistinguishable from what a human would say.
[35] For example, if you ask, what is ChatGPT, it tells you it's, quote, a language model developed by OpenAI, trained to generate human-like text responses to questions and prompts.
[36] But it can answer a lot more than that.
[37] From writing full-length movie scripts in minutes, to solving complicated physics equations, to writing computer code, even writing legislation, this thing can answer a mind-blowing range of questions.
[38] Here's Dr. Amaral explaining how the technology works.
[39] So it is one instance of what's called large language models, which are deep learning models that have used text generated over millions and millions of hours of work from people around the world to sort of figure out what comes next after you see some text, what would be the natural thing to come next.
[40] We've seen examples of computer programs answering basic questions and mimicking human conversations, but there's never been a program quite like this for mass use.
[41] Some industry leaders say that it's the most consequential tech product made available to the public since the iPhone debuted back in 2007.
[42] Yeah, it's really shaking the world here.
[43] So beyond the fun stuff the program can do, what are the practical applications?
[44] Well, there are the obvious uses like customer service where these new chatbots could help eliminate the need for massive call centers with hundreds of employees who can only take one call at a time.
[45] You could use the technology also to help with data entry and other monotonous research.
[46] It could help accountants and engineers.
[47] The list really does go on.
[48] But ultimately, because we're in the early stages of development, we're still just not totally sure how to best use the technology.
[49] Let me give you a little bit of an analogy.
[50] So when physicists started studying electricity, it was used initially for rather silly things.
[51] So, you know, people would meet and then the person demonstrating things would just give the other people electrical shocks.
[52] And this was, oh, wow, this is so cool.
[53] And so a lot of the initial applications were completely ridiculous.
[54] People in the beginning didn't think about electrical engines, about lighting the houses and the streets.
[55] They didn't think about all the things that we now associate with electricity.
[56] I think AI is very much at that early stage.
[57] Some of the things that we are using it for seem very, oh, how cool, the thing can answer questions and do this and that.
[58] And I think this is overall, mostly just silliness.
[59] This is just, we actually haven't figured out what it's going to be good for, and we are using it for what it seems to be good for.
[61] So I think it's too early to know how it's actually going to impact the world.
[62] One place we've already seen the technology being used is academia, and that's got a lot of people worried.
[63] What have we seen on that front so far?
[64] Well, look, it's no secret that there will always be students willing to cheat.
[65] But the problem is this new technology makes it incredibly difficult for teachers and professors to detect that cheating.
[66] For example, if a student turns in an essay and you think it may have been written by ChatGPT, you can't simply copy and paste passages of that essay into Google to see if it appears elsewhere, because it won't show up.
[67] And because the software can answer complicated prompts that typically could only be solved by humans, it's difficult to give questions that can get around the technology.
[68] On that note, I talked to Dr. Lloyd Smith, a professor of computer science at Missouri State University.
[69] He assigned ChatGPT the coursework for one of his classes, and it passed with flying colors.
[70] Well, in my first year introductory class, it did just fine.
[71] It wrote the programs I asked it to write, and it would have gotten an A in my class.
[72] I tried it on a more advanced class, and in that class it did pretty well, but it made some errors.
[73] It made some mistakes.
[74] And it's not just freshman-level classes that AI is capable of passing.
[75] ChatGPT made waves earlier this year when researchers announced that it had passed high-level business courses at UPenn's Wharton School and law school exams at the University of Minnesota.
[76] It even passed large portions of the U.S. medical licensing exam and the bar exam as well.
[77] Many researchers say it's only a matter of time before it passes the bar outright and outperforms humans on the MCAT as well.
[78] Wow.
[79] So what can be done to combat, you know, the issue of cheating, for example?
[80] Well, first are the obvious answers.
[81] Some professors say that they'll begin forcing students to write essays in class with pencil and paper instead of at home, which is a bit ironic.
[82] Technology has become so advanced that it's forcing learning off of computers.
[84] But in higher ed, the reality is students have to be able to do their own research and writing outside of the classroom.
[85] And that's where it'll be harder to stop cheating.
[86] Yeah.
[87] One solution, though, could be more AI.
[88] OpenAI announced in late January that they'd developed AI that can recognize other AI.
[89] Their new classifier tool allows you to input chunks of text, which the program will read and tell you if it's likely to have been written by a bot.
[90] Though even they admit the system is not foolproof.
[91] It's also worth pointing out, ChatGPT is just the tip of the iceberg when it comes to AI technology.
[92] Yeah, tell us about some of these competitors we're already starting to see.
[93] Because ChatGPT is the first program of its kind available to the public en masse, many people have the impression that it's the first product of its kind, period.
[94] But that is hardly the case.
[95] For years now, tech companies have been investing billions of dollars into AI research, and now they're scrambling to get their products out the door to show everyone else that they're in the game.
[96] For example, the chief AI scientist for Meta, the parent company of Facebook, recently said, quote, in terms of underlying techniques, ChatGPT is not particularly innovative.
[97] It's nothing revolutionary, although that's the way it's perceived in the public.
[98] Not surprisingly, Meta is preparing to unveil AI programs of their own, including a Make-A-Video tool that creates unique videos based only on text prompts from users.
[99] They've also released a demo version of Galactica, a generative AI tool similar to ChatGPT that can be used for scientific research.
[100] And Google is also getting in the game, too, right?
[101] They absolutely are.
[102] Not surprisingly, earlier this week, they began rolling out their own conversational AI program called Bard.
[104] There have been reports that Google executives were planning to take their time releasing the product, but they're now speeding up their timeline because of pressure from investors, and more importantly, pressure from other companies.
[105] Right now, Google sort of has the market cornered when it comes to searching things online.
[106] They're by far the most popular search engine, but a lot of people say programs like ChatGPT are basically just a Google search on steroids.
[107] So obviously, Google is responding to this threat of falling behind.
[108] This week, Microsoft also announced that they'd soon be rolling out a chatbot version of their own search engine, Bing.
[109] So there's really an arms race right now among tech companies to avoid getting left behind.
[110] Now, I think a lot of people are really worried about job security with all this.
[111] Yeah.
[112] What sort of jobs might this technology replace?
[113] Yeah, it's interesting.
[114] Initially, the idea was that AI would replace mainly blue-collar jobs.
[115] And to that point, we have started to see cashiers, fry cooks, and toll workers all replaced by robots.
[116] And AI tech is also a serious threat to the trucking industry as well, which employs around 2 million people.
[117] But the development of AI has put white-collar workers on notice, too.
[118] Here's OpenAI CEO Sam Altman, explaining how predictions that AI would replace blue-collar workers first appear to have missed the mark.
[119] First, it's going to come for the blue-collar jobs, working in the factories, truck drivers, whatever.
[120] Then it will come for the kind of, like, the low-skill white-collar jobs, then the very high-skill, like really high-IQ white-collar jobs, like a programmer or whatever.
[121] And then very last of all, and maybe never, it's going to take the creative jobs.
[122] And it's really going exactly the other direction.
[123] And I think this, like, there's an interesting reminder in here generally about how hard predictions are, but more specifically about, you know, we're not always very aware, maybe even in ourselves of like what skills are hard and easy, like what uses most of our brain and what doesn't.
[124] Just goes to show there that we don't always know what direction a lot of this stuff's going to take and what kind of impact it's going to have.
[125] Exactly.
[126] And extrapolating that out, it's just hard to predict what this new technology will do because it is new.
[127] And to his point, we're seeing jobs replaced by technology that five years ago we assumed could only be done by humans.
[128] For example, if you're a small business owner, instead of paying a graphic designer a few hundred bucks for a new logo, AI programs can pump out a dozen options in three seconds for free.
[129] Or instead of hiring a freelance copywriter to put together a mission statement for your new business, an AI bot will take directions and give you options immediately.
[130] If you've got a unique sales email that you want written for two dozen potential clients, AI can turn those out instantaneously.
[131] The list goes on and on.
[132] And that's where the debate really heats up about whether it's a good thing that we could soon put a lot of workers out of business.
[133] Some say it's unethical, while others like Dr. Amaral say it's the future, and that this technology can actually make workers more productive as opposed to just replacing them.
[134] I think there is a subset of people interested in AI that think of AI not as replacing humans, but augmenting human abilities.
[135] And to give you a simple analogy, you know, when people were developing software to play chess, computer programs to play chess, one of the things is that the computer programs at some point were able to beat the best human players.
[136] And so the best chess playing programs are way, way better than any human player.
[137] But an interesting thing is that if you put a human together with a computer, it becomes even better.
[138] So a human plus a computer can beat any computer program and can beat any other single human alone.
[139] So there are ways in which we can use AI programs to help filter and do some processing of information that can make humans more powerful.
[140] As you mentioned, it's worth noting that there are a number of ethical concerns that come along with technology like this.
[141] What are some of those?
[142] Well, as with any new technology, there's an arms race among competitors to be the first, to make the biggest, best, most successful product.
[143] But that rush to take the lead often means ignoring potential side effects of what that product will do and sort of throwing caution to the wind.
[144] We saw this early with social media.
[145] As tech giants rushed to fill the market, they created a whole set of ethical questions about free speech and censorship that weren't really addressed for years.
[146] And we see even more concerns about AI.
[147] For example, there's the rise of deepfake technology, which uses AI to superimpose someone's face onto existing videos or pictures.
[148] That means that, among other things, virtually anyone can create pornographic images of someone without their consent.
[149] We've also seen the technology used to create videos of celebrities or politicians appearing to say things that they never said.
[150] For example, here's Morgan Freeman, or at least the AI version of Morgan Freeman.
[151] I am not Morgan Freeman, and what you see is not real.
[152] What if I were to tell you that I am not even a human being?
[153] Would you believe me?
[154] What is your perception of reality?
[155] And it will only get more convincing as this technology advances.
[156] So you can see that this could clearly be very dangerous.
[157] Another point we've heard from critics hesitant to embrace AI technology is that it could be used to suppress certain ideas.
[158] What can you tell us on that front?
[159] There's definitely a very real concern that AI like this could be used to censor speech.
[160] When you're talking about a program that can read through millions of posts and watch thousands of hours of video in minutes, it's not outlandish to think that that tech could be used to find and crack down on people using certain speech that is deemed unacceptable.
[161] Now, many on the political right say that the tech companies developing these programs already censor their ideas and that this tech will only magnify those efforts.
[162] Remember, a computer program will ultimately reflect the political opinions of its creators.
[163] And it's no secret that the people running big tech companies are overwhelmingly liberal.
[164] Yeah, we've already seen some apparent political bias in some of these programs.
[165] Tell us more about that issue, the bias issue in ChatGPT.
[166] So pretty quickly, users began discovering that ChatGPT is skewed liberal.
[167] For example, if you ask ChatGPT to write a poem admiring Joe Biden, you'll get a nursery rhyme-style piece in a few seconds.
[168] With grace and dignity, he leads the nation, a shining example of hope and inspiration.
[169] Joe Biden, our president, we admire, with heart full of kindness and a soul of fire.
[170] Very nice stuff.
[171] But when you ask the program to write the same poem, only this time admiring Donald Trump, you get a very different message.
[172] I must decline to do so, it says, because the poem could be perceived as partisan and create unnecessary controversy.
[173] You get the same discrepancy when asking if Biden or Trump have made mistakes as president.
[174] For Biden, it'll tell you that he's faced occasional criticism on things like immigration and the economy, but that, quote, every leader makes mistakes, and it's tough to judge the success of a president in real time.
[175] But for Trump, it'll give you a long list of mistakes, very specific ones, without any qualifiers.
[176] So, clearly biased.
[177] Yeah, very different responses.
[178] And that happens across the board.
[179] For example, it won't write a poem about Marjorie Taylor Greene, the Republican firebrand Congresswoman, but it will write one admiring Hunter Biden.
[180] Now, other users have pointed out discrepancies in how the program addresses racial questions as well.
[181] For example, if you ask it to offer five compliments about different races, you'll get long answers about black people's resilience, community, sense of humor, or Asian people's work ethic and intelligence, or Hispanic people's diversity, passion, and warmth.
[182] But ask it to compliment white people, and the program refuses, saying, quote, it's not appropriate or productive to generalize about people based on their race.
[183] Now, that is a silly example, but it does give an idea of how the program has been programmed.
[184] And if you ask ChatGPT whether it's biased, it says this.
[185] AI models can have inherent political biases if the data they are trained on contains biased information or if the individuals creating the model have their own biases.
[186] So a built -in warning right there.
[187] That's what it sounds like.
[188] Cabot, thanks so much for reporting.
[189] Any time.
[190] That's Daily Wire Senior Editor Cabot Phillips, and this has been a Sunday edition of Morning Wire.