Morning Wire XX
[0] In an unprecedented move this week, hundreds of influential figures in the tech industry, including Elon Musk and Steve Wozniak, banded together to demand a six-month pause on the development of artificial intelligence, warning of potentially dire consequences if boundaries aren't established immediately.
[1] In this episode, we talk with James Poulos, editor of The American Mind and the author of Human Forever, about this demand for a pause and the implications of the rapid development of AI.
[2] I'm Daily Wire editor-in-chief John Bickley with Georgia.
[3] Georgia Howe.
[4] It's April 1st, and this is a Saturday extra edition of Morning Wire.
[5] Hey guys, Reagan here, and right now, you can save over 70% off GenuCel's most popular package.
[6] Just in time for warmer weather, GenuCel is including both their ultra retinol and dark spot corrector in their most popular package.
[7] All orders are upgraded to free shipping, and every subscription order includes a complimentary spring spa box with three free spa essentials.
[8] Visit genucel.com slash wire and use code wire at checkout to claim the special offer.
[9] That's genucel.com slash wire.
[10] Joining us now to discuss the AI pause and what's inspired it is James Poulos, editor of The American Mind and author of Human Forever.
[11] Hi, James.
[12] So some tech industry notables just petitioned for a six-month pause on all advanced AI development.
[13] They want more boundaries to be considered for the new technology.
[14] What are their concerns here?
[15] Well, I think the concerns are pretty simple.
[16] OpenAI has moved very quickly, not even as fast as it originally
[17] wanted to, but still very fast, nonetheless, to develop and push out ChatGPT in its various instances.
[18] And at this point, it's capable of interacting directly with the internet, something that people across the spectrum of analysis on tech and politics have had justifiable concerns about for some time.
[19] And so it's complicated, because these are competitors.
[20] And to some degree, this is folks in tech who want some time to catch up.
[21] But it's also animated by broader concerns about what happens if you do create something that is both very good at analysis and manipulation online, but also totally divorced from any kind of human constraint.
[22] And what are some of those issues?
[23] One of the things that sprang to mind immediately when we started to experiment with ChatGPT ourselves was plagiarism and copyright issues.
[24] What are some of the potential problems with these new AI technologies?
[25] Yeah, there are basic concerns ranging from copyright to other sorts of legal structures that we have in place.
[26] It's going to be difficult to understand how to apply existing legal doctrines to something that is an active entity but is not a human being, is not even a legal person. That raises all kinds of nettlesome questions of the sort that usually take, you know, years or decades rather than weeks or days for the courts to process.
[27] But I think the main concern is that we simply don't know.
[28] We don't know what the immediate ramifications of this kind of technology sort of gaining access to the entire networked world might be.
[29] It's very hard to predict what would happen within a month, much less what would happen over a period of years.
[30] Now, in the letter, they specifically say we should not delegate the policing of AI technology to unelected tech leaders.
[31] They're calling on governments to be involved.
[32] Is there need for any new AI legislation?
[33] Well, look, the kind of legislation that is needed is, frankly, digital rights amendments at the state and the federal level, things that, in the clear and broad language of the Bill of Rights, protect the core fundamental rights to the use and the ownership of digital technology, rights that are consistent with and really emanate out of, I think, the First and Second Amendments in particular.
[34] So if ordinary American citizens lose their protections over their rights to keep and bear powerful computational devices, the basic kinds of technology that you need to be an active participant on the internet, freely speaking, freely associating, then it's really not going to be possible to be American in a fundamental sense on the internet in the digital space.
[35] Now we've already seen the government giving out a lot of money actually to develop AI that would potentially censor speech online.
[36] Are those some of the concerns here with this new technology?
[37] Well, they're absolutely concerns here.
[38] They are moving very quickly now to concentrate digital power and to use that power to bring control over Americans' speech, Americans' association, Americans' finances, all under one roof, a roof which is not, quote unquote, the people's house, it's not Congress, it is the administrative state or the deep state or the intelligence apparatus.
[39] That is simply fundamentally at odds with and hostile toward our constitutionally guaranteed form of government.
[40] I don't know if it's going to be possible for Congress to take the needed action.
[41] They should.
[42] But regardless of how fast they move or whether they move at all,
[43] it's time for state legislators to step up.
[44] What are the chances of any of these companies that are on the forefront of AI development deliberately slowing down?
[45] Really what's important for people to understand is there are a lot of different factions sort of jockeying for advantage or for control under the surface.
[46] So, you know, some of these companies just want to be at the front of the line.
[47] Some of the people involved, you know, ultimately just want to take more control over the way governance is built in the digital age.
[48] And other people have genuine concerns about the real ultimate downside of a bunch of networked machines sort of landing on decisions on their own that no human being would want them to land on.
[49] So it's really a mix.
[50] And when you look under the surface, it's not quite as simple as big tech bad or technology itself bad.
[51] What is at stake here is, you know, who can you trust at a deep soul level?
[52] Who can you trust to advance technology in a way that does not mechanize us, that does not take away our humanity, and that does not make us pawns in a system that ultimately none of us can control?
[53] All right, so what about regulation when it comes to other nations?
[54] Do we have any concern from the U.S.'s perspective if other nations don't regulate and we do?
[55] Well, this is the concern.
[56] I mean, these technologies only exist because of the research and development agenda of the military intelligence complex of the United States of America.
[57] We've been at the forefront.
[58] We've done it because we wanted to produce better weapons.
[59] We came up with the nuclear bomb.
[60] That was really powerful, but the use case was not so great.
[61] There were a couple more improvements, the hydrogen bomb and so on and so forth.
[62] Ultimately, there was an intense pressure, I think, for the military industrial complex to justify its existence, justify its huge budgets by delivering a new kind of weapon that didn't involve, you know, blasting millions of people to smithereens, that was a little softer touch and yet more powerful.
[63] And they did this through digital technology.
[64] DARPA and the CIA, you know, these agencies, these organizations have been foundational to the way that our technology developed.
[65] And a lot of the power of what we consider to be tech entertainment was really the result of just spinning military research and development into consumer electronics.
[66] So that's the backdrop.
[67] And it's important to understand that because whether it's Vladimir Putin or Elon Musk or whoever saying, hey, whoever controls, whoever masters this technology is probably going to master the world,
[68] hold sway over the ordering of the world in the digital age.
[69] So it's no surprise that any country that kind of has its own digital infrastructure independently is moving as fast as they can to ensure that they remain sovereign over the technology within their ambit.
[70] So China, you know, Russia, the EU, India, Israel, US, all of these digital powers are moving as fast as they can to ensure that the technology does not sort of just whip their sovereignty apart.
[71] And a big part of that is regulation and law, of course.
[72] You look at the EU, you know, Europe hasn't been very good at innovating on tech ahead of the U.S. or Asia.
[73] So they're positioning themselves as like, well, if we can't be the best innovators, we can be the best regulators, we're going to be tough on tech, we're going to regulate it down into the minutiae, we're going to fine companies.
[74] And the different civilizations, different states that kind of control different big areas on the globe, you know, they're pursuing different strategies.
[77] And I think the big risk here is that they're going to split into two main groups.
[78] We're starting to see that already.
[79] You know, what's the backdrop to the war in Ukraine and the way that the superpowers have been squaring off again?
[80] I think the backdrop is there's intense pressure to gravitate around the American digital umbrella or the Chinese digital umbrella.
[81] And as commonsensical as that is, there's also a risk that it's just going to result in a kind of a new spiral toward mutually assured destruction, not of the nuclear kind, but now of the digital kind.
[82] Now, another concern is the impact on the workforce, AI being able to replace people's jobs or minimize how much they have to work.
[83] What are the legitimate concerns here in terms of impact on people's employment?
[84] Well, I think they could be profound.
[85] And depending on your line of work and depending on, you know, who you think is contributing to the main distortions or destruction of America as we've known it,
[86] you might have reason to sort of grimly chuckle along, you know, ho-ho, those coders or those lawyers thought they were so powerful and now look at them.
[87] And it's true that a lot of white-collar jobs are threatened by AI or whatever you want to call it.
[88] There's no doubt that a lot of these technologies are being pushed expressly on the basis of clearing out what is increasingly seen as dead weight in the economy.
[89] And again, there's no question that, you know, with a lot of these creative class jobs or laptop jobs, it is sort of questionable what kind of concrete contributions people with these jobs are actually making to our productivity.
[90] Are they actually encouraging growth?
[91] Are they doing things economically that are really going to be generative?
[92] And so, you know, there's a certain kind of gallows humor associated with seeing these machines make these kinds of strides.
[93] But ultimately, you know, the ball is in our court as American citizens
[94] to look at what's going on. It's not really a mystery what direction we're headed in.
[95] And it's not really a mystery that we're being pushed in that direction without the kind of basic participation and guardrails that are part and parcel of citizen politics.
[96] And to say, you know, wait a minute, how much of our humanity are we willing to throw away?
[97] How much of ourselves are we willing to substitute in public and private life with these automated simulators, with machines that, you know, are indifferent toward our fate, don't really understand who we are, and are simply doing things that they're told and increasingly that they tell one another to do?
[98] So look, our humanity is over if we want it to be.
[99] If we are willing to take our future and, you know, blast it off into outer space, in the bad sense, we can do that.
[100] I recommend that we don't do it.
[101] Ultimately, this is a moment of responsibility, a moment of reckoning for us, where we really need to look at our humanity and say, guess what?
[102] You know, warts and all, this is still a sacred gift.
[103] And to squander that gift or to try to swap it out for something robotic, for a distributed, disincarnate intelligence that is unlike any living thing, that's on us.
[104] And so it's a time for choosing.
[105] Well, an important development that's impacting every single one of us.
[106] James, thanks for joining us.
[107] That was The American Mind editor and the author of Human Forever, James Poulos, and this has been a Saturday Extra Edition of Morning Wire.