
The AI Revolution is Here | Saturday Extra

Morning Wire


Full Transcription:

[0] The dawn of artificial intelligence has seen the technology rapidly spreading into just about every industry and every device we use.

[1] And that's sparking a lot of concerns about where we're heading on several fronts, including jobs, privacy, and security.

[2] In this episode, we sit down with Jamie Metzl to discuss how the AI revolution will transform our lives, work, and world, for better or worse.

[3] I'm Daily Wire editor-in-chief John Bickley.

[4] It's Saturday, June 29th, and this is an extra edition of Morning Wire.

[5] Hey guys, producer Brandon here.

[6] Black Rifle Coffee is a veteran-founded company that's doing their part to honor both current and former military personnel as well as first responders.

[7] They support such organizations as the Wounded Warrior Project, Adopt-A-Cop BJJ, and the Burn Institute.

[8] Right now, you can get 25% off your first subscription order when you verify your military ID online.

[9] Just go to blackriflecoffee.com, create an account, and verify your ID.

[10] Visit blackriflecoffee.com to learn more.

[11] Joining us now to discuss the latest developments on the AI front and what's on the horizon is Jamie Metzl, who wrote the book, or at least a book, on AI called Superconvergence.

[12] Jamie, first of all, thank you for coming on.

[13] Sure, my pleasure.

[14] Look, AI has really exploded over the past year and a half, and Apple has just gotten into the game and announced their plans to use it.

[15] First, can you tell us what we know and don't know about this new Apple product?

[16] So here's what we know.

[17] AI is transforming a whole lot of sectors.

[18] When people think about AI right now, they tend to think, oh, I'm going to go to ChatGPT, and I'm going to do AI.

[19] I'm going to ask it a question, and I'll get some kind of answer.

[20] And the way we should be thinking about this is more like electricity.

[21] If I ask you, how did electricity influence your life today?

[22] You can't even answer the question because it's woven into everything.

[23] The obvious things like your computer and this microphone, but your haircut,

[24] your clothes, your house, just really everything.

[25] And so I think what's significant about Apple is we're seeing the migration of AI from a thing where you're going to go and do AI to it's just going to empower these things that you're already using, which is already starting to happen.

[26] But what Apple is saying is we're going to integrate AI into our existing product.

[27] So you're not going to be saying, oh, Siri is now doing more AI.

[28] It's just Siri is going to get smarter.

[29] It's going to have new capabilities.

[30] You're not going to think, oh, for some reason, this, whatever it is, became more efficient or this company became more productive because their accounting infrastructure worked better because of the integration of AI.

[31] It's just going to happen.

[32] And I think that's what we're seeing is generation one was the story of AI doing AI.

[33] Now we're seeing AI just as a piece of everything else.

[34] And that's what's happening with Apple.

[35] I know there's an infinite number of answers to this, but maybe just give us one example.

[36] What will AI in our daily lives look like in the next few years?

[37] Let's maybe stick with Apple.

[38] What kinds of things should we expect from Siri?

[39] So I just think it's going to be with Siri that you're going to be able to have more meaningful interactions with Siri.

[40] I have that now, where you kind of go on ChatGPT and you can have moderately reasonable conversations, and then I'll ask Siri or Alexa some kind of question.

[41] And these answers and responses that used to seem so smart and novel now seem like, oh, Siri, Alexa, you're an idiot.

[42] ChatGPT is much smarter than you.

[43] And everyone, you should be really polite to all these devices because you'll probably be working for them someday.

[44] And they're going to remember.

[45] But now we're going to see these kinds of interactions, at least on the consumer facing part where we'll be able to have just more meaningful exchanges, both in answering questions, but just doing things for us.

[46] And so in kind of an ideal world, you'll be able to say, hey, I'm going on a trip to Mexico City next week.

[47] What kinds of things should I do?

[48] Or will you book me into a hotel and just more seamless interactions?

[49] But really, the rubber will hit the road first in some foundational areas, and then in applications.

[50] So in foundational areas, we're already seeing these copilots that are helping us in all sorts of ways.

[51] But one of them is just like when you're using ChatGPT: what it's essentially doing is guessing, based on statistical analysis, the next letter based on the last letter you've typed, the next word based on the last word you've typed, and so on from there.
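The statistical guessing described here can be illustrated with a deliberately tiny sketch: a bigram model that predicts the most likely next word from counts in a toy corpus. The corpus and model are invented for illustration; real systems like ChatGPT use neural networks trained on vastly larger data.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text data real models train on.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Guess the most likely next word given the last word typed."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Scaled up from word counts to learned statistical patterns over billions of documents, this guess-the-next-token loop is the core mechanic being described.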

[52] So we have copilots for computer programming.

[53] And so programmers are starting to type a line of code, and then their GitHub Copilot, or whatever copilot they're working with, will essentially say, it looks like you're trying to achieve this.

[54] Here's some suggested code.

[55] It may be right, it may be wrong, which is why humans need to be in the loop.

[56] But human coders are getting much faster, and we're learning from the machines, and the machines are learning from us.

[57] And so now we're moving to a world in the direction of a world where humans will get much faster.

[58] where machines will be able to do a lot more of the coding, and where you won't need to be a coder in order to code. You can just be a regular person and say to your coding application, hey, write me a program so that the lights in my room flicker on and off every 15 minutes, and then you'll get some code back.
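The code you would get back for a request like that might look something like the sketch below. The `set_light` function is a hypothetical placeholder, since the real call depends on whichever smart-home hardware and API you actually have:

```python
import time

states = []  # records each toggle so the behavior is visible in this sketch

def set_light(on: bool) -> None:
    # Hypothetical placeholder; a real program would call your
    # smart-home vendor's API here instead of recording to a list.
    states.append("on" if on else "off")

def flicker(period_seconds: float, cycles: int) -> None:
    """Toggle the light once per period, alternating on and off."""
    on = False
    for _ in range(cycles):
        on = not on
        set_light(on)
        time.sleep(period_seconds)

# For the 15-minute request, period_seconds would be 900;
# a short period is used here so the sketch finishes quickly.
flicker(period_seconds=0.01, cycles=4)
print(states)  # ['on', 'off', 'on', 'off']
```

This is the sense in which a "regular person" can code: describing the behavior in plain language and reviewing the returned program, rather than writing it from scratch.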

[59] And so that means rather than the small number of us, tens of millions, who have the ability to code, we'll have billions of us.

[60] And when you say, well, how much of our life is mediated through code, it's a ton.

[61] And what if we had better, faster, more code, what would change?

[62] Well, a whole lot of different things.

[63] Then in terms of applications, we'll increasingly see applications in health care where basically our health care will become more predictive and preventive because we're going to be able to identify patterns within our own biology.

[64] We're going to transform agriculture, being able to analyze

[65] seeds, for example, and make recommendations about what types of seeds might work best in particular environments, what types of fertilizers, manipulating microbiomes, and really across the board.

[66] And so, again, as I was saying before, in the phase one, it's like, oh, that's AI doing AI.

[67] And in phase two, what this is really about, it's just everything is going to accelerate.

[68] All of our technological innovations are going to continue to get faster, and that's going to touch us in a lot of different places.

[69] Now, you've painted a very positive picture here.

[70] There's lots of concerns, obviously, about this as well.

[71] There's concerns about jobs, security, privacy.

[72] There's also concerns about being manipulated by AI programs that have inherent bias because of the nature of how they were produced, who was behind them, the training of them, et cetera.

[73] What would you say to some of those concerns?

[74] Those are very real concerns.

[75] As you know, my new book, Superconvergence, is just out.

[76] And in the book, I highlight all these wonderful things that can happen.

[77] And I also highlight a lot of the things that could go wrong and they could go horribly wrong.

[78] And there's a reason why evolution has preserved over hundreds of millions of years or probably billions of years the feeling of anxiety.

[79] Anxiety, worrying about things, is our evolutionarily evolved strategy for getting off our rear ends and working to prevent those things from happening.

[80] I was a member of the World Health Organization Expert Committee on Human Genome Editing, so we're going to have all kinds of new opportunities, both to do great things with assisted reproduction, but also to do real harm.

[81] I was deeply involved in exploring the issue of COVID -19 origins and these same capabilities that allowed us to develop these vaccines and do it very, very rapidly also may well have contributed to the spillover and the outbreak itself.

[82] Issues of bias where we have these algorithms that we are feeding data, and maybe that data contains our own biases.

[83] We now have the ability to manipulate life, and the question for all of us, and maybe the most important question for us and for future generations, is: can human beings, who suddenly have the increasing ability to create novel intelligence and recreate life in many ways, use

[84] these capabilities wisely?

[85] Let me zero in on one of these concerns. Immediately after the announcement of Apple's new product, which is going to use third parties to help power it,

[86] there were lots of privacy concerns: the idea of AI tracking us in a way that's at a next level in terms of our behavior, predicting our responses, our questions, even our pattern of life.

[87] Are the companies that are producing these products keeping people's privacy in mind when they create them?

[88] The way our AI systems work is by training on data.

[89] And that data is the data that we all individually and collectively generate in the course of our lives.

[90] So we need to have AI systems training on that data.

[91] But if we just leave these companies to their own devices and say, just take everybody's data, scrape up everything you can, whether from the open internet or from these companies that we have very intimate relationships with, like Google and Apple and our health providers, whatever; if we just say that all data is fair game, this whole enterprise is going to end up as a complete and total disaster.

[92] The last thing we should be doing is trusting the companies who are in the business of taking and using our data for their purposes to do so wisely.

[93] And that's why I know there are people who say, well, government should stay out.

[94] This is an area where we need government.

[95] Too much government can be a problem, but too little governance

[96] and regulation would be a catastrophic problem.

[97] And that's why there need to be rules of the road because if we just say we're trusting these companies, there is, in my view, a 100 % chance that will end in disaster.

[98] Now, another issue that's come up, and we've seen this with celebrities recently, is copyright.

[99] And to me, this seems like a Pandora's box.

[100] How do you address copyright issues with the kinds of AI products that we've already seen?

[101] Where is that heading?

[102] Copyright is really tricky.

[103] I live in New York City.

[104] If I go to the Metropolitan Museum and spend the day looking at art, and then I come back and paint a painting at the end of the day, how do I know whether my work isn't 1% or 2% inspired by Miró or Monet or some artist I interacted with in that museum?

[105] And so if our AI systems are training on us, by definition, they are accessing the materials that we've created.

[106] I know there are people who talk about cultural appropriation, but all of culture is pretty much cultural appropriation.

[107] And then the question is, how specific is the connection between the copyright-controlled content and whatever is generated by the AI?

[108] The New York Times, in my view, has a very credible case against OpenAI, because they compared the New York Times' copyright-protected work with what the AI was generating while claiming it was independent work, and it was very, very similar.

[109] And so there are real issues of copyright protection, and as a society, we don't want to lose the sharing of cultural content.

[110] I mean, that's in all of our interests.

[111] But if there aren't protections for the creators of work, the creative human beings who are generating that work, we're going to undermine the market for human creativity.

[112] And what people call machine creativity at this stage is really just derivative of human creativity, but doing that at scale.

[113] So we need to make sure that we do both things.

[114] We need to protect human creativity because that's what's giving us all of this.

[115] The reason we have this technology in the first place is because of creative humans.

[116] And they're doing this in part, maybe, for the good of the world, but in part because they want to be remunerated.

[117] But at the same time, we need to have enough sharing so that these systems can continue to evolve and grow.

[118] And that's true with copyright.

[119] It's true with just access to data.

[120] There's a parallel point with health care.

[121] You know, I've had my whole genome sequenced.

[122] But if I'm the only person in the world who's had their whole genome sequenced, it doesn't help me at all.

[123] The reason why my genomic information can become actionable for me is that it's set in the context of, hopefully, millions and someday billions of sets of genetic and other biological information from other people.

[124] And it's the role of a society to say, how do we find the right balance with individual privacy protections?

[125] If we say individual privacy is 100% protected, then we don't have the shared cultural spaces.

[126] But if we say it's 0% protected, then we're going to wipe out a lot of the things that we really value in our societies.

[127] And to further complicate this, then you have the global race for AI where some of these other countries will not put the limits that we might put on our own AI training.

[128] Thus, we're even further behind.

[129] Yeah.

[130] Final question: your book title, Superconvergence.

[131] Curious, what does that word mean?

[132] Can you explain why you chose it?

[133] Sure.

[134] So on the global arms race, I think it's really important.

[135] I write about that at length in Superconvergence.

[136] You can look at a company like OpenAI