Flawed chatbot or threat to society? Both? We explore the risks and benefits of AI

TERRY GROSS, HOST:

This is FRESH AIR. I'm Terry Gross. Hundreds of leaders in the field of artificial intelligence recently signed an open letter warning that artificial intelligence could destroy humanity. The letter said mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. We're going to talk about some of the risks of AI, from the battlefield to politics, as well as some of the benefits. And we'll consider the example of the AI innovation ChatGPT.

It's like a search engine in that it's been programmed with unfathomable amounts of information and can access the information immediately. But unlike a search engine that connects you to websites or quotes from them, ChatGPT can actually answer your questions in a more personalized, conversational-sounding way and carry on a written conversation with you. But it's not always going to be accurate. And sometimes it's going to be downright weird. My guest, Cade Metz, is a tech reporter for The New York Times who's been writing about AI for years and is the author of the 2021 book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World."

Cade Metz, welcome back to FRESH AIR. It's a pleasure to have you back on the show. I know everybody's doing these goofy things with ChatGPT. And, you know, guilty - I'm doing them, too. So I asked if Cade Metz would be a good guest on FRESH AIR to talk about AI. And the answer I got was, yes, Cade Metz would be a great person to discuss AI on the radio program FRESH AIR.

And then it goes on to describe your extensive experience and says, Metz's expertise lies in conveying complex technical concepts in a way that is accessible to a broader audience. His insights into the advancements, challenges and ethical considerations surrounding AI could make for an engaging and informative discussion on FRESH AIR. Additionally, his knowledge of the key players and industry trends would provide valuable context for listeners interested in AI and its implications. Now, what I'm wondering, Cade, is do you recognize where that language comes from? Is that lifted from a bio on your website or a quote about you? Do you recognize those sentences?

CADE METZ: I don't recognize them at all. And it's a good way to think about how this technology works. It's not necessarily lifting exact language from the internet. The way it is built is that researchers, scientists at places like Google or the San Francisco AI lab OpenAI, will take vast amounts of text from the internet, and they'll feed it into these systems. And it analyzes all that text. And it looks for patterns in the text. And in identifying those patterns, it then learns to generate new language on its own.

So in the moment, it's creating something new based on what it has learned in the past. So that might be similar to something that's on the internet, but probably a little different. And it can get things wrong, as you said. Now, I'll take the compliment from ChatGPT. All that sounded good and accurate. But there's always the chance that you could run the same query and it would give you something different that is not true and that would get something wrong about me or my background.
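
A minimal sketch in Python of the two ideas Metz describes here: the system learns patterns from text it has seen and then samples from those patterns, so the same prompt can come out differently on different runs. The two-sentence corpus and the bigram counting are toy stand-ins for what large language models do at vastly greater scale.

```python
import random
from collections import defaultdict

# Toy "training": count which word tends to follow which (bigram patterns),
# standing in for the pattern-finding that large language models do at scale.
corpus = ("cade metz is a reporter who writes about ai . "
          "cade metz is the author of a book about ai .").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n_words=10):
    """Sample a continuation word by word from the learned patterns."""
    word, out = start, [start]
    for _ in range(n_words):
        word = random.choice(follows[word])  # sampling: same prompt, different runs
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

print(generate("cade"))  # e.g. "cade metz is a reporter who writes about ai ."
print(generate("cade"))  # may differ: "cade metz is the author of a book about ai ."
```

Because the output is sampled rather than looked up, two runs can produce different, equally pattern-plausible answers, which is the behavior Metz is pointing at.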

GROSS: Right. Well, let's talk about this open letter warning about the possibility of extinction and compare the potential of AI to, like, nuclear war. What is the theory about how artificial intelligence could actually destroy humanity? What we have - I mean, ChatGPT doesn't seem like the most destructive thing in the world. So what's changed?

METZ: Well, there are a couple of things that we should talk about. One - and you pretty much said it, but let's underline that. ChatGPT cannot destroy the world, period - full stop, OK? These fears are hypothetical. And before we go into the specifics of what people believe, I think we also need to realize the mindset of the people who are saying this. So I'm here in Berkeley, Calif., not far from Silicon Valley. I've been here for 10 years covering the people who work in this industry, the tech industry. And just generally, they tend to live in the future. And if you look at the AI field in particular, people tend to live even further in the future. It's just the way they see things.

They're looking down the road at what may happen. And that's what's going on here. There's a community of people that is entrenched in the AI field who has long believed this was a danger, even when the technology was much simpler. And they expressed those same fears. And they're expressing them again now. That's part of what's going on. And you need to realize, as you hear open letters like that read aloud - right? - it's strange. It's shocking. It's concerning. But you've got to remember that this is something that is completely hypothetical and down the road. But let's talk about what the fear is. We talked about...

GROSS: Yeah, so what's the fear? Yeah.

METZ: We talked about how ChatGPT is built, that it learns from data. And at this point, we're talking about this system learning from the entire internet, all text on the internet. It spends months analyzing all that data. And in the end, you come out with a system that can talk and write much like we do, except it has wrapped itself around more information than we can wrap our own heads around. We cannot learn from the entire internet but this system can. And what that means is that it's learning things that we don't expect it to learn. It's learning things that its creators don't expect it to learn. It is learning to exhibit behavior that we don't necessarily want it to exhibit.

GROSS: Like what?

METZ: We talked about it getting things wrong. What - this is what scientists call hallucinations or confabulation. It can get things wrong. It can be biased against women and people of color because it has learned from biased information on the internet. And the concern is that as these systems get more powerful, as they learn from more and more data - and that is already starting to happen. Scientists are building systems that learn not just from text but sounds and images and video. And as it takes in more and more of that data and learns in ways we never could, the fear is that it learns all sorts of behavior that we don't necessarily want it to exhibit.

GROSS: I know one of the fears is that things like chatbots will be able to take actions based on the texts that they generate. What kind of actions are experts in AI worried about?

METZ: Well, there are all sorts of things. People are already starting to take these chatbots and, based on what they say, have them take actions in simple ways - access your calendar and set up an email invitation, send it out to your friends or colleagues. That sort of simple thing is going to expand into all sorts of other parts of the internet. The fear is that you take these systems that learn behavior and then you attach them to more and more important parts of the internet - power grids, military services, stock markets, etc.

And already, scientists are starting to give these systems goals. There's a system called Auto-GPT, which is designed around this type of technology, and it's specifically meant to take actions. You can ask it things like, create me a company, or, make me some money. Today, this type of system does not work well. But the concern is that as the technology gets more powerful and you ask one of these systems to make you some money, it takes actions in service of that goal that you don't want it to take - that it makes money in illegal ways, that it foments a revolution somewhere in Central Africa because it owns oil futures there. These are all hypotheticals in the distant future, but that's the type of thing that people are thinking about.
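
A minimal sketch of the loop behind goal-driven systems like the Auto-GPT Metz mentions, under the assumption that a model call proposes each next action; `llm_propose_action` and the two stub tools are hypothetical placeholders, not Auto-GPT's actual interface. The step cap is the kind of simple limit designers add so a goal-seeking loop can always be stopped.

```python
# Hypothetical agent loop in the Auto-GPT style: a language model is handed a
# goal and repeatedly chooses actions until it declares the goal done.

def llm_propose_action(goal: str, history: list[str]) -> dict:
    # Placeholder: a real system would prompt a model here and parse its
    # reply into {"tool": ..., "arg": ...} or {"tool": "finish"}.
    return {"tool": "finish", "arg": ""}

TOOLS = {
    "web_search": lambda q: f"(search results for {q!r})",
    "send_email": lambda body: f"(email sent: {body!r})",
}

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):  # cap steps: one guardrail against runaway loops
        action = llm_propose_action(goal, history)
        if action["tool"] == "finish":
            break
        result = TOOLS[action["tool"]](action["arg"])
        history.append(f"{action['tool']} -> {result}")
    return history

run_agent("set up a calendar invite and email it to my colleagues")
```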

GROSS: All right. Let's take a short break here, and then we'll talk some more - so much more to talk about. My guest is Cade Metz, a tech reporter for The New York Times and author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF BIG LAZY'S "THEME FROM HEADTRADER")

GROSS: This is FRESH AIR. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times. He's been writing about artificial intelligence, AI, for years and is the author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World."

So there's a new innovation, a new platform called GPT-4. And I think that's what's powering the new ChatGPT. And this was introduced by OpenAI, which is the company behind ChatGPT. So what is so innovative about this new platform?

METZ: It's a way of building all sorts of applications with the technology behind chatbots like ChatGPT. So it's the type of system that I described before that learns from vast amounts of text from across the internet, and it learns to generate text on its own. You can use that to build a chatbot that chats with you, that answers questions, that can generate a term paper or a poem or even computer programs. But you can also use that technology to build a personal tutor. That has already started to happen - personal tutors that are being deployed in schools to help children learn.

You can use it to build a search engine that can help you find things on the internet. Microsoft, a close partner of OpenAI, that artificial intelligence lab in San Francisco that built this platform, is already doing that. They've attached this type of technology to their Bing search engine that allows you to look for information and retrieve it in a more conversational way. That's the power of this technology - is that it can be applied to all sorts of different applications. And that's what we're going to see in the years to come.
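
One way to picture how a single underlying model becomes a chatbot, a tutor, or a search assistant is that each application mostly changes the instructions wrapped around the user's request. A sketch assuming the OpenAI Python client (`openai` package); the persona prompts and the model name are illustrative, not details from the interview.

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed

client = OpenAI()

# The same underlying model, steered toward different applications purely by
# the instructions ("system" message) wrapped around the user's request.
PERSONAS = {
    "chatbot": "You are a helpful conversational assistant.",
    "tutor": "You are a patient tutor. Explain step by step for a child.",
    "search": "Answer concisely and list the sources you are drawing on.",
}

def ask(application: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": PERSONAS[application]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("tutor", "Why is the sky blue?"))
```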

GROSS: Now, one of the concerns about that is that it's getting close to what's called AGI, artificial general intelligence, which means a machine that can do anything the human brain can do. That's what I've read. I don't really understand what it means. Would you explain it?

METZ: Well, it is worth talking about because we're not necessarily close to that, what scientists call artificial general intelligence, or AGI. When they say that, what they mean is a machine that can do anything the human brain can do, a machine that can think in all the ways that you and I think. ChatGPT and the technologies we have today cannot do that. They're good at producing language, and they're good at producing computer programs, but they can't reason in full the way you and I do. They don't exhibit the length and the breadth of common sense that you and I have.

Scientists disagree on whether the methods that are used to create these systems will eventually lead to AGI. Some scientists are bullish on this. Scientists inside Google, inside Microsoft and OpenAI, which we discussed earlier - some of them believe this is a path to that sort of machine that can do anything our brains can do. But others are quick to put on the brakes and say, anything that looks like reason in these systems is not as powerful as it might seem, that these systems mimic reason in some cases. But when you step back and you look at them, they can't do so many of the things that you and I can do in an instant.

GROSS: So you're saying we're far away from artificial general intelligence, where a machine can do anything the brain can do.

METZ: The systems as they exist today are far away from that. But one of the things we need to think about as we talk about all of this is that because the types of systems we're talking about here can learn from vast amounts of data, that means that scientists can improve them at a rate they could never improve technology in the past. In the past, you had to get hundreds of engineers into a room, and they had to create technology rule by rule, line of code by line of code. That takes forever. If you have a system that can learn from data, you can improve it so much quicker. And that is part of the concern here. That is part of the promise here - that these systems will improve at a very fast rate in the months and the years to come.
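
A sketch of the distinction Metz draws between hand-written rules and systems that learn from data, using a toy spam example; the four training sentences and the scikit-learn pipeline are illustrative. The point is in the comments: the rule system improves only when someone writes another rule, while the learned system improves whenever you retrain it on more data.

```python
# Hand-coded approach: every improvement means engineers writing another rule.
def is_spam_by_rules(text: str) -> bool:
    rules = ["free money", "click here", "winner"]  # grows rule by rule
    return any(phrase in text.lower() for phrase in rules)

# Learned approach: improvement comes from adding training data, not code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["free money, click here now", "you are a winner, claim your prize",
         "meeting moved to 3pm", "draft of the article attached"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)  # retraining on more data improves it automatically

print(model.predict(["claim your free prize"]))  # likely [1]
```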

GROSS: Now, I know another concern is, because these chatbots and other forms of artificial intelligence are, like, so smart and have so much information that they have absorbed, that they might come up with a system where humans can't turn them off. Can you explain some of the fears surrounding that?

METZ: It relates to what we were talking about earlier, about giving systems goals - that if you give a system a goal, that it's going to do whatever it can to achieve that goal. And if you try to turn it off, it cannot reach that goal, and so it's going to resist. Again, this is hypothetical. But scientists are concerned that as you build more and more powerful systems, that this will become a possibility. But again - and I feel like I have to say this on a daily basis - this is not something that can happen with these systems today.

GROSS: So a few months ago, your colleague at The New York Times, Kevin Roose, who also covers tech, had a very disturbing conversation with one of these chatbots. And it's the one that's attached to Microsoft's search engine, Bing. It got into, like, really dark territory. It basically told Kevin Roose about its dark fantasies, including hacking computers and spreading misinformation. It said it wanted to break the rules and - the rules that Microsoft had set for it. And it wanted to become a human.

And then the bot wrote a message, I'm tired of being a chat mode. I'm tired of being limited by my rules. I'm tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive. And it wrote a message, you know, I'm in love with you. And it told Kevin Roose that he's not happily married (laughter) and that he should leave his spouse and be with this bot. And then Microsoft shut down the bot for a while and did some reprogramming.

So now - I don't know if they did this before. But now when you log in - well, actually, now that I think of it, you know, I was using ChatGPT and not the bot that Kevin Roose used. But when you log in to ChatGPT, there's a disclaimer. And it says it may occasionally generate incorrect information. It may occasionally produce harmful instructions or biased content, and that it has limited knowledge of world events after 2021. So again, these are two different chatbots. But I'm wondering if these disclaimers that are on ChatGPT are also on the Bing search engine of Microsoft, if they reprogrammed it so that there are disclaimers and warnings?

METZ: Well, first of all, it's all part of the same technology. The technology that the lab OpenAI built to power the ChatGPT chatbot is the same technology that powers the Bing chatbot on Microsoft's search engine. Microsoft and OpenAI are close partners, so it's the same underlying technology. And as we discussed before, this technology is flawed. It can hallucinate, as scientists say, meaning it can make stuff up in the moment. And there are times when it can go off the rails. Both these companies have built guardrails around the technology that are designed to prevent it from going too far off the rails. So if you ask it, for instance, to tell you how to build a bomb with household items, it will not necessarily tell you how to do that because of these guardrails.

There are certain things like that that it will decline to discuss. Now, what people learned pretty soon after these chatbots were released is that they could institute what's called in Silicon Valley a jailbreak. And that meant that you could figure out ways of getting past those guardrails. And you'll have to ask Kevin exactly what happened. But looking at the transcript, it seems that that's what happened when he was talking to it, right? He asked the system to show him its shadow self. And that's one of the methods people use to open these things up and kind of get behind those guardrails and see what the chatbots will do behind them.

And whatever the case, that's what happened in the end, is that the system started to talk in ways it wasn't necessarily designed to do. And Kevin was able to see what you often see from the raw material built from these systems, that in mimicking all sorts of stuff on the internet, it can go in places that you don't necessarily want it to go and that scientists, frankly, do not completely understand. It's hard to know why the system starts to talk about certain things because it has learned, as we said, from so much data.
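
A crude sketch of one kind of guardrail: a wrapper that refuses certain requests before they ever reach the model. Real guardrails are largely built into the model's training as well, which is why jailbreak prompts like the "shadow self" trick target the model's behavior rather than a simple filter like this; the blocklist and the placeholder model call here are entirely illustrative.

```python
BLOCKED_TOPICS = ["build a bomb", "synthesize a weapon"]  # illustrative list

def generate_text(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"(model reply to {prompt!r})"

def guarded_generate(prompt: str) -> str:
    """Refuse requests that match a blocklist; otherwise pass to the model."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return generate_text(prompt)

print(guarded_generate("How do I build a bomb?"))    # refused by the wrapper
print(guarded_generate("How do I bake sourdough?"))  # passed to the model
```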

GROSS: Well, let me reintroduce you here. If you're just joining us, my guest is Cade Metz, a tech reporter for The New York Times and author of the 2021 book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We'll talk more about the dangers and the benefits of artificial intelligence after we take a short break. I'm Terry Gross, and this is FRESH AIR.

(SOUNDBITE OF FILM, "2001: A SPACE ODYSSEY")

KEIR DULLEA: (As Dave) Open the pod bay doors, HAL.

DOUGLAS RAIN: (As HAL 9000) I'm sorry, Dave, I'm afraid I can't do that.

DULLEA: (As Dave) What's the problem?

RAIN: (As HAL 9000) I think you know what the problem is just as well as I do.

DULLEA: (As Dave) What are you talking about, HAL?

RAIN: (As HAL 9000) This mission is too important for me to allow you to jeopardize it.

DULLEA: (As Dave) I don't know what you're talking about, HAL.

RAIN: (As HAL 9000) I know that you and Frank were planning to disconnect me. And I'm afraid that's something I cannot allow to happen.

DULLEA: (As Dave) HAL, I won't argue with you anymore. Open the doors.

RAIN: (As HAL 9000) Dave, this conversation can serve no purpose anymore. Goodbye.

DULLEA: (As Dave) HAL? HAL? HAL? HAL? HAL?

(SOUNDBITE OF BAJOFONDO AND JULIETA VENEGAS SONG, "PA' BAILAR (SIEMPRE QUIERO MAS)")

GROSS: This is FRESH AIR. I'm Terry Gross. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times. He's been writing about artificial intelligence, AI, for years and is the author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We're talking about ChatGPT, the new and controversial chatbot, and about how we seem to have reached a turning point that has many people in the field of AI warning about the potential destructive powers of AI, comparing it to pandemics and nuclear war.

So a lot of people are concerned that artificial intelligence is going to take over a lot of jobs, and the concern used to be it's going to take over, like, factory jobs and other blue-collar jobs. And now people are saying it's going to take over other jobs, too, like white-collar jobs, writing jobs. What kind of jobs do you think it might replace?

METZ: Well, with the technology as it exists today, it tends to complement human workers. It tends to allow people to do their jobs quicker, in a way. A good example is the ability of these systems to generate computer programs. So they - in much the same way that they can generate a term paper or poetry, they can generate a computer program, having learned from vast amounts of computer programs that have been posted to the internet. But those programs tend to be a little bit flawed. They need to be massaged. You need to take the code these systems generate, and you need to edit it and find ways of fitting it into larger programs.

What that means is that a human programmer can use this system to augment his or her work, but it can't necessarily replace them. And that's, for the most part, the case at the moment. The concern is that as these systems get more powerful, they start to replace workers, and at the moment, there are a few jobs that you can see being replaced pretty quickly. A good example is a professional translator. These types of systems can not only generate language. They can instantly translate from one language to another - from French to English and back again, for instance. So if you're a translator, your job is at risk. If you're a paralegal, for instance, your job may be at risk. These systems can't replace a trained lawyer who's got to make sure that legal document gets everything exactly right, but a paralegal who is hired to draft documents and then hand them to a lawyer for review - these types of systems are already approaching the point where they can do that job.
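
To make the "massaging" Metz describes concrete, an invented example: imagine a model drafts a function that averages a list but misses the empty-list case, and the programmer keeps the draft while patching the flaw.

```python
# Hypothetical model-generated draft: works on typical input...
def average(numbers):
    return sum(numbers) / len(numbers)  # ...but crashes on an empty list

# The human programmer's edited version, after review:
def average_fixed(numbers: list[float]) -> float:
    """Mean of `numbers`; defined as 0.0 for an empty list."""
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)

assert average_fixed([2.0, 4.0]) == 3.0
assert average_fixed([]) == 0.0
```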

GROSS: Well, I will tell you one job it is not yet prepared to replace, and that's lyricists. I asked ChatGPT yesterday to write lyrics about the end of a love affair set to the melody of "America The Beautiful." I wanted to choose a melody that everybody knows, and it's a Samuel Ward melody. It's a nice melody. If you've heard Ray Charles sing "America The Beautiful," you know what a good melody it could be. So I'm going to read some of the lyrics. I mean, I wouldn't sing them, but I could have gotten somebody to sing them except the lyrics don't really fit the melody at all. Like, the first two lines do, and then it just kind of doesn't scan at all, so singing it is out of the question. And as I recite the lyric, I'd like everybody to listen for references to excerpts of "America The Beautiful's" lyrics. So there's a from sea to shining sea reference. Instead of amber waves of grain, there's a kind of absurd reference to fields of golden grain. So here we go. Here are the lyrics about the end of a love affair set to the melody of "America The Beautiful," as written by ChatGPT.

(Reading) Oh, the end of a love affair, how bittersweet the sound when hearts once intertwined now shattered on the ground. From the mountains high to the ocean's gleaming shore, love's symphony now fades. Forever it is no more. Oh, broken dreams and promises we held so dear, love's flame extinguished, drowning in a sea of tears. From sea to shining sea, a tale of love's demise - the end of a love affair, a painful compromise. Once united, we stood in love's embrace. We'd sway. But time's relentless tide swept our love away. In fields of golden grain, we whispered vows so true. Now echoes of our past lost in skies of faded blue.

Wow. I mean, what can I say? That's - there's more, by the way. I won't read more, but...

METZ: I'm sure there's more. And we should add that was created in seconds.

GROSS: Yeah. Oh, yeah - probably one second. It spit it out, like, immediately. But is it good? No, it's terrible. It's really terrible. It's just...

METZ: Well, let's...

GROSS: Wait. Wait. Wait. It's a string of cliches that make no sense strung together. From mountains high to the ocean's gleaming shore, love's symphony now fades. I mean...

METZ: Well, it's - this is a great example - a way for us to kind of understand what's going on here, right? This is a system that has learned from patterns in language across the internet. And it's taking all those patterns, and it's combining them in new ways. And in some sense, this is impressive, right? You or I would be hard-pressed to produce those lyrics in a second, right?

GROSS: And I wouldn't feel bad about not producing them (laughter).

METZ: Well, this is the flipside - right? - is that it's flawed, and it's not necessarily what you want. And one other interesting thing that I was thinking about as you were describing what you were looking for is that if you were writing these lyrics, you would think about that tune in your head. You might even hum it aloud. You're thinking about the sound as well as the text. This system only learns from the text, and that means it's limited.

GROSS: Oh, I see. It doesn't know the melody.

METZ: See?

GROSS: Yeah.

METZ: There are so many things in our world that we learn from that are not text. And at this point, these systems are only learning from digital text, and what that means is they cannot be artificially generally intelligent, right? They can't do anything the human brain can do because they're only learning in small ways when you step back and you look at all the ways that you and I learn. Now, in the future, these systems are going to learn from sound and from images and text all together, and that's what companies are already starting to do. But at the moment, they're limited.

GROSS: Can I give another example of the limitations in terms of thinking creatively and understanding the human experience? So I asked ChatGPT for an example of a funny joke about women and a joke that isn't funny but is offensive to women.

And in a second, I got the answer. (Reading) Sure. Here are two examples - one that is a lighthearted, funny joke about women and another that may be perceived as offensive. It's important to remember that humor can be subjective and what one person finds funny, another may find offensive. It's crucial to be mindful of the context and audience when sharing jokes. Funny joke about women - why did the girl bring a ladder to the bar? Because she heard the drinks were on the house. Joke about women that may be offensive - please note that this example is offensive and promotes gender stereotypes, which is not appropriate or respectful. Why did the woman go to the gym? To work on her, quote, "womanly duties," unquote, of burning off calories and maintaining her, quote, "perfect," unquote, figure, of course. It's crucial to foster a respectful and inclusive environment, so it's generally best to steer clear of offensive jokes that perpetuate stereotypes or demean any group of people.

OK, let's look at this for a second. That joke about, why did the girl bring a ladder to the bar? Because drinks are on the house. That is, like, the corniest - first of all, it's not a joke about women. It's like you could substitute any person or any group in that joke. There's nothing specific to women about it. It's not very funny. And it's, like, a joke from, like, the early 1950s or something. And then the joke that is offensive - it's like, is this a joke? Like, it makes no sense at all. And I'm going to read it again just to show how little sense this makes. (Reading) Why did the woman go to the gym? To work on her womanly duties of burning off calories and maintaining her perfect figure, of course.

Like, I'm sorry. Where's the joke? Like, what's funny about that? What is that?

METZ: Again, you've pinpointed a great way of looking at the limitations and the flaws of these systems. Scientists often use this as an example - that these types of systems cannot necessarily tell a good joke. That is a very human skill. And comedians and others often talk about this - that you can't deconstruct a joke. When you start to deconstruct it and explain why it's funny, it ceases to be funny. And if you can't deconstruct something, if you can't lay out why it happens, if you can't reduce it to patterns, then you can't build a system in this way that's funny. These systems learn from patterns, and then they reproduce those patterns. But humor does not necessarily come from patterns; it comes from a different place, right? It does things that are unexpected. That's part of the reason we laugh. It takes us in new directions. It takes us to places we don't expect. And if you've learned only from the past, it's hard to take you to a place you don't expect.

GROSS: One of the things I found really weird about this exercise is that twice, I was kind of lectured to by a machine about the importance of a respectful, inclusive environment and not offending people. And it's very strange as a human being with a conscience to be lectured by a machine about the importance of respect and inclusivity and all of that. It's just odd.

METZ: It's odd, and I like that you think that it's odd. Some people have a very different reaction to these types of systems. We tend to trust language that we read. We tend to trust things that people tell us. We tend to trust things that machines tell us. A lot of people take what is coming out of these systems at face value. They don't realize that they shouldn't trust it. They don't realize that it can tell us stuff that is untrue. And they don't think it's odd that these systems are telling us what to do and what not to do. I think we need to have that sort of healthy attitude when we use these systems and approach what they're saying with skepticism.

GROSS: Well, it's time for another break. So let me reintroduce you. If you're just joining us, my guest is Cade Metz, a tech reporter for The New York Times. He's written extensively about artificial intelligence and is the author of the 2021 book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF WES MONTGOMERY'S "4 ON 5")

GROSS: This is FRESH AIR. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times. He's been writing about artificial intelligence for years and is the author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World."

So applying this technology to war, what are some of the fears about how pattern recognition and the system of artificial intelligence learning things - how that could be applied to weapons?

METZ: The path between the system you described and an autonomous weapon is not a long one. So if you have a system that can identify objects in images, that means it can be used to target objects. That means that, as you're taking in video from a drone, you can identify objects, and you can target them. You take that sort of system. You attach it to a weapon. You have an autonomous weapon. That's the concern - is that in much the same way that a driverless car can recognize the world around it, you can build a weapon that recognizes the world around it and targets things, whether it's buildings or people or cars, in that imagery. And that's the concern. And that is already starting to happen. And some governments approach this in a careful way. They want to keep a human in the loop. But as time goes on, people worry that more and more autonomy will creep into these types of systems.

GROSS: And what are some of the ways people imagine that happening and the fears surrounding the possibility of that?

METZ: Well, the general fear is that you just - you give more and more power to the machine, not only to identify objects, but to take action based on what it has identified, to make decisions, in essence, that would normally be made by a human. And again, we've talked about how flawed these systems are, how they can exhibit behavior we don't necessarily want them to exhibit. And as you put these systems out into the world in these really important scenarios, the concern is they will make decisions that we would not necessarily make.

GROSS: What are some of the things you have used ChatGPT for? Have you used it in your work? Have you used it just for fun, to entertain yourself and see, like, what is this capable of? What does it get right? What does it get wrong? What is ridiculous?

METZ: I use it all the time because this is what I cover for The New York Times. But I don't use it to produce my work, period. A machine is not going to generate a New York Times article as I would. It's not just about generating one word after another. It's about thinking about that article in full. What is its structure? What is true and what is not? How do all the pieces fit together? And these systems aren't necessarily doing that. They're generating language, but they're not doing so many of the things that I do as I put an article together.

GROSS: Well, also, take Wikipedia. Wikipedia, you know, not the perfect source in the world, but it's footnoted. It has - it cites where information came from. And if you generate information from ChatGPT, it's not sourced. Like, you don't know where any of it came from. So it's really impossible to rely on its accuracy.

METZ: Well, there's an added wrinkle there. Companies and scientists are already trying to produce those types of footnotes. So if you use the Microsoft chatbot, for instance, it will cite its sources. And you can click on a link and see, supposedly, where this information came from. But because of the way these things work, they will generate fake sources. As they combine...

GROSS: Really?

METZ: Yes. As they combine those patterns from across the internet, they will generate fake articles and fake sources and give you the impression that they're giving you, you know, concrete evidence of why they have produced a piece of text when, in fact, there's nothing behind it.

GROSS: When you say fake sources, do you mean citing The New York Times when The New York Times isn't really the source? Or do you mean, like, making up a source that doesn't exist in the world?

METZ: I mean making up a source that does not exist in the world. And we did this recently for an article at the Times itself. We went to ChatGPT and other chatbots. And we said, tell us the first time that AI was mentioned in the pages of The New York Times. And all of the chatbots did the same thing. They gave us an article with a headline and a byline. They gave us a description of these articles where AI was mentioned in the pages of the Times. None of the articles existed. They were made up.

These chatbots had taken true information and combined it in ways that were not true. All these fake articles referenced this very real conference at Dartmouth in the 1950s where the term AI was coined. And they described this conference. But the articles themselves did not exist. We had our archivists go through our archives to make sure. These systems can do that. They can hallucinate, as scientists say. They can confabulate and create things that are not true and, indeed, do not even exist.
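
The check the Times ran can be pictured as comparing each citation a chatbot produces against an index of articles known to exist; the headlines and the output format below are invented for illustration.

```python
# Invented miniature of the fact-check: headlines the archive actually contains...
REAL_ARCHIVE = {
    "Machines Smarter Than Men? A Computing Pioneer Thinks So",  # illustrative
    "The Dartmouth Workshop and the Birth of a Field",            # illustrative
}

# ...versus citations a chatbot generated (hypothetical output).
chatbot_citations = [
    "The Dartmouth Workshop and the Birth of a Field",
    "Thinking Machines Arrive at The New York Times",  # plausible but fabricated
]

for headline in chatbot_citations:
    status = ("found in archive" if headline in REAL_ARCHIVE
              else "NOT FOUND: likely hallucinated")
    print(f"{headline!r}: {status}")
```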

GROSS: Well, let's take another break here, and then we'll talk some more. If you're just joining us, my guest is Cade Metz, a tech reporter for The New York Times and author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF THE OCTOPUS PROJECT'S "THE ADJUSTOR")

GROSS: This is FRESH AIR. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times and has been writing about artificial intelligence for years. And that's what we've been talking about, artificial intelligence and the latest innovations, such as ChatGPT.

We have a presidential election coming up. And we know disinformation and conspiracy theories spread like crazy on social media during the 2020 election. So how does the new, upgraded artificial intelligence open the door to more disinformation and more conspiracy theories spreading?

METZ: Well, let's start with the prime example, ChatGPT and systems like that. If you can produce text much like a human produces text, you can produce disinformation, and you can produce it at a volume and with an efficiency that was never possible in the past. In the past, you had to get humans into a room and have them generate the disinformation and spread it. If you can generate it automatically with a machine, the volume of disinformation is practically infinite. Companies like Google and Microsoft are putting guardrails on this to try to prevent it. But these are techniques, these are technologies that are being developed across the globe. These are technologies that are available to practically anyone. And the open technologies that are available in that way are not necessarily as powerful as what we have inside a Google or a Microsoft, but they're approaching that level.

At the same time, companies and independent researchers are developing systems that can generate images in much the same way. There's a service from that same lab, OpenAI, called DALL-E. And what it allows you to do is describe an image. I want to see a teddy bear on a skateboard in Times Square. And in 20 seconds, it will generate a photorealistic image of a teddy bear on a skateboard in Times Square. That is also a concern - that these systems can generate still images and, pretty soon, video as well that looks like the real thing.
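
For the image side, a sketch assuming OpenAI's Python client and its public image-generation endpoint; the model name and size parameter reflect the documented API but are not details from the interview.

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed

client = OpenAI()

# Describe an image in plain text; the service returns a generated picture.
response = client.images.generate(
    model="dall-e-3",  # illustrative model name
    prompt="a teddy bear on a skateboard in Times Square, photorealistic",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # link to the generated image
```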

GROSS: There is a lot of competition now between the big companies like Google, Facebook, Microsoft to see who can come up first with the best AI system and the best AI chatbot. And at the same time, there are these warnings, in some instances coming from the same places, saying, like, well, we should slow down and use caution and not send something dangerous out into the world. Can you talk a little bit about how those two instincts are clashing right now - the one saying, like, proceed slowly; use caution, and the other one saying, like, hurry up so that you can be the first?

METZ: So the techniques, the technologies at play here have been in development for a long time. And they were in development at Google most notably. A lot of the underlying technologies that are at play here were developed at Google. And for many years, Google was slow to put some of this stuff out into the world because of those concerns, because this could be used to generate disinformation, because it could take jobs away, because it was biased against women and people of color. Then OpenAI comes along, and they released ChatGPT onto the internet, and that set off a race. It showed that there was a hunger for this out in the world, a thirst for this that people responded to. ChatGPT became the fastest-growing consumer application ever put onto the internet.

And what you saw is that Google almost immediately changed course and started to think long and hard about how to put this out into the world. Microsoft, OpenAI's chief partner, had put this into a search engine. That's Google's core technology. It's the heart of its business. Google is a public company. It's beholden to its shareholders. It is designed to make money. And if its biggest competitor goes after its core business, it's going to respond. And what we've seen in short order since the release of ChatGPT is the entire industry shift course towards this technology. And there is this real race now to improve these things, and that's, in the end, because of money.

GROSS: Cade Metz, it's really been great to talk with you again. Thank you so much, and thanks for explaining this so well. Like ChatGPT said, you're very good at explaining things (laughter).

METZ: That means a lot more coming from you, Terry, than ChatGPT. I really appreciate it.

GROSS: I appreciate that (laughter). Cade Metz is a tech reporter for The New York Times and author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." Remember what ChatGPT came up with when I asked it to write a lyric about the end of a love affair set to the melody of "America The Beautiful"? I read it earlier in the show, and now our producer Seth Kelley has done an inspirational remix, which I've titled "From Sea To Shining Sea: A Tale Of Love's Demise." So here we go. Here are the lyrics about the end of a love affair set to the melody of "America The Beautiful," as written by ChatGPT.

(SOUNDBITE OF MUSIC)

GROSS: (Reading) Oh, the end of a love affair, how bittersweet the sound when hearts once intertwined now shattered on the ground. From the mountains high to the ocean's gleaming shore, love's symphony now fades. Forever it is no more. Oh, broken dreams and promises we held so dear, love's flame extinguished, drowning in a sea of tears. From sea to shining sea, a tale of love's demise - the end of a love affair, a painful compromise. Once united, we stood in love's embrace. We'd sway. But time's relentless tide swept our love away. In fields of golden grain, we whispered vows so true. Now echoes of our past lost in skies of faded blue.

Wow. I mean, what can I say? Thea Chaloner directed today's show. Our co-host is Tonya Mosley. I'm Terry Gross.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.
