
The Inside Story - A.I. Risks and Rewards | Episode 111 TRANSCRIPT

Transcript:

The Inside Story: A.I. Risks and Rewards

Episode 111 - September 28, 2023

TINA TRINH, VOA Correspondent:

Hello, I’m Tina Trinh in New York.

This week we’re delving into artificial intelligence: the ability of machines to emulate human intelligence in carrying out tasks and solving problems. It’s a mainstay of our digital lives, programmed into everything from the search engines we consult to the self-driving cars navigating our streets.

The latest developments in A.I. center on generative artificial intelligence, which builds on the automated systems of earlier A.I. to seem more human-like than ever.

But what exactly is it, and how does it work? And will generative A.I. tools like ChatGPT become the new normal?

Depending on who you ask, the digital future is either full of promise or positively dystopian.

But the truth likely lies somewhere in the middle, and it’s a field that will continue to be shaped by technologists, policymakers and us, the end users, for years to come.

In the next half hour, we’ll explore the ideas and stories around artificial intelligence and what it means for our day-to-day lives.

Here in New York City, home to the largest school district in the U.S., administrators initially banned the use of ChatGPT, an A.I. chatbot, due to fears of student cheating. Several months later, that ban was lifted.

Matt Dibble takes a look at why there are no easy answers when it comes to the use of technology in the classroom.

MATT DIBBLE, VOA Correspondent:

Since its release in 2022, an artificial intelligence-powered writing tool called ChatGPT has won instant acclaim but has also raised concerns, especially on school campuses.

High school student Galvin Fickes [FIK-ess] demonstrated how entering a short command can generate a summary of “Jane Eyre,” a book she was assigned to read.

Galvin Fickes, High School Senior:

I think it did a pretty good job, honestly.

MATT DIBBLE:

Some teachers like LuPaulette Taylor are concerned the freely available tool could be used by students to do their homework for them and undermine learning.

LuPaulette Taylor, High School English Teacher:

The critical thinking that we all need as human beings, and the creativity, and also the benefit of having done something yourself and saying, ‘I did that.’

MATT DIBBLE:

Across the U.S., school systems are choosing to restrict access to ChatGPT on their computers and networks.

Developed by San Francisco-based OpenAI, ChatGPT is trained on a vast amount of language data from the internet. When prompted, the AI generates a response using the most likely sequence of words, creating original text that mimics human thought.
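
To make that concrete, here is a minimal, illustrative sketch of the next-word prediction loop described above. The vocabulary and probabilities are invented for the example; real systems like ChatGPT learn them from internet-scale text with neural networks.

```python
# Toy next-word generator in the spirit of the passage above: pick each
# next word in proportion to how likely it is to follow the current one.
# This tiny probability table is invented for illustration only.
import random

NEXT_WORD_PROBS = {
    "the":   {"cat": 0.5, "dog": 0.3, "robot": 0.2},
    "cat":   {"sat": 0.6, "ran": 0.4},
    "dog":   {"barked": 0.7, "sat": 0.3},
    "robot": {"wrote": 1.0},
    "sat":   {"down": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break  # no known continuation; stop generating
        # Sample the next word weighted by likelihood -- the "most likely
        # sequence of words" behavior the report describes, with a little
        # randomness so each run can differ.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```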

Annie Chechitelli, Turnitin Chief Product Officer:

There's no, what we call, ‘source document,’ right? Or a smoking gun to look to, to say, ‘Yes, this looks like it was lifted from that.’

MATT DIBBLE:

Annie Chechitelli is chief product officer for Turnitin, an academic integrity service used by educators in 140 countries.

Turnitin’s anti-plagiarism software checks the authenticity of a student paper by scanning the internet for possible matches.

But when AI writes text, each line is novel and unique, making it hard to detect cheating.

There is, however, one distinguishing feature of AI writing.

Eric Wang, Turnitin Vice President for AI:

They tend to write in a very, very average way. Humans all have idiosyncrasies. We all deviate from average one way or another. So, we're able to build detectors that look for cases where an entire document or entire passage is uncannily average.
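
A toy sketch of the “uncannily average” signal Wang describes: score how predictable each word is, and flag text that is too uniformly predictable. Real detectors estimate these probabilities with large language models; the small word table here is an invented stand-in, not Turnitin’s actual method.

```python
# Illustrative AI-text detector: human writing tends to contain surprising,
# idiosyncratic word choices, while generated text is closer to "average."
# The scores below are hypothetical placeholders.
COMMON_WORD_SCORE = {
    "the": 0.9, "and": 0.9, "a": 0.9, "is": 0.8, "to": 0.8, "very": 0.8,
}

def average_predictability(text: str) -> float:
    """Mean 'expectedness' of the words; higher means more average."""
    words = text.lower().split()
    if not words:
        return 0.0
    # Rare or unusual words score low -- the human deviation from
    # average that a detector expects to see.
    return sum(COMMON_WORD_SCORE.get(w, 0.2) for w in words) / len(words)

def looks_machine_written(text: str, threshold: float = 0.6) -> bool:
    return average_predictability(text) >= threshold
```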

MATT DIBBLE:

Turnitin’s ChatGPT detector is due out later this year. Wang says keeping up with AI tools will be an ongoing challenge that will transform education.

Eric Wang, Turnitin Vice President for AI:

A lot of things that we hold as norms and as status quo are going to have to shift.

MATT DIBBLE:

AI may become acceptable for some uses in the classroom, just as calculators eventually did. Teacher Steve Wright was impressed when his student used ChatGPT to create a study guide for her calculus class.

Steve Wright, Computer Science Teacher:

You know if ChatGPT can make us throw up our hands and say, ‘No longer can I ask a student to regurgitate a process, but now I'm going to have to actually dig in and watch them think, to know if they're learning’ — that's fantastic.

MATT DIBBLE:

In schools and elsewhere, it seems clear that AI will have a role in writing the future.

Matt Dibble, for VOA News, Oakland, California.

TINA TRINH:

Education isn’t the only field grappling with the implications of artificial intelligence. As reporter Deana Mitchell explores in the next segment, the latest A.I. has moved into the production studio, encroaching on the territory of musical artists.


LawTWINZ:

I’m scared now! This ain’t Drake?!

DEANA MITCHELL, VOA Correspondent:

From the lyrics to the beat to the voices, a song posted by anonymous TikToker “Ghostwriter977” sounded just like megastar musicians Drake and The Weeknd.

“Heart on My Sleeve,” made with artificial intelligence, impressed fans but had some in the music industry crying foul.

And experts say it’s not that hard to do.

Rebecca Fiebrink, University of the Arts London:

Once a computer has a sort of understanding of the patterns that underlie voice in general, you don't need necessarily a lot of data from The Weeknd to generate sounds that sound like him.


DEANA MITCHELL:


Universal Music Group, which represents both artists, released a statement emphasizing the need to protect artists’ music. Many of the platforms removed the song.


But experts say experiments like this with AI are just the beginning.

Ge Wang, Stanford University:

The toothpaste is not going back in the tube. We have to learn how to best live with it.

DEANA MITCHELL:

Wang teaches a class on music and AI. He is concerned about the speed at which AI is developing.

Ge Wang, Stanford University:

What's at stake is how we communicate, how we work, how we govern. What's at stake is how we make art.

DEANA MITCHELL:

Fiebrink says the music industry will have to decide how to fight — or use — AI.

Rebecca Fiebrink, University of the Arts London:

Do they decide that the best way to make money is to start churning out AI songs that don't have much human involvement so they can pocket the sales of those and not pay artists?

DEANA MITCHELL:

Since the release of “Heart on My Sleeve,” the musician Grimes has launched AI voice software that allows others to create music with her voice, offering to split the royalties 50-50.

Deana Mitchell for VOA News.

TINA TRINH:

Individual artists might choose to embrace or shun artificial intelligence in their creative process, but what role should companies play in the responsible development of these technologies? Julie Taboh takes a look at what the Biden administration is asking of big tech.

President Joe Biden:

Companies have an obligation to make sure their technology is safe, before releasing it to the public.

JULIE TABOH, VOA Correspondent:

Tech giants Amazon, Google, Meta, and Microsoft are among a group of seven companies that promised to meet AI safeguards brokered by the White House.

President Joe Biden:

These commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security, and trust.

JULIE TABOH:

Some of the promised commitments are quite practical, says David Broniatowski, an associate professor in the School of Engineering and Applied Science at the George Washington University.

David Broniatowski, George Washington University Professor:

For example, the implementation of watermarking technologies for generative large-language models, is very implementable. Watermarking is a sign or an indicator, in the same way that fact checking may help people to make sense of news that they see online, or credibility ratings may help people to understand what URLs they're looking at, whether it's coming from a good source, or a source that's maybe less credible.
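
One widely discussed research approach to the watermarking Broniatowski mentions, sketched here purely for illustration, is to bias a generator toward a pseudo-random “green list” of words and then test what share of a text’s words are green. This is a toy version of that idea, not any company’s actual implementation.

```python
# Toy statistical text watermark in the spirit of "green list" proposals
# from the research literature. Illustration only; not a shipped scheme.
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically place ~half the vocabulary on a 'green list',
    keyed by the previous word, so a generator can prefer green words."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words that are 'green' given their predecessor.
    Unwatermarked human text should hover near 0.5; a generator that
    favors green words will score noticeably higher."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    return green_fraction(text) >= threshold
```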

JULIE TABOH:

The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing, carried out in part by independent experts, to guard against major risks, such as those in biosecurity and cybersecurity, the White House said.

Big tech and smaller players will have to balance risk and innovation.

Jack Clark, Anthropic Co-founder:

We cannot leave the development of artificial intelligence solely to private sector actors.

JULIE TABOH:

Anthropic’s co-founder Jack Clark spoke at a recent U.N. Security Council meeting.

Jack Clark, Anthropic Co-founder:

The governments of the world must come together, develop state capacity, and make the development of powerful AI systems a shared endeavor across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace.

JULIE TABOH:

As the White House commits to advancing responsible AI technology in the U.S., the administration says it will also collaborate with its international partners on a common framework to govern the development and use of AI.

Julie Taboh, VOA News.

TINA TRINH:

The inherent risks posed by artificial intelligence are considerable, with many still undetermined. What’s clear is that guidelines are needed to govern the use of A.I., but with so many technologies developing simultaneously at breakneck pace, where do policymakers even begin?

I recently spoke with Suresh Venkatasubramanian, a computer and data science professor at Brown University who worked at the federal level to help draft the Blueprint for an AI Bill of Rights.

Suresh, tell us about the Blueprint for an AI Bill of Rights and how it came about.

Suresh Venkatasubramanian, Brown University Computer and Data Science Professor:

So the blueprint for the AI Bill of Rights is a collection of five principles to guide the design and development of automated systems so as to protect the American public, our civil rights, our civil liberties, our opportunities for advancement and access to critical services that are provided to us by the government and other entities. So what we need are guardrails and protections.

TINA TRINH:

And what are some of these guardrails, the key tenets of this blueprint?

Suresh Venkatasubramanian, Brown University Computer and Data Science Professor:

So there are five principles. Number one, these systems should be safe. They shouldn't harm us. And effective: they should work as claimed. This might come as a surprise, but a lot of times technology is deployed with a kind of ‘Let's try it and see what happens’ attitude rather than ‘Let's make sure it works before we put it out.’

Number two, we want systems that are unbiased. They should not discriminate against groups based on characteristics that are either protected under law or irrelevant to the task at hand.

These systems should be prudent with our data. They should use our data sparingly. They should use it only for the task at hand, and they shouldn't be out selling our data to other vendors for who knows what purposes.

Number four, these systems should be transparent and explainable. We should know if automation is being used to make decisions about us, and on what criteria such systems are coming to their conclusions.

And finally, these systems should come with human consideration and fallback. There should be ways we can get recourse from people when these systems fail.

TINA TRINH:

So on paper, these seem like pretty common-sense principles to adhere to. The language seems broad enough, though, that they could be interpreted in a variety of ways, to varying degrees. So how can we ensure accountability here?

Suresh Venkatasubramanian, Brown University Computer and Data Science Professor:

So along with the blueprint, the document, which is much longer, has what we call a technical companion that gives much more specific details on how to ensure safety and effectiveness. So for example, if we talk about making sure that algorithms do not discriminate . . . there's a number of things you can do.

You look at your data, you look at disparities in your data, you see what kind of data you've collected from different people. You understand whether there might be hidden disparities, and you do all that kind of testing. Now, what this looks like for a medical diagnostic test will be very different from what it looks like for, say, giving someone a loan, but doing that test is the common element between these.

TINA TRINH:

How confident are you that companies will actually adhere to these guidelines?

Suresh Venkatasubramanian, Brown University Computer and Data Science Professor:

We need new laws for sure. We need the willingness and the ability to enforce existing laws on discrimination, on transparency.

TINA TRINH:

Tech regulation is one of the few issues, I think, that garners bipartisan support, which is a rare thing in DC. But at the same time, we've seen past failures to really enact any meaningful change. I mean, I think about social media and the proliferation of online misinformation and disinformation, and social media companies’ failures to really get a handle on that. So can something as widespread and far-reaching as AI actually be regulated?

Suresh Venkatasubramanian, Brown University Computer and Data Science Professor:

So I think for AI, it's not as if we don't have solutions. We actually know what we need to do. And what we're waiting for is for Congress, you know, the executive branch, the regulators, the legislators, the states to just go ahead and do it.

But we know a lot about what is likely to work well for AI: just basic transparency guidelines, reporting, pre-deployment testing, testing for disparities. And the reason I know that will work is that when journalists and civil society advocates have exposed problems with algorithms, it turns out they used the same assessment and auditing methodologies that could have been used ahead of time, before the system was deployed. The people deploying it would have discovered these problems before they deployed it.

TINA TRINH:

What do you see looking ahead as next steps, Suresh?

Suresh Venkatasubramanian, Brown University Computer and Data Science Professor:

We've already seen some other small bits of legislation proposed, for example, legislation that would put a chief AI officer in every department and agency within the United States government, and so on.

We're seeing a lot of states put out AI legislation proposals, some of which have even passed.

I think there's a tendency to look for the one silver bullet that will solve these problems. I'm actually more encouraged by the fact that there are so many different activities happening, because it makes me confident that some of these will stick and that, slowly over time, going back to the point about culture change, they will shift our attitudes and culture around how we expect AI systems to be deployed.

TINA TRINH:

In the wrong hands, A.I. has the potential to not only harm individuals but to destabilize government institutions. The upcoming U.S. presidential election has some worried about the problematic rise of online mis- and disinformation. This time, Deana Mitchell examines how political deepfake videos are further fueled by generative A.I.

DEANA MITCHELL:

Pope Francis did not really wear this designer Balenciaga coat, and former President Donald Trump was not photographed running from the police. These fake images were created with software using generative artificial intelligence.

Deepfakes are already part of the 2024 U.S. presidential campaign, with this video from Trump opponents showing him hugging former White House chief medical adviser Dr. Anthony Fauci, with whom he was often at odds during the COVID-19 pandemic.

The misinformation-debunking show “VERIFY” determined that three of the images are fake.

Voice of Casey Decker, VERIFY Reporter:

If you look more closely at the pictures, you can see more evidence they are AI-generated. Instead of saying White House, it says Mephap.

DEANA MITCHELL:

Deepfakes in political advertising alarm the nonprofit consumer advocacy organization Public Citizen, where Lisa Gilbert is executive vice president.

Lisa Gilbert, Public Citizen Executive Vice President:

One of the things that artificial intelligence is able to do, particularly generative AI, is easily create deepfakes. So, you know, video or audio that looks incredibly real, impossible to discern with the naked eye from a real video.

DEANA MITCHELL:

Public Citizen asked the U.S. Federal Election Commission to make rules for campaign ads that use computer-generated images, but the commission said that was not within its jurisdiction. There is little legal infrastructure for policing deepfakes, says Cornell Law School professor of digital and information law James Grimmelmann.

James Grimmelmann, Cornell Law School:

And the Supreme Court has actually also made it pretty clear that it's quite difficult for them to regulate anything except the most egregiously false and harmful speech.

DEANA MITCHELL:

He expects the use of deepfakes in political campaigns will grow.

James Grimmelmann, Cornell Law School:

And it’ll be very hard to trace the specifics of who created them.

DEANA MITCHELL:

There is currently legislation in Congress that would require political advertisements to state if generative AI was used.

Deana Mitchell for VOA News.

TINA TRINH:

As we’ve seen, generative A.I. can produce synthetic images and videos that are hard to distinguish from the real thing, content that can make its way to bigger and bigger screens like the ones here in Times Square.

But if generative A.I. creates new content based on pre-existing work, who owns that content and can claim rights to it? Amidst an ongoing strike in Hollywood, actors and writers are now grappling with the A.I. versions of themselves and their work, as Genia Dulot reports from Los Angeles.

GENIA DULOT, Reporting for VOA:

A controversial issue in the ongoing strike between Hollywood studios and the writers and screen actors guilds is artificial intelligence, or AI, the use of which has grown exponentially in TV and movie production over the past decade.

Unidentified Speakers:

“…The idea that somebody can take that ownership of yourself away from you. That’s just very frightening.”

“…had actually forced people to go in those rooms and get scanned or else you don’t get paid for the day… .”

“… AI can actually do art, and that’s basically, if it doesn’t frighten you, it should… .”

GENIA DULOT:

Bargaining for TV and movie studios, ((Mandatory CG: AMPTP.ORG)) the Alliance of Motion Picture and Television Producers, or AMPTP, ((end credit)) says it agreed to “fair compensation when a ‘digital replica’ is made of a performer.”

But the Screen Actors Guild, or SAG-AFTRA, says AMPTP is offering background performers pay for one day of work in exchange for the rights to their digital likeness “for the rest of eternity with no compensation.”

AMPTP denies that allegation, saying the claim that "the digital replicas of background actors may be used in perpetuity with no consent or compensation is false."

At the recent Comic-Con convention in San Diego, AI was a major topic of discussion.

Tim Friedlander, the president of the National Association of Voice Actors, or NAVA, said that because AI can easily replicate human voices, its members were affected earlier than workers in other industries.

Tim Friedlander, NAVA President:

One actor found that their voice was taken and used in a promo [ad] that they never did, and the other one worked for a company for three years and for the fourth year, the company did not rehire him because they said we have three years of your voice, we’re just going to create [an] AI synthetic voice.

GENIA DULOT:

American voice actress Cissy Jones says the scariest scenario for her is if her voice were digitized and programmed to say things that she doesn’t agree with.

Cissy Jones, Voice Actor:

It can be inflammatory content, whether it’s racist, homophobic, NSFW, pornographic -- things that I would never say.

GENIA DULOT:

Duncan Crabtree-Ireland, national executive director of SAG-AFTRA, which is leading strike negotiations with AMPTP, worries that actors could unknowingly give studios consent to use their images or voices.

Duncan Crabtree-Ireland, SAG-AFTRA National Executive Director:

The idea that you can bury the sentence in that huge contract, that says, ‘I give you rights to use my digital replica, in perpetuity, throughout the universe in any technology for any purpose’ -- that’s not good enough, especially when you are creating something new using that person’s identity.

GENIA DULOT:

SAG-AFTRA negotiator and actor Zeke Alton says there should be laws that ensure performers give consent and are compensated.

Zeke Alton, SAG-AFTRA:

What it [a law] would ask for is that any content that comes out of an AI machine and not a human person be marked as such. Those are the very first steps, so that when we’re looking at advertising or we are looking at art, we realize this is a machine doing it or this is real human art.

GENIA DULOT

When SAG went on strike in July, it joined members of the Writers Guild of America, whose contract expired in May, freezing production of scripted shows and ensuring that TV and film fans will have to turn to reruns for their entertainment fix.

Genia Dulot, for VOA News, San Diego.

TINA TRINH:

Not all A.I. is bad. As reporter Maxim Moskalkov discovers, in some cases, the digital likenesses of people allow them and their memories to live on, even after they’ve passed . . .

MAXIM MOSKALKOV, VOA Reporter:


A few startup companies are using artificial intelligence to let people stay in touch with a loved one who has passed away.

The so-called “resurrection” relies on creating chatbots, or virtual doubles, based on a deceased person’s social data: images, voice and text messages, emails, etc.

Michael Jung works for a company called DeepBrain AI that creates the AI doubles.

Michael Jung, DeepBrain AI:

Let's say, I want to create you. Then we go to the studio, and we film you, and you read 300-500 sentences in front of the camera. And then we have that original video source. And then we do one month of machine learning. After one month of machine learning, we create your avatar. Your digital twin, or your avatar, is going to talk.

MAXIM MOSKALKOV:

Here’s Stephen Smith, founder and CEO of a company called StoryFile, sharing some biographical details. In reality, it’s not Smith but his avatar. The real Smith says his company allows anyone to tell their life story.

Stephen Smith, StoryFile:

Think of it like a living photo album that we've all done since the camera existed. What we're doing is using AI to support the person whose ancestor passes down their memories, as they would do in any other form.

MAXIM MOSKALKOV:

Candi Cann, associate professor of religion at Baylor University, believes AI could serve as a good therapeutic tool to help grieving people.

Candi Cann, Baylor University:

Let's say, you had a fight with your mom before she died. And so, you are encouraged by your therapist to write a letter to your mom about the things you wish you had said instead and the response you might have received from your mother in that situation.

MAXIM MOSKALKOV:

But the idea of digital resurrection is a complicated concept, both legally and ethically, says Irina Raicu, director of the Internet Ethics Program at Santa Clara University.

Irina Raicu, Santa Clara University:

This feels like we can somehow keep a tech version of loved ones alive. It strikes me as weird, akin to embalming: instead of embalming somebody's body and trying to keep them with us, we are trying to embalm their minds.

MAXIM MOSKALKOV:

Clarence Jones, who runs the Institute for Nonviolence and Social Justice at the University of San Francisco, says it could be a way to retain some people’s historical legacy.

Clarence Jones, University of San Francisco:

I think it's a superb technology, the technology of the future. And I want to help them in any way possible because there are people living currently that are far more important than I am, whose information they should be storing.

MAXIM MOSKALKOV:

As with most new technologies, there is great potential here, and great potential for abuse.

Maxim Moskalkov, VOA News, Washington.

TINA TRINH:

That’s all for now. Stay up to date with all the news at VOANews.com.

Follow us on Instagram and Facebook at VOA News. You can find me on Twitter at Tina Trinh NYC. And catch up on past episodes at our free streaming service, VOA Plus.

For all of those behind the scenes who brought you today’s show, I’m Tina Trinh. We’ll see you next week for The Inside Story.

###
