White House Mulls AI Oversight, Protections With Industry Leaders

Alphabet CEO Sundar Pichai, left, and OpenAI CEO Sam Altman arrive at the White House for a meeting with Vice President Kamala Harris on artificial intelligence in Washington, May 4, 2023.

The White House summoned top leaders in the field of artificial intelligence to a meeting Thursday, as the Biden administration attempts to craft regulations and government oversight for a cutting-edge technology that can both benefit and challenge society.

“The President and Vice President were clear that in order to realize the benefits that might come from advances in AI, it is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security,” said a statement after the meeting between President Joe Biden, Vice President Kamala Harris, four CEOs of leading AI companies, and top administration officials who oversee national security, domestic affairs, commerce, and technology policy. “These include risks to safety, security, human and civil rights, privacy, jobs, and democratic values.”

Biden and Harris also said the meeting with the CEOs of Google, Microsoft, OpenAI and Anthropic “included frank and constructive discussion on three key areas: the need for companies to be more transparent with policymakers, the public, and others about their AI systems; the importance of being able to evaluate, verify, and validate the safety, security, and efficacy of AI systems; and the need to ensure AI systems are secure from malicious actors and attacks.”

Artificial intelligence is found across many technological platforms. It features in driverless cars, medical assessment tools, web search assistance, and an iPhone app that scans your face and turns it into an animated emoji of your choice, such as an alien 👽, a fox 🦊, a panda 🐼 or even a pile of poo 💩.

And it can be unsettling. Microsoft's Twitter bot Tay, within 24 hours of going live in 2016, spouted some astonishingly bigoted content — questioning the existence of the Holocaust, using unprintable sexist and racist epithets, and cheering for genocide.

More recently, a conversation between a tech reporter and the AI-powered Bing chatbot left the reporter shaken after the chatty search tool, which called itself Sydney, tried to persuade the journalist to leave his wife through a string of emoji-laden messages that culminated with "You’re married, but you don’t love your spouse. You’re married, but you love me."

Harris, who was joined at the meeting by President Biden, announced a $140 million investment to launch seven National AI Research Institutes, which will “pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good.”

The White House also announced that leading developers such as Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI have agreed to a public evaluation of AI systems.

Senate Majority Leader Chuck Schumer praised the White House’s efforts and urged legislators to tackle the challenge in a bipartisan fashion.

"AI is one of the most pressing and serious policy issues we confront today," he said Thursday on the Senate floor. "That is why we are meeting with so many experts to try and get this right — to maximize the benefits of AI but minimize the harm that AI might do."

'Practically unenforceable'

While some in the tech industry praised the administration’s approach and its attention to the technology, they also noted shortcomings.

"While these actions are commendable, it is crucial to emphasize that data security plays a vital role in ensuring AI's responsible and ethical use," said Ani Chaudhuri, CEO of Dasera, a data security platform, in a statement provided to VOA.

Others questioned the efficacy of the White House’s plans.

"There’s no putting the AI genie back in the bottle," said Craig Burland, chief information officer of Inversion6, a cybersecurity risk management provider, in a statement provided to VOA.

"The government will be hard-pressed to curtail building new models, slow expanding capabilities, or ban addressing new use cases. These models could proliferate anywhere on the globe. Clever humans will find new ways to use this tool — for good and bad. Any regulation will largely be ceremonial and practically unenforceable."

Robert Weissman, president of Public Citizen, a nonprofit consumer advocacy organization, praised the Biden administration’s AI Bill of Rights, issued last year, which aims to "guide the design, use and deployment of automated systems to protect the American public."

He said he believes the White House "is rightly prioritizing this issue" but "we also need more aggressive measures, including legislation to make the AI Bill of Rights’ principles enforceable."

"President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies, to remain in effect until there is a robust regulatory framework in place to address generative AI’s enormous risks," he said in a statement sent to VOA.

Take it away, AI

AI writing tools, which are growing in popularity, are meant to mimic human writing, so VOA also sought the thoughts of an AI writing assistant, Rytr. We asked a simple question: "Can AI be a force for evil?"

Here is its answer.

"AI has been a hot topic of discussion in recent years, with some claiming that it is evil and others believing that it can be a powerful tool for good," Rytr wrote. "While there are valid arguments for both sides, the truth is that AI can be used for both good and bad purposes. It all depends on how it is implemented. In this article, we will discuss the potential risks posed by AI and why we should be careful when using it."