WASHINGTON (AP) – Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people's rights and safety at risk.

President Joe Biden briefly dropped by the meeting in the White House's Roosevelt Room, saying he hoped the group could "educate us" on what is most needed to protect and advance society.

"What you're doing has enormous potential and enormous danger," Biden told the CEOs, according to a video posted to his Twitter account.

The popularity of AI chatbot ChatGPT – even Biden has given it a try, White House officials said Thursday – has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.

But the ease with which it can mimic humans has propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

The Democratic administration announced an investment of $140 million to establish seven new AI research institutes.

In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There is also an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.

But the White House also needs to take stronger action as AI systems built by these companies are getting integrated into thousands of consumer applications, said Adam Conner of the liberal-leaning Center for American Progress.

"We're at a moment that in the next couple of months will really determine whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms," Conner said.

The meeting was pitched as a way for Harris and administration officials to discuss the risks in current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and the heads of two influential startups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT.

Harris said in a statement after the closed-door meeting that she told the executives that "the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products."

ChatGPT has led a flurry of new "generative AI" tools adding to ethical and societal concerns about automated systems trained on vast pools of data.

Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That's made it harder to understand why a chatbot is producing biased or false answers to requests or to address concerns about whether it's stealing from copyrighted works.

Companies worried about being liable for something in their training data might also not have incentives to rigorously track it in a way that would be useful "in terms of some of the concerns around consent and privacy and licensing," said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.

"From what I know of tech culture, that just isn't done," she said.

Some have called for disclosure laws to force AI providers to open their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won't be easy to provide greater transparency after the fact.

"It's really going to be up to the governments to decide whether this means that you have to trash all the work you've done or not," Mitchell said. "Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it's already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over."

While the White House on Thursday signaled a collaborative approach with the industry, companies that build or use AI are also facing scrutiny from U.S. agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.

The companies also face potentially tighter rules in the European Union, where negotiators are putting finishing touches on AI regulations that could vault the 27-nation bloc to the forefront of the global push to set standards for the technology.

When the EU first proposed its AI rules in 2021, the focus was on reining in high-risk applications that threaten people's safety or rights, such as live facial scanning or government social scoring systems, which judge people based on their behavior. Chatbots were barely mentioned.

But in a reflection of how fast AI technology has developed, negotiators in Brussels have been scrambling to update their proposals to take into account general purpose AI systems such as those built by OpenAI. Provisions added to the bill would require so-called foundation AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.

A European Parliament committee is due to vote next week on the bill, but it could be years before the AI Act takes effect.

Elsewhere in Europe, Italy temporarily banned ChatGPT over a breach of stringent European privacy rules, and Britain's competition watchdog said Thursday it's opening a review of the AI market.

In the U.S., putting AI systems up for public inspection at the DEF CON hacker conference could be a novel way to test risks, though not likely as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University's Center for Security and Emerging Technology.

Along with Google, Microsoft, OpenAI and Anthropic, companies that the White House says have agreed to participate include Hugging Face, chipmaker Nvidia and Stability AI, known for its image-generator Stable Diffusion.

"This would be a way for very skilled and creative people to do it in one kind of big burst," Frase said.

___

O'Brien reported from Cambridge, Massachusetts. AP writers Seung Min Kim in Washington and Kelvin Chan in London contributed to this report.

___

Follow the AP's coverage of artificial intelligence at .

The Associated Press. All rights reserved.
