The heads of some of the biggest tech companies, including X, Meta, Microsoft, Alphabet and OpenAI, gathered to discuss AI policy in Washington DC.
Tesla CEO Elon Musk called on Wednesday for a US "referee" for artificial intelligence after he, Meta Platforms CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai met with lawmakers behind closed doors on Capitol Hill for a forum on regulating AI.
Lawmakers are seeking ways to mitigate the dangers of the emerging technology, which has boomed in investment and consumer popularity since the release of OpenAI's ChatGPT chatbot.
Musk said there was a need for a regulator to ensure the safe use of AI.
"It's important for us to have a referee," Musk told reporters, comparing it to sports. The billionaire, who also owns the social media platform X, added that a regulator would "ensure that companies take actions that are safe and in the interest of the general public."
Musk said the meeting was a "service to humanity" and said it "may go down in history as very important to the future of civilization".
Musk confirmed he had called AI "a double-edged sword" during the forum.
'The government is ultimately responsible'
Zuckerberg said Congress "should engage with AI to support innovation and safeguards. This is an emerging technology, there are important equities to balance here, and the government is ultimately responsible for that".
He added it was "better that the standard is set by American companies that can work with our government to shape these models on important issues".
More than 60 senators took part. Lawmakers said there was universal agreement about the need for government regulation of AI, but it was unclear how long it might take and how it would look.
Republican Senator Mike Rounds said it would take time for Congress to act. "Are we ready to go out and write legislation? Absolutely not," Rounds said. "We're not there".
New Jersey Senator Cory Booker said all participants agreed "the government has a regulatory role" but crafting legislation would be a challenge.
Lawmakers want safeguards against potentially dangerous uses of AI, such as deepfake videos, election interference and attacks on critical infrastructure.
Bipartisan AI policy
"Today, we begin an enormous and complex and vital undertaking: building a foundation for bipartisan AI policy that Congress can pass," US Senate Majority Leader Chuck Schumer, a Democrat, said.
"Congress must play a role, because without Congress we will neither maximize AI's benefits, nor minimize its risks".
Other attendees included Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, former Microsoft CEO Bill Gates and AFL-CIO labour federation President Liz Shuler.
Schumer, who discussed AI with Musk in April, said attendees would talk "about why Congress must act, what questions to ask, and how to build a consensus for safe innovation".
In March, Musk and a group of AI experts and executives called for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society.
This week, Congress is holding three separate hearings on AI. Microsoft President Brad Smith told a Senate Judiciary subcommittee on Tuesday that Congress should "require safety brakes for AI that controls or manages critical infrastructure".
'Biggest monopolists' to give tips - Hawley
Republican Senator Josh Hawley questioned the closed-door session, saying Congress has failed to pass meaningful tech legislation.
"I don't know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money," Hawley said.
Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can create text and generate images whose artificial origins are virtually undetectable.
On Tuesday, Adobe, IBM, Nvidia and five other companies said they had signed President Joe Biden's voluntary AI commitments requiring steps such as watermarking AI-generated content.
The commitments, announced in July, are aimed at ensuring AI's power is not used for destructive purposes. Google, OpenAI and Microsoft signed on in July.
The White House has also been working on an AI executive order.