
The problem with Big Tech’s voluntary AI safety commitments

Experts are skeptical about corporate involvement and warn the US agreement lacks teeth.

The European Union might be making strides toward regulating artificial intelligence (with passage of the AI Act expected by the end of the year), but the US government has largely failed to keep pace with the global push to put guardrails around the technology.

The White House, which said it “will continue to take executive action and pursue bipartisan legislation,” introduced an interim measure last week in the form of voluntary commitments for “safe, secure, and transparent development and use of AI technology.”

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to “prioritize research on societal risks posed by AI systems” and “incent third-party discovery and reporting of issues and vulnerabilities,” among other things.

But according to academic experts, the commitments fall far short.

“The elephant in the room here is that the United States continues to push forward with voluntary measures, whereas the European Union will pass the most comprehensive piece of AI legislation that we’ve seen to date,” Brandie Nonnecke, founding director of UC Berkeley’s CITRIS Policy Lab, told Tech Brew.

“[These companies] want to be there in helping to essentially develop the test by which they will be graded,” Nonnecke said. That, combined with cuts to trust and safety teams in recent months, is cause for skepticism, she added.

Emily Bender, a University of Washington professor who specializes in computational linguistics and natural language processing, said the vagueness of the commitments could be a reflection of what the companies were willing to agree to (the agreement’s voluntary nature at work).

“We really shouldn’t have the government compromising with companies,” she said. “The government should act in the public interest and regulate.”

Bender also voiced concerns about the measure’s approach to potential future risks, pointing to commitments to give “significant attention” to “the effects of system interaction and tool use” and “the capacity for models to make copies of themselves or ‘self-replicate.’”

“And that to me doesn’t sound like grounded thinking about actual risks,” she added. “I suspect that one of the through lines here is…this AI hype train of believing that the large language models are a step toward what gets called artificial general intelligence, which humanity needs to be protected from because of this weird fantasy world that it becomes sentient or autonomous and takes over. I don’t see Nvidia and IBM playing that game so much, so that might be part of why they’re not there.”

Both Bender and Nonnecke pointed to the Federal Trade Commission, which opened an investigation into OpenAI in July, as an effective regulatory player in the absence of federal AI legislation. But neither expects much to come from the voluntary commitments.

“I could imagine that the White House was interested in coming to the table because they might feel stymied by the split Congress, and so they can’t directly do that much in terms of regulation,” Bender said. “They want to look like they’re doing something, but there’s no teeth here. This is not regulation. The title is ‘Ensuring Safe, Secure, and Trustworthy AI,’ and I don’t think it does any of that.”
