
AI industry split over OpenAI’s call for regulation

We asked executives from around the artificial intelligence world to weigh in on the regulation conversation.

As the race to own the next wave of AI gathers steam in Silicon Valley, so too do calls for more regulation around how the technology is used.

Tech leaders, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter in March calling for a six-month “pause” on advanced AI development. Then, in May, OpenAI CEO Sam Altman testified before a Senate Judiciary subcommittee about the potential dangers of AI and the need for more government intervention.

Altman, along with IBM’s Christina Montgomery, made suggestions for how Congress should take action, with Altman proposing “licensing and testing requirements” and Montgomery suggesting “precision regulation” of specific use cases.

At an event in Washington, DC, last week, Microsoft President Brad Smith also threw the company’s support behind the creation of a new government agency and a licensing system for pre-trained AI models, laying out Microsoft’s blueprint for AI in a white paper.

As two prominent names in generative AI, OpenAI and IBM were the first to give testimony in what senators said will be a series of hearings. But the broader AI industry remains split on whether regulators should follow one of the courses proposed in the hearing—or whether they should even have a role at all. Some tech leaders backed Altman’s dire warnings and hoped regulation would boost business, while others opposed intervention and worried about its effects on innovation—and competition.

The path to regulating AI is challenging in part because legislators themselves have a lot to learn about the technology, according to Aram Gavoor, an associate dean at George Washington Law School who focuses on AI policy. But while lawmakers and their staffs will be beefing up their knowledge of AI in the coming months, Gavoor noted that the May hearing was “unusually bipartisan and collaborative.”

Senators on both sides of the aisle generally agreed on the need for regulation, as did all of the witnesses, Gavoor pointed out. “You just don’t often see that in Washington,” he added.

But that consensus doesn’t necessarily extend to the rest of the AI industry.

Who speaks for startups?

Arijit Sengupta, founder and CEO of enterprise AI company Aible, said he wasn’t surprised to hear big industry players like OpenAI and IBM calling for regulation, but suggested that their proposals could stifle competition, including from open-source models and startups.

“If you look at the testimony in that light, and with a little bit of cynicism added in…what is really being said is big, huge companies are saying, ‘Please give licenses to make big models,’” he said. “And if you see this from the perspective of trying to use regulation as a way to close the door behind you, all of the testimony fits in perfectly.”

Gil Perry, CEO and co-founder of generative AI startup D-ID, had similar concerns. While Perry broadly supports more regulation of AI models and their output, he questioned whether some of the measures Altman proposed might benefit more established players over startups.

“Maybe there should be some other method to make sure that people are acting well—have some scoring system,” Perry said. “Because [Altman] would be the first one to be accepted [for a license].”

Dan Mallin, CEO and co-founder of enterprise knowledge management system Lucy, said he’s thinking about how to comply with what could become a global patchwork of legislation, and about whether legislators are conflating AI with ChatGPT.

“We have all kinds of use cases for AI, and it has to expand beyond GPT and large language models, or what we’re gonna legislate is GPT and large language models—not AI,” he said.

Rolling out the red tape

Others in the industry said they’ve been anticipating regulatory activity, and aren’t necessarily deterred by it.

Execs at Adobe, which spearheads an industry group called the Content Authenticity Initiative (CAI) aimed at bringing more transparency to AI-generated content, said they hope that their efforts will help the group stay ahead of any potential regulations.

Ashley Still, the company’s senior vice president and general manager of digital media, pointed to the CAI’s “nutritional label” format, which gives viewers a sense of AI involvement in creation while not revealing too much about a given creative professional’s “secret sauce.” The software giant’s own new generative AI tools have these types of disclaimers built into the process.

“I do believe there’s an argument for regulation here,” Still told Tech Brew. “And at the end of the day, this comes down to consumers being able to understand the content that they’re seeing.”

In the absence of regulation, Adobe isn’t the only company developing its own protective measures around generative AI. Grammarly CEO Rahul Roy-Chowdhury said in an email that “regardless of legislation, technology leaders must take accountability for bringing generative AI to their products’ users in a safe, responsible way that doesn’t increase bias or usurp people’s autonomy.”

He said he hopes to see other members of the industry develop their own sets of rules that will cut down on the biggest risks of AI—with or without government intervention.

“I’m glad to see lawmakers engaged in the conversation to define a path forward,” Roy-Chowdhury said. “Companies that prioritize safety, responsibility, and user control in their product development and business practices will find themselves ahead of any legislation that might come.”

Salesforce AI CEO Clara Shih said in an emailed statement that the enterprise company plans to “actively engage with governments and all stakeholders to establish risk-based regulatory frameworks” around AI.

D-ID’s Perry, who’s been working in the generative AI space for nearly a decade, said he’s encouraged by the number of people in the industry calling attention to the risks posed by the technology. He contrasted that with the lack of widespread privacy concern in the early days of social networks.

“It’s not like previously with social networks—that was the opposite, with privacy,” Perry said. “I think legislators and regular regulators are slow here. It’s the companies which are actually saying, ‘Look, this is something dangerous happening.’”

He also said regulation could create a more level playing field for companies that are already trying to follow safe and ethical best practices.

“That’s one of the reasons we want regulation,” said D-ID’s VP of commercial strategy, Matthew Kershaw. “We don’t want to feel like because we’re doing the right thing that we’re going to be penalized for it.”

CF Su, VP of machine learning at document processing platform Hyperscience, said he expects to see some form of “guidance or policy” before the US presidential election next year, and hopes the industry can “converge on a small set of high-level principles.”

“You can imagine it’s very difficult for any government agency or Congress to work with a very divided industry,” Su said. “A lot of people see the potential, the economic and financial value in it.”

But it may be hard to align everyone’s interests, he added. “Unfortunately, it’s not a pure technical discussion anymore, because it has huge implications on business.”
