AI regulation is taking shape, but startups are being left out

In the world of generative AI, it is the big names that get the most airtime. Big Tech players like Microsoft and lavishly funded startups like OpenAI have earned invitations to the White House and the earliest of what will likely be many, many congressional hearings. They’re the ones that get big profile pieces to discuss how their technology will end humanity. As politicians in the US and beyond grapple with how to regulate AI, this handful of companies has played an outsize role in setting the terms of the conversation. And smaller AI players, both commercial and noncommercial, are feeling left out — while facing a more uncertain future. 

Big AI — a term that’s long overdue for adoption — has been actively guiding potential AI policies. Last month, OpenAI, Meta, Microsoft, Google, Anthropic, and Amazon signed an agreement with the White House promising to invest in responsible AI and develop watermarking features to flag AI-generated content. Soon after, OpenAI, Microsoft, Anthropic, and Google formed the Frontier Model Forum, an industry coalition intended to “promote the safe and responsible use of frontier AI systems.” The forum was set up to advance AI research, identify best practices, and share information with policymakers and the rest of the AI ecosystem.

But these companies account for only one slice of the generative AI market. OpenAI, Google, Anthropic, and Meta all run what are called foundation models, AI systems that can be language-based or image-focused. On top of these models, there’s a booming sector of far smaller businesses building apps and other tools. They face many of the same forms of scrutiny, but as AI rules are being developed, they worry they’ll have little say in the results — and, unlike Big AI with its large war chests to cover the cost of noncompliance, they can’t afford disruptions to their business.

“For small and medium companies down the value chain that are in the space but are not really at the table, it adds to that exclusionary feeling of okay, we’re not any of the big guys, so we just have to sit down and wait for somebody to put something on high,” Triveni Gandhi, responsible AI lead at enterprise AI company Dataiku, tells The Verge.

Gandhi says one example of the potential risks for smaller players is accountability. Companies in the middle of the AI value chain, like Dataiku — which builds data analytics applications and works with clients — don’t have control over how the models they use get information.

So if rules state that AI companies will be held responsible for how chatbots use data and answer queries, a company like Dataiku may end up being punished for something it can’t easily change.

The Frontier Model Forum has said it will work with civil society groups and governments, but it hasn’t said whether membership will be expanded to more AI companies. OpenAI, one of the leaders of the forum, declined to say if the group will open up in the future.

Down the line, these smaller companies have to be part of the regulatory conversation, and they want a say in how they’ll be scrutinized. 

Ron Bodkin, co-founder of ChainML, said calibrating requirements and fines to the size and scale of AI players would go a long way toward addressing smaller companies’ concerns. Dataiku’s Gandhi suggested that any industry coalition or standards organization, like the International Organization for Standardization (ISO), should make room for more stakeholders, especially since the needs of those working on foundation models differ from those working with consumers.

Worries about regulatory capture — when regulated industries strongly influence how policies are created and enforced — have crept into congressional hearings as lawmakers struggle to figure out how to balance fostering innovation with preventing harm. It makes sense for governments to tap large AI companies while figuring out a regulatory framework. But relying on their input risks alienating companies further down the value chain — and producing rules that end up protecting big incumbents from competition.

Allowing large corporations to shape regulations in their favor could leave the actual users — us — without a say in how AI is governed

The AI Now Institute sounded the alarm on Big AI’s influence with a report it released in April. The organization warned that these companies have led the narrative around AI, overstating the technology’s importance to the future and influencing how people think about it. AI Now believes the conversation should instead be led by regulators and the public.

Beena Ammanath, executive director of the Global Deloitte AI Institute, said that fostering trust in AI technologies will mean involving more than big businesses. Nongovernmental groups, academics, international agencies, and policy experts also need to weigh in. And while lawmakers continue to discuss regulating AI, there’s still time to open up the conversation.

“Creating policies that genuinely prioritize the public interest over corporate gains will be imperative to fostering trust in AI technology, upholding ethical standards, and promoting responsible AI adoption,” says Ammanath.