Generative AI risks concentrating Big Tech’s power. Here’s how to stop it.

If regulators don’t act now, the generative AI boom will concentrate Big Tech’s power even further. That’s the central argument of a new report from research institute AI Now. And it makes sense. To understand why, consider that the current AI boom depends on two things: large amounts of data, and enough computing power to process it.

Both of these resources are only really available to big companies. And although some of the most exciting applications, such as OpenAI’s chatbot ChatGPT and Stability.AI’s image-generation AI Stable Diffusion, are created by startups, they rely on deals with Big Tech that give them access to its vast data and computing resources.

“A couple of big tech firms are poised to consolidate power through AI rather than democratize it,” says Sarah Myers West, managing director of the AI Now Institute, a research nonprofit.

Right now, Big Tech has a chokehold on AI. But Myers West believes we’re actually at a watershed moment. It’s the start of a new tech hype cycle, and that means lawmakers and regulators have a unique opportunity to ensure that the next decade of AI technology is more democratic and fair.

What separates this tech boom from previous ones is that we have a better understanding of all the catastrophic ways AI can go awry. And regulators everywhere are paying close attention.

China just unveiled a draft bill on generative AI calling for more transparency and oversight, while the European Union is negotiating the AI Act, which will require tech companies to be more transparent about how generative AI systems work. It’s also planning a bill to make them liable for AI harms.

The US has traditionally been reluctant to regulate its tech sector. But that’s changing. The Biden administration is seeking input on ways to oversee AI models such as ChatGPT—for example, by requiring tech companies to produce audits and impact assessments, or by mandating that AI systems meet certain standards before they are released. It’s one of the most concrete steps the administration has taken to curb AI harms.

Meanwhile, Federal Trade Commission chair Lina Khan has also highlighted Big Tech’s advantage in data and computing power and vowed to ensure competition in the AI industry. The agency has dangled the threat of antitrust investigations and crackdowns on deceptive business practices.

This new focus on the AI sector is partly influenced by the fact that many members of the AI Now Institute, including Myers West, have spent time at the FTC.

Myers West says her stint taught her that AI regulation doesn’t have to start from a blank slate. Instead of waiting for AI-specific regulations such as the EU’s AI Act, which will take years to put into place, regulators should ramp up enforcement of existing data protection and competition laws.

Because AI as we know it today is largely dependent on massive amounts of data, data policy is also artificial-intelligence policy, says Myers West.

Case in point: ChatGPT has faced intense scrutiny from European and Canadian data protection authorities, and it has been blocked in Italy for allegedly scraping personal data off the web illegally and misusing personal data.

The call for regulation is not just coming from government officials. Something interesting has happened. After decades of fighting regulation tooth and nail, today most tech companies, including OpenAI, claim they welcome it.

The big question everyone’s still fighting over is how AI should be regulated. Though tech companies claim they support regulation, they’re still pursuing a “release first, ask questions later” approach when it comes to launching AI-powered products. They are rushing to release image- and text-generating AI models as products even though these models have major flaws: they make up nonsense, perpetuate harmful biases, infringe copyright, and contain security vulnerabilities.

The White House’s proposal to tackle AI accountability with post-launch measures such as algorithmic audits is not enough to mitigate AI harms, AI Now’s report argues. Stronger, swifter action is needed to ensure that companies first prove their models are fit for release, Myers West says.

“We should be very wary of approaches that do not put the burden on companies. There are a lot of approaches to regulation that essentially put the onus on the broader public and on regulators to root out AI-enabled harms,” she says.

And importantly, Myers West says, regulators need to take action swiftly.

“There need to be consequences for when [tech companies] violate the law.”
