We can’t base AI laws on imagined risks

We’ve heard endless calls to “do something” about AI, but politicians are still basing their thinking on risks they assume are real, not the ones we know about, writes James Boyd-Wallis. Last week Rishi Sunak said he wants to balance safety and innovation with AI, but he needs to do more to get MPs on board.

AI has huge potential – it’s helping people walk again, discovering new drugs and detecting cancer earlier – but there are still concerns. Goldman Sachs reported generative AI could replace 300 million jobs, while Matt Clifford (helping the PM set up an AI taskforce) said it could “kill many humans” in two years. Even 350 global AI experts have warned it could lead to humanity’s extinction!

The warnings keep coming and people are shouting for action. Labour and SNP MPs have asked for stronger rules and more government involvement.

Our research reveals a lack of faith in existing regulators to control AI: only 6% of MPs think regulators have the relevant skills or knowledge. This view is shared across the aisle, with just 7% of Conservative and 6% of Labour MPs believing regulators are up to the job. The AI white paper published in March proposed a sector-by-sector approach, but that approach is now being called into question.

It also appears that growth and innovation take priority over safety for only 14% of MPs, and just 23% say they understand the impacts of AI. Rishi Sunak wants a global safety watchdog modelled on the International Atomic Energy Agency, but it is hard to make progress without the support of politicians.

This is an important conversation; the government’s stance sits between the EU’s stricter AI Act and the US’s lighter touch.

We need to work out what is safe and what is not, and put limits in place for the latter. The aim should be sensible rules that allow flexibility across different industries and uses: a retailer using AI to recommend the best outfit doesn’t need to be regulated like a healthcare provider. This middle ground could help maintain the UK’s position as a global AI leader, but only if the PM resists calls for broader regulation based on assumed risks.

If we rush to regulate, we risk jeopardising innovation and all the benefits AI can bring. Governments don’t make thoughtful policy out of fear.

So there needs to be a wider discussion about how organisations use AI and the opportunities and challenges it brings. How does police use of facial recognition technology affect bias and privacy? How can AI help put patients at the centre of the NHS and improve outcomes?

And that debate must include everyone: MPs, AI firms large and small, civil society, academics, and the people affected by AI.
