No10: We can’t rule out AI posing ‘existential threat’ to humanity

No10 says it is unable to rule out artificial intelligence (AI) posing an “existential threat” to humanity.

Government wonks have admitted, ahead of Rishi Sunak delivering a speech on AI risks on Thursday, that the technology, if misused, could have the potential to wipe out the human race.

A research paper – for discussion at the government’s AI safety summit at Bletchley Park next week – warns: “There is insufficient evidence to rule out that future frontier AI, if misaligned, misused or inadequately controlled, could pose an existential threat.”

But experts in the Government Office for Science maintain this is “highly unlikely” and would require AI programmes to “outpace mitigations” and “be able to avoid being switched off”.

In his speech, Sunak is expected to say innovations like ChatGPT and Google DeepMind offer “new opportunities for economic growth” and solutions to problems thought impossible.

But he will also stress AI development brings with it “new dangers and new fears” and say: “Doing the right thing, not the easy thing, means being honest with people about the risks.”

Global leaders, academics and businesses will gather at the former home of the World War Two Enigma codebreakers in a bid to build an international framework for containing AI risks.

It comes as No10 publishes fresh research into a range of future “plausible” scenarios on how AI could develop until 2030 – but stresses that the contents are not government policy.

Under one model, researchers, including UK intelligence experts, warn “AI cyber-attacks on infrastructure and public services” could become “significantly more frequent and severe”. They also warn of the possibility of “terrorist groups trying to develop bioweapons” using AI.

Another model suggests AI could “disrupt the workforce” and “trigger a public backlash”, with a spike in unemployment and rising poverty, particularly in sectors like IT and transport. This could lead to a “net reduction in jobs” or even an “unemployment crisis”, researchers warn.

Other risks from the increasing adoption of the technology include the emergence of fake content that is “almost impossible” to identify, such as cloned voices and falsified biometric data.

This comes just a week after a fake audio recording purporting to show Labour leader Sir Keir Starmer berating a staff member was shared online before being quickly debunked.

Researchers also say the impact of scams and fraud, including facial cloning, could become “very widespread” and be used for “espionage, misinformation, and political interference”.

One major risk, they say, could be “societal unrest as many members of the public fall victim to organised crime” and businesses see “trade secrets stolen… causing economic damage”.

Significant AI misuse could leave the internet “increasingly polluted” with fears for the “historical record” thanks to the proliferation of false or AI-generated information.

But more positive modelling suggests employers may be able to deploy the tech to “augment rather than displace workers”, leading to shorter working hours.

AI could also become as widely used as Siri or Alexa, researchers say, with people using it in their “daily lives” as an “advanced personal assistant”. And they add that it could even have a positive impact on society if medics can use it to solve significant “health challenges”.

Technology secretary Michelle Donelan said the research was “a watershed moment” with the UK “the first country in the world to formally summarise the risks presented by this powerful technology”.

She added: “No country can do this alone, which is why we will be welcoming governments, academics, civil society groups and businesses to build a shared understanding of the risks.”
