A.I. ‘controls humanity’ in the worst-case scenario but will probably just find us boring, says Stability AI CEO Emad Mostaque

Emad Mostaque hopes A.I. will find us “a bit boring” but acknowledges that in the worst-case scenario it “basically controls humanity.”

Mostaque is CEO of the fast-growing London-based startup Stability AI, which popularized Stable Diffusion. That’s a generative A.I. tool allowing users to create often remarkably sophisticated images using nothing but text prompts. He made the comments in a BBC interview released this weekend.

“If you have a more capable thing than you, what is democracy in that kind of environment? This is a known unknown,” he told the British broadcaster. “Because we can’t conceive of something more capable than us, but we all know people more capable than us. So, my personal belief is it will be like that movie Her with Scarlett Johansson and Joaquin Phoenix: Humans are a bit boring, and it’ll be like, ‘Goodbye’ and ‘You’re kind of boring.’”

“But I could be wrong,” he added. “I think it deserves to be discussed in a public sphere.”

In March, Mostaque joined Tesla CEO Elon Musk and Apple cofounder Steve Wozniak in signing an open letter calling for a pause in the development of A.I. systems more advanced than GPT-4, the large language model from Microsoft-backed OpenAI, which also makes ChatGPT and DALL-E 2 (the latter, like Stable Diffusion, converts text prompts to images).

“If we have agents that are more capable than us that we cannot control that are going across the internet and [are] hooked up and they achieve a level of automation,” he told the BBC, “what does that mean?”

Stability AI is racing ahead, however, in developing new products—including a text-to-animation tool released this week—and wooing investors. It’s seeking to raise funds at a $4 billion valuation, following a $1 billion valuation last October after raising about $100 million. (Coatue Management and Lightspeed Venture Partners are among its investors.)

At the same time, Stability AI is being sued by Getty Images in a landmark case over copyright. Such a lawsuit was perhaps inevitable given that text-to-image A.I. models like Stable Diffusion are trained using billions of images pulled from the internet.

Asked by the BBC what the worst-case scenario might be, Mostaque said: “Worst-case scenario is that it proliferates and basically it controls humanity. Because you could have a million of these things replicating effectively.”

Unusually, Stable Diffusion is open source, meaning anyone can examine the code, share it, and use it.

In March, Musk, who cofounded and helped fund OpenAI, criticized it for switching away from a nonprofit model, taking hefty investments from Microsoft, and not being open source. He tweeted:

“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”

“I think there shouldn’t have to be a need for trust,” Mostaque told the BBC. “If you build open models and you do it in the open, you should be criticized if you do things wrong and hopefully lauded if you do some things right.”
