Google is holding productive early talks with EU regulators about the bloc’s groundbreaking AI regulations and how it and other companies can build AI safely and responsibly, Thomas Kurian, the head of Google’s cloud computing division, told CNBC.
The company is exploring tools to address a number of concerns surrounding AI, such as how to distinguish content made by humans from content made by AI.
Kurian said: “We’re having productive conversations with the EU government because we want to find a path forward. These technologies have risks, but they also have huge potential to deliver real value to people.”
The company debuted its “watermarking” solution for labeling AI-generated images at its I/O event last month.
The move hints at how Google and other large tech companies are developing private-sector oversight of AI before formal regulations exist.
AI systems are evolving rapidly, with tools like ChatGPT and Stable Diffusion able to do things that were previously thought impossible.
ChatGPT and similar tools are being used by programmers as companions to help them write code.
But EU policymakers and regulators are worried that generative AI models can make it easier to produce copyright-infringing content, which could hurt artists who rely on royalties for income.
These models are trained on huge datasets of publicly available internet data, much of which is copyrighted.
Earlier this month, MEPs approved the EU AI Act, which seeks to bring oversight to AI in the bloc. It includes provisions to ensure that training data for generative AI tools doesn’t violate copyright laws.
“We have lots of European customers building AI apps on our platform,” Kurian said, “so we’re working with the EU government to make sure we understand their concerns.”
“We’re providing tools, for example, to recognize whether content was generated by a model. That’s just as important as saying copyright is important, because if you can’t tell what was created by a human or by a model, you can’t enforce it.”
AI has become a key battleground in the tech industry as companies compete to develop the technology, particularly generative AI, which creates new content from user prompts.
Generative AI’s ability to produce music lyrics and code has impressed academics and executives alike. But it has also raised concerns about job losses, misinformation, and bias.
Google staffers have even called out the company’s rapid launch of Bard, a chatbot to compete with OpenAI’s ChatGPT, as “rushed,” “botched,” and “un-Googley” on the internal Memegen forum.
Former researchers at Google have also voiced worries about the company’s lack of focus on ethically developing AI tech.
Timnit Gebru, former co-lead of Google’s ethical AI team, raised the alarm over the company’s internal guidelines on AI ethics, and Geoffrey Hinton, the “Godfather of AI,” recently left the company over concerns that its aggressive push into AI was getting out of control.
Google’s Kurian wants global regulators to know the company isn’t afraid of regulation. He told CNBC: “We’ve said quite widely that we welcome regulation.
We do think these technologies are powerful enough that they need to be regulated responsibly, and we’re working with governments in the EU, UK, and many other countries to make sure it happens the right way.”
Rather than writing its own formal AI regulations into law, the UK has introduced a framework of AI principles for regulators to enforce.
The Biden administration and US government agencies have also proposed frameworks for regulating AI.