As the research laboratory OpenAI drives artificial intelligence (AI) forward with ChatGPT, issues such as data leaks and privacy concerns have surfaced, deepening unease over what the technology can do.
Such concerns led the Italian Data Protection Authority to officially suspend the use of the AI chatbot in Italy on March 31, making Italy the first Western nation to do so.
The Italian watchdog, which is investigating the chatbot, gave OpenAI 20 days to propose measures for protecting users’ data or face a fine of €20 million (US$21.8 million) or up to 4 percent of annual global turnover.
There are two main reasons for the suspension and investigation.
First, a data breach on March 20 exposed some ChatGPT users’ conversation titles and subscribers’ payment information. OpenAI attributed the breach to “a bug in an open-source library which allowed some users to see titles from another active user’s chat history.”
The bug was patched, but there were other issues.
“Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window,” OpenAI said.
There are currently more than 100 million ChatGPT users, but it is unclear how many people use paid ChatGPT Plus.
Second, OpenAI says its service targets people over 13 years old, but it has no age verification in place.
In this regard, the Italian watchdog believes that ChatGPT may expose minors to answers unsuitable for their degree of development and self-awareness.
In addition to Italy, Germany is considering banning ChatGPT, while French and Irish privacy regulators have contacted their Italian counterparts to understand the legal basis for any ban. These moves have prompted EU privacy regulators to consider whether stricter rules are needed to curb the unrestrained development of AI tools such as ChatGPT.
Leaked Data in South Korea
In early April, the South Korean press reported that confidential information at Samsung Electronics had been leaked through employees’ use of ChatGPT. In response, Samsung said it would consider banning the tool.
In February and March, Amazon and Walmart warned employees not to share sensitive information with ChatGPT, while other companies, such as Verizon, JPMorgan Chase, Citigroup, and Goldman Sachs, blocked employee access to the chatbot.
The concerns of companies and governments regarding AI are not limited to leaks, false information, and age verification. Some fear it could also impact the global economy and security.
Criminals might also take advantage of AI’s abilities: at the end of March, Europol warned that criminals could use ChatGPT to enhance their methods. One example given was the chatbot’s ability to generate text that mimics human writing styles, which can be used to fake emails, mount phishing attacks, and spread false information.
In late March, Elon Musk and other tech leaders signed an open letter, released by the nonprofit Future of Life Institute, calling for a six-month moratorium on advanced AI research and development. The letter asked: “Should we let machines flood our information channels with propaganda and untruth?”
Meanwhile, Sam Altman, CEO of OpenAI, admitted that he was “particularly worried that these models could be used for large-scale disinformation,” and that the AI is getting better at writing computer codes that could be used for offensive cyberattacks.
The issue of spreading false information has already arisen. Brian Hood, mayor of a town in Victoria, Australia, warned OpenAI in April that he would sue if ChatGPT failed to correct false information about him within 28 days, in what could be the first defamation lawsuit against the AI chatbot.
ChatGPT named Hood as the perpetrator of a foreign bribery scandal at a Reserve Bank of Australia subsidiary in the early 2000s. In reality, Hood merely worked for the subsidiary, Note Printing Australia, and it was he who reported to the government that the printing company had bribed foreign officials to win a printing contract. Hood was the whistleblower, not the perpetrator.
Also in April, The Guardian came across an unpublished article that closely mimicked its style. An investigation into the article’s source confirmed that it had been fabricated by ChatGPT. The Guardian said it was deeply troubled by the incident and that such fabrications could damage the newspaper’s credibility.
On the subject of false information, Japanese electronic engineer Li Jixin told The Epoch Times in early April: “While AI brings convenience to humans, it also provides criminals with new tools, just as the internet did. We need better morals for humans first, and then we need to strengthen the regulatory measures of AI.”
Potential Chaos Caused by AI
In addition to the potential for AI to become a hotbed of false information and crime, there are many advanced technologies that, if misused or abused, could bring more chaos to society.
Tristan Harris, co-founder of the Center for Humane Technology, told Fox News on March 29 that while the outcome of AI is not yet known, its emergence, like the invention of the printing press in the Middle Ages, has changed people’s lives forever.
“Our democracy, our society runs on language,” Harris said. “Code is language, law is language, contracts are language, media is language. When I can synthesize anyone saying anything else and then flood a democracy with untruths, … this is going to exponentiate a lot of the things that we saw with social media,” he said.
“If you let a machine that runs on viral information, your society can sort of spin out into untruths really, really fast.”
AI-Generated Photos of Trump
On March 18, former U.S. President Donald Trump said on Truth Social that he expected to be indicted by a grand jury in New York. Although the indictment had not yet happened, realistic AI-generated photos of Trump “trying to escape” and “being arrested by police” were widely posted online on March 21. The images fooled some of Trump’s opponents and supporters alike, causing confusion.
AI-generated pictures can cause other harms: one Chinese influencer found that some of her everyday photos had been turned into nude images by a Chinese AI app that removes a person’s clothing with one click. The indecent pictures were then uploaded to the internet.
In January this year, Microsoft researchers successfully developed a text-to-speech synthetic AI model called VALL-E. When VALL-E learns a specific voice, it can synthesize someone’s speech audio with varied emotions, making it sound extremely realistic.
The model can edit a person’s voice recording, even changing or adding words. It can also be combined with text-generation models such as GPT-3, which underpins ChatGPT, and other AI models to create entirely new audio.
“The AI being developed now certainly has many strengths and weaknesses, and it could be abused to bring chaos and disaster to human society,” Kiyohara Hitoshi, a computer engineer from Japan, told The Epoch Times on April 9.
“However, without ethical practices and standards, even the best tools can harm society,” Kiyohara said.
Kane Zhang contributed to this report.