What measures must be taken by businesses to feel secure in entrusting AI chatbots with confidential data?


At a dinner recently, Marc Benioff, the CEO of Salesforce, was given an awe-inspiring demonstration by Sam Altman of OpenAI.

After Benioff spoke for a few moments, one of OpenAI’s large language models (LLMs) was able to reproduce his voice and generate audio of a speech originally given by John F. Kennedy.

When Benioff asked Altman to show him the file where the voice was stored, Altman explained that the AI produced the output by “weighting” the data in the hidden layers of its artificial neural network, meaning there was no way to see where the information existed.

Even though Benioff himself was unperturbed, he assumed that his customers – particularly those in tightly regulated industries such as banking – would be anxious, saying that “they like to know exactly where that data is housed and they want to audit those places.”

AI and the pitfalls for data privacy

Executives, boards, and corporate legal teams worry that employees’ use of AI chatbots such as OpenAI’s ChatGPT and Google’s Bard, in the quest for productivity gains, will put data privacy at risk.

It is understandable to be concerned about how personal information might be shared with a chatbot used to draft tailored emails, reports, and customer service responses, since doing so opens the door to data being compromised.

A data privacy breach is not the only risk in using AI: hallucinations, bias, and toxicity can also occur. Organizations working with the technology need to keep these issues, which Benioff has acknowledged, in mind.

Salesforce is pushing its trust layer

It comes as no surprise that Salesforce is emphasizing staff trust and data protection as one of its AI selling points.

Its “Einstein GPT Trust Layer” technology keeps customer data separate from the deep learning models used in generative AI.

Protecting data has been central to the company since its founding in 1999, when it set out to make information sharing secure.

It maintained that focus with the release of private predictive training models in 2016. Now, with generative AI, extra precautions are necessary, because businesses may unintentionally leak data if they do not take proper action.

Salesforce’s answer is its AI “starter pack,” which costs $360,000 per year.

That price tag suggests that generative AI remains uncharted territory, even for Salesforce.
