Intellectual property at risk if staff use of AI goes unchecked

An engineer at Samsung who used ChatGPT accidentally leaked confidential source code

Businesses are being warned their intellectual property is at risk if staff are allowed to use artificial intelligence without proper checks and balances.

Earlier this year, global tech giant Samsung reportedly banned staff from using generative AI tools, after an accidental leak of confidential source code by an engineer who used ChatGPT.

Mark Presnell of eCommerce integration firm Convergence said submitting queries to platforms like ChatGPT effectively put that information into the public domain.

He said companies needed to ensure staff were aware of the risks.

“Personally, I think those safeguards are not in place. I think this needs to be addressed through education.”

Companies needed to identify what was and was not sensitive, and what did and did not constitute intellectual property, he said.

“Perhaps [companies should be] updating employment agreements, particularly clauses relating to confidential information and how that needs to be handled,” Presnell said.

People needed to understand that tools such as ChatGPT or other AI platforms were open, he said.

“It’s a tool, much like the early mobile phones, meant to facilitate our efficiencies. However, as we once transitioned from company-monitored phone calls to personal mobiles in the workplace, we are now navigating a similar shift with AI.

“And with this shift comes a blurred boundary that puts our company’s most valuable assets, our intellectual property, at risk,” Presnell said.



