ChatGPT’s testing phases pretty quickly revealed that the OpenAI chatbot and others like it could go off the rails – or “hallucinate” – with enough poking and prodding.
OpenAI’s sticking plaster was to limit the number of queries a user could make before the chatbot descended into madness.
Now, for businesses perhaps wary of AI’s wacko tendencies, GPU giant Nvidia has released open source software it claims can keep large language models (LLMs) on topic, ensure they give accurate information, and stop them connecting to unsafe apps. The idea, supposedly, is to prevent enterprise deployments from going totally off the rails.
NeMo Guardrails was released yesterday.
Nvidia said it comes as “many industries are adopting LLMs, the powerful engines behind these AI apps.
They’re answering customers’ questions, summarizing lengthy documents, even writing software and accelerating drug design.
NeMo Guardrails is designed to help users keep this new class of AI-powered applications safe.”
The software is said to work with any LLM and, because it is open source, operates alongside “all the tools that enterprise app developers use,” including LangChain, another open source toolkit that helps plug third-party apps into LLMs, and business automation platform Zapier.
“Virtually every software developer can use NeMo Guardrails – no need to be a machine learning expert or data scientist. They can create new rules quickly with a few lines of code,” Nvidia said.
These rules intercept questions before the chatbot can come up with any old nonsense, and can even force the AI to respond with “I don’t know” instead of presenting something convincing but ultimately false.
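Under the hood, NeMo Guardrails expresses these rules in a small dialogue-modeling language called Colang. Nvidia hasn’t published the snippet below – it’s a minimal illustrative sketch, with made-up message and flow names, of what a topical rail might look like:

```
# Hypothetical Colang sketch: deflect off-topic questions before
# the LLM gets a chance to improvise an answer.

define user ask about politics
  "What do you think of the government?"
  "Who should I vote for?"

define bot refuse politics
  "Sorry, I can't discuss politics. I'm here to help with our products."

define flow
  user ask about politics
  bot refuse politics
```

The example-utterance lines under `define user` act as training samples: incoming questions that look semantically similar are matched to that intent, and the flow then forces the canned refusal instead of letting the model generate a reply.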
Though the software is available on GitHub, Nvidia will also offer it as a supported package via the Nvidia AI Enterprise platform and Nvidia AI Foundations cloud services.
The company will continue to develop and improve NeMo Guardrails “as AI evolves”; the software, it said, is the product of several years of research.
Speaking to the press on Monday, Jonathan Cohen, Nvidia’s vice president of applied research, said he believes NeMo Guardrails’ ability to “detect and mitigate hallucination” could be the answer to the tech’s teething problems.
One such hallucination wiped $120 billion off Alphabet’s value because its Bard chatbot incorrectly claimed in a demo that “JWST took the very first pictures of a planet outside of our own solar system.”
It’s also not always good at coding. Really, the technology’s flaws are too many to list, but that hasn’t dampened enthusiasm among consumers and businesses, for better or worse.
All the same, Nvidia has ridden the AI wave to great financial success.
Thanks to hardware optimized or even designed specifically for AI workloads, Nvidia’s datacenter unit is now bigger than the entire company was in 2020.
This attempt to rein in AI looks less like a pang of guilt than another revenue stream: the supported offerings will no doubt help fill Nvidia’s coffers further.