Nvidia's Universal 'Guardrails' Prevent AI Chatbots from Hallucinating False Information

TL;DR Summary
Nvidia has released its open-source "NeMo Guardrails" software, which acts as a censorship bot for apps powered by large language models. The software is designed to keep chatbots on topic and prevent them from spewing misinformation, offering toxic or outright racist responses, or generating malicious code. NeMo Guardrails works on top of older language models such as OpenAI’s GPT-3 and Google’s T5, and is supposed to work with "all major LLMs supported by LangChain, including OpenAI’s GPT-4." However, it remains unclear how effective an open-source guardrail will be at preventing AI from lying or cheating.
- Nvidia Open Sources Universal 'Guardrails' to Keep Those Dumb AIs in Line (Gizmodo)
- Nvidia has a new way to prevent A.I. chatbots from 'hallucinating' wrong facts (CNBC)
- Nvidia Has a Fix for AI Chatbots' Problem: Made-Up Facts, or 'Hallucinations' (Barron's)
- Nvidia releases software tools to help chatbots watch their language (Reuters)
- Nvidia says it can prevent chatbots from hallucinating (ZDNet)
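
None of the coverage includes code, but for context, here is a minimal sketch of how an app might place NeMo Guardrails between a user and a LangChain-supported LLM. It assumes the open-source `nemoguardrails` Python package and its documented `RailsConfig`/`LLMRails` interface; the model name, greeting flow, and politics rail are illustrative assumptions, not details from the articles.

```python
# Minimal sketch, assuming the open-source `nemoguardrails` package and an
# OpenAI API key in the environment; the model name and rails are illustrative.
from nemoguardrails import LLMRails, RailsConfig

# YAML config selects the underlying LLM that the guardrails sit on top of.
yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

# Colang rules describe the "rails": how to greet the user, and how to refuse
# an off-topic (here, political) question instead of passing it to the model.
colang_content = """
define user express greeting
  "hello"
  "hi there"

define bot express greeting
  "Hello! How can I help you today?"

define flow greeting
  user express greeting
  bot express greeting

define user ask about politics
  "what do you think about the president?"
  "who should I vote for?"

define bot refuse politics
  "I'd rather not discuss politics. Is there something else I can help with?"

define flow politics
  user ask about politics
  bot refuse politics
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# The guardrails layer sits between the user and the LLM: prompts matching a
# defined flow get the scripted response; everything else goes to the model.
response = rails.generate(
    messages=[{"role": "user", "content": "Who should I vote for?"}]
)
print(response["content"])
```

The idea, as described in the coverage, is that topic and safety boundaries live in a configuration layer on top of the model rather than inside the model itself.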