Nearly a quarter of organizations polled in a recent McKinsey report said they had experienced negative consequences from generative AI’s inaccuracy. Guardrails, released last fall by Israel-based startup Aporia, places a collection of small language models between a chatbot and its users; together, these models intercept inaccurate, inappropriate, or off-topic responses while giving companies better privacy controls. The product also blocks attempts to manipulate AI, for example by stopping users who pressure a chatbot into giving them a discount. Liran Hason, Aporia’s co-founder and CEO, says the company’s goal is ensuring humanity “can really trust AI.” Guardrails’ early clients include insurance giant Munich Re and rental car company Sixt.