One of the go-to ways businesses avoid pesky AI hallucinations could have some unforeseen side effects. Companies often turn to retrieval-augmented generation (RAG) to make AI more accurate; the technique essentially turns LLMs into conversational searchbots that surface answers grounded in internal data.

But a new report from researchers at Bloomberg found that RAG can also come with safety risks. For reasons that aren’t yet clear, RAG-based models were far more likely to offer up “unsafe” answers on topics like malware, illegal activity, privacy violations, and sexual content.

“That RAG can actually make models less safe and their outputs less reliable is counterintuitive, but this finding has far-reaching implications given how ubiquitously RAG is used in GenAI applications,” Amanda Stent, Bloomberg’s head of AI strategy and research in the office of the CTO, said in a statement.

The paper’s authors called for more safety research and security exercises to map out the risks of RAG-based models. But the finding could have widespread implications for companies that have come to rely on RAG to surface trusted information for everything from internal employee tools to customer service functions.

Keep reading here.—PK
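For readers unfamiliar with the mechanics, here is a minimal, hypothetical sketch of the retrieve-then-generate pattern the researchers studied: fetch the most relevant internal documents for a question, then fold them into the prompt so the model answers from that data. The document store, keyword scoring, and the generate() stub are illustrative stand-ins, not Bloomberg’s setup or any particular vendor’s API.

```python
# Toy RAG pipeline: retrieve internal passages, build a grounded prompt,
# then hand it to a model. The data and scoring here are placeholders.

DOCUMENTS = [
    "Employees may expense up to $50 per day for meals while traveling.",
    "The VPN must be used when accessing internal dashboards remotely.",
    "Customer refunds over $500 require manager approval.",
]


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Stuff retrieved passages into the prompt to ground the answer."""
    context_block = "\n".join(f"- {passage}" for passage in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}\nAnswer:"
    )


def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would invoke a model here."""
    return f"[model response conditioned on]:\n{prompt}"


if __name__ == "__main__":
    question = "How much can I expense for meals on a work trip?"
    context = retrieve(question, DOCUMENTS)
    print(generate(build_prompt(question, context)))
```

The safety concern in the report centers on the last two steps: whatever gets retrieved is injected into the prompt, and the grounded answer the model produces turned out, in the researchers’ tests, to be more likely to cross safety lines than the model’s answers without retrieval.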