'According-to' Prompting Language Models Improves Quoting from Pre-Training Data
Large Language Models (LLMs) may hallucinate and generate false information, despite being pre-trained on factual data. Inspired by the journalistic device of attributing claims "according to sources", researchers at Johns Hopkins University in the U.S. propose 'according-to prompting': directing LLMs to ground their responses against previously...
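In practice, according-to prompting amounts to prepending (or appending) a grounding directive to the user's question before sending it to the model. The sketch below illustrates the idea; the directive wording and the `source` parameter are assumptions for illustration, not the paper's exact phrasing.

```python
def according_to_prompt(question: str, source: str = "Wikipedia") -> str:
    """Wrap a question with a grounding directive, steering the model
    toward text it can quote from the named source.

    Note: the directive text here is illustrative, not the exact
    wording used in the paper.
    """
    directive = (
        f"Respond to the following question using only information "
        f"that can be quoted from {source}."
    )
    return f"{directive}\n\nQuestion: {question}"


# Example: build a grounded prompt for a factual question.
prompt = according_to_prompt("Who wrote 'Pride and Prejudice'?")
print(prompt)
```

The resulting string is then passed to the LLM as-is; no model fine-tuning is required, which is what makes the technique attractive as a lightweight mitigation for hallucination.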