A postgraduate researcher at the National Institutes of Health (NIH) argued that “hallucinations,” or false information produced by large language models (LLMs), make artificial intelligence (AI) ...