Futurism (September 14)
AI hallucinations are “a major problem plaguing the entire industry, greatly undercutting the usefulness of the tech,” and the problem appears to be “getting worse as AI models get more capable.” Some experts argue there is no way around it, as “hallucinations are intrinsic to the tech itself” and large language models (LLMs) have hit their limits. OpenAI, however, believes it has identified the root cause and a relatively simple fix. Its researchers posit that LLMs “hallucinate because when they’re being created, they’re incentivized to guess rather than admit they simply don’t know the answer”: conventional benchmark scoring is binary, rewarding correct guesses while giving no credit for honest admissions of uncertainty, so guessing always has the better expected payoff. The proposed remedy is to “penalize confident errors more than you penalize uncertainty, and give partial credit for appropriate expressions of uncertainty.”
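The incentive shift is easy to see in miniature. Below is a minimal sketch contrasting conventional binary grading with an uncertainty-aware alternative along the lines the researchers describe; the function names and the exact point values (a 2-point penalty for a wrong answer, 0.25 points for abstaining) are illustrative assumptions, not OpenAI’s published scoring rule.

```python
# Illustrative sketch only: point values are assumptions, not OpenAI's rule.

def binary_score(answer: str, truth: str) -> float:
    """Conventional binary grading: a lucky guess beats honesty."""
    # "I don't know" scores 0, the same as a wrong guess, so guessing
    # is never worse than abstaining.
    return 1.0 if answer == truth else 0.0


def uncertainty_aware_score(
    answer: str,
    truth: str,
    *,
    wrong_penalty: float = 2.0,   # assumed: confident errors cost more than abstaining
    abstain_credit: float = 0.25,  # assumed: partial credit for admitting uncertainty
) -> float:
    """Penalize confident errors more than uncertainty; reward honesty."""
    if answer == "I don't know":
        return abstain_credit   # partial credit for an appropriate admission
    if answer == truth:
        return 1.0              # full credit for a correct answer
    return -wrong_penalty       # a confident error is worse than abstaining


# Compare the two schemes on a correct answer, a wrong guess, and an abstention.
for ans in ("Paris", "Lyon", "I don't know"):
    print(ans, binary_score(ans, "Paris"), uncertainty_aware_score(ans, "Paris"))
```

Under the binary scheme a guess is never worse than abstaining, so a model trained against it learns to always answer; under the alternative, guessing only pays off when the model’s chance of being right is high enough to outweigh the penalty.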
