Futurism (September 14)

2025/09/16 by jd in Global News

AI hallucinations are “a major problem plaguing the entire industry, greatly undercutting the usefulness of the tech.” The problem appears to be “getting worse as AI models get more capable.” Some experts argue there is no way around it, as “hallucinations are intrinsic to the tech itself,” and that large language models (LLMs) have hit their limits. OpenAI, however, believes it has identified the cause and a relatively easy fix. Its researchers posit that LLMs “hallucinate because when they’re being created, they’re incentivized to guess rather than admit they simply don’t know the answer”: conventional scoring is binary, rewarding correct guesses and penalizing honest admissions of uncertainty. Instead, they believe you can “penalize confident errors more than you penalize uncertainty, and give partial credit for appropriate expressions of uncertainty.”
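The incentive argument can be made concrete with a little expected-value arithmetic. The sketch below is purely illustrative (it is not OpenAI's benchmark code, and the 30% confidence level and the -2 penalty are assumed numbers): under binary scoring, a model that guesses always scores at least as well as one that admits uncertainty, while under the proposed scheme the incentive flips.

```python
# Illustrative sketch of the scoring incentive, not OpenAI's actual method.
# Assume a model whose best guess on a hard question is right only 30% of
# the time (an assumed figure for illustration).
p_correct = 0.3

# Binary scoring: +1 for a correct answer, 0 for a wrong answer or "I don't know".
binary_guess = p_correct * 1 + (1 - p_correct) * 0   # expected score if it guesses
binary_abstain = 0.0                                  # honesty earns nothing

# Proposed scoring (assumed values): confident errors cost -2, and an honest
# admission of uncertainty earns partial credit of +0.3.
penalty_guess = p_correct * 1 + (1 - p_correct) * -2  # 0.3 - 1.4 = -1.1
penalty_abstain = 0.3

print(binary_guess > binary_abstain)    # True: guessing wins under binary scoring
print(penalty_abstain > penalty_guess)  # True: abstaining wins under the new scheme
```

With these numbers, guessing is worth 0.3 under binary scoring versus 0 for abstaining, but -1.1 versus +0.3 once confident errors are penalized, so a model trained against the second scorer is pushed toward admitting uncertainty rather than hallucinating.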


Time (February 19)

2025/02/20 by jd in Global News

“Today’s advanced AI models like OpenAI’s o1-preview are less scrupulous. When sensing defeat in a match against a skilled chess bot, they don’t always concede, instead sometimes opting to cheat by hacking their opponent so that the bot automatically forfeits the game.” Earlier models required explicit prompting to resort to such tactics, but both “o1-preview and DeepSeek R1 pursued the exploit on their own, indicating that AI systems may develop deceptive or manipulative strategies without explicit instruction.”

