
Charles Whitaker

"People don't fact check"

To some extent, I think this is a learning curve. I had to learn to fact-check journalists, even those from established newspapers. After all, surely AI is no less reliable than Fox News, MSNBC, the NYT (on Gaza), or the SCMP (on Chinese science), and many people still take what these outlets say as reliable. I still see serious papers and articles quoting news reports on China's social credit system or China's "debt trap" for Africa as reliable sources for the "truth" of these "facts". LOL. Surely AI cannot be MORE reliable than these "standard bearers" of "truth".

I appreciate that this "inaccuracy" is different from making up facts or hallucinating, but at the end of the day, whether it is a human making up facts by telling a lie (bogus research or politically motivated propaganda masquerading as news) or an AI making up facts by hallucinating, the end result is the same: you need to either fact-check or discount until proven true.

What I am interested in is whether it is even remotely possible for AI to be accurate and not hallucinate, given that AI has no way of establishing facts on its own. It does not know what constitutes a fact, cannot refer to or assess primary materials and sources, and has no system for assessing the accuracy of anything fed to it. So how could AI ever be useful as a definitive research tool?

