OPINION

Can a Trusted AI System Be Developed?

The opinions expressed by columnists are their own and do not necessarily represent the views of Townhall.com.

Artificial intelligence is changing everything from how we search for information to how we make decisions. But with that transformation comes a troubling question: Can we trust the answers and conclusions AI offers?


There are good reasons to think we cannot. Earlier this year, major newspapers published a list of recommended “beach reads” for the summer, complete with catchy summaries and clever author bios. The only problem? Every single book on the list was made up.

A few months before that, a New York lawyer was sanctioned for filing a legal brief full of citations to court cases that did not exist. The culprit wasn’t laziness or malice; it was ChatGPT, which had simply invented case law out of thin air.

These weren’t isolated incidents. From fake health advice to imaginary sources, AI systems have a growing reputation for making things up while sounding perfectly confident. This phenomenon, known as a “hallucination,” is more than an embarrassment; it’s a threat to credibility, law, and even democracy itself.

The problem begins with data. Artificial intelligence systems don’t “know” facts the way humans do; they detect patterns in the massive amounts of text, images, and numbers they’re fed. If the data is biased, outdated, or false, the system simply amplifies the problem at machine speed.

Tech companies like to call this an engineering challenge, but it’s really a civic one. As AI becomes part of journalism, education, government, and law, misinformation produced by algorithms can distort the very record we rely on to make collective decisions.

A handful of data reformers are trying to change the conversation from one about smarter algorithms to one about verifiable information. Dr. Jose M. Plehn, who founded the company BrightQuery, is devoted to building a “factual infrastructure” for AI.


A “factual infrastructure” makes AI accountable by relying on real-world evidence. For example, BrightQuery collects legal, regulatory, and tax filings from more than 100,000 jurisdictions worldwide, then converts them into structured, searchable data. The result is an enormous economic map that ties millions of companies, individuals, and locations to verified public records.
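
A minimal sketch in Python shows what this kind of pipeline could look like: a raw filing is normalized into a structured record that stays tied to its source document. The schema and field names here are illustrative assumptions, not BrightQuery’s actual design.

```python
from dataclasses import dataclass

@dataclass
class CompanyRecord:
    """A structured record derived from a public filing (hypothetical schema)."""
    company_id: str    # stable identifier assigned during normalization
    name: str
    jurisdiction: str  # e.g., "US-DE" for Delaware
    filing_type: str   # e.g., "annual_report" or "tax_filing"
    source_url: str    # where the original filing was retrieved

def normalize_filing(raw: dict) -> CompanyRecord:
    """Convert one raw filing, as scraped, into a normalized record."""
    return CompanyRecord(
        company_id=raw["registration_number"].strip().upper(),
        name=raw["entity_name"].strip(),
        jurisdiction=raw["jurisdiction_code"],
        filing_type=raw["document_type"],
        source_url=raw["retrieved_from"],
    )

# A simple in-memory index keyed by company_id; a production system would
# use a database, but the principle is the same: every record stays linked
# to the public document it came from.
index: dict[str, CompanyRecord] = {}
record = normalize_filing({
    "registration_number": "de-1234567",
    "entity_name": "Example Holdings LLC ",
    "jurisdiction_code": "US-DE",
    "document_type": "annual_report",
    "retrieved_from": "https://example.gov/filings/1234567",
})
index[record.company_id] = record
```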

This creates a system grounded in a reliable database of verified information. “Every claim made by a model should be traceable to a verifiable record,” says Plehn, who works with the federal National Secure Data Service (NSDS), an effort to make U.S. government data more accurate, transparent, and usable across agencies.
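
As a rough illustration of that principle, not a description of any published system, a traceability check might pair each claim a model emits with a record ID and flag anything that cannot be resolved against the verified store:

```python
# Hypothetical sketch: claims that trace to a verified record pass; claims
# without a resolvable source are surfaced as unverified, not repeated as fact.

verified_records = {
    "REC-001": "Acme Corp filed its 2023 annual report in Delaware.",
}

claims = [
    {"text": "Acme Corp filed a 2023 annual report.", "record_id": "REC-001"},
    {"text": "Acme Corp employs 50,000 people.", "record_id": None},  # unsourced
]

for claim in claims:
    rid = claim["record_id"]
    if rid in verified_records:
        print(f"TRACEABLE  -> {claim['text']} (source: {rid})")
    else:
        print(f"UNVERIFIED -> {claim['text']}")
```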

Key to this approach is something called data provenance: knowing exactly where data originates, how it was processed, and under what authority it was published. Without it, even the most advanced AI operates “in epistemic fog.”
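
In practice, provenance travels with the data as metadata. A minimal sketch, with field names assumed purely for illustration, might capture exactly those three things: origin, processing history, and publishing authority.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Provenance:
    """Minimal provenance metadata: where the data originated, who published
    it, and every processing step applied along the way."""
    origin: str      # original source of the data
    authority: str   # the body under whose authority it was published
    retrieved_at: datetime
    processing_steps: list[str] = field(default_factory=list)

    def add_step(self, description: str) -> None:
        """Append a processing step so the full history stays auditable."""
        self.processing_steps.append(description)

prov = Provenance(
    origin="https://example.gov/filings/1234567",
    authority="Delaware Division of Corporations",
    retrieved_at=datetime.now(timezone.utc),
)
prov.add_step("parsed PDF to text")
prov.add_step("normalized entity name")
print(prov.processing_steps)  # the audit trail for this dataset
```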

This concept is gaining traction in global standards like the FAIR principles (Findable, Accessible, Interoperable, Reusable) and in efforts by OASIS Open to establish common provenance rules. The goal is to make every dataset auditable and every AI-generated statement traceable to a credible source. “If you can’t trace the data,” Plehn explains, “you can’t trust the result.”
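
As a toy example only (real FAIR assessment is far more involved, and these metadata fields are assumptions for the sketch), one could spot-check a dataset’s metadata against the four letters:

```python
# Hypothetical FAIR spot-check: each letter maps to a simple metadata test.

def fair_report(meta: dict) -> dict:
    return {
        "findable": bool(meta.get("persistent_id")),   # e.g., a DOI
        "accessible": bool(meta.get("access_url")),
        "interoperable": meta.get("format") in {"csv", "json", "parquet"},
        "reusable": bool(meta.get("license")) and bool(meta.get("provenance")),
    }

print(fair_report({
    "persistent_id": "doi:10.1234/example",
    "access_url": "https://example.org/data.csv",
    "format": "csv",
    "license": "CC-BY-4.0",
    "provenance": "collected from state corporate filings, 2024",
}))
# {'findable': True, 'accessible': True, 'interoperable': True, 'reusable': True}
```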

Plehn took his efforts even further by launching OpenData.org, a nonprofit that makes parts of BrightQuery’s massive database freely available to the public. Journalists, researchers, and policymakers can now access the same verified information that major institutions use. “Open data is just as vital to democracy as the right to free speech,” he said at the project’s debut.


Artificial intelligence will soon play a role in every major institution: law, finance, journalism, education, and beyond. If we don’t build systems that can prove their own accuracy, we risk a future in which truth itself becomes optional.

The challenge before us isn’t just to make AI more powerful. It’s to make it honest. And that starts not with algorithms, but with the data they depend on.

Until we demand verifiable, auditable information like that championed by the growing factual-data movement, AI will continue to speak confidently while knowing very little. The question is whether we’ll keep listening.



Frank Salvato is an independent journalist and new media analyst who writes on public issues and emerging technology with a focus on accountability, legality, and the advancement of public knowledge.

