
Deepfake Technology Is Now 'One of the Greatest Challenges We Face,' Expert Tells Lawmakers

Artificial intelligence is developing faster than rules and regulations can keep pace.

At least 30 female students at a New Jersey high school were recently victimized by a classmate who used AI to put their faces on pornographic images and shared them online. Now the students and their families are looking for accountability from officials at the local, state and federal levels.  

The incident was just one of the many examples Republican Rep. Nancy Mace of South Carolina discussed at Wednesday's hearing of the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government, titled "Advances in Deepfake Technology."

While AI deepfakes can be useful in the entertainment industry and in advancing medical research, the technology can also be weaponized, she explained.

“It can be used to make people appear to say or do things that they have not actually said or done. It can be used to perpetrate various crimes, including financial fraud and intellectual property theft. And it can be used by anti-American actors to create national security threats,” Mace noted. 

One company that studies deepfakes has determined that about 90 percent of them are being used to generate pornographic material, Mace added, making this an urgent issue and one the attorneys general of 54 states and territories are calling on Congress to address, particularly as it relates to the generation of child sexual abuse material.

Mace said she's not interested in banning fake images and videos, but when fact and fiction become indistinguishable, "we can't ensure our laws are enforced or that our national security is preserved."

Deepfakes are also being used to spread disinformation from war zones.

"Videos purportedly taken from on the ground in Israel, Gaza and Ukraine have circulated rapidly around on social media – only to be proven inauthentic," Mace said. "One AI-generated clip showed the Ukraine president urging troops to put down their arms."

Witness Mounir Ibrahim, executive president of Truepic, a technology company focused on transparency and authenticity in digital content, said that in his previous work with the UN he saw images from conflict zones constantly being questioned as fake or altered, and that was before generative AI.

“Today, this strategy for undermining reality is now commonly referred to as the 'Liar’s Dividend,'" he explained. "Bad actors benefit from the rapid increase in fake and manipulated imagery. It makes their false claims that a real image or video is fake more believable, giving them the ability to sow doubt in what we see and hear online.”

But given that nearly every aspect of life has been digitized, the problem of determining what is real and what is fake is one everyone should be interested in addressing.

There isn't a "silver bullet," Ibrahim said, which means what's truly needed is a "transparent ecosystem for digital content."

“In my opinion, this is one of the greatest challenges we face today," he said. "Some estimates are that in one to two years, 90 percent of new digital content created online will be wholly or partially synthetic. Without wide adoption of interoperable standards to clearly differentiate authentic content, AI-assisted, and fully generated content, our entire informational ecosystem will be at risk.”

A legislative fix is one tool, but Ibrahim said it won't be enough. Work on content provenance is already being advanced, while other stakeholders are exploring different remedies.
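The core idea behind content provenance is that a publisher cryptographically binds a signature to a piece of content at creation, so any later edit is detectable. The sketch below is illustrative only, not Truepic's system or any real standard (production schemes such as C2PA use asymmetric signatures and embedded metadata); it uses a hypothetical shared key and an HMAC as a stand-in to show the tamper-evidence principle.

```python
import hashlib
import hmac

# Hypothetical signing key held by the content's publisher.
# Real provenance systems use asymmetric key pairs instead.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a tamper-evident signature over the content's hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check that the content still matches its recorded signature."""
    return hmac.compare_digest(sign_content(content), signature)

original = b"raw camera image bytes"
sig = sign_content(original)

print(verify_content(original, sig))         # unmodified: True
print(verify_content(original + b"x", sig))  # any edit: False
```

The point is not the specific primitives but the workflow: authenticity is asserted at capture time and checked at display time, rather than inferred after the fact from how an image looks.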

The hearing came just weeks after President Biden signed an executive order establishing new standards to safeguard Americans from the dangers of AI technology. 
