Deep Fakes: Seeing is Not Believing
“What the eyes see, and the ears hear – the mind believes” – Harry Houdini
In 2017, the term ‘Deep Fakes’ (usually written ‘deepfakes’) was the fashionable buzzword at IBM for the latest in influence capabilities using Artificial Intelligence (AI). The possibilities for pervasive disinformation were obvious then, but I didn’t think they would arrive this soon. Deepfake technology has improved immensely in a relatively short period of time. Society and regulators will soon be in reaction mode, playing catch-up with the new problems accompanying this innovation.
Most info-savvy people are familiar with media spin and the polarization of news along the left-right axis. The terms disinformation, misinformation, spin control, narrative, and so forth are part of our daily lexicon. If you agree that the search for truth is difficult now, wait until you see what’s coming. By 2030, we will all be living in a world of disbelief.
By the way, we’re not talking about Photoshopping pictures, superimposing celebrity faces on photos like the one of Elvis with North Korean dictator Kim Jong Un below, or the existing practice of pasting celebrity starlets’ faces onto porn photos. No sir, we’re way beyond that now. Deepfakes are on another level. This is going to be a game-changer. Hopefully, it won’t lead to our demise.
Deepfakes are defined as AI-doctored audio and video footage depicting visuals and events that never occurred. Typically, a deepfake is a video in which a person’s face or body has been digitally altered to appear to be someone else’s, often for malicious purposes or to spread false information. This technology can make people believe something is real when it is not.
Why it matters
If the viewing public cannot discern reality, what are we reduced to? Cybercriminals and foreign governments are stocking up on the AI capabilities that will define the next generation of conflict. Meanwhile, automation and the rise of fake information are stirring unrest. Together these forces can turn society upside down.
The current anti-Artificial Intelligence narrative is generally aimed at the progression of robotics. The doomsayers point to the movie ‘Terminator,’ in which AI got out of control and pitted mankind against the machines. But we, as a global society, are more likely to be blindsided first by the secondary effects of AI’s other offspring: deepfakes.
A very real-looking video of a world leader making incendiary threats could, if widely believed, set off a trade war, a conventional war, or worse. Advances in deepfake technology make it easy to manipulate footage to depict President Biden saying, “Vladimir Putin has 48 hours to vacate the Kremlin, or the U.S. will launch a nuclear strike.” If the Russian president believed the footage to be authentic, he might well launch a first strike. Visualize that concept in the context of President Bush’s 2003 globally televised ultimatum to Saddam Hussein: current deepfake technology can replace President Bush’s face with President Biden’s and alter the audio so that President Biden’s voice threatens Vladimir Putin. The public viewing this on TV would not know it was fake, and neither would Mr. Putin. That could have dire consequences for life as we know it. Another danger is that as deepfake technology spreads, people will become unwilling to trust any video or audio evidence. Imagine the impact of that in the courtroom; I can see the prosecuting attorney squirming now. Or imagine a fake video causing widespread rioting. Given recent events, the latter is not difficult to fathom.
Businesses that design fake videos for hire will bloom; a few already exist. The campaign that hires the best deepfake company will have an advantage. Expect to see dead movie icons promoting products in voices and characters that look real. Visualize John Wayne and Kevin Costner together in an advertisement for Stetson cowboy hats. Then visualize the surge in Stetson’s market share.
Activists will be able to whip up a frenzy with fake inflammatory footage, fake press-conference remarks, and the like. News agencies will unwittingly broadcast the deepfake material because they can’t tell the difference. Social media-induced demonstrations have existed for years; think Arab Spring. An accompanying video is worth a thousand words, and properly crafted, a deepfake video can change the trajectory of society. The prospect of deepfake video scams frolics about in my crystal ball. At the very least, it will pose innumerable difficulties for law enforcement.
Basic artificial intelligence applications have become accessible to the public in the past year, opening vast opportunities for creativity as well as confusion. With campaigning already underway for the 2024 presidential election, the impact of this technology is already in the limelight. And what about foreign countries [think Russia] using these tools to sway public opinion more effectively going forward?
Just recently, presidential candidate Ron DeSantis’s campaign shared AI-generated images of Donald Trump hugging Anthony Fauci. [You know how fond they are of each other.] A few weeks earlier, an AI-generated image of an explosion at the Pentagon caused a brief stock market dip and a statement from the Department of Defense. That is how far along we are now; wait a couple of years, and everyone will be second-guessing the validity of everything they see.

[Screengrabs from an ad campaign for Ron DeSantis featured on the DeSantis War Room Twitter account. Images are AI-generated.]
Several Hollywood stars have recently expressed concern about their likeness being used and about the prospects of having their likeness superimposed on other characters from existing films. This kind of edit makes us wonder what the future of film could look like using this technology. Imagine being able to choose your preferred actor to play the lead in any film you’re watching or, better yet, input your own likeness into the film. Wild possibilities.
Identifying Deep Fakes (Content Authentication)
The flip side of creating deepfakes is the ‘fact-checking,’ or identification, of fake visuals – referred to as content authentication. While AI can be used to make deepfakes, it can also be used to detect them. As deepfake technology becomes accessible to any computer user, more and more researchers are focusing on deepfake detection with regulation in mind. This creates the need for many more professionals in this line of work – especially since the technology is spreading through society so swiftly.
Large corporations like Facebook and Microsoft have taken initiatives to detect and remove deepfake videos. Presently, you can often spot slight visual miscues in deepfake videos, such as mismatched ears or eyes, facial borders that don’t look right, or improper lighting and shadows. Detecting these flaws is getting harder as deepfake technology advances and videos look more realistic. And just as fact-checking takes time, there is a lag between when a deepfake video is released and when it can be authenticated. By then, decisions have been made, and the world has moved on. Of course, the question then becomes: are people more likely to believe a deepfake or a detection algorithm that flags the video as fabricated? Let’s put it this way: you and I won’t be able to tell it’s fake.
Initially, the authentication capability will be in the hands of a few firms, but over time, new ‘authentication apps’ will become available to the public. When breaking footage is passed around social media, our first question to each other will be, “Has it been authenticated yet?” That is the point of disbelief.
Technological advancement leading us astray
The lightning speed with which high-tech disinformation now spreads around the globe is already alarming. Deepfake videos make it even harder to discern fact from fiction. Being unable to believe what you see as heightened tensions unfold in real time is a scary thought. When the information you rely on is rife with misinformation and you can’t believe what you see – you have a problem. When that dilemma is universally prevalent – we all have a problem. Once the line between real and fake is erased, truth itself will not exist. If what your eyes see and your ears hear can no longer be trusted, then everything becomes suspect, and we will lose confidence in anything and everything.

