In a surprisingly good news story, the battle against misinformation that so damaged our democratic processes in the 2016 election cycle may have found a worthy adversary: the United States military.
Bloomberg News reported this week that fake news and social media posts pose such destructive threats to our once-free elections that a massive effort to spot the misinformation will be undertaken by America’s military personnel.
The Defense Advanced Research Projects Agency (DARPA) is seeking new custom software that can unearth fakes hidden among more than 500,000 stories, photos, videos and audio clips.
If successful, the system, after four years of trials, may expand to detect malicious intent and prevent viral fake news from polarizing society.
Our military officials have been working on plans to prevent outside hackers from flooding social channels with false information ahead of the 2020 election.
The much-needed effort faces one obstacle: Senate Majority Leader Mitch McConnell’s refusal to consider election security legislation.
Critics have labeled him #MoscowMitch, saying he left the U.S. vulnerable to meddling by Russia, prompting his retort of “modern-day McCarthyism.”
President Trump has rejected allegations that dubious content on platforms like Facebook, Twitter and Google aided his election win.
“The risk factor is social media being abused and used to influence the elections,” Syracuse University assistant professor of communications Jennifer Grygiel said. “It’s really interesting that DARPA is trying to create these detection systems, but good luck is what I say. It won’t be anywhere near perfect until there is legislative oversight. There’s a huge gap, and that’s a concern.”
Bloomberg reported that false news stories and so-called deep fakes are increasingly sophisticated and harder for data-driven software to spot.
Artificial intelligence has advanced in recent years and is now used by Hollywood, the fashion industry and facial recognition systems.
Researchers have shown that the same generative networks behind those advances can be used to create fake videos.
After the 2016 election, Facebook Chief Executive Mark Zuckerberg played down fake news as a challenge for the world’s biggest social media platform. He later signaled that he took the problem seriously and would let users flag content and enable fact-checkers to label stories in dispute.
In June, he said Facebook made an “executive mistake” when it didn’t act fast enough to identify a doctored video of House Speaker Nancy Pelosi in which her speech was slurred and distorted.
By increasing the number of algorithmic checks, the military research agency hopes it can spot fake news with malicious intent before it goes viral.
These developments hold out hope that America can conduct campaigns and hold elections in its long-standing traditions of truth and honesty.
Using the military — which has highly sophisticated personnel to tackle this problem — is a great idea.
Those who want to allow hackers to toss hundreds of thousands of grenades into our election system can be legitimately labeled un-American.