Despite the efforts of government watchdogs, news organizations and social media sites, the vast majority of Twitter accounts suspected of spreading disinformation and “fake news” in the 2016 presidential election are still operative with just weeks to go before midterm voting.
That’s the bottom-line conclusion of researchers from the Knight Foundation, George Washington University and social media research firm Graphika in an analysis that looked at over 10 million messages from 700,000 Twitter accounts linked to more than 600 disinformation and conspiracy news outlets.
The study, the largest of its kind to date, suggests that Silicon Valley’s leading social media firms face major technical and logistical hurdles in attempting to stop intentionally false news stories and rumors from distorting political debate across their platforms this campaign season. The suspect accounts persist despite a Twitter crackdown this summer, most prominently the banning of Infowars head Alex Jones from the messaging service.
“I am hesitant to suggest more government regulation, but we are at a critical point,” said Richard Zack, head of Our.News, which describes itself as a nonpartisan misinformation filter web tool. “More transparency from the social networks is needed in every aspect of what they’re doing.”
Backing up anecdotal complaints from social media critics, researchers from the Knight Foundation and the other organizations essentially concluded that the sheer volume of data that companies such as Twitter and Facebook process by automated means prevents them from knowing what exactly is on their platforms.
In the report, issued late last week, the researchers documented “a concentrated fake news ecosystem” of bogus accounts, which “are densely connected” with “countless paths to spread.”
Special counsel Robert Mueller’s investigation into Russian election meddling this year charged a Kremlin-linked, St. Petersburg-based internet “troll factory” with waging a social media influence operation designed to disrupt the 2016 debate and widen divisions within the American electorate. U.S. intelligence agencies have concluded that the Russians tried to interfere in the 2016 vote to hurt Democrat Hillary Clinton’s electoral chances.
The bot problem
Although covert Kremlin propagandists were part of the problem, analysts say, the prevalence of robotlike automated accounts, or “bots,” poses a bigger threat to the validity of the vote this year and beyond.
“Most of the accounts spreading fake or conspiracy news,” Knight Foundation researchers said, “show evidence of automated posting.”
Propaganda, tall tales from campaign trails and outright lies date back to the nation’s founding, but the chaos of the 2016 disinformation campaign still stings.
A poll conducted earlier this year by Harvard found that 68 percent of U.S. voters say that there is “a lot” of fake news in the mainstream media and 84 percent say it’s difficult to know what online news to believe.
Last month, another report on social media from Stanford and New York University explored almost 600 websites known to produce fake news, tracking how often those sources were shared on Facebook or Twitter from January 2015 to July 2018.
Researchers found that in the lead-up to the 2016 election, “fake news” stories surged across both platforms, but afterward their spread on Facebook fell by more than 50 percent. Shares of those sources on Twitter, however, continued to increase.
The CEOs of both firms, Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey, have testified about the problem to Congress. Google’s top executives have declined to come to Capitol Hill, underscoring the tension between Washington and Silicon Valley over how to deal with the issue and over the social media platforms’ responsibility to police what runs on their sites.
Tech industry and Washington analysts say Twitter and Facebook are taking real steps to curb the spread of misinformation by limiting its supply, including banning ads from pages that they conclude have repeatedly created or shared false stories.
They are also changing their algorithms to favor news from established, trustworthy publications, a controversial shift given the perception that many in Silicon Valley lean heavily to the left politically and harbor a bias against conservative news outlets and politicians.
Connecticut, New Mexico, Rhode Island and Washington state have enacted laws encouraging media literacy and digital citizenship, but no fake news legislation has been passed at the national level. Polls suggest voters’ overall awareness is up despite the difficulty of spotting and stopping fake news.
Last week, Mr. Zack was in Washington at the Newseum and Freedom Forum Institute for a panel on how to increase the credibility of information in this age of online disinformation.
His firm, Our.News, essentially crowdsources credibility by encouraging readers to rate stories and sources for trustworthiness. Other organizations have sprung up to encourage online news integrity and self-policing of sources.
“Fact-checking is something most people can do,” Mr. Zack said. “But it takes a ton of time. We are trying to help people look behind the news, which these days is more complicated than ever. But it absolutely has to be done.”