Facebook and Google enabled fake-news links to Las Vegas shooting hoaxes

Accuracy matters in the moments after a tragedy. Facts can help catch suspects, save lives and prevent panic.

But after the mass shooting in Las Vegas on Sunday, the world’s two biggest gateways for information, Google and Facebook, did nothing to quell criticism that they amplify fake news: both steered readers toward hoaxes and misinformation gathering momentum on fringe sites.

Google posted conspiracy-laden links from 4chan, home to some of the internet’s most ardent trolls, under its “top stories” feature. It also promoted a now-deleted story from Gateway Pundit and served up YouTube videos of dubious origin.

The posts all had something in common: They identified the wrong assailant.

Law enforcement officials have named Stephen Paddock as the lone suspect and have so far pinpointed no motive. But the erroneous articles pointed to a different man, labeling him a left-wing, anti-Trump activist.

Meanwhile, Facebook’s Crisis Response page, a hub for users to stay informed and mobilize during disasters, perpetuated the same rumors by linking to sites such as Alt-Right News and End Time Headlines, according to Fast Company.

“This is the same as yelling fire in a crowded theater,” Gabriel Kahn, a professor at the University of Southern California Annenberg School for Communication and Journalism, said of Google’s and Facebook’s response. “This isn’t about free speech.”

The missteps show how, despite promises and efforts to rectify the problem of fake news with fact checkers and other tools after the 2016 presidential election, misinformation continues to undermine the credibility of Silicon Valley’s biggest companies.

Google and Facebook tweaked their results Monday to steer users toward links from more reputable sources, acknowledging that their algorithms were not prepared for the onslaught of bogus information.

“This should not have appeared for any queries, and we’ll continue to make improvements to prevent this from happening in the future,” a Google representative said about the 4chan link, which surfaced only if users searched for the wrongly identified shooter’s name and not the attack in general.

Facebook did not respond to a request for comment but told Fast Company it regretted the link to Alt-Right News.

“We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused,” Facebook said.

Google, Facebook and Twitter are under growing pressure to better manage their algorithms as more details emerge about how Russia used their platforms to sow discord and interfere in the presidential election.

The platforms have immense influence on what gets seen and read. More than two-thirds of Americans report getting at least some of their news from social media, according to the Pew Research Center. A separate global study published by Edelman last year found that more people trusted search engines (63 percent) for news and information than traditional media such as newspapers and television (58 percent).

Facebook’s and Google’s algorithms are designed to favor the kinds of stories and posts that get the most shares and comments. Promoting those posts drives up engagement, and with it advertising revenue.

But that strategy also helped fuel the spread of fake news during the campaign season, intensifying calls for the platforms to behave more like media companies by vetting the content they promote.

That would require more human management, something tech companies are loath to do given that their very existence is owed to replacing human activity with software.

Still, Facebook has tried to strike a balance. In March, it announced a third-party fact-checking program with PolitiFact, FactCheck.org, Snopes.com, ABC News and the Associated Press. Those partnerships, however, did not stop inaccurate reports from landing on Facebook’s Crisis Response page.

Putting people in charge of content can help tech companies avoid controversy. Snapchat, the messaging app, maintains strict control over news shared on its platform by employing staffers, including journalists, to curate and fact-check its stories. Snapchat attracts far fewer users — and far less content — than Facebook or Google.

Facebook has begun boosting its human oversight team. On Monday, the social network pledged to hire more than 1,000 employees to vet its advertisements for propaganda.

The changes come as lawmakers push Facebook, Google and Twitter to be more forthcoming in the investigation into Russian election meddling.

On Monday, Facebook gave congressional committees more than 3,000 ads purchased during the 2016 election campaign by a firm with ties to Russian intelligence. In a blog post, the company said an estimated 10 million people in the U.S. saw the ads. Last week, Twitter briefed Congress on the number of fake accounts run by Russian operatives. And Google said it would conduct an internal investigation into Russian interference. (In a separate move to placate news organizations, the search giant said Monday it would tweak its policies to help publishers reach more readers.)

Still, skepticism abounds that companies beholden to shareholders are equipped to protect the public from misinformation, or even to recognize the threat their platforms pose to democratic societies. Now calls are growing to regulate the companies more strictly; as platforms, they aren’t liable for most of the content they distribute.

“These algorithms were designed with intent and the intent is to reap financial reward,” USC’s Kahn said. “They’re very effective, but there’s also collateral damage as a result of designing platforms that way.

“It’s not good enough to say, ‘Hey, we’re neutral. We’re simply an algorithm and a platform.’ They have a major responsibility that they still have not fully come to terms with.”