Google updates algorithm to filter out Holocaust denial and hate sites

Under increasing pressure over the spread of fake news, some social media and web search companies are moving to prioritize the integrity of the information in their results.

[Photo: The Google logo is seen at the Google headquarters in Brussels. Virginia Mayo/AP/File]

December 21, 2016

Most internet users see Google as a simple portal to all information online. But recently, users searching for terms related to the Holocaust or ethnic minorities noticed a disturbing trend: top results often led to hate-filled sites.

To correct this problem and stem the flood of misinformation reaching users of its popular search engine, Google has changed its search algorithms to prioritize high-quality information and demote sites associated with racial hate speech, and has removed anti-Semitic auto-fill suggestions.

Google has shown reluctance to change its algorithms in the past, preferring to prioritize whatever pages generated the most online sharing and discussion. But instead of providing objective results, Google's algorithms were being manipulated to amplify misinformation and hate speech, reported The Guardian's Carole Cadwalladr in early December. 

The changes come after reports that Google's auto-fill suggestions for the search query "are Jews" included "are Jews evil?" In addition, the top result for the search "did the Holocaust happen" linked to a page from Stormfront, an infamous white supremacist group, and searches related to various ethnic minorities often surfaced other sites espousing racist views.

"Judging which pages on the web best answer a query is a challenging problem and we don't always get it right," a Google spokesperson told Fortune. "We recently made improvements to our algorithm that will help surface more high quality, credible content on the web. We'll continue to change our algorithms over time in order to tackle these challenges."

While the Fortune article indicated that the new algorithm had replaced the Stormfront result with a link to the United States Holocaust Memorial Museum, this reporter's search still found the white supremacist group in the number one spot, indicating that the changes may not have fully taken effect yet.

The apparent increase in hate speech and the glut of fake news, brought to national attention during the presidential election in particular, have caused many to step back and soberly reevaluate the internet's role in shaping perceptions of reality. According to a Pew Research Center poll, four out of ten Americans now get news online, underscoring the influence such sites can wield.

"Companies that control large segments of the internet, such as Google and Facebook, create 'filter bubbles' because of the algorithms used to present us with data tailored to our habits, beliefs, and identities," Melissa Zimdars, a professor of communications and media at Merrimack College, who has catalogued fake news sources, tells The Christian Science Monitor in an email.


"Our behaviors on the internet create a tremendous amount of data about us, and that data is used to tailor search results and our Facebook feeds based on what these companies perceive we want rather than what we may need," she explains.

Over thousands of interactions, this system encourages more sensational stories and websites to pop up in suggested feeds, regardless of their accuracy or origins. 
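The dynamic Zimdars describes is simple enough to capture in a few lines of code. The sketch below is purely illustrative, with invented names and numbers rather than any company's actual ranking system: a toy feed that scores stories only by accumulated engagement, so provocative items climb while accuracy never enters the calculation.

    # Purely illustrative: a toy feed ranked only by accumulated engagement.
    # All names and numbers are invented; this is not Google's or Facebook's
    # actual algorithm.
    from dataclasses import dataclass

    @dataclass
    class Story:
        title: str
        sensationalism: float  # 0.0-1.0: how provocative the framing is
        accuracy: float        # 0.0-1.0: never consulted by the ranker
        engagement: float = 0.0

    def simulate_feed(stories, rounds=1000):
        """Each round, rank by engagement; better-placed and more
        sensational stories attract disproportionately more clicks."""
        for _ in range(rounds):
            ranked = sorted(stories, key=lambda s: s.engagement, reverse=True)
            for position, story in enumerate(ranked):
                # Clicks fall off with position and rise with sensationalism;
                # note that accuracy plays no part in the score.
                story.engagement += story.sensationalism / (position + 1)
        return sorted(stories, key=lambda s: s.engagement, reverse=True)

    stories = [
        Story("Measured, accurate report", sensationalism=0.2, accuracy=0.9),
        Story("Outrage-bait rumor", sensationalism=0.9, accuracy=0.1),
    ]
    for story in simulate_feed(stories):
        print(f"{story.title}: engagement={story.engagement:.1f}")

Run it, and the rumor locks in the top slot within a few rounds: its early edge in clicks earns it better placement, which earns it still more clicks, the feedback loop the "filter bubble" critique points to.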

For some, the idea of filtering out inaccurate top results smacks of censorship. But when thinking about what that means, it's important to remember that "most censorship and filtering – at least, in the US – is usually self-imposed," Nicholas Bowman, a professor of communication studies at West Virginia University, explains in an email to the Monitor. "Movie and TV ratings, for example, are set by industry groups, as was the old Comics Code Authority. Essentially, these forms of entertainment were threatened with government sanction and standards unless they themselves could find a way to self-regulate information, and those industries responded in kind."

"What does potentially become a problem, of course, is when those companies begin deciding what is and isn't appropriate, and those decisions are made arbitrarily – or at least, don't match up with larger public sentiment," he adds. 

Dr. Bowman suggests a system similar to Wikipedia's as one possible way to maintain informational integrity online: a mixture of crowdsourced information, like the popularity-driven systems these sites currently employ, coupled with authentication from outside sources.
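A minimal sketch of what that hybrid might look like: popularity supplies the raw score, and a credibility table stands in for authentication by outside sources. The domains, numbers, and the table itself are hypothetical placeholders, not a real vetting service or any search engine's actual formula.

    # Hypothetical sketch of the hybrid Bowman describes: crowdsourced
    # popularity weighted by credibility ratings from outside reviewers.
    # The table below is invented for illustration only.
    CREDIBILITY = {
        "ushmm.org": 0.95,       # e.g., vetted as a reliable source
        "stormfront.org": 0.02,  # e.g., flagged by outside fact-checkers
    }

    def hybrid_score(domain: str, popularity: float) -> float:
        """Combine crowd popularity with external vetting; unknown
        domains get a neutral credibility prior of 0.5."""
        return popularity * CREDIBILITY.get(domain, 0.5)

    # A hate site can outdraw a credible one on raw clicks...
    results = [("stormfront.org", 8000.0), ("ushmm.org", 5000.0)]
    # ...but loses the top slot once credibility is factored in.
    ranked = sorted(results, key=lambda r: hybrid_score(*r), reverse=True)
    for domain, popularity in ranked:
        print(domain, round(hybrid_score(domain, popularity)))

As on Wikipedia, the crowd proposes and independent verification disposes: popularity alone no longer decides what lands at the top of the page.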

Dr. Zimdars emphasizes that the solution to online hate requires transparency.

Google has "the power to stymie hate speech circulated by hate groups, but this means that all kinds of alternative ideas could be limited through tweaks to its algorithm," she says. "Overall, we need a lot more transparency about why we're seeing what we're seeing, and perhaps more importantly, more knowledge about what we're not seeing."

As information is shared across the internet, Zimdars says, it often becomes "cleansed," disconnected from its original source and normalized into mainstream conversation. This can be a problem, particularly when the hate at the core of racist or biased messages becomes assimilated into social media platforms through the "impartial" algorithms of Facebook, Twitter, and Google.

"Perhaps we tricked ourselves into thinking that there is no more hate, because social norms tended to govern conversations such that most people didn't share these thoughts face-to-face," says Bowman.

"They exist, and I'd say that they are not so much 'stronger than ever' as they are 'louder than ever.' "