‘Bot vs. Bot:’ Texas professors to develop fake-news-fighting software

Motivated by the threat fake news poses to national security, Texas professors are teaming up to write code for detecting false claims lurking on the internet. 

Jessica Gresko/AP/File
Flowers and notes left by well-wishers are displayed outside Comet Ping Pong, the pizza restaurant in Washington, on Friday, Dec. 9, 2016. A man, misled by a fake news story into believing the restaurant was a front for a pedophilia ring, opened fire on customers and staff. A team of Texas educators is developing software that aims to help identify false news claims.

Incensed by what he thought was a pedophilia ring headquartered in a Washington, D.C., pizza restaurant, a man opened fire inside Comet Ping Pong Pizza last year, sending employees and customers scrambling for cover.

The Dallas Morning News reports the shooting was real, but the sex ring – supposedly overseen by 2016 Democratic presidential candidate Hillary Clinton – was not. Instead, it was propaganda passed off as authentic through social media feeds and right-wing websites.

No one was hurt in the Dec. 4 shooting, and the suspect was sentenced in June to four years in prison.

Because of incidents like that one, a group of college instructors in North Texas believes combating fake news is a matter of national security. They're working on a proposal that would use technology to help root out false claims in the news.

"We decided to make national security the focus because of the potential interference in our election coming from Russia," said Chengkai Li, a University of Texas at Arlington associate professor in the Department of Computer Science and Engineering.

Mr. Li and four others – two professors from UTA and two from the University of Texas at Dallas – are collaborating on a project titled "Bot vs. Bot: Automated Detection of Fake News Bots," and they have a one-year grant of $30,000 in seed money from the University of Texas at Austin's Texas National Security Network Excellence Fund to get started.

"This is a seed grant that we hope will lead to a much larger grant that will identify these bots for social media users," Li said. "Right now, you don't know what is coming from a real person and what's coming from a computer, sometimes for malicious, or at least, misleading reasons."

Previously, Li and other colleagues partnered with Stanford and Duke universities to develop ClaimBuster, a fact-checking service built with a $241,778 grant from the National Science Foundation. ClaimBuster lets users type in claims they've heard in the news and returns results on a sliding scale of accuracy; the lower the number, the less accurate the reports.

The site also has transcripts of all the 2016 presidential debates and heavy documentation of its methodology.

Li and his computer science and engineering colleague Christoph Csallner will apply data mining techniques, code analysis, and other security measures to design an algorithm to spot fake news, with an assist from Mark Tremayne, an assistant professor of communication, and others who come from a journalism background.

UTD associate professor of computer science Zhiqiang Lin and Angela Lee, UTD assistant professor of emerging media and communication, are also part of the project.

The joint effort between the two universities will focus on fake accounts spreading false claims via Twitter.

"We're not talking about the [Donald] Trump definition of fake news," Mr. Tremayne said. "Trump's definition of fake news is CNN, The Washington Post, The New York Times. We're talking about the pre-Trump definition – stories that have been intentionally passed around with the intent to mislead."

The researchers in North Texas aren't the only ones seeking to identify purveyors of phony information. Melissa Zimdars, an assistant professor of communication at Merrimack College in Massachusetts, developed a checklist of fake news sites shortly after President Trump defeated Mrs. Clinton in the November election.

"I think the most troubling aspect of fake news and the proliferation of misleading information is that it further destabilizes the relationship between individuals and the press as well as between individuals of different political ideologies," she said.

Ms. Zimdars created her checklist for her students after she kept running across false sources cited in their papers. She also realized that even some of her professionally trained colleagues couldn't tell the difference between credible news sources and misleading ones.

She temporarily took down her checklist after she became the target of harassment, Zimdars said, but made it public again after the attacks against her eased.

The list remains publicly available, though Zimdars no longer updates it. It includes more than 1,000 sources that spread malicious or unreliable information, are satirical, or rely on click-bait headlines to capture attention.

"There are plenty of actual things about which to disagree without having to consider alternative truths in the equation," Zimdars said. "How can we function as a society if we're not even sharing or at least understanding some of the same reality?"

Zimdars said readers can get a head start on spotting fake news sites by looking at domain names, such as the "8006" that appears at the end of an otherwise legitimate-looking fake New York Times site, or a "co" that comes after ".com" on sites that otherwise borrow the names of legitimate news outlets.
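The domain tricks Zimdars describes can be checked mechanically. Below is a minimal sketch of that idea: flag hostnames that bolt an extra suffix (like ".co" after ".com") or stray digits onto the name of a well-known outlet. The outlet list and matching rules here are illustrative assumptions, not part of ClaimBuster, Zimdars's checklist, or the "Bot vs. Bot" project.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of legitimate outlets for demonstration purposes.
KNOWN_OUTLETS = {"nytimes.com", "washingtonpost.com", "cnn.com"}

def looks_suspicious(url: str) -> bool:
    """Heuristic check for imposter domains that piggyback on known outlets."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    if host in KNOWN_OUTLETS:
        return False  # exact match with a known outlet
    for outlet in KNOWN_OUTLETS:
        # e.g. "nytimes.com.co" -- a trusted name with a bolted-on suffix
        if host.startswith(outlet + "."):
            return True
        # e.g. "nytimes8006.com" -- digits appended to the outlet's name
        name = outlet.split(".")[0]
        first_label = host.split(".")[0]
        if first_label.startswith(name) and any(c.isdigit() for c in first_label):
            return True
    return False
```

This sort of string matching only catches the crudest imposters; a real tool would also consult curated blocklists and registration data.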

Li and his partners aren't sure what shape their program will eventually take.

"One form can be a browser plug-in that can tell you something about the truthfulness of something, or it could be a third-party bot or an app or something," Li said.

If the yearlong period ends and the grant isn't renewed, Li said the team will continue to work on the project in classrooms and laboratories.

Research will really start to take shape when students return in the fall, Tremayne said. The group is considering organizing a "hack farm" as a way to attract students to the project.

"The idea is, can we come up with some code to identify fake news bots?" Tremayne said. "Even if it just means something like throwing ideas at the wall and seeing if anything sticks."

