For his second interview on Gradcast, LiT.RL lab team member Yimin Chen returns to talk about our Digital Deception Detection project, funded by a Government of Canada SSHRC Insight Grant awarded to Dr. Victoria Rubin. Yimin makes the case for why everyone would benefit from an “automatic crap detector.” (Audio here.)
In many ways, the internet has changed how news is produced, disseminated, and read. Whereas news reading in the past was an active process – it would be difficult to “accidentally” read a newspaper – news consumption in the age of social media has become progressively more passive. Now, many people get news incidentally while browsing their Facebook or Twitter feeds, which are full of links and headlines from often dubious sources.
“The ratio of quality information versus the crap,” according to Yimin, “is very skewed towards the crap.” News readers aren’t necessarily thinking less critically than before, but more of our interactions with news are occurring in casual contexts (such as social media) where we may not be predisposed to thinking critically and actively. An intriguing headline may catch our eye, but not pique enough interest for us to read the full article or check the attribution. These are the circumstances where misinformation is likely to take root and spread.
The idea behind the “automatic crap detector” is that different kinds of deceptive news articles, such as hoaxes, gossip, and satire, all exhibit textual features that differentiate them from honest reporting. The goal of this research is to discover and analyze these deceptive cues so that we can automatically assess how likely an article is to be reliable and flag false or misleading news before it has a chance to spread too far.
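To give a rough sense of what a cue-based detector looks like in practice, here is a minimal sketch, not the project’s actual implementation, of a text classifier that learns surface textual features (word and character patterns) to separate deceptive from legitimate articles. The dataset, labels, and model choices below are hypothetical stand-ins, using off-the-shelf scikit-learn components.

```python
# Hypothetical sketch of cue-based deception classification (not the LiT.RL
# project's own system). It illustrates the general idea of learning textual
# cues that separate deceptive from legitimate news. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data: article texts paired with labels
# (1 = deceptive/hoax/satire, 0 = legitimate reporting).
articles = [
    "Scientists confirm chocolate cures all known diseases, experts stunned",
    "City council approves budget for new public transit line",
    "You won't BELIEVE what this celebrity did next!!!",
    "Provincial report details quarterly employment figures",
]
labels = [1, 0, 1, 0]

# Word n-grams act as crude stand-ins for the textual cues
# (sensational wording, exaggeration, punctuation) discussed above.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
detector.fit(articles, labels)

# Score a new headline: a probability near 1.0 suggests "likely crap".
print(detector.predict_proba(
    ["Miracle pill melts fat overnight, doctors hate it"]
)[:, 1])
```

In a real system the features would be richer (rhetorical structure, attribution, punctuation and exaggeration cues) and the training data far larger, but the pipeline shape is the same: extract textual cues, train a classifier, and flag suspect articles before they spread.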
Further Reading:
If you’d like to see what we have written about this project more formally over the past year or two, please see these publications:
Chen, Y., Conroy, N. J., & Rubin, V. L. (2015). News in an Online World: The Need for an “Automatic Crap Detector”. In The Proceedings of the Association for Information Science and Technology Annual Meeting (ASIST2015), Nov. 6-10, St. Louis.
Chen, Y., Conroy, N. J., & Rubin, V. L. (2015). Misleading Online Content: Recognizing Clickbait as “False News”. ACM Workshop on Multimodal Deception Detection (WMDD 2015), joint with the International Conference on Multimodal Interaction (ICMI2015), November 9, 2015, Seattle, Washington, USA. http://dl.acm.org/citation.cfm?id=2823467
Conroy, N. J., Chen, Y., & Rubin, V. L. (2015). Automatic Deception Detection: Methods for Finding Fake News. In The Proceedings of the Association for Information Science and Technology Annual Meeting (ASIST2015), Nov. 6-10, St. Louis.
Rubin, V. L. (2014). Pragmatic and Cultural Considerations for Deception Detection in Asian Languages. TALIP Perspectives, Guest Editorial Commentary, 13(2).
Rubin, V.L. & Conroy, N. (2012). Discerning truth from deception: Human judgments & automation efforts. First Monday 17 (3-5). dx.doi.org/10.5210/fm.v17i3.3933
Rubin, V. L., Conroy, N., & Chen, Y. (2015). Towards News Verification: Deception Detection Methods for News Discourse. The Rapid Screening Technologies, Deception Detection and Credibility Assessment Symposium, Hawaii International Conference on System Sciences (HICSS48), January 2015.
Rubin, V. L., Chen, Y., & Conroy, N. (2015). Deception Detection for News: Three Types of Fakes. In The Proceedings of the Association for Information Science and Technology Annual Meeting (ASIST2015), Nov. 6-10, St. Louis.