
ABOUT NEWS VERIFICATION PROJECT

This SSHRC-funded research (2015-2018) enabled the creation of the News Verification Browser, a suite of software applications that automatically identify deliberately deceptive or misleading information in online news. The resulting deception detection methodology makes a prediction for each previously unseen news piece: is it likely to belong to the truthful or the deceptive category? A system based on this methodology alerts users to the most likely deceptive or misleading news (e.g., falsifications, satirical fakes, and clickbait) on a website and prompts them to fact-check further.
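As a hedged illustration of this prediction step (a minimal sketch, not the lab's actual pipeline), the following Python snippet trains a toy truthful-vs-deceptive text classifier and flags a previously unseen news piece for further fact-checking; the example texts, labels, features, and threshold are hypothetical placeholders.

```python
# Minimal sketch of a binary truthful/deceptive news classifier.
# All data and parameters here are hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training examples: 1 = deceptive, 0 = truthful.
train_texts = [
    "Scientists confirm chocolate cures all known diseases overnight.",
    "City council approves budget for road repairs next fiscal year.",
]
train_labels = [1, 0]

# Word and bigram features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a previously unseen news piece; a high probability flags it for fact-checking.
unseen = "Local startup claims its app doubles your salary instantly."
prob_deceptive = model.predict_proba([unseen])[0][1]
if prob_deceptive > 0.5:
    print(f"Likely deceptive ({prob_deceptive:.2f}) - consider fact-checking.")
```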

BACKGROUND: Digital deception is a deliberate effort to create false beliefs or conclusions in technology-mediated environments. Our research project focuses on deliberate misinformation in text-based online news, provided via mainstream media and citizen journalist websites, news archives and aggregators. Various deception types and degrees will be examined, categorized, and modeled: fake or fabricated news, exaggerated claims, material fact omissions, indirect responses, question-dodging, and subject-changing.

Mistaking deceptive news for authentic reports can create costly negative consequences such as sudden stock fluctuations or reputation loss. Everyday decision-making, behavior, and mood are influenced by the news we receive. When professional analysts sift through the news, their forecasts and their fact and pattern discovery depend on the veracity of the news in “big data” knowledge management and curation areas (specifically, in business intelligence, financial and stock market analysis, or national security and law enforcement). In both lay and professional contexts of news consumption, it is critical to distinguish truthful reports from deceptive ones. However, few news verification mechanisms currently exist, and the sheer volume of information requires novel automated approaches.

IMPORTANCE: News verification methods and tools are timely and beneficial to both lay and professional text-based news consumers. The research significance is four-fold:
1) Automatic analytical methods complement and enhance the notoriously poor human ability to discern information from misinformation.
2) Credibility assessment of digital news sources is improved.
3) The mere awareness of potential digital deception constitutes part of new media literacy and can prevent undesirable consequences.
4) The proposed veracity/deception criterion is also seen as a metric for information quality assessment.

ABOUT DECEPTION DETECTION PROJECT (2010-2013)

Victoria Rubin and her graduate students at LiT.RL developed methods to distinguish truth from deception in textual data. Rhetorical Structure Theory (RST) was used as the analytic framework to identify systematic differences between deceptive and truthful stories in terms of their coherence and structure. A vector space model (VSM) assesses each story’s position in multidimensional RST space with respect to its distance from the truthful and deceptive centers, as measures of the story’s levels of deception and truthfulness. The RST-VSM approach to determining deception demonstrates that discourse structure analysis is a significant method for automated deception detection and an effective complement to lexico-semantic analysis. The potential lay in developing novel discourse-based tools to alert information users to potential deception in computer-mediated texts and social media. This project came before the hype around “fake news” and was a predecessor of the lab’s later R&D efforts from 2014 onward.
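A hedged sketch of the RST-VSM idea described above (illustrative only, not the lab’s implementation): each story is represented as a vector of rhetorical-relation counts, the truthful and deceptive training stories define two cluster centers in that space, and a new story is scored by which center it lies closer to. The relation inventory and counts below are hypothetical.

```python
# Sketch of an RST-based vector space model for deception scoring.
# Relation names and story counts are hypothetical toy data.
import numpy as np

# Hypothetical RST relation inventory and per-story relation counts.
RELATIONS = ["Elaboration", "Evidence", "Contrast", "Condition", "Attribution"]
truthful_stories = np.array([[4, 3, 1, 0, 2],
                             [5, 2, 1, 1, 3]], dtype=float)
deceptive_stories = np.array([[2, 0, 3, 2, 1],
                              [1, 1, 4, 3, 0]], dtype=float)

def normalize(rows):
    """Scale each story vector to unit length so distance reflects structure, not story length."""
    return rows / np.linalg.norm(rows, axis=1, keepdims=True)

# Cluster centers of the truthful and deceptive training stories in RST space.
truthful_center = normalize(truthful_stories).mean(axis=0)
deceptive_center = normalize(deceptive_stories).mean(axis=0)

def deception_score(story_counts):
    """Return a score where positive values mean the story lies closer to the deceptive center."""
    v = np.asarray(story_counts, dtype=float)
    v = v / np.linalg.norm(v)
    dist_truthful = np.linalg.norm(v - truthful_center)
    dist_deceptive = np.linalg.norm(v - deceptive_center)
    return dist_truthful - dist_deceptive

new_story = [1, 0, 4, 2, 1]  # RST relation counts for a previously unseen story
print("deceptive" if deception_score(new_story) > 0 else "truthful")
```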
