Deception Detection Research for News Verification: Automation and the Human Mind
September 20, 2023; 12:00 – 1:20 p.m. See FIMS Events Calendar
Abstract: Advances in artificial intelligence (AI) and natural language processing (NLP) for detecting mis- and disinformation draw heavily on psychological research, lie detection, and fact-checking. Misinformation is the unintentional spread of deceptive, inaccurate, or misleading information; disinformation is its intentional counterpart. Either way, the result is problematic: various “fakes” proliferate online, and nobody wants to be ill-informed. So why does the problem persist? What are its underlying causes, and what are the solutions? Dr. Rubin discusses three interacting causal factors that require simultaneous interventions. Human minds, susceptible to deception and manipulation, can be trained more vigorously in digital literacy. Toxic digital environments need further legislative oversight. And, given the scale of the problem, AI can at least partly augment human intelligence. Rubin presents examples of systematic analyses that sift through large volumes of textual data and can distinguish verified truthful language from various types of “fakes” such as clickbait, satire, other falsehoods, and rumors; success rates vary. If more accurate and reliable systems become available and are routinely used by the general public as assistive technology (much like spam filters), they can dampen the problem of mis- and disinformation when combined with education and regulation.
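For illustration only, the sketch below shows the general shape of the text-classification pipelines such analyses build on: label examples, extract lexical features, fit a classifier, and score unseen text. It assumes scikit-learn; the snippets, labels, and headline are invented placeholders, not the Lab’s data, features, or method.

```python
# A minimal sketch (not the Lab's actual system): a bag-of-words
# classifier separating "truthful" snippets from "fakes".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled snippets; real systems train on large corpora.
texts = [
    "City council approves budget after public hearing.",           # truthful
    "Local hospital reports record number of flu vaccinations.",    # truthful
    "You won't BELIEVE what this celebrity did next!!!",            # clickbait
    "Scientists confirm the moon is made of cheese, sources say.",  # falsehood
]
labels = ["truthful", "truthful", "fake", "fake"]

# TF-IDF features plus logistic regression: a common simple baseline
# for distinguishing classes of text by word and phrase usage.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score an unseen headline; the probability estimate is what would let
# such a tool act as an assistive filter, much like a spam filter.
headline = ["Shocking miracle cure doctors don't want you to know about!"]
print(model.predict(headline), model.predict_proba(headline))
```

In practice, as the abstract notes, success rates vary by genre of “fake,” which is why such tools are framed as assistive technology alongside education and regulation rather than a standalone fix.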
See more information about her 2022 book, “Misinformation and Disinformation: Detecting Fakes with the Eye and AI,” and the Lab’s research projects News Verification (2015–19) and Deception Detection (2010–14).