Book under Contract with Springer Nature

During my 2020-21 sabbatical, I was contracted by Springer Nature to write a book, tentatively titled,

“Misinformation and Disinformation: Detecting Fakes with the Eye and AI”

(to be completed by 2022). The idea was first developed for the FIMS graduate course “Misinformation & Viral Deception,” offered in 2019.

Brief Description (in under 50 words):

Can artificial intelligence mimic expert lie detectors and truth seekers? This synthesis of research in the psychology of mis- and disinformation, the philosophy of truth, the language of deceit, and AI prepares you for three interventions to curtail the infodemic: education, automation, and regulation.

Fuller Book Description (in under 500 words):

Inaccurate, deceptive, and otherwise misleading information is accumulating online in large amounts; it is a Google search away and easily seeps into our social media newsfeeds. To date, there is no effective solution to stop mis- or disinformation from propagating, which leaves many of us wondering what can be done to change the situation. How do we personally avoid being misinformed? Most of us prefer making informed decisions, rather than misinformed ones, be it about health, investments, the environment, or politics. How can this large-scale societal problem of the infodemic be resolved: are there robust measures in information technologies to prevent mass disinformation of the population? What is the future of content verification?

As a computational linguist and information scientist, I have studied this phenomenon at the intersection of human and artificial intelligence, looking for ways to inform and enhance one type of intelligence with the other. This book is interdisciplinary. It explores psychological, philosophical, and linguistic insights into the nature of truth and deception, trust and credibility, cognitive biases, and logical fallacies. I examine the professional practices that guard us against deception in law enforcement investigations, scientific inquiry, and investigative reporting. Can artificial intelligence (AI) mimic the procedures and know-how of the experts, or does it require an entirely new systematic approach? I survey AI approaches to spotting fakes and distinguishing them from legitimate, truthful content. How do AI tools work to debunk rumors, detect ‘fakes,’ or fact-check automatically? How successful is the state of the art to date? Solving the problem of a large-scale infodemic will inevitably involve some adoption of assistive AI technologies. While the human mind is the ultimate built-in detector, it requires preparation and practice in awareness and better thinking. I review what makes us susceptible to being deceived and manipulated, and I suggest how to extricate ourselves from the power of persuasive propaganda.

The book advocates three concrete countermeasures to control the infodemic. As a society, we need the public will and the means for better education of digital media users, more accurate automatic identification of mis- and disinformation, and more stringent regulation of toxic media platforms. When the three proposed interventions are applied simultaneously and consistently, they interrupt the interaction of the three causal factors of the infodemic: susceptible hosts, attacks by virulent fakes, and the complacency of conducive digital environments.

This book speaks to those who are hesitant to adopt and experiment with innovative technologies for verifying digital content and resources. It makes complicated research, especially in AI, more accessible by explaining how AI detection systems work and how they fit with broader societal and individual concerns. The ultimate decisions (to believe, to trust, to make sense) are, of course, made in the human mind. Meanwhile, AI can sift, sort, and shuffle digital content to reduce the rubbish to a pile that a human can verify. The book advocates combining greater tech-savviness with better thinking, both as parts of media and information literacy, to be taught and practiced as mental hygiene.
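One common building block behind the automatic detection of fakes mentioned in the description above is supervised text classification: a model learns surface cues of deceptive writing from labeled examples. Below is a minimal Python sketch of that idea using scikit-learn; the toy headlines, labels, and model choice are illustrative assumptions for demonstration only, not the book's method or results.

```python
# Minimal sketch of fake-vs-legitimate text classification (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled headlines: 1 = deceptive, 0 = legitimate.
headlines = [
    "Miracle cure erases all disease overnight, doctors stunned",
    "City council approves new budget for public transit",
    "Secret memo proves the moon landing was staged",
    "Researchers report modest gains in battery efficiency",
]
labels = [1, 0, 1, 0]

# Turn text into word- and bigram-frequency features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Score a new claim: probabilities for [legitimate, deceptive].
print(model.predict_proba(["Scientists admit vaccines contain mind-control chips"]))
```

Real systems of this kind go much further, combining linguistic cues with fact-checking databases, source credibility, and network propagation signals, which is part of what the book surveys.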

Image Explanation

The image above alludes to Online Information Grinding into Viral Deceptive Garbage, a case of info-recycling gone wrong. Just to be clear, it is not the official cover graphic for the book.
It’s just my techno-sepia impression
of a not-so-easy to capture
but oh-so-sad state
of social media
information space.
