Disinformation and human cognition

Stephan Lewandowsky
Analysis | 13 August 2019

Disinformation sticks. Once we know something, it is difficult to eradicate it from our memories.

Once people had grown accustomed to the claim that there were Weapons of Mass Destruction (WMDs) in Iraq, it no longer mattered that none were found after the invasion of 2003. The belief in Iraqi WMDs became so entrenched in American public consciousness that in 2014, a decade after the absence of Iraqi WMDs had become the official position of the U.S. government, 51% of Republicans and 32% of Democrats were found to believe that the U.S. had indeed discovered WMDs in Iraq.

Disinformation sticks even when people know that it is false

In a study conducted during the initial stages of the invasion of Iraq, colleagues and I presented participants with specific war-related news items, some of which had subsequently been corrected, and asked them to rate their belief in each item as well as their memory for the original information and its correction. We found that many participants who were certain that an item had been corrected nonetheless continued to believe it to be true.

This “I know it’s false but I believe it’s true” behaviour is a signature effect associated with the cognition of disinformation. The effect can be readily generated in the laboratory when participants are presented with entirely fictional but plausible scripts about various events. For example, when presented with a story about a fictitious warehouse fire that is initially ascribed to negligence, people will stick to the negligence explanation even if, later in the story, the evidence pointing to negligence turns out to be false.

There are a number of reasons why people find it difficult to discard information from their memories when it turns out to be false. The primary factor is that people construct a “mental model” of a story as it unfolds—for example, they may learn that a warehouse fire was caused by negligence. Once a model has been constructed, any correction that identifies an important component of the model as false may fail because removal of that component would create an unexplained and unfilled gap in the mental model. In consequence, people may know very well that a story has been corrected when asked directly, but they may nonetheless rely on the notion of negligence when asked to explain aspects of the fire.

It follows that for a correction to be successful, it has to be accompanied by a causal alternative. Telling people that negligence was not a factor in a warehouse fire is insufficient—but telling them in addition that arson was to blame instead will reduce or eliminate any future reliance on the negligence idea. People are able to update their mental model by replacing one causal component with another.

Disinformation is even stickier when it pertains to people’s worldviews

Updating a mental model of a warehouse fire is relatively straightforward once a causal alternative is provided. Things become far trickier in the political arena, where disinformation and corrections impinge on people’s deeply held political convictions. In those circumstances, corrections can be particularly problematic.

An important aspect of corrections in the political arena relates to the “centrality” of the disinformation to people’s attitudes. In a recent study, Ullrich Ecker and Li Ang showed that people’s political views presented no impediment to corrections when the story involved a single politician who was initially suspected of embezzlement before being cleared of the charges later on. By contrast, when the story claimed that all politicians of a particular party were prone to embezzlement, a subsequent correction was strongly affected by partisanship.

In the latter case, if the disinformation was attitude-incongruent (“all politicians of my favoured party are corrupt”), the retraction was very effective; conversely, if the disinformation was attitude-congruent (“all politicians of the other party are corrupt”), the retraction was clearly ineffective.

Thus, while people accept that a single politician of one party or the other may or may not be corrupt, they refuse to accept a wholesale condemnation of their preferred political party because that would require a change in attitude.

Colleagues and I uncovered a further variant of this effect in a recent article in which American voters were presented with false statements made by Donald Trump or Bernie Sanders. Even though people were sensitive to corrections—that is, they believed specific false statements less after they had been corrected—the corrections had, at most, a small impact on people’s feelings about their preferred politician. Trump supporters continued to support Donald Trump and Sanders supporters continued to support Bernie Sanders. This parallels the laboratory results involving the warehouse fire story, where people similarly remember a correction but continue to rely on the initial information.

Can we “unstick” disinformation?

In some cases, a causal alternative may be unavailable or may turn out to be complicated. For example, it is difficult to replace Donald Trump’s claim that vaccines cause autism with a simple causal alternative. The true alternative involves at least three main planks: the overwhelming evidence showing the absence of any link between vaccines and autism; the fact that the purported link arose out of fraudulent and unethical research; and the fact that the link continued to be amplified by irresponsible media outlets long after it was known to be non-existent.

Another way to combat disinformation is to prevent it from sticking in the first place.

If people are made aware that they might be misled before the disinformation is presented, they demonstrably become more resilient to it. This process is variously known as ‘inoculation’ or ‘prebunking’, and it comes in a number of different forms. At the most general level, an up-front warning may be sufficient to reduce, but not eliminate, subsequent reliance on disinformation.

A more involved variant of inoculation not only provides an explicit warning of the impending threat of disinformation, but additionally refutes an anticipated argument in advance, exposing the fallacy before it is encountered. For example, if people are informed about the ‘fake-expert’ approach employed by the tobacco industry to undermine the medical consensus about the health risks of smoking (e.g., advertising slogans such as “20,679 Physicians say ‘Luckies are less irritating’”), they become more resilient to subsequent disinformation about climate change that uses the same fake-expert technique.

Disinformation sticks and is hard to dislodge.

But we can prevent it from sticking in the first place by alerting people to how they might be misled.


*Stephan Lewandowsky is Professor of Cognitive Science at the University of Bristol.
