Choose one of the following articles to read:
Option 1: False Balance
Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS One, 12(5), 1–17.
Option 2: Myth Busting
Pluviano, S., Watt, C., & Sergio, D. S. (2017). Misinformation lingers in memory: Failure of three pro-vaccination strategies. PLoS One, 12(7), 1–12.
Option 3: Retraction
Ecker, U. K. H., Hogan, J. L., & Lewandowsky, S. (2017). Reminders and repetition of misinformation: Helping or hindering its retraction? Journal of Applied Research in Memory and Cognition, 6(2), 185–192.
Then, address the following:
Which descriptive statistics did they use?
Why do you feel these descriptive statistics were important?
Which descriptive statistics do you feel are important for your study and why?
Do not quote, paraphrase, or summarize.
Cite and reference your source.

Journal of Applied Research in Memory and Cognition 6 (2017) 185–192
Reminders and Repetition of Misinformation: Helping or Hindering Its Retraction?

Ullrich K. H. Ecker*, Joshua L. Hogan
University of Western Australia, Australia

Stephan Lewandowsky
University of Bristol, United Kingdom
University of Western Australia, Australia
People frequently rely on information even after it has been retracted, a phenomenon known as the continued-influence effect of misinformation. One factor proposed to explain the ineffectiveness of retractions is that repeating
misinformation during a correction may inadvertently strengthen the misinformation by making it more familiar.
Practitioners are therefore often encouraged to design corrections that avoid misinformation repetition. The current
study tested this recommendation, investigating whether retractions become more or less effective when they include
reminders or repetitions of the initial misinformation. Participants read fictional reports, some of which contained
retractions of previous information, and inferential reasoning was measured via questionnaire. Retractions varied in
the extent to which they served as misinformation reminders. Retractions that explicitly repeated the misinformation
were more effective in reducing misinformation effects than retractions that avoided repetition, presumably because
of enhanced salience. Recommendations for effective myth debunking may thus need to be revised.
General Audience Summary
Information that is thought to be true but then turns out to be incorrect—so-called misinformation—can affect
people’s thinking and decision making even after it has been clearly corrected by a credible source, and even if
people understand and later remember the correction. It has been proposed that one reason why corrections are
so ineffective is that a myth is often repeated when it is corrected—explaining that vaccines do not cause autism
almost necessarily repeats the association between vaccines and autism. This repetition can make the myth
more familiar such that it comes to mind more easily in the future. Based on this notion, one recommendation
to “myth debunkers” has been to avoid myth repetition in a correction. The present study directly tested this
recommendation. We presented participants with news reports that did or did not contain corrections; these
corrections did or did not repeat the to-be-corrected misinformation explicitly. We found—contrary to the
popular recommendation—that corrections were more effective when they explicitly repeated the myth. Thus,
it seems “safe” and even beneficial to repeat the myth explicitly when debunking it.
Keywords: Continued-influence effect, Misinformation, Myth debunking, Familiarity
Author Note.
Correspondence concerning this article should be addressed to Ullrich K. H. Ecker, School of Psychological Science, University of Western Australia (M304), 35 Stirling Hwy, Perth 6009, Australia. Contact: ullrich.ecker@uwa.edu.au
Information that is initially presented as true but later
identified as false and explicitly retracted often continues to
influence people’s cognition. This phenomenon is known as
the continued-influence effect (CIE) of misinformation (Johnson
& Seifert, 1994; Wilkes & Leatherbarrow, 1988). Research on
the CIE has traditionally used a paradigm in which individuals read a (fictional) news report or scenario that includes
a piece of critical information that subsequently is or is not
retracted. The typical finding is that people’s inferential reasoning, as for example measured through questionnaire, continues
to be affected by the critical information despite clear and
credible retractions, and even when individuals demonstrably understand and later remember the retraction (Johnson
& Seifert, 1994; Wilkes & Leatherbarrow, 1988; for reviews,
see Lewandowsky, Ecker, Schwarz, Seifert, & Cook, 2012;
Seifert, 2002; for more recent work, see Ecker, Lewandowsky,
Chang, & Pillai, 2014; Ecker, Lewandowsky, Cheung, &
Maybery, 2015; Ecker, Lewandowsky, Fenton, & Martin, 2014;
Guillory & Geraci, 2013; Guillory & Geraci, 2016; Nyhan,
Reifler, & Ubel, 2013; Rich & Zaragoza, 2016; Thorson,
2016). In most of these studies, the retraction does have an
effect—reliance on the critical information is typically halved
compared to the no-retraction control—but the critical information almost always continues to be used to a significant
extent.
Such continued reliance on misinformation is of particular concern when important decisions are at stake. One
of the most commonly used examples of the CIE's real-world relevance is the ongoing impact of the fabricated link between childhood vaccines and autism, which has proven fairly resistant to correction (e.g., Poland & Spier, 2010). These real-world implications of the CIE are among the factors that have stimulated research efforts into designing
more effective correction strategies (cf. Cook & Lewandowsky,
2011; Lewandowsky et al., 2012; Schwarz, Newman, & Leach,
2016).
One of the recommendations that has arisen from these
efforts is to avoid repeating the misinformation when correcting it. This recommendation is founded in psychological
theorizing that repeating the misinformation when retracting
it may inadvertently strengthen the misinformation by making it more familiar. As it is well known that familiar claims
are more likely to be trusted and believed (e.g., Dechene,
Stahl, Hansen, & Wanke, 2010; Weaver, Garcia, Schwarz,
& Miller, 2007), the retraction could ironically backfire and
increase reliance on misinformation rather than reduce it.
Repeating the misinformation while identifying it as false could
thus later leave people thinking “I’ve heard that before, so
there’s probably something to it” (Lewandowsky et al., 2012,
p. 115).
Some evidence for this “familiarity backfire effect” comes
from a study by Skurnik, Yoon and Schwarz (2007; also see
Skurnik, Yoon, Park, & Schwarz, 2005), who provided participants with a “myths vs. facts” flyer that listed a number of
claims regarding the flu vaccine, which were either affirmed or
retracted. Skurnik et al. (2007) found that after a delay of 30 min,
a substantial proportion of retracted myths were misremembered
as facts, presumably based on the retraction-induced boost to the familiarity of the myths.¹

¹ In this study, the facts and myths all concerned the same topic, so an alternative account may involve source confusion (cf. Johnson, Hashtroudi, & Lindsay, 1993): participants may have just been confused about which statements were affirmed and which retracted. However, the effect was asymmetrical, in that a delay only led to increased acceptance of myths as true, with the rate of fact rejection remaining stable over time. This pattern is more in line with a familiarity-based explanation.
More recently, Swire, Ecker, and Lewandowsky (2017) also
investigated the role of familiarity in myth corrections. Participants were given a set of true and false claims of unclear veracity
(e.g., the fact that dogs can smell certain types of cancer, or the
myth that playing Mozart can improve a baby’s intelligence),
which were subsequently repeated and then either affirmed or
retracted. Claim belief was then measured after various retention intervals of up to three weeks. Swire et al. found that over
time, the impact of myth retractions was less sustained than
the impact of fact affirmations. This asymmetry was explained
within a dual-processing framework, assuming that belief ratings
can be based both on recollection of the affirmative/corrective
explanation and on the claim’s familiarity (cf. Jacoby, 1991).
The authors argued that for facts, it does not matter if belief is
based on recollection of the affirmation or the familiarity of the
claim—both will lead to acceptance of the fact; for myths, however, recollection of the retraction will lead to accurate rejection,
whereas familiarity of the claim may lead to erroneous acceptance of the myth as true. The CIE thus seems at least partially
familiarity-based. However, Swire et al. observed no familiarity backfire effect: myth belief post-retraction did not return to
or exceed a pre-manipulation baseline (also see Peter & Koch,
2016). In sum, there is evidence for a role of familiarity in the
CIE, but the evidentiary foundation for the recommendation that
misinformation should not be repeated during its retraction is
relatively weak.
Some theoretical accounts that focus on the salience of the
misinformation during the correction even suggest that repeating
misinformation when retracting it may be beneficial. Putnam,
Wahlheim, and Jacoby (2014) as well as Stadtler, Scharrer,
Brummernhenrich, and Bromme (2013) argued that detection of
a conflict between rival event interpretations facilitates updating of a person’s mental model of an event (cf. Morrow, Bower,
& Greenspan, 1989). Such conflict detection is arguably more
likely to occur if the retraction explicitly refers to both the invalidated interpretation as well as the new correct interpretation.
Likewise, Kendeou, Walsh, Smith, and O’Brien (2014) argued
that effective knowledge revision requires the co-activation of
invalidated and correct event interpretations, which again is more
likely to occur if the misinformation is explicitly repeated when
it is retracted.
The Current Study
The current study aimed to determine whether providing
reminders or repetitions of misinformation in the course of
a retraction increased or decreased the subsequent CIE, thus
testing the contrasting predictions of familiarity and salience
accounts. In order to test these predictions, we presented participants with fictional news articles, some of which contained
a retraction of earlier information, together with an alternative
account of the respective event. The retraction either (a) did not
refer back to the to-be-retracted misinformation, (b) included a
reminder, explaining that the initial information was incorrect
(without repeating the misinformation), or (c) explicitly repeated
the misinformation before correcting it.
Method
The current study employed a within-subject design, featuring a single, four-level factor. The independent variable was the
type of retraction condition. The dependent measure was participants’ reliance on retracted misinformation, calculated based on
responses to a questionnaire assessing participants’ inferential
reasoning.
Participants
An a priori power analysis suggested that to detect a small-to-medium difference between two conditions of effect size
f = 0.2, with α = .05 and 1 − β = .80, and a moderate correlation between repeated measures of r = .50, the required sample
size was 52 (this corresponds with the effect size found between
conditions presenting misinformation once vs. thrice in Ecker,
Lewandowsky, Swire, & Chang, 2011; power analysis was conducted with G*Power 3; Faul, Erdfelder, Lang, & Buchner,
2007). A total of N = 60 first-year undergraduates from the University of Western Australia were recruited for participation in
the current study, in return for partial course credit. The sample
consisted of 18 male and 42 female participants, ranging from
17 to 53 years of age (M = 20.52, SD = 7.14).
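To make the reported sample-size figure easy to check, the sketch below reproduces a G*Power-style power analysis for a within-subjects F test. The noncentrality formula and the scipy-based search are my own reconstruction rather than the authors' actual G*Power session; only the inputs (f = 0.2, α = .05, target power = .80, r = .50, two conditions) come from the text.

```python
# Minimal sketch of the power analysis, assuming G*Power 3's default
# parameterization for "ANOVA: repeated measures, within factors":
# lambda = f^2 * n * m / (1 - r), df1 = m - 1, df2 = (n - 1) * (m - 1).
# Inputs are taken from the Method section; the implementation is my own.
from scipy.stats import f as f_dist, ncf

def rm_power(n, f_eff=0.2, m=2, r=0.5, alpha=0.05):
    """Power of the within-subjects F test for sample size n."""
    df1, df2 = m - 1, (n - 1) * (m - 1)
    lam = f_eff ** 2 * n * m / (1 - r)            # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F under H0
    return ncf.sf(f_crit, df1, df2, lam)          # P(F > f_crit | H1)

n = 3
while rm_power(n) < 0.80:                         # smallest n reaching 80% power
    n += 1
print(n, round(rm_power(n), 3))                   # should land near the reported n = 52
```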
Stimuli
Scenarios. Participants read six scenarios; they were
informed that the scenarios would be the subject of a later
memory test. Each scenario comprised two short articles and
contained information regarding an unfolding news event (e.g.,
a wildfire). The first article in each case introduced the scenario
and explained what happened; embedded in the first article was a
piece of critical information that served as the potential target of
a retraction in the second article (e.g., “the fire had been deliberately lit”). The second article contained additional information
pertaining to each of the scenarios; there were four versions of
each second article, based on the type of retraction condition
(see Online Supplement for all articles).
In the no-retraction (NR) control condition, the second
article did not contain any retraction of information given in
the first article. The other three conditions were retraction
conditions. In the retraction-with-no-reminder (RNR) condition, more recent information given in the second article
naturally superseded the initial misinformation account of the
first article without any explicit reference to it (e.g., “After
a full investigation and review of witness reports, authorities
have concluded that the fire was set off by lightning strikes”).
The retraction-with-subtle-reminder (RSR) condition contained
a retraction featuring a subtle reminder of the initial account,
explaining that it was incorrect (e.g., “After a full investigation
and review of witness reports, authorities have concluded that
original reports were incorrect, and that the fire was set off by
lightning strikes”). The final condition featured a correction that
explicitly repeated the initial misinformation before retracting
it (retraction-with-explicit-reminder condition, RER; e.g., “It
was originally reported that the fire had been deliberately lit,
but authorities have now ruled out this possibility. After a
full investigation and review of witness reports, it has been
concluded that the fire was set off by lightning strikes”).
Participants received three scenarios in the NR condition
and one scenario in each of the three retraction conditions. We
counterbalanced assignment of scenarios to conditions across
participants, controlling presentation order such that (a) a no-retraction scenario was always presented first, (b) there were
never two retractions presented consecutively, (c) each of the
three retraction conditions occurred equally often at each of
the three possible order positions 2, 4, and 6, and (d) each
scenario occurred equally often at each order position. To this
end, participants were randomly allocated to 1 of 6 pre-defined
presentation orders of a Latin square design. This design was
implemented in part to avoid participants being led to expect a
retraction.
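To make these constraints concrete, here is a hypothetical reconstruction of a set of six presentation orders; the scenario labels S1–S6 and the cyclic rotation scheme are my own assumptions rather than the authors' actual orders, but each generated order satisfies constraints (a)–(d) above.

```python
# Hypothetical counterbalancing sketch (scenario labels and rotation scheme
# are assumed, not taken from the authors' materials). Each of the 6 orders
# starts with a no-retraction scenario, never presents two retractions in a
# row, places each retraction condition equally often at positions 2, 4, and
# 6, and places each scenario equally often at each position (cyclic Latin
# square of scenarios across orders).
SCENARIOS = ["S1", "S2", "S3", "S4", "S5", "S6"]
RETRACTIONS = ["RNR", "RSR", "RER"]

orders = []
for i in range(6):
    scenarios = SCENARIOS[i:] + SCENARIOS[:i]           # rotate scenarios across positions
    retr = RETRACTIONS[i % 3:] + RETRACTIONS[:i % 3]    # rotate retraction conditions
    conditions = ["NR", retr[0], "NR", retr[1], "NR", retr[2]]
    orders.append(list(zip(scenarios, conditions)))

for order in orders:
    print(order)
```

In this sketch, every scenario is also paired with each retraction condition exactly once across the six orders, which is one way to realize the counterbalanced assignment of scenarios to conditions described above.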
Participants read the six scenarios one after another in the
specified presentation order. The scenarios were presented via
a slide show on a computer screen. Participants read the first
article and second article of each scenario on separate slides,
before moving on to the next scenario. Each article was presented for a fixed amount of time (0.35 s per word), in order
to control encoding time. This fixed time was predetermined to
allow reading times that were comfortable but not excessive.
Participants were provided with a visual aid (a colored bar) on
the screen that began to disappear slowly when there were 10 s
left on the slide.
Questionnaire. We assessed participants’ understanding of
the scenarios with a questionnaire (see Online Supplement). The
questionnaire was presented in a booklet, following the order
of scenarios established during study (specified by the predefined presentation order). The questionnaire comprised memory
questions and inferential reasoning questions. For each scenario, participants’ memory was assessed with an open-ended
free recall question (e.g., “Briefly summarize the ‘wildfire’
article”) and three multiple-choice questions with four possible alternatives (e.g., “Where did the wildfire occur?”). These
questions assessed adequate encoding and retention of scenario
details.
Inferential reasoning questions required participants to make
inferential judgments pertaining to the events in the scenarios. For each scenario, there were four open-ended questions
designed to elicit responses relating to the critical information
while also allowing participants the opportunity to cite unrelated, alternative responses (e.g., “How could such events be
prevented in future?”). In addition, there were three rating-scale
questions requiring participants to indicate on a 10-point scale
their level of agreement with a statement (e.g., “Would it be
lawful for someone to be punished as a result of the wildfire?”).
Procedure
Participants read an ethically-approved information sheet
and provided informed consent. Participants then read the
six scenarios in individual testing booths. After reading the
scenarios, participants completed an unrelated distractor task
for approximately 30 min, following Skurnik et al. (2007).
Finally, participants completed the questionnaire assessing their
understanding of the scenarios. The entire experiment took
approximately one hour to complete.
Results
Questionnaire Scoring
Questionnaire responses were coded by a scorer who was
blind to experimental condition, following a standardized guide.
Memory scores. Recall of several aspects of the scenarios
was scored separately; in particular, there were scores for (a)
general fact-recall of arbitrary details, (b) recall of the critical
information, (c) recall of the retraction, and (d) recall of the
alternative.
The general fact-recall score was calculated based on
responses to both the open-ended free recall question and the
multiple choice questions. Scoring of the free recall item was
based on predetermined idea units. Idea units pertained to information contained in the scenarios that did not refer to the critical
information or its alternative, and that was not assessed by the
multiple choice questions. For each scenario, two major idea
units (i.e., information considered a major theme of the scenario;
e.g., that the wildfire had not caused damage to residential property) and two minor idea units (i.e., information considered a
minor detail in the scenario; e.g., that the wildfire had damaged
forest reserves) were identified a priori (see Online Supplement
for all idea units). A score of 1 was given for recall of a major
idea unit, while a score of 0.5 was given for recall of a minor
idea unit, resulting in a possible maximum recall score of 3 for
each scenario. Additionally, correct responses to multiple choice
questions were given a score of 1, resulting in a possible maximum score of 3 for each scenario. Scores were then combined
and scaled to yield a final memory score for each scenario ranging from 0 to 1. The memory scores of the three non-retraction
scenarios were collapsed, such that each participant had one
memory score per experimental condition.
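As a concrete illustration of the scoring rules just described, the sketch below combines idea-unit recall (major = 1, minor = 0.5, maximum 3) with multiple-choice accuracy (maximum 3) and rescales the combined total to the 0–1 range; the example response values are invented.

```python
# Illustrative implementation of the general fact-recall score; only the
# scoring rules come from the text, the example inputs are invented.
def memory_score(major_recalled, minor_recalled, mc_correct):
    idea_points = 1.0 * major_recalled + 0.5 * minor_recalled  # max 3 (2 major + 2 minor)
    mc_points = float(mc_correct)                               # max 3 (3 multiple-choice items)
    return (idea_points + mc_points) / 6.0                      # rescale combined score to 0-1

# e.g., one major and one minor idea unit recalled, 2 of 3 multiple-choice correct:
print(memory_score(major_recalled=1, minor_recalled=1, mc_correct=2))  # 0.583...
```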
Memory for the critical piece of information, memory
for the retraction, and memory for the alternative account
was coded in separate scores based on the response to the
open-ended free recall question. For each scenario, the score
was 1 when the respective piece of information (i.e., the critical
information, the retraction, or the alternative) was recalled and
0 otherwise. To illustrate, this means that any mention of the
critical information led to a critical-information recall score of
1, whether or not the participant concurrently or subsequently
mentioned the retraction (e.g., in the fire scenario, “it was
thought the fire was caused by arson” and “the fire was not
caused by arson as initially thought” were both scored 1 for
critical-information recall, with the latter also receiving a
retraction-recall score of 1). This means that recalling the initial
critical piece of information does not necessarily imply reliance
on misinformation, as long as a participant also recalled the
retraction or alternative. Also, it was possible that the retraction
would be recalled without mention of the critical information
(e.g., “initial speculations were not confirmed”). Finally, any
mention of the alternative led to an alternative-recall score
of 1, irrespective of whether a retraction was mentioned
(e.g., “lightning caused the fire” or “initial speculations were
not confirmed, and it was concluded the fire was caused by
lightning” both led to an alternative-recall score of 1, with the
latter also scoring a 1 for retraction recall). It was possible that
all three measures were scored 1 (e.g., “the fire was not caused
by arson as initially thought but by lightning”). Retraction and
alternative recall scores were not coded for the NR condition.
Inferential reasoning scores. For each scenario, an inference score was calculated based on responses to the four
open-ended inference questions and the three rating scales. For
each open-ended question, a score of 1 was awarded for a clear
and uncontroverted reference to the critical information (e.g.,
an answer such as “Arson” in response to the question “What
was the cause of the fire?”). A score of 0 was given for any
other response (e.g., a controverted answer such as “It was initially thought it was arson, but that was not true”). Rating-scale
scores ranged from 1 to 10, with higher scores denoting stronger
reliance on the critical information (scales that were negatively
worded to this end were reverse-scored). For each scenario, all
seven question scores were equally weighted, combined, and
transformed into an inference score ranging from 0 to 1. The
inference scores of the three non-retraction scenarios were collapsed, such that each participant had one inference score per
experimental condition.
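A parallel sketch for the inference score is shown below. The open-ended answers are assumed to be hand-coded as 0/1 and the rating responses to be on their original 1–10 scale; mapping the ratings onto 0–1 before averaging the seven equally weighted items is my reading of "combined, and transformed into an inference score ranging from 0 to 1", and the example responses are invented.

```python
# Sketch of the inference-score computation; the rescaling of the 1-10 rating
# scales onto 0-1 is an interpretation, and the example responses are invented.
def inference_score(open_ended, ratings, reverse_items=()):
    """open_ended: four 0/1 codes; ratings: three 1-10 scale responses."""
    rescaled = []
    for i, r in enumerate(ratings):
        r = 11 - r if i in reverse_items else r    # reverse-score negatively worded scales
        rescaled.append((r - 1) / 9.0)             # map 1-10 onto 0-1
    items = list(open_ended) + rescaled            # seven equally weighted items
    return sum(items) / len(items)

# e.g., one uncontroverted reference to the critical information plus moderate ratings:
print(inference_score(open_ended=[1, 0, 0, 0], ratings=[7, 3, 6], reverse_items={1}))
```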
Analysis
Preliminary analyses were conducted to determine whether
any participants needed to be removed from further analysis.
The fact-recall scores were examined to determine whether any
participants scored lower than an a priori criterion of 0.167 (1 out
of the maximum of 6) for all scenarios. One participant violated
this, but as they scored above the criterion in 5 of the 6 scenarios,
their data were retained, and thus no participants were excluded
based on this criterion.² The data were then screened for outliers, but none were identified.

² All analyses were repeated without this participant; this did not affect results.
Memory scores. Memory scores are provided in the top row
of Table 1. Scores were analyzed to investigate whether there
were any differences between conditions in comprehension of
and memory for the scenarios. The mean memory scores across
conditions were comparable, and a one-way repeated-measures
analysis of variance (ANOVA) revealed no significant effect of
condition, although the analysis just missed the conventional
significance criterion, F(3,177) = 2.35, p = .07, ηp² = .04.
Table 1
Memory, Recall, and Misinformation Scores Across Conditions

                                    NR            RNR            RSR            RER
                                  M     SE      M     SE      M     SE      M      SE
Memory score                     0.66  0.01    0.62  0.02    0.65  0.02    0.62   0.02
Critical-information recall      0.53  0.04    0.53  0.06    0.50  0.07    0.53   0.06
Alternative recall                –     –      0.33  0.06    0.43  0.06    0.48   0.07
Retraction recall                 –     –      0.13  0.04    0.22  0.05    0.32   0.06
Misinformation score              –     –      0.07  0.10   −0.15  0.10   −0.27   0.10

Notes: NR, no-retraction condition; RNR, retraction-with-no-reminder condition; RSR, retraction-with-subtle-reminder condition; RER, retraction-with-explicit-reminder condition.
Scores on critical-information recall and alternative recall
were also comparable across conditions (see Table 1). Nonparametric repeated measures ANOVAs (Friedman tests) found
no significant differences, χ² < 1 for critical-information recall, and χ²(2) = 3.60, p = .17, for alternative recall. Next, retraction recall was analyzed to determine whether there were any differences in recall of the retraction between conditions. Mean retraction recall scores are also given in Table 1. A Friedman test revealed a significant main effect of condition on retraction recall, χ²(2) = 7.28, p = .03. A contrast analysis revealed a significant difference between the RNR and RER conditions, χ²(1) = 8.07, p < .01, but not between RNR and RSR, χ²(1) = 1.67, p = .20, or RSR and RER, χ²(1) = 1.80, p = .18.
As an initial test of whether reliance on misinformation differed between retraction conditions, we calculated a measure of misinformation reliance by simply subtracting the
summed retraction-recall and alternative-recall scores from the
critical-information recall score, separately for each retraction
condition. This misinformation score was 1 if and only if the
critical misinformation was recalled without the retraction or
the alternative being recalled as well; if the misinformation was
not recalled, or if it was recalled alongside its retraction and/or
the alternative, the score was 0 or −1 (a score of −2 was theoretically possible but did not eventuate). Thus, more reliance on
misinformation is reflected in more positive scores. The mean
misinformation scores across conditions are given in the bottom row of Table 1. A Friedman test yielded a significant main effect of condition, χ²(2) = 6.57, p = .04, substantiating that misinformation reliance was greatest in the RNR and lowest in the RER condition [in a contrast analysis, the RNR-RER difference was significant, χ²(1) = 5.77, p = .02, but the RNR-RSR and RSR-RER differences were not, χ²(1) < 3.21, ps > .12].
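The misinformation-reliance measure and the accompanying Friedman test can be expressed compactly as below; the per-participant recall codes are invented toy data, and scipy.stats.friedmanchisquare is simply a standard implementation of the nonparametric test, not the authors' analysis script.

```python
# Misinformation score = critical-information recall - (retraction recall +
# alternative recall), per participant and retraction condition, followed by
# a Friedman test across RNR/RSR/RER. The recall codes are invented toy data.
from scipy.stats import friedmanchisquare

def misinformation_score(critical, retraction, alternative):
    return critical - (retraction + alternative)   # 1 only if misinformation recalled alone

# tuples of (critical, retraction, alternative) recall for five hypothetical participants
toy = {
    "RNR": [(1, 0, 0), (1, 0, 1), (0, 0, 0), (1, 0, 0), (1, 1, 1)],
    "RSR": [(1, 1, 0), (0, 0, 1), (1, 0, 1), (0, 0, 0), (1, 1, 1)],
    "RER": [(1, 1, 1), (0, 0, 1), (0, 1, 0), (1, 1, 1), (0, 0, 1)],
}
scores = {cond: [misinformation_score(*p) for p in codes] for cond, codes in toy.items()}
stat, p = friedmanchisquare(scores["RNR"], scores["RSR"], scores["RER"])
print(scores, stat, p)
```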
Inferential reasoning scores. The mean inference scores are depicted in Figure 1; mean scores were M_NR = 0.58 (SE = 0.02), M_RNR = 0.39 (SE = 0.03), M_RSR = 0.34 (SE = 0.03), and M_RER = 0.27 (SE = 0.03). First, one-sample t tests were conducted to determine whether inference scores differed significantly from zero (zero representing no reliance on misinformation in reasoning). Results revealed that inference scores were substantially greater than zero in all retraction conditions, all ts(59) > 9.96, ps < .001, indicating the presence of a CIE in all three retraction conditions.
A repeated-measures ANOVA on inference scores revealed a significant main effect of retraction condition, F(3,177) = 22.24, p < .001; the pairwise contrasts between conditions are reported in Table 2.
Figure 1. Mean inference scores (0–1) across experimental conditions. NR, no retraction; RNR, retraction with no reminder; RSR, retraction with subtle reminder; RER, retraction with explicit reminder. Error bars depict within-subject standard errors of the mean. See text for details.
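For completeness, here is a sketch of the two inferential analyses reported above: one-sample t tests of each retraction condition's inference scores against zero, and a one-way repeated-measures ANOVA over the four conditions. The simulated data, the long data format, and the use of statsmodels' AnovaRM are my own choices; only the condition means are taken from the text.

```python
# Sketch of the inferential analyses on the inference scores. The data are
# simulated around the reported condition means; the authors' raw data and
# analysis software are not reproduced here.
import numpy as np
import pandas as pd
from scipy.stats import ttest_1samp
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 60
condition_means = {"NR": 0.58, "RNR": 0.39, "RSR": 0.34, "RER": 0.27}  # means from the text

rows = []
for cond, mean in condition_means.items():
    scores = np.clip(rng.normal(mean, 0.2, n), 0, 1)   # simulated 0-1 inference scores
    rows += [{"subject": s, "condition": cond, "inference": x} for s, x in enumerate(scores)]
df = pd.DataFrame(rows)

for cond in ["RNR", "RSR", "RER"]:                      # CIE check: scores above zero?
    res = ttest_1samp(df.loc[df["condition"] == cond, "inference"], 0)
    print(cond, round(res.statistic, 2), res.pvalue)

# one-way repeated-measures ANOVA with condition as the within-subject factor
print(AnovaRM(df, depvar="inference", subject="subject", within=["condition"]).fit())
```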
Table 2
Contrasts on Inference Scores

Contrast         F(1,59)     p
NR vs. RNR        18.83
NR vs. RSR        30.21
NR vs. RER        73.99
RNR vs. RSR        1.26
RNR vs. RER        9.60
RSR vs. RER        3.44